diff --git a/.gitattributes b/.gitattributes index c949d585daf084ff88f805fe59ec2df29da1409a..a6e2a2c1e140a6d962bbe44718fb51b58175421c 100644 --- a/.gitattributes +++ b/.gitattributes @@ -380,3 +380,22 @@ samples/pdfs/2234121.pdf filter=lfs diff=lfs merge=lfs -text samples/pdfs/7621530.pdf filter=lfs diff=lfs merge=lfs -text samples/pdfs/4150074.pdf filter=lfs diff=lfs merge=lfs -text samples/pdfs/5687555.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/7569662.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/3327355.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/4971236.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/1836869.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/3884483.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/199837.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/1168240.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/6016935.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/1885128.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/393503.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/3193892.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/6813453.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/6426180.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/500594.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/3495399.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/6218816.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/4239587.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/7089754.pdf filter=lfs diff=lfs merge=lfs -text +samples/pdfs/230879.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/samples/pdfs/1168240.pdf b/samples/pdfs/1168240.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4d8a9b44f2f18aa51459e3bd5d0e3bf17cb03f61 --- /dev/null +++ b/samples/pdfs/1168240.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25aeac281a45f2b60b35010a6b83cfeb3df054ea48abc28250479b648a75c694 +size 164000 diff 
--git a/samples/pdfs/1836869.pdf b/samples/pdfs/1836869.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5a9bdce7d7d873f8d20ddf4a4fedde8345644447 --- /dev/null +++ b/samples/pdfs/1836869.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b3b420a7da248f295a8ea9698ff516d67161b540fb727297ee611beec80f736 +size 609919 diff --git a/samples/pdfs/1885128.pdf b/samples/pdfs/1885128.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ce1343e7bd342975bc3a5ac809c78225b0835371 --- /dev/null +++ b/samples/pdfs/1885128.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ccc17ede8e64c5a57e7ece9afe260e5fd63d40134ddb002627eb5b95daf26ce +size 488805 diff --git a/samples/pdfs/199837.pdf b/samples/pdfs/199837.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a82173a7d819a4e32ad7d7677c1b7561f0790e70 --- /dev/null +++ b/samples/pdfs/199837.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06775c3b500f12171f1657b77930bbb60a3b2700f7483eae231bcab49e7a055d +size 444863 diff --git a/samples/pdfs/230879.pdf b/samples/pdfs/230879.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f82ab1cff5424fcfdce2188b26a69e8265f07391 --- /dev/null +++ b/samples/pdfs/230879.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0778cac5815650c4ee155089b69599e1b24b58535889ba2d99789053f9260a05 +size 366665 diff --git a/samples/pdfs/3193892.pdf b/samples/pdfs/3193892.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9a0ed786bb10c8a39006c9b022dca16b3e09e749 --- /dev/null +++ b/samples/pdfs/3193892.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:670f4c970cf14ea7130d1db817a38b6d97b6f56ba261f68ea05550fa57f2d4e4 +size 180892 diff --git a/samples/pdfs/3327355.pdf b/samples/pdfs/3327355.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..ccd94d9501f5be878630bfeeacb3e6258a9013ab --- /dev/null +++ b/samples/pdfs/3327355.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b5a6a355f367a37b1f1520ee23591aa795d65ebad17ad72630b2844b2c567dc +size 694508 diff --git a/samples/pdfs/3495399.pdf b/samples/pdfs/3495399.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6e8754a5af2a18494b97963fba11e490cf86f7a7 --- /dev/null +++ b/samples/pdfs/3495399.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70ccd2ebefeabb840e2eab6fc67948571c0c725b145ccdd34adfa1376121ee0a +size 148653 diff --git a/samples/pdfs/3884483.pdf b/samples/pdfs/3884483.pdf new file mode 100644 index 0000000000000000000000000000000000000000..22aa15d562bc9988ec36cbc86a78b8ccd0d9acc6 --- /dev/null +++ b/samples/pdfs/3884483.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d5b428e4718f6eea9e4f339288c737ce6712885c927aa18629e1556b1cd84c8 +size 8815754 diff --git a/samples/pdfs/393503.pdf b/samples/pdfs/393503.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9914103c1ecde2c3d0e9868848db1564b1ce2ff0 --- /dev/null +++ b/samples/pdfs/393503.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cbc18f5f59e1b64ffed302c73064ccd608f869a88770d1aa3122aa2b13dc380 +size 365246 diff --git a/samples/pdfs/4239587.pdf b/samples/pdfs/4239587.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1d03b5fc2ff6279a66e2bb84374c72b620df3598 --- /dev/null +++ b/samples/pdfs/4239587.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88d0a3d596c851224e35d178b7b49f8acc0fe42c9d19034200ed17ec481717c2 +size 873703 diff --git a/samples/pdfs/4971236.pdf b/samples/pdfs/4971236.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d4de79054238fb5211ed98f49259b00d0c1c1d2e --- /dev/null +++ b/samples/pdfs/4971236.pdf @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:a06a42b1dbde87db16afbdc6a60b6184cd33dcc82fdd6f30c28c8f2500efc083 +size 372574 diff --git a/samples/pdfs/500594.pdf b/samples/pdfs/500594.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4d057132b603e85ce0fbe961115b46d9bd97da4e --- /dev/null +++ b/samples/pdfs/500594.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:102ddf114ae1020b97bbd55520214c30550c64760e5608327445f0ac12a22ff8 +size 185636 diff --git a/samples/pdfs/6016935.pdf b/samples/pdfs/6016935.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b2466ad132a7a66d0e5d46b1eef8e2d9293f5ced --- /dev/null +++ b/samples/pdfs/6016935.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c225b82666897c57e9af8a6b7a6873024157e8cd9ac160e12f93503b7e62eda6 +size 1249342 diff --git a/samples/pdfs/6218816.pdf b/samples/pdfs/6218816.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4ea4964d65242f6e251d4f117cc5ace5eafe3a83 --- /dev/null +++ b/samples/pdfs/6218816.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8adeb393ba9273c37d14720f4e5066c92aec95de2fe62d78d09e2f7a4d48415 +size 1641505 diff --git a/samples/pdfs/6426180.pdf b/samples/pdfs/6426180.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ecc5fd05ab7864fe66b968cf8cc7ce6f1195a893 --- /dev/null +++ b/samples/pdfs/6426180.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf60e08e68a938f240186c4137ea053dd0e9c2e8cd1a24389d10f138deaa29ca +size 188785 diff --git a/samples/pdfs/6813453.pdf b/samples/pdfs/6813453.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5395ded6513ccbc29a8bfb6e015b13f8e05b7a02 --- /dev/null +++ b/samples/pdfs/6813453.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbe4eb2560244abb0e94352d0cada9d71d7c9f5ea8e5166a4d23011bba532384 +size 645965 diff --git 
a/samples/pdfs/7089754.pdf b/samples/pdfs/7089754.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4632969309e1886661fc171e4e7724460173d19e --- /dev/null +++ b/samples/pdfs/7089754.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4d59d451e710cebc205fd0dd2dc402e51c7670433f17a536b6d426e5c43c489 +size 447634 diff --git a/samples/pdfs/7569662.pdf b/samples/pdfs/7569662.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3690de22f3a5261be24c7f969ba03e25de14d06d --- /dev/null +++ b/samples/pdfs/7569662.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4502ab7df7ec80f5adf2f232488fc92bbd4ae5bafec8ebe85abbc2d8e47eb94 +size 494074 diff --git a/samples/texts_merged/1117773.md b/samples/texts_merged/1117773.md new file mode 100644 index 0000000000000000000000000000000000000000..073a595e7e0a37e4b95166023e69c7f76492df48 --- /dev/null +++ b/samples/texts_merged/1117773.md @@ -0,0 +1,241 @@ + +---PAGE_BREAK--- + +Resolving electron transfer kinetics in porous electrodes via diffusion-less +cyclic voltammetry + +Shida Yang,ac Yang Li,b Qing Chen.ab* + +aDepartment of Chemistry, bDepartment of Mechanical and Aerospace Engineering, and +cThe Energy Institute, HKUST, Hong Kong. + +*Corresponding Author E-mail: chenqing@ust.hk (Qing Chen) +---PAGE_BREAK--- + +**Figure S1.** Background current on Ti foil as assembled in the cell with the active electrolyte but without the carbon felt. (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. The currents are at least two orders of magnitude lower than those measured with the carbon felt for all three cases, so no background subtraction is necessary for the analysis. +---PAGE_BREAK--- + +**Figure S2.** Electrochemical surface area measurements of the carbon felt electrode in the electrolytes of (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. 
We scan CVs over potential ranges with no visible Faradaic current and plot the average currents against the scan rates. The slopes are divided by a specific capacitance of 20 µF/cm² to derive the areas.

---PAGE_BREAK---

**Figure S3.** X-ray photoelectron spectra of different carbon felts.

**Table S1.** O/C ratio of different carbon felts and the corresponding standard rate constants $k^0$ of VO$^{2+}$/VO$_2^+$ on these electrodes.
| Carbon Felt | C ratio/% | O ratio/% | O/C | $k^0$ (cm/s) |
|---|---|---|---|---|
| CeTech CF020, 400 °C | 92.51 | 7.49 | 0.081 | $(1.56\pm0.15)\times10^{-6}$ |
| SGL GFA6EA, 400 °C | 90.14 | 9.86 | 0.109 | $(1.642\pm0.072)\times10^{-7}$ |
| SGL GFA6EA, 450 °C | 89.34 | 10.66 | 0.119 | $(2.095\pm0.518)\times10^{-7}$ |
| SGL GFA6EA, 500 °C | 88.93 | 11.07 | 0.124 | $(2.455\pm0.216)\times10^{-8}$ |
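The electrochemical surface area (ESA) estimate described under Figure S2 can be sketched as follows. This is a minimal illustration with invented currents and scan rates, not measured data; only the 20 µF/cm² specific capacitance comes from the text above:

```python
import numpy as np

# Hypothetical capacitive currents (A) at several scan rates (V/s);
# the numbers are synthetic, chosen so that C_dl = 0.02 F.
scan_rates = np.array([0.01, 0.02, 0.05, 0.10])   # V/s
avg_currents = 0.02 * scan_rates                  # A

# The slope of average current vs. scan rate is the double-layer
# capacitance (F); dividing by the specific capacitance gives the area.
slope = np.polyfit(scan_rates, avg_currents, 1)[0]  # F
esa_cm2 = slope / 20e-6                             # 20 µF/cm²
print(esa_cm2)  # ≈ 1000 cm² for this synthetic capacitance
```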
+---PAGE_BREAK--- + +**Figure S4.** Additional results of the RFB tests. (a) Electrochemical impedance spectroscopy (EIS) and (b) IR-corrected polarization curves of VRFB with CF baked at different temperatures. + +**Table S2.** Polarization resistance of VRFB with different CF. + +
| SGL CF | $R_u$/Ω cm² | Polarization resistance/Ω cm² | Corrected polarization resistance/Ω cm² |
|---|---|---|---|
| 400 °C | 0.395 | 0.487 | 0.092 |
| 450 °C | 0.421 | 0.540 | 0.119 |
| 500 °C | 0.450 | 0.664 | 0.214 |
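The correction in the last column of Table S2 is simply the ohmic resistance $R_u$ subtracted from the total polarization resistance. A minimal check of the tabulated values:

```python
# (R_u, total polarization resistance) in Ω cm², values from Table S2.
data = {"400°C": (0.395, 0.487), "450°C": (0.421, 0.540), "500°C": (0.450, 0.664)}

# IR-corrected polarization resistance = total - R_u.
corrected = {t: round(rp - ru, 3) for t, (ru, rp) in data.items()}
print(corrected)  # {'400°C': 0.092, '450°C': 0.119, '500°C': 0.214}
```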
---PAGE_BREAK---

**Table S3.** Summary of standard rate constants *k* of VO$^{2+}$/VO$_2^+$ reported in literature.

| Electrodes | Treatment | Method | Area | k (cm/s) | Ref |
|---|---|---|---|---|---|
| SGL Carbon GFD4.6 | Baked at 400 °C for 12 hrs | Symmetrical RFB | Electrochemical | $2.38\times10^{-6}$ | [1] |
| Disk made from carbon felt (SigraCELL GFA6, SGL Carbon) | Baked at 400 °C for 30 hrs | Linear sweep voltammetry (LSV) | Geometric | $1.6$–$8.8\times10^{-8}$ | [2] |
| Ultra-microelectrode made from carbon felts (GrafTech) | Electrochemical oxidation and reduction | LSV and EIS | Electrochemical | $1.7$–$17\times10^{-5}$ | [3] |
| Carbon felt (Sigratherm GFA5) | Not mentioned | Galvanic charging/discharging | Calculated | $3\times10^{-7}$ | [4] |
| Carbon felt (Liao Yang Carbon Fiber Sci-tech. Co., Ltd., China) | None | CV and EIS | Geometric | $1.84\times10^{-3}$ | [5] |
| Carbon paper (29, SGL group) | Baked at 450 °C for 30 hrs | Polarization curve and EIS in a RFB | Electrochemical | $0.2$–$1.8\times10^{-7}$ | [6] |
| Carbon paper (10AA, SGL group) | None | Symmetrical RFB | Gas adsorption | $2.05\times10^{-6}$ | [7] |
| Carbon paper (Shanghai Hesen, Ltd. HCP030 N) | Electrochemical oxidation and reduction | CV | Gas adsorption | $1.04\times10^{-3}$ | [8] |
+ +SI references: + +[1] M. V. Holland-Cunz, J. Friedl, U. Stimming, *J. Electroanal. Chem.* **2018**, *819*, 306-311. +---PAGE_BREAK--- + +[2] Y. Li, J. Parrondo, S. Sankarasubramanian, V. Ramani, *J. Phys. Chem. C* **2019**, *123*, 6370-6378. + +[3] M. A. Miller, A. Bourke, N. Quill, J. S. Wainright, R. P. Lynch, D. N. Buckley, R. F. Savinell, *J. Electrochem. Soc.* **2016**, *163*, A2095. + +[4] A. A. Shah, M. J. Watt-Smith, F. C. Walsh, *Electrochim. Acta* **2008**, *53*, 8087-8100. + +[5] W. Li, Z. Zhang, Y. Tang, H. Bian, T.-W. Ng, W. Zhang, C.-S. Lee, *Adv. Sci.* **2016**, *3*, 1500276. + +[6] K. V. Greco, A. Forner-Cuenca, A. Mularczyk, J. Eller, F. R. Brushett, *ACS Appl. Mater. Interfaces* **2018**, *10*, 44430-44442. + +[7] D. Aaron, C.-N. Sun, M. Bright, A. B. Papandrew, M. M. Mench, T. A. Zawodzinski, *ECS Electrochemistry Letters* **2013**, *2*, A29. + +[8] X. W. Wu, T. Yamamura, S. Ohta, Q. X. Zhang, F. C. Lv, C. M. Liu, K. Shirasaki, I. Satoh, T. Shikama, D. Lu, S. Q. Liu, *J Appl Electrochem* **2011**, *8*. \ No newline at end of file diff --git a/samples/texts_merged/1131204.md b/samples/texts_merged/1131204.md new file mode 100644 index 0000000000000000000000000000000000000000..81c48b9483e10afb29352440bb009cecf3af0eda --- /dev/null +++ b/samples/texts_merged/1131204.md @@ -0,0 +1,426 @@ + +---PAGE_BREAK--- + +# Appendices + +## A. Derivations and Additional Methodology + +### A.1. Generalized PointConv Trick + +The matrix notation becomes very cumbersome for manipulating these higher order n-dimensional arrays, so we will instead use index notation with Latin indices i, j, k indexing points, Greek indices α, β, γ indexing feature channels, and c indexing the coordinate dimensions of which there are $d = 3$ for PointConv and $d = \dim(G) + 2 \dim(Q)$ for LieConv.³ As the objects are not geometric tensors but simply n-dimensional arrays, we will make no distinction between upper and lower indices. 
After expanding into indices, all quantities are scalars, and any free indices range over all of their values.

Let $k_{ij}^{\alpha,\beta}$ be the output of the MLP $k_\theta$ which takes $\{a_{ij}^c\}$ as input and acts independently over the locations $i, j$. For PointConv, the input is $a_{ij}^c = x_i^c - x_j^c$, and for LieConv the input is $a_{ij}^c = \text{Concat}([\log(v_j^{-1}u_i), q_i, q_j])^c$.

We wish to compute

$$h_i^\alpha = \sum_{j,\beta} k_{ij}^{\alpha,\beta} f_j^\beta. \quad (12)$$

In Wu et al. (2019), it was observed that since $k_{ij}^{\alpha,\beta}$ is the output of an MLP, $k_{ij}^{\alpha,\beta} = \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma$ for some final weight matrix $W$ and penultimate activations $s_{i,j}^\gamma$ ($s_{i,j}^\gamma$ is simply the result of the MLP after the last nonlinearity). With this in mind, we can rewrite (12):

$$h_i^\alpha = \sum_{j,\beta} \left( \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma \right) f_j^\beta \quad (13)$$

$$= \sum_{\beta, \gamma} W_\gamma^{\alpha, \beta} \left( \sum_j s_{i,j}^\gamma f_j^\beta \right) \quad (14)$$

In practice, the intermediate number of channels is much smaller than the product of $c_{in}$ and $c_{out}$, $|\gamma| < |\alpha||\beta|$, so this reordering of the computation leads to a massive reduction in both memory and compute. Furthermore, $b_i^{\gamma,\beta} = \sum_j s_{i,j}^\gamma f_j^\beta$ can be implemented with regular matrix multiplication, and so can $h_i^\alpha = \sum_{\beta,\gamma} W_\gamma^{\alpha,\beta} b_i^{\gamma,\beta}$ by flattening $(\beta, \gamma)$ into a single axis $\varepsilon$: $h_i^\alpha = \sum_\varepsilon W^{\alpha,\varepsilon} b_i^\varepsilon$.
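A minimal NumPy sketch of this factorization (array names and sizes are ours, not from any released code): it checks that the factored contraction of (13)–(14) matches the naive evaluation of (12).

```python
import numpy as np

rng = np.random.default_rng(0)
N, c_in, c_out, c_mid = 5, 4, 6, 3          # |i|=|j|=N, |β|=c_in, |α|=c_out, |γ|=c_mid
s = rng.normal(size=(N, N, c_mid))          # penultimate activations s_{ij}^γ
W = rng.normal(size=(c_mid, c_out, c_in))   # final weight matrix W_γ^{α,β}
f = rng.normal(size=(N, c_in))              # input features f_j^β

# Naive: materialize k_{ij}^{α,β} = Σ_γ W_γ^{α,β} s_{ij}^γ, then contract as in (12).
k = np.einsum('gab,ijg->ijab', W, s)
h_naive = np.einsum('ijab,jb->ia', k, f)

# Factored: b_i^{γ,β} = Σ_j s_{ij}^γ f_j^β first, then h_i^α = Σ_{γ,β} W_γ^{α,β} b_i^{γ,β},
# avoiding the (N, N, c_out, c_in) intermediate entirely.
b = np.einsum('ijg,jb->igb', s, f)
h_fast = np.einsum('gab,igb->ia', W, b)

assert np.allclose(h_naive, h_fast)
```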
The sum over index $j$ can be restricted to a subset $j(i)$ (such as a chosen neighborhood) by computing $f_j^\beta$ at each of the required indices, padding to the size of the maximum subset with zeros, and computing $b_i^{\gamma,\beta} = \sum_j s_{i,j(i)}^\gamma f_{j(i)}^\beta$ using dense matrix multiplication. Masking the values at indices $i$ and $j$ is also necessary when examples with different numbers of points are batched together using zero padding. The generalized PointConv trick can thus be applied in batch mode with a varied number of points per example and per neighborhood.

### A.2. Abelian G and Coordinate Transforms

For Abelian groups that cover $\mathcal{X}$ in a single orbit, the computation is very similar to ordinary Euclidean convolution. Define $a_i = \log(u_i)$ and $b_j = \log(v_j)$; the fact that $e^{-b_j} e^{a_i} = e^{a_i - b_j}$ means that $\log(v_j^{-1} u_i) = (\log \circ \exp)(a_i - b_j)$. Defining $\tilde{f} = f \circ \exp$ and $\tilde{h} = h \circ \exp$, we get

$$\tilde{h}(a_i) = \frac{1}{n} \sum_{j \in \text{nbhd}(i)} (\tilde{k}_{\theta} \circ \text{proj})(a_i - b_j) \tilde{f}(b_j), \quad (15)$$

where $\text{proj} = \log \circ \exp$ projects to the image of the logarithm map. Apart from a projection and a change to logarithmic coordinates, this is equivalent to Euclidean convolution in a vector space with the dimensionality of the group. When the group is Abelian and $\mathcal{X}$ is a homogeneous space, the dimension of the group equals the dimension of the input. In these cases we have a trivial stabilizer group $H$ and a single origin $o$, so we can view $f$ and $h$ as acting on the input $x_i = u_i o$.
+ +This directly generalizes some of the existing coordinate transform methods for achieving equivariance from the literature such as log polar coordinates for rotation and scaling equivariance (Esteves et al., 2017), and using hyperbolic coordinates for squeeze and scaling equivariance. + +**Log Polar Coordinates:** Consider the Abelian Lie group of positive scalings and rotations: $G = \mathbb{R}^* \times SO(2)$ acting on $\mathbb{R}^2$. Elements of the group $M \in G$ can be expressed as a $2 \times 2$ matrix + +$$M(r, \theta) = \begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}$$ + +for $r \in \mathbb{R}^+$ and $\theta \in \mathbb{R}$. The matrix logarithm is⁴ + +$$\log\left(\begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}\right) = \begin{bmatrix} \log(r) & -\theta \mod 2\pi \\ \theta \mod 2\pi & \log(r) \end{bmatrix},$$ + +or more compactly $\log(M(r, \theta)) = \log(r)I + (\theta \mod 2\pi)J$, which is $[\log(r), \theta \mod 2\pi]$ in the basis for the Lie algebra $[I, J]$. It is clear that proj = log ◦ exp is simply mod $2\pi$ on the J component. + +As $\mathbb{R}^2$ is a homogeneous space of $G$, one can choose the global origin $o = [1, 0] \in \mathbb{R}^2$. A little algebra shows that + +³dim($Q$) is the dimension of the space into which $Q$, the orbit identifiers, are embedded. + +⁴Here $\theta \mod 2\pi$ is defined to mean $\theta + 2\pi n$ for the integer $n$ such that the value is in $(-\pi, \pi)$, consistent with the principal matrix logarithm. $(\theta + \pi)\%2\pi - \pi$ in programming notation. +---PAGE_BREAK--- + +lifting to the group yields the transformation $u_i = M(r_i, \theta_i)$ +for each point $p_i = u_i o$, where $r = \sqrt{x^2 + y^2}$, and +$\theta = \operatorname{atan2}(y, x)$ are the polar coordinates of the point $p_i$. 
+Observe that the logarithm of $v_j^{-1} u_i$ has a simple expression +highlighting the fact that it is invariant to scale and rotational +transformations of the elements, + +$$ +\begin{align*} +\log(v_j^{-1} u_i) &= \log(M(r_j, \theta_j)^{-1} M(r_i, \theta_i)) \\ +&= \log(r_i/r_j) I + (\theta_i - \theta_j \bmod 2\pi) J. +\end{align*} +$$ + +Now writing out our Monte Carlo estimation of the integral: + +$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j), \theta_i - \theta_j \bmod 2\pi) f(p_j),$$ + +which is a discretization of the log polar convolution from +Esteves et al. (2017). This can be trivially extended to +encompass cylindrical coordinates with the group $T(1) \times$ +$\mathbb{R}^* \times \text{SO}(2)$. + +**Hyperbolic coordinates:** For another nontrivial example, +consider the group of scalings and squeezes $G = \mathbb{R}^* \times \text{SQ} +acting on the positive orthant $\mathcal{X} = \{(x, y) \in \mathbb{R}^2 : x >$ +$0, y > 0\}$. Elements of the group can be expressed as the +product of a squeeze mapping and a scaling + +$$M(r, s) = \begin{bmatrix} s & 0 \\ 0 & 1/s \end{bmatrix} \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} = \begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}$$ + +for any $r, s \in \mathbb{R}^{+}$. As the group is abelian, the logarithm +splits nicely in terms of the two generators $I$ and $A$: + +$$\log\left(\begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}\right) = (\log r)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + (\log s)\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$ + +Again $\mathcal{X}$ is a homogeneous space of $G$, and we choose a +single origin $o = [1, 1]$. With a little algebra, it is clear that +$M(r_i, s_i)_o = p_i$ where $r = \sqrt{xy}$ and $s = \sqrt{x/y}$ are the +hyperbolic coordinates of $p_i$. 
+ +Expressed in the basis $B = [I, A]$ for the Lie algebra above, +we see that + +$$\log(v_j^{-1} u_i) = \log(r_i / r_j) I + \log(s_i / s_j) A$$ + +yielding the expression for convolution + +$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j), \log(s_i/s_j)) f(p_j),$$ + +which is equivariant to squeezes and scalings. + +As demonstrated, equivariance to groups that contain the +input space in a single orbit and are abelian can be achieved +with a simple coordinate transform; however our approach +generalizes to groups that are both 'larger' and 'smaller' than +the input space, including coordinate transform equivariance +as a special case. + +### A.3. Sufficient Conditions for Geodesic Distance + +In general, the function $d(u, v) = \| \log(v^{-1}u) \|_F$, defined +on the domain of GL(d) covered by the exponential map, +satisfies the first three conditions of a distance metric but +not the triangle inequality, making it a semi-metric: + +1. $d(u, v) \geq 0$ + +2. $d(u, v) = 0 \Leftrightarrow \log(u^{-1}v) = 0 \Leftrightarrow u = v$ + +3. $d(u, v) = \|\log(v^{-1}u)\| = \|- \log(u^{-1}v)\| = d(v, u).$ + +However for certain subgroups of GL(d) with additional +structure, the triangle inequality holds and the function is +the distance along geodesics connecting group elements u +and v according to the metric tensor + +$$\langle A, B\rangle_u := \mathrm{Tr}(A^T u^{-T} u^{-1} B), \quad (16)$$ + +where $-T$ denotes inverse and transpose. + +Specifically, if the subgroup $G$ is in the image of the exp : +$g \to G$ map and each infinitesimal generator commutes with +its transpose: $[A, A^T] = 0$ for $\forall A \in g$, then $d(u, v) =$ +$\|\log(v^{-1}u)\|_F$ is the geodesic distance between $u, v$. 
+ +**Geodesic Equation:** Geodesics of (16) satisfying $\nabla_{\dot{\gamma}}\dot{\gamma} = 0$ can equivalently be derived by minimizing the energy functional + +$$E[\gamma] = \int_{\gamma} \langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma} dt = \int_{0}^{1} \mathrm{Tr}(\dot{\gamma}^{T} \gamma^{-T} \gamma^{-1} \dot{\gamma}) dt$$ + +using the calculus of variations. Minimizing curves $\gamma(t)$, +connecting elements $u$ and $v$ in $G$ ($\gamma(0) = v, \gamma(1) = u$) +satisfy + +$$0 = \delta E = \delta \int_0^1 \mathrm{Tr}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \dot{\gamma}) dt$$ + +Noting that $\delta(\gamma^{-1}) = -\gamma^{-1}\delta\gamma\gamma^{-1}$ and the linearity of the +trace, + +$$2 \int_0^1 \operatorname{Tr} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \dot{\gamma}) - \operatorname{Tr} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \gamma \gamma^{-1} \dot{\gamma}) dt = 0.$$ + +Using the cyclic property of the trace and integrating by +parts, we have that + +$$-2 \int_0^1 \operatorname{Tr} \left( (\frac{d}{dt}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1}) + \gamma^{-1} \dot{\gamma} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1})^\intercal) \delta\gamma \right) dt = 0,$$ + +where the boundary term $\operatorname{Tr}(\dot{\gamma}\gamma^{-T}\gamma^{-1}\delta\dot{\gamma})|_{0}^{1}$ vanishes since +$(\delta\gamma)(0) = (\delta\gamma)(1) = 0.$ + +As $\delta\gamma$ may be chosen to vary arbitrarily along the path, $\gamma$ +must satisfy the geodesic equation: + +$$\frac{d}{dt}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}) + \gamma^{-1}\dot{\gamma}\dot{\gamma}^T\gamma^{-T}\gamma^{-1} = 0. \quad (17)$$ +---PAGE_BREAK--- + +**Solutions:** When $A = \log(v^{-1}u)$ satisfies $[A, A^T] = 0$, the curve $\gamma(t) = v \exp(t \log(v^{-1}u))$ is a solution to the geodesic equation (17). Clearly $\gamma$ connects $u$ and $v$, $\gamma(0) = v$ and $\gamma(1) = u$. 
Plugging in $\dot{\gamma} = \gamma A$ into the left hand side of equation (17), we have + +$$ +\begin{align*} +&= \frac{d}{dt}(A^T \gamma^{-1}) + AA^T \gamma^{-1} \\ +&= -A^T \gamma^{-1} \dot{\gamma} \gamma^{-1} + AA^T \gamma^{-1} \\ +&= [A, A^T]\gamma^{-1} = 0 +\end{align*} +$$ + +**Length of $\gamma$:** The length of the curve $\gamma$ connecting $u$ and $v$ is $\|\log(v^{-1}u)\|_F$, + +$$ +\begin{align*} +L[\gamma] &= \int_{\gamma} \sqrt{\langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma}} dt = \int_{0}^{1} \sqrt{\operatorname{Tr}(\dot{\gamma}^{T}\gamma^{-T}\gamma^{-1}\dot{\gamma})} dt \\ +&= \int_{0}^{1} \sqrt{\operatorname{Tr}(A^{T}A)} dt = \|A\|_{F} = \|\log(v^{-1}u)\|_{F} +\end{align*} +$$ + +Of the Lie Groups that we consider in this paper, all of which have a single connected component, the groups $G = T(d)$, $SO(d)$, $\mathbb{R}^* \times SO(d)$, $\mathbb{R}^* \times SQ$ satisfy this property that $[\mathfrak{g}, \mathfrak{g}^T] = 0$; however, the $SE(d)$ groups do not. + +## A.4. Equivariant Subsampling + +Even if all distances and neighborhoods are precomputed, the cost of computing equation (6) for $i = 1, ..., N$ is still quadratic, $O(nN) = O(N^2)$, because the number of points in each neighborhood $n$ grows linearly with $N$ as $f$ is more densely evaluated. So that our method can scale to handle a large number of points, we show two ways two equivariantly subsample the group elements, which we can use both for the locations at which we evaluate the convolution and the locations that we use for the Monte Carlo estimator. Since the elements are spaced irregularly, we cannot readily use the coset pooling method described in (Cohen and Welling, 2016a), instead we can perform: + +**Random Selection:** Randomly selecting a subset of $p$ points from the original $n$ preserves the original sampling distribution, so it can be used. 
**Farthest Point Sampling:** Given a set of group elements $S = \{u_i\}_{i=1}^k \subset G$, we can select a subset $S_p^*$ of size $p$ that maximizes the minimum distance between any two elements in that subset,

$$ \mathrm{Sub}_p(S) := S_p^* = \arg \max_{S_p \subset S} \min_{u,v \in S_p: u \neq v} d(u,v), \quad (18) $$

i.e., farthest point sampling on the group. Acting on a set of elements, $\mathrm{Sub}_p : S \mapsto S_p^*$, farthest point subsampling is equivariant, $\mathrm{Sub}_p(wS) = w\mathrm{Sub}_p(S)$ for any $w \in G$: applying a group element to each of the elements does not change the chosen indices in the subsampled set, because the distances are left invariant, $d(u_i, u_j) = d(wu_i, wu_j)$.

Now we can use either of these methods for $\mathrm{Sub}_p(\cdot)$ to equivariantly subsample the quadrature points in each neighborhood used to estimate the integral to a fixed number $p$,

$$ h_i = \frac{1}{p} \sum_{j \in \mathrm{Sub}_p(\mathrm{nbhd}(u_i))} k_\theta(v_j^{-1} u_i) f_j. \quad (19) $$

Doing so reduces the cost of estimating the convolution from $O(N^2)$ to $O(pN)$, ignoring the cost of computing $\mathrm{Sub}_p$ and $\{\mathrm{nbhd}(u_i)\}_{i=1}^N$.

## A.5. Review and Implications of Noether's Theorem

In the Hamiltonian setting, Noether's theorem relates the continuous symmetries of the Hamiltonian of a system to conserved quantities, and it has been deeply impactful in the understanding of classical physics. We give a review of Noether's theorem, loosely following Butterfield (2006).

### More on Hamiltonian Dynamics

As introduced earlier, the Hamiltonian $H(z) = H(q,p)$, a function of the state (we will ignore time dependence for now), can be viewed more formally as a function on the cotangent bundle: $(q,p) = z \in M = T^*C$, where $C$ is the coordinate configuration space. This is the setting for Hamiltonian dynamics.
In general, on a manifold $\mathcal{M}$, a vector field $X$ can be viewed as an assignment of a directional derivative along $\mathcal{M}$ for each point $z \in \mathcal{M}$. It can be expanded in a basis using coordinate charts, $X = \sum_{\alpha} X^{\alpha} \partial_{z^{\alpha}}$, where $\partial_{z^{\alpha}} = \frac{\partial}{\partial z^{\alpha}}$, and it acts on functions $f$ by $X(f) = \sum_{\alpha} X^{\alpha} \partial_{z^{\alpha}} f$. In the chart, each of the components $X^{\alpha}$ is a function of $z$.

In Hamiltonian mechanics, for two functions on $\mathcal{M}$ there is the Poisson bracket⁵, which can be written in terms of the canonical coordinates $q_i, p_i$:

$$ \{f,g\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i}. $$

The Poisson bracket can be used to associate each function $f$ with a vector field

$$ X_f = \{f, \cdot\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial}{\partial p_i}, $$

which specifies, by its action on another function $g$, the directional derivative of $g$ along $X_f$: $X_f(g) = \{f,g\}$. Vector fields that can be written in this way are known as Hamiltonian vector fields, and the Hamiltonian dynamics of the

⁵Here we take the definition of the Poisson bracket to be the negative of the usual definition in order to streamline notation.
---PAGE_BREAK---
+ +## Noether's Theorem + +The flow $\phi_{\lambda}^X$ by $\lambda \in \mathbb{R}$ of a vector field $X$ is the set of integral curves, the unique solution to the system of ODEs $\dot{z}^\alpha = X^\alpha$ with initial condition $z$ and at parameter value $\lambda$, or more abstractly the iterated application of $X$: $\phi_{\lambda}^X = \exp(\lambda X)$. Continuous symmetries transformation are the transformations that can be written as the flow $\phi_{\lambda}^X$ of a vector field. The directional derivative characterizes how a function such as the Hamiltonian changes along the flow of $X$ and is a special case of the Lie Derivative $\mathcal{L}$. + +$$ \mathcal{L}_X H = \frac{d}{d\lambda} (H \circ \phi_\lambda^X)|_{\lambda=0} = X(H) $$ + +A scalar function is invariant to the flow of a vector field if and only if the Lie Derivative is zero + +$$ H(\phi_{\lambda}^{X}(z)) = H(z) \Leftrightarrow \mathcal{L}_{X}H = 0. $$ + +For all transformations that respect the Poisson Bracket⁶, which we add as a requirement for a symmetry, the vector field $X$ is (locally) Hamiltonian and there exists a function $f$ such that $X = X_f = \{f, \cdot\}$. If $M$ is a contractible domain such as $\mathbb{R}^{2n}$, then $f$ is globally defined. For every continuous symmetry $\phi_{\lambda}^{X_f}$, + +$$ \mathcal{L}_{X_f} H = X_f(H) = \{f, H\} = -\{H, f\} = -X_H(f), $$ + +by the antisymmetry of the Poisson bracket. So if $\phi_{\lambda}^X$ is a symmetry of $H$, then $X = X_f$ for some function $f$, and $H(\phi_{\lambda}^{X_f}(z)) = H(z)$ implies + +$$ \mathcal{L}_{X_f} H = 0 \Leftrightarrow \mathcal{L}_{X_H} f = 0 \Leftrightarrow f(\phi_{\tau}^{X_H}(z)) = f(z) $$ + +or in other words $f(z(t+\tau)) = f(z(t))$ and $f$ is a conserved quantity of the dynamics. + +⁶More precisely, the Poisson Bracket can be formulated in a coordinate free manner in terms of a symplectic two form $\omega$, $\{f,g\} = \omega(X_f, X_g)$. 
In the canonical coordinates $\omega = \sum_i dp_i \wedge dq_i$, and in this coordinate basis $\omega$ is represented by the matrix $J$ from earlier. The dynamics $X_H$ are determined by $dH = \omega(X_H, \cdot) = \iota_{X_H}\omega$. Transformations which respect the Poisson bracket are symplectic: for the generating vector field $X$, $\mathcal{L}_{X}\omega = 0$. With Cartan's magic formula, this implies that $d(\iota_{X}\omega) = 0$. Because the form $\iota_{X}\omega$ is closed, Poincaré's Lemma implies that locally $\iota_{X}\omega = df$ for some function $f$, and hence $X = X_f$ is (locally) a Hamiltonian vector field. For more details see Butterfield (2006). + +This implication goes both ways: if $f$ is conserved then $\phi_{\lambda}^{X_f}$ is necessarily a symmetry of the Hamiltonian, and if $\phi_{\lambda}^{X_f}$ is a symmetry of the Hamiltonian then $f$ is conserved. + +## Hamiltonian vs Dynamical Symmetries + +So far we have been discussing Hamiltonian symmetries: invariances of the Hamiltonian. But in the study of dynamical systems there is a related concept of dynamical symmetries, symmetries of the equations of motion. This notion is also captured by the Lie Derivative, but between vector fields. A dynamical system $\dot{z} = F(z)$ has a continuous dynamical symmetry $\phi_{\lambda}^X$ if the flow along the dynamical system commutes with the symmetry: + +$$ \phi_{\lambda}^{X}(\phi_{t}^{F}(z)) = \phi_{t}^{F}(\phi_{\lambda}^{X}(z)). \quad (20) $$ + +This means that applying the symmetry transformation to the state and then flowing along the dynamical system is equivalent to flowing first and then applying the symmetry transformation. Equation (20) is satisfied if and only if the Lie Derivative is zero: + +$$ \mathcal{L}_X F = [X, F] = 0, $$ + +where $[\cdot,\cdot]$ is the Lie bracket on vector fields.⁷ + +For Hamiltonian systems, every Hamiltonian symmetry is also a dynamical symmetry.
In fact, it is not hard to show that the Lie and Poisson brackets are related, + +$$ [X_f, X_g] = X_{\{f,g\}} $$ + +and this directly shows the implication. If $X_f$ is a Hamiltonian symmetry, then $\{f, H\} = 0$ and + +$$ [X_f, F] = [X_f, X_H] = X_{\{f,H\}} = 0. $$ + +However, the converse is not true: dynamical symmetries of a Hamiltonian system are not necessarily Hamiltonian symmetries, and thus might not correspond to conserved quantities. Furthermore, even if the system has a dynamical symmetry $\phi_{\lambda}^{X}$ that is the flow along a Hamiltonian vector field $X = X_f = \{f, \cdot\}$, the dynamics will not conserve $f$ in general when $F$ is not Hamiltonian. Both the symmetry and the dynamics must be Hamiltonian for the conservation law to hold. + +This fact is demonstrated by Figure 9, where the dynamics of the (non-Hamiltonian) equivariant LieConv-T(2) model has a T(2) dynamical symmetry with the generators $\partial_x, \partial_y$, which are Hamiltonian vector fields for $f = p_x, f = p_y$, and yet linear momentum is not conserved by the model. + +⁷The Lie bracket on vector fields produces another vector field and is defined by how it acts on functions: for any smooth function $g$, $[X, F](g) = X(F(g)) - F(X(g))$ +---PAGE_BREAK--- + +Figure 9. Equivariance alone is not sufficient: for conservation we need both to model $\mathcal{H}$ and to incorporate the given symmetry. For comparison, LieConv-T(2) is T(2)-equivariant but models $F$, and HLieConv-Trivial models $\mathcal{H}$ but is not T(2)-equivariant. Only HLieConv-T(2) conserves linear momentum. + +## Conserving Linear and Angular Momentum + +Consider a system of $N$ interacting particles described in Euclidean coordinates with position and momentum $q_{im}, p_{im}$, such as the multi-body spring problem. Here the first index $i = 1, 2, 3$ indexes the spatial coordinates and the second $m = 1, 2, ..., N$ indexes the particles.
We will use the bolded notation $\mathbf{q}_m, \mathbf{p}_m$ to suppress the spatial indices while still indexing the particles $m$, as in Section 6.1. + +The total linear momentum along a given direction **n** is +$$ \mathbf{n} \cdot \mathbf{P} = \sum_{i,m} n_i p_{im} = \mathbf{n} \cdot (\sum_m \mathbf{p}_m). $$ +Expanding the Poisson bracket, the corresponding Hamiltonian vector field is + +$$ X_{nP} = \{\mathbf{n} \cdot \mathbf{P}, \cdot\} = \sum_{i,m} n_i \frac{\partial}{\partial q_{im}} = \mathbf{n} \cdot \sum_{m} \frac{\partial}{\partial \mathbf{q}_{m}}, $$ + +which has the flow $\phi_{\lambda}^{X_{nP}}(\mathbf{q}_m, \mathbf{p}_m) = (\mathbf{q}_m + \lambda\mathbf{n}, \mathbf{p}_m)$, a translation of all particles by $\lambda\mathbf{n}$. So our model of the Hamiltonian conserves linear momentum if and only if it is invariant to a global translation of all particles (e.g. T(2) invariance for a 2D spring system). + +The total angular momentum along a given axis **n** is + +$$ \mathbf{n} \cdot \mathbf{L} = \mathbf{n} \cdot \sum_m \mathbf{q}_m \times \mathbf{p}_m = \sum_{i,j,k,m} \epsilon_{ijk} n_i q_{jm} p_{km} = \sum_m \mathbf{p}_m^T A \mathbf{q}_m $$ + +where $\epsilon_{ijk}$ is the Levi-Civita symbol and we have defined the antisymmetric matrix $A$ by $A_{kj} = \sum_i \epsilon_{ijk} n_i$. + +$$ X_{nL} = \{\mathbf{n} \cdot \mathbf{L}, \cdot\} = \sum_{j,k,m} A_{kj} q_{jm} \frac{\partial}{\partial q_{km}} - A_{jk} p_{jm} \frac{\partial}{\partial p_{km}} $$ + +$$ X_{nL} = \sum_m (\mathbf{q}_m^T A^T \frac{\partial}{\partial \mathbf{q}_m} + \mathbf{p}_m^T A^T \frac{\partial}{\partial \mathbf{p}_m}) $$ + +where the second line follows from the antisymmetry of $A$. +We can find the flow of $X_{nL}$ from the differential equations + +$\dot{\mathbf{q}}_m = A\mathbf{q}_m$, $\dot{\mathbf{p}}_m = A\mathbf{p}_m$, which have the solution + +$$ \phi_{\theta}^{X_{nL}}(\mathbf{q}_m, \mathbf{p}_m) = (e^{\theta A}\mathbf{q}_m, e^{\theta A}\mathbf{p}_m) = (R_{\theta}\mathbf{q}_m, R_{\theta}\mathbf{p}_m), $$ + +where $R_\theta$ is a rotation about the axis **n** by the angle $\theta$, which follows from the Rodrigues rotation formula. Therefore, the flow of the Hamiltonian vector field of angular momentum along a given axis is a global rotation of the position and momentum of each particle about that axis. Again, the dynamics of a neural network modeling a Hamiltonian conserve total angular momentum if and only if the network is invariant to simultaneous rotation of all particle positions and momenta. + +# B. Additional Experiments + +## B.1. Equivariance Demo + +While (7) shows that the convolution estimator is equivariant, we have conducted the ablation study below examining the equivariance of the network empirically. We trained LieConv (Trivial, T(3), SO(3), SE(3)) models on a limited subset of 20k training examples (out of 100k) of the HOMO task on QM9 without any data augmentation. We then evaluated these models on a series of modified test sets where each example has been randomly transformed by an element of the given group (the test translations in T(3) and SE(3) are sampled from a normal with stddev 0.5). In Table 4 the rows are the models configured with a given group equivariance, and the columns N/G denote no augmentation at training time and transformations from G applied to the test set.
| Model | N/N | N/T(3) | N/SO(3) | N/SE(3) |
|---|---|---|---|---|
| Trivial | 173 | 183 | 239 | 243 |
| T(3) | 113 | 113 | 133 | 133 |
| SO(3) | 159 | 238 | 160 | 240 |
| SE(3) | 62 | 62 | 63 | 62 |
+ +Table 4. Test MAE (in meV) on HOMO test set randomly transformed by elements of $\mathcal{G}$. Despite no data augmentation (N), $\mathcal{G}$ equivariant models perform as well on $\mathcal{G}$ transformed test data. + +Notably, the performance of the LieConv-G models does not degrade when random G transformations are applied to the test set. Also, in this low-data regime, the added equivariances are especially important. + +## B.2. RotMNIST Comparison + +While the RotMNIST dataset consists of 12k rotated MNIST digits, it is standard to separate out 10k to be used for training and 2k for validation. However, in Ti-Pooling and E(2)-Steerable CNNs, it appears that after hyperparameters were tuned the validation set was folded back into the training set +---PAGE_BREAK--- + +to be used as additional training data, a common approach used on other datasets. Although in Table 1 we only use 10k training points, in the table below we report the performance with and without augmentation trained on the full 12k examples.
| Aug | Trivial | T_y | T(2) | SO(2) | SO(2)×R* | SE(2) |
|---|---|---|---|---|---|---|
| SO(2) | 1.44 | 1.35 | 1.32 | 1.27 | 1.13 | 1.13 |
| None | 1.60 | 2.64 | 2.34 | 1.26 | 1.25 | 1.15 |
+ +Table 5. Classification Error (%) on RotMNIST dataset for LieConv with different group equivariances and baselines. + +## C. Implementation Details + +### C.1. Practical Considerations + +While the high-level summary of the lifting procedure (Algorithm 1) and the LieConv layer (Algorithm 2) provides a useful conceptual understanding of our method, there are some additional details that are important for a practical implementation. + +1. According to Algorithm 2, $a_{ij}$ is computed in every LieConv layer, which is both highly redundant and costly. In practice, we precompute $a_{ij}$ once after lifting and feed it through the network with layers operating on the state ($\{a_{ij}\}_{i,j=1}^{N,N}, \{f_i\}_{i=1}^N$) instead of $\{(u_i, q_i, f_i)\}_{i=1}^N$. Doing so requires fixing the group elements that will be used at each layer for a given forward pass. + +2. In practice only $p$ elements of $nbhd_i$ are sampled (randomly) for computing the Monte Carlo estimator in order to limit the computational burden (see Appendix A.4). + +3. We use the analytic forms for the exponential and logarithm maps of the various groups as described in Eade (2014). + +### C.2. Sampling from the Haar Measure for Various Groups + +When the lifting map from $\mathcal{X} \to G \times \mathcal{X}/G$ is multi-valued, we need to sample elements $u \in G$ that project down to $x$, $uo = x$, in a way consistent with the Haar measure $\mu(\cdot)$. In other words, since the restriction $\mu(\cdot)|_{\text{nbhd}}$ is a distribution, we must sample from the conditional distribution $u \sim \mu(u|uo = x)|_{\text{nbhd}}$. In general this can be done by parametrizing the distribution $\mu$ as a collection of random variables that includes $x$, and then sampling the remaining variables. + +In this paper, the groups we use in which the lifting map is multi-valued are SE(2), SO(3), and SE(3).
The process is especially straightforward for SE(2) and SE(3), as these groups can be expressed as a semi-direct product of two groups $G = H \ltimes N$, + +$$d\mu_G(h, n) = \delta(h)d\mu_H(h)d\mu_N(n), \quad (21)$$ + +where $\delta(h) = \frac{d\mu_N(n)}{d\mu_N(hnh^{-1})}$ (Willson, 2009). For $G = \text{SE}(d) = \text{SO}(d) \ltimes \text{T}(d)$, $\delta(h) = 1$ since the Lebesgue measure $d\mu_{\text{T}(d)}(x) = d\lambda(x) = dx$ is invariant to rotations. So simply $d\mu_{\text{SE}(d)}(R, x) = d\mu_{\text{SO}(d)}(R)dx$. + +So lifts of a point $x \in \mathcal{X}$ to $\text{SE}(d)$ consistent with $\mu$ are just $T_x R$, the product of the translation by $x$ and a randomly sampled rotation $R \sim \mu_{\text{SO}(d)}(\cdot)$. There are multiple easy methods to sample uniformly from $\text{SO}(d)$ given in (Kuffner, 2004); for example, sampling uniformly from $\text{SO}(3)$ can be done by sampling a unit quaternion from the 3-sphere and identifying it with the corresponding rotation matrix. + +### C.3. Model Architecture + +We employ a ResNet-style architecture (He et al., 2016), using bottleneck blocks (Zagoruyko and Komodakis, 2016), and replacing ReLUs with Swish activations (Ramachandran et al., 2017). The convolutional kernel $g_\theta$ internal to each LieConv layer is parametrized by a 3-layer MLP with 32 hidden units, batch norm, and Swish nonlinearities. Not only do the Swish activations improve performance slightly, but unlike ReLUs they are twice differentiable, which is a requirement for backpropagating through the Hamiltonian dynamics. The stack of elementwise linear and bottleneck blocks is followed by a global pooling layer that computes the average over all elements, but not over channels. As with standard image bottleneck blocks, the channels for the convolutional layer in the middle are smaller by a factor of 4 for increased parameter and computational efficiency.
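The quaternion-based SO(3) sampling described in Appendix C.2 can be sketched as follows. This is a minimal pure-Python illustration of one standard method (a normalized Gaussian 4-vector is uniform on the 3-sphere by spherical symmetry), not necessarily the paper's implementation; the function name is hypothetical.

```python
import math
import random

def sample_so3():
    # Draw a unit quaternion uniformly from the 3-sphere: a 4-vector of
    # i.i.d. Gaussians, once normalized, is uniform on S^3.
    w, x, y, z = (random.gauss(0.0, 1.0) for _ in range(4))
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    # Identify the quaternion with its rotation matrix; since +-q map to
    # the same rotation, the result is Haar-uniform on SO(3).
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
```

A lift of a point $x$ then composes the translation by $x$ with such a rotation, per the factorization $d\mu_{\text{SE}(d)}(R, x) = d\mu_{\text{SO}(d)}(R)dx$.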
+ +**Downsampling:** As is traditional for image data, we increase the number of channels and the receptive field at every downsampling step. The downsampling is performed with the farthest point downsampling method described in Appendix A.4. For a downsampling by a factor of $s < 1$, the radius of the neighborhood is scaled up by $s^{-1/2}$ and the channels are scaled up by $s^{-1/2}$. When an image is downsampled with the $s = (1/2)^2$ that is typical in a CNN, this results in 2x more channels and a radius or dilation of 2x. In the bottleneck block, the downsampling operation is fused with the LieConv layer, so that the convolution is only evaluated at the downsampled query locations. We perform downsampling only on the image datasets, which have more points. + +**BatchNorm:** In order to handle the varied number of group elements per example and within each neighborhood, we +---PAGE_BREAK--- + +use a modified batchnorm that computes statistics only over elements from a given mask. The batch norm is computed per channel, with statistics averaged over the batch size and each of the valid locations. + +### C.4. Details for Hamiltonian Models + +**Model Symmetries:** + +As the position vectors are mean centered in the model forward pass, $q_i' = q_i - \bar{q}$, HOGN and HLieConv-SO2* have additional T(2) invariance, yielding SE(2) invariance for HLieConv-SO2*. We also experimented with an HLieConv-SE2 equivariant model, but found that the exponential map for SE2 (involving Taylor expansions and masking) was not numerically stable enough for second derivatives, which are required for optimizing through the Hamiltonian dynamics. So instead we benchmark the HLieConv-SO2 (without centering) and the HLieConv-SO2* (with centering) models separately. Layer equivariance is preferable for not prematurely discarding useful information and for better modeling performance, but invariance alone is sufficient for the conservation laws.
Additionally, since we know a priori that the spring problem has Euclidean coordinates, we need not model the kinetic energy $K(\mathbf{p}, m) = \sum_{j=1}^N \|\mathbf{p}_j\|^2/(2m_j)$ and can instead focus on modeling the potential $V(q, k)$. We observe that this additional inductive bias of Euclidean coordinates improves model performance. Table 6 shows the invariance and equivariance properties of the relevant models and baselines. For Noether conservation, we need both to model the Hamiltonian and to have the symmetry property. + +**Dataset Generation:** To generate the spring dynamics datasets we generated *D* systems each with *N* = 6 particles connected by springs. The system parameters, mass and spring constant, are set by sampling {$m_1^{(i)}, \dots, m_6^{(i)}, k_1^{(i)}, \dots, k_6^{(i)}$}$_{i=1}^D$, $m_j^{(i)} \sim U(0.1, 3.1)$, $k_j^{(i)} \sim U(0, 5)$. Following Sanchez-Gonzalez et al. (2019), we set the spring constants as $k_{ij} = k_i k_j$. For each system $i$, the position and momentum of body $j$ were distributed as $\mathbf{q}_j^{(i)} \sim N(0, 0.16I)$, $\mathbf{p}_j^{(i)} \sim N(0, 0.36I)$. + +| Model | $F(\mathbf{z}, t)$ | $\mathcal{H}(\mathbf{z}, t)$ | T(2) | SO(2) |
|---|---|---|---|---|
| FC | • | | | |
| OGN | • | | | |
| HOGN | | • | ⋆ | |
| LieConv-T(2) | • | | ⋆ | |
| HLieConv-Trivial | | • | | |
| HLieConv-T(2) | | • | ⋆ | |
| HLieConv-SO(2) | | • | | ⋆ |
| HLieConv-SO(2)* | | • | ⋆ | ⋆ |

Table 6. Model characteristics. A • indicates whether the model parametrizes the dynamics $F$ or the Hamiltonian $\mathcal{H}$, and a ⋆ marks invariance or equivariance of the model to the given group (see the discussion above for which models are merely invariant).
Using the analytic form of the Hamiltonian for the spring problem, $\mathcal{H}(\mathbf{q}, \mathbf{p}) = K(\mathbf{p}, m) + V(\mathbf{q}, k)$, we use the RK4 numerical integration scheme to generate 5 second ground truth trajectories broken up into 500 evaluation timesteps. We use a fixed step size scheme for RK4 chosen automatically (as implemented in Chen et al. (2018)) with a relative tolerance of 1e-8 in double precision arithmetic. We then randomly selected a single segment for each trajectory, consisting of an initial state $\mathbf{z}_t^{(i)}$ and $\tau = 4$ transition states: $(\mathbf{z}_{t+1}^{(i)}, \dots, \mathbf{z}_{t+\tau}^{(i)})$. + +**Training:** All models were trained in single precision arithmetic (double precision did not make any appreciable difference) with an integrator tolerance of 1e-4. We use a cosine decay for the learning rate schedule and perform early stopping over the validation MSE. We trained with a minibatch size of 200 and for 100 epochs each using the Adam optimizer (Kingma and Ba, 2014) without batch normalization. With 3k training examples, the HLieConv model takes about 20 minutes to train on one 1080Ti. + +For the examination of performance over the range of dataset sizes in Figure 8, we cap the validation set to the size of the training set to make the setting more realistic, and we also scale the number of training epochs up as the size of the dataset shrinks (epochs = $100(\sqrt{10^3/D})$), which we found to be sufficient to fit the training set. For $D \le 200$ we use the full dataset in each minibatch. + +**Hyperparameters:**
| | channels | layers | lr |
|---|---|---|---|
| (H)FC | 256 | 4 | 1e-2 |
| (H)OGN | 256 | 1 | 1e-2 |
| (H)LieConv | 384 | 4 | 1e-3 |
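The cosine learning-rate decay mentioned in the Training paragraph can be sketched as follows; this is a standard cosine schedule decaying to zero, and any warmup or floor value used in practice is an assumption left out here.

```python
import math

def cosine_lr(step, total_steps, base_lr):
    # Cosine decay from base_lr at step 0 down to 0 at total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))
```

With the (H)LieConv setting above (base_lr = 1e-3), the rate halves at the midpoint of training and reaches zero at the final step.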
+ +**Hyperparameter tuning:** Model hyperparameters were tuned by grid search over channel width, number of layers, and learning rate. The models were tuned with training, validation, and test datasets consisting of 3000, 2000, and 2000 trajectory segments respectively. + +### C.5. Details for Image and Molecular Experiments + +**RotMNIST Hyperparameters:** For RotMNIST we train each model for 500 epochs using the Adam optimizer with learning rate 3e-3 and batch size 25. The first linear layer maps the 1-channel grayscale input to $k = 128$ channels, and the number of channels in the bottleneck blocks follow the scaling law from Appendix C.3 as the group elements are downsampled. We use 6 bottleneck blocks, and the total downsampling factor $S = 1/10$ is split geometrically between the blocks as $s = (1/10)^{1/6}$ per block. The initial radius $r$ of the local neighborhoods in the first layer is set so +---PAGE_BREAK--- + +as to include 1/15 of the total number of elements in each +neighborhood and is scaled accordingly. The subsampled +neighborhood used to compute the Monte Carlo convolution +estimator uses *p* = 25 elements. The models take less than +12 hours to train on a 1080Ti. + +**QM9 Hyperparameters:** For the QM9 molecular data, we use the featurization from Anderson et al. (2019), where the input features $f_i$ are determined by the atom type (C,H,N,O,F) and the atomic charge. The coordinates $x_i$ are simply the raw atomic coordinates measured in angstroms. A separate model is trained for each prediction task, all using the same hyperparameters and early stopping on the validation MAE. We use the same train, validation, test split as Anderson et al. (2019), with 100k molecules for train, 10% for test and the remaining for validation. Like with the other experiments, we use a cosine learning rate decay schedule. Each model is trained using the Adam optimizer for 1000 epochs with a learning rate of 3e-3 and batch size of 100. 
We use SO(3) data augmentation and 6 bottleneck blocks, each with $k = 1536$ channels. The radius of the local neighborhood is set to $r = \infty$ to include all elements. The model takes about 48 hours to train on a single 1080Ti. + +### C.6. Local Neighborhood Visualizations + +In Figure 10 we visualize the local neighborhood used with different groups under three different types of transformations: translations, rotations, and scaling. The distance and neighborhood are defined on the tuples of group elements and orbits. For Trivial, T(2), SO(2), $\mathbb{R} \times SO(2)$ the correspondence between points and these tuples is one-to-one, and we can identify the neighborhood in terms of the input points. For SE(2) each point is mapped to multiple tuples, each of which defines its own neighborhood in terms of other tuples. In the figure, for SE(2), we visualize for a given point the distribution of points that enter the computation of the convolution at a specific tuple. +---PAGE_BREAK--- + +**Figure 10.** A visualization of the local neighborhood for different groups, in terms of the points in the input space. For the computation of the convolution at the point in red, elements are sampled from the colored region. In each panel, the top row shows translations, the middle row rotations, and the bottom row scalings of the same image. For $SE(2)$ we visualize the distribution of points entering the computation of the convolution over multiple lift samples. For each of the equivariant models that respects a given symmetry, the points that enter into the computation are not affected by the transformation.
\ No newline at end of file diff --git a/samples/texts_merged/174916.md b/samples/texts_merged/174916.md new file mode 100644 index 0000000000000000000000000000000000000000..40669e83bcda2608263fe0dba777d7034627b1d7 --- /dev/null +++ b/samples/texts_merged/174916.md @@ -0,0 +1,469 @@ + +---PAGE_BREAK--- + +ON THE LOCATION OF ZEROS OF THE LAPLACIAN MATCHING POLYNOMIALS OF GRAPHS + +JIANG-CHAO WAN, YI WANG, ALI MOHAMMADIAN + +School of Mathematical Sciences, Anhui University, Hefei 230601, Anhui, China + +**ABSTRACT.** The Laplacian matching polynomial of a graph $G$, denoted by $\mathcal{LM}(G,x)$, is a new graph polynomial all of whose roots are nonnegative real numbers. In this paper, we investigate the location of zeros of the Laplacian matching polynomials. Let $G$ be a connected graph. We show that $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a tree. We prove that the number of distinct positive zeros of $\mathcal{LM}(G,x)$ is at least equal to the length of the longest path in $G$. It is also established that the zeros of $\mathcal{LM}(G,x)$ and $\mathcal{LM}(G-e,x)$ interlace for each edge $e$ of $G$. Using the path-tree of $G$, we present a linear algebraic approach to investigate the largest zero of $\mathcal{LM}(G,x)$ and particularly to give tight upper and lower bounds on it. + +# 1. INTRODUCTION + +The graph polynomials, such as the characteristic polynomial, the chromatic polynomial, the independence polynomial, the matching polynomial, and many others, are widely studied and play important roles in applications of graphs in several diverse fields. The location of zeros of graph polynomials is a main topic in algebraic combinatorics and can be used to describe some structures and parameters of graphs. In this paper, we focus on the location of zeros of the Laplacian matching polynomials of graphs. For more results on the location of zeros of graph polynomials, we refer to [9].
+ +Throughout this paper, all graphs are assumed to be finite, undirected, and without loops or multiple edges. Let $G$ be a graph. We denote the vertex set of $G$ by $V(G)$ and the edge set of $G$ by $E(G)$. Let $M$ be a subset of $E(G)$. We denote by $V(M)$ the set of vertices of $G$ each of which is an endpoint of one of the edges in $M$. If no two distinct edges in $M$ share a common endpoint, then $M$ is called a *matching* of $G$. The set of matchings of $G$ is denoted by $\mathcal{M}(G)$. A matching $M \in \mathcal{M}(G)$ is said to be *perfect* if $V(M) = V(G)$. The *matching polynomial* of $G$ is + +$$ \mathcal{M}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} x^{|V(G) \setminus V(M)|} $$ + +which was formally defined by Heilmann and Lieb [7] in studying statistical physics, although it has appeared independently in several different contexts. + +The matching polynomial is a fascinating mathematical object and has attracted considerable attention from researchers. For instance, by studying the multiplicity of zeros of the matching polynomials, Chen and Ku [8] gave a generalization of the Gallai–Edmonds theorem which is a + +2020 Mathematics Subject Classification. Primary: 05C31, 05C70. Secondary: 05C05, 05C50, 12D10. +Key words and phrases. Graph polynomial, Matching, Subdivision of graphs, Zeros of polynomials. +Email addresses: wanjc@stu.ahu.edu.cn (J.-C. Wan), wangy@ahu.edu.cn (Y. Wang, corresponding author), ali.m@ahu.edu.cn (A. Mohammadian). +Funding. The research of the second author is supported by the National Natural Science Foundation of China with grant numbers 11771016 and 11871073. The research of the third author is supported by the Natural Science Foundation of Anhui Province with grant number 2008085MA03. +---PAGE_BREAK--- + +structure theorem in classical graph theory.
For another instance, using a well known upper bound on zeros of the matching polynomials, Marcus, Spielman, and Srivastava [10] established that infinitely many bipartite Ramanujan graphs exist. Some earlier facts on the matching polynomials can be found in [4]. + +We want to summarize here some basic features of the zeros of the matching polynomial. For this, let us first introduce some more notations and terminology which we need. For a vertex $v$ of a graph $G$, we denote by $N_G(v)$ the set of all vertices of $G$ adjacent to $v$. The degree of $v$ is defined as $|N_G(v)|$ and is denoted by $d_G(v)$. The maximum degree and the minimum degree of the vertices of $G$ are denoted by $\Delta(G)$ and $\delta(G)$, respectively. For a subset $W$ of $V(G)$, we shall use $G[W]$ to denote the induced subgraph of $G$ induced by $W$ and we simply use $G-W$ instead of $G[V(G)\setminus W]$. Also, for a vertex $v$ of $G$, we simply write $G-v$ for $G - \{v\}$. For an edge $e$ of $G$, we denote by $G-e$ the subgraph of $G$ obtained by deleting the edge $e$. + +Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_m$ be respectively the zeros of two real rooted polynomials $f$ and $g$ with $\deg f = n$ and $\deg g = m$. We say that the zeros of $f$ and $g$ interlace if either + +$$\alpha_1 \le \beta_1 \le \alpha_2 \le \beta_2 \le \dots$$ + +or + +$$\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots$$ + +in which case one clearly must have $|n-m| \le 1$. We adopt the convention that the zeros of any polynomial of degree 0 interlace the zeros of any other polynomial. + +For any connected graph $G$, the assertions given in (1.1)-(1.3) are known. + +(1.1) All the roots of $\mathcal{M}(G, x)$ are real. Moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{M}(G, x)$ lie in the interval $(-2\sqrt{\Delta(G)-1}, 2\sqrt{\Delta(G)-1})$ [7]. 
+ +(1.2) The number of distinct roots of $\mathcal{M}(G, x)$ is at least equal to $\ell(G)+1$, where $\ell(G)$ is the length of the longest path in $G$ [5]. + +(1.3) For each vertex $v \in V(G)$, the zeros of $\mathcal{M}(G-v, x)$ interlace the zeros of $\mathcal{M}(G, x)$. In addition, the largest zero of $\mathcal{M}(G, x)$ has multiplicity 1 and is greater than the largest zero of $\mathcal{M}(G-v, x)$ [6]. + +Recently, Mohammadian [11] introduced a new graph polynomial that is called the *Laplacian matching polynomial* and is defined for a graph $G$ as + +$$ (1.4) \qquad \mathcal{LM}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} (x - d_G(v)) \right). $$ + +Mohammadian proved that all roots of $\mathcal{LM}(G, x)$ are real and nonnegative, and moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ lie in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1})$. By observing this interval, it is natural to ask: what is the necessary and sufficient condition for 0 to be a root of $\mathcal{LM}(G, x)$? More generally, as a new real rooted graph polynomial, it is natural to investigate the properties of its zeros, such as the interlacing of zeros, upper and lower bounds on the largest zero, the maximum multiplicity of zeros, and the number of distinct zeros. In this paper, we mainly prove that the assertions given in (1.5)-(1.7) hold for any connected graph $G$, letting $\ell(G)$ be the length of the longest path in $G$. + +(1.5) If $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ are contained in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1}\cos\frac{\pi}{2\ell(G)+2}]$, and in addition, the upper bound of the interval is a zero of $\mathcal{LM}(G, x)$ if and only if $G$ is a cycle. +---PAGE_BREAK--- + +(1.6) The number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots. + +(1.7) For each edge $e \in E(G)$, the zeros of $\mathcal{LM}(G,x)$ and $\mathcal{LM}(G-e,x)$ interlace in the sense that, if $\alpha_1 \le \cdots \le \alpha_n$ and $\beta_1 \le \cdots \le \beta_n$ are respectively the zeros of $\mathcal{LM}(G,x)$ and $\mathcal{LM}(G-e,x)$ in which $n = |V(G)|$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \cdots \le \beta_n \le \alpha_n$. Further, the largest zero of $\mathcal{LM}(G,x)$ has multiplicity 1 and is strictly greater than the largest zero of $\mathcal{LM}(H,x)$ for any proper subgraph $H$ of $G$. + +It should be mentioned that the Laplacian matching polynomial has recently been studied under a different name and expression by Chen and Zhang [17]. + +For a graph $G$, the *subdivision* of $G$, denoted by $S(G)$, is the graph derived from $G$ by replacing every edge $e = \{a,b\}$ of $G$ with two edges $\{a,v_e\}$ and $\{v_e,b\}$ along with the new vertex $v_e$ corresponding to the edge $e$. We know from a result of Yan and Yeh [16] that + +$$ (1.8) \qquad \mathcal{M}(S(G),x) = x^{|E(G)|-|V(G)|} \mathcal{LM}(G,x^2) $$ + +for any graph $G$, which is also proved by Chen and Zhang [17] by a different method. The equality (1.8) shows that the problem of the location of zeros of the Laplacian matching polynomial of a graph $G$ can be transformed into the problem of the location of zeros of the matching polynomial of $S(G)$. For instance, using (1.8) and the first statement in (1.1), it immediately follows that the zeros of $\mathcal{LM}(G,x)$ are nonnegative real numbers. The assertion (1.6) is proved in Section 2 via the subdivision of graphs. + +One of the most important tools in the theory of the matching polynomial is the concept of the 'path-tree', which was introduced by Godsil [5].
Given a graph $G$ and a vertex $u \in V(G)$, the *path-tree* $T(G, u)$ is the tree which has as vertices the paths in $G$ which start at $u$ where two such paths are adjacent if one is a maximal proper subpath of the other. In Section 3, we show that the path-tree is also applicable for the Laplacian matching polynomial by making some appropriate adjustments. Using this, we prove (1.5) which is a slight improvement of the second statement of Theorem 2.6 of [11]. The assertion (1.7) is proved in Section 3 by linear algebra arguments. + +Let us introduce more notations and definitions before moving on to the next section. We use +$\lambda(f(x))$ to denote the largest zero of a real rooted polynomial $f(x)$. For a square matrix $M$, we shall +use $\varphi(M, x)$ to denote the characteristic polynomial of $M$ in the indeterminate $x$. If all the roots of +$\varphi(M, x)$ are real, then its largest zero is denoted by $\lambda(M)$. For a graph $G$, the *adjacency matrix* of +$G$, denoted by $A(G)$, is a matrix whose rows and columns are indexed by $V(G)$ and the $(u, v)$-entry +is 1 if $u$ and $v$ are adjacent and 0 otherwise. Let $D(G)$ be the diagonal matrix whose rows and +columns are indexed as the rows and the columns of $A(G)$ with $d_G(v)$ in the $v$th diagonal position. +The matrices $L(G) = D(G) - A(G)$ and $Q(G) = D(G) + A(G)$ are respectively said to be the +*Laplacian matrix* and the *signless Laplacian matrix* of $G$. It is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ +if and only if $G$ is a forest [14]. In addition, it is proved that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only +if $G$ is a forest [11]. Among other results, we present a generalization of these results in Section 2. + +## 2. 
SUBDIVISION OF GRAPHS AND THE LAPLACIAN MATCHING POLYNOMIAL + +In this section, we examine the location of zeros of the Laplacian matching polynomial by establishing a relation between the Laplacian matching polynomial of a graph and the matching polynomial of the subdivision of that graph. Then, by analysing the structures of the subdivision of graphs, we will prove (1.6). To begin with, we recall the multivariate matching polynomial that covers both the matching polynomial and the Laplacian matching polynomial. This multivariate graph polynomial was introduced by Heilmann and Lieb [7]. +---PAGE_BREAK--- + +Let $G$ be a graph and associate the vector $\mathbf{x}_G = (x_v)_{v \in V(G)}$ with $G$ in which $x_v$ is an indeterminate corresponding to the vertex $v \in V(G)$. Notice that, for a subgraph $H$ of $G$, $\mathbf{x}_H$ is the vector that has the same coordinates as $\mathbf{x}_G$ in the positions corresponding to the vertices in $V(H)$. The *multivariate matching polynomial* of $G$ is defined as + +$$ (2.1) \qquad \mathfrak{M}(G, \mathbf{x}_G) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} x_v \right). $$ + +Let $\mathbf{1}_G$ be the all-ones vector of length $|V(G)|$. Also, for a subgraph $H$ of $G$, we let $\mathbf{d}_{G,H} = (d_G(v))_{v \in V(H)}$. For simplicity, we write $\mathbf{d}_G$ instead of $\mathbf{d}_{G,G}$. We sometimes drop the subscript of the vector symbols if there is no possible confusion. It is easy to see that + +$$ (2.2) \qquad \mathfrak{M}(G, x\mathbf{1}_G) = \mathcal{M}(G, x) $$ + +and + +$$ (2.3) \qquad \mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G) = \mathcal{L}\mathcal{M}(G, x). $$ + +Note that + +$$ \mathfrak{M}(G_1 \cup G_2, (\mathbf{x}_{G_1}, \mathbf{x}_{G_2})) = \mathfrak{M}(G_1, \mathbf{x}_{G_1})\mathfrak{M}(G_2, \mathbf{x}_{G_2}), $$ + +where $G_1 \cup G_2$ denotes the disjoint union of two graphs $G_1$ and $G_2$. So, in what follows, we often restrict our attention to connected graphs.
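The definitions (2.1)–(2.3) can be verified by brute force on a small example. The following sketch (plain Python; the helper names `matchings` and `multivariate_matching` are ours, not from the paper) enumerates all matchings of the triangle $K_3$, evaluates $\mathfrak{M}(K_3, \mathbf{x})$ at numeric points, and checks the specializations (2.2) and (2.3) against the closed forms $\mathcal{M}(K_3, x) = x^3 - 3x$ and $\mathcal{L}\mathcal{M}(K_3, x) = (x-2)^3 - 3(x-2)$.

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching (set of pairwise disjoint edges), including the empty one."""
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):
                yield sub

def multivariate_matching(vertices, edges, x):
    """Evaluate (2.1): sum over matchings M of (-1)^{|M|} * prod over uncovered v of x[v]."""
    total = 0
    for M in matchings(edges):
        covered = {v for e in M for v in e}
        prod = 1
        for v in vertices:
            if v not in covered:
                prod *= x[v]
        total += (-1) ** len(M) * prod
    return total

# Triangle K3: every vertex has degree 2; its matchings are the empty one and the 3 single edges.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
deg = {v: 2 for v in V}

t = 1.7  # arbitrary test point
# (2.2): substituting x_v = t gives the matching polynomial M(K3, t) = t^3 - 3t.
assert abs(multivariate_matching(V, E, {v: t for v in V}) - (t**3 - 3*t)) < 1e-9
# (2.3): substituting x_v = t - d_G(v) gives LM(K3, t) = (t-2)^3 - 3(t-2).
assert abs(multivariate_matching(V, E, {v: t - deg[v] for v in V}) - ((t-2)**3 - 3*(t-2))) < 1e-9
```

Since every vertex of $K_3$ has degree 2, the substitution $x_v = t - d_G(v)$ in (2.3) amounts to shifting every coordinate by 2.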
+ +We need the following useful lemma in the sequel. + +**Lemma 2.1** (Amini [1]). Let $G$ be a graph. For any vertex $v \in V(G)$, + +$$ \mathfrak{M}(G, \mathbf{x}_G) = x_v \mathfrak{M}(G - v, \mathbf{x}_{G-v}) - \sum_{w \in N_G(v)} \mathfrak{M}(G - v - w, \mathbf{x}_{G-v-w}). $$ + +By combining Lemma 2.1 and (2.2), we get + +$$ (2.4) \qquad \mathcal{M}(G, x) = x\mathcal{M}(G-v, x) - \sum_{w \in N_G(v)} \mathcal{M}(G-v-w, x), $$ + +which is a well-known recursive formula for the matching polynomial. + +The following theorem, which is a generalization of (1.8), plays a crucial role in our proofs in Section 3. + +**Theorem 2.2.** Let $G$ be a graph. For any subset $W$ of $V(G)$, + +$$ \mathcal{M}(S(G) - W, x) = x^{|E(G)| - |V(G)| + |W|} \mathfrak{M}(G - W, x^2 \mathbf{1}_{G-W} - \mathbf{d}_{G,G-W}). $$ + +*Proof.* For simplicity, let $k = |V(G) \setminus W|$ and $m = |E(G)|$. We prove the assertion by induction on $k$. If $V(G) \setminus W = \{u\}$ for some vertex $u \in V(G)$, then $S(G) - W$ consists of a star on $d_G(u) + 1$ vertices and $|E(G)| - d_G(u)$ isolated vertices. Therefore, + +$$ \mathcal{M}(S(G) - W, x) = x^{m+1} - d_G(u)x^{m-1} $$ + +and + +$$ \mathfrak{M}(G - W, x^2 \mathbf{1} - \mathbf{d}) = x^2 - d_G(u). $$ +---PAGE_BREAK--- + +So, the claimed equality holds for $k=1$. Assume that $k \ge 2$. Choose a vertex $u \in V(G) \setminus W$ and let $H = S(G) - W - u$. By Lemma 2.1, the induction hypothesis and (2.4), we have + +$$ +\begin{align*} +x^{m-k+2}\mathfrak{M}(G-W, x^2\mathbf{1}-\mathbf{d}) &= x(x^2-d_G(u))x^{m-k+1}\mathfrak{M}(G-W-u, x^2\mathbf{1}-\mathbf{d}) \\ +&\quad - \sum_{v \in N_{G-W}(u)} x^{m-k+2}\mathfrak{M}(G-W-u-v, x^2\mathbf{1}-\mathbf{d}) \\ +&= x(x^2-d_G(u))\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x) \\ +&= x^2\mathcal{M}(S(G)-W,x) + x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) \\ +&\quad - d_G(u)x\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x).
+\end{align*} +$$ + +Hence, in order to complete the induction step, it suffices to prove that + +$$ (2.5) \qquad d_G(u)x\mathcal{M}(H,x) = x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x). $$ + +To establish (2.5), let $N_G(u) \cap W = \{a_1, \dots, a_s\}$ and $N_G(u) \setminus W = \{b_1, \dots, b_t\}$. Also, for $i=1, \dots, s$, let $a'_i$ be the vertex of $S(G)$ corresponding to the edge $\{u, a_i\}$ of $G$ and, for $j=1, \dots, t$, let $b'_j$ be the vertex of $S(G)$ corresponding to the edge $\{u, b_j\}$ of $G$. Notice that, if one of $N_G(u) \cap W$ and $N_G(u) \setminus W$ is empty, then we may derive (2.5) by the same discussion as below. We have $d_G(u) = s+t$ and $N_{S(G)-W}(u) = N_{S(G)}(u) = \{a'_1, \dots, a'_s, b'_1, \dots, b'_t\}$. The structure of $H$ is illustrated in Figure 1. + +**Figure 1.** The structure of $H$. + +We have $d_H(a'_i) = 0$ for $i = 1, \dots, s$ and $d_H(b'_j) = 1$ for $j = 1, \dots, t$. By applying (2.4) for $a'_i$ and $b'_j$, we find that + +$$ \mathcal{M}(H,x) = x\mathcal{M}(H-a'_i,x) $$ +---PAGE_BREAK--- + +and + +$$ +\begin{align*} +x\mathcal{M}(H, x) &= x^2\mathcal{M}(H - b'_j, x) - x\mathcal{M}(H - b_j - b'_j, x) \\ +&= x^2\mathcal{M}(H - b'_j, x) - \mathcal{M}(H - b_j, x). +\end{align*} +$$ + +Therefore, + +$$ +\begin{align*} +d_G(u)x\mathcal{M}(H,x) &= sx\mathcal{M}(H,x) + tx\mathcal{M}(H,x) \\ +&= x^2 \sum_{i=1}^{s} \mathcal{M}(H - a'_i, x) + x^2 \sum_{j=1}^{t} \mathcal{M}(H - b'_j, x) - \sum_{j=1}^{t} \mathcal{M}(H - b_j, x) \\ +&= x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x), +\end{align*} +$$ + +which is exactly (2.5). This completes the proof. +□ + +In what follows, we prove some results about the Laplacian matching polynomial by analysing +the structures of the subdivision of graphs. The following consequence immediately follows from +Theorem 2.2 and the first statement in (1.1). 
It is worth mentioning that the following result is proved +in [17] for a different formulation of the Laplacian matching polynomial. + +**Corollary 2.3.** Let $G$ be a graph. Then + +$$ +\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|} \mathcal{L}\mathcal{M}(G, x^2). +$$ + +In particular, the zeros of $\mathcal{L}\mathcal{M}(G, x)$ are nonnegative real numbers. + +For a graph $G$, it is proved that $\mathcal{L}\mathcal{M}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Since $0$ is an eigenvalue of $L(G)$, we deduce that $\mathcal{L}\mathcal{M}(G, 0) = 0$ if $G$ is a forest. From (1.4), we get the combinatorial identity + +$$ +\sum_{M \in \mathcal{M}(F)} (-1)^{|M|} \left( \prod_{v \in V(F) \setminus V(M)} d_F(v) \right) = 0 +$$ + +for any forest $F$. The following theorem, which is proved in [17], gives a necessary and sufficient condition for $0$ to be a root of the Laplacian matching polynomial. We present a different proof of it here. + +**Theorem 2.4** (Chen, Zhang [17]). Let $G$ be a connected graph. Then, $0$ is a root of $\mathcal{LM}(G, x)$ if and only if $G$ is a tree. + +*Proof.* If $G$ is a tree, then $|E(G)| = |V(G)| - 1$ and so $\mathcal{LM}(G, x^2) = x\mathcal{M}(S(G), x)$ by Corollary 2.3, implying that $0$ is a root of $\mathcal{LM}(G, x)$. We prove that $0$ is not a root of $\mathcal{LM}(G, x)$ if $G$ is not a tree. For this, assume that $|E(G)| \ge |V(G)|$. One may easily consider $S(G)$ as a bipartite graph with the bipartition $\{V(G), E(G)\}$ after identifying each new vertex $v_e$ of $S(G)$ with its corresponding edge $e$ of $G$. + +We claim that $S(G)$ has a matching that saturates the part $V(G)$. If $G$ contains a vertex $u$ with degree 1 and $e$ is the edge incident to $u$ in $G$, then it suffices to prove that $S(G-u)$ has a matching that saturates the part $V(G-u)$, since the union of such a matching and the edge $\{u, v_e\}$ forms a matching of $S(G)$ that saturates the part $V(G)$.
Thus, we may assume that $d_G(v) \ge 2$ for all vertices $v \in V(G)$. We are going to establish that $S(G)$ satisfies Hall's condition [2, Theorem 16.4]. For a subset $W$ of $V(G)$, we shall use $N_G(W)$ to denote the set of vertices of $G$ each of which is adjacent to a vertex in $W$ and $\partial_G(W)$ to denote the set of edges of $G$ each of which has exactly one endpoint in $W$. For any subset $U$ of the part $V(G)$, since $d_G(v) \ge 2$ for all vertices $v \in V(G)$, + +$$ +(2.6) \qquad |\partial_{S(G)}(U)| \ge 2|U|. +$$ +---PAGE_BREAK--- + +On the other hand, $d_{S(G)}(v_e) = 2$ for each $e \in E(G)$, so + +$$ (2.7) \qquad |\partial_{S(G)}(N_{S(G)}(U))| = 2|N_{S(G)}(U)|. $$ + +Clearly, $|\partial_{S(G)}(N_{S(G)}(U))| \ge |\partial_{S(G)}(U)|$ which implies that $|N_{S(G)}(U)| \ge |U|$ using (2.6) and (2.7). This means that $S(G)$ satisfies Hall's condition, as required. + +We proved that $S(G)$ has a matching that saturates the part $V(G)$. This means that the smallest power of $x$ in $\mathcal{M}(S(G), x)$ is $|E(G)| - |V(G)|$ by (2.1) and (2.2). In view of Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that the constant term in $\mathcal{LM}(G, x)$ is nonzero. So, 0 is not a root of $\mathcal{LM}(G, x)$. This completes the proof. $\square$ + +In the next theorem, we give a lower bound on the number of distinct zeros of the Laplacian matching polynomial. + +**Theorem 2.5.** Let $G$ be a connected graph and let $\ell(G)$ be the length of the longest path in $G$. Then the number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots. + +*Proof.* For convenience, let $\ell = \ell(G)$. Denote by $\ell'$ the length of the longest path in $S(G)$. From (1.2), $\mathcal{M}(S(G), x)$ has at least $\ell' + 1$ distinct roots.
By Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that $\mathcal{LM}(G, x^2)$ has at least $\ell'$ distinct nonzero roots. Since all roots of $\mathcal{LM}(G, x)$ are real and nonnegative by Corollary 2.3, it follows that $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil$ distinct positive roots. + +For each edge $e \in E(G)$, denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $w_0, w_1, \dots, w_\ell$ be a path in $G$. Then, $w_0, v_{e_1}, w_1, \dots, v_{e_\ell}, w_\ell$ is a path in $S(G)$ of length $2\ell$, where $e_i = \{w_{i-1}, w_i\} \in E(G)$ for $i=1, \dots, \ell$. Thus, $\ell' \ge 2\ell$ and so $\mathcal{LM}(G, x)$ has at least $\ell$ distinct positive roots. + +Now, assume that $\delta(G) \ge 2$. This assumption allows us to consider a vertex $w' \in N_G(w_0) \setminus \{w_1\}$. Then, $S(G)$ contains the path $v_{e'}$, $w_0$, $v_{e_1}$, $w_1$, $\dots$, $v_{e_\ell}$, $w_\ell$ of length $2\ell + 1$, where $e' = \{w', w_0\} \in E(G)$. Therefore, $\ell' \ge 2\ell + 1$ and so $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil \ge \ell + 1$ distinct positive roots. This completes the proof. $\square$ + +**Remark 2.6.** The second statement in Theorem 2.5 implies that, if $G$ is a graph with a Hamilton cycle, then the zeros of $\mathcal{LM}(G, x)$ are all distinct. + +Given a graph $G$, it is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ if and only if $G$ is a forest [14]. Also, as we mentioned before, it is established that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Below, we present a general result which shows that the multivariate matching polynomial of a forest has a determinantal representation in terms of its adjacency matrix, which will be used in the next section. + +**Theorem 2.7.** Let $F$ be a forest.
Then $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$, where $\mathbf{X}_F$ is a diagonal matrix whose rows and columns are indexed by $V(F)$ and the $(v,v)$-entry is $x_v$ for any vertex $v \in V(F)$. In particular, $\mathcal{M}(F, x) = \varphi(A(F), x)$ and $\mathcal{LM}(F, x) = \varphi(L(F), x)$. + +*Proof.* We prove that $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$ by induction on $|E(F)|$. The equality is trivially valid if $|E(F)| = 0$. So, assume that $|E(F)| \ge 1$. As $F$ is a forest, we may consider two vertices $u, v \in V(F)$ with $N_F(u) = \{v\}$. Without loss of generality, we may assume that the first row and column of $A(F)$ correspond to $u$ and the second row and column of $A(F)$ correspond to $v$. Expanding the determinant of $\mathbf{X}_F - A(F)$ along its first row, we obtain by the induction hypothesis and Lemma 2.1 that + +$$ \begin{align*} +\det (\mathbf{X}_F - A(F)) &= x_u \det (\mathbf{X}_{F-u} - A(F-u)) - \det (\mathbf{X}_{F-u-v} - A(F-u-v)) \\ +&= x_u \mathfrak{M}(F-u, \mathbf{x}_{F-u}) - \mathfrak{M}(F-u-v, \mathbf{x}_{F-u-v}) \\ +&= \mathfrak{M}(F, \mathbf{x}_F), +\end{align*} $$ + +as desired. The 'in particular' statement immediately follows from (2.2) and (2.3). $\square$ +---PAGE_BREAK--- + +**Corollary 2.8.** For a tree $T$, the multiplicity of $0$ as a root of $\mathcal{LM}(T, x)$ is $1$. + +*Proof.* It is well known that the number of connected components of a graph $\Gamma$ is equal to the multiplicity of $0$ as a root of $\varphi(L(\Gamma), x)$ [3, Proposition 1.3.7]. So, the result follows from $\mathcal{LM}(T, x) = \varphi(L(T), x)$ which is given in Theorem 2.7. $\square$ + +## 3. THE LARGEST ZERO OF THE LAPLACIAN MATCHING POLYNOMIAL + +The purpose of this section is to investigate the location of the largest zero of the Laplacian matching polynomial.
We give a linear algebraic approach to study the largest zero of the Laplacian matching polynomial and present sharp upper and lower bounds on it. The assertions (1.5) and (1.7) are also proved in this section based on the linear algebraic approach. + +Let $G$ be a connected graph and $u \in V(G)$. Let $T(G, u)$ be the path-tree of $G$ with respect to the vertex $u$, as introduced in Section 1. Consider two vectors $\mathbf{x}_G = (x_v)_{v \in V(G)}$ and $\mathbf{x}_{T(G,u)} = (x_P)_{P \in V(T(G,u))}$ of indeterminates associated with $G$ and $T(G, u)$, respectively. For every vertex $P \in V(T(G, u))$, we may identify $x_P$ with $x_{v(P)}$ in which $v(P)$ is the terminal vertex of the path $P$ in $G$. In this way, $G$ and $T(G, u)$ will be equipped with two vectors consisting of the same indeterminates, which are simply denoted by **x** when there is no ambiguity. In what follows, for every subgraph $H$ of $G$ and vertex $u \in V(H)$, we denote by $D_G(T(H, u))$ the diagonal matrix whose rows and columns are indexed by $V(T(H, u))$ and the $(P, P)$-entry is $d_G(v(P))$. + +The univariate version of the following theorem, which is proved by Godsil [5], plays a key role in the theory of the matching polynomial. Notice that, for a graph $G$ and a vertex $u \in V(G)$, $u$ is a path in $G$ and the corresponding vertex in $T(G, u)$ will also be referred to as $u$. + +**Theorem 3.1 (Amini [1]).** Let $G$ be a connected graph and let $u \in V(G)$. Then + +$$ \frac{\mathfrak{M}(G - u, \boldsymbol{x})}{\mathfrak{M}(G, \boldsymbol{x})} = \frac{\mathfrak{M}(T(G, u) - u, \boldsymbol{x})}{\mathfrak{M}(T(G, u), \boldsymbol{x})}, $$ + +and moreover, $\mathfrak{M}(G, \boldsymbol{x})$ divides $\mathfrak{M}(T(G, u), \boldsymbol{x})$. + +For a connected graph $G$ and a vertex $u \in V(G)$, Theorem 3.1 and Theorem 2.7 yield that $\mathcal{M}(G, x)$ divides $\varphi(A(T(G, u)), x)$.
Since all roots of the characteristic polynomial of a symmetric matrix are real, the first statement in (1.1) is obtained as an application of Theorem 3.1. For the Laplacian matching polynomial, we get the following result. + +**Corollary 3.2.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then $\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})$ divides $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. In particular, $\varphi(D_G(T(G, u)) + A(T(G, u)), x)$ is divisible by $\mathcal{LM}(G, x)$ for every vertex $u \in V(G)$. + +*Proof.* By Theorem 3.1, we find that $\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})$ divides $\mathfrak{M}(T(H, u), x\mathbf{1}_H - \mathbf{d}_{G,H})$. It follows from Theorem 2.7 that + +$$ \begin{aligned} \mathfrak{M}(T(H, u), x\mathbf{1}_H - \mathbf{d}_{G,H}) &= \det (xI - D_G(T(H, u)) - A(T(H, u))) \\ &= \varphi(D_G(T(H, u)) + A(T(H, u)), x), \end{aligned} $$ + +which establishes what we require. Since $\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G) = \mathcal{LM}(G, x)$ using (2.3), the ‘in particular’ statement immediately follows. $\square$ + +**Remark 3.3.** The matrix $D_G(T(G, u)) + A(T(G, u))$, which appeared in Corollary 3.2, is a symmetric diagonally dominant matrix with nonnegative diagonal entries, so all of its eigenvalues are nonnegative real numbers. Hence, Corollary 3.2 gives us another proof of the fact that all roots of the Laplacian matching polynomial are real and nonnegative, which was also proved in Corollary 2.3. +---PAGE_BREAK--- + +It is well known that the largest zero of the matching polynomial of a graph is equal to the largest eigenvalue of the adjacency matrix of a path-tree of that graph. This fact is obtained by combining the Perron-Frobenius theorem [3, Theorem 2.2.1] and Theorems 2.7 and 3.1. The following theorem can be considered as an analogue of this fact. Indeed, it presents a linear algebra technique to treat the largest zero of the Laplacian matching polynomial.
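Before the theorem, the identity it states can be checked numerically on the triangle $C_3$: the path-tree $T(C_3, u)$ is a path on five vertices, every terminal degree $d_G(v(P))$ equals 2, and the largest eigenvalue of $D_G(T) + A(T)$ should match the largest root $2 + \sqrt{3}$ of $\mathcal{LM}(C_3, x) = (x-2)^3 - 3(x-2)$. The sketch below is plain Python with helper names of our own choosing (`path_tree`, `perron_value`); it is an illustration, not the paper's construction verbatim.

```python
import math

def path_tree(adj, u):
    """Vertices of T(G,u): paths in G starting at u; edges: maximal-proper-subpath relation."""
    paths = []
    def grow(p):
        paths.append(p)
        for w in adj[p[-1]]:
            if w not in p:
                grow(p + (w,))
    grow((u,))
    index = {p: i for i, p in enumerate(paths)}
    edges = [(index[p], index[p[:-1]]) for p in paths if len(p) > 1]
    return paths, edges

def perron_value(mat, iters=2000):
    """Largest eigenvalue of a nonnegative symmetric matrix by power iteration."""
    n = len(mat)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# C3 (the triangle): d_G(v) = 2 for every vertex v.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
paths, edges = path_tree(adj, 0)   # 5 paths: (0), (0,1), (0,2), (0,1,2), (0,2,1)
n = len(paths)
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    M[i][i] = 2.0                  # D_G entry: degree in G of the terminal vertex v(P)
for i, j in edges:
    M[i][j] = M[j][i] = 1.0
# The identity below: lambda(LM(C3,x)) = lambda(D_G(T) + A(T)) = 2 + sqrt(3).
assert abs(perron_value(M) - (2 + math.sqrt(3))) < 1e-6
```

Here $D_G(T) + A(T) = 2I + A(P_5)$, whose largest eigenvalue $2 + 2\cos(\pi/6) = 2 + \sqrt{3}$ also agrees with the equality case of Theorem 3.7 below for cycles.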
+ +**Theorem 3.4.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then + +$$ (3.1) \qquad \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})) = \lambda(D_G(T(H,u)) + A(T(H,u))). $$ + +In particular, $\lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u)))$. Also, the largest root of $\mathcal{LM}(G,x)$ has multiplicity 1. + +*Proof.* We prove (3.1) by induction on $|V(H)|$. Clearly, (3.1) is valid for $|V(H)| = 1$. Assume that $|V(H)| \ge 2$. We first show that + +$$ (3.2) \qquad \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) < \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). $$ + +To see (3.2), we apply Theorem 2.2 and (1.3) to get that + +$$ \begin{align*} \lambda(\mathfrak{M}(H-u, x^2\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) &= \lambda(\mathcal{M}(S(G)-W-u, x)) \\ &< \lambda(\mathcal{M}(S(G)-W, x)) \\ &= \lambda(\mathfrak{M}(H, x^2\mathbf{1}_H - \mathbf{d}_{G,H})), \end{align*} $$ + +where $W = V(G) \setminus V(H)$. This clearly proves (3.2). Now, let $N_H(u) = \{u_1, \dots, u_k\}$ and let $H_i$ be the connected component of $H-u$ containing $u_i$ for $i=1, \dots, k$. By the induction hypothesis, + +$$ (3.3) \qquad \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) = \lambda(D_G(T(H_i, u_i)) + A(T(H_i, u_i))) $$ + +for $i=1, \dots, k$. It is not hard to see that the block diagonal matrix whose $i$th diagonal block is $D_G(T(H_i, u_i)) + A(T(H_i, u_i))$, say $R$, is a principal submatrix of $D_G(T(H, u)) + A(T(H, u))$ of size $|V(T(H, u))|-1$. Hence, by the interlacing theorem [3, Corollary 2.5.2], it follows that $\lambda(R)$ is greater than or equal to the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$.
Further, it follows from (3.3) and (3.2) that + +$$ \begin{align*} \lambda(R) &= \max \left\{ \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) \middle| 1 \le i \le k \right\} \\ &= \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) \\ &< \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). \end{align*} $$ + +Thus, $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is strictly greater than the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. On the other hand, Corollary 3.2 implies that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is a zero of $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. So, we conclude that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is the largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. This completes the induction step and demonstrates that (3.1) holds. + +For the 'in particular' statement, note that (3.1) and (2.3) yield that + +$$ \lambda(D_G(T(G,u)) + A(T(G,u))) = \lambda(\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G)) = \lambda(\mathcal{LM}(G,x)), $$ + +and further, the connectedness of $G$ implies that $D_G(T(G,u)) + A(T(G,u))$ is an irreducible matrix with nonnegative entries, and consequently, its largest eigenvalue has multiplicity 1 by the Perron-Frobenius theorem [3, Theorem 2.2.1]. $\square$ +---PAGE_BREAK--- + +**Corollary 3.5.** Let $G$ be a connected graph and $u \in V(G)$. Then + +$$ (3.4) \qquad \lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) $$ + +with equality if and only if $G$ is a tree. + +*Proof.* We first recall the fact that a graph $\Gamma$ is bipartite if and only if $\varphi(L(\Gamma), x) = \varphi(Q(\Gamma), x)$ [3, Proposition 1.3.10]. For each $P \in V(T(G, u))$, we have $d_{T(G,u)}(P) \le d_G(v(P))$, where $v(P)$ is the terminal vertex of the path $P$ in $G$.
Therefore, $R = D_G(T(G, u)) + A(T(G, u)) - Q(T(G, u))$ has nonnegative entries, and thus, Theorem 3.4, the Perron-Frobenius theorem [3, Theorem 2.2.1], and the above-mentioned fact yield that + +$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &= \lambda(R + Q(T(G,u))) \\ &\ge \lambda(Q(T(G,u))) \\ &= \lambda(L(T(G,u))), \end{align} \tag{3.5} $$ + +proving (3.4). If $G$ is a tree, then $G$ is isomorphic to $T(G, u)$ and since $\mathcal{LM}(G,x) = \varphi(L(G), x)$ by Theorem 2.7, the equality in (3.4) is attained. Conversely, assume that the equality in (3.4) holds. Consequently, the equality in (3.5) occurs, and hence, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $R=0$. This means that $d_{T(G,u)}(P) = d_G(v(P))$ for each $P \in V(T(G, u))$. We assert that $G$ is a tree. Towards a contradiction, suppose that there is a cycle $C$ in $G$. As $G$ is connected, there is a path $P_1$ in $G$ which starts at $u$, none of whose internal vertices is on $C$, and with $v(P_1) \in V(C)$. Fix $w \in N_G(v(P_1)) \cap V(C)$ and let $P_2$ be the path on $C$ between $v(P_1)$ and $w$ whose length is more than 1. If $P$ is the path between $u$ and $w$ formed by $P_1$ and $P_2$, then it is clear that $d_{T(G,u)}(P) < d_G(v(P))$. This contradiction completes the proof. $\square$ + +In the following consequence, we give some lower bounds on the largest zero of the Laplacian matching polynomial. + +**Corollary 3.6.** Let $G$ be a connected graph. Then + +$$ \lambda(\mathcal{LM}(G,x)) \ge \max \left\{ \Delta(G) + 1, \delta(G) + \sqrt{\Delta(G)} \right\} $$ + +with equality if and only if $G$ is a star. + +*Proof.* Let $u \in V(G)$ be of degree $\Delta(G)$. Indeed, $d_{T(G,u)}(u) = d_G(u)$ and therefore $\Delta(T(G,u)) = \Delta(G)$. For each connected graph $\Gamma$, Proposition 3.9.3 of [3] states that $\lambda(L(\Gamma)) \ge \Delta(\Gamma) + 1$ with equality if and only if $\Delta(\Gamma) = |V(\Gamma)| - 1$.
By this fact and Corollary 3.5, we obtain that $\lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) \ge \Delta(T(G,u)) + 1 = \Delta(G) + 1$, and moreover, the equality $\lambda(\mathcal{LM}(G,x)) = \Delta(G) + 1$ holds if and only if $G$ is a star. + +For each connected graph $\Gamma$, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $\lambda(A(\Gamma)) \ge \sqrt{\Delta(\Gamma)}$ with equality if and only if $\Gamma$ is a star. Using this fact, Theorem 3.4, and the Weyl inequality [3, Theorem 2.8.1], we derive + +$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\ge \delta(G) + \lambda(A(T(G,u))) \\ &\ge \delta(G) + \sqrt{\Delta(T(G,u))} \\ &= \delta(G) + \sqrt{\Delta(G)}. \end{align} \tag{3.6} $$ +---PAGE_BREAK--- + +Suppose that the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ holds. So, the equality in (3.6) is attained, and thus, $T(G, u)$ is a star. This implies that $G$ is a star, and then, $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ forces that $|V(G)| \le 2$. Since the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ is valid for the stars $G$ on at most 2 vertices, the proof is complete. $\square$ + +In the following theorem, we establish (1.5), which slightly improves the second statement of Theorem 2.6 of [11]. + +**Theorem 3.7.** Let $G$ be a connected graph with $\Delta(G) \ge 2$ and let $\ell(G)$ be the length of the longest path in $G$. Then, + +$$ (3.7) \qquad \lambda(\mathcal{LM}(G, x)) \le \Delta(G) + 2\sqrt{\Delta(G)-1} \cos \frac{\pi}{2\ell(G)+2} $$ + +with equality if and only if $G$ is a cycle. + +*Proof.* For simplicity, let $\Delta = \Delta(G)$ and $\ell = \ell(G)$. For all positive integers $d$ and $k \ge 2$, the Bethe tree $B_{d,k}$ is a rooted tree with $k$ levels in which the root vertex is of degree $d$, the vertices on levels $2, \dots, k-1$ are of degree $d+1$, and the vertices on level $k$ are of degree 1.
By Theorem 7 of [13], + +$$ (3.8) \qquad \lambda(A(B_{d,k})) = 2\sqrt{d} \cos \frac{\pi}{k+1}. $$ + +Let $u \in V(G)$. It is not hard to check that $T(G, u)$ is isomorphic to a subgraph of $B_{\Delta-1,2\ell+1}$. For this, it is enough to correspond $u \in V(T(G, u))$ to an arbitrary vertex on level $\ell+1$ in $B_{\Delta-1,2\ell+1}$. By applying Theorem 3.4, the Weyl inequality [3, Theorem 2.8.1], the interlacing theorem [3, Corollary 2.5.2], and (3.8), we derive + +$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\le \lambda(D_G(T(G,u))) + \lambda(A(T(G,u))) \tag{3.9} \\ &\le \Delta + \lambda(A(B_{\Delta-1,2\ell+1})) \\ &= \Delta + 2\sqrt{\Delta-1} \cos \frac{\pi}{2\ell+2}, \end{align} $$ + +proving (3.7). Now, assume that the equality in (3.7) is achieved. Therefore, the equality in (3.9) occurs, and thus, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $T(G, u)$ is isomorphic to $B_{\Delta-1,2\ell+1}$. Since $\Delta \ge 2$, one can easily obtain that $G$ is a cycle. Conversely, if $G$ is a cycle, then $T(G, u)$ is a path on $2\ell+1$ vertices. By Theorem 3.4 and (3.8), we get + +$$ \lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u))) = 2 + \lambda(A(B_{1,2\ell+1})) = 2 + 2\cos \frac{\pi}{2\ell+2}. $$ + +This completes the proof. $\square$ + +Stevanović [15] proved that the eigenvalues of the adjacency matrix of a tree $T$ are less than $2\sqrt{\Delta(T)-1}$. The corollary below gives an improvement of this upper bound for the subdivision of trees. + +**Corollary 3.8.** Let $G$ be a graph with $\Delta(G) \ge 2$. Then + +$$ (3.10) \qquad \lambda(\mathcal{M}(S(G), x)) < 1 + \sqrt{\Delta(G)-1}. $$ + +In particular, if $F$ is a forest with $\Delta(F) \ge 2$, then $\lambda(A(S(F))) < 1 + \sqrt{\Delta(F)-1}$. +---PAGE_BREAK--- + +*Proof.* It follows from Theorem 3.7 that $\lambda(\mathcal{LM}(G, x)) < \Delta(G) + 2\sqrt{\Delta(G)-1}$. 
Moreover, it follows from Corollary 2.3 that $\lambda(\mathcal{M}(S(G), x)) = \sqrt{\lambda(\mathcal{LM}(G, x))}$. From these, we find that + +$$ \lambda(\mathcal{M}(S(G), x)) < \sqrt{\Delta(G) + 2\sqrt{\Delta(G) - 1}} = 1 + \sqrt{\Delta(G) - 1}, $$ + +proving (3.10). As the subdivision of a forest is a forest, the 'in particular' statement follows from Theorem 2.7 and (3.10). $\square$ + +**Remark 3.9.** Note that $\Delta(S(G)) = \Delta(G)$ for every graph $G$ with $\Delta(G) \ge 2$. So, for the subdivision of a graph with the maximum degree at least 2, the upper bound which appears in (3.10) is sharper than the upper bound that comes from (1.1). + +We demonstrated in Theorem 3.4 that the largest zero of the Laplacian matching polynomial has the multiplicity 1. In the following theorem, we prove the remaining statements of (1.7) as analogues of the results given in (1.3). + +**Theorem 3.10.** Let $G$ be a graph and let $n = |V(G)|$. For each edge $e \in E(G)$, the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$ interlace in the sense that, if $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ are respectively the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$. Also, if $G$ is connected, then $\lambda(\mathcal{LM}(G, x)) > \lambda(\mathcal{LM}(H, x))$ for any proper subgraph $H$ of $G$. + +*Proof.* Fix an edge $e \in E(G)$ and denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ be the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, respectively. 
Corollary 2.3 yields that $\sqrt{\alpha_1} \le \dots \le \sqrt{\alpha_n}$ is the end part of the nondecreasing sequence which consists of all the zeros of $\mathcal{M}(S(G), x)$ and $\sqrt{\beta_1} \le \dots \le \sqrt{\beta_n}$ is the end part of the nondecreasing sequence which consists of all the zeros of $\mathcal{M}(S(G-e), x)$. As $S(G-e) = S(G) - v_e$, it follows from (1.3) that the zeros of $\mathcal{M}(S(G), x)$ and $\mathcal{M}(S(G-e), x)$ interlace. So, we find that + +$$ \sqrt{\beta_1} \le \sqrt{\alpha_1} \le \sqrt{\beta_2} \le \sqrt{\alpha_2} \le \dots \le \sqrt{\beta_n} \le \sqrt{\alpha_n} $$ + +which means that $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$, as desired. + +Now, assume that $G$ is connected. Let $H$ be a proper subgraph of $G$ and let $u \in V(H)$. As $T(H, u)$ is a proper subgraph of $T(G, u)$, if $R$ denotes the submatrix of $D_G(T(G, u)) + A(T(G, u))$ corresponding to the vertices in $V(T(H, u))$, then $R - (D_H(T(H, u)) + A(T(H, u)))$ is a nonzero matrix with nonnegative entries. So, by applying Theorem 3.4 and the Perron-Frobenius theorem [3, Theorem 2.2.1], we get + +$$ +\begin{align*} +\lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ +&> \lambda(R) \\ +&> \lambda(D_H(T(H,u)) + A(T(H,u))) \\ +&= \lambda(\mathcal{LM}(H,x)). \quad \square +\end{align*} +$$ + +**Remark 3.11.** For every graph $G$ and real number $\alpha$, let $m_G(\alpha)$ denote the multiplicity of $\alpha$ as a root of $\mathcal{LM}(G,x)$. As a consequence of Theorem 3.10, we have $|m_G(\alpha) - m_{G-e}(\alpha)| \le 1$ for each edge $e \in E(G)$. + +It is known that among all trees with a fixed number of vertices, the path has the smallest value of the largest Laplacian eigenvalue [12]. The following result can be considered as an analogue of this fact and is obtained from Theorems 2.7 and 3.10. + +**Corollary 3.12.** Let $P_n$ and $K_n$ be the path and complete graph on $n$ vertices, respectively.
For any connected graph $G$ on $n$ vertices that is neither $P_n$ nor $K_n$, + +$$ \lambda(\mathcal{LM}(P_n, x)) < \lambda(\mathcal{LM}(G, x)) < \lambda(\mathcal{LM}(K_n, x)). $$ +---PAGE_BREAK--- + +## 4. CONCLUDING REMARKS + +In this paper, we have established some properties of the location of zeros of the Laplacian matching polynomial. Most of our results can be considered as analogues of known results on the matching polynomial. Compared to the matching polynomial, the Laplacian matching polynomial encodes not only the sizes of matchings in the graph but also the vertex degrees of the graph. Hence, it seems that the Laplacian matching polynomial reflects more structural properties of graphs than the matching polynomial does. For instance, for a connected graph $G$, $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a tree, while $0$ is a root of $\mathcal{M}(G,x)$ if and only if $G$ has no perfect matching. + +Further interesting properties of the Laplacian matching polynomial deserve to be investigated. For example, one may focus on the multiplicities of zeros of the Laplacian matching polynomial, as there are many results on the multiplicities of zeros of the matching polynomial. In view of Remark 3.11, for every graph $G$ and real number $\alpha$, one may divide $E(G)$ into three subsets based on how the multiplicity of $\alpha$ changes when an edge of $G$ is removed. The corresponding problem for the matching polynomial is investigated by Ku and Chen [8]. Also, it is known that the multiplicity of a zero of the matching polynomial is at most the path partition number of the graph, that is, the minimum number of vertex-disjoint paths required to cover all the vertices of the graph [4, Theorem 6.4.5]. It seems to be an interesting problem to find a sharp upper bound on the multiplicity of a zero of the Laplacian matching polynomial. + +## REFERENCES + +[1] N.
Amini, Spectrahedrality of hyperbolicity cones of multivariate matching polynomials, Journal of Algebraic Combinatorics 50 (2019) 165–190. + +[2] J.A. Bondy, U.S.R. Murty, Graph Theory, Graduate Texts in Mathematics, Volume 244, Springer, New York, 2008. + +[3] A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Springer, New York, 2012. + +[4] C.D. Godsil, Algebraic Combinatorics, Chapman and Hall Mathematics Series, Chapman & Hall, New York, 1993. + +[5] C.D. Godsil, Matchings and walks in graphs, Journal of Graph Theory 5 (1981) 285–297. + +[6] C.D. Godsil, I. Gutman, On the theory of the matching polynomial, Journal of Graph Theory 5 (1981) 137–144. + +[7] O.J. Heilmann, E.H. Lieb, Theory of monomer-dimer systems, Communications in Mathematical Physics 25 (1972) 190–232. + +[8] C.Y. Ku, W. Chen, An analogue of the Gallai–Edmonds structure theorem for non-zero roots of the matching polynomial, Journal of Combinatorial Theory—Series B 100 (2010) 119–127. + +[9] J.A. Makowsky, E.V. Ravve, N.K. Blanchard, On the location of roots of graph polynomials, European Journal of Combinatorics 41 (2014) 1–19. + +[10] A.W. Marcus, D.A. Spielman, N. Srivastava, Interlacing families I: Bipartite Ramanujan graphs of all degrees, Annals of Mathematics—Second Series 182 (2015) 307–325. + +[11] A. Mohammadian, Laplacian matching polynomial of graphs, Journal of Algebraic Combinatorics 52 (2020) 33–39. + +[12] M. Petrović, I. Gutman, The path is the tree with smallest greatest Laplacian eigenvalue, Kragujevac Journal of Mathematics 24 (2002) 67–70. + +[13] O. Rojo, M. Robbiano, An explicit formula for eigenvalues of Bethe trees and upper bounds on the largest eigenvalue of any tree, Linear Algebra and its Applications 427 (2007) 138–150. + +[14] H. Sachs, Beziehungen zwischen den in einem Graphen enthaltenen Kreisen und seinem charakteristischen Polynom, Publicationes Mathematicae Debrecen 11 (1964) 119–134. + +[15] D. 
Stevanović, Bounding the largest eigenvalue of trees in terms of the largest vertex degree, Linear Algebra and its Applications 360 (2003) 35–42. + +[16] W. Yan, Y.-N. Yeh, On the matching polynomial of subdivision graphs, Discrete Applied Mathematics 157 (2009) 195–200. + +[17] Y. Zhang, H. Chen, The average Laplacian polynomial of a graph, Discrete Applied Mathematics 283 (2020) 737–743. \ No newline at end of file diff --git a/samples/texts_merged/213815.md b/samples/texts_merged/213815.md new file mode 100644 index 0000000000000000000000000000000000000000..e1424766a2e4e6d709f1bb4b1ecef5aa4ee97e00 --- /dev/null +++ b/samples/texts_merged/213815.md @@ -0,0 +1,271 @@ + +---PAGE_BREAK--- + +Design and Performance of a 24 GHz Band FM-CW
Radar System and Its Application + +Kazuhiro Yamaguchi\*, Mitsumasa Saito\†, Kohei Miyasaka\* and Hideaki Matsue\* + +\* Tokyo University of Science, Suwa + +‡ CQ-S net Inc., Japan + +Email: yamaguchi@rs.tus.ac.jp, matsue@rs.suwa.tus.ac.jp, saitoh@kpe.biglobe.ne.jp + +*Abstract*—This paper describes the design and performance of an FM-CW (Frequency Modulated Continuous Wave) radar system using the 24 GHz band. The principle of measuring the distance and the small displacement of a target object is described, and a differential detection method for detecting only the target is proposed for environments in which multiple objects are located. In computer simulations, the basic performance of the FM-CW radar system is analyzed in terms of the distance resolution and the error value for various sampling times and sweep bandwidths. Furthermore, the FM-CW radar system with the proposed differential detection method can clearly detect only the target object in a multiple-object environment, and a small displacement within 3.11 mm can be measured. In experiments, the performance in measuring distance and displacement with the designed 24 GHz FM-CW radar system is described.
As a result, it is confirmed that the 24 GHz FM-CW radar system with the proposed differential detection method is effective for measuring a target in environments in which multiple objects are located. + +Fig. 1. Sawtooth frequency modulation. + +I. INTRODUCTION + +Radar systems in the 24 GHz band are based on ARIB standard T73 [1] as sensors for detecting or measuring mobile objects for specified low power radio stations. The 24 GHz band radar system can be applied in various fields, such as security and medical imaging, in both indoor and outdoor environments. Various radar systems have been proposed [2], [3], [4], [5]. The pulsed radar system measures the time between when the signal is transmitted and when it is received. The pulsed radar can detect the distance in the far field; however, a target in the near field cannot be detected correctly. The Doppler radar system measures the frequency difference between the reflected and transmitted signals. The Doppler radar can detect the moving velocity of the target; however, the distance of the target cannot be detected. The FM-CW (Frequency-Modulated Continuous-Wave) radar system [6], [7] is the most widely used for detecting the distance of a target object in the near field and the small displacement of the target. + +In this paper, we develop a 24 GHz FM-CW radar system for measuring the distance and displacement of an object when the object is static or moves very slowly. The basic performance of the 24 GHz FM-CW radar system for measuring a target object is analyzed by computer simulation. Moreover, we propose a differential detection method for signal processing in the FM-CW radar system in order to detect only the target object in environments in which multiple objects are located. Furthermore, an example application of the 24 GHz FM-CW radar system is shown experimentally. + +This paper consists of the following sections. Section II describes the principle of an FM-CW radar system.
Section III describes and analyzes the basic performance and the proposed differential detection method in computer simulations. Section IV shows the experimental results with the 24 GHz FM-CW radar system. Finally, Section V concludes this paper. + +II. PRINCIPLE OF FM-CW RADAR + +FM-CW (Frequency-Modulated Continuous-Wave) radar is a radar that transmits a continuous carrier modulated by a periodic function, such as a sawtooth wave, to provide range data, as shown in Fig. 1. Fig. 2 shows the block diagram of an FM-CW radar system [8]. + +In the FM-CW radar system, the frequency-modulated signal from the VCO is transmitted from the transmitter Tx, and the signals reflected from the targets are received at the receiver Rx. The transmitted and received signals are multiplied by a mixer, and beat signals are generated as the product of the two signals. The beat signal passes through a low pass filter, and an output signal is obtained. In this process, the frequency of the input signal is varied with time at the VCO. The modulation waveform has a linear sawtooth pattern [9], as shown in Fig. 1. This figure illustrates the frequency-time relation in the FM-CW radar; the red line denotes the transmitted signal and the blue line denotes the received signal. Here, $f_0$ denotes the center frequency, $f_w$ denotes the frequency bandwidth of the sweep, and $t_w$ denotes the period of the sweep. + +We define the transmitting signal $V_T(f, x)$ at the transmitter Tx in Fig. 2 as + +$$ +V_{\mathrm{T}}(f,x)=A e^{j \frac{2 \pi f}{c} x}, +\quad(1) +$$ +---PAGE_BREAK--- + +Fig. 2. Block diagram of a FM-CW radar system. + +where *f* denotes the frequency at a given time, *x* denotes the distance between a target and the transmitter, *A* denotes an amplitude value, and *c* denotes the speed of light. + +The reflected signal $V_R(f, x)$ at the receiver Rx in Fig.
2 is represented as + +$$ V_R(f, x) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k - x)} , \quad (2) $$ + +where $\gamma_k$ and $\varphi_k$ are the reflectivity coefficients for amplitude and phase of the $k$th target, respectively, $\alpha_k$ denotes the amplitude coefficient for the transmission loss from the $k$th target, and $d_k$ is the distance between the transmitter and the $k$th target. + +Here, at the receiver, whose position is $x = 0$, Eq. (2) is rewritten as + +$$ V_R(f, 0) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k)} . \quad (3) $$ + +The beat signal is generated by multiplying the transmitted signal in Eq. (1) and the received signal in Eq. (3) at the position $x = 0$. After the LPF, the output signal $V_{\text{out}}(f, 0)$ is given by + +$$ V_{\text{out}}(f, 0) = \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} . \quad (4) $$ + +By signal processing, the distance and the displacement of the target are obtained from the output signal in Eq. (4). By using the Fourier transform, the distance spectrum of the output signal, $P(x)$, is calculated as follows: + +$$ +\begin{align} +P(x) &= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} V_{\text{out}} e^{-j \frac{4\pi f}{c} x} df \nonumber \\ +&= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} e^{-j \frac{4\pi f x}{c}} df \nonumber \\ +&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} e^{j \frac{4\pi f (d_k - x)}{c}} df \nonumber \\ +&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} . \tag{5} +\end{align} +$$ + +The amplitude value of the distance spectrum $|P(x)|$ in Eq.
(5) is given as + +$$ +\begin{aligned} +|P(x)| &= A^2 \left| \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right| \\ +&\leq A^2 f_w \sum_{k=1}^{K} \alpha_k \gamma_k \left| \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right|, \quad (6) +\end{aligned} +$$ + +and we have equality if and only if the phase components $\varphi_k + \frac{4\pi f_0 (d_k - x)}{c}$ are equal for all $k$. + +Here, we assume that there is a single target, $K = 1$. The distance spectrum in Eq. (5) is then rewritten as + +$$ P(x) = A^2 \alpha_1 \gamma_1 e^{j \varphi_1} e^{j \frac{4\pi f_0 (d_1 - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_1 - x)}{c}\right\}}{\frac{2\pi f_w (d_1 - x)}{c}}, \quad (7) $$ + +and the amplitude value of the distance spectrum is given as + +$$ |P(x)| = A^2 \alpha_1 \gamma_1 f_w \left| \frac{\sin\left\{\frac{2\pi f_w (d_1-x)}{c}\right\}}{\frac{2\pi f_w (d_1-x)}{c}} \right|. \quad (8) $$ + +This equation indicates that the distance of the target is obtained from the amplitude value of the distance spectrum. + +The phase value of the distance spectrum, $\angle P(x)$, is represented as + +$$ \angle P(x) = \varphi_1 + \frac{4\pi f_0 (d_1 - x)}{c} = \theta_1(x) . \quad (9) $$ + +Here, $\theta_1(x)$ satisfies $-\pi \leq \theta_1(x) \leq \pi$, so the displacement of the target satisfies + +$$ \frac{c(-\pi - \varphi_1)}{4\pi f_0} \leq d_1 \leq \frac{c(\pi - \varphi_1)}{4\pi f_0} . \quad (10) $$ + +If the phase value satisfies $\varphi_1 = 0$, Eq. (10) becomes $-3.11 [\text{mm}] \leq d_1 \leq +3.11 [\text{mm}]$ with $f_0 = 24.15 [\text{GHz}]$. That is, a small displacement of the target within $\pm 3.11 [\text{mm}]$ is obtained from the phase value of the distance spectrum. +---PAGE_BREAK--- + +TABLE I. PARAMETERS IN COMPUTER SIMULATIONS + +
| Parameters | Value |
| --- | --- |
| Center frequency | 24.15 GHz |
| Bandwidth | 50, 100, 200, 400 MHz |
| Sweep time | 1024 µs |
| Sampling time of sweep | 0.1, 1, 10 µs |
| Number of FFT points | 4096 |
| Window function | Hamming |
+ +Fig. 3. Resolution for distance spectrum according to sweep bandwidth. + +On the other hand, the maximum distance for measuring, $d_{\max}$, is + +$$ +\begin{aligned} +\Delta f &= \frac{f_w}{t_w/t_s} [\text{Hz}] \, , \\ +d_{\max} &= \frac{c}{4\Delta f} [\text{m}] \, , +\end{aligned} +\quad (11) $$ + +where $t_w$ denotes the sweep time and $t_s$ denotes the sampling interval. For example, in the case with $t_w = 1024$ [µs] and $t_s = 1$ [µs], the maximum distance is $d_{\max} = 384$ [m]. + +III. COMPUTER SIMULATION + +A. Basic Performance + +First, we describe the basic performance of the 24 GHz band FM-CW radar. The parameters for the computer simulations are listed in Table I. The center frequency is 24.15 GHz, and the bandwidths are 50, 100, 200, and 400 MHz. Note that the 400 MHz bandwidth is used only in the computer simulations because of the standards in the Radio Law in Japan. The sweep time is 1024 µs, the sampling times of the sweep are 0.1, 1, and 10 µs, the number of FFT points is 4096, and the Hamming window is adopted as the window function in the signal processing. + +We assume that a static target is located at 10 m from the transmitter and receiver, and the distance spectra are output for various parameters. Fig. 3 shows the amplitude value of the distance spectrum versus measured distance for various sweep bandwidths. The result shows that the sweep bandwidth influences the distance resolution and that a wider bandwidth can improve the resolution. In the case with $t_s = 1$ µs, the distance resolutions with $f_w = 50, 100, 200, 400$ MHz are ±5, ±1.5, ±1, ±0.5 m, respectively. + +Fig. 4. Error value for distance spectrum according to sampling interval. + +Fig. 5. Distance spectrum for measuring moving target. + +Fig. 4 shows the amplitude value of the distance spectrum versus measured distance for various sampling times. The result shows that the sampling interval influences the error of the measured distance and that a shorter sampling interval can reduce the error value of the distance.
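The example numbers for Eq. (11) can be checked directly. The sketch below uses our own variable names and assumes $f_w = 200$ MHz (the sweep bandwidth is not stated in the example itself), which reproduces the quoted $d_{\max} = 384$ m:

```python
# Maximum measurable distance of the FM-CW radar, following Eq. (11).
# delta_f is the frequency swept during one sampling interval.
c = 3.0e8        # speed of light [m/s]
f_w = 200e6      # sweep bandwidth [Hz] (assumed value)
t_w = 1024e-6    # sweep time [s]
t_s = 1e-6       # sampling interval [s]

delta_f = f_w / (t_w / t_s)   # frequency step per sample [Hz], ~195.3 kHz here
d_max = c / (4 * delta_f)     # maximum unambiguous distance [m]

print(round(d_max, 6))   # 384.0
```

A wider sweep bandwidth or a coarser sampling interval increases $\Delta f$ and therefore shrinks the unambiguous range.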
In the case with $f_w = 200$ MHz, the error value of the measured distance with $t_s = 10$ µs is about 0.5 m. + +Fig. 5 shows the result of measuring a slowly moving target with $f_w = 200$ MHz and $t_s = 1$ µs. The target moved from 10 m to 20 m at intervals of 0.5 m. +---PAGE_BREAK--- + +Fig. 6. Measured displacement. + +Fig. 5(a) shows the amplitude value versus measured distance versus target distance in a 3-dimensional view, and Fig. 5(b) shows the measured distance versus target distance in a 2-dimensional view. The color in (b) corresponds to the strength of the amplitude value in (a). From these figures, it is confirmed that the distance is measured correctly according to the positions of the moving target. + +Fig. 6 shows the result of measuring a target with a small displacement; the measured displacement versus target displacement is output. The object is located at 10 m from the receiver and moved from -5 mm to 5 mm at intervals of 0.1 mm. The small displacement can be measured from the phase value of the distance spectrum, and the measured displacement corresponds to the target displacement. Note that the measured displacement is a relative displacement and does not correspond to the absolute distance between the receiver and the target object. A small displacement within ±3.11 mm is correctly measured with the parameters of the FM-CW radar system in this paper; however, a displacement of more than ±3.11 mm is ambiguous. + +## B. Proposed target detection + +As mentioned in the previous subsection, the FM-CW radar system can measure the distance and the small displacement of one target object. However, it is a special case that only the reflected signal from a single target is received at the receiver. In general, the receiver may receive reflected signals from many objects.
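The ±3.11 mm limit used above follows directly from the phase relation in Eq. (9); a minimal numerical check (our own variable names, assuming $\varphi_1 = 0$, $x = 0$, and $c = 3 \times 10^8$ m/s):

```python
import math

c = 3.0e8        # speed of light [m/s]
f0 = 24.15e9     # center frequency [Hz]

def displacement_from_phase(theta):
    """Invert Eq. (9) with phi_1 = 0 and x = 0: theta = 4*pi*f0*d/c."""
    return c * theta / (4 * math.pi * f0)

# The phase is only known within [-pi, pi], so the unambiguous
# displacement range is +/- c/(4*f0), i.e. a quarter wavelength:
d_limit = displacement_from_phase(math.pi)
print(round(d_limit * 1e3, 2))   # 3.11 (mm), matching Eq. (10)
```

Any true displacement beyond this quarter-wavelength window wraps around in phase, which is the ambiguity noted for Fig. 6.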
Therefore, when there are other objects besides the target, signal processing that detects the distance spectrum of only the target is required. + +The proposed method removes the signals from the other objects by using a differential detection of the distance spectrum. Fig. 7 shows the distance spectrum when the target object moves from 10 m to 20 m and the other objects are located at 15 m and 20 m. The transmitted signal is reflected by the target and the other objects, so the receiver receives several reflected signals. Therefore, the distance spectra of the other objects are also generated by the FM-CW radar system in Fig. 7(a), and the distance spectrum of the target cannot be detected clearly. In particular, when the reflection coefficient of the target is lower than that of the other objects, the distance spectrum of another object has a higher amplitude value than that of the target. + +Fig. 7. Distance spectrum for measuring moving target distance with / without the differential detection under environments in which multiple objects are located. + +In the proposed differential detection, at first, the distance spectrum of the other objects, $P_0$, is generated beforehand, as in Fig. 7(a). Then, $P_0$ is subtracted from the distance spectrum $P$ of the target and the other objects. By this differential detection, a distance spectrum with the other objects removed is generated as $P-P_0$, so only the distance spectrum of the desired target is detected. Fig. 7(b) shows the distance spectrum obtained with the proposed differential detection method, and the distance spectrum of the target is correctly measured. Comparing the measured distance spectra in Fig. 7(a) and (b), it is clearly confirmed that the proposed method can detect the target distance by using the differential detection.
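The differential detection can be illustrated with a small simulation of the spectrum model in Eqs. (4)-(5). This is our own sketch, not the authors' code; all distances, amplitudes, and grid sizes are assumed. A weak target at 12 m is masked by stronger background objects at 15 m and 20 m in $|P|$, but stands out in $|P - P_0|$:

```python
import cmath

c = 3.0e8
f0, f_w = 24.15e9, 200e6
N = 128                                             # frequency samples per sweep
freqs = [f0 - f_w / 2 + f_w * i / (N - 1) for i in range(N)]
xs = [0.1 * i for i in range(301)]                  # candidate distances 0..30 m

def spectrum(targets):
    """Complex P(x): the echoes of Eq. (4) summed over the sweep, as in Eq. (5)."""
    return [sum(a * cmath.exp(1j * 4 * cmath.pi * f * (d - x) / c)
                for f in freqs for (d, a) in targets)
            for x in xs]

def peak(p):
    return xs[max(range(len(p)), key=lambda i: abs(p[i]))]

p0 = spectrum([(15.0, 1.0), (20.0, 1.0)])              # background objects only
p = spectrum([(12.0, 0.3), (15.0, 1.0), (20.0, 1.0)])  # weak target added at 12 m

peak_raw = peak(p)                                  # a stronger background peak wins
peak_diff = peak([a - b for a, b in zip(p, p0)])    # ~12.0 m: the target is isolated
```

Because the spectrum model is linear in the echoes, $P - P_0$ equals the spectrum of the target alone, which is why the subtraction isolates it exactly in this idealized setting.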
The proposed differential detection can effectively detect the distance of a moving or static target among multiple reflections from static background objects. + +# IV. EXPERIMENTS + +In order to evaluate the effectiveness of the proposed method for detecting the target distance and displacement, we developed an FM-CW radar system and carried out experiments with the radar system in an actual environment. Table II lists the parameters. The developed FM-CW radar system obtained a certificate of conformity with the technical regulations in +---PAGE_BREAK--- + +TABLE II. PARAMETERS IN EXPERIMENTS + +
| Parameters | Value |
| --- | --- |
| Center frequency $f_0$ | 24.15 GHz |
| Sweep bandwidth $f_w$ | 200 MHz |
| Sweep time $t_w$ | 1024 μs |
| Sampling time of sweep $t_s$ | 1 μs |
| Transmitter power output | 0.007 W |
| Antenna gain | 11 dBi |
| Range of distance | 0 - 100 m |
| Range of relative displacement | ±3.11 mm |
+ +Fig. 8. Distance spectrum for measuring moving target distance with / without the differential detection. + +Article 38-6 Paragraph 1 of the Radio Law in Japan, and the developed FM-CW radar system conforms to ARIB standard T73 in Japan [1]. + +## A. Distance Spectrum + +Fig. 8 shows the distance spectrum of a moving target. A person walked away from the FM-CW radar and then came closer, between 2 [m] and 10 [m]. In Fig. 8(a), several distance spectra of the person and the background objects are output. The distance spectrum of the moving person is not clearly detected in Fig. 8(a). In order to detect the distance spectrum of the moving person with the differential detection method, the distance spectrum without the person is measured beforehand. By generating the distance spectrum of the background objects beforehand, the distance spectrum of the moving person is correctly detected in Fig. 8(b) with the proposed differential detection. Therefore, the FM-CW radar system can effectively measure the movement of the target person. + +Fig. 9 shows the result of measuring the small displacement of human breathing. The movement of the person's chest is measured within the range of relative small displacement. In Fig. 9, it is detected that the period of breathing is about 4 [s] and the breathing movement is within about ±2 [mm]. + +## B. Example for application + +Finally, we show an example application of the 24 GHz FM-CW radar system. Fig. 10 shows a setup of the FM-CW radar system for detecting human breathing in an actual environment. The FM-CW radar satisfies the safety guideline, and the details of the safety guideline are described in the Appendix. + +Fig. 11 shows the example of detecting human breathing. + +Fig. 9. Displacement for measuring the movement of human breathing. + +Fig. 10. Setup of FM-CW Radar for detecting human breathing. + +Fig. 11. Example of application. +---PAGE_BREAK--- + +The distance spectrum in this example is measured in the following flow.
+ +1) Measuring the distance spectrum without any person. + +2) A person comes to the bed. The radar receives signals from the person's body. + +3) The person lies asleep on the bed. The radar detects the person's breathing movement. + +By generating the distance spectrum of the background objects without the person beforehand, only the distance spectrum of the person is detected. When the person comes within the range of the radar, the radar system detects the signals reflected from the person, and the distance spectra of the person's body are detected. After the person lies on the bed, the radar system can detect the small displacement caused by the person's breathing movement. By using the differential detection method, the distance and the small displacement of the moving object are clearly detected. + +## V. CONCLUSION + +In this paper, the design and performance of a 24 GHz band FM-CW radar system are described. In computer simulations, the basic performance of the FM-CW radar system is analyzed in terms of the distance resolution and the error value according to the sweep bandwidth and the sampling interval, respectively. Moreover, a differential detection method for detecting only the target object is proposed for measuring the distance and the displacement of the target in environments in which multiple objects are located. In experiments, the distance spectrum of the target object is clearly detected by using the differential detection method in environments in which multiple objects are located. Furthermore, an example application for detecting a person's breathing movement is shown. As a result, the 24 GHz FM-CW radar with the proposed differential detection method effectively detects the distance and the small displacement in environments in which multiple objects are located. + +## ACKNOWLEDGMENT + +A part of this work was supported by “Ashita wo Ninau Kanagawa Venture Project” of Kanagawa in Japan. + +The authors appreciate Prof.
Toshio Nojima at Hokkaido University in Japan for his valuable advice on analyzing the safety properties of the developed FM-CW radar system according to the safety guideline. + +## REFERENCES + +[1] ARIB STD-T73 Rev. 1.1, *Sensors for Detecting or Measuring Mobile Objects for Specified Low Power Radio Station*, Association of Radio Industries and Businesses Std. + +[2] S. Miyake and Y. Makino, "Application of millimeter-wave heating to materials processing (special issue; recent trends on microwave and millimeter wave application technology)," *IEICE Transactions on Electronics*, vol. 86, no. 12, pp. 2365-2370, Dec. 2003. + +[3] M. Skolnik, *Introduction to Radar Systems*. McGraw Hill, 2003. + +[4] S. Fujimori, T. Uebo, and T. Iritani, "Short-range high-resolution radar utilizing standing wave for measuring of distance and velocity of a moving target," *Electronics and Communications in Japan Part I: Communications*, vol. 89, no. 5, pp. 52-60, 2006. + +[5] T. Uebo, Y. Okubo, and T. Iritani, "Standing wave radar capable of measuring distances down to zero meters," *IEICE Transactions on Communications*, vol. 88, no. 6, pp. 2609-2615, Jun. 2005. + +[6] T. Saito, T. Ninomiya, O. Isaji, T. Watanabe, H. Suzuki, and N. Okubo, "Automotive FM-CW radar with heterodyne receiver," *IEICE Transactions on Communications*, vol. 79, no. 12, pp. 1806-1812, Dec. 1996. + +[7] W. Butler, P. Poitevin, and J. Bjomholt, "Benefits of wide area intrusion detection systems using FMCW radar," in *Security Technology, 2007 41st Annual IEEE International Carnahan Conference on*, Oct. 2007, pp. 176-182. + +[8] M. Skolnik, *Radar Handbook, Third Edition*. McGraw-Hill Education, 2008. + +[9] W. Sediono and A. Lestari, "2D image reconstruction of radar INDERA," in *Mechatronics (ICOM), 2011 4th International Conference On*, May 2011, pp. 1-4.
+ +[10] C95.1-2005, *IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz*, IEEE Std. + +[11] Ministry of Internal Affairs and Communications. [Online]. Available: http://www.tele.soumu.go.jp/resource/j/material/dwn/guide38.pdf + +# APPENDIX + +In general, electromagnetic waves must satisfy the guidelines on human exposure to electromagnetic fields, which have been instituted by various organizations. IEEE C95.1 in the USA [10] and the ICNIRP guidelines in Europe are examples, and the MIC has also instituted a guideline in Japan [11]. + +The 24 GHz FM-CW radar developed in this paper has the following properties. The power of the transmitter is 7 [mW], the transmitting antenna gain is 11 [dBi], the effective radiated power is 88 [mW], the radiation angle of the transmitting wave is about 50 [degrees], and the distance between the transmitter and the human is 2.5 [m]. According to the radar equation, the electric field strength $E$ and the power density $P$ on the human body are calculated as + +$$ +\begin{aligned} +E &= \frac{\sqrt{30 \times 0.088}}{2.5} = 0.65 \text{ [V/m]} , \\ +P &= \frac{E^2}{Z_0} = \frac{0.65^2}{120\pi} = 1.12 \times 10^{-3} \text{ [W/m}^2\text{]} = 1.12 \times 10^{-4} \text{ [mW/cm}^2\text{]} . +\end{aligned} + $$ + +According to the guideline [11], these values must satisfy + +$$ +\begin{aligned} +&E \leq 61.4 \text{ [V/m]} , \\ +&P \leq 1 \text{ [mW/cm}^2\text{]} . +\end{aligned} + $$ + +Therefore, the developed 24 GHz FM-CW radar system in this paper sufficiently satisfies the conditions of the guideline.
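The appendix figures can be reproduced in a few lines (a sketch with our own variable names, using the far-field relation $E = \sqrt{30 \cdot \mathrm{ERP}}/d$ and the free-space impedance $Z_0 = 120\pi$ [Ω] as in the text):

```python
import math

p_tx = 0.007           # transmitter power [W]
gain_dbi = 11.0        # transmitting antenna gain [dBi]
d = 2.5                # distance between transmitter and human [m]

erp = p_tx * 10 ** (gain_dbi / 10)           # effective radiated power [W], ~0.088
e_field = math.sqrt(30 * erp) / d            # electric field strength [V/m]
p_density = e_field ** 2 / (120 * math.pi)   # power density [W/m^2]
p_mw_cm2 = p_density * 0.1                   # 1 W/m^2 = 0.1 mW/cm^2

print(round(e_field, 2))   # 0.65
# Both values are far below the guideline limits of 61.4 V/m and 1 mW/cm^2.
```

The last conversion line makes explicit the unit change from W/m² to mW/cm² that the appendix applies implicitly.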
\ No newline at end of file diff --git a/samples/texts_merged/250922.md b/samples/texts_merged/250922.md new file mode 100644 index 0000000000000000000000000000000000000000..954c6cfa7fa614479afb04bad4d1f2c84a1c8172 --- /dev/null +++ b/samples/texts_merged/250922.md @@ -0,0 +1,893 @@ + +---PAGE_BREAK--- + +# Sequential persuasion + +FEI LI + +Department of Economics, University of North Carolina Chapel Hill + +PETER NORMAN + +Department of Economics, University of North Carolina Chapel Hill + +This paper studies sequential Bayesian persuasion games with multiple senders. We provide a tractable characterization of equilibrium outcomes. We apply the model to study how the structure of consultations affects information revelation. Adding a sender who moves first cannot reduce informativeness in equilibrium and results in a more informative equilibrium in the case of two states. Moreover, with the exception of the first sender, it is without loss of generality to let each sender move only once. Sequential persuasion cannot generate a more informative equilibrium than simultaneous persuasion and is always less informative when there are only two states. + +KEYWORDS. Bayesian persuasion, communication, competition in persuasion, multiple senders, sequential persuasion. + +JEL CLASSIFICATION. D82, D83. + +## 1. INTRODUCTION + +This paper studies a canonical model of Bayesian persuasion with multiple senders in which senders disclose information sequentially. An uninformed decision maker seeks to maximize her state-dependent payoff. Also, many senders move in sequence, each constructing an experiment with a precision ranging from no information to full revelation of the state. Each sender observes the experiments designed by previous players when moving. + +Decision makers often must rely on outside experts to take informed actions. Sometimes multiple experts are consulted and then consultations are often sequential. 
Fei Li: lifei@email.unc.edu +Peter Norman: normanp@email.unc.edu + +We thank three anonymous referees for detailed comments. We also thank Yu Awaya, Gary Biglaiser, Yeon-Koo Che, Navin Kartik, Keiichi Kawai, Kyungmin (Teddy) Kim, Alexey Kushnir, Elliot Lipnowski, Shuo Liu, Giuseppe Lopomo, Laurent Mathevet, Stephen Morris, Vendela Norman, Luca Rigotti, Joel Sobel, and seminar participants at University of Hawaii at Manoa, the Einaudi Institute for Economics and Finance, Rome, and Collegio Carlo Alberto, Turin, as well as participants at the 2017 CETC Conference in Vancouver, the 2017 International Conference on Game Theory at Stony Brook, the 2017 Midwest Economic Theory Conference in Dallas, the 2018 Southern Economic Association Annual Meeting, the 2018 NSF/CEME Decentralization Conference at Duke, and the 2018 NSF/NBER/CEME Mathematical Economics Conference in Chicago for helpful comments. The usual disclaimer applies. +---PAGE_BREAK--- + +For example, in a recent lawsuit, Students for Fair Admissions claims that Harvard intentionally discriminates against Asian-American applicants.¹ Each party used an economist expert witness to analyze Harvard's admissions data and testify in court. Despite using the same data, the conclusions reached by the expert witnesses on each side were vastly different due to different statistical models. This example fits the Bayesian persuasion model well because experts were symmetrically informed and designed their own experiments. Furthermore, the consultations were truly sequential. Throughout the process, the expert on each side sequentially released rebuttals to reports made by the other side. Our model aims to understand how strategic considerations among experts shape information revelation in such settings. + +Instead of relying on the concavification approach popularized by Aumann and Maschler (1995) and Kamenica and Gentzkow (2011), we characterize equilibrium outcomes using linear algebra techniques.
Equilibrium conditions are expressed as incentive compatibility constraints and share a similar flavor as in Bergemann and Morris (2016). + +The first step in the equilibrium construction is to show that every subgame perfect equilibrium outcome can be supported using one-step equilibrium strategies. In a one-step equilibrium, the only player who provides information is the first sender to move. The preferences of the other senders matter, but instead of actually refining the information on the path, their preferences restrict what the first sender does through incentive compatibility constraints. This works also off the equilibrium path, so any equilibrium can be replicated by strategies that are one-step on and off the equilibrium path. + +Our second simplifying step is to show that only a finite set of vertex beliefs matter for the analysis. We assume a finite set of states and actions, so, in belief space, the optimal choice rule of the decision maker can be characterized as intersections of upper half spaces, or convex polytopes. Each polytope defines a set of beliefs for which an action is optimal and is spanned by a finite set of vertices. We demonstrate that it is without loss of generality for every sender to provide only information that generates beliefs on these vertices. + +Focusing on one-step strategies with support on a finite set of vertices, we use backward induction to construct equilibria, which are Markov. We also use the fact that one-step equilibria on a finite set of vertices fully characterize the set of equilibrium outcomes so as to demonstrate that for a set of preferences of full measure, there is a unique equilibrium distribution over states and outcomes. + +Equilibrium distributions are recursively defined as stable vertex beliefs. In concrete terms, for the truncated game starting with the last sender, a stable belief is a probability distribution over the state space that the last sender has no incentive to further refine.
Moreover, it is without loss of generality to consider only the vertices of the polytopes that define optimal actions for the decision maker, which we denote by $X$. For a persuasion game with $n$ senders, let $X_n \subseteq X$ be the stable vertex beliefs in the single-sender persuasion game with sender $n$ only. The penultimate sender, $n-1$, understands that + +¹Students for Fair Admissions, Inc. v. President & Fellows of Harvard Coll. (Harvard Corp.), Civil Action 14-cv-14176-ADB, 2019 U.S. Dist. LEXIS 170309 (D. Mass. Sep. 30, 2019). +---PAGE_BREAK--- + +any belief not in $X_n$ will be split onto $X_n$, so he may as well consider only beliefs in this set. However, for some beliefs in $X_n$, he may be better off by creating a mean-preserving spread over other beliefs in $X_n$, so the set of stable beliefs in the sequential-persuasion game starting with sender $n-1$, $X_{n-1}$, is a subset of $X_n$. The set of stable beliefs for the full game is constructed recursively from this idea, and it shrinks for each step of the backward induction process. + +By studying these stable beliefs, we find that adding a sender who moves first cannot reduce the informativeness. In contrast, strategic considerations may reduce information disclosure if a sender is added later in the game. + +Next, we ask whether multiple counterarguments can make equilibria more informative in our model. The answer is mainly negative. We prove that the set of stable beliefs is unchanged if a sender is given an additional chance to provide information that precedes the last time that the sender moves. Hence, there is no loss of generality in considering an extensive form in which each sender moves only once when characterizing the set of stable beliefs. However, the first sender can choose the distribution over stable beliefs, and different senders may prefer different distributions. Hence, having all senders except possibly the first moving only once is without loss of generality. 
This may seem counterintuitive in the context of debates or legal proceedings, but our model lacks natural constraints such as limitations on the amount of information that can be transmitted in a single argument.

We also compare sequential and simultaneous persuasion. We find that sequential persuasion can never generate a more informative equilibrium than simultaneous persuasion. Finally, we provide a simple, easy-to-interpret sufficient condition for full revelation to be the unique equilibrium, which is invariant to the order of moves.

**Literature.** Our paper relates to a large body of work on information disclosure but is most directly connected to the literature on Bayesian persuasion started by Kamenica and Gentzkow (2011) and Rayo and Segal (2010). This literature has recently been extended to incorporate multiple senders by Gentzkow and Kamenica (2017a, 2017b), Boleslavsky and Cotton (2015, 2018), Au and Kawai (2019, 2020), Hwang et al. (2019), and others. However, none of these papers deals with sequential moves by the senders. In a companion paper, Li and Norman (2018) provide examples showing that adding new senders may reduce information revelation in multi-sender persuasion settings.

Wu (2018) considers a sequential Bayesian persuasion model similar to ours. He develops a recursive concavification approach based on Harris (1985) and Kamenica and Gentzkow (2011) to establish equilibrium existence, and he independently constructs a one-step equilibrium (referred to as a silent equilibrium). Our paper differs from Wu (2018) in the following respects. First, our methodologies are different. Thanks to the assumption of a finite action space, we can apply primitive tools such as backward induction, convex polytope analysis, and linear programming to characterize equilibria transparently. Second, our model clarifies how senders' experiments are combined.
This enables us to transparently compare equilibria across different extensive forms.

A growing body of work embeds persuasion into dynamic models (see Ely et al. 2015 and Ely 2017), but the paper closest in spirit to ours is Board and Lu (2018), which incorporates Bayesian persuasion into a search model. However, Board and Lu (2018) consider payoff functions that are more restrictive than ours, and the decision maker in their paper faces an optimal stopping problem. In contrast, the decision maker has no influence on the precision of her information in our model. Our formal analysis has some similarities with that of Lipnowski and Mathevet (2017, 2018), who focus on single-sender persuasion games.

Multi-sender information provision has been studied in other frameworks. Glazer and Rubinstein (2001) study a finite-horizon sequential-persuasion model with limitations on the amount of information that can be revealed in each stage. There are also papers in the cheap talk and disclosure literatures that ask what the implications of multiple senders are; see Ambrus and Takahashi (2008), Battaglini (2002), Kawai (2015), Krishna and Morgan (2001), Kartik et al. (2019, 2017), Bhattacharya and Mukherjee (2013), and Milgrom and Roberts (1986). Hu and Sobel (2019) compare simultaneous and sequential information disclosure in a setting where senders decide which set of facts to disclose and where the focus is on equilibria surviving iterated elimination of weakly dominated strategies.

With different applications in mind, these papers introduce frictions on information transmission, such as asymmetric information, limited information-processing ability, or restricted forms of signals. Instead, our framework eliminates all such frictions and focuses solely on the strategic interaction among senders. It thus serves as a natural benchmark for identifying sources of communication inefficiency.
**Organization.** The remainder of this paper is organized as follows. Section 2 describes the model. Section 3 characterizes the set of equilibria: it shows that every equilibrium outcome is supported by a one-step equilibrium with finite support, that equilibria exist, and that the equilibrium outcome is generically unique. In Section 4, we apply the equilibrium characterization to discuss the effects of changes in the extensive form. Appendix A collects omitted proofs, and Appendix B collects some examples.

## 2. THE MODEL

**Players.** Consider an environment with senders $i = 1, \dots, n$ and a decision maker $d$. Each player $i = 1, \dots, n, d$ has a utility function $u_i: A \times \Omega \to \mathbb{R}$, where $A$ is a finite set of actions and $\Omega$ is a finite state space. Payoff functions are common knowledge, and players evaluate lotteries using expected utilities. Players hold a common prior belief $\mu_0 \in \Delta(\Omega)$. Fixing a belief $\mu$ and an action $a$, we define player $i$'s expected payoff as

$$v_i(a, \mu) = \sum_{\omega \in \Omega} u_i(a, \omega)\mu(\omega) \quad \text{for } i=1, \dots, n, d.$$

*Experiments.* Players are uninformed about the state of the world, but a sender may provide information to the decision maker by creating an *experiment*. We use the *partition representation* of experiments from Green and Stokey (1978) because combining multiple experiments is very intuitive under this representation.²

Under the partition representation, an experiment is given by a partition of $[0, 1] \times \Omega$, where, for each state $\omega$, $\{\pi(s|\omega)\}_{s \in S}$ are disjoint sets such that $\bigcup_{s \in S} \pi(s|\omega) = [0, 1]$ and $S$

²This also allows us to easily compare our sequential framework with the simultaneous-move model in Gentzkow and Kamenica (2017b).

FIGURE 1. There are two states $\omega_0$ and $\omega_1$, and two senders $i=1,2$.
Sender 1's signal space contains two signals, $s_1$ and $s'_1$. Sender 2's experiment has two possible signals, $\{s_2, s'_2\}$. The combination of the two experiments, $\hat{\pi}_2 = \pi_1 \lor \pi_2$, has three possible signals, $\{\hat{s}_2, \hat{s}'_2, \hat{s}''_2\}$, and it is finer than both $\pi_1$ and $\pi_2$.

indexes the sets in the partition. Given experiment $\pi$, one can interpret each $s$ as a *signal* by assigning state-contingent probabilities to each $s$ according to the Lebesgue measure of each $\pi(s|\omega)$. In doing so, experiment $\pi$ induces a state-contingent distribution over signals $p_\pi: \Omega \rightarrow \Delta(S)$. Letting $\lambda(\cdot)$ denote the Lebesgue measure, the probability of signal $s \in S$ being realized conditional on state $\omega$ is

$$p_{\pi}(s|\omega) = \lambda(\pi(s|\omega)), \quad (1)$$

where $\sum_s p_{\pi}(s|\omega) = 1$ for each $\omega \in \Omega$ because $\{\pi(s|\omega)\}_{s \in S}$ is a partition of the unit interval. With a slight abuse of notation, we use $s$ both as a generic index and as the corresponding subset of $[0, 1] \times \Omega$ in the discussion below and in Figure 1.

Given two experiments $\pi$ and $\pi'$, players combine the information into a joint experiment, denoted $\pi \lor \pi'$, which consists of all intersections of the sets in $\pi$ with the sets in $\pi'$. Since each set in the joint experiment is an intersection of a set in the partition $\pi$ with a set in the partition $\pi'$, it is immediate that $\pi \lor \pi'$ is finer than both $\pi$ and $\pi'$. This, in turn, implies that the combined experiment $\pi \lor \pi'$ is more informative in Blackwell's sense than either of the two underlying experiments.³

**Extensive form.** Let $\Pi$ denote the set of all experiments.
Senders $1, \dots, n$ move sequentially, posting experiments $\pi_1, \dots, \pi_n$ in the order of their indices, where $\pi_i \in \Pi$ for every $i$ and each sender observes all previous senders' experiments. Then nature draws $\omega$. Finally, the decision maker observes $(\pi_1, \dots, \pi_n)$ and a joint realization $s = (s_1, \dots, s_n)$ drawn according to the state-contingent probability $p_{\bigvee_i \pi_i}(s|\omega) = \lambda(\bigvee_i \pi_i(s|\omega))$, and takes an action $a \in A$.

As illustrated in Figure 1, combining sender 2's experiment with the experiment of sender 1 generates a joint experiment finer than either underlying experiment. Each

³Assume that $\pi$ is finer than $\pi'$, and let $p_\pi$ and $p_{\pi'}$ denote the corresponding state-contingent distributions over signals. Then $p_\pi$ is more informative in the sense of Blackwell (1953) than $p_{\pi'}$. See Green and Stokey (1978) for a proof.

signal in $\pi_1$ may be further partitioned, and Figure 1 provides an example in which $s_1$, but not $s_1'$, is refined when combined with the experiment played by sender 2. A sender therefore acts as if he observes and responds to the signal realizations of previous senders' experiments, despite the fact that the formal model assumes that the joint signal realization is drawn at the end. Because senders move sequentially, generating joint experiments by taking intersections is therefore without loss of generality in our model.

*Strategies and equilibrium.* A pure *strategy* for sender $i$ is a map $\sigma_i: \Pi^{i-1} \to \Pi$, where $\Pi^0$ is the trivial null history. That is, given a history $\{\pi_1, \dots, \pi_{i-1}\}$, sender $i$ chooses $\pi_i$, which results in a finer experiment $\bigvee_{k=1}^i \pi_k$. A history for the decision maker is a vector $(\pi_1, \dots, \pi_n, s_1, \dots, s_n)$.
Let $\mathcal{H}_d$ be the set of all histories for the decision maker and let $\sigma_d: \mathcal{H}_d \to A$ denote her strategy. There is uncertainty about the state, but information is symmetric, so there is never a point in the game at which any player needs to update beliefs about the types of other players. Hence, subgame perfection is the applicable equilibrium concept.

## 3. EQUILIBRIUM CHARACTERIZATION

In this section, we first prove a result similar to the revelation principle: without loss of generality, we may focus on *one-step equilibria*, which are equilibria in which only the first sender discloses nontrivial information on the equilibrium path. The preferences of the other senders enter much like incentive compatibility constraints in such equilibria. We then construct an equilibrium and show that the game has a unique equilibrium distribution over states and actions for a set of payoff functions with full Lebesgue measure.

### 3.1 Simplifying the problem

Players ultimately care only about the distribution over actions and states, which motivates the following definition.

**DEFINITION 1.** Two strategy profiles are *outcome equivalent* if they generate identical joint distributions over $\Omega \times A$.

There are often multiple outcome-equivalent equilibrium information structures, but all players are indifferent across all such equilibria. We therefore consider them equivalent even if they are Blackwell comparable, because ultimately players care only about probability distributions over $\Omega \times A$.

Next, we define strategy profiles in which only the first sender provides any information.

**DEFINITION 2.** Consider a strategy profile $\sigma'$ and let $h_i'$ denote the implied outcome path before the move by sender $i$. We say that $\sigma'$ is *one-step* if $\bigvee_{i=1}^n \sigma_i'(h_i') = \sigma_1'(h_1')$.

We are now ready to present the first result.

PROPOSITION 1.
For any subgame perfect equilibrium, there exists an outcome-equivalent subgame perfect equilibrium in which senders play a one-step continuation strategy profile after any history of play.

The idea behind Proposition 1 is similar to the revelation principle. Consider an arbitrary subgame perfect equilibrium $\sigma^*$ and let $\{\pi_1^*, \dots, \pi_n^*\}$ be the individual experiments on the equilibrium path, generating the joint experiment $\pi^* = \bigvee_{i=1}^n \pi_i^*$. To construct a one-step equilibrium, let sender 1 play $\pi^*$ and assume that, on the equilibrium path, senders $i=2, \dots, n$ provide only redundant information. The decision maker can then generate the same distribution over $A \times \Omega$ as in the initial equilibrium after observing the one-step path history. Moreover, because $\pi^*$ is finer than $\pi_i^*$ for each $i < n$, any deviation that is feasible from the one-step outcome path is also feasible in the original equilibrium, so continuation play following deviations from the one-step equilibrium can be replicated from the original equilibrium, just as in the proof of the revelation principle. Off the equilibrium path, we can follow the original equilibrium strategies.⁴

For the one-step characterization to be a significant simplification, it is important that it applies not only on the equilibrium path but also off the path. The same logic as on the equilibrium path generalizes to any continuation equilibrium following an arbitrary history of play.

Proposition 1 implies that solving for an equilibrium of a sequential-persuasion game is equivalent to solving a static single-sender persuasion game disciplined by additional, recursively defined incentive compatibility constraints. After stage 1, no sender has an incentive to provide further information given the threat of subsequent senders' best responses.
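The mechanics behind the one-step construction can be sketched computationally. The snippet below, a minimal illustration with hypothetical interval endpoints and signal names (not the paper's example), represents an experiment in the partition form and shows that joining it with a purely redundant (uninformative) experiment leaves the conditional signal probabilities of (1) unchanged, which is why sender 1 can post the joint experiment $\pi^*$ while later senders stay silent:

```python
def join(pi1, pi2):
    """Join two experiments in the partition representation: each cell of
    pi1 v pi2 is the intersection of a cell of pi1 with a cell of pi2,
    state by state.  An experiment here maps
    state -> {signal: list of (lo, hi) subintervals of [0, 1]}."""
    joint = {}
    for omega in pi1:
        joint[omega] = {}
        for s1, cells1 in pi1[omega].items():
            for s2, cells2 in pi2[omega].items():
                inter = [(max(a, c), min(b, d))
                         for (a, b) in cells1 for (c, d) in cells2
                         if min(b, d) > max(a, c)]
                if inter:
                    joint[omega][(s1, s2)] = inter
    return joint

def signal_probs(pi, omega):
    """p_pi(s | omega) as in (1): the Lebesgue measure of each cell."""
    return {s: sum(b - a for (a, b) in cells)
            for s, cells in pi[omega].items()}

# Sender 1's experiment in a hypothetical two-state example.
pi1 = {"w0": {"s1": [(0.0, 0.6)], "s1'": [(0.6, 1.0)]},
       "w1": {"s1": [(0.0, 0.2)], "s1'": [(0.2, 1.0)]}}
# A redundant experiment, as played by senders 2, ..., n on the one-step path.
trivial = {"w0": {"t": [(0.0, 1.0)]}, "w1": {"t": [(0.0, 1.0)]}}

probs = signal_probs(join(pi1, trivial), "w0")
# The join only relabels pi1's cells: probabilities 0.6 and 0.4 in state w0.
```

Joining with the trivial experiment relabels but never refines cells, so the decision maker's information, and hence the distribution over $A \times \Omega$, is exactly as if sender 1 had moved alone.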
### 3.2 Equilibrium construction

We now explicitly construct a one-step equilibrium. The construction is essential for the rest of our analysis because several concepts critical to understanding the equilibrium structure and the effect of competition in persuasion are introduced along the way.

The equilibrium is constructed by backward induction. We begin with the decision maker's problem. As in standard persuasion models, what matters for the decision maker is her posterior belief about the state. Moreover, a key simplification is that we may, without loss of generality, restrict attention to a finite set of *vertex beliefs*. Combined with the one-step equilibrium characterization, this allows us to construct equilibria recursively by checking which *stable vertex beliefs* in the continuation game are weakly better for the current sender than every mean-preserving spread over the stable vertex beliefs in the continuation game.

⁴The proof of Proposition 1 is drastically simplified by two slightly unconventional modelling decisions. First, the partition representation makes it very easy to describe how individual experiments combine into a joint experiment. Second, having the uncertainty resolved after all senders have moved implies that a history is a sequence of successively finer partitions. Hence, we avoid having senders condition on realized signals, which greatly simplifies the proof.

*Decision maker's problem.* Suppose that the decision maker observes a history of experiments $\{\pi_j\}_{j=1}^n$, which induces a joint experiment $\bigvee_{j=1}^n \pi_j$, as well as a signal realization $s$. Using $\bigvee_{j=1}^n \pi_j$ and $s$, the decision maker updates her belief about the state, which summarizes all payoff-relevant aspects of the history.
The posterior probability of state $\omega \in \Omega$ is

$$ \mu(\omega|s) = \frac{p(s|\omega)\mu_0(\omega)}{\sum_{\omega' \in \Omega} p(s|\omega')\mu_0(\omega')}, \quad (2) $$

where we have dropped the subscript of $p(s|\omega)$ defined in (1). Denoting the unconditional probability of $s$ by $p(s) = \sum_{\omega' \in \Omega} p(s|\omega')\mu_0(\omega')$, we note that an experiment $\pi$ induces a distribution of posterior beliefs that satisfies the Bayes plausibility constraint

$$ \sum_{s \in \pi} \mu(\omega|s) p(s) = \mu_0(\omega). $$

To characterize the optimal actions for the decision maker, we note that for any distinct pair $a, a' \in A$, the set

$$ H(a \geq a') = \left\{ \mu \in \Delta(\Omega) \mid \sum_{\omega \in \Omega} \mu(\omega) [u_d(a, \omega) - u_d(a', \omega)] \geq 0 \right\} $$

consists of the posterior beliefs at which the decision maker weakly prefers $a$ to $a'$. It follows that the set of beliefs at which $a \in A$ is optimal is given by

$$ M(a) = \bigcap_{a' \in A} H(a \geq a'), $$

which is a finite convex polytope. See Figure 2 for a simple illustration.

FIGURE 2. $\Omega = \{\omega_1, \omega_2, \omega_3\}$ and $A = \{a_1, a_2, a_3, a_4\}$.

*Interim beliefs.* A history $h_i = \{\pi_j\}_{j=1}^{i-1}$ induces a joint experiment $\pi^{i-1} = \bigvee_{j=1}^{i-1} \pi_j$. For each signal $s$ of $\pi^{i-1}$, the corresponding belief $\mu(\omega|s)$ is given by (2). This is the decision maker's posterior belief if senders $i, \dots, n$ do not add any information in the continuation game and $s$ is realized. We call such a belief an *interim belief*. Each joint experiment $\pi^{i-1} \in \Pi$ generates a distribution of interim beliefs $\tau^{i-1}$, and we let $\Delta(\Delta(\Omega))$ denote the set of distributions of (interim or posterior) beliefs.
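The updating rule (2) and the membership test for the polytopes $M(a)$ can be made concrete with a small sketch; the payoff matrix and probabilities below are hypothetical, chosen only for illustration:

```python
def posterior(p_s, prior):
    """Bayes' rule (2): posterior over states given the likelihoods
    p_s[w] = p(s|w) of the realized signal s."""
    joint = {w: p_s[w] * prior[w] for w in prior}
    total = sum(joint.values())          # unconditional probability p(s)
    return {w: joint[w] / total for w in joint}

def optimal_actions(mu, u_d, actions, states):
    """The actions a with mu in M(a): a maximizes expected payoff under mu,
    i.e. mu lies in H(a >= a') for every alternative a'."""
    ev = {a: sum(u_d[(a, w)] * mu[w] for w in states) for a in actions}
    best = max(ev.values())
    return [a for a in actions if ev[a] >= best - 1e-12]

states, actions = ["w0", "w1"], ["a0", "a1"]
u_d = {("a0", "w0"): 1, ("a0", "w1"): 0,     # hypothetical matching payoffs
       ("a1", "w0"): 0, ("a1", "w1"): 1}
mu = posterior({"w0": 0.8, "w1": 0.2}, {"w0": 0.5, "w1": 0.5})
# mu["w0"] = 0.8, so only a0 is optimal at this posterior.
```

With two states and these payoffs, $M(a_0)$ is the set of beliefs with $\mu(\omega_0) \geq 1/2$, and the posterior $0.8$ lies in its interior.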
Given a joint experiment $\pi^{i-1}$ that induces a belief distribution $\tau^{i-1}$, sender $i$ can refine the information into any partition finer than $\pi^{i-1}$. Using Theorem 1 in Green and Stokey (1978) together with the characterization in Gentzkow and Kamenica (2017a), we know that any mean-preserving spread of $\tau^{i-1}$ can be induced by some refined partitioning of $\pi^{i-1}$. Every feasible experiment for sender $i$ therefore corresponds to a mean-preserving spread of each interim belief in the support of $\tau^{i-1}$. Hence, sender $i$'s problem separates into finding an optimal mean-preserving spread, belief by belief, from the distribution induced by previous senders.

*Sender $n$'s problem.* Next, we consider the last sender's problem. The construction of $\{M(a)\}$ implies that we may consider optimal strategies for the decision maker that map posterior beliefs to actions. We abuse notation and denote such a map by $\sigma_d(\mu) \in \{a : \mu \in M(a)\}$. To guarantee that sender $n$'s problem is well defined, we assume that the decision maker always breaks ties in favor of sender $n$. If there are multiple such rules, we arbitrarily pick one of them. Given an interim belief $\mu$ and decision rule $\sigma_d$, sender $n$'s program can be written as

$$
\begin{align}
V_n(\mu) &= \max_{\tau \in \Delta(\Delta(\Omega))} \sum_{\mu'} v_n(\sigma_d(\mu'), \mu') \tau(\mu') \tag{3} \\
\text{s.t.} \quad & \sum_{\mu'} \mu' \tau(\mu') = \mu, \nonumber
\end{align}
$$

and a solution is a mean-preserving spread of $\mu$, denoted by $\tau_n(\cdot|\mu)$.

By construction, the set of beliefs for which $a$ is optimal for the decision maker, $M(a)$, is a finite convex polytope for each $a \in A$.
Such a convex polytope has a finite set of $J(a)$ vertices, $\{\mu_j^a\}_{j=1}^{J(a)}$, and these vertices span $M(a)$, so every $\mu \in M(a)$ can be represented as a convex combination of the vectors $\{\mu_j^a\}_{j=1}^{J(a)}$.⁵ Let

$$ X = \bigcup_{a \in A} \{\mu_j^a\}_{j=1}^{J(a)} \qquad (4) $$

denote the set of all vertices that define the optimal actions for the decision maker, which is finite because both $\Omega$ and $A$ are finite.

**LEMMA 1.** *Program (3) has a solution* $\tau \in \Delta(X)$.

Hence, while there may be optimal solutions to (3) with support on a larger (even infinite) set, we can always find a solution in $\Delta(X)$. The idea is that each $M(a)$ is spanned by its vertices. Hence, the sender can replace any belief $\mu$ that is not a vertex with a convex combination over the vertices. There are then two possibilities. The first is that the action $\sigma_d(\mu)$ is taken at all the vertices in the convex combination. In this case, the sender is indifferent between $\mu$ and the convex combination over the vertices of $M(a)$. The second possibility is that a different action is taken at one or more of the vertices. Because the tie-breaking favors the sender, he is either indifferent or strictly better off using the convex combination. Hence, restricting attention to $\Delta(X)$ generates a utility at least as great as (3). But $\Delta(X)$ is a subset of the feasible set in (3), so the two problems must have the same value.

Figure 2 provides an illustration. It depicts a feasible solution in which a belief $\mu$ in the interior of $M(a_2)$ is played with positive probability. Replacing $\mu$ with the mean-preserving spread onto $\{\mu_j^{a_2}\}_{j=1,2,3}$ can be no worse for sender $n$ because the decision maker breaks ties in favor of $n$ at $\mu_1^{a_2}$ and $\mu_2^{a_2}$.

⁵See Grünbaum et al. (1967).
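Lemma 1 turns program (3) into a finite linear program over $\Delta(X)$, and a basic optimal solution puts weight on at most $|\Omega|$ vertices, so one can even enumerate candidate supports directly. A sketch with a hypothetical two-state vertex set and sender payoff (not taken from the paper):

```python
import itertools
import numpy as np

def best_split(mu, X, value):
    """Solve program (3) restricted to Delta(X): maximize the expected value
    over mean-preserving spreads of mu supported on the vertex set X.
    A basic optimal solution uses at most |Omega| vertices, so supports of
    that size are enumerated."""
    mu = np.asarray(mu, dtype=float)
    best, best_tau = -np.inf, None
    for support in itertools.combinations(X, len(mu)):
        A = np.array(support, dtype=float).T   # columns are vertex beliefs
        try:
            w = np.linalg.solve(A, mu)         # weights with sum mu' tau(mu') = mu
        except np.linalg.LinAlgError:
            continue                           # affinely dependent support
        if np.any(w < -1e-9):
            continue                           # not a probability weighting
        val = float(sum(wi * value(v) for wi, v in zip(w, support)))
        if val > best:
            best, best_tau = val, dict(zip(support, w))
    return best, best_tau

# Two states; beliefs are (Pr[w0], Pr[w1]).  Vertices and payoffs are hypothetical.
X = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
v_n = lambda m: 1.0 if max(m) > 1 - 1e-9 else 0.0   # sender n likes certainty
V, tau = best_split((0.7, 0.3), X, v_n)
# Optimal spread: weight 0.7 on (1, 0) and 0.3 on (0, 1), so V = 1.0.
```

In this toy case the best feasible spread of the interim belief $(0.7, 0.3)$ is full revelation, mirroring the Bayes plausibility constraint in (3).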
**Lemma 1** suggests that we may characterize the optimal mean-preserving spread of every sender in terms of a finite optimization problem. The general idea is that if the last sender always uses a best response with support on the vertex beliefs $X$, then previous senders may as well use strategies limited to the same set of vertices, since the final sender will undo any attempt to generate other beliefs by splitting them onto $X$.

*Stable beliefs.* To proceed further, we recursively define a set of stable (vertex) beliefs. Let $X_n$ denote the set of vertex beliefs at which sender $n$ has no incentive to provide further information, that is,

$$ X_n = \{ \mu \in X : v_n(\sigma_d(\mu), \mu) = V_n(\mu) \}. \quad (5) $$

Then we recursively define $\{X_i\}_{i=1}^n$ such that

$$ X_i = \{ \mu \in X_{i+1} : v_i(\sigma_d(\mu), \mu) = \tilde{V}_i(\mu) \}, \quad (6) $$

where

$$
\begin{align}
\tilde{V}_i(\mu) &= \max_{\tau \in \Delta(X_{i+1})} \sum_{\mu' \in X_{i+1}} v_i(\sigma_d(\mu'), \mu') \tau(\mu'|\mu) \tag{7} \\
\text{s.t.} \quad & \sum_{\mu' \in X_{i+1}} \mu' \tau(\mu'|\mu) = \mu. \nonumber
\end{align}
$$

Notice that (i) a solution to the auxiliary program (7) exists, (ii) $X_i \subseteq X_{i+1}$, and (iii) $X_1 \neq \emptyset$. In the auxiliary problem (7), sender $i$ is restricted to experiments that induce vertex beliefs only in $X_{i+1}$, and he believes that senders $i+1, \dots, n$ will not add any information. Because $X_i \subseteq X_j$ for all $j > i$, sender $i$'s belief is indeed justified.⁶

**DEFINITION 3.** A belief $\mu$ is *stable* if $\mu \in X_i$, where $\{X_i\}_{i=1}^n$ is recursively defined by (5) and (6).

⁶It would be natural to define stable beliefs not just on the vertices. However, it is without loss of generality to consider equilibria with support on the vertices, and we avoid tedious repetitions of "stable vertex beliefs" by having the definition apply to vertices only.
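The recursion (5)–(7) lends itself directly to computation: starting from $X$, each backward-induction step discards the vertices that the current sender would profitably split onto the remaining stable set. A self-contained sketch, again with hypothetical two-state payoffs rather than the paper's example:

```python
import itertools
import numpy as np

def split_value(mu, candidates, v):
    """Value of program (7): the best mean-preserving spread of mu with
    support on `candidates`; basic solutions use at most |Omega| points,
    so supports of that size are enumerated."""
    mu = np.asarray(mu, dtype=float)
    best = -np.inf
    for sup in itertools.combinations(candidates, len(mu)):
        A = np.array(sup, dtype=float).T
        try:
            w = np.linalg.solve(A, mu)
        except np.linalg.LinAlgError:
            continue
        if np.all(w >= -1e-9):
            best = max(best, float(sum(wi * v(m) for wi, m in zip(w, sup))))
    return best

def stable_beliefs(X, senders):
    """Recursion (5)-(6), filtering X backward with sender n first.
    senders[i] is sender i+1's reduced payoff v_i(sigma_d(mu), mu).
    Returns X_1."""
    Xi = list(X)
    for v in reversed(senders):
        # Keep mu only if no mean-preserving spread over the current
        # stable set strictly improves on staying at mu.
        Xi = [m for m in Xi if v(m) >= split_value(m, Xi, v) - 1e-9]
    return Xi

# Hypothetical example: one sender who strictly prefers certainty.
X = [(1.0, 0.0), (2/3, 1/3), (1/3, 2/3), (0.0, 1.0)]
v1 = lambda m: 1.0 if max(m) > 1 - 1e-9 else 0.0
X1 = stable_beliefs(X, [v1])
# The interior vertices are split away; only degenerate beliefs are stable.
```

Because each pass only removes vertices, the code also exhibits the nesting $X_1 \subseteq X_2 \subseteq \dots \subseteq X_n \subseteq X$ noted in (ii).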
By construction, no sender has an incentive to refine a stable belief. Therefore, one can recursively construct a one-step equilibrium in which the resulting posterior belief is distributed only over the set of stable beliefs. On the path of play, if $\mu_0 \in X_1$, no sender sends a nontrivial signal; if $\mu_0 \notin X_1$, only sender 1 posts an informative experiment and the other senders provide no information. Off the equilibrium path, if one of sender $i$'s interim beliefs is $\mu_{i-1} \notin X_i$, he posts an experiment that splits the belief onto $X_i$, and the subsequent senders do not add further information.

A key step in the construction is to ensure that best responses on the vertices exist for each sender. This is done by using strategies that split any nonvertex belief onto vertices and, crucially, never refine a stable vertex belief.⁷ Together, these two restrictions on continuation play imply that each player effectively has a finite choice set. This does not rely on making value functions continuous (or upper semicontinuous) in beliefs. For further details, the reader may consult Appendix A.

**PROPOSITION 2.** *There exists a one-step equilibrium.*

Notice that the equilibrium is Markov in the following sense. The decision maker's equilibrium strategy $\sigma_d$ depends on the history only through the posterior belief, and for each $i = 1, 2, \dots, n$, every experiment profile $\pi_1, \dots, \pi_{i-1}$ and signal profile $(s_1, \dots, s_{i-1})$ that induce the same interim belief lead to the same mean-preserving spread $\tau_i$ under sender $i$'s equilibrium strategy.

### 3.3 Outcome uniqueness

Our third result concerns the uniqueness of the equilibrium outcome, formally stated as follows.
**PROPOSITION 3.** *All subgame perfect equilibria are outcome equivalent for a set of payoff function profiles with full Lebesgue measure.*

Proposition 3 says that for generic preferences, there is an essentially unique equilibrium. Together with the fact that we can always construct a Markov equilibrium, this implies that restricting attention to Markov strategies is almost always without loss of generality. Multiple equilibrium outcomes may arise if one sender is indifferent between some vertex $\mu \in X_i$ and some mean-preserving spread over $X_i$ while some other players are not indifferent. However, such preferences are knife-edge and have probability 0.

The proof is relegated to Appendix A. For intuition, first notice that the one-step equilibrium we construct in Section 3.2 induces vertex beliefs only. A key intermediate result, Lemma 2 below, establishes that this is without loss of generality.

⁷Players may be indifferent between refining and not refining a stable vertex belief, and using a best response in which a stable belief is refined could make the best-response problem of a previous mover ill defined.

LEMMA 2. For every subgame perfect equilibrium, there exists an outcome-equivalent subgame perfect equilibrium in which senders play one-step strategies with implied beliefs supported on $X$ after every history of play.

The basic idea is much like that of Proposition 1, but the proof has to deal with on- and off-equilibrium-path histories and is therefore notationally more cumbersome. Lemma 2 is crucial because not only can we restrict attention to equilibrium experiments supported on vertex beliefs, but it is additionally without loss of generality to check only one-step deviations to vertices. Therefore, if two continuation equilibria that are not outcome equivalent exist, some sender must be indifferent between some $\mu \in X$ and a mean-preserving spread with support on $X$.
There are two cases in which a sender is indifferent to splitting a belief onto $X$. The first is when the mean-preserving spread induces the same action as the original belief. Such indeterminacy is irrelevant, as the distribution over $A \times \Omega$ is unchanged. Any failure of essential uniqueness therefore corresponds to indifference over mean-preserving spreads that induce distinct actions. However, this requires nongeneric preferences. Since $X$ is a finite set, there is a finite number of affinely independent sets of belief vectors, and indifference between any two such sets can hold only for a measure zero set of preferences. There is a finite set of pairs to consider, and it follows that essential uniqueness can fail for at most a measure zero set of preferences.

## 4. APPLICATIONS

This section discusses some applications. The aim is to shed light on issues relevant to the design of a communication protocol. Specifically, to maximize the amount of information disclosure, the decision maker can structure the communication by selecting experts, organizing the order of consultations, deciding what information to share with experts, and so on. As a first step, we examine some key aspects that affect the incentives for information revelation, including the number of senders, the order of the senders' moves, and the information shared among senders. Thanks to the stable-belief characterization of equilibrium outcomes, this becomes relatively straightforward, as we can focus on how changes in the extensive form affect the set of stable beliefs.

Our goal is to derive some principles to guide the design of consultations. We focus on results that hold for arbitrary preferences. The justification is that results that do not depend on specific assumptions about preferences are more robust and may also be of value in real-world applications in which preferences are not observable.
### 4.1 Information criteria

We begin by defining the criteria used to evaluate information revelation. A unique equilibrium outcome makes comparisons more straightforward and transparent. Unfortunately, when senders move simultaneously, such uniqueness obtains only when full revelation is the unique equilibrium; in general, one must use setwise comparisons. In contrast, the sequential model has a unique outcome for generic preferences. In the rest of the paper, we focus on the generic case with an essentially unique equilibrium distribution over states and actions in the sequential model.

It is easy to construct examples with multiple equilibrium belief systems that can be ranked according to the Blackwell order, but where the differences in informativeness are irrelevant because all equilibria induce the same joint distribution over $A \times \Omega$. We therefore treat $\pi$ and $\pi'$ as equivalent in terms of information content provided that they are outcome equivalent.

**DEFINITION 4 (Essential Blackwell order).** For a given decision correspondence, we say that $\pi$ is essentially less informative than $\pi'$ if there exists an experiment that is outcome equivalent to $\pi'$ and more informative than any experiment that is outcome equivalent to $\pi$ in the Blackwell order.

First, note that this is a well-defined partial order. If $\pi''$ is outcome equivalent to $\pi'$ and more informative than any experiment that is outcome equivalent to $\pi$, there exists no experiment outcome equivalent to $\pi$ that is strictly more informative than $\pi''$, so antisymmetry holds. Transitivity is equally obvious.

Next, note that it is sufficient to compare experiments with support on vertex beliefs: for any experiment involving at least one signal that is not on a vertex, there exists an outcome-equivalent mean-preserving spread onto the vertices.
Hence, consider an experiment in which the belief $\mu$ in Figure 2 has positive probability. When comparing the informativeness of this experiment to another experiment, we first replace $\mu$ with the mean-preserving spread onto $\{\mu_1^{a_2}, \mu_2^{a_2}, \mu_3^{a_2}\}$. In this example, the relevant mean-preserving spread is unique, which is not always true. However, by Proposition 3, for generic preferences there is a unique mean-preserving spread onto the vertices of $M(a)$ for every $\mu \in M(a)$, and the order then compares the finest experiment outcome equivalent to $\pi$ with the finest experiment outcome equivalent to $\pi'$.

Finally, note that outcome equivalence can only be defined given the decision maker's preferences. Hence, our essential Blackwell order is not based purely on informativeness, which distinguishes it from individual sufficiency in Bergemann and Morris (2016) and other conventional information criteria that apply to all preferences. The advantage of our order is that it allows us to focus on comparing the outcome-relevant information of two information structures.

### 4.2 Adding senders in sequential persuasion

In this section, we examine the effect of adding senders in a sequential-move Bayesian persuasion game and derive some general results. Intuition suggests that the added competition from an increase in the number of experts should increase the amount of information revealed in the market. This view may even be seen as an intellectual foundation for freedom of speech, a free press, the English common law system, and many other institutions. While the literature provides somewhat mixed support for this view,

FIGURE 3. Continuation payoffs and the order of moves. The solid lines are the senders' payoffs as a function of the decision maker's beliefs, while the dashed line represents the concavified payoff when 1 (2) is the only sender.
In (a), sender 1 splits $\mu < 1/3$ onto {0, 1/3}, $\mu \in [1/3, 1/2]$ onto {1/3, 1/2}, and $\mu > 1/2$ onto {1/2, 1}. In (b), sender 2 splits $\mu < 2/3$ onto {0, 2/3} and $\mu > 2/3$ onto {2/3, 1}. + +Gentzkow and Kamenica (2017a, 2017b) provide sufficient conditions under which additional senders do not reduce the amount of information revealed in simultaneous move games. Sequential moves further weakens the argument for additional experts generating more information, because the order of moves matters. + +Consider an example with two states and two senders. Figure 3 depicts the preferences over the beliefs of the decision maker for sender 1 and 2 (in their single-sender persuasion games), respectively.⁸ Notice in particular that in a single-sender persuasion problem with $\mu > 2/3$, beliefs are split onto {1/2, 1} when the experiment is constructed by sender 1. + +When the two senders move in sequence, full revelation is the unique equilibrium if sender 1 is the last mover. In contrast, full revelation is *not* an equilibrium when sender 2 is the last mover, and for priors exceeding 2/3, the equilibrium is less informative than the experiment constructed when sender 1 is the single sender. The difference between the two cases is that sender 1 is unable to commit to not splitting $\mu = 2/3$. Anticipating this, sender 2 provides full information. When the order is reversed, the commitment issue is gone. + +To understand this, suppose that sender 1 is the last mover. Note that the tie is broken in favor of the action corresponding to [1/3, 1/2] at $\mu = 1/3$ and the action corresponding to [1/2, 2/3] at $\mu = 1/2$. Any belief in (0, 1) is thus split by sender 1 in such a way that sender 2 gets the lowest possible payoff except when $\mu$ is 0 or 1. The unique best response for sender 2 is, therefore, to fully reveal the state. + +In contrast, if sender 2 is the last mover, any $\mu$ in [0, 2/3] is split onto {0, 2/3}. 
It follows that if the prior exceeds 2/3, the (finest) best response for sender 1 is to split the beliefs onto $\{2/3, 1\}$, which results in no further refinement by sender 2. Hence, the order of moves matters for the equilibrium outcome. Moreover, for a prior larger than 2/3, the equilibrium is less informative than the single-sender equilibrium with sender 1, which is to split the prior onto $\{1/2, 1\}$.

⁸We can generate such preferences if the decision maker has four actions available.

In the example above, the equilibrium is more informative when the new sender is added as a first mover. This is not fully general, due to the incompleteness of the Blackwell ordering, but we can establish an analogue of the result for simultaneous move games.

**PROPOSITION 4.** For generic preferences, if a sender is added who moves before all other senders, there is no equilibrium with $n + 1$ senders that is essentially less informative than the equilibrium in the original game.

**PROOF.** Let $X_1^n$ be the set of stable beliefs in the game with $n$ senders and let $X_1^{n+1}$ be the set of stable beliefs in the game with $n+1$ senders. Because sender $n+1$ is added to move before senders $1, \dots, n$ and the set of stable beliefs is defined by backward induction, we have that

$$X_1^{n+1} \subseteq X_1^n. \qquad (8)$$

Fix the prior belief $\mu_0$, let $X_1^n(\mu_0)$ be the support of the equilibrium in the game with $n$ senders, and let $X_1^{n+1}(\mu_0)$ be the support of the equilibrium in the game with $n+1$ senders. As discussed in Section 4.1, when introducing the essential Blackwell ordering, it is without loss to assume that these beliefs are vertices and stable, i.e., $X_1^j(\mu_0) \subseteq X_1^j$ for $j = n, n+1$.

For contradiction, suppose that the game with $n+1$ senders has an equilibrium that is essentially less informative than the equilibrium in the original game with $n$ senders.
Then there exists at least one belief $\mu' \in X_1^{n+1}(\mu_0)$ such that $\mu'$ is in the convex hull of $X_1^n(\mu_0)$ but $\mu' \notin X_1^n(\mu_0)$. Because preferences are generic, in the original $n$-sender game, some sender has a strict incentive to split $\mu'$ onto $X_1^n$. Hence, $\mu' \notin X_1^n$, which contradicts (8). $\square$

The proposition says that when a new sender is added to move before all previous senders, the equilibrium cannot sustain more uncertainty, regardless of the preference profile of the senders. The idea is simple. If a belief is induced by an equilibrium, it must be stable. Recall that the set of stable beliefs is constructed by backward induction. Adding a new sender who moves first can only shrink the set of stable beliefs. As a result, such a change cannot make the outcome essentially less informative unless there are multiple equilibrium outcomes, which is ruled out by the restriction to generic preferences. In contrast, the counterexample in Figure 3 with sender 2 added at the end is robust: its qualitative features survive perturbations in sender and decision maker payoff functions.

In the special case where there are only two states, the incompleteness of the essential Blackwell order no longer matters and we obtain a stronger result.

**PROPOSITION 5.** Suppose that $\Omega = \{\omega_0, \omega_1\}$. If a sender is added who moves before all other senders, every equilibrium with $n+1$ senders is weakly essentially more informative.

PROOF. Without loss of generality, consider a one-step equilibrium with support on $X$, the set consisting of the beliefs at which the decision maker is indifferent between two actions, together with 0 and 1. Let $X_1^j(\mu_0)$ be the support of the equilibrium in the game with $j$ senders, where $j = n, n+1$.
For contradiction, assume that there are $\{\mu_L, \mu_M, \mu_H\}$ such that $\mu_M \in X_1^{n+1}(\mu_0)$, $\{\mu_L, \mu_H\} \subseteq X_1^n(\mu_0)$, and $\mu_M \in (\mu_L, \mu_H)$. Without loss, we can assume that at least two distinct actions are taken at the beliefs $\{\mu_L, \mu_M, \mu_H\}$, as otherwise $\mu_M$ would not be in the set of vertices $X$. But for $\mu_M$ to be stable with $n+1$ senders, every sender $i \in \{1, \dots, n+1\}$ must be weakly better off at $\mu_M$ than at the unique mean-preserving spread onto $\{\mu_L, \mu_H\}$. This implies that transferring probability from $\{\mu_L, \mu_H\}$ to $\mu_M$ is consistent with equilibrium in the model with $n$ senders, contradicting uniqueness with $n$ senders. $\square$

The difference between Propositions 4 and 5 can be illustrated in Figure 4. The left panel of Figure 4 visualizes a case with three states. The support of the finest equilibrium is $X_1^n(\mu_0) = \{\mu_1, \mu_2, \mu_3\}$ in the original $n$-sender game. When a new sender is added to speak before the other senders, the support of the finest equilibrium becomes $X_1^{n+1}(\mu_0) = \{\mu_1, \mu_2, \mu_4\}$. Proposition 4 leaves open the possibility that the two equilibria are noncomparable in the sense of Blackwell. In contrast, when there are only two states, the support of the finest equilibrium contains at most two stable beliefs for generic preferences. Proposition 5 implies that $\mu_L^{n+1} \le \mu_L^n \le \mu_H^n \le \mu_H^{n+1}$, which is visualized in the right panel of Figure 4.

When senders are added at any place other than as a first mover, nothing can be said in general about how informativeness is affected. We know from Li and Norman (2018) that adding a sender at the end may strictly reduce the information revealed, and the example in Figure 3 is another instance of that.
To see that the same possibility exists when senders are added in the middle, assume that there is a sender 3 whose preferences are such that splitting any vertex belief makes him worse off. Adding this sender (or multiple copies of him) at the end of any game with one or two senders leaves the equilibrium unchanged. Hence, any example in which adding a sender at the end reduces information relative to the single-sender problem can be used to create an example in which adding a sender in the middle reduces information relative to a persuasion game with two (or more) senders. Constructing examples where adding a sender in the middle adds information is even easier: one can simply add, at any position in a game that does not fully reveal the state, a sender who prefers full revelation.

FIGURE 4. The left panel represents an example with $|\Omega| = 3$, while the right panel represents an example with $|\Omega| = 2$.

### 4.3 Multiple moves by the same sender

Our second application considers the communication protocol for a given set of senders. Up to this point, we have allowed each player to move only once. This is without loss of generality for results concerning the characterization, existence, and uniqueness of equilibria, because we can always add multiple players with identical preferences. However, we now ask whether it is useful for the decision maker to allow multiple counterarguments, or whether a sender is better off moving more than once.

This exercise is relevant because senders who speak at late stages can respond to early movers' arguments, that is, they can disclose information conditional on the signals sent by previous senders. It is then natural to ask: is there any value in letting senders respond to counterarguments from other senders? If so, what is the source of this value?
Our model offers a frictionless benchmark to identify the conditions needed to rationalize multiple rounds of rebuttals and counterarguments. Preferences are common knowledge, and a sender can provide as much information as he wants in a single round of disclosure. Hence, the only constraints on communication are strategic considerations. Our results imply that these strategic considerations are per se insufficient to justify multiple rounds of communication, except that moving twice may be useful for the first sender who moves.

Formally, we index senders by $i \in \{1, \dots, n\}$ and denote the stages at which senders move by $t = 1, \dots, T$ with $n \le T$.

**PROPOSITION 6.** Consider any sequential-persuasion game with $n$ senders and finite horizon $n \le T$. Then the set of stable beliefs is the same as in the sequential game with $n$ senders and $n$ periods in which, for each sender $i$, every move except the last one is eliminated.

Proposition 6 says that for any sequential-persuasion game where senders move multiple times, it is sufficient to examine a reduced-form game in which each sender moves only once in order to pin down its stable beliefs. For example, consider a game with three senders $i = 1, 2, 3$ and five stages. Exactly one sender moves at each stage and the order of moves is $1 \to 2 \to 3 \to 3 \to 2$. In words, sender 1 moves at the first stage, sender 2 moves at the second stage, sender 3 moves at the third and fourth stages, and then sender 2 moves again at the fifth stage. By Proposition 6, the game has the same set of stable beliefs as the game with three stages in which the order of moves is $1 \to 3 \to 2$. The intuition is very simple. Consider the incentive of a sender who can speak at stages $t_1$ and $t_2$, where $t_2 > t_1$. He may prefer to disclose gradually at multiple stages for two reasons.
First, he may want to withhold information at $t_1$ but release it at $t_2$ to avoid triggering undesirable disclosures by his opponents who move in between. Second, he may want to respond to the experiments of some senders, which are only observed at $t_2$. However, neither of these concerns is sufficient to rationalize gradual information disclosure in our model. The first concern is inconsistent with the concept of Nash equilibrium. As for the second, whatever the sender can disclose at early stages can also be disclosed at the last stage, making it redundant to speak multiple times. This is because a sender can deliver as much information to the decision maker as he wants.

Proposition 6 implies that if we begin with a game with $n$ rounds of persuasion and $n$ senders moving in the order $1, \dots, n$ and add a move for sender $i$ that precedes his move in the initial game, then the set of stable beliefs is unaffected. In contrast, if the additional move comes after player $i+1$, then the stable beliefs could change. However, in this case we can instead remove sender $i$'s move from the initial game, so the number of moves is irrelevant for the set of stable beliefs, whereas the order of moves matters.

However, there is one case in which multiple moves can be useful. Suppose that we start with a game in which $1 \to 2 \to 3$, so that each player moves only once. Change the game to $2 \to 1 \to 2 \to 3$, so that player 2 now moves first and third. By Proposition 6, the two games have the same set of stable beliefs. However, the two games may generate different equilibrium outcomes because the first mover in the game can choose a Bayes plausible distribution of stable beliefs. Hence, in a spirit similar to the literature on agenda-setting in political economy (Romer and Rosenthal (1978), McKelvey (1976), Chen and Eraslan (2017), and others), having the right sender speak first can be useful for the decision maker.
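The reduction behind Proposition 6 is purely mechanical: drop every move except each sender's last one. A minimal sketch (the function name is ours):

```python
def reduced_order(moves):
    """Collapse a multi-move protocol to one move per sender by keeping
    only each sender's last move, as in Proposition 6."""
    last = {s: t for t, s in enumerate(moves)}           # last stage of each sender
    return [s for t, s in enumerate(moves) if last[s] == t]

# The five-stage example from the text (sender 3 at stages three and four,
# sender 2 again at stage five) reduces to the order 1 -> 3 -> 2:
assert reduced_order([1, 2, 3, 3, 2]) == [1, 3, 2]

# Adding an extra *earlier* move for a sender never changes the reduced order:
assert reduced_order([2, 1, 2, 3]) == [1, 2, 3]
```

Note that the reduction preserves only the set of stable beliefs; as the text goes on to explain, the equilibrium outcome can still differ because the identity of the first mover matters.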
If the prior belief is stable, this choice does not matter, as any first mover is happy not to provide any information. If there are only two states, it is also irrelevant. This is because, for any distribution of beliefs $\tau$ that is not finer than $\tau'$, there is some $\mu$ in the support of $\tau$ that lies in the convex hull of the support of $\tau'$, a property that fails with more than two states. However, in general, it can be strictly better to be the first mover.

Notice that the claim is that adding a first move without giving up the existing turn is what is advantageous; swapping a move from later in the game to position 1 may be disadvantageous, because then the relevant order of play changes, which may affect the set of stable beliefs. A simple example illustrating this first-mover advantage is provided in Appendix B.2.

Proposition 6 may seem at odds with some real-world institutions that allow for multiple rounds of counterarguments. However, information transmission is frictionless in our model. We believe that to justify multiple rounds of counterarguments, which are ubiquitous in legal settings and debates, one has to look beyond purely strategic considerations and consider information asymmetries or constraints on the complexity of what can be communicated in a single argument.

## 4.4 *Simultaneous versus sequential persuasion*

Now we fix the set of senders and the order of consultation. When the decision maker receives disclosures from senders sequentially, she can decide to what extent (if any) to share the received information with subsequent senders. On the one hand, revealing this information disciplines, to some extent, subsequent senders' strategic manipulation of information.
On the other hand, as long as the decision maker's information remains imperfect, revealing this information allows subsequent senders to make targeted opportunistic disclosures. A natural starting point for studying this question is to compare two extreme cases: one in which each sender observes all suggestions made by previous senders, and one in which a sender observes no suggestions by other senders. The Bayesian persuasion game under the first policy corresponds to our baseline model, whereas the second policy corresponds to Gentzkow and Kamenica (2017a), where senders choose their experiments simultaneously and each sender may make his experiment arbitrarily correlated with any other experiment. We conclude that the equilibrium of the simultaneous game cannot be essentially less informative than that of the sequential game.

Suppose that $\tau \in \Delta(\Delta(\Omega))$ is an equilibrium distribution of beliefs in a simultaneous move persuasion game. By Proposition 2 in Gentzkow and Kamenica (2017a), this is true if and only if, for each $\mu$ in the support of $\tau$ and for each player $i$, the payoff from $\mu$ is weakly higher than from any mean-preserving spread $\tau'$ of $\mu$. Additionally, the same reasoning as in the sequential setup shows that we may restrict attention to distributions with support on $X$.

**PROPOSITION 7.** Suppose that $\tau \in \Delta(\Delta(\Omega))$ is an equilibrium distribution of beliefs in a simultaneous-persuasion game. Then there exists an outcome-equivalent equilibrium distribution $\tau' \in \Delta(X)$.

Hence, the difference between the sequential model and the simultaneous model boils down to a comparison that can be done vertex belief by vertex belief. A vertex belief in the support of an equilibrium of the sequential model must be unimprovable with respect to Bayes plausible deviations over the set of stable beliefs, that is, vertex beliefs that no sender would like to further refine.
In contrast, a belief in the support of an equilibrium in the simultaneous move game must be unimprovable with respect to any Bayes plausible deviation.

It follows that for both the simultaneous game and the sequential game, we need to verify that there is no vertex belief such that an admissible mean-preserving spread is preferred by some sender. The difference is that we have to check stability against arbitrary mean-preserving spreads in the simultaneous model, whereas some mean-preserving spreads can be ruled out in the sequential model because they would be undone by future senders. The following proposition follows.⁹

**PROPOSITION 8.** For generic preferences, there exists no pure-strategy equilibrium in the simultaneous game that is essentially less informative than the equilibrium in the sequential game.

⁹A similar comparison is made in the multi-sender cheap talk literature. The conditions under which a fully revealing equilibrium exists are weaker in a simultaneous move cheap talk model than in a sequential move one. See Ambrus and Takahashi (2008), Battaglini (2002), Kawai (2015), and Krishna and Morgan (2001).

PROOF. Suppose that the simultaneous game has an equilibrium essentially less informative than the finest equilibrium in the sequential game. Then there exists a belief $\mu$ such that (i) it is in the support of the equilibrium of the simultaneous move game and (ii) it is in the interior of the convex hull of the beliefs in the support of the finest equilibrium in the sequential move game. Since preferences are generic, $\mu$ cannot be a stable belief in the sequential move game. Hence, some sender in the simultaneous move game has a profitable deviation, a contradiction. $\square$

There are two important caveats to Proposition 8. Arbitrarily correlated experiments must be allowed, and the proposition applies only to pure-strategy equilibria in the simultaneous game.
For the sequential game, generic preferences rule out mixed strategies, and arbitrarily correlated experiments are without loss of generality; neither holds in the simultaneous game.

The need for pure strategies and the need for arbitrary correlation are related. Together these assumptions imply that a simultaneous move equilibrium must be immune to profitable deviations at any realized signal in the support of the joint equilibrium experiment. When experiments are independent or mixed strategies are used, it is impossible to fine-tune deviations in this way.

When arbitrary correlation is not allowed, Li and Norman (2018) show that adding a sender may result in a strict loss of information. From Proposition 4, we know that we cannot lose information in the sequential setting by adding the second player at the top, so combining their example with Proposition 4 yields an explicit case in which the sequential game is more informative when signals are independent. Similarly, Li and Norman (2018) provide an example in which a strictly less informative mixed-strategy equilibrium emerges when a sender is added. Again combining with Proposition 4, we obtain an example in which an equilibrium in the sequential model is more informative than an equilibrium in the simultaneous model while still allowing for arbitrarily correlated signals.

In each counterexample above, a fully revealing equilibrium also exists in the simultaneous game. In a related setting, Hu and Sobel (2019) argue that when multiple equilibria exist, this is not the most plausible equilibrium, because agents use strategies that are eliminated by iterations on weak dominance. However, this criticism does not apply to Proposition 8, as it covers *any* pure-strategy equilibrium in the simultaneous model.

Just as in the case of adding senders, the incompleteness of Blackwell's ordering implies that experiments may be noncomparable. However, we can again obtain a sharp characterization for the case with two states.
**PROPOSITION 9.** *Suppose that $\Omega = \{\omega_0, \omega_1\}$ and that there is an essentially unique equilibrium in the sequential game. Then any pure-strategy equilibrium in the simultaneous move game is weakly essentially more informative.*

The proof is similar to that of Proposition 5 and is relegated to the Appendix. While non-Blackwell-comparable distributions also exist in the case of two states, it is immediate that if the result fails, there is some belief $\mu$ in the support of an equilibrium with simultaneous moves that lies strictly between the smallest and the largest beliefs in the support of the equilibrium with sequential moves. But then at least one sender must have an incentive to split the beliefs onto the smallest and the largest sequential move beliefs; otherwise there must be an indifference, which is ruled out in the generic case. Again, Figure 4 illustrates how the two-state case differs from the general case.

We can also compare payoffs between simultaneous and sequential games. An implication of Proposition 9 is that the last sender prefers the sequential move game to the simultaneous move game. The same is true for the general model whenever equilibria can be ranked using the Blackwell order. Hence, the persuasion framework generates the opposite result compared to duopolistic quantity competition. The intuition is as follows: the Stackelberg leader is better off, and the follower worse off, than under Cournot competition because there is commitment value to overproduction, which allows the leader to grab a larger share of the pie. In contrast, in the persuasion model, the follower can always refine whatever the leader does. It is for this reason that the follower is made better off than in the simultaneous move game. Whether senders moving earlier are made better or worse off than in the simultaneous game is ambiguous.
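To make the vertex-by-vertex comparison concrete, here is a small two-state sketch. The vertex payoffs are hypothetical numbers chosen by us to mimic the qualitative shape of Figure 3, and the backward pass is a simplified rendering of the one-step stable-belief logic, not the paper's formal construction:

```python
X = [0, 1/3, 1/2, 2/3, 1]  # vertex beliefs (two states)

# Hypothetical sender payoffs at vertex beliefs, patterned on Figure 3:
# sender 1 has kinks at 1/3 and 1/2; sender 2 peaks at 2/3.
v1 = {0: 0.0, 1/3: 1.0, 1/2: 1.4, 2/3: 1.4, 1: 2.0}
v2 = {0: 1.0, 1/3: 0.0, 1/2: 0.0, 2/3: 2.0, 1: 1.0}

def split_payoff(v, mu, lo, hi):
    """Payoff from the mean-preserving spread of mu onto {lo, hi}:
    the chord of v between lo and hi, evaluated at mu."""
    if lo == hi:
        return v[mu]
    w = (hi - mu) / (hi - lo)
    return w * v[lo] + (1 - w) * v[hi]

def best_split(v, mu, targets):
    return max(split_payoff(v, mu, lo, hi)
               for lo in targets for hi in targets if lo <= mu <= hi)

def stable_set(order):
    """Backward pass: from the last mover to the first, keep a vertex belief
    only if the current sender cannot gain by splitting it onto beliefs that
    are stable for the senders moving after him."""
    stable = set(X)
    for v in reversed(order):
        stable = {mu for mu in stable
                  if v[mu] >= best_split(v, mu, stable) - 1e-12}
    return stable

# Order matters: with sender 2 last, the belief 2/3 survives; with sender 1
# last, only the degenerate beliefs survive (full revelation).
assert stable_set([v1, v2]) == {0, 2/3, 1}
assert stable_set([v2, v1]) == {0, 1}

# Simultaneous-move stability must hold against *arbitrary* spreads onto X,
# so the surviving set is (weakly) smaller -- the content of Proposition 8.
simultaneous = {mu for mu in X
                if all(v[mu] >= best_split(v, mu, X) - 1e-12 for v in (v1, v2))}
assert simultaneous == {0, 1}
```

Under these assumed payoffs, the simultaneous game supports only full revelation, while the sequential game with sender 2 moving last sustains the nondegenerate stable belief 2/3, echoing the comparison in the text.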
**4.5 Fully-revealing equilibria**

A shortcut to the problem of optimally designing the consultation structure is to look for conditions under which full revelation is an equilibrium. Then the decision maker can select senders and organize the order of moves to satisfy the conditions and achieve the complete information payoff.

Thanks to the one-step vertex characterization of the equilibrium outcome, we can identify an easy-to-check sufficient condition for the unique equilibrium to be fully revealing. One can rule out non-fully-revealing equilibria as long as, at each nondegenerate vertex belief, there exists at least one sender who prefers full revelation to the current belief being observed by the decision maker.

**PROPOSITION 10.** All equilibria are fully revealing if for each nondegenerate $\mu \in X$, there exists a sender $i$ such that

$$v_i(\sigma_d(\mu), \mu) < \sum_{\omega \in \Omega} u_i(\sigma_d(\delta_\omega), \omega)\mu(\omega), \quad (9)$$

where $\delta_\omega$ is the degenerate belief about state $\omega$.

Given the characterization of equilibrium outcomes in terms of stable vertex beliefs, the proof is obvious, so it is omitted. Condition (9) is easy to check, as it depends only on the decision maker's strategy and the current sender's payoff at a small number of vertices. Although persuasion is sequential, the one-step characterization makes it unnecessary to take the subsequent senders' actions into account, which explains why the condition is order invariant (it also applies to the simultaneous model and to settings with both sequential and simultaneous moves).

Proposition 10 suggests a simple method to achieve full revelation. The decision maker selects senders in such a way that the corresponding sequential-persuasion game does not have nondegenerate stable beliefs. To do so, it must be the case that every nondegenerate vertex belief is "disliked" by at least one sender.
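In two states, condition (9) reduces to comparing each sender's payoff at a vertex with the chord between the two degenerate beliefs. A sketch with hypothetical vertex payoffs (our own numbers, loosely patterned on Figure 3, not taken from the paper):

```python
def condition_9_holds(X, payoffs):
    """Two-state check of the sufficient condition in Proposition 10: at each
    nondegenerate vertex belief mu, some sender must strictly prefer full
    revelation, whose payoff is the chord between the degenerate beliefs.
    Returns the list of vertices at which the condition fails."""
    failures = []
    for mu in X:
        if mu in (0, 1):
            continue  # degenerate beliefs are excluded
        for v in payoffs:
            if v[mu] < (1 - mu) * v[0] + mu * v[1]:
                break  # this sender strictly prefers full revelation at mu
        else:
            failures.append(mu)  # no sender objects: condition fails here
    return failures

X = [0, 1/3, 1/2, 2/3, 1]
v1 = {0: 0.0, 1/3: 1.0, 1/2: 1.4, 2/3: 1.4, 1: 2.0}
v2 = {0: 1.0, 1/3: 0.0, 1/2: 0.0, 2/3: 2.0, 1: 1.0}

# Under these assumed payoffs, condition (9) fails only at 2/3: both senders
# weakly prefer 2/3 to full revelation there, which is exactly the kind of
# nondegenerate stable belief that blocks full revelation.
assert condition_9_holds(X, [v1, v2]) == [2/3]
```

The check visits each vertex once and each sender once per vertex, which is why the condition depends only on a small number of payoff evaluations and is order invariant.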
It is worth mentioning that condition (9) applies regardless of the extensive form of the game. As discussed in Sobel (2013), in most multi-sender strategic communication models, a fully-revealing equilibrium exists under very weak conditions. The key reason is that when others fully reveal the state, a sender has no way to further affect the outcome. However, this means that full revelation can be supported as an equilibrium outcome even if it is Pareto dominated in a simultaneous move game, making the prediction less convincing. This prompts some natural questions. In a multi-sender Bayesian persuasion game where senders move simultaneously, when should we expect full revelation as an equilibrium outcome if senders coordinate on a plausible equilibrium, and under what conditions is full revelation the unique equilibrium outcome? Proposition 10 offers some insight into these questions.

## 5. CONCLUDING REMARKS

We consider a sequential Bayesian persuasion model with multiple senders. Because it is without loss of generality to focus on equilibria corresponding to a finite set of beliefs, we establish that subgame perfect equilibria exist and generate a unique joint distribution over states and outcomes for generic preferences. Having a finite set of stable beliefs characterizing the equilibrium makes it easy to identify the unique equilibrium outcome and to apply the model to study changes in the extensive form. In particular, (i) adding a sender who moves first cannot reduce informativeness in equilibrium, and results in a weakly more informative equilibrium in the case of two states, (ii) it is without loss to let each sender speak only once, with the exception that the first mover may benefit from having a second move, and (iii) sequential persuasion cannot generate a more informative equilibrium than simultaneous persuasion and is weakly less informative in the case of two states.
# APPENDIX A: OMITTED PROOFS

## A.1 Proofs: One-step equilibrium and equilibrium construction

**PROOF OF PROPOSITION 1.** To proceed, we extend the definition of one-step equilibrium to histories off the path of play.

**DEFINITION 5.** Consider a strategy profile $\sigma'$ and let $h_i$ be an arbitrary history when sender $i \in \{1, \dots, n-1\}$ moves. Also, for $j \ge i$, let $h'_j|h_i$ be the implied continuation outcome path induced if each player $j \ge i$ follows $\sigma'_j$ after history $h_i$, and let $\sigma'|_{h_i}$ denote the continuation strategy profile.¹⁰ We say that $\sigma'|_{h_i}$ is *one-step* if $\bigvee_{j=i}^n \sigma'_j(h'_j|h_i) = \sigma'_i(h_i)$.

¹⁰That is, $h'_{i}|h_i = h_i$, $h'_{i+1}|h_i = (h_i, \sigma'_i(h_i))$, $h'_{i+2}|h_i = (h_i, \sigma'_i(h_i), \sigma'_{i+1}(h_i, \sigma'_i(h_i)))$, and so on.

Now we are ready to proceed. Fix a subgame perfect equilibrium $\sigma^*$ and let $h_i = (\pi_1, \dots, \pi_{i-1})$ be an arbitrary history when $i$ moves. Let $(\pi_i^*|_{h_i}, \dots, \pi_n^*|_{h_i})$ be the continuation equilibrium path following $h_i$. Let

$$ \pi^*|_{h_i} = \left( \bigvee_{j=1}^{i-1} \pi_j \right) \lor \left( \bigvee_{j=i}^{n} \pi_j^* \middle|_{h_i} \right) $$

be the joint experiment generated by the continuation equilibrium path. Replace the continuation equilibrium strategies following $h_i$ by $(\sigma'_i, \dots, \sigma'_n, \sigma'_d)$ where, on the continuation outcome path,

$$
\begin{align}
\sigma'_{i}(h_i) &= \pi^*|_{h_i} \\
\sigma'_{j}(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}) &= \pi^*|_{h_i} \quad \text{for } j \in \{i+1, \dots, n\} \tag{A.1} \\
\sigma'_{d}(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, s) &= \sigma^*_{d}(h_i, \pi_i^*|_{h_i}, \dots, \pi_n^*|_{h_i}, s).
\end{align}
$$

For a history in which $i$ plays $\pi^*|_{h_i}$ but some $j \in \{i+1, \dots, n\}$ deviates, let

$$
\begin{align}
\sigma'_{k}(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, \pi_j, \dots, \pi_{k-1}) &= \sigma^*_k(h_i, \pi^*_i|_{h_i}, \dots, \pi^*_{j-1}|_{h_i}, \pi_j, \dots, \pi_{k-1}) \tag{A.2} \\
\sigma'_{d}(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, \pi_j, \dots, \pi_n, s) &= \sigma^*_d(h_i, \pi^*_i|_{h_i}, \dots, \pi^*_{j-1}|_{h_i}, \pi_j, \dots, \pi_n, s),
\end{align}
$$

and for any other history, let

$$
\begin{align}
\sigma'_{j}(h_i, \pi_i, \ldots, \pi_{j-1}) &= \sigma^*_j(h_i, \pi_i, \ldots, \pi_{j-1}) && \text{for } j \in \{i+2, \ldots, n\} \tag{A.3} \\
\sigma'_{d}(h_i, \pi_i, \ldots, \pi_n, s) &= \sigma^*_d(h_i, \pi_i, \ldots, \pi_n, s).
\end{align}
$$

The decision maker plays an optimal response following any path of play after $h_i$, as after each continuation path, the response is selected as some response for an identical joint experiment. Moreover, if each $j \ge i$ plays in accordance with $\sigma'_j$, it follows from (A.1) that the implied distribution over $\Omega \times A$ is identical to the one implied when each $j \ge i$ plays in accordance with the original equilibrium $\sigma^*$. Also, the strategies in (A.3) imply that the continuation play after a deviation by $i$ is the same under $\sigma'$ as under $\sigma^*$, so $i$ has no incentive to deviate. As $\sigma^*$ is subgame perfect, the continuation play in (A.3) is trivially subgame perfect. Finally, (A.2) implies that if $j$ is the first player after $i$ to deviate from $\pi^*|_{h_i}$, then continuation play replicates the play after the same deviation following history $(h_i, \pi_i^*|_{h_i}, \dots, \pi_{j-1}^*|_{h_i})$ in the original equilibrium, so no $j \in \{i+1, \dots, n\}$ has an incentive to deviate.
Clearly, $\sigma'$ is now one-step after the history $h_i$; since $i$ and $h_i$ were arbitrary, adjusting $\sigma^*$ in accordance with (A.1), (A.2), and (A.3) following every sender $i$ and history $h_i$, we obtain a subgame perfect strategy profile that is one-step after every history $h$ and has the same equilibrium outcome. $\square$

PROOF OF LEMMA 1. For each program of the form (3), we consider a restricted *finite* linear program

$$
\begin{equation}
\begin{aligned}
\tilde{V}_n(\mu) &= \max_{\tau \in \Delta(X)} \sum_{\mu' \in X} v_n(\sigma_d(\mu'), \mu')\tau(\mu') \\
\text{s.t.} \quad & \sum_{\mu' \in X} \mu'\tau(\mu') = \mu,
\end{aligned}
\tag{A.4}
\end{equation}
$$

where $X$ is defined in (4). Hence, (A.4) is well-defined, as it is a finite-dimensional bounded linear program.

Pick any feasible solution $\tau$ to program (3). For each $a \in A$, define $\hat{M}(a) \subset M(a)$ as the beliefs under which the decision maker takes action $a$: $\hat{M}(a) = \{\mu \in \Delta(\Omega) \mid \sigma_d(\mu) = a\}$. Since $\hat{M}(a) \subset M(a)$, it follows that for each $\mu' \in \hat{M}(a)$, there exists $\lambda' \in \Delta(\{\mu_j^a\}_{j=1}^{J(a)})$ such that $\mu' = \sum_{j=1}^{J(a)} \lambda'_j \mu_j^a$. Hence, all beliefs that generate action $a$ under $\tau$ may be split onto the vertices of $M(a)$ and aggregated into $\hat{\tau}(\mu_j^a) = \sum_{\mu' \in \hat{M}(a)} \tau(\mu')\lambda'_j$, so that

$$
\sum_{j=1}^{J(a)} \hat{\tau}(\mu_j^a) = \sum_{\mu' \in \hat{M}(a)} \tau(\mu') \sum_{j=1}^{J(a)} \lambda'_j = \sum_{\mu' \in \hat{M}(a)} \tau(\mu').
$$

Since it is possible that $v_n(a, \mu_j^a) < v_n(a', \mu_j^a)$ for some vertex $\mu_j^a \in M(a)$ (in which case $\mu_j^a \notin \hat{M}(a)$, because breaking the tie in favor of $a'$ is better than breaking it in favor of $a$), it follows that the solution to (A.4) satisfies

$$
\begin{align*}
\tilde{V}_n(\mu) &\geq \sum_{a \in A} \sum_{j=1}^{J(a)} v_n(a, \mu_j^a) \hat{\tau}(\mu_j^a) = \sum_{a \in A} \sum_{j=1}^{J(a)} \sum_{\omega \in \Omega} u_n(a, \omega) \mu_j^a(\omega) \hat{\tau}(\mu_j^a) \\
&= \sum_{a \in A} \sum_{\omega \in \Omega} u_n(a, \omega) \sum_{\mu' \in \hat{M}(a)} \left[ \sum_{j=1}^{J(a)} \mu_j^a(\omega) \lambda'_j \right] \tau(\mu') \\
&= \sum_{a \in A} \sum_{\omega \in \Omega} u_n(a, \omega) \sum_{\mu' \in \hat{M}(a)} \mu'(\omega) \tau(\mu') \\
&= \sum_{\mu'} v_n(\sigma_d(\mu'), \mu') \tau(\mu'). \tag{A.5}
\end{align*}
$$

This holds for any feasible solution to (3). Hence, $\tilde{V}_n(\mu) \ge V_n(\mu)$. Moreover, any optimal solution to (A.4) is a feasible solution to (3), so $\tilde{V}_n(\mu) \le V_n(\mu)$. This establishes that solutions to (3) exist, that $\tilde{V}_n(\mu) = V_n(\mu)$, and that every $\tau \in \Delta(X)$ that solves (A.4) also solves (3). Finally, if $\tau$ solves (3) and $\mu'$ is such that $\tau(\mu') > 0$, there can be no $\mu_k^a \in M(a)$ such that $v_n(a, \mu_k^a) < v_n(a', \mu_k^a)$ and $\lambda_k' > 0$ for the weight on vertex $\mu_k^a$ in the convex combination such that $\mu' = \sum_{j=1}^{J(a)} \lambda'_j \mu_j^a$. This is seen by noting that this would generate a strict inequality in the first inequality of (A.5). $\square$

PROOF OF LEMMA 2. Proposition 1 implies that for every subgame perfect equilibrium, there is an outcome-equivalent equilibrium in which strategies are one-step for every
---PAGE_BREAK---

history, so we assume that $\sigma^*$ is such a strategy profile.
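The vertex-splitting operation used in Lemma 1, and again in the argument below, can be checked with a minimal numeric sketch. With a binary state, splitting a belief $\mu$ (the probability of state 1) onto two bracketing vertex beliefs amounts to solving a one-dimensional mean constraint; the vertices $1/10$ and $1/2$ below are hypothetical, not taken from the model.

```python
from fractions import Fraction as F

def split_onto_vertices(mu, v_lo, v_hi):
    """Weights (on v_lo, v_hi) of the unique lottery over the two vertex
    beliefs whose mean is mu; requires v_lo <= mu <= v_hi."""
    lam_hi = (mu - v_lo) / (v_hi - v_lo)
    lam_lo = 1 - lam_hi
    assert 0 <= lam_hi <= 1
    return lam_lo, lam_hi

# Split the belief 3/10 onto the hypothetical vertices 1/10 and 1/2.
lam_lo, lam_hi = split_onto_vertices(F(3, 10), F(1, 10), F(1, 2))
assert lam_lo * F(1, 10) + lam_hi * F(1, 2) == F(3, 10)   # mean preserved
print(lam_lo, lam_hi)   # 1/2 1/2
```

Because the decision maker's action is constant on the polytope spanned by the two vertices, such a split leaves the joint distribution over states and actions unchanged.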
Suppose that there is a sender $i$ and history $h_i$ with associated continuation experiment $\pi^*|_{h_i}$ such that there exists some realization $s'$ of experiment $\pi^*|_{h_i}$ that induces a decision maker posterior belief $\mu' \notin X$ with positive probability. Let $a' = \sigma_d^*(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, s')$ be the equilibrium action induced by $s'$. Furthermore, let $M(a')$ be the belief polytope where $a'$ is optimal and let $X(a') = \{\mu_j^{a'}\}_{j=1}^m$ be the set of vertices of $M(a')$. Since $M(a')$ is the convex hull spanned by $X(a')$, there exists $\lambda \in \Delta(X(a'))$ such that $\mu' = \sum_{j=1}^m \lambda_j \mu_j^{a'}$. Consider an alternative one-step strategy with $\pi^*|_{h_i}$ replaced by some $\pi'$ in which the realization $s'$ is replaced by the set $\{s_1, \dots, s_m\}$, where each $s_j$ generates posterior $\mu_j^{a'}$ and has unconditional probability $p(s')\lambda_j$, and everything else in $\pi'$ is as in the original equilibrium.¹¹ We also assume that the decision maker follows a strategy in which

$$ \sigma'_d(h, s) = \begin{cases} a' & \text{if } h = (h_i, \pi', \dots, \pi') \text{ and } s \in \{s_1, \dots, s_m\} \\ \sigma_d^*(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, s) & \text{if } h = (h_i, \pi', \dots, \pi') \text{ and } s \neq s' \text{ is a realization of } \pi^*|_{h_i} \\ \sigma_d^*(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, \pi_j, \dots, \pi_n, s) & \text{if } j \ge i \text{ is the first player playing } \pi_j \neq \pi' \\ \sigma_d^*(h, s) & \text{for any other } h, \end{cases} $$

where $\sigma_d^*$ is the strategy of the decision maker in the original equilibrium. For each $\mu_j^{a'} \in M(a')$, $\sigma_d'$ is a best response if $\sigma_d^*$ is a best response.
Also assume that all senders $j < i$ follow the original equilibrium strategy $\sigma_j^*$ and that each sender $j \in \{i, \dots, n\}$ plays

$$ \sigma'_j(h_j) = \begin{cases} \pi' & \text{if } h_j = (h_i, \pi', \dots, \pi') \\ \sigma_j^*(h_i, \pi^*|_{h_i}, \dots, \pi^*|_{h_i}, \pi_k, \dots, \pi_{j-1}) & \text{if } h_j = (h_i, \pi', \dots, \pi', \pi_k, \dots, \pi_{j-1}) \text{ with } \pi_k \neq \pi' \\ \sigma_j^*(h_j) & \text{if } h_j = (h_i, \pi_i, \dots, \pi_{j-1}) \text{ is such that } \pi_i \neq \pi' \end{cases} $$

and leaves everything as in the original equilibrium if $h_i$ is not reached by $\{1, \dots, i-1\}$. The continuation outcome path following $h_i$ is then $(\pi', \dots, \pi')$ and

$$ v_i(a', \mu') = \sum_{j=1}^{m} \lambda_j v_i(a', \mu_j^{a'}) = \sum_{j=1}^{m} \lambda_j v_i(\sigma'_d(\pi', \dots, \pi', s_j), \mu_j^{a'}), $$

while nothing is changed for signal realizations that are kept as in $\pi^*|_{h_i}$, so the distribution over states and outcomes is the same as in the original equilibrium if no player deviates after $h_i$. Moreover, if $j \ge i$ is the first sender deviating from playing $\pi'$ to $\pi_j$, the path of play replicates what happens if $j$ is the first sender to deviate from $\pi^*$ to $\pi_j$ in the original continuation equilibrium. Hence, there is no profitable deviation on the path. Finally, off-path play replicates off-path continuation play in the original equilibrium, so there is no profitable deviation off the path. Repeating the same argument for each

¹¹It is possible that $\lambda_j = 0$ for some $j$. Instead of eliminating these beliefs, we may simply generate a probability zero signal so as not to treat this case separately.
---PAGE_BREAK---

history $h_i$, every continuation experiment $\pi^*|_{h_i}$, and every realization $s'$ of $\pi^*|_{h_i}$ with corresponding belief $\mu' \notin X$ completes the proof. $\square$

PROOF OF PROPOSITION 2.
In what follows, we construct a subgame perfect equilibrium in which sender *i*'s equilibrium strategy coincides with the solution to program (7). That is, every sender *i* adds no information as long as $\mu \in X_i$ and posts an experiment that induces beliefs on $X_i$ after any history.

Fix a pair $(\sigma_d, \tau_n)$ such that the following statements hold:

* The decision maker's strategy $\sigma_d$ is optimal and breaks ties in favor of sender $n$.

* We have $\tau_n : \Delta(\Omega) \rightarrow \Delta(X_n)$, so that only vertex beliefs are induced following any history, which is without loss by Lemma 1. Additionally, $\tau_n$ leaves any belief in $X_n$ unchanged, so that $\tau_n(\mu|\mu) = 1$ for all $\mu \in X_n$.

Sender $n-1$'s problem can then be formulated as

$$
V_{n-1}(\mu) = \max_{\tau} \left[ \sum_{\mu' \in \Delta(\Omega)} \left( \sum_{\mu'' \in \Delta(X_n)} v_{n-1}(\sigma_d(\mu''), \mu'') \tau_n(\mu''|\mu') \right) \tau(\mu'|\mu) \right] \quad (A.6)
$$

s.t. $\displaystyle\sum_{\mu' \in \Delta(\Omega)} \mu' \tau(\mu' | \mu) = \mu.$

That is, sender $n-1$ chooses a mean-preserving spread that splits an interim belief $\mu$ into updated interim beliefs according to $\tau$, and for each interim belief $\mu'$ induced by $\tau$, sender $n$ further splits it onto $X_n$ according to the selected $\tau_n$.

Fix an arbitrary interim belief $\mu$ and a feasible strategy $\tau$ for program (A.6). Additionally, let $\tau_n$ be any best response by player $n$ that induces only vertex beliefs following any history and also satisfies $\tau_n(\mu|\mu) = 1$ for every $\mu \in X_n$. Together, $\tau$ and $\tau_n$ induce a compound mean-preserving spread $\tau_{n-1}: \Delta(\Omega) \to \Delta(X_n)$ defined as

$$
\tau_{n-1}(\mu''|\mu) = \sum_{\mu' \in \Delta(\Omega)} \tau_n(\mu''|\mu')\tau(\mu'|\mu).
$$

Since sender *n* always splits beliefs onto vertices, every compound mean-preserving spread $\tau_{n-1}$ has support on vertex beliefs only. Hence, every feasible solution to program (A.6) is also feasible in the restricted program (7) for $i = n-1$, so

$$
\tilde{V}_{n-1}(\mu) \geq V_{n-1}(\mu)
$$

for every $\mu$. In program (A.6), it is feasible to choose any mean-preserving spread onto $X_n$. Since sender $n$ does not add information when $\mu \in X_n$,

$$
\tilde{V}_{n-1}(\mu) \leq V_{n-1}(\mu)
$$

holds for every $\mu$. Notice that this inequality relies crucially on our restriction on behavior on $X_n$. If sender $n$ adds information at some interim belief $\mu \in X_n$, some feasible mean-preserving spreads in program (7) may no longer be feasible in program (A.6).
---PAGE_BREAK---

Consequently, $V_{n-1}(\cdot) = \tilde{V}_{n-1}(\cdot).$¹² Since $\tilde{V}_{n-1}(\mu)$ is well defined, an optimal mean-preserving spread $\tau_{n-1}$ exists for sender $n-1$ and has support on $X_{n-1}$. Whenever there exist multiple such $\tau_{n-1}$, we select one such that sender $n-1$ adds no information at every $\mu \in X_{n-1}$, ensuring that the best response of sender $n-2$ is well defined. By induction, continuation strategies exist such that best responses for senders $1, \dots, n-3$ are also well defined. $\square$

## A.2 Proofs: Outcome uniqueness

The proof of Proposition 3 has two parts. First, we state and prove a few intermediate results. Then we use these intermediate results to prove the uniqueness of the equilibrium outcome.

### A.2.1 Preliminaries
The following corollary is more or less a direct consequence of Proposition 1.

**COROLLARY 1.** Fix an equilibrium $\sigma^*$ and a history $h_i$.
For any deviation $\sigma'_i$ by sender $i$, there exists a one-step continuation strategy profile $\sigma^\dagger$ of senders $i+1, \dots, n$ after history $h_i$ such that the following statements hold:

(i) Strategy profiles $(\sigma_i^\dagger, \dots, \sigma_n^\dagger, \sigma_d^*)$ and $(\sigma'_i, \sigma_{i+1}^*, \dots, \sigma_d^*)$ are outcome equivalent.

(ii) Strategy profile $(\sigma_{i+1}^\dagger, \dots, \sigma_n^\dagger, \sigma_d^*)$ is a subgame perfect equilibrium of the continuation game after history $(h_i, \sigma_i^\dagger(h_i))$.

(iii) The resulting posterior beliefs are vertices.

**PROOF.** If $i = n$, then there is nothing to prove, so assume that $i < n$.

LEMMA 4. Fix any $i \in \{1, \dots, n\}$, any $\mu \in X_i$, any $Y \subseteq X_i$, and any mean-preserving spread $\tau \in \Delta(Y)$ of $\mu$. Then either $\sigma_d(\mu') = \sigma_d(\mu)$ for every $\mu'$ in the support of $\tau$, or

$$ \hat{v}_i(\mu) > \sum_{\mu' \in Y} \hat{v}_i(\mu')\tau(\mu') \quad (A.10) $$

for a set of sender $i$ Bernoulli utility functions over $A \times \Omega$ with full Lebesgue measure.

PROOF. If $\sigma_d(\mu') = \sigma_d(\mu)$ for each $\mu \in X_i$ and every $i$, there is nothing to prove. Suppose instead that there exist $\mu \in X_i$, $Y \subset X_i$, and $\tau \in \Delta(Y)$ such that $\mu = \sum_{\mu' \in Y} \mu'\tau(\mu')$ and that (A.10) is violated for sender $i$. Denote $\{\mu_1, \dots, \mu_{m+1}\} = Y$ and $\tau = (\tau_1, \dots, \tau_{m+1})$, and write the failure of (A.10) as

$$ \hat{v}_i(\mu) = \sum_{j=1}^{m+1} \hat{v}_i(\mu_j)\tau_j. $$

If $Y$ is an affinely independent set, there is a unique mean-preserving spread of $\mu$ onto $Y$. In this case, the next step, in which we find an affinely independent set that spans $\mu$, can be skipped. The case that requires more work is when $Y$ is an affinely dependent set of vectors. This is true if and only if $\{\mu_2 - \mu_1, \dots, \mu_{m+1} - \mu_1\}$ are linearly dependent. Then there are scalars $(\alpha_2, \dots, \alpha_{m+1}) \neq (0, \dots, 0)$ such that $\sum_{j=2}^{m+1} \alpha_j(\mu_j - \mu_1) = 0$.
So

$$ \left(-\sum_{j=2}^{m+1} \alpha_j\right) \mu_1 + \sum_{j=2}^{m+1} \alpha_j \mu_j = \sum_{j=1}^{m+1} \alpha_j \mu_j = 0 $$

by defining $\alpha_1 = -\sum_{j=2}^{m+1} \alpha_j$, which also implies that $\sum_{j=1}^{m+1} \alpha_j = 0$. For every $\beta$, we have

$$ \mu = \sum_{j=1}^{m+1} \mu_j \tau_j = \sum_{j=1}^{m+1} \mu_j \tau_j - \beta \sum_{j=1}^{m+1} \alpha_j \mu_j = \sum_{j=1}^{m+1} (\tau_j - \beta \alpha_j) \mu_j. $$

Let $I^+ = \{j \in \{1, \dots, m+1\} \mid \tau_j > 0\}$ and let $j^*$ be chosen so that $0 < \frac{\tau_{j^*}}{\alpha_{j^*}} \le \frac{\tau_j}{\alpha_j}$ for all $j$ such that $\alpha_j > 0$. Such a $j^*$ exists, as there is at least one $j$ such that $\alpha_j > 0$. Let $\beta^* = \frac{\tau_{j^*}}{\alpha_{j^*}}$ and

$$ \tau_j^* = \tau_j - \frac{\tau_{j^*}}{\alpha_{j^*}} \alpha_j. $$

It follows that $\tau_j^* \ge 0$ for all $j$, that $\sum_{j=1}^{m+1} \tau_j^* = 1$, and that $\tau_{j^*}^* = 0$. Hence, we can remove $\mu_{j^*}$ from $\{\mu_1, \dots, \mu_{m+1}\}$ and still find a convex combination that generates $\mu$. By induction, there exists an affinely independent set of vectors $\{\hat{\mu}_1, \dots, \hat{\mu}_k\} \subseteq Y$ such that $\mu$ is in its convex hull, implying that there exists a unique solution $\hat{\tau}$ such that $\mu = \sum_{j=1}^k \hat{\mu}_j \hat{\tau}_j$.¹⁴

¹⁴If $\hat{\tau} \neq \tau$ are distinct mean-preserving spreads of $\mu$ onto $\{\hat{\mu}_1, \dots, \hat{\mu}_k\}$, then $0 = \sum_{i=1}^k \hat{\mu}_i (\hat{\tau}_i - \tau_i)$ or, equivalently, $0 = \sum_{i=2}^k (\hat{\mu}_i - \hat{\mu}_1)(\hat{\tau}_i - \tau_i)$, which implies that $\{\hat{\mu}_1, \dots, \hat{\mu}_k\}$ is affinely dependent, as $\hat{\tau}_i - \tau_i \neq 0$ for at least one $i \in \{2, \dots, k\}$.
---PAGE_BREAK---

If $\sigma_d(\hat{\mu}_j) = \sigma_d(\hat{\mu}_{j'})$ for every pair of beliefs in $\{\hat{\mu}_1, \dots, \hat{\mu}_k\}$, then $\mu$ and $\tau$ are outcome equivalent.
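The elimination step in the proof of Lemma 4 can be checked numerically. The sketch below uses made-up collinear beliefs, weights, and dependence coefficients, and performs one reduction step in exact rational arithmetic.

```python
from fractions import Fraction as F

def eliminate_one(tau, alpha):
    """One elimination step: given weights tau and coefficients alpha with
    sum_j alpha_j * mu_j = 0 and sum_j alpha_j = 0, shift weights along
    alpha until the weight of some point with alpha_j > 0 hits zero."""
    beta = min(t / a for t, a in zip(tau, alpha) if a > 0)  # beta* = tau_{j*}/alpha_{j*}
    new_tau = [t - beta * a for t, a in zip(tau, alpha)]
    assert all(t >= 0 for t in new_tau) and sum(new_tau) == 1
    return new_tau

def barycenter(points, weights):
    return tuple(sum(w * p[k] for w, p in zip(weights, points))
                 for k in range(len(points[0])))

# Three collinear beliefs in Delta(Omega) with mu2 = (mu1 + mu3)/2,
# so alpha = (1, -2, 1) is an affine dependence (coefficients sum to 0).
mus = [(F(1), F(0), F(0)), (F(1, 2), F(1, 2), F(0)), (F(0), F(1), F(0))]
alpha = [F(1), F(-2), F(1)]
tau = [F(1, 4), F(1, 2), F(1, 4)]    # a spread whose barycenter is mu2

new_tau = eliminate_one(tau, alpha)
assert barycenter(mus, new_tau) == barycenter(mus, tau)  # same mean, smaller support
```

Iterating this step until no affine dependence remains is exactly the induction in the proof: the procedure terminates in an affinely independent support containing $\mu$ in its convex hull.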
If $\sigma_d(\hat{\mu}_j) \neq \sigma_d(\hat{\mu}_{j'})$ for some pair of beliefs in $\{\hat{\mu}_1, \dots, \hat{\mu}_k\}$ and

$$ \hat{v}_i(\mu) = \sum_{j=1}^{k} \hat{v}_i(\hat{\mu}_j) \hat{\tau}_j, $$

then $\hat{v}_i : \Delta(\Omega) \rightarrow \mathbb{R}$ belongs to a Lebesgue measure zero set of utility functions.¹⁵ We conclude that for every affinely independent subset of $X_i$, there is a Lebesgue measure zero set of utility functions for $i$ that can generate indifferences that are not outcome equivalent. There is a finite number of affinely independent subsets, and every mean-preserving spread of $\mu$ with support on $X_i$ can be written in the form

$$ \mu = \sum_{l=1}^{L} \beta_l \sum_{j=1}^{k(l)} \hat{\mu}_j(l) \tau_j(l), $$

where $\beta_l \ge 0$ for each $l$, $\sum_{l=1}^L \beta_l = 1$, and every set $\{\hat{\mu}_1(l), \dots, \hat{\mu}_{k(l)}(l)\}$ is affinely independent. Hence, if (A.10) holds for every affinely independent subset of $X_i$, it holds for all subsets of $X_i$. The result follows. $\square$

The first case of Lemma 4 simply points out that the decision maker's action may be constant on a subset of stable beliefs. This is relevant because there may exist a nontrivial mean-preserving spread $\tau \in \Delta(X_i)$ of $\mu \in X_i$, and if $\sigma_d(\mu') = \sigma_d(\mu)$ for each $\mu'$ in the support of $\tau$, the sender is indifferent. However, this multiplicity is not essential because staying at $\mu$ or splitting beliefs in accordance with $\tau$ generates an identical joint distribution over actions and states.

In the second case of Lemma 4, $X_i$, the set of stable beliefs of a sequential game played by senders $i, i+1, \dots, n$, contains beliefs that result in at least two distinct actions according to $\sigma_d$.
Suppose that $\tau \in \Delta(Y)$ is a vector such that (A.10) does not hold, implying that

$$ \hat{v}_i(\mu) = \sum_{\mu' \in Y} \hat{v}_i(\mu') \tau(\mu'), \quad (\text{A.11}) $$

as otherwise $\mu$ could not be a stable belief. If $Y$ is an affinely independent set of vectors, there is a unique mean-preserving spread of $\mu$ onto $Y$ and it should be clear that (A.11) can only hold for a nongeneric set of functions $\hat{v}_i : \Delta(\Omega) \to \mathbb{R}$.¹⁶ If, instead, $Y$ is an affinely dependent set, then there must be an affinely independent subset of $Y$ such that (A.11) holds for some mean-preserving spread with support on the affinely independent subset. For each affinely independent subset of $Y$, this requires nongeneric preferences, and since there is a finite number of senders and affinely independent subsets, the result follows by induction.

¹⁵By repeating the steps in (A.12), (A.13), and (A.14) below, the measure zero condition in belief space implies measure zero in terms of maps $u_i: A \times \Omega \to \mathbb{R}$.

¹⁶This also implies that a nongeneric set of Bernoulli utility functions $u_i: A \times \Omega \to \mathbb{R}$ can satisfy the equality.
---PAGE_BREAK---

In a similar spirit, we establish that indifferences over distinct distributions over stable continuation beliefs are rare.

LEMMA 5. Fix any $i \in \{1, \dots, n\}$. Then

$$ \sum_{\mu' \in Y} \hat{v}_i(\mu')\tau(\mu') \neq \sum_{\mu' \in \tilde{Y}} \hat{v}_i(\mu')\tilde{\tau}(\mu') $$

for every $\mu \in X \cup \{\mu_0\}$ and every distinct pair $(\tau, Y)$, $(\tilde{\tau}, \tilde{Y})$, with $Y \subseteq X_i$ and $\tilde{Y} \subseteq X_i$ being affinely independent sets and $\tau$ ($\tilde{\tau}$) being the unique mean-preserving spread of $\mu$ onto $Y$ ($\tilde{Y}$), for a set of sender $i$ Bernoulli utility functions over $A \times \Omega$ with full Lebesgue measure.

PROOF.
Let $X(\mu_0)$ be the support of the unique equilibrium given prior $\mu_0$ and let $\tau$ be the associated equilibrium distribution. We note that $\tau$ and $\lambda$ are the unique vectors such that

$$ \mu_0 = \sum_{\mu \in X(\mu_0)} \mu \tau(\mu) $$

$$ \tilde{\mu}_0 = \sum_{\mu \in X(\mu_0)} \mu \lambda(\mu). $$

Hence, for any $\beta$,

$$ \mu_0 = \sum_{\mu \in X(\mu_0)} \mu \bigl(\tau(\mu) - \beta\lambda(\mu)\bigr) + \beta\tilde{\mu}_0, $$

and all coefficients are positive if $\beta$ is small enough. Also, we assume that $\tilde{\tau}$ has support on $X(\tilde{\mu}_0) \neq X(\mu_0)$, so that

$$ \tilde{\mu}_0 = \sum_{\mu \in X(\tilde{\mu}_0)} \mu \tilde{\tau}(\mu). $$

This implies that when the prior is $\mu_0$, it is feasible to split beliefs over $X(\mu_0) \cup X(\tilde{\mu}_0)$ in accordance with

$$ \{\tau(\mu) - \beta\lambda(\mu) + \beta\tilde{\tau}(\mu)\}_{\mu \in X(\mu_0) \cup X(\tilde{\mu}_0)}, $$

provided that $\beta$ is small enough. But since $\tau$ is the generically unique equilibrium given $\mu_0$, this is suboptimal, so

$$
\begin{aligned}
\sum_{\mu \in X(\mu_0)} \hat{v}_1(\mu)\tau(\mu) &> \sum_{\mu \in X(\mu_0) \cup X(\tilde{\mu}_0)} \hat{v}_1(\mu)[\tau(\mu) - \beta\lambda(\mu) + \beta\tilde{\tau}(\mu)] \\
&= \sum_{\mu \in X(\mu_0)} \hat{v}_1(\mu)\tau(\mu) + \beta \left[ \sum_{\mu \in X(\tilde{\mu}_0)} \hat{v}_1(\mu)\tilde{\tau}(\mu) - \sum_{\mu \in X(\mu_0)} \hat{v}_1(\mu)\lambda(\mu) \right].
\end{aligned}
$$

Hence,

$$ \sum_{\mu \in X(\tilde{\mu}_0)} \hat{v}_1(\mu)\tilde{\tau}(\mu) < \sum_{\mu \in X(\mu_0)} \hat{v}_1(\mu)\lambda(\mu), $$

which contradicts that $\tilde{\tau}$ is better than $\lambda$ for prior belief $\tilde{\mu}_0$. $\square$
---PAGE_BREAK---

A.2.2 *Proof of Proposition 3* Lemma 2 and Corollary 1 imply that for senders $i = 2, \dots, n$, we need only consider responses at $X$ onto $\Delta(X_i)$. Lemma 4 implies that, generically, each sender has a strict incentive not to refine any $\mu \in X_i$.
By linearity, an optimal mean-preserving spread with support on an affinely independent set must exist, so Lemma 5 implies that for generic preferences, each deviation onto $\Delta(X)$ generates an essentially unique response and, since every deviation is equivalent to a deviation onto $\Delta(X)$, we conclude that the off-equilibrium path is generically unique. Finally, Lemma 5 applied to sender 1 also implies that sender 1 generically has a unique optimal mean-preserving spread of the prior onto the set of stable beliefs.

Assume that there exist two distinct affinely independent sets of vectors $Y \subseteq X_i$ and $\tilde{Y} \subseteq X_i$ such that

$$ \sum_{\mu' \in Y} \hat{v}_i(\mu')\tau(\mu') = \sum_{\mu' \in \tilde{Y}} \hat{v}_i(\mu')\tilde{\tau}(\mu'), \quad (A.12) $$

where $\tau$ is the unique mean-preserving spread of $\mu$ onto $Y$ and $\tilde{\tau}$ is the unique mean-preserving spread of $\mu$ onto $\tilde{Y}$. Also assume that at least two distinct actions are chosen by the decision maker. In terms of the primitive preferences over $A \times \Omega$, (A.12) can be rewritten as

$$ \sum_{\mu' \in Y} \sum_{\omega \in \Omega} [u_i(\sigma_d(\mu'), \omega)\mu'(\omega)]\tau(\mu') = \sum_{\mu' \in \tilde{Y}} \sum_{\omega \in \Omega} [u_i(\sigma_d(\mu'), \omega)\mu'(\omega)]\tilde{\tau}(\mu'). \quad (A.13) $$

Notice that if for each $a \in A$ we let $Y(a) = \{\mu' \in Y \text{ s.t. } \sigma_d(\mu') = a\}$, and symmetrically for $\tilde{Y}(a)$, we may rewrite (A.13) further as

$$ \sum_{a \in A} \left\{ \sum_{\omega \in \Omega} u_i(a, \omega) \left[ \sum_{\mu' \in Y(a)} \mu'(\omega)\tau(\mu') - \sum_{\mu' \in \tilde{Y}(a)} \mu'(\omega)\tilde{\tau}(\mu') \right] \right\} = 0. \quad (A.14) $$

Since $\tau$ and $\tilde{\tau}$ are unique, this defines a lower dimensional subspace of $|A \times \Omega|$-dimensional Euclidean space, so the set of sender $i$ payoff functions such that (A.12) holds has measure zero.
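The genericity argument behind (A.14) can be made concrete. In the sketch below, everything is hypothetical: two states, two actions, two distinct spreads of $\mu = (1/2, 1/2)$ onto vertex beliefs, and an assumed decision rule $\sigma_d$. The bracketed coefficients of (A.14) are computed exactly; since they are not all zero, indifference between the two spreads pins $u_i$ to a hyperplane of $\mathbb{R}^{|A \times \Omega|}$, a Lebesgue-null set.

```python
from fractions import Fraction as F

# Hypothetical primitives: two distinct spreads of mu = (1/2, 1/2) onto
# vertex beliefs, and an assumed decision rule sigma_d on those beliefs.
Y       = {(F(1), F(0)): F(1, 2), (F(0), F(1)): F(1, 2)}
Y_tilde = {(F(3, 4), F(1, 4)): F(1, 2), (F(1, 4), F(3, 4)): F(1, 2)}
sigma_d = {(F(1), F(0)): 'a0', (F(3, 4), F(1, 4)): 'a0',
           (F(0), F(1)): 'a1', (F(1, 4), F(3, 4)): 'a1'}

# Coefficient on u_i(a, omega) in (A.14): the two spreads' terms netted out.
coef = {(a, w): F(0) for a in ('a0', 'a1') for w in ('w0', 'w1')}
for spread, sign in ((Y, 1), (Y_tilde, -1)):
    for mu_p, weight in spread.items():
        for k, omega in enumerate(('w0', 'w1')):
            coef[(sigma_d[mu_p], omega)] += sign * weight * mu_p[k]

# Both spreads average to mu, yet the coefficient vector is nonzero, so
# the indifference (A.12) holds only for utilities on one hyperplane.
assert any(c != 0 for c in coef.values())
```

A randomly drawn utility vector would satisfy the resulting linear equation with probability zero, which is the sense in which the indifference is nongeneric.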
Since $X_i$ is finite, there is a finite set of pairs of affinely independent sets spanning $\mu$, and we consider only $\mu$ from the finite set $X \cup \{\mu_0\}$. The result follows.

## A.3 *Proofs: Applications*

PROOF OF PROPOSITION 6. Since the stage and the player identity no longer coincide, let $X_i^t$ denote the stable beliefs in the truncated game starting with player $i$ moving at stage $t$. Suppose that $t$ is the final move of player $i$ and that $i$ also moves at $t'$, with $t' < t$. If $t'$ and $t$ are consecutive stages, it is immediate that $X_i^t = X_i^{t'}$, so assume that there exists a player $j$ moving in between $t'$ and $t$. Without loss of generality, let $j$ move at time $t' + 1$ and let $X_j^{t'+1} \subseteq X_i^t$ be the set of stable beliefs in the truncated game starting
---PAGE_BREAK---

with player *j* at time *t*' + 1. We claim that $X_i^{t'} = X_j^{t'+1}$, that is, that player *i* moving at *t'* does not affect the set of stable beliefs in the truncated game starting at the next stage, so the move by *i* at *t'* is redundant. For contradiction, assume that the move by *i* at *t'* refines the set of stable beliefs, so that there exists $\mu \in X_j^{t'+1}$ such that $\mu \notin X_i^{t'}$. But if $\mu \in X_j^{t'+1}$, then $\mu \in X_i^t$, which implies that *i* has no incentive to create a mean-preserving spread of $\mu$ with support in $X_i^t \subseteq X_i^{t'}$. Since any mean-preserving spread that is feasible at time *t*' is feasible also at *t*, this contradicts $X_i^t$ being the set of stable beliefs in the truncated game starting at time *t*. Since $t' < t$ and *i* were arbitrary, the proposition follows. $\square$

PROOF OF PROPOSITION 7. Consider some $\mu$ in the support of $\tau$ that is not in $\Delta(X)$. Assume that $\sigma_d(\mu) = a$ is the action taken by the decision maker following $\mu$ and let $M(a)$ be the set of beliefs for which $a$ is optimal.
Replace $\mu$ with any mean-preserving spread $\tau'$ of $\mu$ onto beliefs in $M(a)$, suppose that $\sigma_d(\mu') = a$ for each $\mu'$ in the support of $\tau'$, and let the probability of any other belief in $\tau$ be unchanged. Clearly, this belief distribution is outcome equivalent to $\tau$. To see that it must also be an equilibrium, assume that it is not. Then there exist some player $i$, a belief $\mu'$ in the support of $\tau'$, and a mean-preserving spread $\tau''$ of $\mu'$ such that $i$ strictly prefers $\tau''$ to $\mu'$. But then $i$ strictly prefers the compound mean-preserving spread constructed by first splitting $\mu$ into $\tau'$ and then further splitting $\mu'$ into $\tau''$. Since this compound mean-preserving spread is a feasible deviation for $i$ given belief $\mu$, this contradicts $\mu$ being in the support of an equilibrium distribution. Since $\tau'$ is any mean-preserving spread with support in $M(a)$, we may choose one with support on the vertices of $M(a)$, which is always possible. The proof is completed by noting that the argument can be repeated for any $\mu$ not in $\Delta(X)$. $\square$

PROOF OF PROPOSITION 9. Fix the prior $\mu_0$ and begin by noting that for the result to fail, some information must be provided in the sequential model. Hence, without loss of generality there must be a pair $(\mu_L, \mu_H) \in X$ such that $\mu_L < \mu_0 < \mu_H$, where $\mu_L$ and $\mu_H$ are in the support of the equilibrium in the sequential model. Suppose that there is some $\mu$ with $\mu_L < \mu < \mu_H$ that is in the support of an equilibrium in the simultaneous move model. As in the proof of Proposition 8, there are two cases. First, suppose that the action is the same at $\mu_L$ and $\mu_H$. Then putting positive probability on $\mu$ or on the unique mean-preserving spread onto $\{\mu_L, \mu_H\}$ has no effect on the distribution over actions and states, so putting positive probability on $\mu$ does not affect the essential informativeness.
Second, suppose that $\mu_L$ and $\mu_H$ generate distinct actions. Then for $\mu$ to be part of an equilibrium in the simultaneous game, all senders must weakly prefer $\mu$ to the unique mean-preserving spread onto $\{\mu_L, \mu_H\}$. But then $\mu$ must be an equilibrium (not necessarily on a vertex) in the sequential game, which, since $\mu$ and the mean-preserving spread onto $\{\mu_L, \mu_H\}$ generate different distributions over states and actions, contradicts essential uniqueness. Hence, an equilibrium in the simultaneous game is at least as informative as the finest equilibrium of the sequential game. $\square$
---PAGE_BREAK---

## APPENDIX B: EXAMPLES

### B.1 Non-Markov equilibrium

In this section, we consider an example that has a non-Markov equilibrium that is qualitatively different from the Markov equilibrium. Suppose that $\Omega = \{\omega_0, \omega_1\}$ and, writing $\mu$ for the probability of $\omega_1$, the optimal choice correspondence for the decision maker is

$$ \sigma(\mu) = \begin{cases} \{a_1, a_2\} & \text{if } \mu \le 1/10 \\ \{a_3\} & \text{if } 1/10 \le \mu \le 9/10 \\ \{a_4, a_5\} & \text{if } \mu \ge 9/10. \end{cases} $$

Also suppose that the two senders have state-independent preferences

$$ u_1(a, \omega) = \begin{cases} 3 & \text{if } a \in \{a_1, a_4\} \\ 1 & \text{if } a = a_3 \\ 0 & \text{if } a \in \{a_2, a_5\}, \end{cases} \qquad u_2(a, \omega) = \begin{cases} 3 & \text{if } a \in \{a_2, a_5\} \\ 1 & \text{if } a = a_3 \\ 0 & \text{if } a \in \{a_1, a_4\}. \end{cases} $$

Consider a Markov equilibrium. Allowing for mixed strategies, let $\sigma_1(0)$ be the probability of $a_1$ given belief $\mu = 0$ and let $\sigma_4(1)$ be the probability of $a_4$ given belief $\mu = 1$. Suppose that the decision maker has full information.
Then the payoffs of sender 1 and sender 2 are $3[\sigma_1(0) + \sigma_4(1)]/2$ and $3[2 - \sigma_1(0) - \sigma_4(1)]/2$, respectively. These payoffs sum to 3, so at least one sender obtains a payoff of at least $3/2$. Hence, beliefs in $[1/10, 9/10]$ can be ruled out in any Markov equilibrium. In contrast, if the decision maker always breaks the tie against the sender who first splits the belief into $[0, 1/10]$ or $[9/10, 1]$, each sender may as well not provide any information, and qualitatively different equilibria with action $a_3$ can be supported by such non-Markov strategies.

### B.2 First-mover advantage

To illustrate the first-mover advantage, assume that there are three states, i.e., $\Omega = \{\omega_1, \omega_2, \omega_3\}$, and that the prior is $(1/3, 1/3, 1/3)$. For simplicity, take the set of stable beliefs as a primitive. We assume that the stable vertex beliefs are $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$, $e_3 = (0, 0, 1)$, $\mu_1 = (1/2, 1/2, 0)$, and $\mu_2 = (0, 1/2, 1/2)$. There can be an arbitrary number of senders, but we consider just two of them, labeled 1 and 2. Let their expected utilities evaluated at the stable beliefs be

$$ (\hat{v}_1(e_1), \hat{v}_1(e_2), \hat{v}_1(e_3), \hat{v}_1(\mu_1), \hat{v}_1(\mu_2)) = (0, -1, -1, 0, 1) $$

$$ (\hat{v}_2(e_1), \hat{v}_2(e_2), \hat{v}_2(e_3), \hat{v}_2(\mu_1), \hat{v}_2(\mu_2)) = (-1, -1, 0, 1, 0). $$

While $e_1$, $e_2$, and $e_3$ are trivially stable, we need to check the stability of $\mu_1$ and $\mu_2$. We have that $\mu_1$ is stable because

$$ \hat{v}_1(\mu_1) = 0 > \frac{1}{2}\hat{v}_1(e_1) + \frac{1}{2}\hat{v}_1(e_2) = -\frac{1}{2} $$

$$ \hat{v}_2(\mu_1) = 1 > \frac{1}{2}\hat{v}_2(e_1) + \frac{1}{2}\hat{v}_2(e_2) = -1, $$
It follows that in the game in which sender 1 moves first, the equilibrium will be that sender 1 puts probability 1/3 on $e_1$ and 2/3 on $\mu_2$, giving player 1 an expected utility of 2/3 and player 2 an expected utility of -1/3. In contrast, when sender 2 moves first, $\mu_1$ is played with probability 2/3 and $e_3$ is played with probability 1/3, resulting in the opposite expected utilities. + +REFERENCES + +Ambrus, Attila and Satoru Takahashi (2008), "Multi-sender cheap talk with restricted state spaces." *Theoretical Economics*, **3**, 1-27. [642, 657] + +Au, Pak Hung and Keiichi Kawai (2019), “Competitive disclosure of correlated information.” *Economic Theory*. [641] + +Au, Pak Hung and Keiichi Kawai (2020), “Competitive information disclosure by multiple senders.” *Games and Economic Behavior*, **119**, 56–78. [641] + +Aumann, Robert J. and Michael B. Maschler (1995), *Repeated Games of Incomplete Information, the Zero-Sum Extensive Case*. MIT. [640] + +Battaglini, Marco (2002), “Multiple referrals and multidimensional cheap talk.” *Econometrica*, **70**, 1379–1401. [642, 657] + +Bergemann, Dirk and Stephen Morris (2016), “Bayes correlated equilibrium and the comparison of information structures in games.” *Theoretical Economics*, **11**, 487–522. [640, 651] + +Bhattacharya, Sourav and Arijit Mukherjee (2013), “Strategic information revelation when experts compete to influence.” *RAND Journal of Economics*, **44**, 522–544. [642] + +Blackwell, David (1953), “Equivalent comparisons of experiments.” *Annals of Mathematical Statistics*, **24**, 265–272. [643] + +Board, Simon and Jay Lu (2018), “Competitive information disclosure in search markets.” *Journal of Political Economy*, **126**, 1965–2010. [641] + +Boleslavsky, Raphael and Christopher Cotton (2015), “Grading standards and education quality.” *American Economic Journal: Microeconomics*, **7**, 248–279. 
[641] + +Boleslavsky, Raphael and Christopher Cotton (2018), “Limited capacity in project selection: Competition through evidence production.” *Economic Theory*, **65**, 385–421. [641] + +Chen, Ying and Hülya Eraslan (2017), “Dynamic agenda setting.” *American Economic Journal: Microeconomics*, **9**, 1–32. [656] + +Ely, Jeffrey, Alexander Frankel, and Emir Kamenica (2015), “Suspense and surprise.” *Journal of Political Economy*, **123**, 215–260. [641] + +Ely, Jeffrey C. (2017), “Beeps.” *American Economic Review*, **107**, 31–53. [641] + +Gentzkow, Matthew and Emir Kamenica (2017a), “Bayesian persuasion with multiple senders and rich signal spaces.” *Games and Economic Behavior*, **104**, 411–429. [641, 647, 652, 659] +---PAGE_BREAK--- + +Gentzkow, Matthew and Emir Kamenica (2017b), "Competition in persuasion." *Review of Economic Studies*, 84, 300–322. [641, 642, 652] + +Glazer, Jacob and Ariel Rubinstein (2001), "Debates and decisions: On a rationale of argumentation rules." *Games and Economic Behavior*, 36, 158–173. [642] + +Green, Jerry and Nancy Stokey (1978), "Two representations of information structures and their comparisons." Institute for Mathematical Studies in the Social Sciences, Stanford University, Technical report no. 271. [642, 643, 647] + +Grünbaum, Branko, Volker Kaibel, Victor Klee, and Günter M. Ziegler (1967), *Convex Polytopes*. John Wiley and Sons, New York, New York. [647] + +Harris, Christopher (1985), "Existence and characterization of perfect equilibrium in games of perfect information." *Econometrica*, 53, 613–628. [641] + +Hu, Peicong and Joel Sobel (2019), "Simultaneous versus sequential disclosure." Unpublished Paper, University of California, San Diego. [642, 658] + +Hwang, Ilwoo, Kyungmin Kim, and Raphael Boleslavsky (2019), "Competitive advertising and pricing." [641] + +Kamenica, Emir and Matthew Gentzkow (2011), "Bayesian persuasion." *American Economic Review*, 101, 2590–2615. 
[640, 641] + +Kartik, Navin, Frances Xu Lee, and Wing Suen (2017), “Investment in concealable information by biased experts.” *RAND Journal of Economics*, **48**, 24–43. [642] + +Kartik, Navin, Frances Xu Lee, and Wing Suen (2019), “A theorem on Bayesian updating and applications to signaling games.” Unpublished Paper, Department of Economics, Columbia University. [642] + +Kawai, Keiichi (2015), “Sequential cheap talks.” *Games and Economic Behavior*, **90**, 128–133. [642, 657] + +Krishna, Vijay and John Morgan (2001), “A model of expertise.” *Quarterly Journal of Economics*, **116**, 747–775. [642, 657] + +Li, Fei and Peter Norman (2018), “On Bayesian persuasion with multiple senders.” *Economics Letters*, **170**, 66–70. [641, 654, 658] + +Lipnowski, Elliot and Laurent Mathevet (2017), “Simplifying Bayesian persuasion.” Unpublished Paper, Columbia University. [642] + +Lipnowski, Elliot and Laurent Mathevet (2018), “Disclosure to a psychological audience.” *American Economic Journal: Microeconomics*, **10**, 67–93. [642] + +McKelvey, Richard (1976), “Intransitivities in multidimensional voting models and some implications of agenda control.” *Journal of Economic Theory*, **12**, 472–473. [656] + +Milgrom, Paul R. and John Roberts (1986), “Relying on the information of interested parties.” *RAND Journal of Economics*, **17**, 18–32. [642] +---PAGE_BREAK--- + +Rayo, Luis and Ilya Segal (2010), “Optimal information disclosure.” *Journal of Political Economy*, **118**, 949–987. [641] + +Romer, Thomas and Howard Rosenthal (1978), “Political resource allocation, controlled agendas, and the status quo.” *Public Choice*, **33**, 27–43. [656] + +Sobel, Joel (2013), “Giving and receiving advice.” In *Advances in Economics and Econometrics: Tenth World Congress*, volume 1, 305–341, Cambridge University Press, New York, New York. [660] + +Wu, Wenhao (2018), “Sequential Bayesian persuasion.” Unpublished Paper, University of Arizona. [641] + +Co-editor Simon Board handled this manuscript.
+ +Manuscript received 15 October, 2018; final version accepted 9 July, 2020; available online 20 July, 2020. \ No newline at end of file diff --git a/samples/texts_merged/2515306.md b/samples/texts_merged/2515306.md new file mode 100644 index 0000000000000000000000000000000000000000..c88f775aa576ed39a250d89d4993b4bee04b0013 --- /dev/null +++ b/samples/texts_merged/2515306.md @@ -0,0 +1,523 @@ + +---PAGE_BREAK--- + +New Encoding for Translating Pseudo-Boolean Constraints into SAT + +Amir Aavani and David Mitchell and Eugenia Ternovska + +Simon Fraser University, Computing Science Department +{aaa78,mitchell,ter}@sfu.ca + +Abstract + +A Pseudo-Boolean (PB) constraint is a linear arithmetic constraint over Boolean variables. PB constraints are widely used in declarative languages for expressing NP-hard search problems. While there are solvers for sets of PB constraints, there are also reasons to be interested in transforming these to propositional CNF formulas, and a number of methods for doing this have been reported. We introduce a new, two-step method for transforming PB constraints to propositional CNF formulas. The first step re-writes each PB constraint as a conjunction of PB-Mod constraints, and the second transforms each PB-Mod constraint to CNF. The resulting CNF formulas are compact, and make effective use of unit propagation, in that unit propagation can derive facts from these CNF formulas which it cannot derive from the CNF formulas produced by other commonly-used transformations. We present a preliminary experimental evaluation of the method, using instances of the number partitioning problem as a benchmark set, which indicates that our method outperforms other transformations to CNF when the coefficients of the PB constraints are not small.
+ +Introduction + +A Pseudo-Boolean constraint (PB-constraint) is an equality or inequality on a linear combination of Boolean literals, of the form + +$$ \sum_{i=1}^{n} a_i l_i \text{ op } b $$ + +where op is one of {<, ≤, =, ≥, >}, $a_1, \dots, a_n$ and $b$ are integers, and $l_1, \dots, l_n$ are Boolean literals. Under truth assignment $\mathcal{A}$ for the literals, the left-hand side evaluates to the sum of the coefficients whose corresponding literals are mapped to true by $\mathcal{A}$. PB-constraints are also known as 0-1 integer linear constraints. By taking the variables to be propositional literals, rather than 0-1 valued arithmetic variables, we can consider the combination of PB-constraints with other logical expressions. Moreover, a propositional clause ($l_1 \lor \dots \lor l_k$) is equivalent to the PB-constraint $\sum_{i=1}^k l_i \ge 1$. Thus, PB-constraints are a natural generalization of propositional clauses with which it is easier to describe arithmetic properties of a problem. For example, the Knapsack problem has a trivial representation as a conjunction of two PB-constraints: + +$$ \sum_{i=1}^{n} w_i l_i < C \quad \land \quad \sum_{i=1}^{n} v_i l_i > V, $$ + +but directly representing it with a propositional CNF formula is non-trivial. + +Software which finds solutions to sets of PB-constraints (PB solvers) exists, for example PBS (Aloul et al. 2002) and PUEBLO (Sheini and Sakallah 2006), but there is not a sustained effort to produce continually updated high-performance solvers. Integer linear programming (ILP) systems can be used to find solutions to sets of PB-constraints, but they are generally optimized for performance on certain types of optimization problems, and do not perform well on some important families of search problems.
Moreover, the standard ILP input is a set of linear inequalities, and many problems are not effectively modelled this way, for example problems involving disjunctions of constraints such as $(p \land q) \lor (r \land s)$. There are standard techniques for transforming these, involving additional variables, but extensive use of these techniques causes performance problems. (Transforming problems to propositional CNF also requires adding new variables, but there seems to be little performance penalty in this case.) + +Another approach to solving problems modelled with PB-constraints is to transform them to a logically equivalent set of propositional clauses and then apply a SAT solver. There are at least two clear benefits of this approach. One is that high-performance SAT solvers are being improved constantly, and since they take a standard input format, there is always a selection of good, and frequently updated, solvers to make use of. A second is that solving problems involving Boolean combinations of constraints is straightforward. This approach is particularly attractive for problems which are naturally represented by a relatively small number of PB constraints together with a large number of purely Boolean constraints. + +The question of how best to transform a set of PB-constraints to a set of clauses is complex. Several methods have been reported, but there is still much to be learned. Here, we describe a new method of transformation, and present some preliminary evidence of its utility. + +Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. +---PAGE_BREAK--- + +We define a PBMod-constraint to be of the form: + +$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}
$$ + +where $a_1, \cdots, a_n$ and $b$ are non-negative integers less than $M$, and $l_1, \cdots, l_n$ are literals.
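As a concrete illustration of the definition above, the following Python sketch (ours, not from the paper; the helper name is hypothetical) checks whether a total assignment satisfies a PBMod-constraint:

```python
def satisfies_pbmod(coeffs, lits, b, M, assignment):
    """Check whether a total assignment satisfies sum(a_i * l_i) ≡ b (mod M).

    `lits` is a list of (variable, polarity) pairs; a pair with polarity
    False stands for the negated literal ¬x, which is true exactly when
    assignment[x] is False.
    """
    lhs = sum(a for a, (var, positive) in zip(coeffs, lits)
              if assignment[var] == positive)
    return lhs % M == b % M

# 1*x1 + 2*x2 ≡ 1 (mod 3) under x1 = x2 = true: the left-hand side is 3 ≡ 0
print(satisfies_pbmod([1, 2], [("x1", True), ("x2", True)], 1, 3,
                      {"x1": True, "x2": True}))  # → False
```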
+ +Our method of transforming a PB-constraint to CNF involves first transforming it to a set of PB-Mod constraints, and then transforming these to CNF. Thus, we replace the question of how best to transform an arbitrary PB-constraint to CNF with two questions: how to choose a set of PB-Mod constraints, and how to transform each of these to CNF. There are benefits of this, due to properties of the PB-Mod constraints. For example, we show that there are many PB-constraints whose unsatisfiability can be proven by showing the unsatisfiability of a PBMod-constraint, which is much simpler. + +We present two methods for translating PBMod-constraints to CNF. Both these encodings allow unit propagation to infer inconsistency if the current assignment cannot be extended to a satisfying assignment for that PBMod-constraint, and hence unit propagation can infer inconsistency for the original PB-constraint. We also show that the number of PB-constraints for which unit propagation can infer inconsistency, given the output of the proposed translation, is much larger than for the other existing encodings. We also point out that it is impossible to translate all PB-constraints of the form $\sum a_i l_i = b$ into polynomial size arc-consistent CNF unless P = co-NP. + +We also present the results of an experimental study, using instances of the number partitioning problem as a benchmark, which indicates that our new method outperforms others in the literature. + +For the sake of space, proofs are omitted from this paper. All proofs can be found in (Aavani 2011). + +**Notation and Terminology** + +Let $X$ be a set of Boolean variables. An assignment $\mathcal{A}$ to $X$ is a possibly partial function from $X$ to $\{\text{true, false}\}$. Assignment $\mathcal{A}$ to $X$ is a total assignment if it is defined at every variable in $X$.
For any $S \subseteq X$, we write $\mathcal{A}[S]$ for the assignment obtained by restricting the domain of $\mathcal{A}$ to the variables in $S$. We say assignment $\mathcal{B}$ extends assignment $\mathcal{A}$ if $\mathcal{B}$ is defined on every variable that $\mathcal{A}$ is, and for every variable $x$ where $\mathcal{A}$ is defined, $\mathcal{A}(x) = \mathcal{B}(x)$. + +A literal, $l$, is either a Boolean variable or the negation of a Boolean variable, and we denote by $\text{var}(l)$ the variable underlying literal $l$. Assignment $\mathcal{A}$ satisfies literal $l$, written $\mathcal{A} \models l$, if $l$ is an atom $x$ and $\mathcal{A}(x) = \text{true}$, or $l$ is a negated atom $\neg x$ and $\mathcal{A}(x) = \text{false}$. + +A clause $C = \{l_1, \dots, l_m\}$ over $X$ is a set of literals such that $\text{var}(l_i) \in X$. Assignment $\mathcal{A}$ satisfies clause $C = \{l_1, \dots, l_m\}$ if there exists at least one literal $l_i$ such that $\mathcal{A} \models l_i$. A total assignment falsifies clause $C$ if it does not satisfy any of its literals. An assignment satisfies a set of clauses if it satisfies all the clauses in that set. + +A PB-constraint $Q$ on $X$ is an expression of the form: + +$$
a_1 l_1 + \cdots + a_n l_n \quad \mathbf{op} \quad b \qquad (1)
$$ + +where op is one of {<, ≤, =, ≥, >}, for each $i$, $a_i$ is an integer and $l_i$ a literal over $X$, and $b$ is an integer. We call $a_i$ the coefficient of $l_i$, and $b$ the bound. + +Total assignment $\mathcal{A}$ to $X$ satisfies PB-constraint $Q$ on $X$, written $\mathcal{A} \models Q$, if $\sum_{i : \mathcal{A} \models l_i} a_i \ \text{op}\ b$, that is, the sum of the coefficients of the literals mapped to true (the left hand side) satisfies the given relation to the bound (the right hand side). + +Canonical Form + +In this paper, we focus on translating PB equality constraints with positive coefficients: + +$$
a_1 x_1 + \cdots + a_n x_n = b
\quad (2)
$$ + +where the integers $a_1, \cdots, a_n$ and $b$ are all positive.
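The satisfaction relation $\mathcal{A} \models Q$ just defined can be made concrete with a short Python sketch (our illustration, not part of the paper; the helper name is hypothetical):

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def satisfies_pb(terms, op, b, assignment):
    """Does total `assignment` satisfy a_1*l_1 + ... + a_n*l_n op b?

    `terms` is a list of (coefficient, variable, polarity) triples;
    polarity False stands for the negated literal ¬x.
    """
    lhs = sum(a for a, var, positive in terms if assignment[var] == positive)
    return OPS[op](lhs, b)

# The clause (x1 ∨ ¬x2) viewed as the PB-constraint 1*x1 + 1*¬x2 >= 1:
print(satisfies_pb([(1, "x1", True), (1, "x2", False)], ">=", 1,
                   {"x1": False, "x2": False}))  # ¬x2 holds, so → True
```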
+ +**Definition 1** Constraints $Q_1$ on $X$ and $Q_2$ on $Y \supseteq X$ are equivalent iff for every total assignment $\mathcal{A}$ for $X$ which satisfies $Q_1$, there exists an extension of $\mathcal{A}$ to $Y$ which satisfies $Q_2$, and every total assignment $\mathcal{B}$ to $Y$ which satisfies $Q_2$ also satisfies $Q_1$. + +It is not hard to show that every PB-constraint has an equivalent PB-constraint of the form (2). For the sake of space, we do not include the details but refer interested readers to (Aavani 2011). + +Valid Translation + +**Definition 2** Let $Q$ be a PB-constraint or PB-Mod constraint over variables $X = \{x_1, \dots, x_n\}$, $Y$ a set of Boolean variables (called auxiliary variables) disjoint from $X$, $v$ a Boolean variable not occurring in $X \cup Y$, and $C = \{C_1, \dots, C_m\}$ a set of clauses on $X \cup Y \cup \{v\}$. Then we say the pair $\langle v, C \rangle$ is a valid translation of $Q$ if + +1. $C$ is satisfiable, and + +2. if $\mathcal{A}$ is a total assignment for $X \cup Y \cup \{v\}$ that satisfies $C$, then + +$$
\mathcal{A} \models Q \iff \mathcal{A} \models v.
$$ + +Intuitively, $C$ ensures that $v$ always takes the same truth value as $Q$. + +In (Bailleux, Boufkhad, and Roussel 2009), a translation is defined to be a set of clauses $C$ such that $\mathcal{A} \models Q$ iff some extension of $\mathcal{A}$ (to the auxiliary variables of $C$) satisfies $C$. If $\langle v, C \rangle$ is a valid translation by Definition 2, then $C$ together with the unit clause $(v)$ is a translation in this other sense, and if $C$ is a translation in the other sense, then $\langle v, D \rangle$, where $D$ is equivalent to $v \leftrightarrow C$, is a valid translation. So these two definitions are essentially equivalent, except that our definition makes available a variable which always has the same truth value as $Q$, which can be convenient. For example, it makes it easy to use $Q$ conditionally. + +**Example 1** Let $Q$ be the unsatisfiable PB-constraint $2x_1 + 4\neg x_2 = 3$.
Then the pair $\langle v, \{C_1\} \rangle$, where $C_1 = \{\neg v\}$, is a valid translation of $Q$. + +**Example 2** Let $Q$ be the satisfiable PB-constraint $1x_1 + 2x_2 = 2$. Then $\langle v, C \rangle$, where $C$ is any set of clauses logically equivalent to $(v \leftrightarrow \neg x_1) \land (v \leftrightarrow x_2)$, is a valid translation of $Q$. Here, $X = \{x_1, x_2\}$ and $Y = \emptyset$. +---PAGE_BREAK--- + +In describing construction of translations, we will sometimes overload our notation, using a symbol for both a variable and a translation. For example, if $D$ is a valid translation, we may use $D$ as a variable in a clause for constructing another translation. Thus, $D$ is the pair $\langle D, C \rangle$. + +Tseitin Transformation + +The usual method for transforming a propositional formula to CNF is that of Tseitin (Tseitin 1968). To transform formula $\varphi$ to CNF, a fresh propositional variable is used to represent the truth value of each subformula of $\varphi$. For each subformula $\psi$, denote by $\psi'$ the associated propositional variable. If $\psi$ is a variable, then $\psi'$ is just $\psi$. The CNF formula is the set of clauses containing the clause $(\varphi')$, and for each subformula $\psi$ of $\varphi$: + +1. If $\psi = \psi_1 \lor \psi_2$, the clauses $\{\neg\psi', \psi'_1, \psi'_2\}$, $\{\psi', \neg\psi'_1\}$ and $\{\psi', \neg\psi'_2\}$; + +2. If $\psi = \psi_1 \land \psi_2$, the clauses $\{\neg\psi', \psi'_1\}$, $\{\neg\psi', \psi'_2\}$, and $\{\psi', \neg\psi'_1, \neg\psi'_2\}$; + +3. If $\psi = \neg\psi_1$, the clauses $\{\neg\psi', \neg\psi'_1\}$ and $\{\psi', \psi'_1\}$. + +New Method for PBMod-constraints + +We define a normal PBMod-constraint to be of the form: + +$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}, \quad (3)
$$ + +where $0 \le a_i < M$ for all $1 \le i \le n$ and $0 \le b < M$.
+ +Total assignment $\mathcal{A}$ is a solution to a PBMod-constraint iff the value of the left-hand side summation under $\mathcal{A}$, minus the value of the right-hand side of the equation, $b$, is a multiple of $M$. + +**Definition 3** If $Q$ is the PB-constraint $\sum a_i l_i = b$ and $M$ an integer greater than 1, then by $Q[M]$ we denote the PBMod-constraint $\sum a'_i l_i \equiv b' \pmod{M}$ where: + +1. $a'_i = a_i \bmod M$, + +2. $b' = b \bmod M$. + +**Example 3** Let $Q$ be the constraint $6x_1 + 5x_2 + 7x_3 = 12$. Then, we have that + +$Q[3]$ is $0x_1 + 2x_2 + 1x_3 \equiv 0 \pmod{3}$, and + +$Q[5]$ is $1x_1 + 0x_2 + 2x_3 \equiv 2 \pmod{5}$. + +Every solution to a PB-constraint $Q$ is also a solution to $Q[M]$ for any $M \ge 2$. Also, for sufficiently large values of $M$, each solution to $Q[M]$ is a solution to $Q$. + +**Proposition 1** If $Q$ is a PB-constraint $\sum a_i l_i = b$ and $M > \sum a_i$, then $Q[M]$ and $Q$ have the same satisfying assignments. + +More interesting is that, for a given PB-constraint $Q$, we can construct sets of constraints $Q[M_i]$, none of which are equivalent to $Q$, but such that their conjunction has the same set of solutions as $Q$. Our goal will be to choose values of $M_i$ such that the resulting set of PB-Mod constraints is easy to transform to CNF. + +**Proposition 2** Let $Q$ be the PB-constraint $\sum a_i l_i = b$, and let $M_1$ and $M_2$ be integers with $M_3 = \text{lcm}(M_1, M_2)$. Further, let $S_1$ be the set of satisfying assignments for $Q[M_1]$, and $S_2$ be the set of assignments satisfying $Q[M_2]$. Then the set of satisfying assignments for $Q[M_3]$ is $S_1 \cap S_2$. + +Proposition 2 tells us that in order to find the set of solutions to a PBMod-constraint modulo $M_3 = \text{lcm}(M_1, M_2)$, one can find the sets of solutions to two PBMod-constraints (modulo $M_1$ and $M_2$) and return their intersection. This generalizes in the obvious way.
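Proposition 2 can be checked exhaustively on the constraint of Example 3; the following brute-force sketch (our illustration, not part of the paper) enumerates all 0/1 assignments:

```python
from itertools import product
from math import lcm

def solutions_mod(coeffs, b, M):
    """All 0/1 assignments (as tuples) satisfying sum(a_i * x_i) ≡ b (mod M)."""
    n = len(coeffs)
    return {bits for bits in product((0, 1), repeat=n)
            if sum(a * x for a, x in zip(coeffs, bits)) % M == b % M}

# Q is 6x1 + 5x2 + 7x3 = 12, as in Example 3.
coeffs, b = [6, 5, 7], 12
s1 = solutions_mod(coeffs, b, 3)           # satisfying assignments of Q[3]
s2 = solutions_mod(coeffs, b, 5)           # satisfying assignments of Q[5]
s15 = solutions_mod(coeffs, b, lcm(3, 5))  # satisfying assignments of Q[15]
assert s15 == s1 & s2                      # Proposition 2
```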
+ +**Lemma 1** Let $\{M_1, \dots, M_m\}$ be a set of $m$ positive integers and $M = \text{lcm}(M_1, \dots, M_m)$. Let $Q$ be the PB-constraint $\sum a_i l_i = b$. If $M > \sum a_i$, and $S_i$ is the set of satisfying assignments for $Q[M_i]$, then the set of satisfying assignments of $Q[M]$ is + +$$
\bigcap_{i \in 1..m} S_i.
$$ + +We can now easily construct a valid translation of a PB-constraint from valid translations of a suitable set of PB-Mod constraints. + +**Theorem 1** Let $Q$ be a PB-constraint $\sum a_i l_i = b$, $\{M_1, \dots, M_m\}$ a set of positive integers, and $M = \text{lcm}(M_1, \dots, M_m)$ with $M > \sum a_i$. Suppose that, for each $i \in \{1, \dots, m\}$, $\langle v_i, C_i \rangle$ is a valid translation of $Q[M_i]$, each over distinct sets of variables. Then for any set $C$ of clauses logically equivalent to $\cup_i C_i \cup C'$, where $C'$ is a set of clauses equivalent to $v \leftrightarrow (v_1 \wedge v_2 \wedge \cdots \wedge v_m)$, the pair $\langle v, C \rangle$ is a valid translation of $Q$. + +Since $\text{lcm}(2, \dots, k) \ge 2^{k-1}$ (Farhi and Kane 2009), the set $\mathbb{M}^\mathbb{N} = \{2, \dots, \lceil \log \sum a_i \rceil + 1\}$ can be used as the set of moduli for encoding $\sum a_i l_i = b$. + +Another candidate for the set of moduli is the first $m$ prime numbers, where $m$ is the smallest number such that the product of the first $m$ primes exceeds $\sum a_i$. We will denote this set by $\mathbb{M}^p$. The following proposition gives an estimate for the size of the set $\mathbb{M}^p$, and for the value of $P_m$. As usual, we denote by $P_i$ the $i^{th}$ prime number. + +**Proposition 3** Let $m$ be the smallest integer such that the product of the first $m$ primes is greater than $S$. Then: + +1. $m = |\mathbb{M}^p| = \theta(\frac{\ln S}{\ln \ln S})$. + +2. $P_m < \ln S$. + +A third candidate is the set + +$$
\mathbb{M}^{\mathbb{P}} = \{ P_i^{n_i} \mid P_i^{n_i} \leq \lg S < P_i^{n_i + 1} \}.
+$$ + +It is straightforward to observe that |$\mathbb{M}^{\mathbb{P}}$| ≤ (ln S)/(ln ln S) and the its maximum element is at most lg S. + +In general, the size of a description of PB-constraint +∑ *a**i**l**i* = *b* is θ(n log *a*Max) where *n* is the number of liter- +als (coefficients) in the constraint and *a*Max is the value of +the largest coefficient. The description of PBMod-constraint +Q[*M*] has size θ(n log *M*). So, a translation for Q[*M*] which +produces a CNF with O(n*k*1 *M**k*2) clauses and variables, for +some constants *k*1 and *k*2, (which may be exponential the in- +put size), provides a may to translate PB-constraints to CNF +of size polynomial in the representation of the PB-constraint. +Two such translations are described in the next section. We +describe several others in (Aavani 2011). +---PAGE_BREAK--- + +# Encoding For PB-Mod Constraints + +In this section, we describe translations of PBMod-constraints of the form (3) to CNF. Remember that our ultimate goal is not translation of PB-constraints. For simplicity, we assume all coefficients in each PBMod-constraint are non-zero. + +## Dynamic Programming Based Transformation (DP) + +The translation presented here encodes PBMod-constraints using a Dynamic Programming approach. Let $D_m^j$ be a valid translation for $\sum_{i=1}^j a_i l_i \equiv m (\text{mod } M)$. We can use the following set of clauses to describe the relationship among $D_m^j$, $D_{m-a_j}^{j-1}$, $D_m^j$ and $l_j$: + +1. If both $D_{m-a_j}^{j-1}$ and $l_j$ are true, $D_m^j$ must be true, which can be represented by the clause $\{\neg D_{m-a_j}^{j-1}, \neg l_j, D_m^l\}$. + +2. If $D_m^{j-1}$ is true and $l_j$ is false, $D_m^j$ must be true, i.e., $\{\neg D_m^{j-1}, l_j, D_m^j\}$. + +3. If $D_m^j$ is true, either $D_m^{j-1}$ or $D_{m-a_j}^{j-1}$ must be true, i.e., $\{\neg D_m^j, D_m^{j-1}, D_{m-a_j}^{j-1}\}$. + +For the base cases, when $j=0$, we have: + +1. $D_0^0$ is true, i.e., $\{D_0^0\}$. + +2. 
If $m \neq 0$, $D_m^0$ is false, i.e., $\{\neg D_m^0\}$. + +**Proposition 4** Let $D = \{D_m^j\}$ and $C$ be the set of clauses used to describe the variables in $D$. Then, the pair $\langle D_b^n, C \rangle$ is a valid translation for (3). + +By applying standard dynamic programming techniques, we can avoid describing the unnecessary $D_m^j$, and obtain a smaller CNF. + +By adding the following clauses, we can boost the performance of unit propagation. + +1. If $D_{m_1}^j$ is true, $D_{m_2}^j$ should be false ($m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^j, \neg D_{m_2}^j\}$. + +2. There is at least one $m$ such that $D_m^j$ is true, i.e., $\{D_m^j \mid m = 0 \cdots M - 1\}$. + +Binary Decision Diagrams (BDDs) are standard tools for translating constraints to SAT. One can construct a BDD-based encoding for PBMod-constraints similar to the BDD-based encoding for PB-constraints described in (Eén and Sorensson 2006). Unit propagation can infer more facts on the CNF generated by the boosted version of the DP-based encoding than on the CNF generated by the BDD-based encoding. Comparing the BDD-based and DP-based encodings, the former produces a larger CNF, while unit propagation infers the same facts on the output of both encodings. + +**Remark 1** In (Aavani 2011), we proved that the DP-based encoding, plus the extra clauses, has the following property. Given a partial assignment $\mathcal{A}$, if there is no total assignment $\mathcal{B}$ extending $\mathcal{A}$ such that $\mathcal{B}$ satisfies both $C$ and $\sum_{i=1}^j a_i l_i \equiv m \pmod{M}$, then unit propagation infers false as the value for variable $D_m^j$. + +## Divide and Conquer Based Transformation (DC) + +The translation presented next reflects a Divide and Conquer approach. We define auxiliary variables in $D = \{D_a^{s,l}\}$ such that variable $D_a^{s,l}$ describes the necessary and sufficient condition for satisfiability of the subproblem $\sum_{i=s}^{s+l-1} a_i x_i \equiv a \pmod{M}$. + +Let $D^{s,l} = \{D_a^{s,l} : 0 \le a < M\}$.
We can use the following set of clauses to describe the relation among the $3M$ variables in the sets $D^{s,l}$, $D^{s,\frac{l}{2}}$ and $D^{s+\frac{l}{2},\frac{l}{2}}$: + +1. If both $D_{m_1}^{s,\frac{l}{2}}$ and $D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}$ are true, $D_{m_1+m_2}^{s,l}$ should be true (with $m_1 + m_2$ computed modulo $M$), i.e., $\{\neg D_{m_1}^{s,\frac{l}{2}}, \neg D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}, D_{m_1+m_2}^{s,l}\}$. + +2. If $D_{m_1}^{s,l}$ is true, $D_{m_2}^{s,l}$ should be false ($m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^{s,l}, \neg D_{m_2}^{s,l}\}$. + +3. There is at least one $m$ such that $D_m^{s,l}$ is true, i.e., $\{D_m^{s,l} \mid m = 0 \cdots M - 1\}$. + +For the base cases, when $l=1$, we have: + +1. $D_0^{s,1}$ is true iff $x_s$ is false, i.e., $\{x_s, D_0^{s,1}\}$ and $\{\neg x_s, \neg D_0^{s,1}\}$. + +2. $D_1^{s,1}$ is true iff $x_s$ is true, i.e., $\{\neg x_s, D_1^{s,1}\}$ and $\{x_s, \neg D_1^{s,1}\}$. + +**Proposition 5** Let $D = \{D_a^{s,l}\}$ and $C$ be the clauses which are used to describe the variables in $D$. Then, the pair $\langle D_b^{1,n}, C \rangle$ is a valid translation for (3). + +**Remark 2** In (Aavani 2011), we showed another version of the DC-based encoding which also has the property we described in Remark 1. + +**Theorem 2** The numbers of clauses and auxiliary variables used in the DP and DC translations of the PBMod constraint $\sum a_i x_i \equiv b \pmod{M}$, and the depths of the formulas implicit in these CNF formulas, are as given in Table 1. These same properties, for the PB-constraint translations obtained from the DP and DC translations together with $\mathbb{M}^p$ or $\mathbb{M}^{\mathbb{P}}$ as moduli, are as given in Table 2.
| Encoder | # of Aux. Vars. | # of Clauses | Depth |
|---------|-----------------|--------------|-------|
| DP | $O(nM)$ | $O(nM)$ | $O(n)$ |
| DC | $O(nM)$ | $O(nM^2)$ | $O(\log n)$ |
+ +Table 1: Summary of size and depth of translations for $\sum a_i x_i \equiv b \pmod{M}$. + +In the previous section, we described two candidates for sets of moduli, namely Prime and PrimePower, and in this section, we explained two encodings for transforming PBMod constraints to SAT, namely DP and DC. This gives us four different translations for PB constraints to SAT. Table 2 summarizes the number of clauses and variables and the depth of the corresponding formula for these translations, and also for the Sorting Network based encoding (Eén 2005) and the Binary Adder encoding (Eén 2005). +---PAGE_BREAK---
| PBMod Encoder | # of Vars. | # of Clauses | Depth |
|---------------|------------|--------------|-------|
| Prime.DP | $O(1/\ln(S))$ | $O(n \ln(S))$ | $O(n)$ |
| Prime.DC | $O(n \log(S)/\ln \ln(S))$ | $O(n (\log(S)/\ln \ln(S))^2)$ | $O(\log n)$ |
| PPower.DP | $O(\log(S)/\log \log S)$ | $O(n \log(S)/\log \log(S))$ | $O(n)$ |
| PPower.DC | $O(n \log(S)/\log \log(S))$ | $O(n (\log(S)/\log \log(S))^2)$ | $O(\log n)$ |
| BAdder | $O(n \log(S))$ | $O(n \log(S))$ | $O(\log(S) \cdot \log n)$ |
| SN | $O(n \log(S/n) \log^2(n \log(S/n)))$ | $O(n \log(S/n) \log^2(n \log(S/n)))$ | $O(\log^2(n \log(S/n)))$ |
+ +Table 2: Summary of size and depth of different encodings for translating $\sum a_i x_i = b$, where $S = \sum a_i$. + +## Performance of Unit Propagation + +Here we examine some properties of the proposed encodings. + +### Background + +Generalized arc-consistency (GAC) is one of the desired properties for an encoding; it is related to the performance of the unit propagation (UP) procedure inside a SAT solver. Bailleux et al., in (Bailleux, Boufkhad, and Roussel 2009), defined UP-detect inconsistency and UP-maintain GAC for PB-constraint encodings. Although the way they define a translation is slightly different from ours, these two concepts can still be discussed in our context. + +Let $E$ be an encoding method for PB-constraints, $Q$ be a PB-constraint on $X$, and $\langle v, C \rangle = E(Q)$ the translation for $Q$ obtained from encoding $E$. Then, + +1. Encoding $E$ for constraint $Q$ supports UP-detect inconsistency if for every (partial) assignment $\mathcal{A}$, we have that every total extension of $\mathcal{A}[X]$ makes $Q$ false if and only if unit propagation derives $\{\neg v\}$ from $C \cup \{\{x\} \mid \mathcal{A} \models x\}$; + +2. Encoding $E$ for constraint $Q$ is said to UP-maintain GAC if for every (partial) assignment $\mathcal{A}$ and any literal $l$ where $\text{var}(l) \in X$, we have that $l$ is true in every total extension of $\mathcal{A}$ that satisfies $Q$, if and only if unit propagation derives $\{l\}$ from $C \cup \{\{v\}\} \cup \{\{x\} \mid \mathcal{A} \models x\}$; + +An encoding for PB-constraints is generalized arc-consistent, or simply arc-consistent, if it supports both UP-detect inconsistency and UP-maintain GAC for all possible constraints. + +In this section, we show that there cannot be an encoding for PB-constraints of the form $\sum a_i l_i = b$ which always produces a polynomial size arc-consistent CNF unless P = co-NP.
Also, we study the arc-consistency of our encoding and discuss why one can expect the proposed encodings to perform well. + +### Hardness Result + +Here, we show that it is not very likely that there is a generalized arc-consistent encoding which always produces polynomial size CNF. + +**Theorem 3** There does not exist a UP-detectable encoding which always produces polynomial size CNF unless P = co-NP. There does not exist a UP-maintainable encoding which always produces polynomial size CNF unless P = co-NP. + +**Proof (sketch)** The theorem can be proven by observing that a subset sum problem instance can be written as a PB-constraint, and having a UP-detectable encoding would enable us to prove unsatisfiability whenever the original subset sum instance is not satisfiable. The proof of the hardness of having a UP-maintainable encoding is similar to this argument. For the complete proof, see (Aavani 2011). + +## UP for Proposed Encodings + +Although there is no polynomial size arc-consistent encoding for PB-constraints (unless P = co-NP), both the DP-based and DC-based encodings for PBMod-constraints are generalized arc-consistent encodings. + +Also, as mentioned before, unit propagation is able to infer inconsistency on the CNF generated by these encodings as soon as the current partial assignment cannot be extended to a total satisfying assignment. Notice that what we state here is more powerful than arc-consistency, as it considers the auxiliary variables, too. More formally, let $\langle v, C \rangle$ be the output of the DP-based (DC-based) encoding for a PBMod-constraint $Q$. Given a partial assignment $\mathcal{A}$ such that $v \in \mathcal{A}^+$, + +$$ \mathcal{A} \not\models C \cup \{v\} \Leftrightarrow \mathcal{A} \not\models_{UP} C \cup \{v\}. \quad (4) $$ + +This feature enables the SAT solver to detect its mistakes on each of the PBMod-constraints as soon as such a mistake occurs. + +In the rest of this section, we study the cases for which we expect SAT solvers to perform well on the output of our encoding.
Let $Q$ be a PB-constraint on $X$, $\mathcal{A}$ be a partial assignment, and $\text{Ans}(\mathcal{A})$ be the set of total assignments to $X$ satisfying $Q$ and extending $\mathcal{A}[X]$. There are two situations in which UP is able to infer the values of input variables: + +1. Unit Propagation Detects Inconsistency: One can infer that the current partial assignment, $\mathcal{A}$, cannot be extended to satisfy $Q$ by knowing that $\text{Ans}(\mathcal{A}) = \emptyset$. Recall that there are partial assignments and PB-constraints such that although $\text{Ans}(\mathcal{A}) = \emptyset$, each of the $m$ PBMod-constraints has a non-empty solution set (but the intersection of their solution sets is empty). + +If at least one of the $m$ PBMod-constraints is inconsistent with the current partial assignment, UP can infer inconsistency, in both the DP and DC encodings. + +2. Unit Propagation Infers the Value of an Input Variable: One can infer that the value of input variable $x_k$ is true/false if $x_k$ takes the same value in all the solutions to $Q$. For this kind of constraint, UP might be able to infer the value of $x_k$, too. +---PAGE_BREAK--- + +If there exists a PBMod-constraint all of whose solutions extending $\mathcal{A}$ map $x_k$ to the same value, UP can infer the value of $x_k$. + +These two cases are illustrated in the following example. + +**Example 4** Let $Q(X)$ be $x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 = 12$. + +1. If the current partial assignment is $\mathcal{A} = \{\neg x_2, \neg x_4\}$ and $M = 5$, there is no total assignment satisfying $1x_1 + 3x_3 + 0x_5 \equiv 2 \pmod 5$. + +2. If the current partial assignment is $\mathcal{A} = \{\neg x_3, \neg x_5\}$ and $M = 2$, there are four total assignments extending $\mathcal{A}$ and satisfying the PBMod-constraint $1x_1 + 0x_2 + 0x_4 \equiv 0 \pmod 2$. In all of them, $x_1$ is mapped to false.
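Both claims in Example 4 can be verified exhaustively; the following brute-force sketch (ours, not part of the paper) checks them over all total extensions:

```python
from itertools import product

# Q: x1 + 2x2 + 3x3 + 4x4 + 5x5 = 12, as in Example 4.
coeffs = [1, 2, 3, 4, 5]

def extensions(partial):
    """All total 0/1 assignments (tuples) extending `partial`, a dict
    mapping 0-based variable indices to fixed values."""
    for bits in product((0, 1), repeat=len(coeffs)):
        if all(bits[i] == v for i, v in partial.items()):
            yield bits

def sat_mod(bits, M):
    """Does the total assignment `bits` satisfy Q[M]?"""
    return sum(a * x for a, x in zip(coeffs, bits)) % M == 12 % M

# Case 1: A = {¬x2, ¬x4}, M = 5 — no extension satisfies Q[5].
assert not any(sat_mod(bits, 5) for bits in extensions({1: 0, 3: 0}))

# Case 2: A = {¬x3, ¬x5}, M = 2 — every extension satisfying Q[2] maps x1 to false.
case2 = [bits for bits in extensions({2: 0, 4: 0}) if sat_mod(bits, 2)]
assert len(case2) == 4 and all(bits[0] == 0 for bits in case2)
```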
+ +A special case of the second situation is when UP can detect the values of all $x \in X$ given the current partial assignment. In the rest of this section, we estimate the number of PB-constraints for which UP can solve the problem. More precisely, we give a lower bound on the number of PB-constraints for which UP detects inconsistency, or expands an empty assignment to a solution, given the translation of those constraints. + +Let us assume the constraints are selected, uniformly at random, from $\{a_1 l_1 + \dots + a_n l_n = b : 1 \le a_i \le A = 2^{R(n)} \text{ and } 1 \le b \le n \cdot A\}$ where $R(n)$ is a polynomial in $n$ and $R(n) > n$. To simplify the analysis, we use the same prime moduli $\mathbb{P}^n = \{P_1 = 2, \dots, P_m = \theta(R(n)) > 2n\}$ for all constraints. + +Consider the following PBMod-constraints: + +$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n + 1 \pmod{P_m} \quad (5)$$ + +$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n \pmod{P_m} \quad (6)$$ + +It is not hard to verify that (5) does not have any solution and (6) has exactly one solution. It is straightforward to verify that UP can infer inconsistency given a translation obtained by the DP-based (DC-based) encoding for (5), even if the current assignment is empty. Also, UP expands the empty assignment to an assignment mapping all $x_i$ to true on a translation for (6) obtained by either the DP-based or the DC-based encoding. The Chinese Remainder Theorem (Ding, Pei, and Salomaa 1996) implies that there are $(A/P_m)^{n+1} = 2^{(n+1)R(n)}/R(n)^{n+1}$ different PB-constraints of the form $\sum a_i l_i = b$ whose corresponding PBMod-constraints, where the modulus is $P_m$, are the same as (5). The same claim is true for (6). + +The above argument shows that, for the proposed encoding, the number of easy to solve PB-constraints is huge.
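The claims about constraints (5) and (6) are easy to confirm for small cases; this brute-force sketch (ours, not from the paper) uses an arbitrary prime $P_m > 2n$:

```python
from itertools import product

def unit_pbmod_solutions(n, b, M):
    """All 0/1 solutions of 1*x1 + ... + 1*xn ≡ b (mod M)."""
    return [bits for bits in product((0, 1), repeat=n)
            if sum(bits) % M == b % M]

n, P_m = 6, 13  # any prime P_m > 2n works for the claims below
assert unit_pbmod_solutions(n, n + 1, P_m) == []      # (5) has no solution
assert unit_pbmod_solutions(n, n, P_m) == [(1,) * n]  # (6) has exactly one
```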
In (Aavani 2011), we showed that this number is much smaller for the Sorting Network encoding:

**Observation 1** (Aavani 2011) There are at most $(\log A)^n$ instances where the CNF produced by the Sorting Network encoding maintains arc-consistency, while this number for our encoding is at least $(A/\log(A))^n$. So, if $A = 2^{R(n)}$, the per-variable base satisfies $2^{R(n)}/R(n) \gg R(n)$ for almost all $n$.

**Observation 2** (Aavani 2011) There is a family of PB-constraints whose translation through the totalizer-based encoding is not arc-consistent, but the translation obtained by our encoding is arc-consistent.

## Experimental Evaluation

By combining any modulo selection approach with any PBMod-constraint encoder, one can construct a PB-constraint solver. In this section, we selected the following configurations: Prime with DP (Prime.DP) and Prime with DC (Prime.DC). We used CryptoMiniSAT as the SAT solver for our encodings, as it performed better than MiniSAT in our initial benchmarking experiments.

To evaluate the performance of these configurations, we used the Number Partitioning Problem, NPP. Given a set of integers $S = \{a_1, \dots, a_n\}$, NPP asks whether there is a subset of $S$ such that the sum of its members is exactly $\sum a_i/2$. Following (Gent and Walsh 1998), we generated 100 random instances of NPP for a given $n$ and $L$ as follows: create set $S = \{a_1, \dots, a_n\}$ such that each $a_i$ is selected independently at random from $[0 \dots 2^L]$.

We ran each instance on our two configurations and also on two other encodings, the Sorting Network based encoding (SN) and the Binary Adder encoding (BADD) (Eén and Sorensson 2006), provided by MiniSAT+¹. All running times reported in this paper are total running times (the sum of the time spent generating the CNF formulas and the time spent solving them).
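The instance generation just described can be sketched as follows (function names are ours; `seed` is only for reproducibility):

```python
import random

def random_npp_instance(n, L, seed=None):
    """Random NPP instance as in (Gent and Walsh 1998): n integers drawn
    independently and uniformly from [0, 2^L]."""
    rng = random.Random(seed)
    return [rng.randint(0, 2 ** L) for _ in range(n)]

def npp_as_pb_constraint(S):
    """Rewrite the instance as the PB-constraint sum(a_i * x_i) = sum(S) // 2.
    An odd total makes the instance trivially unsatisfiable."""
    total = sum(S)
    return list(S), total // 2, total % 2 == 0

S = random_npp_instance(n=10, L=8, seed=0)
coeffs, target, even_total = npp_as_pb_constraint(S)
```

Each pair $(n, L)$ then yields one PB-constraint with equality as the comparison operator, which is what the encoders under comparison receive.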
We also tried to run the experiments with a BDD encoder, but as the CNF produced by the BDD encoder is exponentially big, it failed to solve medium and large size instances.

Before we describe the results of the experiments, we discuss some properties of the number partitioning problem.

### Number Partitioning Problem

The Number Partitioning Problem is NP-Complete, and it can also be seen as a special case of the subset sum problem. In the SAT context, an instance of NPP can be rewritten as a PB-constraint whose comparison operator is "=". Neither this problem nor the subset sum problem has received much attention from the SAT community.

The size of an instance of NPP, where set $S$ has $n$ elements and $a_{Max}$ is the maximum absolute value in $S$, is $\Theta(n \log(a_{Max}) + n)$. It is known that if the value of $a_{Max}$ is polynomial wrt $n$, the standard dynamic programming approach can solve this problem in time $O(n \, a_{Max})$, which is polynomial wrt the instance size. If $a_{Max}$ is too large, $a_{Max} = 2^{2^{\Theta(n)}}$, the naive algorithm, which generates all $2^n$ subsets of $S$, works in polynomial time wrt the instance size. The hard instances for this problem are those in which $a_{Max}$ is neither too small nor too large wrt $n$.

In (Borgs, Chayes, and Pittel 2001), the authors defined $k = L/n$ and showed that NPP has a phase transition at $k = 1$: for $k < 1$, there are many perfect partitions with probability tending to 1 as $n \to \infty$, while for $k > 1$, there are no perfect partitions with probability tending to 1 as $n \to \infty$.

### Experiments

All the experiments were performed on a Linux cluster (Intel(R) Xeon(R) 2.66GHz). We set the time limit for the solvers to be 10 minutes.

¹http://minisat.se/

Figure 1: The left hand figure plots the best solver for pairs $n$ and $L$ ($n = 3 \cdots 30$, $L = 3 \cdots 2n$).
The right hand figure shows the average solving time, in seconds, of the engines which solved all 100 instances within the 10-minute timeout, for instances with $n = L$.

During our experiments, we noticed that the sorting network encoding in MiniSAT+ incorrectly announces some unsatisfiable instances to be satisfiable (an example of which is the following constraint). We did not investigate the cause of this issue in the source code of MiniSAT+, and all the reported timings use the broken code.

$$5x_1 + 7x_2 + 1x_3 + 5x_4 = 9.$$

In our experiments, we generated 100 instances for each $n \in \{3..30\}$ and $L \in \{3..2n\}$. We say a solver wins on a set of instances if it solves more instances than the others; in the case of a tie, we decide the winner by looking at the average running time. The instances on which each solver performed best are plotted in Figure 1. As the Sorting Network solver was never a winner on any of the sets, it does not show up in the graph.

One can observe the following patterns from the data presented in Figure 1:

1. For $n < 15$, all solvers successfully solve all the instances.

2. Sorting Network fails to solve all the instances when $n = 20$.

3. BADD solves all the instances when $n = L = 24$ in a reasonable time, but it suddenly fails when $n$ (and $L$) get larger.

4. For large enough $n$ ($n \ge 15$), BADD is the winner only when $L$ is small.

5. For large enough $n$ ($n \ge 15$), either Prime.DC or Prime.DP is the best performing solver.

## Conclusion and Future Work

We presented a method for translating Pseudo-Boolean constraints into CNF. The size of the produced CNF is polynomial with respect to the input size. We also showed that for exponentially many instances, the produced CNF is arc-consistent. The number of arc-consistent instances for our encodings is much bigger than that of the existing encodings.
In our experimental evaluation section, we described a set of randomly generated number partitioning instances with two parameters, *n* and *L*, where *n* describes the size of our set and $2^L$ is the maximum value in the set. The experimental results suggest that the Prime.DP and Prime.DC encodings outperform the Binary Adder and Sorting Network encodings.

### Future Work

The upper bounds for our encodings, presented in Table 2, are not tight. We hope to improve these and give the exact asymptotic sizes. Further experimental evaluation is needed to determine the relative performance of the various methods on more practical instances, and on instances with larger numbers of variables. Finally, we hope to develop heuristics for automatically choosing the best encoding to use for any given PB-constraint.

## References

Aavani, A. 2011. Translating pseudo-Boolean constraints into CNF. CoRR abs/1104.1479.

Aloul, F.; Ramani, A.; Markov, I.; and Sakallah, K. 2002. PBS: a backtrack-search pseudo-Boolean solver and optimizer. In Proceedings of the 5th International Symposium on Theory and Applications of Satisfiability, 346–353. Citeseer.

Bailleux, O.; Boufkhad, Y.; and Roussel, O. 2009. New encodings of pseudo-Boolean constraints into CNF. Theory and Applications of Satisfiability Testing – SAT 2009, 181–194.

Borgs, C.; Chayes, J.; and Pittel, B. 2001. Phase transition and finite-size scaling for the integer partitioning problem. Random Structures & Algorithms 19(3-4):247–288.

Ding, C.; Pei, D.; and Salomaa, A. 1996. *Chinese Remainder Theorem: Applications in Computing, Coding, Cryptography*. World Scientific Publishing Co., Inc., River Edge, NJ, USA.

Eén, N., and Sorensson, N. 2006. Translating pseudo-Boolean constraints into SAT. *Journal on Satisfiability, Boolean Modeling and Computation* 2(3-4):1–25.

Eén, N. 2005. *SAT Based Model Checking*. Ph.D.
Dissertation, Department of Computing Science, Chalmers University of Technology and Goteborg University.

Farhi, B., and Kane, D. 2009. New results on the least common multiple of consecutive integers. *Proceedings of the American Mathematical Society* 137:1933–1939.

Gent, I. P., and Walsh, T. 1998. Analysis of heuristics for number partitioning. *Computational Intelligence* 14(3):430–451.

Sheini, H., and Sakallah, K. 2006. Pueblo: A hybrid pseudo-Boolean SAT solver. *Journal on Satisfiability, Boolean Modeling and Computation* 2:61–96.

Tseitin, G. 1968. On the complexity of derivation in propositional calculus. *Studies in Constructive Mathematics and Mathematical Logic* 2(115-125):10–13.

# A LOADING-DEPENDENT MODEL OF PROBABILISTIC CASCADING FAILURE

**IAN DOBSON**

Electrical & Computer Engineering Department
University of Wisconsin-Madison
Madison, WI 53706
E-mail: dobson@engr.wisc.edu

**BENJAMIN A. CARRERAS**

Oak Ridge National Laboratory
Oak Ridge, TN 37831
E-mail: carrerasba@ornl.gov

**DAVID E. NEWMAN**

Physics Department
University of Alaska
Fairbanks, AK 99775
E-mail: ffden@uaf.edu

We propose an analytically tractable model of loading-dependent cascading failure that captures some of the salient features of large blackouts of electric power transmission systems. This leads to a new application and derivation of the quasibinomial distribution and its generalization to a saturating form with an extended parameter range. The saturating quasibinomial distribution of the number of failed components has a power-law region at a critical loading and a significant probability of total failure at higher loadings.

## 1. INTRODUCTION

Cascading failure is the usual mechanism for large blackouts of electric power transmission systems. For example, long, intricate cascades of events caused the August 1996 blackout in northwestern America [25] that disconnected 30,390 MW of power to 7.5 million customers [23]. An even more spectacular example is the August 2003 blackout in northeastern America that disconnected 61,800 MW of power to an area spanning 8 states and 2 provinces and containing 50 million people [33]. The vital importance of the electrical infrastructure to society motivates the construction and study of models of cascading failure.

In this article, we describe some of the salient features of cascading failure in blackouts with an analytically tractable probabilistic model. The features that we abstract from the formidable complexities of large blackouts are: a large but finite number of components; components that fail when their load exceeds a threshold; an initial disturbance loading the system; and the additional loading of components by the failure of other components. The initial overall system stress is represented by upper and lower bounds on a range of initial component loadings. The model neglects the lengths of time between events and the diversity of power system components and interactions. Of course, an analytically tractable model is necessarily much too simple to represent with realism all of the aspects of cascading failure in blackouts; the objective is, rather, to help understand some global system effects that arise in blackouts and in more detailed models of blackouts. Although our main motivation is large blackouts, the model is sufficiently simple and general that it could be applied to cascading failure of other large, interconnected infrastructures.

We summarize our cascading failure model and indicate some of the connections to the literature that are elaborated later.
The model has many identical components randomly loaded. An initial disturbance adds load to each component and causes some components to fail by exceeding their loading limit. Failure of a component causes a fixed load increase for other components. As components fail, the system becomes more loaded and cascading failure of further components becomes likely. The probability distribution of the number of failed components is a saturating quasibinomial distribution. The quasibinomial distribution was introduced by Consul [11] and further studied by Burtin [3], Islam, O'Shaughnessy, and Smith [19], and Jaworski [20]. The saturation in our model extends the parameter range of the quasibinomial distribution, and the saturated distribution can represent highly stressed systems with a high probability of all components failing. Explicit formulas for the saturating quasibinomial distribution are derived using a recursion and via the quasimultinomial distribution of the number of failures in each stage of the cascade. These derivations of the quasibinomial distribution and its generalization to a saturating form appear to be novel. The cascading failure model can also be expressed as a queuing model, and in the nonsaturating case, the number of customers in the first busy period is known to be quasibinomial [10,32].

The article is organized as follows. Section 2 describes cascading failure blackouts and Section 3 describes the model and its normalization. Section 4 derives the saturating quasibinomial distribution of the number of failures and shows how the saturation generalizes the quasibinomial distribution and extends its parameter range. Section 5 illustrates the use of the model in studying the effect of system loading.

## 2. THE NATURE OF CASCADING FAILURE BLACKOUTS

Bulk electrical power transmission systems are complex networks of large numbers of components that interact in diverse ways.
For example, most of America and Canada east of the Rocky Mountains is supplied by a single network running at a shared supply frequency. This network includes thousands of generators, tens of thousands of transmission lines and network nodes, and about 100 control centers that monitor and control the network flows. The flow of power and some dynamical effects propagate on a continental scale. All of the electrical components have limits on their currents and voltages. If these limits are exceeded, automatic protection devices or the system operators disconnect the component from the system. We regard the disconnected component as failed because it is not available to transmit power (in practice, it will be reconnected later). Components can also fail in the sense of misoperation or damage due to aging, fire, weather, poor maintenance, or incorrect design or operating settings. In any case, the failure causes a transient and causes the power flow in the component to be redistributed to other components according to circuit laws and subsequently redistributed according to automatic and manual control actions. The transients and readjustments of the system can be local in effect or can involve components far away, so that a component disconnection or failure can effectively increase the loading of many other components throughout the network. In particular, the propagation of failures is not limited to adjacent network components. The interactions involved are diverse and include deviations in power flows, frequency, and voltage, as well as operation or misoperation of protection devices, controls, operator procedures, and monitoring and alarm systems. However, all of the interactions between component failures tend to be stronger when components are highly loaded. 
For example, if a more highly loaded transmission line fails, it produces a larger transient, there is a larger amount of power to redistribute to other components, and failures in nearby protection devices are more likely. Moreover, if the overall system is more highly loaded, components have smaller margins so they can tolerate smaller increases in load before failure, the system nonlinearities and dynamical couplings increase, and the system operators have fewer options and more stress.

A typical large blackout has an initial disturbance or trigger event, followed by a sequence of cascading events. Each event further weakens and stresses the system and makes subsequent events more likely. Examples of an initial disturbance are short circuits of transmission lines through untrimmed trees, protection device misoperation, and bad weather. The blackout events and interactions are often rare, unusual, or unanticipated because the likely and anticipated failures are already routinely accounted for in power system design and operation. The complexity is such that it can take months after a large blackout to sift through the records, establish the events occurring, and reproduce with computer simulations and hindsight a causal sequence of events.

The historically high reliability of North American power transmission systems is largely due to estimating the transmission system capability and designing and operating the system with margins with respect to a chosen subset of likely and serious contingencies. The analysis is usually either a deterministic analysis of estimated worst cases or a Monte Carlo simulation of moderately detailed probabilistic models that capture steady-state interactions [2]. Combinations of likely contingencies and some dependencies between events such as common mode or common cause are sometimes considered.
The analyses address the first few likely failures rather than the propagation of many rare or unanticipated failures in a cascade.

We briefly review some other approaches to cascading failure in power system blackouts. Carreras, Lynch, Dobson, and Newman [4] represented cascading transmission line overloads and outages in a power system model using the DC load flow approximation and standard linear programming optimization of the generation dispatch. The model shows critical point behavior as load is increased and can show power tails similar to those observed in blackout data. Chen and Thorp [9] modeled power system blackouts using the DC load flow approximation and standard linear programming optimization of the generation dispatch and represented in detail hidden failures of the protection system. The expected blackout size is obtained using importance sampling, and it shows some indications of a critical point as loading is increased. Rios, Kirschen, Jawayeera, Nedic, and Allan [30] evaluated expected blackout cost using Monte Carlo simulation of a power system model that represents the effects of cascading line overloads, hidden failures of the protection system, power system dynamic instabilities, and the operator responses to these phenomena. Ni, McCalley, Vittal, and Tayyib [26] evaluated expected contingency severities based on real-time predictions of the power system state to quantify the risk of operational conditions. The computations account for current and voltage limits, cascading line overloads, and voltage instability. Roy, Asavathiratham, Lesieutre, and Verghese [31] constructed randomly generated tree networks that abstractly represent influences between idealized components. Components can be failed or operational according to a Markov model that represents both internal component failure and repair processes and influences between components that cause failure propagation.
The effects of the network degree and the intercomponent influences on the failure size and duration were studied. Pepyne, Panayiotou, Cassandras, and Ho [29] also used a Markov model for discrete state power system nodal components, but they propagated failures along the transmission lines of a power system network with a fixed probability. They studied the effect of the propagation probability and maintenance policies that reduce the probability of hidden failures. The challenging problem of determining cascading failure due to dynamic transients in hybrid nonlinear differential equation models was addressed by DeMarco [15] using Lyapunov methods applied to a smoothed model and by Parrilo, Lall, Paganini, Verghese, Lesieutre, and Marsden [28] using Karhunen-Loeve and Galerkin model reduction. Watts [34] described a general model of cascading failure in which failures propagate through the edges of a random network. Network nodes have a random threshold and fail when this threshold is exceeded by a sufficient fraction of failed nodes one edge away. Phase transitions causing large cascades can occur when the network becomes critically connected by having sufficient average degree, or when a highly connected network has sufficiently low average degree so that the effect of a single failure is not swamped by a high connectivity to unfailed nodes. Lindley and Singpurwalla [24] described some foundations for causal and cascading failure in infrastructures and model cascading failure as an increase in a component failure rate within a time interval after another component fails. Initial versions of the cascading failure model of this article appear in Dobson, Chen, Thorp, Carreras, and Newman [18] and Dobson, Carreras, and Newman [16].

## 3. DESCRIPTION OF MODEL

The model has *n* identical components with random initial loads. For each component, the minimum initial load is $L^{\min}$ and the maximum initial load is $L^{\max}$.
For $j = 1, 2, \dots, n$, component *j* has initial load $L_j$ that is a random variable uniformly distributed in $[L^{\min}, L^{\max}]$. $L_1, L_2, \dots, L_n$ are independent.

Components fail when their load exceeds $L^{\text{fail}}$. When a component fails, a fixed and positive amount of load *P* is transferred to each of the components.

To start the cascade, an initial disturbance loads each component by an additional amount *D*. Some components may then fail depending on their initial loads $L_j$, and the failure of each of these components will distribute an additional load *P* that can cause further failures in a cascade. The components become progressively more loaded as the cascade proceeds.

In particular, the model produces failures in stages $i = 0, 1, 2, \dots$ according to the following algorithm, where $M_i$ is the number of failures in stage *i*.

**CASCADE Algorithm**

0. All *n* components are initially unfailed and have initial loads $L_1, L_2, \dots, L_n$ that are independent random variables uniformly distributed in $[L^{\min}, L^{\max}]$.

1. Add the initial disturbance *D* to the load of each component. Initialize the stage counter *i* to zero.

2. Test each unfailed component for failure: For $j = 1, \dots, n$, if component *j* is unfailed and its load is greater than $L^{\text{fail}}$, then component *j* fails. Suppose that $M_i$ components fail in this step.

3. Increment the component loads according to the number of failures $M_i$: Add $M_i P$ to the load of each component.

4. Increment *i* and go to step 2.

The CASCADE algorithm has the property that if there are no failures in stage *j*, so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$, so that there are no subsequent failures (in step 2, $M_j$ can be zero either because all the components have already failed or because the loads of the unfailed components are less than $L^{\text{fail}}$).
Since there are *n* components, it follows that $M_n = 0$ and that the outcome with the maximum number of stages with nonzero failures is $1 = M_0 = M_1 = \dots = M_{n-1}$. We are most interested in the total number of failures $S = M_0 + M_1 + \dots + M_{n-1}$.

When the model is being interpreted in an application, the load increment *P* need not correspond only to transfer of a physical load such as the power flow through a component. Many ways by which a component failure makes the failure of other components more likely can be thought of as increasing an abstract "load" on the other components until failure occurs when a threshold is reached.

It is useful to normalize the loads and model parameters so that the initial loads lie in [0,1] and $L^{\text{fail}} = 1$, while preserving the sequence of component failures and $M_0, M_1, \dots$. First, note that the sequence of component failures and $M_0, M_1, \dots$ are unchanged by adding the same constant to the initial disturbance *D* and the failure load $L^{\text{fail}}$. In particular, choosing the constant to be $L^{\max} - L^{\text{fail}}$, the initial disturbance *D* is modified to $D + (L^{\max} - L^{\text{fail}})$ and the failure load $L^{\text{fail}}$ is modified to $L^{\text{fail}} + (L^{\max} - L^{\text{fail}}) = L^{\max}$. Then all of the loads are shifted and scaled to yield normalized parameters. The normalized initial load on component *j* is $\ell_j = (L_j - L^{\min})/(L^{\max} - L^{\min})$, so that $\ell_j$ is a random variable uniformly distributed on [0,1]. The normalized minimum initial load is zero, and the normalized maximum initial load and the normalized failure load are both one. The normalized modified initial disturbance and the normalized load increase when a component fails are

$$d = \frac{D + L^{\max} - L^{\text{fail}}}{L^{\max} - L^{\min}}, \quad p = \frac{P}{L^{\max} - L^{\min}}. \qquad (1)$$

An alternative way to describe the model follows.
It is convenient to use the normalized parameters in Eq. (1). Let $N(t)$ be the number of components with loads in $(1-t, 1]$. If the $n$ initial component loadings are regarded as $n$ points in $[0, 1] \subset \mathbb{R}$, then $N(t)$ is the number of points greater than $1-t$. Then $0 \le N(t) \le n$, the sample paths of $N$ are nondecreasing, and $N(t) = 0$ for $t \le 0$ and $N(t) = n$ for $t \ge 1$.

Let the number of components failed at or before stage *j* be $S_j = M_0 + M_1 + \dots + M_j$. Then, assuming $S_{-1} = 0$, the CASCADE algorithm generates $S_0, S_1, \dots$ according to

$$S_j = N(d + S_{j-1}p), \quad j = 0, 1, \dots \qquad (2)$$

Then $0 \le S_j \le n$, $S_j$ is nondecreasing, and $S_k = S_{k+1}$ implies that $S_j = S_{j+1}$ for $j \ge k$. The minimum such $k$ is the maximum stage number in which failures occur, $S_{-1} < S_0 < S_1 < \dots < S_k = S_{k+1} = \dots$, and the total number of failures is $S = S_k$; that is,

$$N(d + Sp) = S, \qquad (3)$$

$$N(d + S_j p) > S_j, \quad -1 \le j < k. \qquad (4)$$

Moreover, for $j < k$ and $r = 0, 1, \dots, M_{j+1} - 1$,

$$N(d + (S_j + r)p) \ge N(d + S_j p) = S_{j+1} = S_j + M_{j+1} > S_j + r. \qquad (5)$$

Therefore, $N(d + sp) > s$ for $s = 0, 1, \dots, S - 1$, and this inequality and Eq. (3) allow the total number of failures to be characterized as

$$S = \min\{s \mid N(d + sp) = s, \ s \in \{0,1,2,\dots\}\}. \quad (6)$$

If, at stage *j*, $d + S_j p > 1$, we say that the model saturates. Saturation implies $S_{j+1} = n$. Saturation never occurs if *d* and *p* are small enough that $d + np < 1$.

The model can be formulated as a queue with a single server. Exactly $n$ customers arrive during a given hour, at arrival times that are independent and uniformly distributed over the hour. The server is available to serve these customers at time $d$ after the start of the hour because of completing some other task. The customer service time is $p$.
Then, $S$ is the number of customers that arrive during the first busy period. The queue saturates when the first busy period runs past the end of the hour. Charalambides [10] and Takács [32] analyzed this queue in the nonsaturating case described in Section 4.3.

The model can also be recast in the form of an approximate and idealized fiber bundle model. There are $n$ identical, parallel fibers in the bundle. The $L_j$ of the unnormalized model now indicates breaking strength: Fiber $j$ has random breaking strength $L^{\text{fail}} - L_j$ that is uniformly distributed in $[L^{\text{fail}} - L^{\max}, L^{\text{fail}} - L^{\min}]$. Each fiber has zero load initially. Then, an initial force is applied to the bundle that increases the load of each fiber to $D$, and this starts a burst avalanche of fiber breaks of size $S$. When a fiber breaks, it distributes a constant amount of load $P$ to all the other fibers. In contrast, and with better physical justification, idealized fiber bundle models with global redistribution as described by Kloster, Hansen, and Hemmer [22] redistribute the current fiber load equally to the remaining fibers.

## 4. DISTRIBUTION OF NUMBER OF FAILURES

The main result is that the distribution of the total number of component failures $S$ is

$$
P[S=r] = \begin{cases}
\binom{n}{r} \phi(d) (d+rp)^{r-1} (\phi(1-d-rp))^{n-r}, & r=0,1,\ldots,n-1 \\
1 - \sum_{s=0}^{n-1} P[S=s], & r=n,
\end{cases} \tag{7}
$$

where $p \ge 0$ and the saturation function is

$$
\phi(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 1 \\ 1, & x > 1. \end{cases} \qquad (8)
$$

It is convenient to assume that $0^0 \equiv 1$ and $0/0 \equiv 1$ when these expressions arise in any formula in this article.

If $d \ge 0$ and $d + np \le 1$, then there is no saturation ($\phi(x) = x$) and Eq. (7) reduces to the quasibinomial distribution

$$P[S=r] = \binom{n}{r} d(d+rp)^{r-1}(1-d-rp)^{n-r}.
\quad (9)$$

The quasibinomial distribution was introduced by Consul [11] to model an urn problem in which a player makes strategic decisions. Burtin [3] derived the distribution of the number of initially uninfected nodes that become infected in an inverse epidemic process in a random mapping. This distribution is quasibinomial, with $d$ the fraction of initially infected nodes and $p$ the uniform random mapping probability. Islam et al. [19] interpreted $d$ and $p$ as primary and secondary infection probabilities and applied the quasibinomial distribution to data on the final size of influenza epidemics. Jaworski [20] generalized the derivation to a random mapping with a general fixed-point probability.

The cascading failure model gives a new application and interpretation of the quasibinomial distribution. Moreover, the saturation in Eq. (7) extends the range of parameters of the quasibinomial distribution to allow $d + np > 1$. Section 5 shows that this extended parameter range can describe regimes with a high probability of all components failing.

The next two subsections derive Eq. (7) from the CASCADE algorithm in two ways: by means of a recursion and by means of the quasimultinomial joint distribution of $M_0, M_1, \dots, M_{n-1}$.

### 4.1. Recursion

It is convenient to show the dependence of the distribution of the number of failures on the normalized parameters by writing $P[S=r] = f(r,d,p,n)$.

In the case of $n=0$ components,

$$f(0, d, p, 0) = 1. \qquad (10)$$

According to the CASCADE algorithm, when the initial disturbance $d \le 0$, no components fail, and when $d \ge 1$, all $n$ components fail. Then

$$f(r, d, p, n) = \begin{cases} 1 - \phi(d), & r=0 \\ 0, & 0 < r < n \\ \phi(d), & r=n \end{cases} \quad (d \le 0 \text{ or } d \ge 1) \text{ and } n > 0. \tag{11}$$

We assume $n > 0$ and $0 < d < 1$ for the rest of the subsection.
The initial disturbance $d$ causes stage 0 failure of the components that have initial load $\ell$ in $(1-d, 1]$. Therefore, the probability of any component failing in stage 0 is $d$ and

$$P[M_0 = k] = \binom{n}{k} d^k (1-d)^{n-k}. \quad (12)$$

Suppose that $M_0 = k$ and consider the $n-k$ components that did not fail in stage 0. Since none of the $n-k$ components failed in stage 0, their initial loads $\ell$ must lie in $[0, 1-d]$, and the distribution of their initial loads conditioned on not failing in stage 0 is uniform in $[0, 1-d]$. In stage 1, each of the $n-k$ components has had a load increase $d$ from the initial disturbance and an additional load increase $kp$ from the stage 0 failure of $k$ components. Therefore, the equivalent total initial disturbance for each of the $n-k$ components is $D = kp + d$.

To summarize, assuming $M_0 = k$, the failure of the $n-k$ components in stage 1 is governed by the model with initial disturbance $D = kp + d$, load transfer $P = p$, $L^{\min} = 0$, $L^{\max} = 1-d$, $L^{\text{fail}} = 1$, and $n-k$ components. Normalizing the parameters using Eq. (1) yields that the failure of the $n-k$ components is governed by the model with normalized initial disturbance $kp/(1-d)$ and normalized load transfer $p/(1-d)$; that is,

$$P[S=r \mid M_0=k] = f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right). \quad (13)$$

Combining Eqs. (12) and (13) yields the recursion

$$
\begin{align*}
f(r,d,p,n) &= \sum_{k=0}^{r} P[S=r \mid M_0=k] \, P[M_0=k] \\
&= \sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right), \\
&\qquad 0 \le r \le n, \quad 0 < d < 1, \quad n > 0. \tag{14}
\end{align*}
$$

Equations (10), (11), and (14) define $f(r,d,p,n) = P[S=r]$ for all $n \ge 0$ and $p \ge 0$. Equations (10) and (11) agree with Eq. (7). Moreover, it is proved in the Appendix that Eq. (7) satisfies recursion (14). Therefore, Eq.
(7) is the distribution of $S$ in the CASCADE algorithm. Thus, the recursion offers a simple way to derive the saturating quasibinomial distribution that avoids complicated algebra or combinatorics. It is also straightforward to use Eqs. (10) and (14) to confirm by induction on $n$ that Eq. (7) is a probability distribution. + +## 4.2. A Quasimultinomial Distribution + +This subsection shows that the joint distribution of $M_0, M_1, \dots, M_{n-1}$ is quasimultinomial and hence derives Eq. (7). It is convenient throughout to assume $d \ge 0$, restrict $m_0, m_1, \dots$ to nonnegative integers, and write $s_i = m_0 + m_1 + \dots + m_i$ for $i = 0, 1, \dots$ and $s_{-1} = 0$. +---PAGE_BREAK--- + +Let $\alpha_0 = \phi(d), \beta_0 = 1$, and, for $i=1,2,...$, + +$$ +\alpha_i = \phi \left( \frac{m_{i-1} p}{1 - d - s_{i-2} p} \right), \quad \beta_i = \phi(1 - d - s_{i-2} p). \qquad (15) +$$ + +The identity + +$$ +\beta_i(1 - \alpha_i) = \beta_{i+1}, \quad i = 0, 1, 2, \dots, \tag{16} +$$ + +can be verified using $1 - \phi(x) = \phi(1-x)$ and $d \ge 0$ and considering all of the cases. + +In step 2 of stage 0 in the CASCADE algorithm, the probability that the load increment of *d* causes one of the components to fail is $\alpha_0 = \phi(d)$ and the probability of $m_0$ failures in the *n* components is + +$$ +P[M_0 = m_0] = \binom{n}{m_0} \alpha_0^{m_0} (1-\alpha_0)^{n-m_0}. \quad (17) +$$ + +Consider the end of step 2 of stage *i* ≥ 1 in the CASCADE algorithm. The +failures that have occurred are *M*₀ = *m*₀, *M*₁ = *m*₁, ..., *M*ᵢ = *m*ᵢ and there are *n* − *s*ᵢ +unfailed components, but the component loads have not yet been incremented by +*m*ᵢ*p* in step 3. + +Suppose that *d* + *s*ᵢ₋₁*p* < 1. Then, conditioned on the *n* − *s*ᵢ components not yet having failed, the loads of the *n* − *s*ᵢ unfailed components are uniformly distributed in [*d* + *s*ᵢ₋₁*p*, 1]. 
In step 3, the probability that the load increment of *m*ᵢ*p* causes one of the unfailed components to fail is αᵢ₊₁ and the probability of *m*ᵢ₊₁ failures in the *n* − *s*ᵢ unfailed components is + +$$ +\begin{align} +P[M_{i+1} &= m_{i+1} | M_i = m_i, \dots, M_0 = m_0] \nonumber \\ +&= \binom{n-s_i}{m_{i+1}} \alpha_{i+1}^{m_{i+1}} (1-\alpha_{i+1})^{n-s_{i+1}}, && m_{i+1} = 0, 1, \dots, n-s_i. \tag{18} +\end{align} +$$ + +Suppose that $d + s_{i-1}p \ge 1$. Then, all of the components must have failed on a previous step and $P[M_{i+1} = m_{i+1}|M_i = m_i, \dots, M_0 = m_0] = 1$ for $m_{i+1} = 0$ and is zero otherwise. In this case, $\alpha_{i+1} = 0$ and Eq. (18) is verified. + +We claim that for $s_i \le n$, + +$$ +\begin{align} +P[M_i &= m_i, \dots, M_0 = m_0] \nonumber \\ +&= \frac{n!}{m_0! m_1! \cdots m_i! (n-s_i)!} (\alpha_0 \beta_0)^{m_0} (\alpha_1 \beta_1)^{m_1} \cdots (\alpha_i \beta_i)^{m_i} \beta_{i+1}^{n-s_i}. \tag{19} +\end{align} +$$ +---PAGE_BREAK--- + +Equation (19) is proved by induction on $i$. For $i=0$, Eq. (19) reduces to Eq. (17). The inductive step is verified by multiplying Eqs. (18) and (19) and using Eq. (16) to obtain $P[M_{i+1} = m_{i+1}, \dots, M_0 = m_0]$ in the form of Eq. (19). + +An expression equivalent to Eq. (19) obtained using Eq. (16) is + +$$ +\begin{align} +P[M_i &= m_i, \dots, M_0 = m_0] \nonumber \\ +&= \frac{n!}{m_0! m_1! \cdots m_i! (n-s_i)!} (\beta_0 - \beta_1)^{m_0} (\beta_1 - \beta_2)^{m_1} \cdots (\beta_i - \beta_{i+1})^{m_i} \beta_{i+1}^{n-s_i}. \tag{20} +\end{align} +$$ + +The CASCADE algorithm has the property that if there are no failures in stage $j$ so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$ and there are no subsequent failures. This property is verified by Eq. (20) because $m_j = 0$ implies $\beta_{j+1} = \beta_{j+2}$, so that the factor $(\beta_{j+1} - \beta_{j+2})^{m_{j+1}} = 0^{m_{j+1}}$, which vanishes unless $m_{j+1} = 0$. Iterating this argument gives $0 = M_j = M_{j+1} = \dots$.
Since the maximum number of failures is $n$, the longest sequence of failures has $n$ stages with $M_0 = M_1 = \dots = M_{n-1} = 1$. It follows that $0 = M_n = M_{n+1} = \dots$ and that the nontrivial part of the joint distribution is determined by $M_0, M_1, \dots, M_{n-1}$. It also follows that $M_{n-1} = 0$ if there are fewer than $n$ stages with failures. + +Equation (20) can now be rewritten for $i=n-1$. Let $I$ be the largest integer not exceeding $n$ such that $1-d-s_{I-2}p > 0$. Then, Eq. (20) becomes, for $s_{n-1} \le n$, + +$$ +\begin{align} +P[M_{n-1} &= m_{n-1}, \dots, M_0 = m_0] \nonumber \\ +&= \frac{n!}{m_0! m_1! \cdots m_{n-1}! (n-s_{n-1})!} (\phi(d))^{m_0} (m_0 p)^{m_1} (m_1 p)^{m_2} \cdots (m_{I-2} p)^{m_{I-1}} \nonumber \\ +&\qquad \times (\phi(1-d-s_{I-2}p))^{n-s_{I-1}} A(\mathbf{m}, I), \tag{21} +\end{align} +$$ + +where $A(\mathbf{m}, n) = 1$ and $A(\mathbf{m}, I) = 0^{m_{I+1}} \cdots 0^{m_{n-1}} 0^{n-s_{n-1}}$ for $I < n$. It follows from the definition of $A(\mathbf{m}, I)$ that Eq. (21) vanishes for $I < n$ unless $0 = M_{I+1} = \cdots = M_{n-1}$ and $S = M_0 + \cdots + M_I = n$. (Although Eq. (21) was derived assuming $d \ge 0$, it also holds for $d < 0$. In particular, for $d < 0$, Eq. (21) implies $P[M_{n-1} = 0, \dots, M_0 = 0] = 1$.) + +Equation (21) generalizes the quasibinomial distribution and is a form of quasimultinomial distribution. It is a different generalization of the quasibinomial distribution from the quasitrinomial distribution considered by Berg and Mutafchiev [1] to describe numbers of nodes in central components of random mappings. + +Suppose that $S = M_0 + \dots + M_{n-1} = r < n$. Then, $M_{n-1} = 0$ and $M_0 + \dots + M_{n-2} = r - M_{n-1} = r$, and Eq. (21) vanishes unless $I=n$. Summing Eq. (21) over nonnegative integers $m_0, \dots, m_{n-1}$ that sum to $r$ yields +---PAGE_BREAK--- + +$$ +\begin{align*} +P[S=r] &= \sum_{s_{n-1}=r} \frac{n!}{m_0! m_1! \cdots m_{n-1}!
(n-r)!} (\phi(d))^{m_0} (m_0 p)^{m_1} \cdots (m_{n-2} p)^{m_{n-1}} \\ +&\qquad \times (\phi(1-d-rp))^{n-r} \\ +&= \binom{n}{r} (\phi(1-d-rp))^{n-r} p^r \sum_{s_{n-1}=r} \frac{r!}{m_0! m_1! \cdots m_{n-1}!} \\ +&\qquad \times \left(\frac{\phi(d)}{p}\right)^{m_0} m_0^{m_1} \cdots m_{n-2}^{m_{n-1}}, +\end{align*} +$$ + +which reduces to Eq. (7) using a lemma by Katz [21]. (The context of Katz’s lemma assumes $\phi(d)/p$ is a positive integer, but the generalization is immediate.) + +**4.3. Applying a Generalized Ballot Theorem** + +Charalambides [10] explained how the quasibinomial distribution appears as a consequence of generalized ballot theorems in the theory of fluctuations of stochastic processes [32]. We summarize this approach and note that it derives only the nonsaturating cases of Eq. (7). + +We assume $0 < d < 1$. Consider $p$ multiplied by the number of components $N(t)$ with loads in $(1-t, 1]$. For $0 \le t \le 1$, $pN(t)$ is a stochastic process with interchangeable increments whose sample functions are nondecreasing step functions with $pN(0) = 0$. According to Eq. (6), the first passage time of $t - pN(t)$ through $d$ is $\min\{t | pN(t) = t - d\} = \min\{d + sp | N(d + sp) = s\} = d + Sp$. Then, according to Takács [32, Sect. 17, Thm. 4], + +$$ +P[d + Sp \le t] = \sum_{d \le y \le t} \frac{d}{y} P[pN(y) = y - d] \quad (22) +$$ + +for $0 < d \le t \le 1$; that is, + +$$ +\sum_{k=0}^{\lfloor (t-d)/p \rfloor} P[S=k] = \sum_{k=0}^{\lfloor (t-d)/p \rfloor} \frac{d}{d+kp} P[N(d+kp)=k]. \quad (23) +$$ + +Setting $t = d + rp$ in Eq. (23) for $r = 0, 1, \dots, \min\{n, (1-d)/p\}$, differencing the resulting equations, and using the binomial distribution of $N(t)$ for $0 \le t \le 1$ yields the nonsaturating cases of Eq. (7). However, the approach does not extend to the saturating cases because $pN(t)$ does not have interchangeable increments when $t > 1$. + +**4.4.
Approximate Power Tail Exponent at a Critical Case** + +We describe standard approximations of the quasibinomial distribution that yield a power tail exponent at the critical case. For parameters satisfying $np + d \le 1$ (no saturation), the distribution of $S$ is quasibinomial and can be approximated by +---PAGE_BREAK--- +letting $n \to \infty$, $p \to 0$, and $d \to 0$ in such a way that $\lambda = np$ and $\theta = nd$ are fixed to give the generalized (or Lagrangian) Poisson distribution [12–14] + +$$P[S=r] \approx \theta(r\lambda + \theta)^{r-1} \frac{\exp(-r\lambda - \theta)}{r!}, \quad (24)$$ + +which is the distribution of the number of offspring in a Galton–Watson–Bienaymé branching process, with the first generation produced by a Poisson distribution with parameter $\theta$ and subsequent generations produced by a Poisson distribution with parameter $\lambda$. The critical case for the branching process is $np = \lambda = 1$, and Otter [27] proved that at criticality the distribution of the number of offspring has a power tail with exponent $-1.5$. Further implications of the branching process approximation for cascading failure are considered in Dobson, Carreras, and Newman [17]. + +## 5. EFFECT OF LOADING + +How much can an electric power transmission system be loaded before there is undue risk of cascading failure? This section discusses qualitative effects of loading on the distribution of blackout size and then applies the model to describe the effect of loading and illustrate its use. + +### 5.1. Distribution of Blackout Size at Extremes of Loading + +Consider cascading failure in a power transmission system in the impractically extreme cases of very low and very high loading. At very low loading near zero, any failures that occur have minimal impact on other components, and these other components have large operating margins.
Multiple failures are possible, but they are approximately independent, so that the probability of multiple failures is approximately the product of the probabilities of each of the failures. Since the blackout size is roughly proportional to the number of failures, the probability distribution of the blackout size will have an exponential tail. The probability distribution of the blackout size would be quite different if the power system were operated recklessly at very high loading, with every component close to its loading limit. Then any initial disturbance would necessarily cause a cascade of failures leading to total or near-total blackout. It is clear that the probability distribution of the blackout size must change continuously from the exponential-tail form to the certain-total-blackout form as loading increases from very low to very high. We are interested in the nature of the transition between these two extremes. + +### 5.2. Effect of Loading in the Model + +This subsection describes one way to represent a load increase in the model and how this leads to a parameterization of the normalized model. Then the effect of the load increase on the distribution of the number of components failed is described. + +For purposes of illustration, the system has $n = 1000$ components. Suppose that the system is operated so that the initial component loadings vary from $L^{\min}$ to +---PAGE_BREAK--- + +$L^{\max} = L^{\text{fail}} = 1$. Then the average initial component loading $L = (L^{\min} + 1)/2$ may be increased by increasing $L^{\min}$. The initial disturbance $D = 0.0004$ is assumed to be the same as the load transfer amount $P = 0.0004$. These modeling choices for component load lead, via the normalization of Eq. (1), to the parameterization $p = d = 0.0004/(2 - 2L)$, $0.5 \le L < 1$.
The increase in the normalized power transfer $p$ with increased $L$ can be thought of as strengthening the component interactions that cause cascading failure. + +The probability distribution of the number $S$ of components failed as $L$ increases from 0.6 is shown in Figure 1. The distribution for the nonsaturating case $L = 0.6$ has a tail that is approximately exponential. The tail becomes heavier as $L$ increases, and the distribution for the critical case $L = 0.8$, $np = 1$, has an approximate power-law region over a range of $S$. The power-law region has an exponent of approximately $-1.4$, which compares well with the exponent of $-1.5$ obtained by the analytic approximation in Section 4.4. The distribution for the saturated case $L = 0.9$ has an approximately exponential tail for small $r$, zero probability of intermediate $r$, and a probability of 0.80 of all 1000 components failing. That is, in a saturated case, once an intermediate number of components fail, the cascade always proceeds to the failure of all 1000 components. + +The increase in the mean number of failures $ES$ as the average initial component loading $L$ is increased is shown in Figure 2. The sharp change in gradient at the critical loading $L = 0.8$ corresponds to the saturation of Eq. (7) and the consequent increasing probability of all components failing. Indeed, at $L = 0.8$, the change in + +**FIGURE 1.** Log-log plot of distribution of number of components failed $S$ for three values of average initial load $L$. Note the power-law region for the critical loading $L = 0.8$. $L = 0.9$ has an isolated point at (1000, 0.80), indicating probability 0.80 of all 1000 components failed. The probability of no failures is 0.61 for $L = 0.6$, 0.37 for $L = 0.8$, and 0.14 for $L = 0.9$. +---PAGE_BREAK--- + +**FIGURE 2.** Mean number of components failed *ES* as a function of average initial component loading *L*. Note the change in gradient at the critical loading *L* = 0.8.
There are *n* = 1000 components and *ES* becomes 1000 at the highest loadings. + +gradient in Figure 2 together with the power-law region in the distribution of *S* in Figure 1 suggests a type 2 phase transition in the system. If we interpret the number of components failed as corresponding to blackout size, the power-law region is consistent with North American blackout data and blackout simulation results [4,8,18]. In particular, North American blackout data suggest an empirical distribution of blackout size with a power tail with exponent between −1 and −2 [6,7,8]. This power tail indicates a significant risk of large blackouts that is not present when the distribution of blackout sizes has an exponential tail [5]. + +The model results show how system loading can influence the risk of cascading failure. At low loading, there is an approximately exponential tail in the distribution of the number of components failed and a low risk of large cascading failure. There is a critical loading at which there is a power-law region in the distribution of the number of components failed and a sharp increase in the gradient of the mean number of components failed. As loading is increased past the critical loading, the distribution of the number of components failed saturates, there is an increasingly significant probability of all components failing, and there is a significant risk of large cascading failure. + +**Acknowledgments** + +The work was coordinated by the Consortium for Electric Reliability Technology Solutions and funded in part by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Power Technologies, Transmission Reliability Program of the U.S. Department of Energy under contract 9908935 and Interagency Agreement DE-A1099EE35075 with the National Science Foundation. The work was funded in part by NSF grants ECS-0214369 and ECS-0216053.
Part of this research has been carried out +---PAGE_BREAK--- + +at Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. + +References + +1. Berg, S. & Mutafchiev, L. (1990). Random mappings with an attracting center: Lagrangian distributions and a regression function. *Journal of Applied Probability* 27: 622–636. + +2. Billington, R. & Allan, R.N. (1996). *Reliability evaluation of power systems*, 2nd ed. New York: Plenum Press. + +3. Burtin, Y.D. (1980). On a simple formula for random mappings and its applications. *Journal of Applied Probability* 17: 403–414. + +4. Carreras, B.A., Lynch, V.E., Dobson, I., & Newman, D.E. (2002). Critical points and transitions in an electric power transmission model for cascading failure blackouts. *Chaos* 12(4): 985–994. + +5. Carreras, B.A., Lynch, V.E., Newman, D.E., & Dobson, I. (2003). Blackout mitigation assessment in power transmission systems. In *36th Hawaii International Conference on System Sciences*. + +6. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2001). Evidence for self-organized criticality in electric power system blackouts. In *34th Hawaii International Conference on System Sciences*. + +7. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2004). Evidence for self-organized criticality in a time series of electric power system blackouts. *IEEE Transactions on Circuits and Systems I: Regular Papers* 51(9): 1733–1740. + +8. Chen, J., Thorp, J.S., & Parashar, M. (2001). Analysis of electric power disturbance data. In *34th Hawaii International Conference on System Sciences*. + +9. Chen, J. & Thorp, J.S. (2002). A reliability study of transmission system protection via a hidden failure DC load flow model. In *IEE Fifth International Conference on Power System Management and Control*, pp. 384–389. + +10. Charalambides, Ch.A. (1990). 
Abel series distributions with applications to fluctuations of sample functions of stochastic functions. *Communications in Statistics: Theory and Methods* 19(1): 317–335. + +11. Consul, P.C. (1974). A simple urn model dependent upon predetermined strategy. *Sankhyā: The Indian Journal of Statistics, Series B* 36(4): 391–399. + +12. Consul, P.C. (1988). On some models leading to a generalized Poisson distribution. *Communications in Statistics: Theory and Methods* 17(2): 423–442. + +13. Consul, P.C. (1989). *Generalized Poisson distributions*. New York: Marcel Dekker. + +14. Consul, P.C. & Shoukri, M.M. (1988). Some chance mechanisms leading to a generalized Poisson probability model. *American Journal of Mathematical and Management Sciences* 8(1&2): 181–202. + +15. DeMarco, C.L. (2001). A phase transition model for cascading network failure. *IEEE Control Systems Magazine* 21(6): 40–51. + +16. Dobson, I., Carreras, B.A., & Newman, D.E. (2003). A probabilistic loading-dependent model of cascading failure and possible implications for blackouts. In *36th Hawaii International Conference on System Sciences*. + +17. Dobson, I., Carreras, B.A., & Newman, D.E. (2004). A branching process approximation to cascading load-dependent system failure. In *37th Hawaii International Conference on System Sciences*. + +18. Dobson, I., Chen, J., Thorp, J.S., Carreras, B.A., & Newman, D.E. (2002). Examining criticality of blackouts in power system models with cascading events. In *35th Hawaii International Conference on System Sciences*. + +19. Islam, M.N., O'Shaughnessy, C.D., & Smith, B. (1996). A random graph model for the final-size distribution of household infections. *Statistics in Medicine* 15: 837–843. + +20. Jaworski, J. (1998). Predecessors in a random mapping. *Random Structures and Algorithms* 14: 501–519. + +21. Katz, L. (1955). Probability of indecomposability of a random mapping function. *Annals of Mathematical Statistics* 26: 512–517. + +22. 
Kloster, M., Hansen, A., & Hemmer, P.C. (1997). Burst avalanches in solvable models of fibrous materials. *Physical Review E* 56(3). +---PAGE_BREAK--- + +23. Kosterev, D.N., Taylor, C.W., & Mittelstadt, W.A. (1999). Model validation for the August 10, 1996 WSCC system outage. *IEEE Transactions on Power Systems* 13(3): 967–979. + +24. Lindley, D.V. & Singpurwalla, N.D. (2002). On exchangeable, causal and cascading failures. *Statistical Science* 17(2): 209–219. + +25. NERC (North American Electric Reliability Council) (2002). *1996 system disturbances*. Princeton, NJ: NERC. + +26. Ni, M., McCalley, J.D., Vittal, V., & Tayyib, T. (2003). Online risk-based security assessment. *IEEE Transactions on Power Systems* 18(1): 258–265. + +27. Otter, R. (1949). The multiplicative process. *Annals of Mathematical Statistics* 20: 206–224. + +28. Parrilo, P.A., Lall, S., Paganini, F., Verghese, G.C., Lesieutre, B.C., & Marsden, J.E. (1999). Model reduction for analysis of cascading failures in power systems. *Proceedings of the American Control Conference* 6: 4208–4212. + +29. Pepyne, D.L., Panayiotou, C.G., Cassandras, C.G., & Ho, Y.-C. (2001). Vulnerability assessment and allocation of protection resources in power systems. *Proceedings of the American Control Conference* 6: 4705–4710. + +30. Rios, M.A., Kirschen, D.S., Jawayeera, D., Nedic, D.P., & Allan, R.N. (2002). Value of security: modeling time-dependent phenomena and weather conditions. *IEEE Transactions on Power Systems* 17(3): 543–548. + +31. Roy, S., Asavathiratham, C., Lesieutre, B.C., & Verghese, G.C. (2001). Network models: growth, dynamics, and failure. In *34th Hawaii International Conference on System Sciences*, pp. 728–737. + +32. Takács, L. (1967). *Combinatorial methods in the theory of stochastic processes*. New York: Wiley. + +33. U.S.–Canada Power System Outage Task Force (2004). *Final Report on the August 14th blackout in the United States and Canada*. 
United States Department of Energy and Natural Resources Canada. + +34. Watts, D.J. (2002). A simple model of global cascades on random networks. *Proceedings of the National Academy of Sciences USA* 99(9): 5766–5771. + +# APPENDIX + +## Saturating Quasibinomial Formula Satisfies Recursion + +We prove that the saturating quasibinomial formula (7) satisfies recursion (14) for $0 < d < 1$ and $n > 0$. + +In the case $d + rp < 1$ and $r < n$, since + +$$d + rp < 1 \Leftrightarrow \frac{kp}{1-d} + (r-k) \frac{p}{1-d} < 1, \quad (25)$$ + +none of the instances of $f$ in the right-hand side of Eq. (14) saturate, so that the right-hand side of Eq. (14) becomes + +$$\sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} \binom{n-k}{r-k} \frac{kp}{1-d} \left(\frac{rp}{1-d}\right)^{r-k-1} \left(1 - \frac{rp}{1-d}\right)^{n-r} \\ = \binom{n}{r} \sum_{k=0}^{r} \binom{r}{k} \frac{k}{r} d^k (rp)^{r-k} (1-d-rp)^{n-r} = \binom{n}{r} d(d+rp)^{r-1} (1-d-rp)^{n-r}.$$ + +In the case $d + rp \ge 1$ and $r < n$, Eq. (25) and $r - k < n - k$ imply that all of the instances of $f$ in the right-hand side of Eq. (14) vanish. +---PAGE_BREAK--- + +In the case $r=n$, substituting the expression from Eq. (7) for $f(n-k, (kp)/(1-d), p/(1-d), n-k)$ into the right-hand side of Eq. (14) leads to + +$$ +1 - \sum_{t=0}^{n-1} \sum_{k=0}^{t} \binom{n}{k} d^k (1-d)^{n-k} f\left(t-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right) = 1 - \sum_{s=0}^{n-1} f(s,d,p,n), +$$ + +where the last step uses the result established above that Eq. (7) satisfies Eq. (14) for $r < n$.
\ No newline at end of file diff --git a/samples/texts_merged/2763593.md b/samples/texts_merged/2763593.md new file mode 100644 index 0000000000000000000000000000000000000000..25f57ec7c671fddf2ad2989e330ad13b192cd0c8 --- /dev/null +++ b/samples/texts_merged/2763593.md @@ -0,0 +1,364 @@ + +---PAGE_BREAK--- + +# Face Recognition with One Sample Image per Class + +Shaokang Chen +Intelligent Real-Time Imaging and +Sensing (IRIS) Group +The University of Queensland +Brisbane, Queensland, Australia +shaokang@itee.uq.edu.au + +Brian C. Lovell +Intelligent Real-Time Imaging and +Sensing (IRIS) Group +The University of Queensland +Brisbane, Queensland, Australia +lovell@itee.uq.edu.au + +## Abstract + +There are two main approaches to face recognition under varying lighting conditions. One is to represent images with features that are insensitive to illumination in the first place. The other is to construct a linear subspace for every class under the different lighting conditions. Both of these techniques have been applied to face recognition with some success, but they are hard to extend to recognition with varying facial expressions. It is observed that features insensitive to illumination are highly sensitive to expression variations, which makes face recognition under changes in both lighting conditions and expressions a difficult task. We propose a new method called Affine Principal Component Analysis in an attempt to solve both of these problems. This method extracts features to construct a subspace for face representation and warps this space to achieve better class separation. The proposed technique is evaluated using face databases with both variable lighting and facial expressions. We achieve more than 90% accuracy for face recognition by using only one sample image per class. + +## 1.
Introduction + +One of the difficulties in face recognition (FR) is the numerous variations between images of the same face due to changes in lighting conditions, viewpoints, or facial expressions. A good face recognition system should recognize faces and be immune to these variations as much as possible. Yet, it has been reported in [19] that differences between images of the same face due to these variations are normally greater than those between different faces. Therefore, most of the systems designed to date can only deal with face images taken under constrained conditions. So these major problems must be + +overcome in the quest to produce robust face recognition systems. + +In the past few years, different approaches have been proposed for face recognition to reduce the impact of these nuisance factors. Two main approaches are used for illumination-invariant recognition. One is to represent images with features that are less sensitive to illumination changes, such as the edge maps of the image. But edges generated from shadows are related to illumination changes and may have an impact on recognition. Experiments in [19] show that even with the best image representations using illumination-insensitive features and distance measurement, the misclassification rate is more than 20%. The second approach, presented in [21] and [22], is based on the result that images of convex Lambertian objects under different lighting conditions can be approximated by low-dimensional linear subspaces. Kriegman, Belhumeur, and Georghiades proposed an appearance-based method in [7] for recognizing faces under variations in lighting and viewpoint based on this concept. Nevertheless, these methods all assume that the surface reflectance of human faces is Lambertian, and it is hard for these systems to deal with cast shadows. Furthermore, these systems need several images of the same face taken under different lighting source directions to construct a model of a given face.
However, sometimes it is hard to obtain different images of a given face under specific conditions. + +As for expression-invariant recognition, it remains unsolved for machine recognition and is a difficult task even for humans. In [23] and [24], images are morphed to have the same shape as the one used for training. But it is not guaranteed that all images can be morphed correctly; for example, an image with closed eyes cannot be morphed to a neutral image because of the lack of texture inside the eyes. It is also hard to learn the local motions within the feature space to determine the expression changes of each face, since the way one person expresses a certain emotion is normally somewhat different from +---PAGE_BREAK--- +the way others do. Martinez proposed a method to deal with variations in facial expressions in [20]. An image is divided into several local areas, and those that are less sensitive to expression changes are chosen and weighted independently. But features that are insensitive to expression changes may be sensitive to illumination variation. This is discussed in [19], which says that "when a given representation is sufficient to overcome a single image variation, it may still be affected by other processing stages that control other imaging parameters". + +It is known that the performance of face recognition systems is acutely dependent on the choice of features [3], which is thus the key step in the recognition methodology. Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD) [1] are two well-known statistical feature extraction techniques for face recognition. PCA, a standard decorrelation technique, derives an orthogonal projection basis, which allows representation of faces in a vastly reduced feature space; this dimensionality reduction increases generalisation ability. PCA finds a set of orthogonal features which provide a maximally compact representation of the majority of the variation of the facial data.
But PCA might extract some noise features that degrade the performance of the system. For this reason, Swets and Weng [8] argue in favor of methods such as FLD, which seek to determine the most discriminatory features by taking into account both within-class and between-class variation to derive the Most Discriminating Features (MDF). However, compared to PCA, it has been shown that FLD overfits the training data, resulting in a lack of generalization ability [2]. + +We propose a new method, Affine Principal Component Analysis (APCA), that can deal with variations in both illumination and facial expression. This paper discusses APCA and presents results which show that the recognition performance of APCA greatly exceeds that of both PCA and FLD when recognizing known faces with unknown changes in illumination and expression. + +## 2. Review of PCA & FLD + +PCA and FLD are two popular techniques for face recognition. They extract features from training face images to generate orthogonal sets of feature vectors, which span a subspace of the face images. Recognition is then performed within this space based on some distance metric (possibly Euclidean). + +### 2.1. PCA (Principal Component Analysis) + +PCA is a second-order method for finding the linear representation of faces using only the covariance of the data; it determines the set of orthogonal components (feature vectors) which minimise the reconstruction error for a given number of feature vectors. Consider the face image set $I = [I_1, I_2, ..., I_n]$, where $I_i$ is a p×q image, $i \in [1..n]$, p,q,n ∈ Z⁺. The average face $\Psi$ of the image set is defined by: + +$$ \Psi = \frac{1}{n} \sum_{i=1}^{n} I_i . \quad (1) $$ + +Normalizing each image by subtracting the average face, we have the normalized difference image: + +$$ \tilde{D}_i = I_i - \Psi . \quad (2) $$ + +Unpacking $\tilde{D}_i$ row-wise, we form the $N$ ($N = p \times q$) dimensional column vector $d_i$.
We define the covariance matrix $C$ of the normalized image set $D = [d_1, d_2, ..., d_n]$ by: + +$$ C = \sum_{i=1}^{n} d_i d_i^T = DD^T \quad (3) $$ + +An eigendecomposition of $C$ yields eigenvalues $\lambda_i$ and eigenvectors $u_i$ which satisfy: + +$$ Cu_i = \lambda_i u_i, \quad (4) $$ + +$$ DD^T = C = \sum_{i=1}^{N} \lambda_i u_i u_i^T, \quad (5) $$ + +where $i \in [1..N]$. Since the eigenvectors obtained look like human faces when displayed as images, they are also called eigenfaces. Generally, we select a small subset of eigenfaces to form the projected face vector $s_{j,k}$, where $m < n$ denotes the number of principal eigenfaces chosen for the projection, and $k = 1,2,...,K_j$ denotes the $k$th sample of the class $S_j$, where $j = 1,2,...,M$. We often use the nearest neighbor method for classification, where the distance between two face vectors represents the energy difference between them. In the case of variable illumination, lighting changes dominate over the characteristic differences between faces. It has also been shown in [19] that distances between face vectors due to facial expression variations are generally greater than those due to a change of face identity. This is the main reason why PCA does not work well under variable lighting and expression. In fact, not all the features have the same importance in recognition. Features that are strong between classes and weak within a class are much more useful for the recognition task. Therefore, we propose an affine model (Affine PCA) to resolve this problem. The affine procedure involves three steps: eigenspace rotation, whitening transformation, and eigenface filtering. + +### 3.1. Eigenspace Rotation + +The eigenfaces extracted by PCA are Most Expressive Features (MEF) and these are not necessarily optimal for face recognition performance, as stated in [8]. Applying FLD, we can obtain the Most Discriminating Features, but these overfit the training data and lack generalization capacity.
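The PCA construction of Eqs. (1)-(5) can be sketched compactly. A minimal illustration (our own helper; it computes the eigenfaces as left singular vectors of the difference matrix $D$, which is equivalent to the eigendecomposition of $C = DD^T$):

```python
import numpy as np

def pca_eigenfaces(images, m):
    """images: (n, p, q) stack of face images. Returns the average face
    (Eq. (1)) and the m leading eigenfaces of C = D D^T (Eqs. (3)-(5))."""
    n = images.shape[0]
    X = images.reshape(n, -1).T.astype(float)   # N x n, one column per image
    psi = X.mean(axis=1, keepdims=True)         # average face, Eq. (1)
    D = X - psi                                 # difference images, Eq. (2)
    # Left singular vectors of D are the eigenvectors of C = D D^T,
    # with eigenvalues equal to the squared singular values.
    U, S, _ = np.linalg.svd(D, full_matrices=False)
    return psi.ravel(), U[:, :m]
```

In practice, when $N = pq$ greatly exceeds $n$, this SVD route avoids forming the $N \times N$ matrix $C$ explicitly.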
Therefore, in order not to lose generalization ability while still keeping discrimination, we prefer to rotate the space and find the most variant features, i.e. those that best represent changes due to lighting or expression variation. That is, we extract the within-class covariance and apply PCA to find the eigen features that maximally represent within-class variations. The within-class difference matrix is defined as the matrix whose columns are the differences

$$ D_{Within} = [\, s_{j,k} - \mu_j \,], \quad j = 1, \ldots, M, \; k = 1, \ldots, K_j, \qquad (9) $$

and the within-class covariance becomes

$$ Cov_{Within} = D_{Within} D_{Within}^{T}, \quad (10) $$

which is an $m \times m$ matrix. Applying singular value decomposition (SVD) to the within-class covariance matrix, we have

$$ Cov_{Within} = USV^T = \sum_{i=1}^{m} \sigma_i v_i v_i^T . $$

The rotation matrix $M$ is then the set of eigenvectors of the covariance matrix, $M = [v_1, v_2, \ldots, v_m]$, and all vectors represented in the original subspace are transformed into the new space by multiplying by $M$.

## 3.2. Whitening Transformation

The purpose of whitening is to normalize the scatter matrix for uniform gain control. Since, as stated in [3], "mean square error underlying PCA preferentially weights low frequencies", we need to compensate for that. The whitening parameter $\Gamma$ is related to the eigenvalues $\lambda_i$. Conventionally, one would whiten by the standard deviation, that is, $\Gamma_i = \lambda_i^{-1/2}$, $i \in [1 \ldots m]$. But this value appears to compress the eigenspace so much that class separability is diminished. We therefore use $\Gamma_i = \lambda_i^{p}$, where the exponent $p$ is determined empirically.

## 3.3. Filtering the Eigenfaces

The aim of filtering is to diminish the contribution of eigenfaces that are strongly affected by variations.
We want to be able to enhance features that capture the main differences between classes (faces) while diminishing the contribution of those that are largely due to lighting or expression variation (within-class differences).

---PAGE_BREAK---

We thus define a filtering parameter $\Lambda$ which is related to the identity-to-variation (ITV) ratio. The ITV is a ratio measuring the correlation with a change in person versus a change in variations for each of the eigenfaces. For an $M$-class problem, assume that for each of the $M$ classes (persons) we have examples under $K$ standardized different variations in illumination or expression. In the case of illumination changes, the lighting source is positioned in front, above, below, left, and right, as illustrated in Figure 2. The facial expression changes are normal, surprised, and unpleasant, as shown in Figure 3. Let $s_{i,j,k}$ denote the $i$th eigenface coefficient of the $k$th sample of class (person) $S_j$. Then

$$
ITV_i = \frac{\text{Between-Class Scatter}}{\text{Within-Class Scatter}}
      = \frac{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \bar{\sigma}_{i,k}|}{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \mu_{i,j}|}, \quad (11)
$$

$$
\bar{\sigma}_{i,k} = \frac{1}{M} \sum_{j=1}^{M} s_{i,j,k},
$$

and

$$
\mu_{i,j} = \frac{1}{K} \sum_{k=1}^{K} s_{i,j,k}, \quad i = [1 \cdots m].
$$

Here $\bar{\sigma}_{i,k}$ is the $i$th element of the mean face vector under variation $k$ over all persons, and $\mu_{i,j}$ is the $i$th element of the mean face vector of person $j$ over all variations. We then define the scaling parameter $\Lambda$ by:

$$
\Lambda_i = ITV_i^q \quad (12)
$$

where $q$ is an exponential scaling factor determined empirically, as before. Instead of this exponential scaling factor, other non-linear functions such as thresholding suggest themselves.
These possibilities have been explored, but so far the exponential scaling performs best. After the affine transformation, the distance $d$ between two face vectors $s_{j,k}$ and $s_{j',k'}$ is:

$$
d_{jj',kk'} = \sqrt{\sum_{i=1}^{m} [\omega_i (s_{i,j,k} - s_{i,j',k'})]^2}, \quad (13)
$$

$$
\omega_i = \Gamma_i \Lambda_i / |\Gamma \Lambda^T|.
$$

The weights $\omega_i$ scale the corresponding eigenfaces. To determine the two exponents $p$ and $q$ for $\Gamma$ and $\Lambda$, we introduce a cost function and optimise them empirically. It is defined by:

$$
OPT = \sum_{j=1}^{M} \sum_{k=1}^{K} \sum_{m \,:\, d_{jm,k0} < d_{jj,k0}} \frac{d_{jj,k0}}{d_{jm,k0}}, \quad (14)
$$

where $d_{jj,k0}$ is the distance between the sample $s_{j,k}$ and $s_{j,0}$, the standard reference image for class $S_j$ (typically the normally illuminated image). Note that the condition $d_{jm,k0} < d_{jj,k0}$ holds only when there is a misclassification error. Thus $OPT$ combines the error rate with the ratio of within-class distance to between-class distance. By minimizing $OPT$, we can determine the best choices of $p$ and $q$. Figure 1 shows the relationship between $OPT$ and $p, q$. For one of our training databases, a minimum was obtained at $p = -0.2$, $q = -0.4$.

From the above, our final set of transformed eigenfaces is:

$$
u_i' = \omega_i u_i M = \frac{1}{\sigma_i} \omega_i D v_i M \quad (15)
$$

where $i \in [1 \ldots m]$. After the transformation, we can apply PCA again on the compressed subspace to further reduce its dimensionality (two-stage PCA).

# 4. EXPERIMENTAL RESULTS

The method is tested on the Asian Face Image Database PF01 [6] for both changes in lighting source position and facial expression. The size of each image is 171×171 pixels with 256 grey levels per pixel. Figures 2 and 3 show some examples from the database.
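The whitening and filtering weights of Eqs. (11)-(13) can be sketched numerically. This is a toy illustration: the coefficients `s`, the eigenvalues `lam`, and the data sizes are placeholders, and only the exponents reuse the optimum $p = -0.2$, $q = -0.4$ reported above:

```python
import numpy as np

# Placeholder coefficients: s[i, j, k] is the i-th eigenface coefficient of
# sample k of class (person) j; lam stands in for the PCA eigenvalues.
rng = np.random.default_rng(2)
m, M, K = 6, 4, 3
s = rng.random((m, M, K))
lam = np.sort(rng.random(m))[::-1] + 0.1

p_exp, q_exp = -0.2, -0.4            # the empirical optimum reported above
gamma = lam ** p_exp                 # whitening parameter Gamma_i

# Eq. (11): identity-to-variation ratio per eigenface.
sigma_bar = s.mean(axis=1, keepdims=True)   # mean over persons, per variation
mu = s.mean(axis=2, keepdims=True)          # mean over variations, per person
itv = np.abs(s - sigma_bar).mean(axis=(1, 2)) / np.abs(s - mu).mean(axis=(1, 2))

# Eq. (12) and the combined weights of Eq. (13).
Lam = itv ** q_exp
w = gamma * Lam / np.abs(gamma @ Lam)

def apca_dist(a, b):
    # Eq. (13): weighted Euclidean distance between two coefficient vectors.
    return np.sqrt(np.sum((w * (a - b)) ** 2))
```

With negative exponents, eigenfaces with large eigenvalues or low identity-to-variation ratios are down-weighted before the nearest-neighbour distance is taken.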
To evaluate the performance of our method, we performed a 3-fold cross-validation on the database as follows. We chose one-third of the 107 subjects to construct the APCA model and one-third for training. We then added only the normal faces (the pictures in the first column of Figures 2 and 3) of the remaining third of the subjects to our recognition database, and attempted to recognize these faces under all the other conditions. This process is repeated three times using different partitions, and the performance is averaged. All the results listed in this paper are obtained from experiments on testing data only. Table 1 compares the recognition rates of APCA and PCA. It is clear from the results that Affine PCA performs much better than PCA for face recognition under variable lighting conditions. The proposed APCA outperforms PCA remarkably, with recognition rates of 99.3% on training data and 95.6% on testing data, and with negligible reduction in performance for normally lit faces. Figure 4 displays the recognition rates against the number of eigenfaces used ($m$). It can be seen that selecting the principal 40 to 50 eigenfaces is sufficient for illumination-invariant face recognition. This number is
---PAGE_BREAK---
somewhat higher than is required for standard PCA, where selecting $m$ in the range 10 to 20 is sufficient; this is possibly a necessary consequence of the greater complexity of the APCA face subspace compared to standard PCA.

Figure 2. Examples of illumination changes in the Asian Face Database PF01.

Figure 3. Examples of expression changes in the Asian Face Database PF01.

As for variations in facial expression, APCA achieves a higher recognition rate than PCA, with an improvement of around 10%. For changes in both lighting condition and expression, APCA always performs better than PCA regardless of the number of eigenfaces, and the gain is almost stable at high subspace dimensions.
It can also be seen from Figure 4 that the recognition rate for expression changes does not decrease as dramatically with a reduction in the number of eigen features as it does for illumination variations. Therefore, as few as 20 features are enough to recognize faces with facial expression variations.

We also tested the performance of APCA under simultaneous variations in illumination and expression. The recognition rate of APCA is less than 5% lower than that for illumination changes or expression changes alone, and it is clearly higher than the recognition rate of PCA. This shows that the performance of APCA is stable in spite of the complexity of the variations. PCA, however, is not as robust across the different variations: for illumination changes, PCA only achieves less than 60% accuracy, while the accuracy increases to more than 80% for expression variations and drops back to about 70% when illumination and expression changes are combined (see Table 1). This phenomenon has also been reported in [19]: no single representation is sufficient to overcome variations in both illumination and expression.

Figure 4. Recognition rate vs. number of features.
| Method | Illumination Variation | Expression Variation | Illumination and Expression Variations |
|---|---|---|---|
| PCA | 57.3% | 84.6% | 70.6% |
| Affine PCA | 95.6% | 92.2% | 86.8% |
Table 1. Comparison of recognition rates between APCA and PCA.

# 5. CONCLUSION

We have described an easy-to-calculate and efficient face recognition algorithm that warps the face subspace constructed by PCA. The affine procedure contains three steps: rotating the eigenspace, whitening transformation, and then filtering the eigenfaces. After the affine transformation, features are assigned different weights for recognition, which in effect enlarges the between-class covariance while minimizing the within-class covariance.
---PAGE_BREAK---
There are only two variable parameters in the optimization, which is few compared to other methods for such high-dimensionality problems. The method can deal not only with variations in illumination and expression separately, but also performs well for the combination of both changes, with only one sample image per class. Experiments show that APCA is more robust to changes in illumination and expression and has better generalization capacity than the FLD method.

A shortcoming of the algorithm is that we cannot guarantee that the weights obtained are the best for recognition, since we only rotate the eigenspace in the direction that best represents the within-class covariance. Future work will search the eigenspace for the eigen features best suited to face recognition.

## References

[1] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, 1997, pp. 711-720.

[2] Chengjun Liu and Harry Wechsler, "Enhanced Fisher Linear Discriminant Models for Face Recognition", 14th International Conference on Pattern Recognition, ICPR'98, Queensland, Australia, August 17-20, 1998.

[3] Chengjun Liu and Harry Wechsler, "Evolution of Optimal Projection Axes (OPA) for Face Recognition".
Third IEEE International Conference on Automatic Face and Gesture Recognition, FG'98, Nara, Japan, April 14-16, 1998.

[4] Dao-Qing Dai, Guo-Can Feng, Jian-Huang Lai, and P. C. Yuen, "Face Recognition Based on Local Fisher Features", 2nd Int. Conf. on Multimodal Interface, Beijing, 2000.

[5] Hua Yu and Jie Yang, "A Direct LDA Algorithm for High-Dimensional Data with Application to Face Recognition", Pattern Recognition, 34(10), 2001, pp. 2067-2070.

[6] Intelligent Multimedia Lab., "Asian Face Image Database PF01", http://nova.postech.ac.kr/.

[7] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", IEEE Trans. Pattern Anal. Mach. Intelligence, Vol. 23, No. 6, 2001, pp. 643-660.

[8] Daniel L. Swets and John Weng, "Using Discriminant Eigenfeatures for Image Retrieval", IEEE Trans. on PAMI, Vol. 18, No. 8, 1996, pp. 831-836.

[9] X. W. Hou, S. Z. Li, and H. J. Zhang, "Direct Appearance Models", in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Hawaii, December 2001.

[10] Z. Xue, S. Z. Li, and E. K. Teoh, "Facial Feature Extraction and Image Warping Using PCA Based Statistic Model", in Proceedings of the 2001 International Conference on Image Processing, Thessaloniki, Greece, October 7-10, 2001.

[11] S. Z. Li, K. L. Chan, and C. L. Wang, "Performance Evaluation of the Nearest Feature Line Method in Image Classification and Retrieval", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1335-1339, November 2000.

[12] G. D. Guo, H. J. Zhang, and S. Z. Li, "Pairwise Face Recognition", in Proceedings of the 8th IEEE International Conference on Computer Vision, Vancouver, Canada, July 9-12, 2001.

[13] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, "Fisher Discriminant Analysis with Kernels", Neural Networks for Signal Processing IX, 1999, pp. 41-48.

[14] M. A. Turk and A. P.
Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86.

[15] Jie Zhou and David Zhang, "Face Recognition by Combining Several Algorithms", ICPR 2002.

[16] Alexandre Lemieux and Marc Parizeau, "Experiments on Eigenfaces Robustness", ICPR 2002.

[17] A. M. Martinez and A. C. Kak, "PCA versus LDA", IEEE TPAMI, 23(2):228-233, 2001.

[18] A. Yilmaz and M. Gokmen, "Eigenhill vs. Eigenface and Eigenedge", in Proceedings of the International Conference on Pattern Recognition, Barcelona, Spain, 2000, pp. 827-830.

[19] Yael Adini, Yael Moses, and Shimon Ullman, "Face Recognition: The Problem of Compensating for Changes in Illumination Direction", IEEE PAMI, Vol. 19, No. 7, 1997.

[20] Aleix M. Martinez, "Recognizing Imprecisely Localized, Partially Occluded, and Expression Variant Faces from a Single Sample per Class", IEEE TPAMI, Vol. 24, No. 6, 2002.

[21] Ronen Basri and David W. Jacobs, "Lambertian Reflectance and Linear Subspaces", IEEE TPAMI, Vol. 25, No. 2, 2003.

[22] Peter W. Hallinan, "A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Conditions", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994.

[23] D. Beymer and T. Poggio, "Face Recognition from One Example View", Science, Vol. 272, No. 5250, 1996.

[24] M. J. Black, D. J. Fleet, and Y. Yacoob, "Robustly Estimating Changes in Image Appearance", Computer Vision and Image Understanding, Vol. 78, No. 1, 2000.

[25] Shaokang Chen, Brian C. Lovell, and Sai Sun, "Face Recognition with APCA in Variant Illuminations", Workshop on Signal Processing and Applications, Australia, December 2002.
\ No newline at end of file diff --git a/samples/texts_merged/276850.md new file mode 100644 index 0000000000000000000000000000000000000000..24a6f9635b24f33e9e7d9e2a91c517f55382933d --- /dev/null +++ b/samples/texts_merged/276850.md @@ -0,0 +1,386 @@

---PAGE_BREAK---

On the entropy for group actions on the circle

by

Eduardo Jorquera (Santiago)

**Abstract.** We show that for a finitely generated group of $C^2$ circle diffeomorphisms, the entropy of the action equals the entropy of the restriction of the action to the non-wandering set.

**1. Introduction.** Let $(X, \mathrm{dist})$ be a compact metric space and $G$ a group of homeomorphisms of $X$ generated by a finite family of elements $\Gamma = \{g_1, \dots, g_n\}$. To simplify, we will always assume that $\Gamma$ is symmetric, that is, $g^{-1} \in \Gamma$ for every $g \in \Gamma$. For each $n \in \mathbb{N}$ we denote by $B_{\Gamma}(n)$ the ball of radius $n$ in $G$ (with respect to $\Gamma$), that is, the set of elements $f \in G$ which may be written in the form $f = g_{i_m} \cdots g_{i_1}$ for some $m \le n$ and $g_{i_j} \in \Gamma$. For $f \in G$ we let $\|f\| = \|f\|_{\Gamma} = \min\{n : f \in B_{\Gamma}(n)\}$.

As in the classical case, given $\varepsilon > 0$ and $n \in \mathbb{N}$, two points $x, y$ in $X$ are said to be $(n, \varepsilon)$-separated if there exists $g \in B_{\Gamma}(n)$ such that $\mathrm{dist}(g(x), g(y)) \ge \varepsilon$. A subset $A \subset X$ is $(n, \varepsilon)$-separated if all $x \neq y$ in $A$ are $(n, \varepsilon)$-separated. We denote by $s(n, \varepsilon)$ the maximal possible cardinality (perhaps infinite) of an $(n, \varepsilon)$-separated set.
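These definitions lend themselves to direct experimentation. The following sketch (a toy illustration, not part of the paper) computes a greedy lower bound on the number of $(n, \varepsilon)$-separated points for a finitely generated group of circle maps; the generators, grid size, and tolerance below are arbitrary choices:

```python
import itertools

def circle_dist(x, y):
    # Distance on the circle R/Z, normalized to total length 1.
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def ball(generators, n):
    # B_Gamma(n): all compositions g_{i_m} o ... o g_{i_1} with m <= n
    # (plus the identity), represented as Python functions.
    maps = [lambda x: x]
    for m in range(1, n + 1):
        for word in itertools.product(generators, repeat=m):
            def f(x, w=word):
                for g in w:
                    x = g(x)
                return x
            maps.append(f)
    return maps

def separated_points(generators, n, eps, grid=200):
    # Greedy lower bound on s(n, eps): scan a grid and keep a point whenever
    # it is (n, eps)-separated from every point kept so far.
    maps = ball(generators, n)
    kept = []
    for i in range(grid):
        x = i / grid
        if all(any(circle_dist(g(x), g(y)) >= eps for g in maps) for y in kept):
            kept.append(x)
    return kept

# Rotations are isometries of the circle, so for a rotation group an
# (n, eps)-separated set is just an eps-separated set: about 1/eps points.
alpha = 0.317
rot = [lambda x: (x + alpha) % 1.0, lambda x: (x - alpha) % 1.0]
print(len(separated_points(rot, 2, 0.3)))  # 3
```

For isometric actions the count stays bounded in $n$; positive entropy requires the count to grow exponentially with $n$ for some fixed $\varepsilon$.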
The topological entropy of the action at scale $\varepsilon$ is defined by

$$h_{\Gamma}(G \curvearrowright X, \varepsilon) = \limsup_{n \uparrow \infty} \frac{\log(s(n, \varepsilon))}{n},$$

and the *topological entropy* is defined by

$$h_{\Gamma}(G \curvearrowright X) = \lim_{\varepsilon \downarrow 0} h_{\Gamma}(G \curvearrowright X, \varepsilon).$$

Notice that, although $h_{\Gamma}(G \curvearrowright X, \varepsilon)$ depends on the system of generators, the properties of having zero, positive, or infinite entropy are independent of this choice.

The definition above was proposed in [5] as an extension of the classical topological entropy of single maps (the definition extends to pseudo-groups of homeomorphisms, and hence is suitable for applications in foliation theory).

2000 Mathematics Subject Classification: 20B27, 37A35, 37C85, 37E10.
Key words and phrases: topological entropy, group actions, circle diffeomorphisms.
---PAGE_BREAK---

Indeed, for a homeomorphism $f$, the topological entropy of the action of $\mathbb{Z} \simeq \langle f \rangle$ equals twice the (classical) topological entropy of $f$. Nevertheless, the functorial properties of this notion remain unclear. For example, the following fundamental question is open.

**GENERAL QUESTION.** Is it true that $h_{\Gamma}(G \curvearrowright X)$ is equal to $h_{\Gamma}(G \curvearrowright \Omega)$?

Here $\Omega = \Omega(G \curvearrowright X)$ denotes the *non-wandering set* of the action, or in other words

$$ \Omega = \{x \in X : \text{ for every neighborhood } U \text{ of } x, \text{ we have } f(U) \cap U \neq \emptyset \text{ for some } f \neq \mathrm{id} \text{ in } G\}. $$

This is a closed invariant set whose complement $\Omega^c$ corresponds to the *wandering set* of the action.

The notion of topological entropy for group actions is quite appropriate in the case where $X$ is a one-dimensional manifold. In fact, in this case, the topological entropy is necessarily finite (cf. §2).
Moreover, in the case of actions by diffeomorphisms, the dichotomy $h_{\text{top}} = 0$ or $h_{\text{top}} > 0$ is well understood. Indeed, according to a result originally proved by Ghys, Langevin, and Walczak for groups of $C^2$ diffeomorphisms [5], and extended by Hurder to groups of $C^1$ diffeomorphisms (see for instance [9]), we have $h_{\text{top}} > 0$ if and only if there exists a resilient orbit for the action. This means that there exist a group element $f$ contracting an interval towards a fixed point $x_0$ inside it, and another element $g$ which sends $x_0$ into its basin of contraction under $f$.

The results of this work give a positive answer to the General Question above in the context of group actions on one-dimensional manifolds under certain mild assumptions.

**THEOREM A.** If $G$ is a finitely generated subgroup of $\operatorname{Diff}_+^2(S^1)$, then for every finite system of generators $\Gamma$ of $G$, we have

$$ h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega). $$

Our proof of Theorem A actually works in the Denjoy class $C^{1+bv}$ and applies to general codimension-one foliations on compact manifolds. In the class $C^{1+Lip}$, it is quite possible that an alternative proof could be given using standard techniques from the theory of levels [2, 6].

It is unclear whether Theorem A extends to actions of lower regularity. However, it still holds under certain algebraic hypotheses. In fact (quite unexpectedly), the regularity hypothesis is used to rule out the existence of elements $f \in G$ that fix some connected component of the wandering set and which are *distorted*, that is,

$$ \lim_{n \to \infty} \frac{\|f^n\|}{n} = 0. $$
---PAGE_BREAK---

Actually, for the equality between the entropies it suffices to require that no element in $G$ be subexponentially distorted.
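A standard example of a distorted element (not taken from this paper) lives in the discrete Heisenberg group: the central commutator $c = [a, b]$ satisfies $\|c^{n^2}\| = O(n)$, so $\|c^n\|/n \to 0$. A breadth-first search over the Cayley graph exhibits this sublinear growth of word length along powers of $c$:

```python
from collections import deque

# The discrete Heisenberg group, encoded as triples (x, y, z) with the law
# (x, y, z) * (x', y', z') = (x + x', y + y', z + z' + x * y').
def mul(u, v):
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2] + u[0] * v[1])

a, b = (1, 0, 0), (0, 1, 0)
gens = [a, (-1, 0, 0), b, (0, -1, 0)]       # symmetric generating set

def word_lengths(radius):
    # Breadth-first search over the Cayley graph: wl[g] = ||g|| for every
    # group element g in the ball of the given radius.
    wl = {(0, 0, 0): 0}
    queue = deque([(0, 0, 0)])
    while queue:
        g = queue.popleft()
        if wl[g] == radius:
            continue
        for t in gens:
            h = mul(g, t)
            if h not in wl:
                wl[h] = wl[g] + 1
                queue.append(h)
    return wl

wl = word_lengths(8)
# The commutator c = [a, b] = (0, 0, 1) has ||c|| = 4, yet ||c^4|| = 8:
# word length grows only like the square root of the exponent.
print(wl[(0, 0, 1)], wl[(0, 0, 4)])  # 4 8
```

Here $\|c^{k^2}\| \le 4k$ via $c^{k^2} = [a^k, b^k]$, while a signed-area (isoperimetric) argument gives the matching lower bound, so $c$ is distorted in the sense just defined.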
In other words, it suffices to require that, for each element $f \in G$ of infinite order, there exists a non-decreasing function $q : \mathbb{N} \to \mathbb{N}$ (depending on $f$) with subexponential growth satisfying $q(\|f^n\|) \ge n$ for every $n \in \mathbb{N}$. This is an algebraic condition which is satisfied by many groups, for example nilpotent or free groups. (We refer the reader to [1] for a nice discussion of distorted elements.) Under this hypothesis, the following result holds.

**THEOREM B.** If $G$ is a finitely generated subgroup of $\operatorname{Homeo}_+(S^1)$ without subexponentially distorted elements, then for every finite system of generators $\Gamma$ of $G$, we have

$$h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega).$$

The entropy of general group actions and distorted elements seem to be related in an interesting manner. Indeed, although the topological entropy of a single homeomorphism $f$ may equal zero, if this map appears as a subexponentially distorted element inside an acting group, then it may create positive entropy for the group action.

**2. Some background.** In this work we consider the normalized length on the circle, and every homeomorphism is orientation preserving.

We begin by noticing that if $G$ is a finitely generated group of circle homeomorphisms and $\Gamma$ is a finite generating system for $G$, then for all $n \in \mathbb{N}$ and all $\varepsilon > 0$ one has

$$ (1) \qquad s(n, \varepsilon) \le \frac{1}{\varepsilon} \#B_{\Gamma}(n). $$

Indeed, let $A$ be an $(n, \varepsilon)$-separated set of cardinality $s(n, \varepsilon)$. Then for any two adjacent points $x, y$ in $A$ there exists $f \in B_{\Gamma}(n)$ such that $\operatorname{dist}(f(x), f(y)) \ge \varepsilon$. For a fixed $f$, the intervals $[f(x), f(y)]$ which appear have disjoint interiors.
Since the total length of the circle is 1, any given $f$ can be used in this construction at most $1/\varepsilon$ times, which immediately gives (1).

Notice that taking the logarithm on both sides of (1), dividing by $n$, and passing to the limit gives

$$h_{\Gamma}(G \curvearrowright S^1) \le \operatorname{gr}_{\Gamma}(G),$$

where $\operatorname{gr}_{\Gamma}(G)$ denotes the *growth* of $G$ with respect to $\Gamma$, that is,

$$\operatorname{gr}_{\Gamma}(G) = \lim_{n \to \infty} \frac{\log(\# B_{\Gamma}(n))}{n}.$$

Some easy consequences of this fact are the following:
---PAGE_BREAK---

* If $G$ has subexponential growth, that is, if $\operatorname{gr}_\Gamma(G) = 0$ (in particular, if $G$ is nilpotent, or if $G$ is the Grigorchuk–Machì group considered in [8]), then $h_\Gamma(G \curvearrowright S^1) = 0$ for all finite generating systems $\Gamma$.

* In the general case, if $\# \Gamma = 2q \ge 2$, then from the relations

$$
\#B_{\Gamma}(n) \le 1 + \sum_{j=1}^{n} 2q(2q-1)^{j-1} = \begin{cases} 1 + \frac{q}{q-1}((2q-1)^n - 1), & q \ge 2, \\ 1 + 2n, & q=1, \end{cases}
$$

one concludes that

$$
h_{\Gamma}(G \curvearrowright S^1) \le \log(2q - 1).
$$

This shows in particular that the entropy of the action of $G$ on $S^1$ is finite. Notice that this may also be deduced from the probabilistic arguments of [3] (see Théorème D therein). However, these arguments only yield the weaker estimate $h_{\Gamma}(G \curvearrowright S^1) \le \log(2q)$ when $\Gamma$ has cardinality $2q$.

**3. Some preparations for the proofs.** The statements of our results are obvious when the non-wandering set of the action equals the whole circle. Hence, we will assume in what follows that $\Omega$ is a proper subset of $S^1$, and we will denote by $I$ a connected component of the complement of $\Omega$. Let $\operatorname{St}(I)$ denote the stabilizer of $I$ in $G$.

LEMMA 1.
*The stabilizer $\operatorname{St}(I)$ is either trivial or infinite cyclic.*

*Proof.* The (restrictions to $I$ of the) non-trivial elements of $\operatorname{St}(I)|_I$ have no fixed points, for otherwise these points would be non-wandering. Thus $\operatorname{St}(I)|_I$ acts freely on $I$, and according to Hölder's Theorem [4, 7], its action is semiconjugate to an action by translations. We claim that if $\operatorname{St}(I)|_I$ is non-trivial, then it is infinite cyclic. Indeed, if not, then the corresponding group of translations is dense. This implies that the preimage under the semiconjugacy of any point whose preimage is a single point corresponds to a non-wandering point for the action. But this contradicts the fact that $I$ is contained in $\Omega^c$.

If $\operatorname{St}(I)|_I$ is trivial, then $f|_I$ is trivial for every $f \in \operatorname{St}(I)$, and hence $f$ itself must be the identity. We then conclude that $\operatorname{St}(I)$ is trivial.

Analogously, $\operatorname{St}(I)$ is cyclic if $\operatorname{St}(I)|_I$ is cyclic. In this case, $\operatorname{St}(I)|_I$ is generated by the restriction to the interval $I$ of the generator of $\operatorname{St}(I)$. $\blacksquare$

**DEFINITION 1.** A connected component $I$ of $\Omega^c$ will be called *of type 1* if $\operatorname{St}(I)$ is trivial, and *of type 2* if $\operatorname{St}(I)$ is infinite cyclic.

Notice that the families of connected components of type 1 and type 2 are invariant, that is, for each $f \in G$ the interval $f(I)$ is of type 1 (resp. of type 2) if $I$ is of type 1 (resp. of type 2). Moreover, given two connected components of type 1 of $\Omega^c$, there exists at most one element of $G$ sending
---PAGE_BREAK---
the former to the latter. Indeed, if $f(I) = g(I)$ then $g^{-1}f$ is in the stabilizer of $I$, and hence $f = g$ if $I$ is of type 1.

LEMMA 2. *Let $x_1, \dots, x_m$ be points contained in a single type 1 connected component of $\Omega^c$. If for some $\varepsilon > 0$ and $n \in \mathbb{N}$ the points $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le 1 + 1/\varepsilon$.*
*Proof.* Let $I = ]a,b[$ be the connected component of type 1 of $\Omega^c$ containing the points $x_1, \dots, x_m$. After renumbering the $x_i$'s, we may assume that $a < x_1 < \dots < x_m < b$. For each $1 \le i \le m-1$ one can choose an element $g_i \in B_{\Gamma}(n)$ such that $\operatorname{dist}(g_i(x_i), g_i(x_{i+1})) \ge \varepsilon$. Now, since $I$ is of type 1, the intervals $]g_i(x_i), g_i(x_{i+1})[$ are pairwise disjoint. Therefore, the number of these intervals times their minimal length is less than or equal to 1. This gives $(m-1)\varepsilon \le 1$, thus proving the lemma. $\blacksquare$

The case of connected components $I$ of type 2 of $\Omega^c$ is much more complicated. The difficulty is that if the generator of the stabilizer of $I$ is subexponentially distorted in $G$, then there may exist exponentially many $(n, \varepsilon)$-separated points inside $I$, and hence a relevant part of the entropy is "concentrated" in $I$. To deal with this problem, for each connected component $I$ of type 2 of $\Omega^c$ we denote by $p_I$ its midpoint, and then we define $\ell_I: G \to \mathbb{N}_0$ as follows. Let $h$ be the generator of the stabilizer of $I$ such that $h(x) > x$ for all $x$ in $I$. For each $f \in G$ the element $fhf^{-1}$ is the generator of the stabilizer of $f(I)$ with the analogous property. We then let $\ell_I(f) = |r|$, where $r$ is the unique integer such that

$$ f h^r f^{-1} (p_{f(I)}) \leq f(p_I) < f h^{r+1} f^{-1} (p_{f(I)}). $$

LEMMA 3. For all $f,g$ in $G$ one has

$$ \ell_I(g \circ f) \le \ell_{f(I)}(g) + \ell_I(f) + 1. $$

*Proof.* Let $r$ be the unique integer such that

$$ (2) \qquad (fhf^{-1})^r (p_{f(I)}) \le f(p_I) < (fhf^{-1})^{r+1} (p_{f(I)}), $$

and let $s$ be the unique integer for which

$$ (gfhf^{-1}g^{-1})^s (p_{gf(I)}) \le g(p_{f(I)}) < (gfhf^{-1}g^{-1})^{s+1} (p_{gf(I)}), $$

so that

$$
\ell_I(f) = |r|, \quad \ell_{f(I)}(g) = |s|.
$$ + +We then have + +$$ g^{-1}(gfhf^{-1}g^{-1})^s (p_{gf(I)}) \le p_{f(I)} < g^{-1}(gfhf^{-1}g^{-1})^{s+1} (p_{gf(I)}), $$ + +that is, + +$$ (fhf^{-1})^s g^{-1} (p_{gf(I)}) \le p_{f(I)} < (fhf^{-1})^{s+1} g^{-1} (p_{gf(I)}). $$ +---PAGE_BREAK--- + +Therefore, + +$$ (f h f^{-1})^r (f h f^{-1})^s g^{-1}(p_{gf(I)}) \leq f(p_I) < (f h f^{-1})^{r+1} (f h f^{-1})^{s+1} g^{-1}(p_{gf(I)}), $$ + +and hence + +$$ (f h f^{-1})^{r+s} g^{-1}(p_{gf(I)}) \leq f(p_I) < (f h f^{-1})^{r+s+2} g^{-1}(p_{gf(I)}). $$ + +This easily gives + +$$ g(f h f^{-1})^{r+s} g^{-1}(p_{gf(I)}) \leq g f(p_I) < g(f h f^{-1})^{r+s+2} g^{-1}(p_{gf(I)}), $$ + +and thus + +$$ (g f h f^{-1} g^{-1})^{r+s}(p_{gf(I)}) \leq g f(p_I) < (g f h f^{-1} g^{-1})^{r+s+2}(p_{gf(I)}). $$ + +This shows that $\ell_I(gf)$ equals either $|r+s|$ or $|r+s+1|$, which concludes the proof. $\blacksquare$ + +The following corollary is a direct consequence of the preceding lemma, but may be proved independently. + +**COROLLARY 1.** For every $f \in G$ one has + +$$ |\ell_I(f) - \ell_{f(I)}(f^{-1})| \leq 1. $$ + +*Proof.* From (2) one obtains + +$$ h^{-(r+1)}(p_I) < f^{-1}(p_{f(I)}) \leq h^{-r}(p_I) < h^{-r+1}(p_I), $$ + +and hence $\ell_{f(I)}(f^{-1})$ equals either $|r|$ or $|r+1|$. Since $\ell_I(f) = |r|$, the corollary follows. $\blacksquare$ + +**4. The proof in the smooth case.** To rule out the possibility of “concentration” of the entropy on a type 2 connected component $I$ of $\Omega^c$, in the $C^2$ case we will use classical control of distortion arguments in order to construct, starting from the function $\ell_I$, a kind of quasi-morphism from $G$ into $\mathbb{N}_0$. Slightly more generally, let $\mathcal{F}$ be any finite family of connected components of type 2 of $\Omega^c$. We denote by $\mathcal{F}^G$ the family of all intervals contained in the orbits of the intervals in $\mathcal{F}$. For each $f \in G$ we then define + +$$ \ell_{\mathcal{F}}(f) = \sup_{I \in \mathcal{F}^G} \ell_I(f). 
$$

*A priori*, the value of $\ell_{\mathcal{F}}$ could be infinite. We claim, however, that for groups of $C^2$ diffeomorphisms this value is necessarily finite for every element $f$.

**PROPOSITION 1.** For every finite family $\mathcal{F}$ of type 2 connected components of $\Omega^c$, the value of $\ell_{\mathcal{F}}(f)$ is finite for each $f \in G$.

To prove this proposition, we will need to estimate the function $\ell_I(f)$ in terms of the distortion of $f$ on the interval $I$.
---PAGE_BREAK---

LEMMA 4. For each fixed type 2 connected component $I$ of $\Omega^c$ and every $g \in G$, the value of $\ell_I(g)$ is bounded from above by a number $L(V)$ depending only on $V = \operatorname{var}(\log(g'|_I))$, the total variation of the logarithm of the derivative of the restriction of $g$ to $I$.

*Proof.* Write $I = ]a,b[$ and $g(I) = ]\bar{a},\bar{b}[$. If $h$ is a generator of the stabilizer of $I$, then for every $f \in G$ the value of $\ell_I(f)$ corresponds (up to some constant $\pm 1$) to the number of fundamental domains for the dynamics of $fhf^{-1}$ on $f(I)$ between the points $p_{f(I)}$ and $f(p_I)$, which in turn corresponds to the number of fundamental domains for the dynamics of $h$ on $I$ between $f^{-1}(p_{f(I)})$ and $p_I$. Therefore, we need to show that there exist $c < d$ in $]a,b[$ depending only on $V$ such that $g^{-1}(p_{g(I)})$ belongs to $[c,d]$. We will show that this happens for the values

$$c = a + \frac{|I|}{2e^V} \quad \text{and} \quad d = b - \frac{|I|}{2e^V}.$$

We will just check that the first choice works, leaving the second one to the reader. By the Mean Value Theorem, there exist $x \in g(I)$ and $y \in [\bar{a}, p_{g(I)}]$ such that

$$ (g^{-1})'(x) = \frac{|I|}{|g(I)|} $$

and

$$ (g^{-1})'(y) = \frac{|g^{-1}([\bar{a}, p_{g(I)}])|}{|[\bar{a}, p_{g(I)}]|} = \frac{g^{-1}(p_{g(I)}) - a}{|g(I)|/2}. $$

By the definition of the constant $V$, we have $(g^{-1})'(x)/(g^{-1})'(y) \le e^V$.
This gives

$$ e^V \ge \frac{|I|/|g(I)|}{2(g^{-1}(p_{g(I)}) - a)/|g(I)|} = \frac{|I|}{2(g^{-1}(p_{g(I)}) - a)}, $$

thus proving that $g^{-1}(p_{g(I)}) \ge a + |I|/(2e^V)$, as we wanted to show. $\blacksquare$

*Proof of Proposition 1.* Let $J = ]\bar{a}, \bar{b}[$ be an interval in the $G$-orbit of $I = ]a, b[$. If $g = g_{i_n} \cdots g_{i_1}$, $g_{i_j} \in \Gamma$, is an element of minimal length sending $I$ to $J$, then the intervals $I, g_{i_1}(I), g_{i_2}g_{i_1}(I), \dots, g_{i_{n-1}} \cdots g_{i_2}g_{i_1}(I)$ have pairwise disjoint interiors. Therefore,

$$ \operatorname{var}(\log(g'|_I)) \le \sum_{j=0}^{n-1} \operatorname{var}(\log(g'_{i_{j+1}}|_{g_{i_j}\cdots g_{i_1}(I)})) \le \sum_{h \in \Gamma} \operatorname{var}(\log(h')) =: W. $$

Moreover, setting $V = \operatorname{var}(\log(f'))$, we have

$$ \operatorname{var}(\log((fg)'|_I)) \le \operatorname{var}(\log(g'|_I)) + \operatorname{var}(\log(f')) \le W + V. $$
---PAGE_BREAK---

By Lemmas 3 and 4 and Corollary 1,

$$
\ell_J(f) \le \ell_J(g^{-1}) + \ell_I(fg) + 1 \le \ell_I(g) + \ell_I(fg) + 2 \le L(W) + L(W+V) + 2.
$$

This proves the assertion of the proposition when $\mathcal{F}$ consists of a single interval. The case of a general finite family $\mathcal{F}$ follows easily. $\blacksquare$

For a given $\varepsilon > 0$ we define $\ell_{\varepsilon} = \ell_{\mathcal{F}_{\varepsilon}}$, where $\mathcal{F}_{\varepsilon} = \{I_1, \dots, I_k\}$ is the family of all connected components of $\Omega^c$ having length greater than or equal to $\varepsilon$, with $k = k(\varepsilon)$. Notice that, by Lemma 3, for all $f, g$ in $G$ one has

$$
(3) \qquad \ell_{\varepsilon}(gf) \le \ell_{\varepsilon}(g) + \ell_{\varepsilon}(f) + 1.
$$

LEMMA 5.
There exist constants $A(\varepsilon) > 0$ and $B(\varepsilon)$ with the following property: If $x_1, \dots, x_m$ are points in a single connected component of type 2 of $\Omega^c$ and $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le A(\varepsilon)n + B(\varepsilon)$. + +*Proof.* Write $c_\varepsilon = \max\{\ell_\varepsilon(g) : g \in \Gamma\}$ (according to Proposition 1, the value of $c_\varepsilon$ is finite). Let $I$ be the type 2 connected component of $\Omega^c$ containing $x_1, \dots, x_m$. We may assume that $x_1 < \dots < x_m$. For each $1 \le i \le k$ let $h_i$ be the generator of $\text{St}(I_i)$. Notice that $\ell_\varepsilon(h_i^r) \ge |r|$ for all $r \in \mathbb{Z}$. + +If $f$ is an element in $B_{\Gamma}(n)$ sending $I$ to some $I_i$, then the number of points which are $\varepsilon$-separated by $f$ is less than or equal to $1/\varepsilon + 1$. We claim that the number of elements in $B_{\Gamma}(n)$ sending $I$ to $I_i$ is bounded above by $4nc_{\varepsilon} + 4n - 1$. Indeed, if $g$ also sends $I$ onto $I_i$ then $gf^{-1} \in \text{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore, using (3) one obtains $|r| \le \ell_{\varepsilon}(h_i^r) \le 2nc_{\varepsilon} + 2n - 1$. + +Since the previous arguments apply to each type 2 interval $I_i$, we have + +$$ +m \le k(1/\varepsilon + 1)(4nc_{\varepsilon} + 4n - 1). +$$ + +Therefore, letting + +$$ +A(\varepsilon) = (4k + 4k/\varepsilon)(1 + c_{\varepsilon}) \quad \text{and} \quad B(\varepsilon) = -(k + k/\varepsilon) +$$ + +concludes the proof. $\blacksquare$ + +To conclude the proof of Theorem A, the following notation will be useful. + +**NOTATION.** 1. Given $\varepsilon > 0$ and $n \in \mathbb{N}$, we denote by $s(n, \varepsilon)$ the largest cardinality of an $(n, \varepsilon)$-separated subset of $\mathbb{S}^1$. Likewise, $s_{\Omega}(n, \varepsilon)$ will denote the largest cardinality of an $(n, \varepsilon)$-separated set contained in the non-wandering set.
+ +*Proof of Theorem A.* Fix $0 < \varepsilon < 1/(2L)$, where $L$ is a common Lipschitz constant for the elements in $\Gamma$. We will show that, for some function $p_\varepsilon$ growing linearly in $n$ (and whose coefficients depend on $\varepsilon$), one has + +$$ +(4) \qquad s(n, \varepsilon) \le p_{\varepsilon}(n)s_{\Omega}(n, \varepsilon) + p_{\varepsilon}(n). +$$ +---PAGE_BREAK--- + +Actually, any function $p_\varepsilon$ with subexponential growth and satisfying such an inequality suffices. Indeed, taking the logarithm of both sides, dividing by $n$, and passing to the limit implies that + +$$h_{\Gamma}(G \curvearrowright S^1, \varepsilon) = h_{\Gamma}(G \curvearrowright \Omega, \varepsilon).$$ + +Letting $\varepsilon$ go to zero gives + +$$h_{\Gamma}(G \curvearrowright S^1) \leq h_{\Gamma}(G \curvearrowright \Omega).$$ + +Since the opposite inequality is obvious, this shows the desired equality between the entropies. + +To show (4), fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. Let $n_{\Omega}$ (resp. $n_{\Omega^c}$) be the number of points in $S$ which are in $\Omega$ (resp. in $\Omega^c$). Obviously, $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$, where $[\cdot]$ denotes integer part. We will show that there exists an $(n, \varepsilon)$-separated set $T$ contained in $\Omega$ and having cardinality $l$. This will obviously give $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$, together with Lemmas 2 and 5, will imply that + +$$ \begin{aligned} s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^c} \le n_{\Omega} + tk(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1) \\ &\le s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1), \end{aligned} $$ + +thus showing (4).
+ +To show the existence of the set $T$ with the properties above, we proceed in a constructive way. Let us enumerate the connected components of $\Omega^c$ containing points in $S$ in a cyclic way as $I_1, \dots, I_t$. Now for each $1 \le i \le l$ choose a point $t_i \in \Omega$ between $I_{2i-1}$ and $I_{2i}$, and let $T = \{t_1, \dots, t_l\}$. We need to check that, for $i \ne j$, the points $t_i$ and $t_j$ are $(n, \varepsilon)$-separated. Now by construction, for each $i \ne j$ there exist at least two different points $x, y$ in $S$ contained in the interval of smallest length in $S^1$ joining $t_i$ and $t_j$. Since $S$ is $(n, \varepsilon)$-separated, there exist $m \le n$ and $g_{i_1}, \dots, g_{i_m}$ in $\Gamma$ such that $\text{dist}(h(x), h(y)) \ge \varepsilon$, where $h = g_{i_m} \cdots g_{i_2}g_{i_1}$. Unfortunately, because of the topology of the circle, this does not imply that $\text{dist}(h(t_i), h(t_j)) \ge \varepsilon$. However, the proof will be finished if we show that + +$$ (5) \quad \text{dist}(g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)) \ge \varepsilon \quad \text{for some } 0 \le r \le m. $$ + +This claim is obvious if $\text{dist}(t_i, t_j) \ge \varepsilon$. If this is not the case then, by the definition of the constants $\varepsilon$ and $L$, the length of the interval $[g_{i_1}(t_i), g_{i_1}(t_j)]$ is smaller than $1/2$, and hence it coincides with the distance between its endpoints. If this distance is at least $\varepsilon$, then we are done. If not, the same argument shows that the length of the interval $[g_{i_2}g_{i_1}(t_i), g_{i_2}g_{i_1}(t_j)]$ is smaller than $1/2$ and coincides with the distance between its endpoints. If this length is at least $\varepsilon$, then we are done. If not, we continue the procedure. 
Clearly, there must be some integer $r \le m$ such that the length of the +---PAGE_BREAK--- + +interval $[g_{i_{r-1}} \cdots g_{i_1}(t_i), g_{i_{r-1}} \cdots g_{i_1}(t_j)]$ is smaller than $\varepsilon$, and that of $[g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)]$ is greater than or equal to $\varepsilon$. As before, the length of this interval is forced to be smaller than $1/2$, and hence it coincides with the distance between its endpoints. This shows (5) and concludes the proof of Theorem A. $\blacksquare$ + +**5. The proof in the absence of subexponentially distorted elements.** Recall that topological entropy is invariant under topological conjugacy. Therefore, due to [3, Théorème D], in order to prove Theorem B we may assume that $G$ is a group of bi-Lipschitz homeomorphisms. Let $L$ be a common Lipschitz constant for the elements in $\Gamma$. Fix again $0 < \varepsilon < 1/(2L)$, and let $I_1, \dots, I_k$ be the connected components of $\Omega^c$ having length greater than or equal to $\varepsilon$. Let $h_i$ be a generator for the stabilizer of $I_i$ (with $h_i = \text{Id}$ in case $I_i$ is of type 1). Consider the minimal non-decreasing function $q_\varepsilon$ such that, for each of the non-trivial $h_i$'s, one has $q_\varepsilon(\|h_i^r\|) \ge r$ for all positive $r$. We will show that (4) holds for the function + +$$p_{\varepsilon}(n) = 2k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) + 1.$$ + +Notice that, by assumption, this function $p_\varepsilon$ grows at most subexponentially in $n$. Hence, as in the case of Theorem A, inequality (4) allows us to finish the proof of the equality between the entropies. + +The main difficulty in showing (4) in this case is that Lemma 5 is no longer available. However, the following still holds. + +LEMMA 6.
If $x_1, \dots, x_m$ are points in a single type 2 connected component $I$ of $\Omega^c$ having length at least $\varepsilon$, and $x_i, x_j$ are $(n, \varepsilon)$-separated for all $i \neq j$, then $m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1)$. + +*Proof.* Let $I$ be the type 2 connected component of $\Omega^c$ containing $x_1, \dots, x_m$. We may assume that $x_1 < \dots < x_m$. If $f$ is an element in $B_{\Gamma}(n)$ sending $I$ to some $I_i$, then the number of points which are $\varepsilon$-separated by $f$ is less than or equal to $1/\varepsilon + 1$. We claim that the number of elements in $B_{\Gamma}(n)$ sending $I$ to $I_i$ is bounded above by $2q_\varepsilon(2n) + 1$. Indeed, if $g$ also sends $I$ to $I_i$ then $gf^{-1} \in \text{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore, + +$$2n \geq \|gf^{-1}\| = \|h_i^r\|,$$ + +and hence + +$$q_{\varepsilon}(2n) \ge q_{\varepsilon}(\|h_i^r\|) \ge |r|.$$ + +Since the previous arguments apply to each type 2 interval $I_i$, this gives + +$$m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1),$$ + +thus proving the lemma. $\blacksquare$ + +To show (4) in the present case, we proceed as in the proof of Theorem A. We fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. We let $n_\Omega$ +---PAGE_BREAK--- + +(resp. $n_{\Omega^c}$) be the number of points in $S$ which are in $\Omega$ (resp. in $\Omega^c$), so that $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$. As before, one can show that there exists an $(n, \varepsilon)$-separated set contained in $\Omega$ and having cardinality $l$. This will obviously give $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$ still hold.
Using Lemmas 2 and 6 one now obtains + +$$ +\begin{align*} +s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^{c}} \leq n_{\Omega} + tk(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) \\ +&\leq s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1). +\end{align*} +$$ + +This concludes the proof of Theorem B. + +**Acknowledgments.** I would like to thank Andrés Navas for introducing me to this subject and for his continuous support during this work, which was partially funded by the Research Network on Low Dimensional Dynamical Systems (PBCT-Conicyt project ADI 17). I would also like to extend my gratitude to both the referee and the editor for pointing out a subtle error in the original version of this paper. + +References + +[1] D. Calegari and M. Freedman, *Distortion in transformation groups*, Geom. Topol. 10 (2006), 267–293. + +[2] J. Cantwell and L. Conlon, *Poincaré–Bendixson theory for leaves of codimension one*, Trans. Amer. Math. Soc. 265 (1981), 181–209. + +[3] B. Deroin, V. Kleptsyn et A. Navas, *Sur la dynamique unidimensionnelle en régularité intermédiaire*, Acta Math. 199 (2007), 199–262. + +[4] E. Ghys, *Groups acting on the circle*, Enseign. Math. 47 (2001), 329–407. + +[5] E. Ghys, R. Langevin et P. Walczak, *Entropie géométrique des feuilletages*, Acta Math. 160 (1988), 105–142. + +[6] G. Hector, *Architecture des feuilletages de classe C²*, Astérisque 107–108 (1983), 243–258. + +[7] A. Navas, *Groups of Circle Diffeomorphisms*, forthcoming book; Spanish version: Ensaios Matemáticos 13, Braz. Math. Soc., 2007. + +[8] —, *Growth of groups and diffeomorphisms of the circle*, Geom. Funct. Anal. 18 (2008), 988–1028. + +[9] P. Walczak, *Dynamics of Foliations, Groups and Pseudogroups*, IMPAN Monogr. Math. 64, Birkhäuser, Basel, 2004.
+ +Departamento de Matemáticas +Facultad de Ciencias +Universidad de Chile +Las Palmeras 3425, Ñuñoa +Santiago, Chile +E-mail: ejorquer@u.uchile.cl + +Received 15 September 2008; +in revised form 25 February 2009 \ No newline at end of file diff --git a/samples/texts_merged/2779026.md b/samples/texts_merged/2779026.md new file mode 100644 index 0000000000000000000000000000000000000000..d145948a64fbe73d69675ba6fb80bd56c80311bd --- /dev/null +++ b/samples/texts_merged/2779026.md @@ -0,0 +1,595 @@ + +---PAGE_BREAK--- + +Erdős–Rényi Sequences and Deterministic Construction of Expanding Cayley Graphs + +V. Arvind * + +Partha Mukhopadhyay† + +Prajakta Nimbhorkar † + +May 15, 2011 + +Abstract + +Given a finite group $G$ by its multiplication table as input, we give a deterministic polynomial-time construction of a directed Cayley graph on $G$ with $O(\log|G|)$ generators, which has a rapid mixing property and a constant spectral expansion. + +We prove a similar result in the undirected case, and give a new deterministic polynomial-time construction of an expanding Cayley graph with $O(\log|G|)$ generators, for any group $G$ given by its multiplication table. This gives a completely different and elementary proof of a result of Wigderson and Xiao [10]. + +For any finite group $G$ given by a multiplication table, we give a deterministic polynomial-time construction of a cube generating sequence that gives a distribution on $G$ which is arbitrarily close to the uniform distribution. This derandomizes the well-known construction of Erdős–Rényi sequences [2]. + +# 1 Introduction + +Let $G$ be a finite group with $n$ elements, and let $J = \{g_1, g_2, \dots, g_k\}$ be a *generating set* for the group $G$. + +The *directed Cayley graph* Cay$(G, J)$ is a directed graph with vertex set $G$ with directed edges of the form $(x, xg_i)$ for each $x \in G$ and $g_i \in J$.
Clearly, since $J$ is a generating set for $G$, Cay$(G, J)$ is a strongly connected graph with every vertex of out-degree $k$. + +The *undirected Cayley graph* Cay$(G, J \cup J^{-1})$ is an undirected graph on the vertex set $G$ with undirected edges of the form $\{x, xg_i\}$ for each $x \in G$ and $g_i \in J$. Again, since $J$ is a generating set for $G$, Cay$(G, J \cup J^{-1})$ is a connected regular graph of degree $|J \cup J^{-1}|$. + +Let $X = (V, E)$ be an undirected regular $n$-vertex graph of degree $D$. Consider the *normalized adjacency matrix* $A_X$ of the graph $X$. It is a symmetric matrix with largest eigenvalue 1. For $0 < \lambda < 1$, the graph $X$ is an $(n, D, \lambda)$-spectral expander if the second largest eigenvalue of $A_X$, in absolute value, is bounded by $\lambda$. + +The study of expander graphs and their properties is of fundamental importance in theoretical computer science; the Hoory-Linial-Wigderson monograph is an excellent source [4] for current + +*The Institute of Mathematical Sciences, Chennai, India. Email: arvind@imsc.res.in + +†Chennai Mathematical Institute, Siruseri, India. Emails: {partham,prajakta}@cmi.ac.in +---PAGE_BREAK--- + +developments and applications. A central problem is the explicit construction of expander graph families [4, 5]. By explicit it is meant that the family of graphs has efficient deterministic constructions, where the notion of efficiency is often tailored to a specific application, e.g. [9]. Explicit constructions with the best known (and near optimal) expansion and degree parameters are Cayley expander families (the so-called Ramanujan graphs) [5]. + +Does every finite group have an expanding generating set? Alon and Roichman, in [1], answered this in the affirmative using the probabilistic method. Let $G$ be any finite group with $n$ elements.
Given any constant $\lambda > 0$, they showed that for a random multiset $J$ of size $O(\log n)$ picked uniformly at random from $G$, the resulting Cayley graph is, with high probability, a spectral expander with second largest eigenvalue bounded by $\lambda$. In other words, $\text{Cay}(G, J \cup J^{-1})$ is an $O(\log n)$ degree, $\lambda$-spectral expander with high probability. The theorem also gives a polynomial (in $n$) time randomized algorithm for the construction of a Cayley expander on $G$: pick the elements of $J$ independently and uniformly at random and check that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander. There is a brute-force deterministic simulation of this that runs in $n^{O(\log n)}$ time by cycling through all candidate sets $J$. Wigderson and Xiao, in [10], give a very interesting $n^{O(1)}$ time derandomized construction based on Chernoff bounds for matrix-valued random variables (and pessimistic estimators). Their result is the starting point of the study presented in this paper. + +In this paper, we give an entirely different and elementary $n^{O(1)}$ time derandomized construction that is based on analyzing mixing times of random walks on expanders rather than on their spectral properties. Our construction is conceptually somewhat simpler and also works for directed Cayley graphs. + +The connection between the mixing time of a random walk on a graph and its spectral expansion is well studied. For undirected graphs we have the following. + +**Theorem 1.1** [8, Theorem 1] Let $A$ be the normalized adjacency matrix of an undirected graph. For every initial distribution, suppose the distribution obtained after $t$ steps of the random walk following $A$ is $\epsilon$-close to the uniform distribution in the $L_1$ norm. Then the spectral gap $(1 - |\lambda_1|)$ of $A$ is $\Omega(\frac{1}{t} \log(\frac{1}{\epsilon}))$.
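The relationship in Theorem 1.1 can be checked numerically on a toy instance. The sketch below (our own illustration in standard-library Python; the choice of the cyclic group $\mathbb{Z}_8$ with generators $\{+1, -1\}$ is an assumption for the example, not taken from the paper) estimates the second largest eigenvalue of the lazy walk matrix on the undirected Cayley graph of $\mathbb{Z}_8$ by power iteration on the orthogonal complement of the uniform distribution. For this circulant matrix the exact value is $(1 + \cos(\pi/4))/2$, so the spectral gap is a constant, consistent with the fast mixing of this walk.

```python
import math

def second_eigenvalue(A, iters=300):
    """Estimate the second-largest-magnitude eigenvalue of a symmetric
    stochastic matrix A by power iteration restricted to the orthogonal
    complement of the all-ones (top) eigenvector."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                      # project out uniform part
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))         # |lambda_2| estimate
        v = w
    return lam

# Lazy walk on Cay(Z_8, {+1, -1}): stay with probability 1/2,
# otherwise move to one of the two neighbours.
n = 8
A = [[0.0] * n for _ in range(n)]
for x in range(n):
    A[x][x] += 0.5
    A[x][(x + 1) % n] += 0.25
    A[x][(x - 1) % n] += 0.25

lam2 = second_eigenvalue(A)
exact = (1 + math.cos(math.pi / 4)) / 2                # known circulant eigenvalue
assert abs(lam2 - exact) < 1e-6
```

Here $1 - \lambda_2 \approx 0.146$ is the spectral gap; Theorem 1.1 says that a walk mixing to within $\epsilon$ in $t$ steps forces a gap of order $\frac{1}{t}\log\frac{1}{\epsilon}$.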
+ +In particular, if the graph is $\text{Cay}(G, J \cup J^{-1})$ for any $n$ element group $G$, such that a $C \log n$ step random walk is $\frac{1}{n^c}$-close to the uniform distribution in the $L_1$ norm, then the spectral gap is bounded below by a constant of order $\frac{c}{C}$. + +Even for directed graphs, a connection between the mixing times of random walks and the spectral properties of the underlying Markov chain is known. + +**Theorem 1.2** [6, Theorem 5.9] Let $\lambda_{max}$ denote the second largest magnitude (complex valued) eigenvalue of the normalized adjacency matrix $P$ of a strongly connected aperiodic Markov chain. Then the mixing time is lower bounded by $\tau(\epsilon) \ge \frac{\log(1/(2\epsilon))}{\log(1/|\lambda_{max}|)}$, where $\epsilon$ is the distance between the resulting distribution and the uniform distribution in the $L_1$ norm. + +In [7], Pak uses this connection to prove an analogue of the Alon-Roichman theorem for directed Cayley graphs: Let $G$ be an $n$ element group and let $J = \langle g_1, \dots, g_k \rangle$ consist of $k = O(\log n)$ group elements picked independently and uniformly at random from $G$. Pak shows that for any initial distribution on $G$, the distribution obtained by an $O(\log n)$ step *lazy random walk* on the directed graph $\text{Cay}(G, J)$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Then, by Theorem 1.2, it follows that the directed Cayley graph $\text{Cay}(G, J)$ has a constant spectral expansion. Crucially, we note
For any $\delta > 0$, $J$ is said to be a cube generating sequence for $G$ with closeness parameter $\delta$ if the probability distribution $D_J$ on $G$ given by $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$, where each $\epsilon_i$ is independently and uniformly distributed in $\{0, 1\}$, is $\delta$-close to the uniform distribution in the $L_2$-norm. + +Erdős and Rényi [2] proved the following theorem. + +**Theorem 1.4** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the distribution on $G$ generated by $J$, i.e. $D_J(x) = \Pr_{\epsilon_1, \dots, \epsilon_k \in_R \{0,1\}} [g_1^{\epsilon_1} \cdots g_k^{\epsilon_k} = x]$ for $x \in G$, and let $U$ be the uniform distribution on $G$. Then the expected value $\mathbb{E}_J \|D_J - U\|_2^2 = \frac{1}{2^k}\left(1 - \frac{1}{n}\right)$. + +In particular, if we choose $k = O(\log n)$, the resulting distribution $D_J$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_2$ norm. + +## Our Results + +Let $G$ be a finite group with $n$ elements given by its multiplication table. Our first result is a derandomization of a result of Pak [7]. We show a deterministic polynomial-time construction of a generating set $J$ of size $O(\log |G|)$ such that a lazy random walk on Cay$(G, J)$ mixes fast. Throughout the paper, we measure the distance between two distributions in the $L_2$ norm. + +**Theorem 1.5** For any constant $c > 1$, there is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for the given group $G$, such that given any initial distribution on $G$ the lazy random walk of $O(\log n)$ steps on the directed Cayley graph Cay$(G, J)$ yields a distribution that is $\frac{1}{n^c}$-close (in $L_2$ norm) to the uniform distribution. + +Theorem 1.5 and Theorem 1.2 together yield the following corollary.
+ +**Corollary 1.6** Given a finite group $G$ and any $\epsilon > 0$, there is a deterministic polynomial-time algorithm to construct an $O(\log n)$ size generating set $J$ such that Cay$(G, J)$ is a spectral expander (i.e. its second largest eigenvalue in absolute value is bounded by $\epsilon$). + +Our next result yields an alternative proof of the Wigderson-Xiao result [10]. In order to carry out an approach similar to that of the proof of Theorem 1.5 for undirected Cayley graphs, we need a suitable generalization of cube generating sequences, and in particular, a generalization of [2]. Using this generalization, we can give a deterministic poly($n$) time algorithm to compute $J = \langle g_1, g_2, \dots, g_k \rangle$ where $k = O(\log n)$ such that a lazy random walk of length $O(\log n)$ on Cay$(G, J \cup J^{-1})$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Here the lazy random walk is described by the symmetric transition matrix $A_J = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$ where $P_J$ and $P_{J^{-1}}$ are the adjacency matrices of the Cayley graphs Cay$(G, J)$ and Cay$(G, J^{-1})$ respectively. + +**Theorem 1.7** Let $G$ be a finite group of order $n$ and $c > 1$ be any constant. There is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for $G$, such that an $O(\log n)$ step lazy random walk on $G$, governed by the transition matrix $A_J$ described above, is $\frac{1}{n^c}$-close to the uniform distribution, for any given initial distribution on $G$. +---PAGE_BREAK--- + +Theorem 1.7 and the connection between mixing time and spectral expansion for undirected graphs given by Theorem 1.1 yield the following. + +**Corollary 1.8 (Wigderson-Xiao)** [10] Given a finite group $G$ by its multiplication table, there is a deterministic polynomial (in $|G|$) time algorithm to construct a generating set $J$ such that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander.
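To make the transition matrix $A_J$ concrete, the sketch below (standard-library Python; the instance $G = \mathbb{Z}_{10}$, $J = \{1, 3\}$ is our own toy input, and this is not the paper's derandomization algorithm) builds $A_J = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$ from a multiplication table using exact rationals and checks that it is symmetric and stochastic, the properties that make the spectral machinery for undirected graphs applicable.

```python
from fractions import Fraction

def lazy_matrix(mult, inv, J):
    """A_J = (1/3) I + (1/(3k)) (P_J + P_{J^{-1}}) for the lazy walk on
    Cay(G, J u J^{-1}); mult[x][y] = x*y and inv[x] = x^{-1} come from
    the multiplication table of G."""
    n, k = len(mult), len(J)
    A = [[Fraction(0)] * n for _ in range(n)]
    for x in range(n):
        A[x][x] += Fraction(1, 3)                        # lazy part
        for g in J:
            A[x][mult[x][g]] += Fraction(1, 3 * k)       # edge of P_J
            A[x][mult[x][inv[g]]] += Fraction(1, 3 * k)  # edge of P_{J^-1}
    return A

# Toy instance: G = Z_10 (addition mod 10), J = {1, 3}.
n = 10
mult = [[(x + y) % n for y in range(n)] for x in range(n)]
inv = [(-x) % n for x in range(n)]
A = lazy_matrix(mult, inv, [1, 3])

assert all(sum(row) == 1 for row in A)                              # stochastic
assert all(A[x][y] == A[y][x] for x in range(n) for y in range(n))  # symmetric
```

The symmetry holds for any group, not just abelian ones: the weight of the edge $(x, xg)$ contributed by $g \in J$ is matched by the weight of $(xg, x)$ contributed by $g^{-1}$.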
+ +Finally, we show that the construction of cube generating sequences can also be done in deterministic polynomial time. + +**Theorem 1.9** For any constant $c > 1$, there is a deterministic polynomial (in $n$) time algorithm that outputs a cube generating sequence $J$ of size $O(\log n)$ such that the distribution $D_J$ on $G$, defined by the cube generating sequence $J$, is $\frac{1}{n^c}$-close to the uniform distribution. + +## 1.1 Organization of the paper + +The paper is organized as follows. We prove Theorem 1.5 and Corollary 1.6 in Section 2. The proofs of Theorem 1.7 and Corollary 1.8 are given in Section 3. We prove Theorem 1.9 in Section 4. Finally, we summarize in Section 5. + +# 2 Expanding Directed Cayley Graphs + +Let $D_1$ and $D_2$ be two probability distributions over the finite set $\{1, 2, \dots, n\}$. We use the $L_2$ norm to measure the distance between the two distributions: $$ ||D_1 - D_2||_2 = \left[ \sum_{x \in [n]} |D_1(x) - D_2(x)|^2 \right]^{\frac{1}{2}}. $$ Let $U$ denote the uniform distribution on $[n]$. We say that a distribution $D$ is $\delta$-close to the uniform distribution if $$ ||D - U||_2 \le \delta. $$ + +**Definition 2.1** The collision probability of a distribution $D$ on $[n]$ is defined as $\text{Coll}(D) = \sum_{i \in [n]} D(i)^2$. Since $\text{Coll}(D) = 1/n + ||D - U||_2^2$, we have $\text{Coll}(D) \le 1/n + \delta$ if and only if $||D - U||_2^2 \le \delta$, and $\text{Coll}(D)$ attains its minimum value $1/n$ exactly for the uniform distribution. + +We prove Theorem 1.5 by giving a deterministic construction of a cube generating sequence $J$ such that a random walk on $\text{Cay}(G, J)$ mixes in $O(\log n)$ steps. We first describe a randomized construction in Section 2.1, which shows the existence of such a sequence. The construction is based on the analysis in [7]. This is then derandomized in Section 2.2.
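Both the identity $\text{Coll}(D) = 1/n + \|D - U\|_2^2$ behind Definition 2.1 and the expectation in Theorem 1.4 can be verified exhaustively on a small abelian group. The sketch below (standard-library Python; the instance $G = \mathbb{Z}_5$ with $k = 3$ is our own illustrative choice) computes the cube distribution of Definition 1.3 exactly with rationals and, averaging over all $J \in G^k$, recovers $\mathbb{E}_J \|D_J - U\|_2^2 = \frac{1}{2^k}(1 - \frac{1}{n})$.

```python
from fractions import Fraction
from itertools import product

def cube_distribution(n, J):
    """Exact distribution of g_1^e1 * ... * g_k^ek in Z_n, with each e_i
    independent and uniform in {0, 1} (Definition 1.3)."""
    k = len(J)
    D = [Fraction(0)] * n
    for eps in product((0, 1), repeat=k):
        x = sum(e * g for e, g in zip(eps, J)) % n
        D[x] += Fraction(1, 2 ** k)
    return D

def coll(D):
    """Collision probability Coll(D) = sum_x D(x)^2 (Definition 2.1)."""
    return sum(p * p for p in D)

n, k = 5, 3
u = Fraction(1, n)
total = Fraction(0)
for J in product(range(n), repeat=k):        # average over all choices of J
    D = cube_distribution(n, J)
    dist_sq = sum((p - u) ** 2 for p in D)   # ||D_J - U||_2^2
    assert coll(D) == u + dist_sq            # Coll(D) = 1/n + ||D - U||_2^2
    total += dist_sq

avg = total / n ** k
assert avg == Fraction(1, 2 ** k) * (1 - u)  # Theorem 1.4, exactly
```

With $n = 5$ and $k = 3$ the average works out to $\frac{1}{8} \cdot \frac{4}{5} = \frac{1}{10}$, matched exactly thanks to the rational arithmetic.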
+ +## 2.1 Randomized construction + +For a sequence of group elements $J = \langle g_1, \dots, g_k \rangle$, we consider the Cayley graph $\text{Cay}(G, J)$, which is, in general, a directed multigraph in which both the in-degree and the out-degree of every vertex is $k$. Let $A$ denote the normalized adjacency matrix of $\text{Cay}(G, J)$. The lazy random walk is defined by the probability transition matrix $(A+I)/2$, where $I$ is the identity matrix. Let $Q_J$ denote the probability distribution obtained after $m$ steps of the lazy random walk. Pak [7] has analyzed the distribution $Q_J$ and shown that for a random $J$ of size $O(\log n)$ and $m = O(\log n)$, $Q_J$ is $1/n^{O(1)}$-close to the uniform distribution. We note that Pak works with the $L_\infty$ norm. Our aim is to give an efficient deterministic construction of $J$. It turns out that, for us, the $L_2$ norm and the collision probability +---PAGE_BREAK--- + +are the right tools to work with, since we can compute these quantities exactly as we fix elements of $J$ one by one. + +Consider any length-$m$ sequence $I = \langle i_1, \dots, i_m \rangle \in [k]^m$, where the $i_j$'s are indices that refer to elements in the set $J$. Let $R_I^J$ denote the following probability distribution on $G$. For each $x \in G$: $R_I^J(x) = \text{Pr}_{\bar{\epsilon}}[g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = x]$, where $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_m)$ and each $\epsilon_i \in \{0, 1\}$ is picked independently and uniformly at random. Notice that for each $x \in G$ we have: $Q_J(x) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(x)$. + +Further, notice that $R_I^J$ is precisely the probability distribution defined by the cube generating sequence $\langle g_{i_1}, g_{i_2}, \dots, g_{i_m} \rangle$, and the above equation states that the distribution $Q_J$ is the average over all $I \in [k]^m$ of the $R_I^J$. + +In general, the indices in $I \in [k]^m$ are not distinct.
Let $L(I)$ denote the sequence of distinct indices occurring in $I$, in the order of their first occurrence in $I$, from left to right. We refer to $L(I)$ as the L-subsequence of $I$. Clearly, the sequence $L(I)$ will itself define a probability distribution $R_{L(I)}^J$ on the group $G$. + +Suppose the elements of $J$ are picked independently and uniformly at random from $G$. The following lemma shows that, for any $I \in [k]^m$, if $R_{L(I)}^J$ is $\delta$-close to the uniform distribution (in the $L_2$ norm) in expectation, then so is $R_I^J$. We state it in terms of collision probabilities. + +**Lemma 2.2** For a fixed $I$, if $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] \leq 1/n + \delta$ then $\mathbb{E}_J[\text{Coll}(R_I^J)] = \mathbb{E}_J[\sum_{g \in G} R_I^J(g)^2] \leq 1/n + \delta$. + +A proof of Lemma 2.2 is given in the appendix to keep our presentation self-contained. A similar lemma for the $L_\infty$ norm is shown in [7, Lemma 1] (though it is not stated there in terms of the expectation). + +When the elements of $J$ are picked uniformly and independently from $G$, by Theorem 1.4, $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] = \frac{1}{n} + \frac{1}{2^\ell}(1 - \frac{1}{n})$, where $\ell$ is the length of the L-subsequence. Thus the expectation is small provided $\ell$ is large enough. It turns out that most $I \in [k]^m$ have sufficiently long L-subsequences (Lemma 2.3). A similar result appears in [7]. We give a proof of Lemma 2.3 in the appendix. + +**Lemma 2.3** [7] Let $a = \frac{k}{\ell-1}$. The probability that a sequence of length $m$ over $[k]$ does not have an L-subsequence of length $\ell$ is at most $\frac{(ae)^{k/a}}{a^m}$. + +To ensure that the above probability is bounded by $\frac{1}{2^m}$, it suffices to choose $m > \frac{(k/a) \log(ae)}{\log(a/2)}$.
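Lemma 2.3 can be sanity-checked by brute force for small parameters (the values $k = 4$, $\ell = 3$, $m = 6$ below are our own illustrative choice, not from the paper): a sequence over $[k]$ fails to have an L-subsequence of length $\ell$ exactly when it contains fewer than $\ell$ distinct symbols.

```python
import math
from itertools import product

k, ell, m = 4, 3, 6
a = k / (ell - 1)

# A sequence has an L-subsequence of length ell iff it uses at least
# ell distinct symbols, so count the sequences with fewer than ell.
bad = sum(1 for seq in product(range(k), repeat=m) if len(set(seq)) < ell)
prob_bad = bad / k ** m

bound = (a * math.e) ** (k / a) / a ** m   # Lemma 2.3's upper bound
assert prob_bad <= bound
```

Here the exact count is $\binom{4}{1} + \binom{4}{2}(2^6 - 2) = 376$ out of $4^6 = 4096$ sequences, about $0.092$, comfortably below the bound of about $0.46$.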
+ +In the following lemma (which is again an $L_2$ norm version of a similar statement from [7]), we observe that the expected distance from the uniform distribution is small when $I \in [k]^m$ is picked uniformly at random. The proof of the lemma is given in the appendix. + +**Lemma 2.4** $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \leq \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$. + +We can make $\frac{1}{2^{\Theta(m)}} < \frac{1}{n^c}$ for some $c > 0$ by choosing $m = O(\log n)$. That also fixes $k$ to be $O(\log n)$ suitably. +---PAGE_BREAK--- + +## 2.2 Deterministic construction + +Our goal is to compute, for any given constant $c > 0$, a multiset $J$ of $k$ group elements of $G$ such that $\text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le 1/n + 1/n^c$, where both $k$ and $m$ are $O(\log n)$. For each $J$ observe, by the Cauchy–Schwarz inequality, that + +$$ \text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le \sum_{g \in G} \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(g)^2 = \frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J). \quad (1) $$ + +Our goal can now be restated: it suffices to construct in deterministic polynomial time a multiset $J$ of group elements such that the average collision probability $\frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J) \le 1/n + 1/n^c$. + +Consider the random set $J = \{X_1, \dots, X_k\}$ with each $X_i$ a uniformly and independently distributed random variable over $G$. Combined with the proof of Lemma 2.4 (in particular from Equation 17), we observe that for any constant $c > 1$ there are $k$ and $m$, both $O(\log n)$, such that + +$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le \frac{1}{n} + \frac{1}{n^c}. \quad (2) $$ + +Our deterministic algorithm will fix the elements in $J$ in stages. At stage 0 the set $J = J_0 = \{X_1, X_2, \dots, X_k\}$ consists of independent random elements $X_i$ drawn from the group $G$.
Suppose at the $j^{th}$ stage, for $j < k$, the set we have is $J = J_j = \{x_1, x_2, \dots, x_j, X_{j+1}, \dots, X_k\}$, where each $x_r$ $(1 \le r \le j)$ is a fixed element of $G$ and the $X_s$ $(j+1 \le s \le k)$ are independent random elements of $G$ such that + +$$ \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c. $$ + +**Remark.** + +1. In the above expression, the expectation is over the random elements of $J$. + +2. If we can compute in poly($n$) time a choice $x_{j+1}$ for $X_{j+1}$ such that $\mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c$ then we can compute the desired generating set $J$ in polynomial (in $n$) time. + +Given $J = J_j = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ with $j$ fixed elements and $k-j$ random elements, it is useful to partition the set of sequences $[k]^m$ into subsets $S_{r,\ell}$, where $I \in S_{r,\ell}$ if and only if there are exactly $r$ indices in $I$ from $\{1, \dots, j\}$, and of the remaining $m-r$ indices of $I$ there are exactly $\ell$ distinct indices. We now define a suitable generalization of L-subsequences. + +**Definition 2.5** An $(r, \ell)$-normal sequence for $J$ is a sequence $\langle n_1, n_2, \dots, n_r, \dots, n_{r+\ell} \rangle \in [k]^{r+\ell}$ such that the indices $n_s$, $1 \le s \le r$, are in $\{1, 2, \dots, j\}$ and the indices $n_s$, $s > r$, are all distinct and in $\{j+1, \dots, k\}$. I.e. the first $r$ indices (possibly with repetition) are from the fixed part of $J$ and the last $\ell$ are all distinct elements from the random part of $J$. + +**Transforming $S_{r,\ell}$ to $(r, \ell)$-normal sequences** + +We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform in $G$. + +Let $I = \langle i_1, \dots, i_m \rangle \in S_{r,\ell}$ be a sequence. Let $F = \langle i_{f_1}, \dots, i_{f_r} \rangle$ be the subsequence of indices for the fixed elements in $I$.
Let $R = \langle i_{s_1}, \dots, i_{s_{m-r}} \rangle$ be the subsequence of indices for the random elements in $I$, and let $L = \langle i_{e_1}, \dots, i_{e_\ell} \rangle$ be the L-subsequence in $R$. More precisely, notice that $R$ is a sequence in $\{j+1, \dots, k\}^{m-r}$ and $L$ is the L-subsequence for $R$. The $(r, \ell)$-normal sequence $\hat{I}$ of $I \in S_{r,\ell}$ is the sequence $\langle i_{f_1}, \dots, i_{f_r}, i_{e_1}, \dots, i_{e_\ell} \rangle$.

We recall here that the multiset $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ is defined as before. For ease of notation we denote the list of elements of $J$ by $g_t$, $1 \le t \le k$; i.e., $g_t = x_t$ for $t \le j$ and $g_t = X_t$ for $t > j$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m}$ where the $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random. Then we can write

$$
\begin{aligned}
g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} &= z_0 g_{i_{f_1}}^{\epsilon_{f_1}} z_1 g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \cdots z_{r-1} g_{i_{f_r}}^{\epsilon_{f_r}} z_r, && \text{where} \\
z_0 z_1 \cdots z_r &= g_{i_{s_1}}^{\epsilon_{s_1}} g_{i_{s_2}}^{\epsilon_{s_2}} \cdots g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}}.
\end{aligned}
$$

By conjugation, we can rewrite the above expression as $g_{i_{f_1}}^{\epsilon_{f_1}} z z_1 g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \cdots g_{i_{f_r}}^{\epsilon_{f_r}} z_r$, where $z = g_{i_{f_1}}^{-\epsilon_{f_1}} z_0 g_{i_{f_1}}^{\epsilon_{f_1}}$.

We refer to this transformation as moving $g_{i_{f_1}}^{\epsilon_{f_1}}$ to the left.
Successively moving the elements $g_{i_{f_1}}^{\epsilon_{f_1}}, g_{i_{f_2}}^{\epsilon_{f_2}}, \dots, g_{i_{f_r}}^{\epsilon_{f_r}}$ to the left, we can write

$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} z'_0 z'_1 \cdots z'_r, $$

where each $z'_t = u_t z_t u_t^{-1}$, and $u_t$ is a product of elements from the fixed element set $\{x_1, \dots, x_j\}$. Notice that each $z_t$ is a product of some consecutive sequence of elements from $\langle g_{i_{s_1}}^{\epsilon_{s_1}}, g_{i_{s_2}}^{\epsilon_{s_2}}, \dots, g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}} \rangle$. If $z_t = \prod_{a=b}^{c} g_{i_{s_a}}^{\epsilon_{s_a}}$ then $z'_t = \prod_{a=b}^{c} u_t g_{i_{s_a}}^{\epsilon_{s_a}} u_t^{-1}$. Thus, the product $z'_0 z'_1 \cdots z'_r$ is of the form

$$ z'_0 z'_1 \cdots z'_r = \prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}, $$

where each $h_{s_a} = y_a g_{i_{s_a}} y_a^{-1}$ for some element $y_a \in G$. In this expression, observe that for distinct indices $a$ and $b$ we may have $i_{s_a} = i_{s_b}$ but $y_a \neq y_b$, and hence, in general, $h_{s_a} \neq h_{s_b}$.

Recall that the L-subsequence $L = \langle i_{e_1}, \dots, i_{e_\ell} \rangle$ is a subsequence of $R = \langle i_{s_1}, \dots, i_{s_{m-r}} \rangle$. Consequently, let $(h_{e_1}, h_{e_2}, \dots, h_{e_\ell})$ be the sequence of all *independent* random elements in the above product $\prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}$ that correspond to the L-subsequence. To this product, we again apply the transformation of moving the elements $h_{e_1}^{\epsilon_{e_1}}, h_{e_2}^{\epsilon_{e_2}}, \dots, h_{e_\ell}^{\epsilon_{e_\ell}}$ to the left, in that order.
Putting it all together, we have

$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} y(\bar{\epsilon}), $$

where $y(\bar{\epsilon})$ is an element in $G$ that depends on $J$, $I$ and $\bar{\epsilon}$, and $\bar{\epsilon}$ consists of all the $\epsilon_j$ for $j \in I \setminus (F \cup L)$. Let $J(I)$ denote the multiset of group elements obtained from $J$ by replacing the subset $\{g_{i_{e_1}}, g_{i_{e_2}}, \dots, g_{i_{e_\ell}}\}$ with $\{h_{e_1}, h_{e_2}, \dots, h_{e_\ell}\}$. It follows from our discussion that $J(I)$ has exactly $j$ fixed elements $x_1, x_2, \dots, x_j$ and $k-j$ uniformly distributed independent random elements. Recall that $\hat{I} = \langle i_{f_1}, i_{f_2}, \dots, i_{f_r}, i_{e_1}, i_{e_2}, \dots, i_{e_\ell} \rangle$ is the $(r, \ell)$-normal sequence for $I$. Analogous to Lemma 2.2, we now compare the probability distributions $R_I^J$ and $\hat{R}_I^{J(I)}$. The proof of the lemma is in the appendix.

**Lemma 2.6** For each $j \le k$ and $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ (where $x_1, \dots, x_j \in G$ are fixed elements and $X_{j+1}, \dots, X_k$ are independent and uniformly distributed in $G$), and for each $I \in [k]^m$, $\mathbb{E}_J[\text{Coll}(R_I^J)] \le \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$.

**Remark 2.7** Here it is important to note that the expectation $\mathbb{E}_J[\text{Coll}(R_I^J)]$ is over the random elements in $J$. On the other hand, the expectation $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ is over the random elements in $J(I)$ (which are conjugates of the random elements in $J$). In the rest of this section, we need to keep this meaning clear when we use $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ for different $I \in [k]^m$.
By averaging the above inequality over all $I$ sequences and using Equation 1, we get

$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]. \quad (3) $$

Now, by Equation 2 and following the proof of Lemma 2.4, when all $k$ elements in $J$ are random we have $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$. Suppose for any $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ we can compute $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]$ in deterministic polynomial (in $n$) time. Then, given the bound $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$ for $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$, we can clearly fix the $(j+1)^{st}$ element of $J$ by choosing the value $X_{j+1} := x_{j+1}$ which minimizes the expectation $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]$. Also, it follows easily from Equation 3 and the above lemma that $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ implies $\mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \delta$. In particular, when $J$ is completely fixed after $k$ stages, if $\mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ then $\text{Coll}(Q_J) \le \delta$.

**Remark 2.8** In fact, the quantity $\mathbb{E}_{I \in [k]^m}[\text{Coll}(\hat{R}_I^{J(I)})]$ plays the role of a pessimistic estimator for $\mathbb{E}_{I \in [k]^m}[\text{Coll}(R_I^J)]$.

We now proceed to explain the algorithm that fixes $X_{j+1}$.
To this end, it is useful to rewrite the quantity as

$$
\begin{align}
\mathbb{E}_J \mathbb{E}_I [\text{Coll}(\hat{R}_I^{J(I)})] &= \frac{1}{k^m} \left[ \sum_{r,\ell} \sum_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})] \right] \\
&= \sum_{r,\ell} \frac{|S_{r,\ell}|}{k^m} \mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})]. \tag{4}
\end{align}
$$

For any $r, \ell$ the size of $S_{r,\ell}$ is computable in polynomial time (Lemma 2.9). We include a proof in the appendix.

**Lemma 2.9** For each $r$ and $\ell$, $|S_{r,\ell}|$ can be computed in time polynomial in $n$.

Since $r$ and $\ell$ are $O(\log n)$, it is clear from Equation 4 that it suffices to compute $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ in polynomial time for any given $r$ and $\ell$. We reduce this computation to counting the number of paths in weighted directed acyclic graphs. To make the reduction clear, we simplify the expression $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ as follows.

Let $\bar{u}$ be a sequence of length $r$ from the fixed elements $x_1, x_2, \dots, x_j$. We identify $\bar{u}$ with an element of $[j]^r$. The number of $I$ sequences in $S_{r,\ell}$ that have $\bar{u}$ as the prefix in the $(r, \ell)$-normal sequence $\hat{I}$ is $\frac{|S_{r,\ell}|}{j^r}$. Recall that $R_{\hat{I}}^{J(I)}(g) = \text{Prob}_{\bar{\epsilon}}[g_{i_{f_1}}^{\epsilon_1} \cdots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \cdots h_{e_\ell}^{\epsilon_{r+\ell}} = g]$, where $\bar{u} = (g_{i_{f_1}}, \dots, g_{i_{f_r}})$. It is convenient to denote the element $g_{i_{f_1}}^{\epsilon_1} \cdots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \cdots h_{e_\ell}^{\epsilon_{r+\ell}}$ by $M(\bar{u}, \bar{\epsilon}, \hat{I}, J)$.

Let $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_{r+\ell})$ and $\bar{\epsilon}' = (\epsilon'_1, \dots, \epsilon'_{r+\ell})$ be randomly picked from $\{0, 1\}^{r+\ell}$.
Then

$$
\begin{align}
\mathrm{Coll}(R_{\hat{I}}^{J(I)}) &= \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \\
&= \mathrm{Prob}_{\bar{\epsilon}, \bar{\epsilon}'} [M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)]. \tag{5}
\end{align}
$$

For fixed $\bar{\epsilon}, \bar{\epsilon}'$ and $\bar{u} \in [j]^r$, let $S_{r,\ell}^{\bar{u}}$ be the set of all $I \in S_{r,\ell}$ such that the subsequence of indices of $I$ for the fixed elements $\{x_1, x_2, \dots, x_j\}$ is precisely $\bar{u}$. Notice that $|S_{r,\ell}^{\bar{u}}| = \frac{|S_{r,\ell}|}{j^r}$.

Then we have the following:

$$
\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J \left[ \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \right] = \frac{1}{2^{2(\ell+r)}} \left[ \sum_{\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{\ell+r}} \frac{1}{|S_{r,\ell}|} \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J [\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \quad (6)
$$

where $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ is a 0-1 indicator random variable that is 1 when $M(\bar{u},\bar{\epsilon},\hat{I},J) = M(\bar{u},\bar{\epsilon}',\hat{I},J)$ and 0 otherwise. Crucially, we note the following:

**Claim 2.10** For each $I \in S_{r,\ell}^{\bar{u}}$ and for fixed $\bar{\epsilon}, \bar{\epsilon}'$, the random variables $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ are identically distributed.

The claim follows from the fact that for each $I \in S_{r,\ell}^{\bar{u}}$, the fixed part in $\hat{I}$ is $\bar{u}$ and the elements in the unfixed part are identically and uniformly distributed in $G$. We simplify the expression in Equation 6 further.
$$
\begin{align}
\frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] &= \frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \frac{|S_{r,\ell}|}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \tag{7} \\
&= \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \tag{8}
\end{align}
$$

where Equation 7 follows from Claim 2.10. Let $p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}')$ be the number of different assignments of the $\ell$ random elements in $J$ such that $M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)$. Then it is easy to see that

$$
\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)}] = \sum_{\bar{u}} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^\ell}, \quad (9)
$$

where the factor $\frac{1}{n^\ell}$ accounts for the fact that the $\ell$ unfixed elements of $J$ are picked uniformly and independently at random from the group $G$.

Notice that $2^{r+\ell} \le 2^m = n^{O(1)}$ for $m = O(\log n)$ and $\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{r+\ell}$. Then, combining Equation 4 and Equation 9, it is clear that to compute $\mathbb{E}_J \mathbb{E}_I[\text{Coll}(R_{\hat{I}}^{J(I)})]$ in polynomial time, it suffices to compute $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ (for fixed $r, \ell, \bar{\epsilon}, \bar{\epsilon}'$) in polynomial time. We now turn to this problem.
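For a fully fixed multiset $J$, the quantity being minimized has a direct interpretation: $Q_J$ is the distribution of an $m$-step walk in which each step multiplies by a uniformly chosen $g_i^{\epsilon}$, so $\text{Coll}(Q_J)$ can be evaluated with $m$ matrix-vector products. A minimal Python sketch of this check (the multiplication-table encoding with element $0$ as the identity, and the cyclic test group, are illustrative assumptions, not from the paper):

```python
import numpy as np

def cube_walk_collision(mult, J, m):
    """Exact Coll(Q_J) = sum_g Q_J(g)^2, where Q_J is the distribution of
    g_{i_1}^{e_1} ... g_{i_m}^{e_m} with each i_t uniform over J and each
    e_t uniform in {0,1}.  mult is an n x n multiplication table over
    elements 0..n-1, with 0 assumed to be the identity."""
    n, k = len(mult), len(J)
    M = np.zeros((n, n))
    for g in range(n):
        for x in J:
            M[g][g] += 1.0 / (2 * k)           # e = 0: stay in place
            M[g][mult[g][x]] += 1.0 / (2 * k)  # e = 1: right-multiply by x
    dist = np.zeros(n)
    dist[0] = 1.0                              # walk starts at the identity
    for _ in range(m):
        dist = dist @ M
    return float((dist ** 2).sum())
```

On the cyclic group $\mathbb{Z}_5$ with $J = \{1, 2, 3\}$ and $m$ around $60$, the value is already within $10^{-6}$ of the uniform collision probability $1/5$, consistent with the $1/n + 1/n^c$ target.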
## 2.3 Reduction to counting paths in weighted DAGs

We will interpret the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ as the sum of weights of paths between a source vertex $s$ and a sink vertex $t$ in a layered weighted directed acyclic graph $H = (V, E)$. The vertex set $V$ is $(G \times G \times [r+\ell+1]) \cup \{s,t\}$, and $s = (e, e, 0)$, where $e$ is the identity element of $G$. The source vertex $s$ is at the $0$-th layer and the sink $t$ is at the $(r + \ell + 2)$-th layer. Let $S = \{x_1, x_2, \dots, x_j\}$. The edge set is the union $E = E_s \cup E_S \cup E_{G\setminus S} \cup E_t$, where

$$
\begin{align*}
E_s &= \{(s, (e, e, 1))\}, \\
E_S &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in S, 1 \le t \le r\}, \\
E_{G\setminus S} &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in G, r < t \le r+\ell\}, \text{ and} \\
E_t &= \{((g, g, r+\ell+1), t) \mid g \in G\}.
\end{align*}
$$

All edges in $E_s$ and $E_t$ have weight 1. Each edge in $E_S$ has weight $\frac{1}{j}$. Each edge in $E_{G\setminus S}$ has weight $\frac{1}{n}$.

Each $s$-to-$t$ directed path in the graph $H$ corresponds to an $(r, \ell)$-normal sequence $\hat{I}$ (corresponding to some $I \in S_{r,\ell}$), along with an assignment of group elements to the $\ell$ distinct independent random elements that occur in it. For a random $I \in S_{r,\ell}$, the group element corresponding to each of the $r$ "fixed" positions is from $\{x_1, x_2, \dots, x_j\}$ with probability $1/j$ each. Hence each edge in $E_S$ has weight $1/j$. Similarly, the $\ell$ distinct indices in $I$ (from $\{X_{j+1}, \dots, X_k\}$) are assigned group elements independently and uniformly at random. Hence each edge in $E_{G\setminus S}$ has weight $\frac{1}{n}$.

The weight of an $s$-to-$t$ path is the product of the weights of the edges on the path.
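The reduction rests on a standard fact: in a layered DAG, the total weight of all $s$-to-$t$ paths is obtained by multiplying the per-layer weight matrices. A generic Python sketch of that mechanism (the function name and the toy two-layer example are illustrative, not the actual $H_{r,\ell}$):

```python
import numpy as np

def total_path_weight(layer_matrices):
    """Sum of weights of all s-to-t paths in a layered DAG.

    layer_matrices[t] is the weight matrix from layer t to layer t+1:
    entry (a, b) is the total weight of edges from vertex a of layer t
    to vertex b of layer t+1 (0 where there is no edge).  Multiplying
    the matrices accumulates, in entry (a, b), the summed weight of all
    a-to-b paths, so the full chain collapses to a 1x1 matrix."""
    prod = layer_matrices[0]
    for M in layer_matrices[1:]:
        prod = prod @ M
    return float(prod[0, 0])

# Toy 3-layer graph: s splits into two middle vertices (weight 1/2 each),
# both of which reach t with weight 1/2; total = 2 * (1/2 * 1/2) = 1/2.
A = np.array([[0.5, 0.5]])      # s -> middle layer
B = np.array([[0.5], [0.5]])    # middle layer -> t
```

Summing parallel edges into one matrix entry is what makes the later per-layer matrices well defined even when several generators produce the same endpoint pair.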
The graph depends on $j, \bar{\epsilon},$ and $\bar{\epsilon}'$. So, for fixed $r, \ell$, we denote it by $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$. The following claim is immediate from Equation 9.

**Claim 2.11** *The sum of the weights of all $s$-to-$t$ paths in $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ is $\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}}$.*

In the following lemma we observe that $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in polynomial time. The proof is easy.

**Lemma 2.12** *For each $j, \bar{\epsilon}, \bar{\epsilon}', r, \ell$, the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in time polynomial in $n$.*

**Proof:** The graph $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ has $n^2$ vertices in each intermediate layer. For each $1 \le t \le r+\ell+2$, we define a matrix $M_{t-1}$ whose rows are indexed by the vertices of layer $t-1$ and columns by the vertices of layer $t$; the $(a,b)^{th}$ entry of $M_{t-1}$ is the sum of the weights of the edges from $a$ to $b$ in the graph $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$. Their product $M = \prod_{t=0}^{r+\ell+1} M_t$ is a scalar which is precisely $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$. As the product of the matrices $M_t$ can be computed in time polynomial in $n$, the lemma follows. $\square$

To summarize, we describe the $(j+1)^{st}$ stage of the algorithm, where a group element $x_{j+1}$ is chosen for $X_{j+1}$. The algorithm cycles through all $n$ choices for $x_{j+1}$. For each choice of $x_{j+1}$, and for each $\bar{\epsilon}, \bar{\epsilon}'$ and $r, \ell$, the graph $H_{r,\ell}(j+1, \bar{\epsilon}, \bar{\epsilon}')$ is constructed.
Using Lemma 2.12, the expression in Equation 4 is computed for each choice of $x_{j+1}$, and the algorithm fixes the choice that minimizes this expression. This completes the proof of Theorem 1.5.

By Theorem 1.2 we can bound the absolute value of the second largest eigenvalue of the matrix for Cay($G$, $J$). Theorem 1.5 yields that the distribution resulting from an $O(\log n)$ step random walk on Cay($G$, $J$) is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_2$ norm. Theorem 1.2 is in terms of the $L_1$ norm. However, since $\|\cdot\|_1 \le n\|\cdot\|_\infty \le n\|\cdot\|_2$, Theorem 1.5 guarantees that the resulting distribution is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_1$ norm as well. Choose $\tau = m = c' \log n$ and $\epsilon = \frac{1}{n^c}$ in Theorem 1.2, where $c, c'$ are fixed from Theorem 1.5. Then $\lambda_{\max} \le \frac{1}{2^{O(c/c')}} < 1$. This completes the proof of Corollary 1.6. $\square$

# 3 Undirected Expanding Cayley Graphs

In this section, we show a deterministic polynomial-time construction of a generating set $J$ for any group $G$ (given by its multiplication table) such that a lazy random walk on the *undirected* Cayley graph Cay$(G, J \cup J^{-1})$ mixes well. As a consequence, we get Cayley graphs with a constant spectral gap (an alternative proof of a result in [10]). Our construction is based on a simple adaptation of the techniques used in Section 2.

The key point in the undirected case is that we consider a generalization of Erdös-Rényi sequences: the distribution on $G$ defined by $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$ where $\epsilon_i \in_R \{-1, 0, 1\}$. The following lemma is an easy generalization of the Erdös-Rényi result (Theorem 1.4). A similar theorem appears in [3, Theorem 14]. Our main focus in the current paper is the derandomized construction of Cayley expanders. Towards that, and to make the paper self-contained, we include a short proof of Lemma 3.1 in the appendix.
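The $\{-1,0,1\}$-cube distribution just introduced can be tabulated exactly for small groups by $k$ successive convolution steps, one per generator. A short Python sketch (the multiplication-table encoding, the explicit inverse table, and the cyclic test group are illustrative assumptions, not from the paper):

```python
import numpy as np

def undirected_cube_dist(mult, inv, J):
    """Exact D_J(x) = Pr[g_1^{e_1} ... g_k^{e_k} = x], each e_i uniform
    and independent in {-1, 0, 1}.  mult is the group multiplication
    table, inv[g] the inverse of g, and element 0 is the identity."""
    n = len(mult)
    dist = np.zeros(n)
    dist[0] = 1.0  # the empty product is the identity
    for g in J:
        new = np.zeros(n)
        for p in range(n):
            if dist[p] == 0.0:
                continue
            new[p] += dist[p] / 3.0                # e = 0
            new[mult[p][g]] += dist[p] / 3.0       # e = +1
            new[mult[p][inv[g]]] += dist[p] / 3.0  # e = -1
        dist = new
    return dist
```

Averaging $\text{Coll}(D_J) = \sum_x D_J(x)^2$ over random draws of $J$ on such examples is one way to sanity-check the $(\frac{8}{9})^k + \frac{1}{n}$ bound of Lemma 3.1 below.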
**Lemma 3.1** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the following distribution: $D_J(x) = \text{Pr}_{\epsilon_i \in_R \{-1, 0, 1\},\, 1 \le i \le k} [g_1^{\epsilon_1} \cdots g_k^{\epsilon_k} = x]$ for $x \in G$, and let $U$ be the uniform distribution on $G$. Then $\mathbb{E}_J[\sum_{x \in G} (D_J(x))^2] = \mathbb{E}_J[\text{Coll}(D_J)] \le (\frac{8}{9})^k + \frac{1}{n}$.

## Deterministic construction

First, we note that analogues of Lemmas 2.2, 2.3, and 2.4 hold in the undirected case too. In particular, when the elements of $J$ are picked uniformly and independently from $G$, by Lemma 3.1 we have $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} (R_{L(I)}^J(g))^2] \le (\frac{8}{9})^\ell + \frac{1}{n}$, where $\ell$ is the length of the L-subsequence $L(I)$ of $I$. We now state Lemma 3.2 below, which is a restatement of Lemma 2.4 for the undirected case. The proof closely follows the proof of Lemma 2.4. As before, we again consider the probability that an $I$ sequence of length $m$ does not have an L-subsequence of length $\ell$, and we fix $\ell, m$ to $O(\log n)$ appropriately.

**Lemma 3.2** Let $Q_J(g) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I(g)$. Then $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \le 1/n + 2(\frac{8}{9})^{\Theta(m)}$.

Building on this, we can extend the results of Section 2.2 to the undirected case in a straightforward manner. In particular, we can use essentially the same algorithm as described in Lemma 2.12 to compute the quantity in Equation 5 in polynomial time in the undirected setting as well. The only difference we need to incorporate is that now $\bar{\epsilon}, \bar{\epsilon}' \in \{-1, 0, 1\}^{r+\ell}$. This essentially completes the proof of Theorem 1.7. We do not repeat all the details here.

Finally, we derive Corollary 1.8.
The normalized adjacency matrix of the undirected Cayley graph (corresponding to the lazy walk we consider) is given by $A = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$, where $P_J$ and $P_{J^{-1}}$ are the permutation matrices defined by the sets $J$ and $J^{-1}$ respectively. As in the proof of Corollary 1.6, we bound the distance of the resulting distribution from the uniform distribution in the $L_1$ norm. Let $m = c' \log n$ be suitably fixed from the analysis, so that $\|A^m \bar{v} - \bar{u}\|_1 \le \frac{1}{n^c}$. Then by Theorem 1.1, the spectral gap satisfies $1-|\lambda_1| \ge \frac{c}{c'}$. Hence the Cayley graph is a spectral expander. It follows easily that the standard undirected Cayley graph with adjacency matrix $\frac{1}{2k}(P_J + P_{J^{-1}})$ is also a spectral expander.

# 4 Deterministic construction of Erdös-Rényi sequences

In this section, we prove Theorem 1.9. We use the method of conditional expectations as follows. From Theorem 1.4, we know that $\mathbb{E}_J\|D_J - U\|_2^2 = \frac{1}{2^k}(1-\frac{1}{n})$. Therefore there exists a setting of $J$, say $J = \langle x_1, \dots, x_k \rangle$, such that $\|D_J - U\|_2^2 \le \frac{1}{2^k}(1-\frac{1}{n})$. We find such a setting of $J$ by fixing its elements one by one. Let $\delta = \frac{1}{n^c}$, $c > 1$, be the required closeness parameter. Thus we need $k$ such that $\frac{1}{2^k} \le \delta$; it suffices to take $k > c \log n$. We denote the expression $X_1^{\epsilon_1} \cdots X_t^{\epsilon_t}$ by $\bar{X}^{\bar{\epsilon}}$ when the length $t$ of the sequence is clear from the context.

Suppose that after the $i$th step, $x_1, \dots, x_i$ are fixed and $X_{i+1}, \dots, X_k$ remain to be picked. At this stage, by our choice of $x_1, \dots, x_i$, we have $\mathbb{E}_{X_{i+1},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$.
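This stage-wise fixing is the classical method of conditional expectations. For tiny groups it can even be run with the conditional expectation computed by brute-force enumeration, rather than via the polynomial-time formula developed below in Theorem 4.1. A Python sketch under that brute-force simplification (the multiplication-table encoding with $0$ the identity and all helper names are illustrative):

```python
import itertools

def dist_DJ(mult, seq):
    """D_J for a fully fixed sequence seq: D_J(x) = Pr_{eps}[prod = x]."""
    n, k = len(mult), len(seq)
    d = [0.0] * n
    for eps in itertools.product([0, 1], repeat=k):
        g = 0  # identity (assumed to be element 0)
        for x, e in zip(seq, eps):
            if e:
                g = mult[g][x]
        d[g] += 1.0 / 2 ** k
    return d

def cond_expectation(mult, prefix, k):
    """E over the remaining random elements of ||D_J - U||_2^2,
    conditioned on the fixed prefix, by explicit enumeration."""
    n = len(mult)
    rest = k - len(prefix)
    total = 0.0
    for tail in itertools.product(range(n), repeat=rest):
        d = dist_DJ(mult, list(prefix) + list(tail))
        total += sum((p - 1.0 / n) ** 2 for p in d)
    return total / n ** rest

def derandomized_sequence(mult, k):
    """Fix x_1, ..., x_k one by one, each time choosing the group element
    minimizing the conditional expectation; by averaging, the running
    expectation never increases, so the final ||D_J - U||_2^2 is at most
    the initial expectation 2^{-k}(1 - 1/n)."""
    prefix = []
    for _ in range(k):
        prefix.append(min(range(len(mult)),
                          key=lambda x: cond_expectation(mult, prefix + [x], k)))
    return prefix
```

The brute-force conditional expectation costs $n^{k-i} 2^k$ per candidate, which is exactly the exponential blow-up that Theorem 4.1's closed-form computation avoids.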
Now we cycle through all the group elements for $X_{i+1}$ and fix $X_{i+1} = x_{i+1}$ such that $\mathbb{E}_{X_{i+2},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_{i+1}=x_{i+1}] \le \frac{1}{2^k}(1-\frac{1}{n})$. Such an $x_{i+1}$ always exists by a standard averaging argument. In the next theorem, we show that the conditional expectations are efficiently computable at every stage. Theorem 1.9 is an immediate corollary.

Assume that we have picked $x_1, \dots, x_i$ from $G$, and $X_{i+1}, \dots, X_k$ are to be picked from $G$. Let the choice of $x_1, \dots, x_i$ be such that $\mathbb{E}_{X_{i+1},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$. For $x \in G$ and $J = \langle X_1, \dots, X_k \rangle$, let

$$Q_J(x) = \mathrm{Pr}_{\bar{\epsilon} \in \{0,1\}^k} [\bar{X}^{\bar{\epsilon}} = x].$$

When $J$ is partly fixed,

$$
\begin{align*}
\hat{Q}_J(x) &= \mathrm{Pr}_{\bar{\epsilon}_1 \in \{0,1\}^i, \bar{\epsilon}_2 \in \{0,1\}^{k-i}} [\bar{x}^{\bar{\epsilon}_1} \cdot \bar{X}^{\bar{\epsilon}_2} = x] \\
&= \sum_{y \in G} \mathrm{Pr}_{\bar{\epsilon}_1} [\bar{x}^{\bar{\epsilon}_1} = y] \, \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \, \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \, \hat{Q}_{\bar{X}}(y^{-1}x),
\end{align*}
$$

where $\mu(y) = \mathrm{Pr}_{\bar{\epsilon}_1}[\bar{x}^{\bar{\epsilon}_1} = y]$ and $\hat{Q}_{\bar{X}}(z) = \mathrm{Pr}_{\bar{\epsilon}_2}[\bar{X}^{\bar{\epsilon}_2} = z]$. Then $\mathbb{E}_J[\mathrm{Coll}(D_J)] = \mathbb{E}_J\|D_J - U\|_2^2 + \frac{1}{n}$, and $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J[\|D_J - U\|_2^2 \mid X_1 = x_1, X_2 = x_2, \dots, X_i = x_i] + \frac{1}{n}$.

The next theorem completes the proof.

**Theorem 4.1** For any finite group $G$ of order $n$ given by its multiplication table, $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)]$ can be computed in time polynomial in $n$.
**Proof:** Throughout, write $K = k - i$ for the number of random elements in $J$, so that $\bar{\epsilon}, \bar{\epsilon}'$ below range over $\{0,1\}^K$. We have

$$ \mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x). \quad (10) $$

Now we compute $\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x)$:

$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) &= \mathbb{E}_J \sum_{x \in G} \left( \sum_{y \in G} \mu(y) \hat{Q}_{\bar{X}}(y^{-1}x) \right) \left( \sum_{z \in G} \mu(z) \hat{Q}_{\bar{X}}(z^{-1}x) \right) \\
&= \sum_{y,z \in G} \mu(y)\mu(z) \, \mathbb{E}_J \sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)]. \tag{11}
\end{align}
$$

Now,

$$
\begin{align}
\sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)] &= \sum_{x \in G} \mathrm{Pr}_{\bar{\epsilon}}[\bar{X}^{\bar{\epsilon}} = y^{-1}x] \, \mathrm{Pr}_{\bar{\epsilon}'}[\bar{X}^{\bar{\epsilon}'} = z^{-1}x] \\
&= \frac{1}{2^{2K}} \sum_{x, \bar{\epsilon}, \bar{\epsilon}'} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \\
&= \frac{1}{2^{2K}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \right) \tag{12}
\end{align}
$$

where $\chi_a(\bar{\epsilon})$ is an indicator variable which is 1 if $\bar{X}^{\bar{\epsilon}} = a$ and 0 otherwise. If $\bar{\epsilon} = \bar{\epsilon}'$ then, for each $\bar{\epsilon}$, exactly one $x$ (namely $x = y\bar{X}^{\bar{\epsilon}}$) contributes, and only when $y = z$; hence $\sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') = \delta_{y,z}$, where $\delta_{a,b} = 1$ whenever $a=b$ and 0 otherwise.

For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = 1$ only if $y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'} = x$.
Therefore for $\bar{\epsilon} \neq \bar{\epsilon}'$, we have

$$ \frac{1}{2^{2K}} \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}} (1-\delta_{\bar{\epsilon},\bar{\epsilon}'})]. $$

Putting this in Equation 12, we get

$$ \frac{1}{2^{2K}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon}\neq\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') \right) = \frac{1}{2^K}\delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'}[\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})]. $$

Therefore we get

$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) &= \frac{1}{2^K} \delta_{y,z} + \mathbb{E}_J [\mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})]] \\
&= \frac{1}{2^K} \delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathbb{E}_J [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}]] \\
&= \frac{1}{2^K} \delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathrm{Pr}_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'})] \tag{13}
\end{align}
$$

**Claim 4.2** For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \frac{1}{n}$.

**Proof:** Let $j$ be the smallest index such that $\epsilon_j \neq \epsilon'_j$. Let $X_{i+1}^{\epsilon_1} \cdots X_{i+j-1}^{\epsilon_{j-1}} = a$.
Let $X_{i+j+1}^{\epsilon_{j+1}} \cdots X_k^{\epsilon_{K}} = b$ and $X_{i+j+1}^{\epsilon'_{j+1}} \cdots X_k^{\epsilon'_{K}} = b'$. Also, without loss of generality, let $\epsilon_j = 1$ and $\epsilon'_j = 0$. Then, since $X_{i+j}$ is uniform and independent of $a, b, b'$, we have $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \Pr_{X_{i+j}}(yaX_{i+j}b = zab') = \frac{1}{n}$. $\square$

Thus Equation 13 becomes

$$ \mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) = \frac{1}{2^K} \delta_{y,z} + \frac{2^{2K} - 2^K}{n 2^{2K}}. $$

Putting this in Equation 11, we get

$$ \mathbb{E}_J[\text{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) = \sum_{y,z \in G} \frac{1}{2^{2K}} \left[ 2^K \delta_{y,z} + (2^{2K} - 2^K) \frac{1}{n} \right] \mu(y)\mu(z). \quad (14) $$

Clearly, for any $y \in G$, $\mu(y)$ can be computed with $O(2^i)$ group operations, which is polynomial in $n$ since $i \le k = O(\log n)$. Also from Equation 14, it is clear that $\mathbb{E}_J[\text{Coll}(\hat{Q}_J)]$ is computable in polynomial (in $n$) time. $\square$

# 5 Summary

Constructing explicit Cayley expanders on finite groups is an important problem. In this paper, we give a simple deterministic construction of Cayley expanders that have a constant spectral gap. Our method is elementary and completely different from the existing techniques [10].

The main idea behind our work is a deterministic polynomial-time construction of a cube generating sequence $J$ of size $O(\log|G|)$ such that $\text{Cay}(G, J)$ has a rapid mixing property. In the randomized setting, Pak [7] used similar ideas to construct Cayley expanders. In particular, we also give a derandomization of a well-known result of Erdös and Rényi [2].

# References

[1] Noga Alon and Yuval Roichman. Random Cayley graphs and expanders. *Random Struct. Algorithms*, 5(2):271–285, 1994.

[2] Paul Erdös and Alfréd Rényi. Probabilistic methods in group theory. *Journal D'analyse Mathematique*, 14(1):127–138, 1965.
[3] Martin Hildebrand. A survey of results on random random walks on finite groups. *Probability Surveys*, 2:33–63, 2005.

[4] Shlomo Hoory, Nati Linial, and Avi Wigderson. Expander graphs and their applications. *Bull. AMS*, 43(4):439–561, 2006.

[5] Alex Lubotzky, R. Phillips, and Peter Sarnak. Ramanujan graphs. *Combinatorica*, 8(3):261–277, 1988.

[6] Ravi Montenegro and Prasad Tetali. Mathematical aspects of mixing times in Markov chains. *Foundations and Trends in Theoretical Computer Science*, 1(3), 2005.

[7] Igor Pak. Random Cayley graphs with $O(\log|G|)$ generators are expanders. In *Proceedings of the 7th Annual European Symposium on Algorithms*, ESA '99, pages 521–526. Springer-Verlag, 1999.

[8] Dana Randall. Rapidly mixing Markov chains with applications in computer science and physics. *Computing in Science and Engg.*, 8(2):30–41, 2006.

[9] Omer Reingold. Undirected connectivity in log-space. *J. ACM*, 55(4), 2008.

[10] Avi Wigderson and David Xiao. Derandomizing the Ahlswede-Winter matrix-valued Chernoff bound using pessimistic estimators, and applications. *Theory of Computing*, 4(1):53–76, 2008.

# Appendix

We include a proof of Lemma 2.2.

## Proof of Lemma 2.2

**Proof:** We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform in $G$.

Let $I = \langle i_1, \dots, i_m \rangle$, and let $L = \langle i_{r_1}, \dots, i_{r_\ell} \rangle$ be the corresponding L-subsequence (clearly, $r_1 = 1$). Let $J = \langle g_1, g_2, \dots, g_k \rangle$ be uniform and independent random elements from $G$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m}$ where the $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random.
Then we can write

$$g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \cdots x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_\ell,$$

where, by the definition of an L-subsequence, each $x_j$ is a product of elements from $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_j}}\}$. By conjugation, we can rewrite the above expression as

$$g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \cdots h^{\epsilon_{r_\ell}} x_{\ell-1} x_\ell, \text{ where}$$

$$h^{\epsilon_{r_\ell}} = x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_{\ell-1}^{-1}.$$

We refer to this transformation as moving $x_{\ell-1}$ to the right. Successively applying this transformation to $x_{\ell-2}, x_{\ell-3}, \dots, x_1$, we can write

$$g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = h_{i_{r_1}}^{\epsilon_{r_1}} h_{i_{r_2}}^{\epsilon_{r_2}} \cdots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_1 x_2 \cdots x_{\ell-1} x_\ell,$$

where each $h_{i_{r_j}}$ is a conjugate $z_j g_{i_{r_j}} z_j^{-1}$. Crucially, notice that the group element $z_j$ is a product of elements from $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_{j-1}}}\}$ for each $j$. As a consequence of this and the fact that $g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_\ell}}$ are all independent uniformly distributed elements of $G$, it follows that $h_{i_{r_1}}, h_{i_{r_2}}, \dots, h_{i_{r_\ell}}$ are all independent uniformly distributed elements of $G$. Let $J'$ denote the set of $k$ group elements obtained from $J$ by replacing the subset $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_\ell}}\}$ with $\{h_{i_{r_1}}, h_{i_{r_2}}, \dots, h_{i_{r_\ell}}\}$. Clearly, $J'$ is a set of $k$ independent, uniformly distributed random group elements from $G$.
+ +Thus, we have + +$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = h_{i_{r_1}}^{\epsilon_{r_1}} \dots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} x(\bar{\epsilon}),$$ + +where $x(\bar{\epsilon}) = x_1 x_2 \dots x_\ell$ is an element in $G$ that depends on $J$, $I$ and $\bar{\epsilon}$, where $\bar{\epsilon}$ consists of all the $\epsilon_j$ for $j \in I \setminus L$. Hence, for each $g \in G$, observe that we can write + +$$ +\begin{align*} +R_I^J(g) &= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{i_j}^{\epsilon_j} = g \right] \\ +&= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} [h_{i_{r_1}}^{\epsilon_{r_1}} \cdots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} = g x(\bar{\epsilon})^{-1}] \\ +&= \mathbb{E}_{\bar{\epsilon}}[R_{L(I)}^{J'}(gx(\bar{\epsilon})^{-1})]. +\end{align*} +$$ +---PAGE_BREAK--- + +Therefore we have the following: + +$$ +\begin{align*} +\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \\ +&= \mathbb{E}_J\left[\sum_g (\mathbb{E}_{\bar{\epsilon}} R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \\ +&\le \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}(R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \tag{15} \\ +&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right]\right] \\ +&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right]\right] \\ +&= \mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right] \\ +&= \mathbb{E}_J[\mathrm{Coll}(R_{L(I)}^J)] \le \frac{1}{n} + \delta \tag{16} +\end{align*} +$$ + +where the inequality in (15) follows from the Cauchy–Schwarz inequality and the last step follows from the assumption of the lemma. + +□□ + +We use a simple counting argument to prove Lemma 2.3. A similar lemma appears in [7]. + +**Proof of Lemma 2.3** + +**Proof:** Consider the event that a sequence $X$ of length $m$ does not have an L-subsequence of length $\ell$. Thus it has at most $\ell - 1$ distinct elements, which can be chosen in at most $\binom{k}{\ell-1}$ ways.
The length-$m$ sequence can be formed from them in at most $(\ell-1)^m$ ways. Therefore + +$$ +\begin{align*} +\Pr[X \text{ has L-subsequence of length } < \ell] & \leq \frac{\binom{k}{\ell-1} (\ell-1)^m}{k^m} \\ +& \leq \left(\frac{ke}{\ell-1}\right)^{\ell-1} \cdot \left(\frac{\ell-1}{k}\right)^m \\ +& = e^{\ell-1} \left(\frac{\ell-1}{k}\right)^{m-\ell+1} \\ +& = \frac{e^{\ell-1}}{a^{m-(k/a)}} = \frac{(ae)^{k/a}}{a^m}. \tag*{\square\square} +\end{align*} +$$ + +Next we prove Lemma 2.4. + +**Proof of Lemma 2.4** + +**Proof:** + +We call $I \in [k]^m$ good if it has an L-subsequence of length at least $\ell$, else we call it bad. +---PAGE_BREAK--- + +$$ +\begin{align*} +\mathbb{E}_J[\mathrm{Coll}(Q_J)] &= \mathbb{E}_J\left[\sum_{g \in G} Q_J^2(g)\right] \\ +&= \mathbb{E}_J\left[\sum_{g \in G} (\mathbb{E}_I[R_I(g)])^2\right] \\ +&\leq \mathbb{E}_J\left[\sum_{g \in G} \mathbb{E}_I[R_I^2(g)]\right] \quad \text{by the Cauchy–Schwarz inequality} \tag{17} \\ +&= \mathbb{E}_I[\mathbb{E}_J[\mathrm{Coll}(R_I)]] \\ +&\leq \frac{1}{k^m} \mathbb{E}_J\left[\sum_{\substack{I \in [k]^m \\ I \text{ is good}}} \sum_{g \in G} (R_I^J(g))^2 + \sum_{\substack{I \in [k]^m \\ I \text{ is bad}}} 1\right] \\ +&\leq \mathrm{Pr}_I[I \text{ is good}] \left(\frac{1}{n} + \frac{1}{2^\ell}\right) + \mathrm{Pr}_I[I \text{ is bad}] \tag{18} +\end{align*} +$$ + +Here the last step follows from Lemma 2.2 and Theorem 1.4. Now, using Lemma 2.3, we fix $m = O(\log n)$ so that $\mathrm{Pr}_I[I \text{ is bad}] \le \frac{1}{2^m}$, and choose $\ell = \Theta(m)$. Hence we get that $\mathbb{E}_J[\mathrm{Coll}(Q_J)] \le \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$. $\square\square$ + +Next, we give the proof of Lemma 2.6.
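For intuition, the counting bound of Lemma 2.3 can be verified exhaustively for tiny parameters: the number of sequences in $[k]^m$ with fewer than $\ell$ distinct symbols is at most $\binom{k}{\ell-1}(\ell-1)^m$. The sketch below checks this for arbitrarily chosen small $k$, $m$, $\ell$; it is illustrative only.

```python
from itertools import product
from math import comb

# Exhaustive check: Pr[a uniform sequence in [k]^m has fewer than
# l distinct symbols] <= C(k, l-1) * (l-1)^m / k^m  (Lemma 2.3 style).
k, m, l = 5, 6, 3   # small values chosen arbitrarily for the check

bad = sum(1 for seq in product(range(k), repeat=m) if len(set(seq)) < l)
prob = bad / k**m
bound = comb(k, l - 1) * (l - 1) ** m / k**m
print(f"Pr = {prob:.5f}  <=  bound = {bound:.5f}")
assert prob <= bound
```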
+ +## Proof of Lemma 2.6 + +**Proof:** For each $g \in G$, we can write + +$$ +\begin{align*} +R_I^J(g) &= \operatorname{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{i_j}^{\epsilon_j} = g \right] = \operatorname{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} = gy(\bar{\epsilon})^{-1} \right] \\ +&= \mathbb{E}_{\bar{\epsilon}}[R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1})]. +\end{align*} +$$ + +Therefore we have the following: + +$$ +\begin{align} +\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \nonumber \\ +&= \mathbb{E}_J\left[\sum_g (\mathbb{E}_{\bar{\epsilon}} R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right] \nonumber \\ +&\leq \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}(R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right] \tag{19} \\ +&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right]\right] \nonumber \\ +&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{\hat{I}}^{J(I)}(h))^2\right]\right] \nonumber \\ +&= \mathbb{E}_J[\mathrm{Coll}(R_{\hat{I}}^{J(I)})], \nonumber +\end{align} +$$ + +where inequality (19) follows from the Cauchy–Schwarz inequality. $\square\square$ + +We include a short proof of Lemma 2.9. +---PAGE_BREAK--- + +**Proof of Lemma 2.9** + +**Proof:** There are $\binom{m}{r}$ ways of picking $r$ positions for the fixed elements in $I$. Each such position can be filled in $j$ ways. From the $(k-j)$ random elements of $J$, $\ell$ distinct elements can be picked in $\binom{k-j}{\ell}$ ways. Let $n_{m-r,\ell}$ be the number of sequences of length $m-r$ that can be constructed out of $\ell$ distinct integers such that every integer appears at least once. Clearly, $|S_{r,\ell}| = \binom{m}{r}\, j^{r} \binom{k-j}{\ell}\, n_{m-r,\ell}$.
It is well known that $n_{m-r,\ell}$ is the coefficient of $x^{m-r}/(m-r)!$ in $(e^x - 1)^\ell$. Thus, by the binomial theorem, $n_{m-r,\ell} = \sum_{i=0}^\ell (-1)^i \binom{\ell}{i} (\ell-i)^{m-r}$. Since $m = O(\log n)$ and $\ell \le m$, $n_{m-r,\ell}$ can be computed in time polynomial in $n$. □□ + +Next, we give a proof of Lemma 3.1. + +**Proof of Lemma 3.1** + +**Proof:** The proof closely follows the proof of Erdős and Rényi for the case $\bar{\epsilon} \in \{0,1\}^k$. We briefly sketch the argument below for the sake of completeness. + +We denote the expression $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$ by $\bar{g}^{\bar{\epsilon}}$. For a given $J$, $\chi_x(\bar{\epsilon}) = 1$ if $\bar{g}^{\bar{\epsilon}} = x$ and $0$ otherwise. Let $S_1 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \exists i$ such that $\bar{\epsilon}_i \neq \bar{\epsilon}'_i$ and $\bar{\epsilon}_i \bar{\epsilon}'_i = 0\}$. Let $S_2 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \bar{\epsilon}_i \neq \bar{\epsilon}'_i \Rightarrow \bar{\epsilon}_i \bar{\epsilon}'_i = -1\}$. + +$$ +\begin{aligned} +\mathbb{E}_J[\mathrm{Coll}(D_J)] &= \mathbb{E}_J\left[\sum_{x \in G} (D_J(x))^2\right] \\ +&= \mathbb{E}_J\left[\sum_{x \in G} (\mathrm{Pr}_{\bar{\epsilon}}[\bar{g}^{\bar{\epsilon}} = x])^2\right] \\ +&= \frac{1}{3^{2k}} \mathbb{E}_J\left[\sum_{x \in G} \left(\sum_{\bar{\epsilon}} \chi_x(\bar{\epsilon})\right) \left(\sum_{\bar{\epsilon}'} \chi_x(\bar{\epsilon}')\right)\right] \\ +&= \frac{1}{3^{2k}} \left[ \sum_{\bar{\epsilon}=\bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right] \\ +&= \frac{1}{3^{2k}} \left( 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] +
\sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right) \\ +&= \frac{1}{3^{2k}} \left[ 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \right] \\ +&\leq \frac{1}{3^k} + \left(1 - \frac{1}{3^k} - \frac{5^k}{9^k}\right) \frac{1}{n} + \frac{5^k}{9^k} \\ +&= \left(1 - \frac{1}{n}\right) \left(\frac{1}{3^k} + \frac{5^k}{9^k}\right) + \frac{1}{n} \\ +&< \left(\frac{8}{9}\right)^k + \frac{1}{n} +\end{aligned} +$$ + +To see the last step, first notice that if $\bar{\epsilon} = \bar{\epsilon}'$ then $\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}') = 1$. A simple counting argument shows that $|S_2| = \sum_{i=0}^k {k \choose i} 2^i 3^{k-i} = 5^k$. So $\sum_{(\bar{\epsilon},\bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \le 5^k$. Now consider +---PAGE_BREAK--- + +$(\bar{\epsilon}, \bar{\epsilon}') \in S_1$; let $j$ be the first position from the left such that $\bar{\epsilon}_j \neq \bar{\epsilon}'_j$. W.l.o.g. assume that $\bar{\epsilon}_j = 1$ (or $\bar{\epsilon}_j = -1$) and $\bar{\epsilon}'_j = 0$. In that case write $\bar{g}^{\bar{\epsilon}} = a\, g_j\, b$ and $\bar{g}^{\bar{\epsilon}'} = a\, b'$, where $a$ is the common product of the first $j-1$ factors. Then $\mathrm{Pr}_{g_j}[g_j = b'b^{-1}] = \frac{1}{n}$. Hence +$$ \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) = \frac{9^k - 3^k - 5^k}{n}.
\quad \square\square $$ \ No newline at end of file diff --git a/samples/texts_merged/2909063.md b/samples/texts_merged/2909063.md new file mode 100644 index 0000000000000000000000000000000000000000..3cffcf4a9b45e118692dc4f07f6c024322afd28a --- /dev/null +++ b/samples/texts_merged/2909063.md @@ -0,0 +1,56 @@ + +---PAGE_BREAK--- + +# y⁺ Calculation, Example 6D + +Example 6D: Consider a high-velocity fluid over a flat plate. It is desired to find the thickness of the viscous sublayer at $y^+=1$. The fluid is H₂O at 395 K and 1 MPa. Its free-stream velocity is 700 m/s, and its boundary-layer thickness is $\delta=0.1$ m. + +## Solutions: + +1) Use the "Yplus_LIKE_Eddy_Scales_Book_Version.m" application found in my CFD/turbulence book, "Applied Computational Fluid Dynamics and Turbulence Modeling", Springer International Publishing, 1st Ed., ISBN 978-3-030-28690-3, 2019, DOI: 10.1007/978-3-030-28691-0. + +or + +2) Get a free copy of "Yplus_LIKE_Eddy_Scales_Book_Version.m" at www.cfdturbulence.com, or email me at tayloreddydk1@gmail.com. + +or + +3) Use the free $y^+$ estimation GUI tool offered by cfd-online, which is at http://www.cfd-online.com/Tools/yplus.php + +or + +4) Follow the step-by-step solution shown in the next slide. +---PAGE_BREAK--- + +$y^+$ Calculation, Example 6D + +From $P$ and $T$, $\rho = 942 \text{ kg/m}^3$ and $\mu = 2.28 \times 10^{-4} \text{ kg/m-s}$.
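The step-by-step hand calculation that follows can also be scripted. Below is a minimal Python sketch using the property values above and the same skin-friction correlation as on this slide; variable names are my own.

```python
import math

# Reproduce the slide's y+ = 1 wall-distance estimate for Example 6D.
rho    = 942.0      # density of water at 395 K, 1 MPa [kg/m^3]
mu     = 2.28e-4    # dynamic viscosity [kg/(m*s)]
U      = 700.0      # free-stream velocity [m/s]
delta  = 0.1        # boundary-layer thickness [m]
y_plus = 1.0        # target non-dimensional wall distance

nu    = mu / rho                             # kinematic viscosity [m^2/s]
Re    = U * delta / nu                       # Reynolds number
Cf    = (2 * math.log10(Re) - 0.65) ** -2.3  # skin-friction correlation
tau_w = Cf * rho * U**2 / 2                  # wall shear stress [Pa]
u_tau = math.sqrt(tau_w / rho)               # friction velocity [m/s]
y     = y_plus * nu / u_tau                  # physical wall distance [m]

print(f"Re = {Re:.3g}, y(at y+=1) = {y:.3g} m")
```

With the unrounded kinematic viscosity this gives Re and y within rounding of the values quoted on the slide.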
+ +$$\nu = \frac{\mu}{\rho} = \frac{2.28 \times 10^{-4}}{942} = 2.43 \times 10^{-7} \text{ m}^2/\text{s}$$ + +$$Re_x = \frac{U_\infty \delta(x)}{\nu} = \frac{700 \times 0.1}{2.43 \times 10^{-7}} = 2.87 \times 10^{8} < 10^{9}$$ + +$$C_f = [2 \log_{10}(Re_x) - 0.65]^{-2.3} = [2 \log_{10}(2.87 \times 10^8) - 0.65]^{-2.3} = 1.60 \times 10^{-3}$$ + +$$\tau_w = C_f \frac{\rho U_\infty^2}{2} = 1.60 \times 10^{-3} \cdot \frac{942 \times 700^2}{2} = 3.78 \times 10^5 \text{ Pa}$$ + +$$u_* = \sqrt{\frac{\tau_w}{\rho}} = \sqrt{\frac{3.78 \times 10^5}{942}} = 20.0 \text{ m/s}$$ + +$$y(\text{at } y^+=1) = \frac{y^+ \nu}{u_*} = \frac{1 \times 2.43 \times 10^{-7}}{20} = 1.22 \times 10^{-8} \text{ m}$$ +---PAGE_BREAK--- + +# y⁺ Calculation, Example 6D Solutions + +## Approaches 1 and 2 (the Matlab script, Yplus_LIKE_Eddy_Scales_Book_Version.m) + +$$Re_x = 2.89 \times 10^8$$ + +$$y(\text{at } y^+=1) = 1.23 \times 10^{-8} \text{ m}$$ + +## Approach 4 (previous slide) + +$$Re_x = 2.87 \times 10^8$$ + +$$y(\text{at } y^+=1) = 1.22 \times 10^{-8} \text{ m}$$ + +## Approach 3 (cfd-online tool) \ No newline at end of file diff --git a/samples/texts_merged/305525.md b/samples/texts_merged/305525.md new file mode 100644 index 0000000000000000000000000000000000000000..e1366b44f714651d87bb099bae7b89d31468c76f --- /dev/null +++ b/samples/texts_merged/305525.md @@ -0,0 +1,295 @@ + +---PAGE_BREAK--- + +Topology Proceedings + +**Web:** http://topology.auburn.edu/tp/ + +**Mail:** Topology Proceedings +Department of Mathematics & Statistics +Auburn University, Alabama 36849, USA + +**E-mail:** topolog@auburn.edu + +**ISSN:** 0146-4124 + +COPYRIGHT © by Topology Proceedings. All rights reserved. +---PAGE_BREAK--- + +# SPLITTABILITY OVER LINEAR ORDERINGS + +A. J. Hanna* and T.B.M. McMaster† + +## Abstract + +A partial order X is splittable over a partial order Y if for every subset A of X there exists an order preserving mapping $f : X \to Y$ such that $f^{-1}f(A) = A$.
We define a cardinal function $sc(X)$ (the 'splittability ceiling' for $X$) to be the least cardinal $\beta$ such that the disjoint sum of $\beta$ copies of $X$ fails to split over a single copy of $X$. We allow $sc(X) = \infty$ to cover the case where arbitrarily many disjoint copies may be split. We investigate this cardinal function with respect to (linear) partial orders. + +## 1. Introduction + +A. V. Arhangel'skiǐ formulated and developed a range of definitions of splittability (or cleavability) in topology (see for example [1, 2]), of which the following are amongst the most basic. + +**Definition 1.1.** For topological spaces X and Y: + +— *X is splittable over Y along the subset A of X if there exists continuous f : X → Y such that:* + +* The research of the first author was supported by a distinction award scholarship from the Department of Education for Northern Ireland. + +† The authors would like to express their gratitude to Steven Watson for his helpful comments and insight, especially regarding Theorem 1.8. +*Mathematics Subject Classification:* 06A05, 06A06, 54A25, 54C99 +**Key words:** splittability, partially ordered set, splittability ceiling +---PAGE_BREAK--- + +(i) $f(A) \cap f(X \setminus A) = \emptyset$ or, equivalently, + +(ii) $f^{-1}f(A) = A$. + +- *X is splittable over Y if for every subset A of X there exists continuous f : X → Y such that $f^{-1}f(A) = A$.* + +It quickly becomes apparent that splittability is not exclusively a topological idea. Indeed, only a routine translation into the language of the appropriate category is required for an analogous definition of splittability over other structures. (For example, splittability over semigroups is considered in [6].) + +**Definition 1.2.** Let $X$ and $Y$ be partially ordered sets (posets). + +– A map $f$ between partial orders is increasing (or order preserving) if $x \le y$ implies $f(x) \le f(y)$.
+ +– *X is splittable over Y along the subset A of X if there exists increasing $f: X \to Y$ such that $f^{-1}f(A) = A.$* + +– *X is splittable over Y if for every subset A of X there exists increasing f : X → Y such that f⁻¹f(A) = A.* + +The following result was obtained by D. J. Marron [4, 5]: + +**Theorem 1.3.** A poset *X* is splittable over the *n*-point chain if and only if: + +(i) *X does not contain a chain of height greater than n, and* + +(ii) *X does not contain two disjoint chains of height n.* + +**Note 1.4.** The previous result shows that it is not possible to split the (disjoint) sum of two copies of a finite chain over a single copy of the same finite chain. However, it is possible to disjointly embed two copies of $\omega$ (the positive integers with usual ordering) into a single copy of $\omega$. Clearly, then, it is possible to split 'two disjoint copies' of $\omega$ over $\omega$. +---PAGE_BREAK--- + +In general, suppose that $\alpha$ copies of a poset $X$ can be disjointly embedded into a single copy of $X$. It is clear that the disjoint sum of $\alpha$ copies of $X$ will split over a single copy of $X$. Indeed, if $(X \cdot \alpha)$ can be embedded into $X$, we can split the sum of $\alpha$ copies of $X$ over $X$. + +For notation and further information on linear orderings the interested reader is referred to [8]. + +**Definition 1.5** (the ‘splittability ceiling’ for $X$). Let $sc(X)$ be the least cardinal $\beta$ such that the (disjoint) sum of $\beta$ copies of $X$ fails to split over a single copy of $X$. We allow $sc(X) = \infty$ to cover the case where the sum of arbitrarily many disjoint copies may be split. + +**Note 1.6.** The critical case for deciding $sc(X)$ is reached in attempting to split $2^{|X|}$ copies. 
If we have more than $2^{|X|}$ disjoint copies of $X$ and split along some subset of their sum, then there must be two copies which we are splitting along the same subset (since $X$ has precisely $2^{|X|}$ subsets) and hence the 'same' map will do. In other words, if $sc(X) \ge 2^{|X|}$ then $sc(X) = \infty$. + +**Definition 1.7.** [8] A cardinal number $\aleph_\alpha$ is said to be regular if it is not the sum of fewer than $\aleph_\alpha$ cardinal numbers smaller than $\aleph_\alpha$. + +**Theorem 1.8.** For any partial order $X$, if $sc(X) \neq \infty$ then $sc(X)$ is a regular cardinal. + +*Proof.* Suppose $sc(X) = \lambda < \infty$ is not a regular cardinal; then $\lambda$ can be expressed as the sum of $\alpha$ cardinals $\beta_i$ each less than $\lambda$, where $\alpha$ is less than $\lambda$. Let $Y = \bigcup_{i \in \lambda} X_i$ be the disjoint union of $\lambda$ copies of $X$. We can write $Y = \bigcup_{i \in \alpha} \left( \bigcup_{j \in \beta_i} X_j \right)$. For each $i \in \alpha$ we can split $\bigcup_{j \in \beta_i} X_j$ over a single copy $X_{\beta_i}$ of $X$, since $\beta_i < \lambda$. +---PAGE_BREAK--- + +Likewise we can split $\bigcup_{i \in \alpha} X_{\beta_i}$ over a single copy of $X$, since $\alpha < \lambda$. + +Hence we can split $\lambda$ copies of $X$ (along any subset) over $X$, a contradiction. $\square$ + +**Proposition 1.9.** *The splittability ceiling for the chain of positive integers $\omega$ is infinity (i.e. $sc(\omega) = \infty$).* + +*Proof.* Given $X$, the (disjoint) sum of copies of $\omega$, and a subset $A$ of $X$, we define a map $f : X \to \omega$ as follows: + +$$f(x) = \begin{cases} x & \text{if } (x \in A \text{ and } x \text{ is odd}) \text{ or } (x \notin A \text{ and } x \text{ is even}), \\ x+1 & \text{otherwise.} \end{cases}$$ + +It is clear that $f$ is increasing and that $f(A)$ is a subset of the odds while $f(X \setminus A)$ is a subset of the evens.
It follows that $f$ splits $X$ along $A$ over $\omega$ as required. $\square$ + +The corresponding result holds for the negative integers $\omega^*$ and for the integers $\omega^* + \omega$. + +**Proposition 1.10.** Let $\alpha$ be an ordinal (considered as a linear order). Then + +$$sc(\alpha) = \begin{cases} \infty & \text{if } \alpha \text{ is a limit ordinal} \\ 2 & \text{if } \alpha \text{ is a non-limit ordinal} \end{cases}$$ + +*Proof.* Note that each element in a limit ordinal has an immediate successor. The first part of the result follows from similar methods as employed for $\omega$. If $\alpha$ is a non-limit ordinal we specify the subset $A$ to contain the 'odd' ordinals less than $\alpha$. Similarly we specify the subset $B$ to contain the 'even' ordinals less than $\alpha$. We can express $\alpha = \xi + n$ where $\xi$ is a limit ordinal and $n$ is finite. Now $f(x) \ge x$ for all $x \in \alpha$ and $f(x) = x$ for $x = \xi + i$ ($0 \le i < n$) whenever $f$ is a map splitting $\alpha$ along $A$ or $B$ over $\alpha$. Clearly it will not be possible to split the sum of two copies of $\alpha$ along $A$ and $B$ respectively over a single copy of $\alpha$. $\square$ +---PAGE_BREAK--- + +**Proposition 1.11.** *The splittability ceiling for the chain of rationals η is infinity (i.e. sc(η) = ∞).* + +*Proof.* Decompose η into two disjoint subsets C and D, each of which is dense in η. Enumerate both C and D in an arbitrary fashion. Given disjoint copies of η and a subset A to split along, define a map for each copy. Begin by enumerating the copy $X_1 = \{x_1, x_2, x_3, \dots\}$. If $x_1 \in A (\notin A)$ map $x_1$ to the first point in the enumeration of C (D). The process continues inductively (using a method similar to that devised by Cantor to show that every countable linear order can be embedded into η). □ + +**Proposition 1.12.** *The splittability ceiling for the chain of the real numbers λ is c+ (i.e. 
sc(λ) = c+).* + +*Proof.* We first note that it is possible to disjointly embed continuum-many copies of $\lambda$ into $\lambda$. To prove the result we show that there are only continuum-many increasing maps from the reals into the reals. We know that there are only continuum-many maps from the rationals into the reals. Given increasing $f: \mathbb{R} \to \mathbb{R}$ consider its restriction to the rationals $f|_{\mathbb{Q}}$. For how many increasing maps $g: \mathbb{R} \to \mathbb{R}$ do we have $f|_{\mathbb{Q}} = g|_{\mathbb{Q}}$? + +We can show that $f$ and $g$ can only differ at countably many points: for each irrational $x$ select both a strictly decreasing sequence $(a_n)$ and a strictly increasing sequence $(b_n)$ of rationals, each converging to $x$. Now $(f(a_n))$ converges to some limit $l$ while $(f(b_n))$ converges to some limit $l'$. If $l = l'$ then $f(x) = g(x) = l$, otherwise $f(x), g(x) \in [l', l]$ and $f(X) \cap [l', l] = \{f(x)\}$. Since there can only be countably many disjoint intervals in the reals, there can only be countably many points $x$ where $f(x) \neq g(x)$. + +It follows that there can only be continuum-many maps within each equivalence class. Hence there are at most continuum-many increasing maps from the reals to the reals. Clearly if we have more than continuum-many disjoint copies of $\lambda$ and pick different subsets in them, then the union of these copies cannot be +---PAGE_BREAK--- + +split over a single copy of $\lambda$ along the union of these sets, due +to the cardinality restriction on increasing maps. +□ + +Similar arguments can be used to locate an upper bound for +the number of increasing maps from any linear order into itself. +The most interesting case appears to be that of the countable +linear orders. Moreover, unless the order is scattered it can be +shown that the splittability ceiling will be infinity. 
This follows +since any non-scattered linear order will contain a copy of $\eta$ and +we already know that $sc(\eta) = \infty$. + +## 2. Countable Linear Orderings + +**Lemma 2.1.** Let $X$ be a partial order. If $sc(X) > 2$ then $sc(X) \geq \aleph_0$. + +*Proof.* Let $X_1, X_2, X_3$ be disjoint copies of the partial order $X$, with subsets $A_1, A_2, A_3$ respectively. Let $X_4$ be a fourth copy of $X$. Since $sc(X) > 2$ we can split $X_1 \cup X_2$ along $A_1 \cup A_2$ over $X_4$ using an increasing map $f$ (i.e. $f^{-1}f(A_1 \cup A_2) = A_1 \cup A_2$). Now split $X_3 \cup X_4$ along $B \cup A_3$ (where $B = f(A_1 \cup A_2)$) over $X$ using an increasing map $g$ (i.e. $g^{-1}g(f(A_1 \cup A_2) \cup A_3) = f(A_1 \cup A_2) \cup A_3$). + +Define a map $h : X_1 \cup X_2 \cup X_3 \to X$ by + +$$h(x) = \begin{cases} g \circ f(x) & \text{if } x \in X_1 \cup X_2, \\ g(x) & \text{if } x \in X_3; \end{cases}$$ + +then $h$ splits $X_1 \cup X_2 \cup X_3$ along $A_1 \cup A_2 \cup A_3$ over $X$; for suppose $x \in X_1 \cup X_2 \cup X_3$ and + +$$\begin{align*} +h(x) \in h(A_1 \cup A_2 \cup A_3) &= h(A_1 \cup A_2) \cup h(A_3) \\ +&= g \circ f(A_1 \cup A_2) \cup g(A_3) \\ +&= g(f(A_1 \cup A_2) \cup A_3). +\end{align*}$$ + +If $x \in X_3$ then $h(x) = g(x) \in g(f(A_1 \cup A_2) \cup A_3)$ so $x \in f(A_1 \cup A_2) \cup A_3$ and $x \in A_3$. If $x \in X_1 \cup X_2$ then $h(x) = g \circ f(x) \in g(f(A_1 \cup A_2) \cup A_3)$ so $f(x) \in f(A_1 \cup A_2) \cup A_3$. +---PAGE_BREAK--- + +Fig. 1. Splitting 3 copies of X over a single copy of X + +Hence $f(x) \in f(A_1 \cup A_2)$ and $x \in A_1 \cup A_2$. + +Clearly this argument can be extended by induction so that +$sc(X) > n$ for all $n \in \mathbb{N}$. $\square$ + +**Corollary 2.2.** Let X be a finite partial order; then $sc(X) = 2$ or $\infty$. + +We show now that the previous result extends to countable +linear partial orders. To do so, we employ the notion of an ‘order +shuffling’ and a result due to J. L. Orr. 
+ +**Definition 2.3.** [7] Let A be a countable linearly ordered set. A function $f : A \to \mathbb{N}^+$ is called an order shuffling on A. A linearly ordered set B shuffles into (A, f) if there is an increasing surjection $\sigma$ from B onto A such that the cardinality of $\sigma^{-1}\{a\}$ is at least $f(a)$ for all but finitely many $a \in A$. If this holds for all $a \in A$ then B shuffles into (A, f) exactly.
$\square$ + +**Theorem 2.6.** Let X be a countable linear ordering; then $sc(X) = 2$ or $sc(X) = \infty$. + +*Proof.* We know that if X is not scattered, then X contains a copy of the rationals, so $sc(X) = \infty$. We also know that if $sc(X) > 2$ then $sc(X) \ge \aleph_0$, that is, we can split the sum of any finite number of copies of X over a single copy. Let X be a countable scattered linear order such that $sc(X) > 2$. Using Lemma 2.5 it is possible to find an increasing surjection $\pi : X \to X$ and points $\{a_1, a_2, \dots, a_n\}$ such that: + +(i) $|\pi^{-1}(x)| > 1$ for each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$ and +---PAGE_BREAK--- + +(ii) $\pi^{-1}(a_i) = \{a_i\}$ for each $i = 1, 2, \dots, n$. + +For each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$ choose $x_1, x_2 \in \pi^{-1}(x)$ with $x_1 < x_2$. Let $Y = \bigcup_{i \in \beta} X_i$ be the disjoint union of $\beta$ copies of $X$. Let $A = \bigcup_{i \in \beta} A_i$ where $A_i \subseteq X_i$. For each subset $B$ of $\{a_1, a_2, \dots, a_n\}$ let $X_B$ be a copy of $X$. For each $i \in \beta$ let $C_i = A_i \cap \{a_1, a_2, \dots, a_n\}$ and define a map $f_i : X_i \to X_{C_i}$ as follows: + +$$f_i(x) = \begin{cases} a_j & \text{if } x = a_j \text{ for some } j, \\ x_1 & \text{if } x \in A_i \setminus \{a_1, a_2, \dots, a_n\}, \\ x_2 & \text{if } x \notin A_i \cup \{a_1, a_2, \dots, a_n\}. \end{cases}$$ + +These maps can be used to split $Y$ along $A$ over $2^n$ copies of $X$ (using $f$ say), which can in turn be split along $f(A)$ over a single copy of $X$. Hence we can split $\beta$ copies of $X$ over $X$, so $sc(X) = \infty$. $\square$ + +**Note 2.7.** Given a countable scattered linear order $X$, for $x, y \in X$ we set $x \equiv y$ if and only if there are only finitely many $z \in X$ such that $x < z < y$ or $y < z < x$, and thus obtain an equivalence relation on $X$. Let us denote the equivalence class of a point $x \in X$ by $e(x)$.
Now we can determine a subset $A$ of $X$ such that between each two points in $A$ we can find a point not in $A$ and vice versa. The first step is to select a point $x$ from each equivalence class. We assign a point $y \in e(x)$ to the set $A$ if there are an even number of points between $x$ and $y$ (inclusive). We say that $A$ and $X \setminus A$ alternate in $X$. Note that this only works because the order under consideration is scattered. + +**Lemma 2.8.** Let $X$ be a countable scattered linear order with $sc(X) > 2$. For each $x \in X$ there exists an order preserving injection $f: X \to X$ such that $x \notin f(X)$. + +*Proof.* Let $x \in X$, where $X$ is a countable scattered linear order with $sc(X) > 2$. Choose a subset $A$ of $X$ that alternates in $X$ +---PAGE_BREAK--- + +as described in Note 2.7. Let $Y = X_1 \cup X_2$ be the disjoint union of 2 copies of $X$. Let $B = A_1 \cup A_2$ where $A = A_1 \subseteq X_1$ and $X \setminus A = A_2 \subseteq X_2$. Choose $f$ that splits $Y$ along $B$ over $X$ and set $f_i = f|_{X_i}$ for $i=1,2$. The choice of $A$ ensures that both $f_1$ and $f_2$ are order preserving injections. If $f_1(X_1)$ or $f_2(X_2)$ do not contain $x$ we have found a suitable map. Otherwise we can find distinct $a_1, a_2 \in X$ such that $f_1(a_1) = f_2(a_2) = x$. Now $a_1 < a_2$ say, so define a map $g: X \to X$ by + +$$ g(z) = \begin{cases} f_2(z) & \text{for } z < a_2 \\ f_1(z) & \text{for } z \ge a_2. \end{cases} $$ + +This map is an order preserving injection and $x \notin g(X)$. $\square$ + +**Lemma 2.9.** Let $X$ be a countable scattered linear order such that for each $x \in X$ there exists an order preserving injection $f : X \to X$ such that $x \notin f(X)$. If $A$ is a finite subset of $X$ there exists an order preserving injection $g : X \to X$ such that $A \cap g(X) = \emptyset$. + +*Proof*. Let $A = \{a_1, a_2, \dots, a_n\}$ be a finite subset of $X$. 
Suppose that there exists an order preserving injection $g : X \to X$ such that $g(X) \cap \{a_1, a_2, \dots, a_k\} = \emptyset$. If $k < n$, then either $a_{k+1} \notin g(X)$ or there exists $b \in X$ such that $g(b) = a_{k+1}$. In the first case, let $h = g$, and in the second case, choose an order preserving injection $f : X \to X$ such that $b \notin f(X)$ and set $h = g \circ f$. Now $h$ is an order preserving injection and $h(X) \cap \{a_1, a_2, \dots, a_{k+1}\} = \emptyset$, and we repeat the above argument. When $k=n$, we are done. $\square$ + +**Theorem 2.10.** Let $X$ be a countable linear order; then $\mathrm{sc}(X) = \infty$ if and only if $2 \cdot X$ order embeds into $X$. + +*Proof.* We need only prove that if $X$ is a countable linear order and $\mathrm{sc}(X) = \infty$ then $2 \cdot X$ order embeds into $X$. If $X$ is not scattered then $X$ contains a subset isomorphic to the rationals. Since every countable linear order embeds into the rationals (see +---PAGE_BREAK--- + +[8]) clearly $2 \cdot X$ order embeds into $X$. We assume now that $X$ is scattered. First find an increasing surjection $\sigma : X \to X$ such that $|\sigma^{-1}(x)| \ge 2$ for all $x \in X \setminus \{a_1, a_2, \dots, a_m\}$. It is possible (via Lemmas 2.8 and 2.9) to find an order preserving injection $f : X \to X$ such that $a_i \notin f(X)$ for all $i$. + +Set $Y = \sigma^{-1}(f(X)) \subseteq X$ and define $\pi : Y \to X$ by $\pi = f^{-1} \circ \sigma$. It follows that $\pi$ is order preserving and that $|\pi^{-1}(x)| \ge 2$ for all $x \in X$. + +Select, for each $x$, two points $x_0, x_1 \in \pi^{-1}(x)$ with $x_0 < x_1$. Define $\phi : \{0, 1\} \times X \to X$ by + +$$ \phi(i, x) = \begin{cases} x_0 & \text{if } i = 0, \\ x_1 & \text{if } i = 1. \end{cases} $$ + +Clearly, $\phi$ order embeds $2 \cdot X$ into $X$. 
$\square$ + +**Lemma 2.11.** The following statements are equivalent for any linear order $X$: + +(i) $2 \cdot X$ order embeds into $X$, + +(ii) $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$, + +(iii) $n \cdot X$ order embeds into $X$ for some $n \in \mathbb{N}$ where $n > 1$. + +*Proof.* We prove first that (i) implies (ii). Let $X$ be a linear order such that $2 \cdot X$ order embeds into $X$; that is, there exists an order preserving injection $f : \{0, 1\} \times X \to X$. Suppose inductively that $(k-1) \cdot X$ order embeds into $X$ for some $k \ge 2$; that is, there exists an order preserving injection $g : \{0, 1, \dots, k-2\} \times X \to X$. Define a map $h : \{0, 1, \dots, k-1\} \times X \to 2 \cdot X$ as follows: + +$$ h(i, x) = \begin{cases} (0, g(i, x)) & \text{if } i < k-1, \\ (1, g(k-2, x)) & \text{if } i = k-1. \end{cases} $$ + +Now define $\pi : \{0, 1, \dots, k-1\} \times X \to X$ as $\pi = f \circ h$. It follows that $\pi$ is an order preserving injection, so by induction we have shown that $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$. That (ii) implies (iii) is trivial. Finally (iii) implies (i) since $2 \cdot X$ will clearly order embed into $n \cdot X$ for any $n > 1$. $\square$ +---PAGE_BREAK--- + +**Theorem 2.12.** Let $X$ be a countable linear order. Then $sc(X) = \infty$ if and only if $sc(n \cdot X) = \infty$ for all $n \in \mathbb{N}$. + +*Proof.* If $sc(X) = \infty$ then $2 \cdot X$ (and hence $k \cdot X$ for all $k \in \mathbb{N}$) will order embed into $X$ by Lemma 2.11. It follows that $2n \cdot X$ will order embed into $n \cdot X$ and hence into $X$, a sufficient condition for $sc(n \cdot X) = \infty$. + +If $sc(n \cdot X) = \infty$ then $2n \cdot X$ order embeds into $n \cdot X$ by Theorem 2.10. That is, we can find an order preserving injection $f : \{0, 1, \dots, 2n-1\} \times X \to \{0, 1, \dots, n-1\} \times X$.
For any $x \in X$ we can find $x', x'' \in X$ such that:

$$f(0,x) \le (0, x') < (0, x'') \le f(2n-1,x).$$

Define a map $g : \{0, 1\} \times X \to X$ by

$$g(i, x) = \begin{cases} x' & \text{if } i = 0, \\ x'' & \text{if } i = 1. \end{cases}$$

Clearly $g$ is an order preserving injection that order embeds $2 \cdot X$ into $X$, a sufficient condition for $\mathrm{sc}(X) = \infty$ by Theorem 2.10. $\square$

**Theorem 2.13.** Let $X$ be a countable linear order. Then $\mathrm{sc}(X) = \infty$ if and only if $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$.

## References

[1] A. V. Arhangel'skii, *A general concept of cleavability of topological spaces over a class of spaces*, Abstracts Tiraspol Symposium (1985) (Stiinca, Kishinev, 1985), 8–10 (in Russian).

[2] A. V. Arhangel'skii, *A survey of cleavability*, Topology and its Applications **54** (1993), 141–163.

[3] A. J. Hanna and T. B. M. McMaster, *Some results on cleavability*, submitted.

---PAGE_BREAK---

[4] D. J. Marron, *Splittability in ordered sets and in ordered spaces*, Ph.D. thesis, Queen's University Belfast (1997).

[5] D. J. Marron and T. B. M. McMaster, *Splittability in ordered sets and spaces*, Proc. Eighth Prague Topological Symp. (1996), 280–282. [Located in Topology Atlas at http://www.unipissing.ca/topology]

[6] D. J. Marron and T. B. M. McMaster, *Cleavability in semigroups*, to appear in Semigroup Forum.

[7] J. L. Orr, *Shuffling of linear orders*, Canad. Math. Bull. **38**(2) (1995), 223–229.

[8] J. G. Rosenstein, *Linear Orderings*, Pure and Applied Mathematics, Academic Press (1982).
+ +Department of Pure Mathematics, The Queen's University of +Belfast, University Road, Belfast, BT7 1NN, United Kingdom + +*E-mail address: a.hanna@qub.ac.uk* + +Department of Pure Mathematics, The Queen's University of +Belfast, University Road, Belfast, BT7 1NN, United Kingdom \ No newline at end of file diff --git a/samples/texts_merged/3147359.md b/samples/texts_merged/3147359.md new file mode 100644 index 0000000000000000000000000000000000000000..81a302507d821cdf1449bc2fbf440b3e53c08e29 --- /dev/null +++ b/samples/texts_merged/3147359.md @@ -0,0 +1,589 @@ + +---PAGE_BREAK--- + +Conference Paper + +# Implementing Hybrid Semantics: From Functional to Imperative + +Sergey Goncharov +Renato Neves +José Proença* + +*CISTER Research Centre +CISTER-TR-201008 + +2020/11/30 +---PAGE_BREAK--- + +# Implementing Hybrid Semantics: From Functional to Imperative + +Sergey Goncharov, Renato Neves, José Proença* + +*CISTER Research Centre +Polytechnic Institute of Porto (ISEP P.Porto) +Rua Dr. António Bernardino de Almeida, 431 +4200-072 Porto +Portugal +Tel.: +351.22.8340509, Fax: +351.22.8321159 +E-mail: sergey.goncharov@fau.de, nevrenato@di.uminho.pt, pro@isep.ipp.pt +https://www.cister-labs.pt + +## Abstract + +Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of hybridness as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HybCore with a sound and adequate operational semantics has been proposed. 
The latter semantics however did not provide hints to implementing HybCore as a runnable language, suitable for hybrid system simulation (e.g. the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HybCore, whose semantics is simpler and runnable, and yet intimately related with the semantics of HybCore at the level of hybrid monads. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for Haskell and UTop for OCaml. The major asset of our implementation is that it formally follows the operational semantic rules. +---PAGE_BREAK--- + +# Implementing Hybrid Semantics: From Functional to Imperative + +Sergey Goncharov¹, Renato Neves² and José Proença³ + +¹ Dept. of Comp. Sci., FAU Erlangen-Nürnberg, Germany + +² University of Minho & INESC-TEC, Portugal + +³ CISTER/ISEP, Portugal + +**Abstract.** Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of *hybridness* as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HYBCORE with a sound and adequate operational semantics has been proposed. The latter semantics however did not provide hints to implementing HYBCORE as a runnable language, suitable for hybrid system simulation (e.g. 
the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HYBCORE, whose semantics is simpler and runnable, and yet intimately related with the semantics of HYBCORE at the level of *hybrid monads*. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for HASKELL and UTOP for OCAML. The major asset of our implementation is that it formally follows the operational semantic rules.

## 1 Introduction

**The core idea of hybrid programming.** Hybrid programming is a rapidly emerging computational paradigm [26,29] that aims at using principles and techniques from programming theory (e.g. compositionality [12,26], Hoare calculi [29,34], theory of iteration [2,8]) to provide formal foundations for developing computational systems that interact with physical processes. Cruise controllers are a typical example of this pattern; a very simple case is given by the hybrid program below.

```c
while true do {
  if v ≤ 10 then (v' = 1 for 1) else (v' = -1 for 1) (cruise controller)
}
```

---PAGE_BREAK---

In a nutshell, the program specifies a digital controller that periodically measures and regulates a vehicle's velocity (v): if the latter is less than or equal to 10, the controller accelerates during 1 time unit, as dictated by the program statement $v' = 1 \text{ for } 1$ ($v' = 1$ is a differential equation representing the velocity's rate of change over time. The value 1 on the right-hand side of for is the duration during which the program statement runs). Otherwise, it decelerates during the same amount of time ($v' = -1 \text{ for } 1$). Figure 1 shows the output corresponding to this hybrid program for an initial velocity of 5.
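This loop-by-loop reading of the program can be mimicked directly. Below is a minimal, self-contained sketch (ours, not part of the paper's tooling) that computes the controller's velocity at a given time instant: within each one-unit iteration the derivative is constant, so each segment is solved exactly.

```python
def cruise_velocity(t, v0=5.0):
    """Velocity of the (cruise controller) program at time instant t.

    Each loop iteration lasts 1 time unit; within an iteration v' = 1
    if v <= 10 held at the start of the iteration and v' = -1 otherwise,
    so every segment of the trajectory is an exact straight line."""
    v, elapsed = v0, 0.0
    while elapsed + 1.0 <= t:          # complete iterations before time t
        v += 1.0 if v <= 10 else -1.0
        elapsed += 1.0
    slope = 1.0 if v <= 10 else -1.0
    return v + slope * (t - elapsed)   # partial final segment

assert cruise_velocity(0.0) == 5.0
assert cruise_velocity(5.0) == 10.0   # reaches the target velocity
assert cruise_velocity(6.0) == 11.0   # overshoots by 1, then decelerates
assert cruise_velocity(7.0) == 10.0
```

After time 5 the computed velocity oscillates between 10 and 11, consistent with the behaviour described for Figure 1.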

Note that in contrast to standard programming, the cruise controller involves not only classical constructs (while-loops and conditional statements) but also differential ones (which are used for describing physical processes). This cross-disciplinary combination is the core feature of hybrid programming and has a notably wide range of application domains (see [29,30]). However, it also hinders the use of classical programming techniques, and thus calls for a principled extension of programming theory to the hybrid setting.

Fig. 1: Vehicle's velocity

As is already apparent from the (cruise controller) example, we stick to an *imperative* programming style, in particular, in order to keep in touch with the established denotational models of physical time and computation. A popular alternative for modelling real-time and hybrid systems is to use a *declarative* programming style, which is done e.g. in real-time Maude [27] or Modelica [10]. A well-known benefit of declarative programming is that programs are very easy to write; on the flip side, however, it is considerably more difficult to define what exactly they mean.

**Motivation and related work.** Most of the previous research on formal hybrid system modelling has been inspired by automata theory and Kleene algebra (as the corresponding algebraic counterpart). These approaches led to the well-known notion of hybrid automaton [17] and to Kleene algebra based languages for hybrid systems [28,18,19]. From the purely semantic perspective, these formalizations are rather close and share such characteristic features as *nondeterminism* and what can be called *non-refined divergence*. The former is standardly justified by the focus on formal verification of safety-critical systems: in such contexts overabstraction is usually desirable and useful. However, coalescing *purely hybrid* behaviour with nondeterminism detaches semantic models from their prototypes as they exist in the wild.
This brings up several issues. Most obviously, a nondeterministic semantics, especially one not given in an operational form, cannot directly serve as a basis for languages and tools for hybrid system testing and simulation. Moreover, models with nondeterminism baked in do not provide a clear indication of how to combine hybrid behaviour with effects other

---PAGE_BREAK---

than nondeterminism (e.g. probability), or of how to combine it with nondeterminism in a different way (van Glabbeek's spectrum [36] gives an idea about the diversity of potentially arising options). Finally, the Kleene algebra paradigm strongly suggests a relational semantics for programs, with the underlying relations connecting a state on which the program is run with the states that the program can reach. As previously indicated by Höfner and Möller [18], this view is too coarse-grained and contrasts with the trajectory-based one, where a program is associated with a trajectory of states (recall Figure 1). The trajectory-based approach provides an appropriate abstraction for such aspects as notions of convergence, periodic orbits, and duration-based predicates [5]. This potentially enables analysis of properties such as *how fast* our (cruise controller) example reaches the target velocity or for *how long* it exceeds it.

The issue of *non-refined divergence* mentioned earlier arises from the Kleene algebra law $p;0 = 0$ in conjunction with Fischer–Ladner's encoding of while-loops `while b do { p }` as $(b;p)^*; \neg b$. This creates havoc with all divergent programs `while true do { p }`, as they become identified with the divergence $0$, thus making the above example of a (cruise controller) meaningless. This issue is extensively discussed in Höfner and Möller's work [18] on a *nondeterministic* algebra of trajectories, which tackles the problem by disabling the law $p;0 = 0$ and by introducing a special operator for infinite iteration that inherently relies on nondeterminism.
This iteration operator inflates trajectories at so-called 'Zeno points' with arbitrary values, which in our case would entail e.g. the program

$$ x := 1;\ \texttt{while true do}\ \{\ \texttt{wait}\ x;\ x := x/2\ \} \quad (\text{zeno}) $$

to output at time instant 2 all possible values in the valuation space (the expression `wait t` represents a wait call of t time units). More details about Zeno points can be consulted in [18,14].

In previous work [12,14], we pursued a *purely hybrid* semantics via a simple *deterministic functional* language HYBCORE, with while-loops for which we used Elgot's notion of iteration [8] as the underlying semantic structure. That resulted in a semantics of finite and infinite iteration, corresponding to a refined view of divergence. Specifically, we developed an operational semantics and also a denotational counterpart for HYBCORE. An important problem of that semantics, however, is that it involves infinitely many premises and requires calculating the total duration of programs, which precludes using such a semantics directly in implementations. Both the above examples (cruise controller) and (zeno) are affected by this issue. In the present paper we propose an *imperative* language with a denotational semantics similar to that of HYBCORE, but now provide a clear recipe for executing the semantics in a constructive manner.

**Overview and contributions.** Building on our previous work [14], we devise operational and denotational semantics suitable for implementation purposes, and provide a soundness and adequacy theorem relating these two styles of semantics. Results of this kind are well-established yardsticks in programming language theory [37], and beneficial from a practical perspective.
For example, small-step operational semantics naturally guides the implementation of compilers for

---PAGE_BREAK---

programming languages, whilst denotational semantics is more abstract, syntax-independent, and guides the study of program equivalence, of the underlying computational paradigm, and of its combination with other computational effects.

As mentioned before, in our previous work [14] we introduced a simple functional hybrid language HYBCORE with operational and denotational monad-based semantics. Here, we work with a similar imperative while-language, whose semantics is given in terms of a global state space of trajectories over $\mathbb{R}^n$, which is a commonly used carrier when working with solutions of systems of differential equations. A key principle we have taken as a basis for our new semantics is the capacity to determine behaviours of a program p by examining only some of its subterms. In order to illustrate this aspect, first note that our semantics does not reduce program terms p and initial states $\sigma$ (corresponding to valuation functions $\sigma: \mathcal{X} \to \mathbb{R}$ on program variables $\mathcal{X}$) to states $\sigma'$, as usual in classical programming. Instead it reduces triples p, $\sigma$, t of programs p, initial states $\sigma$ and time instants t to a state $\sigma'$; such a reduction can be read as "given $\sigma$ as the initial state, program p produces the state $\sigma'$ at time instant t". Then, the reduction process of p, $\sigma$, t to a state only examines fragments of p, or unfolds it when strictly necessary, depending on the time instant t. For example, the reduction of the (cruise controller) unfolds the underlying loop only twice for the time instant $1 + 1/2$ (the time instant $1 + 1/2$ occurs in the second iteration of the loop). This is directly reflected in our prototype implementation of an interactive evaluator of hybrid programs, LINCE.
It is available online and comes with a series of examples for the reader to explore (http://arcatools.org/lince). The plot in Figure 1 was automatically obtained from LINCE, by calling on the previously described reduction process for a predetermined sequence of time instants $t$.

For the denotational model, we build on our previous work [12,14] where hybrid programs are interpreted via a suitable monad $\mathbf{H}$, called the *hybrid monad* and capturing the computational effect of *hybridness*, following the seminal approach of Moggi [24,25]. Our present semantics is more lightweight and is naturally couched in terms of another monad $\mathbf{H}_S$, parametrized by a set $S$. In our case, as mentioned above, $S$ is the set of trajectories over $\mathbb{R}^n$, where $n$ is the number of available program variables $\mathcal{X}$. The latter monad is in fact parametrized in a formal sense [35] and comes out as an instance of a recently emerged generic construction [7]. A salient feature of that construction is that it can be instantiated in a constructive setting (without using any choice principles) – although we do not touch upon this aspect here, in our view this reinforces the fundamental nature of our semantics. Among various benefits of $\mathbf{H}_S$ over $\mathbf{H}$, the former monad enjoys a construction of an iteration operator (in the sense of Elgot [8]) as a *least fixpoint*, calculated as a limit of an $\omega$-chain of approximations, while for $\mathbf{H}$ the construction of the iteration operator is rather intricate and no similar characterization is available. A natural question that arises is: how are $\mathbf{H}$ and $\mathbf{H}_S$ related? We answer it by providing an instructive connection, which sheds light on the construction of $\mathbf{H}$, by explicitly identifying the semantic ingredients which have to be added to $\mathbf{H}_S$ to obtain $\mathbf{H}$. Additionally, this results in "backward compatibility" with our previous work.
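The least-fixpoint flavour of iteration can be previewed on a far simpler Elgot monad, the maybe monad of partial computations: there, the semantics of a loop is the limit of an ω-chain of approximations, the k-th of which gives up with ⊥ after k guard checks. A minimal illustrative sketch of that reading (ours, not the construction for the hybrid case itself):

```python
def while_approx(b, p, k):
    """k-th approximation of [[while b do { p }]] in the maybe monad:
    permits at most k guard checks before giving up with None (i.e. ⊥)."""
    def w(x):
        for _ in range(k):
            if not b(x):
                return x        # guard fails: the loop converges here
            x = p(x)
        return None             # approximation exhausted: ⊥
    return w

# while x < 10 do { x := x + 3 }, run from 0
b, p = (lambda x: x < 10), (lambda x: x + 3)
assert while_approx(b, p, 4)(0) is None   # chain not yet converged at stage 4
assert while_approx(b, p, 5)(0) == 12     # converged to the least fixed point
assert while_approx(b, p, 9)(0) == 12     # all later stages agree
```

The approximants form an increasing chain in the flat order on results, and the loop's semantics is their limit; the paper's point is that the analogous characterization is available for the parametrized hybrid monad but not for the original one.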
---PAGE_BREAK---

**Document structure.** After short preliminaries (Section 2), in Section 3 we introduce our while-language and its operational semantics. In Sections 4 and 5, we develop the denotational model for our language and connect it formally to the existing hybrid monad [12,14]. In Section 6, we prove a soundness and adequacy result for our operational semantics w.r.t. the developed model. Section 7 describes LINCE's architecture. Finally, Section 8 concludes and briefly discusses future work. Omitted proofs and examples are found in the extended version of the current paper [15].

## 2 Preliminaries

We assume familiarity with category theory [1]. By $\mathbb{R}$, $\mathbb{R}_+$ and $\bar{\mathbb{R}}_+$ we respectively denote the sets of reals, non-negative reals, and extended non-negative reals (i.e. $\mathbb{R}_+$ extended with the infinity value $\infty$). Let $[0, \bar{\mathbb{R}}_+\rangle$ denote the set of downsets of $\bar{\mathbb{R}}_+$ having the form $[0, d]$ ($d \in \mathbb{R}_+$) or the form $[0, d)$ ($d \in \bar{\mathbb{R}}_+$). We call the elements of the dependent sum $\sum_{I \in [0, \bar{\mathbb{R}}_+\rangle} X^I$ *trajectories* (over $X$). By $[0, \mathbb{R}_+]$, $[0, \mathbb{R}_+)$ and $[0, \bar{\mathbb{R}}_+)$ we denote the following corresponding subsets of $[0, \bar{\mathbb{R}}_+\rangle$: $\{[0, d] \mid d \in \mathbb{R}_+\}$, $\{[0, d) \mid d \in \mathbb{R}_+\}$ and $\{[0, d) \mid d \in \bar{\mathbb{R}}_+\}$. By $X \amalg Y$ we denote the disjoint union, which is the categorical coproduct in the category of sets, with the corresponding left and right injections $\mathrm{inl}: X \to X \amalg Y$, $\mathrm{inr}: Y \to X \amalg Y$. To reduce clutter, we often use the plain union $X \cup Y$ in place of $X \amalg Y$ if $X$ and $Y$ are disjoint by construction.

By $a \triangleleft b \triangleright c$ we denote the case distinction construct: $a$ if $b$ is true and $c$ otherwise. By $!$ we denote the empty function, i.e. a function with the empty domain.
For the sake of succinctness, we use the notation $e^t$ for the function application $e(t)$ with real-valued $t$.

## 3 An imperative hybrid while-language and its semantics

This section introduces the syntax and operational semantics of our language. We first fix a stock of $n$ variables $\mathcal{X} = \{x_1, \dots, x_n\}$ over which we build atomic programs, according to the grammar

$$
\begin{aligned}
At(\mathcal{X}) &\ni x := t \mid x'_1 = t_1, \dots, x'_n = t_n \ \texttt{for}\ t \\
LTerm(\mathcal{X}) &\ni r \mid r \cdot x \mid t+s
\end{aligned}
$$

where $x \in \mathcal{X}$, $r \in \mathbb{R}$, $t_i, t, s \in LTerm(\mathcal{X})$. An atomic program is thus either a classical assignment $x := t$ or a differential statement $x'_1 = t_1, \dots, x'_n = t_n$ for $t$. The latter reads as "run the system of differential equations $x'_1 = t_1, \dots, x'_n = t_n$ for $t$ time units". We then define the while-language via the grammar

$$ Prog(\mathcal{X}) \ni a \mid p; q \mid \texttt{if}\ b\ \texttt{then}\ p\ \texttt{else}\ q \mid \texttt{while}\ b\ \texttt{do}\ \{ p \} $$

where $p, q \in Prog(\mathcal{X})$, $a \in At(\mathcal{X})$ and $b$ is an element of the free Boolean algebra generated by the terms $t \leqslant s$ and $t \geqslant s$. The expression `wait t` (from the previous section) is encoded as the differential statement $x'_1 = 0, \dots, x'_n = 0$ for $t$.

---PAGE_BREAK---

*Remark 1.* The systems of differential equations that our language allows are always linear. This is not to say that we could not consider more expressive systems; in fact we could straightforwardly extend the language in this direction, for its semantics (presented below) is not impacted by specific choices of solvable systems of differential equations. But here we do not focus on such choices regarding the expressivity of continuous dynamics, and concentrate instead on a core hybrid semantics on which to study the fundamentals of hybrid programming.
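For concreteness, the two grammars can be transcribed into a small abstract syntax. The sketch below is ours: constructor names are hypothetical, and linear-combination terms and Boolean conditions are given a naive tuple encoding.

```python
from dataclasses import dataclass

@dataclass
class Assign:            # x := t
    var: str
    term: object         # a linear-combination term over X

@dataclass
class DiffFor:           # x1' = t1, ..., xn' = tn for t
    derivs: dict         # variable -> linear-combination term
    dur: object          # the duration term t

@dataclass
class Seq:               # p; q
    p: object
    q: object

@dataclass
class If:                # if b then p else q
    cond: object
    then: object
    els: object

@dataclass
class While:             # while b do { p }
    cond: object
    body: object

# the (cruise controller) program from Section 1 as a term
cruise = While(("true",),
               If(("<=", "v", 10.0),
                  DiffFor({"v": 1.0}, 1.0),
                  DiffFor({"v": -1.0}, 1.0)))
assert isinstance(cruise.body, If)
```

Such a representation is the natural input for the reduction rules of the operational semantics discussed next.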

In the sequel we abbreviate differential statements $x_1' = t_1, \dots, x_n' = t_n$ for $t$ as $\bar{x}' = \bar{t}$ for $t$, where $\bar{x}'$ and $\bar{t}$ abbreviate the corresponding vectors of variables $x_1' \dots x_n'$ and linear-combination terms $t_1 \dots t_n$. We call functions of type $\sigma: \mathcal{X} \to \mathbb{R}$ *environments*; they map variables to the respective valuations. We use the notation $\sigma\nabla[\bar{\nu}/\bar{x}]$ to denote the environment that maps each $x_i$ in $\bar{x}$ to $v_i$ in $\bar{\nu}$ and the rest of the variables in the same way as $\sigma$. Finally, we denote by $\phi_{\sigma}^{\bar{x}'=\bar{t}}: [0, \infty) \to \mathbb{R}^n$ the solution of a system of differential equations $\bar{x}' = \bar{t}$ with $\sigma$ determining the initial condition. When clear from context, we omit the superscript in $\phi_{\sigma}^{\bar{x}'=\bar{t}}$. For a linear-combination term $t$, the expression $t\sigma$ denotes the corresponding interpretation according to $\sigma$, and analogously for $b\sigma$ where $b$ is a Boolean expression.

We now introduce a small-step operational semantics for our language. Intuitively, the semantics establishes a set of rules for reducing a triple $\langle \text{program}, \text{environment}, \text{time instant} \rangle$ to an environment, via a *finite* sequence of reduction steps. The rules are presented in Figure 2. The terminal configuration $\langle skip, \sigma, t \rangle$ represents a successful end of a computation, which can then be fed into another computation (via rule (**seq-skip**→)). Contrastingly, $\langle stop, \sigma, t \rangle$ is a terminating configuration that inhibits the execution of subsequent computations. The latter is reflected in rules (**diff-stop**→) and (**seq-stop**→), which entail that, depending on the chosen time instant, we do not need to evaluate the whole program, but merely a part of it – consequently, infinite while-loops need not yield infinite reduction sequences (as explained in Remark 2).
Note that time $t$ is consumed when applying the rules (**diff-stop**→) and (**diff-seq**→), in correspondence to the duration of the differential statement at hand. The rules (**seq**) and (**seq-skip**→) correspond to the standard rules of operational semantics for while-languages over an imperative store [37].

*Remark 2.* Putatively infinite while-loops do not necessarily yield infinitely many reduction steps. Take for example the while-loop below, whose iterations always have duration 1.

$$ x := 0;\ \texttt{while true do}\ \{\ x := x + 1;\ \texttt{wait}\ 1\ \} \quad (1) $$

It yields a finite reduction sequence for the time instant $1/2$, as shown below:

$$
\begin{aligned}
& x := 0;\ \texttt{while true do}\ \{x := x + 1;\ \texttt{wait}\ 1\},\ \sigma,\ 1/2 \rightarrow \\
& \qquad \{\text{by the rules } (\mathbf{asg}{\to}) \text{ and } (\mathbf{seq\text{-}skip}{\to})\} \\
& \texttt{while true do}\ \{x := x + 1;\ \texttt{wait}\ 1\},\ \sigma\nabla[0/x],\ 1/2 \rightarrow \\
& \qquad \{\text{by the rule } (\mathbf{wh\text{-}true}{\to})\}
\end{aligned}
$$

---PAGE_BREAK---

Fig. 2: Small-step Operational Semantics

$$
\begin{aligned}
& x := x + 1;\ \texttt{wait}\ 1;\ \texttt{while true do}\ \{x := x + 1;\ \texttt{wait}\ 1\},\ \sigma\nabla[0/x],\ 1/2 \rightarrow \\
& \qquad \{\text{by the rules } (\mathbf{asg}{\to}) \text{ and } (\mathbf{seq\text{-}skip}{\to})\} \\
& \texttt{wait}\ 1;\ \texttt{while true do}\ \{x := x + 1;\ \texttt{wait}\ 1\},\ \sigma\nabla[0 + 1/x],\ 1/2 \rightarrow \\
& \qquad \{\text{by the rules } (\mathbf{diff\text{-}stop}{\to}) \text{ and } (\mathbf{seq\text{-}stop}{\to})\} \\
& \mathit{stop},\ \sigma\nabla[0 + 1/x],\ 0
\end{aligned}
$$

The gist is that to evaluate program (1) at time instant $1/2$, one only needs to unfold the underlying loop until surpassing $1/2$ in terms of execution time. Note that if the wait statement were removed from the program, the reduction sequence would not terminate, intuitively because all iterations would be instantaneous and thus the total execution time of the program would never reach $1/2$.

The following theorem entails that our semantics is deterministic, which is instrumental for our implementation.

**Theorem 1.** For every program $p$, environment $\sigma$, and time instant $t$ there is at most one applicable reduction rule.

Let $\to^*$ be the transitive closure of the reduction relation $\to$ presented above.

**Corollary 1.** For every program term $p$, environments $\sigma, \sigma', \sigma''$, time instants $t, t', t''$, and termination flags $s, s' \in \{\mathit{skip}, \mathit{stop}\}$, if $p, \sigma, t \to^* s, \sigma', t'$ and $p, \sigma, t \to^* s', \sigma'', t''$, then the equations $s = s'$, $\sigma' = \sigma''$ and $t' = t''$ must hold.

*Proof.* Follows by induction on the number of reduction steps and Theorem 1. $\square$

As alluded to above, the operational semantics treats time as a resource. This is formalised below.
---PAGE_BREAK---

**Proposition 1.** For all program terms $p$ and $q$, environments $\sigma$ and $\sigma'$, and time instants $t, t'$ and $s$, if $p, \sigma, t \to q, \sigma', t'$ then $p, \sigma, t+s \to q, \sigma', t'+s$; and if $p, \sigma, t \to \mathit{skip}, \sigma', t'$ then $p, \sigma, t+s \to \mathit{skip}, \sigma', t'+s$.

# 4 Towards Denotational Semantics: The Hybrid Monad

A mainstream subsuming paradigm in denotational semantics is due to Moggi [24,25], who proposed to identify a computational effect of interest as a monad, around which the denotational semantics is built using standard generic mechanisms, prominently provided by category theory. In this section we recall necessary notions and results, motivated by this approach, to prepare the ground for our main constructions in the next section.

**Definition 1 (Monad).** A monad $\mathbf{T}$ (on the category of sets and functions) is given by a triple $(T, \eta, (-)^*)$, consisting of an endomap $T$ over the class of all sets, together with a set-indexed class of maps $\eta_X: X \to TX$ and a so-called Kleisli lifting sending each $f: X \to TY$ to $f^*: TX \to TY$ and obeying the monad laws: $\eta^* = \mathrm{id}$, $f^* \cdot \eta = f$, $(f^* \cdot g)^* = f^* \cdot g^*$ (it follows from this definition that $T$ extends to a functor and $\eta$ to a natural transformation).

A monad morphism $\theta: \mathbf{T} \to \mathbf{S}$ from $(T, \eta^{\mathbf{T}}, (-)^{*_{\mathbf{T}}})$ to $(S, \eta^{\mathbf{S}}, (-)^{*_{\mathbf{S}}})$ is a natural transformation $\theta: T \to S$ such that $\theta \cdot \eta^{\mathbf{T}} = \eta^{\mathbf{S}}$ and $\theta \cdot f^{*_{\mathbf{T}}} = (\theta \cdot f)^{*_{\mathbf{S}}} \cdot \theta$.

We will continue to use bold capitals (e.g. $\mathbf{T}$) for monads over the corresponding endofunctors written as capital Romans (e.g. $T$).

In order to interpret while-loops one needs additional structure on the monad.

**Definition 2 (Elgot Monad).** A monad $\mathbf{T}$ is called Elgot if it is equipped with an iteration operator $(-)^{\dagger}$ that sends each $f: X \to T(Y \amalg X)$ to $f^{\dagger}: X \to TY$ in such a way that certain established axioms of iteration are satisfied [2,16].

Monad morphisms between Elgot monads are additionally required to preserve iteration: $\theta \cdot f^{\dagger_{\mathbf{T}}} = (\theta \cdot f)^{\dagger_{\mathbf{S}}}$ for $\theta: \mathbf{T} \to \mathbf{S}$, $f: X \to T(Y \amalg X)$.

For a monad $\mathbf{T}$, a map $f: X \to TY$, called a Kleisli map, is roughly to be regarded as a semantics of a program $p$, with $X$ as the semantics of the input, and $Y$ as the semantics of the output. For example, with $T$ being the maybe monad $(-) \amalg \{\perp\}$, we obtain semantics of programs as partial functions. Let us record this example in more detail for further reference.

*Example 1 (Maybe Monad M).* The maybe monad is determined by the following data: $MX = X \amalg \{\perp\}$, the unit is the left injection $\mathrm{inl}: X \to X \amalg \{\perp\}$ and, given $f: X \to Y \amalg \{\perp\}$, $f^*$ is equal to the copairing $[f, \mathrm{inr}]: X \amalg \{\perp\} \to Y \amalg \{\perp\}$.

It follows by general considerations (enrichment of the category of Kleisli maps over complete partial orders) that $\mathbf{M}$ is an Elgot monad with the following iteration operator $(-)^{\flat}$: given $f: X \to (Y \amalg X) \amalg \{\perp\}$ and $x_0 \in X$, let $x_0, x_1, \dots$ be the longest (finite or infinite) sequence over $X$ constructed inductively in such a way that $f(x_i) = \mathrm{inl}(\mathrm{inr}\, x_{i+1})$. Now, $f^{\flat}(x_0) = \mathrm{inr} \perp$ if the sequence is infinite or

---PAGE_BREAK---
+ +The computational effect of *hybridness* can also be captured by a monad, called *hybrid monad* [12,14], which we recall next (in a slightly different but equivalent form). To that end, we also need to recall *Minkowski addition* for subsets of the set $\mathbb{R}_+$ of extended non-negative reals (see Section 2): $A + B = \{a + b \mid a \in A, b \in B\}$, e.g. $[a, b] + [c, d] = [a + c, b + d]$ and $[a, b] + [c, d) = [a + c, b + d)$. + +**Definition 3 (Hybrid Monad H).** The hybrid monad **H** is defined as follows. + +$$ +\begin{align*} +-HX &= \sum_{I \in [0, \bar{R}_+]} X^I \uplus \sum_{I \in [0, \bar{R}_+]} X^I, \text{ i.e. it is a set of trajectories valued on } X \\ +&\text{and with the domain downclosed. For any } p = \text{inj}\langle I, e \rangle \in HX \text{ with } \text{inr} \in \{\text{inl}, \\ +&\text{inr}\}, \text{ let us use the notation } p_d = I, p_e = e, \text{ the former being the duration of} \\ +&\text{the trajectory and the latter the trajectory itself. Let also } \varepsilon = \langle \emptyset, ! \rangle. +\end{align*} +$$ + +- $\eta(x) = \text{inl}\langle[0,0], \lambda t. x\rangle$, i.e. $\eta(x)$ is a trajectory of duration 0 that returns $x$. + +- given $f: X \to HY$, we define $f^*: HX \to HY$ via the following clauses: + +$$ +\begin{align*} +f^*(\text{inl}\langle I, e \rangle) &= \text{inj}\langle I + J, \lambda t. (f(e^t))_e^0 \rangle \quad \triangleleft t < d \triangleright (f(e^d))_e^{t-d} \\ +&\qquad \text{if } I' = I = [0, d] \text{ for some } d, f(e^d) = \text{inj}\langle J, e' \rangle +\end{align*} +$$ + +$$ +\begin{align*} +f^*(\mathrm{inl}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t. (f(e^t))_e^0 \rangle & \text{if } I' \neq I \\ +f^*(\mathrm{inr}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t. (f(e^t))_e^0 \rangle +\end{align*} +$$ + +where $I' = \bigcup \{[0,t] \subseteq I | \forall s \in [0,t]. f(e^s) \neq \mathrm{inr} \varepsilon\}$ and $\mathrm{inj} \in \{\mathrm{inl}, \mathrm{inr}\}$. 

The definition of the hybrid monad $\mathbf{H}$ is somewhat intricate, so let us complement it with some explanations (details and further intuitions about the hybrid monad can also be consulted in [12]). The set $HX$ comprises three types of trajectories, representing different kinds of hybrid computation:

- (closed) convergent: $\mathrm{inl}\langle[0,d],e\rangle \in HX$ (e.g. instant termination $\eta(x)$);

- open divergent: $\mathrm{inr}\langle[0,d),e\rangle \in HX$ (e.g. instant divergence $\mathrm{inr}\,\varepsilon$ or a trajectory $[0,\infty) \rightarrow X$, which represents a computation that runs ad infinitum);

- closed divergent: $\mathrm{inr}\langle[0,d],e\rangle \in HX$ (representing computations that start to diverge precisely after the time instant $d$).

The Kleisli lifting $f^*$ works as follows: for a given trajectory $\mathrm{inj}\langle I, e \rangle$, we first calculate the largest interval $I' \subseteq I$ on which the trajectory $\lambda t \in I'.\, f(e^t)$ does not instantly diverge (i.e. $f(e^t) \neq \mathrm{inr}\, \varepsilon$) throughout; hence $I'$ is either $[0, d']$ or $[0, d')$ for some $d'$. Now, the first clause in the definition of $f^*$ corresponds to the successful composition scenario: the argument trajectory $\langle I, e \rangle$ is convergent, and composing $f$ with $e$ as described in the definition of $I'$ does not yield divergence all over $I$. In that case, we essentially concatenate $\langle I, e \rangle$ with $f(e^d)$, the latter being the trajectory computed by $f$ at the last point of $e$. The remaining two clauses correspond to various flavours of divergence, including divergence of the input $(\mathrm{inr}\langle I, e\rangle)$ and divergences occurring along $f \cdot e$. Incidentally, this explains how closed divergent trajectories may arise: if $I' = [0, d']$ and $d'$ is properly smaller than $d$, then we diverge precisely *after* $d'$, which is possible e.g.
if the program behind $f$ continuously checks a condition which did not fail up until $d'$.

---PAGE_BREAK---

# 5 Deconstructing the Hybrid Monad

As mentioned in the introduction, in [14] we used $\mathbf{H}$ for giving semantics to a functional language HYBCORE, whose programs are interpreted as morphisms of type $X \to HY$. Here, we are dealing with an imperative language, which from a semantic point of view amounts to fixing a type of states $S$, shared between all programs; the semantics of a program is thus restricted to morphisms of type $S \to HS$. As explained next, this allows us to make do with a simpler monad $\mathbf{H}_S$, globally parametrized by $S$. The new monad $\mathbf{H}_S$ has the property that $H_S S$ is naturally isomorphic to $HS$. Apart from its simplicity (relative to $\mathbf{H}$), the new monad enjoys further benefits; specifically, $\mathbf{H}_S$ is mathematically a better-behaved structure: e.g., in contrast to $\mathbf{H}$, Elgot iteration on $\mathbf{H}_S$ is constructed as a least fixed point. Factoring the denotational semantics through $\mathbf{H}_S$ thus allows us to bridge the gap to the operational semantics given in Section 3, and facilitates the soundness and adequacy proof in the forthcoming Section 6.

In order to define $\mathbf{H}_S$, it is convenient to take a slightly broader perspective. We will also need to make a detour through the topic of ordered monoid modules with certain completeness properties, so that we can characterise iteration on $\mathbf{H}_S$ as a least fixed point.

**Definition 4 (Monoid Module, Generalized Writer Monad [14]).** Given a (not necessarily commutative) monoid $(\mathbb{M}, +, 0)$, a monoid module is a set $\mathbb{E}$ equipped with a map $\triangleright: \mathbb{M} \times \mathbb{E} \to \mathbb{E}$ (monoid action), subject to the laws $0 \triangleright e = e$ and $(m+n) \triangleright e = m \triangleright (n \triangleright e)$.
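The two laws of Definition 4 are easy to sanity-check on a concrete instance, e.g. the pair $(\mathbb{R}_+, \mathbb{R}_+)$ where both the monoid operation and the action are addition (it reappears as Example 2 below). A small Python check, with the sample values chosen by us (exact binary fractions, so float addition is exact here):

```python
# Sanity check of the monoid-module laws of Definition 4 on the concrete
# pair (R+, R+): monoid operation and action are both addition, unit 0.
import itertools

zero = 0.0
plus = lambda m, n: m + n          # monoid operation on M
act  = lambda m, e: m + e          # monoid action |> : M x E -> E

samples = [0.0, 0.25, 1.0, 3.5, 10.0]
for m, n, e in itertools.product(samples, repeat=3):
    assert act(zero, e) == e                           # 0 |> e = e
    assert act(plus(m, n), e) == act(m, act(n, e))     # (m+n) |> e = m |> (n |> e)
print("module laws hold on all samples")
```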
Every monoid-module pair $(\mathbb{M}, \mathbb{E})$ induces a generalized writer monad $\mathbf{T} = (T, \eta, (-)^*)$ with $T = \mathbb{M} \times (-) \cup \mathbb{E}$, $\eta_X(x) = \langle 0, x \rangle$, and

$$f^*(m, x) = (m + n, y) \quad \text{where} \quad m \in \mathbb{M},\ x \in X,\ f(x) = \langle n, y \rangle \in \mathbb{M} \times Y$$

$$f^*(m, x) = m \triangleright e \quad \text{where} \quad m \in \mathbb{M},\ x \in X,\ f(x) = e \in \mathbb{E}$$

$$f^*(e) = e \quad \text{where} \quad e \in \mathbb{E}$$

This generalizes the writer monad ($\mathbb{E} = \emptyset$) and the exception monad ($\mathbb{M} = 1$).

*Example 2.* A simple motivating example of a monoid-module pair $(\mathbb{M}, \mathbb{E})$ is the pair $(\mathbb{R}_+, \mathbb{R}_+)$, where the monoid operation is addition with $0$ as the unit and the monoid action is also addition.

More specifically, we are interested in ordered monoids and (conservatively) complete monoid modules. These are defined as follows.

**Definition 5 (Ordered Monoids, (Conservatively) Complete Monoid Modules [7]).** We call a monoid $(\mathbb{M}, +, 0)$ an ordered monoid if it is equipped with a partial order $\leq$ such that $0$ is the least element of this order and $+$ is right-monotone (but not necessarily left-monotone).

An ordered $\mathbb{M}$-module w.r.t. an ordered monoid $(\mathbb{M}, +, 0, \leq)$ is an $\mathbb{M}$-module $(\mathbb{E}, \triangleright)$ together with a partial order $\sqsubseteq$ and a least element $\perp$, such that $\triangleright$ is

---PAGE_BREAK---

monotone on the right and $(- \triangleright \perp)$ is monotone, i.e.

$$
\overline{\perp \sqsubseteq x} \qquad \frac{x \sqsubseteq y}{a \triangleright x \sqsubseteq a \triangleright y} \qquad \frac{a \le b}{a \triangleright \perp \sqsubseteq b \triangleright \perp}
$$

We call the last property restricted left monotonicity.
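The Kleisli clauses of the generalized writer monad above can be run directly. A minimal Python sketch with our own encoding: elements of $TX$ are tagged tuples, `("val", m, x)` for $\mathbb{M} \times X$ and `("exc", e)` for $\mathbb{E}$, instantiated with the pair $(\mathbb{R}_+, \mathbb{R}_+)$ of Example 2:

```python
# Generalized writer monad for a monoid-module pair, here (R+, R+) with
# addition as both operation and action; the "val"/"exc" tags are ours.
def unit(x):
    return ("val", 0.0, x)                 # eta(x) = <0, x>

def bind(p, f):
    if p[0] == "exc":                      # f*(e) = e
        return p
    _, m, x = p
    q = f(x)
    if q[0] == "val":                      # f*(m, x) = <m + n, y>, f(x) = <n, y>
        _, n, y = q
        return ("val", m + n, y)
    return ("exc", m + q[1])               # f*(m, x) = m |> e, f(x) = e
```

Note how the middle clause shows the role of the action: a raised "exception" absorbs the writer output accumulated so far, e.g. `bind(("val", 1.0, 3), lambda x: ("exc", 0.5))` yields `("exc", 1.5)`.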
An ordered $\mathbb{M}$-module is $\omega$-complete if for every $\omega$-chain $s_1 \sqsubseteq s_2 \sqsubseteq \dots$ in $\mathbb{E}$ there is a least upper bound $\bigsqcup_i s_i$ and $\triangleright$ is continuous on the right, i.e.

$$
\overline{\forall i.\ s_i \sqsubseteq \bigsqcup_i s_i} \qquad \frac{\forall i.\ s_i \sqsubseteq x}{\bigsqcup_i s_i \sqsubseteq x} \qquad \overline{a \triangleright \bigsqcup_i s_i \sqsubseteq \bigsqcup_i a \triangleright s_i}
$$

(the law $\bigsqcup_i a \triangleright s_i \sqsubseteq a \triangleright \bigsqcup_i s_i$ is derivable). Such an $\mathbb{M}$-module is conservatively complete if additionally, for every $\omega$-chain $a_1 \le a_2 \le \dots$ in $\mathbb{M}$ such that the least upper bound $\bigvee_i a_i$ exists, $(\bigvee_i a_i) \triangleright \perp = \bigsqcup_i (a_i \triangleright \perp)$.

A homomorphism $h: \mathbb{E} \to \mathbb{F}$ of (conservatively) complete monoid $\mathbb{M}$-modules is required to be monotone and structure-preserving in the following sense: $h(\perp) = \perp$, $h(a \triangleright x) = a \triangleright h(x)$, $h(\bigsqcup_i x_i) = \bigsqcup_i h(x_i)$.

The completeness requirement for $\mathbb{M}$-modules has a standard motivation coming from domain theory, where $\sqsubseteq$ is regarded as an *information order* and completeness is needed to ensure that the relevant semantic domain can accommodate infinite behaviours. The conservativity requirement additionally ensures that the least upper bounds which exist in $\mathbb{M}$ agree with those in $\mathbb{E}$. Our main example is as follows (we will use it for building $\mathbf{H}_S$ and its iteration operator).
**Definition 6 (Monoid Module of Trajectories).** The ordered monoid of finite open trajectories $(\mathrm{Trj}_S, \mathbin{\hat{\wedge}}, \langle\emptyset, !\rangle, \leqslant)$ over a given set $S$ is defined as follows: $\mathrm{Trj}_S = \sum_{I \in [0, \bar{R}_+)} S^I$; the unit is the empty trajectory $\varepsilon = \langle\emptyset, !\rangle$; the monoid operation is concatenation of trajectories $\mathbin{\hat{\wedge}}$, defined as follows:

$$
\langle[0, d_1), e_1\rangle \mathbin{\hat{\wedge}} \langle[0, d_2), e_2\rangle = \langle[0, d_1 + d_2),\ \lambda t.\ e_1^t \mathrel{\triangleleft} t < d_1 \mathrel{\triangleright} e_2^{t-d_1}\rangle.
$$

The relation $\leqslant$ is defined as follows: $\langle[0, d_1), e_1\rangle \leqslant \langle[0, d_2), e_2\rangle$ if $d_1 \leqslant d_2$ and $e_1^t = e_2^t$ for every $t \in [0, d_1)$. We can additionally regard both sets $\sum_{I \in [0, \bar{R}_+)} S^I$ and $\sum_{I \in [0, \bar{R}_+]} S^I$ as $\mathrm{Trj}_S$-modules, by defining the monoid action $\triangleright$ also as concatenation of trajectories and by equipping these sets with the order $\sqsubseteq$: $\langle I_1, e_1\rangle \sqsubseteq \langle I_2, e_2\rangle$ if $I_1 \subseteq I_2$ and $e_1^t = e_2^t$ for all $t \in I_1$.

Consider the following functors:

$$
H'_S X = \sum_{I \in [0, \bar{R}_+)} S^I \times X \cup \sum_{I \in [0, \bar{R}_+)} S^I \tag{2}
$$

$$
H_S X = \sum_{I \in [0, \bar{R}_+)} S^I \times X \cup \sum_{I \in [0, \bar{R}_+]} S^I \tag{3}
$$

Both of them extend to monads $\mathbf{H}'_S$ and $\mathbf{H}_S$, as they are instances of Definition 4. Moreover, it is laborious but straightforward to prove that both $H'_S X$ and $H_S X$ are conservatively complete $\mathrm{Trj}_S$-modules on $X$ [7], i.e. conservatively complete

---PAGE_BREAK---

$\mathrm{Trj}_S$-modules equipped with distinguished maps $\eta: X \to H'_S X$ and $\eta: X \to H_S X$. In each case $\eta$ sends $x \in X$ to $\langle \varepsilon, x \rangle$.
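Concatenation and the two orders of Definition 6 can also be prototyped directly. In the sketch below (our simplification), a finite trajectory $\langle [0,d), e \rangle$ is a pair `(d, e)` of a duration and a Python function, and the prefix order is checked only on finitely many sample points:

```python
# Trajectory concatenation and a sampled prefix order, after Definition 6.
def concat(p, q):
    # <[0,d1), e1> ^ <[0,d2), e2> =
    #   <[0, d1+d2), lambda t. e1(t) if t < d1 else e2(t - d1)>
    (d1, e1), (d2, e2) = p, q
    return (d1 + d2, lambda t: e1(t) if t < d1 else e2(t - d1))

def leq(p, q, n=100):
    # p <= q iff p lasts no longer than q and both agree on p's domain;
    # agreement is only checked on n sample points of [0, d1) here.
    (d1, e1), (d2, e2) = p, q
    if d1 > d2:
        return False
    return all(e1(i * d1 / n) == e2(i * d1 / n) for i in range(n) if d1 > 0)
```

With this encoding, any trajectory is a prefix of its concatenations: for `p = (1.0, lambda t: t)` and `q = (2.0, lambda t: 10 + t)`, `leq(p, concat(p, q))` holds while `leq(concat(p, q), p)` does not.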
The partial order on $H'_S X$ (which we will use for obtaining the least upper bound of a certain sequence of approximations) is given by the clauses below and relies on the previous order $\leqslant$ on trajectories:

$$
\frac{\langle I, e \rangle \leqslant \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e', x \rangle}
\qquad
\frac{\langle I, e \rangle \leqslant \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e' \rangle}
$$

The monad given by (2) admits a sharp characterization, which is an instance of a general result [7]. In more detail:

**Proposition 2.** The pair $(H'_S X, \eta)$ is a free conservatively complete $\mathrm{Trj}_S$-module on $X$, i.e. for every conservatively complete $\mathrm{Trj}_S$-module $\mathbb{E}$ and every map $f: X \to \mathbb{E}$, there is a unique homomorphism $\hat{f}: H'_S X \to \mathbb{E}$ such that $\hat{f} \cdot \eta = f$.

Intuitively, Proposition 2 ensures that $H'_S X$ is the least conservatively complete $\mathrm{Trj}_S$-module generated by $X$. This characterization entails a construction of an iteration operator on $\mathbf{H}'_S$ as a least fixpoint. This, in fact, also transfers to $\mathbf{H}_S$ (as detailed in the proof of the following theorem).

**Theorem 2.** Both $\mathbf{H}'_S$ and $\mathbf{H}_S$ are Elgot monads, for which $f^\dagger$ is computed as the least fixed point of the $\omega$-continuous endomap $g \mapsto [\eta,g]^* \cdot f$ over the function spaces $X \to H'_S Y$ and $X \to H_S Y$, respectively.

In the remainder of this section, we formally connect the monad $\mathbf{H}_S$ with the monad $\mathbf{H}$, the latter introduced in our previous work and used for providing a semantics to the functional language HYBCORE. (In the following section we provide a semantics for the current imperative language via the monad $\mathbf{H}_S$.) Specifically, we show how to build $\mathbf{H}$ from $\mathbf{H}_S$ by considering additional semantic ingredients on top of the latter.
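Theorem 2's least-fixed-point characterisation has a familiar computational reading: $f^\dagger$ is the limit of the Kleene chain of $g \mapsto [\eta, g]^* \cdot f$, whose $n$-th approximant runs the loop body at most $n$ times. The sketch below illustrates this for the simplest Elgot monad in the vicinity, the maybe monad $(-) \uplus \{\perp\}$ of Example 1; the tuple encoding, the names and the fuel cutoff are ours.

```python
# Elgot iteration f† over the maybe monad: f : X -> (Y + X) + {bot}, encoded
# as ("inl", y) (exit), ("inr", x) (continue) or BOT (divergence). The fuel
# bound plays the role of the n-th Kleene approximant of g |-> [eta, g]* . f.
BOT = ("bot",)

def dagger(f, x, fuel=1000):
    for _ in range(fuel):
        r = f(x)
        if r == BOT:
            return BOT            # propagate divergence
        tag, v = r
        if tag == "inl":
            return ("inl", v)     # exit with a final answer
        x = v                     # continue: feed the new state back into f
    return BOT                    # approximant chain has not yet stabilised

# countdown loop: exit once the counter is non-positive
countdown = lambda n: ("inl", n) if n <= 0 else ("inr", n - 1)
```

Here `dagger(countdown, 5)` stabilises after six unfoldings, whereas a body that never exits, e.g. `lambda n: ("inr", n)`, yields `BOT` at every approximant, matching least-fixed-point divergence.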
Let us subsequently write $\eta^S$, $(-)^{*_S}$ and $(-)^{\dagger_S}$ for the unit, the Kleisli lifting and the Elgot iteration of $\mathbf{H}_S$. Note that $S, X \mapsto H_S X$ is a parametrized monad in the sense of Uustalu [35]; in particular, $H_S$ is functorial in $S$ and for every $f: S \to S'$, $Hf: \mathbf{H}_S \to \mathbf{H}_{S'}$ is a monad morphism.

Then we introduce the following technical natural transformations $\iota: H_S X \to X \uplus (S \uplus \{\perp\})$ and $\tau: H_{S \uplus Y} X \to H_S X$. First, let us define $\iota$:

$$
\iota\langle I, e, x\rangle = \begin{cases} \operatorname{inr} \operatorname{inl} e^0, & \text{if } I \neq \emptyset \\ \operatorname{inl} x, & \text{otherwise} \end{cases} \qquad \iota\langle I, e\rangle = \begin{cases} \operatorname{inr} \operatorname{inl} e^0, & \text{if } I \neq \emptyset \\ \operatorname{inr} \operatorname{inr} \perp, & \text{otherwise} \end{cases}
$$

In words: $\iota$ returns the initial point for non-zero-length trajectories, and otherwise returns either an accompanying value from $X$ or $\perp$, depending on whether the given trajectory is convergent or divergent. The functor $(-) \uplus E$ for every $E$ extends to a monad, called the *exception monad*. The following is easy to show for $\iota$.

**Lemma 1.** For every $S$, $\iota: \mathbf{H}_S \to (-) \uplus (S \uplus \{\perp\})$ is a monad morphism.

Next we define $\tau : H_{S \uplus Y} X \to H_S X$:

$$
\tau\langle I, e, x\rangle = \begin{cases} \langle I, e', x \rangle, & \text{if } I = I' \\ \langle I', e' \rangle, & \text{otherwise} \end{cases} \qquad \tau\langle I, e\rangle = \langle I', e' \rangle
$$

where $\langle I', e' \rangle$ is the largest trajectory such that $e^t = \operatorname{inl} e'^t$ for all $t \in I'$.

---PAGE_BREAK---

$$
\begin{align*}
[\mathbf{x} := \mathbf{t}](\sigma) &= \eta(\sigma[\mathbf{t}\sigma/\mathbf{x}]) \\
[\bar{\mathbf{x}}' = \bar{u} \text{ for } \mathbf{t}](\sigma) &= \langle [0, \mathbf{t}\sigma),\ \lambda t.\ \sigma[\phi_{\sigma}(t)/\bar{\mathbf{x}}],\ \sigma[\phi_{\sigma}(\mathbf{t}\sigma)/\bar{\mathbf{x}}] \rangle \\
[\mathbf{p}; \mathbf{q}](\sigma) &= [\mathbf{q}]^*([\mathbf{p}](\sigma)) \\
[\texttt{if } \mathbf{b} \texttt{ then } \mathbf{p} \texttt{ else } \mathbf{q}](\sigma) &= [\mathbf{p}](\sigma) \mathrel{\triangleleft} \mathbf{b}\sigma \mathrel{\triangleright} [\mathbf{q}](\sigma) \\
[\texttt{while } \mathbf{b} \texttt{ do } \{\mathbf{p}\}](\sigma) &= (\lambda \sigma.\ (\hat{H} \operatorname{inr})([\mathbf{p}](\sigma)) \mathrel{\triangleleft} \mathbf{b}\sigma \mathrel{\triangleright} \eta(\operatorname{inl} \sigma))^\dagger(\sigma)
\end{align*}
$$

Fig. 3: Denotational semantics.

**Lemma 2.** For all $S$ and $Y$, $\tau: \mathbf{H}_{S \uplus Y} \to \mathbf{H}_S$ is a monad morphism.

We now arrive at the main result of this section.

**Theorem 3.** The correspondence $S \mapsto H_S S$ extends to an Elgot monad as follows:

$$
\begin{align*}
\eta(x \in S) &= \eta^S(x), \\
(f: X \rightarrow H_S S)^* &= \bigl(H_X X \xrightarrow{\;H_{\iota' \cdot f}\, \mathrm{id}\;} H_{S\uplus\{\perp\}} X \xrightarrow{\;\tau\;} H_S X \xrightarrow{\;f^{*_S}\;} H_S S\bigr), \\
(f: X \rightarrow H_{S\uplus X}(S \uplus X))^{\dagger} &= \bigl(X \xrightarrow{\;f^{\dagger_{S\uplus X}}\;} H_{S\uplus X} S \xrightarrow{\;H_{[\mathrm{inl},\,(\iota' \cdot f)^\sharp]}\, \mathrm{id}\;} H_{S\uplus\{\perp\}} S \xrightarrow{\;\tau\;} H_S S\bigr),
\end{align*}
$$

where $\iota' = [\mathrm{inl}, \mathrm{id}] \cdot \iota : H_S S \to S \uplus \{\perp\}$ and $(-)^\sharp : (X \to (S \uplus X) \uplus \{\perp\}) \to (X \to S \uplus \{\perp\})$ is the iteration operator of the maybe monad $(-) \uplus \{\perp\}$ (as in Example 1). Moreover, the monad thus defined is isomorphic to $\mathbf{H}$.

*Proof (Sketch).* It is first verified that the monad axioms are satisfied, using abstract properties of $\iota$ and $\tau$, mainly provided by Lemmas 1 and 2.
Then the isomorphism $\theta: H_S S \cong HS$ is defined as expected: $\theta\langle[0, d), e, x\rangle = \mathrm{inl}\langle[0, d], \hat{e}\rangle$ where $\hat{e}^t = e^t$ for $t \in [0, d)$ and $\hat{e}^d = x$; and $\theta\langle I, e\rangle = \mathrm{inr}\langle I, e\rangle$. It is easy to see that $\theta$ respects the unit. The fact that $\theta$ respects Kleisli lifting amounts to a (tedious) verification by case distinction. Checking the formula for $(-)^\dagger$ amounts to transferring the definition of $(-)^\dagger$, as defined in previous work [13], along $\theta$. See the full proof in [15]. □

# 6 Soundness and Adequacy

Let us start this section by providing a denotational semantics to our language using the results of the previous section. We will then provide a soundness and adequacy result that formally connects the thus established denotational semantics with the operational semantics presented in Section 3.

First, consider the monad in (3) and fix $S = \mathbb{R}^{\mathcal{X}}$. We denote the obtained instance of $H_S$ by $\hat{H}$. Intuitively, we interpret a program $p$ as a map $[\![p]\!] : S \to \hat{H}S$ which, given an environment (a map from variables to values), returns a trajectory over $S$. The definition of $[\![p]\!]$ is inductive over the structure of $p$ and is given in Figure 3.

---PAGE_BREAK---

In order to establish soundness and adequacy between the small-step operational semantics and the denotational semantics, we will use an auxiliary device. Namely, we will introduce a *big-step* operational semantics that will serve as a midpoint between the two previously introduced semantics. We will show that the small-step semantics is equivalent to the big-step one and then establish soundness and adequacy between the big-step semantics and the denotational one. The desired result then follows by transitivity. The big-step rules are presented in Figure 4 and follow the same reasoning as the small-step ones.
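Before relating the two semantics, it may help to see the clauses of Figure 3 in executable form. The following is a drastically simplified sketch, not LINCE itself: environments are Python dicts, only constant-derivative flows are covered (so their solutions are linear and need no ODE solver), a run returns a finite list of `(duration, flow)` segments together with a final environment, `None` stands in for divergence, and while-loops are fuel-bounded rather than computed as genuine least fixed points.

```python
# Toy interpreter for the clauses of Fig. 3 over a simplified trajectory
# monad: a run yields (trace, env) or None (divergence); programs are tuples.
def run(p, env, fuel=1000):
    kind = p[0]
    if kind == "assign":                   # [x := t](env): instant update
        _, x, t = p
        return ([], {**env, x: t(env)})
    if kind == "flow":                     # [x' = c for t](env): linear flow
        _, x, c, t = p
        d, e0 = t(env), dict(env)
        seg = (d, lambda s, x=x, c=c, e0=e0: {**e0, x: e0[x] + c * s})
        return ([seg], {**env, x: env[x] + c * d})
    if kind == "seq":                      # [p; q] = [q]* . [p]: concatenate
        r = run(p[1], env, fuel)
        if r is None:
            return None
        r2 = run(p[2], r[1], fuel)
        return None if r2 is None else (r[0] + r2[0], r2[1])
    if kind == "if":                       # choose a branch on b(env)
        _, b, p1, p2 = p
        return run(p1 if b(env) else p2, env, fuel)
    if kind == "while":                    # Elgot-style unfolding, fuel-bounded
        _, b, body = p
        trace = []
        for _ in range(fuel):
            if not b(env):
                return (trace, env)
            r = run(body, env, fuel)
            if r is None:
                return None
            trace, env = trace + r[0], r[1]
        return None                        # treated as divergence
    raise ValueError(kind)
```

For instance, `x := 0; while x < 3 { x := x + 1 }` terminates with `x = 3` and an empty trace, while `x' = 2 for 1.5` from `x = 1` produces one segment whose flow passes through `x = 3.0` at time 1.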
The expression $p, \sigma, t \Downarrow r, \sigma'$ means that $p$ paired with $\sigma$ evaluates to $r, \sigma'$ at time instant $t$.

Fig. 4: Big-step Operational Semantics

Next, we need the following result to formally connect both styles of operational semantics.

**Lemma 3.** *Given a program $p$, an environment $\sigma$ and a time instant $t$:*

1. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow \mathit{skip}, \sigma''$ then $p, \sigma, t \Downarrow \mathit{skip}, \sigma''$;

2. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow \mathit{stop}, \sigma''$ then $p, \sigma, t \Downarrow \mathit{stop}, \sigma''$.

*Proof.* The proof follows by induction over the derivation of the small-step relation. □

**Theorem 4.** *The small-step semantics and the big-step semantics are related as follows. Given a program $p$, an environment $\sigma$ and a time instant $t$:*

---PAGE_BREAK---

1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{skip}, \sigma', 0$;

2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{stop}, \sigma', 0$.

*Proof.* The right-to-left direction is obtained by induction over the length of the small-step reduction sequence, using Lemma 3. The left-to-right direction follows by induction over the proof of the big-step judgement, using Proposition 1. $\square$

Finally, we can connect the operational and the denotational semantics in the expected way.

**Theorem 5 (Soundness and Adequacy).** *Given a program $p$, an environment $\sigma$ and a time instant $t$:*

1. $p, \sigma, t \to^\star \mathit{skip}, \sigma', 0$ iff $[\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^{\mathcal{X}}, \sigma')$;

2.
$p, \sigma, t \to^\star \mathit{stop}, \sigma', 0$ iff either $[\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'')$ or $[\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}$, in either case with $t' > t$ and $\mathbf{h}(t) = \sigma'$.

Here, "soundness" corresponds to the left-to-right directions of the equivalences and "adequacy" to the right-to-left ones.

*Proof.* By Theorem 4, we equivalently replace the goal as follows:

1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma'$ iff $[\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^{\mathcal{X}}, \sigma')$;

2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma'$ iff either $[\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'')$ or $[\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}$, in either case with $t' > t$ and $\mathbf{h}(t) = \sigma'$.

Then the "soundness" direction is obtained by induction over the derivation of the rules in Fig. 4. The "adequacy" direction follows by structural induction over $p$; for while-loops, we appeal to the fixpoint law $[\eta, f^\dagger]^* \cdot f = f^\dagger$ of Elgot monads. $\square$

# 7 Implementation

This section presents our prototype implementation, LINCE, which is available online both to run on our servers and to compile and execute locally (http://arcatools.org/lince). Its architecture is depicted in Figure 5. The dashed rectangles correspond to its main components. The one on the left (Core engine) provides the parser for the while-language and the engine to evaluate hybrid programs using the small-step operational semantics of Section 3.
The one on the right (Inspector) depicts trajectories produced by hybrid programs according to parameters specified by the user, and provides an interface to evaluate hybrid programs at specific time instants (the initial environment $\sigma: \mathcal{X} \to \mathbb{R}$ is assumed to be the constantly-zero function). As already mentioned, plots are generated by automatically evaluating the input program at different time instants. Incoming arrows in the figure denote an input relation and outgoing arrows denote an output relation. The two main components are further explained below.

---PAGE_BREAK---

Fig. 5: Depiction of LINCE's architecture

**Core engine.** Our implementation extensively uses the computer algebra tool SAGEMATH [31]. This serves two purposes: (1) to solve the systems of differential equations present in hybrid programs; and (2) to correctly evaluate if-then-else statements. Regarding the latter, note that we do not simply use the predicate functions of mainstream programming languages for evaluating Boolean conditions, essentially because such functions tend to give wrong results in the presence of real numbers (due to finite precision). Instead, LINCE uses SAGEMATH and its ability to perform advanced symbolic manipulation to check whether a Boolean condition is true or not. However, note that this will not always produce an output, fundamentally because solutions of linear differential equations involve transcendental numbers, and real-number arithmetic with such numbers is undecidable [20]. We leave the development of more sophisticated techniques for avoiding errors in the computational evaluation of hybrid programs as future work.

**Inspector.** The user interacts with LINCE at two different stages: (a) when inputting a hybrid program and (b) when inspecting trajectories using LINCE's output interfaces.
The latter case consists of adjusting different parameters for observing the generated plots in an optimal way.

**Event-triggered programs.** Observe that the differential statements $\mathrm{x}_1' = \mathrm{t}_1, \dots, \mathrm{x}_n' = \mathrm{t}_n$ for $\mathrm{t}$ are *time-triggered*: they terminate precisely when the time instant $\mathrm{t}$ is reached. In the area of hybrid systems it is also usual to consider *event-triggered* programs: those that terminate *as soon as* a specified condition $\psi$ becomes true [38,6,11]. So we next consider atomic programs of the type $\mathrm{x}_1' = \mathrm{t}_1, \dots, \mathrm{x}_n' = \mathrm{t}_n$ until $\psi$, where $\psi$ is an element of the free Boolean algebra generated by $\mathrm{t} \le \mathrm{s}$ and $\mathrm{t} \ge \mathrm{s}$ with $\mathrm{t}, \mathrm{s} \in LTerm(X)$, signalling the termination of the program. In general, it is impossible to determine with *exact* precision when such programs terminate (again due to the undecidability of real-number arithmetic with transcendental numbers). A natural option is to tackle this problem by checking the condition $\psi$ periodically, which essentially reduces event-triggered programs to time-triggered ones. The cost is that the evaluation of a program might greatly diverge from the nominal behaviour, as discussed for instance in [4,6], where an analogous approach is described for the well-established simulation tools SIMULINK and MODELICA. In our case, we allow programs of the form $\mathrm{x}_1' = \mathrm{t}_1, \dots, \mathrm{x}_n' = \mathrm{t}_n$ until$_\epsilon$ $\psi$ in the tool and define them as abbreviations of `while ¬ψ do { x1' = t1, …, xn' = tn for ε }`. This sort of abbreviation has the advantage of avoiding spurious evaluations of hybrid programs w.r.t. the established semantics. We could indeed easily allow such event-triggered programs natively in our language (i.e. without resorting to abbreviations) and extend the semantics accordingly.

---PAGE_BREAK---

Fig. 6: Position of the bouncing ball over time (plot on the left); zoomed-in position of the bouncing ball at the first bounce (plot on the right).
But we prefer not to do this at the moment, because we wish first to fully understand how to limit the spurious computational evaluations arising from event-triggered programs.

*Remark 3.* SIMULINK and MODELICA are powerful tools for simulating hybrid systems, but lack a well-established formal semantics. This is discussed for example in [3,9], where the authors aim to provide semantics to subsets of SIMULINK and MODELICA. Drawing inspiration from control theory, the language of SIMULINK is circuit-like and block-based; the language of MODELICA is *acausal*, and thus particularly useful for modelling electric circuits and the like, which are traditionally modelled by systems of equations.

*Example 3 (Bouncing Ball).* As an illustration of the approach described above for event-triggered programs, take a bouncing ball dropped at a positive height $p$ and with no initial velocity $v$. Due to the gravitational acceleration $g$, it falls to the ground and bounces back up, losing part of its kinetic energy in the process. This can be approximated by the following hybrid program

$$ (p' = v, v' = g \ \mathbf{until}_{0.01}\ p \le 0 \land v \le 0); (v := v \times -0.5) $$

where $0.5$ is the damping factor of the ball. We now want to drop the ball from a specific height (e.g. 5 meters) and let it bounce until it stops. Abbreviating the previous program into $b$, this behaviour can be approximated by $p := 5; v := 0; \texttt{while true do } \{ b \}$. Figure 6 presents the trajectory generated by the ball (calculated by LINCE). Note that since $\epsilon = 0.01$ the ball briefly falls below the ground, as shown in Figure 6 on the right. Other examples of event- and time-triggered programs can be seen on LINCE's website.

# 8 Conclusions and future work

We introduced small-step and big-step operational semantics for hybrid programs suitable for implementation purposes, and provided a denotational counterpart via the notion of Elgot monad.
These semantics were then linked by a soundness and adequacy theorem [37]. We regard these results as a stepping stone for developing computational tools and techniques for hybrid programming, a claim we attested to

---PAGE_BREAK---

with the development of LINCE. With this work as a basis, we plan to explore the following research lines in the near future.

**Program equivalence.** Our denotational semantics entails a natural notion of program equivalence (denotational equality) which inherently includes classical laws of iteration and a powerful uniformity principle [33], thanks to the use of Elgot monads. We intend to further explore the equational theory of our language so that we can safely refactor and simplify hybrid programs. Note that the theory includes equational schemata like `(x := a; x := b) = x := b` and `(wait a; wait b) = wait (a + b)`, thus encompassing not only the usual laws of programming but also axiomatic principles behind the notion of time.

**New program constructs.** Our while-language is intended to be as simple as possible whilst harbouring the core, uncontroversial features of hybrid programming. This was decided so that we could use the language as both a theoretical and practical basis for advancing hybrid programming. A particular case that we wish to explore next is the introduction of new program constructs, including e.g. non-deterministic or probabilistic choice and exception-raising operations. Denotationally, the fact that we used monadic constructions readily provides a palette of techniques for this process, e.g. tensoring and distributive laws [22,23].

**Robustness.** A core aspect of hybrid programming is that programs should be *robust*: small variations in their input should *not* result in big changes in their output [32,21]. We wish to extend LINCE with features for detecting non-robust programs.
A main source of non-robustness is conditional statements `if b then p else q`: very small changes in their input may change the validity of b and consequently cause a switch between (possibly very different) execution branches. Currently, we are working on the systematic detection of non-robust conditional statements in hybrid programs by taking advantage of the notion of $\delta$-perturbation [20].

**Acknowledgements** The first author would like to acknowledge support of the German Research Council (DFG) under the project *A High Level Language for Monad-based Processes* (GO 2161/1-2). The second author was financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia, within project POCI-01-0145-FEDER-030947. The third author was partially supported by National Funds through FCT/MCTES, within the CISTER Research Unit (UIDB/04234/2020); by COMPETE 2020 under the PT2020 Partnership Agreement, through ERDF, and by national funds through the FCT, within project POCI-01-0145-FEDER-029946; by the Norte Portugal Regional Operational Programme (NORTE 2020) under the Portugal 2020 Partnership Agreement, through ERDF, and also by national funds through the FCT, within project NORTE-01-0145-FEDER-028550; and by the FCT within project ECSEL/0016/2019 and the ECSEL Joint Undertaking (JU) under grant agreement No 876852. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Czech Republic, Germany, Ireland, Italy, Portugal, Spain, Sweden, Turkey.

---PAGE_BREAK---

References

1. J. Adámek, H. Herrlich, and G. Strecker. *Abstract and concrete categories*. John Wiley & Sons Inc., New York, 1990.

2. J. Adámek, S. Milius, and J. Velebil. Elgot theories: a new perspective on the equational properties of iteration.
*Mathematical Structures in Computer Science*, 21(2):417–480, 2011.

3. O. Bouissou and A. Chapoutot. An operational semantics for Simulink's simulation engine. In *ACM SIGPLAN Notices*, vol. 47, pp. 129–138. ACM, 2012.

4. D. Broman. Hybrid simulation safety: Limbos and zero crossings. In *Principles of Modeling*, pp. 106–121. Springer, 2018.

5. Z. Chaochen, C. A. R. Hoare, and A. P. Ravn. A calculus of durations. *Information Processing Letters*, 40(5):269–276, 1991.

6. D. A. Copp and R. G. Sanfelice. A zero-crossing detection algorithm for robust simulation of hybrid systems jumping on surfaces. *Simulation Modelling Practice and Theory*, 68:1–17, 2016.

7. T. L. Diezel and S. Goncharov. Towards Constructive Hybrid Semantics. In Z. M. Ariola, ed., *5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)*, vol. 167 of LIPIcs, pp. 24:1–24:19, Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

8. C. Elgot. Monadic computation and iterative algebraic theories. In *Studies in Logic and the Foundations of Mathematics*, vol. 80, pp. 175–230. Elsevier, 1975.

9. S. Foster, B. Thiele, A. Cavalcanti, and J. Woodcock. Towards a UTP semantics for Modelica. In *International Symposium on Unifying Theories of Programming*, pp. 44–64. Springer, 2016.

10. P. Fritzson. *Principles of object-oriented modeling and simulation with Modelica 3.3: a cyber-physical approach*. John Wiley & Sons, 2014.

11. R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid dynamical systems. *IEEE Control Systems*, 29(2):28–93, 2009.

12. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. In *29th International Conference on Concurrency Theory, CONCUR 2018*. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018.

13. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. CoRR, abs/1807.01053, 2018.

14. S. Goncharov and R. Neves.
An adequate while-language for hybrid computation. In *Proceedings of the 21st International Symposium on Principles and Practice of Declarative Programming*, PPDP '19, pp. 11:1–11:15, New York, NY, USA, 2019. ACM.

15. S. Goncharov, R. Neves, and J. Proença. Implementing hybrid semantics: From functional to imperative. CoRR, abs/2009.14322, 2020.

16. S. Goncharov, L. Schröder, C. Rauch, and M. Piróg. Unifying guarded and unguarded iteration. In *International Conference on Foundations of Software Science and Computation Structures*, pp. 517–533. Springer, 2017.

17. T. A. Henzinger. The theory of hybrid automata. In *LICS'96: Logic in Computer Science, 11th Annual Symposium, New Jersey, USA, July 27-30, 1996*, pp. 278–292. IEEE, 1996.

18. P. Höfner and B. Möller. An algebra of hybrid systems. *The Journal of Logic and Algebraic Programming*, 78(2):74–97, 2009.

19. J. J. Huerta y Munive and G. Struth. Verifying hybrid systems with modal Kleene algebra. In J. Desharnais, W. Guttmann, and S. Joosten, eds., *Relational

---PAGE_BREAK---

*and Algebraic Methods in Computer Science*, pp. 225–243, Cham, 2018. Springer International Publishing.

20. S. Kong, S. Gao, W. Chen, and E. Clarke. dReach: $\delta$-reachability analysis for hybrid systems. In *International Conference on Tools and Algorithms for the Construction and Analysis of Systems*, pp. 200–205. Springer, 2015.

21. D. Liberzon and A. S. Morse. Basic problems in stability and design of switched systems. *IEEE Control Systems*, 19(5):59–70, 1999.

22. C. Lüth and N. Ghani. Composing monads using coproducts. In M. Wand and S. L. P. Jones, eds., *ICFP'02: Functional Programming, 7th ACM SIGPLAN International Conference, Pittsburgh, USA, October 04-06, 2002*, pp. 133–144. ACM, 2002.

23. E. Manes and P. Mulry. Monad compositions I: general constructions and recursive distributive laws. *Theory and Applications of Categories*, 18(7):172–208, 2007.

24. E. Moggi.
Computational lambda-calculus and monads. In *Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS '89), Pacific Grove, California, USA, June 5-8, 1989*, pp. 14–23. IEEE Computer Society, 1989. + +25. E. Moggi. Notions of computation and monads. *Information and Computation*, 93(1):55–92, 1991. + +26. R. Neves. *Hybrid programs*. PhD thesis, University of Minho, 2018. + +27. P. C. Ölveczky and J. Meseguer. Semantics and pragmatics of Real-Time Maude. *Higher-Order and Symbolic Computation*, 20(1-2):161–196, 2007. + +28. A. Platzer. Differential dynamic logic for hybrid systems. *Journal of Automated Reasoning*, 41(2):143–189, 2008. + +29. A. Platzer. *Logical Analysis of Hybrid Systems: Proving Theorems for Complex Dynamics*. Springer, Heidelberg, 2010. + +30. R. R. Rajkumar, I. Lee, L. Sha, and J. Stankovic. Cyber-physical systems: the next computing revolution. In *DAC'10: Design Automation Conference, 47th ACM/IEEE Conference, Anaheim, USA, June 13-18, 2010*, pp. 731–736. IEEE, 2010. + +31. W. Stein et al. *Sage Mathematics Software (Version 6.4.1)*. The Sage Development Team, 2015. http://www.sagemath.org/. + +32. R. Shorten, F. Wirth, O. Mason, K. Wulff, and C. King. Stability criteria for switched and hybrid systems. *SIAM Review*, 49(4):545–592, 2007. + +33. A. Simpson and G. Plotkin. Complete axioms for categorical fixed-point operators. In *Logic in Computer Science, LICS 2000*, pp. 30–41, 2000. + +34. K. Suenaga and I. Hasuo. Programming with infinitesimals: A while-language for hybrid system modeling. In *International Colloquium on Automata, Languages, and Programming*, pp. 392–403. Springer, 2011. + +35. T. Uustalu. Generalizing substitution. *RAIRO-Theoretical Informatics and Applications*, 37(4):315–336, 2003. + +36. R. van Glabbeek. The linear time-branching time spectrum (extended abstract). In *Theories of Concurrency, CONCUR 1990*, vol. 458, pp. 278–297, 1990. + +37. G. Winskel.
*The formal semantics of programming languages: an introduction*. MIT press, 1993. + +38. H. Witsenhausen. A class of hybrid-state continuous-time dynamic systems. *IEEE Transactions on Automatic Control*, 11(2):161–167, 1966. \ No newline at end of file diff --git a/samples/texts_merged/3226827.md b/samples/texts_merged/3226827.md new file mode 100644 index 0000000000000000000000000000000000000000..3f1aae74817b8ce6d50c286aca3bc57c1de26436 --- /dev/null +++ b/samples/texts_merged/3226827.md @@ -0,0 +1,194 @@ + +---PAGE_BREAK--- + +# EXPLAIN: A Tool for Performing Abductive Inference + +Isil Dillig and Thomas Dillig + +{idillig, tdillig}@cs.wm.edu + +Computer Science Department, College of William & Mary + +**Abstract.** This paper describes a tool called EXPLAIN for performing abductive inference. Logical abduction is the problem of finding a simple explanatory hypothesis that explains observed facts. Specifically, given a set of premises Γ and a desired conclusion φ, abductive inference finds a simple explanation ψ such that Γ ∧ ψ |= φ, and ψ is consistent with known premises Γ. Abduction has many useful applications in verification, including inference of missing preconditions, error diagnosis, and construction of compositional proofs. This paper gives a brief tutorial introduction to EXPLAIN and describes the basic inference algorithm. + +## 1 Introduction + +The fundamental ingredient of automated logical reasoning is *deduction*, which allows deriving valid conclusions from a given set of premises. For example, consider the following set of facts: + +(1) $\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x))$ + +(2) $\forall x. 
((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x))$ + +(3) $\text{duck}(\text{donald})$ + +Based on these premises, logical deduction allows us to reach the conclusion: + +$$ \text{waddle}(\text{donald}) \land \text{quack}(\text{donald}) $$ + +This form of forward deductive reasoning forms the basis of all SAT and SMT solvers as well as first-order theorem provers and verification tools used today. + +A complementary form of logical reasoning to deduction is *abduction*, as introduced by Charles Sanders Peirce [1]. Specifically, abduction is a form of backward logical reasoning, which allows inferring likely premises from a given conclusion. Going back to our earlier example, suppose we know premises (1) and (2), and assume that we have observed that the formula waddle(donald) ∧ quack(donald) is true. Here, since the given premises do not imply the desired conclusion, we would like to find an explanatory hypothesis ψ such that the following deduction is valid: + +$$ +\begin{array}{c} +\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x)) \\ +\forall x. ((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x)) \\ +\psi \\ +\hline +\text{waddle}(\text{donald}) \land \text{quack}(\text{donald}) +\end{array} +$$ +---PAGE_BREAK--- + +The problem of finding a logical formula $\psi$ for which the above deduction is valid is known as *abductive inference*. For our example, many solutions are possible, including the following: + +$$ +\begin{align*} +\psi_1 &: \text{duck}(\text{donald}) \wedge \neg\text{quack}(\text{donald}) \\ +\psi_2 &: \text{waddle}(\text{donald}) \wedge \text{quack}(\text{donald}) \\ +\psi_3 &: \text{goose}(\text{donald}) \wedge \text{quack}(\text{donald}) \\ +\psi_4 &: \text{duck}(\text{donald}) +\end{align*} + $$ + +While all of these solutions make the deduction valid, some of these solutions are more desirable than others. For example, $\psi_1$ contradicts known facts and is therefore a useless solution.
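These candidate hypotheses can be checked mechanically: a hypothesis is acceptable when, together with the premises, it is consistent and entails the observation. The following Python sketch (our own illustration, not part of any tool discussed here; it hard-codes a one-individual world for donald) verifies this by enumerating truth assignments:

```python
from itertools import product

# Brute-force propositional check of the donald example; the encoding
# (one individual, four Boolean atoms) is our own illustrative assumption.
ATOMS = ("duck", "goose", "quack", "waddle")

def models():
    for bits in product((False, True), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def premises(m):
    # (1) duck => quack, (2) (duck or goose) => waddle
    return (not m["duck"] or m["quack"]) and \
           (not (m["duck"] or m["goose"]) or m["waddle"])

def goal(m):
    return m["waddle"] and m["quack"]

def is_abductive_solution(psi):
    # (1) premises /\ psi |= goal   (2) premises /\ psi is satisfiable
    sat = [m for m in models() if premises(m) and psi(m)]
    return bool(sat) and all(goal(m) for m in sat)

psi1 = lambda m: m["duck"] and not m["quack"]   # contradicts premise (1)
psi2 = lambda m: m["waddle"] and m["quack"]     # restates the goal
psi3 = lambda m: m["goose"] and m["quack"]
psi4 = lambda m: m["duck"]

print([is_abductive_solution(p) for p in (psi1, psi2, psi3, psi4)])
# → [False, True, True, True]
```

Running it reports that ψ1 fails (inconsistent) while ψ2–ψ4 are all formally acceptable, matching the discussion above.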
On the other hand, $\psi_2$ simply restates the desired conclusion, and despite making the deduction valid, gets us no closer to explaining the observation. Finally, $\psi_3$ and $\psi_4$ neither contradict the premises nor restate the conclusion, but, intuitively, we prefer $\psi_4$ over $\psi_3$ because it makes fewer assumptions. + +At a technical level, given premises $\Gamma$ and desired conclusion $\phi$, abduction is the problem of finding an explanatory hypothesis $\psi$ such that: + +(1) $\Gamma \wedge \psi \models \phi$ + +(2) $\Gamma \wedge \psi \nvDash \text{false}$ + +Here, the first condition states that $\psi$, together with known premises $\Gamma$, entails the desired conclusion $\phi$. The second condition stipulates that $\psi$ is consistent with known premises. As illustrated by the previous example, there are many solutions to a given abductive inference problem, but the most desirable solutions are usually those that are as simple and as general as possible. + +Recently, abductive inference has found many useful applications in verification, including inference of missing function preconditions [2, 3], diagnosis of error reports produced by verification tools [4], and for computing underapproximations [5]. Furthermore, abductive inference has also been used for inferring specifications of library functions [6] and for automatically synthesizing circular compositional proofs of program correctness [7]. + +In this paper, we describe our tool, called **EXPLAIN**, for performing logical abduction in the combination theory of Presburger arithmetic and propositional logic. The solutions computed by EXPLAIN are both simple and general: EXPLAIN always yields a logically weakest solution containing the fewest possible variables. + +## 2 A Tutorial Introduction to EXPLAIN + +The EXPLAIN tool is part of the SMT solver MISTRAL, which is available at http://www.cs.wm.edu/~tdillig/mistral under GPL license. 
MISTRAL is written in C++ and provides a C++ interface for EXPLAIN. In this section, we give a brief tutorial on how to solve abductive inference problems using EXPLAIN. + +As an example, consider the abduction problem defined by the premises $x \le 0$ and $y > 1$ and the desired conclusion $2x - y + 3z \le 10$ in the theory of linear +---PAGE_BREAK--- + +1. Term* x = VariableTerm::make("x"); + +2. Term* y = VariableTerm::make("y"); + +3. Term* z = VariableTerm::make("z"); + +4. Constraint c1(x, ConstantTerm::make(0), ATOM_LEQ); + +5. Constraint c2(y, ConstantTerm::make(1), ATOM_GT); + +6. Constraint premises = c1 & c2; + +7. map<Term*, long int> elems; + +8. elems[x] = 2; + +9. elems[y] = -1; + +10. elems[z] = 3; + +11. Term* t = ArithmeticTerm::make(elems); + +12. Constraint conclusion(t, ConstantTerm::make(10), ATOM_LEQ); + +13. Constraint explanation = conclusion.abduce(premises); + +14. cout << "Explanation: " << explanation << endl; + +Fig. 1: C++ code showing how to use EXPLAIN for performing abduction + +integer arithmetic. In other words, we want to find a simple formula $\psi$ such that: + +$$ +\begin{array}{l} +x \le 0 \land y > 1 \land \psi \models 2x - y + 3z \le 10 \\ +x \le 0 \land y > 1 \land \psi \not\models false +\end{array} + $$ + +Figure 1 shows C++ code for using EXPLAIN to solve the above abductive inference problem. Here, lines 1-12 construct the constraints used in the example, while line 13 invokes the **abduce** method of EXPLAIN for performing abduction. Lines 1-3 construct variables *x*, *y*, *z*, and lines 4 and 5 form the constraints *x* ≤ 0 and *y* > 1 respectively. In MISTRAL, the operators &, |, ! are overloaded and are used for conjoining, disjoining, and negating constraints respectively. Therefore, line 6 constructs the premise *x* ≤ 0 ∧ *y* > 1 by conjoining c1 and c2. Lines 7-12 construct the desired conclusion 2*x* − *y* + 3*z* ≤ 10. For this purpose, we first construct the arithmetic term 2*x* − *y* + 3*z* (lines 7-11).
An ArithmeticTerm consists of a map from terms to coefficients; for instance, for the term 2*x* − *y* + 3*z*, the coefficients of *x*, *y*, *z* are specified as 2, −1, 3 in the elems map respectively. + +The more interesting part of Figure 1 is line 13, where we invoke the **abduce** method to compute a solution to our abductive inference problem. For this example, the solution computed by EXPLAIN (and printed out at line 14) is *z* ≤ 4. It is easy to confirm that *z* ≤ 4 ∧ *x* ≤ 0 ∧ *y* > 1 logically implies 2*x* − *y* + 3*z* ≤ 10 and that *z* ≤ 4 is consistent with our premises. + +In general, the abductive solutions computed by EXPLAIN have two theoretical guarantees: First, they contain as few variables as possible. For instance, in our example, although $z-x \leq 4$ is also a valid solution to the abduction problem, EXPLAIN always yields a solution with the fewest variables because such solutions are generally simpler and more concise. Second, among the class of solutions that contain the same set of variables, EXPLAIN always yields the +---PAGE_BREAK--- + +logically weakest explanation. For instance, in our example, while $z = 0$ is also a valid solution to the abduction problem, it is logically stronger than $z \le 4$. Intuitively, logically weak solutions to the abduction problem are preferable because they make fewer assumptions and are therefore more likely to be true. + +## 3 Algorithm for Performing Abductive Inference + +In this section, we describe the algorithm used in EXPLAIN for performing abductive inference. First, let us observe that the entailment $\Gamma \wedge \psi \models \phi$ can be rewritten as $\psi \models \Gamma \Rightarrow \phi$. Furthermore, in addition to entailing $\Gamma \Rightarrow \phi$, we want $\psi$ to obey the following three requirements: + +1. The solution $\psi$ should be consistent with $\Gamma$ because an explanation that contradicts known premises is not useful + +2.
To ensure the simplicity of the explanation, $\psi$ should contain as few variables as possible + +3. To capture the generality of the abductive explanation, $\psi$ should be no stronger than any other solution $\psi'$ satisfying the first two requirements + +Now, consider a minimum satisfying assignment (MSA) of $\Gamma \Rightarrow \phi$. An MSA of a formula $\varphi$ is a partial satisfying assignment of $\varphi$ that contains as few variables as possible. The formal definition of MSAs as well as an algorithm for computing them are given in [8]. Clearly, an MSA $\sigma$ of $\Gamma \Rightarrow \phi$ entails $\Gamma \Rightarrow \phi$ and satisfies condition (2). Unfortunately, an MSA of $\Gamma \Rightarrow \phi$ does not satisfy condition (3), as it is a logically strongest solution containing a given set of variables. + +Given an MSA of $\Gamma \Rightarrow \phi$ containing variables $V$, we observe that a logically weakest solution containing only $V$ is equivalent to $\forall \bar{V}$. ($\Gamma \Rightarrow \phi$), where $\bar{V}$ = free($\Gamma \Rightarrow \phi$)-$V$. Hence, given an MSA of $\Gamma \Rightarrow \phi$ consistent with $\Gamma$, an abductive solution satisfying all conditions (1)-(3) can be obtained by applying quantifier elimination to $\forall \bar{V}$. ($\Gamma \Rightarrow \phi$). + +Thus, to solve the abduction problem, what we want is a largest set of variables $X$ such that $(\forall X.(\Gamma \Rightarrow \phi)) \wedge \Gamma$ is satisfiable. We call such a set of variables $X$ a maximum universal subset (MUS) of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$. Given an MUS $X$ of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$, the desired solution to the abductive inference problem is obtained by eliminating quantifiers from $\forall X.(\Gamma \Rightarrow \phi)$ and then simplifying the resulting formula with respect to $\Gamma$ using the algorithm from [9]. 
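The effect of the universal quantification can be replayed by hand on the running example from Section 2. The sketch below (our own, assuming integer-valued variables; it is not MISTRAL's implementation) eliminates x and y from ∀x, y. (x ≤ 0 ∧ y > 1) ⇒ 2x − y + 3z ≤ 10 and cross-checks the result on a finite grid:

```python
# Replaying quantifier elimination on the Section 2 example (our own
# sketch): over the integers, the worst case of 2x - y subject to
# x <= 0 and y > 1 is attained at x = 0, y = 2, so the weakest
# constraint on z alone is 3z <= 10 - (2*0 - 2) = 12, i.e. z <= 4.

def entails_conclusion(z_bound, grid=6):
    # brute-force check on a finite grid: every x <= 0, y > 1 and
    # z <= z_bound must satisfy 2x - y + 3z <= 10
    return all(2 * x - y + 3 * z <= 10
               for x in range(-grid, 1)         # x <= 0
               for y in range(2, grid + 2)      # y > 1 (integers)
               for z in range(-grid, z_bound + 1))

z_bound = (10 - (2 * 0 - 2)) // 3   # worst case x = 0, y = 2
print(z_bound)                       # 4
print(entails_conclusion(4), entails_conclusion(5))   # True False
```

This reproduces the solution z ≤ 4 reported by the tool, while the failed check for z ≤ 5 shows the bound cannot be weakened further.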
+ +Pseudo-code for our algorithm for solving an abductive inference problem defined by premises $\Gamma$ and conclusion $\phi$ is shown in Figure 2. The **abduce** function given in lines 1-5 first computes an MUS of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$ using the helper **find_mus** function. Given such a maximum universal subset $X$, we obtain a quantifier-free abductive solution $\chi$ by applying quantifier elimination to the formula $\forall X.(\Gamma \Rightarrow \phi)$. Finally, at line 4, to ensure that the final abductive solution does not contain redundant subparts that are implied by the premises, we apply the simplification algorithm from [9] to $\chi$. This yields our final abductive solution $\psi$ which satisfies our criteria of minimality and generality and that is not redundant with respect to the original premises. +---PAGE_BREAK--- + +``` +abduce(φ, Γ) { + 1. φ = (Γ ⇒ φ) + 2. Set X = find_mus(φ, Γ, free(φ), 0) + 3. χ = elim(∀X.φ) + 4. ψ = simplify(χ, Γ) + 5. return ψ +} + +find_mus(φ, Γ, V, L) { + 6. if (V = ∅ or |V| ≤ L) return ∅ + 7. U = free(φ) - V + 8. if (UNSAT(Γ ∧ ∀U.φ)) return ∅ + 9. Set best = ∅ +10. choose x ∈ V + +11. if (SAT(∀x.φ)) { +12.   Set Y = find_mus(∀x.φ, Γ, V \ {x}, L - 1); +13.   if (|Y| + 1 > L) { best = Y ∪ {x}; L = |Y| + 1 } +    } +14. Set Y = find_mus(φ, Γ, V \ {x}, L); +15. if (|Y| > L) { best = Y } + +16. return best; +} +``` + +Fig. 2: Algorithm for performing abduction + +The function `find_mus` used in `abduce` is shown in lines 6-16 of Figure 2. This algorithm directly extends the `find_mus` algorithm we presented earlier in [8] to exclude universal subsets that contradict Γ. At every recursive invocation, `find_mus` picks a variable x from the set of free variables in φ. It then recursively invokes `find_mus` to compute the sizes of the universal subsets with and without x and returns the larger universal subset.
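As a cross-check of what `find_mus` computes, its specification — a largest X such that Γ ∧ ∀X.φ is satisfiable — can be prototyped by exhaustive search for small propositional formulas. The sketch below (ours, with an example formula of our own choosing; it checks every subset instead of using the branch-and-bound of Figure 2):

```python
from itertools import combinations, product

# Brute-force reference for the maximum universal subset specification:
# the largest X such that Γ ∧ ∀X.φ is satisfiable. Formulas are Python
# predicates over assignments of the variables in VS.
VS = ("a", "b", "c")

def assignments():
    for bits in product((False, True), repeat=len(VS)):
        yield dict(zip(VS, bits))

def forall(phi, xs):
    # ∀xs.φ as a new predicate over the remaining variables
    return lambda m: all(phi({**m, **dict(zip(xs, bits))})
                         for bits in product((False, True), repeat=len(xs)))

def max_universal_subset(phi, gamma):
    for k in range(len(VS), -1, -1):            # try largest subsets first
        for xs in combinations(VS, k):
            q = forall(phi, xs)
            if any(gamma(m) and q(m) for m in assignments()):
                return set(xs)

gamma = lambda m: m["a"]                          # premises Γ: a
phi = lambda m: (not m["a"]) or m["b"] or m["c"]  # Γ ⇒ φ: a ⇒ (b ∨ c)
print(max_universal_subset(phi, gamma))           # {'a', 'b'}
```

Here quantifying {a, b} leaves the weakest single-variable solution ψ = c, mirroring how `abduce` would then apply quantifier elimination to ∀X.(Γ ⇒ φ).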
In this algorithm, L is a lower bound on the size of the MUS and is used to terminate search branches that cannot improve upon an existing solution. Therefore, the search for an MUS terminates if we either cannot improve upon an existing solution L, or the universal subset U at line 7 is no longer consistent with Γ. The return value of `find_mus` is therefore a largest set X of variables for which Γ ∧ ∀X.φ is satisfiable. + +# 4 Experimental Evaluation + +To explore the size of abductive solutions and the cost of computing such solutions in practice, we collected 1455 abduction problems generated by the Compass program analysis system for inferring missing preconditions of functions. In each abduction problem $(\Gamma \land \psi) \Rightarrow \phi$, $\Gamma$ represents known invariants, and +---PAGE_BREAK--- + +Fig. 3: Size of Formula vs. Size of Abductive Solution and Time for Abduction + +$\phi$ is the weakest precondition of an assertion in some function $f$. Hence, the solution $\psi$ to the abduction problem represents a potential missing precondition of $f$ sufficient to guarantee the safety of the assertion. + +The left-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$, measured as the number of leaves in the formula, versus the size of the computed abductive solution. As this graph shows, the abductive solution is generally much smaller than the original formula, demonstrating that our abduction algorithm generates small explanations in practice. The right-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$ versus the time taken to solve the abduction problem. As expected, the time increases with formula size, but remains tractable even for the largest abduction problems in our benchmark set. + +## References + +1. Peirce, C.: Collected papers of Charles Sanders Peirce. Belknap Press (1932) +2. Calcagno, C., Distefano, D., O'Hearn, P., Yang, H.: Compositional shape analysis by means of bi-abduction. 
POPL 44(1) (2009) 289–300 +3. Giacobazzi, R.: Abductive analysis of modular logic programs. In: Proceedings of the 1994 International Symposium on Logic programming, Citeseer (1994) 377–391 +4. Dillig, I., Dillig, T., Aiken, A.: Automated error diagnosis using abductive inference. In: PLDI. (2012) +5. Gulwani, S., McCloskey, B., Tiwari, A.: Lifting abstract interpreters to quantified logical domains. In: POPL, ACM (2008) 235–246 +6. Zhu, H., Dillig, I., Dillig, T.: Abduction-based inference of library specifications for source-sink property verification. In: Technical Report, College of William & Mary. (2012) +7. Li, B., Dillig, I., Dillig, T., McMillan, K., Sagiv, M.: Synthesis of circular compositional program proofs via abduction. In: To appear in TACAS. (2013) +8. Dillig, I., Dillig, T., McMillan, K., Aiken, A.: Minimum satisfying assignments for SMT, CAV (2012) +9. Dillig, I., Dillig, T., Aiken, A.: Small formulas for large programs: On-line constraint simplification in scalable static analysis. Static Analysis (2011) 236–252 \ No newline at end of file diff --git a/samples/texts_merged/3251599.md b/samples/texts_merged/3251599.md new file mode 100644 index 0000000000000000000000000000000000000000..629921735f5c0f14d5bdc90738031be4e4f28926 --- /dev/null +++ b/samples/texts_merged/3251599.md @@ -0,0 +1,679 @@ + +---PAGE_BREAK--- + +Research Article + +On Retarded Integral Inequalities for Dynamic Systems +on Time Scales + +Qiao-Luan Li,¹ Xu-Yang Fu,¹ Zhi-Juan Gao,¹ and Wing-Sum Cheung² + +¹College of Mathematics & Information Science, Hebei Normal University, Shijiazhuang 050024, China + +²Department of Mathematics, The University of Hong Kong, Hong Kong + +Correspondence should be addressed to Wing-Sum Cheung; wscheung@hku.hk + +Received 13 September 2013; Accepted 16 January 2014; Published 20 February 2014 + +Academic Editor: Jaeyoung Chung + +Copyright © 2014 Qiao-Luan Li et al. 
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. + +The object of this paper is to establish some nonlinear retarded inequalities on time scales which can be used as handy tools in the theory of integral equations with time delays. + +**1. Introduction** + +Integral inequalities play an important role in the qualitative analysis of differential and integral equations. The well-known Gronwall inequality provides explicit bounds for solutions of many differential and integral equations. On the basis of various initiatives, this inequality has been extended and applied to various contexts (see, e.g., [1-4]), including many retarded ones (see, e.g., [5-9]). + +Recently, Ye and Gao [7] obtained the following. + +**Theorem A.** Let $I = [t_0, T) \subset \mathbb{R}$, $a(t), b(t) \in C(I, \mathbb{R}^+)$, $\phi(t) \in C([t_0 - r, t_0], \mathbb{R}^+)$, $a(t_0) = \phi(t_0)$, and $u(t) \in C([t_0 - r, T), \mathbb{R}^+)$ with + +$$ +\begin{aligned} +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r) ds, && t \in [t_0, T) \\ +& u(t) \le \phi(t), && t \in [t_0 - r, t_0), +\end{aligned} +\quad (1) $$ + +where $\beta > 0$. Then, the following assertions hold. + +(i) Suppose that $\beta > 1/2$. 
Then, + +$$ +\begin{aligned} +& u(t) \le e^t [w_1(t) + y_1(t)]^{1/2}, && t \in [t_0 + r, T), \\ +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) ds, && (2) \\ +& t \in [t_0, t_0+r), +\end{aligned} +$$ + +where $K_1 = \Gamma(2\beta - 1)e^{-2r}/4^{\beta-1}$, $C_1 = \max\{2, e^{2r}\}$, $w_1(t) = C_1e^{-2t_0}a^2(t)$, $\phi_1(t) = C_1e^{-2t_0}\phi^2(t)$, and + +$$ +\begin{aligned} +& y_1(t) \\ +& = \int_{t_0}^{t_0+r} K_1 b^2(s) \phi_1(s-r) ds \\ +& \quad \cdot \exp \left( \int_{t_0+r}^{t} K_1 b^2(\tau) d\tau \right) \\ +& + \int_{t_0+r}^{t} w_1(s-r) K_1 b^2(s) \exp \left( \int_{s}^{t} K_1 b^2(\tau) d\tau \right) ds. +\end{aligned} +\quad (3) $$ + +If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions, then + +$$ +\begin{aligned} +& u(t) \le \sqrt{C_1} a(t) \exp \left( t - t_0 + \frac{K_1}{2} \int_{t_0}^{t} b^2(s) ds \right), && (4) \\ +& t \in [t_0, T). +\end{aligned} +$$ + +(ii) Suppose that $0 < \beta \le 1/2$. Then, + +$$ +\begin{aligned} +& u(t) \le e^t [w_2(t) + y_2(t)]^{1/q}, && t \in [t_0 + r, T), \\ +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) ds, && (5) \\ +& t \in [t_0, t_0 + r), +\end{aligned} +$$ +---PAGE_BREAK--- + +where $K_2 = \left[\Gamma(1 - p(1-\beta))/p^{1-p(1-\beta)}\right]^{1/p}$, $C_2 = \max\{2^{q-1}, e^{qr}\}$, $w_2(t) = C_2 e^{-qt_0} a^q(t)$, $\phi_2(t) = C_2 e^{-qt_0} \phi^q(t)$, $\psi(t) = 2^{q-1} K_2^q e^{-qr} b^q(t)$, and + +$$ +\begin{equation} +\begin{aligned} +y_2(t) &= \int_{t_0}^{t_0+r} \psi(s) \phi_2(s-r) ds \cdot \exp \left( \int_{t_0+r}^{t} \psi(\tau) d\tau \right) \\ +&\quad + \int_{t_0+r}^{t} w_2(s-r) \psi(s) \exp \left( \int_{s}^{t} \psi(\tau) d\tau \right) ds. +\end{aligned} +\tag{6} +\end{equation} +$$ + +If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions, then + +$$ +u(t) \le C_2^{1/q} a(t) \exp \left( t - t_0 + \frac{1}{q} \int_{t_0}^t \psi(s) ds \right), \quad (7) +$$ + +$$ +t \in [t_0, T).
+
$$ + +In this paper, we will further investigate functions $u$ satisfying the following more general inequalities: + +$$ +\begin{align} +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u^{n/m}(s-r) \Delta s, \quad t \in [t_0, T)_{\mathbb{T}}, \notag \\ +& u(t) \le \phi(t), \quad t \in [t_0-r, t_0]_{\mathbb{T}}, \tag{8} \\ +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} [b(s) u^{n/m}(s) + c(s) u^{n/m}(s-r)] \Delta s, \quad t \in [t_0, T)_{\mathbb{T}}, \notag \\ +& u(t) \le \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}}, \tag{9} +\end{align} +$$ + +where $\mathbb{T}$ is any time scale, $u(t)$, $a(t)$, $b(t)$, $c(t)$, and $\phi(t)$ are real-valued nonnegative rd-continuous functions defined on $\mathbb{T}$, $m$ and $n$ are positive constants, $m \ge n$, $m \ge 1$, $(1/p) + (1/m) = 1$, $\beta > (p-1)/p$, and $[t_0, T)_{\mathbb{T}} := [t_0, T) \cap \mathbb{T}$. + +First, we make a preliminary definition. + +**Definition 1.** We say that a function $p : \mathbb{T} \to \mathbb{R}$ is regressive provided that + +$$ +1 + \mu(t)p(t) \neq 0, \quad \forall t \in \mathbb{T}^k +\quad (10) +$$ + +holds, where $\mu(t)$ is the graininess function; that is, $\mu(t) := \sigma(t) - t$. The set of all regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ will be denoted by $\mathcal{R}$. + +**2. Main Results** + +For convenience, we first cite the following lemma. + +**Lemma 2** (see [10]). Let $a \ge 0$, $p \ge q \ge 0$, $p \ne 0$; then + +$$ +a^{q/p} \leq \frac{q}{p} K^{\frac{q-p}{p}} a + \frac{p-q}{p} K^{\frac{q}{p}} +\quad (11) +$$ + +for any $K > 0$.
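Lemma 2 is a tangent-line (Young-type) estimate: the right-hand side is the tangent of the concave map $a \mapsto a^{q/p}$ at $a = K$, which is why the bound holds for every $K > 0$. A quick numerical spot-check (our own addition, not part of the paper):

```python
import random

# Numerical spot-check of Lemma 2: for a >= 0, p >= q >= 0, p != 0
# and any K > 0,
#   a^(q/p) <= (q/p) K^((q-p)/p) a + ((p-q)/p) K^(q/p),
# with equality at a = K (the tangent point).

def lemma2_rhs(a, p, q, K):
    return (q / p) * K ** ((q - p) / p) * a + ((p - q) / p) * K ** (q / p)

random.seed(0)
for _ in range(10_000):
    p = random.uniform(0.1, 10.0)
    q = random.uniform(0.0, p)          # p >= q >= 0
    a = random.uniform(0.0, 100.0)
    K = random.uniform(1e-3, 100.0)
    assert a ** (q / p) <= lemma2_rhs(a, p, q, K) * (1 + 1e-9) + 1e-9
```

For example, with $p = 2$, $q = 1$, $K = 4$ the bound at $a = 4$ is exactly $\sqrt{4} = 2$, and every other $a$ lies strictly below the tangent line.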
+ +**Lemma 3.** Let $a(t) \ge 0$, $b(t) > 0$, $p(t) := nb(t)/m$, $-b \in$ +$\mathcal{R}^+ := \{f \in \mathcal{R} : 1 + \mu(t)f(t) > 0, \text{ for all } t \in \mathbb{T}\}$, $\phi(t) \ge 0$ is +rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, and $r \ge 0$ and $m \ge n > 0$ are +real constants. If $u(t) \ge 0$ is rd-continuous and + +$$ +\begin{equation} +\begin{aligned} +& u^m(t) \le a(t) + \int_{t_0}^{t} b(s) u^n(s-r) \Delta s, && t \in [t_0, T]_{\mathbb{T}}, \\ +& u(t) \le \phi(t), && t \in [t_0 - r, t_0]_{\mathbb{T}}, +\end{aligned} +\tag{12} +\end{equation} +$$ + +then + +$$ +\begin{equation} +\begin{split} +u^m(t) &\le a(t) + \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t)\Delta s \\ +&\quad + e_{-p}(t_0+r,t) \int_{t_0}^{t_0+r} b(s)\phi^n(s-r)\Delta s \\ +&\quad + \frac{m-n}{n}(e_{-p}(t_0+r,t)-1) +\end{split} +\tag{13} +\end{equation} +$$ + +for $t \in [t_0 + r, T)_\mathbb{T}$ and + +$$ +u^m(t) \leq a(t) + \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s +\quad (14) +$$ + +for $t \in [t_0, t_0 + r)_T$. + +Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then + +$$ +u^m(t) \le c(t)e_b(t_0, t), \quad t \in [t_0, T)_T, \quad (15) +$$ + +where $c(t) := a(t) + (m-n)/n$. + +*Proof.* Let $z(t) = \int_{t_0}^t b(s)u^n(s-r)\Delta s$. Then, $z(t_0) = 0$, $u^m(t) \le a(t)+z(t)$ and $z(t)$ is positive, nondecreasing for $t \in [t_0, T)_T$. By Lemma 2, we get + +$$ +\begin{align*} +z^\Delta (t) &= b(t) u^n (t-r) \le b(t) [a(t-r) + z(t-r)]^{n/m} \\ +&\le b(t) \left[ \frac{n}{m} (a(t-r) + z(t-r)) + \frac{m-n}{m} \right] \\ +&\le \frac{n}{m} b(t) z(\sigma(t)) + \frac{n}{m} b(t) a(t-r) + \frac{m-n}{m} b(t) \\ +&= p(t) z(\sigma(t)) + p(t) a(t-r) + \frac{m-n}{n} p(t) +\end{align*} +\tag{16} +$$ + +for $t \in [t_0 + r, T)_T$. Multiplying (16) by $e_{-p}(t, t_0 + r) > 0$, we get + +$$ +(z(t)e_{-p}(t, t_0 + r))^{\Delta} &\le p(t)a(t-r)e_{-p}(t, t_0 + r) \\ +&\quad + \frac{m-n}{n}p(t)e_{-p}(t, t_0 + r). 
+\tag{17} +$$ +---PAGE_BREAK--- + +Integrating both sides from $t_0 + r$ to $t$, we obtain + +$$ +\begin{align} +z(t) \le e_{-p}(t_0+r,t)z(t_0+r) & \nonumber \\ +& + e_{-p}(t_0+r,t) \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t_0+r) \Delta s \nonumber \\ +& + \frac{m-n}{n} (e_{-p}(t_0+r,t)-1). \tag{18} +\end{align} +$$ + +For $t \in [t_0, t_0 + r)_{\mathbb{T}}$, $z^{\Delta}(t) \le b(t)\phi^n(t-r)$, so + +$$ +z(t) \leq \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s. \quad (19) +$$ + +Using (18) and (19), we get + +$$ +\begin{align} +z(t) \le e_{-p}(t_0+r,t) & \int_{t_0}^{t_0+r} b(s) \phi^n(s-r) \Delta s \nonumber \\ +& + \int_{t_0+r}^{t} p(s) a(s-r) e_{-p}(s,t) \Delta s \tag{20} \\ +& + \frac{m-n}{n} (e_{-p}(t_0+r,t)-1) \nonumber +\end{align} +$$ + +for $t \in [t_0 + r, T)_\mathbb{T}$. + +Noting that $u^m(t) \le a(t) + z(t)$, inequalities (13) and (14) follow. + +Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then for $t \in [t_0, t_0 + r)_{\mathbb{T}}$, by (14), we have + +$$ +\begin{equation} +\begin{aligned} +u^m(t) &\le a(t) + \phi^n (t-r) \int_{t_0}^t b(s) \Delta s \\ +&\le a(t) \left( 1 + \int_{t_0}^t b(s) \Delta s \right) \le c(t) e_{-b}(t_0, t). +\end{aligned} +\tag{21} +\end{equation} +$$ + +If $t \in [t_0 + r, T)_\mathbb{T}$, by (13), + +$$ +\begin{align*} +& u^m(t) \le a(t) + e_{-p}(t_0+r,t)a(t) \int_{t_0}^{t_0+r} b(s) \Delta s \\ +& \phantom{u^m(t) \le} + a(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\ +& \phantom{u^m(t) \le} + \frac{m-n}{n} \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\ +& \le c(t) + e_{-p}(t_0+r,t)c(t) \int_{t_0}^{t_0+r} b(s) \Delta s \tag{22} \\ +& \phantom{u^m(t) \le} + c(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\ +& = c(t)e_{-p}(t_0+r,t) \left(1 + \int_{t_0}^{t_0+r} b(s)\Delta s\right) \\ +& \le c(t)e_{-b}(t_0,t). +\end{align*} +$$ + +The proof is complete. 
$\square$ + +**Theorem 4.** Assume that $u(t)$ satisfies condition (8), $a(t) \ge 0$, $K := 2^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}e^{-nr}$, $b_1(t) := (n/m)Kb^m(t)$, $-Kb^m \in \mathcal{R}^+$; then + +$$ +\begin{align} +u(t) &\le e^t [w_1(t) + y_1(t)]^{1/m}, && t \in [t_0 + r, T)_\mathbb{T}, \\ +u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi^{n/m} (s-r) \Delta s, && (23) +\end{align} +$$ + +$$ +t \in [t_0, t_0 + r)_\mathbb{T}, +$$ + +where $w_1(t) := 2^{m-1}a^m(t)e^{-mt_0}$, $\phi_1(t) := e^{-t_0}e^r\phi(t)$, +and $y_1(t) := \int_{t_0+r}^{t} b_1(s)w_1(s-r)e_{-b_1}(s,t)\Delta s + e_{-b_1}(t_0+r,t)\int_{t_0}^{t_0+r} K b^m(s)\phi_1^n(s-r)\Delta s + ((m-n)/n)(e_{-b_1}(t_0+r,t)-1)$. + +If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and +$a^m(t_0) = 2^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then + +$$ +u(t) \le e^t [\alpha(t) e_{-Kb^n}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_\mathbb{T}, \quad (24) +$$ + +where $\alpha(t) := w_1(t) + (m-n)/n$ + +*Proof.* The second inequality in (23) is obvious. Next, we will prove the first inequality in (23). For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices *p* and *m*, we obtain from (8) + +$$ +\begin{align} +u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m} (s-r) \Delta s \notag \\ +&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \notag \\ +&\qquad \times \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m}. \tag{25} +\end{align} +$$ + +By Jensen's inequality $(\sum_{i=1}^n x_i)^{\sigma} \le n^{\sigma-1} (\sum_{i=1}^n x_i^{\sigma})$, we get + +$$ +u^m(t) \le 2^{m-1} a^m(t) ++ 2^{m-1} \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{m/p} +\times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. 
\tag{26} +$$ + +For the first integral in (26), we have the estimate + +$$ +\begin{align} +&\int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \\ +&= \int_{0}^{t-t_0} \tau^{p\beta-p} e^{pn(t-\tau)/m} \Delta\tau \\ +&\le e^{pnt/m} \int_{0}^{t} \tau^{p\beta-p} e^{-pn\tau/m} \Delta\tau \tag{27} \\ +&= e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \int_{0}^{pnt/m} \sigma^{p\beta-p} e^{-\sigma}\Delta\sigma \\ +&< e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \Gamma(p\beta - p + 1). +\end{align} +$$ +---PAGE_BREAK--- + +Hence, + +$$ +\begin{equation} \tag{28} +\begin{aligned} +& u^m(t) \le 2^{m-1} a^m(t) + 2^{m-1} e^{mt} \Gamma^{m-1}(p\beta - p + 1) \\ +& \quad \times \left(\frac{m}{pn}\right)^{\beta m-1} \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s \\ +& \le 2^{m-1} a^m(t) e^{-mt_0} + 2^{m-1} \Gamma^{m-1}(p\beta - p + 1) \left(\frac{m}{pn}\right)^{\beta m-1} \\ +& \qquad \times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. +\end{aligned} +\end{equation} +$$ + +and so + +$$ +\begin{align*} +& (u(t)e^{-t})^m \\ +&\le 2^{m-1} a^m(t) e^{-mt_0} + 2^{m-1} \Gamma^{m-1}(p\beta - p+1) \left(\frac{m}{pn}\right)^{\beta m-1} \\ +&\qquad \times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. \tag{29} +\end{align*} +$$ + +Let $v(t) := e^{-t}u(t)$; then we have + +$$ +\begin{equation} +\begin{aligned} +v^m(t) &\le w_1(t) + K \int_{t_0}^{t} b^m(s) v^n(s-r) \Delta s, \\ + &\qquad t \in [t_0, T)_{\mathbb{T}}. +\end{aligned} +\tag{30} +\end{equation} +$$ + +For $t \in [t_0 - r, t_0]_{\mathbb{T}}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^r e^{-t_0}\phi(t)$; +that is, $v(t) \le \phi_1(t)$. By Lemma 3, we get + +$$ +\begin{equation} +\begin{aligned} +v^m(t) &\le w_1(t) + \int_{t_0+r}^{t} b_1(s) w_1(s-r) e_{-b_1}(s,t) \Delta s \\ +&\quad + e_{-b_1}(t_0+r,t) \int_{t_0}^{t_0+r} K b^m(s) \phi_1^n(s-r) \Delta s \\ +&\quad + \frac{m-n}{n} (e_{-b_1}(t_0+r,t)-1). +\end{aligned} +\tag{31} +\end{equation} +$$ + +Hence, the first inequality in (23) follows. 
+ +Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 2^{1-m}e^{(m-n)t_0}\phi^n(t_0)e^{nr}$, by Lemma 3, we have + +$$ +u(t) \le e^t [\alpha(t) e_{-Kb^m}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}. \quad (32) +$$ + +The proof is complete. + +**Lemma 5.** Let $a(t) \ge 0$, $b(t) > 0$, $c(t) > 0$, $p(t) := (nb(t)/m)$, +$q(t) := (nc(t)/m)$, $\gamma(t) := a(t) + (m-n)/n$ and $-p, -(p+c) \in$ +$\mathbb{R}^+$ and let $\phi(t) \ge 0$ be rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, where +$r \ge 0$ and $m \ge n > 0$ are real constants. If $u(t) \ge 0$ is rd- +continuous and + +$$ +\begin{equation} +\begin{aligned} +& u^m(t) \le a(t) + \int_{t_0}^{t} [b(s) u^n(s) + c(s) u^n(s-r)] \Delta s, \\ +& \qquad t \in [t_0, T)_\mathbb{T}, +\end{aligned} +\tag{33} +\end{equation} +$$ + +$$ +u(t) \leq \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}}, +$$ + +then + +$$ +\begin{align*} +& u^m(t) \\ +&\leq a(t) \\ +&\quad + \int_{t_0+r}^{t} [p(s)\gamma(s)+q(s)\gamma(s-r)] e_{-(p+q)}(s,t)\Delta s \\ +&\quad + e_{-(p+q)}(t_0+r,t) \\ +&\quad \times \int_{t_0}^{t_0+r} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t_0+r)\Delta s +\end{align*} +\tag{34} +$$ + +for $t \in [t_0 + r, T)_\mathbb{T}$ and + +$$ +u^m(t) \leq a(t) + \int_{t_0}^{t} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s +$$ + +for $t \in [t_0, t_0 + r]_{\mathbb{T}}$. + +Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then + +$$ +u^m(t) \leq \gamma(t) e_{-(p+c)}(t_0, t), \quad t \in [t_0, T)_T. \quad (36) +$$ + +Proof. Let $z(t) = \int_{t_0}^t [b(s)u^n(s)+c(s)u^n(s-r)]\Delta s$. Then, $z(t_0) = 0$, $u^m(t) \leq a(t) + z(t)$, $z(t)$ is positive and nondecreasing for $t \in [t_0, T)_T$. Further, we have + +$$ +z^\Delta (t) = b (t) u^n (t) + c (t) u^n (t-r). 
\quad (37) +$$ + +For $t \in [t_0, t_0 + r]_{\mathbb{T}}$, using Lemma 2, we have + +$$ +\begin{aligned} +z^\Delta (t) &\le b (t) (a (t) + z (t))^{n/m} + c (t) \phi^n (t-r) \\ +&\le b (t) \left[ \frac{n}{m} (a (t) + z (t)) + \frac{m-n}{m} \right] + c (t) \phi^n (t-r) \\ +&\le p (t) \gamma (t) + p (t) z (\sigma (t)) + c (t) \phi^n (t-r), +\end{aligned} +$$ + +so that + +$$ +(e_{-p}(t, t_0) z(t))^\Delta \le (p(t)\gamma(t)+c(t)\phi^n(t-r))e_{-p}(t, t_0). \quad (38) +$$ + +Integrating both sides from $t_0$ to $t$, we obtain + +$$ +z(t) \leq \int_{t_0}^{t} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s. \quad (39) +$$ +---PAGE_BREAK--- + +For $t \in [t_0 + r, T)_{\mathbb{T}}$, + +$$ +\begin{aligned} +z^{\Delta}(t) &\le b(t)[a(t) + z(t)]^{n/m} \\ +&\quad + c(t)[a(t-r) + z(t-r)]^{n/m} \\ +&\le b(t)\left(\frac{n}{m}(a(t)+z(t)) + \frac{m-n}{m}\right) \\ +&\quad + c(t)\left(\frac{n}{m}(a(t-r)+z(t-r)) + \frac{m-n}{m}\right) \\ +&\le \left(\frac{n}{m}b(t) + \frac{n}{m}c(t)\right)z(\sigma(t)) + \frac{n}{m}b(t)a(t) \\ +&\quad + \frac{n}{m}c(t)a(t-r) + \frac{m-n}{m}b(t) + \frac{m-n}{m}c(t) \\ +&\le (p(t)+q(t))z(\sigma(t)) + p(t)\gamma(t) + q(t)\gamma(t-r). +\end{aligned} +\tag{40} +$$ + +Hence, we get + +$$ +\begin{align} +(e_{-(p+q)}(t, t_0 + r) z(t))^\Delta & \tag{41} \\ +&\le (p(t) \gamma(t) + q(t) \gamma(t-r)) e_{-(p+q)}(t, t_0 + r). \nonumber +\end{align} +$$ + +Integrating both sides from $t_0 + r$ to $t$, we obtain + +$$ +\begin{align*} +z(t) &\le e_{-(p+q)}(t_0+r,t)z(t_0+r) \\ + &\quad + e_{-(p+q)}(t_0+r,t) \\ + &\quad \times \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t_0+r) \Delta s \\ + &\le e_{-(p+q)}(t_0+r,t) \\ + &\quad \times \int_{t_0}^{t_0+r} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t_0+r) \Delta s \\ + &\quad + \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t) \Delta s. +\end{align*} +\tag{42} +$$ + +Using $u^m(t) \le a(t) + z(t)$, we get inequalities (34) and (35).
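The elementary estimate from Lemma 2 used twice above, $x^{n/m} \le \frac{n}{m}x + \frac{m-n}{m}$ for $x \ge 0$ and $m \ge n > 0$ (a Young-type inequality, with equality at $x = 1$), is easy to probe numerically. A minimal Python sketch with illustrative exponent pairs:

```python
# Young-type bound: x**(n/m) <= (n/m)*x + (m-n)/m for x >= 0 and m >= n > 0,
# with equality at x = 1 (the exponent pairs below are arbitrary test values).
def lemma2_bound_holds(x, n, m, tol=1e-12):
    return x ** (n / m) <= (n / m) * x + (m - n) / m + tol

for n, m in [(1.0, 2.0), (2.0, 3.0), (0.5, 4.0), (3.0, 3.0)]:
    assert all(lemma2_bound_holds(k * 0.01, n, m) for k in range(2001))  # x in [0, 20]
print("Lemma 2 bound verified on all samples")
```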
+ +Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then, by (35), + +$$ +\begin{align} +u^m(t) &\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-p}(s,t) \Delta s \right) \notag \\ +&\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-(p+c)}(s,t) \Delta s \right) \tag{43} \\ +&\le \gamma(t) e_{-(p+c)}(t_0,t) \notag +\end{align} +$$ + +for $t \in [t_0, t_0 + r)_{\mathbb{T}}$. Furthermore, by (34), + +$$ +\begin{align*} +u^m(t) &\le \gamma(t) + \gamma(t) e_{-(p+q)}(t_0 + r, t) \\ +&\quad \times \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-p}(s,t_0+r) \Delta s \\ +&\quad + \gamma(t) \int_{t_0+r}^{t} (p(s)+q(s)) e_{-(p+q)}(s,t) \Delta s \\ +&\le \gamma(t) e_{-(p+q)}(t_0+r, t) \\ +&\quad \times \left( 1 + \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-(p+c)}(s,t_0+r) \Delta s \right) \\ +&= \gamma(t) e_{-(p+c)}(t_0, t) +\end{align*} +\tag{44} +$$ + +for $t \in [t_0 + r, T)_{\mathbb{T}}$. The proof is complete. $\square$ + +**Theorem 6.** Assume that $u(t)$ satisfies condition (9), $a(t) \ge 0$, $K := 3^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}$, $p(t) := nKb^m(t)/m$, $c_1(t) := Ke^{-nr}c^m(t)$, $q(t) := (n/m)c_1(t)$, $-p, -(p+c_1) \in \mathbb{R}^+$. + +If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and +$a^m(t_0) = 3^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then + +$$ +u(t) \le e^{t} [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}, \quad (45) +$$ + +where $\gamma(t) = 3^{m-1}a^m(t)e^{-mt_0} + (m-n)/n$. + +Proof.
For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices $p$ and $m$, we obtain from (9) that + +$$ +\begin{align*} +u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m}(s) \Delta s \\ +&\quad + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} c(s) e^{-ns/m} u^{n/m}(s-r) \Delta s \\ +&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \\ +&\quad \times \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} \\ +&\quad + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \\ +&\quad \times \left( \int_{t_0}^{t} c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m} +\end{align*} +$$ +---PAGE_BREAK--- + +$$ +\begin{equation} +\begin{aligned} +& \le a(t) + e^{nt/m} \left(\frac{m}{pn}\right)^{\beta-1+1/p} \Gamma^{1/p}(p\beta - p + 1) \\ +& \quad \times \left[ \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} \right. \\ +& \qquad \left. + \left( \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m} \right]. +\end{aligned} +\tag{46} +\end{equation} +$$ + +By Jensen's inequality $(\sum_{i=1}^n x_i)^\sigma \le n^{\sigma-1} (\sum_{i=1}^n x_i^\sigma)$, we get + +$$ +\begin{align*} +& u^m(t) \\ +&\le 3^{m-1}a^m(t) + 3^{m-1}e^{nt}\left(\frac{m}{pn}\right)^{(m\beta-1)}\Gamma^{m-1}(p\beta - p + 1) \\ +&\quad \times \left(\int_{t_0}^t b^m(s)e^{-ns}u^n(s)\Delta s + \int_{t_0}^t c^m(s)e^{-ns}u^n(s-r)\Delta s\right). \tag{47} +\end{align*} +$$ + +So, + +$$ +\begin{equation} +\begin{aligned} +& (u(t)e^{-t})^m \\ +&\le 3^{m-1} a^m(t) e^{-mt_0} \\ +&\quad + 3^{m-1} \left(\frac{m}{pn}\right)^{(m\beta-1)} \Gamma^{m-1}(p\beta - p + 1) \\ +&\quad \times \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s + \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right). 
+\end{aligned} +\tag{48} +\end{equation} +$$ + +Let $v(t) := e^{-t}u(t)$, $w_2(t) := 3^{m-1}a^m(t)e^{-mt_0}$; we have + +$$ +\begin{equation} +\begin{aligned} +v^m(t) &\le w_2(t) + \int_{t_0}^t K b^m(s) v^n(s) \Delta s \\ +&\quad + \int_{t_0}^t K e^{-nr} c^m(s) v^n(s-r) \Delta s +\end{aligned} +\tag{49} +\end{equation} +$$ + +for $t \in [t_0, T)_\mathbb{T}$. For $t \in [t_0 - r, t_0]_\mathbb{T}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^{-t_0}e^r\phi(t)$; that is, $v(t) \le \phi_1(t)$. By Lemma 5, we get + +$$ +u(t) \le e^t [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}. \quad (50) +$$ + +The proof is complete. + +The following is a simple consequence of Theorem 4. + +**Corollary 7.** Suppose that $m = n = 2$, + +$$ +\begin{align} +& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r) \Delta s, \notag \\ +& \phantom{u(t) \le a(t) + } t \in [t_0, T)_{\mathbb{T}}, \tag{51} \\ +& u(t) \le \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}}; \notag +\end{align} +$$ + +then + +$$ +\begin{align*} +u(t) &\le e^t \left[ w_1(t) + \int_{t_0+r}^t Kb^2(s) w_1(s-r) e_{-Kb^2}(s,t) \Delta s \right. \\ +&\qquad \left. + e_{-Kb^2}(t_0+r,t) \right. \\ +&\qquad \left. \times \int_{t_0}^{t_0+r} Kb^2(s) \phi_1^2(s-r) \Delta s \right]^{1/2}, \\ +&\qquad t \in [t_0+r,T)_\mathbb{T}, \\ +u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) \Delta s, \\ +&\qquad t \in [t_0, t_0+r)_\mathbb{T}, +\end{align*} +$$ + +where $K := \Gamma(2\beta - 1)e^{-2r} \cdot (1/4^{\beta-1})$, $w_1(t) := 2a^2(t)e^{-2t_0}$, +$\phi_1(t) := e^{-t_0}e^r\phi(t)$. + +If $\mathbb{T} = \mathbb{R}$, then the conclusion reduces to that of Theorem A for $\beta > 1/2$. + +**Conflict of Interests** + +The authors declare that there is no conflict of interests regarding the publication of this paper. + +**Acknowledgments** + +The first author's research was supported by the NNSF of China (11071054) and the Natural Science Foundation of Hebei Province (A2011205012).
The corresponding author's research was partially supported by an HKU URG grant. + +**References** + +[1] R. P. Agarwal, S. Deng, and W. Zhang, “Generalization of a retarded Gronwall-like inequality and its applications,” *Applied Mathematics and Computation*, vol. 165, no. 3, pp. 599–612, 2005. + +[2] B. G. Pachpatte, “Explicit bounds on certain integral inequalities,” *Journal of Mathematical Analysis and Applications*, vol. 267, no. 1, pp. 48–61, 2002. + +[3] W.-S. Cheung, “Some new nonlinear inequalities and applications to boundary value problems,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 64, no. 9, pp. 2112–2128, 2006. + +[4] C.-J. Chen, W.-S. Cheung, and D. Zhao, “Gronwall-Bellman-type integral inequalities and applications to BVPs,” *Journal of Inequalities and Applications*, vol. 2009, Article ID 258569, 15 pages, 2009. + +[5] Y. G. Sun, “On retarded integral inequalities and their applications,” *Journal of Mathematical Analysis and Applications*, vol. 301, no. 2, pp. 265–275, 2005. + +[6] H. Zhang and F. Meng, “On certain integral inequalities in two independent variables for retarded equations,” *Applied Mathematics and Computation*, vol. 203, no. 2, pp. 608–616, 2008. + +[7] H. Ye and J. Gao, “Henry-Gronwall type retarded integral inequalities and their applications to fractional differential +---PAGE_BREAK--- + +equations with delay,” *Applied Mathematics and Computation*, vol. 218, no. 8, pp. 4152–4160, 2011. + +[8] O. Lipovan, “A retarded Gronwall-like inequality and its applications,” *Journal of Mathematical Analysis and Applications*, vol. 252, no. 1, pp. 389–401, 2000. + +[9] O. Lipovan, “A retarded integral inequality and its applications,” *Journal of Mathematical Analysis and Applications*, vol. 285, no. 2, pp. 436–443, 2003. + +[10] F. Jiang and F. Meng, “Explicit bounds on some new nonlinear integral inequalities with delay,” *Journal of Computational and Applied Mathematics*, vol. 205, no. 1, pp. 479–486, 2007. 
\ No newline at end of file diff --git a/samples/texts_merged/3295535.md b/samples/texts_merged/3295535.md new file mode 100644 index 0000000000000000000000000000000000000000..4e9df8ef2f2fa549653cfbb548f3b6e31375facb --- /dev/null +++ b/samples/texts_merged/3295535.md @@ -0,0 +1,1186 @@ + +---PAGE_BREAK--- + +# Edinburgh Research Explorer + +## Infrared singularities of QCD scattering amplitudes in the Regge limit to all orders + +**Citation for published version:** + +Caron-Huot, S, Gardi, E, Reichel, J & Vernazza, L 2018, 'Infrared singularities of QCD scattering amplitudes in the Regge limit to all orders', *Journal of High Energy Physics*, vol. N/A, 98, pp. 1-34. +https://doi.org/10.1007/JHEP03(2018)098 + +**Digital Object Identifier (DOI):** +10.1007/JHEP03(2018)098 + +**Link:** +Link to publication record in Edinburgh Research Explorer + +**Document Version:** +Other version + +**Published In:** +Journal of High Energy Physics + +**General rights** + +Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. + +**Take down policy** + +The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer content complies with UK legislation. If you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim. 
+---PAGE_BREAK--- + +Infrared singularities of QCD scattering amplitudes +in the Regge limit to all orders + +Simon Caron-Huot,ª Einan Gardi,ᵇ Joscha Reichel,ᵇ Leonardo Vernazzaᵇ,c + +ªDepartment of Physics, McGill University, 3600 rue University, Montréal, QC Canada H3A 2T8 + +ᵇHiggs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3FD, Scotland, UK + +ᶜNikhef, Science Park 105, NL-1098 XG Amsterdam, The Netherlands + +E-mail: schuot@physics.mcgill.ca, Einan.Gardi@ed.ac.uk, +joscha.reichel@ed.ac.uk, l.vernazza@nikhef.nl + +**ABSTRACT:** Scattering amplitudes of partons in QCD contain infrared divergences which can be resummed to all orders in terms of an anomalous dimension. Independently, in the limit of high-energy forward scattering, large logarithms of the energy can be resummed using Balitsky-Fadin-Kuraev-Lipatov theory. We use the latter to analyze the infrared-singular part of amplitudes to all orders in perturbation theory and to next-to-leading-logarithm accuracy in the high-energy limit, resumming the two-Reggeon contribution. Remarkably, we find a closed form for the infrared-singular part, predicting the Regge limit of the soft anomalous dimension to any loop order. + +**KEYWORDS:** scattering amplitudes, Regge, BFKL, resummation, QCD + +*In memory of Lev Nikolaevich Lipatov and his pioneering contributions* +---PAGE_BREAK--- + +# Contents + +
1 Introduction
2 Scattering amplitudes by iterated solution of the BFKL equation
2.1 The even amplitude from the BFKL wavefunction
2.2 Iterative solution for the wavefunction and amplitude
3 The soft approximation
3.1 The wavefunction at NLL to all orders
3.2 The all-order structure of two-parton scattering amplitudes at NLL
4 The soft anomalous dimension in the high-energy limit to all orders
4.1 The infrared factorisation formula in the Regge limit
4.2 Extracting the soft anomalous dimension at NLL
4.3 Properties of the soft anomalous dimension in the Regge limit
4.4 Exponentiation check for higher-order infrared poles
5 Conclusions
A The even amplitude at NLL accuracy within the shockwave formalism
B Proof of the all-order amplitude
+ +## 1 Introduction + +The high-energy limit of QCD scattering has always been a subject of much theoretical interest, see e.g. [1–7]. In particular, the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation [1, 2] provides a theoretical framework to resum high-energy (or rapidity) logarithms to all orders in perturbation theory. It was used extensively to investigate a range of physical phenomena including the small-x behaviour of deep-inelastic structure functions and parton densities, and jet production with large rapidity gaps. The non-linear generalisations of BFKL, known as the Balitsky-JIMWLK equations [8–13], extend the range of phenomena further, e.g. to describe gluon saturation in heavy-ion collisions. + +On the theoretical front, a separate line of investigation concerns the structure of partonic scattering amplitudes in the high-energy limit [14–24]. Scattering amplitudes of quarks and gluons are dominated at high energies by the t-channel exchange of effective excitations dubbed Reggeized gluons. In this context the BFKL equation and its generalisations provide again a highly valuable tool: by solving these equations iteratively one can compute high-energy logarithms order-by-order in perturbation theory [23, 24]. +---PAGE_BREAK--- + +The real part of a $2 \to 2$ partonic amplitude (i.e. its signature-odd part, see eq. (2.1)) is governed by an odd number of Reggeized gluons. The leading high-energy logarithms simply exponentiate, dressing the *t*-channel gluon propagator by a power of $s/t$. In Regge theory (see e.g. [25]) this behaviour corresponds to a Regge pole in the complex angular momentum plane. QCD amplitudes can thus be factorized in the high-energy limit into a *t*-channel Reggeized gluon exchange which captures the dependence on the energy, and energy-independent impact factors that depend on the colliding partons.
However, this simple picture does not extend beyond next-to-leading logarithms (NLL) due to multiple Reggeized gluon exchanges, which form Regge cuts. This was recently demonstrated explicitly in ref. [24], where these effects were computed through three loops, by constructing an iterative solution of the relevant BFKL or Balitsky-JIMWLK equation, describing the evolution of three Reggeized gluons and their mixing with a single Reggeized gluon. + +In this paper we extend this study, focusing on the imaginary part of $2 \to 2$ partonic amplitudes, which is governed by the exchange of an even number of Reggeized gluons, which also form Regge cuts. The leading logarithmic corrections to the even amplitude are determined to all orders by a wavefunction of a pair of Reggeized gluons, which solves the celebrated BFKL evolution equation. This iterative solution, which will be central to the present work, can be famously described by ladder graphs, where an additional rung is generated at each order in the loop expansion. + +The study of scattering amplitudes in the high-energy limit [14–24] is intimately linked to the study of their infrared singularity structure. Indeed, the gluon Regge trajectory $\alpha_g(t)$ is infrared-singular, and its exponentiation along with the energy logarithms, which is a manifestation of Reggeization, is readily consistent with the exponentiation of soft singularities through the relevant renormalization group equation. The latter of course holds also away from the high-energy limit, as guaranteed by infrared factorization theorems. The correspondence between the structure of amplitudes in the high-energy limit, which is governed by rapidity evolution equations, on the one hand, and the structure of infrared singularities on the other, becomes more complicated at subleading orders.
While both separately provide means to explore the structure of amplitudes to all orders in perturbation theory, the interplay between the two provides additional insight in either direction, as demonstrated multiple times over the past few years [19–24]. + +Infrared singularities of massless scattering amplitudes are now fully known, for general colour, kinematics and any number of partons, through three loops, owing to an explicit computation of the soft anomalous dimension at this order [26, 27]. While through two loops infrared singularities are governed exclusively by a sum over colour dipoles formed by pairs of the hard-scattered partons [28–31], at three loops one encounters for the first time infrared singularities that are simultaneously sensitive to the colour and kinematics of three and four hard partons. Subsequently, ref. [24] specialised these results to the high-energy limit, and provided a detailed comparison between the singularity structure deduced from the soft anomalous dimension and what has been established there through three loops via computations in the high-energy limit. While full consistency was found, remarkably, it was shown that at three loops (see eq. (4.11) there) the real part of the amplitude is only sensitive to non-dipole corrections starting at N³LL accuracy, while for the imaginary part +---PAGE_BREAK--- + +of the amplitude they appear already at NNLL accuracy. + +As an application of the interplay between these limits, it was recently demonstrated [32] that the functional form of the three-loop soft anomalous dimension in general kinematics can in fact be fully recovered via a bootstrap procedure using the high-energy limit of $2 \to 2$ scattering, alongside other information, as input. The bootstrap programme of the soft anomalous dimension can be extended beyond three loops, provided that information from special kinematic limits is available. 
The imaginary part of $2 \to 2$ amplitudes is a natural place to start; indeed, already in ref. [23], a non-dipole contribution at four loops and NLL accuracy could be predicted using BFKL theory. + +In the present paper we continue to develop this line of investigation of the high-energy limit of $2 \to 2$ scattering, focusing on the imaginary (signature-even) part of the amplitude, which is governed, as mentioned above, by the exchange of a pair of Reggeized gluons satisfying the BFKL evolution equation. The leading-order equation is sufficient to determine an infinite tower of high-energy logarithms in the soft anomalous dimension¹. + +Although the BFKL Hamiltonian has been diagonalised in many instances [3], to study partonic amplitudes requires us to use the dimensionally-regulated Hamiltonian, which is comparatively less understood. We will nonetheless find an exact iterative solution! This hinges on the following observations: the two-Reggeon wavefunction itself turns out to be finite at all orders, so that infrared divergences are controlled by the limit of the wavefunction where a Reggeized gluon becomes soft. The evolution equation then closes within that limit, dramatically simplifying its solution. This will enable us to obtain the soft limit of the two-Reggeon wavefunction to all loop orders and NLL accuracy, and corresponding closed-form expressions for the singular part of the amplitude (see eq. (3.18)) and soft anomalous dimension (see eq. (4.20) with (4.21)), which turns out to be an entire function of the coupling. + +The structure of the paper is as follows. In section 2 we recall the basic notions regarding the high-energy limit of $2 \to 2$ amplitudes and explain how the BFKL evolution equation can be solved iteratively to determine the two Reggeized gluon wavefunction and the imaginary part of the amplitude.
In section 2 we also reformulate the equation so as to explicitly display the fact that the evolution retains infrared finiteness, comment on the symmetries displayed by the evolution and recover the four-loop results of ref. [23]. Appendix A completes this review by explaining how the particular form of the evolution equation used here follows from the more general non-linear set up used in refs. [8, 23, 24]. In section 3 we consider the soft approximation, show that the evolution closes in this limit, and exploit this simplification to derive all-order solutions for the wavefunction and amplitude. Finally in section 4 we study the implications of our results in the high-energy limit regarding the soft anomalous dimension, obtaining a closed-form solution for the latter at next-to-leading order in high-energy logarithms to all orders, and verify the consistency of our BFKL-based result with infrared exponentiation. + +¹We refer to these as next-to-leading logarithms, owing to their suppression by one logarithm compared to the Reggeized-gluon corrections to the real part of the amplitudes. +---PAGE_BREAK--- + +## 2 Scattering amplitudes by iterated solution of the BFKL equation + +The well-known BFKL evolution equation predicts the rapidity dependence of two-parton amplitudes in the high-energy limit [1, 2]. In the following we briefly summarise the conclusions from this approach regarding the leading contributions to the signature-even amplitude, or the two-Reggeon cut. + +### 2.1 The even amplitude from the BFKL wavefunction + +**Figure 1.** The $t$-channel exchange dominating the high-energy limit, $s \gg -t > 0$. The figure also defines our conventions for momenta assignment and Mandelstam invariants. We shall assume that particles 2 and 3 (1 and 4) are of the same type and have the same helicity. + +Let us consider a $2 \to 2$ scattering amplitude $M_{ij\to ij}$, where $i,j$ can be a quark or a gluon. The momenta are assigned as indicated in figure 1.
In the following we will suppress the species indices $i, j$, unless explicitly needed. The high-energy limit corresponds to a configuration of forward scattering, such that the Mandelstam variables satisfy $s \gg -t > 0$. In analysing this limit it is convenient to decompose the amplitude into its odd and even components with respect to $s \leftrightarrow u$ exchange, the so-called *signature*: + +$$ M^{(\pm)}(s,t) = \frac{1}{2} \left( M(s,t) \pm M(-s,-t) \right), \qquad (2.1) $$ + +where $M^{(+)}$, $M^{(-)}$ are referred to, respectively, as the *even* and *odd* amplitudes. As shown in ref. [24], these have respectively *real* and *imaginary* coefficients, when expressed in terms of the natural signature-even combination of logarithms, + +$$ \frac{1}{2} \left( \log \frac{-s-i0}{-t} + \log \frac{-u-i0}{-t} \right) \approx \log \left| \frac{s}{t} \right| - i \frac{\pi}{2} \equiv L, \qquad (2.2) $$ + +and have independent factorisation properties in the high-energy limit. The effect we discuss in the following originates from the exchange of two Reggeons, therefore it proves useful² to + +²The full advantage of considering the reduced amplitude will become clear in what follows. First, BFKL evolution of the reduced amplitude involves an extra term proportional to $T_t^2$ in (2.17). This term renders the wavefunction finite. Second, upon performing infrared factorization of the reduced amplitude one is able to identify the NLL terms that originate in the soft anomalous dimension — see eq. (4.10). +---PAGE_BREAK--- + +define a reduced amplitude, as introduced in ref. [24], dividing by the effect of one-Regge +exchange: + +$$ +\hat{\mathcal{M}}_{ij \to ij} \equiv e^{-\mathbf{T}_i^2 \alpha_g(t) L} \mathcal{M}_{ij \to ij}, \quad (2.3) +$$ + +where $\mathbf{T}_t^2$ represents the total colour charge exchanged in the $t$ channel (see eq. (2.10) +below). The function $\alpha_g(t)$ in eq. 
(2.3) represents the gluon Regge trajectory having the +perturbative expansion + +$$ +\alpha_g(t) = \sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n \alpha_g^{(n)}(t). \qquad (2.4) +$$ + +Given that we work to NLL accuracy, we will only need the gluon Regge trajectory to first +order in $\alpha_s$, where in $d = 4 - 2\epsilon$ dimensions + +$$ +\alpha_g^{(1)}(t) = \frac{B_0(\epsilon)}{2\epsilon} \left(\frac{-t}{\mu^2}\right)^{-\epsilon} \stackrel{\mu^2 \rightarrow -t}{=} \frac{B_0(\epsilon)}{2\epsilon}. \quad (2.5) +$$ + +Here, $B_0(\epsilon)$ is a ubiquitous loop factor and the first of a class of bubble integrals, cf. eq. (3.6), +to become important in section 3. For now, it suffices to know that + +$$ +B_0(\epsilon) = e^{\epsilon\gamma_E} \frac{\Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}{\Gamma(1-2\epsilon)} = 1 - \frac{\zeta_2}{2}\epsilon^2 - \frac{7\zeta_3}{3}\epsilon^3 + \dots \quad (2.6) +$$ + +In the following we will consider the leading contributions to the signature-even amplitude +to all orders, corresponding to the two-Reggeon exchange. These corrections — which +we denote by $\hat{\mathcal{M}}_{\text{NLL}}^{(+)}$ — were studied long ago [1, 2] and can be expressed in terms of the +two-Reggeized-gluon wavefunction $\Omega(p, k)$ as follows: + +$$ +\hat{\mathcal{M}}_{\text{NLL}}^{(+)} \left( \frac{s}{-t} \right) = -i\pi \int [\mathrm{D}k] \frac{p^2}{k^2(p-k)^2} \Omega(p,k) \mathbf{T}_{s-u}^2 \mathcal{M}_{ij\to ij}^{(\text{tree})}, \quad (2.7) +$$ + +where the integration measure is + +$$ +[\mathrm{D}k] = \frac{\pi}{B_0} \left( \frac{\mu^2}{4\pi e^{-\gamma_E}} \right)^{\epsilon} \frac{\mathrm{d}^{2-2\epsilon}k}{(2\pi)^{2-2\epsilon}}, \quad (2.8) +$$ + +with $B_0 \equiv B_0(\epsilon)$ and the tree amplitude is + +$$ +M_{ij \to ij}^{(\text{tree})} = 4\pi\alpha_s \frac{2s}{t} (T_i^b)_{a_1a_4} (T_j^b)_{a_2a_3} \delta_{\lambda_1\lambda_4} \delta_{\lambda_2\lambda_3}, \quad (2.9) +$$ + +where $\lambda_i$ for $i = 1$ through 4 are helicity indices.
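The expansion of the loop factor in eq. (2.6) can be checked with standard-library Python alone: evaluating the exact Gamma-function expression at a small $\epsilon$ and comparing with the truncated series, the difference is of the expected $\mathcal{O}(\epsilon^4)$ size (the test point $\epsilon = 0.01$ is an arbitrary choice):

```python
import math

EULER_GAMMA = 0.5772156649015329
ZETA2 = math.pi ** 2 / 6
ZETA3 = 1.2020569031595943

def B0(eps):
    # exact expression in eq. (2.6)
    return (math.exp(eps * EULER_GAMMA) * math.gamma(1 - eps) ** 2
            * math.gamma(1 + eps) / math.gamma(1 - 2 * eps))

def B0_series(eps):
    # truncated expansion quoted in eq. (2.6)
    return 1 - ZETA2 / 2 * eps ** 2 - 7 * ZETA3 / 3 * eps ** 3

eps = 0.01
assert abs(B0(eps) - B0_series(eps)) < 1e-7  # leftover term is O(eps**4)
print(B0(eps), B0_series(eps))
```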
The colour operator $\mathbf{T}_{s-u}^2$ in eq. (2.7) acts +on $\mathcal{M}_{ij\to ij}^{(\text{tree})}$ and it is defined in terms of the usual basis of Casimirs corresponding to colour +flow through the three channels [22, 33]: + +$$ +\mathbf{T}_{s-u}^2 \equiv \frac{\mathbf{T}_s^2 - \mathbf{T}_u^2}{2}, \quad \text{with} +\quad +\begin{cases} +\mathbf{T}_s = \mathbf{T}_1 + \mathbf{T}_2 = -\mathbf{T}_3 - \mathbf{T}_4, \\ +\mathbf{T}_u = \mathbf{T}_1 + \mathbf{T}_3 = -\mathbf{T}_2 - \mathbf{T}_4, \\ +\mathbf{T}_t = \mathbf{T}_1 + \mathbf{T}_4 = -\mathbf{T}_2 - \mathbf{T}_3, +\end{cases} +\tag{2.10} +$$ +---PAGE_BREAK--- + +where $\mathbf{T}_i$ represent the colour charge operator [34] in the representation corresponding to parton $i$. The wavefunction $\Omega(p, k)$ has a perturbative expansion in the strong coupling, taking the form + +$$ \Omega(p,k) = \sum_{\ell=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^{\ell} L^{\ell-1} \frac{B_0^{\ell}}{(\ell-1)!} \Omega^{(\ell-1)}(p,k), \quad (2.11) $$ + +where we set the renormalization scale equal to the momentum transfer, $\mu^2 = -t = p^2$. The amplitude itself then has the corresponding expansion + +$$ \hat{\mathcal{M}}_{\text{NLL}}^{(+)} \left( \frac{s}{-t} \right) = \sum_{\ell=1}^{\infty} \left( \frac{\alpha_s}{\pi} \right)^{\ell} L^{\ell-1} \hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}. \quad (2.12) $$ + +We emphasise that while these corrections are the leading-logarithmic contributions to the even amplitude, we denote them by NLL to recall that the power of the logarithm $L$ is one less than the loop order. This can be contrasted with the single-Reggeized-gluon contribution to the odd amplitude $\mathcal{M}_{\text{LL}}^{(-)} \sim e^{\mathbf{T}_t^2 \alpha_g(t)L} \mathcal{M}^{\text{(tree)}}$. + +In eq. 
(2.12) $\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}$ contains $\ell$-loop diagrams and can be computed from the $(\ell-1)$-loop contribution to the wavefunction through integration + +$$ \hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)} = -i\pi \frac{(B_0)^\ell}{(\ell-1)!} \int [\mathrm{D}k] \frac{p^2}{k^2(p-k)^2} \Omega^{(\ell-1)}(p,k) \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}. \quad (2.13) $$ + +In the normalisation used in eq. (2.13), the leading-order wavefunction is simply + +$$ \Omega^{(0)}(p, k) = 1. \quad (2.14) $$ + +At loop level the wavefunction is then obtained iteratively by applying the BFKL Hamiltonian: + +$$ \begin{aligned} \Omega^{(\ell-1)}(p, k) &= (2C_A - \mathbf{T}_t^2) \int [\mathrm{D}k'] f(p, k, k') \Omega^{(\ell-2)}(p, k') + \tilde{J}(p, k) \Omega^{(\ell-2)}(p, k) \\ &\equiv \hat{H} \Omega^{(\ell-2)}(p, k) \end{aligned} \quad (2.15) $$ + +where $f(p, k, k')$ is the BFKL evolution kernel + +$$ f(p, k, k') = \frac{k^2}{k'^2 (k-k')^2} + \frac{(p-k)^2}{(p-k')^2 (k-k')^2} - \frac{p^2}{k'^2 (p-k')^2}, \quad (2.16) $$ + +and the function $\tilde{J}(p, k)$ is + +$$ \tilde{J}(p, k) = \frac{1}{2\epsilon} \left[ C_A \left( \frac{p^2}{k^2} \right)^{\epsilon} + C_A \left( \frac{p^2}{(p-k)^2} \right)^{\epsilon} - \mathbf{T}_t^2 \right]. \quad (2.17) $$ + +Eq. (2.15) is the standard BFKL Hamiltonian (see eq. (17) of the initial reference [1]) written using dimensional regularisation as an infrared regulator. $\tilde{J}(p, k)$ accounts for the Regge trajectories of the individual Reggeized gluons, minus the overall Regge trajectory with colour charge $\mathbf{T}_t^2$ which was subtracted in the exponent of the reduced amplitude (2.3). + +As discussed in refs. 
[23, 24] and briefly reviewed in appendix A, this equation and its higher-order generalisations can be understood by considering the expectation value of +---PAGE_BREAK--- + +Wilson lines associated to the colour flow of the external partons [8], which are described as “target” and “projectile” in the (high-energy) forward scattering configuration of figure 1. The wavefunction then represents the transverse momenta in each of two Wilson lines and the BFKL equation is obtained as an appropriate limit of the more general Balitsky-JIMWLK evolution equation. + +A graphical representation of eq. (2.13) is provided in figure 2. As a result of BFKL evolution, the amplitude at NLL accuracy can be represented as a ladder. At order $\ell$ it is obtained by closing the ladder and integrating the wavefunction of order ($\ell - 1$) over the resulting loop momentum, according to eq. (2.13). The wavefunction $\Omega^{(\ell-1)}(p, k)$, in turn, is obtained by applying once the leading-order BFKL evolution kernel to the wavefunction of order ($\ell - 2$). Graphically, this operation corresponds to adding one rung to the ladder. + +**Figure 2.** Graphical representation of the amplitude at NLL accuracy, as obtained through BFKL evolution. The addition of one rung corresponds to applying once the leading-order BFKL evolution onto the projectile wavefunction or impact factor at order ($\ell - 2$). This gives the wavefunction at order ($\ell - 1$), according to eq. (2.18). Closing the ladder and integrating over the resulting loop momentum gives the reduced amplitude, according to eq. (2.13). + +## 2.2 Iterative solution for the wavefunction and amplitude + +Eq. (2.13) shows that the $\ell$-th order amplitude is obtained in terms of iterated integrals, which arise upon evaluating the wavefunction $\Omega^{(\ell-1)}(p, k)$ to order ($\ell - 1$). It is straightforward to compute the first few orders, which gives us an opportunity to revisit the findings of ref. [23]. 
We will be able to explain why a new colour structure emerges for the first time at four loops, and explore the general structure of the relevant iterated integrals. + +A useful fact is that the evolution admits one well-known solution in the case where the exchanged state is colour-adjoint and $\Omega(p, k)$ is constant (independent of $k$) [1, 2], which gives a positive-signature state with the same leading-order trajectory as the Reggeized gluon. This enables one to rewrite the Hamiltonian (2.15) as a part which vanishes when $\Omega(p, k)$ is constant, plus a part proportional to ($C_A - T_t^2$): + +$$ \Omega^{(\ell-1)}(p, k) = \hat{H} \Omega^{(\ell-2)}(p, k), \quad \hat{H} = (2C_A - \mathbf{T}_t^2) \hat{H}_i + (C_A - \mathbf{T}_t^2) \hat{H}_m \quad (2.18) $$ +---PAGE_BREAK--- + +where, explicitly, + +$$ +\begin{align} +\hat{H}_i \Psi(p, k) &= \int [\mathrm{D}k'] f(p, k, k') [\Psi(p, k') - \Psi(p, k)], \\ +\hat{H}_m \Psi(p, k) &= J(p, k) \Psi(p, k), +\end{align} +\tag{2.19} +$$ + +where the function $J(p, k)$ is defined by + +$$ +\begin{align} +J(p,k) &= \frac{1}{2\epsilon} + \int [\mathrm{D}k'] f(p,k,k') \nonumber \\ +&= \frac{1}{2\epsilon} \left[ 2 - \left(\frac{p^2}{k^2}\right)^{\epsilon} - \left(\frac{p^2}{(p-k)^2}\right)^{\epsilon} \right]. \tag{2.20} +\end{align} +$$ + +The first interesting feature to note is that the $\hat{H}_i$ operator in eq. (2.18) vanishes when acting on $\Omega^{(0)}(p, k) = 1$. Therefore the wavefunction to one-loop involves a single colour structure: + +$$ +\Omega^{(1)}(p, k) = (C_A - \mathbf{T}_t^2) J(p, k). \quad (2.21) +$$ + +The second colour structure appears for the first time at the second order: + +$$ +\Omega^{(2)}(p,k) = (C_A - \mathbf{T}_t^2)^2 (J(p,k))^2 + (2C_A - \mathbf{T}_t^2)(C_A - \mathbf{T}_t^2) \int [\mathrm{D}k'] f(p,k,k') [J(p,k') - J(p,k)] . \quad (2.22) +$$ + +Inserting the explicit form of $J(p,k)$ from eq. (2.20) into eq. 
(2.22), one finds that it involves bubble integrals, as well as three-mass triangle integrals with massless propagators, such as

$$
\int [\mathrm{D}k'] \frac{(p-k)^2}{(p-k')^2(k-k')^2} \left(\frac{p^2}{k'^2}\right)^{\epsilon}, \qquad (2.23)
$$

which is represented in figure 3. The wavefunction at higher orders can be expressed

**Figure 3.** Three-mass triangle integral with massless propagators appearing in the calculation of the wavefunction at two loops. Integrals of this type contribute to the amplitude only starting at four loops, due to the symmetry of the problem, as discussed in the main text. The bubble integral on one of the two edges of the triangle clarifies the origin of the propagator which is raised to the power $\epsilon$ in eq. (2.23).

formally by introducing a class of functions

$$
\begin{align}
\Omega_{ia_1 \dots a_n}(p,k) &\equiv \int [\mathrm{D}k'] f(p,k,k') [\Omega_{a_1 \dots a_n}(p,k') - \Omega_{a_1 \dots a_n}(p,k)], \\
\Omega_{ma_1 \dots a_n}(p,k) &\equiv J(p,k) \Omega_{a_1 \dots a_n}(p,k),
\end{align}
\tag{2.24}
$$

---PAGE_BREAK---

where $\Omega_{\emptyset}(p, k) \equiv 1$, and each of the indices $a_j$ can take the value “i” or “m”, which stand for integration and multiplication, respectively, according to the action of the two Hamiltonian operators in eq. (2.19). In this notation, the one- and two-loop wavefunctions read, respectively,

$$
\begin{aligned}
\Omega^{(1)}(p,k) &= (C_A - \mathbf{T}_t^2) \Omega_m, \\
\Omega^{(2)}(p,k) &= (C_A - \mathbf{T}_t^2)^2 \Omega_{mm} + (2C_A - \mathbf{T}_t^2)(C_A - \mathbf{T}_t^2) \Omega_{im},
\end{aligned}
\quad (2.25)
$$

and it is also easy to write the wavefunctions at higher loops, for example:

$$
\begin{aligned}
\Omega^{(3)}(p,k) ={}& (C_A - \mathbf{T}_t^2)^3 \Omega_{mmm} + (2C_A - \mathbf{T}_t^2)(C_A - \mathbf{T}_t^2)^2 (\Omega_{imm} + \Omega_{mim}) \\
&+ (2C_A - \mathbf{T}_t^2)^2(C_A - \mathbf{T}_t^2) \Omega_{iim}.
\end{aligned}
\quad (2.26)
$$

The wavefunctions written thus far are sufficient to evaluate the reduced amplitude up to four loops. At one and two loops, inserting respectively $\Omega^{(0)}(p,k) = 1$ and eq. (2.21) into eq. (2.13) and performing bubble integrals, one immediately gets

$$
\begin{aligned}
\hat{\mathcal{M}}_{\text{NLL}}^{(+,1)} &= -i\pi \frac{B_0}{2\epsilon} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})}, && (2.27) \\
\hat{\mathcal{M}}_{\text{NLL}}^{(+,2)} &= i\pi \frac{(B_0)^2}{2} \left[ \frac{1}{(2\epsilon)^2} + \frac{9\zeta_3}{2}\epsilon + \frac{27\zeta_4}{4}\epsilon^2 + \frac{63\zeta_5}{2}\epsilon^3 + \mathcal{O}(\epsilon^4) \right] (C_A - \mathbf{T}_t^2) \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})}.
\end{aligned}
$$

We notice that the amplitude depends solely on the colour structure $(C_A - T_t^2)$, and this in turn is a consequence of the fact that the wavefunctions $\Omega^{(0)}$ and $\Omega^{(1)}$ have only one colour component. Based on this consideration alone, one would expect the second colour structure, $(2C_A - T_t^2)$, to contribute to the amplitude starting at three loops, given that it appears in $\Omega^{(2)}(p,k)$ of eq. (2.25). However, this contribution of $\Omega^{(2)}(p,k)$ to the amplitude $\hat{\mathcal{M}}_{\text{NLL}}^{(+,3)}$ cancels by symmetry:

$$
\begin{aligned}
& \int [\mathrm{D}k] \frac{p^2}{k^2(p-k)^2} \Omega_{\mathrm{im}}(p,k) = \int [\mathrm{D}k] [\mathrm{D}k'] \frac{p^2}{k^2(p-k)^2} f(p,k,k') [J(p,k') - J(p,k)] \\
&= \int [\mathrm{D}k] [\mathrm{D}k'] \left\{ \frac{p^2}{k'^2(p-k')^2} f(p,k',k) J(p,k') - (k \leftrightarrow k') \right\} = 0,
\end{aligned}
\quad (2.28)
$$

where in the last line we used the property

$$
\frac{p^2}{k'^2(p-k')^2} f(p, k', k) = \frac{p^2}{k^2(p-k)^2} f(p, k, k'), \quad (2.29)
$$

which makes it evident that eq. (2.28) vanishes by antisymmetry with respect to $k \leftrightarrow k'$.
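The exchange identity in eq. (2.29) can be checked exactly on rational transverse momenta. A minimal numerical sketch follows; note that the explicit definition of the kernel $f(p,k,k')$ (eq. (2.16)) is not reproduced in this excerpt, so the form used below is an assumption: it is the standard leading-order form consistent with the limits quoted later in eq. (3.1).

```python
from fractions import Fraction

def sq(v):
    """Squared Euclidean length of a 2d transverse vector."""
    return v[0]*v[0] + v[1]*v[1]

def sub(u, v):
    """Component-wise difference of two 2d vectors."""
    return (u[0] - v[0], u[1] - v[1])

def f(p, k, kp):
    """Assumed form of the kernel f(p, k, k') of eq. (2.16)."""
    return (sq(k) / (sq(kp) * sq(sub(k, kp)))
            + sq(sub(p, k)) / (sq(sub(p, kp)) * sq(sub(k, kp)))
            - sq(p) / (sq(kp) * sq(sub(p, kp))))

def weight(p, k):
    """Measure factor p^2 / (k^2 (p-k)^2) closing the ladder, eq. (2.13)."""
    return sq(p) / (sq(k) * sq(sub(p, k)))

# eq. (2.29): weight(p,k') f(p,k',k) == weight(p,k) f(p,k,k'), exactly
p  = (Fraction(3), Fraction(1))
k  = (Fraction(1, 2), Fraction(2))
kp = (Fraction(-1), Fraction(1, 3))
lhs = weight(p, kp) * f(p, kp, k)
rhs = weight(p, k) * f(p, k, kp)
assert lhs == rhs
```

Since all arithmetic is done with exact rationals, the equality holds identically rather than to floating-point tolerance.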
Because of this, the amplitude at three loops has again a single colour component, proportional to $(C_A - T_t^2)^2$:

$$
\hat{\mathcal{M}}_{\text{NLL}}^{(+,3)} = i\pi \frac{(B_0)^3}{3!} \left[ \frac{1}{(2\epsilon)^3} - \frac{11\zeta_3}{4} - \frac{33\zeta_4}{8}\epsilon - \frac{357\zeta_5}{4}\epsilon^2 + \mathcal{O}(\epsilon^3) \right] (C_A - \mathbf{T}_t^2)^2 \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})}.
\quad (2.30) $$

---PAGE_BREAK---

**Figure 4.** Graphical representation of the BFKL ladder at four loops. The fact that $\Omega^{(1)}(p, k) \sim (C_A - T_t^2)$, in conjunction with the target-projectile symmetry, implies that the first rungs on either side can only give rise to contributions proportional to $(C_A - T_t^2)$. As a consequence, distinct colour structures can first appear at four loops.

This symmetry relation generalises to higher orders, i.e. one has

$$ \int [\mathrm{D}k] \frac{p^2}{k^2(p-k)^2} \Omega_{i a_1 \ldots a_n}(p,k) = 0, \qquad (2.31) $$

for any $a_1 \ldots a_n$. While this symmetry ensures that there is only one colour structure at three loops, this is no longer the case starting at four loops. There, one obtains [23]

$$ \begin{align} \hat{\mathcal{M}}_{\text{NLL}}^{(+,4)} &= -i\pi \frac{(B_0)^4}{3!} \int [\mathrm{D}k] \frac{p^2}{k^2(p-k)^2} \Biggl\{ (C_A - \mathbf{T}_t^2)^3 \Omega_{\mathrm{mmm}}(p,k) \nonumber \\ &\qquad + (2C_A - \mathbf{T}_t^2)(C_A - \mathbf{T}_t^2)^2 \Omega_{\mathrm{mim}}(p,k) \Biggr\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\mathrm{tree})} \nonumber \\ &= i\pi \frac{(B_0)^4}{4!} \Biggl\{ (C_A - \mathbf{T}_t^2)^3 \left( \frac{1}{(2\epsilon)^4} + \frac{175\zeta_5}{2}\epsilon + \mathcal{O}(\epsilon^2) \right) \nonumber \\ &\qquad + C_A(C_A - \mathbf{T}_t^2)^2 \left( -\frac{\zeta_3}{8\epsilon} - \frac{3}{16}\zeta_4 - \frac{167\zeta_5}{8}\epsilon + \mathcal{O}(\epsilon^2) \right) \Biggr\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\mathrm{tree})}.
\tag{2.32} \end{align} $$

One sees that the integrated result involves two colour structures, and in the final expression in eq. (2.32) we rearranged them so as to single out a factor of $C_A$. In section 4 below we will see that in this form it is easy to compare the amplitude with the structure of infrared divergences. Specifically, we will see that corrections involving the colour structure $(C_A - T_t^2)^{\ell-1}$ at $\ell$-loop order emerge directly from the simplest “dipole” formula of the soft anomalous dimension, while other colour structures, namely $C_A^j (C_A - T_t^2)^{\ell-j-1}$ with $j \ge 1$, identify deviations from the dipole formula, as was first observed in ref. [23] for $\ell = 4$.

Inspecting the diagrammatic representation of BFKL evolution in figure 2, one can interpret the delayed appearance of a new colour structure until four loops as a consequence of the target-projectile symmetry. Recall that for the first rung of the ladder, only the second term $\hat{H}_m$ in eq. (2.18) contributes, so the wavefunction has a single colour structure $(C_A - T_t^2)$. Considering more rungs, using target-projectile symmetry one can deduce that the same is true for the first rung on the opposite side of the ladder. As a consequence, despite the fact that $\Omega^{(2)}(p, k)$ contains two structures (see eq. (2.25)), the effect of the

---PAGE_BREAK---

second one, $(2C_A - \mathbf{T}_t^2)$, cannot appear in the three-loop amplitude, where each of the two rungs contributes a factor of $(C_A - \mathbf{T}_t^2)$. As shown in figure 4, distinct colour structures can only appear in the amplitude starting at four loops, where the middle rung — and only that rung — gives rise to both colour factors.

**3 The soft approximation**

While it would be possible to calculate the wavefunction and amplitude to higher loop orders, in this paper we focus on the infrared-divergent part of the latter.
We strive to compare its singularities with the predictions made by the infrared factorisation theorem and, consequently, deduce higher-order corrections to the high-energy soft anomalous dimension. With this goal in mind, we highlight at this point another important property of $\Omega(p,k)$, which can be verified upon inspecting eq. (2.18) more carefully (see below): the wavefunction $\Omega^{(\ell-1)}(p,k)$ is finite for $\epsilon \to 0$ to all orders in perturbation theory! This is a non-trivial statement, which becomes evident only after the evolution equation is brought from the form in eq. (2.15) to eq. (2.18). A practical implication is that all divergences in the amplitude must originate in the final integration, namely going from the wavefunction to the amplitude as in eq. (2.13). Inspecting the latter equation, we see that divergences arise only in the $k \to 0$ and $k \to p$ limits (and ultraviolet power counting in eq. (2.19) using (2.16) excludes divergences from $k' \gg p, k$). Due to the symmetry of the integrand, all divergences of the amplitude can therefore be obtained by evaluating it in one of these two limits, and multiplying the result by two.

Let us now examine more carefully the evolution of the wavefunction according to eqs. (2.18) and (2.19), verify that the wavefunction is indeed finite, and derive a simplified version of the evolution, valid in the small-$k$ or *soft* approximation: $k \ll p$. The loop integral in eq. (2.19) can in principle receive contributions from two regions: $k \ll k' \sim p$ and $k \sim k' \ll p$. Inspecting the form of $f(p, k, k')$ in the two regions, it is easy to check that only the second region contributes:

$$
\begin{align}
f(p,k,k')|_{k\ll k'\sim p} &\rightarrow 0 + \frac{p^2}{(p-k')^2 k'^2} - \frac{p^2}{k'^2 (p-k')^2} = 0, \\
f(p,k,k')|_{k\sim k'\ll p} &\rightarrow \frac{k^2}{k'^2(k-k')^2} + \frac{1}{(k-k')^2} - \frac{1}{k'^2} = \frac{2(k\cdot k')}{k'^2(k-k')^2}.
\tag{3.1} +\end{align} +$$ + +This means that the soft approximation closes under evolution! In the following, we will identify the region $k \sim k' \ll p$ as *soft* and add a subscript *s* to quantities calculated in this limit. From $J(p,k)$ in eq. (2.20) one gets + +$$ +J_s(p, k) = \frac{1}{2\epsilon} \left[ 1 - \left( \frac{p^2}{k^2} \right)^{\epsilon} \right], \quad (3.2) +$$ +---PAGE_BREAK--- + +and the evolution in eq. (2.18) becomes + +$$ +\begin{align} +\Omega_s^{(\ell-1)}(p, k) &= \hat{H}_s \Omega_s^{(\ell-2)}(p, k), \\ +\hat{H}_s \Psi(p, k) &= (2C_A - \mathbf{T}_t^2) \int [Dk'] \frac{2(k \cdot k')}{k'^2 (k-k')^2} [\Psi(p, k') - \Psi(p, k)] \nonumber \\ +&\quad + (C_A - \mathbf{T}_t^2) J_s(p, k) \Psi(p, k), \tag{3.3} +\end{align} +$$ + +where $[Dk']$ is the previously defined integration measure (2.8). Eq. (3.3) confirms that it +is consistent to truncate the Regge evolution to the soft approximation: using the power +counting $\Psi(p,k) \sim 1$, we see that the $k'$ integral is saturated by the soft region $k' \sim k$, with +no sensitivity to larger scales. + +Inserting the wavefunction $\Omega_s^{(\ell-1)}(p, k)$ into eq. (2.13), we get the amplitude in the soft limit at the $\ell$-th order. In this approximation the last integral becomes divergent and needs an ultraviolet cutoff, which we fix by requiring $k^2 < p^2$, based on dimensional analysis and consistency with the soft limit (any cutoff would be consistent, and would not affect the infrared singularities). The integration measure for the last integral therefore reads + +$$ +\int [Dk]_s = \frac{(p^2)^{\epsilon} e^{\epsilon\gamma_E}}{2\Gamma(1-\epsilon)B_0} \int_0^{p^2} dk^2 (k^2)^{-\epsilon}, \quad (3.4) +$$ + +where we multiplied by a factor of two, in order to take into account the fact that there is +an identical contribution from the region where the Reggeized gluon carrying momentum +$(p-k)$ is soft. Inserting this result into eq. 
(2.13), we get $\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}$ in the soft approximation:

$$
\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)} = - \frac{i\pi(B_0)^{\ell-1}}{(\ell-1)!} \frac{e^{\epsilon\gamma_E}}{\Gamma(1-\epsilon)} \int_0^{p^2} \frac{dk^2}{2k^2} \left(\frac{p^2}{k^2}\right)^{\epsilon} \Omega_s^{(\ell-1)}(p,k) \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \quad (3.5)
$$

We stress that this approximation gives correct results only as far as infrared singularities are concerned. All poles in $\epsilon$ are exact, since the integrand is finite and divergences arise only from the $k \to 0$ limit of integration. The reduced amplitude in eq. (3.5) ceases to be correct at finite $\mathcal{O}(\epsilon^0)$ order, as indicated.

The most significant advantage of the soft approximation is that the evolution equation greatly simplifies, and this allows us to obtain closed-form expressions for the wavefunction $\Omega_s^{(\ell-1)}(p, k)$ and the amplitude $\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}|_s$, as we detail in the following.

**3.1 The wavefunction at NLL to all orders**

In analogy with the exercise of section 2.2, we start by calculating explicitly the wavefunction at the first few orders in perturbation theory, this time in the soft approximation. The initial condition is still given by eq. (2.14), and the evolution obeys eq. (3.3). This equation has a much simpler structure compared to the original one, eq. (2.19), because the soft approximation turns a two-scale problem into a one-scale problem. It is easy to check that the wavefunction reduces to a polynomial in $\xi = (p^2/k^2)^\epsilon$, which implies that the integrals involved in eq.
(3.3) are simple bubble integrals of the type + +$$ +\int [Dk'] \frac{2(k \cdot k')}{k'^2 (k-k')^2} \left(\frac{p^2}{k'^2}\right)^{n\epsilon} = -\frac{1}{2\epsilon} \frac{B_n(\epsilon)}{B_0(\epsilon)} \left(\frac{p^2}{k^2}\right)^{(n+1)\epsilon}, \quad (3.6) +$$ +---PAGE_BREAK--- + +where the integration measure is given in eq. (2.8). This defines a class of one-loop functions mentioned above eq. (2.6), namely + +$$ +B_n(\epsilon) = e^{\epsilon\gamma_E} \frac{\Gamma(1-\epsilon)\Gamma(1+\epsilon+n\epsilon)\Gamma(1-\epsilon-n\epsilon)}{\Gamma(1+n\epsilon)\Gamma(1-2\epsilon-n\epsilon)}. \quad (3.7) +$$ + +Using this we can write the action of the soft Hamiltonian (3.3) on any monomial ($m \ge 0$): + +$$ +\begin{align} +\hat{H}_s \xi^m &= \frac{\xi^m}{2\epsilon} \left( (1-\xi)(C_A - \mathbf{T}_t^2) + \xi \hat{B}_m(\epsilon)(2C_A - \mathbf{T}_t^2) \right) \tag{3.8} \\ +&= \frac{(C_A - \mathbf{T}_t^2)}{2\epsilon} \left( \xi^m - \xi^{m+1} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \right), +\end{align} +$$ + +where we have introduced the loop functions + +$$ +\hat{B}_n(\epsilon) = 1 - \frac{B_n(\epsilon)}{B_0(\epsilon)} = 2n(2+n)\zeta_3\epsilon^3 + 3n(2+n)\zeta_4\epsilon^4 + \dots \quad (3.9) +$$ + +Given that $\hat{B}_m(\epsilon) = O(\epsilon^3)$, the first line in eq. (3.8) makes manifest the fact that $\hat{H}_s \xi^m$ is +finite for $\epsilon \to 0$, in line with our earlier assertion about the finiteness of the wavefunction. +The second line will be useful in what follows for determining the all-order structure of the +wavefunction. + +Applying eq. 
(3.6) repeatedly up to three loops (which is sufficient to determine the amplitude at four loops) we find

$$
\begin{align}
\Omega_s^{(0)}(\xi) &= 1, \tag{3.10} \\
\Omega_s^{(1)}(\xi) &= \frac{(C_A - \mathbf{T}_t^2)}{2\epsilon} (1 - \xi), \nonumber \\
\Omega_s^{(2)}(\xi) &= \frac{(C_A - \mathbf{T}_t^2)^2}{(2\epsilon)^2} \left\{ 1 - 2\xi + \xi^2 \left[ 1 - \hat{B}_1(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \right\}, \nonumber \\
\Omega_s^{(3)}(\xi) &= \frac{(C_A - \mathbf{T}_t^2)^3}{(2\epsilon)^3} \left\{
\begin{aligned}[t]
&1 - 3\xi + 3\xi^2 \left[ 1 - \hat{B}_1(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \\
&\quad - \xi^3 \left[ 1 - \hat{B}_1(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \left[ 1 - \hat{B}_2(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right]
\end{aligned}
\right\}. \nonumber
\end{align}
$$

The evaluation of a few additional orders allows us to obtain an ansatz for the $(\ell-1)$-th order wavefunction:

$$
\Omega_s^{(\ell-1)}(p,k) = \frac{(C_A - \mathbf{T}_t^2)^{\ell-1}}{(2\epsilon)^{\ell-1}} \sum_{n=0}^{\ell-1} (-1)^n \binom{\ell-1}{n} \left(\frac{p^2}{k^2}\right)^{n\epsilon} \prod_{m=0}^{n-1} \left\{ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right\}. \quad (3.11)
$$

The validity of this all-order formula can be proved directly using the action of the Hamiltonian in the second line of eq. (3.8), by noticing first that, independently of the loop order, the term $\xi^n$ can only be generated by acting $n$ times with the second term of eq. (3.8), each of which raises the power of $\xi$ by one. Hence $\xi^n$ will always be accompanied by the product $(-1)^n \prod_{m=0}^{n-1} \left\{ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right\}$. Furthermore, the combinatorial factor $\binom{\ell-1}{n}$ associated with $\xi^n$ simply counts the number of different ways of acting $(\ell - 1)$ times with the Hamiltonian, out of which $n$ times with the second term and $\ell - 1 - n$ times with the first.
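The inductive argument above is easy to cross-check by machine. The following minimal sympy sketch iterates the first line of eq. (3.8) on polynomials in $\xi$, with the colour factors $(C_A - \mathbf{T}_t^2)$ and $(2C_A - \mathbf{T}_t^2)$ replaced by commuting placeholders $a$ and $b$, and the loop functions $\hat{B}_m(\epsilon)$ kept as abstract symbols (with $\hat{B}_0 = 0$, since $\hat{B}_0 = 1 - B_0/B_0$), and compares the result with the closed form of eq. (3.11).

```python
import sympy as sp

eps, xi = sp.symbols('epsilon xi')
a, b = sp.symbols('a b')  # a = C_A - T_t^2,  b = 2C_A - T_t^2 (commuting placeholders)

def bhat(m):
    """Abstract symbol for \\hat{B}_m(epsilon); \\hat{B}_0 vanishes identically."""
    return sp.Integer(0) if m == 0 else sp.Symbol('Bhat%d' % m)

def H_s(Psi):
    """Action of the soft Hamiltonian on a polynomial in xi, eq. (3.8) first line."""
    out = sp.Integer(0)
    for m, c in enumerate(sp.Poly(sp.expand(Psi), xi).all_coeffs()[::-1]):
        out += c * xi**m / (2*eps) * (a*(1 - xi) + xi*bhat(m)*b)
    return sp.expand(out)

# iterate the evolution Omega^{(l-1)} = H_s Omega^{(l-2)}, starting from Omega^{(0)} = 1
omega = [sp.Integer(1)]
for _ in range(4):
    omega.append(H_s(omega[-1]))

def ansatz(ell):
    """Closed-form wavefunction of eq. (3.11)."""
    total = sp.Integer(0)
    for n in range(ell):
        prod = sp.Mul(*[1 - bhat(m)*b/a for m in range(n)])
        total += (-1)**n * sp.binomial(ell - 1, n) * xi**n * prod
    return sp.expand((a/(2*eps))**(ell - 1) * total)
```

Expanding `omega[ell-1] - ansatz(ell)` gives zero identically for the first few orders, confirming both the binomial counting and the product structure of eq. (3.11).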
+---PAGE_BREAK--- + +## 3.2 The all-order structure of two-parton scattering amplitudes at NLL + +The main result of the previous section is that, in the soft approximation, the wavefunction reduces to a polynomial in $(p^2/k^2)^\epsilon$, given by eq. (3.11). As a consequence, the calculation of the amplitude (3.5) becomes straightforward, because it involves only integrals of the type + +$$ \int_0^{p^2} \frac{dk^2}{k^2} \left(\frac{p^2}{k^2}\right)^{n\epsilon} = -\frac{1}{n\epsilon}, \qquad (3.12) $$ + +which allows us to obtain + +$$ \begin{aligned} \mathcal{M}_{\text{NLL}}^{(+,\ell)}|_s = {}& i\pi \frac{1}{(2\epsilon)^\ell} \frac{B_0^\ell(\epsilon)}{\ell!} (1-\hat{B}_{-1}) (C_A - \mathbf{T}_t^2)^{\ell-1} \sum_{n=1}^\ell (-1)^{n+1} \binom{\ell}{n} \\ & \times \prod_{m=0}^{n-2} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \mathbf{T}_{s-u}^2 M^{(\text{tree})} + \mathcal{O}(\epsilon^0), \end{aligned} \quad (3.13) $$ + +where the factor $(1 - \hat{B}_{-1})$ follows from rewriting the factor $e^{\epsilon\gamma_E}/\Gamma(1-\epsilon) = B_{-1}(\epsilon)$: + +$$ (B_0)^{\ell-1} \frac{e^{\epsilon\gamma_E}}{\Gamma(1-\epsilon)} = (B_0)^\ell \frac{B_{-1}(\epsilon)}{B_0(\epsilon)} = (B_0)^\ell (1 - \hat{B}_{-1}). \quad (3.14) $$ + +Eq. (3.13) looks rather involved but one must keep in mind that, upon expansion in $\epsilon$, it contains many finite terms which do not represent the actual amplitude since we are working in the soft approximation. Given the overall factor of $1/(2\epsilon)^\ell$ in eq. (3.13), all the singularities are obtained by retaining only contributions up to $\epsilon^{\ell-1}$ in the subsequent factors. When this is taken into account a great simplification arises: indeed, as shown in appendix B, it is possible to prove that eq. 
(3.13) is equivalent to

$$ \begin{aligned} \mathcal{M}_{\text{NLL}}^{(+,\ell)}|_s = {}& i\pi \frac{1}{(2\epsilon)^\ell} \frac{B_0^\ell(\epsilon)}{\ell!} (1-\hat{B}_{-1}) \left(1-\hat{B}_{-1}(\epsilon)\frac{2C_A-\mathbf{T}_t^2}{C_A-\mathbf{T}_t^2}\right)^{-1} \\ & \times (C_A-\mathbf{T}_t^2)^{\ell-1} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \end{aligned} \quad (3.15) $$

It is remarkable that the complicated sum of products of bubble integrals weighted by a binomial factor collapses to a single factor which depends only on one bubble integral, namely $\hat{B}_{-1}(\epsilon)$. The main ingredient of the proof is the fact that the wavefunction itself is finite.

Eq. (3.15) constitutes the main result of this section: by iterating the BFKL equation (which had not previously been diagonalised in $d = 4 - 2\epsilon$ dimensions) we obtained the singular part of the even amplitude at NLL accuracy, to all orders in the strong coupling constant. Anticipating the comparison with the structure of infrared divergences dictated by the soft anomalous dimension, it proves useful to rearrange eq. (3.15) in such a way as to single out the colour structures $C_A$ and $(C_A - \mathbf{T}_t^2)$. Indeed, as discussed at the end of section 2.2, we know that the dipole formula of infrared divergences fixes the singularities of the even amplitude in the high-energy limit to be proportional to the colour structure $(C_A - \mathbf{T}_t^2)^{\ell-1}\mathbf{T}_{s-u}^2$ at $\ell$

---PAGE_BREAK---

loops. From eq.
(3.15) we obtain + +$$ +\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}|_s = i\pi \frac{1}{(2\epsilon)^{\ell}} \frac{B_0^{\ell}(\epsilon)}{\ell!} \left(1 - R(\epsilon)\frac{C_A}{C_A - \mathbf{T}_t^2}\right)^{-1} (C_A - \mathbf{T}_t^2)^{\ell-1} \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}} + \mathcal{O}(\epsilon^0), \quad (3.16) +$$ + +where we have introduced the function + +$$ +\begin{align} +R(\epsilon) &\equiv \frac{B_0(\epsilon)}{B_{-1}(\epsilon)} - 1 = \frac{\Gamma^3(1-\epsilon)\Gamma(1+\epsilon)}{\Gamma(1-2\epsilon)} - 1 \nonumber \\ +&= -2\zeta_3 \epsilon^3 - 3\zeta_4 \epsilon^4 - 6\zeta_5 \epsilon^5 - (10\zeta_6 - 2\zeta_3^2) \epsilon^6 + \mathcal{O}(\epsilon^7). \tag{3.17} +\end{align} +$$ + +Furthermore, by resumming eq. (3.16) according to eq. (2.12) we get the all-order amplitude: + +$$ +\begin{equation} +\begin{split} +\hat{\mathcal{M}}_{\text{NLL}}^{(+)}|_s ={}& \frac{i\pi}{L(C_A - \mathbf{T}_t^2)} \left(1 - R(\epsilon) \frac{C_A}{C_A - \mathbf{T}_t^2}\right)^{-1} \\ +& \times \left[\exp\left\{\frac{B_0(\epsilon)}{2\epsilon} \frac{\alpha_s}{\pi} L(C_A - \mathbf{T}_t^2)\right\} - 1\right] \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). +\end{split} +\tag{3.18} +\end{equation} +$$ + +This result will be used in the next section to extract the soft anomalous dimension. + +Before addressing this topic, however, it proves useful to explore in more detail the implications of eq. (3.16) by writing explicitly a few orders in perturbation theory. Up to three loops eq. (3.16) reduces to + +$$ +\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell=1,2,3)}|_{s} = i\pi \frac{B_0^\ell(\epsilon)}{\ell!(2\epsilon)^\ell} (C_A - \mathbf{T}_t^2)^{\ell-1} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0), \quad (3.19) +$$ + +i.e. only one colour structure contributes to the amplitude up to three loops, and the singularities are correctly reproduced by the dipole formula of infrared divergences. 
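The expansion of $R(\epsilon)$ quoted in eq. (3.17) can be verified directly. The minimal sympy sketch below expands the Gamma-function ratio and compares the coefficients (which sympy returns in terms of Euler–Mascheroni and polygamma constants) with the quoted zeta values, numerically to high precision.

```python
import sympy as sp

e = sp.symbols('epsilon')

# R(eps) = B_0(eps)/B_{-1}(eps) - 1, eq. (3.17)
R = sp.gamma(1 - e)**3 * sp.gamma(1 + e) / sp.gamma(1 - 2*e) - 1

ser = sp.series(R, e, 0, 7).removeO()
target = (-2*sp.zeta(3)*e**3 - 3*sp.zeta(4)*e**4 - 6*sp.zeta(5)*e**5
          - (10*sp.zeta(6) - 2*sp.zeta(3)**2)*e**6)

# compare coefficient by coefficient, numerically to 30 digits
for n in range(7):
    diff = sp.N(ser.coeff(e, n) - target.coeff(e, n), 30)
    assert abs(diff) < 1e-20
```

In particular the check confirms that $R(\epsilon)$ starts only at $\mathcal{O}(\epsilon^3)$, the fact responsible for the delayed onset of the non-dipole colour structures discussed below.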
Starting at four loops, and for the subsequent three orders, one gets an additional contribution proportional to a new colour structure:

$$
\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell=4,5,6)}|_{s} = i\pi \frac{B_0^\ell(\epsilon)}{\ell!(2\epsilon)^\ell} \left\{ (C_A - \mathbf{T}_t^2)^{\ell-1} + R(\epsilon)\, C_A (C_A - \mathbf{T}_t^2)^{\ell-2} \right\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0), \quad (3.20)
$$

which matches with the infrared-divergent part of the result reported earlier in eq. (2.32). It can be easily verified (see the next section) that the infrared divergences associated with the first colour structure are predicted by the dipole formula, while the ones associated with the second are not. Next, starting at seven loops, and for the subsequent three orders, yet another colour structure arises:

$$
\begin{equation}
\begin{split}
\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell=7,8,9)}|_s = {}& i\pi \frac{B_0^\ell(\epsilon)}{\ell!(2\epsilon)^\ell} \Bigl\{ (C_A - \mathbf{T}_t^2)^{\ell-1} + R(\epsilon) C_A (C_A - \mathbf{T}_t^2)^{\ell-2} \\
& \qquad + R^2(\epsilon) C_A^2 (C_A - \mathbf{T}_t^2)^{\ell-3} \Bigr\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0).
\end{split}
\tag{3.21}
\end{equation}
$$

Expanding eq. (3.16) for the next three orders in $\alpha_s$ we get

$$
\begin{equation}
\begin{split}
\hat{\mathcal{M}}_{\text{NLL}}^{(+,\ell=10,11,12)}|_s = {}& i\pi \frac{B_0^\ell(\epsilon)}{\ell!(2\epsilon)^\ell} \Bigl\{ (C_A - \mathbf{T}_t^2)^{\ell-1} + R(\epsilon) C_A(C_A - \mathbf{T}_t^2)^{\ell-2} \\
& \qquad + R^2(\epsilon) C_A^2 (C_A - \mathbf{T}_t^2)^{\ell-3} + R^3(\epsilon) C_A^3 (C_A - \mathbf{T}_t^2)^{\ell-4} \Bigr\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0).
\end{split}
\tag{3.22}
\end{equation}
$$

---PAGE_BREAK---

It is now easy to understand the pattern of singularities implied by eq. (3.16): at each order the first colour structure, proportional to $(C_A - \mathbf{T}_t^2)^{\ell-1}$, describes the singularities predicted by the dipole formula. Additional colour structures are generated by the expansion of the geometric series $\left(1-R(\epsilon)\frac{C_A}{C_A-\mathbf{T}_t^2}\right)^{-1}$ in eq. (3.16), such that every three loops a new colour structure arises with an increasing power of $C_A$, replacing one of the factors of $(C_A - \mathbf{T}_t^2)$. All these new structures introduce infrared divergences which are not accounted for by the dipole formula.

Now that we understand the result implied by the BFKL evolution equation, we are in a position to investigate how the infrared divergences not accounted for by the dipole formula can be included in the soft anomalous dimension. This will be the subject of the following section.

## 4 The soft anomalous dimension in the high-energy limit to all orders

It is well known that infrared divergences in gauge-theory scattering amplitudes are multiplicatively “renormalizable”: finite hard-scattering amplitudes may be obtained by multiplying the original infrared-divergent amplitude by a renormalization factor $\mathbf{Z}(\{p_i\}, \mu, \alpha_s(\mu))$, which is matrix-valued in colour-flow space. This factor solves a renormalization group equation, and hence can be written as a path-ordered exponential of a soft anomalous dimension $\Gamma(\{p_i\}, \mu, \alpha_s(\mu))$, integrated over the scale $\mu$. As such, the soft anomalous dimension constitutes a fundamental ingredient for the calculation of scattering processes at any given order in perturbation theory, and much effort has been devoted to its determination. It has been shown that the soft anomalous dimension has a simple dipole structure up to two loops [28].
Corrections involving three and four partons arise starting at three loops, and a series of analyses has been performed in order to constrain their structure at three loops and beyond [29–31, 35–37]; the complete correction at three loops was calculated recently [26, 27].

The general structure of the soft anomalous dimension is fixed by the factorisation properties of soft and collinear radiation, along with symmetry properties, such as rescaling invariance of soft corrections with respect to the momenta of the hard partons. The latter properties link dipole terms to the cusp anomalous dimension and dictate the structure of corrections to the soft anomalous dimension that correlate more than two partons [29–31, 35]. In particular, they imply that at three loops, non-dipole corrections can only depend on the kinematics via rescaling-invariant cross ratios. The soft anomalous dimension can be further constrained by the behaviour of scattering amplitudes in special kinematic limits, such as the Regge limit [21, 22, 24] and collinear limits [30, 36]. Furthermore, it was recently shown [32] that the space of functions in terms of which the non-dipole correction is expressed (single-valued multiple polylogarithms) can, in fact, be deduced from general considerations. A bootstrap procedure was then set up, which, remarkably, completely fixes the functional form of the non-dipole correction at three loops (up to an overall rational numerical factor) based on known information from the kinematic limits mentioned above,

---PAGE_BREAK---

reproducing the result of the Feynman-diagram computation of refs. [26, 27]. The prospect of extending this bootstrap procedure to higher loops provides additional motivation for determining the soft anomalous dimension in the high-energy limit.

As discussed above, ref. [23] determined the next-to-leading high-energy logarithms (NLL) of $2 \to 2$ scattering amplitudes at four loops.
In this paper we have been able to extend this and compute the infrared singularities at NLL in the high-energy limit to all orders in perturbation theory. We are therefore able to determine the soft anomalous dimension in this approximation to all orders.

We start this section by briefly reviewing the structure of the soft anomalous dimension in the high-energy limit, and then determine it to all orders by extracting the $\mathcal{O}(1/\epsilon)$ coefficient from the amplitude obtained in section 3.2, which we then analyse numerically in detail. Finally we show that the singularity structure we deduced from the high-energy limit computation, consisting of poles of $\mathcal{O}(1/\epsilon)$ through to $\mathcal{O}(1/\epsilon^\ell)$ at $\ell$ loops, is consistent with infrared factorisation, namely it is exactly reproduced by the expansion of the path-ordered exponential of the integral of the soft anomalous dimension.

## 4.1 The infrared factorisation formula in the Regge limit

The infrared divergences of scattering amplitudes can be factorised as

$$
\mathcal{M}(\{p_i\}, \mu, \alpha_s(\mu)) = \mathbf{Z}(\{p_i\}, \mu, \alpha_s(\mu)) \mathcal{H}(\{p_i\}, \mu, \alpha_s(\mu)), \quad (4.1)
$$

where $\mathcal{H}$ is a finite hard-scattering amplitude while $\mathbf{Z}$ captures all singularities. $\mathbf{Z}$ satisfies a renormalization group equation whose solution (in the minimal-subtraction scheme) can be written as a path-ordered exponential of the soft anomalous dimension:

$$
\mathbf{Z}(\{\boldsymbol{p}_i\}, \mu, \alpha_s(\mu)) = \mathcal{P} \exp \left\{ -\int_0^\mu \frac{d\lambda}{\lambda} \Gamma(\{\boldsymbol{p}_i\}, \lambda, \alpha_s(\lambda)) \right\}. \quad (4.2)
$$

The soft anomalous dimension $\Gamma(\{p_i\}, \lambda, \alpha_s)$ for massless-parton ($p_i^2 = 0$) scattering depends on the scale $\lambda$ both explicitly and through the $4 - 2\epsilon$ dimensional running coupling.
In QCD (with $n_f$ light quark flavours) the latter obeys the renormalization group equation

$$
\beta(\alpha_s, \epsilon) \equiv \frac{d\alpha_s}{d\ln\mu} = -2\epsilon\alpha_s - \frac{\alpha_s^2}{2\pi} \sum_{n=0}^{\infty} b_n \left(\frac{\alpha_s}{\pi}\right)^n \quad \text{with} \quad b_0 = \frac{11}{3}C_A - \frac{2}{3}n_f. \tag{4.3}
$$

For our purposes only the zeroth-order solution will be needed: $\alpha_s(\mu) = \alpha_s(p) (p^2/\mu^2)^\epsilon$. The explicit dependence on the scale ($\Gamma$ is linear in $\log \lambda$) reflects the presence of double poles due to overlapping soft and collinear divergences.

The soft anomalous dimension in multileg scattering of massless partons is an operator in colour space given by [26, 29–31, 35]

$$
\Gamma(\{p_i\}, \lambda, \alpha_s(\lambda)) = \Gamma^{\text{dip.}}(\{p_i\}, \lambda, \alpha_s(\lambda)) + \sum_{n=3}^{\infty} \Delta^{(n)} \left(\frac{\alpha_s}{\pi}\right)^n, \quad (4.4)
$$

with

$$
\Gamma^{\text{dip.}}(\{p_i\}, \lambda, \alpha_s(\lambda)) = -\frac{\gamma_K(\alpha_s)}{2} \sum_{i<j} \log\left(\frac{-s_{ij}}{\lambda^2}\right) \mathbf{T}_i \cdot \mathbf{T}_j + \sum_i \gamma_i(\alpha_s), \quad (4.5)
$$

where $\gamma_K$ is the cusp anomalous dimension and $\gamma_i$ are the collinear anomalous dimensions of the scattering partons. Inserting $\boldsymbol{\Gamma}_{\text{LL}}$ of eq. (4.6) into eq. (4.2) and integrating over the scale (using the zeroth-order scale dependence of $\alpha_s$) we obtain:

$$
\mathbf{Z}_{\text{LL}}^{(+)} \left( \frac{s}{t}, \mu, \alpha_s(\mu) \right) = \exp \left\{ \frac{\alpha_s}{\pi} \frac{1}{2\epsilon} L \mathbf{T}_t^2 \right\}. \quad (4.11)
$$

Considering the second term in the square brackets of eq. (4.10) we note that $\mathbf{Z}_{\text{LL}}^{(+)}$ can be combined with the exponential of the Regge trajectory, and this combination gives rise to an exponent proportional to $(B_0(\epsilon) - 1)/(2\epsilon) \sim \mathcal{O}(\epsilon)$. Given that the hard function is finite by definition, $\mathcal{H}_{\text{NLL}}^{(+)} \sim \mathcal{O}(\epsilon^0)$, we conclude that the second term in eq. (4.10) only contributes to finite terms in $\hat{\mathcal{M}}_{\text{NLL}}^{(+)}$.
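The statement that the combined exponent is of order $\epsilon$ is equivalent to $B_0(\epsilon) = 1 + \mathcal{O}(\epsilon^2)$, which can be checked directly from the Gamma-function representation of eq. (3.7) at $n = 0$. A minimal sympy sketch:

```python
import sympy as sp

e = sp.symbols('epsilon')

# B_0(eps): eq. (3.7) evaluated at n = 0
B0 = sp.exp(e*sp.EulerGamma) * sp.gamma(1 - e)**2 * sp.gamma(1 + e) / sp.gamma(1 - 2*e)

ser = sp.series((B0 - 1)/(2*e), e, 0, 3).removeO()

# no pole and no constant term: the exponent is O(epsilon), as claimed
assert abs(sp.N(ser.coeff(e, -1), 30)) < 1e-20
assert abs(sp.N(ser.coeff(e, 0), 30)) < 1e-20
# the leading term is -zeta(2)/4 * epsilon
assert abs(sp.N(ser.coeff(e, 1) + sp.zeta(2)/4, 30)) < 1e-20
```

The leading behaviour $(B_0(\epsilon)-1)/(2\epsilon) = -\zeta_2\,\epsilon/4 + \mathcal{O}(\epsilon^2)$ quoted in the last assertion follows from the log-Gamma expansion; it is stated here as an illustrative by-product of the check, not as a result from the text above.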
This implies that the infrared-singular part of the reduced amplitude is insensitive to $\mathcal{H}_{\text{NLL}}^{(+)}$ [23] and is given by:

$$
\hat{\mathcal{M}}_{\text{NLL}}^{(+)} = \exp \left\{ -\frac{\alpha_s}{\pi} \frac{B_0(\epsilon)}{2\epsilon} L \mathbf{T}_t^2 \right\} \mathbf{Z}_{\text{NLL}}^{(-)} \left( \frac{s}{t}, \mu, \alpha_s(\mu) \right) \mathcal{H}_{\text{LL}}^{(-)} (\{p_i\}, \mu, \alpha_s(\mu)) + \mathcal{O}(\epsilon^0). \quad (4.12)
$$

Equation (4.12) can be further simplified by noticing that the hard function at LL accuracy is fixed by Regge factorisation: it is simply the exponential of the finite part of the gluon Regge trajectory, i.e. we have

$$
\mathcal{H}_{\text{LL}}^{(-)} (\{p_i\}, \mu, \alpha_s(\mu)) = \exp \left\{ \frac{\alpha_s}{\pi} \frac{B_0(\epsilon) - 1}{2\epsilon} L\, C_A \right\} \mathcal{M}^{\text{(tree)}}, \quad (4.13)
$$

where we used the fact that $T_t^2 = C_A$ when acting on the Regge limit of the tree-level amplitude. Moving this (finite) exponential to the left, this result allows us to write eq. (4.12) more explicitly as

---PAGE_BREAK---

$$
\begin{equation}
\begin{split}
\exp \left\{ \frac{\alpha_s}{\pi} \frac{1 - B_0(\epsilon)}{2\epsilon} L(C_A - \mathbf{T}_t^2) \right\} \hat{\mathcal{M}}_{\text{NLL}} &= \exp \left\{ -\frac{1}{2\epsilon} \frac{\alpha_s}{\pi} L \mathbf{T}_t^2 \right\} \\
&\quad \times \mathcal{P} \exp \left\{ -\int_0^p \frac{d\lambda}{\lambda} \left[ \boldsymbol{\Gamma}_{\text{LL}} (\alpha_s(\lambda)) + \boldsymbol{\Gamma}_{\text{NLL}} (\alpha_s(\lambda)) \right] \right\} \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0),
\end{split}
\tag{4.14}
\end{equation}
$$

where it is understood that both sides of this equality are to be projected onto even signature. Below we will abbreviate the l.h.s. as $\bar{\mathcal{M}}_{\text{NLL}}$.
The NLL contribution to the path-ordered exponential on the second line can be written out fully as

$$
-\int_0^p \frac{d\lambda}{\lambda} \left[ \mathcal{P} \exp \left\{ -\int_0^\lambda \frac{d\lambda'}{\lambda'} \mathbf{\Gamma}_{\text{LL}}(\alpha_s(\lambda')) \right\} \right] \mathbf{\Gamma}_{\text{NLL}}(\alpha_s(\lambda)) \left[ \mathcal{P} \exp \left\{ -\int_\lambda^p \frac{d\lambda'}{\lambda'} \mathbf{\Gamma}_{\text{LL}}(\alpha_s(\lambda')) \right\} \right]. \quad (4.15)
$$

Finally, integrating the exponents in each of the two brackets as in eq. (4.11) and using again that $\mathbf{T}_t^2 = C_A$ in the right factor upon acting on $\mathcal{M}^{(\text{tree})}$, we obtain, projecting onto the even amplitude:

$$
\bar{\mathcal{M}}_{\text{NLL}}^{(+)} = - \int_0^p \frac{d\lambda}{\lambda} \exp \left\{ \frac{1}{2\epsilon} \frac{\alpha_s(p)}{\pi} L(C_A - \mathbf{T}_t^2) \left[ 1 - \left( \frac{p^2}{\lambda^2} \right)^{\epsilon} \right] \right\} \mathbf{\Gamma}_{\text{NLL}}^{(-)} (\alpha_s(\lambda)) \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \quad (4.16)
$$

This expression for the even amplitude may be compared directly with the one obtained in
eq. (3.18) using the BFKL analysis; exploiting the fact that the exponential on the l.h.s. of
eq. (4.14) is finite (and that $R(\epsilon)$ is finite), the BFKL prediction can be written as

$$
\bar{\mathcal{M}}_{\text{NLL}}^{(+)} = i\pi \left[ \frac{\exp\left\{\frac{1}{2\epsilon}\frac{\alpha_s}{\pi}L(C_A - \mathbf{T}_t^2)\right\} - 1}{L(C_A - \mathbf{T}_t^2)} \left(1 - \frac{C_A}{C_A - \mathbf{T}_t^2}R(\epsilon)\right)^{-1} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0) \right] \quad (4.17)
$$

with $R(\epsilon)$ defined in eq. (3.17). We now have two expressions for the infrared singularities of
the reduced amplitude — an expression in terms of the soft anomalous dimension, eq. (4.16),
and the all-order result of BFKL evolution in the soft approximation, eq.
(4.17). In the next section we equate them and extract $\Gamma_{\text{NLL}}^{(-)}$.

## 4.2 Extracting the soft anomalous dimension at NLL

In minimal subtraction schemes, anomalous dimensions can be extracted by taking the
coefficient of pure $1/\epsilon$ single poles. Indeed, to get the coefficient of the single poles in
eq. (4.16) we can drop the exponentials to get

$$
\begin{align}
[\bar{\mathcal{M}}_{\text{NLL}}^{(+)}]_{\text{single poles}} &= - \int_0^p \frac{d\lambda}{\lambda} \Gamma_{\text{NLL}}^{(-)} (\alpha_s(\lambda)) \mathcal{M}^{(\text{tree})} \\
&= \frac{1}{2\epsilon} \sum_{\ell=1}^{\infty} \left( \frac{\alpha_s(p)}{\pi} \right)^{\ell} L^{\ell-1} \frac{1}{\ell} \Gamma_{\text{NLL}}^{(-,\ell)} \mathcal{M}^{(\text{tree})}. \tag{4.18}
\end{align}
$$

This result must be set equal to the single poles obtained from eq. (4.17), whose $\ell$-loop
coefficient is

$$
\bar{\mathcal{M}}_{\text{NLL}}^{(+,\ell)} = \frac{i\pi}{2\epsilon\,\ell!} \left[ \frac{(C_A - \mathbf{T}_t^2)}{2\epsilon} \right]^{\ell-1} \left( 1 - \frac{C_A}{C_A - \mathbf{T}_t^2} R(\epsilon) \right)^{-1} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \quad (4.19)
$$

---PAGE_BREAK---

Comparing with eq. (4.18) then gives

$$
\Gamma_{\text{NLL}}^{(-,\ell)} = i\pi G^{(\ell)} \mathbf{T}_{s-u}^2 \quad (4.20)
$$

with

$$
G^{(\ell)} \equiv \frac{1}{(\ell-1)!} \left[ \frac{(C_A - \mathbf{T}_t^2)}{2} \right]^{\ell-1} \left( 1 - \frac{C_A}{C_A - \mathbf{T}_t^2} R(\epsilon) \right)^{-1} \Bigg|_{\epsilon^{\ell-1}}, \quad (4.21)
$$

where the subscript indicates that one should extract the coefficient of $\epsilon^{\ell-1}$. Although the notation does not manifest this, the end result is always a polynomial in the colour operators $C_A$ and $\mathbf{T}_t^2$, since $R(\epsilon)$ has a regular series as $\epsilon \to 0$.
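Eqs. (4.20)–(4.21) are straightforward to evaluate order by order with truncated power series. The sketch below is a minimal pure-Python implementation; it assumes for eq. (3.17) the closed form $R(\epsilon) = \Gamma^3(1-\epsilon)\Gamma(1+\epsilon)/\Gamma(1-2\epsilon) - 1$ (not reproduced in this excerpt, but consistent with the expansion $R(\epsilon) = -2\zeta_3\epsilon^3 + \dots$ used in the text), and treats $C_A$ and $\mathbf{T}_t^2$ as numeric eigenvalues on a fixed $t$-channel representation:

```python
import math

# zeta values needed up to eps^7 (even ones exact via powers of pi)
ZETA = {2: math.pi**2/6, 3: 1.2020569031595943, 4: math.pi**4/90,
        5: 1.0369277551433699, 6: math.pi**6/945, 7: 1.0083492773819228}

N = 8  # length of the truncated power series in epsilon

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    c = [0.0]*N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai*bj
    return c

# log(1+R) = 3 lnGamma(1-e) + lnGamma(1+e) - lnGamma(1-2e):
# the coefficient of eps^k is zeta(k)/k * (3 + (-1)^k - 2^k); k = 2 cancels,
# and k = 3 gives the quoted leading term -2 zeta_3 eps^3.
logR = [0.0]*N
for k in range(2, N):
    logR[k] = ZETA[k]/k * (3 + (-1)**k - 2**k)

# R = exp(log(1+R)) - 1, assembled order by order
R, term = [0.0]*N, [0.0]*N
R[0] = term[0] = 1.0
for k in range(1, N):
    term = [t/k for t in mul(term, logR)]
    R = [r + t for r, t in zip(R, term)]
R[0] -= 1.0

def G(ell, CA, Tt2):
    """G^(ell) of eq. (4.21), with CA and Tt2 = T_t^2 as numeric eigenvalues."""
    c = CA/(CA - Tt2)
    cR = [c*r for r in R]
    f, term = [0.0]*N, [0.0]*N          # f = (1 - c R)^(-1) as a series
    f[0] = term[0] = 1.0
    for _ in range(1, N):
        term = mul(term, cR)
        f = [x + t for x, t in zip(f, term)]
    return f[ell - 1]*((CA - Tt2)/2)**(ell - 1)/math.factorial(ell - 1)

# singlet exchange (C_A = 3, T_t^2 = 0): four loops vs -zeta_3/24 * C_A (C_A - T_t^2)^2
print(G(4, 3, 0), -ZETA[3]/24 * 3 * (3 - 0)**2)   # the two values agree
```

With $C_A = 3$ and $\mathbf{T}_t^2 = 0$ (singlet exchange) this gives $G^{(1)} = 1$, $G^{(2)} = G^{(3)} = 0$, and reproduces the four- and five-loop coefficients quoted in eq. (4.24).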
Rescaling $\epsilon$, this can also be written as + +$$ +\Gamma_{\text{NLL}}^{(-,\ell)} = \frac{i\pi}{(\ell-1)!} \left( 1 - \frac{C_A}{C_A - \mathbf{T}_t^2} R(x(C_A - \mathbf{T}_t^2)/2) \right)^{-1} \bigg|_{x^{\ell-1}} \mathbf{T}_{s-u}^2 . \quad (4.22) +$$ + +where the function $R(\epsilon) = -2\zeta_3 \epsilon^3 + \dots$ is defined in eq. (3.17). + +Equation (4.22) is the main result of this paper: it gives the soft anomalous dimension in the Regge limit to any loop order at next-to-leading logarithmic accuracy (i.e. all terms of the form $\alpha_s^\ell L^{\ell-1}$); the even contribution $\Gamma_{\text{NLL}}^{(+,\ell)}$ was given in eqs. (4.6) and (4.8). In other words, we now know eq. (4.9) to all orders: + +$$ +\Gamma_{\text{NLL}}^{(-)} = \sum_{\ell=1}^{\infty} \Gamma_{\text{NLL}}^{(-,\ell)} \left( \frac{\alpha_s(\lambda)}{\pi} \right)^{\ell} L^{\ell-1}. \qquad (4.23) +$$ + +Expanding the above formula explicitly to eight loops: + +$$ +\begin{align*} +\Gamma_{\text{NLL}}^{(-,1)} &= i\pi \mathbf{T}_{s-u}^2 \\ +\Gamma_{\text{NLL}}^{(-,2)} &= 0 \\ +\Gamma_{\text{NLL}}^{(-,3)} &= 0, \\ +\Gamma_{\text{NLL}}^{(-,4)} &= -i\pi \frac{\zeta_3}{24} C_A (C_A - \mathbf{T}_t^2)^2 \mathbf{T}_{s-u}^2, \\ +\Gamma_{\text{NLL}}^{(-,5)} &= -i\pi \frac{\zeta_4}{128} C_A (C_A - \mathbf{T}_t^2)^3 \mathbf{T}_{s-u}^2, \tag{4.24} \\ +\Gamma_{\text{NLL}}^{(-,6)} &= -i\pi \frac{\zeta_5}{640} C_A (C_A - \mathbf{T}_t^2)^4 \mathbf{T}_{s-u}^2, \\ +\Gamma_{\text{NLL}}^{(-,7)} &= i\pi \frac{1}{720} \left[ \frac{\zeta_3^2}{16} C_A^2 (C_A - \mathbf{T}_t^2)^4 + \frac{1}{32} (\zeta_3^2 - 5\zeta_6) C_A (C_A - \mathbf{T}_t^2)^5 \right] \mathbf{T}_{s-u}^2, \\ +\Gamma_{\text{NLL}}^{(-,8)} &= i\pi \frac{1}{5040} \left[ \frac{3\zeta_3\zeta_4}{32} C_A^2 (C_A - \mathbf{T}_t^2)^5 + \frac{3}{64} (\zeta_3\zeta_4 - 3\zeta_7) C_A (C_A - \mathbf{T}_t^2)^6 \right] \mathbf{T}_{s-u}^2. 
\end{align*}
$$

These results are valid in any gauge theory, and hold modulo colour operators which vanish when acting on the Regge limit of the tree amplitude (which is given by the *t*-channel gluon exchange diagram).

---PAGE_BREAK---

### 4.3 Properties of the soft anomalous dimension in the Regge limit

In the previous section we computed $\Gamma_{\text{NLL}}^{(-)}$, the imaginary part of the soft anomalous dimension in the Regge limit, to all orders. Let us briefly explore its properties, addressing the colour structure, the convergence of the expansion, and finally its asymptotic high-energy behaviour.

Considering eq. (4.24), our first observation is that colour structures of increasing complexity emerge every three loops, as dictated by the expansion of $R(\epsilon)$ in eq. (3.17): corrections going beyond the dipole formula start at four loops, where the colour structure is proportional to a single power of $C_A$. This correction reproduces precisely that found previously in ref. [23]. Proceeding to five and six loops, $\Gamma_{\text{NLL}}$ only incurs extra powers of $(C_A - \mathbf{T}_t^2)$. Starting at seven loops, however, terms with two powers of $C_A$ appear as well. Similarly, a cubic power of $C_A$ would emerge at ten loops, and so on. We also note that the zeta values appearing in $\Gamma_{\text{NLL}}$ are of uniform weight, which is, of course, again a mere consequence of the Taylor series of $R(\epsilon)$.

To proceed it would be useful to specify the relevant colour charge exchanged in the $t$ channel, $\mathbf{T}_t^2$. To this end consider for example gluon-gluon scattering, where the $t$-channel colour flow can be any of the $\text{SU}(N_c)$ representations appearing in the decomposition[^4]

$$8 \otimes 8 = 1 \oplus 8_s \oplus 8_a \oplus 10 \oplus \overline{10} \oplus 27 \oplus 0, \qquad (4.25)$$

where the labels refer to their dimensions for $N_c = 3$.
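For general $N_c$, these representations have standard dimension formulas (assumed here from SU($N$) representation theory, since the text itself only quotes the $N_c = 3$ values); a minimal sketch checks that they sum to $\dim(8 \otimes 8) = (N_c^2 - 1)^2$ and that the "0" representation indeed has vanishing dimension at $N_c = 3$:

```python
def rep_dims(N):
    """Dimensions of the SU(N) representations in the product of two adjoints,
    labelled by their N = 3 dimensions as in eq. (4.25); the general-N formulas
    are assumed from standard representation theory, not from the text."""
    adj = N**2 - 1
    return {
        "1": 1,
        "8_s": adj,
        "8_a": adj,
        "10": adj*(N**2 - 4)//4,
        "10bar": adj*(N**2 - 4)//4,
        "27": N**2*(N - 1)*(N + 3)//4,
        "0": N**2*(N + 1)*(N - 3)//4,
    }

for N in (3, 4, 5):
    dims = rep_dims(N)
    assert sum(dims.values()) == (N**2 - 1)**2   # dim(adj x adj)

print(rep_dims(3))   # the "0" entry vanishes for N = 3
```

The vanishing of the "0" entry at $N_c = 3$ is the statement used below that this representation decouples in QCD.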
Because of Bose symmetry, the symmetry of the colour structure mirrors the signature of the corresponding amplitudes under $s \leftrightarrow u$ exchange. Thus, only even representations are relevant for the two-Reggeon amplitude discussed here; these are the singlet, where $\mathbf{T}_t^2 = 0$, the symmetric octet with $\mathbf{T}_t^2 = C_A = N_c$, the 27 representation with $\mathbf{T}_t^2 = 2(N_c + 1)$, and the “0” representation, where $\mathbf{T}_t^2 = 2(N_c - 1)$. In the following we restrict the discussion to the first three cases, which are all relevant for QCD with $N_c = 3$ (the latter has a vanishing dimension, and hence it does not contribute). + +The next observation, already mentioned in section 2.2, is that the symmetric octet representation with $\mathbf{T}_t^2 = C_A$, corresponds to a constant wavefunction, and thus a trivial solution to eq. (2.18), with no corrections to the reduced amplitude beyond one loop (as can be verified for example in the explicit results in eqs. (3.19) through (3.22) upon considering $\mathbf{T}_t^2 = C_A$). The reduced amplitude for the symmetric octet state is thus one-loop exact, corresponding to a simple Regge-pole behaviour with a gluon Regge trajectory for the original amplitude according to eq. (2.3). This of course reproduces the known behaviour of the symmetric-octet exchange used in the original derivation of the BFKL equation. In turn, for the singlet — the famous Pomeron — and 27 representation, we find non-trivial radiative corrections associated with a Regge cut. We will thus use these two examples in the discussion that follows. + +Next let us consider the convergence properties of the perturbative series representing the soft anomalous dimension in eq. (4.20). One immediately notes that this series is highly convergent due to the $1/(\ell - 1)!$ prefactor in eq. (4.21). Figure 5 illustrates this factorial + +[^4]: A more complete exposition of the *t*-channel basis of colour flow can be found in refs. 
[20, 24]. +---PAGE_BREAK--- + +**Figure 5.** Logarithmic plot of the absolute value of the coefficients $G^{(\ell)}$ (4.27), for $\ell = 1, \dots, 22$. The $|G^{(\ell)}|$ quickly become very small suggesting good convergence of the series. Shown is the singlet (crosses) and 27 exchange (circles). + +suppression of the coefficients $G^{(\ell)}$ as a function of the order $\ell$ for $C_A = N_c = 3$ and for the +two relevant representations, the singlet and the 27. + +Furthermore, we can establish that the anomalous dimension (4.22) has an *infinite radius of convergence* as a function of $x \equiv L\alpha_s/\pi$. To see this we write the resummed soft anomalous dimension as: + +$$ +\Gamma_{\text{NLL}}^{(-)} = i\pi \frac{\alpha_s}{\pi} G\left(\frac{\alpha_s}{\pi}L\right) \mathbf{T}_{s-u}^2, \quad (4.26) +$$ + +where the generating function for the expansion coefficients is defined by + +$$ +G(x) = \sum_{\ell=1}^{\infty} x^{\ell-1} G^{(\ell)}. \tag{4.27} +$$ + +It is convenient to further identify $G(x)$ as the Borel transform of some function + +$$ +g(y) \equiv \int_0^\infty dx G(x) e^{-x/y} = \sum_{\ell=1}^\infty G^{(\ell)} y^\ell (\ell - 1)! , \quad (4.28) +$$ + +which upon using eq. (4.21), simply evaluates to + +$$ +g(y) = \frac{y}{1 - \frac{C_A}{C_A - \mathbf{T}_t^2} R \left( y(C_A - \mathbf{T}_t^2)/2 \right)}. \quad (4.29) +$$ + +We may now recover the original $G(x)$ via the integral + +$$ +G(x) = \frac{1}{2\pi i} \int_{w-i\infty}^{w+i\infty} d\eta g\left(\frac{1}{\eta}\right) e^{\eta x}, \qquad (4.30) +$$ +---PAGE_BREAK--- + +**Figure 6.** Partial sums $G_n(x) = \sum_{\ell=1}^n G^{(\ell)} x^{\ell-1}$ for $n = 1, \dots, 22$ (rainbow, red through violet) and numerical results for $G(x)$ (black crosses). The plot illustrates convergence in that increasing the order $n$ extends the range of $x$ for which the partial sum matches the numerical result. The figure shows the singlet (left) as well as the 27 exchange (right). 
where the integration contour runs parallel to the imaginary axis, to the right of all singularities of the integrand.

The function $g(y)$ in eq. (4.28) only has isolated poles away from the origin and has a finite radius of convergence: it is well-defined in a disc around the origin. It then follows that $G(x)$ has an infinite radius of convergence, hence this function — and the soft anomalous dimension $\Gamma_{\text{NLL}}^{(-)}$ in eq. (4.26) — is an entire function, free of any singularities for any finite $x = L\alpha_s/\pi$.

We stress that our use of the Borel transform is opposite to the usual application of Borel summation (which is ordinarily used to sum asymptotic series): the function $G(x)$, in which we are interested, is an entire function; we make use of its inverse Borel transform, $g(y)$, which has worse behaviour by having merely a finite radius of convergence. Nonetheless we find that numerically integrating eq. (4.30) is a particularly convenient way to evaluate the anomalous dimension. This numerical integration is compared to the partial sums

$$G_n(x) \equiv \sum_{\ell=1}^{n} G^{(\ell)} x^{\ell-1} \qquad (4.31)$$

in figure 6, where we find good agreement for the given values of $x$. While it becomes challenging to efficiently compute the coefficients $G^{(\ell)}$ at high orders (here we only evaluated them for $\ell \le 22$), we find the numerical integration of eq. (4.30) to be very stable, even for larger values of $x$. Thus, the remarkable convergence properties of $G(x)$, along with the Borel technique, present us with the possibility of computing $\Gamma_{\text{NLL}}^{(-)}$ for $x = L\alpha_s/\pi \gg 1$, i.e. at asymptotically high energies. This is a rather unique situation in a perturbative setting — in other circumstances resummation techniques are limited to the region $x = L\alpha_s/\pi \lesssim 1$.
Evaluating the integral (4.30) and plotting $G(x)$ for larger values of $x$ reveals oscillations with a constant period and an exponentially growing amplitude. Since this behaviour is difficult to capture graphically, we instead show the logarithm of $|G(x)|$ weighted by the sign of $G(x)$ in figure 7. This observation suggests approximating (4.30) by

---PAGE_BREAK---

**Figure 7.** Numerical results for $\text{sign}[G(x)] \ln|G(x)|$ for the singlet (blue) and 27 exchange (orange). The “heartbeat” at small $x$ reflects the logarithmic divergence of $\ln|G(x)|$ where $G(x)$ changes its sign for the first time (similar divergences occur every oscillation but are not visible due to the finite resolution of the plot).

$$ G(x) \rightarrow c e^{ax} \cos(bx + d), \quad (4.32) $$

for sufficiently large values of $x$. By means of eq. (4.28), this model is equivalent to

$$ g\left(\frac{1}{\eta}\right) \rightarrow c \operatorname{Re} \left[ \frac{e^{id}}{\eta - a - ib} \right] = \frac{c}{2} \left( \frac{e^{id}}{\eta - a - ib} + \frac{e^{-id}}{\eta - a + ib} \right), \quad (4.33) $$

which is to be integrated as in (4.30) with a contour to the right of the poles. We thus find that, to capture the behaviour of $G(x)$ at large $x$, it is sufficient to simply consider $g(\frac{1}{\eta})$ as a pair of complex-conjugate poles at $\eta = a \pm ib$. Indeed, numerically extracting the rightmost poles of $g(\frac{1}{\eta})$ of eq. (4.29) to identify the parameters $a$ and $b$ in eq. (4.33), and dividing the full, numerically-evaluated $G(x)$ by $e^{ax}$, leaves us with almost pure cosine-like behaviour for any $x \gg 1$, as can be seen in figure 8. For reference, we quote our numerical results for $a, b, c$ and $d$ in table 1.
| | $a$ | $b$ | $c$ | $d$ |
|---|---|---|---|---|
| **1** | 1.97 | 1.52 | 0.25 | 0.48 |
| **27** | 1.46 | 0.41 | 0.58 | 2.01 |
+ +**Table 1.** Numerical results for *a*, *b*, *c* and *d*, cf. eq. (4.32), for the singlet (1) and 27 representation. +---PAGE_BREAK--- + +**Figure 8.** The approximation of eq. (4.32) for $G(x)$ for $x \gg 1$, divided by $e^{ax}$ (solid line) contrasted with numerical results (crosses). The coefficients $a$ and $b$ were extracted from the poles of $g(1/\eta)$ while $c$ and $d$ were fitted after dividing the full, numerically evaluated, $G(x)$ by $e^{ax}$. Already for moderate values of $x$ we observe excellent agreement. The singlet exchange is shown on the left and the 27 is on the right. + +## 4.4 Exponentiation check for higher-order infrared poles + +As a final step we confirm the agreement between the BFKL prediction and the soft factorisation theorem. Thus far we have only used the single poles as predicted by the BFKL evolution to extract the NLL soft anomalous dimension $\Gamma_{\text{NLL}}^{(-)}$. As explained in section 4.1, higher-order poles of the amplitude are generated upon expansion of the path-ordered exponential in eq. (4.16). They have to match the BFKL computation and therefore provide an independent and non-trivial check of our results. + +To see how this works, let us expand the BFKL result (4.17) to the first few orders, namely + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+)} \left( \frac{s}{-t} \right) = \sum_{\ell=1}^{\infty} \left( \frac{\alpha_s}{\pi} \right)^{\ell} L^{\ell-1} \bar{\mathcal{M}}_{\text{NLL}}^{(+,\ell)}. 
\quad (4.34) $$ + +with + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+,1)} = i\pi \left[ \frac{1}{2\epsilon} + O(\epsilon^0) \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}, \quad (4.35a) $$ + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+,2)} = i\pi \frac{(C_A - \mathbf{T}_t^2)}{2!} \left[ \frac{1}{(2\epsilon)^2} + O(\epsilon^0) \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}, \quad (4.35b) $$ + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+,3)} = i\pi \frac{(C_A - \mathbf{T}_t^2)^2}{3!} \left[ \frac{1}{(2\epsilon)^3} + O(\epsilon^0) \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}, \quad (4.35c) $$ + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+,4)} = i\pi \frac{(C_A - \mathbf{T}_t^2)^3}{4!} \left[ \frac{1}{(2\epsilon)^4} - \frac{1}{2\epsilon} \frac{\zeta_3 C_A}{4(C_A - \mathbf{T}_t^2)} + O(\epsilon^0) \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}, \quad (4.35d) $$ + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+,5)} = i\pi \frac{(C_A - \mathbf{T}_t^2)^4}{5!} \left[ \frac{1}{(2\epsilon)^5} - \frac{1}{(2\epsilon)^2} \frac{\zeta_3 C_A}{4(C_A - \mathbf{T}_t^2)} - \frac{1}{2\epsilon} \frac{3\zeta_4 C_A}{16(C_A - \mathbf{T}_t^2)} + O(\epsilon^0) \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}}. \quad (4.35e) $$ +---PAGE_BREAK--- + +Let us begin with the leading pole. One can see a simple pattern in its $\ell$-th order coefficient, which is proportional to $(C_A - \mathbf{T}_t^2)^{\ell-1}/(\ell!(2\epsilon)^\ell)$. This should be compared with the prediction (4.16) from infrared exponentiation, which we reproduce here for convenience: + +$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+)} = - \int_0^p \frac{d\lambda}{\lambda} \exp \left\{ \frac{1}{2\epsilon} \frac{\alpha_s(p)}{\pi} L(C_A - \mathbf{T}_t^2) \left[ 1 - \left( \frac{p^2}{\lambda^2} \right)^\epsilon \right] \right\} \Gamma_{\text{NLL}}^{(-)} (\alpha_s(\lambda)) \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \quad (4.36) $$ + +Substituting $\Gamma_{\text{NLL}}^{(-)}$ using eqs. 
(4.23) and (4.20), and taking into account the running of the coupling, $\alpha_s(\mu) = \alpha_s(p) (p^2/\mu^2)^\epsilon$, one gets

$$ \begin{aligned} \bar{\mathcal{M}}_{\text{NLL}}^{(+)} = & -i\pi \sum_{k=1}^{\infty} G^{(k)} \left(\frac{\alpha_s(p)}{\pi}\right)^k L^{k-1} \int_0^p \frac{d\lambda}{\lambda} \left(\frac{p^2}{\lambda^2}\right)^{\epsilon k} \\ & \times \exp \left\{ \frac{1}{2\epsilon} \frac{\alpha_s(p)}{\pi} L(C_A - \mathbf{T}_t^2) \left[ 1 - \left(\frac{p^2}{\lambda^2}\right)^\epsilon \right] \right\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \end{aligned} \quad (4.37) $$

For the leading pole it is clear that only the $G^{(1)}$ terms contribute, corresponding to the one-loop contribution to the soft anomalous dimension (4.9), and we then get:

$$ \begin{aligned} [\bar{\mathcal{M}}_{\text{NLL}}^{(+)}]_{\text{leading poles}} &= -i\pi \frac{\alpha_s(p)}{\pi} \int_0^p \frac{d\lambda}{\lambda} \left(\frac{p^2}{\lambda^2}\right)^{\epsilon} \\ & \quad \times \exp \left\{ \frac{1}{2\epsilon} \frac{\alpha_s(p)}{\pi} L(C_A - \mathbf{T}_t^2) \left[ 1 - \left(\frac{p^2}{\lambda^2}\right)^{\epsilon} \right] \right\} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} \\ &= i\pi \left[ \frac{\exp\left\{\frac{1}{2\epsilon}\frac{\alpha_s(p)}{\pi}L(C_A - \mathbf{T}_t^2)\right\} - 1}{L(C_A - \mathbf{T}_t^2)} \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})}. \end{aligned} \quad (4.38) $$

Expanding in $\alpha_s$ this matches precisely the $1/(\ell!(2\epsilon)^\ell)$ terms in eq. (4.35), with the correct prefactor. This exponentiation of leading poles had been verified previously in ref. [23]. Moving on to the first subleading pole, the Regge prediction reveals a four-loop single pole in eq. (4.35d), as well as a five-loop double pole in eq. (4.35e) and so on, all proportional to $\zeta_3$. In general, expanding the BFKL result (4.17) to higher orders one finds a tower of such terms going like $1/(\ell!(2\epsilon)^{\ell-3})$.
In the infrared exponentiation formula, these should be generated by a single parameter, the four-loop anomalous dimension, $\Gamma_{\text{NLL}}^{(-,4)}$, which is indeed proportional to $\zeta_3$ (see eq. (4.24)). It can be traced back to the leading-order term in the expansion of $R(\epsilon)$ in (3.17), contributing to $G^{(4)}$ in eq. (4.21). Similarly, a $k$-loop anomalous dimension $\Gamma_{\text{NLL}}^{(-,k)}$, in general, contributes in proportion to $G^{(k)}$. Indeed, integrating eq. (4.37) we find that

$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+)} = \frac{i\pi}{2\epsilon} \sum_{k=1}^{\infty} G^{(k)} (k-1)! \sum_{\ell=k}^{\infty} \frac{1}{\ell!} \left(\frac{\alpha_s(p)}{\pi}\right)^{\ell} L^{\ell-1} \left(\frac{C_A - \mathbf{T}_t^2}{2\epsilon}\right)^{\ell-k} \mathbf{T}_{s-u}^2 \mathcal{M}^{(\text{tree})} + \mathcal{O}(\epsilon^0). \quad (4.39) $$

Next we note that, for given $k$, all contributions with $\ell < k$ are either constant or vanish for $\epsilon \to 0$, and so insofar as the singularities are concerned the sum over $\ell$ can be performed

---PAGE_BREAK---

over all positive integers, independently of $k$. This yields

$$ \bar{\mathcal{M}}_{\text{NLL}}^{(+)} = i\pi \sum_{k=1}^{\infty} \frac{G^{(k)} (k-1)!(2\epsilon)^{k-1}}{L(C_A - \mathbf{T}_t^2)^k} \left[ \exp\left\{ \frac{1}{2\epsilon} \frac{\alpha_s}{\pi} L(C_A - \mathbf{T}_t^2) \right\} - 1 \right] \mathbf{T}_{s-u}^2 \mathcal{M}^{\text{(tree)}} + \mathcal{O}(\epsilon^0). \quad (4.40) $$

This shows that infrared exponentiation works out *if*, and *only if*, all the poles in the NLL amplitude can be written as a function of $\epsilon$ only (i.e. independent of $\alpha_s$), times the quantity in the square bracket. With hindsight, infrared exponentiation thus explains the compact form of the BFKL result in eq. (4.17). Finally, it is straightforward to substitute in the definition of $G^{(k)}$ from eq. (4.21) and sum up the series over $k$, recovering the full result for the singularities of the amplitudes in eq. (4.17).
This completes the proof that the BFKL result we obtained is consistent with infrared factorisation.

## 5 Conclusions

We considered the even-signature component of two-to-two parton scattering amplitudes in the high-energy limit. This amplitude is dominated by the $t$-channel exchange of a state consisting of two Reggeized gluons, corresponding to the simplest example of a Regge cut in QCD. The amplitude can be evaluated in QCD perturbation theory by iteratively solving the BFKL equation. Each order in perturbation theory corresponds to one additional rung in the BFKL ladder, building up a tower of so-called next-to-leading logarithms, $\mathcal{O}(\alpha_s^\ell L^{\ell-1})$. Although the BFKL Hamiltonian has been diagonalised in many cases [3], the dimensionally-regulated Hamiltonian relevant for partonic amplitudes has remained more difficult to handle.

Our first observation was that the wavefunction describing the two Reggeized gluons remains finite through BFKL evolution for any number of rungs, while the corresponding amplitude develops infrared singularities due to the soft limit of the wavefunction. We further observed that the evolution of a state in which one of the two Reggeized gluons is much softer than the other, $k \ll p-k$, yields again a similar state. In other words, the soft approximation is consistent with BFKL evolution, and as a consequence, one can systematically solve the equation to any loop order within this approximation. We found that the soft approximation leads to a major simplification, where all integrals reduce to products of bubbles, and the wavefunction at any given order is simply a polynomial of that order in $(p^2/k^2)^\epsilon$. This eventually allowed us to determine the singularities of the amplitude in a closed form to any order, as given in eq. (3.18).

At the next step we contrasted the singularity structure we obtained through BFKL evolution with the known exponentiation properties of infrared singularities.
As expected, we found that the two are consistent, and this provides a highly non-trivial check of the calculation. The leading singularity at each order, $\mathcal{O}(\alpha_s^\ell L^{\ell-1}/\epsilon^\ell)$, is simply related to the one-loop soft anomalous dimension, and has a colour structure proportional to $(C_A - \mathbf{T}_t^2)^{\ell-1}$. New singularities, with fewer powers of $1/\epsilon$ and different colour structures, appear starting from four loops. These correspond to new terms in the imaginary part of the soft anomalous dimension, eq. (4.24). We were thus able to determine the soft anomalous dimension at

---PAGE_BREAK---

next-to-leading logarithmic accuracy in the high-energy limit to all orders. These results also provide a valuable input for determining the structure of long-distance singularities for general kinematics using a bootstrap approach, as done at the three-loop order in ref. [32].

We point out that the $\ell$-loop coefficient of the soft anomalous dimension we computed is a linear combination of zeta values of weight $(\ell - 1)$, which coincides with the maximal (transcendental) weight. This is not surprising given that these corrections are independent of both the matter content and the amount of supersymmetry of the theory, and are thus common, for example, to QCD and $\mathcal{N} = 4$ super Yang-Mills. We further showed that these corrections to the soft anomalous dimension can be resummed, as in eq. (4.26), into an entire function of $x = L\alpha_s/\pi$. Remarkably, this gives us the means to determine the asymptotic high-energy behaviour of this anomalous dimension, corresponding to $x \gg 1$, a regime which is usually inaccessible to perturbation theory. We find that at large $x$ the imaginary part of the anomalous dimension in the Regge limit, in any colour representation, becomes an oscillating function with an exponentially growing amplitude.
While our analysis in this paper was focused on infrared singularities, for which the soft approximation is sufficient, the formulation of the evolution in eq. (2.19), along with the observation that the wavefunction is finite, paves the way to determining the wavefunction beyond the soft approximation, thus evaluating the finite contributions to the Regge cut of two-to-two amplitudes. It would also be interesting to extend the present analysis to the next order, using the known next-to-leading order Hamiltonian; again we expect that a suitable wavefunction will remain finite to all orders, facilitating a direct determination of the infrared singularities.

## Acknowledgments

We would like to thank J.M. Smillie for useful discussions in the early stages of this project. SCH’s research is supported by the National Science and Engineering Council of Canada, and was supported in its early stage by the Danish National Research Foundation (DNRF91). EG’s research is supported by the STFC Consolidated Grant “Particle Physics at the Higgs Centre.” LV’s research is supported by the People Programme (Marie Curie Actions) of the European Union’s Horizon 2020 Framework Programme H2020-MSCA-IF-2014 under REA grant No. 656463 – “Soft Gluons”. SCH thanks the Higgs Centre for Theoretical Physics for hospitality during part of this work. This research was conducted in part at the CERN summer institute “LHC and the Standard Model: Physics and Tools” and at the workshops “Automated, Resummed and Effective: Precision Computations for the LHC and Beyond” and “Mathematics and Physics of Scattering Amplitudes” at the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence “Origin and Structure of the Universe”.

---PAGE_BREAK---

## A The even amplitude at NLL accuracy within the shockwave formalism

In this appendix we briefly review how eq. (2.13) can be derived within the shockwave formalism of refs. [23, 24].
Amplitudes in the high-energy limit are calculated as expectation values of null Wilson lines:

$$
U(z_{\perp}) = \mathcal{P} \exp \left[ ig_s \int_{-\infty}^{+\infty} dx^+ A_+^a (x^+, x^- = 0, z_{\perp}) T^a \right] . \quad (\text{A.1})
$$

These Wilson lines follow the paths of the colliding partons from the projectile or the target (with $x^+$ and $x^-$ interchanged), and are labelled by transverse coordinates $z_\perp$ (below we shall omit the subscript $\perp$ for lighter notation). The full transverse structure needs to be retained, because the high-energy limit is taken with fixed momentum transfer. Importantly, the number of Wilson lines cannot be held fixed, because the projectile and target contain an arbitrary number of virtual partons. However, in perturbation theory, the unitary matrices $U(z)$ are close to the identity and can therefore be usefully parametrised by a field $W$:

$$
U(z) = e^{ig_s T^a W^a(z)}. \tag{A.2}
$$

Physically, the colour-adjoint field $W^a$, which propagates in the transverse space, is interpreted as a source for a BFKL Reggeized gluon [23]. At weak coupling a generic projectile is thus formed by a superposition of $W$ states. Up to NLL accuracy one needs to consider at most two Reggeons. In this approximation, a projectile, created with four-momentum $p_1$ and absorbed with $p_4$, is parameterised in momentum space as

$$
|\psi_i\rangle \equiv \frac{Z_i^{-1}}{2p_1^+} a_i(p_4) a_i^\dagger(p_1) |0\rangle = |\psi_{i,1}\rangle + |\psi_{i,2}\rangle + \dots, \quad (\text{A.3})
$$

where the ellipses stand for wavefunction components with three or more Reggeized gluons, which are not relevant at NLL accuracy.
We next note that states with an even (odd) +number of Reggeized gluons have an even (odd) signature, so + +$$ +|\psi_{i,1}\rangle = |\psi_{i,1}^{(-)}\rangle = ig_s D_i^{(1)}(p) \mathbf{T}_i^a W^a(p) \qquad (\text{A.4a}) +$$ + +$$ +|\psi_{i,2}\rangle = |\psi_{i,2}^{(+)}\rangle = -\frac{g_s^2}{2} \mathbf{T}_i^a \mathbf{T}_i^b \int \frac{\mathrm{d}^{2-2\epsilon} q}{(2\pi)^{2-2\epsilon}} \Omega^{(0)}(p,q) W^a(q) W^b(p-q), \quad (\text{A.4b}) +$$ + +where $D_i^{(1)}(p)$ is an impact factor which parameterises the dependence of the coefficient on +the (transverse) momentum transfer $p = p_4 - p_1$ with $p^2 = -t$. At the leading order, there is +only one Wilson line $U(z)$ following the original parton, and the two-Reggeon wavefunction +is obtained simply by expanding eq. (A.2), which gives, as in the main text: + +$$ +\Omega^{(0)}(p, q) = 1. \tag{A.5} +$$ + +The null Wilson lines acquire energy dependence through rapidity divergences, which must +be regulated, leading to the Balitsky-JIMWLK rapidity evolution equation: + +$$ +\frac{d}{d\eta} |\psi_i\rangle = H |\psi_i\rangle . \tag{A.6} +$$ +---PAGE_BREAK--- + +The scattering amplitude can be obtained by computing the overlap between $\langle \psi_j |$ and $|\psi_i \rangle$, after evolving them to common rapidity, where the overlap is defined as the vacuum expectation value of left-moving and right-moving W-fields. In terms of the reduced amplitude defined in eq. (2.3) one has + +$$ \frac{i}{2s} \hat{M}_{ij \to ij} = \langle \psi_j | e^{\hat{H}L} |\psi_i \rangle, \quad \hat{H} = H - \mathbf{T}_t^2 \alpha_g(t). \qquad (\text{A.7}) $$ + +Evolution at the desired accuracy is obtained by simply considering the Hamiltonian at leading order in $g_s^2$ in terms of W fields, which, to this order, is diagonal: + +$$ \hat{H} \begin{pmatrix} W \\ WW \end{pmatrix} = \begin{pmatrix} \hat{H}_{1 \to 1} & 0 \\ 0 & \hat{H}_{2 \to 2} \end{pmatrix} \begin{pmatrix} W \\ WW \end{pmatrix} + \mathcal{O}(g_s^4). 
\tag{A.8} $$

Since the signature-odd and signature-even sectors are orthogonal and closed under the action of $\hat{H}$ (a consequence of signature symmetry), their contributions to the amplitude at NLL separate:

$$ \begin{aligned} \frac{i}{2s} \hat{M}_{ij \to ij}^{\text{NLL}} &= \frac{i}{2s} \left(\hat{M}_{ij \to ij}^{(-),\text{NLL}} + \hat{M}_{ij \to ij}^{(+),\text{NLL}}\right) \\ &\equiv \langle \psi_{j,1}^{(-)} | e^{\hat{H}L} | \psi_{i,1}^{(-)} \rangle^{\text{(NLO)}} + \langle \psi_{j,2}^{(+)} | e^{\hat{H}L} | \psi_{i,2}^{(+)} \rangle^{\text{(LO)}}, \end{aligned} \tag{A.9} $$

where “LO” and “NLO” mean that all ingredients are needed to leading and next-to-leading non-vanishing order, respectively. In this paper we focus on the even amplitude, representing the exchange of a pair of Reggeons, which corresponds to the second term in eq. (A.9). It is then convenient to compute the inner product in eq. (A.7) by first evolving the wavefunction:

$$ e^{\hat{H}_{2 \to 2} L} |\psi_{i,2}^{(+)}\rangle = -\frac{g_s^2}{2} \mathbf{T}_i^a \mathbf{T}_i^b \sum_{\ell=0}^{\infty} \frac{1}{\ell!} \left( \frac{\alpha_s B_0(\epsilon)L}{\pi} \right)^{\ell} \int \frac{d^{2-2\epsilon}q}{(2\pi)^{2-2\epsilon}} \Omega^{(\ell)}(p,q) W^a(q)W^b(p-q). \tag{A.10} $$

As displayed in eq. (2.15), the wavefunctions $\Omega^{(\ell)}$ may then be obtained iteratively by applying the Hamiltonian $\hat{H}_{2 \to 2}$. This Hamiltonian was discussed at length in terms of Wilson lines in ref. [24], to which we refer for further details ($-\hat{H}_{k \to k}$ is given in eq. (3.13) there; note the overall minus sign relative to our conventions). Acting with $\hat{H}_{2 \to 2}$ on the states in eq. (A.10) reproduces precisely the leading-order BFKL Hamiltonian recorded in the main text. Finally, computing the overlap with the target state $\langle \psi_{j,2}^{(+)}|$ produces the integral which closes the ladder in eq. (2.13).

## B Proof of the all-order amplitude

In this appendix we show that the singular terms in eq. (3.15) are equal to those in eq. (3.13). We start by noticing that the statement is equivalent to

$$ \sum_{n=1}^{\ell} (-1)^{n+1} \binom{\ell}{n} \prod_{m=0}^{n-2} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] - \left( 1 - \hat{B}_{-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right)^{-1} = \mathcal{O}(\epsilon^\ell). \tag{B.1} $$
---PAGE_BREAK---

Multiplying both sides of this equality by $\left(1 - \hat{B}_{-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right) = 1 + \mathcal{O}(\epsilon^3)$ we get

$$ \sum_{n=1}^{\ell} (-1)^{n+1} \binom{\ell}{n} \prod_{m=0}^{n-2} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] \left( 1 - \hat{B}_{-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right) - 1 = \mathcal{O}(\epsilon^\ell). \tag{B.2} $$

The additional factor multiplying the sum on the l.h.s. can be incorporated into the product. Similarly, the $-1$ on the l.h.s. can be included in the sum. We obtain

$$ \sum_{n=0}^{\ell} (-1)^{n+1} \binom{\ell}{n} \prod_{m=-1}^{n-2} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] = \mathcal{O}(\epsilon^{\ell}). \tag{B.3} $$

At this point we realise that the structure of the sum and product is strikingly similar to that appearing in the target-averaged wavefunction in eq. (3.11). In that case, finiteness of the $\ell$-loop wavefunction implies

$$ \sum_{n=0}^{\ell} (-1)^n n^q \binom{\ell}{n} \prod_{m=0}^{n-1} \left[ 1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \right] = \mathcal{O}\left(\epsilon^{\ell-q}\right) \quad \text{with } q = 0, 1, 2, \dots, \tag{B.4} $$

which is obtained by expanding $(p^2/k^2)^{n\epsilon}$ around small $\epsilon$ inside the sum.
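The mechanism behind eq. (B.4) is that of an $\ell$-th order finite difference: the alternating binomial sum annihilates every term with fewer than $\ell$ powers of $n$, and each power of $n$ in the expansion comes with a power of $\epsilon$. A minimal symbolic check of this mechanism (with a generic stand-in function $f(n) = e^{a n \epsilon}$, analytic in $\delta = n\epsilon$, rather than the actual $\hat{B}_m$ product) can be done with sympy:

```python
import sympy as sp

eps, a = sp.symbols('epsilon a')
ell = 3

# Stand-in for any quantity analytic in delta = n*epsilon
f = lambda n: sp.exp(a * n * eps)

# Alternating binomial (finite-difference) sum, as in eq. (B.4) with q = 0
s = sum((-1)**n * sp.binomial(ell, n) * f(n) for n in range(ell + 1))

# All terms below epsilon**ell cancel: the expansion starts at order ell
low_orders = sp.series(s, eps, 0, ell).removeO()
assert sp.simplify(low_orders) == 0
```

Here the sum equals $(1 - e^{a\epsilon})^\ell = \mathcal{O}(\epsilon^\ell)$, illustrating why each extra factor of $n$ (the $n^q$ in eq. (B.4)) costs one power of $\epsilon$ in the bound.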
Next, we bring the product in eq. (B.3) to the same form as in eq. (B.4), obtaining

$$ \sum_{n=0}^{\ell} (-1)^{n+1} \binom{\ell}{n} \left(1 - \hat{B}_{-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right) \left(1 - \hat{B}_{n-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right)^{-1} \times \prod_{m=0}^{n-1} \left[1 - \hat{B}_m(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right] = \mathcal{O}(\epsilon^\ell). \tag{B.5} $$

The extracted factor

$$ \begin{aligned} & \left(1 - \hat{B}_{-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right) \left(1 - \hat{B}_{n-1}(\epsilon) \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2}\right)^{-1} \\ &= 1 + \frac{2C_A - \mathbf{T}_t^2}{C_A - \mathbf{T}_t^2} \left[2\epsilon(n\epsilon)^2\zeta_3 + 3\epsilon^2(n\epsilon)^2\zeta_4 + \left(4\epsilon^3(n\epsilon)^2 + 2\epsilon(n\epsilon)^4\right)\zeta_5\right] + \mathcal{O}(\epsilon^6) \end{aligned} \tag{B.6} $$

is a function of $\epsilon$ and $\delta = n\epsilon$, cf. eqs. (3.7) and (3.9), which are both small. In other words, the (double) expansion of eq. (B.6) in $\epsilon$ and $\delta$ around 0 contains only terms in which the power of $\epsilon$ is greater than or equal to the power of $n$. This, together with eq. (B.4), proves eq. (B.3) and thus the conjectured amplitude (3.15).

References

[1] E. A. Kuraev, L. N. Lipatov and V. S. Fadin, *The Pomeranchuk Singularity in Nonabelian Gauge Theories*, Sov. Phys. JETP 45 (1977) 199-204.

[2] I. I. Balitsky and L. N. Lipatov, *The Pomeranchuk Singularity in Quantum Chromodynamics*, Sov. J. Nucl. Phys. 28 (1978) 822-829.
---PAGE_BREAK---

[3] L. N. Lipatov, *The Bare Pomeron in Quantum Chromodynamics*, Sov. Phys. JETP 63 (1986) 904-912.

[4] A. H. Mueller, *Soft gluons in the infinite momentum wave function and the BFKL pomeron*, Nucl. Phys. B415 (1994) 373-385.

[5] A. H. Mueller and B.
Patel, *Single and double BFKL pomeron exchange and a dipole picture of high-energy hard processes*, Nucl. Phys. B425 (1994) 471-488, [hep-ph/9403256].

[6] R. C. Brower, J. Polchinski, M. J. Strassler and C.-I. Tan, *The Pomeron and gauge/string duality*, JHEP 12 (2007) 005, [hep-th/0603115].

[7] I. Moult, M. P. Solon, I. W. Stewart and G. Vita, *Fermionic Glauber Operators and Quark Reggeization*, 1709.09174.

[8] I. Balitsky, *Operator expansion for high-energy scattering*, Nucl. Phys. B463 (1996) 99-160, [hep-ph/9509348].

[9] I. Balitsky, *Factorization for high-energy scattering*, Phys. Rev. Lett. 81 (1998) 2024-2027, [hep-ph/9807434].

[10] Y. V. Kovchegov, *Small $x$ F(2) structure function of a nucleus including multiple pomeron exchanges*, Phys. Rev. D60 (1999) 034008, [hep-ph/9901281].

[11] J. Jalilian-Marian, A. Kovner, L. D. McLerran and H. Weigert, *The Intrinsic glue distribution at very small $x$*, Phys. Rev. D55 (1997) 5414-5428, [hep-ph/9606337].

[12] J. Jalilian-Marian, A. Kovner, A. Leonidov and H. Weigert, *The Wilson renormalization group for low $x$ physics: Towards the high density regime*, Phys. Rev. D59 (1998) 014014, [hep-ph/9706377].

[13] E. Iancu, A. Leonidov and L. D. McLerran, *The Renormalization group equation for the color glass condensate*, Phys. Lett. B510 (2001) 133-144, [hep-ph/0102009].

[14] M. G. Sotiropoulos and G. F. Sterman, *Color exchange in near forward hard elastic scattering*, Nucl. Phys. B419 (1994) 59-76, [hep-ph/9310279].

[15] G. P. Korchemsky, *On near forward high-energy scattering in QCD*, Phys. Lett. B325 (1994) 459-466, [hep-ph/9311294].

[16] I. A. Korchemskaya and G. P. Korchemsky, *Evolution equation for gluon Regge trajectory*, Phys. Lett. B387 (1996) 346-354, [hep-ph/9607229].

[17] I. A. Korchemskaya and G. P. Korchemsky, *High-energy scattering in QCD and cross singularities of Wilson loops*, Nucl. Phys. B437 (1995) 127-162, [hep-ph/9409446].

[18] V. Del Duca and E. W.
N. Glover, *The High-energy limit of QCD at two loops*, JHEP 10 (2001) 035, [hep-ph/0109028].

[19] V. Del Duca, G. Falcioni, L. Magnea and L. Vernazza, *High-energy QCD amplitudes at two loops and beyond*, Phys. Lett. B732 (2014) 233-240, [1311.0304].

[20] V. Del Duca, G. Falcioni, L. Magnea and L. Vernazza, *Analyzing high-energy factorization beyond next-to-leading logarithmic accuracy*, JHEP 02 (2015) 029, [1409.8330].

[21] V. Del Duca, C. Duhr, E. Gardi, L. Magnea and C. D. White, *An infrared approach to Reggeization*, Phys. Rev. D85 (2012) 071104, [1108.5947].

[22] V. Del Duca, C. Duhr, E. Gardi, L. Magnea and C. D. White, *The Infrared structure of gauge theory amplitudes in the high-energy limit*, JHEP 12 (2011) 021, [1109.3581].
---PAGE_BREAK---

[23] S. Caron-Huot, *When does the gluon reggeize?*, JHEP 05 (2015) 093, [1309.6521].

[24] S. Caron-Huot, E. Gardi and L. Vernazza, *Two-parton scattering in the high-energy limit*, JHEP 06 (2017) 016, [1701.05241].

[25] P. D. B. Collins, *An Introduction to Regge Theory and High-Energy Physics*. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 2009.

[26] O. Almelid, C. Duhr and E. Gardi, *Three-loop corrections to the soft anomalous dimension in multileg scattering*, Phys. Rev. Lett. 117 (2016) 172002, [1507.00047].

[27] E. Gardi, O. Almelid and C. Duhr, *Long-distance singularities in multi-leg scattering amplitudes*, PoS LL2016 (2016) 058, [1606.05697].

[28] S. M. Aybat, L. J. Dixon and G. F. Sterman, *The Two-loop soft anomalous dimension matrix and resummation at next-to-next-to leading pole*, Phys. Rev. D74 (2006) 074004, [hep-ph/0607309].

[29] E. Gardi and L. Magnea, *Factorization constraints for soft anomalous dimensions in QCD scattering amplitudes*, JHEP 03 (2009) 079, [0901.1091].

[30] T. Becher and M. Neubert, *Infrared singularities of scattering amplitudes in perturbative QCD*, Phys. Rev. Lett. 102 (2009) 162001, [0901.0722].

[31] T. Becher and M. Neubert, *On the Structure of Infrared Singularities of Gauge-Theory Amplitudes*, JHEP 06 (2009) 081, [0903.1126].

[32] O. Almelid, C. Duhr, E. Gardi, A. McLeod and C. D. White, *Bootstrapping the QCD soft anomalous dimension*, JHEP 09 (2017) 073, [1706.10162].

[33] Yu. L. Dokshitzer and G. Marchesini, *Soft gluons at large angles in hadron collisions*, JHEP 01 (2006) 007, [hep-ph/0509078].

[34] S. Catani, *The Singular behavior of QCD amplitudes at two loop order*, Phys. Lett. B427 (1998) 161-171, [hep-ph/9802439].

[35] E. Gardi and L. Magnea, *Infrared singularities in QCD amplitudes*, Nuovo Cim. C32N5-6 (2009) 137-157, [0908.3273].

[36] L. J. Dixon, E. Gardi and L. Magnea, *On soft singularities at three loops and beyond*, JHEP 02 (2010) 081, [0910.3653].

[37] V. Ahrens, M. Neubert and L. Vernazza, *Structure of Infrared Singularities of Gauge-Theory Amplitudes at Three and Four Loops*, JHEP 09 (2012) 138, [1208.4847].

[38] G. P. Korchemsky and A. V. Radyushkin, *Loop Space Formalism and Renormalization Group for the Infrared Asymptotics of QCD*, Phys. Lett. B171 (1986) 459-467.

[39] G. P. Korchemsky and A. V. Radyushkin, *Infrared asymptotics of perturbative QCD: renormalisation properties of the Wilson loops in higher orders of perturbation theory*, Sov. J. Nucl. Phys. 44 (1986) 877.

[40] G. P. Korchemsky and A. V. Radyushkin, *Renormalization of the Wilson Loops Beyond the Leading Order*, Nucl. Phys. B283 (1987) 342-364.
\ No newline at end of file
diff --git a/samples/texts_merged/3438890.md b/samples/texts_merged/3438890.md
new file mode 100644
index 0000000000000000000000000000000000000000..7a8fa03cf106cbcb3150e249768299d8679bfc16
--- /dev/null
+++ b/samples/texts_merged/3438890.md
@@ -0,0 +1,226 @@

---PAGE_BREAK---

Footstep Planning Based on Univector Field Method for Humanoid Robot

Youngdae Hong and Jong-Hwan Kim

Department of Electrical Engineering and Computer Science, KAIST, Daejeon, Korea
{ydhong,johkim}@rit.kaist.ac.kr
http://rit.kaist.ac.kr

**Abstract.** This paper proposes a footstep planning algorithm, based on the univector field method and optimized by evolutionary programming, for a humanoid robot to arrive at a target point in a dynamic environment. The univector field method is employed to determine the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, extending the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is utilized to generate every joint trajectory of the robot satisfying the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid both static and moving obstacles but also to step over static obstacles. The performance of the proposed algorithm is demonstrated by computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.

**Keywords:** Footstep planning, univector field method, evolutionary programming, humanoid robot, modifiable walking pattern generator.

# 1 Introduction

These days, research on humanoid robots has made rapid progress toward dexterous motions along with hardware development. Various humanoid robots have demonstrated stable walking with control schemes [1]-[5]. Considering the future of the humanoid robot as a service robot, research on navigation in indoor environments with obstacles, such as homes and offices, is now needed.

In indoor environments, most navigation research has been carried out for differential drive mobile robots. Navigation methods for mobile robots are categorized into separated navigation and unified navigation. Separated navigation methods, such as structural navigation and deliberative navigation, treat path planning and path following as two isolated tasks. In the path planning step, a path generation algorithm connects the starting point with the end point without crossing obstacles. To find the shortest path, many search algorithms such as the A\* algorithm and dynamic programming have been applied [6]. On the other hand, in unified navigation methods such as the artificial potential field method [7], [8], the path planning step and the path following step are unified into one task.

In navigation research, differential drive mobile robots make a detour around obstacles to arrive at a goal position. Humanoid robots, on the other hand, are able to
---PAGE_BREAK---
traverse obstacles with their legs. When they move around in an environment, the positions of their footprints are important wherever there are obstacles. Thus, footstep planning for humanoid robots is an important research issue.

As research on footstep planning, an algorithm obtaining information on an obstacle's shape and location through sensors was presented [9]. Using the obtained information, the robot determines its step length, predefined as three step-length types, and its motion, such as circumventing, stepping over or stepping onto obstacles. An algorithm finding an alternative path employing A\* with a heuristic cost function was also developed [10]. The stable region of the robot's footprints is predetermined, and a few footprint placements are selected as a discrete set. This algorithm checks collisions between the robot and obstacles by a 2D polygon intersection test. A human-like strategy for footstep planning was also presented [11].

In this paper, a footstep planning algorithm based on the univector field method is proposed for a humanoid robot. The univector field method is one of the unified navigation methods, designed to enhance the performance of fast differential drive mobile robots. Using this method, a robot can navigate rapidly to the desired position and orientation without oscillations or unwanted inefficient motions [12], [13]. The footstep planning algorithm determines the moving direction of the humanoid robot in real time and has low computing cost by employing the univector field method. Besides, it is able to modify foot placement depending on an obstacle's position. By inputting the moving direction and step length of the robot at every footstep into the modifiable walking pattern generator [14], every joint trajectory is generated. The proposed algorithm generates an evolutionarily optimized path by evolutionary programming (EP), considering the hardware limits of the robot, and makes the robot arrive at the goal with the desired direction. Computer simulations are carried out with a model of HanSaRam (HSR)-VIII, a small-sized humanoid robot developed in the Robot Intelligence Technology (RIT) Lab, KAIST.

The rest of the paper is organized as follows: Section 2 gives an overview of the univector field method and Section 3 explains the MWPG. In Section 4 the footstep planning algorithm is proposed. Computer simulation results are presented in Section 5. Finally, concluding remarks follow in Section 6.

# 2 Univector Field Method

The univector field method is a path planning method developed for differential drive mobile robots. The univector field consists of the *move-to-goal univector field*, which leads a robot to a destination, and the *avoid-obstacle univector field*, which makes a robot avoid obstacles. The moving direction is decided by combining the move-to-goal univector field and the avoid-obstacle univector field.
The univector field method requires relatively low computing power because it does not generate a whole path from the start point to the destination before moving, but instead generates a moving direction at every step in real time. In addition, it makes it easy to plan a path in a dynamic environment with moving obstacles. Thus, this path planning method is adopted and extended for a humanoid robot.
---PAGE_BREAK---

## 2.1 Move-to-Goal Univector Field

The move-to-goal univector field is defined as

$$ \mathbf{v}_{muf} = [-\cos(\theta_{muf}) \ \ {-\sin(\theta_{muf})}]^T, \quad (1) $$

where

$$ \theta_{muf} = \cos^{-1}\left(\frac{p_x - g_x}{d_{goal}}\right), \quad d_{goal} = \sqrt{(p_x - g_x)^2 + (p_y - g_y)^2}, $$

$\theta_{muf}$ is the angle of the goal from the x-axis at the robot's position, $d_{goal}$ is the distance between the center of the goal and the robot's position, and $(p_x, p_y)$ and $(g_x, g_y)$ are the robot's position and the goal position, respectively.

## 2.2 Avoid-Obstacle Univector Field

The avoid-obstacle univector field is defined as

$$ \mathbf{v}_{auf} = [\cos(\theta_{auf}) \ \ \sin(\theta_{auf})]^T, \quad (2) $$

where

$$ \theta_{auf} = \cos^{-1}\left(\frac{p_x - o_x}{d_{ob}}\right), \quad d_{ob} = \sqrt{(p_x - o_x)^2 + (p_y - o_y)^2}, $$

$\theta_{auf}$ is the angle of the obstacle from the x-axis at the robot's position, $d_{ob}$ is the distance between the center of the obstacle and the robot's position, and $(o_x, o_y)$ is the position of the obstacle.

The total univector field is determined by properly combining the move-to-goal and avoid-obstacle univector fields. The total univector $\mathbf{v}_{tuf}$ is defined as

$$ \mathbf{v}_{tuf} = w_{muf}\mathbf{v}_{muf} + w_{auf}\mathbf{v}_{auf}, \quad (3) $$

where $w_{muf}$ and $w_{auf}$ are the scale factors of the move-to-goal and avoid-obstacle univector fields, respectively.
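A minimal sketch of eqs. (1)-(3) in Python (function and variable names are ours; the directions are computed with vector geometry rather than the $\cos^{-1}$ form, which is ambiguous in the sign of the y-component):

```python
import math

def move_to_goal(p, g):
    # Eq. (1): unit vector at position p pointing toward the goal g
    d_goal = math.hypot(g[0] - p[0], g[1] - p[1])
    return ((g[0] - p[0]) / d_goal, (g[1] - p[1]) / d_goal)

def avoid_obstacle(p, o):
    # Eq. (2): unit vector at position p pointing away from the obstacle o
    d_ob = math.hypot(p[0] - o[0], p[1] - o[1])
    return ((p[0] - o[0]) / d_ob, (p[1] - o[1]) / d_ob)

def total_univector(p, g, obstacles, w_muf=1.0, w_auf=1.0):
    # Eq. (3): weighted combination, renormalised to a unit heading vector
    vx, vy = move_to_goal(p, g)
    vx, vy = w_muf * vx, w_muf * vy
    for o in obstacles:
        ax, ay = avoid_obstacle(p, o)
        vx, vy = vx + w_auf * ax, vy + w_auf * ay
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

With no obstacles the heading points straight at the goal; an obstacle between robot and goal bends the heading around it.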

# 3 Modifiable Walking Pattern Generator

The modifiable walking pattern generator (MWPG) extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase. In the conventional 3D-LIPM without ZMP variation, only the homogeneous solutions of the 3D-LIPM dynamic equation were considered. By also considering the particular solutions, more extensive and unrestricted walking patterns can be generated through the ZMP variation. The solutions with both homogeneous and particular parts are as follows.

Sagittal motion:

$$ \begin{bmatrix} x_f \\ v_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} x_i \\ v_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{p}(t) dt \\ \int_0^T C_t \bar{p}(t) dt \end{bmatrix}, \quad (4) $$
---PAGE_BREAK---

Lateral motion:

$$ \begin{bmatrix} y_f \\ w_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} y_i \\ w_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{q}(t) dt \\ \int_0^T C_t \bar{q}(t) dt \end{bmatrix}, \quad (5) $$

where $(x_i, v_i)/(x_f, v_f)$ and $(y_i, w_i)/(y_f, w_f)$ represent the initial/final position and velocity of the CM in the sagittal and lateral planes, respectively. $S_t$ and $C_t$ are defined as $\sinh(t/T_c)$ and $\cosh(t/T_c)$, with time constant $T_c = \sqrt{Z_c/g}$. The functions $p(t)$ and $q(t)$ are the ZMP trajectories for the sagittal and lateral planes, respectively, with $\bar{p}(t) = p(T-t)$ and $\bar{q}(t) = q(T-t)$. Through the variation of the ZMP, the walking state (WS), i.e. the state of the point mass in the 3D-LIPM in terms of CM position and linear velocity, can be moved to any desired WS within the region of possible trajectories expanded by applying the particular solutions.
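For the special case of a constant sagittal ZMP $p(t) = p_0$, the integrals in eq. (4) can be done in closed form, giving the familiar analytic LIPM update (a sketch under that constant-ZMP assumption; names are ours):

```python
import math

def lipm_step(x0, v0, p0, T, Tc):
    """Closed-form CM update of eq. (4) for a constant sagittal ZMP p0.

    The 3D-LIPM dynamics x'' = (x - p0) / Tc**2 give
    x(t) = p0 + (x0 - p0)*cosh(t/Tc) + v0*Tc*sinh(t/Tc).
    """
    C, S = math.cosh(T / Tc), math.sinh(T / Tc)
    xf = p0 + (x0 - p0) * C + v0 * Tc * S
    vf = (x0 - p0) * S / Tc + v0 * C
    return xf, vf
```

Setting $p_0 = 0$ recovers the purely homogeneous part of eq. (4); the MWPG exploits the extra $p_0$-dependent (particular) terms to steer the walking state.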
By means of the MWPG, a humanoid robot can change both sagittal and lateral step lengths, the rotation angle of the ankles, and the period of the walking pattern [14].

# 4 Footstep Planning Algorithm

In this section, the footstep planning algorithm for a humanoid robot is described. It decides the moving orientation at every footstep by the univector field navigation method. Using the determined orientations, it calculates the exact foot placements. Subsequently, by inputting the moving direction and step length of the robot at every footstep into the MWPG, every joint trajectory is generated to satisfy the planned footsteps.

## 4.1 Path Planning

To apply the univector field method to path generation for a humanoid robot, the following three issues are considered. To generate a natural and effective path, an obstacle boundary and a virtual obstacle [15] are introduced into the avoid-obstacle univector field, accounting for the obstacle's size and movement, respectively. Also, a hyperbolic spiral univector field is developed as the move-to-goal univector field in order to reach the destination with a desired orientation [13].

**Boundary of Avoid-Obstacle Univector Field.** The repulsive univector field of an obstacle is not generated at every position, but only within a restricted range, by applying a boundary to the avoid-obstacle univector field. Moreover, the farther the robot's position is from the center of an obstacle, the more the magnitude of the repulsive univector field decreases, linearly. Consequently, the robot is not influenced by the repulsive univector field in the region beyond the boundary of an obstacle.
Considering this boundary effect, the avoid-obstacle univector $\mathbf{v}_{auf}$ is defined as

$$ \mathbf{v}_{auf} = k_b [\cos(\theta_{auf}) \ \ \sin(\theta_{auf})]^T, \quad (6) $$

where

$$ k_b = \frac{d_{boun} - (d_{ob} - o_{size})}{d_{boun}}, $$
---PAGE_BREAK---

$o_{size}$ is the obstacle's radius, $d_{boun}$ is the size of the boundary and $k_b$ is a scale factor. By introducing the boundary into the avoid-obstacle univector field, an effective path is generated.

**Virtual Obstacle.** The virtual obstacle is defined by adding a shifting vector to the center position of a real obstacle, where the direction of the shifting vector is opposite to the robot's moving direction and its magnitude is proportional to the robot's moving velocity. The position of the center of the virtual obstacle is then obtained as

$$ [o_x^{\text{virtual}}, o_y^{\text{virtual}}]^T = [o_x^{\text{real}}, o_y^{\text{real}}]^T + \mathbf{s}, \quad \mathbf{s} = -k_v \mathbf{v}_{\text{robot}}, \quad (7) $$

where $(o_x^{\text{virtual}}, o_y^{\text{virtual}})$ is the virtual obstacle's position, $(o_x^{\text{real}}, o_y^{\text{real}})$ is the real obstacle's position, $\mathbf{s}$ is the shifting vector, $k_v$ is the scale factor of the virtual obstacle and $\mathbf{v}_{\text{robot}}$ is the robot's velocity vector. When calculating the avoid-obstacle univector, the virtual obstacle positions are used instead of the real ones. By introducing the virtual obstacle, the robot can avoid obstacles more safely and smoothly along the path generated at every step.

**Hyperbolic Spiral Univector Field.** The move-to-goal univector field is designed with a hyperbolic spiral so that the robot reaches the target point with a desired orientation.
The hyperbolic spiral univector field $\mathbf{v}_{huf}$ is defined as

$$ \mathbf{v}_{huf} = [\cos(\phi_h) \ \ \sin(\phi_h)]^T, \quad (8) $$

where

$$ \phi_h = \begin{cases} \theta \pm \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho+k_r}\right) & \text{if } \rho > d_e \\ \theta \pm \frac{\pi}{2} \sqrt{\frac{\rho}{d_e}} & \text{if } 0 \le \rho \le d_e, \end{cases} $$

$\theta$ is the angle of the goal from the x-axis at the robot's position. The notation $\pm$ denotes the direction of movement: $+$ when the robot moves clockwise and $-$ when it moves counter-clockwise. $k_r$ is an adjustable parameter; as $k_r$ becomes larger, the maximal value of the curvature derivative decreases and the contour of the spiral becomes smoother. $\rho$ is the distance between the center of the destination and the robot's position, and $d_e$ is a predefined radius that decides the size of the spiral.

By designing the move-to-goal univector field with a hyperbolic spiral, the robot can arrive at the destination with any orientation angle. In this paper, in order to obtain the desired posture at the target position, two hyperbolic spiral univector fields are combined.
The move-to-goal univector field is defined as

$$ \phi_{\text{muf}} = \begin{cases} \theta_{\text{up}} + \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{up}}+k_r}\right) & \text{if } p_y^h > g_{\text{size}} \\ \theta_{\text{down}} - \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{down}}+k_r}\right) & \text{if } p_y^h < -g_{\text{size}} \\ \theta_{\text{dir}} & \text{otherwise,} \end{cases} \quad (9) $$

with

$$ \rho_{\text{up}} = \sqrt{(p_x^h)^2 + (p_y^h - d_e - g_{\text{size}})^2}, \quad \rho_{\text{down}} = \sqrt{(p_x^h)^2 + (p_y^h + d_e + g_{\text{size}})^2}, $$
---PAGE_BREAK---

$$ \theta_{up} = \tan^{-1}\left(\frac{p_y^h - d_e - g_{size}}{p_x^h}\right) + \theta_{dir}, \quad \theta_{down} = \tan^{-1}\left(\frac{p_y^h + d_e + g_{size}}{p_x^h}\right) + \theta_{dir}, $$

$$ \mathbf{p}^h = \mathbf{M}_{rot} \mathbf{M}_{trans} \mathbf{p}, $$

$$ \mathbf{M}_{trans} = \begin{bmatrix} 1 & 0 & -g_x \\ 0 & 1 & -g_y \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{M}_{rot} = \begin{bmatrix} \cos(-\theta_{dir}) & -\sin(-\theta_{dir}) & 0 \\ \sin(-\theta_{dir}) & \cos(-\theta_{dir}) & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$

$$ \mathbf{p} = [p_x \ p_y \ 1]^T, \quad \mathbf{p}^h = [p_x^h \ p_y^h \ 1]^T, $$

where $g_{size}$ is the radius of the goal region and $\theta_{dir}$ is the desired arrival angle at the target. By using a move-to-goal univector field composed of these hyperbolic spiral univector fields, the robot can arrive at the goal with any arrival angle.

## 4.2 Footstep Planning

While a humanoid robot moves towards a destination, there are situations in which it has to step over an obstacle, provided the obstacle is not too high. This is the main difference from path planning for a differential drive mobile robot, which tries to find a detour route to circumvent obstacles instead of stepping over them. In this section, a footstep planning algorithm is proposed which enables the robot to traverse obstacles effectively.
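Before turning to footstep placement, the two refinements of section 4.1 that modify the avoid-obstacle field, the linear boundary falloff of eq. (6) and the velocity-dependent virtual-obstacle shift of eq. (7), can be sketched as follows (function names are ours):

```python
import math

def virtual_obstacle(o_real, v_robot, k_v):
    # Eq. (7): shift the obstacle center against the robot's velocity
    return (o_real[0] - k_v * v_robot[0], o_real[1] - k_v * v_robot[1])

def avoid_obstacle_bounded(p, o, o_size, d_boun):
    # Eq. (6): repulsive unit vector scaled by the linear boundary factor k_b
    dx, dy = p[0] - o[0], p[1] - o[1]
    d_ob = math.hypot(dx, dy)
    k_b = (d_boun - (d_ob - o_size)) / d_boun
    if k_b <= 0.0:
        # Beyond the boundary: the obstacle exerts no repulsion
        return (0.0, 0.0)
    return (k_b * dx / d_ob, k_b * dy / d_ob)
```

The clipping at $k_b \le 0$ implements the statement that the robot is not influenced beyond the boundary, while at the obstacle surface ($d_{ob} = o_{size}$) the repulsion has full unit strength.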

It is a very natural and efficient way for the robot to step over obstacles instead of detouring, if its moving direction can be maintained. The proposed algorithm enables the robot to step over obstacles with a minimal step length while maintaining its moving direction. It is assumed that the shape of an obstacle is a rectangle with narrow width and long length, as shown in Fig. 1.

**Fig. 1.** Stepping over an obstacle. (a) Left leg is supporting leg without additional step. (b) Left leg is supporting leg with additional step. (c) Right leg is supporting leg without additional step. (d) Right leg is supporting leg with additional step.
---PAGE_BREAK---

**Fig. 2.** Stepping over an obstacle when an obstacle is in front of one leg.

The forward and backward step lengths from the supporting leg of a humanoid robot are restricted by hardware limitations. If an obstacle is wider than the maximum step length of the humanoid robot, the robot is not able to step over it. Thus, the humanoid robot has to step over an obstacle with the shortest possible step length in order to step over the widest possible obstacle. The step length is determined by which leg is the supporting leg when the robot steps over the obstacle. As the proposed algorithm takes these facts into account, it enables the robot to step over obstacles with the shortest step length. Fig. 1 shows the footprints for stepping over an obstacle using this algorithm. Figs. 1(a) and 1(d) show situations where the left foot comes close to the obstacle earlier than the right foot, and Figs. 1(b) and 1(c) show situations where the right foot approaches the obstacle more closely than the left. In the cases of Figs. 1(a) and 1(b), the left leg is appropriate as the supporting leg for the minimum step length. On the other hand, the right leg is appropriate as the supporting leg in Figs. 1(c) and 1(d). Therefore, in order to make the left leg the supporting leg in Fig. 1(b) and the right leg the supporting leg in Fig.
1(d), one more step is needed before stepping over the obstacle, while such an additional step is not needed in Figs. 1(a) and 1(c).

There is also a situation where an obstacle is only in front of one leg, so that the other leg can be placed without considering the obstacle. The proposed algorithm deals with this situation so that the robot can step over the obstacle effectively, like a human being. Fig. 2 shows the footprints of the robot in this case.

## 4.3 Parameter Optimization by Evolutionary Programming

A humanoid robot has constraints on the change in rotation of its legs on account of hardware limitations. Hence, when planning footsteps for a biped robot with the proposed algorithm, the maximum change in rotation of the legs has to be assigned. In this algorithm, there are seven parameters to be assigned: $k_v$ in the virtual obstacle; $d_{boun}$ in the avoid-obstacle univector field; $d_e$, $k_r$ and $g_{size}$ in the move-to-goal univector field; and $w_{muf}$ and $w_{auf}$ in the composition of the move-to-goal and avoid-obstacle univector fields. The robot can arrive at the goal with the change in rotation of its legs within the constraints by selecting appropriate values of the parameters mentioned above. To generate the most effective path, EP is employed to choose the parameter values. The fitness function in EP is designed considering the following:
---PAGE_BREAK---

* A robot should arrive at a destination with a minimum position error.
* The facing direction of a robot at a destination should be the desired one.
* A robot should not collide with obstacles.
* The change in rotation of legs should not exceed the constraint value.

Consequently, the fitness function is defined as

$$ f = -(k_p P_{err} + k_q |\theta_{err}| + k_{col} N_{col} + k_{const} N_{const}), \quad (10) $$

where $N_{const}$ is the number of constraint violations of the change in rotation of the legs, $N_{col}$ is the number of collisions of the robot with obstacles, $\theta_{err}$ is the difference between the desired orientation and the orientation of the robot at the goal, $P_{err}$ is the position error at the goal, and $k_{const}, k_{col}, k_q, k_p$ are constants.

# 5 Simulation Results

HSR-VIII (Fig. 3(a)) is a small-sized humanoid robot that has been continuously redesigned and developed in the RIT Lab, KAIST, since 2000. Its height and weight are 52.8 cm and 5.5 kg, respectively. It has 26 DOFs, consisting of 12 DC motors with harmonic drives as reduction gears in the lower body and 14 RC servo motors in the upper body. HSR-VIII was modeled in Webots, a 3D mobile robotics simulation package [16]. Simulations were carried out in Webots with the HSR-VIII model, applying the proposed footstep planning algorithm.

Through the simulation, the seven parameters of the algorithm were optimized by EP. The maximum rotation angle of the robot's ankles was selected heuristically as 40°. After 100 generations, the parameters were optimized as $k_v=1.94$, $d_{boun}=20.09$, $d_e=30.04$, $k_r=0.99$, $g_{size}=0.94$, $w_{muf}=1.96$, $w_{auf}=1.46$.

Fig. 3(b) shows the sequence of the robot's footsteps as a 2D simulation result, where there were ten obstacles of three different kinds: five static circular obstacles,

**Fig. 3.** (a) HSR-VIII. (b) Sequence of footsteps in the environment with ten obstacles of three different kinds.
---PAGE_BREAK---

**Fig. 4.** Snapshots of the 3D simulation result in Webots in the environment with ten obstacles of three different kinds. (The goal is a circle in the right bottom corner.)

two moving circular obstacles and three static rectangular obstacles with a height of 1.0 cm.
The desired angle at the destination was fixed at 90° from the x-axis. As shown in the figure, with the proposed algorithm the robot moves from the start point to the target goal in the bottom right corner, avoiding the static and moving circular obstacles and stepping over the static rectangular ones by adjusting its step length. In addition, the robot faces the desired orientation at the goal. Fig. 4 shows the 3D simulation result in Webots, where the environment is the same as that used in the 2D simulation. A similar result was obtained as in Fig. 3(b). In particular, in the third and sixth snapshots of Fig. 4, it can be seen that the robot makes a turn before colliding with the moving circular obstacles by predicting their movement. + +# 6 Conclusion + +A real-time footstep planning algorithm was proposed for a humanoid robot to travel to a destination while avoiding and stepping over obstacles. The univector field method was adopted to determine the heading direction, and from the determined orientations the exact foot placements were calculated. The proposed algorithm generates an efficient path by applying a boundary to the avoid-obstacle univector field and introducing the virtual obstacle concept. Furthermore, it enables a robot to reach a destination with a desired orientation by employing the hyperbolic spiral univector field. The algorithm makes it possible for the robot to step over an obstacle with a minimal step length while maintaining its heading orientation. It also handles the situation in which an obstacle is in front of only one leg; in this case, the robot steps over the obstacle while properly placing the other leg as a supporting one. The effectiveness of the algorithm was demonstrated by computer simulations in a dynamic environment. As future work, experiments with the real small-sized humanoid robot HSR-VIII will be carried out using a global camera to demonstrate the applicability of the proposed algorithm. +---PAGE_BREAK--- + +References + +1.
Nishiwaki, K., Sugihara, T., Kagami, S., Kanehiro, F., Inaba, M., Inoue, H.: Design and Development of Research Platform for Perception-Action Integration in Humanoid Robot: H6. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1559–1564 (2000) + +2. Kaneko, K., Kanehiro, F., Kajita, S., Hirukawa, H., Kawasaki, T., Hirata, M., Akachi, K., Isozumi, T.: Humanoid Robot HRP-2. In: Proc. IEEE Int. Conf. on Robotics and Automation, ICRA 2004 (2004) + +3. Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., Fujimura, K.: The intelligent ASIMO: system overview and integration. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2478–2483 (2002) + +4. Ogura, Y., Aikawa, H., Shimomura, K., Kondo, H., Morishima, A., Lim, H., Takanishi, A.: Development of a New Humanoid Robot WABIAN-2. In: Proc. IEEE Int. Conf. on Robotics and Automation, ICRA 2006 (2006) + +5. Kim, Y.-D., Lee, B.-J., Ryu, J.-H., Kim, J.-H.: Landing Force Control for Humanoid Robot by Time-Domain Passivity Approach. IEEE Trans. on Robotics 23(6), 1294–1301 (2007) + +6. Kanal, L., Kumar, V. (eds.): Search in Artificial Intelligence. Springer, New York (1988) + +7. Borenstein, J., Koren, Y.: Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst., Man, Cybern. 19(5), 1179–1187 (1989) + +8. Borenstein, J., Koren, Y.: The vector field histogram – fast obstacle avoidance for mobile robots. IEEE Trans. Robot. Autom. 7(3), 278–288 (1991) + +9. Yagi, M., Lumelsky, V.: Biped Robot Locomotion in Scenes with Unknown Obstacles. In: Proc. IEEE Int. Conf. on Robotics and Automation (ICRA 1999), Detroit, MI, pp. 375–380 (May 1999) + +10. Chestnutt, J., Lau, M., Cheung, G., Kuffner, J., Hodgins, J., Kanade, T.: Footstep Planning for the Honda ASIMO Humanoid. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 631–636 (2005) + +11.
Ayaz, Y., Munawar, K., Bilal Malik, M., Konno, A., Uchiyama, M.: Human-Like Approach to Footstep Planning Among Obstacles for Humanoid Robots. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 5490–5495 (2006) + +12. Kim, Y.-J., Kim, J.-H., Kwon, D.-S.: Evolutionary Programming-Based Uni-vector Field Navigation Method for Fast Mobile Robots. IEEE Trans. on Systems, Man, and Cybernetics – Part B: Cybernetics 31(3), 450–458 (2001) + +13. Kim, Y.-J., Kim, J.-H., Kwon, D.-S.: Univector Field Navigation Method for Fast Mobile Robots. Ph.D. Thesis, Korea Advanced Institute of Science and Technology + +14. Lee, B.-J., Stonier, D., Kim, Y.-D., Yoo, J.-K., Kim, J.-H.: Modifiable Walking Pattern of a Humanoid Robot by Using Allowable ZMP Variation. IEEE Trans. on Robotics 24(4), 917–925 (2008) + +15. Lim, Y.-S., Choi, S.-H., Kim, J.-H., Kim, D.-H.: Evolutionary Univector Field-based Navigation with Collision Avoidance for Mobile Robot. In: Proc. 17th IFAC World Congress, Seoul, Korea (July 2008) + +16. Michel, O.: Cyberbotics Ltd. – Webots™: Professional Mobile Robot Simulation. Int. J. of Advanced Robotic Systems 1(1), 39–42 (2004) \ No newline at end of file diff --git a/samples/texts_merged/3450399.md b/samples/texts_merged/3450399.md new file mode 100644 index 0000000000000000000000000000000000000000..fb39cd20863fd86a60aabb0e3f6443441e7723dd --- /dev/null +++ b/samples/texts_merged/3450399.md @@ -0,0 +1,67 @@ + +---PAGE_BREAK--- + +# Note 7 Supplement: RSA Extras + +Computer Science 70 +University of California, Berkeley + +Summer 2018 + +## 1 One-Time Pad + +The exclusive OR (XOR) $x \oplus y$ of two bits $x$ and $y$ is defined by: + +
| $x$ | $y$ | $x \oplus y$ |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
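The truth table can be checked mechanically, together with the bit-string extension used later in this note (a small illustrative sketch):

```python
def xor_bits(x: str, y: str) -> str:
    """Bitwise XOR of two equal-length bit strings."""
    assert len(x) == len(y)
    return "".join("1" if a != b else "0" for a, b in zip(x, y))

# single bits, matching the truth table
assert [xor_bits(x, y) for x, y in ["00", "01", "10", "11"]] == \
       ["0", "1", "1", "0"]

# bit strings (Example 1) and the round trip y ⊕ x ⊕ x = y
assert xor_bits("01000", "11100") == "10100"
assert xor_bits(xor_bits("01000", "11100"), "11100") == "01000"
```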
+ +In other words, $x \oplus y$ equals 1 if and only if $x$ and $y$ are different bits. Notice that $x \oplus y$ is the same as $x + y \bmod 2$. For any $x \in \{0, 1\}$, we have $x \oplus x = 0$ and $x \oplus 0 = x$. So, for any $y \in \{0, 1\}$, we have $y \oplus x \oplus x = y \oplus 0 = y$. + +We can extend the XOR operation to bit strings $x$ and $y$ of the same length by applying the XOR operation bitwise. + +**Example 1.** $01000 \oplus 11100 = 10100$. + +For bit strings $x$ and $y$ of the same length, we again have $y \oplus x \oplus x = y$. This gives us the simplest method to encrypt our messages, known as the **one-time pad**. To send a message $m$ (a bit string), the sender and receiver both agree (in advance) on a secret key $k$, which is a bit string of the same length as the message. The sender sends $m \oplus k$ to the receiver, and the receiver decrypts the message by computing $m \oplus k \oplus k = m$. + +Without knowledge of the secret key $k$, an eavesdropper who intercepts the encrypted message $m \oplus k$ learns nothing: the one-time pad is unbreakable. Indeed, since the secret key is unknown, the eavesdropper must consider every secret +---PAGE_BREAK--- + +key possible. Given any message $m'$, we have $m' \oplus (m \oplus k \oplus m') = m \oplus k$, which means that the encrypted message $m \oplus k$ could also have come from the message $m'$ with the secret key $m \oplus k \oplus m'$. We have just shown that the encrypted message could have come from *any* starting message, so the eavesdropper learns nothing about the original message. + +The one-time pad is not very convenient, however, because to guarantee the safety of the scheme, the secret key should be discarded after one use (hence the name “one-time pad”). Since the sender and receiver must agree upon the secret key beforehand, the inability to reuse the secret key significantly hinders the practicality of the scheme.
Nevertheless, the one-time pad can be useful when combined with other schemes. + +## 2 Application of RSA: Digital Signatures + +A signature is meant to provide proof of an individual's identity. In order for the signature to be a valid proof, the signature must have the property that no other individual can produce the same signature. Unfortunately, in the real world, we know that signatures can be forged. + +Inspired by this idea, we introduce the concept of a **digital signature**. As before, a digital signature is supposed to provide proof of an individual's identity. However, the property that “no other individual can produce the same signature” is replaced by the property that “no other individual can reliably produce the same signature *efficiently*”. The idea is that someone who wants to forge the signature must use a brute-force method which is computationally infeasible, e.g., one that would require centuries or more to compute. + +Suppose that you have an RSA public key $(N, e)$ with corresponding private key $d$. One way to provide a “signature” is to reveal your private key $d$. If we assume that RSA is unbreakable, then the private key cannot be computed efficiently from the public key, so this would indeed constitute a signature. Unfortunately, this has the drawback of revealing your private key. + +Instead, the signature scheme proceeds as follows. A verifier provides the individual with some randomly chosen $x \in \{0, 1, \dots, N-1\}$ and asks the individual for $x^d \bmod N$. The verifier can then check that $x^{ed} \equiv x \pmod N$. + +If the individual knows the private key $d$, then this computation is fast. However, a forger without knowledge of the private key must labor to find the $y \in \{0, 1, \dots, N-1\}$ such that $y^e \equiv x \pmod N$. If RSA is unbreakable, then this cannot be done efficiently.
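The challenge–response protocol can be simulated with textbook-sized numbers (an illustrative sketch; the toy primes are far too small for real security):

```python
# Toy parameters only; real RSA moduli are thousands of bits.
p, q = 61, 53
N, phi = p * q, (p - 1) * (q - 1)    # N = 3233, phi = 3120
e = 17
d = pow(e, -1, phi)                  # private exponent; d == 2753 here

def sign(x: int) -> int:             # individual: knows d
    return pow(x, d, N)

def verify(x: int, y: int) -> bool:  # verifier: knows only (N, e)
    return pow(y, e, N) == x

x = 1234                             # verifier's random challenge
assert verify(x, sign(x))            # x^(ed) ≡ x (mod N)
assert not verify(x, sign(x) + 1)    # a wrong signature fails
```

The e-th root of $x$ modulo $N$ is unique (since $\gcd(e, \varphi(N)) = 1$), which is why any value other than the true signature fails verification.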
Presently we believe that you cannot do +---PAGE_BREAK--- + +meaningfully better than exhaustive search, which can easily take centuries if $N$ is large enough. + +The verifier can play this game with the individual multiple times until +the verifier is satisfied that the individual is not forging the signature. + +# 3 RSA Attacks + +The RSA scheme presented in the notes is known as “textbook RSA”. When RSA is used in practice, extra bells and whistles are added to the scheme to improve its security. In this section we describe a couple of known attacks against the RSA scheme. + +The first attack warns against using RSA alone. Suppose that you take +your credit card number $m$ and pass it to the encryption function $E$ to get your +encrypted credit card number $E(m)$. The encrypted credit card number $E(m)$ +is then sent to a company such as Amazon in order to complete a credit +card transaction. However, an eavesdropper sees $E(m)$. The eavesdropper +can then send $E(m)$ to the company again in order to make his or her own +purchases, effectively stealing your credit card. + +The method to prevent this attack is to take your credit card number $m$ and, in each new transaction, pad it with a randomly generated string at the end to form a longer, random string $m'$. Then, send $E(m')$ to the company. This is called *RSA with padding*. The randomness ensures that even if you send the same message twice, the encrypted messages will most likely differ, so if the company receives the same encrypted message $E(m)$ twice in a row, it will know to be suspicious. + +The second attack is about unwittingly giving away information. Say that an attacker intercepts the encrypted message $E(m)$. Since the attacker cannot decrypt the message, it asks the company to decrypt the message in a roundabout way.
First the attacker picks a random number $r$, and asks the company to please decrypt the message $E(m) \cdot r^e \bmod N$, where $(N, e)$ is the public key. After multiplying $E(m)$ by $r^e$, the result is a seemingly innocuous string, so the company complies with the request, sending back the decrypted message $mr \bmod N$. Now, since the attacker knows $r$, he or she also knows $r^{-1} \bmod N$, and using this, the attacker can recover the original message $m$. + +It may be surprising to learn that our cryptosystems (such as RSA) are +not *provably* secure, but nevertheless they are used every day. \ No newline at end of file diff --git a/samples/texts_merged/3461249.md b/samples/texts_merged/3461249.md new file mode 100644 index 0000000000000000000000000000000000000000..1b960a7121f53a0e5ed51585977bac29ddb18d1e --- /dev/null +++ b/samples/texts_merged/3461249.md @@ -0,0 +1,272 @@ + +---PAGE_BREAK--- + +# Generalizing Robot Imitation Learning with Invariant Hidden Semi-Markov Models + +Ajay Kumar Tanwani¶,§, Jonathan Lee§, Brijen Thananjeyan§, Michael Laskey§, Sanjay Krishnan§, Roy Fox§, Ken Goldberg§, Sylvain Calinon¶ + +**Abstract.** Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, to geometric aspects of the objects such as their position, size and orientation, and to the viewpoint of the observer in the demonstrations. In this paper, we propose an algorithm that learns a joint probability density function of the demonstrations with invariant formulations of hidden semi-Markov models to extract invariant segments (also termed sub-goals or options), and smoothly follows the generated sequence of states with a linear quadratic tracking controller.
The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapts the segments according to the environmental changes in a systematic manner. We present variants of this algorithm in latent space with low-rank covariance decompositions, semi-tied covariances, and non-parametric online estimation of model parameters under small variance asymptotics, yielding considerably lower sample and model complexity for acquiring new manipulation skills. The algorithm allows a Baxter robot to learn a pick-and-place task while avoiding a movable obstacle based on only 4 kinesthetic demonstrations. + +**Keywords:** hidden Markov models, imitation learning, adaptive systems + +## 1 Introduction + +Generative models are widely used in robot imitation learning to estimate the distribution of the data for regenerating samples from the model [1]. Common applications include probability density function estimation, image regeneration, dimensionality reduction and so on. The parameters of the model encode the task structure, which is inferred from the demonstrations. In contrast to direct trajectory learning from demonstrations, many problems arise in robotic applications that require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position and orientation of objects, and the viewpoint of the observer. A recent trend in imitation learning forgoes such task structure in favour of end-to-end supervised learning, which requires a large number of training demonstrations. + +§University of California, Berkeley. +¶Idiap Research Institute, Switzerland. +Corresponding author: ajay.tanwani@berkeley.edu +---PAGE_BREAK--- + +Fig.
1: Conceptual illustration of a hidden semi-Markov model (HSMM) for imitation learning: (left) 3-dimensional Z-shaped demonstrations composed of 5 equally spaced trajectory samples; (middle) the demonstrations are encoded with a 3-state HSMM whose Gaussian output distributions (shown as ellipsoids) correspond to the blue, green and red segments, respectively. The transition graph shows a duration model (Gaussian) next to each node; (right) the generative model is combined with linear quadratic tracking (LQT) to synthesize motion in robot manipulation tasks from 5 different initial conditions, marked with orange squares (see also Fig. 2). + +The focus of this paper is to learn the joint probability density function of the human demonstrations with a family of **hidden Markov models (HMMs)** in an **unsupervised** manner [20]. We combine tools from statistical machine learning and optimal control to segment the demonstrations into different components or sub-goals that are sequenced together to perform manipulation tasks in a smooth manner. We first present a simple algorithm for imitation learning that combines the decoded state sequence of a hidden semi-Markov model [20,30] with a linear quadratic tracking controller to follow the demonstrated movement [2] (see Fig. 1). We then augment the model with a task-parameterized formulation such that it can be systematically adapted to changing situations, such as the pose/size of the objects in the environment [4,23,27]. We present latent space formulations of our approach to exploit the task structure using: 1) mixture of factor analyzers decomposition of the covariance matrix [14], 2) semi-tied covariance matrices of the mixture model [23], and 3) a Bayesian non-parametric formulation of the model with a hierarchical Dirichlet process (HDP) for online learning under small variance asymptotics [24]. The paper unifies and extends our previous work on encoding manipulation skills in a task-adaptive manner [22,23,24].
Our objective is to reduce the number of demonstrations required for learning a new task, while ensuring effective generalization in new environmental situations. + +## 1.1 Related Work + +Imitation learning provides a promising approach to facilitate robot learning in the most 'natural' way. The main challenges in imitation learning include [16]: 1) **what-to-learn** – acquiring meaningful data to represent the important features of the task from demonstrations, and 2) **how-to-learn** – learning a control policy from the features to reproduce the demonstrated behaviour. Imitation learning algorithms typically fall into **behaviour cloning** or **inverse reinforcement learning (IRL)** approaches. IRL aims to recover the unknown reward function that is being optimized in the demonstrations, while +---PAGE_BREAK--- + +behaviour cloning approaches directly learn from human demonstrations in a supervised manner. Prominent approaches to imitation learning include Dynamic Movement Primitives [9], Generative Adversarial Imitation Learning [8], one-shot imitation learning [5] and so on [18]. + +This paper emphasizes learning manipulation skills from human demonstrations in an unsupervised manner using a family of hidden Markov models, by sequencing the atomic movement segments or primitives. HMMs have typically been used for recognition and generation of movement skills in robotics [13]. Other related application contexts in imitation learning include the options framework [10], sequencing primitives [15], and neural task programs [29]. + +A number of variants of HMMs have been proposed to address some of their shortcomings, including: 1) how to bias learning towards models with longer self-dwelling states, 2) how to robustly estimate the parameters with high-dimensional noisy data, 3) how to adapt the model to newly observed data, and 4) how to estimate the number of states that the model should possess.
For example, [11] used HMMs to incrementally group whole-body motions based on their relative distance in HMM space. [13] presented an iterative motion primitive refinement approach with HMMs. [17] used the Beta Process Autoregressive HMM for learning from unstructured demonstrations. Figueroa et al. used a transformation-invariant covariance matrix for encoding tasks with a Bayesian non-parametric HMM [6]. + +In this paper, we address these shortcomings with an algorithm that learns a hidden semi-Markov model [20,30] from a few human demonstrations for segmentation, recognition, and synthesis of robot manipulation tasks (see Sec. 2). The algorithm observes the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest, and adapts the model according to the environmental changes in a systematic manner in Sec. 3. Capturing such invariant representations allows us to encode the task variations more compactly than a standard regression formulation would. We present variants of the algorithm in latent space to exploit the task structure in Sec. 4. In Sec. 5, we show the application of our approach to learning a pick-and-place task from a few demonstrations, with an outlook on our future work. + +## 2 Hidden Markov Models + +**Hidden Markov models (HMMs)** encapsulate the spatio-temporal information by augmenting a mixture model with latent states that sequentially evolve over time in the demonstrations [20]. An HMM is thus defined as a doubly stochastic process, one over the sequence of hidden states and another over the sequence of observations/emissions. Spatio-temporal encoding with HMMs can handle movements with variable durations, recurring patterns, options in the movement, or partial/unaligned demonstrations. Without loss of generality, we will present our formulation with semi-Markov models for the remainder of the paper.
Semi-Markov models relax the Markovian structure of state transitions by relying not only upon the current state but also on the duration/elapsed time in the current state, i.e., the underlying process is defined by a *semi-Markov chain* with a variable duration time for each state. The state duration is a random integer variable that assumes values in the set $\{1, 2, \dots, s^{\max}\}$. The value corresponds to the +---PAGE_BREAK--- + +number of observations produced in a given state before transitioning to the next state. **Hidden semi-Markov models** (HSMMs) associate an observable output distribution with each state in a semi-Markov chain [30], similar to how we associated a sequence of observations with a Markov chain in an HMM. + +Let $\{\xi_t\}_{t=1}^T$ denote the sequence of observations with $\xi_t \in \mathbb{R}^D$ collected while demonstrating a manipulation task. The observation may represent visual input, kinesthetic data such as the pose and the velocities of the end-effector of the human arm, haptic information, or any arbitrary features defining the task variables of the environment. The observation sequence is associated with a hidden state sequence $\{z_t\}_{t=1}^T$ with $z_t \in \{1...K\}$ belonging to the discrete set of $K$ cluster indices. The cluster indices correspond to different segments of the task, such as reach, grasp, move, etc. We want to learn the joint probability density of the observation sequence and the hidden state sequence. The transition from one segment $i$ to another segment $j$ is encoded by the transition matrix $a \in \mathbb{R}^{K \times K}$ with $a_{i,j} \triangleq P(z_t = j | z_{t-1} = i)$. The parameters $\{\mu_j^S, \Sigma_j^S\}$ represent the mean and the standard deviation of staying $s$ consecutive time steps in state $j$, with $p(s)$ estimated by a Gaussian $\mathcal{N}(s|\mu_j^S, \Sigma_j^S)$.
The hidden state follows a categorical distribution with $z_t \sim \text{Cat}(\pi_{z_{t-1}})$, where $\pi_{z_{t-1}} \in \mathbb{R}^K$ is the next-state transition distribution over state $z_{t-1}$ with $\Pi_i$ as the initial probability, and the observation $\xi_t$ is drawn from the output distribution of state $j$, described by a multivariate Gaussian with parameters $\{\mu_j, \Sigma_j\}$. The overall parameter set for an HSMM is defined by $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i, \mu_i^S, \Sigma_i^S\}_{i=1}^K$. + +## 2.1 Encoding with HSMM + +For learning and inference in an HMM [20], we make use of the following intermediary variables: 1) **forward variable**, $\alpha_{t,i}^{HMM} \triangleq P(z_t = i, \xi_1, ..., \xi_t|\theta)$: probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, ..., \xi_t\}$; 2) **backward variable**, $\beta_{t,i}^{HMM} \triangleq P(\xi_{t+1}, ..., \xi_T|z_t = i, \theta)$: probability of the partial observation sequence $\{\xi_{t+1}, ..., \xi_T\}$ given that we are in the $i$-th state at time step $t$; 3) **smoothed node marginal**, $\gamma_{t,i}^{HMM} \triangleq P(z_t = i|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ given the full observation sequence $\xi$; and 4) **smoothed edge marginal**, $\zeta_{t,i,j}^{HMM} \triangleq P(z_t = i, z_{t+1} = j|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ and in state $j$ at time step $t+1$ given the full observation sequence $\xi$. The parameters $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i\}_{i=1}^K$ are estimated using the EM algorithm for HMMs, and the duration parameters $\{\mu_i^S, \Sigma_i^S\}_{i=1}^K$ are estimated empirically from the data after training using the most likely hidden state sequence $z_t = \{z_1...z_T\}$ (see supplementary materials for details).
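The empirical duration estimate described above can be sketched as follows: given a decoded hidden state sequence, the per-state mean and standard deviation of consecutive dwell lengths give $\{\mu_i^S, \Sigma_i^S\}$ (an illustrative helper, not the authors' code):

```python
from itertools import groupby
from statistics import mean, pstdev

def duration_params(z):
    """Empirical duration model: per-state mean/std of run lengths
    in a decoded hidden state sequence z (list of state indices)."""
    runs = {}
    for state, grp in groupby(z):
        runs.setdefault(state, []).append(len(list(grp)))
    return {s: (mean(d), pstdev(d)) for s, d in runs.items()}

z = [0, 0, 0, 1, 1, 2, 2, 2, 2, 0, 0, 0]
params = duration_params(z)
assert params[0] == (3, 0)    # state 0 dwelled 3 steps, twice
assert params[2] == (4, 0)    # state 2 dwelled 4 steps, once
```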
+ +## 2.2 Decoding from HSMM + +Given the learned model parameters, the probability of the observed sequence $\{\xi_1... \xi_t\}$ ending in hidden state $z_t = i$ (also known as the filtering +---PAGE_BREAK--- + +problem) is computed with the help of the forward variable as + +$$P(z_t | \xi_1, \dots, \xi_t) = h_{t,i}^{\text{HMM}} = \frac{\alpha_{t,i}^{\text{HMM}}}{\sum_{k=1}^{K} \alpha_{t,k}^{\text{HMM}}} = \frac{\pi_i \mathcal{N}(\xi_t | \mu_i, \Sigma_i)}{\sum_{k=1}^{K} \pi_k \mathcal{N}(\xi_t | \mu_k, \Sigma_k)}. \quad (1)$$ + +Sampling from the model to predict the sequence of states over the next time horizon $P(z_t, z_{t+1}, \dots, z_{T_p} | \xi_1, \dots, \xi_t)$ can be done in two ways: **1) stochastic sampling:** the sequence of states is sampled in a probabilistic manner given the state duration and the state transition probabilities. With stochastic sampling, motions that contain different options and do not evolve along only a single path can also be represented. Starting from the initial state $z_t = i$, the duration of $s$ steps is sampled from $\{\mu_i^S, \Sigma_i^S\}$, after which the next state is sampled as $z_{t+s+1} \sim \pi_{z_{t+s}}$. The procedure is repeated for the given time horizon in a receding horizon manner; **2) deterministic sampling:** the most likely sequence of states is sampled and remains unchanged in successive sampling trials. We use the forward variable of the HSMM for deterministic sampling from the model. The forward variable $\alpha_{t,i}^{\text{HSMM}} \triangleq P(z_t = i, \xi_1, \dots, \xi_t|\theta)$ requires marginalizing over the duration steps along with all possible state sequences.
The probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, \dots, \xi_t\}$ is now specified as [30] + +$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} \mathcal{N}(s|\mu_i^S, \Sigma_i^S) \prod_{c=t-s+1}^{t} \mathcal{N}(\xi_c | \mu_i, \Sigma_i), \quad (2)$$ + +where the initialization is given by $\alpha_{1,i}^{\text{HSMM}} = \Pi_i N(1|\mu_i^S, \Sigma_i^S) N(\xi_1|\mu_i, \Sigma_i)$, and the output distribution in state $i$ is conditionally independent for the $s$ duration steps given as $\prod_{c=t-s+1}^{t} N(\xi_c | \mu_i, \Sigma_i)$. Note that for $t < s^{\max}$, the sum over duration steps is computed for $t-1$ steps, instead of $s^{\max}$. Without the observation sequence for the next time steps, the forward variable simplifies to + +$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} N(s|\mu_i^S, \Sigma_i^S). \quad (3)$$ + +The forward variable is used to plan the movement sequence for the next $T_p$ steps with $t = t + 1... T_p$. During prediction, we only use the transition matrix and the duration model to plan the future evolution of the initial/current state and omit the influence of the spatial data that we cannot observe, i.e., $N(\xi_t|\mu_i, \Sigma_i) = 1$ for $t > 1$. This is used to retrieve a step-wise reference trajectory $N(\hat{\mu}_t, \hat{\Sigma}_t)$ from a given state sequence $z_t$ computed from the forward variable with, + +$$z_t = \{z_t, \dots, z_{T_p}\} = \arg\max_i \alpha_{t,i}^{\text{HSMM}}, \quad \hat{\mu}_t = \mu_{z_t}, \quad \hat{\Sigma}_t = \Sigma_{z_t}. \quad (4)$$ + +Fig. 2 shows a conceptual representation of the step-wise sequence of states generated by deterministically sampling from HSMM encoding of the Z-shaped data. In the next section, we show how to synthesise robot movement from this step-wise sequence of states in a smooth manner. 
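The stochastic sampling mode of Sec. 2.2 can be sketched as follows (illustrative toy parameters; durations are drawn from the learned Gaussian duration models):

```python
import random

def sample_states(Pi, A, mu_s, sigma_s, T):
    """Stochastic sampling from an HSMM (Sec. 2.2, mode 1): pick a
    state, dwell for a duration drawn from its Gaussian duration
    model, then transition according to A. Returns z_1..z_T."""
    K = len(Pi)
    z = random.choices(range(K), weights=Pi)[0]
    seq = []
    while len(seq) < T:
        # rounded Gaussian duration, clipped to at least one step
        s = max(1, round(random.gauss(mu_s[z], sigma_s[z])))
        seq.extend([z] * s)
        z = random.choices(range(K), weights=A[z])[0]
    return seq[:T]

# toy left-to-right model: dwell ~5 steps in state 0, then state 1
Pi = [1.0, 0.0]
A = [[0.0, 1.0],   # state 0 always hands over to state 1
     [0.0, 1.0]]   # state 1 is absorbing
seq = sample_states(Pi, A, mu_s=[5, 5], sigma_s=[1, 1], T=12)
assert seq[0] == 0 and len(seq) == 12
assert seq == sorted(seq)   # left-to-right: 0s followed by 1s
```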
+---PAGE_BREAK--- + +Fig. 2: Sampling from HSMM from an unseen initial state $\xi_0$ over the next time horizon, and tracking the step-wise desired sequence of states $\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$ with a linear quadratic tracking controller. Note that this converges although $\xi_0$ was not previously encountered. + +## 2.3 Motion Generation with Linear Quadratic Tracking + +We formulate the motion generation problem, given the step-wise desired sequence of states $\{\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)\}_{t=1}^{T_p}$, as sequential optimization of a scalar cost function with a linear quadratic tracker (LQT) [2]. The control policy $u_t$ at each time step is obtained by minimizing the cost function over the finite time horizon $T_p$, + +$$ c(\xi, u) = \sum_{t=1}^{T_p} (\xi_t - \hat{\mu}_t)^{\top} Q_t (\xi_t - \hat{\mu}_t) + u_t^{\top} R_t u_t, \quad (5) $$ + +s.t. $\xi_{t+1} = A_d\xi_t + B_d u_t,$ + +starting from the initial state $\xi_1$ and following the discrete linear dynamical system specified by $A_d$ and $B_d$. We consider a linear time-invariant double integrator to describe the system dynamics. Alternatively, a time-varying linearization of the system dynamics along the reference trajectory can also be used without loss of generality. Both the discrete- and the continuous-time linear quadratic regulator/tracker can be used to follow the desired trajectory. The discrete-time formulation, however, gives numerically stable results for a wide range of values of $R$. The control law $u_t^*$ that minimizes the cost function in Eq. (5) over a finite horizon subject to the linear dynamics in discrete time is given as + +$$ u_t^* = K_t(\hat{\mu}_t - \xi_t) + u_t^{\text{FF}}, \quad (6) $$ + +where $K_t = [K_t^P, K_t^V]$ are the full stiffness and damping matrices for the feedback term, and $u_t^{\text{FF}}$ is the feedforward term (see supplementary materials for computing the
3: Task-parameterized formulation of HSMM: the four demonstrations on the left are observed from two coordinate systems that define the start and end positions of each demonstration (starting at the purple position and ending at the green position). The generative model is learned in the respective coordinate systems. The model parameters in the respective coordinate systems are adapted to new, unseen object positions by computing the products of linearly transformed Gaussian mixture components. The resulting HSMM is combined with LQT for smooth retrieval of manipulation tasks. + +gains). Fig. 2 shows the results of applying discrete LQT to the desired step-wise sequence of states sampled from an HSMM encoding of the Z-shaped demonstrations. Note that the gains can be precomputed before simulating the system if the reference trajectory does not change during the reproduction of the task. The resulting trajectory $\xi_t^*$ smoothly tracks the step-wise reference trajectory $\hat{\mu}_t$, and the gains $K_t^P, K_t^V$ locally stabilize the motion along $\xi_t^*$ in accordance with the precision required during the task. + +# 3 Invariant Task-Parameterized HSMMs + +The conventional approach to encoding task variations, such as a change in the pose of an object, is to augment the state of the environment with the policy parameters [19]. Such an encoding, however, does not capture the geometric structure of the problem. Our approach exploits the problem structure by introducing the task parameters in the form of coordinate systems that observe the demonstrations from multiple perspectives. Task-parameterization enables the model parameters to adapt in accordance with the external task parameters that describe the environmental situation, instead of hard-coding the solution for each new situation or handling it in an *ad hoc* manner [27].
When a different situation occurs (e.g., the pose of the object changes), the changes in the task parameters/reference frames are used to modulate the model parameters in order to adapt the robot movement to the new situation.

## 3.1 Model Learning

We represent the task parameters with $F$ coordinate systems, defined by $\{A_j, b_j\}_{j=1}^F$, where $A_j$ denotes the orientation of the frame as a rotation matrix and $b_j$ represents the origin of the frame. We assume that the coordinate frames are specified by the user, based on prior knowledge about the task to be carried out. Typically, coordinate frames will be attached to objects, tools or locations that could be relevant in the execution of the task. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames,

---PAGE_BREAK---

with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint observed with respect to frame $j$. The parameters of the task-parameterized HSMM are defined by

$$ \theta = \left\{ \{\mu_i^{(j)}, \Sigma_i^{(j)}\}_{j=1}^F, \{a_{i,m}\}_{m=1}^K, \mu_i^S, \Sigma_i^S \right\}_{i=1}^K, $$

where $\mu_i^{(j)}$ and $\Sigma_i^{(j)}$ define the mean and the covariance matrix of the $i$-th mixture component in frame $j$. The parameter updates of the task-parameterized HSMM remain the same as for the HSMM, except that the computation of the mean and the covariance matrix is repeated for each coordinate system separately. The emission distribution of the $i$-th state is represented by the product over frames of the probabilities of the datapoint under the $i$-th Gaussian in the corresponding $j$-th coordinate system. The forward variable of the HMM in the task-parameterized formulation is described as

$$ \alpha_{t,i}^{\text{TP-HMM}} = \left( \sum_{k=1}^{K} \alpha_{t-1,k}^{\text{TP-HMM}} a_{k,i} \right) \prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)}).
\quad (7) $$

Similarly, the backward variable $\beta_{t,i}^{\text{TP-HMM}}$, the smoothed node marginal $\gamma_{t,i}^{\text{TP-HMM}}$, and the smoothed edge marginal $\zeta_{t,i,j}^{\text{TP-HMM}}$ can be computed by replacing the emission distribution $\mathcal{N}(\xi_t | \mu_i, \Sigma_i)$ with the product of the probabilities of the datapoint in each frame, $\prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)})$. The duration model $\mathcal{N}(s|\mu_i^S, \Sigma_i^S)$ is used in place of the self-transition probabilities $a_{i,i}$. The hidden state sequence over all demonstrations is used to define the duration model parameters $\{\mu_i^S, \Sigma_i^S\}$ as the mean and the standard deviation of staying $s$ consecutive time steps in the $i$-th state.

## 3.2 Model Adaptation in New Situations

In order to combine the output of the coordinate frames of reference for an unseen situation represented by the frames $\{\tilde{\mathbf{A}}_j, \tilde{\mathbf{b}}_j\}_{j=1}^F$, we linearly transform the Gaussians back to the global coordinates and retrieve the new model parameters $\{\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i\}$ of the $i$-th mixture component by computing the product of the linearly transformed Gaussians (see Fig. 3),

$$ \mathcal{N}(\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i) \propto \prod_{j=1}^{F} \mathcal{N}(\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j, \tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top). \quad (8) $$

The product of Gaussians represents the observation distribution of the HSMM, whose output sequence is decoded and combined with LQT for smooth motion generation as shown in the previous section.
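The adaptation in Eq. (8) reduces to a precision-weighted combination of the frame-wise Gaussians mapped into global coordinates. A minimal numerical sketch (the function names are ours, not from the paper):

```python
import numpy as np

def transform_gaussian(mu, sigma, A, b):
    """Linearly transform a frame-local Gaussian into global coordinates."""
    return A @ mu + b, A @ sigma @ A.T

def product_of_gaussians(params):
    """Product of Gaussians N(mu_j, sigma_j): precision-weighted fusion."""
    precisions = [np.linalg.inv(s) for (_, s) in params]
    sigma_hat = np.linalg.inv(np.sum(precisions, axis=0))
    mu_hat = sigma_hat @ np.sum(
        [p @ m for p, (m, _) in zip(precisions, params)], axis=0)
    return mu_hat, sigma_hat

# two frames observing the same mixture component
g1 = transform_gaussian(np.zeros(2), np.eye(2), np.eye(2), np.array([1.0, 0.0]))
g2 = transform_gaussian(np.zeros(2), np.eye(2), np.eye(2), np.array([0.0, 1.0]))
mu_hat, sigma_hat = product_of_gaussians([g1, g2])
```

With two unit-covariance Gaussians, the fused estimate lies midway between the two transformed means and the fused covariance shrinks, reflecting the increased confidence from combining frames.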
$$ \tilde{\Sigma}_i = \left( \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} \right)^{-1}, \qquad \tilde{\boldsymbol{\mu}}_i = \tilde{\Sigma}_i \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} (\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j). \quad (9) $$

---PAGE_BREAK---

Fig. 4: Parameter representation of a diagonal, a full, and a mixture-of-factor-analyzers decomposition of the covariance matrix. Filled blocks represent non-zero entries.

# 4 Latent Space Representations

Dimensionality reduction has long been recognized as a fundamental problem in unsupervised learning. Model-based generative models such as HSMMs tend to suffer from the *curse of dimensionality* when few datapoints are available. To address this problem, we use statistical subspace clustering methods that reduce the number of parameters to be estimated. A simple way to reduce the number of parameters is to constrain the covariance structure to a diagonal or spherical/isotropic matrix, at the cost of treating each dimension separately. Such decoupling, however, cannot encode the important motor control principles of coordination, synergies and action-perception couplings [28].

Consequently, we seek a latent feature space in the high-dimensional data to reduce the number of model parameters so that they can be robustly estimated. We consider three formulations to this end: 1) low-rank decomposition of the covariance matrix using the *Mixture of Factor Analyzers (MFA)* approach [14], 2) partial tying of the covariance matrices of the mixture model with the same set of basis vectors, albeit with different scales, using semi-tied covariance matrices [7,23], and 3) Bayesian non-parametric sequence clustering under small variance asymptotics [12,21,24].
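The savings illustrated in Fig. 4 can be made concrete by counting the free covariance parameters per mixture component; a small sketch (naive counts, ignoring the rotational non-identifiability of the factors):

```python
def cov_params_per_component(D, structure, d=None):
    """Free covariance parameters per mixture component (naive count)."""
    if structure == "full":
        return D * (D + 1) // 2   # symmetric D x D matrix
    if structure == "diag":
        return D                  # diagonal entries only
    if structure == "mfa":
        return D * d + D          # loading matrix Lambda plus diagonal Psi
    raise ValueError(structure)
```

For $D = 16$ (as in the experiments of Sec. 5), this gives 136 parameters per component for a full covariance, 16 for a diagonal one, and 80 for an MFA decomposition with $d = 4$.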
All the decompositions can readily be combined with the invariant task-parameterized HSMM and LQT for encapsulating reactive autonomous behaviour, as shown in the previous section.

## 4.1 Mixture of Factor Analyzers

The basic idea of MFA is to perform subspace clustering by assuming a covariance structure for each component of the form

$$ \Sigma_i = \Lambda_i \Lambda_i^\top + \Psi_i, \quad (10) $$

where $\Lambda_i \in \mathbb{R}^{D \times d}$ is the factor loadings matrix with $d < D$ for a parsimonious representation of the data, and $\Psi_i$ is the diagonal noise matrix (see Fig. 4 for the MFA representation in comparison to a diagonal and a full covariance matrix). Note that the mixture of probabilistic principal component analyzers (MPPCA) model is a special case of MFA with the distribution of the errors assumed to be isotropic, $\Psi_i = I\sigma_i^2$ [26]. The MFA model assumes that $\xi_t$ is generated using a linear transformation of a $d$-dimensional vector of latent (unobserved) factors $f_t$,

$$ \xi_t = \Lambda_i f_t + \mu_i + \epsilon, \quad (11) $$

---PAGE_BREAK---

where $\mu_i \in \mathbb{R}^D$ is the mean vector of the $i$-th factor analyzer, $f_t \sim \mathcal{N}(0, I)$ is a normally distributed factor, and $\epsilon \sim \mathcal{N}(0, \Psi_i)$ is a zero-mean Gaussian noise with diagonal covariance $\Psi_i$. The diagonal assumption implies that the observed variables are independent given the factors. Note that the subspace of each cluster is not spanned by orthogonal vectors, whereas orthogonality is a necessary condition in models based on eigendecomposition such as PCA. Each covariance matrix of the mixture components has its own subspace spanned by the basis vectors of $\Sigma_i$. As the number of components increases to encode more complex skills, an increasingly large number of potentially redundant parameters is used to fit the data.
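The generative view of Eqs. (10)–(11) is easy to check numerically: sampling latent factors and adding diagonal noise reproduces the low-rank-plus-diagonal covariance. A minimal sketch with arbitrary, made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 2
Lambda = rng.standard_normal((D, d))        # factor loadings, Eq. (10)
mu = np.zeros(D)
Psi = np.diag(rng.uniform(0.05, 0.2, D))    # diagonal noise matrix

# Eq. (11): xi = Lambda f + mu + eps, with f ~ N(0, I), eps ~ N(0, Psi)
n = 50000
f = rng.standard_normal((n, d))
eps = rng.standard_normal((n, D)) * np.sqrt(np.diag(Psi))
X = f @ Lambda.T + mu + eps

Sigma_model = Lambda @ Lambda.T + Psi       # Eq. (10)
Sigma_hat = np.cov(X, rowvar=False)         # empirical covariance of samples
```

The empirical covariance of the samples converges to the low-rank-plus-diagonal matrix of Eq. (10) as the sample size grows.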
Consequently, there is a need to share the basis vectors across the mixture components, as shown below by semi-tying the covariance matrices of the mixture model.

## 4.2 Semi-Tied Mixture Model

When the covariance matrices of the mixture model share the same set of parameters for the latent feature space, we call the model a *semi-tied* mixture model [23]. The main idea behind semi-tied mixture models is to decompose the covariance matrix $\Sigma_i$ into two terms: a common latent feature matrix $H \in \mathbb{R}^{D \times D}$ and a component-specific diagonal matrix $\Sigma_i^{(\text{diag})} \in \mathbb{R}^{D \times D}$, i.e.,

$$ \Sigma_i = H \Sigma_i^{(\text{diag})} H^\top. \quad (12) $$

The latent feature matrix encodes the locally important synergistic directions, represented by $D$ non-orthogonal basis vectors that are shared across all the mixture components, while the diagonal matrix selects the appropriate subspace of each mixture component as a convex combination of a subset of the basis vectors of $H$. Note that the eigendecomposition $\Sigma_i = U_i \Sigma_i^{(\text{diag})} U_i^\top$ contains $D$ basis vectors of $\Sigma_i$ in $U_i$. In comparison, the semi-tied mixture model gives $D$ globally representative basis vectors that are shared across all the mixture components. The parameters $H$ and $\Sigma_i^{(\text{diag})}$ are updated in closed form with the EM updates of the HSMM [7].

The underlying hypothesis in semi-tying the model parameters is that similar coordination patterns occur at different phases of a manipulation task. By exploiting the spatial and temporal correlations in the demonstrations, we reduce the number of parameters to be estimated while locking the most important synergies to cope with perturbations. This allows the reuse of the discovered synergies in different parts of the task having similar coordination patterns.
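A small sketch of the decomposition in Eq. (12): all components share one latent feature matrix $H$, and only a diagonal matrix varies per component (the dimensions and values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 4, 3
H = rng.standard_normal((D, D))                           # shared latent feature matrix
diag_terms = [np.diag(rng.uniform(0.5, 2.0, D)) for _ in range(K)]

# Eq. (12): per-component covariances built from the shared basis
Sigmas = [H @ S @ H.T for S in diag_terms]

# parameter count: D*D shared + K*D diagonals, versus K*D*(D+1)/2 for full
tied_params = D * D + K * D
full_params = K * D * (D + 1) // 2
```

With these toy sizes the saving is modest (28 vs. 30 parameters), but it grows with the number of components $K$, since the $D \times D$ matrix $H$ is paid for only once.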
In contrast, the MFA decomposition of each covariance matrix separately cannot exploit these temporal synergies, but has more flexibility in locally encoding the data.

## 4.3 Bayesian Non-Parametrics under Small Variance Asymptotics

Specifying the number of latent states in a mixture model is often difficult. Model selection methods such as cross-validation or the Bayesian Information Criterion (BIC) are typically used to determine the number of states. Bayesian non-parametric approaches based on Hierarchical Dirichlet Processes (HDPs) provide a principled model selection procedure by Bayesian inference in an HMM with an infinite number of states [25].

---PAGE_BREAK---

Fig. 5: Bayesian non-parametric clustering of Z-shaped streaming data under small variance asymptotics with: (left) online DP-GMM, (right) online DP-MPPCA. Note that the number of clusters and the subspace dimension of each cluster are adapted in a non-parametric manner.

These approaches provide flexibility in model selection; however, their widespread use is limited by the computational overhead of existing sampling-based and variational techniques for inference. We take a **small variance asymptotics** approximation of the Bayesian non-parametric model that collapses the posterior to a simple deterministic model, while retaining the non-parametric characteristics of the algorithm.

Small variance asymptotic (SVA) analysis sets the covariance matrix $\Sigma_i$ of all the Gaussians to an isotropic noise $\sigma^2$, i.e., $\Sigma_i \approx \lim_{\sigma^2 \to 0} \sigma^2 I$, in the likelihood function and the prior distribution [3,12]. The analysis yields simple deterministic models, while retaining the non-parametric nature. For example, SVA analysis of the Bayesian non-parametric GMM leads to the DP-means algorithm [12]. Similarly, SVA analysis of the Bayesian non-parametric HMM under a Hierarchical Dirichlet Process (HDP) yields the segmental $k$-means problem [21].
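For intuition, the DP-means algorithm obtained in [12] is a $k$-means-style procedure in which a point farther than a penalty radius from every centroid spawns a new cluster; a compact sketch (our own illustrative implementation, not the paper's code):

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """DP-means: SVA limit of the DP-GMM; lam penalizes opening clusters."""
    centroids = [X[0].copy()]
    z = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for t, x in enumerate(X):
            d2 = np.array([np.sum((x - c) ** 2) for c in centroids])
            if d2.min() > lam:          # too far from every centroid: new cluster
                centroids.append(x.copy())
                z[t] = len(centroids) - 1
            else:
                z[t] = int(d2.argmin())
        # recompute centroids, keeping any (rare) empty cluster unchanged
        centroids = [X[z == k].mean(axis=0) if np.any(z == k) else c
                     for k, c in enumerate(centroids)]
    return np.array(centroids), z

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)),
               rng.normal(10.0, 0.2, (50, 2))])
centroids, z = dp_means(X, lam=4.0)
```

On two well-separated blobs, the penalty $\lambda$ alone determines that exactly two clusters are opened; no number of clusters is specified in advance.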
Restricting the covariance matrix to an isotropic/spherical noise, however, fails to encode the coordination patterns in the demonstrations. Consequently, we model the covariance matrix in its intrinsic affine subspace of dimension $d_i$ with the projection matrix $\Lambda_i^{d_i} \in \mathbb{R}^{D \times d_i}$, such that $d_i < D$ and $\Sigma_i = \lim_{\sigma^2 \to 0} \Lambda_i^{d_i} {\Lambda_i^{d_i}}^{\top} + \sigma^2 I$ (akin to the DP-MPPCA model). Under this assumption, we apply the small variance asymptotic limit on the remaining $(D - d_i)$ dimensions to encode the most important coordination patterns while being parsimonious in the number of parameters (see Fig. 5). Performing small variance asymptotics on the joint likelihood of the HDP-HMM yields the maximum a posteriori estimates of the parameters by iteratively minimizing the loss function*

$$
\begin{aligned}
\mathcal{L}(z, d, \mu, U, a) = & \sum_{t=1}^{T} \mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d_{z_t}})^2 + \lambda(K-1) \\
& + \lambda_1 \sum_{i=1}^{K} d_i - \lambda_2 \sum_{t=1}^{T-1} \log(a_{z_t, z_{t+1}}) + \lambda_3 \sum_{i=1}^{K} (\tau_i - 1),
\end{aligned}
$$

where $\mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d_{z_t}})^2$ represents the distance of the datapoint $\xi_t$ to the subspace of cluster $z_t$ defined by the mean $\mu_{z_t}$ and the unit eigenvectors of the covariance matrix $U_{z_t}^{d_{z_t}}$ (see supplementary materials for details). The algorithm optimizes the number of clusters

*Setting $d_i = 0$ by choosing $\lambda_1 \gg 0$ gives the loss function formulation with isotropic Gaussians under small variance asymptotics [21].

---PAGE_BREAK---

Fig.
6: (left) The Baxter robot picks the glass plate with a suction lever and places it on the cross after avoiding an obstacle of varying height, (centre-left) reproduction for a previously unseen object and obstacle position, (centre-right) left-right HSMM encoding of the task with the duration model shown next to each state ($s^{max} = 100$), (right) evolution of the rescaled forward variable over time.

and the subspace dimension of each cluster while minimizing the distance of the datapoints to the respective subspaces of each cluster. The $\lambda_2$ term favours transitions to states with higher transition probability (states which have been visited more often before), $\lambda_3$ penalizes transitions to unvisited states, with $\tau_i$ denoting the number of distinct transitions out of state $i$, while $\lambda$ and $\lambda_1$ are the penalty terms for increasing the number of states and the subspace dimension of each output state distribution.

The analysis is used here for scalable online sequence clustering that is non-parametric in the number of clusters and the subspace dimension of each cluster. The resulting algorithm groups the data in its low-dimensional subspace with a non-parametric mixture of probabilistic principal component analyzers based on the Dirichlet process, and captures the state transition and state duration information in an HDP-HSMM. The cluster assignment and the parameter updates at each iteration minimize the loss function, thereby increasing the model fitness while penalizing new transitions, new dimensions and/or new clusters. The interested reader can find more details of the algorithm in [24].

# 5 Experiments, Results and Discussion

We now show how our proposed work enables a Baxter robot to learn a pick-and-place task from a few human demonstrations.
The objective of the task is to place the object in a desired target position by picking it up from different initial poses, while adapting the movement to avoid the obstacle. The setup of the pick-and-place task with obstacle avoidance is shown in Fig. 6. The Baxter robot is required to grasp the glass plate, placed in an initial configuration as marked on the setup, with a suction lever. The obstacle can be vertically displaced to one of the 8 target configurations. We describe the task with two frames: one frame $\{A_1, b_1\}$ for the initial configuration of the object, and the other frame $\{A_2, b_2\}$ for the obstacle, with $A_2 = I$ and $b_2$ specifying the centre of the obstacle. We collect 8 kinesthetic demonstrations with different initial configurations of the object and the obstacle successively displaced upwards, as marked with the visual tags in the figure. Alternate demonstrations are used for the training set, while the rest are used for the test set. Each observation comprises the end-effector Cartesian position,

---PAGE_BREAK---

Fig. 7: Task-parameterized HSMM performance on the pick-and-place with obstacle avoidance task: (top) training set reproductions, (bottom) testing set reproductions.

quaternion orientation, gripper status (open/closed), linear velocity, quaternion derivative, and gripper status derivative, with $D = 16$, $P = 2$, and a total of 200 datapoints per demonstration.
We represent the frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ as

$$ \mathbf{A}_1^{(n)} = \begin{bmatrix} \mathbf{R}_1^{(n)} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\varepsilon}_1^{(n)} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{R}_1^{(n)} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \boldsymbol{\varepsilon}_1^{(n)} \end{bmatrix}, \quad \mathbf{b}_1^{(n)} = \begin{bmatrix} \mathbf{p}_1^{(n)} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}, \qquad (13) $$

where $\mathbf{p}_1^{(n)} \in \mathbb{R}^3$, $\mathbf{R}_1^{(n)} \in \mathbb{R}^{3\times3}$ and $\boldsymbol{\varepsilon}_1^{(n)} \in \mathbb{R}^{4\times4}$ denote the Cartesian position, the rotation matrix and the quaternion matrix in the $n$-th demonstration, respectively. Note that we do not consider time as an explicit variable, as the duration model in the HSMM encapsulates the timing information locally.

The settings in our experiments are as follows: $\{\pi_i, \mu_i, \Sigma_i\}_{i=1}^K$ are initialized using the $k$-means clustering algorithm; $R = 9I$, where $I$ is the identity matrix; learning converges when the difference in log-likelihood between successive iterations is less than $1 \times 10^{-4}$. Results of regenerating the movements with 7 mixture components are shown in Fig. 7. For a given initial configuration of the object, the model parameters are adapted by evaluating the product of Gaussians for the new frame configuration. The reference trajectory is then computed from the initial position of the robot arm using the forward variable of the HSMM and tracked using LQT. The robot arm moves from its initial configuration to align itself with the first frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ to grasp the object, follows this with the movement to avoid the obstacle, and subsequently aligns with the second frame $\{\mathbf{A}_2, \mathbf{b}_2\}$ before placing the object and returning to a neutral position.
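As an illustration of the tracking step (Eqs. (5)–(6)), the finite-horizon discrete-time LQT gains and feedforward terms can be computed with a backward Riccati-like pass. A minimal sketch for a 1-DoF double integrator with a constant reference (the matrices and weights here are illustrative, not the values used in the experiments):

```python
import numpy as np

def lqt_backward_pass(A, B, refs, Q, R):
    """Backward pass of finite-horizon discrete LQT.

    Returns feedback gains K_t and feedforward terms k_t such that
    u_t = -K_t @ xi_t + k_t tracks the step-wise reference refs."""
    T = len(refs)
    P, v = Q.copy(), Q @ refs[-1]
    Ks, ks = [], []
    for t in range(T - 2, -1, -1):
        M = R + B.T @ P @ B
        K = np.linalg.solve(M, B.T @ P @ A)       # feedback gain
        k = np.linalg.solve(M, B.T @ v)           # feedforward term
        P = Q + A.T @ P @ (A - B @ K)             # Riccati recursion
        v = Q @ refs[t] + (A - B @ K).T @ v       # linear (tracking) term
        Ks.append(K); ks.append(k)
    return Ks[::-1], ks[::-1]

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator: position, velocity
B = np.array([[0.0], [dt]])
Q = np.diag([1e3, 1.0])                 # track position tightly
R = np.array([[1e-2]])
refs = np.tile(np.array([1.0, 0.0]), (400, 1))   # constant set-point

Ks, ks = lqt_backward_pass(A, B, refs, Q, R)
xi = np.zeros(2)
for t in range(len(refs) - 1):
    u = -Ks[t] @ xi + ks[t]
    xi = A @ xi + B @ u
```

The gains depend only on the reference and the dynamics, so (as noted in Sec. 2.3) they can be precomputed before simulating the system.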
The model exploits variability in the observed demonstrations to statistically encode different phases of the task such as reach, grasp, move, place, return. The imposed +---PAGE_BREAK--- + +Fig. 8: Latent space representations of invariant task-parameterized HSMM for a randomly chosen demonstration from the test set. Black dotted lines show human demonstration, while grey line shows the reproduction from the model (see supplementary materials for details). + +Table 1: Performance analysis of invariant hidden Markov models with training MSE, testing MSE, number of parameters for pick-and-place task. MSE (in meters) is computed between the demonstrated trajectories and the generated trajectories (lower is better). Latent space formulations give comparable task performance with much fewer parameters. + +
| Model | Training MSE | Testing MSE | Number of Parameters |
| --- | --- | --- | --- |
| *pick-and-place via obstacle avoidance (K = 7, F = 2, D = 16)* | | | |
| HSMM | 0.0026 ± 0.0009 | 0.014 ± 0.0085 | 2198 |
| Semi-Tied HSMM | 0.0033 ± 0.0016 | 0.0131 ± 0.0077 | 1030 |
| MFA HSMM ($d_k = 1$) | 0.0037 ± 0.0011 | 0.0109 ± 0.0068 | 742 |
| MFA HSMM ($d_k = 4$) | 0.0025 ± 0.0007 | 0.0119 ± 0.0077 | 1414 |
| MFA HSMM ($d_k = 7$) | 0.0023 ± 0.0009 | 0.0123 ± 0.0084 | 2086 |
| SVA HDP HSMM ($K = 8$, $k = 3.94$) | 0.0073 ± 0.0024 | 0.0149 ± 0.0072 | 1352 |
structure with task-parameters and HSMM allows us to acquire a new task from a few human demonstrations, and to generalize effectively in picking and placing the object. Table 1 evaluates the performance of the invariant task-parameterized HSMM with latent space representations. We observe a significant reduction in the number of model parameters, while achieving better generalization on unseen situations compared to the task-parameterized HSMM with full covariance matrices (see Fig. 8 for a comparison across models). The MFA decomposition gives the best performance on the test set with far fewer parameters.

# 6 Conclusions

Learning from demonstrations is a promising approach to teach manipulation skills to robots. In contrast to deep learning approaches that require extensive training data, generative mixture models are useful for learning from a few examples that are not explicitly labelled. The formulations presented here are motivated by the need to make generative mixture models easy to use for robot learning in a variety of applications, while requiring considerably less learning time.

---PAGE_BREAK---

We have presented formulations for learning invariant task representations with hidden semi-Markov models for the recognition, prediction, and reproduction of manipulation tasks, along with latent space representations for robust parameter estimation of mixture models with high-dimensional data. By sampling the sequence of states from the model and following them with a linear quadratic tracking controller, we are able to autonomously perform manipulation tasks in a smooth manner. This has enabled a Baxter robot to tackle a pick-and-place via obstacle avoidance problem from previously unseen configurations of the environment. A relevant direction of future work is to not rely on specifying the task parameters manually, but to infer generalized task representations of the invariant segments from videos of the demonstrations.
Moreover, learning the task model from a small set of labelled demonstrations in a semi-supervised manner is an important aspect in extracting meaningful segments from demonstrations. + +**Acknowledgements:** This work was, in large part, carried out at Idiap Research Institute and Ecole Polytechnique Federale de Lausanne (EPFL) Switzerland. This work was in part supported by the DexROV project through the EC Horizon 2020 program (Grant 635491), and the NSF National Robotics Initiative Award 1734633 on Scalable Collaborative Human-Robot Learning (SCHooL). The information, data, comments, and views detailed herein may not necessarily reflect the endorsements of the sponsors. + +## References + +1. Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robot. Auton. Syst., 57(5):469-483, May 2009. +2. Francesco Borrelli, Alberto Bemporad, and Manfred Morari. Predictive control for linear and hybrid systems. Cambridge University Press, 2011. +3. Tamara Broderick, Brian Kulis, and Michael I. Jordan. Mad-bayes: Map-based asymptotic derivations from bayes. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 226-234, 2013. +4. S. Calinon. A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics, 9(1):1-29, 2016. +5. Yan Duan, Marcin Andrychowicz, Brad C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017. +6. Nadia Figueroa and Aude Billard. Transform-invariant non-parametric clustering of covariance matrices and its application to unsupervised joint segmentation and action discovery. CoRR, abs/1710.10060, 2017. +7. Mark J. F. Gales. Semi-tied covariance matrices for hidden markov models. IEEE Transactions on Speech and Audio Processing, 7(3):272-281, 1999. +8. Jonathan Ho and Stefano Ermon. 
Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016. +9. A. Ijspeert, J. Nakanishi, P Pastor, H. Hoffmann, and S. Schaal. Dynamical movement primitives: Learning attractor models for motor behaviors. Neural Computation, (25):328-373, 2013. +10. S. Krishnan, R. Fox, I. Stoica, and K. Goldberg. DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations. CoRR, 2017. +11. D. Kulic, W. Takano, and Y. Nakamura. Incremental learning, clustering and hierarchy formation of whole body motion patterns using adaptive hidden markov chains. Intl Journal of Robotics Research, 27(7):761-784, 2008. +---PAGE_BREAK--- + +12. Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via bayesian non-parametrics. In *Proceedings of the 29th International Conference on Machine Learning (ICML-12)*, pages 513–520, New York, NY, USA, 2012. ACM. + +13. D. Lee and C. Ott. Incremental motion primitive learning by physical coaching using impedance control. In *Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS)*, pages 4133–4140, Taipei, Taiwan, October 2010. + +14. G. J. McLachlan, D. Peel, and R. W. Bean. Modelling high-dimensional data by mixtures of factor analyzers. *Computational Statistics and Data Analysis*, 41(3-4):379–388, 2003. + +15. Jose Medina R. and Aude Billard. Learning Stable Task Sequences from Demonstration with Linear Parameter Varying Systems and Hidden Markov Models. In *Conference on Robot Learning (CoRL)*, 2017. + +16. Chrystopher L. Nehaniv and Kerstin Dautenhahn, editors. *Imitation and social learning in robots, humans, and animals: behavioural, social and communicative dimensions*. Cambridge University Press, 2004. + +17. Scott Niekum, Sarah Osentoski, George Konidaris, and Andrew G Barto. Learning and generalization of complex tasks from unstructured demonstrations. In *IEEE/RSJ International Conference on Intelligent Robots and Systems*, pages 5239–5246, 2012. + +18. 
Takayuki Osa, Joni Pajarinen, Gerhard Neumann, Andrew Bagnell, Pieter Abbeel, and Jan Peters. *An Algorithmic Perspective on Imitation Learning*. Now Publishers Inc., Hanover, MA, USA, 2018. + +19. Alexandros Paraschos, Christian Daniel, Jan R Peters, and Gerhard Neumann. Probabilistic movement primitives. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, *Advances in Neural Information Processing Systems 26*, pages 2616–2624. Curran Associates, Inc., 2013. + +20. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77:257–285, 1989. + +21. Anirban Roychowdhury, Ke Jiang, and Brian Kulis. Small-variance asymptotics for hidden markov models. In *Advances in Neural Information Processing Systems 26*, pages 2103–2111. Curran Associates, Inc., 2013. + +22. A. K. Tanwani. *Generative Models for Learning Robot Manipulation Skills from Humans*. PhD thesis, Ecole Polytechnique Federale de Lausanne, Switzerland, 2018. + +23. A. K. Tanwani and S. Calinon. Learning robot manipulation tasks with task-parameterized semitied hidden semi-markov model. *IEEE Robotics and Automation Letters*, 1(1):235–242, 2016. + +24. Ajay Kumar Tanwani and Sylvain Calinon. Small variance asymptotics for non-parametric online robot learning. CoRR, abs/1610.02468, 2016. + +25. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical dirichlet processes. *Journal of the American Statistical Association*, 101(476):1566–1581, 2006. + +26. M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analyzers. *Neural Computation*, 11(2):443–482, 1999. + +27. A. D. Wilson and A. F. Bobick. Parametric hidden Markov models for gesture recognition. *IEEE Trans. on Pattern Analysis and Machine Intelligence*, 21(9):884–900, 1999. + +28. D. M. Wolpert, J. Diedrichsen, and J. R. Flanagan. Principles of sensorimotor learning. *Nature Reviews*, 12:739–751, 2011. + +29. 
Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Neural task programming: Learning to generalize across hierarchical tasks. CoRR, abs/1710.01813, 2017. + +30. S.-Z. Yu. Hidden semi-Markov models. Artificial Intelligence, 174:215–243, 2010. \ No newline at end of file diff --git a/samples/texts_merged/3594993.md b/samples/texts_merged/3594993.md new file mode 100644 index 0000000000000000000000000000000000000000..883a9862ec6545db81d06fd018e26bb322db7806 --- /dev/null +++ b/samples/texts_merged/3594993.md @@ -0,0 +1,309 @@ + +---PAGE_BREAK--- + +# On coloring box graphs + +CrossMark + +Emilie Hogana, Joseph O'Rourkeb, Cindy Traubc, Ellen Veomettd,* + +a Pacific Northwest National Laboratory, United States + +b Smith College, United States + +c Southern Illinois University Edwardsville, United States + +d Saint Mary's College of California, United States + +## ARTICLE INFO + +**Article history:** +Received 5 November 2013 +Received in revised form 6 September 2014 +Accepted 13 September 2014 +Available online 23 October 2014 + +**Keywords:** +Graph coloring +Box graph +Chromatic number + +## ABSTRACT + +We consider the chromatic number of a family of graphs we call box graphs, which arise from a box complex in *n*-space. It is straightforward to show that any box graph in the plane has an admissible coloring with three colors, and that any box graph in *n*-space has an admissible coloring with *n* + 1 colors. We show that for box graphs in *n*-space, if the lengths of the boxes in the corresponding box complex take on no more than two values from the set {1, 2, 3}, then the box graph is 3-colorable, and for some graphs three colors are required. We also show that box graphs in 3-space which do not have cycles of length four (which we call "string complexes") are 3-colorable. + +© 2014 Elsevier B.V. All rights reserved. + +## 1. 
Introduction and results + +There are many geometrically-defined graphs whose chromatic numbers have been studied. Perhaps the most famous such example is the Four Color Theorem, which states that any planar graph is 4-colorable [1]. Another famous example is the chromatic number of the plane. More specifically, a graph $G = (V, E)$ is defined where $V = \mathbb{R}^2$ and $(x, y) \in E$ precisely when $\|x - y\|_2 = 1$ (where $\| \cdot \|_2$ is the usual Euclidean norm in the plane). Through simple geometric constructions, one can show that $4 \le \chi(G) \le 7$ for this graph, although the precise value is still not known; see [8], for example. + +In this article, we consider graphs that arise from box complexes. We first define what a box complex is: + +**Definition 1.** An *n*-dimensional box is a set $B \subset \mathbb{R}^n$ that can be defined as: + +$$B = \{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$ + +where $a_i < b_i$ for $i = 1, 2, \dots, n$. + +An *n*-dimensional *box complex* is a set of finitely many *n*-dimensional boxes $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ such that if the intersection of two boxes $B_i \cap B_j$ is nonempty, then $B_i \cap B_j$ is a face (of any dimension) of both $B_i$ and $B_j$, for any $i$ and $j$ (see Fig. 1). + +Now we can define a box graph: + +**Definition 2.** An *n*-dimensional *box graph* is a graph defined on an *n*-dimensional box complex. The box graph $G(\mathcal{B}) = (V, E)$ defined on the box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ is the undirected graph whose vertex set is the boxes: + +$$V = \{B_1, B_2, \dots, B_m\}$$ + +* Corresponding author. +E-mail address: erv2@stmarys-ca.edu (E. Veomett). +---PAGE_BREAK--- + +Fig. 1. Examples in $\mathbb{R}^2$. + +Fig. 2. Defining a 2-dimensional box graph. + +and whose edges $(B_i, B_j) \in E$ record when $B_i \cap B_j$ is an $(n-1)$-dimensional face of both $B_i$ and $B_j$. 
In other words, the box graph is the dual graph of the box complex, and the colorings we are considering are in some sense “solid colorings.” + +When it eases understanding, we may use the terms box complex and box graph interchangeably. We also may use boxes and vertices interchangeably. + +The following proposition shows that, as far as the corresponding box graphs are concerned, we may as well restrict ourselves to box complexes where each of the vertices of the boxes has integer coordinates (and thus all boxes have integer lengths). + +**Proposition 1.** Let $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ be a box complex and let $G(\mathcal{B}) = (V, E)$ be its corresponding box graph. There exists a box complex $\{C_1, C_2, \dots, C_m\}$ where the vertices of each $C_i$ ($i = 1, 2, \dots, m$) have all integer coordinates, such that the box graph corresponding to complex $\{C_1, C_2, \dots, C_m\}$ is the same graph $G$. + +We will prove **Proposition 1** in Section 2. + +We ask the following natural question: + +**Question 1.** What is the minimum number of colors $k$ that are required so that every $n$-dimensional box graph has an admissible $k$-coloring? + +From Fig. 2(c), we can see that three colors may be necessary to color a 2-dimensional box graph. In fact, as we will prove in Section 2, three colors are also sufficient: + +**Proposition 2.** Any box graph in $n$-space has an admissible coloring with $n + 1$ colors. + +Our goal is to answer **Question 1** in dimension 3, which is still open. In the case where the "boxes" are zonotopes (as opposed to right-angled bricks), sometimes 4 colors are needed [4], and in the case where the "boxes" are now touching spheres, the chromatic number is between 5 and 13 [2]. Analogously, for simplicial complexes in $\mathbb{R}^n$, $n+1$ colors suffice [6]. We suspect that any 3-dimensional box graph is 3-colorable, and we can show that this is true for a few families of 3-dimensional box graphs. 
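Definition 2 is easy to operationalize for boxes with integer coordinates (which suffices by Proposition 1): two boxes of a complex are adjacent exactly when their intersection is $(n-1)$-dimensional. A small sketch (our own illustrative code, which assumes the input is a valid box complex, so that every nonempty intersection is already a face of both boxes):

```python
from itertools import combinations

def intersection_dim(box1, box2):
    """Dimension of the intersection of two axis-aligned boxes,
    each given as a list of (lo, hi) intervals; -1 if disjoint."""
    dim = 0
    for (lo1, hi1), (lo2, hi2) in zip(box1, box2):
        lo, hi = max(lo1, lo2), min(hi1, hi2)
        if lo > hi:
            return -1        # empty overlap on this axis: boxes are disjoint
        if lo < hi:
            dim += 1         # positive-length overlap contributes a dimension
    return dim

def box_graph_edges(boxes):
    """Edges of the box graph: pairs whose intersection is (n-1)-dimensional."""
    n = len(boxes[0])
    return {(i, j) for i, j in combinations(range(len(boxes)), 2)
            if intersection_dim(boxes[i], boxes[j]) == n - 1}

# three unit squares in a row, plus one meeting the middle square along an edge
boxes = [[(0, 1), (0, 1)], [(1, 2), (0, 1)], [(2, 3), (0, 1)], [(1, 2), (1, 2)]]
edges = box_graph_edges(boxes)
```

Here boxes 0 and 3 meet only at the corner $(1,1)$, a 0-dimensional face, so they are not adjacent, while each pair sharing a full edge is.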
The following are the main results of this paper: + +**Theorem 1.** Let $G$ be an $n$-dimensional box graph such that the lengths of all of the boxes in the corresponding box complex take on no more than two values from the set $\{1, 2, 3\}$. That is, all the side lengths of the boxes are 1 or 2, or all the side lengths are 1 or 3, or all the side lengths are 2 or 3. Then $G$ is 3-colorable. + +**Theorem 2.** Let $G$ be a 3-dimensional box graph that has no cycles on four vertices. Then $G$ is 3-colorable. + +The rest of this paper is organized as follows: in Section 2 we will state and prove some straightforward results on box graphs. We will prove **Theorem 1** in Section 3, and we will prove **Theorem 2** in Section 4. + +## **2. Straightforward results on box graphs** + +As promised, we will start with proofs of **Propositions 1** and **2**. +---PAGE_BREAK--- + +**Proof of Proposition 1.** Suppose {$B_1, B_2, \dots, B_m$} is a box complex in $\mathbb{R}^n$, so that each vertex of each box has $n$ coordinates. Let $x_0, x_1, \dots, x_k$ be the list of all of the different first coordinates of all of the vertices of the boxes in the box complex. Order them so that + +$$x_0 < x_1 < \cdots < x_k.$$ + +Now make a new box complex {$B_1^1, B_2^1, \dots, B_m^1$} such that the vertices are all the same except the first coordinates. Specifically, if the first coordinate of a vertex in $B_j$ is $x_i$, then the first coordinate of the corresponding vertex in $B_j^1$ is the integer $i$. Thus, the vertex $(x_i, y_2, y_3, \dots, y_n)$ of $B_j$ becomes the vertex $(i, y_2, y_3, \dots, y_n)$ of $B_j^1$. + +Note that each $B_i^1$ is still a box, and this does not change the intersection pattern of the boxes. That is, if $B_j \cap B_\ell$ is $d$-dimensional, then so is $B_j^1 \cap B_\ell^1$. (And if $B_j \cap B_\ell$ was empty, then so is $B_j^1 \cap B_\ell^1$.) + +We continue with this process for the 2nd, 3rd, ..., $n$th coordinates. 
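The coordinate-by-coordinate replacement used in this proof is a standard coordinate compression; a sketch under the assumption that each box is stored as a tuple of coordinate intervals (a representation of ours, not the paper's):

```python
def compress_complex(boxes):
    """Replace each coordinate value by its rank among the values used
    in that coordinate, as in the proof of Proposition 1.

    A box is a tuple of n intervals ((a1, b1), ..., (an, bn)); the output
    complex has the same intersection pattern but integer coordinates.
    """
    n = len(boxes[0])
    out = [list(B) for B in boxes]
    for coord in range(n):
        # ranks 0, 1, 2, ... of the distinct values in this coordinate
        values = sorted({x for B in boxes for x in B[coord]})
        rank = {x: i for i, x in enumerate(values)}
        for B in out:
            a, b = B[coord]
            B[coord] = (rank[a], rank[b])
    return [tuple(B) for B in out]
```

Because ranking preserves the order of the coordinate values, intersections keep their dimensions, which is exactly the invariant the proof relies on.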
Finally, we get a box complex $\{B_1^n, B_2^n, \dots, B_m^n\}$ with the same intersection pattern as $\{B_1, B_2, \dots, B_m\}$ but with all integer coordinates for the vertices. Thus, the box graph for the complex $\{B_1^n, B_2^n, \dots, B_m^n\}$ is the same as the box graph for the complex $\{B_1, B_2, \dots, B_m\}$. $\square$ + + In order to prove Proposition 2 we first give the definition of $k$-degenerate graphs, and show the well-known result that $k$-degenerate graphs are $(k+1)$-colorable [5]. + + **Definition 3.** A graph $G$ is *$k$-degenerate* if each of its induced subgraphs has a vertex of degree at most $k$. + + **Lemma 1.** Every $k$-degenerate graph is $(k+1)$-colorable. + + **Proof.** Let $G = (V, E)$ be a $k$-degenerate graph. We proceed by induction on $|V|$, the size of the vertex set. If $|V| = 1$ then certainly $G$ is $(k+1)$-colorable for any $k \ge 1$. Now, suppose that $|V| = m \ge 2$, and assume as the induction hypothesis that any $k$-degenerate graph on $m-1$ vertices is $(k+1)$-colorable. + + Since $G$ is $k$-degenerate, there exists a vertex $v \in V$ with $\deg(v) \le k$. Consider the graph $G-v$ on $m-1$ vertices, formed by removing vertex $v$ and all of its incident edges. This graph is $k$-degenerate, since each of its induced subgraphs is also an induced subgraph of $G$. Therefore, by the induction hypothesis we can color $G-v$ using $k+1$ colors. When $v$ and its edges are added back, at least one color remains available for $v$, since $v$ has at most $k$ neighbors and there are $k+1$ colors in total. Therefore, by induction, any $k$-degenerate graph is $(k+1)$-colorable. $\square$ + + We now prove Proposition 2 by showing that any box graph is $n$-degenerate. + + **Proof of Proposition 2.** Let $G = (V, E)$ be a box graph, so that each $v \in V$ is a box in the corresponding box complex. We will label each box in $V$ by its "right, forward, top" vertex.
More precisely, each box can be defined as + + $$\{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$ + + where $a_i < b_i$ for $i = 1, 2, \dots, n$. We then label this box with $(b_1, b_2, \dots, b_n)$. + + Now find a "right, forward, top" box in the graph. That is, find a vertex $u \in V$ with corresponding label $(u_1, u_2, \dots, u_n)$ such that for any other $v \in V$ with label $(v_1, v_2, \dots, v_n)$ and $(u, v) \in E$, we have + + $$u_1 \ge v_1, u_2 \ge v_2, \dots, u_n \ge v_n.$$ + + (Such a box is guaranteed to exist because $G$ is finite.) Note that, by our choice of $u$, every neighbor of $u$ shares one of the $n$ "lower" faces of $u$, and the face-to-face condition allows at most one neighbor per face; hence $u$ has at most $n$ neighbors. + + Since we began with an arbitrary box graph, every induced subgraph of $G$ must likewise contain a vertex of degree at most $n$. Therefore, any box graph corresponding to a box complex in $\mathbb{R}^n$ is $n$-degenerate, and by Lemma 1 is $(n+1)$-colorable. $\square$ + + We note that the above argument is the $n$-dimensional analogue of the "elbow" argument in [7]. + + We state the following result as a reminder to the reader: + + **Proposition 3.** Let $G = (V, E)$ be a graph. Then the following are equivalent: + + 1. The graph $G$ contains no odd cycle. + + 2. The graph $G$ is bipartite. + + 3. The graph $G$ is 2-colorable. + + **Proof.** Proposition 3 is a well-known introductory graph theory result. See Section I.2 of [3], for example. $\square$ + + The following proposition shows that if a box graph cannot be colored with just 2 colors, it must have some boxes with side lengths that are different from each other. + + **Proposition 4.** Suppose a box complex only contains boxes that are cubes; that is, boxes with all side lengths equal. Then the corresponding box graph is 2-colorable. + + **Proof.** Suppose a box complex contains only cubes, and let $G = (V, E)$ be the corresponding box graph. Without loss of generality, we may assume that $G$ is connected.
Since all of the boxes in the corresponding box complex are cubes, and since adjacent cubes share a full $(n-1)$-dimensional face and hence have equal side lengths, connectedness forces all of the cubes to have the same size; let the side length of the cubes be $k$. By the proof of Proposition 1, we can assume that $k \in \mathbb{N}$ and the coordinates of all the vertices of the boxes in the box complex are integer multiples of $k$. +---PAGE_BREAK--- + + Just as we did in the proof of Proposition 2, label each $v \in V$ with the "right, forward, top" vertex. Let $(v_1, v_2, \ldots, v_n)$ be the label for vertex $v$. Color vertex $v$ with color + + $$ \frac{1}{k} (v_1 + v_2 + \cdots + v_n) \pmod{2}. $$ + + Note that exactly two colors are used. If two vertices are adjacent, $(u, v) \in E$, then we know that their corresponding labels $(u_1, u_2, \ldots, u_n)$ and $(v_1, v_2, \ldots, v_n)$ must be the same in every coordinate except one, in which they differ by $k$. That is, there exists $i \in \{1, 2, \ldots, n\}$ such that + + $$ \begin{aligned} u_j &= v_j & \text{if } j \in \{1, 2, \ldots, n\} \text{ and } j \neq i \\ u_i &= v_i \pm k. \end{aligned} $$ + + Thus, if two vertices are adjacent then their colors must be different, so this is a valid 2-coloring of $G$. $\square$ + + In [4] it was proved that any box complex in $\mathbb{R}^3$ that is homeomorphic to a ball is 2-colorable. + + ### 3. Proof of Theorem 1 + + We shall prove Theorem 1 in parts via a few lemmas. Here is the first of our lemmas: + + **Lemma 2.** Suppose that each side length of each box in a box complex is a positive integer which is congruent to either 1 or 2 mod 3. Then the corresponding box graph is 3-colorable. + + **Proof.** Consider an $n$-dimensional box complex $\{B_1, B_2, \ldots, B_m\}$, and label each box again by its "right, forward, top" vertex coordinates, $(b_1, b_2, \ldots, b_n)$. Now, color each box by $(b_1 + b_2 + \cdots + b_n)$ mod 3. We claim that this is a valid coloring.
If two boxes $B_i$, $B_j$ are adjacent then their right, forward, top vertices will differ in exactly one coordinate. Let $(b_{i,1}, b_{i,2}, \ldots, b_{i,n})$ be the label for $B_i$ and $(b_{j,1}, b_{j,2}, \ldots, b_{j,n})$ the label for $B_j$. Then, without loss of generality, $b_{i,1} \neq b_{j,1}$ and $b_{i,k} = b_{j,k}$ for $k=2, 3, \ldots, n$. These two boxes have the same color if and only if $b_{i,1} - b_{j,1} \equiv 0 \pmod{3}$. However, $|b_{i,1} - b_{j,1}|$ is a side length of one of these boxes, which by assumption is not a multiple of 3. Therefore neighboring boxes cannot have the same color, so this 3-coloring is admissible. $\square$ + + The following corollary follows directly from Lemma 2: + + **Corollary 1.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 2. Then the corresponding box graph is 3-colorable. + + The next in our series of lemmas: + + **Lemma 3.** Suppose that each side length of each box in a box complex is an odd integer. Then the corresponding box graph is 2-colorable. + + **Proof.** We will prove this by showing that there can be no odd cycles in the graph (see Proposition 3). + + Assume we have a box complex $\mathcal{B} = \{B_1, \ldots, B_k\}$. Consider any cycle within the corresponding box graph, and label the vertices of this cycle by the "right, forward, top" corner of the corresponding box. Recall that the labels of adjacent boxes differ in exactly one coordinate, by an amount equal to a side length of one of the two boxes. Choose a direction of travel around the cycle, and label each edge with the signed difference between the labels of its endpoints: positive if we move along that edge in the positive coordinate direction, and negative if we move in the negative direction. Thus, for example, if we move from the vertex labeled (1, 1, ..., 1) to the vertex labeled (4, 1, ..., 1), the edge is labeled with 3, whereas if we move from (4, 1, ..., 1) to (1, 1, ..., 1), the edge is labeled with $-3$. + + We now claim that the sum of the labels along the cycle must be 0. This is because in each coordinate, any distance we travel in the positive direction must be traveled again in the negative direction in order to return to the starting vertex. + + Finally, we note that, by assumption, all of the side lengths are odd, so every edge label is an odd integer. A sum of odd integers can equal 0 only if there is an even number of summands, so there must be an even number of edges in the cycle. $\square$ + + The following corollary follows directly from Lemma 3: + + **Corollary 2.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 3. Then the corresponding box graph is 3-colorable. + + The proof of Theorem 1 when the boxes have side lengths 2 or 3, given in the remainder of this section, relies on placing a partial order on the box graph corresponding to a given box complex. The elements of the partially ordered set (poset) are the vertices of the box graph, i.e., the individual boxes that comprise the box complex. As before, we label the box $\{x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$ by its "right, forward, top" vertex coordinates, $(b_1, b_2, \ldots, b_n)$. The order relation for this poset is induced by the following cover relation: box $B_i$ with label $(b_1, b_2, \ldots, b_n)$ covers box $B_j$ with label +---PAGE_BREAK--- + + Fig. 3. All edges above the ones drawn do not change in length after $T$ is applied. + + $(c_1, c_2, \ldots, c_n)$ if and only if the two boxes are adjacent and $\sum_{k=1}^{n} b_k \ge \sum_{k=1}^{n} c_k$.
Since these adjacent boxes must share an $(n-1)$-dimensional face, their labels will differ in exactly one coordinate, by an amount equal to the side length of box $B_i$ in the direction orthogonal to the shared face $B_i \cap B_j$. + + We note further that the sum $r(B_i) = \sum_{k=1}^{n} b_k$ of the entries of the label of a given box is a rank function for this poset. We will use the rank function and the poset structure to describe valid colorings of the box graph. This technique will consider an initial drawing of the poset (and subsequent re-drawings) with all nodes at integer heights. We then refer to the *length* of an edge in the poset as the positive vertical distance between its endpoints. + + Here is the last of the lemmas that we will need for Theorem 1: + + **Lemma 4.** Suppose a box complex has boxes with side lengths only equal to 2 or 3. Then the corresponding box graph is 3-colorable. + + **Proof.** Consider now the case in which all side lengths of the boxes in a box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ are 2 or 3. We produce the associated poset $\mathcal{P}$ described above, and make an initial drawing of $\mathcal{P}$ with nodes having heights corresponding to their ranks. Note that this implies that if two boxes $B_i$ and $B_j$ which are adjacent in the box graph are drawn with heights $h_i$ and $h_j$ respectively, then $r(B_i) - r(B_j) = h_i - h_j$, and $h_i - h_j$ is either 2 or 3 if $h_i > h_j$. In other words, all lengths of the edges in the poset are either 2 or 3. Without loss of generality, we can make this drawing so that all rank-minimal vertices have height $h$-value of 0. We now describe how to redraw the poset $\mathcal{P}$ in such a way that all adjacencies and cover relations are preserved, but all edges have lengths equivalent to 1 or 2 mod 3. + + We now consider the lengths of edges in the poset, working our way in order of increasing height $h$ of the terminal endpoints.
Since the first nodes occur on the line $h=0$ and all edges have length 2 or 3, no edges terminate on $h=1$, and edges that terminate on $h=2$ have length 2, which is among the desired values. Edges terminating on $h=3$ or above may have length 2 or length 3. We perform the following transformation on the drawing of the poset. Let $h_i$ denote the height of vertex $B_i$ in the initial drawing of the poset. We apply transformation $T$ below to the drawing of the poset: + + $$ T(h_i) = \begin{cases} h_i & \text{if } h_i \le 2, \\ h_i + 2 & \text{if } h_i \ge 3. \end{cases} $$ + + Note that $T$ has no effect on the length of edges terminating at or below $h=2$, and no effect on the length of edges commencing at or above $h=3$. For edges that include the interval $[2, 3]$, two units are added to their length. In the new drawing of the poset, no edges will terminate on the lines $h=3$ or $h=4$. Edges terminating on $h=5$ were either originally of length 3 commencing from $h=0$ or of length 2 commencing at $h=1$. The former now have length 5, while the latter now have length 4. In either case, edges terminating on $h=5$ have lengths equivalent to 1 or 2 mod 3. A similar argument shows that edges in the revised drawing that terminate on $h=6$ or $h=7$ have length 2, 4, or 5. (See Fig. 3.) + + Any edges terminating on $h$-values of 8 or higher were not affected by the first stretch, and thus may have length 3. Continue the stretching/redrawing procedure as before, extending the interval $[7, 8]$ by two units and redrawing the poset. This procedure only changes the lengths of edges which include the interval $[7, 8]$; in particular, it does not change the lengths of any prior edges. Since our complex is finite, only finitely many re-drawings are needed to draw the poset with all edges having length equivalent to 1 or 2 mod 3. At that point, the nodes can be colored using the argument from Lemma 2.
□ + + We can now finally prove Theorem 1: + + **Proof of Theorem 1.** This is a direct consequence of Corollaries 1 and 2 and Lemma 4. □ +---PAGE_BREAK--- + + **Fig. 4.** This 2 × 2 pattern (a 4-cycle in the dual) is forbidden as part of a string complex. + + **Fig. 5.** An example of a string complex. + + ### 4. Proof of Theorem 2 + + First, a couple of definitions: + + **Definition 4.** A *string complex* is any box complex in $\mathbb{R}^3$ that does not contain the 2 × 2 pattern of boxes shown in Fig. 4. The dual of the forbidden pattern is a 4-cycle, which is the shortest cycle possible in a box graph. In other words, a string complex is a 3-dimensional box complex whose corresponding box graph has no 4-cycle (see Fig. 5). + + We use the term "string complex" because, without the 2 × 2 pattern in Fig. 4, the box complex is forced to have lots of "holes" and be "stringy." + + **Definition 5.** A 3-dimensional box complex $\{B_1, B_2, \dots, B_m\}$ is *reducible* to the 3-dimensional box complex $\{A_1, A_2, \dots, A_\ell\}$ ($\ell \le m$) if one can sequentially remove boxes of degree $\le 2$ from the complex $\{B_1, B_2, \dots, B_m\}$ in order to obtain the complex $\{A_1, A_2, \dots, A_\ell\}$. More specifically, there exists an ordering $B_1, B_2, \dots, B_m$ such that + + $$B_i = A_i \quad \text{for } i = 1, 2, \dots, \ell$$ + + and for $j = 0, 1, 2, \dots, m - \ell - 1$, the box $B_{m-j}$ has degree $\le 2$ in the box complex + + $$\{B_1, B_2, \dots, B_{m-j}\}.$$ + + A box complex is *irreducible* if every vertex is of degree $\ge 3$. + + Note that a complex may be reducible to a smaller complex which is itself irreducible. The following lemma is analogous to the tools we used in the proof of Proposition 2: + + **Lemma 5.** If a 3-dimensional box complex is reducible to the empty complex, then its corresponding box graph is 3-colorable. + + **Proof.** We proceed by induction on $m$, the number of boxes in the box complex. Certainly if $m=1$, the box graph is 3-colorable.
Suppose that $m \ge 2$, and assume that for any 3-dimensional box complex on $m-1$ boxes which is reducible to the empty complex, the corresponding box graph is 3-colorable. Suppose that the box complex $\{B_1, B_2, \dots, B_m\}$ is reducible to the empty complex. That is, there is an ordering such that for $i = 1, 2, \dots, m$, the box $B_i$ has degree $\le 2$ in the complex + + $$\{B_1, B_2, \dots, B_i\}.$$ + + Note that the box complex $\{B_1, B_2, \dots, B_{m-1}\}$ is also reducible to the empty complex and has $m-1$ boxes in it. Thus, by our inductive assumption, the corresponding graph is 3-colorable. Now, because $B_m$ has degree $\le 2$ in the box complex $\{B_1, B_2, \dots, B_m\}$, we can color $B_m$ with a color different from the colors of its at most two neighbors. This proves the lemma. $\square$ +---PAGE_BREAK--- + + **Fig. 6.** $b_0$ is the topmost, leftmost box in the top layer $T$. + + By Lemma 5, Theorem 2 is a direct corollary of the following theorem and its subsequent corollary: + + **Theorem 3.** Every string complex is reducible. + + **Proof.** Assume the contrary: let $\mathcal{S} = \{S_1, S_2, \dots, S_m\}$ be an irreducible string complex. We will show that irreducibility implies the complex must contain a 2 × 2 pattern of boxes, which contradicts the assumption that the complex is a string complex. + + Let $T_1, T_2, \dots, T_\ell$ be the top layer of boxes in $\mathcal{S}$; say the top faces lie in a plane parallel to the $xy$-plane, extreme in the $+z$ direction. We first claim that every box in $T_1, T_2, \dots, T_\ell$ must have degree $\ge 2$ within the complex $\mathcal{T} = \{T_1, T_2, \dots, T_\ell\}$. Suppose otherwise; that is, suppose there is a box $T_i$ with degree $\le 1$ within the box complex $\mathcal{T}$. Then $T_i$ can have degree at most 2 in the complex $\mathcal{S}$, by joining to a box beneath it. But we know that every box in $\mathcal{S}$ must have degree $\ge 3$, because the complex $\mathcal{S}$ was irreducible.
Thus, it is indeed true that each $T_i$, $i = 1, 2, \dots, \ell$ has degree $\ge 2$ in the complex $\mathcal{T}$. + +Now we look at an extreme corner box of $T_1, T_2, \dots, T_\ell$. Specifically, let $b_0$ be backmost (extreme in the +y direction), and among the topmost boxes of $\mathcal{T}$, leftmost (extreme in the -x direction). So $b_0$ is a type of “upper left corner”. Because it is extreme in two directions, two of its faces in $\mathcal{T}$ are exposed, so it must have exactly degree 2 in $\mathcal{T}$. Because we assumed $\mathcal{S}$ is irreducible, $b_0$ (and indeed every box of $\mathcal{S}$) must have degree $\ge 3$. So $b_0$ must be adjacent to a box $b'_0$ beneath it (beneath in the z-direction). See Fig. 6. + +Let $b_1$ and $b_2$ be the boxes adjacent to $b_0$ in $T$, with $b_1$ adjacent to $b_0$ in the x-direction as in the figure. Again, by our previous arguments, $b_1$ must have degree $\ge 2$ in $\mathcal{T}$. It is already adjacent to $b_0$ to its left, and it cannot be adjacent to a box above it, because it is topmost. So it must be adjacent to one or both of the boxes labeled $b_3$ and $b_4$ in the figure. + +However, $b_1$ cannot be adjacent to $b_3$, for then $\{b_0, b_1, b_2, b_3\}$ forms a 2 × 2 pattern, contradicting the assumption that $\mathcal{S}$ is a string complex. Therefore $b_1$ must be adjacent to $b_4$ in Fig. 6. Now $b_1$ has degree exactly 2 in $T$. Because it must have degree $\ge 3$ for $\mathcal{S}$ to be irreducible, it must be adjacent to box $b'_1$ underneath. But now $\{b_0, b_1, b'_0, b'_1\}$ forms a 2 × 2 pattern, again contradicting the assumption that $\mathcal{S}$ is a string complex. + +We have now exhausted all possibilities, which have led to contradictions. So the assumption that $\mathcal{S}$ is irreducible is false, and $\mathcal{S}$ must be reducible. ☐ + +**Corollary 3.** Every string complex can be reduced to the empty complex. + +**Proof.** Let $\mathcal{S}$ be a string complex. 
It cannot be irreducible by Theorem 3, and so it must have a box $b$ of degree $\le 2$. Let $\mathcal{S}_1 = \mathcal{S} \setminus b$ be the complex with $b$ removed. We claim that $\mathcal{S}_1$ is again a string complex. The reason is that the forbidden 2 × 2 pattern cannot be created by the removal of a box. Therefore, applying Theorem 3 again, $\mathcal{S}_1$ is reducible. Continuing in this manner, we can reduce $\mathcal{S}$ to the empty complex. ☐ + +**5. Conclusion** + +That box complexes in $\mathbb{R}^2$ sometimes need 3 colors is a straightforward observation, but whether any box complex in $\mathbb{R}^3$ might need 4 colors is an open question. Although it is natural to expect that the chromatic number might be $n+1$ for boxes in $\mathbb{R}^n$ as it is for simplices, we in fact have no example that requires more than 3 colors for any $n \ge 3$. + +**Acknowledgments** + +We thank the participants of the 2012 AMS Mathematics Research Institute for stimulating discussions, and we thank the referees for their insightful comments. The proof of Theorem 2 was developed in collaboration with Smith students Lily Du, Jessica Lord, Micaela Mendlow, Emily Merrill, Viktoria Pardey, Rawia Salih, and Stephanie Wang. The first, third and last authors were supported by an AMS Mathematics Research Communities grant. +---PAGE_BREAK--- + +References + +[1] K. Appel, W. Haken, Every planar map is four colorable, Bull. Amer. Math. Soc. 82 (5) (1976) 711-712. + +[2] Bhaskar Bagchi, Basudeb Datta, Higher-dimensional analogues of the map coloring problem, Amer. Math. Monthly 120 (8) (2013) 733–737. + +[3] Béla Bollobás, Modern Graph Theory, in: Graduate Texts in Mathematics, vol. 184, Springer-Verlag, New York, 1998. + +[4] Suzanne Gallagher, Joseph O'Rourke, Coloring objects built from bricks, in: Proc. 15th Canad. Conf. Comput. Geom., 2003, pp. 56–59. + +[5] Alexandr V. Kostochka, On almost (k - 1)-degenerate (k + 1)-chromatic graphs and hypergraphs, Discrete Math. 
313 (4) (2013) 366–374. + + [6] Joseph O'Rourke, A note on solid coloring of pure simplicial complexes, December 2010, arXiv:1012.4017 [cs.DM]. + + [7] Tom Sibley, Stan Wagon, Rhombic Penrose tilings can be 3-colored, Amer. Math. Monthly 107 (3) (2000) 251–253. + + [8] Alexander Soifer, Chromatic number of the plane & its relatives. I. The problem & its history, Geombinatorics 12 (3) (2003) 131–148. + +---PAGE_BREAK--- + + # Capacity of multiservice WCDMA Networks with variable GoS + + Nidhi Hegde and Eitan Altman + + INRIA 2004 route des Lucioles, B.P.93 06902 Sophia-Antipolis, France + Email: {Nidhi.Hegde, Eitan.Altman}@sophia.inria.fr + + Abstract— Traditional definitions of capacity of CDMA networks are either related to the number of calls they can handle (pole capacity) or to the arrival rate that guarantees that the rejection rate (or outage) is below a given fraction (Erlang capacity). We extend the latter definition to other quality of service (QoS) measures. We consider best-effort (BE) traffic sharing the network resources with real-time (RT) applications. BE applications can adapt their instantaneous transmission rate to the available one and thus need not be subject to admission control or outages. Their meaningful QoS measure is the average delay. The delay-aware capacity is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute both the blocking probability of the RT traffic, which has an adaptive Grade of Service (GoS), and the expected delay of the BE traffic for an uplink multicell WCDMA system. This yields the Erlang capacity for the former and the delay capacity for the latter. + + ## I. INTRODUCTION + + Third-generation mobile networks, such as the Universal Mobile Telecommunications System (UMTS), will provide a wide variety of services to users, including multimedia applications and interactive real-time applications as well as best-effort applications such as file transfer, Internet browsing, and electronic mail. These services have varied quality of service (QoS) requirements; real-time (RT) applications need some guaranteed minimum transmission rate as well as delay bounds, which requires reservation of system capacity. We assume that RT traffic is subject to Call Admission Control (CAC) in order to guarantee the minimum rates for accepted RT calls. This implies that RT traffic may suffer rejections, whose rate is then an important QoS measure for such applications. In contrast, best-effort (BE) applications can adapt their transmission rate to the network's available resources and are therefore not subject to CAC. The relevant QoS measure for BE traffic is then the expected sojourn time (or delay) of a call in the system (e.g. the expected time to download a file). + + We consider BE traffic sharing the network resources with RT applications. Our aim is to compute both the blocking (or rejection) probability of the RT traffic and the expected delay of the BE traffic for an uplink multicell WCDMA system. Although RT calls need a minimum guaranteed transmission rate, they are assumed to be able to adapt to network resources in a way similar to the BE traffic. For example, in the case of voice applications, UMTS will use the Adaptive Multi-Rate (AMR) codec, which offers eight different transmission rates for voice, varying between 4.75 kbps and 12.2 kbps, that can be changed dynamically every 20 msec. Although both RT
Although both RT + +and BE traffic have adaptive rates, we identify a key difference between the two: The *duration* of a RT call does not depend on the instantaneous assigned rate it gets (only the quality may change), whereas for BE calls, the *total volume transmitted* during the call does not depend on the assigned rate; the duration of BE calls therefore does depend on the dynamic rate assignment. We propose a probabilistic model that takes these features into account and enables to compute the performance measures of interest: we compute the blocking probabilities and the average throughput per RT calls, the expected average number of RT and BE calls in the system, and the expected delay of BE call. + +We extend the notion of capacity in order to describe the amount of traffic for which the system can offer reasonable QoS. Traditional definitions of capacity of networks are either related to the number of calls they can handle (pole capacity) or to the arrival rate that guarantees that the rejection rate (or outage) is below a given fraction (Erlang capacity, see [11]). We extend the latter definition to other QoS. The delay aware capacity, suitable in particular for the BE traffic, is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute it as a function of other parameters of the system (rate of arrival and characteristics of RT traffic, the CAC and downgrading policy applied to RT traffic). + +We briefly mention related work. In [10], an uplink CDMA with two classes is considered, the RT traffic is transmitted all the time, the non real time mobiles (NRT) are time-shared. A related idea has also been analyzed in [6]. The benefits of time sharing is studied and conditions for silencing some are obtained. The capacity of voice/data CDMA systems is also analyzed in [7] where both classes are modeled as VBR traffic. 
Adaptive features of transmission rates are not considered in the above references. In [1], the author considers the influence of a fixed (non-adaptive) bandwidth per BE call on the Erlang capacity of the system (which also includes RT calls), taking into account that a lower bandwidth implies longer call durations. A limiting capacity (as the fixed bandwidth vanishes) is identified and computed. Related work [2], [9] has also been done in wireline ATM networks (although without the power control aspects and without the downgrading features of wireless). + + The structure of this paper is as follows. The next section introduces the model and preliminaries. Section III computes the performance of RT and BE traffic in the case of a +---PAGE_BREAK--- + + single sector using a matrix-geometric approach. This is then extended in Section IV to the multisector multicell case using a fixed-point argument. In Section V we provide numerical examples, and we end with a concluding section. + + ## II. PRELIMINARIES + + We consider the uplink of a multi-service WCDMA system with $K$ service classes. Let $X_j$ be the number of ongoing calls of type $j$ in some given sector, and $\mathbf{X} = (X_1, \dots, X_K)$. In CDMA systems, in order for a signal to be received, the ratio of its received power to the sum of the background noise and interference must be greater than a given constant. For a given $\mathbf{X}$, this condition is as follows [5]: + + $$ \frac{P_j}{N + I_{\text{own}} + I_{\text{other}} - P_j} \triangleq \gamma_j \ge \tilde{\Delta}'_j, \quad j = 1, \dots, K, \quad (1) $$ + + where $N$ is the background noise, and $I_{\text{own}}$ and $I_{\text{other}}$ are the total power received from the mobiles within the considered sector and within the other sectors or cells, respectively.
Here $\gamma_j$ is the received signal-to-interference ratio (SIR), i.e. the ratio of the received power to the total noise and interference at the base station, and $\tilde{\Delta}'_j$ is the required SIR for a call of class $j$, given by $\tilde{\Delta}'_j = \frac{E_j R_j}{N_o W}$, where $E_j$ is the energy per transmitted bit of type $j$, $N_o$ is the thermal noise density, $W$ is the WCDMA modulation bandwidth, and $R_j$ is the transmission rate of the type $j$ call. + + The interference received from mobiles in the same sector is simply $I_{\text{own}} = \sum_{j=1}^K X_j P_j$. When $X_j$ is fixed for all $j=1, \dots, K$, we also make the standard assumption [5] that the other-cell interference is proportional to the own-cell interference, with proportionality constant $f$: + + $$ I_{\text{other}} = f I_{\text{own}}. \quad (2) $$ + + Note that the above assumes perfect power control. Due to inaccuracies in the closed-loop fast power control mechanism, mainly caused by shadow fading of the radio signal, $\gamma_j$ may not be equal to $\tilde{\Delta}'_j$ at all times. We now define $\gamma_j$ to be a random variable of the form $\gamma_j = 10^{\xi_j/10}$, where $\xi_j \sim N(\mu_\xi, \sigma_\xi)$ includes the shadow fading component and $\sigma_\xi$ is the standard deviation of shadow fading, with typical values between 0.3 and 2 dB [4], [11]. It follows that $\gamma_j$ has a lognormal distribution given by $f_{\gamma_j}(x_j) = \frac{h}{x_j \sigma_\xi \sqrt{2\pi}} \exp\left(-\frac{(h \ln(x_j) - \mu_\xi)^2}{2\sigma_\xi^2}\right)$, where $h = 10/\ln 10$. + + Since $\gamma_j$ is now a random variable, we can write the condition (1) in terms of $\tilde{\gamma}_j$, the average received SIR. We would now like to determine the required SIR $\tilde{\Delta}_j$ such that $\tilde{\gamma}_j = \tilde{\Delta}_j$, where $\tilde{\Delta}_j$ accounts for power control errors and replaces $\tilde{\Delta}'_j$ in (1). We determine $\tilde{\Delta}_j$ from the outage condition $\Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta$ [12].
The reliability, $\beta$, is typically set to 99%. We have: + +$$ \Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta = \int_{\tilde{\Delta}'_j}^{\infty} f_{\gamma_j}(x) dx = Q \left( \frac{h \ln \tilde{\Delta}'_j - \mu_{\xi}}{\sigma_{\xi}} \right) $$ + +where $Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2} dt.$ + +By inverting the above Q-function, we have: + +$$ \tilde{\Delta}'_j = 10^{\left(\frac{Q^{-1}(\beta)\sigma_\xi}{10} + \frac{\mu_\xi}{10}\right)} \quad (3) $$ + +Since $\gamma_j$ is a lognormal random variable, its expectation is given by $\tilde{\gamma}_j = \exp\left(\frac{\sigma_\xi^2}{2h^2} + \frac{\mu_\xi}{h}\right)$. Solving for $\mu_\xi$, we obtain: + +$$ \mu_\xi = h \ln \tilde{\gamma}_j - \frac{\sigma_\xi^2}{2h} \quad (4) $$ + +We use (3) and (4) to get: + +$$ \tilde{\Delta}'_j = \tilde{\gamma}_j 10^{\frac{Q^{-1}(\beta)\sigma_\xi}{10} - \frac{\sigma_\xi^2}{20h}} $$ + +We then have the SIR condition in (1) modified as follows: + +$$ \tilde{\gamma}_j \geq \tilde{\Delta}'_j \Gamma = \frac{E_j R_j}{N_o W} \Gamma \triangleq \tilde{\Delta}_j \quad (5) $$ + +where + +$$ \Gamma = 10^{\frac{\sigma_{\xi}^{2}}{20h} - \frac{Q^{-1}(\beta)\sigma_{\xi}}{10}}. $$ + +Note that $\Gamma$ is independent of the service class. The value of $\Gamma$ is a function of the standard deviation of the shadow fading of users, $\sigma_\xi$, whose value varies with user mobility. Differences in signal fading due only to user mobility are not considered in this paper. The above modified required SIR now includes a correction for imperfect power control. + +Revisiting (1), we notice that in order to serve a large number of ongoing calls, that is, to keep the $X_j$s high, we must keep the $P_j$s as low as possible.
We then solve for the minimum required received power $P_j$ satisfying (5), which is known to be the one that gives strict equality $\tilde{\gamma}_j = \tilde{\Delta}_j$ in (5): + +$$ P_j = \frac{N\Delta_j}{1 - (1+f)\sum_{k=1}^{K} X_k \Delta_k} \quad (6) $$ + +where $\Delta_j = \frac{\tilde{\Delta}_j}{1+\tilde{\Delta}_j}$ turns out to be the signal-to-total-power ratio, STPR (see [1, eq. 4]). + +Define the loading as: + +$$ \theta = \sum_{j=1}^{K} X_j \Delta_j(\mathbf{X}). \quad (7) $$ + +This definition reflects the fact that $\Delta_j$ is a function of the number of calls of each type in the system (since it depends on the transmission rate $R_j$, and $R_j$ will be determined as a function of the system state). In this paper we consider both real time (RT) and best-effort (BE) services that receive a variable rate. As explained in Section III, the rate received by RT calls, and thus $\Delta_{RT}$, depends on the number of RT calls. The rate received by BE calls depends on both $X_{RT}$ and $X_{BE}$. We maintain this dependence throughout the paper; however, for notational convenience we will sometimes drop the argument $(\mathbf{X})$. + +Now we may define the integer capacity of the cell as the set $X^*$ of vectors $\mathbf{X}$ such that the received powers of the mobiles stay finite, i.e. the denominator of (6) does not vanish [1]. In the equation for minimum received power shown in (6),
---PAGE_BREAK--- + +this implies the condition $\theta(1+f) < 1$. The system prevents the denominator from vanishing through Call Admission Control (CAC); more generally, it is desirable to be even more conservative and to impose a bound on the capacity, $\Theta_{\epsilon} = 1 - \epsilon$ where $\epsilon > 0$. Thus the CAC will ensure that $\theta \le \Theta_{\epsilon}/(1+f)$.
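The quantities in (3)–(7) are straightforward to evaluate numerically. The following Python sketch (function names and default values such as $N=1$, $f=0$ are ours, for illustration only) computes the power-control correction $\Gamma$, the STPR of a call, and the minimum received powers of (6):

```python
import math
from statistics import NormalDist

h = 10 / math.log(10)

def gamma_factor(sigma_xi, beta=0.99):
    """Correction factor Gamma of (5).  Q^{-1}(beta) is the inverse of the
    Gaussian tail function, i.e. the (1 - beta)-quantile of the standard
    normal, since Q(x) = 1 - Phi(x)."""
    q_inv = NormalDist().inv_cdf(1.0 - beta)
    return 10 ** (sigma_xi ** 2 / (20 * h) - q_inv * sigma_xi / 10)

def stpr(eb_no, rate, W, Gamma=1.0):
    """STPR Delta = D / (1 + D), with required SIR D = (E/N_o)(R/W)Gamma."""
    d = eb_no * rate / W * Gamma
    return d / (1 + d)

def min_powers(deltas, counts, N=1.0, f=0.0):
    """Minimum received powers P_j of (6); returns None when the loading (7)
    makes the denominator vanish (outside the integer capacity)."""
    theta = sum(x * d for x, d in zip(counts, deltas))  # loading (7)
    denom = 1 - (1 + f) * theta
    if denom <= 0:
        return None
    return [N * d / denom for d in deltas]
```

For example, with $\sigma_\xi = 1$ dB and $\beta = 0.99$, `gamma_factor` gives $\Gamma \approx 1.75$, i.e. roughly a 2.4 dB margin added to the required SIR.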
Later on we shall consider special combined policies for RT traffic that combine CAC with some rate adaptation, along with rate adaptation for BE traffic, which will result in a further restriction on the number of RT calls that the system can handle (also called, with some abuse of notation, the integer capacity of RT traffic). + +## III. SINGLE SECTOR IN ISOLATION + +Let us first consider a single sector, so that we may exclude interference from other sectors and other cells in the calculations, thereby setting $f = 0$ in this section. We consider a base station with uplink capacity such that + +$$ \theta \le \Theta_{\epsilon}. \quad (8) $$ + +Here we define *capacity* in terms of the sum of the $\Delta$'s, the STPRs, of all users. By the individual normalized bandwidth we denote the individual required STPR that corresponds to a particular rate. For example, a call that requires a rate of $y$ bps requires a normalized bandwidth of $\Delta = \frac{E/N_o}{W/y+E/N_o}$, where $E/N_o$ is the requirement specified for the given service type of the call. + +#### A. Real Time Calls + +We assume a single type of RT call capable of accepting a variable rate, with a requested transmission rate $R_{\text{RT}}^r$. From (5) and the definition of $\Delta_j$ that follows (6), we derive the required bandwidth $\Delta_{\text{RT}}^r$ that corresponds to rate $R_{\text{RT}}^r$: + +$$ \Delta_{\text{RT}}^{r} = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^{r} + E_{\text{RT}}/N_o}. $$ + +We now introduce the parameters of the call admission control for the RT traffic. All BE calls in the sector share equally the capacity remaining after RT calls have been allocated their required normalized bandwidth. In addition, we assume that some portion of the capacity is reserved for BE calls; thus the RT calls have a maximum capacity, denoted by $L_{\text{RT}}$. Let $L_{\text{BE}}$ denote the minimum portion of the total capacity available for BE calls.
We then have $L_{\text{BE}} = \Theta_{\epsilon} - L_{\text{RT}}$. We have the following condition for the capacity bound on RT calls: + +$$ X_{\text{RT}}\Delta_{\text{RT}} \le L_{\text{RT}} \quad (9) $$ + +where $\Delta_{\text{RT}}$ is the normalized bandwidth received by each RT call. Note that this value depends on the number of RT calls, and thus may vary. + +The integer capacity for RT calls, such that they all receive the requested rate $R_{\text{RT}}^r$ and bandwidth $\Delta_{\text{RT}}^r$, is then given by
$$ N_{\text{RT}} = \left\lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^r} \right\rfloor. $$ + +1) CAC and GoS control: In a strict call admission control scheme for RT calls, new RT call arrivals would be blocked and cleared when there are $N_{\text{RT}}$ RT calls in the sector. However, in UMTS, we can control the GoS by providing RT calls with a variable transmission rate [3]. In such a case, we may allow more than $N_{\text{RT}}$ RT calls, at the expense of reducing the transmission rate of all RT calls, thus keeping the total normalized bandwidth occupied by all RT calls within the limit. Let us then define a second threshold for admission of RT calls, $M_{\text{RT}} > N_{\text{RT}}$. Call admission control for RT calls is then as follows. As long as the number of RT calls is at most $N_{\text{RT}}$, all RT calls receive the requested normalized bandwidth $\Delta_{\text{RT}}^r$. When the number $j$ of RT calls is more than $N_{\text{RT}}$ but not more than $M_{\text{RT}}$, all RT calls receive an equal, reduced normalized bandwidth, denoted here by $\Delta_{\text{RT}}^j$, such that (9) is still satisfied. If there are $M_{\text{RT}}$ RT calls in the sector, new RT call arrivals are blocked and cleared. $M_{\text{RT}}$ may be chosen so that RT calls receive a minimum transmission rate of $R_{\text{RT}}^m$, with normalized bandwidth $\Delta_{\text{RT}}^m$, even in the worst case.
The integer capacity for RT calls is then $M_{\text{RT}} = \lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^m} \rfloor$, where $\Delta_{\text{RT}}^m = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^m+E_{\text{RT}}/N_o}$, as derived from (5). The bandwidth received by each RT call at some time $t$ is thus the following function of $X_{\text{RT}}(t)$: + +$$ \Delta_{\text{RT}}(X_{\text{RT}}(t)) = \begin{cases} \Delta_{\text{RT}}^{r} & 1 \le X_{\text{RT}}(t) \le N_{\text{RT}}; \\ L_{\text{RT}}/X_{\text{RT}}(t) & N_{\text{RT}} < X_{\text{RT}}(t) \le M_{\text{RT}}. \end{cases} \quad (10) $$ + +2) RT Traffic Model: We assume that RT calls arrive according to a Poisson process with rate $\lambda_{\text{RT}}$. The duration of an RT call is assumed to have an exponential distribution with mean $1/\mu_{\text{RT}}$, and is not affected by the allocated bandwidth. Let $X_1(t)$ and $X_2(t)$ represent the number of RT customers and BE customers, respectively, at time $t$ in the given sector. The number of RT calls in the system is not affected by the BE calls. Therefore, $X_1(t)$ follows a birth and death process, with birth rate $\lambda_{\text{RT}}$ and death rate $x\mu_{\text{RT}}$ in state $x$. The steady-state probabilities $\pi_{\text{RT}}(x)$ of the number of RT calls $x$ in the system are given by: + +$$ \mathrm{Pr}[X_{\mathrm{RT}} = x] = \lim_{t \to \infty} \mathrm{Pr}[X_{\mathrm{RT}}(t) = x] = \frac{\rho_{\mathrm{RT}}^x / x!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (11) $$ + +where $\rho_{\mathrm{RT}} = \lambda_{\mathrm{RT}}/\mu_{\mathrm{RT}}$. For RT calls, we are interested in the call blocking probability and the average throughput.
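The bandwidth allocation (10) and the stationary distribution (11) translate directly into code. A minimal Python sketch (names and the small tolerance added to the floors, which guards against floating-point rounding, are ours):

```python
import math

def rt_bandwidth(x, d_req, d_min, L_rt):
    """Normalized bandwidth per RT call as a function of the number x of
    RT calls, following (10)."""
    n_rt = math.floor(L_rt / d_req + 1e-9)  # integer capacity at requested rate
    m_rt = math.floor(L_rt / d_min + 1e-9)  # admission limit at minimum rate
    if x <= n_rt:
        return d_req
    if x <= m_rt:
        return L_rt / x                     # equal share of the RT budget
    raise ValueError("x exceeds the RT admission limit")

def rt_stationary(rho_rt, m_rt):
    """Truncated-Poisson stationary distribution (11) of the RT birth-death
    process, returned as a list [pi(0), ..., pi(m_rt)]."""
    w = [rho_rt ** i / math.factorial(i) for i in range(m_rt + 1)]
    z = sum(w)
    return [wi / z for wi in w]
```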
The call blocking probability is given by: + +$$ P_B^{\mathrm{RT}} = \pi_{\mathrm{RT}}(M_{\mathrm{RT}}) = \frac{\rho_{\mathrm{RT}}^{M_{\mathrm{RT}}}/M_{\mathrm{RT}}!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (12) $$ + +We define $r(x)$ to be the transmission rate received by RT calls when there are $x$ RT calls in the sector, as follows: + +$$ r(X_{\text{RT}}) = \frac{\Delta_{\text{RT}}(X_{\text{RT}}) W}{(1 - \Delta_{\text{RT}}(X_{\text{RT}})) E_{\text{RT}}/N_o} $$ + +Since the transmission rate of RT calls is affected by the number of RT calls, we would like our definition of expected throughput to include a measure of the number of RT calls in the sector. We define the expected throughput per call as
---PAGE_BREAK--- + +the ratio of the expected global throughput to the expected number of RT calls in the sector, as follows:

$$
\mathbb{E}[r(X_{\mathrm{RT}})] = \frac{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x] x r(x)}{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x] x} \quad (13)
$$

#### B. Best-Effort Calls

We define $C(x)$ to be the capacity available to BE calls when there are $x$ RT calls, as follows:

$$
C(x) = \begin{cases} \Theta_{\epsilon} - x \Delta_{\text{RT}}^r , & x \le N_{\text{RT}}; \\ L_{\text{BE}} , & N_{\text{RT}} < x \le M_{\text{RT}}. \end{cases}
$$

All BE calls in the sector share equally the available bandwidth. We can then model BE service by a processor sharing (PS) discipline with a random service capacity. We study two performance metrics for BE calls: the average sojourn time of a BE call for given values of RT and BE load, and the maximum BE arrival rate such that the average delay is always bounded by a given constant.

Best-effort calls arrive according to a Poisson process with rate $\lambda_{\text{BE}}$. The required workloads of BE calls, i.e. file sizes, are i.i.d. exponentially distributed with mean $1/\mu_{\text{BE}}$.
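The RT blocking probability (12) and the expected per-call throughput (13) defined above can be evaluated directly; a sketch (the rate vector `rates[x]` would come from the formula for $r(x)$ above, and all names are illustrative):

```python
import math

def rt_blocking(rho_rt, m_rt):
    """Blocking probability (12): the stationary mass at M_RT."""
    w = [rho_rt ** i / math.factorial(i) for i in range(m_rt + 1)]
    return w[m_rt] / sum(w)

def rt_expected_throughput(rho_rt, rates):
    """Expected per-call throughput (13); rates[x] is r(x) for x RT calls
    (rates[0] is unused, since the sums in (13) start at x = 1)."""
    m_rt = len(rates) - 1
    w = [rho_rt ** i / math.factorial(i) for i in range(m_rt + 1)]
    z = sum(w)
    num = sum(w[x] / z * x * rates[x] for x in range(1, m_rt + 1))
    den = sum(w[x] / z * x for x in range(1, m_rt + 1))
    return num / den
```

As a sanity check, when all $r(x)$ are equal the expected per-call throughput (13) reduces to that common rate.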
The departure rate of BE calls is given by $\nu(X_{\text{RT}}) = \mu_{\text{BE}}R_{\text{BE}}(X_{\text{RT}})$, where $R_{\text{BE}}(X_{\text{RT}})$ is the total BE rate corresponding to the available BE capacity $C(X_{\text{RT}})$, as follows:

$$
R_{\text{BE}}(X_{\text{RT}}) = \frac{C(X_{\text{RT}})W}{(1 - C(X_{\text{RT}})) E_{\text{BE}}/N_o}.
$$

We assume no call admission control for BE calls. The process $(X_2(t), X_1(t))$ is an irreducible Markov chain. It is ergodic if and only if the average service capacity available to BE calls is greater than the BE load (as in [2]):

$$
\mu_{\text{BE}} \mathbb{E} R_{\text{BE}}(X_{\text{RT}}) > \lambda_{\text{BE}}. \tag{14}
$$

Specifically, the process $(X_2(t), X_1(t))$ is a homogeneous quasi birth and death process (QBD) with generator $Q$. The stationary distribution of this system, $\pi$, is calculated from $\pi Q = 0$, with the normalization condition $\pi e = 1$, where $e$ is a vector of ones of proper dimension. $\pi$ represents the steady-state probability of the two-dimensional process lexicographically: we partition $\pi$ as $[\pi(0), \pi(1), ...]$ with the vector $\pi(i)$ for level $i$, where the levels correspond to the number of BE calls in the system. We may further partition each level into the number of RT calls, $\pi(i) = [\pi(i, 0), \pi(i, 1), ..., \pi(i, M_{RT})]$, for $i \ge 0$.

The generator $Q$ has the form:

$$
Q = \begin{bmatrix}
B & A_0 & 0 & 0 & \cdots \\
A_2 & A_1 & A_0 & 0 & \cdots \\
0 & A_2 & A_1 & A_0 & \cdots \\
0 & 0 & \ddots & \ddots & \ddots
\end{bmatrix} \quad (15)
$$

where the matrices $B$, $A_0$, $A_1$, and $A_2$ are square matrices of order $M_{\text{RT}} + 1$. $A_0$ corresponds to a BE connection arrival, given by $A_0 = \text{diag}(\lambda_{\text{BE}})$. $A_2$ corresponds to a departure of a BE call. The departure rate for BE calls is $\nu(X_{\text{RT}})$.
Thus $A_2 = \text{diag}(\nu(i);\ 0 \le i \le M_{\text{RT}})$. $A_1$ corresponds to the arrival and departure processes of the RT calls. $A_1$ is tridiagonal, as follows:

$$
\begin{align*}
A_1[i, i+1] &= \lambda_{RT} \\
A_1[i, i-1] &= i\mu_{RT} \\
A_1[i, i] &= -\lambda_{RT} - i\mu_{RT} - \lambda_{BE} - \nu(i)
\end{align*}
$$

(in the boundary state $i = M_{\text{RT}}$ the arrival term $\lambda_{RT}$ is absent, so that the rows of $Q$ sum to zero). We also have $B = A_1 + A_2$.

The steady-state equations can be written as:

$$
0 = \pi(0)B + \pi(1)A_2 \quad (16)
$$

$$
0 = \pi(i-1)A_0 + \pi(i)A_1 + \pi(i+1)A_2, \quad i \ge 1 \quad (17)
$$

We follow the matrix-geometric solution to this QBD [8]. Assuming stability, as guaranteed by (14), the steady-state solution $\pi$ exists and is given by:

$$
\pi(i) = \pi(0)\mathbf{R}^i \qquad (18)
$$

where the matrix $\mathbf{R}$ is the minimal non-negative solution to the equation:

$$
A_0 + \mathbf{R} A_1 + \mathbf{R}^2 A_2 = 0 \quad (19)
$$

In order to solve for $\mathbf{R}$, we find it efficient to write $A_1 = T - S$, where $S$ is a diagonal matrix and $T$ has a zero diagonal. The diagonal matrix $S$ is positive and invertible, and we may write (19) as $\mathbf{R} = (A_0 + \mathbf{R}T + \mathbf{R}^2 A_2)S^{-1}$. This equation can then be solved by successive iterations starting with $\mathbf{R} = 0$, a zero matrix.

Once the matrix $\mathbf{R}$ is known, we may find $\pi(0)$ using the boundary condition (16) and the normalization $\pi e = 1$, which by (18) is equivalent to $\pi(0)(I - \mathbf{R})^{-1}e = 1$. The marginal distribution of the number of RT calls can easily be obtained by using (11). The marginal probability of the number of BE calls is

$$
\mathrm{Pr}[X_{\mathrm{BE}} = i] = \sum_{j=0}^{M_{\mathrm{RT}}} \pi(i,j) = \pi(i)e = \pi(0)\mathbf{R}^i e.
$$

One way to compute the above is by finding the $M_{RT} + 1$ eigenvalues and corresponding eigenvectors of the matrix $\mathbf{R}$. All $M_{RT} + 1$ eigenvalues of the matrix $\mathbf{R}$ are distinct [9], and therefore $\mathbf{R}$ is diagonalizable.
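The construction of the blocks and the successive substitution for $\mathbf{R}$ can be written out as follows. This is a numpy-based sketch with illustrative parameters; it also computes $\pi(0)$ from (16) and $\mathbb{E}[X_{BE}] = \pi(0)\mathbf{R}(I-\mathbf{R})^{-2}e$ directly, which is an equivalent alternative to the eigen-decomposition route:

```python
import numpy as np

def qbd_blocks(lam_be, lam_rt, mu_rt, nu):
    """A0, A1, A2 of the QBD generator (15); nu[i] is the BE departure
    rate nu(i) when there are i RT calls, i = 0..M_RT."""
    m = len(nu)
    A0 = lam_be * np.eye(m)            # BE arrival
    A2 = np.diag(nu)                   # BE departure
    A1 = np.zeros((m, m))              # RT transitions
    for i in range(m):
        if i + 1 < m:
            A1[i, i + 1] = lam_rt      # RT arrival (blocked in state M_RT)
        if i > 0:
            A1[i, i - 1] = i * mu_rt   # RT departure
        # diagonal chosen so that every row of Q sums to zero
        A1[i, i] = -(lam_be + nu[i] + A1[i].sum())
    return A0, A1, A2

def solve_R(A0, A1, A2, tol=1e-12, it_max=200_000):
    """Minimal nonnegative solution of (19) via the successive substitution
    R <- (A0 + R T + R^2 A2) S^{-1}, with A1 = T - S, starting from R = 0."""
    S_inv = np.diag(-1.0 / np.diag(A1))        # S = positive diagonal part
    T = A1 - np.diag(np.diag(A1))              # zero-diagonal part
    R = np.zeros_like(A1)
    for _ in range(it_max):
        R_new = (A0 + R @ T + R @ R @ A2) @ S_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration did not converge")

def boundary_pi0(R, A1, A2):
    """pi(0) from pi(0)(B + R A2) = 0, B = A1 + A2, plus the normalization
    pi(0)(I - R)^{-1} e = 1 (solved as a stacked least-squares system)."""
    m = R.shape[0]
    M = (A1 + A2) + R @ A2
    c = np.linalg.solve(np.eye(m) - R, np.ones(m))
    lhs = np.vstack([M.T, c])
    rhs = np.append(np.zeros(m), 1.0)
    return np.linalg.lstsq(lhs, rhs, rcond=None)[0]

def mean_be_calls(R, pi0):
    """E[X_BE] = pi(0) R (I - R)^{-2} e."""
    m = R.shape[0]
    inv = np.linalg.inv(np.eye(m) - R)
    return float(pi0 @ R @ inv @ inv @ np.ones(m))

# Illustrative stable example with M_RT = 2 (all values hypothetical):
A0, A1, A2 = qbd_blocks(lam_be=1.0, lam_rt=0.5, mu_rt=1.0, nu=[2.0, 1.5, 1.2])
R = solve_R(A0, A1, A2)
pi0 = boundary_pi0(R, A1, A2)
ex_be = mean_be_calls(R, pi0)
t_be = ex_be / 1.0        # Little's law: T_BE = E[X_BE] / lambda_BE
```

A useful consistency check on such a solver is that the RT marginal $\pi(0)(I-\mathbf{R})^{-1}$ must reproduce the truncated-Poisson distribution (11), since the RT process is autonomous.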
Define $D$ to be a diagonal matrix containing the eigenvalues of $\mathbf{R}$, $r_i$, on the diagonal, and $V$ to be a matrix containing the corresponding eigenvectors, $v_i$, as columns. We then have:

$$
\mathrm{Pr}[X_{\mathrm{BE}} = i] = \pi(0)\mathbf{R}^i e = \pi(0)V D^i V^{-1}e = \sum_{k=0}^{M_{\mathrm{RT}}} a_k r_k^i
$$

where $a_k = (\pi(0)v_k)(e'_k V^{-1}e)$ and $e'_k$ is a zero vector of proper dimension with the $k$th element equal to one. The expectation of $X_{BE}$ is as follows:

$$
\mathbb{E}[X_{\text{BE}}] = \sum_{k=0}^{M_{\text{RT}}} a_k \frac{r_k}{(1-r_k)^2} \quad (20)
$$

We can now use Little's Law to calculate the average sojourn time of a BE session, $T_{BE} = \mathbb{E}[X_{BE}]/\lambda_{BE}$. Having obtained the expected delay of BE traffic in terms of the
---PAGE_BREAK--- + +system parameters, one can now obtain the delay-aware capacity of BE traffic, i.e. the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. + +## IV. EXTENSION TO MULTIPLE SECTORS + +In this section we provide an analysis for the multi-sector multi-cell case, by including an approximation for the other-sector interference, $I_{\text{other}}$. Above, in (2), we made the assumption that $I_{\text{other}}$ is proportional to $I_{\text{own}}$ with constant $f$. Such a definition of other-sector interference and the subsequent derivation of the minimum required received power in (6) hold for a static network with a fixed number of mobiles. However, in our dynamic model of stochastic arrivals and holding times, such a definition may not hold at all times. We therefore approximate the instantaneous interference $I_{\text{other}}$ by its average $\mathbb{E}[I_{\text{other}}]$. We modify (2) to $I_{\text{other}} = f\,\mathbb{E}[I_{\text{own}}]$, which in normalized terms amounts to replacing the term $f\theta$ by $f \sum_{j=1}^{K} \mathbb{E}[X_j \Delta_j(\mathbf{X})]$.
The minimum required received power in (6) is now as follows:

$$P_j = \frac{N \Delta_j}{1 - \sum_{k=1}^{K} X_k \Delta_k - f \sum_{k=1}^{K} \mathbb{E}[X_k \Delta_k(\mathbf{X})]}$$

Let $G$ denote the expected other-sector (and cell) interference, $G = f \sum_{j=1}^{K} \mathbb{E}[X_j \Delta_j(\mathbf{X})]$. The equation for $P_j$ above then implies the condition $\theta \le 1-G$. This condition is equivalent to (8) with $\Theta_G = 1-G$ replacing $\Theta_\epsilon$. + +The expected interference due to RT calls is calculated as follows:

$$f\mathbb{E}[X_{RT}\Delta_{RT}(X_{RT})] = f \sum_{i=0}^{M_{RT}} \pi_{RT}(i)\, i\, \Delta_{RT}(i)$$

where we use (11) for $\pi_{RT}(i)$. For BE calls, we need not calculate the steady state distribution $\pi$. Since BE calls use all of the remaining capacity, the sum of the STPRs of the BE calls, given that there is at least one BE call, is simply the available BE capacity, $C(X_{RT})$. The expected interference due to BE calls is given by:

$$f\mathbb{E}[X_{BE}\Delta_{BE}(\mathbf{X})] = f(1-\pi(0)e)\sum_{i=0}^{M_{RT}} \pi_{RT}(i)C(i)$$

where $\pi(0)e$ is the probability that there are no BE calls in the sector, and can be calculated using only (16) and the normalization condition $\pi e = 1$. For each fixed value of $G$, say $g$, we can obtain the probabilities $\pi_{RT}$ and $\pi(0)$ using $\Theta_g$ instead of $\Theta_\epsilon$. We denote these values by $\pi_{RT}^g$ and $\pi^g(0)$, respectively, and the expectation operator corresponding to these probabilities by $\mathbb{E}^g$. Define $F(g) = f \sum_{j=1}^{K} \mathbb{E}^g[X_j \Delta_j(\mathbf{X})]$. $G$ then is the solution of the fixed point equation:

$$g = F(g) \quad (21)$$

We can now set the BE threshold as $L_{\text{BE}}^g = \Theta_g - L_{\text{RT}}$. Under such a definition, for a given $L_{\text{RT}}$, $F(g)$ can be shown to be continuous in $g$. $F$ also maps a compact interval into itself, and thus by the Brouwer Fixed Point Theorem a solution exists.
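Numerically, when $F$ is continuous and nonincreasing (an assumption hedged here, argued for this model in the text), the fixed point of (21) can be located by bisection on $h(g) = F(g) - g$, since $h$ is then strictly decreasing. A sketch with a toy stand-in for $F$ (in practice each evaluation of $F(g)$ would recompute $\pi_{RT}^g$ and $\pi^g(0)$ under the bound $\Theta_g$):

```python
def solve_fixed_point(F, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection on h(g) = F(g) - g.  For continuous nonincreasing F with
    F(lo) >= lo and F(hi) <= hi, h is strictly decreasing, so the unique
    solution of g = F(g) is bracketed by [lo, hi]."""
    assert F(lo) >= lo and F(hi) <= hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for F(g) = f * sum_j E^g[X_j Delta_j(X)]: nonincreasing in g,
# since a tighter capacity bound Theta_g = 1 - g admits fewer calls.
g_star = solve_fixed_point(lambda g: 0.3 * (1.0 - 0.5 * g))
```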
$F(g)$ can be shown to be nonincreasing in $g$, implying uniqueness of the solution to (21). + +## V. NUMERICAL RESULTS + +In this section we perform numerical experiments to evaluate the performance of RT and BE calls. The rate requested by the RT calls is 12.2 kbps (the maximum rate for AMR speech service in UMTS [3]). For the results shown here we have assumed a minimum acceptable rate of 7.95 kbps, which is one of the eight possible rates for the AMR speech class. We assume that the set of rates acceptable to RT calls is continuous. We assume no minimum rate for BE calls. The average file size of a BE call is assumed to be 20 kBytes. We assume $E_{\text{RT}}/N_o = 4.1\,\text{dB}$, $E_{\text{BE}}/N_o = 3.1\,\text{dB}$ [3], a chip rate $W = 3.84\,\text{Mcps}$ and $\Theta_\epsilon = 1-10^{-5}$. We define the load in terms of the total RT rate available, $R_T$. The total RT rate is in turn defined as the product of the minimum RT rate and the integer capacity for RT calls if there were no BE threshold, $R_T = \left\lfloor\frac{\Theta_\epsilon}{\Delta_{RT}^m}\right\rfloor R_{RT}^m$. The normalized load for RT calls is defined by $\tilde{\rho}_{RT} = \frac{\lambda_{RT} R_{RT}^r}{\mu_{RT} R_T}$, and the BE normalized load is $\tilde{\rho}_{BE} = \frac{\lambda_{BE}}{\mu_{BE} R_T}$. + +We consider the heavy traffic regime, where $\tilde{\rho}_{RT} = 0.5$ and $\tilde{\rho}_{BE} = 0.55$. We keep the normalized loads constant and vary the holding time of the RT calls. We evaluate the performance metrics of interest as a function of the BE reserved capacity, $L_{\text{BE}}$. + +Figure 1 shows the change in RT call blocking probability, computed using (12), as the BE threshold $L_{\text{BE}}$ is varied from 0 to $\Theta_\epsilon$. As expected, as $L_{\text{BE}}$ is increased, there is less capacity available for RT calls, and their call blocking probability increases. We may observe the tradeoff between the service qualities of BE and RT calls in Figures 2 and 3.
These figures show the expected RT throughput and the expected BE sojourn time, respectively. In Figure 2 we see that the expected RT throughput, computed using (13), is close to the requested rate of 12.2 kbps up to a BE threshold of approximately $L_{\text{BE}} = 0.35$. As $L_{\text{BE}}$ is increased further, the expected RT throughput gradually drops, always remaining above the minimum rate of 7.95 kbps. + +Fig. 1. RT Call Blocking for heavy traffic + +Figures 3 and 4 show the sensitivity of BE service quality not only to the BE threshold, but also to the RT call duration. In Figure 3 the expected BE sojourn time, computed using (20) and Little's Law, decreases as $L_{\text{BE}}$ is increased.
---PAGE_BREAK--- + +Fig. 2. Expected RT Throughput + +For small values of $L_{BE}$ we see that the expected BE sojourn time varies greatly with increasing $L_{BE}$ when the duration of RT calls is large (smaller values of $\mu_{RT}$). The duration of the RT calls determines the time scale of the evolution of the number of RT calls in the system, and thus of the capacity available to the BE calls. When the mean duration of RT calls is small, the number of RT calls evolves much faster relative to the BE calls, and thus we would expect the BE calls to obtain a capacity that is fairly constant. When the mean duration of RT calls is large, the changes in capacity received by BE calls might cause the BE queue to build up for long periods during which there are many ongoing RT calls, thus resulting in higher average sojourn times. For related results for non-variable RT GoS, see [2] and [9]. We observe from the figure that this effect can be diminished by increasing the BE threshold. An increase in $L_{BE}$ means that the capacity reserved for BE calls is substantial compared to the capacity remaining after RT calls are served, an effect similar to having a constant capacity. + +Fig. 3.
Expected BE Sojourn Time + +The delay-aware capacity of BE calls for a fixed RT load is shown in Figure 4. Here, we find the maximum BE arrival rate such that $T_{BE} \le c$, where $c$ is a constant, set to 0.25 in this figure. As expected, the maximum BE arrival rate increases as $L_{BE}$ increases, allowing a larger portion of the total capacity for BE calls. We note again the sensitivity to mean RT call duration at smaller values of $L_{BE}$, where the delay capacity approximately doubles when $\mu_{RT}$ is changed from 10 to 0.001. + +Fig. 4. BE Delay Aware Capacity + +## VI. CONCLUSION + +We have modelled resource sharing of BE applications with RT applications in WCDMA networks. Both types of traffic have the flexibility to adapt to the available bandwidth, but unlike BE traffic, RT traffic requires strict minimum bounds on the throughput. We studied the performance of both BE and RT traffic and examined the impact of reserving some portion of the bandwidth for the BE applications. We introduced a novel capacity definition related to the delay of BE traffic and showed how to compute it. + +## REFERENCES + +[1] Eitan Altman. Capacity of multi-service CDMA cellular networks with best-effort applications. In *Proceedings of ACM MOBICOM*, September 2002. + +[2] Eitan Altman, Damien Artiges, and Karim Traore. On the integration of best-effort and guaranteed performance services. In *European Transactions on Telecommunications, Special Issue on Architectures, Protocols and Quality of Service for the Internet of the Future*, 2, February-March 1999. + +[3] Harri Holma and Antti Toskala, editors. WCDMA for UMTS, Radio Access For Third Generation Mobile Communications. John Wiley & Sons, Ltd., 2001. + +[4] Insoo Koo, JeeHwan Ahn, Jeong-A Lee, and Kiseon Kim. Analysis of Erlang capacity for the multimedia DS-CDMA systems. *IEICE Transactions on Fundamentals*, E82-A(5):849–55, May 1999. + +[5] Jaana Laiho and Achim Wacker.
Radio network planning process and methods for WCDMA. *Annales des Télécommunications*, 56(5-6):317–31, 2001. + +[6] R. Leelahakriengkrai and R. Agrawal. Scheduling in multimedia CDMA wireless networks. Technical Report ECE-99-3, ECE Dept., University of Wisconsin-Madison, July 1999. + +[7] N. Mandayam, J. Holtzman, and S. Barberis. Performance and capacity of a voice/data CDMA system with variable bit rate sources. In *Special Issue on Insights into Mobile Multimedia Communications*. Academic Press Inc., January 1997. + +[8] M. F. Neuts. Matrix-geometric solutions in stochastic models: an algorithmic approach. The Johns Hopkins University Press, 1981. + +[9] R. Núñez Queija and O.J. Boxma. Analysis of a multi-server queueing model of ABR. *J. Appl. Math. Stoch. Anal.*, 11(3), 1998. + +[10] S. Ramakrishna and Jack M. Holtzman. A scheme for throughput maximization in a dual-class CDMA system. *IEEE Journal on Selected Areas in Communications*, 16:830–44, 1998. + +[11] Audrey M. Viterbi and Andrew J. Viterbi. Erlang capacity of a power controlled CDMA system. *IEEE Journal on Selected Areas in Communications*, 11(6):892–900, August 1993. + +[12] Qiang Wu, Wei-Ling Wu, and Jiong-Pan Zhou. Effects of slow fading SIR errors on CDMA capacity. In *Proceedings of IEEE VTC*, pages 2215–17, 1997. \ No newline at end of file diff --git a/samples/texts_merged/4364106.md new file mode 100644 index 0000000000000000000000000000000000000000..e09df8b6d2832b4ca90aaadad565449ddd97965c --- /dev/null +++ b/samples/texts_merged/4364106.md @@ -0,0 +1,764 @@ + +---PAGE_BREAK--- + +# ASYMPTOTIC BEHAVIOR OF COUPLED INCLUSIONS WITH VARIABLE EXPONENTS + +PETER E. KLOEDEN* + +Mathematisches Institut, Universität Tübingen +D-72076 Tübingen, Germany + +JACSON SIMSEN + +Instituto de Matemática e Computação, Universidade Federal de Itajubá
Av. BPS n.
1303, Bairro Pinheirinho, 37500-903, Itajubá - MG - Brazil + +PETRA WITTBOLD + +Fakultät für Mathematik, Universität Duisburg-Essen
Thea-Leymann-Str. 9, 45127 Essen, Germany + +*(Communicated by Alain Miranville)* + +**ABSTRACT.** This work concerns the study of the asymptotic behavior of the solutions of a nonautonomous coupled inclusion system with variable exponents. We prove the existence of a pullback attractor and that the system of inclusions is asymptotically autonomous. + +**1. Introduction.** Nonlinear reaction-diffusion equations have been studied extensively in recent years, and special attention has been given to coupled reaction-diffusion equations from various fields of applied science arising in epidemics, biochemistry and engineering [18]. Reaction-diffusion systems arise naturally in chemistry, where the most common application describes the change in space and time of the concentration of one or more chemical substances. One interest in chemical kinetics is the construction of mathematical models that can describe the characteristics of a chemical reaction. Mathematical models for electrorheological fluids were considered in [19, 20, 21], and variable exponents appear in the diffusion term (see also [7, 9]). Reaction-diffusion systems can be perturbed by discontinuous nonlinear terms, which leads to the study of differential inclusions rather than differential equations, for example evolution differential inclusion systems with positively sublinear upper semicontinuous multivalued reaction terms *F* and *G* (see [6]). + +2000 Mathematics Subject Classification. Primary: 35B40, 35B41, 35K57; Secondary: 35K55, 35K92. + +**Key words and phrases.** Pullback attractor, reaction-diffusion coupled systems, variable exponents, asymptotically autonomous problems. + +This work was initiated when the second author was supported with a CNPq scholarship - process 202645/2014-2 (Brazil). The first author was supported by Chinese NSF grant 11571125.
The second author was partially supported by the Brazilian research agency FAPEMIG process PPM 00329-16. + +* Corresponding author. +---PAGE_BREAK--- + +This work concerns the coupled system of inclusions:

$$
(S) \quad \left\{
\begin{array}{ll}
\dfrac{\partial u_1}{\partial t} - \operatorname{div}(D_1(t, \cdot)|\nabla u_1|^{p(\cdot)-2}\nabla u_1) + |u_1|^{p(\cdot)-2}u_1 \in F(u_1, u_2) & t > \tau \\
\\
\dfrac{\partial u_2}{\partial t} - \operatorname{div}(D_2(t, \cdot)|\nabla u_2|^{q(\cdot)-2}\nabla u_2) + |u_2|^{q(\cdot)-2}u_2 \in G(u_1, u_2) & t > \tau \\
\\
\dfrac{\partial u_1}{\partial n}(t,x) = \dfrac{\partial u_2}{\partial n}(t,x) = 0 & \text{on } \partial\Omega, \\
\\
(u_1(\tau), u_2(\tau)) = (u_{0,1}, u_{0,2}) \text{ in } L^2(\Omega) \times L^2(\Omega), &
\end{array}
\right.
$$

on a bounded domain $\Omega \subset \mathbb{R}^n$, $n \ge 1$, with smooth boundary, where $F$ and $G$ are bounded, upper semicontinuous and positively sublinear multivalued maps and the exponents $p(\cdot), q(\cdot) \in C(\bar{\Omega})$ satisfy

$$
p^+ := \max_{x \in \bar{\Omega}} p(x) > p^- := \min_{x \in \bar{\Omega}} p(x) > 2, \quad q^+ := \max_{x \in \bar{\Omega}} q(x) > q^- := \min_{x \in \bar{\Omega}} q(x) > 2.
$$

In addition, the diffusion coefficients $D_1, D_2$ are assumed to satisfy:

**Assumption D.** $D_1, D_2 : [\tau, T] \times \Omega \to \mathbb{R}$ are functions in $L^\infty([\tau, T] \times \Omega)$ satisfying:
(i) There is a positive constant $\beta$ such that $0 < \beta \le D_i(t, x)$ for almost all $(t, x) \in [\tau, T] \times \Omega$, $i = 1, 2$.
(ii) $D_i(t, x) \ge D_i(s, x)$ for a.a. $x \in \Omega$ and all $t \le s$ in $[\tau, T]$, $i = 1, 2$.

In this work we extend the results in [15] for a single inclusion to the case of a coupled inclusion system. We will prove that the strict generalized process (see Definition 2.7 in Section 2) defined by (S) possesses a pullback attractor.
Moreover, we prove that the system (S) is in fact asymptotically autonomous. The proof makes use of ideas and results from several recent works [15, 22, 23, 27] of the authors, which are applied here to a new problem to yield interesting new results. In contrast to [13, 14, 15], where an equation and a single inclusion of this type were considered, the coupled system cannot be treated in the same way as the single case: the principal additional technical difficulty is to adapt the results to two inclusions, and in particular to prove dissipativity. + +The paper is organized as follows. First, in Section 2 we provide some definitions and results on the existence of global solutions and on generalized processes. In Section 3 we prove the existence of the pullback attractor for the system (S). In Section 4 we briefly discuss forward attraction, and in the last section we prove that the system (S) is asymptotically autonomous. + +**2. Preliminaries, existence of global solutions and generalized processes.** + +Consider now the system (S) in the following abstract form

$$
(S2) \quad \left\{
\begin{array}{ll}
\dfrac{du}{dt}(t) + A(t)u(t) \in F(u(t), v(t)) & t > \tau \\
\\
\dfrac{dv}{dt}(t) + B(t)v(t) \in G(u(t), v(t)) & t > \tau \\
(u(\tau), v(\tau)) = (u_0, v_0) \in H \times H,
\end{array}
\right.
$$

where $F$ and $G$ are bounded, upper semicontinuous and positively sublinear multivalued maps (see Definitions 2.4, 2.3 and 2.5, respectively) and, for each $t > \tau$, $A(t)$ and $B(t)$ are univalued maximal monotone operators of subdifferential type in a real separable Hilbert space $H$.
Specifically, $A(t) = \partial\varphi^t$ and $B(t) = \partial\psi^t$ for
---PAGE_BREAK--- + +nonnegative mappings $\varphi^t$, $\psi^t$ with $\partial\varphi^t(0) = \partial\psi^t(0) = 0$, $\forall t \in \mathbb{R}$, and the mappings $\varphi^t$, $\psi^t$ satisfy: + +**Assumption A.** Let $T > \tau$ be fixed (below, $\phi^t$ stands for either of the mappings $\varphi^t$, $\psi^t$).

(A.1) There is a set $Z \subset (\tau, T]$ of zero measure such that $\phi^t$ is a lower semicontinuous proper convex function from $H$ into $(-\infty, \infty]$ with a nonempty effective domain for each $t \in [\tau, T] \setminus Z$.

(A.2) For any positive integer $r$ there exist a constant $K_r > 0$, an absolutely continuous function $g_r : [\tau, T] \to \mathbb{R}$ with $g'_r \in L^\beta(\tau, T)$ and a function of bounded variation $h_r : [\tau, T] \to \mathbb{R}$ such that if $t \in [\tau, T] \setminus Z$, $w \in D(\phi^t)$ with $|w| \le r$ and $s \in [t, T] \setminus Z$, then there exists an element $\tilde{w} \in D(\phi^s)$ satisfying

$$
\begin{align*}
|\tilde{w} - w| &\le |g_r(s) - g_r(t)|(\phi^t(w) + K_r)^{\alpha}, \\
\phi^s(\tilde{w}) &\le \phi^t(w) + |h_r(s) - h_r(t)|(\phi^t(w) + K_r),
\end{align*}
$$

where $\alpha$ is some fixed constant with $0 \le \alpha \le 1$ and

$$
\beta := \begin{cases} 2 & \text{if } 0 \le \alpha \le \frac{1}{2}, \\ \frac{1}{1-\alpha} & \text{if } \frac{1}{2} \le \alpha \le 1 \end{cases} .
$$

Let us first review some concepts and results from the literature, which will be useful in the sequel. We refer the reader to [2, 3, 29] for more details about multivalued analysis theory. + +**2.1. Setvalued mappings.** Let $X$ be a real Banach space and $M$ a Lebesgue measurable subset of $\mathbb{R}^q$, $q \ge 1$. + +**Definition 2.1.** The map $G : M \to P(X)$ is called measurable if for each closed subset $C$ in $X$ the set

$$
G^{-1}(C) = \{y \in M;\ G(y) \cap C \neq \emptyset\}
$$

is Lebesgue measurable.
If $G$ is a univalued map, the above definition is equivalent to the usual definition of a measurable function.

**Definition 2.2.** By a selection of $E: M \to P(X)$ we mean a function $f: M \to X$ such that $f(y) \in E(y)$ for a.e. $y \in M$, and we denote by $\mathrm{Sel}\,E$ the set

$$
\mathrm{Sel}\,E \doteq \{ f : M \to X;\ f \text{ is a measurable selection of } E \}.
$$

**Definition 2.3.** Let $U$ be a topological space. A mapping $G : U \to P(X)$ is called upper semicontinuous [weakly upper semicontinuous] at $u \in U$ if

(i) $G(u)$ is nonempty, bounded, closed and convex;

(ii) for each open subset [open set in the weak topology] $D$ of $X$ satisfying $G(u) \subset D$, there exists a neighborhood $V$ of $u$ such that $G(v) \subset D$ for each $v \in V$.

If $G$ is upper semicontinuous [weakly upper semicontinuous] at each $u \in U$, then it is called upper semicontinuous [weakly upper semicontinuous] on $U$.

**Definition 2.4.** $F, G: H \times H \rightarrow P(H)$ are said to be bounded if, whenever $B_1, B_2 \subset H$ are bounded, the sets $F(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} F(u,v)$ and $G(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} G(u,v)$ are bounded in $H$.

In order to obtain global solutions we impose the following conditions on the terms $F$ and $G$.

**Definition 2.5 ([24]).** The pair $(F,G)$ of maps $F, G: H \times H \to P(H)$, which take bounded subsets of $H \times H$ into bounded subsets of $H$, is called positively sublinear if there exist $a > 0$, $b > 0$, $c > 0$ and $m_0 > 0$ such that for each $(u,v) \in H \times H$ with $\|u\| > m_0$ or $\|v\| > m_0$ for which either there exists $f_0 \in F(u,v)$ satisfying $\langle u, f_0 \rangle > 0$ or there exists $g_0 \in G(u,v)$ with $\langle v, g_0 \rangle > 0$, both

$$ \|f\| \le a\|u\| + b\|v\| + c \quad \text{and} \quad \|g\| \le a\|u\| + b\|v\| + c $$

hold for each $f \in F(u,v)$ and each $g \in G(u,v)$.

**2.2. Generalized processes.** In order to study the asymptotic behavior of the solutions of the system (S) we will work with a multivalued process defined by a generalized process. We review these concepts, which were considered in [22, 23] and can be used in the study of infinite dimensional dynamical systems.

**Definition 2.6.** Let $(X, \rho)$ be a complete metric space. A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ on $X$ is a family of sets $\mathcal{G}(\tau)$ of maps $\varphi : [\tau, \infty) \to X$ satisfying the conditions:

(C1) For each $\tau \in \mathbb{R}$ and $z \in X$ there exists at least one $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$;

(C2) If $\varphi \in \mathcal{G}(\tau)$ and $s \ge 0$, then $\varphi^{+s} \in \mathcal{G}(\tau + s)$, where $\varphi^{+s} := \varphi|_{[\tau+s,\infty)}$;

(C3) If $\{\varphi_j\}_{j \in \mathbb{N}} \subset \mathcal{G}(\tau)$ and $\varphi_j(\tau) \to z$, then there exist a subsequence $\{\varphi_\mu\}_{\mu \in \mathbb{N}}$ of $\{\varphi_j\}_{j \in \mathbb{N}}$ and $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$ such that $\varphi_\mu(t) \to \varphi(t)$ for each $t \ge \tau$.

**Definition 2.7.** A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ which satisfies the condition

(C4) (Concatenation) If $\varphi, \psi \in \mathcal{G}$ with $\varphi \in \mathcal{G}(\tau)$, $\psi \in \mathcal{G}(r)$ and $\varphi(s) = \psi(s)$ for some $s \ge r \ge \tau$, then $\theta \in \mathcal{G}(\tau)$, where $\theta(t) := \begin{cases} \varphi(t), & t \in [\tau, s], \\ \psi(t), & t \in (s, \infty), \end{cases}$

is called an exact (or strict) generalized process.
A multivalued process $\{U_G(t, \tau)\}_{t \ge \tau}$ defined by a generalized process $\mathcal{G}$ is a family of multivalued operators $U_G(t, \tau) : P(X) \to P(X)$, $-\infty < \tau \le t < +\infty$, such that for each $\tau \in \mathbb{R}$

$$ U_G(t, \tau)E = \{\varphi(t);\ \varphi \in \mathcal{G}(\tau) \text{ with } \varphi(\tau) \in E\}, \quad t \geq \tau. $$

**Theorem 2.8 ([22, 23]).** Let $\mathcal{G}$ be an exact generalized process. If $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is a multivalued process defined by $\mathcal{G}$, then $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is an exact multivalued process on $P(X)$, i.e.,

1. $U_{\mathcal{G}}(t, t) = Id_{P(X)}$,

2. $U_{\mathcal{G}}(t, \tau) = U_{\mathcal{G}}(t, s)U_{\mathcal{G}}(s, \tau)$ for all $-\infty < \tau \le s \le t < +\infty$.

A family of sets $K = \{K(t) \subset X : t \in \mathbb{R}\}$ will be called a nonautonomous set. The family $K$ is closed (compact, bounded) if $K(t)$ is closed (compact, bounded) for all $t \in \mathbb{R}$. The $\omega$-limit set $\omega(t, E)$ consists of the pullback limits of all converging sequences $\{\xi_n\}_{n \in \mathbb{N}}$ with $\xi_n \in U_{\mathcal{G}}(t, s_n)E$ and $s_n \to -\infty$. Let $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ be a family of subsets of $X$. We have the following concepts of invariance:

• $\mathcal{A}$ is positively invariant if $U_G(t, \tau)\mathcal{A}(\tau) \subset \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$;

• $\mathcal{A}$ is negatively invariant if $\mathcal{A}(t) \subset U_G(t, \tau)\mathcal{A}(\tau)$ for all $-\infty < \tau \le t < \infty$;

• $\mathcal{A}$ is invariant if $U_G(t, \tau)\mathcal{A}(\tau) = \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$.

**Definition 2.9.** Let $t \in \mathbb{R}$.

1.
A set $\mathcal{A}(t) \subset X$ pullback attracts a set $B \subset X$ at time $t$ if
$$ \mathrm{dist}(U_{\mathcal{G}}(t, s)B, \mathcal{A}(t)) \to 0 \quad \text{as } s \to -\infty. $$

2. A family $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback attracts bounded sets of $X$ if $\mathcal{A}(\tau) \subset X$ pullback attracts all bounded subsets at $\tau$, for each $\tau \in \mathbb{R}$. In this case, we say that the nonautonomous set $\mathcal{A}$ is pullback attracting.

3. A set $\mathcal{A}(t) \subset X$ pullback absorbs bounded subsets of $X$ at time $t$ if, for each bounded set $B$ in $X$, there exists $T = T(t, B) \le t$ such that $U_G(t, \tau)B \subset \mathcal{A}(t)$ for all $\tau \le T$.

4. A family $\{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback absorbs bounded subsets of $X$ if, for each $t \in \mathbb{R}$, $\mathcal{A}(t)$ pullback absorbs bounded sets at time $t$.

**2.3. Strong solutions.** Consider the following initial value problem:

$$ (P_t) \quad \left\{ \begin{aligned} \frac{du}{dt}(t) + A(t)u(t) &\ni f(t), & t > \tau, \\ u(\tau) &= u_0, \end{aligned} \right. $$

where, for each $t > \tau$, $A(t)$ is maximal monotone in a Hilbert space $H$, $f \in L^1(\tau, T; H)$ and $u_0 \in H$. Moreover, suppose $\mathcal{D}(A(t)) = \mathcal{D}(A(\tau))$ for all $t, \tau \in \mathbb{R}$ and $\overline{\mathcal{D}(A(t))} = H$ for all $t \in \mathbb{R}$.

**Definition 2.10.** A function $u : [\tau, T] \to H$ is called a strong solution of $(P_t)$ on $[\tau, T]$ if

(i) $u \in C([\tau, T]; H)$;

(ii) $u$ is absolutely continuous on any compact subset of $(\tau, T)$;

(iii) $u(t) \in D(A(t))$ for a.e. $t \in [\tau, T]$, $u(\tau) = u_0$ and $u$ satisfies the inclusion in $(P_t)$ for a.e. $t \in [\tau, T]$.

**Definition 2.11.** A strong solution of (S2) is a pair $(u, v)$ with $u, v \in C([\tau, T]; H)$ for which there exist $f, g \in L^1(\tau, T; H)$ with $f(t) \in F(u(t), v(t))$, $g(t) \in G(u(t), v(t))$ a.e.
in $(\tau, T)$, such that $(u, v)$ is a strong solution (see Definition 2.10) on $(\tau, T)$ of the system $(P_1)$ below:

$$ (P_1) \quad \left\{ \begin{aligned} \frac{du}{dt} + A(t)u &= f, \\ \frac{dv}{dt} + B(t)v &= g, \\ u(\tau) = u_0, \ v(\tau) &= v_0. \end{aligned} \right. $$

**Theorem 2.12 ([27]).** Let $A = \{A(t)\}_{t>\tau}$ and $B = \{B(t)\}_{t>\tau}$ be families of univalued operators $A(t) = \partial\varphi^t$, $B(t) = \partial\psi^t$ with $\varphi^t$, $\psi^t$ nonnegative maps satisfying **Assumption A** and $\partial\varphi^t(0) = \partial\psi^t(0) = 0$. Suppose also that each of $A$ and $B$ generates a compact evolution process, and let $F, G: H \times H \to P(H)$ be upper semicontinuous and bounded multivalued maps. Then, given a bounded subset $B_0 \subset H \times H$, there exists $T_0 > 0$ such that for each $(u_0, v_0) \in B_0$ there exists at least one strong solution $(u, v)$ of (S2) defined on $[\tau, T_0]$. If, in addition, the pair $(F, G)$ is positively sublinear, then for any given $T > \tau$ the same conclusion holds with $T_0 = T$.

Let $D(u_{\tau}, v_{\tau})$ be the set of solutions of (S2) with initial data $(u_{\tau}, v_{\tau})$ and define $G(\tau) := \bigcup_{(u_{\tau}, v_{\tau}) \in H \times H} D(u_{\tau}, v_{\tau})$. Consider $\mathbb{G} := \{G(\tau)\}_{\tau \in \mathbb{R}}$.

**Theorem 2.13 ([27]).** Under the conditions of Theorem 2.12, $\mathbb{G}$ is an exact generalized process.

Let $\Omega \subset \mathbb{R}^n$, $n \ge 1$, be a bounded smooth domain and write $H := L^2(\Omega)$ and $Y := W^{1,p(\cdot)}(\Omega)$ with $p^- > 2$. Then $Y \subset H \subset Y^*$ with continuous and dense embeddings. We refer the reader to [7, 8] and references therein for properties of the Lebesgue and Sobolev spaces with variable exponents.
In particular, with

$$L^{p(\cdot)}(\Omega) := \left\{u : \Omega \to \mathbb{R} : u \text{ is measurable, } \int_{\Omega} |u(x)|^{p(x)}\, dx < \infty\right\}$$

and $L_+^\infty(\Omega) := \{q \in L^\infty(\Omega) : \operatorname{ess\,inf} q \ge 1\}$, define, for $u \in L^{p(\cdot)}(\Omega)$ and $p \in L_+^\infty(\Omega)$,

$$\rho(u) := \int_{\Omega} |u(x)|^{p(x)}\, dx, \quad \|u\|_{L^{p(\cdot)}(\Omega)} := \inf \left\{ \lambda > 0 : \rho\left(\frac{u}{\lambda}\right) \le 1 \right\}.$$

Consider the operator $A(t)$ on $Y$ that associates with each $u \in Y$ the element $A(t)u \in Y^*$, $A(t)u : Y \to \mathbb{R}$, given by

$$A(t)u(v) := \int_{\Omega} D_1(t,x) |\nabla u(x)|^{p(x)-2} \nabla u(x) \cdot \nabla v(x)\, dx + \int_{\Omega} |u(x)|^{p(x)-2} u(x)v(x)\, dx.$$

The authors proved in [13] that:

• For each $t \in [\tau, T]$ the operator $A(t): Y \to Y^*$, with domain $Y = W^{1,p(\cdot)}(\Omega)$, is maximal monotone and $A(t)(Y) = Y^*$.

• The realization of the operator $A(t)$ in $H = L^2(\Omega)$, i.e.,

$$A_H(t)u = -\operatorname{div}(D_1(t)|\nabla u(t)|^{p(x)-2}\nabla u(t)) + |u(t)|^{p(x)-2}u(t),$$

is maximal monotone in $H$ for each $t \in [\tau, T]$.

• The operator $A_H(t)$ is the subdifferential $\partial\varphi_{p(\cdot)}^t$ of the convex, proper and lower semicontinuous map $\varphi_{p(\cdot)}^t: L^2(\Omega) \to \mathbb{R} \cup \{+\infty\}$ given by

$$\varphi_{p(\cdot)}^t(u) = \begin{cases} \displaystyle\int_{\Omega} \frac{D_1(t,x)}{p(x)} |\nabla u|^{p(x)}\, dx + \int_{\Omega} \frac{1}{p(x)} |u|^{p(x)}\, dx & \text{if } u \in Y, \\ +\infty & \text{otherwise.} \end{cases} \quad (1)$$

Using the following elementary assertion we obtain estimates on the operator by considering only two cases.

**Proposition 1 ([1]).** Let $\lambda, \mu$ be arbitrary nonnegative numbers.
For all positive $\alpha, \theta$ with $\alpha \ge \theta$,

$$\lambda^{\alpha} + \mu^{\theta} \geq \frac{1}{2^{\alpha}} \begin{cases} (\lambda + \mu)^{\alpha} & \text{if } \lambda + \mu < 1, \\ (\lambda + \mu)^{\theta} & \text{if } \lambda + \mu \geq 1. \end{cases}$$

Then it is easy to show that for every $u \in Y$

$$\langle A(t)u, u\rangle_{Y^*,Y} \geq \frac{\min\{\beta, 1\}}{2^{p^+}} \begin{cases} \|u\|_Y^{p^+} & \text{if } \|u\|_Y < 1, \\ \|u\|_Y^{p^-} & \text{if } \|u\|_Y \geq 1. \end{cases} \quad (2)$$

From Example 4.4 in the last section of [27] we can apply Theorems 2.12 and 2.13 to $A(t)u = -\operatorname{div}(D_1(t, \cdot)|\nabla u|^{p(\cdot)-2}\nabla u) + |u|^{p(\cdot)-2}u$ and $B(t)v = -\operatorname{div}(D_2(t, \cdot)|\nabla v|^{q(\cdot)-2}\nabla v) + |v|^{q(\cdot)-2}v$ and conclude that system (S) has global solutions and defines an exact generalized process $\mathbb{G}$.

**3. Existence of the pullback attractor.** First, we provide estimates on the solutions in the spaces $H \times H$ and $Y \times Y$.

**Lemma 3.1.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist a positive number $r_0$ and a constant $T_0$, neither depending on the initial data, such that

$$\|(u_1(t), u_2(t))\|_{H \times H} \le r_0, \quad \forall t \ge T_0 + \tau.$$

*Proof.* Let $\varphi = (u_1, u_2) \in \mathbb{G}$ be a solution of (S). Then there exists a pair $(f,g) \in \operatorname{Sel} F(u_1, u_2) \times \operatorname{Sel} G(u_1, u_2)$ with $f, g \in L^1(\tau, T; H)$ for each $T > \tau$ such that $u_1$, $u_2$ satisfy the problem

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega, \\
\\
u_1(\tau,x) = u_{1,0}(x), \quad u_2(\tau,x) = u_{2,0}(x) & \text{in } \Omega.
\end{array}
\right.
\qquad (3)
$$

Let $\alpha := 4(|\Omega| + 1)^2$ and $\sigma := \frac{\min\{\beta, 1\}}{2^{\max\{p^+, q^+\}}}$. Multiplying the first equation in (3) by $u_1$, the second by $u_2$ and using (2), we obtain

$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \leq \begin{cases} -\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_H^{p^+} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_1, \\ -\frac{\sigma}{\alpha^{p^-}} \|u_1(t)\|_H^{p^-} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_2, \end{cases} \quad (4)
$$

where

$$I_1 := \{t \in (\tau, T) : \|u_1(t)\|_Y < 1\}, \quad I_2 := \{t \in (\tau, T) : \|u_1(t)\|_Y \ge 1\},$$

and

$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \leq
\begin{cases}
-\frac{\sigma}{\alpha^{q^+}} \|u_2(t)\|_H^{q^+} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_1, \\
-\frac{\sigma}{\alpha^{q^-}} \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_2,
\end{cases}
$$

where

$$
\tilde{I}_1 := \{t \in (\tau, T) : \|u_2(t)\|_Y < 1\}, \quad \tilde{I}_2 := \{t \in (\tau, T) : \|u_2(t)\|_Y \ge 1\}.
$$

Now, define $r := \frac{p^+}{p^-} > 1$ and let $r'$ be such that $\frac{1}{r} + \frac{1}{r'} = 1$. Then, by Young's inequality,

$$
-\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^+} \le r \left( -\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^-} + \frac{\sigma}{\alpha^{p^+} r'} \right). \quad (5)
$$

Using (5) in (4) we obtain

$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_{H}^{2} \leq -C_{2} \|u_{1}(t)\|_{H}^{p^{-}} + \langle f(t), u_{1}(t) \rangle_{H} + C_{1} \quad \forall t \in I := (\tau, T), \quad (6)
$$

where $C_1 := \frac{L\sigma}{p^{-}\alpha^{p^{-}}}$ and $C_2 := \frac{\min\{1,\beta\}}{(2\alpha)^L}$ with $L := \max\{p^+, q^+\}$.
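The Young's inequality step (5) can be spelled out; the following short verification is not in the original text, but uses only quantities already defined:

```latex
% Young's inequality xy \le x^{r}/r + y^{r'}/r' with
% x = \|u_1(t)\|_H^{p^-} and y = 1; since p^- r = p^+,
\|u_1(t)\|_H^{p^-} \le \frac{1}{r}\,\|u_1(t)\|_H^{p^+} + \frac{1}{r'}.
% Multiplying by r\sigma/\alpha^{p^+} and rearranging gives (5):
-\frac{\sigma}{\alpha^{p^+}}\,\|u_1(t)\|_H^{p^+}
  \le r\left(-\frac{\sigma}{\alpha^{p^+}}\,\|u_1(t)\|_H^{p^-}
  + \frac{\sigma}{\alpha^{p^+}\, r'}\right).
```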
In an analogous way, taking $\tilde{r} := \frac{q^+}{q^-} > 1$ and $\tilde{r}'$ such that $\frac{1}{\tilde{r}} + \frac{1}{\tilde{r}'} = 1$, we have

$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1, \quad \forall t \in I,
$$

where $\tilde{C}_1 := \frac{L\sigma}{q^{-}\alpha^{q^{-}}}$ and $\tilde{C}_2 = C_2 = \frac{\min\{1,\beta\}}{(2\alpha)^L}$.

We can suppose, without loss of generality, that $p^{-} \ge q^{-}$. If $p^{-} = q^{-}$, we obtain an expression similar to (6) with $q^{-}$ in place of $p^{-}$. If $p^{-} > q^{-}$, taking $\theta := \frac{p^{-}}{q^{-}} > 1$, $\theta'$ such that $\frac{1}{\theta'} + \frac{1}{\theta} = 1$ and $\epsilon > 0$, we have, by Young's inequality,

$$
\|u_1(t)\|_H^{q^-} = \frac{1}{\epsilon}\, \epsilon\, \|u_1(t)\|_H^{q^-} \le \frac{1}{\theta' \epsilon^{\theta'}} + \frac{\epsilon^{\theta}}{\theta} \|u_1(t)\|_H^{p^-},
$$

and then

$$
-C_2 \|u_1(t)\|_H^{p^-} \le \frac{\theta}{\epsilon^{\theta}} \left[ \frac{C_2}{\theta' \epsilon^{\theta'}} - C_2 \|u_1(t)\|_H^{q^-} \right].
$$

Thus we obtain

$$
\left\{
\begin{array}{l}
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \le -\frac{C_2 \theta}{\epsilon^{\theta}} \|u_1(t)\|_H^{q^-} + \langle f(t), u_1(t) \rangle_H + C_1 + \frac{\theta C_2}{\theta' \epsilon^{\theta} \epsilon^{\theta'}}, \\
\\
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1.
\end{array}
\right.
\quad (7)
$$

We estimate $\langle f(t), u_1(t) \rangle_H$ and $\langle g(t), u_2(t) \rangle_H$ using the assumption that $(F, G)$ is positively sublinear (see Definition 2.5) and Young's inequality.
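A sketch of how this estimate goes (not part of the original proof; it assumes the regime $\|u_1(t)\|_H, \|u_2(t)\|_H > m_0$ of Definition 2.5, and that $q^- > 2$ as for $p^-$; outside that regime the inner products are bounded by a constant):

```latex
% Positive sublinearity bounds the selection f, hence
\langle f(t), u_1(t) \rangle_H
  \le \big(a\|u_1(t)\|_H + b\|u_2(t)\|_H + c\big)\,\|u_1(t)\|_H ,
% and each product on the right is absorbed via Young's inequality:
% for any \delta > 0 there is a constant C(\delta) > 0 with
a\|u_1(t)\|_H^2 \le \delta\,\|u_1(t)\|_H^{q^-} + C(\delta),
% so, for \delta small, the negative terms -C\|u_i(t)\|_H^{q^-} in (7)
% dominate, leaving only an additive constant.
```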
Choosing $\epsilon$ sufficiently small we obtain

$$
\begin{align*}
\frac{1}{2} \frac{d}{dt} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2) &\le -C_5 (\|u_1(t)\|_H^{q^-} + \|u_2(t)\|_H^{q^-}) + C_6 \\
&\le -\frac{C_5}{2^{q^-/2}} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2)^{\frac{q^-}{2}} + C_6,
\end{align*}
$$

where $C_5$, $C_6 > 0$ are constants depending on $|\Omega|$, $\beta$, $p^-$, $p^+$, $q^-$, $q^+$, $a$, $b$, $c$ and $m_0$.

Hence, the function $y(t) := \|u_1(t)\|_H^2 + \|u_2(t)\|_H^2$ satisfies the inequality

$$
y'(t) \leq - \frac{2C_5}{2^{q^-/2}}\, y(t)^{\frac{q^-}{2}} + 2C_6, \quad t > \tau.
$$

From Lemma 5.1 in [28] we obtain

$$
y(t) \le \left( \frac{2^{q^-/2}\, C_6}{C_5} \right)^{2/q^-} + \left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right)(t-\tau) \right]^{-1/(q^-/2-1)}.
$$

Let $T_0 > 0$ be such that $\left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right) T_0 \right]^{-1/(q^-/2-1)} \le 1$. Then

$$
\|u_1(t)\|_{H}^{2} + \|u_{2}(t)\|_{H}^{2} \leq \kappa_{0} := (C_{6}2^{q^{-}/2}/C_{5})^{2/q^{-}} + 1 \quad \text{for all } t \geq T_{0} + \tau,
$$

that is, the claim holds with $r_0 := \sqrt{\kappa_0}$. $\square$

**Lemma 3.2.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist positive constants $r_1$ and $T_1 > T_0$, which do not depend on the initial data, such that

$$
\|(u_1(t), u_2(t))\|_{Y \times Y} \le r_1, \quad \forall t \ge T_1 + \tau.
$$

*Proof.* Take $T_1 > T_0$. Since $(u_1, u_2)$ is a solution of (S), there exists a pair $(f,g) \in \operatorname{Sel} F(u_1, u_2) \times \operatorname{Sel} G(u_1, u_2)$ with $f, g \in L^1(\tau,T;H)$ such that $u_1$ and $u_2$ satisfy the problem

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega.
\end{array}
\right.
$$

Consider $\varphi_{p(\cdot)}^t$ as in (1).
Using Assumption D (ii),

$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_{1}(t)) \leq \left\langle \partial \varphi_{p(\cdot)}^{t}(u_{1}(t)), \frac{du_{1}}{dt}(t) \right\rangle, $$

and then we obtain

$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_1(t)) + \frac{1}{2} \left\| f(t) - \frac{du_1}{dt}(t) \right\|_{H}^{2} \leq \frac{1}{2} \|f(t)\|_{H}^{2}. $$

Now, by Lemma 3.1 and the fact that $F$ and $G$ are bounded, there exists a positive constant $C_0$ such that $\|f(t)\|_H \le C_0$ for all $t \ge T_0 + \tau$. Then, by the definition of the subdifferential and the Uniform Gronwall Lemma (see [28]), there exists a positive constant $C_1$ such that $\varphi_{p(\cdot)}^t(u_1(t)) \le C_1$ for all $t \ge T_1 + \tau$. Consequently, there exists a positive constant $K_1$ such that $\|u_1(t)\|_Y \le K_1$ for all $t \ge T_1 + \tau$.

In a similar way we conclude that $\|u_2(t)\|_Y \le K_2$ for all $t \ge T_1 + \tau$ for a positive constant $K_2$. The assertion of the lemma then follows. $\square$

Let $U_G$ be the multivalued process defined by the generalized process $\mathbb{G}$. We know from [23] that for all $t \ge s$ in $\mathbb{R}$ the map $x \mapsto U_G(t,s)x \in P(H \times H)$ is closed, so we obtain from Theorem 18 in [4] the following result.

**Theorem 3.3.** If for any $t \in \mathbb{R}$ there exists a nonempty compact set $D(t)$ which pullback attracts all bounded sets of $H \times H$ at time $t$, then the family $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ with $\mathcal{A}(t) = \bigcup_{B \in \mathcal{B}(H \times H)} \omega(t, B)$ is the unique compact, negatively invariant pullback attracting set which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.
**Theorem 3.4.** The multivalued process $U_G$ associated with system (S) has a compact, negatively invariant pullback attracting set $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.

*Proof.* By Lemma 3.2, the family of compact subsets of $H \times H$ given by $D(t) := \overline{B_{Y \times Y}(0, r_1)}^{H \times H}$ is pullback attracting. The result then follows from Theorem 3.3. $\square$

**4. Forward attraction.** Pullback attractors contain all of the bounded entire solutions of the nonautonomous dynamical system [11, 12]. Simple counterexamples show that a pullback attractor need not be attracting in the forward sense [11]. However, since the pullback absorbing set $D$ above is also forward absorbing (the absorption time is independent of the initial time $\tau$), the forward omega limit sets $\omega_f(\tau, D)$ of the multivalued process starting at time $\tau$ are nonempty and compact subsets of the compact set $D$. Moreover, it follows from the positive invariance of $D$ and the two-parameter semigroup property that they are increasing in time. The forward limiting dynamics thus tends to the nonempty compact subset $\omega_f^\infty(D) = \cup_{\tau \ge 0} \omega_f(\tau, D) \subset D$, which was called the forward attracting set in [16]. (It is related to the Vishik uniform attractor, when that exists, but can be smaller, since the attraction here need not be uniform in the initial time.)
As shown in Proposition 8 of [16] (in the context of singlevalued difference equations, but a similar proof holds here), the forward attracting set $\omega_f^\infty(D)$ is asymptotically positively invariant with respect to the set-valued process $U_G(t, \tau)$, i.e., for any monotone decreasing sequence $\varepsilon_p \to 0$ as $p \to \infty$ there exists a monotone increasing sequence $T_p \to \infty$ as $p \to \infty$ such that for each $\tau \ge T_p$

$$U_G(t, \tau)\omega_f^\infty(D) \subset B_{\varepsilon_p}(\omega_f^\infty(D)), \quad t \ge \tau,$$

where $B_{\varepsilon_p}(\omega_f^\infty(D)) := \{x \in H \times H : \operatorname{dist}_{H \times H}(x, \omega_f^\infty(D)) < \varepsilon_p\}$.

Simple counterexamples show that the set $\omega_f^\infty(D)$ need not be invariant or even positively invariant, although it may be in special cases, depending on the nature of the time-varying terms in the system. For asymptotically autonomous systems, $\omega_f^\infty(D)$ is contained in the global attractor $\mathcal{A}_\infty$ of the multivalued semigroup $G$ associated with the limiting autonomous system.

Moreover, it is possible to compare the global attractor $\mathcal{A}_\infty$ with the limit set $\mathcal{A}(\infty)$ defined by $\mathcal{A}(\infty) := \bigcap_{t \in \mathbb{R}} \left(\cup_{r \ge t} \mathcal{A}(r)\right)$, which can be characterized as

$$\bigcup_{r_n \nearrow \infty} \{x \in X : \exists\, x_n \in \mathcal{A}(r_n) \text{ s.t. } x_n \to x\}.$$

This kind of comparison was done in [26] in the multivalued context.

**Theorem 4.1** ([26]). Suppose the pullback attractor $\mathcal{A}$ is forward compact, i.e., $\cup_{r \ge t} \mathcal{A}(r)$ is precompact for each $t \in \mathbb{R}$.
Moreover, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9) such that $u(t+\tau) \to v(t)$ in $X$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $X$ as $\tau \to +\infty$. Then $\mathcal{A}_\infty \supset \mathcal{A}(\infty)$.

To obtain the equality $\mathcal{A}_\infty = \mathcal{A}(\infty)$ we need stronger conditions, as in the next result.

**Theorem 4.2** ([26]). Under the assumptions of Theorem 4.1, we have $\mathcal{A}_\infty = \mathcal{A}(\infty)$ if we further assume the following conditions:

(a) $\mathcal{A}(\infty)$ forward attracts $\mathcal{A}_\infty$ under $U_G(\cdot, 0)$, i.e.,

$$\lim_{t \to +\infty} \operatorname{dist}(U_G(t, 0)\mathcal{A}_\infty, \mathcal{A}(\infty)) = 0;$$

(b) $\lim_{t \to +\infty} \sup_{x \in \mathcal{A}_\infty} \operatorname{dist}(G(t)x, U_G(t, 0)x) = 0.$

**5. Asymptotic upper semicontinuity.** In this section we establish the asymptotic upper semicontinuity of the elements of the pullback attractor. Specifically, we prove that the system (S) is asymptotically autonomous.

**5.1. Theoretical results.** In this subsection, motivated by problem (S), we study the asymptotic behavior of an abstract nonautonomous multivalued problem in a Hilbert space $H$ of the form

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt}(t) + A(t)u_1(t) \in F(u_1(t), u_2(t)), & t > \tau, \\
\\
\displaystyle \frac{du_2}{dt}(t) + B(t)u_2(t) \in G(u_1(t), u_2(t)), & t > \tau, \\
\\
(u_1(\tau), u_2(\tau)) = (\psi_{1,\tau}, \psi_{2,\tau}) =: \psi_{\tau},
\end{array}
\right.
\qquad (8)
$$

compared with that of an autonomous multivalued problem of the form

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{dv_1}{dt}(t) + A_\infty v_1(t) \in F(v_1(t), v_2(t)), & t > 0, \\
\\
\displaystyle \frac{dv_2}{dt}(t) + B_\infty v_2(t) \in G(v_1(t), v_2(t)), & t > 0, \\
\\
(v_1(0), v_2(0)) = (\psi_{1,0}, \psi_{2,0}) =: \psi_0,
\end{array}
\right.
\qquad (9)
$$

where $A(t)$, $B(t)$, $A_\infty$ and $B_\infty$ are univalued operators in $H$ and $F, G: H \times H \to P(H)$ are multivalued maps.

Under appropriate relationships between the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$, the autonomous problem (9) is the asymptotically autonomous version of the nonautonomous problem (8). In particular, we establish the convergence, in the Hausdorff semidistance, of the component subsets of the pullback attractor of the nonautonomous problem (8) to the global attractor of the autonomous problem (9).

Some definitions on multivalued semigroups are recalled here; see for example [5, 17, 24] for more details.

**Definition 5.1.** Let $X$ be a complete metric space. The map $G : \mathbb{R}^+ \times X \to P(X)$ is called a multivalued semigroup (or *m-semiflow*) if

(1) $G(0, \cdot) = \mathbf{1}$ is the identity map;

(2) $G(t_1 + t_2, x) \subset G(t_1, G(t_2, x))$ for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.

It is called strict (or exact) if $G(t_1 + t_2, x) = G(t_1, G(t_2, x))$ for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.

**Definition 5.2.** Let $G$ be a multivalued semigroup on $X$. The set $A \subset X$ attracts the subset $B$ of $X$ if $\lim_{t \to \infty} \text{dist}_H(G(t, B), A) = 0$. The set $M$ is said to be a global $B$-attractor for $G$ if $M$ attracts every nonempty bounded subset $B \subset X$.
Suppose that the multivalued evolution process $\{U(t, \tau) : t \ge \tau\}$ in $H \times H$ associated with problem (8) has a pullback attractor $\mathcal{A} = \{\mathcal{A}(t) : t \in \mathbb{R}\}$ and that the multivalued semigroup $G : \mathbb{R}^+ \times H \times H \to P(H \times H)$ associated with problem (9) has a global autonomous $B$-attractor $\mathcal{A}_\infty$ in the Hilbert space $H \times H$. The following result will be used later to establish the convergence in the Hausdorff semidistance of the component subsets $\mathcal{A}(t)$ of the pullback attractor $\mathcal{A}$ to $\mathcal{A}_\infty$ as $t \to \infty$.

**Theorem 5.3.** Suppose that $C := \bigcup_{\tau \ge 0} \mathcal{A}(\tau)$ is a compact subset of $H \times H$. In addition, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9), with initial values $\psi_\tau$ and $\psi_0$, respectively, such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$. Then

$$ \lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0. $$

*Proof.* Suppose this is not true. Then there exist $\epsilon_0 > 0$ and a real sequence $\{\tau_n\}_{n \in \mathbb{N}}$ with $\tau_n \nearrow +\infty$ such that $\text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0$ for all $n \in \mathbb{N}$. Since the sets $\mathcal{A}(\tau_n)$ are compact, there exist $a_n \in \mathcal{A}(\tau_n)$ such that

$$ \text{dist}_{H \times H}(a_n, \mathcal{A}_\infty) = \text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0 \quad (10) $$

for each $n \in \mathbb{N}$. By the attraction property of the multivalued semigroup, we have $\text{dist}_{H \times H}(G(\tau_{n_0}, C), \mathcal{A}_\infty) \le \epsilon_0$ for $n_0 > 0$ large enough.
Moreover, by the negative invariance of the pullback attractor, for $n > n_0$ there exist $b_n \in \mathcal{A}(\tau_n - \tau_{n_0}) \subset C$ such that $a_n \in U(\tau_n, \tau_n - \tau_{n_0})b_n$. Since $C$ is compact, there is a convergent subsequence $b_{n'} \to b \in C$. Since $a_{n'} \in U(\tau_{n'}, \tau_{n'} - \tau_{n_0})b_{n'}$, there exists a solution $u_{n'} = (u_{1n'}, u_{2n'})$ of

$$
\begin{cases}
\dfrac{du_{1n'}}{dt}(t) + A(t)u_{1n'}(t) \in F(u_{1n'}(t), u_{2n'}(t)), \\
\dfrac{du_{2n'}}{dt}(t) + B(t)u_{2n'}(t) \in G(u_{1n'}(t), u_{2n'}(t)), \\
u_{n'}(\tau_{n'} - \tau_{n_0}) = b_{n'},
\end{cases}
$$

such that $a_{n'} = u_{n'}(\tau_{n'})$.

Writing $\tau_{n'} = \tau_{n_0} + (\tau_{n'} - \tau_{n_0})$ and using the hypotheses with $t = \tau_{n_0}$ and $\tau = \tau_{n'} - \tau_{n_0} \to +\infty$ (as $n' \to +\infty$), there exists a solution $v_{n'}$ of

$$
\left\{
\begin{array}{l}
\displaystyle \frac{dv_{1n'}}{dt}(t) + A_\infty v_{1n'}(t) \in F(v_{1n'}(t), v_{2n'}(t)), \\
\displaystyle \frac{dv_{2n'}}{dt}(t) + B_\infty v_{2n'}(t) \in G(v_{1n'}(t), v_{2n'}(t)), \\
v_{n'}(0) = b,
\end{array}
\right.
$$

such that

$$
\| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} < \epsilon_0
$$

for $n'$ large enough. Hence,

$$
\begin{align*}
\mathrm{dist}_{H \times H} (a_{n'}, \mathcal{A}_{\infty}) &= \mathrm{dist}_{H \times H} (u_{n'}(\tau_{n'}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (v_{n'}(\tau_{n_0}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (G(\tau_{n_0}, C), \mathcal{A}_{\infty}) \\
&\leq 2\epsilon_0,
\end{align*}
$$

which contradicts (10). $\square$

The next result is useful for checking that the hypothesis of asymptotic continuity of the nonautonomous flow in the preceding theorem holds for problems like (8).
In order to obtain the result, we suppose that the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$ satisfy the following assumption.

**Assumption G.** For each $\tau \in \mathbb{R}$ there exist nonincreasing functions $g_{1,\tau}, g_{2,\tau} : [0,+\infty) \rightarrow [0,+\infty)$ with $g_{i,\tau}(t) \rightarrow 0$ as $\tau \rightarrow +\infty$ for each $t \ge 0$, $i=1,2$, such that

$$\langle A(t+\tau)u_1(t+\tau) - A_\infty v_1(t),\, u_1(t+\tau) - v_1(t) \rangle \ge -g_{1,\tau}(t), \quad \forall t \in \mathbb{R}^+,\ \tau \in \mathbb{R},$$

and

$$\langle B(t+\tau)u_2(t+\tau) - B_\infty v_2(t),\, u_2(t+\tau) - v_2(t) \rangle \ge -g_{2,\tau}(t), \quad \forall t \in \mathbb{R}^+,\ \tau \in \mathbb{R},$$

for any solution $u = (u_1,u_2)$ of (8) and $v = (v_1,v_2)$ of (9).

**Lemma 5.4.** Suppose that Assumption G is satisfied. If $\psi_\tau = (\psi_{1,\tau}, \psi_{2,\tau}) \to \psi_0 = (\psi_{1,0}, \psi_{2,0})$ in $H \times H$ as $\tau \to +\infty$, then for each solution $u$ of (8) there exists a solution $v$ of (9) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$.

*Proof.* Let $u$ be a solution of (8); then there exists $f = (f_1, f_2)$ with $f_1, f_2 \in L^2([\tau, T]; H)$ such that $f_1(t) \in F(u_1(t), u_2(t))$ and $f_2(t) \in G(u_1(t), u_2(t))$ a.e., and

$$
\left\{
\begin{array}{ll}
\dfrac{du_1}{dt}(t) + A(t)u_1(t) = f_1(t) & \text{a.e. in } (\tau, T], \\
\dfrac{du_2}{dt}(t) + B(t)u_2(t) = f_2(t) & \text{a.e. in } (\tau, T], \\
u(\tau) = \psi_{\tau}.
\end{array}
\right.
\qquad (11)
$$

Consider $g \in L^2([0, T]; H \times H)$ defined by $g(t) := f(t+\tau)$ and let $v$ be the unique solution of the problem

$$
\left\{
\begin{array}{ll}
\dfrac{dv_1}{dt}(t) + A_\infty v_1(t) = g_1(t) & \text{a.e. in } (0, T], \\
\dfrac{dv_2}{dt}(t) + B_\infty v_2(t) = g_2(t) & \text{a.e. in } (0, T], \\
v(0) = \psi_0.
\end{array}
\right.
+\qquad (12)
+$$
+
+Subtracting the equations in (12) from the equations in (11) (the latter evaluated at time $t+\tau$) gives
+
+$$ \frac{d}{dt}(u_1(t+\tau) - v_1(t)) + A(t+\tau)u_1(t+\tau) - A_{\infty}v_1(t) = f_1(t+\tau) - g_1(t) $$
+
+and
+
+$$ \frac{d}{dt}(u_2(t+\tau) - v_2(t)) + B(t+\tau)u_2(t+\tau) - B_{\infty}v_2(t) = f_2(t+\tau) - g_2(t) $$
+
+for a.e. $t \in [0, T]$. Taking the inner product with $u_i(t+\tau) - v_i(t)$ and using Assumption G, we obtain
+
+$$ \frac{1}{2} \frac{d}{dt} \|u_i(t+\tau) - v_i(t)\|_H^2 \leq g_{i,\tau}(t), \quad i=1,2. $$
+
+Integrating this last inequality from $0$ to $t$ and using that $g_{i,\tau}$ is non-increasing gives
+
+$$ \|u_i(t+\tau) - v_i(t)\|_H^2 \leq \| \psi_{i,\tau} - \psi_{i,0} \|_H^2 + 2tg_{i,\tau}(0). $$
+
+Since $\psi_{i,\tau} \to \psi_{i,0}$ in $H$ and $g_{i,\tau}(0) \to 0$ as $\tau \to +\infty$, the result follows. $\square$
+
+5.2. **Application to system (S).** The results in Subsection 5.1 are applied here to the nonlinear system of inclusions with spatially variable exponents (S) in the Hilbert space $\tilde{H} = H \times H$, with $H := L^2(\Omega)$.
+
+We assume that the diffusion coefficients satisfy Assumption D and the additional Assumption D (iii) that follows:
+
+**Assumption D (iii).** For each $t \ge 0$, $D_i(t+\tau, \cdot) \to D_i^*(\cdot)$ in $L^\infty(\Omega)$ as $\tau \to +\infty$, for $i=1,2$.
+
+Assumptions D (i)–D (ii) imply that the pointwise limit $D_i^*(x)$ as $t \to \infty$ exists and satisfies $0 < \beta \le D_i^*(x)$ for almost all $x \in \Omega$, $i=1,2$. Then the problem (S) with $D^*(x) = (D_1^*(x), D_2^*(x))$ is autonomous and has a global autonomous B-attractor as a particular case of the results in Section 3 (see also a direct proof in [25] for the autonomous system of inclusions without the nonlinear perturbation $|u|^{p(\cdot)-2}u$).
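As a simple illustration of Assumption D (iii) (a hypothetical example, not taken from the paper), one may keep in mind coefficients which relax exponentially to their limit profile:

```latex
% Hypothetical coefficients: an exponentially decaying perturbation of the limit
% profile D_i^*. They satisfy Assumption D (iii), since the perturbation decays
% uniformly in x:
D_i(t,x) = D_i^*(x)\,(1 + e^{-t}), \qquad
\|D_i(t+\tau,\cdot) - D_i^*(\cdot)\|_{L^\infty(\Omega)}
  = e^{-(t+\tau)}\,\|D_i^*\|_{L^\infty(\Omega)}
  \;\longrightarrow\; 0 \quad\text{as } \tau \to +\infty .
```

Such coefficients also keep the lower bound $D_i(t,x) \ge D_i^*(x) \ge \beta$, since the perturbing factor is at least $1$.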
+
+We will show that the dynamics of the original nonautonomous problem is asymptotically autonomous and that its pullback attractor converges upper semicontinuously
+---PAGE_BREAK---
+
+to the autonomous global B-attractor $\mathcal{A}_\infty$ of the problem
+
+$$
+\left\{
+\begin{array}{l}
+\displaystyle \frac{\partial v_1}{\partial t}(t) - \operatorname{div} (D_1^* |\nabla v_1(t)|^{p(x)-2} \nabla v_1(t)) + |v_1(t)|^{p(x)-2} v_1(t) \in F(v_1(t), v_2(t)), \\[6pt]
+\displaystyle \frac{\partial v_2}{\partial t}(t) - \operatorname{div} (D_2^* |\nabla v_2(t)|^{q(x)-2} \nabla v_2(t)) + |v_2(t)|^{q(x)-2} v_2(t) \in G(v_1(t), v_2(t)), \\[6pt]
+v(0) = \psi_0.
+\end{array}
+\right.
+\tag{13}
+$$
+
+In particular, we consider the operators
+
+$$
+\begin{align*}
+A(t)u_1 &:= -\operatorname{div} (D_1(t)|\nabla u_1|^{p(x)-2}\nabla u_1) + |u_1|^{p(x)-2}u_1, \\
+B(t)u_2 &:= -\operatorname{div} (D_2(t)|\nabla u_2|^{q(x)-2}\nabla u_2) + |u_2|^{q(x)-2}u_2, \\
+A_\infty v_1 &:= -\operatorname{div} (D_1^*|\nabla v_1|^{p(x)-2}\nabla v_1) + |v_1|^{p(x)-2}v_1, \\
+B_\infty v_2 &:= -\operatorname{div} (D_2^*|\nabla v_2|^{q(x)-2}\nabla v_2) + |v_2|^{q(x)-2}v_2.
+\end{align*}
+$$
+
+Applying Lemma 3.1, there exist positive constants $T_0$, $B_0$ such that
+
+$$
+\|u(t)\|_{H \times H} \le B_0, \quad \forall t \ge T_0 + \tau.
+$$
+
+Moreover, applying Lemma 3.2 for $Y = W^{1,p(x)}(\Omega)$, there exist positive constants $T_1$, $B_1$ such that
+
+$$
+\|u(t)\|_{Y \times Y} \le B_1, \quad \forall t \ge T_1 + \tau. \tag{14}
+$$
+
+Since also $\|v(t)\|_{Y \times Y} \le B_1$ for all $t \ge T_1 + \tau$ and $Y \subset H$ with compact embedding, the following holds.
+
+**Corollary 1.** $\overline{\cup_{\tau \in \mathbb{R}} \mathcal{A}(\tau)}$ is a compact subset of $H \times H$.
+
+Using estimate (14), the proof of the next result follows the same lines as the
+proof of Theorem 4.2 of [14], and is therefore omitted here.
+
+**Theorem 5.5.** If $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$, then Assumption G is satisfied with $g_{i,\tau}(t) = K \|D_i(t+\tau, \cdot) - D_i^*(\cdot)\|_{L^\infty(\Omega)}$, $(i=1,2)$ where $K$ is a positive constant.
+
+Observe that by Assumption D (iii) the functions $g_{i,\tau}: [0, +\infty) \to [0, +\infty)$ given in Theorem 5.5 satisfy $g_{i,\tau}(t) \to 0$ as $\tau \to +\infty$ for each $t \ge 0$. The next result gives the desired asymptotic upper semi-continuous convergence.
+
+**Theorem 5.6.** $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$.
+
+*Proof.* Suppose that $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$. Using the negative invariance of the pullback attractor and the estimate (14) it follows that $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$. Theorem 5.5 then guarantees that Assumption G is satisfied. Thus, from Lemma 5.4, for each solution $u = (u_1, u_2)$ of (S) there exists a solution $v = (v_1, v_2)$ of (13) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$. Theorem 5.3 then yields $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$. $\square$
+
+REFERENCES
+
+[1] C. O. Alves, S. Shmarev, J. Simsen and M. S. Simsen, The Cauchy problem for a class of parabolic equations in weighted variable Sobolev spaces: existence and asymptotic behavior, *J. Math. Anal. Appl.*, **443** (2016), 265–294.
+---PAGE_BREAK---
+
+[2] J. P. Aubin and A. Cellina, *Differential Inclusions: Set-Valued Maps and Viability Theory*, Springer-Verlag, Berlin, 1984.
+
+[3] J. P. Aubin and H. Frankowska, *Set-valued Analysis*, Birkhäuser, Berlin, 1990.
+
+[4] T. Caraballo, J. A. Langa, V. S. Melnik and J. Valero, Pullback attractors for nonautonomous and stochastic multivalued dynamical systems, *Set-Valued Analysis*, **11** (2003), 153–201.
+
+[5] T.
Caraballo, P. Marin-Rubio and J. C. Robinson, A comparison between two theories for multivalued semiflows and their asymptotic behaviour, *Set-Valued Analysis*, **11** (2003), 297–322.
+
+[6] J. I. Díaz and I. I. Vrabie, Existence for reaction diffusion systems. A compactness method approach, *J. Math. Anal. Appl.*, **188** (1994), 521–540.
+
+[7] L. Diening, P. Harjulehto, P. Hästö and M. Rúžička, *Lebesgue and Sobolev Spaces with Variable Exponents*, Springer-Verlag, Berlin, Heidelberg, 2011.
+
+[8] X. L. Fan and Q. H. Zhang, Existence of solutions for $p(x)$-Laplacian Dirichlet problems, *Nonlinear Anal.*, **52** (2003), 1843–1852.
+
+[9] P. Harjulehto, P. Hästö, U. Lê and M. Nuortio, Overview of differential equations with non-standard growth, *Nonlinear Analysis*, **72** (2010), 4551–4574.
+
+[10] P. E. Kloeden and T. Lorenz, Construction of nonautonomous forward attractors, *Proc. Amer. Math. Soc.*, **144** (2016), 259–268.
+
+[11] P. E. Kloeden and P. Marín-Rubio, Negatively invariant sets and entire trajectories of set-valued dynamical systems, *Set-Valued and Variational Analysis*, **19** (2011), 43–57.
+
+[12] P. E. Kloeden and M. Rasmussen, *Nonautonomous Dynamical Systems*, Amer. Math. Soc., Providence, 2011.
+
+[13] P. E. Kloeden and J. Simsen, Pullback attractors for non-autonomous evolution equation with spatially variable exponents, *Commun. Pure & Appl. Analysis*, **13** (2014), 2543–2557.
+
+[14] P. E. Kloeden and J. Simsen, Attractors of asymptotically autonomous quasilinear parabolic equation with spatially variable exponents, *J. Math. Anal. Appl.*, **425** (2015), 911–918.
+
+[15] P. E. Kloeden, J. Simsen and M. S. Simsen, A pullback attractor for an asymptotically autonomous multivalued Cauchy problem with spatially variable exponent, *J. Math. Anal. Appl.*, **445** (2017), 513–531.
+
+[16] P. E. Kloeden and Meihua Yang, Forward attraction in nonautonomous difference equations, *J. Difference Eqns. Applns.*, **22** (2016), 513–525.
+
+[17] V.
S. Melnik and J. Valero, On attractors of multivalued semi-flows and differential inclusions, *Set-Valued Anal.*, **6** (1998), 83–111.
+
+[18] C. V. Pao, On nonlinear reaction-diffusion systems, *J. Math. Anal. Appl.*, **87** (1982), 165–198.
+
+[19] K. Rajagopal and M. Rúžička, Mathematical modelling of electrorheological fluids, *Contin. Mech. Thermodyn.*, **13** (2001), 59–78.
+
+[20] M. Rúžička, Flow of shear dependent electrorheological fluids, *C. R. Acad. Sci. Paris, Série I*, **329** (1999), 393–398.
+
+[21] M. Rúžička, *Electrorheological Fluids: Modeling and Mathematical Theory*, Lecture Notes in Mathematics, vol. 1748, Springer-Verlag, Berlin, 2000.
+
+[22] J. Simsen and J. Valero, Characterization of pullback attractors for multivalued nonautonomous dynamical systems, Advances in Dynamical Systems and Control, 179–195, Stud. Syst. Decis. Control, 69, Springer, Cham, 2016.
+
+[23] J. Simsen and E. Capelato, Some properties for exact generalized processes, *Continuous and Distributed Systems II*, 209–219, Studies in Systems, Decision and Control, vol. 30, Springer International Publishing, 2015.
+
+[24] J. Simsen and C. B. Gentile, On p-Laplacian differential inclusions – global existence, compactness properties and asymptotic behavior, *Nonlinear Analysis*, **71** (2009), 3488–3500.
+
+[25] J. Simsen and M. S. Simsen, Existence and upper semicontinuity of global attractors for $p(x)$-Laplacian systems, *J. Math. Anal. Appl.*, **388** (2012), 23–38.
+
+[26] J. Simsen and M. S. Simsen, On asymptotically autonomous dynamics for multivalued evolution problems, *Discrete Contin. Dyn. Syst. Ser. B*, **24** (2019), no. 8, 3557–3567.
+
+[27] J. Simsen and P. Wittbold, Compactness results with applications for nonautonomous coupled inclusions, *J. Math. Anal. Appl.*, **479** (2019), 426–449.
+
+[28] R. Temam, *Infinite-Dimensional Dynamical Systems in Mechanics and Physics*, Springer-Verlag, New York, 1988.
+
+[29] I. I.
Vrabie, *Compactness Methods for Nonlinear Evolutions*, Second Edition, Pitman Monographs and Surveys in Pure and Applied Mathematics, New York, 1995.
+---PAGE_BREAK---
+
+Received March 2019; revised June 2019.
+
+*E-mail address:* kloeden@na-uni.tuebingen.de
+
+*E-mail address:* jacson@unifei.edu.br
+
+*E-mail address:* petra.wittbold@uni-due.de
\ No newline at end of file
diff --git a/samples/texts_merged/4409661.md b/samples/texts_merged/4409661.md
new file mode 100644
index 0000000000000000000000000000000000000000..a2dea56c90bca768b81f6e77981ad5dd1faa100e
--- /dev/null
+++ b/samples/texts_merged/4409661.md
@@ -0,0 +1,2319 @@
+
+---PAGE_BREAK---
+
+# Marginal triviality of the scaling limits of critical 4D Ising and $\phi_4^4$ models
+
+Michael Aizenman* and Hugo Duminil-Copin†
+
+25 January 2021
+
+## Abstract
+
+We prove that the scaling limits of spin fluctuations in four-dimensional Ising-type models with nearest-neighbor ferromagnetic interaction at or near the critical point are Gaussian. A similar statement is proven for the $\lambda\phi^4$ fields over $\mathbb{R}^4$ with a lattice ultraviolet cutoff, in the limit of infinite volume and vanishing lattice spacing. The proofs are enabled by the models' random current representation, in which the correlation functions' deviation from Wick's law is expressed in terms of intersection probabilities of random currents with sources at distances which are large on the model's lattice scale. Guided by the analogy with random walk intersection amplitudes, the analysis focuses on the improvement of the so-called tree diagram bound by a logarithmic correction term, which is derived here through multi-scale analysis.
+
+# 1 Introduction
+
+The results presented below address questions pertaining to two distinct research agendas: one aims at Constructive Field Theory and the other at the understanding of the critical behavior in Statistical Mechanics.
While these two goals are somewhat different, the questions and the answers are related. We start with their brief presentation.
+
+## 1.1 Constructive Quantum Field Theory and Functional Integration
+
+Quantum field theories with local interaction play an important role in the physics discourse, where they appear in subfields ranging from high energy to condensed matter physics. The mathematical challenge of proper formulation of this concept led to programs of Constructive Quantum Field Theory (CQFT). A path towards that goal was charted through the proposal to define quantum fields as operator valued distributions whose essential properties are formulated as the Wightman axioms [50]. Wightman's reconstruction theorem allows one to recover this structure from the collection of the corresponding correlation functions, defined over the Minkowski space-time. By the Osterwalder-Schrader theorem [39, 40], correlation functions with the required properties may potentially be obtained through analytic continuation from those of random distributions defined over the corresponding Euclidean space that meet a number of conditions: suitable analyticity, permutation symmetry, Euclidean covariance, and reflection-positivity.
+
+* aizenman@princeton.edu Departments of Physics and Mathematics, Princeton University
+† duminil@ihes.fr Institut des Hautes Études Scientifiques and Université de Genève
+---PAGE_BREAK---
+
+Seeking natural candidates for such *Euclidean fields*, one ends up with the task of constructing probability averages over random distributions $\Phi(x)$, for which the expectation value of functionals $F(\Phi)$ would have properties fitting the formal expression
+
+$$ \langle F(\Phi) \rangle \approx \frac{1}{\text{norm}} \int F(\Phi) \exp[-H(\Phi)] \prod_{x \in \mathbb{R}^d} d\Phi(x), \quad (1.1) $$
+
+where $H(\Phi)$ is the Hamiltonian.
In this context, it seems natural to consider expressions of the form
+
+$$ H(\Phi) \coloneqq (\Phi, A\Phi) + \int_{\mathbb{R}^d} P(\Phi(x)) dx \quad (1.2) $$
+
+with $(\Phi, A\Phi)$ a positive definite and reflection-positive quadratic form, and $P(\Phi(x))$ a polynomial (or a more general function) whose terms of order $\Phi(x)^{2k}$ are interpreted heuristically as representing $k$-particle interactions. An example of a quadratic form with the above properties (at $K, b > 0$) and also rotation invariance is
+
+$$ (\Phi, A\Phi) := \int_{\mathbb{R}^d} (K |\nabla\Phi|^2(x) + b |\Phi(x)|^2) dx. \quad (1.3) $$
+
+The functionals $F(\Phi)$ to which (1.1) is intended to apply include the smeared averages
+
+$$ T_f(\Phi) := \int_{\mathbb{R}^d} f(x)\Phi(x)dx \quad (1.4) $$
+
+associated with continuous functions of compact support $f \in C_0(\mathbb{R}^d)$. By linearity, the expectation values of products of such variables take the form
+
+$$ \left\langle \prod_{j=1}^{n} T_{f_j}(\Phi) \right\rangle := \int_{(\mathbb{R}^d)^n} dx_1 \dots dx_n S_n(x_1, \dots, x_n) \prod_{j=1}^{n} f_j(x_j), \quad (1.5) $$
+
+with $S_n(x_1, \dots, x_n)$ characterizing the probability measure on the space of distributions
+which corresponds to the expectation value $\langle \cdot \rangle$. This is summarized by saying that in a
+distributional sense
+
+$$ \left\langle \prod_{j=1}^{n} \Phi(x_j) \right\rangle = S_n(x_1, \dots, x_n), \qquad (1.6) $$
+
+with $S_n$ referred to as the *Schwinger functions* of the corresponding Euclidean field theory.
+
+A relatively simple class of Euclidean fields is that of the Gaussian fields, for which $H$ contains only quadratic terms.
Gaussian fields (whether reflection-positive or not) are alternatively
+characterized by having their structure determined by just the two-point function,
+with the $2n$-point Schwinger functions computable through Wick's law:
+
+$$ S_{2n}(x_1, \ldots, x_{2n}) = \sum_{\pi} \prod_{j=1}^{n} S_2(x_{\pi(2j-1)}, x_{\pi(2j)}) := \mathcal{G}_n[S_2](x_1, \ldots, x_{2n}), \quad (1.7) $$
+
+where $\pi$ ranges over pairing permutations of $\{1, \ldots, 2n\}$. The field theoretic interpretation
+of (1.7) is the absence of interaction. Due to that, and to their algebraically simple
+structure, such fields have been referred to as *trivial*.
+
+When interpreting (1.1), one quickly encounters a number of problems. Even in the
+generally understood case of the Gaussian free field, with $H$ consisting of just the quadratic
+term (1.3), Equation (1.1) is not to be taken literally, as the measure is supported by
+non-differentiable functions for which the integral in the exponential is almost surely divergent.
+
+A natural step to tackle next seems to be the addition of the lowest order even term,
+i.e. $\lambda\Phi^4$. However, in dimensions $d > 1$, the free field is no longer a random function but a
+---PAGE_BREAK---
+
+random distribution which even locally is unbounded. Thus such simple looking proposals
+lead to additional divergences, whose severity increases with the dimension.
+
+The heuristic “renormalization group” approach to the problem by K. Wilson [51]
+indicates that in low enough dimensions, specifically $d < 4$ for $\lambda\Phi^4$, the problem could be
+tackled through cutoff-dependent counter-terms. Partially successful attempts to carry out
+such a project rigorously have been the focus of a substantial body of works. The means
+employed have included: counter-terms, which are allowed to depend on regularizing cutoffs,
+scale decomposition, renormalization group flows, the theory of regularity structures [27],
+etc.
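Wick's law (1.7) can be checked mechanically; the following sketch (illustrative only, not part of the paper's argument) implements the pairing recursion behind (1.7), namely Isserlis' theorem, and verifies it against the classical one-dimensional Gaussian moments $\mathbb{E}[X^{2n}] = (2n-1)!!\,\sigma^{2n}$.

```python
def wick_moment(indices, cov):
    """E[x_{i1} * ... * x_{i2n}] for a centered Gaussian vector with covariance
    matrix `cov`, via the recursion behind Wick's law (Isserlis' theorem):
    pair the first index with each remaining index in turn and recurse."""
    if len(indices) == 0:
        return 1.0
    if len(indices) % 2 == 1:
        return 0.0  # odd moments of a centered Gaussian vanish
    first, rest = indices[0], indices[1:]
    total = 0.0
    for k, j in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        total += cov[first][j] * wick_moment(remaining, cov)
    return total

# One-dimensional check with variance s: three pairings for n = 2, fifteen for n = 3.
s = 2.0
print(wick_moment((0, 0, 0, 0), [[s]]))  # 3 * s^2 = 12.0
print(wick_moment((0,) * 6, [[s]]))      # 15 * s^3 = 120.0
```

The same function evaluates mixed moments, e.g. $\mathbb{E}[x_0^2 x_1^2] = C_{00}C_{11} + 2C_{01}^2$ for a two-component vector.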
+
+A natural starting point towards such a construction of a $\Phi_d^4$ functional integral (1.1) is to regularize it with a pair of cutoffs: at the short distance (ultraviolet) scale and the large distance (infrared) scale. A lattice version of that is the restriction of $\Phi(\cdot)$ to the vertices of a finite graph with the vertex set
+
+$$ \mathcal{V}_{a,R} = (a\mathbb{Z})^d \cap \Lambda_R, \quad \Lambda_R := [-R, R]^d. \tag{1.8} $$
+
+For the corresponding finite collection of variables $\{\Phi(x)\}_{x \in \mathcal{V}_{a,R}}$ the Hamiltonian (1.2) is initially interpreted in terms of the Riemann-sum style discrete analog of the integral expressions. Moments of $\Phi(x)$ are to be accompanied by lower-order counter-terms. In particular, the fourth power addition takes the form
+
+$$ P(\Phi(x)) = \lambda\Phi(x)^4 - c(\lambda, a, R)\Phi(x)^2. \tag{1.9} $$
+
+The cutoffs are removed through the limit $R \nearrow \infty$ followed by $a \searrow 0$. Parameters such as $c(\lambda, a, R)$ are allowed to be adjusted in the process, so as to stabilize the Schwinger functions $S_n(x_1, \dots, x_n)$ on the continuum limit scale.
+
+The constructive field theory program has yielded non-trivial scalar field theories over $\mathbb{R}^2$ and $\mathbb{R}^3$ [11, 21, 26, 40]. (We do not discuss gauge field theories here, cf. [31].) However, the progression of constructive results was halted when it was proved that for dimensions $d > 4$ the attempt to construct $\Phi_d^4$ with
+
+$$ \lim_{|x-y| \to \infty} S_2(x,y) = 0 \tag{1.10} $$
+
+by the method outlined above (in essence: taking the scaling limit of the lattice models
+at $\beta \le \beta_c$) yields only Gaussian fields [1, 17].
+
+Various partial results have indicated that the same may hold true for the critical
+dimension $d = 4$ (cf. [7, 8, 9, 20, 29]), however a sweeping statement such as the one proved for
+$d > 4$ has remained open. In this work we address this case.
+
+For clarity let us note that, like the no-go statements of [1, 17], the results presented
+here do not involve explicit computations of the counterterms along the above scheme.
+Instead, they are based on dimension-dependent relations among the Schwinger functions
+which may emerge in any such limit.
+
+## 1.2 Statement of the main result
+
+The probability measures which correspond to (1.1) with the lattice and finite volume
+cutoffs (1.8) take the form of a statistical-mechanics Gibbs equilibrium state average
+
+$$ \langle F(\phi) \rangle = \frac{1}{\text{norm}} \int F(\phi) \exp[-H(\phi)] \prod_{x \in \Lambda_R} \rho(d\phi_x), \tag{1.11} $$
+---PAGE_BREAK---
+
+with a Hamiltonian $H(\phi)$ and an a-priori measure $\rho(d\phi)$ of the form
+
+$$ H(\phi) = - \sum_{\{x,y\} \subset \Lambda_R} J_{x,y} \phi_x \phi_y, \quad \rho(d\phi_x) = e^{-\lambda \phi_x^4 + b \phi_x^2} d\phi_x, \qquad (1.12) $$
+
+where $d\phi_x$ is the Lebesgue measure on $\mathbb{R}$ and $J_{x,y}$ is zero for pairs of vertices which are not nearest neighbours, and equal to a constant $J \ge 0$ otherwise. To keep the notation simple, the basic variables are written here as they appear from the perspective of the lattice, but our attention is focused on the correlations at distances of the order of $L$, with
+
+$$ 1 \ll L \ll R. \qquad (1.13) $$
+
+In terms of the scaling limit discussed above, $a$ is equal to $1/L$.
+
+A point of fundamental importance is that since the interaction through which the field variables are correlated is local (nearest neighbor on the lattice scale), for the field correlations functions to exhibit non-singular variation on the scales $L \gg 1$, the system's parameters $(J, \lambda, b)$ need to be very close to the critical manifold, along which the correlation length of the lattice system diverges¹.
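The nearest-neighbour interaction term of (1.12) is straightforward to evaluate on a finite grid; the sketch below (illustrative, not from the paper, and with function names of our own choosing) computes it with free boundary conditions, counting each unordered pair once.

```python
import numpy as np

def nn_interaction_energy(phi, J):
    """Interaction part of (1.12): -J * sum over nearest-neighbour pairs {x, y}
    of phi_x * phi_y, on a finite d-dimensional grid with free boundary
    conditions (each unordered pair counted once)."""
    total = 0.0
    for axis in range(phi.ndim):
        # pair each site with its forward neighbour along this axis
        front = np.take(phi, range(phi.shape[axis] - 1), axis=axis)
        back = np.take(phi, range(1, phi.shape[axis]), axis=axis)
        total += np.sum(front * back)
    return -J * total

# 2x2 grid of all-ones field: 4 nearest-neighbour bonds, energy -4J
print(nn_interaction_energy(np.ones((2, 2)), J=1.0))  # -4.0
```

The a-priori measure $\rho(d\phi_x)$ of (1.12) acts site by site and is kept separate from this interaction term, mirroring the split in (1.11)-(1.12).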
+
+Quantities whose joint distribution we track in the scaling limit are based on the collections of random variables of the form
+
+$$ T_{f,L}(\phi) := \frac{1}{\sqrt{\Sigma_L}} \sum_{x \in \mathbb{Z}^d} f(x/L) \phi_x, \qquad (1.14) $$
+
+where $f$ ranges over compactly supported continuous functions, whose collection is denoted $C_0(\mathbb{R}^d)$, and $\Sigma_L$ denotes the variance of the sum of spins over the box of size $L$, i.e.
+
+$$ \Sigma_L := \left\langle \left( \sum_{x \in \Lambda_L} \phi_x \right)^2 \right\rangle. \qquad (1.15) $$
+
+**Definition 1.1** A discrete system as described above, parametrized by $(J, \lambda, b, R, L)$, converges in distribution, in the double limit $\lim_{L \to \infty} \lim_{R/L \to \infty}$ (with a possible restriction to a subsequence along which also the other parameters are allowed to vary) if for any finite collection of test functions $f \in C_0(\mathbb{R}^d)$ the joint distributions of the random variables $\{T_{f,L}(\phi)\}$ converge.
+
+Through a standard probabilistic construction, the limit can be presented as a random field $\Phi$, to whose weighted averages $T_f(\Phi)$ the above variables converge in distribution. We omit here the detailed discussion of this point², but remark that for the models considered here the construction is simplified by i) the exclusion of delta functions $\delta(x)$ and their derivatives from the family of considered test functions, and ii) the uniform local integrability of the rescaled correlation functions (before and at the limit). This important condition is implied in the present case by the *infrared bound*, which is presented below in Section 5.3.
+
+Our main result concerning the Euclidean field theory is the following.
+
+¹The scaling limit of a field whose correlation function decays exponentially, with a correlation length which stays fixed on the lattice scale, is a white noise distribution.
+
+²By the Kolmogorov extension theorem, one may start by selecting sequences of the parameter values so as to establish convergence in distribution for a countable collection of test functions $f$, which is dense in $C_0(\mathbb{R}^d)$, and then use the uniform local integrability of the rescaled correlation function and of the limiting Schwinger functions, to extend the statement by continuity arguments to all $f \in C_0(\mathbb{R}^d)$. One may then recast the limiting variables as associated with a single random $\Phi$, as in (1.4).
+---PAGE_BREAK---
+
+**Theorem 1.2 (Gaussianity of $\Phi_4^4$)** For dimension $d=4$, any random field reachable by the above constructions, and satisfying (1.10), is a generalized Gaussian process.
+
+Let us mention that the precise asymptotic behaviour of scaling limits of lattice models which start from sufficiently small perturbations of the Gaussian free field, i.e. small enough $\lambda$, has been obtained through rigorous renormalization techniques [9, 16, 20, 29]. In comparison, our result also covers arbitrarily “hard” $\phi^4$ fields. However, we do not currently provide comparable analysis of the convergence in terms of the exact scale of the logarithmic corrections, and the exact expression for the covariance of the limiting Gaussian field.
+
+Let us also note that what from the perspective of constructive field theory may be regarded as a disappointment is a positive and constructive result from the perspective of statistical mechanics. The theoreticians' goal there is to understand the critical behavior in models which lie beyond the reach of exact solutions. The proven gaussianity of the limit is therefore also a constructive result.
+
+## 1.3 The statistical mechanics perspective
+
+Statistical mechanics provides a general approach for studying the behaviour of extensive systems of a divergent number of degrees of freedom.
Among the theoretically gratifying observations in this field has been the discovery of “universality”. The term means that some of the key features of phase diagrams, and critical behavior (including the critical exponents), appear to be the same across broad classes of systems of rather different microscopic structure. This has accorded relevance to studies of the phase transitions in drastically streamlined mathematical models. The ferromagnetic Ising spin model, to which we turn next, is among the earliest and most studied such systems.
+
+An intuitive explanation of universality is that the large scale behavior of models of rich short scale structure is described by statistical field theories for which there are far fewer options. A heuristic perspective on this phenomenon is provided by the renormalization group theory, e.g. [51]. In particular, the mechanism underlying the simplicity of the scaling limit is related to the simplicity of the critical exponents, which means that for $d \ge 4$ they assume their mean field values. Rigorous results for the latter (though still partial, in terms of logarithmic corrections) were presented in [46, 6].
+
+The Ising spin model on $\Lambda \subset \mathbb{Z}^d$ has as its basic variables a collection of $\pm 1$-valued variables $\{\sigma_x\}_{x \in \Lambda}$, and a Hamiltonian (the energy function) of the form
+
+$$ H_{\Lambda,J,h}(\sigma) := - \sum_{\{x,y\} \subset \Lambda} J_{x,y} \sigma_x \sigma_y - \sum_{x \in \Lambda} h \sigma_x.
\quad (1.16) $$
+
+The model's finite volume Gibbs equilibrium state $\langle \cdot \rangle_{\Lambda,J,h,\beta}$ at inverse temperature $\beta \ge 0$ is the probability measure under which the expectation value of any function $F: \{\pm 1\}^\Lambda \to \mathbb{R}$ is given by
+
+$$ \langle F \rangle_{\Lambda, J, h, \beta} := \frac{1}{Z(\Lambda, J, h, \beta)} \sum_{\sigma \in \{\pm 1\}^\Lambda} F(\sigma) \exp[-\beta H_{\Lambda, J, h}(\sigma)], \quad (1.17) $$
+
+where the normalizing factor $Z(\Lambda, J, h, \beta)$ is the model's partition function. Infinite volume Gibbs states on $\mathbb{Z}^d$, which we shall denote by $\langle \cdot \rangle_{J,h,\beta}$, are defined through suitable limits (over sequences $\Lambda_n \nearrow \mathbb{Z}^d$) of the above.
+
+We focus here on the nearest neighbor ferromagnetic interaction (n.n.f.)
+
+$$ J_{x,y} = \begin{cases} J & \|x-y\| = 1 \\ 0 & \text{otherwise} \end{cases} \quad (1.18) $$
+---PAGE_BREAK---
+
+with $J > 0$. In dimensions $d > 1$, this model exhibits a line of first-order phase transitions (in the plane of the model's thermodynamic parameters $(\beta, h)$) along the line $h = 0$, $\beta \in (\beta_c(d), \infty)$. The line terminates at the critical point $(\beta_c, 0)$ at which the model's correlation length diverges. Our discussion concerns the scaling limits at, or near, this point. Since the phase transition occurs at zero magnetic field, we restrict the discussion to $h = 0$ and will omit $h$ from the notation.
+
+Away from the critical point the model's truncated correlation functions decay exponentially fast [3, 14]. This leads to the definition of the *correlation length* $\xi(\beta)$ as:
+
+$$ \xi(\beta) := \lim_{n \to \infty} -n / \log \langle \sigma_0; \sigma_{n\mathbf{e}_1} \rangle_\beta \quad (\text{with } \mathbf{e}_1 = (1, 0, \dots, 0)). \qquad (1.19) $$
+
+The correlation length is proven to be finite for any $\beta < \beta_c$ [3] and divergent in the limit $\beta \to \beta_c$ [44].
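The definition (1.19) suggests a simple finite-$n$ estimator; as an illustration (ours, not from the paper), applying it to a synthetic two-point function with purely exponential decay recovers the correlation length.

```python
import math

def xi_estimate(two_point, n):
    """Finite-n version of (1.19): xi ~ -n / log <sigma_0 sigma_{n e1}>."""
    return -n / math.log(two_point(n))

# Synthetic two-point function with pure exponential decay, correlation length 5.
g = lambda n: math.exp(-n / 5.0)
print(xi_estimate(g, 100))  # ~5.0 (exact up to rounding, since the decay is purely exponential)
```

For real (non-synthetic) correlation data the estimator stabilizes only as $n \to \infty$, which is why (1.19) is stated as a limit.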
At the critical point $\xi(\beta_c) = +\infty$ as the decay of the 2-point function slows to a power-law (see [44] and the discussion around Corollary 5.8).
+
+At this point, one may notice the similarity between the Ising model’s Gibbs equilibrium distribution (1.17) and the discretized functional integral (1.11). Furthermore, in view of the probability measures’ relation
+
+$$ \frac{1}{2} [\delta(\phi - 1) + \delta(\phi + 1)] d\phi = \lim_{\lambda \to \infty} e^{-\lambda(\phi^2-1)^2} d\phi / \text{Norm}(\lambda) \quad (1.20) $$
+
+the Ising spin’s a-priori (binary) distribution can be viewed as the “hard” limit of the $\phi^4$ measure. Hence included in Theorem 1.2 is the statement that for $d=4$ any scaling limit of the critical Ising model is Gaussian.
+
+However, our analysis flows in the opposite direction. In essence, the argument is structured as follows:
+
+1. deploying methods which take advantage of the Ising systems' structure, the stated results are first proven for the n.n.f. Ising model (in four dimensions);
+
+2. the analysis is adapted to the model's extension, in which each spin is replaced by a block average of 'elemental' Ising spins with an intrablock ferromagnetic coupling;
+
+3. through weak limits the statement is extended to systems of variables whose a-priori single spin distribution belongs to the Griffiths-Simon (G-S) class.
+
+Included in the G-S class (defined below) is the $\Phi^4$ measure $\rho(d\phi_x)$ of (1.12).
+
+To reduce repetition, some of the relevant relations are presented below in a form which may not be the simplest for n.n.f. but is suitable for the model's generalized version. However, in the rest of this section we focus on the n.n.f. case.
+
+As is known, and made explicit in Section 6.3, for Ising models a bellwether for Gaussian behaviour at large distances is the asymptotic validity of Wick's law at the level of the four-point function [1, 38].
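The limit (1.20) can also be seen numerically; in the following illustrative sketch (ours, not from the paper), the second moment of the normalized single-site $\phi^4$ measure approaches $1$, the value for the binary Ising measure, as $\lambda \to \infty$.

```python
import numpy as np

def phi4_second_moment(lam, n=200001):
    """Second moment of the normalized measure e^{-lam*(phi^2-1)^2} d phi,
    computed by a Riemann sum on a fine uniform grid (the grid spacing
    cancels between numerator and denominator)."""
    phi = np.linspace(-3.0, 3.0, n)
    w = np.exp(-lam * (phi**2 - 1.0)**2)
    return float(np.sum(phi**2 * w) / np.sum(w))

for lam in (1.0, 10.0, 1000.0):
    print(lam, phi4_second_moment(lam))
# As lam grows, the weight concentrates near phi = +-1 and the
# second moment tends to 1, matching the binary Ising measure in (1.20).
```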
The deviation is expressed in the *Ursell function*
+
+$$
+U_4^\beta(x, y, z, t) := \langle \sigma_x \sigma_y \sigma_z \sigma_t \rangle_\beta - \left[ \langle \sigma_x \sigma_y \rangle_\beta \langle \sigma_z \sigma_t \rangle_\beta + \langle \sigma_x \sigma_z \rangle_\beta \langle \sigma_y \sigma_t \rangle_\beta + \langle \sigma_x \sigma_t \rangle_\beta \langle \sigma_y \sigma_z \rangle_\beta \right] \quad (1.21)
+$$
+
+the relevant question being whether $U_4^\beta(x,y,z,t)/\langle\sigma_x\sigma_y\sigma_z\sigma_t\rangle_\beta$ vanishes asymptotically for quadruples of sites at large distances, of comparable order between the pairs.
+
+Gaussianity of the scaling limits for $d > 4$ was previously established through the combination of the *tree diagram bound* of [1]:
+
+$$ |U_4^\beta(x, y, z, t)| \le 2 \sum_{u \in \mathbb{Z}^d} \langle \sigma_u \sigma_x \rangle_\beta \langle \sigma_u \sigma_y \rangle_\beta \langle \sigma_u \sigma_z \rangle_\beta \langle \sigma_u \sigma_t \rangle_\beta \qquad (1.22) $$
+---PAGE_BREAK---
+
+and the *Infrared Bound* of [19, 21]
+
+$$
+\langle \sigma_x \sigma_y \rangle_{\beta_c} \leq \frac{C}{|x-y|^{d-2}}. \tag{1.23}
+$$
+
+At the heuristic level, the triviality of the scaling limit for $d > 4$ is indicated by the
+following dimension counting. Assume that at $\beta_c$ the two-point function is of comparable
+values for pairs of sites at similar distances (which is false for $\beta \neq \beta_c$ at distances much
+larger than $\xi(\beta)$). Then, for quadruples of points at mutual distances of order $L$, the
+sum in the tree diagram bound (1.22) contributes a factor $L^d$ while the summand has two
+extra correlation function factors, in comparison to $\langle \sigma_x \sigma_y \sigma_z \sigma_t \rangle_\beta$, each factor dominated by
+$1/L^{d-2}$. This suggests that $U_4^\beta(x,y,z,t)$ in comparison to the full correlation functions may
+be of the order $O(L^{4-d})$, which for $d > 4$ vanishes in the limit $L \to \infty$.
Up to numerous technical details this is the essence of the argument presented in [1, 17]. However, the above estimate is clearly inconclusive for $d = 4$.

The key advance presented here is the following improvement of the tree diagram bound. The multiplicative factor by which it improves (1.22) is derived through a multiscale analysis which is of relevance at the marginal dimension $d = 4$.

**Theorem 1.3 (Improved tree diagram bound)** For the n.n.f. Ising model in dimension $d=4$, there exist $c, C > 0$ such that for every $\beta \le \beta_c$, every $L \le \xi(\beta)$ and every $x, y, z, t \in \mathbb{Z}^d$ at distance larger than $L$ from each other,

$$ |U_4^\beta(x, y, z, t)| \leq \frac{C}{B_L(\beta)^c} \sum_{u \in \mathbb{Z}^4} \langle \sigma_u \sigma_x \rangle_\beta \langle \sigma_u \sigma_y \rangle_\beta \langle \sigma_u \sigma_z \rangle_\beta \langle \sigma_u \sigma_t \rangle_\beta, \quad (1.24) $$

where $B_L(\beta)$ is the bubble diagram truncated at distance $L$, defined by the formula

$$ B_L(\beta) := \sum_{x \in \Lambda_L} \langle \sigma_0 \sigma_x \rangle_\beta^2 . \tag{1.25} $$

For a heuristic insight into the implications of this improvement for $d = 4$, one may consider separately the two following scenarios: the two-point function $\langle \sigma_0 \sigma_x \rangle_\beta$ may be roughly of the order $L^{2-d}$ (meaning that the Infrared Bound is saturated up to a constant), or it may be much smaller. In the first case (which is conjectured to hold when $d = 4$), $B_L(\beta)$ is of order $\log L$, so that the improved tree diagram bound indicates that $|U_4|/S_4 = O((\log L)^{-c})$, and thus is asymptotically negligible. In the second case (which is not the one expected to hold), already the unadulterated tree diagram bound (1.22) suffices.

We derive (1.24) making extensive use of the Ising model's random current representation that is presented in Section 3.
It enables combinatorial identities through which the deviations from Wick's law can be expressed in terms of intersection probabilities of the random clusters which link pairwise the specified source points.

Beyond the four-point function, the full statement of the scaling limit's gaussianity is established here through the following estimate of the characteristic function of smeared averages of spins.

**Proposition 1.4** There exist $c, C > 0$ such that for the n.n.f. Ising model on $\mathbb{Z}^4$, every $\beta \le \beta_c$, every $L \le \xi(\beta)$, and every test function $f \in C_0(\mathbb{R}^4)$,

$$ \left| \left\langle \exp\left[z T_{f,L}(\sigma) - \frac{z^2}{2} \langle T_{f,L}(\sigma)^2 \rangle_{\beta}\right] \right\rangle_{\beta} - 1 \right| \leq \frac{C \|f\|_{\infty}^4 r_f^{12}}{(\log L)^c} z^4, \quad (1.26) $$

with $\|f\|_{\infty} := \max\{|f(x)| : x \in \mathbb{R}^4\}$ and $r_f$ the diameter of the function's support.

The claimed gaussianity follows since, by the Infrared Bound (applied to the left-hand side), for any non-negative continuous function $f \neq 0$ with bounded support,

$$ C r_f^2 \|f\|_\infty^2 \geq \langle T_{f,L}(\sigma)^2 \rangle_\beta \geq c_f > 0, \quad (1.27) $$

uniformly in $\beta \leq \beta_c$ and $L$. Hence, for $L \gg 1$ the distribution of $T_{f,L}(\sigma)$ is approximately Gaussian of variance $\langle T_{f,L}(\sigma)^2 \rangle_\beta$.

**Organization of the proof:** The result proven here is unconditional. However, to better convey the argument's structure, we first establish the claimed result for the scaling limits of critical models ($\beta = \beta_c$) under the auxiliary assumption that the two-point function behaves regularly on all scales, in a sense defined below.
We then present an unconditional proof for $\beta \leq \beta_c$, in which we add to the above analysis a proof that the two-point function is regular on a sufficiently large collection of distance scales, up to the correlation length $\xi(\beta)$.

**Organization of the article:** In the next section, we present the Griffiths-Simon construction of random variables which can be obtained as local aggregates of ferromagnetically coupled Ising spins. It yields a useful link between the $\phi^4$ and Ising variables. Following that, in Section 3 we present the basics of the Ising model's random current representation, and the intuition based on random walk intersection probabilities. Section 4 contains a conditional proof of the improved tree diagram bound at criticality, derived under a power-law decay assumption on the two-point function. Next, as preparation for the unconditional proof, in Section 5 we present some relevant properties of the Ising model's two-point function. These estimates are stated and proved in the context of systems of real-valued variables with single-spin distribution in the aforementioned Griffiths-Simon class. They include mostly known, but also some new, results. Section 6 contains the unconditional proof of our main results for the Ising model. Section 7 is devoted to their extension to the Griffiths-Simon class. The appendix contains some auxiliary technical statements that are of independent interest.

# 2 The Griffiths-Simon class of measures

The discrete approximations of the $\varphi^4$ functional integral and the Gibbs states of an Ising model are not only analogous, as explained above, but are actually related.

In one direction one has (1.20) and the implications mentioned next to it. However, in this work we shall make use of another relation, which permits us to apply tools initially developed for general Ising models to the study of the $\varphi^4$ functional integral.
This relation is based on a construction which was initiated by Griffiths [23], and advanced further by Simon-Griffiths [45].

**Definition 2.1** A probability measure $\rho(d\varphi)$ on $\mathbb{R}$ is said to belong to the Griffiths-Simon (GS) class if either of the following conditions is satisfied:

1) the expectation values with respect to $\rho$ can be presented as

$$ \int F(\varphi) \rho(d\varphi) = \sum_{\sigma \in \{-1,1\}^N} F\left(\alpha \sum_{n=1}^{N} b_n \sigma_n\right) e^{\sum_{n,m=1}^{N} K_{n,m} \sigma_n \sigma_m} / \text{Norm}, \quad (2.1) $$

with some $\{b_n\} \subset \mathbb{R}$ and $K_{n,m} \geq 0$;

2) $\rho$ can be presented as a (weak) limit of probability measures of the above type, and is of sub-gaussian growth:

$$ \int e^{|\varphi|^\alpha} \rho(d\varphi) < \infty \quad \text{for some } \alpha > 2. \qquad (2.2) $$

A random variable is said to be of Griffiths-Simon type if its probability distribution is in the GS class.

Figure 1: The decorated graph, in which the sites $x \in \Lambda$ of a graph of interest are replaced by "blocks" $\mathcal{B}_x$ of sites indexed as $(x, n)$. The Ising "constituent spins" $\sigma_{x,n}$ are coupled pairwise through intra-block couplings $\delta_{x,y} K_{n,m}$ and inter-block couplings $J_{x,y}$. The depicted lines indicate a possible realization of the corresponding random current.

The construction (1) was employed by Griffiths [23] for a proof that the Ising model's Lee-Yang property, as well as the Griffiths correlation inequalities, hold also for a broader class of similar models with other notable spin variables. Subsequently, Simon and Griffiths [45] pointed out that upon taking weak limits this can be extended to cover also the $\phi^4$ a-priori measures, spelled out in (1.12).
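For a concrete feel for construction (1), the moments of a single block of $N$ ferromagnetically coupled spins can be enumerated exactly when the coupling is uniform, since the weight then depends only on the total spin. The sketch below uses an illustrative, untuned normalization $\alpha = N^{-1/2}$ and a fixed coupling strength $g$ (the paper's tuned $\alpha_N, g_N$ are different); the sign of the fourth Ursell function that the test checks is the Lebowitz inequality, valid for any ferromagnetic coupling.

```python
import math

def gs_block_moments(N, g):
    """Exact second and fourth moments of phi = S / sqrt(N), where S is the
    sum of N Ising spins with uniform pair coupling g/N, i.e. Boltzmann
    weight exp(g * (S**2 - N) / (2 * N)).  Since the weight depends only on
    S, it suffices to sum over the number k of minus spins."""
    Z = m2 = m4 = 0.0
    for k in range(N + 1):
        S = N - 2 * k                       # total spin with k minuses
        w = math.comb(N, k) * math.exp(g * (S * S - N) / (2 * N))
        phi = S / math.sqrt(N)              # illustrative normalization
        Z += w
        m2 += w * phi ** 2
        m4 += w * phi ** 4
    return m2 / Z, m4 / Z

m2, m4 = gs_block_moments(N=60, g=0.5)
u4 = m4 - 3 * m2 ** 2   # fourth Ursell function; <= 0 by the Lebowitz inequality
```

Letting $N \to \infty$ with the couplings tuned as in (2.3)-(2.4) below is what produces the $\phi^4$ single-spin measures.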
More specifically, a finite collection of the variables $\{\varphi_x\}_{x\in\Lambda}$ with the a-priori measure $\rho(d\varphi) = e^{-\lambda\varphi^4+b\varphi^2}\,d\varphi/\text{Norm}$ can be produced as the $N\to\infty$ limit (in distribution) of the collection of the block averages of elemental Ising spins $\{\sigma_{x,n}\}$ (the dots in Fig. 1)

$$ \varphi_x^{(N)} = \alpha_N(\lambda, b) \sum_{n=1}^{N} \sigma_{x,n} \qquad (2.3) $$

under the "ultra-local" coupling (which is to be added to the intersite interaction $H$ of (1.12))

$$ H_{\text{inner}} = - \frac{g_N(\lambda, b)}{N} \sum_{x \in \Lambda} \sum_{n,m} \sigma_{x,n} \sigma_{x,m} \quad (2.4) $$

with suitably adjusted $(\alpha_N, g_N)$. Their exact values are not important for our discussion, but let us note that $H_{\text{inner}}$ is a mean-field interaction, and thus it is easy to see that for each $(\lambda, b)$ with $\lambda \neq 0$, $g_N(\lambda, b)$ tends to 1 as $N$ tends to infinity, at a $(\lambda, b)$-dependent rate.

In this representation, any system of $\phi^4$ variables associated with the sites of a graph $\mathcal{V}$, and coupled through the graph's edges, is presentable as the limit ($N \to \infty$) of a system of constituent Ising spins associated with the Cartesian graph product $\mathcal{V} \times \mathcal{K}_N$, with $\mathcal{K}_N$ denoting the complete graph on $N$ vertices.

# 3 Random current intersection probabilities

## 3.1 Definition and switching lemma

Starting with the Ising model, in this section we briefly introduce its random current representation, which makes it possible to express the model's subtle correlation effects in more tangible stochastic geometric terms.
The utility of the random current representation is enhanced by the combinatorial symmetry expressed in its *switching lemma*, which makes it possible to recast some of the essential truncated correlations in terms guided by the analysis of the intersection properties of the traces of random walks.

**Definition 3.1** A current configuration $\mathbf{n}$ on $\Lambda$ is an integer-valued function defined over unordered pairs $\{x, y\} \subset \Lambda$. The current's set of sources is defined as the set

$$ \partial \mathbf{n} := \{ x \in \Lambda : (-1)^{\sum_{y \in \Lambda} \mathbf{n}(x,y)} = -1 \}. \quad (3.1) $$

For a given Ising model on $\Lambda$, we associate to a current configuration the weight

$$ w(\mathbf{n}) = w_{\Lambda, J, \beta}(\mathbf{n}) := \prod_{\{x,y\} \subset \Lambda} \frac{(\beta J_{x,y})^{\mathbf{n}(x,y)}}{\mathbf{n}(x,y)!}. \quad (3.2) $$

Starting from the Taylor expansion

$$ \exp(\beta J_{x,y} \sigma_x \sigma_y) = \sum_{\mathbf{n}(x,y) \ge 0} \frac{(\beta J_{x,y} \sigma_x \sigma_y)^{\mathbf{n}(x,y)}}{\mathbf{n}(x,y)!}, \quad (3.3) $$

one can see that the Ising model's partition function (defined below (1.17)) can be expressed in terms of the corresponding random current:

$$ Z(\Lambda, \beta) = 2^{|\Lambda|} \sum_{\mathbf{n}:\partial\mathbf{n}=\emptyset} w(\mathbf{n}). \quad (3.4) $$

Furthermore, the spin-spin correlation functions can be represented as

$$ \left\langle \prod_{x \in A} \sigma_x \right\rangle_{\Lambda, \beta} = \frac{\sum_{\mathbf{n}: \partial\mathbf{n}=A} w(\mathbf{n})}{\sum_{\mathbf{n}: \partial\mathbf{n}=\emptyset} w(\mathbf{n})}. \quad (3.5) $$

At this point, it helps to note that any configuration with $\partial\mathbf{n} = \emptyset$, i.e. without sources, can be viewed as the edge count of a multigraph which is decomposable into a union of loops.
In contrast, any configuration with $\partial\mathbf{n} = A$, such as the one appearing in the numerator of (3.5), can be viewed as describing the edge count of a multigraph which is decomposable into a collection of loops and of paths connecting pairwise the sources, i.e. the sites of $A$. In particular, a configuration with $\partial\mathbf{n} = \{u, v\}$ can be viewed as giving the "flux numbers" of a family of loops together with a path from $u$ to $v$. Thus, the random current representation allows one to present the spin-spin correlation as the effect on the partition function of a loop system with the addition of a path linking the two sources. In these terms, the correlation $\langle \sigma_{x_1} \cdots \sigma_{x_{2n}} \rangle_\beta$ represents the combined multiplicative effect of the introduction of $n$ paths pairing the sources.

Connectivity properties of currents play a significant role in our analysis. To express those we shall employ the following terminology and notation.

**Definition 3.2** i) We say that $x$ is connected to $y$ (in $\mathbf{n}$), and denote the event by $x \xrightarrow{\mathbf{n}} y$, if there exists a path of vertices $x = u_0, u_1, \dots, u_k = y$ with $\mathbf{n}(u_i, u_{i+1}) > 0$ for every $0 \le i < k$. We say that $x$ is connected to a set $S$ if it is connected to a vertex in $S$.

ii) The cluster of $x$, denoted by $\mathbf{C}_{\mathbf{n}}(x)$, is the set of vertices connected to $x$ in $\mathbf{n}$.

iii) For a set of vertices $B$, we denote by $\mathcal{F}_B$ the set of currents $\mathbf{n}$ for which there exists a sub-current $\mathbf{m} \le \mathbf{n}$ with $\partial\mathbf{m} = B$.

Some of the most powerful properties of the random current representation are best seen when considering pairs of random currents and using the following lemma.
**Lemma 3.3 (Switching lemma)** For any $A, B \subset \Lambda$ and any function $F$ from the set of currents into $\mathbb{R}$,

$$ \sum_{\substack{\mathbf{n}_1:\partial\mathbf{n}_1=A \\ \mathbf{n}_2:\partial\mathbf{n}_2=B}} F(\mathbf{n}_1+\mathbf{n}_2)w(\mathbf{n}_1)w(\mathbf{n}_2) = \sum_{\substack{\mathbf{n}_1:\partial\mathbf{n}_1=A\Delta B \\ \mathbf{n}_2:\partial\mathbf{n}_2=\emptyset}} F(\mathbf{n}_1+\mathbf{n}_2)w(\mathbf{n}_1)w(\mathbf{n}_2)\mathbf{1}_{\mathbf{n}_1+\mathbf{n}_2 \in \mathcal{F}_B}, \quad (3.6) $$

where $A\Delta B$ denotes the symmetric difference of sets, $A\Delta B := (A \setminus B) \cup (B \setminus A)$.

The switching lemma appeared as a combinatorial identity in Griffiths-Hurst-Sherman's derivation of the GHS inequality [24]. Its greater potential for the geometrization of the correlation functions was developed in [1] and in works which followed. In this paper, we employ two generalizations of this useful identity. In the first, the two currents $\mathbf{n}_1$ and $\mathbf{n}_2$ need not be defined on the same graph (see [4, Lemma 2.2] for details). The second involves a slightly more general switching statement, which has been used on several occasions in the past (cf. [5, Lemma 2.1] and references therein).

It should be recognized that other stochastic geometric representations of spin correlations and/or interactions exist (e.g. the Symanzik representation of the $\phi^4$ action [48], and the BFS random walk representation of the correlation functions [11]). It is conceivable that the overall strategy could be applied also through other means. However, we find the random current representation particularly useful for our purpose.

## 3.2 Representation of Ursell's four-point function

The switching lemma enables one to rewrite spin-spin correlation ratios in terms of probabilities of events expressed in terms of the random currents.
The first of these is the relation

$$ \frac{\langle \sigma_A \rangle_{\Lambda, \beta} \langle \sigma_B \rangle_{\Lambda, \beta}}{\langle \sigma_A \sigma_B \rangle_{\Lambda, \beta}} = \mathbf{P}_{\Lambda, \beta}^{A\Delta B, \emptyset} [\mathbf{n}_1 + \mathbf{n}_2 \in \mathcal{F}_B], \qquad (3.7) $$

where we denote by $\mathbf{P}_{\Lambda,\beta}^A (\mathbf{n})$ the probability distribution on random currents constrained by the source condition $\partial\mathbf{n} = A$, or more explicitly

$$ \mathbf{P}_{\Lambda, \beta}^{A}(\mathbf{n}) := \frac{2^{|\Lambda|} w(\mathbf{n})}{\langle \prod_{x \in A} \sigma_x \rangle_{\Lambda, \beta} Z(\Lambda, \beta)} \mathbb{I}[\partial\mathbf{n} = A], \quad (3.8) $$

and by $\mathbf{P}_{\Lambda,\beta}^{A_1, \dots, A_i}$ the law of an independent family of currents $(\mathbf{n}_1, \dots, \mathbf{n}_i)$:

$$ \mathbf{P}_{\Lambda, \beta}^{A_1, \dots, A_i} := \mathbf{P}_{\Lambda, \beta}^{A_1} \otimes \cdots \otimes \mathbf{P}_{\Lambda, \beta}^{A_i}. \quad (3.9) $$

For two-point sets we may write $A = xy$ instead of $\{x,y\}$.

As we will also work with the infinite-volume Gibbs measures, let us note that random currents and the switching lemma admit a generalization to infinite volume³. Existing continuity results [4] permit us to extend (3.7) to the infinite volume, expressed in terms of the weak limits of the random current measures $\mathbf{P}_{\Lambda_n, \beta}^A$ and $\mathbf{P}_{\Lambda_n, \beta}^{A_1, \dots, A_i}$ in the limit $\Lambda_n \nearrow \mathbb{Z}^d$. The limiting statement is similar to (3.7) but without the finite-volume subscript $\Lambda$:

$$ \frac{\langle \sigma_A \rangle_\beta \langle \sigma_B \rangle_\beta}{\langle \sigma_A \sigma_B \rangle_\beta} = \mathbf{P}_\beta^{A\Delta B, \emptyset}[\mathbf{n}_1 + \mathbf{n}_2 \in \mathcal{F}_B].
\quad (3.10) $$

Combining (3.10), applied to the different pairings of the four points, leads to

$$ U_4^\beta(x, y, z, t) = -2 \langle \sigma_x \sigma_y \rangle_\beta \langle \sigma_z \sigma_t \rangle_\beta \mathbf{P}_\beta^{xy,zt} [\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z) \neq \emptyset]. \quad (3.11) $$

This equality is of fundamental importance to the question discussed here. It was the basis of the analysis of [1], and is the starting point for our discussion.

By (3.11), the relative magnitude of the deviation of the four-point function $\langle \sigma_x \sigma_y \sigma_z \sigma_t \rangle_\beta$ from the Gaussian law (i.e. the discrepancy in Wick's formula) is bounded in terms of intersection properties of the two clusters that link the indicated sources pairwise:

$$ \frac{|U_4^\beta(x, y, z, t)|}{\langle \sigma_x \sigma_y \sigma_z \sigma_t \rangle_\beta} \le 2\, \mathbf{P}_\beta^{xy,zt} [\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z) \neq \emptyset]. \quad (3.12) $$

The random sets $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x)$ and $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z)$ are not independently distributed. However, (3.12) can be further simplified through a monotonicity property of random currents. As proved in [1], and recalled here in the Appendix, the probability of an intersection can only increase upon the two sets' replacement by a pair of independently distributed clusters, defined through the addition of two sourceless currents:

$$ \mathbf{P}_\beta^{xy,zt} [\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z) \neq \emptyset] \leq \mathbf{P}_\beta^{xy,zt,\emptyset,\emptyset} [\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(x) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(z) \neq \emptyset].
\quad (3.13) $$

This leads to the simpler upper bound, in which the two random sets are independent:

$$ |U_4^\beta(x, y, z, t)| \le 2 \langle \sigma_x \sigma_y \rangle_\beta \langle \sigma_z \sigma_t \rangle_\beta \mathbf{P}_\beta^{xy,zt,\emptyset,\emptyset} [\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(x) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(z) \neq \emptyset]. \quad (3.14) $$

Bounding the intersection probability by the expected number of intersection sites and applying the switching lemma leads directly to the tree diagram bound (1.22). However, as was explained above, to tackle the marginal dimension $d=4$ one needs to improve on that.

While $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(x)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(z)$ are bulkier and exhibit less independence than simple random walks linking the sources $\{x,y\}$ and $\{z,t\}$, the analogy helps to guide the intuition towards useful estimation strategies. In particular, it is classical that in dimension $d=4$ the probability that the traces of two random walks starting at distance $L$ of each other intersect tends to 0 (as $1/\log L$, see [2, (2.8)] and [34]), but nevertheless the expected number of points of intersection remains of order $\Omega(1)$. The discrepancy is explained by the fact that although the intersections occur rarely, the conditional expectation of the number of intersection sites, conditioned on there being at least one, diverges logarithmically in $L$. The thrust of our analysis will be to establish similar behaviour in the system considered here. More explicitly, we will prove that the conditional expectation of the clusters' intersection size, conditioned on it being non-empty, grows at least as $(\log L)^c$.
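The random walk statement invoked above can be probed numerically. The following Monte Carlo sketch (illustrative parameters; finite walks stand in for the infinite-walk statements) estimates both the intersection probability and the expected number of intersection points for two independent simple random walks on $\mathbb{Z}^4$; the first-moment bound $\mathbf{P}[|\mathcal{T}|>0] \le \mathbf{E}[|\mathcal{T}|]$ is visible in the two returned values.

```python
import random

def walk_trace(steps, rng):
    """Trace (set of visited sites) of a simple random walk on Z^4 from 0."""
    pos = (0, 0, 0, 0)
    trace = {pos}
    for _ in range(steps):
        axis = rng.randrange(4)
        step = rng.choice((-1, 1))
        pos = tuple(p + step if i == axis else p for i, p in enumerate(pos))
        trace.add(pos)
    return trace

def intersection_stats(trials, steps, start, seed=0):
    """Estimate P[traces meet] and E[#common sites] for two independent
    walks on Z^4 started at distance |start| apart (Monte Carlo sketch)."""
    rng = random.Random(seed)
    hits = 0
    total = 0
    for _ in range(trials):
        t1 = walk_trace(steps, rng)
        # second walk, shifted so that it starts at `start`
        t2 = {tuple(a + b for a, b in zip(p, start)) for p in walk_trace(steps, rng)}
        common = t1 & t2
        hits += 1 if common else 0
        total += len(common)
    return hits / trials, total / trials

p_hit, mean_hits = intersection_stats(trials=100, steps=400, start=(6, 0, 0, 0), seed=1)
```

As the lattice distance and the walk lengths grow, `p_hit` decays while `mean_hits` stays of order one, reflecting the conditional clustering of intersections described above.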
The analysis of clusters' intersection properties is more difficult than that of the paths of simple random walks for at least two reasons:

* Missing information on the two-point function: most analyses of intersection properties of random walks involve estimates on the Green function. In our system its role is to some extent taken by the two-point spin-spin correlation function. However, unlike in the former case, we do not a priori know the two-point function's exact order of magnitude (though a good one-sided inequality is provided by the Infrared Bound). This raises a difficulty which we address by studying the regularity properties of the two-point function in Section 5.

* The lack of a simple Markov property: in one way or another, the analysis of intersections for random walks involves the random walk's Markov property. Among its other applications, the walk's renewal property facilitates de-correlating the walks' behaviour at different places. In comparison, the random current clusters exhibit only a multidimensional domain Markov property. One of the main contributions of this paper is to establish a mixing property of random currents which enables us to bypass the difficulty raised by the lack of a renewal property.

We expect that both the regularity estimates and the mixing properties established here are of independent interest, and may be of help in studies of the model also in three dimensions.

³The extension of the switching lemma to $\mathbb{Z}^d$ is straightforward for $\beta \le \beta_c$ since then, almost surely under $\mathbf{P}_\beta^{A,B}$, $\mathbf{n}_1 + \mathbf{n}_2$ contains no infinite paths of positive currents. For $\beta < \beta_c$ this is implied by the discussion in [1], and for $\beta = \beta_c$ it follows from the continuity result of [4].
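Before turning to the improvement, the finite-volume identities (3.4)-(3.5) admit a direct brute-force check on a tiny graph, truncating each current value at a finite cap (the graph, couplings, and cap below are illustrative; the factorial decay of $(\beta J)^n/n!$ makes the truncation error negligible):

```python
import itertools
import math

V = (0, 1, 2)                      # a triangle graph, for illustration
EDGES = ((0, 1), (0, 2), (1, 2))
BJ = {e: 0.3 for e in EDGES}       # beta * J_{x,y}, illustrative values
NMAX = 12                          # cap on current values (truncation)

def current_sum(sources):
    """Sum of the weights w(n) of (3.2) over currents with the given source set."""
    total = 0.0
    for vals in itertools.product(range(NMAX + 1), repeat=len(EDGES)):
        w = 1.0
        deg = {v: 0 for v in V}
        for e, n in zip(EDGES, vals):
            w *= BJ[e] ** n / math.factorial(n)
            deg[e[0]] += n
            deg[e[1]] += n
        # the sources (3.1) are the vertices of odd total current
        if {v for v in V if deg[v] % 2} == sources:
            total += w
    return total

# eq. (3.5): spin-spin correlation from the random current representation
corr_rc = current_sum({0, 1}) / current_sum(set())

# the same correlation by direct summation over the 2^3 spin configurations
Z = S = 0.0
for sig in itertools.product((-1, 1), repeat=len(V)):
    w = math.exp(sum(BJ[e] * sig[e[0]] * sig[e[1]] for e in EDGES))
    Z += w
    S += sig[0] * sig[1] * w
corr_spin = S / Z
```

The two computations of $\langle \sigma_0 \sigma_1 \rangle$ agree to within the (tiny) truncation error.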
# 4 A conditional improvement of the tree diagram bound for $\beta = \beta_c$

To better convey the strategy by which the tree diagram bound is improved, we start with a conditional proof of (1.24) for the Ising model on $\mathbb{Z}^4$ at criticality (i.e. when $\beta = \beta_c$), under the following assumption on the model's two-point function. The removal of this assumption raises substantial problems which are addressed in the sections that follow. Below, $|\cdot|$ denotes the infinity norm

$$ |x| := \max\{|x_i| : 1 \le i \le d\}. \quad (4.1) $$

**Assumption 4.1 (Power-law decay)** There exist $\eta$ and $c, C \in (0, \infty)$ such that for every $x \in \mathbb{Z}^d$,

$$ \frac{c}{|x|^{d-2+\eta}} \le \langle \sigma_0 \sigma_x \rangle_{\beta_c} \le \frac{C}{|x|^{d-2+\eta}}. \quad (4.2) $$

The Infrared Bound (5.37) guarantees that $\eta \ge 0$ in any dimension $d > 2$. Note that if $\eta > 0$ for $d = 4$, then $B_L(\beta_c)$ is bounded uniformly in $L$, in which case the tree diagram bound implies the improved one. Thus, under this assumption the case requiring attention is just $\eta = 0$ (which is the generally expected value).

## 4.1 Intersection clusters

Our starting point is (3.14), in which $U_4^{\beta_c}$ is bounded by the probability of intersection of two independently distributed clusters $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(x)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(z)$, of which $\mathbf{n}_1$ and $\mathbf{n}_2$ include paths linking pairwise the widely separated sources, $\partial \mathbf{n}_1 = \{x,y\}$ and $\partial \mathbf{n}_2 = \{z,t\}$. Introduce the notation

$$ \mathcal{T} := \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(x) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(z), \quad (4.3) $$

and let $|\mathcal{T}|$ be the set's cardinality.
The tree diagram bound corresponds to the first-moment estimate

$$ \mathbf{P}_{\beta_c}^{xy,zt,\emptyset,\emptyset} [|\mathcal{T}| > 0] \leq \mathbf{E}_{\beta_c}^{xy,zt,\emptyset,\emptyset} [|\mathcal{T}|], \quad (4.4) $$

in which the intersection probability is bounded by the intersection set's expected size.

Although the set $\mathcal{T}$ is less tractable than the intersection of a pair of Markovian random walks, their intuitive example provides a useful guide. The intersection of the traces of two simple random walks in dimension $d = 4$ has a Cantor-set-like structure. Guided by this analogy, and taking advantage of the switching lemma, we show that conditioned on the event that $u$ belongs to $\mathcal{T}$, the intersection $|\mathcal{T}|$ is typically very large. This is in line with our expectation that the vertices in the intersection set occur in large (disconnected) clusters, causing the expected size of $|\mathcal{T}|$ to be much larger than the probability of it being non-zero.

Below and in the rest of this article, we define the annulus of sizes $k \le n$ and the boundary of a box as follows:

$$ \mathrm{Ann}(k,n) := \Lambda_n \setminus \Lambda_{k-1} \quad \text{and} \quad \partial\Lambda_n := \mathrm{Ann}(n,n) \tag{4.5} $$

(cf. Fig. 2).

In the proof, we apply the following deterministic covering lemma, which links the number of points in a set $\mathcal{X} \subset \mathbb{Z}^d$ with the number of concentric annuli of the form $u + \mathrm{Ann}(\ell_k, \ell_{k+1})$, with $u \in \mathcal{X}$, which it takes to cover $\mathcal{X}$. To state it we denote, for any (possibly finite) increasing sequence of lengths $\mathcal{L} = (\ell_k)$, every $u \in \mathbb{Z}^d$, and every integer $K$,

$$ \mathbf{M}_u(\mathcal{X}; \mathcal{L}, K) := \mathrm{card}\{k \le K : \mathcal{X} \cap [u + \mathrm{Ann}(\ell_k, \ell_{k+1})] \ne \emptyset\} \tag{4.6} $$

(cf. Fig. 2).
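In code, the annuli of (4.5) and the covering count (4.6) read as follows (a direct transcription using the infinity norm of (4.1); the sequence $\mathcal{L}$ is passed as a list with indices starting at $k=1$):

```python
def norm_inf(x):
    """The infinity norm |x| of (4.1)."""
    return max(abs(c) for c in x)

def in_ann(x, k, n):
    """x in Ann(k, n) = Lambda_n \\ Lambda_{k-1}, i.e. k <= |x| <= n."""
    return k <= norm_inf(x) <= n

def M(u, X, L, K):
    """The covering count M_u(X; L, K) of (4.6): the number of indices
    k <= K for which X meets the annulus u + Ann(l_k, l_{k+1}).
    L[k] plays the role of l_k; L[0] is an unused placeholder."""
    return sum(
        1
        for k in range(1, K + 1)
        if any(in_ann(tuple(a - b for a, b in zip(v, u)), L[k], L[k + 1]) for v in X)
    )

# small worked example: l_1, ..., l_5 = 1, 2, 4, 8, 16 (doubling sequence)
L = [None, 1, 2, 4, 8, 16]
X = [(0, 0), (3, 0)]
# around (0, 0) only Ann(2, 4) contains the other point (3, 0), so the count is 1
```

On this example $\mathbf{M}_u(\mathcal{X};\mathcal{L},4) = 1$ for both points, consistent with the covering inequality (4.7) of the lemma that follows: $|\mathcal{X}| = 2 \ge 2^{1/5}$.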
**Lemma 4.2 (Annular covering)** In the above notation, for any sequence $\mathcal{L} = (\ell_k)$ with $\ell_1 \ge 1$ and $\ell_{k+1} \ge 2\ell_k$,

$$ |\mathcal{X}| \ge 2^{\min\{\mathbf{M}_u(\mathcal{X};\mathcal{L},K)/5 \,:\, u \in \mathcal{X}\}}. \tag{4.7} $$

**Proof** It suffices to show that if $|\mathcal{X}| < 2^r$ for some $r$, then there exists a site $u \in \mathcal{X}$ for which $\mathbf{M}_u(\mathcal{X};\mathcal{L},K) < 5r$.

We prove the following stronger statement: for every set $\mathcal{X}$ containing the origin and every $K$, if $|\mathcal{X} \cap \Lambda_{\ell_K}| < 2^r$, then there exists $u \in \mathcal{X} \cap \Lambda_{\ell_K}$ with $\mathbf{M}_u(\mathcal{X};\mathcal{L},K) < 5r$.

The assertion is obviously true for $r=1$, as one can pick $u$ to be the origin. Next, consider the case $r > 1$, assuming the statement holds for all smaller values. If the intersection of $\mathcal{X}$ and $\Lambda_{\ell_{K-1}}$ is reduced to the origin, then $\mathbf{M}_0(\mathcal{X};\mathcal{L},K) \le 2$ (only the annuli $\mathrm{Ann}(\ell_l, \ell_{l+1})$ with $l$ equal to $K-1$ or $K$ can intersect $\mathcal{X}$), as required, so we now assume that this is not the case. Consider $0 \le k \le K-2$ maximal such that there exists $u \in \mathcal{X}$ with $\ell_k < |u| \le \ell_{k+1}$.

Since $\mathcal{X} \cap \Lambda_{\ell_{k-1}}$ and $\mathcal{X} \cap (u + \Lambda_{\ell_{k-1}})$ are disjoint (we use that $\ell_k \ge 2\ell_{k-1}$), one of the two sets has cardinality strictly smaller than $2^{r-1}$. Assume first that it is $\mathcal{X} \cap \Lambda_{\ell_{k-1}}$. The induction hypothesis implies the existence of $v \in \mathcal{X} \cap \Lambda_{\ell_{k-1}}$ such that

$$ \mathbf{M}_v(\mathcal{X};\mathcal{L},k-1) < 5(r-1). \tag{4.8} $$

By our choice of $k$, every site in $\mathcal{X}$ is either in $\Lambda_{\ell_{k+1}}$ or outside of $\Lambda_{\ell_{K-1}}$.
This implies that only the annuli $\mathrm{Ann}(\ell_l, \ell_{l+1})$ with $l$ equal to $k$, $k+1$, $K-2$, $K-1$ or $K$ can intersect $\mathcal{X}$, so that

$$ \mathbf{M}_v(\mathcal{X};\mathcal{L},K) \leq \mathbf{M}_v(\mathcal{X};\mathcal{L},k-1) + 5 < 5r. \tag{4.9} $$

Figure 2: The two (duplicated) currents $\mathbf{n}_1+\mathbf{n}_3$ and $\mathbf{n}_2+\mathbf{n}_4$, in blue and black respectively. The clusters of $x$ (or equivalently $y$) in $\mathbf{n}_1+\mathbf{n}_3$ and of $z$ (or equivalently $t$) in $\mathbf{n}_2+\mathbf{n}_4$ are depicted in bold. The red vertices are the elements of the intersection $\mathcal{T}$. We illustrate the annuli around one element $u$ of $\mathcal{T}$, drawn in gray when an intersection occurs. Here $\mathbf{M}_u(\mathcal{T}; \mathcal{L}, 5) = 3$, since three annuli contain an intersection.

If it is $\mathcal{X} \cap (u + \Lambda_{\ell_{k-1}})$ which has small cardinality, simply translate the set by $u$ and apply the same reasoning. The distance between the vertex $v$ obtained by the procedure and $0$ is at most $\ell_{k-1} + \ell_k \le \ell_K$, so the claim follows in this case as well. □

In the following conditional statement, we denote by $\mathcal{L}_\alpha$ a sequence of integers defined recursively so that $\ell_{k+1} = \ell_k^\alpha$, with a specified $\alpha > 1$ and $\ell_0$ a large enough integer.

**Proposition 4.3 (Conditional intersection-clustering bound)** *Under the assumption that the Ising model on $\mathbb{Z}^4$ satisfies (4.2) with $\eta = 0$, and restricting to $\alpha > 3^8$: there exist $\ell_0 = \ell_0(\alpha)$ and $\delta = \delta(\alpha) > 0$ such that for every $K > 2$ and every $u, x, y, z, t \in \mathbb{Z}^4$ with mutual distances between $x, y, z, t$ larger than $2\ell_K$,*

$$
\mathbf{P}_{\beta_c}^{ux,uz,uy,ut}[\mathbf{M}_u(\mathcal{T}; \mathcal{L}_\alpha, K) < \delta K] \le 2^{-\delta K}.
\quad (4.10)
$$

This estimate is proven in the next subsection. Before deriving it, let us show how it leads to the improved tree diagram bound.

**Proof of Theorem 1.3 under the assumption (4.2)** As the discussion is limited here to $\beta = \beta_c$, we omit it from the notation. If $\eta > 0$, the bubble diagram is finite and hence the desired statement is already contained in the tree diagram bound (1.22). Focus then on the case $\eta = 0$, for which the bubble diagram diverges logarithmically. Fix $\alpha > 3^8$ and let $\ell_0$ and $\delta$ be given by Proposition 4.3. Since $x, y, z, t$ are at mutual distances at least $L$, there exists $c = c(\alpha) > 0$ such that one may pick

$$
K = K(L) \geq c \log \log L \tag{4.11}
$$

in such a way that $L \ge 2\ell_K$.

Using Lemma 4.2, then the switching lemma, and finally Proposition 4.3, we get

$$
\begin{align*}
\mathbf{P}^{xy,zt,\emptyset,\emptyset}[0 < |\mathcal{T}| < 2^{\delta K/5}] & \le \sum_{u \in \mathbb{Z}^4} \mathbf{P}^{xy,zt,\emptyset,\emptyset}[u \in \mathcal{T},\ \mathbf{M}_u(\mathcal{T}; \mathcal{L}_\alpha, K) < \delta K] \\
& = \sum_{u \in \mathbb{Z}^4} \frac{\langle \sigma_u \sigma_x \rangle \langle \sigma_u \sigma_y \rangle \langle \sigma_u \sigma_z \rangle \langle \sigma_u \sigma_t \rangle}{\langle \sigma_x \sigma_y \rangle \langle \sigma_z \sigma_t \rangle} \mathbf{P}^{ux,uz,uy,ut}[\mathbf{M}_u(\mathcal{T}; \mathcal{L}_\alpha, K) < \delta K] \\
& \le 2^{-\delta K} \sum_{u \in \mathbb{Z}^4} \frac{\langle \sigma_u \sigma_x \rangle \langle \sigma_u \sigma_y \rangle \langle \sigma_u \sigma_z \rangle \langle \sigma_u \sigma_t \rangle}{\langle \sigma_x \sigma_y \rangle \langle \sigma_z \sigma_t \rangle}.
\tag{4.12}
\end{align*}
$$

For the larger values of $|\mathcal{T}|$, the Markov inequality and the switching lemma give

$$
\begin{align*}
\mathbf{P}^{xy,zt,\emptyset,\emptyset}[|\mathcal{T}| \ge 2^{\delta K/5}] &\le 2^{-\delta K/5} \mathbf{E}^{xy,zt,\emptyset,\emptyset}[|\mathcal{T}|] \\
&= 2^{-\delta K/5} \sum_{u \in \mathbb{Z}^4} \frac{\langle \sigma_u \sigma_x \rangle \langle \sigma_u \sigma_y \rangle \langle \sigma_u \sigma_z \rangle \langle \sigma_u \sigma_t \rangle}{\langle \sigma_x \sigma_y \rangle \langle \sigma_z \sigma_t \rangle}. \tag{4.13}
\end{align*}
$$

Adding (4.12) and (4.13) gives an improved tree diagram bound which, in view of (4.11) and of the logarithmic divergence of $B_L(\beta_c)$ implied by $\eta = 0$, yields (1.24). $\square$

## 4.2 Derivation of the conditional intersection-clustering bound (Proposition 4.3)

The intuition underlying the conditional intersection-clustering bound and the choice of $\ell_k$ is guided by the aforementioned example of simple random walks. In dimension 4, the traces of two independent random walks starting at the origin intersect in an annulus of the form $\text{Ann}(n, n^\alpha)$ with probability at least $c(\alpha) > 0$, uniformly in $n$. Since the paths traced by these random walks within different annuli are roughly independent, one may expect the number of annuli, among the first $K$, in which the paths intersect to be, with large probability, of the order of $\delta K$.

However, in the case considered here, the clusters of $u$ in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$ do not have the renewal structure of Markovian random walks. We shall compensate for that in two steps:

(i) *reformulate the intersection property*,

(ii) *derive an asymptotic mixing statement*.
For the first step, let $I_k$ be the event (with $I$ standing for intersection) that there exist unique clusters of $\text{Ann}(\ell_k, \ell_{k+1})$ in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$ crossing the annulus from the inner boundary to the outer boundary, and that these two clusters intersect. Lemma 4.4 states that the probability of this event is bounded away from 0 uniformly in $k$.

Note that the annuli $\text{Ann}(\ell_k, \ell_{k+1})$ are wide enough so that sourceless currents will typically have no radial crossing, and when such crossings are forced by the placement of sources (for instance when one source is at the common center of a family of nested annuli and the other at a distant site outside), in each annulus there will most likely be only one crossing cluster. It then follows that all the crossing clusters of $\mathbf{n}_1 + \mathbf{n}_3$ belong to the $\mathbf{n}_1 + \mathbf{n}_3$ cluster of the sources, and a similar property holds for the crossing clusters of $\mathbf{n}_2 + \mathbf{n}_4$.

For the second step, we prove that events observed within sufficiently separated annuli are roughly independent. The exact assertion is presented below in Proposition 4.6 and will be the crux of the whole paper.

Following is the first of these two statements.

**Lemma 4.4 (Conditional intersection-clustering property)** Assume (4.2) holds for the Ising model on $\mathbb{Z}^4$ with $\eta = 0$. For $\alpha > 3^4$, there exist $\ell_0 = \ell_0(\alpha)$ and $c = c(\alpha, \ell_0) > 0$ such that for every $x, z \notin \Lambda_{2\ell_{k+1}}$,

$$
\mathbf{P}_{\beta_c}^{0x,0z,\emptyset,\emptyset}[I_k] \geq c.
\tag{4.14}
$$

The main ingredient in the proof is a second moment method applied to the number of intersections in $\text{Ann}(\ell_k, \ell_{k+1})$ of the clusters of the origin in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$. A second part of the proof is devoted to the uniqueness of the clusters crossing the annulus. This makes the event under consideration measurable in terms of the currents within just the specified annulus, allowing us to apply the mixing property in the proof of Proposition 4.3, which follows further below.

**Proof** Drop $\beta_c$ from the notation. Fix $\alpha > 3^4$ and choose $\varepsilon > 0$ so that $\alpha > (1+\varepsilon)(3+\varepsilon)^4$. The constants $c_i, C_i$ below depend on $\varepsilon$ only. Introduce intermediary integers $n \le m \le M \le N$ satisfying

$$
n \ge \ell_k^{3+\varepsilon}, \quad m \ge n^{3+\varepsilon}, \quad M \ge m^{1+\varepsilon}, \quad N \ge M^{3+\varepsilon}, \quad \ell_{k+1} \ge N^{3+\varepsilon}. \tag{4.15}
$$

We start by proving that $\mathcal{M} := \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap \text{Ann}(m, M)$ is non-empty with positive probability, by applying a second-moment method to $|\mathcal{M}|$. Namely, the switching lemma (more precisely (A.10)) and (4.2) imply that

$$
\begin{align*}
\mathbf{E}^{0x,0z,\emptyset,\emptyset}[|\mathcal{M}|] &= \sum_{v \in \text{Ann}(m,M)} \mathbf{P}^{0x,\emptyset}[v \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] \, \mathbf{P}^{0z,\emptyset}[v \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] \\
&= \sum_{v \in \text{Ann}(m,M)} \frac{\langle \sigma_0 \sigma_v \rangle \langle \sigma_v \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} \frac{\langle \sigma_0 \sigma_v \rangle \langle \sigma_v \sigma_z \rangle}{\langle \sigma_0 \sigma_z \rangle} \\
&\ge c_1 (B_M - B_{m-1}) \ge c_2 \log(M/m).
\tag{4.16}
\end{align*}
$$

On the other hand, we find that

$$
\mathbf{E}^{0x,0z,\emptyset,\emptyset}[|\mathcal{M}|^2] = \sum_{v,w \in \text{Ann}(m,M)} \mathbf{P}^{0x,\emptyset}[v,w \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] \, \mathbf{P}^{0z,\emptyset}[v,w \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0]. \tag{4.17}
$$

Now, by a delicate application of the switching lemma and a monotonicity argument, we have the following inequality (stated and proven as Proposition A.3 in the Appendix):

$$
\mathbf{P}^{0x,\emptyset}[v, w \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] \leq \frac{\langle\sigma_0\sigma_v\rangle\langle\sigma_v\sigma_w\rangle\langle\sigma_w\sigma_x\rangle}{\langle\sigma_0\sigma_x\rangle} + \frac{\langle\sigma_0\sigma_w\rangle\langle\sigma_w\sigma_v\rangle\langle\sigma_v\sigma_x\rangle}{\langle\sigma_0\sigma_x\rangle}. \tag{4.18}
$$

Together with (4.2), this gives

$$
\mathbf{E}^{0x,0z,\emptyset,\emptyset}[|\mathcal{M}|^2] \le C_3(B_M - B_{m-1}) B_{2M} \le C_4(\log M)^2. \tag{4.19}
$$

The second moment (or Cauchy-Schwarz) inequality and the bound $M \ge m^{1+\varepsilon}$ thus imply

$$
\mathbf{P}^{0x,0z,\emptyset,\emptyset}[\mathcal{M} \neq \emptyset] \geq \frac{\mathbf{E}^{0x,0z,\emptyset,\emptyset}[|\mathcal{M}|]^2}{\mathbf{E}^{0x,0z,\emptyset,\emptyset}[|\mathcal{M}|^2]} \geq c_5 > 0. \tag{4.20}
$$

At this stage, one may feel that the main point of the lemma has been established: we showed that with uniformly positive probability the clusters of 0 in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$ intersect in $\text{Ann}(m, M)$. However, to conclude the argument we need to establish the uniqueness, with large probability, of the crossing cluster in $\mathbf{n}_1 + \mathbf{n}_3$ (the same then holds true for $\mathbf{n}_2 + \mathbf{n}_4$). This part of the proof is slightly more technical and may be omitted on a first reading. It is here that we shall need $\alpha$ to be large enough.
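Before turning to uniqueness, let us trace the constant $c_5$ of (4.20) explicitly, as a check on the second-moment step (a routine computation, not taken from the text): since $M \ge m^{1+\varepsilon}$ gives $\log(M/m) \ge \frac{\varepsilon}{1+\varepsilon}\log M$, the bounds (4.16) and (4.19) combine to

$$
\mathbf{P}^{0x,0z,\emptyset,\emptyset}[\mathcal{M} \neq \emptyset] \;\geq\; \frac{\big(c_2 \log(M/m)\big)^2}{C_4 (\log M)^2} \;\geq\; \frac{c_2^2\, \varepsilon^2}{C_4 (1+\varepsilon)^2} \;=:\; c_5,
$$

which is indeed a constant depending on $\varepsilon$ only, uniform in $k$, $m$, and $M$.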
To prove the uniqueness of crossings, we employ the notion of the current's *backbone*⁴, on which more can be found in [1, 3, 12, 13, 15]. If the event $\{\mathcal{M} \neq \emptyset\}$ occurs but not $I_k$, then one of the following four events must occur (see e.g. Fig. 3):

$F_1 := \text{the backbone } \Gamma(\mathbf{n}_1) \text{ of } \mathbf{n}_1 \text{ makes two successive crossings of } \mathrm{Ann}(\ell_k, n);$

$F_2 := \mathbf{n}_1 + \mathbf{n}_2 \text{ contains a cluster crossing } \mathrm{Ann}(n, m) \setminus \Gamma(\mathbf{n}_1);$

$F_3 := \mathbf{n}_1 + \mathbf{n}_2 \text{ contains a cluster crossing } \mathrm{Ann}(M, N) \setminus \Gamma(\mathbf{n}_1);$

$F_4 := \text{the backbone } \Gamma(\mathbf{n}_1) \text{ of } \mathbf{n}_1 \text{ makes two successive crossings of } \mathrm{Ann}(N, \ell_{k+1}).$

We bound the probabilities of these events separately. For $F_1$ to occur, the backbone $\Gamma(\mathbf{n}_1)$ must make a zigzag: go from 0 to a vertex $v \in \partial\Lambda_n$, then to a vertex $w \in \partial\Lambda_{\ell_k}$, and finally to $x$. The *chain rule* for backbones (see e.g. [3]), combined with the assumed condition (4.2), implies that

$$
\mathbf{P}^{0x,\emptyset}[F_1] \leq \sum_{\substack{v \in \partial\Lambda_n \\ w \in \partial\Lambda_{\ell_k}}} \frac{\langle \sigma_0 \sigma_v \rangle \langle \sigma_v \sigma_w \rangle \langle \sigma_w \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} \leq C_6 n^3 \ell_k^3 n^{-4} \leq C_7 \ell_k^{-\varepsilon}. \tag{4.21}
$$

To bound the probability of $F_2$, condition on $\Gamma(\mathbf{n}_1)$. The remaining current in $\mathbf{n}_1$ is then a sourceless current with depleted coupling constants (see [3, 12, 13] for details on this type of reasoning).
The probability that some $v \in \partial\Lambda_n$ and $w \in \partial\Lambda_m$ are connected to each other in $\mathbb{Z}^4 \setminus \Gamma(\mathbf{n}_1)$ can then be bounded by $\langle\sigma_v\sigma_w\rangle\langle\sigma_v\sigma_w\rangle'$, where $\langle\cdot\rangle'$ denotes an Ising measure with depleted coupling constants (the depletion depends on $\Gamma(\mathbf{n}_1)$, and the switching lemma here concerns one current with depletion and one without; we refer to [4] for the statement and proof of the switching lemma in this context, and some applications). At the risk of repeating ourselves, we refer to [3] for an illustration of this line of reasoning. The Griffiths inequality [22] implies that this probability is bounded by $\langle\sigma_v\sigma_w\rangle^2$, which together with (4.2) immediately leads to

$$
\mathbf{P}^{0x,\emptyset}[F_2] \leq \sum_{\substack{v \in \partial\Lambda_n \\ w \in \partial\Lambda_m}} \langle\sigma_v \sigma_w\rangle^2 \leq C_8 n^{-\varepsilon}. \tag{4.22}
$$

The event $F_4$ is bounded similarly to $F_1$, and $F_3$ similarly to $F_2$. For $\ell_0 = \ell_0(\varepsilon)$ large enough, the sum of the four probabilities does not exceed half of the constant $c_5$ in (4.20), and the main statement follows. $\square$

**Remark 4.5** The condition $\alpha > 3^4$ is used in the second part of the proof, where we need the exponent connecting the inner and outer radii of the annuli to be strictly larger than 3. We did not try to improve on this exponent.

The second of the above described statements is one of the main innovations of this paper. It concerns a mixing property, which in Section 6.1 will be stated in a more general form and derived unconditionally for every $d \ge 4$.

⁴We mentioned that a current $\mathbf{n}$ with sources $x$ and $y$ can be seen as the superposition of a path from $x$ to $y$ and loops. The backbone $\Gamma(\mathbf{n})$ is an appropriate choice of such a path, induced by an ordering of the edges.
Again, we refrain from providing more details here and refer to the relevant literature for this notion.

Figure 3: In this picture, $\Gamma(\mathbf{n}_1)$ makes only one crossing of $\text{Ann}(\ell_k, n)$, and $\mathbf{n}_1 + \mathbf{n}_2 - \Gamma(\mathbf{n}_1)$ does not cross $\text{Ann}(n, m) \setminus \Gamma(\mathbf{n}_1)$. This prevents the cluster in red, made of loops in $\mathbf{n}_1+\mathbf{n}_2-\Gamma(\mathbf{n}_1)$, from connecting an excursion of $\Gamma(\mathbf{n}_1)$ outside of $\Lambda_{\ell_k}$ which does not reach $\partial\Lambda_n$ to $\partial\Lambda_m$ (which would potentially create an additional cluster crossing $\text{Ann}(\ell_k, m)$).

**Proposition 4.6 (Conditional mixing property)** Assume that the complementary pair of power law bounds (4.2) holds for the Ising model on $\mathbb{Z}^4$ with $\eta = 0$, and fix $\alpha > 3^8$. Then there exists $C > 0$ such that for every $n^\alpha \le N$, every $x \notin \Lambda_N$, and every pair of events $E$ and $F$ depending on the restriction of $\mathbf{n}$ to edges within $\Lambda_n$ and outside of $\Lambda_N$ respectively,

$$
| \mathbf{P}_{\beta_c}^{0x}[E \cap F] - \mathbf{P}_{\beta_c}^{0x}[E] \mathbf{P}_{\beta_c}^{0x}[F] | \le \frac{C}{\sqrt{\log(N/n)}}. \tag{4.23}
$$

The heart of the proof will be the use of a (random) resolution of identity $\mathbf{N}$, meaning a random variable which is concentrated around 1, given by a weighted sum of indicator functions $\mathbb{I}[y \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0]$ with $y \in \mathbb{Z}^d$, where $\partial \mathbf{n}_1 = \{0, x\}$ and $\partial \mathbf{n}_2 = \emptyset$, which will enable us to write

$$
\mathbf{P}^{0x}[E \cap F] \approx \mathbf{E}^{0x,\emptyset}[\mathbf{N}\,\mathbb{I}[\mathbf{n}_1 \in E \cap F]].
\tag{4.24}
$$

Since $\mathbf{N}$ will be a certain convex combination of the random variables $\mathbb{I}[y \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] / \langle\sigma_0 \sigma_y\rangle$, the term on the right will be a convex sum of $\mathbf{P}^{0x,\emptyset}$-probabilities of the events $\{y \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0, \mathbf{n}_1 \in E \cap F\}$. For each fixed $y$, we will use the switching principle to transform the sources $\{0, x\}$ and $\emptyset$ of $\mathbf{n}_1$ and $\mathbf{n}_2$ into $\{0, y\}$ and $\{y, x\}$, exchanging at the same time the roles of $\mathbf{n}_1$ and $\mathbf{n}_2$ inside $\Lambda_n$ without changing anything outside $\Lambda_N$. This useful operation has a nice byproduct: the event $\mathbf{n}_1 \in F$ becomes $\mathbf{n}_2 \in F$, which is independent of $\mathbf{n}_1 \in E$. Deducing the mixing from there will be a matter of elementary algebraic manipulations.

The error term will be (almost entirely) due to how concentrated $\mathbf{N}$ is around 1. In order to prove this fact, we will implement a refined second moment method in which we estimate the expectation and the second moment of $\mathbf{N}$ sharply. The proof will require a regularity assumption on the gradient of the two-point function: for every $x \in \mathbb{Z}^d$,

$$
|\nabla_x \langle \sigma_0 \sigma_x \rangle| \leq \frac{C}{|x|} \langle \sigma_0 \sigma_x \rangle, \tag{4.25}
$$

which follows from (4.2) by an argument that we choose to postpone to Section 5.5 (after the required technology has been introduced).

**Proof** Let us recall that we are discussing here $\beta = \beta_c$, omitting the symbol from the notation. Fix $\alpha > 3^4$ (the power 4 instead of 8 suffices at this stage) and choose $\varepsilon > 0$ so that $\alpha > (1+\varepsilon)(9+\varepsilon)^2$.
Below, the constants $C_i$ are independent of $n$ and $N$, and we may assume equality $N = n^\alpha$ without loss of generality. Introduce two intermediary integers $m \le M$ satisfying

$$m \ge n^{9+\varepsilon}, \quad M \ge m^{1+\varepsilon}, \quad N \ge M^{9+\varepsilon}, \tag{4.26}$$

as well as the notation $n_k = 2^k m$ for $k \ge 1$. Set $K$ such that $n_{K+1} \le M < n_{K+2}$. The key to our proof will be the random variable

$$\mathbf{N} := \frac{1}{K} \sum_{k=1}^{K} \frac{1}{\alpha_k} \sum_{y \in \text{Ann}(n_k, n_{k+1})} \mathbb{I}[y \stackrel{\mathbf{n}_1 + \mathbf{n}_2}{\longleftrightarrow} 0] \quad \text{where } \alpha_k := \sum_{y \in \text{Ann}(n_k, n_{k+1})} \langle \sigma_0 \sigma_y \rangle. \tag{4.27}$$

Combining the regularity assumption (4.25) and (4.2) with Proposition A.3 (the precise computation is presented in Section 6.2), we find

$$\mathbf{E}^{0x,\emptyset}[\mathbf{N}] = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{\alpha_k} \sum_{y \in \text{Ann}(n_k, n_{k+1})} \frac{\langle \sigma_0 \sigma_y \rangle \langle \sigma_y \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} \geq 1 - \frac{C_1}{K}, \tag{4.28}$$

$$\mathbf{E}^{0x,\emptyset}[\mathbf{N}^2] \leq \frac{1}{K^2} \sum_{k,l=1}^{K} \frac{1}{\alpha_k \alpha_l} \sum_{\substack{y \in \text{Ann}(n_k, n_{k+1}) \\ z \in \text{Ann}(n_l, n_{l+1})}} \frac{\langle \sigma_0 \sigma_y \rangle \langle \sigma_y \sigma_z \rangle \langle \sigma_z \sigma_x \rangle + \langle \sigma_0 \sigma_z \rangle \langle \sigma_z \sigma_y \rangle \langle \sigma_y \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} \leq 1 + \frac{C_2}{K}.
\tag{4.29}$$

The Cauchy-Schwarz inequality and the fact that $\mathbf{P}^{0x}[E \cap F] = \mathbf{P}^{0x,\emptyset}[\mathbf{n}_1 \in E \cap F]$ thus imply that

$$|\mathbf{P}^{0x,\emptyset}[\mathbf{n}_1 \in E \cap F] - \mathbf{E}^{0x,\emptyset}[\mathbf{N}\mathbb{I}_{\mathbf{n}_1 \in E \cap F}]| \leq \sqrt{\mathbf{E}^{0x,\emptyset}[(\mathbf{N}-1)^2]} \leq \frac{C_3}{\sqrt{K}}. \tag{4.30}$$

Now, fix $y \in \text{Ann}(m, M)$ and let $G(y)$ be the event (depending on $\mathbf{n}_1 + \mathbf{n}_2$ only) that there exists $\mathbf{k} \le \mathbf{n}_1 + \mathbf{n}_2$ such that $\mathbf{k}=0$ on $\Lambda_n$, $\mathbf{k}=\mathbf{n}_1+\mathbf{n}_2$ outside $\Lambda_N$, and $\partial\mathbf{k} = \{x,y\}$. We find that

$$\mathbf{P}^{0x,\emptyset}[\mathbf{n}_1 \in E \cap F,\ y \stackrel{\mathbf{n}_1 + \mathbf{n}_2}{\longleftrightarrow} 0,\ G(y)] = \frac{\langle \sigma_0 \sigma_y \rangle \langle \sigma_y \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} \mathbf{P}^{0y,yx}[\mathbf{n}_1 \in E, \mathbf{n}_2 \in F, G(y)], \tag{4.31}$$

where we use the following reasoning: for $\mathbf{m} \in G(y)$, consider the multi-graph $\mathcal{M}$ obtained by duplicating every edge $\{u,v\}$ of the graph into $\mathbf{m}_{uv}$ parallel edges.
If $G(y)$ occurs, the existence of $\mathbf{k}$ guarantees the existence of a subgraph $\mathcal{K} \subset \mathcal{M}$ with $\partial\mathcal{K} = \{x, y\}$ containing no edge with endpoints in $\Lambda_n$ and all those of $\mathcal{M}$ with endpoints outside $\Lambda_N$, so that the generalized switching principle formulated in [5, Lemma 2.1] implies that

$$
\begin{align}
\sum_{\substack{\mathcal{T} \subset \mathcal{M} \\ \partial\mathcal{T} = \{0, x\}}} \mathbb{I}[\mathcal{T} \in E \cap F] = \sum_{\substack{\mathcal{T} \subset \mathcal{M} \\ \partial\mathcal{T} = \{0, y\}}} \mathbb{I}[\mathcal{T} \,\Delta\, \mathcal{K} \in E \cap F] = \sum_{\substack{\mathcal{T} \subset \mathcal{M} \\ \partial\mathcal{T} = \{0, y\}}} \mathbb{I}[\mathcal{T} \in E,\ \mathcal{M} \setminus \mathcal{T} \in F], \tag{4.32}
\end{align}
$$

where the first equality is the change of variables $\mathcal{T} \mapsto \mathcal{T} \Delta \mathcal{K}$ (which shifts $\partial\mathcal{T}$ by $\partial\mathcal{K} = \{x,y\}$), and the second uses that $\mathcal{T} \Delta \mathcal{K}$ coincides with $\mathcal{T}$ on $\Lambda_n$ and with $\mathcal{M} \setminus \mathcal{T}$ outside $\Lambda_N$. Here we allow ourselves the latitude of calling $E$ and $F$ the events defined for multi-graphs corresponding to the events $E$ and $F$ for currents. One gets (4.31) by rephrasing this equality in terms of weighted currents (exactly as in standard proofs of the switching principle, see e.g. [1] or [5] for a closely related reasoning).

Observe now that forgetting about $G(y)$ on the right-hand side of (4.31) gives

$$
\mathbf{P}^{0y,yx}[\mathbf{n}_1 \in E, \mathbf{n}_2 \in F] = \mathbf{P}^{0y}[E]\mathbf{P}^{yx}[F]. \tag{4.33}
$$

Furthermore, since $x \notin \Lambda_N$ and $y \in \Lambda_M$, (4.25) implies that

$$
\left| \frac{\langle \sigma_0 \sigma_y \rangle \langle \sigma_y \sigma_x \rangle}{\langle \sigma_0 \sigma_x \rangle} - \langle \sigma_0 \sigma_y \rangle \right| \leq \frac{C_4 M}{N} \langle \sigma_0 \sigma_y \rangle. \tag{4.34}
$$

Last but not least, we can bound (from below) $\mathbf{P}^{0x,\emptyset}[G(y)]$ and $\mathbf{P}^{0y,yx}[G(y)]$ as follows. We only briefly describe the argument since we will present it in full detail in Section 6.2.
The event $G(y)$ clearly contains the event that $\text{Ann}(M,N)$ is not crossed by a cluster in $\mathbf{n}_1$ and $\text{Ann}(n,m)$ is not crossed by a cluster in $\mathbf{n}_2$, since in such a case $\mathbf{k}$ can be defined as the sum of $\mathbf{n}_1$ restricted to the clusters intersecting $\Lambda_N^c$ (this current has no sources) and $\mathbf{n}_2$ restricted to the clusters intersecting $\Lambda_m^c$ (this current has sources $x$ and $y$). Now, we can bound the probability of $\mathbf{n}_1$ crossing $\text{Ann}(M,N)$ in the same spirit as we bounded the probabilities of $F_1$ and $F_3$ in the previous proof: split $\text{Ann}(M,N)$ into the two annuli $\text{Ann}(\sqrt{MN}, N)$ and $\text{Ann}(M, \sqrt{MN})$, then estimate the probability that the backbone of $\mathbf{n}_1$ crosses the inner annulus more than once, and then the probability that the remaining current (which is sourceless) crosses the outer annulus. Doing the same for the probability that a cluster of $\mathbf{n}_2$ crosses $\text{Ann}(n,m)$, we find that

$$
\frac{\langle \sigma_0 \sigma_x \rangle}{\langle \sigma_0 \sigma_y \rangle \langle \sigma_y \sigma_x \rangle} \mathbf{P}^{0x,\emptyset}[G(y),\ y \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] = \mathbf{P}^{0y,yx}[G(y)] \geq 1 - \frac{C_5}{n^\varepsilon} \geq 1 - C_6 \left(\frac{n}{N}\right)^{\varepsilon/(\alpha-1)}. \tag{4.35}
$$

Note that we use that $N \ge M^{9+\varepsilon}$ in this part of the proof.

Overall, the value of $K$ and (4.30)–(4.35) put together imply

$$
\Big|\mathbf{P}^{0x}[E \cap F] - \sum_{y \in \mathrm{Ann}(m,M)} \delta(y) \mathbf{P}^{0y}[E] \mathbf{P}^{yx}[F]\Big| \leq \frac{C_7}{\sqrt{\log(N/n)}}, \tag{4.36}
$$

with $\delta(y) = \langle\sigma_0\sigma_y\rangle/(K\alpha_{k(y)})$, where $k(y)$ is such that $y \in \mathrm{Ann}(n_{k(y)}, n_{k(y)+1})$.

The end of the proof is now a matter of elementary algebraic manipulations.
Applying this inequality twice (once with $x$ and once with $x'$) with $F$ the full event, we obtain that for every $x, x' \notin \Lambda_N$ and every event $E$ depending on $\Lambda_n$ only,

$$
|\mathbf{P}^{0x}[E] - \mathbf{P}^{0x'}[E]| \leq \frac{2C_7}{\sqrt{\log(N/n)}}. \tag{4.37}
$$

Now, make the stronger assumption that $\alpha > 3^8$ and fix $m = \lfloor\sqrt{Nn}\rfloor$. Applying

• (4.36) for $m$ and $N$, the full event and $F$,

• then (4.37) for $n$, $m$ and $E$ (note that $m \ge n^3$),

• and (4.36) for $m$ and $N$, $E$ and $F$,

gives that for every $x \notin \Lambda_N$,

$$
\begin{align*}
& | \mathbf{P}^{0x}[E \cap F] - \mathbf{P}^{0x}[E] \mathbf{P}^{0x}[F] | \\
&\leq |\mathbf{P}^{0x}[E \cap F] - \mathbf{P}^{0x}[E] \sum_{y} \delta(y) \mathbf{P}^{yx}[F]| + \frac{C_7}{\sqrt{\log(N/n)}} \\
&\leq |\mathbf{P}^{0x}[E \cap F] - \sum_{y} \delta(y) \mathbf{P}^{0y}[E] \mathbf{P}^{yx}[F]| + \frac{3C_7}{\sqrt{\log(N/n)}} \\
&\leq \frac{4C_7}{\sqrt{\log(N/n)}}. \tag{4.38}
\end{align*}
$$

$\square$

Using Lemma 4.4 and Proposition 4.6, we may now establish the clustering of intersections under (4.2).

**Proof of Proposition 4.3** In view of the translation invariance of the claimed statement, we take $u$ to be the origin. Since $x$ and $y$ are at a distance larger than $2\ell_K$ from each other, one of them is at a distance at least $\ell_K$ from $u$. Without loss of generality we take that to be $x$, and make a similar assumption about $z$.

Let $\mathcal{S}_K$ denote the set of subsets of $\{1, \dots, K-2\}$ containing even integers only, and fix $S \in \mathcal{S}_K$. Let $A_S$ be the event that none of the events $I_k$, $k \in S$, occurs.
If $s$ denotes the maximal element of $S$, the mixing property Proposition 4.6 used with $n = \ell_{s-1}$ and $N = \ell_s$ gives

$$ \mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S] \leq \mathbf{P}^{0x,0z,\emptyset,\emptyset}[I_s^c]\,\mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_{S\setminus\{s\}}] + \frac{C}{\sqrt{\log \ell_{s-1}}}. \tag{4.39} $$

To be precise and honest, we use a multi-current version, with four currents, of the mixing property. We will state and prove this property in Sections 6.1 and 6.2, and ignore this additional difficulty for now. Also, it is here that the stronger restriction $\alpha > 3^8$ is used, along with the choice of $\ell_0 = \ell_0(\alpha)$, to enable the mixing. Note that we used that the event $I_s$ is expressed in terms of just the restriction of the currents $\mathbf{n}_1, \dots, \mathbf{n}_4$ to $\text{Ann}(\ell_s, \ell_{s+1})$.

Now, the intersection property Lemma 4.4 and an elementary bound on $\ell_{s-1}$ give the existence of $c_0 > 0$ small enough that

$$ \mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S] \le (1-2c_0)\mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_{S\setminus\{s\}}] + c_0(1-c_0)^{|S|-1}. \tag{4.40} $$

An induction immediately gives that for every $S \in \mathcal{S}_K$,

$$ \mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S] \le (1-c_0)^{|S|}. \tag{4.41} $$

Let $B_S \subset A_S$ be the event that the clusters of 0 in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$ do not intersect in any of the annuli $\text{Ann}(\ell_s, \ell_{s+1})$ for $s \in S$. Thanks to Corollary A.2, the probability of $B_S$ increases when removing sources, so that

$$ \mathbf{P}^{0x,0z,0y,0t}[B_S] \le \mathbf{P}^{0x,0z,\emptyset,\emptyset}[B_S] \le \mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S] \le (1-c_0)^{|S|}.
\tag{4.42} $$

To conclude, observe that if $\mathbf{M}_0(\mathcal{T}; \mathcal{L}_\alpha, K) < \delta K$, then there must exist a set $S \in \mathcal{S}_K$ of cardinality at least $(\frac{1}{2} - \delta)K$ such that $B_S$ occurs. We deduce that

$$
\begin{aligned}
\mathbf{P}^{0x,0z,0y,0t}[\mathbf{M}_0(\mathcal{T}; \mathcal{L}_\alpha, K) < \delta K] &\le \sum_{S \in \mathcal{S}_K : |S| \ge (1/2 - \delta)K} \mathbf{P}^{0x,0z,0y,0t}[B_S] \\
&\le \binom{K/2}{\delta K} (1-c_0)^{(1/2-\delta)K},
\end{aligned}
\tag{4.43}
$$

which implies the claim by choosing the value of $\delta$ appropriately. $\square$

# 5 Weak regularity of the two-point function

Progressing towards the unconditional proof of Theorem 1.3, we establish in this section the abundance, below the correlation length, of regular scales at which the two-point function has properties similar to those it would have under the power-law decay assumption (4.2). This auxiliary result is stated here as Theorem 5.12.

Towards this goal we focus here on the two-point function, and present some old and new observations. In particular, we discuss the following four properties of the two-point function:

(i) *monotonicity* (Section 5.1),

(ii) *sliding-scale spatial Infrared Bound* (Section 5.3),

(iii) *gradient estimate* (Section 5.5),

(iv) a lower bound for the two-point function at $\beta_c$.

The first three are based on the reflection-positivity of the n.n.f. interaction, and apply to systems of real valued variables of arbitrary (but common) distribution of sub-gaussian growth, i.e. satisfying (2.2). That includes the Ising and $\varphi^4$ variables, which are of particular interest for us. The last item is proven for systems with spins in the GS class.

**Some unifying notation:** In statements which apply to both the Ising and $\varphi^4$ systems, we shall refer to the spin/field variables by the "neutral" symbol $\tau$.
Its a-priori distribution is denoted $\rho(d\tau)$. It may be displayed as a subscript, but will also often be omitted.

The expectation value functional with respect to the Gibbs measure, or functional integral, for a system in the domain $\Lambda$ is denoted $\langle \cdot \rangle_{\Lambda,\rho,\beta}$, with $\langle \cdot \rangle_{\rho,\beta}$ denoting the states' natural infinite volume limit.

We also denote by $\beta_c(\rho)$ (or just $\beta_c$ when the spins' a-priori distribution $\rho$ is clear from the context) the critical inverse temperature, and by $\xi(\rho,\beta)$ the correlation length.

Throughout this section $|J| := \sum_y J_{0,y}$ and

$$
S_{\rho,\beta}(x) := \langle \tau_0 \tau_x \rangle_{\rho,\beta}. \tag{5.1}
$$

We refer to points in $\mathbb{R}^d$ as $x = (x_1, \dots, x_d)$ and denote by $e_j$ the unit vector with $x_j = 1$.

## 5.1 Messager-Miracle-Solé monotonicity for the two-point function

The Messager-Miracle-Solé (MMS) inequality [30, 37, 41] states that for models with n.n.f. interactions (and more generally reflection-positive interactions) in a region $\Lambda$ endowed with reflection symmetry, the correlation function $\langle \prod_{x \in A} \tau_x \prod_{x \in B} \tau_x \rangle_{\Lambda, \rho, \beta}$ at sets of sites $A$ and $B$ which are on the same side of a reflection plane can only decrease when $B$ is replaced by its reflected image $\mathcal{R}(B)$, i.e.

$$
\left\langle \prod_{x \in A} \tau_x \prod_{x \in B} \tau_x \right\rangle_{\Lambda, \rho, \beta} \geq \left\langle \prod_{x \in A} \tau_x \prod_{x \in \mathcal{R}(B)} \tau_x \right\rangle_{\Lambda, \rho, \beta}. \tag{5.2}
$$

In the infinite volume limit on $\mathbb{Z}^d$, this principle can be invoked for reflections with respect to

• hyperplanes passing through vertices or mid-edges, i.e. reflections changing only one coordinate $x_i$, which is sent to $L - x_i$ for some fixed $L \in \frac{1}{2}\mathbb{Z}$,

• "diagonal" hyperplanes, i.e.
reflections changing only two coordinates $x_i$ and $x_j$, which are sent to $x_j \pm L$ and $x_i \mp L$ respectively, for some $L \in \mathbb{Z}$.

In particular, this implies the following useful comparison principle.

**Proposition 5.1 (MMS monotonicity)** For the n.n.f. model on $\mathbb{Z}^d$ ($d \ge 1$) with real valued spin variables satisfying (2.2):

i) along the principal axes the two-point function is monotone decreasing in $\|x\|_\infty$;

ii) for any $x = (x_1, \dots, x_d) \in \mathbb{Z}^d$,

$$
S_{\rho,\beta}((\|x\|_{\infty}, 0_{\perp})) \geq S_{\rho,\beta}(x) \geq S_{\rho,\beta}((\|x\|_{1}, 0_{\perp})), \tag{5.3}
$$

where $\|x\|_1 := \sum_{j=1}^{d} |x_j|$, $\|x\|_{\infty} := \max_j |x_j|$, and $0_{\perp}$ is the null vector in $\mathbb{Z}^{d-1}$.

The above carries the useful implication that for any $x, y \in \mathbb{Z}^d$ with $\|y\|_{\infty} \ge d \|x\|_{\infty}$,

$$S_{\rho,\beta}(x) \ge S_{\rho,\beta}(y), \tag{5.4}$$

since $\|x\|_1 \le d \|x\|_\infty \le \|y\|_\infty$, so that by (5.3) the two quantities are on the correspondingly opposite sides of $S_{\rho,\beta}((\|x\|_1, 0_\perp))$.

We shall also encounter below monotonicity statements for the Fourier transform. Both are useful in extracting point-wise implications from bounds on the corresponding two-point function's bulk averages (as in Corollary 5.8 below).

## 5.2 The two-point function's Fourier transform

In view of the model's translation invariance it is natural to consider the system's behavior also through its Fourier spin-wave modes. These are defined as

$$\hat{\tau}(p) := \frac{1}{\sqrt{(2L)^d}} \sum_{x \in (-L,L]^d} e^{ip \cdot x} \tau_x \tag{5.5}$$

with $p$ ranging over $\Lambda_L^* := [-\pi, \pi)^d \cap \frac{\pi}{L}\mathbb{Z}^d$.
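For convenience, and writing $\hat{\tau}(p)$ for the spin-wave mode (5.5), let us record the routine inversion and Parseval relations which follow from the orthonormality of the characters $\{(2L)^{-d/2} e^{ip \cdot x}\}_{p \in \Lambda_L^*}$ on $(-L,L]^d$ (a standard identity, stated here as background rather than taken from the text):

$$
\tau_x = \frac{1}{\sqrt{(2L)^d}} \sum_{p \in \Lambda_L^*} e^{-ip \cdot x}\, \hat{\tau}(p), \qquad \sum_{x \in (-L,L]^d} \tau_x^2 = \sum_{p \in \Lambda_L^*} |\hat{\tau}(p)|^2.
$$

These identities underlie the single-mode decomposition of the periodic Hamiltonian and the sum rule discussed below.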
These variables are especially relevant in case the Hamiltonian is taken with periodic boundary conditions, under which sites $x, y \in \Lambda_L$ are neighbors if either $\|x - y\|_1 = 1$ or $|y_i - x_i| = 2L - 1$ for some $i \in \{1, \dots, d\}$. With these boundary conditions, the model is invariant under cyclic shifts, and its Hamiltonian decomposes into a sum of single-mode contributions:

$$H_{\Lambda_L}(\tau) = \sum_{p \in \Lambda_L^*} \mathcal{E}(p) |\hat{\tau}(p)|^2, \tag{5.6}$$

with

$$\mathcal{E}(p) := 2 \sum_{j=1}^{d} [1 - \cos(p_j)] = 4 \sum_{j=1}^{d} \sin^2(p_j/2). \tag{5.7}$$

Among the various relations in which the Fourier transform plays a useful role, the following statements will be of relevance for our discussion.

i) The spin-wave modes' second moment coincides with the finite volume Fourier transform of the two-point correlation function:

$$\widehat{S}_{\rho,\beta}^{(L)}(p) := \sum_{x \in \Lambda_L} e^{ip \cdot x} \langle \tau_0 \tau_x \rangle_{\Lambda_L, \rho, \beta}^{(b.c.)} = \langle |\hat{\tau}(p)|^2 \rangle_{\Lambda_L, \rho, \beta}^{(b.c.)} \ge 0. \tag{5.8}$$

ii) For the n.n.f. interaction, and more generally reflection-positive interactions, the following gaussian-domination (aka *infrared*) bound holds [18, 19]:

$$\mathcal{E}(p)\, \widehat{S}_{\rho,\beta}^{(L)}(p) \leq \frac{1}{2|J|\beta}. \tag{5.9}$$

The bound appeals to the physicists' intuition, reminding one of the equipartition law. Alas, it has so far been proven only for reflection-positive interactions.

iii) The Parseval-Plancherel identity yields the sum rule:

$$\langle \tau_0^2 \rangle_{\Lambda_L, \rho, \beta}^{(b.c.)} = \frac{1}{|\Lambda_L|} \sum_{p \in \Lambda_L^*} \widehat{S}_{\rho, \beta}^{(L)}(p).
\tag{5.10}$$

As was pointed out in [19], the combination of (5.10) with (5.9) yields a (then novel) way to prove the occurrence of spontaneous magnetization in dimensions $d > 2$, at high enough $\beta$.

More explicitly, one may note that in (5.10) the Infrared Bound (5.9) does not provide any direct control on the $p=0$ term, since $\mathcal{E}(0) = 0$. And, in fact, the hallmark of the low temperature phase ($\beta > \beta_c(\rho)$) is that this single value of the summand attains macroscopic size:

$$ \widehat{S}_{\rho,\beta}^{(L)}(0) \approx |\Lambda_L| M(\rho, \beta)^2, \tag{5.11} $$

with $M(\rho, \beta)$ the model's spontaneous magnetization.

We shall also use the following statement on the relation between the finite volume and the infinite volume states.

**Proposition 5.2** For a model in the GS class on $\mathbb{Z}^d$, $d > 2$, with translation invariant finite range interactions, for any $\beta < \beta_c(\rho)$:

1. the system has only one infinite volume Gibbs equilibrium state;

2. the correlation functions of that state satisfy, for any finite $A \subset \mathbb{Z}^d$ and any sequence of finite volumes $V_n \subset \mathbb{Z}^d$ which eventually absorb every finite region,

$$ \langle \prod_{x \in A} \tau_x \rangle_{\rho, \beta} = \lim_{V_n \to \mathbb{Z}^d} \langle \prod_{x \in A} \tau_x \rangle_{V_n, \rho, \beta}^{(b.c.)}, \tag{5.12} $$

with $\langle \cdot \rangle^{(b.c.)}_{V_n, \rho, \beta}$ denoting the correlation function under boundary conditions which may include either cross-boundary spin couplings (e.g. periodic) or arbitrary specified values of $\tau_{|\partial V_n}$;

3. with the finite volumes taken as the rectangular domains $\Lambda_L$, the Fourier transforms converge as well, i.e.
for any $p \in [-\pi, \pi]^d$, and sequence as in (5.12),

$$ \lim_{n \to \infty} \sum_{x \in V_n} e^{ip \cdot x} \langle \tau_0 \tau_x \rangle_{V_n, \rho, \beta}^{(b.c.)} = \sum_{x \in \mathbb{Z}^d} e^{ip \cdot x} S_{\rho, \beta}(x) =: \hat{S}_{\rho, \beta}(p). \quad (5.13) $$

The statement follows by standard arguments that we omit here. The main ingredients are the exponential decay of correlations, which at any $\beta < \beta_c(\rho)$ holds uniformly in the volume, and the FKG inequality. The first two points hold also at $\beta = \beta_c(\rho)$ [4]. The last one, (5.13), however, fails there, since at the critical temperature the correlation function is not summable.

We shall employ the freedom which Proposition 5.2 provides in establishing the different monotonicity properties of $\hat{S}_{\rho,\beta}(p)$ in $p$.

Furthermore, in the *disordered regime*, where $M(\rho, \beta) = 0$, the sum rule combined with the Infrared Bound implies that for every $\beta < \beta_c(\rho)$,

$$ \langle \tau_0^2 \rangle_{\rho,\beta} = \int_{[-\pi, \pi]^d} \hat{S}_{\rho, \beta}(p) \, \frac{dp}{(2\pi)^d} \leq \int_{[-\pi, \pi]^d} \frac{1}{2|J|\beta \mathcal{E}(p)} \, \frac{dp}{(2\pi)^d}. \quad (5.14) $$

Since $\mathcal{E}(p)$ vanishes only at $p=0$, and there at the rate $\mathcal{E}(p) \sim |p|^2$, the integral is convergent for $d > 2$; letting $\beta \nearrow \beta_c(\rho)$ one gets

$$ \langle \tau_0^2 \rangle_{\rho, \beta_c(\rho)} \leq \frac{C_d}{2|J|\beta_c(\rho)} \quad (5.15) $$

with $C_d < \infty$ for $d > 2$. This bound will be used in Section 7.

---PAGE_BREAK---

## 5.3 The spectral representation and a sliding-scale Infrared Bound

We next present a Fourier transform counterpart (though one derived by different means) of the Messager-Miracle-Sole monotonicity stated in Section 5.1, and use it for a sliding-scale extension of the Infrared Bound (5.9). The results, which include both old [21, 47] and new observations, are based on the relation of the two-point function with the transfer matrix, and the positivity of the latter.
The transfer matrix has been the source of many insights on the structure of statistical mechanical systems with finite range interactions. It appears already in Ising's study of one dimensional systems, for which it permits a simple proof of the absence of a phase transition. In higher dimensions as well, it has played an essential role in many important developments [19, 21, 42], some of which rely on positivity properties. Here we shall use the following consequences of its spectral representation for the two-point function.

**Proposition 5.3 (Spectral Representation)** In the n.n.f. model on $\mathbb{Z}^d$ ($d \ge 1$), at $\beta < \beta_c(\rho)$, for every square summable function $v: \mathbb{Z}^{d-1} \to \mathbb{C}$, there exists a positive measure $\mu_{v,\beta}$ of finite mass

$$ \int_{1/\xi(\rho, \beta)}^{\infty} d\mu_{v,\beta}(a) = \sum_{x_{\perp}, y_{\perp} \in \mathbb{Z}^{d-1}} v_{x_{\perp}} \overline{v_{y_{\perp}}} S_{\rho, \beta}((0, y_{\perp} - x_{\perp})) \quad (5.16) $$

such that for every $n \in \mathbb{Z}$

$$ \sum_{x_{\perp}, y_{\perp} \in \mathbb{Z}^{d-1}} v_{x_{\perp}} \overline{v_{y_{\perp}}} S_{\rho, \beta}((n, x_{\perp} - y_{\perp})) = \int_{1/\xi(\rho, \beta)}^{\infty} e^{-a|n|} d\mu_{v,\beta}(a). \quad (5.17) $$

And for every $p_{\perp} \in [-\pi, \pi]^{d-1}$ there exists a positive measure $\mu_{p_{\perp},\beta}$ of finite mass such that for every $p_1 \in [-\pi, \pi]$

$$ \hat{S}_{\rho, \beta}(p) = \int_0^\infty \frac{e^a - e^{-a}}{\mathcal{E}_1(p_1) + (e^{a/2} - e^{-a/2})^2} d\mu_{p_{\perp}, \beta}(a), \quad (5.18) $$

with $\mathcal{E}_1(k) := 2[1 - \cos(k)] = 4\sin^2(k/2)$.

Although the spectral representation is quite well known (cf. [21] and references therein), for completeness of the presentation we include the derivation of (5.17) in the Appendix.
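One convenient consequence of the representation (5.17) is that, by Cauchy-Schwarz applied with respect to the positive measure $\mu_{v,\beta}$, the left-hand side is log-convex in $n$. This can be illustrated with a quick numerical sanity check, in which a randomly generated discrete measure stands in for the (unknown) spectral measure:

```python
import numpy as np

gen = np.random.default_rng(0)
# a discrete positive measure mu = sum_i m_i * delta_{a_i}, standing in
# for the spectral measure mu_{v,beta} of (5.17) (an illustration only)
a = gen.uniform(0.1, 3.0, 50)   # decay rates, supported above 1/xi
m = gen.uniform(0.0, 1.0, 50)   # positive masses

n = np.arange(31)
w = np.exp(-np.outer(n, a)) @ m   # w_n = int e^{-a n} dmu(a)

# positivity of mu forces log-convexity, w_n^2 <= w_{n-1} w_{n+1}
# (Cauchy-Schwarz); equivalently, the ratios w_{n+1}/w_n are non-decreasing
assert np.all(w[1:-1] ** 2 <= w[:-2] * w[2:] * (1 + 1e-12))
print("log-convexity holds at every n")
```

Any positive measure would do in place of the random one; log-convexity fails only if the measure is allowed to take negative values.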
Equation (5.18) then follows by applying (5.17) to the function

$$ v_{p_\perp}(x_\perp) := \frac{1}{\sqrt{|\Lambda_\ell^{(d-1)}|}} e^{ip_\perp \cdot x_\perp} I[x_\perp \in \Lambda_\ell^{(d-1)}] $$

and taking the limit $\ell \to \infty$. Here $\Lambda_\ell^{(d-1)}$ is the $(d-1)$-dimensional version of the box $\Lambda_L$ and $I[\cdot]$ is the indicator function. The convergence is facilitated by the exponential decay of correlations at $\beta < \beta_c$.

Of particular interest for us are the following implications of (5.18) (the first was noted and applied in [21]).

**Proposition 5.4** For a n.n.f. model on $\mathbb{Z}^d$ ($d \ge 1$), at any $\beta < \beta_c(\rho)$:

1. $\hat{S}_{\rho,\beta}(p_1, p_2, \dots, p_d)$ is monotone decreasing in each $|p_j|$, over $[-\pi, \pi]$,

2. $\mathcal{E}_1(p_1)\hat{S}_{\rho,\beta}(p)$ and $|p_1|^2\hat{S}_{\rho,\beta}(p)$ are monotone increasing in $|p_1|$,

---PAGE_BREAK---

3. the function

$$
\hat{S}_{\rho,\beta}^{(\text{mod})}(p) := \hat{S}_{\rho,\beta}(p) + \hat{S}_{\rho,\beta}(p + \pi(1,1,0,\dots,0)) \quad (5.19)
$$

is monotone decreasing in $\delta$ along the lines of constant $\{p_3, \dots, p_d\}$ and constant $s = \frac{p_1 + p_2}{2}$, parametrized as

$$
(p_1, p_2) = (s + \delta, s - \delta), \quad \delta \in [0, s], \tag{5.20}
$$

and the above remains true under any permutation of the indices.

The correction in (5.19) is insignificant in the regime where $\hat{S}_{\rho,\beta}(p)$ is large. That is so since $|\hat{S}_{\rho,\beta}(p+\pi(1,1,0,\dots,0))| \le C/\beta$ uniformly for $|p| \le \pi/2$. (The main term diverges in the limit $\beta \nearrow \beta_c(\rho)$ and $p \to 0$.)

**Proof** The first two statements follow by combining (5.18) with the observation that each of the following functions is monotone increasing in $k \in [0, \pi]$: $k \mapsto \mathcal{E}_1(k)$, $k \mapsto k^2/\mathcal{E}_1(k)$, and, for each $a \ge 0$, $k \mapsto \mathcal{E}_1(k)/(\mathcal{E}_1(k) + (e^{a/2} - e^{-a/2})^2)$.
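The monotonicity claims invoked above are elementary calculus facts; as a sanity check (not a proof), they can also be confirmed on a fine grid:

```python
import numpy as np

E1 = lambda k: 4 * np.sin(k / 2) ** 2   # E_1(k) = 2[1 - cos k] = 4 sin^2(k/2)

k = np.linspace(1e-4, np.pi, 20_000)
families = {"E1(k)": E1(k), "k^2/E1(k)": k ** 2 / E1(k)}
for a in (0.0, 0.3, 1.0, 4.0):
    c = (np.exp(a / 2) - np.exp(-a / 2)) ** 2
    families[f"E1/(E1+c), a={a}"] = E1(k) / (E1(k) + c)

for name, vals in families.items():
    # each family should be non-decreasing on (0, pi]
    assert np.all(np.diff(vals) >= -1e-12), name
print("all families are non-decreasing on (0, pi]")
```

The small tolerance only absorbs floating-point rounding; the underlying monotonicity is strict except for the constant case $a = 0$.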
The third statement is based on the application of the transfer matrix in the diagonal direction (cf. Fig. 4). More explicitly, to produce the spectral representation one may start by considering a partially rotated rectangular region, whose main axes are associated with the coordinate system $(x_1+x_2, x_1-x_2, x_3, \dots, x_d)$. The finite-volume Hamiltonian is taken with the correspondingly modified periodic boundary conditions, which produce cyclicity in these directions. As stated in (5.12), for $\beta < \beta_c(\rho)$ the change does not affect the two-point function’s infinite volume limit.

In this case, there are two transfer matrices $T$ and $T^*$, corresponding to adding one layer of even (resp. odd) vertices, i.e. vertices with $x_1+x_2$ even (resp. odd). The argument by which monotonicity was proven above for the Cartesian directions applies to the two-point function's restriction to the sub-lattice of even vertices, since the proof would involve the matrix $TT^*$, which is positive.

Then, with $\hat{S}_{\rho,\beta}^{(\mathrm{mod})}$ given by (5.19), one finds

$$
\hat{S}_{\rho, \beta}^{(\mathrm{mod})}(p) = \sum_{x \in \mathbb{Z}^d} e^{ip \cdot x} S_{\rho, \beta}(x) \sum_{k=0,1} e^{i\pi(x_1+x_2)k} = 2 \sum_{\substack{x \in \mathbb{Z}^d \\ x_1+x_2 \text{ even}}} e^{ip \cdot x} S_{\rho, \beta}(x). \quad (5.21)
$$

Thus, the third monotonicity statement follows by a direct adaptation of the proof of the first one.
$\square$

**Corollary 5.5** For a n.n.f. model on $\mathbb{Z}^d$ ($d \ge 1$) at any $\beta < \beta_c(\rho)$, the two-point function satisfies, for all $p \in [-\pi/2, \pi/2]^d$,

$$
\hat{S}_{\rho,\beta}(\|p\|_{\infty}, 0_{\perp}) \geq \hat{S}_{\rho,\beta}(p) \geq \hat{S}_{\rho,\beta}(\|p\|_{1}, 0_{\perp}) - \frac{C}{\beta}, \quad (5.22)
$$

with $C$ depending on the dimension only.
The restriction to $p \in [-\pi/2, \pi/2]^d$ guarantees that the second term of (5.19) can be bounded by $C/\beta$ (this bound corresponds to the $-C/\beta$ term on the right-hand side of (5.22)), as explained below Proposition 5.4.

---PAGE_BREAK---

Figure 4: The split of $\mathbb{Z}^2$ into even and odd sub-lattices and their stratification into intertwined diagonal hyperplanes. The partition function of $(-L, L]^2$ with rotated-periodic boundary conditions can be written as $Z_{2L} = \operatorname{tr}(T_2 T_1)^L$, with $T_1, T_2$ a pair of conjugate mappings between the Hilbert spaces of the even and the odd hyperplanes. The product $T_2 T_1 = T_1^* T_1$ provides the even sub-lattice's transfer matrix in one of this graph's principal directions.

**Proof** The inequality follows from Proposition 5.4 through the monotonicity lines used for (5.3). $\square$

The previous bound combined with the second statement in Proposition 5.4 yields an interesting consequence for the behaviour of the *susceptibility* truncated at a distance $L$, which we define as

$$ \chi_L(\rho, \beta) := \sum_{x \in \Lambda_L} S_{\rho, \beta}(x). \qquad (5.23) $$

**Theorem 5.6 (Sliding-scale Infrared Bound)** There exists a constant $C = C(d) > 0$ such that for every n.n.f. model on $\mathbb{Z}^d$ ($d > 2$), every $\beta \le \beta_c(\rho)$ and $L \ge \ell \ge 1$,

$$ \frac{\chi_L(\rho, \beta)}{L^2} \le \frac{C \chi_\ell(\rho, \beta)}{\beta \ell^2}. \qquad (5.24) $$

The case $\ell = 1$ is in essence similar to the Infrared Bound (5.9), as is explained below, so that (5.24) may be viewed as a *sliding-scale* version of this inequality. One may also note that (5.24) is a sharp improvement (replacing the exponent $d$ by 2) on the more naive application of the Messager-Miracle-Sole inequality, which gives that for every $L \ge \ell \ge 1$,

$$ \frac{\chi_L(\rho, \beta)}{L^d} \le \frac{\chi_\ell(\rho, \beta)}{\ell^d}.
\qquad (5.25) $$

**Proof** Let us first note that it suffices to prove the claim for all $\beta < \beta_c(\rho)$, with a uniform constant $C$. Its extension to the critical point can be deduced from the continuity

$$ S_{\rho,\beta_c(\rho)}(x) = \lim_{\beta \nearrow \beta_c(\rho)} S_{\rho,\beta}(x) \qquad (5.26) $$

(which follows from the main result of [4]). Restricting to $\beta < \beta_c(\rho)$ also allows us to apply the monotonicity results discussed above.

---PAGE_BREAK---

Below, the constants $C_i$ are to be understood as dependent on $d$ only. Consider the smeared version of $\chi_L(\rho, \beta)$ defined by

$$ \tilde{\chi}_L(\rho, \beta) := \sum_{x \in \mathbb{Z}^d} e^{-(\|x\|_2/L)^2} S_{\rho,\beta}(x), \quad (5.27) $$

with $\|x\|_2^2 := \sum_{i=1}^d x_i^2$. The MMS monotonicity statement (5.4) implies that

$$ e^{-d}\chi_L(\rho, \beta) \le \tilde{\chi}_L(\rho, \beta) \le C_1\chi_L(\rho, \beta) \quad (5.28) $$

for every $L$, so that it suffices to prove that for every $L \ge \ell \ge 1$,

$$ \frac{\tilde{\chi}_L(\rho, \beta)}{L^2} \le \frac{C_2}{\beta} \, \frac{\tilde{\chi}_\ell(\rho, \beta)}{\ell^2}. \quad (5.29) $$

We will work in Fourier space, and use the relation

$$ \tilde{\chi}_L(\rho, \beta) \asymp L^d \int_{[-\pi, \pi]^d} e^{-\|p\|^2 L^2} \hat{S}_{\rho, \beta}(p) dp, \quad (5.30) $$

where $f \asymp g$ means $cg \le f \le Cg$ with $c, C$ independent of everything else (we use that the Fourier transform of the Gaussian on the lattice is a Jacobi theta-function, which is within multiplicative constants of $e^{-\|p\|^2 L^2}$ on $[-\pi, \pi]^d$).

Now, let

$$ A := \{p \in [-\tfrac{\pi\ell}{L}, \tfrac{\pi\ell}{L}]^d : |p_1| = \|p\|_\infty\}. \quad (5.31) $$

Using the symmetries of $\hat{S}_{\rho,\beta}$ and the decay of Corollary 5.5, we find that

$$ \int_{[-\pi, \pi]^d} e^{-\|p\|^2 L^2} \hat{S}_{\rho, \beta}(p) dp \le (d + C_3 e^{-\ell^2}) \int_A e^{-\|p\|^2 L^2} \hat{S}_{\rho, \beta}(p) dp.
\quad (5.32) $$

Since $|p_1| = \|p\|_\infty$ for $p \in A$ and $\|p\|_\infty \ge \|p\|_1/d$, the second property of Proposition 5.4 and Corollary 5.5 give that

$$ \hat{S}_{\rho,\beta}(p) \le \hat{S}_{\rho,\beta}(\|p\|_{\infty}, 0_{\perp}) \le \left(\tfrac{dL}{\ell}\right)^2 \hat{S}_{\rho,\beta}(\tfrac{L}{\ell}\|p\|_1, 0_{\perp}) \le \left(\tfrac{dL}{\ell}\right)^2 \left(\hat{S}_{\rho,\beta}(\tfrac{L}{\ell}p) + C/\beta\right). \quad (5.33) $$

Using this inequality and making the change of variable $p \mapsto q = \tfrac{L}{\ell}p$ gives

$$ \int_A \exp[-\|p\|^2 L^2] \hat{S}_{\rho,\beta}(p) dp \le C_4 \left(\tfrac{\ell}{L}\right)^{d-2} \left( \int_{[-\pi, \pi]^d} \exp[-\|q\|^2 \ell^2] \hat{S}_{\rho,\beta}(q) dq + \frac{C_5}{\beta \ell^d} \right), \quad (5.34) $$

which, after plugging into (5.32) and invoking (5.30) for both $L$ and $\ell$, implies that

$$ \tilde{\chi}_L(\rho, \beta) \le C_6 \frac{L^2}{\ell^2} \left(\tilde{\chi}_\ell(\rho, \beta) + C_5/\beta\right). \quad (5.35) $$

The inequality (5.29) follows from the fact that $\tilde{\chi}_\ell(\rho, \beta) \ge 1$, so that the constant $C_5/\beta$ can be removed by changing $C_6$ into a larger constant $C_7/\beta$. $\square$

Combining inequality (5.4) with the sliding-scale Infrared Bound (5.24), applied with $L = |x|$ and $\ell = 1$, yields for every $x \in \mathbb{Z}^d$:

$$ S_{\rho,\beta}(x) \le \frac{C_1}{|x|^d} \sum_{y \in \text{Ann}(d|x|, 2d|x|)} S_{\rho,\beta}(y) \le \frac{C_1}{|x|^d} \chi_{2d|x|}(\rho, \beta) \le \frac{C_2 \langle \tau_0^2 \rangle_{\rho,\beta}}{|x|^{d-2}}. \quad (5.36) $$

The factor $\langle \tau_0^2 \rangle_{\rho,\beta}$ in the upper bound may seem pointless for the Ising model, where it is simply equal to 1, but it becomes very important when studying unbounded spins, as in Section 7, where it is essential for a dimensionless improved tree diagram bound.

It may be noted that the combination of (5.36) with (5.14) leads to the more standard formulation [18, 19] of the Infrared Bound in $x$-space:

$$ S_{\rho,\beta}(x) \le \frac{C}{\beta |J||x|^{d-2}}.
\quad (5.37) $$

---PAGE_BREAK---

## 5.4 A lower bound

The above upper bound will next be supplemented by a power-law lower bound on the two-point function at $\beta_c$. Conceptually, it originates in the observation that if the correlations drop on some scale by a fast enough power law, then on larger scales they decay exponentially fast. An early version of this principle can be found in Hammersley's analysis of percolation [28]. A general statement was presented in Dobrushin's analysis of the constructive criteria for the high temperature phase. For Ising systems, a simple version of such a statement can be deduced from the following observation.

**Lemma 5.7** For every ferromagnetic model in the GS class on $\mathbb{Z}^d$ ($d \ge 1$) with coupling constants that are invariant under translations, every finite $\Lambda \subset \mathbb{Z}^d$ containing $0$, and every $y \notin \Lambda$,

$$S_{\rho,\beta}(y) \le \sum_{\substack{u \in \Lambda \\ v \notin \Lambda}} S_{\rho,\beta}(u) \beta J_{u,v} S_{\rho,\beta}(y-v). \quad (5.38)$$

This statement is a mild extension of Simon's inequality, which was originally formulated for n.n.f. Ising models [44]. Being spin-dimension balanced, it is valid also for the Griffiths-Simon class of variables and more general pair interactions⁵.

The MMS monotonicity allows us to extract the following point-wise implication, which will be used below.

**Corollary 5.8 (Lower bound on $S_{\rho,\beta}$)** For a n.n.f. model in the GS class on $\mathbb{Z}^d$ ($d \ge 1$), there exists $c = c(d) > 0$ such that for every $\beta \le \beta_c(\rho)$ and $x \in \mathbb{Z}^d$,

$$S_{\rho,\beta}(x) \ge \frac{c}{\beta|J| \|x\|_{\infty}^{d-1}} \exp\left(-\frac{d\|x\|_{\infty} + 1}{\xi(\rho, \beta)}\right). \quad (5.39)$$

**Proof** Let us introduce

$$Y_{\rho,\beta}(\Lambda) := \sum_{\substack{u \in \Lambda \\ v \notin \Lambda}} S_{\rho,\beta}(u) \beta J_{u,v}. \quad (5.40)$$

Set $L := d\|x\|_{\infty}$.
Applying (5.38) with $\Lambda_L$ and $y = ne_1$, and iterating it $\lfloor \frac{n}{L+1} \rfloor$ times (i.e. as many times as possible without reducing the last factor to a distance shorter than $L$), we get

$$\beta |J| S_{\rho,\beta}(ne_1) \le Y_{\rho,\beta}(\Lambda_L)^{\lceil \frac{n}{L+1} \rceil}. \quad (5.41)$$

Since $\lim_n S_{\rho,\beta}(ne_1)^{1/n} = e^{-1/\xi(\rho,\beta)}$, we deduce that

$$e^{-1/\xi(\rho, \beta)} \le Y_{\rho, \beta}(\Lambda_L)^{\frac{1}{L+1}}. \quad (5.42)$$

On the other hand, by (5.4), $S_{\rho,\beta}(u) \le S_{\rho,\beta}(x)$ for all $u \in \partial\Lambda_L$ (recall that $L = d\|x\|_{\infty}$), and hence

$$\frac{Y_{\rho,\beta}(\Lambda_L)}{|\partial\Lambda_L|} \le \beta |J| S_{\rho,\beta}(x). \quad (5.43)$$

Combining (5.42) with (5.43) yields the claimed lower bound (5.39). $\square$

⁵The factor $S_{\rho,\beta}(u)$ in (5.38) can also be replaced by the finite volume expectation $\langle \tau_0 \tau_u \rangle_\Lambda$, as in Lieb's improvement of Simon's inequality [36]. Both versions have an easy proof through a simple application of the switching lemma, in its mildly improved form.

---PAGE_BREAK---

## 5.5 Regularity of the two-point function’s gradient

**Proposition 5.9 (gradient estimate)** *There exists $C = C(d) > 0$ such that for every n.n.f. model in the GS class, every $\beta \le \beta_c(\rho)$, every $x \in \mathbb{Z}^d$ and every $1 \le i \le d$,*

$$
|S_{\rho,\beta}(x \pm \mathbf{e}_i) - S_{\rho,\beta}(x)| \leq \frac{F(|x|)}{|x|} S_{\rho,\beta}(x), \quad (5.44)
$$

where

$$
F(n) := C \frac{S_{\rho, \beta}(dn\mathbf{e}_1)}{S_{\rho, \beta}(n\mathbf{e}_1)} \log \left( \frac{2S_{\rho, \beta}(\frac{n}{2}\mathbf{e}_1)}{S_{\rho, \beta}(n\mathbf{e}_1)} \right).
\quad (5.45)
$$

The previous proposition is particularly interesting when $S_{\rho,\beta}(dn\mathbf{e}_1) \ge c_0 S_{\rho,\beta}(\frac{n}{2}\mathbf{e}_1)$, in which case we obtain the existence of a constant $C_0 = C_0(c_0, d) > 0$ such that for every $x \in \partial\Lambda_n$ and $1 \le i \le d$,

$$
|S_{\rho,\beta}(x \pm \mathbf{e}_i) - S_{\rho,\beta}(x)| \leq \frac{C_0}{|x|} S_{\rho,\beta}(x). \tag{5.46}
$$

**Proof** Without loss of generality, we may assume that $x = (|x|, x_\perp)$. We first treat the case $i=1$. Introduce the three sequences $u_n := S_{\rho,\beta}(ne_1)$, $v_n := S_{\rho,\beta}((n,x_\perp))$ and $w_n := u_n + v_n$. The spectral representation, applied with $v$ the sum of the Dirac functions at $0_\perp$ and $x_\perp$, implies the existence of a finite measure $\mu_{x_\perp,\beta}$ such that

$$
w_n = \int_0^\infty e^{-na} d\mu_{x_\perp, \beta}(a). \tag{5.47}
$$

Cauchy-Schwarz gives $w_n^2 \le w_{n-1}w_{n+1}$, which when iterated between $n$ and $n/2$ (assume $n$ is even; the odd case is similar) leads to

$$
\frac{w_{n+1}}{w_n} \geq \left(\frac{w_n}{w_{n/2}}\right)^{2/n} \geq 1 - \frac{2}{n} \log \left(\frac{w_{n/2}}{w_n}\right). \quad (5.48)
$$

We now use that $u_{n/2} \ge v_{n/2}$, $u_n \ge v_n$, and $u_n \ge u_{n+1}$, which are all consequences of the Messager-Miracle-Sole inequality. Together with elementary algebraic manipulations, we get

$$
v_{n+1} \ge v_n - \frac{4\log(2u_{n/2}/u_n)}{n} u_n. \quad (5.49)
$$

The bound we are seeking corresponds to $n = |x|$.

To get the result for $i \neq 1$, apply the Messager-Miracle-Sole inequality twice to get that

$$
|S_{\rho, \beta}(x \pm \mathbf{e}_i) - S_{\rho, \beta}(x)| \leq S_{\rho, \beta}(x - d\mathbf{e}_1) - S_{\rho, \beta}(x + d\mathbf{e}_1), \quad (5.50)
$$

and then refer to the previous case to conclude (one obtains the result for $n = |x| - d$, but the proof can be easily adapted to get the result for $n = |x|$).
$\square$

**Remark 5.10** When $x = ne_1$ and $i = 1$, running through the lines of the previous proof shows that one can take $F(n) = 2\log(S_{\rho,\beta}(\frac{n}{2}e_1)/S_{\rho,\beta}(ne_1))$, which is bounded by $(2+o(1))\log n$ thanks to the lower bound (5.39) and the Infrared Bound (5.37). We therefore get that for every $n \le \xi(\rho, \beta)$,

$$
S_{\rho, \beta}(ne_1) - S_{\rho, \beta}((n+1)e_1) \le (2+o(1)) \frac{\log n}{n} S_{\rho, \beta}(ne_1). \quad (5.51)
$$

It would be of interest to remove the $\log n$ factor, as this would enable a proof that $S_{\rho,\beta}(ne_1)$ does not drop too fast between different scales.

---PAGE_BREAK---

## 5.6 Regular scales

Using the dyadic distance scales, we shall now introduce the notion of regular scales, which in essence means that on the given scale the two-point function has the properties which, in the conditional proof of Section 4, were available under the assumption (4.2).

**Definition 5.11** Fix $c, C > 0$. An annular region $\text{Ann}(n/2, 4n)$ is said to be regular if the following four properties are satisfied:

$$ P1 \quad \text{for every } x, y \in \text{Ann}(n/2, 4n), \ S_{\rho,\beta}(y) \le C S_{\rho,\beta}(x); $$

$$ P2 \quad \text{for every } x, y \in \text{Ann}(n/2, 4n), \ |S_{\rho,\beta}(x) - S_{\rho,\beta}(y)| \le \frac{C|x-y|}{|x|} S_{\rho,\beta}(x); $$

$$ P3 \quad \text{for every } x \in \Lambda_n \text{ and } y \notin \Lambda_{Cn}, \ S_{\rho,\beta}(y) \le \frac{1}{2} S_{\rho,\beta}(x); $$

$$ P4 \quad \chi_{2n}(\rho, \beta) \ge (1+c)\chi_n(\rho, \beta). $$

A scale $k$ is said to be regular if the above holds for $n = 2^k$, and a vertex $x \in \mathbb{Z}^d$ will be said to be in a regular scale if it belongs to an annulus with the above properties.
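For orientation, under a clean power-law decay such properties are immediate to verify. Below is a toy numerical check of $P1$ and $P4$ in $d = 3$, with the stand-in profile $S(x) = (1+\|x\|_2)^{-(d-2)}$ playing the role of the two-point function (an assumed illustration only, not the actual model):

```python
import numpy as np

d, n = 3, 8
coords = np.arange(-4 * n, 4 * n + 1)
grid = np.stack(np.meshgrid(*([coords] * d), indexing="ij"), axis=-1).reshape(-1, d)
S = (1.0 + np.linalg.norm(grid, axis=1)) ** -(d - 2)   # toy power-law profile
linf = np.abs(grid).max(axis=1)

# P1: uniform comparability of S over the annulus Ann(n/2, 4n)
ann = S[(linf >= n / 2) & (linf <= 4 * n)]
assert ann.max() <= 20 * ann.min()

# P4: strict growth of the truncated susceptibility, chi_{2n} >= (1+c) chi_n
chi = lambda m: S[linf <= m].sum()
assert chi(2 * n) >= 1.2 * chi(n)
print("P1 and P4 hold for the toy profile")
```

The point of the definition, of course, is that such properties are not known to hold on every scale for the actual two-point function; the theorem below guarantees only their abundance.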
One may note that $P1$ follows trivially from $P2$, but we still choose to state the two properties independently (the proof would work with weaker versions of $P2$, so one can imagine cases where the notion of regular scale could be used with a different version of $P2$ not implying $P1$).

Under the power-law assumption (4.2) of Section 4, every scale is regular at criticality. However, for now we do not have an unconditional proof of that. For an unconditional proof of our main results, this gap will be addressed through the following statement, which is the main result of this section.

**Theorem 5.12 (Abundance of regular scales)** Fix $d > 2$ and $\alpha > 2$. There exist $c = c(d) > 0$ and $C = C(d) > 0$ such that for every n.n.f. model in the GS class and every $n^\alpha \le N \le \xi(\rho, \beta)$, there are at least $c \log_2(N/n)$ regular scales $k$ with $n \le 2^k \le N$.

**Proof** The lower bound (5.39) for $S_{\rho,\beta}$ and the Infrared Bound (5.37) imply that

$$ \chi_N(\rho, \beta) \ge c_0 N \ge c_0 (N/n)^{(\alpha-2)/(\alpha-1)} n^2 \ge c_1 (N/n)^{(\alpha-2)/(\alpha-1)} \chi_n(\rho, \beta). \quad (5.52) $$

Using the sliding-scale Infrared Bound (5.24), there exist $r, c_2 > 0$ (independent of $n, N$) such that there are at least $c_2 \log_2(N/n)$ scales $m = 2^k$ between $n$ and $N$ such that

$$ \chi_{rm}(\rho, \beta) \ge \chi_{4dm}(\rho, \beta) + \chi_m(\rho, \beta). \quad (5.53) $$

Let us verify that the different properties of regular scales are satisfied for such an $m$. Applying (5.4) in the first inequality, the assumption (5.53) in the second, and (5.4) in the third, one has

$$ |\text{Ann}(4dm, rm)| S_{\rho,\beta}(4dme_1) \ge \chi_{rm}(\rho, \beta) - \chi_{4dm}(\rho, \beta) \ge \chi_m(\rho, \beta) \ge |\Lambda_{m/(4d)}| S_{\rho,\beta}(\frac{1}{4}me_1).
\quad (5.54) $$

This implies that $S_{\rho,\beta}(4dme_1) \ge c_0 S_{\rho,\beta}(\frac{1}{4}me_1)$, which immediately gives $P1$ by (5.4) for $S_{\rho,\beta}$, and $P2$ by the gradient estimate of Proposition 5.9. Furthermore, the fact that $S_{\rho,\beta}(x) \ge S_{\rho,\beta}(4dme_1) \ge \frac{c_3}{m^d}\chi_m(\rho, \beta)$ for every $x \in \text{Ann}(m, 2m)$ implies $P4$. To prove $P3$,

---PAGE_BREAK---

observe that for every $R$, the previous displayed inequality together with the sliding-scale Infrared Bound (5.24) give that for every $y \notin \Lambda_{dRm}$ and $x \in \Lambda_m$,

$$|\Lambda_{Rm}|S_{\rho,\beta}(y) \le \chi_{Rm}(\rho, \beta) \le C_4 R^2 \chi_m(\rho, \beta) \le C_5 R^2 m^d S_{\rho,\beta}(x), \quad (5.55)$$

which, using the assumption that $d > 2$, implies the claim for $C$ large enough and $c$ small enough. $\square$

# 6 Unconditional proofs of the Ising results

In this section, we prove our results for every $\beta \le \beta_c$ without making the power-law assumption of Section 4. We emphasize that, unlike the introductory discussion of that section, the proofs given below are unconditional. The discussion is also not restricted to the critical point itself and covers more general approaches to the scaling limit, from the side $\beta \le \beta_c$ (hence the correlation length will be mentioned in several places). However, at this stage the discussion is still restricted to the n.n.f. Ising model.

## 6.1 Unconditional proofs of the intersection-clustering bound and Theorem 1.3 for the Ising model

The notation remains as in Section 4. The endgame in this section will be the unconditional proof of the intersection-clustering bound, which we restate below at the right level of generality. The main modification is that the sequence $\mathcal{L}$ of integers $\ell_k$ will be chosen dynamically, adjusting it to the behaviour of the two-point function.
More precisely, recall the definition of the bubble diagram $B_L(\beta)$ truncated at a distance $L$. Fix $D \gg 1$ and define recursively a (possibly finite) sequence $\mathcal{L}$ of integers $\ell_k = \ell_k(\beta, D)$ by the formula $\ell_0 = 0$ and

$$\ell_{k+1} = \inf\{\ell : B_\ell(\beta) \ge D B_{\ell_k}(\beta)\}. \qquad (6.1)$$

By the Infrared Bound (5.37), $B_L - B_\ell \le C_0 \log(L/\ell)$ (in dimension $d=4$), from which it is a simple exercise to deduce that under the above definition

$$D^k \le B_{\ell_k}(\beta) \le CD^k \qquad (6.2)$$

for every $k$ and some large constant $C$ independent of $k$.

**Proposition 6.1 (clustering bound)** For $d=4$ and $D$ large enough, there exists $\delta = \delta(D) > 0$ such that for every $\beta \le \beta_c$, every $K > 3$ with $\ell_K \le \xi(\beta)$, and every $u,x,y,z,t \in \mathbb{Z}^4$ with mutual distances between $x,y,z,t$ larger than $2\ell_K$,

$$\mathbf{P}_{\beta}^{ux,uz,uy,ut}[\mathbf{M}_u(\mathcal{T}; \mathcal{L}, K) < \delta K] \le 2^{-\delta K}. \qquad (6.3)$$

Before proving this proposition, let us explain how it implies the improved tree diagram bound.

**Proof of Theorem 1.3** Choose $D$ large enough that the previous proposition holds true. We follow the same lines as in Section 4.1, simply noting that since $B_{\ell_k}(\beta) \le CD^k$, we may choose $K \ge c \log B_L(\beta)$ with $2\ell_K \le L$, where $c$ is independent of $L$ and $\beta$, so that (4.13) implies the improved tree diagram bound. $\square$

---PAGE_BREAK---

The main modification we need for an unconditional proof of the intersection-clustering bound lies in the derivation of the intersection and mixing properties. The former is similar to Lemma 4.4, but restricted to sources that lie in regular scales. We restate it here in a slightly modified form.
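For intuition on the recursion (6.1), one may simulate it with a toy bubble function exhibiting the $d = 4$ logarithmic growth noted above; the function $B$ below is an assumed stand-in, not the actual bubble diagram $B_\ell(\beta)$. The scales $\ell_k$ then grow super-exponentially, while $B_{\ell_k}$ tracks $D^k$ as in (6.2):

```python
import math

def next_scale(B, l_prev, D):
    # l_{k+1} = inf{ l : B_l >= D * B_{l_k} }, as in (6.1); since B is
    # non-decreasing, locate the infimum by doubling plus binary search
    target = D * B(l_prev)
    l = max(l_prev, 1)
    while B(l) < target:
        l *= 2
    lo, hi = l // 2, l
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if B(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

# toy bubble with B(0) = 1 and logarithmic growth, as in dimension d = 4
B = lambda l: 1.0 + 2.0 * math.log(1 + l)

D = 4.0
scales = [0]
for _ in range(5):
    scales.append(next_scale(B, scales[-1], D))
print(scales[:3], "...")

for k, l in enumerate(scales):
    assert B(l) >= D ** k            # lower bound in (6.2)
    assert B(l) <= 1.5 * D ** k      # the overshoot at the infimum stays small
```

The doubly exponential growth of the $\ell_k$ here reflects the fact that a logarithmic bubble must travel a multiplicative distance $e^{cD^k}$ to gain another factor of $D$.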
Recall that $I_k$ is the event that there exist unique clusters of $\text{Ann}(\ell_k, \ell_{k+1})$ in $\mathbf{n}_1 + \mathbf{n}_3$ and $\mathbf{n}_2 + \mathbf{n}_4$ crossing the annulus from the inner boundary to the outer boundary, and that these two clusters intersect.

**Lemma 6.2 (Intersection property)** Fix $d = 4$. There exists $c > 0$ such that for every $\beta \le \beta_c$, every $k$, and every $y \notin \Lambda_{2\ell_{k+1}}$ in a regular scale,

$$
\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[I_k] \geq c. \tag{6.4}
$$

**Proof** Restricting our attention to the case of $y$ belonging to a regular scale enables us to use properties P1 and P2 of the regularity assumption on the scale. With this additional assumption, we follow the same proof as the one of the conditional version (Lemma 4.4). Introduce the intermediary integers $n \le m \le M \le N$ satisfying

$$
\ell_k^4 \ge n \ge \ell_k^{3+\epsilon}, \quad n^4 \ge m \ge n^{3+\epsilon}, \quad M^4 \ge N \ge M^{3+\epsilon}, \quad N^4 \ge \ell_{k+1} \ge N^{3+\epsilon}. \tag{6.5}
$$

For the second moment method on $\mathcal{M}$, the first and second moments take the following forms:

$$
\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|] \ge c_1(B_M(\beta) - B_{m-1}(\beta)) \ge c_2 B_{\ell_{k+1}}(\beta), \tag{6.6}
$$

$$
\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|^2] \le c_3(B_M(\beta) - B_{m-1}(\beta)) B_{2M}(\beta) \le c_3 B_{\ell_{k+1}}(\beta)^2, \tag{6.7}
$$

where in the second inequality of (6.6), we used that $D$ is large enough and Lemma 6.3 below to get that

$$
B_M(\beta) \ge \frac{B_{\ell_{k+1}}(\beta)}{1 + 15C} \quad \text{and} \quad B_{m-1}(\beta) \le (1 + 15C)B_{\ell_k}(\beta) \le \frac{1 + 15C}{D} B_{\ell_{k+1}}(\beta).
+$$ + +For the bound on the probabilities of the events $F_1, \dots, F_4$ defined as in Section 4.2, recall +that the vertices $x$ and $z$ there are in our case both equal to $y$ that belongs to a regular +scale. Using Property 2 of the regularity of scales, the bounds in (4.21) and (4.22) follow +readily from the Infrared Bound (5.37). $\square$ + +In the previous proof, we used the following statement. + +**Lemma 6.3** For $d=4$, there exists $C > 0$ such that for every $\beta \le \beta_c$ and every $\ell \le L \le \xi(\beta)$, + +$$ +B_L(\beta) \le \left(1 + C \frac{\log(L/\ell)}{\log \ell}\right) B_\ell(\beta). \tag{6.8} +$$ + +**Proof** For every $n \le N$ for which $n = 2^k$ with $k$ regular, we have that (recall the definition of $\chi_n(\beta)$ from the previous section) + +$$ +\begin{align*} +B_{2N}(\beta) - B_N(\beta) &\le C_0 N^{-4} \chi_{N/d}(\beta)^2 \\ +&\le C_1 n^{-4} \chi_n(\beta)^2 \\ +&\le C_2 n^{-4} (\chi_{2n}(\beta) - \chi_n(\beta))^2 \\ +&\le C_3 (B_{2n}(\beta) - B_n(\beta)), \tag{6.9} +\end{align*} +$$ +---PAGE_BREAK--- + +where in the first inequality we used (5.4), in the second the sliding-scaled Infrared Bound (5.24), in the third Property P4 of the regularity of $n$, and in the last Cauchy-Schwarz. + +Now, there are $\log_2(L/\ell)$ scales between $\ell$ and $L$, and at least $\frac{1}{C}\log_2\ell$ regular scales between 1 and $\ell$ by abundance of regular scales (Theorem 5.12). Since the sums of squared correlations on any of the former contribute less to $B_L(\beta) - B_\ell(\beta)$ than any of the latter to $B_\ell(\beta)$, we deduce that + +$$B_L(\beta) \leq \left(1 + C \frac{\log_2(L/\ell)}{\log_2 \ell}\right) B_\ell(\beta). \quad (6.10)$$ + +Next comes the unconditional mixing property. 
**Theorem 6.4 (random currents' mixing property)** For $d \geq 4$, there exist $\alpha, c > 0$ such that for every $t \leq s$, every $\beta \leq \beta_c$, every $n^\alpha \leq N \leq \xi(\beta)$, every $x_i \in \Lambda_n$ and $y_i \notin \Lambda_N$ ($i \leq t$), and all events $E$ and $F$ depending on the restriction of $(\mathbf{n}_1, \dots, \mathbf{n}_s)$ to edges within $\Lambda_n$ and outside of $\Lambda_N$, respectively,

$$\left| \mathbf{P}_{\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E \cap F] - \mathbf{P}_{\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E] \, \mathbf{P}_{\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [F] \right| \leq s (\log \tfrac{N}{n})^{-c}. \quad (6.11)$$

Furthermore, for every $x'_1, \dots, x'_t \in \Lambda_n$ and $y'_1, \dots, y'_t \notin \Lambda_N$, we have that

$$\left|\mathbf{P}_{\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E] - \mathbf{P}_{\beta}^{x'_1 y'_1, \dots, x'_t y'_t, \emptyset, \dots, \emptyset} [E]\right| \leq s (\log \tfrac{N}{n})^{-c}, \quad (6.12)$$

$$\left|\mathbf{P}_{\beta}^{x_1 y_1, \ldots, x_t y_t, \emptyset, \ldots, \emptyset} [F] - \mathbf{P}_{\beta}^{x'_1 y'_1, \ldots, x'_t y'_t, \emptyset, \ldots, \emptyset} [F]\right| \leq s (\log \tfrac{N}{n})^{-c}. \quad (6.13)$$

We postpone the proof to Section 6.2 below. Before showing how Theorem 6.4 is used in the proof of the improved tree diagram bound, let us make an interlude and comment on this statement.

**Discussion** The relation (6.11) is an assertion of approximate independence between events at far distances, while (6.12)–(6.13) express a degree of independence of the probability of an event from the precise placement of the sources, when these are far from the event in question. This result should be of interest on its own, and may well have other applications, since mixing properties efficiently replace independence in statistical mechanics.
+
+The main difficulty of the theorem concerns currents with a source inside $\Lambda_n$ and a source outside $\Lambda_N$ (i.e. the first $t$ ones). In this case, the currents are constrained to contain a path linking the two, and that path may be a conduit for information, and correlation, between $\Lambda_n$ and the exterior of $\Lambda_N$. To appreciate the point, it may help to compare the situation with Bernoulli percolation: there, the mixing property without sources is a triviality (by the independence of the variables), while an analogue of the mixing property with sources $x$ and $y$ would concern Bernoulli percolation conditioned on having a path from $x$ to $y$. Proving convergence at criticality, for $x$ set as the origin and $y$ tending to infinity, of these conditioned measures is a notoriously hard problem. It would in particular imply the existence of the so-called Incipient Infinite Cluster (IIC), whose definition was justified in 2D [32] and in high dimension [49], but which is still open in dimensions $3 \leq d \leq 10$. When the number of sources inside $\Lambda_n$ is even, things become much simpler and one may in fact prove a quantitative ratio weak mixing using the mixing properties for (sub)-critical random-cluster measures with cluster-weight 2 provided by [4].
+---PAGE_BREAK---
+
+Theorem 6.4 has an extension to three dimensions using [4], but there it becomes non-quantitative (the careful reader will notice that the condition $d > 3$ comes from the exponent appearing in the proof of (6.36) in Lemma 6.7 in the next section). More precisely, one may prove that in dimension $d = 3$, for every $n, s$ and $\varepsilon$, there exists $N$ sufficiently large that the previous theorem holds with an error $\varepsilon$ instead of $s(\log \frac{N}{n})^{-c}$.
This has a particularly interesting application: one may construct the IIC in dimension $d = 3$ for this model, since the random-cluster model with cluster weight $q = 2$ conditioned on having a path from $x$ to $y$ can be obtained as the random current model with sources $x$ and $y$ together with an additional independent sprinkling (see [5]). This represents a non-trivial result for the critical 3D Ising model. More generally, we believe that the previous mixing result may be a key tool in the rigorous description of the critical behaviour of the Ising model in three dimensions.
+
+This concludes the interlude, and we return now to the proof of the intersection-clustering bound.
+
+**Proof of Proposition 6.1** We follow the same argument as in the proof of the conditional version (Proposition 4.3) and borrow the notation from the corresponding proof at the end of Section 4.2. We fix $\alpha > 2$ large enough that the mixing property (Theorem 6.4) holds true. Using Lemma 6.3, we may choose $D = D(\alpha)$ such that $\ell_{k+1} \ge \ell_k^\alpha$.
+
+The proof is identical to the proof of Proposition 4.3, with the exception of the bound on $\mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S]$ and the fact that we restrict ourselves to subsets $S$ of even integers in $\{1, \dots, K-3\}$. In order to obtain this bound, first observe that since we assumed $\ell_K \le \xi(\beta)$, by Theorem 5.12 there exists $y \in \text{Ann}(\ell_{K-1}, \ell_K)$ in a regular scale. Since the event $A_S$ depends only on the currents inside $\Lambda_{\ell_{K-2}}$ (as $S$ does not contain integers strictly larger than $K-3$), and since $\ell_{K-1} \ge \ell_{K-2}^\alpha$, the mixing property (Theorem 6.4) shows that
+
+$$ \mathbf{P}^{0x,0z,\emptyset,\emptyset}[A_S] \le \mathbf{P}^{0y,0y,\emptyset,\emptyset}[A_S] + \frac{C}{\sqrt{\log \ell_{K-1}}} \le \mathbf{P}^{0y,0y,\emptyset,\emptyset}[A_S] + 2^{-\delta K}.
\quad (6.14) $$
+
+To derive the first inequality, we apply the mixing property (Theorem 6.4) repeatedly together with the intersection property (Lemma 6.2), exactly as in the conditional proof. For the second inequality, we lower bound $\ell_{K-1}$ using $B_{\ell_{K-1}}(\beta) \ge D^{K-1}$ and the Infrared Bound (5.37). $\square$
+
+## 6.2 The mixing property: proof of Theorem 6.4
+
+As we saw, the mixing property is at the core of the proof of our main result. The strategy of the proof was explained in Section 4.2, when we proved mixing for one current under the power-law assumption. In this section we again define a random variable $\mathbf{N}$ which is approximately 1 and is a weighted sum over ($t$-tuples of) vertices connected to the origin. The main difficulty comes from the fact that we do not fully control the spin-spin correlations, so we will need to define $\mathbf{N}$ in a smarter fashion. Also, whereas in Section 4.2 we treated the case of a single current ($s=1$), here we generalize to multiple currents.
+
+Fix $\beta \le \beta_c$ and drop it from the notation. Also fix $s \ge t \ge 1$ and $n^\alpha \le N \le \xi(\beta)$. Below, constants $c_i$ and $C_i$ are independent of the choices of $s,t,\beta,n,N$ satisfying the properties above. We introduce the integers $m$ and $M$ such that $m/n = (N/n)^{1/3}$ and $N/M = (N/n)^{1/3}$ (we omit the details of the rounding operation).
+
+For $\mathbf{x} = (x_1, \dots, x_t)$ and $\mathbf{y} = (y_1, \dots, y_t)$, we will use the following shortcut notation
+
+$$ \mathbf{P}^{\mathbf{x}\mathbf{y}} := \mathbf{P}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} \quad \text{and} \quad \mathbf{P}^{\mathbf{x}\mathbf{y}} \otimes \mathbf{P}^{\emptyset}, \quad (6.15) $$
+---PAGE_BREAK---
+
+where the second measure is the law of the random variable $(\mathbf{n}_1, \dots, \mathbf{n}_s, \mathbf{n}'_1, \dots, \mathbf{n}'_s)$, where $(\mathbf{n}'_1, \dots, \mathbf{n}'_s)$ is an independent family of sourceless currents.
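+
+Note that, up to rounding, the choice of the intermediate scales $m$ and $M$ makes the four scales $n \le m \le M \le N$ equally spaced in the multiplicative sense:
+
+$$\frac{m}{n} = \frac{M}{m} = \frac{N}{M} = \left(\frac{N}{n}\right)^{1/3}, \qquad \text{that is,} \quad m = n^{2/3} N^{1/3} \quad \text{and} \quad M = n^{1/3} N^{2/3}.$$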
+
+To define $\mathbf{N}$, first introduce for every vertex $y \notin \Lambda_{2dm}$, the set (see Fig. 5)
+
+$$
+A_y(m) := \{u \in \text{Ann}(m, 2m) : \forall x \in \Lambda_{m/d}, \langle \sigma_x \sigma_y \rangle \le \left(1 + \frac{C|x-u|}{|y|}\right) \langle \sigma_u \sigma_y \rangle\}, \quad (6.16)
+$$
+
+where $C$ is given by the definition of good scales.
+
+**Remark 6.5** When $y$ is in a regular scale, then $A_y(m)$ is equal to $\text{Ann}(m, 2m)$ by Property P2 of regular scales. The reason why we consider $A_y(m)$ instead of the full annulus $\text{Ann}(m, 2m)$ is technical: since $y$ will not a priori be assumed to belong to a regular scale (in fact $|y|$ may be much larger than $\xi(\beta)$ when $\beta < \beta_c$), we will use (for (6.26) and (6.37) below) the inequality between $\langle \sigma_x \sigma_y \rangle$ and $\langle \sigma_u \sigma_y \rangle$ in several bounds. Now, if $y_1 = |y|$, then
+
+$$
+A_y(m) \supset \{z \in \mathbb{Z}^d : m \le z_1 \le 2m \text{ and } 0 \le z_j \le m/d \text{ for } j > 1\} \quad (6.17)
+$$
+
+as the Messager-Miracle-Sole inequality implies⁶ that $\langle \sigma_z \sigma_y \rangle \ge \langle \sigma_x \sigma_y \rangle$ for every $x \in \Lambda_{m/d}$.
+
+From now on, fix a set $\mathcal{K}$ of regular scales $k$ with $m \le 2^k \le M/2$ such that the scales $2^k, 2^{k'}$ corresponding to distinct $k, k' \in \mathcal{K}$ differ by a multiplicative factor of at least $C$ (where the constant $C$ is given by Theorem 5.12). We further assume that $|\mathcal{K}| \ge c_1 \log(N/n)$, where $c_1$ is sufficiently small. The existence of $\mathcal{K}$ is guaranteed by the definition of $m$ and $M$ and the abundance of regular scales given by Theorem 5.12.
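+
+The counting behind the existence of such a set can be sketched as follows: there are approximately
+
+$$\#\{k \in \mathbb{N} : m \le 2^k \le M/2\} \approx \log_2 \frac{M}{2m} = \tfrac{1}{3}\log_2\frac{N}{n} - 1$$
+
+dyadic scales between $m$ and $M/2$, a positive fraction of which are regular by Theorem 5.12; keeping only a subsequence of these regular scales separated by multiplicative gaps of size $C$ preserves the required separation while retaining at least $c_1 \log(N/n)$ of them, provided $c_1$ is small enough.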
+
+Define $\mathbf{N} := \prod_{i=1}^{t} \mathbf{N}_i$, where
+
+$$
+\mathbf{N}_i := \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} \frac{1}{A_{x_i, y_i}(2^k)} \sum_{u \in A_{y_i}(2^k)} \mathbb{I}[u \xleftrightarrow{\mathbf{n}_i+\mathbf{n}'_i} x_i], \quad (6.20)
+$$
+
+where $a_{x,y}(u) := \langle\sigma_x\sigma_u\rangle\langle\sigma_u\sigma_y\rangle/\langle\sigma_x\sigma_y\rangle$ and $A_{x,y}(m) := \sum_{u\in A_{y}(m)} a_{x,y}(u)$. The first step of the proof is the following concentration inequality.
+
+**Proposition 6.6 (Concentration of N)** *For every $\alpha > 2$, there exists $C_0 = C_0(\alpha, t) > 0$ such that for every $n$ large enough and $n^\alpha \le N \le \xi(\beta)$,*
+
+$$
+\mathbf{E}^{\mathbf{x}\mathbf{y},\emptyset}[(\mathbf{N}-1)^2] \leq \frac{C_0}{\log(N/n)}. \quad (6.21)
+$$
+
+*Proof* We shall apply the telescopic formula
+
+$$
+\mathbf{N}-1 = \prod_{i=1}^{t} \mathbf{N}_i - 1 = \sum_{i=1}^{t} (\mathbf{N}_i - 1) \prod_{j>i} \mathbf{N}_j
+$$
+
+⁶The claim follows directly from the inequality $S_\beta(y) \le S_\beta(x)$ for every $x,y$ such that $x_1 \ge 0$ and $y_1 \ge x_1 + \sum_{j>1} |y_j - x_j|$. In order to prove this inequality, define, for $1 \le i \le d$,
+
+$$
+v^{(i)} := (x_1 + \sum_{j=i+1}^{d} |y_j - x_j|, x_2, \dots, x_i, y_{i+1}, \dots, y_d). \tag{6.18}
+$$
+
+Successive applications of the Messager-Miracle-Sole inequality with respect to the sum or the difference
+(depending on whether $x_i$ is positive or negative) of the first and $i$-th coordinates implies that
+
+$$
+S_{\beta}(y) \le S_{\beta}(v^{(1)}) \le S_{\beta}(v^{(2)}) \le \cdots \le S_{\beta}(v^{(d)}) = S_{\beta}(x). \quad (6.19)
+$$
+---PAGE_BREAK---
+
+with the last product interpreted as 1 for $i = t$. Hence, by the Cauchy-Schwarz inequality and the currents' independence,
+
+$$ \mathbf{E}^{xy,\emptyset}[(\mathbf{N}-1)^2] \leq t \sum_{i=1}^{t} \mathbf{E}^{xy,\emptyset}[(\mathbf{N}_i-1)^2] \prod_{j>i} \mathbf{E}^{xy,\emptyset}[\mathbf{N}_j^2].
\quad (6.22) $$
+
+It therefore suffices to show that there exists a constant $C_1 > 0$ such that for every $i \le t$,
+
+$$ \mathbf{E}^{xy,\emptyset}[(\mathbf{N}_i - 1)^2] \le \frac{C_1}{\log(N/n)}. \quad (6.23) $$
+
+To lighten the notation, and since the random variable $\mathbf{N}_i$ depends only on $\mathbf{n}_i$ and $\mathbf{n}'_i$, we omit the index in $x_i$, $y_i$, $\mathbf{n}_i$, $\mathbf{n}'_i$ and write instead just $x, y, \mathbf{n}, \mathbf{n}'$. We keep the index in $\mathbf{N}_i$ to avoid confusion with $\mathbf{N}$, which is the product of these random variables.
+
+The proof of (6.23) is again based on a computation of the first and second moments of $\mathbf{N}_i$. For the first moment, the switching lemma and the definition of $\mathbf{N}_i$ imply that $\mathbf{E}^{xy,\emptyset}[\mathbf{N}_i] = 1$. By the lower bound on $|\mathcal{K}|$, it therefore suffices to show for the second moment that
+
+$$ \mathbf{E}^{xy,\emptyset}[\mathbf{N}_i^2] \le 1 + \frac{C_2}{|\mathcal{K}|}, \quad (6.24) $$
+
+which follows from the inequality, for every $\ell \ge k$ in $\mathcal{K}$,
+
+$$ \sum_{\substack{u \in A_y(2^k) \\ v \in A_y(2^\ell)}} \mathbf{P}^{xy,\emptyset}[u, v \xleftrightarrow{\mathbf{n}+\mathbf{n}'} x] \le A_{x,y}(2^k) A_{x,y}(2^\ell)(1+C_3 2^{-(\ell-k)}). \quad (6.25) $$
+
+**Case** $\ell > k$. We find by (A.11) that
+
+$$ \mathbf{P}^{xy,\emptyset}[u,v \xleftrightarrow{\mathbf{n}+\mathbf{n}'} x] \le a_{x,y}(u)a_{x,y}(v) \left( \frac{\langle \sigma_x \sigma_y \rangle \langle \sigma_u \sigma_v \rangle}{\langle \sigma_u \sigma_y \rangle \langle \sigma_x \sigma_v \rangle} + \frac{\langle \sigma_x \sigma_y \rangle \langle \sigma_u \sigma_v \rangle}{\langle \sigma_v \sigma_y \rangle \langle \sigma_x \sigma_u \rangle} \right). \quad (6.26) $$
+
+Now, since $u \in A_y(2^k)$, $\langle \sigma_x \sigma_y \rangle \le (1+C\frac{|u-x|}{|y|})\langle \sigma_u \sigma_y \rangle$.
Furthermore, since $\ell$ is a regular scale, Property P2 of regular scales implies that $\langle \sigma_u \sigma_v \rangle \le (1+C\frac{|u-x|}{|v|})\langle \sigma_x \sigma_v \rangle$. We deduce that
+
+$$ \frac{\langle \sigma_x \sigma_y \rangle \langle \sigma_u \sigma_v \rangle}{\langle \sigma_u \sigma_y \rangle \langle \sigma_x \sigma_v \rangle} \le 1 + C_0 2^{-(\ell-k)}. \quad (6.27) $$
+
+Similarly, since $v \in A_y(2^\ell)$, $\langle \sigma_x \sigma_y \rangle \le (1+C\frac{|v-x|}{|y|})\langle \sigma_v \sigma_y \rangle$. Property P3 for the $\ell-k$ regular scales in $\mathcal{K}$ between $k$ and $\ell$ implies that
+
+$$ \frac{\langle\sigma_x\sigma_y\rangle\langle\sigma_u\sigma_v\rangle}{\langle\sigma_v\sigma_y\rangle\langle\sigma_x\sigma_u\rangle} \le C_1 2^{-(\ell-k)}. \quad (6.28) $$
+
+Plugging (6.27)-(6.28) into (6.26) and summing over $u \in A_y(2^k)$ and $v \in A_y(2^\ell)$ gives (6.25).
+---PAGE_BREAK---
+
+**Case** $\ell = k$. Assume without loss of generality that $\langle \sigma_u \sigma_y \rangle \le \langle \sigma_v \sigma_y \rangle$. Use (A.11) to write
+
+$$
+\mathbf{P}^{xy,\emptyset}[u, v \xleftrightarrow{\mathbf{n}+\mathbf{n}'} x] \leq \langle \sigma_v \sigma_u \rangle \left( \frac{\langle \sigma_x \sigma_u \rangle}{\langle \sigma_x \sigma_v \rangle} + \frac{\langle \sigma_u \sigma_y \rangle}{\langle \sigma_v \sigma_y \rangle} \right) a_{x,y}(v). \quad (6.29)
+$$
+
+By Property P1 of regular scales, the first term in the parenthesis is bounded by a constant.
+The second one is bounded by 1 by assumption.
Now, for each $v \in A_y(2^k)$,
+
+$$
+\begin{align*}
+\sum_{u \in A_y(2^k): \langle \sigma_u \sigma_y \rangle \le \langle \sigma_v \sigma_y \rangle} \langle \sigma_v \sigma_u \rangle &\le \chi_{2^{k+1}}(\beta) \le C_2 (\chi_{2^{k+1}}(\beta) - \chi_{2^k}(\beta)) \\
+&\le C_3 \sum_{u \in A_y(2^k)} \langle \sigma_0 \sigma_u \rangle \\
+&\le C_4 \sum_{u \in A_y(2^k)} \frac{\langle \sigma_x \sigma_u \rangle \langle \sigma_u \sigma_y \rangle}{\langle \sigma_x \sigma_y \rangle} = C_4 A_{x,y}(2^k),
+\end{align*}
+$$
+
+where the first inequality is trivial, the second one is true by Property P4, the third by
+Remark 6.5 (when $y$ is regular it is a direct consequence of P4, and when it is not
+one can use (6.17) and the Messager-Miracle-Sole inequality), and the fourth inequality
+follows from Property P1 of regular scales (to replace $\langle \sigma_0 \sigma_u \rangle$ by $\langle \sigma_x \sigma_u \rangle$) and the fact that
+since $u \in A_y(2^k)$, $\langle \sigma_x \sigma_y \rangle \le (1+C|u-x|/|y|)\langle \sigma_u \sigma_y \rangle \le C_5\langle \sigma_u \sigma_y \rangle$.
+
+We deduce that
+
+$$
+\sum_{u,v \in A_y(2^k)} \mathbf{P}^{xy,\emptyset}[u, v \xleftrightarrow{\mathbf{n}+\mathbf{n}'} x] \le 2 \sum_{\substack{u,v \in A_y(2^k) \\ \langle \sigma_u \sigma_y \rangle \le \langle \sigma_v \sigma_y \rangle}} \mathbf{P}^{xy,\emptyset}[u, v \xleftrightarrow{\mathbf{n}+\mathbf{n}'} x] \le C_6 A_{x,y}(2^k)^2. \quad (6.30)
+$$
+
+$\square$
+
+We now turn to the proof of Theorem 6.4 and fix $\alpha > 2$ (which will be taken large enough later).
+Applying the Cauchy-Schwarz inequality and Proposition 6.6 gives
+
+$$
+|\mathbf{P}^{xy}[E \cap F] - \mathbf{E}^{xy,\emptyset}[\mathbf{N}\, \mathbb{I}((\mathbf{n}_1, ..., \mathbf{n}_s) \in E \cap F)]| \leq \sqrt{\mathbf{E}^{xy,\emptyset}[(\mathbf{N} - 1)^2]} \leq \frac{C_1}{\sqrt{\log(N/n)}}.
\quad (6.31)
+$$
+
+Now, for $\mathbf{u} = (u_1, \dots, u_t)$ with $u_i \in \text{Ann}(m, M)$ for every $i$, let $G(u_1, \dots, u_t)$ be the event
+that for every $i \le s$, there exists $\mathbf{k}_i \le \mathbf{n}_i + \mathbf{n}'_i$ such that $\mathbf{k}_i = 0$ on $\Lambda_n$, $\mathbf{k}_i = \mathbf{n}_i + \mathbf{n}'_i$ outside
+$\Lambda_N$, and $\partial \mathbf{k}_i$ is equal to $\{u_i, y_i\}$ if $i \le t$ and $\emptyset$ if $t < i \le s$. The switching principle implies
+as in Section 5.3 that
+
+$$
+\begin{align}
+&\mathbf{P}^{\mathbf{xy},\emptyset}[(\mathbf{n}_1, \dots, \mathbf{n}_s) \in E \cap F,\ u_i \xleftrightarrow{\mathbf{n}_i+\mathbf{n}'_i} x_i \text{ for } i \le t,\ G(u_1, \dots, u_t)] \\
+&\quad= \Big(\prod_{i=1}^t a_{x_i,y_i}(u_i)\Big)\, \mathbf{P}^{\mathbf{xu},\mathbf{uy}}[(\mathbf{n}_1, \dots, \mathbf{n}_s) \in E,\ (\mathbf{n}'_1, \dots, \mathbf{n}'_s) \in F,\ G(u_1, \dots, u_t)]. \tag{6.32}
+\end{align}
+$$
+
+Also, as before, we have the trivial identity
+
+$$
+\mathbf{P}^{\mathbf{xu},\mathbf{uy}}[(\mathbf{n}_1, ..., \mathbf{n}_s) \in E, (\mathbf{n}'_1, ..., \mathbf{n}'_s) \in F] = \mathbf{P}^{\mathbf{xu}}[E] \mathbf{P}^{\mathbf{uy}}[F]. \quad (6.33)
+$$
+
+We now pause the argument to establish the following lemma.
+
+**Lemma 6.7** For $d \ge 4$, there exist $\epsilon > 0$ and $\alpha_0 = \alpha_0(\epsilon) > 0$ large enough such that for every $n^{\alpha_0} \le N \le \xi(\beta)$ and every $\mathbf{u}$ such that for every $1 \le i \le t$, $u_i \in A_{y_i}(2^{k_i})$ for some $k_i$ with $m \le 2^{k_i} \le M/2$,
+
+$$
+\left(\prod_{i=1}^{t} a_{x_i, y_i}(u_i)\right)^{-1} \mathbf{P}^{\mathbf{xy},\emptyset}\left[u_i \xleftrightarrow{\mathbf{n}_i+\mathbf{n}'_i} x_i\ \forall i\le t,\ G(u_1,\ldots,u_t)^c\right] = \mathbf{P}^{\mathbf{xu},\mathbf{uy}}[G(u_1,\ldots,u_t)^c] \\
+\le s\left(\frac{n}{N}\right)^{\epsilon}.
+\tag{6.34}
+$$
+---PAGE_BREAK---
+
+Figure 5: The currents $\mathbf{n}_i$ (red) and $\mathbf{n}'_i$ (blue). Since the sources of $\mathbf{n}_i$, i.e.
$x_i$ and $u_i$, are both in $\Lambda_M$, a reasoning similar to the proof of uniqueness in the intersection property (first control the backbone, proving that it does not cross the annulus $\text{Ann}(M, R)$, and then the remaining sourceless current) enables us to conclude that the probability that the current contains a crossing of $\text{Ann}(M, N)$ is small. Similarly, since the sources $u_i$ and $y_i$ of $\mathbf{n}'_i$ both lie outside of $\Lambda_m$, we can prove that the probability that $\mathbf{n}'_i$ crosses $\text{Ann}(n, m)$ is small. Extra care is needed when establishing the latter since $y$ is not assumed to be regular. To circumvent this problem, we consider only intersection sites $u_i$ in one of the sets $A_{y_i}(2^k)$, which are depicted here in gray.
+
+**Proof** Fix $\varepsilon > 0$ sufficiently small (we will see below how small it should be). The first identity follows from the switching lemma, so we focus on the second one. Let $G_i$ be the event that the current $\mathbf{k}_i$ exists. This event clearly contains (see Fig. 5) the event that $\text{Ann}(M, N)$ is not crossed by a cluster in $\mathbf{n}_i$ and $\text{Ann}(n, m)$ is not crossed by a cluster in $\mathbf{n}'_i$, since in that case $\mathbf{k}_i$ can be defined as the sum of $\mathbf{n}_i$ restricted to the clusters intersecting $\Lambda_N^c$ (this current has no sources) and $\mathbf{n}'_i$ restricted to the clusters intersecting $\Lambda_m^c$ (this current has sources $u_i$ and $y_i$). We focus on the probability of this event for $i \le t$, the case $t < i \le s$ being even simpler since there are no sources.
+
+We bound the probability of $\mathbf{n}_i$ crossing $\text{Ann}(M, N)$ by splitting $\text{Ann}(M, N)$ into the two annuli $\text{Ann}(M, R)$ and $\text{Ann}(R, N)$ with $R = \sqrt{MN}$, then estimating the probability that the backbone of $\mathbf{n}_i$ crosses the inner annulus, and then the probability that the remaining current crosses the outer annulus.
More precisely, the chain rule for backbones [3] gives that for $\alpha_0 = \alpha_0(\varepsilon) > 0$ large enough and $N \ge n^{\alpha_0}$,
+$$
+\mathbf{P}^{\text{xy}}[\Gamma(\mathbf{n}_i) \text{ crosses } \text{Ann}(M, R)] \le \sum_{v \in \partial \Lambda_R} \frac{\langle \sigma_{x_i} \sigma_v \rangle \langle \sigma_v \sigma_{u_i} \rangle}{\langle \sigma_{x_i} \sigma_{u_i} \rangle} \le C_2 R^3 \frac{M^3}{R^4} \le (n/N)^{\varepsilon}, \quad (6.35)
+$$
+where we used the lower bound (5.39) for the denominator and the Infrared Bound (5.37) for the numerator. Then, observe that the remaining current $\mathbf{n}_i \setminus \Gamma(\mathbf{n}_i)$ is sourceless. Adding an additional sourceless current and using the switching lemma and the Griffiths inequality [22] (very much like in the bound (4.22) in the proof of Lemma 4.4) gives
+
+$$
+\mathbf{P}^{\text{xy}}[\mathbf{n}_i \setminus \Gamma(\mathbf{n}_i) \text{ crosses } \text{Ann}(R, N)] \leq \sum_{\substack{v \in \partial \Lambda_R \\ w \in \partial \Lambda_N}} \langle \sigma_v \sigma_w \rangle^2 \leq C_3 R^3 N^3 (R/N)^4 \leq (n/N)^{\epsilon}, \quad (6.36)
+$$
+where we used the Infrared Bound (5.37) in the second inequality, and in the last one the definition of $R$ and the fact that $N \ge n^{\alpha_0}$ for $\alpha_0$ large enough.
+---PAGE_BREAK---
+
+When dealing with the probability of $\mathbf{n}'_i$ crossing $\text{Ann}(n, m)$, fix $r = \sqrt{nm}$ and apply
+the same reasoning with the annuli $\text{Ann}(n, r)$ and $\text{Ann}(r, m)$.
The analogue of (6.36) is derived exactly as before, but one must be more careful with the bound on the probability of the event involving the backbone:
+
+$$
+\mathbf{P}^{\mathbf{x}\mathbf{y}}[\Gamma(\mathbf{n}'_i) \text{ crosses } \text{Ann}(r,m)] \le C_4 \sum_{v \in \partial \Lambda_r} \frac{\langle \sigma_{u_i} \sigma_v \rangle \langle \sigma_v \sigma_{y_i} \rangle}{\langle \sigma_{u_i} \sigma_{y_i} \rangle} \le C_5 r^3/m^2 \le (n/N)^{\epsilon}, \quad (6.37)
+$$
+
+where we used the Infrared Bound (5.37) and our assumption that $u_i$ belongs to one of
+the sets $A_{y_i}(2^{k_i})$ (to show that $\langle \sigma_v \sigma_{y_i} \rangle \le C_4 \langle \sigma_{u_i} \sigma_{y_i} \rangle$).
+
+$\square$
+
+Invoking the above lemma, we now return to the proof of Theorem 6.4.
+Introduce the coefficients $\delta(\mathbf{u}, \mathbf{x}, \mathbf{y})$ equal to
+
+$$
+\delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) := \prod_{i=1}^{t} \frac{a_{x_i, y_i}(u_i)}{|\mathcal{K}| A_{x_i, y_i}(2^{k_i})} \qquad (6.38)
+$$
+
+for $\mathbf{u}$ such that for every $i \le t$, $u_i \in A_{y_i}(2^{k_i})$ for some $k_i \in \mathcal{K}$, and equal to 0 for other $\mathbf{u}$.
+Gathering (6.31)–(6.33) as well as Lemma 6.7, and observing that the sum over $(u_1, \dots, u_t)$
+of $\delta(\mathbf{u}, \mathbf{x}, \mathbf{y})$ is 1, we obtain that
+
+$$
+| \mathbf{P}^{\mathbf{x}\mathbf{y}}[E \cap F] - \sum_{\mathbf{u}} \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) \mathbf{P}^{\mathbf{x}\mathbf{u}}[E] \mathbf{P}^{\mathbf{u}\mathbf{y}}[F] | \leq \frac{C_5 s}{\sqrt{\log(N/n)}} + 2C_6 s (n/N)^{\epsilon} \leq \frac{C_7 s}{\sqrt{\log(N/n)}}, \tag{6.39}
+$$
+
+provided that $N \ge n^{\alpha_0}$, where $\alpha_0$ is given by the previous lemma.
+
+Concluding the proof is now a matter of elementary algebraic manipulations. We begin
+by proving (6.12) when all the $y_i, y'_i$ for $i \le t$ belong to regular scales (not necessarily the
+same ones).
In this case, apply the previous inequality twice (once for $\mathbf{y}$ and once for $\mathbf{y}'$), taking for the event on the outside the full event, to find
+
+$$
+|\mathbf{P}^{\mathbf{x}\mathbf{y}}[E] - \mathbf{P}^{\mathbf{x}\mathbf{y}'}[E]| \leq |\sum_{\mathbf{u}} (\delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) - \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}')) \mathbf{P}^{\mathbf{x}\mathbf{u}}[E]| + \frac{2C_7s}{\sqrt{\log(N/n)}}. \quad (6.40)
+$$
+
+Since all the $y_i, y'_i$ are in regular scales, Remark 6.5 implies that $A_{y_i}(2^{k_i}) = A_{y'_i}(2^{k_i}) =$
+$\text{Ann}(2^{k_i}, 2^{k_i+1})$. Furthermore, Property P2 of regular scales implies⁷ that
+
+$$
+|\delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) - \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}')| \le C_8 s \frac{M}{N} \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) \le C_9 s (n/N)^{1/3} \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}). \quad (6.41)
+$$
+
+Therefore, (6.12) follows readily (with a large constant $C_{10}$) in this case. The same argument works for the second estimate (6.13) for every $\mathbf{x}$, $\mathbf{x}'$ and $\mathbf{y}$, noticing that for every regular $\mathbf{u}$ for which the coefficients are non-zero,
+
+$$
+|\delta(\mathbf{u}, \mathbf{x}', \mathbf{y}) - \delta(\mathbf{u}, \mathbf{x}, \mathbf{y})| \le C_{10} s \frac{n}{m} \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}) \le C_{11} s (n/N)^{1/3} \delta(\mathbf{u}, \mathbf{x}, \mathbf{y}). \quad (6.42)
+$$
+
+⁷Note that in this case $\delta(\mathbf{u}, \mathbf{x}, \mathbf{y})$ and $\delta(\mathbf{u}, \mathbf{x}, \mathbf{y}')$ are both close to the $\mathbf{y}$-independent quantity
+
+$$
+\delta'(\mathbf{u}, \mathbf{x}) := \prod_{i=1}^{t} \frac{\langle \sigma_{x_i} \sigma_{u_i} \rangle}{|\mathcal{K}| \sum_{u \in \text{Ann}(2^{k_i}, 2^{k_i+1})} \langle \sigma_{x_i} \sigma_u \rangle}.
+$$
+---PAGE_BREAK---
+
+for $c > 0$ sufficiently small. We now provide the proof of (6.48).
Fix $0 < a < 1$ (any choice would do) and split the sum into four sums
+
+$$ S(L, r, \beta) = \underbrace{\sum_{\substack{x \in \Lambda_{drL} \\ x_1, \dots, x_4 \in \Lambda_{rL} \\ L(x_1, \dots, x_4) \ge L^a}} (\cdots)}_{(1)} + \underbrace{\sum_{\substack{x \notin \Lambda_{drL} \\ x_1, \dots, x_4 \in \Lambda_{rL} \\ L(x_1, \dots, x_4) \ge L^a}} (\cdots)}_{(2)} + \underbrace{\sum_{\substack{x \in \Lambda_{drL} \\ x_1, \dots, x_4 \in \Lambda_{rL} \\ L(x_1, \dots, x_4) < L^a}} (\cdots)}_{(3)} + \underbrace{\sum_{\substack{x \notin \Lambda_{drL} \\ x_1, \dots, x_4 \in \Lambda_{rL} \\ L(x_1, \dots, x_4) < L^a}} (\cdots)}_{(4)} \quad (6.49) $$
+
+**Bound on (1)** We focus on this term and give more details, since it is in fact the main contributor. By Lemma 6.3,
+
+$$ B_{L(x_1, \dots, x_4)}(\beta) \geq \frac{1}{C_3} B_L(\beta). \quad (6.50) $$
+
+Summing over the sites in $\Lambda_{rL}$, we get that
+
+$$ (1) \le C_3 \frac{|\Lambda_{rL} | \chi_{2rL} (\beta)^4}{\Sigma_L(\beta)^2 B_L(\beta)^c} \le C_4 r^{12} \left( \frac{\chi_L(\beta)^2}{L^4 B_L(\beta)} \right)^c, \quad (6.51) $$
+
+where in the second inequality we used the sliding-scale Infrared Bound (5.24) to bound $\chi_{2drL}(\beta)$ in terms of $\chi_L(\beta) \le C_5 L^{-4} \Sigma_L(\beta)$ and the Infrared Bound (5.37) to write
+
+$$ \chi_L(\beta) \le C_6 L^2. \quad (6.52) $$
+
+Applying the Cauchy-Schwarz inequality for the first inequality below, then bounding the two terms in the middle using (6.52) and Lemma 6.3 respectively, we find that
+
+$$ \frac{\chi_L(\beta)^2}{L^4 B_L(\beta)} \le 2 \frac{\chi_{L/\log L}(\beta)^2}{L^4} + C_7 \frac{B_L(\beta) - B_{L/\log L}(\beta)}{B_L(\beta)} \le C_8 \frac{\log \log L}{\log L}. \quad (6.53) $$
+
+Plugging this estimate in (6.51) gives
+
+$$ (1) \le C_9 r^{12} \left( \frac{\log \log L}{\log L} \right)^c .
\quad (6.54) $$
+
+**Bound on (2)** Combine (5.4) and the sliding-scale Infrared Bound (5.24) to get that for $i = 1, \dots, 4$,
+
+$$ \langle \sigma_x \sigma_{x_i} \rangle_\beta \le \frac{C_{10}}{|x|^4} \chi_{|x|/d}(\beta) \le \frac{C_{11}}{L^2|x|^2} \chi_L(\beta). \quad (6.55) $$
+
+Summing over the sites gives the same bound as in (6.51), so that the reasoning used for (1) gives
+
+$$ (2) \le C_{12} r^{12} \left( \frac{\log \log L}{\log L} \right)^c . \quad (6.56) $$
+
+**Bound on (3)** This term is much smaller than the previous two due to the constraint that two sites must be close to each other. In fact, we will not even need the improved part of the tree diagram bound and will simply use that $B_{L(x_1, \dots, x_4)}(\beta) \ge 1$. Then, we use the Infrared Bound (5.37) to bound the terms $\langle \sigma_x \sigma_{x_i} \rangle_\beta$ and $\langle \sigma_x \sigma_{x_j} \rangle_\beta$ for
+---PAGE_BREAK---
+
+which $x_i$ and $x_j$ are at a distance exactly $L(x_1, \dots, x_4)$. Summing over the other two sites $x_k$ and $x_l$ gives a contribution bounded by $\chi_{2rL}(\beta)^2 \le C_{13}L^{-8}r^4\Sigma_L(\beta)^2$ by the sliding-scale Infrared Bound (5.24). Summing over $x$ and then $x_i$ and $x_j$ gives that
+
+$$ (3) \le \frac{C_{14}r^4 \log(Lr)}{L^{4-4a}}. \qquad (6.57) $$
+
+**Bound on (4)** This sum is even simpler to bound than (3). Again, we simply use $B_{L(x_1, \dots, x_4)}(\beta) \ge 1$, bound two of the terms $\langle \sigma_x \sigma_{x_i} \rangle_\beta$ using (6.55), and the other two using the Infrared Bound (5.37). Summing over the vertices and using the constraint that two of the sites must be close to each other gives the bound
+
+$$ (4) \le \frac{C_{15}r^8}{L^{4-4a}}. \qquad (6.58) $$
+
+In conclusion, all the sums (1)–(4) are sufficiently small (recall that by definition $r \ge 1$) and the claim follows. $\square$
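+
+Explicitly, collecting (6.54) and (6.56)–(6.58) and using $r \ge 1$, the four contributions combine (for $L$ large enough, with new constants) into
+
+$$ S(L, r, \beta) \le C_{16}\, r^{12} \left( \frac{\log \log L}{\log L} \right)^{c} + \frac{C_{16}\, r^{8} \log(Lr)}{L^{4-4a}} \le C_{17}\, r^{12} \left( \frac{\log \log L}{\log L} \right)^{c}, $$
+
+since the polynomially small terms coming from (3) and (4) are dominated by the logarithmic one.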
+
+**Remark 6.8** For $\beta < \beta_c$, applying (6.46) with $r=1$ and $L = \xi(\beta)$ gives the following bound on the renormalized coupling constant $g(\beta)$:
+
+$$ g(\beta) := \frac{1}{\xi(\beta)^4 \chi(\beta)^2} \sum_{x,y,z \in \mathbb{Z}^d} |U_4^\beta(0,x,y,z)| \le \left(\log\frac{1}{|\beta - \beta_c|}\right)^{-c}, \qquad (6.59) $$
+
+where we used that
+
+$$ \xi(\beta)^2 \ge c_0 \chi_{\xi(\beta)}(\beta) \ge c_1 \chi(\beta) \ge c_2 / (\beta_c - \beta). $$
+
+(The first inequality follows from the Infrared Bound, the second is a classical bound obtained first by Sokal [47], and the third is a mean-field lower bound on $\chi(\beta)$ [1].) In field theory, this quantity is often referred to as the (dimensionless) renormalized coupling constant. In [29] it was proved that for lattice $\phi_4^4$ measures with small enough $\lambda$ it converges to 0 at the rate $1/\log(\frac{1}{|\beta-\beta_c|})$. Such behaviour is expected to be true, in dimension $d=4$, also for the n.n.f. Ising model.
+
+# 7 Generalization to models in the Griffiths-Simon class
+
+In this section we extend the results to nearest-neighbor ferromagnetic models in the GS class. An important observation is that the results from the previous section also extend to this setting. Note, however, that $\rho$ can have unbounded support, so that to be of relevance the relations of interest need to be expressed in spin-dimension balanced forms. Once this is done, many of the basic diagrammatic bounds which are available for the Ising model extend to the core of the GS class essentially by linearity, and then to the full GS class by continuity. Below, we carefully present the generalizations.
+
+In the whole section, $U_4^{\rho,\beta}$ denotes the 4-point Ursell function of the $\tau$ variables, and $B_L(\rho, \beta)$ the bubble diagram truncated at a distance $L$. We also reuse the notation $\xi(\rho, \beta)$ and $\beta_c(\rho)$ introduced in Section 5.
+
+## 7.1 An improved tree diagram bound for models in the GS class
+
+For bounds which are not homogeneous in the spin dimension, one needs to pay attention to the fact that $\tau$ is neither dimensionless nor bounded, and to prepare the extension by reformulating the Ising relations in a spin-dimensionless form.
+---PAGE_BREAK---
+
+For example, the basic tree diagram bound (1.22) has four Ising spins on the left side and four pair correlations on the right. An extension of the inequality to GS models can be reached by site-splitting the terms in which an Ising spin is repeated, using the inequality (5.38) (which has a simple proof by means of the switching lemma) $^8$. The resulting diagrammatic bounds may at first glance appear slightly more complicated than the ones for the Ising case, but they have the advantage of being dimensionally balanced. That is a required condition for a bound to hold uniformly throughout the GS class of models. Additional consideration is needed for the factors by which the tree diagram bound of [1] is improved here. Taking care of that, we get the following extension of the result, which also covers the $\phi^4$ lattice models.
+
+**Theorem 7.1 (Improved tree diagram bound for the GS class)** *There exist $C, c > 0$ such that for every n.n.f. model in the GS class on $\mathbb{Z}^4$, every $\beta \le \beta_c(\rho)$, $L \le \xi(\rho, \beta)$ and every $x,y,z,t \in \mathbb{Z}^d$ at distances larger than $L$ from each other,*
+
+$$|U_4^{\rho,\beta}(x,y,z,t)| \le C \left( \frac{B_0(\rho, \beta)}{B_L(\rho, \beta)} \right)^c \sum_u \sum_{u',u''} \langle \tau_x \tau_u \rangle_{\rho,\beta} \beta J_{u,u'} \langle \tau_{u'} \tau_y \rangle_{\rho,\beta} \langle \tau_z \tau_u \rangle_{\rho,\beta} \beta J_{u,u''} \langle \tau_{u''} \tau_t \rangle_{\rho,\beta}.
\quad (7.1)$$
+
+Before diving into the proof, note that the improved tree diagram bound implies, as it did for the Ising model, the following quantitative bound on the convergence to Gaussian of the scaling limit of the $\tau$ field in four-dimensional models with variables in the GS class.
+
+**Proposition 7.2** *There exist two constants $c, C > 0$ such that for every n.n.f. model in the GS class on $\mathbb{Z}^4$, every $\beta \le \beta_c(\rho)$, $L \le \xi(\rho, \beta)$, every continuous function $f: \mathbb{R}^4 \to \mathbb{R}$ with bounded support and every $z \in \mathbb{R}$,*
+
+$$\left| \left\langle \exp[zT_{f,L}(\tau) - \frac{z^2}{2} \langle T_{f,L}(\tau)^2 \rangle_{\rho,\beta}] \right\rangle_{\rho,\beta} - 1 \right| \leq \frac{C \|f\|_{\infty}^4 r_f^{12}}{(\log L)^c} z^4. \quad (7.2)$$
+
+We now return to the proof of the improved tree diagram bound, following the path outlined above. The GS class of variables is naturally divided into two kinds. The core consists of those that directly fall under Definition 2.1. The rest can be obtained as weak limits of the former. Since the constants in (7.1) are uniform, it suffices to prove the result for the former in order to get it for the latter. We therefore focus on site-measures $\rho$ satisfying Definition 2.1, which can directly be represented as Ising measures on a graph where every vertex is replaced by a block, as explained in the previous section. In this case, we identify $\langle \cdot \rangle_{\rho,\beta}$ with the Ising measure, and $\tau_x$ with the appropriate average of Ising variables. With this identification, we can harvest all the useful inequalities provided by the theory of the Ising model. In particular, we can use the random current representation.
More explicitly, to generalize the argument used in the Ising proof, we introduce the measure $\mathbf{P}^{xy}$ defined on the graph $\mathbb{Z}^d \times \{1, \dots, N\}$ in two steps:

• first, sample two integers $1 \le i, j \le N$ with probability

$$Q_i Q_j \langle\sigma_{x,i} \sigma_{y,j}\rangle_{\rho,\beta} / \langle\tau_x \tau_y\rangle_{\rho,\beta},$$

• second, sample a current according to the measure $\mathbf{P}_{\rho,\beta}^{(x,i),(y,j)}$ corresponding to the random current representation of the Ising model $\langle \cdot \rangle_{\rho,\beta}$.

⁸An alternative method for reducing a diagrammatic expression's spin-dimension is to divide by $\langle\sigma_u^2\rangle_0$. Both methods are of use, and may be compared through (5.15).

The interpretation of this object is that of a random current with two random sources $(x, i) \in \mathcal{B}_x$ and $(y, j) \in \mathcal{B}_y$. Also note that the superscript $xy$ will unequivocally denote this type of measure (we will avoid using measures with deterministic sources in this section to prevent confusion), while $\mathbf{P}_{\rho,\beta}^{\emptyset}$ denotes the sourceless measure.

The interest in $\mathbf{P}_{\rho,\beta}^{xy}$ over measures with deterministic sets of sources comes from the fact that the probability that the cluster of the sources intersects a set of the form $\mathcal{B}_u$ can be bounded in terms of correlations of the variables $\tau_x, x \in \mathbb{Z}^d$ (see Proposition A.8).

**Proof of Theorem 7.1** As mentioned above, every $\rho$ in the GS class is a weak limit of measures satisfying the first condition of Definition 2.1. We therefore focus on such measures.

Exactly like in the case of the Ising model, the core of the proof of Theorem 7.1 is the intersection-clustering property that we now state, and whose proof is postponed until after the proof of the theorem.
Define $\ell_0 = 0$ and $\ell_k = \ell_k(\rho, \beta)$ using the same definition (using $B_L(\rho, \beta)$ this time) as in (6.1). Let $\mathcal{T}_u$ be the set of vertices $v \in \mathbb{Z}^d$ such that $\mathcal{B}_v$ is connected in $\mathbf{n}_1 + \mathbf{n}_3$ to a box $\mathcal{B}_{u'}$ and in $\mathbf{n}_2 + \mathbf{n}_4$ to a box $\mathcal{B}_{u''}$, with $u'$ and $u''$ at graph distance at most 2 from $u$. Note that $\mathcal{T}_u$ is now a function of $u$ and that it is defined in terms of “coarse intersections”, i.e. lattice sites $v$ such that both clusters intersect $\mathcal{B}_v$ (but do not necessarily intersect each other).

**Proposition 7.3 (intersection-clustering bound for the GS class)** For $d=4$ and $D$ large enough, there exists $\delta = \delta(D)$ such that for every model in the GS class, every $\beta \le \beta_c(\rho)$, every $K$ such that $\ell_K \le \xi(\rho, \beta)$ and every $u, u', u'', x, y, z, t \in \mathbb{Z}^d$ with $u'$ and $u''$ neighbours of $u$ and $x, y, z, t$ at mutual distances larger than $2\ell_K$,

$$ \mathbf{P}_{\rho,\beta}^{ux,uz,u'y,u''t}[\mathbf{M}_u(\mathcal{T}_u; \mathcal{L}, K) \le \delta K] \le 2^{-\delta K}. \quad (7.3) $$

Postponing the proof of this estimate, we proceed with the proof of the theorem.
Express $U_4^{\rho,\beta}$ in terms of intersection properties of currents by summing (3.14) over vertices of $\mathcal{B}_x, \dots, \mathcal{B}_t$:

$$ |U_4^{\rho,\beta}(x,y,z,t)| \le 2\langle\tau_x\tau_y\rangle_{\rho,\beta}\langle\tau_z\tau_t\rangle_{\rho,\beta}\mathbf{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2) \neq \emptyset], \quad (7.4) $$

where $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2)$ refer to the clusters in $\mathbf{n}_1+\mathbf{n}_3$ and $\mathbf{n}_2+\mathbf{n}_4$ of the sources in $\partial\mathbf{n}_1$ and $\partial\mathbf{n}_2$ respectively (we introduce this notation since the sources are no longer deterministic).

Define $K \ge c \log[B_L(\rho, \beta)/B_0(\rho, \beta)]$ as in the Ising case. We now implement the same reasoning as for the Ising model, with the twist that we consider coarse intersections. If $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2)$ intersect, then

• either the number of $u \in \mathbb{Z}^d$ such that $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2)$ both intersect $\mathcal{B}_u$ is larger than or equal to $2^{\delta K/5}$,

• or there exists $u \in \mathbb{Z}^d$ such that $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1)$ and $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2)$ both intersect $\mathcal{B}_u$, and $\mathbf{M}_u(\mathcal{T}_u; \mathcal{L}, K) < \delta K$.
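Writing $\mathbf{N}$ for the number of $u$ appearing in the first alternative ($\mathbf{N}$ is a shorthand introduced here for readability), the dichotomy above amounts to the union bound

```latex
\mathbf{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}
\big[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(\partial\mathbf{n}_1)
 \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(\partial\mathbf{n}_2) \neq \emptyset\big]
\;\le\;
\mathbf{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}\big[\mathbf{N} \ge 2^{\delta K/5}\big]
\;+\;
\sum_{u \in \mathbb{Z}^d} \mathbf{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}
\big[\text{both clusters intersect } \mathcal{B}_u,\;
 \mathbf{M}_u(\mathcal{T}_u; \mathcal{L}, K) < \delta K\big],
```

with Markov's inequality bounding the first term by $2^{-\delta K/5}\,\mathbf{E}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}[\mathbf{N}]$; this is where (A.38) and Lemma A.7 respectively enter.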
Using Markov's inequality and (A.38) on the first line, and Lemma A.7 on the second, we find (dropping $\rho$ and $\beta$ from the notation)

$$
\begin{aligned}
|U_4(x,y,z,t)| &\le 2^{-\delta K/5} \sum_{u,u',u'' \in \mathbb{Z}^d} \langle \tau_x \tau_u \rangle \beta J_{u,u'} \langle \tau_{u'} \tau_y \rangle \langle \tau_z \tau_u \rangle \beta J_{u,u''} \langle \tau_{u''} \tau_t \rangle \\
&\quad + \sum_{u,u',u'' \in \mathbb{Z}^d} \langle \tau_x \tau_u \rangle \beta J_{u,u'} \langle \tau_{u'} \tau_y \rangle \langle \tau_z \tau_u \rangle \beta J_{u,u''} \langle \tau_{u''} \tau_t \rangle \mathbf{P}^{xu,zu,u'y,u''t}[\mathbf{M}_u(\mathcal{T}_u; \mathcal{L}, K) < \delta K].
\end{aligned}
\quad (7.5)
$$

Lemma A.7 was invoked here since in the present context $\mathbf{M}_u(\mathcal{T}_u; \mathcal{L}, K)$ is defined in terms of coarse rather than true intersections. The intersection-clustering bound (Proposition 7.3) concludes the proof. $\square$

We now need to prove Proposition 7.3. The proof itself is exactly the same as for Proposition 6.1 (the monotonicity property of (A.2) is not impacted), except for the proofs of the mixing and intersection properties (i.e. the statements corresponding to Lemma 6.2 and Theorem 6.4 respectively). Below, we briefly detail the statements and proofs of these results. Let $I_k(0)$ be the event that there exists $v \in \text{Ann}(\ell_k, \ell_{k+1})$ such that $\mathcal{B}_v$ is connected in $\mathbf{n}_1 + \mathbf{n}_3$ and in $\mathbf{n}_2 + \mathbf{n}_4$ to the union of the boxes $\mathcal{B}_w$ with $w$ at a distance at most 2 from 0.

**Lemma 7.4 (intersection property for the GS class)** *There exists $c > 0$ such that for every $\rho$ in the GS class, every $\beta \le \beta_c(\rho)$, every $k$, every neighbour $0'$ of the origin, and every $y \notin \Lambda_{2\ell_{k+1}}$ in a regular scale,*

$$ \mathbf{P}_{\rho,\beta}^{0y,0'y,\emptyset,\emptyset}[I_k(0)] \ge c.
\quad (7.6) $$

**Proof** We reuse the notation introduced in the proofs of the intersection property in the previous sections. Let

$$ \mathcal{M} := \sum_{v \in \text{Ann}(m,M)} \sum_{i,i'=1}^{N} Q_i^2 \mathbb{I}[\partial \mathbf{n}_1 \stackrel{\mathbf{n}_1+\mathbf{n}_3}{\longleftrightarrow} (v,i)] Q_{i'}^2 \mathbb{I}[\partial \mathbf{n}_2 \stackrel{\mathbf{n}_2+\mathbf{n}_4}{\longleftrightarrow} (v,i')]. \quad (7.7) $$

A computation similar to before gives

$$ \mathbf{E}_{\rho, \beta}^{0y, 0'y, \emptyset, \emptyset}[\mathcal{M}] \geq c_1(B_M(\rho, \beta) - B_{m-1}(\rho, \beta)) \quad (7.8) $$

$$ \mathbf{E}_{\rho, \beta}^{0y, 0'y, \emptyset, \emptyset}[\mathcal{M}^2] \le C_2 B_{\ell_{k+1}}(\rho, \beta)^2. \quad (7.9) $$

Now, in the first line we use the same reasoning as below (6.6). We include it for completeness, to show where the division by $B_0(\rho, \beta)$ enters the game (it is the only place it does). The Infrared Bound (5.36) (note that $\langle \tau_0^2 \rangle_{\rho,\beta} = B_0(\rho, \beta)$) implies that

$$ B_M(\rho, \beta) - B_{m-1}(\rho, \beta) \ge B_{\ell_{k+1}}(\rho, \beta) - B_{\ell_k}(\rho, \beta) - C_3 B_0(\rho, \beta) \ge \left(1 - \frac{1+C_3}{D}\right) B_{\ell_{k+1}}(\rho, \beta). \quad (7.10) $$

Cauchy-Schwarz therefore implies that $\mathcal{M} > 0$ with positive probability, which in particular implies the existence of a vertex $v \in \text{Ann}(m,M)$ which is connected in $\mathbf{n}_1 + \mathbf{n}_3$ to $\mathcal{B}_0$ and in $\mathbf{n}_2 + \mathbf{n}_4$ to $\mathcal{B}_{0'}$.

The second part of the proof, bounding the probabilities of $F_1, \dots, F_4$, follows by the same proof as for the Ising model.
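The first part of the proof above is the classical second-moment (Cauchy-Schwarz) step; spelled out, combining (7.8)–(7.10) gives, uniformly in $k$ and in the model,

```latex
\mathbf{P}_{\rho,\beta}^{0y,0'y,\emptyset,\emptyset}[\mathcal{M} > 0]
\;\ge\;
\frac{\mathbf{E}_{\rho,\beta}^{0y,0'y,\emptyset,\emptyset}[\mathcal{M}]^{2}}
     {\mathbf{E}_{\rho,\beta}^{0y,0'y,\emptyset,\emptyset}[\mathcal{M}^{2}]}
\;\ge\;
\frac{c_1^2}{C_2}\Big(1-\frac{1+C_3}{D}\Big)^{2}
\;>\;0,
```

which is the positive-probability statement used above.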
More precisely, for $F_1$, the chain rule for backbones [3], together with a decomposition on the first edge of the backbone with one endpoint in (a block of a vertex in) $\Lambda_{n-1}$ and the other in (a block of a vertex in) $\partial\Lambda_n$, and then on the first edge after this one between an endpoint (in a block of a vertex) outside $\Lambda_{\ell_k}$ and one in (a block of a vertex in) $\Lambda_{\ell_k}$, implies that

$$ \mathbf{P}_{\rho, \beta}^{0y, \emptyset}[F_1] \le \sum_{\substack{v \in \partial\Lambda_n \\ w \in \partial\Lambda_{\ell_k} \\ v', w' \in \mathbb{Z}^d}} \frac{\langle \tau_0 \tau_{v'} \rangle_{\rho, \beta} \beta J_{v', v} \langle \tau_v \tau_{w'} \rangle_{\rho, \beta} \beta J_{w', w} \langle \tau_w \tau_y \rangle_{\rho, \beta}}{\langle \tau_0 \tau_y \rangle_{\rho, \beta}} \le C_3 n^3 \ell_k^3 n^{-4} \le C_4 \ell_k^{-\epsilon}. \quad (7.11) $$

This inequality uses Property P2 of regular scales, the lower bound (5.39) on the two-point function, and the Infrared Bound (5.37). For $F_3$, the same reasoning as for Ising, with Proposition A.8 replacing the switching lemma, leads to

$$
\mathbf{P}_{\rho,\beta}^{0y,\emptyset}[F_3] \le \sum_{\substack{v \in \partial \Lambda_n \\ w \in \partial \Lambda_m}} \mathbf{P}_{\rho,\beta}^{\emptyset,\emptyset}[\mathcal{B}_v \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} \mathcal{B}_w] \le \sum_{\substack{v \in \partial \Lambda_n \\ w \in \partial \Lambda_m \\ v', w' \in \mathbb{Z}^d}} \langle \tau_v \tau_w \rangle \beta J_{w,w'} \langle \tau_{w'} \tau_{v'} \rangle \beta J_{v',v} \le C_5 \ell_k^{-\epsilon}, \tag{7.12}
$$

where in the last step we again used the Infrared Bound (5.37). $\square$

We now turn to the proof of the mixing property for the measures $\mathbf{P}_\beta^{xy}$, which is an exact replica of the Ising statement.
**Theorem 7.5 (mixing of random currents for the GS class)** For $d \ge 4$, there exist $\alpha, c > 0$ such that for every $\rho$ satisfying Definition 2.1, every $t \le s$, every $\beta \le \beta_c(\rho)$, every $n^\alpha \le N \le \xi(\rho, \beta)$, every $x_i \in \Lambda_n$ and $y_i \notin \Lambda_N$ for every $i \le t$, and all events $E$ and $F$ depending on the restriction of $(\mathbf{n}_1, \dots, \mathbf{n}_s)$ to edges within $\Lambda_n$ and outside of $\Lambda_N$ respectively,

$$
\left| \mathbf{P}_{\rho, \beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E \cap F] - \mathbf{P}_{\rho, \beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E] \, \mathbf{P}_{\rho, \beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [F] \right| \le s \left(\log \tfrac{N}{n}\right)^{-c}. \quad (7.13)
$$

Furthermore, for every $x'_1, \dots, x'_t \in \Lambda_n$ and $y'_1, \dots, y'_t \notin \Lambda_N$,

$$
|\mathbf{P}_{\rho,\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [E] - \mathbf{P}_{\rho,\beta}^{x'_1 y'_1, \dots, x'_t y'_t, \emptyset, \dots, \emptyset} [E]| \le s \left(\log \tfrac{N}{n}\right)^{-c}, \quad (7.14)
$$

$$
|\mathbf{P}_{\rho,\beta}^{x_1 y_1, \dots, x_t y_t, \emptyset, \dots, \emptyset} [F] - \mathbf{P}_{\rho,\beta}^{x'_1 y'_1, \dots, x'_t y'_t, \emptyset, \dots, \emptyset} [F]| \le s \left(\log \tfrac{N}{n}\right)^{-c}. \quad (7.15)
$$

**Proof** The beginning is the same as for the Ising model, until the definition of the variable $\mathbf{N}_i$, which now becomes

$$
\mathbf{N}_i := \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} \frac{1}{A_{x_i, y_i}(k)} \sum_{u \in A_k(y_i)} a_{x_i, y_i}(u) \sum_{j=1}^{N} Q_j^2 \, \mathbb{I}[(u,j) \stackrel{\mathbf{n}_i + \mathbf{n}'_i}{\longleftrightarrow} \partial\mathbf{n}_i], \quad (7.16)
$$

where $a_{x,y}(u) := \langle\tau_x\tau_u\rangle\langle\tau_u\tau_y\rangle/\langle\tau_x\tau_y\rangle$ and $A_{x,y}(k) := \sum_{u\in A_k(y)} a_{x,y}(u)$. The proof of the concentration inequality follows the same lines as in the Ising case.
Indeed, the choice of the weight $Q_j^2$ makes it possible to rewrite the moments of the random variables $\mathbf{N}_i$ in terms of the correlations of the random variables $(\tau_z : z \in \mathbb{Z}^d)$. The rest of the proof is exactly the same, with trivial changes. For instance, in the proof of Lemma 6.7, one must be careful to derive bounds on probabilities involving $\beta|J|$. This is easily done using Proposition A.8, exactly like in the previous proof. $\square$

# A Appendix

## A.1 Partial monotonicity statements for random currents

An inconvenient feature of the random current representation is the lack of an FKG-type monotonicity, like the one valid for the Fortuin-Kasteleyn random cluster models (cf. [25]). The addition of a pair of sources may enhance the configuration, e.g. forcing a long line where such were rare, but in some situations it may facilitate a split in a connecting line, thereby reducing the current's connectivity properties. Nevertheless, some monotonicity properties can still be found, and they are used in our analysis.

In this section, we write $\sigma_A$ for the product of the spins in $A$ and $\mathbf{C}_\mathbf{n}(S) = \cup_{x \in S} \mathbf{C}_\mathbf{n}(x)$.

**Lemma A.1** Let $A, B, S$ be subsets of $\Lambda$ and let $F$ be a non-negative function of pairs of currents with $(\partial\mathbf{n}_1, \partial\mathbf{n}_2) = (A, B)$, which is determined by just the values of $(\mathbf{n}_1, \mathbf{n}_2)$ along the edges touching the connected cluster $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(S)$, and such that $F(\mathbf{n}_1, \mathbf{n}_2) = 0$ whenever that cluster intersects $B$.
Then

$$
\mathbf{E}_{\Lambda,\beta}^{A,B}[F(\mathbf{n}_1, \mathbf{n}_2)] = \mathbf{E}_{\Lambda,\beta}^{A,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2) \frac{\langle \sigma_B \rangle_{\Lambda \setminus \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(S),\beta}}{\langle \sigma_B \rangle_{\Lambda,\beta}}] \leq \mathbf{E}_{\Lambda,\beta}^{A,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2)]. \quad (\text{A.1})
$$

**Proof** The second inequality is a trivial application of Griffiths’ inequality [22]. The first one is proven by a fairly straightforward manipulation involving currents that we now present. We drop $\beta$ from the notation. Fix $T \subset \Lambda$ not intersecting $B$ and choose $F$ given by

$$
F(\mathbf{n}_1, \mathbf{n}_2) := \mathbb{I}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(S) = T] \mathbb{I}[\mathbf{n}_1 = \mathbf{n}] \mathbb{I}[\mathbf{n}_2 = \mathbf{m} \text{ on } T] \quad (\text{A.2})
$$

for $\mathbf{n}$ and $\mathbf{m}$ currents on $\Lambda$ and $T$ respectively. For such a choice of function, we find that

$$
\begin{align*}
\langle \sigma_A \rangle_\Lambda \langle \sigma_B \rangle_\Lambda \mathbf{E}_\Lambda^{A,B}[F(\mathbf{n}_1, \mathbf{n}_2)] &= \frac{4^{|\Lambda|}}{Z(\Lambda, \beta)^2} \sum_{\mathbf{n}_1 : \partial\mathbf{n}_1 = A} \sum_{\mathbf{n}_2 : \partial\mathbf{n}_2 = B} F(\mathbf{n}_1, \mathbf{n}_2) w(\mathbf{n}_1) w(\mathbf{n}_2) \\
&= \frac{4^{|\Lambda|} w(\mathbf{n}) w(\mathbf{m})}{Z(\Lambda, \beta)^2} \sum_{\mathbf{n}'_2 : \partial\mathbf{n}'_2 = B} w(\mathbf{n}'_2) \\
&= \frac{4^{|\Lambda|} w(\mathbf{n}) w(\mathbf{m})}{Z(\Lambda, \beta)^2} \langle \sigma_B \rangle_{\Lambda \setminus T} \sum_{\mathbf{n}'_2 : \partial\mathbf{n}'_2 = \emptyset} w(\mathbf{n}'_2) \\
&= \langle \sigma_A \rangle_\Lambda \langle \sigma_B \rangle_{\Lambda \setminus T} \mathbf{E}_\Lambda^{A,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2)], \tag{A.3}
\end{align*}
$$

where $\mathbf{n}'_2$ refers to a current on $\Lambda \setminus T$.
In the second line, we used that for $F(\mathbf{n}_1, \mathbf{n}_2)$ to be non-zero, $\mathbf{n}_1$ must be equal to $\mathbf{n}$ and $\mathbf{n}_2$ must decompose into the current $\mathbf{m}$ on $T$ and a current $\mathbf{n}'_2$ outside $T$ (in particular, $\mathbf{n}_2(x,y)$ is equal to zero for every $x \in T$ and $y \notin T$). In the last line, we retraced the previous steps in reverse to end up with $\mathbf{E}_{\Lambda}^{A,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2)]$.

The claim follows readily for every function $F$ satisfying the assumptions of the lemma. Also, we obtain the result on $\mathbb{Z}^d$ by letting $\Lambda$ tend to $\mathbb{Z}^d$. $\square$

An interesting application of the lemma is the following pair of disentangling bounds. The first inequality appeared in [1, Proposition 5.2]; the second is new.

**Corollary A.2** For every $\beta > 0$, every four vertices $x, y, z, t \in \mathbb{Z}^d$ and every set $S \subset \mathbb{Z}^d$,

$$
\begin{align}
\mathbf{P}_{\beta}^{xy,zt}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z) \neq \emptyset] &\leq \mathbf{P}_{\beta}^{xy,\emptyset,zt}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_3}(z) \neq \emptyset], && (\text{A.4}) \\
\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset] &\leq \mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset]. && (\text{A.5})
\end{align}
$$

**Proof** Fix $\beta > 0$, $\Lambda$ finite (the claim will then follow by letting $\Lambda$ tend to $\mathbb{Z}^d$) and drop $\beta$ from the notation. For the first inequality, introduce the random variable

$$ \mathbf{C} = \mathbf{C}(\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3) := \mathbf{C}_{\mathbf{n}_3}(\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x)).
\quad (\text{A.6}) $$

Lemma A.1 applied in the first and third lines, and Griffiths’ inequality [22] together with the trivial inclusion $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \subset \mathbf{C}$ in the second, give

$$
\begin{align*}
\mathbf{P}_{\Lambda}^{xy,zt}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x) \cap \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(z) = \emptyset] &= \mathbf{E}_{\Lambda}^{xy,\emptyset}[\mathbb{I}[z, t \notin \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x)] \frac{\langle \sigma_z \sigma_t \rangle_{\Lambda \setminus \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(x)}}{\langle \sigma_z \sigma_t \rangle_{\Lambda}}] \\
&\geq \mathbf{E}_{\Lambda}^{xy,\emptyset}[\mathbb{I}[z, t \notin \mathbf{C}] \frac{\langle \sigma_z \sigma_t \rangle_{\Lambda \setminus \mathbf{C}}}{\langle \sigma_z \sigma_t \rangle_{\Lambda}}] \\
&= \mathbf{P}_{\Lambda}^{xy,\emptyset,zt}[z, t \notin \mathbf{C}], \tag{A.7}
\end{align*}
$$

which gives the first inequality.

The second inequality requires two successive applications of Lemma A.1. First, conditioning on $\mathbf{n}_2 + \mathbf{n}_4$, the lemma applied with the set $\mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S$ gives

$$ \mathbf{P}_{\Lambda}^{0x,0z,\emptyset,0t}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset] \leq \mathbf{P}_{\Lambda}^{0x,0z,0y,0t}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset]. \quad (\text{A.8}) $$

Similarly, conditioning on $\mathbf{n}_1 + \mathbf{n}_3$, the lemma applied with the set $\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap S$ gives

$$ \mathbf{P}_{\Lambda}^{0x,0z,\emptyset,\emptyset}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset] \leq \mathbf{P}_{\Lambda}^{0x,0z,\emptyset,0t}[\mathbf{C}_{\mathbf{n}_1+\mathbf{n}_3}(0) \cap \mathbf{C}_{\mathbf{n}_2+\mathbf{n}_4}(0) \cap S \neq \emptyset], \quad (\text{A.9}) $$

thus concluding the proof.
$\square$

## A.2 Multi-point connectivity probabilities

The following two relations facilitate the derivation of estimates guided by the random walk analogy.

**Proposition A.3** For every $x, u, v \in \mathbb{Z}^d$, we have that

$$
\begin{align}
\mathbf{P}_{\beta}^{0x,\emptyset}[u \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] &= \frac{\langle \sigma_0 \sigma_u \rangle_{\beta} \langle \sigma_u \sigma_x \rangle_{\beta}}{\langle \sigma_0 \sigma_x \rangle_{\beta}}, && (\text{A.10}) \\
\mathbf{P}_{\beta}^{0x,\emptyset}[u,v \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} 0] &\le \frac{\langle \sigma_0 \sigma_v \rangle_{\beta} \langle \sigma_v \sigma_u \rangle_{\beta} \langle \sigma_u \sigma_x \rangle_{\beta}}{\langle \sigma_0 \sigma_x \rangle_{\beta}} + \frac{\langle \sigma_0 \sigma_u \rangle_{\beta} \langle \sigma_u \sigma_v \rangle_{\beta} \langle \sigma_v \sigma_x \rangle_{\beta}}{\langle \sigma_0 \sigma_x \rangle_{\beta}}. && (\text{A.11})
\end{align}
$$

The equality (A.10) is a direct consequence of the switching lemma and has been used several times in the past. The inequality (A.11) is an important new addition, which is proven below. Its structure suggests a more general $k$-step random walk type bound, but the present proof does not extend to $k > 2$. If a $k$-step bound could be proven for every $k$, it would improve the concentration estimate for $\mathcal{N}$ in the proof of mixing from an inverse logarithmic bound to a small polynomial one, which would translate into a similar bound for the mixing property and may be very useful for the study of the critical regime. Note that this would not improve the log correction in our result, since the intersection property also requires the $\ell_k$ to grow fast.

**Proof** Fix $\beta > 0$ and drop it from the notation. We work with finite $\Lambda$ and then take the limit as $\Lambda$ tends to $\mathbb{Z}^d$.
In the whole proof, $\longleftrightarrow$ denotes connection in $\mathbf{n}_1 + \mathbf{n}_2$, and $\nleftrightarrow$ the absence of such a connection. As mentioned above, (A.10) follows readily from the switching lemma. To prove (A.11), use the switching lemma to find

$$
\mathbf{P}_{\Lambda}^{0x,\emptyset}[u,v \longleftrightarrow 0] = \frac{\langle \sigma_0 \sigma_u \rangle_{\Lambda} \langle \sigma_u \sigma_x \rangle_{\Lambda}}{\langle \sigma_0 \sigma_x \rangle_{\Lambda}} \mathbf{P}_{\Lambda}^{0u,ux}[v \longleftrightarrow u]. \quad (\text{A.12})
$$

Then, our goal is to show that

$$
\mathbf{P}_{\Lambda}^{0u,ux}[v \longleftrightarrow u] \leq \mathbf{P}_{\Lambda}^{0u,\emptyset}[v \longleftrightarrow u] + \mathbf{P}_{\Lambda}^{\emptyset,ux}[v \longleftrightarrow u] - \mathbf{P}_{\Lambda}^{\emptyset,\emptyset}[v \longleftrightarrow u], \quad (\text{A.13})
$$

which readily implies (A.11) using (A.10). In order to show (A.13), set $\mathbf{C} = \mathbf{C}_{\mathbf{n}_1+\mathbf{n}_2}(v)$ and apply Lemma A.1 to $F(\mathbf{n}_1, \mathbf{n}_2) := \mathbb{I}[u \nleftrightarrow v]$ to obtain

$$
\mathbf{P}_{\Lambda}^{0u,ux}[u \nleftrightarrow v] = \mathbf{E}_{\Lambda}^{0u,\emptyset}[\mathbb{I}[u \nleftrightarrow v] \frac{\langle \sigma_u \sigma_x \rangle_{\Lambda \setminus \mathbf{C}}}{\langle \sigma_u \sigma_x \rangle_{\Lambda}}]. \quad (\text{A.14})
$$

Next, apply Lemma A.1 to

$$
F(\mathbf{n}_1, \mathbf{n}_2) := \mathbb{I}[u \nleftrightarrow v] \left(1 - \frac{\langle \sigma_u \sigma_x \rangle_{\Lambda \setminus \mathbf{C}}}{\langle \sigma_u \sigma_x \rangle_{\Lambda}}\right) \geq 0 \quad (\text{A.15})
$$

(the inequality is due to Griffiths’ inequality [22]) to obtain (A.13) thanks to the following inequalities:

$$
\begin{align}
\mathbf{P}_{\Lambda}^{0u,\emptyset}[u \nleftrightarrow v] - \mathbf{P}_{\Lambda}^{0u,ux}[u \nleftrightarrow v] &= \mathbf{E}_{\Lambda}^{0u,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2)] = \mathbf{E}_{\Lambda}^{\emptyset,\emptyset}\left[F(\mathbf{n}_1, \mathbf{n}_2) \frac{\langle\sigma_0\sigma_u\rangle_{\Lambda\setminus \mathbf{C}}}{\langle\sigma_0\sigma_u\rangle_{\Lambda}}\right] \\
&\leq \mathbf{E}_{\Lambda}^{\emptyset,\emptyset}[F(\mathbf{n}_1, \mathbf{n}_2)] = \mathbf{P}_{\Lambda}^{\emptyset,\emptyset}[u \nleftrightarrow v] - \mathbf{P}_{\Lambda}^{\emptyset,ux}[u \nleftrightarrow v], \tag{A.16}
\end{align}
$$

which, after passing to complementary events, is precisely (A.13). $\square$

**Remark A.4** Griffiths’ inequality [22] plugged into (A.14) gives

$$
\mathbf{P}^{0u,ux}[v \longleftrightarrow u] \geq \mathbf{P}^{0u,\emptyset}[v \longleftrightarrow u]. \quad (\text{A.17})
$$

**Remark A.5** The inequalities (A.13) and (A.17) can be extended to every set $S \subset \mathbb{Z}^d$ and every two vertices $x, y \in \mathbb{Z}^d$:

$$
\mathbf{P}_{\beta}^{0x,\emptyset}[0 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S] \leq \mathbf{P}_{\beta}^{0x,0y}[0 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S] \leq \mathbf{P}_{\beta}^{0x,\emptyset}[0 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S] + \mathbf{P}_{\beta}^{\emptyset,0y}[0 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S] - \mathbf{P}_{\beta}^{\emptyset,\emptyset}[0 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S]
\quad (\text{A.18})
$$

## A.3 The spectral representation

In Section 5.3 we make use of a spectral representation of the correlation function $S(x) := \langle\tau_0\tau_x\rangle$. Though the statement is well known, cf. [21] and references therein, for completeness of the presentation its derivation follows. For the present purpose it is convenient to present the system’s Hamiltonian as the positive semi-definite quadratic form

$$
H = \sum_{x,y} J_{x,y} (\tau_x - \tau_y)^2 . \tag{A.19}
$$

The difference from the expressions used elsewhere in the paper is the diagonal quadratic terms $\tau_x^2$, whose effect on the Gibbs measure can be incorporated by an adjustment of the spins' a-priori distribution (which is doable since we assumed in the first place that the site distribution satisfies (2.2)).

To avoid burdensome notation, when the domain over which the spins are defined is clear from the context of the discussion we shall use the symbol $\tau = \{\tau_x\}_x$ to denote the entire collection of spins in that region, and $\rho_0(d\tau)$ the corresponding product measure.

**Proposition A.6 (Spectral Representation)** Let $\rho_0$ be a single variable distribution for which the Gibbs states on $\mathbb{Z}^d$ with the n.n.f. Hamiltonian satisfy

$$ \langle |\tau_0|^2 \rangle_\beta < \infty, \quad \forall \beta \ge 0. \tag{A.20} $$

Then, for every $0 < \beta < \infty$ and every square-summable $v \in \ell^2(\mathbb{Z}^{d-1})$, there exists a positive measure $\mu_{v,\beta}$ with total mass satisfying

$$ \mu_{v,\beta}([0, \infty)) \le \|v\|_2^2 \langle |\tau_0|^2 \rangle_\beta \tag{A.21} $$

such that for every $n \in \mathbb{Z}$,

$$ \sum_{x_\perp, y_\perp \in \mathbb{Z}^{d-1}} v_{x_\perp} \overline{v_{y_\perp}} S_\beta((n, x_\perp - y_\perp)) = \int_0^\infty e^{-a|n|} d\mu_{v,\beta}(a).
\tag{A.22} $$

For $\beta < \beta_c$ the measure's support is limited to $a \ge 1/\xi(\beta)$ (here $\xi(\beta)$ is the correlation length of the system).

In particular, with $v = \delta_\perp$ the Kronecker function (at the origin) on $\mathbb{Z}^{d-1}$, this yields the following spectral representation for the correlation function along a principal axis:

$$ S_\beta((n, 0_\perp)) = \int_{1/\xi(\beta)}^\infty e^{-a|n|} d\mu_{\delta_\perp, \beta}(a), \tag{A.23} $$

with a measure whose total mass is $\mu_{\delta_\perp, \beta}([0, \infty)) = \langle|\tau_0|^2\rangle_\beta$.

**Proof** Throughout the proof $\beta$ is held constant, and to a large extent will be omitted from the notation. It is convenient to first derive the corresponding statements for finite volume versions of the model, in tubular domains with periodic boundary conditions $\mathbb{T}(m, \ell) := (\mathbb{Z}/m\mathbb{Z}) \times (\mathbb{Z}/\ell\mathbb{Z})^{d-1}$ (with the notational convention $\mathbb{Z}/\infty\mathbb{Z} = \mathbb{Z}$). The corresponding finite volume correlation function is naturally denoted $S_{m,\ell;\beta}(x) := \langle \tau_0\tau_x \rangle_{\mathbb{T}(m,\ell)}$.

Let $\mathcal{V}_\ell$ be the complex Hilbert space $L^2(\otimes_{x\in(\mathbb{Z}/\ell\mathbb{Z})^{d-1}} \rho(d\tau_x))$ of square-integrable functions of the spins in a transversal hyperplane $(\mathbb{Z}/\ell\mathbb{Z})^{d-1}$. On $\mathcal{V}_\ell$, let $T_\ell$ be the self-adjoint operator whose kernel is given by

$$ T_\ell(\tau, \tau') := \exp \left\{ -\frac{\beta J}{4} \sum_{\substack{\{x,y\} \text{ edge of} \\ (\mathbb{Z}/\ell\mathbb{Z})^{d-1}}} [(\tau_x - \tau_y)^2 + (\tau'_x - \tau'_y)^2] - \frac{\beta J}{2} \sum_{x \in (\mathbb{Z}/\ell\mathbb{Z})^{d-1}} (\tau_x - \tau'_x)^2 \right\}. \tag{A.24} $$

This operator serves as the “transfer matrix” in terms of which the partition function can be presented as a trace:

$$ Z_{m,\ell} := \operatorname{Tr}(T_{\ell}^{m}).
\tag{A.25} $$

To express the correlation functions, let us consider the multiplication operators

$$ \tau[v] := \sum_{x \in (\mathbb{Z}/\ell\mathbb{Z})^{d-1}} v(x)\tau_x \tag{A.26} $$

associated with square summable functions $v : (\mathbb{Z}/\ell\mathbb{Z})^{d-1} \to \mathbb{C}$.

In this notation, the correlation function of spins at sites (which we write as $(n, x_\perp) \in \mathbb{T}(m,\ell)$) satisfies

$$
\sum_{x_\perp, y_\perp \in (\mathbb{Z}/\ell\mathbb{Z})^{d-1}} \overline{v_{y_\perp}} v_{x_\perp} S_{m,\ell;\beta}((n, y_\perp - x_\perp)) = \frac{\operatorname{Tr}(T_\ell^{m-n} \bar{\tau}[v] T_\ell^n \tau[v])}{\operatorname{Tr}(T_\ell^m)} . \quad (\text{A.27})
$$

We next claim that for any $\ell < \infty$ the operator $T_\ell$ is:

(i) self-adjoint and compact (and thus with discrete spectrum, except for possible accumulation at 0);

(ii) positive definite;

(iii) non-degenerate at the top of its spectrum, with a strictly positive eigenfunction.

Item (i) is implied by the kernel’s symmetry and the finiteness of its Hilbert-Schmidt norm: since the exponent in (A.24) is non-positive, $|T_\ell(\tau, \tau')| \le 1$ pointwise, and hence

$$
\mathrm{Tr}\, T_{\ell}^{*} T_{\ell} = \iint \rho(d\tau) \rho(d\tau') |T_{\ell}(\tau, \tau')|^2 \le 1. \qquad (\text{A.28})
$$

Positivity (ii) can be deduced from the criteria of [19] (see also [10]) applied to the reflection symmetry with respect to the hyperplanes passing through mid-edges. The last assertion (iii) is implied by (i) combined with the kernel’s pointwise positivity (cf. the Krein-Rutman theorem [33]).
Rewritten in terms of the spectral representation of $T_\ell$, (A.27) takes the form:

$$
\sum_{x_\perp, y_\perp \in (\mathbb{Z}/\ell\mathbb{Z})^{d-1}} v_{x_\perp} \overline{v_{y_\perp}} S_{m,\ell;\beta}((n, x_{\perp} - y_{\perp})) = \frac{\sum_{\lambda_1, \lambda_2 \in \text{Spec}(T_\ell)} \lambda_1^{m-n} \lambda_2^n \langle e_{\lambda_1} | \bar{\tau}[v] | e_{\lambda_2} \rangle \langle e_{\lambda_2} | \tau[v] | e_{\lambda_1} \rangle}{\sum_{\lambda \in \text{Spec}(T_\ell)} \lambda^m}, \quad (\text{A.29})
$$

where $\{|e_\lambda\rangle\}$ is an orthonormal basis of eigenvectors of $T_\ell$. By the structure of the spectrum described above, in the limit $m \to \infty$ only the terms with $\lambda_1 = \lambda_{\max}$ and $\lambda = \lambda_{\max}$ are of relevance, and one is left with the single sum:

$$
\begin{align}
\sum_{x_\perp, y_\perp \in (\mathbb{Z}/\ell\mathbb{Z})^{d-1}} v_{x_\perp} \overline{v_{y_\perp}} S_{\infty, \ell; \beta}((n, x_\perp - y_\perp)) &= \sum_{\lambda \in \text{Spec}(T_\ell)} \left(\frac{\lambda}{\lambda_{\max}}\right)^n \langle e_{\lambda_{\max}} | \bar{\tau}[v] | e_\lambda \rangle \langle e_\lambda | \tau[v] | e_{\lambda_{\max}} \rangle \\
&=: \int_0^\infty e^{-an} d\mu_{v,\beta,\ell}(a), \tag{A.30}
\end{align}
$$

with $e^{-a} = \lambda/\lambda_{\max}$ and $\mu_{v,\beta,\ell}$ the above discrete spectral measure (whose support starts at $1/\xi(\ell, \beta)$, the rate of exponential decay in $n$ of $S_{\infty,\ell}$).

Next we consider the limit $\ell \to \infty$ at fixed $x_\perp - y_\perp$. It is known, through the FKG inequality, that the correlation function converges pointwise, i.e. for any $(n, x_\perp - y_\perp)$ and $\beta$,

$$
S_{\beta}((n, x_{\perp} - y_{\perp})) = \lim_{\ell \to \infty} S_{\infty, \ell; \beta}((n, x_{\perp} - y_{\perp})). \quad (\text{A.31})
$$

Through (A.30) this translates into convergence of the moments of $e^{-a}$ under the measures $\mu_{v,\beta,\ell}$.
The moment criterion for the convergence of positive measures over bounded intervals (here $[0,1]$) allows us to conclude the existence of the (weak) limit $\lim_{\ell\to\infty} \mu_{v,\beta,\ell} = \mu_{v,\beta}$ (which need not be a point measure), with which the claimed relation (A.22) holds. $\square$

One may observe that the above result applies to all $\beta$. It may be added that for $\beta < \beta_c$ the spectral measure's support is bounded away from 0. In contrast, for $\beta > \beta_c$ the measures associated with $v$ of non-zero sum include a point mass there, i.e. $\mu_{v,\beta}(\{0\}) \neq 0$, which is the spectral representation of the long range order.

## A.4 Intersection properties for the random current representation of models in the GS class

We start with the version of the switching lemma that we will use. Below, $\delta_{uv}$ denotes the current equal to 1 on the edge $uv$ and 0 otherwise.

**Lemma A.7 (Coarse switching)** Let $S,T$ be two disjoint sets of vertices. For every event $E$ depending on the sum of two currents and every $x \neq y$,

$$ \mathbf{P}_{\beta}^{xy,\emptyset}[x \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S, \mathbf{n}_1+\mathbf{n}_2 \in E] \leq \beta \sum_{a \in S, b \notin S} J_{a,b} \frac{\langle \sigma_x \sigma_a \rangle \langle \sigma_b \sigma_y \rangle}{\langle \sigma_x \sigma_y \rangle} \mathbf{P}_{\beta}^{xa,by}[\mathbf{n}_1+\mathbf{n}_2+\delta_{ab} \in E], \quad (\text{A.32}) $$

$$ \mathbf{P}_{\beta}^{\emptyset,\emptyset}[S \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} T, \mathbf{n}_1+\mathbf{n}_2 \in E] \leq \beta^2 \sum_{\substack{a \in S, b \notin S \\ s \in T, t \notin T}} J_{a,b} J_{s,t} \langle \sigma_a \sigma_s \rangle \langle \sigma_t \sigma_b \rangle \mathbf{P}_{\beta}^{as,tb}[\mathbf{n}_1+\mathbf{n}_2+\delta_{ab}+\delta_{st} \in E]. \quad (\text{A.33}) $$

**Proof** We start with the first inequality. Fix $\Lambda$ finite.
By multiplying by the quantity $\langle \sigma_x \sigma_y \rangle_\beta 4^{-|\Lambda|} Z(\Lambda, J, \beta)^2$, and then making the change of variable $\mathbf{m} = \mathbf{n}_1 + \mathbf{n}_2$, $\mathbf{n}_2 = \mathbf{n}$, we find that + +$$ +\begin{align} +(1) := & \sum_{\substack{\partial \mathbf{n}_1 = \{x,y\} \\ \partial \mathbf{n}_2 = \emptyset}} w_\beta(\mathbf{n}_1) w_\beta(\mathbf{n}_2) \mathbb{I}[\mathbf{n}_1 + \mathbf{n}_2 \in E] \mathbb{I}[x \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} S] \\ += & \sum_{\substack{\partial \mathbf{m} = \{x,y\}}} w_\beta(\mathbf{m}) \mathbb{I}[\mathbf{m} \in E] \mathbb{I}[x \stackrel{\mathbf{m}}{\longleftrightarrow} S] \sum_{\substack{\mathbf{n} \leq \mathbf{m} \\ \partial \mathbf{n} = \emptyset}} \binom{\mathbf{m}}{\mathbf{n}} \\ += & 2^{-|\Lambda|} \sum_{\substack{\partial \mathbf{m} = \{x,y\}}} w_{2\beta}(\mathbf{m}) \mathbb{I}[\mathbf{m} \in E] \mathbb{I}[x \stackrel{\mathbf{m}}{\longleftrightarrow} S] 2^{k(\mathbf{m})}, +\end{align} +\quad (A.34) +$$ + +where in the last line we used that the number of even subgraphs of the multi-graph $\mathcal{M}$ (see for instance definition in [5]) associated with $\mathbf{m}$ is given by $2^{|\mathbf{m}|+k(\mathbf{m})-|\Lambda|}$, where $|\mathbf{m}|$ means the total sum of $\mathbf{m}$, and $k(\mathbf{m})$ is the number of connected components. Now, observe that + +$$ w_{2\beta}(\mathbf{m})\mathbb{I}[x \stackrel{\mathbf{m}}{\longleftrightarrow} S] 2^{k(\mathbf{m})} \leq \sum_{a \notin S, b \in S} \beta J_{a,b} w_{2\beta}(\mathbf{m}-\delta_{ab})\mathbb{I}[x \stackrel{\mathbf{m}-\delta_{ab}}{\longleftrightarrow} b]\mathbb{I}[\mathbf{m}_{ab} \geq 1] 2^{k(\mathbf{m}-\delta_{ab})}. +\quad (A.35) $$ + +Indeed, we are necessarily in one of the following cases: consider the edges $ab$ with $a \in S$, $b \notin S$, and $a$ connected to $y$ in $\mathbf{m}-\delta_{ab}$. 
+Assume that
+
+* there is an edge $ab$ as above with $\mathbf{m}_{ab} \ge 2$, in which case $k(\mathbf{m}-\delta_{ab}) = k(\mathbf{m})$ and
+  $w_{2\beta}(\mathbf{m}) = \frac{2\beta J_{ab}}{\mathbf{m}_{ab}} w_{2\beta}(\mathbf{m}-\delta_{ab}) \le \beta J_{a,b} w_{2\beta}(\mathbf{m}-\delta_{ab});$
+
+* there is a loop in the cluster of $x$ in $\mathbf{m}$ which is intersecting the edge-boundary of $S$, in which case there are two edges $ab$ satisfying the property above, with $k(\mathbf{m}-\delta_{ab}) = k(\mathbf{m})$ and $w_{2\beta}(\mathbf{m}) \le 2\beta J_{a,b} w_{2\beta}(\mathbf{m}-\delta_{ab});$
+
+* otherwise, there is only one edge $ab$ with $\mathbf{m}_{ab}=1$, in which case $k(\mathbf{m}-\delta_{ab}) = k(\mathbf{m})+1$ and $w_{2\beta}(\mathbf{m}) \le 2\beta J_{a,b} w_{2\beta}(\mathbf{m}-\delta_{ab}).$
+---PAGE_BREAK---
+
+Injecting the last displayed inequality into (A.34), and then making the change of variable $\mathbf{m}' = \mathbf{m} - \delta_{ab}$, we find that
+
+$$
+\begin{align}
+2^{|\Lambda|} \times (1) &\le \sum_{a \notin S, b \in S} \beta J_{a,b} \sum_{\partial \mathbf{m}' = \{x,y,a,b\}} w_{2\beta}(\mathbf{m}') 2^{k(\mathbf{m}')}\mathbb{I}[x \stackrel{\mathbf{m}'}{\longleftrightarrow} a] \mathbb{I}[\mathbf{m}' + \delta_{ab} \in E] \\
+&= \sum_{a \notin S, b \in S} \beta J_{a,b} \sum_{\substack{\partial \mathbf{n}_1 = \{x,a,b,y\} \\ \partial \mathbf{n}_2 = \emptyset}} w_{\beta}(\mathbf{n}_1) w_{\beta}(\mathbf{n}_2) \mathbb{I}[x \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} a] \mathbb{I}[\mathbf{n}_1 + \mathbf{n}_2 + \delta_{ab} \in E] \\
+&= \sum_{a \notin S, b \in S} \beta J_{a,b} \sum_{\substack{\partial \mathbf{n}_1 = \{b,y\} \\ \partial \mathbf{n}_2 = \{x,a\}}} w_{\beta}(\mathbf{n}_1) w_{\beta}(\mathbf{n}_2) \mathbb{I}[\mathbf{n}_1 + \mathbf{n}_2 + \delta_{ab} \in E], \tag{A.36}
+\end{align}
+$$
+
+where in the last line we used the switching lemma.
+Dividing this relation by the factor $\langle\sigma_x\sigma_y\rangle_{\Lambda,\beta}4^{-|\Lambda|}Z(\Lambda, J, \beta)^2$ and letting $\Lambda$ tend to the full lattice implies the first claim.
+
+The second claim follows from the same reasoning using pairs of edges $(ab, st)$ with $a \in S$, $b \notin S$, $s \in T$ and $t \notin T$ such that $a$ is connected to $s$ in $\mathbf{n}_1 + \mathbf{n}_2$. $\square$
+
+We deduce the following pair of diagrammatic bounds on the connectivity probabilities.
+
+**Proposition A.8** For every distinct $x, y, u, v \in \mathbb{Z}^d$,
+
+$$
+\begin{align}
+\mathbf{P}_{\rho,\beta}^{\emptyset,\emptyset} [\mathcal{B}_x \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} \mathcal{B}_y] &\leq \sum_{x',y'\in\mathbb{Z}^d} \langle\tau_x\tau_y\rangle \beta J_{y,y'} \langle\tau_{y'}\tau_{x'}\rangle \beta J_{x',x}, \tag{A.37} \\
+\mathbf{P}_{\rho,\beta}^{xy,\emptyset} [\partial\mathbf{n}_1 \stackrel{\mathbf{n}_1+\mathbf{n}_2}{\longleftrightarrow} \mathcal{B}_u] &\leq \sum_{u'\in\mathbb{Z}^d} \frac{\langle\tau_x\tau_u\rangle \beta J_{u,u'} \langle\tau_{u'}\tau_y\rangle}{\langle\tau_x\tau_y\rangle}. \tag{A.38}
+\end{align}
+$$
+
+*Proof* For the first one, sum (A.33) for *E* being the full event and vertices in $\mathcal{B}_x$ and $\mathcal{B}_y$, and use (A.10). For the second one, do the same with (A.32) instead. $\square$
+
+**Acknowledgments** The work of M. Aizenman on this project was supported in part by the NSF grant DMS-1613296, and that of H. Duminil-Copin by the NCCR SwissMAP, the Swiss NSF and an IDEX Chair from Paris-Saclay. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 757296). The joint work was advanced through mutual visits to Princeton and Geneva University, sponsored by a Princeton-Unige partnership grant. We thank S. Goswami, A.
+Raoufi, P.-F. Rodriguez, and F. Severo for stimulating discussions, and M. Oulamara, R. Panis, P. Wildemann, and an anonymous referee for careful reading of the paper.
+
+**References**
+
+[1] M. Aizenman, Geometric analysis of $\varphi^4$ fields and Ising models. I, II, Comm. Math. Phys., 86(1):1-48, 1982.
+
+[2] M. Aizenman, The Intersection of Brownian Paths as a Case Study of a Renormalization Group Method for Quantum Field Theory, Comm. Math. Phys., 97:91-110, 1985.
+
+[3] M. Aizenman, D.J. Barsky, and R. Fernández, The phase transition in a general class of Ising-type models is sharp, J. Stat. Phys., 47(3-4):343-374, 1987.
+---PAGE_BREAK---
+
+[4] M. Aizenman, H. Duminil-Copin, and V. Sidoravicius, *Random Currents and Continuity of Ising Model's Spontaneous Magnetization*, Comm. Math. Phys., 334:719-742, 2015.
+
+[5] M. Aizenman, H. Duminil-Copin, V. Tassion and S. Warzel, *Emergent Planarity in two-dimensional Ising Models with finite-range Interactions*, Inventiones Mathematicae, 216(3):661-743, 2019.
+
+[6] M. Aizenman and R. Fernández, *On the critical behaviour of the magnetization in high-dimensional Ising models*, J. Stat. Phys., 44(3-4):393-454, 1986.
+
+[7] M. Aizenman and R. Graham, *On the renormalized coupling constant and the susceptibility in φ⁴ field theory and the Ising model in four dimensions*, Nucl. Phys. B, 225:261-288, 1983.
+
+[8] C. Aragao de Carvalho, S. Caracciolo, J. Fröhlich, *Polymers and gφ⁴ theory in four dimensions*, Nucl. Phys. **B215** [FS7], 209-248, 1983.
+
+[9] R. Bauerschmidt, D.C. Brydges, and G. Slade, *Scaling limits and critical behaviour of the 4-dimensional n-component |φ|⁴ spin model*, J. Stat. Phys., 157:692-742, 2014.
+
+[10] M. Biskup, *Reflection positivity and phase transitions in lattice spin models*, Methods of contemporary mathematical statistical physics, Lecture Notes in Math., vol. 1970, Springer, Berlin, 2009, pp. 1-86.
+
+[11] D. Brydges, J. Fröhlich, and T.
+Spencer, *The random walk representation of classical spin systems and correlation inequalities*, Comm. Math. Phys., 83(1):123-150, 1982.
+
+[12] H. Duminil-Copin, *Random currents expansion of the Ising model*, in European Congress of Mathematics, Eur. Math. Soc., Zürich, 2018, pp. 869-889. MR 3890455. Zbl 1403.82005. https://doi.org/10.4171/176-1/39.
+
+[13] H. Duminil-Copin, *Lectures on the Ising and Potts models on the hypercubic lattice*. In *Random Graphs, Phase Transitions, and the Gaussian Free Field*, Springer Proc. Math. Stat. 304, Springer, Cham, 2020, pp. 35-161. MR 4043224. Zbl 1447.82007. https://doi.org/10.1007/978-3-030-32011-9_2.
+
+[14] H. Duminil-Copin, S. Goswami, and A. Raoufi, *Exponential decay of truncated correlations for the Ising model in any dimension for all but the critical temperature*, Comm. Math. Phys., 374(2):891-921, 2020.
+
+[15] H. Duminil-Copin and V. Tassion, *A new proof of the sharpness of the phase transition for Bernoulli percolation and the Ising model*, Comm. Math. Phys., 343(2):725-745, 2016.
+
+[16] J. Feldman, J. Magnen, V. Rivasseau, and R. Sénéor, *Construction and Borel Summability of Infrared $\Phi_4^4$ by a Phase Space Expansion*, Comm. Math. Phys., 109:437-480, 1987.
+
+[17] J. Fröhlich, *On the triviality of λφ⁴ theories and the approach to the critical point in d ≥ 4 dimensions*, Nuclear Physics B, 200(2):281-296, 1982.
+---PAGE_BREAK---
+
+[18] J. Fröhlich, R. Israel, E.H. Lieb, B. Simon, *Phase transitions and reflection positivity. I. General theory and long range lattice models*, Comm. Math. Phys., 62:1–34, 1978.
+
+[19] J. Fröhlich, B. Simon, and T. Spencer, *Infrared bounds, phase transitions and continuous symmetry breaking*, Comm. Math. Phys., 50(1):79–95, 1976.
+
+[20] K. Gawedzki and A. Kupiainen, *Massless Lattice $\Phi_4^4$ Theory: Rigorous Control of a Renormalizable Asymptotically Free Model*, Comm. Math. Phys., 99:197–252, 1985.
+
+[21] J. Glimm and A.
+Jaffe, *Positivity of the $\phi_3^4$ Hamiltonian*, Fortschritte der Physik, 21(7):327–376, 1973.
+
+[22] R. Griffiths, *Correlations in Ising ferromagnets I, II*, J. Math. Phys., 8:478–489, 1967.
+
+[23] R. Griffiths, *Rigorous Results for Ising Ferromagnets of Arbitrary Spin*, J. Math. Phys., 10:1559, 1969.
+
+[24] R.B. Griffiths, C.A. Hurst, and S. Sherman, Concavity of magnetization of an Ising ferromagnet in a positive external field, J. Math. Phys., 11:790–795, 1970.
+
+[25] G. Grimmett, The random-cluster model, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 333, Springer-Verlag, Berlin, 2006.
+
+[26] F. Guerra, L. Rosen and B. Simon, *The $P(\phi)_2$ Euclidean Quantum Field Theory as Classical Statistical Mechanics*, Annals of Math., 101:111–189, 1975.
+
+[27] M. Hairer, *A theory of regularity structures*, Inventiones Mathematicae, 198(2):269–504, 2014.
+
+[28] J.M. Hammersley, Percolation processes: Lower bounds for the critical probability, Ann. Math. Statist., 28:790–795, 1957.
+
+[29] T. Hara and H. Tasaki, *A Rigorous Control of Logarithmic Corrections in Four-Dimensional $\phi^4$ Spin Systems. II. Critical Behaviour of Susceptibility and Correlation Length*, J. Stat. Phys., 47(1/2):99-121, 1987.
+
+[30] G.C. Hegerfeldt, Correlation inequalities for Ising ferromagnets with symmetries, Comm. Math. Phys., 57(3):259–266, 1977.
+
+[31] A. Jaffe and E. Witten, *Quantum Yang-Mills Theory*, The millennium prize problems, (1):129, 2006, www.claymath.org/sites/default/files/yangmills.pdf.
+
+[32] H. Kesten, *The incipient infinite cluster in two-dimensional percolation*, Probab. Theory Related Fields, 73(3):369–394, 1986.
+
+[33] M.G. Krein, M.A. Rutman, Linear operators leaving invariant a cone in a Banach space, Transl. Amer. Math. Soc., 26:199, 1950.
+
+[34] G.F.
+Lawler, *Intersections of random walks*, Modern Birkhäuser Classics, Birkhäuser/Springer, New York, 2013, Reprint of the 1996 edition.
+---PAGE_BREAK---
+
+[35] J.L. Lebowitz, *GHS and other inequalities*, Comm. Math. Phys., 35:87-92, 1974.
+
+[36] E.H. Lieb, *A refinement of Simon's correlation inequality*, Comm. Math. Phys., 77(2):127-135, 1980.
+
+[37] A. Messager and S. Miracle-Solé, *Correlation functions and boundary conditions in the Ising ferromagnet*, J. Stat. Phys., 17(4):245-262, 1977.
+
+[38] C.M. Newman, *Inequalities for Ising models and field theories which obey the Lee-Yang theorem*, Comm. Math. Phys., 41(1), 1975.
+
+[39] K. Osterwalder, R. Schrader, *Axioms for Euclidean Green's functions I*, Comm. Math. Phys., 31:83-112, 1973.
+
+[40] K. Osterwalder, R. Schrader, *Axioms for Euclidean Green's functions II*, Comm. Math. Phys., 42:281-305, 1975.
+
+[41] R. Schrader, *New correlation inequalities for the Ising model and $P(\phi)$ theories*, Phys. Rev., B15:2798, 1977.
+
+[42] T.D. Schultz, D.C. Mattis, E.H. Lieb, *Two-dimensional Ising model as a soluble problem of many fermions*, Reviews of Modern Physics, 36(3):856, 1964.
+
+[43] B. Simon, The $P(\Phi)_2$ Euclidean (Quantum) Field Theory, Princeton Univ. Press, 1974.
+
+[44] B. Simon, *Correlation inequalities and the decay of correlations in ferromagnets*, Comm. Math. Phys., 77:111-126, 1980.
+
+[45] B. Simon and R.B. Griffiths, *The $\phi_2^4$ field theory as a classical Ising model*, Comm. Math. Phys., 33(2):145-164, 1973.
+
+[46] A.D. Sokal, *A rigorous inequality for the specific heat of an Ising or $\phi_4^4$ ferromagnet*, Phys. Lett. A, 71:451-453, 1979.
+
+[47] A.D. Sokal, *An alternate constructive approach to the $\phi_3^4$ quantum field theory, and a possible destructive approach to $\phi_4^4$*, Ann. Inst. Henri Poincaré Phys. Théorique, 37:317-398, 1982.
+
+[48] K. Symanzik, *Euclidean quantum field theory*, Local quantum theory. Jost, R. (ed.).
+New York: Academic Press, 1969.
+
+[49] R. van der Hofstad and A. Járai, *The incipient infinite cluster for high-dimensional unoriented percolation*, J. Stat. Phys., 114(3-4):553-625, 2004.
+
+[50] A.S. Wightman, *Quantum Field Theory in Terms of Vacuum Expectation Values*, Phys. Rev. 101, 860, 1956.
+
+[51] K.G. Wilson, *Renormalization Group and Critical Phenomena. I. Renormalization Group and the Kadanoff Scaling Picture*, Phys. Rev. B **4**, 1971.
\ No newline at end of file diff --git a/samples/texts_merged/450057.md b/samples/texts_merged/450057.md new file mode 100644 index 0000000000000000000000000000000000000000..60d63ba40a53058d83ae20468efb099927c3c5d9 --- /dev/null +++ b/samples/texts_merged/450057.md @@ -0,0 +1,1876 @@
+
+---PAGE_BREAK---
+
+Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing
+
+Dinh Thang Vu, Alexandre Renaux, Remy Boyer, Sylvie Marcos
+
+► To cite this version:
+
+Dinh Thang Vu, Alexandre Renaux, Remy Boyer, Sylvie Marcos. Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing. Signal Processing, Elsevier, 2014, 95 (2), pp.126-148. 10.1016/j.sigpro.2013.08.020. hal-00947784
+
+HAL Id: hal-00947784
+
+https://hal.inria.fr/hal-00947784
+
+Submitted on 17 Feb 2014
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+---PAGE_BREAK---
+
+Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing
+
+Dinh Thang VU, Alexandre RENAUX, Rémy BOYER, Sylvie MARCOS
+
+Université Paris-Sud 11, CNRS, Laboratoire des Signaux et Systèmes, Supelec, 3 rue Joliot Curie, 91192 Gif-sur-Yvette Cedex, France (e-mail: {Vu,Renaux,Remy.Boyer,Marcos}@lss.supelec.fr)
+
+Abstract
+
+In this paper, the Weiss-Weinstein bound is analyzed in the context of source localization with a planar array of sensors. Both conditional and unconditional source signal models are studied. First, some results are given in the multiple-source context without specifying the structure of the steering matrix and of the noise covariance matrix. Moreover, the cases of a uniform and of a Gaussian prior are analyzed. Second, these results are applied to the particular case of a single source for two kinds of array geometries: a non-uniform linear array (elevation only) and an arbitrary planar (azimuth and elevation) array.
+
+Keywords: Weiss-Weinstein bound, DOA estimation.
+
+# 1. Introduction
+
+The source localization problem has been widely investigated in the literature, with many applications such as radar, sonar, and medical imaging. One of the objectives is to estimate the direction-of-arrival (DOA) of the sources using an array of sensors.
+
+In array processing, lower bounds on the mean square error are usually used as a benchmark to evaluate the ultimate performance of an estimator. There exist several lower bounds in the literature. Depending on the assumptions about the parameters of interest, there are three main kinds of lower bounds. When the parameters are assumed to be deterministic (unknown), the main lower bounds on the (local) mean square error are the well-known Cramér-Rao bound and the Barankin bound (more precisely, their approximations [1][2][3][4]).
+When the parameters are assumed to be random with a known prior distribution, the lower bounds on the global mean square error are called Bayesian bounds [5]. Some typical families of Bayesian bounds are the Ziv-Zakai family [6][7][8] and the Weiss-Weinstein family [9][10][11][12]. Finally, when the parameter vector is made from both deterministic and random parameters, the so-called hybrid bounds have been developed [13][14][15].
+
+Since DOA estimation is a non-linear problem, the outlier effect can appear, and the estimator's mean square error exhibits three distinct behaviors depending on the number of snapshots and/or on the signal-to-noise ratio (SNR) [16]. At high SNR and/or for a high number of snapshots, i.e., in the
+---PAGE_BREAK---
+
+asymptotic region, the outlier effect can be neglected and the ultimate performance is described by the (classical/Bayesian/hybrid) Cramér-Rao bound. However, when the SNR and/or the number of snapshots decrease, the outlier effect leads to a quick increase of the mean square error: this is the so-called threshold effect. In this region, the behavior of the lower bounds is not the same. Some bounds, generally called global bounds (Barankin, Ziv-Zakai, Weiss-Weinstein), can predict the threshold, while the others, called local bounds, like the Cramér-Rao bound or the Bhattacharyya bound, cannot. Finally, at low SNR and/or at a low number of snapshots, i.e., in the no-information region, the deterministic bounds exceed the estimator's mean square error due to the fact that they do not take into account the parameter support. On the contrary, the Bayesian bounds exploit the prior information on the parameters, leading to a "real" lower bound on the global mean square error.
+
+In this paper¹, we are interested in the Weiss-Weinstein bound which, together with the bounds of the Ziv-Zakai family, is known to be among the tightest Bayesian bounds.
+We will study the two main source models used in the literature [17]: the unconditional (or stochastic) model, where the source signals are assumed to be Gaussian, and the conditional (or deterministic) model, where the source signals are assumed to be deterministic. Surprisingly, in the context of array processing, while closed-form expressions of the Ziv-Zakai bound (more precisely its extension by Bell et al. [18]) were proposed around 15 years ago for the unconditional model, the results concerning the Weiss-Weinstein bound have, most of the time, been obtained only by numerical computation. Concerning the unconditional model, in [19], the Weiss-Weinstein bound has been evaluated numerically and compared to the mean square error of the MUSIC algorithm and of classical beamforming using a particular 8 × 8 element array antenna. In [20], the authors have presented a numerical comparison between the Bayesian Cramér-Rao bound, the Ziv-Zakai bound and the Weiss-Weinstein bound for DOA estimation. In [21], numerical computations of the Weiss-Weinstein bound to optimize sensor positions for non-uniform linear arrays have been presented. Again in the unconditional model context, in [22], by considering the matched-field estimation problem, the authors have derived a semi-closed-form expression of a simplified version of the Weiss-Weinstein bound for DOA estimation; indeed, the integration over the prior probability density function was not performed. The conditional model (with known waveforms) is studied only in [23], where a closed-form expression of the WWB is given in the simple case of spectral analysis, and in [24], where a simplified version of the bound is derived.
+While the primary goal of this paper is to give closed-form expressions of the Weiss-Weinstein bound for the DOA estimation of a single source with an arbitrary planar array of sensors, under both conditional and unconditional source signal models, we also provide partial closed-form expressions of the bound which could be useful for other problems. First, we study the general Gaussian observation model with parameterized
+
+¹Section 5.2.2 of this paper has been partially presented in [24]
+---PAGE_BREAK---
+
+mean or parameterized covariance matrix. Indeed, one of the successes of the Cramér-Rao bound is that, for this observation model, a closed-form expression of the Fisher information matrix is available: this is the so-called Slepian-Bang formula [25]. Such formulas have been less investigated in the context of bounds tighter than the Cramér-Rao bound. Second, some results are given in the multiple-source context without specifying the structure of the steering matrix and of the noise covariance matrix. Finally, these results are applied to the particular case of a single source for two kinds of array geometries: the non-uniform linear array (elevation only) and the planar (azimuth and elevation) array. Consequently, the aim of this paper is also to provide a textbook of formulas which could be applied in other fields. The Weiss-Weinstein bound is known to depend on parameters called test-points and on other parameters generally denoted $s_i$. One particularity of this paper in comparison with the previous works on the Weiss-Weinstein bound is that we do not use the assumption $s_i = 1/2, \forall i$.
+
+This paper is organized as follows. Section 2 is devoted to the array processing observation model which will be used in the paper. In Section 3, a short background on the Weiss-Weinstein bound is presented and two general closed-form expressions which will be the cornerstone for our array processing problems are derived.
+In Section 4, we apply these general results to the array processing problem without specifying the structure of the steering matrix. In Section 5, we study the particular cases of the non-uniform linear array and of the planar array, for which we provide closed-form expressions of the bound in the context of a single stationary source in the far-field area. Some simulation results are proposed in Section 6. Finally, Section 7 gives our conclusions.
+
+# 2. Problem setup
+
+In this section, the general observation model used in array signal processing is presented, as well as the different assumptions used in the remainder of the paper. In particular, the so-called conditional and unconditional source models are emphasized.
+
+## 2.1. Observation model
+
+We consider the classical scenario of an array with $M$ sensors which receives $N$ complex bandpass signals $\mathbf{s}(t) = [s_1(t) \ s_2(t) \ \cdots \ s_N(t)]^T$. The output of the array is an $M \times 1$ complex vector $\mathbf{y}(t)$ which can be modelled as follows (see, e.g., [26] or [17])
+
+$$ \mathbf{y}(t) = \mathbf{A}(\theta)\mathbf{s}(t) + \mathbf{n}(t), \quad t = 1, \dots, T, \qquad (1) $$
+
+where $T$ is the number of snapshots, $\theta = [\theta_1 \ \theta_2 \ \cdots \ \theta_q]^T$ is an unknown parameter vector of interest², $\mathbf{A}(\theta)$ is the so-called $M \times N$ steering matrix of the array response to the sources, and the $M \times 1$ random vector $\mathbf{n}(t)$ is an additive noise.
+
+²Note that one source can be described by several parameters. Consequently, *q* > *N* in general.
+---PAGE_BREAK---
+
+## 2.2. Assumptions
+
+* The unknown parameters of interest are assumed to be random with an *a priori* probability density function $p(\theta_i)$, $i = 1, \dots, q$.
+These random parameters are assumed to be statistically independent, such that the *a priori* joint probability density function is $p(\boldsymbol{\theta}) = \prod_{i=1}^q p(\theta_i)$. Note that this assumption will only be used in Subsections 4.2 and 4.3. We also assume that the parameter space, denoted $\Theta$, is a connected subset of $\mathbb{R}^q$ (see [27]).
+
+* The noise vector is assumed to be complex Gaussian, statistically independent of the parameters, i.i.d., circular, with zero mean and known covariance matrix $E[\mathbf{n}(t)\mathbf{n}^H(t)] = \mathbf{R}_n$. This assumption will be made more restrictive in Section 5, where it will be assumed that $\mathbf{R}_n = \sigma_n^2\mathbf{I}$. In any case, $\mathbf{R}_n$ is assumed to be a full-rank matrix.
+
+* The steering matrix $\mathbf{A}(\boldsymbol{\theta})$ is assumed to be such that the observation model is identifiable. From Section 3 to Section 4, the structure of $\mathbf{A}(\boldsymbol{\theta})$ is not specified, in order to obtain the most general results.
+
+* Concerning the source signals, two kinds of models have been investigated in the literature (see, e.g., [28] or [17]) and will be alternatively used in this paper.
+
+- $\mathcal{M}_1$: *Unconditional or stochastic model:* $\mathbf{s}(t)$ is assumed to be a complex circular random vector, i.i.d., statistically independent of the noise, Gaussian with zero mean and known covariance matrix $E[\mathbf{s}(t)\mathbf{s}^H(t)] = \mathbf{R}_s$. Note that, concerning the previous results on the Cramér-Rao bound available in the literature [28], the covariance matrix $\mathbf{R}_s$ is assumed to be unknown. In this paper, we have made the simpler assumption that the covariance matrix $\mathbf{R}_s$ is known. These assumptions have already been used for the calculation of bounds more complex than the Cramér-Rao bound (see, e.g., [22], [29], [30]).
+
+- $\mathcal{M}_2$: *Conditional or deterministic model:* $\forall t$, $\mathbf{s}(t)$ is assumed to be deterministic and known.
+Note that, under the conditional model assumption, the signal waveforms can be assumed either unknown or known. While the conditional observation model with unknown waveforms seems more challenging, the conditional model with known waveforms, which will be used in this paper, can be found in several applications such as mobile telecommunications and radar (see, e.g., [31], [32], and [33]).
+
+## 2.3. Likelihood of the observations
+
+Let $\mathbf{R}_y = E[(\mathbf{y}(t) - E[\mathbf{y}(t)])(\mathbf{y}(t) - E[\mathbf{y}(t)])^H]$ be the covariance matrix of the observation vector $\mathbf{y}(t)$. According to the aforementioned assumptions, it is easy to see that under $\mathcal{M}_1$, the observations $\mathbf{y}(t)$ are distributed as a complex circular Gaussian random vector with zero mean and covariance matrix
+ +Therefore, the likelihood, $p(\mathbf{Y}; \boldsymbol{\theta})$, of the full observations matrix $\mathbf{Y} = [\mathbf{y}(1) \ \mathbf{y}(2) \ \dots \ \mathbf{y}(T)]$ under $\mathcal{M}_1$ +is given by + +$$ +p(\mathbf{Y}; \boldsymbol{\theta}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{Y}}(\boldsymbol{\theta})|^T} \exp \left( -\sum_{t=1}^{T} \mathbf{y}(t)^H \mathbf{R}_{\mathbf{Y}}^{-1}(\boldsymbol{\theta}) \mathbf{y}(t) \right), \quad (2) +$$ + +where $\mathbf{R}_y(\theta) = \mathbf{A}(\theta)\mathbf{R}_s\mathbf{A}^H(\theta) + \mathbf{R}_n$ and the likelihood under $\mathcal{M}_2$ is given by + +$$ +p(\mathbf{Y}; \boldsymbol{\theta}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathrm{n}}|^T} \exp \left( -\sum_{t=1}^{T} (\mathbf{y}(t) - \mathbf{A}(\boldsymbol{\theta}) \mathbf{s}(t))^{H} \mathbf{R}_{\mathrm{n}}^{-1} (\mathbf{y}(t) - \mathbf{A}(\boldsymbol{\theta}) \mathbf{s}(t)) \right). \quad (3) +$$ + +**3. Weiss-Weinstein bound: Generalities** + +In this Section, we first remind to the reader the structure of the Weiss-Weinstein bound on the mean square error and the assumptions used to compute this bound. Second, a general result about the Gaussian observation model with parameterized mean or parameterized covariance matrix, which, to the best of our knowledge, does not appear in the literature is presented. This result will be useful to study both the unconditional model $\mathcal{M}_1$ and the conditional model $\mathcal{M}_2$ in the next Section. + +3.1. Background + +The Weiss-Weinstein bound for a $q \times 1$ real parameter vector $\boldsymbol{\theta}$ is a $q \times q$ matrix denoted **WWB** and is +given as follows [34] + +$$ +\text{WWB} = \text{HG}^{-1}\text{H}^T, \tag{4} +$$ + +where the $q \times q$ matrix $\mathbf{H} = [\mathbf{h}_1 \ \mathbf{h}_2 \dots \mathbf{h}_q]$ contains the so-called test-points $\mathbf{h}_i$, $i = 1, \dots, q$ such that +$\boldsymbol{\theta} + \mathbf{h}_i \in \Theta \ \forall \mathbf{h}_i$. 
+The $k, l$-element of the $q \times q$ matrix $\mathbf{G}$ is given by
+
+$$
+\{\mathbf{G}\}_{k,l} = \frac{\mathbb{E}\left[\left(L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta}) - L^{1-s_k}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_k, \boldsymbol{\theta})\right)\left(L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta}) - L^{1-s_l}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_l, \boldsymbol{\theta})\right)\right]}{\mathbb{E}\left[L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})\right] \mathbb{E}\left[L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta})\right]}, \quad (5)
+$$
+
+where the expectations are taken over the joint probability density function $p(\mathbf{Y}, \boldsymbol{\theta})$ and where the function $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_i, \boldsymbol{\theta})$ is defined by $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_i, \boldsymbol{\theta}) = \frac{p(\mathbf{Y}, \boldsymbol{\theta}+\mathbf{h}_i)}{p(\mathbf{Y}, \boldsymbol{\theta})}$. The notation $L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})$ means that $s_k$ is the power of $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})$. The elements $s_i$ are such that $s_i \in [0, 1], i = 1, \dots, q$.
+
+Note that we have the following order relation [34]
+
+$$
+\operatorname{Cov}(\hat{\theta}) = E\left[(\hat{\theta} - \theta)(\hat{\theta} - \theta)^T\right] \succeq \operatorname{WWB}, \quad (6)
+$$
+
+where $\mathbf{A} \succeq \mathbf{B}$ means that the matrix $\mathbf{A} - \mathbf{B}$ is a positive semi-definite matrix and where $\operatorname{Cov}(\hat{\theta})$ is the global (the expectation is taken over the joint pdf $p(\mathbf{Y}, \boldsymbol{\theta})$) mean square error of any estimator $\hat{\theta}$ of the
Finally, in order to obtain a tight bound, one has to maximize $\mathbf{WWB}$ over the test-points $\mathbf{h}_i$ and the parameters $s_i$ ($i=1, \dots, q$). Note that this maximization can be done by using the trace of $\mathbf{HG}^{-1}\mathbf{H}^T$ or with respect to the Loewner partial ordering [35]. In this paper we will use the trace of $\mathbf{HG}^{-1}\mathbf{H}^T$, which is enough to obtain tight results.

## 3.2. A general result on the Weiss-Weinstein bound and its application to the Gaussian observation models

An analytical result on the Weiss-Weinstein bound which will be useful in the following derivations, and which could be useful for other problems, is derived in this part. Note that this result is independent of the parameter vector size $q$ and of the considered observation model.

Let us denote by $\Omega$ the observation space. By rewriting the elements of matrix $\mathbf{G}$ (see Eqn. (5)) involved in the Weiss-Weinstein bound, one obtains for the numerator, denoted by $N_{\{\mathbf{G}\}_{k,l}}$,

$$
\begin{aligned}
N_{\{\mathbf{G}\}_{k,l}} &= \mathbb{E} \left[ \left(L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta}) - L^{1-s_k}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_k, \boldsymbol{\theta})\right) \left(L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta}) - L^{1-s_l}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_l, \boldsymbol{\theta})\right) \right] \\
&= \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k) p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_k+s_l-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} + \int_{\Theta} \int_{\Omega} \frac{p^{1-s_k}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_k) p^{1-s_l}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_l)}{p^{1-s_k-s_l}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} \\
&\quad - \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k) p^{1-s_l}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_l)}{p^{s_k-s_l}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} - \int_{\Theta} \int_{\Omega} \frac{p^{1-s_k}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_k) p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_l-s_k}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta},
\end{aligned}
\quad (7)
$$

and for the denominator, denoted by $D_{\{\mathbf{G}\}_{k,l}}$,

$$
\begin{aligned}
D_{\{\mathbf{G}\}_{k,l}} &= \mathbb{E}\left[L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})\right] \mathbb{E}\left[L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta})\right] \\
&= \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k)}{p^{s_k-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta} \int_{\Theta} \int_{\Omega} \frac{p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_l-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta}.
\end{aligned}
\quad (8)
$$

Let us now define a function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ as

$$
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \int_{\Theta} \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta},
\quad (9)
$$

where $(\alpha, \beta) \in [0, 1]^2$ and where $(\mathbf{u}, \mathbf{v})$ are two $q \times 1$ vectors such that $\boldsymbol{\theta} + \mathbf{u} \in \Theta$ and $\boldsymbol{\theta} + \mathbf{v} \in \Theta$. The notation $p^\alpha(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u})$ means that $p(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u})$ is raised to the power $\alpha$.
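To build intuition for Eqn. (9), $\eta(\alpha, \beta, u, v)$ can be evaluated by brute-force numerical integration and compared against the closed form it admits when the joint pdf is Gaussian. The sketch below is our own toy scalar example (not from the paper), with $\theta \sim \mathcal{N}(0, \sigma_p^2)$ and $Y\,|\,\theta \sim \mathcal{N}(\theta, \sigma^2)$; the closed form used is the standard identity for integrals of products of Gaussian powers.

```python
import numpy as np

# Brute-force evaluation of eta(alpha, beta, u, v) from Eqn. (9) -- our own toy
# scalar example, NOT from the paper: p(Y, theta) with theta ~ N(0, sp2) and
# Y | theta ~ N(theta, s2), integrated on a large truncated grid.
sp2, s2 = 1.0, 0.5

def joint(y, t):
    return (np.exp(-0.5 * (y - t) ** 2 / s2 - 0.5 * t ** 2 / sp2)
            / (2.0 * np.pi * np.sqrt(s2 * sp2)))

def eta(alpha, beta, u, v, lim=8.0, n=801):
    y, t = np.meshgrid(np.linspace(-lim, lim, n), np.linspace(-lim, lim, n))
    f = (joint(y, t + u) ** alpha * joint(y, t + v) ** beta
         / joint(y, t) ** (alpha + beta - 1.0))
    step = 2.0 * lim / (n - 1)
    return f.sum() * step * step   # Riemann sum; the tails are negligible here

# Closed form in this jointly Gaussian case (standard Gaussian-powers identity):
# eta = exp(-(J/2)(alpha u^2 + beta v^2 - (alpha u + beta v)^2)), J = 1/s2 + 1/sp2
J = 1.0 / s2 + 1.0 / sp2
a, b, u, v = 0.3, 0.6, 0.4, -0.2
val = eta(a, b, u, v)
exact = np.exp(-(J / 2.0) * (a * u ** 2 + b * v ** 2 - (a * u + b * v) ** 2))
print(val, exact)
```

For generic observation models no such closed form exists, which is precisely why the paper derives $\eta$ analytically for the two Gaussian observation models below.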
By identification, it is easy to see that

$$
\{\mathbf{G}\}_{k,l} = \frac{\eta(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) + \eta(1-s_k, 1-s_l, -\mathbf{h}_k, -\mathbf{h}_l) - \eta(s_k, 1-s_l, \mathbf{h}_k, -\mathbf{h}_l) - \eta(1-s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l)}{\eta(s_k, 0, \mathbf{h}_k, \mathbf{0}) \, \eta(0, s_l, \mathbf{0}, \mathbf{h}_l)}. \quad (10)
$$

Note that we choose the arbitrary notation $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 0, \mathbf{h}_k, \mathbf{0}) \eta(0, s_l, \mathbf{0}, \mathbf{h}_l)$ for the denominator. The notation $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 1, \mathbf{h}_k, \mathbf{0}) \eta(1, s_l, \mathbf{0}, \mathbf{h}_l)$ or even $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 0, \mathbf{h}_k, \mathbf{v}) \eta(0, s_l, \mathbf{u}, \mathbf{h}_l)$ would lead to the same result.

With Eqn. (10), it is clear that the knowledge of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ for a particular problem leads to the Weiss-Weinstein bound (up to the maximization procedure over the test-points and over the parameters $s_i$). Surprisingly, this simple expression is given in [34] only for $s_i = 1/2$, $\forall i$, and not for the general case.

Let us now detail this function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$.
The function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ can be rewritten as

$$
\begin{aligned}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \int_{\Theta} \frac{p^{\alpha}(\boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\boldsymbol{\theta})} \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}; \boldsymbol{\theta})} d\mathbf{Y} \, d\boldsymbol{\theta} \\
&= \int_{\Theta} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\boldsymbol{\theta})} d\boldsymbol{\theta},
\end{aligned}
\tag{11}
$$

where we define

$$ \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}; \boldsymbol{\theta})} d\mathbf{Y}. \quad (12) $$

Our aim is to give the most general result. Consequently, we will focus only on $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$, since the *a priori* probability density function depends on the considered problem.

An important remark pointed out in [27] is that the integration over the parameter space is with respect to the region $\{\boldsymbol{\theta}: p(\boldsymbol{\theta}) > 0\}$.
However, since the functions being integrated are $p(\boldsymbol{\theta})$, $p(\boldsymbol{\theta} + \mathbf{u})$, and $p(\boldsymbol{\theta} + \mathbf{v})$, the actual region of integration (where all these functions are positive) is the intersection of three regions, $\{\boldsymbol{\theta}: p(\boldsymbol{\theta}) > 0\} \cap \{\boldsymbol{\theta}: p(\boldsymbol{\theta} + \mathbf{u}) > 0\} \cap \{\boldsymbol{\theta}: p(\boldsymbol{\theta} + \mathbf{v}) > 0\}$. Note that, in order to simplify the notation, we only use $\Theta$ throughout this paper, but this remark will be useful and explicitly specified in Section 4.2.

### 3.2.1. Gaussian observation model with parameterized covariance matrix

A (circular, i.i.d.) Gaussian observation model with parameterized covariance matrix is a model in which the observations satisfy $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}))$, where $\boldsymbol{\theta}$ are the parameters of interest. Note that $\mathcal{M}_1$ is a special case of this model, since the parameters of interest appear only in the covariance matrix of the observations, which has the particular structure $\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{R}_{\mathbf{s}}\mathbf{A}^H(\boldsymbol{\theta}) + \mathbf{R}_{\mathbf{n}}$. The closed-form expression of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is given by

$$
\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta} |\alpha\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}+\mathbf{u}) + \beta\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}+\mathbf{v}) - (\alpha+\beta-1)\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta})|^T}.
\tag{13}
$$

The proof is given in Appendix .1. Note that similar expressions are given in [18] (Eqn. (B.15)) and [36] (p. 67, Eqn.
(52)) for the particular case where $\alpha = s$ and $\beta = 1-s$.

### 3.2.2. Gaussian observation model with parameterized mean

A (circular, i.i.d.) Gaussian observation model with parameterized mean is a model in which the observations satisfy $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{f}_t(\boldsymbol{\theta}), \mathbf{R}_{\mathbf{y}})$, where $\boldsymbol{\theta}$ are the parameters of interest. Note that $\mathcal{M}_2$ is a special case of this model, since the parameters of interest appear only in the mean of the observations, which has the particular structure $\mathbf{f}_t(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)$ (and $\mathbf{R}_{\mathbf{y}} = \mathbf{R}_{\mathbf{n}}$). The closed-form expression of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is given in this case by

$$
\begin{aligned}
\ln \dot{\eta}_{\boldsymbol{\theta}} (\alpha, \beta, \mathbf{u}, \mathbf{v}) = & -\sum_{t=1}^{T} \Big[ \alpha (1-\alpha) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta (1-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \\
& + (1-\alpha-\beta) (\alpha+\beta) \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) - 2 \operatorname{Re} \{\alpha\beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \\
& + \alpha (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) + \beta (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \} \Big],
\end{aligned}
\tag{14}
$$

or equivalently by

$$
\begin{aligned}
\ln \dot{\eta}_{\boldsymbol{\theta}} (\alpha, \beta, \mathbf{u}, \mathbf{v}) = & -\sum_{t=1}^{T} \Big[ \alpha (1-\alpha-\beta) \| \mathbf{R}_{\mathbf{y}}^{-1/2} (\mathbf{f}_t(\boldsymbol{\theta}+\mathbf{u}) - \mathbf{f}_t(\boldsymbol{\theta})) \|^{2} + \alpha\beta \| \mathbf{R}_{\mathbf{y}}^{-1/2} (\mathbf{f}_t(\boldsymbol{\theta}+\mathbf{u}) - \mathbf{f}_t(\boldsymbol{\theta}+\mathbf{v})) \|^{2} \\
& + \beta (1-\alpha-\beta) \| \mathbf{R}_{\mathbf{y}}^{-1/2} (\mathbf{f}_t(\boldsymbol{\theta}+\mathbf{v}) - \mathbf{f}_t(\boldsymbol{\theta})) \|^{2} \Big].
\end{aligned}
\tag{15}
$$

The details are given in Appendix .2.

## 4. General application to array processing

In the previous Section, it has been shown that the Weiss-Weinstein bound computation (or, at least, the computation of the matrix $\mathbf{G}$) reduces to the knowledge of the function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ given by Eqn. (9). As one can see in Eqn. (10), the elements of the matrix $\mathbf{G}$ depend on $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ for particular values of $\alpha$, $\beta$, $\mathbf{u}$, and $\mathbf{v}$. Consequently, the goal of this Section is to detail these particular functions for our model given by Eqn. (1). Since Eqn. (9) can be decomposed into a *deterministic part* (in the sense that $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ (see Eqn. (12)) only depends on the likelihood function) and a *Bayesian part* (when we have to integrate $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ over the *a priori* probability density function of the parameters), we will first focus on the particular functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ by using the results of the previous Section on the Gaussian observation model with parameterized mean or covariance matrix. Second, we will detail the passage from $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ to $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ in the particular case where $p(\theta_i)$ is a uniform probability density function $\forall i$. Another result will also be given in the case of a Gaussian prior.

4.1.
Analysis of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$

We will now detail the particular functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the different elements $\{\mathbf{G}\}_{k,l}$, $(k,l) \in \{1, \dots, q\}^2$, for both models $\mathcal{M}_1$ and $\mathcal{M}_2$.

### 4.1.1. Unconditional observation model $\mathcal{M}_1$

Under the unconditional model $\mathcal{M}_1$, by using Eqn. (13), one obtains straightforwardly the functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the elements $\{\mathbf{G}\}_{k,l} = \{\mathbf{G}\}_{l,k}$

$$
\left\{
\begin{aligned}
\dot{\eta}_{\theta}(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\theta)|^{T(s_k+s_l-1)}}{|\mathbf{R}_y(\theta+\mathbf{h}_k)|^{Ts_k} |\mathbf{R}_y(\theta+\mathbf{h}_l)|^{Ts_l} |s_k \mathbf{R}_y^{-1}(\theta+\mathbf{h}_k)+s_l \mathbf{R}_y^{-1}(\theta+\mathbf{h}_l)-(s_k+s_l-1) \mathbf{R}_y^{-1}(\theta)|^T}, \\
\dot{\eta}_{\theta}(1-s_k, 1-s_l, -\mathbf{h}_k, -\mathbf{h}_l) &= \frac{|\mathbf{R}_y(\theta)|^{T(1-s_k-s_l)}}{|\mathbf{R}_y(\theta-\mathbf{h}_k)|^{T(1-s_k)} |\mathbf{R}_y(\theta-\mathbf{h}_l)|^{T(1-s_l)} |(1-s_k)\mathbf{R}_y^{-1}(\theta-\mathbf{h}_k)+(1-s_l)\mathbf{R}_y^{-1}(\theta-\mathbf{h}_l)-(1-s_k-s_l)\mathbf{R}_y^{-1}(\theta)|^T}, \\
\dot{\eta}_{\theta}(s_k, 1-s_l, \mathbf{h}_k, -\mathbf{h}_l) &= \frac{|\mathbf{R}_y(\theta)|^{T(s_k-s_l)}}{|\mathbf{R}_y(\theta+\mathbf{h}_k)|^{Ts_k} |\mathbf{R}_y(\theta-\mathbf{h}_l)|^{T(1-s_l)} |s_k \mathbf{R}_y^{-1}(\theta+\mathbf{h}_k)+(1-s_l)\mathbf{R}_y^{-1}(\theta-\mathbf{h}_l)-(s_k-s_l)\mathbf{R}_y^{-1}(\theta)|^T}, \\
\dot{\eta}_{\theta}(1-s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\theta)|^{T(s_l-s_k)}}{|\mathbf{R}_y(\theta-\mathbf{h}_k)|^{T(1-s_k)} |\mathbf{R}_y(\theta+\mathbf{h}_l)|^{Ts_l} |(1-s_k)\mathbf{R}_y^{-1}(\theta-\mathbf{h}_k)+s_l\mathbf{R}_y^{-1}(\theta+\mathbf{h}_l)-(s_l-s_k)\mathbf{R}_y^{-1}(\theta)|^T}, \\
\dot{\eta}_{\theta}(s_k, 0, \mathbf{h}_k, \mathbf{0}) &= \frac{|\mathbf{R}_y(\theta)|^{T(s_k-1)}}{|\mathbf{R}_y(\theta+\mathbf{h}_k)|^{Ts_k} |s_k \mathbf{R}_y^{-1}(\theta+\mathbf{h}_k)-(s_k-1)\mathbf{R}_y^{-1}(\theta)|^T}, \\
\dot{\eta}_{\theta}(0, s_l, \mathbf{0}, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\theta)|^{T(s_l-1)}}{|\mathbf{R}_y(\theta+\mathbf{h}_l)|^{Ts_l} |s_l \mathbf{R}_y^{-1}(\theta+\mathbf{h}_l)-(s_l-1)\mathbf{R}_y^{-1}(\theta)|^T}.
\end{aligned}
\right.
\quad (16)
$$

The diagonal elements of $\mathbf{G}$ are obtained by letting $k=l$ in the above equations.

### 4.1.2. Conditional observation model $\mathcal{M}_2$

Under the conditional model $\mathcal{M}_2$, by using Eqn. (15) with $\mathbf{f}_t(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)$ and $\mathbf{R}_{\mathbf{y}} = \mathbf{R}_{\mathbf{n}}$, one obtains straightforwardly the functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the elements $\{\mathbf{G}\}_{k,l} = \{\mathbf{G}\}_{l,k}$

$$
\left\{
\begin{array}{l}
\ln \dot{\eta}_{\theta}(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) = s_k (s_k + s_l - 1) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}) + s_l (s_k + s_l - 1) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}) - s_k s_l \zeta_{\theta}(\mathbf{h}_k, \mathbf{h}_l), \\
\\
\ln \dot{\eta}_{\theta}(1 - s_k, 1 - s_l, -\mathbf{h}_k, -\mathbf{h}_l) = (s_k - 1)(s_k + s_l - 1) \zeta_{\theta}(-\mathbf{h}_k, \mathbf{0}) + (s_l - 1)(s_k + s_l - 1) \zeta_{\theta}(-\mathbf{h}_l, \mathbf{0}) \\
\qquad - (1 - s_k)(1 - s_l) \zeta_{\theta}(-\mathbf{h}_k, -\mathbf{h}_l), \\
\\
\ln \dot{\eta}_{\theta}(s_k, 1 - s_l, \mathbf{h}_k, -\mathbf{h}_l) = s_k (s_k - s_l) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}) + (1 - s_l)(s_k - s_l) \zeta_{\theta}(-\mathbf{h}_l, \mathbf{0}) + s_k (s_l - 1) \zeta_{\theta}(\mathbf{h}_k, -\mathbf{h}_l), \\
\\
\ln \dot{\eta}_{\theta}(1 - s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l) = (s_k - 1)(s_k - s_l)
\zeta_{\theta}(-\mathbf{h}_k, \mathbf{0}) + s_l (s_l - s_k) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}) + (s_k - 1) s_l \zeta_{\theta}(-\mathbf{h}_k, \mathbf{h}_l), \\
\\
\ln \dot{\eta}_{\theta}(s_k, 0, \mathbf{h}_k, \mathbf{0}) = s_k (s_k - 1) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}), \\
\\
\ln \dot{\eta}_{\theta}(0, s_l, \mathbf{0}, \mathbf{h}_l) = s_l (s_l - 1) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}),
\end{array}
\right.
\tag{17}
$$

where we define

$$
\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \sum_{t=1}^{T} \| \mathbf{R}_{\mathbf{n}}^{-1/2} (\mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\mu}) - \mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\rho})) \mathbf{s}(t) \|^{2}. \quad (18)
$$

The diagonal elements of $\mathbf{G}$ are obtained by letting $k=l$ in the above equations. Note that, since we are working on the matrix $\mathbf{G}$, all the results proposed so far hold whatever the number of test-points.

## 4.2. Analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ with a uniform prior

Of course, the analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ given by Eqn. (11) can only be conducted by specifying the *a priori* probability density functions of the parameters. Consequently, the results provided here are very specific. However, note that, in general, this aspect is less emphasized in the literature, where most authors give results without specifying the prior probability density functions and compute the rest of the bound numerically (see, e.g., [22][20][37]).

We assume that all the parameters $\theta_i$ have a uniform prior distribution over the interval $[a_i, b_i]$ and are statistically independent. We will also assume one test-point per parameter; otherwise there is no possibility to obtain (pseudo) closed-form expressions. Consequently, the matrix $\mathbf{H}$ is such that

$$
\mathbf{H} = \mathrm{Diag} ([h_1 \ h_2 \ \cdots \ h_q]), \tag{19}
$$

and the vector $\mathbf{h}_i$, $i = 1, \dots, q$, takes the value $h_i$ at the $i$th row and zero elsewhere.
So, in this analysis, the vector $\mathbf{u}$ takes the value $u_i$ at the $i$th row and zero elsewhere, and the vector $\mathbf{v}$ takes the value $v_j$ at the $j$th row and zero elsewhere (of course, we can have $i = j$). Under these assumptions, $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ can be rewritten³ for $i \neq j$ as

$$
\begin{aligned}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \int_{\Theta} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_j + v_j) p^{\beta}(\theta_i) p^{\alpha}(\theta_j)}{p^{\alpha+\beta-1}(\theta_i) p^{\alpha+\beta-1}(\theta_j)} \prod_{\substack{k=1 \\ k \neq i, k \neq j}}^{q} p(\theta_k) \, d\boldsymbol{\theta} \\
&= \frac{1}{\prod_{k=1}^{q} (b_k - a_k)} \int_{\Theta^{q-2}} \int_{\Theta_j} \int_{\Theta_i} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \, d\theta_i \, d\theta_j \, d(\boldsymbol{\theta} / \{\theta_i, \theta_j\}), \tag{20}
\end{aligned}
$$

where $\Theta_i = \begin{cases} [a_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i] & \text{if } u_i < 0, \end{cases}$ and $\Theta_j = \begin{cases} [a_j, b_j - v_j] & \text{if } v_j > 0, \\ [a_j - v_j, b_j] & \text{if } v_j < 0. \end{cases}$ For $i=j$, one can have $\mathbf{v} = \pm \mathbf{u}$; one then obtains

$$
\begin{aligned}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \pm \mathbf{u}) &= \int_{\Theta} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_i \pm u_i)}{p^{\alpha+\beta-1}(\theta_i)} \prod_{\substack{k=1 \\ k \neq i}}^{q} p(\theta_k) \, d\boldsymbol{\theta} \\
&= \frac{1}{\prod_{k=1}^{q} (b_k - a_k)} \int_{\Theta^{q-1}} \int_{\Theta_i} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v} = \pm \mathbf{u}) \, d\theta_i \, d(\boldsymbol{\theta} / \{\theta_i\}). \tag{21}
\end{aligned}
$$

In the last equation, if $\mathbf{v} = -\mathbf{u}$, then $\Theta_i = \begin{cases} [a_i + u_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i + u_i] & \text{if } u_i < 0, \end{cases}$ while, if $\mathbf{v} = \mathbf{u}$, then $\Theta_i = \begin{cases} [a_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i] & \text{if } u_i < 0. \end{cases}$

Depending on the structure of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$, $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ either has to be computed numerically or a closed-form expression can be found.

Another particular case which appears sometimes is when the function $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend on $\boldsymbol{\theta}$ (see [23][5][8][18][20][21][27][29] and Section 5 of this paper). In this case, $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is denoted

³In this case, one has to pay particular attention to the integration domain, as mentioned in Section 3.2. This will not be the case for the Gaussian prior, since the support is $\mathbb{R}$.

$\dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$, and one obtains from Eqn. (20)

$$
\begin{aligned}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \frac{\dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v})}{\prod_{k=1}^{q} (b_k - a_k)} \left( \prod_{\substack{k=1 \\ k \neq i, k \neq j}}^{q} \int_{a_k}^{b_k} d\theta_k \right) \int_{\Theta_i} d\theta_i \int_{\Theta_j} d\theta_j \\
&= \frac{(b_i - a_i - |u_i|)(b_j - a_j - |v_j|)}{(b_i - a_i)(b_j - a_j)} \, \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}), \tag{22}
\end{aligned}
$$

and from Eqn.
(21)

$$
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \mathbf{u}) = \frac{b_i - a_i - |u_i|}{b_i - a_i} \, \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}), \quad (23)
$$

and

$$
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = -\mathbf{u}) = \frac{b_i - a_i - 2|u_i|}{b_i - a_i} \, \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}). \quad (24)
$$

### 4.3. Analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ with a Gaussian prior

Finally, one can mention that if the prior is now assumed to be Gaussian, i.e., $\theta_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ $\forall i$, and $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend on $\boldsymbol{\theta}$, one obtains after a straightforward calculation

$$
\begin{align}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha}(\theta_i + u_i)}{p^{\alpha-1}(\theta_i)} d\theta_i \int_{\mathbb{R}} \frac{p^{\beta}(\theta_j + v_j)}{p^{\beta-1}(\theta_j)} d\theta_j \\
&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp \left( -\frac{1}{2} \left( \frac{\alpha(1-\alpha)u_i^2}{\sigma_i^2} + \frac{\beta(1-\beta)v_j^2}{\sigma_j^2} \right) \right), \tag{25}
\end{align}
$$

$$
\begin{align}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \mathbf{u}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha+\beta}(\theta_i + u_i)}{p^{\alpha+\beta-1}(\theta_i)} d\theta_i \\
&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp\left(-\frac{(\alpha+\beta)(1-\alpha-\beta)u_i^2}{2\sigma_i^2}\right), \tag{26}
\end{align}
$$

and

$$
\begin{align}
\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = -\mathbf{u}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_i - u_i)}{p^{\alpha+\beta-1}(\theta_i)} d\theta_i \\
&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp\left(-\frac{(\alpha + \beta - \alpha^2 - \beta^2 + 2\alpha\beta)
u_i^2}{2\sigma_i^2}\right). \tag{27}
\end{align}
$$

## 5. Specific applications to array processing: DOA estimation

We now consider the application of the Weiss-Weinstein bound in the particular context of source localization. Indeed, until now, the structure of the steering matrix $\mathbf{A}(\boldsymbol{\theta})$ for a particular problem has not been used in the proposed (semi) closed-form expressions. Consequently, these previous results can be applied to a large class of estimation problems such as far-field and near-field source localization, passive localization with a polarized array of sensors, or radar (known waveforms).

Here, we want to focus on the direction-of-arrival estimation of a single source in the far-field area with a narrow-band signal. In this case, the steering matrix $\mathbf{A}(\boldsymbol{\theta})$ becomes a steering vector denoted $\mathbf{a}(\boldsymbol{\theta})$ (except for one preliminary result concerning the conditional model, which will be given whatever the number of sources in Section 5.1.2). The structure of this vector will be specified by the analysis of two kinds of array geometry: the non-uniform linear array, from which only one angle-of-arrival can be estimated ($\boldsymbol{\theta}$ becomes a scalar), and the arbitrary planar array, from which both azimuth and elevation can be estimated ($\boldsymbol{\theta}$ becomes a $2 \times 1$ vector). In any case, the array always consists of $M$ identical, omnidirectional sensors. Both models $\mathcal{M}_1$ and $\mathcal{M}_2$ will be considered, and the noise will be assumed spatially uncorrelated: $\mathbf{R}_{\mathbf{n}} = \sigma_n^2 \mathbf{I}$. Since we focus on the single source scenario, the variance of the source signal $s(t)$ is denoted $\sigma_s^2$ for the model $\mathcal{M}_1$.
The general structure of the $i$th element of the steering vector is as follows:

$$ \{\mathbf{a}(\boldsymbol{\theta})\}_i = \exp \left( j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\theta} \right), \quad i = 1, \dots, M, \qquad (28) $$

where $\boldsymbol{\theta}$ represents the parameter vector, $\lambda$ denotes the wavelength, and $\mathbf{r}_i$ denotes the coordinates of the $i$th sensor position with respect to a given reference frame. In the following, $\mathbf{r}_i$ will be a scalar or a $2 \times 1$ vector depending on the context (linear array or planar array).

## 5.1. Preliminary results

Since our analysis is now reduced to the single source case, we give here some other closed-form expressions which will be useful when we detail the specific linear and planar arrays.

### 5.1.1. Unconditional observation model $\mathcal{M}_1$

In order to detail the set of functions $\dot{\eta}_{\boldsymbol{\theta}}$ given by Eqn. (16), one has to find closed-form expressions of the determinant $|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})|$ and of determinants having the following structure: $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)|$ with $m_1 + m_2 = 1$, or $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2) + m_3\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_3)|$ with $m_1 + m_2 + m_3 = 1$. Under $\mathcal{M}_1$, the observation covariance matrix is now given by

$$ \mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}) = \sigma_s^2 \mathbf{a}(\boldsymbol{\theta}) \mathbf{a}^H(\boldsymbol{\theta}) + \sigma_n^2 \mathbf{I}_M.
\qquad (29) $$

Concerning the calculation of $|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})|$, it is easy to find

$$ |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})| = \sigma_n^{2M} \left( 1 + \frac{\sigma_s^2}{\sigma_n^2} \|\mathbf{a}(\boldsymbol{\theta} + \mathbf{u})\|^2 \right). \qquad (30) $$

Moreover, after calculations detailed in Appendix .3, one obtains for the other determinants

$$ |m_1 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)| = \frac{1}{(\sigma_n^2)^M} \left( 1 - m_1 \varphi_1 \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 - m_2 \varphi_2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 - m_1 \varphi_1 m_2 \varphi_2 \left( \| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \right) \right) \qquad (31) $$

and

$$
\begin{aligned}
|m_1 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2) + m_3 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_3)| = \frac{1}{(\sigma_n^2)^M} \Bigg( & 1 - \sum_{k=1}^3 m_k \varphi_k \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 \\
& - \frac{1}{2} \sum_{k=1}^3 \sum_{\substack{k'=1 \\ k' \neq k}}^3 m_k \varphi_k m_{k'} \varphi_{k'} \left( \| \mathbf{a}^H(\boldsymbol{\theta}_k) \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 \right) \\
& - \left( \prod_{k=1}^3 m_k \varphi_k \right) \Big( \prod_{k=1}^3 \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_3) \|^2 \\
& \quad - \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2 \| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \|^2 + 2 \operatorname{Re} \left\{ \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \, \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \, \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) \right\} \Big) \Bigg),
\end{aligned}
\tag{32}
$$

where

$$
\varphi_k = \frac{\sigma_s^2}{\sigma_s^2 \|\mathbf{a}(\boldsymbol{\theta}_k)\|^2 + \sigma_n^2}, \quad k = 1, 2, 3. \tag{33}
$$

### 5.1.2. Conditional observation model $\mathcal{M}_2$

Note that the results proposed here hold for any number of sources. Under the conditional model, the set of functions $\dot{\eta}_{\boldsymbol{\theta}}$ given by Eqn. (17) is linked to the function $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ given by Eqn. (18). In this analysis, the vector $\boldsymbol{\mu}$ takes the value $\mu_i$ at the $i$th row and zero elsewhere, and the vector $\boldsymbol{\rho}$ takes the value $\rho_j$ at the $j$th row and zero elsewhere (of course, one can have $i = j$). In Appendix .4, the calculation of the following closed-form expressions for $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ is detailed.
• If $(m-1)p+1 \le i,j \le mp$, where $p$ denotes the number of parameters per source (i.e., both indices relate to the $m$th source), then we have

$$
\begin{aligned}
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) ={}& \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_m|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_{\mathbf{n}}^{-1}\}_{i,j} \exp\left(j\frac{2\pi}{\lambda}(\mathbf{r}_{j} - \mathbf{r}_{i})^{T}\boldsymbol{\theta}_{m}\right) \\
& \times \left( \exp\left(-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\mu}_{m}\right) - \exp\left(-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\rho}_{m}\right) \right) \left( \exp\left(j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\mu}_{m}\right) - \exp\left(j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\rho}_{m}\right) \right). \tag{34}
\end{aligned}
$$

• Otherwise, if $(m-1)p+1 \le i \le mp$ and $(n-1)p+1 \le j \le np$ (i.e., the two indices relate to two different sources), then we have

$$
\begin{aligned}
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) ={}& \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_m|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_{\mathbf{n}}^{-1}\}_{i,j} \exp\left(j\frac{2\pi}{\lambda}(\mathbf{r}_{j} - \mathbf{r}_{i})^{T}\boldsymbol{\theta}_{m}\right) \left( e^{-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\mu}_{m}} - 1 \right) \left( e^{j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\mu}_{m}} - 1 \right) \\
&+ \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_n|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_{\mathbf{n}}^{-1}\}_{i,j} \exp\left(j\frac{2\pi}{\lambda}(\mathbf{r}_{j} - \mathbf{r}_{i})^{T}\boldsymbol{\theta}_{n}\right) \left( e^{-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\rho}_{n}} - 1 \right) \left( e^{j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\rho}_{n}} - 1 \right) \\
&- 2 \operatorname{Re} \left\{ \left( \sum_{t=1}^{T} \{\mathbf{s}(t)\}_m^{*} \{\mathbf{s}(t)\}_n \right) \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_{\mathbf{n}}^{-1}\}_{i,j} \exp\left(j\frac{2\pi}{\lambda}(\mathbf{r}_{j}^{T}\boldsymbol{\theta}_{n} - \mathbf{r}_{i}^{T}\boldsymbol{\theta}_{m})\right) \left( e^{-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\mu}_{m}} - 1 \right) \left( e^{j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\rho}_{n}} - 1 \right) \right\}. \tag{35}
\end{aligned}
$$

In particular, if one assumes $\mathbf{R}_{\mathbf{n}} = \sigma_n^2 \mathbf{I}$, then several simplifications can be made:

• If $(m-1)p+1 \le i,j \le mp$, then

$$
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) - \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m\right) \right\|^2 \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_m|^2, \quad (36)
$$

where we note that the function $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ does not depend on the parameter $\boldsymbol{\theta}$.

• Otherwise, if $(m-1)p+1 \le i \le mp$ and $(n-1)p+1 \le j \le np$, then

$$
\begin{aligned}
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) ={}& \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) - 1 \right\|^2 \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_m|^2 + \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n\right) - 1 \right\|^2 \sum_{t=1}^{T} |\{\mathbf{s}(t)\}_n|^2 \\
& - 2 \operatorname{Re} \left\{ \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \exp\left(j \frac{2\pi}{\lambda} \mathbf{r}_i^T (\boldsymbol{\theta}_n - \boldsymbol{\theta}_m)\right) \left(\exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) - 1\right) \left(\exp\left(j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n\right) - 1\right) \sum_{t=1}^{T} \{\mathbf{s}(t)\}_m^* \{\mathbf{s}(t)\}_n \right\}. \tag{37}
\end{aligned}
$$

It is clear that the formulas proposed above, for both the unconditional and the conditional models, can be applied to any kind of array geometry and whatever the number of sources.
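The white-noise simplification of $\zeta_{\boldsymbol{\theta}}$ can be verified numerically against the definition (18). The sketch below (our own array geometry, signal, and parameter values; a single source with a scalar DOA parameter on a linear array) checks that, when $\mathbf{R}_{\mathbf{n}} = \sigma_n^2 \mathbf{I}$, the $\boldsymbol{\theta}$-dependent phases indeed cancel:

```python
import numpy as np

# Numerical check of zeta_theta, Eqn. (18), for a single source, a linear array
# and white noise R_n = sigma_n^2 I -- all numerical values here are our own toy
# choices, not from the paper.
rng = np.random.default_rng(1)
lam, sigma_n2, M, T = 0.5, 0.7, 6, 10
r = np.arange(M) * lam / 2.0                       # half-wavelength linear array
s = rng.normal(size=T) + 1j * rng.normal(size=T)   # source signal s(t)
theta, mu, rho = 0.3, 0.05, -0.02                  # DOA parameter and test offsets

def a(th):
    # steering vector of Eqn. (28), scalar parameter
    return np.exp(1j * 2.0 * np.pi / lam * r * th)

# Definition (18): sum_t || R_n^{-1/2} (A(theta+mu) - A(theta+rho)) s(t) ||^2
d = a(theta + mu) - a(theta + rho)
zeta_def = sum(np.linalg.norm(d * st) ** 2 for st in s) / sigma_n2

# With diagonal noise, the theta-dependent phase cancels and only the test-point
# phases remain, as in the simplified expressions of this Section
diff = np.exp(-1j * 2.0 * np.pi / lam * r * mu) - np.exp(-1j * 2.0 * np.pi / lam * r * rho)
zeta_simpl = np.sum(np.abs(diff) ** 2) * np.sum(np.abs(s) ** 2) / sigma_n2

print(zeta_def, zeta_simpl)
```

The two evaluations agree to machine precision; with a non-diagonal noise covariance the cancellation no longer occurs and $\zeta_{\boldsymbol{\theta}}$ retains its dependence on $\boldsymbol{\theta}$.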
However, they generally depend on the parameter vector $\boldsymbol{\theta}$. This means that, in general, the calculation of the set of functions $\eta$ will have to be performed numerically (unless one is able to find a closed-form expression of Eqn. (11)). In the following, we present a kind of array geometry for which, fortunately, the set of functions $\dot{\eta}_{\boldsymbol{\theta}}$ will not depend on $\boldsymbol{\theta}$, leading to a straightforward calculation of the bound.

## 5.2. 3D source localization with a planar array

We first consider the problem of DOA estimation of a single narrow-band source in the far-field area by using an arbitrary planar array. We start with this general setting because the non-uniform linear array is clearly a particular case of this array. Without loss of generality, we assume that the sensors of this array lie in the $xOy$ plane, with Cartesian coordinates (see Fig. .1). Therefore, the vector $\mathbf{r}_i$ contains the coordinates of the $i$th sensor position with respect to this reference frame, i.e., $\mathbf{r}_i = [d_{x_i} \ d_{y_i}]^T$, $i = 1, \dots, M$. From (28), the steering vector is given by

$$
\mathbf{a}(\boldsymbol{\theta}) = \left[ \exp\left(j \frac{2\pi}{\lambda} (d_{x_1} u + d_{y_1} v)\right) \dots \exp\left(j \frac{2\pi}{\lambda} (d_{x_M} u + d_{y_M} v)\right) \right]^T, \quad (38)
$$

where, as in [18], the parameter vector of interest is $\boldsymbol{\theta} = [u \ v]^T$ with

$$
\begin{cases}
u = \sin \varphi \cos \phi, \\
v = \sin \varphi \sin \phi,
\end{cases}
\tag{39}
$$

and where $\varphi$ and $\phi$ represent the elevation and azimuth angles of the source, respectively. The parameter space is such that $u \in [-1, 1]$ and $v \in [-1, 1]$. Therefore, we assume that they both follow a uniform distribution over $[-1, 1]$. Note that, from a physical point of view, it would be more natural to choose a uniform
This would lead to non-uniform probability density functions for $u$ and $v$. To the best of our knowledge, this assumption has only been used in the context of lower bounds in [20]. Unfortunately, such a prior leads to an intractable expression of the bound (see Eqn. (21) of [20]). Consequently, other authors have generally left the prior unspecified, leading to semi closed-form expressions of the bounds (i.e., a numerical integration over the parameters remains to be performed) [20][37][22]. On the other hand, in order to obtain a closed-form expression, authors have generally used a simplifying assumption, namely a uniform prior directly on $u$ and $v$ (see, for example, [21][38]). In this paper, we follow the same approach, expecting only a slight change in performance with respect to the more physical model, in order to obtain closed-form expressions of the bound.

We choose the matrix of test points such that

$$ \mathbf{H} = [\mathbf{h}_u \quad \mathbf{h}_v] = \begin{bmatrix} h_u & 0 \\ 0 & h_v \end{bmatrix}. \qquad (40) $$

Then, we have $\boldsymbol{\theta} + \mathbf{h}_u = [u + h_u \ v]^T$ and $\boldsymbol{\theta} + \mathbf{h}_v = [u \ v + h_v]^T$. Moreover, we now have two elements $s_i \in [0, 1], i = 1, 2$, for which we will prefer the notations $s_u$ and $s_v$, respectively.

### 5.2.1. Unconditional observation model $\mathcal{M}_1$

Under $\mathcal{M}_1$, let us set $U_{SNR} = \frac{\sigma_s^4}{\sigma_n^2(M\sigma_s^2+\sigma_n^2)}$.
The closed-form expressions of the elements of matrix $\mathbf{G} = \begin{bmatrix} \{\mathbf{G}\}_{uu} & \{\mathbf{G}\}_{uv} \\ \{\mathbf{G}\}_{vu} & \{\mathbf{G}\}_{vv} \end{bmatrix}$ are given by (see Appendix B.5 for the proof):

$$ \{\mathbf{G}\}_{uu} = \frac{\begin{aligned}[t]
&\left(1 - \frac{|h_u|}{2}\right) \left( \left(1 + 2s_u(1 - 2s_u)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T} + \left(1 + 2(1-s_u)(2s_u-1)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T} \right) \\
&- 2(1 - |h_u|) \left(1 + s_u(1-s_u)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{4\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T}
\end{aligned}}{\left(1 - \frac{|h_u|}{2}\right)^2 \left(1 + s_u(1-s_u)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-2T}}, \quad (41) $$

$$ \{\mathbf{G}\}_{vv} = \frac{\begin{aligned}[t]
&\left(1 - \frac{|h_v|}{2}\right) \left( \left(1 + 2s_v(1 - 2s_v)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T} + \left(1 + 2(1-s_v)(2s_v-1)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T} \right) \\
&- 2(1 - |h_v|) \left(1 + s_v(1-s_v)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{4\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T}
\end{aligned}}{\left(1 - \frac{|h_v|}{2}\right)^2 \left(1 + s_v(1-s_v)U_{SNR} \left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\tfrac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-2T}}, \quad (42) $$
---PAGE_BREAK---

The general closed-form expression of the cross term $\{\mathbf{G}\}_{uv}$, Eqn. (43), is obtained in the same way (see Appendix B.5 for the proof); it is considerably lengthier than Eqns. (41) and (42) and simplifies substantially when $s_u = s_v = 1/2$. Of course, $\{\mathbf{G}\}_{vu} = \{\mathbf{G}\}_{uv}$. Consequently, the unconditional Weiss-Weinstein bound is a 2 × 2 matrix given by:

$$
\begin{align}
\mathbf{UWWB} &= \mathbf{HG}^{-1}\mathbf{H}^T \nonumber \\
&= \frac{1}{\{\mathbf{G}\}_{uu}\{\mathbf{G}\}_{vv} - \{\mathbf{G}\}_{uv}^2} \begin{bmatrix}
h_u^2 \{\mathbf{G}\}_{vv} & -h_u h_v \{\mathbf{G}\}_{uv} \\
-h_u h_v \{\mathbf{G}\}_{uv} & h_v^2 \{\mathbf{G}\}_{uu}
\end{bmatrix}, \tag{44}
\end{align}
$$

which has to be optimized over $s_u$, $s_v$, $h_u$, and $h_v$. Concerning the optimization over $s_u$ and $s_v$, several other works in the literature have suggested simply using $s_u = s_v = 1/2$. Most of the time, numerical simulations of this simplified bound compared with the bound obtained after optimization over $s_u$ and $s_v$ lead to the same results, although there is no formal proof of this fact (see [5], page 41, footnote 17). Note that, thanks to the expressions obtained in the next Section concerning the linear array, we will be able to prove that $s = 1/2$ is a (maybe not unique) correct choice for any linear array. In the case of the planar array treated in this Section, we will only check this property by simulation.
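As a minimal numerical sketch (not the authors' code), the assembly of Eqn. (44) from the entries of $\mathbf{G}$ can be written as follows; the function name and the values passed to it are purely illustrative:

```python
import numpy as np

def uwwb_matrix(G_uu, G_vv, G_uv, h_u, h_v):
    """Assemble the 2x2 bound of Eqn. (44): UWWB = H G^{-1} H^T,
    with H = diag(h_u, h_v) and G = [[G_uu, G_uv], [G_uv, G_vv]]."""
    H = np.diag([h_u, h_v])
    G = np.array([[G_uu, G_uv], [G_uv, G_vv]])
    return H @ np.linalg.inv(G) @ H.T

# Illustrative (non-physical) values for the entries of G:
B = uwwb_matrix(G_uu=5.0, G_vv=4.0, G_uv=1.0, h_u=0.1, h_v=0.1)
# B[0, 0] equals h_u^2 * G_vv / (G_uu * G_vv - G_uv^2), as in Eqn. (44)
```

In practice the entries of $\mathbf{G}$ come from Eqns. (41), (42) and the cross term, and the resulting matrix is then maximized over $s_u$, $s_v$, $h_u$ and $h_v$.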
In the particular case where $s_u = s_v = 1/2$ one obtains the following simplified expressions

$$
\begin{align}
\{\mathbf{G}\}_{uu} &= \frac{2\left(1-\frac{|h_u|}{2}\right) - 2(1-|h_u|)\left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{4\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T}}{\left(1-\frac{|h_u|}{2}\right)^2 \left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-2T}}, \tag{45} \\
\{\mathbf{G}\}_{vv} &= \frac{2\left(1-\frac{|h_v|}{2}\right) - 2(1-|h_v|)\left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{4\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T}}{\left(1-\frac{|h_v|}{2}\right)^2 \left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-2T}}, \tag{46}
\end{align}
$$

and

$$
\begin{equation}
\{\mathbf{G}\}_{uv} = \frac{2\left(1 + \frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v))\right\|^2\right)\right)^{-T} - 2\left(1 + \frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v))\right\|^2\right)\right)^{-T}}{\left(1 + \frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T}\left(1 + \frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T}}.
\tag{47}
\end{equation}
$$

Again, the Weiss-Weinstein bound is obtained by using the above expressions in Eqn. (44) and after an optimization over the test points.
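For concreteness, the simplified entries at $s_u = s_v = 1/2$ can be evaluated numerically for a given planar geometry. The following sketch is an illustration under stated assumptions (the function name, the argument names `dx`, `dy` for sensor coordinates in the same units as the wavelength `lam`, and all numeric values are ours, not the paper's):

```python
import numpy as np

def g_entries_unconditional(dx, dy, h_u, h_v, U_snr, T, lam):
    """Evaluate the simplified entries of G (s_u = s_v = 1/2) for a
    planar array with sensor coordinates dx, dy. A hedged sketch, not
    the authors' implementation."""
    M = len(dx)

    def S(phase):
        # ||sum_k exp(-j 2*pi/lam * phase_k)||^2
        return np.abs(np.exp(-1j * 2.0 * np.pi / lam * phase).sum()) ** 2

    def q(phase):
        # (1 + U_snr/4 * (M^2 - ||.||^2))^(-T); base >= 1 since ||.||^2 <= M^2
        return (1.0 + U_snr / 4.0 * (M**2 - S(phase))) ** (-T)

    G_uu = (2 * (1 - abs(h_u) / 2) - 2 * (1 - abs(h_u)) * q(2 * dx * h_u)) \
        / ((1 - abs(h_u) / 2) ** 2 * q(dx * h_u) ** 2)
    G_vv = (2 * (1 - abs(h_v) / 2) - 2 * (1 - abs(h_v)) * q(2 * dy * h_v)) \
        / ((1 - abs(h_v) / 2) ** 2 * q(dy * h_v) ** 2)
    G_uv = (2 * q(dx * h_u - dy * h_v) - 2 * q(dx * h_u + dy * h_v)) \
        / (q(dx * h_u) * q(dy * h_v))
    return G_uu, G_vv, G_uv

# Example: a 2x2 square array with half-wavelength spacing (lam = 1)
dx = np.array([0.0, 0.5, 0.0, 0.5])
dy = np.array([0.0, 0.0, 0.5, 0.5])
G_uu, G_vv, G_uv = g_entries_unconditional(dx, dy, h_u=0.2, h_v=0.2,
                                           U_snr=0.5, T=50, lam=1.0)
```

Note that for a linear array ($d_{y_k} = 0$ for all $k$) the two numerator terms of the cross entry coincide, so $\{\mathbf{G}\}_{uv}$ vanishes.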
The optimization over the test points can be done over a search grid, or by using the ambiguity diagram of the array in order to significantly reduce the computational cost (see [14], [22], [30], [39]).
---PAGE_BREAK---

### 5.2.2. Conditional observation model $\mathcal{M}_2$

Under $\mathcal{M}_2$, let us set $C_{SNR} = \frac{1}{\sigma_n^2} \sum_{t=1}^{T} \|\mathbf{s}(t)\|^2$. The closed-form expressions of the elements of matrix $\mathbf{G}$ are given by (see Appendix B.6 for the proof):

$$
\begin{align}
\{\mathbf{G}\}_{uu} &= \frac{\left( \begin{aligned}[c]
 &\left(1 - \frac{|h_u|}{2}\right) \exp\left(4s_u(2s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \\
 + &\left(1 - \frac{|h_u|}{2}\right) \exp\left(4(2s_u - 1)(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \\
 - &2(1 - |h_u|) \exp\left(2s_u(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right)\right)\right)
\end{aligned} \right)}{\left(1 - \frac{|h_u|}{2}\right)^2 \exp\left(4s_u(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}, \tag{48}
\end{align}
$$

$$
\begin{equation}
\begin{split}
\{\mathbf{G}\}_{vv} = {}& \frac{\left( \begin{aligned}[t]
 &\left(1 - \frac{|h_v|}{2}\right) \exp\left(4s_v(2s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right) \\
 + &\left(1 - \frac{|h_v|}{2}\right) \exp\left(4(2s_v - 1)(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right) \\
 - &2(1 - |h_v|) \exp\left(2s_v(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right)\right)\right)
\end{aligned} \right)}{\left(1 - \frac{|h_v|}{2}\right)^2 \exp\left(4s_v(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)},
\end{split}
\tag{49}
\end{equation}
$$

$$
\begin{equation}
\{\mathbf{G}\}_{uv} = \frac{\begin{aligned}[t]
& \exp\left(2s_u(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{x_k}h_u\right)\right) + 2s_v(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{y_k}h_v\right)\right) - 2s_u s_v C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right)\right)\right) \\
& + \exp\left(2(s_u-1)(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{x_k}h_u\right)\right) + 2(s_v-1)(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{y_k}h_v\right)\right) - 2(1-s_u)(1-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right)\right)\right) \\
& - \exp\left(2s_u(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{x_k}h_u\right)\right) + 2(1-s_v)(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{y_k}h_v\right)\right) + 2s_u(s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right)\right)\right) \\
& - \exp\left(2(s_u-1)(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{x_k}h_u\right)\right) + 2s_v(s_v-s_u)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{y_k}h_v\right)\right) + 2(s_u-1)s_v C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right)\right)\right)
\end{aligned}}{\exp\left(2s_u(s_u-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \exp\left(2s_v(s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\tfrac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)},
\tag{50}
\end{equation}
$$

and $\{\mathbf{G}\}_{uv} = \{\mathbf{G}\}_{vu}$. Consequently, the conditional Weiss-Weinstein bound is a 2 × 2 matrix obtained by using the above equations in Eqn. (44). As for the unconditional case, if we set $s_u = s_v = 1/2$, one obtains the following simplified expressions
---PAGE_BREAK---

$$
\begin{align}
\{\mathbf{G}\}_{uu} &= \frac{2\left(1 - \frac{|h_u|}{2}\right) - 2(1 - |h_u|)\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}{\left(1 - \frac{|h_u|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}, \tag{51} \\
\{\mathbf{G}\}_{vv} &= \frac{2\left(1 - \frac{|h_v|}{2}\right) - 2(1 - |h_v|)\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}{\left(1 - \frac{|h_v|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}, \tag{52} \\
\{\mathbf{G}\}_{uv} &= \frac{2\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right)\right)\right) - 2\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right)\right)\right)}{\exp\left(-\frac{C_{SNR}}{2}\left(2M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right) - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}. \tag{53}
\end{align}
$$

By using the above expressions in Eqn. (44) and after an optimization over the test points, one obtains the Weiss-Weinstein bound.

5.3. Source localization with a non-uniform linear array

We now briefly consider the DOA estimation of a single narrow band source in the far field by using a non-uniform linear array. Without loss of generality, let us assume that the linear array lies on the $Ox$ axis of the coordinate system (see Fig. .1); consequently, $d_{y_i} = 0, \forall i$. The sensor positions vector is denoted $[d_{x_1} ... d_{x_M}]$. By letting $\theta = \sin \varphi$, where $\varphi$ denotes the elevation angle of the source, the steering vector is then given by

$$
\mathbf{a}(\theta) = \left[ \exp \left( j \frac{2\pi}{\lambda} d_{x_1} \theta \right) \dots \exp \left( j \frac{2\pi}{\lambda} d_{x_M} \theta \right) \right]^T . \quad (54)
$$

We assume that the parameter $\theta$ follows a uniform distribution over $[-1, 1]$. As in Section 4.2, and since the parameter of interest is a scalar, the matrix $\mathbf{H}$ of test points becomes a scalar denoted $h_\theta$. In the same way, there is only one element $s_i \in [0, 1]$, which will simply be denoted $s$. The closed-form expressions given here are straightforwardly obtained from the aforementioned results on the planar array concerning the element denoted $\{\mathbf{G}\}_{uu}$. We will continue to use the previously introduced notations $U_{SNR} = \frac{\sigma_s^4}{\sigma_n^2 (M\sigma_s^2 + \sigma_n^2)}$ and $C_{SNR} = \frac{1}{\sigma_n^2} \sum_{t=1}^T \|\mathbf{s}(t)\|^2$.
---PAGE_BREAK---

### 5.3.1.
Unconditional observation model $\mathcal{M}_1$

The closed-form expression of the unconditional Weiss-Weinstein bound, denoted UWWB, is given by

$$ \text{UWWB} = \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \left(1 + s(1-s)U_{SNR} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-2T}}{\begin{aligned}[t] &\left(1 - \frac{|h_{\theta}|}{2}\right) \left( \left(1 + 2s(1-2s)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-T} + \left(1 + 2(1-s)(2s-1)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-T} \right) \\ &- 2(1-|h_{\theta}|) \left(1 + s(1-s)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp(-j \frac{4\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-T} \end{aligned}} \tag{55} $$

In order to find an optimal value of $s$ that maximizes $\mathbf{HG}^{-1}\mathbf{H}^T$, $\forall h_\theta$, we have considered the derivative of $\mathbf{HG}^{-1}\mathbf{H}^T$ w.r.t. $s$. The calculation (not reported here) is straightforward, and it is easy to see that $\left.\frac{\partial \mathbf{HG}^{-1}\mathbf{H}^T}{\partial s}\right|_{s=\frac{1}{2}} = 0$. Consequently, the Weiss-Weinstein bound has just to be optimized over $h_\theta$ and simplifies to

$$ UWWB = \sup_{h_{\theta}} \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \left(1 + \frac{U_{SNR}}{4} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-2T}}{2 \left(1 - \frac{|h_{\theta}|}{2}\right) - 2(1-|h_{\theta}|) \left(1 + \frac{U_{SNR}}{4} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp(-j \frac{4\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-T}} .
\tag{56} $$

In the classical case of a uniform linear array (i.e., $d_{x_k} = d$), this expression can be simplified further by noticing that $\sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) = M \exp(-j \frac{2\pi d}{\lambda} h_{\theta})$.

### 5.3.2. Conditional observation model $\mathcal{M}_2$

The closed-form expression of the conditional Weiss-Weinstein bound, denoted CWWB, is given by

$$ CWWB = \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \exp \left(4s(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right)}{\begin{aligned}[t] &\left(1 - \frac{|h_{\theta}|}{2}\right) \left( \exp \left(4s(2s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) + \exp \left(4(2s-1)(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) \right) \\ &- 2(1-|h_{\theta}|) \exp \left(2s(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{4\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) \end{aligned}} . \tag{57} $$

Again, it is easy to check that $\left.\frac{\partial \mathbf{HG}^{-1}\mathbf{H}^T}{\partial s}\right|_{s=\frac{1}{2}} = 0$. Consequently, one optimal value of $s$ that maximizes $\mathbf{HG}^{-1}\mathbf{H}^T$, $\forall h_\theta$, is $s = \frac{1}{2}$. The Weiss-Weinstein bound then simplifies to

$$ CWWB = \sup_{h_\theta} \frac{h_\theta^2 \left(1 - \frac{|h_\theta|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_\theta\right)\right)\right)}{2\left(1 - \frac{|h_\theta|}{2}\right) - 2(1-|h_\theta|)\exp\left(-\frac{1}{2}C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_\theta\right)\right)\right)}.
\tag{58} $$

In the classical case of a uniform linear array (i.e., $d_{x_k} = d$), this expression can be further simplified by noticing that $\sum_{k=1}^{M} \cos(\frac{2\pi}{\lambda}d_{x_k}h_{\theta}) = M \cos(\frac{2\pi d}{\lambda}h_{\theta})$.

**6. Simulation results and analysis**

As an illustration of the previously derived results, we first consider the scenario proposed in Fig. 5 of [18], i.e., DOA estimation under the unconditional model using a uniform circular array consisting of $M = 16$ sensors with a half-wavelength inter-sensor spacing. The number of snapshots is $T = 100$. Since the array is symmetric, the estimation performance for the parameters $u$ and $v$ is the same; this is why only the performance with respect to the parameter $u$ is given in Fig. 2. The Weiss-Weinstein bound is computed using Eqn. (45), (46) and (47). The Ziv-Zakai bound is computed using Eqn. (24) in [18]. The empirical global mean square error (MSE) of the maximum *a posteriori* (MAP) estimator is obtained over 2000 Monte Carlo trials. As in Fig. (1b) of [18], one observes that both the Weiss-Weinstein bound and the Ziv-Zakai bound are tight w.r.t. the MSE of the MAP estimator and capture the SNR threshold. Note that, in Fig. (1b) of [18], the Weiss-Weinstein bound was computed only numerically.

To the best of our knowledge, there are no closed-form expressions of the Ziv-Zakai bound for the conditional model available in the literature. In this case, we consider 3D source localization using a V-shaped array. Indeed, it has been shown that this kind of array is able to outperform other classical planar arrays, particularly the uniform circular array [40]. This array is made from two branches of uniform linear arrays, with 6 sensors located on each branch and one sensor located at the origin. We denote by $\Delta$ the angle between these two branches. The sensors are equally spaced by a half-wavelength.
The number of snapshots is $T = 20$. Fig. 3 shows the behavior of the Weiss-Weinstein bound with respect to the opening angle $\Delta$. One can observe that, when $\Delta$ varies, the estimation performance for the parameter $u$ varies only slightly. On the contrary, the estimation performance for the parameter $v$ is strongly dependent on $\Delta$. When $\Delta$ increases from 0° to 90°, the Weiss-Weinstein bound of $v$ decreases, as does the SNR threshold. Fig. 3 also shows that $\Delta = 90^\circ$ is the optimal value, which differs from the optimal value $\Delta = 53.13^\circ$ obtained in [40] since the assumptions concerning the source signal are not the same.

**7. Conclusion**

In this paper, the Weiss-Weinstein bound on the mean square error has been studied in the array processing context. In order to analyze the unconditional and conditional signal source models, the structure of the bound has been detailed for both Gaussian observation models with parameterized mean or parameterized covariance matrix.

Appendix .1.
Closed-form expression of $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ under the Gaussian observation model with parameterized covariance

Since $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}))$, one has

$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta}} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \mathbf{y}^H(t) \mathbf{\Gamma}^{-1} \mathbf{y}(t) \right) d\mathbf{Y}, \quad (.1) $$

where $\mathbf{\Gamma}^{-1} = \alpha \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta} + \mathbf{v}) - (\alpha + \beta - 1)\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta})$. Then, since

$$ \int_{\Omega} \exp \left\{ -\sum_{t=1}^{T} \mathbf{y}^H(t) \mathbf{\Gamma}^{-1} \mathbf{y}(t) \right\} d\mathbf{Y} = \pi^{MT} |\mathbf{\Gamma}|^T, \quad (.2) $$

one has

$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)} |\mathbf{\Gamma}|^T}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta}} = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta} |\mathbf{\Gamma}^{-1}|^T} . \quad (.3) $$

Appendix .2.
Closed-form expression of $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ under the Gaussian observation model with parameterized mean

Since $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{f}_t(\boldsymbol{\theta}), \mathbf{R}_{\mathbf{y}})$, one has

$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}|^T} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \xi(t) \right) d\mathbf{Y}, \quad (.4) $$

with⁴

$$
\begin{align*}
\xi(t) &= \alpha (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u})) + \beta (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v})) \\
&\quad + (1 - \alpha - \beta) (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta})) \\
&= \mathbf{y}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{y} + \alpha \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \\
&\quad - 2 \operatorname{Re}\{\mathbf{y}^H \mathbf{R}_{\mathbf{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta}))\}.
\end{align*}
\quad (.5)
$$

Let us set $\mathbf{x} = \mathbf{y} - (\alpha\mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u}) + \beta\mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v}) + (1-\alpha-\beta)\mathbf{f}_t(\boldsymbol{\theta}))$.
Consequently,

$$
\begin{align*}
\mathbf{x}^H \mathbf{R}_{\mathrm{y}}^{-1} \mathbf{x} &= \mathbf{y}^H \mathbf{R}_{\mathrm{y}}^{-1} \mathbf{y} - 2 \operatorname{Re}\{\mathbf{y}^H \mathbf{R}_{\mathrm{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta}))\} \\
&\quad + (\alpha \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t^H (\boldsymbol{\theta})) \mathbf{R}_{\mathrm{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta})) .
\end{align*}
$$

And $\xi(t)$ can be rewritten as

$$
\xi(t) = \mathbf{x}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{x} + \dot{\xi}(t),
$$

⁴For simplicity, the dependence on $t$ of $\mathbf{y}$ and $\mathbf{x}$ is not emphasized.

where

$$
\begin{align}
\dot{\xi}(t) ={}& \alpha (1-\alpha) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta (1-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \nonumber \\
& + (1-\alpha-\beta) (\alpha+\beta) \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) - 2 \operatorname{Re} \left\{ \alpha \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \right. \nonumber \\
& \qquad \left. + \alpha (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) + \beta (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \right\}.
\tag{.8}
\end{align}
$$

Note that $\dot{\xi}(t)$ is independent of $\mathbf{x}$. By defining $\mathbf{X} = [\mathbf{x}(1), \mathbf{x}(2), \dots, \mathbf{x}(T)]$, the function $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ becomes

$$
\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}|^T} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \left( \mathbf{x}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{x} + \dot{\xi}(t) \right) \right) d\mathbf{X} = \exp \left( -\sum_{t=1}^{T} \dot{\xi}(t) \right), \quad (.9)
$$

since $\frac{1}{\pi^{MT} |\mathbf{R_y}|^T} \int_{\Omega} \exp \left(-\sum_{t=1}^{T} \mathbf{x}^H \mathbf{R_y}^{-1} \mathbf{x}\right) d\mathbf{X} = 1$.

Appendix .3. Closed-form expressions of $|m_1\mathbf{R}_y^{-1}(\theta_1) + m_2\mathbf{R}_y^{-1}(\theta_2)|$ and $|m_1\mathbf{R}_y^{-1}(\theta_1) + m_2\mathbf{R}_y^{-1}(\theta_2) + m_3\mathbf{R}_y^{-1}(\theta_3)|$

Note that this calculation is an extension of the result obtained in Appendix A of [22], in which $m_1 = m_2 = \frac{1}{2}$ and $m_3 = 0$, and it follows the same method. The inverse of $\mathbf{R_y}$ can be deduced from the Woodbury formula:

$$
\mathbf{R}_{\mathrm{y}}^{-1}(\boldsymbol{\theta}) = \frac{1}{\sigma_n^2} \left( \mathbf{I}_M - \frac{\sigma_s^2 \mathbf{a}(\boldsymbol{\theta}) \mathbf{a}^H(\boldsymbol{\theta})}{\sigma_s^2 \| \mathbf{a}(\boldsymbol{\theta}) \|^2 + \sigma_n^2} \right).
$$

Then,

$$
\sum_{k=1}^{3} m_k \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_k) = \frac{1}{\sigma_n^2} \sum_{k=1}^{3} m_k \left( \mathbf{I}_M - \frac{\sigma_s^2 \mathbf{a}(\boldsymbol{\theta}_k) \mathbf{a}^H(\boldsymbol{\theta}_k)}{\sigma_s^2 \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 + \sigma_n^2} \right).
\quad (.10)
$$

Since the rank of $\mathbf{a}(\boldsymbol{\theta}_k)\mathbf{a}^H(\boldsymbol{\theta}_k)$ is equal to 1 and since $\boldsymbol{\theta}_1 \neq \boldsymbol{\theta}_2 \neq \boldsymbol{\theta}_3$ (except for $h_k = h_l = 0$), the above matrix has $M - 3$ eigenvalues equal to $\frac{1}{\sigma_n^2} \sum_{k=1}^{3} m_k$ and 3 eigenvalues corresponding to the eigenvectors made from the linear combination of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$: $\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$. The determinant is then the product of these $M$ eigenvalues⁵. Let us set

$$
\varphi_k = \frac{\sigma_s^2}{\sigma_s^2 \|\mathbf{a}(\boldsymbol{\theta}_k)\|^2 + \sigma_n^2}, \quad k = 1, 2, 3. \tag{.11}
$$

Then, the three aforementioned eigenvalues, denoted $\lambda$, must satisfy

$$
\left( \sum_{k=1}^{3} m_k \mathbf{R}_{\mathbf{y}}^{-1} (\boldsymbol{\theta}_k) \right) (\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)) = \lambda (\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)). \quad (.12)
$$

By using Eqn. (.10) in the above equation and after a factorization with respect to $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$, one obtains

⁵Note that we are only interested in the eigenvalues.
Consequently, the linear combination of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$ can be written $\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$ instead of $r\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$.

$$
\begin{align}
& \left( x - m_1 \varphi_1 \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 - p m_1 \varphi_1 \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) - q m_1 \varphi_1 \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_3) \right) \mathbf{a}(\boldsymbol{\theta}_1) \nonumber \\
& + \left( -m_2 \varphi_2 \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_1) + p (x - m_2 \varphi_2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2) - q m_2 \varphi_2 \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \right) \mathbf{a}(\boldsymbol{\theta}_2) \nonumber \\
& + \left( -m_3 \varphi_3 \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) - m_3 \varphi_3 p \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_2) + q (x - m_3 \varphi_3 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2) \right) \mathbf{a}(\boldsymbol{\theta}_3) = 0, \tag{.13}
\end{align}
$$

where⁶

$$
x = 1 - \sigma_n^2 \lambda. \quad (.14)
$$

Consequently, the coefficients of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$ are equal to zero, leading to a system of three equations with two unknowns ($p$ and $q$).
Solving the first two equations to find⁷ $p$ and $q$, and applying the solution to the last equation, one obtains the following polynomial equation in $x$:

$$
\begin{split}
& x^3 - x^2 \sum_{k=1}^{3} m_k \varphi_k \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 - \frac{x}{2} \sum_{k=1}^{3} \sum_{\substack{k'=1 \\ k' \neq k}}^{3} m_k \varphi_k m_{k'} \varphi_{k'} \left( \| \mathbf{a}^H(\boldsymbol{\theta}_k) \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 \right) \\
& - m_1 m_2 m_3 \varphi_1 \varphi_2 \varphi_3 \Big( \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2 - \| \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \\
& - \| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2 - \| \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 + \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_2) \, \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_3) \, \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_1) \\
& + \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) \, \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \, \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \Big) = 0.
\end{split}
$$

Since we are only interested in the product of the three eigenvalues, we do not have to solve this polynomial for $\lambda$; only the opposite of its last term is required. This leads to Eqn. (31) with $\sum_{k=1}^{3} m_k = 1$.
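The eigenvalue structure invoked above is easy to check numerically. In the sketch below, the steering vectors are random unit-modulus vectors (an illustrative stand-in for a concrete array geometry), the weights satisfy $\sum_k m_k = 1$, and the Woodbury form of $\mathbf{R}_{\mathbf{y}}^{-1}$ is compared with the direct inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma_s2, sigma_n2 = 8, 2.0, 0.5
m = np.array([0.2, 0.3, 0.5])        # weights with m_1 + m_2 + m_3 = 1 (cf. footnote 6)

# three steering vectors with unit-modulus entries (illustrative, not a specific array)
A = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(M, 3)))

S = np.zeros((M, M), dtype=complex)
for k in range(3):
    a = A[:, k]
    R = sigma_s2 * np.outer(a, a.conj()) + sigma_n2 * np.eye(M)   # R_y(theta_k)
    # Woodbury form of R_y^{-1}(theta_k), checked against the direct inverse
    R_inv = (np.eye(M) - sigma_s2 * np.outer(a, a.conj())
             / (sigma_s2 * np.linalg.norm(a) ** 2 + sigma_n2)) / sigma_n2
    assert np.allclose(R_inv, np.linalg.inv(R))
    S += m[k] * R_inv

eigvals = np.linalg.eigvalsh(S)
n_flat = int(np.sum(np.isclose(eigvals, 1.0 / sigma_n2)))   # expected: M - 3
```

With $\sum_k m_k = 1$, exactly $M - 3$ eigenvalues of $\sum_k m_k \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_k)$ equal $1/\sigma_n^2$; the remaining three live in the span of the steering vectors.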
Of course, the closed-form expression of $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)|$ is obtained by letting $m_3 = 0$ and $\sum_{k=1}^{2} m_k = 1$ in Eqn. (32).

Appendix .4. Closed-form expressions of $\zeta_\theta (\mu, \rho)$

Recall that the function $\zeta_{\theta} (\mu, \rho)$ is defined by Eqn. (18). Let us define $p$ as the number of parameters per source (assumed to be the same for each source). Then, without loss of generality, the full parameter vector $\theta$ can be decomposed as $\theta = [\theta_1^T ... \theta_N^T]^T$ where $\theta_i = [\theta_{i,1} ... \theta_{i,p}]^T$, $i = 1, ..., N$, with $q = Np$. Recall that $\mu = [0... \mu_i ... 0]^T$ and $\rho = [0... \rho_j ... 0]^T$. There are two distinct cases to study: either both indices $i$ and $j$ are such that $(m-1)p+1 \le i \le mp$ and $(m-1)p+1 \le j \le mp$ for some $m = 1,\dots,N$, or $(m-1)p+1 \le i \le mp$, $m=1,\dots,N$ and $(n-1)p+1 \le j \le np$, $n=1,\dots,N$, with $m \ne n$. Therefore, let us denote

$$
\left\{
\begin{array}{l}
\boldsymbol{\mu}_m = [0 \cdots 0 \quad h_i \quad 0 \cdots 0]^T \in \mathbb{R}^p \\
\boldsymbol{\rho}_m = [0 \cdots 0 \quad h_j \quad 0 \cdots 0]^T \in \mathbb{R}^p
\end{array}
\right.
\quad \text{if } (m-1)p+1 \le i,j \le mp
\qquad (.17)
$$

and

$$
\left\{
\begin{array}{ll}
\boldsymbol{\mu}_m = [0 \cdots 0 & h_i \quad 0 \cdots 0]^T \in \mathbb{R}^p, \\
\boldsymbol{\rho}_n = [0 \cdots 0 & h_j \quad 0 \cdots 0]^T \in \mathbb{R}^p,
\end{array}
\right.
\quad
\text{if }
\left\{
\begin{array}{l}
(m-1)p+1 \le i \le mp, \\
(n-1)p+1 \le j \le np,
\end{array}
\right.
\quad
\text{with } m \ne n.
\tag{.18}
$$

$^{6}$Note that, from Eqn. (16), $\sum_{k=1}^{3} m_k = 1$.

$^7p$ and $q$ are given by

$$
p = \frac{m_2\varphi_2\mathbf{a}^H(\boldsymbol{\theta}_2)(m_1\varphi_1\mathbf{a}(\boldsymbol{\theta}_1)\mathbf{a}^H(\boldsymbol{\theta}_1) + (x-m_1\varphi_1\|\mathbf{a}(\boldsymbol{\theta}_1)\|^2)\mathbf{I})\mathbf{a}(\boldsymbol{\theta}_3)}{m_1\varphi_1\mathbf{a}^H(\boldsymbol{\theta}_1)(m_2\varphi_2\mathbf{a}(\boldsymbol{\theta}_2)\mathbf{a}^H(\boldsymbol{\theta}_2) + (x-m_2\varphi_2\|\mathbf{a}(\boldsymbol{\theta}_2)\|^2)\mathbf{I})\mathbf{a}(\boldsymbol{\theta}_3)}, \quad (.15)
$$

and

$$
q = \frac{(x - m_1 \varphi_1 \|\mathbf{a}(\boldsymbol{\theta}_1)\|^2)(x - m_2 \varphi_2 \|\mathbf{a}(\boldsymbol{\theta}_2)\|^2) - m_1 \varphi_1 m_2 \varphi_2 \|\mathbf{a}^H(\boldsymbol{\theta}_1)\mathbf{a}(\boldsymbol{\theta}_2)\|^2}{m_1 \varphi_1 \mathbf{a}^H(\boldsymbol{\theta}_1)(m_2 \varphi_2 \mathbf{a}(\boldsymbol{\theta}_2)\mathbf{a}^H(\boldsymbol{\theta}_2) + (x - m_2 \varphi_2 \|\mathbf{a}(\boldsymbol{\theta}_2)\|^2)\mathbf{I})\mathbf{a}(\boldsymbol{\theta}_3)} . \quad (.16)
$$

Appendix .4.1. The case where $(m-1)p+1 \le i, j \le mp$

In this case, one has

$$
\mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\mu}) - \mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\rho}) = [\mathbf{0} \cdots \mathbf{0} \quad \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m) \quad \mathbf{0} \cdots \mathbf{0}] \in \mathbb{C}^{M \times N}, \quad (.19)
$$

and consequently,

$$
\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \| \mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\mu}_{m}) - \mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\rho}_{m})) \|^{2} \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{m} \|^{2}. \quad (.20)
$$

Due to Eqn.
(28), one has

$$
\begin{aligned}
\|\mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m))\|^2 = \sum_{i=1}^{M} \sum_{j=1}^{M} & \left\{ \mathbf{R}_{\mathrm{n}}^{-1} \right\}_{i,j} \exp \left( j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_m \right) \left( \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) - \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m) \right) \\
& \times \left( \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\mu}_m) - \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_m) \right).
\end{aligned} \tag{.21}
$$

In particular, in the case where $\mathbf{R}_n = \sigma_n^2 \mathbf{I}$, one obtains

$$
\| \mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m)) \|^{2} = \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) - \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m) \right\|^{2}. \quad (.22)
$$

Appendix .4.2. The case where $(m-1)p+1 \le i \le mp$ and $(n-1)p+1 \le j \le np$

Without loss of generality, we assume that $n > m$.
Then,

$$
\begin{aligned}
\mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\mu}) - \mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\rho}) &= [\mathbf{a}(\boldsymbol{\theta}_1) - \mathbf{a}(\boldsymbol{\theta}_1) \cdots \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m) \cdots \mathbf{a}(\boldsymbol{\theta}_n) - \mathbf{a}(\boldsymbol{\theta}_n + \boldsymbol{\rho}_n) \cdots \mathbf{a}(\boldsymbol{\theta}_N) - \mathbf{a}(\boldsymbol{\theta}_N)] \\
&= [\mathbf{0} \cdots \mathbf{0} \quad \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m) \quad \mathbf{0} \cdots \mathbf{0} \quad \mathbf{a}(\boldsymbol{\theta}_n) - \mathbf{a}(\boldsymbol{\theta}_n + \boldsymbol{\rho}_n) \quad \mathbf{0} \cdots \mathbf{0}],
\end{aligned} \quad (.23)
$$

and consequently,

$$
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \sum_{t=1}^{T} \left\| \mathbf{R}_{\mathrm{n}}^{-1/2} \left( (\mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\mu}_{m}) - \mathbf{a}(\boldsymbol{\theta}_{m})) \{\mathbf{s}(t)\}_{m} + (\mathbf{a}(\boldsymbol{\theta}_{n}) - \mathbf{a}(\boldsymbol{\theta}_{n}+\boldsymbol{\rho}_{n})) \{\mathbf{s}(t)\}_{n} \right) \right\|^{2}. \quad (.24)
$$

Let us set $\varkappa = \mathbf{R}_n^{-1/2}(\mathbf{a}(\boldsymbol{\theta}_m+\boldsymbol{\mu}_m)-\mathbf{a}(\boldsymbol{\theta}_m))$ and $\boldsymbol{\varrho} = \mathbf{R}_n^{-1/2}(\mathbf{a}(\boldsymbol{\theta}_n)-\mathbf{a}(\boldsymbol{\theta}_n+\boldsymbol{\rho}_n))$.
Then, $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ can be rewritten as

$$
\begin{align*}
\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) &= \sum_{t=1}^{T} \| \varkappa \{\mathbf{s}(t)\}_{m} + \boldsymbol{\varrho} \{\mathbf{s}(t)\}_{n} \|^2 \\
&= \sum_{t=1}^{T} \left( \varkappa^H \varkappa \| \{\mathbf{s}(t)\}_{m} \|^2 + \varkappa^H \boldsymbol{\varrho} \{\mathbf{s}(t)\}_{m}^* \{\mathbf{s}(t)\}_{n} + \boldsymbol{\varrho}^H \varkappa \{\mathbf{s}(t)\}_{m} \{\mathbf{s}(t)\}_{n}^* + \boldsymbol{\varrho}^H \boldsymbol{\varrho} \| \{\mathbf{s}(t)\}_{n} \|^2 \right) \\
&= \varkappa^H \varkappa \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{m} \|^2 + \boldsymbol{\varrho}^H \boldsymbol{\varrho} \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{n} \|^2 + 2 \operatorname{Re} \left( \varkappa^H \boldsymbol{\varrho} \sum_{t=1}^{T} \{\mathbf{s}(t)\}_{m}^* \{\mathbf{s}(t)\}_{n} \right). \tag{.25}
\end{align*}
$$

By using the structure of the steering matrix $\mathbf{A}$, this leads to

$$
\left\{
\begin{aligned}
\varkappa^H \varkappa &= \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_m) \left( \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) - 1 \right) \left( \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\mu}_m) - 1 \right), \\
\boldsymbol{\varrho}^H \boldsymbol{\varrho} &= \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_n) \left( \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n) - 1 \right) \left( \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_n) - 1 \right), \\
\varkappa^H \boldsymbol{\varrho} &= -\sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T \boldsymbol{\theta}_n - \mathbf{r}_i^T \boldsymbol{\theta}_m)) \left( \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) - 1 \right) \left( \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_n) - 1 \right).
\end{aligned}
\right.
\quad (.26)
$$

Appendix .5. Proof of Eqn. (41), (42) and (43)

In fact, one only has to prove Eqn. (43), since Eqn. (41) and (42) can be obtained by letting $h_u = h_v$ and $s_u = s_v$ in Eqn. (43) and by using $(h_u, s_u)$ for Eqn. (41) and $(h_v, s_v)$ for Eqn. (42). By plugging Eqn. (30) and (32) into Eqn. (16), and by considering the following expressions

$$
\begin{align*}
\mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta} + \mathbf{h}_v) &= \sum_{i=1}^{M} \exp(j\frac{2\pi}{\lambda}(d_{y_i}h_v - d_{x_i}h_u)) = (\mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_v)\mathbf{a}(\boldsymbol{\theta} + \mathbf{h}_u))^H, \\
\mathbf{a}^H(\boldsymbol{\theta} \pm \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta}) &= \sum_{i=1}^{M} \exp(\mp j\frac{2\pi}{\lambda}d_{x_i}h_u), \text{ and } \mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta} - \mathbf{h}_u) = \sum_{i=1}^{M} \exp(-j\frac{4\pi}{\lambda}d_{x_i}h_u),
\end{align*}
$$

one obtains the closed-form expressions for the set of functions $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$

$$
\begin{aligned}
\eta_{\theta}(s_u, s_v, \mathbf{h}_u, \mathbf{h}_v) = \Bigg( 1 & - U_{SNR} \left( \begin{array}{@{}l@{}} s_u s_v \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right\|^2 - M^2 \right) \\ + \, s_u(1-s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 - M^2 \right) \\ + \, s_v(1-s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M^2 \right) \end{array} \right) \\
& - s_u s_v (1-s_u-s_v) \frac{U_{SNR}^2 \sigma_n^2}{\sigma_s^2} \times \\
& \times \left( \begin{array}{@{}l@{}} \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi (d_{x_k}h_u - d_{y_k}h_v)}{\lambda}\right) \\ + \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi (d_{x_k}h_u - d_{y_k}h_v)}{\lambda}\right) \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right\|^2 + M^3 \end{array} \right) \Bigg)^{-T}.
\end{aligned} \tag{.27}
$$

$$
\begin{aligned}
\eta_{\theta}(1 - s_u, 1 - s_v, -\mathbf{h}_u, -\mathbf{h}_v) = \Bigg( 1 & - U_{SNR} \left( \begin{array}{@{}l@{}} (1-s_u)(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right\|^2 - M^2 \right) \\ + \, (1-s_u)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 - M^2 \right) \\ + \, (1-s_v)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M^2 \right) \end{array} \right) \\
& - (1-s_u)(1-s_v)(s_u+s_v-1) \frac{U_{SNR}^2 \sigma_n^2}{\sigma_s^2} \times \\
& \times \left( \begin{array}{@{}l@{}} \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}\right) \\ + \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}\right) \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right\|^2 + M^3 \end{array} \right) \Bigg)^{-T}.
\end{aligned} \tag{.28}
$$

$$
\begin{aligned}
\eta_{\theta}(s_u, 1 - s_v, \mathbf{h}_u, -\mathbf{h}_v) = \Bigg( 1 & - U_{SNR} \left( \begin{array}{@{}l@{}} s_u(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right\|^2 - M^2 \right) \\ + \, s_u(s_v-s_u) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 - M^2 \right) \\ + \, (1-s_v)(s_v-s_u) \left( \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M^2 \right) \end{array} \right) \\
& - s_u(1-s_v)(s_v-s_u) \frac{U_{SNR}^2 \sigma_n^2}{\sigma_s^2} \times \\
& \times \left( \begin{array}{@{}l@{}} \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{y_k}h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{x_k}h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi(d_{x_k}h_u + d_{y_k}h_v)}{\lambda}\right) \\ + \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{y_k}h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{x_k}h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi(d_{x_k}h_u + d_{y_k}h_v)}{\lambda}\right) \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right\|^2 + M^3 \end{array} \right) \Bigg)^{-T}.
\end{aligned} \tag{.29}
$$

$$
\eta_\theta(s_u, 0, \mathbf{h}_u, \mathbf{0}) = \left( 1 + s_u(1-s_u)U_{SNR} \left( M^2 - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 \right) \right)^{-T}, \tag{.30}
$$

$$
\eta_\theta(0, s_v, \mathbf{0}, \mathbf{h}_v) = \left( 1 + s_v(1 - s_v)U_{SNR} \left( M^2 - \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v) \right\|^2 \right) \right)^{-T}. \tag{.31}
$$

$$
\begin{aligned}
\eta_{\theta}(1 - s_u, s_v, -\mathbf{h}_u, \mathbf{h}_v) = \Bigg( 1 & - U_{SNR} \left( \begin{array}{@{}l@{}} s_v(1-s_u) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right\|^2 - M^2 \right) \\ + \, (1-s_u)(s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 - M^2 \right) \\ + \, s_v(s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M^2 \right) \end{array} \right) \\
& - (1-s_u)s_v(s_u-s_v) \frac{U_{SNR}^2 \sigma_n^2}{\sigma_s^2} \times \\
& \times \left( \begin{array}{@{}l@{}} \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi(d_{x_k}h_u + d_{y_k}h_v)}{\lambda}\right) \\ + \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi(d_{x_k}h_u + d_{y_k}h_v)}{\lambda}\right) \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 \\ - \, M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right\|^2 + M^3 \end{array} \right) \Bigg)^{-T}.
\end{aligned} \tag{.32}
$$

One notices that the set of functions $\eta_\theta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend on $\boldsymbol{\theta}$. Consequently, it is also easy to obtain the Weiss-Weinstein bound (through the set of functions $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$) by using the results of Section 4.2 whatever the considered prior on $\boldsymbol{\theta}$ (only the integral $\int_\Theta \frac{p^{\alpha+\beta}(\boldsymbol{\theta}+\mathbf{u})}{p^{\alpha+\beta-1}(\boldsymbol{\theta})} d\boldsymbol{\theta}$ has to be calculated or computed numerically).
In our case of a uniform prior, the results are straightforward and lead to Eqn. (41), (42) and (43).

Appendix .6. *Proof of Eqn. (48), (49) and (50)*

The set of functions $\zeta_\theta(\boldsymbol{\mu}, \boldsymbol{\rho})$ is obtained from Eqn. (18). Since $\mathbf{R}_n = \sigma_n^2 \mathbf{I}$, one obtains

$$
\begin{align*}
\zeta_\theta(\mathbf{h}_u, \mathbf{0}) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{0}) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right), \\
\zeta_\theta(\mathbf{h}_v, \mathbf{0}) &= \zeta_\theta(-\mathbf{h}_v, \mathbf{0}) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right), \\
\zeta_\theta(\mathbf{h}_u, -\mathbf{h}_u) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{h}_u) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right) \right), \\
\zeta_\theta(\mathbf{h}_v, -\mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_v, \mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right) \right), \\
\zeta_\theta(\mathbf{h}_u, \mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_u, -\mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right), \\
\zeta_\theta(\mathbf{h}_u, -\mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right), \\
\zeta_\theta(\mathbf{0}, \mathbf{0}) &= 0.
\end{align*}
$$

Again, since the set of functions $\zeta_\theta(\boldsymbol{\mu}, \boldsymbol{\rho})$ does not depend on $\theta$, the set of functions $\eta_\theta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is given by plugging the above equations into Eqn. (17) and does not depend on $\theta$.
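With $\mathbf{R}_n = \sigma_n^2\mathbf{I}$, the cosine expressions above all follow from the single identity $\|\mathbf{a}(\boldsymbol{\theta}+\boldsymbol{\mu}) - \mathbf{a}(\boldsymbol{\theta}+\boldsymbol{\rho})\|^2 = 2\left(M - \sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}\mathbf{r}_k^T(\boldsymbol{\mu}-\boldsymbol{\rho})\right)\right)$, which depends only on $\boldsymbol{\mu}-\boldsymbol{\rho}$ and not on $\boldsymbol{\theta}$. A quick numerical check (the sensor positions and test points below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0
M = 7
r = rng.uniform(-2.0, 2.0, size=(M, 2))   # sensor positions (arbitrary planar array)

def steer(theta):
    """Steering vector a(theta) with elements exp(-j*2*pi/lambda * r_k^T theta)."""
    return np.exp(-1j * 2.0 * np.pi / lam * r @ theta)

theta = rng.uniform(-0.5, 0.5, size=2)
mu = np.array([0.3, 0.0])                 # test point h_u on the first parameter
rho = np.array([0.0, -0.2])               # test point -h_v on the second parameter

direct = np.linalg.norm(steer(theta + mu) - steer(theta + rho)) ** 2
closed = 2.0 * (M - np.sum(np.cos(2.0 * np.pi / lam * r @ (mu - rho))))
```

Since `direct` matches `closed` for any `theta`, the $\zeta_\theta$ values are indeed independent of $\theta$, which is what allows the prior to be handled separately.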
Consequently, as in the unconditional case, the set of functions $\eta(\alpha, \beta, u, v)$ is obtained by using the results of Section 4.2 whatever prior is considered on $\theta$. In our case of a uniform prior, the results are straightforward and lead to Eqn. (48), (49) and (50).
+---PAGE_BREAK---
+
+References
+
+[1] R. J. McAulay and L. P. Seidman, "A useful form of the Barankin lower bound and its application to PPM threshold analysis," *IEEE Transactions on Information Theory*, vol. 15, no. 2, pp. 273-279, Mar. 1969.
+
+[2] R. J. McAulay and E. M. Hofstetter, "Barankin bounds on parameter estimation," *IEEE Transactions on Information Theory*, vol. 17, no. 6, pp. 669-676, Nov. 1971.
+
+[3] E. Chaumette, J. Galy, A. Quinlan, and P. Larzabal, "A new Barankin bound approximation for the prediction of the threshold region performance of maximum likelihood estimators," *IEEE Transactions on Signal Processing*, vol. 56, no. 11, pp. 5319-5333, Nov. 2008.
+
+[4] K. Todros and J. Tabrikian, "General classes of performance lower bounds for parameter estimation - part I: non-Bayesian bounds for unbiased estimators," *IEEE Transactions on Information Theory*, vol. 56, no. 10, pp. 5045-5063, Oct. 2010.
+
+[5] H. L. Van Trees and K. L. Bell, Eds., *Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking*. New-York, NY, USA: Wiley/IEEE Press, Sep. 2007.
+
+[6] J. Ziv and M. Zakai, "Some lower bounds on signal parameter estimation," *IEEE Transactions on Information Theory*, vol. 15, no. 3, pp. 386-391, May 1969.
+
+[7] S. Bellini and G. Tartara, "Bounds on error in signal parameter estimation," *IEEE Transactions on Communications*, vol. 22, no. 3, pp. 340-342, Mar. 1974.
+
+[8] K. L. Bell, Y. Steinberg, Y. Ephraim, and H. L. Van Trees, "Extended Ziv-Zakaï lower bound for vector parameter estimation," *IEEE Transactions on Information Theory*, vol. 43, no. 2, pp. 624-637, Mar. 1997.
+
+[9] A. J. Weiss and E.
Weinstein, "A lower bound on the mean square error in random parameter estimation," *IEEE Transactions on Information Theory*, vol. 31, no. 5, pp. 680-682, Sep. 1985. + +[10] I. Rapoport and Y. Oshman, "Weiss-Weinstein lower bounds for markovian systems. part I: Theory," *IEEE Transactions on Signal Processing*, vol. 55, no. 5, pp. 2016-2030, May 2007. + +[11] A. Renaux, P. Forster, P. Larzabal, C. D. Richmond, and A. Nehorai, "A fresh look at the Bayesian bounds of the Weiss-Weinstein family," *IEEE Transactions on Signal Processing*, vol. 56, no. 11, pp. 5334-5352, Nov. 2008. + +[12] K. Todros and J. Tabrikian, "General classes of performance lower bounds for parameter estimation - part II: Bayesian bounds," *IEEE Transactions on Information Theory*, vol. 56, no. 10, pp. 5064-5082, Oct. 2010. + +[13] Y. Rockah and P. Schultheiss, "Array shape calibration using sources in unknown locations-part I: Far-field sources," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 35, no. 3, pp. 286-299, Mar. 1987. + +[14] I. Reuven and H. Messer, "A Barankin-type lower bound on the estimation error of a hybrid parameter vector," *IEEE Transactions on Information Theory*, vol. 43, no. 3, pp. 1084-1093, May 1997. + +[15] S. Bay, B. Geller, A. Renaux, J.-P. Barbot, and J.-M. Brossier, "On the hybrid Cramér-Rao bound and its application to dynamical phase estimation," *IEEE Signal Processing Letters*, vol. 15, pp. 453-456, 2008. + +[16] H. L. Van Trees, *Detection, Estimation and Modulation Theory*. New-York, NY, USA: John Wiley & Sons, 1968, vol. 1. + +[17] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, "Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing," in *Radar Array Processing*, S. S. Haykin, J. Litva, and T. J. Shepherd, Eds. Berlin: Springer-Verlag, 1993, ch. 4, pp. 99-151. + +[18] K. L. Bell, Y. Ephraim, and H. L. 
Van Trees, "Explicit Ziv-Zakaï lower bound for bearing estimation," *IEEE Transactions on Signal Processing*, vol. 44, no. 11, pp. 2810-2824, Nov. 1996. + +[19] T. J. Nohara and S. Haykin, "Application of the Weiss-Weinstein bound to a two dimensional antenna array," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 36, no. 9, pp. 1533-1534, Sep. 1988. + +[20] H. Nguyen and H. L. Van Trees, "Comparison of performance bounds for DOA estimation," in Proc. of IEEE Workshop on Statistical Signal and Array Processing (SSAP), vol. 1, Jun. 1994, pp. 313-316. +---PAGE_BREAK--- + +[21] F. Athley, "Optimization of element positions for direction finding with sparse arrays," in *Proc. of IEEE Workshop on Statistical Signal Processing (SSP)*, vol. 1, 2001, pp. 516–519. + +[22] W. Xu, A. B. Baggeroer, and C. D. Richmond, "Bayesian bounds for matched-field parameter estimation," *IEEE Transactions on Signal Processing*, vol. 52, no. 12, pp. 3293–3305, Dec. 2004. + +[23] A. Renaux, "Weiss-Weinstein bound for data aided carrier estimation," *IEEE Signal Processing Letters*, vol. 14, no. 4, pp. 283–286, Apr. 2007. + +[24] D. T. Vu, A. Renaux, R. Boyer, and S. Marcos, "Closed-form expression of the Weiss-Weinstein bound for 3D source localization: the conditional case," in *Proc. of IEEE Workshop on Sensor Array and Multi-channel Processing (SAM)*, vol. 1, Kibutz Ma'ale Hahamisha, Israel, Oct. 2010, pp. 125–128. + +[25] S. M. Kay, *Fundamentals of Statistical Signal Processing: Estimation Theory*. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., Mar. 1993, vol. 1. + +[26] H. L. Van Trees, *Detection, Estimation and Modulation theory: Optimum Array Processing*. New-York, NY, USA: John Wiley & Sons, Mar. 2002, vol. 4. + +[27] Z. Ben Haim and Y. Eldar, "A comment on the Weiss-Weinstein bound for constrained parameter sets," *IEEE Transactions on Information Theory*, vol. 54, no. 10, pp. 4682–4684, Oct. 2008. + +[28] P. Stoica and A. 
Nehorai, "Performances study of conditional and unconditional direction of arrival estimation," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 38, no. 10, pp. 1783–1795, Oct. 1990. + +[29] K. L. Bell, Y. Ephraim, and H. L. Van Trees, "Explicit Ziv-Zakaï lower bounds for bearing estimation using planar arrays," in *Proc. of Workshop on Adaptive Sensor Array Processing (ASAP)*. Lexington, MA, USA: MIT Lincoln Laboratory, Mar. 1996. + +[30] I. Reuven and H. Messer, "The use of the Barankin bound for determining the threshold SNR in estimating the bearing of a source in the presence of another," in *Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)*, vol. 3, Detroit, MI, USA, May 1995, pp. 1645–1648. + +[31] J. Li and R. T. Compton, "Maximum likelihood angle estimation for signals with known waveforms," *IEEE Transactions on Signal Processing*, vol. 41, no. 9, pp. 2850–2862, Sep. 93. + +[32] M. Cedervall and R. L. Moses, "Efficient maximum likelihood DOA estimation for signals with known waveforms in presence of multipath," *IEEE Transactions on Signal Processing*, vol. 45, no. 3, pp. 808–811, Mar. 1997. + +[33] J. Li, B. Halder, P. Stoica, and M. Viberg, "Computationally efficient angle estimation for signals with known waveforms," *IEEE Transactions on Signal Processing*, vol. 43, no. 9, pp. 2154–2163, Sep. 1995. + +[34] E. Weinstein and A. J. Weiss, "A general class of lower bounds in parameter estimation," *IEEE Transactions on Information Theory*, vol. 34, no. 2, pp. 338–342, Mar. 1988. + +[35] P. S. La Rosa, A. Renaux, A. Nehorai, and C. H. Muravchik, "Barankin-type lower bound on multiple change-point estimation," *IEEE Transactions on Signal Processing*, vol. 58, no. 11, pp. 5534–5549, Nov. 2010. + +[36] H. L. Van Trees, *Detection, Estimation and Modulation Theory: Radar-Sonar Signal Processing and Gaussian Signals in Noise*. New-York, NY, USA: John Wiley & Sons, Sep. 2001, vol. 3. + +[37] K. L. 
Bell, "Performance bounds in parameter estimation with application to bearing estimation," Ph.D. dissertation, George Mason University, Fairfax, VA, USA, 1995. + +[38] W. Xu, A. B. Baggeroer, and K. L. Bell, "A bound on mean-square estimation error with background parameter mismatch," *IEEE Transactions on Information Theory*, vol. 50, no. 4, pp. 621–632, Apr. 2004. + +[39] J. Tabrikian and J. L. Krolik, "Barankin bounds for source localization in an uncertain ocean environment," *IEEE Transactions on Signal Processing*, vol. 47, no. 11, pp. 2917–2927, Nov. 1999. + +[40] H. Gazzah and S. Marcos, "Cramér-Rao bounds for antenna array design," *IEEE Transactions on Signal Processing*, vol. 54, no. 1, pp. 336–345, Jan. 2006. +---PAGE_BREAK--- + +Figure .1: 3D source localization using a planar array antenna. +---PAGE_BREAK--- + +Figure .2: Ziv-Zakai bound, Weiss-Weinstein bound and empirical MSE of the MAP estimator: unconditional case. +---PAGE_BREAK--- + +Figure 3: Weiss-Weinstein bounds of the V-shaped array w.r.t. the opening angle $\Delta$. \ No newline at end of file diff --git a/samples/texts_merged/4808858.md b/samples/texts_merged/4808858.md new file mode 100644 index 0000000000000000000000000000000000000000..5d40635300d42441d9d4837b68cf02714e8d63d2 --- /dev/null +++ b/samples/texts_merged/4808858.md @@ -0,0 +1,28 @@ + +---PAGE_BREAK--- + +**Problem 1** In an LC circuit with $C = 4.00$ μF, the maximum potential difference across the capacitor is 1.50 V and the maximum current through the inductor is 50 mA. + +(a) What is the inductance $L$? + +(b) What is the frequency of oscillations? + +(c) How long does it take for the charge to rise from 0 to its maximum value? + +**Problem 4** A circuit is composed of two metal rails 8 cm apart, a resistor with $R = 1 \Omega$ connecting them, and a rod at the other end which moves at a speed of 0.45 m/s. A uniform magnetic field $B = 0.1$ T points perpendicular to the plane of the circuit. 
+ +(a) Find the induced emf in the circuit. + +(b) Find the current in the circuit. + +(c) If the rod moved in the opposite direction, how would your answers change? + +**Problem 5** While upgrading the electronics in your car stereo, you calculate that you need to construct an LC circuit that oscillates at 20 Hz. If you have a 40 mH inductor, what capacitor do you need to buy from Radio Shack? + +**Problem 6** You have an LC circuit that includes a small, unavoidable resistance from the wires. The inductor is 1.5 mH and the capacitor is 3 mF. The capacitor is initially charged to 30 μC. After 100 oscillations, the maximum charge on the capacitor is only 5 μC. + +(a) What is the resistance of the circuit? + +(b) How much energy has been lost? + +(c) Where did this energy go? \ No newline at end of file diff --git a/samples/texts_merged/4872902.md b/samples/texts_merged/4872902.md new file mode 100644 index 0000000000000000000000000000000000000000..f989e09b3ffe2d1a47218da18845540614e3cfa2 --- /dev/null +++ b/samples/texts_merged/4872902.md @@ -0,0 +1,230 @@ + +---PAGE_BREAK--- + +Computation of Time-Domain Frequency Stability and +Jitter from PM Noise Measurements* + +W. F. Walls and F. L. Walls + +Femtosecond Systems Inc., +4894 Van Gordon St. Suite 301N, +Wheat Ridge, CO 80033, USA + +National Institute of Standards and Technology, +325 Broadway Boulder, CO 80303, USA + +Abstract + +This paper explores the effect of phase modulation (PM), amplitude modulation (AM), and thermal noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources. + +**1. Introduction** + +In this paper we review the basic definitions generally used to describe phase +modulation (PM) noise, amplitude modulation (AM) noise, fractional frequency stability, +timing jitter and phase jitter in precision sources. From these basic definitions we can then +compute the effect of frequency multiplication or division on these measures of +performance. 
We find that under ideal frequency multiplication or division by a factor N,
+the PM noise and phase jitter of a source are intrinsically changed by a factor of N². The
+fractional frequency stability and timing jitter are, however, unchanged as long as we can
+determine the average zero crossings. After a sufficiently large N, the carrier power
+density is less than the PM noise power. This condition is often referred to as carrier
+collapse. Ideal frequency translation results in the addition of the PM noise of the two
+sources. The effect of AM noise on the multiplied or translated signals can be increased
+or decreased depending on the component non-linearity. Noise added to a precision signal
+results in equal amounts of PM and AM noise. The upper and lower PM (or AM)
+sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM)
+originates from random or coherent processes [1].
+
+## 2. Basic Definitions
+
+2.1 Description of the Voltage Waveform
+
+The output of a precision source can be written as
+
+$$
+V(t) = [V_o + \varepsilon(t)]\cos[2\pi v_o t + \phi(t)], \quad (1)
+$$
+
+* Work of the US Government not subject to US copyright.
+† Presently at Total Frequency, Boulder, CO 80303.
+---PAGE_BREAK---
+
+where $v_o$ is the average frequency and $V_o$ is the average amplitude. Phase/frequency variations are included in $\phi(t)$ and the amplitude variations are included in $\varepsilon(t)$ [2]. The instantaneous frequency is given by
+
+$$ v = v_o + \frac{1}{2\pi} \frac{d}{dt} \phi(t) \quad (2a) $$
+
+The instantaneous fractional frequency deviation is given by
+
+$$ y(t) = \frac{1}{2\pi v_o} \frac{d}{dt} \phi(t) \quad (2b) $$
+
+The power spectral density (PSD) of phase fluctuations $S_\phi(f)$ is the mean squared phase fluctuation $\delta\phi(f)$ at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. This includes the contributions at both the upper and lower sidebands.
These sidebands are exactly equal in amplitude and are 100% correlated [1]. Thus experimentally
+
+$$ S_{\phi}(f) = \frac{[\delta\phi(f)]^2}{BW} \quad \text{radians}^2/\text{Hz}, \quad (3) $$
+
+where BW is the measurement bandwidth in Hz. Since the BW is small compared to $f$, $S_\phi(f)$ appears locally to be white and obeys Gaussian statistics. The fractional 1-sigma confidence interval is $1 \pm 1/\sqrt{N}$ [3].
+
+Often the PM noise is specified as the single-sideband noise $\ell(f)$, which is defined as $1/2$ of $S_\phi(f)$. The units are generally given in dBc/Hz, which is shorthand for dB below the carrier in a 1 Hz bandwidth.
+
+$$ \ell(f) = 10 \log \left[ \frac{1}{2} S_{\phi}(f) \right] \quad \text{dBc/Hz}. \quad (4) $$
+
+Frequency modulation noise is often specified as $S_y(f)$, which is the PSD of fractional frequency fluctuations. $S_y(f)$ is related to $S_\phi(f)$ by
+
+$$ S_y(f) = \frac{f^2}{v_o^2} S_\phi(f) \quad 1/\text{Hz}. \quad (5) $$
+
+In the laser literature one often sees the frequency noise expressed as the PSD of frequency fluctuations $S_{\delta v}(f)$, which is related to $S_y(f)$ as
+
+$$ S_{\delta v}(f) = v_o^2 S_y(f) = f^2 S_\phi(f) \quad \text{Hz}^2/\text{Hz}. \quad (6) $$
+
+The amplitude modulation (AM) noise $S_a(f)$ is the mean squared fractional amplitude fluctuation at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. Thus experimentally
+
+$$ S_a(f) = \left( \frac{\delta\varepsilon(f)}{V_o} \right)^2 \frac{1}{BW} \quad 1/\text{Hz}, \quad (7) $$
+
+where BW is the measurement bandwidth in Hz.
+---PAGE_BREAK---
+
+The rf power spectrum for small PM and AM noise is approximately given by
+
+$$V^2(f) \approx V_o^2 [e^{-\phi_c^2} + S_\phi(f) + S_a(f)], \quad (8)$$
+
+where $e^{-\phi_c^2}$ is the approximate power in the carrier at Fourier frequencies from 0 to $f_c$ and $\phi_c^2$ is the mean squared phase fluctuation due to the PM noise at frequencies larger than $f_c$ [4]. $\phi_c^2$ is calculated from
+
+$$\phi_c^2 = \int_{f_c}^{\infty} S_{\phi}(f) df. \quad (9)$$
+
+The half-power bandwidth of the signal, $2f_c$, can be found by setting $\phi_c^2 = 0.7$. The difference between the half-power and the 3 dB bandwidth depends on the shape of $S_\phi(f)$ [4].
+
+## 2.2 Frequency Stability In The Time Domain
+
+The frequency of even a precision source is often not stationary in time, so traditional statistical methods used to characterize it diverge with an increasing number of samples [2]. Special statistics have been developed to handle this problem. The most common is the two-sample or Allan variance (AVAR), which is based on analyzing the fluctuations of adjacent samples of fractional frequency averaged over a period $\tau$. The square root of the Allan variance $\sigma_y(\tau)$, often called ADEV, is defined as
+
+$$\sigma_y(\tau) = \left\langle \frac{1}{2} \left[ \bar{y}(t+\tau) - \bar{y}(t) \right]^2 \right\rangle^{1/2}. \quad (10)$$
+
+$\sigma_y(\tau)$ can be estimated from a finite set of $M$ frequency averages, each of length $\tau$, from
+
+$$\sigma_y(\tau) = \left[ \frac{1}{2(M-1)} \sum_{i=1}^{M-1} (y_{i+1} - y_i)^2 \right]^{1/2}. \quad (11)$$
+
+This assumes that there is no dead time between samples [2]. If there is dead time, the results are biased depending on the amount of dead time and the type of PM noise. See [2] for details.
+
+$\sigma_y(\tau)$ can also be calculated from $S_\phi(f)$ using
+
+$$\sigma_y(\tau) = \left( \frac{\sqrt{2}}{\pi v_o \tau} \right) \left[ \int_0^\infty H(f) S_\phi(f) \sin^4(\pi f \tau) df \right]^{1/2}, \quad (12)$$
+
+where $H(f)$ is the transfer function of the system used for measuring $\sigma_y(\tau)$ or $\delta t$ below [2]. $H(f)$ must have a low-pass characteristic for $\sigma_y(\tau)$ to converge in the presence of white PM or flicker PM noise.
+---PAGE_BREAK---
+
+Figure 1. Placement of the $y_i$ used in the computation of $\sigma_y(\tau)$ and $\delta t = \tau\sigma_y(\tau)$.
In practice the measurement system always has a finite bandwidth; if this bandwidth is not controlled or known, the results for $\sigma_y(\tau)$ will have little meaning [2]. See Table 1. If $H(f)$ has a low-pass characteristic with a very sharp roll-off at a maximum frequency $f_h$, it can be replaced by 1 and the integration terminated at $f_h$. Practical examples usually require the exact shape of $H(f)$. Programs exist that numerically compute $\sigma_y(\tau)$ for an arbitrary combination of these 5 noise types [5]. Most sources contain at least three of them plus long-term drift or aging.
+
+## 2.3 Effects of Frequency Multiplication, Division, and Translation
+
+Frequency multiplication by a factor N is the same as phase amplification by a factor N. For example, 2π radians is amplified to 2πN radians. Since PM noise is the mean squared phase fluctuation, the PM noise must increase by N². Thus
+
+$$S_{\phi}(Nv_o, f) = N^2 S_{\phi}(v_o, f) + \text{Multiplication PM}, \quad (13)$$
+
+where Multiplication PM is the noise added by the multiplication process.
+
+We see from Eqs. (8), (9) and (13) that the power in the carrier decreases exponentially as $e^{-N^2}$. After a sufficiently large multiplication factor N, the carrier power density is less than the PM noise power. This is often referred to as carrier collapse [4]. Ideal frequency translation results in the addition of the PM noise of the two sources [2]. The half-power bandwidth of the signal also changes with frequency multiplication.
+
+Frequency division can be considered as frequency multiplication by a factor 1/N. The effect is to reduce the PM noise by a factor 1/N². The only difference is that there can be aliasing of the broadband PM noise at the input, which can significantly increase the output PM above that calculated for a perfect divider [6]. This effect can be avoided by using a
+---PAGE_BREAK---
+narrowband filter at the input or intermediate stages. Ideal frequency multiplication or division does not change $\sigma_y(\tau)$.
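The $N^2$ scaling in Eq. (13) and its interplay with Eq. (5) can be checked with a few lines of arithmetic. The sketch below (assumed carrier frequency, multiplication factor, and flicker-PM spectrum; an ideal multiplier with no added noise) confirms that $S_\phi$ grows by $N^2$ while $S_y(f)$, and hence $\sigma_y(\tau)$, is unchanged:

```python
import numpy as np

# Illustrative values (not from the paper): 10 MHz carrier, multiply by N = 100.
nu0 = 10e6
N = 100
f = np.logspace(0, 4, 50)        # Fourier frequencies, Hz
S_phi = 1e-11 / f                # assumed flicker-PM spectrum, rad^2/Hz

# Eq. (13) with zero "Multiplication PM": S_phi scales by N^2.
S_phi_mult = N**2 * S_phi

# Eq. (5) at the input (carrier nu0) and at the output (carrier N*nu0):
# the N^2 in S_phi cancels the N^2 in the new carrier frequency.
S_y_in = (f**2 / nu0**2) * S_phi
S_y_out = (f**2 / (N * nu0)**2) * S_phi_mult

print(np.allclose(S_y_in, S_y_out))   # True: sigma_y(tau) is unchanged
```

The same cancellation is why ideal division by N (multiplication by 1/N) also leaves $\sigma_y(\tau)$ untouched.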
+
+Frequency translation has the effect of adding the PM noise of the input signal $v_1$ and the reference signal $v_o$ to the PM noise of the nonlinear device providing the translation:
+
+$$S_{\phi}(v_2, f) = S_{\phi}(v_o, f) + S_{\phi}(v_1, f) + \text{Translation PM}. \quad (14)$$
+
+Thus dividing a high-frequency signal, rather than mixing two high-frequency signals, generally produces a low-frequency reference signal with less residual noise.
+
+### 3. Effect Of Multiplicative Noise
+
+Multiplicative noise is noise modulation power that remains proportional to the signal level. For example, consider the case where the gain is modulated by some process with an index $\beta$ as
+
+$$\text{Gain} = G_o(1 + \beta \cos\Omega t). \quad (15)$$
+
+If we assume an input signal given by
+
+$$V_{in} = V_o \cos[2\pi v_o t + \phi(t)], \quad (16)$$
+
+then the output voltage will have the form
+
+$$V_{out} = V_o G_o \cos[2\pi v_o t + \phi(t)] + V_o G_o \beta \cos\Omega t \cos[2\pi v_o t + \phi(t)]. \quad (17)$$
+
+The amplitude fluctuation is seen to be proportional to the input signal. Using Eqs. (1) and (7) we can compute the AM noise to be
+
+$$\frac{1}{2} S_a(f) = \frac{\beta^2}{4}. \quad (18)$$
+
+Similarly, if the phase is modulated as
+
+$$\phi(t) = \beta \cos(\Omega t), \quad (19)$$
+
+the output voltage will be of the form
+
+$$V_{out} = V_o \cos[2\pi v_o t + \beta \cos(\Omega t)]. \quad (20)$$
+
+The phase fluctuation is proportional to the input signal and the PM noise is calculated using Eqs. (1) and (3) to be
+
+$$\frac{1}{2} S_{\phi}(f) = \frac{\beta^2}{4}. \quad (21)$$
+---PAGE_BREAK---
+
+### 4. Effect of Additive Noise
+
+The addition of a noise signal $V_n(t)$ to the signal $V_o(t)$ yields a total signal
+
+$$V(t) = V_o(t) + V_n(t). \quad (22)$$
+
+Since the noise term $V_n(t)$ is uncorrelated with $V_o(t)$, 1/2 the power contributes to AM noise and 1/2 the power contributes to PM noise.
+
+$$V_{AM}(t) = V_n(t)/\sqrt{2}, \qquad V_{PM}(t) = V_n(t)/\sqrt{2}. \quad (23)$$
+
+$$\ell(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW}, \quad (24)$$
+
+where BW is the bandwidth in Hz. We see that the AM and PM noise are inversely proportional to the signal power. These results can be applied to amplifier and detection circuits as follows. The input noise power to the amplifier is given by kTBW. The gain of the amplifier from a matched source into a matched load is $G_o$. The noise power delivered to the load is just $kTBWG_oF$, where F is the noise figure. The output power to the load is $P_o$. Using Eq. (24) we obtain
+
+$$\ell(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW} = \frac{2kTBWFG_{o}}{4P_{o}BW} = \frac{kTFG_{o}}{2P_{o}} = -177\,\text{dBc/Hz} \quad (25)$$
+
+for T = 300 K, F = 1, and $P_o/G_o = P_{in} = 0$ dBm.
+
+### 5. Phase Jitter
+
+The phase jitter $\delta\phi$ is computed from the PM noise spectrum using
+
+$$\delta\phi = \left[ \int_{0}^{\infty} S_{\phi}(f) H(f) df \right]^{1/2}. \quad (26)$$
+
+Generally $H(f)$ must have the shape of a high-pass filter or a minimum cutoff frequency $f_{min}$ used to exclude low-frequency changes from the integration, or $\delta\phi$ will diverge due to random walk FM, flicker FM, or white FM noise processes. Usually $H(f)$ also has a low-pass characteristic at high frequencies to limit the effects of flicker PM and white PM [2]. See Table 1.
+
+### 6. Timing Jitter
+
+Recall that $\sigma_y(\tau)$ is the fractional frequency stability of adjacent samples, each of length $\tau$. See Fig. 1. The time jitter $\delta t$ is the timing error that accumulates after a period $\tau$.
$\delta t$ is related to $\sigma_y(\tau)$ by + +$$\frac{\delta t}{\tau} = \frac{\delta v}{v} = \sigma_y(\tau) \quad \delta t = \tau \sigma_y(\tau) \quad (27)$$ +---PAGE_BREAK--- + +Table 1 shows the asymptotic forms of $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ for the 5 common noise types at frequency $v_o$ and $Nv_o$, under the assumption that $2\pi f_h \tau > 1$. It is interesting to note that for white phase noise, all three measures are dominated by $f_h$[5]. For random walk frequency modulation (FM) and flicker FM, $\sigma_y(\tau)$ is independent of $f_h$ and instead is dominated by $S_\phi(1/\tau)$ or $S_\phi(f_{\text{min}})$. Also, the timing jitter is independent of $N$ as long as we can still identify zero crossings, while the phase jitter, which is proportional to frequency, is multiplied by a factor $N$. Typical sources usually contain at least 3 of these noise types. + +Table 1. $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ at carrier frequency $v_o$ and $Nv_o$ + +
| Noise type | $S_\phi(f)$ | $\sigma_y(\tau)$ | $\delta t$ at $v_o$ or $Nv_o$ | $\delta\phi$ at $v_o$ | $\delta\phi$ at $Nv_o$ |
|---|---|---|---|---|---|
| Random walk FM | $[v_o^2/f^4]h_{-2}$ | $\pi[(2/3)h_{-2}\tau]^{1/2}$ | $\tau\pi[(2/3)h_{-2}\tau]^{1/2}$ | $v_o[h_{-2}/(3f_{min}^3)]^{1/2}$ | $Nv_o[h_{-2}/(3f_{min}^3)]^{1/2}$ |
| Flicker FM | $[v_o^2/f^3]h_{-1}$ | $[2\ln(2)h_{-1}]^{1/2}$ | $\tau[2\ln(2)h_{-1}]^{1/2}$ | $v_o[h_{-1}/(2f_{min}^2)]^{1/2}$ | $Nv_o[h_{-1}/(2f_{min}^2)]^{1/2}$ |
| White FM | $[v_o^2/f^2]h_0$ | $[h_0/(2\tau)]^{1/2}$ | $[(\tau/2)h_0]^{1/2}$ | $v_o[(1/f_{min} - 1/f_h)h_0]^{1/2}$ | $Nv_o[(1/f_{min} - 1/f_h)h_0]^{1/2}$ |
| Flicker PM | $[v_o^2/f]h_1$ | $[1/(2\pi\tau)][(1.038 + 3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $[1/(2\pi)][(1.038 + 3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $v_o[\ln(f_h/f_{min})h_1]^{1/2}$ | $Nv_o[\ln(f_h/f_{min})h_1]^{1/2}$ |
| White PM | $[v_o^2]h_2$ | $[1/(2\pi\tau)][3f_h h_2]^{1/2}$ | $[1/(2\pi)][3f_h h_2]^{1/2}$ | $v_o[f_h h_2]^{1/2}$ | $Nv_o[f_h h_2]^{1/2}$ |
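The closed forms in Table 1 follow from integrating Eq. (12) (for $\sigma_y$) or Eq. (26) (for $\delta\phi$) over each power-law spectrum. As a sketch, the white-FM row can be verified numerically; all numeric values below are assumed purely for illustration:

```python
import numpy as np

# Assumed values for the check; any positive choices work.
nu0, h0, tau = 5e6, 1e-22, 1.0

# White FM: S_phi(f) = nu0^2 h0 / f^2.  Integrate Eq. (12) with H(f) = 1
# on a fine grid using the trapezoidal rule.
f = np.linspace(1e-4, 1e4, 2_000_000)
integrand = (nu0**2 * h0 / f**2) * np.sin(np.pi * f * tau) ** 4
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))
sigma2_num = (2.0 / (np.pi * nu0 * tau) ** 2) * integral

sigma2_tab = h0 / (2 * tau)      # white-FM entry of Table 1, squared
print(sigma2_num / sigma2_tab)   # ratio close to 1
```

The small residual discrepancy comes from truncating the integral at the upper grid limit; extending the grid shrinks it further.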
+
+## 7. Discussion
+
+We have explored the effects of phase modulation (PM), amplitude modulation (AM), and additive noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources. Under ideal frequency multiplication or division by a factor $N$, the PM noise and phase jitter of a source are changed by a factor of $N^2$. After a sufficiently large $N$, the carrier power density is less than the PM noise power. This condition is often referred to as carrier collapse. Noise added to a precision signal results in equal amounts of PM and AM noise. The upper and lower PM (or AM) sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM) originates from random or coherent processes.
+
+## 8. Acknowledgements
+
+We gratefully acknowledge helpful discussions with David A. Howe, A. Sen Gupta, and Jeff Vollin.
+
+## References
+
+[1] F.L. Walls, "Correlation Between Upper and Lower Sidebands," IEEE Trans. Ultrason., Ferroelectrics, and Freq. Cont., 47, 407-410, 2000.
+
+[2] D.B. Sullivan, D.W. Allan, D.A. Howe, and F.L. Walls, "Characterization of Clocks and Oscillators," NIST Tech. Note 1337, 1-342, 1990.
+
+[3] F.L. Walls, D.B. Percival, and W.R. Irelan, "Biases and Variances of Several FFT Spectral Estimators as a Function of Noise Type and Number of Samples," Proc. 43rd Ann. Symp. Freq. Control, Denver, CO, May 31-June 2, 336-341, 1989. Also found in [1].
+---PAGE_BREAK---
+
+[4] F.L. Walls and A. DeMarchi, "RF Spectrum of a Signal After Frequency Multiplication: Measurement and Comparison with a Simple Calculation," IEEE Trans. Instrum. Meas., **24**, 210-217, 1975.
+
+[5] F.L. Walls, J. Gary, A. O'Gallagher, R. Sweet, and L. Sweet, Time Domain Frequency Stability Calculated from the Frequency Domain Description: Use of the SIGINT Software Package to Calculate Time Domain Frequency Stability from the Frequency Domain, NISTIR 89-3916 (revised), 1-31, 1991.
+
+[6] A. SenGupta and F.L.
Walls, "Effect of Aliasing on Spurs and PM Noise in Frequency Dividers," Proc. Intl. IEEE Freq. Cont. Symp., Kansas City, MO, June 6-9, 2000. \ No newline at end of file diff --git a/samples/texts_merged/4994833.md b/samples/texts_merged/4994833.md new file mode 100644 index 0000000000000000000000000000000000000000..930066c8c3a5a0f1beaabec89faea68f36f3275e --- /dev/null +++ b/samples/texts_merged/4994833.md @@ -0,0 +1,529 @@
+
+---PAGE_BREAK---
+
+Sampling variance update method in
+Monte Carlo Model Predictive Control*
+
+Shintaro Nakatani* Hisashi Date**
+
+* Graduate School of Systems and Information Engineering, University of Tsukuba, Ibaraki, Japan (e-mail: nakatani-s@roboken.iit.tsukuba.ac.jp).
+
+** Faculty of Engineering, Information and Systems, University of Tsukuba, Ibaraki, Japan (e-mail: hdate@iit.tsukuba.ac.jp)
+
+**Abstract:** This study describes the influence of user parameters on control performance in Monte-Carlo model predictive control (MCMPC). MCMPC, being based on Monte-Carlo sampling, depends significantly on the characteristics of the sampling distribution. We quantified the effect of user-determinable parameters on control performance using the relationship between the MCMPC algorithm and convergence to the optimal solution. In particular, we investigated the limitation that the variance of the sampling distribution causes a trade-off between convergence speed and estimation accuracy. To overcome this limitation, we proposed two variance update methods and a new MCMPC algorithm. Furthermore, their effectiveness was verified through numerical simulation.
+
+**Keywords:** Optimal control theory, Monte-Carlo methods, Randomized methods, Model predictive and optimization-based control
+
+# 1. INTRODUCTION
+
+In recent years, model predictive control (MPC) has attracted considerable attention in various fields owing to its ability to explicitly handle the required constraints Carlos E. Garcia and Morari (1989), Ohtsuka (2004).
In MPC, an algorithm is used to determine the optimal control inputs by repeatedly solving a constrained optimization problem over a finite future horizon. From the viewpoint of implementation, MPC can be separated into two categories, i.e., gradient-based and sample-based MPC.
+
+The former is actively being researched for application to various real-world systems. The C/GMRES method proposed by Ohtsuka (2004) is a particularly efficient gradient-based MPC method. C/GMRES is known to be an efficient algorithm Cairano and Kolmanovsky (2019) for nonlinear systems and has been considered for application in various systems such as smart grid systems Toru (2012) and vehicle collision avoidance control Masashi Nanno (2010).
+
+In gradient-based MPC, the optimal input is determined by solving the optimal control problem using the gradient information of the cost function. Therefore, if the optimal control problem is simple, the optimal solution can be derived quickly and accurately. On the other hand, the target system is limited to systems with a differentiable cost function.
+
+In the other method, i.e., sample-based MPC, the optimal input is determined using a Monte-Carlo approximation. In general, the Monte-Carlo method requires significant computational resources; therefore, real-time implementation of sample-based MPC is difficult. However, in the literature Williams et al. (2016); Ohyama and Date (2017), it has been reported that an efficient approach is to take advantage of the parallel nature of sampling and use a graphics processing unit to implement it in real time. In addition, as sample-based MPC does not require gradient information of the cost function, it has many significant advantages. The literature Nakatani and Date (2019) describes the features of Monte-Carlo model predictive control (MCMPC), which is a type of sample-based MPC.
It also demonstrates its capability of handling discontinuous events, based on experiments with a colliding pendulum on a cart.
+
+From a theoretical point of view, the most successful method is the path integral optimal control framework Kappen (2007); Satoh et al. (2017). The key idea in this framework is that the solution of the optimal control problem is transformed into an expectation over all possible trajectories and their corresponding costs. This transformation allows stochastic optimal control problems to be solved using a Monte-Carlo approximation with guaranteed convergence. However, in these studies, the effect of the variance of the sampling distribution on convergence was not considered. Williams et al. (2015) mentions this problem and proposes a framework that allows users to freely determine the variance of the sampling distribution. These previous studies have in common that path integral theory is applied to stochastic optimal control problems.
+
+Alternatively, the MCMPC investigated herein aims to solve the optimal control problem for deterministic systems. Therefore, herein we discuss the convergence of MCMPC by considering the optimal control problem for
+
+* This work was not supported by any organization
+---PAGE_BREAK---
+
+discrete-time linear systems, for which the optimal solution can be derived analytically.
+
+This study mainly describes the trade-off relationship between the variance of the sampling distribution and convergence: if we choose a large sampling variance, convergence is faster, but large noise remains in the solution. This means that the variance must be properly controlled to make the sub-optimal input match the optimal solution; i.e., we need to adjust the sampling variance properly to achieve fast convergence and precision at the same time.
Two types of variance update methods are proposed: one inspired by the cooling principle of the simulated annealing method, and one based on the most recent sample variance. These methods are compared in simulations of a linear system. Besides the variance update methods, we also introduce two ways of selecting the estimate among the Monte-Carlo samples: the top-1 sample and the weighted mean. Taking the best sample among all samples tends to converge fast but suffers from large estimation noise compared with the weighted mean. These are also compared in simulation.

Based on these results, we show that the newly proposed method is effective for the problem discussed in this paper.

## 2. FINITE-TIME OPTIMAL CONTROL PROBLEM FOR DISCRETE-TIME LINEAR SYSTEMS

We consider an optimal control problem for discrete-time linear systems on the $k$-th control cycle with an $N$-step prediction, whose time indices are denoted by $\{k|0\}, \dots, \{k|i\}, \dots, \{k|N\}$. Consider a class of linear discrete-time systems described by the following equation:

$$x_{\{k|i+1\}} = Ax_{\{k|i\}} + Bu_{\{k|i\}}, \quad (1)$$

where the state is denoted by $x_{\{k|i\}} \in \mathbb{R}^n$, the control input by $u_{\{k|i\}} \in \mathbb{R}^1$, and the system matrices by $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times 1}$. In addition, it is assumed that the initial state $x_{\{k|0\}}$ at each control cycle $k$ is known and, for simplicity, that there are no constraints on the input or the state.
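As a concrete reference, the recursion (1) can be rolled out in a few lines. The function name and the use of NumPy are our own choices, not from the paper; this is only a sketch.

```python
import numpy as np

def simulate(A, B, x0, u):
    """Roll out x_{i+1} = A x_i + B u_i (eq. (1)); returns the states x_0, ..., x_N."""
    xs = [np.asarray(x0, dtype=float)]
    for ui in u:
        xs.append(A @ xs[-1] + B.flatten() * float(ui))
    return np.array(xs)
```

With a zero input sequence this reduces to $x_N = A^N x_{\{k|0\}}$, which makes a convenient sanity check.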
For the system (1), the cost function for the finite-time optimal control problem from the current control cycle to $N$ steps into the future is described by the following equation:

$$J(x_k, u_k, k) = \frac{1}{2} \sum_{i=0}^{N-1} \left( x_{\{k|i+1\}}^T Q x_{\{k|i+1\}} + u_{\{k|i\}}^T R u_{\{k|i\}} \right), \quad (2)$$

where $Q \in \mathbb{R}^{n \times n}$ is the positive definite weight for the state and $R \in \mathbb{R}^1$ is the positive definite weight for the input. In the rest of this study, we use $J$ as the cost value unless otherwise noted. The solution of this optimal control problem is then defined as

$$u_{\{k|i\}}^* = \arg \min_{u_{\{k|i\}}} J(x_k, u_k, k). \quad (3)$$

Using the fact that the time evolution of the system (1) can be expressed with only the initial state $x_{\{k|0\}}$ and the input sequence $u_{\{k|0\}}, \dots, u_{\{k|N-1\}}$, we can rewrite equation (2) as:

$$J(x_k, u_k, k) = \frac{1}{2} \hat{\mathbf{u}}^T \hat{Q} \hat{\mathbf{u}} + x_{\{k|0\}}^T \hat{B} \hat{\mathbf{u}} + \frac{1}{2} x_{\{k|0\}}^T \hat{A} x_{\{k|0\}}, \quad (4)$$

where the matrices $\hat{A} \in \mathbb{R}^{n \times n}$, $\hat{B} \in \mathbb{R}^{n \times N}$, and $\hat{Q} \in \mathbb{R}^{N \times N}$ and the vector $\hat{\mathbf{u}} \in \mathbb{R}^N$ are given in (5) to (8).
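Equation (4) can be checked numerically. The sketch below builds $\hat{A}$, $\hat{B}$, $\hat{Q}$ by stacking the predictions $x_1, \dots, x_N$ (an equivalent route to the element-wise sums that follow) and compares the quadratic form against a direct rollout of (2); it also computes the unconstrained minimizer $\hat{\mathbf{u}}^* = -\hat{Q}^{-1}\hat{B}^T x_{\{k|0\}}$ implied by (3) and (4). All names are ours; this is a sketch of the construction, not the paper's implementation.

```python
import numpy as np

def condensed_matrices(A, B, Q, R, N):
    """Build the condensed cost matrices of eq. (4).

    Stacks the predictions x_{1..N} = Phi x0 + Gamma u, so that
    J = 0.5 u^T Qhat u + x0^T Bhat u + 0.5 x0^T Ahat x0 with
    Qhat = Gamma^T Qbar Gamma + Rbar, Bhat = Phi^T Qbar Gamma,
    Ahat = Phi^T Qbar Phi.
    """
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):           # block row: prediction x_{i+1}
        for j in range(i + 1):   # block column: input u_j
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), np.atleast_2d(R))
    return Phi.T @ Qbar @ Phi, Phi.T @ Qbar @ Gamma, Gamma.T @ Qbar @ Gamma + Rbar

def rollout_cost(A, B, Q, R, x0, u):
    """Evaluate eq. (2) directly by simulating eq. (1)."""
    x, J = x0, 0.0
    for ui in u:
        x = A @ x + B @ np.atleast_1d(ui)
        J += 0.5 * (x @ Q @ x + ui * R * ui)
    return J
```

For a random input sequence, the quadratic form (4) built this way agrees with the rollout cost (2) to machine precision, and the gradient $\hat{Q}\hat{\mathbf{u}}^* + \hat{B}^T x_0$ vanishes at the minimizer.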
$$\hat{A} = A^T QA + (A^2)^T QA^2 + \cdots + (A^N)^T QA^N \quad (5)$$

$$\hat{B} = \left[ \sum_{k=1}^{N} (A^k)^T QA^{k-1} B, \dots, \sum_{k=j}^{N} (A^k)^T QA^{k-j} B, \dots, (A^N)^T QB \right] \quad (6)$$

$$\hat{Q} = \begin{bmatrix} \hat{q}_{11} & \cdots & \hat{q}_{1j} & \cdots & \hat{q}_{1N} \\ \vdots & \ddots & & & \vdots \\ \hat{q}_{i1} & & \hat{q}_{ij} & & \hat{q}_{iN} \\ \vdots & & & \ddots & \vdots \\ \hat{q}_{N1} & \cdots & \hat{q}_{Nj} & \cdots & \hat{q}_{NN} \end{bmatrix} \quad (7)$$

$$\hat{\mathbf{u}} = [u_{\{k|0\}}, \dots, u_{\{k|N-1\}}]^T \quad (8)$$

The matrix $\hat{Q}$ is symmetric; its element in the $i$-th row and $j$-th column of the upper triangle is given by

$$\hat{q}_{ij} =
\begin{cases}
\displaystyle \sum_{k=0}^{N-i} B^T (A^k)^T Q A^k B + R, & (i = j) \\
\displaystyle \sum_{k=j-i}^{N-i} B^T (A^k)^T Q A^{k+i-j} B. & (i < j)
\end{cases} \quad (9)$$

The expectation of the estimate at the $(d+1)$-th iteration, $\bar{\mathbf{u}}_{d+1}$, can be described as

$$
\bar{\mathbf{u}}_{d+1} = E(\bar{\mathbf{u}}) = (\sigma^2 \hat{Q} + \lambda^2 I_N)^{-1} (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}_d). \quad (21)
$$

If we define the error between the optimal input sequence $\mathbf{u}^*$ and the sub-optimal input $\bar{\mathbf{u}}_d$ obtained at the $d$-th estimation as $\boldsymbol{e}_d = \bar{\mathbf{u}}_d - \mathbf{u}^*$, the $(d+1)$-th estimation error can be described as

$$
\boldsymbol{e}_{d+1} = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \boldsymbol{e}_d.
\quad (22)
$$

From the above considerations, we obtain the following theorem on the relationship between convergence and the parameters specific to MCMPC.

Theorem 1. In (4), assume that the matrix $\hat{Q}$ is a real positive definite symmetric matrix and that the unique optimal input sequence exists as shown in (10).

Then, the sub-optimal input $\bar{\mathbf{u}}_d$ converges to $\mathbf{u}^*$ as $d \to \infty$.
**Proof.** The necessary and sufficient condition for the error $\boldsymbol{e}_d$ to asymptotically converge to 0 is that the
---PAGE_BREAK---

absolute values of all eigenvalues of the matrix $\Omega$ shown in (23) are less than 1.

$$ \Omega = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \qquad (23) $$

For any real positive definite symmetric matrices $M_A, M_B$, the following inequality holds:

$$ \lambda_i(M_A + M_B) > \lambda_i(M_A), \qquad (24) $$

where $\lambda_i(Z)$ denotes the $i$-th eigenvalue of a matrix $Z$ (proof omitted). Since $\hat{Q}$ is a real positive definite symmetric matrix, it follows that

$$ \lambda_i \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right) > \lambda_i(I) = 1. \qquad (25) $$

Since $\lambda_i(Z^{-1}) = \frac{1}{\lambda_i(Z)}$ holds for any non-singular matrix, the following inequality holds:

$$ \lambda_i(\Omega) = \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) < 1. \qquad (26) $$

As the eigenvalues of a real positive definite symmetric matrix are positive real numbers, the absolute values of all eigenvalues of $\Omega$ are less than 1. The error $e_d$ therefore satisfies

$$ \lim_{d \to \infty} e_d = 0, \qquad (27) $$

which means

$$ \lim_{d \to \infty} (\bar{u}_d - u^*) = 0. \qquad (28) $$

Thus, the sub-optimal input sequence $\bar{u}_d$ converges asymptotically to $u^*$ as $d \to \infty$. $\square$

**Corollary 1.** When $\sigma \to \infty$, Eq. (26) satisfies

$$ \lim_{\sigma \to \infty} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 0, \quad \forall i. \qquad (29) $$

Eq. (29) shows that if $\sigma \to \infty$, the first estimate $\bar{u}_1$ satisfies $\bar{u}_1 = u^*$.
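Theorem 1's contraction argument can be checked numerically. The sketch below uses a randomly generated positive definite matrix standing in for $\hat{Q}$ (all names and parameter values are illustrative): every eigenvalue of $\Omega$ from (23) lies in $(0, 1)$, and iterating the error recursion (22) drives the error to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, lam = 8, 1.5, 2.0

# Random real positive definite symmetric matrix standing in for Q-hat.
G = rng.standard_normal((N, N))
Q_hat = G @ G.T + np.eye(N)

# Contraction matrix of eq. (23).
Omega = np.linalg.inv((sigma**2 / lam**2) * Q_hat + np.eye(N))

# All eigenvalues of Omega lie strictly between 0 and 1 (eqs. (25)-(26)).
eigs = np.linalg.eigvalsh((Omega + Omega.T) / 2)  # Omega is symmetric
assert np.all(eigs > 0) and np.all(eigs < 1)

# Error recursion of eq. (22): e_{d+1} = Omega e_d converges to 0.
e = rng.standard_normal(N)
for _ in range(200):
    e = Omega @ e
```

After 200 iterations the error norm is negligible, consistent with (27).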
Therefore, the larger $\sigma$ is, the faster the sub-optimal input sequence $\bar{u}_d$ converges to the optimal values.

The variance-covariance matrix of the sample mean, $\Sigma_S$, shown in Eq. (20), can then be described by the following equation:

$$ \lim_{\sigma \to \infty} \Sigma_S = \frac{\lambda^2 \hat{Q}^{-1}}{M}. \qquad (30) $$

Eq. (30) means that if $\lambda$ is sufficiently small, the variance of the sub-optimal input sequence $\bar{u}_d$ is small. This observation is consistent with the results of path integral analysis and implies a trade-off between convergence and variance. Moreover, equation (30) shows that for a large sample number $M$, the error of the Monte-Carlo approximation of the expected value $E(\bar{u})$ is $O(1/\sqrt{M})$.

**Corollary 2.** When $\sigma \to 0$, equation (20) satisfies

$$ \lim_{\sigma \to 0} \Sigma_S = 0. \qquad (31) $$

However, the eigenvalues of the coefficient matrix $\Omega$ in equation (22) become

$$ \lim_{\sigma \to 0} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 1, \quad \forall i. \qquad (32) $$

These equations show that there is a trade-off between convergence and the variance of the sample mean $\Sigma_S$. Equations (31) and (32) show that if the user chooses the variance $\sigma^2$ as small as possible to eliminate the variance of the sample mean $\Sigma_S$, the error $e_d$ from the previous estimation will remain; moreover, if $\sigma$ is too small, the sub-optimal input sequence $\bar{u}_d$ converges to the optimal values only slowly.

From Corollary 1 and Corollary 2, it is understood that the variance must be controlled appropriately to improve both the estimation accuracy and the convergence speed.

### 3.2 Algorithm of Top1 sample MCMPC

In Top1 sample MCMPC, the optimization problem is solved by iterating the following three processes within the same control cycle.
### Phase 1
Generate input sequences.

### Phase 2
Run forward simulations in parallel.

### Phase 3
Estimate the sub-optimal input sequence $\tilde{\mathbf{u}}$ and update the standard deviation $\sigma$.

Phases 1 and 2 are the same as in the MCMPC algorithm described above.

In phase 3, the sub-optimal input sequence $\tilde{\mathbf{u}}$ is given by

$$ \tilde{\mathbf{u}} = \arg\min_{\hat{\mathbf{u}} \in U} J(x_k, u_k, k), \qquad (33) $$

where $U$ denotes the set of all input sequences $\hat{\mathbf{u}}$ randomly sampled in phase 1. In addition, the standard deviation $\sigma$ is updated as described in Section 4.

### 3.3 Model predictive control algorithm

So far we have described how the prediction is repeated within one control cycle. In the model predictive control we propose, the prediction is repeated every control cycle, and the sub-optimal input predicted in the previous control cycle is re-optimized. Hence, the sub-optimal input at the $k$-th control cycle corresponds to the result of $k \times d$ prediction iterations.

## 4. SAMPLING VARIANCE UPDATE METHODS

In this section, we describe two types of update methods applied at each iteration of the prediction. The first variance update method used in this study can be described by the following equation:

$$ \sigma_d = \gamma^d \sigma_0, \qquad (34) $$

where $\gamma \in [0.8, 1.0)$ is a positive constant, $d$ is the iteration number, and $\sigma_0$ is a parameter representing the initial standard deviation, to be designed by the user. Equation (34) is inspired by the cooling schedule used in the simulated annealing (SA) method. In SA, the estimate is guaranteed to reach the optimal solution when $\gamma$ is chosen appropriately and cooling is applied enough times; for example, with the schedule $\gamma = 1/\log(1+d)$, the estimate reliably converges to the optimal value.
However, the cooling rate $\gamma = 1/\log(1+d)$ is too slow, so in practice the
---PAGE_BREAK---

cooling rate $\gamma \in [0.8, 1.0)$ is generally used (Rosen and Nakano, 1994).

The second method can be described by the following equation:

$$\sigma_d = \sqrt{\frac{1}{\sum_{m=1}^{M} w_{d-1}(\hat{\mathbf{u}}_m)}}. \quad (35)$$

Equation (35) corresponds to the error variance of equation (16), which can be calculated from the law of error propagation. Note that equation (35) is a variance update method that reflects the quality of the estimation results. In the rest of this study, we refer to the former as the geometric cooling method and to the latter as the latest sample variance method.

## 5. NUMERICAL SIMULATION

In this section, we first show the models used in two different numerical simulations. Next, we show simulation results for normal-type MCMPC, which illustrate the effect of the variance $\sigma$ on convergence. Furthermore, we show the results of applying the two variance update methods of Section 4 to normal-type MCMPC and to Top1 sample MCMPC. Finally, we show the results of applying the method to the swing-up stabilization of a double inverted pendulum, a type of nonlinear system.

### 5.1 Simulation models

**Example 1.** As the first example, we consider the optimal control problem obtained when MCMPC is applied to a three-dimensional unstable discrete-time linear system described by the following equation:

$$ \begin{aligned} x_{k+1} &= Ax_k + Bu_k, \\ x_k &\in \mathbb{R}^3, \quad u_k \in \mathbb{R}^1, \end{aligned} \quad (36) $$

with coefficient matrices $A$ and $B$ as shown in the following equations:

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1.1364 & 0.273 \\ 0 & -0.1339 & -0.1071 \end{bmatrix} \quad (37)$$

$$B = \begin{bmatrix} 0 \\ 0 \\ 0.0893 \end{bmatrix}. \quad (38)$$

The eigenvalues of $A$ are then $\Lambda = [0, -1.1059, -0.1376]^T$.
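As a check, the instability of (37)-(38) and a minimal Top1 sample loop (Sections 3.2 and 4) can be sketched as follows. The horizon, weights, and initial state follow the Example 1 setup stated in the text ($N = 15$, $Q = \mathrm{diag}(2, 1, 0.1)$, $R = 1$, $x_0 = [2.98, 0.7, 0]^T$); the sample count and iteration budget are reduced from Table 2 for a quick run, and keeping the incumbent when no sample improves on it is our own elitist tweak, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# System (36)-(38), entered from the printed matrices.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, -1.1364, 0.273],
              [0.0, -0.1339, -0.1071]])
B = np.array([0.0, 0.0, 0.0893])

# One eigenvalue of A lies outside the unit circle: open-loop unstable.
assert np.max(np.abs(np.linalg.eigvals(A))) > 1.0

def rollout_cost(u, x0, Q, R):
    """Cost (2) evaluated by simulating (1) over the horizon."""
    x, J = x0, 0.0
    for ui in u:
        x = A @ x + B * ui
        J += 0.5 * (x @ Q @ x + R * ui * ui)
    return J

# Minimal Top1 sample MCMPC loop (phases 1-3) with geometric cooling (34).
N, M, D = 15, 300, 30
Q, R = np.diag([2.0, 1.0, 0.1]), 1.0
x0 = np.array([2.98, 0.7, 0.0])
sigma, gamma = 0.5, 0.9
u_best = np.zeros(N)
J_hist = [rollout_cost(u_best, x0, Q, R)]
for _ in range(D):
    samples = u_best + sigma * rng.standard_normal((M, N))  # phase 1
    costs = [rollout_cost(u, x0, Q, R) for u in samples]    # phase 2
    j = int(np.argmin(costs))                               # phase 3: top-1
    if costs[j] < J_hist[-1]:                               # elitist tweak (ours)
        u_best = samples[j]
    J_hist.append(min(costs[j], J_hist[-1]))
    sigma *= gamma                                          # eq. (34)
```

With the elitist tweak the tracked cost is non-increasing by construction, so the loop can only improve on the zero-input sequence.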
Since one of the eigenvalues of $A$ lies outside the unit circle, system (36) is unstable. We then consider an optimal control problem for system (36) with prediction horizon $N = 15$, initial state $x_0 = [2.98, 0.7, 0.0]^T$, and state weight matrix $Q$ and input weight $R$ as follows:

$$Q = \operatorname{diag}(2.0, 1.0, 0.1), \quad R = 1. \quad (39)$$

The optimal input sequence $\mathbf{u}^*$ can then be easily calculated using equation (3). In this study, we show only the analytical solution $u_0^* = -2.69$ used in the following discussion.

**Example 2.** As the second example, we consider the swing-up stabilization of an arm-type double inverted pendulum.

Table 1. Parameters of the arm-type double pendulum
| Name | Symbol (unit) | Value |
| --- | --- | --- |
| Angle of the first link | $\theta_1$ (rad) | Variable |
| Angle of the second link | $\theta_2$ (rad) | Variable |
| First link drive torque | $\tau_1$ (N·m) | Variable |
| Mass of first link | $m_1$ (kg) | - |
| Mass of second link | $m_2$ (kg) | $9.60 \times 10^{-2}$ |
| Coefficient of friction | $\mu_2$ (kg·m²·s⁻¹) | $1.26 \times 10^{-4}$ |
| Gravity acceleration | $g$ (m·s⁻²) | 9.81 |
| Length of first link | $L_1$ (m) | $2.27 \times 10^{-1}$ |
| Length of second link | $l_2$ (m) | $1.95 \times 10^{-1}$ |
| Moment of inertia | $J_2$ (kg·m²) | $1.10 \times 10^{-3}$ |
| Positive constant | $a_1$ | 6.29 |
| Positive constant | $b_1$ | $1.64 \times 10^{1}$ |
Fig. 1. Model of the arm-type double pendulum

The state equation of the arm-type double inverted pendulum shown in Fig. 1 can be described by the following two equations:

$$\ddot{\theta}_1(t) = -a_1\dot{\theta}_1(t) + b_1u(t) \quad (40)$$

$$\alpha_1 \cos \theta_{12}(t) \cdot \ddot{\theta}_1(t) + \alpha_2 \ddot{\theta}_2(t) = \alpha_1 \dot{\theta}_1^2(t) \sin \theta_{12}(t) + \alpha_3 \sin \theta_2(t) + \mu_2 \dot{\theta}_1(t) - \mu_2 \dot{\theta}_2(t) \quad (41)$$

The time-invariant parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ and the variable $\theta_{12}$ in equations (40) and (41) are as follows:

$$\begin{aligned} \alpha_1 &= m_2 L_1 l_2, & \alpha_2 &= J_2 + m_2 l_2^2, \\ \alpha_3 &= m_2 l_2 g, & \theta_{12}(t) &= \theta_1(t) - \theta_2(t). \end{aligned} \quad (42)$$

The parameters of equations (40) to (42) and Fig. 1 are listed in Table 1. We then consider an optimal control problem for this example with prediction horizon $N = 80$, the initial state shown in equation (43), and the state weight matrix $Q$ and input weight $R$ shown in equation (44):

$$[\theta_1(0), \dot{\theta}_1(0), \theta_2(0), \dot{\theta}_2(0)] = [\pi, 0, \pi, 0]. \quad (43)$$

$$Q = \operatorname{diag}(5.0, 0.01, 5.0, 0.01), \quad R = 1. \quad (44)$$

### 5.2 Trade-off between precision and convergence

In this subsection, we consider the relationship between the variance $\sigma$ of the sampling distribution and convergence, using the result of applying normal-type MCMPC to Example 1. Fig. 2 shows the average and $3\sigma$ standard deviation of 30 independent trials under each condition.
---PAGE_BREAK---

Table 2. Parameters (for Example 1)
| Name | Symbol | Value |
| --- | --- | --- |
| Num of predictive steps | $N$ | 15 steps |
| Num of samples | $M$ | 5,000 |
| Num of iterations | $d$ | 100 |
| Variance | $\sigma^2$ | Variable |
| Variance | $\lambda$ | 6.3 |
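The trade-off examined in this subsection can be illustrated directly from the contraction matrix (23) and the sample-mean covariance implied by (A.3) and (30). The $\hat{Q}$ below is a random positive definite stand-in (not the actual Example 1 matrix); $\lambda$ and $M$ mirror Table 2, and the $\sigma$ grid matches the values 0.5, 1.0, 2.0, 4.0 compared in this subsection.

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam, M = 15, 6.3, 5000  # horizon, lambda, and sample count as in Table 2

# Stand-in positive definite matrix playing the role of Q-hat.
G = rng.standard_normal((N, N))
Q_hat = G @ G.T + np.eye(N)

def contraction_radius(sigma):
    """Spectral radius of the contraction matrix (23): smaller means faster convergence."""
    Omega = np.linalg.inv((sigma**2 / lam**2) * Q_hat + np.eye(N))
    return float(np.max(np.abs(np.linalg.eigvalsh(Omega))))

def sample_mean_noise(sigma):
    """Trace of the sample-mean covariance implied by (A.3) and (30): larger means noisier."""
    S = lam**2 * sigma**2 * np.linalg.inv(sigma**2 * Q_hat + lam**2 * np.eye(N)) / M
    return float(np.trace(S))

sigmas = [0.5, 1.0, 2.0, 4.0]
radii = [contraction_radius(s) for s in sigmas]
noises = [sample_mean_noise(s) for s in sigmas]
```

As $\sigma$ grows, the spectral radius falls (faster convergence) while the sample-mean noise rises, reproducing the trade-off discussed in the text.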
Fig. 2. Effect of $\sigma$ on the estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1

Table 2 lists the specific parameters of MCMPC used in this simulation to confirm the relationship between the variance $\sigma$ and convergence. In Fig. 2, we compare the results as $\sigma$ is increased through 0.5, 1.0, 2.0, and 4.0. As $\sigma$ increases, the error $e_0$ converges to 0 in fewer iterations. However, it can also be confirmed that the variation in the error $e_0$ grows as the variance $\sigma$ increases. This result is a good example of how the variance $\sigma$ of the sampling distribution produces a trade-off between the speed of convergence and the accuracy of the estimated sub-optimal inputs at convergence.

From the results shown in Fig. 2, the variance $\sigma$ must be updated appropriately to obtain the optimal inputs faster and more accurately.

### 5.3 Comparison of sampling variance update methods

Fig. 3 shows the results obtained with the geometric cooling method (34). We plot the average of 30 independent trials and the $3\sigma$ standard deviation range. The upper figure shows the result obtained using normal-type MCMPC, whereas the lower figure shows the result obtained using Top1 sample MCMPC. We determined $\gamma$ in equation (34) using the following equation:

$$ \gamma = \exp \left( \frac{1}{D} \log \left( \frac{\delta}{\sigma_0} \right) \right) \quad (45) $$

where $D$ is the number of iterations, $\sigma_0$ is the initial variance $\sigma$ of the sampling distribution, and $\delta$ is the variance $\sigma$ of the sampling distribution used at the $D$-th iteration. In this simulation, $D = 100$ and $\delta = 10^{-5}$ were fixed, and the value of $\sigma_0$ was varied from 0.5 to 4.0. In the upper figure of Fig. 3, it can be confirmed that the error $e_0$ may or may not converge to 0 depending on the initial variance $\sigma_0$.
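The $\gamma$ chosen by (45) makes the schedule (34) land exactly on $\delta$ after $D$ iterations; a quick check with the $D$ and $\delta$ stated above (the $\sigma_0$ value is one of the tested settings):

```python
import math

def gamma_for(D, sigma0, delta):
    """Eq. (45): pick gamma so that sigma_D = gamma**D * sigma0 equals delta."""
    return math.exp(math.log(delta / sigma0) / D)

D, sigma0, delta = 100, 2.0, 1e-5
g = gamma_for(D, sigma0, delta)
sigma_D = (g ** D) * sigma0  # schedule (34) evaluated at d = D
```

Since $\gamma = (\delta/\sigma_0)^{1/D}$, raising it to the $D$-th power recovers $\delta/\sigma_0$ exactly, and $\gamma$ lies in $(0, 1)$ whenever $\delta < \sigma_0$.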
On the contrary, in the lower figure of Fig. 3, the error $e_0$ converges to 0 for any initial variance. In either case, the variation of the estimated sub-optimal input can be reduced.

Fig. 3. Effect of $\sigma$ on the estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the geometric cooling method. (This figure shows the mean and $3\sigma$ variance of 30 trials.)

When normal-type MCMPC was applied, the error $e_0$ did not converge to 0 when the initial variance $\sigma_0$ was set considerably small, because $\sigma_d$ converged earlier than the error $e_0$.

Fig. 4 shows the results obtained with the latest sample variance method (35). In the upper figure, which shows the result of applying normal-type MCMPC, it can be confirmed that the error $e_0$ did not converge because $\sigma$ converged earlier than the error $e_0$. In contrast, when Top1 sample MCMPC is applied, as shown in the lower figure of Fig. 4, both the error $e_0$ and its variation converged near 0.

The results shown in Fig. 3 and Fig. 4 indicate that the two variance update methods proposed in this study cannot improve the trade-off between convergence speed and estimation accuracy when normal-type MCMPC is applied. However, when the update method (34) is applied, choosing an appropriate (i.e., sufficiently large) initial variance can improve the trade-off. On the other hand, in the case of Top1 sample MCMPC, either update method converges reliably to the optimal solution if sufficiently many iterations are taken. This means that Top1 sample MCMPC has a high affinity with either distribution update method.

### 5.4 Application to a nonlinear system

In this section, we show the results of applying the foregoing analysis to a nonlinear system.
The discussion of convergence for the linear system can be applied to a nonlinear system that can be linearly approximated around the optimal solution. The system model and cost function are given in Example 2. The parameters of the controller used for this simulation are listed in Table 3. We set the initial variance to the lower bound given by:

$$ \sigma_0 \geq \frac{u_{max} - u_{min}}{6}. \quad (46) $$

The method of determining the variance $\sigma_0$ as in equation (46) is also used in Nakatani and Date (2019). Fig. 5 shows the time responses of $\theta_1, \theta_2, \dot{\theta}_1, \dot{\theta}_2$, plotting the average over 30 trials and the $3\sigma$ standard deviation.
---PAGE_BREAK---

Fig. 4. Effect of $\sigma$ on the estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the latest sample variance method. (This figure shows the mean and $3\sigma$ variance of 30 trials.)

In Fig. 5, (a) corresponds to the result of applying Top1 sample MCMPC and (b) to the result of applying normal-type MCMPC. When the variance update methods considered in this study were applied to normal-type MCMPC, none of them achieved swing-up stabilization. For this reason, the normal-type MCMPC result shown in Fig. 5 was obtained without variance updating, whereas the Top1 sample MCMPC result uses the variance update method (34). In addition, the variance $\sigma$ used in this simulation was the one with the best performance among five simulations of normal-type MCMPC using $\sigma_0^2 = 0.5, 1.0, 2.0, 3.0, 4.0$. Both controllers stabilized the swing-up approximately 2.0 s after the start of control.

The upper figures in Fig. 6 and Fig. 7 show the input sequences. Immediately after the start of control, Top1 sample MCMPC selects the smallest input that satisfies the input constraints.
On the contrary, normal-type MCMPC selects a conservative input. The lower figures in Fig. 6 and Fig. 7 show the value of the cost function calculated from the input sequences predicted in each control cycle; the smaller the value in each control cycle, the better the control performance. According to the results shown in this study, Top1 sample MCMPC demonstrates superior control performance. Moreover, this result was unchanged when the initial variance $\sigma_0$ and the variance update method were changed.

In normal-type MCMPC, when the variance $\sigma$ or the variance update method was changed, the control performance deteriorated or the swing-up could not be stabilized, due to the trade-off relationship described in subsection 3.1.

## 6. CONCLUSION

Herein, we examined the relationship between the convergence of MCMPC and user-determinable parameters. It was verified analytically that the variance $\sigma$ of the sampling distribution entails a trade-off between convergence speed and estimation accuracy. Next, we proposed two types of variance update meth-

Table 3. Parameters (for Example 2)
| Name | Value |
| --- | --- |
| Simulation time | 5.0 (s) |
| Control cycle | 100 (Hz) |
| Prediction horizon | 0.8 (s) |
| Num of predictive steps | 80 steps |
| Num of samples | 5,000 |
| Num of iterations | 100 |
| $\sigma_0^2$ or $\sigma^2$ | 1.0 |
| $\lambda^2$ | 40 |
| $\gamma$ | 0.9 |
| Input constraint | $-3.0 \le u(t) \le 3.0$ (V) |
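For reference, the pendulum dynamics (40)-(41) with the Table 1 parameters can be stepped numerically. The sketch below uses a plain Euler step whose step size is our own choice; note that at the initial state (43), $[\pi, 0, \pi, 0]$, both angular accelerations vanish, so it is an equilibrium and the swing-up must be excited by the input.

```python
import math

# Table 1 parameters (m1 is not needed by eqs. (40)-(41)).
m2, L1, l2 = 9.60e-2, 2.27e-1, 1.95e-1
J2, mu2, g = 1.10e-3, 1.26e-4, 9.81
a1, b1 = 6.29, 1.64e1
alpha1, alpha2, alpha3 = m2 * L1 * l2, J2 + m2 * l2**2, m2 * l2 * g  # eq. (42)

def accelerations(th1, dth1, th2, dth2, u):
    """Angular accelerations from eqs. (40)-(41)."""
    ddth1 = -a1 * dth1 + b1 * u                             # eq. (40)
    th12 = th1 - th2                                        # eq. (42)
    ddth2 = (alpha1 * dth1**2 * math.sin(th12) + alpha3 * math.sin(th2)
             + mu2 * dth1 - mu2 * dth2
             - alpha1 * math.cos(th12) * ddth1) / alpha2    # eq. (41) solved for ddth2
    return ddth1, ddth2

def euler_step(state, u, dt=0.01):
    """One explicit Euler step of the state [th1, dth1, th2, dth2]."""
    th1, dth1, th2, dth2 = state
    ddth1, ddth2 = accelerations(th1, dth1, th2, dth2, u)
    return (th1 + dt * dth1, dth1 + dt * ddth1,
            th2 + dt * dth2, dth2 + dt * ddth2)
```

Stepping from $[\pi, 0, \pi, 0]$ with zero input leaves the state unchanged (up to floating-point noise), confirming the equilibrium.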
Fig. 5. Simulation result ((a) Top1 sample MCMPC vs (b) normal-type MCMPC). Top left: time response of $\theta_1$. Top right: time response of $\theta_2$. Bottom left: time response of $\dot{\theta}_1$. Bottom right: time response of $\dot{\theta}_2$.

Fig. 6. Top: simulation result of input sequences. Bottom: cost value calculated in each control cycle. (This figure shows the mean and $3\sigma$ variance of 30 trials.)

ods and Top1 sample MCMPC to overcome this trade-off problem. Finally, we carried out numerical simulations and discussed the effects of applying the variance update methods and Top1 sample MCMPC. We also showed an example of a numerical simulation applied to a nonlinear system and examined the applicability of the proposed approach to controlling nonlinear systems.
---PAGE_BREAK---

Fig. 7. Top: simulation result of input sequences. Bottom: cost value calculated in each control cycle. (This figure shows the result of one trial out of 30.)

REFERENCES

Cairano, S.D. and Kolmanovsky, I.V. (2019). Automotive applications of model predictive control. In *Handbook of Model Predictive Control*, 493–527. Springer International Publishing, Cham.

Garcia, C.E., Prett, D.M., and Morari, M. (1989). Model predictive control: Theory and practice—a survey. *Automatica*, **25**, 335–348.

Kappen, H.J. (2007). An introduction to stochastic control theory, path integrals and reinforcement learning. *Proc. 9th Granada Seminar on Computational Physics: Cooperative Behavior in Neural Systems*, 149–181.

Nanno, M. and Ohtsuka, T. (2010). Nonlinear model predictive control for vehicle collision avoidance using C/GMRES algorithm. Presented at the 2010 IEEE International Conference on Control Applications, Yokohama, Japan, September 8–10.

Nakatani, S. and Date, H. (2019). Swing up control of inverted pendulum on a cart with collision by Monte Carlo model predictive control.
*2019 58th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)*, 1050–1055.

Namekawa, T. (2012). Distributed and predictive control for smart grids. *Journal of the Society of Instrument and Control Engineers*, **51**, 62–68.

Ohtsuka, T. (2004). A continuation/GMRES method for fast computation of nonlinear receding horizon control. *Automatica*, **40**, 563–574.

Ohyama, S. and Date, H. (2017). Parallelized nonlinear model predictive control on GPU. *2017 11th Asian Control Conference (ASCC)*, Gold Coast, QLD, 1620–1625.

Rosen, E.B. and Nakano, R. (1994). Simulated annealing: Basics and recent topics on simulated annealing [in Japanese]. *Journal of Japanese Society for Artificial Intelligence*, 365–372.

Satoh, S., Kappen, H.J., and Saeki, M. (2017). An iterative method for nonlinear stochastic optimal control based on path integrals. *IEEE Transactions on Automatic Control*, **62**, 262–276.

Williams, G., Aldrich, A., and Theodorou, E. (2015). Model predictive path integral control using covariance variable importance sampling. arXiv preprint arXiv:1509.01149.

Williams, G., Drews, P., Goldfain, B., Rehg, J.M., and Theodorou, E.A. (2016). Aggressive driving with model predictive path integral control. *IEEE International Conference on Robotics and Automation (ICRA)*, Stockholm, Sweden, 1433–1440.

Appendix A. DERIVATION OF SAMPLE MEAN EXPECTATION AND VARIANCE OF SAMPLE MEAN

In this appendix, we describe how to derive the analytical solution (21) from Eq. (18). Substituting the results of Eq. (15) and Eq. (17) into Eq.
(18), we can transform it as:

$$
\begin{align*}
E(\bar{\mathbf{u}}) &= \bar{C}_1 \int \hat{\mathbf{u}}\, \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}}-\mathbf{u}^*)^T \hat{Q} (\hat{\mathbf{u}}-\mathbf{u}^*) - \frac{1}{2\sigma^2}(\hat{\mathbf{u}}-\bar{\mathbf{u}}_d)^T (\hat{\mathbf{u}}-\bar{\mathbf{u}}_d)\right) d\hat{\mathbf{u}} \\
&= \bar{C}_2 \int \hat{\mathbf{u}}\, \exp\left(-\frac{1}{2\lambda^2\sigma^2}\hat{\mathbf{u}}^T (\sigma^2\hat{Q} + \lambda^2 I)\hat{\mathbf{u}} + \frac{1}{\lambda^2\sigma^2}\left(\sigma^2 (\mathbf{u}^*)^T \hat{Q} + \lambda^2 \bar{\mathbf{u}}_d^T\right)\hat{\mathbf{u}}\right) d\hat{\mathbf{u}} \\
&= \bar{C}_3 \int \hat{\mathbf{u}}\, \exp\left(-\frac{1}{2\lambda^2\sigma^2}(\hat{\mathbf{u}}-\bar{\mathbf{u}})^T (\sigma^2\hat{Q} + \lambda^2 I)(\hat{\mathbf{u}}-\bar{\mathbf{u}})\right) d\hat{\mathbf{u}}
\end{align*}
\quad (A.1)
$$

where $\bar{C}_1, \bar{C}_2$, and $\bar{C}_3$ collect the normalization constants and the terms independent of $\hat{\mathbf{u}}$ that arise when arranging the exponent into quadratic form. We then define the contents of the exponential function on the last line of Eq. (A.1) as $g$ and obtain a stationary point by partial differentiation of $g$ with respect to $\hat{\mathbf{u}}$:

$$
\left. \frac{\partial g}{\partial \hat{\mathbf{u}}} \right|_{\hat{\mathbf{u}}=\bar{\mathbf{u}}} = (\sigma^2 \hat{Q} + \lambda^2 I) \bar{\mathbf{u}} - (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}_d) = 0. \quad (\text{A.2})
$$

Here, solving Eq.
(A.2) for $\bar{\mathbf{u}}$ yields the result of Eq. (21).

Next, we find the variance of the sample mean using Eq. (A.1). Let $\hat{\mathbf{u}}$ be a random variable that follows a multidimensional normal distribution with expected value $\bar{\mathbf{u}}$ and variance $\Sigma_S$. From the PDF of this distribution and a comparison of coefficients with the Gaussian integrand in Eq. (A.1), the variance $\Sigma_S$ satisfies:

$$
\frac{1}{2\lambda^2\sigma^2} (\sigma^2\hat{Q} + \lambda^2 I) = \frac{1}{2}\Sigma_S^{-1}. \qquad (\text{A.3})
$$

---PAGE_BREAK---

QUESTION 1

$A$ = the complement of $\angle B$ degrees

$B$ = the supplement of $\angle C$ degrees

$C$ = the supplement of the complement of $\angle D$ degrees

$D$ = the central angle of a circle with radius 4 with corresponding arc length of $\pi$

$$\text{Find } A + B + C + D$$

---PAGE_BREAK---

QUESTION 2

A = the number of diagonals of an icosagon (20-sided polygon)

B = the area of an isosceles trapezoid with base lengths 4 and 28 and a height of 5

C = the height of a rectangular prism with a length of 20, a width of 9, and a space diagonal of 25

D = the volume of a hemisphere with radius 6

Find $A+B+\frac{D}{C}$

---PAGE_BREAK---

QUESTION 3

Puneet lives in a box with dimensions $20ft \times 15ft \times 10ft$. There is a door with dimensions $7ft \times 4ft$. Each can of paint can cover $100 ft^2$.
A = the number of paint cans needed to paint the door

B = the number of paint cans needed to paint Puneet's house, given that he paints the entire surface area of the house

C = the length of the longest sandwich Puneet can fit into his box

D = the ratio of the volume of the box to the surface area of the box

Find $AC + BD$

---PAGE_BREAK---

QUESTION 4

A semicircle is inscribed in an equilateral triangle so that the diameter rests on one side of the triangle and is tangent to the other two sides. Let A be the radius of the semicircle when the side length of the triangle equals 24.

Two poles of height 6 ft and 8 ft are located 12 ft away from each other. Jenny attaches two cables that connect the top of one pole to the bottom of the other. Let B be the height of the intersection of the two cables from the ground.

Jenny likes pie and $\pi$. She buys herself a two-dimensional pie with radius 14 in. Let C be the area of her pie in $in^2$.

Find $A + B + C$.

---PAGE_BREAK---

QUESTION 5

A = the length of the inradius of a triangle with side lengths 7, 8, and 9

B = the length of the circumradius of a triangle with side lengths 10, 10, and 14

C = the area of a triangle with side lengths 14, 60, and 66

D = the area of a triangle with side lengths 12 and 15 and an included angle of 60°

$$
\text{Hint: Area} = \frac{1}{2} ab \sin C \text{ where } C \text{ is the angle between } a \text{ and } b
$$

Find $A\sqrt{5} + B\sqrt{51} - \frac{C}{\sqrt{2}} + \frac{D}{\sqrt{3}}$

---PAGE_BREAK---

QUESTION 6

A = the sum of the coordinates of the centroid of a triangle with vertices (5, 7), (-1, 5), and (8, 0)

B = the slope of the median from vertex B of a triangle with vertices A(31, 7), B(19, 21), C(25, 12)

C = the measure of ∠D in degrees in △DOG if the opposite side length is $4\sqrt{2}$, ∠G equals 45°, and DO equals 8

Find A + B + C.

---PAGE_BREAK---

QUESTION 7

(Figure not drawn to scale.
A quadrilateral is drawn over two parallel lines.) + +What is the sum of $\angle B$ and $\angle F$ if $\angle A = 42^\circ$, $\angle C = 79^\circ$, $\angle E = 135^\circ$, and $\angle D = 51^\circ$? +---PAGE_BREAK--- + +QUESTION 8 + +Two spheres are inscribed in a rectangular box so that each sphere is tangent to five sides of the box and the other sphere. +If the radius of each of the spheres is 4 in, then the volume of the box is A in³. + +If a frustum of a cone has radii 6 in and 8 in and a height of 4 in, then the lateral surface area is Bπ in². + +An ant is sitting on the center of the top face of a right, cylindrical can of soup with radius 4 in and height 6π in. The ant wants to get down to the ground so it takes the shortest path to the edge of the face and climbs down the side of the can. The ant spirals down the can, rotating around once and arriving at the point directly underneath its position on the top edge. The length of the path the ant took from its original position to the ground is C in. + +Find A+B+C. +---PAGE_BREAK--- + +QUESTION 9 + +Add the values in the parentheses to $x$ if they are true. Subtract them from $x$ if they are false. Begin with $x = 0$. + +(5) The incenter of a triangle is the center of its inscribed circle + +(-3) The circumcenter of a triangle is equidistant from the sides of the triangle + +(-2) The orthocenter is the intersection of the altitudes of a triangle + +(7) The centroid is the intersection of the medians of a triangle + +(10) Euler's line is made up of the orthocenter, circumcenter, and the incenter + +After performing these operations, what is $x$? +---PAGE_BREAK--- + +QUESTION 10 + +A cylinder with radius 3 and height $\frac{9}{4}$ is inscribed in a cone with radius 8. + +$A$ = the volume of the cylinder + +$B$ = the height of the cone + +$C$ = the volume of the cone + +Find $\frac{AC}{B}$. +---PAGE_BREAK--- + +QUESTION 11 + +Siddarth is obsessed with the song Bang by Griana Arande.
Jeewoo, unfortunately, has bad music taste and likes All the Single Men by Jeyonce. The song Bang by Griana Arande is 3 minutes long. All the Single Men by Jeyonce is also 3 minutes long. If Siddarth starts to listen to Bang randomly at a time between 12:00 p.m. and 12:30 p.m. and Jeewoo starts to listen to All the Single Men randomly between 12:00 p.m. and 12:30 p.m., what is the probability that their songs are both playing at some time between 12:00 p.m. and 12:30 p.m.? +---PAGE_BREAK--- + +QUESTION 12 + +A = the number of sides of an undecagon + +B = the number of faces of a hexahedron + +C = the number of vertices of a figure with 12 edges and 8 faces + +D = the number of space diagonals in a dodecahedron + +Find (A+D) - (B+C) +---PAGE_BREAK--- + +QUESTION 13 + +A = sin 60° + +B = sin 30° + +C = cos 45° + +D = tan 60° + +Find ABCD. +---PAGE_BREAK--- + +QUESTION 14 + +(The figure is not drawn to scale.) + +The lengths of *a* and *b* are 6 and 4, respectively. How many possible combinations of (*c*, *d*) exist if *c* and *d* are integer lengths? \ No newline at end of file diff --git a/samples/texts_merged/5396754.md b/samples/texts_merged/5396754.md new file mode 100644 index 0000000000000000000000000000000000000000..6923858887cd2a054a3dd338afaad89e9e0caf51 --- /dev/null +++ b/samples/texts_merged/5396754.md @@ -0,0 +1,251 @@ + +---PAGE_BREAK--- + +Monte Carlo Sampling in Path +Space: Calculating Time Correlation +Functions by Transforming +Ensembles of Trajectories + +Cite as: AIP Conference Proceedings 690, 192 (2003); https://doi.org/10.1063/1.1632129 +Published Online: 06 November 2003 + +Christoph Dellago, and Phillip L.
Geissler +---PAGE_BREAK--- + +Monte Carlo Sampling in Path Space: +Calculating Time Correlation Functions +by Transforming Ensembles of Trajectories + +Christoph Dellago\* and Phillip L. Geissler\textsuperscript{†} + +\*Institute for Experimental Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria + +\textsuperscript{†}Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139 + +**Abstract.** Computational studies of processes in complex systems with metastable states are often complicated by a wide separation of time scales. Such processes can be studied with transition path sampling, a computational methodology based on an importance sampling of reactive trajectories capable of bridging this time scale gap. Within this perspective, ensembles of trajectories are sampled and manipulated in close analogy to standard techniques of statistical mechanics. In particular, the population time correlation functions appearing in the expressions for transition rate constants can be written in terms of free energy differences between ensembles of trajectories. Here we calculate such free energy differences with thermodynamic integration, which, in effect, corresponds to reversibly changing between ensembles of trajectories. + +INTRODUCTION + +Transition path sampling is a computational technique developed by us and others to study rare events in complex systems [1, 2, 3]. Although rare, such events are crucially important in many condensed matter systems.
Nucleation of first order phase transitions, transport in solids, chemical reactions in solution, and protein folding all occur on time scales which are long compared to basic molecular motions. Transition path sampling, which is based on an importance sampling in trajectory space, can provide insights into the mechanism and kinetics of processes involving dynamical bottlenecks. In the following we will give a brief overview of this methodology, focusing on the calculation of reaction rate constants. In this framework reaction rates are related to the reversible work required to manipulate ensembles of trajectories. As a consequence, rate constants can be calculated using free energy estimation methods familiar from equilibrium statistical mechanics, such as umbrella sampling and thermodynamic integration. For an in-depth treatment of all aspects of transition path sampling we refer the reader to the review articles [2] and [3]. + +In the path sampling approach dynamical pathways of length $t$ are represented by ordered sequences of $L = t/\Delta t + 1$ states, $x(t) \equiv \{x_0, x_{\Delta t}, x_{2\Delta t}, \dots, x_t\}$. Consecutive states are separated by a time increment $\Delta t$. Such dynamical pathways can be deterministic trajectories as generated by Newtonian dynamics or stochastic trajectories as constructed from Langevin dynamics or from Monte Carlo simulations. For Markovian single step transition probabilities $p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t})$ the statistical weight $\mathcal{P}[x(t)]$ of a particular +---PAGE_BREAK--- + +trajectory $x(t)$ is + +$$ \mathcal{P}[x(t)] = \rho(x_0) \prod_{i=0}^{L-2} p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t}), \quad (1) $$ + +where $\rho(x_0)$ is the distribution of initial states $x_0$. In many applications, $\rho(x_0)$ will be an equilibrium distribution such as the canonical distribution, but non-equilibrium distributions of initial conditions are possible as well.
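The path weight of Eq. (1) is simply the initial-state density times a product of single-step transition probabilities, which is easiest to evaluate in log form. The following is a minimal sketch, not code from the paper: the Gaussian random-walk dynamics and all names here are invented for illustration.

```python
import math
import random

def log_path_weight(path, log_rho0, log_p_step):
    """Log of the path weight in Eq. (1):
    log P[x(t)] = log rho(x_0) + sum_i log p(x_{i*dt} -> x_{(i+1)*dt})."""
    logw = log_rho0(path[0])
    for a, b in zip(path, path[1:]):
        logw += log_p_step(a, b)
    return logw

# Toy stochastic dynamics: Gaussian random walk with step variance s2,
# initial state drawn from a standard normal distribution.
s2 = 0.1

def log_rho0(x):        # log of the initial distribution rho(x_0)
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def log_p_step(x, y):   # log of the single-step transition probability
    return -0.5 * (y - x) ** 2 / s2 - 0.5 * math.log(2.0 * math.pi * s2)

random.seed(0)
path = [random.gauss(0.0, 1.0)]          # L = 11 states, i.e. 10 steps
for _ in range(10):
    path.append(random.gauss(path[-1], math.sqrt(s2)))
print(log_path_weight(path, log_rho0, log_p_step))
```

For deterministic dynamics the transition probabilities collapse to delta functions and only the initial condition carries statistical weight, so this log-sum form is mainly useful for stochastic pathways.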
+ +In applying transition path sampling one is usually interested in finding dynamical pathways connecting stable (or metastable) states, which we name *A* and *B*. Then, the probability of a *reactive* pathway, i.e., of a pathway starting in *A* and ending in *B*, is + +$$ \mathcal{P}_{AB}[x(t)] = h_A(x_0) \mathcal{P}[x(t)] h_B(x_t) / Z_{AB}(t), \quad (2) $$ + +where $h_A(x)$ and $h_B(x)$ are the population functions for regions *A* and *B*. That is, $h_A(x)$ is 1 if $x$ is in *A* and 0 otherwise, and $h_B(x)$ is defined analogously. The factor $Z_{AB}$, + +$$ Z_{AB}(t) = \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t), \quad (3) $$ + +normalizes the reactive path probability, and the notation $\int \mathcal{D}x(t)$ indicates an integration over all time slices of the pathway. The quantity $Z_{AB}(t)$ can be viewed as a partition function characterizing the ensemble of all reactive pathways. This analogy between conventional equilibrium statistical mechanics and the statistics of trajectories will be important in the discussion of reaction kinetics in the next section. The distribution $\mathcal{P}_{AB}[x(t)]$, which weights trajectories in the *transition path ensemble*, is a statistical description of all dynamical pathways connecting regions *A* and *B*. + +To sample the transition path ensemble we have developed several Monte Carlo simulation techniques [4, 5]. In these algorithms, which are importance sampling procedures in trajectory space, one proceeds by generating trial pathways from existing trajectories via what we call the shooting and shifting method [4]. Newly generated trial pathways are then accepted with a probability obeying the detailed balance condition. This condition guarantees that pathways are sampled according to their weight in the transition path ensemble. The detailed balance condition can be satisfied by choosing an acceptance probability according to the celebrated Metropolis rule [6]. 
Using such an acceptance probability in conjunction with the shooting and shifting algorithms one can efficiently explore trajectory space and harvest reactive pathways with their proper weight. Statistical analysis of the harvested pathways can then provide information on the kinetics of transition. The basis for this type of analysis will be discussed in the following section. + +REACTION RATES + +The time correlation function of state populations + +$$ C(t) = \frac{\langle h_A(x_0) h_B(x_t) \rangle}{\langle h_A(x_0) \rangle} \quad (4) $$ +---PAGE_BREAK--- + +provides a link between the microscopic dynamics of the system and the phenomenological description of the kinetics in terms of the forward and backward reaction rate constants $k_{AB}$ and $k_{BA}$, respectively [7]. If the reaction time $\tau_{\text{rxn}} = (k_{AB} + k_{BA})^{-1}$ is significantly larger than the time $\tau_{\text{mol}}$ necessary to cross the barrier top, $C(t)$ approaches its long time value exponentially after the short molecular transient time $\tau_{\text{mol}}$: + +$$ +C(t) \approx \langle h_B \rangle (1 - \exp\{-t/\tau_{\text{rxn}}\}). \quad (5) +$$ + +For $\tau_{\text{mol}} < t \ll \tau_{\text{rxn}}$ the population correlation function $C(t)$ grows linearly: + +$$ +C(t) \approx k_{AB}t. \quad (6) +$$ + +Thus, the forward reaction rate constant can be determined from the slope of $C(t)$ in this time regime. + +To evaluate $C(t)$ in the transition path sampling framework we rewrite it in terms of sums over trajectories: + +$$ +C(t) = \frac{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t)}{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)]} = \frac{Z_{AB}(t)}{Z_A}. \quad (7) +$$ + +The above expression can be viewed as the ratio between the “partition functions” for two different path ensembles: one, $Z_A$, in which pathways start in A and end anywhere, and one, $Z_{AB}(t)$, in which pathways start in A and end in B.
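Equation (7) also explains why importance sampling in path space is needed at all: a direct estimate of $C(t)$ just counts the fraction of unbiased trajectories launched in A that end in B, which is hopeless when transitions are rare. A toy sketch, not from the paper; the free-diffusion dynamics and region definitions are invented:

```python
import random

def estimate_C(n_paths, n_steps, x0, step_sd, in_B, seed=0):
    """Brute-force estimate of C(t) = Z_AB(t)/Z_A from Eq. (7):
    the fraction of unbiased paths started in region A that end in region B."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += rng.gauss(0.0, step_sd)   # toy dynamics: free diffusion
        hits += in_B(x)
    return hits / n_paths

# Start deep in A (x0 = -1) and ask for endpoints in B (x > 1).
C = estimate_C(n_paths=50000, n_steps=25, x0=-1.0, step_sd=0.1,
               in_B=lambda x: x > 1.0)
print(C)   # close to zero: direct sampling almost never reaches B
```

With the endpoint four standard deviations away, essentially no unbiased path reaches B, which is the situation the biased path ensembles of the following sections are designed to handle.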
This perspective suggests +that we determine the correlation function $C(t)$ via calculation of $\Delta F(t) \equiv F_{AB}(t) - F_A =$ +$-\ln Z_{AB}(t) + \ln Z_A$, in effect a difference of free energies. From the free energy difference +one can then immediately determine the time correlation function, $C(t) = \exp[-\Delta F(t)]$. +The free energy difference $\Delta F(t)$ can be viewed as the work necessary to reversibly +change from a path ensemble with free final points $x_t$ to a path ensemble in which the +final points $x_t$ are required to reside in region B. + +In principle, one can determine the reaction rate constant $k_{AB}$ by calculating the time +correlation function $C(t)$ at various times and by taking a numerical derivative with +respect to $t$. This procedure is, however, numerically costly since it requires repeated +free energy calculations. Fortunately, the reversible work $\Delta F(t')$ for a given time $t'$ can +be written as a sum of the reversible work $\Delta F(t)$ for a different time $t$ and the reversible +work $F(t',t)$ necessary to change $t$ to $t'$ [2]: + +$$ +\Delta F(t') = \Delta F(t) + F(t', t). \tag{8} +$$ + +This reversible work $F(t',t)$ can then be calculated for all times between 0 and $t'$ in +a single transition path sampling simulation, as described in detail in Ref. [2]. In the +following sections we will focus on ways to determine the reversible work $\Delta F(t)$ for a +single time $t$. + +MODEL + +To illustrate the numerical methods presented in this paper we have used them to +calculate the time correlation function C(t) for isomerizations occurring in a simple +---PAGE_BREAK--- + +diatomic molecule immersed in a bath of purely repulsive particles, schematically shown on the left hand side panel of Fig. 1. A very similar model has been studied by Straub, Borkovec, and Berne [8]. 
This two-dimensional model consists of *N* point particles of unit mass interacting via the Weeks-Chandler-Andersen potential [9], + +$$V_{\text{WCA}}(r) = \begin{cases} 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right] + \epsilon & \text{for } r \le r_{\text{WCA}} \equiv 2^{1/6}\sigma, \\ 0 & \text{for } r > r_{\text{WCA}}. \end{cases} \quad (9)$$ + +Here, *r* is the interparticle distance, and *ε* and *σ* specify the strength and the interaction radius of the potential, respectively. In addition, two of the *N* particles are bound to each other by a double well potential + +$$V_{\text{dw}}(r) = h \left[ 1 - \frac{(r - r_{\text{WCA}} - w)^2}{w^2} \right]^2, \quad (10)$$ + +where *h* denotes the height of the potential energy barrier separating the potential energy wells located at $r_{\text{WCA}} = 2^{1/6}\sigma$ and $r_{\text{WCA}} + w$. + +**FIGURE 1.** (a) Schematic representation of the diatomic molecule (dark grey disks) held together by a spring immersed in the WCA fluid (light grey disks). (b) Intramolecular (solid line) and intermolecular (dashed line) potential energy. The parameters determining height and width of the double well potential are $h = 6\epsilon$ and $w = 0.5\sigma$. The thin lines denote the "drawbridge" constraining potential used in the thermodynamic integration and are labelled from $\lambda = 10$ to $\lambda = 100$ according to their slopes. The limits $r_A$ and $r_B$ for states A and B, respectively, are shown as vertical dotted lines. + +The diatomic molecule held together by the potential shown in Fig. 1 can reside in two states. In the *contracted* state the interatomic distance *r* fluctuates around $r_{\text{WCA}}$, while in the *expanded* state *r* is close to $r_{\text{WCA}} + w$. Due to interactions with the solvent particles, transitions between the two states can occur provided the total energy of the system is sufficiently high.
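The two interaction potentials of the model, Eqs. (9) and (10), translate directly into code. A minimal sketch; the defaults follow the values quoted in the caption of Fig. 1 (in units where $\epsilon = \sigma = 1$), and everything else is illustrative:

```python
def v_wca(r, eps=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential, Eq. (9): the repulsive part of
    Lennard-Jones, shifted by eps so it vanishes continuously at the cutoff."""
    r_wca = 2.0 ** (1.0 / 6.0) * sigma
    if r > r_wca:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6) + eps

def v_dw(r, h=6.0, w=0.5, sigma=1.0):
    """Double-well bond potential, Eq. (10); h = 6 eps and w = 0.5 sigma
    are the parameter values given in the caption of Fig. 1."""
    r_wca = 2.0 ** (1.0 / 6.0) * sigma
    return h * (1.0 - (r - r_wca - w) ** 2 / w ** 2) ** 2

r_wca = 2.0 ** (1.0 / 6.0)
# approx. 0 at the cutoff, 0 in the contracted well, and barrier height h = 6
print(v_wca(r_wca), v_dw(r_wca), v_dw(r_wca + 0.5))
```

Note that the $+\epsilon$ shift in Eq. (9) makes $V_{\text{WCA}}$ go to zero continuously at $r_{\text{WCA}}$, which the first printed value confirms numerically.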
Collisions with solvent particles provide the energy for activation as well as the dissipation necessary to stabilize the molecule in one of the wells after a barrier crossing has occurred. For high barriers, transitions between the extended and the contracted state are rare. In all calculations the system is defined to be in state A if the interatomic distance $r < r_A = 1.35\sigma$ and in state B if $r > r_B = 1.45\sigma$. These limiting values are denoted by vertical dotted lines in the right hand side panel of Fig. 1. The +---PAGE_BREAK--- + +Newtonian equations of motion are integrated with the velocity Verlet algorithm [10] using a time step of $\Delta t = 0.002(m\sigma^2/\epsilon)^{1/2}$. + +THERMODYNAMIC INTEGRATION + +In Ref. [4] we determined the time correlation function $C(t)$ with an umbrella sampling approach. Here we show how the time correlation function $C(t)$ from Equ. (7) can be calculated with a strategy analogous to thermodynamic integration, a method used to estimate the free energy difference between ensembles [11, 12]. In a conventional thermodynamic integration, one introduces a coupling parameter $\lambda$, which can transform one ensemble into the other when changed from $\lambda_i$ to $\lambda_f$. Derivatives of the free energy with respect to $\lambda$ calculated at intermediate values of $\lambda$ can then be used to compute the free energy difference by numerical integration from $\lambda_i$ to $\lambda_f$. + +Thermodynamic integration can also be used to calculate free energy differences between path ensembles. Such a strategy has in effect been used by S. Sun [13] to efficiently estimate free energy difference in the fast switching method recently proposed by Jarzynski [14, 15, 16, 17, 18]. For our purpose we introduce a function $\Theta(x, \lambda)$ depending on the configuration $x$ and on a parameter $\lambda$. The dependence on $\lambda$ is chosen such that $\Theta(x, \lambda_i) = 1$ and $\Theta(x, \lambda_f) = h_B(x)$. 
Using this function $\Theta$ one can then continuously transform an ensemble of paths starting in A and ending anywhere into an ensemble of pathways beginning in A and ending in B. + +Introducing the partition function + +$$Z(t, \lambda) \equiv \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) \quad (11)$$ + +we generalize the time correlation function $C(t)$ from Equ. (7) as the ratio between partition functions for $\lambda$ and $\lambda_i$: + +$$C(t, \lambda) = Z(t, \lambda) / Z(t, \lambda_i). \qquad (12)$$ + +For $\lambda = \lambda_f$ this function is just the correlation function $C(t) = \exp(-\Delta F)$ we wish to determine. We calculate the reversible work $F(t, \lambda) = -\ln Z(t, \lambda)$ by first taking its derivative with respect to $\lambda$: + +$$\frac{\partial F(t, \lambda)}{\partial \lambda} = -\frac{\partial \ln Z(t, \lambda)}{\partial \lambda} = -\frac{1}{Z(t, \lambda)} \frac{\partial}{\partial \lambda} Z(t, \lambda). \quad (13)$$ + +Using the definition of $Z$ we obtain: + +$$\frac{\partial F(t, \lambda)}{\partial \lambda} = - \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \frac{\partial \Theta(x_t, \lambda)}{\partial \lambda} / Z(t, \lambda). \quad (14)$$ + +To bring this expression into a form amenable to a path sampling simulation we define an “energy” $U(x, \lambda)$ related to the function $\Theta$ by: + +$$U(x, \lambda) = -\ln \Theta(x, \lambda). \quad (15)$$ +---PAGE_BREAK--- + +Inserting the above expression into Eq. (14) we finally obtain: + +$$ +\frac{\partial F(t, \lambda)}{\partial \lambda} = \frac{1}{Z(t, \lambda)} \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) \frac{\partial U(x_t, \lambda)}{\partial \lambda} = \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. 
\quad (16) +$$ + +Here, $\langle \cdots \rangle_{\lambda}$ denotes a path average carried out in the ensemble described by + +$$ +\mathcal{P}[x(t), \lambda] \equiv h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) / Z(t, \lambda). \quad (17) +$$ + +This is the ensemble of all pathways starting in region A with a bias $\Theta(x_t, \lambda)$ acting on $x_t$, the last time slice of the pathway. The biasing function $\Theta(x, \lambda)$ is designed to pull the path endpoints gradually towards region B as $\lambda$ is increased and to finally confine them to region B for $\lambda = \lambda_f$. From derivatives $\partial F(t, \lambda)/\partial \lambda$ computed for several values of $\lambda$ in the range between $\lambda_i$ and $\lambda_f$ one can then calculate the reversible work $\Delta F(t) = F(t, \lambda_f) - F(t, \lambda_i)$ by integration: + +$$ +\Delta F(t) = \int_{\lambda_i}^{\lambda_f} d\lambda \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. \qquad (18) +$$ + +The correlation function we originally set out to compute is then simply given by $C(t) = \exp[-\Delta F(t)]$. + +To study transitions of our solvated diatomic molecule, we introduce a “drawbridge” potential anchored at $r_B$: + +$$ +U(x, \lambda) \equiv \lambda \times [r_B - r(x)] \times \theta[r_B - r(x)]. \quad (19) +$$ + +Here, $r_B$ is the lower limit of $r$ in region $B$ and $\theta$ is the Heaviside theta function. By lifting the drawbridge from $\lambda = 0$ to $\lambda = \infty$ one can continuously confine the initially free endpoints of the pathways to final region $B$. For this drawbridge biasing potential the derivative of the reversible work $F(t, \lambda)$ is given by + +$$ +\frac{\partial F(t, \lambda)}{\partial \lambda} = \left\langle [r_B - r(x_t)] \times \theta[r_B - r(x_t)] \right\rangle_{\lambda}. \quad (20) +$$ + +We have used Equ. (20) to calculate $\partial F(t, \lambda)/\partial \lambda$ for $t = 0.8(m\sigma^2/\epsilon)^{1/2}$ at 101 equidistant values of $\lambda$ in the range from $\lambda = 0$ to $\lambda = 100$.
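The drawbridge integration of Eqs. (18)-(20) can be mimicked in a few lines on a toy problem. The sketch below is not the paper's simulation: it replaces the path ensemble by i.i.d. draws of the endpoint bond length $r(x_t)$ from an invented Gaussian, reweights them by $\exp(-U)$ at each $\lambda$, and integrates $\langle \partial U/\partial \lambda \rangle_\lambda$ with the trapezoidal rule; $\exp(-\Delta F)$ should then roughly reproduce the directly counted fraction of endpoints beyond $r_B$.

```python
import math
import random

def ti_delta_F(r_samples, r_B, lambdas):
    """Thermodynamic integration, Eqs. (18)-(20): integrate the biased
    average <(r_B - r) theta(r_B - r)>_lambda over the lambda grid."""
    d = [max(r_B - r, 0.0) for r in r_samples]     # drawbridge gap per endpoint
    means = []
    for lam in lambdas:
        w = [math.exp(-lam * di) for di in d]      # endpoint weights exp(-U)
        means.append(sum(wi * di for wi, di in zip(w, d)) / sum(w))
    dF = 0.0                                       # trapezoidal rule for Eq. (18)
    for i in range(len(lambdas) - 1):
        dF += 0.5 * (means[i] + means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dF

random.seed(1)
r_samples = [random.gauss(1.2, 0.2) for _ in range(20000)]   # invented endpoint values
r_B = 1.45
lambdas = [2.0 * i for i in range(51)]                       # lambda = 0, 2, ..., 100
dF = ti_delta_F(r_samples, r_B, lambdas)
direct = sum(r >= r_B for r in r_samples) / len(r_samples)
print(math.exp(-dF), direct)   # the two estimates should roughly agree
```

In the actual method the biased averages come from separate path sampling runs at each $\lambda$ rather than from reweighting one sample, which is what keeps the estimator reliable when the constrained region carries very little probability.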
Each single path sampling simulation consisted of $2 \times 10^6$ attempted path moves. In this sequence of path sampling simulations starting at $\lambda = 0$ and ending at $\lambda = 100$, corresponding to a *compression* of pathways, the final path of simulation *n* was used as the initial path for simulation *n* + 1. Results of these simulations are plotted in Fig. 2. Derivatives of the reversible work with respect to $\lambda$ are shown on the left hand side. The right panel contains the reversible work $F(t, \lambda)$ as a function of $\lambda$ as obtained by numerical integration. The plateau value of $F(t, \lambda) = 9.85$ reached at $\lambda \sim 40$ is the reversible work $\Delta F(t)$ necessary to confine the final points of the pathways to region *B*. To investigate if these results are affected by hysteresis, we have carried out a sequence of path sampling simulations corresponding to an *expansion* of the path ensemble. In this sequence of simulations we started with pathways constrained to end in region *B* and then subsequently lowered $\lambda$ from an initial value of 100 +---PAGE_BREAK--- + +**FIGURE 2.** Results of path ensemble thermodynamic integration simulations. Left hand side: derivatives of the reversible work $F(t, \lambda)$ with respect to the coupling parameter $\lambda$ calculated in a path compression simulation (solid line) and in a path expansion simulation (dashed line). In both cases $\partial F/\partial \lambda$ was calculated at 101 equidistant values of $\lambda$ in the range from 0 to 100. Right hand side: Reversible work $F(t, \lambda)$ as a function of $\lambda$ obtained by numerical integration of the curves shown on the left hand side. Again, the solid line denotes results of a path ensemble compression while the dashed line refers to a path ensemble expansion. The free energy difference obtained from these simulations is $\Delta F(t) = 9.85$ corresponding to a correlation function value of $C(t) = 5.27 \times 10^{-5}$.
+ +to a final value of 0. The reversible work and its derivative obtained by path expansions are shown as dashed lines in Fig. 2. Path compression and path expansion yield almost identical results. + +In this work we have borrowed many familiar ideas and techniques from statistical thermodynamics (e.g., reversible work, thermodynamic integration) in order to compute intrinsically dynamical quantities (e.g., rate constants). Thermodynamic concepts become directly useful for this purpose once the dynamical problem has been reduced to characterizing the statistical consequences of imposing constraints (of reactivity) on stationary distributions (of dynamical pathways). This task, in the context of phase space ensembles, is the central challenge of classical statistical mechanics. Remarkably, such a thermodynamic interpretation extends even to the nonequilibrium realm. Recent results concerning *irreversible* transformations between equilibrium states [14, 15, 16, 17, 18] have analogous meaning for finite-time switching between ensembles of trajectories, opening new routes for rate constant calculations. We are working to develop transition path sampling methods exploiting this analogy. + +ACKNOWLEDGMENTS + +P.L.G. is an MIT Science Fellow. The calculations were performed on the Schrödinger II Linux cluster of the Vienna University Computer Center. +---PAGE_BREAK--- + +REFERENCES + +1. C. Dellago, P. G. Bolhuis, F. S. Csajka, and D. Chandler, *J. Chem. Phys.* **108**, 1964 (1998). + +2. C. Dellago, P. G. Bolhuis, and P. L. Geissler, *Adv. Chem. Phys.* **123**, 1 (2002). + +3. P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, *Annu. Rev. Phys. Chem.* **53**, 291 (2002). + +4. C. Dellago, P. G. Bolhuis, and D. Chandler, *J. Chem. Phys.* **108**, 9236 (1998). + +5. P. G. Bolhuis, C. Dellago, and D. Chandler, *Faraday Discuss.* **110**, 421 (1998). + +6. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, *J. Chem.
Phys.* **21**, 1087 (1953). + +7. D. Chandler, *Introduction to Modern Statistical Mechanics*, Oxford University Press (1987). + +8. J. E. Straub, M. Borkovec, and B. J. Berne, *J. Chem. Phys.* **89**, 4833 (1988). + +9. J. D. Weeks, D. Chandler, and H. C. Andersen, *J. Chem. Phys.* **54**, 5237 (1971). + +10. M. P. Allen and D. J. Tildesley, *Computer Simulation of Liquids*, Oxford University Press, Oxford (1987). + +11. J. G. Kirkwood, *J. Chem. Phys.* **3**, 300 (1935). + +12. D. Frenkel and B. Smit, *Understanding Molecular Simulation*, 2nd edition, Academic Press (2002). + +13. S. X. Sun, *J. Chem. Phys.* **118**, 5769 (2003). + +14. C. Jarzynski, *Phys. Rev. Lett.* **78**, 2690 (1997). + +15. C. Jarzynski, *Phys. Rev. E* **56**, 5018 (1997). + +16. G. E. Crooks, *J. Stat. Phys.* **90**, 1481 (1998). + +17. G. E. Crooks, *Phys. Rev. E* **60**, 2721 (1999). + +18. G. E. Crooks, *Phys. Rev. E* **61**, 2361 (2000). \ No newline at end of file diff --git a/samples/texts_merged/5647681.md b/samples/texts_merged/5647681.md new file mode 100644 index 0000000000000000000000000000000000000000..5e4453aad30733ee794e3f447515239632d9c6d7 --- /dev/null +++ b/samples/texts_merged/5647681.md @@ -0,0 +1,487 @@ + +---PAGE_BREAK--- + +A note on sufficiency in binary panel models + +Koen Jochmans, Thierry Magnac + +► To cite this version: + +Koen Jochmans, Thierry Magnac. A note on sufficiency in binary panel models. 2015. hal-01248065 + +HAL Id: hal-01248065 + +https://hal-sciencespo.archives-ouvertes.fr/hal-01248065 + +Preprint submitted on 23 Dec 2015 + +**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+ +---PAGE_BREAK--- + +# A NOTE ON SUFFICIENCY IN BINARY PANEL MODELS + +Koen Jochmans +Thierry Magnac +---PAGE_BREAK--- + +A NOTE ON SUFFICIENCY IN BINARY PANEL MODELS + +KOEN JOCHMANS AND THIERRY MAGNAC + +December 4, 2015 + +Consider estimating the slope coefficients of a fixed-effect binary-choice model from two-period panel data. Two approaches to semiparametric estimation at the regular parametric rate have been proposed. One is based on a sufficient statistic, the other is based on a conditional-median restriction. We show that, under standard assumptions, both approaches are equivalent. + +KEYWORDS: binary choice, fixed effects, panel data, regular estimation, sufficiency. + +INTRODUCTION + +A classic problem in panel data analysis is the estimation of the vector of slope coefficients, $\beta$, in fixed-effect linear models from binary response data on $n$ observations. + +In seminal work, Rasch (1960) constructed a conditional maximum-likelihood estimator for the fixed-effect logit model by building on a sufficiency argument. Chamberlain (2010) and Magnac (2004) have shown that sufficiency is necessary for estimation at the $n^{-1/2}$ rate to be possible in general.
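Rasch's sufficiency argument is easy to see in the two-period logit: conditional on $y_{i1} + y_{i2} = 1$ the fixed effect $\alpha_i$ drops out and $\Pr(y_{i2} = 1 \mid y_{i1} + y_{i2} = 1, x_i) = \Lambda(\Delta x_i \beta)$, where $\Lambda$ is the logistic CDF. The following is a minimal simulation sketch with a hypothetical scalar-covariate design, not code from this paper:

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def conditional_loglik(beta, movers):
    """Conditional log-likelihood of the two-period fixed-effect logit:
    conditioning on y_i1 + y_i2 = 1 removes alpha_i, leaving a logit in dx."""
    ll = 0.0
    for dx, dy in movers:            # dy = y_i2 - y_i1, nonzero for movers
        p = logistic(dx * beta)      # Pr(y_i2 = 1 | y_i1 + y_i2 = 1, x_i)
        ll += math.log(p) if dy == 1 else math.log(1.0 - p)
    return ll

# Simulate a scalar-covariate panel with heterogeneous fixed effects.
random.seed(0)
beta0, movers = 1.0, []
for _ in range(5000):
    a = random.gauss(0.0, 1.0)                        # fixed effect alpha_i
    x1, x2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    y1 = random.random() < logistic(x1 * beta0 + a)
    y2 = random.random() < logistic(x2 * beta0 + a)
    if y1 != y2:                                      # keep the movers only
        movers.append((x2 - x1, int(y2) - int(y1)))

# Crude grid search for the conditional maximum-likelihood estimate.
beta_hat = max((b / 50.0 for b in range(-150, 151)),
               key=lambda b: conditional_loglik(b, movers))
print(beta_hat)   # should land near beta0 = 1.0
```

The conditional likelihood only uses the "movers", observations whose outcome changes between the periods, which is exactly the information that survives the elimination of $\alpha_i$.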
+ +Lee (1999) has given an alternative conditional-median restriction and derived a $n^{-1/2}$-consistent maximum rank-correlation estimator of $\beta$. He provided sufficient conditions for this condition to hold that restrict the distribution of the fixed effects and the covariates. It can be shown that these restrictions involve the unknown parameter $\beta$ through index-sufficiency requirements on the distribution of the covariates, and that these can severely restrict the values that $\beta$ is allowed to take. + +In this note we reconsider the conditional-median restriction of Lee (1999) under standard assumptions and look for conditions that imply it to hold for any $\beta$. We find that imposing the + +Department of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France. +koen.jochmans@sciencespo.fr. + +GREMAQ and IDEI, Toulouse School of Economics, 21 Allée de Brienne, 31000 Toulouse, France. +thierry.magnac@tse-fr.eu. +---PAGE_BREAK--- + +conditional-median restriction is equivalent to requiring sufficiency. + +1. MODEL AND ASSUMPTIONS + +Suppose that binary outcomes $y_i = (y_{i1}, y_{i2})$ relate to a set of observable covariates $x_i = (x_{i1}, x_{i2})$ through the threshold-crossing model + +$$y_{i1} = 1\{x_{i1}\beta + \alpha_i \geq u_{i1}\}, \quad y_{i2} = 1\{x_{i2}\beta + \alpha_i \geq u_{i2}\},$$ + +where $u_i = (u_{i1}, u_{i2})$ are latent disturbances, $\alpha_i$ is an unobserved effect, and $\beta$ is a parameter vector of conformable dimension, say $k$. The challenge is to construct an estimator of $\beta$ from a random sample ${(y_i, x_i), i = 1, \dots, n}$ that converges at the regular $n^{-1/2}$ rate. + +Let $\Delta y_i = y_{i2} - y_{i1}$ and $\Delta x_i = x_{i2} - x_{i1}$. The following assumption will be maintained throughout. + +ASSUMPTION 1 (Identification and regularity) + +(a) $u_i$ is independent of $(x_i, \alpha_i)$. + +(b) $\Delta x_i$ is not contained in a proper linear subspace of $\mathbb{R}^k$. 
+ +(c) The first component of $\Delta x_i$ varies continuously over $\mathbb{R}$ (for almost all values of the other components) and the first component of $\beta$ is not equal to zero. + +(d) $\alpha_i$ varies continuously over $\mathbb{R}$ (for almost all values of $x_i$). + +(e) The distribution of $u_i$ admits a strictly positive, continuous, and bounded density function with respect to Lebesgue measure. + +Parts (a)-(c) collect sufficient conditions that ensure that $\beta$ is identified while Parts (d)-(e) are conventional regularity conditions (see Magnac 2004). From here on out we omit the 'almost surely' qualifier from all conditional statements. + +Assumption 1 does not parametrize the distribution of $u_i$ nor does it restrict the dependence between $\alpha_i$ and $x_i$ beyond the complete-variation requirement of Assumption 1(d). As such, our approach is semiparametric and we treat the $\alpha_i$ as fixed effects. + +2. CONDITIONS FOR REGULAR ESTIMATION + +Magnac (2004, Theorem 1) has shown that, under Assumption 1, the semiparametric efficiency bound for $\beta$ is zero unless $y_{i1} + y_{i2}$ is a sufficient statistic for $\alpha_i$. Sufficiency can be stated as follows.
+
+On the other hand, Lee (1999) considered estimation of $\beta$ based on a sign restriction. We write
+$\mathrm{med}(x)$ for the median of random variable $x$ and let $\operatorname{sgn}(x) = 1\{x > 0\} - 1\{x < 0\}$.
+
+**CONDITION 2 (Median restriction)** For any two observations $i$ and $j$,
+
+$$ \mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \mid x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta) $$
+
+holds.
+
+Condition 2 suggests a rank estimator for $\beta$. Conditions for this estimator to be $n^{-1/2}$-consistent
+are stated in Sherman (1993).
+
+Lee (1999, Assumption 1) restricted the joint distribution of $\alpha_i, x_i$, and $x_{i1}\beta, x_{i2}\beta$ to ensure that
+Condition 2 holds. Aside from these restrictions going against the fixed-effect approach, they do
+not hold uniformly in $\beta$, in general. The Appendix contains additional discussion and an example.
+
+### 3. EQUIVALENCE
+
+The main result of this paper is the equivalence of Conditions 1 and 2 as requirements for $n^{-1/2}$-consistent estimation of any $\beta$.
+
+**THEOREM 1 (Equivalence)** *Under Assumption 1, Condition 2 holds for any $\beta$ if and only if Condition 1 holds.*
+
+PROOF: We start with two lemmas that are instrumental in proving Theorem 1.
+---PAGE_BREAK---
+
+LEMMA 1 (Sufficiency) Condition 1 is equivalent to the existence of a continuously differentiable,
+strictly decreasing function $c$, independent of $\alpha_i$, such that
+
+$$
+\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = c(\Delta x_i \beta)
+$$
+
+for all $\alpha_i \in \mathbb{R}$.
+
+PROOF: Conditional on $\Delta y_i \neq 0$ and on $\alpha_i, x_i$, the variable $\Delta y_i$ is Bernoulli with success probability
+
+$$
+\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \frac{1}{1 + \frac{\mathrm{Pr}(\Delta y_i = -1 | x_i, \alpha_i)}{\mathrm{Pr}(\Delta y_i = 1 | x_i, \alpha_i)}}.
+$$
+
+Re-arranging this expression and imposing Condition 1 shows that
+
+$$
+\frac{\Pr(\Delta y_i = -1|x_i, \alpha_i)}{\Pr(\Delta y_i = 1|x_i, \alpha_i)} = \frac{1 - G(\Delta x_i \beta)}{G(\Delta x_i \beta)},
+$$
+
+which is a function of $\Delta x_i \beta$ only. Monotonicity of this function follows easily, as in Magnac (2004,
+Proof of Theorem 2). This completes the proof of Lemma 1.
+Q.E.D.
+
+LEMMA 2 (Median restriction) Let
+
+$$
+\tilde{c}(x_i) = \frac{\Pr(\Delta y_i = -1|x_i)}{\Pr(\Delta y_i = 1|x_i)}.
+$$
+
+Condition 2 is equivalent to the sign restriction
+
+$$
+\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta)
+$$
+
+holding for any two observations $i$ and $j$.
+
+PROOF: Conditional on $\Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j$ (and the covariates),
+
+$$
+\frac{\Delta y_i - \Delta y_j}{2} = \begin{cases} 1 & \text{if } \Delta y_i = 1 \text{ and } \Delta y_j = -1 \\ -1 & \text{if } \Delta y_j = 1 \text{ and } \Delta y_i = -1. \end{cases}
+$$
+
+Therefore, it is Bernoulli with success probability
+
+$$
+\mathrm{Pr}(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j) = \frac{1}{1 + r(x_i, x_j)},
+$$
+
+where
+
+$$
+r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}.
+$$
+---PAGE_BREAK---
+
+Note that
+
+$$
+\mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \middle| x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \operatorname{sgn} \left( \frac{1}{1+r(x_i, x_j)} - \frac{r(x_i, x_j)}{1+r(x_i, x_j)} \right).
+$$
+
+By the Bernoulli nature of the outcomes in the first step and random sampling of the observations
+in the second step, we have that
+
+$$
+r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j)} = \frac{\Pr(\Delta y_i = -1 | x_i) \Pr(\Delta y_j = 1 | x_j)}{\Pr(\Delta y_i = 1 | x_i) \Pr(\Delta y_j = -1 | x_j)} = \frac{\tilde{c}(x_i)}{\tilde{c}(x_j)}.
+$$
+
+Therefore, Condition 2 can be written as
+
+$$
+\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
+$$
+
+This completes the proof of Lemma 2.
+
+Q.E.D.
+
+We first establish that Condition 1 implies Condition 2. Armed with Lemmas 1 and 2 this is a
+simple task. First note that, because the function $c$ is strictly decreasing by Lemma 1, Condition
+1 implies that
+
+$$
+\operatorname{sgn}(c(\Delta x_j \beta) - c(\Delta x_i \beta)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
+$$
+
+Under Condition 1 we also have that
+
+$$
+c(\Delta x_i \beta) = \frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(\Delta y_i = -1 | x_i)}{\Pr(\Delta y_i = 1 | x_i)} = \tilde{c}(x_i).
+$$
+
+Therefore,
+
+$$
+\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
+$$
+
+By Lemma 2, this is Condition 2.
+
+To see that Condition 2 implies Condition 1, first note that
+
+$$
+\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(u_{i1} \le \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta, u_{i2} > \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)}{\Pr(u_{i1} > \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta, u_{i2} \le \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)},
+$$
+
+where we let $\tilde{\alpha}_i = \alpha_i + \frac{1}{2}(x_{i1} + x_{i2})\beta$.
Therefore,
+
+$$
+\mathrm{Pr}(\Delta y_i = 1|x_i, \Delta y_i \neq 0, \alpha_i) = \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)
+$$
+
+for some function $\tilde{G}$, and
+
+$$
+\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = \int \tilde{G}(\Delta x_i \beta, \tilde{\alpha}) P(d\tilde{\alpha} | x_i, \Delta y_i \neq 0),
+$$
+---PAGE_BREAK---
+
+where $P(\tilde{\alpha}|x_i, \Delta y_i \neq 0)$ denotes the distribution of $\tilde{\alpha}_i$ given $x_i$ and $\Delta y_i \neq 0$. Next, by Lemma 2, Condition 2 implies that
+
+$$ \Delta x_i \beta = \Delta x_j \beta \iff \tilde{c}(x_i) = \tilde{c}(x_j) \iff E[\tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)|x_i, \Delta y_i \neq 0] = E[\tilde{G}(\Delta x_j \beta, \tilde{\alpha}_j)|x_j, \Delta y_j \neq 0]. $$
+
+Hence, it must hold that
+
+$$ \int_{-\infty}^{+\infty} \tilde{G}(v, \tilde{\alpha}) \{ P(d\tilde{\alpha}|x_i, \Delta y_i \neq 0) - P(d\tilde{\alpha}|x_j, \Delta y_j \neq 0) \} = 0 $$
+
+for all values $v \in \mathbb{R}$ and all $(x_i, x_j)$. Because the distribution of $\tilde{\alpha}_i$ given $x_i$ and $\Delta y_i \neq 0$ is unrestricted, this condition holds if and only if the function $\tilde{G}$ does not depend on $\tilde{\alpha}_i$, and hence not on $\alpha_i$. Moreover, we must then have that
+
+$$ \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = G(\Delta x_i \beta) $$
+
+for some function $G$. This is Condition 1, which completes the proof of Theorem 1. Q.E.D.
+
+## APPENDIX (NOT FOR PUBLICATION)
+
+The notation in Lee (1999) decomposes $x$ into its single continuously varying component, whose coefficient is normalized to 1, and the remaining variables. We denote by $a$ the first component and by $z$ the remaining variables, so that $x = (a, z)$. We denote by $\theta$ the coefficient of $z$ in $x\beta$, so that $\beta = (1, \theta)$, and omit the subscript $i$ throughout.
+
+Assumptions (g) and (h) of Lee (1999) can be written as
+
+$$ (g) \quad \alpha \perp \Delta z | \Delta a + \theta \Delta z, $$
+
+$$ (h) \quad a_1 + \theta z_1 \perp \Delta z | \Delta a + \theta \Delta z, \alpha, $$
+
+in which, e.g., $\Delta z = z_2 - z_1$.
+
+We first prove that these conditions imply an index-sufficiency requirement on the distribution function of the regressors. Second, we provide an example in which these conditions restrict the parameter of interest to only two possible values, except in non-generic cases.
+
+### Index sufficiency
+
+Denote by $f$ the density with respect to some dominating measure and rewrite (h) as
+
+$$ f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z, \alpha). $$
+
+As Condition (g) can be written as
+
+$$ f(\Delta z | \Delta a + \theta \Delta z, \alpha) = f(\Delta z | \Delta a + \theta \Delta z), $$
+---PAGE_BREAK---
+
+we therefore have that
+
+$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z),$$
+
+which we can multiply by $f(\alpha | \Delta a + \theta \Delta z)$ and integrate with respect to $\alpha$ to get
+
+$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z) f(\Delta z | \Delta a + \theta \Delta z).$$
+
+As this expression can be rewritten as
+
+$$f(\Delta z | \Delta a + \theta \Delta z, a_1 + z_1 \theta) = f(\Delta z | \Delta a + \theta \Delta z),$$
+
+Conditions (g) and (h) of Lee (1999) demand that
+
+$$f(\Delta z | a_1 + z_1\theta, a_2 + z_2\theta) = f(\Delta z | \Delta a + \theta\Delta z, a_1 + z_1\theta) = f(\Delta z | \Delta a + \theta\Delta z),$$
+
+or, in terms of the original variables, that
+
+$$f(\Delta z | x_1\beta, x_2\beta) = f(\Delta z | \Delta x\beta).$$
+
+This is an index-sufficiency requirement on the data
generating process of the regressors $x$ that is
+driven by the parameter of interest, $\beta$.
+
+*Example*
+
+To illustrate, suppose that $z$ is a one-dimensional regressor and that the regressors are jointly normal
+with a restricted covariance matrix allowing for contemporaneous correlation only. Specifically,
+
+$$\begin{pmatrix} a_1 \\ a_2 \\ z_1 \\ z_2 \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_{a_1} \\ \mu_{a_2} \\ \mu_{z_1} \\ \mu_{z_2} \end{pmatrix}, \begin{pmatrix} \sigma_{a_1}^2 & 0 & \sigma_{a_1 z_1} & 0 \\ 0 & \sigma_{a_2}^2 & 0 & \sigma_{a_2 z_2} \\ \sigma_{a_1 z_1} & 0 & \sigma_{z_1}^2 & 0 \\ 0 & \sigma_{a_2 z_2} & 0 & \sigma_{z_2}^2 \end{pmatrix} \right).$$
+
+Then
+
+$$\begin{pmatrix} \Delta z \\ x_1\beta \\ x_2\beta \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{pmatrix} \right)$$
+
+for
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+\mu_1 &= \mu_{z_2} - \mu_{z_1} \\
+\mu_2 &= \mu_{a_1} + \mu_{z_1} \theta \\
+\mu_3 &= \mu_{a_2} + \mu_{z_2} \theta
+\end{align*}
+$$
+
+and
+
+$$
+\begin{align*}
+\Sigma_{11} &= \operatorname{var}(\Delta z) = \operatorname{var}(z_1) + \operatorname{var}(z_2) \\
+\Sigma_{12} &= \operatorname{cov}(\Delta z, x_1 \beta) = -\operatorname{cov}(z_1, a_1 + z_1 \theta) \\
+&= -\operatorname{cov}(a_1, z_1) - \theta \operatorname{var}(z_1) \\
+&= -\sigma_{a_1 z_1} - \theta \sigma_{z_1}^2 \\
+\Sigma_{13} &= \operatorname{cov}(\Delta z, x_2 \beta) = \operatorname{cov}(z_2, a_2 + z_2 \theta) \\
+&= \operatorname{cov}(a_2, z_2) + \theta \operatorname{var}(z_2) \\
+&= \sigma_{a_2 z_2} + \theta \sigma_{z_2}^2 \\
+\Sigma_{22} &= \operatorname{var}(x_1 \beta) = \operatorname{var}(a_1 + z_1 \theta) \\
+&= \operatorname{var}(a_1) + \theta^2 \operatorname{var}(z_1) + 2\theta \operatorname{cov}(a_1, z_1) \\
+&= \sigma_{a_1}^2 + 2\theta \sigma_{a_1 z_1} + \theta^2
\sigma_{z_1}^2 \\
+\Sigma_{33} &= \operatorname{var}(x_2 \beta) = \operatorname{var}(a_2 + z_2 \theta) \\
+&= \operatorname{var}(a_2) + \theta^2 \operatorname{var}(z_2) + 2\theta \operatorname{cov}(a_2, z_2) \\
+&= \sigma_{a_2}^2 + 2\theta \sigma_{a_2 z_2} + \theta^2 \sigma_{z_2}^2 \\
+\Sigma_{23} &= \operatorname{cov}(x_1 \beta, x_2 \beta) = 0.
+\end{align*}
+$$
+
+From standard results on the multivariate normal distribution we have that
+
+$$
+\Delta z | x_1 \beta, x_2 \beta
+$$
+
+is normal with constant variance and conditional mean function
+
+$$
+m(x_1\beta, x_2\beta) = \mu_1 + \frac{(\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23})(x_2\beta - \mu_3) - (\Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33})(x_1\beta - \mu_2)}{\Sigma_{22}\Sigma_{33} - \Sigma_{23}^2}.
+$$
+
+To satisfy the condition of index sufficiency we need that
+
+$$
+(\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23}) = (\Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33}).
+$$
+
+Plugging in the expressions from above, this becomes
+
+$$(\sigma_{a_2 z_2} + \theta \sigma_{z_2}^2)(\sigma_{a_1}^2 + 2\theta\sigma_{a_1 z_1} + \theta^2\sigma_{z_1}^2) = (\sigma_{a_1 z_1} + \theta\sigma_{z_1}^2)(\sigma_{a_2}^2 + 2\theta\sigma_{a_2 z_2} + \theta^2\sigma_{z_2}^2).$$
+---PAGE_BREAK---
+
+We can write this condition as the third-order polynomial equation (in $\theta$)
+
+$$C + B\theta + A\theta^2 + D\theta^3 = 0$$
+
+with coefficients
+
+$$
+\begin{align*}
+C &= \sigma_{a_1}^2 \sigma_{a_2 z_2} - \sigma_{a_2}^2 \sigma_{a_1 z_1} \\
+B &= \sigma_{a_1}^2 \sigma_{z_2}^2 + 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} - \sigma_{a_2}^2 \sigma_{z_1}^2 - 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} \\
+ &= \sigma_{a_1}^2 \sigma_{z_2}^2 - \sigma_{a_2}^2 \sigma_{z_1}^2 \\
+A &= \sigma_{a_1 z_1} \sigma_{z_2}^2 - \sigma_{a_2 z_2} \sigma_{z_1}^2 \\
+D &= 0.
+\end{align*}
+$$
+
+For $t = 1, 2$, let
+
+$$\rho_t = \frac{\sigma_{a_t z_t}}{\sigma_{a_t} \sigma_{z_t}}, \quad r_t = \frac{\sigma_{a_t}}{\sigma_{z_t}}.$$
+
+Then
+
+$$
+\begin{align*}
+\frac{C}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \rho_2 r_1 - \rho_1 r_2 \\
+\frac{B}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{r_1}{r_2} - \frac{r_2}{r_1} \\
+\frac{A}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}.
+\end{align*}
+$$
+
+The polynomial condition therefore is
+
+$$(\rho_2 r_1 - \rho_1 r_2) + \left( \frac{r_1}{r_2} - \frac{r_2}{r_1} \right) \theta + \left( \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1} \right) \theta^2 = 0.$$
+
+Note that the leading polynomial coefficient is equal to zero if and only if $\rho_1 r_1 = \rho_2 r_2$. This leads to three mutually exclusive cases:
+
+(i) The data are stationary, that is, $\rho_1 = \rho_2$ and $r_1 = r_2$. Then all polynomial coefficients are zero, so that all values of $\theta$ satisfy Lee's restriction.
+
+(ii) We have $\rho_1 r_1 = \rho_2 r_2$ but $r_1 \neq r_2$. Then the resulting linear equation admits one and only one solution in $\theta$.
+
+(iii) The leading polynomial coefficient is non-zero, so $\rho_1 r_1 \neq \rho_2 r_2$. In this case the discriminant
+---PAGE_BREAK---
+
+of the second-order polynomial equals
+
+$$
+\begin{align*}
+\Delta &= \left(\frac{r_1}{r_2} - \frac{r_2}{r_1}\right)^2 - 4 \left(\frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}\right) (\rho_2 r_1 - \rho_1 r_2) \\
+&= \left(\frac{r_1}{r_2}\right)^2 + \left(\frac{r_2}{r_1}\right)^2 - 2 - 4 \left( \rho_1 \rho_2 \left\{ \frac{r_1}{r_2} + \frac{r_2}{r_1} \right\} - (\rho_1^2 + \rho_2^2) \right).
+\end{align*}
+$$
+
+Set $x = \frac{r_1}{r_2} > 0$ and write
+
+$$
+\Delta(x) = x^2 + \frac{1}{x^2} - 2 - 4\left(\rho_1\rho_2\left(x + \frac{1}{x}\right) - (\rho_1^2 + \rho_2^2)\right),
+$$
+
+which is smooth for $x > 0$.
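+
+The algebra above is straightforward to verify numerically. The sketch below is our own check, with arbitrary hypothetical variance and correlation values; it confirms the scaled forms of $C$, $B$, and $A$ and evaluates $\Delta(x)$ on a grid of $x$ values.
+
+```python
+import numpy as np
+
+def verify(n_draws=200, seed=1):
+    """Check the scaled polynomial coefficients and the sign of the
+    discriminant Delta(x) for random parameter values."""
+    rng = np.random.default_rng(seed)
+    x = np.linspace(0.05, 20.0, 400)          # grid for x = r1/r2 > 0
+    for _ in range(n_draws):
+        sa1, sa2, sz1, sz2 = rng.uniform(0.5, 2.0, size=4)
+        rho1, rho2 = rng.uniform(-0.9, 0.9, size=2)
+        # Covariances sigma_{a_t z_t} implied by the correlations.
+        c1, c2 = rho1 * sa1 * sz1, rho2 * sa2 * sz2
+        # Raw coefficients in the sigma parametrization.
+        C = sa1**2 * c2 - sa2**2 * c1
+        B = sa1**2 * sz2**2 - sa2**2 * sz1**2
+        A = c1 * sz2**2 - c2 * sz1**2
+        # Scaled forms in terms of rho_t and r_t = sigma_a / sigma_z.
+        r1, r2 = sa1 / sz1, sa2 / sz2
+        scale = sa1 * sa2 * sz1 * sz2
+        if not (np.isclose(C / scale, rho2 * r1 - rho1 * r2)
+                and np.isclose(B / scale, r1 / r2 - r2 / r1)
+                and np.isclose(A / scale, rho1 / r2 - rho2 / r1)):
+            return False
+        # Discriminant of the quadratic in theta, as a function of x;
+        # note x**2 + 1/x**2 - 2 == (x - 1/x)**2.
+        delta = (x - 1.0 / x)**2 - 4.0 * (
+            rho1 * rho2 * (x + 1.0 / x) - (rho1**2 + rho2**2))
+        if delta.min() < -1e-9:
+            return False
+    return True
+
+assert verify()
+```
+
+Every draw satisfies both properties; the non-negativity of $\Delta(x)$ observed here is exactly what the derivative argument that follows establishes analytically.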
The derivative of $\Delta$ with respect to $x$ equals
+
+$$
+\begin{align*}
+\Delta'(x) &= 2x - \frac{2}{x^3} - 4\rho_1\rho_2\left(1 - \frac{1}{x^2}\right) \\
+&= \frac{2}{x^3}(x^4 - 1) - \frac{4\rho_1\rho_2}{x^2}(x^2 - 1) \\
+&= \frac{2}{x^3}(x^2 - 1)(x^2 + 1 - 2\rho_1\rho_2 x).
+\end{align*}
+$$
+
+Note that the Cauchy-Schwarz inequality implies that $x^2 + 1 - 2\rho_1\rho_2 x \ge 0$, so that, for $x > 0$,
+
+$$
+\operatorname{sgn}(\Delta'(x)) = \operatorname{sgn}(x - 1).
+$$
+
+Hence, $\Delta$ attains its minimum at $x = 1$. Further, $\Delta(1) = 4(\rho_1 - \rho_2)^2 \ge 0$, so $\Delta(x)$ is always non-negative. Hence, in this case, the polynomial condition generically has two solutions in $\theta$.
+
+Conclusion
+
+Conditions (g) and (h) of Lee (1999) imply an index-sufficiency condition on the distribution function of the regressors. In generic cases of a standard example, this condition is restrictive: it is satisfied not by every possible value of the parameter of interest, $\theta$, but by at most two.
+
+REFERENCES
+
+Chamberlain, G. (2010), “Binary Response Models for Panel Data: Identification and Information,” *Econometrica*, 78, 159–168.
+
+Horowitz, J. L. (1992), “A Smoothed Maximum Score Estimator for the Binary Response Model,” *Econometrica*, 60, 505–531.
+
+Klein, R. W., and Spady, R. H. (1993), “An Efficient Semiparametric Estimator for Binary Choice Models,” *Econometrica*, 61, 387–421.
+
+Lee, M.-J. (1999), “A Root-N Consistent Semiparametric Estimator for Related-Effects Binary Response Panel Data,” *Econometrica*, 67, 427–433.
+---PAGE_BREAK---
+
+Magnac, T. (2004), “Panel Binary Variables and Sufficiency: Generalizing Conditional Logit,” *Econometrica*, 72, 1859–1876.
+
+Manski, C. F. (1987), “Semiparametric Analysis of Random Effects Linear Models from Binary Panel Data,” *Econometrica*, 55, 357–362.
+
+Rasch, G. (1960), “Probabilistic Models for Some Intelligence and Attainment Tests,” Unpublished report, The Danish Institute of Educational Research, Copenhagen.
+
+Sherman, R. P.
(1993), “The Limiting Distribution of the Maximum Rank Correlation Estimator,” *Econometrica*, 61, 123–137. \ No newline at end of file diff --git a/samples/texts_merged/5718759.md b/samples/texts_merged/5718759.md new file mode 100644 index 0000000000000000000000000000000000000000..e512cd03e178f4db5123eb0163704b4e4acb50bb --- /dev/null +++ b/samples/texts_merged/5718759.md @@ -0,0 +1,262 @@
+
+---PAGE_BREAK---
+
+Probing local density of states near the diffraction limit using nanowaveguide coupled cathode luminescence
+
+Yoshinori Uemura,¹ Masaru Irita,¹ Yoshikazu Homma,¹ and Mark Sadgrove*¹
+
+¹Department of Physics, Faculty of Science, Tokyo University of Science,
+1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan*
+
+The photonic local density of states (PLDOS) determines the light-matter interaction strength in nanophotonic devices. For standard dielectric devices, the PLDOS is fundamentally limited by diffraction, but its precise dependence on the size parameter *s* of a device can be non-trivial. Here, we measure the PLDOS dependence on the size parameter in a waveguide using a new technique: nanowaveguide coupled cathode luminescence (CL). We observe that, depending on the position within the waveguide cross-section, the effective diffraction limit of the PLDOS varies and the PLDOS peak shape changes. Our results are of fundamental importance for optimizing coupling to nanophotonic devices, and also open new avenues for spectroscopy based on evanescently coupled CL.
+
+# I. INTRODUCTION
+
+The rate of decay of an emitter into a given optical mode is governed by Fermi's golden rule, and is proportional to the photonic local density of states (PLDOS) $\rho$ associated with that mode. A fundamental limit on $\rho$ for nanophotonic devices is the diffraction limit, which places a lower bound of ~$\lambda/2$ on the mode size in a given dimension [1].
Dielectric devices with a characteristic size smaller than this have sub-optimal PLDOS due to redistribution of mode amplitude into the evanescent region, i.e., a loss of mode confinement. An operational definition of the diffraction limit for nanodevices is, therefore, the size at which the PLDOS is maximized.
+
+An important class of diffraction-limited nanodevices is that of nanowaveguides, which are used in fields ranging from quantum optics [2] and optomechanics [3] through to particle manipulation [4]. For certain nanowaveguide types, systematic measurement of the photonic local density of states via cathode luminescence (CL) spectroscopy [5-7] has been achieved via leaky modes. In this remarkable technique, depicted in Fig. 1(a), electrons incident on a device induce luminescence, offering essentially tomographic PLDOS reconstruction due to the point-dipole-like excitation provided by the electron beam [8-10]. However, because luminescence is collected in the far field, the PLDOS of true waveguide modes (which by definition do not couple to radiation modes) cannot be measured in general. Furthermore, although it is well known that an optimal diameter exists for coupling to nanowaveguides [11], to the best of our knowledge no systematic measurement of the diffraction-limited behavior of waveguide PLDOS has been performed.
+
+Here, we detect CL emitted into the fundamental mode of a nanowaveguide (optical fiber taper) as depicted in Fig. 1(b). We use this new technique to characterize hitherto unmeasured aspects of the waveguide mode PLDOS. In particular, we measure the PLDOS dependence on the waveguide size parameter $s$ (defined below) around the diffraction limit. Using different electron energies, we probe the PLDOS i) close to the waveguide surface, where the near-field character of the mode is strong, and ii) nearer to the waveguide center, where the mode has a standard transverse wave character.
These two regimes are shown to exhibit different dependence on the size parameter, and in particular a different effective diffraction limit. These results shed light on a fundamental characteristic of nanowaveguides, and illuminate the subtle nature of the widely used diffraction limit concept for nanophotonic devices. Furthermore, the new method of waveguide-coupled CL promises a novel way to create fiber-coupled, electrically driven photon sources and to probe previously inaccessible characteristics of optical near-fields using the CL technique.
+
+# II. PRINCIPLE AND METHODS
+
+The principle of our experiment is shown in Figs. 1(b) and (c). Electrons from a scanning electron microscope (SEM) penetrate a vacuum clad silica fiber (core refractive index $n_{co} = 1.46$) of radius $a$ ($200 \text{ nm} \le a \le 1 \text{ µm}$) to a depth $\delta$ which depends on the electron energy. The electrons induce luminescence in the silica, a portion of which couples directly to the fiber fundamental modes with an intensity that depends on the photonic local density of states of the modes. As shown in Fig. 1(c), for a given value of $\delta$ and a position $y$ along the fiber cross section, the radial position $r$ and angle $\theta$ of the electron stopping position can be defined, with $\phi = \sin^{-1}(y/a)$, $r = \sqrt{y^2 + (a \cos\phi - \delta)^2}$ and $\theta = \pi/2 - \cos^{-1}(y/r)$. In Fig. 1(d), the so-parameterized stopping point of the electrons as a function of $y$ is overlaid on the profile of a fundamental fiber mode for the case where $a = 200 \text{ nm}$ and the CL wavelength is 659 nm, for three different values of $\delta$.
+
+As shown in Fig. 1(e), we assume that the measured light is from incoherent CL [5] which is produced in an effective
+
+* mark.sadgrove@rs.tus.ac.jp
+---PAGE_BREAK---
+
+FIG. 1. Principle of the experiment. (a) Example of a standard cathode luminescence spectroscopy experiment. A resonant mode leaks photons which reach a detector in the far field. (b) Concept of the present work. Electrons are incident on a vacuum clad optical fiber of radius $a$ and CL is detected through the guided mode itself. (c) Electrons incident at a point $(a, \phi)$ penetrate a distance $\delta$ into the fiber to point $(r, \theta)$ and induce cathode luminescence which couples directly to the fiber fundamental mode. (d) Intensity $|e|^2$ of a circularly-polarized fundamental (HE$_{11}$) mode of the fiber with curves showing electron stopping position for $\delta = 10$ nm (solid line), $\delta = 50$ nm (dotted line) and $\delta = 100$ nm (dashed line). (e) Emission model. The energetic electron is assumed to excite an emitter within the fiber silica matrix to a high energy level which then decays by non-radiative processes before emitting a randomly polarized photon into the fiber fundamental mode with propagation constant $\beta$ at a center wavelength near 659 nm. (f) The thick red (magenta) line shows the normalized photonic local density of states $\bar{\gamma}_g$ at the fiber surface (center) as a function of the size parameter $s$. Also shown are $v_g/c$ (dotted blue line) and the effective refractive index of the mode $n_{\text{eff}}$ (dotted black line).
+
+off-resonant excitation process in which unpaired oxygen defect centers in the silica [12] are excited to a high energy level which decays non-radiatively before a final radiative transition produces randomly polarized luminescence with a phonon-broadened spectrum. The emission is assumed to occur at the point in the material where the electron comes to a stop, i.e., a distance $\delta$ from the fiber surface. (In fact, the process is more complicated: a cascade of secondary electrons is also created after the primary electron enters the material, and CL can originate from these electrons too.
For the 0.5 keV energy used predominantly in this work, this cascade region is approximately 10 nm in diameter. We account for this behavior phenomenologically by treating the electron beam as having a Gaussian distribution of a similar width and convolving this distribution with the PLDOS.)
+
+Assuming a single-mode fiber, the coupled intensity of the CL is proportional to the decay rate $\gamma_g$ into the fundamental fiber modes at the position $\mathbf{r}_0$ in the fiber where CL is generated. In general we may write this relation as [1, 13]
+
+$$ \gamma_g = \frac{2\mu_0\omega_0^2}{\hbar} \text{Im}[\mathbf{p} \cdot \mathbf{G}_T(\mathbf{r}_0, \mathbf{r}_0, \omega_0) \cdot \mathbf{p}], $$
+
+where $\omega_0$ is the transition resonant frequency, $\mathbf{p}$ is the dipole moment, and $\mathbf{G}_T$ is the guided-mode transverse Green tensor. The imaginary part of the Green tensor may be evaluated [13, 14], yielding
+
+$$ \text{Im}[\mathbf{G}_T(\mathbf{r}_0, \mathbf{r}_0, \omega_0)] = \frac{c^2 \mathbf{e}(\mathbf{r}_0) \mathbf{e}^*(\mathbf{r}_0)}{4 v_g \omega_0}. $$
+
+Here, $v_g$ is the mode group velocity and $\mathbf{e}(\mathbf{r}_0)$ is taken to be the normalized mode function of the positive-propagating, left-hand circularly polarized HE$_{11}$ fundamental mode of the fiber. The mode function is normalized according to the condition $1 = \int d^2r\, n(r)^2 |\mathbf{e}(\mathbf{r})|^2$, where the integral is taken over a plane perpendicular to the fiber axis. The product of mode functions is interpreted as a dyad. Details of the mode functions are given in the Appendix. In our present study, the wavelength of the modes is fixed at $\lambda = 659$ nm, and the value that the mode function takes depends on the fiber radius $a$ at the radial position $\mathbf{r}_0(y, \delta)$. Note that the quantity $|\mathbf{e}(\mathbf{r}_0)|^2$ has units of m$^{-2}$ and may be considered to be a dimensionless energy flux.
This should be compared to the usual energy density associated with three-dimensionally confined resonant modes.
+
+By circular symmetry, a randomly polarized dipole couples with the same strength to either of the two orthogonally polarized fundamental modes. We may average over dipole polarization to produce the photonic local density of states associated with the fundamental modes [1]
+
+$$ \rho_g(s, \mathbf{r}) = \frac{2}{3} \frac{6\omega_0}{\pi c^2} \text{Im}[\text{Tr}[\mathbf{G}(\mathbf{r}_0, \mathbf{r}_0, \omega_0)]] = \frac{|\mathbf{e}(s, \mathbf{r}_0)|^2}{v_g}, \quad (1) $$
+
+where the factor of 1/3 arises from the average over dipole
+---PAGE_BREAK---
+
+orientations, and the factor of 2 arises due to the two possible orthogonal polarizations of the fundamental mode.
+
+Finally, we see that
+
+$$ \bar{\gamma}_g = \frac{\pi\omega_0}{3\hbar\epsilon_0} p^2 \rho_g(s, \mathbf{r}), \quad (2) $$
+
+where $\bar{\gamma}_g$ is the decay rate into the fundamental modes averaged over polarization, and the dipole moment strength is assumed to be $p = |\mathbf{p}|$ in any direction. Note that for a given $s$, $\rho_g$ contains all the dependence of $\bar{\gamma}_g$ on the fiber mode behavior. Our experimental measurements are of photon count rates through the fiber over some time $\Delta t$. It may be seen that such measurements are proportional to $\bar{\gamma}_g \Delta t \propto \rho_g$. In practice, we normalize both our measurements and the theoretical predictions for $\rho_g$ so that their maxima are equal to unity before comparing them. We denote the so-normalized value of the PLDOS by $\bar{\rho}_g$.
+
+Because Maxwell's equations are scale free, the functional dependence of the local density of states on the waveguide transverse dimension $a$ or the wavelength $\lambda$ is most generally expressed using the dimensionless size parameter $s = ka = (\omega_0/c)a$, where $k = 2\pi/\lambda$.
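+
+For the fixed wavelength used here, the conversion between fiber radius and size parameter is a one-line helper (our own sketch; the numerical values are only indicative):
+
+```python
+import math
+
+def size_parameter(radius_nm, wavelength_nm=659.0):
+    """Dimensionless size parameter s = k * a = (2 * pi / lambda) * a."""
+    return 2.0 * math.pi * radius_nm / wavelength_nm
+
+# The radii probed in the experiment (200 nm to 1 um) span the
+# diffraction-limited region and beyond.
+s_small = size_parameter(200.0)    # ≈ 1.9
+s_large = size_parameter(1000.0)   # ≈ 9.5
+# Scale invariance: doubling both a and lambda leaves s unchanged.
+assert math.isclose(size_parameter(400.0, 2.0 * 659.0), s_small)
+```
+
+With this normalization, the measured peak positions $s \approx 1.4$ and $s \approx 1.9$ reported below correspond to fiber radii of roughly 150 nm and 200 nm, respectively.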
By using a tapered fiber, we can measure the PLDOS as a function of $s$ for fixed $\lambda$ and variable $a$.
+
+The thick red line in Fig. 1(f) shows the normalized local density of states as a function of $s$ just inside the fiber surface. The thick magenta line shows the same calculation made at the fiber center. Also shown are the scaled group velocity of the fundamental mode $v_g/c$ (dotted blue line) and the effective refractive index $n_{\text{eff}}$ for the fundamental mode (dotted black line). It may be seen that the peak region of the PLDOS is associated with the transition of $v_g$ from the bulk silica value of $v_g \approx c/1.45$ to $v_g \approx c$ as the fiber mode becomes dominated by its evanescent component. Note that the maximum value of the unscaled PLDOS at the fiber center is almost three times larger than that just inside the fiber surface. Because the present experiment does not allow us to cleanly measure the relative amplitude of the PLDOS at these two different radial positions, we use the normalized PLDOS and focus on the differences seen in the peak position and peak width.
+
+The most notable aspect of the PLDOS curves for different radial positions is that the peak value occurs at a different value of $s$. In this sense, the effective diffraction limit is different depending on where in the fiber cross-section it is measured. This is a generic feature of waveguides (i.e., not just fibers) and occurs due to the behavior of the mode function $|\mathbf{e}(\mathbf{r})| = A(s)F(s,r)$, where $A(s)$ is a normalization factor depending only on the size parameter, and $F(s,r)$ is in general a decreasing function of the radial distance $r$ from the fiber center. Broadly speaking, $A(s)$ sets the intensity scale at a given value of $s$ for a fixed optical power, and thus has a peaked structure which gives rise to the diffraction limit.
$F(s,r)$ can generally be written in the form $F(ur/a)$, where $u = a\sqrt{n_{\text{co}}^2 k^2 - \beta^2}$ is a dimensionless wavenumber which increases monotonically with the waveguide size parameter $s$. As $r/a$ increases, the fall-off in $F$ as a function of $u$ becomes steeper, leading to the peak of the PLDOS occurring at lower $s$. This is also the reason for the narrower width of the PLDOS peak when $r = a$ compared with $r = 0$. More details are given in the supplementary material. In this sense, despite being polarization averaged, the PLDOS near the diffraction limit contains information about the near-field nature of the mode, which is transverse near the fiber center but vectorial in nature at the fiber surface.
+
+Experimentally, we detect the intensity in the fiber modes by passing a single-mode fiber, which is adiabatically connected to the fiber taper, out of the SEM vacuum via a feedthrough. The fiber can be connected to a spectrum analyzer or a modified Hanbury Brown-Twiss setup which allows measurement of both the polarization and the intensity correlation function $g^{(2)}$. In experiments, we used electron energies of 0.5 keV in a spot-excitation configuration and 2 keV in a sweep-excitation configuration. CL emitted into the fiber taper passed through a 630 nm cutoff single-mode fiber to ensure that only light in the fundamental modes was collected. Further details of the experiment are given in the Appendix.
+
+### III. RESULTS
+
+We now turn to our experimental results. First, we look at general properties of the fiber-coupled cathode luminescence. The CL spectrum measured through the guided modes is shown in Fig. 2(a). A Lorentzian curve was fitted to the data and, as indicated, the center wavelength was found to be 659 nm and the full width at half maximum (FWHM) was found to be 28 nm. This spectrum is similar to that seen in silica fibers due to radiation-induced defects or the fiber drawing process itself [12].
The luminescence has been attributed to unpaired oxygen atoms in the silica matrix.

We also checked the polarization at the fiber output by rotating both a half waveplate and a quarter waveplate before the light entered a polarizing beam splitter, and measuring the output at both ports. For both waveplates, we saw variations in intensity of about ±5% of the mean value, suggesting nearly completely random polarization.

Because little is known about the density of defects in silica which produce the observed cathode luminescence, we also measured the coincidence count rate of the CL through the guided modes. The normalized coincidence signal corresponds to the second-order correlation function $g^{(2)}(\tau) = \langle n(t)n(t+\tau) \rangle / (\langle n(t) \rangle \langle n(t+\tau) \rangle)$, where *n* denotes photon counts, the coincidence delay is given by $\tau$, and $\langle \cdot \rangle$ denotes a time average. For a single or few emitters, an anti-bunching dip in the coincidence rate is expected at $\tau = 0$. As seen in Fig. 2(b), the measured correlation function shows no sign of antibunching and is consistent with a relatively large number of independent
---PAGE_BREAK---

FIG. 2. (a) Measured spectrum of the fiber coupled CL. (b) Measured second-order correlation function $g^{(2)}(\tau)$ for a time difference $\tau$ between detection events.

photon emitters within the excitation volume.

Next, we consider scans made of the fiber over its cross section for fiber diameters between 200 and 1000 nm. Fig. 3(a) shows raw count rates (discrete points) joined by lines to guide the eye. It is notable that a large peak is observed at $2a = 400$ nm relative to the other diameters. This is due to the increased mode confinement at this diameter. Fig. 3(b) shows the same experimental results normalized to allow easier comparison.
In each case, we show curves of $\bar{\rho}_g(a, \delta, y)$ for $\delta = 10$ nm, convolved with a Gaussian profile with a standard deviation of 10 nm to account for the broad electron cascade process inside the silica. For these curves, we fitted the value of the amplitude and center position to the data. The fiber diameter was set to its experimentally measured value in the theory. Note that the colors of the points and curves correspond to the data shown in the same color in Fig. 3(a). Error bars show $\pm 1$ standard deviation over ten intensity measurements.

The data show that the CL intensity varies only slowly across the fiber cross section. This is expected considering the circular symmetry of the coupling, i.e., a randomly polarized emitter should couple with the same strength to the fundamental modes at any position within the fiber that is a constant radial distance from its center. However, because the stopping position on the x axis depends on y, the distance from the fiber center at which CL occurs changes, with the change becoming larger as the penetration depth increases.

Finally, we measured the waveguide coupled CL at different diameters using beam spot illumination at 0.5 keV ($\delta \approx 10$ nm [15]) and 2 keV ($\delta \approx 175$ nm [15]). Results of these measurements are shown in Fig. 4. The PLDOS curve is calculated at $y=0$ for the respective values of $\delta$ given above. The experimental results show generally good qualitative and quantitative agreement with the calculated PLDOS curve. In particular, the difference in the PLDOS peak position and the difference in the peak widths are clearly reproduced by the data. For the 0.5 keV data, we observe a peak at $s = 1.4$ whereas for 2.0 keV the peak occurs at $s = 1.9$. This corresponds to a difference in radius of 100 nm.

#### IV.
DISCUSSION

In this work, we defined the PLDOS for the fundamental mode of an optical fiber and experimentally evaluated the PLDOS by measuring CL coupled directly to the fiber fundamental modes. Using this technique, we made the first complete measurements of the PLDOS dependence on the size parameter around the diffraction limit. We clearly demonstrated the different PLDOS behavior for points near the fiber surface and nearer to the fiber center. Although previous CL measurements of photonic crystal waveguide modes do exist, they have relied on intrinsic losses or leaky modes which coupled to the far field [7]. Likewise, although the coupling efficiency from point emitters to the modes of a fiber has been measured, these measurements suffered from large systematic errors and did not reveal the full behavior of the PLDOS itself [16]. In contrast, we are able to clearly measure the difference in PLDOS behavior near the fiber surface and nearer to the fiber center even though the respective PLDOS peak positions differ by a fiber radius of just 100 nm.

This work successfully enlarges the domain in which CL spectroscopy may be applied, from its original application to modes with a radiative component to the case of completely bound photonic states, of which the modes of a waveguide are one example. It should also be possible to use our technique to couple electron beam induced luminescence from more general non-radiative modes which do not couple to the far field. Such modes can couple via the evanescent field of the optical fiber taper to its guided modes and thus be detected as in the present experiment, opening up CL spectroscopy to regimes which could traditionally only be measured using electron energy loss (EEL) methods. Due to the much less stringent requirements on sample preparation and electron beam energy for CL spectroscopy as compared with EEL spectroscopy, this is a significant addition to the electron spectroscopy toolbox.
In terms of applications, typical fiber coupled photon sources have up to now used optically excited emitters [16–18]. Our method should provide a new route to achieving waveguide-coupled, electrically driven photon sources [19–21]. In particular, the ability to simultaneously image the nanostructure surface and excite fiber coupled cathode luminescence will allow a more deterministic approach even for non-deterministically assembled composite nanodevices created by combining nanowaveguides with colloidal nanocrystals.

For the above reasons, we believe that the technique detailed here can open new opportunities to study fundamental aspects of nano-optics by measuring the PLDOS through waveguide modes, while also providing a new platform for applications.

This work was supported by the Nano-Quantum Information Research Division of Tokyo University of Science. Part of this work was supported by JST CREST (Grant Number JPMJCR18I5).
---PAGE_BREAK---

FIG. 3. Spot scans perpendicular to the optical fiber axis for electron energies of 0.5 keV. (a) Shows unnormalized data (discrete points) for five different fiber diameters with lines connecting points to guide the eye. (b) Shows the same data normalized and fitted by $\bar{\rho}_g(a, \delta, y)$ convolved with a Gaussian beam profile. From top to bottom, the data shown is for $2a = 200, 400, 600, 800,$ and $1000$ nm. Theoretical curves for $\delta = 10$ nm are shown for each case.

FIG. 4. Measurement of relative PLDOS as a function of diameter. Circles show measurements made using a stationary electron beam of energy 0.5 keV at the fiber center. The measurements shown are the averaged raw data, with error bars showing the standard deviation over ten separate measurements. The red curve shows $\bar{\rho}_g(a, \delta = 10 \text{ nm}, y = 0)$. Triangles show similar measurements, but for a beam energy of 2.0 keV, which corresponds to $\delta = 175$ nm.
The theoretical value of $\bar{\rho}_g$ in this case is shown by the magenta curve.

## Appendix A: Fiber guided modes

Treatments of the guided modes of step-index optical fibers may be found in a number of places [22, 23]. For convenience, we present a treatment of the mode functions that follows references [11, 24].

The wave equation in cylindrical coordinates for the z component $E_z(r, \phi)$ of an electromagnetic mode propagating along the z-axis, with radial coordinate $r$ and azimuthal coordinate $\phi$, is

$$ \frac{\partial^2 E_z}{\partial r^2} + \frac{1}{r} \frac{\partial E_z}{\partial r} + \frac{1}{r^2} \frac{\partial^2 E_z}{\partial \phi^2} + [k^2 n^2 - \beta^2] E_z = 0, \quad (\text{A1}) $$

where $k = 2\pi/\lambda$ is the free space wave number, $n = n(r)$ is the refractive index, and $\beta$ is the mode propagation constant. Setting $E_z(r, \phi) = e_z(r)e_{\phi}(\phi)$, and taking $e_{\phi}(\phi) = \exp(im\phi)$ (requiring integer $m$), the radial wave equation is found to be

$$ \frac{\partial^2 e_z}{\partial r^2} + \frac{1}{r} \frac{\partial e_z}{\partial r} + \left[ \chi^2 - \frac{m^2}{r^2} \right] e_z = 0, \quad (\text{A2}) $$

where $\chi^2 = k^2 n^2 - \beta^2$. Specializing to a step index fiber of radius $a$ where the core index is $n_{\text{co}}$ and the cladding index is $n_{\text{cl}}$, we split $\chi^2$ into two cases: $h^2 = k^2 n_{\text{co}}^2 - \beta^2$ in the core, and $q^2 = \beta^2 - k^2 n_{\text{cl}}^2$ in the cladding. Full consideration of boundary conditions restricts the solutions to

$$ e_z = A \frac{2q K_m(qa)}{\beta J_m(ha)} J_m(hr), \quad r \le a, \quad (\text{A3}) $$

and

$$ e_z = A \frac{2q}{\beta} K_m(qr), \quad r > a, \quad (\text{A4}) $$

for an arbitrary amplitude $A$. It can be shown that the radial and azimuthal components can be derived from $e_z$. $J_m$ and $K_m$ are Bessel functions of the first kind and modified Bessel functions of the second kind respectively, with order $m$.
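As a sanity check, the matching of the core and cladding solutions at $r = a$ can be verified numerically. The sketch below is our own illustration: it evaluates $J_m$ and $K_m$ from standard integral representations rather than a special-function library, takes the core solution to carry $J_m(hr)/J_m(ha)$ so that the two branches meet at the boundary, and uses arbitrary (not physically fitted) values of $h$, $q$, and $\beta$:

```python
import numpy as np

def _trapz(y, x):
    """Composite trapezoidal rule (keeps the sketch free of scipy)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bessel_j(m, x):
    """J_m(x) = (1/pi) * integral_0^pi cos(m*t - x*sin(t)) dt, integer m."""
    t = np.linspace(0.0, np.pi, 3001)
    return _trapz(np.cos(m * t - x * np.sin(t)), t) / np.pi

def bessel_k(m, x):
    """K_m(x) = integral_0^inf exp(-x*cosh(t)) * cosh(m*t) dt, x > 0 (truncated tail)."""
    t = np.linspace(0.0, 20.0, 20001)
    return _trapz(np.exp(-x * np.cosh(t)) * np.cosh(m * t), t)

def e_z(r, a, h, q, beta, A=1.0, m=1):
    """Longitudinal field: oscillatory J_m inside the core, decaying K_m in the cladding."""
    if r <= a:
        return A * (2 * q / beta) * bessel_k(m, q * a) * bessel_j(m, h * r) / bessel_j(m, h * a)
    return A * (2 * q / beta) * bessel_k(m, q * r)

# Illustrative parameters; continuity at r = a holds here for any h, q, beta by construction.
a, h, q, beta = 0.2, 8.0, 5.0, 12.0
inside = e_z(a, a, h, q, beta)
outside = e_z(a * (1 + 1e-9), a, h, q, beta)
```

Continuity of $e_z$ at $r = a$ is built into the scaling above; the nontrivial physics, namely the eigenvalue equation linking $h$, $q$, and $\beta$, enters through the matching of the remaining field components.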
Restricting ourselves to the fundamental mode with $m=1$, and taking a clockwise circular polarization, the mode function components are

$$ e_r = iA \frac{q K_1(qa)}{h J_1(ha)} [(1-s)J_0(hr) - (1+s)J_2(hr)] $$

$$ e_{\phi} = -A \frac{q K_1(qa)}{h J_1(ha)} [(1-s)J_0(hr) + (1+s)J_2(hr)] $$

$$ e_z = A \frac{2q K_1(qa)}{\beta J_1(ha)} J_1(hr) $$
---PAGE_BREAK---

in the core and

$$
\begin{align*}
e_r &= iA[(1-s)K_0(qr) + (1+s)K_2(qr)] \\
e_\phi &= -A[(1-s)K_0(qr) - (1+s)K_2(qr)] \\
e_z &= A \frac{2q}{\beta} K_1(qr)
\end{align*}
$$

in the cladding. Here, we have $s = \left(\frac{1}{q^2a^2} + \frac{1}{h^2a^2}\right) \Big/ \left(\frac{J_1'(ha)}{ha J_1(ha)} + \frac{K_1'(qa)}{qa K_1(qa)}\right)$.

To produce the mode functions, we choose $A$ so that $\int d^2r\, n(r)^2|e|^2=1$, where the integral is taken over the entire $r-\phi$ plane. For brevity, we omit the expression for the integral, along with the eigenvalue equation required to find $\beta$. The appropriate expressions may be found elsewhere [11, 24]. We note that the left hand side of the normalization condition is related to, but not identical to, the mode power.

Inside the fiber core (as is the case in the current work) we find

$$
\begin{align*}
|\mathbf{e}|^2 &= |e_r|^2 + |e_\phi|^2 + |e_z|^2 \\
&= 2A^2 \frac{q^2 K_1^2(qa)}{h^2 J_1^2(ha)} \left[ (1-s)^2 J_0^2(hr) + \frac{2h^2}{\beta^2} J_1^2(hr) + (1+s)^2 J_2^2(hr) \right].
\end{align*}
$$

In order to make clearer the contributions to the PLDOS, we divide the mode function intensity into r-independent and r-dependent parts as follows:

$$
|\mathbf{e}|^2 = A^2(k, a)F^2(k, a, r), \quad (\text{A5})
$$

where

$$
A^2(k,a) = 2A^2 \frac{q^2 K_1^2(qa)}{h^2 J_1^2(ha)}
$$

and

$$
F^2(k, a, r) = (1-s)^2 J_0^2(ur/a) + \frac{2h^2}{\beta^2} J_1^2(ur/a) + (1+s)^2 J_2^2(ur/a),
$$

where $u = ah$.

From Fig. 5, it may be seen that $A(k, a)$ (black curve) has a peaked form and is responsible for the overall shape of the PLDOS, as discussed in the main text.
$F(k, a, r)$ for a fixed value of $r/a$ is a decaying function of the size parameter $s$, with the decay rate being smaller at the fiber center ($r=0$, magenta line in Fig. 5) than at the fiber surface ($r=a$, red line in Fig. 5). When multiplied by $A(k, a)$, this behavior of the $F$ function explains both the shift in the PLDOS peak depending on $r$ and the width of the PLDOS peak.

**Appendix B: Details of the experiment**

The experimental setup is depicted schematically in Fig. 6. We used the electron beam of a scanning electron microscope (LEO 1530VP, Carl Zeiss) to excite CL in our sample. The sample chamber was evacuated with

FIG. 5. $A(k,a)$ (black curve), $F(k,a,r=0)$ (magenta curve) and $F(k,a,r=a)$ (red curve).

a turbo-molecular pump down to $1 \times 10^{-3}$ Pa. The primary-electron column is a Gemini type which achieves high resolution for low energy electrons compared to a conventional SEM [25]. A Schottky field emission electron source (SFE) is installed in the SEM gun chamber. The SFE has very low beam noise and notable long-term beam current stability. Primary SEM observations were made in an electron energy range of 0.5–2.0 keV. The beam current, measured using a Faraday cup, was approximately 40 pA. The electron beam profile was evaluated using Au-Pd coated polystyrene latex spheres, 90 nm in diameter [26, 27]. The spatial resolution (20/80% edge profile) was about 5 nm in the electron energy range used in the experiment. The electron beam was used to excite luminescence in an optical fiber taper (see below) using either a stationary spot excitation mode, or a sweep excitation mode, where the electron beam was scanned over the fiber, allowing imaging by detection of secondary electrons.

Regarding the optical setup, the tapered fiber was manufactured from a commercial single mode fiber (780 HP) using a heat and pull technique [28].
Tapered fibers used in the experiment had a transmission of at least 90% and a typical transmission of 95%. The fiber was mounted in the SEM and its output was spliced to a standard optical fiber which passed out of the SEM through a homemade feedthrough system [29]. Regarding the mounting of the fiber taper: we used a UV cured adhesive to fix the fiber to an aluminium mount at two points as far as possible from the taper center. To suppress vibrations of the fiber, we also added adhesive to one side of the taper closer to the taper center, meaning that fluorescence could only be measured through one of the fiber outputs, due to strong absorption and scattering caused by the adhesive. We note that CL can still be induced in the event of fiber vibrations, but precise measurement of the fiber diameter, as required for the current experiment, is then difficult.

For CL spectrum observation, the output fiber was connected to a spectrometer (ACTON Spectra Pro 2300,
---PAGE_BREAK---

FIG. 6. Experimental setup. Electrons produced by the SEM gun are focused and incident on a tapered, vacuum clad optical fiber which is mounted in the SEM vacuum chamber. The optical fiber tapers adiabatically into a standard optical fiber which passes through a feedthrough and can be connected to one of two measurement systems. Measurement system 1 allows the measurement of the CL spectrum. Measurement system 2 allows the measurement of CL intensity, polarization and the correlation of CL photons. Acronyms used are explained in the Key.

Princeton Instruments) equipped with a CCD detector (Pixis 100BR, Princeton Instruments) to measure the wavelength, as depicted by Fig. 6, Measurement System 1. In order to measure the intensity of CL, photon polarization, and photon correlations, we used Measurement System 2 as shown in Fig. 6.
We used a fiber U-bench setup with a polarizing beam splitter installed, whose outputs were coupled to multimode fibers which were in turn connected to single photon counting modules (SPCM-AQRH-14-FC, Excelitas). Count rates and photon correlation measurements were made using a two channel counter / correlator (TimeTagger20, Swabian Instruments).

Note that in all optical detection experiments, we spliced the output of the main fiber (780HP, single mode above 780 nm in wavelength) to a fiber which was single-mode at our operating wavelength (630HP) in order to guarantee that we only measured light coupled to the fundamental mode of the fiber.

[1] L. Novotny and B. Hecht, *Principles of Nano-Optics* (Cambridge University Press, 2012).

[2] I. Aharonovich, D. Englund, and M. Toth, Nature Photonics **10**, 631 (2016).

[3] B. Khanaliloo, H. Jayakumar, A. C. Hryciw, D. P. Lake, H. Kaviani, and P. E. Barclay, Physical Review X **5**, 041051 (2015).

[4] A. H. Yang, S. D. Moore, B. S. Schmidt, M. Klug, M. Lipson, and D. Erickson, Nature **457**, 71 (2009).

[5] F. J. G. de Abajo, Reviews of Modern Physics **82**, 209 (2010).

[6] A. Polman, M. Kociak, and F. J. G. de Abajo, Nature Materials **18**, 1158 (2019).

[7] B. J. Brenny, D. M. Beggs, R. E. van der Wel, L. Kuipers, and A. Polman, ACS Photonics **3**, 2112 (2016).

[8] A. C. Atre, B. J. Brenny, T. Coenen, A. García-Etxarri, A. Polman, and J. A. Dionne, Nature Nanotechnology **10**, 429 (2015).

[9] R. Sapienza, T. Coenen, J. Renger, M. Kuttge, N. Van Hulst, and A. Polman, Nature Materials **11**, 781 (2012).

[10] A. Hörl, G. Haberfehlner, A. Trügler, F.-P. Schmidt, U. Hohenester, and G. Kothleitner, Nature Communications **8**, 1 (2017).

[11] F. Le Kien, S. D. Gupta, V. Balykin, and K. Hakuta, Physical Review A **72**, 032509 (2005).

[12] G. Sigel Jr. and M. Marrone, Journal of Non-Crystalline Solids **45**, 235 (1981).

[13] T. Søndergaard and B.
Tromborg, Physical Review A **64**, 033812 (2001).
---PAGE_BREAK---

[14] F. Le Kien, D. Kornovan, S. S. S. Hejazi, V. G. Truong, M. Petrov, S. N. Chormaic, and T. Busch, New Journal of Physics **20**, 093031 (2018).

[15] B. Raftari, N. Budko, and K. Vuik, AIP Advances **8**, 015307 (2018).

[16] R. Yalla, F. Le Kien, M. Morinaga, and K. Hakuta, Physical Review Letters **109**, 063602 (2012).

[17] M. Fujiwara, K. Toubaru, T. Noda, H.-Q. Zhao, and S. Takeuchi, Nano Letters **11**, 4362 (2011).

[18] R. Yalla, M. Sadgrove, K. P. Nayak, and K. Hakuta, Physical Review Letters **113**, 143601 (2014).

[19] E. Le Moal, S. Marguet, B. Rogez, S. Mukherjee, P. Dos Santos, E. Boer-Duchemin, G. Comtet, and G. Dujardin, Nano Letters **13**, 4198 (2013).

[20] L. Tizei and M. Kociak, Physical Review Letters **110**, 153604 (2013).

[21] S. Meuret, L. Tizei, T. Cazimajou, R. Bourrellier, H. Chang, F. Treussart, and M. Kociak, Physical Review Letters **114**, 197401 (2015).

[22] K. Okamoto, *Fundamentals of Optical Waveguides* (Academic Press, 2006).

[23] F. Le Kien, J. Liang, K. Hakuta, and V. Balykin, Optics Communications **242**, 445 (2004).

[24] F. Le Kien, T. Busch, V. G. Truong, and S. N. Chormaic, Physical Review A **96**, 023835 (2017).

[25] H. Jaksch and J. Martin, Fresenius' Journal of Analytical Chemistry **353**, 378 (1995).

[26] M. Irita, S. Yamazaki, H. Nakahara, and Y. Saito, in *IOP Conference Series: Materials Science and Engineering*, Vol. 304 (IOP Publishing, 2018) p. 012006.

[27] M. Irita, H. Nakahara, and Y. Saito, e-Journal of Surface Science and Nanotechnology **16**, 84 (2018).

[28] J. M. Ward, D. G. O'Shea, B. J. Shortt, M. J. Morrissey, K. Deasy, and S. G. Nic Chormaic, Review of Scientific Instruments **77**, 083105 (2006), https://doi.org/10.1063/1.2239033.

[29] E. R. Abraham and E. A. Cornell, Appl. Opt. **37**, 1762 (1998).
\ No newline at end of file diff --git a/samples/texts_merged/5893423.md b/samples/texts_merged/5893423.md new file mode 100644 index 0000000000000000000000000000000000000000..0f752f3587f1cceccf79dff692eaa352c23034cd --- /dev/null +++ b/samples/texts_merged/5893423.md @@ -0,0 +1,1625 @@

---PAGE_BREAK---

Fastened CROWN: Tightened Neural Network Robustness Certificates

Zhaoyang Lyu,1* Ching-Yun Ko,2* Zhifeng Kong,3 Ngai Wong,4 Dahua Lin,1 Luca Daniel2

1The Chinese University of Hong Kong, Hong Kong, China

2Massachusetts Institute of Technology, Cambridge, MA 02139, USA

3University of California San Diego, La Jolla, CA 92093, USA

4The University of Hong Kong, Hong Kong, China

lyuzhaoyang@link.cuhk.edu.hk, {cyko, luca}@mit.edu, z4kong@eng.ucsd.edu, nwong@eee.hku.hk, dhlin@ie.cuhk.edu.hk

Abstract

The rapid growth of deep learning applications in real life is accompanied by severe safety concerns. To mitigate this uneasy phenomenon, much research has been done providing reliable evaluations of the fragility level of different deep neural networks. Apart from devising adversarial attacks, quantifiers that certify safeguarded regions have also been designed in the past five years. The summarizing work in (Salman et al. 2019) unifies a family of existing verifiers under a convex relaxation framework. We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints. Given this theoretical result, the computationally expensive linear programming based method is shown to be unnecessary. We then propose an optimization-based approach FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. Extensive experiments on various networks trained individually verify the effectiveness of FROWN in safeguarding larger robust regions.
Introduction

The vulnerability of deep neural networks remained an unrevealed snare in the early years of the deep learning resurgence. In 2014, Szegedy et al. uncovered the existence of hardly-perceptible adversarial perturbations that could fool image classifiers. This discovery agonized the fast development of accuracy-oriented deep learning and shifted the community's attention to the fragility of trained models. Especially with the increasing adoption of machine learning and artificial intelligence in safety-critical applications, the vulnerability of machine learning models to adversarial attacks has become a vital issue (Sharif et al. 2016; Kurakin, Goodfellow, and Bengio 2017; Carlini and Wagner 2017; Wong and Kolter 2018). Addressing this urgent issue requires reliable ways to evaluate the robustness of a neural network, namely by studying the safety region around a data point where no adversarial example exists. This understanding of machine learning models' vulnerability will, on the other hand, help industries build more robust intelligent systems.

Disparate ways of reasoning about and quantifying the vulnerability (or robustness) of neural networks have been exploited to approach this dilemma, among which *attack-based* methods have long been in a dominating position. In recent years, a sequel of adversarial attack algorithms have been proposed to mislead networks' predictions in tasks such as object detection (Goodfellow, Shlens, and Szegedy 2015; Moosavi-Dezfooli, Fawzi, and Frossard 2016), visual question answering (Mudrakarta et al. 2018; Zeng et al. 2019; Gao et al. 2019b; 2019a), text classification (Papernot et al. 2016), speech recognition (Cisse et al. 2017; Gong and Poellabauer 2017), and audio systems (Carlini and Wagner 2018), where the level of model vulnerability is quantified by the distortion between successful adversaries and the original data points.
Notably, the magnitudes of distortions suggested by attack-based methods are essentially upper bounds of the minimum adversarial distortion.

In contrast to attack-based approaches, attack-agnostic verification-based methods evaluate the level of network vulnerability by either directly estimating (Szegedy et al. 2014; Weng et al. 2018b) or lower bounding (Hein and Andriushchenko 2017; Raghunathan, Steinhardt, and Liang 2018; Dvijotham et al. 2018; Zhang et al. 2018; Singh et al. 2018; Weng et al. 2019) the minimum distortion networks can bear for a specific input sample. As an iconic robustness estimation, CLEVER (Weng et al. 2018a) converts the robustness evaluation task into the estimation of the local Lipschitz constant, which essentially associates with the maximum norm of the local gradients w.r.t. the original example. Extensions of CLEVER (Weng et al. 2018c) focus on twice differentiable classifiers and work with a first-order Taylor polynomial with Lagrange remainder.

A number of verification-based methods have been proposed in the literature to compute a lower bound of the safe-guaranteed region around a given input, i.e. a region where the network is guaranteed to make consistent predictions despite any input perturbations. A pioneering work in providing certifiable robustness verification (Szegedy et al. 2014) computes the product of weight matrix operator norms in ReLU networks to give a lower-bounding metric of the

*Equal contribution. Source code and the appendix are available at https://github.com/ZhaoyangLyu/FROWN. Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
---PAGE_BREAK---

Table 1: List of Notations
| Notation | Definition | Notation | Definition | Notation | Definition |
| --- | --- | --- | --- | --- | --- |
| $F : \mathbb{R}^n \to \mathbb{R}^t$ | neural network classifier | $\mathbf{x}_0 \in \mathbb{R}^n$ | original input | $\mathbf{x} \in \mathbb{R}^n$ | perturbed input |
| $n$ | input size | $\mathbf{a}^{(k)}$ | the hidden state of the $k$-th layer | $\mathbf{z}^{(k)}$ | the pre-activation of the $k$-th layer |
| $n_k$ | number of neurons in layer $k$ | $[K]$ | set $\{1, 2, \ldots, K\}$ | $B_p(\mathbf{x}_0, \epsilon)$ | $\{\mathbf{x} \mid \lVert \mathbf{x} - \mathbf{x}_0 \rVert_p \le \epsilon\}$ |
| $F_j^L(\mathbf{x}) : \mathbb{R}^n \to \mathbb{R}$ | linear lower bound of $F_j(\mathbf{x})$ | $\gamma_j^{(k)L}$ | global lower bound of $\mathbf{z}_j^{(k)}$ | $\mathbf{l} \preceq \mathbf{z} \preceq \mathbf{u}$ | $\mathbf{l}_r \le \mathbf{z}_r \le \mathbf{u}_r$, $\forall r \in [s]$, $\mathbf{l}, \mathbf{z}, \mathbf{u} \in \mathbb{R}^s$ |
| $F_j^U(\mathbf{x}) : \mathbb{R}^n \to \mathbb{R}$ | linear upper bound of $F_j(\mathbf{x})$ | $\gamma_j^{(k)U}$ | global upper bound of $\mathbf{z}_j^{(k)}$ | $\operatorname{neg}(\mathbf{x})$ | $\mathbf{x}$, if $\mathbf{x} \le 0$; $0$, otherwise |
| $\mathbf{s}^{[k-1]U}$ | set $\{\mathbf{s}^{(1)U}, \ldots, \mathbf{s}^{(k-1)U}\}$ | $\mathbf{t}^{[k-1]U}$ | set $\{\mathbf{t}^{(1)U}, \ldots, \mathbf{t}^{(k-1)U}\}$ | | |
| $\mathbf{s}^{[k-1]L}$ | set $\{\mathbf{s}^{(1)L}, \ldots, \mathbf{s}^{(k-1)L}\}$ | $\mathbf{t}^{[k-1]L}$ | set $\{\mathbf{t}^{(1)L}, \ldots, \mathbf{t}^{(k-1)L}\}$ | $\sigma$ | ReLU / Sigmoid / Tanh activation |
| $\mathbf{a}^{[k]}$ | set $\{\mathbf{a}^{(1)}, \mathbf{a}^{(2)}, \ldots, \mathbf{a}^{(k)}\}$ | $\mathbf{z}^{[k]}$ | set $\{\mathbf{z}^{(1)}, \mathbf{z}^{(2)}, \ldots, \mathbf{z}^{(k)}\}$ | | |
minimum distortion. However, this certificate method was shown to be generally too conservative to be useful (Hein and Andriushchenko 2017; Weng et al. 2018b). Later, tighter bounds have also been provided for continuously-differentiable shallow networks by utilizing local Lipschitz constants of the network (Hein and Andriushchenko 2017). Then, for the first time, the formal verification problem was reduced from a mixed integer linear programming (MILP) problem to a linear programming (LP) problem when dealing with $l_{\infty}$-norm box constraints (Wong and Kolter 2018). Its concurrent works include Fast-Lin (Weng et al. 2018b), which analytically calculates bounds for perturbed samples in given regions and finds the largest certifiable region for ReLU networks through binary search. Fast-Lin is further generalized to multilayer perceptrons with general activation functions in CROWN (Zhang et al. 2018). Recently, Salman et al. presented a general framework for a genre of convex relaxed optimization problems and demonstrated existing approaches to be special cases of their proposal. Notably, although Wong and Kolter propose to verify the robustness of ReLU networks by the use of LP, a feasible dual solution is instead used in practice to avoid any actual use of LP solvers. Comparatively, Salman et al. experiment with more than one linear function to bound nonlinear activations (e.g. two lower-bounding functions for the ReLU activation) and stick to LP solvers.

Certifiable robustness lower bounds are especially vital in safety-critical scenarios (e.g. autonomous driving) since any misclassification can be lethal. However, useful as they are, these certifiable quantifiers bring new challenges. There are, in most cases, non-negligible gaps between the certified lower and upper bounds of the minimum distortion. This inconsistency in the quantification diminishes the usefulness of these state-of-the-art robustness evaluation approaches.
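The binary search used by Fast-Lin to locate the largest certifiable region can be sketched generically as follows; `is_certified` is a hypothetical verifier callback standing in for any of the certification methods above, assumed monotone in $\epsilon$ (certified at $\epsilon$ implies certified at any smaller $\epsilon$):

```python
def largest_certified_radius(is_certified, eps_hi=1.0, iters=30):
    """Bisect for the largest eps such that is_certified(eps) holds."""
    # Grow the upper bracket until the verifier first fails.
    # (Assumes the verifier eventually fails for large enough eps.)
    while is_certified(eps_hi):
        eps_hi *= 2.0
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if is_certified(mid):
            lo = mid  # mid is certified, so the threshold lies in [mid, hi)
        else:
            hi = mid
    return lo  # a certified lower bound on the minimum adversarial distortion

# Toy stand-in verifier whose true certification threshold is 0.37.
radius = largest_certified_radius(lambda eps: eps <= 0.37)
```

Each call to `is_certified` amounts to one full bound computation, which is why cheap closed-form verifiers such as Fast-Lin pair well with this search loop.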
In this article, we stay in line with the previous sequel of works that focus on linear bounds and provide two major contributions:

1. We prove that if we limit the constraint relaxation to be exactly one linear bound in each direction (upper and lower) in the LP-based method, the results provided by CROWN are optimal. Therefore the costly LP solving process is unnecessary under this relaxation.

2. We propose a general optimization framework that we name FROWN (Fastened CROWN) for tightening the certifiable regions guaranteed by CROWN, which is also theoretically applicable to tightening the convolutional neural network certificate CNN-Cert (Boopathy et al. 2019) and the recurrent neural network certificate POPQORN (Ko et al. 2019).

## Backgrounds

This section summarizes the backgrounds most relevant to our proposals. Specifically, the LP formulation (Salman et al. 2019) is summarized, together with the seminal works Fast-Lin (Weng et al. 2018b) and CROWN (Zhang et al. 2018) (generalized Fast-Lin). We first begin by giving the definition of an *m*-layer neural network:

**Definitions.** Given a trained *m*-layer perceptron *F*, we denote the hidden unit, weight matrix, bias, and pre-activation unit of the *k*-th layer ($k \in [m]$) as $a^{(k)}$, $W^{(k)}$, $b^{(k)}$, and $z^{(k)}$, respectively. Hence, $z^{(k)} = W^{(k)}a^{(k-1)} + b^{(k)}$, $a^{(k)} = \sigma(z^{(k)})$, where $a^{(0)} = x_0 \in \mathbb{R}^n$ is the original input and $F(x) = z^{(m)}$ is the network output. Denoting the number of neurons in the *k*-th layer as $n_k$ implies that $a^{(k)}, z^{(k)}, b^{(k)} \in \mathbb{R}^{n_k}$ and $W^{(k)} \in \mathbb{R}^{n_k \times n_{k-1}}$, for $k \in [m]$. Furthermore, we use square brackets in the superscripts to group a set of variables (e.g. $a^{[m]}$ denotes the set of variables $\{a^{(1)}, a^{(2)}, \ldots, a^{(m)}\}$ and $z^{[m]}$ denotes the set of variables $\{z^{(1)}, z^{(2)}, \ldots, z^{(m)}\}$).
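The layer recursion in these definitions can be written down directly; the sketch below is a minimal illustration with toy shapes and tanh (one of the activations listed in Table 1) as our assumed $\sigma$:

```python
import numpy as np

def forward(x0, weights, biases, sigma=np.tanh):
    """Evaluate F(x0) = z^(m) via z^(k) = W^(k) a^(k-1) + b^(k), a^(k) = sigma(z^(k)).

    The activation is applied after every layer except the last, so the
    returned value is the pre-activation z^(m) of the output layer."""
    a = x0
    for k, (W, b) in enumerate(zip(weights, biases), start=1):
        z = W @ a + b
        a = z if k == len(weights) else sigma(z)
    return z

# Toy m = 2 network: n = 2 inputs, n_1 = 4 hidden units, t = 3 outputs.
rng = np.random.default_rng(1)
weights = [rng.normal(size=(4, 2)), rng.normal(size=(3, 4))]
biases = [rng.normal(size=4), rng.normal(size=3)]
out = forward(np.array([0.5, -0.2]), weights, biases)
```

Robustness analysis asks how much `out` can move when `x0` is perturbed within $B_p(x_0, \epsilon)$, which is exactly the question the LP and CROWN formulations below address.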
Table 1 summarizes all the notations we use in this paper.

When quantifying the robustness of the *m*-layer neural network, one essentially wants to know 1) how far the network output will deviate when the input is perturbed with distortions of a certain magnitude and 2) the critical point in terms of distortion magnitudes, beyond which the deviation might alter the model prediction. If we let $x \in \mathbb{R}^n$ denote the perturbed input of $x_0$ (class $i$) within an $\epsilon$-bounded $l_p$-ball (i.e., $x \in B_p(x_0, \epsilon)$, or $||x - x_0||_p \le \epsilon$), the task of robustness analysis for this network intrinsically involves the comparison between the *i*-th network output $F_i(x)$ and the other outputs $F_{j \ne i}(x)$. In practice, one can translate the original problem to the problem of deriving a lower bound of $F_i(x)$ and upper bounds of $F_{j \ne i}(x)$ for perturbed inputs within the $l_p$-norm ball. With such a quantifier, network *F* is guaranteed to make consistent predictions within the $l_p$-norm ball if the lower bound of the original class output is always larger than the upper bounds of all other classes' outputs.

We summarize below the LP problem (Salman et al. 2019) to solve the lower bound of $F_i(x)$. The LP problem for the
---PAGE_BREAK---

upper bound of $F_{j \neq i}(\mathbf{x})$ can be similarly derived.

**The LP problem.** The optimization problem for solving the lower bound of $F_i(\mathbf{x}) = \mathbf{z}_i^{(m)} = \mathbf{W}_{i,:}^{(m)}\mathbf{a}^{(m-1)} + \mathbf{b}_i^{(m)}$ reads:

$$
\begin{align}
& \min_{\mathbf{a}^{(0)} \in B_p (\mathbf{x}_0, \epsilon),\ \mathbf{a}^{[m-1]},\ \mathbf{z}^{[m-1]}} \mathbf{W}_{i,:}^{(m)} \mathbf{a}^{(m-1)} + \mathbf{b}_i^{(m)} \tag{1} \\
& \text{s.t.} \quad \left\{
\begin{array}{l}
\mathbf{z}^{(k)} = \mathbf{W}^{(k)} \mathbf{a}^{(k-1)} + \mathbf{b}^{(k)}, \quad \forall k \in [m-1], \\
\mathbf{a}^{(k)} = \sigma(\mathbf{z}^{(k)}), \quad \forall k \in [m-1].
\end{array}
\right.
\end{align}
$$

The optimization problem for upper bounds can be readily obtained by replacing the "min" operation by "max". Then, the nonlinear constraint in (1) is lifted with linear relaxations. Specifically, suppose the lower and upper bounds of the pre-activation units $\mathbf{z}^{[m-1]}$ are known, namely, for $k$ from 1 to $m-1$, $\mathbf{l}^{(k)}$ and $\mathbf{u}^{(k)}$ satisfy

$$
\mathbf{l}^{(k)} \preceq \mathbf{z}^{(k)} \preceq \mathbf{u}^{(k)}, \quad (2)
$$

and therefore every element $\sigma(\mathbf{z}_i^{(k)})$ of the nonlinear activation $\sigma(\mathbf{z}^{(k)})$ in constraint (1) can be bounded by linear functions:

$$
h_i^{(k)L}(\mathbf{z}_i^{(k)}) \leq \sigma(\mathbf{z}_i^{(k)}) \leq h_i^{(k)U}(\mathbf{z}_i^{(k)}), \forall \mathbf{z}_i^{(k)} \in [\mathbf{l}_i^{(k)}, \mathbf{u}_i^{(k)}], \quad (3)
$$

for $i \in [n_k]$. The existence of linear bounding functions in (3) is guaranteed because $\mathbf{z}_i^{(k)}$ ranges over a closed, bounded interval, on which the continuous activation $\sigma$ is itself bounded. For example, the constant functions $h_i^{(k)L}(\mathbf{z}_i^{(k)}) = \min_{\mathbf{z}_i^{(k)} \in [\mathbf{l}_i^{(k)}, \mathbf{u}_i^{(k)}]} \sigma(\mathbf{z}_i^{(k)})$ and $h_i^{(k)U}(\mathbf{z}_i^{(k)}) = \max_{\mathbf{z}_i^{(k)} \in [\mathbf{l}_i^{(k)}, \mathbf{u}_i^{(k)}]} \sigma(\mathbf{z}_i^{(k)})$ are valid bounding functions. $h_i^{(k)L}$ and $h_i^{(k)U}$ can also be taken as the pointwise supremum and infimum of several linear functions, respectively, which is equivalent to using multiple linear constraints. In practice, Salman et al. use linear functions characterized by slopes and intercepts:

$$
\begin{align*}
h_i^{(k)L}(\mathbf{z}_i^{(k)}) &= s_i^{(k)L}\mathbf{z}_i^{(k)} + t_i^{(k)L}, \\
h_i^{(k)U}(\mathbf{z}_i^{(k)}) &= s_i^{(k)U}\mathbf{z}_i^{(k)} + t_i^{(k)U}.
\tag{4} +\end{align*} +$$ + +The optimization problem can therefore be relaxed to an LP-alike problem¹: + +$$ +\begin{equation} +\begin{split} +& \min_{\mathbf{a}^{(0)} \in \mathbb{B}_p(x_0, \epsilon), \mathbf{a}^{[m-1]}, \mathbf{z}^{[m-1]}} \mathbf{W}_{i,:}^{(m)} \mathbf{a}^{(m-1)} + \mathbf{b}_i^{(m)} \\ +& \text{s.t.} \quad \left\{ +\begin{array}{@{}l@{}} +\displaystyle \mathbf{z}_i^{(k)} = \mathbf{W}_i^{(k)} \mathbf{a}^{(k-1)} + \mathbf{b}_i^{(k)}, \quad \forall k \in [m-1], \\ +\displaystyle h_i^{(k)L}(\mathbf{z}_i^{(k)}) \preceq \mathbf{a}_i^{(k)} \preceq h_i^{(k)U}(\mathbf{z}_i^{(k)}), \quad \forall k \in [m-1], \\ +\displaystyle \mathbf{l}^{(k)} \preceq \mathbf{z}^k \preceq \mathbf{u}^{(k)}, \quad \forall k \in [m-1]. +\end{array} +\right. +\end{split} +\tag{5} +\end{equation} +$$ + +Recalling that with the optimization formed as in Problem (5), one is essentially optimizing for the lower (or upper) output bounds of the network (the pre-activation of the m-th layer), whereas these build upon the assumption that the pre-activation bounds are known as Equation (2). To satisfy this + +assumption, one actually only needs to substitute the layer index $m$ in Problem (5) with the corresponding intermediate layer's index. In practice, one can recursively solve LP problems from the second layer to the $m$-th layer to obtain the pre-activation bounds for all layers. In this process, the pre-activation bounds computed in a layer also constitute the optimization constraint for the next to-be-solved optimization problem for the next layer's pre-activation. See details of this LP-based method in Appendix Section A.3. + +**CROWN Solutions.** Here we briefly walk through the derivation of Fast-Lin (Weng et al. 2018b) and CROWN (Zhang et al. 2018), whose procedures are essentially the same except for activation-specific bounding rules adopted. The first steps include bounding $\mathbf{z}_i^{(k)}$, $k \in [m]^2$. 
$$
\begin{align}
\mathbf{z}_i^{(k)} &= \sum_{j=1}^{n_{k-1}} \mathbf{W}_{i,j}^{(k)} \sigma(\mathbf{z}_j^{(k-1)}) + \mathbf{b}_i^{(k)} \tag{6} \\
&\geq \sum_{j=1}^{n_{k-2}} \tilde{\mathbf{W}}_{i,j}^{(k-1)} \sigma(\mathbf{z}_j^{(k-2)}) + \tilde{\mathbf{b}}_i^{(k-1)}, \tag{7}
\end{align}
$$

where $\operatorname{neg}(\mathbf{x}) = \mathbf{x}$ if $\mathbf{x} \le 0$ and $\operatorname{neg}(\mathbf{x}) = 0$ otherwise, $\operatorname{relu}(\mathbf{x}) = \mathbf{x}$ if $\mathbf{x} \ge 0$ and $\operatorname{relu}(\mathbf{x}) = 0$ otherwise (both applied element-wise), and, initializing $\tilde{\mathbf{W}}^{(k)} := \mathbf{W}^{(k)}$ and $\tilde{\mathbf{b}}^{(k)} := \mathbf{b}^{(k)}$,

$$
\tilde{\mathbf{W}}_{i,:}^{(k-1)} = \left[\operatorname{relu}(\tilde{\mathbf{W}}_{i,:}^{(k)}) \odot (s^{(k-1)L})^\top + \operatorname{neg}(\tilde{\mathbf{W}}_{i,:}^{(k)}) \odot (s^{(k-1)U})^\top\right] \mathbf{W}^{(k-1)},
$$

$$
\tilde{\mathbf{b}}_i^{(k-1)} = \left[\operatorname{relu}(\tilde{\mathbf{W}}_{i,:}^{(k)}) \odot (s^{(k-1)L})^\top + \operatorname{neg}(\tilde{\mathbf{W}}_{i,:}^{(k)}) \odot (s^{(k-1)U})^\top\right] \mathbf{b}^{(k-1)} + \operatorname{relu}(\tilde{\mathbf{W}}_{i,:}^{(k)})\, t^{(k-1)L} + \operatorname{neg}(\tilde{\mathbf{W}}_{i,:}^{(k)})\, t^{(k-1)U} + \tilde{\mathbf{b}}_i^{(k)},
$$

where $\odot$ denotes the element-wise product (for the lower bound derived here, positive weights pick the lower-bounding lines and negative weights the upper-bounding lines; the roles are swapped for the upper bound). As Equations (6) and (7) have similar forms, the above procedure can be repeated until all the nonlinearities in the first $k-1$ layers are unwrapped by linear bounding functions and $\mathbf{z}_i^{(k)}$ is lower bounded by $\sum_{j=1}^{n} \tilde{\mathbf{W}}_{i,j}^{(1)}\mathbf{x}_j + \tilde{\mathbf{b}}_i^{(1)}$, where $\tilde{\mathbf{W}}^{(1)}$ and $\tilde{\mathbf{b}}^{(1)}$ are obtained from the same recursion.
Taking the dual form of the bound then yields the closed-form bound $\gamma_i^{(k)L}$ that satisfies

$$
\mathbf{z}_i^{(k)} \geq \gamma_i^{(k)L} := \tilde{\mathbf{W}}_{i,:}^{(1)}\mathbf{x}_0 - \epsilon\, \|\tilde{\mathbf{W}}_{i,:}^{(1)}\|_q + \tilde{\mathbf{b}}_i^{(1)}, \tag{8}
$$

$\forall \mathbf{x} \in \mathbb{B}_p(\mathbf{x}_0, \epsilon)$, where $1/p + 1/q = 1$. Although the steps above derive the closed-form lower bound, the closed-form upper bound $\gamma_i^{(k)U}$ can be derived similarly. To quantify the robustness of an $m$-layer neural network, one recursively applies Equation (8) to calculate the bounds of the pre-activations³ $\mathbf{z}^{(k)}$ for $k = 2, \dots, m$. These bounds, as will be explained in more detail later, confine the feasible set for choosing the linear bounding functions in Equation (4). Notably, the lower bound $\gamma_i^{(k)L}$ and upper bound $\gamma_i^{(k)U}$ implicitly depend on the slopes of the bounding lines in previous layers, $s^{[k-1]U} = \{s^{(1)U}, \dots, s^{(k-1)U}\}$ and $s^{[k-1]L} = \{s^{(1)L}, \dots, s^{(k-1)L}\}$, and

¹The optimization problem becomes a strict LP problem only when $p = \infty$ or $p = 1$, which makes the feasible set a polyhedron. However, we loosely refer to all cases as LP problems since all the constraints are linear in the variables.

²Similar to the discussion of the LP-based method above, the computed bounds are exactly the network output bounds when $k = m$, whereas $k \neq m$ gives the pre-activation bounds that fulfill the assumption in Inequality (2).

³$\mathbf{z}^{(1)}$ is deterministically computed from the input, and $\mathbf{z}^{(m)} = F(\mathbf{x})$ gives the output bounds.
their intercepts $t^{[k-1]U} = \{t^{(1)U}, \dots, t^{(k-1)U}\}$ and $t^{[k-1]L} = \{t^{(1)L}, \dots, t^{(k-1)L}\}$. A major difference that distinguishes CROWN from our contributions in the following sections is its deterministic rules for choosing the upper/lower-bounding lines. The reader is referred to the literature (Zhang et al. 2018) or Sections A.2 and A.4 in the appendix for more details of CROWN.

## Relation Between the LP Problem and CROWN Solutions

Now we discuss the relationship between the LP problem formulation and CROWN. A key conclusion is that CROWN is not only a dual feasible solution of the presented LP problem, as discussed by Salman et al., but in fact gives the optimal solution under mild conditions.

Before introducing the optimality of CROWN solutions under the LP framework, we define an important condition on the computation process of CROWN:

**Condition 1 (Self-consistency).** Suppose $\{\tilde{s}^{[v-1]U}, \tilde{s}^{[v-1]L}, \tilde{t}^{[v-1]U}, \tilde{t}^{[v-1]L}\}$ are used to calculate $\gamma_i^{(v)L}$ and $\gamma_i^{(v)U}$, and $\{\hat{s}^{[k-1]U}, \hat{s}^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}\}$ are used to calculate $\gamma_j^{(k)L}$ and $\gamma_j^{(k)U}$; then the following holds:

$$
\begin{aligned}
\tilde{s}^{[v-1]U} &= \hat{s}^{[v-1]U}, & \tilde{s}^{[v-1]L} &= \hat{s}^{[v-1]L}, \\
\tilde{t}^{[v-1]U} &= \hat{t}^{[v-1]U}, & \tilde{t}^{[v-1]L} &= \hat{t}^{[v-1]L},
\end{aligned}
$$

for all $i \in [n_v]$, $j \in [n_k]$, and $2 \le v \le k \le m$, where two sets are said to be equal when their corresponding elements are equal.

A similar condition can be defined for the LP-based method and is supplemented in Section A.3 in the appendix. The self-consistency condition guarantees that the same set of bounding lines is used when computing bounds for different neurons in the process of CROWN or the LP-based method.
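To make the LP formulation concrete before turning to its relation with CROWN, the following sketch instantiates the relaxed Problem (5) for a hypothetical one-hidden-layer ReLU network and solves it with an off-the-shelf solver. It is a minimal illustration, not the paper's implementation: numpy and scipy are assumed available, all weights are illustrative, and a single pair of bounding lines per neuron is used as in Equation (4).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical toy instance of Problem (5): one hidden ReLU layer.
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=3), 0.2          # a single output neuron
x0, eps = np.zeros(2), 0.3                # l_inf ball around x0

# Exact hidden pre-activation bounds, Eq. (2), for the l_inf ball (dual norm q = 1).
rad = eps * np.abs(W1).sum(axis=1)
l, u = W1 @ x0 + b1 - rad, W1 @ x0 + b1 + rad

# Bounding lines, Eq. (4): chord above the ReLU, line of slope s0 through 0 below it.
s0 = (np.maximum(u, 0) - np.maximum(l, 0)) / (u - l)
t_up = np.maximum(l, 0) - s0 * l

# LP variables v = [x (2), z (3), a (3)]; minimize W2 . a + b2 over the relaxation.
n_x, n_h = 2, 3
c = np.concatenate([np.zeros(n_x + n_h), W2])

# Equality constraints: z - W1 x = b1.
A_eq = np.hstack([-W1, np.eye(n_h), np.zeros((n_h, n_h))])
b_eq = b1

# Inequalities: s0*z - a <= 0 (lower line, zero intercept) and a - s0*z <= t_up (chord).
A_low = np.hstack([np.zeros((n_h, n_x)), np.diag(s0), -np.eye(n_h)])
A_up = np.hstack([np.zeros((n_h, n_x)), -np.diag(s0), np.eye(n_h)])
A_ub = np.vstack([A_low, A_up])
b_ub = np.concatenate([np.zeros(n_h), t_up])

bounds = [(x0[i] - eps, x0[i] + eps) for i in range(n_x)] \
       + [(l[j], u[j]) for j in range(n_h)] \
       + [(0.0, None) for _ in range(n_h)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
lower = res.fun + b2                      # certified lower bound of F(x) over the ball

def forward(x):
    """The exact (unrelaxed) network, for sanity checks."""
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2
```

Here `lower` is a sound lower bound of the output over the whole $l_\infty$ ball, since the true point $(x, W_1x+b_1, \operatorname{relu}(W_1x+b_1))$ is always feasible; scaling this construction to every neuron of every layer is what makes the LP-based method expensive.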
We note that both the original CROWN and the LP-based method satisfy the self-consistency condition.

**Theorem 1** *The lower bound obtained by Equation (8) is the optimal solution to Problem (5) when the following three conditions are met:*

* *Each of $h^{(k)L}(\mathbf{z}^{(k)})$ and $h^{(k)U}(\mathbf{z}^{(k)})$ in Problem (5) is chosen to be* one *linear function⁴ as in Equation (4).*

* *The LP problem shares the same bounding lines with CROWN.*

* *The self-consistency conditions hold for both CROWN and the LP-based method.*

We refer readers to Section A.5 in the appendix for the proof. We emphasize the crucial role of the self-consistency conditions in Theorem 1: we do observe that CROWN and the LP-based method can give different bounds when Condition 1 is not met, even though the two use the same bounding lines. In essence, Theorem 1 allows one, under the stated conditions, to compute bounds analytically and efficiently by following the steps of CROWN instead of solving the expensive LP problems.

Figure 1: The process of CROWN using different bounding lines to compute the closed-form bounds for different neurons. The blue curves are the ReLU activation. The orange and green lines are the upper and lower bounding lines, respectively. When computing closed-form bounds of the pre-activations of neurons in the second layer, different neurons can choose different bounding lines in the previous layers to yield the tightest closed-form bounds for themselves.

## Fastened CROWN

The original CROWN method recognizes the dependency of the lower and upper bounds on the slopes and intercepts, and enforces a consistent use of these parameters through the self-consistency condition. In fact, we argue that this constraint can be lifted in general (we visualize this relaxation in Figure 1). This section stems from this relaxation and focuses on optimizing the pre-activation/output bounds over these tunable bounding parameters to achieve tighter bounds.
In that merit, we propose an optimization framework called FROWN (Fastened CROWN) for tightening robustness certificates in CROWN. Moreover, FROWN is versatile and can be widely applied to tighten the previously proposed CNN-Cert (Boopathy et al. 2019) for convolutional neural networks and POPQORN (Ko et al. 2019) for recurrent neural networks. We formalize the objective as the following two optimization problems:

$$
\begin{gather}
\max_{s^{[k-1]L},\, s^{[k-1]U},\, t^{[k-1]L},\, t^{[k-1]U}} \gamma_i^{(k)L} \tag{9} \\
\text{s.t. } s_i^{(v)L} \mathbf{z}_i^{(v)} + t_i^{(v)L} \le \sigma(\mathbf{z}_i^{(v)}) \le s_i^{(v)U} \mathbf{z}_i^{(v)} + t_i^{(v)U}, \nonumber \\
\forall \mathbf{z}_i^{(v)} \in [\mathbf{l}_i^{(v)}, \mathbf{u}_i^{(v)}],\ i \in [n_v],\ v \in [k-1], \nonumber
\end{gather}
$$

and

$$
\begin{gather}
\min_{s^{[k-1]L},\, s^{[k-1]U},\, t^{[k-1]L},\, t^{[k-1]U}} \gamma_i^{(k)U} \tag{10} \\
\text{s.t. } s_i^{(v)L} \mathbf{z}_i^{(v)} + t_i^{(v)L} \le \sigma(\mathbf{z}_i^{(v)}) \le s_i^{(v)U} \mathbf{z}_i^{(v)} + t_i^{(v)U}, \nonumber \\
\forall \mathbf{z}_i^{(v)} \in [\mathbf{l}_i^{(v)}, \mathbf{u}_i^{(v)}],\ i \in [n_v],\ v \in [k-1]. \nonumber
\end{gather}
$$

However, we stress that Problems (9) and (10) are generally non-convex when there are more than two layers in the target network; we enclose the proof as Section A.7 in the appendix. Optimizing a non-convex objective over parameters in a large search space with an infinite number of constraints is therefore impractical. To this end, our idea is to limit the search space to

⁴Theoretically, one can use multiple linear functions to bound the nonlinearity in Problem (5) to obtain tighter bounds.

Table 2: Search space of bounding lines for ReLU, Sigmoid, and Tanh functions. "Variable" is the optimization variable that characterizes the bounding line. "Range" is the feasible region of the variable. "-" indicates the case where the tightest bounding line is unique and chosen.
The slope and intercept of the ReLU upper-bounding line are always set to $s_0$ and $t(s_0, l)$, respectively.
| Pre-activation bounds | ReLU (Lower bnd.), $l < u \le 0$ | ReLU (Lower bnd.), $l < 0 < u$ | ReLU (Lower bnd.), $0 \le l < u$ | Sigmoid & Tanh (Upper bnd.), $l < u \le 0$ | Sigmoid & Tanh (Upper bnd.), $l < 0 < u$, case 1 | Sigmoid & Tanh (Upper bnd.), $l < 0 < u$, case 2 | Sigmoid & Tanh (Lower bnd.), $l < u \le 0$ | Sigmoid & Tanh (Lower bnd.), $l < 0 < u$, case 3 | Sigmoid & Tanh (Lower bnd.), $0 \le l < u$ |
|---|---|---|---|---|---|---|---|---|---|
| Variable | - | $s$ | - | - | $d_1$ | - | $d_1$ | $d_2$ | - |
| Range | - | $[0, 1]$ | - | - | $[l_d, u]$ | - | $[l, u]$ | $[l, u]$ | - |
| Slope | $s_0$ | $s$ | $s_0$ | $s_0$ | $\sigma'(d_1)$ | $s_0$ | $\sigma'(d_1)$ | $\sigma'(d_2)$ | $s_0$ |
| Intercept | $t(s_0, l)$ | $0$ | $t(s_0, l)$ | $t(s_0, l)$ | $t(\sigma'(d_1), d_1)$ | $t(s_0, l)$ | $t(\sigma'(d_1), d_1)$ | $t(\sigma'(d_2), d_2)$ | $t(s_0, l)$ |
Notes: Case 1 refers to $\sigma'(u)l + t(\sigma'(u), u) \geq \sigma(l)$, and case 2 to the opposite. Case 3 refers to $\sigma'(l)u + t(\sigma'(l), l) \leq \sigma(u)$, and case 4 to the opposite. $s_0 = |\sigma(u) - \sigma(l)| / (u-l)$ and $t(s, y) = \sigma(y) - sy$. $l_d$ and $u_d$ are defined as the abscissas of the points at which the tangent passes through the left endpoint $(l, \sigma(l))$ and the right endpoint $(u, \sigma(u))$, respectively; $d_1$ and $d_2$ are the abscissas of the points of tangency. See Figure 2 for a visualization of $l_d$, $u_d$, $d_1$ and $d_2$.

Figure 2: Illustration of the search space of bounding lines for the Sigmoid and ReLU activations. See the definitions of $d_1$, $d_2$, $u_d$, $l_d$ in Table 2.

smaller ones. We present our solutions by first introducing the notion of "tighter" bounding lines.

**Definition 1** Suppose $\tilde{h}_i^{(k)L}(\mathbf{z}_i^{(k)}) = \tilde{s}_i^{(k)L}\mathbf{z}_i^{(k)} + \tilde{t}_i^{(k)L}$ and $\hat{h}_i^{(k)L}(\mathbf{z}_i^{(k)}) = \hat{s}_i^{(k)L}\mathbf{z}_i^{(k)} + \hat{t}_i^{(k)L}$ are two lower-bounding lines that satisfy

$$
\left\{
\begin{array}{l}
\tilde{h}_{i}^{(k)L}(\mathbf{z}_{i}^{(k)}) > \hat{h}_{i}^{(k)L}(\mathbf{z}_{i}^{(k)}), \quad \forall \mathbf{z}_{i}^{(k)} \in (\mathbf{l}_{i}^{(k)}, \mathbf{u}_{i}^{(k)}), \\
\sigma(\mathbf{z}_{i}^{(k)}) \geq \tilde{h}_{i}^{(k)L}(\mathbf{z}_{i}^{(k)}), \quad \forall \mathbf{z}_{i}^{(k)} \in [\mathbf{l}_{i}^{(k)}, \mathbf{u}_{i}^{(k)}];
\end{array}
\right.
$$

then we say $\tilde{h}_i^{(k)L}(\mathbf{z}_i^{(k)}) = \tilde{s}_i^{(k)L}\mathbf{z}_i^{(k)} + \tilde{t}_i^{(k)L}$ is a tighter lower-bounding line than $\hat{h}_i^{(k)L}(\mathbf{z}_i^{(k)}) = \hat{s}_i^{(k)L}\mathbf{z}_i^{(k)} + \hat{t}_i^{(k)L}$ for the nonlinear activation $\sigma$ in the interval $[\mathbf{l}_i^{(k)}, \mathbf{u}_i^{(k)}]$.

A tighter upper-bounding line is defined similarly.
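To make the tangent-line and chord constructions of Table 2 concrete, a quick numerical check (a sketch in numpy; the interval endpoints are hypothetical) verifies that on a concave sigmoid interval $0 \le l < u$, every tangent line is a valid upper-bounding line while the chord with slope $s_0$ is a valid lower-bounding line:

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))          # sigmoid activation
dsig = lambda z: sig(z) * (1.0 - sig(z))          # its derivative, sigma'
t = lambda s, y: sig(y) - s * y                   # intercept t(s, y) from Table 2

l, u = 0.2, 2.0                                   # hypothetical concave interval, 0 <= l < u
zs = np.linspace(l, u, 1001)

# Upper bounds: on a concave interval, the tangent at any d in [l, u] lies above sigma.
for d in np.linspace(l, u, 7):
    upper = dsig(d) * zs + t(dsig(d), d)
    assert np.all(upper >= sig(zs) - 1e-12)

# Lower bound: the chord (slope s0) lies below a concave function -- the unique
# tightest lower-bounding line in this case, matching the "-" column of Table 2.
s0 = (sig(u) - sig(l)) / (u - l)
lower = s0 * zs + t(s0, l)
assert np.all(lower <= sig(zs) + 1e-12)
```

The same check with the roles of tangents and chord swapped applies on the convex interval $l < u \le 0$.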
Accordingly, a tightest bounding line is one for which, by definition, no tighter bounding line exists. Note that the tightest bounding line may not be unique. For example, any line passing through the origin with a slope between 0 and 1 is a tightest lower-bounding line for the ReLU activation on an interval across the origin ($\mathbf{l}_i^{(v)} < 0 < \mathbf{u}_i^{(v)}$).

With the notion of tightness, a straightforward idea is to adopt one of the tightest bounding lines in every layer to obtain generally tighter closed-form pre-activation/output bounds. However, the proposition

tighter bounding lines $\Rightarrow$ tighter closed-form bounds

is not always true: we observe that tighter bounding lines can sometimes lead to looser bounds. However, if we roll back to Condition 1, we can prove that it constitutes a sufficient condition for the proposition.

**Theorem 2** *If the robustness of a neural network is evaluated by CROWN on two trials with two different sets of bounding lines characterized by $\{\tilde{s}^{[k-1]U}, \tilde{s}^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}\}$ and $\{\hat{s}^{[k-1]U}, \hat{s}^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}\}$, in both of which the self-consistency condition is met, then the closed-form bounds obtained via CROWN satisfy*

$$
\begin{align*}
\gamma_i^{(k)L}(\tilde{s}^{[k-1]U}, \tilde{s}^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}) &\ge \gamma_i^{(k)L}(\hat{s}^{[k-1]U}, \hat{s}^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}), \\
\gamma_i^{(k)U}(\tilde{s}^{[k-1]U}, \tilde{s}^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}) &\le \gamma_i^{(k)U}(\hat{s}^{[k-1]U}, \hat{s}^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}),
\end{align*}
$$

*for all $i \in [n_k]$, when the bounding lines determined by $\{\tilde{s}^{[k-1]U}, \tilde{s}^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}\}$ are the same as or tighter than those determined by $\{\hat{s}^{[k-1]U}, \hat{s}^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}\}$.*

**Proof:** Self-consistency guarantees that the bounds given by CROWN are optimal for the corresponding LP problem (5), by Theorem 1. Since using tighter bounding lines in Problem (5) narrows the feasible set, the optimal value of Problem (5) (the lower pre-activation/output bound) stays the same or grows larger, which means the lower bound given by CROWN stays the same or grows larger. A similar argument applies to the tightened upper bounds. $\square$

So far, we have confirmed the connection between the tightest bounding lines and the tightest CROWN pre-activation/output bounds under Condition 1. In addition, we manage to prove Theorem 2 under a weaker condition (see its proof in Section A.6 in the appendix).

Condition 1 is too strong a condition for our proposed optimization framework to be practical; in fact, we propose to improve CROWN precisely by breaking Condition 1. Our problem can be eased by considering only the dependency of the closed-form pre-activation/output bounds on the intercepts (with the slopes fixed). We provide the following theorem:

**Theorem 3** *If the robustness of a neural network is evaluated by CROWN on two trials with bounding lines characterized by $\{s^{[k-1]U}, s^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}\}$ and $\{s^{[k-1]U}, s^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}\}$, then the closed-form bounds obtained via CROWN satisfy*

$$
\begin{align*}
\gamma_i^{(k)L}(s^{[k-1]U}, s^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}) &\ge \gamma_i^{(k)L}(s^{[k-1]U}, s^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}), \\
\gamma_i^{(k)U}(s^{[k-1]U}, s^{[k-1]L}, \tilde{t}^{[k-1]U}, \tilde{t}^{[k-1]L}) &\le \gamma_i^{(k)U}(s^{[k-1]U}, s^{[k-1]L}, \hat{t}^{[k-1]U}, \hat{t}^{[k-1]L}),
\end{align*}
$$

*for all $i \in [n_k]$, when $\tilde{t}^{(v)L} \succcurlyeq \hat{t}^{(v)L}$ and $\tilde{t}^{(v)U} \preceq \hat{t}^{(v)U}$ for all $v \in [k-1]$.*
This theoretical guarantee restricts the freedom in choosing the intercepts: we should always choose upper-bounding lines with smaller intercepts and lower-bounding lines with larger intercepts if different bounding lines are allowed for different network neurons. Note that this conclusion holds under no assumptions on the choice of bounding lines and hence can be used to instruct how bounding lines are chosen in FROWN. In Appendix Section A.8, we demonstrate that Theorem 3 can be used to reduce the search space of the upper (or lower) bounding lines to one continuously characterized by a single variable. This enables gradient-based search over candidate bounding lines to obtain tighter bounds. To simplify implementation, we further limit the search space to the tightest bounding lines (a subset of the search space narrowed by Theorem 3 alone), as demonstrated in Table 2 and exemplified in Figure 2. We emphasize that this limitation is not necessary: FROWN readily generalizes to the case in which the search space is reduced only by Theorem 3, and the obtained bounds should be even tighter since the search space is larger. Since the tightest bounding lines defined in Table 2 automatically satisfy the optimization constraints in Problems (9) and (10), the constrained optimization problems are converted to unconstrained ones. Furthermore, the objective functions in the two problems are differentiable with respect to the bounding-line parameters, which allows us to solve them by projected gradient descent (Nesterov 2014) (see details in Appendix Section A.9).
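The projected-gradient idea can be sketched in plain numpy for the simplest tunable parameter in Table 2, the free ReLU lower slope $s \in [0, 1]$. The one-hidden-layer network and all weights below are hypothetical, and the (sub)gradient of the closed-form bound is written out by hand rather than obtained by automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-hidden-layer ReLU network (weights illustrative only).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=4), 0.1
x0, eps = np.zeros(3), 0.5

# Exact hidden pre-activation bounds over the l_inf ball (dual norm q = 1).
rad = eps * np.abs(W1).sum(axis=1)
l, u = W1 @ x0 + b1 - rad, W1 @ x0 + b1 + rad

s0 = (np.maximum(u, 0) - np.maximum(l, 0)) / (u - l)  # chord slope (fixed upper line)
t_up = np.maximum(l, 0) - s0 * l
cross = (l < 0) & (u > 0)                             # neurons with a free lower slope
pos, neg = np.maximum(W2, 0), np.minimum(W2, 0)

def gamma(s):
    """Closed-form lower output bound (Eq. (8)) given lower slopes s on crossing neurons."""
    s_low = np.where(cross, s, (l >= 0).astype(float))
    row = pos * s_low + neg * s0                      # slopes merged by weight sign, Eq. (7)
    Wt = row @ W1
    bt = row @ b1 + neg @ t_up + b2                   # all lower-line intercepts are 0
    return Wt @ x0 - eps * np.abs(Wt).sum() + bt, Wt

s = s0.copy()                                         # Fast-Lin's choice as initialization
best_g, best_s = gamma(s)[0], s.copy()
for _ in range(200):
    g, Wt = gamma(s)
    if g > best_g:
        best_g, best_s = g, s.copy()
    # Hand-derived (sub)gradient of gamma w.r.t. the free slopes (nonzero on crossing neurons).
    grad = np.where(cross, pos * (W1 @ x0 + b1 - eps * (W1 @ np.sign(Wt))), 0.0)
    s = np.clip(s + 0.05 * grad, 0.0, 1.0)            # projected gradient ascent onto [0, 1]

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2
```

Since a fixed-step ascent need not be monotone, the sketch keeps the best iterate; any $s \in [0, 1]$ yields a sound bound, so the search only affects tightness. For this one-hidden-layer case the objective happens to be concave in $s$, whereas for deeper networks Problems (9) and (10) are non-convex as noted above.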
By and large, given an *m*-layer network $F$, an input sample $\mathbf{x}_0 \in \mathbb{R}^n$, and $l_p$-ball parameters $p \ge 1$ and $\epsilon \ge 0$, for all $j \in [n_m]$ and $1/q = 1 - 1/p$, we can compute two fixed values $\gamma_j^L$ and $\gamma_j^U$ such that $\gamma_j^L \le F_j(\mathbf{x}) \le \gamma_j^U$ holds for all $\mathbf{x} \in \mathbb{B}_p(\mathbf{x}_0, \epsilon)$. Suppose the label of the input sample is $i$; the largest possible certified lower bound $\epsilon_i$ against untargeted and targeted (target class $j$) attacks is found by solving:

Untargeted: $\epsilon_i = \max_{\epsilon} \epsilon$, s.t. $\gamma_i^L(\epsilon) \ge \gamma_j^U(\epsilon), \forall j \ne i$.

Targeted: $\hat{\epsilon}(i,j) = \max_{\epsilon} \epsilon$, s.t. $\gamma_i^L(\epsilon) \ge \gamma_j^U(\epsilon)$.

We conduct binary search to compute the largest possible $\epsilon_i$ (or $\hat{\epsilon}$).

## Experimental Results

**Overview.** In this section, we compare the LP-based method⁵ and FROWN as two approaches to improving CROWN. We allow the LP-based method to use more than one bounding line (which also increases computation cost) in order to improve upon CROWN. Specifically, two lower-bounding lines are considered for ReLU networks, while up to three upper/lower-bounding lines are adopted for Sigmoid (or Tanh) networks in the LP-based method (more details are supplemented as Section A.4 in the appendix). FROWN, on the other hand, improves CROWN solutions by optimizing over the bounding lines to give tighter bounds. The two approaches are evaluated and compared herein by both the safeguarded regions they certify and their time complexity. We run the LP-based method on a single Intel Xeon E5-2640 v3 (2.60GHz) CPU. We implement our proposed method FROWN in PyTorch to enable the use of an NVIDIA GeForce GTX TITAN X GPU.
However, we time FROWN on a single Intel Xeon E5-2640 v3 (2.60GHz) CPU when comparing with the LP-based method, for a fair comparison. We leave the detailed experimental set-ups and complete experimental results to Appendix Section A.9.

**Experiment I.** In the first experiment, we compare the improvements of FROWN and the LP-based method over CROWN on sensorless drive diagnosis networks⁶ and MNIST classifiers. We present their results in Table 3. As shown in the table, we consider ReLU and Sigmoid networks (results for Tanh networks are included in Appendix Section A.9) trained independently on the two datasets. The networks range from 3 to 20 layers in depth and from 20 to 100 neurons per layer. We remark that even on networks with only 100 neurons, the LP-based method scales badly and is unable to produce results within 100 minutes for a single image. The improved bounds in Table 3 verify the effectiveness of both FROWN and the LP-based approach in tightening CROWN results. Specifically, we observe up to a 93% improvement in the magnitude of the bounds on the sensorless drive diagnosis networks, and in general, the deeper the target network, the greater the improvement. Comparing FROWN to the LP-based method, FROWN computes bounds up to two orders of magnitude faster and is especially advantageous when certifying $l_1$-norm regions. On the other hand, while the LP-based method gives larger certified bounds for ReLU networks in most cases, FROWN certifies larger bounds for Sigmoid and Tanh networks.

**Experiment II.** In our second experiment, we compute robustness certificates on CIFAR10 networks that have 2048 neurons in each layer. At this width, the LP-based method is **unusable** due to its high computational complexity. Therefore, we only show the improvements FROWN brings to the original CROWN solutions.
In

⁵The highly efficient Gurobi LP solver is adopted here.

⁶https://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis

Table 3: (Experiment I) Averaged certified $l_{\infty}$ bounds and $l_p$ bounds ($p = 1, 2, \infty$) of Sensorless Drive Diagnosis classifiers and MNIST classifiers, respectively. "N/A" indicates no results could be obtained in the given runtime. The up arrow "↑" means "more than". "$m \times [N]\ \sigma$" denotes an $m$-layer network with $N$ neurons per layer and $\sigma$ activation.
**Sensorless Drive Diagnosis classifiers**

| Network | $p$ | CROWN bound | FROWN bound | LP bound | FROWN improvement | LP improvement | FROWN time per image (s) | LP time per image (s) | Speedup of FROWN over LP |
|---|---|---|---|---|---|---|---|---|---|
| 4 × [20] ReLU | $\infty$ | 0.2019 | 0.2247 | 0.2269 | 11.27% | 12.38% | 0.35 | 0.96 | 2.8× |
| 8 × [20] ReLU | $\infty$ | 0.2094 | 0.2365 | 0.2526 | 12.95% | 20.66% | 1.81 | 4.19 | 2.3× |
| 12 × [20] ReLU | $\infty$ | 0.1996 | 0.2496 | 0.2740 | 25.05% | 37.28% | 4.39 | 9.46 | 2.2× |
| 4 × [20] Sigmoid | $\infty$ | 0.1019 | 0.1418 | 0.1388 | 39.08% | 36.21% | 0.74 | 2.11 | 2.8× |
| 8 × [20] Sigmoid | $\infty$ | 0.0858 | 0.1618 | 0.1626 | 88.62% | 89.59% | 4.15 | 32.15 | 7.7× |
| 12 × [20] Sigmoid | $\infty$ | 0.0782 | 0.1510 | 0.1081 | 93.06% | 38.22% | 8.71 | 152.91 | 17.6× |
Table 4: (Experiment II) Averaged certified $l_p$ bounds of different classifiers on CIFAR10 networks.
**MNIST classifiers**

| Network | $p$ | CROWN bound | FROWN bound | LP bound | FROWN improvement | LP improvement | FROWN time per image (s) | LP time per image (s) | Speedup of FROWN over LP |
|---|---|---|---|---|---|---|---|---|---|
| 5 × [20] ReLU | 1 | 3.8018 | 4.0835 | 4.2215 | 7.41% | 11.04% | 1.69 | 195.07 | 115.5× |
| 5 × [20] ReLU | 2 | 0.5346 | 0.5710 | 0.5587 | 6.81% | 4.52% | 1.12 | 41.63 | 37.1× |
| 5 × [20] ReLU | $\infty$ | 0.0261 | 0.0278 | 0.0287 | 6.52% | 9.89% | 1.26 | 11.93 | 9.5× |
| 20 × [20] ReLU | 1 | 2.3853 | 3.1062 | 3.1735 | 30.22% | 33.04% | 35.53 | 1301.11 | 36.6× |
| 20 × [20] ReLU | 2 | 0.3656 | 0.4925 | 0.5074 | 34.71% | 38.78% | 31.10 | 229.87 | 7.4× |
| 20 × [20] ReLU | $\infty$ | 0.0183 | 0.0240 | 0.0245 | 30.84% | 33.75% | 43.31 | 199.40 | 4.6× |
| 5 × [20] Sigmoid | 1 | 1.8009 | 2.1340 | 2.1126 | 18.49% | 17.31% | 1.75 | 310.27 | 177.2× |
| 5 × [20] Sigmoid | 2 | 0.3100 | 0.3581 | 0.3417 | 15.51% | 10.20% | 1.23 | 44.00 | 35.9× |
| 5 × [20] Sigmoid | $\infty$ | 0.0153 | 0.0174 | 0.0170 | 13.94% | 11.11% | 1.39 | 17.19 | 12.4× |
| 20 × [20] Sigmoid | 1 | 1.5348 | 1.9779 | 1.9730 | 28.87% | 28.55% | 31.80 | 4904.99 | 154.3× |
| 20 × [20] Sigmoid | 2 | 0.2524 | 0.3261 | 0.2657 | 29.21% | 5.28% | 31.77 | 1354.54 | 42.6× |
| 20 × [20] Sigmoid | $\infty$ | 0.0131 | 0.0166 | 0.0153 | 27.01% | 17.12% | 44.86 | 1859.68 | 41.5× |
frownbond + + CERTIFIED-Bounds IMPROVEMENT/ frownbond + + CERTIFIED-Bounds IMPROVEMENT/ frownbond + + CERTIFIED-Bounds IMPROVEMENT/ frownbond + + CERTIFIED-Bounds IMPROVEMENT/ frownbond + + CERTIFIED-Bonds IMPROVEMENT/ row nbond + + + + + + + + + + + + + + + + + + + + + + +
In this experiment, we further speed up FROWN by optimizing neurons in a layer group by group, instead of one by one, and we provide a parameter to balance the trade-off between tightness of bounds and time cost in FROWN (see details in Appendix Section A).

**Discussion**

Overall, we have shown a trade-off between computational cost and certified adversarial distortion in ReLU networks: the LP-based approach certifies larger bounds than FROWN, but at the cost of roughly twice the runtime. However, the LP-based approach suffers from poor scalability and soon becomes computationally infeasible as the network grows deeper or wider. In contrast, FROWN manages to enlarge the certified region of CROWN much more efficiently, and wins over the LP-based approach in almost all Sigmoid/Tanh networks. Notably, in some cases the LP-based method gives an even worse result than CROWN (those with negative improvements). We see two possible reasons: i) the Gurobi LP solver is not guaranteed to converge to the optimal solution, and ii) statistical fluctuations caused by random sample selection. More discussion on this is included in Appendix Section A.
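The group-by-group speedup described above can be sketched as follows. This is a hypothetical illustration, not FROWN's actual implementation: `toy_lower_bound`, its analytic gradient, and all parameter names are stand-ins, and the bound is a toy function that is differentiable in the per-neuron relaxation slopes. The sketch only shows the mechanism: partition the neurons into groups of size `group_size` and run projected gradient ascent on each group's slopes in turn, so that larger groups mean fewer optimization passes (faster) at the potential cost of a looser bound.

```python
import numpy as np

# Hypothetical sketch (not FROWN's code): tighten a CROWN-style certified
# lower bound by optimizing the relaxation slopes of ReLU neurons group by
# group. `group_size` controls the tightness/speed trade-off.

def toy_lower_bound(slopes, weights):
    """Stand-in for a certified lower bound that is differentiable in the
    per-neuron relaxation slopes; this toy bound peaks at slope = 0.5."""
    return -np.sum(weights * (slopes - 0.5) ** 2)

def bound_grad(slopes, weights):
    # Analytic gradient of the toy bound with respect to the slopes.
    return -2.0 * weights * (slopes - 0.5)

def optimize_slopes(weights, group_size, steps=200, lr=0.5):
    slopes = np.full_like(weights, 0.9)  # fixed CROWN-like initialization
    for start in range(0, len(slopes), group_size):
        g = slice(start, start + group_size)
        for _ in range(steps):  # projected gradient ascent on the bound
            slopes[g] += lr * bound_grad(slopes, weights)[g]
            slopes[g] = np.clip(slopes[g], 0.0, 1.0)  # keep slopes valid
    return slopes

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=16)        # toy per-neuron weights
tight = optimize_slopes(w, group_size=1)  # one neuron at a time: slowest
fast = optimize_slopes(w, group_size=8)   # grouped: fewer passes, faster
```

On this separable toy bound both settings converge to the same optimum; in a real verifier the bound couples neurons across layers, which is where grouping actually trades tightness for speed.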
---PAGE_BREAK---

**Conclusion**

In this paper, we have proved the optimality of CROWN in the relaxed LP framework under mild conditions. Furthermore, we have proposed a general and versatile optimization framework named FROWN for optimizing state-of-the-art formal robustness verifiers including CROWN, CNN-Cert, and POPQORN. Experiments on various networks have verified the usefulness of FROWN in providing tightened robustness certificates at a significantly lower cost than the LP-based method.

**Acknowledgement**

This work is partially supported by the General Research Fund (Project 14236516) of the Hong Kong Research Grants Council, and MIT-Quest program.

**References**

Boopathy, A.; Weng, T.-W.; Chen, P.-Y.; Liu, S.; and Daniel, L. 2019. Cnn-cert: An efficient framework for certifying robustness of convolutional neural networks. In AAAI.

Carlini, N., and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In SP.

Carlini, N., and Wagner, D. A. 2018. Audio adversarial examples: Targeted attacks on speech-to-text. CoRR abs/1801.01944.

Cisse, M. M.; Adi, Y.; Neverova, N.; and Keshet, J. 2017. Houdini: Fooling deep structured visual and speech recognition models with adversarial examples. In NeurIPS.

Dvijotham, K.; Stanforth, R.; Gowal, S.; Mann, T.; and Kohli, P. 2018. A dual approach to scalable verification of deep networks. UAI.

Gao, P.; Jiang, Z.; You, H.; Lu, P.; Hoi, S. C. H.; Wang, X.; and Li, H. 2019a.
Dynamic fusion with intra- and inter-modality attention flow for visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). + +Gao, P.; You, H.; Zhang, Z.; Wang, X.; and Li, H. 2019b. Multi-modality latent interaction network for visual question answering. In The IEEE International Conference on Computer Vision (ICCV). + +Gong, Y., and Poellabauer, C. 2017. Crafting adversarial examples for speech paralinguistics applications. CoRR abs/1711.03280. + +Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and harnessing adversarial examples. In ICLR. + +Hein, M., and Andriushchenko, M. 2017. Formal guarantees on the robustness of a classifier against adversarial manipulation. In NeurIPS. + +Ko, C.-Y.; Lyu, Z.; Weng, L.; Daniel, L.; Wong, N.; and Lin, D. 2019. POPQORN: Quantifying robustness of recurrent neural networks. In ICML. + +Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. ICLR Workshop. + +Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In CVPR. + +Mudrakarta, P. K.; Taly, A.; Sundararajan, M.; and Dhamdhere, K. 2018. Did the model understand the question? + +Nesterov, Y. 2014. *Introductory Lectures on Convex Optimization: A Basic Course*. Springer Publishing Company, Incorporated, 1 edition. + +Papernot, N.; McDaniel, P. D.; Swami, A.; and Harang, R. E. 2016. Crafting adversarial input sequences for recurrent neural networks. MILCOM. + +Raghunathan, A.; Steinhardt, J.; and Liang, P. 2018. Certified defenses against adversarial examples. ICLR. + +Salman, H.; Yang, G.; Zhang, H.; Hsieh, C.; and Zhang, P. 2019. A convex relaxation barrier to tight robustness verification of neural networks. CoRR abs/1902.08722. + +Sharif, M.; Bhagavatula, S.; Bauer, L.; and Reiter, M. K. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. 
In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 1528–1540. + +Singh, G.; Gehr, T.; Mirman, M.; Püschel, M.; and Vechev, M. 2018. Fast and effective robustness certification. In NeurIPS. 10825–10836. + +Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. ICLR. + +Weng, T.-W.; Zhang, H.; Chen, P.-Y.; Yi, J.; Su, D.; Gao, Y.; Hsieh, C.-J.; and Daniel, L. 2018a. Evaluating the robustness of neural networks: An extreme value theory approach. In ICLR. + +Weng, T.-W.; Zhang, H.; Chen, H.; Song, Z.; Hsieh, C.-J.; Boning, D.; Dhillon, I. S.; and Daniel, L. 2018b. Towards fast computation of certified robustness for relu networks. ICML. + +Weng, T.-W.; Zhang, H.; Chen, P.-Y.; Lozano, A.; Hsieh, C.-J.; and Daniel, L. 2018c. On extensions of clever: A neural network robustness evaluation algorithm. In GlobalSIP. + +Weng, L.; Chen, P.-Y.; Nguyen, L.; Squillante, M.; Boopathy, A.; Oseledets, I.; and Daniel, L. 2019. PROVEN: Verifying robustness of neural networks with a probabilistic approach. In ICML. + +Wong, E., and Kolter, Z. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, volume 80, 5286–5295. + +Zeng, X.; Liu, C.; Wang, Y.-S.; Qiu, W.; Xie, L.; Tai, Y.-W.; Tang, C.-K.; and Yuille, A. L. 2019. Adversarial attacks beyond the image space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). + +Zhang, H.; Weng, T.-W.; Chen, P.-Y.; Hsieh, C.-J.; and Daniel, L. 2018. Efficient neural network robustness certification with general activation functions. In NeurIPS. 4944–4953. 
\ No newline at end of file diff --git a/samples/texts_merged/598288.md b/samples/texts_merged/598288.md new file mode 100644 index 0000000000000000000000000000000000000000..b39e101b4e7c2976898f38f4cb787a3f6abeef04 --- /dev/null +++ b/samples/texts_merged/598288.md @@ -0,0 +1,14996 @@ + +---PAGE_BREAK--- + +Harmonic +Oscillators and +Two-by-two Matrices +in Symmetry +Problems in Physics + +Edited by +Young Suh Kim + +Printed Edition of the Special Issue Published in Symmetry +---PAGE_BREAK--- + +# Harmonic Oscillators and Two-By-Two Matrices in Symmetry Problems in Physics + +Special Issue Editor +Young Suh Kim +---PAGE_BREAK--- + +Young Suh Kim +University of Maryland +USA + +*Editorial Office* +MDPI AG +St. Alban-Anlage 66 +Basel, Switzerland + +This edition is a reprint of the Special Issue published online in the open access journal *Symmetry* (ISSN 2073-8994) from 2014–2017 (available at: http://www.mdpi.com/journal/symmetry/special_issues/physics-matrices). + +For citation purposes, cite each article independently as indicated on the article page online and as indicated below: + +Author 1; Author 2. Article title. *Journal Name Year, Article number*, page range. + +First Edition 2017 + +ISBN 978-3-03842-500-7 (Pbk) +ISBN 978-3-03842-501-4 (PDF) + +Articles in this volume are Open Access and distributed under the Creative Commons Attribution license (CC BY), which allows users to download, copy and build upon published articles even for commercial purposes, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications. The book taken as a whole is © 2017 MDPI, Basel, Switzerland, distributed under the terms and conditions of the Creative Commons license CC BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/4.0/). +---PAGE_BREAK--- + +# Table of Contents + +
About the Special Issue Editor V
Preface to "Harmonic Oscillators and Two-by-two Matrices in Symmetry Problems in Physics" VII
+ +## Chapter 1 + +**Orlando Panella and Pinaki Roy** +Pseudo Hermitian Interactions in the Dirac Equation +Reprinted from: *Symmetry* **2014**, *6*(1), 103–110, doi: 10.3390/sym6010103 +3 + +**Ettore Minguzzi** +Spacetime Metrics from Gauge Potentials +Reprinted from: *Symmetry* **2014**, *6*(2), 164–170, doi: 10.3390/sym6020164 +9 + +**Andrea Quadri** +Quantum Local Symmetry of the D-Dimensional Non-Linear Sigma Model: A Functional Approach +Reprinted from: *Symmetry* **2014**, *6*(2), 234–255; doi: 10.3390/sym6020234 +15 + +**Lock Yue Chew and Ning Ning Chung** +Dynamical Relation between Quantum Squeezing and Entanglement in Coupled Harmonic Oscillator System +Reprinted from: *Symmetry* **2014**, *6*(2), 295–307; doi: 10.3390/sym6020295 +34 + +**F. De Zela** +Closed-Form Expressions for the Matrix Exponential +Reprinted from: *Symmetry* **2014**, *6*(2), 329–344; doi: 10.3390/sym6020329 +45 + +**Luis L. Sánchez-Soto and Juan J. Monzón** +Invisibility and PT Symmetry: A Simple Geometrical Viewpoint +Reprinted from: *Symmetry* **2014**, *6*(2), 396–408; doi: 10.3390/sym6020396 +59 + +**Sibel Başkal, Young S. Kim and Marilyn E. Noz** +Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere +Reprinted from: *Symmetry* **2014**, *6*(3), 473–515; doi: 10.3390/sym6030473 +70 + +## Chapter 2 + +**Heung-Ryoul Noh** +Analytical Solutions of Temporal Evolution of Populations in Optically-Pumped Atoms with Circularly Polarized Light +Reprinted from: *Symmetry* **2016**, *8*(3), 17; doi: 10.3390/sym8030017 +111 + +**M. 
Howard Lee** +Local Dynamics in an Infinite Harmonic Chain +Reprinted from: *Symmetry* **2016**, *8*(4), 22; doi: 10.3390/sym8040022 +123 +---PAGE_BREAK--- + +**Christian Baumgarten** +Old Game, New Rules: Rethinking the Form of Physics +Reprinted from: *Symmetry* **2016**, *8*(5), 30; doi: 10.3390/sym8050030..................................................135 + +**Anaelle Hertz, Sanjib Dey, Véronique Hussin and Hichem Eleuch** +Higher Order Nonclassicality from Nonlinear Coherent States for Models with +Quadratic Spectrum +Reprinted from: *Symmetry* **2016**, *8*(5), 36; doi: 10.3390/sym8050036..................................................170 + +**Gabriel Amador, Kiara Colon, Nathalie Luna, Gerardo Mercado, Enrique Pereira and Erwin Suazo** +On Solutions for Linear and Nonlinear Schrödinger Equations with Variable Coefficients: A +Computational Approach +Reprinted from: *Symmetry* **2016**, *8*(6), 38; doi: 10.3390/sym8060038..................................................179 + +**Alexander Rauh** +Coherent States of Harmonic and Reversed Harmonic Oscillator +Reprinted from: *Symmetry* 2016, *8*(6), 46; doi:10.3390/sym8060046..................................................195 + +**Sibel Başkal, Young S. Kim and Marilyn E. Noz** +Entangled Harmonic Oscillators and Space-Time Entanglement +Reprinted from: *Symmetry* **2016**, *8*(7), 55; doi: 10.3390/sym8070055..................................................207 + +**Halina Grushevskaya and George Krylov** +Massless Majorana-Like Charged Carriers in Two-Dimensional Semimetals +Reprinted from: *Symmetry* **2016**, *8*(7), 60; doi: 10.3390/sym8070060..................................................233 + +Chapter 3 + +**Young S. Kim and Marilyn E. Noz** +Lorentz Harmonics, Squeeze Harmonics and Their Physical Applications +Reprinted from: *Symmetry* **2011**, *3*, 16–36; doi: 10.3390/sym3010016 ..................................................247 + +**Young S. Kim and Marilyn E. 
Noz**
Dirac Matrices and Feynman's Rest of the Universe
Reprinted from: *Symmetry* **2012**, *4*, 626–643; doi: 10.3390/sym4040626..................................................266

**Young S. Kim and Marilyn E. Noz**
Symmetries Shared by the Poincaré Group and the Poincaré Sphere
Reprinted from: *Symmetry* **2013**, *5*, 233–252; doi: 10.3390/sym5030233..................................................282

**Sibel Başkal, Young S. Kim and Marilyn E. Noz**
Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic
Oscillators and the Poincaré Sphere
Reprinted from: *Symmetry* **2014**, *6*, 473–515; doi: 10.3390/sym6030473..................................................299

**Sibel Başkal, Young S. Kim and Marilyn E. Noz**
Loop Representation of Wigner's Little Groups
Reprinted from: *Symmetry* **2017**, *9*(7), 97; doi: 10.3390/sym9070097..................................................338
---PAGE_BREAK---

About the Special Issue Editor

**Young Suh Kim** Dr. Kim came to the United States from South Korea in 1954 after high school graduation, to become a freshman at the Carnegie Institute of Technology (now called Carnegie Mellon University) in Pittsburgh. In 1958, he went to Princeton University to pursue graduate studies in Physics and received his PhD degree in 1961. In 1962, he became an assistant professor of Physics at the University of Maryland at College Park near Washington, DC. In 2007, Dr. Kim became a professor emeritus at the same university and thus became a full-time physicist. Dr. Kim's thesis advisor at Princeton was Sam Treiman, but he had to go to Eugene Wigner when faced with fundamental problems in physics. During this process, he became interested in Wigner's 1939 paper on internal space-time symmetries of physics. Since 1978, his publications have been based primarily on constructing mathematical formulas for understanding this paper. In 1988, Dr.
Kim noted that the same set of mathematical devices is applicable to squeezed states in quantum optics. Since then, he has also been publishing papers on optical and information sciences.
---PAGE_BREAK---


---PAGE_BREAK---

Preface to "Harmonic Oscillators and Two-by-two Matrices in Symmetry Problems in Physics"

This book consists of articles published in the two Special Issues entitled "Physics Based on Two-By-Two Matrices" and "Harmonic Oscillators in Modern Physics", in addition to the articles published by the issue editor that are not in those Special Issues.

With a degree of exaggeration, modern physics is the physics of harmonic oscillators and two-by-two matrices. Indeed, they constitute the basic language for the symmetry problems in physics, and thus the main theme of this journal. There is nothing special about the articles published in these Special Issues. In one way or another, most of the articles published in this *Symmetry* journal are based on these two mathematical instruments.

What is special is that the authors of these two Special Issues were able to recognize this aspect of the symmetry problems in physics. They are not the first to do this. In 1963, Eugene Wigner was awarded the Nobel prize for introducing group theoretical methods to physical problems. Wigner's basic scientific language consisted of two-by-two matrices.

Paul A. M. Dirac's four-by-four matrices are two-by-two matrices of two-by-two matrices. In addition, Dirac had another scientific language. He was quite fond of harmonic oscillators. He used the oscillator formalism for the Fock space, which is essential to second quantization and quantum field theory. The role of Gaussian functions in coherent and squeezed states in quantum optics is well known. In addition, the oscillator wave functions are used as approximations for many complicated wave functions in physics.
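The remark that Dirac's four-by-four matrices are two-by-two matrices of two-by-two matrices can be made concrete with a short numerical sketch (added here for illustration; the Dirac representation below is one standard choice, not something specified in the preface). Here $\gamma^0$ is $\mathrm{diag}(I, -I)$, each $\gamma^k$ places $\pm\sigma^k$ in the off-diagonal $2\times 2$ blocks, and the Clifford relation $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}I$ is checked directly:

```python
import numpy as np

# Pauli matrices: the two-by-two building blocks
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac representation: gamma matrices as 2x2 blocks of 2x2 matrices
gamma0 = np.block([[I2, Z2], [Z2, -I2]])
G = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = G[mu] @ G[nu] + G[nu] @ G[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```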
Needless to say, special relativity and quantum mechanics are two of the greatest achievements in physics of the past century. Dirac devoted lifelong efforts to making quantum mechanics compatible with Einstein's special relativity. He was interested in oscillator wave functions that can be Lorentz-boosted.

This journal will be publishing many interesting papers based on two-by-two matrices and harmonic oscillators. The authors will be very happy to acknowledge that they are following the examples of Dirac and Wigner. We all respect them.

Young Suh Kim
*Special Issue Editor*
---PAGE_BREAK---


---PAGE_BREAK---

# Chapter 1:
Two-By-Two Matrices
---PAGE_BREAK---


---PAGE_BREAK---

Article

Pseudo Hermitian Interactions in the Dirac Equation

Orlando Panella ¹,* and Pinaki Roy ²

¹ INFN—Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, Via A. Pascoli, Perugia 06123, Italy

² Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 Barrackpur Trunk Road, Kolkata 700108, India; E-Mail: pinaki@isical.ac.in

* E-Mail: orlando.panella@pg.infn.it; Tel.: +39-075-585-2762; Fax: +39-075-584-7296.

Received: 31 July 2013; in revised form: 18 December 2013 / Accepted: 23 December 2013 / Published: 17 March 2014

**Abstract:** We consider a (2 + 1)-dimensional massless Dirac equation in the presence of complex vector potentials. It is shown that such vector potentials (leading to complex magnetic fields) can produce bound states, and the Dirac Hamiltonians are η-pseudo Hermitian. Some examples have been explicitly worked out.

**Keywords:** pseudo Hermitian Hamiltonians; two-dimensional Dirac equation; complex magnetic fields

# 1. Introduction

In recent years, the massless Dirac equation in (2 + 1) dimensions has drawn a lot of attention, primarily because of its similarity to the equation governing the motion of charge carriers in graphene [1,2].
In view of the fact that electrostatic fields alone cannot provide confinement of the electrons, there have been quite a number of works on exact solutions of the relevant Dirac equation with different magnetic field configurations, for example, square well magnetic barriers [3–5], non-zero magnetic fields in dots [6], decaying magnetic fields [7], solvable magnetic field configurations [8], etc. On the other hand, at the same time, there have been some investigations into the possible role of non-Hermiticity and *PT* symmetry [9] in graphene [10–12], optical analogues of relativistic quantum mechanics [13] and relativistic non-Hermitian quantum mechanics [14], photonic honeycomb lattice [15], etc. Furthermore, the (2 + 1)-dimensional Dirac equation with non-Hermitian Rashba and scalar interaction was studied [16]. Here, our objective is to widen the scope of incorporating non-Hermitian interactions in the (2 + 1)-dimensional Dirac equation. We shall introduce η pseudo Hermitian interactions by using imaginary vector potentials. It may be noted that imaginary vector potentials have been studied previously in connection with the localization/delocalization problem [17,18], as well as *PT* phase transition in higher dimensions [19]. Furthermore, in the case of the Dirac equation, there are the possibilities of transforming real electric fields to complex magnetic fields and vice versa by the application of a complex Lorentz boost [20]. To be more specific, we shall consider η-pseudo Hermitian interactions [21] within the framework of the (2 + 1)-dimensional massless Dirac equation. In particular, we shall examine the exact bound state solutions in the presence of imaginary magnetic fields arising out of imaginary vector potentials. We shall also obtain the η operator, and it will be shown that the Dirac Hamiltonians are η-pseudo Hermitian. + +# 2. 
The Model + +The (2 + 1)-dimensional massless Dirac equation is given by: + +$$ H\psi = E\psi, \quad H = c\sigma \cdot P = c \begin{pmatrix} 0 & P_- \\ P_+ & 0 \end{pmatrix}, \quad \psi = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \tag{1} $$ +---PAGE_BREAK--- + +where $c$ is the velocity of light and: + +$$P_{\pm} = (P_x \pm iP_y) = (p_x + A_x) \pm i(p_y + A_y) \quad (2)$$ + +In order to solve Equation (1), it is necessary to decouple the spinor components. Applying the operator, $\mathcal{H}$, from the left in Equation (1), we find: + +$$c^2 \begin{pmatrix} P_- P_+ & 0 \\ 0 & P_+ P_- \end{pmatrix} \psi = E^2 \psi \quad (3)$$ + +Let us now consider the vector potential to be: + +$$A_x = 0, \quad A_y = f(x) \quad (4)$$ + +so that the magnetic field is given by: + +$$B_z(x) = f'(x) \quad (5)$$ + +For the above choice of vector potentials, the component wave functions can be taken of the form: + +$$\psi_{1,2}(x,y) = e^{ik_y y} \phi_{1,2}(x) \quad (6)$$ + +Then, from (3), the equations for the components are found to be (in units of $\hbar = 1$): + +$$ \begin{aligned} \left[-\frac{d^2}{dx^2} + W^2(x) + W'(x)\right] \phi_1(x) &= \epsilon^2 \phi_1(x) \\ \left[-\frac{d^2}{dx^2} + W^2(x) - W'(x)\right] \phi_2(x) &= \epsilon^2 \phi_2(x) \end{aligned} \quad (7) $$ + +where $\epsilon = (E/c)$, and the function, $W(x)$, is given by: + +$$W(x) = k_y + f(x) \quad (8)$$ + +## 2.1. Complex Decaying Magnetic Field + +It is now necessary to choose the function, $f(x)$. Our first choice for this function is: + +$$f(x) = -(A + iB)e^{-x}, \quad -\infty < x < \infty \quad (9)$$ + +where $A > 0$ and $B$ are constants. This leads to a complex exponentially decaying magnetic field: + +$$B_z(x) = (A + iB)e^{-x} \quad (10)$$ + +For $B = 0$ or a purely imaginary number (such that $(A + iB) > 0$), the magnetic field is an exponentially decreasing one, and we recover the case considered in [7,8]. 
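As a consistency check on the decoupled Equations (7) (an added sketch, not part of the paper): with the ansatz of Equation (6), the operators $P_\pm$ act on $\phi$ as $-i(d/dx \mp W)$ with $W = k_y + f$, so the products $P_\mp P_\pm$ reduce to $-d^2/dx^2 + W^2 \pm W'$. This can be verified symbolically:

```python
import sympy as sp

# With psi_{1,2} = e^{i k_y y} phi_{1,2}(x), P_+ acts on phi as -i(d/dx - W)
# and P_- as -i(d/dx + W), where W(x) = k_y + f(x).
x = sp.symbols('x')
W = sp.Function('W')(x)
phi = sp.Function('phi')(x)
Pp = lambda g: -sp.I * (sp.diff(g, x) - W * g)   # P_+ acting on phi
Pm = lambda g: -sp.I * (sp.diff(g, x) + W * g)   # P_- acting on phi

# P_- P_+ -> -d^2/dx^2 + W^2 + W'   (equation for phi_1)
assert sp.expand(Pm(Pp(phi)) - (-sp.diff(phi, x, 2) + (W**2 + sp.diff(W, x)) * phi)) == 0
# P_+ P_- -> -d^2/dx^2 + W^2 - W'   (equation for phi_2)
assert sp.expand(Pp(Pm(phi)) - (-sp.diff(phi, x, 2) + (W**2 - sp.diff(W, x)) * phi)) == 0
```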
Now, from the second of Equation (7), we obtain:

$$\left[-\frac{d^2}{dx^2} + V_2(x)\right] \phi_2 = (\epsilon^2 - k_y^2) \phi_2 \quad (11)$$

where:

$$V_2(x) = (A + iB)^2 e^{-2x} - (2k_y + 1)(A + iB) e^{-x} \quad (12)$$
---PAGE_BREAK---

It is not difficult to recognize $V_2(x)$ in Equation (12) as the complex analogue of the Morse potential, whose solutions are well known [22,23]. Using these results, we find:

$$
\begin{align}
E_{2,n} &= \pm c \sqrt{k_y^2 - (k_y - n)^2} \\
\phi_{2,n} &= t^{k_y-n} e^{-t/2} L_n^{(2k_y-2n)}(t), \quad n = 0, 1, 2, \dots < [k_y]
\end{align}
\tag{13}
$$

where $t = 2(A + iB)e^{-x}$ and $L_n^{(a)}(t)$ denote generalized Laguerre polynomials. The first point to note here is that, for the energy levels in Equation (13) to be real and the corresponding eigenfunctions normalizable, the condition $k_y \ge 0$ must hold. For $k_y < 0$, the wave functions are not normalizable, i.e., no bound states are possible.

Let us now examine the upper component, $\phi_1$. Since $\phi_2$ is known, one can always use the intertwining relation:

$$
cP_{-}\psi_{2} = E\psi_{1} \qquad (14)
$$

to obtain $\phi_1$. Nevertheless, for the sake of completeness, we present the explicit results for $\phi_1$. In this case, the potential analogous to Equation (12) reads:

$$
V_1(x) = (A + iB)^2 e^{-2x} - (2k_y - 1)(A + iB) e^{-x} \quad (15)
$$

Clearly, $V_1(x)$ can be obtained from $V_2(x)$ by the replacement $k_y \rightarrow k_y - 1$, and so, the solutions can be obtained from Equation (13) as:

$$
\begin{gather*}
E_{1,n} = \pm c \sqrt{k_y^2 - (k_y - n - 1)^2} \\
\phi_{1,n} = t^{k_y-n-1} e^{-t/2} L_n^{(2k_y-2n-2)}(t), \quad n = 0, 1, 2, \dots < [k_y - 1]
\end{gather*}
\tag{16}
$$

Note that the zero-energy $n = 0$ state of Equation (13) has no counterpart in the spectrum Equation (16).
Furthermore, $E_{2,n+1} = E_{1,n}$, so that the ground state is a singlet, while the excited ones are doubly degenerate. Similarly, the negative energy states are also paired. In this connection, we would like to note that $\{H, \sigma_3\} = 0$, and consequently, except for the ground state, there is particle-hole symmetry. The wave functions for the holes are given by $\sigma_3 \psi$. The precise structure of the wave functions of the original Dirac equation is as follows (we present only the positive energy solutions):

$$
\begin{equation}
\begin{aligned}
E_0 &= 0, & \psi_0 &= \begin{pmatrix} 0 \\ \phi_{2,0} \end{pmatrix} \\
E_{n+1} &= c \sqrt{k_y^2 - (k_y - n - 1)^2}, & \psi_{n+1} &= \begin{pmatrix} \phi_{1,n} \\ \phi_{2,n+1} \end{pmatrix},
\end{aligned}
\tag{17}
\end{equation}
$$

It is interesting to note that the spectrum does not depend on the magnetic field. Furthermore, the dispersion relation is no longer linear, as it should be in the presence of interactions. It is also easily checked that when the magnetic field is reversed, i.e., $A \to -A$ and $B \to -B$ with the simultaneous change of $k_y \to -k_y$, the two potentials $V_{1,2}(x) = W^2(x) \pm W'(x)$ go one into each other, $V_1(x) \leftrightarrow V_2(x)$. Therefore, the solutions are correspondingly interchanged, $\phi_{1,n} \leftrightarrow \phi_{2,n}$ and $E_{1,n} \leftrightarrow E_{2,n}$, but retain the same functional form as in Equations (13) and (16).

Therefore, we find that it is indeed possible to create bound states with an imaginary vector potential. We shall now demonstrate the above results for a second example.
---PAGE_BREAK---

## 2.2.
Complex Hyperbolic Magnetic Field

Here, we choose $f(x)$, which leads to an effective potential of the complex hyperbolic Rosen-Morse type:

$$f(x) = A \tanh(x - i\alpha), \quad -\infty < x < \infty, \quad A \text{ and } \alpha \text{ are real constants} \tag{18}$$

In this case, the complex magnetic field is given by:

$$B_z(x) = A \operatorname{sech}^2(x - i\alpha) \tag{19}$$

Note that for $\alpha = 0$, we get back the results of [8,24]. Using Equation (18) in the second half of Equation (7), we find:

$$\left[-\frac{d^2}{dx^2} + U_2(x)\right] \phi_2 = (\epsilon^2 - k_y^2 - A^2)\phi_2 \tag{20}$$

where

$$U_2(x) = - A(A+1) \operatorname{sech}^2(x - i\alpha) + 2Ak_y \tanh(x - i\alpha) \tag{21}$$

This is the hyperbolic Rosen-Morse potential with known energy values and eigenfunctions. In the present case, the eigenvalues and the corresponding eigenfunctions are given by [23,25]:

$$E_{2,n} = \pm c \sqrt{A^2 + k_y^2 - (A-n)^2 - \frac{A^2 k_y^2}{(A-n)^2}}, \quad n = 0, 1, 2, \dots < [A - \sqrt{Ak_y}] \tag{22}$$

$$\phi_{2,n} = (1-t)^{s_1/2} (1+t)^{s_2/2} P_n^{(s_1,s_2)}(t)$$

where $P_n^{(a,b)}(z)$ denotes Jacobi polynomials and:

$$t = \tanh(x - i\alpha), \quad s_{1,2} = A - n \pm \frac{Ak_y}{A-n} \tag{23}$$

The energy values corresponding to the upper component of the spinor can be found out by replacing $A$ by $(A-1)$, and $\phi_1$ can be found out using relation Equation (14).

# 3. η-Pseudo Hermiticity

Let us recall that a Hamiltonian is η-pseudo Hermitian if [21]:

$$\eta H \eta^{-1} = H^{\dagger} \tag{24}$$

where $\eta$ is a Hermitian operator. It is known that eigenvalues of an $\eta$-pseudo Hermitian Hamiltonian are either all real or are complex conjugate pairs [21]. In view of the fact that in the present examples, the eigenvalues are all real, one is tempted to conclude that the interactions are $\eta$ pseudo Hermitian.
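This reality can also be confirmed numerically. The sketch below (an added illustration; the helper `spectrum`, the grids, and the parameter values are arbitrary choices, not from the paper) discretizes the Schrödinger-type operators obtained from Equations (7), (11) and (20) and checks that the lowest eigenvalues are real and agree with the closed-form spectra of Equations (13) and (22):

```python
import numpy as np

def spectrum(V, x):
    """Eigenvalues of -d^2/dx^2 + V(x) via second-order finite differences."""
    n, dx = len(x), x[1] - x[0]
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
    return np.linalg.eig(lap + np.diag(V))[0]

# Complex Morse case (Section 2.1): V = (A+iB)^2 e^{-2x} - (2k_y+1)(A+iB) e^{-x};
# bound states should sit at eps^2 - k_y^2 = -(k_y - n)^2, cf. Equation (13).
A, B, ky = 1.0, 0.5, 3.0                      # sample parameters, A > 0
C = A + 1j * B
x1 = np.linspace(-3.0, 16.0, 1200)
lam1 = spectrum(C**2 * np.exp(-2 * x1) - (2 * ky + 1) * C * np.exp(-x1), x1)
low1 = np.sort(lam1[np.argsort(lam1.real)[:3]].real)   # close to [-9, -4, -1]

# Complex Rosen-Morse case (Section 2.2): U = -A(A+1) sech^2(x-ia) + 2Ak_y tanh(x-ia);
# bound states at eps^2 - k_y^2 - A^2 = -(A-n)^2 - A^2 k_y^2/(A-n)^2, cf. Equation (22).
A2, ky2, alpha = 4.0, 1.0, 0.3
x2 = np.linspace(-6.0, 6.0, 800)
z = x2 - 1j * alpha
lam2 = spectrum(-A2 * (A2 + 1) / np.cosh(z)**2 + 2 * A2 * ky2 * np.tanh(z), x2)
low2 = np.sort(lam2[np.argsort(lam2.real)[:2]].real)   # close to [-17, -10.78]

print(np.round(low1, 3), np.round(low2, 3))
```

Note that the computed bound-state eigenvalues come out (numerically) real even though both discretized operators are complex and non-Hermitian, which is exactly what the pseudo-Hermiticity argument predicts.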
To this end, we first consider case 1, and following [26], let us consider the Hermitian operator:

$$\eta = e^{-\theta p_x}, \quad \theta = 2\arctan \frac{B}{A} \tag{25}$$

Then, it follows that:

$$\eta c \eta^{-1} = c, \quad \eta p_x \eta^{-1} = p_x, \quad \eta V(x) \eta^{-1} = V(x + i\theta) \tag{26}$$
---PAGE_BREAK---

We recall that in both the cases considered here, the Hamiltonian is of the form:

$$H = c\sigma \cdot P = c \begin{pmatrix} 0 & P_{-} \\ P_{+} & 0 \end{pmatrix} \qquad (27)$$

where, for the first example:

$$P_{\pm} = p_x \pm ip_y \pm i(A + iB)e^{-x} \qquad (28)$$

Then:

$$H^{\dagger} = c \begin{pmatrix} 0 & P_{+}^{\dagger} \\ P_{-}^{\dagger} & 0 \end{pmatrix} \qquad (29)$$

Now, from Equation (28), it follows that:

$$P_{+}^{\dagger} = p_{x} - ip_{y} - i(A - iB)e^{-x}, \quad P_{-}^{\dagger} = p_{x} + ip_{y} + i(A - iB)e^{-x} \qquad (30)$$

and using Equation (26), it can be shown that:

$$\eta P_{+}\eta^{-1} = p_{x} + ip_{y} + i(A - iB)e^{-x} = P_{-}^{\dagger}, \quad \eta P_{-}\eta^{-1} = p_{x} - ip_{y} - i(A - iB)e^{-x} = P_{+}^{\dagger} \qquad (31)$$

Next, to demonstrate the pseudo Hermiticity of the Dirac Hamiltonian Equation (27), let us consider the operator $\eta' = \eta \cdot I_2$, where $I_2$ is the $(2 \times 2)$ unit matrix. Then, it can be shown that:

$$\eta' H \eta'^{-1} = H^{\dagger} \qquad (32)$$

Thus, the Dirac Hamiltonian with a complex decaying magnetic field Equation (10) is $\eta$-pseudo Hermitian.

For the magnetic field given by Equation (19), the operator $\eta$ can be found by using relations Equation (26). After a straightforward calculation, it can be shown that the $\eta$ operator is given by:

$$\eta = e^{-2\alpha p_x} \qquad (33)$$

so that in this second example the Dirac Hamiltonian is also $\eta$-pseudo Hermitian.

**4.
Conclusions**

Here, we have studied the (2 + 1)-dimensional massless Dirac equation (we note that if a massive particle of mass $m$ is considered, the energy spectrum in the first example would become $E_n = c\sqrt{k_y^2 + m^2c^2 - (k_y - n)^2}$. Similar changes will occur in the second example, too), in the presence of complex magnetic fields, and it has been shown that such magnetic fields can create bound states. It has also been shown that Dirac Hamiltonians in the presence of such magnetic fields are η-pseudo Hermitian. We feel it would be of interest to study the generation of bound states using other types of magnetic fields, e.g., periodic magnetic fields.

**Acknowledgments:** One of us (P. R.) wishes to thank INFN Sezione di Perugia for supporting a visit during which part of this work was carried out. He would also like to thank the Physics Department of the University of Perugia for its hospitality.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Novoselov, K.S.; Geim, A.K.; Morozov, S.V.; Jiang, D.; Zhang, Y.; Dubonos, S.V.; Grigorieva, I.V.; Firsov, A.A. Electric field effect in atomically thin carbon films. *Science* **2004**, *306*, 666–669.
2. Novoselov, K.S.; Geim, A.K.; Morozov, S.V.; Jiang, D.; Katsnelson, M.I.; Grigorieva, I.V.; Dubonos, S.V.; Firsov, A.A. Two-dimensional gas of massless Dirac fermions in graphene. *Nature* **2005**, *438*, 197–200.
---PAGE_BREAK---

3. De Martino, A.; Dell'Anna, L.; Egger, R. Magnetic confinement of massless Dirac fermions in graphene. *Phys. Rev. Lett.* **2007**, 98, 066802:1–066802:4.

4. De Martino, A.; Dell'Anna, L.; Egger, R. Magnetic barriers and confinement of Dirac-Weyl quasiparticles in graphene. *Solid State Commun.* **2007**, 144, 547–550.

5. Dell'Anna, L.; de Martino, A. Multiple magnetic barriers in graphene. *Phys. Rev. B* **2009**, 79, 045420:1–045420:9.

6. Wang, D.; Jin, G.
Bound states of Dirac electrons in a graphene-based magnetic quantum dot. *Phys. Lett. A* **2009**, 373, 4082–4085.

7. Ghosh, T.K. Exact solutions for a Dirac electron in an exponentially decaying magnetic field. *J. Phys. Condens. Matter* **2009**, 21, doi:10.1088/0953-8984/21/4/045505.

8. Kuru, S.; Negro, J.; Nieto, L.M. Exact analytic solutions for a Dirac electron moving in graphene under magnetic fields. *J. Phys. Condens. Matter* **2009**, 21, doi:10.1088/0953-8984/21/45/455305.

9. Bender, C.M.; Boettcher, S. Real spectra in non-Hermitian Hamiltonians having PT symmetry. *Phys. Rev. Lett.* **1998**, 80, 5243–5246.

10. Fagotti, M.; Bonati, C.; Logoteta, D.; Marconcini, P.; Macucci, M. Armchair graphene nanoribbons: PT-symmetry breaking and exceptional points without dissipation. *Phys. Rev. B* **2011**, 83, 241406:1–241406:4.

11. Szameit, A.; Rechtsman, M.C.; Bahat-Treidel, O.; Segev, M. PT-symmetry in honeycomb photonic lattices. *Phys. Rev. A* **2011**, 84, 021806(R):1–021806(R):5.

12. Esaki, K.; Sato, M.; Hasebe, K.; Kohmoto, M. Edge states and topological phases in non-Hermitian systems. *Phys. Rev. B* **2011**, 84, 205128:1–205128:19.

13. Longhi, S. Classical simulation of relativistic quantum mechanics in periodic optical structures. *Appl. Phys. B* **2011**, 104, 453–468.

14. Longhi, S. Optical realization of relativistic non-Hermitian quantum mechanics. *Phys. Rev. Lett.* **2010**, 105, 013903:1–013903:4.

15. Ramezani, H.; Kottos, T.; Kovanis, V.; Christodoulides, D.N. Exceptional-point dynamics in photonic honeycomb lattices with PT-symmetry. *Phys. Rev. A* **2012**, 85, 013818:1–013818:6.

16. Mandal, B.P.; Gupta, S. Pseudo-Hermitian interactions in Dirac theory: Examples. *Mod. Phys. Lett. A* **2010**, 25, 1723–1732.

17. Hatano, N.; Nelson, D. Localization transitions in non-Hermitian quantum mechanics. *Phys. Rev. Lett.* **1996**, 77, 570–573.

18. Feinberg, J.; Zee, A.
Non-Hermitian localization and delocalization. *Phys. Rev. E* **1999**, *59*, 6433–6443.
19. Mandal, B.P.; Mourya, B.K.; Yadav, R.K. PT phase transition in higher-dimensional quantum systems. *Phys. Lett. A* **2013**, *377*, 1043–1046.
20. Tan, L.Z.; Park, C.-H.; Louie, S.G. Graphene Dirac fermions in one-dimensional field profiles: Transforming magnetic to electric field. *Phys. Rev. B* **2010**, *81*, 195426:1–195426:8.
21. Mostafazadeh, A. Pseudo-Hermiticity versus PT-symmetry III: Equivalence of pseudo-Hermiticity and the presence of antilinear symmetries. *J. Math. Phys.* **2002**, *43*, 3944–3951.
22. Flügge, S. *Practical Quantum Mechanics*; Springer-Verlag: Berlin, Germany, 1974.
23. Cooper, F.; Khare, A.; Sukhatme, U. *Supersymmetry in Quantum Mechanics*; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2001.
24. Milpas, E.; Torres, M.; Murguía, G. Magnetic field barriers in graphene: An analytically solvable model. *J. Phys. Condens. Matter* **2011**, *23*, 245304:1–245304:7.
25. Rosen, N.; Morse, P.M. On the vibrations of polyatomic molecules. *Phys. Rev.* **1932**, *42*, 210–217.
26. Ahmed, Z. Pseudo-Hermiticity of Hamiltonians under imaginary shift of the coordinate: Real spectrum of complex potentials. *Phys. Lett. A* **2001**, *290*, 19–22.

© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

---PAGE_BREAK---

Article

# Spacetime Metrics from Gauge Potentials

Ettore Minguzzi

Dipartimento di Matematica e Informatica "U. Dini", Università degli Studi di Firenze, Via S.
Marta 3, I-50139 Firenze, Italy; E-Mail: ettore.minguzzi@unifi.it; Tel./Fax: +39-055-4796-253

Received: 27 January 2014; in revised form: 21 March 2014 / Accepted: 24 March 2014 / Published: 27 March 2014

**Abstract:** I present an approach to gravity in which the spacetime metric is constructed from a non-Abelian gauge potential with values in the Lie algebra of the group $U(2)$ (or the Lie algebra of quaternions). If the curvature of this potential vanishes, the metric reduces to a canonical curved background form reminiscent of the Friedmann $S^3$ cosmological metric.

**Keywords:** gauge theory; G-structure; teleparallel theory

## 1. Introduction

The observational evidence in favor of Einstein's general theory of relativity has clarified that the spacetime manifold is not flat, and hence that it can be approximated by the flat Minkowski spacetime only over limited regions. Quantum Field Theory, and in particular the perturbative approach through the Feynman integral, has shown the importance of expanding near a "classical" background configuration. Although we do not have at our disposal a quantum theory of gravity, it would be natural to take a background configuration which approximates as closely as possible the homogeneous curved background that is expected to arise over cosmological scales according to the cosmological principle. Therefore, it is somewhat surprising that most classical approaches to quantum gravity start from a perturbation of Minkowski's metric in the form $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$. This approach is ill defined in general unless the manifold is asymptotically flat. Indeed, the expansion depends on the chosen coordinate system, a fact which is at odds with the principle of general covariance.

Expanding over the flat metric is like Taylor expanding a function by taking the first linear approximation near a point.
It is clear that the approximation cannot be good far from the point and that no firm global conclusion can be drawn from such approaches. A good global expansion should be performed in a different way, taking into account the domain of definition of the function. Thus, a function defined over an interval would be better approximated by a Fourier series than by a Taylor expansion. Despite these simple analogies, much research has been devoted to quantum gravity by means of expansions of the form $g = \eta + h$, possibly because of the lack of alternatives.

Actually, some years ago [1] I proposed a gauge approach to gravity that solves this problem in a quite simple way and which, I believe, deserves to be better known.

To start with, let us observe that general relativity seems to privilege in its very formalism the flat background. Indeed, the Riemann curvature $\mathcal{R}$ measures the extent to which the spacetime is far from flat, namely far from the background

$$ \mathcal{R} = 0 \Leftrightarrow (M,g) \text{ is flat.} $$

If the true background is not the flat Minkowski space, then as a first step one would have to construct a different curvature $F$ with the property that

$$ F = 0 \Leftrightarrow (M,g) \text{ takes the canonical background shape.} $$

It is indeed possible to accomplish this result. Let us first introduce some notations.

## 2. Some Notations from Gauge Theory

Gauge theories were axiomatized in the fifties by Ehresmann [2] as connections over principal bundles. Since I need to fix the notation, here I briefly review that setting. A principal bundle is given by a differentiable manifold (the bundle) $P$, a differentiable manifold (the base) $M$, a projection

$$ \pi: P \to M \qquad (1) $$

a Lie group $G$, and a right action of $G$ on $P$

$$ p \to pg \quad p \in P, \ g \in G \qquad (2) $$

such that $M = P/G$, i.e., $M$ is the orbit space.
Moreover, the fiber bundle $P$ is locally the product $P = M \times G$. To be more precise, given a point $m \in M$ there is an open set $U$ containing $m$ such that $\pi^{-1}(U)$ is diffeomorphic to $U \times G$ and the diffeomorphism preserves the right action. If this property also holds globally, the principal bundle is called trivial. The set $\pi^{-1}(m)$ is the fiber of $m$ and it is diffeomorphic to $G$. Let $\mathcal{G}$ be the Lie algebra of $G$, and let $\tau_a$ be a base of generators

$$ [\tau_a, \tau_b] = f_{ab}^c \tau_c \qquad (3) $$

Let $p \in P$ be a point of the principal bundle; it can be regarded as a map $p: G \to P$ which acts as $g \to pg$. The fundamental fields (We follow mostly the conventions of Kobayashi-Nomizu. The upper star * indicates the pull-back when applied to a function, the fundamental field when applied to a generator, and the horizontal lift when applied to a curve or a tangent vector on the base.) $\tau_a^*$ over $P$ are defined at $p$ as the push-forward of the group generators: $\tau_a^* = p_*\tau_a$. They are vertical fields in the sense that they lie in the kernel of $\pi_*$: $\pi_*\tau_a^* = 0$. They form a base of the vertical tangent space at $p$.

A connection over $P$ is a $\mathcal{G}$-valued 1-form $\omega$ on $P$ with the following properties:

(a) $\omega(X^*) = X \quad X \in \mathcal{G}$

(b) $R_g^*\omega = g^{-1}\omega g$

The tangent space at $p$ is split into the sum of two subspaces: the vertical space, that is, the kernel of $\pi_*$, and the horizontal space, that is, the kernel of $\omega$

$$ T_p P = H_p \oplus V_p \qquad (4) $$

Let $U$ be an open set of $M$. A section $\sigma$ is a function $\sigma: U \to \pi^{-1}(U)$ such that $\pi \circ \sigma = I_U$. The gauge potential depends on the section and is defined by

$$ A = \tau_a A_\mu^a dx^\mu = \sigma^* \omega \qquad (5) $$

where $\{x^\mu\}$ are coordinates on the base. A change of section is sometimes called a gauge transformation.
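For readers who want to make Equation (3) concrete, the structure constants of $\mathfrak{su}(2)$ can be extracted symbolically. The sketch below uses the basis $\tau_a = -i\sigma_a/2$, for which $f_{ab}^c = \epsilon_{abc}$; this basis choice is an assumption of the sketch, not something fixed by the text:

```python
import sympy as sp

# Pauli matrices
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]

# su(2) generators tau_a = -i sigma_a / 2 (an assumed normalization)
tau = [-sp.I * s / 2 for s in sigma]

eps = sp.LeviCivita  # totally antisymmetric symbol

# check [tau_a, tau_b] = f_ab^c tau_c with f_ab^c = epsilon_abc
for a in range(3):
    for b in range(3):
        comm = tau[a] * tau[b] - tau[b] * tau[a]
        rhs = sum((eps(a, b, c) * tau[c] for c in range(3)), sp.zeros(2, 2))
        assert (comm - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

Any other base of generators would simply produce different structure constants $f_{ab}^c$ in Equation (3).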
The curvature is defined by (The exterior product is defined through $\alpha \wedge \beta = \alpha \otimes \beta - \beta \otimes \alpha$ where $\alpha$ and $\beta$ are 1-forms. As a consequence, $\omega \wedge \omega = [\omega, \omega]$)

$$ \Omega = d\omega h = d\omega + \omega \wedge \omega \qquad (6) $$

where $h$ projects the vector arguments to the horizontal space [2]. The field strength is defined by $F = \tau_a F_{\mu\nu}^a dx^\mu dx^\nu = \sigma^*\Omega$. In other words

$$ F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + f_{bc}^a A_\mu^b A_\nu^c \qquad (7) $$

Given a section one can construct a system of coordinates over $P$ in a canonical way. Simply let $(x, g)$ be the coordinates of the point $p = \sigma(x)g$. In these coordinates the connection can be rewritten

$$\omega = g^{-1} dg + g^{-1} A g \qquad (8)$$

and the curvature can be rewritten

$$\Omega = g^{-1} F g \qquad (9)$$

Indeed, the form of the connection given here satisfies both the requirements above and $A = \sigma^*\omega$. From these last equations one easily recovers the gauge transformation rules after a change of section $\sigma' = \sigma u(x)$ ($g' = u^{-1}(x)g$), that is

$$A'_{\mu} = u^{-1} A_{\mu} u + u^{-1} \partial_{\mu} u \qquad (10)$$

$$F'_{\mu\nu} = u^{-1} F_{\mu\nu} u \qquad (11)$$

## 3. The Background Metric

We are used to defining a manifold through charts $\phi: U \to \mathbb{R}^4$, $U \subset M$, taking values in $\mathbb{R}^4$. Let us instead take them with values in a four-dimensional canonical manifold with enough structure to admit some natural metric. We shall use a matrix Lie group $G$, but we do not really want to give any special role to the identity of $G$. We shall see later how to solve this problem. The metric $g$ has to be constructed as a small departure from the metric naturally present in $G$, which plays the role of background metric.
We take as background metric the expression

$$g_B = I_g(\theta, \theta) \qquad (12)$$

where $\theta$ is the Maurer-Cartan form of the group [2], that is $\theta = g^{-1}dg$, and $I_g$ is an adjoint-invariant quadratic form on the Lie algebra $\mathcal{G}$, which might depend on $g \in G$. The Maurer-Cartan form has the effect of mapping an element $v \in T_g G$ to the Lie algebra element whose fundamental vector field at $g$ is $v$.

Of course, we demand that $g_B$ be a Lorentzian metric on a four-dimensional Lie group, and furthermore we want it to represent an isotropic cosmological background; thus $G$ has to contain the $SO(3)$ subgroup. We are led to the Abelian group of translations $T_4$ or to the group $U(2)$ (or, equivalently, the group of quaternions, since it shares with $U(2)$ the Lie algebra). In what follows we shall only consider the latter group, the case of the Abelian translation group being simpler.

Thus let us consider the group $U(2)$. Every matrix of this group reads $u = e^{i\lambda} r$ with $0 \le \lambda \le \pi$, where $r \in SU(2)$ (while a quaternion reads $e^{\lambda} r$, $\lambda \in \mathbb{R}$)

$$r = \begin{pmatrix} r_0 + i r_3 & r_2 + i r_1 \\ -r_2 + i r_1 & r_0 - i r_3 \end{pmatrix}, \qquad \sum_{\mu=0}^{3} r_{\mu}^{2} = 1 \qquad (13)$$

The Lie algebra of $U(2)$ is that of anti-Hermitian matrices $A$, which read

$$A = i \begin{pmatrix} a^0 + a^3 & a^1 - ia^2 \\ a^1 + ia^2 & a^0 - a^3 \end{pmatrix} \qquad (14)$$

By adjoint invariance of $I_g$ we mean $I_{u'gu^\dagger}(uAu^\dagger, uAu^\dagger) = I_g(A, A)$, for any $u, u' \in U(2)$. Clearly, the adjoint invariance for the Abelian subgroup $U(1)$ is guaranteed because for $u \in U(1)$, $uAu^\dagger = A$, $u'gu^\dagger = g$.
The expressions that satisfy this invariance property are

$$I_g(A, A) = \frac{\alpha(\lambda)}{2} (\operatorname{tr} A)^2 - \frac{\beta(\lambda)}{2} \operatorname{tr}(A^2) \qquad (15)$$

that is,

$$I_g(A, A) = -2\alpha(\lambda)(a^0)^2 + \beta(\lambda)[(a^0)^2 + (a^1)^2 + (a^2)^2 + (a^3)^2] \qquad (16)$$

where $\alpha$ and $\beta$ are functions of the phase $\lambda$ of $g = e^{i\lambda}r$, $r \in SU(2)$ (which is left invariant under adjoint transformations). We get a Lorentzian metric for $2\alpha > \beta$ and $\beta > 0$. With the simple choice $\alpha = \beta = 1$ we get

$$I_g(A, A) = \det A = -(a^0)^2 + (a^1)^2 + (a^2)^2 + (a^3)^2 \qquad (17)$$

Notice that $\operatorname{tr}(r^\dagger dr) = 0$ and

$$\operatorname{tr}(r^\dagger dr\, r^\dagger dr) = -\operatorname{tr}(dr^\dagger dr) = -2 \det(r^\dagger dr) = -2 \sum_{\mu=0}^{3} dr_\mu^2 \qquad (18)$$

Let us recall that $\theta = \phi^\dagger d\phi$, where the group element $\phi$ reads $\phi = re^{i\lambda}$. Thus, using $\operatorname{tr}(r^\dagger dr) = 0$, we find for the background metric

$$
\begin{align*}
g_B = I_g(\theta, \theta) &= I \left( r^\dagger dr + i\, d\lambda,\ r^\dagger dr + i\, d\lambda \right) = \\
&= I(i\, d\lambda, i\, d\lambda) + I(r^\dagger dr, r^\dagger dr) = -(2\alpha - \beta)d\lambda^2 - \frac{\beta}{2}\operatorname{tr}(r^\dagger dr\, r^\dagger dr) = \\
&= -(2\alpha - \beta)d\lambda^2 + \beta(dr_0^2 + dr_1^2 + dr_2^2 + dr_3^2)
\end{align*}
$$

Recalling the constraint $\sum_{\mu=0}^{3} r_{\mu}^{2} = 1$, we find a background metric which coincides with Friedmann's with an $S^3$ section.

More specifically, let $\sigma_0 = I$, and let $\sigma_i$, $i = 1, 2, 3$, be the Pauli matrices. Let $\tau_\mu = i\sigma_\mu$ be a base for the Lie algebra of $U(2)$.
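As a consistency check, Equations (14)–(18) can be verified symbolically. The sketch below confirms that the invariant form with $\alpha = \beta = 1$ reduces to $\det A$, and checks Equation (18) on the constraint sphere $\sum_\mu r_\mu^2 = 1$ using an explicit angular parametrization (a convenient choice, anticipating the angles of Equation (19)):

```python
import sympy as sp

# Equations (14)-(17): with alpha = beta = 1 the invariant form is det A
a0, a1, a2, a3 = sp.symbols('a0 a1 a2 a3', real=True)
A = sp.I * sp.Matrix([[a0 + a3, a1 - sp.I*a2],
                      [a1 + sp.I*a2, a0 - a3]])
I_form = sp.Rational(1, 2)*A.trace()**2 - sp.Rational(1, 2)*(A*A).trace()
mink = -a0**2 + a1**2 + a2**2 + a3**2
assert sp.expand(I_form - mink) == 0   # Equation (16) with alpha = beta = 1
assert sp.expand(A.det() - mink) == 0  # Equation (17)

# Equation (18), with r_mu parametrized on the unit sphere
chi, th, ph = sp.symbols('chi theta varphi', real=True)
dchi, dth, dph = sp.symbols('dchi dtheta dvarphi', real=True)
r = [sp.cos(chi),
     sp.sin(chi)*sp.sin(th)*sp.cos(ph),
     sp.sin(chi)*sp.sin(th)*sp.sin(ph),
     sp.sin(chi)*sp.cos(th)]
d = lambda f: f.diff(chi)*dchi + f.diff(th)*dth + f.diff(ph)*dph
dr = [d(f) for f in r]
R = sp.Matrix([[ r[0] + sp.I*r[3],  r[2] + sp.I*r[1]],
               [-r[2] + sp.I*r[1],  r[0] - sp.I*r[3]]])    # Equation (13)
dR = sp.Matrix([[ dr[0] + sp.I*dr[3],  dr[2] + sp.I*dr[1]],
                [-dr[2] + sp.I*dr[1],  dr[0] - sp.I*dr[3]]])
M = R.H * dR                           # r^dagger dr
lhs18 = (M*M).trace()
rhs18 = -2*sum(x**2 for x in dr)
assert sp.simplify(sp.expand(lhs18 - rhs18)) == 0
```

The last assertion also shows, after expanding $\sum_\mu dr_\mu^2$ in the angles, where the spatial part of the Friedmann metric below comes from.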
Let us parametrize $\phi \in U(2)$ through

$$\phi = e^{i\lambda\sigma_0} r = e^{i\lambda\sigma_0} e^{\chi(\tau_1 \sin\theta \cos\varphi + \tau_2 \sin\theta \sin\varphi + \tau_3 \cos\theta)} \qquad (19)$$

then the background metric reads

$$g_B = -dt^2 + a^2(t) (d\chi^2 + \sin^2\chi(d\theta^2 + \sin^2\theta\, d\varphi^2)) \qquad (20)$$

where

$$t = \int_0^\lambda d\lambda' \sqrt{2\alpha(\lambda') - \beta(\lambda')} \qquad (21)$$

and

$$a^2(t) = \beta(\lambda(t)) \qquad (22)$$

These calculations, first presented in [1], show that the Friedmann metric appears rather naturally from the study of the $U(2)$ group. Of course, since this argument depends only on the Lie algebra rather than the group structure, it can be repeated for the group of quaternions [3].

## 4. Perturbing the Background

In this section we shall suppose that $I_g$ does not depend on $g$, namely that $\alpha$ and $\beta$ are constants; this means that we ignore the time dependence of the cosmological background.

We mentioned that we wish to use charts $\phi: U \to G$, $U \subset M$, with values in a group $G$, but that we do not want to assign to the identity of $G$ any special role. To that end, let us assume for simplicity that $M$ is simply connected, and let us introduce a trivial bundle $P$ endowed with a flat connection $\tilde{\omega}$. The connection, being flat, is integrable; thus, given a horizontal section $\tilde{\sigma}: M \to P$, and parametrizing every point of $P$ through $p(x, g) = \tilde{\sigma}(x)g$, we obtain a splitting $P \sim M \times G$. In this way the identity of $G$ does not play any special role, since it refers to different points of $P$ depending on the choice of section $\tilde{\sigma}$.

A second section $\sigma: M \to P$ is now related to the former by $\sigma(x)\phi^{-1}(x) = \tilde{\sigma}(x)$, where $\phi: M \to G$ is the chart we were looking for. In order to be interpreted as a chart, $\phi$ has to be injective.
The idea is to define the metric

$$g = I(\tilde{A} - A, \tilde{A} - A)$$

where $\tilde{A} = \sigma^*\tilde{\omega}$ is the potential of the flat connection and $A = \sigma^*\omega$ is the potential of a possibly non-trivial connection. From the transformation rule (10) for the potential we obtain

$$\tilde{A} = \phi^{-1}(x) d\phi(x)$$

Let us show that the metric so defined satisfies the property $F = 0 \Rightarrow$ background metric. Suppose that $F = 0$; then $\sigma$ can be chosen in such a way that $A = 0$, and the metric becomes

$$F=0 \quad \Rightarrow \quad g = I(\phi^{-1}(x)d\phi(x), \phi^{-1}(x)d\phi(x)) = I(\phi^*\theta, \phi^*\theta) = \phi^*g_B \qquad (23)$$

that is, up to a coordinate change the metric coincides with the background metric.

We observe that $A = \tau_a A_\mu^a dx^\mu$ has 16 components, namely the same number of components as the metric. However, we have an additional degree of freedom given by $\phi(x)$. This function can be completely removed using the invertibility of this map, namely using the coordinates $\phi^\mu$ on the Lie group to parametrize $M$. In this way the metric reads

$$g = I(\phi^{-1}d\phi - \tau_a A_\mu^a(\phi)d\phi^\mu,\ \phi^{-1}d\phi - \tau_a A_\mu^a(\phi)d\phi^\mu)$$

These coordinates are referred to as *internal coordinates*. In internal coordinates any gauge transformation induces a coordinate transformation. For instance, the gauge potential transforms as

$$\tau_a A'^{a}_{c} = \left\{u^{-1}\tau_a A^{a}_{b} u + u^{-1}\partial_b u\right\} \frac{\partial \phi^b}{\partial \phi'^c} \qquad (24)$$

and the transformation law for the curvature becomes

$$F'_{ab} = u^{-1} F_{cd} u\, \frac{\partial \phi^c}{\partial \phi'^a} \frac{\partial \phi^d}{\partial \phi'^b} \qquad (25)$$

where $\sigma' = \sigma u$ and the matrix $u(\phi)$ is related to the transformation $\phi'^a(\phi^b)$ by the product $\phi' = \phi u(\phi)$.
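The transformation rules (10) and (11), which underlie Equations (24) and (25), can also be checked symbolically for a sample potential. In the sketch below both the potential $A_\mu$ and the gauge function $u(x, y)$ are arbitrary illustrative choices:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]
tau = [-sp.I*s/2 for s in sigma]     # su(2) generators (an assumed basis)

# a sample potential in two dimensions, chosen purely for illustration
Ax = y*tau[0]
Ay = x*y*tau[1]

def F(Ax, Ay):
    # matrix form of Equation (7): F_xy = d_x A_y - d_y A_x + [A_x, A_y]
    return Ay.diff(x) - Ax.diff(y) + Ax*Ay - Ay*Ax

# gauge function u(x, y) = exp(i f sigma_3 / 2), here with f = x y
f = x*y
u = sp.diag(sp.exp(sp.I*f/2), sp.exp(-sp.I*f/2))
uinv = sp.diag(sp.exp(-sp.I*f/2), sp.exp(sp.I*f/2))

# Equation (10): the transformed potential
Axp = uinv*Ax*u + uinv*u.diff(x)
Ayp = uinv*Ay*u + uinv*u.diff(y)

# Equation (11): the curvature transforms covariantly
delta = (F(Axp, Ayp) - uinv*F(Ax, Ay)*u).applyfunc(sp.simplify)
assert delta == sp.zeros(2, 2)
```

The inhomogeneous term $u^{-1}\partial_\mu u$ drops out of the curvature, which is exactly why $F$, and not $A$, can carry the invariant content of the theory.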
In the same way it can be shown, for example, that the spacetime metric transforms as a tensor under (24).

One can further ask whether the Einstein equations can be rephrased as dynamical equations for the potential $A$. The answer is affirmative and passes through the vierbein reformulation of the Einstein-Hilbert Lagrangian.

We recall that a tetrad field (vierbein) $e_a = e_a^\mu \partial_\mu$ is a set of four vector fields $e_a$ such that $g_{\mu\nu} = \eta_{ab}\, e^a_\mu e^b_\nu$. The inverse $e_a^\mu$ is defined through $e_a^\mu e_\nu^a = \delta_\nu^\mu$. The Einstein Lagrangian can be rewritten

$$-\frac{\sqrt{-g}}{16\pi} R = \frac{1}{8\pi} \left(\sqrt{-g}\, v^{\nu}\right)_{,\nu} + \frac{\sqrt{-g}}{16\pi} \left\{ \frac{1}{4} C^{abc} C_{abc} - C^{a}{}_{ac} C_{b}{}^{bc} + \frac{1}{2} C^{abc} C_{bac} \right\} \qquad (26)$$

where the first term on the right-hand side is a total divergence and

$$C^{c}{}_{ab} = e_{a}^{\mu} e_{b}^{\nu} \left( \partial_{\nu} e^{c}_{\mu} - \partial_{\mu} e^{c}_{\nu} \right) \qquad (27)$$

In order to obtain a dynamics for $A$ we select a base $\tau_a$ for the Lie algebra such that

$$I(\tau_a, \tau_b) = \eta_{ab}$$

where $\eta_{ab}$ is the Minkowski metric. Then we make a gauge transformation so as to send the flat potential $\tilde{A}$ to zero. This gauge is called the *OT gauge*. Since $g = I(\tau_a A_\mu^a dx^\mu, \tau_a A_\mu^a dx^\mu)$, the vierbein becomes coincident with the potential

$$e_\mu^a = A_\mu^a$$

so the field equations can ultimately be expressed in terms of $A_\mu^a$. We have observed above that for $F=0$ the metric becomes that of the Einstein static Universe, which is not a solution of the dynamical equations (without cosmological constant). One could wish to obtain a realistic cosmological solution for $F=0$.
At the moment I do not know how to modify the theory so as to accomplish this result (but observe that we never changed the dynamics, which is always that given by Einstein's equations). However, our framework might not need any modification. It can be shown [1] that the scale factor $a$ in front of the Einstein static Universe metric is actually the coupling constant for this theory, so the expansion of the Universe could be an effect related to the renormalization of the theory.

In the Abelian case $T_4$ (not in the $U(2)$ case) the Lagrangian can also be expressed in terms of the curvature (7). Indeed, since $f_{ab}^c = 0$, the curvature becomes coincident with the tensors $C_{bc}^a$ entering the above expression of the Lagrangian (however, observe that the potential still enters the metric and the vierbeins, which are used to raise the indices of the curvature). The final expression is quadratic in the curvature $F$ and is related to the teleparallel approach to general relativity [4–7]. Issues related to the renormalizability of the dynamics determined by (26) have yet to be fully studied.

The *OT gauge* approach has been used to infer the dynamics and is complementary to the *internal coordinates* approach mentioned above. Indeed, while the latter allows us to interpret the map $\phi: U \to G$, $U \subset M$, as a chart with values in $G$, the *OT gauge* approach sends $\phi$ to the identity, so in the new gauge the non-injective map $\phi$ cannot be interpreted as a chart. Thus, after having developed the dynamics in the *OT gauge* we would have to make a last gauge transformation to reformulate it in internal coordinates.

**Acknowledgments:** This work has been partially supported by GNFM of INDAM.

**Conflicts of Interest:** The author declares no conflicts of interest.

## References

1. Minguzzi, E. Gauge invariance in teleparallel gravity theories: A solution to the background structure problem. *Phys. Rev. D* **2002**, *65*, 084048.
doi:10.1103/PhysRevD.65.084048.
2. Kobayashi, S.; Nomizu, K. Foundations of Differential Geometry. In *Interscience Tracts in Pure and Applied Mathematics*; Interscience Publishers: New York, NY, USA, 1963; Volume I.
3. Trifonov, V. Natural Geometry of Nonzero Quaternions. *Int. J. Theor. Phys.* **2007**, *46*, 251–257.
4. Cho, Y.M. Einstein Lagrangian as the translational Yang-Mills Lagrangian. *Phys. Rev. D* **1976**, *14*, 2521–2525.
5. Hayashi, K.; Shirafuji, T. New general relativity. *Phys. Rev. D* **1979**, *19*, 3524–3553.
6. Rodrigues, W.A., Jr.; de Souza, Q.A.G.; da Rocha, R. Conservation Laws on Riemann-Cartan, Lorentzian and Teleparallel Spacetimes. *Bull. Soc. Sci. Lett. Lodz. Ser. Rech. Deform.* **2007**, *52*, 37–65, 66–77.
7. Aldrovandi, R.; Pereira, J.G. Teleparallel Gravity. In *Fundamental Theories of Physics*; Springer: Berlin, Germany, 2013; Volume 173.

© 2014 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

---PAGE_BREAK---

Review

# Quantum Local Symmetry of the *D*-Dimensional Non-Linear Sigma Model: A Functional Approach

Andrea Quadri ¹,²

¹ Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Milano, via Celoria 16, I-20133 Milano, Italy; E-Mail: andrea.quadri@mi.infn.it; Tel.: +39-2-5031-7287; Fax: +39-2-5031-7480

² Dipartimento di Fisica, Università di Milano, via Celoria 16, I-20133 Milano, Italy

Received: 27 February 2014; in revised form: 31 March 2014 / Accepted: 11 April 2014 / Published: 17 April 2014

**Abstract:** We summarize recent progress on the symmetric subtraction of the Non-Linear Sigma Model in *D* dimensions, based on the validity of a certain Local Functional Equation (LFE) encoding the invariance of the SU(2) Haar measure under local left transformations.
The deformation of the classical non-linearly realized symmetry at the quantum level is analyzed by cohomological tools. It is shown that all the divergences of the one-particle irreducible (1-PI) amplitudes (both on-shell and off-shell) can be classified according to the solutions of the LFE. Applications to the non-linearly realized Yang-Mills theory and to the electroweak theory, which is directly relevant to the model-independent analysis of LHC data, are briefly addressed.

**Keywords:** Non-Linear Sigma Model; quantum symmetries; renormalization; Becchi–Rouet–Stora–Tyutin (BRST)

## 1. Introduction

The purpose of this paper is to provide an introduction to the recent advances in the study of the renormalization properties of the SU(2) Non-Linear Sigma Model (NLSM) and of the quantum deformation of the underlying non-linearly realized classical SU(2) local symmetry. The results reviewed here are based mainly on References [1–19].

The linear sigma model was originally proposed a long time ago in [20] in the context of elementary particle physics. In this model the pseudoscalar pion fields $\vec{\phi}$ form a chiral multiplet together with a scalar field $\sigma$, with $(\sigma, \vec{\phi})$ transforming linearly as a vector under $O(4) \sim \text{SU}(2) \times \text{SU}(2)/Z_2$. If one considers instead the model on the manifold defined by

$$ \sigma^2 + \vec{\phi}^2 = f_{\pi}^2, \quad \sigma > 0 \qquad (1) $$

one obtains a theory where the chiral group $SO(4) \sim \text{SU}(2) \times \text{SU}(2)$ (with $SO(4)$ selected by the positivity condition on $\sigma$) is spontaneously broken down to the isotopic spin group $\text{SU}(2)$. The composite field $\sigma$ has a non-vanishing expectation value $f_\pi$ (to be identified with the pion decay constant), while the pions are massless.
Despite the fact that this is only an approximate description (since in reality the pions are massive and chiral $\text{SU}(2) \times \text{SU}(2)$ is not exact, even before being spontaneously broken), the approach turned out to be phenomenologically quite successful and paved the way for the systematic use of effective field theories as a low-energy expansion.

The first step in this direction was to obtain a phenomenological Lagrangian directly, by making use of a pion field with non-linear transformation properties dictated by chiral symmetry from the beginning. After the seminal work of Reference [21] for the chiral $\text{SU}(2) \times \text{SU}(2)$ group, non-linearly realized symmetries were soon generalized to arbitrary groups in [22,23] and have since become a very popular tool [24].

Modern applications involve, e.g., Chiral Perturbation Theory [25–28], low-energy electroweak theories [29] as well as gravity [30].

Effective field theories usually exhibit an infinite number of interaction terms, which can be organized according to the increasing number of derivatives. By dimensional arguments, the interaction terms must then be suppressed by some large mass scale $M$, so that one expects the theory to be reliable at energies well below $M$ (for a modern introduction to the problem, see e.g., [31]). In the spirit of phenomenological Lagrangians, the tree-level effective action is used to compute physical quantities up to a given order in the momentum expansion. Only a finite number of derivative interaction vertices contribute to that order, thus allowing one to express the physical observables of interest through a finite number of parameters (to be eventually fixed by comparison with experimental data). Then the theory can be used to make predictions at the given order of accuracy in the low-energy expansion.
The problem of the mathematically consistent evaluation of quantum corrections in this class of models has a very long history. On general grounds, the derivative couplings tend to worsen the ultraviolet (UV) behavior of the theory, since UV-divergent contributions arise in the Feynman amplitudes that cannot be compensated by a multiplicative renormalization of the fields and a redefinition of the mass parameters and the coupling constants in the classical action (truncated at some given order in the momentum expansion). Under these circumstances, one says that the theory is non-renormalizable (a compact introduction to renormalization theory is given in [32]).

It should be stressed that the key point here is the instability of the classical action: no matter how many terms are kept in the derivative expansion of the tree-level action, there exists a sufficiently high loop order at which UV divergences appear that cannot be reabsorbed into the classical action. On the other hand, if in a non-anomalous and non-renormalizable gauge theory one allows for *infinitely many* terms in the classical action (all those compatible with the symmetries of the theory), then UV divergences can indeed be reabsorbed by preserving the Batalin-Vilkovisky master equation [33] and the model is said to be renormalizable in the modern sense [34].

Sometimes symmetries are so powerful in constraining the UV divergences that the non-linear theory proves to be indeed renormalizable (although not by power-counting), as is the case for the NLSM in two dimensions [35,36] (for a more recent introduction to the subject, see e.g., [37]).

In four dimensions the situation is much less favorable. It was found many years ago that, already at the one-loop level, the four-dimensional NLSM exhibits an infinite number of one-particle irreducible (1-PI) divergent pion amplitudes. Many attempts were then made in the literature to classify such divergent terms.
Global SU(2) chiral symmetry is already violated at the one-loop level [38–40]. Moreover, it turns out that some of the non-symmetric terms can be reabsorbed by a redefinition of the fields [40–43]; however, in the off-shell four-point $\phi_a$ amplitudes some divergent parts arise that cannot be reabsorbed by field redefinitions unless derivatives are allowed [40]. These technical difficulties prevented such attempts from evolving into a mathematically consistent subtraction procedure.

More recently it has been pointed out [1] that one can get full control over the ultraviolet divergences of the $\phi$-amplitudes by exploiting the constraints stemming from the presence of a certain local symmetry, associated with the introduction of an SU(2) background field connection into the theory. This symmetry is encoded in functional form in the so-called Local Functional Equation (LFE) [1]. It turns out that the fundamental divergent amplitudes are not those associated with the quantum fields of the theory, namely the pions, but those corresponding to the background connection and to the composite operator implementing the non-linear constraint [1,2]. These amplitudes are named ancestor amplitudes.

At every order in the loop expansion there is only a finite number of divergent ancestor amplitudes. They uniquely fix the divergent amplitudes involving the pions. Moreover, the non-renormalizability of this theory in four dimensions can be traced back to the instability of the classical non-linear local symmetry, which gets deformed by quantum corrections. These results hold for the full off-shell amplitudes [3].

A comment is in order here. In Reference [4] it has been argued that Minimal Subtraction is a symmetric scheme, fulfilling all the symmetries of the NLSM in the LFE approach. This in particular entails that all finite parts of the needed higher-order counterterms are consistently set to zero.
It should be stressed that this is not the most general solution compatible with the symmetries and the weak power-counting (WPC), as is commonly adopted in the spirit of the effective field theory point of view. Indeed, these finite parts are constrained neither by the LFE nor by the WPC and thus, mathematically, they can be freely chosen, as long as they are introduced at the order prescribed by the WPC and without violating the LFE.

The four-dimensional SU(2) NLSM provides a relatively simple playground in which to test the approach based on the LFE, which can be further generalized to the SU(N) case (and possibly even to a more general Lie group).

Moreover, when the background vector field becomes dynamical, the SU(2) NLSM action allows one to generate a mass term for the gauge field à la Stückelberg [44,45]. The resulting non-linear implementation of the spontaneous symmetry breaking mechanism (as opposed to the linear Higgs mechanism) is widely used in the context of electroweak low-energy effective field theories, which are a very important tool in the model-independent analysis of LHC data [46–49].

## 2. The Classical Non-Linear Sigma Model

The classical SU(2) NLSM in $D$ dimensions is defined by the action

$$S_0 = \int d^D x\, \frac{m_D^2}{4} \mathrm{Tr}\, (\partial_\mu \Omega^\dagger \partial^\mu \Omega) \qquad (2)$$

where the matrix $\Omega$ is an SU(2) group element given by

$$\Omega = \frac{1}{m_D} (\phi_0 + i\phi_a \tau_a), \quad \Omega^\dagger \Omega = 1, \quad \det \Omega = 1, \quad \phi_0^2 + \phi_a^2 = m_D^2 \qquad (3)$$

In the above equation $\tau_a$, $a = 1,2,3$ are the Pauli matrices and $m_D = m^{D/2-1}$ is the mass scale of the theory. $m$ has mass dimension 1.
$\phi_a$ are the three independent fields parameterizing the matrix $\Omega$, and we choose the positive solution of the non-linear constraint, yielding

$$\phi_0 = \sqrt{m_D^2 - \phi_a^2} \quad (4)$$

In components one finds

$$S_0 = \int d^D x \left( \frac{1}{2} \partial_{\mu} \phi_a \partial^{\mu} \phi_a + \frac{1}{2} \frac{\phi_a \partial_{\mu} \phi_a \phi_b \partial^{\mu} \phi_b}{\phi_0^2} \right) \quad (5)$$

The model therefore contains non-polynomial, derivative interactions for the massless scalars $\phi_a$. Equation (2) is invariant under a global SU(2)$_L \times$ SU(2)$_R$ chiral transformation

$$\Omega' = U\Omega V^{\dagger}, \quad U \in \mathrm{SU}(2)_L, \quad V \in \mathrm{SU}(2)_R \quad (6)$$

We notice that such a global transformation is non-linearly realized, as can easily be seen by looking at its infinitesimal version. E.g., for the left transformation one finds:

$$\delta\phi_a = \frac{1}{2}\alpha_a\phi_0(x) + \frac{1}{2}\epsilon_{abc}\phi_b(x)\alpha_c, \qquad \delta\phi_0(x) = -\frac{1}{2}\alpha_a\phi_a(x) \quad (7)$$

Since $\phi_0$ is given by Equation (4), the first term in the r.h.s. of $\delta\phi_a$ is non-linear (and even non-polynomial) in the quantum fields.

Perturbative quantization of the NLSM requires carrying out the path-integral

$$Z[J] = \int \mathcal{D}\phi_a \exp (iS_0[\phi_a] + i \int d^D x J_a \phi_a) \quad (8)$$

by expanding around the free theory and by treating the second term in the r.h.s. of Equation (5) as an interaction. Notice that in Equation (8) the sources $J_a$ are coupled to the fields $\phi_a$ over which the path-integral is performed. In momentum space the propagator for the $\phi_a$ fields is

$$\Delta_{\phi_a \phi_b} = i \frac{\delta_{ab}}{p^2} \qquad (9)$$

The mass dimension of the $\phi_a$ is therefore $D/2 - 1$, in agreement with Equation (3).
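As a quick consistency check, the infinitesimal left transformation in Equation (7) preserves the non-linear constraint $\phi_0^2 + \phi_a^2 = m_D^2$ to first order in $\alpha_a$. A minimal symbolic sketch (using sympy, with 0-based isospin indices, purely illustrative):

```python
import sympy as sp

phi = sp.symbols('phi1:4', real=True)      # the quantum fields phi_a
alpha = sp.symbols('alpha1:4', real=True)  # transformation parameters
phi0 = sp.Symbol('phi0', real=True)        # treated as an independent symbol

eps = sp.LeviCivita

# Infinitesimal left SU(2) transformation, Equation (7)
d_phi = [sp.Rational(1, 2) * alpha[a] * phi0
         + sp.Rational(1, 2) * sum(eps(a, b, c) * phi[b] * alpha[c]
                                   for b in range(3) for c in range(3))
         for a in range(3)]
d_phi0 = -sp.Rational(1, 2) * sum(alpha[a] * phi[a] for a in range(3))

# First-order variation of the non-linear constraint phi_0^2 + phi_a phi_a
d_constraint = sp.expand(2 * phi0 * d_phi0
                         + 2 * sum(phi[a] * d_phi[a] for a in range(3)))
assert d_constraint == 0   # the constraint is preserved
```

The cancellation works because the $\phi_0$ terms from $\delta\phi_0$ and $\delta\phi_a$ compensate each other, while the $\epsilon_{abc}\,\phi_a\phi_b$ contraction vanishes by antisymmetry.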
The presence of two derivatives in the interaction term is the cause (in dimensions greater than 2) of severe UV divergences, leading to the non-renormalizability of the theory.

## 3. The Approach Based on the Local Functional Equation

Some years ago it was recognized that the most effective classification of the UV divergences (both for on-shell and off-shell amplitudes) of the NLSM cannot be achieved in terms of the quantized fields $\phi_a$, as is usually the case in power-counting renormalizable theories, but rather through the so-called ancestor amplitudes, i.e., the Green's functions of certain composite operators, whose knowledge completely determines the amplitudes involving at least one $\phi_a$-leg. This property follows as a consequence of the existence of an additional local functional identity, the so-called Local Functional Equation (LFE) [1].

The LFE stems from the *local* SU(2)$_L$-symmetry that can be established from the gauge transformation of the flat connection $F_\mu$ associated with the matrix $\Omega$:

$$F_{\mu} = i\Omega\partial_{\mu}\Omega^{\dagger} = \frac{1}{2}F_{a\mu}\tau^{a} \qquad (10)$$

i.e., the local SU(2)-transformation of $\Omega$

$$\Omega' = U\Omega \qquad (11)$$

induces a gauge transformation of the flat connection, namely

$$F'_{\mu} = U F_{\mu} U^{\dagger} + i U \partial_{\mu} U^{\dagger} \qquad (12)$$

$S_0$ in Equation (2) is not invariant under local SU(2)$_L$ transformations; however it is easy to make it invariant, once one realizes that it can be written as

$$S_0 = \int d^D x \frac{m_D^2}{4} \mathrm{Tr}(F_\mu^2) \qquad (13)$$

Since $F_\mu$ transforms as a gauge connection, one can introduce an additional external classical vector source $\tilde{J}_\mu = \frac{1}{2}\tilde{J}_{a\mu} \tau^a$ and replace $S_0$ with

$$S = \int d^D x \frac{m_D^2}{4} \mathrm{Tr} (F_\mu - \tilde{J}_\mu)^2 \qquad (14)$$

If one requires that $\tilde{J}_{a\mu}$ transforms as a gauge
connection under the local SU(2)$_L$ group, $S$ in Equation (14) is invariant under a local SU(2)$_L$ symmetry given by

$$ \begin{aligned} \delta\phi_a &= \frac{1}{2}\alpha_a\phi_0 + \frac{1}{2}\epsilon_{abc}\phi_b\alpha_c, & \delta\phi_0 &= -\frac{1}{2}\alpha_a\phi_a \\ \delta\tilde{J}_{a\mu} &= \partial_\mu\alpha_a + \epsilon_{abc}\tilde{J}_{b\mu}\alpha_c \end{aligned} \qquad (15) $$

Notice that in the above equation $\alpha_a$ is a local parameter.

In order to implement the classical local SU(2)$_L$ invariance at the quantum level, one needs to define the composite operator $\phi_0$ in Equation (4) by coupling it in the classical action to an external source $K_0$ through the term

$$ S_{\text{ext}} = \int d^D x K_0 \phi_0 \qquad (16) $$

$K_0$ is invariant under $\delta$.

The important observation now is that the variation of the full tree-level one-particle irreducible (1-PI) vertex functional $\Gamma^{(0)} = S + S_{\text{ext}}$ is linear in the quantized fields $\phi_a$, i.e.,

$$ \delta\Gamma^{(0)} = -\frac{1}{2} \int d^Dx \alpha_a(x) K_0(x) \phi_a(x) \qquad (17) $$

By taking a derivative of both sides of the above equation w.r.t. $\alpha_a(x)$ one obtains the LFE for the tree-level vertex functional $\Gamma^{(0)}$:

$$ W_a(\Gamma^{(0)}) = -\partial_\mu \frac{\delta\Gamma^{(0)}}{\delta\tilde{J}_{a\mu}} + \epsilon_{acb} \tilde{J}_{c\mu} \frac{\delta\Gamma^{(0)}}{\delta\tilde{J}_{b\mu}} + \frac{1}{2} \frac{\delta\Gamma^{(0)}}{\delta K_0(x)} \frac{\delta\Gamma^{(0)}}{\delta\phi_a(x)} + \frac{1}{2} \epsilon_{abc} \phi_c(x) \frac{\delta\Gamma^{(0)}}{\delta\phi_b(x)} = -\frac{1}{2} K_0(x) \phi_a(x) \quad (18) $$

Notice that the $\phi_0$-term, entering in the variation of the $\phi_a$ field, is generated by $\frac{\delta\Gamma^{(0)}}{\delta K_0(x)}$. The advantage of this formulation resides in the fact that it is suitable for being promoted to the quantum level. Indeed, by defining the composite operator $\phi_0$ by taking functional derivatives w.r.t.
its source $K_0$, one is able to control its renormalization, once radiative corrections are included [50].

In the following Section we are going to give a compact and self-contained presentation of the algebraic techniques used to deal with bilinear functional equations like the LFE in Equation (18).

## 4. Ancestor Amplitudes and the Weak Power-Counting

We are going to discuss in this Section the consequences of the LFE for the full vertex functional. The imposition of a quantum symmetry in a non-power-counting renormalizable theory is a subtle problem, since in general there is no control over the dimensions of the possible breaking terms as strong as the one guaranteed by the Quantum Action Principle (QAP) in the renormalizable case. Let us discuss the latter case first.

### 4.1. Renormalizable Theories and the Quantum Action Principle

If the tree-level functional $\Gamma^{(0)}$ is power-counting renormalizable, the renormalization procedure [51] provides a way to compute all higher-order terms in the loop expansion of the full vertex functional $\Gamma[\Phi, \chi] = \sum_{n=0}^{\infty} \hbar^n \Gamma^{(n)}[\Phi, \chi]$, depending on the set of quantized fields $\Phi$ and external sources collectively denoted by $\chi$, by fixing order by order only a finite set of action-like normalization conditions. One says that the classical action is therefore stable under radiative corrections, namely the number of free parameters does not increase with the loop order.

This procedure is a recursive one, since it allows one to construct $\Gamma^{(n)}$ once the $\Gamma^{(j)}$, $j < n$ are known. From a combinatorial point of view, it turns out that $\Gamma$ is the generating functional of the 1-PI renormalized Feynman amplitudes.
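The order-by-order projection of bilinear functional identities used repeatedly below is just Cauchy-product bookkeeping in $\hbar$. A toy sketch in sympy (the symbols $g_n$, $h_n$ stand in for the loop orders of two functional derivatives and are purely illustrative):

```python
import sympy as sp

hbar = sp.Symbol('hbar')
# Stand-ins for the loop expansions of two functional derivatives,
# truncated at order 3 (illustrative only)
g = sp.symbols('g0:4')
h = sp.symbols('h0:4')
A = sum(g[n] * hbar**n for n in range(4))
B = sum(h[n] * hbar**n for n in range(4))

# Order-n coefficient of the product: the Cauchy product rule
n = 2
proj = sp.expand(A * B).coeff(hbar, n)
# The two groups of terms appearing in bilinear identities such as Eq. (20):
tree_times_n = g[0] * h[n] + g[n] * h[0]     # tree level times order n
lower_orders = sum(g[j] * h[n - j] for j in range(1, n))
assert proj - (tree_times_n + lower_orders) == 0
```

This split into "tree level times order $n$" plus "strictly lower orders" is exactly the structure of the breaking terms encountered in the next subsection.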
A desirable feature of power-counting renormalizable theories is that the behavior of the 1-PI Green's functions under infinitesimal variations of the quantized fields and of the parameters of the model is controlled by the so-called Quantum Action Principle (QAP) [52–55] and can be expressed as the insertion of certain *local* operators (i.e., polynomials in the fields, the external sources and derivatives thereof) with UV dimensions determined by their tree-level approximation.

Let us now consider a certain symmetry $\delta$ of the tree-level classical action $\Gamma^{(0)}$. Under the condition that the symmetry $\delta$ is non-anomalous [56], it can be extended to the full vertex functional $\Gamma$. In many cases of physical interest the proof that the symmetry is non-anomalous can be performed by making use of cohomological tools. Namely, one writes the functional equation associated with the $\delta$-invariance of the tree-level vertex functional as follows:

$$ S(\Gamma^{(0)}) = \int d^D x \sum_{\Phi} \frac{\delta\Gamma^{(0)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(0)}}{\delta\Phi^{*}(x)} = 0 \quad (19) $$

where $\Phi^*$ is an external source coupled in the tree-level vertex functional to the $\delta$-transformation of $\Phi$ and the sum is over the quantized fields. The $\Phi^*$ are known as antifields [33]. If $\delta$ is nilpotent (as happens, e.g., for the Becchi-Rouet-Stora-Tyutin (BRST) operator [57–59] in gauge theories), the recursive proof of the absence of obstructions to the fulfillment of Equation (19) works as follows. Suppose that Equation (19) is satisfied up to order $n-1$ in the loop expansion.
Then by the QAP the $n$-th order breaking

$$ \Delta^{(n)} = \int d^D x \sum_{\Phi} \left( \frac{\delta\Gamma^{(0)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(n)}}{\delta\Phi^{*}(x)} + \frac{\delta\Gamma^{(n)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(0)}}{\delta\Phi^{*}(x)} + \sum_{j=1}^{n-1} \frac{\delta\Gamma^{(j)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(n-j)}}{\delta\Phi^{*}(x)} \right) \quad (20) $$

is a polynomial in the fields, the external sources and their derivatives. The term involving $\Gamma^{(n)}$ in Equation (20) allows one to define the linearized operator $S_0$ according to

$$ S_0(\Gamma^{(n)}) = \int d^D x \sum_{\Phi} \left( \frac{\delta\Gamma^{(0)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(n)}}{\delta\Phi^{*}(x)} + \frac{\delta\Gamma^{(n)}}{\delta\Phi(x)} \frac{\delta\Gamma^{(0)}}{\delta\Phi^{*}(x)} \right) \quad (21) $$

$S_0$ is also nilpotent, as a consequence of the nilpotency of $\delta$ and of the tree-level invariance in Equation (19). By exploiting this fact and by applying $S_0$ to both sides of Equation (20) one finds

$$ S_0(\Delta^{(n)}) = 0 \quad (22) $$

provided that the Wess-Zumino consistency condition [60]

$$ S_0 \left( \sum_{j=1}^{n-1} \frac{\delta \Gamma^{(j)}}{\delta \Phi(x)} \frac{\delta \Gamma^{(n-j)}}{\delta \Phi^*(x)} \right) = 0 \quad (23) $$

holds. This is the case, e.g., for the BRST symmetry and the associated master Equation (19), since Equation (23) turns out to be a consequence of a generalized Jacobi identity for the Batalin-Vilkovisky bracket for the conjugate variables $(\Phi, \Phi^*)$ [33].

The problem of establishing whether the functional identity

$$ S(\Gamma) = 0 \quad (24) $$

holds at order $n$ then boils down to proving that the most general solution to Equation (22) is of the form

$$ \Delta^{(n)} = -S_0(\Xi^{(n)}) \quad (25) $$

since then the redefined vertex functional $\Gamma^{(n)} \to \Gamma^{(n)} + \Xi^{(n)}$ will fulfill Equation (24) at order $n$ in the loop expansion.
I.e., the problem reduces to the computation of the cohomology $H(S_0)$ of the operator $S_0$ in the space of integrated local polynomials in the fields, the external sources and their derivatives. Two $S_0$-invariant integrated local polynomials $\mathcal{J}_1$ and $\mathcal{J}_2$ belong to the same cohomology class in $H(S_0)$ if and only if

$$ \mathcal{J}_1 = \mathcal{J}_2 + S_0(\mathcal{K}) \qquad (26) $$

for some integrated local polynomial $\mathcal{K}$. In particular, $H(S_0)$ is trivial if the only cohomology class is that of the zero element, so that the condition that $\mathcal{J}_1$ is $S_0$-invariant implies that

$$ \mathcal{J}_1 = S_0(\mathcal{K}) \qquad (27) $$

for some $\mathcal{K}$. Hence if one can prove that the cohomology of the operator $S_0$ is trivial in the space of breaking terms, then Equation (25) must be fulfilled by some choice of the functional $\Xi^{(n)}$. Moreover, it must be checked that the UV dimensions of the possible counterterms $\Xi^{(n)}$ are compatible with the action-like condition, so that renormalizability of the theory is not violated. An extensive review of BRST cohomologies for gauge theories is given in [61].

### 4.2. Non-Renormalizable Theories

The QAP does not in general hold for non-renormalizable theories. This does not come as a surprise, since the appearance of UV divergences of higher and higher degree, as the loop order increases, prevents one from characterizing the induced breaking of a functional identity in terms of a polynomial of a given finite degree (independent of the loop order).

Moreover, for the NLSM another important difference must be stressed: the basic Green's functions of the theory are not those of the quantized fields $\phi_a$, but those of the flat connection coupled to the external vector source $\tilde{J}_{a\mu}$ and of the non-linear constraint $\phi_0$ (coupled to $K_0$).
This result follows from the invertibility of

$$ \frac{\delta \Gamma}{\delta K_0} = \phi_0 + O(\hbar) $$

as a formal power series in $\hbar$ (since $\phi_0|_{\phi_a=0} = m_D$). Then the LFE for the vertex functional $\Gamma$

$$ W_a(\Gamma) = -\frac{1}{2} K_0(x) \phi_a(x) \qquad (28) $$

can be seen as a first-order functional differential equation controlling the dependence of $\Gamma$ on the fields $\phi_a$. Provided that a solution exists (as will be proven in Section 5), Equation (28) determines all the amplitudes involving at least one external $\phi_a$-leg in terms of the boundary condition provided by the functional $\Gamma[\tilde{J}, K_0] = \Gamma[\phi, \tilde{J}, K_0]|_{\phi_a=0}$.

$\Gamma[\tilde{J}, K_0]$ is the generating functional of the so-called ancestor amplitudes, i.e., the 1-PI amplitudes involving only external $\tilde{J}$ and $K_0$ legs.

It is therefore reasonable to assume the LFE in Equation (28) as the starting point for the quantization of the theory.

From a path-integral point of view, Equation (28) implies that one is performing an integration over the SU(2)-invariant Haar measure of the group, namely one is computing

$$ Z[J, \tilde{J}_\mu, K_0] = \int \mathcal{D}\Omega(\phi) \exp \left( i\Gamma^{(0)}[\phi, \tilde{J}_\mu, K_0] + i \int d^D x J_a \phi_a \right) \qquad (29) $$

where we denote by $\mathcal{D}\Omega(\phi)$ the SU(2) Haar measure (in the coordinate representation spanned by the fields $\phi_a$). This clarifies the geometrical meaning of the LFE.

### 4.3. Weak Power-Counting

As we have already noticed, in four dimensions the NLSM is not power-counting renormalizable, since already at one-loop level an infinite number of divergent $\phi$-amplitudes exists. One may wonder whether the UV behavior of the ancestor amplitudes (the boundary conditions to the LFE) is better.
It turns out that this is indeed the case: one finds that in $D$ dimensions an $n$-loop Feynman amplitude $G$ with $N_{K_0}$ external $K_0$-legs and $N_J$ external $\tilde{J}$-legs has superficial degree of divergence bounded by [2]

$$d(G) \leq (D-2)n + 2 - N_J - 2N_{K_0} \quad (30)$$

The proof is straightforward although somewhat lengthy and will not be reported here. It can be found in [2]. Equation (30) establishes the Weak Power-Counting (WPC) condition: at every loop order only a finite number of superficially divergent ancestor amplitudes exists.

For instance, in $D = 4$ and at one-loop order, Equation (30) reduces to

$$d(G) \leq 4 - N_J - 2N_{K_0} \quad (31)$$

i.e., UV divergent amplitudes involve only up to four external $\tilde{J}_\mu$ legs or two $K_0$-legs.

By taking into account Lorentz-invariance and global SU(2)$_R$ symmetry, the list of UV divergent amplitudes reduces to

$$ \begin{gathered} \int d^4 x \partial_\mu \tilde{J}_{a\nu} \partial^\mu \tilde{J}_a^\nu, \quad \int d^4 x (\partial \tilde{J}_a)^2, \quad \int d^4 x \epsilon_{abc} \partial_\mu \tilde{J}_{a\nu} \tilde{J}_b^\mu \tilde{J}_c^\nu, \quad \int d^4 x (\tilde{J}_{a\mu} \tilde{J}_a^\mu)^2 \\ \int d^4 x \tilde{J}_{a\mu} \tilde{J}_b^\mu \tilde{J}_{a\nu} \tilde{J}_b^\nu, \quad \int d^4 x \tilde{J}_{a\mu}^2, \quad \int d^4 x K_0^2, \quad \int d^4 x K_0 \tilde{J}_a^2 \end{gathered} \quad (32) $$

Notice that the counterterms are local.

It should be emphasized that the model is not power-counting renormalizable, even when ancestor amplitudes are considered, since according to Equation (30) the number of UV divergent amplitudes increases as the loop order $n$ grows.

A special case is the 2-dimensional NLSM.
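The counting in Equations (30) and (31) is easy to automate. A minimal sketch in plain Python (the helper name `wpc_bound` is ours, purely illustrative):

```python
def wpc_bound(D, n, NJ, NK0):
    """Superficial degree of divergence bound for an n-loop ancestor
    amplitude with NJ J-tilde legs and NK0 K_0 legs, Equation (30)."""
    return (D - 2) * n + 2 - NJ - 2 * NK0

# D = 4, one loop (Equation (31)): enumerate the superficially divergent
# ancestor amplitudes, i.e., those with a non-negative bound
divergent = [(NJ, NK0) for NJ in range(10) for NK0 in range(10)
             if (NJ, NK0) != (0, 0) and wpc_bound(4, 1, NJ, NK0) >= 0]
assert max(NJ for NJ, _ in divergent) == 4    # at most four J-tilde legs
assert max(NK0 for _, NK0 in divergent) == 2  # at most two K_0 legs

# D = 2: the (D - 2) n term drops out, so the bound is independent of the
# loop order and the set of divergent ancestor amplitudes never grows
assert all(wpc_bound(2, n, NJ, NK0) == wpc_bound(2, 1, NJ, NK0)
           for n in range(1, 6) for NJ in range(5) for NK0 in range(3))
```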
For $D = 2$ Equation (30) yields

$$d(G) \leq 2 - N_J - 2N_{K_0} \quad (33)$$

i.e., at every loop order there can be only two UV divergent ancestor amplitudes, namely

$$\int d^2 x \tilde{J}_{a\mu}^2 \quad \text{and} \quad \int d^2 x K_0$$

These are precisely of the same functional form as the ancestor amplitudes entering the tree-level vertex functional and, in this sense, the model shares the stability property of the classical action typical of power-counting renormalizable models. Renormalizability of the 2-dimensional NLSM can also be established by relying on the Ward identity of global SU(2) symmetry (see e.g., [37]).

A comment is in order here. In References [24,25] the external fields are the sources of connected Green's functions of certain quark-antiquark currents. The ancestor amplitudes in the NLSM, in the approach based on the LFE, do not have a direct physical interpretation of this type; however, they have a very clear geometrical meaning. First of all, $\tilde{J}_\mu$ is the source coupled to the flat connection naturally associated with the group element $\Omega$. On the other hand, $K_0$ is the unique scalar source required, in the special case of the SU(2) group, in order to control the renormalization of the non-linear classical SU(2) transformation of the $\phi_a$'s and thus plays the role of the so-called antifields [33,50]. The extension to a general Lie group G is addressed at the end of Section 5.

## 5. Cohomological Analysis of the LFE

In order to study the properties of the LFE, it is very convenient to introduce a fictitious BRST operator $s$ by promoting the gauge parameters $\alpha_a(x)$ to classical anticommuting ghosts $\omega_a(x)$.
I.e., one sets

$$
\begin{align}
s \tilde{J}_{a\mu} &= \partial_{\mu} \omega_a + \epsilon_{abc} \tilde{J}_{b\mu} \omega_c, & s \phi_a &= \frac{1}{2} \omega_a \phi_0 + \frac{1}{2} \epsilon_{abc} \phi_b \omega_c, & s \phi_0 &= -\frac{1}{2} \omega_a \phi_a \\
s K_0 &= \frac{1}{2} \omega_a \frac{\delta \Gamma^{(0)}}{\delta \phi_a(x)}, & s \omega_a &= -\frac{1}{2} \epsilon_{abc} \omega_b \omega_c
\end{align}
\tag{34} $$

Some comments are in order here. First of all, the BRST operator $s$ acts also on the external source $K_0$. Moreover, the BRST transformation of $\omega_a$ is fixed by nilpotency, namely $s^2 = 0$.

The introduction of the ghosts allows one to define a grading w.r.t. the conserved ghost number: $\omega_a$ has ghost number +1, while all the other fields and sources have ghost number zero. (The ghost number was called the Faddeev-Popov (ΦΠ) charge in [2].)

In terms of the operator $s$ we can write the $n$-th order projection ($n \ge 1$) of the LFE in Equation (28) as follows:

$$ \left[\int d^D x \omega_a W_a(\Gamma)\right]^{(n)} = s\Gamma^{(n)} + \sum_{j=1}^{n-1} \int d^D x \frac{1}{2}\omega_a \frac{\delta\Gamma^{(j)}}{\delta K_0} \frac{\delta\Gamma^{(n-j)}}{\delta\phi_a} = 0 \quad (35) $$

Notice that the bilinear term in the LFE manifests itself in the presence of the mixed $\frac{\delta\Gamma^{(j)}}{\delta K_0} \frac{\delta\Gamma^{(n-j)}}{\delta\phi_a}$ contributions. Moreover, in the r.h.s. there is no contribution from the breaking term linear in $\phi_a$ in Equation (18), since the latter remains classical.

Suppose now that all divergences have been recursively subtracted up to order $n-1$.
At the $n$-th order the UV divergent part can only come from the term involving $\Gamma^{(n)}$ in Equation (35) and therefore, if the LFE holds, one gets a condition on the UV divergent part $\Gamma_{pol}^{(n)}$ of $\Gamma^{(n)}$:

$$ s\Gamma_{pol}^{(n)} = 0 \qquad (36) $$

To be specific, one can use Dimensional Regularization and subtract only the pole part of the ancestor amplitudes, after the proper normalization of the ancestor background connection amplitudes

$$ \frac{m}{m_D} \frac{\delta^n \Gamma}{\delta \tilde{J}_{a_1}^{\mu_1} \dots \delta \tilde{J}_{a_n}^{\mu_n}} $$

The LFE then fixes the correct factor for the normalization of amplitudes involving $K_0$. This subtraction procedure has been shown to be symmetric [2,4], i.e., to preserve the LFE. The pole parts before subtraction obey the condition in Equation (36).

By the nilpotency of $s$, solving Equation (36) is equivalent to computing the cohomology of the BRST operator $s$ in the space of local functionals in $\tilde{J}, \phi, K_0$ and their derivatives with ghost number zero. This can be achieved by using the techniques developed in [62].

One first builds invariant combinations in one-to-one correspondence with the ancestor variables $\tilde{J}_{a\mu}$ and $K_0$. For that purpose it is more convenient to switch back to matrix notation. The difference $I_\mu = F_\mu - \tilde{J}_\mu$ transforms in the adjoint representation of SU(2), being the difference of two gauge connections. Thus the conjugate of this difference by $\Omega$,

$$ j_{\mu} = j_{a\mu} \frac{\tau_a}{2} = \Omega^{\dagger} I_{\mu} \Omega \qquad (37) $$

is invariant under $s$.
By direct computation one finds

$$
\begin{align}
m_D^2 j_{a\mu} &= m_D^2 I_{a\mu} - 2\phi_b^2 I_{a\mu} + 2\phi_b I_{b\mu}\phi_a + 2\phi_0 \epsilon_{abc} \phi_b I_{c\mu} \nonumber \\
&\equiv m_D^2 R_{ba} I_{b\mu} \tag{38}
\end{align}
$$

The matrix $R_{ba}$ is an element of the adjoint representation of SU(2) and therefore the mapping $\tilde{J}_{a\mu} \rightarrow j_{a\mu}$ is invertible.

One can also prove that the combination

$$
\bar{K}_0 \equiv \frac{m_D^2 K_0}{\phi_0} - \phi_a \frac{\delta S}{\delta \phi_a} \quad (39)
$$

is invariant [2]. At $\phi_a = 0$ one gets

$$
\bar{K}_0|_{\phi_a=0} = m_D K_0 \qquad (40)
$$

and therefore the transformation $K_0 \to \bar{K}_0$ is also invertible.

In terms of the new variables $\bar{K}_0$ and $j_\mu$ and by differentiating Equation (36) w.r.t. $\omega_a$ one gets

$$
\Theta_{ab} \frac{\delta \Gamma_{pol}^{(n)} [j, \bar{K}_0, \phi]}{\delta \phi_b} = 0 \quad (41)
$$

where $\Theta_{ab}$ is defined by $s\phi_b = \omega_a \Theta_{ab}$, i.e.,

$$
\Theta_{ab} = \frac{1}{2}\phi_0 \delta_{ab} + \frac{1}{2}\epsilon_{abc}\phi_c \quad (42)
$$

$\Theta_{ab}$ is invertible and thus Equation (41) yields

$$
\frac{\delta \Gamma_{pol}^{(n)} [j, \bar{K}_0, \phi]}{\delta \phi_b} = 0 \qquad (43)
$$

This equation is a very powerful one. It states that the $n$-th order divergences (once the theory has been made finite up to order $n-1$) can depend on the $\phi_a$ fields only through the invariant combinations $\bar{K}_0$ and $j_{a\mu}$. These invariant variables have been called bleached variables and they are in one-to-one correspondence with the ancestor variables $K_0$ and $\tilde{J}_{a\mu}$.

The subtraction strategy is thus the following. One computes the divergent part of the properly normalized ancestor amplitudes that are superficially divergent at a given loop order according to the WPC formula in Equation (30).
Then the replacement $\tilde{J}_{a\mu} \to j_{a\mu}$ and $K_0 \to \bar{K}_0$ is carried out. This gives the full set of counterterms required to make the theory finite at order $n$ in the loop expansion.

As an example, we give here the explicit form of the one-loop divergent counterterms for the NLSM in $D = 4$ [2] (notice that we have set $g = 1$ according to our conventions in this paper):

$$
\hat{\Gamma}^{(1)} = \frac{1}{D-4} \left[ -\frac{1}{12} \frac{1}{(4\pi)^2} \frac{m_D^2}{m^2} (\mathcal{I}_1 - \mathcal{I}_2 - \mathcal{I}_3) + \frac{1}{(4\pi)^2} \frac{1}{48} \frac{m_D^2}{m^2} (\mathcal{I}_6 + 2\mathcal{I}_7) \right. \\
\left. + \frac{1}{(4\pi)^2} \frac{3}{2} \frac{1}{m^2 m_D^2} \mathcal{I}_4 + \frac{1}{(4\pi)^2} \frac{1}{2} \frac{1}{m^2} \mathcal{I}_5 \right] \tag{44}
$$

By projecting the above equation on the relevant monomial in the $\phi_a$ fields one can get the divergences of the descendant amplitudes. As an example, for the four-point $\phi_a$ function one gets by explicit computation that the contribution from the combination $\mathcal{I}_1 - \mathcal{I}_2 - \mathcal{I}_3$ is zero, while the remaining invariants give

$$ \hat{\Gamma}^{(1)}[\phi\phi\phi\phi] = -\frac{1}{D-4} \frac{1}{m_D^2 m^2 (4\pi)^2} \int d^D x \left( -\frac{1}{3}\partial_\mu \phi_a \partial^\mu \phi_a \partial_\nu \phi_b \partial^\nu \phi_b - \frac{2}{3}\partial_\mu \phi_a \partial_\nu \phi_a \partial^\mu \phi_b \partial^\nu \phi_b \right. \\ \left. -\frac{3}{2}\phi_a \Box \phi_a \phi_b \Box \phi_b - 2\phi_a \Box \phi_a \partial_\mu \phi_b \partial^\mu \phi_b \right) \quad (45) $$

The invariants in the combination $\mathcal{I}_6 + 2\mathcal{I}_7$ generate the counterterms in the first line of Equation (45); these counterterms are globally SU(2) invariant. The other terms are generated by invariants involving the source $K_0$. In [39,40] they were constructed by means of a (non-locally invertible) field redefinition of $\phi_a$.
The full set of mixed four-point amplitudes involving at least one $\phi_a$ leg and the external sources $\tilde{J}_\mu$ and $K_0$ can be found in [2].

The correspondence with the linear sigma model in the large coupling limit has been studied in [5].

The massive NLSM in the LFE formulation has been studied in [15], while the symmetric subtraction procedure for the LFE associated with polar coordinates in the simplest case of the free complex scalar field has been given in [16].

In the SU(2) NLSM just one scalar source $K_0$ is sufficient in order to formulate the LFE. For an arbitrary Lie group G the LFE can always be written if one introduces a full set of antifields $\phi_I^*$ as follows. Let us denote by $\Omega(\phi_I)$ the group element belonging to G, parameterized by local coordinates $\phi_I$. Then under an infinitesimal left G-transformation of parameters $\alpha_J$

$$ \delta\Omega = i\alpha_J T_J \Omega \quad (46) $$

where $T_J$ are the generators of the group G, one has

$$ \delta\phi_I = S_{IJ}(\phi)\alpha_J \quad (47) $$

It is convenient to promote the local left invariance to a BRST symmetry by upgrading the parameters $\alpha_J$ to local classical anticommuting ghosts $C_J$. Then one can introduce in the usual way the couplings with the antifields $\phi_I^*$ through

$$ S_{\text{ext}} = \int d^D x \phi_I^* S_{IJ}(\phi) C_J \quad (48) $$

and then write the corresponding BV master equation [33]. This is the generalization of the LFE valid for the group G. The cohomology of the linearized BV operator (which is the main tool for identifying the bleached variables, as shown above) has been studied for any Lie group G in [62].

## 6.
Higher Loops

At orders $n > 1$ the LFE for $\Gamma^{(n)}$ is an inhomogeneous equation

$$ s\Gamma^{(n)} = \Delta^{(n)} = -\frac{1}{2} \int d^D x \omega_a \sum_{j=1}^{n-1} \frac{\delta\Gamma^{(j)}}{\delta K_0} \frac{\delta\Gamma^{(n-j)}}{\delta\phi_a} \quad (49) $$

The above equation can be explicitly integrated by using the techniques of the Slavnov-Taylor (ST) parameterization of the effective action [63–65] (originally developed in order to provide a strategy for the restoration of the ST identity of non-anomalous gauge theories in the absence of a symmetric regularization).

For that purpose it is convenient to redefine the ghost according to

$$ \bar{\omega}_a = \Theta_{ab} \omega_b \tag{50} $$

where $\Theta_{ab}$ is given in Equation (42). The action of $s$ then reduces to

$$ s\bar{K}_0 = s j_{a\mu} = 0, \quad s\phi_a = \bar{\omega}_a, \quad s\bar{\omega}_a = 0 \tag{51} $$

This means that the variables $\bar{K}_0$ and $j_{a\mu}$ are invariant, while the pair $(\phi_a, \bar{\omega}_a)$ is a BRST doublet (i.e., a pair of variables $u, v$ such that $s u = v, s v = 0$) [33,66].

By the nilpotency of $s$ the following consistency condition must hold for $\Delta^{(n)}$:

$$ s\Delta^{(n)} = 0 \tag{52} $$

The fulfillment of the above equation as a consequence of the validity of the LFE up to order $n-1$ is proven in [63]. In terms of the new variables Equation (49) reads

$$ \int d^D x \bar{\omega}_a \frac{\delta \Gamma^{(n)}}{\delta \phi_a} = \Delta^{(n)} [\bar{\omega}_a, \phi_a, \bar{K}_0, j_{a\mu}] \quad (53) $$

By noticing that $\Delta^{(n)}$ is linear in $\bar{\omega}_a$ and by differentiating Equation (53) w.r.t.
$\bar{\omega}_a$ we arrive at

$$ \frac{\delta \Gamma^{(n)}}{\delta \phi_a(x)} = \frac{\delta \Delta^{(n)}}{\delta \bar{\omega}_a(x)} \qquad (54) $$

The above equation controls the explicit dependence of the $n$-th order vertex functional on $\phi_a$ (there is in addition an implicit dependence on $\phi_a$ through the variables $j_{a\mu}$ and $\bar{K}_0$).

The explicit dependence on $\phi_a$ only appears through lower-order terms. Hence it does not influence the $n$-th order ancestor amplitudes.

The solution of Equation (49) can be written in compact form by using a homotopy operator $\kappa$, defined below. Indeed $\Gamma^{(n)}$ will be the sum of an $n$-th order contribution $A^{(n)}$, depending only on $j_{a\mu}$ and $\bar{K}_0$, plus a lower-order term:

$$ \Gamma^{(n)}[\phi_a, \bar{\omega}_a, \bar{K}_0, j_{a\mu}] = A^{(n)}[\bar{K}_0, j_{a\mu}] + \kappa \, \Delta^{(n)}[\bar{\omega}_a, \phi_a, \bar{K}_0, j_{a\mu}] \tag{55} $$

The operator $\lambda_t$ acts as follows on a generic functional $X[\phi_a, \bar{\omega}_a, \bar{K}_0, j_{a\mu}]$:

$$ \lambda_t X[\phi_a, \bar{\omega}_a, \bar{K}_0, j_{a\mu}] = X[t\phi_a, t\bar{\omega}_a, \bar{K}_0, j_{a\mu}] \quad (56) $$

The homotopy operator $\kappa$ for the BRST differential $s$ acting on the doublet $(\phi_a, \bar{\omega}_a)$ of Equation (51) is therefore given by

$$ \kappa = \int d^D x \int_0^1 dt \, \phi_a(x) \lambda_t \frac{\delta}{\delta \bar{\omega}_a(x)} \qquad (57) $$

and satisfies the condition

$$ \{s, \kappa\} = 1 \quad (58) $$

where $\mathbf{1}$ denotes the identity on the space of functionals spanned by $\bar{\omega}_a, \phi_a$.

An important remark is in order here.
The theory remains finite and respects the LFE if one adds to $\Gamma^{(n)}$ some integrated local monomials in $j_{a\mu}$ and $\bar{K}_0$ and ordinary derivatives thereof (with finite coefficients), compatible with Lorentz symmetry and global SU(2) invariance, while respecting the WPC condition in Equation (30):

$$ \Gamma_{finite}^{(n)} = \sum_j \int d^D x M_j (j_{a\mu}, \bar{K}_0) \qquad (59) $$

This is a consequence of the lack of power-counting renormalizability of the theory: one can introduce, order by order in the loop expansion, an increasing number of finite parameters that do not appear in the classical action. Notice that they cannot be inserted back at tree-level: if one performs such an operation, the WPC condition is lost.

This observation suggests that these finite parameters cannot easily be understood as physical free parameters of the theory, since they cannot appear in the tree-level action. It was then proposed to define the model by choosing the symmetric subtraction scheme discussed in Section 5 and by considering as physical parameters only those present in the classical action plus the scale of the radiative corrections $\Lambda$ [4]. While acceptable on physical grounds, from the mathematical point of view one may wonder whether there is some deeper reason justifying such a strategy. We will comment briefly on this point in the Conclusions.

## 7. Applications to Yang-Mills and the Electroweak Theory

When the vector source $\tilde{J}_{a\mu}$ becomes a dynamical gauge field, the NLSM action gives rise to the Stückelberg mass term [67].

The subtraction procedure based on the LFE has been used to implement a mathematically consistent formulation of non-linearly realized massive Yang-Mills theory. SU(2) Yang-Mills in the LFE formalism has been formulated in [6]. The pseudo-Goldstone fields take over the role of the $\phi_a$ fields of the NLSM. Their Green's functions are fixed by the LFE.
The WPC proves to be very restrictive: once it is imposed, the only allowed classical solution turns out to be the usual Yang-Mills action plus the Stückelberg mass term.

This is a very powerful (and somewhat surprising) result. Indeed all possible monomials constructed out of $j_{a\mu}$ and ordinary derivatives thereof are gauge-invariant, and therefore they could in principle be used as interaction vertices in the classical action.

In other words, the peculiar structure of the Yang-Mills action

$$ S_{YM} = - \int d^4 x \frac{1}{4} G_{a\mu\nu} G_a^{\mu\nu} \qquad (60) $$

where $G_{a\mu\nu}$ denotes the field strength of the gauge field $A_{a\mu}$

$$ G_{a\mu\nu} = \partial_{\mu} A_{a\nu} - \partial_{\nu} A_{a\mu} + f_{abc} A_{b\mu} A_{c\nu} $$

is not automatically enforced by the requirement of gauge invariance if the gauge group is non-linearly realized. However, if the WPC condition is satisfied, the only admissible solution is Yang-Mills theory plus the Stückelberg mass term:

$$ S_{nLYM} = S_{YM} + \int d^4 x \frac{M^2}{2} (A_{a\mu} - F_{a\mu})^2 \qquad (61) $$

Massive Yang-Mills theory in the presence of a non-linearly realized gauge group is physically unitary [67] (despite the fact that it violates the Froissart bound [68–74] at tree-level). The counterterms in the Landau gauge have been computed at the one-loop level in [7]. The formulation of the theory in a general 't Hooft gauge has been given in [8].
---PAGE_BREAK---

The approach based on the LFE can also be used for non-perturbative studies of Yang-Mills theory on the lattice. The phase diagram of SU(2) Yang-Mills has been considered in [17]. Evidence is accumulating for the formation of isospin scalar bound states [18] in the supposedly confined phase of the theory [19].

An analytic approach based on the massless bound-state formalism for the implementation of the Schwinger mechanism in non-Abelian gauge theories has been presented in [75–77].
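The Lie-algebra structure behind Equations (60) and (61) can be made concrete with a short symbolic check. The sketch below (an illustration only, assuming Python with sympy is available; the coupling is set to 1 and $f_{abc} = \epsilon_{abc}$) builds $G_{a\mu\nu}$ and verifies that under an infinitesimal gauge transformation $\delta A_{a\mu} = \partial_\mu \alpha_a + \epsilon_{abc} A_{b\mu} \alpha_c$ it rotates in the adjoint representation, $\delta G_{a\mu\nu} = \epsilon_{abc} G_{b\mu\nu} \alpha_c$:

```python
import sympy as sp

# space-time coordinates, SU(2) gauge fields A_{a mu}(x) and gauge parameters alpha_a(x)
x = sp.symbols('x0:4')
A = [[sp.Function(f'A_{a}_{mu}')(*x) for mu in range(4)] for a in range(3)]
alpha = [sp.Function(f'alpha_{a}')(*x) for a in range(3)]

def eps(a, b, c):
    return sp.LeviCivita(a, b, c)

def G(a, mu, nu):
    # field strength G_{a mu nu} = d_mu A_{a nu} - d_nu A_{a mu} + eps_{abc} A_{b mu} A_{c nu}
    return (sp.diff(A[a][nu], x[mu]) - sp.diff(A[a][mu], x[nu])
            + sum(eps(a, b, c) * A[b][mu] * A[c][nu]
                  for b in range(3) for c in range(3)))

# infinitesimal gauge variation of the gauge field (first order in alpha)
dA = [[sp.diff(alpha[a], x[mu])
       + sum(eps(a, b, c) * A[b][mu] * alpha[c] for b in range(3) for c in range(3))
       for mu in range(4)] for a in range(3)]

def dG(a, mu, nu):
    # variation of G induced by dA, to first order in alpha
    return (sp.diff(dA[a][nu], x[mu]) - sp.diff(dA[a][mu], x[nu])
            + sum(eps(a, b, c) * (dA[b][mu] * A[c][nu] + A[b][mu] * dA[c][nu])
                  for b in range(3) for c in range(3)))

# adjoint covariance: delta G_{a mu nu} = eps_{abc} G_{b mu nu} alpha_c
mu, nu = 0, 1
for a in range(3):
    covariant = sum(eps(a, b, c) * G(b, mu, nu) * alpha[c]
                    for b in range(3) for c in range(3))
    assert sp.expand(dG(a, mu, nu) - covariant) == 0
```

This covariance is what makes every invariant monomial built out of the field strength (and, in the non-linearly realized setting, out of the bleached variables) gauge-invariant, so that only the WPC singles out the Yang-Mills structure.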
A very important physical application of non-linearly realized gauge theories is the formulation of a non-linearly realized electroweak theory, based on the group SU(2) × U(1). The set of gauge fields comprises the SU(2) fields $A_{a\mu}$ and the hypercharge U(1) gauge connection $B_\mu$. By using the technique of bleached variables one can first construct SU(2) invariant variables in one-to-one correspondence with $A_\mu = A_{a\mu} \frac{\tau_a}{2}$ [8]:

$$w_{\mu} = \Omega^{\dagger} g A_{\mu} \Omega - g' \frac{\tau_3}{2} B_{\mu} + i \Omega^{\dagger} \partial_{\mu} \Omega \equiv w_{a\mu} \frac{\tau_a}{2} \quad (62)$$

In the above equation we have reinserted, for later convenience, the SU(2) and U(1) coupling constants $g$ and $g'$. Since $w_\mu$ is SU(2) invariant, the hypercharge generator coincides with the electric charge generator. $w_{3\mu}$ is then the bleached counterpart of the $Z_\mu$ field, since

$$Z_{\mu} = \left. \frac{1}{\sqrt{g^2 + g'^2}} w_{3\mu} \right|_{\phi_a=0} = c_W A_{3\mu} - s_W B_{\mu} \quad (63)$$

where $s_W$ and $c_W$ are the sine and cosine of the Weinberg angle

$$s_W = \frac{g'}{\sqrt{g^2 + g'^2}}, \qquad c_W = \frac{g}{\sqrt{g^2 + g'^2}} \quad (64)$$

The photon $A_\mu$ is described by the combination orthogonal to $Z_\mu$, namely

$$A_{\mu} = s_W A_{3\mu} + c_W B_{\mu} \quad (65)$$

One can build out of $A_{1\mu}$ and $A_{2\mu}$ the charged $W^\pm$ fields

$$W_{\mu}^{\pm} = \frac{1}{\sqrt{2}}(A_{1\mu} \mp iA_{2\mu}) \quad (66)$$

whose bleached counterparts are simply

$$w_{\mu}^{\pm} = \frac{1}{\sqrt{2}}(w_{1\mu} \mp i w_{2\mu}) \quad (67)$$

The WPC allows for the same symmetric couplings as in the Standard Model and for two independent mass invariants [9–11]

$$M_W^2 w_{\mu}^+ w^{-\mu} + \frac{M_Z^2}{2} w_{3\mu}^2 \quad (68)$$

so that the masses of the Z and W bosons are not related by the Weinberg relation

$$M_Z = \frac{M_W}{c_W}$$
---PAGE_BREAK---

This is a peculiar signature of the mass generation
mechanism *à la* Stückelberg, which is not present in the linearly realized theory *à la* Brout-Englert-Higgs [78–80] (even if one discards the condition of power-counting renormalizability in favour of the WPC) [12].

The inclusion of physical scalar resonances in the non-linearly realized electroweak model, while respecting the WPC, yields definite predictions for the Beyond the Standard Model (BSM) sector. Indeed it turns out that it is impossible to add a scalar singlet without breaking the WPC condition. The minimal solution requires an SU(2) doublet of scalars, leading to one CP-even physical field (to be identified with the recently discovered scalar resonance at 125.6 GeV) and to three additional heavier physical states, one CP-odd and neutral and two charged ones [13]. The proof of the WPC in this model and the BRST identification of the physical states have been given in [14].

The WPC and the symmetries of the theory select uniquely the tree-level action of the non-linearly realized electroweak model. As in the NLSM case, additional finite counterterms are mathematically allowed at higher orders in the loop expansion. In [4] it has been argued that they cannot be interpreted as additional physical parameters (unlike in the effective field theory approach), on the basis of the observation that they are forbidden at tree-level by the WPC, and this strategy has been consistently applied in [7,11].

It remains an open question whether a Renormalization Group equation exists, involving a finite change in the higher order subtractions, in such a way as to compensate the change in the sliding scale $\Lambda$ of the radiative corrections. We notice that in this case the finite higher order counterterms would be functions of the tree-level parameters only (unlike in the conventional effective field theory approach, where they are treated as independent extra free parameters).
This issue deserves further investigation, since the possibility of running the scale $\Lambda$ in a mathematically consistent way would obviously make it possible to obtain physical predictions for the same observables in different energy regimes.

## 8. Conclusions

The LFE makes it apparent that the independent amplitudes of the NLSM are not those of the quantum fields, over which the path-integral is carried out, but rather those of the background connection $\tilde{J}_\mu$ and of the source $K_0$, coupled to the solution of the non-linear constraint $\phi_0$. The WPC can be formulated only for these ancestor amplitudes; the LFE in turn fixes the descendant amplitudes, involving at least one pion external leg. Within this formulation, the minimal symmetric subtraction discussed in Section 5 is natural, since it provides a way to implement the idea that the number of ancestor interaction vertices, appearing in the classical action and compatible with the WPC, must be finite.

However, it should be stressed that the most general solution to the LFE, compatible with the WPC, does not forbid choosing different finite parts of the higher order symmetric counterterms (as in the most standard view of effective field theories, where such arbitrariness is associated with extra free parameters of the non-renormalizable theory), as long as they are introduced at the order prescribed by the WPC condition and without violating the LFE.

In this connection it should be noticed that the addition of the symmetric finite renormalizations in Equation (59), which are allowed by the symmetries of the theory, is equivalent to a change in the Hopf algebra [81,82] of the model. This is because the finite counterterms in Equation (59) modify the set of 1-PI Feynman diagrams on which the Hopf algebra is constructed, as a dual of the enveloping algebra of the Lie algebra of Feynman graphs.
The approach to renormalization based on Hopf algebras is known to be equivalent [83] to the traditional approach based on the Bogoliubov recursive formula and its explicit solution through Zimmermann's forest formula [84]. For models endowed with a WPC it might provide new insights into the structure of the UV divergences of the theory. This connection seems to deserve further investigation.

**Acknowledgments:** It is a pleasure to acknowledge many enlightening discussions with R. Ferrari. Useful comments and a careful reading of the manuscript by D. Bettinelli are also gratefully acknowledged.
---PAGE_BREAK---

# Appendix

## One-Loop Invariants

We report here the invariants controlling the one-loop divergences of the NLSM in $D = 4$ [2].

$$
\begin{aligned}
\mathcal{I}_1 &= \int d^D x \, [D_\mu (F - \bar{J})_\nu]_a [D^\mu (F - \bar{J})^\nu]_a, \\
\mathcal{I}_2 &= \int d^D x \, [D_\mu (F - \bar{J})^\mu]_a [D_\nu (F - \bar{J})^\nu]_a, \\
\mathcal{I}_3 &= \int d^D x \, \epsilon_{abc} [D_\mu (F - \bar{J})_\nu]_a (F_b^\mu - \bar{J}_b^\mu) (F_c^\nu - \bar{J}_c^\nu), \\
\mathcal{I}_4 &= \int d^D x \left(\frac{m_D^2 K_0}{\phi_0} - \phi_a \frac{\delta S}{\delta \phi_a}\right)^2, \\
\mathcal{I}_5 &= \int d^D x \left(\frac{m_D^2 K_0}{\phi_0} - \phi_a \frac{\delta S}{\delta \phi_a}\right) (F_b^\mu - \bar{J}_b^\mu)^2, \\
\mathcal{I}_6 &= \int d^D x \, (F_a^\mu - \bar{J}_a^\mu)^2 (F_b^\nu - \bar{J}_b^\nu)^2, \\
\mathcal{I}_7 &= \int d^D x \, (F_a^\mu - \bar{J}_a^\mu) (F_a^\nu - \bar{J}_a^\nu) (F_{b\mu} - \bar{J}_{b\mu}) (F_{b\nu} - \bar{J}_{b\nu})
\end{aligned}
\quad (\text{A1}) $$

In the above equation $D_\mu[F]$ stands for the covariant derivative w.r.t. $F_{a\mu}$

$$ D_{\mu}[F]_{ab} = \delta_{ab}\partial_{\mu} + \epsilon_{acb}F_{c\mu} \quad (\text{A2}) $$

**Conflicts of Interest:** The author declares no conflict of interest.

## References

1. Ferrari, R. Endowing the nonlinear sigma model with a flat connection structure: A way to renormalization.
JHEP 2005, doi:10.1088/1126-6708/2005/08/048.

2. Ferrari, R.; Quadri, A. A Weak power-counting theorem for the renormalization of the non-linear sigma model in four dimensions. Int. J. Theor. Phys. 2006, 45, 2497–2515.

3. Bettinelli, D.; Ferrari, R.; Quadri, A. Path-integral over non-linearly realized groups and Hierarchy solutions. JHEP 2007, doi:10.1088/1126-6708/2007/03/065.

4. Bettinelli, D.; Ferrari, R.; Quadri, A. Further Comments on the Symmetric Subtraction of the Nonlinear Sigma Model. Int. J. Mod. Phys. 2008, A23, 211–232.

5. Bettinelli, D.; Ferrari, R.; Quadri, A. The Hierarchy principle and the large mass limit of the linear sigma model. Int. J. Theor. Phys. 2007, 46, 2560–2590.

6. Bettinelli, D.; Ferrari, R.; Quadri, A. A Massive Yang-Mills Theory based on the Nonlinearly Realized Gauge Group. Phys. Rev. D 2008, 77, doi:10.1103/PhysRevD.77.045021.

7. Bettinelli, D.; Ferrari, R.; Quadri, A. One-loop self-energy and counterterms in a massive Yang-Mills theory based on the nonlinearly realized gauge group. Phys. Rev. D 2008, 77, doi:10.1103/PhysRevD.77.105012.

8. Bettinelli, D.; Ferrari, R.; Quadri, A. Gauge Dependence in the Nonlinearly Realized Massive SU(2) Gauge Theory. J. General. Lie Theor. Appl. 2008, 2, 122–126.

9. Bettinelli, D.; Ferrari, R.; Quadri, A. The SU(2) × U(1) Electroweak Model based on the Nonlinearly Realized Gauge Group. Int. J. Mod. Phys. 2009, A24, 2639–2654.

10. Bettinelli, D.; Ferrari, R.; Quadri, A. The SU(2) × U(1) Electroweak Model based on the Nonlinearly Realized Gauge Group. II. Functional Equations and the Weak Power-Counting. Acta Phys. Polon. 2010, B41, 597–628.

11. Bettinelli, D.; Ferrari, R.; Quadri, A. One-loop Self-energies in the Electroweak Model with Nonlinearly Realized Gauge Group. Phys. Rev. D 2009, 79, doi:10.1103/PhysRevD.79.125028.
---PAGE_BREAK---

12. Quadri, A. The Algebra of Physical Observables in Nonlinearly Realized Gauge Theories. *Eur. Phys.
J.* **2010**, C70, 479-489.

13. Binosi, D.; Quadri, A. Scalar Resonances in the Non-linearly Realized Electroweak Theory. *JHEP* **2013**, 1302, doi:10.1007/JHEP02(2013)020.

14. Bettinelli, D.; Quadri, A. The Stueckelberg Mechanism in the presence of Physical Scalar Resonances. *Phys. Rev. D* **2013**, 88, doi:10.1103/PhysRevD.88.065023.

15. Ferrari, R. A Symmetric Approach to the Massive Nonlinear Sigma Model. *J. Math. Phys.* **2011**, 52, 092303:1-092303:16.

16. Ferrari, R. On the Renormalization of the Complex Scalar Free Field Theory. *J. Math. Phys.* **2010**, 51, 032305:1-032305:20.

17. Ferrari, R. On the Phase Diagram of Massive Yang-Mills. *Acta Phys. Polon.* **2012**, B43, 1965-1980.

18. Ferrari, R. On the Spectrum of Lattice Massive SU(2) Yang-Mills. *Acta Phys. Polon.* **2013**, B44, 1871-1885.

19. Ferrari, R. Metamorphosis versus Decoupling in Nonabelian Gauge Theories at Very High Energies. *Acta Phys. Polon.* **2012**, B43, 1735-1767.

20. Gell-Mann, M.; Levy, M. The axial vector current in beta decay. *Nuovo Cim.* **1960**, 16, 705-726.

21. Weinberg, S. Nonlinear realizations of chiral symmetry. *Phys. Rev.* **1968**, 166, 1568-1577.

22. Coleman, S.R.; Wess, J.; Zumino, B. Structure of phenomenological Lagrangians. 1. *Phys. Rev.* **1969**, 177, 2239-2247.

23. Callan, C.G., Jr.; Coleman, S.R.; Wess, J.; Zumino, B. Structure of phenomenological Lagrangians. 2. *Phys. Rev.* **1969**, 177, 2247-2250.

24. Weinberg, S. Phenomenological Lagrangians. *Physica* **1979**, A96, 327-340.

25. Gasser, J.; Leutwyler, H. Chiral Perturbation Theory to One Loop. *Ann. Phys.* **1984**, 158, 142-210.

26. Gasser, J.; Leutwyler, H. Chiral Perturbation Theory: Expansions in the Mass of the Strange Quark. *Nucl. Phys.* **B** **1985**, 250, 465-516.

27. Bijnens, J.; Colangelo, G.; Ecker, G. Renormalization of chiral perturbation theory to order $p^6$. *Ann. Phys.* **2000**, 280, 100-139.

28. 
Ecker, G.; Gasser, J.; Leutwyler, H.; Pich, A.; de Rafael, E. Chiral Lagrangians for Massive Spin 1 Fields. *Phys. Lett.* **B** **1989**, 223, 425-432.

29. Buchmuller, W.; Wyler, D. Effective Lagrangian Analysis of New Interactions and Flavor Conservation. *Nucl. Phys.* **B** **1986**, 268, 621-653.

30. Donoghue, J.F. Introduction to the effective field theory description of gravity. Available online: http://arxiv.org/abs/gr-qc/9512024 (accessed on 15 April 2014).

31. Weinberg, S. *The Quantum Theory of Fields*. Vol. 2: Modern Applications; Cambridge University Press: Cambridge, UK, 1996.

32. Itzykson, C.; Zuber, J. *Quantum Field Theory*; McGraw-Hill: New York, NY, USA, 1980.

33. Gomis, J.; Paris, J.; Samuel, S. Antibracket, antifields and gauge theory quantization. *Phys. Rep.* **1995**, 259, 1-145.

34. Gomis, J.; Weinberg, S. Are nonrenormalizable gauge theories renormalizable? *Nucl. Phys.* **B** **1996**, 469, 473-487.

35. Brezin, E.; Zinn-Justin, J.; Le Guillou, J. Renormalization of the Nonlinear Sigma Model in (Two + Epsilon) Dimension. *Phys. Rev. D* **1976**, 14, 2615-2621.

36. Becchi, C.; Piguet, O. On the Renormalization of Two-dimensional Chiral Models. *Nucl. Phys.* **B** **1989**, 315, 153-165.

37. Zinn-Justin, J. *Quantum Field Theory and Critical Phenomena*; International Series of Monographs on Physics; Oxford University Press: Oxford, UK, 2002.

38. Ecker, G.; Honerkamp, J. Application of invariant renormalization to the nonlinear chiral invariant pion lagrangian in the one-loop approximation. *Nucl. Phys.* **B** **1971**, 35, 481-492.

39. Appelquist, T.; Bernard, C.W. The Nonlinear σ Model in the Loop Expansion. *Phys. Rev.* **D** **1981**, 23, doi:10.1103/PhysRevD.23.425.

40. Tataru, L. One Loop Divergences of the Nonlinear Chiral Theory. *Phys. Rev.* **D** **1975**, 12, 3351-3352.

41. Gerstein, I.; Jackiw, R.; Weinberg, S.; Lee, B. Chiral loops. *Phys. Rev.* **D** **1971**, 3, 2486-2492.

42. Charap, J. 
Closed-loop calculations using a chiral-invariant lagrangian. *Phys. Rev.* **D** **1970**, 2, 1554-1561. + +43. Honerkamp, J.; Meetz, K. Chiral-invariant perturbation theory. *Phys. Rev.* **D** **1971**, 3, 1996-1998. +---PAGE_BREAK--- + +44. Stueckelberg, E. Interaction forces in electrodynamics and in the field theory of nuclear forces. *Helv. Phys. Acta* **1938**, *11*, 299-328. + +45. Ruegg, H.; Ruiz-Altaba, M. The Stueckelberg field. *Int. J. Mod. Phys.* **2004**, *A19*, 3265-3348. + +46. Altarelli, G.; Mangano, M.L. Electroweak Physics. In Proceedings of CERN Workshop on Standard Model Physics (and More) at the LHC, CERN, Geneva, Switzerland, 25-26 May 1999. + +47. Azatov, A.; Contino, R.; Galloway, J. Model-Independent Bounds on a Light Higgs. JHEP **2012**, 1204, doi:10.1007/JHEP04(2012)127. + +48. Contino, R. The Higgs as a Composite Nambu-Goldstone Boson. Available online: http://arxiv.org/abs/1005.4269 (accessed on 15 April 2014). + +49. Espinosa, J.; Grojean, C.; Muhleitner, M.; Trott, M. First Glimpses at Higgs' face. JHEP **2012**, 1212, doi:10.1007/JHEP12(2012)045. + +50. Zinn-Justin, J. Renormalization of Gauge Theories—Unbroken and broken. Phys. Rev. D **1974**, *9*, 933–946. + +51. Velo, G.; Wightman, A. Renormalization Theory. In Proceedings of the NATO Advanced Study Institute, Erice, Sicily, Italy, 17–31 August 1975. + +52. Breitenlohner, P.; Maison, D. Dimensional Renormalization and the Action Principle. Commun. Math. Phys. **1977**, *52*, 11–38. + +53. Lam, Y.M.P. Perturbation Lagrangian theory for scalar fields: Ward-Takahashi identity and current algebra. Phys. Rev. D **1972**, *6*, 2145–2161. + +54. Lam, Y.M.P. Perturbation lagrangian theory for Dirac fields—Ward-Takahashi identity and current algebra. Phys. Rev. D **1972**, *6*, 2161–2167. + +55. Lowenstein, J. Normal product quantization of currents in Lagrangian field theory. Phys. Rev. D **1971**, *4*, 2281–2290. + +56. Piguet, O.; Sorella, S. 
Algebraic renormalization: Perturbative renormalization, symmetries and anomalies. Lect. Notes Phys. **1995**, M28, 1–134.

57. Becchi, C.; Rouet, A.; Stora, R. Renormalization of Gauge Theories. Ann. Phys. **1976**, *98*, 287-321.

58. Becchi, C.; Rouet, A.; Stora, R. Renormalization of the Abelian Higgs-Kibble Model. Commun. Math. Phys. **1975**, *42*, 127-162.

59. Becchi, C.; Rouet, A.; Stora, R. The Abelian Higgs-Kibble Model. Unitarity of the S Operator. Phys. Lett. B **1974**, *52*, 344-346.

60. Wess, J.; Zumino, B. Consequences of anomalous Ward identities. Phys. Lett. B **1971**, *37*, 95-97.

61. Barnich, G.; Brandt, F.; Henneaux, M. Local BRST cohomology in gauge theories. Phys. Rep. **2000**, *338*, 439-569.

62. Henneaux, M.; Wilch, A. Local BRST cohomology of the gauged principal nonlinear sigma model. Phys. Rev. D **1998**, *58*, 025017:1-025017:14.

63. Quadri, A. Slavnov-Taylor parameterization of Yang-Mills theory with massive fermions in the presence of singlet axial-vector currents. JHEP **2005**, 0506, doi:10.1088/1126-6708/2005/06/068.

64. Quadri, A. Higher order nonsymmetric counterterms in pure Yang-Mills theory. J. Phys. G **2004**, *30*, 677-689.

65. Quadri, A. Slavnov-Taylor parameterization for the quantum restoration of BRST symmetries in anomaly free gauge theories. JHEP **2003**, 0304, doi:10.1088/1126-6708/2003/04/017.

66. Quadri, A. Algebraic properties of BRST coupled doublets. JHEP **2002**, 0205, doi:10.1088/1126-6708/2002/05/051.

67. Ferrari, R.; Quadri, A. Physical unitarity for massive non-Abelian gauge theories in the Landau gauge: Stueckelberg and Higgs. JHEP **2004**, 0411, doi:10.1088/1126-6708/2004/11/019.

68. Froissart, M. Asymptotic behavior and subtractions in the Mandelstam representation. Phys. Rev. **1961**, *123*, 1053-1057.

69. Cornwall, J.M.; Levin, D.N.; Tiktopoulos, G. Derivation of Gauge Invariance from High-Energy Unitarity Bounds on the S Matrix. Phys. Rev. 
D **1974**, *10*, 1145-1167. + +70. Lee, B.W.; Quigg, C.; Thacker, H. Weak Interactions at Very High-Energies: The Role of the Higgs Boson Mass. Phys. Rev. D **1977**, *16*, 1519-1531. + +71. Weldon, H.A. The Effects of Multiple Higgs Bosons on Tree Unitarity. Phys. Rev. D **1984**, *30*, 1547-1558. +---PAGE_BREAK--- + +72. Chanowitz, M.S.; Gaillard, M.K. The TeV Physics of Strongly Interacting W's and Z's. Nucl. Phys. B **1985**, 261, 379-431. + +73. Gounaris, G.; Kogerler, R.; Neufeld, H. Relationship Between Longitudinally Polarized Vector Bosons and their Unphysical Scalar Partners. Phys. Rev. D **1986**, *34*, 3257-3259. + +74. Bettinelli, D.; Ferrari, R.; Quadri, A. Of Higgs, Unitarity and other Questions. Proc. Steklov Inst. Math. **2011**, 272, 22-38. + +75. Aguilar, A.; Ibanez, D.; Mathieu, V.; Papavassiliou, J. Massless bound-state excitations and the Schwinger mechanism in QCD. Phys. Rev. D **2012**, *85*, doi:10.1103/PhysRevD.85.014018. + +76. Aguilar, A.; Binosi, D.; Papavassiliou, J. The dynamical equation of the effective gluon mass. Phys. Rev. D **2011**, *84*, doi:10.1103/PhysRevD.84.085026. + +77. Ibañez, D.; Papavassiliou, J. Gluon mass generation in the massless bound-state formalism. Phys. Rev. D **2013**, *87*, doi:10.1103/PhysRevD.87.034008. + +78. Higgs, P.W. Broken symmetries, massless particles and gauge fields. Phys. Lett. **1964**, *12*, 132-133. + +79. Higgs, P.W. Broken Symmetries and the Masses of Gauge Bosons. Phys. Rev. Lett. **1964**, *13*, 508-509. + +80. Englert, F.; Brout, R. Broken Symmetry and the Mass of Gauge Vector Mesons. Phys. Rev. Lett. **1964**, *13*, 321-323. + +81. Connes, A.; Kreimer, D. Renormalization in quantum field theory and the Riemann-Hilbert problem. 1. The Hopf algebra structure of graphs and the main theorem. Commun. Math. Phys. **2000**, *210*, 249-273. + +82. Connes, A.; Kreimer, D. Renormalization in quantum field theory and the Riemann-Hilbert problem. 2. 
The beta function, diffeomorphisms and the renormalization group. Commun. Math. Phys. **2001**, *216*, 215-241.

83. Ebrahimi-Fard, K.; Patras, F. Exponential renormalization. Ann. Henri Poincare **2010**, *11*, 943-971.

84. Zimmermann, W. Convergence of Bogolyubov's method of renormalization in momentum space. Commun. Math. Phys. **1969**, *15*, 208-234.

© 2014 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

Dynamical Relation between Quantum Squeezing and Entanglement in Coupled Harmonic Oscillator System

Lock Yue Chew ¹,* and Ning Ning Chung ²

¹ Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore

² Department of Physics, National University of Singapore, Singapore 117542, Singapore; E-Mail: phycnn@nus.edu.sg

* E-Mail: lockyue@ntu.edu.sg; Tel.: +65-6316-2968; +65-6316-6984.

Received: 27 February 2014; in revised form: 14 April 2014 / Accepted: 18 April 2014 / Published: 23 April 2014

**Abstract:** In this paper, we investigate the numerical and analytical relationship between the dynamically generated quadrature squeezing and entanglement within a coupled harmonic oscillator system. The dynamical relation between these two quantum features is observed to vary monotonically, such that an enhancement in entanglement is attained at a fixed squeezing for a larger coupling constant. Surprisingly, the maximum attainable values of these two quantum entities are found to consistently equal the squeezing and entanglement of the system ground state.
In addition, we demonstrate that the inclusion of a small anharmonic perturbation has the effect of modifying the squeezing *versus* entanglement relation into a nonunique form and also of extending the maximum squeezing to a value beyond that of the system ground state.

**Keywords:** quantum entanglement; squeezed state; coupled harmonic oscillators

PACS: 03.65.Ge, 31.15.MD

# 1. Introduction

Entanglement is a fundamental resource for non-classical tasks in the field of quantum information [1]. It has been shown to improve communication and computation capabilities via the notion of quantum dense coding [2], quantum teleportation [3], unconditionally secure quantum cryptographic protocols [4,5], and quantum algorithms for integer factorization [6]. For any quantum algorithm operating on pure states, it has been proven that the presence of multi-partite entanglement is necessary if the quantum algorithm is to offer an exponential speed-up over classical computation [7]. Note, however, that a non-zero value of entanglement might not be a necessary condition for the quantum computational speed-up of algorithms operating on mixed states [8]. In addition, in order to achieve these goals practically, it is necessary to maintain the entanglement within the quantum states, which are fragile against the decohering environment. One approach would be to employ an entangled state with as large an entanglement as possible, and the idea is that the production of such an entangled state could be tuned through the operation of quantum squeezing.

Indeed, the relation between quantum squeezing and quantum entanglement has been actively pursued in recent years [9–18]. Notably, the creation of entanglement has been shown experimentally to induce spin squeezing [9,10]. Such entanglement-induced squeezing has the important outcome of producing measuring instruments that go beyond the precision of current ones.
In addition, quantum squeezing is found to be able to induce, enhance and even preserve entanglement in decohering environments [11–13]. Previously, we have investigated the relation between the squeezing
---PAGE_BREAK---

and entanglement of the ground state of the coupled harmonic oscillator system [16,17]. The ground state entanglement entropy was found to increase monotonically with an increase in quadrature squeezing within this system. When a small anharmonic perturbing potential is added to the system, a further enhancement in quadrature squeezing is observed. While the entropy-squeezing curve shifts to the right in this case, we realized that the entanglement entropy is still a monotonically increasing function of quadrature squeezing.

In this paper, we have extended our earlier work discussed above by investigating the dynamical relation between the quadrature squeezing and the entanglement entropy of the coupled harmonic oscillator system. The coupled harmonic oscillator system has served as a useful paradigm for many physical systems, such as the field modes of electromagnetic radiation [19–21], the vibrations in molecular systems [22], and the formulation of the Lee model in quantum field theory [23]. It was shown that the coupled harmonic oscillator system possesses the symmetry of the Lorentz group $O(3, 3)$ or $SL(4, r)$ classically, and that of the symmetry $O(3, 2)$ or $Sp(4)$ quantum mechanically [24]. In addition, the physics of the coupled harmonic oscillator system can be conveniently represented by the mathematics of two-by-two matrices, which have played a role in clarifying the physical basis of entanglement [25]. In Section 2 of this paper, we first describe the coupled harmonic oscillator model. This is followed by a discussion of the relation between the dynamically generated squeezing and entanglement of the coupled oscillator system, which we have determined quantitatively via numerical computation.
In Section 3 of the paper, we present analytical results in support of the numerical results obtained in Section 2. Here, we illustrate how the problem can be solved in terms of two-by-two matrices. Then, in Section 4 of the paper, we study how the inclusion of anharmonicity can influence the relation between the dynamically generated squeezing and entanglement. Finally, we give our conclusion in Section 5 of the paper.

## 2. Dynamical Relation of Quantum Squeezing and Entanglement in Coupled Harmonic Oscillator System

The Hamiltonian of the coupled harmonic oscillator system is given as follows:

$$H = \frac{p_1^2}{2m_1} + \frac{1}{2}m_1\omega_1^2 x_1^2 + \frac{p_2^2}{2m_2} + \frac{1}{2}m_2\omega_2^2 x_2^2 + \lambda(x_2 - x_1)^2 \quad (1)$$

where $x_1$ and $x_2$ are the position co-ordinates, while $p_1$ and $p_2$ are the momenta of the oscillators. The interaction potential between the two oscillators is assumed to depend quadratically on the distance between the oscillators, and is proportional to the coupling constant $\lambda$. For simplicity, we have set $m_1 = m_2 = m$ and $\omega_1 = \omega_2 = \omega$. This Hamiltonian is commonly used to model physical systems such as vibrating molecules or the squeezed modes of the electromagnetic field. In fact, the model has been widely explored [26–28] and is commonly used to elucidate the properties of quantum entanglement in continuous variable systems [29–35].

Next, let us discuss the relation between the squeezing and entanglement of the lowest energy eigenstate of this coupled harmonic oscillator system. Note that

$$H |g\rangle = E_0 |g\rangle \quad (2)$$

with $|g\rangle$ being the ground state and $E_0$ being the lowest eigen-energy of the coupled oscillator system with Hamiltonian given by Equation (1).
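The lowest eigen-energy in Equation (2) can be checked numerically. In normal-mode coordinates, the centre-of-mass mode keeps frequency $\omega$ while the relative mode acquires frequency $\sqrt{\omega^2 + 4\lambda/m}$, so the exact zero-point energy is $E_0 = \frac{\hbar}{2}\left(\omega + \sqrt{\omega^2 + 4\lambda/m}\right)$. The sketch below (an illustration assuming numpy is available; units $\hbar = m = \omega = 1$, truncation size `nmax` and coupling `lam` are arbitrary choices) diagonalizes Equation (1) in a truncated Fock basis, after rewriting $x_i = (a_i + a_i^\dagger)/\sqrt{2}$:

```python
import numpy as np

# ground state of Equation (1) with hbar = m = omega = 1: rewriting
# x_i = (a_i + a_i†)/sqrt(2) gives H = a1†a1 + a2†a2 + 1 + (lam/2)((a1†+a1)-(a2†+a2))^2
nmax = 30                                      # Fock levels kept per oscillator
lam = 2.0                                      # coupling constant
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)    # annihilation operator, nmax x nmax
I = np.eye(nmax)
a1, a2 = np.kron(a, I), np.kron(I, a)
q = (a1 + a1.T) - (a2 + a2.T)
H = a1.T @ a1 + a2.T @ a2 + np.eye(nmax**2) + 0.5 * lam * q @ q
E0 = np.linalg.eigvalsh(H)[0]

# exact normal-mode zero-point energy: (omega + sqrt(omega^2 + 4 lam))/2
exact = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * lam))
assert abs(E0 - exact) < 1e-6
```

The agreement is excellent because the exact ground state populates high Fock levels only with exponentially small amplitudes, so the truncation error is far below the asserted tolerance.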
Entanglement between the two oscillators can be quantified by the von Neumann entropy:

$$S_{vN} = -\text{Tr}[\rho_l \ln \rho_l] \quad (3)$$

where $\rho_l$ is the reduced density matrix. For the squeezing parameter, we shall adopt the dimensionless definition:

$$S_x = -\ln \frac{\sigma_{x_1}}{\sigma_{x_1}^{(0)}} \quad (4)$$
---PAGE_BREAK---

with $\sigma_{x_1} = \sqrt{\langle x_1^2 \rangle - \langle x_1 \rangle^2}$ being the uncertainty associated with the first oscillator's position and the normalization constant $\sigma_{x_1}^{(0)} = \sqrt{\hbar/2m\omega}$ being the position uncertainty of the uncoupled harmonic oscillator. For simplicity, we shall evaluate only the position squeezing in the first oscillator.

Indeed, the position uncertainty squeezing and the entanglement entropy of the ground state of this system have been obtained analytically in previous studies [36,37] as follows:

$$S_x = -\ln \frac{\sqrt{\frac{\hbar}{2m\omega}\frac{1+\gamma}{2}}}{\sqrt{\frac{\hbar}{2m\omega}}} = -\ln \sqrt{\frac{1+\gamma}{2}} \quad (5)$$

where $\gamma = 1/\sqrt{1+4\lambda/m\omega^2}$; and

$$S_{vN} = \cosh^2\left(\frac{\ln\gamma}{4}\right) \ln\left[\cosh^2\left(\frac{\ln\gamma}{4}\right)\right] - \sinh^2\left(\frac{\ln\gamma}{4}\right) \ln\left[\sinh^2\left(\frac{\ln\gamma}{4}\right)\right] \quad (6)$$

As shown in Reference [17], by eliminating $\gamma$ between Equations (5) and (6), the relation between the squeezing parameter and the von Neumann entropy of the ground state of the coupled harmonic oscillators is obtained as follows:

$$S_{vN} = \frac{(\zeta + 1)^2}{4\zeta} \ln\left(\frac{(\zeta + 1)^2}{4\zeta}\right) - \frac{(\zeta - 1)^2}{4\zeta} \ln\left(\frac{(\zeta - 1)^2}{4\zeta}\right) \quad (7)$$

with

$$\zeta = \sqrt{2e^{-2S_x} - 1} \quad (8)$$

This relation is shown as a solid line in Figure 1.
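The elimination of $\gamma$ is easy to confirm: Equation (5) gives $e^{-2S_x} = (1+\gamma)/2$, so Equation (8) reduces to $\zeta = \sqrt{\gamma}$, and with $u = \ln\gamma/4$ one has $\cosh^2 u = (\zeta+1)^2/4\zeta$ and $\sinh^2 u = (\zeta-1)^2/4\zeta$, turning Equation (6) into Equation (7). A minimal numerical check (assuming numpy; $\hbar = m = \omega = 1$):

```python
import numpy as np

# verify that Equations (7)-(8) reproduce Equations (5)-(6) for several couplings
for lam in [0.75, 2.0, 3.75, 6.0]:
    gamma = 1.0 / np.sqrt(1.0 + 4.0 * lam)     # gamma = 1/sqrt(1 + 4 lam / (m w^2))

    Sx = -np.log(np.sqrt((1.0 + gamma) / 2.0))                 # Equation (5)
    u = np.log(gamma) / 4.0
    SvN = (np.cosh(u)**2 * np.log(np.cosh(u)**2)
           - np.sinh(u)**2 * np.log(np.sinh(u)**2))            # Equation (6)

    zeta = np.sqrt(2.0 * np.exp(-2.0 * Sx) - 1.0)              # Equation (8)
    SvN_from_Sx = ((zeta + 1)**2 / (4*zeta) * np.log((zeta + 1)**2 / (4*zeta))
                   - (zeta - 1)**2 / (4*zeta) * np.log((zeta - 1)**2 / (4*zeta)))  # Equation (7)

    assert np.isclose(SvN, SvN_from_Sx)
```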
**Figure 1.** A plot of the dynamical relation between entanglement and squeezing obtained numerically for the coupled harmonic oscillator system with the coupling constant $\lambda = 0.75$ (squares), 2 (triangles), 3.75 (circles) and 6 (crosses). Note that the ground state entanglement-squeezing curve given by Equation (7) is plotted as a solid curve for comparison. In addition, the values of the maximum attainable squeezing and entanglement for various $\lambda$ have been plotted as stars.

In this paper, we have gone beyond the static relation between squeezing and entanglement based on the stationary ground state. In particular, we have explored numerically the dynamical generation of squeezing and entanglement via the quantum time evolution, with the initial state being the tensor product of the vacuum states ($|0,0\rangle$) of the oscillators. Note that the obtained results
---PAGE_BREAK---

hold true for any initial coherent states ($|\alpha_1, \alpha_2\rangle$), since the entanglement dynamics of the coupled harmonic oscillator system is independent of the initial states [38]. In general, the system dynamics is either two-frequency periodic or quasi-periodic, depending on whether the ratio of the two frequencies, $f_1 = 1$ and $f_2 = \sqrt{1+4\lambda}$, is rational or irrational. By pairing the values of the squeezing parameter and the entanglement entropy at the same time point within their respective dynamical evolutions, we obtained the dynamical relations between squeezing and entanglement for the different coupling constants $\lambda = 0.75, 2, 3.75$ and 6, as shown in Figure 1. Interestingly, the results show a smooth monotonic increase of the dynamically generated entanglement entropy as the quadrature squeezing increases for each $\lambda$. In addition, the dynamically generated entanglement entropy is observed to be larger for a fixed squeezing as $\lambda$ increases.
It is surprising that the maximum attainable values of these two quantum entities determined dynamically are found to fall consistently on the ground state squeezing-entanglement relation given by Equations (7) and (8) for all values of $\lambda$. More importantly, this relation also serves as a bound on the entanglement entropy and squeezing that are generated dynamically. + +### 3. Analytical Derivation of the Dynamical Relation between Quantum Squeezing and Entanglement + +In this section, we shall perform an analytical study of the dynamical relationship between quantum squeezing and the associated entanglement production. We first express the Hamiltonian of the coupled harmonic oscillator system in its second-quantized form: + +$$H = a_1^\dagger a_1 + a_2^\dagger a_2 + 1 + \frac{\lambda}{2} \{(a_1^\dagger + a_1) - (a_2^\dagger + a_2)\}^2 \quad (9)$$ + +Then, the time evolution of the annihilation operator $a_j$ (as well as the creation operator $a_j^\dagger$) can be determined according to the following Heisenberg equation of motion: + +$$\frac{d}{dt} a_j = \frac{1}{i} [a_j, H] \quad (10)$$ + +From this, we obtain: + +$$\frac{d}{dt} \tilde{a} = A \tilde{a} \quad (11)$$ + +with $\tilde{a} = (a_1 \; a_1^\dagger \; a_2 \; a_2^\dagger)^T$ and + +$$A = \begin{pmatrix} B & C \\ C & B \end{pmatrix} \quad (12)$$ + +Note that + +$$B = i \begin{pmatrix} -(1+\lambda) & -\lambda \\ \lambda & 1+\lambda \end{pmatrix} \quad (13)$$ + +and + +$$C = i \begin{pmatrix} \lambda & \lambda \\ -\lambda & -\lambda \end{pmatrix} \quad (14)$$ + +Due to the symmetry of the coupled oscillator system, the matrix $A$ is symmetric when viewed as a two-by-two block matrix, although it is not symmetric as a full four-by-four matrix.
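The block structure of $A$ can be verified numerically: the two-by-two block symmetry decouples $A$ into $B + C$ and $B - C$, whose spectra are $\pm i f_1$ and $\pm i f_2$, the two mode frequencies quoted above. A small sketch of ours (the value of $\lambda$ is arbitrary):

```python
import numpy as np

lam = 0.75                                                # arbitrary coupling constant
B = 1j * np.array([[-(1 + lam), -lam], [lam, 1 + lam]])   # Eq (13)
C = 1j * np.array([[lam, lam], [-lam, -lam]])             # Eq (14)
A = np.block([[B, C], [C, B]])                            # Eq (12)

# the eigenvalues of A are +/- i f1 and +/- i f2 = +/- i sqrt(1 + 4 lam)
freqs = np.sort(np.linalg.eigvals(A).imag)
Om = np.sqrt(1 + 4 * lam)
assert np.allclose(freqs, [-Om, -1.0, 1.0, Om])
```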
This symmetric property enables a simple evaluation of the time-dependent annihilation and creation operators of the oscillators: + +$$\tilde{a}(t) = F\tilde{a}(0) \quad (15)$$ + +where + +$$F = \frac{1}{2} \begin{pmatrix} J e^{D_1 t} J + K e^{D_2 t} K^{-1} & J e^{D_1 t} J - K e^{D_2 t} K^{-1} \\ J e^{D_1 t} J - K e^{D_2 t} K^{-1} & J e^{D_1 t} J + K e^{D_2 t} K^{-1} \end{pmatrix} \quad (16)$$ +---PAGE_BREAK--- + +$$J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad (17)$$ + +$$D_1 = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \qquad (18)$$ + +$$D_2 = \begin{pmatrix} i\Omega & 0 \\ 0 & -i\Omega \end{pmatrix} \qquad (19)$$ + +and + +$$K = \begin{pmatrix} 1 & \beta \\ \beta & 1 \end{pmatrix} \qquad (20)$$ + +with $\Omega = f_2 = \sqrt{1+4\lambda}$ and $\beta = (1+\Omega)/(1-\Omega)$. Note that at $t = 0$ we have $J e^{D_1 t} J = K e^{D_2 t} K^{-1} = I$, so that $F(0)$ reduces to the identity, as required. We then have: + +$$a_1(t) = \left(\frac{1}{2}e^{-it} - \eta_1 + \eta_2\right) a_1(0) + \eta_3 a_1^\dagger(0) + \left(\frac{1}{2}e^{-it} + \eta_1 - \eta_2\right) a_2(0) - \eta_3 a_2^\dagger(0) \qquad (21)$$ + +$$a_1^\dagger(t) = -\eta_3 a_1(0) + \left(\frac{1}{2}e^{it} - \eta_1^* + \eta_2^*\right) a_1^\dagger(0) + \eta_3 a_2(0) + \left(\frac{1}{2}e^{it} + \eta_1^* - \eta_2^*\right) a_2^\dagger(0) \qquad (22)$$ + +$$a_2(t) = \left(\frac{1}{2}e^{-it} + \eta_1 - \eta_2\right) a_1(0) - \eta_3 a_1^\dagger(0) + \left(\frac{1}{2}e^{-it} - \eta_1 + \eta_2\right) a_2(0) + \eta_3 a_2^\dagger(0) \qquad (23)$$ + +$$a_2^\dagger(t) = \eta_3 a_1(0) + \left(\frac{1}{2}e^{it} + \eta_1^* - \eta_2^*\right) a_1^\dagger(0) - \eta_3 a_2(0) + \left(\frac{1}{2}e^{it} - \eta_1^* + \eta_2^*\right) a_2^\dagger(0) \qquad (24)$$ + +where + +$$\eta_1 = \frac{(1-\Omega)^2}{8\Omega} e^{i\Omega t}$$ + +$$\eta_2 = \frac{(1+\Omega)^2}{8\Omega} e^{-i\Omega t}$$ + +$$\eta_3 = \frac{i(1-\Omega)(1+\Omega)}{4\Omega} \sin(\Omega t)$$ + +With these results, we are now ready to determine the analytical expressions of both the quantum entanglement and squeezing against time.
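Two sanity checks on Equations (21)–(24) can be carried out numerically: at $t = 0$ the map must reduce to the identity, and the canonical commutator $[a_1(t), a_1^\dagger(t)] = 1$ must be preserved for all times. A sketch of ours, with an arbitrary $\lambda$:

```python
import numpy as np

lam = 3.75
Om = np.sqrt(1 + 4 * lam)

def a1_coeffs(t):
    """Coefficients of a1(0), a1_dag(0), a2(0), a2_dag(0) in Eq (21)."""
    eta1 = (1 - Om) ** 2 / (8 * Om) * np.exp(1j * Om * t)
    eta2 = (1 + Om) ** 2 / (8 * Om) * np.exp(-1j * Om * t)
    eta3 = 1j * (1 - Om) * (1 + Om) / (4 * Om) * np.sin(Om * t)
    c1 = 0.5 * np.exp(-1j * t) - eta1 + eta2
    c3 = 0.5 * np.exp(-1j * t) + eta1 - eta2
    return np.array([c1, eta3, c3, -eta3])

# at t = 0 the evolution reduces to the identity: a1(0) -> a1(0)
assert np.allclose(a1_coeffs(0.0), [1, 0, 0, 0])

# [a1(t), a1_dag(t)] = |c1|^2 - |c2|^2 + |c3|^2 - |c4|^2 = 1 at all times
for t in np.linspace(0.0, 10.0, 50):
    c1, c2, c3, c4 = a1_coeffs(t)
    assert np.isclose(abs(c1) ** 2 - abs(c2) ** 2 + abs(c3) ** 2 - abs(c4) ** 2, 1.0)
```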
For entanglement, we shall employ the criterion developed by Duan *et al.* [39] for quantification since it simplifies the analytical expression while remaining valid as a measure of entanglement in coupled harmonic oscillator systems. According to this criterion, as long as + +$$S_D = 2 - (\Delta u)^2 - (\Delta v)^2 > 0 \qquad (25)$$ + +the state of the quantum system is entangled. Note that $u = x_1 + x_2$ and $v = p_1 - p_2$ are two EPR-type operators, whereas $\Delta u$ and $\Delta v$ are the corresponding quantum fluctuations. This allows us to express the entanglement measure $S_D$ as follows: + +$$S_D(t) = 2\left(\langle a_1^\dagger a_1 \rangle - \langle a_1^\dagger \rangle \langle a_1 \rangle + \langle a_2^\dagger a_2 \rangle - \langle a_2^\dagger \rangle \langle a_2 \rangle + \langle a_1^\dagger a_2^\dagger \rangle - \langle a_1^\dagger \rangle \langle a_2^\dagger \rangle + \langle a_1 a_2 \rangle - \langle a_1 \rangle \langle a_2 \rangle\right) \qquad (26)$$ + +Note that the short form $\langle O \rangle$ used in Equation (26) implies $\langle\alpha_1, \alpha_2|O(t)|\alpha_1, \alpha_2\rangle$, where $|\alpha_1, \alpha_2\rangle$ represents a tensor product of arbitrary initial coherent states. Recall that the subsequent results are independent of +---PAGE_BREAK--- + +the initial states, as mentioned in the last section. After substituting Equations (21)–(24) into Equation (26), we obtain the analytical expression of entanglement against time: + +$$S_D(t) = (\Omega^2 - 1) \sin^2 \Omega t \quad (27)$$ + +In coupled harmonic oscillator systems, $S_D$ has a unique monotonic relation with $S_{vN}$ (see Figure 2).
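The substitution of Equations (21)–(24) into Equation (26) can be replayed numerically for the vacuum initial state, where all single-operator expectations vanish. The following sketch of ours assembles the surviving vacuum expectation values from the operator coefficients and recovers Equation (27):

```python
import numpy as np

lam = 0.75
Om = np.sqrt(1 + 4 * lam)
t = np.linspace(0.0, 2 * np.pi, 201)

eta1 = (1 - Om) ** 2 / (8 * Om) * np.exp(1j * Om * t)
eta2 = (1 + Om) ** 2 / (8 * Om) * np.exp(-1j * Om * t)
eta3 = 1j * (1 - Om) * (1 + Om) / (4 * Om) * np.sin(Om * t)
c1 = 0.5 * np.exp(-1j * t) - eta1 + eta2      # coefficient of a1(0) in Eq (21)
c3 = 0.5 * np.exp(-1j * t) + eta1 - eta2      # coefficient of a2(0) in Eq (21)

# vacuum expectation values assembled from Eqs (21)-(24)
n_occ = 2 * np.abs(eta3) ** 2                 # <a1+ a1> = <a2+ a2>
a1a2 = c1 * (-eta3) + c3 * eta3               # <a1 a2>; <a1+ a2+> is its conjugate
S_D = 2 * (2 * n_occ + 2 * a1a2.real)         # Eq (26), with <a_j> = 0 for the vacuum

assert np.allclose(S_D, (Om ** 2 - 1) * np.sin(Om * t) ** 2)   # Eq (27)
```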
For squeezing, we have + +$$ +\begin{aligned} +S_x(t) &= -\ln \sqrt{\frac{\langle x_1^2 \rangle - \langle x_1 \rangle^2}{0.5}} \\ +&= -\ln \sqrt{\langle a_1^{\dagger 2} \rangle - \langle a_1^{\dagger} \rangle^2 + \langle a_1^2 \rangle - \langle a_1 \rangle^2 + \langle a_1^{\dagger} a_1 \rangle - \langle a_1^{\dagger} \rangle \langle a_1 \rangle + \langle a_1 a_1^{\dagger} \rangle - \langle a_1 \rangle \langle a_1^{\dagger} \rangle} +\end{aligned} +\quad (28) $$ + +Then, by substituting Equations (21)–(24) into Equation (28) as before, we obtain the analytical expression of squeezing against time: + +$$S_x(t) = -\ln \sqrt{1 - \frac{\Omega^2 - 1}{2\Omega^2} \sin^2 \Omega t} \quad (29)$$ + +We can also obtain an analytical relation between $S_D$ and $S_x$ by substituting Equation (27) into Equation (29) with some rearrangement: + +$$S_D = 2\Omega^2 (1 - e^{-2S_x}) \quad (30)$$ + +It is important to note that $S_x$ can only span the range of values $0 \le S_x \le S_x^{(m)}$, where $S_x^{(m)} = -\ln\sqrt{(\Omega^2+1)/2\Omega^2}$. Furthermore, for a coupled harmonic oscillator system with a fixed value of $\lambda$, the dynamically generated squeezing can be higher than the squeezing in the system's ground state. The analytical result given by Equation (30) is plotted in Figure 3 for $\lambda = 0.75, 2, 3.75, 6$ and 10, with each curve beginning at $S_x = 0$, $S_D = 0$ and ending at $S_x = S_x^{(m)}$, $S_D = S_D^{(m)} = \Omega^2 - 1$. In fact, the set of end points given by $S_x = S_x^{(m)}$, $S_D = S_D^{(m)}$ gives rise to the solid curve in Figure 3. Specifically, the maximum entanglement and the maximum squeezing parameter are related as follows: + +$$S_D^{(m)} = \frac{1 - \zeta^2}{\zeta^2} \quad (31)$$ + +with + +$$\zeta = \sqrt{2e^{-2S_x^{(m)}} - 1} \quad (32)$$ + +Note that Equation (32) is the same as Equation (8), and Equation (31) corresponds to the ground state solid curve of Figure 1.
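The same vacuum bookkeeping verifies Equations (29) and (30) together: computing $S_x(t)$ from the operator coefficients and comparing it with the closed forms. A sketch of ours, with an arbitrary $\lambda$:

```python
import numpy as np

lam = 2.0
Om = np.sqrt(1 + 4 * lam)
t = np.linspace(0.0, 2 * np.pi, 201)

eta1 = (1 - Om) ** 2 / (8 * Om) * np.exp(1j * Om * t)
eta2 = (1 + Om) ** 2 / (8 * Om) * np.exp(-1j * Om * t)
eta3 = 1j * (1 - Om) * (1 + Om) / (4 * Om) * np.sin(Om * t)
c1 = 0.5 * np.exp(-1j * t) - eta1 + eta2
c3 = 0.5 * np.exp(-1j * t) + eta1 - eta2

n_occ = 2 * np.abs(eta3) ** 2                 # <a1+ a1> for the vacuum
a1_sq = eta3 * (c1 - c3)                      # <a1^2> = c1 c2 + c3 c4, with c2 = eta3, c4 = -eta3
S_x = -0.5 * np.log(1 + 2 * n_occ + 2 * a1_sq.real)            # Eq (28) for the vacuum
S_D = (Om ** 2 - 1) * np.sin(Om * t) ** 2                      # Eq (27)

assert np.allclose(S_x, -0.5 * np.log(1 - (Om ** 2 - 1) / (2 * Om ** 2) * np.sin(Om * t) ** 2))  # Eq (29)
assert np.allclose(S_D, 2 * Om ** 2 * (1 - np.exp(-2 * S_x)))  # Eq (30)
```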
This allows us to deduce the monotonic relation between $S_D$ and $S_{vN}$, which is obtained by evaluating the relation between $S_D$ of the maximally entangled state and $S_{vN}$ of the ground state at an equal amount of squeezing. Indeed, the derived relationship, shown as a solid line in Figure 2, is valid because the link between $S_D(t)$ and $S_{vN}(t)$ is found to be expressible by precisely the same curve. Thus, we have concretely affirmed the one-to-one correspondence between $S_D$ and $S_{vN}$ through this relationship. More importantly, we have clearly demonstrated that the maximum entanglement attained dynamically is the same as the degree of entanglement of a ground state with the same squeezing. +---PAGE_BREAK--- + +**Figure 2.** This plot shows the monotonic relation between $S_D$ and $S_{vN}$ in coupled harmonic oscillator systems. $S_D(t)$ and $S_{vN}(t)$ are plotted as squares ($\lambda = 0.75$), triangles ($\lambda = 2$), circles ($\lambda = 3.75$) and crosses ($\lambda = 6$). The relation between the ground state von Neumann entropy given by $S_{vN} = \frac{(\zeta+1)^2}{4\zeta} \ln\left(\frac{(\zeta+1)^2}{4\zeta}\right) - \frac{(\zeta-1)^2}{4\zeta} \ln\left(\frac{(\zeta-1)^2}{4\zeta}\right)$ and the maximum dynamically generated entanglement given by $S_D^{(m)} = \frac{1-\zeta^2}{\zeta^2}$ is plotted as a solid curve. Note that both $S_{vN}$ and $S_D^{(m)}$ are functions of the squeezing parameter $S_x$ and $\zeta = \sqrt{2e^{-2S_x} - 1}$. + +**Figure 3.** A plot of the dynamical relation between entanglement and squeezing given by Equation (30) for the coupled harmonic oscillator system. The relation depends on $\lambda$, and the curves from top to bottom correspond to $\lambda = 10, 6, 3.75, 2,$ and $0.75$, respectively. Note that the thick solid curve represents the values of the maximum attainable squeezing and entanglement for the range $0 < \lambda < 10$.
+ +When projected into the $x_1 - p_2$ or $x_2 - p_1$ plane, the initial coherent state can be represented by a circular distribution with equal uncertainty in both the *x* and *p* directions. During the time evolution, the circular distribution is rotated and squeezed. As a result, squeezing and entanglement are generated such that the distribution becomes elliptical in the $x_1 - p_2$ or $x_2 - p_1$ plane, with the ellipse's major axis rotated away from the *x*- or *p*-axis, which creates entanglement. Squeezing and entanglement reach their maximum values at the same time, when the major axis of the elliptical distribution has rotated 45° away from the *x*- or *p*-axis. Note that at this point, the squeezing resides solely in the collective modes. On the other hand, as discussed in Reference [37], the ground state wave function of the coupled harmonic oscillator system is separable in its collective modes. In both cases, entanglement and squeezing are related uniquely as given by Equations (7) and (31). +---PAGE_BREAK--- + +## 4. Quantum Squeezing and Entanglement in Coupled Anharmonic Oscillator Systems + +Next, let us investigate the effect of including an anharmonic potential on the dynamical relation between squeezing and entanglement through the following Hamiltonian: + +$$ +H = \frac{p_1^2}{2m_1} + \frac{1}{2}m_1\omega_1^2 x_1^2 + \frac{p_2^2}{2m_2} + \frac{1}{2}m_2\omega_2^2 x_2^2 + \lambda(x_2 - x_1)^2 + \epsilon(x_1^4 + x_2^4) \quad (33) +$$ + +For simplicity, we consider only the quartic perturbation potential. For previous studies of entanglement in coupled harmonic oscillators with quartic perturbation, see Reference [40] and the references therein. Again, we choose the initial state to be the tensor product of the vacuum states. We then evolve the state numerically through the Hamiltonian given by Equation (33). For the numerical simulation, we consider only a small anharmonic perturbation, i.e., $\epsilon = 0.1$ and $0.2$.
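A rough illustration of this kind of simulation (our own sketch, not the authors' code) propagates the vacuum product state in a truncated Fock basis; the basis size, time step, and variable names below are our own choices, and a converged calculation requires a much larger basis:

```python
import numpy as np
from scipy.linalg import expm

M = 12                      # truncated Fock-basis size per mode (illustrative only)
lam, eps = 0.75, 0.1        # coupling and anharmonicity; m = omega = hbar = 1 assumed

a = np.diag(np.sqrt(np.arange(1, M)), 1)        # annihilation operator in the Fock basis
x = (a + a.T) / np.sqrt(2)                      # position quadrature
I = np.eye(M)
x1, x2 = np.kron(x, I), np.kron(I, x)
n1, n2 = np.kron(a.T @ a, I), np.kron(I, a.T @ a)

# second-quantized form of Eq (33): H = n1 + n2 + 1 + lam (x2 - x1)^2 + eps (x1^4 + x2^4)
H = (n1 + n2 + np.eye(M * M)
     + lam * (x2 - x1) @ (x2 - x1)
     + eps * (np.linalg.matrix_power(x1, 4) + np.linalg.matrix_power(x2, 4)))

psi0 = np.zeros(M * M); psi0[0] = 1.0           # |0,0> vacuum product state
psi = expm(-1j * H * 0.3) @ psi0                # evolve to an arbitrary time t = 0.3

C = psi.reshape(M, M)
rho1 = C @ C.conj().T                           # reduced density matrix of oscillator 1
p = np.linalg.eigvalsh(rho1); p = p[p > 1e-12]
S_vN = -np.sum(p * np.log(p))                   # entanglement entropy, Eq (3)
var_x1 = (psi.conj() @ x1 @ x1 @ psi).real - (psi.conj() @ x1 @ psi).real ** 2
S_x = -0.5 * np.log(var_x1 / 0.5)               # squeezing parameter, Eq (4)
```

Repeating the evolution over a grid of times and pairing $(S_x, S_{vN})$ reproduces the kind of scatter shown in Figures 4 and 5.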
Note that we have truncated the basis size at $M = 85$, at which the results are found to converge. + +With a small anharmonic perturbation, the dynamically generated entanglement entropy is no longer a smooth monotonically increasing function of the quadrature squeezing as before (see Figure 4). This implies that for coupled anharmonic oscillator systems, the dynamically generated degree of entanglement cannot be characterized through a measurement of the squeezing parameter. In addition, when the anharmonic potential is included, the maximum attainable squeezing is much enhanced. This effect is clearly shown in Figure 4, where we observe that the maximum dynamical squeezing extends far beyond the largest squeezing given by the coupled anharmonic oscillator system's ground state at different $\lambda$. In addition, as we increase the anharmonic perturbation from 0.1 to 0.2, we find that the maximum attainable squeezing continues to grow, extending even further beyond the largest squeezing given by the ground state of the coupled anharmonic oscillator system. + +**Figure 4.** The effect of anharmonicity ($\epsilon = 0.1$) on the dynamical relation between quadrature squeezing and entanglement. Note that we have employed the following parameters: (a) $\lambda = 0.75$; (b) $\lambda = 2$; (c) $\lambda = 3.75$; and (d) $\lambda = 6$. We have plotted the ground state entanglement-squeezing curve of the coupled anharmonic oscillator system with $\epsilon = 0.1$ as a solid curve for comparison. +---PAGE_BREAK--- + +**Figure 5.** The effect of anharmonicity ($\epsilon = 0.2$) on the dynamical relation between quadrature squeezing and entanglement. Note that we have employed the following parameters: (a) $\lambda = 0.75$, (b) $\lambda = 2$, (c) $\lambda = 3.75$, and (d) $\lambda = 6$. We have plotted the ground state entanglement-squeezing curve of the coupled anharmonic oscillator system with $\epsilon = 0.2$ as a solid curve for comparison. + +## 5.
Conclusions + +We have studied the dynamical generation of quadrature squeezing and entanglement for both coupled harmonic and anharmonic oscillator systems. Our numerical and analytical results show that the quantitative relation between the dynamically generated squeezing and entanglement in the coupled harmonic oscillator system is a monotonically increasing function. Such a monotonic relation vanishes, however, when a small anharmonic potential is added to the system. This result implies the possibility of characterizing the dynamically generated entanglement by means of squeezing in the case of the coupled harmonic oscillator system. In addition, we have uncovered the unexpected result that the maximum attainable entanglement and squeezing obtained dynamically match exactly the entanglement-squeezing relation of the ground state of the coupled harmonic oscillators. When an anharmonic potential is included, we found that the dynamically generated squeezing can be further enhanced. We perceive that this result may provide important insights into the construction of precision instruments that attempt to beat the quantum noise limit. + +**Acknowledgments:** L. Y. Chew would like to thank Y. S. Kim for the helpful discussion on this work during the ICSSUR 2013 conference held in Nuremberg, Germany. + +**Author Contributions:** All authors contributed equally to the theoretical analysis, numerical computation, and writing of the paper. + +**Conflicts of Interest:** The authors declare no conflict of interest. + +## References + +1. Nielsen, M.A.; Chuang, I.L. *Quantum Computation and Quantum Information*; Cambridge University Press: Cambridge, UK, 2000. +2. Bennett, C.H.; Wiesner, S.J. Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. *Phys. Rev. Lett.* **1992**, *69*, 2881–2884. +---PAGE_BREAK--- + +3. Bennett, C.H.; Brassard, G.; Crépeau, C.; Jozsa, R.; Peres, A.; Wootters, W.K.
Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. *Phys. Rev. Lett.* **1993**, *70*, 1895–1899. + +4. Bennett, C.H.; Brassard, G. Quantum cryptography: Public key distribution and coin tossing. In Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, IEEE Computer Society, New York, NY, USA, 1984; pp. 175–179. + +5. Ekert, A.K. Quantum cryptography based on Bell's theorem. *Phys. Rev. Lett.* **1991**, *67*, 661–663. + +6. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. *SIAM J. Comput.* **1997**, *26*, 1484–1509. + +7. Jozsa, R.; Linden, N. On the role of entanglement in quantum-computational speed-up. *Proc. R. Soc. Lond. A* **2003**, *459*, 2011–2032. + +8. Lanyon, B.P.; Barbieri, M.; Almeida, M.P.; White, A.G. Experimental quantum computing without entanglement. *Phys. Rev. Lett.* **2008**, *101*, 200501:1–200501:4. + +9. Sørensen, A.; Duan, L.M.; Cirac, J.I.; Zoller, P. Many-particle entanglement with Bose-Einstein condensates. *Nature* **2001**, *409*, 63–66. + +10. Bigelow, N. Squeezing entanglement. *Nature* **2001**, *409*, 27–28. + +11. Furuichi, S.; Mahmoud, A.A. Entanglement in a squeezed two-level atom. *J. Phys. A Math. Gen.* **2001**, *34*, 6851–6857. + +12. Xiang, S.; Shao, B.; Song, K. Quantum entanglement and nonlocality properties of two-mode Gaussian squeezed states. *Chin. Phys. B* **2009**, *18*, 418–425. + +13. Galve, F.; Pachón, L.A.; Zueco, D. Bringing entanglement to the high temperature limit. *Phys. Rev. Lett.* **2010**, *105*, doi:10.1103/PhysRevLett.105.180501. + +14. Ulam-Orgikh, D.; Kitagawa, M. Spin squeezing and decoherence limit in Ramsey spectroscopy. *Phys. Rev. A* **2001**, *64*, doi:10.1103/PhysRevA.64.052106. + +15. Wolf, M.M.; Eisert, J.; Plenio, M.B. Entangling power of passive optical elements. *Phys. Rev. Lett.* **2003**, *90*, 047904:1–047904:4. + +16. Chung, N.N.; Er, C.H.; Teo, Y.S.; Chew, L.Y.
Relation of the entanglement entropy and uncertainty product in ground states of coupled anharmonic oscillators. *Phys. Rev. A* **2010**, *82*, doi:10.1103/PhysRevA.82.014101. + +17. Chew, L.Y.; Chung, N.N. Quantum entanglement and squeezing in coupled harmonic and anharmonic oscillators systems. *J. Russ. Laser Res.* **2011**, *32*, 331–337. + +18. Er, C.H.; Chung, N.N.; Chew, L.Y. Threshold effect and entanglement enhancement through local squeezing of initial separable states in continuous-variable systems. *Phys. Scripta* **2013**, *87*, doi:10.1088/0031-8949/87/02/025001. + +19. Han, D.; Kim, Y.S.; Noz, M.E. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. *Phys. Rev. A* **1988**, *37*, 807–814. + +20. Han, D.; Kim, Y.S.; Noz, M.E. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. II. Quantitative analysis. *Phys. Rev. A* **1989**, *40*, 902–912. + +21. Han, D.; Kim, Y.S.; Noz, M.E. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. III. Two-mode states. *Phys. Rev. A* **1990**, *41*, 6233–6244. + +22. Wilson, E.B.; Decius, J.C.; Cross, P.C. *Molecular Vibrations*; McGraw-Hill: New York, NY, USA, 1955. + +23. Schweber, S.S. *An Introduction to Relativistic Quantum Field Theory*; Row-Peterson: New York, NY, USA, 1961. + +24. Han, D.; Kim, Y.S.; Noz, M.E. O(3,3)-like symmetries of coupled harmonic oscillators. *J. Math. Phys.* **1995**, *36*, 3940–3954. + +25. Kim, Y.S.; Noz, M.E. Coupled oscillators, entangled oscillators, and Lorentz-covariant harmonic oscillators. *J. Opt. B Quantum Semiclass. Opt.* **2005**, *7*, S458–S467. + +26. Eisert, J.; Plenio, M.B.; Bose, S.; Hartley, J. Towards quantum entanglement in nanoelectromechanical devices. *Phys. Rev. Lett.* **2004**, *93*, 190402:1–190402:4. + +27. Joshi, C.; Jonson, M.; Öhberg, P.; Andersson, E. Constructive role of dissipation for driven coupled bosonic modes. *Phys. Rev. A* **2013**, *87*, 062304:1–062304:4. + +28.
Joshi, C.; Hutter, A.; Zimmer, F.E.; Jonson, M.; Andersson, E.; Öhberg, P. Quantum entanglement of nanocantilevers. *Phys. Rev. A* **2010**, *82*, doi:10.1103/PhysRevA.82.043846. + +29. Ikeda, S.; Fillaux, F. Incoherent elastic-neutron-scattering study of the vibrational dynamics and spin-related symmetry of protons in the KHCO₃ crystal. *Phys. Rev. B* **1999**, *59*, 4134–4145. + +Symmetry **2014**, *6*, 295–307 +---PAGE_BREAK--- + +30. Fillaux, F. Quantum entanglement and nonlocal proton transfer dynamics in dimers of formic acid and analogues. *Chem. Phys. Lett.* **2005**, *408*, 302–306. + +31. Audenaert, K.; Eisert, J.; Plenio, M.B.; Werner, R.F. Symmetric qubits from cavity states. *Phys. Rev. A* **2002**, *66*, 042327:1–042327:6. + +32. Martina, L.; Soliani, G. Hartree-Fock approximation and entanglement. Available online: http://arxiv.org/abs/0704.3130 (accessed on 18 April 2014). + +33. Chung, N.N.; Chew, L.Y. Energy eigenvalues and squeezing properties of general systems of coupled quantum anharmonic oscillators. *Phys. Rev. A* **2007**, *76*, doi:10.1103/PhysRevA.76.032113. + +34. Chung, N.N.; Chew, L.Y. Two-step approach to the dynamics of coupled anharmonic oscillators. *Phys. Rev. A* **2009**, *80*, doi:10.1103/PhysRevA.80.012103. + +35. Jellal, A.; Madouri, F.; Merdaci, A. Entanglement in coupled harmonic oscillators studied using a unitary transformation. *J. Stat. Mech.* **2011**, doi:10.1088/1742-5468/2011/09/P09015. + +36. McDermott, R.M.; Redmount, I.H. Coupled classical and quantum oscillators. Available online: http://arxiv.org/abs/quant-ph/0403184 (accessed on 18 April 2014). + +37. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative example of Feynman's rest of the universe. *Am. J. Phys.* **1999**, *67*, 61–66. + +38. Chung, N.N.; Chew, L.Y. Dependence of entanglement dynamics on the global classical dynamical regime. *Phys. Rev. E* **2009**, *80*, 016204:1–016204:7. + +39. Duan, L.M.; Giedke, G.; Cirac, J.I.; Zoller, P.
Inseparable criterion for continuous variable systems. *Phys. Rev. Lett.* **2000**, *84*, 2722–2725. + +40. Joshi, C.; Jonson, M.; Andersson, E.; Öhberg, P. Quantum entanglement of anharmonic oscillators. *J. Phys. B At. Mol. Opt. Phys.* **2011**, *44*, doi:10.1088/0953-4075/44/24/245503. + +© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access +article distributed under the terms and conditions of the Creative Commons Attribution +(CC BY) license (http://creativecommons.org/licenses/by/4.0/). +---PAGE_BREAK--- + +Article + +Closed-Form Expressions for the Matrix Exponential + +F. De Zela + +Departamento de Ciencias, Sección Física, Pontificia Universidad Católica del Perú, Ap.1761, Lima L32, Peru; +E-Mail: fdezela@pucp.edu.pe; Tel.: +51-1-6262000; Fax: +51-1-6262085 + +Received: 28 February 2014; in revised form: 16 April 2014 / Accepted: 17 April 2014 / Published: 29 April 2014 + +**Abstract:** We discuss a method to obtain closed-form expressions of $f(A)$, where $f$ is an analytic function and $A$ a square, diagonalizable matrix. The method exploits the Cayley-Hamilton theorem and has been previously reported using tools that are perhaps not sufficiently appealing to physicists. Here, we derive the results on which the method is based by using tools most commonly employed by physicists. We show the advantages of the method in comparison with standard approaches, especially when dealing with the exponential of low-dimensional matrices. In contrast to other approaches that require, e.g., solving differential equations, the present method only requires the construction of the inverse of the Vandermonde matrix. We show the advantages of the method by applying it to different cases, mostly restricting the calculational effort to the handling of two-by-two matrices. + +**Keywords:** matrix exponential; Cayley-Hamilton theorem; two-by-two representations; Vandermonde matrices + +PACS: 02.30.Tb, 42.25.Ja, 03.65.Fd + +# 1. 
Introduction + +Physicists are quite often faced with the task of calculating $f(A)$, where $A$ is an $n \times n$ matrix and $f$ an analytic function whose series expansion generally contains infinitely many terms. The most prominent example corresponds to $\exp A$. Usual approaches to calculate $f(A)$ consist in either truncating its series expansion, or else finding a way to "re-summate" terms so as to get a closed-form expression. There is yet another option that can be advantageously applied when dealing with an $n \times n$ matrix, and which derives from the Cayley-Hamilton theorem [1]. This theorem states that every square matrix satisfies its characteristic equation. As a consequence of this property, any series expansion can be written in terms of the first $n$ powers of $A$. While this result is surely very well known among mathematicians, it appears to be not so widespread within the physicists' community [2]. Indeed, most textbooks on quantum mechanics still resort to the Baker-Hausdorff lemma or to special properties of the involved matrices, in order to obtain closed-form expressions of series expansions [3–5]. This happens even when dealing with low-dimensional matrices, i.e., in cases in which exploiting the Cayley-Hamilton theorem would straightforwardly lead to the desired result. Such a state of affairs probably reflects a lack of literature on the subject that is more palatable to physicists than to mathematicians. The present paper aims at dealing with the subject matter by using language and tools that are most familiar to physicists. No claim of priority is made; our purpose is to show how well the derived results fit into the repertoire of tools that physicists routinely employ. To this end, we start addressing the simple, yet rich enough case of $2 \times 2$ matrices. + +An archetypical example is the Hamiltonian $H = k\sigma \cdot B$ that rules the dynamics of a spin-1/2 particle subjected to a magnetic field $B$. 
Here, $\sigma = (\sigma_x, \sigma_y, \sigma_z)$ denotes the Pauli spin operator and $k$ is a parameter that provides the above expression with appropriate units. The upsurge of research in several areas of physics—most notably in quantum optics—involving two-level systems, has made a +---PAGE_BREAK--- + +Hamiltonian of the above type quite ubiquitous. Indeed, the dynamics of any two-level system is ruled by a Hamiltonian that can be written in such a form. Hence, one often requires an explicit, closed-form expression for quantities such as $\exp(i\alpha n \cdot \sigma)$, where $n$ is a unit vector. This closed-form expression can be obtained as a generalization of Euler's formula $\exp i\alpha = \cos \alpha + i \sin \alpha$. It reads + +$$ \exp(i\alpha n \cdot \sigma) = \cos \alpha I + i \sin \alpha n \cdot \sigma \quad (1) $$ + +with $I$ denoting the identity operator. + +Let us recall how most textbooks of quantum mechanics proceed to demonstrate Equation (1) (see, e.g., [3–5]). The demonstration starts by writing the series expansion $\exp A = \sum_k A^k/k!$ for the case $A = i\alpha n \cdot \sigma$. Next, one invokes the following relationship: + +$$ (a \cdot \sigma)(b \cdot \sigma) = (a \cdot b)I + i(a \times b) \cdot \sigma \quad (2) $$ + +whose proof rests on $\sigma_i \sigma_j = \delta_{ij}I + i\epsilon_{ijk}\sigma_k$ (summation over repeated indices being understood). Equation (2) implies that $(n \cdot \sigma)^{2n} = I$, and hence $(n \cdot \sigma)^{2n+1} = n \cdot \sigma$.
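Equation (2) is easy to confirm numerically for arbitrary real vectors; the following sketch of ours does so with explicit Pauli matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])              # the Pauli vector

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)   # two arbitrary real vectors

a_dot_sigma = np.einsum('i,ijk->jk', a, sigma)  # a . sigma
b_dot_sigma = np.einsum('i,ijk->jk', b, sigma)  # b . sigma
lhs = a_dot_sigma @ b_dot_sigma
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
assert np.allclose(lhs, rhs)                    # Eq (2)
```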
This allows one to split the power series of $\exp(i\alpha n \cdot \sigma)$ into two parts, one constituted by even and the other by odd powers of $i\alpha n \cdot \sigma$: + +$$ \exp(i\alpha n \cdot \sigma) = \sum_{k=0}^{\infty} \frac{(i\alpha)^{2k}}{(2k)!} I + \sum_{k=0}^{\infty} \frac{(i\alpha)^{2k+1}}{(2k+1)!} n \cdot \sigma \quad (3) $$ + +By similarly splitting Euler's exponential, i.e., + +$$ \exp i\alpha = \cos \alpha + i \sin \alpha = \sum_{k=0}^{\infty} \frac{(i\alpha)^{2k}}{(2k)!} + \sum_{k=0}^{\infty} \frac{(i\alpha)^{2k+1}}{(2k+1)!} \quad (4) $$ + +one sees that Equation (3) is the same as Equation (1). + +Although this standard demonstration is a relatively simple one, it seems to be tightly related to the particular properties of the operator $n \cdot \sigma$, as well as to our ability to "re-summate" the series expansion so as to obtain a closed-form expression. There are several other cases [6] in which a relation similar to Equation (1) follows as a consequence of generalizing some properties of the group SU(2) and its algebra to the case SU(N), with $N > 2$. Central to these generalizations and to their associated techniques are both the Cayley-Hamilton theorem and the closure of the Lie algebra su(N) under commutation and anti-commutation of its elements [6]. As already recalled, the Cayley-Hamilton theorem states that any $n \times n$ matrix $A$ satisfies its own characteristic equation $p(A) = 0$, where + +$$ p(\lambda) = \mathrm{Det}(\lambda I - A) = \lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0 \quad (5) $$ + +is $A$'s characteristic polynomial. From $p(A) = 0$ it follows that any power $A^k$, with $k \ge n$, can be written in terms of the matrices $I = A^0, A, \dots, A^{n-1}$. Thus, any infinite series, such as the one corresponding to $\exp A$, may be rewritten in terms of the $n$ powers $A^0, A, \dots, A^{n-1}$. By exploiting this fact one can recover Equation (1).
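Equation (1) itself can be checked against a brute-force matrix exponential; a minimal sketch of ours, with an arbitrary unit vector and angle:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 2.0, 2.0]) / 3.0             # an arbitrary unit vector
alpha = 0.7
n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz

lhs = expm(1j * alpha * n_dot_sigma)            # the full matrix exponential
rhs = np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * n_dot_sigma   # Eq (1)
assert np.allclose(lhs, rhs)
```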
Reciprocally, given $A$, one can construct a matrix $B$ that satisfies $\exp B = A$, as shown by Dattoli, Mari and Torre [2]. These authors used essentially the same tools as we do here and presented some of the results that we will show below, but leaving them in an implicit form. The aforementioned authors belong to a group that has extensively dealt with our subject matter and beyond it [7], applying the present techniques to cases of current interest [8]. A somewhat different approach was followed by Leonard [9], who related the Cayley-Hamilton theorem to the solution of ordinary differential equations, in order to get closed expressions for the matrix exponential. This technique can be applied to all $n \times n$ matrices, including those that are not diagonalizable. Untidt and Nielsen [10] used this technique when addressing the groups SU(2), SU(3) and SU(4). Now, especially when addressing SU(2), Leonard's approach seems to be unnecessarily involved. This is because there is a trade-off between the wide applicability of the method and its tailoring to a +---PAGE_BREAK--- + +special case. When dealing with diagonalizable matrices, the present approach may prove more useful. +Thus, one exploits not only the Cayley-Hamilton theorem, but the diagonalizability of the involved +matrices as well. As a result, we are provided with a straightforward way to obtain closed-form +expressions for the matrix exponential. There are certainly many other ways that are either more +general [9,11] or else better suited to specific cases [12–16], but the present method is especially useful +for physical applications. + +The rest of the paper is organized as follows. First, we present Leonard's technique in a way that somewhat differs from the approach used in [9]. Thereafter, we show how to obtain Equation (1) by using a technique that can be generalized to diagonalizable $n \times n$ matrices, thereby introducing the method that is the main subject of the present work. 
As an illustration of this technique, we address some representative cases that were taken from the repertoire of classical mechanics, quantum electrodynamics, quantum optics and from the realm of Lorentz transformations. While the results obtained are known, their derivations should serve to demonstrate the versatility of the method. Let us stress once again that our aim has been to present this method by following an approach that could be appealing to most physicists, rather than to mathematically oriented readers. + +## 2. Closed Form of the Matrix Exponential via the Solution of Differential Equations + +Consider the coupled system of differential equations, given by + +$$Dx = \frac{dx}{dt} = Ax \quad (6)$$ + +with $x = (x_1, \dots, x_n)^T$ and $A$ a constant, $n \times n$ matrix. The matrix exponential appears in the solution of Equation (6), when we write it as $x(t) = e^{At}x(0)$. By differentiating this exponential successively, we obtain $D^k e^{At} = A^k e^{At}$. Hence, $p(D)e^{At} = (D^n + c_{n-1}D^{n-1} + \dots + c_1D + c_0)e^{At} = p(A)e^{At} = 0$, on account of $p(A) = 0$, i.e., the Cayley-Hamilton theorem. Now, as already noted, this implies that $e^{At}$ can be expressed in terms of $A^0, A, \dots, A^{n-1}$. Let us consider the matrix $M(t) := \sum_{k=0}^{n-1} y_k(t)A^k$, with the $y_k(t)$ being $n$ independent solutions of the differential equation $p(D)y(t) = 0$. That is, the $y_k(t)$ solve this equation for $n$ different initial conditions that will be conveniently chosen. We have thus that $p(D)M(t) = \sum_{k=0}^{n-1} p(D)y_k(t)A^k = 0$. Our goal is to choose the $y_k(t)$ so that $e^{At} = M(t)$. To this end, we note that $D^k e^{At}|_{t=0} = A^k e^{At}|_{t=0} = A^k$. That is, $e^{At}$ solves $p(D)\Phi(t) = 0$ with the initial conditions $\Phi(0) = A^0, \dots, D^{n-1}\Phi(0) = A^{n-1}$. It is then clear that we must take the following initial conditions: $D^j y_k(0) = \delta_{jk}$ with $j, k \in \{0, \dots, n-1\}$.
In such a case, $e^{At}$ and $M(t)$ satisfy both the same differential equation and the same initial conditions. Hence, $e^{At} = M(t)$.

Summarizing, the method consists in solving the $n$-th order differential equation $p(D)y(t) = 0$ for $n$ different sets of initial conditions. These conditions read $D^j y_k(0) = \delta_{jk}$, with $j, k \in \{0, \dots, n-1\}$. The matrix exponential is then given by $e^{At} = \sum_{k=0}^{n-1} y_k(t)A^k$. The standard procedure for solving $p(D)y(t) = 0$ requires finding the roots of the characteristic equation $p(\lambda) = 0$. Each root $\lambda$ with multiplicity $m$ contributes to the general solution a term $(a_0 + a_1 t + \dots + a_{m-1}t^{m-1})e^{\lambda t}$, the $a_k$ being fixed by the initial conditions. As already said, this method applies even when the matrix $A$ is not diagonalizable. However, when the eigenvalue problem for $A$ is solvable, another approach can be more convenient. We present such an approach in what follows.

## 3. Closed Form of the Matrix Exponential via the Solution of Algebraic Equations

Let us return to Equation (1). We will derive it anew, this time using standard tools of quantum mechanics. Consider a Hermitian operator $A$, whose eigenvectors satisfy $A |a_k\rangle = a_k |a_k\rangle$ and span the Hilbert space on which $A$ acts. Thus, the identity operator can be written as $I = \sum_k |a_k\rangle\langle a_k|$. One can also write $A = A \cdot I = \sum_k a_k |a_k\rangle\langle a_k|$. Moreover, $A^m = \sum_k a_k^m |a_k\rangle\langle a_k|$, from which it follows that

$$F(A) = \sum_k F(a_k) |a_k\rangle\langle a_k| \qquad (7)$$

---PAGE_BREAK---

for any function $F(A)$ that can be expanded in powers of $A$.

Let us consider the 2 × 2 case $A = n \cdot \sigma$, with $n$ a unit vector. This matrix has the eigenvalues $\pm 1$ and the corresponding eigenvectors $|n_{\pm}\rangle$. That is, $n \cdot \sigma |n_{\pm}\rangle = \pm |n_{\pm}\rangle$.
We need no more than this to get Equation (1). Indeed, from $n \cdot \sigma = |n_+\rangle\langle n_+| - |n_-\rangle\langle n_-|$ and $I = |n_+\rangle\langle n_+| + |n_-\rangle\langle n_-|$, it follows that $|n_{\pm}\rangle\langle n_{\pm}| = (I \pm n \cdot \sigma)/2$. Next, we consider $F(A) = \exp A = \sum_k \exp a_k |a_k\rangle\langle a_k|$, with $A = i\alpha n \cdot \sigma$. The operator $i\alpha n \cdot \sigma$ has eigenvectors $|n_{\pm}\rangle$ and eigenvalues $\pm i\alpha$. Thus,

$$
\begin{align}
\exp(i\alpha n \cdot \sigma) &= e^{i\alpha} |n_+\rangle \langle n_+| + e^{-i\alpha} |n_-\rangle \langle n_-| \tag{8} \\
&= \frac{1}{2} e^{i\alpha} (I + n \cdot \sigma) + \frac{1}{2} e^{-i\alpha} (I - n \cdot \sigma) \tag{9} \\
&= \left( \frac{e^{i\alpha} + e^{-i\alpha}}{2} \right) I + \left( \frac{e^{i\alpha} - e^{-i\alpha}}{2} \right) n \cdot \sigma \tag{10}
\end{align}
$$

which is Equation (1). Note that it has not been necessary to know the eigenvectors of $A = i\alpha n \cdot \sigma$. It is a matter of convenience whether one chooses to express $\exp(i\alpha n \cdot \sigma)$ in terms of the projectors $|n_{\pm}\rangle\langle n_{\pm}|$, or in terms of $I$ and $n \cdot \sigma$.

Let us now see how the above method generalizes when dealing with higher-dimensional spaces. To this end, we keep dealing with rotations. The operator $\exp(i\alpha n \cdot \sigma)$ is a rotation operator acting on spinor space. It is also an element of the group SU(2), whose generators can be taken as $X_i = -i\sigma_i/2$, $i = 1, 2, 3$. They satisfy the commutation relations $[X_i, X_j] = \epsilon_{ijk}X_k$ that characterize the rotation algebra. The rotation operator can also act on three-dimensional vectors $r$.
In this case, one often uses the following formula, which gives the rotated vector $r'$ in terms of the rotation angle $\theta$ and the unit vector $n$ that defines the rotation axis: + +$$ r' = r \cos\theta + n(n \cdot r)[1 - \cos\theta] + (n \times r)\sin\theta \quad (11) $$ + +Equation (11) is usually derived from vector algebra plus some geometrical considerations [17]. We can derive it, alternatively, by the method used above. To this end, we consider the rotation generators $X_i$ for three-dimensional space, which can be read off from the next formula, Equation (12). The rotation matrix is then obtained as $\exp(\theta n \cdot X)$, with + +$$ n \cdot X = \begin{pmatrix} 0 & -n_3 & n_2 \\ n_3 & 0 & -n_1 \\ -n_2 & n_1 & 0 \end{pmatrix} \equiv M \qquad (12) $$ + +It is straightforward to find the eigenvalues of the non-Hermitian, antisymmetric matrix $M$. They are 0 and $\pm i$. Let us denote the corresponding eigenvectors as $|n_0\rangle$ and $|n_{\pm}\rangle$, respectively. Similarly to the spin case, we have now + +$$ I = |n_+\rangle\langle n_+| + |n_-\rangle\langle n_-| + |n_0\rangle\langle n_0| \quad (13) $$ + +$$ M = i|n_+\rangle\langle n_+| - i|n_-\rangle\langle n_-| \quad (14) $$ + +We need a third equation, if we want to express the three projectors $|n_k\rangle⟨n_k|$, $k = \pm, 0$, in terms of $I$ and $M$. This equation is obtained by squaring $M$: + +$$ M^2 = -|n_+\rangle\langle n_+| - |n_-\rangle\langle n_-| \quad (15) $$ + +From Equations (13)–(15) we immediately obtain $|n_{\pm}\rangle⟨n_{\pm}| = (\mp iM - M^2)/2$, and $|n_0\rangle⟨n_0| = I + M^2$. 
Thus, we have

$$
\begin{align}
\exp(\theta M) &= e^{i\theta} |n_+\rangle\langle n_+| + e^{-i\theta} |n_-\rangle\langle n_-| + e^0 |n_0\rangle\langle n_0| && (16) \\
&= I + M \sin\theta + M^2 [1 - \cos\theta] && (17)
\end{align}
$$

---PAGE_BREAK---

By letting $M$, as given in Equation (12), act on $\mathbf{r} = (x,y,z)^T$, we easily see that $M\mathbf{r} = n \times \mathbf{r}$ and $M^2\mathbf{r} = n \times (n \times \mathbf{r}) = n(n \cdot \mathbf{r}) - \mathbf{r}$. Thus, on account of Equation (17), $\mathbf{r}' = \exp(\theta M)\mathbf{r}$ reads the same as Equation (11).

The general case is now clear. Consider an operator $A$ whose matrix representation is an $N \times N$ matrix. Once the eigenvalues $a_k$ of $A$ (which we assume nondegenerate) have been determined, we can write the $N$ equations $A^0 = I = \sum_k |a_k\rangle\langle a_k|$, $A = \sum_k a_k |a_k\rangle\langle a_k|$, $A^2 = \sum_{k=1}^{N} a_k^2 |a_k\rangle\langle a_k|$, ..., $A^{N-1} = \sum_{k=1}^{N} a_k^{N-1} |a_k\rangle\langle a_k|$, from which it is possible to obtain the $N$ projectors $|a_k\rangle\langle a_k|$ in terms of $I, A, A^2, \dots, A^{N-1}$. To this end, we must solve the system

$$
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
a_1 & a_2 & \cdots & a_N \\
a_1^2 & a_2^2 & \cdots & a_N^2 \\
\vdots & \vdots & \ddots & \vdots \\
a_1^{N-1} & a_2^{N-1} & \cdots & a_N^{N-1}
\end{pmatrix}
\begin{pmatrix}
|a_1\rangle\langle a_1| \\
|a_2\rangle\langle a_2| \\
|a_3\rangle\langle a_3| \\
\vdots \\
|a_N\rangle\langle a_N|
\end{pmatrix}
=
\begin{pmatrix}
I \\
A \\
A^2 \\
\vdots \\
A^{N-1}
\end{pmatrix}
\quad (18)
$$

The matrix in Equation (18), with components $V_{k,i} = a_i^{k-1}$ ($k,i \in \{1,\dots,N\}$), is a Vandermonde matrix, whose inverse can be explicitly given [18]. Once we have written the $|a_k\rangle\langle a_k|$ in terms of $I, A, \dots, A^{N-1}$, we can express any analytic function of $A$ in terms of these $N$ powers of $A$, in particular $\exp A = \sum_{k=1}^{N} \exp(a_k) |a_k\rangle\langle a_k|$.
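As a sanity check on this recipe, the following minimal numerical sketch (ours, not part of the original derivation) solves the Vandermonde system of Equation (18) for the projectors and reassembles $\exp A$ from the eigenvalues and the first $N$ powers of $A$; the helper name `expm_ch` is our own:

```python
import numpy as np
from scipy.linalg import expm

def expm_ch(A):
    """exp(A) for a diagonalizable matrix with nondegenerate eigenvalues,
    built from the eigenvalues alone via the Vandermonde system (18)."""
    a = np.linalg.eigvals(A)                     # eigenvalues a_1, ..., a_N
    N = len(a)
    powers = [np.linalg.matrix_power(A, k) for k in range(N)]  # I, A, ..., A^(N-1)
    V = np.vander(a, increasing=True).T          # V[k, i] = a_i**k, cf. Equation (18)
    U = np.linalg.inv(V)                         # U = V^(-1), cf. Equation (23)
    # Projectors |a_j><a_j| = sum_k U[j, k] A^k, cf. Equation (24)
    P = [sum(U[j, k] * powers[k] for k in range(N)) for j in range(N)]
    return sum(np.exp(a[j]) * P[j] for j in range(N))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # generically diagonalizable, nondegenerate
print(np.allclose(expm_ch(A), expm(A)))
```

Replacing `np.exp` by any other analytic function in the last line of `expm_ch` gives the corresponding $F(A)$ of Equation (7).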
For the case $N=4$, for instance, we have the following result:

$$
\begin{align}
|a_1\rangle\langle a_1| &= \frac{A^3 - A^2(a_2 + a_3 + a_4) + A(a_2a_3 + a_2a_4 + a_3a_4) - a_2a_3a_4}{(a_1 - a_2)(a_1 - a_3)(a_1 - a_4)} \tag{19} \\
|a_2\rangle\langle a_2| &= \frac{A^3 - A^2(a_1 + a_3 + a_4) + A(a_1a_3 + a_1a_4 + a_3a_4) - a_1a_3a_4}{(a_2 - a_1)(a_2 - a_3)(a_2 - a_4)} \tag{20} \\
|a_3\rangle\langle a_3| &= \frac{A^3 - A^2(a_1 + a_2 + a_4) + A(a_1a_2 + a_1a_4 + a_2a_4) - a_1a_2a_4}{(a_3 - a_1)(a_3 - a_2)(a_3 - a_4)} \tag{21} \\
|a_4\rangle\langle a_4| &= \frac{A^3 - A^2(a_1 + a_2 + a_3) + A(a_1a_2 + a_1a_3 + a_2a_3) - a_1a_2a_3}{(a_4 - a_1)(a_4 - a_2)(a_4 - a_3)} \tag{22}
\end{align}
$$

The general solution can be written in terms of the inverse of the Vandermonde matrix $V$. To this end, consider a system of equations that reads like (18), but with the operators entering the column vectors being replaced by numbers, i.e., $|a_j\rangle\langle a_j| \rightarrow w_j$, with $j = 1, \dots, N$, and $A^k \rightarrow q_{k+1}$, with $k = 0, \dots, N-1$. The solution of this system is given by $w_j = \sum_{k=1}^{N} U_{j,k} q_k$, with $U = V^{-1}$, the inverse of the Vandermonde matrix. This matrix inverse can be calculated as follows [18]. Let us define a polynomial $P_j(x)$ of degree $N-1$ as

$$
P_j(x) = \prod_{\substack{n=1 \\ n \neq j}}^{N} \frac{x-a_n}{a_j-a_n} = \sum_{k=1}^{N} U_{j,k} x^{k-1} \quad (23)
$$

The coefficients $U_{j,k}$ of the last equality follow from expanding the preceding expression and collecting equal powers of $x$. These $U_{j,k}$ are the components of $V^{-1}$. Indeed, setting $x = a_i$ and observing that $P_j(a_i) = \delta_{ji} = \sum_{k=1}^N U_{j,k} a_i^{k-1} = (UV)_{ji}$, we see that $U$ is the inverse of the Vandermonde matrix. The projectors $|a_j\rangle\langle a_j|$ in Equation (18) can thus be obtained by replacing $x \to A$ in Equation (23).
We get in this way the explicit solution

$$
|a_j\rangle\langle a_j| = \sum_{k=1}^{N} U_{j,k} A^{k-1} = \prod_{\substack{n=1 \\ n \neq j}}^{N} \frac{A - a_n}{a_j - a_n} \quad (24)
$$

The above expression can be inserted into Equation (7), if one wants to write $F(A)$ in terms of the first $N$ powers of $A$.

---PAGE_BREAK---

So far, we have assumed that the eigenvalues of $A$ are all nondegenerate. Let us now consider a matrix $M$ with degenerate eigenvalues. As before, we deal with a special case, from which the general formalism can be easily inferred. Let $M$ be of dimension four and with eigenvalues $\lambda_1$ and $\lambda_2$, both of which are two-fold degenerate. We can group the projectors as follows:

$$I = (|e_1\rangle \langle e_1| + |e_2\rangle \langle e_2|) + (|e_3\rangle \langle e_3| + |e_4\rangle \langle e_4|) \quad (25)$$

$$M = \lambda_1 (|e_1\rangle \langle e_1| + |e_2\rangle \langle e_2|) + \lambda_2 (|e_3\rangle \langle e_3| + |e_4\rangle \langle e_4|) \quad (26)$$

It is then easy to solve the above equations for the two projectors associated with the two eigenvalues. We obtain

$$|e_1\rangle \langle e_1| + |e_2\rangle \langle e_2| = \frac{\lambda_2 I - M}{\lambda_2 - \lambda_1} \quad (27)$$

$$|e_3\rangle \langle e_3| + |e_4\rangle \langle e_4| = \frac{\lambda_1 I - M}{\lambda_1 - \lambda_2} \quad (28)$$

We can then write

$$e^M = \frac{1}{\lambda_1 - \lambda_2} \left[ (\lambda_1 e^{\lambda_2} - \lambda_2 e^{\lambda_1}) I + (e^{\lambda_1} - e^{\lambda_2}) M \right] \quad (29)$$

We will need this result for the calculation of the unitary operator that defines the Foldy–Wouthuysen transformation, our next example. It is now clear that in the general case of degenerate eigenvalues, we can proceed similarly to the nondegenerate case, but solving $n < N$ equations.

## 4. Examples

Let us now see how the method works when applied to some well-known cases. Henceforth, we refer to the method as the Cayley–Hamilton (CH) method, for short.
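Equation (29) is easy to test numerically. The sketch below (ours, not the paper's) builds a hypothetical 4 × 4 matrix with two twofold-degenerate eigenvalues by a similarity transformation and compares Equation (29) with scipy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

# A 4x4 matrix with the twofold-degenerate eigenvalues l1 and l2
# (Q is an arbitrary invertible matrix; only the eigenvalues matter below).
l1, l2 = 0.7, -1.3
rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 4))
M = Q @ np.diag([l1, l1, l2, l2]) @ np.linalg.inv(Q)

# Equation (29): exp(M) from the two eigenvalues alone
expM_29 = ((l1 * np.exp(l2) - l2 * np.exp(l1)) * np.eye(4)
           + (np.exp(l1) - np.exp(l2)) * M) / (l1 - l2)
print(np.allclose(expM_29, expm(M)))
```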
Our aim is to show the simplicity of the required calculations, as compared with standard techniques. + +### 4.1. The Foldy–Wouthuysen Transformation + +The Foldy–Wouthuysen transformation is introduced [19] with the aim of decoupling the upper ($\varphi$) and lower ($\chi$) components of a bispinor $\psi = (\varphi, \chi)^T$ that solves the Dirac equation $i\hbar\partial\psi/\partial t = H\psi$, where $H = -i\hbar c\alpha \cdot \nabla + \beta mc^2$. Here, $\beta$ and $\alpha = (\alpha_x, \alpha_y, \alpha_z)$ are the 4 × 4 Dirac matrices: + +$$\beta = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \alpha = \begin{pmatrix} 0 & \sigma \\ \sigma & 0 \end{pmatrix} \qquad (30)$$ + +The Foldy–Wouthuysen transformation is given by $\psi' = U\psi$, with [19] + +$$U = \exp\left(\frac{\theta}{2}\beta\alpha \cdot p\right) \qquad (31)$$ + +We can calculate $U$ by applying Equation (29) for $M = \theta\beta\alpha \cdot p/2 = (\theta|p|/2)\beta\alpha \cdot n$, where $n = p/|p|$. +The eigenvalues of the 4 × 4 matrix $\beta\alpha \cdot n$ are $\pm i$, each being two-fold degenerate. This follows from +noting that the matrices + +$$\beta\alpha \cdot n = \begin{pmatrix} 0 & \sigma \cdot n \\ -\sigma \cdot n & 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \qquad (32)$$ +---PAGE_BREAK--- + +have the same eigenvalues. Indeed, because $(\sigma \cdot n)^2 = 1$, the above matrices share the characteristic equation $\lambda^2 + 1 = 0$. Their eigenvalues are thus $\pm i$. The eigenvalues of $M = \theta \beta \alpha \cdot p/2$ are then $\lambda_{1,2} = \pm i\theta |p|/2$. 
Replacing these values in Equation (29) we obtain

$$
\begin{align}
\exp\left(\frac{\theta}{2}\beta\alpha \cdot p\right) &= \frac{1}{i\theta|p|} \left[ \frac{i\theta|p|}{2} \left(e^{-i\theta|p|/2} + e^{i\theta|p|/2}\right) I + \left(e^{i\theta|p|/2} - e^{-i\theta|p|/2}\right) \frac{\theta|p|}{2} \beta\alpha \cdot n \right] \tag{33} \\
&= \cos\left(|p|\theta/2\right) + \sin\left(|p|\theta/2\right) \beta\alpha \cdot \frac{p}{|p|} \tag{34}
\end{align}
$$

The standard way to get this result requires expanding the exponential in a power series. Thereafter, one must exploit the commutation properties of $\alpha$ and $\beta$ in order to group together odd and even powers of $\theta$. This finally leads to the same closed-form expression that we arrived at in just a few steps.

## 4.2. Lorentz-Type Equations of Motion

The dynamics of several classical and quantum systems is ruled by equations that can be cast as differential equations for a three-vector $S$. These equations often contain terms of the form $\Omega \times$. An example of this is the ubiquitous equation

$$ \frac{dS}{dt} = \Omega \times S \qquad (35) $$

Equation (35) and its variants have recently been addressed by Babusci, Dattoli and Sabia [20], who applied operational methods to deal with them. Instead of writing Equation (35) in matrix form, these authors chose to exploit the properties of the vector product by defining the operator $\hat{\Omega} := \Omega \times$. The solution for the case $\partial\Omega/\partial t = 0$, for instance, was obtained by expanding $\exp(t\hat{\Omega})$ as an infinite series and using the cyclical properties of the vector product in order to get $S(t)$ in closed form. This form is nothing but Equation (11) with the replacements $r' \rightarrow S(t)$, $r \rightarrow S(0)$ and $\theta \rightarrow \Omega t$, where $\Omega := |\Omega|$. We obtained Equation (11) without expanding the exponential and without using any cyclic properties.
Our solution follows from writing Equation (35) in matrix form, i.e.,

$$ \frac{dS}{dt} = \Omega MS \qquad (36) $$

where $M$ is given by Equation (12) with $n = \Omega/\Omega$. The solution $S(t) = \exp(M\Omega t)S(0)$ is then easily written in closed form by applying the CH-method, as in Equation (11). The advantages of this method show up even more sharply when dealing with some extensions of Equation (36). Consider, e.g., the non-homogeneous version of Equation (35):

$$ \frac{dS}{dt} = \Omega \times S + N = \Omega MS + N \qquad (37) $$

This is the form taken by the Lorentz equation of motion when the electromagnetic field is given by scalar and vector potentials reading $\Phi = -E \cdot r$ and $A = B \times r/2$, respectively [20]. The solution of Equation (37) is easily obtained by acting on both sides with the "integrating (operator-valued) factor" $\exp(-\Omega Mt)$. One then readily obtains, for the initial condition $S(0) = S_0$,

$$ S(t) = e^{\Omega M t} S_0 + \int_0^t e^{\Omega M(t-s)} N ds \qquad (38) $$

The matrix exponentials in Equation (38) can be expressed in their eigenbasis, as in Equation (16). For a time-independent $N$, the integral in Equation (38) is then trivial. An equivalent solution is given in [20], but written in terms of the evolution operator $\hat{U}(t) = \exp(i\hat{\Omega}t)$ and its inverse. Inverse operators repeatedly appear within such a framework [20] and are often calculated with the help of

---PAGE_BREAK---

the Laplace transform identity: $\hat{\Lambda}^{-1} = \int_{0}^{\infty} \exp(-s\hat{\Lambda})ds$. Depending on $\hat{\Lambda}$, this may not be as straightforward a task as it appears at first sight. Now, while vector notation gives us additional physical insight, vector calculus can rapidly turn into a messy business. Our strategy is therefore to avoid vector calculus and instead rely on the CH-method as much as possible.
Only at the end do we write down our results, if we wish, in terms of vector products and the like. That is, we use Equations (13)–(17) systematically, in particular Equation (16) when we need to handle $\exp(\theta M)$, e.g., within integrals. The simplification comes about from our working in the eigenbasis of $\exp(\theta M)$, i.e., in the eigenbasis of $M$. Writing down the final results in three-vector notation amounts to expressing these results in the basis in which $M$ was originally defined, cf. Equation (12). Let us denote this basis by $\{|x\rangle, |y\rangle, |z\rangle\}$. The eigenvectors $|n_{\pm}\rangle$ and $|n_0\rangle$ of $M$ are easily obtained from those of $X_3$, cf. Equation (12). The eigenvectors of $X_3$ are, in turn, analogous to those of Pauli's $\sigma_y$, namely $|\pm\rangle = (|x\rangle \mp i|y\rangle)/\sqrt{2}$, plus a third eigenvector that is orthogonal to the former ones, that is, $|0\rangle = |z\rangle$. In order to obtain the eigenvectors of $n \cdot X$, with $n = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$, we apply the rotation $\exp(\phi X_3) \exp(\theta X_2)$ to the eigenvectors $|\pm\rangle$ and $|0\rangle$, thereby getting $|n_{\pm}\rangle$ and $|n_0\rangle$, respectively. All these calculations are easily performed using the CH-method.

Once we have $|n_{\pm}\rangle$ and $|n_0\rangle$, we also have the transformation matrix $T$ that brings $M$ into diagonal form: $T^{-1}MT = M_D = \text{diag}(-i, 0, i)$. Indeed, $T$'s columns are just $|n_{-}\rangle$, $|n_0\rangle$ and $|n_+\rangle$. After we have carried out all calculations in the eigenbasis of $M$, by applying $T$ we can express the final result in the basis $\{|x\rangle, |y\rangle, |z\rangle\}$, thereby obtaining the desired expressions in three-vector notation.
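This diagonalization is easily checked with numpy (our sketch; the helper name `M_of` is ours): ordering the eigenvector columns as $|n_-\rangle$, $|n_0\rangle$, $|n_+\rangle$ indeed gives $T^{-1}MT = \text{diag}(-i, 0, i)$, and Equation (17) reproduces $\exp(\theta M)$:

```python
import numpy as np
from scipy.linalg import expm

def M_of(n):
    """The matrix n·X of Equation (12)."""
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

# A generic unit vector n(theta, phi)
th, ph = 0.6, 1.1
n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
M = M_of(n)

# Columns of T: eigenvectors sorted by eigenvalue, so T^(-1) M T = diag(-i, 0, i)
w, v = np.linalg.eig(M)
T = v[:, np.argsort(w.imag)]
print(np.allclose(np.linalg.inv(T) @ M @ T, np.diag([-1j, 0, 1j])))

# Equation (17): exp(theta M) = I + M sin(theta) + M^2 (1 - cos(theta))
theta = 0.8
rodrigues = np.eye(3) + M * np.sin(theta) + (M @ M) * (1 - np.cos(theta))
print(np.allclose(expm(theta * M), rodrigues))
```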
Let us illustrate this procedure by addressing the evolution equation

$$ \frac{dS}{dt} = \Omega \times S + \lambda \Omega \times (\Omega \times S) \qquad (39) $$

In matrix form, such an equation reads

$$ \frac{dS}{dt} = \Omega MS + \lambda (\Omega M)^2 S = [\Omega M + \lambda (\Omega M)^2]S \equiv AS \qquad (40) $$

The solution is given by $S(t) = \exp(At)S_0$. The eigenbasis of $A$ is the same as that of $M$. We have thus

$$ \exp(At) = e^{(i\Omega - \lambda\Omega^2)t} |n_+\rangle\langle n_+| + e^{(-i\Omega - \lambda\Omega^2)t} |n_-\rangle\langle n_-| + |n_0\rangle\langle n_0| \qquad (41) $$

The projectors $|n_k\rangle\langle n_k|$ can be written in terms of the powers of $A$ by solving the system

$$ I = |n_+\rangle\langle n_+| + |n_-\rangle\langle n_-| + |n_0\rangle\langle n_0| \qquad (42) $$

$$ A = (i\Omega - \lambda\Omega^2)|n_+\rangle\langle n_+| - (i\Omega + \lambda\Omega^2)|n_-\rangle\langle n_-| \qquad (43) $$

$$ A^2 = (i\Omega - \lambda\Omega^2)^2 |n_+\rangle\langle n_+| + (i\Omega + \lambda\Omega^2)^2 |n_-\rangle\langle n_-| \qquad (44) $$

Using $A = \Omega M + \lambda(\Omega M)^2$ and $A^2 = -2\lambda\Omega^3 M + (1 - \lambda^2\Omega^2)(\Omega M)^2$, and replacing the solution of the system (42)–(44) in Equation (41), we get

$$ \exp(At) = I + e^{-\lambda\Omega^2 t} \sin(\Omega t) M + [1 - e^{-\lambda\Omega^2 t} \cos(\Omega t)] M^2 \qquad (45) $$

Finally, we can write the solution $S(t) = \exp(At)S_0$ in the original basis $\{|x\rangle, |y\rangle, |z\rangle\}$, something that in this case amounts to writing $MS_0 = n \times S_0$ and $M^2S_0 = n(n \cdot S_0) - S_0$. Equation (39) was also addressed in [20], but making use of the operator method. The solution was given in terms of a series expansion for the evolution operator. In order to write this solution in closed form, it is necessary to introduce sin- and cos-like functions [20].
These functions are defined as infinite series involving two-variable Hermite polynomials. The final expression reads like Equation (11), but with sin and cos replaced by the aforementioned functions containing two-variable Hermite polynomials. Now, one can hardly unravel from such an expression the physical features that characterize the system's dynamics.

---PAGE_BREAK---

On the other hand, a solution given as in Equation (45) clearly exhibits such dynamics, in particular the damping effect stemming from the $\lambda$-term in Equation (39), for $\lambda > 0$. Indeed, Equation (45) clearly shows that the state vector $S(t) = \exp(At)S_0$ asymptotically aligns with $\Omega$ while performing a damped Larmor precession about the latter.

The case $\partial\Omega/\partial t \neq 0$ is more involved and generally requires resorting to Dyson-like series expansions, e.g., time-ordered exponential integrations. While this subject lies beyond the scope of the present work, it should be mentioned that the CH-method can be advantageously applied in this context as well. For instance, time-ordered exponential integrations involving operators of the form $A + B(t)$ do require the evaluation of $\exp A$. Likewise, disentangling techniques make repeated use of matrix exponentials of single operators [21]. In all these cases, the CH-method offers a possible shortcut.

**4.3. The Jaynes-Cummings Hamiltonian**

We address now a system composed of a two-level atom and a quantized (monochromatic) electromagnetic field. Under the dipole and the rotating-wave approximations, the Hamiltonian of this system reads (in standard notation)

$$
H = \frac{\hbar}{2} \omega_0 \sigma_z + \hbar \omega a^\dagger a + \hbar g (a^\dagger \sigma_- + a \sigma_+) \quad (46)
$$

Let us denote the upper and lower states of the two-level atom by $|a\rangle$ and $|b\rangle$, respectively, and the Fock states of the photon field by $|n\rangle$.
The Hilbert space of the atom-field system is spanned by the basis $B = \{|a,n\rangle, |b,n\rangle, n=0,1,\dots\}$. The states $|a,n\rangle$ and $|b,n\rangle$ are eigenstates of the unperturbed Hamiltonian $H_0 = \hbar\omega_0\sigma_z/2 + \hbar\omega a^\dagger a$. The interaction Hamiltonian $V = \hbar g (a^\dagger\sigma_- + a\sigma_+)$ couples the states $|a,n\rangle$ and $|b,n+1\rangle$ alone. Hence, $H$ can be split into a sum: $H = \sum_n H_n$, with each $H_n$ acting on the subspace $\text{Span}\{|a,n\rangle, |b,n+1\rangle\}$. Within such a subspace, $H_n$ is represented by the 2 × 2 matrix

$$
H_n = \hbar\omega \left(n + \frac{1}{2}\right) I + \hbar \begin{pmatrix} \frac{\delta}{2} & g\sqrt{n+1} \\ g\sqrt{n+1} & -\frac{\delta}{2} \end{pmatrix} \quad (47)
$$

where $\delta = \omega_0 - \omega$.

A standard way [22] to calculate the evolution operator $U = \exp(-iHt/\hbar)$ goes as follows. One first writes the Hamiltonian in the form $H = H_1 + H_2$, with $H_1 = \hbar\omega(a^{\dagger}a + \sigma_{z}/2)$ and $H_2 = \hbar\delta\sigma_{z}/2 + \hbar g(a^{\dagger}\sigma_{-} + a\sigma_{+})$. Because $[H_1, H_2] = 0$, the evolution operator can be factored as $U = U_1U_2 = \exp(-iH_1t/\hbar)\exp(-iH_2t/\hbar)$. The first factor is diagonal in the basis $B$. The second factor can be expanded in a Taylor series. As it turns out, one can obtain closed-form expressions for the even and the odd powers of the expansion. Thus, a closed form for $U_2$ can be obtained as well. As can be seen, this method depends on the realization that Equation (46) can be written in a special form, which makes it possible to factorize $U$.

Let us now calculate $U$ by the CH-method. We can exploit the fact that $H$ splits as $H = \sum_n H_n$, with $[H_n, H_m] = 0$, and write $U = \prod_n U_n = \prod_n \exp(-iH_n t / \hbar)$. Generally, a 2 × 2 Hamiltonian $H$ has eigenvalues of the form $E_\pm = \hbar(\lambda_0 \pm \lambda) \equiv \hbar\lambda_\pm$.
We have thus + +$$ +I = |+\rangle\langle +| + |-\rangle\langle -| \tag{48} +$$ + +$$ +H/\hbar = (\lambda_0 + \lambda) |+\rangle\langle +| + (\lambda_0 - \lambda) |-\rangle\langle -| \quad (49) +$$ + +so that + +$$ +\begin{align} +\exp(-iHt/\hbar) &= \exp(-i\lambda_+ t) |+\rangle\langle +| + \exp(-i\lambda_- t) |-\rangle\langle -| \tag{50} \\ +&= \frac{e^{-i\lambda_0 t}}{\lambda} \left[ (i\lambda_0 \sin \lambda t + \lambda \cos \lambda t) I - i(\sin \lambda t) \frac{H}{\hbar} \right] \tag{51} +\end{align} +$$ +---PAGE_BREAK--- + +In our case, $H_n$ has eigenvalues $E_n^\pm = \hbar\omega(n+1/2) \pm \hbar\sqrt{\delta^2/4 + g^2(n+1)} \equiv \hbar\omega(n+1/2) \pm \hbar R_n$. Whence, + +$$ \exp(-iH_n t / \hbar) = \frac{e^{-i\omega(n+1/2)t}}{R_n} \left[ \left( i\omega\left(n+\frac{1}{2}\right)\sin(R_n t) + R_n \cos(R_n t) \right) I - i\sin(R_n t) \frac{H_n}{\hbar} \right] \quad (52) $$ + +Replacing $H_n$ from Equation (47) in the above expression we get + +$$ \exp(-iH_n t / \hbar) = e^{-i\omega(n+1/2)t} \left[ \cos(R_n t) I - \frac{i \sin(R_n t)}{2R_n} \begin{pmatrix} \delta & 2g\sqrt{n+1} \\ 2g\sqrt{n+1} & -\delta \end{pmatrix} \right] \quad (53) $$ + +This result enables a straightforward calculation of the evolved state $|\psi(t)\rangle$ out of a general initial state + +$$ |\psi(0)\rangle = \sum_n C_{a,n} |a, n\rangle + C_{b,n+1} |b, n+1\rangle \quad (54) $$ + +Equation (53) refers to a matrix representation in the two-dimensional subspace $\text{Span}\{|a, n\rangle, |b, n+1\rangle\}$. Let us focus on + +$$ \cos (R_n t) I = \begin{pmatrix} \cos (R_n t) & 0 \\ 0 & \cos (R_n t) \end{pmatrix} \qquad (55) $$ + +This matrix is a representation in subspace $\text{Span}\{|a, n\rangle, |b, n+1\rangle\}$ of the operator + +$$ \cos \left( t \sqrt{\hat{\varphi} + g^2} \right) |a\rangle\langle a| + \cos \left( t \sqrt{\hat{\varphi}} \right) |b\rangle\langle b| \quad (56) $$ + +where $\hat{\varphi} := g^2 a^\dagger a + \delta^2/4$. 
Proceeding similarly with the other operators that enter Equation (53), and observing that $\sin(R_n t) R_n^{-1}\sqrt{n+1} = \langle n | \sin\left(t\sqrt{\hat{\varphi}+g^2}\right) \left(\sqrt{\hat{\varphi}+g^2}\right)^{-1} a | n+1 \rangle$, etc., we readily obtain

$$ \exp(-iHt/\hbar) = e^{-i\omega(a^\dagger a + \frac{\sigma_z}{2})t} \begin{pmatrix} \cos\left(t\sqrt{\hat{\varphi} + g^2}\right) - \dfrac{i\delta\sin\left(t\sqrt{\hat{\varphi} + g^2}\right)}{2\sqrt{\hat{\varphi} + g^2}} & -\dfrac{ig\sin\left(t\sqrt{\hat{\varphi} + g^2}\right)}{\sqrt{\hat{\varphi} + g^2}}\, a \\ -\dfrac{ig\sin\left(t\sqrt{\hat{\varphi}}\right)}{\sqrt{\hat{\varphi}}}\, a^\dagger & \cos\left(t\sqrt{\hat{\varphi}}\right) + \dfrac{i\delta\sin\left(t\sqrt{\hat{\varphi}}\right)}{2\sqrt{\hat{\varphi}}} \end{pmatrix} \quad (57) $$

where the 2 × 2 matrix now refers to the atomic subspace $\text{Span}\{|a\rangle, |b\rangle\}$. One can see that the CH-method reduces the calculational effort needed to get Equation (53), as compared with other approaches [22].

### 4.4. Bispinors and Lorentz Transformations

As a further application, let us consider the representation of Lorentz transformations in the space of bispinors. In coordinate space, Lorentz transformations are given by $\tilde{x}^\mu = \Lambda^\mu_{\ \nu} x^\nu$ (Greek indices run from 0 to 3), with the $\Lambda^\mu_{\ \nu}$ satisfying $\eta_{\mu\nu}\Lambda^\mu_{\ \rho}\Lambda^\nu_{\ \sigma} = \eta_{\rho\sigma}$. Here, $\eta^{\mu\nu}$ denotes the metric tensor of Minkowski space ($\eta^{00} = -\eta^{11} = -\eta^{22} = -\eta^{33} = 1$, $\eta^{\mu\nu} = 0$ for $\mu \neq \nu$). A bispinor $\psi(x)$ transforms according to [19]

$$ \tilde{\psi}(\tilde{x}) = \tilde{\psi}(\Lambda x) = S(\Lambda)\psi(x) \quad (58) $$

with

$$ S(\Lambda) = \exp B \quad (59) $$

$$ B = -\frac{1}{4} V^{\mu\nu} \gamma_{\mu} \gamma_{\nu} \quad (60) $$

---PAGE_BREAK---

The $V^{\mu\nu} = -V^{\nu\mu}$ are the components of an antisymmetric tensor, which has thus six independent components, corresponding to the six parameters defining a Lorentz transformation.
The quantities $\gamma_{\mu} = \eta_{\mu\nu}\gamma^{\nu}$ satisfy $\gamma^{\mu}\gamma^{\nu} + \gamma^{\nu}\gamma^{\mu} = 2\eta^{\mu\nu}$. The quantities $\gamma_{\mu}\gamma_{\nu}$ are the generators of the Lorentz group. $S(\Lambda)$ is not a unitary transformation, but satisfies

$$
S^{-1} = \gamma_0 S^\dagger \gamma_0 \tag{61}
$$

For the following, it will be advantageous to define

$$
p_i = \gamma_0 \gamma_i, \quad i = 1, 2, 3 \tag{62}
$$

$$
q_1 = \gamma_2 \gamma_3, \quad q_2 = \gamma_3 \gamma_1, \quad q_3 = \gamma_1 \gamma_2 \qquad (63)
$$

We call the $p_i$ Pauli generators and the $q_i$ quaternion generators. The pseudoscalar $\gamma_5 := \gamma_0\gamma_1\gamma_2\gamma_3$ satisfies $\gamma_5^2 = -1$ and $\gamma_5\gamma_\mu = -\gamma_\mu\gamma_5$, so that it commutes with each generator of the Lorentz group:

$$
\gamma_5 (\gamma_{\mu} \gamma_{\nu}) = (\gamma_{\mu} \gamma_{\nu}) \gamma_5 \qquad (64)
$$

This means that quantities of the form $\alpha + \beta\gamma_5$ ($\alpha, \beta \in \mathbb{R}$) behave like complex numbers upon multiplication with the $p_i$ and $q_i$. We denote the subspace spanned by such quantities as the complex-like subspace $C_{\mathbf{i}}$ and set $\mathbf{i} \equiv \gamma_5$. Noting that $\mathbf{i} p_i = q_i$ and $\mathbf{i} q_i = -p_i$, the following multiplication rules are easily derived:

$$
q_i q_j = \epsilon_{ijk} q_k - \delta_{ij} \quad (65)
$$

$$
p_i p_j = -\epsilon_{ijk} q_k + \delta_{ij} = -q_i q_j = -\mathbf{i}\epsilon_{ijk} p_k + \delta_{ij} \quad (66)
$$

$$
p_i q_j = \epsilon_{ijk} p_k + \mathbf{i}\delta_{ij} = \mathbf{i}(-\epsilon_{ijk} q_k + \delta_{ij}) \quad (67)
$$

The following commutators can then be straightforwardly obtained:

$$
[q_i, q_j] = 2\epsilon_{ijk}q_k \tag{68}
$$

$$
[p_i, p_j] = -2\epsilon_{ijk}q_k = -2\mathbf{i}\epsilon_{ijk}p_k \tag{69}
$$

$$
[p_i, q_j] = 2\epsilon_{ijk}p_k \tag{70}
$$

They make clear why we dubbed the $p_i$ Pauli generators.
Noting that they furthermore satisfy

$$
p_i p_j + p_j p_i = 2\delta_{ij} \tag{71}
$$

we see the correspondence $\mathbf{i} \rightarrow i$, $p_k \rightarrow -\sigma_k$, with $i$ being the imaginary unit and $\sigma_k$ the Pauli matrices. These matrices, as is well known, satisfy $[\sigma_i, \sigma_j] = 2i\epsilon_{ijk}\sigma_k$ and the anticommutation relations $\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}$, which follow from $\sigma_i\sigma_j = i\epsilon_{ijk}\sigma_k + \delta_{ij}$.

We can now write $S(\Lambda) = \exp(-\frac{1}{4}V^{\mu\nu}\gamma_{\mu}\gamma_{\nu})$ in terms of the $p_i$ and $q_i$:

$$
B = \sum_{i=1}^{3} (\alpha^i p_i + \beta^i q_i) \tag{72}
$$

Here, we have set $\alpha^i = -V^{0i}/4$ and $\beta^k \epsilon_{ijk} = -V^{ij}/4$. We can write $B$ in terms of the Pauli generators alone:

$$
B = \sum_{i=1}^{3} (\alpha^i + \mathbf{i}\beta^i) p_i = \sum_{i=1}^{3} z^i p_i \quad (73)
$$

---PAGE_BREAK---

Considering the isomorphism $p_k \leftrightarrow -\sigma_k$, we could derive the expression for $S(\Lambda) = \exp B$ by splitting the series expansion into even and odd powers of $B$, and noting that

$$B^2 = (\alpha^2 - \beta^2) + (2\alpha \cdot \beta)\,\mathbf{i} \equiv z^2 \in C_{\mathbf{i}} \qquad (74)$$

where $\alpha^2 \equiv \alpha \cdot \alpha$, $\beta^2 \equiv \beta \cdot \beta$, and $\alpha \cdot \beta \equiv \sum_{i=1}^{3} \alpha^i \beta^i$. We have then that $B^3 = z^2 B$, $B^4 = z^4$, $B^5 = z^4 B$, and so on. This allows us to write

$$\exp(B) = 1 + B + \frac{z^2}{2!} + \frac{z^2}{3!}B + \frac{z^4}{4!} + \frac{z^4}{5!}B + \dots = \\ \left(1 + \sum_{n=1}^{\infty} \frac{z^{2n}}{(2n)!}\right) + B\left(1 + \frac{z^2}{3!} + \frac{z^4}{5!} + \dots\right) = \cosh z + \frac{\sinh z}{z} B \quad (75)$$

As in the previous examples, also in this case the above result can be obtained more directly by noting that $B = \sum_{i=1}^{3} (\alpha^i + \mathbf{i}\beta^i) p_i \leftrightarrow -\sum_{i=1}^{3} (\alpha^i + i\beta^i) \sigma_i$.
This suggests that we consider $\exp(-f \cdot \sigma)$, with $f = \alpha + i\beta \in \mathbb{C}^3$. The matrix $f \cdot \sigma$ has the (complex) eigenvalues

$$\lambda_{\pm} = \pm\sqrt{\alpha^2 - \beta^2 + 2i\alpha \cdot \beta} \equiv \pm z \qquad (76)$$

Writing $|f_{\pm}\rangle$ for the corresponding eigenvectors, i.e., $f \cdot \sigma |f_{\pm}\rangle = \lambda_{\pm} |f_{\pm}\rangle$, we have that

$$I = |f_+\rangle\langle f_+| + |f_-\rangle\langle f_-| \qquad (77)$$

$$f \cdot \sigma = \lambda_+ |f_+\rangle \langle f_+| + \lambda_- |f_-\rangle \langle f_-| \qquad (78)$$

Solving for $|f_{\pm}\rangle\langle f_{\pm}|$, we get

$$|f_{\pm}\rangle\langle f_{\pm}| = \frac{zI \pm f \cdot \sigma}{2z} \qquad (79)$$

We now apply the general decomposition $\exp A = \sum_n \exp a_n |a_n\rangle\langle a_n|$ to the case $A = -f \cdot \sigma$. The operator $\exp(-f \cdot \sigma)$ has eigenvectors $|f_{\pm}\rangle$ and eigenvalues $\exp(\mp z)$. Thus,

$$\begin{align}
\exp(-f \cdot \sigma) &= e^{-z} |f_+\rangle \langle f_+| + e^{z} |f_-\rangle \langle f_-| & (80) \\
&= \frac{e^{-z}}{2z} (zI + f \cdot \sigma) + \frac{e^{z}}{2z} (zI - f \cdot \sigma) & (81) \\
&= \left(\frac{e^{z} + e^{-z}}{2}\right) I - \left(\frac{e^{z} - e^{-z}}{2z}\right) f \cdot \sigma & (82) \\
&= \cosh z - \frac{\sinh z}{z} f \cdot \sigma & (83)
\end{align}$$

which is equivalent to Equation (75) via the correspondence $\cosh z + \frac{\sinh z}{z} B \leftrightarrow \cosh z - \frac{\sinh z}{z}\, f \cdot \sigma$. We have thus obtained closed-form expressions for $\exp(-f \cdot \sigma)$, with $f = \alpha + i\beta \in \mathbb{C}^3$, i.e., for the elements of SL(2, C), the universal covering group of the Lorentz group. It is interesting to note that the elements of SL(2, C) are related to those of SU(2) by extending the parameters $\alpha$ entering $\exp(i\alpha\, \boldsymbol{n} \cdot \boldsymbol{\sigma}) \in SU(2)$ from the real to the complex domain: $i\alpha \rightarrow \alpha + i\beta$.
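The closed form (83) can be checked directly against the spectral decomposition (80) for arbitrary parameters; a minimal sketch (the random $\alpha$, $\beta$ are our own choice):

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

rng = np.random.default_rng(7)
alpha, beta = rng.normal(size=3), rng.normal(size=3)
f = alpha + 1j * beta
fs = sum(f[i] * sig[i] for i in range(3))            # f . sigma

# z^2 = f . f as in Eq. (76); cosh z and sinh(z)/z are even in z,
# so the branch of the complex square root is immaterial
z = np.sqrt(np.sum(f * f) + 0j)
closed = np.cosh(z) * np.eye(2) - (np.sinh(z) / z) * fs   # Eq. (83)

# Brute force: exp(-f.sigma) from the eigendecomposition, as in Eq. (80)
w, V = np.linalg.eig(-fs)
brute = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

assert np.allclose(closed, brute)
```

The branch-independence noted in the comment is what makes the closed form well defined even though $z$ itself is only fixed up to a sign.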
Standard calculations that are carried out with SU(2) elements can be carried out similarly with SL(2, C) elements [15]. A possible realization of SU(2) transformations occurs in optics, by acting on the polarization of light with the help of birefringent elements (waveplates). If we also employ dichroic elements like polarizers, which absorb part of the light, then it is possible to implement SL(2, C) transformations as well. In this way, one can simulate Lorentz transformations in the optical laboratory [23]. The above formalism is of great help for designing the corresponding experimental setup.
---PAGE_BREAK---

**5. Conclusions**

The method presented in this paper, referred to as the Cayley–Hamilton method, proves advantageous for calculating closed-form expressions of analytic functions $f(A)$ of an $n \times n$ matrix $A$, in particular matrix exponentials. The matrix $A$ is assumed to be diagonalizable, even though only its eigenvalues, not its eigenvectors, are needed. We have recovered some known results from classical and quantum mechanics, including Lorentz transformations, by performing the straightforward calculations that the method prescribes. In most cases, the problem at hand was reshaped so as to solve it by dealing with two-by-two matrices only.

**Acknowledgments:** The author gratefully acknowledges the Research Directorate of the Pontificia Universidad Católica del Perú (DGI-PUCP) for financial support under Grant No. 2014-0064.

**Conflicts of Interest:** The author declares no conflict of interest.

**References**

1. Gantmacher, F.R. *The Theory of Matrices*; Chelsea Publishing Company: New York, NY, USA, 1960; p. 83.

2. Dattoli, G.; Mari, C.; Torre, A. A simplified version of the Cayley-Hamilton theorem and exponential forms of the $2 \times 2$ and $3 \times 3$ matrices. *Il Nuovo Cimento* **1998**, *180*, 61–68.

3. Cohen-Tannoudji, C.; Diu, B.; Laloë, F.
*Quantum Mechanics*; John Wiley & Sons: New York, NY, USA, 1977; pp. 983–989. + +4. Sakurai, J.J. *Modern Quantum Mechanics*; Addison-Wesley: New York, NY, USA, 1980; pp. 163–168. + +5. Greiner, W.; Müller, B. *Quantum Mechanics, Symmetries*; Springer: New York, NY, USA, 1989; p. 68. + +6. Weigert, S. Baker-Campbell-Hausdorff relation for special unitary groups SU(N). *J. Phys. A* **1997**, *30*, 8739–8749. + +7. Dattoli, G.; Ottaviani, P.L.; Torre, A.; Vásquez, L. Evolution operator equations: Integration with algebraic and finite-difference methods. Applications to physical problems in classical and quantum mechanics and quantum field theory. *Riv. Nuovo Cimento* **1997**, *20*, 1–133. + +8. Dattoli, G.; Zhukovsky, K. Quark flavour mixing and the exponential form of the Kobayashi–Maskawa matrix. *Eur. Phys. J. C* **2007**, *50*, 817–821. + +9. Leonard, I. The matrix exponential. *SIAM Rev.* **1996**, *38*, 507–512. + +10. Untidt, T.S.; Nielsen, N.C. Closed solution to the Baker-Campbell-Hausdorff problem: Exact effective Hamiltonian theory for analysis of nuclear-magnetic-resonance experiments. *Phys. Rev. E* **2002**, *65*, doi:10.1103/PhysRevE.65.021108. + +11. Moore, G. Orthogonal polynomial expansions for the matrix exponential. *Linear Algebra Appl.* **2011**, *435*, 537–559. + +12. Ding, F. Computation of matrix exponentials of special matrices. *Appl. Math. Comput.* **2013**, *223*, 311–326. + +13. Koch, C.T.; Spence, J.C.H. A useful expansion of the exponential of the sum of two non-commuting matrices, one of which is diagonal. *J. Phys. A Math. Gen.* **2003**, *36*, 803–816. + +14. Ramakrishna, V.; Zhou, H. On the exponential of matrices in $su(4)$. *J. Phys. A Math. Gen.* **2006**, *39*, 3021–3034. + +15. Tudor, T. On the single-exponential closed form of the product of two exponential operators. *J. Phys. A Math. Theor.* **2007**, *40*, 14803–14810. + +16. Siminovitch, D.; Untidt, T.S.; Nielsen, N.C. Exact effective Hamiltonian theory. II. 
Polynomial expansion of matrix functions and entangled unitary exponential operators. *J. Chem. Phys.* **2004**, *120*, 51–66.

17. Goldstein, H. *Classical Mechanics*, 2nd ed.; Addison-Wesley: New York, NY, USA, 1980; pp. 164–174.

18. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. *Numerical Recipes in FORTRAN: The Art of Scientific Computing*, 2nd ed.; Cambridge University Press: Cambridge, UK, 1992; pp. 83–84.

19. Bjorken, J.D.; Drell, S.D. *Relativistic Quantum Mechanics*; McGraw-Hill: New York, NY, USA, 1965.

20. Babusci, D.; Dattoli, G.; Sabia, E. Operational methods and Lorentz-type equations of motion. *J. Phys. Math.* **2011**, *3*, 1–17.

21. Puri, R.R. *Mathematical Methods of Quantum Optics*; Springer: New York, NY, USA, 2001; pp. 8–53.
---PAGE_BREAK---

22. Meystre, P.; Sargent, M. *Elements of Quantum Optics*, 2nd ed.; Springer: Berlin, Germany, 1999; pp. 372–373.

23. Kim, Y.S.; Noz, M.E. Symmetries shared by the Poincaré group and the Poincaré sphere. *Symmetry* **2013**, *5*, 233–252.

© 2014 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

# Invisibility and *PT* Symmetry: A Simple Geometrical Viewpoint

Luis L. Sánchez-Soto * and Juan J. Monzón

Departamento de Óptica, Facultad de Física, Universidad Complutense, 28040 Madrid, Spain; E-Mail: jjmonzon@opt.ucm.es

* E-Mail: lsanchez@fis.ucm.es; Tel.: +34-91-3944-680; Fax: +34-91-3944-683.

Received: 24 February 2014; in revised form: 12 May 2014; Accepted: 14 May 2014; Published: 22 May 2014

**Abstract:** We give a simplified account of the properties of the transfer matrix for a complex one-dimensional potential, paying special attention to the particular instance of unidirectional invisibility.
In appropriate variables, invisible potentials appear as performing null rotations, which lead to the helicity-gauge symmetry of massless particles. In hyperbolic geometry, this can be interpreted, via Möbius transformations, as parallel displacements, a geometric action that has no Euclidean analogue.

**Keywords:** *PT* symmetry; SL(2, C); Lorentz group; hyperbolic geometry

## 1. Introduction

The work of Bender and coworkers [1–6] has triggered considerable efforts to understand complex potentials that have neither parity (*P*) nor time-reversal (*T*) symmetry, yet retain combined *PT* invariance. These systems can exhibit real energy eigenvalues, thus suggesting a plausible generalization of quantum mechanics. This speculative concept has motivated an ongoing debate on several fronts [7,8].

Quite recently, the prospect of realizing *PT*-symmetric potentials within the framework of optics has been put forward [9,10] and experimentally tested [11]. The complex refractive index here takes on the role of the potential, so such potentials can be realized through a judicious inclusion of index guiding and gain/loss regions. These *PT*-synthetic materials can exhibit several intriguing features [12–14], one of which will be the main interest of this paper, namely, unidirectional invisibility [15–17].

In all these matters, the time-honored transfer-matrix method is particularly germane [18]. However, a quick look at the literature immediately reveals the different backgrounds and habits in which the transfer matrix is used and the very little cross talk between them.

To remedy this flaw, we have been capitalizing on a number of geometrical concepts to gain further insights into the behavior of one-dimensional scattering [19–26]. Indeed, when one thinks of a unifying mathematical scenario, geometry immediately comes to mind. Here, we continue this program and examine the action of the transfer matrices associated with invisible scatterers.
Interestingly enough, when viewed in SO(1, 3), they turn out to be nothing but parabolic Lorentz transformations, also called null rotations, which play a crucial role in the determination of the little group of massless particles. Furthermore, borrowing elementary techniques of hyperbolic geometry, we reinterpret these matrices as parallel displacements, which are motions without a Euclidean counterpart.

We stress that our formulation does not offer any inherent advantage in terms of efficiency in solving practical problems; rather, it furnishes a general and unifying setting to analyze the transfer matrix for complex potentials, which, in our opinion, is more than a curiosity.
---PAGE_BREAK---

## 2. Basic Concepts on the Transfer Matrix

To be as self-contained as possible, we first briefly review some basic facts on the quantum scattering of a particle of mass $m$ by a local complex potential $V(x)$ defined on the real line $\mathbb{R}$ [27–34]. Although much of the renewed interest in this topic has been fuelled by the remarkable case of *PT* symmetry, we do not use this extra assumption in this Section.

The problem at hand is governed by the time-independent Schrödinger equation

$$H\Psi(x) = \left[-\frac{d^2}{dx^2} + U(x)\right] \Psi(x) = \varepsilon \Psi(x) \quad (1)$$

where $\varepsilon = 2mE/\hbar^2$ and $U(x) = 2mV(x)/\hbar^2$, $E$ being the energy of the particle. We assume that $U(x) \to 0$ fast enough as $x \to \pm\infty$, although the treatment can be adapted, with minor modifications, to cope with potentials for which the limits $U_{\pm} = \lim_{x\to\pm\infty} U(x)$ are different.
Since $U(x)$ decays rapidly as $|x| \to \infty$, solutions of (1) have the asymptotic behavior

$$\Psi(x) = \begin{cases} A_+ e^{+ikx} + A_- e^{-ikx} & x \to -\infty \\ B_+ e^{+ikx} + B_- e^{-ikx} & x \to \infty \end{cases} \quad (2)$$

Here, $k^2 = \varepsilon$, $A_\pm$ and $B_\pm$ are $k$-dependent complex coefficients (unspecified, at this stage), and the subscripts $+$ and $-$ distinguish right-moving modes $\exp(+ikx)$ from left-moving modes $\exp(-ikx)$, respectively.

The problem requires working out the exact solution of (1) and invoking the appropriate boundary conditions, involving not only the continuity of $\Psi(x)$ itself, but also of its derivative. In this way, one has two linear relations among the coefficients $A_\pm$ and $B_\pm$, which can be solved for any amplitude pair in terms of the other two; the result can be expressed as a matrix equation that translates the linearity of the problem. Frequently, it is more advantageous to specify a linear relation between the wave amplitudes on both sides of the scatterer, namely,

$$\begin{pmatrix} B_+ \\ B_- \end{pmatrix} = \mathbf{M} \begin{pmatrix} A_+ \\ A_- \end{pmatrix} \quad (3)$$

$\mathbf{M}$ is the transfer matrix, which depends in a complicated way on the potential $U(x)$. Yet one can extract a good deal of information without explicitly calculating it: let us apply (3) successively to a right-moving [$A_+ = 1$, $B_- = 0$] and to a left-moving [$A_+ = 0$, $B_- = 1$] wave, both of unit amplitude. The result can be displayed as

$$\begin{pmatrix} T^\ell \\ 0 \end{pmatrix} = \mathbf{M} \begin{pmatrix} 1 \\ R^\ell \end{pmatrix}, \quad \begin{pmatrix} R^r \\ 1 \end{pmatrix} = \mathbf{M} \begin{pmatrix} 0 \\ T^r \end{pmatrix} \quad (4)$$

where $T^{\ell,r}$ and $R^{\ell,r}$ are the transmission and reflection coefficients for a wave incoming at the potential from the left and from the right, respectively, defined in the standard way as the quotients of the pertinent fluxes [35].
With this in mind, Equation (4) can be thought of as a linear superposition of the two independent solutions

$$\Psi_k^\ell(x) = \begin{cases} e^{+ikx} + R^\ell(k)e^{-ikx} & x \to -\infty, \\ T^\ell(k)e^{+ikx} & x \to \infty, \end{cases} \quad \Psi_k^r(x) = \begin{cases} T^r(k)e^{-ikx} & x \to -\infty, \\ e^{-ikx} + R^r(k)e^{+ikx} & x \to \infty \end{cases} \quad (5)$$

which is consistent with the fact that, since $\varepsilon > 0$, the spectrum of the Hamiltonian (1) is continuous and there are two linearly independent solutions for a given value of $\varepsilon$. The wave function $\Psi_k^\ell(x)$ represents a wave incident from $-\infty$ [$\exp(+ikx)$], and the interaction with the potential produces a
---PAGE_BREAK---

reflected wave [$R^{\ell}(k) \exp(-ikx)$] that escapes to $-\infty$ and a transmitted wave [$T^{\ell}(k) \exp(+ikx)$] that moves off to $+\infty$. The solution $\Psi_k^{r}(x)$ can be interpreted in a similar fashion.

Because the Wronskian of the solutions (5) is independent of $x$, we can compute $W(\Psi_k^{\ell}, \Psi_k^r) = \Psi_k^{\ell}\, (\Psi_k^{r})' - (\Psi_k^{\ell})'\, \Psi_k^{r}$ first for $x \to -\infty$ and then for $x \to \infty$; this gives

$$ \frac{i}{2k} W(\Psi_k^\ell, \Psi_k^r) = T^\ell(k) = T^r(k) \quad (6) $$

We thus arrive at the important conclusion that, irrespective of the potential, the transmission coefficient is always independent of the input direction.

Taking this constraint into account, we go back to the system (4) and write the solution for **M** as

$$ M_{11}(k) = T(k) - \frac{R^{\ell}(k) R^r(k)}{T(k)}, \quad M_{12}(k) = \frac{R^r(k)}{T(k)}, \quad M_{21}(k) = -\frac{R^{\ell}(k)}{T(k)}, \quad M_{22}(k) = \frac{1}{T(k)} \quad (7) $$

A straightforward check shows that $\det \mathbf{M} = +1$, so $\mathbf{M} \in \text{SL}(2, \mathbb{C})$; a result that can be drawn from a number of alternative and more elaborate arguments [36].
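The structure (7) and the unimodularity of $\mathbf{M}$ are easy to confirm numerically. The sketch below builds the transfer matrix of a rectangular barrier of complex height by matching plane waves at the two interfaces; this toy construction is our own, not taken from the text, and the numerical values are arbitrary:

```python
import numpy as np

def plane_waves(k, x):
    """Columns: psi and psi' of exp(+ikx) and exp(-ikx) at the point x."""
    return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def barrier_M(U0, L, k):
    """Transfer matrix of a rectangular (possibly complex) barrier on [0, L]."""
    kp = np.sqrt(k * k - U0 + 0j)          # wavenumber inside the barrier
    step = lambda k1, k2, x: np.linalg.inv(plane_waves(k2, x)) @ plane_waves(k1, x)
    return step(kp, k, L) @ step(k, kp, 0.0)

k = 1.3
M = barrier_M(0.4 - 0.2j, 2.0, k)          # absorbing barrier: complex height

assert np.isclose(np.linalg.det(M), 1.0)   # M stays in SL(2, C) for complex U

# Reading off the scattering data by inverting Eq. (7) ...
T, Rr, Rl = 1 / M[1, 1], M[0, 1] / M[1, 1], -M[1, 0] / M[1, 1]
# ... the defining relations, Eq. (4), are satisfied:
assert np.allclose(M @ np.array([1, Rl]), np.array([T, 0]))
assert np.allclose(M @ np.array([0, T]), np.array([Rr, 1]))
```

Continuity of $\Psi$ and $\Psi'$ at each interface is exactly what the `step` factor encodes, and the determinant check reflects that $\det \mathbf{M} = 1$ holds even when the potential is not real.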
One could also relate outgoing amplitudes to the incoming ones (as they are often the magnitudes one can externally control): this is precisely the scattering matrix, which can be concisely formulated as

$$ \begin{pmatrix} B_+ \\ A_- \end{pmatrix} = S \begin{pmatrix} A_+ \\ B_- \end{pmatrix} \qquad (8) $$

with matrix elements

$$ S_{11}(k) = T(k), \quad S_{12}(k) = R^r(k), \quad S_{21}(k) = R^{\ell}(k), \quad S_{22}(k) = T(k) \quad (9) $$

Finally, we stress that transfer matrices are very convenient mathematical objects. Suppose that $V_1$ and $V_2$ are potentials with finite support, vanishing outside a pair of adjacent intervals $I_1$ and $I_2$. If $\mathbf{M}_1$ and $\mathbf{M}_2$ are the corresponding transfer matrices, the total system (with support $I_1 \cup I_2$) is described by

$$ \mathbf{M} = \mathbf{M}_1 \mathbf{M}_2 \qquad (10) $$

This property is rather helpful: we can connect simple scatterers to create an intricate potential landscape and determine its transfer matrix by simple multiplication. This is a common instance in optics, where one routinely has to treat multilayer stacks. However, this important property does not seem to carry over into the scattering matrix in any simple way [37,38], because the incoming amplitudes for the overall system cannot be obtained in terms of the incoming amplitudes for every subsystem.

## 3. Spectral Singularities

The scattering solutions (5) constitute quite an intuitive way to attack the problem and they are widely employed in physical applications. Nevertheless, it is sometimes advantageous to look at the fundamental solutions of (1) in terms of left- and right-moving modes, as we have already used in (2).
Indeed, the two independent solutions of (1) can be formally written down as [39]

$$ \Psi_k^{(+)}(x) = e^{+ikx} + \int_x^\infty K_+(x,x')e^{+ikx'}dx' \qquad (11) $$

$$ \Psi_k^{(-)}(x) = e^{-ikx} + \int_{-\infty}^{x} K_-(x,x')e^{-ikx'}dx' $$
---PAGE_BREAK---

The kernels $K_{\pm}(x, x')$ enjoy a number of interesting properties. What matters for our purposes is that the resulting $\Psi_k^{(\pm)}(x)$ are analytic with respect to $k$ in $\mathbb{C}_+ = \{z \in \mathbb{C} \mid \operatorname{Im} z > 0\}$ and continuous on the real axis. In addition, it is clear that

$$ \Psi_k^{(+)}(x) = e^{+ikx} \quad x \to \infty, \qquad \Psi_k^{(-)}(x) = e^{-ikx} \quad x \to -\infty \tag{12} $$

that is, they are the Jost functions for this problem [31].

Let us look at the Wronskian of the Jost functions $W(\Psi_k^{(-)}, \Psi_k^{(+)})$, which, as a function of $k$, is analytic in $\mathbb{C}_+$. A spectral singularity is a point $k_* \in \mathbb{R}_+$ of the continuous spectrum of the Hamiltonian (1) such that

$$ W(\Psi_{k_*}^{(-)}, \Psi_{k_*}^{(+)}) = 0 \tag{13} $$

so $\Psi_k^{(\pm)}(x)$ become linearly dependent at $k_*$ and the Hamiltonian is not diagonalizable. In fact, the set of zeros of the Wronskian is bounded, has at most a countable number of elements and its limit points can lie in a bounded subinterval of the real axis [40]. There is an extensive theory of spectral singularities for (1) that was started by Naimark [41]; the interested reader is referred to, e.g., Refs. [42–46] for further details.
The asymptotic behavior of $\Psi_k^{(\pm)}(x)$ at the opposite extremes of $\mathbb{R}$ with respect to those in (12) can be easily worked out by a simple application of the transfer matrix (and its inverse); viz,

$$ \begin{align} \Psi_k^{(-)}(x) &= M_{12}e^{+ikx} + M_{22}e^{-ikx} && x \to \infty \\ \Psi_k^{(+)}(x) &= M_{22}e^{+ikx} - M_{21}e^{-ikx} && x \to -\infty \end{align} \tag{14} $$

Using $\Psi_k^{(\pm)}(x)$ in (12) and (14), we can calculate

$$ \frac{i}{2k} W(\Psi_k^{(-)}, \Psi_k^{(+)}) = M_{22}(k) \tag{15} $$

Upon comparing with the definition (13), we can reinterpret the spectral singularities as the real zeros of $M_{22}(k)$ and, as a result, the reflection and transmission coefficients diverge therein. The converse holds because $M_{12}(k)$ and $M_{21}(k)$ are entire functions, lacking singularities. This means that, in an optical scenario, spectral singularities correspond to lasing thresholds [47–49].

One could also consider the more general case that the Hamiltonian (1) has, in addition to a continuous spectrum corresponding to $k \in \mathbb{R}_+$, a possibly complex discrete spectrum. The latter corresponds to the square-integrable solutions of (1) that represent bound states. They are also zeros of $M_{22}(k)$, but unlike the zeros associated with the spectral singularities, these must have a positive imaginary part [36].

The eigenvalues of S are

$$ s_{\pm} = \frac{1}{M_{22}(k)} \left[ 1 \pm \sqrt{1 - M_{11}(k)M_{22}(k)} \right] \tag{16} $$

At a spectral singularity, $s_+$ diverges, while $s_- \to M_{11}(k)/2$, which suggests identifying spectral singularities with resonances of vanishing width.

## 4. Invisibility and *PT* Symmetry

As heralded in the Introduction, unidirectional invisibility has lately been predicted in *PT* materials. We shall elaborate on the ideas developed by Mostafazadeh [50] in order to shed light on this intriguing question.
The potential $U(x)$ is called reflectionless from the left (right) if $R^\ell(k) = 0$ and $R^r(k) \neq 0$ [$R^r(k) = 0$ and $R^\ell(k) \neq 0$]. From the explicit matrix elements in (7) and (9), we see that unidirectional
---PAGE_BREAK---

reflectionlessness implies the non-diagonalizability of both **M** and **S**. Therefore, the parameters of the potential for which it becomes reflectionless correspond to exceptional points of **M** and **S** [51,52].

The potential is called invisible from the left (right) if it is reflectionless from the left (right) and, in addition, $T(k) = 1$. We can easily express the conditions for unidirectional invisibility as

$$
\begin{align}
M_{12}(k) & \neq 0, & M_{11}(k) &= M_{22}(k) = 1 && \text{(left invisible)} \\
M_{21}(k) & \neq 0, & M_{11}(k) &= M_{22}(k) = 1 && \text{(right invisible)}
\end{align}
\tag{17} $$

Next, we scrutinize the role of $\mathcal{PT}$ symmetry in invisibility. For that purpose, we first briefly recall that the parity transformation “reflects” the system with respect to the coordinate origin, so that $x \mapsto -x$ and the momentum $p \mapsto -p$. The action on the wave function is

$$ \Psi(x) \mapsto (\mathcal{P}\Psi)(x) = \Psi(-x) \tag{18} $$

On the other hand, the time reversal inverts the sense of time evolution, so that $x \mapsto x$, $p \mapsto -p$ and $i \mapsto -i$. This means that the operator $\mathcal{T}$ implementing such a transformation is antiunitary and its action reads

$$ \Psi(x) \mapsto (\mathcal{T}\Psi)(x) = \Psi^*(x) \tag{19} $$

Consequently, under a combined $\mathcal{PT}$ transformation, we have

$$ \Psi(x) \mapsto (\mathcal{PT}\Psi)(x) = \Psi^*(-x) \tag{20} $$

Let us apply this to a general complex scattering potential.
The transfer matrix of the $\mathcal{PT}$-transformed system, which we denote by $\mathbf{M}^{(\mathcal{PT})}$, fulfils

$$ \begin{pmatrix} A_+^* \\ A_-^* \end{pmatrix} = \mathbf{M}^{(\mathcal{PT})} \begin{pmatrix} B_+^* \\ B_-^* \end{pmatrix} \tag{21} $$

Comparing with (3), we come to the result

$$ \mathbf{M}^{(\mathcal{PT})} = (\mathbf{M}^{-1})^* \tag{22} $$

and, because $\det \mathbf{M} = 1$, this means

$$ M_{11} \stackrel{\mathcal{PT}}{\longmapsto} M_{22}^*, \quad M_{12} \stackrel{\mathcal{PT}}{\longmapsto} -M_{12}^*, \quad M_{21} \stackrel{\mathcal{PT}}{\longmapsto} -M_{21}^*, \quad M_{22} \stackrel{\mathcal{PT}}{\longmapsto} M_{11}^* \tag{23} $$

When the system is invariant under this transformation [$\mathbf{M}^{(\mathcal{PT})} = \mathbf{M}$], it must hold that

$$ \mathbf{M}^{-1} = \mathbf{M}^* \tag{24} $$

a fact already noticed by Longhi [48], and one that can also be recast as [53]

$$ \mathrm{Re}\left(\frac{R^\ell}{T}\right) = \mathrm{Re}\left(\frac{R^r}{T}\right) = 0 \tag{25} $$

This can be equivalently restated in the form

$$ \rho^\ell - \tau = \pm\pi/2, \quad \rho^r - \tau = \pm\pi/2 \tag{26} $$

with $\tau = \arg(T)$ and $\rho^{\ell,r} = \arg(R^{\ell,r})$. Hence, if we look at the complex numbers $R^\ell$, $R^r$, and $T$ as phasors, Equation (26) tells us that $R^\ell$ and $R^r$ are always collinear, while $T$ is simultaneously
---PAGE_BREAK---

perpendicular to them. We draw attention to the fact that the same expressions have been derived for lossless symmetric beam splitters [54]: we have shown that they hold true for any *PT*-symmetric structure.

A direct consequence of (23) is that there are particular instances of *PT*-invariant systems that are invisible, although not every invisible potential is *PT* invariant. In this respect, it is worth stressing that even (*P*-symmetric) potentials do not support unidirectional invisibility and the same holds for real (*T*-symmetric) potentials.
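Conditions (24) and (25) are straightforward to verify on a concrete example. A hypothetical family of *PT*-symmetric transfer matrices, chosen by us purely for illustration, is $\mathbf{M} = \begin{pmatrix} a & ib \\ ic & a \end{pmatrix}$ with real $a, b, c$ and $\det \mathbf{M} = a^2 + bc = 1$:

```python
import numpy as np

# Hypothetical PT-symmetric transfer matrix: real a, b, c with a^2 + b c = 1
a, b = 1.2, 0.5
c = (1 - a**2) / b
M = np.array([[a, 1j * b], [1j * c, a]])

assert np.isclose(np.linalg.det(M), 1.0)
assert np.allclose(np.linalg.inv(M), np.conj(M))       # Eq. (24)

# Scattering data via Eq. (7): R/T ratios come out purely imaginary, Eq. (25)
T = 1 / M[1, 1]
Rl, Rr = -M[1, 0] / M[1, 1], M[0, 1] / M[1, 1]
assert np.isclose((Rl / T).real, 0)
assert np.isclose((Rr / T).real, 0)
```

The purely imaginary ratios $R^{\ell}/T$ and $R^{r}/T$ are exactly the phasor collinearity expressed by (26).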
In optics, beam propagation is governed by the paraxial wave equation, which is equivalent to a Schrödinger-like equation, with the role of the potential played here by the refractive index. Therefore, a necessary condition for a complex refractive index to be *PT* invariant is that its real part is an even function of $x$, while the imaginary component (loss and gain profile) is odd.

## 5. Relativistic Variables

To move ahead, let us construct the Hermitian matrices

$$ \mathbf{X} = \begin{pmatrix} X_+ \\ X_- \end{pmatrix} \otimes \begin{pmatrix} X_+^* & X_-^* \end{pmatrix} = \begin{pmatrix} |X_+|^2 & X_+ X_-^* \\ X_+^* X_- & |X_-|^2 \end{pmatrix} \quad (27) $$

where $X_{\pm}$ refers to either $A_{\pm}$ or $B_{\pm}$; i.e., the amplitudes that determine the behavior at each side of the potential. The matrices $\mathbf{X}$ are quite reminiscent of the coherence matrix in optics or the density matrix in quantum mechanics.

One can verify that $\mathbf{M}$ acts on $\mathbf{X}$ by conjugation

$$ \mathbf{X}' = \mathbf{M} \mathbf{X} \mathbf{M}^\dagger \quad (28) $$

The matrix $\mathbf{X}'$ is associated with the amplitudes $B_{\pm}$ and $\mathbf{X}$ with $A_{\pm}$.

Let us consider the set $\sigma^{\mu} = (\mathbb{1}, \sigma)$, with Greek indices running from 0 to 3. The $\sigma^{\mu}$ are the identity and the standard Pauli matrices, which constitute a basis of the linear space of $2 \times 2$ complex matrices. For the sake of covariance, it is convenient to define $\tilde{\sigma}^{\mu} \equiv \sigma_{\mu} = (\mathbb{1}, -\sigma)$, so that [55]

$$ \mathrm{Tr}(\tilde{\sigma}^{\mu}\sigma_{\nu}) = 2\delta_{\nu}^{\mu} \quad (29) $$

and $\delta_{\nu}^{\mu}$ is the Kronecker delta.
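The conjugation law (28) follows immediately from the construction (27): if $B = \mathbf{M}A$, then $\mathbf{X}_B = (\mathbf{M}A)(\mathbf{M}A)^\dagger = \mathbf{M}\mathbf{X}_A\mathbf{M}^\dagger$. A quick numerical illustration with arbitrary (hypothetical) amplitudes:

```python
import numpy as np

A = np.array([0.7 + 0.2j, -0.4 + 0.5j])                 # amplitudes A_+, A_-
M = np.array([[1.1, 0.3j], [0.2, (1 + 0.06j) / 1.1]])   # any complex matrix works here

X_A = np.outer(A, A.conj())                             # Eq. (27)
B = M @ A
X_B = np.outer(B, B.conj())

assert np.allclose(X_B, M @ X_A @ M.conj().T)           # Eq. (28)
```

Note that (28) holds for any complex matrix $\mathbf{M}$; unimodularity only becomes relevant when the induced Lorentz transformation is extracted below.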
To any Hermitian matrix $\mathbf{X}$ we can associate the coordinates

$$ x^{\mu} = \frac{1}{2} \mathrm{Tr}(\mathbf{X}\tilde{\sigma}^{\mu}) \quad (30) $$

The congruence (28) induces in this way a transformation

$$ x'^{\mu} = \Lambda_{\nu}^{\mu}(\mathbf{M}) x^{\nu} \quad (31) $$

where $\Lambda_{\nu}^{\mu}(\mathbf{M})$ can be found to be

$$ \Lambda_{\nu}^{\mu}(\mathbf{M}) = \frac{1}{2} \mathrm{Tr} (\tilde{\sigma}^{\mu} \mathbf{M} \sigma_{\nu} \mathbf{M}^{\dagger}) \quad (32) $$

This equation can be solved to obtain $\mathbf{M}$ from $\Lambda$. The matrices $\mathbf{M}$ and $-\mathbf{M}$ generate the same $\Lambda$, so this homomorphism is two-to-one. The variables $x^{\mu}$ are coordinates in a Minkowskian (1+3)-dimensional space and the action of the system can be seen as a Lorentz transformation in SO(1, 3).

Having set the general scenario, let us have a closer look at the transfer matrix corresponding to left invisibility (right invisibility can be dealt with in an analogous way); namely,

$$ \mathbf{M} = \begin{pmatrix} 1 & R \\ 0 & 1 \end{pmatrix} \quad (33) $$
---PAGE_BREAK---

where, for simplicity, we have dropped the superscript from $R^r$, as there is no risk of confusion.
Under the homomorphism (32), this matrix generates the Lorentz transformation

$$
\Lambda(\mathbf{M}) = \begin{pmatrix}
1 + |R|^2/2 & \mathrm{Re}\,R & -\mathrm{Im}\,R & -|R|^2/2 \\
\mathrm{Re}\,R & 1 & 0 & -\mathrm{Re}\,R \\
-\mathrm{Im}\,R & 0 & 1 & \mathrm{Im}\,R \\
|R|^2/2 & \mathrm{Re}\,R & -\mathrm{Im}\,R & 1 - |R|^2/2
\end{pmatrix} \tag{34}
$$

According to Wigner [56], the little group is a subgroup of the Lorentz transformations under which a standard vector $s^\mu$ remains invariant. When $s^\mu$ is timelike, the little group is the rotation group SO(3). If $s^\mu$ is spacelike, the little group is SO(1, 2).
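The homomorphism (32) can be checked numerically against the matrix (34). In the sketch below, we realize the trace formula with $\sigma$-matrices in the self-dual form $(\mathbb{1}, \sigma)$ in both slots; this is our sign convention, chosen because it reproduces (34) entry by entry:

```python
import numpy as np

s4 = [np.eye(2, dtype=complex),
      np.array([[0, 1], [1, 0]], complex),
      np.array([[0, -1j], [1j, 0]], complex),
      np.array([[1, 0], [0, -1]], complex)]

def Lam(M):
    """Lorentz matrix induced by M via the trace formula of Eq. (32)."""
    return np.array([[0.5 * np.trace(s4[m] @ M @ s4[n] @ M.conj().T).real
                      for n in range(4)] for m in range(4)])

R = 0.3 - 0.7j                              # an arbitrary reflection coefficient
M = np.array([[1, R], [0, 1]])              # Eq. (33)

a, b, r2 = R.real, R.imag, abs(R) ** 2
expected = np.array([[1 + r2 / 2, a, -b, -r2 / 2],
                     [a, 1, 0, -a],
                     [-b, 0, 1, b],
                     [r2 / 2, a, -b, 1 - r2 / 2]])   # Eq. (34)

assert np.allclose(Lam(M), expected)

eta = np.diag([1.0, -1, -1, -1])
assert np.allclose(Lam(M).T @ eta @ Lam(M), eta)     # a genuine Lorentz matrix
```

The final assertion confirms that the induced map preserves the Minkowski metric, as the two-to-one homomorphism onto SO(1, 3) requires.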
In this context, the matrix (34) is an instance of a null rotation; the little group when $s^\mu$ is a lightlike or null vector, which is related to E(2), the symmetry group of the two-dimensional Euclidean space [57].

If we write (34) in the form $\Lambda(\mathbf{M}) = \exp(\mathbf{N})$, we can easily work out that

$$
\mathbf{N} = \begin{pmatrix}
0 & \operatorname{Re} R & -\operatorname{Im} R & 0 \\
\operatorname{Re} R & 0 & 0 & -\operatorname{Re} R \\
-\operatorname{Im} R & 0 & 0 & \operatorname{Im} R \\
0 & \operatorname{Re} R & -\operatorname{Im} R & 0
\end{pmatrix} \quad (35)
$$

This is a nilpotent matrix and the vectors annihilated by $\mathbf{N}$ are invariant under $\Lambda(\mathbf{M})$. In terms of the Lie algebra so(1, 3), $\mathbf{N}$ can be expressed as

$$
\mathbf{N} = \mathrm{Re}\,R (\mathbf{K}_1 + \mathbf{J}_2) - \mathrm{Im}\,R (\mathbf{K}_2 + \mathbf{J}_1) \qquad (36)
$$

where $\mathbf{K}_i$ generate boosts and $\mathbf{J}_i$ rotations ($i=1,2,3$) [58]. Observe that the rapidity of the boost and the angle of the rotation have the same norm. The matrices $\mathbf{N}$ generate a two-parameter Abelian subgroup.

Let us take, for the time being, $\operatorname{Re} R = 0$, as happens for *PT*-invariant invisibility. We can express $\mathbf{K}_2 + \mathbf{J}_1$ as the differential operator

$$
\mathbf{K}_2 + \mathbf{J}_1 \mapsto (x^2\partial_0 + x^0\partial_2) + (x^2\partial_3 - x^3\partial_2) = x^2(\partial_0 + \partial_3) + (x^0 - x^3)\partial_2 \quad (37)
$$

As we can appreciate, the combinations

$$
x^1, \quad x^0 - x^3, \quad (x^0)^2 - (x^2)^2 - (x^3)^2 \tag{38}
$$

remain invariant. Suppressing the inessential coordinate $x^1$, the flow lines of the Killing vector (37) are the intersections of a null plane $x^0 - x^3 = c_2$ with a hyperboloid $(x^0)^2 - (x^2)^2 - (x^3)^2 = c_3$. In the case $c_3 = 0$, the hyperboloid degenerates to a light cone and the orbits become parabolas lying in the corresponding null planes.

## 6. Hyperbolic Geometry and Invisibility

Although the relativistic hyperboloid in Minkowski space constitutes by itself a model of hyperbolic geometry (understood in a broad sense, as the study of spaces with constant negative curvature), it is not the best suited to display some features.

Let us consider the customary tridimensional hyperbolic space $\mathbb{H}^3$, defined in terms of the upper half-space $\{(x,y,z) \in \mathbb{R}^3 \mid z > 0\}$, equipped with the metric [59]

$$
ds^2 = \frac{dx^2 + dy^2 + dz^2}{z^2} \tag{39}
$$

The geodesics are the semicircles in $\mathbb{H}^3$ orthogonal to the plane $z=0$.
---PAGE_BREAK---

We can think of the plane $z = 0$ in $\mathbb{R}^3$ as the complex plane $\mathbb{C}$ with the natural identification $(x,y,0) \mapsto w = x + iy$. We need to add the point at infinity, so that $\hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$, which is usually referred to as the Riemann sphere, and identify $\hat{\mathbb{C}}$ as the boundary of $\mathbb{H}^3$.

Every matrix $\mathbf{M}$ in SL(2, $\mathbb{C}$) induces a natural mapping in $\hat{\mathbb{C}}$ via Möbius (or bilinear) transformations [60]

$$w' = \frac{M_{11}w + M_{12}}{M_{21}w + M_{22}} \qquad (40)$$

Note that any matrix obtained by multiplying $\mathbf{M}$ by a complex scalar $\lambda$ gives the same transformation, so a Möbius transformation determines its matrix only up to scalar multiples. In other words, we need to quotient out SL(2, $\mathbb{C}$) by its center $\{\mathbb{1}, -\mathbb{1}\}$: the resulting quotient group is known as the projective linear group and is usually denoted PSL(2, $\mathbb{C}$).

Observe that we can break down the action (40) into a composition of maps of the form

$$w \mapsto w + \lambda, \quad w \mapsto \lambda w, \quad w \mapsto -1/w \qquad (41)$$

with $\lambda \in \mathbb{C}$.
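Two facts stated above are worth a quick numerical check: the invisible transfer matrix (33) acts on $\hat{\mathbb{C}}$ through (40) as the pure translation $w \mapsto w + R$, and scalar multiples of a matrix induce the same map (the PSL(2, $\mathbb{C}$) quotient). A minimal sketch, with an arbitrary $R$ of our choosing:

```python
import numpy as np

def mobius(M, w):
    """Moebius action of a 2x2 complex matrix on a point w, Eq. (40)."""
    return (M[0, 0] * w + M[0, 1]) / (M[1, 0] * w + M[1, 1])

R = 0.3 - 0.7j
M = np.array([[1, R], [0, 1]])                          # Eq. (33)

w = 0.2 + 0.1j
assert np.isclose(mobius(M, w), w + R)                  # a pure translation of C
assert np.isclose(mobius(2.5j * M, w), mobius(M, w))    # scalar multiples: same map
```

The translation seen here is precisely the parallel-displacement action discussed in the classification of isometries below the canonical forms.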
Then we can extend the Möbius transformations to all of $\mathbb{H}^3$ as follows:

$$ (w,z) \mapsto (w+\lambda,z), \quad (w,z) \mapsto (\lambda w, |\lambda|z), \quad (w,z) \mapsto \left(-\frac{w^*}{|w|^2+z^2}, \frac{z}{|w|^2+z^2}\right) \qquad (42) $$

The expressions above come from decomposing the action on $\hat{\mathbb{C}}$ of each of the elements of PSL(2, $\mathbb{C}$) in question into two inversions (reflections) in circles in $\hat{\mathbb{C}}$. Each such inversion has a unique extension to $\mathbb{H}^3$ as an inversion in the hemisphere spanned by the circle, and composing appropriate pairs of inversions gives us these formulas.

In fact, one can show that PSL(2, $\mathbb{C}$) preserves the metric on $\mathbb{H}^3$. Moreover, every isometry of $\mathbb{H}^3$ can be seen to be the extension of a conformal map of $\hat{\mathbb{C}}$ to itself, since it must send hemispheres orthogonal to $\hat{\mathbb{C}}$ to hemispheres orthogonal to $\hat{\mathbb{C}}$, hence circles in $\hat{\mathbb{C}}$ to circles in $\hat{\mathbb{C}}$. Thus, all orientation-preserving isometries of $\mathbb{H}^3$ are given by elements of PSL(2, $\mathbb{C}$) acting as above.

In the classification of these isometries, the notion of fixed points is of utmost importance. These points are defined by the condition $w' = w$ in (40), whose solutions are

$$w_f = \frac{(M_{11} - M_{22}) \pm \sqrt{[\mathrm{Tr}(\mathbf{M})]^2 - 4}}{2M_{21}} \qquad (43)$$

So, they are determined by the trace of $\mathbf{M}$. When the trace is a real number, the induced Möbius transformations are called elliptic, hyperbolic, or parabolic, according to whether $[\mathrm{Tr}(\mathbf{M})]^2$ is less than, greater than, or equal to 4, respectively.
The canonical representatives of those matrices are [61]

$$ \underbrace{\begin{pmatrix} e^{i\theta/2} & 0 \\ 0 & e^{-i\theta/2} \end{pmatrix}}_{\text{elliptic}}, \quad \underbrace{\begin{pmatrix} e^{\xi/2} & 0 \\ 0 & e^{-\xi/2} \end{pmatrix}}_{\text{hyperbolic}}, \quad \underbrace{\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}}_{\text{parabolic}} \qquad (44) $$

while the induced geometrical actions are

$$w' = we^{i\theta}, \quad w' = we^{\xi}, \quad w' = w + \lambda \qquad (45)$$

that is, a rotation of angle $\theta$ (so it fixes the $z$ axis), a squeezing of parameter $\xi$ (it has two fixed points in $\hat{\mathbb{C}}$, no fixed points in $\mathbb{H}^3$, and every hyperplane in $\mathbb{H}^3$ that contains the geodesic joining the two fixed points in $\hat{\mathbb{C}}$ is invariant), and a parallel displacement of magnitude $\lambda$, respectively. We emphasize that this latter action is the only one without a Euclidean analogue. Indeed, in view of (33), this is precisely the action associated with an invisible scatterer. The far-reaching consequences of this geometrical interpretation will be developed elsewhere.
---PAGE_BREAK---

**7. Concluding Remarks**

We have studied unidirectional invisibility by a complex scattering potential, which is characterized by a set of *PT*-invariant equations. Consequently, the *PT*-symmetric invisible configurations are quite special, for they possess the same symmetry as the equations.

We have shown how to cast this phenomenon in terms of space-time variables, obtaining in this way a relativistic presentation of invisibility as the set of null rotations. By resorting to elementary notions of hyperbolic geometry, we have interpreted in a natural way the action of the transfer matrix in this case as a parallel displacement.
+ +We think that our results are yet another example of the advantages of these geometrical methods: +we have devised a geometrical tool to analyze invisibility in quite a concise way that, in addition, +can be closely related to other fields of physics. + +**Acknowledgments:** We acknowledge illuminating discussions with Antonio F. Costa, José F. Cariñena and José María Montesinos. Financial support from the Spanish Research Agency (Grant FIS2011-26786) is gratefully acknowledged. + +**Author Contributions:** Both authors contributed equally to the theoretical analysis, numerical calculations, and writing of the paper. + +**Conflicts of Interest:** The authors declare no conflict of interest. + +References + +1. Bender, C.M.; Boettcher, S. Real spectra in non-Hermitian Hamiltonians having *PT* symmetry. Phys. Rev. Lett. **1998**, *80*, 5243–5246. +2. Bender, C.M.; Boettcher, S.; Meisinger, P.N. *PT*-symmetric quantum mechanics. J. Math. Phys. **1999**, *40*, 2201–2229. +3. Bender, C.M.; Brody, D.C.; Jones, H.F. Complex extension of quantum mechanics. Phys. Rev. Lett. **2002**, *89*, doi:10.1103/PhysRevLett.89.270401. +4. Bender, C.M.; Brody, D.C.; Jones, H.F. Must a Hamiltonian be Hermitian? Am. J. Phys. **2003**, *71*, 1095–1102. +5. Bender, C.M. Making sense of non-Hermitian Hamiltonians. Rep. Prog. Phys. **2007**, *70*, 947–1018. +6. Bender, C.M.; Mannheim, P.D. *PT* symmetry and necessary and sufficient conditions for the reality of energy eigenvalues. Phys. Lett. A **2010**, *374*, 1616–1620. +7. Assis, P. *Non-Hermitian Hamiltonians in Field Theory: PT-symmetry and Applications*; VDM: Saarbrücken, Germany, 2010. +8. Moiseyev, N. *Non-Hermitian Quantum Mechanics*; Cambridge University Press: Cambridge, UK, 2011. +9. El-Ganainy, R.; Makris, K.G.; Christodoulides, D.N.; Musslimani, Z.H. Theory of coupled optical *PT*-symmetric structures. Opt. Lett. **2007**, *32*, 2632–2634. +10. Bendix, O.; Fleischmann, R.; Kottos, T.; Shapiro, B. 
Exponentially fragile *PT* symmetry in lattices with localized eigenmodes. Phys. Rev. Lett. **2009**, *103*, doi:10.1103/PhysRevLett.103.030402.
11. Rüter, C.E.; Makris, K.G.; El-Ganainy, R.; Christodoulides, D.N.; Segev, M.; Kip, D. Observation of parity-time symmetry in optics. Nat. Phys. **2010**, *6*, 192–195.
12. Makris, K.G.; El-Ganainy, R.; Christodoulides, D.N.; Musslimani, Z.H. Beam dynamics in *PT* symmetric optical lattices. Phys. Rev. Lett. **2008**, *100*, 103904:1–103904:4.
13. Longhi, S. Bloch oscillations in complex crystals with *PT* symmetry. Phys. Rev. Lett. **2009**, *103*, 123601:1–123601:4.
14. Sukhorukov, A.A.; Xu, Z.; Kivshar, Y.S. Nonlinear suppression of time reversals in *PT*-symmetric optical couplers. Phys. Rev. A **2010**, *82*, doi:10.1103/PhysRevA.82.043818.
15. Ahmed, Z.; Bender, C.M.; Berry, M.V. Reflectionless potentials and *PT* symmetry. J. Phys. A **2005**, *38*, L627–L630.
16. Lin, Z.; Ramezani, H.; Eichelkraut, T.; Kottos, T.; Cao, H.; Christodoulides, D.N. Unidirectional invisibility induced by *PT*-symmetric periodic structures. Phys. Rev. Lett. **2011**, *106*, doi:10.1103/PhysRevLett.106.213901.
17. Longhi, S. Invisibility in *PT*-symmetric complex crystals. J. Phys. A **2011**, *44*, doi:10.1088/1751-8113/44/48/485302.
---PAGE_BREAK---

18. Sánchez-Soto, L.L.; Monzón, J.J.; Barriuso, A.G.; Cariñena, J. The transfer matrix: A geometrical perspective. *Phys. Rep.* **2012**, *513*, 191–227.

19. Monzón, J.J.; Sánchez-Soto, L.L. Lossless multilayers and Lorentz transformations: More than an analogy. *Opt. Commun.* **1999**, *162*, 1–6.

20. Monzón, J.J.; Sánchez-Soto, L.L. Fully relativistic-like formulation of multilayer optics. *J. Opt. Soc. Am. A* **1999**, *16*, 2013–2018.

21. Monzón, J.J.; Yonte, T.; Sánchez-Soto, L.L. Basic factorization for multilayers. *Opt. Lett.* **2001**, *26*, 370–372.

22. Yonte, T.; Monzón, J.J.; Sánchez-Soto, L.L.; Cariñena, J.F.; López-Lacasta, C.
Understanding multilayers from a geometrical viewpoint. *J. Opt. Soc. Am. A* **2002**, *19*, 603–609. + +23. Monzón, J.J.; Yonte, T.; Sánchez-Soto, L.L.; Cariñena, J.F. Geometrical setting for the classification of multilayers. *J. Opt. Soc. Am. A* **2002**, *19*, 985–991. + +24. Barriuso, A.G.; Monzón, J.J.; Sánchez-Soto, L.L. General unit-disk representation for periodic multilayers. *Opt. Lett.* **2003**, *28*, 1501–1503. + +25. Barriuso, A.G.; Monzón, J.J.; Sánchez-Soto, L.L.; Cariñena, J.F. Vectorlike representation of multilayers. *J. Opt. Soc. Am. A* **2004**, *21*, 2386–2391. + +26. Barriuso, A.G.; Monzón, J.J.; Sánchez-Soto, L.L.; Costa, A.F. Escher-like quasiperiodic heterostructures. *J. Phys. A* **2009**, *42*, 192002:1–192002:9. + +27. Muga, J.G.; Palao, J.P.; Navarro, B.; Egusquiza, I.L. Complex absorbing potentials. *Phys. Rep.* **2004**, *395*, 357–426. + +28. Levai, G.; Znojil, M. Systematic search for PT-symmetric potentials with real spectra. *J. Phys. A* **2000**, *33*, 7165–7180. + +29. Ahmed, Z. Schrödinger transmission through one-dimensional complex potentials. *Phys. Rev. A* **2001**, *64*, 042716:1–042716:4. + +30. Ahmed, Z. Energy band structure due to a complex, periodic, PT-invariant potential. *Phys. Lett. A* **2001**, *286*, 231–235. + +31. Mostafazadeh, A. Spectral singularities of complex scattering potentials and infinite reflection and transmission coefficients at real energies. *Phys. Rev. Lett.* **2009**, *102*, 220402:1–220402:4. + +32. Cannata, F.; Dedonder, J.P.; Ventura, A. Scattering in PT-symmetric quantum mechanics. *Ann. Phys.* **2007**, *322*, 397–433. + +33. Chong, Y.D.; Ge, L.; Stone, A.D. PT-symmetry breaking and laser-absorber modes in optical scattering systems. *Phys. Rev. Lett.* **2011**, *106*, doi:10.1103/PhysRevLett.106.093902. + +34. Ahmed, Z. New features of scattering from a one-dimensional non-Hermitian (complex) potential. *J. Phys. A* **2012**, *45*, doi:10.1088/1751-8113/45/3/032004. + +35. 
Boonserm, P.; Visser, M. One dimensional scattering problems: A pedagogical presentation of the relationship between reflection and transmission amplitudes. *Thai J. Math.* **2010**, *8*, 83–97. + +36. Mostafazadeh, A.; Mehri-Dehnavi, H. Spectral singularities, biorthonormal systems and a two-parameter family of complex point interactions. *J. Phys. A* **2009**, *42*, doi:10.1088/1751-8113/42/12/125303. + +37. Aktosun, T. A factorization of the scattering matrix for the Schrödinger equation and for the wave equation in one dimension. *J. Math. Phys.* **1992**, *33*, 3865–3869. + +38. Aktosun, T.; Klaus, M.; van der Mee, C. Factorization of scattering matrices due to partitioning of potentials in one-dimensional Schrödinger-type equations. *J. Math. Phys.* **1996**, *37*, 5897–5915. + +39. Marchenko, V.A. *Sturm-Liouville Operators and Their Applications*; AMS Chelsea: Providence, RI, USA, 1986. + +40. Tunca, G.; Bairamov, E. Discrete spectrum and principal functions of non-selfadjoint differential operator. *Czech J. Math.* **1999**, *49*, 689–700. + +41. Naimark, M.A. Investigation of the spectrum and the expansion in eigenfunctions of a non-selfadjoint operator of the second order on a semi-axis. *AMS Transl.* **1960**, *16*, 103–193. + +42. Pavlov, B.S. The nonself-adjoint Schrödinger operators. *Topics Math. Phys.* **1967**, *1*, 87–114. + +43. Naimark, M.A. *Linear Differential Operators: Part II*; Ungar: New York, NY, USA, 1968. + +44. Samsonov, B.F. SUSY transformations between diagonalizable and non-diagonalizable Hamiltonians. *J. Phys. A* **2005**, *38*, L397–L403. + +45. Andrianov, A.A.; Cannata, F.; Sokolov, A.V. Spectral singularities for non-Hermitian one-dimensional Hamiltonians: Puzzles with resolution of identity. *J. Math. Phys.* **2010**, *51*, 052104:1–052104:22 +---PAGE_BREAK--- + +46. Chaos-Cador, L.; García-Calderón, G. Resonant states for complex potentials and spectral singularities. *Phys. Rev. 
A* **2013**, *87*, doi:10.1103/PhysRevA.87.042114.

47. Schomerus, H. Quantum noise and self-sustained radiation of *PT*-symmetric systems. *Phys. Rev. Lett.* **2010**, *104*, doi:10.1103/PhysRevLett.104.233601.

48. Longhi, S. *PT*-symmetric laser absorber. *Phys. Rev. A* **2010**, *82*, doi:10.1103/PhysRevA.82.031801.

49. Mostafazadeh, A. Nonlinear spectral singularities of a complex barrier potential and the lasing threshold condition. *Phys. Rev. A* **2013**, *87*, doi:10.1103/PhysRevA.87.063838.

50. Mostafazadeh, A. Invisibility and *PT*-symmetry. *Phys. Rev. A* **2013**, *87*, doi:10.1103/PhysRevA.87.012103.

51. Müller, M.; Rotter, I. Exceptional points in open quantum systems. *J. Phys. A* **2008**, *41*, 244018:1–244018:15.

52. Mehri-Dehnavi, H.; Mostafazadeh, A. Geometric phase for non-Hermitian Hamiltonians and its holonomy interpretation. *J. Math. Phys.* **2008**, *49*, 082105:1–082105:17.

53. Monzón, J.J.; Barriuso, A.G.; Montesinos-Amilibia, J.M.; Sánchez-Soto, L.L. Geometrical aspects of *PT*-invariant transfer matrices. *Phys. Rev. A* **2013**, *87*, doi:10.1103/PhysRevA.87.012111.

54. Mandel, L.; Wolf, E. *Optical Coherence and Quantum Optics*; Cambridge University Press: Cambridge, UK, 1995.

55. Barut, A.O.; Rączka, R. *Theory of Group Representations and Applications*; PWN: Warsaw, Poland, 1977; Section 17.2.

56. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204.

57. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.

58. Weinberg, S. *The Quantum Theory of Fields*; Cambridge University Press: Cambridge, UK, 2005; Volume 1.

59. Iversen, B. *Hyperbolic Geometry*; Cambridge University Press: Cambridge, UK, 1992; Chapter VIII.

60. Ratcliffe, J.G. *Foundations of Hyperbolic Manifolds*; Springer: Berlin, Germany, 2006; Section 4.3.

61. Anderson, J.W.
*Hyperbolic Geometry*; Springer: New York, NY, USA, 1999; Chapter 3.

© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere

Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³

¹ Department of Physics, Middle East Technical University, Ankara 06800, Turkey; E-Mail: baskal@newton.physics.metu.edu.tr

² Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA

³ Department of Radiology, New York University, New York, NY 10016, USA; E-Mail: marilyne.noz@gmail.com

* E-Mail: yskim@umd.edu; Tel.: +1-301-937-1306.

Received: 28 February 2014; in revised form: 28 May 2014 / Accepted: 9 June 2014 / Published: 25 June 2014

**Abstract:** The second-order differential equation for a damped harmonic oscillator can be converted to two coupled first-order equations, with two two-by-two matrices leading to the group $Sp(2)$. It is shown that this oscillator system contains the essential features of Wigner's little groups dictating the internal space-time symmetries of particles in the Lorentz-covariant world. The little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. It is shown that the oscillation and damping modes of the oscillator correspond to the little groups for massive and imaginary-mass particles, respectively. When the system makes the transition from the oscillation to damping mode, it corresponds to the little group for massless particles. Rotations around the momentum leave the four-momentum invariant.
This degree of freedom extends the $Sp(2)$ symmetry to that of $SL(2, c)$ corresponding to the Lorentz group applicable to the four-dimensional Minkowski space. The Poincaré sphere contains the $SL(2, c)$ symmetry. In addition, it has a non-Lorentzian parameter allowing us to reduce the mass continuously to zero. It is thus possible to construct the little group for massless particles from that of the massive particle by reducing its mass to zero. Spin-1/2 particles and spin-1 particles are discussed in detail.

**Keywords:** damped harmonic oscillators; coupled first-order equations; unimodular matrices; Wigner's little groups; Poincaré sphere; $Sp(2)$ group; $SL(2, c)$ group; gauge invariance; neutrinos; photons

PACS: 03.65.Fd, 03.67.-a, 05.30.-d

# 1. Introduction

We are quite familiar with the second-order differential equation

$$m \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + Ky = 0 \quad (1)$$

for a damped harmonic oscillator. This equation has the same mathematical form as

$$L \frac{d^2 Q}{dt^2} + R \frac{dQ}{dt} + \frac{1}{C} Q = 0 \qquad (2)$$

for electrical circuits, where L, R, and C are the inductance, resistance, and capacitance, respectively. These two equations play fundamental roles in physical and engineering sciences. Since they take the same mathematical form, one set of problems can be studied in terms of the other. For instance, many mechanical phenomena can be studied in terms of electrical circuits.
---PAGE_BREAK---

In Equation (1), when $b = 0$, the equation is that of a simple harmonic oscillator with the frequency $\omega = \sqrt{K/m}$. As $b$ increases, the oscillation becomes damped. When $b$ is larger than $2\sqrt{Km}$, the oscillation disappears, as the solution is a damping mode.

Increasing *b* continuously, while difficult mechanically, can be done electrically using Equation (2) by adjusting the resistance *R*.
The transition from the oscillation mode to the damping mode is a continuous physical process.

This *b* term leads to energy dissipation, but it is not regarded as a fundamental force. It is inconvenient in the Hamiltonian formulation of mechanics and troublesome in the transition to quantum mechanics, yet it plays an important role in classical mechanics. In this paper, this term will help us understand the fundamental space-time symmetries of elementary particles.

We are interested in constructing the fundamental symmetry group for particles in the Lorentz-covariant world. For this purpose, we transform the second-order differential equation of Equation (1) to two coupled first-order equations using two-by-two matrices. Only two linearly independent matrices are needed. They are the anti-symmetric and symmetric matrices

$$A = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad S = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \qquad (3)$$

respectively. The anti-symmetric matrix *A* is Hermitian and corresponds to the oscillation part, while the symmetric matrix *S* corresponds to the damping.

These two matrices lead to the *Sp*(2) group consisting of two-by-two unimodular matrices with real elements. This group is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates. This group is commonly called the *O*(2, 1) group.

This *O*(2, 1) group can explain all the essential features of Wigner's little groups dictating internal space-time symmetries of particles [1]. Wigner defined his little groups as the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. He observed that the little groups are different for massive, massless, and imaginary-mass particles.
It has been a challenge to design a mathematical model which will combine those three into one formalism, but we show that the damped harmonic oscillator provides the desired mathematical framework. + +For the two space-like coordinates, we can assign one of them to the direction of the momentum, and the other to the direction perpendicular to the momentum. Let the direction of the momentum be along the z axis, and let the perpendicular direction be along the x axis. We therefore study the kinematics of the group within the zx plane, then see what happens when we rotate the system around the z axis without changing the momentum [2]. + +The Poincaré sphere for polarization optics contains the *SL*(2, *c*) symmetry isomorphic to the four-dimensional Lorentz group applicable to the Minkowski space [3–7]. Thus, the Poincaré sphere extends Wigner’s picture into the three space-like and one time-like coordinates. Specifically, this extension adds rotations around the given momentum which leaves the four-momentum invariant [2]. + +While the particle mass is a Lorentz-invariant variable, the Poincaré sphere contains an extra variable which allows the mass to change. This variable allows us to take the mass-limit of the symmetry operations. The transverse rotational degrees of freedom collapse into one gauge degree of freedom and polarization of neutrinos is a consequence of the requirement of gauge invariance [8,9]. + +The *SL*(2,*c*) group contains symmetries not seen in the three-dimensional rotation group. While we are familiar with two spinors for a spin-1/2 particle in nonrelativistic quantum mechanics, there are two additional spinors due to the reflection properties of the Lorentz group. There are thus 16 bilinear combinations of those four spinors. This leads to two scalars, two four-vectors, and one antisymmetric four-by-four tensor. The Maxwell-type electromagnetic field tensor can be obtained as a massless limit of this tensor [10]. 
In Section 2, we review the damped harmonic oscillator in classical mechanics, and note that the solution can be either in the oscillation mode or damping mode depending on the magnitude of
---PAGE_BREAK---

the damping parameter. The translation of the second-order equation into a first-order differential equation with two-by-two matrices is possible. This first-order equation is similar to the Schrödinger equation for a spin-1/2 particle in a magnetic field.

Section 3 shows that the two-by-two matrices of Section 2 can be formulated in terms of the $Sp(2)$ group. These matrices can be decomposed into the Bargmann and Wigner decompositions. Furthermore, this group is isomorphic to the three-dimensional Lorentz group with two space-like and one time-like coordinates.

In Section 4, it is noted that this three-dimensional Lorentz group has all the essential features of Wigner's little groups which dictate the internal space-time symmetries of the particles in the Lorentz-covariant world. Wigner's little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. The Bargmann and Wigner decompositions are shown to be useful tools for studying the little groups.

In Section 5, we note that the given momentum is invariant under rotations around it. The addition of this rotational degree of freedom extends the $Sp(2)$ symmetry to the six-parameter $SL(2,c)$ symmetry. In the space-time language, this extends the three-dimensional group to the Lorentz group applicable to three space and one time dimensions.

Section 6 shows that the Poincaré sphere contains the symmetries of the $SL(2,c)$ group. In addition, it contains an extra variable which allows us to change the mass of the particle, which is not allowed in the Lorentz group.

In Section 7, the symmetries of massless particles are studied in detail. In addition to rotation around the momentum, Wigner's little group generates gauge transformations.
While gauge transformations on spin-1 photons are well known, the gauge invariance leads to the polarization of massless spin-1/2 particles, as observed in neutrino polarizations.

In Section 8, it is noted that there are four spinors for spin-1/2 particles in the Lorentz-covariant world. It is thus possible to construct 16 bilinear forms, applicable to two scalars, two vectors, and one antisymmetric second-rank tensor. The electromagnetic field tensor is derived as the massless limit. This tensor is shown to be gauge-invariant.

## 2. Classical Damped Oscillators

For convenience, we write Equation (1) as

$$ \frac{d^2 y}{dt^2} + 2\mu \frac{dy}{dt} + \omega^2 y = 0 \quad (4) $$

with

$$ \omega = \sqrt{\frac{K}{m}}, \quad \text{and} \quad \mu = \frac{b}{2m} \qquad (5) $$

The damping parameter $\mu$ is positive when there are no external forces. When $\omega$ is greater than $\mu$, the solution takes the form

$$ y = e^{-\mu t} [C_1 \cos(\omega' t) + C_2 \sin(\omega' t)] \quad (6) $$

where

$$ \omega' = \sqrt{\omega^2 - \mu^2} \quad (7) $$

and $C_1$ and $C_2$ are the constants to be determined by the initial conditions. This expression is for a damped harmonic oscillator. On the other hand, when $\mu$ is greater than $\omega$, the quantity inside the square-root sign is negative, and the solution becomes

$$ y = e^{-\mu t} [C_3 \cosh(\mu' t) + C_4 \sinh(\mu' t)] \quad (8) $$

with

$$ \mu' = \sqrt{\mu^2 - \omega^2} \quad (9) $$
---PAGE_BREAK---

If $\omega = \mu$, both Equations (6) and (8) collapse into one solution

$$y(t) = e^{-\mu t} [C_5 + C_6 t] \quad (10)$$

These three different cases are treated separately in textbooks. Here we are interested in the transition from Equation (6) to Equation (8), via Equation (10). For convenience, we start from $\mu$ greater than $\omega$ with $\mu'$ given by Equation (9).

For a given value of $\mu$, the square root becomes zero when $\omega$ equals $\mu$.
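Before following the transition through the point $\omega = \mu$, the three closed-form solutions (6), (8) and (10) can be spot-checked numerically (an illustrative sketch of ours; the constants $C_i$ are chosen arbitrarily) by evaluating a central-difference residual of Equation (4):

```python
import math

def residual(y, t, mu, om, h=1e-4):
    """Central-difference estimate of y'' + 2*mu*y' + om^2*y at time t."""
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return d2 + 2 * mu * d1 + om ** 2 * y(t)

mu, om = 0.3, 1.0                        # oscillation mode: om > mu
omp = math.sqrt(om ** 2 - mu ** 2)       # omega' of Equation (7)
y_osc = lambda t: math.exp(-mu * t) * (math.cos(omp * t) + 0.5 * math.sin(omp * t))

mu2, om2 = 1.0, 0.3                      # damping mode: mu > om
mup = math.sqrt(mu2 ** 2 - om2 ** 2)     # mu' of Equation (9)
y_damp = lambda t: math.exp(-mu2 * t) * (math.cosh(mup * t) + 0.5 * math.sinh(mup * t))

y_crit = lambda t: math.exp(-1.0 * t) * (1.0 + 0.5 * t)   # mu = om = 1, Equation (10)

for y, m, w in ((y_osc, mu, om), (y_damp, mu2, om2), (y_crit, 1.0, 1.0)):
    assert abs(residual(y, 1.0, m, w)) < 1e-5
```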
If $\omega$ becomes larger, the square root becomes imaginary and divides into two branches.

$$\pm i \sqrt{\omega^2 - \mu^2} \qquad (11)$$

This is a continuous transition, but not an analytic continuation. To study this in detail, we translate the second-order differential equation of Equation (4) into a first-order equation with two-by-two matrices.

Given the solutions of Equations (6) and (10), it is convenient to use $\psi(t)$ defined as

$$\psi(t) = e^{\mu t} y(t), \quad \text{and} \quad y = e^{-\mu t} \psi(t) \qquad (12)$$

Then $\psi(t)$ satisfies the differential equation

$$\frac{d^2 \psi(t)}{dt^2} + (\omega^2 - \mu^2)\psi(t) = 0 \qquad (13)$$

## 2.1. Two-by-Two Matrix Formulation

In order to convert this second-order equation to a first-order system, we introduce $\psi_1(t)$ and $\psi_2(t)$ satisfying the two coupled differential equations

$$\frac{d\psi_1}{dt} = (\mu - \omega)\psi_2(t) \qquad (14)$$

$$\frac{d\psi_2}{dt} = (\mu + \omega)\psi_1(t) \qquad (15)$$

which can be written in matrix form as

$$\frac{d}{dt} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \begin{pmatrix} 0 & \mu - \omega \\ \mu + \omega & 0 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \qquad (16)$$

Using the Hermitian and anti-Hermitian matrices of Equation (3) in Section 1, we construct the linear combination

$$H = \omega \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} + \mu \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \qquad (17)$$

We can then consider the first-order differential equation

$$i \frac{\partial}{\partial t} \psi(t) = H \psi(t) \qquad (18)$$

While this equation is like the Schrödinger equation for an electron in a magnetic field, the two-by-two matrix is not Hermitian. Its first matrix is Hermitian, but the second matrix is anti-Hermitian.
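The construction of $H$ in Equation (17) can be checked directly (a small Python sketch of ours, with arbitrary illustrative values): $-iH$ must reproduce the real matrix of Equation (16), while $H$ itself fails the Hermiticity test.

```python
omega, mu = 1.0, 0.4   # arbitrary illustrative values

A = ((0, -1j), (1j, 0))    # Hermitian matrix of Equation (3)
S = ((0, 1j), (1j, 0))     # anti-Hermitian matrix of Equation (3)

# H = omega * A + mu * S, as in Equation (17)
H = tuple(tuple(omega * a + mu * s for a, s in zip(ra, rs))
          for ra, rs in zip(A, S))

# Equation (18) reads i d(psi)/dt = H psi, i.e., d(psi)/dt = (-i H) psi,
# which should coincide with the matrix of Equation (16).
minus_iH = tuple(tuple(-1j * h for h in row) for row in H)
target = ((0, mu - omega), (mu + omega, 0))
assert all(abs(minus_iH[i][j] - target[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# H is not Hermitian: its (0,1) entry differs from the conjugate of (1,0)
assert H[0][1] != H[1][0].conjugate()
```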
It is of course an interesting problem to give a physical interpretation to this non-Hermitian matrix
---PAGE_BREAK---

in connection with quantum dissipation [11], but this is beyond the scope of the present paper. The solution of Equation (18) is

$$
\psi(t) = \exp \left\{ \begin{pmatrix} 0 & -\omega + \mu \\ \omega + \mu & 0 \end{pmatrix} t \right\} \begin{pmatrix} C_7 \\ C_8 \end{pmatrix} \quad (19)
$$

where $C_7 = \psi_1(0)$ and $C_8 = \psi_2(0)$, respectively.

## 2.2. Transition from the Oscillation Mode to the Damping Mode

It appears straightforward to compute this expression by a Taylor expansion, but it is not. This issue was extensively discussed in the earlier papers by two of us [12,13]. The key idea is to write the matrix

$$
\begin{pmatrix}
0 & -\omega + \mu \\
\omega + \mu & 0
\end{pmatrix}
\qquad (20)
$$

as a similarity transformation of

$$
\omega' \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad (\omega > \mu) \tag{21}
$$

and as that of

$$
\mu' \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad (\mu > \omega) \tag{22}
$$

with $\omega'$ and $\mu'$ defined in Equations (7) and (9), respectively. Then the Taylor expansion leads to

$$
\begin{pmatrix}
\cos(\omega' t) & -\sqrt{(\omega-\mu)/(\omega+\mu)} \, \sin(\omega' t) \\
\sqrt{(\omega+\mu)/(\omega-\mu)} \, \sin(\omega' t) & \cos(\omega' t)
\end{pmatrix}
\quad (23)
$$

when $\omega$ is greater than $\mu$.
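This closed form can be cross-checked numerically (our own sketch, not part of the original analysis): summing the Taylor series of the exponential in Equation (19) term by term and comparing it entry by entry with the matrix of Equation (23):

```python
import math

def matmul(X, Y):
    """Product of two 2x2 matrices stored as nested tuples."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def expm(G, t, terms=40):
    """exp(tG) for a 2x2 matrix by direct Taylor summation (fine for small t)."""
    result = ((1.0, 0.0), (0.0, 1.0))
    power = ((1.0, 0.0), (0.0, 1.0))
    for n in range(1, terms):
        step = tuple(tuple(t * g / n for g in row) for row in G)
        power = matmul(power, step)        # power = (tG)^n / n!
        result = tuple(tuple(result[i][j] + power[i][j] for j in range(2))
                       for i in range(2))
    return result

omega, mu = 1.0, 0.3                       # oscillation regime: omega > mu
omp = math.sqrt(omega ** 2 - mu ** 2)      # omega' of Equation (7)
G = ((0.0, mu - omega), (mu + omega, 0.0)) # matrix of Equation (20)
t = 0.8

# Closed form of Equation (23)
r = math.sqrt((omega + mu) / (omega - mu))
closed = ((math.cos(omp * t), -math.sin(omp * t) / r),
          (r * math.sin(omp * t), math.cos(omp * t)))

E = expm(G, t)
assert all(abs(E[i][j] - closed[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

The check works because $G^2 = (\mu^2 - \omega^2)\mathbb{1}$, so the series collapses to $\cos(\omega' t)\mathbb{1} + \sin(\omega' t)\,G/\omega'$ when $\omega > \mu$.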
The solution $\psi(t)$ takes the form

$$
\begin{pmatrix}
C_7 \cos(\omega' t) - C_8 \sqrt{(\omega-\mu)/(\omega+\mu)} \, \sin(\omega' t) \\
C_7 \sqrt{(\omega+\mu)/(\omega-\mu)} \, \sin(\omega' t) + C_8 \cos(\omega' t)
\end{pmatrix}
\quad (24)
$$

If $\mu$ is greater than $\omega$, the Taylor expansion becomes

$$
\begin{pmatrix}
\cosh(\mu' t) & \sqrt{(\mu-\omega)/(\mu+\omega)} \, \sinh(\mu' t) \\
\sqrt{(\mu+\omega)/(\mu-\omega)} \, \sinh(\mu' t) & \cosh(\mu' t)
\end{pmatrix}
\quad (25)
$$

When $\omega$ is equal to $\mu$, both Equations (23) and (25) become

$$
\begin{pmatrix} 1 & 0 \\ 2\omega t & 1 \end{pmatrix} \tag{26}
$$

If $\omega$ is sufficiently close to but smaller than $\mu$, the matrix of Equation (25) becomes

$$
\begin{pmatrix}
1 + (\epsilon/2)(2\omega t)^2 & \epsilon(2\omega t) \\
2\omega t & 1 + (\epsilon/2)(2\omega t)^2
\end{pmatrix}
\quad (27)
$$

with

$$
\epsilon = \frac{\mu - \omega}{\mu + \omega} \tag{28}
$$
---PAGE_BREAK---

If $\omega$ is sufficiently close to $\mu$, we can let

$$ \mu + \omega = 2\omega, \quad \text{and} \quad \mu - \omega = 2\mu\epsilon \tag{29} $$

If $\omega$ is greater than $\mu$, $\epsilon$ defined in Equation (28) becomes negative, and the matrix of Equation (23) becomes

$$ \begin{pmatrix} 1 - (-\epsilon/2)(2\omega t)^2 & -(-\epsilon)(2\omega t) \\ 2\omega t & 1 - (-\epsilon/2)(2\omega t)^2 \end{pmatrix} \tag{30} $$

We can rewrite this matrix as

$$ \begin{pmatrix} 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 & -\sqrt{-\epsilon} \left[ (2\omega\sqrt{-\epsilon})t \right] \\ 2\omega t & 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 \end{pmatrix} \tag{31} $$

If $\epsilon$ becomes positive, Equation (27) can be written as

$$ \begin{pmatrix} 1 + (1/2) [(2\omega\sqrt{\epsilon})t]^2 & \sqrt{\epsilon} [(2\omega\sqrt{\epsilon})t] \\ 2\omega t & 1 + (1/2) [(2\omega\sqrt{\epsilon})t]^2 \end{pmatrix} \tag{32} $$

The transition from Equation (31) to Equation (32) is continuous as they become
identical when $\epsilon = 0$. As $\epsilon$ changes its sign, the diagonal elements of the above matrices tell us how $\cos(\omega' t)$ becomes $\cosh(\mu' t)$. As for the upper-right element, $-\sin(\omega' t)$ becomes $\sinh(\mu' t)$. This non-analytic continuity is discussed in detail in one of the earlier papers by two of us on lens optics [13]. This type of continuity was called there "tangential continuity". There, the function and its first derivative are continuous while the second derivative is not.

## 2.3. Mathematical Forms of the Solutions

In this section, we use the Heisenberg approach to the problem, and obtain the solutions in the form of two-by-two matrices. We note that:

1. For the oscillation mode, the trace of the matrix is smaller than 2. The solution takes the form of

$$ \begin{pmatrix} \cos(x) & -e^{-\eta} \sin(x) \\ e^{\eta} \sin(x) & \cos(x) \end{pmatrix} \tag{33} $$

with trace $2 \cos(x)$. The trace is independent of $\eta$.

2. For the damping mode, the trace of the matrix is greater than 2. The solution takes the form of

$$ \begin{pmatrix} \cosh(x) & e^{-\eta} \sinh(x) \\ e^{\eta} \sinh(x) & \cosh(x) \end{pmatrix} \tag{34} $$

with trace $2 \cosh(x)$. Again, the trace is independent of $\eta$.

3. For the transition mode, the trace is equal to 2, and the matrix is triangular and takes the form of

$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \tag{35} $$

When $x$ approaches zero, Equations (33) and (34) take the form

$$ \begin{pmatrix} 1 - x^2/2 & -xe^{-\eta} \\ xe^{\eta} & 1 - x^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 + x^2/2 & xe^{-\eta} \\ xe^{\eta} & 1 + x^2/2 \end{pmatrix} \tag{36} $$
---PAGE_BREAK---

respectively. These two matrices have the same lower-left element. Let us fix this element to be a positive number $\gamma$.
Then

$$
x = \gamma e^{-\eta} \tag{37}
$$

and the matrices of Equation (36) become

$$
\begin{pmatrix} 1 - \gamma^2 e^{-2\eta} / 2 & -\gamma e^{-2\eta} \\ \gamma & 1 - \gamma^2 e^{-2\eta} / 2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 + \gamma^2 e^{-2\eta} / 2 & \gamma e^{-2\eta} \\ \gamma & 1 + \gamma^2 e^{-2\eta} / 2 \end{pmatrix} \tag{38}
$$

If we introduce a small number $\epsilon$ defined as

$$
\epsilon = \sqrt{\gamma} e^{-\eta} \tag{39}
$$

the matrices of Equation (38) become

$$
\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\sqrt{\gamma}\epsilon \\ \sqrt{\gamma}\epsilon & 1 - \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{40}
$$

$$
\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 + \gamma\epsilon^2/2 & \sqrt{\gamma}\epsilon \\ \sqrt{\gamma}\epsilon & 1 + \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}
$$

respectively, with $e^{-\eta} = \epsilon / \sqrt{\gamma}$.

**3. Groups of Two-by-Two Matrices**

If a two-by-two matrix has four complex elements, it has eight independent parameters. If the determinant of this matrix is one, it is known as a unimodular matrix and the number of independent parameters is reduced to six. The group of two-by-two unimodular matrices is called SL(2, c). This six-parameter group is isomorphic to the Lorentz group applicable to the Minkowski space of three space-like and one time-like dimensions [14].

We can start with two subgroups of SL(2, c).

1. While the matrices of SL(2, c) are not unitary, we can consider the subset consisting of unitary matrices. This subgroup is called SU(2), and is isomorphic to the three-dimensional rotation group. This three-parameter group is the basic scientific language for spin-1/2 particles.

2.
We can also consider the subset of matrices with real elements. This three-parameter group is called Sp(2) and is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates.

In the Lorentz group, there are three space-like dimensions with x, y, and z coordinates. However, for many physical problems, it is more convenient to study the problem in the two-dimensional (x, z) plane first and generalize it to three-dimensional space by rotating the system around the z axis. This process can be called Euler decomposition and Euler generalization [2].

First, we study Sp(2) symmetry in detail, and achieve the generalization by augmenting the two-by-two matrix corresponding to the rotation around the z axis. In this section, we study in detail the properties of Sp(2) matrices, then generalize them to SL(2, c) in Section 5.

There are three classes of Sp(2) matrices. Their traces can be smaller than two, greater than two, or equal to two. While these subjects are already discussed in the literature [15–17], our main interest is what happens as the trace goes from less than two to greater than two. Here we are guided by the model we have discussed in Section 2, which accounts for the transition from the oscillation mode to the damping mode.

---PAGE_BREAK---

### 3.1. Lie Algebra of Sp(2)

The two linearly independent matrices of Equation (3) can be written as

$$ K_1 = \frac{1}{2} \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad J_2 = \frac{1}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad (41) $$

However, the Taylor series expansion of the exponential form of Equation (23) or Equation (25) requires an additional matrix

$$ K_3 = \frac{1}{2} \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \qquad (42) $$

These matrices satisfy the following closed set of commutation relations.
$$ [K_1, J_2] = iK_3, \quad [J_2, K_3] = iK_1, \quad [K_3, K_1] = -iJ_2 \qquad (43) $$

These commutation relations remain invariant under Hermitian conjugation, even though $K_1$ and $K_3$ are anti-Hermitian. The group generated by these three matrices is known in the literature as $Sp(2)$ [17]. Furthermore, a closed set of commutation relations of this kind is called a Lie algebra. Indeed, Equation (43) is the Lie algebra of the $Sp(2)$ group.

The Hermitian matrix $J_2$ generates the rotation matrix

$$ R(\theta) = \exp(-i\theta J_2) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (44) $$

and the anti-Hermitian matrices $K_1$ and $K_3$ generate the squeeze matrices

$$ S(\lambda) = \exp(-i\lambda K_1) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \qquad (45) $$

and

$$ B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} \exp(\eta/2) & 0 \\ 0 & \exp(-\eta/2) \end{pmatrix} \qquad (46) $$

respectively.

Returning to the Lie algebra of Equation (43), since $K_1$ and $K_3$ are anti-Hermitian and $J_2$ is Hermitian, the set of commutation relations is invariant under Hermitian conjugation. In other words, the commutation relations remain invariant even if we change the signs of $K_1$ and $K_3$, while keeping that of $J_2$ invariant. Next, let us take the complex conjugate of the entire system. Then both the $J$ and $K$ matrices change their signs.

### 3.2.
Bargmann and Wigner Decompositions

Since the $Sp(2)$ matrix has three independent parameters, it can be written as [15]

$$ \begin{pmatrix} \cos(\alpha_1/2) & -\sin(\alpha_1/2) \\ \sin(\alpha_1/2) & \cos(\alpha_1/2) \end{pmatrix} \begin{pmatrix} \cosh\chi & \sinh\chi \\ \sinh\chi & \cosh\chi \end{pmatrix} \begin{pmatrix} \cos(\alpha_2/2) & -\sin(\alpha_2/2) \\ \sin(\alpha_2/2) & \cos(\alpha_2/2) \end{pmatrix} \qquad (47) $$

This matrix can be written as

$$ \begin{pmatrix} \cos(\delta/2) & -\sin(\delta/2) \\ \sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} \cos(\delta/2) & \sin(\delta/2) \\ -\sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \qquad (48) $$

---PAGE_BREAK---

where

$$
\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & \sinh \chi \\ \sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \quad (49)
$$

with

$$
\delta = \frac{1}{2}(\alpha_1 - \alpha_2), \quad \text{and} \quad \alpha = \frac{1}{2}(\alpha_1 + \alpha_2) \tag{50}
$$

If we complete the matrix multiplication of Equation (49), the result is

$$
\begin{pmatrix} (\cosh \chi) \cos \alpha & \sinh \chi - (\cosh \chi) \sin \alpha \\ \sinh \chi + (\cosh \chi) \sin \alpha & (\cosh \chi) \cos \alpha \end{pmatrix} \qquad (51)
$$

We shall hereafter call the decomposition of Equation (49) the Bargmann decomposition. This means that every matrix in the Sp(2) group can be brought to the Bargmann decomposition by a similarity transformation of rotation, as given in Equation (48). This decomposition leads to an equidiagonal matrix with two independent parameters.

For the matrix of Equation (49), we can now consider the following three cases.
Let us assume that $\chi$ is positive, and the angle $\alpha$ is less than 90°. Let us look at the upper-right element.

1. If it is negative with $[\sinh\chi < (\cosh\chi)\sin\alpha]$, then the trace of the matrix is smaller than 2, and the matrix can be written as

$$
\begin{pmatrix}
\cos(\theta/2) & -e^{-\eta}\sin(\theta/2) \\
e^{\eta}\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
\qquad (52)
$$

with

$$
\cos(\theta/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{(\cosh\chi)\sin\alpha - \sinh\chi}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{53}
$$

2. If it is positive with $[\sinh \chi > (\cosh \chi) \sin \alpha]$, then the trace is greater than 2, and the matrix can be written as

$$
\begin{pmatrix}
\cosh(\lambda/2) & e^{-\eta} \sinh(\lambda/2) \\
e^{\eta} \sinh(\lambda/2) & \cosh(\lambda/2)
\end{pmatrix}
\qquad (54)
$$

with

$$
\cosh(\lambda/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{\sinh\chi - (\cosh\chi)\sin\alpha}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{55}
$$

3. If it is zero with $[\sinh \chi = (\cosh \chi) \sin \alpha]$, then the trace is equal to 2, and the matrix takes the form

$$
\begin{pmatrix}
1 & 0 \\
2 \sinh \chi & 1
\end{pmatrix}
\qquad (56)
$$

The above repeats the mathematics given in Section 2.3.

Returning to Equations (52) and (54), they can be decomposed into

$$
M(\theta, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (57)
$$

and

$$
M(\lambda, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (58)
$$

respectively.
In view of the physical examples given in Section 6, we shall call this the "Wigner decomposition." Unlike the Bargmann decomposition, the Wigner decomposition is in the form of a similarity transformation.

---PAGE_BREAK---

We note that both Equations (57) and (58) are written as similarity transformations. Thus

$$
[M(\theta, \eta)]^n = \begin{pmatrix} \cos(n\theta/2) & -e^{-\eta} \sin(n\theta/2) \\ e^{\eta} \sin(n\theta/2) & \cos(n\theta/2) \end{pmatrix} \quad (59)
$$

$$
[M(\lambda, \eta)]^n = \begin{pmatrix} \cosh(n\lambda/2) & e^{-\eta} \sinh(n\lambda/2) \\ e^{\eta} \sinh(n\lambda/2) & \cosh(n\lambda/2) \end{pmatrix} \quad (60)
$$

$$
[M(\gamma)]^n = \begin{pmatrix} 1 & 0 \\ n\gamma & 1 \end{pmatrix} \tag{61}
$$

These expressions are useful for studying periodic systems [18].

The question is what physics these decompositions describe in the real world. To address this, we study what the Lorentz group does in the real world, and the isomorphism between the Sp(2) group and the Lorentz group applicable to the three-dimensional space consisting of one time and two space coordinates.

### 3.3. Isomorphism with the Lorentz Group

The purpose of this section is to give physical interpretations of the mathematical formulas given in Section 3.2. We will interpret these formulae in terms of the Lorentz transformations which are normally described by four-by-four matrices. For this purpose, it is necessary to establish a correspondence between the two-by-two representation of Section 3.2 and the four-by-four representations of the Lorentz group.

Let us consider the Minkowskian space-time four-vector

$$
(t, z, x, y) \tag{62}
$$

where $(t^2 - z^2 - x^2 - y^2)$ remains invariant under Lorentz transformations. The Lorentz group consists of four-by-four matrices performing Lorentz transformations in the Minkowski space.
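Before moving to the four-by-four picture, the case-1 Bargmann form of Equations (51)–(53) and the power rule of Equation (59) can be spot-checked numerically. The following is a sketch, assuming numpy is available; `chi`, `alpha`, and `n` are arbitrary test values, not quantities from the text.

```python
import numpy as np

# Sketch: verify case 1 (trace < 2) of the Bargmann form, Equations (51)-(53),
# and the power rule of Equation (59).  chi and alpha are test values chosen
# so that sinh(chi) < cosh(chi)*sin(alpha).
chi, alpha, n = 0.3, 1.0, 4
assert np.sinh(chi) < np.cosh(chi) * np.sin(alpha)

# The equidiagonal matrix of Equation (51)
M = np.array([[np.cosh(chi) * np.cos(alpha), np.sinh(chi) - np.cosh(chi) * np.sin(alpha)],
              [np.sinh(chi) + np.cosh(chi) * np.sin(alpha), np.cosh(chi) * np.cos(alpha)]])

# Recover theta and eta from Equation (53)
theta = 2 * np.arccos(np.cosh(chi) * np.cos(alpha))
e2eta = (np.cosh(chi) * np.sin(alpha) + np.sinh(chi)) \
      / (np.cosh(chi) * np.sin(alpha) - np.sinh(chi))
eta = 0.5 * np.log(e2eta)

def W(theta, eta):
    # The matrix form of Equation (52)
    return np.array([[np.cos(theta/2), -np.exp(-eta) * np.sin(theta/2)],
                     [np.exp(eta) * np.sin(theta/2), np.cos(theta/2)]])

assert np.allclose(M, W(theta, eta))
# Equation (59): the n-th power only multiplies the angle by n.
assert np.allclose(np.linalg.matrix_power(M, n), W(n * theta, eta))
```

Because the matrix is a similarity transform of a rotation, repeated application reduces to multiplying the angle, which is the point made above about periodic systems.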
In order to give physical interpretations to the three two-by-two matrices given in Equations (44)–(46), we consider rotations around the *y* axis, boosts along the *x* axis, and boosts along the *z* axis. The transformation is restricted to the three-dimensional subspace of (*t*, *z*, *x*). It is then straightforward to construct those four-by-four transformation matrices where the *y* coordinate remains invariant. They are given in Table 1, together with their generators. Those four-by-four generators satisfy the Lie algebra given in Equation (43).

**Table 1.** Matrices in the two-by-two representation, and their corresponding four-by-four generators and transformation matrices.
| Matrix | Generator | Four-by-Four Generator | Four-by-Four Transformation |
| --- | --- | --- | --- |
| $R(\theta)$ | $J_2 = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| $B(\eta)$ | $K_3 = \frac{1}{2}\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$ | $\begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\eta & \sinh\eta & 0 & 0 \\ \sinh\eta & \cosh\eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| $S(\lambda)$ | $K_1 = \frac{1}{2}\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
---PAGE_BREAK---

**4. Internal Space-Time Symmetries**

We have seen that there corresponds a two-by-two matrix to each four-by-four Lorentz transformation matrix. It is possible to give physical interpretations to those four-by-four matrices. It must thus be possible to attach a physical interpretation to each two-by-two matrix.

Since 1939 [1], when Wigner introduced the concept of the little groups, many papers have been published on this subject, but most of them were based on the four-by-four representation. In this section, we shall give the formalism of little groups in the language of two-by-two matrices. In so doing, we provide physical interpretations to the Bargmann and Wigner decompositions introduced in Section 3.2.

**4.1. Wigner's Little Groups**

In [1], Wigner started with a free relativistic particle with momentum, then constructed subgroups of the Lorentz group whose transformations leave the four-momentum invariant. These subgroups thus define the internal space-time symmetry of the given particle. Without loss of generality, we assume that the particle momentum is along the z direction. Thus rotations around the momentum leave the momentum invariant, and this degree of freedom defines the helicity, or the spin parallel to the momentum.

We shall use the word "Wigner transformation" for the transformation which leaves the four-momentum invariant:

1. For a massive particle, it is possible to find a Lorentz frame where it is at rest with zero momentum. The four-momentum can be written as $m(1,0,0,0)$, where $m$ is the mass. This four-momentum is invariant under rotations in the three-dimensional $(z, x, y)$ space.

2. For an imaginary-mass particle, there is the Lorentz frame where the energy component vanishes. The momentum four-vector can be written as $p(0,1,0,0)$, where $p$ is the magnitude of the momentum.

3. If the particle is massless, its four-momentum becomes $p(1,1,0,0)$.
Here the first and second components are equal in magnitude.

The constant factors in these four-momenta do not play any significant roles. Thus we write them as $(1,0,0,0)$, $(0,1,0,0)$, and $(1,1,0,0)$ respectively. Since Wigner worked with these three specific four-momenta [1], we call them Wigner four-vectors.

All of these four-vectors are invariant under rotations around the z axis. The rotation matrix is

$$Z(\phi) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\phi & -\sin\phi \\ 0 & 0 & \sin\phi & \cos\phi \end{pmatrix} \quad (63)$$

In addition, the four-momentum of a massive particle is invariant under the rotation around the y axis, whose four-by-four matrix was given in Table 1. The four-momentum of an imaginary-mass particle is invariant under the boost matrix $S(\lambda)$ given in Table 1. The problem for the massless particle is more complicated, but will be discussed in detail in Section 7. See Table 2.

---PAGE_BREAK---

**Table 2.** Wigner four-vectors and Wigner transformation matrices applicable to two space-like and one time-like dimensions. Each Wigner four-vector remains invariant under the application of its Wigner matrix.
| Mass | Wigner Four-Vector | Wigner Transformation |
| --- | --- | --- |
| Massive | $(1, 0, 0, 0)$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| Massless | $(1, 1, 0, 0)$ | $\begin{pmatrix} 1 + \gamma^2/2 & -\gamma^2/2 & \gamma & 0 \\ \gamma^2/2 & 1 - \gamma^2/2 & \gamma & 0 \\ -\gamma & \gamma & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| Imaginary mass | $(0, 1, 0, 0)$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
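The invariance claims of Table 2 can be spot-checked numerically. The following is a sketch assuming numpy; `theta`, `gamma`, and `lam` are arbitrary test values.

```python
import numpy as np

# Sketch checking Table 2 numerically: each Wigner transformation matrix
# leaves its Wigner four-vector (t, z, x, y) invariant.
theta, gamma, lam = 0.7, 0.5, 0.9

rot = np.array([[1, 0, 0, 0],
                [0, np.cos(theta), -np.sin(theta), 0],
                [0, np.sin(theta), np.cos(theta), 0],
                [0, 0, 0, 1]])
gauge = np.array([[1 + gamma**2 / 2, -gamma**2 / 2, gamma, 0],
                  [gamma**2 / 2, 1 - gamma**2 / 2, gamma, 0],
                  [-gamma, gamma, 1, 0],
                  [0, 0, 0, 1]])
boost = np.array([[np.cosh(lam), 0, np.sinh(lam), 0],
                  [0, 1, 0, 0],
                  [np.sinh(lam), 0, np.cosh(lam), 0],
                  [0, 0, 0, 1]])

# Massive, massless, and imaginary-mass Wigner four-vectors in turn:
for W, p in ((rot, [1.0, 0, 0, 0]), (gauge, [1.0, 1, 0, 0]), (boost, [0.0, 1, 0, 0])):
    assert np.allclose(W @ np.array(p), p)
```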
## 4.2. Two-by-Two Formulation of Lorentz Transformations

The Lorentz group is a group of four-by-four matrices performing Lorentz transformations on the Minkowskian vector space of $(t, z, x, y)$, leaving the quantity

$$t^2 - z^2 - x^2 - y^2 \quad (64)$$

invariant. It is possible to perform the same transformation using two-by-two matrices [7,14,19]. In this two-by-two representation, the four-vector is written as

$$X = \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \quad (65)$$

where its determinant is precisely the quantity given in Equation (64) and the Lorentz transformation on this matrix is a determinant-preserving, or unimodular, transformation. Let us consider the transformation matrix as [7,19]

$$G = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \quad \text{and} \quad G^{\dagger} = \begin{pmatrix} \alpha^{*} & \gamma^{*} \\ \beta^{*} & \delta^{*} \end{pmatrix} \quad (66)$$

with

$$\det(G) = 1 \quad (67)$$

and the transformation

$$X' = GXG^{\dagger} \quad (68)$$

Since $G$ is not a unitary matrix, Equation (68) is not a unitary transformation; we call it instead the "Hermitian transformation". Equation (68) can be written as

$$\begin{pmatrix} t' + z' & x' - iy' \\ x' + iy' & t' - z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (69)$$

It is still a determinant-preserving unimodular transformation, thus it is possible to write this as a four-by-four transformation matrix applicable to the four-vector $(t,z,x,y)$ [7,14].

Since the $G$ matrix starts with four complex numbers and its determinant is one by Equation (67), it has six independent parameters.
The group of these $G$ matrices is known to be locally isomorphic to the group of four-by-four matrices performing Lorentz transformations on the four-vector (t, z, x, y). In other words, for each G matrix there is a corresponding four-by-four Lorentz-transformation matrix [7].

The matrix G is not a unitary matrix, because its Hermitian conjugate is not always its inverse. This group has a unitary subgroup called SU(2) and another consisting only of real matrices called Sp(2). For this latter subgroup, it is sufficient to work with the three matrices R(θ), S(λ), and B(η) given in Equations (44)–(46) respectively. Each of these matrices has its corresponding four-by-four matrix applicable to (t, z, x, y). These matrices with their four-by-four counterparts are tabulated in Table 1.

The energy-momentum four-vector can also be written as a two-by-two matrix:

$$
P = \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \tag{70}
$$

with

$$
\det(P) = p_0^2 - p_x^2 - p_y^2 - p_z^2 \quad (71)
$$

which means

$$
\det(P) = m^2 \tag{72}
$$

where $m$ is the particle mass.

The Lorentz transformation can be written explicitly as

$$
P' = GPG^{\dagger} \qquad (73)
$$

or

$$
\begin{pmatrix} p'_0 + p'_z & p'_x - ip'_y \\ p'_x + ip'_y & p'_0 - p'_z \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (74)
$$

This is a unimodular transformation, and the mass is a Lorentz-invariant variable. Furthermore, it was shown in [7] that Wigner's little groups for massive, massless, and imaginary-mass particles can be explicitly defined in terms of two-by-two matrices.
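The Hermitian transformation of Equation (68) can be illustrated with a short numerical sketch. This assumes numpy; the boost $B(\eta)$ of Equation (46) is used as the matrix $G$, and the four-vector components are arbitrary test values.

```python
import numpy as np

# Sketch of X' = G X G^dagger, Equation (68): map a four-vector (t, z, x, y)
# to the matrix of Equation (65), conjugate by a unimodular G, and read the
# transformed four-vector back off the result.
def to_matrix(t, z, x, y):
    return np.array([[t + z, x - 1j * y], [x + 1j * y, t - z]])

eta = 0.7
G = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])     # B(eta), det = 1

t, z, x, y = 1.0, 0.2, 0.3, -0.1
X = to_matrix(t, z, x, y)
Xp = G @ X @ G.conj().T

# The determinant t^2 - z^2 - x^2 - y^2 of Equation (64) is preserved:
assert np.isclose(np.linalg.det(X), np.linalg.det(Xp))

tp = (Xp[0, 0] + Xp[1, 1]).real / 2
zp = (Xp[0, 0] - Xp[1, 1]).real / 2
# This G is a boost along z: it mixes only t and z in the standard way.
assert np.isclose(tp, np.cosh(eta) * t + np.sinh(eta) * z)
assert np.isclose(zp, np.sinh(eta) * t + np.cosh(eta) * z)
```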
Wigner's little group consists of two-by-two matrices satisfying

$$
P = WPW^{\dagger} \tag{75}
$$

The two-by-two $W$ matrix is not an identity matrix, but tells us about the internal space-time symmetry of a particle with a given energy-momentum four-vector. This aspect was not known when Einstein formulated his special relativity in 1905, hence the internal space-time symmetry was not an issue at that time. We call the two-by-two matrix $W$ the Wigner matrix, and call the condition of Equation (75) the Wigner condition.

If the determinant of $P$ is a positive number, then $P$ is proportional to

$$
P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \tag{76}
$$

corresponding to a massive particle at rest, while if the determinant is negative, it is proportional to

$$
P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{77}
$$

---PAGE_BREAK---

corresponding to an imaginary-mass particle moving faster than light along the z direction, with a vanishing energy component. If the determinant is zero, $P$ is

$$P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \tag{78}$$

which is proportional to the four-momentum matrix for a massless particle moving along the z direction.

For all three cases, the matrix of the form

$$Z(\phi) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \tag{79}$$

will satisfy the Wigner condition of Equation (75). This matrix corresponds to rotations around the z axis.

For the massive particle with the four-momentum of Equation (76), the transformations with the rotation matrix of Equation (44) leave the $P$ matrix of Equation (76) invariant. Together with the $Z(\phi)$ matrix, this rotation matrix leads to the subgroup consisting of the unitary subset of the $G$ matrices. The unitary subset of $G$ is SU(2), corresponding to the three-dimensional rotation group dictating the spin of the particle [14].
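The statement that $Z(\phi)$ satisfies the Wigner condition in all three cases is easy to check numerically. A sketch assuming numpy, with `phi` an arbitrary test value:

```python
import numpy as np

# Sketch: the rotation matrix Z(phi) of Equation (79) satisfies the Wigner
# condition P = W P W^dagger of Equation (75) for all three momentum
# matrices, Equations (76)-(78).
phi = 1.1
Z = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

P_massive = np.diag([1.0, 1.0])       # Equation (76)
P_imaginary = np.diag([1.0, -1.0])    # Equation (77)
P_massless = np.diag([1.0, 0.0])      # Equation (78)

for P in (P_massive, P_imaginary, P_massless):
    assert np.allclose(Z @ P @ Z.conj().T, P)
```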
+ +For the massless case, the transformations with the triangular matrix of the form + +$$\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix} \tag{80}$$ + +leave the momentum matrix of Equation (78) invariant. The physics of this matrix has a stormy history, and the variable $\gamma$ leads to a gauge transformation applicable to massless particles [8,9,20,21]. + +For a particle with an imaginary mass, a W matrix of the form of Equation (45) leaves the four-momentum of Equation (77) invariant. + +Table 3 summarizes the transformation matrices for Wigner's little groups for massive, massless, and imaginary-mass particles. Furthermore, in terms of their traces, the matrices given in this subsection can be compared with those given in Section 2.3 for the damped oscillator. The comparisons are given in Table 4. + +Of course, it is a challenging problem to have one expression for all three classes. This problem has been discussed in the literature [12], and the damped oscillator case of Section 2 addresses the continuity problem. + +**Table 3.** Wigner vectors and Wigner matrices in the two-by-two representation. The trace of the matrix tells whether the particle $m^2$ is positive, zero, or negative. + +
| Particle Mass | Four-Momentum | Transformation Matrix | Trace |
| --- | --- | --- | --- |
| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | less than 2 |
| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix}$ | equal to 2 |
| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | greater than 2 |
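Both columns of Table 3 can be verified in a few lines. The following is a sketch assuming numpy; `theta`, `gamma`, and `lam` are arbitrary test values.

```python
import numpy as np

# Sketch checking Table 3: each little-group matrix leaves its two-by-two
# four-momentum matrix invariant under P -> W P W^dagger, and its trace
# falls in the advertised range.
theta, gamma, lam = 0.8, 0.6, 1.1

rows = [
    (np.diag([1.0, 1.0]),                        # massive
     np.array([[np.cos(theta/2), -np.sin(theta/2)],
               [np.sin(theta/2), np.cos(theta/2)]])),
    (np.diag([1.0, 0.0]),                        # massless
     np.array([[1.0, gamma], [0.0, 1.0]])),
    (np.diag([1.0, -1.0]),                       # imaginary mass
     np.array([[np.cosh(lam/2), np.sinh(lam/2)],
               [np.sinh(lam/2), np.cosh(lam/2)]])),
]

for P, W in rows:
    assert np.allclose(W @ P @ W.conj().T, P)

traces = [np.trace(W) for _, W in rows]
assert traces[0] < 2 < traces[2]
assert np.isclose(traces[1], 2.0)
```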
+---PAGE_BREAK--- + +**Table 4.** Damped Oscillators and Space-time Symmetries. Both share Sp(2) as their symmetry group. + +
| Trace | Damped Oscillator | Particle Symmetry |
| --- | --- | --- |
| Smaller than 2 | Oscillation Mode | Massive Particles |
| Equal to 2 | Transition Mode | Massless Particles |
| Larger than 2 | Damping Mode | Imaginary-mass Particles |
## 5. Lorentz Completion of Wigner's Little Groups

So far we have considered transformations applicable only to (t, z, x) space. In order to study the full symmetry, we have to consider rotations around the z axis. As previously stated, when a particle moves along this axis, this rotation defines the helicity of the particle.

In [1], Wigner worked out the little group of a massive particle at rest. When the particle gains a momentum along the z direction, the single particle can reverse the direction of momentum, the spin, or both. What happens to the internal space-time symmetries is discussed in this section.

### 5.1. Rotation around the z Axis

In Section 3, our kinematics was restricted to the two-dimensional space of z and x, and thus includes rotations around the y axis. We now introduce the four-by-four matrix of Equation (63) performing rotations around the z axis. Its corresponding two-by-two matrix was given in Equation (79). Its generator is

$$J_3 = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (81)$$

If we add this matrix to the three generators we used in Sections 3.1 and 3.2, we end up with the closed set of commutation relations

$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, K_j] = i\epsilon_{ijk}K_k, \quad [K_i, K_j] = -i\epsilon_{ijk}J_k \qquad (82)$$

with

$$J_i = \frac{1}{2}\sigma_i, \quad \text{and} \quad K_i = \frac{i}{2}\sigma_i \qquad (83)$$

where $\sigma_i$ are the two-by-two Pauli spin matrices.

For each of these two-by-two matrices there is a corresponding four-by-four matrix generating Lorentz transformations on the four-dimensional Minkowski space. When the two-by-two matrices are imaginary, the corresponding four-by-four matrices were given in Table 1. If they are real, the corresponding four-by-four matrices are given in Table 5.

---PAGE_BREAK---

**Table 5.** Two-by-two and four-by-four generators not included in Table 1.
The generators given there and given here constitute the set of six generators for SL(2, c) or of the Lorentz group given in Equation (82). + +
| Generator | Two-by-Two | Four-by-Four |
| --- | --- | --- |
| $J_3$ | $\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}$ |
| $J_1$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \end{pmatrix}$ |
| $K_2$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix}$ |
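The full set of commutation relations of Equation (82), with the two-by-two realization of Equation (83), can be verified numerically. A sketch assuming numpy:

```python
import numpy as np

# Sketch: verify the Lie algebra of Equation (82) with J_i = sigma_i/2 and
# K_i = i*sigma_i/2, as in Equation (83).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [0.5 * s for s in sigma]
K = [0.5j * s for s in sigma]

def comm(a, b):
    return a @ b - b @ a

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(J[i], J[j]),
                           sum(1j * eps[i, j, k] * J[k] for k in range(3)))
        assert np.allclose(comm(J[i], K[j]),
                           sum(1j * eps[i, j, k] * K[k] for k in range(3)))
        assert np.allclose(comm(K[i], K[j]),
                           sum(-1j * eps[i, j, k] * J[k] for k in range(3)))
```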
This set of commutation relations is known as the Lie algebra of SL(2, c), namely the group of two-by-two matrices with unit determinant and complex elements. It is also the Lie algebra of the Lorentz group performing Lorentz transformations on the four-dimensional Minkowski space.

This group has many useful subgroups. For the group SL(2, c), there is a subgroup consisting only of real matrices, generated by the two-by-two matrices given in Table 1. This three-parameter subgroup is precisely the Sp(2) group we used in Sections 3.1 and 3.2. Its generators satisfy the Lie algebra given in Equation (43).

In addition, this group has the following Wigner subgroups governing the internal space-time symmetries of particles in the Lorentz-covariant world [1]:

1. The $J_i$ matrices form a closed set of commutation relations. The subgroup generated by these Hermitian matrices is SU(2), the group for electron spins. The corresponding rotation group does not change the four-momentum of the particle at rest. This is Wigner's little group for massive particles. If the particle is at rest, the two-by-two form of the four-vector is given by Equation (76). The Lorentz transformation generated by $J_3$ takes the form

$$ \begin{pmatrix} e^{i\phi/2} & 0 \\ 0 & e^{-i\phi/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (84) $$

Similar computations can be carried out for $J_1$ and $J_2$.

2. There is another Sp(2) subgroup, generated by $K_1$, $K_2$, and $J_3$. They satisfy the commutation relations

$$ [K_1, K_2] = -iJ_3, \quad [J_3, K_1] = iK_2, \quad [K_2, J_3] = iK_1 \quad (85) $$

The Wigner transformation generated by these two-by-two matrices leaves the momentum four-vector of Equation (77) invariant.
For instance, the transformation matrix generated by $K_2$ takes the form

$$ \exp(-i\xi K_2) = \begin{pmatrix} \cosh(\xi/2) & -i \sinh(\xi/2) \\ i \sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \quad (86) $$

and the Wigner transformation takes the form

$$ \begin{pmatrix} \cosh(\xi/2) & -i \sinh(\xi/2) \\ i \sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \cosh(\xi/2) & -i \sinh(\xi/2) \\ i \sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad (87) $$

Computations with $K_1$ and $J_3$ lead to the same result.

---PAGE_BREAK---

Since the determinant of the four-momentum matrix is negative, the particle has an imaginary mass. In the language of the four-by-four matrix, the transformation matrices leave the four-momentum of the form (0, 1, 0, 0) invariant.

3. Furthermore, we can consider the following combinations of the generators:

$$N_1 = K_1 - J_2 = \begin{pmatrix} 0 & i \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = K_2 + J_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad (88)$$

Together with $J_3$, they satisfy the following commutation relations.

$$[N_1, N_2] = 0, \quad [N_1, J_3] = -iN_2, \quad [N_2, J_3] = iN_1 \qquad (89)$$

In order to understand this set of commutation relations, we can consider an $xy$ coordinate system in a two-dimensional space. The rotation around the origin is generated by

$$J_3 = -i \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) \qquad (90)$$

and the two translations are generated by

$$N_1 = -i \frac{\partial}{\partial x}, \quad \text{and} \quad N_2 = -i \frac{\partial}{\partial y} \qquad (91)$$

for the $x$ and $y$ directions respectively. These operators satisfy the commutation relations given in Equation (89).

The two-by-two matrices of Equation (88) generate the following transformation matrix.
+ +$$G(\gamma, \phi) = \exp[-i\gamma(N_1 \cos\phi + N_2 \sin\phi)] = \begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \qquad (92)$$ + +The two-by-two form for the four-momentum for the massless particle is given by Equation (78). The computation of the Hermitian transformation using this matrix is + +$$\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (93)$$ + +confirming that $N_1$ and $N_2$, together with $J_3$, are the generators of the $E(2)$-like little group for massless particles in the two-by-two representation. The transformation that does this in the physical world is described in the following section. + +## 5.2. $E(2)$-Like Symmetry of Massless Particles + +From the four-by-four generators of $K_{1,2}$ and $J_{1,2}$, we can write + +$$N_1 = \begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & i & 0 \\ i & -i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = \begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ i & -i & 0 & 0 \end{pmatrix} \qquad (94)$$ +---PAGE_BREAK--- + +These matrices lead to the transformation matrix of the form + +$$ +G(\gamma, \phi) = \begin{pmatrix} +1 + \frac{\gamma^2}{2} & -\frac{\gamma^2}{2} & \gamma \cos \phi & \gamma \sin \phi \\ +\frac{\gamma^2}{2} & 1 - \frac{\gamma^2}{2} & \gamma \cos \phi & \gamma \sin \phi \\ +-\gamma \cos \phi & \gamma \cos \phi & 1 & 0 \\ +-\gamma \sin \phi & \gamma \sin \phi & 0 & 1 +\end{pmatrix} \quad (95) +$$ + +This matrix leaves the four-momentum invariant, as we can see from + +$$ +G(\gamma, \phi) \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \tag{96} +$$ + +When it is applied to the photon four-potential + +$$ +G(\gamma, \phi) \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} + \gamma 
(A_1 \cos \phi + A_2 \sin \phi) \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad (97)
$$

where the Lorentz condition leads to $A_3 = A_0$ in the zero-mass case. Gauge transformations are well known for electromagnetic fields and photons. Thus Wigner's little group leads to gauge transformations.

In the two-by-two representation, the electromagnetic four-potential takes the form

$$
\begin{pmatrix}
2A_0 & A_1 - iA_2 \\
A_1 + iA_2 & 0
\end{pmatrix}
\qquad
(98)
$$

with the Lorentz condition $A_3 = A_0$. Then the two-by-two form of Equation (97) is

$$
\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} \quad (99)
$$

which becomes

$$
\begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} + \begin{pmatrix} 2\gamma (A_1 \cos \phi + A_2 \sin \phi) & 0 \\ 0 & 0 \end{pmatrix} \quad (100)
$$

This is the two-by-two equivalent of the gauge transformation given in Equation (97).

For massless spin-1/2 particles, we start with the two-by-two expression of $G(\gamma, \phi)$ given in Equation (92), and consider the spinors

$$
u = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad v = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{101}
$$

for spin-up and spin-down states respectively. Then

$$
Gu = u, \quad \text{and} \quad Gv = v + \gamma e^{-i\phi} u \quad (102)
$$

This means that the spinor $u$ for spin up is invariant under the gauge transformation while $v$ is not. Thus, the polarization of massless spin-1/2 particles, such as neutrinos, is a consequence of gauge invariance. We shall continue this discussion in Section 7.

---PAGE_BREAK---

### 5.3. Boosts along the z Axis

In Sections 4.1 and 5.1, we studied Wigner transformations for fixed values of the four-momenta.
The next question is what happens when the system is boosted along the z direction, with the transformation

$$ \begin{pmatrix} t' \\ z' \end{pmatrix} = \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} t \\ z \end{pmatrix} \qquad (103) $$

Then the four-momenta become

$$ (\cosh \eta, \sinh \eta, 0, 0), \quad (\sinh \eta, \cosh \eta, 0, 0), \quad e^{\eta}(1, 1, 0, 0) \qquad (104) $$

respectively for the massive, imaginary-mass, and massless cases. In the two-by-two representation, the boost matrix is

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (105) $$

and the four-momenta of Equation (104) become

$$ \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & -e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & 0 \end{pmatrix} \qquad (106) $$

respectively. These matrices become Equations (76)–(78) respectively when $\eta = 0$.

We are interested in Lorentz transformations which leave a given non-zero momentum invariant. We can consider a Lorentz boost preceded and followed by identical rotation matrices, as described in Figure 1, with the transformation matrix

$$ \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & -\sinh \chi \\ -\sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \qquad (107) $$

which becomes

$$ \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\ -\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix} \qquad (108) $$

**Figure 1.** Bargmann and Wigner decompositions. (a) Bargmann decomposition; (b) Wigner decomposition. In the Bargmann decomposition, we start from a momentum along the z direction.
We can rotate, boost, and rotate to bring the momentum back to its original position. The resulting matrix is the product of one boost and two rotation matrices. In the Wigner decomposition, the particle is boosted to the frame where the Wigner transformation can be applied, the Wigner transformation is made there, and the particle is then boosted back to the original state of its momentum. This process can also be written as the product of three simple matrices.
---PAGE_BREAK---

Except for the sign of $\chi$, the two-by-two matrices of Equations (107) and (108) are identical with those given in Section 3.2. We are thus ready to interpret this expression in terms of physics.

1. If the particle is massive, the off-diagonal elements of Equation (108) have opposite signs, and this matrix can be decomposed into

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (109) $$

with

$$ \cos(\theta/2) = (\cosh \chi) \cos \alpha, \quad \text{and} \quad e^{2\eta} = \frac{\cosh(\chi) \sin \alpha + \sinh \chi}{\cosh(\chi) \sin \alpha - \sinh \chi} \quad (110) $$

and

$$ e^{2\eta} = \frac{p_0 + p_z}{p_0 - p_z} \quad (111) $$

According to Equation (109), the first matrix (far right) reduces the particle momentum to zero. The second matrix rotates the particle without changing the momentum. The third matrix boosts the particle to restore its original momentum. This is the extension of Wigner's original idea to moving particles.

2.
If the particle has an imaginary mass, the off-diagonal elements of Equation (108) have the same sign, and the matrix can be decomposed into

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & -\sinh(\lambda/2) \\ -\sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (112) $$

with

$$ \cosh(\lambda/2) = (\cosh \chi) \cos \alpha, \quad \text{and} \quad e^{2\eta} = \frac{\sinh \chi + \cosh(\chi) \sin \alpha}{\sinh \chi - \cosh(\chi) \sin \alpha} \quad (113) $$

and

$$ e^{2\eta} = \frac{p_z + p_0}{p_z - p_0} \quad (114) $$

This is also a three-step operation. The first matrix brings the particle to the zero-energy state with $p_0 = 0$. Boosts along the x or y direction do not change this four-momentum. We can then boost the particle back to restore its original momentum. This operation is also an extension of Wigner's original little group. Thus, it is quite appropriate to call the formulas of Equations (109) and (112) Wigner decompositions.

3. If the particle mass is zero, with

$$ \sinh \chi = (\cosh \chi) \sin \alpha \quad (115) $$

the $\eta$ parameter becomes infinite, and the Wigner decomposition is no longer useful. We can instead go back to the Bargmann decomposition of Equation (107). With the condition of Equation (115), Equation (108) becomes

$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \quad (116) $$

with

$$ \gamma = 2 \sinh \chi \quad (117) $$

A decomposition ending with a triangular matrix is called an Iwasawa decomposition [16,22], and its physical interpretation was given in Section 5.2. The $\gamma$ parameter does not depend on $\eta$.
---PAGE_BREAK---

Thus, we have given physical interpretations to the Bargmann and Wigner decompositions given in Section 3.2. Consider what happens when the momentum becomes large. Then $\eta$ becomes large for the nonzero-mass cases.
All three four-momenta in Equation (106) become

$$e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (118)$$

As for the Bargmann-Wigner matrices, they become the triangular matrix of Equation (116), with $\gamma = e^{\eta}\sin(\theta/2)$ and $\gamma = e^{\eta}\sinh(\lambda/2)$, respectively for the massive and imaginary-mass cases.

In Section 5.2, we concluded that the triangular matrix corresponds to gauge transformations. However, particles with imaginary mass are not observed. For massive particles, we can start with the three-dimensional rotation group. The rotation around the z axis is called the helicity, and it remains invariant under the boost along the z direction. As for the transverse rotations, they become gauge transformations, as illustrated in Table 6.

**Table 6.** Covariance of the energy-momentum relation, and covariance of the internal space-time symmetry. Under the Lorentz boost along the z direction, $J_3$ remains invariant, and this invariant component of the angular momentum is called the helicity. The transverse components $J_1$ and $J_2$ collapse into a gauge transformation. The $\gamma$ parameter for the massless case has been studied in earlier papers in the four-by-four matrix formulation of Wigner's little groups [8,21].
| Massive, Slow | Covariance | Massless, Fast |
|---|---|---|
| $E = p^2/2m$ | Einstein's $E = mc^2$ | $E = cp$ |
| $J_3$ | | Helicity |
| $J_1, J_2$ | Wigner's Little Group | Gauge Transformation |
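The decompositions above can be checked numerically. The sketch below assumes the three-step form boost · rotation · inverse boost for the massive case of Equations (108)–(110), and the massless condition of Equation (115) for the Iwasawa limit of Equations (116)–(117); the parameter values are arbitrary.

```python
import numpy as np

def bargmann(alpha, chi):
    # The Bargmann matrix of Equation (108)
    return np.array(
        [[np.cos(alpha) * np.cosh(chi), -np.sinh(chi) - np.sin(alpha) * np.cosh(chi)],
         [-np.sinh(chi) + np.sin(alpha) * np.cosh(chi), np.cos(alpha) * np.cosh(chi)]])

def boost(eta):
    # The z-direction boost of Equation (105)
    return np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

# Massive regime: cosh(chi)*sin(alpha) > sinh(chi)
alpha, chi = 1.2, 0.3
half_theta = np.arccos(np.cosh(chi) * np.cos(alpha))        # Equation (110)
eta = 0.5 * np.log((np.cosh(chi) * np.sin(alpha) + np.sinh(chi)) /
                   (np.cosh(chi) * np.sin(alpha) - np.sinh(chi)))
rotation = np.array([[np.cos(half_theta), -np.sin(half_theta)],
                     [np.sin(half_theta), np.cos(half_theta)]])
# Wigner decomposition: boost back to rest, rotate, boost forward again
assert np.allclose(bargmann(alpha, chi), boost(eta) @ rotation @ boost(-eta))

# Massless limit: sinh(chi) = cosh(chi)*sin(alpha) makes the matrix triangular
alpha0 = 0.5
chi0 = np.arctanh(np.sin(alpha0))           # enforces Equation (115)
gamma = 2 * np.sinh(chi0)                   # Equation (117)
assert np.allclose(bargmann(alpha0, chi0), [[1, -gamma], [0, 1]])
```

The first assertion exercises the massive-particle Wigner decomposition; the second shows the collapse to the triangular gauge-transformation matrix when the little-group parameters satisfy the light-cone condition.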
### 5.4. Conjugate Transformations

The most general form of the SL(2, c) matrix is given in Equation (66). Transformation operators for the Lorentz group are given in exponential form as:

$$D = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i + \eta_i K_i) \right\} \qquad (119)$$

where the $J_i$ are the generators of rotations and the $K_i$ are the generators of proper Lorentz boosts. They satisfy the Lie algebra given in Equation (43). This set of commutation relations is invariant under the sign change of the boost generators $K_i$. Thus, we can consider the "dot conjugation" defined as

$$\dot{D} = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i - \eta_i K_i) \right\} \qquad (120)$$

Since the $K_i$ are anti-Hermitian while the $J_i$ are Hermitian, the Hermitian conjugate of $D$ is

$$D^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i + \eta_i K_i) \right\} \qquad (121)$$

while the Hermitian conjugate of $\dot{D}$ is

$$\dot{D}^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i - \eta_i K_i) \right\} \qquad (122)$$
---PAGE_BREAK---

Since we understand the rotation around the z axis, we can restrict the kinematics to the zt plane and work with the Sp(2) symmetry. Then the D matrices can be considered as Bargmann decompositions. First, $D$ and $\dot{D}$ are

$$
D(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi - (\sin \alpha) \cosh \chi \\ \sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix} \quad (123)
$$

$$
\dot{D}(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\ -\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix} \quad (124)
$$

These matrices correspond to the "D loops" given in Figure 2a,b respectively. The dot conjugation changes the direction of the boosts, and it leads to the inversion of space, which is called the parity operation.
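The statement that the dot conjugation reverses the boost direction can be checked directly on the matrices of Equations (123) and (124): replacing $\chi$ by $-\chi$ in $D$ reproduces $\dot{D}$. The parameter values below are arbitrary.

```python
import numpy as np

def D(alpha, chi):
    # Equation (123)
    return np.array(
        [[np.cos(alpha) * np.cosh(chi), np.sinh(chi) - np.sin(alpha) * np.cosh(chi)],
         [np.sinh(chi) + np.sin(alpha) * np.cosh(chi), np.cos(alpha) * np.cosh(chi)]])

def D_dot(alpha, chi):
    # Equation (124)
    return np.array(
        [[np.cos(alpha) * np.cosh(chi), -np.sinh(chi) - np.sin(alpha) * np.cosh(chi)],
         [-np.sinh(chi) + np.sin(alpha) * np.cosh(chi), np.cos(alpha) * np.cosh(chi)]])

alpha, chi = 0.7, 0.4

# Dot conjugation = reversing the boost parameter
assert np.allclose(D_dot(alpha, chi), D(alpha, -chi))

# D-dagger equals the inverse of D-dot (the relation of Equation (127);
# the matrices are real, so the dagger is just the transpose)
assert np.allclose(D(alpha, chi).T, np.linalg.inv(D_dot(alpha, chi)))
```

Both matrices have unit determinant, so the inverse in the second assertion exists for any real $\alpha$ and $\chi$.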
We can also consider changing the direction of the rotations. This results in the Hermitian conjugates, whose matrices are

$$
D^{\dagger}(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi + (\sin \alpha) \cosh \chi \\ \sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix} \qquad (125)
$$

$$
\dot{D}^{\dagger}(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi + (\sin \alpha) \cosh \chi \\ -\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix} \quad (126)
$$

From the exponential expressions of Equations (119)–(122), it is clear that

$$
D^{\dagger} = \dot{D}^{-1}, \quad \text{and} \quad \dot{D}^{\dagger} = D^{-1} \qquad (127)
$$

The D loop given in Figure 1 corresponds to $\dot{D}$. We shall return to these loops in Section 7.

**Figure 2.** Four D-loops resulting from the Bargmann decomposition. (a) Bargmann decomposition from Figure 1; (b) the direction of the Lorentz boost is reversed; (c) the direction of rotation is reversed; (d) both directions are reversed. These operations correspond to the space inversion, the charge conjugation, and the time reversal, respectively.
---PAGE_BREAK---

## 6. Symmetries Derivable from the Poincaré Sphere

The Poincaré sphere serves as the basic language for polarization physics. Its underlying language is the two-by-two coherency matrix. This coherency matrix contains the symmetry of SL(2, c), isomorphic to the Lorentz group applicable to three space-like and one time-like dimensions [4,6,7].

For polarized light propagating along the z direction, the state of polarization is traditionally specified by the amplitude ratio and the phase difference of the x and y components of the electric field. Hence, the polarization can be changed by adjusting the amplitude ratio, the phase difference, or both.
Usually, the optical device which changes the amplitude is called an "attenuator" (or "amplifier"), and the device which changes the relative phase a "phase shifter".

Let us start with the Jones vector:

$$ \begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t)] \\ a \exp[i(kz - \omega t)] \end{pmatrix} \qquad (128) $$

To this vector, we can apply the phase shift matrix of Equation (79), which brings the Jones vector to:

$$ \begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t - \phi/2)] \\ a \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \qquad (129) $$

The generator of this phase shifter is $J_3$, given in Table 5.

The optical beam can be attenuated differently in the two directions. The resulting matrix is:

$$ e^{-\mu} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (130) $$

with attenuation factors of $\exp(-\mu + \eta/2)$ and $\exp(-\mu - \eta/2)$ for the x and y directions respectively. We are interested only in the relative attenuation given in Equation (46), which leads to different amplitudes for the x and y components, and the Jones vector becomes:

$$ \begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} ae^{\eta/2} \exp[i(kz - \omega t - \phi/2)] \\ ae^{-\eta/2} \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \qquad (131) $$

The squeeze matrix of Equation (46) is generated by $K_3$, given in Table 1.

The polarization is not always along the x and y axes, but can be rotated around the z axis by the rotation matrix generated by $J_2$, given in Table 1.

Among the rotation angles, the angle of 45° plays an important role in polarization optics. Indeed, if we rotate the squeeze matrix of Equation (46) by 45°, we end up with the squeeze matrix of Equation (45), generated by $K_1$, also given in Table 1.
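The effect of the phase shifter and the relative attenuator on the Jones vector, Equations (128)–(131), can be sketched as follows. The values of $a$, $\phi$, and $\eta$ are arbitrary, and the common propagation phase $kz - \omega t$ is set to zero since it drops out of the ratios.

```python
import numpy as np

a, phi, eta = 1.0, 0.6, 0.8

# Jones vector of Equation (128): equal amplitudes, equal phases
jones = a * np.array([1.0 + 0j, 1.0 + 0j])

# Phase shifter (generated by J3) and squeeze / relative attenuator (K3)
phase_shifter = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])
squeeze = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

out = squeeze @ phase_shifter @ jones     # Equation (131), up to e^{-mu}

# The devices set the amplitude ratio and the relative phase independently
assert np.isclose(abs(out[0]) / abs(out[1]), np.exp(eta))
assert np.isclose(np.angle(out[1]) - np.angle(out[0]), phi)
```

Because the two matrices are diagonal, they commute, reflecting the fact that $J_3$ and $K_3$ commute in the underlying algebra.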
+ +Each of these four matrices plays an important role in special relativity, as we discussed in Sections 3.2 and 6. Their respective roles in optics and particle physics are given in Table 7. +---PAGE_BREAK--- + +**Table 7.** Polarization optics and special relativity share the same mathematics. Each matrix has its clear role in both optics and relativity. The determinant of the Stokes or the four-momentum matrix remains invariant under Lorentz transformations. It is interesting to note that the decoherence parameter (least fundamental) in optics corresponds to the (mass)$^2$ (most fundamental) in particle physics. + +
| Polarization Optics | Transformation Matrix | Particle Symmetry |
|---|---|---|
| Phase shift by $\phi$ | $\begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}$ | Rotation around $z$ |
| Rotation around $z$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | Rotation around $y$ |
| Squeeze along $x$ and $y$ | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | Boost along $z$ |
| Squeeze along $45^\circ$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | Boost along $x$ |
| $a^4 (\sin \xi)^2$ | Determinant | $(\text{mass})^2$ |
The most general form of the two-by-two matrix applicable to the Jones vector is the G matrix of Equation (66). This matrix is of course a representation of the SL(2, c) group. It brings the simplest Jones vector of Equation (128) to its most general form.

## 6.1. Coherency Matrix

However, the Jones vector alone cannot tell us whether the two components are coherent with each other. In order to address this important degree of freedom, we use the coherency matrix defined as [3,23]

$$C = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \qquad (132)$$

where

$$\langle \psi_i^* \psi_j \rangle = \frac{1}{T} \int_0^T \psi_i^* (t + \tau) \psi_j(t) dt \qquad (133)$$

where T is a sufficiently long time interval. Then those four elements become [4]

$$S_{11} = \langle \psi_1^* \psi_1 \rangle = a^2, \quad S_{12} = \langle \psi_1^* \psi_2 \rangle = a^2 (\cos \xi) e^{-i\phi} \qquad (134)$$

$$S_{21} = \langle \psi_2^* \psi_1 \rangle = a^2(\cos\xi)e^{+i\phi}, \quad S_{22} = \langle \psi_2^* \psi_2 \rangle = a^2 \qquad (135)$$

The diagonal elements are the squared magnitudes of $\psi_1$ and $\psi_2$ respectively. The angle $\phi$ could be different from the phase-shift angle given in Equation (79), but this difference does not play any role in the reasoning. The magnitudes of the off-diagonal elements are smaller than the product of the magnitudes of $\psi_1$ and $\psi_2$ if the two polarizations are not completely coherent.

The angle $\xi$ specifies the degree of coherency. If it is zero, the system is fully coherent, while the system is totally incoherent if $\xi$ is $90^\circ$.
This can therefore be called the “decoherence angle.” + +While the most general form of the transformation applicable to the Jones vector is G of Equation (66), the transformation applicable to the coherency matrix is + +$$C' = G C G^{\dagger} \qquad (136)$$ + +The determinant of the coherency matrix is invariant under this transformation, and it is + +$$\det(C) = a^4 (\sin \xi)^2 \qquad (137)$$ + +Thus, angle $\xi$ remains invariant. In the language of the Lorentz transformation applicable to the four-vector, the determinant is equivalent to the (mass)$^2$ and is therefore a Lorentz-invariant quantity. +---PAGE_BREAK--- + +## 6.2. Two Radii of the Poincaré Sphere + +Let us write explicitly the transformation of Equation (136) as + +$$ \begin{pmatrix} S'_{11} & S'_{12} \\ S'_{21} & S'_{22} \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (138) $$ + +It is then possible to construct the following quantities, + +$$ S_0 = \frac{S_{11} + S_{22}}{2}, \qquad S_3 = \frac{S_{11} - S_{22}}{2} \quad (139) $$ + +$$ S_1 = \frac{S_{12} + S_{21}}{2}, \qquad S_2 = \frac{S_{12} - S_{21}}{2i} \quad (140) $$ + +These are known as the Stokes parameters, and constitute a four-vector ($S_0, S_3, S_1, S_2$) under the Lorentz transformation. + +In the Jones vector of Equation (128), the amplitudes of the two orthogonal components are equal. Thus, the two diagonal elements of the coherency matrix are equal. This leads to $S_3 = 0$, and the problem is reduced from the sphere to a circle. 
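The invariance of the determinant of the coherency matrix, and its relation to the Stokes parameters of Equations (139) and (140), can be checked numerically. The values of $a$, $\xi$, $\phi$, and the SL(2, c) parameters below are arbitrary.

```python
import numpy as np

a, xi, phi = 1.3, 0.5, 0.9

# Coherency matrix of Equations (132)-(135) / (146)
C = a**2 * np.array([[1, np.cos(xi) * np.exp(-1j * phi)],
                     [np.cos(xi) * np.exp(1j * phi), 1]])
assert np.isclose(np.linalg.det(C).real, a**4 * np.sin(xi)**2)  # Eq. (137)

# An arbitrary unimodular G (rotation times boost); det(G) = 1
G = (np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])
     @ np.diag([np.exp(0.4), np.exp(-0.4)]))
C_prime = G @ C @ G.conj().T                                    # Eq. (136)
assert np.isclose(np.linalg.det(C_prime).real, a**4 * np.sin(xi)**2)

# Stokes parameters of Equations (139)-(140)
S0 = (C[0, 0] + C[1, 1]).real / 2
S3 = (C[0, 0] - C[1, 1]).real / 2
S1 = (C[0, 1] + C[1, 0]).real / 2
S2 = ((C[0, 1] - C[1, 0]) / 2j).real

assert np.isclose(S3, 0.0)   # equal amplitudes reduce the sphere to a circle
assert np.isclose(S0**2 - (S1**2 + S2**2), a**4 * np.sin(xi)**2)
```

The last assertion is the two-by-two analogue of $p_0^2 - p_z^2 = m^2$: the decoherence angle $\xi$ plays the role of the Lorentz-invariant mass.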
In the resulting two-dimensional subspace, we can introduce the polar coordinate system with

$$ R = \sqrt{S_1^2 + S_2^2} \quad (141) $$

$$ S_1 = R \cos \phi \quad (142) $$

$$ S_2 = R \sin \phi \quad (143) $$

The radius $R$ is the radius of this circle, and is

$$ R = a^2 \cos \xi \quad (144) $$

The radius $R$ takes its maximum value $S_0$ when $\xi = 0^\circ$. It decreases as $\xi$ increases, and vanishes when $\xi = 90^\circ$. This aspect of the radius $R$ is illustrated in Figure 3.

**Figure 3.** Radius of the Poincaré sphere. The radius $R$ takes its maximum value $S_0$ when the decoherence angle $\xi$ is zero. It becomes smaller as $\xi$ increases, and becomes zero when the angle reaches $90^\circ$.
---PAGE_BREAK---

In order to see its implications in special relativity, let us go back to the four-momentum matrix of $m(1,0,0,0)$. Its determinant is $m^2$ and remains invariant. Likewise, the determinant of the coherency matrix of Equation (132) should also remain invariant. The determinant in this case is

$$S_0^2 - R^2 = a^4 \sin^2 \xi \quad (145)$$

This quantity remains invariant under the Hermitian transformation of Equation (138), which is a Lorentz transformation as discussed in Sections 3.2 and 6. This aspect is shown on the last row of Table 7.

The coherency matrix then becomes

$$C = a^2 \begin{pmatrix} 1 & (\cos \xi)e^{-i\phi} \\ (\cos \xi)e^{i\phi} & 1 \end{pmatrix} \quad (146)$$

Since the angle $\phi$ does not play any essential role, we can let $\phi = 0$ and write the coherency matrix as

$$C = a^2 \begin{pmatrix} 1 & \cos \xi \\ \cos \xi & 1 \end{pmatrix} \quad (147)$$

The determinant of the above two-by-two matrix is

$$a^4 (1 - \cos^2 \xi) = a^4 \sin^2 \xi \quad (148)$$

Since the Lorentz transformation leaves the determinant invariant, the change in this $\xi$ variable is not a Lorentz transformation.
It is of course possible to construct a larger group in which this variable plays a role in a group transformation [6], but here we are more interested in its role in a particle gaining a mass from zero, or in the mass becoming zero.

### 6.3. Extra-Lorentzian Symmetry

The coherency matrix of Equation (146) can be diagonalized to

$$a^2 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \quad (149)$$

by a rotation. Let us then go back to the four-momentum matrix of Equation (70). If $p_x = p_y = 0$ and $p_z = p_0 \cos \xi$, we can write this matrix as

$$p_0 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \quad (150)$$

Thus, with this extra variable, it is possible to study the little groups for variable masses, including the small-mass limit and the zero-mass case.

For a fixed value of $p_0$, the $(\text{mass})^2$ and $(\text{momentum})^2$ become

$$(\text{mass})^2 = (p_0 \sin \xi)^2, \quad \text{and} \quad (\text{momentum})^2 = (p_0 \cos \xi)^2 \quad (151)$$

resulting in

$$(\text{energy})^2 = (\text{mass})^2 + (\text{momentum})^2 \quad (152)$$

This transition is illustrated in Figure 4. We are interested in reaching a point on the light cone from the mass hyperbola while keeping the energy fixed. According to this figure, we do not have to make
---PAGE_BREAK---

an excursion to the infinite-momentum limit. If the energy is fixed during this process, Equation (152) gives the relation between the mass and the momentum, and Figure 5 illustrates this relation.

**Figure 4.** Transition from the massive to the massless case. (a) Transition within the framework of the Lorentz group; (b) transition allowed in the symmetry of the Poincaré sphere. Within the framework of the Lorentz group, it is not possible to go from the massive to the massless case directly, because that would require a change in the mass, which is a Lorentz-invariant quantity. The only way is to move to infinite momentum, jump from the hyperbola to the light cone, and come back.
The extra symmetry of the Poincaré sphere allows a direct transition.

**Figure 5.** Energy-momentum-mass relation. This circle illustrates the case where the energy is fixed, while the mass and momentum are related according to the triangular rule. The value of the angle $\xi$ changes from zero to $180^\circ$. The particle mass is negative for negative values of this angle. However, in the Lorentz group, only $(\text{mass})^2$ is a relevant variable, and negative masses might play a role only for theoretical purposes.
---PAGE_BREAK---

Within the framework of the Lorentz group, it is possible to make an excursion to infinite momentum, where the mass hyperbola coincides with the light cone, and then come back to the desired point. On the other hand, the mass formula of Equation (151) allows us to go there directly. The decoherence mechanism of the coherency matrix makes this possible.

**7. Small-Mass and Massless Particles**

We now have a mathematical tool to reduce the mass of a massive particle from its positive value to zero. During this process, the Lorentz-boosted rotation matrix becomes a gauge transformation for the spin-1 particle, as discussed in Section 5.2. For spin-1/2 particles, there are two issues.

1. It was seen in Section 5.2 that the requirement of gauge invariance leads to the polarization of massless spin-1/2 particles, such as neutrinos. What happens to anti-neutrinos?

2. There are strong experimental indications that neutrinos have a small mass. What happens to the $E(2)$ symmetry?

**7.1. Spin-1/2 Particles**

Let us go back to the two-by-two matrices of Section 5.4, and the two-by-two D matrix. For a massive particle, its Wigner decomposition leads to

$$D = \begin{pmatrix} \cos(\theta/2) & -e^{-\eta} \sin(\theta/2) \\ e^{\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (153)$$

This matrix is applicable to the spinors $u$ and $v$ defined in Equation (101), respectively for the spin-up and spin-down states along the z direction.
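The matrix of Equation (153) can be checked against its three-step Wigner decomposition. The sketch below assumes the factorization $D = B(-\eta)\,R(\theta)\,B(\eta)$ (this particular sign convention is an assumption, chosen so that the product reproduces Equation (153) exactly); $\theta$ and $\eta$ are arbitrary.

```python
import numpy as np

theta, eta = 0.8, 1.1

def boost(eta):
    # z-direction boost in the two-by-two representation
    return np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2), np.cos(theta / 2)]])

# Equation (153)
D = np.array([[np.cos(theta / 2), -np.exp(-eta) * np.sin(theta / 2)],
              [np.exp(eta) * np.sin(theta / 2), np.cos(theta / 2)]])

assert np.allclose(D, boost(-eta) @ R @ boost(eta))
assert np.isclose(np.linalg.det(D), 1.0)   # D is unimodular, as required
```

As $\eta$ grows with $e^{-\eta}\sin(\theta/2)$ held small, the upper-right element of $D$ vanishes and the matrix approaches the triangular gauge-transformation form discussed below.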
Since the Lie algebra of SL(2, c) is invariant under the sign change of the $K_i$ matrices, we can consider the "dotted" representation, where the system is boosted in the opposite direction, while the direction of rotations remains the same. The Wigner decomposition then leads to

$$\dot{D} = \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (154)$$

with its spinors

$$\dot{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (155)$$

For anti-neutrinos, the helicity is reversed but the momentum is unchanged. Thus, $D^\dagger$ is the appropriate matrix. However, $D^\dagger = \dot{D}^{-1}$, as was noted in Section 5.4. Thus, we shall use $\dot{D}$ for anti-neutrinos.

When the particle mass becomes very small,

$$e^{-\eta} = \frac{m}{2p} \qquad (156)$$

becomes small. Thus, if we let

$$e^{\eta} \sin(\theta/2) = \gamma, \quad \text{and} \quad e^{-\eta} \sin(\theta/2) = \epsilon^2 \qquad (157)$$

then the $D$ matrix of Equation (153) and the $\dot{D}$ matrix of Equation (154) become

$$\begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\epsilon^2 \\ \gamma & 1 - \gamma\epsilon^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\gamma \\ \epsilon^2 & 1 - \gamma\epsilon^2/2 \end{pmatrix} \qquad (158)$$
+ +For neutrinos, + +$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ \gamma \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (161) $$ + +For anti-neutrinos, + +$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -\gamma \\ 1 \end{pmatrix} \qquad (162) $$ + +It was noted in Section 5.2 that the triangular matrices of Equation (160) perform gauge transformations. Thus, for Equations (161) and (162) the requirement of gauge invariance leads to the polarization of neutrinos. The neutrinos are left-handed while the anti-neutrinos are right-handed. Since, however, nature cannot tell the difference between the dotted and undotted representations, the Lorentz group cannot tell which neutrino is right handed. It can say only that the neutrinos and anti-neutrinos are oppositely polarized. + +If the neutrino has a small mass, the gauge invariance is modified to + +$$ \begin{pmatrix} 1 - \gamma e^2/2 & -e^2 \\ \gamma & 1 - \gamma e^2/2 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} - e^2 \begin{pmatrix} 1 \\ \gamma/2 \end{pmatrix} \qquad (163) $$ + +and + +$$ \begin{pmatrix} 1 - \gamma e^2/2 & -\gamma \\ e^2 & 1 - \gamma e^2 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^2 \begin{pmatrix} -\gamma/2 \\ 1 \end{pmatrix} \qquad (164) $$ + +respectively for neutrinos and anti-neutrinos. Thus the violation of the gauge invariance in both cases is proportional to $e^2$ which is $m^2/4p^2$. + +## 7.2. 
Small-Mass Neutrinos in the Real World

Whether neutrinos have mass, and the consequences of this for the Standard Model and lepton number, is the subject of much theoretical speculation [24,25], as well as of experimental work in cosmology [26], nuclear reactors [27], and high-energy physics [28,29]. Neutrinos are fast becoming an important component of the search for dark matter and dark radiation [30]. Their importance within the Standard Model is reflected by the fact that they are the only particles which seem to exist with only one direction of chirality, i.e., only left-handed neutrinos have been confirmed to exist so far.

It was speculated some time ago that neutrinos in constant electric and magnetic fields would acquire a small mass, and that right-handed neutrinos would be trapped within the interaction field [31]. Solving generalized electroweak models using left- and right-handed neutrinos has been discussed recently [32]. Today these right-handed neutrinos, which do not participate in weak interactions, are called "sterile" neutrinos [33]. A comprehensive discussion of the place of neutrinos in the scheme of physics has been given by Drewes [30]. We should note also that the three different neutrinos, namely $\nu_e$, $\nu_\mu$, and $\nu_\tau$, may have different masses [34].
---PAGE_BREAK---

**8. Scalars, Four-Vectors, and Four-Tensors**

In Sections 5 and 7, our primary interest has been the two-by-two matrices applicable to spinors for spin-1/2 particles. Since we also used four-by-four matrices, we indirectly studied the four-component particle consisting of spin-one and spin-zero components.

If there are two spin-1/2 states, we are accustomed to constructing one spin-zero state and one spin-one state with three degenerate components.

In this paper, we are confronted with two spinors, but each spinor can also be dotted. For this reason, there are 16 orthogonal states consisting of spin-one and spin-zero states. How many spin-zero states?
How many spin-one states?

For particles at rest, it is known that the addition of two spins of one-half results in spin-zero and spin-one states. In this paper, we have two different spinors behaving differently under the Lorentz boost. Around the z direction, both spinors are transformed by

$$Z(\phi) = \exp(-i\phi J_3) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \qquad (165)$$

However, they are boosted by

$$B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (166)$$

$$\dot{B}(\eta) = \exp(i\eta K_3) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \qquad (167)$$

applicable to the undotted and dotted spinors respectively. These two matrices commute with each other, and also with the rotation matrix $Z(\phi)$ of Equation (165). Since $K_3$ and $J_3$ commute with each other, we can work with the matrices $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$ defined as

$$Q(\eta, \phi) = B(\eta)Z(\phi) = \begin{pmatrix} e^{(\eta-i\phi)/2} & 0 \\ 0 & e^{-(\eta-i\phi)/2} \end{pmatrix} \qquad (168)$$

$$\dot{Q}(\eta, \phi) = \dot{B}(\eta)\dot{Z}(\phi) = \begin{pmatrix} e^{-(\eta+i\phi)/2} & 0 \\ 0 & e^{(\eta+i\phi)/2} \end{pmatrix} \qquad (169)$$

When these combined matrices are applied to the spinors,

$$Q(\eta, \phi)u = e^{(\eta-i\phi)/2}u, \quad Q(\eta, \phi)v = e^{-(\eta-i\phi)/2}v \qquad (170)$$

$$\dot{Q}(\eta, \phi)\dot{u} = e^{-(\eta+i\phi)/2}\dot{u}, \quad \dot{Q}(\eta, \phi)\dot{v} = e^{(\eta+i\phi)/2}\dot{v} \qquad (171)$$

If the particle is at rest, we can construct the combinations

$$uu, \quad \frac{1}{\sqrt{2}}(uv + vu), \quad vv \qquad (172)$$

for the spin-one state, and

$$\frac{1}{\sqrt{2}}(uv - vu) \qquad (173)$$

for the spin-zero state. There are thus four bilinear states. In the SL(2, c) regime, there are two additional dotted spinors. If we include both dotted and undotted spinors, there are 16 independent bilinear combinations. They are given in Table 8.
This table also gives the effect of the operation of Q(η, φ). +---PAGE_BREAK--- + +**Table 8.** Sixteen combinations of the SL(2,c) spinors. In the SU(2) regime, there are two spinors leading to four bilinear forms. In the SL(2,c) world, there are two undotted and two dotted spinors. These four spinors lead to 16 independent bilinear combinations. + +
| Spin 1 | Spin 0 |
|---|---|
| $uu$, $\frac{1}{\sqrt{2}}(uv + vu)$, $vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
| $\dot{u}\dot{u}$, $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u})$, $\dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
| $u\dot{u}$, $\frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u})$, $v\dot{v}$ | $\frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u})$ |
| $\dot{u}u$, $\frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u)$, $\dot{v}v$ | $\frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u)$ |
After the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$, these become

$$
\begin{aligned}
e^{-i\phi} e^{\eta} u u, & \quad \frac{1}{\sqrt{2}} (uv + vu), \quad e^{i\phi} e^{-\eta} v v, \quad \frac{1}{\sqrt{2}} (uv - vu) \\
e^{-i\phi} e^{-\eta} \dot{u} \dot{u}, & \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} + \dot{v}\dot{u}), \quad e^{i\phi} e^{\eta} \dot{v} \dot{v}, \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} - \dot{v}\dot{u}) \\
e^{-i\phi} u \dot{u}, & \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} + e^{-\eta} v \dot{u}), \quad e^{i\phi} v \dot{v}, \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} - e^{-\eta} v \dot{u}) \\
e^{-i\phi} \dot{u} u, & \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v + e^{\eta} \dot{v} u), \quad e^{i\phi} \dot{v} v, \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v - e^{\eta} \dot{v} u)
\end{aligned}
$$

Among the bilinear combinations given in Table 8, the following two are invariant under rotations and also under boosts:

$$S = \frac{1}{\sqrt{2}}(uv - vu), \quad \text{and} \quad \dot{S} = -\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}) \qquad (174)$$

They are thus scalars in the Lorentz-covariant world. Are they the same or different? Let us consider the combinations

$$S_+ = \frac{1}{\sqrt{2}}(S + \dot{S}), \quad \text{and} \quad S_- = \frac{1}{\sqrt{2}}(S - \dot{S}) \qquad (175)$$

Under the dot conjugation, $S_+$ remains invariant, but $S_-$ changes its sign.

Under the dot conjugation, the boost is performed in the opposite direction. It is therefore the operation of space inversion, and $S_+$ is a scalar while $S_-$ is called a pseudo-scalar.

## 8.1.
Four-Vectors

Let us consider the bilinear products of one dotted and one undotted spinor, namely $u\dot{u}$, $u\dot{v}$, $v\dot{u}$, and $v\dot{v}$, and construct the matrix

$$U = \begin{pmatrix} u\dot{v} & v\dot{v} \\ u\dot{u} & v\dot{u} \end{pmatrix} \qquad (176)$$

Under the rotation $Z(\phi)$ and the boost $B(\eta)$, this becomes

$$
\begin{pmatrix}
e^{\eta} u \dot{v} & e^{i\phi} v \dot{v} \\
e^{-i\phi} u \dot{u} & e^{-\eta} v \dot{u}
\end{pmatrix}
\qquad
(177)
$$

Indeed, this matrix is consistent with the transformation properties given in Table 8, and transforms like the four-vector

$$
\begin{pmatrix}
t+z & x-iy \\
x+iy & t-z
\end{pmatrix}
\qquad
(178)
$$

This form was given in Equation (65), and has played the central role throughout this paper. Under the space inversion, this matrix becomes

$$
\begin{pmatrix}
t-z & -(x-iy) \\
-(x+iy) & t+z
\end{pmatrix}
\qquad
(179)
$$
---PAGE_BREAK---

This space inversion is known as the parity operation.

For a particle or field with the four components $(V_0, V_z, V_x, V_y)$, the two-by-two form of this four-vector, in the form of Equation (176), is

$$ U = \begin{pmatrix} V_0 + V_z & V_x - iV_y \\ V_x + iV_y & V_0 - V_z \end{pmatrix} \qquad (180) $$

If boosted along the z direction, this matrix becomes

$$ \begin{pmatrix} e^{\eta} (V_0 + V_z) & V_x - iV_y \\ V_x + iV_y & e^{-\eta} (V_0 - V_z) \end{pmatrix} \qquad (181) $$

In the mass-zero limit, the four-vector matrix of Equation (181) becomes

$$ \begin{pmatrix} 2A_0 & A_x - iA_y \\ A_x + iA_y & 0 \end{pmatrix} \qquad (182) $$

with the Lorentz condition $A_0 = A_z$. The gauge transformation applicable to the photon four-vector was discussed in detail in Section 5.2.

Let us go back to the matrix of Equation (180). We can construct another matrix $\dot{U}$.
Since the dot conjugation leads to the space inversion,

$$ \dot{U} = \begin{pmatrix} \dot{u}v & \dot{v}v \\ \dot{u}u & \dot{v}u \end{pmatrix} \qquad (183) $$

Then

$$ \dot{u}v \simeq (t-z), \qquad \dot{v}u \simeq (t+z) \qquad (184) $$

$$ \dot{v}v \simeq -(x-iy), \qquad \dot{u}u \simeq -(x+iy) \qquad (185) $$

where the symbol $\simeq$ means "transforms like".

Thus, $U$ of Equation (176) and $\dot{U}$ of Equation (183) use up eight of the sixteen bilinear forms. Since the scalar and pseudo-scalar of Equation (175) account for two more, we have to give interpretations to the six remaining bilinear forms.

## 8.2. Second-Rank Tensor

In Section 8.1, each bilinear form consisted of one dotted and one undotted spinor. In this subsection, we study bilinear forms in which both spinors are dotted or both are undotted. We are interested in two sets of three quantities satisfying the $O(3)$ symmetry. They should therefore transform like

$$ -(x+iy)/\sqrt{2}, \quad (x-iy)/\sqrt{2}, \quad z \qquad (186) $$

which are like

$$ uu, \quad vv, \quad (uv + vu)/\sqrt{2} \qquad (187) $$

respectively in the $O(3)$ regime. Since the dot conjugation is the parity operation, the dotted counterparts are like

$$ -\dot{u}\dot{u}, \quad -\dot{v}\dot{v}, \quad -(\dot{u}\dot{v} + \dot{v}\dot{u})/\sqrt{2} \qquad (188) $$

In other words,

$$ \dot{(uu)} = -\dot{u}\dot{u}, \quad \text{and} \quad \dot{(vv)} = -\dot{v}\dot{v} \qquad (189) $$
---PAGE_BREAK---

We noticed a similar sign change in Equation (185).
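As a quick numerical check of this correspondence, the sketch below (using numpy; the variable names and sample values are ours, not the paper's) boosts the four-vector matrix of Equation (178) by conjugation with $B(\eta) = \mathrm{diag}(e^{\eta/2}, e^{-\eta/2})$ and verifies that $(t \pm z)$ pick up the factors $e^{\pm\eta}$ while the transverse components and the determinant $t^2 - x^2 - y^2 - z^2$ stay unchanged:

```python
import numpy as np

def four_vector_matrix(t, x, y, z):
    # Two-by-two form of Equation (178): [[t+z, x-iy], [x+iy, t-z]]
    return np.array([[t + z, x - 1j * y],
                     [x + 1j * y, t - z]])

def boost_z(eta):
    # Boost matrix along z: B(eta) = diag(e^{eta/2}, e^{-eta/2})
    return np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

t, x, y, z, eta = 2.0, 0.3, -0.4, 0.5, 1.2   # arbitrary test values
M = four_vector_matrix(t, x, y, z)
B = boost_z(eta)
Mp = B @ M @ B.conj().T                       # M -> B M B^dagger

# (t+z) and (t-z) acquire e^{eta} and e^{-eta}; x and y are untouched
assert np.isclose(Mp[0, 0], np.exp(eta) * (t + z))
assert np.isclose(Mp[1, 1], np.exp(-eta) * (t - z))
assert np.isclose(Mp[0, 1], x - 1j * y)
# The determinant t^2 - x^2 - y^2 - z^2 is the Lorentz invariant
assert np.isclose(np.linalg.det(Mp), np.linalg.det(M))
```

Since $\det B = 1$, the determinant of the matrix, i.e., the Minkowskian norm of the four-vector, is automatically preserved.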
In order to construct the z component in this $O(3)$ space, let us first consider

$$f_z = \frac{1}{2} [(uv + vu) - (\dot{u}\dot{v} + \dot{v}\dot{u})], \quad g_z = \frac{1}{2i} [(uv + vu) + (\dot{u}\dot{v} + \dot{v}\dot{u})] \qquad (190)$$

where $f_z$ and $g_z$ are respectively symmetric and anti-symmetric under the dot conjugation, that is, under the parity operation. These quantities are invariant under the boost along the z direction. They are also invariant under rotations around this axis, but they are not invariant under boosts along, or rotations around, the x or y axis. They are different from the scalars given in Equation (174).

Next, in order to construct the x and y components, we start with $f_\pm$ and $g_\pm$ defined as

$$f_+ = \frac{1}{\sqrt{2}} (uu - \dot{u}\dot{u}) \qquad g_+ = \frac{1}{\sqrt{2}i} (uu + \dot{u}\dot{u}) \qquad (191)$$

$$f_- = \frac{1}{\sqrt{2}} (vv - \dot{v}\dot{v}) \qquad g_- = \frac{1}{\sqrt{2}i} (vv + \dot{v}\dot{v}) \qquad (192)$$

Then

$$f_x = \frac{1}{\sqrt{2}} (f_+ + f_-) = \frac{1}{2} [(uu - \dot{u}\dot{u}) + (vv - \dot{v}\dot{v})] \qquad (193)$$

$$f_y = \frac{1}{\sqrt{2}i} (f_+ - f_-) = \frac{1}{2i} [(uu - \dot{u}\dot{u}) - (vv - \dot{v}\dot{v})] \qquad (194)$$

and

$$g_x = \frac{1}{\sqrt{2}} (g_+ + g_-) = \frac{1}{2i} [(uu + \dot{u}\dot{u}) + (vv + \dot{v}\dot{v})] \qquad (195)$$

$$g_y = \frac{1}{\sqrt{2}i} (g_+ - g_-) = -\frac{1}{2} [(uu + \dot{u}\dot{u}) - (vv + \dot{v}\dot{v})] \qquad (196)$$

Here $f_x$ and $f_y$ are symmetric under the dot conjugation, while $g_x$ and $g_y$ are anti-symmetric.

Furthermore, $f_z$, $f_x$, and $f_y$ of Equations (190), (193), and (194) transform like a three-dimensional vector. The same can be said for $g_z$, $g_x$, and $g_y$ of Equations (190), (195), and (196).
Thus, they can be grouped into the second-rank tensor

$$T = \begin{pmatrix}
0 & -g_z & -g_x & -g_y \\
g_z & 0 & -f_y & f_x \\
g_x & f_y & 0 & -f_z \\
g_y & -f_x & f_z & 0
\end{pmatrix} \qquad (197)$$

whose Lorentz-transformation properties are well known. The $g_i$ components change their signs under space inversion, while the $f_i$ components remain invariant. They are like the electric and magnetic fields, respectively.

If the system is Lorentz-boosted, $f_i$ and $g_i$ can be computed from Table 8. We are now interested in the symmetry of photons, obtained by taking the massless limit. According to the procedure developed in Section 6, we can keep only the terms which become larger for larger values of $\eta$. Thus,

$$f_x \rightarrow \frac{1}{2}(uu - \dot{v}\dot{v}), \qquad f_y \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}) \qquad (198)$$

$$g_x \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}), \qquad g_y \rightarrow -\frac{1}{2}(uu - \dot{v}\dot{v}) \qquad (199)$$

in the massless limit.
---PAGE_BREAK---

Then the tensor of Equation (197) becomes

$$
F = \begin{pmatrix}
0 & 0 & -E_x & -E_y \\
0 & 0 & -B_y & B_x \\
E_x & B_y & 0 & 0 \\
E_y & -B_x & 0 & 0
\end{pmatrix} \qquad (200)
$$

with

$$
B_x \simeq \frac{1}{2} (uu - \dot{v}\dot{v}), \quad B_y \simeq \frac{1}{2i} (uu + \dot{v}\dot{v}) \qquad (201)
$$

$$
E_x \simeq \frac{1}{2i} (uu + \dot{v}\dot{v}), \quad E_y \simeq -\frac{1}{2} (uu - \dot{v}\dot{v}) \qquad (202)
$$

The electric and magnetic field components are perpendicular to each other. Furthermore,

$$
E_x = B_y, \quad E_y = -B_x \qquad (203)
$$

How are these field components related to the spin of the photon? In order to address this question, let us go back to Equation (191).
In the massless limit, with $B_\pm = B_x \pm iB_y$ and $E_\pm = E_x \pm iE_y$,

$$
B_+ \simeq E_+ \simeq uu, \quad B_- \simeq E_- \simeq \dot{v}\dot{v} \qquad (204)
$$

The gauge transformations applicable to $u$ and $\dot{v}$ are the two-by-two matrices

$$
\begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ -\gamma & 1 \end{pmatrix} \qquad (205)
$$

respectively, as noted in Sections 5.2 and 7.1. Both $u$ and $\dot{v}$ are invariant under these gauge transformations, while $v$ and $\dot{u}$ are not.

The $B_+$ and $E_+$ are for the photon spin along the $z$ direction, while $B_-$ and $E_-$ are for the opposite direction. In 1964 [35], Weinberg constructed gauge-invariant state vectors for massless particles starting from Wigner's 1939 paper [1]. The bilinear spinors $uu$ and $\dot{v}\dot{v}$ correspond to Weinberg's state vectors.

## 8.3. Possible Symmetry of the Higgs Mechanism

In this section, we discussed how the two-by-two formalism of the group $SL(2, c)$ leads to the scalar, four-vector, and tensor representations of the Lorentz group. We discussed in detail how the four-vector for a massive particle can be decomposed into the symmetry of a two-component massless particle and one gauge degree of freedom. This aspect was studied in detail by Kim and Wigner [20,21], and their results are illustrated in Figure 6. This decomposition is known in the literature as the group contraction.

The four-dimensional Lorentz group can be contracted to the Euclidean and cylindrical groups. These contraction processes transform a four-component massive vector meson into a massless spin-one particle with two spin components and one gauge degree of freedom.

Since this contraction procedure is spelled out in detail in [21], as well as in the present paper, its reverse process is also well understood. We start with one two-component massless particle with one gauge degree of freedom, and end up with a massive vector meson with its four components.
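The statement that the gauge transformation touches only the gauge degree of freedom can be illustrated numerically. The sketch below (numpy; the numerical values are arbitrary test data, and the conjugation $A \to GAG^{\dagger}$ is the same matrix action used for four-vector matrices throughout this paper) applies the first triangular matrix of Equation (205) to the massless four-vector matrix of Equation (182):

```python
import numpy as np

# Massless four-vector matrix of Equation (182), with the Lorentz
# condition A_0 = A_z (the numerical values are arbitrary test data)
A0, Ax, Ay = 1.5, 0.7, -0.2
A = np.array([[2 * A0, Ax - 1j * Ay],
              [Ax + 1j * Ay, 0.0]])

gamma = 0.8
G = np.array([[1.0, -gamma],   # triangular gauge matrix of Equation (205)
              [0.0, 1.0]])

Ap = G @ A @ G.conj().T        # A -> G A G^dagger

# The transverse components A_x, A_y survive untouched ...
assert np.isclose(Ap[0, 1], Ax - 1j * Ay)
assert np.isclose(Ap[1, 0], Ax + 1j * Ay)
assert np.isclose(Ap[1, 1], 0.0)
# ... while only the A_0 (= A_z) entry is shifted by the gauge parameter
assert not np.isclose(Ap[0, 0], 2 * A0)
```

Only the diagonal entry carrying $A_0 = A_z$ changes, which is exactly the gauge freedom of the photon four-potential discussed in Section 5.2.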
The mathematics of this process is not unlike the Higgs mechanism [36,37], where one massless field with two degrees of freedom absorbs one gauge degree of freedom to become a quartet of bosons, namely the $W^{\pm}$ and $Z$ bosons plus the Higgs boson. As is well known, this mechanism is the basis for the theory of electro-weak interaction formulated by Weinberg and Salam [38,39].
---PAGE_BREAK---

**Figure 6.** Contractions of the three-dimensional rotation group. (a) Contraction in terms of the tangential plane and the tangential cylinder [20]; (b) Contraction in terms of the expansion and contraction of the longitudinal axis [21]. In both cases, the symmetry ends up with one rotation around the longitudinal direction and one translational degree of freedom along the longitudinal axis. The rotation and translation correspond to the helicity and gauge degrees of freedom, respectively.

The term "spontaneous symmetry breaking" is used for the Higgs mechanism. It could be an interesting problem to see whether this symmetry breaking for the two-Higgs-doublet model can be formulated in terms of the Lorentz group and its contractions. In this connection, we note an interesting recent paper by Dée and Ivanov [40].

# 9. Conclusions

The damped harmonic oscillator, Wigner's little groups, and the Poincaré sphere belong to three different branches of physics. In this paper, it was noted that they are based on the same mathematical framework, namely the algebra of two-by-two matrices.

The second-order differential equation for damped harmonic oscillators can be formulated in terms of two-by-two matrices. These matrices produce the algebra of the group $Sp(2)$. While there are three trace classes of the two-by-two matrices of this group, the damped oscillator tells us how to make transitions from one class to another.

It is shown that Wigner's three little groups can be defined in terms of the trace classes of the $Sp(2)$ group.
If the trace is smaller than two, the little group is for massive particles. If the trace is greater than two, the little group is for imaginary-mass particles. If the trace is equal to two, the little group is for massless particles. Thus, the damped harmonic oscillator provides a procedure for the transition from one little group to another.

The Poincaré sphere contains the symmetry of the six-parameter $SL(2, c)$ group. Thus, the sphere provides the procedure for extending the symmetry of the little group, defined within the Lorentz group of three-dimensional Minkowski space, to the full Lorentz group in four-dimensional space-time. In addition, the Poincaré sphere offers the variable which allows us to change the symmetry of a massive particle to that of a massless particle by continuously decreasing the mass.

In this paper, we extracted the mathematical properties of Wigner's little groups from the damped harmonic oscillator and the Poincaré sphere. In so doing, we have shown that the transition from one little group to another is tangentially continuous.

This subject was initiated by İnönü and Wigner in 1953 as the group contraction [41]. In their paper, they discussed how the three-dimensional rotation group can be contracted to the two-dimensional Euclidean group, with one rotational and two translational degrees of freedom. While the $O(3)$ rotation group can be illustrated by a three-dimensional sphere, the plane tangential at
---PAGE_BREAK---

the north pole is for the $E(2)$ Euclidean group. However, we can also consider a cylinder tangential at the equatorial belt. The resulting cylindrical group is isomorphic to the Euclidean group [20]. While the rotational degree of freedom of this cylinder is for the photon spin, the up and down translations on the surface of the cylinder correspond to the gauge degree of freedom of the photon, as illustrated in Figure 6.
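The trace criterion summarized above lends itself to a short numerical sketch (numpy; the function and matrix names are ours): a rotation-like matrix, a boost-like matrix, and a triangular matrix represent the three trace classes of $Sp(2)$.

```python
import numpy as np

def little_group_class(M, tol=1e-9):
    """Classify a real two-by-two unimodular matrix by its trace,
    following the correspondence described in the conclusions."""
    tr = abs(np.trace(M))
    if tr < 2 - tol:
        return "massive"         # rotation-like, |trace| < 2
    if tr > 2 + tol:
        return "imaginary-mass"  # boost-like, |trace| > 2
    return "massless"            # triangular, |trace| = 2

theta, eta, gamma = 0.6, 0.9, 1.3   # arbitrary sample parameters
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
boost = np.array([[np.cosh(eta), np.sinh(eta)],
                  [np.sinh(eta), np.cosh(eta)]])
triangular = np.array([[1.0, gamma],
                       [0.0, 1.0]])

assert little_group_class(rotation) == "massive"
assert little_group_class(boost) == "imaginary-mass"
assert little_group_class(triangular) == "massless"
```

All three sample matrices have unit determinant, so they belong to $Sp(2)$; only the trace distinguishes the three little groups.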
It was also noted that the Bargmann decomposition of two-by-two matrices, as illustrated in Figures 1 and 2, allows us to study more detailed properties of the little groups, including their space and time reflection properties. Also in this paper, we have discussed how the scalars, four-vectors, and four-tensors can be constructed from the two-by-two representation in the Lorentz-covariant world.

In addition, it should be noted that the symmetry of the Lorentz group is also contained in the squeezed state of light [14] and in the ABCD matrix for optical beam transfers [18]. We also mentioned the possibility of understanding the mathematics of the Higgs mechanism in terms of the Lorentz group and its contractions.

## Acknowledgements

In his 1939 paper [1], Wigner worked out the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. In so doing, he worked out their internal space-time symmetries. In spite of its importance, this paper remains one of the most difficult papers to understand. Wigner was eager to make his paper understandable to younger physicists.

While he was the pioneer in introducing the mathematics of group theory to physics, he was also quite fond of using two-by-two matrices to explain group theoretical ideas. He asked one of the present authors (Young S. Kim) to rewrite his 1939 paper [1] using the language of those matrices. This is precisely what we did in the present paper.

We are grateful to Eugene Paul Wigner for this valuable suggestion.

## Author Contributions

This paper is largely based on the earlier papers by Young S. Kim and Marilyn E. Noz, and those by Sibel Başkal and Young S. Kim. The two-by-two formulation of the damped oscillator in Section 2 was jointly developed by Sibel Başkal and Young S. Kim during the summer of 2012. Marilyn E. Noz developed the idea of the symmetry of small-mass neutrinos in Section 7.
The limiting process in the symmetry of the Poincaré sphere was formulated by Young S. Kim. Sibel Başkal initially constructed the four-by-four tensor representation in Section 8.

The initial organization of this paper was conceived by Young S. Kim in his attempt to follow Wigner's suggestion to translate his 1939 paper into the language of two-by-two matrices. Sibel Başkal and Marilyn E. Noz tightened the organization and filled in the details.

## Conflicts of Interest

The authors declare no conflicts of interest.

## References

1. Wigner, E. On unitary representations of the inhomogeneous Lorentz Group. *Ann. Math.* **1939**, *40*, 149–204.
2. Han, D.; Kim, Y.S.; Son, D. Eulerian parametrization of Wigner little groups and gauge transformations in terms of rotations in 2-component spinors. *J. Math. Phys.* **1986**, *27*, 2228–2235.
3. Born, M.; Wolf, E. *Principles of Optics*, 6th ed.; Pergamon: Oxford, UK, 1980.
---PAGE_BREAK---

4. Han, D.; Kim, Y.S.; Noz, M.E. Stokes parameters as a Minkowskian four-vector. Phys. Rev. E **1997**, *56*, 6065–6076.

5. Brosseau, C. *Fundamentals of Polarized Light: A Statistical Optics Approach*; John Wiley: New York, NY, USA, 1998.

6. Başkal, S.; Kim, Y.S. De Sitter group as a symmetry for optical decoherence. J. Phys. A **2006**, *39*, 7775–7788.

7. Kim, Y.S.; Noz, M.E. Symmetries shared by the Poincaré Group and the Poincaré Sphere. *Symmetry* **2013**, *5*, 233–252.

8. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. Phys. Rev. D **1982**, *26*, 3717–3725.

9. Han, D.; Kim, Y.S.; Son, D. Photons, neutrinos and gauge transformations. Am. J. Phys. **1986**, *54*, 818–821.

10. Başkal, S.; Kim, Y.S. Little groups and Maxwell-type tensors for massive and massless particles. Europhys. Lett. **1997**, *40*, 375–380.

11. Leggett, A.; Chakravarty, S.; Dorsey, A.; Fisher, M.; Garg, A.; Zwerger, W. Dynamics of the dissipative 2-state system. Rev.
Mod. Phys. **1987**, *59*, 1–85.

12. Başkal, S.; Kim, Y.S. One analytic form for four branches of the ABCD matrix. J. Mod. Opt. **2010**, *57*, 1251–1259.

13. Başkal, S.; Kim, Y.S. Lens optics and the continuity problems of the ABCD matrix. J. Mod. Opt. **2014**, *61*, 161–166.

14. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.

15. Bargmann, V. Irreducible unitary representations of the Lorentz group. Ann. Math. **1947**, *48*, 568–640.

16. Iwasawa, K. On some types of topological groups. Ann. Math. **1949**, *50*, 507–558.

17. Guillemin, V.; Sternberg, S. *Symplectic Techniques in Physics*; Cambridge University Press: Cambridge, UK, 1984.

18. Başkal, S.; Kim, Y.S. Lorentz Group in Ray and Polarization Optics. In *Mathematical Optics: Classical, Quantum and Computational Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Taylor and Francis: New York, NY, USA, 2013; Chapter 9, pp. 303–340.

19. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon: Oxford, UK, 1964.

20. Kim, Y.S.; Wigner, E.P. Cylindrical group and massless particles. J. Math. Phys. **1987**, *28*, 1175–1179.

21. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. J. Math. Phys. **1990**, *31*, 55–60.

22. Georgieva, E.; Kim, Y.S. Iwasawa effects in multilayer optics. Phys. Rev. E **2001**, *64*, doi:10.1103/PhysRevE.64.026602.

23. Saleh, B.E.A.; Teich, M.C. *Fundamentals of Photonics*, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2007.

24. Papoulias, D.K.; Kosmas, T.S. Exotic Lepton Flavour Violating Processes in the Presence of Nuclei. J. Phys.: Conf. Ser. **2013**, *410*, 012123:1–012123:5.

25. Dinh, D.N.; Petcov, S.T.; Sasao, N.; Tanaka, M.; Yoshimura, M. Observables in neutrino mass spectroscopy using atoms. Phys. Lett. B **2013**, *719*, 154–163.

26. Miramonti, L.; Antonelli, V. Advancements in Solar Neutrino physics. Int. J. Mod. Phys.
E **2013**, *22*, 1–16.

27. Li, Y.-F.; Cao, J.; Jun, Y.; Wang, Y.; Zhan, L. Unambiguous determination of the neutrino mass hierarchy using reactor neutrinos. Phys. Rev. D **2013**, *88*, 013008:1–013008:9.

28. Bergstrom, J. Combining and comparing neutrinoless double beta decay experiments using different nuclei. J. High Energy Phys. **2013**, *02*, 093:1–093:27.

29. Han, T.; Lewis, I.; Ruiz, R.; Si, Z.-G. Lepton number violation and $W'$ chiral couplings at the LHC. Phys. Rev. D **2013**, *87*, 035011:1–035011:25.

30. Drewes, M. The phenomenology of right handed neutrinos. Int. J. Mod. Phys. E **2013**, *22*, 1330019:1–1330019:75.

31. Barut, A.O.; McEwan, J. The four states of the massless neutrino with Pauli coupling by spin-gauge invariance. Lett. Math. Phys. **1986**, *11*, 67–72.

32. Palcu, A. Neutrino Mass as a consequence of the exact solution of 3-3-1 gauge models without exotic electric charges. Mod. Phys. Lett. A **2006**, *21*, 1203–1217.

33. Bilenky, S.M. Neutrino. Phys. Part. Nucl. **2013**, *44*, 1–46.

34. Alhendi, H.A.; Lashin, E.I.; Mudlej, A.A. Textures with two traceless submatrices of the neutrino mass matrix. Phys. Rev. D **2008**, *77*, 013009:1–013009:13.

35. Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. Phys. Rev. **1964**, *135*, B1049–B1056.

36. Higgs, P.W. Broken symmetries and the masses of gauge bosons. Phys. Rev. Lett. **1964**, *13*, 508–509.

Symmetry **2014**, *6*, 473–515
---PAGE_BREAK---

37. Guralnik, G.S.; Hagen, C.R.; Kibble, T.W.B. Global conservation laws and massless particles. Phys. Rev. Lett. **1964**, *13*, 585–587.

38. Weinberg, S. A model of leptons. Phys. Rev. Lett. **1967**, *19*, 1265–1266.

39. Weinberg, S. *Quantum Theory of Fields, Volume II, Modern Applications*; Cambridge University Press: Cambridge, UK, 1996.

40. Dée, A.; Ivanov, I.P.
Higgs boson masses of the general two-Higgs-doublet model in the Minkowski-space formalism. Phys. Rev. D **2010**, *81*, 015012:1–015012:8.

41. İnönü, E.; Wigner, E.P. On the contraction of groups and their representations. Proc. Natl. Acad. Sci. USA **1953**, *39*, 510–524.

© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

---PAGE_BREAK---

# Chapter 2:
## Harmonic Oscillators in Modern Physics
---PAGE_BREAK---

---PAGE_BREAK---

Article

# Analytical Solutions of Temporal Evolution of Populations in Optically-Pumped Atoms with Circularly Polarized Light

Heung-Ryoul Noh

Department of Physics, Chonnam National University, Gwangju 500-757, Korea; hrnoh@chonnam.ac.kr; Tel.: +82-62-530-3366

Academic Editor: Young Suh Kim

Received: 10 December 2015; Accepted: 14 March 2016; Published: 19 March 2016

**Abstract:** We present an analytical calculation of temporal evolution of populations for optically pumped atoms under the influence of weak, circularly polarized light. The differential equations for the populations of magnetic sublevels in the excited state, derived from rate equations, are expressed in the form of inhomogeneous second-order differential equations with constant coefficients. We present a general method of analytically solving these differential equations, and obtain explicit analytical forms of the populations of the ground state at the lowest order in the saturation parameter. The obtained populations can be used to calculate lineshapes in various laser spectroscopies, considering transit time relaxation.

**Keywords:** second-order differential equations; optical pumping; analytical solutions

**PACS:** 02.30.Hq; 32.80.Xx; 32.30.-r

## 1.
Introduction

When an atom is illuminated by single-mode laser light, the populations of the magnetic sublevels and the coherences between them exhibit complicated temporal variations. This phenomenon is called optical pumping, and it is widely used in the preparation of internal atomic states of interest [1,2]. It has recently been observed that optical pumping affects the lineshapes in saturated absorption spectroscopy (SAS) [3], electromagnetically induced transparency (EIT) [4], and absorption of cold atoms with a Λ-type three-level scheme [5]. Nonlinear effects in optical pumping have also been investigated [6,7].

The temporal dynamics of the internal states of an atom are accurately described by density matrix equations [8,9]. In some special cases, however, a simpler method can be employed to solve for the dynamics of the internal states of the atom, using rate equations [10,11]. Furthermore, when the intensity of the light is weak, the rate equations can be solved analytically [12–15]. These analytical solutions are practically very useful; once they are obtained, quantities such as the absorption coefficient of a probe beam and the lineshape functions in nonlinear laser spectroscopy can readily be computed analytically. We have previously reported analytical solutions for SAS [16,17] and polarization spectroscopy (PS) [18].

Interestingly, the equations governing the temporal dynamics of the populations in the weak intensity limit are homogeneous or inhomogeneous second-order linear differential equations (DEs) with constant coefficients [12–15]. Unlike the harmonic oscillator in mechanics, where under- or over-damped motions are observed [19], the equations for optical pumping show only over-damped behaviors. However, this system exhibits a variety of inhomogeneous DEs.
In a recent publication, we reported the method of solving these equations analytically, in the context of a pedagogical
---PAGE_BREAK---

description of the method of solving inhomogeneous DEs [15]. Although the method is straightforward in principle, it is not easy to obtain analytical solutions for complicated atomic structures, such as Cs. Extending the previous study [15], in this paper we present a general method of analytically solving the DEs for such a complicated atom.

## 2. Theory

The energy level diagram under consideration is shown in Figure 1. Since alkali-metal atoms are considered, there are two ground states with $F_g = I + 1/2$ and $F_g = I - 1/2$ ($I$: nuclear spin angular momentum quantum number). We consider a $\sigma^+$ polarized weak laser beam, whose Rabi frequency is $\Omega$ and whose optical frequency is $\omega = \omega_0 + \delta$ ($\omega_0$ is the resonance frequency and $\delta$ is the laser frequency detuning). We assume that the laser frequency is tuned to the transition from one of the two ground states (in Figure 1, the state $F_g = I + 1/2$). Then, the other ground state (in Figure 1, the state $F_g = I - 1/2$) is not excited by the laser light, and can be populated by spontaneous emission from the excited state when the optical transition is not cycling. The populations (and the states themselves) of the magnetic sublevels in the excited, upper ground, and lower ground states are labeled, respectively, as $g_i$, $f_i$, and $h_i$ with $i = 1, 2, \dots$.

Figure 1. An energy level diagram for an optically pumped atom under the influence of circularly polarized light.

The internal dynamics of the atom can be described by the density matrix equation in the frame rotating with frequency $\omega$:

$$ \dot{\rho} = -(i/\hbar)[H, \rho] + \dot{\rho}_{\text{sp}} \quad (1) $$

where $\rho$ is the density operator.
In Equation (1), the Hamiltonian, $H$, is given by

$$ H = -\sum_j \hbar \delta |g_j\rangle \langle g_j| - \sum_j \hbar \Delta_g |h_j\rangle \langle h_j| - \frac{\hbar \Omega}{2} \sum_j C_j^{j} |g_j\rangle \langle f_j| + \text{h.c.}, \quad (2) $$

where $\Delta_g$ is the hyperfine splitting between the two ground states and h.c. denotes the Hermitian conjugate. In Equation (2), the first two terms on the right-hand side represent the bare atomic Hamiltonian, and the remaining term denotes the atom–photon interaction Hamiltonian [20]. $C_i^j$ is the normalized transition strength between the states $f_i$ and $g_j$, and $R_i^j \equiv (C_i^j)^2$ is given below (Equation (13)). In Equation (1), $\dot{\rho}_{\text{sp}}$ represents the spontaneous emission term, whose matrix representations are given by:

$$ \begin{align} \langle g_i | \dot{\rho}_{\text{sp}} | g_j \rangle &= -\Gamma \langle g_i | \rho | g_j \rangle, \\ \langle g_i | \dot{\rho}_{\text{sp}} | f_j \rangle &= -\frac{\Gamma}{2} \langle g_i | \rho | f_j \rangle, \quad \langle g_i | \dot{\rho}_{\text{sp}} | h_j \rangle = -\frac{\Gamma}{2} \langle g_i | \rho | h_j \rangle, \\ \langle f_i | \dot{\rho}_{\text{sp}} | f_j \rangle &= \Gamma \sum_{\epsilon=-2}^{0} C_i^{i+\epsilon} C_j^{j+\epsilon} \langle g_{i+\epsilon} | \rho | g_{j+\epsilon} \rangle, \\ \langle h_i | \dot{\rho}_{\text{sp}} | h_j \rangle &= \Gamma \sum_{\epsilon=-2}^{0} D_i^{i+\epsilon} D_j^{j+\epsilon} \langle g_{i+\epsilon} | \rho | g_{j+\epsilon} \rangle, \end{align} \quad (3) $$
---PAGE_BREAK---

and $\langle \mu | \dot{\rho}_{\text{sp}} | \nu \rangle = \langle \nu | \dot{\rho}_{\text{sp}} | \mu \rangle^*$ when $\mu \neq \nu$, where $\Gamma$ is the decay rate of the excited state. $D_i^j$ is the normalized transition strength between the states $h_i$ and $g_j$, and $T_i^j = (D_i^j)^2$ is also given below (Equation (13)).
Inserting Equations (2) and (3) into Equation (1), we can obtain the following differential equations for the optical coherences and populations:

$$ \langle g_i | \dot{\rho} | f_i \rangle = \left(i\delta - \frac{\Gamma}{2}\right) \langle g_i | \rho | f_i \rangle + \frac{i}{2} C_i^{i} \Omega (g_i - f_i), \quad (4) $$

$$ \dot{g}_i = -\Gamma g_i + \frac{i}{2} C_i^{i} \Omega (\langle g_i | \rho | f_i \rangle - \langle f_i | \rho | g_i \rangle), \quad (5) $$

$$ \dot{f}_i = \Gamma \sum_{j=i-2}^{i} (C_i^{j})^2 g_j - \frac{i}{2} C_i^{i} \Omega (\langle g_i | \rho | f_i \rangle - \langle f_i | \rho | g_i \rangle), \quad (6) $$

$$ \dot{h}_i = \Gamma \sum_{j=i-2}^{i} (D_i^j)^2 g_j, \quad (7) $$

where we use the simplified expressions for the populations: $\langle g_i | \rho | g_i \rangle = g_i$, $\langle f_i | \rho | f_i \rangle = f_i$, and $\langle h_i | \rho | h_i \rangle = h_i$. In Equations (4)–(7), we assume that $\langle g_i | \rho | h_i \rangle = 0$ because $\Delta_g$ is much larger than $|\delta|$ and $\Gamma$. We note that, because the polarization of the light is $\sigma^+$, the Zeeman coherences between the magnetic sublevels in the excited and ground states vanish.

In Equation (4), the characteristic decay rate of the optical coherence is $\Gamma/2$, which is much larger than the characteristic decay rate of the populations ($\sim s\Gamma$; see Equation (12) below for the definition of $s$). Thus, the optical coherences evolve much faster than the populations; exploiting this separation of time scales is called the rate equation approximation [21]. Owing to this approximation, $\langle g_i | \rho | f_i \rangle$ can be expressed in terms of the populations as follows by letting $\langle g_i | \dot{\rho} | f_i \rangle = 0$:

$$ \langle g_i | \rho | f_i \rangle = \frac{C_i^{i} \Omega}{i\Gamma + 2\delta} (f_i - g_i).
\quad (8) $$

Then, inserting Equation (8) and its complex conjugate into Equations (5)–(7), we obtain the following rate equations for the populations:

$$ \dot{f}_i = -\frac{\Gamma}{2} s R_i^{i} (f_i - g_i) + \sum_{j=i-2}^{i} \Gamma R_i^{j} g_j, \quad (9) $$

$$ \dot{g}_i = \frac{\Gamma}{2} s R_i^{i} (f_i - g_i) - \Gamma g_i, \quad (10) $$

$$ \dot{h}_i = \sum_{j=i-2}^{i} \Gamma T_i^{j} g_j, \quad (11) $$

for $i=1,2,\dots$. In Equations (9)–(11), $s$ is the saturation parameter, which is given by

$$ s = \frac{\Omega^2/2}{\delta^2 + \Gamma^2/4}, \quad (12) $$

and $R_i^j = (C_i^j)^2$ and $T_i^j = (D_i^j)^2$. We note that $s$ is a function of both the Rabi frequency $\Omega$ and the detuning $\delta$. Notably, the reference of the frequency detuning differs depending on the transition line considered. When $i$ and $j$ refer to the states $|F_g, m_g\rangle$ and $|F_e, m_e\rangle$, respectively, the transition strength ($R_i^j$) is given by

$$ R_{F_g, m_g}^{F_e, m_e} = (2L_e+1)(2J_e+1)(2J_g+1)(2F_e+1)(2F_g+1) \\ \times \left[ \begin{Bmatrix} L_e & J_e & S \\ J_g & L_g & 1 \end{Bmatrix} \begin{Bmatrix} J_e & F_e & I \\ F_g & J_g & 1 \end{Bmatrix} \begin{pmatrix} F_g & 1 & F_e \\ m_g & m_e - m_g & -m_e \end{pmatrix} \right]^2, \quad (13) $$
---PAGE_BREAK---

where $L$ and $S$ denote the orbital and electron spin angular momenta, respectively, and the curly (round) brackets represent the 6J (3J) symbol. $T_i^j$ are similarly obtained by using different $F_g$ values in Equation (13).

The explicit form of Equation (9) is given by

$$ \dot{f}_i = \frac{\Gamma}{2} s R_i^i (g_i - f_i) + \Gamma \left( R_i^{i-2} g_{i-2} + R_i^{i-1} g_{i-1} + R_i^i g_i \right), \quad (14) $$

and $f_i$ can be expressed in terms of $\dot{g}_i$ and $g_i$ from Equation (10) at the lowest order in $s$ as follows:

$$ f_i = \frac{2}{\Gamma s R_i^i} (\dot{g}_i + \Gamma g_i).
\qquad (15) $$

Insertion of Equations (10) and (15) into Equation (14) yields the following DE for $g_i$:

$$ \ddot{g}_i + \Gamma \left(1 + \frac{s}{2} R_i^i\right) \dot{g}_i + \frac{s}{2} \Gamma^2 R_i^i \left(1 - R_i^i\right) g_i = \frac{s}{2} \Gamma^2 R_i^{i-2} R_i^i g_{i-2} + \frac{s}{2} \Gamma^2 R_i^{i-1} R_i^i g_{i-1}. \quad (16) $$

When $i=1$, the right-hand side of Equation (16) vanishes, and Equation (16) becomes a homogeneous DE. In contrast, when $i \neq 1$, Equation (16) becomes an inhomogeneous DE because the right-hand side terms are functions of $g_{i-1}$ and $g_{i-2}$.

We solve Equation (16) consecutively, starting from $i=1$. As is well known, the solution of Equation (16) consists of two parts: a homogeneous solution and a particular solution. We first find the solutions of the homogeneous equation by inserting $g_i \sim e^{\lambda \Gamma t}$ into Equation (16). Then, we have two values ($\lambda_{2i-1}, \lambda_{2i}$) for $\lambda$ as follows:

$$ \lambda_{2i-1(2i)} = \frac{1}{4} \left( -2 - sR_i^i \mp \sqrt{4 - 4sR_i^i + s(8+s)(R_i^i)^2} \right), $$

where the minus (plus) sign corresponds to $\lambda_{2i-1}$ ($\lambda_{2i}$). These can be approximated as follows in the weak intensity limit:

$$ \lambda_{2i-1} \approx -1 - \frac{s}{2} (R_i^i)^2, \quad \lambda_{2i} \approx -\frac{s}{2} R_i^i (1 - R_i^i). $$

We first consider the case of $i=1$ in Equation (16). Then, the solution is given by:

$$ g_1 = C_{1,1}e^{\lambda_1 \Gamma t} + C_{1,2}e^{\lambda_2 \Gamma t}, $$

where the coefficients $C_{1,1}$ and $C_{1,2}$ should be determined using the initial conditions. In the case of $i=2$, the right-hand side of Equation (16) contains the terms $e^{\lambda_1 \Gamma t}$ and $e^{\lambda_2 \Gamma t}$.
Therefore, $g_2$ has four exponential terms:

$$ g_2 = C_{2,1}e^{\lambda_1 \Gamma t} + C_{2,2}e^{\lambda_2 \Gamma t} + C_{2,3}e^{\lambda_3 \Gamma t} + C_{2,4}e^{\lambda_4 \Gamma t}, $$

where the coefficients should also be determined. Therefore, we can express $g_j$ generally as follows:

$$ g_j = \sum_{k=1}^{2j} C_{j,k} e^{\lambda_k \Gamma t}. \quad (17) $$
---PAGE_BREAK---

We find $C_{j,k}$ with $k = 1, 2, \dots, 2j$ by means of recursion relations; i.e., $C_{j,k}$ are expressed in terms of $C_{i,l}$ with $i < j$ and $l = 1, 2, \dots, 2i$. Inserting Equation (17) into Equation (16), we obtain

$$
\begin{aligned}
g_i = C_{i,2i-1} & e^{\lambda_{2i-1}\Gamma t} + C_{i,2i} e^{\lambda_{2i}\Gamma t} \\
& + \sum_{k=1}^{2(i-1)} \frac{(s/2) R_i^{i-1} R_i^{i} C_{i-1,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)} e^{\lambda_k \Gamma t} \\
& + \sum_{k=1}^{2(i-2)} \frac{(s/2) R_i^{i-2} R_i^{i} C_{i-2,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)} e^{\lambda_k \Gamma t}.
\end{aligned}
\quad (18)
$$

Comparing Equations (17) and (18) gives

$$
\begin{align}
C_{i,k} &= \frac{(s/2)R_i^i (R_i^{i-1}C_{i-1,k} + R_i^{i-2}C_{i-2,k})}{\lambda_k^2 + \lambda_k + \frac{s}{2}R_i^i(1+\lambda_k - R_i^i)}, \tag{19} \\
\text{for } k &= 1, 2, \dots, 2(i-2), \nonumber
\end{align}
$$

$$
\begin{align}
C_{i,k} &= \frac{(s/2) R_i^{i-1} R_i^i C_{i-1,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1 + \lambda_k - R_i^i)}, \tag{20} \\
\text{for } k &= 2i-3 \text{ and } 2(i-1). \notag
\end{align}
$$

The remaining two coefficients, $C_{i,2i-1}$ and $C_{i,2i}$, can be derived from Equation (18) using the two initial conditions for $g_i(0)$ and $\dot{g}_i(0)$:

$$ g_i(0) = 0, \quad \dot{g}_i(0) = \frac{s}{2} p_0 R_i^i, $$

where $p_0$ is the population of each sublevel in the ground state at equilibrium, which is equal to $1/[2(2I+1)]$.
Then, the results are given by + +$$ C_{i,2i-1} = \frac{1}{2Q_i} [2(A_i + 2A'_i + B_i + 2B'_i) + (A_i + B_i - 2p_0) sR_i^i] - \frac{A_i+B_i}{2}, \quad (21) $$ + +$$ C_{i,2i} = -\frac{1}{2Q_i} [2(A_i + 2A'_i + B_i + 2B'_i) + (A_i + B_i - 2p_0) sR_i^i] - \frac{A_i+B_i}{2}, \quad (22) $$ + +where + +$$ Q_i = \sqrt{4 + s R_i^i (-4 + (8+s) R_i^i)}, $$ + +$$ A_i = \sum_{k=1}^{2(i-1)} \frac{(s/2) R_i^{i-1} R_i^{i} C_{i-1,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)}, \quad \text{for } i \ge 2, $$ + +$$ B_i = \sum_{k=1}^{2(i-2)} \frac{(s/2) R_i^{i-2} R_i^{i} C_{i-2,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)}, \quad \text{for } i \ge 3, $$ + +$$ A'_i = \sum_{k=1}^{2(i-1)} \frac{(s/2) R_i^{i-1} R_i^i \lambda_k C_{i-1,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)}, \quad \text{for } i \ge 2, $$ + +$$ B'_i = \sum_{k=1}^{2(i-2)} \frac{(s/2) R_i^{i-2} R_i^i \lambda_k C_{i-2,k}}{\lambda_k^2 + \lambda_k + \frac{s}{2} R_i^i (1+\lambda_k - R_i^i)}, \quad \text{for } i \ge 3, $$ + +and + +$A_1 = 0$, $A'_1 = 0$, $B_1 = B_2 = 0$, and $B'_1 = B'_2 = 0$. +---PAGE_BREAK--- + +The coefficients in $g_i$ can be obtained successively from those in $g_1$ by using the recursion relations in Equations (19)–(22). Once the $g_i$ are obtained, the $f_i$ can be obtained using Equation (15). To the lowest order in $s$, the result is given by + +$$f_i = \sum_{k=1}^{i} \frac{2C_{i,2k}}{sR_i^i} e^{\lambda_{2k}\Gamma t}. \quad (23)$$ + +Since $\lambda_k \sim -1$ for odd $k$, $g_i$ can be expressed as follows: + +$$g_i = \sum_{k=1}^{i} \left(C_{i,2k-1}e^{-\Gamma t} + C_{i,2k}e^{\lambda_{2k}\Gamma t}\right). \quad (24)$$ + +Taking the derivative of Equation (24) with respect to time and letting $t=0$, we have + +$$\dot{g}_i(0) = -\sum_{k=1}^{i} C_{i,2k-1},$$ + +up to the first order in $s$, since the $\lambda_{2k}$ ($k=1, 2, \dots, i$) are already of the first order in $s$. 
Because one of the initial conditions is $\dot{g}_i(0) = sp_0 R_i^i/2$, and $g_i(0) = \sum_{k=1}^i (C_{i,2k-1} + C_{i,2k}) = 0$ from the other initial condition, we obtain the following equations: + +$$\sum_{k=1}^{i} C_{i,2k-1} = - \sum_{k=1}^{i} C_{i,2k} = -\frac{s}{2} p_0 R_i^i. \quad (25)$$ + +Using the relations in Equations (23) and (25), we find the simplified form of $g_i$ as follows: + +$$g_i = \frac{R_i^i s}{2} (f_i - p_0 e^{-\Gamma t}). \quad (26)$$ + +We now obtain the populations of the sublevels in the ground state that are not excited by the laser light; the one or two magnetic sublevels with the highest magnetic quantum numbers correspond to this case. The analytical populations are easily obtained by integrating the populations spontaneously transferred from the excited state, and the result is given by + +$$f_i = p_0 + \sum_{k=1}^{i-2} R_i^{i-2} C_{i-2,2k} \frac{e^{\lambda_{2k}\Gamma t} - 1}{\lambda_{2k}} + \sum_{k=1}^{i-1} R_i^{i-1} C_{i-1,2k} \frac{e^{\lambda_{2k}\Gamma t} - 1}{\lambda_{2k}}. \quad (27)$$ + +In several atomic transition systems, some of the $\lambda_k$ can be degenerate, and the method of obtaining the particular solutions given in Equation (18) no longer holds. We may find the particular solutions using the method presented in our previous paper [15]. However, it is also possible to proceed by intentionally modifying the $\lambda_k$ so that all of them become unique. One possible method is setting $R_i^i \to R_i^i + i\epsilon$, where $\epsilon$ is a constant that is taken to zero at the final stage of the calculation. Although this method is not novel, it is very efficient. 
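 + +The $\epsilon$-shift can be illustrated numerically. The sketch below (an illustration, not taken from the paper) uses the weak-intensity approximation $\lambda_{2i} \approx -\frac{s}{2}R_i^i(1 - R_i^i)$ together with the $F_g = 3 \rightarrow F_e = 3$ transition strengths quoted later in Section 3.2; it shows that the unshifted strengths yield degenerate exponents, while the shift $R_i^i \to R_i^i + i\epsilon$ makes all of them distinct for any $\epsilon > 0$: + +```python +from fractions import Fraction as F + +def lam2(R): +    # Weak-intensity decay exponent lambda_{2i} ~ -(s/2) R (1 - R), in units of s*Gamma. +    return -R * (1 - R) / 2 + +# Normalized transition strengths for the Fg = 3 -> Fe = 3 line (Section 3.2), at eps = 0. +R = [F(3, 16), F(5, 16), F(3, 8), F(3, 8), F(5, 16), F(3, 16)] + +plain = [lam2(r) for r in R] +assert plain[2] == plain[3] == F(-15, 128)   # degenerate pair: Eq. (18) fails here +assert len(set(plain)) < len(plain) + +# Shift R_i -> R_i + i*eps: every exponent becomes unique for eps > 0; +# eps is taken to zero only at the final stage of the calculation. +eps = F(1, 10**6) +shifted = [lam2(r + (i + 1) * eps) for i, r in enumerate(R)] +assert len(set(shifted)) == len(shifted) +``` + +With exact rational arithmetic, the removable nature of the shift is manifest: setting $\epsilon = 0$ recovers the degenerate values $-39s/512$, $-55s/512$ and $-15s/128$ listed in Section 3.2.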
+ +The populations ($h_i$) of the sublevels in the other ground state, which is not excited by the laser light, can be easily obtained analytically by integrating the populations spontaneously transferred from the excited state (Equation (11)), and the result is given by + +$$h_i = p_0 + \sum_{l=-2}^{0} \sum_{k=1}^{i+1} T_l^{i+1} C_{i+1,2k} \frac{e^{\lambda_{2k}\Gamma t} - 1}{\lambda_{2k}}. \quad (28)$$ +---PAGE_BREAK--- + +### 3. Calculated Results + +Based on the method developed in Section 2, here we present the calculated populations for two transition schemes: (i) $F_g = 4 \rightarrow F_e = 5$ and (ii) $F_g = 3 \rightarrow F_e = 3$ for the D2 line of Cs. The energy level diagram for the Cs-D2 line is shown in Figure 2a, and the energy level diagrams for these two transitions are shown in Figure 2b,c. Owing to the large hyperfine splitting in the excited states, it is justifiable to neglect the off-resonant transitions; i.e., the $F_g = 4 \rightarrow F_e = 4$ and $F_g = 4 \rightarrow F_e = 3$ transitions can be neglected when the laser light is tuned to the $F_g = 4 \rightarrow F_e = 5$ transition line. Although it is in principle possible to include the off-resonant transitions in the analytical calculation of the populations [13], the resulting complicated analytical solutions may not be practically useful. + +Figure 2. (a) Energy level diagram of the Cs-D2 line. (b) Energy level diagrams for the $F_g = 4 \rightarrow F_e = 5$ cycling transition line and (c) for the $F_g = 3 \rightarrow F_e = 3$ transition line illuminated by $\sigma^+$ polarized laser light. + +#### 3.1. Results for the $F_g = 4 \rightarrow F_e = 5$ Transition + +The $F_g = 4 \rightarrow F_e = 5$ transition shown in Figure 2b is cycling, and is used in many experiments, such as laser cooling and trapping [22]. Because the atoms are illuminated by $\sigma^+$ polarized laser light, the sublevels with $m_e = -5$ and $-4$ are not optically excited. 
The normalized transition strengths, for the transitions presented in Figure 2b, are given by + +$$ (R_1^1, R_2^2, R_3^3, R_4^4, R_5^5, R_6^6, R_7^7, R_8^8, R_9^9) \\ = (\frac{1}{45}, \frac{1}{15}, \frac{2}{15}, \frac{2}{9}, \frac{1}{3}, \frac{7}{15}, \frac{28}{45}, \frac{4}{5}, 1). $$ + +For the $i=1$ transition, we obtain $\lambda_1 \approx -1$ and $\lambda_2 \approx -\frac{22}{2025}s$, and + +$$ C_{1,1} = -\frac{s}{1440}, \quad C_{1,2} = \frac{s}{1440}. $$ + +Thus, using Equation (23), we obtain + +$$ f_1 = \frac{1}{16} e^{-\frac{22s\Gamma t}{2025}}. $$ +---PAGE_BREAK--- + +For the $i = 2$ transition, $\lambda_4$ is approximately given by $-\frac{7}{225}s$, and the coefficients are given by + +$$C_{2,1} = \frac{s}{240}, \quad C_{2,2} = \frac{s}{2460},$$ + +$$C_{2,3} = -\frac{s}{160}, \quad C_{2,4} = \frac{11}{6560}s.$$ + +Therefore, we have + +$$f_2 = \frac{1}{82}e^{-22s\Gamma t/2025} + \frac{33}{656}e^{-7s\Gamma t/225}.$$ + +The remaining $\lambda_{2k}$ ($k=3, \dots, 9$) values are given by + +$$ +\begin{aligned} +& (\lambda_6, \lambda_8, \lambda_{10}, \lambda_{12}, \lambda_{14}, \lambda_{16}, \lambda_{18}) \\ +& = \left( -\frac{13}{225}s, -\frac{7}{81}s, -\frac{s}{9}, -\frac{28}{225}s, -\frac{238}{2025}s, -\frac{2}{25}s, 0 \right), +\end{aligned} +$$ + +and the remaining populations are explicitly given by + +$$f_3 = \frac{413}{31160} e^{-22\tau/2025} + \frac{77}{2624} e^{-7\tau/225} + \frac{121}{6080} e^{-13\tau/225},$$ + +$$f_4 = \frac{2317}{264860} e^{-22\tau/2025} + \frac{693}{20992} e^{-7\tau/225} + \frac{1089}{44080} e^{-13\tau/225} - \frac{1001}{252416} e^{-7\tau/81},$$ + +$$f_5 = \frac{25577}{3072376} e^{-22\tau/2025} + \frac{4235}{125952} e^{-7\tau/225} + \frac{5203}{141056} e^{-13\tau/225} - \frac{5005}{504832} e^{-7\tau/81} - \frac{143}{22272} e^{-\tau/9},$$ + +$$f_6 = \frac{148693}{17666162} e^{-22\tau/2025} + \frac{1925}{47232} e^{-7\tau/225} + \frac{2057}{35264} e^{-13\tau/225} - \frac{1625}{63104} e^{-7\tau/81} - \frac{715}{16704} e^{-\tau/9} + \frac{13}{552} e^{-28\tau/225},$$ + +$$f_7 = \frac{921751}{8926068} e^{-22\tau/2025} + \frac{2519}{41984} e^{-7\tau/225} + \frac{891}{7424} e^{-13\tau/225} - \frac{49075}{504832} e^{-7\tau/81} - \frac{5555}{7424} e^{-\tau/9} - \frac{273}{736} e^{-28\tau/225} + \frac{209}{192} e^{-238\tau/2025},$$ + +$$f_8 = \frac{39041249}{2119939440} e^{-22\tau/2025} + \frac{1561}{10496} e^{-7\tau/225} + \frac{225071}{352640} e^{-13\tau/225} + \frac{219275}{126208} e^{-7\tau/81} + \frac{9955}{3712} e^{-\tau/9} + \frac{3367}{3680} e^{-28\tau/225} - \frac{77}{24} e^{-238\tau/2025} - \frac{459}{160} e^{-2\tau/25},$$ + +$$f_9 = \frac{9}{16} - \frac{1205666281}{8479757760} e^{-22\tau/2025} - \frac{74771}{188928} e^{-7\tau/225} - \frac{316701}{352640} e^{-13\tau/225} - \frac{404009}{252416} e^{-7\tau/81} - \frac{62953}{33408} e^{-\tau/9} - \frac{3133}{5520} e^{-28\tau/225} + \frac{407}{192} e^{-238\tau/2025} + \frac{459}{160} e^{-2\tau/25},$$ + +where we use a simplified notation: $\tau \equiv s\Gamma t$. Since the $F_g = 4 \rightarrow F_e = 5$ transition is cycling, the populations in the magnetic sublevels in the $F_g = 3$ ground state remain at their equilibrium value, 1/16. It should also be noted that the sum of the ground state populations is conserved, i.e., + +$$\sum_{i=1}^{9} f_i = \frac{9}{16}.$$ +---PAGE_BREAK--- + +From Equation (26), the populations of the sublevels in the excited state can be expressed in terms of the populations in the ground state as follows: + +$$g_i = \frac{R_i^i s}{2} \left( f_i - \frac{1}{16} e^{-\Gamma t} \right).$$ + +The constant terms in $f_9$ and $g_9$ can be accurately calculated using Equation (10). In the steady-state regime, all the populations except $f_9$ and $g_9$ vanish, and these satisfy the following equations: + +$$\frac{\Gamma}{2}s[f_9(\infty) - g_9(\infty)] - \Gamma g_9(\infty) = 0, \quad f_9(\infty) + g_9(\infty) = \frac{9}{16},$$ + +with $R_9^9 = 1$. 
Then, we have + +$$f_9(\infty) = \frac{9(2+s)}{32(1+s)}, \quad g_9(\infty) = \frac{9s}{32(1+s)},$$ + +which can be used in a more accurate calculation of the SAS spectrum. + +#### 3.2. Results for the $F_g = 3 \rightarrow F_e = 3$ Transition + +Now we present the calculated results of the populations for the $F_g = 3 \rightarrow F_e = 3$ transition of the D2 line of Cs. The energy level diagram for the transition is shown in Figure 2c. The sublevel of the excited state with $m_e = -3$ is not optically excited, and thus the sublevel of the upper ground state with $m_g = -4$ is not filled by spontaneous emission. We also obtain the solutions for the populations in the other ground state ($F_g = 4$). To avoid duplicated transition strengths in this transition, we introduce $\epsilon$, so that the transition strengths are given explicitly by + +$$\begin{aligned} & (R_1^1, R_2^2, R_3^3, R_4^4, R_5^5, R_6^6) \\ &= \left( \frac{3}{16} + \epsilon, \frac{5}{16} + 2\epsilon, \frac{3}{8} + 3\epsilon, \frac{3}{8} + 4\epsilon, \frac{5}{16} + 5\epsilon, \frac{3}{16} + 6\epsilon \right). \end{aligned}$$ + +We take $\epsilon \to 0$ at the final stage of the calculation. The $\lambda_{2k}$ ($k = 1, \dots, 6$) values at $\epsilon \to 0$ are given by + +$$\begin{aligned} & (\lambda_2, \lambda_4, \lambda_6, \lambda_8, \lambda_{10}, \lambda_{12}) \\ &= \left( -\frac{39}{512}s, -\frac{55}{512}s, -\frac{15}{128}s, -\frac{15}{128}s, -\frac{55}{512}s, -\frac{39}{512}s \right). \end{aligned}$$ + +We first find the various $C_{i,k}$ values using the recursion relations in Equations (19)–(22). 
For the $i=1$ transition, we obtain + +$$C_{1,1} = -\frac{3}{512}s, \quad C_{1,2} = \frac{3}{512}s;$$ + +thus, using Equation (23), we obtain + +$$f_1 = \frac{1}{16}e^{-39s\Gamma t/512}.$$ + +Using a similar method, we can obtain $f_2$ and $f_3$ as follows: + +$$f_2 = \frac{3}{64}e^{-39\tau/512} + \frac{1}{64}e^{-55\tau/512},$$ + +$$f_3 = \frac{25}{448}e^{-39\tau/512} + \frac{1}{64}e^{-55\tau/512} - \frac{1}{112}e^{-15\tau/128},$$ +---PAGE_BREAK--- + +where the simplified notation, $\tau \equiv s\Gamma t$, is used. In the calculation of $f_4$, because $\lambda_6$ and $\lambda_8$ are equal, $f_4$ might be expected to contain a term $\sim \tau e^{-15\tau/128}$. However, because the transition between $g_3$ and $f_4$ is prohibited, the particular solution for $f_4$ does not contain the term $\sim \tau e^{-15\tau/128}$. In contrast, $f_5$, $f_6$, and $f_7$ contain terms proportional to $\tau$. The results for $f_4$, $f_5$, and $f_6$ are explicitly given by + +$$f_4 = \frac{15}{224}e^{-39\tau/512} + \frac{3}{32}e^{-55\tau/512} - \frac{11}{112}e^{-15\tau/128},$$ + +$$f_5 = \frac{135}{896}e^{-39\tau/512} + \left(-\frac{173}{640} + \frac{9\tau}{4096}\right)e^{-55\tau/512} + \frac{51}{280}e^{-15\tau/128},$$ + +$$f_6 = \left( \frac{269}{12544} + \frac{1125\tau}{114688} \right) e^{-39\tau/512} \\ + \left( \frac{19}{256} - \frac{45\tau}{16384} \right) e^{-55\tau/512} - \frac{13}{392}e^{-15\tau/128}.$$ + +Since the sublevel corresponding to $f_7$ is not excited by the laser light, using Equation (27) yields + +$$f_7 = \frac{68971}{327184} - \left( \frac{343323}{2119936} + \frac{10125\tau}{1490944} \right) e^{-39\tau/512} \\ + \left( \frac{1371}{30976} + \frac{135\tau}{180224} \right) e^{-55\tau/512} - \frac{3}{98}e^{-15\tau/128}.$$ + +The populations of the sublevels in the excited state, using Equation (26), can be expressed as follows: + +$$g_i = \frac{R_i^i s}{2} \left( f_i - \frac{1}{16} e^{-\Gamma t} \right).$$ + +The populations of the sublevels in the ground state $F_g = 4$ can be obtained using Equation (28), 
and are presented in the Appendix. + +### 4. Conclusions + +We have presented a general method of solving the homogeneous or inhomogeneous second-order DEs corresponding to the optical pumping phenomenon with $\sigma^+$ polarized laser light. Unlike the harmonic oscillator in mechanics or electrical circuits, this system exhibits only over-damped behavior. Although the method of solving inhomogeneous DEs with constant coefficients is straightforward in principle, obtaining accurate analytical solutions for the equations related to optically pumped atoms, in particular those with complicated atomic structures, such as Cs, is cumbersome. Our method of solving the DEs provides an easy way to obtain analytical solutions in the weak intensity limit. This method is general and applicable to most atoms. As stated in Section 1, the obtained analytical form of the populations can be used in the calculation of spectroscopic lineshapes, such as those in saturated absorption spectroscopy (SAS) [16,17] and polarization spectroscopy (PS) [18]. Calculations of SAS and PS for Cs atoms are in progress. + +**Acknowledgments:** This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2014R1A2A2A01006654). + +**Conflicts of Interest:** The authors declare no conflict of interest. 
+ +**Appendix** + +When the laser frequency is tuned to the $F_g = 3 \rightarrow F_e = 3$ transition (Figure 2c), the populations of the sublevels in the ground state $F_g = 4$ are given by +---PAGE_BREAK--- + +$$h_1 = \frac{23}{312} - \frac{7}{624}e^{-39\tau/512},$$ + +$$h_2 = \frac{93}{1144} - \frac{41}{2496}e^{-39\tau/512} - \frac{5}{2112}e^{-55\tau/512},$$ + +$$h_3 = \frac{895}{10296} - \frac{1109}{52416}e^{-39\tau/512} - \frac{3}{704}e^{-55\tau/512} + \frac{1}{1008}e^{-15\tau/128},$$ + +$$h_4 = \frac{235}{2574} - \frac{685}{26208}e^{-39\tau/512} - \frac{19}{1760}e^{-55\tau/512} + \frac{41}{5040}e^{-15\tau/128},$$ + +$$h_5 = \frac{10727}{113256} - \frac{3475}{104832}e^{-39\tau/512} \\ +- \left( \frac{2641}{232320} + \frac{3\tau}{45056} \right) e^{-55\tau/512} + \frac{31}{2520}e^{-15\tau/128},$$ + +$$h_6 = \frac{143477}{1472328} - \left( \frac{843497}{19079424} + \frac{125\tau}{1490944} \right) e^{-39\tau/512} \\ ++ \left( \frac{401}{30976} - \frac{45\tau}{180224} \right) e^{-55\tau/512} - \frac{13}{3528}e^{-15\tau/128},$$ + +$$h_7 = \frac{293731}{2944656} - \left( \frac{147347}{2725632} + \frac{125\tau}{212992} \right) e^{-39\tau/512} \\ ++ \left( \frac{7889}{154880} - \frac{63\tau}{180224} \right) e^{-55\tau/512} - \frac{43}{1260}e^{-15\tau/128},$$ + +$$h_8 = \frac{299023}{2944656} - \left( \frac{24497}{681408} + \frac{125\tau}{53248} \right) e^{-39\tau/512} \\ ++ \left( -\frac{959}{116160} + \frac{21\tau}{45056} \right) e^{-55\tau/512} + \frac{13}{2520}e^{-15\tau/128}.$$ + +Finally, we note that the sum of the populations is conserved, i.e., + +$$\frac{1}{16} + \sum_{i=1}^{7} f_i + \sum_{i=1}^{8} h_i = 1,$$ + +where $1/16$ is the population at the sublevel $m_g = -4$ in the upper ground state. + +## References + +1. Happer, W. Optical pumping. *Rev. Mod. Phys.* **1972**, *44*, 169–249. + +2. McClelland, J.J. Optical State Preparation of Atoms. 
In *Atomic, Molecular, and Optical Physics: Atoms and Molecules*; Dunning, F.B., Hulet, R.G., Eds.; Academic Press: San Diego, CA, USA, 1995; pp. 145–170. + +3. Smith, D.A.; Hughes, I.G. The role of hyperfine pumping in multilevel systems exhibiting saturated absorption. *Am. J. Phys.* **2004**, *72*, 631–637. + +4. Magnus, F.; Boatwright, A.L.; Flodin, A.; Shiell, R.C. Optical pumping and electromagnetically induced transparency in a lithium vapour. *J. Opt. B: Quantum Semiclass. Opt.* **2005**, *7*, 109–118. + +5. Han, H.S.; Jeong, J.E.; Cho, D. Line shape of a transition between two levels in a three-level Λ configuration. *Phys. Rev. A* **2011**, *84*, doi:10.1103/PhysRevA.84.032502. + +6. Sydoryk, I.; Bezuglov, N.N.; Beterov, I.I.; Miculis, K.; Saks, E.; Janovs, A.; Spels, P.; Ekers, A. Broadening and intensity redistribution in the Na(3p) hyperfine excitation spectra due to optical pumping in the weak excitation limit. *Phys. Rev. A* **2008**, *77*, doi:10.1103/PhysRevA.77.042511. + +7. Porfido, N.; Bezuglov, N.N.; Bruvelis, M.; Shayeganrad, G.; Birindelli, S.; Tantussi, F.; Guerri, I.; Viteau, M.; Fioretti, A.; Ciampini, D.; et al. Nonlinear effects in optical pumping of a cold and slow atomic beam. *Phys. Rev. A* **2015**, *92*, doi:10.1103/PhysRevA.92.043408. + +8. McClelland, J.J.; Kelley, M.H. Detailed look at aspects of optical pumping in sodium. *Phys. Rev. A* **1985**, *31*, 3704–3710. +---PAGE_BREAK--- + +9. Farrell, P.M.; MacGillivary, W.R.; Standage, M.C. Quantum-electrodynamic calculation of hyperfine-state populations in atomic sodium. *Phys. Rev. A* **1988**, *37*, 4240–4251. + +10. Balykin, V.I. Cyclic interaction of Na atoms with circularly polarized laser radiation. *Opt. Commun.* **1980**, *33*, 31–36. + +11. Liu, S.; Zhang, Y.; Fan, D.; Wu, H.; Yuan, P. Selective optical pumping process in Doppler-broadened atoms. *Appl. Opt.* **2011**, *50*, 1620–1624. + +12. Moon, G.; Shin, S.R.; Noh, H.R. 
Analytic solutions for the populations of an optically-pumped multilevel atom. *J. Korean Phys. Soc.* **2008**, *53*, 552–557. + +13. Moon, G.; Heo, M.S.; Shin, S.R.; Noh, H.R.; Jhe, W. Calculation of analytic populations for a multilevel atom at low laser intensity. *Phys. Rev. A* **2008**, *78*, doi:10.1103/PhysRevA.78.015404. + +14. Won, J.Y.; Jeong, T.; Noh, H.R. Analytical solutions of the time-evolution of the populations for $D_1$ transition line of the optically-pumped alkali-metal atoms with $I = 3/2$. *Optik* **2013**, *124*, 451–455. + +15. Noh, H.R. Analytical Study of Optical Pumping for the $D_1$ Line of $^{85}$Rb Atoms. *J. Korean Phys. Soc.* **2014**, *64*, 1630–1635. + +16. Moon, G.; Noh, H.R. Analytic solutions for the saturated absorption spectra. *J. Opt. Soc. Am. B* **2008**, *25*, 701–711. + +17. Moon, G.; Noh, H.R. Analytic Solutions for the Saturated Absorption Spectrum of the $^{85}$Rb Atom with a Linearly Polarized Pump Beam. *J. Korean Phys. Soc.* **2009**, *54*, 13–22. + +18. Do, H.D.; Heo, M.S.; Moon, G.; Noh, H.R.; Jhe, W. Analytic calculation of the lineshapes in polarization spectroscopy of rubidium. *Opt. Commun.* **2008**, *281*, 4042–4047. + +19. Thornton, S.T.; Marion, J.B. *Classical Dynamics of Particles and Systems*, 5th ed.; Brooks/Cole: New York, NY, USA, 2004. + +20. Cohen-Tannoudji, C.; Dupont-Roc, J.; Grynberg, G. *Atom-Photon Interactions: Basic Processes and Applications*; Wiley: New York, NY, USA, 1992. + +21. Meystre, P.; Sargent, M., III. *Elements of Quantum Optics*; Springer: New York, NY, USA, 2007. + +22. Metcalf, H.J.; van der Straten, P. *Laser Cooling and Trapping*; Springer: New York, NY, USA, 1999. + +© 2016 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). 
+---PAGE_BREAK--- + +Article + +Local Dynamics in an Infinite Harmonic Chain + +M. Howard Lee + +Department of Physics and Astronomy, University of Georgia, Athens, GA 30602, USA; mhlee@uga.edu + +Academic Editor: Young Suh Kim + +Received: 26 February 2016; Accepted: 6 April 2016; Published: 15 April 2016 + +**Abstract:** By the method of recurrence relations, the time evolution in a local variable in a harmonic chain is obtained. In particular, the autocorrelation function is obtained analytically. Using this result, a number of important dynamical quantities are obtained, including the memory function of the generalized Langevin equation. Also studied are the ergodicity and chaos in a local dynamical variable. + +**Keywords:** recurrence relations; harmonic chain; local dynamics; ergodicity; chaos + +# 1. Introduction + +A harmonic chain has been a useful model for a variety of dynamical phenomena, such as the lattice vibrations in solids, Brownian motion and diffusion. It has also been a useful model for testing theoretical concepts, such as the thermodynamic limit, irreversibility and ergodicity. One can study these properties in a harmonic chain. In this work, we shall touch on most of these issues analytically. + +The dynamics in a chain of nearest-neighbor (nn) coupled monatomic oscillators (defined in Section 3) has been studied in the past almost exclusively by means of normal modes [1]. If there are *N* oscillators in a chain, the single-particle or individual coordinates of the oscillators $q_i$, $i = 1, 2, .., N$, are replaced by the total or collective coordinates $Q_j$, $j = 1, 2, .., N$. In the space of the collective coordinates, the “collective” oscillators are no longer coupled. As a result, their motions are simply periodic. Each collective oscillator would have a unique frequency associated with it (if degeneracy due to symmetry could be ignored). 
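 + +As a minimal numerical illustration of this collective picture (a sketch assumed here, not part of the paper, using NumPy), one can diagonalize the coupling matrix of a periodic chain; in units $m = \kappa = 1$, the normal-mode frequencies are $\omega_j = 2|\sin(\pi j/N)|$, with only the uniform-translation mode at zero frequency: + +```python +import numpy as np + +# Coupling matrix of (1/2) * sum_i (q_i - q_{i+1})^2 with periodic boundary +# conditions: 2 on the diagonal, -1 for nearest neighbors (with wraparound). +N = 8 +K = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0) + +# Eigenfrequencies; clip tiny negative round-off of the zero mode before sqrt. +omega = np.sqrt(np.maximum(np.linalg.eigvalsh(K), 0.0)) + +# Each collective oscillator has its own frequency omega_j = 2|sin(pi j / N)|. +expected = np.sort(2 * np.abs(np.sin(np.pi * np.arange(N) / N))) +assert np.allclose(omega, expected) +``` + +In these collective coordinates the motion is a superposition of independent periodic oscillations, which is exactly the picture the traditional normal-mode approach exploits.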
+ +On the one hand, this collective picture is very helpful in understanding the dynamics of a harmonic chain by avoiding what might be a complicated picture due to a set of motions of coupled single particles. If only the collective behavior is required, this approach is certainly sufficient. + +On the other hand, if one wishes to know the dynamics of a single oscillator in a chain, the traditional approach becomes cumbersome. Why would one wish to know the dynamics of one oscillator in a chain? There may be a defect in a chain, for example. It may be a heavier or lighter mass than its neighbors'. Diffusivity is attributed to the motions of single oscillators. For these and other physical reasons that will become apparent, there is a need to study how a single oscillator embedded in a chain evolves in time. We shall term it local dynamics to be distinguished from total dynamics. + +In the 1980s, a new method of calculating the time evolution in a Hermitian system was developed, known as the method of recurrence relations [2]. It solves the Heisenberg equation of motion for a dynamical variable of physical interest, which may be the momentum of a single particle, the number or current density. Although it was intended to deal with dynamical variables of quantum origin, i.e., operators, it was found to be applicable to classical variables by replacing commutators with Poisson brackets. During the past three decades, this method has been widely applied to a variety of dynamical issues emanating from the electron gas, lattice spins, lattice vibrations and classical fluids. For reviews, see [3–7]. For a partial list of recent papers, see [8–21]. +---PAGE_BREAK--- + +Formally, this method shows what types of solutions are admissible [22]. It provides a deeper insight into the memory function and the Langevin equation. It has also provided a basis from which to develop the ergometric theory of the ergodic hypothesis. 
+ +In Section 2, we will briefly introduce the method of recurrence relations, mostly by assertion, referring the proofs to the original sources and review articles. In Section 3, the dynamics of a local variable (a single particle) in an infinite harmonic chain will be solved by the method of recurrence relations. Some useful physical applications will follow to complete this work. + +## 2. Method of Recurrence Relations + +Let $A$ be a dynamical variable, e.g., a spin operator, and $H$ an $N$-body Hamiltonian. The number of particles $N$ is not restricted initially. The Hamiltonian $H$ must, however, be Hermitian, which means that there is to be no dissipation in the dynamics of $A$. The time evolution of $A$ is given by the Heisenberg equation of motion: + +$$ \dot{A}(t) = i[H, A(t)] \qquad (1) $$ + +with $\hbar = 1$ and $[H, A] = HA - AH$. If $A$ is a classical variable, the rhs of Equation (1) is to be replaced by the Poisson brackets. + +A formal solution for Equation (1) may be viewed in geometrical terms. Let $A(t)$ be a vector in an inner product space $S$ of $d$ dimensions. This space is spanned by $d$ basis vectors $f_k$, $k = 0, 1, .., d-1$, $d \ge 2$. These basis vectors are mutually orthogonal: + +$$ (f_k, f_{k'}) = 0 \text{ if } k \neq k' \qquad (2) $$ + +where $(\cdot, \cdot)$ denotes an inner product, which defines the space $S$. Observe that they are time independent. In terms of these, $A(t)$ may be expressed as: + +$$ A(t) = \sum_k a_k(t) f_k \qquad (3) $$ + +where $a_k$, $k = 0, 1, .., d-1$, is a set of functions or basis functions conjugate to the basis vectors. They carry the time dependence. + +As $t$ evolves, this vector $A(t)$ evolves in this space $S$. Its motion in $S$ is governed by Equation (1), so that it is $H$ specific. Since $||A(t)|| = ||A||$, that is, $(A(t), A(t)) = (A, A)$, the "length" of $A(t)$ in $S$ is an invariant of time. As $t$ evolves, $A(t)$ may only rotate in $S$. 
This means that there is a Bessel equality, which limits what kind of rotation is allowed. + +Since both the basis vectors and functions are only formally stated, Equation (3) is not yet useful. One does not know $d$, the dimensionality of $S$. To make it useful, we need to realize $S$, an abstract space, by defining the inner product in a physically useful way. + +### 2.1. Kubo Scalar Product + +We shall realize $S$ by the Kubo scalar product (KSP) as follows: let $X$ and $Y$ be two vectors in $S$. The inner product of $X$ and $Y$ is defined as: + +$$ (X,Y) = \frac{1}{\beta} \int_0^\beta d\lambda \, \langle X(\lambda)Y^* \rangle - \langle X \rangle \langle Y^* \rangle \qquad (4) $$ + +where $\beta = 1/k_B T$, $T$ is the temperature, $\langle \cdots \rangle$ means an ensemble average, $*$ means Hermitian conjugation and: + +$$ X(\lambda) = e^{\lambda H} X e^{-\lambda H} \qquad (5) $$ + +Equation (4) is known as the KSP in many-body theory [23]. There is a deep physical reason for using the KSP to realize $S$ [24]. When realized by the KSP, the space shall be denoted $\tilde{S}$. +---PAGE_BREAK--- + +### 2.2. Basis Vectors + +We have proved that the basis vectors in $\tilde{S}$ satisfy the following recurrence relation, known as RR I: + +$$f_{k+1} = \dot{f}_k + \Delta_k f_{k-1}, \quad k = 0, 1, 2, \dots, d-1 \qquad (6)$$ + +where $\dot{f}_k = i[H, f_k]$, $\Delta_k = ||f_k||/||f_{k-1}||$, with $f_{-1} = 0$ and $\Delta_0 = 1$. + +If $k=0$ in Equation (6), $f_1 = \dot{f}_0$. With $f_0 = A$ (by choice), $f_1$ is obtained and, therewith, $\Delta_1$. + +Given $\Delta_1$, by setting $k=1$ in Equation (6), one can calculate $f_2$ and, therewith, $\Delta_2$. Proceeding in this manner, either $f_d = 0$ for some finite value of $d$, giving a finite-dimensional $\tilde{S}$, or $f_d \neq 0$ as $d \to \infty$, giving an infinite-dimensional $\tilde{S}$. By RR I, we can determine $d$ and, thus, generate all of the basis vectors needed to span $A(t)$ in $\tilde{S}$ for a particular $H$. 
In addition, we can construct the hypersurface $\sigma$: + +$$\sigma = (\Delta_1, \Delta_2, \dots, \Delta_{d-1}) \qquad (7)$$ + +As we shall see, the dynamics is governed by $\sigma$. The $\Delta$'s, known as the recurrants, are successive ratios of the norms of the $f_k$. They are static quantities, so that they are in principle calculable as a function of parameters, such as temperature, wave vectors, etc., for a given $H$. They collectively define the shape of $\tilde{S}$, constraining what kind of trajectory is possible for $A(t)$. + +### 2.3. Basis Functions + +If RR I is applied to Equation (1), it yields a recurrence relation for the basis functions: with $a_{-1} = 0$, + +$$\Delta_{k+1} a_{k+1} = -\dot{a}_k + a_{k-1}, \quad k = 0, 1, \dots, d-1 \qquad (8)$$ + +where $\dot{a}_k = \frac{d}{dt} a_k$. Equation (8) is known as RR II. It is actually composed of two recurrence relations, one for $k=0$ (because of $a_{-1}=0$) and another for the rest, $k=1, 2, \dots, d-1$. + +There is an important boundary condition on $a_k$. By Equation (3), $A(t=0) = A = f_0$. Thus, $a_0(t=0) = 1$ and $a_k(t=0) = 0$, $k \neq 0$. These basis functions are autocorrelation functions. For example, $a_0 = (A(t), A)/(A, A)$, $a_1 = (A(t), f_1)/(f_1, f_1) = (A(t), \dot{A})/(\dot{A}, \dot{A})$, etc. Hence, the static and dynamic information is to be contained in them. + +### 2.4. Continued Fractions + +If $a_0$ is known, the rest of the basis functions can be obtained one by one by RR II. To obtain it, let $L_z a_k(t) = \tilde{a}_k(z)$, $k = 0, 1, \dots, d-1$, where $L_z$ is the Laplace transform operator. RR II is transformed to: + +$$1 = z\tilde{a}_0 + \Delta_1 \tilde{a}_1 \qquad (9)$$ + +$$\tilde{a}_{k-1} = z\tilde{a}_k + \Delta_{k+1}\tilde{a}_{k+1}, \quad k = 1, 2, \dots, d-1 \qquad (10)$$ + +From Equation (9), $\tilde{a}_0$ is obtained in terms of $\tilde{b}_1 = \tilde{a}_1/\tilde{a}_0$. By setting $k=1$ in Equation (10), $\tilde{b}_1$ is obtained in terms of $\tilde{b}_2 = \tilde{a}_2/\tilde{a}_1$. 
Proceeding term by term, we obtain the continued fraction form for $\tilde{a}_0$: + +$$\tilde{a}_0(z) = 1/(z + \Delta_1/(z + \dots + \Delta_{d-1}/z)) \qquad (11)$$ + +If the hypersurface is determined, the continued fraction may be summable. By taking $L_z^{-1}$ on Equation (11), we can obtain $a_0(t)$: + +$$a_0(t) = \frac{1}{2\pi i} \int_C \tilde{a}_0(z) e^{zt} dz, \quad \text{Re } z > 0 \qquad (12)$$ + +where by $C$, we mean that the contour is to be on the right of all singularities contained in the rhs of Equation (11). If $a_0(t)$ is thus determined, the rest of the basis functions can be obtained one by one by +---PAGE_BREAK--- + +RR II. Hence, $A(t)$ (see Equation (3)) is completely solved, if only formally. This recurrence relation analysis can be implemented for a harmonic chain, as described in Section 3. + +## 3. Local Dynamics in a Harmonic Chain + +Consider a classical harmonic chain of *N* equal masses with periodic boundary conditions (*N* an even number, *m* the mass and $\kappa$ the coupling constant) defined by the Hamiltonian: + +$$H = \sum_{i=-N/2}^{N/2-1} \left[ \frac{p_i^2}{2m} + \frac{\kappa}{2} (q_i - q_{i+1})^2 \right] \quad (13)$$ + +where $p_i$ and $q_i$ are the momentum and the coordinate of mass *m* at site *i*, and sites $-N/2$ and $N/2 - 1$ are nns. Let $A = p_0$, the momentum of mass *m* at Site 0. The time evolution of $p_0$ follows from the method of recurrence relations: in units $m = \kappa = 1$, + +$$p_0(t) = a_0(t) p_0 + a_1(t) ((q_{-1} + q_1)/2 - q_0) + a_2(t) (p_{-1} + p_1) + \dots \quad (14)$$ + +Let HC denote a harmonic chain of *N* masses defined by Equation (13). It has been shown that for HC, $d = N + 1$ and that there are *N* recurrants in the hypersurface [25]. If the recurrants are expressed in our dimensionless units, the hypersurface has a symmetric structure in the form: $\sigma(N = 2) = (2, 2)$, $\sigma(N = 4) = (2, 1, 1, 2)$, $\sigma(N = 6) = (2, 1, 1, 1, 1, 2)$, etc. 
We can conclude that for *N* oscillators (*N* an even number), $\Delta_1 = \Delta_N = 2$ and $\Delta_k = 1$, $k = 2, 3, .., N - 1$, giving a general form: + +$$\sigma(N) = (2, 1, 1, \dots, 1, 1, 2) \quad (15)$$ + +Substituting these recurrants into Equation (11) realizes the continued fraction. If $N \to \infty$ ($d \to \infty$), + +$$\sigma = (2, 1, 1, \dots) \quad (16)$$ + +Taking this limit breaks the front-end symmetry. Equation (11) is then summable: + +$$\tilde{a}_0(z) = \frac{1}{\sqrt{4+z^2}} \quad (17)$$ + +By taking the inverse transform, see Equation (12), we obtain: + +$$a_0(t) = J_0(2t) \quad (18)$$ + +where $J$ denotes the Bessel function. This is a known result [26,27]. By RR II, we obtain: + +$$a_k(t) = J_k(2t), \quad k = 1, 2, \dots \quad (19)$$ + +Therewith, we have obtained the complete time evolution of $p_0$ in an infinite HC. + +Observe that $a_0(t \to \infty) = 0$. The vanishing of the autocorrelation function at $t = \infty$ is an indication of irreversibility. It is possible in a Hermitian system only by the thermodynamic limit being taken. This property is an important consideration for the ergodicity of the dynamical variable $A = p_0$, to be considered later [28]. + +*Langevin Dynamics* + +The equation of motion for $A$ may also be expressed by the generalized Langevin equation [29]: + +$$\frac{d}{dt} A(t) + \int_{0}^{t} M(t-t')A(t')dt' = F(t) \quad (20)$$ +---PAGE_BREAK--- + +where *M* and *F* are the memory function and the random force, respectively. They are important quantities in many dynamical issues, most often given phenomenologically or approximately [23]. For an infinite HC, we can provide exact expressions for them. + +In obtaining a continued fraction for $\tilde{a}_0(z)$, we introduced $\tilde{b}_k = \tilde{a}_k / \tilde{a}_{k-1}$, $k = 1, 2, .., d - 1$. By convolution, we can determine $b_k$. They are the basis functions for $\tilde{S}_1$, a subspace of $\tilde{S}$ spanned by $f_k$, $k = 1, 2, .., d - 1$. 
They satisfy RR II with the boundary condition that $b_1(t = 0) = 1$ and $b_k(t = 0) = 0$ if $k \neq 1$, with $b_0 = 0$. The hypersurface for this subspace is the same as Equation (7) with $\Delta_1$ removed. One can also express $\tilde{b}_1(z)$ in a continued fraction:

$$ \tilde{b}_1(z) = 1/(z + \Delta_2/(z + \Delta_3/(z + \dots + \Delta_{d-1}/z))) \quad (21) $$

The random force is a vector in $\tilde{\mathcal{S}}_1$; thus,

$$ F(t) = \sum b_k(t) f_k \quad (22) $$

and:

$$ M(t) = \Delta_1 b_1(t) \quad (23) $$

For the infinite HC, $\sigma_1 = (1, 1, 1, \dots)$, summable to:

$$ \tilde{b}_1(z) = \frac{1}{2} \left(\sqrt{z^2+4} - z\right) \quad (24) $$

By the inverse Laplace transform, we obtain:

$$ b_1(t) = J_1(2t)/t \quad (25) $$

and the rest by RR II. Therewith, we have obtained exact expressions for the two Langevin quantities.

## 4. Dispersion Relation for Harmonic Chain

Equation (11) for $\tilde{a}_0$ shows that if $d$, the dimensionality of $\tilde{\mathcal{S}}$, is finite, the continued fraction may be expressed as a ratio of two polynomials in $z$.
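Before turning to finite $N$, the two infinite-chain sums, Equations (17) and (24), can be spot-checked by truncating their continued fractions at a large depth. A minimal sketch (not part of the paper):

```python
import math

def cf_eval(z, deltas):
    """Evaluate 1/(z + d1/(z + d2/(... + d_last/z))) from the bottom up."""
    tail = z
    for d in reversed(deltas):
        tail = z + d / tail
    return 1.0 / tail

z = 1.0
# sigma = (2, 1, 1, ...): a0~(z) should approach 1/sqrt(4 + z^2), Equation (17)
a0 = cf_eval(z, [2.0] + [1.0] * 400)
# sigma_1 = (1, 1, 1, ...): b1~(z) should approach (sqrt(z^2 + 4) - z)/2, Equation (24)
b1 = cf_eval(z, [1.0] * 400)
print(abs(a0 - 1 / math.sqrt(5)), abs(b1 - (math.sqrt(5) - 1) / 2))  # both tiny
```

At $z = 1$ the all-ones fraction is the classic $1/(1 + 1/(1 + \dots))$, so $\tilde{b}_1(1)$ is the golden-ratio conjugate $(\sqrt{5}-1)/2$, and $\tilde{a}_0(1) = 1/\sqrt{5}$, in agreement with the closed forms.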
For HC, let us denote the rhs of Equation (11) by $\tilde{\Psi}_N(z)$ and express the continued fraction as a ratio of two polynomials:

$$ \tilde{\Psi}_N(z) = P_N(z)/Q_N(z) \quad (26) $$

Since every $Q_N$ is found to contain $z(z^2+4)$ as a common factor, we express it as:

$$ Q_N = z(z^2+4)q_N, \quad N = 2, 4, 6, \dots \quad (27) $$

Below, we list the $P$'s and $q$'s for several values of $N$, sufficient to draw a general conclusion therefrom:

(a) $N=2$; $\sigma = (2, 2)$
$P_2 = z^2 + 2$
$q_2 = 1$

(b) $N=4$; $\sigma = (2, 1, 1, 2)$
$P_4 = z^4 + 4z^2 + 2$
$q_4 = z^2 + 2$

(c) $N=6$; $\sigma = (2, 1, 1, 1, 1, 2)$
$P_6 = z^6 + 6z^4 + 9z^2 + 2$
$q_6 = z^4 + 4z^2 + 3$

(d) $N=8$; $\sigma = (2, 1, 1, 1, 1, 1, 1, 2)$
$P_8 = z^8 + 8z^6 + 20z^4 + 16z^2 + 2$
---PAGE_BREAK---

$$q_8 = z^6 + 6z^4 + 10z^2 + 4$$

(e) $N=10$; $\sigma = (2, 1, 1, 1, 1, 1, 1, 1, 1, 2)$
$P_{10} = z^{10} + 10z^8 + 35z^6 + 50z^4 + 25z^2 + 2$
$q_{10} = z^8 + 8z^6 + 21z^4 + 20z^2 + 5$

(f) $N = 12$; $\sigma = (2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2)$
$P_{12} = z^{12} + 12z^{10} + 54z^8 + 112z^6 + 105z^4 + 36z^2 + 2$
$q_{12} = z^{10} + 10z^8 + 36z^6 + 56z^4 + 35z^2 + 6$

If $z = 2i\sin\alpha$, $\alpha \neq 0$, the above polynomials have simple expressions for all orders of $N$:

$$P_N = 2\cos N\alpha \quad (28)$$

$$q_N = \sin N\alpha / \sin 2\alpha, \quad \sin 2\alpha \neq 0 \quad (29)$$

## 4.1. Zeros of $q_N$

The dispersion relation can be deduced from $z_k$, the zeros of $q_N$:

$$q_N(z) = \Pi(z - z_k) \quad (30)$$

From Equation (29),

$$\sin N\alpha_k = 0 \quad (31)$$

with $\sin 2\alpha_k \neq 0$ and $\alpha_k \neq 0$.
Hence,

$$\alpha_k = (\pi/N)k, \quad k = \pm 1, \pm 2, \dots, \pm(N/2-1) \quad (32)$$

Hence, with $k$ given above,

$$z_k = 2i\sin\alpha_k \quad (33)$$

One may also write:

$$\Pi(z - z_k)|_{z=2i\sin\alpha} = \sin N\alpha / \sin 2\alpha \quad (34)$$

Since $Q_N = z(z^2 + 4)q_N$ (see Equation (27)), the prefactor contributes to the zeros of $Q_N$. They may be included in Equation (32) if the range of $k$ is made to include zero and $N/2$.

## 4.2. $a_0(t)$ for Finite N

Given the zeros of $Q_N$, it is now straightforward to obtain $a_0(t)$ by Equation (12). For example, if $N=6$,

$$a_0(t) = 1/6[1 + 2\cos t + 2\cos\sqrt{3}t + \cos 2t] \quad (35)$$

A general expression would be:

$$a_0(t) = \frac{1}{N} \sum_k \cos \omega_k t \quad (36)$$

where:

$$\omega_k = 2|\sin(\pi v_k)|, \quad v_k = k/N \quad (37)$$

$k = -N/2, \dots, -1, 0, 1, \dots, N/2$. Since Equation (37) is a dispersion relation, the $v$'s will be termed "wave vectors".
---PAGE_BREAK---

## 4.3. $a_0(t)$ When $N \to \infty$

If $N \to \infty$, the sum in Equation (36) may be converted to an integral:

$$ \text{rhs of Equation (36)} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{2it\sin\theta} d\theta \quad (38) $$

The rhs of Equation (38) is an integral representation of $J_0(2t)$. Hence, $a_0(t) = J_0(2t)$, the same as Equation (18).

It is worth noting here that the zeros of $J_0(2t)$ can thus be obtained from Equation (36) by taking $N \to \infty$ with the condition:

$$ \omega_k t = (\pi/2)(2n+1), \quad n=0,1,2,\dots \qquad (39) $$

If we write $J_0(2t) = \Pi(2t - 2t_k)$, by Equation (37):

$$ 2t_k = \pi(2n+1)/|2\sin(\pi k/N)|, \quad k/N \in (-1/2, 1/2) \qquad (40) $$

Evidently, there are infinitely many zeros in $J_0$ [30]. This result will be significant in Section 6.

## 4.4. $\tilde{a}_0(z) = \tilde{\Psi}_N(z)$ When $N \to \infty$

By Equations (26)–(29),

$$ \tilde{\Psi}_N(z) = V \frac{\cos N\alpha}{\sin N\alpha} \qquad (41) $$

where $V = 2\sin2\alpha/(z(z^2+4)) = d\alpha/dz$ (by $z = 2i\sin\alpha$).
Furthermore:

$$ \begin{aligned} \frac{\cos N\alpha}{\sin N\alpha} &= \frac{1}{N} \frac{d}{d\alpha}(\log\sin N\alpha) \\ &= \frac{1}{N} \frac{d}{d\alpha}\left[\log(\sin N\alpha/\sin 2\alpha) + \log\sin 2\alpha\right] \end{aligned} \qquad (42) $$

The second term on the rhs of Equation (42) may be dropped if $N \to \infty$. For the first term, by Equations (29) and (30),

$$ \text{rhs of Equation (42)} = \frac{1}{N} \frac{dz}{d\alpha} \frac{d}{dz} \log\Pi(z-z_k) = \frac{1}{N} \frac{dz}{d\alpha} \sum \frac{1}{z-z_k} \qquad (43) $$

The prefactor $dz/d\alpha = 1/V$. Since $N \to \infty$, we can convert the above sum into an integral: writing $\tilde{\Psi} = \tilde{\Psi}_N$, $N \to \infty$,

$$ \tilde{\Psi}(z) = \frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \frac{d\theta}{z - 2i\sin\theta} = \frac{1}{\sqrt{4+z^2}} \qquad (44) $$

The above result is the same as Equation (17).

The asymptotic results, Equations (16) and (17), were obtained by taking the $N \to \infty$ limit first on the hypersurface. What is shown in Section 4 is that the same results are also obtained from the finite-$N$ solutions for $a_0(t)$.

## 5. Ergodicity of Dynamical Variable $A = p_0$

If A is a variable of a Hermitian system of N particles, $N \to \infty$, it is possible to determine whether it is ergodic. According to the ergometric theory of the ergodic hypothesis [31], A is ergodic if $W_A \neq 0$ or $\infty$, where:

$$ W_A = \int_0^\infty r_A(t) dt \qquad (45) $$
---PAGE_BREAK---

where $r_A(t) = (A(t), A)/(A, A) = a_0(t)$, the autocorrelation function of A.
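The limit result of Equation (44) above can be spot-checked by direct quadrature. A sketch (not part of the paper; midpoint rule, test point with $\text{Re } z > 0$ so the integrand has no pole on the path):

```python
import cmath
import math

def psi_integral(z, n=100000):
    """Midpoint-rule evaluation of (1/pi) * integral over (-pi/2, pi/2) of
    d(theta) / (z - 2i sin(theta)), cf. Equation (44)."""
    h = math.pi / n
    total = 0j
    for k in range(n):
        theta = -math.pi / 2 + (k + 0.5) * h
        total += h / (z - 2j * math.sin(theta))
    return total / math.pi

z = 1.0 + 0.5j
print(abs(psi_integral(z) - 1 / cmath.sqrt(4 + z * z)))  # close to zero
```

For real $z = 1$ the integral reduces to $(1/\pi)\int d\theta/(1 + 4\sin^2\theta) = 1/\sqrt{5}$, again matching $1/\sqrt{4+z^2}$.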
By Equation (12),

$$W_A = \tilde{r}_A(z=0) \quad (46)$$

If $d \to \infty$ as $N \to \infty$, which is the case for HC, $z \to 0$ in Equation (11) yields an infinite product of the following form:

$$W_A = \frac{\Delta_2 \times \Delta_4 \times \dots \times \Delta_{2n}}{\Delta_1 \times \Delta_3 \times \dots \times \Delta_{2n+1}}, \quad n \to \infty \quad (47)$$

Ordinarily, infinite products are difficult to evaluate, as they seem to require product rules that differ from those for finite products. However, they can be determined by Equation (45) or Equation (46), as illustrated below.

## 5.1. Infinite Harmonic Chain

If $A = p_0$ of HC, we can determine whether $A$ is ergodic by evaluating Equations (45)–(47). If $N \to \infty$, $\sigma = (2, 1, 1, \dots)$ (see Equation (16)), and $a_0(t) = J_0(2t)$ (see Equation (18)). Hence, by Equation (45), $W_A = 1/2$.

It was shown that $\tilde{a}_0(z) = 1/\sqrt{z^2+4}$; see Equation (17). Hence, by Equation (46), $W_A = 1/2$. Finally, by $\sigma$, we can write down the infinite product:

$$W_A = \frac{1 \times 1 \times 1 \times \dots}{2 \times 1 \times 1 \times \dots} = \frac{1}{2} \quad (48)$$

in agreement with the previous results. As noted above, computing infinite products is a delicate matter. The order of terms in an infinite product may not be altered, nor the terms themselves. In Equation (48), such a nicety did not enter, since all elements but one are one. Compare with another example in Section 5.2 below.

## 5.2. Infinite Harmonic Chain with One End Attached to a Wall

We shall now change the HC defined by Equation (13) slightly. Let the coupling between the oscillators at $q_{-2}$ and $q_{-1}$ be cut. Furthermore, let the mass of the oscillator at $q_{-1}$ be infinitely heavy, so that the oscillator at $q_0$ is attached as if to a wall. The rest of the chain is unchanged. The oscillators in this new configuration are labeled 0, 1, 2, ..., $N-1$, with one end attached to a wall and the other end free.
Finally, let $N \to \infty$.

If $A = p_0$, the recurrants are found to have the following form [27,32]:

$$\Delta_1 = 2/1, \ \Delta_3 = 3/2, \ \Delta_5 = 4/3, \dots; \quad \Delta_2 = 1/2, \ \Delta_4 = 2/3, \ \Delta_6 = 3/4, \dots$$

Evidently, they may be put in the form: $\Delta_{2n-1} = (n+1)/n$ and $\Delta_{2n} = n/(n+1)$, $n = 1, 2, 3, \dots$ These recurrants imply that for $A = p_0$ [27,32],

$$a_0(t) = J_0(2t) - J_4(2t) \quad (49)$$

$$\tilde{a}_0(z) = \frac{1}{\sqrt{z^2 + 4}} \left[1 - \frac{1}{16} \left(\sqrt{z^2 + 4} - z\right)^4\right] \quad (50)$$

By Equation (47),

$$W_A = \frac{1/2 \times 2/3 \times 3/4 \times \dots \times n/(n+1)}{2/1 \times 3/2 \times 4/3 \times \dots \times (n+1)/n}, \quad n \to \infty \quad (51)$$

Each term in the numerator is less than one, while each term in the denominator is greater than one. If the terms and the order are preserved, $W_A \to 0$. By Equations (45) and (46), it may be tested using Equations (49) and (50). In both cases, we obtain $W_A = 0$, verifying the infinite product.
---PAGE_BREAK---

Since $W_A = 0$, $A = p_0$ is not ergodic in this chain. For this variable, the phase space is not transitive. If the mass at Site 0 is slightly perturbed, the perturbed energy is not delocalized everywhere [33].

## 6. Harmonic Chain and Logistic Map

The logistic map (LM) is sometimes called the Ising model of chaos, for being possibly the simplest model exhibiting chaos [34]. If $x$ is a real number in the interval $(0,1)$, the map is defined by:

$$f(x) = ax(1-x), \quad x \in (0,1) \tag{52}$$

where $a$ is a control parameter, a real number limited to $1 < a \le 4$. Thus, the map is real and bounded, as is $x$. If there exists $x = x^*$ such that $f(x^*) = x^*$, it is termed a fixed point of $f(x)$. If $f^n$ is an $n$-fold nested function of $f$, i.e., $f^n(x) = f(f^{n-1}(x)) = f(\dots f(x) \dots)$, with $f^1 \equiv f$, there may be fixed points of $f^n$: $f^n(x^*) = x^*$.
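The growth of the fixed points of $f^n$ can be illustrated numerically. A sketch using numpy (not from the paper): at $a = 4$, $f^n$ is a polynomial of degree $2^n$, and all $2^n$ fixed points of $f^n$ are real and lie in $[0, 1)$; counting the real roots of $f^n(x) - x$ confirms the count doubles with each iteration:

```python
import numpy as np

f = np.poly1d([-4.0, 4.0, 0.0])       # f(x) = 4x(1 - x), i.e., a = 4
comp = f                              # holds the iterate f^n
counts = []
for n in range(1, 4):
    g = comp - np.poly1d([1.0, 0.0])  # the polynomial f^n(x) - x
    counts.append(sum(1 for r in g.r
                      if abs(r.imag) < 1e-8 and -1e-8 <= r.real < 1.0 - 1e-8))
    comp = -4.0 * comp * comp + 4.0 * comp  # compose once more: f^(n+1) = f(f^n)
print(counts)   # [2, 4, 8], i.e., 2^n fixed points of f^n
```

For $n = 3$ the eight roots are the two fixed points of $f$ (0 and 3/4) plus two genuine 3-cycles, in line with the period-3 discussion below.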
The values of the fixed points and the number of the fixed points will depend on the size of the control parameter $a$.

If $a < 3$, there is only one fixed point for any $n$. There is a remarkable theorem due to Sharkovskii [35] on 1d continuous maps on the interval, such as LM. As applied to this map, the theorem says that if $a \ge 1 + \sqrt{8}$, there are infinitely many fixed points as $n \to \infty$. This implies that a trajectory starting from almost any point in $(0,1)$ is chaotic. At $a = 4$ (the largest possible value), the fixed points fill the interval $x = (0,1)$ densely with a unique distribution $\rho_x$, $\int \rho_x dx = 1$. This distribution is known as the invariant density of fixed points, first deduced by Ulam [36,37]:

$$\rho_x = \frac{1}{\pi \sqrt{x(1-x)}}, \quad 0 < x < 1 \tag{53}$$

The invariant density refers to the spectrum of fixed points in $(0,1)$. The square-root singularity in Equation (53), a branch cut from 0 to 1, indicates that the spectrum is dense. If $\mu$ is the Lebesgue measure with $d\mu(x) = \rho_x dx$, then $\mu = 1$.

We wish to see whether $\rho_x$, a distribution of fixed points, bears a relationship to $\rho_\omega$, the power spectrum of frequencies in HC. For this purpose, consider the following transformations of variables:

$$x = 1/2 + \omega/4 \tag{54}$$

and:

$$\rho_x dx = \rho_\omega d\omega \tag{55}$$

By substituting Equation (54) in (53), we obtain by Equation (55):

$$\begin{align}
\rho_{\omega} &= \frac{1}{\pi\sqrt{4-\omega^2}}, & -2 < \omega < 2 \tag{56} \\
&= 0 \text{ otherwise.}
\end{align}$$

For an infinite HC, $\tilde{a}_0(z = i\omega) = \pi\rho_\omega$. By Equation (17), or Equation (44), the rhs of Equation (56) is precisely the power spectrum for $A = p_0$. Equation (56) shows that the fixed points of LM at $a = 4$ ($LM_4$) correspond to the frequencies of HC.
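This correspondence can be checked numerically. A sketch (not from the paper): sampling $x = \sin^2(\pi y/2)$ on a uniform grid of $y$ reproduces the Ulam density, Equation (53); pushing each sample through the folded form of Equation (54), $\omega = 2|1 - 2x|$, then reproduces the arcsine spectral density, Equation (56), whose second moment matches the chain's second frequency moment $\Delta_1 = 2$:

```python
import math

n = 200000
ys = [(k + 0.5) / n for k in range(n)]
xs = [math.sin(math.pi * y / 2) ** 2 for y in ys]  # samples of the Ulam density
ws = [2.0 * abs(1.0 - 2.0 * x) for x in xs]        # folded Equation (54)

# under Equation (53): P(x < 1/4) = (2/pi) arcsin(1/2) = 1/3
frac_x = sum(1 for x in xs if x < 0.25) / n
# under Equation (56), folded to omega > 0: P(omega < 1) = 1/3
frac_w = sum(1 for w in ws if w < 1.0) / n
# second moment of the spectral density: <omega^2> = 2 = Delta_1
m2 = sum(w * w for w in ws) / n
print(frac_x, frac_w, m2)   # the fractions are close to 1/3; m2 is close to 2
```

The uniform grid in $y$ plays the role of the pre-fixed points discussed next, and the two fractions agree because the change of variables in Equations (54) and (55) is measure preserving.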
Since the frequencies in the power spectrum are positive quantities, let us express Equation (54) as:

$$\omega = 2|1 - 2x|, \quad 0 < x < 1 \tag{57}$$

For $LM_4$,

$$x = \sin^2 (\pi y / 2) \tag{58}$$

$$y/2 = l/(2N+1), \quad l=1,2,\dots,N \tag{59}$$
---PAGE_BREAK---

$y$ being the pre-fixed points of the fixed points $x$. If Equations (58) and (59) are substituted in Equation (57) and $y$ is replaced by $v + 1/2$:

$$ \omega = 2|\sin\pi v| \quad (60) $$

The above is identical to Equation (37), the dispersion relation for HC. In the limit $N \to \infty$, $v$ and $y$ densely fill intervals of the same unit length, $(-1/2, 1/2)$ and $(0, 1)$, respectively. This property shows that the pre-fixed points of $LM_4$ also correspond to the wave vectors of HC.

The correspondence between $x$ and $\omega$, and also between $y$ and $v$, indicates that the iteration dynamics of $LM_4$ and the time evolution in HC are isomorphic in their local variables. This implies that if a variable in HC is ergodic, a corresponding variable in $LM_4$ is also ergodic. If the trajectory of an initial value in $LM_4$ is chaotic, we must conclude that the trajectory of a local variable in HC is also chaotic.

Chaos in HC? Let us first examine chaos in $LM_4$. According to Sharkovskii, chaos is implied where there are infinitely many periods. By our work, they form an uncountable set of pre-fixed points of Lebesgue measure 1. This results in an aleph cycle, which can never return to the initial point [34]. In an infinite HC, there are also infinitely many periods; see Equation (40). Thus, the HC has the necessary and possibly sufficient property for chaos.

In an infinite HC attached to a wall (see Section 5.2), there is chaos also, as there are infinitely many periods. However, as was already shown, its variables are not ergodic. This indicates that ergodicity is a subtler property than chaos. In a continuous map, there may be chaos, but not ergodicity.

## 7. Concluding Remarks

In this work, we have dwelt on the dynamics of a monatomic chain to illustrate some of the finer points of the dynamics contained in it. This simplest of harmonic chains can be made richer in a variety of ways. One can make one oscillator have a different mass than its neighbors [25]; it would be a model for an impurity or a defect. One could make it a periodic diatomic chain [8] or even an aperiodic diatomic chain [8]. We provide a list of recent advances made by the method of recurrence relations on other models [38–44]. For related studies on HC by Fokker-Planck dynamics and non-exponential decay, see [7,45,46].

**Acknowledgments:** I thank Joao Florencio for having kindled my interest in the dynamics of harmonic chains through our collaboration in the 1980s. I thank the University of Georgia Franklin College for supporting my research through the regents professorship. This work is dedicated to the memory of Bambi Hu.

**Conflicts of Interest:** The author declares no conflict of interest.

## References

1. Mazur, P.; Montroll, E. Poincaré cycles, ergodicity, and irreversibility in assemblies of coupled harmonic oscillators. *J. Math. Phys.* **1960**, *1*, 70–84.

2. Lee, M.H. Solutions of the generalized Langevin equation by a method of recurrence relations. *Phys. Rev. B* **1982**, *26*, 2547–2551.

3. Pires, A.S.T. The memory function formalism in the study of the dynamics of a many body system. *Helv. Phys. Acta* **1988**, *61*, 988.

4. Viswanath, V.S.; Mueller, G. *Recursion Method*; Springer-Verlag: Berlin, Germany, 1994.

5. Balucani, U.; Lee, M.H.; Tognetti, V. Dynamical correlations. *Phys. Rep.* **2003**, *373*, 409–492.

6. Mokshin, A.V. Self-consistent approach to the description of relaxation processes in classical multiparticle systems. *Theor. Math. Phys.* **2015**, *183*, 449–477.

7. Sen, S.
Solving the Liouville equation for conservative systems: Continued fraction formalism and a simple application. *Phys. A* **2006**, *360*, 304–324.

8. Kim, J.; Sawada, I. Dynamics of a harmonic oscillator on the Bethe lattice. *Phys. Rev. E* **2000**, *61*, R2172–R2175.

9. Sawada, I. Dynamics of the S = 1/2 alternating chains at T = ∞. *Phys. Rev. Lett.* **1999**, *83*, 1668–1671.
---PAGE_BREAK---

10. Sen, S. Exact solution of the Heisenberg equation of motion for the surface spin in a semi-infinite S = 1/2 XY chain at infinite temperatures. *Phys. Rev. B* **1991**, *44*, 7444–7450.

11. Florencio, J.; Sá Barreto, F.C.S. Dynamics of the random one-dimensional transverse Ising model. *Phys. Rev. B* **1999**, *60*, 9555–9560.

12. Silva Nunez, M.E.; Florencio, J. Effects of disorder on the dynamics of the XY chain. *Phys. Rev. B* **2003**, *68*, 144061–114065.

13. Daligault, J.; Murillo, M.S. Continued fraction matrix representation of response functions in multicomponent systems. *Phys. Rev. E* **2003**, *68*, 154011–154014.

14. Mokshin, A.V.; Yulmatyev, R.M.; Hanggi, P. Simple measure of memory for dynamical processes described by a generalized Langevin equation. *Phys. Rev. Lett.* **2005**, *95*, 200601.

15. Hong, J.; Kee, H.Y. Analytic treatment of Mott-Hubbard transition in the half-filled Hubbard model and its thermodynamics. *Phys. Rev. B* **1995**, *52*, 2415–2421.

16. Liu, Z.-Q.; Kong, X.-M.; Chen, X.-S. Effects of Gaussian disorder on the dynamics of the random transverse Ising model. *Phys. Rev. B* **2006**, *73*, 224412.

17. Chen, X.-S.; Shen, Y.-Y.; Kong, X.-M. Crossover of the dynamical behavior in two-dimensional random transverse Ising model. *Phys. Rev. B* **2010**, *82*, 174404.

18. De Mello Silva, E. Time evolution in a two-dimensional ultrarelativistic-like electron gas by recurrence relations method. *Acta Phys. Pol. B* **2015**, *46*, 1135–1141.

19. De Mello Silva, E. Dynamical class of a two-dimensional plasmonic Dirac system.
*Phys. Rev. E* **2015**, *92*, 042146.

20. Guimaraes, P.R.C.; Plascak, J.A.; de Alcantara Bonfim, O.F.; Florencio, J. Dynamics of the transverse Ising model with next-nearest-neighbor interactions. *Phys. Rev. E* **2015**, *92*, 042115.

21. Sharma, N.L. Response and relaxation of a dense electron gas in D dimensions at long wavelengths. *Phys. Rev. B* **1992**, *45*, 3552–3556.

22. Lee, M.H. Can the velocity autocorrelation function decay exponentially? *Phys. Rev. Lett.* **1983**, *51*, 1227–1230.

23. Kubo, R. The fluctuation-dissipation theorem. *Rep. Prog. Phys.* **1966**, *29*, 255–284.

24. Lee, M.H. Orthogonalization process by recurrence relations. *Phys. Rev. Lett.* **1982**, *49*, 1072–1075.

25. Lee, M.H.; Florencio, J., Jr.; Hong, J. Dynamic equivalence of a two-dimensional quantum electron gas and a classical harmonic oscillator chain with an impurity mass. *J. Phys. A* **1989**, *22*, L331–L335.

26. Fox, R.F. Long-time tails and diffusion. *Phys. Rev. A* **1983**, *27*, 3216–3233.

27. Florencio, J., Jr.; Lee, M.H. Exact time evolution of a classical harmonic-oscillator chain. *Phys. Rev. A* **1985**, *31*, 3231–3236.

28. Lee, M.H. Why irreversibility is not a sufficient condition for ergodicity. *Phys. Rev. Lett.* **2007**, *98*, 190601.

29. Lee, M.H. Derivation of the generalized Langevin equation by a method of recurrence relations. *J. Math. Phys.* **1983**, *24*, 2512–2514.

30. Watson, G.N. *A Treatise on the Theory of Bessel Functions*; Cambridge U.P.: London, UK, 1980; Chapter 15.

31. Lee, M.H. Ergodic theory, infinite products, and long time behavior in Hermitian models. *Phys. Rev. Lett.* **2001**, *87*, 250601/1–250601/4.

32. Pestana Marino, E. Ph.D. Thesis, University of Georgia, Athens, GA, USA, 2011, unpublished.

33. Lee, M.H. Birkhoff's theorem, many-body response functions, and the ergodic condition. *Phys. Rev. Lett.* **2007**, *98*, 110403.

34. Lee, M.H.
Solving for the fixed points of 3-cycle in the logistic map and toward realizing chaos by the theorems of Sharkovskii and Li-Yorke. *Commun. Theor. Phys.* **2014**, *62*, 485–496.

35. Sharkovskii, A.N. Coexistence of cycles of a continuous transformation of a line into itself. *Ukrainian Math. J.* **1964**, *16*, 61–71 (in Russian); English transl.: *Int. J. Bifurc. Chaos* **1995**, *5*, 1263–1273.

36. Ulam, S.M. *A Collection of Mathematical Problems*; Interscience: New York, NY, USA, 1960; pp. 73–74.

37. Lee, M.H. Cyclic solutions in chaos and the Sharkovskii theorem. *Acta Phys. Pol. B* **2012**, *43*, 1053–1063.

38. Yu, M.B. Momentum autocorrelation function of Fibonacci chains with finite number oscillators. *Eur. Phys. J. B* **2012**, *85*, 379.

39. Yu, M.B. Momentum autocorrelation function of a classical oscillator chain with alternating masses. *Eur. Phys. J. B* **2013**, *86*, 57.

40. Yu, M.B. Momentum autocorrelation function of an impurity in a classical oscillator chain with alternating masses I. General theory. *Phys. A* **2014**, *398*, 252–263.
---PAGE_BREAK---

41. Yu, M.B. Momentum autocorrelation function of an impurity in a classical oscillator chain with alternating masses II. Illustrations. *Phys. A* **2015**, *438*, 469–486.

42. Yu, M.B. Momentum autocorrelation function of an impurity in a classical oscillator chain with alternating masses III. Some limiting cases. *Phys. A* **2016**, *447*, 411–421.

43. Wierling, A.; Sawada, I. Wave-number dependent current correlation for a harmonic oscillator. *Phys. Rev. E* **2010**, *82*, 051107.

44. Wierling, A. Dynamic structure factor of linear harmonic chain - A recurrence relation approach. *Eur. Phys. J. B* **2012**, *85*, 20.

45. Vitali, D.; Grigolini, P. Subdynamics, Fokker-Planck equation, and exponential decay of relaxation processes. *Phys. Rev. A* **1989**, *39*, 1486–1499.

46. Grigolini, P.
*Quantum Mechanical Irreversibility and Measurement*; World Scientific: Singapore, Singapore, 1993.

© 2016 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

# Old Game, New Rules: Rethinking the Form of Physics

**Christian Baumgarten**

5244 Birrhard, Switzerland; christian-baumgarten@gmx.net

Academic Editor: Young Suh Kim

Received: 26 February 2016; Accepted: 28 April 2016; Published: 6 May 2016

**Abstract:** We investigate the modeling capabilities of sets of coupled *classical harmonic oscillators* (CHO) in the form of a modeling game. The application of the simple but restrictive rules of the game leads to conditions for an isomorphism between Lie algebras and real Clifford algebras. We show that the correlations between two coupled classical oscillators find their natural description in the Dirac algebra and allow us to model aspects of special relativity, inertial motion, electromagnetism and quantum phenomena including spin in one go. The algebraic properties of Hamiltonian motion of low-dimensional systems can generally be related to certain types of interactions and hence to the dimensionality of emergent space-times. We describe the intrinsic connection between phase space volumes of a 2-dimensional oscillator and the Dirac algebra. In this version of a phase space interpretation of quantum mechanics, the (components of the) spinor wavefunction in momentum space are abstract canonical coordinates, and the integrals over the squared wave function represent second moments in phase space. The wave function in ordinary space-time can be obtained via Fourier transformation. Within this modeling game, 3+1-dimensional space-time is interpreted as a structural property of electromagnetic interaction.
A generalization selects a series of Clifford algebras of specific dimensions with similar properties, specifically also 10- and 26-dimensional real Clifford algebras.

**Keywords:** Hamiltonian mechanics; coupled oscillators; Lorentz transformation; Dirac equation

**PACS:** 45.20.Jj, 47.10.Df, 41.75, 41.85, 03.65.Pm, 05.45.Xt, 03.30.+p, 03.65.-w, 29.27.-a

## 1. Introduction

D. Hestenes had the joyful idea to describe physics as a modeling game [1]. We intend to play a modeling game with (ensembles of) classical harmonic oscillators (CHO). The CHO is certainly one of the most discussed and analyzed systems in physics and one of the few exactly solvable problems. One would not expect any substantially new discoveries related to this subject. Nevertheless, there are aspects that are less well-known than others. One of these aspects concerns the transformation group of the symplectic transformations of $n$ coupled oscillators, $Sp(2n)$. We invite the reader to join us in playing "a modeling game" and to discover some fascinating features related to possible reinterpretations of systems of two (or more) coupled oscillators. We will show that special relativity can be reinterpreted as a transformation theory of the second moments of the abstract canonical variables of coupled oscillator systems. (The connection of the Dirac matrices to the symplectic group has been mentioned by Dirac in Reference [2]. For the connection of oscillators and Lorentz transformations (LTs), see also the papers of Kim and Noz [3–5] and references therein. The use of CHOs to model quantum systems has been recently described, for instance, by Briggs and Eisfeld [6–8].) We extend the application beyond pure LTs and show that the Lorentz force can be reinterpreted by the second moments of two coupled oscillators in proper time. Lorentz transformations can be modeled as symplectic transformations [4]. We shall show how Maxwell's equations find their place within the game.
---PAGE_BREAK---

The motivation for this game is to show that many aspects of modern physics can be understood on the basis of the classical notions of harmonic oscillation if these notions are appropriately reinterpreted.

In Section 2 we introduce the rules of our game, and in Section 3 the algebraic notions of the Hamiltonian formalism. In Section 4 we describe how geometry emerges from coupled oscillator systems, and in Section 5 we describe the use of symplectic transformations and introduce the Pauli and Dirac algebras. In Section 6 we introduce a physical interpretation of oscillator moments, and in Section 7 we relate the phase space of coupled oscillators to the real Dirac algebra. Section 8 contains a short summary.

## 2. The Rules of the Game

The first rule of our game is the principle of reason (POR): *No distinction without reason*—we should not add or remove something *specific* (an asymmetry, a concept, a distinction) from our model without having a clear and explicit reason. If there is no reason for a specific asymmetry or choice, then all possibilities are considered equivalently.

The second rule is the principle of variation (POV): We postulate that change is immanent to all fundamental quantities in our game. From these two rules, we take that the mathematical object of our theory is a list (n-tuple) of quantities (variables) $\psi$, each of which varies at all times.

The third rule is the principle of *objectivity* (POO): Any law within this game refers to measurements, defined as the comparison of quantities (object properties) in relation to other object properties of the same type (i.e., unit). Measurements require reference standards (rulers). A measurement is objective if it is based on (properties of) the objects of the game. This apparent self-reference is unavoidable, as it models the *real* situation of physics as experimental science.
Since all fundamental objects (quantities) in our model *vary at all times*, the only option to construct a constant quantity that might serve as a ruler is given by *constants of motion* (COM). Hence the principle of objectivity requires that measurement standards are derived from constants of motion.

This third rule implies that the fundamental variables cannot be directly measured, but only functions of the fundamental variables with the same dimension (unit) as a COM. Thus the model has two levels: the level of the fundamental variable list $\psi$, which is experimentally not directly accessible, and a level of *observables*, which are (as we shall argue) even moments of the fundamental variables $\psi$.

### 2.1. Discussion of the Rules

E.T. Jaynes wrote that “Because of their empirical origins, QM and QED are not physical theories at all. In contrast, Newtonian celestial mechanics, Relativity, and Mendelian genetics are physical theories, because their mathematics was developed by reasoning out the consequences of clearly stated physical principles from which constraint the possibilities”. And he continues “To this day we have no constraining principle from which one can deduce the mathematics of QM and QED; [...] In other words, the mathematical system of the present quantum theory is [...] unconstrained by any physical principle” [9]. This remarkably harsh criticism of quantum mechanics raises the question of what we consider to be a physical principle. Are the rules of our game physical principles? We believe that they are not substantial physical principles but *formal* first principles; they are *preconditions* of a sensible theory. They contain no immediate physical content, but they define the *form* or the *idea* of physics.

It is to a large degree immanent to science and specifically to physics to presuppose the existence of *reason*: Apples do not fall down by chance—there is a reason for this tendency.
Usually this belief in reason implies the belief in causality, i.e., that we can also (at least in principle) explain why a specific apple falls at a specific time, but practically this latter belief can rarely be confirmed experimentally and therefore remains to some degree metaphysical. Thus, if, as scientists, we postulate that things have reason, then this is not a *physical* principle but a precondition, a first principle.

The second rule (POV) is specific to the form (or idea) of physics: it is the sense of physics to *recognize the pattern* of motion and to *predict the future*. Therefore the notion of time in the form of change is indeed immanent to the physical description of reality.
---PAGE_BREAK---

The principle of objectivity (POO) is immanent to the very idea of physics: a measurement is the comparison of properties of objects with compatible properties of reference objects, i.e., it requires “constant” rulers. Hence the rules of the game are to a large degree unavoidable: they follow from the very form of physics, and therefore certain laws of physics are not substantial results of a physical theory. For instance, a consistent “explanation” of the stability of matter is impossible, as we have already presumed it within the idea of measurement. More precisely: if this presumption does not follow within the framework of a physical theory, then the theory is flawed, since it cannot reproduce its own presumptions.

Einstein wrote with respect to relativity that “It is striking that the theory (except for the four-dimensional space) introduces two kinds of things, i.e., (1) measuring rods and clocks; (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent; strictly speaking, measuring rods and clocks should emerge as solutions of the basic equations [...], not, as it were, as theoretically self-sufficient entities”. [10].
It may be all the more surprising that the stability of matter cannot be obtained from classical physics, as remarked by Elliott H. Lieb: “A fundamental paradox of classical physics is why matter, which is held together by Coulomb forces, does not collapse” [11]. This single sentence seems to rule out the possibility of a fundamental classical theory and uncovers the uncomfortable situation of theoretical physics today: despite the overwhelming experimental and technological success, there is a deep-seated confusion concerning the theoretical foundations. Our game is therefore a meta-experiment. Its primary goal is not to find “new” laws of nature or new experimental predictions; rather, it is a conceptual “experiment” that aims to further develop our understanding of the consequences of principles: which ones are really required to derive central “results” of contemporary physics. In this short essay, final answers cannot be given, but maybe some new insights are possible.

## 2.2. What about Space-Time?

A theory has to make the choice between postulate and proof. If a 3+1-dimensional space-time is presumed, then it cannot be proven within the same theoretical framework. More precisely, the value of such a proof remains questionable. This is a sufficient reason to avoid postulates concerning the dimensionality of space-time. Another, even stronger, reason to avoid a direct postulate of space-time and its geometry has been given above: the fundamental variables that we postulated cannot be directly measured. This excludes space-time coordinates as primary variables (which can be directly measured), and with them almost all other a priori assumed concepts like velocity, acceleration, momentum, energy and so on. At some point these concepts certainly have to be introduced, but we suggest an approach to the formation of concepts that differs from the Newtonian axiomatic method.
The POR does not allow us to introduce, without reason, a distinction of the fundamental variables into coordinates and momenta. Therefore we are forced to use an interpretational method, which one might summarize as *function follows form*: We shall first derive equations and then interpret them according to formal criteria. This implies that we have to refer to already existing notions if we want to identify quantities according to their appearance within a certain formalism. The consequence for the game is that we have to show how *geometrical* notions arise: If we do not postulate space-time, then we have to suggest a method to construct it.

A consequence of our conception is that both objects and fields have to be identified with dynamical structures, as there is simply nothing else available. This fits the framework of structure-preserving (symplectic) dynamics that we shall derive from the described principles.

## 3. Theory of Small Oscillations

In this section we shall derive the theory of coupled oscillators from the rules of our game. According to the POO there exists a function (COM) $\mathcal{H}(\psi)$ such that (let us first, for simplicity, assume that $\frac{\partial \mathcal{H}}{\partial t} = 0$):

$$ \frac{d\mathcal{H}}{dt} = \sum_k \frac{\partial \mathcal{H}}{\partial \psi_k} \dot{\psi}_k = 0 \qquad (1) $$

or, in vector notation,

$$ \frac{d\mathcal{H}}{dt} = (\nabla_{\psi} \mathcal{H}) \cdot \dot{\psi} = 0 \qquad (2) $$

The simplest solution is given by an arbitrary skew-symmetric matrix $\mathcal{X}$:

$$ \dot{\psi} = \mathcal{X} \nabla_{\psi} \mathcal{H} \qquad (3) $$

Note that it is the skew-symmetry of $\mathcal{X}$ alone which ensures that Equation (3) always solves Equation (2), i.e., that $\mathcal{H}$ is constant.
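A quick numerical check illustrates this (a sketch only; the dimension, the random matrices and the quadratic form of $\mathcal{H}$ are arbitrary choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary skew-symmetric matrix X as in Equation (3); size 6 is arbitrary
M = rng.normal(size=(6, 6))
X = M - M.T

# An arbitrary quadratic "Hamiltonian" H(psi) = psi^T A psi / 2 with symmetric A
A = rng.normal(size=(6, 6))
A = (A + A.T) / 2

psi = rng.normal(size=6)
grad_H = A @ psi          # gradient of the quadratic form
psi_dot = X @ grad_H      # Equation (3)

# Equation (2): dH/dt = grad(H) . psi_dot vanishes by skew-symmetry alone,
# since g^T X g = 0 for every vector g when X^T = -X
dH_dt = grad_H @ psi_dot
print(abs(dH_dt) < 1e-9)  # True
```

The vanishing of $g^T \mathcal{X} g$ holds for every vector $g$, so the conservation of $\mathcal{H}$ uses no property of $\mathcal{H}$ beyond differentiability.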
If we now consider a state vector $\psi$ of dimension $k$, then there is a theorem in linear algebra which states that for any skew-symmetric matrix $\mathcal{X}$ there exists a non-singular matrix $\mathcal{Q}$ such that we can write [12]:

$$ \mathcal{Q}^T \mathcal{X} \mathcal{Q} = \operatorname{diag}(\eta_0, \eta_1, \eta_2, \ldots, 0, 0, 0) \qquad (4) $$

where $\eta_0$ is the matrix

$$ \eta_0 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \qquad (5) $$

If we restrict ourselves to orthogonal matrices $\mathcal{Q}$, then we may still write

$$ \mathcal{Q}^T \mathcal{X} \mathcal{Q} = \operatorname{diag}(\lambda_0 \eta_0, \lambda_1 \eta_1, \lambda_2 \eta_2, \ldots, 0, 0, 0) \qquad (6) $$

In both cases we may drop the zeros, since they correspond to non-varying variables, which would be in conflict with the second rule of our modeling game. Hence $k=2n$ must be even and the square matrix $\mathcal{X}$ has the dimension $2n \times 2n$. As we have no specific reason to assume asymmetries between the different degrees of freedom (DOF), we have to choose all $\lambda_k = 1$ in Equation (6), return to Equation (4) without zeros, and define the block-diagonal so-called *symplectic unit matrix* (SUM) $\gamma_0$:

$$ \mathcal{Q}^T \mathcal{X} \mathcal{Q} = \operatorname{diag}(\eta_0, \eta_1, \eta_2, \ldots, \eta_n) \equiv \gamma_0 \qquad (7) $$

These few basic rules thus lead us directly to Hamiltonian mechanics: Since the state vector has even dimension and due to the form of $\gamma_0$, we can interpret $\psi$ as an ensemble of $n$ classical DOF, each DOF represented by a canonical pair of coordinate and momentum: $\psi = (q_1, p_1, q_2, p_2, \ldots, q_n, p_n)^T$.
In this notation, and after application of the transformation $\mathcal{Q}$, Equation (3) can be written in the form of the Hamiltonian equations of motion (HEQOM):

$$ \begin{aligned} \dot{q}_i &= \frac{\partial \mathcal{H}}{\partial p_i} \\ \dot{p}_i &= -\frac{\partial \mathcal{H}}{\partial q_i} \end{aligned} \qquad (8) $$

The validity of the HEQOM is of fundamental importance as it allows for the use of the results of Hamiltonian mechanics, of statistical mechanics and thermodynamics, but without the intrinsic presupposition that the $q_i$ have to be understood as positions in real space and the $p_i$ as the corresponding canonical momenta. This is legitimate as the theory of canonical transformations is independent of any specific physical interpretation of what the coordinates and momenta represent. As no other interpretation is at hand, we say that these canonical pairs are coordinates $q_i, p_i$ in an abstract phase space, and they are canonical coordinates and momenta only due to the form of the HEQOM. The choice of the specific form of $\gamma_0$ is, for $n > 1$ DOF, not unique. It could for instance be written as

$$ \gamma_0 = \eta_0 \otimes \mathbf{1}_{n \times n} \qquad (9) $$

which corresponds to a state vector of the form

$$ \psi = (q_1, \dots, q_n, p_1, \dots, p_n)^T $$

or by

$$ \gamma_0 = \mathbf{1}_{n \times n} \otimes \eta_0 \qquad (10) $$

as in Equation (7). Therefore we are forced to make an arbitrary choice (but we should keep in mind that other “systems” with a different choice are possible. If we cannot exclude their existence, then they should exist as well. With respect to the form of the SUM, we suggest that different “particle” types, for instance different types of fermions, have a different SUM).
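The equivalence of the two orderings can be made concrete with Kronecker products (a sketch for $n=2$; the permutation shown is one obvious choice, not prescribed by the text):

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
I2 = np.eye(2)

# Equation (9): psi = (q1..qn, p1..pn)   vs.   Equation (10): psi = (q1,p1,...,qn,pn)
gamma_a = np.kron(eta0, I2)   # eta0 (x) 1
gamma_b = np.kron(I2, eta0)   # 1 (x) eta0

for g in (gamma_a, gamma_b):
    assert np.allclose(g.T, -g)               # skew-symmetric
    assert np.allclose(g @ g, -np.eye(4))     # squares to -1
    assert np.allclose(g.T @ g, np.eye(4))    # hence orthogonal

# the two choices differ only by relabeling: (q1,q2,p1,p2) -> (q1,p1,q2,p2)
P = np.eye(4)[[0, 2, 1, 3]]
assert np.allclose(P @ gamma_a @ P.T, gamma_b)
print("both SUM orderings are equivalent")
```

Both candidates therefore satisfy the same defining relations; only the bookkeeping of the canonical pairs inside $\psi$ differs.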
But in all cases the SUM $\gamma_0$ must be skew-symmetric and have the following properties:

$$ \begin{aligned} \gamma_0^T &= -\gamma_0 \\ \gamma_0^2 &= -\mathbf{1} \end{aligned} \qquad (11) $$

which also implies that $\gamma_0$ is orthogonal and has unit determinant. Note also that all eigenvalues of $\gamma_0$ are purely imaginary. However, once we have chosen a specific form of $\gamma_0$, we have specified a set of canonical pairs $(q_i, p_i)$ within the state vector. This choice fixes the set of possible canonical (structure preserving) transformations.

Now we write the Hamiltonian $\mathcal{H}(\psi)$ as a Taylor series, remove the rule-violating constant term and truncate the series after the second-order term. We do not claim that higher-order terms may not appear, but we delay the discussion of higher orders to a later stage. All this is well-known in the theory of small oscillations. There is only one difference to the conventional treatment: We have no direct macroscopic interpretation for $\psi$ and, following our first rule, we have to write the second-order Hamiltonian $\mathcal{H}(\psi)$ in the most general form:

$$ \mathcal{H}(\psi) = \frac{1}{2} \psi^T \mathcal{A} \psi \qquad (12) $$

where $\mathcal{A}$ is only restricted to be *symmetric*, as any skew-symmetric part *does not contribute* to $\mathcal{H}$. Since it is not unlikely to find more than a single constant of motion in systems with multiple DOF, we distinguish systems with a singular matrix $\mathcal{A}$ from those with a positive or negative definite matrix $\mathcal{A}$. Positive definite matrices are favoured in the sense that they allow us to identify $\mathcal{H}$ with the amount of a substance or an amount of energy (it is immanent to the concept of substance that it is understood as something positive semi-definite).

Before we try to interpret the elements of $\mathcal{A}$, we will explore some general algebraic properties of the Hamiltonian formalism.
If we plug Equation (12) into Equation (3), then the equations of motion can be written in the general form:

$$ \dot{\psi} = \gamma_0 \mathcal{A} \psi = \mathbf{F} \psi \qquad (13) $$

The matrix $\mathbf{F} = \gamma_0 \mathcal{A}$ is the product of the symmetric (positive semi-definite) matrix $\mathcal{A}$ and the skew-symmetric matrix $\gamma_0$. As known from linear algebra, the trace of such products is zero:

$$ \mathrm{Tr}(\mathbf{F}) = 0 \qquad (14) $$

Pure harmonic oscillation of $\psi$ is described by matrices $\mathbf{F}$ with purely imaginary eigenvalues, and those are the only stable solutions [12]. Note that Equation (13) may represent a tremendous variety of systems: all linearly coupled systems in any dimension, chains or $d$-dimensional lattices of linearly coupled oscillators, and wave propagation (the linear approximation does, however, not allow for the description of the transport of heat).

One quickly derives from the properties of $\gamma_0$ and $\mathcal{A}$ that

$$ \mathbf{F}^T = \mathcal{A}^T \gamma_0^T = -\mathcal{A} \gamma_0 = \gamma_0^2 \mathcal{A} \gamma_0 = \gamma_0 \mathbf{F} \gamma_0 \qquad (15) $$

Since any square matrix can be written as the sum of a symmetric and a skew-symmetric matrix, it is natural to also consider the properties of products of $\gamma_0$ with a skew-symmetric real square matrix $\mathcal{B}$.
If $\mathbf{C} = \gamma_0 \mathcal{B}$, then

$$ \mathbf{C}^T = \mathcal{B}^T \gamma_0^T = \mathcal{B} \gamma_0 = -\gamma_0^2 \mathcal{B} \gamma_0 = -\gamma_0 \mathbf{C} \gamma_0 \qquad (16) $$

Symmetric $2n \times 2n$-matrices contain $2n(2n+1)/2$ different matrix elements and skew-symmetric ones $2n(2n-1)/2$ elements, so that there are $v_s$ linearly independent matrix elements in $\mathcal{A}$,

$$ v_s = n(2n + 1) \qquad (17) $$

and $v_c$ matrix elements in $\mathcal{B}$, with

$$ v_c = n(2n - 1) \qquad (18) $$

In the theory of linear Hamiltonian dynamics, matrices of the form of $\mathbf{F}$ are known as “Hamiltonian” or “infinitesimal symplectic” matrices and those of the form of $\mathbf{C}$ as “skew-Hamiltonian” matrices. This convention is a bit odd, as $\mathbf{F}$ does not appear in the Hamiltonian and is in general not symplectic. Furthermore, the term “Hamiltonian matrix” has a different meaning in quantum mechanics, in close analogy to $\mathcal{A}$. But it is known that this type of matrix is closely connected to symplectic matrices, as every symplectic matrix is a matrix exponential of a matrix $\mathbf{F}$ [12]. We consider the matrices defined by Equations (15) and (16) as too important and fundamental to have no meaningful and unique names: Therefore we speak of a **symplex** (plural *symplices*) if a matrix satisfies Equation (15), and of a **cosymplex** if it satisfies Equation (16).

### Symplectic Motion and Second Moments

So what is a symplectic matrix anyway? The concept of symplectic transformations is a specific formulation of the theory of canonical transformations. Suppose we define a new state vector (or new coordinates) $\phi(\psi)$, with the additional requirement that the transformation is reversible.
Then the Jacobian matrix of the transformation is given by

$$ J_{ij} = \frac{\partial \phi_i}{\partial \psi_j} \qquad (19) $$

and the transformation is said to be symplectic if the Jacobian matrix satisfies [12]

$$ \mathbf{J}\gamma_0\mathbf{J}^T = \gamma_0 \qquad (20) $$

Let us see what this implies in the linear case:

$$ \begin{aligned} \mathbf{J} \dot{\psi} &= \mathbf{J} \mathbf{F} \mathbf{J}^{-1} \mathbf{J} \psi \\ \tilde{\psi} &= \mathbf{J} \psi \\ \dot{\tilde{\psi}} &= \mathbf{J} \mathbf{F} \mathbf{J}^{-1} \tilde{\psi} \\ \dot{\tilde{\psi}} &= \tilde{\mathbf{F}} \tilde{\psi} \end{aligned} \qquad (21) $$

and, by use of Equation (20), one finds that $\tilde{\mathbf{F}}$ is still a symplex:

$$ \begin{aligned} \tilde{\mathbf{F}}^T &= (\mathbf{J}^{-1})^T \mathbf{F}^T \mathbf{J}^T \\ &= (\mathbf{J}^{-1})^T \gamma_0 \mathbf{F} \gamma_0 \mathbf{J}^T \\ &= -\gamma_0^2 (\mathbf{J}^{-1})^T \gamma_0 \mathbf{F} \mathbf{J}^{-1} \gamma_0 \\ &= -\gamma_0 \mathbf{J} \gamma_0^2 \mathbf{F} \mathbf{J}^{-1} \gamma_0 \\ &= \gamma_0 \mathbf{J} \mathbf{F} \mathbf{J}^{-1} \gamma_0 \\ &= \gamma_0 \tilde{\mathbf{F}} \gamma_0 \end{aligned} \qquad (22) $$

Hence a symplectic transformation is first of all a similarity transformation; secondly, it preserves the structure of all involved equations. Therefore the transformation is said to be *canonical* or *structure preserving*. The distinction between canonical and non-canonical transformations can thus be traced back to the skew-symmetry of $\gamma_0$ and the symmetry of $\mathcal{A}$, both of them consequences of the rules of our physics modeling game.

Recall that we argued that the matrix $\mathcal{A}$ should be symmetric *because* skew-symmetric terms do not contribute to the Hamiltonian. Let us have a closer look at what this means.
Consider the matrix of second moments $\Sigma$ that can be built from the variables $\psi$:

$$ \Sigma = \langle \psi \psi^T \rangle \qquad (23) $$

in which the angles indicate some (yet unspecified) sort of average. The equation of motion of this matrix is given by

$$ \begin{aligned} \dot{\Sigma} &= \langle \dot{\psi} \psi^T \rangle + \langle \psi \dot{\psi}^T \rangle \\ &= \langle \mathbf{F} \psi \psi^T \rangle + \langle \psi \psi^T \mathbf{F}^T \rangle \end{aligned} \qquad (24) $$

Now, as long as $\mathbf{F}$ does not depend on $\psi$, we obtain

$$ \begin{aligned} \dot{\Sigma} &= \mathbf{F}\Sigma + \Sigma \mathbf{F}^T \\ \dot{\Sigma} &= \mathbf{F}\Sigma + \Sigma \gamma_0 \mathbf{F} \gamma_0 \\ (\dot{\Sigma}\gamma_0) &= \mathbf{F}(\Sigma\gamma_0) - (\Sigma\gamma_0)\mathbf{F} \\ \dot{\mathbf{S}} &= \mathbf{F}\mathbf{S} - \mathbf{S}\mathbf{F} \end{aligned} \qquad (25) $$

where we defined the new matrix $\mathbf{S} \equiv \Sigma \gamma_0$. For completeness we introduce the “adjoint” spinor $\bar{\psi} = \psi^\dagger \gamma_0$, so that we may write

$$ \mathbf{S} = \langle \psi \bar{\psi} \rangle \qquad (26) $$

Note that $\mathbf{S}$ is also a symplex. The matrix $\mathbf{S}$ (i.e., the set of all second moments) is constant iff $\mathbf{S}$ and $\mathbf{F}$ commute. Now we define an *observable* to be an operator $\mathbf{O}$ with a (potentially) non-vanishing expectation value, defined by:

$$ \langle \mathbf{O} \rangle = \langle \bar{\psi} \mathbf{O} \psi \rangle = \langle \psi^T \gamma_0 \mathbf{O} \psi \rangle \qquad (27) $$

Thus, if the product $\gamma_0 \mathbf{O}$ is not skew-symmetric, i.e., if $\mathbf{O}$ contains a product of $\gamma_0$ with a symmetric matrix $\mathcal{B}$, then the expectation value is potentially non-zero:

$$ \langle \mathbf{O} \rangle = \langle \psi^T \gamma_0 (\gamma_0 \mathcal{B}) \psi \rangle = -\langle \psi^T \mathcal{B} \psi \rangle \qquad (28) $$

This means that only the symplex part of an operator is “observable”, while cosymplices yield a vanishing expectation value. Hence Equation (25) delivers the blueprint for the general definition of observables.
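A small numerical illustration (a sketch; the choice $n=2$ and the random matrices are arbitrary, not from the text) confirms that $\mathbf{S} = \Sigma\gamma_0$ is a symplex and that $\mathrm{Tr}(\mathbf{S}^2)$ is conserved under Equation (25), since $\frac{d}{dt}\mathrm{Tr}(\mathbf{S}^2) = 2\,\mathrm{Tr}(\mathbf{S}\dot{\mathbf{S}})$ vanishes by the cyclic property of the trace:

```python
import numpy as np

rng = np.random.default_rng(1)
eta0 = np.array([[0., 1.], [-1., 0.]])
gamma0 = np.kron(np.eye(2), eta0)            # SUM for n = 2 DOF

A = rng.normal(size=(4, 4)); A = A.T @ A     # symmetric positive definite
F = gamma0 @ A                               # Equation (13): a symplex
assert np.allclose(F.T, gamma0 @ F @ gamma0)

Sigma = rng.normal(size=(4, 4)); Sigma = Sigma @ Sigma.T  # second moments
S = Sigma @ gamma0                           # S = Sigma gamma0, also a symplex
assert np.allclose(S.T, gamma0 @ S @ gamma0)

S_dot = F @ S - S @ F                        # Equation (25)
# Tr(S) = 0 (symmetric times skew) and d/dt Tr(S^2) = 2 Tr(S S_dot) = 0
print(abs(np.trace(S)) < 1e-9, abs(np.trace(S @ S_dot)) < 1e-6)
```

The conservation of $\mathrm{Tr}(\mathbf{S}^2)$ anticipates nothing beyond the trace identities derived next.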
Furthermore, we find in the last line of Equation (25) the constituting equation for Lax pairs [13]. Peter Lax has shown that for pairs of operators $\mathbf{S}$ and $\mathbf{F}$ that obey Equation (25) there are the following constants of motion:

$$ \mathrm{Tr}(\mathbf{S}^k) = \mathrm{const} \qquad (29) $$

for arbitrary integer $k > 0$. Since $\mathbf{S}$ is a symplex and therefore by definition the product of a symmetric matrix and the skew-symmetric $\gamma_0$, Equation (29) is always zero and hence trivially true for $k = 1$. The same holds for any odd power of $\mathbf{S}$, as it is easily shown that any odd power of a symplex is again a symplex (see Equation (35)), so that the only non-trivial general constants of motion correspond to even powers of $\mathbf{S}$. This implies that all observables are functions of even powers of the fundamental variables.

To see the validity for $k > 1$ we have to consider the general algebraic properties of the trace operator. Let $\lambda$ be an arbitrary real constant and $\tau$ a real parameter; then

$$ \begin{aligned} \mathrm{Tr}(\mathbf{A}) &= \mathrm{Tr}(\mathbf{A}^T) \\ \mathrm{Tr}(\lambda \mathbf{A}) &= \lambda\, \mathrm{Tr}(\mathbf{A}) \\ \frac{d}{d\tau} \mathrm{Tr}(\mathbf{A}(\tau)) &= \mathrm{Tr}\left(\frac{d\mathbf{A}}{d\tau}\right) \\ \mathrm{Tr}(\mathbf{A} + \mathbf{B}) &= \mathrm{Tr}(\mathbf{A}) + \mathrm{Tr}(\mathbf{B}) \\ \mathrm{Tr}(\mathbf{A}\mathbf{B}) &= \mathrm{Tr}(\mathbf{B}\mathbf{A}) \end{aligned} \qquad (30) $$

It follows that

$$ \begin{aligned} 0 &= \mathrm{Tr}(\mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A}) \\ 0 &= \mathrm{Tr}(\mathbf{A}^n \mathbf{B} - \mathbf{A}^{n-1} \mathbf{B}\mathbf{A}) \\ 0 &= \mathrm{Tr}[\mathbf{A}^{n-1} (\mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A})] \end{aligned} \qquad (31) $$

From the last line of Equation (31) it follows with $\frac{d\mathbf{A}}{d\tau} = \lambda (\mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A})$ that

$$ \frac{d}{d\tau} \mathrm{Tr}(\mathbf{A}^n) = 
0 \qquad (32) $$

Remark: This conclusion is not limited to symplices.

However, for single spinors $\psi$ and the corresponding second moments $\mathbf{S} = \psi\bar{\psi} = \psi\psi^\dagger\gamma_0$ we find:

$$ \begin{aligned} \mathrm{Tr}(\mathbf{S}^k) &= \mathrm{Tr}[\psi \psi^\dagger \gamma_0 \cdots \psi \psi^\dagger \gamma_0] \\ &= \mathrm{Tr}[(\psi^\dagger \gamma_0 \cdots \psi \psi^\dagger \gamma_0)\, \psi] \\ &= \mathrm{Tr}[(\psi^\dagger \gamma_0 \psi) \cdots (\psi^\dagger \gamma_0 \psi)] = 0 \end{aligned} \qquad (33) $$

since each single factor $(\psi^\dagger \gamma_0 \psi)$ vanishes due to the skew-symmetry of $\gamma_0$. Therefore the constants of motion as derived from Equation (29) are non-zero only for even $k$ and *after averaging over some kind of distribution*, such that $\mathbf{S} = \langle \psi \psi^\dagger \gamma_0 \rangle$ has non-zero eigenvalues as in Equation (34) below.

The symmetric $2n \times 2n$-matrix $\Sigma$ (and likewise $\mathcal{A}$) is positive definite if it can be written as a product $\Sigma = \Psi\Psi^\dagger$, where $\Psi$ is a matrix of full rank $2n$ and size $2n \times m$ with $m \ge 2n$.

For $n = m/2 = 1$, the form of $\Psi$ may be chosen as

$$ \begin{aligned} \Psi &= \frac{1}{\sqrt{q^2+p^2}} \begin{pmatrix} q & -p \\ p & q \end{pmatrix} = \frac{1}{\sqrt{q^2+p^2}} (\mathbf{1}\psi, -\eta_0 \psi) \\ \Rightarrow \quad \Sigma &= \Psi\Psi^\dagger = \Psi^\dagger\Psi = \mathbf{1} \\ \mathbf{S} &= \gamma_0 \end{aligned} \qquad (34) $$

so that for $k=2$ the average over the two “orthogonal” column vectors $\psi$ and $-\eta_0\psi$ gives a non-zero constant of motion via Lax pairs, as $\gamma_0^2 = -\mathbf{1}$.

These findings have some consequences for the modeling game.
The first is that we have found constants of motion, though some of them are physically meaningful only for a non-vanishing volume in phase space, i.e., for a combination of several spinors $\psi$. Secondly, a stable state $\dot{\mathbf{S}} = 0$ implies that the matrix operators forming the Lax pair have the same eigenvectors: a density distribution in phase space (as described by the matrix of second moments) is stable if it is adapted or *matched* to the symplex $\mathbf{F}$. The phase space distribution as represented by $\mathbf{S}$ and the driving terms (the components of $\mathbf{F}$) must fit each other in order to obtain a stable “eigenstate”. But we also found a clear reason why generators (of symplectic transformations) are always observables and vice versa: Both the generators and the observables are symplices of the same type. There is a one-to-one correspondence between them, not only as *generators of infinitesimal transformations*, but also algebraically.

Furthermore, we may conclude that (anti-)commutators are an essential part of “classical” Hamiltonian mechanics, and secondly that the matrix $\mathbf{S}$ has the desired properties of observables: Though $\mathbf{S}$ is based on continuously varying fundamental variables, it is constant if it commutes with $\mathbf{F}$, and it varies otherwise (in accelerator physics, Equation (25) describes the envelope of a beam in linear optics. The matrix of second moments $\Sigma$ is a covariance matrix, and therefore our modeling game is connected to probability theory exactly when observables are introduced).

Hence it appears sensible to take a closer look at the (anti-)commutation relations of (co-)symplices, and though the definitions of (co-)symplices are quite plain, the (anti-)commutator algebra that emerges from them has a surprisingly rich structure. If we denote symplices by $S_k$ and cosymplices by $C_k$, then the following rules can quickly be derived:

$$
\left.
\begin{array}{l}
S_1 S_2 - S_2 S_1 \\
C_1 C_2 - C_2 C_1 \\
C S + S C \\
S^{2n+1}
\end{array}
\right\} \Rightarrow \text{symplex}
$$

$$
\left.
\begin{array}{l}
S_1 S_2 + S_2 S_1 \\
C_1 C_2 + C_2 C_1 \\
C S - S C \\
S^{2n} \\
C^n
\end{array}
\right\} \Rightarrow \text{cosymplex} \qquad (35)
$$

This *Hamiltonian* algebra of (anti-)commutators is of fundamental importance insofar as we derived it in a few steps from first principles (i.e., the rules of the game), and it defines the structure of Hamiltonian dynamics in phase space. The distinction between symplices and cosymplices is also the distinction between observables and non-observables. It is the basis of essential parts of the following considerations.

## 4. Geometry from Hamiltonian Motion

In the following we will demonstrate the geometrical content of the algebra of (co-)symplices (Equation (35)), which emerges for specific numbers of DOF $n$. As shown above, pairs of canonical variables (DOF) are a direct consequence of the abstract rules of our game. Though single DOF are poor “objects”, it is remarkable to find physical structures emerging from our abstract rules at all. This suggests that there might be more structure to discover when $n$ DOF are combined, for instance geometrical structures. The following considerations obey the rules of our game, since they are based purely on symmetry considerations like those that guided us towards Hamiltonian dynamics. The objects of interest in our algebraic interpretation of Hamiltonian dynamics are matrices. The first matrix (besides $\mathcal{A}$) with a specific form that we found is $\gamma_0$. It is a symplex:

$$ \gamma_0^T = -\gamma_0 = \gamma_0 \gamma_0 \gamma_0 \qquad (36) $$

According to Equation (17) there are $v_s = n(2n+1)$ symplices (i.e., $v_s \ge 3$). Hence it is natural to ask whether other symplices with properties similar to $\gamma_0$ exist, and if so, what the relations between these matrices are.
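The algebra of Equation (35), on which the following relies, can be spot-checked numerically (a sketch with randomly generated (co-)symplices for $n=2$; sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
eta0 = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(np.eye(2), eta0)                 # the SUM gamma_0

def symplex():                                # gamma_0 times a random symmetric matrix
    M = rng.normal(size=(4, 4)); return g0 @ (M + M.T)

def cosymplex():                              # gamma_0 times a random skew-symmetric matrix
    M = rng.normal(size=(4, 4)); return g0 @ (M - M.T)

is_s = lambda X: np.allclose(X.T,  g0 @ X @ g0)   # Equation (15)
is_c = lambda X: np.allclose(X.T, -g0 @ X @ g0)   # Equation (16)

S1, S2, C1, C2 = symplex(), symplex(), cosymplex(), cosymplex()

# the rules of Equation (35)
assert is_s(S1 @ S2 - S2 @ S1) and is_s(C1 @ C2 - C2 @ C1)
assert is_s(C1 @ S1 + S1 @ C1) and is_s(np.linalg.matrix_power(S1, 3))
assert is_c(S1 @ S2 + S2 @ S1) and is_c(C1 @ C2 + C2 @ C1)
assert is_c(C1 @ S1 - S1 @ C1) and is_c(S1 @ S1) and is_c(C1 @ C1)
print("commutator algebra of Equation (35) confirmed")
```

Each rule follows by transposing the product and applying Equations (15) and (16) to the factors.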
According to Equation (35) the commutator of two symplices is again a symplex, while the anti-commutator is a cosymplex. As we are primarily interested in *observables* and components of the Hamiltonian (i.e., symplices), we look for further symplices that anti-commute with $\gamma_0$ and with each other. In this case, the product of two such matrices is also a symplex, i.e., another potential contribution to the general Hamiltonian matrix $\mathbf{F}$.

Assume we had a set of $N$ mutually anti-commuting orthogonal symplices $\gamma_0$ and $\gamma_k$ with $k \in [1 \ldots N-1]$; then a Hamiltonian matrix $\mathbf{F}$ might look like

$$ \mathbf{F} = \sum_{k=0}^{N-1} f_k \gamma_k + \dots \qquad (37) $$

The $\gamma_k$ are symplices and anti-commute with $\gamma_0$:

$$ \gamma_0 \gamma_k + \gamma_k \gamma_0 = 0 \qquad (38) $$

Multiplication from the left with $\gamma_0$ gives:

$$ -\gamma_k + \gamma_0 \gamma_k \gamma_0 = -\gamma_k + \gamma_k^T = 0 \qquad (39) $$

so that all other possible symplices $\gamma_k$ which anti-commute with $\gamma_0$ are symmetric and square to $\mathbf{1}$. This is an important finding for what follows, as it can (within our game) be interpreted as a classical proof of the uniqueness of the (observable) time dimension: Time is one-dimensional, as there is no other skew-symmetric symplex that anti-commutes with $\gamma_0$. We can choose different forms for $\gamma_0$, but the emerging algebra allows for no second “direction of time”.
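Such a set can be exhibited concretely (a sketch; this particular real $4 \times 4$ quadruple is one possible choice among equivalent ones, not necessarily the representation used elsewhere in the text):

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

# four mutually anti-commuting real 4x4 matrices: gamma_0 plus three gamma_k
g = [np.kron(eta0, I2), np.kron(sx, sx), np.kron(sx, sz), np.kron(sz, I2)]

for i in range(4):
    for j in range(i + 1, 4):
        # Equation (38), extended to all pairs
        assert np.allclose(g[i] @ g[j] + g[j] @ g[i], np.zeros((4, 4)))

assert np.allclose(g[0].T, -g[0])              # gamma_0 is skew and squares to -1
assert np.allclose(g[0] @ g[0], -np.eye(4))
for k in (1, 2, 3):
    assert np.allclose(g[k].T, g[k])           # Equation (39): gamma_k symmetric ...
    assert np.allclose(g[k] @ g[k], np.eye(4)) # ... and squaring to +1
print("one skew time-like and three symmetric space-like units")
```

Exactly one unit is skew-symmetric with negative square; all others are forced to be symmetric, in line with the uniqueness argument above.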
The second-order derivative of $\psi$ is (for constant $\mathbf{F}$) given by $\ddot{\psi} = \mathbf{F}^2 \psi$, which yields:

$$ \mathbf{F}^2 = \sum_{i=0}^{N-1} f_i^2 \gamma_i^2 + \sum_{i \neq j} f_i f_j (\gamma_i \gamma_j + \gamma_j \gamma_i) \qquad (40) $$

Since the anti-commutator on the right vanishes by definition, we are left with:

$$ \mathbf{F}^2 = \left( \sum_{k=1}^{N-1} f_k^2 - f_0^2 \right) \mathbf{1} \qquad (41) $$

Thus we find a set of (coupled) oscillators if

$$ f_0^2 > \sum_{k=1}^{N-1} f_k^2 \qquad (42) $$

such that

$$ \ddot{\psi} = -\omega^2 \psi \qquad (43) $$

Provided such matrix systems exist, they generate a Minkowski-type “metric” as in Equation (41) (indeed it appears that Dirac derived his system of matrices from this requirement [14]). The appearance of this metric shows how a Minkowski-type geometry emerges from the driving terms of oscillatory motion. This is indeed possible, at least for symplices of certain dimensions, as we will show below. The first thing needed is some kind of measure to define the length of a “vector”. Since length is a measure that is invariant under certain transformations, specifically under rotations, we prefer to use a quantity with certain invariance properties to define it. The only one we have at hand is given by Equation (29). Accordingly we define the (squared) length of a matrix representing a “vector” by

$$ \|\mathbf{A}\|^2 = \frac{1}{2n} \mathrm{Tr}(\mathbf{A}^2) \qquad (44) $$

The division by $2n$ is required to make the unit matrix have unit norm. Besides the norm we need a scalar product, i.e., a definition of orthogonality. Consider the Pythagorean theorem, which says that two vectors $\vec{a}$ and $\vec{b}$ are orthogonal iff

$$ (\vec{a} + \vec{b})^2 = \vec{a}^2 + \vec{b}^2 \qquad (45) $$

The general expression is

$$ (\vec{a} + \vec{b})^2 = \vec{a}^2 + \vec{b}^2 + 2\, \vec{a} \cdot \vec{b} \qquad (46) $$

The two expressions are equal iff $\vec{a} \cdot \vec{b} = 0$.
Hence the Pythagorean theorem yields a reasonable definition of orthogonality. However, we have had no method yet to define vectors within our game. Using matrices $\mathbf{A}$ and $\mathbf{B}$ we may write

$$ \begin{aligned} \|\mathbf{A} + \mathbf{B}\|^2 &= \frac{1}{2n} \mathrm{Tr}[(\mathbf{A} + \mathbf{B})^2] \\ &= \|\mathbf{A}\|^2 + \|\mathbf{B}\|^2 + \frac{1}{2n} \mathrm{Tr}(\mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A}) \end{aligned} \qquad (47) $$

If we compare this to Equations (45) and (46), respectively, then the obvious definition of the inner product is given by:

$$ \mathbf{A} \cdot \mathbf{B} = \frac{\mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A}}{2} \qquad (48) $$

Since the anti-commutator does in general not yield a scalar, we have to distinguish between inner product and scalar product:

$$ (\mathbf{A} \cdot \mathbf{B})_S = \frac{1}{4n} \mathrm{Tr}(\mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A}) \qquad (49) $$

where we indicate the scalar part by the subscript “S”. Accordingly we define the exterior product by the commutator

$$ \mathbf{A} \wedge \mathbf{B} = \frac{\mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A}}{2} \qquad (50) $$

Now that we have defined the products, we should come back to the unit vectors. The only “unit vector” that we explicitly defined so far is the symplectic unit matrix $\gamma_0$. If it represents anything at all, then it must be “the direction” of change, the direction of evolution in time, as it was derived in this context and is the only “dimension” found so far. As we have already shown, all other unit vectors $\gamma_k$ must be symmetric if they are symplices. And vice versa: If $\gamma_k$ is symmetric and anti-commutes with $\gamma_0$, then it is a symplex.
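These definitions can be illustrated with an explicit basis (a sketch; the real $4 \times 4$ anti-commuting quadruple below is one possible choice, not prescribed by the text):

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

# one possible set of mutually anti-commuting real unit "vectors" (2n = 4)
g = [np.kron(eta0, I2), np.kron(sx, sx), np.kron(sx, sz), np.kron(sz, I2)]

def norm2(A):                 # squared length, Equation (44)
    return np.trace(A @ A) / 4

def scalar(A, B):             # scalar product, Equation (49)
    return np.trace(A @ B + B @ A) / 8

# the skew-symmetric unit has negative squared length, the symmetric ones +1
print([float(norm2(gk)) for gk in g])   # [-1.0, 1.0, 1.0, 1.0]

# distinct anti-commuting unit vectors are mutually orthogonal
for i in range(4):
    for j in range(i + 1, 4):
        assert np.isclose(scalar(g[i], g[j]), 0.0)
```

Orthogonality here is automatic: for anti-commuting matrices the anti-commutator, and hence the scalar product, vanishes identically.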
As only symplices represent observables and are generators of symplectic transformations, we can have only a single “time” direction $\gamma_0$ and a yet unknown number of *symmetric* unit vectors (thus we found a simple answer to the question why only a single time direction is possible, a question also debated in Reference [15]). However, for $n > 1$, there might be different equivalent choices of $\gamma_0$. Whatever the specific form of $\gamma_0$ is, we will show that, in combination with some general requirements like completeness, normalizability and observability, it determines the structure of the complete algebra. Though we do not yet know how many symmetric and pairwise anti-commuting unit vectors $\gamma_k$ exist, we have to interpret them as unit vectors in “spatial directions” (the meaning of what a spatial direction is, especially in contrast to the direction of time $\gamma_0$, has to be derived from the form of the emerging equations, of course. As meaning follows form, we do not define space-time, but we identify structures that fit the known concept of space-time). Of course unit vectors must have unit length, so that we have to demand that

$$ \|\gamma_k\|^2 = \frac{1}{2n} \mathrm{Tr}(\gamma_k^2) = \pm 1 \qquad (51) $$

Note that (since our norm is not positive definite) we explicitly allow for unit vectors with negative squared “length”, as we find for $\gamma_0$. Note furthermore that all skew-symmetric unit vectors square to $-\mathbf{1}$ while the symmetric ones square to $\mathbf{1}$ [16].

Indeed, systems of $N = p+q$ anti-commuting real matrices are known as real representations of Clifford algebras $Cl_{p,q}$. The index $p$ is the number of unit elements (“vectors”) that square to $+1$ and $q$ is the number of unit vectors that square to $-1$. Clifford algebras are not necessarily connected to Hamiltonian motion; rather, they can be regarded as purely mathematical “objects”.
They can be defined without reference to matrices whatsoever. Hence in mathematics, sets of matrices are merely “representations” of Clifford algebras. But our game is about physics, and due to the proven one-dimensionality of time we concentrate on Clifford algebras $Cl_{N-1,1}$, which link coupled harmonic oscillators (CHOs) in the described way with the generators of a Minkowski-type metric. Further below it will turn out that the representation by matrices is, within the game, indeed helpful, since it leads to an overlap of certain symmetry structures. The unit elements (or unit “vectors”) of a Clifford algebra, $\mathbf{e}_k$, are called the *generators* of the Clifford algebra. They pairwise anti-commute and they square to $\pm 1$ (the role as *generator* of the Clifford algebra should not be confused with the role as generator of symplectic transformations (i.e., a symplex). Though we are especially interested in Clifford algebras in which all generators are symplices, not all symplices are generators of the Clifford algebra. Bi-vectors, for instance, are symplices, but not generators of the Clifford algebra). Since the inverse of the unit elements $\mathbf{e}_k$ of a Clifford algebra must be unique, the products of different unit vectors form new elements, and all possible products including the unit matrix form a group. There are $\binom{N}{k}$ possible combinations (products without repetition) of $k$ elements from a set of $N$ generators. We therefore find $\binom{N}{2}$ bi-vectors, which are products of two generators, $\binom{N}{3}$ tri-vectors, and so on. The product of all $N$ basic matrices is called the pseudoscalar.
The total number of all $k$-vectors then is (we identify $k=0$ with the unit matrix $\mathbf{1}$):

$$ \sum_{k=0}^{N} \binom{N}{k} = 2^N \qquad (52) $$

If we desire to construct a complete system, then the number of variables of the Clifford algebra has to match the number of variables of the used matrix system:

$$ 2^N = (2n)^2 \qquad (53) $$

Note that the root of this equation gives an even integer $2^{N/2} = 2n$, so that $N$ must be even. Hence all Hamiltonian Clifford algebras have an even dimension. Of course not all elements of the Clifford algebra may be symplices. The unit matrix (for instance) is a cosymplex. Consider the Clifford algebra $Cl_{1,1}$ with $N=2$, which has two generators, say $\gamma_0$ with $\gamma_0^2 = -\mathbf{1}$ and $\gamma_1$ with $\gamma_1^2 = \mathbf{1}$. These two anti-commute (by definition of the Clifford algebra), so that we find (besides the unit matrix) a fourth matrix formed by the product $\gamma_0\gamma_1$:

$$ \begin{aligned} \gamma_0 \gamma_1 &= -\gamma_1 \gamma_0 \\ (\gamma_0 \gamma_1)^2 &= \gamma_0 \gamma_1 \gamma_0 \gamma_1 \\ &= -\gamma_0 \gamma_0 \gamma_1 \gamma_1 = \mathbf{1} \end{aligned} \qquad (54) $$

The completeness of the Clifford algebras as we use them here implies that any $2n \times 2n$-matrix $\mathbf{M}$ with $(2n)^2 = 2^N$ can be written as a linear combination of all elements of the Clifford algebra:

$$ \mathbf{M} = \sum_{k=0}^{4n^2-1} m_k \gamma_k \qquad (55) $$

The coefficients can be computed from the scalar product of the unit vectors with the matrix $\mathbf{M}$:

$$ m_k = (\gamma_k \cdot \mathbf{M})_S = \frac{s_k}{4n} \operatorname{Tr}(\gamma_k \mathbf{M} + \mathbf{M} \gamma_k) \qquad (56) $$

Recall that skew-symmetric $\gamma_k$ have a negative squared length; therefore we included a factor $s_k$, representing the “signature” of $\gamma_k$, in order to get the correct sign of the coefficients $m_k$.

Can we derive more properties of the constructable space-times?
One restriction results from representation theory: a theorem from the theory of Clifford algebras states that $Cl_{p,q}$ has a representation by real matrices if (and only if) [17]

$$p-q=0 \text{ or } 2 \operatorname{mod} 8 \qquad (57)$$

The additional requirement that all generators must be simplices, so that $p = N-1$ and $q = 1$, then restricts $N$ to

$$N-2=0 \text{ or } 2 \operatorname{mod} 8 \qquad (58)$$

Hence the only matrix systems that have the required symmetry properties within our modeling game are those that represent Clifford algebras with the dimensions $1+1, 3+1, 9+1, 11+1, 17+1, 19+1, 25+1, 27+1$ and so on. These correspond to matrix representations of size $2 \times 2, 4 \times 4, 32 \times 32, 64 \times 64, 512 \times 512$ and so on. The first of them is called the *Pauli algebra*, the second one is the *Dirac algebra*. Do these two have special properties that the higher-dimensional algebras do not have? Yes, indeed.

Firstly, since dynamics is based on canonical pairs, the real Pauli algebra describes the motion of a single DOF and the Dirac algebra describes the simplest system with interaction between two DOF. This suggests the interpretation that within our game, objects (Dirac particles) are not located "within space-time", since we did not define space at all up to this point, but that space-time can be modeled as an emergent phenomenon. Space-time is in between particles.

Secondly, if we equate the number of fundamental variables ($2n$) of the oscillator phase space with the dimension of the Clifford space $N$, then Equation (53) leads to

$$2^N = N^2 \qquad (59)$$

which allows for $N=2$ and $N=4$ only. But why should it be meaningful to assume $N=2n$? The reason is quite simple: if $2n > N$, as for all higher-dimensional state vectors, there are fewer generators of the algebra than independent variables. This discrepancy increases with $n$.
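These dimensional constraints are easy to enumerate. The short sketch below (our own illustration, assuming NumPy is not even needed) lists the solutions of Equation (59), the dimensions allowed by Equation (58), and the corresponding matrix sizes from Equation (53):

```python
# Solutions of 2^N = N^2 (Eq. 59): only N = 2 and N = 4
solutions = [N for N in range(1, 64) if 2**N == N**2]

# Dimensions allowed by the real-representation condition (Eq. 58):
# N - 2 = 0 or 2 mod 8, i.e., the Clifford algebras Cl(N-1, 1) listed in the text
real_dims = [N for N in range(2, 30) if (N - 2) % 8 in (0, 2)]

# Matrix sizes 2n = 2^(N/2) from Eq. (53)
matrix_sizes = [2**(N // 2) for N in real_dims]
```

Running this reproduces the sequence $1{+}1, 3{+}1, 9{+}1, 11{+}1, \ldots$ (as total dimensions $2, 4, 10, 12, \ldots$) and the matrix sizes $2, 4, 32, 64, 512, \ldots$ quoted above.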
Hence the described objects cannot be pure vectors anymore, but must contain tensor-type components ($k$-vectors). (For a deeper discussion of the dimensionality of space-time, see Reference [16] and references therein.)

But before we describe a formal way to interpret Equation (59), let us first investigate the physical and geometrical implications of the game as described so far.

## Matrix Exponentials

We said that the unit vectors $\gamma_0$ and $\gamma_k$ are simplices and therefore generators of symplectic transformations. All symplectic matrices are matrix exponentials of simplices [12]. The computation of matrix exponentials is in the general case non-trivial. However, in the special case of matrices that square to $\pm \mathbf{1}$ (e.g., along the "axis" $\gamma_a$ of the coordinate system), the exponentials are readily evaluated:

$$\exp(\gamma_a \tau) = \sum_{k=0}^{\infty} \frac{(\gamma_a \tau)^k}{k!} = \sum_{k=0}^{\infty} s^k \frac{\tau^{2k}}{(2k)!} + \gamma_a \sum_{k=0}^{\infty} s^k \frac{\tau^{2k+1}}{(2k+1)!} \qquad (60)$$

where $s = \pm 1$ is the sign of the matrix square of $\gamma_a$. For $s = -1$ ($\gamma_a^2 = -\mathbf{1}$), it follows that

$$\mathbf{R}_a(\tau) = \exp(\gamma_a \tau) = \cos(\tau) + \gamma_a \sin(\tau) \qquad (61)$$

and for $s = 1$ ($\gamma_a^2 = \mathbf{1}$):

$$\mathbf{B}_a(\tau) = \exp(\gamma_a \tau) = \cosh(\tau) + \gamma_a \sinh(\tau) \qquad (62)$$

We can identify skew-symmetric generators with rotations and (as we will show in more detail below) symmetric generators with boosts.
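The closed forms (61) and (62) can be confirmed numerically. In the sketch below (our own illustration, assuming NumPy; `expm_series` is a naive helper, adequate for small arguments) a truncated Taylor series of the matrix exponential is compared against both closed forms:

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small A)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

tau = 0.7
g_rot   = np.array([[0., 1.], [-1., 0.]])   # squares to -1: generates a rotation
g_boost = np.array([[0., 1.], [ 1., 0.]])   # squares to +1: generates a boost

R = np.cos(tau) * np.eye(2) + np.sin(tau) * g_rot      # Eq. (61)
B = np.cosh(tau) * np.eye(2) + np.sinh(tau) * g_boost  # Eq. (62)

assert np.allclose(expm_series(g_rot * tau), R)
assert np.allclose(expm_series(g_boost * tau), B)
```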
The (hyperbolic) sine/cosine structure of symplectic matrices is not limited to the generators but is a general property of the matrix exponentials of a symplex $\mathbf{F}$ (these properties are the main motivation for the nomenclature of "symplex" and "cosymplex"):

$$
\mathbf{M}(t) = \exp(\mathbf{F} t) = \mathbf{C} + \mathbf{S} \tag{63}
$$

where the symplex $\mathbf{S}$ and the cosymplex $\mathbf{C}$ are given by:

$$
\begin{align*}
\mathbf{S} &= \sinh(\mathbf{F} t) \\
\mathbf{C} &= \cosh(\mathbf{F} t)
\end{align*}
\tag{64}
$$

since any linear combination of odd powers of a symplex is again a symplex and the sum of all even powers is a cosymplex. The inverse transfer matrix $\mathbf{M}^{-1}(t)$ is given by:

$$
\mathbf{M}^{-1}(t) = \mathbf{M}(-t) = \mathbf{C} - \mathbf{S} \quad (65)
$$

The physical meaning of the matrix exponential results from Equation (13), which states that (for constant simplices $\mathbf{F}$) the solutions are given by the matrix exponential of $\mathbf{F}$:

$$
\psi(t) = \mathbf{M}(t) \psi(0) \tag{66}
$$

A symplectic transformation can be regarded as the result of a possible evolution in time. There is no proof that non-symplectic processes are forbidden by nature; we can only say that symplectic transformations are *structure preserving*, while non-symplectic transformations are *structure defining*. Both play a fundamental role in the physics of our model reality, because fundamental particles are, according to our model, represented by dynamical structures. Therefore symplectic transformations describe those processes and interactions in which structure is preserved, i.e., in which the type of the particle is not changed. The fundamental variables are just "carriers" of the dynamical structures. Non-symplectic transformations can be used to transform the structure. This could also be described by a rotation of the direction of time. Another interpretation is that of a gauge transformation [18].

## 5. The Significance of (De-)Coupling

In physics it is a standard technique to reduce the complexity of problems by a suitable change of variables. In the case of linear systems, the change of variables is a linear canonical transformation. The goal of such transformations is usually to substitute the solution of a complicated problem by the solution of multiple simpler problems. This technique is known under various names: one of them is decoupling, but it is also known as principal component analysis or (as we will later show) transformation into the "rest frame". In other branches of science one might refer to it as pattern recognition.

In the following we investigate how to transform a general oscillatory $2n \times 2n$-dimensional symplex to normal form. Certainly it would be preferable to find a "physical method", i.e., a method that matches the concepts that we introduced so far and that has inherently physical significance, or at least significance and explanatory power with respect to our modeling game. Let us start from the simplest systems, i.e., with the Pauli and Dirac algebras, which correspond to matrices of size $2 \times 2$ and $4 \times 4$, respectively.

## 5.1. The Pauli Algebra

The fundamental significance of the Pauli algebra is based on the even dimensionality of (classical) phase space. The algebra of $2 \times 2$ matrices describes the motion of a single (isolated) DOF. Besides $\eta_0$, the real Pauli algebra includes the following three matrices:

$$
\begin{align*}
\eta_1 &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\
\eta_2 &= \eta_0 \eta_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\
\eta_3 &= \mathbf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\end{align*}
\tag{67}
$$

All except the unit matrix $\eta_3$ are simplices. If $\eta_0$ and $\eta_1$ are chosen to represent the generators of the corresponding Clifford algebra $Cl_{1,1}$, then $\eta_2$ is the only possible bi-vector.
A general symplex has the form:

$$
\begin{align}
\mathbf{F} &= a\eta_0 + b\eta_1 + c\eta_2 \nonumber \\
&= \begin{pmatrix} c & a+b \\ -a+b & -c \end{pmatrix} \tag{68}
\end{align}
$$

The characteristic equation is given by $\det(\mathbf{F} - \lambda \mathbf{1}) = 0$:

$$
\begin{align}
0 &= (c - \lambda)(-c - \lambda) - (a + b)(-a + b) \notag \\
\lambda &= \pm \sqrt{c^2 + b^2 - a^2} \tag{69}
\end{align}
$$

The eigenvalues $\lambda_{\pm}$ are either both real for $a^2 < c^2 + b^2$ or both imaginary for $a^2 > c^2 + b^2$ (or both zero). Systems in stable oscillation have purely imaginary eigenvalues. This case is the most interesting for our modeling game.

Decoupling is usually understood in the more general sense of treating the interplay of several (at least two) DOF, but here we ask whether all possible oscillating systems with $n = 1$ are isomorphic to normal form oscillators. Since there are 3 parameters in $\mathbf{F}$ and only one COM, namely the frequency $\omega$, we need at least two parameters in the transformation matrix. Let us see whether we can choose these two transformations along the axes of the Clifford algebra. In this case we apply sequentially two symplectic transformations along the axes $\eta_0$ and $\eta_2$. Applying the symplectic transformation matrix $\exp(\eta_0 \tau/2)$ we obtain:

$$
\begin{align}
\mathbf{F}_1 &= \exp(\eta_0 \tau / 2) \mathbf{F} \exp(-\eta_0 \tau / 2) \notag \\
&= a' \eta_0 + b' \eta_1 + c' \eta_2 \tag{70}
\end{align}
$$

(The "half-angle" argument is for convenience.) The transformed coefficients $a'$, $b'$ and $c'$ are given by

$$
\begin{align*}
a' &= a \\
b' &= b \cos \tau - c \sin \tau \\
c' &= c \cos \tau + b \sin \tau
\end{align*}
\tag{71}
$$

so that, depending on the "duration of the pulse", we can choose to transform into a coordinate system in which either $b' = 0$ or $c' = 0$.
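The coefficient transformation (71) can be verified directly. The sketch below (our own illustration, assuming NumPy; `expm_series` is a naive truncated-series exponential) applies the similarity transformation $\exp(\eta_0\tau/2)$ numerically:

```python
import numpy as np

def expm_series(A, terms=40):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

e0 = np.array([[0., 1.], [-1., 0.]])
e1 = np.array([[0., 1.], [ 1., 0.]])
e2 = e0 @ e1                                  # = diag(1, -1)

a, b, c, tau = 2.0, 0.6, 0.8, 0.5
F = a * e0 + b * e1 + c * e2

F1 = expm_series(e0 * tau / 2) @ F @ expm_series(-e0 * tau / 2)

# Eq. (71): a is invariant, (b, c) rotates by the angle tau
expected = (a * e0
            + (b * np.cos(tau) - c * np.sin(tau)) * e1
            + (c * np.cos(tau) + b * np.sin(tau)) * e2)
assert np.allclose(F1, expected)
```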
If we choose $\tau = \arctan(-c/b)$, then $c' = 0$, so that

$$
\mathbf{F}' = a\,\eta_0 + \sqrt{b^2 + c^2}\,\eta_1 = a'\eta_0 + b'\eta_1 \quad (72)
$$

If we choose the next generator to be $\eta_2$, then:

$$
\begin{align*}
a'' &= a' \cosh \tau - b' \sinh \tau \\
b'' &= b' \cosh \tau - a' \sinh \tau
\end{align*}
\tag{73}
$$

In this case we have to distinguish between the cases $a' > b'$ and $a' < b'$. The former is the oscillatory system, and in this case the transformation with $\tau = \operatorname{artanh}(b'/a')$ leads to the normal form of a 1-dim. oscillator:

$$
\begin{aligned}
a'' &= \sqrt{a^2 - b^2 - c^2} \\
b'' &= 0 \\
c'' &= 0
\end{aligned}
\qquad (74) $$

and the matrix $\mathbf{F}''$ has the form

$$ \mathbf{F}'' = \sqrt{a^2 - b^2 - c^2}\, \eta_0 \qquad (75) $$

If the eigenvalues are imaginary, then $\lambda = \pm i\omega$ and hence

$$ \mathbf{F}'' = \omega\, \eta_0 \qquad (76) $$

so that the solution is, for constant frequency, given by the matrix exponential:

$$
\begin{aligned}
\psi(t) &= \exp(\omega \eta_0 t) \psi(0) \\
&= (\mathbf{1} \cos(\omega t) + \eta_0 \sin(\omega t)) \psi(0)
\end{aligned}
\qquad (77) $$

This shows that in the context of stable oscillator algebras the real Pauli algebra can be reduced to the complex number system. This becomes evident if we consider possible representations of the complex numbers: clearly we need two basic elements, the unit matrix and $\eta_0$, i.e., a matrix that commutes with the unit matrix and squares to $-\mathbf{1}$.
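That these two elements indeed reproduce complex arithmetic can be checked in a few lines (our own sketch, assuming NumPy; the helper `C` is a hypothetical name for the $2 \times 2$ representation):

```python
import numpy as np

def C(x, y):
    """Real 2x2 representation of z = x + iy, with eta0 as the unit imaginary."""
    return np.array([[x, y], [-y, x]])

z1, z2 = 1.5 - 0.5j, -2.0 + 1.0j
Z1, Z2 = C(z1.real, z1.imag), C(z2.real, z2.imag)

prod = z1 * z2
assert np.allclose(Z1 @ Z2, C(prod.real, prod.imag))   # multiplication agrees
assert np.allclose(Z1 @ Z1.T, abs(z1)**2 * np.eye(2))  # |z|^2 1 = Z Z^T
```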
If we write "i" instead of $\eta_0$, then it is easily verified that (see also References [17,19] and Equation (34) in combination with Reference [20]):

$$
\begin{aligned}
z &= x + iy \cong Z = \begin{pmatrix} x & y \\ -y & x \end{pmatrix} \\
\bar{z} &= x - iy \cong Z^T = x\mathbf{1} + \eta_0^T y \\
\exp(i\phi) &= \cos(\phi) + i\sin(\phi) \\
\|z\|^2\, \mathbf{1} &= Z Z^T \cong z\bar{z} = x^2 + y^2
\end{aligned}
\qquad (78) $$

The theory of holomorphic functions is based on series expansions and can be equally well formulated with matrices. Viewed from our perspective, the complex numbers are a special case of the real Pauli algebra, since we have shown above that any one-dimensional oscillator can be canonically transformed into a system of the form of Equation (76). Nevertheless we emphasize that the complex numbers interpreted this way can only represent the normal form of an oscillator. The normal form excludes a different scaling of coordinates and momenta as used in classical mechanics, i.e., it intrinsically avoids the appearance of different "spring constants" and masses. (There have been several attempts to explain the appearance of the complex numbers in quantum mechanics [21–27]. A general discussion of the use of complex numbers in physics is beyond the scope of this essay, therefore we add just a remark. Gary W. Gibbons wrote that "In particular there can be no evolution if $\psi$ is real" [24]. We agree with Gibbons that the unit imaginary can be related to evolution in time as it implies oscillation, but we do not agree with his conclusion. Physics was able to describe evolution in time without imaginaries before quantum mechanics and it still is. The unconscious use of the unit imaginary did not prevent quantum mechanics from being experimentally successful. But it prevents physicists from understanding its structure.)

## 5.2. The Dirac Algebra

In this subsection we consider the oscillator algebra for two coupled DOF, the algebra of $4 \times 4$ matrices.
In contrast to the real Pauli algebra, where the parameters *a*, *b* and *c* did not suggest a specific physical meaning, the structure of the Dirac algebra bears geometrical significance, as has been pointed out by David Hestenes and others [28–30]. The (real) Dirac algebra is the simplest real algebra that enables a description of two DOF and the interaction between them. Furthermore the eigenfrequencies of a Dirac symplex $\mathbf{F}$ may be complex, while the spectrum of the Pauli matrices does not include complex numbers off the real and imaginary axes. The spectrum of general $2n \times 2n$-symplices has a certain structure, since the coefficients of the characteristic polynomial are real: if $\lambda$ is an eigenvalue of $\mathbf{F}$, then its complex conjugate $\bar{\lambda}$ as well as $-\lambda$ and $-\bar{\lambda}$ are also eigenvalues. As we will show, this is the spectrum of the Dirac algebra, and therefore any $2n \times 2n$-system can, at least in principle, be block-diagonalized using $4 \times 4$-blocks. The Dirac algebra is therefore the simplest algebra that covers the general case.

The structure of Clifford algebras follows Pascal's triangle. The Pauli algebra has the structure 1 − 2 − 1 (scalar, vectors, bi-vector), the Dirac algebra has the structure 1 − 4 − 6 − 4 − 1, standing for unit element (scalar), vectors, bi-vectors, tri-vectors and pseudoscalar. The vector elements are by convention indexed as $\gamma_\mu$ with $\mu = 0 \ldots$
3, i.e., the generators of the algebra (according to Pauli's fundamental theorem of the Dirac algebra, all possible choices of the Dirac matrices are, as long as the "metric tensor" $g_{\mu\nu}$ remains unchanged, equivalent [31]):

$$
\begin{align}
\gamma_0 &= \begin{pmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0
\end{pmatrix} &
\gamma_1 &= \begin{pmatrix}
0 & -1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} \notag \\
\gamma_2 &= \begin{pmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{pmatrix} &
\gamma_3 &= \begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix} \tag{79}
\end{align}
$$

We define the following numbering scheme for the remaining matrices (the specific choice of the matrices is not unique; a table of the different systems can be found in Reference [32]):

$$
\begin{align*}
\gamma_{14} &= \gamma_0 \gamma_1 \gamma_2 \gamma_3; & \gamma_{15} &= \mathbf{1} \\
\gamma_4 &= \gamma_0 \gamma_1; & \gamma_7 &= \gamma_{14} \gamma_0 \gamma_1 = \gamma_2 \gamma_3 \\
\gamma_5 &= \gamma_0 \gamma_2; & \gamma_8 &= \gamma_{14} \gamma_0 \gamma_2 = \gamma_3 \gamma_1 \\
\gamma_6 &= \gamma_0 \gamma_3; & \gamma_9 &= \gamma_{14} \gamma_0 \gamma_3 = \gamma_1 \gamma_2 \\
\gamma_{10} &= \gamma_{14} \gamma_0 = \gamma_1 \gamma_2 \gamma_3 \\
\gamma_{11} &= \gamma_{14} \gamma_1 = \gamma_0 \gamma_2 \gamma_3 \\
\gamma_{12} &= \gamma_{14} \gamma_2 = \gamma_0 \gamma_3 \gamma_1 \\
\gamma_{13} &= \gamma_{14} \gamma_3 = \gamma_0 \gamma_1 \gamma_2
\end{align*}
\tag{80}
$$

According to Equation (17) we expect 10 simplices, and since the 4 vectors and 6 bi-vectors are simplices, all other elements are cosymplices.
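The Clifford property of the matrices in Equation (79) is quickly confirmed numerically. The following sketch (our own check, assuming NumPy) verifies that they pairwise anticommute and square to the metric signature $(-1, +1, +1, +1)$:

```python
import numpy as np

# The four real Dirac matrices of Eq. (79)
g0 = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]], dtype=float)
g1 = np.array([[0, -1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
g2 = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], dtype=float)
g3 = np.diag([-1., 1., -1., 1.])
gammas = [g0, g1, g2, g3]

# Clifford condition: g_mu g_nu + g_nu g_mu = 2 g_{mu nu} 1
metric = np.diag([-1., 1., 1., 1.])
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * metric[mu, nu] * np.eye(4))
```

Only $\gamma_0$ squares to $-\mathbf{1}$, which is the algebraic origin of the Minkowski-type metric mentioned above.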
With this ordering, the general $4 \times 4$-symplex $\mathbf{F}$ can be written as (instead of Equation (55)):

$$
\mathbf{F} = \sum_{k=0}^{9} f_k \gamma_k
\quad (81)
$$

In Reference [32] we presented a detailed survey of the Dirac algebra with respect to symplectic Hamiltonian motion. The essence of this survey is the insight that the real Dirac algebra describes the Hamiltonian motion of ensembles of two-dimensional oscillators, but as well the motion of a "point particle" in 3-dimensional space, *i.e.*, that Equation (25) is, when expressed by the real Dirac algebra, *isomorphic to the Lorentz force equation*, as we are going to show in Section 6.3. Or, in other words, the Dirac algebra allows us to model a point particle and its interaction with the electromagnetic field in terms of a classical statistical ensemble of abstract oscillators.

## 6. Electromechanical Equivalence (EMEQ)

The number and type of simplices within the Dirac algebra (80) suggest the following vector notation for the coefficients [32,33] of the observables:

$$
\begin{aligned}
\mathcal{E} & \equiv f_0 \\
\vec{P} & \equiv (f_1, f_2, f_3)^T \\
\vec{E} & \equiv (f_4, f_5, f_6)^T \\
\vec{B} & \equiv (f_7, f_8, f_9)^T
\end{aligned}
\qquad (82) $$

where the "clustering" of the coefficients into 3-dimensional vectors will be explained in the following. The first four elements $\mathcal{E}$ and $\vec{P}$ are the coefficients of the generators of the Clifford algebra; the remaining simplices are the 3 symmetric bi-vectors associated with $\vec{E}$ and the 3 skew-symmetric bi-vectors associated with $\vec{B}$. As explained above, the matrix exponentials of pure Clifford elements are readily evaluated (Equations (61) and (62)).
The effect of a symplectic similarity transformation on a symplex,

$$
\begin{aligned}
\tilde{\psi} &= \mathbf{R}(\tau/2) \psi \\
\tilde{\mathbf{F}} &= \mathbf{R}(\tau/2) \mathbf{F} \mathbf{R}^{-1}(\tau/2) \\
&= \mathbf{R}(\tau/2) \mathbf{F} \mathbf{R}(-\tau/2)
\end{aligned}
\qquad (83) $$

can then be computed component-wise, as in the following case of a rotation (using Equation (81)):

$$
\begin{aligned}
\tilde{\mathbf{F}} &= \sum_{k=0}^{9} f_k \mathbf{R}_a \gamma_k \mathbf{R}_a^{-1} \\
\mathbf{R}_a \gamma_k \mathbf{R}_a^{-1} &= (\cos(\tau/2) + \gamma_a \sin(\tau/2)) \gamma_k (\cos(\tau/2) - \gamma_a \sin(\tau/2)) \\
&= \gamma_k \cos^2(\tau/2) - \gamma_a \gamma_k \gamma_a \sin^2(\tau/2) + (\gamma_a \gamma_k - \gamma_k \gamma_a) \cos(\tau/2) \sin(\tau/2)
\end{aligned}
\qquad (84) $$

Since all Clifford elements either commute or anti-commute with each other, there are two possible cases. The first ($\gamma_k$ and $\gamma_a$ commute) yields with $\gamma_a^2 = -\mathbf{1}$:

$$ \mathbf{R}_a \gamma_k \mathbf{R}_a^{-1} = \gamma_k \cos^2(\tau/2) - \gamma_a^2 \gamma_k \sin^2(\tau/2) = \gamma_k \qquad (85) $$

but if $\gamma_k$ and $\gamma_a$ anti-commute, we obtain a rotation:

$$
\begin{aligned}
\mathbf{R}_a \gamma_k \mathbf{R}_a^{-1} &= \gamma_k (\cos^2(\tau/2) - \sin^2(\tau/2)) + 2\, \gamma_a \gamma_k \cos(\tau/2) \sin(\tau/2) \\
&= \gamma_k \cos(\tau) + \gamma_a \gamma_k \sin(\tau)
\end{aligned}
\qquad (86) $$

For $a=9$ ($\gamma_a = \gamma_1 \gamma_2$), for instance, we find:

$$
\begin{aligned}
\tilde{\gamma}_1 &= \gamma_1 \cos(\tau) + \gamma_1 \gamma_2 \gamma_1 \sin(\tau) = \gamma_1 \cos(\tau) - \gamma_2 \sin(\tau) \\
\tilde{\gamma}_2 &= \gamma_2 \cos(\tau) + \gamma_1 \gamma_2 \gamma_2 \sin(\tau) = \gamma_2 \cos(\tau) + \gamma_1 \sin(\tau) \\
\tilde{\gamma}_3 &= \gamma_3,
\end{aligned}
\qquad (87) $$

which is formally equivalent to a rotation of $\vec{P}$ about the "z-axis".
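Equation (87) can be checked numerically. In the sketch below (our own illustration, assuming NumPy; the matrices of Equation (79) are rebuilt compactly via Kronecker products, which reproduces them entry by entry) the transformation generated by $\gamma_9 = \gamma_1\gamma_2$ is applied to the three vector generators:

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
eta1 = np.array([[0., 1.], [ 1., 0.]])
I2, I4 = np.eye(2), np.eye(4)
g0 = np.kron(I2, eta0)                       # the matrices of Eq. (79)
g1 = np.kron(np.diag([-1., 1.]), eta1)
g2 = np.kron(eta1, eta1)
g3 = np.kron(I2, np.diag([-1., 1.]))

tau = 0.4
ga = g1 @ g2                                 # gamma_9, skew-symmetric, ga^2 = -1
R  = np.cos(tau / 2) * I4 + np.sin(tau / 2) * ga   # Eq. (61)
Ri = np.cos(tau / 2) * I4 - np.sin(tau / 2) * ga   # R(-tau/2)

# Eq. (87): a rotation of the vector part about the "z-axis"
assert np.allclose(R @ g1 @ Ri, np.cos(tau) * g1 - np.sin(tau) * g2)
assert np.allclose(R @ g2 @ Ri, np.cos(tau) * g2 + np.sin(tau) * g1)
assert np.allclose(R @ g3 @ Ri, g3)
```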
If the generator $\gamma_a$ of the transformation is symmetric ($\gamma_a^2 = \mathbf{1}$), we obtain instead:

$$
\begin{aligned}
\mathbf{B}_a \gamma_k \mathbf{B}_a^{-1} &= (\cosh(\tau/2) + \gamma_a \sinh(\tau/2)) \gamma_k (\cosh(\tau/2) - \gamma_a \sinh(\tau/2)) \\
&= \gamma_k \cosh^2(\tau/2) - \gamma_a \gamma_k \gamma_a \sinh^2(\tau/2) + (\gamma_a \gamma_k - \gamma_k \gamma_a) \cosh(\tau/2) \sinh(\tau/2)
\end{aligned}
\qquad (88) $$

so that (if $\gamma_a$ and $\gamma_k$ commute):

$$
\tilde{\gamma}_k = \gamma_k \cosh^2(\tau/2) - \gamma_a^2 \gamma_k \sinh^2(\tau/2) = \gamma_k (\cosh^2(\tau/2) - \sinh^2(\tau/2)) = \gamma_k
\qquad (89) $$

and if $\gamma_a$ and $\gamma_k$ anticommute:

$$
\begin{aligned}
\tilde{\gamma}_k &= \gamma_k (\cosh^2(\tau/2) + \sinh^2(\tau/2)) + 2\gamma_a \gamma_k \cosh(\tau/2) \sinh(\tau/2) \\
&= \gamma_k \cosh(\tau) + \gamma_a \gamma_k \sinh(\tau),
\end{aligned}
\quad (90) $$

which is equivalent to a boost when the following parametrization of the "rapidity" $\tau$ is used:

$$
\begin{aligned}
\tanh(\tau) &= \beta \\
\sinh(\tau) &= \beta\gamma \\
\cosh(\tau) &= \gamma \\
\gamma &= \frac{1}{\sqrt{1-\beta^2}}
\end{aligned}
\quad (91) $$

A complete survey of these transformations and the (anti-)commutator tables can be found in Reference [32]. (This formalism corresponds exactly to the relativistic invariance of a Dirac spinor in QED as described for instance in Reference [34], although the Dirac theory uses complex numbers and a different sign convention for the metric tensor.) The "spatial" rotations are generated by the bi-vectors associated with $\vec{B}$ and Lorentz boosts by the components associated with $\vec{E}$. The remaining 4 generators of symplectic transformations correspond to $\mathcal{E}$ and $\vec{P}$.
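The boost relations (90) and (91) can also be checked numerically. In this sketch (our own illustration, assuming NumPy; the Kronecker-product construction of the Dirac matrices is repeated for self-containedness) the symmetric generator $\gamma_4 = \gamma_0\gamma_1$ mixes $\gamma_0$ and $\gamma_1$ exactly like energy and momentum under a Lorentz boost:

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
eta1 = np.array([[0., 1.], [ 1., 0.]])
I2, I4 = np.eye(2), np.eye(4)
g0 = np.kron(I2, eta0)
g1 = np.kron(np.diag([-1., 1.]), eta1)

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
tau = np.arctanh(beta)                      # rapidity, Eq. (91)

ga = g0 @ g1                                # gamma_4, symmetric, ga^2 = +1
B  = np.cosh(tau / 2) * I4 + np.sinh(tau / 2) * ga   # Eq. (62)
Bi = np.cosh(tau / 2) * I4 - np.sinh(tau / 2) * ga

# Eq. (90) with the parametrization of Eq. (91)
assert np.allclose(B @ g0 @ Bi, gamma * g0 + beta * gamma * g1)
assert np.allclose(B @ g1 @ Bi, gamma * g1 + beta * gamma * g0)
```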
These four generators were named *phase-rotation* (generated by $\gamma_0$) and *phase-boosts* (generated by $\vec{\gamma} = (\gamma_1, \gamma_2, \gamma_3)$) and have been used, for instance, for symplectic decoupling as described in Reference [33].

It is natural (and already suggested by our notation) to consider the possibility that the EMEQ (Equation (82)) allows us to model a relativistic particle, as represented by energy $\mathcal{E}$ and momentum $\vec{P}$, either in an external electromagnetic field given by $\vec{E}$ and $\vec{B}$ or, alternatively, in an accelerating and/or rotating reference frame, where the elements $\vec{E}$ and $\vec{B}$ correspond to the axes of acceleration and rotation, respectively. We assumed that all components of the state vector $\psi$ are equivalent in meaning and unit. Though we found that the state vector is formally composed of canonical pairs, the units are unchanged and identical for all elements of $\psi$. From Equation (13) we take that the symplex $\mathbf{F}$ (and also $\mathbf{A}$) has the unit of a frequency. If the Hamiltonian $\mathcal{H}$ is supposed to represent energy, then the components of $\psi$ have the unit of the square root of action.

If the coefficients are supposed to represent the electromagnetic field, then we need to express these fields in the unit of frequency. This can be done, but it requires natural conversion factors like $\hbar$, the charge $e$, the velocity $c$ and a mass, for instance the electron mass $m_e$. The magnetic field (for instance) is related to a "cyclotron frequency" $\omega_c$ by $\omega_c = \frac{e B}{m_e}$.

However, according to the rules of the game, the distinction between particle properties and "external" fields requires a reason, an explanation, especially as it is physically meaningless for macroscopic coupled oscillators.
In References [32,33] this nomenclature was used in a merely *formal* way, namely to find a descriptive scheme to order the symplectic generators, so to speak an *equivalent circuit* to describe the general possible coupling terms for two-dimensional coupled linear optics, as required for the description of charged particle beams.

Here we play the reversed modeling game: instead of using the EMEQ as an equivalent circuit to describe ensembles of oscillators, we now use ensembles of oscillators as an equivalent circuit to describe point particles. The motivation for Equation (82) is nevertheless similar, i.e., it follows from the formal structure of the Dirac Clifford algebra. The grouping of the coefficients comes along with the number of vector and bi-vector elements, 4 and 6, respectively. The second criterion is to distinguish between generators of rotations and boosts, i.e., between symmetric and skew-symmetric simplices, which separates energy from momentum and electric from magnetic elements. Thirdly, we note that the even elements (even $k$-vectors are those with even $k = 2m$, where $m$ is a natural number: scalar, bi-vectors, 4-vectors etc.) of even-dimensional Clifford algebras form a sub-algebra. This means that we can generate the complete Clifford algebra from the vector elements by matrix multiplication (this is why we call them generators), but we cannot generate vectors from bi-vectors by multiplication. Therefore the vectors are the particles (which are understood as the sources of fields) and the bi-vectors are the fields, which are generated by the objects and influence their motion. The full Dirac symplex-algebra includes the description of a particle (vector) in a field (bi-vector). But why would the field be *external*?
Simply because it is impossible to generate bi-vectors from a single vector-type object, since any single vector-type object written as $\mathcal{E}\gamma_0 + \vec{P} \cdot \vec{\gamma}$ squares to a scalar. Therefore the fields must be the result of interaction with other particles, and hence we call them "external". This is in some way a "first-order" approach, since there might be higher-order processes that we did not consider yet. But in the linear approach (i.e., for second-order Hamiltonians), this distinction is reasonable and hence a legitimate move in the game.

Besides the Hamiltonian structure (symplices vs. cosymplices) and the Clifford algebraic structure (distinguishing vectors, bi-vectors, tri-vectors etc.) there is a third essential symmetry, which is connected to the real matrix representation of the Dirac algebra and to the fact that it describes the general Hamiltonian motion of coupled oscillators: the distinction of the even from the odd elements with respect to the block-diagonal matrix structure. We used this property in Reference [33] to develop a general geometrical decoupling algorithm (see also Section 6.2).

Now it may appear that we are cheating somehow, as relativity is usually "derived" from the constancy of the speed of light, while in our modeling game we introduced neither spatial notions nor light at all. Instead we directly arrive at notions of quantum electrodynamics (QED). How can this be? The definition of "velocity" within wave mechanics usually involves the dispersion relation of waves, i.e., the velocity of a wave packet is given by the group velocity $\vec{v}_{gr}$ defined by

$$ \vec{v}_{gr} = \vec{\nabla}_{\vec{k}}\, \omega(\vec{k}) \quad (92) $$

and the so-called phase velocity $v_{ph}$ defined by

$$ v_{ph} = \frac{\omega}{k} \quad (93) $$

It is then typically mentioned that the product of these two velocities is a constant, $v_{gr}\, v_{ph} = c^2$.
By the use of the EMEQ and Equation (29), the eigenvalues of $\mathbf{F}$ can be written as:

$$
\begin{aligned}
K_1 &= -\mathrm{Tr}(\mathbf{F}^2)/4 \\
K_2 &= \mathrm{Tr}(\mathbf{F}^4)/16 - K_1^2/4 \\
\omega_1 &= \sqrt{K_1 + 2\sqrt{K_2}} \\
\omega_2 &= \sqrt{K_1 - 2\sqrt{K_2}} \\
\omega_1^2\, \omega_2^2 &= K_1^2 - 4K_2 = \det(\mathbf{F}) \\
K_1 &= \mathcal{E}^2 + \vec{B}^2 - \vec{E}^2 - \vec{P}^2 \\
K_2 &= (\mathcal{E} \vec{B} + \vec{E} \times \vec{P})^2 - (\vec{E} \cdot \vec{B})^2 - (\vec{P} \cdot \vec{B})^2
\end{aligned}
\quad (94) $$

Since symplectic transformations are similarity transformations, they do not alter the eigenvalues of the matrix $\mathbf{F}$, and since all possible evolutions in time (which can be described by the Hamiltonian) are symplectic transformations, the eigenvalues (of closed systems) are conserved. If we consider a "free particle", we obtain from Equation (94):

$$ \omega_{1,2} = \pm \sqrt{\mathcal{E}^2 - \vec{P}^2} \quad (95) $$

As we mentioned before, both energy and momentum have (within this game) the unit of frequencies. If we take into account that $\omega_{1,2} \equiv m$ is fixed, then the dispersion relation for "the energy" $\mathcal{E} = \omega$ is

$$ \mathcal{E} = \omega = \sqrt{m^2 + \vec{P}^2} \quad (96) $$

which is indeed the correct relativistic dispersion. But how do we make the step from pure oscillations to *waves*? (The question whether quantum theory requires Planck's constant $\hbar$ has been answered in the negative by John P. Ralston [35].)

## 6.1. Moments and The Fourier Transform

In case of "classical" probability distribution functions (PDFs) $\phi(x)$ we may use the Taylor terms at the origin of the characteristic function $\tilde{\phi}_x(t) = \langle \exp(itx) \rangle_x$, which is the Fourier transform of $\phi(x)$. The $k$-th moment is then given by

$$ \langle x^k \rangle = (-i)^k\, \tilde{\phi}^{(k)}(0) \quad (97) $$

where $\tilde{\phi}^{(k)}$ is the $k$-th derivative of $\tilde{\phi}_x(t)$.
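Equation (97) can be illustrated with a synthetic sample (our own sketch, assuming NumPy; a standard normal distribution is chosen only because its moments are known): the second moment is recovered from a finite-difference estimate of $\tilde{\phi}''(0)$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200_000)                # samples of a standard normal, <x^2> = 1

def char_fn(t):
    """Empirical characteristic function <exp(itx)>."""
    return np.mean(np.exp(1j * t * x))

h = 1e-2
# second derivative of the characteristic function at 0 (central difference)
d2 = (char_fn(h) - 2 * char_fn(0.0) + char_fn(-h)) / h**2
second_moment = ((-1j)**2 * d2).real        # <x^2> = (-i)^2 phi''(0), Eq. (97)
assert abs(second_moment - 1.0) < 0.05
```

The three evaluations share the same sample, so the finite-difference noise largely cancels.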
A similar method would be of interest for our modeling game. Since a (phase space) density is positive definite, we can always take the square root of the density instead of the density itself: $\phi = \sqrt{\rho}$. The square root can also be defined to be a complex function, so that the density is $\rho = \phi\phi^* = \|\phi\|^2$ and, if mathematically well-defined (convergent), we can also define the Fourier transform of the complex root, i.e.,

$$ \tilde{\phi}(\omega, \vec{k}) = N \int \phi(t, \vec{x}) \exp(i\omega t - i\vec{k}\cdot\vec{x})\, dt\, d^3x \quad (98) $$

and vice versa:

$$ \phi(t, \vec{x}) = \tilde{N} \int \tilde{\phi}(\omega, \vec{k}) \exp(-i\omega t + i\vec{k}\cdot\vec{x})\, d\omega\, d^3k \quad (99) $$

In principle, we may *define* the density not only by real and imaginary part, but by an arbitrary number of components. Thus, if we consider a four-component spinor, we may of course mathematically define its Fourier transform. But in order to see why this might be more than a mathematical "trick", namely *physically meaningful*, we need to go back to the notions of classical statistical mechanics. Consider that we replace the single state vector by an "ensemble", where we leave open the question whether the ensemble should be understood as a single phase space trajectory, averaged over time, or as some (presumably large) number of different trajectories. It is well known that the phase space density $\rho(\psi)$ is stationary if it depends only on constants of motion, for instance if it depends only on the Hamiltonian itself. With the Hamiltonian of Equation (12), the density could for example have the form

$$ \rho(H) \propto \exp(-\beta H) = \exp(-\beta\, \psi^T \mathbf{A}\, \psi / 2) \quad (100) $$

which corresponds to a multivariate Gaussian.
But more important is the insight that the density exclusively depends on the second moments of the phase space variables as given by the Hamiltonian, i.e., in case of a "free particle" it depends on $\mathcal{E}$ and $\vec{P}$. And therefore we should be able to use energy and momentum as frequency $\omega$ and wave-vector $\vec{k}$.

But there are more indications in our modeling game that suggest the use of a Fourier transform, as we will show in the next section.

## 6.2. The Geometry of (De-)Coupling

In the following we give a (very) brief summary of Reference [33]. As already mentioned, decoupling is, despite the use of the EMEQ, first of all meant in a purely technical-mathematical sense. Let us delay the question whether the notions that we define in the following have any physical relevance. Here we refer first of all to block-diagonalization, i.e., we treat the symplex $\mathbf{F}$ just as a "Hamiltonian" matrix. From the definition of the real Dirac matrices we obtain $\mathbf{F}$ in explicit $4 \times 4$ matrix form as the sum of a field part and a mechanical part:

$$
\mathbf{F} =
\begin{pmatrix}
-E_x & E_z + B_y & E_y - B_z & B_x \\
E_z - B_y & E_x & -B_x & -E_y - B_z \\
E_y + B_z & B_x & E_x & E_z - B_y \\
-B_x & -E_y + B_z & E_z + B_y & -E_x
\end{pmatrix}
+
\begin{pmatrix}
-P_z & \mathcal{E} - P_x & 0 & P_y \\
-\mathcal{E} - P_x & P_z & P_y & 0 \\
0 & P_y & -P_z & \mathcal{E} + P_x \\
P_y & 0 & -\mathcal{E} + P_x & P_z
\end{pmatrix}
\tag{101}
$$

If we find a (sequence of) symplectic similarity transformations that would allow us to reduce the $4 \times 4$-form to a block-diagonal form, then we would obtain two separate systems of size $2 \times 2$ and we could continue with the transformations of Section 5.1.

Inspection of Equation (101) unveils that $\mathbf{F}$ is block-diagonal if the coefficients $E_y$, $P_y$, $B_x$ and $B_z$ vanish. Obviously this implies that $\vec{E} \cdot \vec{B} = 0$ and $\vec{P} \cdot \vec{B} = 0$.
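This can be confirmed numerically. The sketch below (our own illustration, assuming NumPy; the Dirac matrices are rebuilt via Kronecker products and the bi-vectors follow the numbering of Equation (80)) assembles $\mathbf{F}$ with $E_y = P_y = B_x = B_z = 0$ and checks both the block structure and, as a by-product, the invariants of Equation (94):

```python
import numpy as np

eta0 = np.array([[0., 1.], [-1., 0.]])
eta1 = np.array([[0., 1.], [ 1., 0.]])
I2 = np.eye(2)
g0 = np.kron(I2, eta0)
g1 = np.kron(np.diag([-1., 1.]), eta1)
g2 = np.kron(eta1, eta1)
g3 = np.kron(I2, np.diag([-1., 1.]))
gv = [g1, g2, g3]
bivE = [g0 @ g for g in gv]          # gamma_4..6 (electric, symmetric)
bivB = [g2 @ g3, g3 @ g1, g1 @ g2]   # gamma_7..9 (magnetic, skew-symmetric)

# EMEQ coefficients with E_y = P_y = B_x = B_z = 0
eps = 1.3
P = np.array([0.4, 0.0, -0.2])
E = np.array([0.7, 0.0,  0.5])
B = np.array([0.0, 0.9,  0.0])

F = eps * g0
F = F + sum(P[i] * gv[i] for i in range(3))
F = F + sum(E[i] * bivE[i] for i in range(3))
F = F + sum(B[i] * bivB[i] for i in range(3))

# off-diagonal 2x2 blocks vanish: two decoupled one-DOF systems remain
assert np.allclose(F[:2, 2:], 0) and np.allclose(F[2:, :2], 0)
assert np.isclose(E @ B, 0) and np.isclose(P @ B, 0)

# by-product: the trace-based invariants of Eq. (94) match the EMEQ expressions
K1 = -np.trace(F @ F) / 4
K2 = np.trace(F @ F @ F @ F) / 16 - K1**2 / 4
assert np.isclose(K1, eps**2 + B @ B - E @ E - P @ P)
w = eps * B + np.cross(E, P)
assert np.isclose(K2, w @ w - (E @ B)**2 - (P @ B)**2)
```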
Or vice versa, if we find a symplectic method that transforms into a system in which $\vec{E} \cdot \vec{B} = 0$ and $\vec{P} \cdot \vec{B} = 0$, then we only need to apply appropriate rotations to achieve block-diagonal form. As shown in Reference [33] this can be done in different ways, but in general it requires the use of the “phase rotation” $\gamma_0$ and the “phase boosts” $\tilde{\gamma}$. Within the conceptual framework of our game, the application of these transformations equals the use of “matter fields”. Furthermore, this shows that block-diagonalization also has geometric significance within the Dirac algebra: with respect to the Fourier transformation, the requirement $\vec{P} \cdot \vec{B} = 0$ indicates a divergence-free magnetic field, as the replacement of $\vec{P}$ by $\vec{\nabla}$ yields $\vec{\nabla} \cdot \vec{B} = 0$. The additional requirement $\vec{E} \cdot \vec{B} = 0$ also fits well with our physical picture of e.m. waves. Note furthermore that there is no analogous requirement to make $\vec{P} \cdot \vec{E}$ equal to zero; thus (within this analogy) we can accept $\vec{\nabla} \cdot \vec{E} \neq 0$.

But this is not everything to be taken from this method. If we analyze in more detail which expressions are required to vanish and which may remain, then it appears that the products $\vec{P} \cdot \vec{B}$, $\vec{E} \cdot \vec{B}$ and $\vec{P} \cdot \vec{E}$ are explicitly given by

$$
\begin{align*}
P_x B_x \gamma_1 \gamma_2 \gamma_3 + P_y B_y \gamma_2 \gamma_3 \gamma_1 + P_z B_z \gamma_3 \gamma_1 \gamma_2 &= (\vec{P} \cdot \vec{B}) \gamma_{10} \\
E_x B_x \gamma_4 \gamma_2 \gamma_3 + E_y B_y \gamma_5 \gamma_3 \gamma_1 + E_z B_z \gamma_6 \gamma_1 \gamma_2 &= (\vec{E} \cdot \vec{B}) \gamma_{14} \\
P_x E_x \gamma_1 \gamma_4 \gamma_3 + P_y E_y \gamma_2 \gamma_5 \gamma_1 + P_z E_z \gamma_3 \gamma_6 \gamma_2 &= -(\vec{P} \cdot \vec{E}) \gamma_0
\end{align*}
\tag{102}
$$

That means that exactly those products have to vanish which yield *cosymplices*.
This can be interpreted via the structure-preserving properties of symplectic motion. Since within our game the particle *type* can only be represented by the structure of the dynamics, and since electromagnetic processes do not change the type of a particle, they are quite obviously *structure preserving*, which implies the non-appearance of cosymplices. In other words: electromagnetism is of Hamiltonian nature. We will come back to this point in Section 6.4.

## 6.3. The Lorentz Force

In the previous section we established the distinction between the “mechanical” elements $\mathbf{P} = \varepsilon\, \gamma_0 + \vec{\gamma} \cdot \vec{P}$ of the general matrix $\mathbf{F}$ and the electrodynamical elements $\mathbf{F} = \gamma_0\, \vec{\gamma} \cdot \vec{E} + \gamma_{14}\, \gamma_0\, \vec{\gamma} \cdot \vec{B}$. Since the matrix $\mathbf{S} = \Sigma \gamma_0$ is a symplex, let us assume it to be equal to $\mathbf{P}$ and apply Equation (25). We then find (with the appropriate relative scaling between $\mathbf{P}$ and $\mathbf{F}$ as explained above):

$$
\frac{d\mathbf{P}}{d\tau} = \dot{\mathbf{P}} = \frac{q}{2m} (\mathbf{F}\mathbf{P} - \mathbf{P}\mathbf{F}) \quad (103)
$$

which, written with the coefficients of the real Dirac matrices, yields:

$$
\begin{align}
\frac{d\mathcal{E}}{d\tau} &= \frac{q}{m} \vec{P} \cdot \vec{E} \\
\frac{d\vec{P}}{d\tau} &= \frac{q}{m} (\varepsilon \vec{E} + \vec{P} \times \vec{B})
\end{align}
\tag{104}
$$

where $\tau$ is the proper time. If we convert to the lab frame time $t$ using $d\tau = dt/\gamma$, Equation (103) yields (setting $c = 1$):

$$
\begin{align*}
\gamma \frac{d\mathcal{E}}{dt} &= q \gamma\, \vec{v} \cdot \vec{E} \\
\gamma \frac{d\vec{P}}{dt} &= \frac{q}{m} (m \gamma \vec{E} + m \gamma\, \vec{v} \times \vec{B}) \tag{105} \\
\frac{d\mathcal{E}}{dt} &= q\, \vec{v} \cdot \vec{E} \\
\frac{d\vec{P}}{dt} &= q (\vec{E} + \vec{v} \times \vec{B})
\end{align*}
$$

which is the Lorentz force.
Therefore the Lorentz force acting on a charged particle in 3 spatial dimensions can be modeled by an ensemble of 2-dimensional CHOs. The isomorphism between the observables of the perceived 3-dimensional world and the second moments of density distributions in the phase space of 2-dimensional oscillators is remarkable.

In any case, Equation (103) clarifies three things within the game: firstly, that both energy $\mathcal{E}$ and momentum $\vec{P}$ have to be interpreted as mechanical energy and momentum (and not canonical); secondly, that the relative normalization between fields and mechanical momentum is fixed; and, last but not least, it clarifies the relation between the time related to mass (the proper time) and the time related to $\gamma_0$ and energy, which appears to be the laboratory time.

## 6.4. The Maxwell Equations

As we already pointed out, waves are (within this game) the result of a Fourier transformation (FT). But there are different ways to argue this. In Reference [16] we argued that Maxwell’s equations can be derived within our framework by (a) the postulate that space-time emerges from interaction, i.e., that the fields $\vec{E}$ and $\vec{B}$ have to be constructed from the 4-vectors $\mathbf{X} = t\,\gamma_0 + \vec{x}\cdot\vec{\gamma}$, $\mathbf{J} = \rho\gamma_0 + \vec{j}\cdot\vec{\gamma}$ and $\mathbf{A} = \Phi\gamma_0 + \vec{A}\cdot\vec{\gamma}$, combined with (b) the requirement that no cosymplices emerge. But we can also argue with the FT of the density (see Section 6.1).
If we introduce the 4-derivative

$$
\partial = -\partial_t \gamma_0 + \partial_x \gamma_1 + \partial_y \gamma_2 + \partial_z \gamma_3
\quad (106)
$$

then the non-abelian nature of matrix multiplication requires us to distinguish differential operators acting to the right from those acting to the left, i.e., we have $\overrightarrow{\partial}$ as defined in Equation (106) and $\overleftarrow{\partial}$, which is written to the right of the operand (thus indicating the order of the matrix multiplication), so that

$$
\begin{equation}
\begin{aligned}
\mathbf{H}\overleftarrow{\partial} &\equiv -\partial_t \mathbf{H} \gamma_0 + \partial_x \mathbf{H} \gamma_1 + \partial_y \mathbf{H} \gamma_2 + \partial_z \mathbf{H} \gamma_3 \\
\overrightarrow{\partial} \mathbf{H} &\equiv -\gamma_0 \partial_t \mathbf{H} + \gamma_1 \partial_x \mathbf{H} + \gamma_2 \partial_y \mathbf{H} + \gamma_3 \partial_z \mathbf{H}
\end{aligned}
\tag{107}
\end{equation}
$$

Then we find the following general rules (see Equation (35)) that prevent non-zero cosymplices:

$$
\begin{align*}
& \frac{1}{2} \left( \overrightarrow{\partial}\, \text{vector} - \text{vector}\, \overleftarrow{\partial} \right) &&\Rightarrow && \text{bi-vector} \\
& \frac{1}{2} \left( \overrightarrow{\partial}\, \text{bi-vector} - \text{bi-vector}\, \overleftarrow{\partial} \right) &&\Rightarrow && \text{vector} \\
& \frac{1}{2} \left( \overrightarrow{\partial}\, \text{bi-vector} + \text{bi-vector}\, \overleftarrow{\partial} \right) &&\Rightarrow && \text{axial vector } = 0 \\
& \frac{1}{2} \left( \overrightarrow{\partial}\, \text{vector} + \text{vector}\, \overleftarrow{\partial} \right) &&\Rightarrow && \text{scalar } = 0
\end{align*}
\tag{108}
$$

Application of these derivatives yields:

$$
\begin{align*}
\mathbf{F} &= \frac{1}{2} \left( \overrightarrow{\partial} \mathbf{A} - \mathbf{A} \overleftarrow{\partial} \right) \\
4\pi \mathbf{J} &= \frac{1}{2} \left( \overrightarrow{\partial} \mathbf{F} - \mathbf{F} \overleftarrow{\partial}
\right) \\
0 &= \frac{1}{2} \left( \overrightarrow{\partial} \mathbf{F} + \mathbf{F} \overleftarrow{\partial} \right) \\
0 &= \frac{1}{2} \left( \overrightarrow{\partial} \mathbf{A} + \mathbf{A} \overleftarrow{\partial} \right) \\
0 &= \frac{1}{2} \left( \overrightarrow{\partial} \mathbf{J} + \mathbf{J} \overleftarrow{\partial} \right)
\end{align*}
\tag{109}
$$

The first row of Equation (109) corresponds to the usual definition of the bi-vector fields from a vector potential $\mathbf{A}$ and is, written by components, given by

$$
\begin{align}
\vec{E} &= -\vec{\nabla}\Phi - \partial_t \vec{A} \\
\vec{B} &= \vec{\nabla} \times \vec{A}
\end{align}
\tag{110}
$$

The second row of Equation (109) corresponds to the usual definition of the 4-current $\mathbf{J}$ as the source of the fields, and the last three rows just express the impossibility of the appearance of cosymplices. They explicitly represent the homogeneous Maxwell equations

$$
\begin{align}
\vec{\nabla} \cdot \vec{B} &= 0 \\
\vec{\nabla} \times \vec{E} + \partial_t \vec{B} &= 0
\end{align}
\tag{111}
$$

the continuity equation

$$
\partial_t \rho + \vec{\nabla} \cdot \vec{j} = 0
\quad
(112)
$$

and the so-called “Lorentz gauge”

$$
\partial_t \Phi + \vec{\nabla} \cdot \vec{A} = 0
\quad
(113)
$$

The simplest idea about the 4-current within QED is to assume that it is proportional to the “probability current”, which is within our game given by the vector components of $\mathbf{S} = \Sigma \gamma_0$.

## 7.
The Phase Space

Up to now, our modeling game referred to the second moments, and the elements of $\mathbf{S}$ are second moments, such that the observables are given by (averages over) the following quadratic forms:

$$
\begin{align*}
\mathcal{E} &\propto \psi^T \psi = q_1^2 + p_1^2 + q_2^2 + p_2^2 \\
p_x &\propto -q_1^2 + p_1^2 + q_2^2 - p_2^2 \\
p_y &\propto 2(q_1 q_2 - p_1 p_2) \\
p_z &\propto 2(q_1 p_1 + q_2 p_2) \\
E_x &\propto 2(q_1 p_1 - q_2 p_2) \\
E_y &\propto -2(q_1 p_2 + q_2 p_1) \\
E_z &\propto q_1^2 - p_1^2 + q_2^2 - p_2^2 \\
B_x &\propto 2(q_1 q_2 + p_1 p_2) \\
B_y &\propto q_1^2 + p_1^2 - q_2^2 - p_2^2 \\
B_z &\propto 2(q_1 p_2 - p_1 q_2)
\end{align*}
\tag{114}
$$

If we analyze the real Dirac matrix coefficients of $\mathbf{S} = \psi \psi^T \gamma_0$ in terms of the EMEQ and evaluate the quadratic relations between those coefficients, then we obtain:

$$
\begin{align*}
\vec{P}^2 &= \vec{E}^2 = \vec{B}^2 = \varepsilon^2 \\
0 &= \vec{E}^2 - \vec{B}^2 \\
\varepsilon^2 &= \frac{1}{2}(\vec{E}^2 + \vec{B}^2) \\
\varepsilon \vec{P} &= \vec{E} \times \vec{B} \\
\varepsilon^3 &= \vec{P} \cdot (\vec{E} \times \vec{B}) \\
m^2 &\propto \varepsilon^2 - \vec{P}^2 = 0 \\
\vec{P} \cdot \vec{E} &= \vec{E} \cdot \vec{B} = \vec{P} \cdot \vec{B} = 0
\end{align*}
\tag{115}
$$

Apart from a missing renormalization, these equations describe an object without mass but with the geometric properties of light as described by electrodynamics, i.e., the defining properties of electromagnetic waves: $\vec{E} \cdot \vec{B} = 0$, $\vec{P} \propto \vec{E} \times \vec{B}$, $\vec{E}^2 = \vec{B}^2$ and so on. Hence single spinors are light-like and cannot represent massive particles.

Consider the spinor as a vector in a four-dimensional Euclidean space.
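The quadratic relations above follow identically from the forms of Equation (114); a minimal numerical check for an arbitrary spinor $(q_1, p_1, q_2, p_2)$ (illustrative, with a fixed random seed):

```python
import numpy as np

# Observables of Equation (114) from a random spinor psi = (q1, p1, q2, p2)
rng = np.random.default_rng(7)
q1, p1, q2, p2 = rng.normal(size=4)

eps = q1**2 + p1**2 + q2**2 + p2**2
P = np.array([-q1**2 + p1**2 + q2**2 - p2**2,
              2 * (q1*q2 - p1*p2),
              2 * (q1*p1 + q2*p2)])
E = np.array([2 * (q1*p1 - q2*p2),
              -2 * (q1*p2 + q2*p1),
              q1**2 - p1**2 + q2**2 - p2**2])
B = np.array([2 * (q1*q2 + p1*p2),
              q1**2 + p1**2 - q2**2 - p2**2,
              2 * (q1*p2 - p1*q2)])
```

All identities ($\vec{P}^2 = \vec{E}^2 = \vec{B}^2 = \varepsilon^2$, $\varepsilon\vec{P} = \vec{E}\times\vec{B}$, vanishing mixed products) hold to machine precision for any spinor, confirming that a single spinor is light-like.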
We write the symmetric matrix $\mathcal{A}$ (or $\Sigma$, respectively) as a product in the form of a Gramian:

$$
\mathcal{A} = \mathcal{B}^T \mathcal{B} \tag{116}
$$

or, componentwise:

$$
\begin{align}
\mathcal{A}_{ij} &= \sum_k (\mathcal{B}^T)_{ik} \mathcal{B}_{kj} \nonumber \\
&= \sum_k \mathcal{B}_{ki} \mathcal{B}_{kj} \tag{117}
\end{align}
$$

The last line can be read such that the matrix element $\mathcal{A}_{ij}$ is the conventional 4-dimensional scalar product of column vector $\mathcal{B}_i$ with column vector $\mathcal{B}_j$.

From linear algebra we know that Equation (116) yields a non-singular matrix $\mathcal{A}$ iff the column vectors of the matrix $\mathcal{B}$ are linearly independent. In the orthonormal case, the matrix $\mathcal{A}$ is simply the purest form of a non-singular matrix, i.e., the unit matrix. Hence, if we want to construct a massive object from spinors, we need several spinors to fill the columns of $\mathcal{B}$. The simplest case is the orthogonal one: the combination of four mutually orthogonal vectors. Given a general 4-component Hamiltonian spinor $\psi = (q_1, p_1, q_2, p_2)$, how do we find a spinor that is orthogonal to this one? In 3 (i.e., odd) space dimensions, we know that there are two vectors that are perpendicular to any vector $(x, y, z)^T$, but without fixing the first vector, we cannot define the others. In even dimensions this is different: it suffices to find a non-singular skew-symmetric matrix like $\gamma_0$ to generate a vector that is orthogonal to $\psi$, namely $\gamma_0 \psi$. As in Equation (3), it is the skew-symmetry of the matrix that ensures the orthogonality. A third vector $\gamma_k \psi$ must then be orthogonal to both $\psi$ and $\gamma_0 \psi$: $\gamma_k$ must be skew-symmetric and it must hold that $\psi^T \gamma_k^T \gamma_0 \psi = 0$.
This means that the product $\gamma_k^T \gamma_0$ must also be skew-symmetric and hence that $\gamma_k$ must anti-commute with $\gamma_0$:

$$
\begin{align}
(\gamma_k^T \gamma_0)^T = \gamma_0^T \gamma_k &= -\gamma_k^T \gamma_0 \nonumber \\
\Rightarrow \quad 0 &= \gamma_0^T \gamma_k + \gamma_k^T \gamma_0 \tag{118} \\
\Rightarrow \quad 0 &= \gamma_0 \gamma_k + \gamma_k \gamma_0 \nonumber
\end{align}
$$

Now let us for a moment return to the question of dimensionality. There are in general $2n(2n-1)/2$ non-zero independent elements in a skew-symmetric square $2n \times 2n$ matrix. But how many matrices are there in the considered phase space dimensions, i.e., in $1+1$, $3+1$ and $9+1$ (etc.) dimensions, which anti-commute with $\gamma_0$? We need at least $2n-1$ skew-symmetric anti-commuting elements to obtain a diagonal $\mathcal{A}$. However, this implies at least $N-1$ anticommuting elements of the Clifford algebra that square to $-\mathbf{1}$. Hence the ideal case is $2n=N$, which is only true for the Pauli and Dirac algebras. For the Pauli algebra, there is one skew-symmetric element, namely $\eta_0$. In the Dirac algebra there are 6 skew-symmetric generators that contain two sets of mutually anti-commuting skew-symmetric matrices: $\gamma_0, \gamma_{10}$ and $\gamma_{14}$ on the one hand and $\gamma_7, \gamma_8$ and $\gamma_9$ on the other hand. The next considered Clifford algebra, with $N = 9+1$ dimensions, requires a representation by $2n = 32 = \sqrt{2}^{10}$-dimensional real matrices. Hence these matrices may not represent a Clifford algebra with more than 10 unit elements, and certainly not $2n$ of them. Hence, we cannot use the algebra to generate purely massive objects (e.g., diagonal matrices) without further restrictions (i.e., projections) of the spinor $\psi$.

But what exactly does this mean? Of course we can easily find 32 linearly independent spinors to generate an orthogonal matrix $B$. So what exactly is special about the Pauli and Dirac algebras?
To see this, we need to understand what it means that we can use the matrix $B$ of mutually orthogonal column-spinors

$$ B = (\psi, \gamma_0 \psi, \gamma_{10} \psi, \gamma_{14} \psi) \tag{119} $$

This form implies that we can define the *mass* of the “particle” algebraically, and since we have $N-1=3$ anticommuting skew-symmetric matrices in the Dirac algebra, we can find a multispinor $B$ for any arbitrary point in phase space. This does not seem sensational at first sight, since it appears to be a property of any Euclidean space. The importance comes from the fact that $\psi$ is a “point” in a very special space: a point in phase space. In fact, we will argue in the following that this possibility to factorize $\psi$ and the density $\rho$ is anything but self-evident.

If we want to simulate a phase space distribution, we can either define a phase space density $\rho(\psi)$ or we use the technique of Monte-Carlo simulations and represent the phase space by (a huge number of random) samples. If we generate a random sample and would like to implement a certain exact symmetry of the density in phase space, then we would (for instance) form a symmetric sample by appending not only a column-vector $\psi$ to $B$, but also its negative $-\psi$. In this way we obtain a sample with an exact symmetry. In a more general sense: if a phase space symmetry can be represented by a skew-symmetric matrix $\gamma_s$ that allows us to associate to an arbitrary phase space point $\psi$ a second point $\gamma_s \psi$, then we have a certain continuous linear rotational symmetry in this phase space. As we have shown, phase spaces are intrinsically structured by $\gamma_0$ and are insofar much more restricted than Euclidean spaces. This is due to the distinction of symplectic from non-symplectic transformations and due to the intrinsic relation to Clifford algebras: phase spaces are spaces structured by time.
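The claim that a multispinor $B$ with mutually orthogonal columns exists for *any* phase space point can be illustrated with the quaternion left-multiplication matrices: three mutually anticommuting, skew-symmetric matrices squaring to $-\mathbf{1}$ (used here as stand-ins for $\gamma_0$, $\gamma_{10}$, $\gamma_{14}$, whose explicit real form is not reproduced in this section):

```python
import numpy as np

# Quaternion left-multiplication matrices: mutually anticommuting,
# skew-symmetric, squaring to -1 (stand-ins for gamma_0, gamma_10, gamma_14).
Li = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
Lj = np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float)
Lk = np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], float)

rng = np.random.default_rng(3)
psi = rng.normal(size=4)            # an arbitrary phase-space point

# Analogue of the multispinor (119): four mutually orthogonal columns
B = np.column_stack([psi, Li @ psi, Lj @ psi, Lk @ psi])
gram = B.T @ B                      # = (psi^T psi) * identity, for ANY psi
```

The Gramian is proportional to the unit matrix for every $\psi$, which is exactly the "self-matching" property discussed here.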
Within our game, the phase space is the only possible fundamental space.

We may imprint the mentioned symmetry on an arbitrary phase space density $\rho$ by taking all phase space samples that we have so far and adding the same number of samples, each column multiplied by $\gamma_s$. Thus, we have a single rotation in the Pauli algebra and two of them in the Dirac algebra:

$$
\begin{aligned}
B_0 &= \psi \\
\gamma_0 &\rightarrow B_1 = (\psi, \gamma_0 \psi) \\
\gamma_{14} &\rightarrow B_2 = (\psi, \gamma_0 \psi, \gamma_{14} \psi, \gamma_{14} \gamma_0 \psi) \\
&= (\psi, \gamma_0 \psi, \gamma_{14} \psi, \gamma_{10} \psi)
\end{aligned}
\tag{120} $$

or:

$$
\begin{aligned}
B_0 &= \psi \\
\gamma_7 &\rightarrow B_1 = (\psi, \gamma_7 \psi) \\
\gamma_8 &\rightarrow B_2 = (\psi, \gamma_7 \psi, \gamma_8 \psi, \gamma_8 \gamma_7 \psi) \\
&= (\psi, \gamma_7 \psi, \gamma_8 \psi, -\gamma_9 \psi)
\end{aligned}
\tag{121} $$

Note that order and sign of the column-vectors in $B$ are irrelevant, at least with respect to the autocorrelation matrix $BB^T$. Thus we find that there are two fundamental ways to represent a positive mass in the Dirac algebra and one in the Pauli algebra. The 4-dimensional phase space of the Dirac algebra is self-matched in two independent ways.

Our starting point was the statement that $2n$ linearly independent vectors are needed to generate mass. If we cannot find $2n$ vectors in the way described above for the Pauli and Dirac algebras, then this does (of course) not automatically imply that there are no $2n$ linearly independent vectors at all.

But what does it mean that the dimension of the Clifford algebra of observables ($N$) does not match the dimension of the phase space ($2n$) in higher dimensions? Different physical descriptions can be given. Classically we would say that a positive definite $2n$-component spinor describes a system of $n$ (potentially) coupled oscillators with $n$ frequencies.
If $B$ is orthogonal, then all oscillators have the same frequency, i.e., the system is degenerate. But for $n > 2$ we find that not all eigenmodes can involve the complete $2n$-dimensional phase space. This phenomenon is already known in 3 dimensions: the trajectory of the isotropic three-dimensional oscillator always lies in a 2-dimensional plane, i.e., in a subspace. If it did not, the angular momentum would not be conserved and the isotropy of space would be broken. Hence one may say in some sense that the *isotropy of space* is the reason for a 4-dimensional phase space and hence the reason for the 3 + 1-dimensional observable space-time of objects. Or in other words: higher-dimensional spaces are incompatible with isotropy, i.e., with the conservation of angular momentum. There is an intimate connection of these findings to the impossibility of Clifford algebras $Cl_{p,1}$ with $p > 3$ to create a homogeneous "Euclidean" space. Let $\gamma_0$ represent time and $\gamma_k$ with $k \in [1, \ldots, N-1]$ the spatial coordinates. The spatial rotators are products of two spatial basis vectors; the generator of rotations in the (1,2)-plane is $\gamma_1 \gamma_2$. Then we have 6 rotators in 4 "spatial" dimensions:

$$ \gamma_1 \gamma_2, \ \gamma_1 \gamma_3, \ \gamma_1 \gamma_4, \ \gamma_2 \gamma_3, \ \gamma_2 \gamma_4, \ \gamma_3 \gamma_4 \qquad (122) $$

However, we find that some of these generators commute while others anticommute, and it can be taken from combinatorics that only sets of 3 mutually anti-commuting rotators can be formed from a set of symmetric anti-commuting $\gamma_k$. The 3 rotators

$$ \gamma_1 \gamma_2, \ \gamma_2 \gamma_3, \ \gamma_1 \gamma_3 \qquad (123) $$

mutually anticommute, but $\gamma_1 \gamma_2$ and $\gamma_3 \gamma_4$ commute.
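This commutation pattern does not depend on the representation; it can be checked with any four mutually anticommuting basis elements, here built (as an assumed stand-in representation) from Pauli-matrix tensor products:

```python
import numpy as np

# Four mutually anticommuting basis elements squaring to +1,
# constructed from Pauli matrices via Kronecker products.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g1 = np.kron(sx, sx)
g2 = np.kron(sx, sy)
g3 = np.kron(sx, sz)
g4 = np.kron(sy, np.eye(2))

# rotators of Equations (122)/(123)
R12, R23, R13, R34 = g1 @ g2, g2 @ g3, g1 @ g3, g3 @ g4
```

Numerically, $\gamma_1\gamma_2$, $\gamma_2\gamma_3$ and $\gamma_1\gamma_3$ mutually anticommute while $\gamma_1\gamma_2$ and $\gamma_3\gamma_4$ commute, as stated.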
Furthermore, in 9 + 1 dimensions, the spinors are either projections into 4-dimensional subspaces or there are non-zero off-diagonal terms in $\mathcal{A}$, i.e., there is "internal interaction".

Another way to express the above considerations is the following: only in 4 phase space dimensions may we construct a massive object from a matrix $B$ that represents a multispinor $\Psi$ of exactly $N = 2n$ single spinors and construct a wave-function according to

$$ \Psi = \phi B \qquad (124) $$

where $\rho = \phi^2$ is the phase space density.

It is easy to prove, and has been shown in Reference [16], that the elements $\gamma_0, \gamma_{10}$ and $\gamma_{14}$ represent parity, time reversal and charge conjugation. The combination of these operators to form a multispinor may lead (with normalization) to the construction of symplectic matrices $M$. Some examples are:

$$
\begin{aligned}
\mathbf{M} &= (\mathbf{1}\psi, \gamma_0\psi, -\gamma_{14}\psi, -\gamma_{10}\psi)/\sqrt{\psi^T\psi}, &\quad \mathbf{M} \gamma_0 \mathbf{M}^T &= \gamma_0 \\
\mathbf{M} &= (\mathbf{1}\psi, -\gamma_{14}\psi, -\gamma_{10}\psi, \gamma_0\psi)/\sqrt{\psi^T\psi}, &\quad \mathbf{M} \gamma_{10} \mathbf{M}^T &= \gamma_{10} \\
\mathbf{M} &= (\gamma_{10}\psi, -\mathbf{1}\psi, -\gamma_{14}\psi, \gamma_0\psi)/\sqrt{\psi^T\psi}, &\quad \mathbf{M} \gamma_{14} \mathbf{M}^T &= \gamma_{14}
\end{aligned}
\tag{125}
$$

Hence the combination of the identity and the CPT-operators can be arranged such that the multispinor $\mathbf{M}$ is symplectic with respect to the directions of time $\gamma_0$, $\gamma_{10}$ and $\gamma_{14}$, but not with respect to $\gamma_7$, $\gamma_8$ or $\gamma_9$. As we tried to explain, the specific choice of the skew-symmetric matrix $\gamma_0$ is determined by a structure-defining transformation. Since particles are nothing but dynamical structures in this game, the 6 possible SUMs should stand for 6 different particle types. However, for each direction of time, there are also two choices of the spatial axes. For $\gamma_0$ we have chosen $\gamma_1$, $\gamma_2$ and $\gamma_3$, but we could have used $\gamma_4 = \gamma_0\gamma_1$, $\gamma_5 = \gamma_0\gamma_2$ and $\gamma_6 = \gamma_0\gamma_3$ as well.
Thus, there should be either 6 or 12 different types of structures (types of fermions) that can be constructed within the Dirac algebra. The above construction allows for three different types corresponding to three different forms of the symplectic unit matrix; three further types are expected to be related to $\gamma_7$, $\gamma_8$ and $\gamma_9$:

$$
\begin{equation}
\begin{aligned}
\mathbf{M} &= (\mathbf{1}\psi, -\gamma_9\psi, -\gamma_8\psi, -\gamma_7\psi) / \sqrt{\psi^T\psi}, &\quad \mathbf{M}\gamma_7 \mathbf{M}^T &= \gamma_7 \\
\mathbf{M} &= (\mathbf{1}\psi, -\gamma_8\psi, -\gamma_7\psi, -\gamma_9\psi) / \sqrt{\psi^T\psi}, &\quad \mathbf{M}\gamma_8 \mathbf{M}^T &= \gamma_8 \\
\mathbf{M} &= (\gamma_7 \psi, -\mathbf{1} \psi, -\gamma_8 \psi, -\gamma_9 \psi) / \sqrt{\psi^T \psi}, &\quad \mathbf{M} \gamma_9 \mathbf{M}^T &= \gamma_9
\end{aligned}
\tag{126}
\end{equation}
$$

These matrices describe specific symmetries of the 4-dimensional phase space, i.e., geometric objects in phase space. Therefore massive multispinors can be described as volumes in phase space.
If we deform the figure by stretching parameters $a$, $b$, $c$, $d$ such that

$$
\tilde{\mathbf{M}} = (a \mathbf{1} \psi, -b \gamma_0 \psi, -c \gamma_{14} \psi, -d \gamma_{10} \psi) / \sqrt{\psi^T \psi} \quad (127)
$$

then one obtains, with $f_k$ taken from Equation (114):

$$
\begin{align*}
\tilde{\mathbf{M}} \tilde{\mathbf{M}}^T \gamma_0 &= \sum_{k=0}^{9} g_k f_k \gamma_k / \sqrt{\psi^T \psi} \\
g_0 &= a^2 + b^2 + c^2 + d^2 \\
g_1 &= -g_2 = g_3 = a^2 - b^2 + c^2 - d^2 \\
g_4 &= -g_5 = g_6 = a^2 - b^2 - c^2 + d^2 \\
g_7 &= g_8 = g_9 = a^2 + b^2 - c^2 - d^2
\end{align*}
\tag{128}
$$

This result reproduces the quadratic forms $f_k$ of Equation (114), but furthermore the phase space radii $a, b, c$ and $d$ reproduce the structure of the Clifford algebra, i.e., the classification into the 4 types of observables $\mathcal{E}$, $\vec{P}$, $\vec{E}$ and $\vec{B}$. This means that a deformation of the phase space “unit cell” represents momenta and fields, i.e., the dimensions of the phase space unit cell are related to the appearance of certain symplices:

$$
\begin{align*}
(a = b) \text{ AND } (c = d) &\Rightarrow \vec{P} = \vec{E} = 0 \\
(a = c) \text{ AND } (b = d) &\Rightarrow \vec{E} = \vec{B} = 0 \\
(a = d) \text{ AND } (b = c) &\Rightarrow \vec{P} = \vec{B} = 0
\end{align*}
\tag{129}
$$

while for $a = b = c = d$ all vectors vanish and only $\mathcal{E}$ remains. Only in this latter case, and for $a=b=c=d=1$, is the matrix $\mathbf{M}$ symplectic. These relations confirm the intrinsic connection between a classical 4-dimensional Hamiltonian phase space and Clifford algebras in dimension 3+1.

## 8.
Summary and Discussion

Based on three fundamental principles, which describe the form of physics, we have shown that the algebraic structure of coupled classical degrees of freedom is (depending on the number of DOFs) isomorphic to certain Clifford algebras, which allows us to explain the dimensionality of space-time and to model Lorentz transformations, the relativistic energy-momentum relation and even Maxwell's equations.

It is usually assumed that we have to define the properties of space-time in the first place: "In Einstein's theory of gravitation matter and its dynamical interaction are based on the notion of an intrinsic geometric structure of the space-time continuum" [36]. However, as we have shown within this "game", it has far more explanatory power to derive and explain space-time from the principles of interaction. Hence we propose to reverse the above statement: the intrinsic geometric structure of the space-time continuum is based on the dynamical interaction of matter. A rigorous consequence of this reversal of perspective is that "space-time" does not need to have a fixed and unique dimensionality at all. It appears that the dimensionality is a property of the type of interaction. However, if higher-dimensional space-times (see Reference [16]) emerged in analogy to the method presented here, for instance in nuclear interaction, then these space-times would not simply be Euclidean spaces of higher dimension. Clifford algebras, especially if they are restricted by symplectic conditions imposed by a Hamiltonian function, have a surprisingly complicated intrinsic structure. As we pointed out, if all generators of a Clifford algebra are symplices, then in 9 + 1 dimensions we find $k$-vectors with $k \in [0,10]$, but $k$-vectors generated from symplices are themselves symplices only for $k \in [1,2,5,6,9,10,\ldots]$.
However, if space-time is constrained by Hamiltonian motion, then ensembles of oscillators may also clump together to form "objects" with 9 + 1 or 25 + 1-dimensional interactions, despite the fact that we gave strong arguments for the fundamentality of the 3 + 1-dimensional Hamiltonian algebra.

There is no a priori reason to exclude higher order terms, as long as they involve constants of motion. However, as the Hamiltonian then contains terms of higher order, we might then need to consider higher order moments of the phase space distribution. In this case we would have to introduce an action constant in order to scale $\psi$.

Our game is based on a few general rules and symmetry considerations. The math used in our derivation, taking the results of representation theory for granted, is simple and can be understood at an undergraduate level. And though we never intended to find a connection to string theory, we found, besides the 3 + 1-dimensional interactions, a list of possible higher-dimensional candidates, two of which are also in the focus of string theories, namely the $9+1=10$-dimensional and $25+1=26$-dimensional theories [37].

We understand this modeling game as a contribution to the demystification (and unification) of our understanding of space-time, relativity, electrodynamics and quantum mechanics. Despite the fact that it has become tradition to write all equations of motion of QED and QM in a way that requires the use of the unit imaginary, our model seems to indicate that it does not have to be that way. Though it is frequently postulated that evolution in time has to be unitary within QM, it appears that symplectic motion does not only suffice, but is superior, as it yields the correct number of relevant operators: while in the unitary case one should expect 16 (15) unitary (traceless) operators for a 4-component spinor, the natural number of generators in the corresponding symplectic treatment is 10, as found by Dirac himself in QED [2,38].
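The counting can be made explicit for a 4-component spinor ($2n = 4$, i.e., $n = 2$):

$$
\dim U(4) = 4^2 = 16, \qquad \dim SU(4) = 15, \qquad \dim Sp(4, \mathbb{R}) = n(2n+1)\big|_{n=2} = 10
$$

The ten symplectic generators match the ten coefficients $\varepsilon$, $\vec{P}$, $\vec{E}$, $\vec{B}$ of the EMEQ (compare the sum over $k = 0, \ldots, 9$ in Equation (128)).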
If a theory contains things which are *not required*, then we have added something arbitrary and artificial. The theory as we described it indicates that in momentum space, which is used here, there is no immediate need for the unit imaginary and no need for more than 10 fundamental generators. The use of the unit imaginary, however, appears unavoidable when we switch via Fourier transform to "real space".

There is a dichotomy in physics. On the one hand all *causes* are considered to inhabit space-time (*local causality*), but on the other hand the *physical reasoning* mostly happens in energy-momentum space: there are no Feynman graphs, no scattering amplitudes, no fundamental physical relations that do not refer in some way to energy or momentum (conservation). We treat problems in solid state physics as well as in high energy physics mostly in Fourier space (the reciprocal lattice).

We are aware that the rules of the game are, due to their rigour, difficult to accept. However, maybe it does not suffice to speculate that the world might be a hologram (as 't Hooft suggested [39] and Leonard Susskind sketched in his celebrated paper, Reference [40]); we really should play modeling games that might help to decide if and how it could be like that.

**Conflicts of Interest:** The author declares no conflict of interest.

## Appendix Microcanonical Ensemble

Einstein once wrote that "A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability. Hence the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content concerning which I am convinced that, within the framework of the applicability of its basic concepts, it will never be overthrown [...]".
We agree with him, and we will try to show in the following that this also holds for the branch of thermodynamics that is called statistical mechanics. By the use of the EMEQ it has been shown that the expectation values

$$f_k = \frac{\operatorname{Tr}(\gamma_k^2)}{16} \bar{\psi} \gamma_k \psi \qquad (\text{A1})$$

can be associated with the energy $\mathcal{E}$ and momentum $\vec{P}$ of, and with the electric and magnetic fields $\vec{E}$ and $\vec{B}$ as seen by, a relativistic charged particle. It has also been shown that stable systems can always be transformed in such a way as to bring $\mathcal{H}$ into a diagonal form:

$$\mathbf{F} = \begin{pmatrix} 0 & \omega_1 & 0 & 0 \\ -\omega_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \omega_2 \\ 0 & 0 & -\omega_2 & 0 \end{pmatrix} \qquad (\text{A2})$$

In the following we will use the classical model of the microcanonical ensemble to compute some phase space averages. Let the constant value of the Hamiltonian be $\mathcal{H} = U$, where $U$ is some energy; the volume in phase space $\Phi^*$ that is limited by the surface of constant energy $U$ is given by [41]:

$$\Phi^* = \int_{\mathcal{H} \le U} dq_1\, dp_1\, dq_2\, dp_2$$

For $b(t) > 0$ [11] (resp. $b(t) < 0$, see [13]), in the low-intensity limit, the graded-index waveguide acts as a linear defocusing (focusing) lens.

Depending on the selection of the coefficients in Equation (1), its applications vary across very specific problems (see [16] and references therein):

* Bose-Einstein condensates: $b(\cdot) \neq 0$, $a, h$ constants and the other coefficients are zero.

* Dispersion-managed optical fibers and soliton lasers [9,14,15]: $a(\cdot), h(\cdot), d(\cdot) \neq 0$ are respectively dispersion, nonlinearity and amplification, and the other coefficients are zero. $a(\cdot)$ and $h(\cdot)$ can be periodic as well, see [29].

* Pulse dynamics in dispersion-managed fibers [10]: $h(\cdot) \neq 0$, $a$ is a constant and the other coefficients are zero.
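For reference, the "standard integrable NLS" to which such variable-coefficient equations can be reduced admits the textbook bright soliton. With the normalization $i u_t + u_{xx} + 2|u|^2 u = 0$ (an illustrative choice; conventions differ across the cited works), the solution $u = \operatorname{sech}(x)\, e^{it}$ can be verified symbolically:

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)

# bright soliton ansatz for  i u_t + u_xx + 2|u|^2 u = 0
u = sp.sech(x) * sp.exp(sp.I * t)

# |u|^2 u = u**2 * conjugate(u), since sech(x) is real for real x
residual = sp.I * sp.diff(u, t) + sp.diff(u, x, 2) + 2 * u**2 * sp.conjugate(u)
residual = sp.simplify(residual.rewrite(sp.exp))
```

The residual vanishes identically, since $\operatorname{sech}''(x) = \operatorname{sech}(x) - 2\operatorname{sech}^3(x)$ cancels against the linear and cubic terms.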
In this paper, to obtain the main results, we use a fundamental approach consisting of the use of similarity transformations and the solutions of Riccati systems with several parameters, inspired by the work in [30]. Similarity transformations have been a very popular strategy in nonlinear optics since the lens transform presented by Talanov [27]. Extensions of this approach have been presented in [26,28]. Applications include nonlinear optics, Bose-Einstein condensates, integrability of NLS and quantum mechanics; see, for example, [3,31-33] and references therein. E. Marhic in 1978 introduced (probably for the first time) a one-parameter $\{a(0)\}$ family of solutions for the linear Schrödinger equation of the one-dimensional harmonic oscillator, where the use of an explicit formulation (the classical Mehler's formula [34]) for the propagator was fundamental. The solutions presented by E. Marhic constituted a generalization of the original Schrödinger wave packet with oscillating width.

In addition, in [35], a generalized Mehler's formula for a general linear Schrödinger equation of the one-dimensional generalized harmonic oscillator of the form Equation (1) with $h(t) = 0$ was presented. For the latter case, in [36-38], multiparameter solutions in the spirit of Marhic in [30] have been presented. The parameters for the Riccati system arose originally in the process of proving convergence to the initial data for the Cauchy initial value problem Equation (1) with $h(t) = 0$ and in the process of finding a general solution of a Riccati system [38,39]. In addition, Ermakov systems with solutions containing parameters [36] have been used successfully to construct solutions for the generalized harmonic oscillator with a hidden symmetry [37], and they have also been used to present the Galilei transformation, the pseudoconformal transformation and others in a unified manner; see [37].
More recently, they have been used in [40] to show spiral and breathing solutions and solutions with bending for the paraxial wave equation. In this paper, as the second main result, we introduce a family of Schrödinger equations presenting periodic soliton solutions by using multiparameter solutions for Riccati systems. Furthermore, as the third main result, we show that these parameters provide control over the dynamics of solutions for equations of the form Equation (1). These results deserve further numerical and experimental study.

This paper is organized as follows: In Section 2, by means of similarity transformations and using computer algebra systems, we show the existence of Peregrine, bright and dark solitons for the family Equation (1). Thanks to the computer algebra systems, we are able to find an extensive list of integrable VCNLS, in the sense that they can be reduced to the standard integrable NLS; see Table 1. In Section 3, we use different similarity transformations than those used in Section 2. The advantage of the presentation of this section is a multiparameter approach. These parameters provide us with control over the center axis of bright and dark soliton solutions. Again in this section, using Table 2 and by means of computer algebra systems, we show that we can produce a very extensive number of integrable VCNLS allowing soliton-type solutions. A supplementary Mathematica file is provided where it is evident how the variation of the parameters changes the dynamics of the soliton solutions. In Section 4, we use a finite difference method to compare analytical solutions described in [41] (using similarity transformations) with numerical approximations for the paraxial wave equation (also known as the linear Schrödinger equation with quadratic potential).

---PAGE_BREAK---

**Table 1.** Families of NLS with variable coefficients.
Each family below admits solutions of the form $\psi_j(x,t) = \frac{1}{\sqrt{\mu(t)}}\,e^{i\alpha(t)x^2}\,u_j(x,t)$, $j = 1, 2, 3$, where the $u_j$ are solutions of the corresponding constant coefficient NLS.

1. $i\psi_t + \psi_{xx} - \frac{bmt^{m-1} + b^2t^{2m}}{4}x^2\psi + ibt^m x\psi_x - \lambda e^{-\frac{bt^{m+1}}{m+1}}|\psi|^2\psi = 0$, with $\mu(t) = e^{-\frac{bt^{m+1}}{m+1}}$ and $\alpha(t) = -\frac{bt^m}{4}$ (cf. Example 3).

2. $i\psi_t + \psi_{xx} - \frac{1}{2t^2}x^2\psi - i\frac{1}{t}x\psi_x - \lambda t|\psi|^2\psi = 0$, with $\mu(t) = t$ and $\alpha(t) = -\frac{1}{4t}$ (cf. Example 2).

3. $i\psi_t + \psi_{xx} - \frac{c^2}{4}x^2\psi - icx\psi_x - \lambda e^{ct}|\psi|^2\psi = 0$, with $\mu(t) = e^{ct}$ and $\alpha(t) = -\frac{c}{4}$ (cf. Example 1).

[Rows 4-42: further families whose coefficients are built from powers, exponentials, logarithms, and trigonometric and hyperbolic functions of $t$ ($\tan bt$, $\cot bt$, $\tanh bt$, $\coth bt$, $\sinh bt$, $\cosh bt$); these rows are not recoverable from the source.]

---PAGE_BREAK---

**Table 2.** Riccati equations used to generate the similarity transformations.
Each Riccati equation below generates the similarity transformation of the indicated row of Table 1.

1. $y'_x = ax^n y^2 + bmx^{m-1} - ab^2x^{n+2m}$ (Table 1, row 1)
2. $(ax^n + b)y'_x = by^2 + ax^{n-2}$ (Table 1, row 2)
3. $y'_x = ax^n y^2 + bx^m y + bcx^m - ac^2x^n$ (Table 1, row 3)
4. $y'_x = ax^n y^2 + bx^m y + ckx^{k-1} - bcx^{m+k} - ac^2x^{n+2k}$ (Table 1, row 1)
5. $xy'_x = ax^n y^2 + my - ab^2x^{n+2m}$ (Table 1, row 3)
6. $(ax^n + bx^m + c)y'_x = \alpha x^k y^2 + \beta x^s y - \alpha b^2 x^k + \beta b x^s$ (Table 1, row 4)
7. $y'_x = be^{\mu x}y^2 + ace^{cx} - a^2be^{(\mu+2c)x}$ (Table 1, row 5)
8. $y'_x = ae^{\mu x}y^2 + cy - ab^2e^{(\mu+2c)x}$ (Table 1, row 3)
9. $y'_x = ae^{cx}y^2 + bnx^{n-1} - ab^2e^{cx}x^{2n}$ (Table 1, row 1)
10. $y'_x = ax^n y^2 + bce^{cx} - ab^2x^ne^{2cx}$ (Table 1, row 8)
11. $y'_x = ax^n y^2 + cy - ab^2x^ne^{2cx}$ (Table 1, row 3)
12. $y'_x = [a \sinh^2(cx) - c]y^2 - a \sinh^2(cx) + c - a$ (Table 1, row 6)
13. $2y'_x = [a - b + a \cosh(bx)]y^2 + a + b - a \cosh(bx)$ (Table 1, row 7)
14. $y'_x = a(\ln x)^n y^2 + bmx^{m-1} - ab^2x^{2m}(\ln x)^n$ (Table 1, row 1)
15. $xy'_x = ax^n y^2 + b - ab^2x^n \ln^2 x$ (Table 1, row 8)
16. $y'_x = [b + a \sin^2(bx)]y^2 + b - a + a \sin^2(bx)$ (Table 1, row 9)
17. $2y'_x = [b + a + a \cos(bx)]y^2 + b - a + a \cos(bx)$ (Table 1, row 10)
18. $y'_x = [b + a \cos^2(bx)]y^2 + b - a + a \cos^2(bx)$ (Table 1, row 10)
19. $y'_x = c(\arctan x)^n y^2 + ay + ab - b^2c(\arctan x)^n$ (Table 1, row 3)
20. $y'_x = a(\arcsin x)^n y^2 + \beta mx^{m-1} - a\beta^2x^{2m}(\arcsin x)^n$ (Table 1, row 1)

[Rows 21-37 are not recoverable from the source.]

38. $y'_x = fy^2 - a^2f + ab \sinh(bx) - a^2f \sinh^2(bx)$ (Table 1, row 14)
39. $y'_x = fy^2 - a^2f + ab \sin(bx) + a^2f \sin^2(bx)$ (Table 1, row 15)
40. $y'_x = fy^2 - a^2f + ab \cos(bx) + a^2f \cos^2(bx)$ (Table 1, row 16)
41. $y'_x = fy^2 - a \tan^2(bx)(af - b) + ab$ (Table 1, row 17)
42. $y'_x = fy^2 - a \cot^2(bx)(af - b) + ab$ (Table 1, row 18)
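A quick way to work with entries like row 1 of Table 2 is to look for a power-law particular solution: one can check symbolically that $y = bx^m$ solves $y'_x = ax^ny^2 + bmx^{m-1} - ab^2x^{n+2m}$. A minimal sketch using sympy (the script is an illustration, not part of the paper's supplementary material):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, b, m, n = sp.symbols('a b m n', positive=True)

# Candidate particular solution of Table 2, row 1
y = b * x**m

# Right-hand side of the Riccati equation y' = a x^n y^2 + b m x^(m-1) - a b^2 x^(n+2m)
rhs = a * x**n * y**2 + b * m * x**(m - 1) - a * b**2 * x**(n + 2 * m)

# The quadratic and the last term cancel, leaving exactly y' = b m x^(m-1)
assert sp.simplify(sp.diff(y, x) - rhs) == 0
print("y = b*x**m is a particular solution of Table 2, row 1")
```

Once a particular solution $y_p$ is known, the standard substitution $y = y_p + 1/v$ turns the Riccati equation into a linear first-order equation for $v$, which is how the full transformation can be generated.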
Symmetry **2016**, *8*, 38

---PAGE_BREAK---

**2. Soliton Solutions for VCNLS through Riccati Equations and Similarity Transformations**

In this section, by means of a similarity transformation introduced in [42], and using computer algebra systems, we show the existence of Peregrine, bright and dark solitons for the family Equation (1). Thanks to the computer algebra systems, we are able to find an extensive list of integrable variable coefficient nonlinear Schrödinger equations (see Table 1). For similar work and applications to Bose-Einstein condensates, we refer the reader to [1].

**Lemma 1.** ([42]) Suppose that $h(t) = -l_0\lambda\mu(t)$ with $\lambda \in \mathbb{R}$, $l_0 = \pm 1$ and that $c(t)$, $\alpha(t)$, $\delta(t)$, $\kappa(t)$, $\mu(t)$ and $g(t)$ satisfy the equations:

$$
\begin{align}
\alpha(t) &= l_0 \frac{c(t)}{4}, \quad \delta(t) = -l_0 \frac{g(t)}{2}, \quad h(t) = -l_0 \lambda \mu(t), \tag{2} \\
\kappa(t) &= \kappa(0) - \frac{l_0}{4} \int_0^t g^2(z) dz, \tag{3} \\
\mu(t) &= \mu(0) \exp \left( \int_0^t (2d(z) - c(z)) dz \right), \quad \mu(0) \neq 0, \tag{4} \\
g(t) &= g(0) - 2l_0 \exp \left( -\int_0^t c(z) dz \right) \int_0^t \exp \left( \int_0^z c(y) dy \right) f(z) dz. \tag{5}
\end{align}
$$

Then,

$$
\psi(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u(t,x) \quad (6)
$$

is a solution to the Cauchy problem for the nonautonomous Schrödinger equation

$$
i\psi_t - l_0\psi_{xx} - b(t)x^2\psi + ic(t)x\psi_x + id(t)\psi + f(t)x\psi - ig(t)\psi_x - h(t)|\psi|^2\psi = 0, \quad (7)
$$

$$
\psi(0, x) = \psi_0(x), \quad (8)
$$

if and only if $u(t,x)$ is a solution of the Cauchy problem for the standard Schrödinger equation

$$
iu_t - l_0 u_{xx} + l_0 \lambda |u|^2 u = 0, \quad (9)
$$

with initial data

$$
u(0,x) = \sqrt{\mu(0)}e^{-i(\alpha(0)x^2+\delta(0)x+\kappa(0))}\psi_0(x).
\quad (10)
$$

Now, we proceed to use Lemma 1 to discuss how we can construct NLS equations with variable coefficients that can be reduced to the standard NLS and can therefore be solved explicitly. We start by recalling that

$$
u_1(t, x) = A \exp\left(2iA^2t\right) \left(\frac{3 + 16iA^2t - 16A^4t^2 - 4A^2x^2}{1 + 16A^4t^2 + 4A^2x^2}\right), A \in \mathbb{R} \quad (11)
$$

is a solution for ($l_0 = -1$ and $\lambda = -2$)

$$
iu_t + u_{xx} + 2|u|^2 u = 0, t, x \in \mathbb{R}. \tag{12}
$$

In addition,

$$
u_2(\xi, \tau) = A \tanh(A\xi)e^{-2iA^2\tau} \quad (13)
$$

is a solution of ($l_0 = -1$ and $\lambda = 2$)

$$
iu_{\tau} + u_{\xi\xi} - 2|u|^2 u = 0, \quad (14)
$$

---PAGE_BREAK---

and

$$
u_3(\tau, \xi) = \sqrt{v} \operatorname{sech}(\sqrt{v}\xi) \exp(-iv\tau), v > 0 \quad (15)
$$

is a solution of ($l_0 = 1$ and $\lambda = -2$),

$$
iu_{\tau} - u_{\xi\xi} - 2|u|^2 u = 0. \tag{16}
$$

**Example 1.** Consider the NLS:

$$
i\psi_t + \psi_{xx} - \frac{c^2}{4} x^2 \psi - icx\psi_x \pm 2e^{ct} |\psi|^2 \psi = 0. \quad (17)
$$

Our intention is to construct a similarity transformation from Equation (17) to the standard NLS Equation (9) by means of Lemma 1. Using the latter, we obtain

$$
b(t) = \frac{c^2}{4}, c(t) = c, \mu(t) = e^{ct},
$$

and

$$
\alpha(t) = -\frac{c}{4}, h(t) = \pm 2e^{ct}.
$$

Therefore,

$$
\psi_j(x,t) = \frac{e^{-i\frac{c}{4}x^2}}{\sqrt{e^{ct}}} u_j(x,t), j=1,2
$$

is a solution of the form Equation (6), where the $u_j(x,t)$ are given by Equations (11) and (13).

**Example 2.** Consider the NLS:

$$
i\psi_t + \psi_{xx} - \frac{1}{2t^2}x^2\psi - i\frac{1}{t}x\psi_x \pm 2t|\psi|^2\psi = 0.
\quad (18)
$$

By Lemma 1, a Riccati equation associated to the similarity transformation is given by

$$
\frac{dc}{dt} + c(t)^2 - 2t^{-2} = 0, \tag{19}
$$

and we obtain the functions

$$
b(t) = \frac{1}{2t^2}, c(t) = -\frac{1}{t}, \mu(t) = t,
$$

$$
\alpha(t) = -\frac{1}{4t}, h_1(t) = -2t, h_2(t) = 2t.
$$

Using $u_j(x,t)$, $j=1$ and $2$, given by Equations (11) and (13), we get the solutions

$$
\psi_j(x,t) = \frac{e^{-i\frac{1}{4t}x^2}}{\sqrt{t}} u_j(x,t). \quad (20)
$$

Table 1 shows integrable variable coefficient NLS and the corresponding similarity transformation to the constant coefficient NLS. Table 2 lists some Riccati equations that can be used to generate these transformations.

---PAGE_BREAK---

**Example 3.** If we consider the following family ($m$ and $B$ are parameters) of variable coefficient NLS,

$$i\psi_t + \psi_{xx} - \frac{Bmt^{m-1} + B^2t^{2m}}{4}x^2\psi + iBt^m x\psi_x + \gamma e^{-\frac{Bt^{m+1}}{m+1}}|\psi|^2\psi = 0, \quad (21)$$

by means of the Riccati equation

$$y_t = At^n y^2 + Bmt^{m-1} - AB^2t^{n+2m}, \quad (22)$$

and Lemma 1, we can construct soliton-like solutions for Equation (21). For this example, we restrict ourselves to taking $A = -1$ and $n = 0$. Furthermore, taking in Lemma 1 $l_0 = -1$, $\lambda = -2$, $a(t) = 1$, $b(t) = \frac{Bmt^{m-1}+B^2t^{2m}}{4}$, $c(t) = Bt^m$, $\mu(t) = e^{-\frac{Bt^{m+1}}{m+1}}$, $h(t) = -2e^{-\frac{Bt^{m+1}}{m+1}}$, and $\alpha(t) = -Bt^m/4$, soliton-like solutions to Equation (21) are given by

$$\psi_j(x,t) = e^{-i\frac{Bt^m}{4}x^2} e^{\frac{Bt^{m+1}}{2(m+1)}} u_j(x,t), \quad (23)$$

where $u_j(x,t)$, $j=1$ and $2$, are given by Equations (11) and (15). It is important to notice that if we consider $B=0$ in Equation (21), we obtain standard NLS models.

### 3.
Riccati Systems with Parameters and Similarity Transformations

In this section, we use different similarity transformations than those used in Section 2, but they have been presented previously [26,35,39,42]. The advantage of the presentation of this section is a multiparameter approach. These parameters provide us with control over the center axis of bright and dark soliton solutions. Again in this section, using Table 2, and by means of computer algebra systems, we show that we can produce a very extensive number of integrable VCNLS allowing soliton-type solutions. The transformations will require:

$$\frac{d\alpha}{dt} + b(t) + 2c(t)\alpha + 4a(t)\alpha^2 = 0, \quad (24)$$

$$\frac{d\beta}{dt} + (c(t) + 4a(t)\alpha(t))\beta = 0, \quad (25)$$

$$\frac{d\gamma}{dt} + l_0 a(t) \beta^2(t) = 0, l_0 = \pm 1, \quad (26)$$

$$\frac{d\delta}{dt} + (c(t) + 4a(t)\alpha(t))\delta = f(t) + 2a(t)g(t), \quad (27)$$

$$\frac{d\epsilon}{dt} = (g(t) - 2a(t)\delta(t))\beta(t), \quad (28)$$

$$\frac{d\kappa}{dt} = g(t)\delta(t) - a(t)\delta^2(t). \quad (29)$$

Considering the standard substitution

$$\alpha(t) = \frac{1}{4a(t)} \frac{\mu'(t)}{\mu(t)} - \frac{d(t)}{2a(t)}, \quad (30)$$

it follows that the Riccati Equation (24) becomes

$$\mu'' - \tau(t)\mu' + 4\sigma(t)\mu = 0, \quad (31)$$

with

$$\tau(t) = \frac{a'}{a} - 2c + 4d, \quad \sigma(t) = ab - cd + d^2 + \frac{d}{2}\left(\frac{a'}{a} - \frac{d'}{d}\right). \quad (32)$$

---PAGE_BREAK---

We will refer to Equation (31) as the characteristic equation of the Riccati system. Here, $a(t)$, $b(t)$, $c(t)$, $d(t)$, $f(t)$ and $g(t)$ are real-valued functions depending only on the variable $t$.
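The link between the Riccati equation (24) and its characteristic equation (31) can be checked numerically. The sketch below assumes the constant-coefficient case $a = b = 1$, $c = d = 0$ (an illustrative choice, not from the paper), for which (30) reduces to $\alpha = \mu'/(4\mu)$ and (31) to $\mu'' + 4\mu = 0$; the exact solution is $\alpha(t) = -\tan(2t)/2$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constant-coefficient case a = b = 1, c = d = 0:
#   Riccati (24):        alpha' + 1 + 4 alpha^2 = 0
#   characteristic (31): mu'' + 4 mu = 0, with alpha = mu'/(4 mu) from (30)
def riccati(t, y):
    return [-(1.0 + 4.0 * y[0] ** 2)]

def characteristic(t, y):  # y = [mu, mu']
    return [y[1], -4.0 * y[0]]

t_end = 0.5  # stay below the blow-up time pi/4 of -tan(2t)/2
alpha0 = 0.0
sol_r = solve_ivp(riccati, (0, t_end), [alpha0], rtol=1e-10, atol=1e-12)
# mu(0) = 1, mu'(0) = 4 alpha(0) reproduces alpha(0) through (30)
sol_c = solve_ivp(characteristic, (0, t_end), [1.0, 4.0 * alpha0],
                  rtol=1e-10, atol=1e-12)

alpha_direct = sol_r.y[0, -1]
alpha_from_mu = sol_c.y[1, -1] / (4.0 * sol_c.y[0, -1])
alpha_exact = -np.tan(2.0 * t_end) / 2.0
print(alpha_direct, alpha_from_mu, alpha_exact)
```

The two integrations agree with each other and with the closed form, while the linear equation (31) remains integrable past the finite-time blow-up of the Riccati variable, which is the practical reason for the substitution (30).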
A solution of the Riccati system Equations (24)-(29) with multiparameters is given by the following expressions (with the respective inclusion of the parameter $l_0$) [26,35,39]:

$$ \mu(t) = 2\mu(0)\mu_0(t)(\alpha(0) + \gamma_0(t)), \quad (33) $$

$$ \alpha(t) = \alpha_0(t) - \frac{\beta_0^2(t)}{4(\alpha(0) + \gamma_0(t))}, \quad (34) $$

$$ \beta(t) = -\frac{\beta(0)\beta_0(t)}{2(\alpha(0) + \gamma_0(t))} = \frac{\beta(0)\mu(0)}{\mu(t)}w(t), \quad (35) $$

$$ \gamma(t) = l_0\gamma(0) - \frac{l_0\beta^2(0)}{4(\alpha(0) + \gamma_0(t))}, \quad l_0 = \pm 1, \quad (36) $$

$$ \delta(t) = \delta_0(t) - \frac{\beta_0(t)(\delta(0) + \varepsilon_0(t))}{2(\alpha(0) + \gamma_0(t))}, \quad (37) $$

$$ \varepsilon(t) = \varepsilon(0) - \frac{\beta(0)(\delta(0) + \varepsilon_0(t))}{2(\alpha(0) + \gamma_0(t))}, \quad (38) $$

$$ \kappa(t) = \kappa(0) + \kappa_0(t) - \frac{(\delta(0) + \varepsilon_0(t))^2}{4(\alpha(0) + \gamma_0(t))}, \quad (39) $$

subject to the arbitrary initial conditions $\mu(0), \alpha(0), \beta(0) \neq 0, \gamma(0), \delta(0), \varepsilon(0)$ and $\kappa(0)$.
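As a minimal symbolic sanity check of Equation (34), consider the free-particle case (an assumption made only for illustration: $a(t) = a_0 > 0$ constant, $b = c = d = f = g = 0$). Then $\mu_0(t) = 2a_0t$, $\mu_1(t) = 1$, $w(t) = 1$, so $\alpha_0(t) = \gamma_0(t) = 1/(4a_0t)$ and $\beta_0(t) = -1/(2a_0t)$, and Equation (34) collapses to $\alpha(t) = \alpha(0)/(1 + 4a_0\alpha(0)t)$, which satisfies the Riccati equation (24), here $\alpha' + 4a_0\alpha^2 = 0$:

```python
import sympy as sp

t, a0, alpha_init = sp.symbols('t a0 alpha0', positive=True)

# Free-particle ingredients (a = a0, b = c = d = f = g = 0):
alpha0_t = 1 / (4 * a0 * t)   # alpha_0(t), Equation (40) with d = 0
beta0_t = -1 / (2 * a0 * t)   # beta_0(t),  Equation (41), w(t) = 1
gamma0_t = 1 / (4 * a0 * t)   # gamma_0(t), Equation (42)

# Multiparameter formula (34)
alpha = alpha0_t - beta0_t**2 / (4 * (alpha_init + gamma0_t))

# It reduces to alpha(0)/(1 + 4 a0 alpha(0) t) ...
assert sp.simplify(alpha - alpha_init / (1 + 4 * a0 * alpha_init * t)) == 0
# ... and satisfies the Riccati equation (24): alpha' + 4 a0 alpha^2 = 0
assert sp.simplify(sp.diff(alpha, t) + 4 * a0 * alpha**2) == 0
print("Equation (34) verified for the free particle")
```

Note how the apparent singularity of $\alpha_0$, $\beta_0$ and $\gamma_0$ at $t = 0$ cancels in the combination (34), leaving a solution that attains the arbitrary initial value $\alpha(0)$.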
$\alpha_0, \beta_0, \gamma_0, \delta_0, \varepsilon_0$ and $\kappa_0$ are given explicitly by:

$$ \alpha_0(t) = \frac{1}{4a(t)} \frac{\mu'_0(t)}{\mu_0(t)} - \frac{d(t)}{2a(t)}, \quad (40) $$

$$ \beta_0(t) = -\frac{w(t)}{\mu_0(t)}, \quad w(t) = \exp\left(-\int_0^t (c(s) - 2d(s))ds\right), \quad (41) $$

$$ \gamma_0(t) = \frac{d(0)}{2a(0)} + \frac{1}{2\mu_1(0)} \frac{\mu_1(t)}{\mu_0(t)}, \quad (42) $$

$$ \delta_0(t) = \frac{w(t)}{\mu_0(t)} \int_0^t \left[ \left(f(s) - \frac{d(s)}{a(s)}g(s)\right)\mu_0(s) + \frac{g(s)}{2a(s)}\mu'_0(s) \right] \frac{ds}{w(s)}, \quad (43) $$

$$ \begin{aligned} \varepsilon_0(t) = & -\frac{2a(t)w(t)}{\mu'_0(t)}\delta_0(t) + 8 \int_0^t \frac{a(s)\varphi(s)w(s)}{(\mu'_0(s))^2}(\mu_0(s)\delta_0(s))ds \\ & + 2\int_0^t \frac{a(s)w(s)}{\mu'_0(s)}\left[f(s) - \frac{d(s)}{a(s)}g(s)\right]ds, \end{aligned} \quad (44) $$

$$ \begin{aligned} \kappa_0(t) = & \frac{a(t)\mu_0(t)}{\mu'_0(t)}\delta_0^2(t) - 4\int_0^t \frac{a(s)\varphi(s)}{(\mu'_0(s))^2}(\mu_0(s)\delta_0(s))^2 ds \\ & - 2\int_0^t \frac{a(s)}{\mu'_0(s)}(\mu_0(s)\delta_0(s))\left[f(s) - \frac{d(s)}{a(s)}g(s)\right]ds, \end{aligned} \quad (45) $$

with $\delta_0(0) = g(0)/(2a(0))$, $\varepsilon_0(0) = -\delta_0(0)$, $\kappa_0(0) = 0$. Here, $\mu_0$ and $\mu_1$ represent the fundamental solutions of the characteristic equation subject to the initial conditions $\mu_0(0) = 0, \mu'_0(0) = 2a(0) \neq 0$ and $\mu_1(0) \neq 0, \mu'_1(0) = 0$.

Using the system Equations (34)-(39), in [26], a generalized lens transformation is presented. Next, we recall this result (here we use a slight perturbation, introducing the parameter $l_0 = \pm 1$, in order to use Peregrine type soliton solutions):

---PAGE_BREAK---

**Lemma 2** ($l_0 = 1$, [26]). Assume that $h(t) = \lambda a(t) \beta^2(t) \mu(t)$ with $\lambda \in \mathbb{R}$.
Then, the substitution

$$ \psi(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u(\tau, \xi), \quad (46) $$

where $\xi = \beta(t)x + \epsilon(t)$ and $\tau = \gamma(t)$, transforms the equation

$$ i\psi_t = -a(t)\psi_{xx} + b(t)x^2\psi - ic(t)x\psi_x - id(t)\psi - f(t)x\psi + ig(t)\psi_x + h(t)|\psi|^2\psi $$

into the standard Schrödinger equation

$$ iu_{\tau} - l_{0}u_{\xi\xi} + l_{0}\lambda|u|^{2}u = 0, \quad l_{0} = \pm 1, \quad (47) $$

as long as $\alpha, \beta, \gamma, \delta, \varepsilon$ and $\kappa$ satisfy the Riccati system Equations (24)-(29) and also Equation (30).

**Example 4.** Consider the NLS:

$$ i\psi_t = \psi_{xx} - \frac{x^2}{4}\psi + h(0) \operatorname{sech}(t) |\psi|^2 \psi. \quad (48) $$

It has the associated characteristic equation $\mu'' + a\mu = 0$, and, using this, we obtain the functions:

$$ \alpha(t) = \frac{\coth(t)}{4} - \frac{1}{2} \operatorname{csch}(t) \operatorname{sech}(t), \quad \delta(t) = -\operatorname{sech}(t), \quad (49) $$

$$ \kappa(t) = 1 - \frac{\tanh(t)}{2}, \quad \mu(t) = \cosh(t), \quad (50) $$

$$ h(t) = h(0) \operatorname{sech}(t), \quad \beta(t) = \frac{1}{\cosh(t)}, \quad (51) $$

$$ \varepsilon(t) = -1 + \tanh(t), \quad \gamma(t) = 1 - \frac{\tanh(t)}{2}. \quad (52) $$

Then, we can construct solutions of the form

$$ \psi_j(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u_j\left(1 - \frac{\tanh(t)}{2}, \frac{x}{\cosh(t)} - 1 + \tanh(t)\right), \quad (53) $$

with $u_j$, $j = 1$ and $2$, given by Equations (11) and (13).

**Example 5.** Consider the NLS:

$$ i\psi_t(x,t) = \psi_{xx}(x,t) + \frac{h(0)\beta(0)^2\mu(0)}{1+2\alpha(0)c_2t} |\psi(x,t)|^2 \psi(x,t).
$$

It has the characteristic equation $\mu'' + a\mu = 0$, and, using this, we obtain the functions:

$$ \alpha(t) = \frac{1}{4t} - \frac{1}{2+4\alpha(0)c_2^2t^2}, \quad \delta(t) = \frac{\delta(0)}{1+2\alpha(0)c_2t}, \quad (54) $$

$$ \kappa(t) = \kappa(0) - \frac{\delta(0)^2 c_2 t}{2 + 4\alpha(0)c_2 t}, \quad h(t) = \frac{h(0)\beta(0)^2\mu(0)}{1 + 2\alpha(0)c_2 t}, \quad (55) $$

$$ \mu(t) = (1 + 2\alpha(0)c_2t)\mu(0), \quad \beta(t) = \frac{\beta(0)}{1 + 2\alpha(0)c_2t}, $$

---PAGE_BREAK---

$$
\gamma(t) = \gamma(0) - \frac{\beta(0)^2 c_2 t}{2 + 4\alpha(0)c_2 t}, \quad \epsilon(t) = \epsilon(0) - \frac{\beta(0)\delta(0)c_2 t}{1 + 2\alpha(0)c_2 t}.
$$

Then, we can construct a solution of the form

$$
\begin{equation}
\begin{split}
\psi_j(t,x) ={}& \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} \\
& u_j \left( \gamma(0) - \frac{\beta(0)^2 c_2 t}{2+4\alpha(0)c_2 t}, \frac{\beta(0)x}{1+2\alpha(0)c_2 t} + \epsilon(0) - \frac{\beta(0)\delta(0)c_2 t}{1+2\alpha(0)c_2 t} \right),
\end{split}
\tag{56}
\end{equation}
$$

with $u_j$, $j = 1$ and $2$, given by Equations (11) and (13).

Following Table 2 of Riccati equations, we can use Equation (24) and Lemma 2 to construct an extensive list of integrable variable coefficient nonlinear Schrödinger equations.

**4. Crank-Nicolson Scheme for the Linear Schrödinger Equation with Variable Coefficients Depending on Space**

In addition, in [35], a generalized Mehler's formula for a general linear Schrödinger equation of the one-dimensional generalized harmonic oscillator of the form Equation (1) with $h(t) = 0$ was presented.
As a particular case, if $b = \lambda \frac{\omega^2}{2}$, $f = b$, $\omega > 0$, $\lambda \in \{-1, 0, 1\}$, $c = g = 0$, then the evolution operator is given explicitly by the following formula (this formula is a consequence of Mehler's formula for Hermite polynomials):

$$
\psi(x,t) = U_V(t)f := \frac{1}{\sqrt{2i\pi\mu_j(t)}} \int_{\mathbb{R}^n} e^{iS_V(x,y,t)} f(y)dy, \quad (57)
$$

where

$$
S_V(x, y, t) = \frac{1}{\mu_j(t)} \left( \frac{x_j^2 + y_j^2}{2} l_j(t) - x_j y_j \right), \quad (58)
$$

and $\{\mu_j(t), l_j(t)\}$ are determined by the fundamental solutions of the characteristic equation for each choice of $\lambda$.

Using Riccati-Ermakov systems in [41], it was shown how computer algebra systems can be used to derive the multiparameter formulas (33)-(45). This multiparameter study was also used to study solutions for the inhomogeneous paraxial wave equation in a linear and quadratic approximation, including oscillating laser beams in a parabolic waveguide, spiral light beams, and more families of propagation-invariant laser modes in weakly varying media. However, the analytical method is restricted to Riccati equations that can be solved exactly, such as the ones presented in Table 2. In this section, we use a finite difference method to compare analytical solutions described in [41] with numerical approximations. We aim (in future research) to extend numerical schemes to solve more general cases than the analytical method can. In particular, we will pursue solving equations of the general form

$$
i\psi_t = -\Delta\psi + V(\mathbf{x}, t)\psi, \quad (59)
$$

using polynomial approximations in two variables for the potential function $V(\mathbf{x}, t)$ ($V(\mathbf{x}, t) \approx b(t)(x_1^2 + x_2^2) + f(t)x_1 + g(t)x_2 + h(t)$). For this purpose, it is necessary to analyze the stability of different methods applied to this equation.
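The quadratic-polynomial approximation of the potential mentioned above can be sketched as a least-squares fit onto the basis $\{x_1^2 + x_2^2,\ x_1,\ x_2,\ 1\}$. The grid, the sample potential and the function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sample a potential V(x1, x2) on a grid and fit
# V ~ b (x1^2 + x2^2) + f x1 + g x2 + h by least squares.
x1g, x2g = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
X1, X2 = x1g.ravel(), x2g.ravel()

def potential(x1, x2):
    # illustrative potential that lies exactly in the fitting basis
    return 2.0 * (x1**2 + x2**2) + 0.5 * x1 - 1.0

# Design matrix: columns for r^2, x1, x2, constant
A = np.column_stack([X1**2 + X2**2, X1, X2, np.ones_like(X1)])
coef, *_ = np.linalg.lstsq(A, potential(X1, X2), rcond=None)
b, f, g, h = coef
print(b, f, g, h)  # recovers 2.0, 0.5, 0.0, -1.0 up to rounding
```

For potentials outside the span of the basis, the same fit yields the best quadratic approximation in the least-squares sense, which is the kind of input the analytical formulas (33)-(45) expect.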
---PAGE_BREAK---

We will also be interested in extending this process to nonlinear Schrödinger-type equations with time-dependent potential terms, such as

$$i\psi_t = -\Delta\psi + V(\mathbf{x}, t)\psi + s|\psi|^2\psi. \quad (60)$$

In this section, we show that the Crank-Nicolson scheme seems to be the best method for numerically reconstructing the analytical solutions presented in [41].

Numerical methods arise as an alternative when it is difficult to find analytical solutions of the Schrödinger equation. Although numerical schemes do not provide explicit solutions to the problem, they do yield approximations to the true solutions, which allow us to obtain some relevant properties of the problem. Among the simplest and most frequently used methods are those based on finite differences.

In this section, the Crank-Nicolson scheme is used for the linear Schrödinger equation in the case of coefficients depending only on the space variable, because it is unconditionally stable and the matrix of the associated system does not vary between iterations.

A rectangular mesh $(x_m, t_n)$ is introduced in order to discretize a bounded domain $\Omega \times [0, T]$ in space and time. In addition, $\tau$ and $\mathbf{h}$ represent the size of the time step and the size of the space step, respectively. $\mathbf{x}_m$ and $\mathbf{h}$ are in $\mathbb{R}$ if one-dimensional space is considered; otherwise, they are in $\mathbb{R}^2$.

The discretization is given by the matrix system

$$\left(I + \frac{i\alpha\tau}{2h^2}\Delta + \frac{i\tau}{2}V(\mathbf{x})\right)\psi^{n+1} = \left(I - \frac{i\alpha\tau}{2h^2}\Delta - \frac{i\tau}{2}V(\mathbf{x})\right)\psi^n, \quad (61)$$

where $I$ is the identity matrix, $\Delta$ is the discrete representation of the Laplacian operator in space, and $V(\mathbf{x})$ is the diagonal matrix that represents the operator of the external potential depending on $\mathbf{x}$.
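A minimal one-dimensional sketch of the scheme (61) is given below (our own variable names; the equation $i\psi_t = -\alpha\psi_{xx} + V(x)\psi$ with Dirichlet boundary conditions is assumed). It illustrates the two properties invoked above: the matrices are assembled once, and each step preserves the discrete $L^2$ norm because the left- and right-hand matrices form a Cayley transform of the Hermitian discrete Hamiltonian:

```python
import numpy as np

def crank_nicolson_matrices(V, tau, h, alpha=1.0):
    """Build the two matrices of scheme (61) on a uniform grid for
    i psi_t = -alpha psi_xx + V(x) psi with Dirichlet boundaries."""
    M = len(V)
    lap = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
           + np.diag(np.ones(M - 1), -1)) / h**2      # discrete psi_xx
    H = -alpha * lap + np.diag(V)                     # discrete Hamiltonian
    A = np.eye(M) + 0.5j * tau * H                    # left-hand matrix
    B = np.eye(M) - 0.5j * tau * H                    # right-hand matrix
    return A, B

# Harmonic potential example: the scheme should conserve the L2 norm
x = np.linspace(-10, 10, 201)
h, tau = x[1] - x[0], 0.01
A, B = crank_nicolson_matrices(x**2, tau, h)
psi = np.exp(-x**2 / 2).astype(complex)
n0 = np.sum(np.abs(psi)**2) * h
for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)
norm = np.sum(np.abs(psi)**2) * h
assert abs(norm - n0) < 1e-8
```

Since the left-hand matrix does not change between iterations, its LU factorization can be computed once and reused at every time step.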
The paraxial wave equation (also known as the harmonic oscillator equation)

$$2i\psi_t + \Delta\psi - r^2\psi = 0, \quad (62)$$

where $r = x$ for $\mathbf{x} \in \mathbb{R}$ or $r = \sqrt{x_1^2 + x_2^2}$ for $\mathbf{x} \in \mathbb{R}^2$, describes the wave function of a laser beam [40].

Solutions of this equation can be presented as Hermite-Gaussian modes on a rectangular domain:

$$ \begin{aligned} \psi_{nm}(\mathbf{x}, t) = & A_{nm} \frac{\exp[i(\kappa_1+\kappa_2)+2i(n+m+1)\gamma]}{\sqrt{2^{n+m}n!m!\pi}} \beta \\ & \times \exp\left[i(\alpha\mathbf{r}^2 + \delta_1x_1 + \delta_2x_2) - (\beta x_1 + \epsilon_1)^2/2 - (\beta x_2 + \epsilon_2)^2/2\right] \\ & \times H_n(\beta x_1 + \epsilon_1)H_m(\beta x_2 + \epsilon_2), \end{aligned} \quad (63) $$

where $H_n(x)$ is the $n$-th order Hermite polynomial in the variable $x$; see [40,41].

In addition, some solutions of the paraxial equation may be expressed by means of Laguerre-Gaussian modes in the case of cylindrical domains (see [43]):

$$ \begin{aligned} \psi_n^m(\mathbf{x}, t) = & A_n^m \sqrt{\frac{n!}{\pi(n+m)!}\beta} \\ & \times \exp\left[i(\alpha\mathbf{r}^2 + \delta_1x_1 + \delta_2x_2 + \kappa_1 + \kappa_2) - (\beta x_1 + \epsilon_1)^2/2 - (\beta x_2 + \epsilon_2)^2/2\right] \\ & \times \exp[i(2n+m+1)\gamma](\beta(x_1 \pm ix_2) + \epsilon_1 \pm i\epsilon_2)^m \\ & \times L_n^m((\beta x_1 + \epsilon_1)^2 + (\beta x_2 + \epsilon_2)^2), \end{aligned} \quad (64) $$

with $L_n^m(x)$ being the $n$-th order Laguerre polynomial with parameter $m$ in the variable $x$.

The parameters $\alpha, \beta, \gamma, \delta_1, \delta_2, \epsilon_1, \epsilon_2, \kappa_1$ and $\kappa_2$ are given by Equations (34)-(39) for both the Hermite-Gaussian and Laguerre-Gaussian modes.

Figures 1 and 2 show two examples of solutions of the one-dimensional paraxial equation with $\Omega = [-10, 10]$ and $T = 12$. The step sizes are $\tau = \frac{10}{200}$ and $h = \frac{10}{200}$.
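With $A_{nm} = 1$, the prefactor in (63) normalizes the modes: since $\int H_n(u)^2 e^{-u^2}\,du = 2^n n!\sqrt{\pi}$, one gets $\iint|\psi_{nm}|^2\,dx_1 dx_2 = |A_{nm}|^2$. This can be checked numerically with the sketch below (our own function name; only the real envelope is evaluated, because the $\alpha$, $\delta$, $\gamma$, and $\kappa$ terms in (63) are pure phases that drop out of $|\psi_{nm}|$):

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite  # physicists' Hermite polynomials H_n

def hg_envelope(n, m, x1, x2, beta=1.0, eps1=0.0, eps2=0.0):
    """Real envelope of the Hermite-Gaussian mode (63) with A_nm = 1."""
    u, v = beta * x1 + eps1, beta * x2 + eps2
    norm = beta / np.sqrt(2.0**(n + m) * factorial(n) * factorial(m) * np.pi)
    return norm * np.exp(-(u**2 + v**2) / 2) * eval_hermite(n, u) * eval_hermite(m, v)

# Integrating |psi|^2 over the plane should give |A_nm|^2 = 1
x = np.linspace(-8, 8, 400)
X1, X2 = np.meshgrid(x, x)
dx = x[1] - x[0]
total = np.sum(hg_envelope(2, 1, X1, X2, beta=1.3)**2) * dx * dx
assert abs(total - 1.0) < 1e-6
```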
---PAGE_BREAK---

**Figure 1.** (a) corresponding approximation for the one-dimensional Hermite-Gaussian beam with $t = 10$. The initial condition is $\sqrt{\frac{2}{3\sqrt{\pi}}}e^{-(\frac{3}{2}x)^2/2}$; (b) the exact solution for the one-dimensional Hermite-Gaussian beam with $t = 10$, $A_n = 1$, $\mu_0 = 1$, $\alpha_0 = 0$, $\beta_0 = \frac{4}{9}$, $n_0 = 0$, $\delta_0 = 0$, $\gamma_0 = 0$, $\epsilon_0 = 0$, $\kappa_0 = 0$.

**Figure 2.** (a) corresponding approximation for the one-dimensional Hermite-Gaussian beam with $t = 10$. The initial condition is $\sqrt{\frac{2}{3\sqrt{\pi}}}e^{-(\frac{3}{2}x)^2/2+ix}$; (b) the exact solution for the one-dimensional Hermite-Gaussian beam with $t = 10$, $A_n = 1$, $\mu_0 = 1$, $\alpha_0 = 0$, $\beta_0 = \frac{4}{9}$, $n_0 = 0$, $\delta_0 = 1$, $\gamma_0 = 0$, $\epsilon_0 = 0$, $\kappa_0 = 0$.

Figure 3 shows four profiles of two-dimensional Hermite-Gaussian beams considering $\Omega = [-6,6] \times [-6,6]$ and $T = 10$. The corresponding step sizes are $\tau = \frac{10}{40}$ and $h = (\frac{12}{48}, \frac{12}{48})$.
---PAGE_BREAK---

**Figure 3.** (Left): corresponding approximations for the two-dimensional Hermite-Gaussian beams with $t = 10$. The initial conditions are (a) $\frac{1}{\sqrt{8\pi}}e^{-(x^2+y^2)}$; (b) $\frac{1}{\sqrt{2\pi}}e^{-(x^2+y^2)}x$; (c) $\sqrt{\frac{2}{\pi}}e^{-(x^2+y^2)}xy$; (d) $\frac{1}{4\sqrt{32\pi}}e^{-(x^2+y^2)}(8x^2-2)(8y^2-2)$. (Right): the exact solutions for the two-dimensional Hermite-Gaussian beams with $t = 10$ and parameters $A_{nm} = \frac{1}{4}$, $\alpha_0 = 0$, $\beta_0 = \sqrt{2}$, $\delta_{0,1} = 1$, $\gamma_{0,1} = 0$, $\epsilon_{0,1} = 0$, $\kappa_{0,1} = 0$. For (a) $n=0$ and $m=0$, for (b) $n=1$ and $m=0$, for (c) $n=1$ and $m=1$, for (d) $n=2$ and $m=2$.
---PAGE_BREAK---

Figure 4 shows two profiles of two-dimensional Laguerre-Gaussian beams considering $\Omega = [-6,6] \times [-6,6]$ and $T = 10$.
The corresponding step sizes are $\tau = \frac{10}{40}$ and $\mathbf{h} = (\frac{12}{48}, \frac{12}{48})$.

**Figure 4.** (Left): corresponding approximations for the two-dimensional Laguerre-Gaussian beams with $t = 10$. The initial conditions are (a) $\frac{1}{\sqrt{4\pi}}e^{-(x^2+y^2)}(x+iy)$; (b) $\frac{1}{\sqrt{2\pi}}e^{-(x^2+y^2)}(x+iy)(1-x^2-y^2)$. (Right): the exact solutions for the two-dimensional Laguerre-Gaussian beams with $t = 10$ and parameters $A_n^m = \frac{1}{4}$, $\alpha_0 = 0$, $\beta_0 = \sqrt{2}$, $\delta_{0,1} = 1$, $\gamma_{0,1} = 0$, $\epsilon_{0,1} = 0$, $\kappa_{0,1} = 0$.

**5. Conclusions**

Rajendran et al. in [1] used similarity transformations introduced in [28] to present a list of integrable NLS equations with variable coefficients. In this work, we have extended this list using similarity transformations introduced by Suslov in [26], presenting a more extensive list of families of integrable nonlinear Schrödinger (NLS) equations with variable coefficients (see Table 1 for a primary list). In both approaches, the Riccati equation plays a fundamental role. The reader can observe that, using computer algebra systems, the parameters (see Equations (33)–(39)) change the dynamics of the solutions; the Mathematica files are provided as a supplement for the readers. Finally, we have tested numerical approximations for the inhomogeneous paraxial wave equation by the Crank-Nicolson scheme against analytical solutions. These solutions include oscillating laser beams and Hermite-Gaussian and Laguerre-Gaussian beams. The explicit solutions had been found previously thanks to explicit solutions of Riccati-Ermakov systems [41].

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2073-8994/8/5/38/s1, Mathematica supplement file.

**Acknowledgments:** The authors were partially funded by the Mathematical Association of America through NSF (grant DMS-1359016) and NSA (grant DMS-1359016).
Also, the authors are thankful for the funding received from the Department of Mathematics and Statistical Sciences and the College of Liberal Arts and Sciences at the University of Puerto Rico, Mayagüez. E.S. is funded by the Simons Foundation Grant #316295 and by the National Science Foundation Grant DMS-1440664. E.S. is also thankful for the start-up funds and the "Faculty
---PAGE_BREAK---

Development Funding Program Award" received from the School of Mathematics and Statistical Sciences and the College of Sciences at the University of Texas Rio Grande Valley.

**Author Contributions:** The original results presented in this paper are the outcome of a research collaboration started during Summer 2015 and continuing until Spring 2016. Similarly, the selection of the examples, tables, graphics and extended bibliography is the result of a long, continuous interaction between the authors.

**Conflicts of Interest:** The authors declare no conflict of interest.

References

1. Rajendran, S.; Muruganandam, P.; Lakshmanan, M. Bright and dark solitons in a quasi-1D Bose-Einstein condensates modelled by 1D Gross-Pitaevskii equation with time-dependent parameters. *Phys. D Nonlinear Phenom.* **2010**, *239*, 366–386. [CrossRef]

2. Agrawal, G.P. *Nonlinear Fiber Optics*, 4th ed.; Academic Press: New York, NY, USA, 2007.

3. Al Khawaja, U. A comparative analysis of Painlevé, Lax pair and similarity transformation methods in obtaining the integrability conditions of nonlinear Schrödinger equations. *J. Phys. Math.* **2010**, *51*. [CrossRef]

4. Brugarino, T.; Sciacca, M. Integrability of an inhomogeneous nonlinear Schrödinger equation in Bose-Einstein condensates and fiber optics. *J. Math. Phys.* **2010**, *51*. [CrossRef]

5. Chen, H.H.; Liu, C.S. Solitons in nonuniform media. *Phys. Rev. Lett.* **1976**, *37*, 693–697. [CrossRef]

6. He, X.G.; Zhao, D.; Li, L.; Luo, H.G. Engineering integrable nonautonomous nonlinear Schrödinger equations. *Phys.
Rev. E* **2009**, *79*. [CrossRef] [PubMed]

7. He, J.; Li, Y. Designable integrability of the variable coefficient nonlinear Schrödinger equations. *Stud. Appl. Math.* **2010**, *126*, 1–15. [CrossRef]

8. He, J.S.; Charalampidis, E.G.; Kevrekidis, P.G.; Frantzeskakis, D.J. Rogue waves in nonlinear Schrödinger models with variable coefficients: Application to Bose-Einstein condensates. *Phys. Lett. A* **2014**, *378*, 577–583. [CrossRef]

9. Kruglov, V.I.; Peacock, A.C.; Harvey, J.D. Exact solutions of the generalized nonlinear Schrödinger equation with distributed coefficients. *Phys. Rev. E* **2005**, *71*. [CrossRef] [PubMed]

10. Marikhin, V.G.; Shabat, A.B.; Boiti, M.; Pimpinelli, F. Self-similar solutions of equations of the nonlinear Schrödinger type. *J. Exp. Theor. Phys.* **2000**, *90*, 553–561. [CrossRef]

11. Ponomarenko, S.A.; Agrawal, G.P. Do solitonlike self-similar waves exist in nonlinear optical media? *Phys. Rev. Lett.* **2006**, *97*. [CrossRef] [PubMed]

12. Ponomarenko, S.A.; Agrawal, G.P. Optical similaritons in nonlinear waveguides. *Opt. Lett.* **2007**, *32*, 1659–1661. [CrossRef] [PubMed]

13. Raghavan, S.; Agrawal, G.P. Spatiotemporal solitons in inhomogeneous nonlinear media. *Opt. Commun.* **2000**, *180*, 377–382. [CrossRef]

14. Serkin, V.N.; Hasegawa, A. Novel soliton solutions of the nonlinear Schrödinger equation model. *Phys. Rev. Lett.* **2000**, *85*. [CrossRef] [PubMed]

15. Serkin, V.; Matsumoto, M.; Belyaeva, T. Bright and dark solitary nonlinear Bloch waves in dispersion managed fiber systems and soliton lasers. *Opt. Commun.* **2001**, *196*, 159–171. [CrossRef]

16. Tian, B.; Shan, W.; Zhang, C.; Wei, G.; Gao, Y. Transformations for a generalized variable-coefficient nonlinear Schrödinger model from plasma physics, arterial mechanics and optical fibers with symbolic computation. *Eur. Phys. J. B* **2005**, *47*, 329–332. [CrossRef]

17. Dai, C.-Q.; Wang, Y.-Y.
Infinite generation of soliton-like solutions for complex nonlinear evolution differential equations via the NLSE-based constructive method. *Appl. Math. Comput.* **2014**, *236*, 606–612. [CrossRef]

18. Wang, M.; Shan, W.-R.; Lü, X.; Xue, Y.-S.; Lin, Z.-Q.; Tian, B. Soliton collision in a general coupled nonlinear Schrödinger system via symbolic computation. *Appl. Math. Comput.* **2013**, *219*, 11258–11264. [CrossRef]

19. Yu, F.; Yan, Z. New rogue waves and dark-bright soliton solutions for a coupled nonlinear Schrödinger equation with variable coefficients. *Appl. Math. Comput.* **2014**, *233*, 351–358. [CrossRef]

20. Fibich, G. *The Nonlinear Schrödinger Equation: Singular Solutions and Optical Collapse*; Springer: Berlin/Heidelberg, Germany, 2015.

21. Kevrekidis, P.G.; Frantzeskakis, D.J.; Carretero-González, R. *Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment*; Springer Series on Atomic, Optical and Plasma Physics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 45.
---PAGE_BREAK---

22. Suazo, E.; Suslov, S.K. Soliton-like solutions for nonlinear Schrödinger equation with variable quadratic Hamiltonians. *J. Russ. Laser Res.* **2010**, *33*, 63–83. [CrossRef]

23. Sulem, C.; Sulem, P.L. *The Nonlinear Schrödinger Equation*; Springer: New York, NY, USA, 1999.

24. Tao, T. Nonlinear dispersive equations: Local and global analysis. In *CBMS Regional Conference Series in Mathematics*; American Mathematical Society: Providence, RI, USA, 2006.

25. Zakharov, V.E.; Shabat, A.B. Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media. *Sov. Phys. JETP* **1972**, *34*, 62–69.

26. Suslov, S.K. On integrability of nonautonomous nonlinear Schrödinger equations. *Proc. Am. Math. Soc.* **2012**, *140*, 3067–3082. [CrossRef]

27. Talanov, V.I. Focusing of light in cubic media. *JETP Lett.* **1970**, *11*, 199–201.

28.
Perez-Garcia, V.M.; Torres, P.J.; Konotop, V.V. Similarity transformations for nonlinear Schrödinger equations with time-dependent coefficients. *Physica D* **2006**, *221*, 31–36. [CrossRef]

29. Ablowitz, M.; Hirooka, T. Resonant intrachannel pulse interactions in dispersion-managed transmission systems. *IEEE J. Sel. Top. Quantum Electron.* **2002**, *8*, 603–615. [CrossRef]

30. Marhic, M.E. Oscillating Hermite-Gaussian wave functions of the harmonic oscillator. *Lett. Nuovo Cim.* **1978**, *22*, 376–378. [CrossRef]

31. Carles, R. Nonlinear Schrödinger equation with time dependent potential. *Commun. Math. Sci.* **2010**, *9*, 937–964. [CrossRef]

32. López, R.M.; Suslov, S.K.; Vega-Guzmán, J.M. On a hidden symmetry of quantum harmonic oscillators. *J. Differ. Equ. Appl.* **2013**, *19*, 543–554. [CrossRef]

33. Aldaya, V.; Cossio, F.; Guerrero, J.; López-Ruiz, F.F. The quantum Arnold transformation. *J. Phys. A Math. Theor.* **2011**, *44*, 1–6. [CrossRef]

34. Feynman, R.P.; Hibbs, A.R. *Quantum Mechanics and Path Integrals*; McGraw-Hill: New York, NY, USA, 1965.

35. Cordero-Soto, R.; Lopez, R.M.; Suazo, E.; Suslov, S.K. Propagator of a charged particle with a spin in uniform magnetic and perpendicular electric fields. *Lett. Math. Phys.* **2008**, *84*, 159–178. [CrossRef]

36. Lanfear, N.; López, R.M.; Suslov, S.K. Exact wave functions for generalized harmonic oscillators. *J. Russ. Laser Res.* **2011**, *32*, 352–361. [CrossRef]

37. López, R.M.; Suslov, S.K.; Vega-Guzmán, J.M. Reconstructing the Schrödinger groups. *Phys. Scr.* **2013**, *87*, 1–6. [CrossRef]

38. Suazo, E.; Suslov, S.K. Cauchy problem for Schrödinger equation with variable quadratic Hamiltonians. 2011, to be submitted.

39. Suazo, E. Fundamental Solutions of Some Evolution Equations. Ph.D. Thesis, Arizona State University, Tempe, AZ, USA, September 2009.

40. Mahalov, A.; Suazo, E.; Suslov, S.K. Spiral laser beams in inhomogeneous media. *Opt.
Lett.* **2013**, *38*, 2763–2766. [CrossRef] [PubMed]

41. Koutschan, C.; Suazo, E.; Suslov, S.K. Fundamental laser modes in paraxial optics: From computer algebra and simulations to experimental observation. *Appl. Phys. B* **2015**, *121*, 315–336. [CrossRef]

42. Escorcia, J.; Suazo, E. Blow-up results and soliton solutions for a generalized variable coefficient nonlinear Schrödinger equation. Available online: http://arxiv.org/abs/1605.07554 (accessed on 24 May 2016).

43. Andrews, L.C.; Phillips, R.L. *Laser Beam Propagation through Random Media*, 2nd ed.; SPIE Press: Bellingham, WA, USA, 2005.

© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

# Coherent States of Harmonic and Reversed Harmonic Oscillator

Alexander Rauh

Department of Physics, University of Oldenburg, Oldenburg D-26111, Germany; alexander.rauh@uni-oldenburg.de; Tel.: +49-441-798-3460

Academic Editor: Young Suh Kim

Received: 16 January 2016; Accepted: 3 June 2016; Published: 13 June 2016

**Abstract:** A one-dimensional wave function is assumed whose logarithm is a quadratic form in the configuration variable with time-dependent coefficients. This trial function allows for general time-dependent solutions both of the harmonic oscillator (HO) and the reversed harmonic oscillator (RO). For the HO, apart from the standard coherent states, a further class of solutions is derived with a time-dependent width parameter. The width of the corresponding probability density fluctuates, or "breathes", periodically with the oscillator frequency. In the case of the RO, one also obtains normalized wave packets which, however, show diffusion through exponential broadening with time.
At the initial time, the integration constants give rise to complete sets of coherent states in the three cases considered. The results are applicable to the quantum mechanics of the Kepler-Coulomb problem when transformed to the model of a four-dimensional harmonic oscillator with a constraint. In the classical limit, as was shown recently, the wave packets of the RO basis generate the hyperbolic Kepler orbits, and, by means of analytic continuation, the elliptic orbits are also obtained quantum mechanically.

**Keywords:** inverted harmonic oscillator; harmonic trap; Kepler-Coulomb problem; Kustaanheimo-Stiefel transformation

## 1. Introduction

Coherent states of the harmonic oscillator (HO) were introduced already at the beginning of wave mechanics [1]. Much later, such states were recognized as a useful basis to describe radiation fields [2] and optical correlations [3]. The reversed harmonic oscillator (RO) refers to a model with repulsive harmonic forces and was discussed in [4] in the context of irreversibility. Recently, in [5], which also includes historical remarks, the RO was applied to describe nonlinear optical phenomena. As mentioned in [5], the term “inverted harmonic oscillator” (IO) originally refers to a model with negative kinetic and potential energy, as proposed in [6]. Nevertheless, most articles under the headline IO actually consider the RO model; see, e.g., [7–9].

The RO model can formally be obtained by assuming a purely imaginary oscillator frequency. It is then no longer possible to construct coherent states by means of creation and annihilation operators; for a textbook introduction, see [10]. In [9], the RO was generalized by the assumption of a time-dependent mass and frequency. The corresponding Schrödinger equation was solved by means of an algebraic method with the aim of describing quantum tunneling.
In the present study, emphasis is laid on the derivation of complete sets of coherent states both for the HO and the RO model, together with their time evolution. In the case of the HO, in addition to the standard coherent states, a further function set is found with a time-dependent width parameter. Both in the HO and RO cases, the integration constants of the time-dependent solutions induce complete function sets which, at time $t = 0$, are isomorphic to the standard coherent states of the HO.
---PAGE_BREAK---

In Section 6, an application to the quantum mechanics of the Kepler-Coulomb problem will be briefly discussed. As was first observed by Fock [11], the underlying four-dimensional rotation symmetry of the non-relativistic Hamiltonian of the hydrogen atom permits the transformation to the problem of four isotropic harmonic oscillators with a constraint; for applications see, e.g., [12–14]. The transformation proceeds conveniently by means of the Kustaanheimo-Stiefel transformation [15]. In [14], the elliptic Kepler orbits were derived in the classical limit on the basis of coherent HO states. By means of coherent RO states, the classical limit for hyperbolic Kepler orbits was achieved in [16,17], whereby the elliptic regime could be obtained by analytic continuation from the hyperbolic side. Recently, by means of the same basis, a first-order quantum correction to Kepler's equation was derived in [18], whereby the smallness parameter was defined by the reciprocal angular momentum in units of $\hbar$.

As compared to the classical elliptic Kepler orbits, the derivation of hyperbolic orbits from quantum mechanics was accomplished only quite recently [16,17]. For this achievement, it was crucial to devise a suitable time-dependent ansatz for the wave function, see (1) below, in order to construct coherent RO states.
As it turns out, the wave function (1) also contains the usual coherent HO states and, unexpectedly, a further set of coherent states, which we call type-II states. The latter are characterized by a time-dependent width parameter and are solutions of the time-dependent Schrödinger equation of the HO. Section 4 contains the derivation. Essentially, the type-II states offer a disposable width parameter which allows us, for instance, to describe arbitrarily narrowly peaked initial states together with their time evolution in a harmonic potential. In this paper, a unified derivation of the coherent states of the HO and the RO, including the type-II HO states, is presented. Furthermore, the connection of the HO and RO with the quantum mechanics of the Kepler-Coulomb problem is briefly discussed in the context of the derivation of the classical Kepler orbits from quantum mechanics.

## 2. Introducing a Trial Wave Function

In order to solve the Schrödinger equation for the harmonic oscillator (HO) and the reversed oscillator (RO), a trial wave function of Gaussian type is assumed as follows

$$ \psi(x,t) = C_0 \exp \left[ C(t) + B(t)x - \Gamma(t)x^2 \right], \quad x \in \mathbf{R}, \quad \text{Real}(\Gamma) > 0, \qquad (1) $$

where $C, B, \Gamma$ are complex functions of time $t$ and $C_0$ is a time-independent normalization constant. When the Schrödinger operator $[\mathrm{i}\hbar\partial_t - H]$ is applied to $\psi$ for a Hamiltonian with harmonic potential, the wave function $\psi$ is reproduced up to a factor which is a quadratic polynomial and must vanish identically in the configuration variable $x$:

$$ 0 = p_0(t) + p_1(t)x + p_2(t)x^2. \qquad (2) $$

The conditions $p_0 = 0$, $p_1 = 0$, and $p_2 = 0$ give rise to three first-order differential equations for the functions $C(t)$, $B(t)$, and $\Gamma(t)$. In the following, we examine two cases for the HO: type-I and type-II states are characterized by a constant and a time-dependent function $\Gamma$, respectively.
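The reduction to the polynomial condition (2) can be verified symbolically. The SymPy sketch below (symbol names are ours; the constant $C_0$ is dropped since it cancels) applies $[i\hbar\partial_t - H]$ to the ansatz (1) for the HO Hamiltonian and confirms that the residual, divided by $\psi$, is a quadratic polynomial in $x$ whose leading coefficient gives a Riccati-type equation for $\Gamma(t)$:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
hbar, m, w = sp.symbols('hbar m omega', positive=True)
C = sp.Function('C')(t); B = sp.Function('B')(t); G = sp.Function('Gamma')(t)

psi = sp.exp(C + B*x - G*x**2)                    # trial function (1)
Hpsi = -hbar**2/(2*m)*sp.diff(psi, x, 2) + m*w**2/2*x**2*psi
q = sp.expand((sp.I*hbar*sp.diff(psi, t) - Hpsi) / psi)
poly = sp.Poly(q, x)                              # the polynomial (2)

assert poly.degree() == 2
p2, p1, p0 = poly.all_coeffs()                    # coefficients of x^2, x^1, x^0
# p2 = 0 is a Riccati-type equation for Gamma(t)
assert sp.simplify(p2 - (-sp.I*hbar*sp.diff(G, t) + 2*hbar**2/m*G**2 - m*w**2/2)) == 0
```

Setting $p_1 = 0$ and $p_0 = 0$ likewise reproduces the first-order equations for $B(t)$ and $C(t)$; in the scaled variables used below, these conditions become Equations (11) and (16).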
In the case of the RO, only a time-dependent $\Gamma$ leads to a solution. By a suitable choice of the parameters, the ansatz (1) solves the time-dependent Schrödinger equation both for the HO and the RO Hamiltonian

$$ H = p^2/(2m) + (m\omega^2/2)x^2 \quad \text{and} \quad H_{\Omega} = p^2/(2m) - (m\Omega^2/2)x^2, \quad \omega, \Omega > 0, $$

respectively.

## 3. Standard (Type-I) Coherent States of the HO

In the following, the time-dependent solutions are derived, within the trial function scheme, for the Hamiltonian

$$ H = p^2/(2m) + (m\omega^2/2)x^2 = (\hbar\omega/2) [-\partial_\zeta^2 + \zeta^2], \qquad (3) $$
---PAGE_BREAK---

where $\zeta = ax$ is dimensionless with $a^2 = m\omega/\hbar$. For later comparison, we list the standard definition of coherent states from the textbook [10], see Equations (4.72) and (4.75):

$$|z\rangle = \exp\left[-\frac{1}{2}zz^*\right] \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}} |n\rangle, \quad (4)$$

$$\psi_z(\zeta) = \pi^{-1/4} \exp\left[-\frac{1}{2}(zz^* + z^2)\right] \exp\left[-\frac{1}{2}\zeta^2 + \sqrt{2}\zeta z\right], \quad \zeta = ax, \quad a^2 = \frac{m\omega}{\hbar}, \quad (5)$$

where $\psi_z(\zeta) = \langle\zeta|z\rangle$, $|n\rangle$ denotes the $n$-th energy eigenvector, and the star superscript means complex conjugation. The time evolution gives rise to, see [10],

$$|z,t\rangle = \exp[-i\omega t/2] |z \exp[-i\omega t]\rangle, \quad (6)$$

$$\psi_z(\zeta, t) = \exp[-i\omega t/2] \psi_{(z \exp[-i\omega t])}(\zeta). \quad (7)$$

The state $|z\rangle$ is minimal with respect to the position-momentum uncertainty product $\Delta x \Delta p$, and there exists the following completeness property, see [3],

$$\frac{1}{\pi} \int_0^\infty u du \int_0^{2\pi} d\varphi |z\rangle\langle z| = \sum_n |n\rangle\langle n|, \quad z = u \exp[i\varphi]. \quad (8)$$

The relation (8) follows immediately from the definition (4).
An equivalent statement is:

$$\frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \delta(\zeta_2 - \zeta_1), \quad (9)$$

which corresponds to the completeness of the energy eigenfunctions of the harmonic oscillator. In Appendix B, we reproduce a proof of (9), which is appropriate, since the proof has to be extended to the modified coherent states in the type-II HO and the RO cases.

In terms of the scaled variables $\zeta$ and $\tau = t\omega$, the trial ansatz reads:

$$\psi(\zeta, \tau) = C_0 \exp[c(\tau) + \beta(\tau)\zeta - \gamma(\tau)\zeta^2/2], \quad (10)$$

where $c, \beta, \gamma$ are dimensionless functions of $\tau$, and the re-scaling factor of the probability density, $1/\sqrt{a}$, is taken into the normalization constant $C_0$.

We assume that $\gamma = \gamma_0 = \text{const}$. Then, the polynomial (2) gives rise to the equations:

$$\gamma_0^2 = 1, \quad i\beta'(\tau) = \beta(\tau), \quad 2ic'(\tau) = 1 - \beta^2(\tau), \quad (11)$$

which, together with the condition $\text{Real}(\Gamma) > 0$, implies that $\gamma_0 = 1$ is fixed. The further solutions emerge easily as:

$$\beta(\tau) = C_2 \exp[-i\tau], \quad c(\tau) = -i\tau/2 - (C_2^2/4) \exp[-2i\tau] + C_3, \quad (12)$$

where $C_2$ and $C_3$ are complex integration constants. A comparison with (5), at $t=0$, suggests setting:

$$C_2 = \sqrt{2}z, \quad C_3 = -(1/2)zz^*, \quad (13)$$

which specifies the functions $\beta$ and $c$ as follows:

$$\beta(\tau) = \sqrt{2}(z \exp[-i\tau]), \quad c(\tau) = -i\tau/2 - (1/2)[zz^* + (z \exp[-i\tau])^2]. \quad (14)$$
---PAGE_BREAK---

The normalization integral with respect to $\zeta$ amounts to the condition

$$C_0^2 \sqrt{\pi} \exp[zz^*] = 1; \qquad (15)$$

hence (7) with (5) is reproduced.

**4.
Type-II Solutions of the Harmonic Oscillator**

With $\gamma$ being a function of time, one obtains the following differential equations with prime denoting the derivative with respect to the scaled time $\tau$:

$$i\gamma' = \gamma^2 - 1, \quad i\beta' = \gamma\beta, \quad 2ic' = \gamma - \beta^2. \qquad (16)$$

The solution for $\gamma$ is

$$\gamma(\tau) = \frac{\exp(2i\tau) - C_1}{\exp(2i\tau) + C_1}, \quad C_1 = \frac{1-\gamma_0}{1+\gamma_0}, \quad \gamma_0 = \gamma(0). \qquad (17)$$

Splitting $\gamma$ into its real and imaginary parts, one can write

$$\begin{aligned} \gamma(\tau) &= \gamma_R + i\gamma_I; & \gamma_R &= (1-C_1^2)N_1^{-1}, & \gamma_I &= 2C_1N_1^{-1}\sin(2\tau), \\ N_1(\tau) &= 1+C_1^2+2C_1\cos(2\tau) = 4(1+\gamma_0)^{-2}[1+(\gamma_0^2-1)\sin^2(\tau)]. & & & \end{aligned} \qquad (18)$$

In order that the wave function be square integrable, $\gamma_R$ has to be positive, which implies that

$$C_1^2 < 1, \text{ or equivalently } \gamma_0 > 0. \qquad (19)$$

The initial value $\gamma(t=0) = \gamma_0 > 0$ emerges as a disposable parameter.

The probability density, $P = |\psi(\zeta, \tau)|^2$, is characterized by a width of order of magnitude $d = 1/\sqrt{\gamma_R}$:

$$d(\tau) = \sqrt{[1 + (\gamma_0^2 - 1) \sin^2(\tau)] / \gamma_0}. \qquad (20)$$

Obviously, the width fluctuates, or "breathes", periodically with time. Of course, this is not a breathing mode as observed in systems of confined interacting particles; see, e.g., [19,20].

Integration of the $\beta$ equation leads to

$$\beta = C_2 \exp(i\tau) [\exp(2i\tau) + C_1]^{-1} = C_2 N_1^{-1} [\exp(-i\tau) + \exp(i\tau) C_1]. \qquad (21)$$

Later on, the complex integration constant $C_2 = A_2 + iB_2$ will serve as a state label. The third differential equation of (16) amounts to

$$c(\tau) = i\tau/2 - C_2^2 [4(\exp(2i\tau) + C_1)]^{-1} - (1/2) \ln \left( \exp(2i\tau) + C_1 \right) + C_3.
\qquad (22)$$

By reasons explained in Appendix A, we dispose of the integration constant $C_3$ as follows

$$C_3 = -(1 + \gamma_0)(8\gamma_0)^{-1}(A_2^2 + \gamma_0 B_2^2), \quad C_2 = A_2 + iB_2. \qquad (23)$$

In Appendix A, the probability density $P$ is derived in the following form

$$P(\xi, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp[-\gamma_R (\xi - \beta_R / \gamma_R)^2], \qquad (24)$$
---PAGE_BREAK---

where the time-dependent functions $\gamma_R$ and $N_1$ are defined through (17) and (18), and $\beta_R$ comes out as

$$ \beta_R(\tau) = \frac{2}{(1 + \gamma_0) N_1} [A_2 \cos(\tau) + \gamma_0 B_2 \sin(\tau)]. \quad (25) $$

The complex integration constant $C_2$ corresponds to the familiar complex quantum number $z$ in the case of the standard coherent states; hence, the real numbers $A_2, B_2$ characterize different states. The normalization constant $C_0$ obeys the following condition, see Appendix A,

$$ 1 = (1/2) C_0^2 (1 + \gamma_0) \sqrt{\pi / \gamma_0}. \quad (26) $$

## 4.1. Completeness of Type-II States

Combining the above results, we write the time-dependent wave function as follows:

$$ \psi(\xi, \tau) = \frac{C_0 e^{i\tau/2}}{\sqrt{\exp(2i\tau) + C_1}} \exp \left[ C_3 - \frac{C_2^2 (\exp(-2i\tau) + C_1)}{4N_1} + \beta(\tau)\xi - \gamma(\tau)\frac{\xi^2}{2} \right], \quad (27) $$

where $\gamma$, $\beta$, and $C_3$ are defined in (18), (21), and (23), respectively. Let us consider $\psi$ at zero time:

$$ \psi(\xi, 0) = \frac{C_0}{\sqrt{1+C_1}} \exp \left[ C_3 - \frac{C_2^2}{4(1+C_1)} + C_2(1+\gamma_0)\xi/2 - \gamma_0\xi^2/2 \right]. \quad (28) $$

In (28), we set $\tilde{\xi} = \sqrt{\gamma_0}\,\xi$ to write:

$$ \psi(\tilde{\xi}, 0) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ C_3 - \frac{C_2^2}{4(1+C_1)} + \frac{C_2(1+\gamma_0)}{2\sqrt{\gamma_0}}\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right].
\quad (29) $$

Now we substitute the complex variable $z$ for the integration constant $C_2$ as follows:

$$ C_2 \frac{1 + \gamma_0}{2\sqrt{\gamma_0}} = \sqrt{2}z \quad (30) $$

and obtain:

$$ \psi(\tilde{\xi}, 0) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ C_3 - z^2 \frac{\gamma_0}{1+\gamma_0} + \sqrt{2}z\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right]. \quad (31) $$

In $C_3$, given in (23), we make the following replacements, which are induced by (30):

$$ A_2 \rightarrow \kappa(z+z^*), \quad B_2 \rightarrow -i\kappa(z-z^*), \quad \kappa = \frac{\sqrt{2\gamma_0}}{1+\gamma_0}. \quad (32) $$

There occur some nice cancellations, and one obtains:

$$ \psi_z(\tilde{\xi}) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ -\frac{1}{2}(zz^* + z^2) + iD + \sqrt{2}z\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right], \quad D = \frac{1-\gamma_0}{2(1+\gamma_0)} \operatorname{Im}(z^2). \quad (33) $$

Comparison with (5) shows that the wave function (33) has the same structure apart from the purely imaginary phase $iD$. The latter drops out in the completeness proof, see (A15) in Appendix B. As a consequence, the states (33) form a complete set of states with respect to the state label $z$.

At $\tau=0$, the states (33) differ from the standard coherent states (5) by the state-dependent phase $D$, through the variables $\zeta$ and $\tilde{\xi}$, which denote the differently scaled space variable $x$, and also through the different definition of the quantum number $z$, which for simplicity was denoted by the same symbol in (30). Essentially, type-I and type-II states differ by their time evolution and by the width parameter $\gamma_0$, which is equal to $a^2 = m\omega/\hbar$ and to an arbitrary positive number, respectively.
---PAGE_BREAK---

## 4.2. Mean Values and Uncertainty Product

In the following, we list mean values for the time-dependent states (27), including the position-momentum uncertainty product $\Delta_{xp}$.
They are periodic in time with the oscillator angular frequency $\omega \equiv 2\pi/T$. The uncertainty product is minimal at the discrete times $t_n = (1/4)nT$, $n = 0, 1, \dots$. For comparison, the uncertainty product of the traditional coherent states is minimal at all times [10]. We use the abbreviations $(\Delta_x)^2 = \langle x^2 \rangle - \langle x \rangle^2$ and $(\Delta_v)^2 = \langle v^2 \rangle - \langle v \rangle^2$ for the mean square deviations of position and velocity, respectively. + +$$ \langle x(\tau) \rangle = (1/a)(1 + \gamma_0)(2\gamma_0)^{-1} [A_2 \cos(\tau) + B_2\gamma_0 \sin(\tau)]; \quad (34) $$ + +$$ \langle v(\tau) \rangle = \hbar a(1 + \gamma_0)(2m\gamma_0)^{-1} [-A_2 \sin(\tau) + \gamma_0 B_2 \cos(\tau)]; \quad (35) $$ + +$$ (\Delta_x)^2 = (4a^2\gamma_0)^{-1} [1 + \gamma_0^2 + (1-\gamma_0^2)\cos(2\tau)]; \quad (36) $$ + +$$ (\Delta_v)^2 = \hbar^2 a^2 (4m^2\gamma_0)^{-1} [1 + \gamma_0^2 + (\gamma_0^2 - 1)\cos(2\tau)]; \quad (37) $$ + +$$ \langle H \rangle = \hbar\omega(8\gamma_0^2)^{-1} \left[ (1+\gamma_0)^2 (A_2^2 + \gamma_0^2 B_2^2) + 2\gamma_0(1+\gamma_0^2) \right]. \quad (38) $$ + +It is noticed that the mean square deviations do not depend on the state label ($A_2, B_2$). The uncertainty product follows immediately from (36) and (37) as + +$$ \Delta_{xp} := (\Delta_x)^2 (\Delta_p)^2 = m^2 (\Delta_x)^2 (\Delta_v)^2 = \frac{\hbar^2}{16\gamma_0^2} [(1+\gamma_0^2)^2 - (1-\gamma_0^2)^2 \cos^2(2\tau)]. \quad (39) $$ + +In the special case $\gamma_0 = 1$, the product is always minimal. As a matter of fact, $\gamma_0 = 1$ is the type-I case of Section 3. + +By (38), the mean energy does not depend on time and is positive definite, as it must be. The limit to the standard case with $\gamma_0 = 1$ gives the known result + +$$ \langle H \rangle_{\gamma_0=1} = \hbar\omega(zz^* + 1/2), \quad (40) $$ + +and the state with $z=0$ is the ground state of the HO with zero point energy $\hbar\omega/2$. + +# 5. 
Wave Packet Solutions for the RO + +For convenience, we will keep the same symbols for the trial functions $\gamma(\tau)$, $\beta(\tau)$, and $c(\tau)$. Setting $\omega = i\Omega$ with $\Omega > 0$ implies that $a^2 = -m\Omega/\hbar$. In the coherent state (5), the exponential part, $-\zeta^2/2 = -(m\omega/\hbar)x^2/2$, is then replaced by $(m\Omega/\hbar)x^2/2$, which precludes normalization. + +We introduce $1/a_\Omega$ as the new length parameter and define the dimensionless magnitudes + +$$ \zeta = a_\Omega x, \quad \tau = t\Omega, \quad \text{with } a_\Omega^2 = m\Omega/\hbar. \quad (41) $$ + +The Schrödinger equation, with the ansatz (10), has to be solved for the RO Hamiltonian + +$$ H_{\Omega} = p^2/(2m) - (m\Omega^2/2) x^2 = -(\hbar\Omega/2) [\partial_{\zeta}^{2} + \zeta^{2}]. \quad (42) $$ + +From (2), the following differential equations result: + +$$ i\gamma'(\tau) = 1 + \gamma^2(\tau), \quad i\beta'(\tau) = \gamma(\tau)\beta(\tau), \quad 2ic'(\tau) = \gamma(\tau) - \beta^2(\tau), \quad (43) $$ +---PAGE_BREAK--- + +where, as compared with the HO case in (16), only the equation for $\gamma$ differs. Beginning with $\gamma$, one successively obtains the following solutions + +$$ \gamma(\tau) = -i \tanh(\tau + iC_1), \quad (44) $$ + +$$ \beta(\tau) = C_2 / \cosh(\tau + i C_1), \quad (45) $$ + +$$ c(\tau) = C_3 - (1/2) \ln(\cosh(\tau + i C_1)) + (i/2) C_2^2 \tanh(\tau + i C_1), \quad (46) $$ + +where $C_1, C_2, C_3$ are integration constants. We assume that + +$$ \gamma_0 \equiv \gamma(0) = \tan(C_1) > 0, \quad 0 < C_1 < \pi/2, \quad (47) $$ + +which implies that + +$$ \cos(C_1) = (1 + \gamma_0^2)^{-1/2}, \quad \sin(C_1) = \gamma_0 (1 + \gamma_0^2)^{-1/2}. \quad (48) $$ + +In order to decompose the functions $c(\tau)$, $\beta(\tau)$, $\gamma(\tau)$ into their real and imaginary parts, we take over the following abbreviations from [16] + +$$ f(\tau) = \cosh(\tau) - i\gamma_0 \sinh(\tau), \quad h(\tau) = [ff^*]^{-1}. 
\quad (49) $$ + +After the decompositions $\beta = \beta_R + i\beta_I$, $\gamma = \gamma_R + i\gamma_I$, $C_2 = A_2 + iB_2$, we infer from (44) to (46): + +$$ \gamma_R = h(\tau)\gamma_0, \quad \gamma_I = -(h(\tau)/2)(1+\gamma_0^2)\sinh(2\tau); \quad (50) $$ + +$$ \beta_R = h(\tau) \sqrt{1 + \gamma_0^2} [A_2 \cosh(\tau) + \gamma_0 B_2 \sinh(\tau)], $$ + +$$ \beta_I = h(\tau) \sqrt{1 + \gamma_0^2} [B_2 \cosh(\tau) - \gamma_0 A_2 \sinh(\tau)]; \quad (51) $$ + +$$ \exp[c(\tau)] = [\cosh(\tau + i C_1)]^{-1/2} \exp[C_3 - C_2^2 \gamma(\tau)/2]. \quad (52) $$ + +According to (50), $\gamma_R$ is larger than zero, which makes the wave function (10) a normalizable wave packet. The probability density reads: + +$$ P(\zeta, \tau) = C_0^2 \exp[c + c^* + 2\beta_R\zeta - \gamma_R\zeta^2]. \quad (53) $$ + +Integration with respect to $\zeta$ leads to the normalization condition + +$$ 1 = C_0^2 \sqrt{\pi/\gamma_R} \exp[c(\tau) + c^*(\tau) + \beta_R^2/\gamma_R]. \quad (54) $$ + +The normalization constant $C_0$ was determined in [16] for real constants $C_2$. With $C_2 = A_2 + iB_2$, we dispose of the integration constant $C_3$ as + +$$ C_3 = -(1/2)(A_2^2/\gamma_0 + B_2^2\gamma_0) \quad (55) $$ + +to obtain in a straightforward manner + +$$ C_0^2 = \left[\pi(\gamma_0^{-1} + \gamma_0)\right]^{-1/2}, \quad (56) $$ + +which is a time-independent condition, as it must be. + +With the aid of elementary trigonometric manipulations and the normalization constant $C_0$ given in (56), the wave function can be written as follows: + +$$ \psi(\zeta, \tau) = (\gamma_0/\pi)^{1/4} \sqrt{h(\tau)f(\tau)} \exp[C_3 - (1/2)C_2^2\gamma(\tau) + \beta(\tau)\zeta - \gamma(\tau)\zeta^2/2]. \quad (57) $$ +---PAGE_BREAK--- + +5.1. 
Coherent States of the RO + +As before, let us consider the wave function at time $t = 0$, where in particular $h = f = 1$: + +$$ +\psi(\zeta, 0) \equiv \psi(\zeta, \tau = 0) = (\gamma_0 / \pi)^{1/4} \exp \left[ C_3 - \frac{1}{2} C_2^2 \gamma_0 + C_2 \sqrt{1 + \gamma_0^2}\, \zeta - \gamma_0 \zeta^2 / 2 \right]. \quad (58) +$$ + +After the re-scaling $\zeta \rightarrow \tilde{\zeta}$ with $\tilde{\zeta} = \sqrt{\gamma_0} \zeta$, one obtains + +$$ +\Psi(\tilde{\zeta}, 0) = \pi^{-1/4} \exp \left[ C_3 - \frac{1}{2} C_2^2 \gamma_0 + C_2 \sqrt{(1+\gamma_0^2)/\gamma_0} \tilde{\zeta} - \frac{\tilde{\zeta}^2}{2} \right]. \quad (59) +$$ + +In view of the standard HO wave function (5), we replace the integration constant $C_2$ by $z$: + +$$ +C_2 \sqrt{(1 + \gamma_0^2) / \gamma_0} = \sqrt{2} z \tag{60} +$$ + +and obtain + +$$ +\Psi_z(\tilde{\zeta}) = \pi^{-1/4} \exp \left[ C_3 - \frac{\gamma_0^2 z^2}{1+\gamma_0^2} + \sqrt{2} z \tilde{\zeta} - \frac{\tilde{\zeta}^2}{2} \right]. \quad (61) +$$ + +In $C_3$, given in (55), the relation (60) gives rise to the substitutions + +$$ +A_2 \rightarrow \kappa_1(z+z^*), \quad B_2 \rightarrow -i\kappa_1(z-z^*), \quad \kappa_1 = (1/2)\sqrt{2\gamma_0/(1+\gamma_0^2)}, \qquad (62) +$$ + +and hence to + +$$ +C_3 = [4(1 + \gamma_0^2)]^{-1} [(\gamma_0^2 - 1)(z^2 + z^{*2}) - 2(1 + \gamma_0^2)zz^*]. \quad (63) +$$ + +After some elementary re-arrangements, one finds + +$$ +\Psi_z(\tilde{\zeta}) = \frac{1}{\pi^{1/4}} \exp \left[ -\frac{1}{2}(zz^* + z^2) + iD_1 + \sqrt{2}z\tilde{\zeta} - \frac{\tilde{\zeta}^2}{2} \right], \quad D_1 = \frac{1-\gamma_0^2}{2(1+\gamma_0^2)} \operatorname{Im}(z^2). \quad (64) +$$ + +Apart from the purely imaginary phase $i D_1$, the wave functions $\Psi_z$ are the same as the standard coherent states (5). Since in the completeness proof the $D_1$ phase drops out, see (A15) in Appendix B, the states $\Psi_z$ form a complete function set. + +5.2. 
Mean Values + +With the aid of Mathematica [21], we get the following mean values for the position $x$, the velocity $v$, +their mean square deviations $(\Delta x)^2$, $(\Delta v)^2$, and the mean energy $\langle H_{\Omega} \rangle$: + +$$ +\langle x \rangle = (a_{\Omega})^{-1} \sqrt{1 + \gamma_0^{-2}} [A_2 \cosh(\tau) + \gamma_0 B_2 \sinh(\tau)]; \quad (65) +$$ + +$$ +(\Delta x)^2 = (2a_\Omega^2 \gamma_0)^{-1} [\cosh^2(\tau) + \gamma_0^2 \sinh^2(\tau)]; \quad (66) +$$ + +$$ +\langle v \rangle = (\hbar a_{\Omega}/m) \sqrt{1 + \gamma_0^{-2}} [A_2 \sinh(\tau) + \gamma_0 B_2 \cosh(\tau)]; \quad (67) +$$ + +$$ +(\Delta v)^2 = (\hbar a_{\Omega} / (2m))^2 \gamma_0^{-1} [\gamma_0^2 - 1 + (1 + \gamma_0^2) \cosh(2\tau)]; \quad (68) +$$ + +$$ +\langle H_{\Omega} \rangle = \hbar\Omega(4\gamma_0)^{-1}[\gamma_0^2 - 1 + 2(\gamma_0 + \gamma_0^{-1})(\gamma_0^2 B_2^2 - A_2^2)]. \quad (69) +$$ + +The mean energy does not depend on time, as it must be. With the aid of (62), the mean energy +can also be expressed in terms of the complex state label $z$. Since $A_2$ and $B_2$ are arbitrary real +---PAGE_BREAK--- + +numbers, the mean energy can have any positive or negative value. From (66) and (68) one infers the +position-momentum uncertainty product $\Delta_{xp}$ as + +$$ +\Delta_{xp}(\tau) = \hbar^2 / (8\gamma_0^2) \left[ \cosh^2(\tau) + \gamma_0^2 \sinh^2(\tau) \right] \left[ \gamma_0^2 - 1 + (1+\gamma_0^2) \cosh(2\tau) \right]. \quad (70) +$$ + +This product obeys the inequality + +$$ +\Delta_{xp}(\tau) > \Delta_{xp}(0) = \frac{\hbar^2}{4}, \quad \tau > 0. \tag{71} +$$ + +Obviously, the uncertainty product is minimal at $\tau = 0$, that is, for the coherent states (64). +By (66), the wave packets broaden exponentially with time. + +**6. 
Application to the Kepler-Coulomb Problem** + +The connection of the non-relativistic Hamiltonian for the hydrogen atom with the model +of a four-dimensional oscillator is conveniently achieved by means of the Kustaanheimo-Stiefel +transformation [15], which we write as follows [16,22] + +$$ +\begin{align*} +u_1 &= \sqrt{r} \cos(\theta/2) \cos(\varphi - \Phi); & u_2 &= \sqrt{r} \cos(\theta/2) \sin(\varphi - \Phi); \\ +u_3 &= \sqrt{r} \sin(\theta/2) \cos(\Phi); & u_4 &= \sqrt{r} \sin(\theta/2) \sin(\Phi), +\end{align*} +\tag{72} +$$ + +where $r, \theta, \varphi$ are three-dimensional polar coordinates with $r > 0, 0 < \theta < \pi, 0 \le \varphi < 2\pi$, +and $0 \le \Phi < 2\pi$ generates the extension to the fourth dimension. The vector **u** = {$u_1, u_2, u_3, u_4$} +covers $\mathbf{R}^4$ and the volume elements are related as [16] + +$$ +du_1 du_2 du_3 du_4 = (1/8) r \sin(\theta) dr d\theta d\varphi d\Phi. \quad (73) +$$ + +The stationary Schrödinger equation $H\psi = E\psi$ for the Hamiltonian $H = p^2/(2m) - \lambda/r$ is +transformed into the following form of a four-dimensional harmonic oscillator [14]: + +$$ +H_u \Psi(\mathbf{u}) = \lambda \Psi(\mathbf{u}), \quad H_u = -\frac{\hbar^2}{8m} \Delta_u - E \mathbf{u} \cdot \mathbf{u}, \quad \Delta_u = \partial_{u_1}^2 + \dots + \partial_{u_4}^2 +\qquad (74) +$$ + +with the constraint + +$$ +\partial_{\Phi} \Psi(\mathbf{u}) = 0. \tag{75} +$$ + +It should be noticed that, by (72), the squared components $u_i^2$ have the dimension of a length rather than +of a length squared. As a consequence, in the evolution equation $i\hbar\partial_\sigma\Psi = H_u\Psi$, the parameter $\sigma$, which has +the dimension time/length, is not the time parameter of the original problem. For negative energies +with $E<0$, four-dimensional coherent oscillator states (of type-I) were used in [14] to show that elliptic +orbits emerge in the classical limit whereby $\sigma$ turns out to be proportional to the eccentric anomaly. 
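As a quick numerical sanity check of the transformation (72), the following numpy sketch (the helper name `ks_map` and the sample values are ours, not from the paper) verifies that the four oscillator coordinates satisfy $\mathbf{u} \cdot \mathbf{u} = r$, which is what turns the Coulomb term $-\lambda/r$ into the oscillator form appearing in (74):

```python
import numpy as np

# Kustaanheimo-Stiefel map of Equation (72); hypothetical helper for illustration.
def ks_map(r, theta, phi, Phi):
    """Map polar coordinates (r, theta, phi) plus the angle Phi to u in R^4."""
    sr = np.sqrt(r)
    u1 = sr * np.cos(theta / 2) * np.cos(phi - Phi)
    u2 = sr * np.cos(theta / 2) * np.sin(phi - Phi)
    u3 = sr * np.sin(theta / 2) * np.cos(Phi)
    u4 = sr * np.sin(theta / 2) * np.sin(Phi)
    return np.array([u1, u2, u3, u4])

rng = np.random.default_rng(0)
for _ in range(5):
    r = rng.uniform(0.1, 10.0)
    theta = rng.uniform(0.0, np.pi)
    phi, Phi = rng.uniform(0.0, 2 * np.pi, size=2)
    u = ks_map(r, theta, phi, Phi)
    # u.u = r: cos^2(theta/2) + sin^2(theta/2) = 1 after the half-angle split
    assert np.isclose(u @ u, r)
```

The check works for any value of $\Phi$, which reflects the constraint (75): the physical three-dimensional point does not depend on the fourth angle.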
+ +In the spectrum of positive energies (ionized states of the hydrogen atom) with $E > 0$, +coherent states of the RO were constructed in [16] and gave rise to hyperbolic orbits in the classical limit; +by analytic continuation, also the elliptic orbits were derived from the RO states in the classical +limit [17]. In addition, Kepler's equation was obtained by the assumption that time-dependence enters +through the curve parameter $\sigma$ only. Recently [18], based on the coherent RO states, the first-order +quantum correction to Kepler's equation could be established for the smallness parameter $\epsilon = \hbar/L$, +where $L$ denotes the orbital angular momentum. + +**7. Conclusions** + +Besides the standard coherent states of the harmonic oscillator (HO), a further solution family of +the time-dependent Schrödinger equation was derived with the following properties: (i) The functions +are normalizable, of Gaussian type, and contain a disposable width parameter. The latter allows us, +for instance, to use arbitrarily concentrated one-particle states independently of the parameters of +---PAGE_BREAK--- + +a harmonic trap; (ii) The functions are complete and isomorphic to the standard coherent states at time $t=0$; (iii) The states minimize the position-momentum uncertainty product at the discrete times $T_n = n\pi/(2\omega)$, $n=0,1,...$; (iv) The width of the wave packets "breathes" periodically with period $T/2 = \pi/\omega$; (v) There is no diffusion; $T = 2\pi/\omega$ is the recurrence time of the states. + +In the case of the reversed harmonic oscillator (RO), there exists only one family of time-dependent solutions. They share the properties (i) and (ii) of the type-II HO states, and (iii) is fulfilled at time $t=0$ only. There is no recurrence; instead, there is diffusion with a broadening which increases exponentially with time. The application to the Kepler-Coulomb problem was briefly discussed. 
The HO coherent states of type-I and the RO coherent states served as a basis to derive, in the classical limit, the elliptic Kepler orbits [14] and the hyperbolic ones [16,17], respectively. + +**Acknowledgments:** The author expresses his gratitude to Jürgen Parisi for his constant encouragement and support. He also profited from his critical reading of the manuscript. + +**Conflicts of Interest:** The author declares no conflict of interest. + +## Appendix A. Probability Density for Type-II States + +We have to decompose the functions $\beta(\tau)$ and $c(\tau)$, as given by (21) and (22), into their real and imaginary parts. To this end, we set $C_2 = A_2 + iB_2$ with real constants $A_2$ and $B_2$ and $\beta = \beta_R + i\beta_I$. Using the definitions of $N_1$ and $C_1$ in terms of $\gamma_0$, we obtain + +$$ +\begin{aligned} +\beta_R &= \frac{1 + \gamma_0}{2} \frac{A_2 \cos(\tau) + B_2 \gamma_0 \sin(\tau)}{1 + (\gamma_0^2 - 1) \sin^2(\tau)}, \\ +\beta_I &= \frac{1 + \gamma_0}{2} \frac{B_2 \cos(\tau) - A_2 \gamma_0 \sin(\tau)}{1 + (\gamma_0^2 - 1) \sin^2(\tau)}. +\end{aligned} +\quad (A1) $$ + +In view of the function $c(\tau)$, we make use of the following auxiliary relations + +$$ F_c \equiv -C_2^2 [4 (\exp(2i \tau) + C_1)]^{-1} = F_R + i F_I, $$ + +$$ F_R = \left(1/(4N_1)\right) \left[(B_2^2 - A_2^2)\cos(2\tau) - 2A_2B_2\sin(2\tau) + (B_2^2 - A_2^2)C_1\right], $$ + +$$ F_I = \left(1/(4N_1)\right) \left[(A_2^2 - B_2^2)\sin(2\tau) - 2A_2B_2\cos(2\tau) - 2A_2B_2C_1\right], \quad (A2) $$ + +$$ \exp[c(\tau) + c^{*}(\tau)] = (1/\sqrt{N_1}) \exp[2C_3 + 2F_R], \quad (A3) $$ + +where the integration constant $C_3$ is assumed to be real and the star superscript denotes complex conjugation. 
The probability density $P$ results from the wave function (10) in the form + +$$ P(\xi, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp \left[ 2C_3 + 2F_R + 2\beta_R \xi - \gamma_R \xi^2 \right], \quad (A4) $$ + +where $C_0$ is defined through the normalization integral + +$$ 1 = \int_{-\infty}^{\infty} d\xi P(\xi, \tau) = \frac{C_0^2 \sqrt{\pi}}{\sqrt{N_1 \gamma_R}} \exp(G), \quad G = 2C_3 + 2F_R + \frac{\beta_R^2}{\gamma_R}. \quad (A5) $$ + +From the expression of $G$, it is not obvious that $C_0$ is independent of $\tau$, which was assumed in (10). Clearly, since $\Phi := \psi/C_0$ obeys the Schrödinger equation and $H$ is hermitian, one has the property + +$$ \partial_{\tau}\langle\Phi|\Phi\rangle = 0. \quad (A6) $$ + +As a matter of fact, it is straightforward to show that + +$$ 2F_R + \frac{\beta_R^2}{\gamma_R} = [B_2^2(C_1 - 1) - A_2^2(1 + C_1)] [2(C_1^2 - 1)]^{-1} \quad (A7) $$ +---PAGE_BREAK--- + +does not depend on $\tau$. We now dispose of the integration constant $C_3$ such that the exponent $G$ vanishes: + +$$ C_3 = - [B_2^2(C_1-1) - A_2^2(1+C_1)] [4(C_1^2-1)]^{-1} \quad (A8) $$ + +In view of $G=0$, we replace $2C_3 + 2F_R$ by $-\beta_R^2/\gamma_R$, so that + +$$ P(\xi, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp[-\gamma_R (\xi - \beta_R/\gamma_R)^2], \quad (A9) $$ + +which is the result (24). The normalization condition comes out immediately in the form + +$$ 1 = \frac{C_0^2 \sqrt{\pi}}{\sqrt{N_1 \gamma_R}} = \frac{C_0^2 \sqrt{\pi}}{\sqrt{1-C_1^2}} = \frac{C_0^2 \sqrt{\pi}(1+\gamma_0)}{2\sqrt{\gamma_0}}. \quad (A10) $$ + +## Appendix B. Proof of Completeness + +In order to prove the completeness of the functions (5), i.e., for the type-I HO case, we take advantage of the following generating function of the Hermite polynomials [23]: + +$$ \exp[2XZ - Z^2] = \sum_{n=0}^{\infty} \frac{Z^n}{n!} H_n(X). \quad (A11) $$ + +In the function (5), we replace $z$ by $\sqrt{2}Z$ to obtain + +$$ \psi_z(\zeta) = \pi^{-1/4} \exp[-ZZ^* - (1/2)\zeta^2] \exp[-Z^2 + 2\zeta Z]. 
\quad (A12) $$ + +With the aid of (A11), one can write + +$$ \psi_z(\zeta) = \exp[-(1/2)zz^*] \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}} \varphi_n(\zeta), \quad (A13) $$ + +where + +$$ \varphi_n(\zeta) = \frac{1}{\sqrt{n! 2^n \sqrt{\pi}}} H_n(\zeta) \exp[-(1/2)\zeta^2]. \quad (A14) $$ + +By means of (A13) and setting $z = u \exp[i\varphi]$, we obtain + +$$ \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \exp[-u^2] \sum_{m,n=0}^{\infty} \frac{u^{n+m} \exp[i(m-n)\varphi]}{\sqrt{m!n!}} \varphi_m(\zeta_2) \varphi_n(\zeta_1). \quad (A15) $$ + +In (A15), the $\varphi$ integration projects out the terms $n=m$ with the result + +$$ \frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = 2 \int_{0}^{\infty} u du \exp[-u^2] \sum_{n=0}^{\infty} \frac{u^{2n}}{n!} \varphi_n(\zeta_2) \varphi_n(\zeta_1). \quad (A16) $$ + +After changing the integration variable $u \to v$ with $v = u^2$ and $u\,du = dv/2$, one uses + +$$ \int_{0}^{\infty} dv \frac{v^n}{n!} \exp[-v] = 1, \quad n = 0, 1, \dots \quad (A17) $$ + +and, in view of the completeness of the Hermite polynomials, arrives at + +$$ \frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \sum_{n=0}^{\infty} \varphi_n(\zeta_2) \varphi_n(\zeta_1) = \delta(\zeta_2 - \zeta_1). \quad (A18) $$ +---PAGE_BREAK--- + +In the type-II HO and the RO cases, there appear additional purely imaginary phases in the +wave function, which do not depend on $\zeta_1$, $\zeta_2$, and drop out at the step (A15) of the completeness +proof above. + +References + +1. Schrödinger, E. Der stetige Übergang von der Mikro- zur Makromechanik. *Naturwissenschaften* **1926**, *14*, 664–666. + +2. Glauber, R.J. Coherent and incoherent states of the radiation field. *Phys. Rev.* **1963**, *131*, 2766. + +3. Glauber, R.J. Photon Correlations. *Phys. Rev. Lett.* **1963**, *10*, 84. + +4. Antoniou, I.E.; Prigogine, I. 
Intrinsic irreversibility and integrability of dynamics. *Phys. A Stat. Mech. Appl.* **1993**, *192*, 443–464. + +5. Gentilini, S.; Braidotti, M.C.; Marcucci, G.; DelRe, E.; Conti, C. Physical realization of the Glauber quantum oscillator. *Sci. Rep.* **2015**, *5*, 15816. + +6. Glauber, R.J. Amplifiers, attenuators, and Schrödinger's cat. *Ann. N. Y. Acad. Sci.* **1986**, *480*, 336–372. + +7. Barton, G. Quantum mechanics of the inverted oscillator potential. *Ann. Phys.* **1986**, *166*, 322–363. + +8. Bhaduri, R.K.; Khare, A.; Reimann, S.M.; Tomisiek, E.L. The Riemann zeta function and the inverted harmonic oscillator. *Ann. Phys.* **1997**, *264*, 25–40. + +9. Guo, G.-J.; Ren, Z.-Z.; Ju, G.-X.; Guo, X.-Y. Quantum tunneling effect of a time-dependent inverted harmonic oscillator. *J. Phys. A Math. Theor.* **2011**, *44*, 185301. + +10. Galindo, A.; Pascual, P. *Quantum Mechanics I*; Springer: Berlin, Germany, 1990. + +11. Fock, V.A. Zur Theorie des Wasserstoffatoms. *Z. Phys.* **1935**, *98*, 145–154. + +12. Chen, A.C. Hydrogen atom as a four-dimensional oscillator. *Phys. Rev. A* **1980**, *22*, 333–335. + +13. Gracia-Bondia, J.M. Hydrogen atom in the phase-space formulation of quantum mechanics. *Phys. Rev. A* **1984**, *30*, 691–697. + +14. Gerry, C.C. Coherent states and the Kepler-Coulomb problem. *Phys. Rev. A* **1986**, *33*, 6–11. + +15. Kustaanheimo, P.; Stiefel, E. Perturbation theory of Kepler motion based on spinor regularization. *J. Reine Angew. Math.* **1965**, *218*, 204–219. + +16. Rauh, A.; Parisi, J. Quantum mechanics of hyperbolic orbits in the Kepler problem. *Phys. Rev. A* **2011**, *83*, 042101. + +17. Rauh, A.; Parisi, J. Quantum mechanics of Kepler orbits. *Adv. Stud. Theor. Phys.* **2014**, *8*, 889–938. + +18. Rauh, A.; Parisi, J. Quantum mechanical correction to Kepler’s equation. *Adv. Stud. Theor. Phys.* **2016**, *10*, 1–22. + +19. Baletto, F.; Ferrando, R. 
Structural properties of nanoclusters: Energetic, thermodynamic, and kinetic effects. *Rev. Mod. Phys.* **2005**, *77*, 371–423. + +20. Bauch, S.; Balzer, K.; Bonitz, M. Quantum breathing mode of trapped bosons and fermions at arbitrary coupling. *Phys. Rev. B* **2009**, *80*, 054515. + +21. Wolfram Research, Inc. Mathematica; Version 10.1.0.0; Wolfram Research, Inc.: Champaign, IL, USA, 2015. + +22. Chen, C.; Kibler, M. Connection between the hydrogen atom and the four-dimensional oscillator. *Phys. Rev. A* **1985**, *31*, 3960–3963. + +23. Gradshteyn, I.S.; Ryzhik, I.M. *Table of Integrals, Series, and Products*; Academic Press: New York, NY, USA, 1965. + +© 2016 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). +---PAGE_BREAK--- + +Article + +Entangled Harmonic Oscillators and +Space-Time Entanglement + +Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³ + +¹ Department of Physics, Middle East Technical University, 06800 Ankara, Turkey; baskal@newton.physics.metu.edu.tr + +² Center for Fundamental Physics, University of Maryland College Park, College Park, MD 20742, USA + +³ Department of Radiology, New York University School of Medicine, New York, NY 10016, USA; marilyn.noz@med.nyu.edu + +* Correspondence: yskim@umd.edu; Tel.: +1-301-937-6306 + +Academic Editor: Sergei D. Odintsov + +Received: 26 February 2016; Accepted: 20 June 2016; Published: 28 June 2016 + +**Abstract:** The mathematical basis for the Gaussian entanglement is discussed in detail, as well as its implications in the internal space-time structure of relativistic extended particles. It is shown that the Gaussian entanglement shares the same set of mathematical formulas with the harmonic oscillator in the Lorentz-covariant world. 
It is thus possible to transfer the concept of entanglement to the Lorentz-covariant picture of the bound state, which requires both space and time separations between two constituent particles. These space and time variables become entangled as the bound state moves with a relativistic speed. It is shown also that our inability to measure the time-separation variable leads to an entanglement entropy together with a rise in the temperature of the bound state. As was noted by Paul A. M. Dirac in 1963, the system of two oscillators contains the symmetries of the $O(3,2)$ de Sitter group containing two $O(3,1)$ Lorentz groups as its subgroups. Dirac noted also that the system contains the symmetry of the $Sp(4)$ group, which serves as the basic language for two-mode squeezed states. Since the $Sp(4)$ symmetry contains both rotations and squeezes, one interesting case is the combination of rotation and squeeze, resulting in a shear. While the current literature is mostly on the entanglement based on squeeze along the normal coordinates, the shear transformation is an interesting future possibility. The mathematical issues on this problem are clarified. + +**Keywords:** Gaussian entanglement; two coupled harmonic oscillators; coupled Lorentz groups; space-time separation; Wigner's little groups; $O(3,2)$ group; Dirac's generators for two coupled oscillators + +**PACS:** 03.65.Fd, 03.65.Pm, 03.67.-a, 05.30.-d + +# 1. Introduction + +Entanglement problems deal with fundamental issues in physics. Among them, the Gaussian entanglement is of current interest not only in quantum optics [1–4], but also in other dynamical systems [3,5–8]. The underlying mathematical language for this form of entanglement is that of harmonic oscillators. In this paper, we present first the mathematical tools that are and may be useful in this branch of physics. 
+ +The entangled Gaussian state is based on the formula: + +$$ \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(x) \chi_k(y) \quad (1) $$ + +where $\chi_n(x)$ is the $n^{th}$ excited-state oscillator wave function. +---PAGE_BREAK--- + +In Chapter 16 of their book [9], Walls and Milburn discussed in detail the role of this formula in the theory of quantum information. Earlier, this formula played the pivotal role for Yuen to formulate his two-photon coherent states or two-mode squeezed states [10]. The same formula was used by Yurke and Patasek in 1987 [11] and by Ekert and Knight [12] for the two-mode squeezed state where one of the photons is not observed. The effect of entanglement is to be seen from the beam splitter experiments [13,14]. + +In this paper, we point out first that the series of Equation (1) can also be written as a squeezed Gaussian form: + +$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta} (x+y)^2 + e^{2\eta} (x-y)^2 \right] \right\} \quad (2) $$ + +which becomes: + +$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x^2 + y^2) \right\} \qquad (3) $$ + +when $\eta = 0$. + +We can obtain the squeezed form of Equation (2) by replacing $x$ and $y$ by $x'$ and $y'$, respectively, where: + +$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cosh \eta & -\sinh \eta \\ -\sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (4) $$ + +If $x$ and $y$ are replaced by $z$ and $t$, Equation (4) becomes the formula for the Lorentz boost along the $z$ direction. Indeed, the Lorentz boost is a squeeze transformation [3,15]. + +The squeezed Gaussian form of Equation (2) plays the key role in studying boosted bound states in the Lorentz-covariant world [16–20], where $z$ and $t$ are the space and time separations between two constituent particles. 
Since the mathematics of this physical system is the same as the series given in Equation (1), the physical concept of entanglement can be transferred to the Lorentz-covariant bound state, as illustrated in Figure 1. + +**Figure 1.** One mathematics for two branches of physics. Let us look at Equations (1) and (2) applicable to quantum optics and special relativity, respectively. They are the same formula from the Lorentz group with different variables as in the case of the Inductor-Capacitor-Resistor (LCR) circuit and the mechanical oscillator sharing the same second-order differential equation. + +We can approach this problem from the system of two harmonic oscillators. In 1963, Paul A. M. Dirac studied the symmetry of this two-oscillator system and discussed all possible transformations +---PAGE_BREAK--- + +applicable to this oscillator [21]. He concluded that there are ten possible generators of transformations satisfying a closed set of commutation relations. He then noted that this closed set corresponds to the Lie algebra of the $O(3, 2)$ de Sitter group, which is the Lorentz group applicable to three space-like and two time-like dimensions. This $O(3, 2)$ group has two $O(3, 1)$ Lorentz groups as its subgroups. + +We note that the Lorentz group is the language of special relativity, while the harmonic oscillator is one of the major tools for interpreting bound states. Therefore, Dirac's two-oscillator system can serve as a mathematical framework for understanding quantum bound systems in the Lorentz-covariant world. + +Within this formalism, the series given in Equation (1) can be produced from the ten-generator Dirac system. 
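The claim that the series of Equation (1) and the squeezed Gaussian of Equation (2) are the same function can be verified numerically; a minimal sketch (the helper names and the truncation order `kmax` are our choices, not from the paper):

```python
import numpy as np
from math import factorial

def chi(n, x):
    """Harmonic-oscillator eigenfunction: normalized Hermite function."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # select the physicists' Hermite H_n
    Hn = np.polynomial.hermite.hermval(x, coeffs)
    return Hn * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(np.pi) * 2**n * factorial(n))

def series_form(x, y, eta, kmax=60):
    """Left-hand side: the entangled series of Equation (1), truncated at kmax."""
    return sum(np.tanh(eta)**k * chi(k, x) * chi(k, y)
               for k in range(kmax)) / np.cosh(eta)

def gaussian_form(x, y, eta):
    """Right-hand side: the squeezed Gaussian of Equation (2)."""
    return np.exp(-(np.exp(-2 * eta) * (x + y)**2
                    + np.exp(2 * eta) * (x - y)**2) / 4) / np.sqrt(np.pi)

for x, y, eta in [(0.3, -0.7, 0.5), (1.0, 0.2, 0.8), (0.0, 0.0, 0.3)]:
    assert np.isclose(series_form(x, y, eta), gaussian_form(x, y, eta), atol=1e-8)
```

The agreement is a numerical instance of Mehler's formula for Hermite polynomials; the truncation error shrinks like $(\tanh\eta)^{k_{\max}}$, so moderate squeeze parameters converge quickly.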
In discussing the oscillator system, the standard procedure is to use the normal coordinates defined as: + +$$u = \frac{x+y}{\sqrt{2}}, \quad \text{and} \quad v = \frac{x-y}{\sqrt{2}} \qquad (5)$$ + +In terms of these variables, the transformation given in Equation (4) takes the form: + +$$\begin{pmatrix} u' \\ v' \end{pmatrix} = \begin{pmatrix} e^{-\eta} & 0 \\ 0 & e^{\eta} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} \qquad (6)$$ + +where this is a squeeze transformation along the normal coordinates. While the normal-coordinate transformation is a standard procedure, it is interesting to note that it also serves as a Lorentz boost [18]. + +With these preparations, we shall study in Section 2 the system of two oscillators and coordinate transformations of current interest. It is pointed out in Section 3 that there are ten different generators for transformations, including those discussed in Section 2. It is noted that Dirac derived ten generators of transformations applicable to these oscillators, and they satisfy the closed set of commutation relations, which is the same as the Lie algebra of the $O(3, 2)$ de Sitter group containing two Lorentz groups among its subgroups. In Section 4, Dirac's ten-generator symmetry is studied in the Wigner phase-space picture, and it is shown that Dirac's symmetry contains both canonical and Lorentz transformations. + +While the Gaussian entanglement starts from the oscillator wave function in its ground state, we study in Section 5 the entanglements of excited oscillator states. We give a detailed explanation of how the series of Equation (1) can be derived from the squeezed Gaussian function of Equation (2). + +In Section 6, we study in detail how the sheared state can be derived from a squeezed state. It appears to be a rotated squeezed state, but this is not the case. 
In Section 7, we study what happens when one of the two entangled variables is not observed within the framework of Feynman's rest of the universe [22,23]. + +In Section 8, we note that most of the mathematical formulas in this paper have been used earlier for understanding relativistic extended particles in the Lorentz-covariant harmonic oscillator formalism [20,24–28]. These formulas allow us to transport the concept of entanglement from the current problem of physics to quantum bound states in the Lorentz-covariant world. The time separation between the constituent particles is not observable and is not known in the present form of quantum mechanics. However, this variable affects the real world by entangling itself with the longitudinal variable. + +## 2. Two-Dimensional Harmonic Oscillators + +The Gaussian form: + +$$\left[ \frac{1}{\sqrt{\pi}} \right]^{1/2} \exp \left( -\frac{x^2}{2} \right) \qquad (7)$$ + +is used in many branches of science. For instance, we can construct this function by throwing dice. +---PAGE_BREAK--- + +In physics, this is the wave function for the one-dimensional harmonic oscillator in the ground state. This function is also used for the vacuum state in quantum field theory, as well as the zero-photon state in quantum optics. For excited oscillator states, the wave function takes the form: + +$$ \chi_n(x) = \left[ \frac{1}{\sqrt{\pi} 2^{n} n!} \right]^{1/2} H_n(x) \exp \left( -\frac{x^2}{2} \right) \quad (8) $$ + +where $H_n(x)$ is the Hermite polynomial of the $n$-th degree. The properties of this wave function are well known, and it becomes the Gaussian form of Equation (7) when $n=0$. 
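The wave functions of Equation (8) can be generated and checked for orthonormality with a few lines of numpy (a sketch; the grid and the number of states tested are our choices):

```python
import numpy as np
from math import factorial

def chi(n, x):
    """Oscillator eigenfunction of Equation (8): normalized Hermite function."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # select the physicists' Hermite H_n
    Hn = np.polynomial.hermite.hermval(x, coeffs)
    return Hn * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(np.pi) * 2**n * factorial(n))

# Orthonormality <chi_m|chi_n> = delta_mn, approximated on a fine grid
x = np.linspace(-10, 10, 8001)
dx = x[1] - x[0]
for m in range(4):
    for n in range(4):
        overlap = np.sum(chi(m, x) * chi(n, x)) * dx
        assert abs(overlap - (1.0 if m == n else 0.0)) < 1e-6
```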
+ +We can now consider the two-dimensional space with the orthogonal coordinate variables x and y and the same wave function with the y variable: + +$$ \chi_m(y) = \left[ \frac{1}{\sqrt{\pi} 2^{m} m!} \right]^{1/2} H_m(y) \exp \left( -\frac{y^2}{2} \right) \quad (9) $$ + +and construct the function: + +$$ \psi^{n,m}(x,y) = [\chi_n(x)] [\chi_m(y)] \quad (10) $$ + +This form is clearly separable in the x and y variables. If *n* and *m* are zero, the wave function becomes: + +$$ \psi^{0,0}(x,y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x^2 + y^2) \right\} \quad (11) $$ + +Under the coordinate rotation: + +$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (12) $$ + +this function remains separable. This rotation is illustrated in Figure 2. This is a transformation very familiar to us. + +We can next consider the scale transformation of the form: + +$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (13) $$ + +This scale transformation is also illustrated in Figure 2. This area-preserving transformation is known as the squeeze. Under this transformation, the Gaussian function is still separable. + +If the direction of the squeeze is rotated by 45°, the transformation becomes the diagonal transformation of Equation (6). Indeed, this is a squeeze in the normal coordinate system. This form of squeeze is most commonly used for squeezed states of light, as well as the subject of entanglements. It is important to note that, in terms of the x and y variables, this transformation can be written as Equation (4) [18]. In 1905, Einstein used this form of squeeze transformation for the longitudinal and time-like variables. This is known as the Lorentz boost. 
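A short numpy check (the squeeze parameter $\eta = 0.7$ is an arbitrary sample value) confirms that rotating the normal-coordinate squeeze of Equation (6) by 45° reproduces the symmetric squeeze of Equation (4), and that rotation and squeeze are both area-preserving:

```python
import numpy as np

eta = 0.7  # sample squeeze parameter (our choice)

# Rotation of Equation (12) with theta = pi/4
R = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
              [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
# Squeeze along the normal coordinates, Equation (6)
D = np.diag([np.exp(-eta), np.exp(eta)])
# Symmetric squeeze in the x, y variables, Equation (4)
B = np.array([[np.cosh(eta), -np.sinh(eta)],
              [-np.sinh(eta), np.cosh(eta)]])

# Rotating the normal-coordinate squeeze by 45 degrees gives Equation (4):
assert np.allclose(R @ D @ R.T, B)
# All of these transformations preserve the area (unit determinant):
for M in (R, D, B):
    assert np.isclose(np.linalg.det(M), 1.0)
```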
In addition, we can consider the transformation of the form:

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (14) $$

This transformation shears the system, as shown in Figure 2.

After the squeeze or shear transformation, the wave function of Equation (10) becomes non-separable, but it can still be written as a series expansion in terms of the oscillator wave functions. It takes the form:

$$ \psi(x,y) = \sum_{n,m} A_{n,m} \chi_n(x) \chi_m(y) \quad (15) $$

with:

$$ \sum_{n,m} |A_{n,m}|^2 = 1 $$

if $\psi(x, y)$ is normalized, as was the case for the Gaussian function of Equation (11).

## 2.1. Squeezed Gaussian Function

Under the squeeze along the normal coordinate, the Gaussian form of Equation (11) becomes:

$$ \psi_{\eta}(x, y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta}(x+y)^2 + e^{2\eta}(x-y)^2 \right] \right\} \quad (16) $$

which was given in Equation (2). This function is not separable in the x and y variables: these variables are now entangled. We obtain this form by replacing, in the Gaussian function of Equation (11), the x and y variables by $x'$ and $y'$, respectively, where:

$$ x' = (\cosh \eta)x - (\sinh \eta)y, \quad \text{and} \quad y' = (\cosh \eta)y - (\sinh \eta)x \qquad (17) $$

This form of squeeze is illustrated in Figure 3, and the expansion of this squeezed Gaussian function becomes the series given in Equation (1) [20,26]. This aspect will be discussed in detail in Section 5.

**Figure 2.** Transformations in the two-dimensional space. The object can be rotated, squeezed or sheared. In all three cases, the area remains invariant.

**Figure 3.** Squeeze along the 45° direction, discussed most frequently in the literature.

In 1976 [10], Yuen discussed two-photon coherent states, often called squeezed states of light.
This series expansion served as the starting point for two-mode squeezed states. More recently, in 2003, Giedke et al. [1] used this formula to formulate the concept of Gaussian entanglement.

There is another way to derive the series. For the harmonic oscillator wave functions, there are step-down and step-up operators [17]. These are defined as:

$$a = \frac{1}{\sqrt{2}} \left( x + \frac{\partial}{\partial x} \right), \quad \text{and} \quad a^{\dagger} = \frac{1}{\sqrt{2}} \left( x - \frac{\partial}{\partial x} \right) \qquad (18)$$

If they are applied to the oscillator wave function, we have:

$$a \chi_n(x) = \sqrt{n} \chi_{n-1}(x), \quad \text{and} \quad a^{\dagger} \chi_n(x) = \sqrt{n+1} \chi_{n+1}(x) \qquad (19)$$

Likewise, we can introduce $b$ and $b^\dagger$ operators applicable to $\chi_n(y)$:

$$b = \frac{1}{\sqrt{2}} \left( y + \frac{\partial}{\partial y} \right), \quad \text{and} \quad b^{\dagger} = \frac{1}{\sqrt{2}} \left( y - \frac{\partial}{\partial y} \right) \qquad (20)$$

Thus:

$$\begin{aligned} \left(a^{\dagger}\right)^{n} \chi_{0}(x) &= \sqrt{n!}\, \chi_{n}(x) \\ \left(b^{\dagger}\right)^{n} \chi_{0}(y) &= \sqrt{n!}\, \chi_{n}(y) \end{aligned} \qquad (21)$$

and:

$$a \chi_0(x) = b \chi_0(y) = 0 \qquad (22)$$

In terms of these operators, the transformation leading the Gaussian function of Equation (11) to its squeezed form of Equation (16) can be written as:

$$\exp\left\{\frac{\eta}{2}(a^{\dagger}b^{\dagger} - ab)\right\} \qquad (23)$$

which can also be written as:

$$\exp\left\{-\frac{\eta}{2}\left(x\frac{\partial}{\partial y} + y\frac{\partial}{\partial x}\right)\right\} \qquad (24)$$

Next, we can consider the exponential form:

$$\exp\left\{(\tanh \eta)a^{\dagger}b^{\dagger}\right\} \qquad (25)$$

which can be expanded as:

$$\sum_{n} \frac{1}{n!} (\tanh \eta)^n (a^{\dagger} b^{\dagger})^n \qquad (26)$$

If this operator is applied to the ground state of Equation (11), the result is:

$$\sum_{n} (\tanh \eta)^n \chi_n(x) \chi_n(y) \qquad (27)$$

This form is not normalized, while the series of Equation (1) is. What is the origin of this difference?

There is a similar problem with the one-photon coherent state [29,30]. There, the series comes from the expansion of the exponential form:

$$\exp\{\alpha a^{\dagger}\} \qquad (28)$$

which can be expanded to:

$$ \sum_n \frac{\alpha^n}{n!} (a^\dagger)^n \qquad (29) $$

However, this operator is not unitary. In order to obtain a unitary operator, we consider the exponential form:

$$ \exp (\alpha a^{\dagger} - \alpha^* a) \qquad (30) $$

which is unitary. This expression can then be written as:

$$ e^{-\alpha \alpha^*/2} [\exp(\alpha a^{\dagger})] [\exp(-\alpha^* a)] \qquad (31) $$

according to the Baker–Campbell–Hausdorff (BCH) relation [31,32]. If this is applied to the ground state, the last bracket can be dropped, and the result is:

$$ e^{-\alpha \alpha^*/2} \exp(\alpha a^{\dagger}) \qquad (32) $$

which carries the normalization constant:

$$ e^{-\alpha \alpha^*/2} $$

Likewise, we can conclude that the series of Equation (27) differs from that of Equation (1) due to the difference between the unitary operator of Equation (23) and the non-unitary operator of Equation (25). It may be possible to derive the normalization factor using the BCH formula, but it seems to be intractable at this time. The best way to resolve this problem is to present the exact calculation of the unitary operator leading to the normalized series of Equation (1). We shall return to this problem in Section 5, where squeezed excited states are studied.

## 2.2. Sheared Gaussian Function

In addition, there is a transformation called "shear," where one of the two coordinates is translated by an amount proportional to the other, as shown in Figure 2.
This transformation takes the form:

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (33) $$

which leads to:

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x + 2\alpha y \\ y \end{pmatrix} \qquad (34) $$

This shear is one of the basic transformations in the engineering sciences. In physics, this transformation plays a key role in understanding the internal space-time symmetry of massless particles [33–35]. This matrix also plays a pivotal role in the transition from the oscillator mode to the damping mode in classical damped harmonic oscillators [36,37].

Under this transformation, the Gaussian form becomes:

$$ \psi_{shr}(x,y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} \left[ (x - 2\alpha y)^2 + y^2 \right] \right\} \qquad (35) $$

It is possible to expand this into a series of the form of Equation (15) [38].

The transformation applicable to the Gaussian form of Equation (11) is:

$$ \exp\left(-2\alpha y \frac{\partial}{\partial x}\right) \qquad (36) $$

and the generator is:

$$ -iy \frac{\partial}{\partial x} \tag{37} $$

It is of interest to see where this generator stands among the ten generators of Dirac.

However, the most pressing problem is whether the sheared Gaussian form can be regarded as a rotated squeezed state. The basic mathematical issue is that the shear matrix of Equation (33) is triangular and cannot be diagonalized; therefore, it cannot by itself be a squeeze matrix. Yet, the Gaussian form of Equation (35) appears to be a rotated squeezed state, though not along the normal coordinates. We shall look at this problem in detail in Section 6.

## 3. Dirac's Entangled Oscillators

Paul A. M. Dirac devoted much of his life-long effort to the task of making quantum mechanics compatible with special relativity.
Harmonic oscillators serve as an instrument for illustrating quantum mechanics, while special relativity is the physics of the Lorentz group. Thus, Dirac attempted to construct a representation of the Lorentz group using harmonic oscillator wave functions [17,21]. + +In his 1963 paper [21], Dirac started from the two-dimensional oscillator whose wave function takes the Gaussian form given in Equation (11). He then considered unitary transformations applicable to this ground-state wave function. He noted that they can be generated by the following ten Hermitian operators: + +$$ L_1 = \frac{1}{2} (a^\dagger b + b^\dagger a), \quad L_2 = \frac{1}{2i} (a^\dagger b - b^\dagger a) $$ + +$$ L_3 = \frac{1}{2} (a^\dagger a - b^\dagger b), \quad S_3 = \frac{1}{2} (a^\dagger a + b b^\dagger) $$ + +$$ K_1 = -\frac{1}{4} (a^\dagger a^\dagger + aa - b^\dagger b^\dagger - bb) $$ + +$$ K_2 = \frac{i}{4} (a^\dagger a^\dagger - aa + b^\dagger b^\dagger - bb) $$ + +$$ K_3 = \frac{1}{2} (a^\dagger b^\dagger + ab) $$ + +$$ Q_1 = -\frac{i}{4} (a^\dagger a^\dagger - aa - b^\dagger b^\dagger + bb) $$ + +$$ Q_2 = -\frac{1}{4} (a^\dagger a^\dagger + aa + b^\dagger b^\dagger + bb) $$ + +$$ Q_3 = \frac{i}{2} (a^\dagger b^\dagger - ab) \tag{38} $$ + +He then noted that these operators satisfy the following set of commutation relations. + +$$ [L_i, L_j] = i\epsilon_{ijk}L_k, \quad [L_i, K_j] = i\epsilon_{ijk}K_k, \quad [L_i, Q_j] = i\epsilon_{ijk}Q_k $$ + +$$ [K_i, K_j] = [Q_i, Q_j] = -i\epsilon_{ijk}L_k, \quad [L_i, S_3] = 0 $$ + +$$ [K_i, Q_j] = -i\delta_{ij}S_3, \quad [K_i, S_3] = -iQ_i, \quad [Q_i, S_3] = iK_i \tag{39} $$ + +Dirac then determined that these commutation relations constitute the Lie algebra for the $O(3,2)$ de Sitter group with ten generators. This de Sitter group is the Lorentz group applicable to three space +---PAGE_BREAK--- + +coordinates and two time coordinates. 
Let us use the notation (x, y, z, t, s), with (x, y, z) as the space +coordinates and (t, s) as two time coordinates. Then, the rotation around the z axis is generated by: + +$$ +L_3 = \begin{pmatrix} 0 & -i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{40} +$$ + +The generators $L_1$ and $L_2$ can also be constructed. The $K_3$ and $Q_3$ generators will take the form: + +$$ +K_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i & 0 \\ 0 & 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \quad Q_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 \end{pmatrix} \tag{41} +$$ + +From these two matrices, the generators $K_1, K_2, Q_1, Q_2$ can be constructed. The generator $S_3$ can be written as: + +$$ +S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i & 0 \\ 0 & 0 & i & 0 & 0 \end{pmatrix} \quad (42) +$$ + +The last five-by-five matrix generates rotations in the two-dimensional space of (t, s). If we introduce +these two time variables, the O(3,2) group leads to two coupled Lorentz groups. The particle mass is +invariant under Lorentz transformations. Thus, one Lorentz group cannot change the particle mass. +However, with two coupled Lorentz groups, we can describe the world with variable masses, such as +the neutrino oscillations. + +In Section 2, we used the operators $Q_3$ and $K_3$ as the generators for the squeezed Gaussian +function. For the unitary transformation of Equation (23), we used: + +$$ +\exp(-i\eta Q_3) \tag{43} +$$ + +However, the exponential form of Equation (25) can be written as: + +$$ +\exp\{-i(\tanh \eta)(Q_3 + iK_3)\} \qquad (44) +$$ + +which is not unitary, as was seen before. + +From the space-time point of view, both $K_3$ and $Q_3$ generate Lorentz boosts along the z direction, +with the time variables $t$ and $s$, respectively. 
The fact that the squeeze and Lorentz transformations share the same mathematical formula is well known. However, the non-unitary operator $iK_3$ does not seem to have a space-time interpretation.

As for the sheared state, the generator can be written as:

$$
Q_3 - L_2 \tag{45}
$$

leading to the expression given in Equation (37). This is a Hermitian operator leading to the unitary transformation of Equation (36).

## 4. Entangled Oscillators in the Phase-Space Picture

Also in his 1963 paper, Dirac states that the Lie algebra of Equation (39) can serve as the four-dimensional symplectic group $Sp(4)$. This group allows us to study squeezed or entangled states in terms of the four-dimensional phase space consisting of two position and two momentum variables [15,39,40].

In order to study the $Sp(4)$ contents of the coupled oscillator system, let us introduce the Wigner function, defined as [41]:

$$
\begin{aligned}
W(x,y;p,q) = & \left(\frac{1}{\pi}\right)^2 \int \exp\{-2i(px' + qy')\} \\
& \times \psi^*(x+x',y+y')\psi(x-x',y-y')dx'dy'
\end{aligned}
\quad (46)
$$

If the wave function $\psi(x, y)$ is the Gaussian form of Equation (11), the Wigner function becomes:

$$ W(x,y;p,q) = \left(\frac{1}{\pi}\right)^2 \exp\left\{-\left(x^2 + p^2 + y^2 + q^2\right)\right\} \quad (47) $$

The Wigner function is defined over the four-dimensional phase space of $(x, p, y, q)$, just as in the case of classical mechanics. The unitary transformations generated by the operators of Equation (38) are translated into Wigner transformations [39,40,42]. As in the case of Dirac's oscillators, there are ten corresponding generators applicable to the Wigner function.
They are: + +$$ +\begin{aligned} +L_1 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial q} - q \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial p} - p \frac{\partial}{\partial y} \right) \right\} \\ +L_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) + \left( p \frac{\partial}{\partial q} - q \frac{\partial}{\partial p} \right) \right\} \\ +L_3 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} - p \frac{\partial}{\partial x} \right) - \left( y \frac{\partial}{\partial q} - q \frac{\partial}{\partial y} \right) \right\} \\ +S_3 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} - p \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial q} - q \frac{\partial}{\partial y} \right) \right\} +\end{aligned} +\quad (48) +$$ + +and: + +$$ +\begin{aligned} +K_1 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} + p \frac{\partial}{\partial x} \right) - \left( y \frac{\partial}{\partial q} + q \frac{\partial}{\partial y} \right) \right\} \\ +K_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y} \right) - \left( p \frac{\partial}{\partial p} + q \frac{\partial}{\partial q} \right) \right\} \\ +K_3 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial q} + q \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial p} + p \frac{\partial}{\partial y} \right) \right\} \\ +Q_1 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial x} + q \frac{\partial}{\partial q} \right) - \left( y \frac{\partial}{\partial y} + p \frac{\partial}{\partial p} \right) \right\} \\ +Q_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} + p \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial q} + q \frac{\partial}{\partial y} \right) \right\} \\ +Q_3 &= -\frac{i}{2} \left\{ \left( y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y} \right) - \left( q 
\frac{\partial}{\partial p} + p \frac{\partial}{\partial q} \right) \right\}
\end{aligned}
\quad (49)
$$

These generators satisfy the Lie algebra given in Equation (39). Transformations generated by these generators have been discussed in the literature [15,40,42].

As in Section 3, we are interested in the generators $Q_3$ and $K_3$. The transformation generated by $Q_3$ takes the form:

$$ \left[ \exp \left\{ \eta \left( x \frac{\partial}{\partial y} + y \frac{\partial}{\partial x} \right) \right\} \right] \left[ \exp \left\{ -\eta \left( p \frac{\partial}{\partial q} + q \frac{\partial}{\partial p} \right) \right\} \right] \quad (50) $$

This exponential form squeezes the Wigner function of Equation (47) in the $(x, y)$ space, as well as in the corresponding momentum space. In the momentum space, however, the squeeze is in the opposite direction, as illustrated in Figure 4. This is what we expect from canonical transformations in classical mechanics. Indeed, this corresponds to the unitary transformation that played the major role in Section 2.

**Figure 4.** Transformations generated by $Q_3$ and $K_3$. As the parameter $\eta$ becomes larger, both the space and momentum distributions become wider.

Although it appeared insignificant in Section 2, $K_3$ had a definite physical interpretation in Section 3. The transformation generated by $K_3$ takes the form:

$$ \left[ \exp \left\{ \eta \left( x \frac{\partial}{\partial q} + q \frac{\partial}{\partial x} \right) \right\} \right] \left[ \exp \left\{ \eta \left( y \frac{\partial}{\partial p} + p \frac{\partial}{\partial y} \right) \right\} \right] \quad (51) $$

This performs the squeeze in the $(x, q)$ and $(y, p)$ spaces. In this case, the squeezes have the same sign, and the rate of increase is the same in all directions. We thus have the same picture of the squeeze for both the $(x, y)$ and $(p, q)$ spaces, as illustrated in Figure 4.
This parallel transformation corresponds to the Lorentz squeeze [20,25].

As for the sheared state, the combination:

$$ Q_3 - L_2 = -i \left( y \frac{\partial}{\partial x} + q \frac{\partial}{\partial p} \right) \quad (52) $$

generates the shear in the $(x, y)$ space and the same shear in the $(p, q)$ space.

## 5. Entangled Excited States

In Section 2, we discussed the entangled ground state and noted that the entangled state of Equation (1) is a series expansion of the squeezed Gaussian function. In this section, we are interested in what happens when we squeeze an excited oscillator state, starting from:

$$ \chi_n(x)\chi_m(y) \tag{53} $$

In order to entangle this state, we replace $x$ and $y$, respectively, by the $x'$ and $y'$ given in Equation (17).

The question is how the oscillator wave function is squeezed after this operation. Let us note first that the wave function of Equation (53) satisfies the equation:

$$ \frac{1}{2} \left\{ \left( x^2 - \frac{\partial^2}{\partial x^2} \right) - \left( y^2 - \frac{\partial^2}{\partial y^2} \right) \right\} \chi_n(x) \chi_m(y) = (n-m) \chi_n(x) \chi_m(y) \tag{54} $$

This equation is invariant under the squeeze transformation of Equation (17), and thus, the eigenvalue $(n-m)$ remains invariant. Unlike the usual two-oscillator system, the $x$ component and the $y$ component enter with opposite signs. This is the reason why the overall equation is squeeze-invariant [3,25,43].

We then have to write this squeezed oscillator in the series form of Equation (15). The most interesting case is, of course, for $m=n=0$, which leads to the Gaussian entangled state given in Equation (16). Another interesting case is for $m=0$, while $n$ is allowed to take all integer values. This single-excitation system has applications in the covariant oscillator formalism, where no time-like excitations are allowed. The Gaussian entangled state is a special case of this single-excitation system.
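The eigenvalue relation of Equation (54) can be checked numerically with finite differences. In the sketch below (illustrative only; the grid and tolerances are arbitrary choices), each one-dimensional factor satisfies $\frac{1}{2}(x^2 - \partial_x^2)\chi_n = (n + \frac{1}{2})\chi_n$, so the difference operator of Equation (54) has the eigenvalue $n - m$:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def chi(n, x):
    """Normalized oscillator wave function of Equation (8)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x ** 2 / 2) / sqrt(sqrt(pi) * (2.0 ** n) * factorial(n))

x = np.linspace(-6.0, 6.0, 2401)
dx = x[1] - x[0]

def eigen_ratio(n):
    """Numerical eigenvalue of (1/2)(x^2 - d^2/dx^2) acting on chi_n."""
    f = chi(n, x)
    d2f = np.gradient(np.gradient(f, dx), dx)
    lhs = 0.5 * (x ** 2 * f - d2f)
    mask = np.abs(f) > 0.2          # sample only where chi_n is not small
    return np.mean(lhs[mask] / f[mask])

# eigen_ratio(n) - eigen_ratio(m) approximates the eigenvalue n - m of Equation (54)
```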
The most general case is for nonzero integers for both $n$ and $m$. The calculation for this case is available in the literature [20,44]. Seeing no immediate physical applications of this case, we shall not reproduce it in this section.

For the single-excitation system, we write the starting wave function as:

$$ \chi_n(x)\chi_0(y) = \left[ \frac{1}{\pi 2^n n!} \right]^{1/2} H_n(x) \exp \left\{ -\left( \frac{x^2 + y^2}{2} \right) \right\} \tag{55} $$

There are no excitations along the $y$ coordinate. In order to squeeze this function, our plan is to replace $x$ and $y$ by $x'$ and $y'$, respectively, and write $\chi_n(x')\chi_0(y')$ as a series in the form:

$$ \chi_n(x')\chi_0(y') = \sum_{k',k} A_{k',k}(n)\chi_{k'}(x)\chi_k(y) \tag{56} $$

Since $k' - k = n$, or $k' = n + k$, according to the eigenvalue of the differential equation given in Equation (54), we write this series as:

$$ \chi_n(x')\chi_0(y') = \sum_{k} A_k(n)\chi_{(k+n)}(x)\chi_k(y) \tag{57} $$

with:

$$ \sum_k |A_k(n)|^2 = 1 \tag{58} $$

This coefficient is:

$$ A_k(n) = \int \chi_{k+n}(x)\chi_k(y)\chi_n(x')\chi_0(y')\, dx\, dy \tag{59} $$

This calculation was given in the literature in a fragmentary way, in connection with a Lorentz-covariant description of extended particles, starting from Ruiz's 1974 paper [45], subsequently by Kim et al. in 1979 [26] and by Rotbart in 1981 [44]. In view of recent developments in physics, it seems necessary to give one coherent calculation of the coefficient of Equation (59).

Written out, this coefficient takes the form:

$$
\begin{aligned}
A_k(n) = & \left[ \frac{1}{\pi^2\, 2^n n!\; 2^{n+k}(n+k)!\; 2^k k!} \right]^{1/2} \\
& \times \int H_{n+k}(x) H_k(y) H_n(x') \exp \left\{ -\left( \frac{x^2 + y^2 + x'^2 + y'^2}{2} \right) \right\} dx\, dy
\end{aligned}
\tag{60}
$$

As was noted by Ruiz [45], the key to the evaluation of this integral is to introduce the generating function for the Hermite polynomials [46,47]:

$$
G(r,z) = \exp(-r^2 + 2rz) = \sum_m \frac{r^m}{m!} H_m(z) \quad (61)
$$

and evaluate the integral:

$$
I = \int G(r,x)G(s,y)G(r',x') \exp \left\{ - \left( \frac{x^2 + y^2 + x'^2 + y'^2}{2} \right) \right\} dx\, dy \quad (62)
$$

The integrand becomes a single exponential function whose exponent is quadratic in $x$ and $y$. This quadratic form can be diagonalized, and the integral can be evaluated [20,26]. The result is:

$$
I = \left[ \frac{\pi}{\cosh \eta} \right] \exp(2rs \tanh \eta) \exp\left(\frac{2rr'}{\cosh \eta}\right) \quad (63)
$$

We can now expand this expression and collect the coefficients of $r^{n+k}$, $s^{k}$ and $r'^{n}$ for $H_{n+k}(x)$, $H_{k}(y)$ and $H_{n}(x')$, respectively. The result is:

$$
A_k(n) = \left( \frac{1}{\cosh \eta} \right)^{(n+1)} \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k \quad (64)
$$

Thus, the series becomes:

$$
\chi_n(x')\chi_0(y') = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_k \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^k \chi_{k+n}(x)\chi_k(y) \quad (65)
$$

If $n = 0$, it is the squeezed ground state, and this expression becomes the entangled state of Equation (1).

## 6. E(2)-Sheared States

Let us next consider the effect of shear on the Gaussian form. From Figures 3 and 5, it is clear that the sheared state is a rotated squeezed state.
In order to understand this transformation, let us note that the squeeze and rotation are generated by the two-by-two matrices:

$$
K = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad J = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{66}
$$

which generate the squeeze and rotation matrices of the form:

$$
\begin{align}
\exp(-i\eta K) &= \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \notag \\
\exp(-i\theta J) &= \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \tag{67}
\end{align}
$$

respectively. We can then consider:

$$
S = K - J = \begin{pmatrix} 0 & 2i \\ 0 & 0 \end{pmatrix} \tag{68}
$$

This matrix has the property $S^2 = 0$; the Taylor expansion of the exponential therefore truncates, and the transformation matrix becomes the triangular shear matrix of Equation (33):

$$
\exp(-i\alpha S) = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \qquad (69)
$$

leading to the transformation:

$$
\begin{pmatrix} x \\ y \end{pmatrix} \rightarrow \begin{pmatrix} x + 2\alpha y \\ y \end{pmatrix} \qquad (70)
$$

The shear generator $S$ of Equation (68) indicates that the infinitesimal transformation is a rotation followed by a squeeze. Since both the rotation and the squeeze are area-preserving transformations, the shear is also an area-preserving transformation.

**Figure 5.** Shear transformation of the Gaussian form given in Equation (11).

In view of Figure 5, we should ask whether the triangular matrix of Equation (69) can be obtained from one squeeze matrix followed by one rotation matrix. This is not possible mathematically.
It can, however, be written as a squeezed rotation matrix of the form:

$$
\begin{pmatrix} e^{\lambda/2} & 0 \\ 0 & e^{-\lambda/2} \end{pmatrix} \begin{pmatrix} \cos \omega & \sin \omega \\ -\sin \omega & \cos \omega \end{pmatrix} \begin{pmatrix} e^{-\lambda/2} & 0 \\ 0 & e^{\lambda/2} \end{pmatrix} \quad (71)
$$

resulting in:

$$
\begin{pmatrix} \cos \omega & e^{\lambda} \sin \omega \\ -e^{-\lambda} \sin \omega & \cos \omega \end{pmatrix} \qquad (72)
$$

If we let:

$$
\sin \omega = 2\alpha e^{-\lambda} \tag{73}
$$

then:

$$
\begin{pmatrix}
\cos \omega & 2\alpha \\
-2\alpha e^{-2\lambda} & \cos \omega
\end{pmatrix}
\qquad (74)
$$

If $\lambda$ becomes infinite, the angle $\omega$ becomes zero, and this matrix becomes the triangular matrix of Equation (69). This is a singular process in which the parameter $\lambda$ goes to infinity.

If this transformation is applied to the Gaussian form of Equation (11), it becomes:

$$
\psi(x, y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} \left[ (x - 2\alpha y)^2 + y^2 \right] \right\} \quad (75)
$$

The question is whether the exponential portion of this expression can be written as:

$$
\exp \left\{ -\frac{1}{2} \left[ e^{-2\eta} (x \cos \theta + y \sin \theta)^2 + e^{2\eta} (x \sin \theta - y \cos \theta)^2 \right] \right\} \quad (76)
$$

The answer is yes. This is possible if:

$$
e^{2\eta} = 1 + 2\alpha^2 + 2\alpha \sqrt{\alpha^2 + 1}, \qquad
e^{-2\eta} = 1 + 2\alpha^2 - 2\alpha \sqrt{\alpha^2 + 1} \qquad (77)
$$

In Equation (74), we needed the limiting case of $\lambda$ becoming infinite. This is necessarily a singular transformation. On the other hand, the derivation of the Gaussian form of Equation (75) appears to be analytic. How is this possible?
In order to achieve the transformation from the Gaussian form of Equation (11) to that of Equation (75), we need the linear transformation:

$$
\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \tag{78}
$$

If the initial form is invariant under rotations, as in the case of the Gaussian function of Equation (11), we can add another rotation matrix on the right-hand side. We choose that rotation matrix to be:

$$
\begin{pmatrix} \cos(\theta - \pi/2) & -\sin(\theta - \pi/2) \\ \sin(\theta - \pi/2) & \cos(\theta - \pi/2) \end{pmatrix} \tag{79}
$$

and write the three matrices as:

$$
\begin{pmatrix} \cos \theta' & -\sin \theta' \\ \sin \theta' & \cos \theta' \end{pmatrix} \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} \cos \theta' & -\sin \theta' \\ \sin \theta' & \cos \theta' \end{pmatrix} \quad (80)
$$

with:

$$
\theta' = \theta - \frac{\pi}{4}
$$

The multiplication of these three matrices leads to:

$$
\begin{pmatrix}
(\cosh \eta) \sin(2\theta) & \sinh \eta + (\cosh \eta) \cos(2\theta) \\
\sinh \eta - (\cosh \eta) \cos(2\theta) & (\cosh \eta) \sin(2\theta)
\end{pmatrix}
\quad (81)
$$

The lower-left element becomes zero when $\sinh\eta = (\cosh\eta)\cos(2\theta)$, and consequently, this matrix becomes:

$$ \begin{pmatrix} 1 & 2 \sinh \eta \\ 0 & 1 \end{pmatrix} \qquad (82) $$

Furthermore, this matrix can be written in the form of the squeezed rotation matrix given in Equation (72), with:

$$ \cos \omega = (\cosh \eta) \sin(2\theta) $$

$$ e^{-2\lambda} = \frac{\cos(2\theta) - \tanh \eta}{\cos(2\theta) + \tanh \eta} \qquad (83) $$

The matrices of the form of Equations (72) and (81) are known as the Wigner and Bargmann decompositions, respectively [33,36,48–50].

## 7. Feynman's Rest of the Universe

We need the concept of entanglement in quantum systems of two variables.
The issue is how the measurement of one variable affects the other variable. The simplest case is what happens to the first variable while no measurements are taken on the second variable. This problem has a long history since von Neumann introduced the concept of the density matrix in 1932 [51]. While there are many books and review articles on this subject, Feynman stated this problem in his own colorful way. In his book on statistical mechanics [22], Feynman makes the following statement about the density matrix. + +*When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts—the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system.* + +Indeed, Yurke and Potasek [11] and also Ekert and Knight [12] studied this problem in the two-mode squeezed state using the entanglement formula given in Equation (16). Later in 1999, Han et al. studied this problem with two coupled oscillators where one oscillator is observed while the other is not and, thus, is in the rest of the universe as defined by Feynman [23]. + +Somewhat earlier in 1990 [27], Kim and Wigner observed that there is a time separation wherever there is a space separation in the Lorentz-covariant world. The Bohr radius is a space separation. If the system is Lorentz-boosted, the time-separation becomes entangled with the space separation. However, in the present form of quantum mechanics, this time-separation variable is not measured and not understood. + +This variable was mentioned in the paper of Feynman et al. in 1971 [43], but the authors say they would drop this variable because they do not know what to do with it. While what Feynman et al. 
did was not quite respectable from the scientific point of view, they made a contribution by pointing out the existence of the problem. In 1990, Kim and Wigner [27] noted that the time-separation variable belongs to Feynman's rest of the universe and studied its consequences in the observable world.

In this section, we first reproduce the work of Kim and Wigner using the $x$ and $y$ variables and then study the consequences. Let us introduce the notation $\psi_{\eta}^{n}(x,y)$ for the squeezed oscillator wave function given in Equation (65):

$$ \psi_{\eta}^{n}(x,y) = \chi_{n}(x')\chi_{0}(y') \qquad (84) $$

with no excitations along the $y$ direction. For $\eta = 0$, this expression becomes $\chi_n(x)\chi_0(y)$.

From this wave function, we can construct the pure-state density matrix as:

$$ \rho_{\eta}^{n}(x, y; r, s) = \psi_{\eta}^{n}(x, y)\left\{\psi_{\eta}^{n}(r, s)\right\}^{*} \qquad (85) $$
+ +**Figure 6.** Feynman's rest of the universe. As the Gaussian function is squeezed, the $x$ and $y$ variables become entangled. If the $y$ variable is not measured, it affects the quantum mechanics of the $x$ variable. + +The standard way to measure this incompleteness is to calculate the entropy defined as [51–53]: + +$$ S = -\operatorname{Tr} (\rho(x, r) \ln[\rho(x, r)]) \quad (89) $$ + +which leads to: + +$$ S = 2(n+1)[(\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta)] \\ - \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_k \frac{(n+k)!}{n!k!} \ln\left[\frac{(n+k)!}{n!k!}\right] (\tanh \eta)^{2k} \quad (90) $$ + +Let us go back to the wave function given in Equation (84). As is illustrated in Figure 6, its localization property is dictated by its Gaussian factor, which corresponds to the ground-state wave +---PAGE_BREAK--- + +function. For this reason, we expect that much of the behavior of the density matrix or the entropy for +the $n^{th}$ excited state will be the same as that for the ground state with $n = 0$. For this state, the density +matrix is: + +$$ \rho_{\eta}(x, r) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left\{ -\frac{1}{4} \left[ \frac{(x+r)^2}{\cosh(2\eta)} + (x-r)^2 \cosh(2\eta) \right] \right\} \quad (91) $$ + +and the entropy is: + +$$ S_{\eta} = 2 \left[ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right] \quad (92) $$ + +The density distribution $\rho_\eta(x,x)$ becomes: + +$$ \rho_{\eta}(x,x) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left( -\frac{x^2}{\cosh(2\eta)} \right) \qquad (93) $$ + +The width of the distribution becomes $\sqrt{\cosh(2\eta)}$, and the distribution becomes wide-spread as $\eta$ becomes larger. Likewise, the momentum distribution becomes wide-spread as can be seen in Figure 4. This simultaneous increase in the momentum and position distribution widths is due to our inability to measure the y variable hidden in Feynman's rest of the universe [22]. 
In their 1990 paper [27], Kim and Wigner used the *x* and *y* variables as the longitudinal and time-like variables, respectively, in the Lorentz-covariant world. In the quantum world, it is a widely accepted view that there are no time-like excitations. Thus, it is fully justified to restrict the *y* component to its ground state, as we did in Section 5.

**8. Space-Time Entanglement**

The series given in Equation (1) plays the central role in the concept of Gaussian or continuous-variable entanglement, where a measurement on one variable affects the quantum mechanics of the other variable. If one of the variables is not observed, it belongs to Feynman's rest of the universe.

A series of the form of Equation (1) was developed earlier for studying harmonic oscillators in moving frames [20,24–28]. Here, *z* and *t* are the space-like and time-like separations between two constituent particles bound together by a harmonic oscillator potential. There are excitations along the longitudinal direction, but no excitations are allowed along the time-like direction. Dirac described this as the "c-number" time-energy uncertainty relation [16]. In 1927, Dirac was speaking of a system without special relativity. In 1945 [17], he attempted to construct space-time wave functions using harmonic oscillators. In 1949 [18], he introduced his light-cone coordinate system for Lorentz boosts, demonstrating that the boost is a squeeze transformation. It is now possible to combine Dirac's three observations to construct the Lorentz-covariant picture of quantum bound states, as illustrated in Figure 7.

If the system is at rest, we use the wave function:

$$ \psi_0^n(z,t) = \chi_n(z)\chi_0(t) \qquad (94) $$

which allows excitations along the *z* axis, but no excitations along the *t* axis, in accordance with Dirac's c-number time-energy uncertainty relation.
If the system is boosted, the *z* and *t* variables are replaced by *z′* and *t′*, where:

$$ z' = (\cosh \eta)z - (\sinh \eta)t, \quad \text{and} \quad t' = -(\sinh \eta)z + (\cosh \eta)t \qquad (95) $$

This is a squeeze transformation, as in the case of Equation (17). In terms of these space-time variables, the wave function of Equation (84) can be written as:

$$ \psi_{\eta}^{n}(z, t) = \chi_{n}(z')\chi_{0}(t') \qquad (96) $$

and the series of Equation (65) then becomes:

$$ \psi_{\eta}^{n}(z, t) = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^{k} \chi_{k+n}(z) \chi_{k}(t) \quad (97) $$

**Figure 7.** Dirac's form of Lorentz-covariant quantum mechanics. In addition to Heisenberg's uncertainty relation, which allows excitations along the spatial direction, there is the "c-number" time-energy uncertainty relation without excitations. This form of quantum mechanics can be combined with Dirac's light-cone picture of the Lorentz boost, resulting in the Lorentz-covariant picture of quantum mechanics. The elliptic squeeze shown in this figure can be called space-time entanglement.

Since the Lorentz-covariant oscillator formalism shares the same set of formulas with the Gaussian entangled states, it is possible to explain some aspects of space-time physics using the concepts and terminology developed in quantum optics, as illustrated in Figure 1.

The time-separation variable is a case in point. The Bohr radius is a well-defined spatial separation between the proton and electron in the hydrogen atom. However, if the atom is boosted, this radius picks up a time-like separation. This time-separation variable does not exist in the Schrödinger picture of quantum mechanics, yet it plays the pivotal role in the covariant harmonic oscillator formalism.
It is gratifying to note that this "hidden or forgotten" variable plays a role in the real world while being entangled with the observable longitudinal variable. With this point in mind, let us study some of the consequences of this space-time entanglement.

First of all, does the wave function of Equation (96) carry a probability interpretation in the Lorentz-covariant world? Since $dz\,dt = dz'\,dt'$, the normalization condition:

$$ \int |\psi_{\eta}^{n}(z, t)|^{2} \, dt \, dz = 1 \qquad (98) $$

is Lorentz-invariant. If the system is at rest, the $z$ and $t$ variables are completely disentangled, and the spatial component of the wave function satisfies the Schrödinger equation without the time-separation variable.

However, in the Lorentz-covariant world, we have to consider the inner product:

$$ (\psi_{\eta}^{n}(z,t), \psi_{\eta'}^{m}(z,t)) = \int [\psi_{\eta}^{n}(z,t)]^{*} \psi_{\eta'}^{m}(z,t) \, dz \, dt \quad (99) $$

The evaluation of this integral was carried out by Michael Ruiz in 1974 [45], with the result:

$$ \left( \frac{1}{\cosh(\eta - \eta')} \right)^{n+1} \delta_{nm} \qquad (100) $$

In order to see the physical implications of this result, let us assume that one of the oscillators is at rest with $\eta' = 0$ and the other is moving with velocity $\beta = \tanh(\eta)$. Then:

$$ (\psi_{\eta}^{n}(z,t), \psi_{0}^{m}(z,t)) = (\sqrt{1-\beta^2})^{n+1} \delta_{nm} \qquad (101) $$

Indeed, the wave functions are orthonormal if they are in the same Lorentz frame. If one of them is boosted, the inner product shows the effect of Lorentz contraction. We are familiar with the contraction $\sqrt{1-\beta^2}$ for a rigid rod; the ground state of the oscillator wave function is contracted like a rigid rod.

The probability density $|\psi_\eta^0(z)|^2$ for the oscillator in the ground state has one hump. For the $n^{th}$ excited state, there are $(n+1)$ humps.
If each hump is contracted like $\sqrt{1-\beta^2}$, the net contraction factor is $(\sqrt{1-\beta^2})^{n+1}$ for the $n^{th}$ excited state. This result is illustrated in Figure 8.

**Figure 8.** Orthogonality relations for two covariant oscillator wave functions. The orthogonality relation is preserved in all frames; however, the inner product exhibits the Lorentz-contraction effect when the two wave functions belong to different frames.

With this understanding, let us go back to the entanglement problem. The ground-state wave function takes the Gaussian form given in Equation (11):

$$ \psi_0(z,t) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (z^2 + t^2) \right\} \qquad (102) $$

where the $x$ and $y$ variables are replaced by $z$ and $t$, respectively. If Lorentz-boosted, this Gaussian function becomes squeezed into [20,24,25]:

$$ \psi_{\eta}^{0}(z,t) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta}(z+t)^2 + e^{2\eta}(z-t)^2 \right] \right\} \qquad (103) $$

leading to the series:

$$ \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(z) \chi_k(t) \qquad (104) $$

According to this formula, the $z$ and $t$ variables are entangled in the same way as the $x$ and $y$ variables are.

Here, the $z$ and $t$ variables are the space and time separations between two particles bound together by the oscillator force. The concept of the space separation is well defined, as in the case of the Bohr radius. On the other hand, the time separation is still hidden or forgotten in the present form of quantum mechanics. In the Lorentz-covariant world, this variable affects what we observe in the real world by entangling itself with the longitudinal spatial separation.

In Chapter 16 of their book [9], Walls and Milburn wrote down the series of Equation (1) and discussed what happens when the $\eta$ parameter becomes infinitely large.
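The equality between the squeezed Gaussian of Equation (103) and the series of Equation (104) can be verified numerically with a truncated sum. In the sketch below (helper names and the truncation `kmax` are illustrative), `chi` is the normalized harmonic-oscillator eigenfunction $\chi_k$, built from the physicists' Hermite polynomials:

```python
import numpy as np
from math import factorial, pi, sqrt, tanh, cosh, exp

def chi(k, x):
    # Normalized oscillator eigenfunction chi_k(x) = H_k(x) e^{-x^2/2} / sqrt(2^k k! sqrt(pi))
    c = np.zeros(k + 1)
    c[k] = 1.0
    return (np.polynomial.hermite.hermval(x, c)
            * exp(-x**2 / 2) / sqrt(2.0**k * factorial(k) * sqrt(pi)))

def gaussian(z, t, eta):
    # Squeezed Gaussian of Equation (103)
    return exp(-0.25 * (exp(-2*eta)*(z+t)**2 + exp(2*eta)*(z-t)**2)) / sqrt(pi)

def series(z, t, eta, kmax=80):
    # Entanglement series of Equation (104), truncated at kmax terms
    return sum(tanh(eta)**k * chi(k, z) * chi(k, t) for k in range(kmax)) / cosh(eta)

z, t, eta = 0.3, -0.5, 0.8
print(gaussian(z, t, eta), series(z, t, eta))  # agree
```

Since the series is geometric in $\tanh\eta$, a few dozen terms suffice for moderate $\eta$; as $\eta$ grows, more and more terms contribute, which is the analytic face of the large-$\eta$ behavior discussed next.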
We note that the series given in Equation (104) takes the same form as the expression given by Walls and Milburn, as well as in other papers dealing with Gaussian entanglement. As in the case of Walls and Milburn, we are interested in what happens when $\eta$ becomes very large.

As we have emphasized throughout the present paper, it is possible to study the entanglement series using the squeezed Gaussian function given in Equation (103), and hence to study this problem using the ellipse. Indeed, we can carry out the mathematics of entanglement using the ellipse shown in Figure 9. This figure is the same as Figure 6, but it illustrates the entanglement of the space and time separations, instead of the $x$ and $y$ variables. If the particle is at rest with $\eta = 0$, the Gaussian form corresponds to the circle in Figure 9. When the particle gains speed, this Gaussian function becomes squeezed into an ellipse. This ellipse becomes concentrated along the light cone with $t = z$ as $\eta$ becomes very large.

The point is that we are able to observe this effect in the real world. These days, the velocity of protons from high-energy accelerators is very close to that of light. According to Gell-Mann [54], the proton is a bound state of three quarks. Since quarks are confined in the proton, they have never been observed in isolation, and the binding force must be like that of the harmonic oscillator. Furthermore, the observed mass spectra of the hadrons exhibit the degeneracy of the three-dimensional harmonic oscillator [43]. We use the word "hadron" for a bound state of quarks. The simplest hadron is thus a bound state of two quarks.

In 1969 [55], Feynman observed that the same proton, when moving with a velocity close to that of light, can be regarded as a collection of partons, with the following peculiar properties.

1. The parton picture is valid only for protons moving with velocity close to that of light.

2.
The interaction time between the quarks becomes dilated, and the partons behave like free particles.

3. The momentum distribution becomes widespread as the proton moves faster. Its width is proportional to the proton momentum.

4. The number of partons is not conserved, while the proton starts with a finite number of quarks.

**Figure 9.** Feynman's rest of the universe. This figure is the same as Figure 6. Here, the space variable $z$ and the time variable $t$ are entangled.

Indeed, Figure 10 shows why the quark and parton models are two limiting cases of one Lorentz-covariant entity. In the oscillator regime, the three-particle system can be reduced to two independent two-particle systems [43]. Also in the oscillator regime, the momentum-energy wave function takes the same form as the space-time wave function, and thus has the same squeeze or entanglement property as illustrated in this figure. This leads to the widespread momentum distribution [20,56,57].

**Figure 10.** The transition from the quark model to the parton model through space-time entanglement. When $\eta = 0$, the system is called the quark model, where the space separation and the time separation are disentangled. Their entanglement becomes maximal when $\eta = \infty$. The quark model is transformed continuously into the parton model as the $\eta$ parameter increases from zero to $\infty$. The mathematics of this transformation is given in terms of circles and ellipses.

As also indicated in Figure 10, the time separation between the quarks becomes large as $\eta$ becomes large, leading to a weaker spring constant. This is why the partons behave like free particles [20,56,57].

As $\eta$ becomes very large, all of the particles are confined into a narrow strip around the light cone. The number of particles is not constant for massless particles, as in the case of black-body radiation [20,56,57].

Indeed, the oscillator model explains the basic features of the hadronic spectra [43].
Does the oscillator model also account for the basic features of the parton distribution observed in high-energy laboratories? The answer is yes. In his 1981 paper [58], Paul Hussar compared the parton distribution observed in a high-energy laboratory with the Lorentz-boosted Gaussian distribution. They are close enough to justify the view that the quark and parton models are two limiting cases of one Lorentz-covariant entity.

To summarize, the proton makes a phase transition from the bound state into a plasma state as it moves faster, as illustrated in Figure 10. The unobserved time-separation variable becomes more prominent as $\eta$ becomes larger. We can now go back to the form of the entropy given in Equation (92) and calculate it numerically. It is plotted against $(\tanh \eta)^2 = \beta^2$ in Figure 11. The entropy is zero when the hadron is at rest, and it becomes infinite as the hadronic speed reaches the speed of light.

**Figure 11.** Entropy and temperature as functions of $[\tanh(\eta)]^2 = \beta^2$. They are both zero when the hadron is at rest, but they become infinitely large when the hadronic speed becomes close to that of light. The curvature of the temperature plot changes suddenly around $[\tanh(\eta)]^2 \approx 0.8$, indicating a phase transition.

Let us go back to the expression given in Equation (87). For the ground state with $n = 0$, the density matrix becomes:

$$ \rho_{\eta}(z, z') = \left( \frac{1}{\cosh \eta} \right)^2 \sum_k (\tanh \eta)^{2k} \chi_k(z) \chi_k(z') \quad (105) $$

We can now compare this expression with the density matrix for the thermally excited oscillator state [22]:

$$ \rho_{T}(z, z') = \left(1 - e^{-1/T}\right) \sum_{k} e^{-k/T} \chi_{k}(z) \chi_{k}(z') \quad (106) $$

By comparing these two expressions, we arrive at:

$$ [\tanh(\eta)]^2 = e^{-1/T} \quad (107) $$

and thus:

$$ T = \frac{-1}{\ln[(\tanh \eta)^2]} \quad (108) $$

This temperature is also plotted against $(\tanh \eta)^2$ in Figure 11.
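The identification of Equations (107) and (108) can be cross-checked numerically: the thermal oscillator has eigenvalues $\lambda_k = (1 - e^{-1/T})\,e^{-k/T}$, and inserting $e^{-1/T} = \tanh^2\eta$ into its von Neumann entropy reproduces Equation (92). A sketch (helper names and the truncation are illustrative):

```python
import numpy as np

def temperature(eta):
    # Eq. (108)
    return -1.0 / np.log(np.tanh(eta) ** 2)

def thermal_entropy(T, kmax=5000):
    # Entropy of the thermal oscillator with eigenvalues (1 - x) x^k, x = e^{-1/T}
    x = np.exp(-1.0 / T)
    lam = (1 - x) * x ** np.arange(kmax)
    lam = lam[lam > 0]          # drop underflowed terms before taking logs
    return -np.sum(lam * np.log(lam))

eta = 1.5
S_boost = 2 * (np.cosh(eta)**2 * np.log(np.cosh(eta))
               - np.sinh(eta)**2 * np.log(np.sinh(eta)))  # Eq. (92)
print(thermal_entropy(temperature(eta)), S_boost)  # agree
```

This is the quantitative sense in which a boost of the bound state mimics heating: the squeeze parameter fixes an effective temperature through Equation (108).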
The temperature is zero if the hadron is at rest, but it becomes infinite when the hadronic speed becomes close to that of light. The slope of the curve changes suddenly around $(\tanh \eta)^2 \approx 0.8$, indicating a phase transition from the bound state to the plasma state.

In this section, we have shown how useful the concept of entanglement is in understanding the role of the time separation in high-energy hadronic physics, which includes Gell-Mann's quark model and Feynman's parton model as two limiting cases of one Lorentz-covariant entity.

**9. Concluding Remarks**

The main point of this paper is the mathematical identity:

$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta} (x+y)^2 + e^{2\eta} (x-y)^2 \right] \right\} = \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(x) \chi_k(y) \quad (109) $$

which says that the series of Equation (1) is an expansion of the Gaussian form given in Equation (2).

The first derivation of this series was published in 1979 [26] as a formula from the Lorentz group. Since this identity is not well known, we explained in Section 5 how it can be derived from the generating function of the Hermite polynomials.

While the series serves useful purposes in understanding the physics of entanglement, the Gaussian form can be used to transfer this idea to high-energy hadronic physics. A hadron, such as the proton, is a quantum bound state. As was pointed out in Section 8, the squeezed Gaussian function of Equation (109) plays the pivotal role for hadrons moving with relativistic speeds.

The Bohr radius is a very important quantity in physics. It is the spatial separation between the proton and electron in the hydrogen atom. Likewise, there is a space-like separation between the constituent particles in a bound state at rest. When the bound state moves, it picks up a time-like component.
However, in the present form of quantum mechanics, this time-like separation is not recognized. Indeed, this variable is hidden in Feynman's rest of the universe. When the system is Lorentz-boosted, this variable entangles itself with the measurable longitudinal variable. Our failure to measure this entangled variable appears in the form of entropy and temperature in the real world.

While harmonic oscillators are applicable to many aspects of quantum mechanics, Paul A. M. Dirac observed in 1963 [21] that the system of two oscillators also contains the symmetries of the Lorentz group. We discussed in this paper one concrete case of Dirac's symmetry. There are different languages for harmonic oscillators, such as the Schrödinger wave function, step-up and step-down operators, and the Wigner phase-space distribution function. In this paper, we made extensive use of a pictorial language with circles and ellipses.

Let us go back to Equation (109); this mathematical identity was published in 1979 as textbook material in the American Journal of Physics [26], and the same formula was later included in a textbook on the Lorentz group [20]. It is gratifying to note that the same formula serves as a useful tool for the current literature on quantum information theory [59,60].

**Author Contributions:** Each of the authors participated in developing the material presented in this paper and in writing the manuscript.

**Conflicts of Interest:** The authors declare that no conflict of interest exists.

## References

1. Giedke, G.; Wolf, M.M.; Krueger, O.; Werner, R.F.; Cirac, J.I. Entanglement of formation for symmetric Gaussian states. Phys. Rev. Lett. **2003**, *91*, 107901.

2. Braunstein, S.L.; van Loock, P. Quantum information with continuous variables. Rev. Mod. Phys. **2005**, *77*, 513–676.

3. Kim, Y.S.; Noz, M.E. Coupled oscillators, entangled oscillators, and Lorentz-covariant oscillators. J. Opt. B Quantum Semiclass. Opt. **2005**, *7*, S459–S467.
4. Ge, W.; Tasgin, M.E.; Zubairy, M.S. Conservation relation of nonclassicality and entanglement for Gaussian states in a beam splitter. Phys. Rev. A **2015**, *92*, 052328.

5. Gingrich, R.M.; Adami, C. Quantum Entanglement of Moving Bodies. Phys. Rev. Lett. **2002**, *89*, 270402.

6. Dodd, P.J.; Halliwell, J.J. Disentanglement and decoherence by open system dynamics. Phys. Rev. A **2004**, *69*, 052105.

7. Ferraro, A.; Olivares, S.; Paris, M.G.A. Gaussian States in Continuous Variable Quantum Information; Edizioni di Filosofia e Scienze, 2005. Available online: http://arxiv.org/abs/quant-ph/0503237 (accessed on 24 June 2016).

8. Adesso, G.; Illuminati, F. Entanglement in continuous-variable systems: Recent advances and current perspectives. J. Phys. A **2007**, *40*, 7821–7880.

9. Walls, D.F.; Milburn, G.J. Quantum Optics, 2nd ed.; Springer: Berlin, Germany, 2008.

10. Yuen, H.P. Two-photon coherent states of the radiation field. Phys. Rev. A **1976**, *13*, 2226–2243.

11. Yurke, B.; Potasek, M. Obtainment of Thermal Noise from a Pure State. Phys. Rev. A **1987**, *36*, 3464–3466.

12. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. Am. J. Phys. **1989**, *57*, 692–697.

13. Paris, M.G.A. Entanglement and visibility at the output of a Mach–Zehnder interferometer. Phys. Rev. A **1999**, *59*, 1615.

14. Kim, M.S.; Son, W.; Buzek, V.; Knight, P.L. Entanglement by a beam splitter: Nonclassicality as a prerequisite for entanglement. Phys. Rev. A **2002**, *65*, 032323.

15. Han, D.; Kim, Y.S.; Noz, M.E. Linear Canonical Transformations of Coherent and Squeezed States in the Wigner Phase Space III. Two-mode States. Phys. Rev. A **1990**, *41*, 6233–6244.

16. Dirac, P.A.M. The Quantum Theory of the Emission and Absorption of Radiation. Proc. Roy. Soc. (Lond.) **1927**, *A114*, 243–265.

17. Dirac, P.A.M. Unitary Representations of the Lorentz Group. Proc. Roy. Soc. (Lond.)
**1945**, *A183*, 284–295.

18. Dirac, P.A.M. Forms of relativistic dynamics. Rev. Mod. Phys. **1949**, *21*, 392–399.

19. Yukawa, H. Structure and Mass Spectrum of Elementary Particles. I. General Considerations. Phys. Rev. **1953**, *91*, 415–416.

20. Kim, Y.S.; Noz, M.E. Theory and Applications of the Poincaré Group; Reidel: Dordrecht, The Netherlands, 1986.

21. Dirac, P.A.M. A Remarkable Representation of the 3 + 2 de Sitter Group. J. Math. Phys. **1963**, *4*, 901–909.

22. Feynman, R.P. Statistical Mechanics; Benjamin Cummings: Reading, MA, USA, 1972.

23. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative Example of Feynman's Rest of the Universe. Am. J. Phys. **1999**, *67*, 61–66.

24. Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the quark model. Phys. Rev. D **1973**, *8*, 3521–3627.

25. Kim, Y.S.; Noz, M.E.; Oh, S.H. Representations of the Poincaré group for relativistic extended hadrons. J. Math. Phys. **1979**, *20*, 1341–1344.

26. Kim, Y.S.; Noz, M.E.; Oh, S.H. A simple method for illustrating the difference between the homogeneous and inhomogeneous Lorentz groups. Am. J. Phys. **1979**, *47*, 892–897.

27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz Transformations. Phys. Lett. A **1990**, *147*, 343–347.

28. Kim, Y.S.; Noz, M.E. Lorentz Harmonics, Squeeze Harmonics and Their Physical Applications. Symmetry **2011**, *3*, 16–36.

29. Klauder, J.R.; Sudarshan, E.C.G. Fundamentals of Quantum Optics; Benjamin: New York, NY, USA, 1968.

30. Saleh, B.E.A.; Teich, M.C. Fundamentals of Photonics, 2nd ed.; John Wiley and Sons: Hoboken, NJ, USA, 2007.

31. Miller, W. Symmetry Groups and Their Applications; Academic Press: New York, NY, USA, 1972.

32. Hall, B.C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, 2nd ed.; Springer International: Cham, Switzerland, 2015.

33. Wigner, E. On Unitary Representations of the Inhomogeneous Lorentz Group. Ann. Math. **1939**, *40*, 149–204.

34.
Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. Phys. Rev. **1964**, *135*, B1049–B1056.

35. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. J. Math. Phys. **1990**, *31*, 55–60.

36. Başkal, S.; Kim, Y.S.; Noz, M.E. Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere. Symmetry **2014**, *6*, 473–515.

37. Başkal, S.; Kim, Y.S.; Noz, M.E. Physics of the Lorentz Group; IOP Science; Morgan & Claypool Publishers: San Rafael, CA, USA, 2015.

38. Kim, Y.S.; Yeh, Y. $E(2)$-symmetric two-mode sheared states. J. Math. Phys. **1992**, *33*, 1237–1246.

39. Kim, Y.S.; Noz, M.E. Phase Space Picture of Quantum Mechanics; World Scientific Publishing Company: Singapore, Singapore, 1991.

40. Kim, Y.S.; Noz, M.E. Dirac Matrices and Feynman's Rest of the Universe. Symmetry **2012**, *4*, 626–643.

41. Wigner, E. On the Quantum Corrections for Thermodynamic Equilibrium. Phys. Rev. **1932**, *40*, 749–759.

42. Han, D.; Kim, Y.S.; Noz, M.E. $O(3,3)$-like Symmetries of Coupled Harmonic Oscillators. J. Math. Phys. **1995**, *36*, 3940–3954.

43. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current Matrix Elements from a Relativistic Quark Model. Phys. Rev. D **1971**, *3*, 2706–2732.

44. Rotbart, F.C. Complete orthogonality relations for the covariant harmonic oscillator. Phys. Rev. D **1981**, *24*, 3078–3090.

45. Ruiz, M.J. Orthogonality relations for covariant harmonic oscillator wave functions. Phys. Rev. D **1974**, *10*, 4306–4307.

46. Magnus, W.; Oberhettinger, F.; Soni, R.P. Formulas and Theorems for the Special Functions of Mathematical Physics; Springer-Verlag: Heidelberg, Germany, 1966.

47. Doman, B.G.S. *The Classical Orthogonal Polynomials*; World Scientific: Singapore, Singapore, 2016.

48. Bargmann, V.
Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640.

49. Han, D.; Kim, Y.S. Special relativity and interferometers. *Phys. Rev. A* **1988**, *37*, 4494–4496.

50. Han, D.; Kim, Y.S.; Noz, M.E. Wigner rotations and Iwasawa decompositions in polarization optics. *Phys. Rev. E* **1999**, *60*, 1036–1041.

51. Von Neumann, J. *Mathematische Grundlagen der Quantenmechanik*; Springer: Berlin, Germany, 1932. (von Neumann, J. *Mathematical Foundations of Quantum Mechanics*; Princeton University: Princeton, NJ, USA, 1955.)

52. Fano, U. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. *Rev. Mod. Phys.* **1957**, *29*, 74–93.

53. Wigner, E.P.; Yanase, M.M. Information Contents of Distributions. Proc. Natl. Acad. Sci. USA **1963**, *49*, 910–918.

54. Gell-Mann, M. A Schematic Model of Baryons and Mesons. Phys. Lett. **1964**, *8*, 214–215.

55. Feynman, R.P. Very High-Energy Collisions of Hadrons. Phys. Rev. Lett. **1969**, *23*, 1415–1417.

56. Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the parton picture. Phys. Rev. D **1977**, *15*, 335–338.

57. Kim, Y.S. Observable gauge transformations in the parton picture. Phys. Rev. Lett. **1989**, *63*, 348–351.

58. Hussar, P.E. Valons and harmonic oscillators. Phys. Rev. D **1981**, *23*, 2781–2783.

59. Leonhardt, U. *Essential Quantum Optics*; Cambridge University Press: London, UK, 2010.

60. Furusawa, A.; van Loock, P. *Quantum Teleportation and Entanglement: A Hybrid Approach to Optical Quantum Information Processing*; Wiley-VCH: Weinheim, Germany, 2010.

© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---

Article

Massless Majorana-Like Charged Carriers in Two-Dimensional Semimetals

Halina Grushevskaya † and George Krylov †,*

Physics Department, Belarusian State University, 4 Nezaleznasti Ave., 220030 Minsk, Belarus; grushevskaja@bsu.by

* Correspondence: krylov@bsu.by; Tel.: +375-296-62-44-97

† These authors contributed equally to this work.

Academic Editor: Young Suh Kim

Received: 29 February 2016; Accepted: 1 July 2016; Published: 8 July 2016

**Abstract:** The band structure of strongly correlated two-dimensional (2D) semimetal systems is found to be significantly affected by the spin-orbit coupling (SOC), resulting in SOC-induced Fermi surfaces. Dirac, Weyl and Majorana representations are used for the description of different semimetals, though the band structures of all these systems are very similar. We develop a theoretical approach to the band theory of two-dimensional semimetals within the Dirac–Hartree–Fock self-consistent field approximation. It reveals a partially broken symmetry of the Dirac cone, affected by quasi-relativistic exchange interactions, for 2D crystals with hexagonal symmetry. The Fermi velocity becomes an operator within this approach, and elementary excitations have been calculated in the tight-binding approximation, taking into account the exchange interaction of a $\pi(p_z)$-electron with its three nearest $\pi(p_z)$-electrons. These excitations are described by the massless Majorana equation instead of the Dirac one. The squared equation for this field is of the Klein–Gordon–Fock type. The appearance of four pairs of nodes, a characteristic feature of the band structure of 2D semimetals, is shown to be described naturally within the developed formalism. Numerical simulations of the band structure have been performed for the proposed 2D model of graphene and for a monolayer of Pb atoms.
**Keywords:** 2D semimetals; Dirac–Hartree–Fock self-consistent field approximation; Majorana-like field; Weyl-like nodes; Fermi velocity operator

PACS: 73.22.-f, 81.05.Bx

# 1. Introduction

Strongly correlated materials, such as two-dimensional (2D) complex oxides of transition metals, graphene, oxides with a perovskite structure, and IV–VI semiconductors (three-dimensional (3D) analogues of graphene), can demonstrate unusual electronic and magnetic properties, such as half-metallicity. The linear dispersion law for such materials is due to the simultaneous existence of positively and negatively charged carriers [1]. Conical singularities are generic in quantum crystals having honeycomb lattice symmetry [2]. The bipolarity of the material suggests that an excitonic-insulator state is possible for it. Since an electron-hole pair is at the same time its own antiparticle, the Majorana representation has been used [3,4] to describe the interaction of pseudospins with the valley currents in monolayer graphene.

The electron is a complex fermion, so if one decomposes it into its real and imaginary parts, which would be Majorana fermions, they are rapidly re-mixed by electromagnetic interactions. However, such a decomposition can be reasonable for a superconductor where, because of effective electrostatic screening, the Bogoliubov quasi-fermions behave as if they were neutral excitations [5].

A helical magnetic ordering (commensurate magnetism) occurs due to strong spin-orbit coupling (SOC) between Fe and Pb atoms in the system where a chain of ferromagnetic Fe atoms is placed on the surface of a conventional superconductor composed of Pb atoms [6]. In this case, the imposition of SOC results in the appearance of Majorana-like excitations at the ends of the Fe atom chain.
The p-wave pairing discovered in this Fe chain allows one to assume that there exists a new mechanism of superconductivity in high-temperature superconductors, mediated by the exchange of Majorana particles rather than of phonons as in the Bardeen–Cooper–Schrieffer theory. Such a novel superconducting state emerges, for example, in the compound CeCoIn₅ in strong magnetic fields, in addition to the ordinary superconducting state [7]. It has been shown [8–10] that the coupling of electrons into Cooper pairs in pnictides (LiFeAs with FeAs slabs) is mediated by the mixing of d-electron orbitals surrounding the atomic cores of the transition metal. The new state is mediated by an antiferromagnetic order, and its fluctuations appear due to strong spin-orbit coupling [8,9,11]. This has been experimentally confirmed for LiFeAs in [10]. For the antiferromagnetic itinerant-electron system LaFe₁₂B₆, ultrasharp magnetization steps have been observed [12]. The latter can only be explained by the existence of an antiferromagnetic order whose fluctuations appear due to strong spin-orbit coupling.

Thus, there is strong evidence that SOC may control the spin ordering in the absence of external magnetic fields. However, the mechanism that leads to such commensurate magnetism has not yet been established.

The phenomenon of the contraction of the electron density distribution in one direction is called nematicity. It is observed in the pnictide BaFe₂(As₁₋ₓPₓ)₂ placed in a magnetic field, and the phenomenon persists in the superconducting state [13]. The nematicity is coupled with considerable stripe spin fluctuations in FeSe [14]. Very strong spin-orbit coupling leads to a contraction by about 10% and a rotation by 30° of the hexagonal Brillouin zone of the delafossite oxide PtCoO₂, belonging to yet another class of topological insulators in which the metal atoms lie in layers with triangular lattices [15].
Other topological insulators, namely the so-called Weyl materials with a linear dispersion law, are close in their properties to layered perovskite-like materials (see [16] and references therein). Currently, the first candidate for such a material has been found, namely TaAs, whose Brillouin zone has Weyl-like nodes and Fermi arcs [17–19].

Moreover, experimental evidence of the similarities between the Fermi surfaces of the insulator SmB₆ and the metallic rare earth hexaborides (PrB₆ and LaB₆) has been presented in [20]. To explain the accompanying ordering phenomena, each associated with a different symmetry breaking, it is necessary to develop a unified theory, as has been pointed out in [9].

Electrically charged carriers in strongly correlated semimetallic systems with half-filled bands are massless fermions [15,21,22].

In a low-dimensional system, the exciton binding energy turns out to be high [23] and, accordingly, the transition to the state of excitonic insulator is possible. Therefore, the Majorana rather than the Weyl representation is preferable for the description of 2D semimetals. An attempt to represent the transition to the excitonic-insulator state as the appearance of a Majorana zero-mode solution in graphene with trigonal warping [24] contradicts experimental data on the absence of a gap in the band structure of graphene [25], on the diminishing of charge-carrier mobility [26] and on the minimal conductivity [27]. At the present time, however, there exist experimental signatures of graphene Majorana states in graphene-superconductor junctions without the need for spin-orbit coupling [28]. On the other hand, modern quantum field theory of pseudo-Dirac quasiparticles in the random phase approximation predicts a strong screening that destroys the excitonic pairing instability if the fermion dynamic mass *m*(*p*), dependent on the momentum *p*, is small in comparison with the chemical potential *μ*: *m*(*p*) ≤ *μ* [29].
In this paper, we show how the above-described features of layered materials can be formalized in 2D models in which the charged carriers are quasiparticles of the Majorana rather than of the Weyl type. We also show that, under certain conditions, these quasiparticles reveal themselves as Weyl-like states or as massless Dirac pseudofermions.
---PAGE_BREAK---

However, the use of the well-known Majorana representations to describe a semimetal as a system of massless quasiparticles runs into a puzzle: the absence of harmonic oscillatory solutions in the ultrarelativistic limit for Majorana particles of zero mass [30]. Such equations are known for massive Majorana particles only [31–33].

In this paper, we reveal different aspects of the appearance of Majorana-like quasiparticle states in the band structure of semimetals. The 2D Hartree-Fock approximation for graphene predicts the experimentally observed increase of the Fermi velocity $v_F(\vec{p})$ at small momenta $p$ [25], but it leads to a logarithmically divergent $v_F(\vec{p})$ at $p \to 0$ [34]. To take this effect of the long-range Coulomb interaction into account correctly, our calculation is based on the quasi-relativistic Dirac-Hartree-Fock self-consistent-field approach developed earlier [35,36].

The goal is to construct a 2D-semimetal model whose equation of motion is a pseudo-relativistic massless Majorana-like one. We show that the squared equation for this field is of the Klein-Gordon-Fock type, and therefore the charged carriers in such 2D-semimetal models can be regarded as massless Majorana-like quasiparticles.

We study quasiparticle excitations of the electronic subsystem of a hexagonal monoatomic layer (monolayer) of light or heavy atoms in the tight-binding approximation. The simulations are performed for C and Pb atoms under the assumption that sp²-hybridization of the s- and p-electron orbitals is also possible for Pb atoms.
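The tight-binding setting referred to above can be illustrated with the standard nearest-neighbor dispersion of a hexagonal monolayer [44,45]. This is only a sketch of the textbook limit, not the exchange-dressed model developed below; the hopping value t = 2.7 eV is a commonly quoted literature figure assumed here, not a parameter of this paper:

```python
import numpy as np

t = 2.7  # nearest-neighbor hopping in eV (an assumed, commonly quoted value)
a1 = np.array([1.5, np.sqrt(3) / 2])   # hexagonal lattice vectors,
a2 = np.array([1.5, -np.sqrt(3) / 2])  # in units of the C-C distance

def bands(k):
    """Textbook dispersion E_pm(k) = +-t * |1 + exp(ik.a1) + exp(ik.a2)|."""
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return t * abs(f), -t * abs(f)

K = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])  # Dirac point
gamma = np.zeros(2)                                          # zone center

print(bands(K))      # bands touch: both values ~ 0 (the Dirac cone apex)
print(bands(gamma))  # full bandwidth at the zone center: (3t, -3t)
```

In this approximation the valence and conduction bands touch at the Dirac points, which is the degeneracy whose partial unfolding the model below addresses.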
We demonstrate that the band-structure features of these hexagonal monolayers are similar to each other due to the similarity of the external electronic shells of their atoms. Despite the similarity of the band structures, the charged carriers in such 2D-semimetal models can possess different features: e.g., the charged carriers in the monolayer of C atoms can be thought of as massless Dirac pseudofermions, whereas in the monolayer of Pb atoms they reveal themselves as Weyl-like states.

The paper is organized as follows. In Section 2, we propose a semimetal model with coupling between pseudospin and valley currents and prove the pseudo-helicity conservation law. In Section 3, we briefly introduce the approach of [3,35–37] and use it in a simple tight-binding approximation to obtain the system of equations for a Majorana secondary-quantized field. In Section 4, we support the statement that the squared equation for the constructed field is of the Klein-Gordon-Fock type for different model exchange operators. We also discuss features of our model that manifest themselves in the band structure of real semimetals. In Section 5, we discuss the proposed approximations for the exchange interactions in 2D semimetals and summarize our findings.

## 2. Monolayer Semimetal Model with Partial Unfolding of Dirac Bands

Semimetals are known to be bipolar materials with half-filled valence and conduction bands. A distinctive feature of the graphene band structure is the existence of Dirac cones at the Dirac points (valleys) K, K' of the Brillouin zone. In the present paper, these Dirac points are designated as $K_A, K_B$. We assume that the pseudo-spins of the hexagonally packed carbon atoms in the monoatomic layer (monolayer) of graphene are anti-ordered, as shown schematically in Figure 1a.
Since the pseudo-helicity (chirality) conservation law forbids massless charged carriers from occupying lattice sites with the opposite sign of pseudo-spin, valley currents can exist due to jumps through the forbidden sites. This is shown schematically in Figure 1a. Coupling between the pseudo-spin and the valley current in the Majorana representation of bispinors can be determined in the following way.
---PAGE_BREAK---

**Figure 1.** (a) graphene lattice composed of two sublattices {A} with spin "up" and {B} with spin "down". Right and left valley currents $J_V^R$ and $J_V^L$ are shown as circular curves with arrows. Double arrows from site A to site $B_L$ and from A to $B_R$ indicate clockwise and anti-clockwise directions. The axis of mirror reflection from $A_R$ to $B_L$ is marked by a dash-dotted line; (b) transformations of a q-circumference into ellipses under the action of the exchange operators ($\Sigma_{rel}^x$)$_{AB}$ and ($\Sigma_{rel}^x$)$_{BA}$ (in color).

According to Figure 1a, a particle can travel from a lattice site A to, e.g., a lattice site $A_R$ through the right or left site $B_R$ or $B_L$, respectively. Since the particle is symmetrical, its descriptions in the right and left reference frames have to be equivalent. Therefore, a bispinor wave function $\Psi'$ of graphene has to be chosen in the Majorana representation, and its upper and lower spin components $\psi'_{\sigma}$, $\psi'_{-\sigma}$ are transformed by the left and right representations of the Lorentz group:

$$ \Psi' = \begin{pmatrix} \psi'_{\sigma} \\ \psi'_{-\sigma} \end{pmatrix} = \begin{pmatrix} e^{\frac{i}{2}\vec{\sigma}\cdot\vec{n}}\psi_{\sigma} \\ e^{\frac{i}{2}(-\vec{\sigma})\cdot\vec{n}}\psi_{-\sigma} \end{pmatrix}.
\quad (1) $$

The wave function $\tilde{\chi}_{\sigma}^{\dagger}(\vec{r}_A) |0, +\sigma\rangle$ of a particle (in our case, of an electron-hole pair) located at the site A behaves as the component $\psi_{\sigma}$, while the wave function $\tilde{\chi}_{-\sigma}^{\dagger}(\vec{r}_B) |0, -\sigma\rangle$ of a particle located at the site B behaves as the component $\psi_{-\sigma}$ of the bispinor (1).

Relativistic particles with non-zero spin possess the helicity $h$, which is the projection of the particle's spin onto the direction of motion [32]:

$$ h = \vec{p} \cdot \vec{S} = \frac{1}{2} p_i \begin{pmatrix} \sigma_i & 0 \\ 0 & \sigma_i \end{pmatrix}, \quad (2) $$

where $\vec{p}$ is the particle momentum, $\vec{S}$ is the spin operator of the particle, and $\vec{\sigma}$ is the vector of the Pauli matrices $\sigma_i$, $i = x, y$. In quantum relativistic field theory, the helicity of a massless particle is preserved in the transition from one reference frame, moving with the velocity $v_1$, to another one, moving with the velocity $v_2$ [32,38].

Let us introduce the two-dimensional pseudospins $\vec{S}_{AB}$ and $\vec{S}_{BA}$ of the quasi-particles in the valleys $K_A$ and $K_B$ through the transformed vector $\vec{\sigma}$ of the Pauli matrices $\sigma_i$, $i = x, y$, as $\vec{S}_{AB} = \hbar\vec{\sigma}_{AB}/2$ and $\vec{S}_{BA} = \hbar\vec{\sigma}_{BA}/2$. The explicit form of this transformation is given in Section 3.
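As a quick numerical sanity check of the helicity operator in Equation (2) (purely illustrative, in natural units with ℏ = 1 and an arbitrarily chosen in-plane momentum), one can verify that its eigenvalues on each spinor block are ±|p|/2, so the helicity of a massless particle is fixed by the momentum direction:

```python
import numpy as np

# In-plane Pauli matrices entering Equation (2), i = x, y
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

px, py = 0.3, -0.4                 # arbitrary momentum with |p| = 0.5
h = 0.5 * (px * sx + py * sy)      # h = p . S acting on one spinor block

print(np.sort(np.linalg.eigvalsh(h)))  # eigenvalues ±|p|/2, i.e. ≈ [-0.25, 0.25]
```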
A valley current $J_V^R$ or $J_V^L$ on the right or left closed contour $\{A \to B_R \to A_R \to B \to A_L \to B_L \to A\}$ or $\{A \to B_L \to A_L \to B \to A_R \to B_R \to A\}$, respectively, in Figure 1 is created by an electron (hole) with pseudo-angular momentum $\vec{l}_{AB_R}$ and momentum $\vec{p}_{AB_R}$, or by an electron (hole) with $\vec{l}_{AB_L}$ and
---PAGE_BREAK---

$\vec{p}_{AB_L}$. The pseudo-helicity of the bispinors (1) describing the particles to the right or to the left of the lattice site A is defined by expressions analogous to (2):

$$h_{B_R A} = \vec{p}_{AB_R} \cdot \vec{S}_{B_R A}, \quad (3)$$

$$h_{B_L A} = \vec{p}_{AB_L} \cdot \vec{S}_{B_L A}. \quad (4)$$

Let us use the parity operator $P$, which mirrors the bispinor (1) with respect to the line passing through the points A and B. The pseudo-helicity of the mirrored bispinor is defined by the expression:

$$P h_{B_R A_R} P = h_{A_L B_L} = \vec{p}_{B_L A_L} \cdot \vec{S}_{A_L B_L}. \quad (5)$$

The pseudo-helicity $h_{AB}$ does not change its value while the valley momentum and the pseudo-spin change signs: $\vec{p}_{A_L B_L} = -\vec{p}_{B_R A_R}$ and $\vec{S}_{A_L B_L} = -\vec{S}_{B_R A_R}$.

The pseudo-helicity $h_{AB}$ is expressed through the projection $\tilde{\mathcal{M}}_{AB} = \vec{\sigma}_{BA} \cdot (\vec{l}_{AB} + \hbar\vec{\sigma}_{BA}/2)$ of the total angular momentum on the direction of the spin $\vec{\sigma}_{BA}$ as [39,40]:

$$\vec{\sigma}_{BA} \cdot \vec{p}_{AB} = \sigma^r_{BA} \left( p_{r,BA} + i \frac{\tilde{\mathcal{M}}_{AB} - \hbar/2}{r} \right) = \sigma^r_{BA} \left( p_{r,BA} + i \frac{\vec{\sigma}_{BA} \cdot \vec{l}_{AB}}{r} \right), \quad (6)$$

where $\sigma^r_{BA}$ and $p_{r,BA}$ are the radial components of the spin and of the momentum, respectively.
According to Equation (6), the pseudo-spin-orbit scalar $\vec{\sigma}_{BA} \cdot \vec{l}_{AB}$ describes the coupling (interaction) of the spin with the valley currents flowing along a closed loop clockwise or counterclockwise, as shown in Figure 1a. Hence, there exists a preferred direction along which the spin projection of the bispinor (1) does not change after the transition from one moving reference frame to another. In doing so, the spin of the particle precesses. The transformation of the electron and hole into each other in an exciton is a pseudo-precession.

As a result, the coupling of the pseudo-spin and the valley currents gives rise to the spin precession of excitonic charged carriers in graphene. In our model, the orientation of the non-equilibrium spin of the states of monolayer graphene in electromagnetic fields may be retained for a long time due to the prohibition on changing the exciton pseudo-helicity. Pseudo-precession is possible if the spins of the p_z-electrons are anti-ordered (pseudo-antiferromagnetic ordering). Therefore, the pseudo-spin precession of the exciton can be implemented through the exchange interaction. In what follows, we determine the operators $\vec{\sigma}_{BA(AB)}$, $\vec{p}_{AB(BA)}$ and describe the effects of the coupling between pseudo-spin and valley currents.

## 3. Effects of Coupling between Pseudo-Spin and Valley Current

In the quasi-relativistic approximation ($c^{-1}$ expansion), the eigenproblem for the equation of motion of the secondary-quantized field $\hat{\chi}_{-\sigma_A}^\dagger$ in the model shown in Figure 1a has the form [35–37]:

$$\left\{ \vec{\sigma} \cdot \vec{p} \, \hat{v}_F^{qu} - \frac{1}{c} (i\Sigma_{rel}^x)_{AB} (i\Sigma_{rel}^x)_{BA} \right\} \hat{\chi}_{-\sigma_A}^\dagger (\vec{r}) |0, -\sigma\rangle = E_{qu}(p) \hat{\chi}_{-\sigma_A}^\dagger (\vec{r}) |0, -\sigma\rangle, \quad (7)$$

where the Fermi velocity operator $\hat{v}_F^{qu}$ is defined as

$$\hat{v}_F^{qu} = (\Sigma_{rel}^x)_{BA} + c\hbar\vec{\sigma} \cdot (\vec{K}_A + \vec{K}_B).$$
---PAGE_BREAK---

($\Sigma_{rel}^{x}$)$_{BA}$ and ($\Sigma_{rel}^{x}$)$_{AB}$ are determined through an ordinary exchange-interaction contribution, for example [39,40]:

$$
\begin{align*}
(\Sigma_{rel}^{x})_{AB} \hat{\chi}_{\sigma_B}^{\dagger}(\vec{r}) |0, \sigma\rangle &= \sum_{i=1}^{N_v N} \int d\vec{r}_i \, \hat{\chi}_{\sigma_i B}^{\dagger}(\vec{r}) |0, \sigma\rangle \\
&\quad \times \langle 0, -\sigma_i | \hat{\chi}_{-\sigma_i A}^{\dagger}(\vec{r}_i) V(\vec{r}_i - \vec{r}) \hat{\chi}_{-\sigma_B}(\vec{r}_i) |0, -\sigma_i'\rangle.
\end{align*}
$$

Here, $V(\vec{r}_i - \vec{r})$ is the Coulomb interaction between two valence electrons with radius-vectors $\vec{r}_i$ and $\vec{r}$, $N$ is the total number of atoms in the system, $N_v$ is the number of valence electrons in an atom, and $c$ is the speed of light.
After applying the non-unitary transformation of the wave function in the form

$$
\tilde{\chi}_{-\sigma_A}^{\dagger} |0, -\sigma\rangle = (\Sigma_{rel}^{x})_{BA} \hat{\chi}_{-\sigma_A}^{\dagger} |0, -\sigma\rangle,
$$

we obtain (neglecting the mixing of the states of the Dirac points) an equation that is similar to the one in 2D quantum field theory (QFT) [41–43], but that describes the motion of a particle with pseudo-spin $\vec{S}_{AB} = \hbar\vec{\sigma}_{AB}/2$:

$$
\{\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA} - c^{-1}\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB}\} \tilde{\chi}_{-\sigma_A}^{\dagger}(\vec{r}) |0, -\sigma\rangle = \tilde{E}_{qu}(p) \tilde{\chi}_{-\sigma_A}^{\dagger}(\vec{r}) |0, -\sigma\rangle , \quad (8)
$$

with a transformed 2D vector $\vec{\sigma}_{2D}^{AB}$ of the Pauli matrices, determined as $\vec{\sigma}_{2D}^{AB} = (\Sigma_{rel}^{x})_{BA} \vec{\sigma} (\Sigma_{rel}^{x})_{BA}^{-1}$. The following notations are introduced: $\vec{p}_{BA}\tilde{\chi}_{-\sigma_A}^{\dagger} = (\Sigma_{rel}^{x})_{BA} \vec{p} \, (\Sigma_{rel}^{x})_{BA}^{-1}\tilde{\chi}_{-\sigma_A}^{\dagger} = [(\Sigma_{rel}^{x})_{BA}\vec{p}\,] \tilde{\chi}_{-\sigma_A}^{\dagger}$, $\tilde{E}_{qu} = E_{qu}/\hat{v}_{F}^{BA}$, $\hat{v}_{F}^{BA} = (\Sigma_{rel}^{x})_{BA}$, $\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB} = (\Sigma_{rel}^{x})_{BA}(i\Sigma_{rel}^{x})_{AB}(i\Sigma_{rel}^{x})_{BA}(\Sigma_{rel}^{x})_{BA}^{-1} = (i\Sigma_{rel}^{x})_{BA}(i\Sigma_{rel}^{x})_{AB}$. As one sees from the last chain of formulas, the product of the two capital sigmas behaves like a scalar mass term.

Further simulations are performed in the nearest-neighbor tight-binding approximation [44,45]. This approximation correctly predicts the graphene band structure in the energy range ±1 eV [46], which is sufficient for our purposes. We use the expressions for the exchange between $\pi(p_z)$-electrons only; their explicit form can be found in [4].
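The stretch-and-rotate action of exchange matrices on momentum space (Figure 1b) can be mimicked by a toy non-unitary 2×2 map. The matrix below is hypothetical (an arbitrary anisotropic stretch composed with a rotation); it only stands in for the actual operators $(\Sigma_{rel}^x)_{BA}$, whose axes and angle are fixed by the exchange integrals:

```python
import numpy as np

theta = np.pi / 6                       # illustrative 30-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([1.0, 0.6])                 # illustrative anisotropic stretch
M = R @ S                               # toy stand-in for an exchange matrix

phis = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(phis), np.sin(phis)])  # the q-circumference, |q| = 1
image = M @ circle                               # its image under the map

radii = np.linalg.norm(image, axis=0)
print(radii.min(), radii.max())  # ≈ 0.6 and 1.0: the circle became an ellipse
```

Any non-unitary map of this kind sends a circumference into a rotated ellipse, which is the qualitative content of Figure 1b.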
The action of the matrices ($\Sigma_{rel}^x$)$_{BA}$ and ($\Sigma_{rel}^x$)$_{AB}$ in the momentum space is shown in Figure 1b. As ($\Sigma_{rel}^x$)$_{BA}$ $\neq$ ($\Sigma_{rel}^x$)$_{AB}$, the vector $\vec{p}_{BA}$ is rotated with respect to $\vec{p}_{AB}$ and stretched. According to Figure 1b, the ellipses in the momentum spaces of electrons and holes are rotated by 90° with respect to each other. Taking into account the hexagonal symmetry of the system, this explains the experimentally observed rotation by 30° of the hexagonal Brillouin zone of PtCoO$_2$ [15].

Thus, the sequence of exchange interactions $(\Sigma_{rel}^x)_{AB} (\Sigma_{rel}^x)_{BA} (\Sigma_{rel}^x)_{AB}$ for the valley currents first rotates the electron Brillouin zone and Dirac band into the hole Brillouin zone and Dirac band, and then vice versa. Thus, the exchange $(\Sigma_{rel}^x)_{AB(BA)} \equiv \Sigma_{AB(BA)}$ interchanges the sublattice wave functions:

$$
|\psi_{AB}\rangle = \Sigma_{AB} |\psi_{BA}^*\rangle.
$$

Owing to this, and neglecting the very small mass term $c^{-1}\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB}$, the equation containing the operator of the Fermi velocity can be rewritten as follows:

$$
\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB} |\psi_{AB}\rangle = E_{qu} |\psi_{BA}^*\rangle .
\qquad (9)
$$

Taking into account that $E \to i\frac{\partial}{\partial t}$ and $\vec{p} \to -i\vec{\nabla}$, we transform the system of equations for the Majorana bispinor $(\psi_{AB}, \psi_{BA}^{*})^{T}$:
---PAGE_BREAK---

$$ \vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB} |\psi_{AB}\rangle = i \frac{\partial}{\partial t} |\psi_{BA}^*\rangle, \quad (10) $$

$$ \vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}^* |\psi_{BA}^*\rangle = -i \frac{\partial}{\partial t} |\psi_{AB}\rangle, \quad (11) $$

into the wave equation obtained by acting on Equation (10) with $\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}^*$ and using Equation (11):

$$ (\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}^*)(\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}) |\psi_{AB}\rangle = \frac{\partial^2}{\partial t^2} |\psi_{AB}\rangle. \quad (12) $$

Equation (12) describes an oscillator with the energy operator $\hat{\omega}(\vec{p})$:

$$ \hat{\omega}(\vec{p}) = \frac{1}{\sqrt{2}} [(\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA})(\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}) + (\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB})(\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA})]^{1/2}. \quad (13) $$

Now one can see that the obtained equation is the equation of motion for a Majorana bispinor wave function of the semimetal charged carriers.

Thus, the Fermi velocity becomes an operator within this approach, and the elementary excitations are fermionic excitations described by a massless Majorana-like equation rather than by a Dirac-like one.

## 4. Harmonic Analysis of the Problem

Equation (13) can be rewritten in the following form:

$$ \hat{\omega}^2(\vec{p}) = \frac{1}{2} (\hat{H}_{AB}\hat{H}_{BA} + \hat{H}_{BA}\hat{H}_{AB}), \quad (14) $$

where $\hat{H}_{AB} = \vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}$ and $\hat{H}_{BA} = \vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}$. In order to describe the proposed secondary-quantized field by a set of harmonic oscillators, it is necessary to show that the squared Equation (14), obtained by the symmetrization of the product of the Hamiltonians $\hat{H}_{AB}$ and $\hat{H}_{BA}$, is a Klein-Gordon-Fock operator.
This will be the case if the non-diagonal matrix elements of the operator vanish identically, so that the components of the equation are independent. Then, $\hat{\omega}^2(\vec{p})$ can be considered as a "square of the energy operator".

Unfortunately, because of the complex form of the exchange operator, this statement is difficult to prove in the general case. Therefore, we prove it for several approximations of the exchange interaction and demonstrate that Equation (14) is of the Klein-Gordon-Fock type.

As a first particular case in which the proposed Majorana-like field is proven to be a set of harmonic oscillators, we consider an $\epsilon$-neighborhood ($\epsilon \to 0$) of the Dirac point $K_A$ ($K_B$).

Let us designate the momentum of a particle in a valley as $\vec{q}$, determined as $\vec{q} = \vec{p} - \hbar\vec{K}_A$. For very small values of $\vec{q}$, $q \to 0$, the exchange operator $\Sigma_{AB(BA)}$ is approximated by a power-series expansion up to fourth order in $q$. An analytical calculation of the non-diagonal elements of the operator $\hat{\omega}^2(\vec{p})$, performed in the Mathematica system, then proves that they are identically zero.

Band structures for monolayer graphene and for a monolayer of Pb atoms are shown in Figure 2a,b. One can see that the Weyl nodes in graphene are located far enough from the Dirac point, whereas for the Pb-monolayer the Weyl nodes are shifted toward the Dirac point. Therefore, a Weyl-like character in the behavior of the charged carriers may be exhibited for the Pb-monolayer under the condition that the contributions up to fourth order in $q$ prevail in the exchange. In accordance with Figure 1b, the exchange operator matrices transform a circumference in the momentum space into a highly stretched ellipse, which allows us to assume the presence of nematicity in the model.
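A minimal symbolic version of this diagonality check can be reproduced in the undeformed limit, with ordinary Pauli matrices standing in for the transformed ones (an illustration of the $q \to 0$ logic only, not the paper's Mathematica computation with the full exchange operators):

```python
import sympy as sp

qx, qy = sp.symbols('q_x q_y', real=True)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])

H_AB = qx * sx + qy * sy   # sigma . q in the undeformed (q -> 0) limit
H_BA = H_AB.H              # its Hermitian-conjugate partner

# Symmetrized square, cf. Equation (14)
omega2 = ((H_AB * H_BA + H_BA * H_AB) / 2).expand()

print(omega2)  # a diagonal matrix: (q_x**2 + q_y**2) times the identity
```

The off-diagonal elements vanish identically and the diagonal equals $q^2$, i.e., the symmetrized square is of the Klein-Gordon-Fock form; the statement of this section is that the same property survives for the exchange-deformed operators up to fourth order in $q$.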
For a given $\vec{q}$, the eigenfunction of Equation (9) is a 2D spinor $\Psi$, whose normalization we choose in the form $\Psi(\vec{q}) = (\psi(\vec{q}), 1)^{T}$, with the lower component equal to unity. Then, as can easily be shown for the massless Dirac pseudo-fermion model [47], the absolute value of the upper component $|\psi(\vec{q})|$ does not depend upon the wave vector $\vec{q}$, demonstrating the equivalence of all
---PAGE_BREAK---

directions in $\vec{q}$ space. We construct $|\psi(\vec{q})|^2$ for Equation (9) in the $q^4$-approximation for the exchange. The results are shown in Figure 2c. The isotropy of $|\psi(\vec{q})|^2$ is broken in our model due to the appearance of preferred directions in the momentum space.

As one can see from Figure 2c, the existence of almost one-dimensional regions with a sharp jump in $|\psi(\vec{q})|^2$ should lead to some anisotropy already in the configuration space of the carriers, which we consider a manifestation of nematicity.

The $q^4$-approximation for the exchange operator is of particular interest for systems with strong damping of quasi-particle excitations.

**Figure 2.** A splitting of Dirac cone replicas for graphene (a) and for a Pb monolayer (b); one of the six pairs of Weyl-like nodes (source and sink) is indicated; (c) the square of the absolute value of the upper spinor component $|\psi|^2$ of the $\vec{q}$-eigenstate in the 2D semimetal model, $\vec{q} = \vec{p} - \vec{K}_A$ (in color).

The second approximation of the exchange for which we can prove the harmonic origin of the proposed Majorana-like field is the model exchange with the full exponential factors taken into account, but with the phase difference between the $\pi(p_z)$-electron wavefunctions chosen to be identically zero (see Ref. [4] for details). A numerical simulation of $\omega^2(\vec{p})$ with this model exchange has been performed on a discrete lattice in the Brillouin zone.
It has been demonstrated that the operator $\omega^2(\vec{p})$ is always diagonal in this case.

Now, we perform the simulations with the exact expression for the exchange term.

In this general case, the exchange between a $\pi(p_z)$-electron and its three nearest $\pi(p_z)$-electrons has been calculated based on the method proposed in [4]. The band structure of the 2D semimetal has the form of a degenerate Dirac cone in the neighborhood of the Dirac point. The emergence of unfolding then leads to the appearance of replicas, and the further splitting of these replicas gives the octagonal symmetry of the problem, as one can see in Figure 3. Hyperbolic points (saddle points) are located between the nodes and at the apex of the Dirac cone (Van Hove singularities), as one can see in Figure 2a,b [3,48–50]. Therefore, a fractal-like set of Fermi arcs, shown in Figure 4, is formed in the absence of damping in the system. Contrary to the graphene case, the splitting of the Dirac bands for the Pb-monolayer occurs at sufficiently small $q$ and, therefore, can be observed experimentally. In addition, for the Pb-monolayer, there exist regions with huge numbers of Fermi arcs and, accordingly, regions with strong fluctuations of the antiferromagnetic ordering.

Thus, the secondary-quantized field described by Equation (9) represents a field whose quanta manifest themselves as Dirac pseudo-fermions at the apex of the Dirac cone and as Weyl-like particles at sufficiently large $q$ in the presence of damping in the system. For an ideal system ($\Im\, \epsilon(\vec{q}) = 0$), such behavior is similar to that of the mathematical pendulum in the vicinity of the separatrix [51,52].
---PAGE_BREAK---

**Figure 3.** A band structure in the graphene model with partial unfolding of the Dirac cone: real (a) and imaginary (b) parts of $\epsilon(\vec{q})$ in the range of high momenta; $\vec{q} = \vec{p} - \vec{K}_A$ (in color).
**Figure 4.** Density of the sets of Fermi arcs in the graphene (a) and Pb-monolayer (b) bands for values of momentum $q$ in the range $0 \le q/|\vec{K}_A| \le 10^{-4}$; $\vec{q} = \vec{p} - \vec{K}_A$.

## 5. Discussion

Discussing the obtained results, we have to point out, firstly, that the excitations of the constructed secondary-quantized pseudo-fermionic field are Majorana-like massless quasiparticles.

The set of Fermi arcs in our model shows that the splitting of the Dirac replicas into a huge number of Weyl-like states occurs everywhere in the momentum space except at the Dirac cone apex.

In contrast to the known massless Dirac and Weyl models, in the proposed model there is a partial lifting of the degeneracy of the Dirac cone, and the octagonal symmetry of the bands emerges for sufficiently large $q$. Thus, Majorana particles in our model can be represented as a wave packet of an infinitely large number of Weyl-like states.

Secondly, the Dirac cone of the proposed 2D-semimetal model is degenerate in a very small neighborhood of the Dirac point $K_A$ ($K_B$) at $q \to 0$.

Thirdly, the first approximation with damping demonstrates that a sufficiently strong decay leads to a decrease in the number of Weyl states and to the formation of bands with hexagonal symmetry. In accordance with the obtained results, in a system with strong damping only six pairs of Weyl nodes survive. In this case, each Dirac hole (electron) cone is surrounded by three electron (hole) bands relating to three Weyl pairs. Provided the lifetime of the Weyl-like states is sufficiently large (small but finite damping) to preserve the octagonal symmetry of the bands, each Dirac hole (electron) cone will be surrounded by four electron (hole) bands relating to four Weyl pairs.

Important features of the proposed model are that the fractal set of Fermi arcs manifests pseudospin fluctuations and that the phenomenon of nematicity is possible.
---PAGE_BREAK---

**6.
Conclusions**

In conclusion, contrary to the known Dirac and Weyl models, the constructed 2D-semimetal model allows one to describe, within a general formalism, the band structure of a wide class of existing strongly correlated semimetallic materials.

**Acknowledgments:** This work has been supported in part by Research Grant No. 2.1.01.1 within the Basic Research Program "Microcosm and Universe" of the Republic of Belarus.

**Author Contributions:** Both authors contributed equally to this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Grushevskaya, H.V.; Hurski, L.I. Coherent charge transport in strongly correlated electron systems: Negatively charged exciton. *Quantum Matter* **2015**, *4*, 384–386.

2. Fefferman, C.L.; Weinstein, M.I. Honeycomb lattice potentials and Dirac points. *J. Am. Math. Soc.* **2012**, *25*, 1169–1220.

3. Grushevskaya, H.V.; Krylov, G. Quantum field theory of graphene with dynamical partial symmetry breaking. *J. Mod. Phys.* **2014**, *5*, 984–994.

4. Grushevskaya, H.V.; Krylov, G. Semimetals with Fermi Velocity Affected by Exchange Interactions: Two Dimensional Majorana Charge Carriers. *J. Nonlinear Phenom. Complex Syst.* **2015**, *18*, 266–283.

5. Semenoff, G.W.; Sodano, P. Stretched quantum states emerging from a Majorana medium. *J. Phys. B: At. Mol. Opt. Phys.* **2007**, *40*, 1479–1488.

6. Nadj-Perge, S.; Drozdov, I.K.; Li, J.; Chen, H.; Jeon, S.; Seo, J.; MacDonald, A.H.; Bernevig, A.; Yazdani, A. Observation of Majorana fermions in ferromagnetic atomic chains on a superconductor. *Science* **2014**, *346*, 602–607.

7. Gerber, S.; Bartkowiak, M.; Gavilano, J.L.; Ressouche, E.; Egetenmeyer, N.; Niedermayer, C.; Bianchi, A.D.; Movshovich, R.; Bauer, E.D.; Thompson, J.D.; et al. Switching of magnetic domains reveals spatially inhomogeneous superconductivity. *Nat. Phys.* **2014**, *10*, 126–129.

8.
Shimojima, T.; Sakaguchi, F.; Ishizaka, K.; Ishida, Y.; Kiss, T.; Okawa, M.; Togashi, T.; Chen, C.-T.; Watanabe, S.; Arita, M.; et al. Orbital-independent superconducting gaps in iron-pnictides. *Science* **2011**, *332*, 564–567.

9. Davis, J.C.S.; Lee, D.-H. Concepts relating magnetic interactions, intertwined electronic orders, and strongly correlated superconductivity. *Proc. Natl. Acad. Sci. USA* **2013**, *110*, 17623–17630.

10. Borisenko, S.V.; Evtushinsky, D.V.; Liu, Z.-H.; Morozov, I.; Kappenberger, R.; Wurmehl, S.; Büchner, B.; Yaresko, A.N.; Kim, T.K.; Hoesch, M.; et al. Direct observation of spin-orbit coupling in iron-based superconductors. *Nat. Phys.* **2015**, doi:10.1038/nphys3594.

11. Hurski, L.I.; Grushevskaya, H.V.; Kalanda, N.A. Non-adiabatic paramagnetic model of pseudo-gap state in high-temperature cuprate superconductors. *Dokl. Nat. Acad. Sci. Belarus* **2010**, *54*, 55–62. (In Russian)

12. Diop, L.V.B.; Isnard, O.; Rodriguez-Carvajal, J. Ultrasharp magnetization steps in the antiferromagnetic itinerant-electron system $LaFe_{12}B_6$. *Phys. Rev. B* **2016**, *93*, 014440.

13. Kasahara, S.; Shi, H.J.; Hashimoto, K.; Tonegawa, S.; Mizukami, Y.; Shibauchi, T.; Sugimoto, K.; Fukuda, T.; Terashima, T.; Nevidomskyy, A.H.; et al. Electronic nematicity above the structural and superconducting transition in $BaFe_2(As_{1-x}P_x)_2$. *Nature* **2012**, *486*, 382–385.

14. Wang, Q.; Shen, Y.; Pan, B.; Hao, Y.; Ma, M.; Zhou, F.; Steffens, P.; Schmalzl, K.; Forrest, T.R.; Abdel-Hafiez, M.; et al. Strong interplay between stripe spin fluctuations, nematicity and superconductivity in FeSe. *Nat. Mater.* **2016**, *15*, 159–163.

15. Kushwaha, P.; Sunko, V.; Moll, Ph.J.W.; Bawden, L.; Riley, J.M.; Nandi, N.; Rosner, H.; Schmidt, M.P.; Arnold, F.; Hassinger, E.; et al. Nearly free electrons in a 5d delafossite oxide metal. *Sci. Adv.* **2015**, *1*, e1500692.

16. Lv, M.; Zhang, S.-C.
Dielectric function, Friedel oscillation and plasmons in Weyl semimetals. *Int. J. Mod. Phys. B* **2013**, *27*, 1350177.

17. Xu, S.-Y.; Belopolski, I.; Alidoust, N.; Neupane, M.; Bian, G.; Zhang, C.; Sankar, R.; Chang, G.; Yuan, Z.; Lee, C.-C.; et al. Discovery of a Weyl Fermion semimetal and topological Fermi arcs. *Science* **2015**, *349*, 613–617.
---PAGE_BREAK---

18. Lv, B.Q.; Xu, N.; Weng, H.M.; Ma, J.Z.; Richard, P.; Huang, X.C.; Zhao, L.X.; Chen, G.F.; Matt, C.E.; Bisti, F.; et al. Observation of Weyl nodes in TaAs. *Nat. Phys.* **2015**, *11*, 724–727.

19. Huang, S.-M.; Xu, S.-Y.; Belopolski, I.; Lee, C.-C.; Chang, G.; Wang, B.K.; Alidoust, N.; Bian, G.; Neupane, M.; Zhang, C.; et al. A Weyl Fermion semimetal with surface Fermi arcs in the transition metal monopnictide TaAs class. *Nat. Commun.* **2015**, *6*, 7373.

20. Tan, B.S.; Hsu, Y.-T.; Zeng, B.; Ciomaga Hatnean, M.; Harrison, N.; Zhu, Z.; Hartstein, M.; Kiourlappou, M.; Srivastava, A.; Johannes, M.D.; et al. Unconventional Fermi surface in an insulating state. *Science* **2015**, *349*, 287–290.

21. Falkovsky, L.A. Optical properties of graphene and IV-VI semiconductors. *Phys.-Uspekhi* **2008**, *51*, 887–897.

22. Novoselov, K.S.; Jiang, D.; Schedin, F.; Booth, T.J.; Khotkevich, V.V.; Morozov, S.V.; Geim, A.K. Two-dimensional atomic crystals. *Proc. Natl. Acad. Sci. USA* **2005**, *102*, 10451–10453.

23. Keldysh, L.V. Coulomb interaction in thin semiconductor and semimetal films. *Lett. J. Exper. Theor. Phys.* **1979**, *29*, 716–719.

24. Dora, B.; Gulacsi, M.; Sodano, P. Majorana zero modes in graphene with trigonal warping. *Phys. Status Solidi RRL* **2009**, *3*, 169–171.

25. Elias, D.C.; Gorbachev, R.V.; Mayorov, A.S.; Morozov, S.V.; Zhukov, A.A.; Blake, P.; Ponomarenko, L.A.; Grigorieva, I.V.; Novoselov, K.S.; Guinea, F.; et al. Dirac cones reshaped by interaction effects in suspended graphene. *Nat. Phys.* **2012**, *8*, 172.

26.
Du, X.; Skachko, I.; Barker, A.; Andrei, E.Y. Approaching ballistic transport in suspended graphene. *Nat. Nanotechnol.* **2008**, *3*, 491–495.

27. Cooper, D.R.; D'Anjou, B.; Ghattamaneni, N.A.; Harack, B.; Hilke, M.; Horth, A.; Majlis, N.; Massicotte, M.; Vandsburger, L.; Whiteway, E.; et al. Experimental Review of Graphene. *ISRN Condens. Matter Phys.* **2012**, *2012*, Article ID 501686.

28. San-Jose, P.; Lado, J.L.; Aguado, R.; Guinea, F.; Fernandez-Rossier, J. Majorana Zero Modes in Graphene. *Phys. Rev. X* **2015**, *5*, 041042.

29. Wang, J.R.; Liu, G.Z. Eliashberg theory of excitonic insulating transition in graphene. *J. Phys. Condens. Matter* **2011**, *23*, 155602.

30. Pessa, E. The Majorana Oscillator. *Electr. J. Theor. Phys.* **2006**, *3*, 285–292.

31. Majorana, E. Theory of Relativistic Particles with Arbitrary Intrinsic Moment. *Nuovo Cimento* **1932**, *9*, 335.

32. Peskin, M.E.; Schroeder, D.V. *An Introduction to Quantum Field Theory*; Addison-Wesley Publishing Company: Oxford, UK, 1995.

33. Simpao, V.A. Exact Solution of Majorana Equation via Heaviside Operational Ansatz. *Electr. J. Theor. Phys.* **2006**, *3*, 239–247.

34. Hainzl, C.; Lewin, M.; Sparber, C. Ground state properties of graphene in Hartree-Fock theory. *J. Math. Phys.* **2012**, *53*, 095220.

35. Grushevskaya, H.V.; Krylov, G.G. Charge Carriers Asymmetry and Energy Minigaps in Monolayer Graphene: Dirac-Hartree-Fock approach. *Int. J. Nonlinear Phenom. Complex Syst.* **2013**, *16*, 189–208.

36. Grushevskaya, H.V.; Krylov, G.G. Nanotechnology in the Security Systems, NATO Science for Peace and Security Series C: Environmental Security; Bonča, J., Kruchinin, S., Eds.; Springer: Dordrecht, The Netherlands, 2015; Chapter 3.

37. Grushevskaya, H.V.; Krylov, G.G. Electronic Structure and Transport in Graphene: Quasi-Relativistic Dirac-Hartree-Fock Self-Consistent Field Approximation. In *Graphene Science Handbook*. Vol.
3: Electrical and Optical Properties; Aliofkhazraei, M., Ali, N., Milne, W.I., Ozkan, C.S., Mitura, S., Gervasoni, J.L., Eds.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2016.

38. Gribov, V.N. *Quantum Electrodynamics*; R & C Dynamics: Izhevsk, Russia, 2001. (In Russian)

39. Fock, V.A. *Principles of Quantum Mechanics*; Science: Moscow, Russia, 1976. (In Russian)

40. Krylova, H.; Hursky, L. *Spin Polarization in Strong-Correlated Nanosystems*; LAP LAMBERT Academic Publishing, AV Akademikerverlag GmbH & Co.: Saarbrücken, Germany, 2013.

41. Semenoff, G.W. Condensed-matter simulation of a three-dimensional anomaly. *Phys. Rev. Lett.* **1984**, *53*, 2449.

42. Abergel, D.S.L.; Apalkov, V.; Berashevich, J.; Ziegler, K.; Chakraborty, T. Properties of graphene: A theoretical perspective. *Adv. Phys.* **2010**, *59*, 261.

43. Gusynin, V.P.; Sharapov, S.G.; Carbotte, J.P. AC Conductivity of Graphene: From Tight-binding model to 2 + 1-dimensional quantum electrodynamics. *Int. J. Mod. Phys. B* **2007**, *21*, 4611.

44. Wallace, P.R. The band theory of graphite. *Phys. Rev.* **1947**, *71*, 622–634.

45. Saito, R.; Dresselhaus, G.; Dresselhaus, M.S. *Physical Properties of Carbon Nanotubes*; Imperial: London, UK, 1998.

46. Reich, S.; Maultzsch, J.; Thomsen, C.; Ordejón, P. Tight-binding description of graphene. *Phys. Rev. B* **2002**, *66*, 035412.

47. Castro Neto, A.H.; Guinea, F.; Peres, N.M.; Novoselov, K.S.; Geim, A.K. The electronic properties of graphene. *Rev. Mod. Phys.* **2009**, *81*, 109.

48. Brihuega, I.; Mallet, P.; González-Herrero, H.; Trambly de Laissardière, G.; Ugeda, M.M.; Magaud, L.; Gomez-Rodríguez, J.M.; Ynduráin, F.; Veuillen, J.-Y. Unraveling the Intrinsic and Robust Nature of van Hove Singularities in Twisted Bilayer Graphene by Scanning Tunneling Microscopy and Theoretical Analysis. *Phys. Rev. Lett.* **2012**, *109*, 196802; Erratum in **2012**, *109*, 209905.

49.
Andrei, E.Y.; Li, G.; Du, X. Electronic properties of graphene: A perspective from scanning tunneling microscopy and magnetotransport. *Rep. Prog. Phys.* **2012**, *75*, 056501.

50. Grushevskaya, H.V.; Krylov, G.; Gaisyonok, V.A.; Serow, D.V. Symmetry of Model N = 3 for Graphene with Charged Pseudo-Excitons. *J. Nonlinear Phenom. Complex Syst.* **2015**, *18*, 81–98.

51. Zaslavsky, G.M.; Sagdeev, R.Z.; Usikov, D.A.; Chernikov, A.A. *Weak Chaos and Quasi-Regular Patterns*; Cambridge University Press: New York, NY, USA, 1991.

52. Guckenheimer, J.; Holmes, P. *Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*; Springer-Verlag: New York, NY, USA, 1990; Volume 42.

© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Chapter 3: Papers Published by This Issue Editor in Symmetry

Article

# Lorentz Harmonics, Squeeze Harmonics and Their Physical Applications

Young S. Kim ¹,* and Marilyn E. Noz ²

¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA

² Department of Radiology, New York University, New York, NY 10016, USA

* E-Mail: yskim@umd.edu; Tel.: 301-405-6024.

Received: 6 January 2011; in revised form: 7 February 2011 / Accepted: 11 February 2011 / Published: 14 February 2011

**Abstract:** Among the symmetries in physics, the rotation symmetry is most familiar to us. It is known that the spherical harmonics serve useful purposes when the world is rotated. Squeeze transformations are also becoming more prominent in physics, particularly in optical sciences and in high-energy physics. As can be seen from Dirac's light-cone coordinate system, Lorentz boosts are squeeze transformations.
Thus the squeeze transformation is one of the fundamental transformations in Einstein's Lorentz-covariant world. It is possible to define a complete set of orthonormal functions for a given Lorentz frame. It is shown that the same set can be used for other Lorentz frames. Transformation properties are discussed. Physical applications are discussed in both optics and high-energy physics. It is shown that the Lorentz harmonics provide the mathematical basis for squeezed states of light. It is shown also that the same set of harmonics can be used for understanding Lorentz-boosted hadrons in high-energy physics. It is thus possible to transmit physics from one branch of physics to the other branch using the mathematical basis common to them.

**Keywords:** Lorentz harmonics; relativistic quantum mechanics; squeeze transformation; Dirac's efforts; hidden variables; Lorentz-covariant bound states; squeezed states of light

Classification: PACS 03.65.Ge, 03.65.Pm

# 1. Introduction

In this paper, we are concerned with symmetry transformations in two dimensions, where we are accustomed to the coordinate system specified by the x and y variables. On the xy plane, we know how to make rotations and translations. The rotation in the xy plane is performed by the matrix algebra

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (1) $$

but we are not yet familiar with

$$ \begin{pmatrix} z' \\ t' \end{pmatrix} = \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} z \\ t \end{pmatrix} \qquad (2) $$

We see this form when we learn Lorentz transformations, but there is a tendency in the literature to avoid this form, especially in high-energy physics.
Since this transformation can also be written as

$$ \begin{pmatrix} u' \\ v' \end{pmatrix} = \begin{pmatrix} \exp(\eta) & 0 \\ 0 & \exp(-\eta) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} \qquad (3) $$

with

$$ u = \frac{z+t}{\sqrt{2}}, \quad v = \frac{z-t}{\sqrt{2}} \qquad (4) $$

where the variables *u* and *v* are expanded and contracted respectively, we call Equation (2) or Equation (3) **squeeze transformations** [1].

From the mathematical point of view, the symplectic group $Sp(2)$ contains both the rotation and squeeze transformations of Equations (1) and (2), and its mathematical properties have been extensively discussed in the literature [1,2]. This group has been shown to be one of the essential tools in quantum optics. From the mathematical point of view, the squeezed state in quantum optics is a harmonic oscillator representation of this $Sp(2)$ group [1].

In this paper, we are interested in "squeeze transformations" of localized functions. We are quite familiar with the role of spherical harmonics in three-dimensional rotations. There we use the same set of harmonics, and the rotated function is simply a different linear combination of those harmonics. Likewise, we are interested in a complete set of functions which will serve the same purpose for squeeze transformations. It will be shown that harmonic oscillator wave functions can serve the desired purpose. From the physical point of view, squeezed states define the squeeze or Lorentz harmonics.

In 2003, Giedke et al. used the Gaussian function to discuss the entanglement problems in information theory [3]. Their work allows us to use the oscillator wave functions to address many interesting current issues in quantum optics and information theory. In 2005, the present authors noted that the formalism of Lorentz-covariant harmonic oscillators leads to a space-time entanglement [4].
We developed the oscillator formalism to deal with hadronic phenomena observed in high-energy laboratories [5]. It is remarkable that the mathematical formalism of Giedke et al. is identical with that of our oscillator formalism.

While quantum optics or information theory is a relatively new branch of physics, the squeeze transformation has been the backbone of Einstein's special relativity. While Lorentz, Poincaré, and Einstein used the transformation of Equation (2) for Lorentz boosts, Dirac observed that the same equation can be written in the form of Equation (3) [6]. Unfortunately, this squeeze aspect of Lorentz boosts has not been fully addressed in high-energy physics dealing with particles moving with relativistic speeds.

Thus, we can call the same set of functions "squeeze harmonics" and "Lorentz harmonics" in quantum optics and high-energy physics respectively. This allows us to translate the physics of quantum optics or information theory into that of high-energy physics.

The physics of high-energy hadrons requires a Lorentz-covariant localized quantum system. This description requires one variable which is hidden in the present form of quantum mechanics. It is the time-separation variable between two constituent particles in a quantum bound system like the hydrogen atom, where the Bohr radius measures the separation between the proton and the electron. What happens to this quantity when the hydrogen atom is boosted and the time-separation variable starts playing its role? The Lorentz harmonics will allow us to address this question.

In Section 2, it is noted that the Lorentz boost of localized wave functions can be described in terms of one-dimensional harmonic oscillators. Thus, those wave functions constitute the Lorentz harmonics. It is also noted that the Lorentz boost is a squeeze transformation.
In Section 3, we examine Dirac's life-long efforts to make quantum mechanics consistent with special relativity, and present a Lorentz-covariant form of bound-state quantum mechanics. In Section 4, we construct a set of Lorentz-covariant harmonic oscillator wave functions, and show that they can be given a Lorentz-covariant probability interpretation.

In Section 5, the formalism is shown to constitute a mathematical basis for squeezed states of light, and for quantum entangled states. In Section 6, this formalism can serve as the language for Feynman's rest of the universe [7]. Finally, in Section 7, we show that the harmonic oscillator formalism can be applied to high-energy hadronic physics, and what we observe there can be interpreted in terms of what we learn from quantum optics.

## 2. Lorentz or Squeeze Harmonics

Let us start with the two-dimensional plane. We are quite familiar with rigid transformations such as rotations and translations in two-dimensional space. Things are different for non-rigid transformations such as a circle becoming an ellipse.

We start with the well-known one-dimensional harmonic oscillator eigenvalue equation

$$ \frac{1}{2} \left[ -\left(\frac{\partial}{\partial x}\right)^2 + x^2 \right] \chi_n(x) = \left(n + \frac{1}{2}\right) \chi_n(x) \quad (5) $$

For a given value of integer $n$, the solution takes the form

$$ \chi_n(x) = \left[ \frac{1}{\sqrt{\pi}\, 2^n n!} \right]^{1/2} H_n(x) \exp \left( -\frac{x^2}{2} \right) \quad (6) $$

where $H_n(x)$ is the Hermite polynomial of the n-th degree. We can then consider a set of functions with all integer values of $n$.
They satisfy the orthogonality relation

$$ \int \chi_n(x) \chi_{n'}(x)\, dx = \delta_{nn'} \quad (7) $$

This relation allows us to expand a function $f(x)$ as

$$ f(x) = \sum_{n} A_{n} \chi_{n}(x) \quad (8) $$

with

$$ A_n = \int f(x)\chi_n(x)\,dx \quad (9) $$

Let us next consider another variable added to Equation (5), and the differential equation

$$ \frac{1}{2} \left\{ \left[ -\left(\frac{\partial}{\partial x}\right)^2 + x^2 \right] + \left[ -\left(\frac{\partial}{\partial y}\right)^2 + y^2 \right] \right\} \phi(x,y) = \lambda \phi(x,y) \quad (10) $$

This equation can be re-arranged to

$$ \frac{1}{2} \left\{ -\left(\frac{\partial}{\partial x}\right)^2 - \left(\frac{\partial}{\partial y}\right)^2 + x^2 + y^2 \right\} \phi(x,y) = \lambda \phi(x,y) \quad (11) $$

This differential equation is invariant under the rotation defined in Equation (1). In terms of the polar coordinate system with

$$ r = \sqrt{x^2 + y^2}, \qquad \tan \theta = \left(\frac{y}{x}\right) \quad (12) $$

this equation can be written:

$$ \frac{1}{2} \left\{ -\frac{\partial^2}{\partial r^2} - \frac{1}{r} \frac{\partial}{\partial r} - \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + r^2 \right\} \phi(r, \theta) = \lambda \phi(r, \theta) \quad (13) $$

and the solution takes the form

$$ \phi(r, \theta) = e^{-r^2/2} R_{n,m}(r) \{ A_m \cos(m\theta) + B_m \sin(m\theta) \} \quad (14) $$

The radial equation should satisfy

$$ \frac{1}{2} \left\{ -\frac{\partial^2}{\partial r^2} - \frac{1}{r} \frac{\partial}{\partial r} + \frac{m^2}{r^2} + r^2 \right\} R_{n,m}(r) = (n+m+1)R_{n,m}(r) \quad (15) $$

In the polar form of Equation (14), we can achieve the rotation of this function by changing the angle variable $\theta$.

On the other hand, the differential equation of Equation (10) is separable in the x and y variables.
The eigen solution takes the form

$$ \Phi_{n_x, n_y}(x, y) = \chi_{n_x}(x) \chi_{n_y}(y) \quad (16) $$

with

$$ \lambda = n_x + n_y + 1 \quad (17) $$

If a function $f(x,y)$ is sufficiently localized around the origin, it can be expanded as

$$ f(x,y) = \sum_{n_x, n_y} A_{n_x, n_y} \chi_{n_x}(x) \chi_{n_y}(y) \qquad (18) $$

with

$$ A_{n_x, n_y} = \int f(x,y)\chi_{n_x}(x)\chi_{n_y}(y)\, dx\, dy \quad (19) $$

If we rotate $f(x,y)$ according to Equation (1), it becomes $f(x^*, y^*)$, with

$$ x^* = (\cos \theta)x - (\sin \theta)y, \quad y^* = (\sin \theta)x + (\cos \theta)y \quad (20) $$

This rotated function can also be expanded in terms of $\chi_{n_x}(x)$ and $\chi_{n_y}(y)$:

$$ f(x^*, y^*) = \sum_{n_x, n_y} A_{n_x, n_y}^* \chi_{n_x}(x) \chi_{n_y}(y) \quad (21) $$

with

$$ A_{n_x, n_y}^* = \int f(x^*, y^*) \chi_{n_x}(x) \chi_{n_y}(y)\, dx\, dy \quad (22) $$

Next, let us consider the differential equation

$$ \frac{1}{2} \left\{ -\left(\frac{\partial}{\partial z}\right)^2 + \left(\frac{\partial}{\partial t}\right)^2 + z^2 - t^2 \right\} \psi(z,t) = \lambda \psi(z,t) \quad (23) $$

Here we use the variables *z* and *t*, instead of *x* and *y*. Clearly, this equation can also be separated in the *z* and *t* coordinates, and the eigen solution can be written as

$$ \psi_{n_z, n_t}(z,t) = \chi_{n_z}(z)\,\chi_{n_t}(t) \quad (24) $$

with

$$ \lambda = n_z - n_t \quad (25) $$

The oscillator equation is not invariant under coordinate rotations of the type given in Equation (1). It is however invariant under the squeeze transformation given in Equation (2).

In terms of the light-cone variables of Equation (4), the differential equation of Equation (23) becomes

$$ \left\{ -\frac{\partial^2}{\partial u\, \partial v} + uv \right\} \psi(u, v) = \lambda \psi(u, v) \quad (26) $$

Both Equation (11) and Equation (23) are two-dimensional differential equations.
They are invariant under rotations and squeeze transformations respectively. They take convenient forms in the polar and squeeze coordinate systems respectively, as shown in Equation (13) and Equation (26).

The solutions of the rotation-invariant equation are well known, but the solutions of the squeeze-invariant equation are still strange to the physics community. Fortunately, both equations are separable in the Cartesian coordinate system. This allows us to study the latter in terms of the familiar rotation-invariant equation. This means that if the solution is sufficiently localized in the z and t plane, it can be written as

$$ \psi(z, t) = \sum_{n_z, n_t} A_{n_z, n_t} \chi_{n_z}(z) \chi_{n_t}(t) \quad (27) $$

with

$$ A_{n_z, n_t} = \int \psi(z,t) \chi_{n_z}(z) \chi_{n_t}(t) \, dz \, dt \quad (28) $$

If we squeeze the coordinates according to Equation (2),

$$ \psi(z^*, t^*) = \sum_{n_z, n_t} A_{n_z, n_t}^* \chi_{n_z}(z) \chi_{n_t}(t) \quad (29) $$

with

$$ A_{n_z, n_t}^* = \int \psi(z^*, t^*) \chi_{n_z}(z) \chi_{n_t}(t) \, dz \, dt \quad (30) $$

Here again both the original and transformed wave functions are linear combinations of the wave functions for the one-dimensional harmonic oscillator given in Equation (6).

The wave functions for the one-dimensional oscillator are well known, and they play important roles in many branches of physics. It is gratifying to note that they could play an essential role in squeeze transformations and Lorentz boosts; see Table 1. We choose to call them Lorentz harmonics or squeeze harmonics.

**Table 1.** Cylindrical and hyperbolic equations. The cylindrical equation is invariant under rotation, while the hyperbolic equation is invariant under the squeeze transformation.

| Equation | Invariant under | Eigenvalue |
| --- | --- | --- |
| Cylindrical | Rotation | $\lambda = n_x + n_y + 1$ |
| Hyperbolic | Squeeze | $\lambda = n_z - n_t$ |
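The expansion of Equations (27)–(30) can be checked numerically. Below is a minimal sketch (not from the paper) using numpy: it builds the oscillator wave functions of Equation (6), verifies the orthogonality relation of Equation (7) by quadrature, and projects a squeezed ground-state Gaussian onto the product basis. The grid sizes and the rapidity value are arbitrary test choices.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi, tanh, cosh

def chi(n, x):
    """One-dimensional oscillator wave function of Equation (6)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # selects the Hermite polynomial H_n
    norm = 1.0 / sqrt(sqrt(pi) * 2.0**n * factorial(n))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2.0)

# Orthogonality, Equation (7), on a uniform grid; the integrands decay
# fast enough that a plain Riemann sum is accurate.
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
assert abs(np.sum(chi(3, x) * chi(3, x)) * dx - 1.0) < 1e-6
assert abs(np.sum(chi(3, x) * chi(5, x)) * dx) < 1e-6

# Squeeze a ground-state Gaussian as in Equation (29) and project it
# back onto the chi_{n_z}(z) chi_{n_t}(t) basis, Equation (30).
eta = 0.6                                # arbitrary test rapidity
z = np.linspace(-6.0, 6.0, 601)
dz = z[1] - z[0]
Z, T = np.meshgrid(z, z, indexing="ij")
U, V = (Z + T) / sqrt(2.0), (Z - T) / sqrt(2.0)      # Equation (4)
psi = np.exp(-(np.exp(-2*eta)*U**2 + np.exp(2*eta)*V**2) / 2.0) / sqrt(pi)

def coeff(nz, nt):
    """Expansion coefficient A*_{n_z, n_t} of Equation (30)."""
    return float(np.sum(psi * np.outer(chi(nz, z), chi(nt, z))) * dz * dz)

# Only the diagonal coefficients survive, and they form the geometric
# series (tanh eta)^k / cosh eta.
for k in range(4):
    assert abs(coeff(k, k) - tanh(eta)**k / cosh(eta)) < 1e-3
assert abs(coeff(1, 0)) < 1e-3
```

The diagonal pattern of the recovered coefficients is the numerical face of the expansion discussed in Section 4.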
## 3. The Physical Origin of Squeeze Transformations

Paul A. M. Dirac made it his life-long effort to combine quantum mechanics with special relativity. We examine the following four of his papers.

* In 1927 [8], Dirac pointed out that the time-energy uncertainty relation should be taken into consideration in efforts to combine quantum mechanics and special relativity.

* In 1945 [9], Dirac considered four-dimensional harmonic oscillator wave functions with

$$ \exp\left\{-\frac{1}{2}\left(x^2 + y^2 + z^2 + t^2\right)\right\} \qquad (31) $$

and noted that this form is not Lorentz-covariant.

* In 1949 [6], Dirac introduced the light-cone variables of Equation (4). He also noted that the construction of a Lorentz-covariant quantum mechanics is equivalent to the construction of a representation of the Poincaré group.

* In 1963 [10], Dirac constructed a representation of the (3 + 2) de Sitter group using two harmonic oscillators. This de Sitter group contains three (3 + 1) Lorentz groups as its subgroups.

In each of these papers, Dirac presented the original ingredients which can serve as building blocks for making quantum mechanics relativistic. We combine those elements using Wigner's little groups [11] and Feynman's observation of high-energy physics [12–14].

First of all, let us combine Dirac's 1945 paper and his light-cone coordinate system given in his 1949 paper. Since the x and y variables are not affected by Lorentz boosts along the z direction in Equation (31), it is sufficient to study the Gaussian form

$$ \exp\left\{-\frac{1}{2}(z^2 + t^2)\right\} \qquad (32) $$

This form is certainly not invariant under Lorentz boosts, as Dirac noted. On the other hand, it can be written as

$$ \exp\left\{-\frac{1}{2}(u^2 + v^2)\right\} \qquad (33) $$

where *u* and *v* are the light-cone variables defined in Equation (4).
If we make the Lorentz boost, or Lorentz squeeze, according to Equation (3), this Gaussian form becomes

$$ \exp\left\{-\frac{1}{2}\left(e^{-2\eta}u^2 + e^{2\eta}v^2\right)\right\} \qquad (34) $$

If we write the Lorentz boost as

$$ z' = \frac{z + \beta t}{\sqrt{1 - \beta^2}} \qquad t' = \frac{t + \beta z}{\sqrt{1 - \beta^2}} \qquad (35) $$

where $\beta$ is the velocity parameter $v/c$, then $\beta$ is related to $\eta$ by

$$ \beta = \tanh(\eta) \qquad (36) $$

Let us go back to the Gaussian form of Equation (32). This expression is consistent with Dirac's earlier paper on the time-energy uncertainty relation [8]. According to Dirac, this is a c-number uncertainty relation without excitations. The existence of the time-energy uncertainty is illustrated in the first part of Figure 1.

In his 1927 paper, Dirac noted the space-time asymmetry in uncertainty relations. While there are no time-like excitations, quantum mechanics allows excitations along the z direction. How can we take care of this problem?

If we suppress the excitations along the *t* coordinate, the normalized solution of this differential equation, Equation (24), is

$$ \psi(z,t) = \left( \frac{1}{\pi 2^n n!} \right)^{1/2} H_n(z) \exp \left\{ - \left( \frac{z^2 + t^2}{2} \right) \right\} \qquad (37) $$

**Figure 1.** Space-time picture of quantum mechanics. In his 1927 paper, Dirac noted that there is a c-number time-energy uncertainty relation, in addition to Heisenberg's position-momentum uncertainty relations with quantum excitations. This idea is illustrated in the first figure (upper left). In his 1949 paper, Dirac produced his light-cone coordinate system as illustrated in the second figure (upper right). It is then not difficult to produce the third figure, for a Lorentz-covariant picture of quantum mechanics. This Lorentz-squeeze property is observed in high-energy laboratories through Feynman's parton picture discussed in Section 7.
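The equivalence of the matrix forms in Equations (2), (3), (35) and (36) can be verified directly. A short numpy sketch, with an arbitrary test value of the rapidity:

```python
import numpy as np

eta = 0.8                      # arbitrary test rapidity
beta = np.tanh(eta)            # Equation (36)

# Boost matrix of Equation (2) in the (z, t) coordinates
B = np.array([[np.cosh(eta), np.sinh(eta)],
              [np.sinh(eta), np.cosh(eta)]])

# The same boost written with the velocity parameter, Equation (35)
B_beta = np.array([[1.0, beta],
                   [beta, 1.0]]) / np.sqrt(1.0 - beta**2)
assert np.allclose(B, B_beta)

# Transformation to the light-cone variables of Equation (4)
T = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# In the (u, v) variables the boost is the diagonal squeeze of Equation (3)
S = T @ B @ np.linalg.inv(T)
assert np.allclose(S, np.diag([np.exp(eta), np.exp(-eta)]))
```

The boost matrix is diagonal in the light-cone variables, which is exactly the sense in which a Lorentz boost is a squeeze.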
If we boost the coordinate system, the Lorentz-boosted wave functions should take the form

$$ \begin{aligned} \psi_{\eta}^{n}(z, t) = & \left( \frac{1}{\pi 2^{n} n!} \right)^{1/2} H_n \left( z \cosh \eta - t \sinh \eta \right) \\ & \times \exp \left\{ - \left[ \frac{( \cosh 2\eta )(z^2 + t^2) - 2( \sinh 2\eta )zt }{2} \right] \right\} \end{aligned} \quad (38) $$

These are the solutions of the phenomenological equation of Feynman *et al.* [12] for the internal motion of the quarks inside a hadron. In 1971, Feynman *et al.* wrote down a Lorentz-invariant differential equation of the form

$$ \frac{1}{2} \left\{ - \left( \frac{\partial}{\partial x_{\mu}} \right)^2 + x_{\mu}^2 \right\} \psi(x_{\mu}) = (\lambda + 1) \psi(x_{\mu}) \quad (39) $$

where $x_\mu$ is the Lorentz-covariant space-time four-vector. This oscillator equation is separable in the Cartesian coordinate system, and the transverse components can be separated out. Thus, the differential equation of Equation (23) contains the essential element of the Lorentz-invariant Equation (39).

However, the solutions contained in Reference [12] are not normalizable and therefore cannot carry physical interpretations. It was shown later that there are normalizable solutions which constitute a representation of Wigner's O(3)-like little group [5,11,15]. The O(3) group is the three-dimensional rotation group without a time-like direction or time-like excitations. This addresses Dirac's concern about the space-time asymmetry in uncertainty relations [8]. Indeed, the expression of Equation (37) is considered to be the representation of Wigner's little group for quantum bound states [11,15]. We shall return to more physical questions in Section 7.

## 4. Further Properties of the Lorentz Harmonics

Let us continue our discussion of quantum bound states using harmonic oscillators. We are interested in this section in how the oscillator solution of Equation (37) would appear to a moving observer.
The variables *z* and *t* are the longitudinal and time-like separations between the two constituent particles. In terms of the light-cone variables defined in Equation (4), the solution of Equation (37) takes the form

$$ \psi_0^n(z, t) = \left[ \frac{1}{\pi n! 2^n} \right]^{1/2} H_n \left( \frac{u+v}{\sqrt{2}} \right) \exp \left\{ - \left( \frac{u^2 + v^2}{2} \right) \right\} \quad (40) $$

and

$$ \psi_{\eta}^{n}(z,t) = \left[ \frac{1}{\pi n! 2^n} \right]^{1/2} H_n \left( \frac{e^{-\eta} u + e^{\eta} v}{\sqrt{2}} \right) \exp \left\{ - \left( \frac{e^{-2\eta} u^2 + e^{2\eta} v^2}{2} \right) \right\} \quad (41) $$

for the rest and moving hadrons respectively.

It is mathematically possible to expand this as [5,16]

$$ \psi_{\eta}^{n}(z, t) = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^{k} \chi_{n+k}(z) \chi_{k}(t) \quad (42) $$

where $\chi_n(z)$ is the $n$-th excited-state oscillator wave function, which takes the familiar form

$$ \chi_n(z) = \left[ \frac{1}{\sqrt{\pi}\, 2^n n!} \right]^{1/2} H_n(z) \exp \left( -\frac{z^2}{2} \right) \qquad (43) $$

as given in Equation (6). This is an expansion of the Lorentz-boosted wave function in terms of the Lorentz harmonics.

If the hadron is at rest, there are no time-like oscillations. There are time-like oscillations for a moving hadron. This is the way in which the space and time variables mix covariantly. This also provides a resolution of the space-time asymmetry pointed out by Dirac in his 1927 paper [8]. We shall return to this question in Section 6. Our next question is whether those oscillator equations can be given a probability interpretation.

Even though we suppressed the excitations along the *t* direction in the hadronic rest frame, it is an interesting mathematical problem to start with the oscillator wave function with an excited state in the time variable.
This problem was addressed by Rotbart in 1981 [17].

## 4.1. Lorentz-Invariant Orthogonality Relations

Let us consider two wave functions $\psi_\eta^n(z, t)$. If the two covariant wave functions are in the same Lorentz frame, and thus have the same value of $\eta$, the orthogonality relation

$$ (\psi_{\eta}^{n'}, \psi_{\eta}^{n}) = \delta_{nn'} \quad (44) $$

is satisfied.

If those two wave functions have different values of $\eta$, we have to start with

$$ (\psi_{\eta'}^{n'}, \psi_{\eta}^{n}) = \int (\psi_{\eta'}^{n'}(z,t))^* \psi_{\eta}^{n}(z,t)\, dz\, dt \quad (45) $$

Without loss of generality, we can work in the Lorentz frame where $\eta' = 0$, and evaluate the integral. The result is [18]

$$ (\psi_0^{n'}, \psi_\eta^n) = \int (\psi_0^{n'}(z,t))^* \psi_\eta^n(z,t)\, dz\, dt = (\sqrt{1-\beta^2})^{(n+1)} \delta_{n,n'} \quad (46) $$

where $\beta = \tanh(\eta)$, as given in Equation (36). This is like the Lorentz-contraction property of a rigid rod. The ground state is like a single rod. Since we obtain the first excited state by applying a step-up operator, this state should behave like a multiplication of two rods, and a similar argument can be given for *n* rigid rods. This is illustrated in Figure 2.

**Figure 2.** Orthogonality relations for the covariant harmonic oscillators. The orthogonality remains invariant. For the two wave functions in the orthogonality integral, the result is zero if they have different values of *n*. If both wave functions have the same value of *n*, the integral shows the Lorentz-contraction property.

With these orthogonality properties, it is possible to give a quantum probability interpretation in the Lorentz-covariant world, and it was so stated in our 1977 paper [19].

## 4.2. Probability Interpretations

Let us study the probability issue in terms of the one-dimensional oscillator solution of Equation (6), whose probability interpretation is indisputable.
Let us also go back to the rotationally invariant differential equation of Equation (11). Then the product

$$ \chi_{n_x}(x) \chi_{n_y}(y) \quad (47) $$

also has a probability interpretation, with the eigenvalue $(n_x + n_y + 1)$. Thus the series of the form [1,5]

$$ \phi_{\eta}^{n}(x, y) = \left( \frac{1}{\cosh \eta} \right)^{(n+1)} \sum_{k} \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k \chi_{n+k}(x) \chi_k(y) \quad (48) $$

also has its probability interpretation, but it is not in an eigen state. Each term $\chi_{n+k}(x)\chi_k(y)$ in this series has the eigenvalue $(n + 2k + 1)$. The expectation value of Equation (11) is

$$ \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_k \frac{(n+2k+1)(n+k)!}{n!k!} (\tanh \eta)^{2k} \quad (49) $$

If we replace the variables *x* and *y* by *z* and *t* respectively in the above expression of Equation (48), it becomes the Lorentz-covariant wave function of Equation (42). Each term $\chi_{n+k}(z)\chi_k(t)$ in the series has the eigenvalue *n*. Thus the series is in the eigen state with the eigenvalue *n*.

This difference does not prevent us from importing the probability interpretation from that of Equation (48).

In the present covariant oscillator formalism, the time-separation variable can be separated from the rest of the wave function, and does not require further interpretation. For a moving hadron, time-like excitations are mixed with longitudinal excitations. Is it possible to give a physical interpretation to those time-like excitations? To address this issue, we shall study in Section 5 two-mode squeezed states, also based on the mathematics of Equation (48). There, both variables have their physical interpretations.

## 5. Two-Mode Squeezed States

Harmonic oscillators play the central role also in quantum optics. There the $n^{th}$ excited oscillator state corresponds to the *n*-photon state $|n\rangle$. The ground state means the zero-photon or vacuum state $|0\rangle$.
The single-photon coherent state can be written as

$$ |\alpha\rangle = e^{-\alpha \alpha^*/2} \sum_n \frac{\alpha^n}{\sqrt{n!}} |n\rangle \quad (50) $$

which can be written as [1]

$$ |\alpha\rangle = e^{-\alpha \alpha^*/2} \sum_n \frac{\alpha^n}{n!} (\hat{a}^\dagger)^n |0\rangle = \left\{e^{-\alpha \alpha^*/2}\right\} \exp\{\alpha \hat{a}^\dagger\} |0\rangle \quad (51) $$

This aspect of the single-photon coherent state is well known. Here we are dealing with one kind of photon, namely with a given momentum and polarization. The state $|n\rangle$ means there are $n$ photons of this kind.

Let us next consider a state of two kinds of photons, and write $|n_1, n_2\rangle$ as the state of $n_1$ photons of the first kind, and $n_2$ photons of the second kind [20]. We can then consider the form

$$ \frac{1}{\cosh \eta} \exp \{(\tanh \eta) \hat{a}_1^\dagger \hat{a}_2^\dagger\} |0, 0\rangle \quad (52) $$

The operator $\hat{a}_1^\dagger \hat{a}_2^\dagger$ was studied by Dirac in connection with his representation of the de Sitter group, as we mentioned in Section 3. After making a Taylor expansion of Equation (52), we arrive at

$$ \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k |k, k\rangle \quad (53) $$

which is the squeezed vacuum state or two-photon coherent state [1,20]. This expression is the wave function of Equation (48) in a different notation. This form is also called the entangled Gaussian state of two photons [3] or the entangled oscillator state of space and time [4].

If we start with the *n*-photon state of the first kind, we obtain

$$ \begin{aligned} & \left[ \frac{1}{\cosh \eta} \right]^{(n+1)} \exp \left\{ (\tanh \eta) \hat{a}_1^\dagger \hat{a}_2^\dagger \right\} |n, 0\rangle \\ &= \left[ \frac{1}{\cosh \eta} \right]^{(n+1)} \sum_k \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k |k+n, k\rangle \end{aligned} \quad (54) $$

which is the wave function of Equation (42) in a different notation. This is the *n*-photon squeezed state [1].
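As a quick consistency check on Equation (54) (a sketch, not from the paper): the coefficients must square-sum to unity, which follows from the binomial series $\sum_k \binom{n+k}{k} x^k = (1-x)^{-(n+1)}$ with $x = \tanh^2\eta$. The test values of $n$ and $\eta$ below are arbitrary.

```python
from math import comb, tanh, cosh

def squeezed_state_norm(n, eta, kmax=400):
    """Squared norm of the n-photon squeezed state, Equation (54):
    (1/cosh eta)^{2(n+1)} * sum_k [(n+k)!/(n! k!)] (tanh eta)^{2k}."""
    lam = tanh(eta) ** 2
    s = sum(comb(n + k, k) * lam**k for k in range(kmax))
    return cosh(eta) ** (-2 * (n + 1)) * s

# The binomial series with x = tanh^2(eta) gives cosh(eta)^{2(n+1)},
# so the state stays normalized for every boost.
for n in range(4):
    assert abs(squeezed_state_norm(n, 0.8) - 1.0) < 1e-10
```

The same sum reappears below when the trace of the reduced density matrix is computed.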
Since the two-mode squeezed state and the covariant harmonic oscillators share the same set of mathematical formulas, it is possible to transmit physical interpretations from one to the other. For the two-mode squeezed state, both photons carry physical interpretations, while the interpretation is yet to be given to the time-separation variable in the covariant oscillator formalism. It is clear from Equation (42) and Equation (54) that the time-like excitations are like the second-photon states.

What would happen if the second photon is not observed? This interesting problem was addressed by Yurke and Potasek [21] and by Ekert and Knight [22]. They used the density matrix formalism and integrated out the second-photon states. This increases the entropy and temperature of the system. We choose not to reproduce their mathematics, because we will be presenting the same mathematics in Section 6.

## 6. Time-Separation Variable in Feynman's Rest of the Universe

As was noted in the previous section, the time-separation variable has an important role in the covariant formulation of the harmonic oscillator wave functions. It should exist wherever the space separation exists. The Bohr radius is the measure of the separation between the proton and the electron in the hydrogen atom. If this atom moves, the radius picks up the time separation, according to Einstein [23].

On the other hand, the present form of quantum mechanics does not include this time-separation variable. The best way we can interpret it at the present time is to treat this time-separation as a variable in Feynman's rest of the universe [24]. In his book on statistical mechanics [7], Feynman states

> When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe.
> To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system.

The failure to include what happens outside the system results in an increase of entropy. The entropy is a measure of our ignorance and is computed from the density matrix [25]. The density matrix is needed when the experimental procedure does not analyze all relevant variables to the maximum extent consistent with quantum mechanics [26]. If we do not take into account the time-separation variable, the result is an increase in entropy [27,28].

For the covariant oscillator wave functions defined in Equation (42), the pure-state density matrix is

$$ \rho_{\eta}^{n}(z, t; z', t') = \psi_{\eta}^{n}(z, t) \left[\psi_{\eta}^{n}(z', t')\right]^* \quad (55) $$

which satisfies the condition $\rho^2 = \rho$:

$$ \rho_{\eta}^{n}(z, t; z', t') = \int \rho_{\eta}^{n}(z, t; z'', t'') \rho_{\eta}^{n}(z'', t''; z', t') dz'' dt'' \quad (56) $$

However, in the present form of quantum mechanics, it is not possible to take into account the time-separation variable. Thus, we have to take the trace of the matrix with respect to the $t$ variable. The resulting density matrix is

$$ \begin{aligned} \rho_{\eta}^{n}(z, z') &= \int \psi_{\eta}^{n}(z, t) \left[\psi_{\eta}^{n}(z', t)\right]^* dt \\ &= \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_{k} \frac{(n+k)!}{n!k!} (\tanh \eta)^{2k} \psi_{n+k}(z) \psi_{n+k}^{*}(z') \end{aligned} \quad (57) $$

The trace of this density matrix is one, but the trace of $\rho^2$ is less than one:

$$ \begin{aligned} \mathrm{Tr}(\rho^2) &= \int \rho_{\eta}^{n}(z,z') \rho_{\eta}^{n}(z',z) dzdz' \\ &= \left(\frac{1}{\cosh \eta}\right)^{4(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^2 (\tanh \eta)^{4k} \end{aligned} \quad (58) $$

which is less than one. This is due to the fact that we do not know how to deal with the time-like separation in the present formulation of quantum mechanics. Our knowledge is less than complete.
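Reading off Equation (57), the reduced density matrix is diagonal in the oscillator basis with eigenvalues $p_k = (1/\cosh\eta)^{2(n+1)}\,\binom{n+k}{k}\,(\tanh\eta)^{2k}$. The following Python sketch (names ours) confirms that the trace is one while the purity of Equation (58) falls below one for $\eta > 0$:

```python
import math

def reduced_eigenvalues(eta, n=0, kmax=2000):
    """Eigenvalues of the reduced density matrix in Eq. (57):
    p_k = (1/cosh eta)^{2(n+1)} * C(n+k,k) * tanh(eta)^{2k}."""
    x = math.tanh(eta) ** 2
    pref = (1.0 / math.cosh(eta)) ** (2 * (n + 1))
    return [pref * math.comb(n + k, k) * x**k for k in range(kmax)]

for eta in (0.5, 1.5):
    for n in (0, 2):
        p = reduced_eigenvalues(eta, n)
        assert abs(sum(p) - 1.0) < 1e-8        # Tr(rho) = 1
        assert sum(q * q for q in p) < 1.0     # Tr(rho^2) < 1, Eq. (58)
```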
The standard way to measure this ignorance is to calculate the entropy defined as

$$S = -\mathrm{Tr}(\rho \ln(\rho)) \qquad (59)$$

If we pretend to know the distribution along the time-like direction and use the pure-state density matrix given in Equation (55), then the entropy is zero. However, if we do not know how to deal with the distribution along $t$, then we should use the density matrix of Equation (57) to calculate the entropy, and the result is

$$S = 2(n+1) \left\{ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right\} \\ - \left( \frac{1}{\cosh \eta} \right)^{2(n+1)} \sum_k \frac{(n+k)!}{n!k!} \ln \left[ \frac{(n+k)!}{n!k!} \right] (\tanh \eta)^{2k} \qquad (60)$$

In terms of the velocity $v$ of the hadron,

$$S = -(n+1) \left\{ \ln \left[ 1 - \left( \frac{v}{c} \right)^2 \right] + \frac{(v/c)^2 \ln(v/c)^2}{1 - (v/c)^2} \right\} \\ - \left[ 1 - \left( \frac{v}{c} \right)^2 \right]^{(n+1)} \sum_k \frac{(n+k)!}{n!k!} \ln \left[ \frac{(n+k)!}{n!k!} \right] \left( \frac{v}{c} \right)^{2k} \qquad (61)$$

Let us go back to the wave function given in Equation (41). As is illustrated in Figure 3, its localization property is dictated by the Gaussian factor which corresponds to the ground-state wave function. For this reason, we expect that much of the behavior of the density matrix or the entropy for the $n^{th}$ excited state will be the same as that for the ground state with $n=0$. For this state, the density matrix and the entropy are

$$\rho(z,z') = \left(\frac{1}{\pi \cosh(2\eta)}\right)^{1/2} \exp\left\{-\frac{1}{4}\left[\frac{(z+z')^2}{\cosh(2\eta)} + (z-z')^2 \cosh(2\eta)\right]\right\} \qquad (62)$$

and

$$S = 2 \left\{ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right\} \qquad (63)$$

respectively.
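The closed form of the entropy can be checked against a direct evaluation of $S = -\sum_k p_k \ln p_k$ over the eigenvalues of Equation (57). The sketch below (function names ours) uses the first brace of Equation (60) in the form $\cosh^2\eta \ln\cosh\eta - \sinh^2\eta \ln\sinh\eta$, as required for consistency with Equation (63) at $n = 0$:

```python
import math

def entropy_numeric(eta, n=0, kmax=2000):
    """von Neumann entropy -sum p_k ln p_k from the eigenvalues of Eq. (57)."""
    x = math.tanh(eta) ** 2
    pref = (1.0 / math.cosh(eta)) ** (2 * (n + 1))
    S = 0.0
    for k in range(kmax):
        p = pref * math.comb(n + k, k) * x**k
        if p > 0.0:
            S -= p * math.log(p)
    return S

def entropy_closed(eta, n=0, kmax=2000):
    """Closed form of Eq. (60), first brace read with (sinh eta)^2."""
    c, s, t = math.cosh(eta), math.sinh(eta), math.tanh(eta)
    first = 2 * (n + 1) * (c * c * math.log(c) - s * s * math.log(s))
    pref = (1.0 / c) ** (2 * (n + 1))
    second = pref * sum(
        math.comb(n + k, k) * math.log(math.comb(n + k, k)) * t ** (2 * k)
        for k in range(kmax)
    )
    return first - second

# The two expressions agree; for n = 0 the sum vanishes (ln 1 = 0)
# and Eq. (60) reduces to Eq. (63).
for eta in (0.3, 1.0):
    for n in (0, 2):
        assert abs(entropy_numeric(eta, n) - entropy_closed(eta, n)) < 1e-7
```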
The quark distribution $\rho(z, z)$ becomes

$$\rho(z, z) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left( \frac{-z^2}{\cosh(2\eta)} \right) \qquad (64)$$

The width of the distribution becomes $\sqrt{\cosh(2\eta)}$, so the distribution becomes wide-spread as the hadronic speed increases. Likewise, the momentum distribution becomes wide-spread [5,29]. This simultaneous increase in the momentum and position distribution widths is called the parton phenomenon in high-energy physics [13,14]. The position-momentum uncertainty becomes $\cosh(2\eta)$. This increase in uncertainty is due to our ignorance about the physical but unmeasurable time-separation variable.

Let us next examine how this ignorance will lead to the concept of temperature. For the Lorentz-boosted ground state with $n=0$, the density matrix of Equation (62) becomes that of the harmonic oscillator in a thermal equilibrium state if $(\tanh\eta)^2$ is identified as the Boltzmann factor [29]. For other states, it is very difficult, if not impossible, to describe them as thermal equilibrium states. Unlike the case of temperature, the entropy is clearly defined for all values of $n$. Indeed, the entropy in this case is derivable directly from the hadronic speed.

The time-separation variable exists in the Lorentz-covariant world, but we pretend not to know about it. It is thus in Feynman's rest of the universe. If we do not measure this time-separation, it becomes translated into the entropy.

Figure 3. Localization property in the $zt$ plane. When the hadron is at rest, the Gaussian form is concentrated within a circular region specified by $(z+t)^2 + (z-t)^2 = 1$. As the hadron gains speed, the region becomes deformed to $e^{-2\eta}(z+t)^2 + e^{2\eta}(z-t)^2 = 1$. Since it is not possible to make measurements along the $t$ direction, we have to deal with information that is less than complete.

Figure 4. The uncertainty from the hidden time-separation coordinate.
The small circle indicates the minimal uncertainty when the hadron is at rest. More uncertainty is added when the hadron moves. This is illustrated by a larger circle. The radius of this circle increases by $\sqrt{\cosh(2\eta)}$.

We can see the uncertainty in our measurement process from the Wigner function defined as

$$W(z,p) = \frac{1}{\pi} \int \rho(z+y,z-y)e^{2ipy} dy \quad (65)$$

After integration, this Wigner function becomes

$$W(z,p) = \frac{1}{\pi \cosh(2\eta)} \exp \left\{ - \left( \frac{z^2 + p^2}{\cosh(2\eta)} \right) \right\} \quad (66)$$

This Wigner phase-space distribution is illustrated in Figure 4. The smaller inner circle corresponds to the minimal uncertainty of the single oscillator. The larger circle is for the total uncertainty, including the statistical uncertainty from our failure to observe the time-separation variable. The two-mode squeezed state tells us how this happens. In the two-mode case, both the first and second photons are observable, but we can choose not to observe the second photon.

## 7. Lorentz-Covariant Quark Model

The hydrogen atom played the pivotal role while the present form of quantum mechanics was developed. At that time, the proton was in the absolute Galilean frame of reference, and it was unthinkable that the proton could move with a speed close to that of light.

Also, at that time, both the proton and electron were point particles. However, the discovery of Hofstadter *et al*. changed the picture of the proton in 1955 [30]. The proton charge has its internal distribution. Within the framework of quantum electrodynamics, it is possible to calculate the Rutherford formula for electron-proton scattering when both the electron and proton are point particles. Because the proton is not a point particle, there is a deviation from the Rutherford formula.
We describe this deviation using the formula called the “proton form factor,” which depends on the momentum transfer during the electron-proton scattering.

Indeed, the study of the proton form factor has been and still is one of the central issues in high-energy physics. The form factor decreases as the momentum transfer increases. Its behavior is called the “dipole cut-off,” meaning an inverse-square decrease, and it has been a challenging problem in quantum field theory and other theoretical models [31]. Since the emergence of the quark model in 1964 [32], the hadrons have been regarded as quantum bound states of quarks with space-time wave functions. Thus, the quark model is responsible for explaining this form factor. There are indeed many papers written on this subject. We shall return to this problem in Subsection 7.2.

Another problem in high-energy physics is Feynman's parton picture [13,14]. If the hadron is at rest, we can approach this problem within the framework of bound-state quantum mechanics. If it moves with a speed close to that of light, it appears as a collection of an infinite number of partons, which interact with external signals incoherently. This phenomenon raises the question of whether the Lorentz boost destroys quantum coherence [33]. This leads to the concept of Feynman's decoherence [34]. We shall discuss this problem first.

## 7.1. Feynman's Parton Picture and Feynman's Decoherence

In 1969, Feynman observed that a fast-moving hadron can be regarded as a collection of many “partons” whose properties appear to be quite different from those of the quarks [5,14]. For example, the number of quarks inside a static proton is three, while the number of partons in a rapidly moving proton appears to be infinite. The question then is how the proton, which looks like a bound state of quarks to one observer, can appear different to an observer in a different Lorentz frame. Feynman made the following systematic observations.

a.
The picture is valid only for hadrons moving with velocity close to that of light.

b. The interaction time between the quarks becomes dilated, and partons behave as free independent particles.

c. The momentum distribution of partons becomes widespread as the hadron moves fast.

d. The number of partons seems to be infinite or much larger than that of quarks.

Because the hadron is believed to be a bound state of two or three quarks, each of the above phenomena appears as a paradox, particularly (b) and (c) together. How can a free particle have a wide-spread momentum distribution?

In order to address this question, let us go to Figure 5, which illustrates the Lorentz-squeeze property of the hadron as the hadron gains its speed. If we use the harmonic oscillator wave function, its momentum-energy wave function takes the same form as the space-time wave function. As the hadron gains its speed, both wave functions become squeezed.

As the wave function becomes squeezed, the distribution becomes wide-spread, and the spring constant appears to become weaker. Consequently, the constituent quarks appear to become free particles.

If the constituent particles are confined in the narrow elliptic region, they become like massless particles. If those massless particles have a wide-spread momentum distribution, it is like black-body radiation with an infinite number of photons.

We have addressed this question extensively in the literature, and concluded that Gell-Mann's quark model and Feynman's parton model are two different manifestations of the same Lorentz-covariant quantity [19,35,36]. Thus coherent quarks and incoherent partons are perfectly consistent within the framework of quantum mechanics and special relativity [33]. Indeed, this defines Feynman's decoherence [34].

**Figure 5.** Lorentz-squeezed space-time and momentum-energy wave functions.
As the hadron's speed approaches that of light, both wave functions become concentrated along their respective positive light-cone axes. These light-cone concentrations lead to Feynman's parton picture.

More recently, we were able to explain this decoherence problem in terms of the interaction time among the constituent quarks and the time required for each quark to interact with external signals [4].

## 7.2. Proton Form Factors and Lorentz Coherence

As early as 1970, Fujimura et al. calculated the electromagnetic form factor of the proton using the wave functions given in this paper and obtained the so-called “dipole” cut-off of the form factor [37]. At that time, these authors did not have the benefit of the differential equation of Feynman and his co-authors [12]. Since their wave functions can now be given a bona-fide covariant probability interpretation, their calculation can be placed between the two limiting cases of quarks and partons.

Even before the 1970 calculation of Fujimura et al., covariant wave functions had been discussed by various authors [38–40]. In 1970, Licht and Pagnamenta also discussed this problem with Lorentz-contracted wave functions [41].

In our 1973 paper [42], we attempted to explain the covariant oscillator wave function in terms of the coherence between the incoming signal and the width of the contracted wave function. This aspect was explained in terms of the overlap of the energy-momentum wave function in our book [5].

In this paper, we would like to go back to the coherence problem we raised in 1973, and follow up on it. In the Lorentz frame where the momentum of the proton has opposite signs before and after the collision, the four-momentum transfer is

$$ (p, E) - (-p, E) = (2p, 0) \qquad (67) $$

where the proton comes along the $z$ direction with momentum $p$ and energy $\sqrt{p^2 + m^2}$.
Then the form factor becomes

$$F(p) = \int e^{2ipz} (\psi_{\eta}(z,t))^* \psi_{-\eta}(z,t) dz dt \quad (68)$$

If we use the ground-state oscillator wave function, this integral becomes

$$\frac{1}{\pi} \int e^{2ipz} \exp \left\{ -\cosh(2\eta) (z^2 + t^2) \right\} dz dt \quad (69)$$

After the $t$ integration, this integral becomes

$$\frac{1}{\sqrt{\pi \cosh(2\eta)}} \int e^{2ipz} \exp \{-z^2 \cosh(2\eta)\} dz \quad (70)$$

The integrand is a product of a Gaussian factor and a sinusoidal oscillation. The width of the Gaussian factor shrinks by $1/\sqrt{\cosh(2\eta)}$, which becomes $\exp(-\eta)$ as $\eta$ becomes large. The wavelength of the sinusoidal factor is inversely proportional to the momentum $p$, so the wavelength also decreases at the rate of $\exp(-\eta)$. Thus, the rate of shrinkage is the same for both the Gaussian and sinusoidal factors. For this reason, the cutoff of the form factor of Equation (68) should be less severe than that of

$$\int e^{2ipz} (\psi_0(z,t))^* \psi_0(z,t) dz dt = \frac{1}{\sqrt{\pi}} \int e^{2ipz} \exp(-z^2) dz \quad (71)$$

which corresponds to the form factor without the squeeze effect on the wave function. The integration of this expression leads to $\exp(-p^2)$, which corresponds to an exponential cut-off as $p^2$ becomes large. Let us go back to the form factor of Equation (68). If we complete the integral, it becomes

$$F(p) = \frac{1}{\cosh(2\eta)} \exp \left\{ \frac{-p^2}{\cosh(2\eta)} \right\} \quad (72)$$

As $p^2$ becomes large, the Gaussian factor becomes a constant, because $\cosh(2\eta)$ also grows like $p^2$. However, the factor $1/\cosh(2\eta)$ leads to a form-factor decrease like $1/p^2$, which is a much slower decrease than the exponential cut-off without the squeeze effect.

There still is a gap between this mathematical formula and the observed experimental data. Before looking at the experimental curve, we have to realize that there are three quarks inside the hadron with two oscillator modes.
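Equation (72) can be checked by direct quadrature of Equation (69); in the sketch below (function names, grid size, and integration window are ours), the factorized $t$ integral is done analytically and the $z$ integral by a simple trapezoid-style sum:

```python
import math

def form_factor(p, eta):
    """Closed form of Eq. (72): F(p) = exp(-p^2 / cosh 2eta) / cosh 2eta."""
    c = math.cosh(2 * eta)
    return math.exp(-p * p / c) / c

def form_factor_numeric(p, eta, L=8.0, N=4000):
    """Quadrature of Eq. (69).  The t integral factorizes into a plain
    Gaussian, so it is done analytically; z is summed on a uniform grid."""
    c = math.cosh(2 * eta)
    t_integral = math.sqrt(math.pi / c)               # integral of exp(-c t^2)
    h = 2 * L / N
    z_integral = h * sum(
        math.cos(2 * p * (-L + i * h)) * math.exp(-c * (-L + i * h) ** 2)
        for i in range(N + 1)
    )
    return t_integral * z_integral / math.pi

for p, eta in [(0.5, 0.2), (2.0, 1.0), (3.0, 1.5)]:
    assert abs(form_factor_numeric(p, eta) - form_factor(p, eta)) < 1e-6
```

For large momentum transfer, $\cosh(2\eta)$ grows like $p^2$, so the prefactor in `form_factor` supplies the $1/p^2$ falloff discussed above.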
These two oscillator modes lead to a $(1/p^2)^2$ cut-off, which is commonly called the dipole cut-off in the literature.

There is still more work to be done. For instance, the effect of the quark spin should be addressed [43,44]. Also, there are reports of deviations from the exact dipole cut-off [45]. There have been attempts to study the form factors based on the four-dimensional rotation group [46], and also on lattice QCD [47].

Yet, it is gratifying to note that the effect of the Lorentz squeeze leads to a polynomial decrease in the momentum transfer, thanks to the Lorentz coherence illustrated in Figure 6. We started our logic from the fundamental principles of quantum mechanics and relativity.

## 8. Conclusions

In this paper, we presented one mathematical formalism applicable both to the entanglement problems in quantum optics [3] and to high-energy hadronic physics [4]. The formalism is based on harmonic oscillators familiar to us. We have presented a complete orthonormal set with a Lorentz-covariant probability interpretation.

Since both branches of physics share the same mathematical base, it is possible to translate physics from one branch to the other. In this paper, we have given a physical interpretation to the time-separation variable as a hidden variable in Feynman's rest of the universe, in terms of the two-mode squeezed state where both photons are observable.

**Figure 6.** Coherence between the wavelength and the proton size. As the momentum transfer increases, the external signal sees a Lorentz-contracted proton distribution. On the other hand, the wavelength of the signal also decreases. Thus, the cutoff is not as severe as in the case where the proton distribution is not contracted.

This paper is largely a review paper with an organization to suit the current interest in physics. For instance, the concepts of entanglement and decoherence did not exist when those original papers were written.
Furthermore, the probability interpretation given in Subsection 4.2 has not been published before.

The rotation symmetry plays its role in all branches of physics. We noted that the squeeze symmetry plays active roles in two different subjects of physics. It is possible that the squeeze transformation can serve useful purposes in many other fields, although we are not able to specify them at this time.

References

1. Kim, Y.S.; Noz, M.E. *Phase Space Picture of Quantum Mechanics*; World Scientific Publishing Company: Singapore, 1991.

2. Guillemin, V.; Sternberg, S. *Symplectic Techniques in Physics*; Cambridge University Press: Cambridge, UK, 1984.

3. Giedke, G.; Wolf, M.M.; Krüger, O.; Werner, R.F.; Cirac, J.I. Entanglement of formation for symmetric Gaussian states. *Phys. Rev. Lett.* **2003**, *91*, 107901-107904.

4. Kim, Y.S.; Noz, M.E. Coupled oscillators, entangled oscillators, and Lorentz-covariant harmonic oscillators. *J. Opt. B: Quantum Semiclass. Opt.* **2005**, *7*, S458-S467.

5. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1986.

6. Dirac, P.A.M. Forms of Relativistic Dynamics. *Rev. Mod. Phys.* **1949**, *21*, 392-399.

7. Feynman, R.P. *Statistical Mechanics*; Benjamin/Cummings: Reading, MA, USA, 1972.

8. Dirac, P.A.M. The Quantum Theory of the Emission and Absorption of Radiation. *Proc. Roy. Soc. (London)* **1927**, *A114*, 243-265.

9. Dirac, P.A.M. Unitary Representations of the Lorentz Group. *Proc. Roy. Soc. (London)* **1945**, *A183*, 284-295.

10. Dirac, P.A.M. A Remarkable Representation of the 3 + 2 de Sitter Group. *J. Math. Phys.* **1963**, *4*, 901-909.

11. Wigner, E. On Unitary Representations of the Inhomogeneous Lorentz Group. *Ann. Math.* **1939**, *40*, 149-204.

12. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current Matrix Elements from a Relativistic Quark Model. *Phys. Rev. D* **1971**, *3*, 2706-2732.

13.
Feynman, R.P. Very High-Energy Collisions of Hadrons. *Phys. Rev. Lett.* **1969**, *23*, 1415-1417.

14. Feynman, R.P. The Behavior of Hadron Collisions at Extreme Energies. In *High-Energy Collisions: Proceedings of the Third International Conference*; Gordon and Breach: New York, NY, USA, 1969; pp. 237-249.

15. Kim, Y.S.; Noz, M.E.; Oh, S.H. Representations of the Poincaré group for relativistic extended hadrons. *J. Math. Phys.* **1979**, *20*, 1341-1344.

16. Kim, Y.S.; Noz, M.E.; Oh, S.H. A Simple Method for Illustrating the Difference between the Homogeneous and Inhomogeneous Lorentz Groups. *Am. J. Phys.* **1979**, *47*, 892-897.

17. Rotbart, F.C. Complete orthogonality relations for the covariant harmonic oscillator. *Phys. Rev. D* **1981**, *23*, 3078-3090.

18. Ruiz, M.J. Orthogonality relations for covariant harmonic oscillator wave functions. *Phys. Rev. D* **1974**, *10*, 4306-4307.

19. Kim, Y.S.; Noz, M.E. Covariant Harmonic Oscillators and the Parton Picture. *Phys. Rev. D* **1977**, *15*, 335-338.

20. Yuen, H.P. Two-photon coherent states of the radiation field. *Phys. Rev. A* **1976**, *13*, 2226-2243.

21. Yurke, B.; Potasek, M. Obtainment of Thermal Noise from a Pure State. *Phys. Rev. A* **1987**, *36*, 3464-3466.

22. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. *Am. J. Phys.* **1989**, *57*, 692-697.

23. Kim, Y.S.; Noz, M.E. The Question of Simultaneity in Relativity and Quantum Mechanics. In *Quantum Theory: Reconsideration of Foundations-3*; Adenier, G., Khrennikov, A., Nieuwenhuizen, T.M., Eds.; AIP Conference Proceedings 180; American Institute of Physics: College Park, MD, USA, 2006; pp. 168-178.

24. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative Example of Feynman's Rest of the Universe. *Am. J. Phys.* **1999**, *67*, 61-66.

25. von Neumann, J. *Mathematische Grundlagen der Quantenmechanik*; Springer: Berlin, Germany, 1932.

26. Fano, U.
Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. *Rev. Mod. Phys.* **1957**, *29*, 74-93.

27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz Transformations. *Phys. Lett. A* **1990**, *147*, 343-347.

28. Kim, Y.S. Coupled oscillators and Feynman's three papers. *J. Phys. Conf. Ser.* **2007**, *70*, 012010.

29. Han, D.; Kim, Y.S.; Noz, M.E. Lorentz-Squeezed Hadrons and Hadronic Temperature. *Phys. Lett. A* **1990**, *144*, 111-115.

30. Hofstadter, R.; McAllister, R.W. Electron Scattering from the Proton. *Phys. Rev.* **1955**, *98*, 217-218.

31. Frazer, W.; Fulco, J. Effect of a Pion-Pion Scattering Resonance on Nucleon Structure. *Phys. Rev. Lett.* **1959**, *2*, 365-368.

32. Gell-Mann, M. A Schematic Model of Baryons and Mesons. *Phys. Lett.* **1964**, *8*, 214-215.

33. Kim, Y.S. Does Lorentz Boost Destroy Coherence? *Fortschr. der Physik* **1998**, *46*, 713-724.

34. Kim, Y.S.; Noz, M.E. Feynman's Decoherence. *Optics Spectro.* **2003**, *47*, 733-740.

35. Hussar, P.E. Valons and harmonic oscillators. *Phys. Rev. D* **1981**, *23*, 2781-2783.

36. Kim, Y.S. Observable gauge transformations in the parton picture. *Phys. Rev. Lett.* **1989**, *63*, 348-351.

37. Fujimura, K.; Kobayashi, T.; Namiki, M. Nucleon Electromagnetic Form Factors at High Momentum Transfers in an Extended Particle Model Based on the Quark Model. *Prog. Theor. Phys.* **1970**, *43*, 73-79.

38. Yukawa, H. Structure and Mass Spectrum of Elementary Particles. I. General Considerations. *Phys. Rev.* **1953**, *91*, 415-416.

39. Markov, M. On Dynamically Deformable Form Factors in the Theory of Particles. *Suppl. Nuovo Cimento* **1956**, *3*, 760-772.

40. Ginzburg, V.L.; Man'ko, V.I. Relativistic oscillator models of elementary particles. *Nucl. Phys.* **1965**, *74*, 577-588.

41. Licht, A.L.; Pagnamenta, A. Wave Functions and Form Factors for Relativistic Composite Particles I. *Phys. Rev. D* **1970**, *2*, 1150-1156.

42.
Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the quark model. *Phys. Rev. D* **1973**, *8*, 3521-3527.

43. Lipes, R. Electromagnetic Excitations of the Nucleon in a Relativistic Quark Model. *Phys. Rev. D* **1972**, *5*, 2849-2863.

44. Henriques, A.B.; Keller, B.H.; Moorhouse, R.G. General three-spinor wave functions and the relativistic quark model. *Ann. Phys. (NY)* **1975**, *93*, 125-151.

45. Punjabi, V.; Perdrisat, C.F.; Aniol, K.A.; Baker, F.T.; Berthot, J.; Bertin, P.Y.; Bertozzi, W.; Besson, A.; Bimbot, L.; Boeglin, W.U.; et al. Proton elastic form factor ratios to $Q^2$ = 3.5 GeV$^2$ by polarization transfer. *Phys. Rev. C* **2005**, *71*, 055202-27.

46. Alkofer, R.; Höll, A.; Kloker, M.; Krassnigg, A.; Roberts, C.D. On Nucleon Electromagnetic Form Factors. *Few-Body Syst.* **2005**, *37*, 1-31.

47. Matevosyan, H.H.; Thomas, A.W.; Miller, G.A. Study of lattice QCD form factors using the extended Gari-Krumpelmann model. *Phys. Rev. C* **2005**, *72*, 065204-5.

© 2011 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

---PAGE_BREAK---

Article

Dirac Matrices and Feynman's Rest of the Universe

Young S. Kim ¹,* and Marilyn E. Noz ²

¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA

² Department of Radiology, New York University, New York, NY 10016, USA; marilyne.noz@gmail.com

* Author to whom correspondence should be addressed; yskim@umd.edu; Tel.: +1-301-937-6306.

Received: 25 June 2012; in revised form: 6 October 2012; Accepted: 23 October 2012; Published: 30 October 2012

**Abstract:** There are two sets of four-by-four matrices introduced by Dirac. The first set consists of fifteen Majorana matrices derivable from his four $\gamma$ matrices.
These fifteen matrices can also serve as the generators of the group $SL(4, r)$. The second set consists of ten generators of the $Sp(4)$ group which Dirac derived from two coupled harmonic oscillators. It is shown to be possible to extend the symmetry of $Sp(4)$ to that of $SL(4, r)$ if the area of the phase space of one of the oscillators is allowed to become smaller without a lower limit. While there are no restrictions on the size of phase space in classical mechanics, Feynman's rest of the universe makes this $Sp(4)$-to-$SL(4, r)$ transition possible. The ten generators are for the world where quantum mechanics is valid. The remaining five generators belong to the rest of the universe. It is noted that the groups $SL(4, r)$ and $Sp(4)$ are locally isomorphic to the Lorentz groups $O(3, 3)$ and $O(3, 2)$, respectively. This allows us to interpret Feynman's rest of the universe in terms of space-time symmetry.

**Keywords:** Dirac gamma matrices; Feynman's rest of the universe; two coupled oscillators; Wigner's phase space; non-canonical transformations; group generators; $SL(4, r)$ isomorphic $O(3, 3)$; quantum mechanics interpretation

# 1. Introduction

In 1963, Paul A. M. Dirac published an interesting paper on the coupled harmonic oscillators [1]. Using step-up and step-down operators, Dirac was able to construct ten operators satisfying a closed set of commutation relations. He then noted that this set of commutation relations can also be used as the Lie algebra for the $O(3, 2)$ de Sitter group applicable to three space and two time dimensions. He noted further that this is the same as the Lie algebra for the four-dimensional symplectic group $Sp(4)$.

His algebra later became the fundamental mathematical language for two-mode squeezed states in quantum optics [2–5]. Thus, Dirac's ten oscillator matrices play a fundamental role in modern physics.
In the Wigner phase-space representation, it is possible to write the Wigner function in terms of two position and two momentum variables. It was noted that those ten operators of Dirac can be translated into operators with these four variables [4,6], which then can be written as four-by-four matrices. There are thus ten four-by-four matrices. We shall call them Dirac's oscillator matrices. They are indeed the generators of the symplectic group $Sp(4)$.

We are quite familiar with the four Dirac matrices for the Dirac equation, namely $\gamma_1, \gamma_2, \gamma_3$, and $\gamma_0$. They all become imaginary in the Majorana representation. From them we can construct fifteen linearly independent four-by-four matrices. It is known that these four-by-four matrices can serve as the generators of the $SL(4, r)$ group [6,7]. It is also known that this $SL(4, r)$ group is locally isomorphic to the Lorentz group $O(3, 3)$ applicable to three space and three time dimensions [6,7].

There are now two sets of four-by-four matrices constructed by Dirac. The first set consists of his ten oscillator matrices, and the second of the fifteen matrices constructed from the $\gamma$ matrices of his Dirac equation. There is thus a difference of five matrices. The question is then whether this difference can be explained within the framework of the oscillator formalism with tangible physics.

It was noted that his original O(3,2) symmetry can be extended to that of the O(3,3) Lorentz group applicable to the six-dimensional space consisting of three space and three time dimensions. This requires the inclusion of non-canonical transformations in classical mechanics [6]. These non-canonical transformations cannot be interpreted in terms of the present form of quantum mechanics.

On the other hand, we can use this non-canonical effect to illustrate the concept of Feynman's rest of the universe. This oscillator system can serve as two different worlds.
The first oscillator is the world in which we do quantum mechanics, and the second is for the rest of the universe. Our failure to observe the second oscillator results in an increase in the size of the Wigner phase space, thus increasing the entropy [8].

Instead of ignoring the second oscillator, it is of interest to see what happens to it. In this paper, it is shown that its phase-space area can decrease without a lower limit. This is allowed in classical mechanics, but not in quantum mechanics.

Indeed, Dirac's ten oscillator matrices explain the quantum world for both oscillators. The set of Dirac's fifteen $\gamma$ matrices contains his ten oscillator matrices as a subset. We discuss in this paper the physics of this difference.

In Section 2, we start with Dirac's four $\gamma$ matrices in the Majorana representation and construct all fifteen four-by-four matrices applicable to the Majorana form of the Dirac spinors. Section 3 reproduces Dirac's derivation of the $O(3,2)$ symmetry with ten generators from two coupled oscillators. This group is locally isomorphic to $Sp(4)$, which allows canonical transformations in classical mechanics.

In Section 4, we translate Dirac's formalism into the language of the Wigner phase space. This allows us to extend the $Sp(4)$ symmetry into the non-canonical region in classical mechanics. The resulting symmetry is that of $SL(4,r)$, isomorphic to that of the Lorentz group $O(3,3)$ with fifteen generators. This allows us to establish the correspondence between Dirac's Majorana matrices and those $SL(4,r)$ four-by-four matrices applicable to the two-oscillator system, as well as the fifteen six-by-six matrices that serve as the generators of the $O(3,3)$ group.

Finally, in Section 5, it is shown that the difference between the ten oscillator matrices and the fifteen Majorana matrices can serve as an illustrative example of Feynman's rest of the universe [8,9].

## 2.
Dirac Matrices in the Majorana Representation

Since all the generators for the two coupled oscillator system can be written as four-by-four matrices with imaginary elements, it is convenient to work with Dirac matrices in the Majorana representation, where all the elements are imaginary [7,10,11]. In the Majorana representation, the four Dirac $\gamma$ matrices are

$$ \gamma_1 = i \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, \quad \gamma_2 = \begin{pmatrix} 0 & -\sigma_2 \\ \sigma_2 & 0 \end{pmatrix} $$

$$ \gamma_3 = -i \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, \quad \gamma_0 = \begin{pmatrix} 0 & \sigma_2 \\ \sigma_2 & 0 \end{pmatrix} \qquad (1) $$

where

$$ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

These $\gamma$ matrices transform like a four-vector under Lorentz transformations. From these four matrices, we can construct one pseudo-scalar matrix

$$ \gamma_5 = i\gamma_0\gamma_1\gamma_2\gamma_3 = \begin{pmatrix} \sigma_2 & 0 \\ 0 & -\sigma_2 \end{pmatrix} \qquad (2) $$

and a pseudo-vector $i\gamma_5\gamma_\mu$ consisting of

$$
\begin{align}
i\gamma_5\gamma_1 &= i \begin{pmatrix} -\sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, &
i\gamma_5\gamma_2 &= -i \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \\
i\gamma_5\gamma_0 &= i \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}, &
i\gamma_5\gamma_3 &= i \begin{pmatrix} -\sigma_3 & 0 \\ 0 & +\sigma_3 \end{pmatrix}
\end{align} \qquad (3)
$$

In addition, we can construct the antisymmetric tensor of the $\gamma$ matrices as

$$
T_{\mu\nu} = \frac{i}{2} (\gamma_{\mu}\gamma_{\nu} - \gamma_{\nu}\gamma_{\mu}) \quad (4)
$$

This antisymmetric tensor has six components.
They are

$$
i\gamma_0\gamma_1 = -i \begin{pmatrix} 0 & \sigma_1 \\ \sigma_1 & 0 \end{pmatrix}, i\gamma_0\gamma_2 = -i \begin{pmatrix} -I & 0 \\ 0 & I \end{pmatrix}, i\gamma_0\gamma_3 = -i \begin{pmatrix} 0 & \sigma_3 \\ \sigma_3 & 0 \end{pmatrix} \quad (5)
$$

and

$$
i\gamma_1\gamma_2 = i \begin{pmatrix} 0 & -\sigma_1 \\ \sigma_1 & 0 \end{pmatrix}, i\gamma_2\gamma_3 = -i \begin{pmatrix} 0 & -\sigma_3 \\ \sigma_3 & 0 \end{pmatrix}, i\gamma_3\gamma_1 = \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix} \quad (6)
$$

There are now fifteen linearly independent four-by-four matrices. They are all traceless and their components are imaginary [7]. We shall call these Dirac's Majorana matrices.

In 1963 [1], Dirac constructed another set of four-by-four matrices from two coupled harmonic oscillators, within the framework of quantum mechanics. He ended up with ten four-by-four matrices. It is of interest to compare his ten oscillator matrices with his fifteen Majorana matrices.

## 3. Dirac’s Coupled Oscillators

In his 1963 paper [1], Dirac started with the Hamiltonian for two harmonic oscillators.
It can be written as

$$
H = \frac{1}{2} (p_1^2 + x_1^2) + \frac{1}{2} (p_2^2 + x_2^2) \tag{7}
$$

The ground-state wave function for this Hamiltonian is

$$
\psi_0(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x_1^2 + x_2^2) \right\} \qquad (8)
$$

We can now consider unitary transformations applicable to the ground-state wave function of Equation (8), and Dirac noted that those unitary transformations are generated by [1]

$$
\begin{align*}
L_1 &= \frac{1}{2}(a_1^\dagger a_2 + a_2^\dagger a_1), & L_2 &= \frac{1}{2i}(a_1^\dagger a_2 - a_2^\dagger a_1) \\
L_3 &= \frac{1}{2}(a_1^\dagger a_1 - a_2^\dagger a_2), & S_3 &= \frac{1}{2}(a_1^\dagger a_1 + a_2 a_2^\dagger) \\
K_1 &= -\frac{1}{4}(a_1^\dagger a_1^\dagger + a_1 a_1 - a_2^\dagger a_2^\dagger - a_2 a_2) \\
K_2 &= \frac{i}{4}(a_1^\dagger a_1^\dagger - a_1 a_1 + a_2^\dagger a_2^\dagger - a_2 a_2) \\
K_3 &= \frac{1}{2}(a_1^\dagger a_2^\dagger + a_1 a_2) \\
Q_1 &= -\frac{i}{4}(a_1^\dagger a_1^\dagger - a_1 a_1 - a_2^\dagger a_2^\dagger + a_2 a_2) \\
Q_2 &= -\frac{1}{4}(a_1^\dagger a_1^\dagger + a_1 a_1 + a_2^\dagger a_2^\dagger + a_2 a_2) \\
Q_3 &= \frac{i}{2}(a_1^\dagger a_2^\dagger - a_1 a_2)
\end{align*}
$$

(9)

where $a^\dagger$ and $a$ are the step-up and step-down operators applicable to harmonic oscillator wave functions. These ten Hermitian operators satisfy the following set of commutation relations.
---PAGE_BREAK---

$$
\begin{align}
[L_i, L_j] &= i\epsilon_{ijk}L_k, \quad [L_i, K_j] = i\epsilon_{ijk}K_k, \quad [L_i, Q_j] = i\epsilon_{ijk}Q_k \nonumber \\
[K_i, K_j] &= [Q_i, Q_j] = -i\epsilon_{ijk}L_k, \quad [L_i, S_3] = 0 \nonumber \\
[K_i, Q_j] &= -i\delta_{ij}S_3, \quad [K_i, S_3] = -iQ_i, \quad [Q_i, S_3] = iK_i \tag{10}
\end{align}
$$

Dirac then determined that these commutation relations constitute the Lie algebra for the O(3,2) de Sitter group with ten generators.
This de Sitter group is the Lorentz group applicable to three space coordinates and two time coordinates. Let us use the notation (x,y,z,t,s), with (x,y,z) as space coordinates and (t,s) as two time coordinates. Then the rotation around the z axis is generated by

$$
L_3 = \begin{pmatrix}
0 & -i & 0 & 0 & 0 \\
i & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\qquad (11)
$$

The generators $L_1$ and $L_2$ can also be constructed. The generators $K_3$ and $Q_3$ take the form

$$
K_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i & 0 \\ 0 & 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, Q_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 \end{pmatrix} \tag{12}
$$

From these two matrices, the generators $K_1, K_2, Q_1, Q_2$ can be constructed. The generator $S_3$ can be written as

$$
S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -i \\ 0 & 0 & 0 & i & 0 \end{pmatrix} \tag{13}
$$

This last five-by-five matrix generates rotations in the two-dimensional space of $(t,s)$.

In his 1963 paper [1], Dirac states that the Lie algebra of Equation (10) can serve as the four-dimensional symplectic group $Sp(4)$. In order to see this point, let us go to the Wigner phase-space picture of the coupled oscillators.

### 3.1. Wigner Phase-Space Representation

For this two-oscillator system, the Wigner function is defined as [4,6]

$$
W(x_1, x_2; p_1, p_2) = \left(\frac{1}{\pi}\right)^2 \int \exp\{-2i(p_1 y_1 + p_2 y_2)\} \\
\times \psi^*(x_1+y_1, x_2+y_2) \psi(x_1-y_1, x_2-y_2) dy_1 dy_2 \tag{14}
$$

Indeed, the Wigner function is defined over the four-dimensional phase space of $(x_1, p_1, x_2, p_2)$ just as in the case of classical mechanics.
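Since Equation (14) is a Gaussian integral, it can be checked numerically. The sketch below (Python with NumPy, an illustrative choice; the function name and grid parameters are arbitrary) evaluates Equation (14) for the ground-state wave function of Equation (8) and compares the result with the expected Gaussian form in phase space.

```python
import numpy as np

# Ground-state wave function of Equation (8)
def psi0(x1, x2):
    return np.exp(-0.5 * (x1**2 + x2**2)) / np.sqrt(np.pi)

# Wigner function of Equation (14), evaluated by brute-force quadrature
# (hypothetical helper; ymax and n are arbitrary grid choices)
def wigner0(x1, p1, x2, p2, ymax=6.0, n=241):
    y = np.linspace(-ymax, ymax, n)
    dy = y[1] - y[0]
    y1, y2 = np.meshgrid(y, y, indexing="ij")
    integrand = (np.exp(-2j * (p1 * y1 + p2 * y2))
                 * np.conj(psi0(x1 + y1, x2 + y2))
                 * psi0(x1 - y1, x2 - y2))
    return (integrand.sum() * dy * dy / np.pi**2).real

# The integral reproduces (1/pi)^2 exp(-(x1^2 + p1^2 + x2^2 + p2^2))
pt = (0.3, -0.4, 0.7, 0.2)
exact = np.exp(-sum(v**2 for v in pt)) / np.pi**2
assert abs(wigner0(*pt) - exact) < 1e-6
```

The agreement confirms that the ground-state Wigner function is the rotationally symmetric Gaussian in each of the two phase planes.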
The unitary transformations generated by the operators of Equation (9) are translated into linear canonical transformations of the Wigner function [4]. The canonical transformations are generated by the differential operators [4]: + +$$ +L_1 = +i \frac{1}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_2} \right) \right\} +$$ +---PAGE_BREAK--- + +$$ +\begin{align*} +L_2 &= -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_1}\right) + \left(p_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial p_1}\right) \right\} \\ +L_3 &= +\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_1}\right) - \left(x_2 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_2}\right) \right\} \\ +S_3 &= -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_1}\right) + \left(x_2 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_2}\right) \right\} +\end{align*} +$$ + +and + +$$ +\begin{align} +K_1 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_1} \right) - \left( x_2 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_2} \right) \right\} \\ +K_2 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial x_1} - p_1 \frac{\partial}{\partial p_1} \right) + \left( x_2 \frac{\partial}{\partial x_2} - p_2 \frac{\partial}{\partial p_2} \right) \right\} \\ +K_3 &= +\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_2} \right) \right\} \\ +Q_1 &= +\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial x_1} - p_1 \frac{\partial}{\partial p_1} \right) - \left( x_2 \frac{\partial}{\partial x_2} - p_2 \frac{\partial}{\partial p_2} 
\right) \right\} \\
Q_2 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_2} \right) \right\} \\
Q_3 &= -\frac{i}{2} \left\{ \left( x_2 \frac{\partial}{\partial x_1} + x_1 \frac{\partial}{\partial x_2} \right) - \left( p_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial p_2} \right) \right\}
\end{align}
$$

$$
(15)
$$

These operators generate linear transformations of the four-dimensional phase space $\eta = (x_1, p_1, x_2, p_2)$. Such a transformation, $\xi_i = M_{ij}\eta_j$, is canonical if the four-by-four matrix $M$ satisfies

$$
M J \tilde{M} = J \qquad (17)
$$

where $\tilde{M}$ is the transpose of the matrix $M$ with elements

$$
M_{ij} = \frac{\partial \xi_i}{\partial \eta_j} \qquad (18)
$$

and

$$
J = \begin{pmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0
\end{pmatrix}
$$

(19)

According to this form of the $J$ matrix, the area of the phase space for the $x_1$ and $p_1$ variables remains invariant, and the same holds for the phase space of $x_2$ and $p_2$.

We can then write the generators of the Sp(4) group as

$$
L_1 = -\frac{1}{2} \begin{pmatrix} 0 & \sigma_2 \\ \sigma_2 & 0 \end{pmatrix}, L_2 = \frac{i}{2} \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}
$$

$$
L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix} \qquad (20)
$$

and

$$
K_1 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & -\sigma_1 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, K_3 = -\frac{i}{2} \begin{pmatrix} 0 & \sigma_1 \\ \sigma_1 & 0 \end{pmatrix}
$$
---PAGE_BREAK---

and

$$Q_1 = \frac{i}{2} \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, Q_3 = \frac{i}{2} \begin{pmatrix} 0 & \sigma_3 \\ \sigma_3 & 0 \end{pmatrix} \quad (21)$$

These four-by-four matrices satisfy the commutation relations given in Equation (10).
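These commutation relations can be verified directly. The following sketch (Python with NumPy, an illustrative choice) builds the ten four-by-four matrices of Equations (20) and (21) and checks a representative sample of the relations in Equation (10).

```python
import numpy as np

# Pauli matrices and two-by-two blocks
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def blk(a, b, c, d):  # four-by-four matrix from two-by-two blocks
    return np.block([[a, b], [c, d]])

# The ten Sp(4) generators of Equations (20) and (21)
L1 = -0.5 * blk(Z, s2, s2, Z)
L2 = 0.5j * blk(Z, -I2, I2, Z)
L3 = 0.5 * blk(-s2, Z, Z, s2)
S3 = 0.5 * blk(s2, Z, Z, s2)
K1 = 0.5j * blk(s1, Z, Z, -s1)
K2 = 0.5j * blk(s3, Z, Z, s3)
K3 = -0.5j * blk(Z, s1, s1, Z)
Q1 = 0.5j * blk(-s3, Z, Z, s3)
Q2 = 0.5j * blk(s1, Z, Z, s1)
Q3 = 0.5j * blk(Z, s3, s3, Z)

comm = lambda a, b: a @ b - b @ a

# A sample of the relations in Equation (10)
assert np.allclose(comm(L1, L2), 1j * L3)    # [L1, L2] = i L3
assert np.allclose(comm(K1, K2), -1j * L3)   # [K1, K2] = -i L3
assert np.allclose(comm(K3, Q3), -1j * S3)   # [K3, Q3] = -i S3
assert np.allclose(comm(K1, Q1), -1j * S3)   # [K1, Q1] = -i S3
assert np.allclose(comm(L1, S3), 0 * S3)     # [Li, S3] = 0
```

The same check extends to all ten generators; only a representative sample is asserted here.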
Indeed, the de Sitter group *O*(3,2) is locally isomorphic to the *Sp*(4) group. The remaining question is whether these ten matrices can serve as the fifteen Dirac matrices given in Section 2. The answer is clearly no: ten matrices cannot account for fifteen. We should therefore add five more matrices.

## 4. Extension to O(3,3) Symmetry

Unlike the case of the Schrödinger picture, it is possible to add five non-canonical generators to the above list. They are

$$S_1 = +\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_1}\right) - \left(p_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial p_1}\right) \right\}$$

$$S_2 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_1}\right) + \left(x_2 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_2}\right) \right\} \quad (22)$$

as well as three additional squeeze operators:

$$G_1 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} + x_2 \frac{\partial}{\partial x_1}\right) + \left(p_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial p_1}\right) \right\}$$

$$G_2 = \frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_1}\right) - \left(x_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_2}\right) \right\}$$

$$G_3 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_1} + p_1 \frac{\partial}{\partial p_1}\right) - \left(x_2 \frac{\partial}{\partial x_2} + p_2 \frac{\partial}{\partial p_2}\right) \right\} \quad (23)$$

These five generators perform well-defined operations on the Wigner function. However, the question is whether these additional generators are acceptable in the present form of quantum mechanics.
In order to answer this question, let us note that the uncertainty principle in the phase-space picture of quantum mechanics is stated in terms of the minimum area in phase space for a given pair of conjugate variables. The minimum area is determined by Planck's constant. Thus we are allowed to expand the phase space, but are not allowed to contract it. With this point in mind, let us go back to $G_3$ of Equation (23), which generates transformations that simultaneously expand one phase space and contract the other. Thus, the $G_3$ generator is not acceptable in quantum mechanics even though it generates well-defined mathematical transformations of the Wigner function.

If the five generators of Equations (22) and (23) are added to the ten generators given in Equations (15) and (16), there are fifteen generators. They satisfy the following set of commutation relations.

$$
\begin{align*}
[L_i, L_j] &= i\epsilon_{ijk}L_k, & [S_i, S_j] &= i\epsilon_{ijk}S_k, & [L_i, S_j] &= 0 \\
[L_i, K_j] &= i\epsilon_{ijk}K_k, & [L_i, Q_j] &= i\epsilon_{ijk}Q_k, & [L_i, G_j] &= i\epsilon_{ijk}G_k \\
[K_i, K_j] &= [Q_i, Q_j] = [G_i, G_j] = -i\epsilon_{ijk}L_k \\
[K_i, Q_j] &= -i\delta_{ij}S_3, & [Q_i, G_j] &= -i\delta_{ij}S_1, & [G_i, K_j] &= -i\delta_{ij}S_2 \\
[K_i, S_3] &= -iQ_i, & [Q_i, S_3] &= iK_i, & [G_i, S_3] &= 0 \\
[K_i, S_1] &= 0, & [Q_i, S_1] &= -iG_i, & [G_i, S_1] &= iQ_i \\
[K_i, S_2] &= iG_i, & [Q_i, S_2] &= 0, & [G_i, S_2] &= -iK_i
\end{align*}
\tag{24}
$$

As we shall see in Section 4.2, this set of commutation relations serves as the Lie algebra for the group SL(4, r) and also for the *O*(3, 3) Lorentz group.
---PAGE_BREAK---

These fifteen four-by-four matrices are written in terms of Dirac's fifteen Majorana matrices, and are tabulated in Table 1. There are six anti-symmetric and nine symmetric matrices. The anti-symmetric matrices are divided into two sets of three rotation generators in the four-dimensional phase space.
The nine symmetric matrices can be divided into three sets of three squeeze generators. However, this classification scheme is easier to understand in terms the group $O(3,3)$, discussed in Section 4.2. + +**Table 1.** SL(4,*r*) and Dirac matrices. Two sets of rotation generators and three sets of boost generators. +There are 15 generators. + +
First componentSecond componentThird component
Rotation$L_1 = \frac{-i}{2}\gamma_0$$L_2 = \frac{i}{2}\gamma_5\gamma_0$$L_3 = \frac{-i}{2}\gamma_5$
Rotation$S_1 = \frac{i}{2}\gamma_2\gamma_3$$S_2 = \frac{i}{2}\gamma_1\gamma_2$$S_3 = \frac{i}{2}\gamma_3\gamma_1$
Boost$K_1 = \frac{-i}{2}\gamma_5\gamma_1$$K_2 = \frac{1}{2}\gamma_1$$K_3 = \frac{1}{2}\gamma_0\gamma_1$
Boost$Q_1 = \frac{i}{2}\gamma_5\gamma_3$$Q_2 = \frac{-i}{2}\gamma_3$$Q_3 = -\frac{i}{2}\gamma_0\gamma_3$
Boost$G_1 = \frac{-i}{2}\gamma_5\gamma_2$$G_2 = \frac{1}{2}\gamma_2$$G_3 = \frac{1}{2}\gamma_0\gamma_2$
+ +## 4.1. Non-Canonical Transformations in Classical Mechanics + +In addition to Dirac's ten oscillator matrices, we can consider the matrix + +$$ G_3 = \frac{i}{2} \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix} \qquad (25) $$ + +which will generate a radial expansion of the phase space of the first oscillator, while contracting that of the second phase space [14], as illustrated in Figure 1. What is the physical significance of this operation? The expansion of phase space leads to an increase in uncertainty and entropy [8,14]. + +**Figure 1.** Expanding and contracting phase spaces. Canonical transformations leave the area of each phase space invariant. Non-canonical transformations can change them, yet the product of these two areas remains invariant. + +The contraction of the second phase space has a lower limit in quantum mechanics, namely it cannot become smaller than Planck's constant. However, there is no such lower limit in classical mechanics. We shall go back to this question in Section 5. +---PAGE_BREAK--- + +In the meantime, let us study what happens when the matrix $G_3$ is introduced into the set of matrices given in Equations (20) and (21). It commutes with $S_3$, $L_3$, $K_1$, $K_2$, $Q_1$, and $Q_2$. However, its commutators with the rest of the matrices produce four more generators: + +$$[G_3, L_1] = iG_2, [G_3, L_2] = -iG_1, [G_3, K_3] = iS_2, [G_3, Q_3] = -iS_1 \qquad (26)$$ + +where + +$$G_1 = \frac{i}{2} \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, G_2 = \frac{1}{2} \begin{pmatrix} 0 & -\sigma_2 \\ \sigma_2 & 0 \end{pmatrix}$$ + +$$S_1 = \frac{i}{2} \begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}, S_2 = \frac{i}{2} \begin{pmatrix} 0 & -\sigma_1 \\ \sigma_1 & 0 \end{pmatrix} \qquad (27)$$ + +If we take into account the above five generators in addition to the ten generators of $Sp(4)$, there are fifteen generators. These generators satisfy the set of commutation relations given in Equation (24). 
+ +Indeed, the ten $Sp(4)$ generators together with the five new generators form the Lie algebra for the group $SL(4,r)$. There are thus fifteen four-by-four matrices. They can be written in terms of the fifteen Majorana matrices, as given in Table 1. + +## 4.2. Local Isomorphism between O(3,3) and SL(4,r) + +It is now possible to write fifteen six-by-six matrices that generate Lorentz transformations on the three space coordinates and three time coordinates [6]. However, those matrices are difficult to handle and do not show existing regularities. In this section, we write those matrices as two-by-two matrices of three-by-three matrices. + +For this purpose, we construct four sets of three-by-three matrices given in Table 2. There are two sets of rotation generators: + +$$L_i = \begin{pmatrix} A_i & 0 \\ 0 & 0 \end{pmatrix}, S_i = \begin{pmatrix} 0 & 0 \\ 0 & A_i \end{pmatrix} \qquad (28)$$ + +applicable to the space and time coordinates respectively. + +There are also three sets of boost generators. In the two-by-two representation of the matrices given in Table 2, they are: + +$$K_i = \begin{pmatrix} 0 & B_i \\ \tilde{B}_i & 0 \end{pmatrix}, Q_i = \begin{pmatrix} 0 & C_i \\ \tilde{C}_i & 0 \end{pmatrix}, G_i = \begin{pmatrix} 0 & D_i \\ \tilde{D}_i & 0 \end{pmatrix} \qquad (29)$$ + +where the three-by-three matrices $A_i, B_i, C_i$, and $D_i$ are given in Table 2, and $\tilde{A}_i, \tilde{B}_i, \tilde{C}_i, \tilde{D}_i$ are their transposes respectively. +---PAGE_BREAK--- + +**Table 2.** Three-by-three matrices constituting the two-by-two representation of generators of the $O(3,3)$ group. + +
i = 1i = 2i = 3
Ai$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$
Bi$\begin{pmatrix} i & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}$
Ci$\begin{pmatrix} 0 & i & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & i & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & i & 0 \end{pmatrix}$
Di$\begin{pmatrix} 0 & 0 & i \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & i & 0 \\ 0 & 0 & 0 \end{pmatrix}$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & i \end{pmatrix}$
+ +There is a four-by-four Majorana matrix corresponding to each of these fifteen six-by-six matrices, as given in Table 1. + +There are of course many interesting subgroups. The most interesting case is the $O(3,2)$ subgroup, and there are three of them. Another interesting feature is that there are three time dimensions. Thus, there are also $O(2,3)$ subgroups applicable to two space and three time coordinates. This symmetry between space and time coordinates could be an interesting future investigation. + +## **5. Feynman's Rest of the Universe** + +In his book on statistical mechanics [9], Feynman makes the following statement. When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system. + +We can use two coupled harmonic oscillators to illustrate what Feynman says about his rest of the universe. One of the oscillators can be used for the world in which we make physical measurements, while the other belongs to the rest of the universe [8]. + +Let us start with a single oscillator in its ground state. In quantum mechanics, there are many kinds of excitations of the oscillator, and three of them are familiar to us. First, it can be excited to a state with a definite energy eigenvalue. We obtain the excited-state wave functions by solving the eigenvalue problem for the Schrödinger equation, and this procedure is well known. + +Second, the oscillator can go through coherent excitations. The ground-state oscillator can be excited to a coherent or squeezed state. During this process, the minimum uncertainty of the ground state is preserved. The coherent or squeezed state is not in an energy eigenstate. 
This kind of excited state plays a central role in coherent and squeezed states of light, which have recently become a standard item in quantum mechanics.

Third, the oscillator can go through thermal excitations. This is not a quantum excitation but a statistical ensemble. We cannot express a thermally excited state by making linear combinations of wave functions. We should treat this as a canonical ensemble. In order to deal with this thermal state, we need a density matrix.

For the thermally excited single-oscillator state, the density matrix takes the form [9,15,16]

$$ \rho(x,y) = (1 - e^{-1/T}) \sum_k e^{-k/T} \phi_k(x) \phi_k^*(y) \quad (30) $$

where the absolute temperature T is measured in the scale of Boltzmann's constant, and $\phi_k(x)$ is the $k$-th excited-state oscillator wave function. The index $k$ ranges from 0 to $\infty$.
---PAGE_BREAK---

We also use Wigner functions to deal with statistical problems in quantum mechanics. The Wigner function for this thermally excited state is [4,9,15]

$$W_T(x, p) = \frac{1}{\pi} \int e^{-2ipz} \rho(x-z, x+z) dz \quad (31)$$

which becomes

$$W_T = \left[ \frac{\tanh(1/2T)}{\pi} \right] \exp \left[ - (x^2 + p^2) \tanh(1/2T) \right] \quad (32)$$

This Wigner function becomes

$$W_0 = \frac{1}{\pi} \exp[-(x^2 + p^2)] \quad (33)$$

when $T=0$. As the temperature increases, the radius of this Gaussian form increases from one to [14]

$$\frac{1}{\sqrt{\tanh(1/2T)}} \qquad (34)$$

The question is whether we can derive this expanding Wigner function from the concept of Feynman's rest of the universe. In their 1999 paper [8], Han et al. used two coupled harmonic oscillators to illustrate what Feynman said about his rest of the universe. One of their two oscillators is for the world in which we do quantum mechanics and the other is for the rest of the universe. However, these authors did not use canonical transformations.
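The properties of the thermal Wigner function of Equation (32) can be checked numerically. The sketch below (Python with NumPy, an illustrative choice; the grid sizes are arbitrary) verifies that it stays normalized at all temperatures, reduces to the ground-state form of Equation (33) at low temperature, and has the growing radius of Equation (34).

```python
import numpy as np

# Thermal Wigner function of Equation (32); T in units of Boltzmann's constant
def W_T(x, p, T):
    t = np.tanh(1.0 / (2.0 * T))
    return (t / np.pi) * np.exp(-(x**2 + p**2) * t)

x = np.linspace(-15, 15, 601)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)

# Normalization holds at every temperature
for T in (0.1, 1.0, 5.0):
    assert abs(W_T(X, P, T).sum() * dx * dx - 1.0) < 1e-6

# Low-temperature limit: tanh(1/2T) -> 1 recovers Equation (33)
W0 = np.exp(-(X**2 + P**2)) / np.pi
assert np.allclose(W_T(X, P, 0.05), W0)

# The Gaussian radius 1/sqrt(tanh(1/2T)) of Equation (34) grows with T
radius = lambda T: 1.0 / np.sqrt(np.tanh(1.0 / (2.0 * T)))
assert radius(5.0) > radius(1.0) > radius(0.1) >= 1.0
```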
In Section 5.1, we summarize the main point of their paper using the language of canonical transformations developed in the present paper. + +Their work was motivated by the papers by Yurke et al. [17] and by Ekert et al. [18], and the Barnett-Phoenix version of information theory [19]. These authors asked the question of what happens when one of the photons is not observed in the two-mode squeezed state. + +In Section 5.2, we introduce another form of Feynman's rest of the universe, based on non-canonical transformations discussed in the present paper. For a two-oscillator system, we can define a single-oscillator Wigner function for each oscillator. Then non-canonical transformations allow one Wigner function to expand while forcing the other to shrink. The shrinking Wigner function has a lower limit in quantum mechanics, while there is none in classical mechanics. Thus, Feynman's rest of the universe consists of classical mechanics where Planck's constant has no lower limit. + +In Section 5.3, we translate the mathematics of the expanding Wigner function into the physical language of entropy. + +## 5.1. Canonical Approach + +Let us start with the ground-state wave function for the uncoupled system. Its Hamiltonian is given in Equation (7), and its wave function is + +$$\psi_0(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left[ -\frac{1}{2} (x_1^2 + x_2^2) \right] \quad (35)$$ + +We can couple these two oscillators by making the following canonical transformations. 
First, let us rotate the coordinate system by 45° to get

$$\frac{1}{\sqrt{2}}(x_1+x_2), \frac{1}{\sqrt{2}}(x_1-x_2) \qquad (36)$$

Let us then squeeze the coordinate system:

$$\frac{e^{\eta}}{\sqrt{2}}(x_1 + x_2), \frac{e^{-\eta}}{\sqrt{2}}(x_1 - x_2) \qquad (37)$$
---PAGE_BREAK---

Likewise, we can transform the momentum coordinates to

$$ \frac{e^{-\eta}}{\sqrt{2}}(p_1 + p_2), \quad \frac{e^{\eta}}{\sqrt{2}}(p_1 - p_2) \qquad (38) $$

Equations (37) and (38) constitute a very familiar canonical transformation. The resulting wave function for this coupled system becomes

$$ \psi_{\eta}(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} [e^{\eta}(x_1 - x_2)^2 + e^{-\eta}(x_1 + x_2)^2] \right\} \quad (39) $$

This transformed wave function is illustrated in Figure 2.

As was discussed in the literature for several different purposes [4,20–22], this wave function can be expanded as

$$ \psi_{\eta}(x_1, x_2) = \frac{1}{\cosh(\eta/2)} \sum_k \left( \tanh \frac{\eta}{2} \right)^k \phi_k(x_1) \phi_k(x_2) \quad (40) $$

where the wave functions $\phi_k(x)$ and the range of summation are defined in Equation (30). From this wave function, we can construct the pure-state density matrix

$$ \rho(x_1, x_2; x'_1, x'_2) = \psi_\eta(x_1, x_2) \psi_\eta^*(x'_1, x'_2) \quad (41) $$

which satisfies the condition $\rho^2 = \rho$:

$$ \rho(x_1, x_2; x'_1, x'_2) = \int \rho(x_1, x_2; x''_1, x''_2) \rho(x''_1, x''_2; x'_1, x'_2) dx''_1 dx''_2 \quad (42) $$

**Figure 2.** Two-dimensional Gaussian form for two coupled oscillators. One of the variables is observable while the second variable is not observed. It belongs to Feynman's rest of the universe.

If we are not able to make observations on $x_2$, we should take the trace of the $\rho$ matrix with respect to the $x_2$ variable.
Then the resulting density matrix is

$$ \rho(x, x') = \int \psi_{\eta}(x, x_2) \{\psi_{\eta}(x', x_2)\}^* dx_2 \quad (43) $$
---PAGE_BREAK---

Here, we have replaced $x_1$ and $x'_1$ by $x$ and $x'$ respectively. If we complete the integration over the $x_2$ variable,

$$ \rho(x,x') = \left(\frac{1}{\pi \cosh \eta}\right)^{1/2} \exp\left\{-\frac{(x+x')^2 + (x-x')^2 \cosh^2 \eta}{4 \cosh \eta}\right\} \quad (44) $$

The diagonal elements of the above density matrix are

$$ \rho(x,x) = \left( \frac{1}{\pi \cosh \eta} \right)^{1/2} \exp(-x^2 / \cosh \eta) \quad (45) $$

With this expression, we can confirm the property of the density matrix: $\text{Tr}(\rho) = 1$. As for the trace of $\rho^2$, we can perform the integration

$$ \mathrm{Tr}(\rho^2) = \int \rho(x,x')\rho(x',x)dx'dx = \frac{1}{\cosh\eta} \quad (46) $$

which is less than one for nonzero values of $\eta$.

The density matrix can also be calculated from the expansion of the wave function given in Equation (40). If we perform the integral of Equation (43), the result is

$$ \rho(x,x') = \left( \frac{1}{\cosh(\eta/2)} \right)^2 \sum_k \left( \tanh \frac{\eta}{2} \right)^{2k} \phi_k(x) \phi_k^*(x') \quad (47) $$

which leads to $\text{Tr}(\rho) = 1$. It is also straightforward to compute the integral for $\text{Tr}(\rho^2)$. The calculation leads to

$$ \mathrm{Tr}(\rho^2) = \left(\frac{1}{\cosh(\eta/2)}\right)^4 \sum_k \left(\tanh \frac{\eta}{2}\right)^{4k} \quad (48) $$

This series sums to $(1/\cosh\eta)$, as given in Equation (46).

We can approach this problem using the Wigner function. The Wigner function for the two-oscillator system is [4]

$$ W_0(x_1, p_1; x_2, p_2) = \left(\frac{1}{\pi}\right)^2 \exp\left[-(x_1^2 + p_1^2 + x_2^2 + p_2^2)\right] \quad (49) $$

If we pretend not to make measurements on the second oscillator coordinate, the $x_2$ and $p_2$ variables have to be integrated out [8].
The net result becomes the Wigner function for the first oscillator.

The canonical transformation of Equations (37) and (38) changes this Wigner function to

$$ W(x_1, x_2; p_1, p_2) = \left(\frac{1}{\pi}\right)^2 \exp \left\{ -\frac{1}{2} [e^\eta (x_1 - x_2)^2 + e^{-\eta} (x_1 + x_2)^2 + e^{-\eta}(p_1 - p_2)^2 + e^\eta (p_1 + p_2)^2] \right\} \quad (50) $$

If we do not observe the second pair of variables, we have to integrate this function over $x_2$ and $p_2$:

$$ W_{\eta}(x_1, p_1) = \int W(x_1, x_2; p_1, p_2) dx_2 dp_2 \quad (51) $$

and the evaluation of this integration leads to [8]

$$ W_{\eta}(x,p) = \frac{1}{\pi \cosh \eta} \exp\left[-\left(\frac{x^2 + p^2}{\cosh \eta}\right)\right] \quad (52) $$

where we use $x$ and $p$ for $x_1$ and $p_1$ respectively.
---PAGE_BREAK---

This Wigner function is of the form given in Equation (32) for the thermal excitation, if we identify the squeeze parameter $\eta$ as [23]

$$ \cosh \eta = \frac{1}{\tanh(1/2T)} \quad (53) $$

The failure to make measurements on the second oscillator leads to the radial expansion of the Wigner phase space, as in the case of the thermal excitation.

## 5.2. Non-Canonical Approach

As we noted before, among the fifteen Dirac matrices, ten of them can be used for canonical transformations in classical mechanics, and thus in quantum mechanics. They play a special role in quantum optics [2–5].

The remaining five play their roles when changes in the phase-space area are allowed. In quantum mechanics, the area can be increased, but it has a lower limit, namely Planck's constant. In classical mechanics, this constraint does not exist. The mathematical formalism given in this paper allows us to study this aspect of the system of coupled oscillators.

Let us choose the following three matrices from those in Equations (20) and (21).
$$ S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix} \quad (54) $$

They satisfy the closed set of commutation relations:

$$ [S_3, K_2] = iQ_2, [S_3, Q_2] = -iK_2, [K_2, Q_2] = -iS_3 \quad (55) $$

This is the Lie algebra for the $Sp(2)$ group. This is the symmetry group applicable to the single-oscillator phase space [4], with one rotation and two squeezes. These matrices generate the same transformation for the first and second oscillators.

We can choose three other sets with similar properties. They are:

$$ S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, Q_1 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & -\sigma_3 \end{pmatrix}, K_1 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & -\sigma_1 \end{pmatrix} \quad (56) $$

$$ L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, K_1 = \frac{i}{2} \begin{pmatrix} -\sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix} \quad (57) $$

and

$$ L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, -Q_1 = \frac{i}{2} \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix} \quad (58) $$

These matrices also satisfy the commutation relations given in Equation (55). In this case, the squeeze transformations take opposite directions in the second phase space.

Since all these transformations are canonical, they leave the area of each phase space invariant. However, let us look at the non-canonical generator $G_3$ of Equation (25).
It generates the transformation matrix of the form:

$$ \begin{pmatrix} e^{\eta} I & 0 \\ 0 & e^{-\eta} I \end{pmatrix} \quad (59) $$

If $\eta$ is positive, this matrix expands the first phase space while contracting the second. This contraction of the second phase space is allowed in classical mechanics, but it has a lower limit in quantum mechanics.

The expansion of the first phase space is exactly like the thermal expansion resulting from our failure to observe the second oscillator that belongs to the rest of the universe. If we expand the system of Dirac's ten oscillator matrices to the world of his fifteen Majorana matrices, we can expand and
---PAGE_BREAK---

contract the first and second phase spaces without mixing them up. We can thus construct a model where the observed world and the rest of the universe remain separated. In the observable world, quantum mechanics remains valid with thermal excitations. In the rest of the universe, since the area of the phase space can decrease without lower limit, only classical mechanics is valid.

During the expansion/contraction process, the product of the areas of the two phase spaces remains constant. This may or may not be an extended interpretation of the uncertainty principle, but we choose not to speculate further on this issue.

Let us turn our attention to the fact that the groups $SL(4,r)$ and $Sp(4)$ are locally isomorphic to $O(3,3)$ and $O(3,2)$ respectively. This means that we can do quantum mechanics in one of the $O(3,2)$ subgroups of $O(3,3)$, as Dirac noted in his 1963 paper [1]. The remaining generators belong to Feynman's rest of the universe.

### 5.3. Entropy and the Expanding Wigner Phase Space

We have seen how Feynman's rest of the universe increases the radius of the Wigner function. It is important to note that the entropy of the system also increases.

Let us go back to the density matrix.
The standard way to measure this ignorance is to calculate the entropy defined as [16,24–27]:

$$ S = -\operatorname{Tr}(\rho \ln \rho) \qquad (60) $$

where $S$ is measured in units of Boltzmann's constant. If we use the density matrix given in Equation (44), the entropy becomes:

$$ S = 2\left\{\cosh^2\left(\frac{\eta}{2}\right) \ln\left(\cosh\frac{\eta}{2}\right) - \sinh^2\left(\frac{\eta}{2}\right) \ln\left(\sinh\frac{\eta}{2}\right)\right\} \quad (61) $$

In order to express this equation in terms of the temperature variable $T$, we write Equation (53) as:

$$ \cosh \eta = \frac{1 + e^{-1/T}}{1 - e^{-1/T}} \qquad (62) $$

which leads to:

$$ \cosh^2\left(\frac{\eta}{2}\right) = \frac{1}{1 - e^{-1/T}}, \quad \sinh^2\left(\frac{\eta}{2}\right) = \frac{e^{-1/T}}{1 - e^{-1/T}} \qquad (63) $$

Then the entropy of Equation (61) takes the form [8]:

$$ S = \left(\frac{1}{T}\right) \left\{ \frac{1}{e^{1/T} - 1} \right\} - \ln\left(1 - e^{-1/T}\right) \qquad (64) $$

This is the familiar expression for the entropy of an oscillator state in thermal equilibrium. Thus, for this oscillator system, we can relate our ignorance of Feynman's rest of the universe, measured by the coupling parameter $\eta$, to the temperature.

## 6. Concluding Remarks

In this paper, we started with the fifteen four-by-four matrices of the Majorana representation of the Dirac matrices, and the ten generators of the $Sp(4)$ group corresponding to Dirac's oscillator matrices. Their explicit forms are given in the literature [6,7], and their roles in modern physics are well known [3,4,11]. We re-organized them into tables.

The difference between these two representations consists of five matrices. The physics of this difference is discussed in terms of Feynman's rest of the universe [9]. According to Feynman, this universe consists of the world in which we do quantum mechanics, and the rest of the universe.
In the rest of the universe, our physical laws may or may not be respected. In the case of coupled oscillators, without the lower limit on Planck's constant, we can do classical mechanics, but not quantum mechanics, in the rest of the universe.

---PAGE_BREAK---

In 1971, Feynman et al. [28] published a paper on the oscillator model of hadrons, in which the proton consists of three quarks linked up by oscillator springs. In order to treat this problem, they used a three-particle symmetry group formulated by Dirac in his book on quantum mechanics [29,30]. An interesting problem could be to see what happens to the two quarks when one of them is not observed. Another interesting question could be what happens to one of the quarks when two of them are not observed.

Finally, we note here that group theory is a very powerful tool for approaching problems in modern physics. Different groups can share the same set of commutation relations for their generators. Recently, the group $SL(2,c)$, through its correspondence with $SO(3,1)$, has been shown to be the underlying language for classical and modern optics [4,31]. In this paper, we exploited the correspondence between $SL(4,r)$ and $O(3,3)$, as well as the correspondence between $Sp(4)$ and $O(3,2)$, which was first noted by Paul A. M. Dirac [1].

There could be more applications of group isomorphisms in the future. A comprehensive list of these correspondences is given in Gilmore's book on Lie groups [32].

**Acknowledgments:** We would like to thank Christian Baumgarten for telling us about the $Sp(2)$ symmetry in classical mechanics.

## References

1. Dirac, P.A.M. A remarkable representation of the 3 + 2 de Sitter Group. J. Math. Phys. **1963**, *4*, 901–909. [CrossRef]

2. Yuen, H.P. Two-photon coherent states of the radiation field. Phys. Rev. A **1976**, *13*, 2226–2243. [CrossRef]

3. Yurke, B.S.; McCall, S.L.; Klauder, J.R. SU(2) and SU(1,1) interferometers. Phys. Rev. A **1986**, *33*, 4033–4054.
[CrossRef] [PubMed]

4. Kim, Y.S.; Noz, M.E. *Phase Space Picture of Quantum Mechanics*; World Scientific Publishing Company: Singapore, 1991.

5. Han, D.; Kim, Y.S.; Noz, M.E.; Yeh, L. Symmetries of two-mode squeezed states. J. Math. Phys. **1993**, *34*, 5493–5508. [CrossRef]

6. Han, D.; Kim, Y.S.; Noz, M.E. O(3,3)-like symmetries of coupled harmonic oscillators. J. Math. Phys. **1995**, *36*, 3940–3954. [CrossRef]

7. Lee, D.-G. The Dirac gamma matrices as "relics" of a hidden symmetry?: As fundamental representation of the algebra Sp(4,r). J. Math. Phys. **1995**, *36*, 524–530. [CrossRef]

8. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative example of Feynman's rest of the universe. Am. J. Phys. **1999**, *67*, 61–66. [CrossRef]

9. Feynman, R.P. *Statistical Mechanics*; Benjamin/Cummings: Reading, MA, USA, 1972.

10. Majorana, E. Relativistic theory of particles with arbitrary intrinsic angular momentum. Nuovo Cimento **1932**, *9*, 335–341. [CrossRef]

11. Itzykson, C.; Zuber, J.B. *Quantum Field Theory*; McGraw-Hill: New York, NY, USA, 1980.

12. Goldstein, H. *Classical Mechanics*, 2nd ed.; Addison-Wesley: Reading, MA, USA, 1980.

13. Abraham, R.; Marsden, J.E. *Foundations of Mechanics*, 2nd ed.; Benjamin/Cummings: Reading, MA, USA, 1978.

14. Kim, Y.S.; Li, M. Squeezed states and thermally excited states in the Wigner phase-space picture of quantum mechanics. Phys. Lett. A **1989**, *139*, 445–448. [CrossRef]

15. Davies, R.W.; Davies, K.T.R. On the Wigner distribution function for an oscillator. Ann. Phys. **1975**, *89*, 261–273. [CrossRef]

16. Landau, L.D.; Lifshitz, E.M. *Statistical Physics*; Pergamon Press: London, UK, 1958.

17. Yurke, B.; Potasek, M. Obtainment of thermal noise from a pure state. Phys. Rev. A **1987**, *36*, 3464–3466. [CrossRef] [PubMed]

18. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. Am. J. Phys. **1989**, *57*, 692–697. [CrossRef]

19. Barnett, S.M.; Phoenix, S.J.D.
Information theory, squeezing and quantum correlations. Phys. Rev. A **1991**, *44*, 535-545. [CrossRef] [PubMed] +---PAGE_BREAK--- + +20. Kim, Y.S.; Noz, M.E.; Oh, S.H. A simple method for illustrating the difference between the homogeneous and inhomogeneous Lorentz Groups. Am. J. Phys. **1979**, *47*, 892–897. [CrossRef] + +21. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, the Netherlands, 1986. + +22. Giedke, G.; Wolf, M.M.; Krueger, O.; Werner, R.F.; Cirac, J.J. Entanglement of formation for symmetric Gaussian states. Phys. Rev. Lett. **2003**, *91*, 107901–107904. [CrossRef] [PubMed] + +23. Han, D.; Kim, Y.S.; Noz, M.E. Lorentz-squeezed hadrons and hadronic temperature. Phys. Lett. A **1990**, *144*, 111–115. [CrossRef] + +24. von Neumann, J. *Mathematical Foundation of Quantum Mechanics*; Princeton University: Princeton, NJ, USA, 1955. + +25. Fano, U. Description of states in quantum mechanics by density matrix and operator techniques. Rev. Mod. Phys. **1957**, *29*, 74–93. [CrossRef] + +26. Blum, K. *Density Matrix Theory and Applications*; Plenum: New York, NY, USA, 1981. + +27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz transformations. Phys. Lett. A **1990**, *147*, 343–347. [CrossRef] + +28. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current matrix elements from a relativistic Quark Model. Phys. Rev. D **1971**, *3*, 2706–2732. [CrossRef] + +29. Dirac, P.A.M. *Principles of Quantum Mechanics*, 4th ed.; Oxford University: London, UK, 1958. + +30. Hussar, P.E.; Kim, Y.S.; Noz, M.E. Three-particle symmetry classifications according to the method of Dirac. Am. J. Phys. **1980**, *48*, 1038–1042. [CrossRef] + +31. Başkal, S.; Kim, Y.S. Lorentz Group in ray and polarization optics. In *Mathematical Optics: Classical, Quantum and Imaging Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Press: New York, NY, USA, 2012. + +32. Gilmore, R. 
*Lie Groups, Lie Algebras, and Some of Their Applications*; Wiley: New York, NY, USA, 1974. + +© 2012 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access +article distributed under the terms and conditions of the Creative Commons Attribution +(CC BY) license (http://creativecommons.org/licenses/by/4.0/). +---PAGE_BREAK--- + +Article + +# Symmetries Shared by the Poincaré Group and the Poincaré Sphere + +Young S. Kim ¹,* and Marilyn E. Noz ² + +¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA + +² Department of Radiology, New York University, New York, NY 10016, USA; marilyne.noz@gmail.com + +* Author to whom correspondence should be addressed; yskim@umd.edu; Tel.: +1-301-937-1306. + +Received: 29 May 2013; in revised form: 9 June 2013; Accepted: 9 June 2013; Published: 27 June 2013 + +**Abstract:** Henri Poincaré formulated the mathematics of Lorentz transformations, known as the Poincaré group. He also formulated the Poincaré sphere for polarization optics. It is shown that these two mathematical instruments can be derived from the two-by-two representations of the Lorentz group. Wigner's little groups for internal space-time symmetries are studied in detail. While the particle mass is a Lorentz-invariant quantity, it is shown to be possible to address its variations in terms of the decoherence mechanism in polarization optics. + +**Keywords:** Poincaré group; Poincaré sphere; Wigner's little groups; particle mass; decoherence mechanism; two-by-two representations; Lorentz group + +## 1. Introduction + +It was Henri Poincaré who worked out the mathematics of Lorentz transformations before Einstein and Minkowski, and the Poincaré group is the underlying language for special relativity. In order to analyze the polarization of light, Poincaré also constructed a graphic illustration known as the Poincaré sphere [1–3]. 
+ +It is of interest to see whether the Poincaré sphere can also speak the language of special relativity. In that case, we can study the physics of relativity in terms of what we observe in optical laboratories. For that purpose, we note first that the Lorentz group starts as a group of four-by-four matrices, while the Poincaré sphere is based on the two-by-two matrix consisting of four Stokes parameters. Thus, it is essential to find a two-by-two representation of the Lorentz group. Fortunately, this representation exists in the literature [4,5], and we shall use it in this paper. + +As for the problems in relativity, we shall discuss here Wigner’s little groups dictating the internal space-time symmetries of relativistic particles [6]. In his original paper of 1939 [7], Wigner considered the subgroups of the Lorentz group, whose transformations leave the four-momentum of a given particle invariant. While this problem has been extensively discussed in the literature, we propose here to study it using Naimark’s two-by-two representation of the Lorentz group [4,5]. + +This two-by-two representation is useful for communicating with the symmetries of the Poincaré sphere based on the four Stokes parameters, which can take the form of two-by-two matrices. We shall prove here that the Poincaré sphere shares the same symmetry property as that of the Lorentz group, particularly in approaching Wigner’s little groups. By doing this, we can study the Lorentz symmetries of elementary particles from what we observe in optical laboratories. + +The present paper starts from an unpublished note based on an invited paper presented by one of the authors (YSK) at the Fedorov Memorial Symposium: Spins and Photonic Beams at Interface held in Minsk (2011) [8]. To this, we have added a detailed discussion of how the decoherence mechanism in polarization optics is mathematically equivalent to a massless particle gaining mass to become a massive particle. 
We are particularly interested in how the variation of mass can be accommodated in the study of internal space-time symmetries.

---PAGE_BREAK---

In Section 2, we define the symmetry problem we propose to study in this paper. We are interested in the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. This is an old problem and has been repeatedly discussed in the literature [6,7,9]. In this paper, we discuss this problem using the two-by-two formulation of the Lorentz group. This two-by-two language is directly applicable to polarization optics and the Poincaré sphere.

While Wigner formulated his little groups for particles in their given Lorentz frames, we give a formalism applicable to all Lorentz frames. In his 1939 paper, Wigner pointed out that his little groups are different for massive, massless and imaginary-mass particles. In Section 3, we discuss the possibility of deriving the symmetry properties for massive and imaginary-mass particles from those of the massless particle.

In Section 4, we assemble the variables in polarization optics and define the matrix operators corresponding to transformations applicable to those variables. We write the Stokes parameters in the form of a two-by-two matrix. The Poincaré sphere can be constructed from this two-by-two Stokes matrix. In Section 5, we note that there can be two radii for the Poincaré sphere. Poincaré's original sphere has one fixed radius, but this radius can change, depending on the degree of coherence. Based on what we studied in Section 3, we can associate this change of the radius with a change in the mass of the particle.

## 2. Poincaré Group and Wigner's Little Groups

Poincaré formulated the group theory of Lorentz transformations applicable to the four-dimensional space consisting of three space coordinates and one time variable. There are six generators for this group, consisting of three rotation and three boost generators.
In addition, Poincaré considered translations applicable to those four space-time variables, with four generators. If we add these four generators to the six generators of the homogeneous Lorentz group, the result is the inhomogeneous Lorentz group [7] with ten generators. This larger group is called the Poincaré group in the literature.

The four translation generators produce space-time four-vectors consisting of the energy and momentum. Thus, within the framework of the Poincaré group, we can consider the subgroup of the Lorentz group for a fixed value of the momentum [7]. This subgroup defines the internal space-time symmetry of the particle. Let us consider a particle at rest. Its four-momentum consists of its mass as the time-like component and zeros for the three momentum components:

$$ (m, 0, 0, 0) \qquad (1) $$

For convenience, we use the four-vector conventions $(t, z, x, y)$ and $(E, p_z, p_x, p_y)$.

This four-momentum of Equation (1) is invariant under three-dimensional rotations applicable only to the $z, x, y$ coordinates. The dynamical variable associated with this rotational degree of freedom is called the spin of the particle.

We are then interested in what happens when the particle moves with a non-zero momentum. If it moves along the z direction, the four-momentum takes the value:

$$ m(\cosh \eta, \sinh \eta, 0, 0) \qquad (2) $$

which means:

$$ p_0 = m\cosh \eta, \quad p_z = m\sinh \eta, \quad e^{\eta} = \sqrt{\frac{p_0 + p_z}{p_0 - p_z}} \qquad (3) $$

Accordingly, the little group consists of Lorentz-boosted rotation matrices. This aspect of the little group has been discussed in the literature [6,9]. The question then is whether we can carry out the same logic using two-by-two matrices.

---PAGE_BREAK---

Of particular interest is what happens when the transformation parameter, $\eta$, becomes very large and the four-momentum becomes that of a massless particle.
This problem has also been discussed in the literature within the framework of four-dimensional Minkowski space. The $\eta$ parameter becomes large when the momentum becomes large, but it can also become large when the mass becomes very small. The two-by-two formulation allows us to study these two cases separately, as we will do in Section 3.

If the particle has an imaginary mass, it moves faster than light and is not observable. Yet, particles of this kind play important roles in Feynman diagrams, and their space-time symmetry should also be studied. In his original paper [7], Wigner studied the little group as the subgroup of the Lorentz group whose transformations leave invariant the four-momentum of the form:

$$ (0, k, 0, 0) \tag{4} $$

Wigner observed that this four-momentum remains invariant under Lorentz boosts along the x or y direction.

If we boost this four-momentum along the z direction, the four-momentum becomes:

$$ k(\sinh\eta, \cosh\eta, 0, 0) \tag{5} $$

with:

$$ e^{\eta} = \sqrt{\frac{p_0 + p_z}{p_z - p_0}} \tag{6} $$

The two-by-two formalism also allows us to study this problem.

In Section 2.1, we shall present the two-by-two representation of the Lorentz group. In Section 2.2, we shall present Wigner's little groups in this two-by-two representation. While Wigner's analysis was based on particles in their fixed Lorentz frames, we are interested in what happens when they start moving. We shall deal with this problem in Section 3.

## 2.1. Two-by-Two Representation of the Lorentz Group

The Lorentz group starts with a group of four-by-four matrices performing Lorentz transformations on the Minkowskian vector space of $(t, z, x, y)$, leaving the quantity:

$$ t^2 - z^2 - x^2 - y^2 \tag{7} $$

invariant. It is possible to perform this transformation using two-by-two representations [4,5]. This two-by-two representation is known as $SL(2,c)$, the universal covering group for the Lorentz group.
In this two-by-two representation, we write the four-vector as a matrix:

$$ X = \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \tag{8} $$

Then, its determinant is precisely the quantity given in Equation (7). Thus, the Lorentz transformation on this matrix is a determinant-preserving transformation. Let us consider the transformation matrix:

$$ G = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad G^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix} \tag{9} $$

with:

$$ \det(G) = 1 \tag{10} $$

The $G$ matrix starts with four complex numbers. Due to the above condition on its determinant, it has six independent parameters. The group of these $G$ matrices is known to be locally isomorphic to

---PAGE_BREAK---

the group of four-by-four matrices performing Lorentz transformations on the four-vector $(t, z, x, y)$. In other words, for each $G$ matrix, there is a corresponding four-by-four Lorentz-transformation matrix, as is illustrated in Appendix A.

The matrix, $G$, is not a unitary matrix, because its Hermitian conjugate is not always its inverse. The group can have a unitary subgroup, called $SU(2)$, performing rotations on electron spins. As far as we can see, this $G$-matrix formalism was first presented by Naimark in 1954 [4]. Thus, we call this formalism the Naimark representation of the Lorentz group. We shall see first that this representation is convenient for studying the space-time symmetries of particles. We shall then note that this Naimark representation is the natural language for the Stokes parameters in polarization optics.

With this point in mind, we can now consider the transformation:

$$ X' = GXG^{\dagger} \qquad (11) $$

Since $G$ is not a unitary matrix, this is not a unitary transformation. In order to make this distinction, we call this the "Naimark transformation".
This expression can be written explicitly as:

$$ \begin{pmatrix} t' + z' & x' - iy' \\ x' + iy' & t' - z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (12) $$

For this transformation, we have to deal with four complex numbers. However, for all practical purposes, we may work with two unitary matrices:

$$ Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix}, \qquad R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (13) $$

and two Hermitian matrices:

$$ B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}, \qquad S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \quad (14) $$

whose Hermitian conjugates are not their inverses. The two unitary matrices in Equation (13) lead to rotations around the *z* and *y* axes, respectively. The Hermitian matrices in Equation (14) perform Lorentz boosts along the *z* and *x* directions, respectively.

Repeated applications of these four matrices lead to the most general form of the $G$ matrix of Equation (9) with six independent parameters. For each two-by-two Naimark transformation, there is a four-by-four matrix performing the corresponding Lorentz transformation on the four-component four-vector. In Appendix A, the four-by-four equivalents are given for the matrices of Equations (13) and (14).

It was Einstein who defined the energy-momentum four-vector and showed that it also has the same Lorentz-transformation law as the space-time four-vector.
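These correspondences can be checked numerically. The following sketch (assuming NumPy; the helper names are ours, not the paper's) applies the boost matrix $B(\eta)$ of Equation (14) to the four-vector matrix $X$ of Equation (8) through the Naimark transformation of Equation (11), and verifies that the determinant, i.e., $t^2 - z^2 - x^2 - y^2$, is preserved:

```python
import numpy as np

def four_vector_matrix(t, z, x, y):
    """X of Equation (8): det(X) = t^2 - z^2 - x^2 - y^2."""
    return np.array([[t + z, x - 1j * y],
                     [x + 1j * y, t - z]])

def naimark(G, X):
    """Naimark transformation of Equation (11): X' = G X G-dagger."""
    return G @ X @ G.conj().T

eta = 0.7
B = np.array([[np.exp(eta / 2), 0.0],
              [0.0, np.exp(-eta / 2)]])  # boost along z, Equation (14)

t, z, x, y = 2.0, 0.5, 0.3, 0.1
X = four_vector_matrix(t, z, x, y)
Xp = naimark(B, X)

# The determinant (the Minkowski interval) is preserved...
assert np.isclose(np.linalg.det(Xp).real, t**2 - z**2 - x**2 - y**2)

# ...and (t, z) transform as a Lorentz boost: t' = t cosh(eta) + z sinh(eta)
tp = (Xp[0, 0] + Xp[1, 1]).real / 2
assert np.isclose(tp, t * np.cosh(eta) + z * np.sinh(eta))
```

The same check goes through for $Z(\delta)$, $R(\theta)$ and $S(\lambda)$, since each has unit determinant.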
We write the energy-momentum four-vector as:

$$ P = \begin{pmatrix} E + p_z & p_x - ip_y \\ p_x + ip_y & E - p_z \end{pmatrix} \qquad (15) $$

with:

$$ \det(P) = E^2 - p_x^2 - p_y^2 - p_z^2 \qquad (16) $$

which means:

$$ \det(P) = m^2 \qquad (17) $$

where $m$ is the particle mass.

---PAGE_BREAK---

Now, Einstein's transformation law can be written as:

$$ P' = GPG^{\dagger} \quad (18) $$

or explicitly:

$$ \begin{pmatrix} E' + p_z' & p_x' - ip_y' \\ p_x' + ip_y' & E' - p_z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} E + p_z & p_x - ip_y \\ p_x + ip_y & E - p_z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (19) $$

## 2.2. Wigner's Little Groups

In 1939 [7], Wigner was interested in constructing subgroups of the Lorentz group whose transformations leave a given four-momentum invariant. He called these subsets "little groups". Thus, Wigner's little group consists of two-by-two matrices satisfying:

$$ P = WPW^{\dagger} \quad (20) $$

This two-by-two $W$ matrix is not an identity matrix, but tells us about the internal space-time symmetry of a particle with a given energy-momentum four-vector. This aspect was not known when Einstein formulated his special relativity in 1905; the internal space-time symmetry was not an issue at that time.

If its determinant is a positive number, the $P$ matrix can be brought to a form proportional to:

$$ P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (21) $$

corresponding to a massive particle at rest.

If the determinant is negative, it can be brought to a form proportional to:

$$ P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad (22) $$

corresponding to an imaginary-mass particle moving faster than light along the z direction, with a vanishing energy component.
+ +If the determinant is zero, we may write P as: + +$$P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad (23)$$ + +which is proportional to the four-momentum matrix for a massless particle moving along the z direction. + +For all three of the above cases, the matrix of the form: + +$$Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \quad (24)$$ + +will satisfy the Wigner condition of Equation (20). This matrix corresponds to rotations around the z axis, as is shown in the Appendix A. + +For the massive particle with the four-momentum of Equation (21), the Naimark transformations with the rotation matrix of the form: + +$$R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (25)$$ +---PAGE_BREAK--- + +also leave the *P* matrix of Equation (21) invariant. Together with the *Z*(*δ*) matrix, this rotation matrix +leads to the subgroup consisting of the unitary subset of the *G* matrices. The unitary subset of *G* is +*SU*(2), corresponding to the three-dimensional rotation group dictating the spin of the particle [9]. + +For the massless case, the transformations with the triangular matrix of the form: + +$$ +\begin{pmatrix} +1 & \gamma \\ +0 & 1 +\end{pmatrix} +\qquad (26) +$$ + +leave the momentum matrix of Equation (23) invariant. The physics of this matrix has a stormy history, +and the variable, $\gamma$, leads to gauge transformation applicable to massless particles [6,10]. + +For a particle with its imaginary mass, the W matrix of the form: + +$$ +S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \tag{27} +$$ + +will leave the four-momentum of Equation (22) invariant. This unobservable particle does not appear +to have observable internal space-time degrees of freedom. + +Table 1 summarizes the transformation matrices for Wigner’s subgroups for massive, massless and imaginary-mass particles. 
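The three cases can be verified together. Below is a small numerical sketch (assuming NumPy; it is not part of the original paper) that checks the Wigner condition of Equation (20), $P = WPW^\dagger$, for the momentum matrices of Equations (21)–(23) against the little-group matrices of Equations (24)–(27):

```python
import numpy as np

theta, gamma, lam, delta = 0.6, 1.3, 0.8, 0.9

# Momentum matrices of Equations (21)-(23) paired with their little-group matrices
cases = {
    "massive": (np.diag([1.0, 1.0]),
                np.array([[np.cos(theta/2), -np.sin(theta/2)],
                          [np.sin(theta/2),  np.cos(theta/2)]])),      # Eq. (25)
    "massless": (np.diag([1.0, 0.0]),
                 np.array([[1.0, gamma],
                           [0.0, 1.0]])),                              # Eq. (26)
    "imaginary mass": (np.diag([1.0, -1.0]),
                       np.array([[np.cosh(lam/2), np.sinh(lam/2)],
                                 [np.sinh(lam/2), np.cosh(lam/2)]])),  # Eq. (27)
}

# Rotation around the z axis, Eq. (24), applies to all three cases
Z = np.diag([np.exp(1j*delta/2), np.exp(-1j*delta/2)])

for name, (P, W) in cases.items():
    assert np.allclose(W @ P @ W.conj().T, P), name   # Wigner condition, Eq. (20)
    assert np.allclose(Z @ P @ Z.conj().T, P), name
```

Note that the check for the massless case succeeds for any value of $\gamma$, which is the two-by-two version of the gauge degree of freedom discussed below Equation (26).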
Of course, it is a challenging problem to have one expression for all three cases, and this problem has been addressed in the literature [11].

**Table 1.** Wigner's Little Groups. The little groups are the subgroups of the Lorentz group, whose transformations leave the four-momentum of a given particle invariant. Thus, the little groups define the internal space-time symmetries of particles. The four-momentum remains invariant under the rotation around it. In addition, the four-momentum remains invariant under the following transformations, which are different for massive, massless and imaginary-mass particles.

| Particle mass | Four-momentum | Transform matrices |
|---|---|---|
| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ |
| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix}$ |
| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ |
+ +**3. Lorentz Completion of Wigner's Little Groups** + +In his original paper [7], Wigner worked out his little groups for specific Lorentz frames. For the massive particle, he constructed his little group in the frame where the particle is at rest. For the imaginary-mass particle, the energy-component of his frame is zero. + +For the massless particle, it moves along the *z* direction with a nonzero momentum. There are no specific frames particularly convenient for us. Thus, the specific frame can be chosen for an arbitrary value of the momentum, and the triangular matrix of Equation (26) should remain invariant under Lorentz boosts along the *z* direction. + +For the massive particle, let us Lorentz-boost the four-momentum matrix of Equation (21) by performing a Naimark transformation: + +$$ +\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (28) +$$ + +which leads to: + +$$ +\left( \begin{array}{cc} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{array} \right) \qquad (29) +$$ +---PAGE_BREAK--- + +This resulting matrix corresponds to the Lorentz-boosted four-momentum given in Equation (2). For simplicity, we let $m = 1$ hereafter in this paper. The Lorentz transformation applicable to the four-momentum matrix is not a similarity transformation, but it is a Naimark transformation, as defined in Equation (11). 
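The step from Equation (28) to Equation (29) is a one-line computation; the following sketch (assuming NumPy) confirms it, and checks that the result agrees with the four-momentum of Equation (2) for $m = 1$:

```python
import numpy as np

eta = 0.5
B = np.diag([np.exp(eta/2), np.exp(-eta/2)])   # boost matrix of Eq. (28)
P_rest = np.eye(2)                             # four-momentum of Eq. (21), m = 1

# Naimark transformation of Eq. (11): P' = B P B-dagger, giving Eq. (29)
P_boosted = B @ P_rest @ B.conj().T
assert np.allclose(P_boosted, np.diag([np.exp(eta), np.exp(-eta)]))

# Eq. (2): the diagonal entries are E +/- p_z with E = cosh(eta), p_z = sinh(eta)
E, pz = np.cosh(eta), np.sinh(eta)
assert np.allclose(P_boosted, np.diag([E + pz, E - pz]))
```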
On the other hand, the rotation matrix of Equation (25) is Lorentz-boosted as a similarity transformation:

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (30) $$

and it becomes:

$$ \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (31) $$

If we perform the Naimark transformation of the four-momentum matrix of Equation (29) with this Lorentz-boosted rotation matrix:

$$ \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & e^{-\eta} \sin(\theta/2) \\ -e^{\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (32) $$

the result is the four-momentum matrix of Equation (29). This means that the Lorentz-boosted rotation matrix of Equation (31) represents the little group, whose transformations leave the four-momentum matrix of Equation (29) invariant.

For the imaginary-mass case, the Lorentz-boosted four-momentum matrix becomes:

$$ \begin{pmatrix} e^\eta & 0 \\ 0 & -e^{-\eta} \end{pmatrix} \quad (33) $$

The little group matrix is:

$$ \begin{pmatrix} \cosh(\lambda/2) & e^\eta \sinh(\lambda/2) \\ e^{-\eta} \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \quad (34) $$

where $\eta$ is given in Equation (6).

For the massless case, if we boost the four-momentum matrix of Equation (23), the result is:

$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad (35) $$

Here, the $\eta$ parameter is an independent variable and cannot be defined in terms of the momentum or energy.

The remaining problem is to see whether the massive and imaginary-mass cases collapse to the massless case in the large-$\eta$ limit.
This variable becomes large when the momentum becomes large or the mass becomes small. We shall discuss these two cases separately.

### 3.1. Large-Momentum Limit

While Wigner defined his little group for the massive particle in its rest frame in his original paper [7], the little group represented by Equation (31) is applicable to the moving particle, whose four-momentum is given in Equation (29). This matrix can also be written as:

$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & e^{-2\eta} \end{pmatrix} \quad (36) $$

---PAGE_BREAK---

In the limit of large $\eta$, we can change the above expression into:

$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (37) $$

This process is continuous, but not necessarily analytic [11]. After making this transition, we can come back to the original frame to obtain the four-momentum matrix of Equation (23).

The remaining problem is the Lorentz-boosted rotation matrix of Equation (31). If this matrix is to remain finite as $\eta$ approaches infinity, the upper-right element should stay finite for large values of $\eta$. Let it be $\gamma$. Then:

$$ -e^{\eta} \sin(\theta/2) = \gamma \qquad (38) $$

This means that the angle $\theta$ has to become zero. As a consequence, the little group matrix of Equation (31) becomes the triangular matrix given in Equation (26) for massless particles.

Imaginary-mass particles move faster than light, and they are not observable. On the other hand, the mathematics applicable to Wigner's little group for these particles has been useful for the two-by-two beam transfer matrices in ray and polarization optics [12].

Let us go back to the four-momentum matrix of Equation (22).
If we boost this matrix, it becomes:

$$ \begin{pmatrix} e^{\eta} & 0 \\ 0 & -e^{-\eta} \end{pmatrix} \qquad (39) $$

which can be written as:

$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & -e^{-2\eta} \end{pmatrix} \qquad (40) $$

This matrix can be changed into the form of Equation (37) in the limit of large $\eta$.

Indeed, the little groups for the massive, massless and imaginary-mass cases coincide in the large-$\eta$ limit. Thus, it is possible to jump from one little group to another; this is a continuous process, but not necessarily an analytic one [12].

The $\eta$ parameter can become large as the momentum becomes large or as the mass becomes small. In this subsection, we considered the case of large momentum. However, it is also of interest to see the limiting process when the mass becomes small, especially in view of the fact that neutrinos have small masses.

### 3.2. Small-Mass Limit

Let us start with a massive particle with fixed energy, $E$. Then, $p_0 = E$ and $p_z = E \cos \chi$. The four-momentum matrix is:

$$ E \begin{pmatrix} 1 + \cos \chi & 0 \\ 0 & 1 - \cos \chi \end{pmatrix} \qquad (41) $$

The determinant of this matrix is $E^2 (\sin \chi)^2$. In the regime of the Lorentz group, this is the $(\text{mass})^2$ and is a Lorentz-invariant quantity. There are no Lorentz transformations that change the angle, $\chi$. Thus, with this extra variable, it is possible to study the little groups for variable masses, including the small-mass limit and the zero-mass case.

If $\chi = 0$, the matrix of Equation (41) becomes the four-momentum matrix of a massless particle. As $\chi$ becomes a small positive number, the matrix of Equation (41) can be written as:

$$ E(\sin\chi) \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \qquad (42) $$

---PAGE_BREAK---

with:

$$ e^{\eta} = \sqrt{\frac{1 + \cos \chi}{1 - \cos \chi}} \qquad (43) $$

Here, again, the determinant of Equation (42) is $E^2(\sin \chi)^2$.
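The factorization in Equations (42) and (43) can be checked numerically. In the sketch below (assuming NumPy), the mass is $E\sin\chi$, and the factorized form reproduces the four-momentum matrix of Equation (41):

```python
import numpy as np

E, chi = 2.0, 0.3                      # fixed energy and a small angle
mass = E * np.sin(chi)                 # det of Eq. (41) is E^2 (sin chi)^2

eta = np.log(np.sqrt((1 + np.cos(chi)) / (1 - np.cos(chi))))   # Eq. (43)

P_41 = E * np.diag([1 + np.cos(chi), 1 - np.cos(chi)])         # Eq. (41)
P_42 = E * np.sin(chi) * np.diag([np.exp(eta), np.exp(-eta)])  # Eq. (42)

assert np.allclose(P_41, P_42)                   # the two forms agree
assert np.isclose(np.linalg.det(P_41), mass**2)  # Lorentz-invariant (mass)^2
```

As $\chi \to 0$ with $E$ fixed, the mass $E\sin\chi$ vanishes while $\eta$ diverges, which is the small-mass limit discussed in the text.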
With this matrix, we can construct Wigner's little group for each value of the angle, $\chi$. If $\chi$ is not zero, even if it is very small, the little group is $O(3)$-like, as in the case of all massive particles. As the angle, $\chi$, varies continuously from zero to 90°, the mass increases from zero to its maximum value. + +It is important to note that the little groups are different for the small-mass limit and for the zero-mass case. In this section, we studied the internal space-time symmetries dictated by Wigner's little groups, and we are able to present their Lorentz-covariant picture in Table 2. + +**Table 2.** Covariance of the energy-momentum relation and covariance of the internal space-time symmetry groups. The $\gamma$ parameter for the massless case has been studied in earlier papers in the four-by-four matrix formulation [6]. It corresponds to a gauge transformation. Among the three spin components, $S_3$ is along the direction of the momentum and remains invariant. It is called the "helicity". + +
| Massive, Slow | Covariance | Massless, Fast |
|---|---|---|
| $E = p^2/2m$ | Einstein's $E = mc^2$ | $E = cp$ |
| $S_3$ | Wigner's Little Group | Helicity |
| $S_1, S_2$ | | Gauge Transformation |
## 4. Jones Vectors and Stokes Parameters

In studying polarized light propagating along the z direction, the traditional approach is to consider the x and y components of the electric fields. Their amplitude ratio and phase difference determine the state of polarization. Thus, we can change the polarization either by adjusting the amplitudes, by changing the relative phase or both. For convenience, we call the optical device that changes amplitudes an "attenuator" and the device that changes the relative phase a "phase shifter".

The traditional language for this two-component light is the Jones-vector formalism, which is discussed in standard optics textbooks [13]. In this formalism, the above two components are combined into one column matrix, with the exponential form for the sinusoidal function:

$$\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp\{i(kz - \omega t + \phi_1)\} \\ b \exp\{i(kz - \omega t + \phi_2)\} \end{pmatrix} \qquad (44)$$

This column matrix is called the Jones vector.

When the beam goes through a medium with different indices of refraction for the x and y directions, we have to apply the matrix:

$$\begin{pmatrix} e^{i\delta_1} & 0 \\ 0 & e^{i\delta_2} \end{pmatrix} = e^{i(\delta_1+\delta_2)/2} \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \qquad (45)$$

with $\delta = \delta_1 - \delta_2$. In measurement processes, the overall phase factor, $e^{i(\delta_1+\delta_2)/2}$, cannot be detected and can therefore be deleted. The polarization effect of the filter is solely determined by the matrix:

$$Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \qquad (46)$$

which leads to a phase difference of $\delta$ between the x and y components. The form of this matrix is given in Equation (13), which serves as the rotation around the z axis in the Minkowski space and time. 
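As a quick numerical check (variable names are ours), the diagonal phase-shifter matrix factors into an undetectable overall phase times the $Z(\delta)$ of Equation (46):

```python
import numpy as np

# Factor diag(e^{i d1}, e^{i d2}) into an overall phase times Z(delta),
# with delta = d1 - d2, consistent with Equation (46).
d1, d2 = 0.9, 0.4
delta = d1 - d2
full = np.diag([np.exp(1j * d1), np.exp(1j * d2)])
overall = np.exp(1j * (d1 + d2) / 2)
Z = np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])
print(np.allclose(full, overall * Z))  # True
```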
---PAGE_BREAK---

The attenuation coefficients along the x and y directions could also be different. This leads to the matrix [14]:

$$\begin{pmatrix} e^{-\eta_1} & 0 \\ 0 & e^{-\eta_2} \end{pmatrix} = e^{-(\eta_1+\eta_2)/2} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (47)$$

with $\eta = \eta_2 - \eta_1$. If $\eta_1 = 0$ and $\eta_2 = \infty$, the above matrix becomes:

$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (48)$$

which eliminates the y component. This matrix is known as a polarizer in the textbooks [13] and is a special case of the attenuation matrix of Equation (47).

This attenuation matrix tells us that the electric fields are attenuated at two different rates. The exponential factor, $e^{-(\eta_1+\eta_2)/2}$, reduces both components at the same rate and does not affect the state of polarization. The effect of polarization is solely determined by the squeeze matrix [14]:

$$B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (49)$$

This diagonal matrix is given in Equation (14). In the language of space-time symmetries, this matrix performs a Lorentz boost along the z direction.

The polarization axes are not always the x and y axes. For this reason, we need the rotation matrix:

$$R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (50)$$

which, according to Equation (13), corresponds to the rotation around the y axis in the space-time symmetry.

Among the rotation angles, the angle of 45° plays an important role in polarization optics. Indeed, if we rotate the squeeze matrix of Equation (49) by 45°, we end up with the squeeze matrix:

$$S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \qquad (51)$$

which is also given in Equation (14). 
In the language of space-time physics, this matrix leads to a Lorentz boost along the x axis.

Indeed, the $G$ matrix of Equation (9) is the most general form of the transformation matrix applicable to the Jones vector. Each of the above four matrices plays its important role in special relativity, as we discussed in Section 2. Their respective roles in optics and particle physics are given in Table 3.

However, the Jones vector alone cannot tell us whether the two components are coherent with each other. In order to address this important degree of freedom, we use the coherency matrix [1,2]:

$$C = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \qquad (52)$$

with:

$$\langle \psi_i^* \psi_j \rangle = \frac{1}{T} \int_0^T \psi_i^*(t + \tau) \psi_j(t)\, dt \qquad (53)$$
---PAGE_BREAK---

where $T$ is a sufficiently long time interval, much larger than $\tau$. Then, those four elements become [15]:

$$
\begin{aligned}
S_{11} &= \langle \psi_1^* \psi_1 \rangle = a^2, & S_{12} &= \langle \psi_1^* \psi_2 \rangle = ab\,e^{-(\sigma+i\delta)} \\
S_{21} &= \langle \psi_2^* \psi_1 \rangle = ab\,e^{-(\sigma-i\delta)}, & S_{22} &= \langle \psi_2^* \psi_2 \rangle = b^2
\end{aligned}
\qquad (54) $$

The diagonal elements are the squared amplitudes of $\psi_1$ and $\psi_2$, respectively. The off-diagonal elements can be smaller in magnitude than the product of the amplitudes, $ab$, if the two beams are not completely coherent. The $\sigma$ parameter specifies the degree of coherency.

This coherency matrix is not always real, but it is Hermitian. Thus, it can be diagonalized by a unitary transformation. If this matrix is normalized so that its trace is one, it becomes a density matrix [16,17].

**Table 3.** Polarization optics and special relativity sharing the same mathematics. Each matrix has its clear role in both optics and relativity. 
The determinant of the Stokes or the four-momentum matrix remains invariant under Lorentz transformations. It is interesting to note that the decoherence parameter (least fundamental) in optics corresponds to the mass (most fundamental) in particle physics. + +
| Polarization Optics | Transformation Matrix | Particle Symmetry |
|---|---|---|
| Phase shift $\delta$ | $\begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix}$ | Rotation around z |
| Rotation around z | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | Rotation around y |
| Squeeze along x and y | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | Boost along z |
| Squeeze along 45° | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | Boost along x |
| $(ab)^2 \sin^2\chi$ | Determinant | $(\text{mass})^2$ |
+ +If we start with the Jones vector of the form of Equation (44), the coherency matrix becomes: + +$$ C = \begin{pmatrix} a^2 & ab e^{-(\sigma+i\delta)} \\ ab e^{-(\sigma-i\delta)} & b^2 \end{pmatrix} \qquad (55) $$ + +We are interested in the symmetry properties of this matrix. Since the transformation matrix applicable to the Jones vector is the two-by-two representation of the Lorentz group, we are particularly interested in the transformation matrices applicable to this coherency matrix. + +The trace and the determinant of the above coherency matrix are: + +$$ +\begin{aligned} +\det(C) &= (ab)^2 (1 - e^{-2\sigma}) \\ +\operatorname{tr}(C) &= a^2 + b^2 +\end{aligned} +\quad (56) $$ + +Since $e^{-\sigma}$ is always smaller than one, we can introduce an angle, $\chi$, defined as: + +$$ \cos \chi = e^{-\sigma} \quad (57) $$ + +and call it the "decoherence angle". If $\chi = 0$, the decoherence is minimum, and it becomes maximum when $\chi = 90^\circ$. We can then write the coherency matrix of Equation (55) as: + +$$ C = \begin{pmatrix} a^2 & ab(\cos \chi)e^{-i\delta} \\ ab(\cos \chi)e^{i\delta} & b^2 \end{pmatrix} \quad (58) $$ +---PAGE_BREAK--- + +The degree of polarization is defined as [13]: + +$$f = \sqrt{1 - \frac{4 \det(C)}{(tr(C))^2}} = \sqrt{1 - \frac{4(ab)^2 \sin^2 \chi}{(a^2 + b^2)^2}} \quad (59)$$ + +This degree is one if $\chi = 0$. When $\chi = 90^\circ$, it becomes: + +$$\frac{a^2 - b^2}{a^2 + b^2} \qquad (60)$$ + +Without loss of generality, we can assume that *a* is greater than *b*. If they are equal, this minimum degree of polarization is zero. 
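The two limiting cases of the degree of polarization can be verified numerically. A minimal sketch (function and variable names are ours) builds the coherency matrix of Equation (58) and evaluates Equation (59):

```python
import numpy as np

# Degree of polarization f, Equation (59), from the coherency matrix of Equation (58).
def coherency(a, b, chi, delta=0.0):
    off = a * b * np.cos(chi) * np.exp(-1j * delta)
    return np.array([[a**2, off], [np.conj(off), b**2]])

def degree(C):
    return np.sqrt(1 - 4 * np.linalg.det(C).real / np.trace(C).real**2)

a, b = 1.0, 0.5
f_coherent = degree(coherency(a, b, 0.0))        # chi = 0: fully coherent
f_minimum = degree(coherency(a, b, np.pi / 2))   # chi = 90 degrees
print(f_coherent, f_minimum, (a**2 - b**2) / (a**2 + b**2))
```

For $\chi = 0$ the degree is one, and for $\chi = 90°$ it reduces to $(a^2 - b^2)/(a^2 + b^2)$, as in Equation (60).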
Under the influence of the Naimark transformation given in Equation (11), this coherency matrix is transformed as:

$$C' = G\, C\, G^{\dagger} \qquad (61)$$

It is more convenient to make the following linear combinations:

$$
\begin{aligned}
S_0 &= \frac{S_{11} + S_{22}}{2}, & S_3 &= \frac{S_{11} - S_{22}}{2} \\
S_1 &= \frac{S_{12} + S_{21}}{2}, & S_2 &= \frac{S_{21} - S_{12}}{2i}
\end{aligned}
\qquad (62) $$

These four parameters are called the Stokes parameters, and the four-by-four transformations applicable to these parameters are widely known as Mueller matrices [1,3]. However, if the Naimark transformation given in Equation (61) is translated into the four-by-four Lorentz transformations according to the correspondence given in Appendix A, the Mueller matrices constitute a representation of the Lorentz group.

Another interesting aspect of the two-by-two matrix formalism is that the coherency matrix can be formulated in terms of quaternions [18–20]. The quaternion representation can be translated into rotations in four-dimensional space. There is a long history between the Lorentz group and the four-dimensional rotation group. It would be interesting to see what the quaternion representation of polarization optics will add to this history between those two similar, but different, groups.

As for earlier applications of the two-by-two representation of the Lorentz group, we note the vector representation by Fedorov [21,22]. Fedorov showed that it is easier to carry out kinematical calculations using his two-by-two representation. For instance, the computation of the Wigner rotation angle is possible in the two-by-two representation [23]. Earlier papers on group-theoretical approaches to polarization optics also include those on Mueller matrices [24] and on relativistic kinematics and polarization optics [25].

**5. 
Geometry of the Poincaré Sphere**

We now have the four-vector, $(S_0, S_3, S_1, S_2)$, which is Lorentz-transformed like the space-time four-vector, $(t, z, x, y)$, or the energy-momentum four-vector of Equation (15). This Stokes four-vector has a three-component subspace, $(S_3, S_1, S_2)$, which is like the three-dimensional Euclidean subspace
---PAGE_BREAK---

in the four-dimensional Minkowski space. In this three-dimensional subspace, we can introduce the spherical coordinate system with:

$$
\begin{aligned}
R &= \sqrt{S_3^2 + S_1^2 + S_2^2} \\
S_3 &= R \cos \zeta \\
S_1 &= R(\sin \zeta) \cos \delta, \qquad S_2 = R(\sin \zeta) \sin \delta
\end{aligned}
\qquad (63)
$$

The radius, $R$, is the radius of this sphere:

$$R = \frac{1}{2} \sqrt{(a^2 - b^2)^2 + 4(ab)^2 \cos^2 \chi} \qquad (64)$$

with:

$$S_3 = \frac{a^2 - b^2}{2} \qquad (65)$$

This spherical picture is traditionally known as the Poincaré sphere [1–3]. Without loss of generality, we assume $a$ is greater than $b$, so that $S_3$ is non-negative. In addition, we can consider another sphere with its radius:

$$S_0 = \frac{a^2 + b^2}{2} \qquad (66)$$

according to Equation (62).

The radius, $R$, takes its maximum value, $S_0$, when $\chi = 0^\circ$. It decreases and reaches its minimum value, $S_3$, when $\chi = 90^\circ$. In terms of $R$, the degree of polarization given in Equation (59) is:

$$f = \frac{R}{S_0} \qquad (67)$$

This aspect of the radius $R$ is illustrated in Figure 1a. The minimum value of $R$ is the $S_3$ of Equation (65).

**Figure 1.** Radius of the Poincaré sphere. The radius, $R$, takes its maximum value, $S_0$, when the decoherence angle, $\chi$, is zero. It becomes smaller as $\chi$ increases and reaches its minimum when the angle is 90°. Its minimum value is $S_3$, as is illustrated in Figure 1a. The degree of polarization is maximum when $R = S_0$ and minimum when $R = S_3$. 
According to Equation (65), $S_3$ becomes zero when $a = b$, and the minimum value of $R$ becomes zero, as is indicated in Figure 1b. The maximum value is still $S_0$, which becomes larger as $b$ grows to equal $a$.
---PAGE_BREAK---

Let us go back to the four-momentum matrix of Equation (15). Its determinant is $m^2$ and remains invariant. Likewise, the determinant of the coherency matrix of Equation (58) should also remain invariant. The determinant in this case is:

$$S_0^2 - R^2 = (ab)^2 \sin^2 \chi \qquad (68)$$

This quantity remains invariant. This aspect is shown on the last row of Table 3.

Let us go back to Equation (49). This matrix changes the relative magnitude of the amplitudes, $a$ and $b$. Thus, without loss of generality, we can study the Stokes parameters with $a = b$. The coherency matrix then becomes:

$$C = a^2 \begin{pmatrix} 1 & (\cos \chi)e^{-i\delta} \\ (\cos \chi)e^{i\delta} & 1 \end{pmatrix} \qquad (69)$$

Since the angle, $\delta$, does not play any essential role, we can let $\delta = 0$ and write the coherency matrix as:

$$C = a^2 \begin{pmatrix} 1 & \cos \chi \\ \cos \chi & 1 \end{pmatrix} \qquad (70)$$

Then, the minimum radius is $S_3 = 0$, and the $S_0$ of Equation (62) and the $R$ of Equation (64) become:

$$S_0 = a^2, \qquad R = a^2 \cos \chi \qquad (71)$$

respectively. The Poincaré sphere becomes simplified to that of Figure 1b. This Poincaré sphere allows $R$ to decrease to zero.

The determinant of the above two-by-two matrix is:

$$a^4 (1 - \cos^2 \chi) = a^4 \sin^2 \chi \qquad (72)$$

Since the Lorentz transformation leaves the determinant invariant, the change in this $\chi$ variable is not a Lorentz transformation. It is of course possible to construct a larger group in which this variable plays a role in a group transformation [23], but in this paper, we are more interested in its role in a particle gaining a mass. 
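The invariance of the determinant is easy to exercise numerically. The sketch below (our variable names) assumes the coherency matrix transforms as $C \rightarrow G C G^{\dagger}$ under a unit-determinant $G$, the Naimark-type transformation discussed above, and checks the identity of Equation (68) for the $a = b$ matrix of Equation (70):

```python
import numpy as np

# det C = S0^2 - R^2 = (ab)^2 sin^2(chi) stays fixed under C -> G C G^dagger
# when det(G) = 1; here G is the squeeze matrix B(eta) of Equation (49).
a, chi = 1.2, 0.4
C = a**2 * np.array([[1.0, np.cos(chi)], [np.cos(chi), 1.0]])  # Equation (70)

S0 = np.trace(C) / 2                # a^2, as in Equation (71)
S_off = (C[0, 1] + C[1, 0]) / 2     # off-diagonal Stokes combination, a^2 cos(chi)

eta = 0.9
G = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])  # det(G) = 1
C_boosted = G @ C @ G.conj().T

print(np.linalg.det(C), np.linalg.det(C_boosted), a**4 * np.sin(chi)**2)
```

All three printed values coincide, while the individual Stokes parameters of the boosted matrix do change.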
With this point in mind, let us diagonalize the coherency matrix of Equation (69). It then takes the form:

$$a^2 \begin{pmatrix} 1 + \cos \chi & 0 \\ 0 & 1 - \cos \chi \end{pmatrix} \qquad (73)$$

This form is the same as that of the four-momentum matrix given in Equation (41). There, we were not able to associate the variable, $\chi$, with any known physical process or symmetry operation of the Lorentz group. Fortunately, in this section, we noted that this variable comes from the degree of decoherence in polarization optics.

## 6. Concluding Remarks

In this paper, we noted first that the group of Lorentz transformations can be formulated in terms of two-by-two matrices. This two-by-two formalism can also be used for transformations of the coherency matrix in polarization optics, which consists of the four Stokes parameters.

Thus, this set of four parameters behaves like a Minkowskian four-vector under the four-by-four Lorentz transformations. In order to accommodate all four Stokes parameters, we noted that the radius of the Poincaré sphere should be allowed to vary from its maximum value to its minimum, corresponding to the fully coherent and minimally coherent cases, respectively.

As in the case of the particle mass, the decoherence parameter in the Stokes formalism is invariant under Lorentz transformations. However, the Poincaré sphere, with a variable radius, provides the
---PAGE_BREAK---

mechanism for the variation of the decoherence parameter. It was noted that this variation constitutes a physical process whose mathematics corresponds to that of the mass variable in particle physics.

As for polarization optics, the traditional approach has been to work with two polarizer matrices, like:

$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \qquad (74)$$

We have replaced these two matrices by the single attenuation matrix of Equation (47). This replacement enables us to formulate the Lorentz group for the Stokes parameters [15]. 
Furthermore, this attenuation matrix makes it possible to perform a continuous transformation from one matrix to the other by adjusting the attenuation parameters in optical media. It could be interesting to design optical experiments along this direction.

**Acknowledgments:** This paper is in part based on an invited paper presented by one of the authors (YSK) at the Fedorov Memorial Symposium: International Conference "Spins and Photonic Beams at Interface", dedicated to the 100th anniversary of F.I. Fedorov (1911–1994) (Minsk, Belarus, 2011). He would like to thank Sergei Kilin for inviting him to the conference.

In addition to numerous original contributions in optics, Fedorov wrote a book on two-by-two representations of the Lorentz group based on his own research on this subject. It was, therefore, quite appropriate for him (YSK) to present a paper on applications of the Lorentz group to optical science. He would like to thank V. A. Dlugunovich and M. Glaynskii for bringing the papers and the book written by Academician Fedorov, as well as their own papers, to his attention.

**Conflicts of Interest:** The authors declare no conflict of interest.

## Appendix A

In Section 2, we listed four two-by-two matrices whose repeated applications lead to the most general form of the two-by-two matrix, $G$. 
It is known that every $G$ matrix can be translated into a four-by-four Lorentz transformation matrix through [4,9,15]:

$$
\begin{pmatrix}
t' + z' \\
x' - iy' \\
x' + iy' \\
t' - z'
\end{pmatrix}
=
\begin{pmatrix}
\alpha\alpha^* & \alpha\beta^* & \beta\alpha^* & \beta\beta^* \\
\alpha\gamma^* & \alpha\delta^* & \beta\gamma^* & \beta\delta^* \\
\gamma\alpha^* & \gamma\beta^* & \delta\alpha^* & \delta\beta^* \\
\gamma\gamma^* & \gamma\delta^* & \delta\gamma^* & \delta\delta^*
\end{pmatrix}
\begin{pmatrix}
t+z \\
x-iy \\
x+iy \\
t-z
\end{pmatrix}
\qquad (75)
$$

and:

$$
\begin{pmatrix} t \\ z \\ x \\ y \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & -1 \\ 0 & 1 & 1 & 0 \\ 0 & i & -i & 0 \end{pmatrix} \begin{pmatrix} t+z \\ x-iy \\ x+iy \\ t-z \end{pmatrix} \qquad (76)
$$

These matrices appear to be complicated, but it is enough to study the matrices of Equations (13) and (14) to cover all the matrices in this group. Thus, we give their four-by-four equivalents in this Appendix A:

$$
Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \qquad (77)
$$

leads to the four-by-four matrix:

$$
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \cos \delta & -\sin \delta \\
0 & 0 & \sin \delta & \cos \delta
\end{pmatrix}
\qquad (78)
$$
---PAGE_BREAK---

Likewise:

$$
B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \rightarrow \begin{pmatrix} \cosh \eta & \sinh \eta & 0 & 0 \\ \sinh \eta & \cosh \eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (79)
$$

$$
R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (80)
$$

and:

$$
S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ 
\sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \rightarrow \begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (81)
$$

References

1. Azzam, R.M.A.; Bashara, N.M. *Ellipsometry and Polarized Light*; North-Holland: Amsterdam, The Netherlands, 1977.

2. Born, M.; Wolf, E. *Principles of Optics*, 6th ed.; Pergamon: Oxford, NY, USA, 1980.

3. Brosseau, C. *Fundamentals of Polarized Light: A Statistical Optics Approach*; John Wiley: New York, NY, USA, 1998.

4. Naimark, M.A. Linear representation of the Lorentz group. *Uspekhi Mat. Nauk* **1954**, *9*, 19–93; Translated by Atkinson, F.V. *American Mathematical Society Translations*, Series 2, **1957**, *6*, 379–458.

5. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon Press: Oxford, NY, USA, 1958; Translated by Swinfen, A.; Marstrand, O.J., 1964.

6. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. *J. Math. Phys.* **1990**, *31*, 55–60.

7. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204.

8. Kim, Y.S. Poincaré Sphere and Decoherence Problems. Available online: http://arxiv.org/abs/1203.4539 (accessed on 17 June 2013).

9. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.

10. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725.

11. Başkal, S.; Kim, Y.S. One analytic form for four branches of the ABCD matrix. *J. Mod. Opt.* **2010**, *57*, 1251–1259.

12. Başkal, S.; Kim, Y.S. Lorentz Group in Ray and Polarization Optics. In *Mathematical Optics: Classical, Quantum and Computational Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Taylor and Francis: New York, NY, USA, 2013; Chapter 9; pp. 
303–349.

13. Saleh, B.E.A.; Teich, M.C. *Fundamentals of Photonics*, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2007.

14. Han, D.; Kim, Y.S.; Noz, M.E. Jones-vector formalism as a representation of the Lorentz group. *J. Opt. Soc. Am. A* **1997**, *14*, 2290–2298.

15. Han, D.; Kim, Y.S.; Noz, M.E. Stokes parameters as a Minkowskian four-vector. *Phys. Rev. E* **1997**, *56*, 6065–6076.

16. Feynman, R.P. *Statistical Mechanics*; Benjamin/Cummings: Reading, MA, USA, 1972.

17. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative example of Feynman's rest of the universe. *Am. J. Phys.* **1999**, *67*, 61–66.

18. Pellat-Finet, P. Geometric approach to polarization optics. II. Quaternionic representation of polarized light. *Optik* **1991**, *87*, 68–76.

19. Dlugunovich, V.A.; Kurochkin, Y.A. Vector parameterization of the Lorentz group transformations and polar decomposition of Mueller matrices. *Opt. Spectrosc.* **2009**, *107*, 312–317.
---PAGE_BREAK---

20. Tudor, T. Vectorial Pauli algebraic approach in polarization optics. I. Device and state operators. *Optik* **2010**, *121*, 1226–1235.

21. Fedorov, F.I. Vector parametrization of the Lorentz group and relativistic kinematics. *Theor. Math. Phys.* **1970**, *2*, 248–252.

22. Fedorov, F.I. *Lorentz Group* (in Russian); Global Science, Physical-Mathematical Literature: Moscow, Russia, 1979.

23. Başkal, S.; Kim, Y.S. De Sitter group as a symmetry for optical decoherence. *J. Phys. A* **2006**, *39*, 7775–7788.

24. Dargys, A. Optical Mueller matrices in terms of geometric algebra. *Opt. Commun.* **2012**, *285*, 4785–4792.

25. Pellat-Finet, P.; Basset, M. What is common to both polarization optics and relativistic kinematics? *Optik* **1992**, *90*, 101–106.

© 2013 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere

Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³

¹ Department of Physics, Middle East Technical University, Ankara 06800, Turkey; E-Mail: baskal@newton.physics.metu.edu.tr

² Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA

³ Department of Radiology, New York University, New York, NY 10016, USA; E-Mail: marilyne.noz@gmail.com

* E-Mail: yskim@umd.edu; Tel.: +1-301-937-1306.

Received: 28 February 2014; in revised form: 28 May 2014 / Accepted: 9 June 2014 / Published: 25 June 2014

**Abstract:** The second-order differential equation for a damped harmonic oscillator can be converted to two coupled first-order equations, with two two-by-two matrices leading to the group $Sp(2)$. It is shown that this oscillator system contains the essential features of Wigner's little groups dictating the internal space-time symmetries of particles in the Lorentz-covariant world. The little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. It is shown that the oscillation and damping modes of the oscillator correspond to the little groups for massive and imaginary-mass particles, respectively. When the system makes the transition from the oscillation to the damping mode, it corresponds to the little group for massless particles. Rotations around the momentum leave the four-momentum invariant. This degree of freedom extends the $Sp(2)$ symmetry to that of $SL(2, c)$, corresponding to the Lorentz group applicable to the four-dimensional Minkowski space. The Poincaré sphere contains the $SL(2, c)$ symmetry. 
In addition, it has a non-Lorentzian parameter allowing us to reduce the mass continuously to zero. It is thus possible to construct the little group for massless particles from that of the massive particle by reducing its mass to zero. Spin-1/2 particles and spin-1 particles are discussed in detail. + +**Keywords:** damped harmonic oscillators; coupled first-order equations; unimodular matrices; Wigner's little groups; Poincaré sphere; $Sp(2)$ group; $SL(2, c)$ group; gauge invariance; neutrinos; photons + +**PACS:** 03.65.Fd, 03.67.-a, 05.30.-d + +# 1. Introduction + +We are quite familiar with the second-order differential equation + +$$m \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + Ky = 0 \quad (1)$$ + +for a damped harmonic oscillator. This equation has the same mathematical form as + +$$L \frac{d^2 Q}{dt^2} + R \frac{dQ}{dt} + \frac{1}{C} Q = 0 \quad (2)$$ + +for electrical circuits, where L, R, and C are the inductance, resistance, and capacitance respectively. These two equations play fundamental roles in physical and engineering sciences. Since they start from the same set of mathematical equations, one set of problems can be studied in terms of the other. For instance, many mechanical phenomena can be studied in terms of electrical circuits. +---PAGE_BREAK--- + +In Equation (1), when $b = 0$, the equation is that of a simple harmonic oscillator with the frequency $\omega = \sqrt{K/m}$. As $b$ increases, the oscillation becomes damped. When $b$ is larger than $2\sqrt{Km}$, the oscillation disappears, as the solution is a damping mode. + +Consider that increasing *b* continuously, while difficult mechanically, can be done electrically using Equation (2) by adjusting the resistance *R*. The transition from the oscillation mode to the damping mode is a continuous physical process. + +This *b* term leads to energy dissipation, but is not regarded as a fundamental force. 
It is inconvenient in the Hamiltonian formulation of mechanics and troublesome in the transition to quantum mechanics, yet it plays an important role in classical mechanics. In this paper, this term will help us understand the fundamental space-time symmetries of elementary particles.

We are interested in constructing the fundamental symmetry group for particles in the Lorentz-covariant world. For this purpose, we transform the second-order differential equation of Equation (1) to two coupled first-order equations using two-by-two matrices. Only two linearly independent matrices are needed. They are the anti-symmetric and symmetric matrices

$$A = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad S = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \qquad (3)$$

respectively. The anti-symmetric matrix $A$ is Hermitian and corresponds to the oscillation part, while the symmetric matrix $S$ corresponds to the damping.

These two matrices lead to the $Sp(2)$ group, consisting of two-by-two unimodular matrices with real elements. This group is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates. This group is commonly called the $O(2, 1)$ group.

This $O(2, 1)$ group can explain all the essential features of Wigner's little groups dictating the internal space-time symmetries of particles [1]. Wigner defined his little groups as the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. He observed that the little groups are different for massive, massless, and imaginary-mass particles. It has been a challenge to design a mathematical model which combines those three into one formalism, but we show that the damped harmonic oscillator provides the desired mathematical framework.

For the two space-like coordinates, we can assign one of them to the direction of the momentum, and the other to the direction perpendicular to the momentum. 
Let the direction of the momentum be along the z axis, and let the perpendicular direction be along the x axis. We therefore study the kinematics of the group within the zx plane, and then see what happens when we rotate the system around the z axis without changing the momentum [2].

The Poincaré sphere for polarization optics contains the $SL(2, c)$ symmetry isomorphic to the four-dimensional Lorentz group applicable to the Minkowski space [3–7]. Thus, the Poincaré sphere extends Wigner's picture to the three space-like and one time-like coordinates. Specifically, this extension adds rotations around the given momentum, which leave the four-momentum invariant [2].

While the particle mass is a Lorentz-invariant variable, the Poincaré sphere contains an extra variable which allows the mass to change. This variable allows us to take the zero-mass limit of the symmetry operations. The transverse rotational degrees of freedom collapse into one gauge degree of freedom, and the polarization of neutrinos is a consequence of the requirement of gauge invariance [8,9].

The $SL(2, c)$ group contains symmetries not seen in the three-dimensional rotation group. While we are familiar with two spinors for a spin-1/2 particle in nonrelativistic quantum mechanics, there are two additional spinors due to the reflection properties of the Lorentz group. There are thus 16 bilinear combinations of those four spinors. This leads to two scalars, two four-vectors, and one antisymmetric four-by-four tensor. The Maxwell-type electromagnetic field tensor can be obtained as a massless limit of this tensor [10].

In Section 2, we review the damped harmonic oscillator in classical mechanics, and note that the solution can be either in the oscillation mode or the damping mode, depending on the magnitude of
---PAGE_BREAK---

the damping parameter. The translation of the second-order equation into a first-order differential equation with two-by-two matrices is possible. 
This first-order equation is similar to the Schrödinger equation for a spin-1/2 particle in a magnetic field.

Section 3 shows that the two-by-two matrices of Section 2 can be formulated in terms of the $Sp(2)$ group. These matrices can be decomposed into the Bargmann and Wigner decompositions. Furthermore, this group is isomorphic to the three-dimensional Lorentz group with two space-like and one time-like coordinates.

In Section 4, it is noted that this three-dimensional Lorentz group has all the essential features of Wigner's little groups, which dictate the internal space-time symmetries of the particles in the Lorentz-covariant world. Wigner's little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. The Bargmann and Wigner decompositions are shown to be useful tools for studying the little groups.

In Section 5, we note that the given momentum is invariant under rotations around it. The addition of this rotational degree of freedom extends the $Sp(2)$ symmetry to the six-parameter $SL(2, c)$ symmetry. In the space-time language, this extends the three-dimensional group to the Lorentz group applicable to three space and one time dimensions.

Section 6 shows that the Poincaré sphere contains the symmetries of the $SL(2, c)$ group. In addition, it contains an extra variable which allows us to change the mass of the particle, which is not allowed in the Lorentz group.

In Section 7, the symmetries of massless particles are studied in detail. In addition to rotations around the momentum, Wigner's little group generates gauge transformations. While gauge transformations on spin-1 photons are well known, gauge invariance leads to the polarization of massless spin-1/2 particles, as observed in neutrino polarizations.

In Section 8, it is noted that there are four spinors for spin-1/2 particles in the Lorentz-covariant world. 
It is thus possible to construct 16 bilinear forms, leading to two scalars, two vectors, and one antisymmetric second-rank tensor. The electromagnetic field tensor is derived as the massless limit. This tensor is shown to be gauge-invariant.

## 2. Classical Damped Oscillators

For convenience, we write Equation (1) as

$$ \frac{d^2 y}{dt^2} + 2\mu \frac{dy}{dt} + \omega^2 y = 0 \quad (4) $$

with

$$ \omega = \sqrt{\frac{K}{m}}, \quad \text{and} \quad \mu = \frac{b}{2m} \qquad (5) $$

The damping parameter $\mu$ is positive when there are no external forces. When $\omega$ is greater than $\mu$, the solution takes the form

$$ y = e^{-\mu t} [C_1 \cos(\omega't) + C_2 \sin(\omega't)] \quad (6) $$

where

$$ \omega' = \sqrt{\omega^2 - \mu^2} \qquad (7) $$

and $C_1$ and $C_2$ are the constants to be determined by the initial conditions. This expression is for a damped harmonic oscillator. Conversely, when $\mu$ is greater than $\omega$, the quantity inside the square-root sign is negative, and the solution becomes

$$ y = e^{-\mu t} [C_3 \cosh(\mu't) + C_4 \sinh(\mu't)] \quad (8) $$

with

$$ \mu' = \sqrt{\mu^2 - \omega^2} \qquad (9) $$
---PAGE_BREAK---

If $\omega = \mu$, both Equations (6) and (8) collapse into one solution

$$y(t) = e^{-\mu t} [C_5 + C_6 t] \quad (10)$$

These three different cases are treated separately in textbooks. Here we are interested in the transition from Equation (6) to Equation (8), via Equation (10). For convenience, we start from $\mu$ greater than $\omega$, with $\mu'$ given by Equation (9).

For a given value of $\mu$, the square root becomes zero when $\omega$ equals $\mu$. If $\omega$ becomes larger, the square root becomes imaginary and divides into two branches.

$$\pm i \sqrt{\omega^2 - \mu^2} \quad (11)$$

This is a continuous transition, but not an analytic continuation.
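The approach to the critical case can be checked numerically. The following sketch (numpy; the parameter values and the initial conditions $y(0) = 1$, $\dot{y}(0) = 0$ are chosen only for illustration) evaluates the solutions of Equations (6), (8) and (10) near $\omega = \mu$ and shows that they agree:

```python
import numpy as np

mu, t = 1.0, 2.0   # illustrative damping parameter and observation time

def y_osc(omega):
    """Equation (6) with y(0) = 1, y'(0) = 0, valid for omega > mu."""
    w = np.sqrt(omega**2 - mu**2)          # omega' of Equation (7)
    return np.exp(-mu * t) * (np.cos(w * t) + (mu / w) * np.sin(w * t))

def y_damp(omega):
    """Equation (8) with the same initial conditions, valid for mu > omega."""
    m = np.sqrt(mu**2 - omega**2)          # mu' of Equation (9)
    return np.exp(-mu * t) * (np.cosh(m * t) + (mu / m) * np.sinh(m * t))

y_crit = np.exp(-mu * t) * (1 + mu * t)    # Equation (10), critical case

print(y_osc(mu + 1e-4), y_crit, y_damp(mu - 1e-4))  # nearly equal
```

With matched initial conditions, the oscillation and damping branches both approach the critical solution continuously, which is the transition studied below.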
To study this in detail, we translate the second-order differential equation of Equation (4) into a first-order equation with two-by-two matrices.

Given the solutions of Equations (6) and (10), it is convenient to use $\psi(t)$ defined as

$$\psi(t) = e^{\mu t} y(t), \quad \text{and} \quad y = e^{-\mu t} \psi(t) \quad (12)$$

Then $\psi(t)$ satisfies the differential equation

$$\frac{d^2 \psi(t)}{dt^2} + (\omega^2 - \mu^2)\psi(t) = 0 \quad (13)$$

### 2.1. Two-by-Two Matrix Formulation

In order to convert this second-order equation to a first-order system, we introduce $\psi_1(t)$ and $\psi_2(t)$ satisfying the two coupled differential equations

$$\begin{align}
\frac{d\psi_1(t)}{dt} &= (\mu - \omega)\psi_2(t) \tag{14} \\
\frac{d\psi_2(t)}{dt} &= (\mu + \omega)\psi_1(t) \tag{15}
\end{align}$$

which can be written in matrix form as

$$\frac{d}{dt} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \begin{pmatrix} 0 & \mu - \omega \\ \mu + \omega & 0 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \quad (16)$$

Using the Hermitian and anti-Hermitian matrices of Equation (3) in Section 1, we construct the linear combination

$$H = \omega \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} + \mu \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \quad (17)$$

We can then consider the first-order differential equation

$$i \frac{\partial}{\partial t} \psi(t) = H \psi(t) \quad (18)$$

While this equation is like the Schrödinger equation for an electron in a magnetic field, the two-by-two matrix is not Hermitian. Its first matrix is Hermitian, but the second matrix is anti-Hermitian. It is of course an interesting problem to give a physical interpretation to this non-Hermitian matrix
---PAGE_BREAK---

in connection with quantum dissipation [11], but this is beyond the scope of the present paper.
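Before moving on, the coupled system of Equations (14) and (15) can be checked numerically. The sketch below (numpy; the parameter values and the particular solution are chosen only for illustration) verifies by finite differences that $\psi_1 = \cos(\omega' t)$ together with $\psi_2 = \sqrt{(\omega+\mu)/(\omega-\mu)}\,\sin(\omega' t)$ satisfies both equations when $\omega > \mu$:

```python
import numpy as np

# Oscillation-mode parameters (omega > mu); values are illustrative only
omega, mu = 2.0, 0.5
wp = np.sqrt(omega**2 - mu**2)        # omega' of Equation (7)

t = np.linspace(0.0, 5.0, 4001)
h = t[1] - t[0]

psi1 = np.cos(wp * t)
psi2 = np.sqrt((omega + mu) / (omega - mu)) * np.sin(wp * t)

# Finite-difference residuals of Equations (14) and (15)
r1 = np.gradient(psi1, h) - (mu - omega) * psi2
r2 = np.gradient(psi2, h) - (mu + omega) * psi1

print(np.max(np.abs(r1[2:-2])), np.max(np.abs(r2[2:-2])))  # both tiny
```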
The solution of Equation (18) is

$$
\psi(t) = \exp \left\{ \begin{pmatrix} 0 & -\omega + \mu \\ \omega + \mu & 0 \end{pmatrix} t \right\} \begin{pmatrix} C_7 \\ C_8 \end{pmatrix} \quad (19)
$$

where $C_7 = \psi_1(0)$ and $C_8 = \psi_2(0)$.

### 2.2. Transition from the Oscillation Mode to the Damping Mode

It appears straightforward to compute this expression by a Taylor expansion, but it is not. This issue was extensively discussed in the earlier papers by two of us [12,13]. The key idea is to write the matrix

$$
\begin{pmatrix}
0 & -\omega + \mu \\
\omega + \mu & 0
\end{pmatrix}
\qquad (20)
$$

as a similarity transformation of

$$
\omega' \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad (\omega > \mu) \tag{21}
$$

and as that of

$$
\mu' \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad (\mu > \omega) \tag{22}
$$

with $\omega'$ and $\mu'$ defined in Equations (7) and (9), respectively. Then the Taylor expansion leads to

$$
\begin{pmatrix} \cos(\omega' t) & -\sqrt{(\omega - \mu)/(\omega + \mu)}\,\sin(\omega' t) \\ \sqrt{(\omega + \mu)/(\omega - \mu)}\,\sin(\omega' t) & \cos(\omega' t) \end{pmatrix} \quad (23)
$$

when $\omega$ is greater than $\mu$.
The solution $\psi(t)$ takes the form

$$
\begin{pmatrix}
C_7 \cos(\omega't) - C_8 \sqrt{(\omega - \mu)/( \omega + \mu)}\, \sin(\omega't) \\
C_7 \sqrt{(\omega + \mu)/( \omega - \mu)}\, \sin(\omega't) + C_8 \cos(\omega't)
\end{pmatrix}
\quad (24)
$$

If $\mu$ is greater than $\omega$, the Taylor expansion becomes

$$
\begin{pmatrix} \cosh(\mu' t) & \sqrt{(\mu - \omega)/(\mu + \omega)}\,\sinh(\mu' t) \\ \sqrt{(\mu + \omega)/(\mu - \omega)}\,\sinh(\mu' t) & \cosh(\mu' t) \end{pmatrix} \quad (25)
$$

When $\omega$ is equal to $\mu$, both Equations (23) and (25) become

$$
\begin{pmatrix} 1 & 0 \\ 2\omega t & 1 \end{pmatrix} \tag{26}
$$

If $\omega$ is sufficiently close to, but smaller than, $\mu$, the matrix of Equation (25) becomes

$$
\begin{pmatrix}
1 + (\epsilon/2)(2\omega t)^2 & \epsilon(2\omega t) \\
2\omega t & 1 + (\epsilon/2)(2\omega t)^2
\end{pmatrix}
\quad (27)
$$

with

$$
\epsilon = \frac{\mu - \omega}{\mu + \omega} \tag{28}
$$
---PAGE_BREAK---

If $\omega$ is sufficiently close to $\mu$, we can let

$$ \mu + \omega = 2\omega, \quad \text{and} \quad \mu - \omega = 2\omega\epsilon \tag{29} $$

If $\omega$ is greater than $\mu$, the quantity $\epsilon$ defined in Equation (28) becomes negative, and the matrix of Equation (23) becomes

$$ \begin{pmatrix} 1 - (-\epsilon/2)(2\omega t)^2 & -(-\epsilon)(2\omega t) \\ 2\omega t & 1 - (-\epsilon/2)(2\omega t)^2 \end{pmatrix} \tag{30} $$

We can rewrite this matrix as

$$ \begin{pmatrix} 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 & -\sqrt{-\epsilon} \left[ (2\omega\sqrt{-\epsilon})t \right] \\ 2\omega t & 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 \end{pmatrix} \tag{31} $$

If $\epsilon$ becomes positive, Equation (27) can be written as

$$ \begin{pmatrix} 1 + (1/2) \left[ (2\omega\sqrt{\epsilon})t \right]^2 & \sqrt{\epsilon} \left[ (2\omega\sqrt{\epsilon})t \right] \\ 2\omega t & 1 + (1/2) \left[ (2\omega\sqrt{\epsilon})t \right]^2 \end{pmatrix} \tag{32} $$

The transition from Equation (31) to
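Equation (23) can be checked directly against a brute-force Taylor expansion of the exponential in Equation (19). A short numpy sketch (the parameter values are arbitrary):

```python
import numpy as np

# Oscillation-mode parameters (omega > mu); values are illustrative only
omega, mu, t = 1.5, 0.4, 0.8
omega_p = np.sqrt(omega**2 - mu**2)        # omega' of Equation (7)

A = np.array([[0.0, -(omega - mu)],
              [omega + mu, 0.0]])           # the matrix of Equation (20)

# Brute-force Taylor series for exp(A t)
expAt = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 30):
    expAt += term
    term = term @ (A * t) / k

# Closed form of Equation (23)
r = np.sqrt((omega + mu) / (omega - mu))
closed = np.array([[np.cos(omega_p * t), -np.sin(omega_p * t) / r],
                   [r * np.sin(omega_p * t), np.cos(omega_p * t)]])

print(np.allclose(expAt, closed))           # True
```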
Equation (32) is continuous, as they become identical when $\epsilon = 0$. As $\epsilon$ changes its sign, the diagonal elements of the above matrices tell us how $\cos(\omega' t)$ becomes $\cosh(\mu' t)$. As for the upper-right element, $-\sin(\omega' t)$ becomes $\sinh(\mu' t)$. This non-analytic continuity is discussed in detail in one of the earlier papers by two of us on lens optics [13], where it was called “tangential continuity”: the function and its first derivative are continuous, while the second derivative is not.

### 2.3. Mathematical Forms of the Solutions

In this section, we use the Heisenberg approach to the problem, and obtain the solutions in the form of two-by-two matrices. We note that

1. For the oscillation mode, the trace of the matrix is smaller than 2. The solution takes the form of

$$ \begin{pmatrix} \cos(x) & -e^{-\eta} \sin(x) \\ e^{\eta} \sin(x) & \cos(x) \end{pmatrix} \tag{33} $$

with trace $2\cos(x)$. The trace is independent of $\eta$.

2. For the damping mode, the trace of the matrix is greater than 2. The solution takes the form of

$$ \begin{pmatrix} \cosh(x) & e^{-\eta} \sinh(x) \\ e^{\eta} \sinh(x) & \cosh(x) \end{pmatrix} \tag{34} $$

with trace $2\cosh(x)$. Again, the trace is independent of $\eta$.

3. For the transition mode, the trace is equal to 2, and the matrix is triangular, taking the form of

$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \tag{35} $$

When $x$ approaches zero, Equations (33) and (34) take the forms

$$ \begin{pmatrix} 1 - x^2/2 & -xe^{-\eta} \\ xe^{\eta} & 1 - x^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 + x^2/2 & xe^{-\eta} \\ xe^{\eta} & 1 + x^2/2 \end{pmatrix} \tag{36} $$
---PAGE_BREAK---

respectively. These two matrices have the same lower-left element. Let us fix this element to be a positive number $\gamma$.
Then

$$
x = \gamma e^{-\eta} \tag{37}
$$

and the matrices of Equation (36) become

$$
\begin{pmatrix}
1 - \gamma^2 e^{-2\eta} / 2 & -\gamma e^{-2\eta} \\
\gamma & 1 - \gamma^2 e^{-2\eta} / 2
\end{pmatrix},
\quad
\text{and}
\quad
\begin{pmatrix}
1 + \gamma^2 e^{-2\eta} / 2 & \gamma e^{-2\eta} \\
\gamma & 1 + \gamma^2 e^{-2\eta} / 2
\end{pmatrix}
\qquad (38)
$$

If we introduce a small number $\epsilon$ defined as

$$
\epsilon = \sqrt{\gamma}\, e^{-\eta} \tag{39}
$$

the matrices of Equation (38) become

$$
\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 - \gamma \epsilon^2/2 & -\sqrt{\gamma}\, \epsilon \\ \sqrt{\gamma}\, \epsilon & 1 - \gamma \epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{40}
$$

$$
\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 + \gamma \epsilon^2/2 & \sqrt{\gamma}\, \epsilon \\ \sqrt{\gamma}\, \epsilon & 1 + \gamma \epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}
$$

respectively, with $e^{-\eta} = \epsilon / \sqrt{\gamma}$.

## 3. Groups of Two-by-Two Matrices

If a two-by-two matrix has four complex elements, it has eight independent parameters. If the determinant of this matrix is one, it is known as a unimodular matrix, and the number of independent parameters is reduced to six. The group of two-by-two unimodular matrices is called SL(2, c). This six-parameter group is isomorphic to the Lorentz group applicable to the Minkowski space of three space-like and one time-like dimensions [14].

We can start with two subgroups of SL(2, c).

1. While the matrices of SL(2, c) are not unitary, we can consider the subset consisting of unitary matrices. This subgroup is called SU(2), and is isomorphic to the three-dimensional rotation group. This three-parameter group is the basic scientific language for spin-1/2 particles.

2.
We can also consider the subset of matrices with real elements. This three-parameter group is called Sp(2) and is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates.

In the Lorentz group, there are three space-like dimensions with x, y, and z coordinates. However, for many physical problems, it is more convenient to study the problem in the two-dimensional (x, z) plane first and generalize it to three-dimensional space by rotating the system around the z axis. This process can be called Euler decomposition and Euler generalization [2].

First, we study the *Sp*(2) symmetry in detail, and achieve the generalization by augmenting the two-by-two matrix corresponding to the rotation around the *z* axis. In this section, we study in detail the properties of *Sp*(2) matrices, then generalize them to *SL*(2, *c*) in Section 5.

There are three classes of Sp(2) matrices. Their traces can be smaller than two, greater than two, or equal to two. While these subjects are already discussed in the literature [15–17], our main interest is in what happens as the trace goes from less than two to greater than two. Here we are guided by the model we discussed in Section 2, which accounts for the transition from the oscillation mode to the damping mode.
---PAGE_BREAK---

### 3.1. Lie Algebra of Sp(2)

The two linearly independent matrices of Equation (3) can be written as

$$ K_1 = \frac{1}{2} \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad J_2 = \frac{1}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad (41) $$

However, the Taylor series expansion of the exponential form of Equation (23) or Equation (25) requires an additional matrix

$$ K_3 = \frac{1}{2} \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \qquad (42) $$

These matrices satisfy the following closed set of commutation relations.
$$ [K_1, J_2] = iK_3, \quad [J_2, K_3] = iK_1, \quad [K_3, K_1] = -iJ_2 \qquad (43) $$

The algebra generated by these three matrices is known in the literature as the group $Sp(2)$ [17]. Such a closed set of commutation relations is commonly called a Lie algebra; indeed, Equation (43) is the Lie algebra of the $Sp(2)$ group.

The Hermitian matrix $J_2$ generates the rotation matrix

$$ R(\theta) = \exp(-i\theta J_2) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (44) $$

and the anti-Hermitian matrices $K_1$ and $K_3$ generate the squeeze matrices

$$ S(\lambda) = \exp(-i\lambda K_1) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \qquad (45) $$

and

$$ B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} \exp(\eta/2) & 0 \\ 0 & \exp(-\eta/2) \end{pmatrix} \qquad (46) $$

respectively.

Returning to the Lie algebra of Equation (43): since $K_1$ and $K_3$ are anti-Hermitian while $J_2$ is Hermitian, the set of commutation relations is invariant under Hermitian conjugation. In other words, the commutation relations remain invariant even if we change the signs of $K_1$ and $K_3$ while keeping that of $J_2$ intact. If we instead take the complex conjugate of the entire system, both the $J$ and $K$ matrices change their signs.

### 3.2. Bargmann and Wigner Decompositions

Since the $Sp(2)$ matrix has three independent parameters, it can be written as [15]

$$ \begin{pmatrix} \cos(\alpha_1/2) & -\sin(\alpha_1/2) \\ \sin(\alpha_1/2) & \cos(\alpha_1/2) \end{pmatrix} \begin{pmatrix} \cosh\chi & \sinh\chi \\ \sinh\chi & \cosh\chi \end{pmatrix} \begin{pmatrix} \cos(\alpha_2/2) & -\sin(\alpha_2/2) \\ \sin(\alpha_2/2) & \cos(\alpha_2/2) \end{pmatrix} \qquad (47) $$

This matrix can be written as

$$ \begin{pmatrix} \cos(\delta/2) & -\sin(\delta/2) \\ \sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} \cos(\delta/2) & \sin(\delta/2) \\ -\sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \qquad (48) $$
---PAGE_BREAK---

where

$$
\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & \sinh \chi \\ \sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \quad (49)
$$

with

$$
\delta = \frac{1}{2}(\alpha_1 - \alpha_2), \quad \text{and} \quad \alpha = \frac{1}{2}(\alpha_1 + \alpha_2) \tag{50}
$$

If we complete the matrix multiplication of Equation (49), the result is

$$
\begin{pmatrix}
 (\cosh \chi) \cos \alpha & \sinh \chi - (\cosh \chi) \sin \alpha \\
 \sinh \chi + (\cosh \chi) \sin \alpha & (\cosh \chi) \cos \alpha
\end{pmatrix}
\qquad (51)
$$

We shall hereafter call the decomposition of Equation (49) the Bargmann decomposition. This means that every matrix in the $Sp(2)$ group can be brought to the Bargmann decomposition by a similarity transformation of rotation, as given in Equation (48). This decomposition leads to an equidiagonal matrix with two independent parameters.

For the matrix of Equation (49), we can now consider the following three cases.
Let us assume that $\chi$ is positive, and that the angle $\alpha$ is less than $90°$. Let us look at the upper-right element.

1. If it is negative, with $\sinh\chi < (\cosh\chi)\sin\alpha$, then the trace of the matrix is smaller than 2, and the matrix can be written as

$$
\begin{pmatrix}
\cos(\theta/2) & -e^{-\eta}\sin(\theta/2) \\
e^{\eta}\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
\qquad (52)
$$

with

$$
\cos(\theta/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{(\cosh\chi)\sin\alpha - \sinh\chi}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{53}
$$

2. If it is positive, with $\sinh \chi > (\cosh \chi) \sin \alpha$, then the trace is greater than 2, and the matrix can be written as

$$
\begin{pmatrix}
\cosh(\lambda/2) & e^{-\eta} \sinh(\lambda/2) \\
e^{\eta} \sinh(\lambda/2) & \cosh(\lambda/2)
\end{pmatrix}
\qquad (54)
$$

with

$$
\cosh(\lambda/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{\sinh\chi - (\cosh\chi)\sin\alpha}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{55}
$$

3. If it is zero, with $\sinh \chi = (\cosh \chi) \sin \alpha$, then the trace is equal to 2, and the matrix takes the form

$$
\begin{pmatrix}
1 & 0 \\
2 \sinh \chi & 1
\end{pmatrix}
\qquad (56)
$$

The above repeats the mathematics given in Section 2.3.

Returning to Equations (52) and (54), they can be decomposed into

$$
M(\theta, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (57)
$$

and

$$
M(\lambda, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (58)
$$

respectively.
In view of the physical examples given in Section 6, we shall call this the “Wigner decomposition.” Unlike the Bargmann decomposition, the Wigner decomposition is in the form of a similarity transformation.
---PAGE_BREAK---

We note that both Equations (57) and (58) are written as similarity transformations. Thus

$$[M(\theta, \eta)]^n = \begin{pmatrix} \cos(n\theta/2) & -e^{-\eta} \sin(n\theta/2) \\ e^{\eta} \sin(n\theta/2) & \cos(n\theta/2) \end{pmatrix} \quad (59)$$

$$[M(\lambda, \eta)]^n = \begin{pmatrix} \cosh(n\lambda/2) & e^{-\eta} \sinh(n\lambda/2) \\ e^{\eta} \sinh(n\lambda/2) & \cosh(n\lambda/2) \end{pmatrix} \quad (60)$$

$$[M(\gamma)]^n = \begin{pmatrix} 1 & 0 \\ n\gamma & 1 \end{pmatrix} \quad (61)$$

These expressions are useful for studying periodic systems [18].

The question is what physics these decompositions describe in the real world. To address this, we study what the Lorentz group does in the real world, and study the isomorphism between the $Sp(2)$ group and the Lorentz group applicable to the three-dimensional space consisting of one time-like and two space-like coordinates.

### 3.3. Isomorphism with the Lorentz Group

The purpose of this section is to give physical interpretations to the mathematical formulas given in Section 3.2. We will interpret these formulae in terms of the Lorentz transformations, which are normally described by four-by-four matrices. For this purpose, it is necessary to establish a correspondence between the two-by-two representation of Section 3.2 and the four-by-four representations of the Lorentz group.

Let us consider the Minkowskian space-time four-vector

$$ (t, z, x, y) \qquad (62) $$

where $t^2 - z^2 - x^2 - y^2$ remains invariant under Lorentz transformations. The Lorentz group consists of four-by-four matrices performing Lorentz transformations in the Minkowski space.
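This correspondence can be checked at the level of the Lie algebra. The sketch below (numpy) verifies that the two-by-two generators of Equations (41) and (42) and their four-by-four counterparts acting on $(t, z, x, y)$ satisfy the same commutation relations, Equation (43):

```python
import numpy as np

# Two-by-two generators of Equations (41) and (42)
K1 = 0.5 * np.array([[0, 1j], [1j, 0]])
J2 = 0.5 * np.array([[0, -1j], [1j, 0]])
K3 = 0.5 * np.array([[1j, 0], [0, -1j]])

# Four-by-four generators on (t, z, x, y): boost along x, rotation about y,
# boost along z (the forms tabulated for R, S, and B)
K1_4 = np.zeros((4, 4), dtype=complex)
K1_4[0, 2] = K1_4[2, 0] = 1j
J2_4 = np.zeros((4, 4), dtype=complex)
J2_4[1, 2], J2_4[2, 1] = -1j, 1j
K3_4 = np.zeros((4, 4), dtype=complex)
K3_4[0, 1] = K3_4[1, 0] = 1j

def comm(a, b):
    return a @ b - b @ a

# Equation (43) holds in both representations
for k1, j2, k3 in ((K1, J2, K3), (K1_4, J2_4, K3_4)):
    print(np.allclose(comm(k1, j2), 1j * k3),
          np.allclose(comm(j2, k3), 1j * k1),
          np.allclose(comm(k3, k1), -1j * j2))
```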
In order to give physical interpretations to the three two-by-two matrices given in Equations (44)–(46), we consider rotations around the *y* axis, boosts along the *x* axis, and boosts along the *z* axis. The transformations are restricted to the three-dimensional subspace of $(t, z, x)$. It is then straightforward to construct the four-by-four transformation matrices where the *y* coordinate remains invariant. They are given in Table 1, together with their generators. Those four-by-four generators satisfy the Lie algebra given in Equation (43).

**Table 1.** Matrices in the two-by-two representation, and their corresponding four-by-four generators and transformation matrices.
| Matrices | Generators | Four-by-Four Generators | Transformation Matrices |
| --- | --- | --- | --- |
| $R(\theta)$ | $J_2 = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| $B(\eta)$ | $K_3 = \frac{1}{2}\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$ | $\begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\eta & \sinh\eta & 0 & 0 \\ \sinh\eta & \cosh\eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| $S(\lambda)$ | $K_1 = \frac{1}{2}\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
---PAGE_BREAK---

## 4. Internal Space-Time Symmetries

We have seen that there corresponds a two-by-two matrix to each four-by-four Lorentz transformation matrix. It is possible to give physical interpretations to those four-by-four matrices. It must thus be possible to attach a physical interpretation to each two-by-two matrix.

Since 1939 [1], when Wigner introduced the concept of the little groups, many papers have been published on this subject, but most of them were based on the four-by-four representation. In this section, we shall give the formalism of the little groups in the language of two-by-two matrices. In so doing, we provide physical interpretations to the Bargmann and Wigner decompositions introduced in Section 3.2.

### 4.1. Wigner's Little Groups

In [1], Wigner started with a free relativistic particle with momentum, then constructed subgroups of the Lorentz group whose transformations leave the four-momentum invariant. These subgroups thus define the internal space-time symmetry of the given particle. Without loss of generality, we assume that the particle momentum is along the z direction. Rotations around the momentum then leave the momentum invariant, and this degree of freedom defines the helicity, that is, the spin parallel to the momentum.

We shall use the term "Wigner transformation" for a transformation which leaves the four-momentum invariant:

1. For a massive particle, it is possible to find a Lorentz frame where it is at rest with zero momentum. The four-momentum can be written as $m(1,0,0,0)$, where $m$ is the mass. This four-momentum is invariant under rotations in the three-dimensional $(z, x, y)$ space.

2. For an imaginary-mass particle, there is a Lorentz frame where the energy component vanishes. The momentum four-vector can be written as $p(0,1,0,0)$, where $p$ is the magnitude of the momentum.

3. If the particle is massless, its four-momentum becomes $p(1,1,0,0)$.
Here the first and second components are equal in magnitude.

The constant factors in these four-momenta do not play any significant role. Thus, we write them as $(1,0,0,0)$, $(0,1,0,0)$, and $(1,1,0,0)$, respectively. Since Wigner worked with these three specific four-momenta [1], we call them Wigner four-vectors.

All of these four-vectors are invariant under rotations around the z axis. The rotation matrix is

$$Z(\phi) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\phi & -\sin\phi \\ 0 & 0 & \sin\phi & \cos\phi \end{pmatrix} \quad (63)$$

In addition, the four-momentum of a massive particle is invariant under the rotation around the y axis, whose four-by-four matrix was given in Table 1. The four-momentum of an imaginary-mass particle is invariant under the boost matrix $S(\lambda)$ given in Table 1. The problem for the massless particle is more complicated, and will be discussed in detail in Section 7. See Table 2.
---PAGE_BREAK---

**Table 2.** Wigner four-vectors and Wigner transformation matrices applicable to two space-like and one time-like dimensions. Each Wigner four-vector remains invariant under the application of its Wigner matrix.
| Mass | Wigner Four-Vector | Wigner Transformation |
| --- | --- | --- |
| Massive | $(1, 0, 0, 0)$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| Massless | $(1, 1, 0, 0)$ | $\begin{pmatrix} 1 + \gamma^2/2 & -\gamma^2/2 & \gamma & 0 \\ \gamma^2/2 & 1 - \gamma^2/2 & \gamma & 0 \\ \gamma & -\gamma & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
| Imaginary mass | $(0, 1, 0, 0)$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
### 4.2. Two-by-Two Formulation of Lorentz Transformations

The Lorentz group is the group of four-by-four matrices performing Lorentz transformations on the Minkowskian vector space of $(t, z, x, y)$, leaving the quantity

$$t^2 - z^2 - x^2 - y^2 \quad (64)$$

invariant. It is possible to perform the same transformation using two-by-two matrices [7,14,19].

In this two-by-two representation, the four-vector is written as

$$X = \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \quad (65)$$

where its determinant is precisely the quantity given in Equation (64), and the Lorentz transformation on this matrix is a determinant-preserving, or unimodular, transformation. Let us consider the transformation matrix [7,19]

$$G = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \quad \text{and} \quad G^{\dagger} = \begin{pmatrix} \alpha^{*} & \gamma^{*} \\ \beta^{*} & \delta^{*} \end{pmatrix} \quad (66)$$

with

$$\det(G) = 1 \quad (67)$$

and the transformation

$$X' = GXG^{\dagger} \quad (68)$$

Since $G$ is not a unitary matrix, Equation (68) is not a unitary transformation; we call it instead the “Hermitian transformation.” Equation (68) can be written as

$$\begin{pmatrix} t' + z' & x' - iy' \\ x' + iy' & t' - z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (69)$$

It is still a determinant-preserving unimodular transformation; thus, it is possible to write it as a four-by-four transformation matrix applicable to the four-vector $(t, z, x, y)$ [7,14].

Since the $G$ matrix starts with four complex numbers and its determinant is one by Equation (67), it has six independent parameters.
The group of these $G$ matrices is known to be locally isomorphic
---PAGE_BREAK---

to the group of four-by-four matrices performing Lorentz transformations on the four-vector $(t, z, x, y)$. In other words, for each $G$ matrix there is a corresponding four-by-four Lorentz-transformation matrix [7].

The matrix $G$ is not unitary, because its Hermitian conjugate is not always its inverse. This group has a unitary subgroup, called $SU(2)$, and another subgroup consisting only of real matrices, called $Sp(2)$. For this latter subgroup, it is sufficient to work with the three matrices $R(\theta)$, $S(\lambda)$, and $B(\eta)$ given in Equations (44)–(46), respectively. Each of these matrices has its corresponding four-by-four matrix applicable to $(t, z, x, y)$. These matrices, with their four-by-four counterparts, are tabulated in Table 1.

The energy-momentum four-vector can also be written as a two-by-two matrix:

$$P = \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \qquad (70)$$

with

$$\det(P) = p_0^2 - p_x^2 - p_y^2 - p_z^2 \qquad (71)$$

which means

$$\det(P) = m^2 \qquad (72)$$

where $m$ is the particle mass.

The Lorentz transformation can be written explicitly as

$$P' = GPG^{\dagger} \qquad (73)$$

or

$$\begin{pmatrix} p'_0 + p'_z & p'_x - ip'_y \\ p'_x + ip'_y & p'_0 - p'_z \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \qquad (74)$$

This is a unimodular transformation, and the mass is a Lorentz-invariant variable. Furthermore, it was shown in [7] that Wigner's little groups for massive, massless, and imaginary-mass particles can be explicitly defined in terms of two-by-two matrices.
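The Lorentz invariance of the mass under Equation (73) is simply determinant preservation. A small numpy check (the particular $G$ and the momentum values are arbitrary):

```python
import numpy as np

# Energy-momentum matrix of Equation (70) for a massive particle moving along z
m, pz = 1.0, 0.75
p0 = np.sqrt(m**2 + pz**2)
P = np.array([[p0 + pz, 0.0], [0.0, p0 - pz]])

# An arbitrary unimodular G: a boost along z times a rotation
eta, theta = 0.4, 0.3
G = np.diag([np.exp(eta / 2), np.exp(-eta / 2)]) @ np.array(
    [[np.cos(theta / 2), -np.sin(theta / 2)],
     [np.sin(theta / 2),  np.cos(theta / 2)]])

P2 = G @ P @ G.conj().T            # the Hermitian transformation, Equation (73)

print(np.isclose(np.linalg.det(P2), m**2))   # True: det(P) = m^2 is preserved
```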
Wigner's little group consists of two-by-two matrices satisfying

$$P = WPW^{\dagger} \qquad (75)$$

The two-by-two $W$ matrix is not an identity matrix; it tells us about the internal space-time symmetry of a particle with a given energy-momentum four-vector. This aspect was not known when Einstein formulated his special relativity in 1905, hence the internal space-time symmetry was not an issue at that time. We call the two-by-two matrix $W$ the Wigner matrix, and call the condition of Equation (75) the Wigner condition.

If the determinant of $P$ is a positive number, then $P$ is proportional to

$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad (76)$$

corresponding to a massive particle at rest, while if the determinant is negative, it is proportional to

$$P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (77)$$
---PAGE_BREAK---

corresponding to an imaginary-mass particle moving faster than light along the z direction, with a vanishing energy component. If the determinant is zero, $P$ is

$$
P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \tag{78}
$$

which is proportional to the four-momentum matrix for a massless particle moving along the z direction.

For all three cases, the matrix of the form

$$
Z(\phi) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \quad (79)
$$

satisfies the Wigner condition of Equation (75). This matrix corresponds to rotations around the z axis.

For the massive particle with the four-momentum of Equation (76), transformations with the rotation matrix of Equation (44) leave the $P$ matrix of Equation (76) invariant. Together with the $Z(\phi)$ matrix, this rotation matrix leads to the subgroup consisting of the unitary subset of the $G$ matrices. The unitary subset of $G$ is $SU(2)$, corresponding to the three-dimensional rotation group dictating the spin of the particle [14].
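For the massive case, the Wigner condition of Equation (75) with the $P$ of Equation (76) is satisfied by any unitary $W$, in particular by $Z(\phi)$ of Equation (79), the rotation matrix of Equation (44), and their products. A numpy sketch (the angles are arbitrary):

```python
import numpy as np

P_massive = np.eye(2)          # Equation (76): massive particle at rest

phi, theta = 1.1, 0.7
Z = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])   # Equation (79)
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])      # Equation (44)

# Each unitary W satisfies the Wigner condition W P W-dagger = P
for W in (Z, R, Z @ R):
    print(np.allclose(W @ P_massive @ W.conj().T, P_massive))  # True each time
```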
For the massless case, transformations with the triangular matrix of the form

$$
\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix} \qquad (80)
$$

leave the momentum matrix of Equation (78) invariant. The physics of this matrix has a stormy history, and the variable $\gamma$ leads to a gauge transformation applicable to massless particles [8,9,20,21].

For a particle with an imaginary mass, a $W$ matrix of the form of Equation (45) leaves the four-momentum of Equation (77) invariant.

Table 3 summarizes the transformation matrices for Wigner's little groups for massive, massless, and imaginary-mass particles. Furthermore, in terms of their traces, the matrices given in this subsection can be compared with those given in Section 2.3 for the damped oscillator. The comparisons are given in Table 4.

Of course, it is a challenging problem to have one expression for all three classes. This problem has been discussed in the literature [12], and the damped-oscillator case of Section 2 addresses this continuity problem.

**Table 3.** Wigner vectors and Wigner matrices in the two-by-two representation. The trace of the matrix tells whether the particle's $m^2$ is positive, zero, or negative.
| Particle Mass | Four-Momentum | Transform Matrix | Trace |
| --- | --- | --- | --- |
| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | less than 2 |
| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix}$ | equal to 2 |
| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | greater than 2 |
---PAGE_BREAK---

**Table 4.** Damped Oscillators and Space-time Symmetries. Both share Sp(2) as their symmetry group.
| Trace | Damped Oscillator | Particle Symmetry |
| --- | --- | --- |
| Smaller than 2 | Oscillation Mode | Massive Particles |
| Equal to 2 | Transition Mode | Massless Particles |
| Larger than 2 | Damping Mode | Imaginary-mass Particles |
## 5. Lorentz Completion of Wigner's Little Groups

So far we have considered transformations applicable only to the (t, z, x) space. In order to study the full symmetry, we have to consider rotations around the z axis. As previously stated, when a particle moves along this axis, this rotation defines the helicity of the particle.

In [1], Wigner worked out the little group of a massive particle at rest. When the particle gains a momentum along the z direction, the single particle can reverse the direction of momentum, the spin, or both. What happens to the internal space-time symmetries is discussed in this section.

### 5.1. Rotation around the z Axis

In Section 3, our kinematics was restricted to the two-dimensional space of z and x, and thus included rotations around the y axis. We now introduce the four-by-four matrix of Equation (63) performing rotations around the z axis. Its corresponding two-by-two matrix was given in Equation (79). Its generator is

$$J_3 = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (81)$$

If we add this matrix to the three generators we used in Sections 3 and 3.2, we end up with the closed set of commutation relations

$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, K_j] = i\epsilon_{ijk}K_k, \quad [K_i, K_j] = -i\epsilon_{ijk}J_k \qquad (82)$$

with

$$J_i = \frac{1}{2}\sigma_i, \quad \text{and} \quad K_i = \frac{i}{2}\sigma_i \qquad (83)$$

where $\sigma_i$ are the two-by-two Pauli spin matrices.

For each of these two-by-two matrices there is a corresponding four-by-four matrix generating Lorentz transformations on the four-dimensional Minkowski space. When these two-by-two matrices are imaginary, the corresponding four-by-four matrices were given in Table 1. If they are real, the corresponding four-by-four matrices are given in Table 5.
---PAGE_BREAK---

**Table 5.** Two-by-two and four-by-four generators not included in Table 1.
The generators given there and those given here constitute the set of six generators of SL(2, c), or of the Lorentz group, whose Lie algebra is given in Equation (82).
| Generator | Two-by-Two | Four-by-Four |
|---|---|---|
| $J_3$ | $\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}$ |
| $J_1$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \end{pmatrix}$ |
| $K_2$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix}$ |
This set of commutation relations is known as the Lie algebra of SL(2, c), the group of two-by-two complex matrices with unit determinant. It is also the Lie algebra of the Lorentz group, which performs Lorentz transformations on the four-dimensional Minkowski space.

This set has many useful subgroups. For the group SL(2, c), there is a subgroup consisting only of real matrices, generated by the two-by-two matrices given in Table 1. This three-parameter subgroup is precisely the Sp(2) group we used in Sections 3 and 3.2. Their generators satisfy the Lie algebra given in Equation (43).

In addition, this group has the following Wigner subgroups governing the internal space-time symmetries of particles in the Lorentz-covariant world [1]:

1. The $J_i$ matrices form a closed set of commutation relations. The subgroup generated by these Hermitian matrices is SU(2) for electron spins. The corresponding rotation group does not change the four-momentum of the particle at rest. This is Wigner's little group for massive particles. If the particle is at rest, the two-by-two form of the four-vector is given by Equation (76). The Lorentz transformation generated by $J_3$ takes the form

$$ \begin{pmatrix} e^{i\phi/2} & 0 \\ 0 & e^{-i\phi/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (84) $$

Similar computations can be carried out for $J_1$ and $J_2$.

2. There is another Sp(2) subgroup, generated by $K_1$, $K_2$, and $J_3$. They satisfy the commutation relations

$$ [K_1, K_2] = -iJ_3, \quad [J_3, K_1] = iK_2, \quad [K_2, J_3] = iK_1 \quad (85) $$

The Wigner transformation generated by these two-by-two matrices leaves the momentum four-vector of Equation (77) invariant.
For instance, the transformation matrix generated by $K_2$ takes the form

$$ \exp(-i\xi K_2) = \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \quad (86) $$

and the Wigner transformation takes the form

$$ \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad (87) $$

Computations with $K_1$ and $J_3$ lead to the same result.
---PAGE_BREAK---

Since the determinant of the four-momentum matrix is negative, the particle has an imaginary mass. In the language of the four-by-four matrix, the transformation matrices leave the four-momentum of the form (0, 1, 0, 0) invariant.

3. Furthermore, we can consider the following combinations of the generators:

$$N_1 = K_1 - J_2 = \begin{pmatrix} 0 & i \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = K_2 + J_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad (88)$$

Together with $J_3$, they satisfy the following commutation relations:

$$[N_1, N_2] = 0, \quad [N_1, J_3] = -iN_2, \quad [N_2, J_3] = iN_1 \qquad (89)$$

In order to understand this set of commutation relations, we can consider an xy coordinate system in a two-dimensional space. The rotation around the origin is generated by

$$J_3 = -i \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) \qquad (90)$$

and the two translations are generated by

$$N_1 = -i \frac{\partial}{\partial x}, \quad \text{and} \quad N_2 = -i \frac{\partial}{\partial y} \qquad (91)$$

for the x and y directions respectively. These operators satisfy the commutation relations given in Equation (89).

The two-by-two matrices of Equation (88) generate the following transformation matrix.
+ +$$G(\gamma, \phi) = \exp[-i\gamma(N_1 \cos\phi + N_2 \sin\phi)] = \begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \qquad (92)$$ + +The two-by-two form for the four-momentum for the massless particle is given by Equation (78). The computation of the Hermitian transformation using this matrix is + +$$\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (93)$$ + +confirming that $N_1$ and $N_2$, together with $J_3$, are the generators of the $E(2)$-like little group for massless particles in the two-by-two representation. The transformation that does this in the physical world is described in the following section. + +## 5.2. $E(2)$-Like Symmetry of Massless Particles + +From the four-by-four generators of $K_{1,2}$ and $J_{1,2}$, we can write + +$$N_1 = \begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & i & 0 \\ i & -i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = \begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ i & -i & 0 & 0 \end{pmatrix} \qquad (94)$$ +---PAGE_BREAK--- + +These matrices lead to the transformation matrix of the form + +$$ +G(\gamma, \phi) = \begin{pmatrix} +1 + \gamma^2/2 & -\gamma^2/2 & \gamma \cos \phi & \gamma \sin \phi \\ +\gamma^2/2 & 1 - \gamma^2/2 & \gamma \cos \phi & \gamma \sin \phi \\ +-\gamma \cos \phi & \gamma \cos \phi & 1 & 0 \\ +-\gamma \sin \phi & \gamma \sin \phi & 0 & 1 +\end{pmatrix} \quad (95) +$$ + +This matrix leaves the four-momentum invariant, as we can see from + +$$ +G(\gamma, \phi) \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \tag{96} +$$ + +When it is applied to the photon four-potential + +$$ +G(\gamma, \phi) \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} + \gamma (A_1 \cos \phi + A_2 \sin \phi) 
\begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad (97)
$$

with the Lorentz condition, which leads to $A_3 = A_0$ in the zero-mass case. Gauge transformations are well known for electromagnetic fields and photons. Thus, Wigner's little group leads to gauge transformations.

In the two-by-two representation, the electromagnetic four-potential takes the form

$$
\begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} \qquad (98)
$$

with the Lorentz condition $A_3 = A_0$. Then the two-by-two form of Equation (97) is

$$
\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} \quad (99)
$$

which becomes

$$
\begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} + \begin{pmatrix} 2\gamma (A_1 \cos \phi + A_2 \sin \phi) & 0 \\ 0 & 0 \end{pmatrix} \quad (100)
$$

This is the two-by-two equivalent of the gauge transformation given in Equation (97).

For massless spin-1/2 particles, we start with the two-by-two expression of $G(\gamma, \phi)$ given in Equation (92) and consider the spinors

$$
u = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad v = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{101}
$$

for the spin-up and spin-down states respectively. Then

$$
Gu = u, \quad \text{and} \quad Gv = v + \gamma e^{-i\phi} u \quad (102)
$$

This means that the spinor $u$ for spin up is invariant under the gauge transformation while $v$ is not. Thus, the polarization of massless spin-1/2 particles, such as neutrinos, is a consequence of the gauge invariance. We shall continue this discussion in Section 7.
---PAGE_BREAK---

### 5.3. Boosts along the z Axis

In Sections 4.1 and 5.1, we studied Wigner transformations for fixed values of the four-momenta.
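Both properties of the triangular matrix $G(\gamma, \phi)$, the invariance of the massless four-momentum and its action on the spinors, can be checked numerically. A minimal pure-Python sketch (the values of $\gamma$ and $\phi$ are illustrative):

```python
import cmath

def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):  # Hermitian conjugate
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

gamma, phi = 1.3, 0.4  # illustrative parameters
G = [[1, gamma * cmath.exp(-1j * phi)], [0, 1]]   # Equation (92)
P = [[1, 0], [0, 0]]                              # massless four-momentum, Equation (78)

# Equation (93): G P G† = P, so G belongs to the little group of the massless momentum
GPG = mul(mul(G, P), dagger(G))
assert all(abs(GPG[i][j] - P[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Equation (102): the spin-up spinor u is invariant, the spin-down spinor v is not
u, v = [1, 0], [0, 1]
Gu = [G[0][0]*u[0] + G[0][1]*u[1], G[1][0]*u[0] + G[1][1]*u[1]]
Gv = [G[0][0]*v[0] + G[0][1]*v[1], G[1][0]*v[0] + G[1][1]*v[1]]
assert Gu == [1, 0]                               # Gu = u
assert abs(Gv[0] - gamma * cmath.exp(-1j * phi)) < 1e-12 and Gv[1] == 1  # Gv = v + γe^{-iφ}u
```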
The next question is what happens when the system is boosted along the z direction, with the transformation

$$
\begin{pmatrix} t' \\ z' \end{pmatrix} = \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} t \\ z \end{pmatrix} \qquad (103)
$$

Then the four-momenta become

$$
(\cosh \eta, \sinh \eta, 0, 0), \quad (\sinh \eta, \cosh \eta, 0, 0), \quad e^{\eta}(1, 1, 0, 0) \tag{104}
$$

respectively for the massive, imaginary-mass, and massless cases. In the two-by-two representation, the boost matrix is

$$
\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{105}
$$

and the four-momenta of Equation (104) become

$$
\begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & -e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & 0 \end{pmatrix} \tag{106}
$$

respectively. These matrices become Equations (76)–(78) respectively when $\eta = 0$.

We are interested in Lorentz transformations which leave a given non-zero momentum invariant. We can consider a Lorentz boost along the x direction, preceded and followed by identical rotation matrices, as described in Figure 1, with the transformation matrix

$$
\begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & -\sinh \chi \\ -\sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \quad (107)
$$

which becomes

$$
\begin{pmatrix}
(\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\
-\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
\end{pmatrix}
\quad (108)
$$
---PAGE_BREAK---

Figure 1. Bargmann and Wigner decompositions. (a) Bargmann decomposition; (b) Wigner decomposition. In the Bargmann decomposition, we start from a momentum along the z direction.
We can rotate, boost, and rotate to bring the momentum back to its original position. The resulting matrix is the product of one boost and two rotation matrices. In the Wigner decomposition, the particle is boosted back to the frame where the Wigner transformation can be applied. We make a Wigner transformation there and then come back to the original state of the momentum. This process can also be written as the product of three simple matrices.

Except for the sign of the parameter $\chi$, the two-by-two matrices of Equations (107) and (108) are identical to those given in Section 3.2. We are thus ready to interpret this expression in terms of physics.

1. If the particle is massive, the off-diagonal elements of Equation (108) have opposite signs, and this matrix can be decomposed into

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (109) $$

with

$$ \cos(\theta/2) = (\cosh \chi) \cos \alpha, \quad \text{and} \quad e^{2\eta} = \frac{\cosh(\chi) \sin \alpha + \sinh \chi}{\cosh(\chi) \sin \alpha - \sinh \chi} \quad (110) $$

and

$$ e^{2\eta} = \frac{p_0 + p_z}{p_0 - p_z} \quad (111) $$

According to Equation (109), the first matrix (far right) reduces the particle momentum to zero. The second matrix rotates the particle without changing the momentum. The third matrix boosts the particle to restore its original momentum. This is the extension of Wigner's original idea to moving particles.

2.
If the particle has an imaginary mass, the off-diagonal elements of Equation (108) have the same sign, and the matrix can be decomposed into

$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & -\sinh(\lambda/2) \\ -\sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (112) $$
---PAGE_BREAK---

with

$$ \cosh(\lambda/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{2\eta} = \frac{\sinh\chi + \cosh(\chi)\sin\alpha}{\sinh\chi - \cosh(\chi)\sin\alpha} \qquad (113) $$

and

$$ e^{2\eta} = \frac{p_0 + p_z}{p_z - p_0} \qquad (114) $$

This is also a three-step operation. The first matrix brings the particle momentum to the zero-energy state with $p_0 = 0$. Boosts along the x or y direction do not change this four-momentum. We can then boost the particle back to restore its momentum. This operation is also an extension of Wigner's original little group. Thus, it is quite appropriate to call the formulas of Equations (109) and (112) Wigner decompositions.

3. If the particle mass is zero, with

$$ \sinh \chi = (\cosh \chi) \sin \alpha \qquad (115) $$

the $\eta$ parameter becomes infinite, and the Wigner decomposition does not appear to be useful. We can then go back to the Bargmann decomposition of Equation (107). With the condition of Equation (115), Equation (108) becomes

$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \qquad (116) $$

with

$$ \gamma = 2 \sinh \chi \qquad (117) $$

The decomposition ending with a triangular matrix is called the Iwasawa decomposition [16,22], and its physical interpretation was given in Section 5.2. The $\gamma$ parameter does not depend on $\eta$.

Thus, we have given physical interpretations to the Bargmann and Wigner decompositions given in Section 3.2. Consider what happens when the momentum becomes large. Then $\eta$ becomes large for the nonzero-mass cases.
All three four-momenta in Equation (106) become

$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (118) $$

As for the Bargmann-Wigner matrices, they become the triangular matrix of Equation (116), with $\gamma = \sin(\theta/2)e^{\eta}$ and $\gamma = \sinh(\lambda/2)e^{\eta}$, respectively for the massive and imaginary-mass cases.

In Section 5.2, we concluded that the triangular matrix corresponds to gauge transformations. However, particles with imaginary mass are not observed. For massive particles, we can start with the three-dimensional rotation group. The rotation around the z axis is called the helicity, and remains invariant under the boost along the z direction. As for the transverse rotations, they become gauge transformations, as illustrated in Table 6.

**Table 6.** Covariance of the energy-momentum relation, and covariance of the internal space-time symmetry. Under the Lorentz boost along the z direction, $J_3$ remains invariant, and this invariant component of the angular momentum is called the helicity. The transverse components $J_1$ and $J_2$ collapse into a gauge transformation. The $\gamma$ parameter for the massless case has been studied in earlier papers in the four-by-four matrix formulation of Wigner's little groups [8,21].
| Massive, Slow | Covariance | Massless, Fast |
|---|---|---|
| $E = p^2/2m$ | Einstein's $E = mc^2$ | $E = cp$ |
| $J_3$ | Wigner's Little Group | Helicity |
| $J_1, J_2$ | Wigner's Little Group | Gauge Transformation |
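The equivalence of the Bargmann form of Equation (108) and the Wigner decomposition of Equations (109) and (110) can be checked numerically for the massive case. A pure-Python sketch (the parameter values are illustrative and chosen so that $\cosh\chi \sin\alpha > \sinh\chi$; the boost factors are ordered so that the right-hand factor removes the momentum):

```python
import math

def mul(A, B):  # 2x2 real matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

alpha, chi = 0.6, 0.3   # illustrative parameters (massive case)

c, s = math.cos(alpha/2), math.sin(alpha/2)
R_half = [[c, -s], [s, c]]                                   # rotation by alpha/2
B_x = [[math.cosh(chi), -math.sinh(chi)],
       [-math.sinh(chi), math.cosh(chi)]]                    # boost along the x direction

# Bargmann decomposition, Equation (107): rotation, boost, rotation
D = mul(mul(R_half, B_x), R_half)

# Closed form of Equation (108)
D_closed = [[math.cos(alpha)*math.cosh(chi), -math.sinh(chi) - math.sin(alpha)*math.cosh(chi)],
            [-math.sinh(chi) + math.sin(alpha)*math.cosh(chi), math.cos(alpha)*math.cosh(chi)]]
assert all(abs(D[i][j] - D_closed[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Wigner-decomposition parameters from Equation (110)
theta = 2*math.acos(math.cosh(chi)*math.cos(alpha))
eta = 0.5*math.log((math.cosh(chi)*math.sin(alpha) + math.sinh(chi)) /
                   (math.cosh(chi)*math.sin(alpha) - math.sinh(chi)))

Bz = [[math.exp(eta/2), 0], [0, math.exp(-eta/2)]]
Bz_inv = [[math.exp(-eta/2), 0], [0, math.exp(eta/2)]]
Rt = [[math.cos(theta/2), -math.sin(theta/2)], [math.sin(theta/2), math.cos(theta/2)]]

# boost * rotation * inverse boost reproduces the same matrix
W = mul(mul(Bz, Rt), Bz_inv)
assert all(abs(W[i][j] - D[i][j]) < 1e-10 for i in range(2) for j in range(2))
```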
---PAGE_BREAK---

### 5.4. Conjugate Transformations

The most general form of the SL(2, c) matrix is given in Equation (66). Transformation operators for the Lorentz group are given in exponential form as

$$
D = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i + \eta_i K_i) \right\} \qquad (119)
$$

where the $J_i$ are the generators of rotations and the $K_i$ are the generators of proper Lorentz boosts. They satisfy the Lie algebra given in Equation (43). This set of commutation relations is invariant under the sign change of the boost generators $K_i$. Thus, we can consider the "dot conjugation" defined as

$$
\dot{D} = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i - \eta_i K_i) \right\} \quad (120)
$$

Since the $K_i$ are anti-Hermitian while the $J_i$ are Hermitian, the Hermitian conjugate of the above expression is

$$
D^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i + \eta_i K_i) \right\} \qquad (121)
$$

while the Hermitian conjugate of $\dot{D}$ is

$$
\dot{D}^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i - \eta_i K_i) \right\} \qquad (122)
$$

Since we understand the rotation around the z axis, we can now restrict the kinematics to the zt plane and work with the Sp(2) symmetry. Then the D matrices can be considered as Bargmann decompositions. First, $D$ and $\dot{D}$ are

$$
D(\alpha, \chi) = \begin{pmatrix}
(\cos \alpha) \cosh \chi & \sinh \chi - (\sin \alpha) \cosh \chi \\
\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
\end{pmatrix} \tag{123}
$$

$$
\dot{D}(\alpha, \chi) = \begin{pmatrix}
(\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\
-\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
\end{pmatrix} \quad (124)
$$

These matrices correspond to the "D loops" given in Figure 2a,b respectively. The dot conjugation changes the direction of the boosts.
The dot conjugation leads to the inversion of space, which is called the parity operation.

We can also consider changing the direction of rotations. This results in the Hermitian conjugates, whose matrices are

$$
D^{\dagger}(\alpha, \chi) = \begin{pmatrix}
(\cos \alpha) \cosh \chi & \sinh \chi + (\sin \alpha) \cosh \chi \\
\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
\end{pmatrix} \quad (125)
$$

$$
\dot{D}^{\dagger}(\alpha, \chi) = \begin{pmatrix}
(\cos \alpha) \cosh \chi & -\sinh \chi + (\sin \alpha) \cosh \chi \\
-\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
\end{pmatrix} \quad (126)
$$

From the exponential expressions of Equations (119) to (122), it is clear that

$$
\dot{D}^{\dagger} = D^{-1}, \quad \text{and} \quad D^{\dagger} = \dot{D}^{-1} \tag{127}
$$

The D loop given in Figure 1 corresponds to $\dot{D}$. We shall return to these loops in Section 7.
---PAGE_BREAK---

Figure 2. Four D-loops resulting from the Bargmann decomposition. (a) Bargmann decomposition from Figure 1; (b) The direction of the Lorentz boost is reversed; (c) The direction of rotation is reversed; (d) Both directions are reversed. These operations correspond to space inversion, charge conjugation, and time reversal, respectively.

## 6. Symmetries Derivable from the Poincaré Sphere

The Poincaré sphere serves as the basic language for polarization physics. Its underlying language is the two-by-two coherency matrix. This coherency matrix contains the symmetry of SL(2, c), isomorphic to the Lorentz group applicable to three space-like and one time-like dimensions [4,6,7].

For polarized light propagating along the z direction, the amplitude ratio and the phase difference of the electric field's x and y components traditionally determine the state of polarization. Hence, the polarization can be changed by adjusting the amplitude ratio, the phase difference, or both.
Usually, the optical device which changes the amplitude is called an "attenuator" (or "amplifier"), and the device which changes the relative phase a "phase shifter".

Let us start with the Jones vector:

$$
\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t)] \\ a \exp[i(kz - \omega t)] \end{pmatrix} \tag{128}
$$
---PAGE_BREAK---

To this vector, we can apply the phase-shift matrix of Equation (79), which brings the Jones vector to

$$
\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t - \phi/2)] \\ a \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \quad (129)
$$

The generator of this phase shifter is $J_3$ given in Table 5.

The optical beam can be attenuated differently in the two directions. The resulting matrix is

$$
e^{-\mu} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (130)
$$

with the attenuation factors $\exp(-\mu + \eta/2)$ and $\exp(-\mu - \eta/2)$ for the x and y directions respectively. We are interested only in the relative attenuation given in Equation (46), which leads to different amplitudes for the x and y components, and the Jones vector becomes

$$
\begin{pmatrix} \psi_1(z, t) \\ \psi_2(z, t) \end{pmatrix} = \begin{pmatrix} ae^{\eta/2} \exp[i(kz - \omega t - \phi/2)] \\ ae^{-\eta/2} \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \quad (131)
$$

The squeeze matrix of Equation (46) is generated by $K_3$ given in Table 1.

The polarization is not always along the x and y axes, but can be rotated around the z axis using the rotation matrix of Equation (44), generated by $J_2$ given in Table 1.

Among the rotation angles, the angle of 45° plays an important role in polarization optics. Indeed, if we rotate the squeeze matrix of Equation (46) by 45°, we end up with the squeeze matrix of Equation (45), generated by $K_1$, also given in Table 1.
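The 45° statement can be checked directly: conjugating the diagonal squeeze matrix of Equation (46) by a 45° rotation produces the symmetric squeeze matrix of Equation (45). A small pure-Python sketch (the value of $\eta$ is illustrative):

```python
import math

def mul(A, B):  # 2x2 real matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

eta = 0.8                       # illustrative squeeze parameter
c = s = math.cos(math.pi / 4)   # 45-degree rotation: cos = sin = 1/sqrt(2)

R = [[c, -s], [s, c]]
R_inv = [[c, s], [-s, c]]
S_axes = [[math.exp(eta/2), 0], [0, math.exp(-eta/2)]]   # squeeze along x and y, Equation (46)

# Rotating the x-y squeeze by 45 degrees
S_45 = mul(mul(R, S_axes), R_inv)

expected = [[math.cosh(eta/2), math.sinh(eta/2)],
            [math.sinh(eta/2), math.cosh(eta/2)]]        # squeeze along 45 degrees, Equation (45)
assert all(abs(S_45[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The diagonal entries average to $\cosh(\eta/2)$ and the off-diagonal entries become $\sinh(\eta/2)$ precisely because $\cos^2 45^\circ = \sin^2 45^\circ = 1/2$.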
Each of these four matrices plays an important role in special relativity, as we discussed in Sections 3.2 and 6. Their respective roles in optics and particle physics are given in Table 7.

**Table 7.** Polarization optics and special relativity share the same mathematics. Each matrix has its clear role in both optics and relativity. The determinant of the Stokes or the four-momentum matrix remains invariant under Lorentz transformations. It is interesting to note that the decoherence parameter (least fundamental) in optics corresponds to the (mass)$^2$ (most fundamental) in particle physics.
| Polarization Optics | Transformation Matrix | Particle Symmetry |
|---|---|---|
| Phase shift by $\phi$ | $\begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}$ | Rotation around z |
| Rotation around z | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | Rotation around y |
| Squeeze along x and y | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | Boost along z |
| Squeeze along 45° | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | Boost along x |
| $a^4 (\sin\xi)^2$ | Determinant | $(\text{mass})^2$ |
The most general form of the two-by-two matrix applicable to the Jones vector is the $G$ matrix of Equation (66). This matrix is of course a representation of the SL(2, c) group. It brings the simplest Jones vector of Equation (128) to its most general form.
---PAGE_BREAK---

## 6.1. Coherency Matrix

However, the Jones vector alone cannot tell us whether the two components are coherent with each other. In order to address this important degree of freedom, we use the coherency matrix defined as [3,23]

$$ C = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \qquad (132) $$

with

$$ \langle \psi_i^* \psi_j \rangle = \frac{1}{T} \int_0^T \psi_i^*(t+\tau) \psi_j(t) dt \qquad (133) $$

where $T$ is a sufficiently long time interval. Then those four elements become [4]

$$ S_{11} = \langle \psi_1^* \psi_1 \rangle = a^2, \quad S_{12} = \langle \psi_1^* \psi_2 \rangle = a^2 (\cos \xi) e^{-i\phi} \qquad (134) $$

$$ S_{21} = \langle \psi_2^* \psi_1 \rangle = a^2 (\cos \xi) e^{+i\phi}, \quad S_{22} = \langle \psi_2^* \psi_2 \rangle = a^2 \qquad (135) $$

The diagonal elements are the squared absolute values of $\psi_1$ and $\psi_2$ respectively. The angle $\phi$ could be different from the phase-shift angle given in Equation (79), but this difference does not play any role in the reasoning. The off-diagonal elements could be smaller in magnitude than the product of the magnitudes of $\psi_1$ and $\psi_2$ if the two polarizations are not completely coherent.

The angle $\xi$ specifies the degree of coherency. If it is zero, the system is fully coherent, while the system is totally incoherent if $\xi$ is $90^\circ$. This angle can therefore be called the "decoherence angle."
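The invariance of the decoherence angle can be illustrated numerically: the determinant of the coherency matrix, $a^4 \sin^2$ of the decoherence angle as in Equation (137), is unchanged when the matrix is transformed as in Equation (136). A pure-Python sketch, writing the decoherence angle as $\xi$ and using an illustrative squeeze as the sample SL(2, c) transformation:

```python
import cmath
import math

def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

a, xi, phi = 1.5, 0.7, 0.3   # illustrative amplitude, decoherence angle, and phase

# Coherency matrix, Equations (132), (134), and (135)
C = [[a*a, a*a*math.cos(xi)*cmath.exp(-1j*phi)],
     [a*a*math.cos(xi)*cmath.exp(1j*phi), a*a]]

assert abs(det(C) - a**4 * math.sin(xi)**2) < 1e-12   # Equation (137)

# A sample SL(2,c) element (a squeeze) applied as C' = G C G†, Equation (136)
eta = 0.6
G = [[math.exp(eta/2), 0], [0, math.exp(-eta/2)]]
G_dag = [[G[0][0], 0], [0, G[1][1]]]                  # G is real and diagonal, so G† = G
Cp = mul(mul(G, C), G_dag)

# The determinant, and hence the decoherence angle, is invariant
assert abs(det(Cp) - det(C)) < 1e-12
```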
While the most general form of the transformation applicable to the Jones vector is $G$ of Equation (66), the transformation applicable to the coherency matrix is

$$ C' = G C G^{\dagger} \qquad (136) $$

The determinant of the coherency matrix is invariant under this transformation, and it is

$$ \det(C) = a^4 (\sin \xi)^2 \qquad (137) $$

Thus, the angle $\xi$ remains invariant. In the language of the Lorentz transformation applicable to the four-vector, the determinant is equivalent to the $(\text{mass})^2$ and is therefore a Lorentz-invariant quantity.

## 6.2. Two Radii of the Poincaré Sphere

Let us write the transformation of Equation (136) explicitly:

$$ \begin{pmatrix} S'_{11} & S'_{12} \\ S'_{21} & S'_{22} \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \qquad (138) $$

It is then possible to construct the following quantities:

$$ S_0 = \frac{S_{11} + S_{22}}{2}, \quad S_3 = \frac{S_{11} - S_{22}}{2} \qquad (139) $$

$$ S_1 = \frac{S_{12} + S_{21}}{2}, \quad S_2 = \frac{S_{12} - S_{21}}{2i} \qquad (140) $$

These are known as the Stokes parameters, and ($S_0, S_3, S_1, S_2$) constitutes a four-vector under Lorentz transformations.

In the Jones vector of Equation (128), the amplitudes of the two orthogonal components are equal. Thus, the two diagonal elements of the coherency matrix are equal. This leads to $S_3 = 0$, and the
---PAGE_BREAK---

problem is reduced from the sphere to a circle.
In the resulting two-dimensional subspace, we can introduce the polar coordinate system with

$$
\begin{align}
R &= \sqrt{S_1^2 + S_2^2} \tag{141} \\
S_1 &= R \cos \phi \tag{142} \\
S_2 &= R \sin \phi \tag{143}
\end{align}
$$

The radius $R$ is the radius of this circle, and is

$$
R = a^2 \cos \xi \quad (144)
$$

The radius $R$ takes its maximum value $S_0$ when $\xi = 0^\circ$. It decreases as $\xi$ increases, and vanishes when $\xi = 90^\circ$. This aspect of the radius $R$ is illustrated in Figure 3.

**Figure 3.** Radius of the Poincaré sphere. The radius $R$ takes its maximum value $S_0$ when the decoherence angle $\xi$ is zero. It becomes smaller as $\xi$ increases, and becomes zero when the angle reaches $90^\circ$.

In order to see its implications in special relativity, let us go back to the four-momentum matrix of $m(1,0,0,0)$. Its determinant is $m^2$ and remains invariant. Likewise, the determinant of the coherency matrix of Equation (132) should also remain invariant. The determinant in this case is

$$
S_0^2 - R^2 = a^4 \sin^2 \xi \quad (145)
$$

This quantity remains invariant under the Hermitian transformation of Equation (138), which is a Lorentz transformation as discussed in Sections 3.2 and 6. This aspect is shown in the last row of Table 7.

The coherency matrix then becomes

$$
C = a^2 \begin{pmatrix} 1 & (\cos \xi)e^{-i\phi} \\ (\cos \xi)e^{i\phi} & 1 \end{pmatrix} \qquad (146)
$$
---PAGE_BREAK---

Since the angle $\phi$ does not play any essential role, we can let $\phi = 0$, and write the coherency matrix as

$$ C = a^2 \begin{pmatrix} 1 & \cos \xi \\ \cos \xi & 1 \end{pmatrix} \qquad (147) $$

The determinant of the above two-by-two matrix is

$$ a^4 (1 - \cos^2 \xi) = a^4 \sin^2 \xi \qquad (148) $$

Since the Lorentz transformation leaves the determinant invariant, the change in this $\xi$ variable is not a Lorentz transformation.
It is of course possible to construct a larger group in which this variable plays a role in a group transformation [6], but here we are more interested in its role in a particle gaining a mass from zero, or its mass becoming zero.

### 6.3. Extra-Lorentzian Symmetry

The coherency matrix of Equation (146) can be diagonalized to

$$ a^2 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \qquad (149) $$

by a rotation. Let us then go back to the four-momentum matrix of Equation (70). If $p_x = p_y = 0$ and $p_z = p_0 \cos \xi$, we can write this matrix as

$$ p_0 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \qquad (150) $$

Thus, with this extra variable, it is possible to study the little groups for variable masses, including the small-mass limit and the zero-mass case.

For a fixed value of $p_0$, the $(\text{mass})^2$ and $(\text{momentum})^2$ become

$$ (\text{mass})^2 = (p_0 \sin \xi)^2, \quad \text{and} \quad (\text{momentum})^2 = (p_0 \cos \xi)^2 \qquad (151) $$

resulting in

$$ (\text{energy})^2 = (\text{mass})^2 + (\text{momentum})^2 \qquad (152) $$

This transition is illustrated in Figure 4. We are interested in reaching a point on the light cone from the mass hyperbola while keeping the energy fixed. According to this figure, we do not have to make an excursion to the infinite-momentum limit. If the energy is fixed during this process, Equation (152) gives the relation between the mass and the momentum, and Figure 5 illustrates this relation.
---PAGE_BREAK---

Figure 4. Transition from the massive to the massless case. (a) Transition within the framework of the Lorentz group; (b) Transition allowed in the symmetry of the Poincaré sphere. Within the framework of the Lorentz group, it is not possible to go from the massive to the massless case directly, because this requires a change in the mass, which is a Lorentz-invariant quantity. The only way is to move to infinite momentum, jump from the hyperbola to the light cone, and come back.
The extra symmetry of the Poincaré sphere allows a direct transition.

Figure 5. Energy-momentum-mass relation. This circle illustrates the case where the energy is fixed, while the mass and momentum are related according to the triangular rule. The value of the angle $\xi$ changes from zero to 180°. The particle mass is negative for negative values of this angle. However, in the Lorentz group, only (mass)$^2$ is a relevant variable, and negative masses might play a role only for theoretical purposes.

Within the framework of the Lorentz group, it is possible, by making an excursion to infinite momentum, where the mass hyperbola coincides with the light cone, to then come back to the desired point. On the other hand, the mass formula of Equation (151) allows us to go there directly. The decoherence mechanism of the coherency matrix makes this possible.
---PAGE_BREAK---

## 7. Small-Mass and Massless Particles

We now have a mathematical tool to reduce the mass of a massive particle from its positive value to zero. During this process, the Lorentz-boosted rotation matrix becomes a gauge transformation for the spin-1 particle, as discussed in Section 5.2. For spin-1/2 particles, there are two issues.

1. It was seen in Section 5.2 that the requirement of gauge invariance leads to the polarization of massless spin-1/2 particles, such as neutrinos. What happens to anti-neutrinos?

2. There are strong experimental indications that neutrinos have a small mass. What happens to the $E(2)$ symmetry?

### 7.1. Spin-1/2 Particles

Let us go back to the two-by-two matrices of Section 5.4, and the two-by-two $D$ matrix. For a massive particle, its Wigner decomposition leads to

$$ D = \begin{pmatrix} \cos(\theta/2) & -e^{-\eta} \sin(\theta/2) \\ e^{\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (153) $$

This matrix is applicable to the spinors $u$ and $v$ defined in Equation (101), respectively for the spin-up and spin-down states along the $z$ direction.
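Equation (153) is the product of a boost, a rotation, and the inverse boost, and the dotted matrix of Equation (154) is the same product with the boost direction reversed. A quick pure-Python check (the values of $\theta$ and $\eta$ are illustrative):

```python
import math

def mul(A, B):  # 2x2 real matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

theta, eta = 0.5, 2.0   # illustrative rotation angle and boost rapidity

Rot = [[math.cos(theta/2), -math.sin(theta/2)], [math.sin(theta/2), math.cos(theta/2)]]
B = [[math.exp(eta/2), 0], [0, math.exp(-eta/2)]]        # boost along z
B_inv = [[math.exp(-eta/2), 0], [0, math.exp(eta/2)]]

# Wigner decomposition: boost, rotation, inverse boost
D = mul(mul(B_inv, Rot), B)

# Closed form of Equation (153)
D153 = [[math.cos(theta/2), -math.exp(-eta)*math.sin(theta/2)],
        [math.exp(eta)*math.sin(theta/2), math.cos(theta/2)]]
assert all(abs(D[i][j] - D153[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Dotted representation, Equation (154): the boost direction is reversed
D_dot = mul(mul(B, Rot), B_inv)
assert abs(D_dot[0][1] + math.exp(eta)*math.sin(theta/2)) < 1e-12
assert abs(D_dot[1][0] - math.exp(-eta)*math.sin(theta/2)) < 1e-12
```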
Since the Lie algebra of $SL(2,c)$ is invariant under the sign change of the $K_i$ matrices, we can consider the “dotted” representation, where the system is boosted in the opposite direction, while the direction of rotations remains the same. The Wigner decomposition then leads to

$$ \dot{D} = \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (154) $$

with its spinors

$$ \dot{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (155) $$

For anti-neutrinos, the helicity is reversed but the momentum is unchanged. Thus, $D^\dagger$ is the appropriate matrix. However, $D^\dagger = \dot{D}^{-1}$, as was noted in Section 5.4. Thus, we shall use $\dot{D}$ for anti-neutrinos.

When the particle mass becomes very small,

$$ e^{-\eta} = \frac{m}{2p} \qquad (156) $$

becomes small. Thus, if we let

$$ e^{\eta} \sin(\theta/2) = \gamma, \quad \text{and} \quad e^{-\eta} \sin(\theta/2) = \epsilon^2 \qquad (157) $$

then the $D$ matrix of Equation (153) and the $\dot{D}$ matrix of Equation (154) become

$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\epsilon^2 \\ \gamma & 1 - \gamma\epsilon^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\gamma \\ \epsilon^2 & 1 - \gamma\epsilon^2/2 \end{pmatrix} \qquad (158) $$

respectively, where $\gamma$ is an independent parameter and

$$ \epsilon^2 = \gamma \left( \frac{m}{2p} \right)^2 \qquad (159) $$
---PAGE_BREAK---

When the particle mass becomes zero, they become

$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \tag{160} $$

applicable to the spinors $(u, v)$ and $(\dot{u}, \dot{v})$, respectively.
For neutrinos,

$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ \gamma \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{161} $$

For anti-neutrinos,

$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -\gamma \\ 1 \end{pmatrix} \tag{162} $$

It was noted in Section 5.2 that the triangular matrices of Equation (160) perform gauge transformations. Thus, for Equations (161) and (162), the requirement of gauge invariance leads to the polarization of neutrinos. The neutrinos are left-handed, while the anti-neutrinos are right-handed. Since, however, nature cannot tell the difference between the dotted and undotted representations, the Lorentz group cannot tell which neutrino is right-handed. It can say only that the neutrinos and anti-neutrinos are oppositely polarized.

If the neutrino has a small mass, the gauge invariance is modified to

$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\epsilon^2 \\ \gamma & 1 - \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} - \epsilon^2 \begin{pmatrix} 1 \\ \gamma/2 \end{pmatrix} \tag{163} $$

and

$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\gamma \\ \epsilon^2 & 1 - \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \epsilon^2 \begin{pmatrix} -\gamma/2 \\ 1 \end{pmatrix} \tag{164} $$

respectively for neutrinos and anti-neutrinos. Thus, the violation of the gauge invariance in both cases is proportional to $\epsilon^2$, which is $m^2/4p^2$.

### 7.2.
Small-Mass Neutrinos in the Real World

Whether neutrinos have mass, and the consequences of this for the Standard Model and lepton number, are the subject of much theoretical speculation [24,25], as well as of studies in cosmology [26], at nuclear reactors [27], and in high-energy experiments [28,29]. Neutrinos are fast becoming an important component of the search for dark matter and dark radiation [30]. Their importance within the Standard Model is reflected by the fact that they are the only particles which seem to exist with only one direction of chirality, i.e., only left-handed neutrinos have been confirmed to exist so far.

It was speculated some time ago that neutrinos in constant electric and magnetic fields would acquire a small mass, and that right-handed neutrinos would be trapped within the interaction field [31]. Solving generalized electroweak models using left- and right-handed neutrinos has been discussed recently [32]. Today these right-handed neutrinos, which do not participate in weak interactions, are called “sterile” neutrinos [33]. A comprehensive discussion of the place of neutrinos in the scheme of physics has been given by Drewes [30]. We should note also that the three different neutrinos, namely $ν_e$, $ν_μ$, and $ν_τ$, may have different masses [34].
---PAGE_BREAK---

## 8. Scalars, Four-Vectors, and Four-Tensors

In Sections 5 and 7, our primary interest has been in the two-by-two matrices applicable to spinors for spin-1/2 particles. Since we also used four-by-four matrices, we indirectly studied the four-component particle consisting of spin-1 and spin-zero components.

If there are two spin-1/2 states, we are accustomed to constructing one spin-zero state and one spin-one state with three degenerate components.

In this paper, we are confronted with two spinors, but each spinor can also be dotted. For this reason, there are 16 orthogonal states consisting of spin-one and spin-zero states. How many spin-zero states are there? How many spin-one states?
For particles at rest, it is known that the addition of two spin-1/2 states results in spin-zero and spin-one states. In this paper, we have two different spinors behaving differently under the Lorentz boost. Around the $z$ direction, both spinors are transformed by

$$Z(\phi) = \exp(-i\phi J_3) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \qquad (165)$$

However, they are boosted by

$$B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (166)$$

$$\dot{B}(\eta) = \exp(i\eta K_3) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \qquad (167)$$

applicable to the undotted and dotted spinors, respectively. These two matrices commute with each other, and also with the rotation matrix $Z(\phi)$ of Equation (165). Since $K_3$ and $J_3$ commute with each other, we can work with the matrices $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$ defined as

$$Q(\eta, \phi) = B(\eta)Z(\phi) = \begin{pmatrix} e^{(\eta-i\phi)/2} & 0 \\ 0 & e^{-(\eta-i\phi)/2} \end{pmatrix} \qquad (168)$$

$$\dot{Q}(\eta, \phi) = \dot{B}(\eta)\dot{Z}(\phi) = \begin{pmatrix} e^{-(\eta+i\phi)/2} & 0 \\ 0 & e^{(\eta+i\phi)/2} \end{pmatrix} \qquad (169)$$

When these combined matrices are applied to the spinors,

$$Q(\eta, \phi)u = e^{(\eta-i\phi)/2}u, \quad Q(\eta, \phi)v = e^{-(\eta-i\phi)/2}v \qquad (170)$$

$$\dot{Q}(\eta, \phi)\dot{u} = e^{-(\eta+i\phi)/2}\dot{u}, \quad \dot{Q}(\eta, \phi)\dot{v} = e^{(\eta+i\phi)/2}\dot{v} \qquad (171)$$

If the particle is at rest, we can take the combinations

$$uu, \quad \frac{1}{\sqrt{2}}(uv + vu), \quad vv \qquad (172)$$

for the spin-one state, and

$$\frac{1}{\sqrt{2}}(uv - vu) \qquad (173)$$

for the spin-zero state. There are thus four bilinear states. In the SL(2, c) regime, there are also two dotted spinors. If we include both dotted and undotted spinors, there are 16 independent bilinear combinations. They are given in Table 8. This table also gives the effect of the operation of $Q(\eta, \phi)$.
---PAGE_BREAK---

**Table 8.** Sixteen combinations of the SL(2, c) spinors. In the SU(2) regime, there are two spinors leading to four bilinear forms. In the SL(2, c) world, there are two undotted and two dotted spinors. These four spinors lead to 16 independent bilinear combinations.
$$
\begin{array}{ccc|c}
\multicolumn{3}{c|}{\text{Spin 1}} & \text{Spin 0} \\
\hline
uu, & \tfrac{1}{\sqrt{2}}(uv + vu), & vv, & \tfrac{1}{\sqrt{2}}(uv - vu) \\
\dot{u}\dot{u}, & \tfrac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u}), & \dot{v}\dot{v}, & \tfrac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}) \\
u\dot{u}, & \tfrac{1}{\sqrt{2}}(u\dot{v} + v\dot{u}), & v\dot{v}, & \tfrac{1}{\sqrt{2}}(u\dot{v} - v\dot{u}) \\
\dot{u}u, & \tfrac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u), & \dot{v}v, & \tfrac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u)
\end{array}
$$
After the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$, they become

$$
\begin{aligned}
e^{-i\phi} e^{\eta}\, u u, & \quad \frac{1}{\sqrt{2}} (uv + vu), \quad e^{i\phi} e^{-\eta}\, v v, \quad \frac{1}{\sqrt{2}} (uv - vu) \\
e^{-i\phi} e^{-\eta}\, \dot{u} \dot{u}, & \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} + \dot{v}\dot{u}), \quad e^{i\phi} e^{\eta}\, \dot{v} \dot{v}, \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} - \dot{v}\dot{u}) \\
e^{-i\phi}\, u \dot{u}, & \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} + e^{-\eta} v \dot{u}), \quad e^{i\phi}\, v \dot{v}, \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} - e^{-\eta} v \dot{u}) \\
e^{-i\phi}\, \dot{u} u, & \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v + e^{\eta} \dot{v} u), \quad e^{i\phi}\, \dot{v} v, \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v - e^{\eta} \dot{v} u)
\end{aligned}
$$

Among the bilinear combinations given in Table 8, the following two are invariant under rotations and also under boosts:

$$S = \frac{1}{\sqrt{2}}(uv - vu), \quad \text{and} \quad S' = -\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}) \qquad (174)$$

They are thus scalars in the Lorentz-covariant world. Are they the same or different? Let us consider the combinations

$$S_+ = \frac{1}{\sqrt{2}} (S + S'), \quad \text{and} \quad S_- = \frac{1}{\sqrt{2}} (S - S') \qquad (175)$$

Under the dot conjugation, $S_+$ remains invariant, but $S_-$ changes its sign.

Under the dot conjugation, the boost is performed in the opposite direction. It is therefore the operation of space inversion; thus, $S_+$ is a scalar, while $S_-$ is a pseudo-scalar.

### 8.1.
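The eigen-action of Equations (170) and (171), and the resulting invariance of the scalar $S$, can be checked numerically. Here bilinears such as $uv$ are represented as tensor products; this representation is our modeling device, not the paper's notation:

```python
import numpy as np

def Q(eta, phi):
    """Q(eta, phi) of Equation (168), acting on undotted spinors."""
    return np.diag([np.exp((eta - 1j * phi) / 2),
                    np.exp(-(eta - 1j * phi) / 2)])

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
eta, phi = 0.8, 0.5

# Eigen-action of Equation (170).
assert np.allclose(Q(eta, phi) @ u, np.exp((eta - 1j * phi) / 2) * u)
assert np.allclose(Q(eta, phi) @ v, np.exp(-(eta - 1j * phi) / 2) * v)

# S of Equation (174) as an antisymmetric tensor-product combination.
# It is invariant because the two eigenvalues multiply to det Q = 1.
S = (np.kron(u, v) - np.kron(v, u)) / np.sqrt(2)
QQ = np.kron(Q(eta, phi), Q(eta, phi))
assert np.allclose(QQ @ S, S)
```

The same computation with the dotted matrix of Equation (169) shows the invariance of $S'$.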
Four-Vectors

Let us consider the bilinear products of one dotted and one undotted spinor, $u\dot{u}$, $u\dot{v}$, $v\dot{u}$, and $v\dot{v}$, and construct the matrix

$$U = \begin{pmatrix} u\dot{v} & v\dot{v} \\ u\dot{u} & v\dot{u} \end{pmatrix} \qquad (176)$$

Under the rotation $Z(\phi)$ and the boost $B(\eta)$, this becomes

$$
\begin{pmatrix}
e^{\eta} u \dot{v} & e^{-i\phi} v \dot{v} \\
e^{i\phi} u \dot{u} & e^{-\eta} v \dot{u}
\end{pmatrix}
\qquad (177)
$$

Indeed, this matrix is consistent with the transformation properties given in Table 8, and transforms like the four-vector

$$
\begin{pmatrix}
t+z & x-iy \\
x+iy & t-z
\end{pmatrix}
\qquad (178)
$$

This form was given in Equation (65), and has played the central role throughout this paper. Under the space inversion, this matrix becomes

$$
\begin{pmatrix}
t-z & -(x-iy) \\
-(x+iy) & t+z
\end{pmatrix}
\qquad (179)
$$
---PAGE_BREAK---

This space inversion is known as the parity operation.

In the form of Equation (176), a particle or field with four components is given by $(V_0, V_z, V_x, V_y)$. The two-by-two form of this four-vector is

$$ U = \begin{pmatrix} V_0 + V_z & V_x - iV_y \\ V_x + iV_y & V_0 - V_z \end{pmatrix} \qquad (180) $$

If boosted along the $z$ direction, this matrix becomes

$$ \begin{pmatrix} e^{\eta} (V_0 + V_z) & V_x - iV_y \\ V_x + iV_y & e^{-\eta} (V_0 - V_z) \end{pmatrix} \qquad (181) $$

In the mass-zero limit, the four-vector matrix of Equation (181) becomes

$$ \begin{pmatrix} 2A_0 & A_x - iA_y \\ A_x + iA_y & 0 \end{pmatrix} \qquad (182) $$

with the Lorentz condition $A_0 = A_z$. The gauge transformation applicable to the photon four-vector was discussed in detail in Section 5.2.

Let us now go back to the matrix of Equation (176). We can construct another matrix $\dot{U}$.
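The boost of Equation (181) can be verified numerically. The sketch below assumes the standard conjugation rule $U \rightarrow B U B^{\dagger}$ for the two-by-two four-vector, which is our reading of the formalism rather than a quoted formula:

```python
import numpy as np

def V_matrix(V0, Vx, Vy, Vz):
    """Two-by-two four-vector of Equation (180)."""
    return np.array([[V0 + Vz, Vx - 1j * Vy],
                     [Vx + 1j * Vy, V0 - Vz]], dtype=complex)

eta = 0.6
B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

V0, Vx, Vy, Vz = 2.0, 0.3, -0.4, 1.1
boosted = B @ V_matrix(V0, Vx, Vy, Vz) @ B.conj().T

# Equation (181): the diagonal picks up e^{+-eta}; off-diagonals are untouched.
expected = np.array([[np.exp(eta) * (V0 + Vz), Vx - 1j * Vy],
                     [Vx + 1j * Vy, np.exp(-eta) * (V0 - Vz)]])
assert np.allclose(boosted, expected)

# Reading off the boosted components reproduces the familiar z-boost.
V0p = (boosted[0, 0] + boosted[1, 1]).real / 2
Vzp = (boosted[0, 0] - boosted[1, 1]).real / 2
assert np.isclose(V0p, V0 * np.cosh(eta) + Vz * np.sinh(eta))
assert np.isclose(Vzp, V0 * np.sinh(eta) + Vz * np.cosh(eta))
```

The last two assertions show that the two-by-two conjugation encodes exactly the four-by-four Lorentz boost of the $(V_0, V_z)$ components.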
Since the dot conjugation leads to the space inversion,

$$ \dot{U} = \begin{pmatrix} \dot{u}v & \dot{v}v \\ \dot{u}u & \dot{v}u \end{pmatrix} \qquad (183) $$

Then

$$ \dot{u}v \approx (t-z), \qquad \dot{v}u \approx (t+z) \qquad (184) $$

$$ \dot{v}v \approx -(x-iy), \qquad \dot{u}u \approx -(x+iy) \qquad (185) $$

where the symbol $\approx$ means “transforms like”.

Thus, $U$ of Equation (176) and $\dot{U}$ of Equation (183) account for eight of the 16 bilinear forms. Since two bilinear forms make up the scalar and the pseudo-scalar of Equation (175), we have to give interpretations to the six remaining bilinear forms.

### 8.2. Second-Rank Tensor

In this subsection, we study bilinear forms in which both spinors are dotted or both are undotted. In Section 8.1, each bilinear form consisted of one dotted and one undotted spinor. We are interested in two sets of three quantities satisfying the $O(3)$ symmetry. They should therefore transform like

$$ (x+iy)/\sqrt{2}, \quad (x-iy)/\sqrt{2}, \quad z \qquad (186) $$

which are like

$$ uu, \quad vv, \quad (uv + vu)/\sqrt{2} \qquad (187) $$

respectively in the $O(3)$ regime. Since the dot conjugation is the parity operation, they are like

$$ -\dot{u}\dot{u}, \quad -\dot{v}\dot{v}, \quad -(\dot{u}\dot{v} + \dot{v}\dot{u})/\sqrt{2} \qquad (188) $$

In other words, under the dot conjugation,

$$ uu \rightarrow -\dot{u}\dot{u}, \quad \text{and} \quad vv \rightarrow -\dot{v}\dot{v} \qquad (189) $$
---PAGE_BREAK---

We noticed a similar sign change in Equation (185).
In order to construct the $z$ component in this $O(3)$ space, let us first consider

$$f_z = \frac{1}{2} [(uv + vu) - (\dot{u}\dot{v} + \dot{v}\dot{u})], \quad g_z = \frac{1}{2i} [(uv + vu) + (\dot{u}\dot{v} + \dot{v}\dot{u})] \qquad (190)$$

where $f_z$ and $g_z$ are respectively symmetric and anti-symmetric under the dot conjugation, or the parity operation. These quantities are invariant under the boost along the $z$ direction. They are also invariant under rotations around this axis, but they are not invariant under boosts along, or rotations around, the $x$ or $y$ axis. They are different from the scalars given in Equation (174).

Next, in order to construct the $x$ and $y$ components, we start with $f_\pm$ and $g_\pm$ as

$$f_+ = \frac{1}{\sqrt{2}} (uu - \dot{u}\dot{u}), \qquad g_+ = \frac{1}{\sqrt{2}i} (uu + \dot{u}\dot{u}) \qquad (191)$$

$$f_- = \frac{1}{\sqrt{2}} (vv - \dot{v}\dot{v}), \qquad g_- = \frac{1}{\sqrt{2}i} (vv + \dot{v}\dot{v}) \qquad (192)$$

Then

$$f_x = \frac{1}{\sqrt{2}} (f_+ + f_-) = \frac{1}{2} [(uu - \dot{u}\dot{u}) + (vv - \dot{v}\dot{v})] \qquad (193)$$

$$f_y = \frac{1}{\sqrt{2}i} (f_+ - f_-) = \frac{1}{2i} [(uu - \dot{u}\dot{u}) - (vv - \dot{v}\dot{v})] \qquad (194)$$

and

$$g_x = \frac{1}{\sqrt{2}} (g_+ + g_-) = \frac{1}{2i} [(uu + \dot{u}\dot{u}) + (vv + \dot{v}\dot{v})] \qquad (195)$$

$$g_y = \frac{1}{\sqrt{2}i} (g_+ - g_-) = -\frac{1}{2} [(uu + \dot{u}\dot{u}) - (vv + \dot{v}\dot{v})] \qquad (196)$$

Here $f_x$ and $f_y$ are symmetric under dot conjugation, while $g_x$ and $g_y$ are anti-symmetric.

Furthermore, $f_z$, $f_x$, and $f_y$ of Equations (190), (193) and (194) transform like a three-dimensional vector. The same can be said for the $g_i$ of Equations (190), (195) and (196). Thus, they can be grouped into the second-rank tensor

$$T = \begin{pmatrix}
0 & -g_z & -g_x & -g_y \\
g_z & 0 & -f_y & f_x \\
g_x & f_y & 0 & -f_z \\
g_y & -f_x & f_z & 0
\end{pmatrix} \qquad (197)$$

whose Lorentz-transformation properties are well known.
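The invariance of $f_z$ under $z$-boosts and $z$-rotations, stated after Equation (190), can be checked with a tensor-product representation of the bilinears (our modeling device, not the paper's notation): the undotted part transforms with $Q \otimes Q$ and the dotted part with $\dot{Q} \otimes \dot{Q}$.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

def Q(eta, phi):
    """Equation (168), acting on undotted spinors."""
    return np.diag([np.exp((eta - 1j * phi) / 2),
                    np.exp(-(eta - 1j * phi) / 2)])

def Qdot(eta, phi):
    """Equation (169), acting on dotted spinors."""
    return np.diag([np.exp(-(eta + 1j * phi) / 2),
                    np.exp((eta + 1j * phi) / 2)])

# f_z of Equation (190), kept as separate (undotted, dotted) parts.
fz_undotted = (np.kron(u, v) + np.kron(v, u)) / 2
fz_dotted = -(np.kron(u, v) + np.kron(v, u)) / 2

eta, phi = 0.9, 0.4
QQ = np.kron(Q(eta, phi), Q(eta, phi))
QQdot = np.kron(Qdot(eta, phi), Qdot(eta, phi))

# Each part maps to itself: f_z is invariant under z-boosts and z-rotations,
# because the uv and vu products pick up mutually cancelling eigenvalues.
assert np.allclose(QQ @ fz_undotted, fz_undotted)
assert np.allclose(QQdot @ fz_dotted, fz_dotted)
```

The same cancellation holds for $g_z$, whose undotted and dotted parts are the same symmetric combinations up to constants.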
The $g_i$ components change their signs under space inversion, while the $f_i$ components remain invariant. They are thus like the electric and magnetic fields, respectively.

If the system is Lorentz-boosted, $f_i$ and $g_i$ can be computed from Table 8. We are now interested in the symmetry of photons, obtained by taking the massless limit. According to the procedure developed in Section 6, we keep only the terms that become larger for larger values of $\eta$. Thus,

$$f_x \rightarrow \frac{1}{2}(uu - \dot{v}\dot{v}), \qquad f_y \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}) \qquad (198)$$

$$g_x \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}), \qquad g_y \rightarrow -\frac{1}{2}(uu - \dot{v}\dot{v}) \qquad (199)$$

in the massless limit.
---PAGE_BREAK---

Then the tensor of Equation (197) becomes

$$F = \begin{pmatrix} 0 & 0 & -E_x & -E_y \\ 0 & 0 & -B_y & B_x \\ E_x & B_y & 0 & 0 \\ E_y & -B_x & 0 & 0 \end{pmatrix} \qquad (200)$$

with

$$B_x \approx \frac{1}{2}(uu - \dot{v}\dot{v}), \quad B_y \approx \frac{1}{2i}(uu + \dot{v}\dot{v}) \qquad (201)$$

$$E_x \approx \frac{1}{2i}(uu + \dot{v}\dot{v}), \quad E_y \approx -\frac{1}{2}(uu - \dot{v}\dot{v}) \qquad (202)$$

The electric and magnetic field components are perpendicular to each other. Furthermore,

$$E_x = B_y, \quad E_y = -B_x \qquad (203)$$

To see what these relations mean, let us go back to Equation (191). In the massless limit,

$$B_+ \approx E_+ \approx uu, \quad B_- \approx E_- \approx \dot{v}\dot{v} \qquad (204)$$

The gauge transformations applicable to $u$ and $\dot{v}$ are the two-by-two matrices

$$\begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ -\gamma & 1 \end{pmatrix} \qquad (205)$$

respectively, as noted in Sections 5.2 and 7.1. Both $u$ and $\dot{v}$ are invariant under these gauge transformations, while $\dot{u}$ and $v$ are not.
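The gauge invariance of the surviving bilinears $uu$ and $\dot{v}\dot{v}$ under the matrices of Equation (205) can be made explicit numerically; the spinor components are those of Equations (101) and (155):

```python
import numpy as np

gamma = 0.7
G = np.array([[1.0, -gamma], [0.0, 1.0]])      # acts on undotted spinors
Gdot = np.array([[1.0, 0.0], [-gamma, 1.0]])   # acts on dotted spinors

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
udot, vdot = u.copy(), v.copy()                # Equation (155)

# u and vdot are gauge invariant ...
assert np.allclose(G @ u, u)
assert np.allclose(Gdot @ vdot, vdot)
# ... while v and udot are not.
assert not np.allclose(G @ v, v)
assert not np.allclose(Gdot @ udot, udot)

# Hence the surviving field bilinears uu and vdot.vdot of Equation (204),
# represented as tensor products, are themselves gauge invariant.
assert np.allclose(np.kron(G, G) @ np.kron(u, u), np.kron(u, u))
assert np.allclose(np.kron(Gdot, Gdot) @ np.kron(vdot, vdot),
                   np.kron(vdot, vdot))
```

This is the two-by-two counterpart of the statement that the transverse field components of Equations (201) and (202) are unaffected by gauge transformations.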
The $B_+$ and $E_+$ are for the photon spin along the $z$ direction, while $B_-$ and $E_-$ are for the opposite direction. In 1964 [35], Weinberg constructed gauge-invariant state vectors for massless particles starting from Wigner's 1939 paper [1]. The bilinear spinors $uu$ and $\dot{v}\dot{v}$ correspond to Weinberg's state vectors.

### 8.3. Possible Symmetry of the Higgs Mechanism

In this section, we discussed how the two-by-two formalism of the group $SL(2,c)$ leads to the scalar, four-vector, and tensor representations of the Lorentz group. We discussed in detail how the four-vector for a massive particle can be decomposed into the symmetry of a two-component massless particle and one gauge degree of freedom. This aspect was studied in detail by Kim and Wigner [20,21], and their results are illustrated in Figure 6. This decomposition is known in the literature as group contraction.

The four-dimensional Lorentz group can be contracted to the Euclidean and cylindrical groups. These contraction processes could transform a four-component massive vector meson into a massless spin-one particle with two spin components and one gauge degree of freedom.

Since this contraction procedure is spelled out in detail in [21], as well as in the present paper, its reverse process is also well understood. We start with one two-component massless particle with one gauge degree of freedom, and end up with a massive vector meson with its four components.

The mathematics of this process is not unlike the Higgs mechanism [36,37], where one massless field with two degrees of freedom absorbs one gauge degree of freedom to become a quartet of bosons, namely the $W^\pm$ and $Z$, plus the Higgs boson. As is well known, this mechanism is the basis for the theory of electroweak interactions formulated by Weinberg and Salam [38,39].
---PAGE_BREAK---

**Figure 6.** Contractions of the three-dimensional rotation group.
(a) Contraction in terms of the tangential plane and the tangential cylinder [20]; (b) Contraction in terms of the expansion and contraction of the longitudinal axis [21]. In both cases, the symmetry ends up with one rotation around the longitudinal direction and one translational degree of freedom along the longitudinal axis. The rotation and the translation correspond to the helicity and gauge degrees of freedom, respectively.

The term "spontaneous symmetry breaking" is used for the Higgs mechanism. It could be an interesting problem to see whether this symmetry breaking for the two-Higgs-doublet model can be formulated in terms of the Lorentz group and its contractions. In this connection, we note an interesting recent paper by Dée and Ivanov [40].

## 9. Conclusions

The damped harmonic oscillator, Wigner's little groups, and the Poincaré sphere belong to three different branches of physics. In this paper, it was noted that they are based on the same mathematical framework, namely the algebra of two-by-two matrices.

The second-order differential equation for damped harmonic oscillators can be formulated in terms of two-by-two matrices. These matrices produce the algebra of the group $Sp(2)$. While there are three trace classes of the two-by-two matrices of this group, the damped oscillator tells us how to make transitions from one class to another.

It is shown that Wigner's three little groups can be defined in terms of the trace classes of the $Sp(2)$ group. If the trace is smaller than two, the little group is that of massive particles. If it is greater than two, the little group is that of imaginary-mass particles. If the trace is equal to two, the little group is that of massless particles. Thus, the damped harmonic oscillator provides a procedure for the transition from one little group to another.

The Poincaré sphere contains the symmetry of the six-parameter $SL(2, c)$ group.
Thus, the sphere provides the procedure for extending the symmetry of the little group defined within the Lorentz group of three-dimensional Minkowski space to the full Lorentz group of four-dimensional space-time. In addition, the Poincaré sphere offers the variable that allows us to change the symmetry of a massive particle to that of a massless particle by continuously decreasing the mass.

In this paper, we extracted the mathematical properties of Wigner's little groups from the damped harmonic oscillator and the Poincaré sphere. In so doing, we have shown that the transition from one little group to another is tangentially continuous.

This subject was initiated by İnönü and Wigner in 1953 as group contraction [41]. In their paper, they discussed the three-dimensional rotation group being contracted to the two-dimensional Euclidean group, with one rotational and two translational degrees of freedom. While the $O(3)$ rotation group can be illustrated by a three-dimensional sphere, the plane tangential at
---PAGE_BREAK---

the north pole is for the $E(2)$ Euclidean group. However, we can also consider a cylinder tangential at the equatorial belt. The resulting cylindrical group is isomorphic to the Euclidean group [20]. While the rotational degree of freedom of this cylinder is for the photon spin, the up and down translations on the surface of the cylinder correspond to the gauge degree of freedom of the photon, as illustrated in Figure 6.

It was noted also that the Bargmann decomposition of two-by-two matrices, as illustrated in Figures 1 and 2, allows us to study more detailed properties of the little groups, including space and time reflection properties. Also in this paper, we have discussed how the scalars, four-vectors, and four-tensors can be constructed from the two-by-two representation in the Lorentz-covariant world.
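The contraction described above can be illustrated numerically. The sketch below uses our own conventions (real antisymmetric $O(3)$ generators and a contraction matrix $C = \mathrm{diag}(1, 1, R)$), which are not quoted from [41]; it shows that the contracted generators close into an $E(2)$-like algebra:

```python
import numpy as np

# Real antisymmetric generators of O(3): (J_i)_{jk} = -epsilon_{ijk}.
J1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
J2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
J3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def contracted(J, R):
    """(1/R) C^{-1} J C with C = diag(1, 1, R): flattening at large radius."""
    C = np.diag([1.0, 1.0, R])
    return np.linalg.inv(C) @ J @ C / R

def comm(A, B):
    return A @ B - B @ A

R = 1e6  # large sphere radius: tangent-plane limit
P1 = contracted(J2, R)
P2 = contracted(-J1, R)

# E(2)-like algebra of the contracted generators: the two "translations"
# commute, and J3 rotates them into each other.
assert np.allclose(comm(P1, P2), np.zeros((3, 3)), atol=1e-9)
assert np.allclose(comm(J3, P1), P2, atol=1e-9)
assert np.allclose(comm(J3, P2), -P1, atol=1e-9)
```

The residual terms in the commutators are of order $1/R^2$, making explicit the sense in which the transition is "tangentially continuous".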
In addition, it should be noted that the symmetry of the Lorentz group is also contained in the squeezed state of light [14] and in the ABCD matrix for optical beam transfers [18]. We also mentioned the possibility of understanding the mathematics of the Higgs mechanism in terms of the Lorentz group and its contractions.

## Acknowledgements

In his 1939 paper [1], Wigner worked out the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. In so doing, he established their internal space-time symmetries. In spite of its importance, this paper remains one of the most difficult papers to understand. Wigner was eager to make his paper understandable to younger physicists.

While he was the pioneer in introducing the mathematics of group theory to physics, he was also quite fond of using two-by-two matrices to explain group theoretical ideas. He asked one of the present authors (Young S. Kim) to rewrite his 1939 paper [1] using the language of those matrices. This is precisely what we did in the present paper.

We are grateful to Eugene Paul Wigner for this valuable suggestion.

## Author Contributions

This paper is largely based on the earlier papers by Young S. Kim and Marilyn E. Noz, and those by Sibel Başkal and Young S. Kim. The two-by-two formulation of the damped oscillator in Section 2 was jointly developed by Sibel Başkal and Young S. Kim during the summer of 2012. Marilyn E. Noz developed the idea of the symmetry of small-mass neutrinos in Section 7. The limiting process in the symmetry of the Poincaré sphere was formulated by Young S. Kim. Sibel Başkal initially constructed the four-by-four tensor representation in Section 8.

The initial organization of this paper was conceived by Young S. Kim in his attempt to follow Wigner's suggestion to translate his 1939 paper into the language of two-by-two matrices. Sibel Başkal and Marilyn E.
Noz tightened the organization and filled in the details.

## Conflicts of Interest

The authors declare no conflicts of interest.

## References

1. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204.

2. Han, D.; Kim, Y.S.; Son, D. Eulerian parametrization of Wigner little groups and gauge transformations in terms of rotations in 2-component spinors. *J. Math. Phys.* **1986**, *27*, 2228–2235.

3. Born, M.; Wolf, E. *Principles of Optics*, 6th ed.; Pergamon: Oxford, UK, 1980.
---PAGE_BREAK---

4. Han, D.; Kim, Y.S.; Noz, M.E. Stokes parameters as a Minkowskian four-vector. *Phys. Rev. E* **1997**, *56*, 6065–6076.

5. Brosseau, C. *Fundamentals of Polarized Light: A Statistical Optics Approach*; John Wiley: New York, NY, USA, 1998.

6. Başkal, S.; Kim, Y.S. De Sitter group as a symmetry for optical decoherence. *J. Phys. A* **2006**, *39*, 7775–7788.

7. Kim, Y.S.; Noz, M.E. Symmetries shared by the Poincaré Group and the Poincaré Sphere. *Symmetry* **2013**, *5*, 233–252.

8. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725.

9. Han, D.; Kim, Y.S.; Son, D. Photons, neutrinos and gauge transformations. *Am. J. Phys.* **1986**, *54*, 818–821.

10. Başkal, S.; Kim, Y.S. Little groups and Maxwell-type tensors for massive and massless particles. *Europhys. Lett.* **1997**, *40*, 375–380.

11. Leggett, A.; Chakravarty, S.; Dorsey, A.; Fisher, M.; Garg, A.; Zwerger, W. Dynamics of the dissipative 2-state system. *Rev. Mod. Phys.* **1987**, *59*, 1–85.

12. Başkal, S.; Kim, Y.S. One analytic form for four branches of the ABCD matrix. *J. Mod. Opt.* **2010**, *57*, 1251–1259.

13. Başkal, S.; Kim, Y.S. Lens optics and the continuity problems of the ABCD matrix. *J. Mod. Opt.* **2014**, *61*, 161–166.

14. Kim, Y.S.; Noz, M.E.
*Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.

15. Bargmann, V. Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640.

16. Iwasawa, K. On some types of topological groups. *Ann. Math.* **1949**, *50*, 507–558.

17. Guillemin, V.; Sternberg, S. *Symplectic Techniques in Physics*; Cambridge University Press: Cambridge, UK, 1984.

18. Başkal, S.; Kim, Y.S. Lorentz Group in Ray and Polarization Optics. In *Mathematical Optics: Classical, Quantum and Computational Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Taylor and Francis: New York, NY, USA, 2013; Chapter 9, pp. 303–340.

19. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon: Oxford, UK, 1964.

20. Kim, Y.S.; Wigner, E.P. Cylindrical group and massless particles. *J. Math. Phys.* **1987**, *28*, 1175–1179.

21. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. *J. Math. Phys.* **1990**, *31*, 55–60.

22. Georgieva, E.; Kim, Y.S. Iwasawa effects in multilayer optics. *Phys. Rev. E* **2001**, *64*, doi:10.1103/PhysRevE.64.026602.

23. Saleh, B.E.A.; Teich, M.C. *Fundamentals of Photonics*, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2007.

24. Papoulias, D.K.; Kosmas, T.S. Exotic Lepton Flavour Violating Processes in the Presence of Nuclei. *J. Phys.: Conf. Ser.* **2013**, *410*, 012123:1–012123:5.

25. Dinh, D.N.; Petcov, S.T.; Sasao, N.; Tanaka, M.; Yoshimura, M. Observables in neutrino mass spectroscopy using atoms. *Phys. Lett. B* **2013**, *719*, 154–163.

26. Miramonti, L.; Antonelli, V. Advancements in Solar Neutrino physics. *Int. J. Mod. Phys. E* **2013**, *22*, 1–16.

27. Li, Y.-F.; Cao, J.; Jun, Y.; Wang, Y.; Zhan, L. Unambiguous determination of the neutrino mass hierarchy using reactor neutrinos. *Phys. Rev. D* **2013**, *88*, 013008:1–013008:9.

28. Bergstrom, J.
Combining and comparing neutrinoless double beta decay experiments using different nuclei. *J. High Energy Phys.* **2013**, *02*, 093:1–093:27.

29. Han, T.; Lewis, I.; Ruiz, R.; Si, Z.-G. Lepton number violation and $W'$ chiral couplings at the LHC. *Phys. Rev. D* **2013**, *87*, 035011:1–035011:25.

30. Drewes, M. The phenomenology of right handed neutrinos. *Int. J. Mod. Phys. E* **2013**, *22*, 1330019:1–1330019:75.

31. Barut, A.O.; McEwan, J. The four states of the massless neutrino with Pauli coupling by spin-gauge invariance. *Lett. Math. Phys.* **1986**, *11*, 67–72.

32. Palcu, A. Neutrino mass as a consequence of the exact solution of 3-3-1 gauge models without exotic electric charges. *Mod. Phys. Lett. A* **2006**, *21*, 1203–1217.

33. Bilenky, S.M. Neutrino. *Phys. Part. Nucl.* **2013**, *44*, 1–46.

34. Alhendi, H.A.; Lashin, E.I.; Mudlej, A.A. Textures with two traceless submatrices of the neutrino mass matrix. *Phys. Rev. D* **2008**, *77*, 013009:1–013009:13.

35. Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. *Phys. Rev.* **1964**, *135*, B1049–B1056.

36. Higgs, P.W. Broken symmetries and the masses of gauge bosons. *Phys. Rev. Lett.* **1964**, *13*, 508–509.

*Symmetry* **2014**, *6*, 473–515
---PAGE_BREAK---

37. Guralnik, G.S.; Hagen, C.R.; Kibble, T.W.B. Global conservation laws and massless particles. *Phys. Rev. Lett.* **1964**, *13*, 585–587.

38. Weinberg, S. A model of leptons. *Phys. Rev. Lett.* **1967**, *19*, 1264–1266.

39. Weinberg, S. *The Quantum Theory of Fields, Volume II, Modern Applications*; Cambridge University Press: Cambridge, UK, 1996.

40. Dée, A.; Ivanov, I.P. Higgs boson masses of the general two-Higgs-doublet model in the Minkowski-space formalism. *Phys. Rev. D* **2010**, *81*, 015012:1–015012:8.

41. Inönü, E.; Wigner, E.P. On the contraction of groups and their representations. Proc. Natl. Acad. Sci.
USA **1953**, *39*, 510–524.

© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
---PAGE_BREAK---

Article

Loop Representation of Wigner's Little Groups

Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³

¹ Department of Physics, Middle East Technical University, 06800 Ankara, Turkey; baskal@newton.physics.metu.edu.tr

² Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA

³ Department of Radiology, New York University, New York, NY 10016, USA; marilyn.noz@med.nyu.edu

* Correspondence: yskim@umd.edu; Tel.: +1-301-937-6306

Academic Editor: Sergei D. Odintsov

Received: 12 May 2017; Accepted: 15 June 2017; Published: 23 June 2017

**Abstract:** Wigner's little groups are the subgroups of the Lorentz group whose transformations leave the momentum of a given particle invariant. They thus define the internal space-time symmetries of relativistic particles. These symmetries take different mathematical forms for massive and for massless particles. However, it is shown possible to construct one unified representation using a graphical description. This graphical approach allows us to describe vividly parity, time reversal, and charge conjugation of the internal symmetry groups. As for the language of group theory, the two-by-two representation is used throughout the paper. While this two-by-two representation is for spin-1/2 particles, it is shown possible to construct the representations for spin-0 particles and spin-1 particles, as well as for higher-spin particles, for both massive and massless cases. It is shown also that the four-by-four Dirac matrices constitute a two-by-two representation of Wigner's little group.
+ +**Keywords:** Wigner's little groups; Lorentz group; unified picture of massive and massless particles; two-by-two representations; graphical approach to internal space-time symmetries + +PACS: 02.10.Yn; 02.20.Uw; 03.65.Fd + +# 1. Introduction + +In his 1939 paper [1], Wigner introduced subgroups of the Lorentz group whose transformations leave the momentum of a given particle invariant. These subgroups are called Wigner’s little groups in the literature and are known as the symmetry groups for internal space-time structure. + +For instance, a massive particle at rest can have spin that can be rotated in three-dimensional space. +The little group in this case is the three-dimensional rotation group. For a massless particle moving +along the z direction, Wigner noted that rotations around the z axis do not change the momentum. +In addition, he found two more degrees of freedom, which together with the rotation, constitute a +subgroup locally isomorphic to the two-dimensional Euclidean group. + +However, Wigner’s 1939 paper did not deal with the following critical issues. + +1. As for the massive particle, Wigner worked out his little group in the Lorentz frame where the particle is at rest with zero momentum, resulting in the three-dimensional rotation group. He could have Lorentz-boosted the O(3)-like little group to make the little group for a moving particle. + +2. While the little group for a massless particle is like *E*(2), it is not difficult to associate the rotational degree of freedom to the helicity. However, Wigner did not give physical interpretations to the two translation-like degrees of freedom. + +3. While the Lorentz group does not allow mass variations, particles with infinite momentum should behave like massless particles. The question is whether the Lorentz-boosted O(3)-like little group becomes the *E*(2)-like little group for particles with infinite momentum. +---PAGE_BREAK--- + +These issues have been properly addressed since then [2–5]. 
The translation-like degrees of freedom for massless particles collapse into one gauge degree of freedom, and the *E*(2)-like little group can be obtained as the infinite-momentum limit of the *O*(3)-like little group. This history is summarized in Figure 1. + +**Figure 1.** *O*(3)-like and *E*(2)-like internal space-time symmetries of massive and massless particles. The sphere corresponds to the *O*(3)-like little group for the massive particle. There is a plane tangential to the sphere at its north pole, which is *E*(2). There is also a cylinder tangent to the sphere at its equatorial belt. This cylinder gives one helicity and one gauge degree of freedom. This figure thus gives a unified picture of the little groups for massive and massless particles [5]. + +In this paper, we shall present these developments using a mathematical language more transparent than those used in earlier papers. + +1. In his original paper [1], Wigner worked out his little group for the massive particle when its momentum is zero. How about moving massive particles? In this paper, we start with a moving particle with non-zero momentum. We then perform rotations and boosts whose net effect does not change the momentum [6–8]. This procedure can be applied to the massive, massless, and imaginary-mass cases. + +2. By now, we have a clear understanding of the group SL(2, c) as the universal covering group of the Lorentz group. The logic with two-by-two matrices is far more transparent than the mathematics based on four-by-four matrices. We shall thus use the two-by-two representation of the Lorentz group throughout the paper [5,9–11]. + +The purpose of this paper is to make the physics contained in Wigner’s original paper more transparent. In Section 2, we give the six generators of the Lorentz group. It is possible to write them in terms of coordinate transformations, four-by-four matrices, and two-by-two matrices. 
In Section 3, we introduce Wigner's little groups in terms of two-by-two matrices. In Section 4, it is shown possible to construct transformation matrices of the little group by performing rotations and a boost resulting in a non-trivial matrix, which leaves the given momentum invariant.

Since we are more familiar with Dirac matrices than with the Lorentz group, it is shown in Section 5 that the Dirac matrices constitute a representation of the Lorentz group, and that Dirac's four-by-four matrices form a representation of Wigner's little groups. In Section 6, we construct spin-0 and spin-1 particles from the SL(2, c) spinors. We also discuss massless higher-spin particles.

## 2. Lorentz Group and Its Representations

The group of four-by-four matrices, which performs Lorentz transformations on the four-dimensional Minkowski space leaving invariant the quantity ($t^2 - z^2 - x^2 - y^2$), forms the starting point for the Lorentz group. As there are three rotation and three boost generators, the Lorentz group is a six-parameter group.

Einstein, by observing that this Lorentz group is also applicable to the energy-momentum four-vector $(E, p_z, p_x, p_y)$, was able to derive his Lorentz-covariant energy-momentum relation commonly known as $E = mc^2$. Thus, the particle mass is a Lorentz-invariant quantity.

The Lorentz group is generated by the three rotation operators:

$$J_i = -i \left( x_j \frac{\partial}{\partial x_k} - x_k \frac{\partial}{\partial x_j} \right), \qquad (1)$$

where $i, j, k = 1, 2, 3$ are cyclic, and three boost operators:

$$K_i = -i \left( t \frac{\partial}{\partial x_i} + x_i \frac{\partial}{\partial t} \right). \qquad (2)$$

These generators satisfy the closed set of commutation relations:

$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, K_j] = i\epsilon_{ijk}K_k, \quad [K_i, K_j] = -i\epsilon_{ijk}J_k, \qquad (3)$$

which are known as the Lie algebra for the Lorentz group.
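The Lie algebra of Equation (3) can be checked by direct computation. The following sketch (Python with numpy; an illustration added here, not part of the original text) uses the two-by-two realization $J_i = \sigma_i/2$, $K_i = i\sigma_i/2$, which the paper introduces shortly in Equation (5):

```python
import numpy as np

# Pauli matrices; Equation (5) realizes the generators as
# J_i = sigma_i / 2 and K_i = i * sigma_i / 2.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

J = [s / 2 for s in (s1, s2, s3)]
K = [1j * s / 2 for s in (s1, s2, s3)]

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# Totally antisymmetric epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

# Verify all three relations of Equation (3) for every index pair
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(J[i], J[j]),
                           sum(1j * eps[i, j, k] * J[k] for k in range(3)))
        assert np.allclose(comm(J[i], K[j]),
                           sum(1j * eps[i, j, k] * K[k] for k in range(3)))
        assert np.allclose(comm(K[i], K[j]),
                           sum(-1j * eps[i, j, k] * J[k] for k in range(3)))
```

The same check passes for any other faithful realization of the six generators, including the four-by-four matrices of Equation (4).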
+ +Under the space inversion, $x_i \rightarrow -x_i$, or the time reflection, $t \rightarrow -t$, the boost generators $K_i$ change sign. However, the Lie algebra remains invariant, which means that the commutation relations remain invariant under Hermitian conjugation. + +In terms of four-by-four matrices applicable to the Minkowskian coordinate of $(t,z,x,y)$, the generators can be written as: + +$$J_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}, \quad K_3 = \begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad (4)$$ + +for rotations around and boosts along the z direction, respectively. Similar expressions can be written for the x and y directions. We see here that the rotation generators $J_i$ are Hermitian, but the boost generators $K_i$ are anti-Hermitian. + +We can also consider the two-by-two matrices: + +$$J_i = \frac{1}{2}\sigma_i, \quad \text{and} \quad K_i = \frac{i}{2}\sigma_i, \qquad (5)$$ + +where $\sigma_i$ are the Pauli spin matrices. These matrices also satisfy the commutation relations given in Equation (3). + +There are interesting three-parameter subgroups of the Lorentz group. In 1939 [1], Wigner considered the subgroups whose transformations leave the four-momentum of a given particle invariant. First of all, consider a massive particle at rest. The momentum of this particle is invariant under rotations in three-dimensional space. What happens for the massless particle that cannot be brought to a rest frame? In this paper we shall consider this and other problems using the two-by-two representation of the Lorentz group. +---PAGE_BREAK--- + +### 3. 
Two-by-Two Representation of Wigner's Little Groups

The six generators of Equation (5) lead to the group of two-by-two unimodular matrices of the form:

$$ G = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \qquad (6) $$

with $\det(G) = 1$, where the matrix elements are complex numbers. There are thus six independent real numbers to accommodate the six generators given in Equation (5). The group of matrices of this form is called SL(2, c) in the literature. Since the generators $K_i$ are not Hermitian, the matrix G is not always unitary, and its Hermitian conjugate is not necessarily its inverse.

The space-time four-vector can be written as [5,9,11]:

$$ X = \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix}, \qquad (7) $$

whose determinant, $t^2 - z^2 - x^2 - y^2$, remains invariant under the Hermitian transformation:

$$ X' = G X G^{\dagger}. \qquad (8) $$

This is thus a Lorentz transformation. This transformation can be explicitly written as:

$$ \begin{pmatrix} t'+z' & x'-iy' \\ x'+iy' & t'-z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix}. \qquad (9) $$

With these six independent real parameters, it is possible to construct four-by-four matrices for Lorentz transformations applicable to the four-dimensional Minkowskian space [5,12]. For the purpose of the present paper, we need some special cases, and they are given in Table 1.

Table 1. Two-by-two and four-by-four representations of the Lorentz group.
GeneratorsTwo-by-TwoFour-by-Four
J3 = 12(0   0)
                                                                                                                                                                      (exp(iφ/2)
& 0
& exp(-iφ/2))
0
& 0
0
& 0
K3 = 12(i   0)
& 0
& -i)
0
& 0
0
& 0
J1 = 12(0 & 1)
& 1
& 0)
0
& 0
0
& 0
K1 = 12(0 & i)
& i
& 0)
0
& 0
0
& 0
J2 = 12(0 & -i)
& i
& 0)
0
& 0
0
& 0
K2 = 12(0 & -1)
& -1
& 0)
0
& 0
0
& 0
+ +$$ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \qquad (6) $$ + +$$ X' = G X G^{\dagger}. \qquad (8) $$ + +$$ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & 0 & 0 \\ 0 & 0 & \sin\theta & 0 \\ 0 & 0 & 0 & \cos\theta \end{pmatrix}. \qquad (9) $$ + +$$ \begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$ + +$$ \begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & -\sin\theta & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$ + +$$ \begin{pmatrix} \cosh\lambda & 0 & 0 & \sinh\lambda \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sinh\lambda & 0 & 0 & \cosh\lambda \end{pmatrix}. $$ +---PAGE_BREAK--- + +Likewise, the two-by-two matrix for the four-momentum takes the form: + +$$P = \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix}, \qquad (10)$$ + +with $p_0 = \sqrt{m^2 + p_z^2 + p_x^2 + p_2^2}$. The transformation property of Equation (9) is applicable also to this energy-momentum four-vector. + +In 1939 [1], Wigner considered the following three four-vectors. + +$$P_+ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad P_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad P_- = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (11)$$ + +whose determinants are 1, 0, and -1, respectively, corresponding to the four-momenta of massive, massless, and imaginary-mass particles, as shown in Table 2. + +Table 2. The Wigner momentum vectors in the two-by-two matrix representation together with the corresponding transformation matrix. These four-momentum matrices have determinants that are positive, zero, and negative for massive, massless, and imaginary-mass particles, respectively. + +
Particle MassFour-MomentumTransform Matrix
Massive(10 01)(cos(θ/2) − sin(θ/2))
Massless(10 00)(10 − γ-1)
Imaginary mass(10 0−1)(cosh(λ/2) sinh(λ/2))
He then constructed the subgroups of the Lorentz group whose transformations leave these four-momenta invariant. These subgroups are called Wigner's little groups in the literature. Thus, the matrices of these little groups should satisfy:

$$W P_i W^\dagger = P_i, \qquad (12)$$

where $i = +, 0, -$.

Since the momentum of the particle is fixed, these little groups define the internal space-time symmetries of the particle. For all three cases, the momentum is invariant under rotations around the z axis, as can be seen from the rotation matrix generated by $J_3$ in Table 1.

For the first case, corresponding to a massive particle at rest, the requirement of the subgroup is:

$$W P_+ W^\dagger = P_+. \qquad (13)$$

This requirement tells us that the subgroup is the rotation subgroup, with the rotation matrix around the y direction:

$$R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}. \qquad (14)$$

For the second case of $P_0$, the triangular matrix of the form:

$$\Gamma(\xi) = \begin{pmatrix} 1 & -\xi \\ 0 & 1 \end{pmatrix}, \qquad (15)$$

satisfies the Wigner condition of Equation (12). If we allow rotations around the z axis, the expression becomes:

$$ \Gamma(\xi, \phi) = \begin{pmatrix} 1 & -\xi \exp(-i\phi) \\ 0 & 1 \end{pmatrix}. \quad (16) $$

This matrix is generated by:

$$ N_1 = J_2 - K_1 = \begin{pmatrix} 0 & -i \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = J_1 + K_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. \quad (17) $$

Thus, the little group is generated by $J_3$, $N_1$, and $N_2$. They satisfy the commutation relations:

$$ [N_1, N_2] = 0, \quad [J_3, N_1] = -iN_2, \quad [J_3, N_2] = iN_1. \quad (18) $$

Wigner in 1939 [1] observed that this set is the same as that of the two-dimensional Euclidean group with one rotation and two translations.
The physical interpretation of the rotation is easy to understand. It is the helicity of the massless particle. On the other hand, the physics of the $N_1$ and $N_2$ matrices has a stormy history, and the issue was not completely settled until 1990 [4]. They generate gauge transformations.

For the third case of $P_-$, the matrix of the form:

$$ S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}, \quad (19) $$

satisfies the Wigner condition of Equation (12). This corresponds to the Lorentz boost along the x direction generated by $K_1$, as shown in Table 1. Because of the rotation symmetry around the z axis, the Wigner condition is satisfied also by the boost along the y axis. The little group is thus generated by $J_3$, $K_1$, and $K_2$. These three generators satisfy the commutation relations:

$$ [J_3, K_1] = iK_2, \quad [J_3, K_2] = -iK_1, \quad [K_1, K_2] = -iJ_3, \quad (20) $$

and form the little group $O(2, 1)$, which is the Lorentz group applicable to two space-like and one time-like dimensions.

Of course, we can add rotations around the z axis. Let us Lorentz-boost these matrices along the z direction with the diagonal matrix:

$$ B(\eta) = \begin{pmatrix} \exp(\eta/2) & 0 \\ 0 & \exp(-\eta/2) \end{pmatrix}. \quad (21) $$

Then, the matrices of Equations (14), (15), and (19) become:

$$ B(\eta)R(\theta)B(-\eta) = \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}, \quad (22) $$

$$ B(\eta)\Gamma(\xi)B(-\eta) = \begin{pmatrix} 1 & -e^{\eta}\xi \\ 0 & 1 \end{pmatrix}, \quad (23) $$

$$ B(\eta)S(-\lambda)B(-\eta) = \begin{pmatrix} \cosh(\lambda/2) & -e^{\eta} \sinh(\lambda/2) \\ -e^{-\eta} \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}, \quad (24) $$

respectively. We have changed the sign of $\lambda$ for future convenience.
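The boosted forms of Equations (22)–(24) follow from direct matrix multiplication; the following numpy sketch (with arbitrary parameter values; not part of the original text) confirms the three closed-form expressions:

```python
import numpy as np

eta, theta, lam, xi = 0.8, 0.6, 0.5, 1.1   # arbitrary values

B = lambda e: np.diag([np.exp(e/2), np.exp(-e/2)])     # Equation (21)
R = np.array([[np.cos(theta/2), -np.sin(theta/2)],     # Equation (14)
              [np.sin(theta/2),  np.cos(theta/2)]])
G = np.array([[1.0, -xi], [0.0, 1.0]])                 # Equation (15)
S = lambda l: np.array([[np.cosh(l/2), np.sinh(l/2)],  # Equation (19)
                        [np.sinh(l/2), np.cosh(l/2)]])

# Right-hand sides of Equations (22)-(24)
eq22 = np.array([[np.cos(theta/2), -np.exp(eta)*np.sin(theta/2)],
                 [np.exp(-eta)*np.sin(theta/2), np.cos(theta/2)]])
eq23 = np.array([[1.0, -np.exp(eta)*xi], [0.0, 1.0]])
eq24 = np.array([[np.cosh(lam/2), -np.exp(eta)*np.sinh(lam/2)],
                 [-np.exp(-eta)*np.sinh(lam/2), np.cosh(lam/2)]])

assert np.allclose(B(eta) @ R @ B(-eta), eq22)
assert np.allclose(B(eta) @ G @ B(-eta), eq23)
assert np.allclose(B(eta) @ S(-lam) @ B(-eta), eq24)
```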
When $\eta$ becomes large, $\theta$, $\xi$, and $\lambda$ should become small if the upper-right elements of these three matrices are to remain finite. In that case, the diagonal elements become one, and all three matrices become like the triangular matrix:

$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}. \tag{25} $$

Here comes the question of whether the matrix of Equation (24) can be continued from Equation (22), via Equation (23). For this purpose, let us write Equation (22) as:

$$ \begin{pmatrix} 1 - \frac{(\gamma\epsilon)^2}{2} & -\gamma \\ \gamma\epsilon^2 & 1 - \frac{(\gamma\epsilon)^2}{2} \end{pmatrix}, \tag{26} $$

for small $\theta = 2\gamma\epsilon$, with $\epsilon = e^{-\eta}$. For Equation (24), we can write:

$$ \begin{pmatrix} 1 + \frac{(\gamma\epsilon)^2}{2} & -\gamma \\ -\gamma\epsilon^2 & 1 + \frac{(\gamma\epsilon)^2}{2} \end{pmatrix}, \tag{27} $$

with $\lambda = 2\gamma\epsilon$. Both of these expressions become the triangular matrix of Equation (25) when $\epsilon = 0$. For small values of $\epsilon$, the diagonal elements change from $\cos(\theta/2)$ to $\cosh(\lambda/2)$, while $\sin(\theta/2)$ becomes $-\sinh(\lambda/2)$. Thus, it is possible to continue from Equation (22) to Equation (24). The mathematical details of this process have been discussed in our earlier paper on this subject [13].

We are then led to the question of whether there is one expression that will take care of all three cases. We shall discuss this issue in Section 4.

## 4. Loop Representation of Wigner's Little Groups

It was noted in Section 3 that the matrices of Wigner's little group take different forms for massive, massless, and imaginary-mass particles. In this section, we construct one two-by-two matrix that works for all three different cases.

In his original paper [1], Wigner constructed these matrices in specific Lorentz frames.
For instance, for a moving massive particle with a non-zero momentum, Wigner brings it to the rest frame and works out the *O*(3) subgroup of the Lorentz group as the little group for this massive particle. In order to complete the little group, we should boost this *O*(3) to the frame with the original non-zero momentum [4]. + +In this section, we construct transformation matrices without changing the momentum. Let us assume that the momentum is along the z direction; the rotation around the z axis leaves the momentum invariant. According to the Euler decomposition, the rotation around the y axis, in addition, will accommodate rotations along all three directions. For this reason, it is enough to study what happens in transformations within the xz plane [14]. + +It was Kupersztych [6] who showed in 1976 that it is possible to construct a momentum-preserving transformation by a rotation followed by a boost as shown in Figure 2. In 1981 [7], Han and Kim showed that the boost can be decomposed into two components as illustrated in Figure 2. In 1988 [8], Han and Kim showed that the same purpose can be achieved by one boost preceded and followed by the same rotation matrix, as shown also in Figure 2. We choose to call this loop the “D loop” and write the transformation matrix as: + +$$ D(\alpha, \chi) = R(\alpha)S(-2\chi)R(\alpha). \tag{28} $$ +---PAGE_BREAK--- + +**Figure 2.** Evolution of the Wigner loop. In 1976 [6], Kupersztych considered a rotation followed by a boost whose net result will leave the momentum invariant. In 1981 [7], Han and Kim considered the same problem with simpler forms for boost matrices. In 1988, Han and Kim [8] constructed the Lorentz kinematics corresponding to the Bargmann decomposition [10] consisting of one boost matrix sandwiched by two rotation matrices. In the present case, the two rotation matrices are identical. + +The *D* matrix can now be written as three matrices. 
This form is known in the literature as the Bargmann decomposition [10], and it offers an additional convenience. When we take the inverse or the Hermitian conjugate of a product, we normally have to reverse the order of the matrices. This particular form, however, does not require re-ordering.

The *D* matrix of Equation (28) becomes:

$$ D(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\ -\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (29) $$

If the diagonal elements are smaller than one, with $(\cos \alpha) \cosh \chi < 1$, the off-diagonal elements have opposite signs. Thus, this *D* matrix can serve as the Wigner matrix of Equation (22) for massive particles. If the diagonal elements are equal to one, one of the off-diagonal elements vanishes, and this matrix becomes triangular like Equation (23). If the diagonal elements are greater than one, with $(\cos \alpha) \cosh \chi > 1$, this matrix can become Equation (24). In this way, the matrix of Equation (28) can accommodate the three different expressions given in Equations (22)–(24).

### 4.1. Continuity Problems

Let us go back to the three separate formulas given in Equations (22)–(24). If $\eta$ becomes infinite, all three of them become triangular. For the massive particle, $\tanh \eta$ is the particle speed, and:

$$ \tanh \eta = \frac{p}{p_0}, \quad (30) $$

where *p* and $p_0$ are the momentum and energy of the particle, respectively. When the particle is massive, with $m^2 > 0$, the ratio:

$$ \frac{\text{lower-left element}}{\text{upper-right element}}, \quad (31) $$

is negative and is:

$$ -e^{-2\eta} = \frac{1 - \sqrt{1 + m^2/p^2}}{1 + \sqrt{1 + m^2/p^2}}. \quad (32) $$

If the mass is imaginary, with $m^2 < 0$, the ratio is positive and:

$$e^{-2\eta} = \frac{1 - \sqrt{1 + m^2/p^2}}{1 + \sqrt{1 + m^2/p^2}}. \quad (33)$$

This ratio is zero for massless particles.
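The closed form of Equation (29) and the sign behavior of its off-diagonal elements can be confirmed numerically. The sketch below (numpy; arbitrarily chosen $\alpha$ and $\chi$, not part of the original text) multiplies out the loop of Equation (28) and checks that the result is unimodular:

```python
import numpy as np

R = lambda a: np.array([[np.cos(a/2), -np.sin(a/2)],
                        [np.sin(a/2),  np.cos(a/2)]])
S = lambda l: np.array([[np.cosh(l/2), np.sinh(l/2)],
                        [np.sinh(l/2), np.cosh(l/2)]])

def D(alpha, chi):
    """D loop of Equation (28): rotation, boost, same rotation."""
    return R(alpha) @ S(-2*chi) @ R(alpha)

alpha, chi = 0.9, 0.7   # arbitrary values

# Closed form of Equation (29)
ch, sh = np.cosh(chi), np.sinh(chi)
s, c = np.sin(alpha), np.cos(alpha)
D_closed = np.array([[c*ch, -sh - s*ch],
                     [-sh + s*ch, c*ch]])
assert np.allclose(D(alpha, chi), D_closed)

# The D matrix stays in SL(2, c): its determinant is one
assert abs(np.linalg.det(D_closed) - 1) < 1e-9

# With (cos alpha) cosh chi < 1, the off-diagonal elements have
# opposite signs, so D can act as the massive-particle matrix (22)
assert c * ch < 1
assert D_closed[0, 1] * D_closed[1, 0] < 0
```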
This means that when $m^2$ changes from positive to negative, the ratio changes from $-e^{-2\eta}$ to $e^{-2\eta}$. This transition is continuous, but not analytic. This aspect of non-analytic continuity has been discussed in one of our earlier papers [13].

The *D* matrix of Equation (29) combines all three matrices given in Equations (22)–(24) into one matrix. For this matrix, the ratio of Equation (31) becomes:

$$\frac{\tanh \chi - \sin \alpha}{\tanh \chi + \sin \alpha} = \frac{1 - \sqrt{1 + (m/p)^2}}{1 + \sqrt{1 + (m/p)^2}}. \quad (34)$$

Thus,

$$\frac{m^2}{p^2} = \left( \frac{\sin \alpha}{\tanh \chi} \right)^2 - 1. \quad (35)$$

For the *D* loop of Figure 2, both $\tanh \chi$ and $\sin \alpha$ range from 0 to 1, as illustrated in Figure 3. For small values of the mass at a fixed value of the momentum, the ratio of Equation (34) becomes:

$$-\frac{m^2}{4p^2}. \quad (36)$$

Thus, the change from positive values of $m^2$ to negative values is continuous and analytic. For massless particles, $m^2$ is zero, while it is negative for imaginary-mass particles.

We realize that the mass cannot be changed within the framework of the Lorentz group and that both $\alpha$ and $\chi$ are parameters of the Lorentz group. On the other hand, their combinations according to the *D* loop of Figure 2 can change the value of $m^2$ according to Equation (35) and Figure 3.

**Figure 3.** Non-Lorentzian transformations allowing mass variations. The *D* matrix of Equation (29) allows us to change $\chi$ and $\alpha$ analytically within the square region in (a). These variations allow the mass variations illustrated in (b), which are not allowed in Lorentz transformations. The Lorentz transformations are possible along the hyperbolas given in this figure.

## 4.2.
Parity, Time Reversal, and Charge Conjugation

Space inversion leads to the sign change in $\chi$:

$$D(\alpha, -\chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi - (\sin \alpha) \cosh \chi \\ \sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}, \quad (37)$$

and time reversal leads to the sign change in both $\alpha$ and $\chi$:

$$D(-\alpha, -\chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi + (\sin \alpha) \cosh \chi \\ \sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (38)$$

If we space-invert this expression, the result is a change only in the direction of rotation:

$$D(-\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi + (\sin \alpha) \cosh \chi \\ -\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (39)$$

The combined transformation of space inversion and time reversal is known as the "charge conjugation". All of these transformations are illustrated in Figure 4.

Figure 4. Parity, time reversal, and charge conjugation of Wigner's little groups in the loop representation.

Let us go back to the Lie algebra of Equation (3). This algebra is invariant under Hermitian conjugation. This means that there is another set of commutation relations:

$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, \dot{K}_j] = i\epsilon_{ijk}\dot{K}_k, \quad [\dot{K}_i, \dot{K}_j] = -i\epsilon_{ijk}J_k, \quad (40)$$

where $K_i$ is replaced with $\dot{K}_i = -K_i$. Let us go back to the expression of Equation (2). This transition to the dotted representation is achieved by the space inversion, or by the parity operation.

On the other hand, the complex conjugation of the Lie algebra of Equation (3) leads to:

$$[J_i^*, J_j^*] = -i\epsilon_{ijk}J_k^*, \quad [J_i^*, K_j^*] = -i\epsilon_{ijk}K_k^*, \quad [K_i^*, K_j^*] = i\epsilon_{ijk}J_k^*.
\quad (41)$$

It is possible to restore this algebra to the original form of Equation (3) if we replace $J_i^*$ by $-J_i$ and $K_i^*$ by $-K_i$. This corresponds to the time-reversal process. This operation is known as the anti-unitary transformation in the literature [15,16].

Since the algebras of Equations (3) and (41) are invariant under the sign change of $K_i$ and $K_i^*$, respectively, there is another possibility: replacing $J_i^*$ by $-J_i$ and $K_i^*$ by $K_i$. This is the parity operation followed by time reversal, resulting in charge conjugation. With the four-by-four matrices for spin-1 particles, this complex conjugation is trivial, with $J_i^* = -J_i$, as well as $K_i^* = -K_i$.

On the other hand, for spin-1/2 particles, we note that:

$$
\begin{aligned}
J_1^* &= J_1, & J_2^* &= -J_2, & J_3^* &= J_3, \\
K_1^* &= -K_1, & K_2^* &= K_2, & K_3^* &= -K_3.
\end{aligned}
\quad (42) $$

Thus, $J_i^*$ should be replaced by $-\sigma_2 J_i \sigma_2$, and $K_i^*$ by $\sigma_2 K_i \sigma_2$.

## 5. Dirac Matrices as a Representation of the Little Group

The Dirac equation, Dirac matrices, and Dirac spinors constitute the basic language for spin-1/2 particles in physics. Yet, they are not widely recognized as a package for Wigner's little groups, even though the little groups deal with spins, as do the Dirac matrices.

Let us write the Dirac equation as:

$$ (p \cdot \gamma - m)\psi(\vec{x}, t) = \lambda\psi(\vec{x}, t).
\quad (43) $$

This equation can be explicitly written as:

$$ \left( -i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m \right) \psi(\vec{x}, t) = \lambda \psi(\vec{x}, t), \quad (44) $$

where:

$$ \gamma_0 = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, \quad \gamma_1 = \begin{pmatrix} 0 & \sigma_1 \\ -\sigma_1 & 0 \end{pmatrix}, \quad \gamma_2 = \begin{pmatrix} 0 & \sigma_2 \\ -\sigma_2 & 0 \end{pmatrix}, \quad \gamma_3 = \begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}, \quad (45) $$

where *I* is the two-by-two unit matrix. We use here the Weyl representation of the Dirac matrices.

The Dirac spinor has four components. Thus, we write the wave function for a free particle as:

$$ \psi(\vec{x}, t) = U_{\pm} \exp [i (\vec{p} \cdot \vec{x} - p_0 t)], \quad (46) $$

with the Dirac spinors:

$$ U_{+} = \begin{pmatrix} u \\ \dot{u} \end{pmatrix}, \qquad U_{-} = \begin{pmatrix} v \\ \dot{v} \end{pmatrix}, \quad (47) $$

where:

$$ u = \dot{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad v = \dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \quad (48) $$

In Equation (46), the exponential factor $\exp[i(\vec{p} \cdot \vec{x} - p_0 t)]$ defines the particle momentum, and the column vector $U_{\pm}$ spans the representation space for Wigner's little group, which dictates the internal space-time symmetries of spin-1/2 particles.

In this four-by-four representation, the generators for rotations and boosts take the form:

$$ J_i = \frac{1}{2} \begin{pmatrix} \sigma_i & 0 \\ 0 & \sigma_i \end{pmatrix}, \quad \text{and} \quad K_i = \frac{i}{2} \begin{pmatrix} \sigma_i & 0 \\ 0 & -\sigma_i \end{pmatrix}. \quad (49) $$

This means that the dotted and undotted spinors are transformed in the same way under rotations, while they are boosted in opposite directions.
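The Weyl-representation matrices of Equation (45) can be checked directly; the following numpy sketch (not part of the original text) verifies the standard anticommutation relations of the $\gamma$ matrices and the fact that $\gamma_0$ interchanges the dotted and undotted components:

```python
import numpy as np

I2, Z = np.eye(2), np.zeros((2, 2))
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Weyl-representation gamma matrices of Equation (45)
gammas = [np.block([[Z, I2], [I2, Z]]),
          np.block([[Z, s1], [-s1, Z]]),
          np.block([[Z, s2], [-s2, Z]]),
          np.block([[Z, s3], [-s3, Z]])]

# Anticommutators reproduce twice the metric with signature (+, -, -, -)
metric = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * metric[mu, nu] * np.eye(4))

# gamma_0 interchanges the upper (undotted) and lower (dotted) halves
u = np.array([1, 0], dtype=complex)
v = np.array([0, 1], dtype=complex)
U_minus = np.concatenate([v, v])       # U_- with v = v-dot
assert np.allclose(gammas[0] @ U_minus, U_minus)
spinor = np.concatenate([u, v])        # distinct halves to see the swap
assert np.allclose(gammas[0] @ spinor, np.concatenate([v, u]))
```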
When the $\gamma_0$ matrix of Equation (45) is applied to $U_\pm$:

$$ \gamma_0 U_+ = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \begin{pmatrix} u \\ \dot{u} \end{pmatrix} = \begin{pmatrix} \dot{u} \\ u \end{pmatrix}, \quad \text{and} \quad \gamma_0 U_- = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \begin{pmatrix} v \\ \dot{v} \end{pmatrix} = \begin{pmatrix} \dot{v} \\ v \end{pmatrix}. \qquad (50) $$

Thus, the $\gamma_0$ matrix interchanges the dotted and undotted spinors. The four-by-four matrix for the rotation around the y axis is:

$$ R_{44}(\theta) = \begin{pmatrix} R(\theta) & 0 \\ 0 & R(\theta) \end{pmatrix}, \qquad (51) $$

while the matrix for the boost along the z direction is:

$$ B_{44}(\eta) = \begin{pmatrix} B(\eta) & 0 \\ 0 & B(-\eta) \end{pmatrix}, \qquad (52) $$

with:

$$ B(\pm\eta) = \begin{pmatrix} e^{\pm\eta/2} & 0 \\ 0 & e^{\mp\eta/2} \end{pmatrix}. \qquad (53) $$

These $\gamma$ matrices satisfy the anticommutation relations:

$$ \{\gamma_{\mu}, \gamma_{\nu}\} = 2g_{\mu\nu}, \qquad (54) $$

where:

$$ g_{00} = 1, \quad g_{11} = g_{22} = g_{33} = -1, $$

$$ g_{\mu\nu} = 0 \quad \text{if } \mu \neq \nu. \qquad (55) $$

Let us consider the space inversion, with the exponential factor changing to $\exp[i(-\vec{p} \cdot \vec{x} - p_0 t)]$. For this purpose, we can change the sign of $\vec{x}$ in the Dirac equation of Equation (44). It then becomes:

$$ \left( -i\gamma_0 \frac{\partial}{\partial t} + i\gamma_1 \frac{\partial}{\partial x} + i\gamma_2 \frac{\partial}{\partial y} + i\gamma_3 \frac{\partial}{\partial z} - m \right) \psi(-\vec{x}, t) = \lambda \psi(-\vec{x}, t). \qquad (56) $$

Since $\gamma_0\gamma_i = -\gamma_i\gamma_0$ for $i = 1, 2, 3$,

$$ \left( -i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m \right) [\gamma_0\psi(-\vec{x}, t)] = \lambda[\gamma_0\psi(-\vec{x}, t)].
\qquad (57) $$ + +This is the Dirac equation for the wave function under the space inversion or the parity operation. The Dirac spinor $U_\pm$ becomes $\gamma_0 U_\pm$, according to Equation (50). This operation is illustrated in Table 3 and Figure 4. + +**Table 3.** Parity, charge conjugation, and time reversal in the loop representation. + +
StartTime Reflection
StartStart with
R(α)S(-2χ)R(α)
Time Reversal
R(-α)S(2χ)R(-α)
Space
Inversion
Parity
R(α)S(2χ)R(α)
Charge Conjugation
R(-α)S(-2χ)R(-α)
We are interested in changing the sign of $t$. One way is to change both the space and time variables and then invert the space variable once more. To change both variables, we can take the complex conjugate of the equation first. Since $\gamma_2$ is imaginary, while all of the other $\gamma$ matrices are real, the Dirac equation becomes:

$$ \left( i\gamma_0 \frac{\partial}{\partial t} + i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} + i\gamma_3 \frac{\partial}{\partial z} - m \right) \psi^*(\vec{x}, t) = \lambda \psi^*(\vec{x}, t). \quad (58) $$

We are now interested in restoring this equation to the original form of Equation (44). In order to achieve this goal, let us consider $(\gamma_1 \gamma_3)$. This form commutes with $\gamma_0$ and $\gamma_2$, and anti-commutes with $\gamma_1$ and $\gamma_3$. Thus,

$$ \left(-i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m\right) (\gamma_1 \gamma_3) \psi^*(\vec{x}, -t) = \lambda (\gamma_1 \gamma_3) \psi^*(\vec{x}, -t). \quad (59) $$

Furthermore, since:

$$ \gamma_1 \gamma_3 = \begin{pmatrix} i\sigma_2 & 0 \\ 0 & i\sigma_2 \end{pmatrix}, \quad (60) $$

this four-by-four matrix changes the direction of the spin. Indeed, this form of time reversal is consistent with Table 3 and Figure 4.

Finally, let us change the signs of both $\vec{x}$ and $t$. For this purpose, we go back to the complex-conjugated Dirac equation of Equation (58). Here, $\gamma_2$ anti-commutes with all of the other $\gamma$ matrices. Thus, the wave function:

$$ \gamma_2 \psi^*(-\vec{x}, -t), \quad (61) $$

should satisfy the Dirac equation. This form is known as the charge-conjugated wave function, and it is also illustrated in Table 3 and Figure 4.

## 5.1.
Polarization of Massless Neutrinos

For massless neutrinos, the little group consists of rotations around the z axis, in addition to $N_i$ and $\tilde{N}_i$ applicable to the upper and lower components of the Dirac spinors. Thus, the four-by-four matrix for these generators is:

$$ N_{44(i)} = \begin{pmatrix} N_i & 0 \\ 0 & \tilde{N}_i \end{pmatrix}. \quad (62) $$

The transformation matrix is thus:

$$ D_{44}(\alpha, \beta) = \exp(-i\alpha N_{44(1)} - i\beta N_{44(2)}) = \begin{pmatrix} D(\alpha, \beta) & 0 \\ 0 & \tilde{D}(\alpha, \beta) \end{pmatrix}, \quad (63) $$

with:

$$ D(\alpha, \beta) = \begin{pmatrix} 1 & \alpha - i\beta \\ 0 & 1 \end{pmatrix}, \qquad \tilde{D}(\alpha, \beta) = \begin{pmatrix} 1 & 0 \\ -\alpha - i\beta & 1 \end{pmatrix}. \quad (64) $$

As is illustrated in Figure 1, the $D$ transformation performs the gauge transformation on massless photons. Thus, this transformation allows us to extend the concept of gauge transformations to massless spin-1/2 particles. With this point in mind, let us see what happens when this $D$ transformation is applied to the Dirac spinors.

$$ D(\alpha, \beta)u = u, \qquad \tilde{D}(\alpha, \beta)\dot{v} = \dot{v}. \quad (65) $$

Thus, $u$ and $\dot{v}$ are invariant under gauge transformations.

---PAGE_BREAK---

What happens to $v$ and $\dot{u}$?

$$D(\alpha, \beta)v = v + (\alpha - i\beta)u, \quad \tilde{D}(\alpha, \beta)\dot{u} = \dot{u} - (\alpha + i\beta)\dot{v}. \qquad (66)$$

These spinors are not invariant under gauge transformations [17,18].

Thus, the Dirac spinor:

$$U_{\text{inv}} = \begin{pmatrix} u \\ \dot{v} \end{pmatrix}, \qquad (67)$$

is gauge-invariant, while the spinor:

$$U_{\text{non}} = \begin{pmatrix} v \\ \dot{u} \end{pmatrix}, \qquad (68)$$

is not. Thus, gauge invariance leads to the polarization of massless spin-1/2 particles. Indeed, this is what we observe in the real world.

## 5.2. 
Small-Mass Neutrinos

Neutrino oscillation experiments presently suggest that neutrinos have a small, but finite mass [19]. If neutrinos have mass, there should be a Lorentz frame in which they can be brought to rest, with an $O(3)$-like $SU(2)$ little group for their internal space-time symmetry. However, it is not likely that at-rest neutrinos will be found anytime soon. In the meantime, we have to work with a neutrino with a fixed momentum and a small mass [20]. Indeed, the present loop representation is suitable for this problem.

Since the mass is so small, it is appropriate to approach this small-mass problem as a departure from the massless case. In Section 5.1, it was noted that the polarization of massless neutrinos is a consequence of gauge invariance. Let us start with a left-handed massless neutrino with the spinor:

$$\dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (69)$$

and the gauge transformation applicable to this spinor:

$$\Gamma(\gamma) = \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}. \qquad (70)$$

Since:

$$\begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (71)$$

the spinor of Equation (69) is invariant under the gauge transformation of Equation (70).

If the neutrino has a mass, the transformation matrix becomes a rotation matrix. For a small non-zero mass, however, the deviation from the triangular form is small. The procedure for deriving the Wigner matrix for this case is given toward the end of Section 3. The matrix in this case is:

$$\mathcal{D}(\gamma) = \begin{pmatrix} 1 - (\gamma\epsilon)^2/2 & -\gamma\epsilon^2 \\ \gamma & 1 - (\gamma\epsilon)^2/2 \end{pmatrix}, \qquad (72)$$

with $\epsilon^2 = m/p$, where $m$ and $p$ are the mass and momentum of the neutrino, respectively. This matrix becomes the gauge transformation of Equation (70) for $\epsilon = 0$. 
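Since Equation (72) is fully explicit, it can be checked numerically: the matrix is unimodular up to terms of order $\epsilon^4$, and it reduces to the triangular gauge matrix of Equation (70) as $\epsilon \to 0$. A minimal sketch in Python (numpy); the numerical values of $\gamma$ and $\epsilon$ are illustrative assumptions, not taken from the text:

```python
import numpy as np

def wigner_d(gamma, eps):
    """Small-mass Wigner matrix of Equation (72); eps**2 = m/p."""
    return np.array([[1 - (gamma * eps)**2 / 2, -gamma * eps**2],
                     [gamma,                     1 - (gamma * eps)**2 / 2]])

gamma, eps = 1.5, 1e-3          # illustrative values only
D = wigner_d(gamma, eps)

# det D = 1 + (gamma*eps)**4 / 4, i.e. unimodular up to O(eps**4)
assert abs(np.linalg.det(D) - 1) < 1e-10

# eps -> 0 recovers the gauge transformation of Equation (70)
assert np.allclose(wigner_d(gamma, 0.0), [[1.0, 0.0], [gamma, 1.0]])

# acting on the left-handed spinor of Equation (69)
v_dot = np.array([0.0, 1.0])
print(D @ v_dot)   # approximately [-gamma*eps**2, 1], as in Equation (73)
```

The small off-diagonal entry $-\gamma\epsilon^2$ is what generates the right-handed component discussed below.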
If this matrix is applied to the spinor of Equation (69), it becomes:

$$\mathcal{D}(\gamma)\dot{v} = \begin{pmatrix} -\gamma\epsilon^2 \\ 1 \end{pmatrix}. \qquad (73)$$

---PAGE_BREAK---

In this way, the left-handed neutrino gains a right-handed component. We took into account that $(\gamma\epsilon)^2$ is much smaller than one.

Since massless neutrinos are gauge-independent, we cannot measure the value of $\gamma$. For the small-mass case, we can determine this value from the measured values of $m/p$ and the density of right-handed neutrinos.

## 6. Scalars, Vectors, and Tensors

We are quite familiar with the process of constructing three spin-1 states and one spin-0 state from two spinors. Since each spinor has two states, there are four states when two spinors are combined.

In the Lorentz-covariant world, for each spin-1/2 particle, there are two additional two-component spinors coming from the dotted representation [12,21–23]. There are thus four states. If two spinors are combined, there are sixteen states. In this section, we show that they can be partitioned into

1. scalar with one state,

2. pseudo-scalar with one state,

3. four-vector with four states,

4. axial vector with four states,

5. second-rank tensor with six states.

These quantities contain sixteen states. We made an attempt to construct these quantities in our earlier publication [5], but this earlier version is not complete. There, we did not take into account the parity operation properly. We thus propose to complete the job in this section.

For particles at rest, it is known that the addition of two one-half spins results in spin-zero and spin-one states. In the Lorentz-covariant world, we have two different spinors behaving differently under the Lorentz boost. Around the z direction, both spinors are transformed by:

$$Z(\phi) = \exp(-i\phi J_3) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}. 
\qquad (74)$$

However, they are boosted by:

$$B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix},$$

$$\dot{B}(\eta) = \exp(i\eta K_3) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix}, \qquad (75)$$

which are applicable to the undotted and dotted spinors, respectively. These two matrices commute with each other and also with the rotation matrix $Z(\phi)$ of Equation (74). Since $K_3$ and $J_3$ commute with each other, we can work with the matrix $Q(\eta, \phi)$ defined as:

$$Q(\eta, \phi) = B(\eta)Z(\phi) = \begin{pmatrix} e^{(\eta-i\phi)/2} & 0 \\ 0 & e^{-(\eta-i\phi)/2} \end{pmatrix},$$

$$\dot{Q}(\eta, \phi) = \dot{B}(\eta)\dot{Z}(\phi) = \begin{pmatrix} e^{-(\eta+i\phi)/2} & 0 \\ 0 & e^{(\eta+i\phi)/2} \end{pmatrix}. \qquad (76)$$

When this combined matrix is applied to the spinors,

$$Q(\eta, \phi)u = e^{(\eta-i\phi)/2}u, \quad Q(\eta, \phi)v = e^{-(\eta-i\phi)/2}v,$$

$$\dot{Q}(\eta, \phi)\dot{u} = e^{-(\eta+i\phi)/2}\dot{u}, \quad \dot{Q}(\eta, \phi)\dot{v} = e^{(\eta+i\phi)/2}\dot{v}. \qquad (77)$$

---PAGE_BREAK---

If the particle is at rest, we can explicitly construct the combinations:

$$uu, \quad \frac{1}{\sqrt{2}}(uv + vu), \quad vv, \tag{78}$$

to obtain the spin-1 state and:

$$\frac{1}{\sqrt{2}}(uv - vu), \tag{79}$$

for the spin-zero state. This results in four bilinear states. In the $SL(2, c)$ regime, there are two dotted spinors, which result in four more bilinear states. If we include both dotted and undotted spinors, there are sixteen independent bilinear combinations. They are given in Table 4. This table also gives the effect of the operation of $Q(\eta, \phi)$.

**Table 4.** Sixteen combinations of the $SL(2, c)$ spinors. In the $SU(2)$ regime, there are two spinors leading to four bilinear forms. In the $SL(2, c)$ world, there are two undotted and two dotted spinors. These four spinors lead to sixteen independent bilinear combinations.

| Spin 1 | Spin 0 |
|---|---|
| $uu,\ \frac{1}{\sqrt{2}}(uv + vu),\ vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
| $\dot{u}\dot{u},\ \frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u}),\ \dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
| $u\dot{u},\ \frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u}),\ v\dot{v}$ | $\frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u})$ |
| $\dot{u}u,\ \frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u),\ \dot{v}v$ | $\frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u)$ |

After the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$:

| Spin 1 | Spin 0 |
|---|---|
| $e^{-i\phi}e^{\eta}uu,\ \frac{1}{\sqrt{2}}(uv + vu),\ e^{i\phi}e^{-\eta}vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
| $e^{-i\phi}e^{-\eta}\dot{u}\dot{u},\ \frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u}),\ e^{i\phi}e^{\eta}\dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
| $e^{-i\phi}u\dot{u},\ \frac{1}{\sqrt{2}}(e^{\eta}u\dot{v} + e^{-\eta}v\dot{u}),\ e^{i\phi}v\dot{v}$ | $\frac{1}{\sqrt{2}}(e^{\eta}u\dot{v} - e^{-\eta}v\dot{u})$ |
| $e^{-i\phi}\dot{u}u,\ \frac{1}{\sqrt{2}}(e^{-\eta}\dot{u}v + e^{\eta}\dot{v}u),\ e^{i\phi}\dot{v}v$ | $\frac{1}{\sqrt{2}}(e^{-\eta}\dot{u}v - e^{\eta}\dot{v}u)$ |
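The scaling factors listed in Table 4 can be spot-checked numerically. If a bilinear $ab$ is represented as the outer product of two spinor columns, both slots transform together as $M \to QMQ^{T}$ with the $Q(\eta, \phi)$ of Equation (76). A short numpy sketch (the values of $\eta$ and $\phi$ are illustrative assumptions):

```python
import numpy as np

eta, phi = 0.7, 0.3   # illustrative parameter values
# Q(eta, phi) of Equation (76), diagonal in the spinor basis
Q = np.diag([np.exp((eta - 1j * phi) / 2), np.exp(-(eta - 1j * phi) / 2)])

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# uu picks up the factor exp(-i*phi)exp(eta), first entry of the table
UU = np.outer(u, u)
assert np.allclose(Q @ UU @ Q.T, np.exp(-1j * phi) * np.exp(eta) * UU)

# the spin-0 combination (uv - vu)/sqrt(2) is invariant, since det Q = 1
S = (np.outer(u, v) - np.outer(v, u)) / np.sqrt(2)
assert np.allclose(Q @ S @ Q.T, S)
```

The second assertion is the matrix form of the statement that the antisymmetric combination scales with $\det Q = 1$, which is why the spin-0 column of Table 4 is unchanged by the operation.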
Among the bilinear combinations given in Table 4, the following two are invariant under rotations and also under boosts:

$$S = \frac{1}{\sqrt{2}}(uv - vu), \quad \text{and} \quad \dot{S} = -\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}). \tag{80}$$

They are thus scalars in the Lorentz-covariant world. Are they the same or different? Let us consider the following combinations:

$$S_+ = \frac{1}{\sqrt{2}}(S + \dot{S}), \quad \text{and} \quad S_- = \frac{1}{\sqrt{2}}(S - \dot{S}). \tag{81}$$

Under the dot conjugation, $S_+$ remains invariant, but $S_-$ changes sign. Under dot conjugation, the boost is performed in the opposite direction; dot conjugation therefore corresponds to the operation of space inversion. Thus, $S_+$ is a scalar, while $S_-$ is called a pseudo-scalar.

## 6.1. Four-Vectors

Let us go back to Equation (78) and make a dot-conjugation on one of the spinors.

$$u\dot{u}, \quad \frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u}), \quad v\dot{v}, \quad \frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u}),$$

$$\dot{u}u, \quad \frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u), \quad \dot{v}v, \quad \frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u). \tag{82}$$

---PAGE_BREAK---

We can make symmetric combinations under dot conjugation, which lead to:

$$
\frac{1}{\sqrt{2}} (u\dot{u} + \dot{u}u), \quad \frac{1}{2} [(u\dot{v} + v\dot{u}) + (\dot{u}v + \dot{v}u)], \quad \frac{1}{\sqrt{2}} (v\dot{v} + \dot{v}v), \quad \text{for spin 1},
$$

$$
\frac{1}{2}[(u\dot{v}-v\dot{u})+(\dot{u}v-\dot{v}u)], \quad \text{for spin 0,} \tag{83}
$$

and anti-symmetric combinations, which lead to:

$$
\frac{1}{\sqrt{2}}(u\dot{u} - \dot{u}u), \quad \frac{1}{2}[(u\dot{v} + v\dot{u}) - (\dot{u}v + \dot{v}u)], \quad \frac{1}{\sqrt{2}}(v\dot{v} - \dot{v}v), \quad \text{for spin 1,}
$$

$$
\frac{1}{2}[(u\dot{v} - v\dot{u}) - (\dot{u}v - \dot{v}u)], \quad \text{for spin } 0. 
\qquad (84)
$$

Let us rewrite the expression for the space-time four-vector given in Equation (7) as:

$$
\begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix}, \tag{85}
$$

which, under the parity operation, becomes:

$$
\begin{pmatrix}
t-z & -x+iy \\
-x-iy & t+z
\end{pmatrix}.
\qquad
(86)
$$

If the expression of Equation (85) is for an axial vector, the parity operation leads to:

$$
\begin{pmatrix} -t+z & x-iy \\ x+iy & -t-z \end{pmatrix}, \qquad (87)
$$

where only the sign of $t$ is changed. The off-diagonal elements remain invariant, while the diagonal elements are interchanged with sign changes.

We note here that the parity operation corresponds to dot conjugation. Then, from the expressions given in Equations (83) and (84), it is possible to construct the four-vector as:

$$
V = \begin{pmatrix} u\dot{v} - \dot{v}u & v\dot{v} - \dot{v}v \\ u\dot{u} - \dot{u}u & \dot{u}v - v\dot{u} \end{pmatrix}, \qquad (88)
$$

where the off-diagonal elements change their signs under the dot conjugation, while the diagonal elements are interchanged.

The axial vector can be written as:

$$
A = \begin{pmatrix} u\dot{v} + \dot{v}u & v\dot{v} + \dot{v}v \\ u\dot{u} + \dot{u}u & -(\dot{u}v + v\dot{u}) \end{pmatrix}. \qquad (89)
$$

Here, the off-diagonal elements do not change their signs under dot conjugation, and the diagonal elements become interchanged with a sign change. This matrix thus represents an axial vector.

## 6.2. Second-Rank Tensor

There are also bilinear combinations in which both spinors are dotted or both are undotted. We are interested in two sets of three quantities satisfying the $O(3)$ symmetry. They should therefore transform like:

$$
(x + iy)/\sqrt{2}, \quad (x - iy)/\sqrt{2}, \quad z, \tag{90}
$$

---PAGE_BREAK---

which are like:

$$uu, \quad vv, \quad (uv + vu) / \sqrt{2}, \tag{91}$$

respectively, in the $O(3)$ regime. 
Since the dot conjugation is the parity operation, they are like:

$$-\dot{u}\dot{u}, \quad -\dot{v}\dot{v}, \quad -(\dot{u}\dot{v} + \dot{v}\dot{u})/\sqrt{2}. \tag{92}$$

In other words,

$$(uu)^{\cdot} = -\dot{u}\dot{u}, \quad \text{and} \quad (vv)^{\cdot} = -\dot{v}\dot{v}. \tag{93}$$

We noticed a similar sign change in Equation (86).

In order to construct the z component in this $O(3)$ space, let us first consider:

$$f_z = \frac{1}{2} [(uv + vu) - (\dot{u}\dot{v} + \dot{v}\dot{u})], \qquad g_z = \frac{1}{2i} [(uv + vu) + (\dot{u}\dot{v} + \dot{v}\dot{u})]. \tag{94}$$

Here, $f_z$ and $g_z$ are respectively symmetric and anti-symmetric under the dot conjugation or the parity operation. These quantities are invariant under the boost along the z direction. They are also invariant under rotations around this axis, but they are not invariant under boosts along or rotations around the x or y axis. They are different from the scalars given in Equation (80).

Next, in order to construct the x and y components, we start with $f_{\pm}$ and $g_{\pm}$ as:

$$f_+ = \frac{1}{\sqrt{2}}(uu - \dot{u}\dot{u}), \quad f_- = \frac{1}{\sqrt{2}}(vv - \dot{v}\dot{v}),$$

$$g_+ = \frac{1}{\sqrt{2}\,i}(uu + \dot{u}\dot{u}), \quad g_- = \frac{1}{\sqrt{2}\,i}(vv + \dot{v}\dot{v}). \tag{95}$$

Then:

$$f_x = \frac{1}{\sqrt{2}}(f_+ + f_-) = \frac{1}{2}[(uu + vv) - (\dot{u}\dot{u} + \dot{v}\dot{v})],$$

$$f_y = \frac{1}{\sqrt{2}\,i}(f_+ - f_-) = \frac{1}{2i}[(uu - vv) - (\dot{u}\dot{u} - \dot{v}\dot{v})], \tag{96}$$

and:

$$g_x = \frac{1}{\sqrt{2}}(g_+ + g_-) = \frac{1}{2i}[(uu + vv) + (\dot{u}\dot{u} + \dot{v}\dot{v})],$$

$$g_y = \frac{1}{\sqrt{2}\,i}(g_+ - g_-) = -\frac{1}{2}[(uu - vv) + (\dot{u}\dot{u} - \dot{v}\dot{v})]. \tag{97}$$

Here, $f_x$ and $f_y$ are symmetric under dot conjugation, while $g_x$ and $g_y$ are anti-symmetric.

Furthermore, $f_z$, $f_x$ and $f_y$ of Equations (94) and (96) transform like a three-dimensional vector. 
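The invariance of $f_z$ and $g_z$ under boosts along, and rotations around, the z axis can also be verified numerically: each of the blocks $(uv + vu)$ and $(\dot{u}\dot{v} + \dot{v}\dot{u})$ entering Equation (94) is separately invariant under the $Q$ and $\dot{Q}$ of Equation (76). A small numpy sketch (the parameter values are illustrative assumptions):

```python
import numpy as np

def q_pair(eta, phi):
    """Q and Q-dot of Equation (76) for a boost eta and rotation phi along z."""
    Q  = np.diag([np.exp( (eta - 1j * phi) / 2), np.exp(-(eta - 1j * phi) / 2)])
    Qd = np.diag([np.exp(-(eta + 1j * phi) / 2), np.exp( (eta + 1j * phi) / 2)])
    return Q, Qd

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sym = np.outer(u, v) + np.outer(v, u)   # uv + vu (same matrix for the dotted pair)

for eta, phi in [(0.9, 0.0), (0.0, 1.1), (0.4, 0.7)]:
    Q, Qd = q_pair(eta, phi)
    # the undotted block (uv + vu) and the dotted block are separately
    # invariant, hence so are f_z and g_z of Equation (94)
    assert np.allclose(Q @ sym @ Q.T, sym)
    assert np.allclose(Qd @ sym @ Qd.T, sym)
```

The invariance follows because the two diagonal entries of $Q$ (and of $\dot{Q}$) are reciprocals of each other, so the off-diagonal matrix $uv + vu$ is rescaled by unity.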
The same can be said for $g_i$ of Equations (94) and (97). Thus, they can be grouped into the second-rank tensor:

$$\begin{pmatrix}
0 & -f_z & -f_x & -f_y \\
f_z & 0 & -g_y & g_x \\
f_x & g_y & 0 & -g_z \\
f_y & -g_x & g_z & 0
\end{pmatrix}, \tag{98}$$

whose Lorentz-transformation properties are well known. The $g_i$ components change their signs under space inversion, while the $f_i$ components remain invariant. They are like the electric and magnetic fields, respectively.

---PAGE_BREAK---

If the system is Lorentz-boosted, $f_i$ and $g_i$ can be computed from Table 4. We are now interested in the symmetry of photons, obtained by taking the massless limit. Thus, we keep only the terms that become larger for larger values of $\eta$. Thus,

$$
\begin{aligned}
f_x & \rightarrow \frac{1}{2} (uu - \dot{v}\dot{v}), && f_y \rightarrow \frac{1}{2i} (uu + \dot{v}\dot{v}), \\
g_x & \rightarrow \frac{1}{2i} (uu + \dot{v}\dot{v}), && g_y \rightarrow -\frac{1}{2} (uu - \dot{v}\dot{v}),
\end{aligned}
\quad (99) $$

in the massless limit.

Then, the tensor of Equation (98) becomes:

$$ \begin{pmatrix} 0 & 0 & -E_x & -E_y \\ 0 & 0 & -B_y & B_x \\ E_x & B_y & 0 & 0 \\ E_y & -B_x & 0 & 0 \end{pmatrix}, \qquad (100) $$

with:

$$
\begin{aligned}
E_x &= \frac{1}{2}(uu - \dot{v}\dot{v}), && E_y = \frac{1}{2i}(uu + \dot{v}\dot{v}), \\
B_x &= \frac{1}{2i}(uu + \dot{v}\dot{v}), && B_y = -\frac{1}{2}(uu - \dot{v}\dot{v}).
\end{aligned}
\quad (101) $$

The electric and magnetic field components are perpendicular to each other. Furthermore,

$$ B_x = E_y, \quad B_y = -E_x. \quad (102) $$

In order to address the symmetry of photons, let us go back to Equation (95). In the massless limit,

$$ B_+ \approx E_+ \approx uu, \quad B_- \approx E_- \approx \dot{v}\dot{v}. 
\quad (103) $$

The gauge transformations applicable to $u$ and $\dot{v}$ are the two-by-two matrices:

$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}, \qquad (104) $$

respectively. Both $u$ and $\dot{v}$ are invariant under gauge transformations, while $v$ and $\dot{u}$ are not.

The $B_+$ and $E_+$ are for the photon spin along the z direction, while $B_-$ and $E_-$ are for the opposite direction.

### 6.3. Higher Spins

Since Wigner's original book of 1931 [24,25], the rotation group, without Lorentz transformations, has been extensively discussed in the literature [22,26,27]. One of the main issues was how to construct the most general spin state from the two-component spinors for the spin-1/2 particle.

Since there are two states for the spin-1/2 particle, four states can be constructed from two spinors, leading to one spin-0 state and three spin-1 states. With three spinors, it is possible to construct four spin-3/2 states and two sets of spin-1/2 states, resulting in eight states. This partition process is much more complicated [28,29] for the case of three spinors. Yet, this partition process is possible for all higher spin states.

In the Lorentz-covariant world, there are four states for each spin-1/2 particle. With two spinors, we end up with sixteen (4 × 4) states, and they are tabulated in Table 4. There should be 64 states for

---PAGE_BREAK---

three spinors and 256 states for four spinors. We now know how to Lorentz-boost those spinors. We also know that the transverse rotations become gauge transformations in the limit of zero mass or infinite $\eta$. It is thus possible to bundle all of them into the table given in Figure 5.

**Figure 5.** Unified picture of massive and massless particles. The gauge transformation is a Lorentz-boosted rotation matrix and is applicable to all massless particles. 
It is possible to construct higher-spin states starting from the four states of the spin-1/2 particle in the Lorentz-covariant world.

In the relativistic regime, we are interested in photons and gravitons. As was noted in Sections 6.1 and 6.2, the observable components are invariant under gauge transformations. They are also the terms that become largest for large values of $\eta$.

We have seen in Section 6.2 that the photon state consists of $uu$ and $\dot{v}\dot{v}$ for those whose spins are parallel and anti-parallel to the momentum, respectively. Thus, for spin-2 gravitons, the states must be $uuuu$ and $\dot{v}\dot{v}\dot{v}\dot{v}$, respectively.

In his effort to understand photons and gravitons, Weinberg constructed his states for massless particles [30], especially photons and gravitons [31]. He started with the conditions:

$$N_1|\text{state}\rangle = 0, \quad \text{and} \quad N_2|\text{state}\rangle = 0, \qquad (105)$$

where $N_1$ and $N_2$ are defined in Equation (17). Since they are now known as the generators of gauge transformations, Weinberg's states are gauge-invariant states. Thus, $uu$ and $\dot{v}\dot{v}$ are Weinberg's states for photons, and $uuuu$ and $\dot{v}\dot{v}\dot{v}\dot{v}$ are Weinberg's states for gravitons.

## 7. Concluding Remarks

Since the publication of Wigner's original paper [1], there have been many papers written on the subject. The issue is how to construct subgroups of the Lorentz group whose transformations do not change the momentum of a given particle. The traditional approach to this problem has been to work with a fixed mass, which remains invariant under Lorentz transformations.

In this paper, we have presented a different approach. Since we are interested in transformations that leave the momentum invariant, we do not change the momentum throughout the mathematical processes. Figure 3 illustrates the difference. 
In our approach, we fix the momentum, and we allow transitions from one hyperbola to another analytically with one transformation matrix. It is an interesting future problem to see what larger group can accommodate this process. + +Since the purpose of this paper is to provide a simpler mathematics for understanding the physics of Wigner's little groups, we used the two-by-two $SL(2,c)$ representation, instead of four-by-four matrices, for the Lorentz group throughout the paper. During this process, it was noted in Section 5 that the Dirac equation is a representation of Wigner's little group. + +We also discussed how to construct higher-spin states starting from four-component spinors for the spin-1/2 particle. We studied how the spins can be added in the Lorentz-covariant world, as illustrated in Figure 5. + +**Author Contributions:** Each of the authors participated in developing the material presented in this paper and in writing the manuscript. +---PAGE_BREAK--- + +**Conflicts of Interest:** The authors declare no conflict of interest. + +References + +1. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204. + +2. Han, D.; Kim, Y.S.; Son, D. Gauge transformations as Lorentz-boosted rotations. *Phys. Lett. B* **1983**, *131*, 327–329. + +3. Kim, Y.S.; Wigner, E.P. Cylindrical group and massless particles. *J. Math. Phys.* **1987**, *28*, 1175–1179. + +4. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic-particles. *J. Math. Phys.* **1990**, *31*, 55–60. + +5. Başkal, S.; Kim, Y.S.; Noz, M.E. *Physics of the Lorentz Group*, IOP Concise Physics; Morgan & Claypool Publishers: San Rafael, CA, USA, 2015. + +6. Kupersztych, J. Is there a link between gauge invariance, relativistic invariance and Electron Spin? *Nuovo Cimento* **1976**, *31B*, 1–11. + +7. Han, D.; Kim, Y.S. Little group for photons and gauge transformations. *Am. J. Phys.* **1981**, *49*, 348–351. + +8. Han, D.; Kim, Y.S. 
Special relativity and interferometers. *Phys. Rev. A* **1988**, *37*, 4494–4496. + +9. Dirac, P.A.M. Applications of quaternions to Lorentz transformations. *Proc. R. Irish Acad.* **1945**, *A50*, 261–270. + +10. Bargmann, V. Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640. + +11. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon Press: Oxford, UK, 1954. + +12. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986. + +13. Başkal, S.; Kim, Y.S.; Noz, M.E. Wigner’s space-time symmetries based on the two-by-two matrices of the damped harmonic oscillators and the poincaré sphere. *Symmetry* **2014**, *6*, 473–515. + +14. Han, D.; Kim, Y.S.; Son, D. Eulerian parametrization of Wigner little groups and gauge transformations in terms of rotations in 2-component spinors. *J. Math. Phys.* **1986**, *27*, 2228–2235. + +15. Wigner, E.P. Normal form of antiunitary operators. *J. Math. Phys.* **1960**, *1*, 409–413. + +16. Wigner, E.P. Phenomenological distinction between unitary and antiunitary symmetry operators. *J. Math. Phys.* **1960**, *1*, 413–416. + +17. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725. + +18. Han, D.; Kim, Y.S.; Son, D. Photons, neutrinos, and gauge transformations. *Am. J. Phys.* **1986**, *54*, 818–821. + +19. Mohapatra, R.N.; Smirnov, A.Y. Neutrino mass and new physics. *Ann. Rev. Nucl. Part. Sci.* **2006**, *56*, 569–628. + +20. Kim, Y.S.; Maguire, G.Q., Jr.; Noz, M.E. Do small-mass neutrinos participate in gauge transformations? *Adv. High Energy Phys.* **2016**, 2016, 1847620, doi:10.1155/2016/1847620. + +21. Berestetskii, V.B.; Pitaevskii, L.P.; Lifshitz, E.M. *Quantum Electrodynamics*, Volume 4 of the Course of Theoretical Physics, 2nd ed.; Pergamon Press: Oxford, UK, 1982. + +22. Gel'fand, I.M.; Minlos, R.A.; Shapiro, A. 
*Representations of the Rotation and Lorentz Groups and their Applications*; MacMillan: New York, NY, USA, 1963. + +23. Weinberg, S. Feynman rules for any spin. *Phys. Rev.* **1964**, *133*, B1318-B1332. + +24. Wigner, E. *Gruppentheorie und ihre Anwendungen auf die Quantenmechanik der Atomspektren*; Friedrich Vieweg und Sohn: Braunsweig, Germany, 1931. (In German) + +25. Wigner, E.P. *Group Theory and Its Applications to the Quantum Mechanics of Atomic Spectra*, Translated from the German; Griffin, J.J., Ed.; Academic Press: New York, NY, USA, 1959. + +26. Condon, E.U.; Shortley, G.H. *The Theory of Atomic Spectra*; Cambridge University Press: London, UK, 1951. + +27. Hamermesh, M. *Group Theory and Application to Physical Problems*; Addison-Wesley: Reading, MA, USA, 1962. + +28. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current matrix elements from a relativistic quark model. *Phys. Rev. D* **1971**, *3*, 2706–2732. + +29. Hussar, P.E.; Kim, Y.S.; Noz, M.E. Three-particle symmetry classifications according to the method of Dirac. *Am. J. Phys.* **1980**, *48*, 1038–1042. + +30. Weinberg, S. Feynman rules for any spin II. massless particles. *Phys. Rev.* **1964**, *134*, B882-B896. + +31. Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. *Phys. Rev.* **1964**, *135*, B1049-B1056. + +© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). +---PAGE_BREAK--- + +MDPI AG + +St. Alban-Anlage 66 +4052 Basel, Switzerland + +Tel. +41 61 683 77 34 +Fax +41 61 302 89 18 + +http://www.mdpi.com + +*Symmetry* Editorial Office + +E-mail: symmetry@mdpi.com + +http://www.mdpi.com/journal/symmetry +---PAGE_BREAK--- + + +---PAGE_BREAK--- + +MDPI AG +St. 
Alban-Anlage 66
4052 Basel
Switzerland

Tel: +41 61 683 77 34
Fax: +41 61 302 89 18

www.mdpi.com \ No newline at end of file diff --git a/samples/texts_merged/6026555.md b/samples/texts_merged/6026555.md new file mode 100644 index 0000000000000000000000000000000000000000..edc588b73deca44abee75ce162eaa0a4ab0518b2 --- /dev/null +++ b/samples/texts_merged/6026555.md @@ -0,0 +1,180 @@

---PAGE_BREAK---

# Thermodynamics of Efflux Process of Liquids and Gases

E. A. Mikaelian¹, Saif A. Mouhammad²*

¹Gubkin Russian State University of Oil and Gas, Moscow, Russia

²Physics Department, Faculty of Science, Taif University, Taif, Kingdom of Saudi Arabia

Email: saifnet70@hotmail.com

Received 29 March 2015; accepted 11 May 2015; published 14 May 2015

Copyright © 2015 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Open Access

## Abstract

The main objective of this work is to obtain calculated relations for the efflux processes of liquids, vapors, and gases on the basis of a mathematical model that makes it possible to determine the characteristics of the channel profiles of nozzles and diffusers and to solve a number of subsequent applied problems of regime analysis. The calculated relations are based on the equations of the first law of thermodynamics for a flow. They are obtained for the efflux of compressible liquids, vapors, and gases and, as a special case, for incompressible liquids. The characteristics of the critical efflux regime of liquids are also obtained; they make it possible to determine the linear and mass efflux rates of the critical regime and the calculated characteristics of the channel profiles of nozzles, diffusers, and Laval nozzles for different modes of operation. 
## Keywords

Thermodynamics, Efflux, Compressible, Incompressible, Liquids, Diffusers, Nozzles

## 1. Introduction

The efflux processes are quite common in various technological processes performed with the power technology equipment in the gas and oil industry: in heat engines, pumps, compressor machines, mass-and-heat exchange units, pipelines, and in separate elements of machines and devices such as nozzles, diffusers, convergent nozzles, mud guns, fittings, locking devices, gate valves, valves, and various calibration holes. It is worth emphasising the special role of studying the processes of gas and liquid efflux through various sorts of leaks and gaps [1] [2].

The efflux process can be considered as a special case of the occurrence and distribution of potential work. The effective work in the efflux process is distributed between the work directly transmitted to the bodies of the external system

*Corresponding author.

---PAGE_BREAK---

(in our case, in the efflux process, this work is absent: $\delta W_{ez}^* = 0$) and the change in the energy of the external position of the working medium itself ($de_{ez}$). The last term, in turn, consists of the kinetic energy $d(c^2/2)$ and the potential energy ($gdz$).

Thus, the initial equation of the theoretical efflux process has the following form:

$$ \delta W = -VdP = d(c^2/2) + gdz. \quad (1) $$

Switching to the real efflux processes is then carried out by introducing correction factors: the velocity factor ($\varphi$) and the flow-rate factor ($\varphi_*$). Integrating the initial equation of efflux gives the potential flow work from the initial section 1 to the final section 2 of the flow:

$$ W_{12} = c_2^2/2 - c_1^2/2 + g(z_2 - z_1); \quad (2) $$

$$ W_{12} = \left[ 1 - \left( \frac{P_2}{P_1} \right)^{(n-1)/n} \right] \frac{P_1 V_1 n}{n-1}. 
\quad (3) $$

The velocity of gas efflux at the initial section can be considered as the result of an efflux from a conditional initial state 0-0 with zero velocity $c_0 = 0$, level $z_0 = z_2$, and pressure $P_0$.

Then the calculated expressions for the potential work and the linear efflux velocity at the final section of the flow are determined by the following equations:

$$ W_{02} = W_{12} + W_{01} = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \frac{P_0 V_0 n}{n-1}; \quad (4) $$

$$ c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}. \quad (5) $$

The theoretical efflux process is regarded as an adiabatic one; then, based on the first law of thermodynamics for the flow, the potential flow work is determined as the specific heat drop of the flow, equal to the difference between its enthalpies, taken with the opposite sign [3] [4]:

$$ W_{12} = h_1 - h_2; \quad q_{12} = 0. \quad (6) $$

Further, the mass rate of the efflux is entered into the calculations:

$$ u = G/f = V\rho/f = \rho c, \quad (7) $$

where $f$ is the cross section of the flow under consideration; $G$ and $V$ are the mass flow rate and the volumetric flow rate; $\rho$ is the liquid density; and $c$ is the linear velocity of the liquid in the direction of movement (the average velocity in the section $f$ in the direction of a normal to this section).

The concept of the mass flow rate is the most essential one in this study. The concept of linear velocity characterises only the kinetic energy of the flow; the averaging of such a velocity depends on the flow mode (laminar, transitional, turbulent) and is not identical with the mass flow rate. 
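Equations (4) and (5) are straightforward to evaluate numerically. The sketch below, in Python, computes the theoretical exit velocity of an adiabatic gas efflux; the stagnation state and the exponent $n$ are illustrative assumptions (roughly air expanding from 2 bar to 1.2 bar), not data from the text:

```python
import math

def exit_velocity(P0, V0, P2, n):
    """Theoretical exit velocity c2 = sqrt(2*W02), Equations (4) and (5),
    for efflux from stagnation state (P0 [Pa], V0 [m^3/kg]) to back pressure P2."""
    W02 = (1 - (P2 / P0) ** ((n - 1) / n)) * P0 * V0 * n / (n - 1)  # Eq. (4)
    return math.sqrt(2 * W02)                                       # Eq. (5)

# illustrative numbers: P0 = 2e5 Pa, V0 = 0.43 m^3/kg, P2 = 1.2e5 Pa, n = 1.4
c2 = exit_velocity(2e5, 0.43, 1.2e5, 1.4)
# the pressure ratio 0.6 lies above the critical ratio (2/(n+1))**(n/(n-1)),
# about 0.528 for n = 1.4, so the regime is subcritical and Eq. (5) applies
```

For these assumed values the exit velocity comes out below the local speed of sound, consistent with the subcritical regime discussed in Section 3.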
The calculated expression of the theoretical mass flow rate at the outlet section is obtained from the last equation, together with the linear velocity of (4) and (5) and the equation of the efflux process:

$$ u_2 = \left\{ \frac{2n}{n-1} \frac{P_0}{V_0} \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}} \right\}^{0.5}. \quad (8) $$

For the transition to the real characteristics of a flow, we introduce correction factors into our calculations:

$$ c = \varphi c_2; \quad u = \varphi_* u_2; \quad G = uf = \varphi_* u_2 f = \varphi_* c_2 \rho_2 f. \quad (9) $$

In Formula (9), the velocity and flow-rate factors are determined as the ratios of the actual and theoretical velocities:

$$ \varphi = c/c_2 = V/fc_2; \quad \varphi_* = u/u_2 = G/fu_2. \quad (10) $$

The work of irreversible energy losses associated with the real efflux process is:

$$ W^{**} = (c_2^2 - c^2)/2 = (1 - \varphi^2)c_2^2/2 = \xi c_2^2/2, \quad (11) $$

---PAGE_BREAK---

where $\xi$ is the factor of energy losses in the real process.

To calculate the velocity and flow-rate factors, as follows from Formula (10), it is necessary to arrange for mass (volume) measurements of the liquid flow rates [5] [6].

## 2. Efflux of Incompressible Liquids

The initial condition is ($\rho_1 = \rho_2 = \rho = 1/\nu = \text{idem}$):

$$W_{12} = (P_1 - P_2)/\rho; \quad W_{02} = (P_0 - P_2)/\rho. \quad (12)$$

Further, by using the initial general relations (5), (7), (9) and Equation (12), we obtain the calculated relations for the particular case of the efflux of incompressible liquids:

$$c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5} = \left[ 2(P_0 - P_2)/\rho \right]^{0.5} \\ = \left[ 2(P_1 - P_2)/\rho + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}; \quad (13)$$

$$u_2 = G/f = V\rho/f = \rho c_2 = \left[ 2(P_0 - P_2)\rho \right]^{0.5}; \quad (14)$$

$$G = \varphi_* u_2 f. 
\quad (15)$$ + +The obtained ratios can also be applied to the efflux of compressible liquids (gases) provided the density fluctuations are insignificant. In this case the average density, for example the arithmetic mean, should be substituted into Formulae (12)-(15): + +$$(\rho_1 + \rho_2)/2 = \rho_m.$$ + +## 3. Efflux of Compressible Liquids (Gases) + +The general solution of problems concerning the efflux of compressible liquids is obtained by a corresponding development of the previously obtained initial relationships. + +From a consideration of the original ratio (8) it follows that the mass velocity becomes zero at the following pressure ratios: 1) $P_2/P_0 = 1$, at the beginning of the efflux, where $c=0$ and hence $u=c\rho=0$ because of the zero initial velocity; 2) $P_2/P_0 = 0$, in the efflux to vacuum, where $\rho=0$ at the outlet section and hence $u=c\rho=0$ because of the vanishing density. Within this range the mass velocity passes through a maximum (Rolle's theorem). This means that the variable factor of the radicand in (8) passes through a maximum: + +$$\Psi = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}}. \quad (16)$$ + +Let us introduce the following designations: + +$$(P_2/P_0) = \tau^{n/(n-1)}; \quad (P_2/P_0)^{2/n} = \tau^{2/(n-1)}. \quad (17)$$ + +Using Rolle's theorem to locate the maximum of this function, we obtain the parameters of the critical mode of efflux for compressible liquids: + +$$\tau_{cr} = (P_2/P_0)_{cr}^{(n-1)/n} = 2/(n+1), \quad (18)$$ + +$$\beta = (P_2/P_0)_{cr} = \tau_{cr}^{n/(n-1)} = \left[ 2/(n+1) \right]^{n/(n-1)}, \quad (19)$$ + +$$\Psi_{cr} = (1 - \tau_{cr}) \tau_{cr}^{2/(n-1)}. \quad (20)$$ + +From the parameters of the critical efflux mode, the linear and mass efflux velocities of the critical mode are determined: + +$$c_{cr} = \left[ n(PV)_{cr} \right]^{0.5}, \quad (21)$$ +---PAGE_BREAK--- + +**Table 1. Characteristic values of the discharge critical mode.** + +
| $n$ | 1.1 | 1.2 | 1.3 | 1.4 | average |
| --- | --- | --- | --- | --- | --- |
| $\tau_{cr} = 2/(n+1)$ | 0.953 | 0.909 | 0.870 | 0.833 | |
| $\beta = \tau_{cr}^{n/(n-1)}$ | 0.5847 | 0.5645 | 0.5457 | 0.5283 | ~0.55 |
| $\Psi_{cr}$ | 1.9677 | 2.0309 | 2.0896 | 2.1443 | ~2.05 |
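The $\tau_{cr}$ and $\beta$ rows of Table 1 follow directly from Equations (18) and (19); a short check (an independent sketch, not code from the paper) reproduces them to about three decimal places:

```python
# Recompute the tau_cr and beta rows of Table 1 from Eqs. (18)-(19):
#   tau_cr = 2/(n+1),   beta = (P2/P0)_cr = tau_cr**(n/(n-1)).
def critical_ratios(n):
    tau_cr = 2.0 / (n + 1.0)
    beta = tau_cr ** (n / (n - 1.0))
    return tau_cr, beta

table = {1.1: (0.953, 0.5847), 1.2: (0.909, 0.5645),
         1.3: (0.870, 0.5457), 1.4: (0.833, 0.5283)}
for n, (tau_ref, beta_ref) in table.items():
    tau_cr, beta = critical_ratios(n)
    assert abs(tau_cr - tau_ref) < 1e-3 and abs(beta - beta_ref) < 1e-3
```

For $n = 1.4$ this gives $\beta \approx 0.528$, the familiar critical pressure ratio of adiabatic air flow.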
+ +$$u_{cr} = \left[ 2P_0 \rho_0 \Psi_{cr} n / (n-1) \right]^{0.5}. \quad (22)$$ + +**Table 1** shows the values of the critical discharge characteristics depending on the exponent $n$ of the efflux process. + +## 4. Particular Cases of Efflux + +The ideal gas ($PV = RT$): + +$$c_{cr} = [nRT_{cr}]^{0.5}; \quad u_{cr} = P_0 \left[ \frac{2\Psi_{cr} n}{(n-1)RT_{cr}} \right]^{0.5}. \quad (23)$$ + +The incompressible liquids ($V$ = idem; $n = \infty$): $c_{cr} = \infty$. + +This means that the critical mode is unattainable for incompressible fluids. The critical linear velocity of the adiabatic efflux ($n = k$) is the velocity of sound: + +$$a^* = [k(PV)_{cr}]^{0.5},$$ + +for the ideal gas: + +$$a^* = [nRT_{cr}]^{0.5}. \quad (24)$$ + +## 5. Conclusion + +Based on the energy conservation law, the equation of the distribution and generation of the potential work of an arbitrary thermodynamic system has been obtained. Taking it as the basis of the theory of the efflux of gases and of compressible and incompressible liquids, the characteristic features of the critical mode of liquid efflux have been derived. The derived ratios can further be used to determine the design characteristics of the channel profiles of nozzles, diffusers and Laval nozzles over a range of operating modes. + +## References + +[1] Mikaelian, E.A. (2000) Maintenance Energotechnological Equipment, Gas Turbine Gas Compressor Units of Gas Gathering and Transportation. Methodology, Research, Analysis and Practice, Fuel and Energy, Moscow, 304. +http://www.dobi.oglib.ru/bgl/5076.html + +[2] Mikaelian, E.A. (2001) Improving the Quality, to Ensure Reliability and Safety of the Main Pipelines. In: Margulov, G.D., Ed., Series: Sustainable Energy and Society, Fuel and Energy, Moscow, 640. +http://www.dobi.oglib.ru/bgl/4625.html + +[3] Vladimirov, A.I. and Kershenbaum, Y.V. (2008) Industrial Safety Compressor Stations. Management of Safety and Reliability. Inter-Sector Foundation “National Institute of Oil and Gas”, Moscow, 640.
+http://www.mdk-arbat.ru/bookcard?book_id=3304125 + +[4] Mikaelian, E.A. (2008) Diagnosis Energotechnological Equipment GGPA Based on Various Diagnostic Features. Gas Industry, **4**, 59-63. + +[5] Mikaelian, E.A. (2014) Determination of the Characteristic Features and Technical Condition of the Gas-Turbine and Gas-Compressor Units of Compressor Stations Based on a Simplified Thermodynamic Model. Quality Management in Oil and Gas Industry, **1**, 44-48. +http://instoilgas.ru/ukang + +[6] Mikaelian, E.A. and Mouhammed, S.A. (2014) Survey Equipment Gas Transmission Systems. Quality Management in Oil and Gas Industry, **4**, 29-36. +http://instoilgas.ru/ukang \ No newline at end of file diff --git a/samples/texts_merged/6080891.md b/samples/texts_merged/6080891.md new file mode 100644 index 0000000000000000000000000000000000000000..6b29ead8802f4ca0abec08587dfaaab92de82cbe --- /dev/null +++ b/samples/texts_merged/6080891.md @@ -0,0 +1,760 @@ + +---PAGE_BREAK--- + +# A Hankel matrix acting on Hardy and Bergman spaces + +by + +PETROS GALANOPOULOS and JOSÉ ÁNGEL PELÁEZ (Málaga) + +**Abstract.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$. Let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k} d\mu(t)$. The matrix $\mathcal{H}_\mu$ induces formally an operator on the space of all analytic functions in the unit disc by the formula + +$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}, $$ + +where $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is an analytic function in $\mathbb{D}$. + +We characterize those positive Borel measures on $[0,1)$ such that $\mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t)$ for all $f$ in the Hardy space $H^1$, and among them we describe those for which $\mathcal{H}_\mu$ is bounded and compact on $H^1$. We also study the analogous problem for the Bergman space $A^2$. + +**1. 
Introduction.** We denote by $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$ the unit disc and by $\mathbb{T}$ the unit circle. Let $\text{Hol}(\mathbb{D})$ be the space of analytic functions in $\mathbb{D}$ and let $H^p$ $(0 < p \le \infty)$ be the classical Hardy space of analytic functions in $\mathbb{D}$ (see [D]). + +If $0 < p < \infty$ the Bergman space $A^p$ is the set of all $f \in \text{Hol}(\mathbb{D})$ such that + +$$ \|f\|_{A^p}^p := \int_{\mathbb{D}} |f(z)|^p dA(z) < \infty, $$ + +where $dA(z) = \pi^{-1}dx dy$ is the normalized Lebesgue area measure on $\mathbb{D}$. + +For the theory of these spaces we refer to [DS] and [Zh]. + +Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k} d\mu(t)$. The matrix $\mathcal{H}_\mu$ induces formally an operator (which will also be denoted by $\mathcal{H}_\mu$) on $\text{Hol}(\mathbb{D})$ in the following sense. If $f(z) = \sum_{n \ge 0} a_n z^n \in \text{Hol}(\mathbb{D})$, by multiplication of the + +2010 Mathematics Subject Classification: Primary 47B35; Secondary 30H10. +Key words and phrases: Hankel matrices, Hardy spaces, Bergman spaces. +---PAGE_BREAK--- + +matrix with the sequence of Taylor coefficients of the function, + +$$ \{a_n\}_{n \ge 0} \mapsto \left\{ \sum_{k \ge 0} \mu_{n,k} a_k \right\}_{n \ge 0}, $$ + +we can formally define + +$$ (1.1) \qquad \mathcal{H}_\mu(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}. $$ + +If $\mu$ is the Lebesgue measure on the interval $[0,1]$ we get the classical Hilbert matrix $H = \{\frac{1}{n+k+1}\}_{n,k \ge 0}$. This matrix induces, in the same way as above, a bounded operator on $H^p$, $p \in (1, \infty)$ (see [DiS]), and on $A^p$, $p \in (2, \infty)$ (see [Di]); estimates on the norms have also been obtained. Recently, further progress in this direction has been achieved in [DJV].
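For a concrete instance of (1.1), take the illustrative measure $\mu = \delta_{t_0}$, a Dirac mass at $t_0 = 1/2$ (an assumed toy example, not one used in the paper): then $\mu_{n,k} = t_0^{n+k}$ and the matrix action collapses to $\mathcal{H}_\mu(f)(z) = f(t_0)/(1-t_0 z)$. A numerical sketch on truncated Taylor coefficients:

```python
# Toy example (assumed): mu = Dirac mass at t0 = 1/2, so mu_{n,k} = t0**(n+k)
# and H_mu(f)(z) = f(t0) / (1 - t0*z).  We verify this on truncated
# Taylor coefficients of f(z) = 1/(1 - z/4).
N = 40                                   # truncation order
t0 = 0.5
a = [0.25 ** k for k in range(N)]        # coefficients of f
f_t0 = sum(ak * t0 ** k for k, ak in enumerate(a))

# Coefficients of H_mu(f) via the matrix action (1.1):
b = [sum(t0 ** (n + k) * a[k] for k in range(N)) for n in range(N)]

# Closed form: n-th coefficient of f(t0)/(1 - t0*z) is f(t0)*t0**n.
assert all(abs(b[n] - f_t0 * t0 ** n) < 1e-12 for n in range(10))
```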
+ +In this paper we shall focus our attention on the limit cases $H^1$ and $A^2$, that is, we shall study the boundedness, compactness, and other related properties of $\mathcal{H}_\mu$ on these spaces in terms of $\mu$. Similar investigations have previously been conducted by several authors in different spaces of analytic functions in $\mathbb{D}$ (see e.g. [W], [Po]). + +The classical Hilbert matrix $\mathcal{H}$ is well defined but it is not bounded on $H^1$ (see [DiS]). It is known that the operator induced by the Hilbert matrix is not even well defined on $A^2$. Indeed, $f(z) = \sum_{n=1}^{\infty} \frac{1}{\log(n+1)}z^n \in A^2$ but $Hf(0) = \sum_{n=1}^{\infty} \frac{1}{(n+1)\log(n+1)} = \infty$ (see [DJV]). Thus, it is natural to study under which conditions on the measure $\mu$ the corresponding matrix $\mathcal{H}_\mu$ induces a well defined and bounded operator on $H^1$ and on $A^2$. + +The structure of the paper is as follows. In Section 2 we deal with the case of the Hardy space $H^1$. Let $\mu$ be a positive Borel measure in $\mathbb{D}$. For $\alpha \ge 0$ and $s > 0$, we say that $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure, resp. a vanishing $\alpha$-logarithmic $s$-Carleson measure, if + +$$ \sup_{a \in \mathbb{D}} \frac{\mu(S(a)) (\log \frac{2}{1-|a|^2})^\alpha}{(1 - |a|^2)^s} < \infty, \quad \text{resp. } \lim_{|a| \to 1^{-}} \frac{\mu(S(a)) (\log \frac{2}{1-|a|^2})^\alpha}{(1 - |a|^2)^s} = 0. $$ + +By $S(a)$ we denote the Carleson box with vertex at $a$, that is, + +$$ S(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 1 - |a|, \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}. $$ + +The above definition is a generalization of the fundamental notion of *classical Carleson measure* introduced by Carleson (see [C]). These are measures that occur for $\alpha = 0$ and $s = 1$. 
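The divergence of $Hf(0)$ for $f(z)=\sum_{n\ge 1} z^n/\log(n+1)$ quoted above can be seen numerically: $\sum 1/((n+1)\log^2(n+1))$, which controls the $A^2$ norm since $\|f\|_{A^2}^2 \asymp \sum |a_n|^2/(n+1)$, has essentially stationary partial sums, while $\sum 1/((n+1)\log(n+1))$ keeps growing like $\log\log M$. A rough sketch:

```python
import math

# f(z) = sum_{n>=1} z^n/log(n+1) is in A^2 because sum 1/((n+1)log^2(n+1))
# converges, while Hf(0) = sum 1/((n+1)log(n+1)) diverges (like log log M).
def partial_sums(M):
    conv = sum(1.0 / ((n + 1) * math.log(n + 1) ** 2) for n in range(1, M))
    div = sum(1.0 / ((n + 1) * math.log(n + 1)) for n in range(1, M))
    return conv, div

c1, d1 = partial_sums(10**4)
c2, d2 = partial_sums(10**5)
assert c2 - c1 < 0.05   # convergent series: the tail is already tiny
assert d2 - d1 > 0.1    # divergent series: still growing noticeably
```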
+ +We shall prove that any classical Carleson measure induces a well defined operator on $H^1$, and conversely being Carleson is necessary in the following sense. +---PAGE_BREAK--- + +**PROPOSITION 1.1.** Suppose that $\mu$ is a finite positive Borel measure on $[0, 1)$. + +(i) If $\mu$ is a classical Carleson measure then the power series $\mathcal{H}_\mu(f)(z)$ represents a function in $\text{Hol}(\mathbb{D})$ for any $f \in H^1$, and moreover + +$$ (1.2) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t), \quad f \in H^1. $$ + +(ii) If the integral in (1.2) converges for each $z \in \mathbb{D}$ and $f \in H^1$, then $\mu$ is a classical Carleson measure. + +The hope that any classical Carleson measure $\mu$ induces a bounded operator $\mathcal{H}_\mu$ on $H^1$ is unjustified, because the Lebesgue measure does not. The next result describes the appropriate subclass of classical Carleson measures. + +**THEOREM 1.2.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$. + +(i) $\mathcal{H}_\mu : H^1 \to H^1$ is bounded if and only if $\mu$ is a 1-logarithmic 1-Carleson measure. + +(ii) $\mathcal{H}_\mu : H^1 \to H^1$ is compact if and only if $\mu$ is a vanishing 1-logarithmic 1-Carleson measure. + +In many papers (see [CS], [JPS], [T], [PV] and [Pe]), another approach to the study of Hankel operators on spaces of analytic functions is developed, using the symbol of the operator, which in our case is essentially the function + +$$ (1.3) \qquad h_\mu(z) = \sum_n \mu_n z^n, \quad \mu_n = \int_{[0,1)} t^n d\mu(t). $$ + +A characterization of the boundedness and compactness of the operator $\mathcal{H}_\mu : H^1 \to H^1$ in terms of $h_\mu$ follows from [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]). We shall provide two proofs of Theorem 1.2, a first one based on the integral representation (1.2) and a second one which uses the last cited result. 
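For a measure supported on $[0,1)$, the 1-logarithmic 1-Carleson condition of Theorem 1.2 amounts, up to constants, to the boundedness of $Q(a) = \mu([a,1))\log(2/(1-a))/(1-a)$ as $a \to 1^-$. A numeric sketch (an illustration, not part of the proofs) showing that Lebesgue measure, although a classical Carleson measure, fails this condition, consistent with the remark that it does not induce a bounded operator on $H^1$:

```python
import math

# For mu supported on [0,1), the 1-logarithmic 1-Carleson condition of
# Theorem 1.2 amounts (up to constants) to sup_a Q(a) < infinity, where
#   Q(a) = mu([a,1)) * log(2/(1-a)) / (1-a).
# For Lebesgue measure mu([a,1)) = 1-a, so Q(a) = log(2/(1-a)) -> infinity:
# Lebesgue measure is classical Carleson but not 1-logarithmic 1-Carleson.
def Q(mu_tail, a):
    return mu_tail(a) * math.log(2.0 / (1.0 - a)) / (1.0 - a)

lebesgue_tail = lambda a: 1.0 - a
values = [Q(lebesgue_tail, 1.0 - 10.0 ** (-j)) for j in range(1, 8)]
assert all(v2 > v1 + 1.0 for v1, v2 in zip(values, values[1:]))  # unbounded growth
```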
+ +In the case of $H^2$, $\mathcal{H}_\mu$ is bounded if and only if $\mu$ is a classical Carleson measure (see [Pe]). Power, [Po, p. 428], proved that if $\int_{[0,1)} d\mu(t)/(1-t)^2 < \infty$, then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator, and raised the question of a necessary condition. The next result solves this problem. + +**THEOREM 1.3.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and suppose that the operator $\mathcal{H}_\mu$ is bounded on $H^2$. Then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if + +$$ (1.4) \qquad \int_{[0,1)} \frac{\mu([t, 1])}{(1-t)^2} d\mu(t) < \infty. $$ +---PAGE_BREAK--- + +In Section 3 we turn our attention to $A^2$. First we clarify for which measures the operator is well defined on this space and also gets an integral representation. + +**PROPOSITION 1.4.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$. + +(i) If $\mu$ satisfies (1.4) then the power series $\mathcal{H}_\mu(f)(z)$ is in $\text{Hol}(\mathbb{D})$ for any $f \in A^2$ and moreover + +$$ (1.5) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1]} \frac{f(t)}{1-tz} d\mu(t), \quad f \in A^2. $$ + +(ii) If for any choice of $f \in A^2$ and $z \in \mathbb{D}$ the integral in (1.5) converges, then (1.4) is satisfied. + +Unfortunately, condition (1.4) does not imply the boundedness of $\mathcal{H}_\mu$ on $A^2$ (see Theorem 1.5 and Proposition 1.7 below), so we need to look for a stronger one. Observe that (1.4) can be restated by saying that the analytic function $h_\mu$ belongs to the *Dirichlet space* + +$$ \mathcal{D} = \left\{ f(z) = \sum_{n=0}^{\infty} a_n z^n \in \text{Hol}(\mathbb{D}) : \int_{\mathbb{D}} |f'(z)|^2 dA(z) < \infty \right\}, $$ + +which is a Hilbert space equipped with the inner product $\langle f, g \rangle_{\mathcal{D}} = a_0 \bar{b}_0 + \sum_{n \ge 0} (n+1)a_{n+1} \bar{b}_{n+1}$. We characterize in these terms the boundedness of the operator $\mathcal{H}_\mu$ on $A^2$. 
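Condition (1.4) and the Hilbert-Schmidt criterion of Theorem 1.3 can be tested on the toy measure $\mu = \delta_{1/2}$ (an assumed example): both sides of the identity $\sum_{n,k}\mu_{n,k}^2 = \iint (1-ts)^{-2}\,d\mu(s)\,d\mu(t)$ underlying Theorem 1.3 equal $16/9$. A sketch:

```python
# Toy check (assumed measure mu = Dirac mass at t0 = 1/2) of the identity
# behind Theorem 1.3: sum_{n,k} mu_{n,k}^2 = integral of (1-ts)^{-2} dmu dmu.
# Here mu_{n,k} = t0**(n+k), so the sum is (sum_n t0**(2n))**2 = 16/9,
# and the double integral is 1/(1 - t0**2)**2 = 16/9 as well.
N = 60
t0 = 0.5
hs_norm_sq = sum(t0 ** (2 * (n + k)) for n in range(N) for k in range(N))
double_integral = 1.0 / (1.0 - t0 * t0) ** 2
assert abs(hs_norm_sq - double_integral) < 1e-9
assert abs(double_integral - 16.0 / 9.0) < 1e-12
```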
+ +**THEOREM 1.5.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is bounded in $A^2$ if and only if the measure $|h'_\mu(z)|^2 dA(z)$ is a Dirichlet Carleson measure. + +We remind the reader that a finite positive Borel measure $\nu$ in $\mathbb{D}$ is called a *Dirichlet Carleson measure* if the identity operator is bounded from the Dirichlet space to $L^2(\mathbb{D}, \nu)$. We refer to [S] and [ARS] for descriptions of these measures. + +It would be nice to relate the boundedness of the operator directly to a condition on the measure. In this spirit, we are able to describe the Hilbert-Schmidt operators on $A^2$. + +**THEOREM 1.6.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $A^2$ if and only if + +$$ (1.6) \qquad \int_{[0,1]} \frac{\mu([t, 1])}{(1-t)^2} \log \frac{1}{1-t} d\mu(t) < \infty. $$ + +Obviously, (1.6) gives bounded operators $\mathcal{H}_\mu$ on $A^2$; maybe surprisingly, it is sharp for the boundedness in a certain sense. +---PAGE_BREAK--- + +PROPOSITION 1.7. For each $\beta \in [0,1)$ there is a finite positive Borel measure $\mu$ on $[0,1)$ such that + +$$ (1.7) \quad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} \left(\log \frac{1}{1-t}\right)^\beta d\mu(t) < \infty, $$ + +and $\mathcal{H}_\mu$ is not bounded on $A^2$. + +**2. The Hankel matrix $\mathcal{H}_\mu$ acting on $H^1$.** Before we proceed to the proofs of Proposition 1.1 and Theorem 1.2 some results and definitions must be recalled. First, we present an equivalent description of the $\alpha$-logarithmic $s$-Carleson measures (see [Z]). + +LEMMA A. Suppose that $0 \le \alpha < \infty$ and $0 < s < \infty$ and $\mu$ is a positive Borel measure in $\mathbb{D}$. 
Then $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure if and only if + +$$ (2.1) \quad \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1 - |a|^2} \right)^\alpha \int_{\mathbb{D}} \left( \frac{1 - |a|^2}{|1 - \bar{a}z|^2} \right)^s d\mu(z) < \infty. $$ + +We shall write $BMOA_{\log,\alpha}$, $\alpha \ge 0$ (see [Gi] and [PV]), for the space of those $H^1$ functions whose boundary values satisfy + +$$ (2.2) \quad \|f\|_{BMOA_{\log,\alpha}} = |f(0)| + \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1-|a|} \right)^\alpha \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta}) d\theta < \infty, $$ + +where $P_a(e^{i\theta}) = \frac{1-|a|^2}{|1-\bar{a}e^{i\theta}|^2}$ is the Poisson kernel. + +We shall write $VMOA_{\log,\alpha}$ for the subspace of $H^1$ of those functions $f$ such that + +$$ \lim_{|a| \to 1^-} \left( \log \frac{2}{1 - |a|} \right)^\alpha \int_{\mathbb{T}} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta}) d\theta = 0. $$ + +If $\alpha = 0$, we obtain the classical space BMOA [VMOA] of $H^1$-functions with bounded [vanishing] mean oscillation. For simplicity, we shall write $BMOA_{\log}$ [$VMOA_{\log}$] for the space $BMOA_{\log,1}$ [$VMOA_{\log,1}$]. + +We shall also use Fefferman's result (see [Gi]) that $(H^1)^* \cong \text{BMOA}$ and $(\text{VMOA})^* \cong H^1$, under the Cauchy pairing + +$$ (2.3) \quad \langle f, g \rangle_{H^2} = \lim_{r \to 1^-} \frac{1}{2\pi} \int_0^{2\pi} f(re^{i\theta}) \overline{g(e^{i\theta})} d\theta, $$ + +$f \in H^1$, $g \in \text{BMOA}$ (resp. VMOA). + +*Proof of Proposition 1.1.* (i) Let $f(z) = \sum_{n \ge 0} a_n z^n \in H^1$ and assume that $\mu$ is a classical Carleson measure. This means equivalently that (see +---PAGE_BREAK--- + +[Pe, p. 42]) $\sup_{n \in \mathbb{N}} \mu_n(n+1) < \infty$. This fact together with Hardy's inequality (see [D, p.
48]) implies that + +$$ \sum_{k=0}^{\infty} \mu_{n,k} |a_k| \le C \sum_{k=0}^{\infty} \frac{|a_k|}{n+k+1} \le C \|f\|_{H^1}, \quad n \in \mathbb{N}, $$ + +so $H_\mu(f)(z) \in \text{Hol}(\mathbb{D})$. The above inequalities also justify that + +$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t) d\mu(t), \quad n \in \mathbb{N}. $$ + +Then + +$$ H_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t) d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t), \quad z \in \mathbb{D}. $$ + +The last equality is true since $\mu$ is a classical Carleson measure and so + +$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)| d\mu(t) \right) |z|^n \le C \|f\|_{H^1} \frac{1}{1-|z|}. $$ + +(ii) Assume that for any choice of $f \in H^1$ and $z \in D$ the integral (1.2) converges. Fix $f \in H^1$ and choose $z=0$. This means that $\int_{[0,1)} |f(t)| d\mu(t) < \infty$. If for any $\beta \in [0, 1]$ we define $T_\beta : H^1 \to L^1(d\mu)$ by setting $T_\beta(f) = f \cdot \chi_{\{0 \le |z| < \beta\}}$, then there is $C > 0$ such that + +$$ \|T_\beta(f)\|_{L^1(d\mu)} = \int_{[0,\beta]} |f(t)| d\mu(t) \le \int_{[0,1]} |f(t)| d\mu(t) \le C $$ + +for any $\beta \in [0, 1]$, which together with the uniform boundedness principle gives $\sup_{\beta \in [0,1]} \|T_\beta\|_{L^1(d\mu)} < \infty$, that is, the identity operator from $H^1$ to $L^1(d\mu)$ is bounded, thus by Carleson's result (see [D, Theorem 9.3]) $\mu$ is a classical Carleson measure. $\blacksquare$ + +Now we are ready to prove our main result in this section. 
+ +*Proof of Theorem 1.2.* + +*Proof of (i): Boundedness.* We observe that the duality relation $(\text{VMOA})^* \cong H^1$, Proposition 1.1, Cauchy's integral representation for functions in $H^1$ (see [D, Theorem 3.9]) and Fubini's theorem imply that + +$$ (2.4) \qquad \mathcal{H}_{\mu}: H^{1} \rightarrow H^{1} \text{ is bounded} $$ + +$$ +\begin{align*} +&\Leftrightarrow \lim_{r \to 1^{-}} \left| \frac{1}{2\pi} \int_0^{2\pi} \left( \int_0^1 \frac{f(t)}{1 - tre^{i\theta}} d\mu(t) \right) \overline{g(e^{i\theta})} d\theta \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} \\ +&\Leftrightarrow \lim_{r \to 1^{-}} \left| \int_0^1 f(t) \overline{g(rt)} d\mu(t) \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}}, +\end{align*} +$$ + +for all $f \in H^1$ and $g \in \text{VMOA}$. +---PAGE_BREAK--- + +Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is bounded and select the families of test functions + +$$ (2.5) \qquad g_a(z) = \log \frac{2}{1-az}, \quad f_b(z) = \frac{1-b^2}{(1-bz)^2}, \quad a,b \in [0,1). $$ + +A calculation shows that $\{g_a\} \subset \text{VMOA}$ and $\{f_b\} \subset H^1$ with + +$$ (2.6) \quad \sup_{a \in [0,1)} \|g_a\|_{\text{BMOA}} < \infty \quad \text{and} \quad \sup_{b \in [0,1)} \|f_b\|_{H^1} < \infty. $$ + +Next, taking $a=b \in [0,1)$ and $r \in [a, 1)$ we obtain + +$$ \begin{aligned} \left|\int_0^1 f_a(t) \overline{g_a(rt)} d\mu(t)\right| &\ge \int_a^1 \frac{1-a^2}{(1-rt)^2} \log \frac{2}{1-rat} d\mu(t) \\ &\ge C \frac{\log \frac{2}{1-a^2}}{1-a^2} \mu([a, 1)), \end{aligned} $$ + +which bearing in mind (2.4) and (2.6) implies that $\mu$ is a 1-logarithmic 1-Carleson measure. + +Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then by Lemma A, + +$$ (2.7) \qquad K_\mu := \sup_{a \in \mathbb{D}} \log \frac{2}{1-|a|^2} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) < \infty. $$ + +Let us see that $\mathcal{H}_\mu$ is bounded on $H^1$.
Using (2.4), it is enough to prove + +$$ (2.8) \quad \lim_{r \to 1^-} \int_0^1 |f(t)| |g(rt)| d\mu(t) \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} $$ + +for all $f \in H^1$ and $g \in \text{VMOA}$, + +which together with [D, Theorem 9.3] and Lemma A is equivalent to + +$$ (2.9) \quad \lim_{r \to 1^-} \sup_{a \in \mathbb{D}} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)| d\mu(z) \le C \|g\|_{\text{BMOA}} \quad \text{for all } g \in \text{VMOA}. $$ + +On the other hand, for each $r \in (0,1)$, $a \in \mathbb{D}$ and $g \in \text{VMOA}$, + +$$ (2.10) \quad \begin{aligned} &\int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)| d\mu(z) \\ &\le |g(ra)| \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) + \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)-g(ra)| d\mu(z) \\ &= I_1(r,a) + I_2(r,a). \end{aligned} $$ +---PAGE_BREAK--- + +Bearing in mind that any function $g$ in the Bloch space $\mathcal{B}$ (see [ACP]) has the growth + +$$|g(z)| \le 2 \|g\|_{\mathcal{B}} \log \frac{2}{1 - |z|} \quad \text{for all } z \in \mathbb{D}$$ + +and BMOA $\subset \mathcal{B}$ (see Theorem 5.1 of [Gi]), by (2.7) we have + +$$ +\begin{align*} +(2.11) \quad I_1(r, a) &\le C \|g\|_{\text{BMOA}} \log \frac{2}{1-|a|} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) \\ +&\le CK_\mu \|g\|_{\text{BMOA}} < \infty \quad \text{for all } r \in (0,1) \text{ and } a \in \mathbb{D}. +\end{align*} +$$ + +Next, combining (2.7), [D, Theorem 9.3], (2.2) and the fact that BMOA is closed under subordination (see [Gi, Theorem 10.3]), we deduce that + +$$ +\begin{align*} +I_2(r, a) &\le CK_\mu \int_{\mathbb{T}} \frac{1-|a|^2}{|1-\bar{a}e^{i\theta}|^2} |g(re^{i\theta}) - g(ra)| d\theta \\ +&\le CK_\mu \|g_r\|_{\text{BMOA}} \\ +&\le CK_\mu \|g\|_{\text{BMOA}} \quad \text{for all } r \in (0,1), a \in \mathbb{D} \text{ and } g \in \text{VMOA}, +\end{align*} +$$ + +which together with (2.10) and (2.11) implies (2.9). + +*Proof of (ii): Compactness.* Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is compact.
Let $\{f_b\}$ be the family of functions defined in (2.5) and let $\{b_n\}$ be a sequence of points of $(0,1)$ such that $\lim_{n\to\infty} b_n = 1$. Since $\{f_{b_n}\}$ is a bounded sequence in $H^1$, there is a subsequence $\{b_{n_k}\}$ and $g \in H^1$ such that $\lim_{k\to\infty} \| \mathcal{H}_\mu(f_{b_{n_k}}) - g \|_{H^1} = 0$. Now, as $\{f_{b_{n_k}}\}$ converges to 0 uniformly on compact subsets of $\mathbb{D}$ and $\mu$ is a 1-logarithmic 1-Carleson measure, $\{\mathcal{H}_\mu(f_{b_{n_k}})\}$ converges to 0 uniformly on compact subsets of $\mathbb{D}$, which implies that $g=0$. Thus, combining the fact that $\lim_{k\to\infty} \|\mathcal{H}_\mu(f_{b_{n_k}})\|_{H^1} = 0$ with the inequality (for all $g \in$ VMOA) + +$$ +\lim_{r \to 1^{-}} \left| \int_{0}^{1} f_{b_{n_k}}(t) \overline{g(rt)} d\mu(t) \right| \le C \| \mathcal{H}_{\mu}(f_{b_{n_k}}) \|_{H^1} \| g \|_{\text{BMOA}}, +$$ + +and the reasoning used in the boundedness case, we deduce that + +$$ +\lim_{k \to \infty} \frac{\mu([b_{n_k}, 1)) \log \frac{2}{1-b_{n_k}}}{1-b_{n_k}} = 0. +$$ + +Consequently, $\mu$ is a vanishing 1-logarithmic 1-Carleson measure. + +Conversely, assume that $\mu$ is a vanishing 1-logarithmic 1-Carleson measure. The proof of the sufficiency for the boundedness yields + +$$ +(2.12) \quad \int_0^1 |f(t)| |g(t)| d\mu(t) \le CK_\mu \|f\|_{H^1} \|g\|_{\text{BMOA}} +$$ + +for all $f \in H^1$ and $g \in \text{VMOA}$. +---PAGE_BREAK--- + +So, it suffices to prove that for any sequence $\{f_n\}$ such that $\sup_{n \in \mathbb{N}} \|f_n\|_{H^1} < \infty$ and $\lim_{n \to \infty} f_n = 0$ on compact subsets of $\mathbb{D}$, + +$$ (2.13) \quad \lim_{n \to \infty} \int_0^1 |f_n(t)| |g(t)| d\mu(t) = 0 \quad \text{for all } g \in \text{VMOA.} $$ + +Let us write $d\mu_r = \chi_{\{r<|z|<1\}}d\mu$. Since $\mu$ is a vanishing 1-logarithmic 1-Carleson measure, $\lim_{r \to 1^-} K_{\mu_r} = 0$. 
This together with the fact that $\lim_{n \to \infty} f_n = 0$ on compact subsets of $\mathbb{D}$, and (2.12), shows (using a standard argument) that $\mathcal{H}_\mu$ is compact on $H^1$. ■ + +In order to present a second proof of Theorem 1.2 some definitions and known results are needed. Given $g(\xi) \sim \sum_{n=-\infty}^{\infty} \hat{g}(n)\xi^n \in L^2(\mathbb{T})$, the associated Hankel operator (see [Pe] or [PV]) is formally defined as + +$$ H_g(f) = P(gJf) $$ + +where *P* is the Riesz projection and + +$$ Jf(\xi) = \bar{\xi}f(\bar{\xi}) = \sum_{n=-\infty}^{\infty} \hat{f}(-n-1)\xi^n, \quad \xi \in \mathbb{T}. $$ + +Moreover, if $\mu$ is a classical Carleson measure, Nehari's Theorem implies that (see [Pe, p. 3] or [D, Theorem 6.8]) there is $g_\mu \in L^\infty(\mathbb{T})$ with $\mu_n = \hat{g}_\mu(n+1)$, so + +$$ \mathcal{H}_\mu(f)(z) = \overline{H_{g_\mu}(f)(\bar{z})}, $$ + +and consequently $\mathcal{H}_\mu$ is bounded on $H^1$ if and only if $H_{g_\mu}$ is bounded on $H^1$. On the other hand, + +$$ +\begin{align*} +P_1(g_\mu)(z) &:= P(g_\mu)(z) - \hat{g}_\mu(0) = \sum_{n=1}^{\infty} \hat{g}_\mu(n)z^n = \sum_{n=0}^{\infty} \hat{g}_\mu(n+1)z^{n+1} \\ +&= \sum_{n=0}^{\infty} \mu_n z^{n+1} = zh_\mu(z). +\end{align*} +$$ + +Thus, we have the next result joining [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]). + +**THEOREM A.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$. + +(i) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded if and only if $h_{\mu} \in \text{BMOA}_{\log}$. + +(ii) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is compact if and only if $h_{\mu} \in \text{VMOA}_{\log}$. + +**Second proof of Theorem 1.2** + +*Proof of (i): Boundedness.* If $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded, then by Theorem A the function $h_{\mu}$ is in $\text{BMOA}_{\log}$. 
For any $a \in (0, 1)$ we deduce that +---PAGE_BREAK--- + +$$ +\begin{equation} \tag{2.14} +\begin{aligned} +& \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-a^2}{|1-ae^{i\theta}|^2} d\theta \\ +&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\ +&\ge \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \operatorname{Re} \left( \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right) d\theta \\ +&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \int_0^1 \frac{t(1-t\cos(\theta))}{|1-te^{i\theta}|^2(1-ta)} d\mu(t) d\theta \\ +&= \int_0^1 \frac{t(1-a^2)}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos(\theta)}{|1-te^{i\theta}|^2|1-ae^{i\theta}|} d\theta \right) d\mu(t) \\ +&\ge \frac{1}{2} \int_0^1 \frac{t(1-a^2)^2}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos(\theta)}{|1-te^{i\theta}|^2|1-ae^{i\theta}|^2} d\theta \right) d\mu(t). +\end{aligned} +\end{equation} +$$ + +Assume, for the moment, that + +$$ +(2.15) \quad \frac{1}{2\pi} \int_{0}^{2\pi} \frac{1 - t \cos(\theta)}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2} d\theta = \frac{1}{(1 - at)(1 - a^2)} +$$ + +for any $a, t \in [0, 1)$. + +This together with (2.14) yields + +$$ +\sup_{a \in [0,1]} \log \frac{2}{1-a} \int_0^1 \frac{t(1-a^2)}{(1-ta)^2} d\mu(t) \le C \|h_\mu\|_{BMOA_{\log}} < \infty, +$$ + +so $\mu$ is a 1-logarithmic 1-Carleson measure. + +Now, (2.15) will be proved. We assume that $a \neq t$ (if $a = t$ a similar calculation also gives (2.15)), and we write + +$$ +F(z) = \frac{z - \frac{t}{2}(z^2 + 1)}{(z - t)(1 - tz)(z - a)(1 - az)}. 
+$$ + +Therefore, using the residue theorem we see that + +$$ +\begin{align*} +& \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - t \cos(\theta)}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2} d\theta \\ +&= \operatorname{Res}(F, t) + \operatorname{Res}(F, a) \\ +&= \frac{\frac{t}{2}}{(t-a)(1-at)} - \frac{a - \frac{t}{2}(a^2 + 1)}{(t-a)(1-at)(1-a^2)} \\ +&= \frac{1}{(1-at)(1-a^2)}, +\end{align*} +$$ + +which proves (2.15). +---PAGE_BREAK--- + +Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then $h_\mu$ has finite radial limit a.e. on $\mathbb{T}$, indeed $h_\mu \in H^2$ (see [Pe, p. 42]), and for any $a \in \mathbb{D}$, + +$$ +\begin{align*} +(2.16) \quad & \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-|a|^2}{|1-ae^{i\theta}|^2} d\theta \\ +& = \frac{1}{2\pi} \int_0^{2\pi} \left| \frac{1-|a|^2}{|1-ae^{i\theta}|} \right| \left| \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\ +& \le \frac{1}{2\pi} \int_0^{2\pi} \frac{1-|a|^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{d\mu(t)}{|1-te^{i\theta}||1-ta|} \right| d\theta \\ +& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}||1-te^{i\theta}|} d\mu(t) \\ +& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \left( \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}|^2} \right)^{1/2} \left( \int_0^{2\pi} \frac{d\theta}{|1-te^{i\theta}|^2} \right)^{1/2} d\mu(t) \\ +& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{|1-ta|(1-t)^{1/2}} d\mu(t) \\ +& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{(1-t|a|)(1-t)^{1/2}} d\mu(t). +\end{align*} +$$ + +Moreover, using that $\mu$ is a 1-logarithmic 1-Carleson measure and a standard argument (see [G] or [Z]) we conclude that + +$$ +\sup_{a \in (0,1)} (1-a^2)^{1/2} \int_0^1 \frac{1}{(1-ta)(1-t)^{1/2}} d\mu(t) < \infty, +$$ + +which together with (2.16) shows that $h_\mu \in \text{BMOA}_{\log}$, thus by Theorem A, $\mathcal{H}_\mu : H^1 \to H^1$ is bounded. + +The proof of (ii) is analogous, so it will be omitted. 
$\blacksquare$ + +*Proof of Theorem 1.3.* We recall that $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if $\sum_{k \ge 0} \|\mathcal{H}_\mu(e_k)\|_{H^2}^2 < \infty$ for any orthonormal basis $\{e_k\}_{k=0}^\infty$. We choose the orthonormal basis $e_k(z) = z^k$. For $z = re^{i\theta} \in \mathbb{D}$, we observe that $\frac{1}{2\pi}\int_0^{2\pi} |\mathcal{H}_\mu(e_k)(re^{i\theta})|^2 d\theta = \sum_{n \ge 0} |\mu_{n,k}|^2 r^{2n}$. So + +$$ +\begin{align*} +\sum_{k \ge 0} \| \mathcal{H}_\mu(e_k) \|_{H^2}^2 &= \sum_{k \ge 0} \sum_{n \ge 0} |\mu_{n,k}|^2 = \sum_{k \ge 0} \sum_{n \ge 0} \int_{[0,1)} \int_{[0,1)} (ts)^{n+k} d\mu(s) d\mu(t) \\ +&= \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \approx \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t). +\end{align*} +$$ + +This finishes the proof. $\blacksquare$ +---PAGE_BREAK--- + +Finally, we shall see that although $\mathcal{H}_\mu$ need not be bounded on $H^1$ for a classical Carleson measure $\mu$, in some sense $\mathcal{H}_\mu$ is close to having this property. + +**THEOREM 2.1.** If $\mu$ is a classical Carleson measure supported on $[0, 1)$ and $0 < p < 1$, then $\mathcal{H}_\mu : H^1 \to H^p$ is bounded. + +*Proof.* As $\mu$ is a classical Carleson measure, we have $\int_{[0,1)} |f(t)| d\mu(t) \le C \|f\|_{H^1}$ for all $f \in H^1$, and moreover $|1-tz| \ge C|1-z|$ for all $t \in [0,1)$ and $z \in \mathbb{D}$. Hence, by the integral representation (1.2), + +$$ +\begin{align*} +(2.17) \quad \| \mathcal{H}_\mu(f) \|_{H^p}^p &\le \sup_{0<r<1} \frac{1}{2\pi} \int_0^{2\pi} \left( \int_{[0,1)} \frac{|f(t)|}{|1-tre^{i\theta}|} d\mu(t) \right)^p d\theta \\ +&\le C \|f\|_{H^1}^p \sup_{0<r<1} \frac{1}{2\pi} \int_0^{2\pi} \frac{d\theta}{|1-re^{i\theta}|^p} \le C \|f\|_{H^1}^p, +\end{align*} +$$ + +since $\sup_{0<r<1} \int_0^{2\pi} |1-re^{i\theta}|^{-p} d\theta < \infty$ for $0 < p < 1$. In fact, this argument shows that $\mathcal{H}_\mu$ maps $H^1$ boundedly into $\bigcap_{0<p<1} H^p$. Moreover, if $g_\mu \in L^\infty(\mathbb{T})$ is the symbol given by Nehari's theorem, then $\mathcal{H}_\mu(f)(z) = \overline{H_{g_\mu}(f)(\bar{z})}$ and $H_{g_\mu} = P M_{g_\mu} M_{\bar{\xi}} T$, where $Tf(e^{it}) = f(e^{-it})$ and $M_g$ is the multiplication operator by $g$. Thus, using standard techniques and well-known results we deduce that $\mathcal{H}_{\mu}$ is of weak type $(1,1)$ on Hardy spaces. ■ + +**3. The Hankel matrix $\mathcal{H}_{\mu}$ acting on $A^2$.** We recall that the Bergman projection $Pf(z) = \int_{\mathbb{D}} f(w) \overline{K_z(w)} dA(w)$ is bounded from $L^2(dA)$ to $A^2$ (see [Zh]), where $K_z(w) = (1 - \bar{z}w)^{-2}$ is the Bergman kernel of $A^2$.
It follows that any $f \in A^2$ can be represented by its Bergman projection and moreover $(A^2)^* \cong A^2$ under the pairing $\langle f, g \rangle_{A^2} = \int_{\mathbb{D}} f(z) \overline{g(z)} dA(z)$.

*Proof of Proposition 1.4.* (i) Fix $n \in \mathbb{N}$. If $f(z) = \sum_{k=0}^{\infty} a_k z^k \in A^2$, then by the Cauchy–Schwarz inequality,
---PAGE_BREAK---

$$ (3.1) \quad \left| \sum_{k \ge 0} \mu_{n,k} a_k \right| \le \sum_{k \ge 0} \mu_{n,k} |a_k| \le \left\{ \sum_{k \ge 0} (k+1) \mu_{n,k}^2 \right\}^{1/2} \|f\|_{A^2}. $$

But

$$
\begin{align*}
(3.2) \quad \sum_{k \ge 0} (k+1)\mu_{n,k}^2 &= \int_{[0,1)} \int_{[0,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \\
&= 2 \int_{[0,1)} \int_{[t,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \le 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
\end{align*}
$$

Thus, if $\mu$ satisfies (1.4) the power series (1.1) is well defined and represents an analytic function in $\mathbb{D}$. Under (1.4) we can also write

$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t) d\mu(t). $$

So, for $z \in \mathbb{D}$,

$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t) d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-zt} d\mu(t). $$

The last equality holds since

$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)| d\mu(t) \right) |z|^n \le \left\{ 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t) \right\}^{1/2} \|f\|_{A^2} \frac{1}{1-|z|}. $$

(ii) Take $f \in A^2$. Assume that the integral in (1.5) converges for each $z \in \mathbb{D}$. We choose $z = 0$. So, there is $C > 0$ such that

$$ (3.3) \quad \left| \int_{[0,\beta)} f(t) d\mu(t) \right| \le \int_{[0,\beta)} |f(t)| d\mu(t) \le \int_{[0,1)} |f(t)| d\mu(t) \le C $$

for all $\beta \in (0, 1)$.
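As a quick numerical aside (not part of the proof), the representation $\mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-zt}\,d\mu(t)$ obtained in part (i) can be sanity-checked for a discrete measure, where both sides are computable in closed form. The point mass, weight, and test function below are made up purely for illustration:

```python
# Check H_mu(f)(z) = sum_n (int t^n f(t) dmu) z^n = int f(t)/(1 - z t) dmu(t)
# for the toy measure mu = w * delta_{t0} (a single point mass; hypothetical values).
w, t0 = 0.7, 0.5                      # weight and location of the point mass
z = 0.3                               # evaluation point in the unit disc
f = lambda t: 1.0 / (1.0 - 0.2 * t)   # an analytic test function, bounded on [0, 1)

# Power-series side: the moments are int t^n f(t) dmu(t) = w * t0**n * f(t0)
series = sum(w * t0**n * f(t0) * z**n for n in range(200))

# Integral side: int f(t)/(1 - z t) dmu(t) = w * f(t0) / (1 - z * t0)
integral = w * f(t0) / (1 - z * t0)

assert abs(series - integral) < 1e-12
```

Both sides agree up to the truncation error of the geometric series, which is negligible here since $|z t_0| = 0.15$.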
On the other hand, the integral representation of $f \in A^2$ through the Bergman projection, and Fubini's theorem, imply that

$$
\begin{align*}
\int_{[0,\beta)} f(t) d\mu(t) &= \int_{[0,\beta)} \int_{\mathbb{D}} \frac{f(w)}{(1-\bar{w}t)^2} dA(w) d\mu(t) \\
&= \int_{\mathbb{D}} f(w) \overline{\int_{[0,\beta)} \frac{1}{(1-wt)^2} d\mu(t)} \, dA(w) = \langle f, g_\beta \rangle_{A^2},
\end{align*}
$$

where $g_\beta(w) = \int_{[0,\beta)} \frac{1}{(1-wt)^2} d\mu(t) \in A^2$ for every $\beta$. Then, combining (3.3), the fact that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$, and the uniform
---PAGE_BREAK---

boundedness principle, we conclude that $\sup_{\beta} \|g_{\beta}\|_{A^2}^2 \le C$. Thus, using that
$\|g_{\beta}\|_{A^2}^2 = \int_{[0,\beta)} \int_{[0,\beta)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t)$, we get

$$
C \geq \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \geq \frac{1}{4} \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
$$

So condition (1.4) is true. ■

*Proof of Theorem 1.5.* It is known that $(A^2)^* \cong \mathcal{D}$ and $\mathcal{D}^* \cong A^2$ under the Cauchy pairing $\langle f, g \rangle_{H^2} = \sum_{n \ge 0} a_n \bar{b}_n$, where $f(z) = \sum_n a_n z^n \in A^2$ and $g(z) = \sum_n b_n z^n \in \mathcal{D}$. We observe that, under this relation, $\mathcal{H}_\mu$ is self-adjoint. Therefore, $\mathcal{H}_\mu$ is bounded on $A^2$ if and only if it is bounded on $\mathcal{D}$.

If $f,g \in \mathcal{D}$ we shall write $f_1(z) = \sum_n |a_n|z^n$ and $g_1(z) = \sum_n |b_n|z^n$, so that
$\|f\|_{\mathcal{D}} = \|f_1\|_{\mathcal{D}}$ and $\|g\|_{\mathcal{D}} = \|g_1\|_{\mathcal{D}}$.
Then

$$
\begin{align*}
& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| + \mu_0 |a_0| |b_0| + |b_0| \sum_{k=0}^{\infty} \mu_{k+1} |a_{k+1}| \\
& \leq \sum_{n \geq 0} \mu_{n+1} \left( \sum_{k=0}^{n} (k+1) |b_{k+1}| |a_{n-k}| \right) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z) \\
& \leq \int_{\mathbb{D}} f_1(z) g'_1(z) \overline{h'_{\mu}(z)} dA(z) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z).
\end{align*}
$$

So, if $|h'_\mu(z)|^2 dA(z)$ is a Dirichlet Carleson measure, we get

$$
\begin{align*}
& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
&\leq \left\{ \int_{\mathbb{D}} |f_1(z)|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
&\quad + \left\{ \int_{\mathbb{D}} \left| \frac{f_1(z) - f_1(0)}{z} \right|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} \\
&\leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}},
\end{align*}
$$

and consequently $\mathcal{H}_\mu$ is bounded.
---PAGE_BREAK---

Conversely, assume that $\mathcal{H}_\mu$ is bounded on $\mathcal{D}$. Then

$$
\begin{align*}
& \left| \int_{\mathbb{D}} f(z) g'(z) \overline{h'_\mu(z)} dA(z) \right| \\
& \leq \int_0^1 \sum_{n \geq 0} (n+1) \mu_{n+1} \left( \sum_{k=0}^n (k+1) |b_{k+1}| |a_{n-k}| \right) r^{n+1} dr \\
& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| \\
& = |\langle \mathcal{H}_\mu(f_1), g_1 \rangle_\mathcal{D}| \leq C \|f\|_\mathcal{D} \|g\|_\mathcal{D}.
\end{align*}
$$

So (exchanging also the roles of $f$ and $g$) we have

$$
\left| \int_{\mathbb{D}} (fg)'(z) \overline{h'_{\mu}(z)} dA(z) \right| \leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}}
$$

for every $f,g \in \mathcal{D}$. Finally, Theorem 1 of [ARSW] (see also [Wu]) implies that $|h'_{\mu}(z)|^2 dA(z)$ is a Dirichlet Carleson measure. $\blacksquare$

**REMARK 3.1.** We recall that [ARS, Theorem 1] says that a positive Borel measure $\nu$ in $\mathbb{D}$ is a Dirichlet Carleson measure if and only if there is a positive constant $C$ such that for all $a \in \mathbb{D}$,

$$
(3.4) \quad \int_{\tilde{S}(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
$$

where

$$
\tilde{S}(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 2(1 - |a|), \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}.
$$

We note that if $\nu$ is finite, (3.4) is equivalent to the simpler condition

$$
(3.5) \quad \int_{S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
$$

because in this case

$$
\begin{align*}
& \int_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1 - |z|^2)^2} \\
&\le C(1 - |a|)^{-2} \int_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 dA(z) \\
&\le C(1 - |a|)^{-2}\nu(S(a))^2 \int_{\tilde{S}(a) \setminus S(a)} dA(z) \le C\nu(S(a)).
\end{align*}
$$

Consequently, combining Proposition 1.4 and Theorem 1.5, if $\mu$ is a finite positive Borel measure on $[0,1)$ that satisfies (1.4), then $\mathcal{H}_{\mu}$ is bounded on $A^2$ if and only if the measure $\nu = |h'_{\mu}(z)|^2 dA(z)$ satisfies (3.5) for all $a \in \mathbb{D}$.
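The last inequality in the chain above uses two facts worth making explicit (a routine filling-in under the remark's hypotheses): $\tilde{S}(a)$ is contained in an annular sector of radial width $2(1-|a|)$ and angular width comparable to $1-|a|$, so its area is at most $C(1-|a|)^2$; and $\nu(S(a)) \le \nu(\mathbb{D}) < \infty$, so one factor of $\nu(S(a))$ can be absorbed into the constant:

$$
C(1-|a|)^{-2}\,\nu(S(a))^2 \int_{\tilde{S}(a)\setminus S(a)} dA(z) \;\le\; C\,\nu(S(a))^2 \;\le\; C\,\nu(\mathbb{D})\,\nu(S(a)) \;\le\; C\,\nu(S(a)).
$$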
---PAGE_BREAK---

*Proof of Theorem 1.6.* Take the orthonormal basis $\{e_k\}_{k \ge 0}$ given by $e_k(z) = (k+1)^{1/2} z^k$ and observe that

$$
\begin{align*}
(3.6) \quad \sum_{k=0}^{\infty} \| \mathcal{H}_{\mu}(e_k) \|_{A^2}^2 &= \sum_{k=0}^{\infty} (k+1) \sum_{n=0}^{\infty} (n+1)^{-1} \mu_{n,k}^2 \\
&= \sum_{k=0}^{\infty} (k+1) \int_{[0,1)} \int_{[0,1)} (ts)^k \frac{1}{ts} \log \frac{1}{1-ts} d\mu(t) d\mu(s) \\
&\asymp \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \log \frac{1}{1-t} d\mu(t).
\end{align*}
$$

So the operator is Hilbert–Schmidt if and only if (1.6) holds. ■

Finally we shall prove Proposition 1.7.

*Proof of Proposition 1.7.* We claim that if $\mathcal{H}_\mu$ is bounded on $A^2$ then

$$
(3.7) \quad \sup_{a \in (0,1)} \frac{\int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu(t)}{\frac{1}{a^2} \log \frac{1}{1-a^2}} < \infty.
$$

Assume (3.7) for the moment. Let $\beta \in [0,1)$, $\alpha \in ((1+\beta)/2, 1)$ and consider the measure $d\mu_\alpha(t) = (\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}dt$.
Using that $\mu_\alpha([t,1)) \asymp (1-t)(\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}$, we deduce

$$
\int_0^1 \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^\beta d\mu_\alpha(t) \asymp \int_0^1 \frac{1}{1-t} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^{\beta-2\alpha} dt < \infty
$$

and

$$
\begin{align*}
& \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu_\alpha(t) \\
&\ge C \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,a]} \frac{1}{1-t} \left(\frac{1}{t} \log \frac{1}{1-t}\right)^{-2\alpha} \left(\frac{1}{t^2} \log \frac{1}{1-t^2}\right)^2 dt \\
&\ge C \left(\log \frac{1}{1-a}\right)^{2-2\alpha},
\end{align*}
$$

which in particular implies that

$$
\lim_{a \to 1^-} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 d\mu_\alpha(t) = \infty.
$$

So, $\mu_\alpha$ does not satisfy (3.7) and thus $\mathcal{H}_{\mu_\alpha}$ is not bounded.
---PAGE_BREAK---

In order to prove (3.7), using that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$,
we obtain

$$
(3.8) \quad \mathcal{H}_\mu : A^2 \to A^2 \text{ is bounded}
\quad \Leftrightarrow \quad
\left| \int_{\mathbb{D}} \left( \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t) \right) \overline{g(z)} dA(z) \right| \le C \|f\|_{A^2} \|g\|_{A^2} \text{ for all } f,g \in A^2.
$$

Set $g_a(z) = \frac{1}{1-az}$, $a \in (0,1)$. Then $\|g_a\|_{A^2}^2 = \frac{1}{a^2} \log \frac{1}{1-a^2}$ and

$$
\begin{align*}
\int_{\mathbb{D}} \frac{g_a(z)}{1-t\bar{z}} dA(z) &= \int_{\mathbb{D}} \left(\sum_{n=0}^\infty (az)^n\right) \left(\sum_{n=0}^\infty (t\bar{z})^n\right) dA(z) \\
&= \frac{1}{at} \log \frac{1}{1-at}, \quad a,t \in (0,1).
+\end{align*} +$$ + +Then, by (3.8) (with $g = g_a$) and Fubini's theorem, we get + +$$ +(3.9) \quad \sup_{a \in (0,1)} \left| \int_0^1 f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2, +$$ + +where + +$$ +d\mu_a(t) = \frac{\frac{1}{at} \log \frac{1}{1-at}}{\left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{1/2}} d\mu(t). +$$ + +So, there is $C > 0$ such that + +$$ +(3.10) \quad \sup_{a, \beta \in (0,1)} \left| \int_0^\beta f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2. +$$ + +Next, arguing as in the proof of Proposition 1.4, we obtain + +$$ +(3.11) \quad \sup_{a, \beta \in (0,1)} \left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2} < \infty, +$$ + +which together with the fact that + +$$ +\begin{align*} +\left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2}^2 &= \sum_{n=0}^\infty (n+1) \left[ \int_0^\beta t^n d\mu_a(t) \right]^2 \\ +&\geq \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \sum_{n=0}^\infty (n+1) \int_0^\beta t^{2n} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 \mu([t,\beta]) d\mu(t) \\ +&\geq \frac{1}{4} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_0^\beta \frac{\left( \frac{1}{at} \log \frac{1}{1-at} \right)^2}{(1-t)^2} \mu([t,\beta]) d\mu(t) +\end{align*} +$$ + +finishes the proof. $\blacksquare$ +---PAGE_BREAK--- + +**Acknowledgements.** The authors wish to thank Professor A. Aleman for his helpful comments and for interesting discussions on the topic of the paper. + +The first author is partially supported by the European Networking Programme “HCAA” of the European Science Foundation. The second author is partially supported by the Ramón y Cajal program of MICINN (Spain). Both authors are supported by grants from “Ministerio de Educación y Ciencia, Spain” (MTM2007-60854) and from “La Junta de Andalucía” (FQM210) and P09-FQM-4468. + +References + +[ACP] J. M. Anderson, J. Clunie and Ch. Pommerenke, *On Bloch functions and normal functions*, J. 
Reine Angew. Math. 270 (1974), 12–37.

[ARS] N. Arcozzi, R. Rochberg and E. Sawyer, *Carleson measures for analytic Besov spaces*, Rev. Mat. Iberoamer. 18 (2002), 443–510.

[ARSW] N. Arcozzi, R. Rochberg, E. Sawyer and B. Wick, *Bilinear forms on the Dirichlet space*, Anal. PDE 3 (2010), 21–47.

[C] L. Carleson, *An interpolation problem for bounded analytic functions*, Amer. J. Math. 80 (1958), 921–930.

[CS] J. Cima and D. Stegenga, *Hankel operators on $H^p$*, in: Analysis at Urbana, Vol. 1: Analysis in Function Spaces, London Math. Soc. Lecture Note Ser. 137, Cambridge Univ. Press, Cambridge, 1989, 133–150.

[CSi] J. Cima and A. Siskakis, *Cauchy transforms and Cesàro averaging operators*, Acta Sci. Math. (Szeged) (1999), 505–513.

[Di] E. Diamantopoulos, *Hilbert matrix on Bergman spaces*, Illinois J. Math. 48 (2004), 1067–1078.

[DiS] E. Diamantopoulos and A. Siskakis, *Composition operators and the Hilbert matrix*, Studia Math. 140 (2000), 191–198.

[DJV] M. Dostanić, M. Jevtić and D. Vukotić, *Norm of the Hilbert matrix on Bergman and Hardy spaces and a theorem of Nehari type*, J. Funct. Anal. 254 (2008), 2800–2815.

[D] P. L. Duren, *Theory of $H^p$ Spaces*, Academic Press, New York, 1970. Reprint: Dover, Mineola, NY, 2000.

[DS] P. L. Duren and A. P. Schuster, *Bergman Spaces*, Math. Surveys Monogr. 100, Amer. Math. Soc., Providence, RI, 2004.

[G] J. B. Garnett, *Bounded Analytic Functions*, Academic Press, 1981.

[Gi] D. Girela, *Analytic functions of bounded mean oscillation*, in: Complex Function Spaces, R. Aulaskari (ed.), Univ. Joensuu Dept. Math. Rep. Ser. 4 (2001), 61–171.

[JPS] S. Janson, J. Peetre and S. Semmes, *On the action of Hankel and Toeplitz operators on some function spaces*, Duke Math. J. 51 (1984), 937–958.

[PV] M. Papadimitrakis and J. A. Virtanen, *Hankel and Toeplitz operators on $H^1$: continuity, compactness and Fredholm properties*, Integral Equations Operator Theory 61 (2008), 573–591.

[Pe] V.
Peller, *Hankel Operators and Their Applications*, Springer Monogr. Math., Springer, New York, 2003.
---PAGE_BREAK---

[Po] S. C. Power, *Hankel operators on Hilbert space*, Bull. London Math. Soc. 12 (1980), 422–442.

[S] D. Stegenga, *Multipliers of the Dirichlet space*, Illinois J. Math. 24 (1980), 113–139.

[T] V. A. Tolokonnikov, *Hankel and Toeplitz operators in Hardy spaces*, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 141 (1985), 165–175 (in Russian); English transl.: J. Soviet Math. 37 (1987), 1359–1364.

[W] H. Widom, *Hankel matrices*, Trans. Amer. Math. Soc. 121 (1966), 1–35.

[Wu] Z. Wu, *The dual and second predual of $W_\sigma$*, J. Funct. Anal. 116 (1993), 314–334.

[Z] R. Zhao, *On logarithmic Carleson measures*, Acta Sci. Math. (Szeged) 69 (2003), 605–618.

[Zh] K. Zhu, *Operator Theory in Function Spaces*, 2nd ed., Math. Surveys Monogr. 138, Amer. Math. Soc., Providence, RI, 2007.

[Zy] A. Zygmund, *Trigonometric Series*, Vols. I, II, 2nd ed., Cambridge Univ. Press, Cambridge, 1959.

Petros Galanopoulos, José Ángel Peláez
Departamento de Análisis Matemático
Universidad de Málaga
Campus de Teatinos, 29071 Málaga, Spain
E-mail: galanopoulos_petros@yahoo.gr
japelaez@uma.es

Received December 9, 2009

Revised version May 26, 2010

(6764) \ No newline at end of file diff --git a/samples/texts_merged/6324184.md b/samples/texts_merged/6324184.md new file mode 100644 index 0000000000000000000000000000000000000000..c7c817b4e646a86b73629939cfd0b5344405877a --- /dev/null +++ b/samples/texts_merged/6324184.md @@ -0,0 +1,395 @@

---PAGE_BREAK---

Supporting information for

“Spatial structure, host heterogeneity and parasite virulence: implications for
vaccine-driven evolution”

Y. H. Zurita-Gutiérrez & S.
Lion

April 30, 2015

**Appendix S1: Theory**

S1.1 Spatial invasion fitness

The dynamics of the mutant parasite are given by the following equations

$$
\begin{align*}
\frac{dp_{I'_{N}}}{dt} &= \beta'_{NN}[S_N|I'_N]p_{I'_N} + \beta'_{TN}[S_N|I'_T]p_{I'_T} - (d+\alpha'_{N})p_{I'_N} \\
\frac{dp_{I'_T}}{dt} &= \beta'_{NT}[S_T|I'_N]p_{I'_N} + \beta'_{TT}[S_T|I'_T]p_{I'_T} - (d+\alpha'_{T})p_{I'_T}
\end{align*}
$$

or, in matrix form,

$$
\frac{d}{dt} \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} = \mathbf{M} \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} \quad (S1.1)
$$

where

$$
\mathbf{M} = \begin{pmatrix}
\beta'_{NN}[S_N|I'_N] - (d+\alpha'_N) & \beta'_{TN}[S_N|I'_T] \\
\beta'_{NT}[S_T|I'_N] & \beta'_{TT}[S_T|I'_T] - (d+\alpha'_T)
\end{pmatrix}
$$

We can rewrite $\mathbf{M}$ as $\mathbf{M} = \mathbf{F} - \mathbf{V}$, where

$$
\mathbf{F} = \begin{pmatrix} \beta'_{NN}[S_N | I'_N] & \beta'_{TN}[S_N | I'_T] \\ \beta'_{NT}[S_T | I'_N] & \beta'_{TT}[S_T | I'_T] \end{pmatrix}
$$

and

$$
\mathbf{V} = \begin{pmatrix} d + \alpha'_{N} & 0 \\ 0 & d + \alpha'_{T} \end{pmatrix}
$$

All the entries of $\mathbf{F}$ and $\mathbf{V}^{-1}$ are positive, and the dominant eigenvalue of $-\mathbf{V}$ is clearly negative, so we can use the Next-Generation Theorem. Thus, the mutant invades if the dominant eigenvalue of $\mathbf{A} = \mathbf{F}\mathbf{V}^{-1}$ is greater than 1.
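The threshold equivalence behind this argument can be illustrated numerically (a generic sketch with made-up rates, not parameters from the model): for $\mathbf{F}$ non-negative and $\mathbf{V}$ a positive diagonal matrix, the dominant eigenvalue of $\mathbf{M} = \mathbf{F} - \mathbf{V}$ is positive exactly when the spectral radius of $\mathbf{F}\mathbf{V}^{-1}$ exceeds one.

```python
import numpy as np

# Hypothetical transmission (F) and loss (V) matrices with made-up rates.
F = np.array([[0.8, 0.3],
              [0.4, 0.9]])           # new-infection terms beta'_{ij} [S | I']
V = np.diag([1.2, 1.5])              # removal rates d + alpha'_N, d + alpha'_T

M = F - V                            # linearised growth matrix of the mutant
A = F @ np.linalg.inv(V)             # next-generation matrix F V^{-1}

growth = max(np.linalg.eigvals(M).real)   # dominant eigenvalue of M
R = max(abs(np.linalg.eigvals(A)))        # spectral radius of F V^{-1}

# Sign equivalence underlying the next-generation argument:
assert (growth > 0) == (R > 1)
```

With these particular rates the mutant does not invade (both the growth rate is negative and the spectral radius is below one); increasing the entries of `F` flips both conditions together.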
With the notations

$$
\begin{align*}
R'_{NN} &= \beta'_{NN} / \delta'_{N} \\
R'_{TN} &= \beta'_{TN} / \delta'_{N} \\
R'_{NT} &= \beta'_{NT} / \delta'_{T} \\
R'_{TT} &= \beta'_{TT} / \delta'_{T}
\end{align*}
$$

we have

$$
\mathbf{A} = \begin{pmatrix}
R'_{NN}[S_N | I'_N] & R'_{TN}[S_N | I'_T] \\
R'_{NT}[S_T | I'_N] & R'_{TT}[S_T | I'_T]
\end{pmatrix}
$$
---PAGE_BREAK---

Some straightforward algebra shows that the dominant eigenvalue of this matrix is

$$
\begin{align*}
\mathcal{R} ={}& \frac{1}{2} (R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T]) \\
& + \frac{1}{2} \sqrt{(R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T])^2 + 4(R'_{NT}R'_{TN}[S_N|I'_T][S_T|I'_N] - R'_{NN}R'_{TT}[S_N|I'_N][S_T|I'_T])}
\end{align*}
$$

When $g_P = 1$ (global dispersal), we recover the expression found by Gandon (2004) for a well-mixed population.

Denoting by $a_{ij}$ the elements of $\mathbf{A}$, we have

$$
\mathbf{A} = \begin{pmatrix} a_{NN} & a_{TN} \\ a_{NT} & a_{TT} \end{pmatrix}
$$

At equilibrium, the dominant eigenvalue is unity, $\mathcal{R} = 1$. An associated right eigenvector is the vector of densities of each class of infected hosts at equilibrium, $\mathbf{u} = (\hat{p}_{I_N} \ \hat{p}_{I_T})^T$. We therefore have

$$
\frac{p_{I_T}}{p_{I_N}} = \frac{1 - a_{NN}}{a_{TN}} = \frac{a_{NT}}{1 - a_{TT}} \quad (S1.2)
$$

An associated left eigenvector is the vector of reproductive values, $\mathbf{v}$ (Taylor, 1990; Rousset, 2004). Normalising $\mathbf{v}$ such that $\mathbf{v}^T\mathbf{u} = 1$, we find that the class reproductive values $c_j = v_j u_j$ at equilibrium satisfy $c_N + c_T = 1$, with

$$
c_N = \frac{a_{NT} p_{IN}^2}{a_{NT} p_{IN}^2 + a_{TN} p_{IT}^2}.
\qquad (S1.3)
$$

Furthermore, at equilibrium, $\det(\mathbf{A} - \mathbf{I}) = 0$, which yields the following equilibrium condition

$$
1 - a_{NN} - a_{TT} = a_{NT}a_{TN} - a_{NN}a_{TT} \quad (S1.4)
$$

For the sake of simplicity, we now make the additional assumption that transmission can be written as the product of infectivity and susceptibility. Hence, we write $\beta_{ij} = \beta_i \sigma_j$, where $\sigma_N = 1$ and $\sigma_T$ is the relative susceptibility of treated hosts. We then have

$$
\begin{align*}
R'_{NN} &= R'_N = \beta'_N / \delta'_N \\
R'_{TT} &= \sigma_T R'_T = \sigma_T \beta'_T / \delta'_T \\
R'_{TN} &= R'_T \frac{\delta'_T}{\delta'_N} \\
R'_{NT} &= \sigma_T R'_N \frac{\delta'_N}{\delta'_T}
\end{align*}
$$

and we obtain

$$
\mathcal{R} = \frac{1}{2} (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{1}{2} \sqrt{(R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C'} \quad (\text{S1.5})
$$

where

$$
C' = [S_N | I'_N][S_T | I'_T] - [S_N | I'_T][S_T | I'_N] \tag{S1.6}
$$

measures the spatial correlation of treatments experienced by mutant hosts. Equation (S1.4) can then be rewritten as

$$
1 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) = -\sigma_T R_N R_T C \quad (\text{S1.7})
$$
---PAGE_BREAK---

## S1.2 Selection gradient

Assuming that selection is weak, we can further calculate the selection gradient.
$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{\sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C}} $$

At neutrality, we have $\mathcal{R} = 1$ and therefore

$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) > 0 \quad (\text{S1.8}) $$

Using equation (S1.7), we thus have

$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 1 - \sigma_T R_N R_T C > 0 \quad (\text{S1.9}) $$

Hence

$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{1 - \sigma_T R_N R_T C} $$

The numerator of the right-hand side of the latter equation can be written as

$$
\begin{aligned}
& \frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
& \qquad + \frac{1}{4}\partial\left((R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T])^2 - 4\sigma_T R'_N R'_T C'\right) \\
&=\frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
& \qquad + \frac{1}{2}(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) \partial((R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T])) - \sigma_T \partial(R'_N R'_T C')
\end{aligned}
$$

which yields the following expression for $\partial \mathcal{R}$

$$ \partial \mathcal{R} = \frac{1}{1 - \sigma_T R_N R_T C} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T] - \sigma_T R'_{N} R'_{T} C') \quad (\text{S1.10}) $$

## S1.3 Simplifications

We can write $\partial \mathcal{R}$ as

$$ \partial \mathcal{R} = \frac{\partial W + \partial S}{1 - \sigma_T R_N R_T C} \quad (\text{S1.11}) $$

where $\partial W$ collects all direct selective
effects, and $\partial S$ collects all indirect selective effects, i.e. the selective effects on local densities. + +### Direct effects + +We have + +$$ \partial W = [S_N | I_N] \partial R'_N + \sigma_T [S_T | I_T] \partial R'_T - \sigma_T C (R_N \partial R'_T + R_T \partial R'_N) \quad (\text{S1.12}) $$ + +Plugging (S1.7) into the expression of $\partial W$, we obtain + +$$ \partial W = [S_N|I_N]\partial R'_N + \sigma_T [S_T|I_T]\partial R'_T + (R_N\partial R'_T + R_T\partial R'_N)\left(\frac{1-(R_N[S_N|I_N]+\sigma_T R_T[S_T|I_T])}{R_N R_T}\right) \quad (\text{S1.13}) $$ + +which gives after simplifications + +$$ \partial W = \frac{\partial R'_{N}}{R_{N}} (1 - \sigma_{T} R_{T} [S_{T}|I_{T}]) + \frac{\partial R'_{T}}{R_{T}} (1 - R_{N}[S_{N}|I_{N}]) \quad (\text{S1.14}) $$ +---PAGE_BREAK--- + +From the dynamics of $I_N$ and $I_T$, we have + +$$R_N[S_N|I_N] = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{h_N p_{S_N}} = 1 - \frac{\beta_T[I_T|S_N]}{h_N} \quad (S1.15)$$ + +$$\sigma_T R_T [S_T | I_T] = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\delta_T p_{IT}} = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\sigma_T h_T p_{S_T}} = 1 - \frac{\beta_N [I_N | S_T]}{h_T} \quad (S1.16)$$ + +so $\tau_T \equiv 1 - R_N[S_N|I_N]$ is the share of the force of infection on naive hosts that is caused by infections from the treated class, and $\tau_N = 1 - \sigma_T R_T[S_T|I_T]$ has the same interpretation for treated hosts. 
We then have + +$$\partial W = \tau_N \frac{\partial R'_N}{R_N} + \tau_T \frac{\partial R'_T}{R_T} \quad (S1.17)$$ + +## Indirect effects + +We now turn to the “spatial” component of the selection gradient + +$$ +\begin{align} +\partial S &= R_N \partial[S_N | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \nonumber \\ +&\quad + \sigma_T R_N R_T ([S_N | I_T] \partial[S_T | I'_N] + [S_T | I_N] \partial[S_N | I'_T] - [S_N | I_N] \partial[S_T | I'_T] - [S_T | I_T] \partial[S_N | I'_N]) \tag{S1.18} \\ +&= R_N (1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T (1 - R_N [S_N | I_N]) \partial[S_T | I'_T] \nonumber \\ +&\quad + \sigma_T R_N R_T [S_N | I_T] \partial[S_T | I'_N] + \sigma_T R_N R_T [S_T | I_N] \partial[S_N | I'_T] \tag{S1.19} \\ +&= R_N [(1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T [S_N | I_T] \partial[S_T | I'_N]] \nonumber \\ +&\quad + \sigma_T R_T [(1 - R_N [S_N | I_N]) \partial[S_T | I'_T] + R_N [S_T | I_N] \partial[S_N | I'_T]] \tag{S1.20} +\end{align} +$$ + +Furthermore, we have + +$$R_T[S_N|I_T] = \frac{\delta_N p_{IN}}{\delta_T p_{IT}} (1 - R_N[S_N|I_N]) = \frac{h_N p_{S_N}}{\sigma_T h_T p_{ST}} \tau_T \quad (S1.21)$$ + +$$\sigma_T R_N [S_T | I_N] = \frac{\delta_T p_{IT}}{\delta_N p_{IN}} (1 - \sigma_T R_T [S_T | I_T]) = \frac{\sigma_T h_T p_{ST}}{h_N p_{SN}} \tau_N \quad (S1.22)$$ + +This yields + +$$\partial S = R_N \left[ \tau_N \partial[S_N | I'_N] + \frac{h_N p_{SN}}{h_T p_{ST}} \tau_T \partial[S_T | I'_N] \right] + \sigma_T R_T \left[ \tau_T \partial[S_T | I'_T] + \frac{h_T p_{ST}}{h_N p_{SN}} \tau_N \partial[S_N | I'_T] \right] \quad (S1.23)$$ + +or equivalently + +$$ +\begin{align} +\partial S ={}& \tau_N \left[ R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{ST}}{h_N p_{SN}} \partial[S_N | I'_T] \right] \nonumber \\ +& + \tau_T \left[ R_N \frac{h_N p_{SN}}{\sigma_T h_T p_{ST}} \sigma_T \partial[S_T | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \right] \tag{S1.24} +\end{align} +$$ + +## Link with 
reproductive values

The quantities $\tau_N$ and $\tau_T$ have a direct interpretation in terms of reproductive values. Indeed, we have

$$\tau_T = \frac{\beta_T[I_T|S_N]}{h_N} = \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = a_{TN} \frac{p_{IT}}{p_{IN}} = 1 - a_{NN} \quad (S1.25)$$

The last equation comes from equation (S1.2). Similarly, we have

$$\tau_N = a_{NT} \frac{p_{IN}}{p_{IT}} = 1 - a_{TT} \quad (S1.26)$$
---PAGE_BREAK---

Hence, it follows from equation (S1.4) that

$$
\tau_N + \tau_T = 1 - \sigma_T R_N R_T C \tag{S1.27}
$$

and

$$
\frac{\tau_N}{\tau_N + \tau_T} = \frac{a_{NT} p_{IN}^2}{a_{NT} p_{IN}^2 + a_{TN} p_{IT}^2} \quad (\text{S1.28})
$$

where the last expression can be identified as $c_N$ in equation (S1.3).

**Full selection gradient**

Plugging equations (S1.17) and (S1.24) into equation (S1.11), and noting that the denominator is $\tau_N + \tau_T$, we obtain the following expression for the selection gradient

$$
\begin{align}
\partial \mathcal{R} = c_N & \left[ \frac{\partial R'_{N}}{R_N} + R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \partial[S_N | I'_T] \right] \tag{S1.29a} \\
& + c_T \left[ \frac{\partial R'_{T}}{R_T} + R_N \frac{h_N p_{S_N}}{h_T p_{S_T}} \partial[S_T | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \right] \tag{S1.29b}
\end{align}
$$

Although we have obtained this result by direct differentiation of the invasion fitness, we note that an alternative derivation can be obtained by noting that the selection gradient can be written as

$$
\partial \mathcal{R} = \sum_{k,l} v_k u_l \partial(a_{lk})
$$

By writing $a_{\ell k} = F_\ell m_{\ell k}$, we can write an equation similar to equation (5) in Rousset (1999), and further simplifications lead to equation (S1.29).

**S1.4 Uncorrelated landscapes**

If the landscape is uncorrelated, additional simplifications follow. First, the spatial correlation in treatment is always zero, hence $C = C' = 0$.
It follows from equation (S1.5) that the invasion fitness of a rare mutant takes the following simple form:

$$
\mathcal{R} = R'_{N}[S_N | I'_{N}] + R'_{T}\sigma_{T}[S_T | I'_{T}] \quad (\text{S1.30})
$$

Then the selection gradient can be written simply as

$$
\partial \mathcal{R} = R_N[S_N | I'_N] \frac{\partial R'_N}{R_N} + \sigma_T R_T [S_T | I'_T] \frac{\partial R'_T}{R_T} + R_N \partial[S_N | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \quad (\text{S1.31})
$$

For a neutral mutant, we have at equilibrium $[S_N|I'_N] = [S_N|I_N]$ and $[S_T|I'_T] = [S_T|I_T]$. Furthermore, we have at equilibrium

$$
R_N[S_N | I_N] = c_N \quad (\text{S1.32})
$$

and

$$
\sigma_T R_T [S_T | I_T] = c_T = 1 - c_N \quad (\text{S1.33})
$$

Combining equations (S1.30)–(S1.33), and noting that $\partial[S_x|I'_y] = (1-g_P)q_{S_x/I'_y}$, we obtain equation (9) in the main text.

**S1.5 Host reproduction**

So far, our results depend neither on host reproduction nor on the specific mechanism generating heterogeneity. The only assumption we make is that the parasite can only transmit horizontally (i.e. there is no vertical transmission). For the specific example of vaccination, we consider density-dependent reproduction, following previous spatial models of host-parasite interactions (Boots & Sasaki, 2000; Lion & Gandon, 2015).
---PAGE_BREAK---

We assume that host reproduction occurs at rate $b$ and can be either global (with probability $g_H$) or local (with probability $1-g_H$). We also assume that only susceptible hosts can reproduce. Reproduction takes place into empty sites, which introduces density-dependence. Offspring are produced at rates $\lambda_N = b[o|S_N]$ and $\lambda_T = b[o|S_T]$ for naive and treated susceptible hosts, respectively, where $[o|S_i] = g_H p_o + (1-g_H)q_o/S_i$.

For the vaccination example, we further consider that offspring have a probability $\nu$ of entering the treated class at birth, as depicted in figure 1a.
Note that, for a fully imperfect vaccine ($r_i = 0$), all hosts are identical from the parasite's point of view and, as a result, $c = \nu$.

## S1.6 Stochastic simulations

We performed stochastic individual-based simulations to analyse the effect of spatial structure and host quality on the evolution of host exploitation. The program was coded in C and implements the host-parasite life cycle (figure 1a in the main text) on a regular square lattice with 100×100 sites. Each site can contain at most one individual. The lattice is updated asynchronously in continuous time using the Gillespie algorithm (Gillespie, 1977).

For the simulations, we used the following trade-off:

$$ \beta(x) = 20 \ln(x+1) \quad (\text{S1.34}) $$

$$ \alpha(x) = x \quad (\text{S1.35}) $$

Upon infection, parasites can mutate at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. All simulations were run with parameter values $b = 8$ and $d = 1$, starting from host exploitation $x = 1.25$. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$.

## References

[1] Gillespie, D. (1977). Exact stochastic simulation of coupled chemical reactions. *The Journal of Physical Chemistry* **81**: 2340–2361.

[2] Taylor, P. D. (1990). Allele-frequency change in a class-structured population. *Am. Nat.* **135**(1): 95–106. DOI: 10.1086/285034.

[3] Rousset, F. (1999). Reproductive value vs sources and sinks. *Oikos* **86**(3): 591–596.

[4] Boots, M. & A. Sasaki (2000). The evolutionary dynamics of local infection and global reproduction in host-parasite interactions. *Ecol. Lett.* **3**: 181–185. DOI: 10.1046/j.1461-0248.2000.00139.x.

[5] Gandon, S., M. J. Mackinnon, S. Nee & A. F. Read (2001). Imperfect vaccines and the evolution of pathogen virulence. *Nature* **414**: 751–756. DOI: 10.1038/414751a.

[6] Gandon, S., M. J. Mackinnon, S. Nee & A. F.
Read (2003). Imperfect vaccination: some epidemiological and evolutionary consequences. *Proc. R. Soc. B.* **270**: 1129–1136. DOI: 10.1098/rspb.2003.2370. + +[7] Gandon, S. (2004). Evolution of multihost parasites. *Evolution*. **58**(3): 455–469. DOI: 10.1111/j.0014-3820.2004.tb01669.x. + +[8] Rousset, F. (2004). Genetic structure and selection in subdivided populations. Princeton University Press, Princeton, NJ, USA. + +[9] Lion, S. & M. Boots (2010). Are parasites "prudent" in space? *Ecol. Lett.* **13**(10): 1245–55. DOI: 10.1111/j.1461-0248.2010.01516.x. + +[10] Lion, S. & S. Gandon (2015). Evolution of spatially structured host-parasite interactions. *J. evol. Biol*. DOI: 10.1111/jeb.12551. +---PAGE_BREAK--- + +Appendix S2: Evolutionary consequences of an anti-growth vaccine: +vaccine coverage (figure S2) + +We show here the impact of vaccination coverage on parasite prevalence and virulence, for near-perfect vaccines ($r_2 = 0.9$). We broadly recover the predictions of Gandon et al. (2001, 2003): increasing vaccination coverage has little impact on parasite prevalence, but may select for higher virulence (figure S2a). Note that, as parasite dispersal becomes more local, parasite prevalence is minimised at lower vaccination coverage (figure S2b). Lower parasite dispersal leads to lower prevalence and more prudent exploitation over the whole range of vaccination coverage, but selection for increased virulence is stronger at intermediate parasite dispersal. + +Figure S2: The evolutionarily stable host exploitation (a) and prevalence (b) of the parasite as a function of vaccine coverage for an anti-growth vaccine $r_2$. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The fractions represent the number of runs that went extinct out of the six runs. 
The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S3: Evolutionary consequences of an anti-transmission vaccine (figure S3)

Figure S3: The evolutionarily stable host exploitation (a,b) and prevalence (c,d) of the parasite as a function of parasite dispersal, vaccine efficacy, and vaccine coverage for an anti-transmission vaccine $r_3$. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S4: Effect of parasite evolution on total host density (figure S4)

Figure S4: The total host density on the evolutionary attractor as a function of (a,c) vaccine efficacy and (b,d) vaccine coverage for (a,b) anti-infection ($r_1$) and (c,d) anti-growth ($r_2$) vaccines. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05.
Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S5: Effect of host dispersal (figure S5)

In the main text, we investigate how changes in parasite dispersal affect parasite evolution when hosts reproduce locally. Here, we show the robustness of our results when host dispersal is either partially ($g_H = 0.5$) or fully global ($g_H = 1$). For anti-growth (b) and anti-toxin (c) vaccines, global host dispersal weakens the effect of local parasite dispersal on the evolution of virulence. For anti-infection vaccines (a), the interplay between global host dispersal and local parasite dispersal gives rise to a non-linear relationship between vaccine efficacy and ES virulence, with a maximum for near-perfect vaccines. A complete study of the interplay between host and parasite dispersal kernels is beyond the scope of this paper, but this result suggests that the evolutionary outcome depends on both host and parasite dispersal patterns (see also Lion & Gandon, 2015 for a discussion in homogeneous spatially structured populations). Note that, as expected, global host dispersal always leads to higher prevalence (d,e,f).

Figure S5: The evolutionarily stable host exploitation (a,b,c) and prevalence (d,e,f) for (a,d) anti-infection ($r_1$), (b,e) anti-growth ($r_2$) and (c,f) anti-toxin ($r_4$) vaccines. For each figure, the results for fully local parasite dispersal ($g_P = 0$) and either fully local ($g_H = 0$, plain lines), partially global ($g_H = 0.5$, dotted lines), or fully global ($g_H = 1$, dashed lines) host dispersal are shown. The dots indicate the mean and standard deviation for six runs of the stochastic process.
The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S6: Effect of host fecundity (figure S6)

Previous studies have shown that, in the absence of vaccination, the kin competition effect is predicted to vanish when habitat saturation increases: as host fecundity increases, the differences between spatial and non-spatial models flatten out (Lion & Boots, 2010). Indeed, when host fecundity is infinite, the model converges towards a simple SIS model without demography, for which parasite dispersal only affects the speed of evolution, but not the endpoint. Stochastic simulations lead to the same result for anti-infection and anti-transmission vaccines, although for an anti-growth vaccine, the effect of host fecundity appears to be more complex (figure S6).

Figure S6: The evolutionarily stable host exploitation (plain lines) and prevalence (dashed lines) of the parasite as a function of parasite dispersal for (a) an anti-infection vaccine ($r_1$), (b) an anti-growth vaccine ($r_2$) and (c) an anti-transmission vaccine ($r_3$) for a near-perfect vaccine ($\nu = 0.9$ and $r_i = 0.9$) and increasing values of host fecundity ($b = 8, 12, 24, 40, 100$). The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05.
Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.
\ No newline at end of file
diff --git a/samples/texts_merged/6332297.md b/samples/texts_merged/6332297.md
new file mode 100644
index 0000000000000000000000000000000000000000..b181a9269b6f4db08f08bdfdf1d98a7c8b144bbe
--- /dev/null
+++ b/samples/texts_merged/6332297.md
@@ -0,0 +1,449 @@

---PAGE_BREAK---

# The Worst Case Finite Optimal Value in Interval Linear Programming

Milan Hladík¹,*

¹ Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University,
Malostranské nám. 25, 11800, Prague, Czech Republic
E-mail: *hladik@kam.mff.cuni.cz*

**Abstract.** We consider a linear programming problem in which possibly all coefficients are subject to uncertainty in the form of deterministic intervals. The problem of computing the worst case optimal value has already been thoroughly investigated in the past. Notice that this value can be infinite due to infeasibility of some instances. This is a serious drawback if we know a priori that all instances should be feasible. Therefore we focus on the feasible instances only and study the problem of computing the worst case finite optimal value. We present a characterization for the general case and investigate special cases, too. We show that the problem is easy to solve provided interval uncertainty affects the objective function only, but the problem becomes intractable in case of intervals in the right-hand side of the constraints. We also propose a finite reduction based on inspecting candidate bases. We show that processing a given basis is still an NP-hard problem even with a non-interval constraint matrix; however, the problem becomes tractable as long as uncertain coefficients are situated either in the objective function or in the right-hand side only.
**Key words:** linear programming, interval analysis, sensitivity analysis, interval linear programming, NP-completeness

Received: September 28, 2018; accepted: November 14, 2018; available online: December 13, 2018

DOI: 10.17535/crorr.2018.0019

## 1. Introduction

Consider a linear programming (LP) problem

$$f(A, b, c) = \min c^T x \text{ subject to } x \in M(A, b), \quad (1)$$

where $M(A, b)$ is the feasible set with constraint matrix $A \in \mathbb{R}^{m \times n}$ and the right-hand side vector $b \in \mathbb{R}^m$. We use the convention $\min\emptyset = \infty$ and $\max\emptyset = -\infty$. Basically, one of the following canonical forms

$$f(A,b,c) = \min c^T x \text{ subject to } Ax = b, x \ge 0, \qquad (\text{A})$$

$$f(A,b,c) = \min c^T x \text{ subject to } Ax \le b, \qquad (\text{B})$$

$$f(A,b,c) = \min c^T x \text{ subject to } Ax \le b, x \ge 0 \qquad (\text{C})$$

is usually considered. As was repeatedly observed, in the interval setting, these forms are not equivalent to each other in general [10, 12, 17], so they have to be analyzed separately. We can consider a general form involving all the canonical forms together [13], but for the sake of exposition, it is better to consider the canonical forms separately.

*Corresponding author.

---PAGE_BREAK---

**Interval data.** An interval matrix is defined as the set

$$ \mathbf{A} = \{ A \in \mathbb{R}^{m \times n}; \underline{A} \leq A \leq \overline{A} \}, $$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$, $\underline{A} \leq \overline{A}$, are given matrices. We will also use the notion of the midpoint and radius matrix, defined respectively as

$$ A_c := \frac{1}{2}(\underline{A} + \overline{A}), \quad A_{\Delta} := \frac{1}{2}(\overline{A} - \underline{A}). $$

The set of all $m \times n$ interval matrices is denoted by $\mathbb{IR}^{m \times n}$.
Similar notation is used for interval vectors, considered as one-column interval matrices, and for interval numbers. For interval arithmetic see, e.g., the textbooks [20, 22].

**Interval linear programming.** Let $\mathbf{A} \in \mathbb{IR}^{m \times n}$, $\mathbf{b} \in \mathbb{IR}^m$ and $\mathbf{c} \in \mathbb{IR}^n$ be given. By an interval linear programming problem we mean a family of LP problems (1) with $A \in \mathbf{A}$, $b \in \mathbf{b}$ and $c \in \mathbf{c}$. A particular LP problem from this family is called a *realization*.

In recent years, the optimal value range problem was intensively studied. The problem consists of determining the best case and worst case optimal values defined as

$$
\begin{align*}
\underline{f} &:= \min f(A, b, c) && \text{subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, \\
\overline{f} &:= \max f(A, b, c) && \text{subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}.
\end{align*}
$$

The interval $\mathbf{f} = [\underline{f}, \overline{f}]$ then gives us the range of optimal values of the interval LP problem; each realization (1) has its optimal value in $\mathbf{f}$. If we define the image of optimal values

$$ f(\mathbf{A}, \mathbf{b}, \mathbf{c}) := \{f(A, b, c) \mid A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}\}, $$

then the optimal value range alternatively reads

$$
\begin{align*}
\underline{f} &:= \min f(\mathbf{A}, \mathbf{b}, \mathbf{c}), \\
\overline{f} &:= \max f(\mathbf{A}, \mathbf{b}, \mathbf{c}).
\end{align*}
$$

References [6, 12] present a survey on this topic. Methods and formulae for determining $\underline{f}$ and $\overline{f}$ were discussed in [5, 11, 21, 24]. Some of the values are easily computable, but some are NP-hard, depending on the particular form (A)–(C) of the LP problem.
The hard cases are $\overline{f}$ for type (A) and $\underline{f}$ for type (B); NP-hardness was proved in [6, 7, 26, 28]. Hladík [15] proposes an approximation method for the intractable cases. Garajová et al. [10] study, among other things, the effect of transformations of the constraints on the optimal value range.

Besides the optimal value range problem, the effects of interval data on the optimal solution set were also investigated. See [2, 16, 19] for some of the recent results and the types of solutions considered.

**Problem formulation.** The worst case optimal value $\overline{f}$ can be infinite (i.e., $\overline{f} = \infty$) due to infeasibility of some realization. However, in many situations, we know a priori or can assure that all instances are feasible; a typical example is the transportation problem [4]. Therefore, we focus on feasible realizations only and define the *worst case finite optimal value* as

$$ \bar{f}_{fin} := \max f(A, b, c) \text{ subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, f(A, b, c) < \infty. $$

**Example 1.** Consider the interval LP problem

$$
\min x \quad \text{subject to} \quad x \le [-1, 1], x \ge 0.
$$

Choosing a negative value from the interval $[-1, 1]$, we obtain an infeasible LP problem. Choosing a nonnegative value, the resulting optimal value is zero. Therefore $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = \{0, \infty\}$ and $\mathbf{f} = [\underline{f}, \overline{f}] = [0, \infty]$, but $\bar{f}_{fin} = 0$.

---PAGE_BREAK---

We will assume that there is at least one infeasible realization, that is, $f(A, b, c) = \infty$ for some $A \in \mathbf{A}$, $b \in \mathbf{b}$ and $c \in \mathbf{c}$; methods for checking this property are discussed in [6, 13], among others. Otherwise, if every realization is feasible, then $\bar{f}_{fin} = \bar{f}$, and we can use standard techniques for computing $\bar{f}$.

## 2. General results

As the following example shows, even the value of $\bar{f}_{fin}$ can be infinite. We will show later in Proposition 5 that this happens only if there are intervals in the constraint matrix.

**Example 2.** Consider the interval LP problem

$$ \min -x_1 \quad \text{subject to} \quad [0,1]x_2 = -1, x_1 - x_2 = 0, x_1, x_2 \le 0. $$

By direct inspection, we observe that $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = [1, \infty]$ and $\mathbf{f} = [1, \infty]$. We have $\bar{f} = \infty$ because the LP problem is infeasible when choosing the zero from the interval $[0, 1]$. However, we have also $\bar{f}_{fin} = \infty$ since the optimal value $f(A, b, c) \to \infty$ as the selection from $[0, 1]$ tends to zero.

Denote by

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c) \qquad (2) $$

the dual problem to (1). For the canonical forms (A)–(C), the dual problems respectively read

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, \qquad (A) $$

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y = c, y \le 0, \qquad (B) $$

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, y \le 0. \qquad (C) $$

By duality in linear programming, we can replace the inner optimization problem in the definition of $\bar{f}_{fin}$ by its dual problem with no additional assumptions. This is a bit surprising since duality in real or interval linear programming usually needs some kind of (strong) feasibility; see Novotná et al. [23].

**Proposition 1.** We have

$$ \bar{f}_{fin} = \max g(A,b,c) \quad \text{subject to} \quad A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, g(A,b,c) < \infty. \qquad (3) $$

**Proof.** By strong duality in linear programming, both primal and dual problems have the same optimal value as long as at least one of them is feasible.
If the primal problem is infeasible for every realization of interval data, then the dual problem is for every realization either infeasible or unbounded. In any case, both sides of (3) are equal to $-\infty$. Thus we will assume that the feasible set $M(A,b)$ is nonempty for at least one realization. The assumption ensures feasibility of at least one realization, so we can replace the primal problem by the dual one. Notice that feasibility of all realizations need not be assumed, since primally infeasible instances are idle for both primal and dual problems. $\square$

The advantage of the formula (3) is that the "max min" optimization problem is reduced to a "max max" problem

$$ \bar{f}_{fin} = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c), M(A,b) \neq \emptyset, A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, \qquad (4) $$

which is hopefully easier to deal with.

---PAGE_BREAK---

## 3. Special cases with A real

In this section, we focus on certain sub-classes of the main problem. In particular, we consider the case with a real constraint matrix, i.e., $A_{\Delta} = 0$. This case is not much of a restriction on generality since the matrix $A$ characterizes the structure of the model and is often fixed. This is particularly true for transportation problems or flows in networks [1, 27]. In contrast, the costs $c$ in the objective function and the capacities corresponding to the right-hand side vector $b$ are typically affected by various kinds of uncertainty.

As we already mentioned, the transformations between the LP forms (A)–(C) are not equivalent in general. Nevertheless, in some cases, equivalence does hold. Garajová et al.
[10] showed that, provided $A$ is real, finite optimal values (and therefore also $\bar{f}_{fin}$) are not changed under the following transformations:

* transform an interval LP problem of type (A)

$$ \min c^T x \text{ subject to } Ax = b, x \ge 0 $$

to form (C) by splitting the equations into double inequalities

$$ \min c^T x \text{ subject to } Ax \le b, Ax \ge b, x \ge 0, $$

* transform an interval LP problem of type (B)

$$ \min c^T x \text{ subject to } Ax \le b $$

to form (C) by imposing nonnegativity of variables

$$ \min c^T x^{+} - c^T x^{-} \text{ subject to } Ax^{+} - Ax^{-} \le b, x^{+}, x^{-} \ge 0. $$

In Garajová et al. [10], it was also observed that the first transformation may change finite optimal values in the case with interval $\mathbf{A}$. Below, we show by an example that this is also true for the second transformation.

**Example 3.** Consider the interval LP problem of type (B)

$$ \min -x \text{ subject to } [0,1]x \le -1, -[1,2]x \le 5. $$

It is easy to see that $f = [1, 5] \cup \{\infty\}$ and $\bar{f}_{fin} = 5$. Imposing nonnegativity of variables leads to the interval LP problem

$$ \min -x^{+} + x^{-} \text{ subject to } [0,1]x^{+} - [0,1]x^{-} \le -1, -[1,2]x^{+} + [1,2]x^{-} \le 5. $$

Now, the set of optimal values expands significantly. For instance, the realization

$$ \min -x^{+} + x^{-} \text{ subject to } 0.1x^{+} - 0.1x^{-} \le -1, -2x^{+} + 1x^{-} \le 5 $$

has the optimal value of 10. By direct inspection, we can see that $f = \{-\infty\} \cup [1, \infty]$. That is, the worst case finite optimal value grows to $\bar{f}_{fin} = \infty$.

### 3.1. Interval objective function

If interval data are situated in the objective vector only, computation of $\bar{f}_{fin}$ is easy, just by solving one LP problem.

**Proposition 2.** If $A$ and $b$ are real, then computation of $\bar{f}_{fin}$ is a polynomial problem.
---PAGE_BREAK---

**Proof.** Under the assumptions, the problem (4) takes the form of an LP problem in variables $x, y, c$. Moreover, the variable $c$ can be easily eliminated. For types (A) and (C) in particular, the resulting LP problems read, respectively,

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax = b, x \ge 0, A^T y \le \overline{c}, \qquad (5) $$

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, x \ge 0, A^T y \le \overline{c}, y \le 0. \qquad (6) $$

For type (B) we have

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, \underline{c} \le A^T y \le \overline{c}, y \le 0. \quad \square $$

**Corollary 1.** Suppose that $A$ and $b$ are real and $M(A, b) \neq \emptyset$. For interval LP problems of types (A) and (C), the value of $\bar{f}_{fin}$ is attained at $c := \overline{c}$.

**Proof.** Due to $M(A,b) \neq \emptyset$, problems (5) and (6) take respectively the form of

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \overline{c}, $$

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \overline{c}, y \le 0. $$

Again by $M(A,b) \neq \emptyset$, we can replace the LP problems by their duals

$$ \bar{f}_{fin} = \min \overline{c}^T x \text{ subject to } Ax = b, x \ge 0, $$

$$ \bar{f}_{fin} = \min \overline{c}^T x \text{ subject to } Ax \le b, x \ge 0. $$

The LP problems on the right-hand sides yield $\bar{f}_{fin}$ for the corresponding LP forms. $\square$

Notice that for LP problems of type (B), this property is not true. In general, $\bar{f}_{fin}$ is not attained for extremal values of $c$, which is illustrated by the following example.

**Example 4.** Consider the interval LP problem of type (B)

$$ \min -x_1 + c_2 x_2 \text{ subject to } x_1 + x_2 \le 2, -x_1 + x_2 \le 0, $$

where $c_2 \in \mathbf{c}_2 = [-2, -0.5]$. It is not hard to see that $\bar{f}_{fin} = \bar{f} = -2$, and it is attained for the value of $c_2 := -1$ at the point $x = (1, 1)^T$. For smaller $c_2$, the optimal value is $-1 + c_2 < -2$.
For larger $c_2$, the optimal value is $-\infty$ since the problem is unbounded.

### 3.2. Interval right-hand side

In contrast to the previous case, if interval data are situated in the right-hand side vector only (i.e., $A_{\Delta} = 0$ and $c_{\Delta} = 0$), computation of $\bar{f}_{fin}$ is intractable.

**Proposition 3.** If $A$ and $c$ are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (A).

**Proof.** By [9], checking whether there is at least one feasible realization of the interval system

$$ A^T y \le 0, \mathbf{b}^T y > 0 $$

is an NP-hard problem. Hence it is NP-hard to check $\bar{f} > 0$ (not yet speaking about $\bar{f}_{fin}$) for the interval LP problem

$$ \max \mathbf{b}^T y \text{ subject to } A^T y \le 0. $$

---PAGE_BREAK---

Due to positive homogeneity of the constraints, we can rewrite the problem as

$$
\max \mathbf{b}^T y \text{ subject to } A^T y \le 0, y \le e, -y \le e, \qquad (7)
$$

where $e = (1, \dots, 1)^T$. For this interval problem, checking $\bar{f}_{fin} > 0$ is NP-hard.

The interval problem (7) is of the form (3); the condition $g(A, b, c) < \infty$ needn't be considered since the problem is feasible and finite for each realization. Thus we can view this problem as the dual of an interval LP problem of type (A), which has a fixed objective function vector and a fixed constraint matrix. $\square$

**Corollary 2.** If $A$ and $c$ are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (B) and for type (C).

**Proof.** By Proposition 3, checking $\bar{f}_{fin} > 0$ is NP-hard for an interval LP problem

$$
\min c^T x \text{ subject to } Ax = \mathbf{b}, x \ge 0.
$$

According to the discussion at the beginning of Section 3, the value of $\bar{f}_{fin}$ is not changed under the transformation of equations to double inequalities

$$
\min c^T x \text{ subject to } Ax \leq b, Ax \geq b, x \geq 0.
$$

This is, however, a type (C) problem, which must therefore be NP-hard.
Type (B) problems are also NP-hard since every problem in the form of (C) is essentially in the form of (B). $\square$

Despite intractability, computation of $\bar{f}_{fin}$ need not always be so hard. If $A$ is real, then (4) takes the form of a bilinear programming problem, that is, the constraints are linear and the objective function is bilinear (with respect to the variables $y, b, c$). Even though the problem is NP-hard in general, some instances may be solved faster.

**Example 5.** Consider an interval LP problem in the form

$$
\min c^T x \text{ subject to } Ax \geq b
$$

with $b > 0$. Then (4) reads

$$
\bar{f}_{fin} = \max b^T y \text{ subject to } Ax \geq b, A^T y = c, y \geq 0, b \in \mathbf{b}.
$$

Since the variables are nonnegative, it has the special form of a geometric program, and hence it is efficiently solvable [3].

## 4. Basis approach

If the LP problem (1) has a finite optimal value, then it possesses an optimal solution corresponding to an optimal basis. For concreteness, consider a type (A) problem. A basis $B$ is optimal if and only if the following two conditions are satisfied:

$$
A_B^{-1} b \ge 0, \tag{8a}
$$

$$
c_N^T - c_B^T A_B^{-1} A_N \ge 0^T. \tag{8b}
$$

The optimal value then is $f(A, b, c) = c_B^T A_B^{-1} b$.

Given a basis $B$ and an interval LP problem, we will now address the question of the highest optimal value achievable at this basis. This can be formulated as the optimization problem

$$
\max c_B^T A_B^{-1} b \text{ subject to (8), } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}. \quad (9)
$$

---PAGE_BREAK---

**Real constraint matrix.** Suppose from now on that $A$ is real. Then the optimization problem (9) reads

$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \ c \in \mathbf{c}. \tag{10} $$

Its constraints are linear in the variables $b, c$. Therefore, checking its feasibility is an easy task.
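This feasibility check is a plain linear program in $(b, c)$. A minimal sketch (assuming SciPy's `linprog`; the function name and the encoding below are illustrative, not taken from the paper): conditions (8a) and (8b), restricted to $b \in \mathbf{b}$ and $c \in \mathbf{c}$, are stacked into one system of linear inequalities over $z = (b, c)$ and handed to an LP solver with a zero objective.

```python
import numpy as np
from scipy.optimize import linprog

def is_weakly_optimal(A, B, b_lo, b_hi, c_lo, c_hi):
    """Feasibility of the optimality conditions (8) for a real matrix A and
    a basis index set B (0-based), with b in [b_lo, b_hi] and c in
    [c_lo, c_hi].  A_B is assumed nonsingular.  Returns True iff some
    realization of (b, c) makes B an optimal basis."""
    m, n = A.shape
    N = [j for j in range(n) if j not in B]
    AB_inv = np.linalg.inv(A[:, B])
    T = AB_inv @ A[:, N]                       # A_B^{-1} A_N
    # Variables z = (b, c); all of (8) is encoded as A_ub @ z <= 0.
    A_ub = np.zeros((m + len(N), m + n))
    A_ub[:m, :m] = -AB_inv                     # (8a):  A_B^{-1} b >= 0
    for k, j in enumerate(N):                  # (8b):  c_j - c_B^T T_k >= 0
        A_ub[m + k, m + np.array(B)] = T[:, k]
        A_ub[m + k, m + j] = -1.0
    bounds = list(zip(b_lo, b_hi)) + list(zip(c_lo, c_hi))
    res = linprog(np.zeros(m + n), A_ub=A_ub, b_ub=np.zeros(m + len(N)),
                  bounds=bounds)
    return res.status == 0                     # 0 = a feasible point was found
```

On the data of Example 6 below (with 0-based column indices), this check accepts the bases $\{1,2\}$ and $\{1,3\}$ and rejects, e.g., $\{4,5\}$.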
In accordance with [12], we say that a basis $B$ is *weakly optimal* if it admits at least one finite optimal value, that is, $B$ is optimal for some realization. From the above reasoning, we have

**Proposition 4.** *Checking whether a basis $B$ is weakly optimal is a polynomial problem.*

The feasible set of (10) is bounded, so the optimal value is bounded, too. Since there are finitely many bases, the worst case finite optimal value must be finite. Hence we just derived

**Proposition 5.** If $A$ is real, then $\bar{f}_{fin} < \infty$.

If $c$ is real, then (9) takes the form of an LP problem

$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \tag{11} $$

and so it is polynomially solvable. Similarly in the case when $b$ is real.

**Proposition 6.** If $A, b$ are real or $A, c$ are real, then solving (9) is polynomial.

Solving problem (9) with $A$ real and both $\mathbf{b}$ and $\mathbf{c}$ interval is, however, still intractable.

**Proposition 7.** If $A$ is real, then solving (9) is NP-hard.

**Proof.** By Witsenhausen [29], it is NP-hard to find the maximum value of a bilinear form $u^T M v$ on the interval domain $u, v \in [0, 1]^n$, where $M$ is symmetric nonsingular. We will reduce this problem to our problem. We put $\mathbf{b} := [0, 1]^n$ and $A_B := I_n$, where $I_n$ is the identity matrix. Next, we substitute $c_B := M u$. The condition

$$ c_B = M u, \ u \in [0, 1]^n $$

is equivalent to

$$ 0 \leq M^{-1} c_B \leq 1, $$

so we can formulate it as (8b) for $A_N = (M^{-1}, -M^{-1})$ and $c_N = (1^T, 0^T)^T$. The condition (8a) is trivially satisfied as $A_B^{-1} b = b \in [0, 1]^n$. This completes the reduction. $\square$

**Real A and c.** By Proposition 3 we know that computing $\bar{f}_{fin}$ is NP-hard even when $A$ and $c$ are real and intervals are situated in the right-hand side vector $\mathbf{b}$ only. The above considerations give us a finite reduction for computing $\bar{f}_{fin}$.
For each basis $B$, check if it is weakly optimal and determine the worst case optimal value associated with $B$ by solving the LP problem (11).

In this way, the box $\mathbf{b}$ splits into convex polyhedral sub-parts, which are usually called stability or critical regions in the context of sensitivity analysis and parametric programming [8]. Each region corresponds to a weakly optimal basis. In the area of interval linear programming, but in another context, stability regions were also discussed in Mráz [21].

The obvious drawback of this approach is that there are exponentially many bases. On the other hand, the number of weakly optimal bases might be reasonably small. In order to process them, consider the following graph. The nodes correspond to weakly optimal bases. There is an edge between two nodes if and only if the corresponding bases are neighbors, that is, they differ in exactly one entry (the basic index sets differ in one entry). Since the set $\mathbf{b}$ of the objective vectors of the dual problem (2) is convex and compact, the graph of weakly optimal bases is connected. Therefore, we can start with one weakly optimal basis, inspect the neighboring bases for weak optimality, and proceed until all weakly optimal bases are found.

---PAGE_BREAK---

Figure 1: (Example 6) Illustration of the dual problem: for different values of the objective vector $\mathbf{b}$, the optimal solution moves from $y^1$ to $y^2$ and to unbounded instances.

This method can be significantly faster than processing all possible bases. In particular, if the interval vector $\mathbf{b}$ is narrow, then we can expect that the number of weakly optimal bases is small, or there may even be a unique one. This case of a unique basis is called a *basis stable* problem and was investigated in [14, 18, 25]. Even though it is NP-hard to check basis stability of a basis $B$ for a general interval LP problem, there are practically efficient sufficient conditions; see [14].
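The neighbor search just described can be sketched as a breadth-first traversal of the basis graph (illustrative code, not the author's implementation; `is_weakly_optimal` stands for any oracle, such as the LP feasibility check behind Proposition 4, and `start` is assumed weakly optimal):

```python
from collections import deque

def enumerate_weakly_optimal_bases(n, start, is_weakly_optimal):
    """Enumerate weakly optimal bases by walking the (connected) graph in
    which two bases are neighbors iff their index sets differ in exactly
    one entry.  `is_weakly_optimal` takes a sorted list of basic indices;
    `start` must itself be weakly optimal."""
    found = {frozenset(start)}
    queue = deque(found)
    while queue:
        B = queue.popleft()
        for leave in B:                        # swap one basic index out ...
            for enter in set(range(n)) - B:    # ... and one nonbasic index in
                nb = (B - {leave}) | {enter}
                if nb not in found and is_weakly_optimal(sorted(nb)):
                    found.add(nb)
                    queue.append(nb)
    return [sorted(B) for B in found]
```

Because the graph of weakly optimal bases is connected, this visits all of them while testing only the neighbors of bases already found, which is where the savings over full enumeration come from.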
Moreover, basis stability is polynomially decidable provided $A, b$ or $A, c$ are real, which is our case. Concretely, we have to verify two conditions. First, check (8b), which is easy as all the data are constant. Second, compute by interval arithmetic the expression $A_B^{-1}\mathbf{b}$, and check that the lower bound is nonnegative.

**Example 6.** Consider the interval LP problem of type (A) with data

$$A = \begin{pmatrix} 1 & 2 & 0 & -1 & -1 \\ 1 & 1 & 1 & 1 & 0 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} [3, 5] \\ [2, 4] \end{pmatrix}, \quad c = (10 \ 20 \ 5 \ 3 \ 1)^T.$$

The dual problem is illustrated in Figure 1. There are two weakly optimal bases, $B = \{1, 2\}$ and $B' = \{1, 3\}$. In the figure, they correspond to the vertices $y^1 = (10,0)^T$ and $y^2 = (5,5)^T$.

For basis $B$, the constraint $A_B^{-1}b \ge 0$ from (8a) takes the form

$$
\begin{aligned}
-b_1 + 2b_2 &\geq 0, \\
b_1 - b_2 &\geq 0.
\end{aligned}
$$

By the LP problem (11), we compute the highest optimal value corresponding to this basis as 50.

For basis $B'$, the constraint $A_{B'}^{-1}b \ge 0$ reads

$$
\begin{aligned}
b_1 &\geq 0, \\
-b_1 + b_2 &\geq 0.
\end{aligned}
$$

The LP problem (11) now gives the value of 40 for the highest optimal value associated with $B'$.

In total, we see that the worst case optimal value is $\bar{f}_{fin} = 50$ and it is attained for basis $B$. Figure 2 depicts the interval vector $\mathbf{b}$ and its sub-parts corresponding to the optimal bases $B$ and $B'$ and to infeasible instances.

---PAGE_BREAK---

Figure 2: (Example 6) The sub-parts of the interval vector $\mathbf{b}$ corresponding to the optimal bases $B$ and $B'$ and to infeasible instances.

## 5. Conclusion

We investigated the problem of computing the highest possible optimal value when the input data are subject to variations in given intervals and we restrict ourselves to feasible instances only.
We analyzed the computational complexity issues by identifying the cases that are polynomially solvable and those that remain NP-hard. The proposed basis reduction gives an approach that is not a priori exponential even for the NP-hard cases.

Several open questions arose during the work on this topic. These include, for example, the computational complexity of the following question: Is $\bar{f}_{fin}$ attained for a given basis $B$?

## Acknowledgement

The author was supported by the Czech Science Foundation Grant P403-18-04735S.

## References

[1] Ahuja, R. K., Magnanti, T. L. and Orlin, J. B. (1993). Network Flows. Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall.

[2] Ashayerinasab, H. A., Nehi, H. M. and Allahdadi, M. (2018). Solving the interval linear programming problem: A new algorithm for a general case. Expert Systems with Applications, 93, Suppl. C, 39–49.

[3] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.

[4] Cerulli, R., D'Ambrosio, C. and Gentili, M. (2017). Best and worst values of the optimal cost of the interval transportation problem. In Sforza, A., and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 367–374). Cham: Springer.

[5] Chinneck, J. W. and Ramadan, K. (2000). Linear programming with interval coefficients. Journal of the Operational Research Society, 51(2), 209–220.

---PAGE_BREAK---

[6] Fiedler, M., Nedoma, J., Ramík, J., Rohn, J. and Zimmermann, K. (2006). Linear Optimization Problems with Inexact Data. New York: Springer.

[7] Gabrel, V. and Murat, C. (2010). Robustness and duality in linear programming. Journal of the Operational Research Society, 61(8), 1288–1296.

[8] Gal, T. and Greenberg, H. J. (Eds.) (1997). Advances in Sensitivity Analysis and Parametric Programming. Boston: Kluwer Academic Publishers.
+ +[9] Garajová, E., Hladík, M. and Rada, M. (2017). On the properties of interval linear programs with a fixed coefficient matrix. In Sforza, A., and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 393–401). Cham: Springer. + +[10] Garajová, E., Hladík, M. and Rada, M. (2018). Interval linear programming under transformations: Optimal solutions and optimal value range. Central European Journal of Operations Research. In press, doi: 10.1007/s10100-018-0580-5. + +[11] Hladík, M. (2009). Optimal value range in interval linear programming. Fuzzy Optimization and Decision Making, 8(3), 283–294. + +[12] Hladík, M. (2012). Interval linear programming: A survey. In Mann, Z. A. (Ed.), Linear Programming – New Frontiers in Theory and Applications, chapter 2, (pp. 85–120). New York: Nova Science Publishers. + +[13] Hladík, M. (2013). Weak and strong solvability of interval linear systems of equations and inequalities. Linear Algebra and its Applications, 438(11), 4156–4165. + +[14] Hladík, M. (2014). How to determine basis stability in interval linear programming. Optimization Letters, 8(1), 375–389. + +[15] Hladík, M. (2014). On approximation of the best case optimal value in interval linear programming. Optimization Letters, 8(7), 1985–1997. + +[16] Hladík, M. (2017). On strong optimality of interval linear programming. Optimization Letters, 11(7), 1459–1468. + +[17] Hladík, M. (2017). Transformations of interval linear systems of equations and inequalities. Linear and Multilinear Algebra, 65(2), 211–223. + +[18] Koníčková, J. (2001). Sufficient condition of basis stability of an interval linear programming problem. ZAMM, Zeitschrift für Angewandte Mathematik und Mechanik, 81, Suppl. 3, 677–678. + +[19] Li, W., Liu, X. and Li, H. (2015). Generalized solutions to interval linear programmes and related necessary and sufficient optimality conditions.
Optimization Methods and Software, 30(3), 516–530. + +[20] Moore, R. E., Kearfott, R. B., and Cloud, M. J. (2009). Introduction to Interval Analysis. Philadelphia, PA: SIAM. + +[21] Mráz, F. (1998). Calculating the exact bounds of optimal values in LP with interval coefficients. Annals of Operations Research, 81, 51–62. + +[22] Neumaier, A. (1990). Interval Methods for Systems of Equations. Cambridge: Cambridge University Press. + +[23] Novotná, J., Hladík, M. and Masařík, T. (2017). Duality gap in interval linear programming. In Zadnik Stirn, L., et al. (Eds.), Proceedings of the 14th International Symposium on Operational Research SOR'17, Bled, Slovenia, September 27–29, 2017, (pp. 501–506). Ljubljana, Slovenia: Slovenian Society Informatika. + +[24] Rohn, J. (1984). Interval linear systems. Freiburger Intervall-Berichte 84/7, Albert-Ludwigs-Universität, Freiburg. + +[25] Rohn, J. (1993). Stability of the optimal basis of a linear program under uncertainty. Operations Research Letters, 13(1), 9–12. + +[26] Rohn, J. (1997). Complexity of some linear problems with interval data. Reliable Computing, 3(3), 315–323. + +[27] Schrijver, A. (2004). Combinatorial Optimization. Polyhedra and Efficiency, volume 24 of Algorithms and Combinatorics. Berlin: Springer. + +[28] Serafini, P. (2005). Linear programming with variable matrix entries. Operations Research Letters, 33(2), 165–170. + +[29] Witsenhausen, H. S. (1986). A simple bilinear optimization problem. Systems & Control Letters, 8(1), 1–4.
\ No newline at end of file diff --git a/samples/texts_merged/6376231.md b/samples/texts_merged/6376231.md new file mode 100644 index 0000000000000000000000000000000000000000..ef29ecdba73d680195d947e3c446efa799cbf459 --- /dev/null +++ b/samples/texts_merged/6376231.md @@ -0,0 +1,621 @@ + +---PAGE_BREAK--- + +Project Choice from a Verifiable Proposal + +Yingni Guo + +Eran Shmaya* + +May 8, 2021 + +Abstract + +An agent observes the set of available projects and proposes some, but not necessarily all, of them. A principal chooses one or none from the proposed set. We solve for a mechanism that minimizes the principal's worst-case regret. If the agent can propose only one project, it is chosen for sure if the principal's payoff exceeds a threshold; otherwise, the probability that it is chosen decreases in the agent's payoff. If the agent can propose multiple projects, his payoff from a multiproject proposal equals the maximal payoff from proposing each project alone. Our results highlight the benefits from randomization and from the ability to propose multiple projects. + +JEL: D81, D82, D86 + +Keywords: verifiable disclosure, evidence, project choice, regret minimization + +# 1 Introduction + +Project choice is one of the most important functions of an organization. The process often involves two parties: (i) a party at a lower hierarchical level who has expertise and + +*Guo: Department of Economics, Northwestern University; email: yingni.guo@northwestern.edu. Shmaya: Department of Economics, Stony Brook University; email: eran.shmaya@stonybrook.edu. We thank seminar audiences at the One World Mathematical Game Theory Seminar, the Toulouse School of Economics, the University of Bonn, Northwestern University, the University of Pittsburgh, and Carnegie Mellon University for valuable feedback. +---PAGE_BREAK--- + +proposes projects, and (ii) a party at a higher hierarchical level who evaluates the proposed projects and makes the choice.
This describes the relationship between a division and the headquarters when the division has a chance to choose a factory location or to choose an office building. It also applies to the relationship between a department and the university when the department has a hiring slot open. + +This process of project choice is naturally a principal-agent problem. The agent privately observes which projects are available and proposes a subset of the available projects. The principal chooses one from the proposed projects or rejects them all. If the two parties had identical preferences over projects, the agent would propose the project that is their shared favorite among the available ones, and the principal would always automatically approve the agent's proposal. In many applications, however, the two parties do not share the same preferences. For instance, the division may fail to internalize each project's externalities on other divisions; the department and the university may put different weights on candidates' research and nonresearch abilities. Armed with the proposal-setting power, the agent has a tendency to propose his favorite project and hide his less preferred ones, even if those projects are "superstars" for the principal. How shall the principal encourage the agent to propose the principal's preferred projects? What is the principal's optimal mechanism for choosing a project? + +It is easy to see that no mechanism can guarantee that the principal's favorite project among the available ones will always be chosen. We define the principal's *regret* as the difference between his payoff from his favorite project and his expected payoff from the project chosen under the mechanism. We look for a mechanism that works fairly well for the principal in all circumstances, i.e., a mechanism that minimizes the principal's worst-case regret. This worst-case regret approach to uncertainty can be traced back to Wald (1950) and Savage (1951). 
It has since been used widely in game theory, mechanism design, and machine learning. A decision-theoretic axiomatization for the minimax regret criterion can +---PAGE_BREAK--- + +be found in Milnor (1954) and Stoye (2011). + +Depending on the principal's verification capacity, we distinguish two environments. In the *multiproject* environment, the agent can propose any subset of the available projects. In the *single-project* environment, the agent can propose only one available project. Besides project choice within organizations, the single-project environment also applies to antitrust regulation: a firm chooses a merger to propose from its available merger opportunities, and the regulator decides whether to approve or reject the firm's proposal (e.g., Lyons (2003), Neven and Röller (2005), Armstrong and Vickers (2010), Ottaviani and Wickelgren (2011), Nocke and Whinston (2013)). + +We take the environment as exogenous and derive the optimal mechanisms in both environments. In the single-project environment, the only way for the principal to incentivize the agent is to reject his proposal with positive probability. The multiproject environment, however, allows the principal to "spend" this rejection probability on other proposed projects. Therefore, even though the principal chooses at most one project, he expects to do better in the multiproject environment than in the single-project one. Comparing the two environments will also allow us to quantify the principal's gain from higher verification capacity. + +We begin with the single-project environment. A mechanism specifies for each proposed single project the probability that it will be approved. In the optimal mechanism, if the proposed project gives the principal a sufficiently high payoff, it is approved for sure. We call such projects *good* projects for the principal. If, on the contrary, the proposed project is *mediocre* for the principal, it is approved only with some probability.
The probability that a mediocre project is approved decreases in its payoff to the agent, in order to deter the agent from hiding projects that are more valuable for the principal. This mechanism aligns the incentives of the agent with those of the principal in the following ways. First, if the agent has at least one good project for the principal, he will propose a good project. Second, if all his projects are mediocre for the principal, he will propose the principal's favorite one. +---PAGE_BREAK--- + +In the multiproject environment, a mechanism specifies for each proposed set of projects a randomization over the proposed projects and “no project.” If the agent proposes only one project, the optimal mechanism takes a form similar to the one in the single-project environment. In particular, if the proposed project is sufficiently good for the principal, it is chosen for sure. Otherwise, the project is chosen with some probability that decreases in its payoff to the agent. If the agent proposes more than one project, the randomization maximizes the principal’s expected payoff, subject to the constraint that the agent is promised the maximal expected payoff he would get from proposing each project alone. Under this mechanism, the more projects the agent proposes, the weakly higher his expected payoff is, so the agent is willing to propose all available projects. + +Since the agent gets the maximal expected payoff from proposing each project alone, we call this mechanism the *project-wide maximal-payoff mechanism*. This mechanism implements a compromise between the two parties in the multiproject environment: with some probability the choice favors the agent and with some probability it favors the principal. We also show that randomization is crucial for the principal's minimal worst-case regret to be lower in the multiproject environment than in the single-project one. 
In other words, if the principal is restricted to deterministic mechanisms, his minimal worst-case regret is the same in both the single-project and multiproject environments. + +**Related literature.** Our paper is closely related to Armstrong and Vickers (2010) and Nocke and Whinston (2013), which study the project choice problem using the Bayesian approach. Armstrong and Vickers (2010) characterize the optimal deterministic mechanism in the single-project environment and show through examples that the principal does strictly better if randomization or multiproject proposals are allowed. Nocke and Whinston (2013) focus on mergers (i.e., projects) that are ex ante different and further incorporate the bargaining process among firms. They show that a tougher standard is imposed on mergers +---PAGE_BREAK--- + +involving larger partners. We take the worst-case regret approach to this multidimensional +screening problem. This more tractable approach allows us to explore questions which are +intractable under the Bayesian approach, including how much the principal benefits from +randomization, from higher verification capacity and from a smaller project domain. + +Goel and Hann-Caruthers (2020) consider the project choice problem where the number of available projects is public information. The projects are only partially verifiable, since the agent's only constraint is not to overreport projects' payoffs to the principal. Because their agent cannot hide projects like our agent does, he loses the proposal-setting power. The resulting incentive schemes are thus quite different. + +Since in our model the agent can propose only those projects that are available, the agent's proposal is some evidence about his private information. Hence, our paper is closely related to research on verifiable disclosure (e.g., Grossman and Hart (1980), Grossman (1981), Milgrom (1981), Dye (1985)) and, more broadly, the evidence literature (see Dekel (2016) for a survey). 
We discuss the relation to this literature in more detail after we introduce the model. + +Our result relates to a theme in Aghion and Tirole (1997), namely, that the principal has formal authority, but the agent shares real authority due to his private information. We take this theme one step further. Our agent's real authority has two sources: he knows which projects are available, and he determines the proposal from which the principal chooses a project. The idea of striking a compromise is related to Bonatti and Rantakari (2016). They examine the compromise between two symmetric, competing agents whose efforts are crucial for discovering projects. We instead focus on the compromise between an agent who proposes projects and a principal who chooses one or none from the proposed projects. + +Finally, our paper contributes to the literature on mechanism design in which the designer minimizes his worst-case regret. Hurwicz and Shapiro (1978) examine a moral hazard problem. Bergemann and Schlag (2008, 2011) examine monopoly pricing. Renou and Schlag +---PAGE_BREAK--- + +(2011) apply the solution concept of $\epsilon$-minimax regret to the problem of implementing social choice correspondences. Beviá and Corchón (2019) examine the contest that minimizes the designer's worst-case regret. Guo and Shmaya (2019) study the optimal mechanism for monopoly regulation, and Malladi (2020) studies the optimal approval rules for innovation. More broadly, we contribute to the growing literature on mechanism design with worst-case objectives. For a survey on robustness in mechanism design, see Carroll (2019). + +## 2 Model and mechanism + +Let $D$ be the domain of all possible *verifiable projects*. Let $u: D \to \mathbb{R}_+$ be the agent's payoff function, so his payoff is $u(a)$ if project $a$ is chosen. If no project is chosen, the agent's payoff is zero. + +The agent's private type $A \subseteq D$ is a finite set of available projects.
The agent proposes a set $P$ of projects, and the principal can choose one project from this set. The set $P$ is called the agent's *proposal*. It must satisfy two conditions. First, the agent can propose only available projects. Hence, the agent's proposal must be a subset of his type, $P \subseteq A$. This is what we meant earlier when we said that projects are verifiable. Second, $P \in \mathcal{E}$ for some fixed set $\mathcal{E}$ of subsets of $D$. The set $\mathcal{E}$ captures all the exogenous restrictions on the proposal. For instance, in the setting of antitrust regulation, the agent is restricted to proposing at most one project. In many organizations, the principal has limited verification capacity or limited attention, so the agent can propose at most a certain number of projects. + +We begin with two environments which are natural first steps: *single-project* and *multiproject*. In the single-project environment, the agent can propose at most one available project, so $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. In the multiproject environment, the agent can propose any set of available projects, so $\mathcal{E} = 2^D$, the power set of $D$. In subsection 6.1, we discuss the intermediate environments in which the agent can propose up to $k$ projects for some fixed
We elaborate on this point in subsection 6.2. + +A subprobability measure over D with a finite support is given by $\pi: D \to [0, 1]$ such that + +$$\text{support}(\pi) = \{a \in D : \pi(a) > 0\}$$ + +is finite, and $\sum_a \pi(a) \le 1$. When we say that a project *is chosen from* a subprobability measure $\pi$ with finite support, we mean that project *a* is chosen with probability $\pi(a)$, and that no project is chosen with probability $1 - \sum_a \pi(a)$. + +The principal's ability to reject all proposed projects (or equivalently, to choose no project) is crucial for him to retain some "bargaining power." If, on the contrary, the principal must choose a project as long as the agent has proposed some, then the agent effectively has all the bargaining power. The agent will propose only his favorite project which will be chosen for sure. + +A *mechanism* $\rho$ attaches to each proposal $P \in \mathcal{E}$ a subprobability measure $\rho(\cdot|P)$ such that $\text{support}(\rho(\cdot|P)) \subseteq P$. The interpretation is that, if the agent proposes $P$, then a project is chosen from the subprobability measure $\rho(\cdot|P)$. Thus, the agent's expected payoff under the mechanism $\rho$ if he proposes $P$ is $U(\rho, P) = \sum_{a \in P} u(a)\rho(a|P)$. + +A *choice function* $f$ attaches to each type $A$ of the agent a subprobability measure $f(\cdot|A)$ +---PAGE_BREAK--- + +such that $\text{support}(f(\cdot|A)) \subseteq A$. The interpretation is that, if the set of available projects is $A$, then a project is chosen from the subprobability measure $f(\cdot|A)$. + +A choice function $f$ is *implemented* by a mechanism $\rho$ if, for every type $A$ of the agent, there exists a probability measure $\mu$ with support over $\text{argmax}_{P\subseteq A, P\in\mathcal{E}} U(\rho, P)$ such that $f(a|A) = \sum_P \mu(P)\rho(a|P)$. 
The interpretation is that the agent selects only proposals that give him the highest expected payoff among the proposals that he can make, and that, if the agent has multiple optimal proposals, then he can randomize among them. + +# 3 The evidence structure + +When the agent proposes a set $P$ of projects, he provides evidence that his type $A$ satisfies $P \subseteq A$. In this section, we discuss the implication of this role of the agent's proposal as well as the relation to the evidence literature. + +## 3.1 Normality in the multiproject environment + +In our multiproject environment, where $\mathcal{E} = 2^D$, the agent has the ability to provide the maximal evidence for his type. This property is called *normality* in the literature (Lipman and Seppi (1995), Bull and Watson (2007), Ben-Porath, Dekel and Lipman (2019)). Another interpretation of the multiproject environment is to view an agent who proposes a set $P$ as an agent who claims that his type is $P$. The relation that “type $A$ can claim to be type $B$” between types is reflexive and transitive, by the corresponding properties of the inclusion relation between sets. Transitivity is called the nested range condition in Green and Laffont (1986) and is also assumed in Hart, Kremer and Perry (2017). + +In our single-project environment, where $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$, normality does not hold. The single-project environment is the main focus in Armstrong and Vickers (2010) and Nocke and Whinston (2013), and is similar to the assumption in Glazer and Rubinstein +---PAGE_BREAK--- + +(2006) and Sher (2014) that the speaker can make one and only one of the statements he has access to. + +## 3.2 Revelation principle in the multiproject environment + +Consider the multiproject environment $\mathcal{E} = 2^D$. A mechanism $\rho$ is incentive-compatible (IC) if the agent finds it optimal to propose his type $A$ truthfully. 
That is, $U(\rho, A) \ge U(\rho, P)$ for every finite set $A \subseteq D$ and every subset $P \subseteq A$. Equivalently, a mechanism $\rho$ is IC if and only if $U(\rho, P)$ weakly increases in $P$ with respect to set inclusion. The following proposition states the revelation principle in the multiproject environment. + +**Proposition 3.1.** *Assume $\mathcal{E} = 2^D$. If a choice function $f$ is implemented by some mechanism, then the mechanism $f$ is IC and implements the choice function $f$.* + +As we explained in subsection 3.1, the multiproject environment satisfies normality and the nested range condition. Previous papers (e.g., Green and Laffont (1986), Bull and Watson (2007)) have shown that the revelation principle holds under these assumptions. Our proposition 3.1 does not follow directly from their theorems, however, because the agent's proposal $P$ serves two roles in our model. In addition to providing evidence, the proposal also determines the set of projects from which the principal can choose. Nonetheless, a similar argument for the revelation principle can be made within our model as well. + +*Proof of Proposition 3.1.* Assume that the mechanism $\rho$ implements the choice function $f$. Then for every finite set $A \subseteq D$ and every subset $P \subseteq A$, we have: + +$$U(f, A) = \max_{Q \subseteq A} U(\rho, Q) \ge \max_{Q \subseteq P} U(\rho, Q) = U(f, P),$$ + +where the inequality follows from the fact that $Q \subseteq P$ implies $Q \subseteq A$, and the two equalities follow from the fact that $\rho$ implements $f$. Hence, the mechanism $f$ is IC. Also, by definition, +---PAGE_BREAK--- + +if the mechanism $f$ is IC, then it implements the choice function $f$. $\square$ + +Since an implementable choice function is itself an IC mechanism and vice versa, we will +use both terms interchangeably whenever we discuss the multiproject environment. 
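The IC criterion above — a mechanism is IC if and only if $U(\rho, P)$ weakly increases with respect to set inclusion — is easy to test by brute force on a small finite domain. Here is a minimal sketch; the three-project domain and the naive 50/50 "compromise" mechanism `rho` are our illustrative assumptions, not the paper's optimal mechanism.

```python
from itertools import chain, combinations

# Toy finite domain: each project is (u, v) = (agent payoff, principal payoff).
D = [(0.3, 1.0), (1.0, 0.5), (0.6, 0.8)]

def subsets(s):
    """All subsets of s, i.e. all proposals a type s can make when E = 2^D."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def rho(P):
    """A naive compromise mechanism (illustrative, not the paper's PMP mechanism):
    choose the principal's favorite proposed project with probability 1/2 and
    the agent's favorite with probability 1/2."""
    if not P:
        return {}
    best_p = max(P, key=lambda a: a[1])  # principal's favorite: max v
    best_a = max(P, key=lambda a: a[0])  # agent's favorite: max u
    pi = {}
    for a, w in ((best_p, 0.5), (best_a, 0.5)):
        pi[a] = pi.get(a, 0.0) + w
    return pi

def U(P):
    """Agent's expected payoff from proposal P under rho."""
    return sum(u * p for (u, _), p in rho(P).items())

# rho is IC iff truthfully proposing the full type A is always optimal,
# i.e. U(A) >= U(P) for every type A and every subproposal P of A.
ic = all(U(A) >= U(P) for A in subsets(D) for P in subsets(A))
print(ic)  # False
```

The check reports that this naive compromise is not IC: an agent whose type contains $(1.0, 0.5)$ gets payoff $1.0$ from proposing it alone but only $0.65$ from also revealing the principal's favorite $(0.3, 1.0)$, which is precisely the hiding deviation an IC mechanism must deter.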
+ +# 4 The principal's problem + +Let $v: D \rightarrow \mathbb{R}_{+}$ be the principal's payoff function, so his payoff is $v(a)$ if project $a$ is chosen. If no project is chosen, the principal's payoff is zero. + +The principal's *regret* from a choice function *f* when the set of available projects is *A* is: + +$$ \mathrm{RGRT}(f, A) = \max_{a \in A} v(a) - \sum_{a \in A} v(a)f(a|A). $$ + +The regret is the difference between what the principal could have achieved if he knew the set $A$ of available projects and what he actually achieves. Savage (1951) calls this difference *loss*. We instead call it regret, following the more recent game theory and computer science literature. Wald (1950) and Savage (1972) propose to consider only *admissible* choice functions (i.e., choice functions that are not weakly dominated). A choice function $f$ is *admissible* if there exists no other $f'$ such that the principal's regret is weakly higher under $f$ than under $f'$ for every type of the agent and strictly higher for some type. For the rest of the paper, we focus on admissible choice functions. + +The worst-case regret (WCR) from a choice function $f$ is: + +$$ \text{WCR}(f) = \sup_{A \subseteq D, |A| < \infty} \text{RGRT}(f, A), $$ + +where the supremum ranges over all possible types of the agent (i.e., all possible finite sets of available projects). The principal's problem is to minimize WCR($f$) over all implementable +---PAGE_BREAK--- + +choice functions $f$. This step is our only departure from the Bayesian approach. The Bayesian approach will instead assign a prior belief over the number and the characteristics of the available projects. The principal's problem, then, is to minimize the *expected* regret instead of the *worst-case* regret. + +Note that, while our principal takes the worst-case regret approach to uncertainty about the agent's type, he calculates the expected payoff with respect to his own objective randomization.
The same assumption is made by Savage (1972) when he discusses the use of randomized acts under the worst-case regret approach (Savage, 1972, Chapter 9.3). A similar assumption is made in the ambiguity aversion literature. For example, in Gilboa and Schmeidler (1989), the decision maker calculates his expected payoff with respect to random outcomes (i.e., "roulette lotteries") but evaluates acts using the maxmin approach with non-unique priors. If we make the alternative assumption that the principal takes the worst-case regret approach even towards his own randomization, we effectively restrict the principal to deterministic mechanisms. + +From now on, we assume that the set $D$ of all possible verifiable projects is $[\underline{u}, 1] \times [\underline{v}, 1]$ for some parameters $\underline{u}, \underline{v} \in [0, 1]$, and that the functions $u(\cdot)$ and $v(\cdot)$ are projections over the first and second coordinates. Abusing notation, we denote a project $a \in D$ also by $a = (u, v)$, where $u$ and $v$ are the agent's and the principal's payoffs, respectively, if project $a$ is chosen. + +The parameters $\underline{u}$ and $\underline{v}$ quantify the uncertainty faced by the principal: the higher they are, the smaller the uncertainty. They also measure players' preference intensity over projects. As $\underline{u}$ increases, the agent's preferences over projects become less strong, so it becomes easier to align the incentives of the agent with those of the principal. As $\underline{v}$ increases, the principal's preferences over projects become less strong, so the agent's tendency to propose his own favorite project becomes less costly for the principal. +---PAGE_BREAK--- + +# 5 Optimal mechanisms + +## 5.1 Preliminary intuition + +We now use an example to illustrate the fundamental trade-off faced by the principal, as well as the intuition behind the optimal mechanisms.
We first explain how randomization helps to reduce the WCR in the single-project environment. We then explain how the multiproject environment can further reduce the WCR. For this illustration, we assume that $\underline{v} = 0$, so $D = [\underline{u}, 1] \times [0, 1]$. + +Figure 1: Preliminary intuition, $\underline{v} = 0$ + +Consider the single-project environment and assume first that the principal is restricted to deterministic mechanisms. In this case, a mechanism is a set of projects that the principal approves for sure, and all other projects are rejected outright. For each such mechanism, the principal has two fears. First, if the agent has multiple projects which will be approved, then he will propose what he likes the most, even if projects are available that are more valuable to the principal. Second, if the agent has only projects which will be rejected, then the principal loses the payoff from these projects. Applied to the project $\bar{a} = (1, 1/2)$, these two fears imply that no matter how the principal designs the deterministic mechanism, his
We first note that, if $\underline{u} = 0$, then, even with randomized mechanisms, the principal cannot reduce his WCR below 1/2. This is because the only way to incentivize the agent to propose the project $(\underline{u}, 1) = (0, 1)$ when the set of available projects is $\{\bar{a}, (0, 1)\}$ is still to reject the project $\bar{a}$ outright if $\bar{a}$ is proposed. However, if $\underline{u} > 0$, then the principal can do better. He can approve the project $\bar{a}$ with probability $\underline{u}$, while still maintaining the agent's incentive to propose the principal's preferred project $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$. We carry out this idea in Theorem 5.1 in subsection 5.2. + +Let us now consider the multiproject environment. We again begin with deterministic mechanisms. Under deterministic mechanisms, more choice functions can be implemented in the multiproject environment than in the single-project one.¹ However, when restricted to deterministic mechanisms, the principal has the same minimal WCR in the multiproject environment as in the single-project one. This is because, if the principal wants to choose $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$, then the only way to incentivize the agent to include $(\underline{u}, 1)$ in his proposal is to reject the project $\bar{a}$ when $\bar{a}$ is proposed alone. + +We now explain how randomization can help in the multiproject environment, even when + +¹For example, the principal can implement the choice function that chooses (i) the agent’s favorite project, if there are at least two available projects, and (ii) nothing, if there is at most one available project. +---PAGE_BREAK--- + + = 0. 
While a deterministic mechanism must pick either $\bar{a}$ or (0, 1) or nothing when the agent proposes $\{\bar{a}, (0, 1)\}$, a randomized mechanism can reach a compromise by choosing each project with probability 1/2. On the other hand, if the agent proposes only $\bar{a}$, the principal chooses $\bar{a}$ with probability 1/2, so the agent of type $\{\bar{a}, (0, 1)\}$ is willing to propose $\{\bar{a}, (0, 1)\}$ instead of just $\bar{a}$. The regret is 1/4 both when the agent's type is $\{\bar{a}, (0, 1)\}$ and when his type is $\{\bar{a}\}$. We carry out this idea of reaching a compromise in Theorem 5.2 in subsection 5.3. Specifically, when the agent proposes $P$, the principal gives the agent the maximal payoff he can offer, subject to the constraint that he can give the agent this same payoff if the agent proposes $P \cup \{(\underline{u}, 1)\}$ and can still keep his regret under control. + +## 5.2 Optimal mechanism in the single-project environment + +Since the agent can propose at most one project, a mechanism specifies the approval probability for each proposed project. Instead of using our previous notation $\rho(a|\{a\})$, we let $\alpha(u, v) \in [0, 1]$ denote the approval probability if the agent proposes the project $(u, v)$. + +**Theorem 5.1 (Single-project environment).** Assume $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. Let + +$$R^s = \max_{v \in [\underline{u}, 1]} \min((1-\underline{u})v, 1-v) = \min\left(\frac{1-\underline{u}}{2-\underline{u}}, 1-\frac{v}{u}\right).$$ + +1. The WCR under any mechanism is at least $R^s$. + +2. The mechanism $\alpha^s$ is given by: + +$$\alpha^s(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^s \text{ or } u = 0, \\ \frac{u}{u}, & \text{if } v < 1 - R^s \text{ and } u > 0. \end{cases}$$ + +It implements a choice function that has the WCR of $R^s$ and is admissible. +---PAGE_BREAK--- + +3. 
If a mechanism $\alpha$ implements a choice function that has the WCR of $R^s$, then $\alpha(u, v) \le \alpha^s(u, v)$ for every $(u, v) \in D$. + +The mechanism $\alpha^s$ consists of an *automatic-approval* region and a *chance* region. If the proposed project is sufficiently good for the principal (i.e., $v \ge 1 - R^s$), then it is automatically approved. If the project is mediocre for the principal (i.e., $v < 1 - R^s$), then the approval probability equals $\underline{u}/u$, so the agent expects a payoff $\underline{u}$ from proposing a mediocre project. + +The agent will propose a project in the automatic-approval region if he has at least one such project. If all his projects are in the chance region, he will propose a project that gives the principal the highest payoff. The principal still suffers regret from two sources. First, if the agent has multiple projects that will be automatically approved, he will propose what he favors instead of what the principal favors. Second, if the agent has only projects in the chance region, his proposal is rejected with positive probability. The threshold for the automatic-approval region, $1 - R^s$, is chosen to keep the regret from both sources under control. + +The approval probability $\alpha^s(u, v)$ increases in $v$ (the principal's payoff) and decreases in $u$ (the agent's payoff). This monotonicity in $v$ and $u$ is natural. In particular, the principal is less likely to approve projects that give the agent high payoffs in order to deter the agent from hiding projects that give the principal high payoffs. It is interesting to compare our optimal mechanism $\alpha^s$ in the single-project environment to that in Armstrong and Vickers (2010). They characterize the optimal deterministic mechanism in a Bayesian setting. Under the assumptions that (i) projects are i.i.d. 
and (ii) the number of available projects is independent of their characteristics, they show that the optimal deterministic mechanism $\alpha(u, v)$ increases in $v$: a project $(u, v)$ is approved if and only if $v \ge r(u)$ for some function $r(u)$. They also characterize the optimal $r(u)$ explicitly. Their argument can be generalized to show that the optimal randomized mechanism $\alpha(u, v)$ also increases in $v$,

---PAGE_BREAK---

but it is not clear how to solve for the optimal $\alpha(u, v)$. It is an open problem under which assumptions on the prior belief the optimal randomized mechanism $\alpha(u, v)$ in the Bayesian setting decreases in $u$.

The typical situation under the worst-case regret approach to uncertainty is that multiple mechanisms can achieve the minimal WCR. Assertion 3 in Theorem 5.1 says that the mechanism $\alpha^s$ is uniformly more generous in approving the agent's proposal than any other mechanism that can have the WCR of $R^s$. This assertion has two implications. First, among all mechanisms that can have the WCR of $R^s$, the mechanism $\alpha^s$ is the agent's most preferred one. Second, compared to any mechanism that can have the WCR of $R^s$, the mechanism $\alpha^s$ gives the principal a higher payoff (or equivalently, a lower regret) for every singleton $A$ and a strictly higher payoff for some singleton $A$.

## 5.3 Optimal mechanism in the multiproject environment

We now present the optimal mechanism in the multiproject environment. Let $\alpha : [\underline{u}, 1] \times [\underline{v}, 1] \rightarrow [0, 1]$ be a function and consider the following *project-wide maximal-payoff mechanism* (PMP mechanism) induced by the function $\alpha$:

1. If the proposal $P$ includes only one project $(u, v)$, it is approved with probability $\alpha(u, v)$.

2.
If the proposal $P$ includes multiple projects, the mechanism randomizes over the proposed projects and no project to maximize the principal's expected payoff, while promising the agent an expected payoff of $\max_{(u,v)\in P} \alpha(u, v)u$. This is the maximal expected payoff the agent could get from proposing each project alone. + +By the definition of a PMP mechanism, the more projects the agent proposes, the weakly higher his expected payoff will be. The agent is therefore willing to propose his type truthfully. In other words, PMP mechanisms are IC. Note that for a mechanism to be IC, the +---PAGE_BREAK--- + +agent's payoff from a multiproject proposal must be at least his payoff from proposing each project alone. A PMP mechanism has the feature that the agent is promised exactly the maximal payoff from proposing each project alone, but not more. + +Our next theorem shows that there exists an optimal PMP mechanism. + +**Theorem 5.2** (Multiproject environment). Assume $\mathcal{E} = 2^D$. For every $u \in [\underline{u}, 1]$ and $p \in [0, 1]$, let $\gamma(p, u)$ be + +$$ \gamma(p, u) = \min\{q \in [0, 1] : qu + (1-q)\underline{u} \ge pu\}. \quad (1) $$ + +Let + +$$ R^m = \max_{(u,v) \in D} \min_{p \in [0,1]} \max(v(1-p), (1-v)\gamma(p,u)). \quad (2) $$ + +1. The WCR under any mechanism is at least $R^m$. + +2. Let $\rho^m$ be the PMP mechanism induced by + +$$ \alpha^m(u, v) = \max\{p \in [0, 1] : (1-v)\gamma(p, u) \le R^m\}. \quad (3) $$ + +It has the WCR of $R^m$ and is admissible. + +3. If $\rho$ is an IC, admissible mechanism which has the WCR of $R^m$, then $U(\rho, A) \le U(\rho^m, A)$ for every type $A$. + +The explicit expressions for $R^m$ and $\alpha^m(u, v)$ are presented at the end of this subsection. + +It follows from (1) and (3) that $\alpha^m(u, v) = 1$ if $v \ge 1 - R^m$ and $\alpha^m(u, v) < 1$ otherwise. 
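To make definitions (1) and (3) concrete, here is a brute-force numeric sketch (not the paper's code) that recovers $\gamma(p, u)$ by scanning over $q$ and $\alpha^m(u, v)$ by scanning over $p$; the regret level `R` below is an arbitrary illustrative stand-in for $R^m$:

```python
def gamma(p, u, u_lo, n=1000):
    # Definition (1): the smallest q in [0, 1] with q*u + (1-q)*u_lo >= p*u,
    # found on a grid over q (u_lo plays the role of underline{u}).
    for i in range(n + 1):
        q = i / n
        if q * u + (1 - q) * u_lo >= p * u - 1e-12:
            return q
    return 1.0

def alpha_m(u, v, u_lo, R, n=1000):
    # Definition (3): the largest p in [0, 1] with (1-v)*gamma(p, u) <= R.
    best = 0.0
    for i in range(n + 1):
        p = i / n
        if (1 - v) * gamma(p, u, u_lo) <= R + 1e-12:
            best = p
    return best

u_lo, R = 0.5, 0.17   # illustrative values only

# gamma agrees with the formula (p*u - u_lo)/(u - u_lo) once p*u > u_lo:
assert abs(gamma(0.8, 0.7, u_lo) - (0.8 * 0.7 - u_lo) / (0.7 - u_lo)) < 1e-3
# Automatic approval exactly when v >= 1 - R (here 1 - R = 0.83):
assert alpha_m(0.7, 0.9, u_lo, R) == 1.0   # automatic-approval region
assert alpha_m(0.7, 0.5, u_lo, R) < 1.0    # chance region
```

The scans are only a literal reading of (1) and (3); explicit closed-form expressions for $R^m$ and $\alpha^m$ are given at the end of this subsection.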
As in the single-project environment, when the agent proposes only one project, the project is approved for sure if its payoff to the principal is sufficiently high and approved with some probability otherwise. For this reason, we still call $v \ge 1 - R^m$ and $v < 1 - R^m$ the automatic-approval and the chance regions, respectively. Figure 2 depicts these two regions.

---PAGE_BREAK---

When the agent proposes more than one project, the principal promises the agent an expected payoff of $\max_{(u,v) \in P} \alpha^m(u, v)u$. In both panels of figure 2, each dotted curve connects all the projects that induce the same value of $\alpha^m(u, v)u$, so it can be interpreted as an “indifference curve” for the agent. For a project in the automatic-approval region, the principal is willing to compensate the agent his full payoff. In contrast, for a project in the chance region, the principal is willing to compensate the agent only a discounted payoff. The lower the project’s payoff to the principal, the more severe the discounting. Hence, indifference curves are vertical in the automatic-approval region and tilt counterclockwise as the principal’s payoff $v$ further decreases. The agent’s expected payoff is determined by the project (among those proposed) that is on the highest indifference curve.

Figure 2: Reaching a compromise when the agent's favorite project is in the chance region, $\underline{u} = \underline{v} = 0$

Under the optimal mechanism $\rho^m$, if the agent's favorite project is in the automatic-approval region, then this project will be chosen for sure. In this case, there is no benefit to either party from proposing other available projects. The left panel of figure 2 gives such an example: ★ and ▲ denote the available projects, and ▲ will be chosen for sure. In contrast, if the agent's favorite project is in the chance region, the benefit to the principal from the

---PAGE_BREAK---

agent's proposing multiple projects can be significant.
The right panel of figure 2 illustrates such an example. Instead of rejecting ▲ with positive probability, the mechanism randomizes between ▲ and ★ while promising the agent the same payoff he would get from proposing ▲ alone. In such cases, the optimal mechanism imposes a compromise between the two parties: sometimes the choice favors the agent, and at other times it favors the principal.

Lastly, the explicit expressions for $R^m$ and $\alpha^m$ are given by:

$$R^m = \begin{cases} \frac{(1-\underline{u})(2-\underline{u}-2\sqrt{1-\underline{u}})}{\underline{u}^2}, & \text{if } \underline{v} < \frac{1-\sqrt{1-\underline{u}}}{\underline{u}}, \\ \frac{(1-\underline{u})(1-\underline{v})\underline{v}}{1-\underline{u}\underline{v}}, & \text{otherwise,} \end{cases}$$

and

$$\alpha^m(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^m \text{ or } u = 0, \\ \left(1 - \frac{R^m}{1-v}\right) \frac{\underline{u}}{u} + \frac{R^m}{1-v}, & \text{if } v < 1 - R^m \text{ and } u > 0. \end{cases}$$

## 5.4 Comparing the WCR under two environments

Figure 3 compares the WCR under the single-project and the multiproject environments. The left panel depicts the WCR as a function of $\underline{u}$ for a fixed $\underline{v}$. The right panel depicts the WCR as a function of $\underline{v}$ for a fixed $\underline{u}$. Roughly speaking, the principal's gain from having the multiproject environment as compared to the single-project environment, measured by $R^s - R^m$, is larger when $\underline{u}$ or $\underline{v}$ is smaller (i.e., when the principal faces more uncertainty or when players can potentially have strong preferences over projects).

---PAGE_BREAK---

Figure 3: WCR: single-project (dashed curve) vs. multiproject (solid curve)

# 6 Discussion

## 6.1 Intermediate verification capacity

We have focused on the single-project and the multiproject environments, which are natural first steps for us to study.
Nonetheless, there are intermediate environments in which the principal can verify up to $k$ projects for some fixed $k \ge 2$, so $\mathcal{E} = \{P \subseteq D : |P| \le k\}$. We call this the $k$-project environment.

**Proposition 6.1** (Two are enough). For any $k \ge 2$, the PMP mechanism induced by $\alpha^m(u, v)$ is optimal in the $k$-project environment. The WCR under this mechanism is $R^m$.

*Proof*. Let $A$ be the set of available projects. Let $(u_p, v_p) \in \arg\max\{v : (u, v) \in A\}$ and $(u_a, v_a) \in \arg\max\{\alpha^m(u, v)u : (u, v) \in A\}$. Let $P = \{(u_p, v_p), (u_a, v_a)\}$. Then under the PMP mechanism induced by $\alpha^m(u, v)$, the agent is willing to propose $P$ since this proposal gives him $\alpha^m(u_a, v_a)u_a$, the maximal payoff he can get under the mechanism. The principal's payoff given the proposal $P$ equals his payoff if the set of available projects were actually $P$. By Theorem 5.2, this payoff is at least $v_p - R^m$, so the principal's regret is at most $R^m$. $\square$

Proposition 6.1 shows that having the full benefit of compromise does not require infinite

---PAGE_BREAK---

or high verification capacity. A capacity of only two projects is sufficient. Furthermore, even if the principal can verify up to ten projects, it suffices to let the agent propose up to two, which provides a parsimonious way to get the full benefit of compromise.

## 6.2 Cheap-talk communication does not help for any $\mathcal{E}$

We could have started from a more general definition of a mechanism that chooses a project based on both the proposal $P$ and a cheap-talk message $m$ from the agent, as in Bull and Watson (2007) and Ben-Porath, Dekel and Lipman (2019). However, in our model cheap talk does not benefit the principal. This is because the principal can choose a project only from the proposed set $P$ and he knows the payoffs that each project in $P$ gives to both parties.
Hence, no information asymmetry remains after the agent proposes $P$, and so there is no benefit to cheap talk.

More specifically, for any proposal $P$ and any cheap-talk messages $m_1, m_2$, we argue that it is without loss for the principal to choose the same subprobability measure over $P$ after $(P, m_1)$ and after $(P, m_2)$. Suppose otherwise that the principal chooses a subprobability measure $\pi_1$ after $(P, m_1)$ and chooses $\pi_2$ after $(P, m_2)$. If the agent strictly prefers $\pi_1$ to $\pi_2$, then he can profitably deviate to $(P, m_1)$ whenever he is supposed to send $(P, m_2)$. Hence, $(P, m_2)$ never occurs on the equilibrium path. If the agent is indifferent between $\pi_1$ and $\pi_2$, then the principal can pick his preferred measure between $\pi_1$ and $\pi_2$ after both $(P, m_1)$ and $(P, m_2)$, without affecting the agent's incentives. This argument does not depend on the exogenous restriction $\mathcal{E}$ on the agent's proposal $P$, so cheap-talk communication does not help for any $\mathcal{E}$.

---PAGE_BREAK---

## 6.3 The commitment assumption

Commitment is crucial for the principal to have some “bargaining power” in the project choice problem. If the principal has no commitment power, sequential rationality requires that he choose his favorite project among the proposed one(s). The agent then has all the bargaining power: he will propose only his favorite project, which will be chosen for sure.

In the multiproject environment, the full-commitment solution involves two types of ex post suboptimality. First, no project is chosen even though the agent has proposed some. Second, a worse project for the principal is chosen even though a better project for him is also proposed. Some applications may fall between the full-commitment and the no-commitment settings: the principal can commit to choosing no project but cannot commit to choosing a worse project when a better project is also proposed.
In such a partial-commitment setting, a multiproject proposal is effectively a single-project proposal consisting of the principal’s favorite project among those proposed. The optimal mechanism in this partial-commitment setting is then the same as that in the single-project environment characterized in Theorem 5.1.

# 7 Proofs

## 7.1 Proof of Theorem 5.1

**Claim 7.1.** *The WCR from any mechanism is at least $R^s$.*

*Proof.* Let $v \in [\underline{v}, 1]$. If $\alpha(1, v) > \underline{u}$, then, if the agent has two projects $(1, v)$ and $(\underline{u}, 1)$, the agent will propose $(1, v)$ and the regret will be $1 - \alpha(1, v)v \ge 1 - v$. If $\alpha(1, v) \le \underline{u}$, then, if the agent has only the project $(1, v)$, the regret is $v - \alpha(1, v)v \ge v(1 - \underline{u})$. Therefore, WCR $\ge \min((1 - \underline{u})v, 1 - v)$ for every $v \in [\underline{v}, 1]$. $\square$

**Claim 7.2.** *The WCR from $\alpha^s$ is $R^s$.*

---PAGE_BREAK---

*Proof.* We call a project $(u, v)$ good if $v \ge 1 - R^s$ and mediocre if $v < 1 - R^s$. From the definition of $R^s$ it follows that $(1 - \underline{u})v \le R^s$ for every mediocre project.

According to $\alpha^s$, if the agent proposes a mediocre project, then his expected payoff is $\underline{u}$; if the agent proposes a good project $(u, v)$, then his expected payoff is $u \ge \underline{u}$. Therefore, if the agent has some good project, he will propose a good project $(u, v)$ and the regret is at most $1 - v \le R^s$. If all projects are mediocre, then the agent will propose the project $(u, v)$ with the highest $v$, so the regret is at most $(1 - \alpha^s(u, v))v = (1 - \underline{u}/u)v \le (1 - \underline{u})v \le R^s$. $\square$

**Claim 7.3.** If $\alpha$ has the WCR of $R^s$, then $\alpha(u,v) \le \alpha^s(u,v)$ for every $(u,v) \in D$. Hence, $\alpha^s$ is admissible.

*Proof.* Fix a project $(u,v)$.
If $v \ge 1 - R^s$ or $u=0$, then $\alpha^s(u,v)=1$ and therefore $\alpha(u,v) \le \alpha^s(u,v)$. If $v < 1 - R^s$ and $u > 0$, then since the WCR under $\alpha$ is $R^s$, it must be the case that if $A = \{(u,v), (\underline{u},1)\}$, then the agent proposes the project $(\underline{u},1)$. Otherwise, the regret is at least $1 - v > R^s$. Therefore $\alpha(u,v)u \le \alpha(\underline{u},1)\underline{u} \le \underline{u}$, which implies $\alpha(u,v) \le \underline{u}/u = \alpha^s(u,v)$, as desired.

Finally, if $\alpha$ has the WCR of $R^s$ and $\alpha \ne \alpha^s$, then there exists $(u,v) \in D$ such that $\alpha(u,v) < \alpha^s(u,v)$. The regret is strictly higher under $\alpha$ than under $\alpha^s$ if $A = \{(u,v)\}$, so $\alpha^s$ is admissible. $\square$

---PAGE_BREAK---

## 7.2 Proof of Theorem 5.2

Let $a^* = (\underline{u}, 1)$. Let $\bar{U}(P)$ be the optimal value of the following linear program with variables $\pi(u, v)$ for every $(u, v) \in P$:

$$ \bar{U}(P) = \max_{\pi}\ \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \quad (4a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (4b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (4c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)(1-v) \le R^m. \quad (4d) $$

The following claim explains the role of $\bar{U}(P)$ in our argument: $\bar{U}(P)$ is the maximal payoff that the principal can give the agent for the proposal $P$ such that the principal can give the agent this same payoff if the agent proposed $P \cup \{a^*\}$, while still keeping regret below $R^m$.

**Claim 7.4.** If $\rho$ is an IC mechanism which has the WCR of at most $R^m$, then $U(\rho, P) \le \bar{U}(P)$ for every proposal $P$.

*Proof.* Let $\tilde{P} = P \cup \{a^*\}$. Let $\pi = \rho(\cdot|\tilde{P})$. Since the regret under the mechanism $\rho$ when the set of available projects is $\tilde{P}$ is at most $R^m$, it follows that $\sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m$.
Therefore the restriction of $\pi$ to the set $P$ is a feasible point in problem (4). Moreover,

$$ U(\rho, \tilde{P}) = \pi(a^*)\underline{u} + \sum_{(u,v) \in P} \pi(u,v)u \le \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}), \quad (5) $$

where the inequality follows from $\pi(a^*) + \sum_{(u,v) \in P} \pi(u,v) \le 1$. The right-hand side of (5) is the objective function of (4) at $\pi$. Therefore, $U(\rho, \tilde{P}) \le \bar{U}(P)$. Finally, since the mechanism $\rho$ is IC, it follows that $U(\rho, P) \le U(\rho, \tilde{P})$. Therefore, $U(\rho, P) \le \bar{U}(P)$, as desired. $\square$

When $P$ is a singleton $\{(u,v)\}$, we also denote $\bar{U}(\{(u,v)\})$ by $\bar{U}(u,v)$. The following

---PAGE_BREAK---

claim, which follows immediately from (1) and (3), explains the role of the function $\alpha^m(u, v)$ in our argument.

**Claim 7.5.** When $P$ is a singleton $\{(u, v)\}$, $\overline{U}(u, v) = \alpha^m(u, v)u$.

For a proposal $P$, let $\underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u, v)u$. The following claim explains the role of $\underline{U}(P)$ in our argument.

**Claim 7.6.** If $\rho$ is an IC mechanism that accepts the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$, then $U(\rho, P) \ge \underline{U}(P)$.

*Proof.* Since $\rho$ is IC, we have that $U(\rho, P) \ge U(\rho, \{(u, v)\}) = \alpha^m(u, v)u$ for every $(u, v) \in P$. $\square$

Claim 7.4 bounds from above the agent's expected payoff in an IC mechanism which has the WCR of at most $R^m$. Claim 7.6 bounds from below the agent's expected payoff in an IC mechanism which approves the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$. The following claim shows that the definition of $R^m$ is such that both bounds can be satisfied.

**Claim 7.7.** $\underline{U}(P) \le \overline{U}(P)$ for every $P$.

*Proof.* The function $\overline{U}(P)$ defined in (4) is increasing in $P$.
Therefore, from Claim 7.5 we have

$$ \alpha^m(u, v)u = \overline{U}(u, v) \le \overline{U}(P), \quad \forall (u, v) \in P. $$

It follows that

$$ \underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u,v)u \leq \overline{U}(P). \quad \square $$

---PAGE_BREAK---

By definition, the mechanism $\rho^m$ solves the following linear program:

$$ \rho^m(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (6a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (6b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (6c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u = \underline{U}(P). \quad (6d) $$

It is possible that (6) has multiple optimal solutions. Since all the optimal solutions are payoff-equivalent for both the principal and the agent, we do not distinguish among them. From now on, the notation $\rho(\cdot|P) \neq \rho^m(\cdot|P)$ means that $\rho(\cdot|P)$ is not among the optimal solutions to (6).

The following lemma is the core of the argument. It gives an equivalent characterization of the mechanism $\rho^m$.

**Lemma 7.8.** The optimal solutions to (6) and those to the following problem coincide. Hence, $\rho^m(\cdot|P)$ is also given by a solution to the following problem:

$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (7a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (7b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (7c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u \ge \underline{U}(P), \quad (7d) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u \le \overline{U}(P). \quad (7e) $$

*Proof of Lemma 7.8.* We discuss two cases separately.

---PAGE_BREAK---

Case 1. Assume that there exists some $(u, v) \in P$ such that $v \ge 1 - R^m$. Consider the following linear program, which is a relaxation of both problems (6) and (7):

$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (8a) $$

s.t.
+ +$$ \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (8b) $$ + +$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (8c) $$ + +$$ \sum_{(u,v) \in P} \pi(u,v) u \ge \underline{U}(P). \quad (8d) $$ + +We claim that the constraint (8d) holds with equality at every optimal solution. Indeed, if (8d) is not binding then an optimal solution to (8) is also an optimal solution to the following linear programming: + +$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (9a) $$ + +s.t. + +$$ \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (9b) $$ + +$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (9c) $$ + +which is derived from (8) by removing (8d). Let $v_p = \max_{(u,v) \in P} v$ and $u_p = \max_{(u,v_p) \in P} u$. By the definition of $\alpha^m$ in (3), $\alpha^m(u_p, v_p) = 1$ given that $v_p \ge 1 - R^m$. Every optimal solution $\pi^*$ to problem (9) satisfies $\text{support}(\pi^*) \subseteq \text{argmax}_{(u,v) \in P} v$, which implies that + +$$ \sum_{(u,v) \in P} \pi^*(u,v)u \le u_p = \alpha^m(u_p, v_p)u_p \le \underline{U}(P). $$ + +This implies that every optimal solution to (8) satisfies (8d) with equality, so it is a feasible point in both (6) and (7). Since problem (8) is a relaxation of both problem (6) and (7), +---PAGE_BREAK--- + +the optimal values of (6), (7), and (8) coincide. Hence, every optimal solution to (6) or (7) +is optimal in (8). This, combined with the fact that every optimal solution to (8) is optimal +in (6) and (7), implies that the optimal solutions to (6) and (7) coincide. + +Case 2. Assume now that $v < 1 - R^m$ for every $(u, v) \in P$. We claim that $\underline{U}(P) = \overline{U}(P)$ and therefore problems (6) and (7) coincide. Given that $v < 1 - R^m$ for every $(u, v) \in P$, the constraint (4c) in problem (4) must be slack since if it is satisfied with an equality then (4d) is violated. 
Therefore, in this case $\overline{U}(P)$ also satisfies

$$ \bar{U}(P) = \max_{\pi}\ \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \quad \text{s.t.} \quad \pi(u,v) \ge 0 \ \forall (u,v) \in P, \quad \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m, \quad (10) $$

which is derived from problem (4) by removing (4c). Problem (10) admits a solution $\pi^*$ with the property that, for some $(u^*, v^*) \in P$, the only non-zero element of $\pi^*$ is $\pi^*(u^*, v^*)$. Therefore, by Claim 7.5,

$$ \bar{U}(P) = \bar{U}(u^*, v^*) = \alpha^m(u^*, v^*)u^* \le \underline{U}(P). $$

Therefore, by Claim 7.7 we get $\bar{U}(P) = \underline{U}(P)$, as desired. $\square$

We now show that, when the set of available projects is a singleton, the regret under the mechanism $\rho^m$ is at most $R^m$.

**Claim 7.9.** For every singleton $A = \{(u, v)\}$, the regret under $\rho^m$ is at most $R^m$.

*Proof.* In this case, $\rho^m$ accepts with probability $\alpha^m(u, v)$, so the regret is $v(1 - \alpha^m(u, v))$. By the definition of $R^m$, there exists some $\bar{p} \in [0, 1]$ such that $\max(v(1-\bar{p}), (1-v)\gamma(\bar{p}, u)) \le R^m$.

---PAGE_BREAK---

By (3), $\bar{p} \le \alpha^m(u, v)$. Therefore, it also follows that $v(1 - \alpha^m(u, v)) \le v(1 - \bar{p}) \le R^m$. $\square$

**Claim 7.10.** The optimal value in problem (7) is at least $\max_{(u,v)\in P} v - R^m$.

*Proof.* Since the constraints (7d) and (7e) cannot both be binding, it is sufficient to prove that the optimal value in the two problems derived from (7) by removing either (7d) or (7e) is at least $v_p - R^m$, where $v_p = \max_{(u,v)\in P} v$. Let $(u_p, v_p) \in P$ denote a principal's favorite project.

If we remove (7d), let $\pi$ be given by $\pi(u_p, v_p) = \alpha^m(u_p, v_p)$ and $\pi(u, v) = 0$ when $(u, v) \ne (u_p, v_p)$. Then $\sum_{(u,v)\in P} \pi(u,v)u = \alpha^m(u_p, v_p)u_p \le \underline{U}(P) \le \overline{U}(P)$, so (7e) is satisfied.
Also $v_p(1 - \alpha^m(u_p, v_p)) \le R^m$ by Claim 7.9, which implies that the value of the objective function in (7) at $\pi$ is at least $v_p - R^m$, as desired.

If we remove (7e), let $\pi$ be the optimal solution to (4) and let $\pi'$ be the probability distribution over $P$ such that $\pi'(u, v) = \pi(u, v)$ when $(u, v) \ne (u_p, v_p)$ and $\pi'(u_p, v_p) = 1 - \sum_{(u,v)\in P\setminus\{(u_p,v_p)\}} \pi(u,v)$, so $\pi'$ is derived from $\pi$ by allocating the probability of choosing no project to $(u_p, v_p)$. Then

$$ \sum_{(u,v) \in P} \pi'(u,v)u = u_p + \sum_{(u,v) \in P} \pi(u,v)(u-u_p) \ge \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) = \overline{U}(P) \ge \underline{U}(P), $$

where the last equality follows from the fact that $\pi$ is optimal in (4). Therefore, $\pi'$ satisfies (7d). Also,

$$ \sum_{(u,v) \in P} \pi'(u,v)(v_p - v) = \sum_{(u,v) \in P} \pi(u,v)(v_p - v) \le \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m, $$

where the last inequality follows from (4d), as desired. $\square$

*Proof of Theorem 5.2.*

1. Fix $(u, v) \in D$ and let $P = \{(u, v)\}$ and $\tilde{P} = \{(u, v), (\underline{u}, 1)\}$.

Let $p$ be the probability that $\rho$ accepts $(u, v)$ when the proposal is $P$. So, $RGRT(P, \rho) =$

---PAGE_BREAK---

$(1-p)v$. Since the mechanism is IC, the agent's expected payoff under $\tilde{P}$ must be at least $pu$. By definition of $\gamma(p,u)$, this implies that when the proposal is $\tilde{P}$ the mechanism accepts $(u,v)$ with probability at least $\gamma(p,u)$. So, $RGRT(\tilde{P}, \rho) \ge (1-v)\gamma(p,u)$. Therefore $WCR(\rho) \ge \max((1-p)v, (1-v)\gamma(p,u))$.

2. The mechanism $\rho^m$ is IC, and it solves problem (7) by Lemma 7.8. By Claim 7.10, the optimal value in problem (7) is at least $\max_{(u,v) \in P} v - R^m$. Since the objective function in (7) is the principal's payoff under $\pi$, the principal's regret is at most $R^m$.

We next argue that $\rho^m$ is admissible.
Let $\rho$ be an IC mechanism which has the WCR of $R^m$ and let $\alpha(u, v)$ be the probability that $\rho$ accepts a singleton proposal $\{(u, v)\}$. Then $\rho^m$ is not weakly dominated by $\rho$, based on the following two claims:

(a) If the agent's type $A$ is a singleton $\{(u,v)\}$, then $\alpha(u,v) \le \alpha^m(u,v)$ by Claims 7.4 and 7.5. Hence, the principal's payoff is weakly higher under $\rho^m$ than under $\rho$ for singleton $A$.

(b) Suppose that $\alpha(u,v) = \alpha^m(u,v)$ for every $(u,v)$. Fix a proposal $P$ and let $\pi = \rho(\cdot|P)$, so $U(\rho,P) = \sum_{(u,v) \in P} \pi(u,v)u$. Then, since $\rho$ is IC, it follows from Claim 7.6 that $U(\rho,P) \ge \underline{U}(P)$, and from Claim 7.4 that $U(\rho,P) \le \overline{U}(P)$. Therefore $\pi$ is a feasible point in problem (7). Since $\rho^m(\cdot|P)$ is the optimal solution to (7), the principal's payoff is weakly higher under $\rho^m$ than under $\rho$.

3. Let $\rho$ be an IC, admissible mechanism which has the WCR of $R^m$ and which differs from $\rho^m$. We want to show that $U(\rho,P) \le U(\rho^m,P)$ for every finite $P \subseteq D$. Recall that $U(\rho^m,P) = \underline{U}(P)$ for every $P$.

---PAGE_BREAK---

We first construct a new mechanism $\tilde{\rho}$ based on $\rho$ and $\rho^m$:

$$ \tilde{\rho}(\cdot|P) = \begin{cases} \rho^m(\cdot|P), & \text{if } U(\rho, P) \ge \underline{U}(P), \\ \rho(\cdot|P), & \text{if } U(\rho, P) < \underline{U}(P). \end{cases} $$

By definition, $U(\tilde{\rho}, P) = \min(U(\rho, P), U(\rho^m, P))$. The functions $U(\rho, P)$ and $U(\rho^m, P)$ are increasing in $P$ since $\rho$ and $\rho^m$ are IC. Therefore $U(\tilde{\rho}, P)$ is increasing in $P$, so $\tilde{\rho}$ is also IC. Moreover, for every $P$ either $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$ or $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$. Therefore the WCR under $\tilde{\rho}$ is also $R^m$.
We next argue that for every $P$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does.

(a) Consider a set $P$ such that $U(\rho, P) < \underline{U}(P)$. Then $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$, so $\tilde{\rho}$ gives the principal the same payoff as $\rho$ does.

(b) Consider a set $P$ such that $U(\rho, P) \ge \underline{U}(P)$. From Claim 7.4 we know that $U(\rho, P) \le \overline{U}(P)$ for every $P$. Therefore, $\rho(\cdot|P)$ is a feasible point in problem (7). It follows from Lemma 7.8 that $\rho^m$ gives the principal a weakly higher payoff than $\rho$ does. Moreover, if $\rho(\cdot|P) \ne \rho^m(\cdot|P)$, then $\rho^m$ gives the principal a strictly higher payoff than $\rho$ does.

Since $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$ for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every such $P$.

We have argued that $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every $P$. On the other hand, $\rho$ is admissible, so there cannot be a $P$ such that $\tilde{\rho}$ gives the principal a strictly higher payoff than $\rho$ does. This implies that for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\rho(\cdot|P) = \rho^m(\cdot|P)$, so $U(\rho, P)$ is equal to $\underline{U}(P)$. Hence, for every $P$, $U(\rho, P) \le \underline{U}(P)$. $\square$

---PAGE_BREAK---

---PAGE_BREAK---

References

Aghion, Philippe, and Jean Tirole. 1997. "Formal and Real Authority in Organizations." *Journal of Political Economy*, 105(1): 1–29.

Armstrong, Mark, and John Vickers. 2010. "A Model of Delegated Project Choice." *Econometrica*, 78(1): 213–244.

Ben-Porath, Elchanan, Eddie Dekel, and Barton L. Lipman. 2019. "Mechanisms with Evidence: Commitment and Robustness." *Econometrica*, 87(2): 529–566.

Bergemann, Dirk, and Karl H. Schlag. 2008. "Pricing without Priors."
*Journal of the European Economic Association*, 6(2/3): 560–569. + +Bergemann, Dirk, and Karl Schlag. 2011. "Robust monopoly pricing." *Journal of Economic Theory*, 146(6): 2527–2543. + +Beviá, Carmen, and Luis Corchón. 2019. "Contests with dominant strategies." *Economic Theory*. + +Bonatti, Alessandro, and Heikki Rantakari. 2016. "The Politics of Compromise." *American Economic Review*, 106(2): 229–59. + +Bull, Jesse, and Joel Watson. 2007. "Hard evidence and mechanism design." *Games and Economic Behavior*, 58(1): 75–93. + +Carroll, Gabriel. 2015. "Robustness and linear contracts." *American Economic Review*, 105(2): 536–63. + +Carroll, Gabriel. 2019. "Robustness in Mechanism Design and Contracting." *Annual Review of Economics*, 11(1): 139–166. +---PAGE_BREAK--- + +Chassang, Sylvain. 2013. “Calibrated incentive contracts.” *Econometrica*, 81(5): 1935–1971. + +Dekel, Eddie. 2016. “On Evidence in Games and Mechanism Design.” *Econometric Society* *Presidential Address*. + +Dye, Ronald A. 1985. “Strategic Accounting Choice and the Effects of Alternative Financial Reporting Requirements.” *Journal of Accounting Research*, 23(2): 544–574. + +Gilboa, Itzhak, and David Schmeidler. 1989. “Maxmin expected utility with non-unique prior.” *Journal of Mathematical Economics*, 18(2): 141–153. + +Glazer, Jacob, and Ariel Rubinstein. 2006. “A study in the pragmatics of persuasion: a game theoretical approach.” *Theoretical Economics*, 1: 395–410. + +Goel, Sumit, and Wade Hann-Caruthers. 2020. “Project selection with partially verifiable information.” + +Green, Jerry R., and Jean-Jacques Laffont. 1986. “Partially Verifiable Information and Mechanism Design.” *The Review of Economic Studies*, 53(3): 447–456. + +Grossman, Sanford J. 1981. “The Informational Role of Warranties and Private Disclosure about Product Quality.” *The Journal of Law and Economics*, 24(3): 461–483. + +Grossman, S. J., and O. D. Hart. 1980. 
“Disclosure Laws and Takeover Bids.” *The Journal of Finance*, 35(2): 323–334. + +Guo, Yingni, and Eran Shmaya. 2019. “Robust Monopoly Regulation.” *Working paper*. + +Hart, Sergiu, Ilan Kremer, and Motty Perry. 2017. “Evidence Games: Truth and Commitment.” *American Economic Review*, 107(3): 690–713. +---PAGE_BREAK--- + +Hurwicz, Leonid, and Leonard Shapiro. 1978. "Incentive Structures Maximizing Residual Gain under Incomplete Information." *The Bell Journal of Economics*, 9(1): 180–191. + +Kasberger, Bernhard, and Karl H. Schlag. 2020. "Robust bidding in first-price auctions: How to bid without knowing what others are doing." Available at SSRN 3044438. + +Lipman, Barton L., and Duane J. Seppi. 1995. "Robust inference in communication games with partial provability." *Journal of Economic Theory*, 66(2): 370–405. + +Lyons, Bruce R. 2003. "Could Politicians Be More Right Than Economists? A Theory of Merger Standards." *Working paper*. + +Malladi, Suraj. 2020. "Judged in Hindsight: Regulatory Incentives in Approving Innovations." Available at SSRN. + +Milgrom, Paul R. 1981. "Good News and Bad News: Representation Theorems and Applications." *The Bell Journal of Economics*, 12(2): 380–391. + +Milnor, John. 1954. "Games against nature." In *Decision Processes*, eds. R. M. Thrall, C. H. Coombs, and R. L. Davis. + +Neven, Damien J., and Lars-Hendrik Röller. 2005. "Consumer surplus vs. welfare standard in a political economy model of merger control." *International Journal of Industrial Organization*, 23(9): 829–848. Merger Control in International Markets. + +Nocke, Volker, and Michael D. Whinston. 2013. "Merger Policy with Merger Choice." *American Economic Review*, 103(2): 1006–33. + +Ottaviani, Marco, and Abraham L. Wickelgren. 2011. "Ex ante or ex post competition policy? A progress report." *International Journal of Industrial Organization*, 29(3): 356–359.
Special Issue: Selected Papers, European Association for Research in Industrial Economics 37th Annual Conference, Istanbul, Turkey, September 2-4, 2010. +---PAGE_BREAK--- + +Renou, Ludovic, and Karl H. Schlag. 2011. “Implementation in minimax regret equilibrium.” *Games and Economic Behavior*, 71(2): 527–533. + +Savage, Leonard J. 1972. *The foundations of statistics*. Courier Corporation. + +Savage, L. J. 1951. "The Theory of Statistical Decision." *Journal of the American Statistical Association*, 46(253): 55-67. + +Sher, Itai. 2014. “Persuasion and dynamic communication.” *Theoretical Economics*, 9(1): 99–136. + +Stoye, Jörg. 2011. “Axioms for minimax regret choice correspondences.” *Journal of Economic Theory*, 146(6): 2226–2251. + +Wald, Abraham. 1950. "Statistical decision functions." \ No newline at end of file diff --git a/samples/texts_merged/6470527.md b/samples/texts_merged/6470527.md new file mode 100644 index 0000000000000000000000000000000000000000..c55b8cae90f58b9c9e572f5ba5b9f7cb80af8170 --- /dev/null +++ b/samples/texts_merged/6470527.md @@ -0,0 +1,315 @@ + +---PAGE_BREAK--- + +# MECHANISM DESIGN AND MOTION PLANNING OF PARALLEL-CHAIN NONHOLONOMIC MANIPULATOR + +Li, L. + +School of Mechanical Engineering, Baoji University of Arts and Sciences, Baoji 721016, China +E-Mail: leeliang@126.com + +## Abstract + +Inspired by the nonholonomic theory, this paper proposes a parallel-chain nonholonomic manipulator with a chainable kinetics model. To build the manipulator, the friction disc motion synthesis and decomposition mechanism was taken as the joint transmission component. Based on Chow's theorem, the kinetics model of the manipulator was proved as nonholonomic and controllable. Then, the system's configuration coordinates were mapped from the joint space to the chain space via coordinate transformation, and the manipulator motion was planned in the chain space. 
Through two simulation experiments, it is proved that all joints of the proposed manipulator can move to the target configuration within the specified time. To sum up, the author successfully built an underactuated manipulator that can drive the motion of four joints with two motors. The research findings lay the basis for the development of small lightweight manipulators. + +(Received, processed and accepted by the Chinese Representative Office.) + +**Key Words:** Nonholonomic, Parallel-Chain, Chain Transformation, Motion Planning + +## 1. INTRODUCTION + +In analytical mechanics, a nonholonomic system refers to a system whose constraint equations contain the derivatives of the coordinates with respect to time. In other words, the system velocity or acceleration is under constraint. A nonholonomic mechanical system is underactuated, as it has fewer control inputs than degrees of freedom (DoFs), i.e. fewer than the number of dimensions in its configuration space. Hence, a multi-dimensional motion in the configuration space can be determined by a few control inputs, making it possible to design compact, lightweight multi-joint manipulators. The research into nonholonomic manipulators carries practical implications for the development of assistive robots like small robots, medical robots and multi-fingered dexterous hands. + +In the field of robotics, the research into nonholonomic systems mainly concentrates on the control of existing nonholonomic robots, such as wheeled mobile robots, spherical robots and underwater robots [1-3]. Owing to the motion nonlinearity of nonholonomic robots, it is necessary to develop a unique path planning method for each nonholonomic system, adding to the difficulty in the motion control of new nonholonomic robots. + +In reality, many kinematics models of existing nonholonomic robots (e.g. wheeled mobile robots and trailer systems) can be converted into the chained model, a drift-free controllable nonholonomic system model.
A system whose kinematics equations can be described with a chained model is called a chained system. Such a system boasts excellent properties (it is nilpotent and smooth) and a simply structured mathematical model. In view of these advantages, many scholars have created nonholonomic robots with chainable kinematics models. For example, Nakamura proposed an underactuated manipulator based on a friction ball vector synthesis and decomposition mechanism [4]. The manipulator supports path planning via the control method of a chained system, as its kinematics model can be converted into a chained model. Under the diffeomorphism of chained transformation, paper [5] designs the gear steering connection mechanism for a nonpowered trailer, and constructs a chainable wheeled mobile trailer system that can accurately track the target trajectory. Yamaguchi developed a 4-DoF +---PAGE_BREAK--- + +wheeled mobile robot capable of chained transformation [6-8]; the wheeled mobile mechanism is controlled precisely with the drive angle and azimuth of the traction robot and the angle of the active steering system mounted on the connecting rod. + +Based on the previous research into a parallel-chain type chainable nonholonomic manipulator [9-11], this paper puts forward a two-motor parallel-chain four-joint nonholonomic manipulator. In the parallel-chain manipulator, the friction disc motion synthesis and decomposition mechanism serves as the joint transmission component, and the motion is transferred by dual universal joints in parallel-chain mode. Compared to the earlier parallel-chain manipulator, the proposed manipulator, with a concise structure and a small power loss, offers an effective solution to the conflict between the number of drive units and the manipulator mass in multi-joint manipulators.
+ +The remainder of this paper is organized as follows: Section 2 introduces the design of the parallel-chain nonholonomic manipulator; Section 3 establishes the kinematics model of the manipulator, demonstrates the manipulator controllability, and analyses the chain transformation features; Section 4 plans a path in the chain space, based on the control law of time polynomial motion planning, that maps back to the joint space; Section 5 concludes that the proposed manipulator can move from the initial configuration to the target configuration within the specified time under the control law of the chained system, and outperforms the earlier parallel-chain manipulator in trajectory simplicity and motion efficiency. + +# 2. PARALLEL-CHAIN NONHOLONOMIC MANIPULATOR MECHANISM + +### 2.1 Motion principle of friction disc + +As shown in Fig. 1, when the friction wheel with the radius $r$ rotates around axis $I$ at the angular velocity $W_i$, there is only pure rolling between the friction wheel and the friction disc; then, the friction disc will rotate around axis $O$ at the angular velocity $W_o$. The friction wheel and the friction disc are perpendicular to each other. Let $M$ be the contact point between the friction wheel and the friction disc. The friction wheel can also rotate relative to the friction disc around the connecting line between its own axis and point $M$. When the rotation angle reaches $\alpha$, the linear velocities of the friction wheel and the friction disc can be plotted into a vector diagram (Fig. 1 b). + +Figure 1: Friction disc motion synthesis and decomposition mechanism. + +Then, the following equation holds: $V_o = W_o R = V_i \cos \alpha = W_i r \cos \alpha$. +---PAGE_BREAK--- + +Thus, we have: + +$$W_o = \frac{r}{R} W_i \cos \alpha \quad (1)$$ + +where $R$ is the distance between point $M$ and the centre of the friction disc; $V_i$ and $V_o$ are the linear velocities of the friction wheel and the friction disc at point $M$, respectively.
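The transmission relation in Eq. (1) is simple enough to check numerically. A minimal sketch follows; the dimensions `r` and `R` are illustrative values, not ones reported in the paper:

```python
import math

def disc_speed(w_in, alpha, r=0.02, R=0.05):
    # Eq. (1): angular velocity of the friction disc, given the friction-wheel
    # angular velocity w_in and the transmission angle alpha (radians).
    # r (wheel radius) and R (contact-point offset) are illustrative values.
    return (r / R) * w_in * math.cos(alpha)
```

At $\alpha = 0$ the ratio is simply $r/R$, and as $\alpha$ approaches $90^{\circ}$ the output stops, which is what makes the joint angle usable as a continuously variable transmission ratio.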
+ +It can be seen that the transmission ratio between the friction wheel and the friction disc can be controlled by adjusting the angle $\alpha$. Hence, $\alpha$ was defined as the transmission angle. + +The rolling-induced relative motion of the friction wheel on the friction disc depends on the relative change of configuration. Based on the relative configuration-variable structure, the designed friction disc motion synthesis and decomposition mechanism is subjected to a nonholonomic constraint [12-15]. + +## 2.2 Design of parallel-chain nonholonomic manipulator + +A friction disc mechanism was arranged at each joint of the manipulator. In the mechanism, the friction wheel and the friction disc are permanently connected to the front and rear joints, respectively. The transmission ratio between the two components changes with the included angle between them (i.e. the joint angle). Fig. 2 illustrates the structure of the parallel-chain four-joint manipulator. + +Figure 2: Mechanism of parallel-chain four-joint manipulator. + +The rotation of motor 2 directly drives joint 1 to rotate about the axis by the angle $\theta_1$. Since friction wheel 1 is fixed to the frame through the side plate and friction disc 1 is fixed to the first joint, motor 2 controls the rotation angle $\theta_1$ of joint 1 as if a transmission angle $\theta_1$ were added to the friction transmission between the friction wheel and the friction disc. + +Motor 1 transmits its energy in two directions. In one direction, the motor drives the friction wheel through gears, the friction wheel drives the friction disc via rolling friction, and the friction disc drives joint 2 to rotate by the angle $\theta_2$ through the synchronous belt; meanwhile, the motor adds a transmission angle $\theta_2$ between the friction wheel and the friction disc at joint 2.
In the other direction, motor 1 transmits its energy to the nearest rear joint via the dual universal joint, so that each rear joint can transmit energy to its next rear joint in turn. + +In this way, the four joints can be driven by two motors. The prototype of the parallel-chain four-joint manipulator is presented in Fig. 3. +---PAGE_BREAK--- + +Figure 3: Prototype of parallel-chain four-joint manipulator. + +The following issues call for special attention in the production and assembly of the prototype: + +(1) To ensure effective, reliable and accurate transmission of motion and force, there should be sufficient friction between the friction wheel and the friction disc. Hence, the material should have a large friction coefficient. Besides, a certain amount of positive pressure should be applied to point M, such that there is no relative sliding but pure rolling between the friction wheel and the friction disc. + +(2) As shown in Fig. 4 a, point M should be placed on the axis of the joint. Otherwise, the friction wheel will slide on the friction disc when the joint rotates to a certain angle. The resulting change in the distance R between point M and the centre of the friction disc will reduce the transmission accuracy. + +(3) The input shaft and the output shaft of the dual universal joint should have the same rotational angular velocity. In other words, the centreline OO of the dual universal joint must be consistent with the joint axis. Moreover, the intermediate shaft should be retractable, so as to compensate for the change in the axial distance between the input and output shafts caused by the rotation of manipulator joints (Fig. 4 b). + +(4) To keep the whole structure compact and lightweight, the periphery of the connecting rod should be made into a large rounded corner and the central part of the rod should be grooved, without sacrificing strength and rigidity.
In the horizontal direction, the main energy transmission chain (dual universal joint) and the motion transmission chain (friction wheel and friction disc) should be arranged at the same distance from the edge of the manipulator. The distance should approximate the spacing between the two transmission chains. In the vertical direction, the two transmission chains should be placed symmetrically about the connecting rod. All these arrangements ensure that the centre of mass of the manipulator is close to its geometric centre, thereby improving the kinetic performance of the manipulator. + +Figure 4: a) location of point M, b) structure of dual universal joint. +---PAGE_BREAK--- + +# 3. KINEMATICS ANALYSIS AND CHAIN TRANSFORMATION + +## 3.1 Kinematics modelling + +The configuration space of the four-joint nonholonomic manipulator hinges on the joint rotation angles $\theta_i$ ($i=1, 2, 3, 4$) and the angular displacement $\varphi$ of the friction wheel. Hence, the generalized coordinate vector of the manipulator system was defined as $q = [q_1, q_2, q_3, q_4, q_5] = [\varphi, \theta_1, \theta_2, \theta_3, \theta_4]$, and the control inputs as the angular velocities of the two motors $u_1$ and $u_2$. According to the kinematics relationship, the kinematics model of the parallel-chain four-joint manipulator can be derived as: + +$$
\begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \\ \dot{q}_4 \\ \dot{q}_5 \end{bmatrix} =
\begin{bmatrix} \dot{\varphi} \\ \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \\ \dot{\theta}_4 \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \frac{r}{R}\cos\theta_1 & 0 \\ \frac{r}{R}\cos\theta_2 & 0 \\ \frac{r}{R}\cos\theta_3 & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = [p_1(q) \enspace p_2(q)]
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}
\quad (2)
$$ + +where $r$ is the radius of the friction wheel. + +## 3.2 Controllability analysis + +Eq. (2) describes a drift-free control system.
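As a sanity check, the model in Eq. (2) can be rolled out with a simple Euler integration. The sketch below is illustrative only: the ratio $k = r/R$ is an assumed value, not one reported in the paper.

```python
import math

def qdot(q, u1, u2, k=0.58):
    # Right-hand side of Eq. (2): q = [phi, th1, th2, th3, th4];
    # u1, u2 are the two motor angular velocities, k = r/R (assumed value).
    phi, th1, th2, th3, th4 = q
    return [u1,
            u2,
            k * math.cos(th1) * u1,
            k * math.cos(th2) * u1,
            k * math.cos(th3) * u1]

def rollout(q, u1, u2, dt=1e-3, steps=1000):
    # Forward-Euler integration of the drift-free system under constant inputs.
    for _ in range(steps):
        q = [qi + dt * di for qi, di in zip(q, qdot(q, u1, u2))]
    return q
```

With $u_1 = 0$ only joint 1 moves ($\dot{\theta}_1 = u_2$), while with $u_2 = 0$ the single input $u_1$ drives $\varphi$ and joints 2-4 simultaneously; this coupling is exactly the underactuation that the controllability analysis exploits.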
For such a drift-free symmetric affine system, the reachable space is expanded from the distribution $\Delta(q) = \text{span}\{p_1, p_2\}$. + +According to the controllability conditions of nonholonomic systems (Chow's theorem) [16], a drift-free affine system is controllable if its reachable distribution $\Delta_p(q) = \text{span}\{p_1, p_2, [p_1, p_2], [p_1, [p_1, p_2]], ...\}$ has full rank. Note that $[p_1, p_2]$ and $[p_1, [p_1, p_2]]$ are the Lie bracket operations on vectors $p_1, p_2$ and $p_1, [p_1, p_2]$, respectively. Then, we have $[p_1, p_2] = \frac{\partial p_2(q)}{\partial q} p_1(q) - \frac{\partial p_1(q)}{\partial q} p_2(q)$. + +Thus, the reachable space of the parallel-chain nonholonomic four-joint manipulator can be expressed as: + +$$
\Delta_p (q) = \operatorname{span} \{ p_1, p_2, [p_1, p_2], [p_1, [p_1, p_2]], [p_1, [p_1, [p_1, p_2]]] \} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
k c_1 & 0 & k s_1 & 0 & 0 \\
k c_2 & 0 & 0 & k^2 s_1 s_2 & k^3 s_1 c_1 c_2 \\
k c_3 & 0 & 0 & 0 & k^3 s_1 s_2 s_3
\end{bmatrix}
\quad (3)
$$ + +where $k = \frac{r}{R}$, $c_i = \cos \theta_i$, $s_i = \sin \theta_i$ ($i = 1, 2, 3$). + +It can be derived from Eq. (3) that $\dim \Delta_p(q) = 5$ if $\sin\theta_1 \neq 0$, $\sin\theta_2 \neq 0$ and $\sin\theta_3 \neq 0$, that is, $\sin\theta_i \neq 0$ ($i = 1, 2, 3$). In this case, the rank of the matrix equals the number of dimensions in the configuration space; in other words, the distribution spanned by the system's vector fields satisfies the controllability rank condition. Therefore, the parallel-chain four-joint nonholonomic manipulator is nonholonomic and controllable in the five-dimensional reachable space, as long as its work space satisfies $\theta_i \neq 0$ ($i = 1, 2, 3$). In this case, the motion of the five configuration variables can be controlled with two motors.
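The rank condition behind Eq. (3) can be verified numerically. The sketch below rebuilds the matrix of Eq. (3) and evaluates its determinant with a small Gaussian elimination; the value of $k$ is assumed for illustration.

```python
import math

def reachable_matrix(th1, th2, th3, k=0.5):
    # Matrix of Eq. (3): columns p1, p2, [p1,p2], [p1,[p1,p2]], [p1,[p1,[p1,p2]]].
    c1, c2, c3 = math.cos(th1), math.cos(th2), math.cos(th3)
    s1, s2, s3 = math.sin(th1), math.sin(th2), math.sin(th3)
    return [
        [1.0,    0.0, 0.0,      0.0,             0.0],
        [0.0,    1.0, 0.0,      0.0,             0.0],
        [k * c1, 0.0, k * s1,   0.0,             0.0],
        [k * c2, 0.0, 0.0,      k**2 * s1 * s2,  k**3 * s1 * c1 * c2],
        [k * c3, 0.0, 0.0,      0.0,             k**3 * s1 * s2 * s3],
    ]

def det(m):
    # Determinant via Gaussian elimination with partial pivoting.
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-14:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d
```

The determinant works out to $k^6 s_1^3 s_2^2 s_3$, which vanishes exactly when some $\sin\theta_i = 0$ ($i = 1, 2, 3$), matching the controllability condition stated above.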
+ +## 3.3 Analysis of chain transformation features + +After investigating a wheeled mobile robot system with *n* trailers, Sørdalen proposed the conditions and methods for the chain transformation of a drift-free affine system with a triangular configuration [17], similar to Eq. (2): +---PAGE_BREAK--- + +$$
\left\{
\begin{array}{ll}
\dot{q}_1 = u_1 & \\
\dot{q}_2 = u_2 & \\
\dot{q}_i = f_i(q_{i-1})u_1, & i \in \{3, \dots, n\}
\end{array}
\right.
$$ + +If the smooth function $f_i(q_{i-1})$ satisfies $\left. \frac{\partial f_i(q_{i-1})}{\partial q_{i-1}} \right|_{q=q_0} \neq 0$ ($\forall i \in \{3, 4, \dots, n\}$) in the neighbourhood of $q_0$, there exist a diffeomorphic coordinate transformation and an input transformation such that the system can be converted to a chained system. + +If $\theta_i \neq 0$ ($i=1, 2, 3$), then the chain transformation and input feedback transformation of the four-joint nonholonomic manipulator can be expressed as: + +$$
\left\{
\begin{aligned}
Z_5 &= \theta_4 \\
Z_4 &= k \cos \theta_3 \\
Z_3 &= -k^2 \cos \theta_2 \sin \theta_3 \\
Z_2 &= k^3 (\cos \theta_1 \sin \theta_2 \sin \theta_3 - \cos^2 \theta_2 \cos \theta_3) \\
Z_1 &= \varphi
\end{aligned}
\right.
\tag{4}
$$ + +$$
\begin{equation}
\begin{cases}
v_1 = \dot{z}_1 = \dot{\varphi} = u_1 \\
v_2 = \dot{z}_2 = k^4 c_2 (3 c_1 s_2 c_3 + s_3 c_1^2 + s_3 c_2^2) u_1 - k^3 s_1 s_2 s_3 u_2
\end{cases}
\tag{5}
\end{equation}
$$ + +# 4. MOTION PLANNING FOR PARALLEL-CHAIN FOUR-JOINT NONHOLONOMIC MANIPULATOR + +The basic idea of the motion planning for a chainable nonholonomic manipulator is to map the initial configuration $q^i$ and target configuration $q^f$ of the system into the initial configuration $z^i$ and target configuration $z^f$ of the chain space, plan a path from $z^i$ to $z^f$, and then map the path to the joint space through inverse chain transformation.
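The forward map of Eq. (4) and its inverse (the inverse chain transformation used later, cf. Eq. (10)) can be checked as a round trip. In this sketch $k = r/R$ is again an assumed value, and the inverse is only valid where $\sin\theta_2, \sin\theta_3 \neq 0$:

```python
import math

def theta_to_z(theta, phi=0.0, k=0.58):
    # Chain transformation of Eq. (4); k = r/R is an illustrative value.
    th1, th2, th3, th4 = theta
    z1 = phi
    z2 = k**3 * (math.cos(th1) * math.sin(th2) * math.sin(th3)
                 - math.cos(th2)**2 * math.cos(th3))
    z3 = -k**2 * math.cos(th2) * math.sin(th3)
    z4 = k * math.cos(th3)
    z5 = th4
    return [z1, z2, z3, z4, z5]

def z_to_theta(z, k=0.58):
    # Inverse chain transformation, valid for 0 < theta_i < pi (cf. Eq. (10)).
    _, z2, z3, z4, z5 = z
    th4 = z5
    th3 = math.acos(z4 / k)
    th2 = math.acos(-z3 / (k**2 * math.sin(th3)))
    th1 = math.acos((z2 / k**3 + math.cos(th3) * math.cos(th2)**2)
                    / (math.sin(th2) * math.sin(th3)))
    return [th1, th2, th3, th4]
```

The round trip `z_to_theta(theta_to_z(theta))` recovers the joint angles, which is what makes planning in the chain space and mapping back to the joint space legitimate.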
+ +The relatively mature motion planning methods for chained systems include the piecewise constant input method, the trigonometric function input method, the polynomial input method, and the switching control method. Among them, the polynomial input method stands out for its simple integration operation and the ability to control all variables to move to the target configuration along a smooth trajectory. The polynomial expression of the time-variation of the two control inputs is: + +$$
\begin{equation}
\begin{cases}
v_1(t) = b_1 \\
v_2(t) = b_2 + b_3 t + b_4 t^2
\end{cases}
\tag{6}
\end{equation}
$$ + +The motion planning aims to find a bounded control input $u(t)$ such that the system reaches the target configuration $z^f$ from the initial configuration $z^i$ over the specified time $T$. In other words, the system satisfies the following constraints: + +$$
\left\{
\begin{array}{l}
f_1 = z_2(T) - z_2^f = 0 \\
f_2 = z_3(T) - z_3^f = 0 \\
f_3 = z_4(T) - z_4^f = 0 \\
f_4 = z_5(T) - z_5^f = 0
\end{array}
\right.
\qquad (7)
$$ + +Through integration, the chained system can be expressed as: +---PAGE_BREAK--- + +$$
\left\{
\begin{aligned}
z_2(T) &= b_2 T + \frac{T^2}{2} b_3 + \frac{T^3}{3} b_4 + z_2^i \\
z_3(T) &= \frac{T^2}{2} b_1 b_2 + \frac{T^3}{6} b_1 b_3 + \frac{T^4}{12} b_1 b_4 + T z_2^i b_1 + z_3^i \\
z_4(T) &= \frac{T^3}{6} b_1^2 b_2 + \frac{T^4}{24} b_1^2 b_3 + \frac{T^5}{60} b_1^2 b_4 + \frac{T^2}{2} b_1^2 z_2^i + z_3^i T b_1 + z_4^i \\
z_5(T) &= \frac{T^4}{24} b_1^3 b_2 + \frac{T^5}{120} b_1^3 b_3 + \frac{T^6}{360} b_1^3 b_4 + \frac{T^3}{6} b_1^3 z_2^i + \frac{T^2}{2} z_3^i b_1^2 + T z_4^i b_1 + z_5^i
\end{aligned}
\right.
\quad (8)
$$ + +Substituting Eq. (8) into Eq. (7), we have a set of nonlinear equations about $b_1, b_2, b_3$ and $b_4$. The Newton iteration form of the equation set is: + +$$
b^{(k+1)} = b^{(k)} - [F'(b^{(k)})]^{+} F(b^{(k)})
\qquad (9)
$$ + +where $F'(b)$ is the Jacobian matrix of $F(b)$, and $[F'(b)]^{+}$ is the pseudo-inverse of $F'(b)$.
Let $b = [b_1, b_2, b_3, b_4]^T$ and $F = [f_1, f_2, f_3, f_4]^T$. + +Given the initial value $b^{(0)}$, $b$ can be calculated by the iteration Eq. (9). Then, the trajectory of $z_i(t)$ can be acquired by substituting $b$ into Eq. (8). Through the inverse chain transformation of Eq. (4), we can obtain the expression of the angular displacement of each joint with respect to the $z$-variable. Thus, the motion curves of the angular displacement of the four joints can be expressed as: + +$$
\left\{
\begin{array}{l}
\theta_4 = Z_5 \\
\theta_3 = \arccos(Z_4/k) \\
\theta_2 = \arccos(-\displaystyle\frac{Z_3}{k^2 \sin\theta_3}) \\
\theta_1 = \arccos(\displaystyle\frac{\displaystyle\frac{Z_2}{k^3} + \cos\theta_3(\cos\theta_2)^2}{\sin\theta_2 \sin\theta_3})
\end{array}
\right.
\qquad (10)
$$ + +# 5. SIMULATION EXPERIMENTS + +Experiment 1: + +Let the initial configuration $\theta^i = [\theta_1^i \ \theta_2^i \ \theta_3^i \ \theta_4^i]^T$ of a parallel-chain four-joint nonholonomic manipulator be $[5^{\circ} \ 5^{\circ} \ 5^{\circ} \ 5^{\circ}]^T$ and the target configuration of that manipulator be $\theta^f = [\theta_1^f \ \theta_2^f \ \theta_3^f \ \theta_4^f]^T = [15^{\circ} \ 15^{\circ} \ 15^{\circ} \ 15^{\circ}]^T$. + +Substituting the configurations into Eq. (4), the boundary conditions in the chain space
can be derived as $z^i = [z_2^i \ z_3^i \ z_4^i \ z_5^i]^T = [-0.1958 \ -0.0297 \ 0.5822 \ 0.0873]^T$ and $z^f =
[z_2^f \ z_3^f \ z_4^f \ z_5^f]^T = [-0.1670 \ -0.0854 \ 0.5645 \ 0.2618]^T$. + +Figure 5: Trajectory, a) of variable *z* in the chain space, b) of each joint in the joint space. +---PAGE_BREAK--- + +Let the motion time $T = 20$ s and $b^{(0)} = [0.1 \ 0.1 \ 0.1 \ 0.1]^T$. The termination condition of the iteration was set with the error at the termination time: + +$$e = \sqrt{(z_2(T) - z_2^f)^2 + (z_3(T) - z_3^f)^2 + (z_4(T) - z_4^f)^2 + (z_5(T) - z_5^f)^2} < 10^{-6}.$$ + +Then, Eq. (9) was solved by the Newton iteration method.
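The Newton/pseudo-inverse iteration of Eq. (9) is easy to reproduce from the closed-form endpoint map of Eq. (8). The sketch below uses a forward-difference Jacobian; the coefficient vector used in the test is a hypothetical value chosen so the target is self-consistent, not the paper's solved result.

```python
import numpy as np

def z_final(b, zi, T=20.0):
    # Endpoint map of Eq. (8): chain state (z2..z5)(T) for inputs b = [b1..b4]
    # and initial chain state zi = [z2_i, z3_i, z4_i, z5_i].
    b1, b2, b3, b4 = b
    z2i, z3i, z4i, z5i = zi
    z2 = b2 * T + b3 * T**2 / 2 + b4 * T**3 / 3 + z2i
    z3 = b1 * (b2 * T**2 / 2 + b3 * T**3 / 6 + b4 * T**4 / 12 + z2i * T) + z3i
    z4 = (b1**2 * (b2 * T**3 / 6 + b3 * T**4 / 24 + b4 * T**5 / 60 + z2i * T**2 / 2)
          + b1 * z3i * T + z4i)
    z5 = (b1**3 * (b2 * T**4 / 24 + b3 * T**5 / 120 + b4 * T**6 / 360 + z2i * T**3 / 6)
          + b1**2 * z3i * T**2 / 2 + b1 * z4i * T + z5i)
    return np.array([z2, z3, z4, z5])

def solve_b(zi, zf, b0, T=20.0, tol=1e-10, max_iter=50, h=1e-7):
    # Newton iteration of Eq. (9): b <- b - pinv(F'(b)) F(b), with a
    # forward-difference Jacobian of the residual F(b) = z(T; b) - z^f.
    b = np.asarray(b0, dtype=float)
    for _ in range(max_iter):
        F = z_final(b, zi, T) - zf
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((4, 4))
        for j in range(4):
            db = b.copy()
            db[j] += h
            J[:, j] = (z_final(db, zi, T) - z_final(b, zi, T)) / h
        b = b - np.linalg.pinv(J) @ F
    return b
```

Starting from a rough guess, the iteration settles in a handful of steps, because Eq. (8) is linear in $b_2, b_3, b_4$ and only cubic in $b_1$.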
Through 9 iterations, we have $b = [b_1 \ b_2 \ b_3 \ b_4]^T = [0.0151831 \ 0.0007583 \ 0.0000772 \ -0.0000007]^T$. Substituting $b$ into Eq. (8), we have the time-variation curve of variable $z$ (Fig. 5 a). According to Eq. (10), the path in the chain space can be mapped back to the joint space via inverse transformation. Under the time polynomial input control, the output of the four joints of the nonholonomic manipulator is as shown in Fig. 5 b. + +At $T=20$ s, $\theta_1=14.9999999^{\circ}$, $\theta_2=14.9999999^{\circ}$, $\theta_3=14.9999999^{\circ}$ and $\theta_4=14.9999999^{\circ}$. + +Let the relative error of the target configuration be: $e = \displaystyle\frac{\theta^f - \theta^r}{\theta^f - \theta^i}$ + +where $\theta^r$ is the actual displacement of joint rotation. At this time, the target configuration error of each joint is $e_{\theta_1}=0.0000001\%$, $e_{\theta_2}=0.0000001\%$, $e_{\theta_3}=0.0000001\%$ and $e_{\theta_4}=0.0000001\%$. The simulation results show that, under the time polynomial input control, all joints have smooth trajectories except for a slight fluctuation of joint 1 in the initial phase, and arrive at the target configuration. + +### Experiment 2: + +Let the initial configuration of the proposed manipulator $\theta^i = [\theta_1^i \ \theta_2^i \ \theta_3^i \ \theta_4^i]^T$ be $[20^{\circ} \ 20^{\circ} \ 20^{\circ} \ 20^{\circ}]^T$ and its target configuration be $\theta^f = [\theta_1^f \ \theta_2^f \ \theta_3^f \ \theta_4^f]^T = [10^{\circ} \ 10^{\circ} \ 10^{\circ} \ 10^{\circ}]^T$. Suppose the motion time $T = 20$ s. Through simulation, the time-variation trajectories of the chain variable and joint variable are as shown in Figs. 6 a and 6 b, respectively. + +Figure 6: Trajectory, a) of variable $z$ in the chain space, b) of each joint in the joint space. + +At $T = 20$ s, $\theta_1 = 10.000000000016^{\circ}$, $\theta_2 = 10.000000000016^{\circ}$, $\theta_3 = 10^{\circ}$ and $\theta_4 = 10^{\circ}$.
The simulation results show that each joint of the manipulator has a smooth trajectory and arrives at the target configuration within the specified time. + +Comparing the results of the two simulation experiments, it is clear that all joints of the parallel-chain four-joint manipulator can move accurately from the initial configuration to the target configuration within the specified time, when the input is controlled by the time polynomial obtained through Newton iteration. The motion of each joint is stable, with virtually no large fluctuation. Therefore, the Newton iteration-based polynomial input control is a feasible motion planning method for the parallel-chain four-joint nonholonomic manipulator. +---PAGE_BREAK--- + +# 6. CONCLUSIONS + +Considering the friction disc motion synthesis and decomposition mechanism, this paper proposes a chainable parallel-chain four-joint nonholonomic manipulator based on the earlier parallel-chain nonholonomic manipulator. According to nonlinear control theory, the author proved that the reachable space expanded from the manipulator system satisfies the involutive distribution condition, i.e. the system is controllable. Then, the nonholonomic motion planning was transformed into the solution of a nonlinear equation set, using the time polynomial input method of the chained system. The unknown coefficients of the time polynomial were solved by the Newton iteration method. After that, two simulation experiments were performed on the motion between initial and target configurations. The results show that all joints of the proposed manipulator can move stably and accurately from the initial configuration to the target configuration within the specified time.
+ +Nevertheless, there is no guarantee that the planned path between the initial configuration and the target configuration in the chain space can be transformed back into the joint space without singularity, especially when the joint variables are coupled tightly due to the increase in the number of joints on the manipulator. Thus, the key to the path planning of a nonholonomic manipulator lies in the existence of the solution to the inverse transformation of the planned path from the chain space to the joint space. In future research, the author will construct the mathematical expression of the geometric and topological features of the nonholonomic path, identify the conditions for the path between adjacent configurations to converge into the chain space, and establish the existence criterion of the inverse transformation solution for the nonholonomic path. + +# ACKNOWLEDGEMENT + +This work is supported by the Special Scientific Research Plan of Shaanxi Provincial Department of Education (17JK0048), and the Specialized Research Fund for the Doctor Program of Baoji University of Arts and Sciences (ZK16044). + +# REFERENCES + +[1] Zhai, J.-Y.; Song, Z.-B. (2018). Adaptive sliding mode trajectory tracking control for wheeled mobile robots, *International Journal of Control*, 8 pages, doi:10.1080/00207179.2018.1436194 + +[2] Van Loock, W.; Pipeleers, G.; Diehl, M.; De Schutter, J.; Swevers, J. (2014). Optimal path following for differentially flat robotic systems through a geometric problem formulation, *IEEE Transactions on Robotics*, Vol. 30, No. 4, 980-985, doi:10.1109/TRO.2014.2305493 + +[3] Li, L. (2017). Nonholonomic motion planning using trigonometric switch inputs, *International Journal of Simulation Modelling*, Vol. 16, No. 1, 176-186, doi:10.2507/IJSIMM16(1)CO5 + +[4] Chung, W.-J.; Nakamura, Y. (2002). Design and control of a chained form manipulator, *International Journal of Robotics Research*, Vol. 21, No.
5-6, 389-408, doi:10.1177/027836402761393351 + +[5] Nakamura, Y.; Ezaki, H.; Tan, Y.-G.; Chung, W. (2001). Design of steering mechanism and control of nonholonomic trailer systems, *IEEE Transactions on Robotics and Automation*, Vol. 17, No. 3, 367-374, doi:10.1109/70.938393 + +[6] Yamaguchi, H.; Mori, M.; Kawakami, A. (2011). Control of a five-axle, three-steering coupled-vehicle system and its experimental verification, *IFAC Proceedings Volumes*, Vol. 44, No. 1, 12976-12984, doi:10.3182/20110828-6-IT-1002.01455 + +[7] Yamaguchi, H. (2012). Dynamical analysis of an undulatory wheeled locomotor: a trident steering walker, *IFAC Proceedings Volumes*, Vol. 45, No. 22, 157-164, doi:10.3182/20120905-3-HR-2030.00064 +---PAGE_BREAK--- + +[8] Yamaguchi, H. (2007). A path following feedback control law for a trident steering walker, *Transactions of the Society of Instrument and Control Engineers*, Vol. 43, No. 7, 562-571, doi:10.9746/ve.sicetr1965.43.562 + +[9] Dobrin, C.; Bondrea, I.; Pîrvu, B.-C. (2015). Modelling and simulation of collaborative processes in manufacturing, *Academic Journal of Manufacturing Engineering*, Vol. 13, No. 3, 18-25 + +[10] Tan, Y.-G.; Li, L.; Liu, M.-Y.; Chen, G.-L. (2012). Design and path planning for controllable underactuated manipulator, *International Journal of Advancements in Computing Technology*, Vol. 4, No. 2, 212-221, doi:10.4156/ijact.vol4issue2.26 + +[11] Li, L.; Tan, Y.-G.; Li, Z. (2014). Nonholonomic motion planning strategy for underactuated manipulator, *Journal of Robotics*, Vol. 2014, Paper 743857, 10 pages, doi:10.1155/2014/743857 + +[12] Djedai, H.; Mdouki, R.; Mansouri, Z.; Aouissi, M. (2017). Numerical investigation of three-dimensional separation control in an axial compressor cascade, *International Journal of Heat and Technology*, Vol. 35, No. 3, 657-662, doi:10.18280/ijht.350325 + +[13] Tan, Y.-G.; Jiang, Z.-Q.; Zhou, Z.-D. (2006).
A nonholonomic motion planning and control based on chained form transformation, *Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems*, 3149-3153, doi:10.1109/IROS.2006.282337 + +[14] Pamuk, M. T.; Savaş, A.; Seçgin, Ö.; Arda, E. (2018). Numerical simulation of transient heat transfer in friction-stir welding, *International Journal of Heat and Technology*, Vol. 36, No. 1, 26-30, doi:10.18280/ijht.360104 + +[15] Medina, Y. C.; Fonticiella, O. M. C., Morales, O. F. G. (2017). Design and modelation of piping systems by means of use friction factor in the transition turbulent zone, *Mathematical Modelling of Engineering Problems*, Vol. 4, No. 4, 162-167, doi:10.18280/mmep.040404 + +[16] Li, Z. X. (1997). *A Mathematical Introduction to Robot Manipulation*, China Machine Press, Beijing + +[17] Sørdalen, O. J. (1993). Conversion of the kinematics of a car with n trailers into a chained form, *Proceedings of the 1993 IEEE International Conference on Robotics and Automation*, Vol. 1, 382-387, doi:10.1109/ROBOT.1993.292011 \ No newline at end of file diff --git a/samples/texts_merged/6535016.md b/samples/texts_merged/6535016.md new file mode 100644 index 0000000000000000000000000000000000000000..a7a458fc320ae505836052cb60232fe72059ab1b --- /dev/null +++ b/samples/texts_merged/6535016.md @@ -0,0 +1,607 @@ + +---PAGE_BREAK--- + +Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics + +Evan D. Remington¹, Devika Narain¹,², Eghbal A. Hosseini², and Mehrdad Jazayeri¹,²,* + + + +¹McGovern Institute for Brain Research, ²Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA + +*Correspondence + +Mehrdad Jazayeri, Ph.D. +Robert A. 
Swanson Career Development Professor +Assistant Professor, Department of Brain and Cognitive Sciences +Investigator, McGovern Institute for Brain Research +Investigator, Center for Sensorimotor Neural Engineering +MIT 46-6041 +43 Vassar Street +Cambridge, MA 02139, USA +Phone: 617-715-5418 +Fax: 617-253-5659 +Email: mjaz@mit.edu + +Acknowledgements + +We thank S.W. Egger, H. Sohn, and V. Parks for their helpful suggestions on the manuscript. D.N. is supported by the Rubicon grant (2015/446-14-008) from the Netherlands organization for scientific research (NWO). M.J. is supported by NIH (NINDS-NS078127), the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute. + +Author contributions + +E.D.R. and M.J. designed the main experimental paradigm. E.D.R. and E.A.H. trained the animals. E.D.R. collected neural data from both animals. E.D.R. developed KiNeT. E.D.R. performed all analyses. D.N. developed the recurrent neural network models. E.D.R. and M.J. interpreted the results and wrote the paper. + +Declaration of Interests + +The authors declare no competing interests. +---PAGE_BREAK--- + +## Summary + +Sensorimotor computations can be flexibly adjusted according to internal states and contextual inputs. The mechanisms supporting this flexibility are not understood. Here, we tested the utility of a dynamical system perspective to approach this problem. In a dynamical system whose state is determined by interactions among neurons, computations can be rapidly and flexibly reconfigured by controlling the system's inputs and initial conditions. To investigate whether the brain employs such control strategies, we recorded from the dorsomedial frontal cortex (DMFC) of monkeys trained to measure time intervals and subsequently produce timed motor responses according to multiple context-specific stimulus-response rules. 
Analysis of the geometry of neural states revealed a control mechanism that relied on the system's inputs and initial conditions. A tonic input specified by the behavioral context adjusted firing rates throughout each trial, while the dynamics in the measurement epoch allowed the system to establish initial conditions for the ensuing production epoch. This initial condition in turn set the speed of neural dynamics in the production epoch allowing the animal to aim for the target interval. These results provide evidence that the language of dynamical systems can be used to parsimoniously link brain activity to sensorimotor computations. +---PAGE_BREAK--- + +# Introduction + +Humans and nonhuman primates are capable of generating a vast array of behaviors, a feat dependent on the brain's ability to produce a vast repertoire of neural activity patterns. However, identifying the mechanisms by which the brain flexibly selects neural activity patterns across a multitude of contexts remains a fundamental and outstanding problem in systems neuroscience. + +Here, we aimed to answer this question using a dynamical systems approach. Work in the motor system has provided support for a hypothesis that movement-related activity in motor cortex can be described at the level of neural populations and viewed as low dimensional neural trajectories of a dynamical system (Churchland et al. 2010; Churchland et al. 2012; Seely et al. 2016; Fetz 1992; Michaels et al. 2016). More recently, a dynamical systems view has been used to provide explanations for neural trajectories in premotor and prefrontal cortical areas in various cognitive tasks (Mante et al. 2013; Rigotti et al. 2010; Carnevale et al. 2015; Hennequin et al. 2014; Rajan et al. 2016). 
This line of investigation has been complemented by efforts in developing, training, and analyzing recurrent neural network models that can emulate a range of motor and cognitive behaviors, leading to novel insights into the underlying latent dynamics (Mante et al. 2013; Hennequin et al. 2014; Sussillo et al. 2015; Chaisangmongkon et al. 2017; Wang et al. 2017). These early successes hold promise for the development of a more ambitious "computation-through-dynamics" (CTD) as a general framework for understanding how activity patterns in the brain support flexible behaviorally-relevant computations. + +The behavior of a dynamical system can be described in terms of three components: (1) the interaction between state variables that characterize the system's latent dynamics, (2) the system's initial state, and (3) the external inputs to the system. Accordingly, the hope for using the mathematics of dynamical systems to understand flexible generation of neural activity patterns and behavior depends on our ability to understand the co-evolution of behavioral and neural states in terms of these three components. Assuming that synaptic couplings between neurons and other biophysical properties are approximately constant on short timescales (i.e. trial to trial), we asked whether behavioral flexibility can be understood in terms of adjustments to initial state and external inputs. + +There is evidence that certain aspects of behavioral flexibility can be understood through these mechanisms. For example, it has been proposed that preparatory activity prior to movement initializes the system such that ensuing movement-related activity follows the appropriate trajectory (Churchland et al. 2010). Similarly, the presence of a context input can enable a recurrent neural network to perform flexible rule- (Mante et al. 2013; Song et al. 2016) and category-based decisions (Chaisangmongkon et al. 2017). 
However, whether these initial insights would apply more broadly and generalize when both inputs and initial conditions change is an important outstanding question. +---PAGE_BREAK--- + +For many behaviors, distinguishing the effects of the synaptic coupling, inputs and initial conditions in neural activity patterns is challenging. For example, neural activity during a reaching movement is likely governed by both local recurrent interactions and distal inputs from time-varying and condition-dependent reafferent signals (Todorov & Jordan 2002; Scott 2004; Pruszynski et al. 2011). Similarly, in many perceptual decision making tasks, it is not straightforward to disambiguate the sensory drive from recurrent activity representing the formation of a decision and the subsequent motor plan (Mante et al. 2013; Meister et al. 2013; Thura & Cisek 2014). This makes it difficult to tease apart the contribution of recurrent dynamics governed by initial conditions from the contribution of dynamic inputs (Sussillo et al. 2016). To address this challenge, we designed a sensorimotor task for nonhuman primates in which animals had to measure and produce time intervals using internally-generated patterns of neural activity in the absence of potentially confounding time varying sensory and reafferent inputs. Using a novel analysis of the geometry and dynamics of *in-vivo* activity in the dorsal medial frontal cortex (DMFC) and *in-silico* activity in recurrent neural network models trained to perform the same task, we found that behavioral flexibility is mediated by the complementary action of inputs and initial conditions controlling the structural organization of neural trajectories. +---PAGE_BREAK--- + +# Results + +## Ready, Set, Go (RSG) task + +Our aim was to ask whether flexible control of internally-generated dynamics could be understood in terms of systematic adjustments made to initial conditions and external inputs of a dynamical system. 
We designed a “Ready, Set, Go” (RSG) timing task to directly investigate the role of these two factors. The basic sensory and motor events in the task were as follows: following fixation of a central spot, monkeys viewed two peripheral visual flashes (“Ready” followed by “Set”) separated by a sample interval, $t_s$, and produced an interval, $t_p$, after Set by making a saccade to a visual target that was presented throughout the trial. In order to obtain juice reward, animals had to generate $t_p$ as close as possible to a target interval, $t_t$ (Figure 1B), which was equal to $t_s$ times a “gain factor”, $g$ ($t_t=gt_s$). The demand for flexibility was imposed in two ways (Figure 1C). First, $t_s$ varied between 0.5 and 1 sec on a trial-by-trial basis (drawn from a discrete uniform “prior” distribution). Second, $g$ switched between 1 ($g$=1 context) and 1.5 ($g$=1.5 context) across blocks of trials (Figure 1D, mean block length = 101, std = 49 trials). + +To verify that animals learned the task (Figure 1E), we used regression analyses to assess the dependence of $t_p$ on $t_s$ and $g$. First, we analyzed the relationship between $t_s$ and $t_p$ within each context ($t_p = \beta_0 + \beta_1 t_s$). Results indicated that $t_p$ increased monotonically with $t_s$ for both contexts ($\beta_1 > 0$, p << 0.001 for all sessions). Next, we assessed the influence of gain on $t_p$ in several complementary analyses. First, we compared regression slopes relating $t_p$ to $t_s$ within each context. The slopes were significantly higher in the $g$=1.5 than in the $g$=1 context (mean $\beta_1$ = 0.84 vs. 1.2; signed-rank test p = 0.002, n = 10 sessions; Figure 1E, inset). Second, we fit a regression model to behavior across both gains that included additional regressors for gain and its interaction with $t_s$ ($t_p = \beta_0 + \beta_1 t_s + \beta_2 g + \beta_3 gt_s$).
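The interaction model above can be fit by ordinary least squares. The sketch below uses synthetic behavior with a built-in gain effect (hypothetical data and variable names, not the recorded sessions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic behavior (illustrative, not the recorded data): t_p scales
# with t_s, and the slope is steeper in the g=1.5 context.
t_s = rng.choice(np.linspace(0.5, 1.0, 7), size=n)   # sample intervals (s)
g = rng.choice([1.0, 1.5], size=n)                   # gain context
t_p = g * t_s + rng.normal(0.0, 0.05, size=n)        # produced intervals (s)

# Full model: t_p = b0 + b1*t_s + b2*g + b3*(g*t_s)
X = np.column_stack([np.ones(n), t_s, g, g * t_s])
b, *_ = np.linalg.lstsq(X, t_p, rcond=None)

# A positive interaction coefficient b3 captures the gain-dependent slope.
assert b[3] > 0
```

With these synthetic data, a positive $\beta_3$ should recover the built-in steepening of the $t_p$-versus-$t_s$ slope in the $g$=1.5 context.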
Results indicated a significant positive interaction between $t_s$ and $g$ (mean $\beta_3$ = 0.73; $\beta_3$ > 0, p < 0.0001 in each session). Finally, we fit a regression model relating $t_p$, z-scored for each $t_s$, to the number of trials following a context switch to determine how fast monkeys adjusted their behavior. There was no evidence for a slow adaptation of $t_p$ as a function of the number of trials after switch (one-tailed test for $\beta_1$ in the first 25 trials after switch, p > 0.25), indicating that the switching was rapid. Together, these results confirmed that animals used an estimate of $t_s$ to compute $t_p$ and flexibly adjusted their responses according to the gain information. + +For both gains, responses were variable, and average responses exhibited a regression to the mean (mean $\beta_1$ < 1, p = 0.005 for $g$=1, and mean $\beta_1$ < 1.5, p = 0.0001 for $g$=1.5, one-sided signed-rank test). As with previous work (Jazayeri & Shadlen 2015; Acerbi et al. 2012; Miyazaki et al. 2005; Jazayeri & Shadlen 2010), behavior was accurately captured by a Bayesian model (Figure 1E, Methods), indicating that animals integrated their knowledge about the prior distribution, the sample interval and the gain to optimize their behavior. +---PAGE_BREAK--- + +**Figure 1.** The RSG task and behavior. (A) RSG task. On each trial, three rectangular stimuli termed “Ready,” “Set,” and “Go” were shown on the screen arranged in a semi-circle. Following fixation, Ready and Set were extinguished. After a random delay, first Ready and then Set stimuli were flashed (small lines around the rectangles signify flashed stimuli). The time interval between Ready and Set demarcated a sample interval, *t*s. The monkey's task was to generate a saccade (“Go”) to a visual target such that the interval between Set and Go (produced interval, *t*p) was equal to a target interval, *t*t, equal to *t*s multiplied by a gain factor, *g*.
The animal had to perform the task in two behavioral contexts, one in which *t*t was equal to *t*s (*g*=1 context), and one in which *t*t was 50% longer than *t*s (*g*=1.5 context). The context was cued by the color of fixation and the position of a context stimulus (small white square below the fixation) throughout the trial. (B) Animals received juice reward when the error between *t*p and *t*t was small, and the reward magnitude decreased with the size of error (see Methods for details). On rewarded trials, the saccadic target turned green (panel A). (C) For both contexts, *t*s was drawn from a discrete uniform distribution with seven values equally spaced from 0.5 to 1 sec (left). The values of *t*s were chosen such that the corresponding values of *t*t across the two contexts were different but partially overlapping (right). (D) The context changed across blocks of trials. The number of trials in a block was varied pseudorandomly (mean and std shown). (E) *t*p as a function of *t*s for each context across all recording sessions. Circles indicate mean *t*p across all sessions, shaded regions indicate +/- one standard deviation from the mean, dashed lines indicate *t*t, and solid lines are the fits of a Bayesian observer model to behavior. Inset: Slope of the regression line (*β*1) relating *t*p to *t*s in the two contexts. Regression slopes were larger in the *g*=1.5 context, with a significant interaction between *t*s and *g* (*p* < 0.0001) for all sessions (see text; ** indicates *p* < 0.002 for signed-rank test). In all panels, different shades of gray and red are associated with *g*=1 and *g*=1.5, respectively. +---PAGE_BREAK--- + +## Neural activity in the RSG task + +To assess the neural computations in RSG, we focused on the dorsal region of the medial frontal cortex (DMFC) comprising supplementary eye fields, supplementary motor area and presupplementary motor area.
DMFC is a natural candidate for our task because it plays a crucial role in timing as shown by numerous studies in humans (Halsband et al. 1993; Rao et al. 2001; Coull et al. 2004; Pfeuty et al. 2005; Macar et al. 2006; Cui et al. 2009), monkeys (Okano & Tanji 1987; Merchant et al. 2013; Kunimatsu & Tanaka 2012; Isoda & Tanji 2003; Romo & Schultz 1992; Merchant et al. 2011; Mita et al. 2009; Ohmae et al. 2008; Kurata & Wise 1988), and rodents (Matell et al. 2003; Kim et al. 2009; Smith et al. 2010; Kim et al. 2013; Xu et al. 2014; Murakami et al. 2014), and because it is involved in context-specific control of actions (Isoda & Hikosaka 2007; Ray & Heinen 2015; Yang & Heinen 2014; Shima et al. 1996; Matsuzaka & Tanji 1996; Brass & von Cramon 2002). + +We recorded from 326 units (127 from monkey C and 199 from monkey J) in DMFC. Between 11 and 82 units were recorded simultaneously in a given session, however in this study, we combined data across all units irrespective of whether they were recorded simultaneously. Firing patterns were heterogeneous and varied across units, task epochs, and experimental contexts. In the Ready-Set epoch, responses were modulated by both gain and elapsed time (e.g. units #1, 3, and 5, **Figure 2A**). For many units, firing rate modulations underwent a salient change at the earliest expected time of Set (0.5 sec). For example, responses of some units increased monotonically in the first 0.5 sec but decreased afterwards (**Figure 2A**, units #1, 3). + +Following Set, firing rates were characterized by a mixture of 1) transient changes after Set (unit #1 and 3), 2) sustained modulations during the Set-Go epoch (units #1 and 5), and 3) monotonic changes in anticipation of the saccade (units #1, 2 and 4). These characteristics were not purely sensory or motor and varied systematically with $t_s$ and gain. 
For example, the amplitude of the early transient response (unit #1) depended on both $t_s$ and gain, indicating that it was not a visually-triggered response to Set. The same was true for the sustained modulations after Set and activity modulations prior to saccade initiation. + +We also examined the representation of $t_s$ and gain across the population by projecting the data on dimensions along which activity was strongly modulated by context and interval in state-space (i.e. the space spanned by the firing rates of all 326 units; see Methods). Similar to individual units, population activity was modulated by both elapsed time and gain during both the Ready-Set (**Figure 2B**) and Set-Go (**Figure 2C**) epochs. We used this rich dataset to investigate whether the flexible adjustment of intrinsic dynamics across the population with respect to $t_s$ and gain could be understood using the language of dynamical systems. +---PAGE_BREAK--- + +Figure 2. Neural responses in dorsomedial frontal cortex (DMFC) during the RSG task. (A) Firing rates of 5 example units during the various phases of the task aligned to Ready (left column), Set (middle) and Go (right). Responses aligned to Ready and Set were sorted by $t_s$. Responses aligned to Go were sorted into 5 bins, each with the same number of trials, ordered by $t_p$. Gray and red lines correspond to activity during the $g=1$ and $g=1.5$ contexts, respectively, with darker lines corresponding to longer intervals. (B) Visualization of population activity in the Ready-Set epoch sorted by $t_s$. The “gain axis” corresponds to the axis along which responses were maximally separated with respect to context. The other two dimensions (“PC 1 & PC 2”) correspond to the first two principal components of the data after removing the context dimension. (C) Visualization of population activity in the Set-Go epoch sorted into 5 bins, each with the same number of trials, ordered by $t_p$. 
Top: Activity plotted in 2 dimensions spanned by PC 1 and the dimension of maximum variance with respect to $t_p$ within each context (“Interval axis”). Bottom: Same as Top rotated 90 degrees (circular arrow) to visualize activity in the plane spanned by the context axis (“Gain axis”) and PC 1. In both panels, PC 1 was computed after removing the variance explained along the Interval axis and Gain axis dimensions. Squares, circles, and crosses in the state space plots represent Ready, Set, and Go, respectively. +---PAGE_BREAK--- + +## Flexible neural computations: a dynamical systems perspective + +We pursued the idea that neural computations responsible for flexible control of saccade initiation time can be understood in terms of the behavior of a dynamical system established by interactions among neurons. To formulate a rigorous hypothesis for how a dynamical system could confer such flexibility, we considered the goal of the task and worked backwards logically. The goal of the animal is to flexibly control the saccade initiation time to a fixed target. Previous motor timing studies proposed that saccade initiation is triggered when the activity of a subpopulation of neurons with monotonically increasing firing rates (i.e., “ramping”) reaches a threshold (Mita et al. 2009; Kunimatsu & Tanaka 2012; Romo & Schultz 1987; Roitman & Shadlen 2002; Hanes & Schall 1996; Tanaka 2005; Maimon & Assad 2006). For these neurons, flexibility requires that the slope of the ramping activity be adjusted (Jazayeri & Shadlen 2015). More recently, it was found that actions are initiated when the collective activity of neurons with both ramping and more complex activity patterns reaches an action-triggering state (Churchland et al. 2006; Wang et al. 2017), and that flexible control of initiation time can be understood in terms of the speed with which neural activity evolves toward that terminal state (Wang et al. 2017).
+ +In a dynamical system, the speed with which activity evolves over time is determined by the derivative of the state. If we denote the state of the system by $X$, the derivative is usually specified by two factors, a function of the current state, $f(X)$, and an external input, $U$, that may be constant or context- and/or time-dependent: + +$$ \frac{dX}{dt} = f(X) + U $$ + +When analyzing the collective activity of a specific population of neurons, this formulation has a straightforward interpretation. The state represents the collective firing rate of neurons under investigation, $f(X)$ accounts for the interactions among those neurons, and $U$ corresponds to external input from another population of neurons, possibly controlled by an external sensory drive. The only additional information needed to determine the behavior of this system is its initial condition, $X_0$, which specifies the initial neural state prior to generating a desired dynamic pattern of activity. + +To assess the utility of the dynamical systems perspective for understanding behavioral flexibility, we assumed that $f(X)$ (i.e., synaptic coupling in DMFC) is fixed across trials. This leaves inputs and initial conditions as the only “dials” for achieving flexibility (Figure 3). To formalize a set of concrete hypotheses for the potential role of inputs and initial conditions, we first focused on behavioral flexibility with respect to $t_s$ for each gain context. How can a dynamical system adjust the speed at which activity during Set-Go evolves in a $t_s$-dependent manner? In RSG, within each context, there are no sensory inputs (exafferent or reafferent) that could serve as a $t_s$-dependent input drive. Therefore, we hypothesized that the $t_s$-dependent adjustment of +---PAGE_BREAK--- + +speed in the Set-Go epoch results from a parametric control of initial conditions at the time of Set. 
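The role of inputs and initial conditions as "dials" can be illustrated with a toy one-dimensional system integrated with Euler's method, where the coupling $f(X)$ is fixed and only $U$ and $X_0$ vary (all dynamics below are illustrative stand-ins, not a model of DMFC):

```python
import numpy as np

def f(x):
    # Fixed "recurrent" term, standing in for constant synaptic coupling
    return -0.5 * x

def simulate(x0, u, dt=0.01, n_steps=1000):
    """Euler-integrate dx/dt = f(x) + u from initial condition x0."""
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        x[k] = x[k - 1] + dt * (f(x[k - 1]) + u)
    return x

# With f and u fixed, the initial condition alone determines how soon
# activity reaches a fixed "action-triggering" threshold.
threshold = 1.9
near = simulate(x0=1.8, u=1.0)   # starts close to threshold -> crosses early
far = simulate(x0=0.5, u=1.0)    # starts far away -> crosses late
t_near = np.argmax(near >= threshold)   # first time step at/above threshold
t_far = np.argmax(far >= threshold)

# A tonic input shifts the whole trajectory to a different operating
# region (steady state 2*u here), analogous to a persistent context cue.
shifted = simulate(x0=0.5, u=2.0)
```

Here `t_near < t_far`: identical dynamics reach threshold at different times purely as a function of $X_0$, which is the sense in which initial conditions can act as a speed dial, while a change in $U$ relocates the trajectory in state space.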
The corollary to this hypothesis is that the time-varying activity during the Ready-Set epoch is responsible for adjusting this initial condition based on the desired speed during the ensuing Set-Go epoch (Wang et al. 2017). + +Second, we asked how speed might be controlled across the two gain contexts. One possibility is to establish initial conditions that generalize across the two contexts (Figure 3A). To do so, initial conditions must vary with speed requirements associated with producing $t_t=gt_s$, which has implicit information about both gain and $t_s$ (i.e., $X_0(gt_s)$). If both gain and $t_s$ are encoded by initial conditions, we would expect neural trajectories to form a single organized structure with respect to the target time ($t_t=gt_s$). In the extreme case, neural trajectories associated with the same value of $gt_s$ across the two contexts (e.g., 1.5×0.5 and 1.0×0.75) should terminate in the same state at the time of Set and should evolve along identical trajectories during the Set-Go epoch. We refer to this solution as $A_1$ (Figure 3A). + +Alternatively, DMFC responses may rely on a persistent gain-dependent input to adjust speed across the two gain contexts (Figure 3B). As exemplified by recurrent neural network models, in dynamical systems, a persistent input can rapidly reconfigure computations by driving the system to different regions of the state space (Mante et al. 2013; Sussillo et al. 2015; Hennequin et al. 2014; Chaisangmongkon et al. 2017; Song et al. 2016). This solution, which we refer to as $A_2$, predicts a qualitatively different geometrical organization of neural trajectories compared to $A_1$, with two key features. First, there should be a gain-dependent organization forming two sets of neural trajectories in two different regions of the state space. Second, neural trajectories should be organized with respect to $t_s$ and $t_p$ (i.e., within each context) but not necessarily with respect to $t_t$ (i.e., across contexts).
Because the context information in RSG was provided as an external visual input (fixation cue), and was available throughout the trial, we reasoned that this solution offers the more plausible account of how the brain might solve the task. + +Therefore, the dynamical systems perspective in RSG leads us to the following specific hypotheses: 1) the evolution of activity in the Ready-Set epoch parametrizes the initial conditions needed to control the speed of dynamics in the production epoch for each context, and 2) the context cue acts as a persistent external input leading the system to establish structurally similar yet distinct sets of neural trajectories associated with the two gains, and no $t_p$-related structure across contexts, consistent with $A_2$. + +Visualization of neural trajectories from Set to Go in state space (Figure 3C, same as in Figure 2C) provided qualitative support for these hypotheses. First, within each context, neural trajectories for different $t_p$ bins were clearly associated with different initial conditions and remained separate and ordered throughout the Set-Go epoch. Second, context information seemed to displace the entire group of neural trajectories to a different region of neural state space without altering their relative organization as a function of $t_p$. Third, indexing time along nearby trajectories suggested that the speed with which responses evolved along each trajectory was systematically related to the desired $t_t$; i.e., slower for longer $t_t$. To validate these observations quantitatively, we +---PAGE_BREAK--- + +developed an analysis technique which we termed “kinematic analysis of neural trajectories” (KiNeT) that +helped us measure the relative speed and position of multiple, possibly curved (Figure S1), neural trajectories. +---PAGE_BREAK--- + +**Figure 3.** Dynamical systems predictions for the RSG task.
(A,B) Schematic illustrations for dynamical systems solutions to generalize RSG across contexts through manipulation of initial conditions or external inputs. (A) Gain-control by initial condition ($A_1$). Top: The target interval $t_t=gt_s$ ($g$, gain; $t_s$, sample interval) is encoded by the initial conditions ($X_0(gt_s)$) generated during the Ready-Set epoch (not shown). Middle: After the Set cue (open circles), activity evolves towards an action-triggering state (crosses) with a speed (colored arrows) fully determined by position along the initial condition subspace (ordinate). Activity across contexts is organized according to $t_t=gt_s$. Bottom: same trajectories, rotated to show an oblique view. Trajectories are separated only along the initial condition axis across both contexts such that trajectory structure reflects $t_t$ explicitly. There is no separation along the Input axis. (B) Gain-control by external input ($A_2$). Top: $t_s$ is encoded by initial conditions ($X_0(t_s)$), and a persistent context-dependent input encodes the gain (red and gray arrow for the two gains). Middle: Within each context, trajectories associated with the same $t_s$ evolve along the same position on the initial condition axis at different speeds due to the context-dependent input. Activity is organized according to $t_s$ and not $t_t$. Bottom: oblique view. A context-dependent external input creates two sets of neural trajectories in the state space for the two contexts in the Set-Go epoch. This input controls speed in conjunction with $t_s$-dependent initial conditions, generating a structure which reflects $t_s$ and $g$ explicitly, but not $t_t$. In both $A_1$ and $A_2$, responses would be initiated when activity projected onto the time axis reaches a threshold. (C) DMFC data. Top: unknown mechanism of RSG control in DMFC. Middle, bottom: 3-dimensional projection of DMFC activity in the Set-Go epoch (from Figure 2C).
Middle: Qualitative assessment indicated that neural trajectories within each context for different $t_p$ bins were associated with different initial conditions and remained separate and ordered through the response. Bottom: Across the two contexts, neural trajectories formed two separate sets without altering their relative organization as a function of $t_p$. Both of these features were consistent with $A_2$. Filled circles depict states along each trajectory at a constant fraction of the trajectory length, illustrating speed differences across trajectories. +---PAGE_BREAK--- + +## Control of neural trajectories by initial condition within contexts + +We first employed KiNeT to validate that animals' behavior was predicted by the speed with which neural trajectories evolved over time. We reasoned that neural states evolving faster will reach the same destination on the trajectory in a shorter amount of time. Therefore, we estimated relative speed across the trajectories by performing a time alignment to identify the times when neural activity reached nearby points on each trajectory (Figure 4A). We then used this approach to analyze the geometrical structure of trajectories through the Set-Go epoch. + +To perform KiNeT, we binned trials from each gain and recording session into five groups according to $t_p$. +Neural responses from these trials were averaged, then PCA was applied to generate five neural trajectories +within the state space spanned by the first 10 PCs that explained 89% of the variance. We denote each trajectory +by $\Omega[i](t)$ (or $\Omega[i]$ for shorthand; a table with definitions of all symbols is provided in Methods) where $i$ +indexes the trajectory and $t$ represents elapsed time since Set. We estimated speed and position along each +$\Omega[i]$ relative to the trajectory associated with the middle (third) bin, which we refer to as the reference +trajectory $\Omega[\text{ref}]$.
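The nearest-point alignment at the core of KiNeT can be sketched numerically. The toy example below traces a single path at two speeds and recovers the factor-of-two speed difference from the slope of aligned times (function and variable names are hypothetical):

```python
import numpy as np

def kinet_align(omega_ref, omega_i, times_i):
    """For each state on the reference trajectory, return the time at which
    trajectory i passes nearest to it (Euclidean distance)."""
    aligned = []
    for s_ref in omega_ref:
        d = np.linalg.norm(omega_i - s_ref, axis=1)  # distance to every state on omega_i
        aligned.append(times_i[np.argmin(d)])        # time of the nearest state
    return np.array(aligned)

def path(u):
    # A shared 2-D path (quarter circle); only traversal speed will differ
    return np.column_stack([np.cos(u), np.sin(u)])

t_ref = np.linspace(0.0, 1.0, 100)
omega_ref = path(0.5 * np.pi * t_ref)           # path covered in 1 s
t_slow = np.linspace(0.0, 2.0, 200)
omega_slow = path(0.5 * np.pi * t_slow / 2.0)   # same path covered in 2 s

t_aligned = kinet_align(omega_ref, omega_slow, t_slow)

# Slope of aligned time vs. reference time is ~2: the second trajectory
# evolves at half the speed of the reference.
slope = np.polyfit(t_ref, t_aligned, 1)[0]
```

A fitted slope above 1 flags a trajectory that evolves more slowly than the reference, and a slope below 1 a faster one.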
We denoted neural states on the reference trajectory by $s[\text{ref}][j]$, where $j$ indexes states +through time along $\Omega[\text{ref}]$. We used curly brackets to refer to a collection of indices. For example, $s[\text{ref}]\{j\}$ +refers to all states on $\Omega[\text{ref}]$, and $t[\text{ref}]\{j\}$ corresponds to the time points on $\Omega[\text{ref}]$ associated with those +states. + +For each $s[\text{ref}][j]$, we found the nearest point on all non-reference trajectories ($i \neq \text{ref}$) as measured by +Euclidean distance. We denoted the collection of the nearest states on $\Omega[i]$ by $s[i]\{j\}$, and the +corresponding time points by $t[i]\{j\}$. The corresponding time points along different trajectories provided the +means for comparing speed: if $t[i]\{j\}$ were systematically greater than $t[\text{ref}]\{j\}$, we could conclude that +$\Omega[i]$ evolves at a slower speed compared to $\Omega[\text{ref}]$ (Figure 4A). This relationship can be readily inferred from +the slope of the line that relates $t[i]\{j\}$ to $t[\text{ref}]\{j\}$. While a unity slope indicates that the speeds are the +same, higher and lower values would indicate slower and faster speeds of $\Omega[i]$ compared to $\Omega[\text{ref}]$, +respectively. + +Applying KiNeT to neural trajectories in the Set-Go epoch indicated that $\Omega\{i\}$ evolved at similar speeds +immediately following the Set cue (unity slope). Later, speed profiles diverged such that neural trajectories +associated with longer intervals slowed down and trajectories associated with shorter intervals sped up for +---PAGE_BREAK--- + +both gain contexts (Figure 4B). This is consistent with previous work that the key variable predicting $t_p$ is the speed with which neural trajectories evolve (Wang et al. 2017). One common concern in this type of analysis is that averaging firing rates across trials of slightly different duration could lead to a biased estimate of neural trajectory. 
To ensure that our estimates of average speed were robust, we applied KiNeT to neural trajectories while aligning trials to Go instead of Set. Results remained unchanged and confirmed that the speed of neural trajectories predicted $t_p$ across trials (Figure S2). + +Having validated speed as the key variable for predicting $t_p$, we focused on our first hypothesis that the evolution of activity in the Ready-Set epoch parametrizes the initial conditions needed to control the speed of dynamics in the production epoch for each context. Because speed is a scalar variable and has an orderly relationship to $t_p$, this hypothesis predicts that the neural trajectories (and their initial conditions) should also have an orderly organizational structure with respect to $t_p$. In other words, there should be a systematic relationship between the vectors connecting nearest points across neural trajectories and the $t_p$ to which they correspond. We tested this prediction in two complementary ways. First, we performed an *analysis of direction* testing whether the vectors connecting nearby trajectories were more aligned than expected by chance. Second, we performed an *analysis of distance* asking whether the distance between the reference trajectory and the other trajectories respected the distance between the corresponding speeds. + +**Analysis of direction.** We used KiNeT to measure the angle between vectors connecting nearest points (Euclidean distance) across consecutive trajectories ordered by $t_p$. Let us use $\vec{\Delta}_{\Omega[i][j]}$ to denote the difference vector ($\vec{\Delta}$) connecting nearest points across trajectories (subscript $\Omega$) between $s[i][j]$ and $s[i+1][j]$. According to our hypothesis, the direction of $\vec{\Delta}_{\Omega[i][j]}$ should be similar to $\vec{\Delta}_{\Omega[i+1][j]}$ connecting $s[i+1][j]$ to $s[i+2][j]$. To test this, we measured the angle between these two difference vectors, denoted by $\theta_{\Omega[i][j]}$. 
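The angle between consecutive difference vectors can be computed as follows (toy three-dimensional states with an orderly layout, chosen purely for illustration):

```python
import numpy as np

def angle_deg(v1, v2):
    """Angle (degrees) between two difference vectors."""
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Nearest states at one time index j on three consecutive t_p-ordered
# trajectories (hypothetical values):
s_i = np.array([0.0, 0.0, 1.0])
s_i1 = np.array([1.0, 0.2, 1.0])
s_i2 = np.array([2.0, 0.3, 1.0])

delta_a = s_i1 - s_i    # vector from trajectory i to i+1
delta_b = s_i2 - s_i1   # vector from trajectory i+1 to i+2

theta = angle_deg(delta_a, delta_b)  # far below 90 deg for this ordered layout
```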
The null hypothesis of unordered trajectories predicts that $\vec{\Delta}_{\Omega[i][j]}$ and $\vec{\Delta}_{\Omega[i+1][j]}$ should be unaligned on average ($\bar{\theta}_{\Omega\{i\}[j]} = 90^\circ$; the bar signifies the mean of the angles over the index $i$ in curly brackets). Results indicated that $\theta_{\Omega[i][j]}$ was substantially smaller than 90 degrees for both contexts (Figure 4C). This provides the first line of quantitative evidence for an orderly organization of neural trajectories with respect to $t_p$.

**Analysis of distance.** We used KiNeT to measure the length of the vectors connecting nearest points on $\Omega[i]$ and $\Omega[\text{ref}]$, denoted by $D[i][j]$, at different time points ($[j]$). This analysis revealed that trajectories evolving faster than $\Omega[\text{ref}]$ and those evolving slower than $\Omega[\text{ref}]$ were located on opposite sides of $\Omega[\text{ref}]$, and that the magnitude of $D[i][j]$ increased progressively for larger speed differences (Figure 4D). This analysis provided clear evidence that, for each context, the relative position of neural trajectories and their initial conditions in the state space were predictive of $t_p$.

To further substantiate the link between the geometry of neural trajectories and behavior, we asked whether trial-by-trial fluctuations of $t_p$ for each $t_s$ could be explained in terms of systematic fluctuations of the speed and location of neural trajectories in the state space. We reasoned that fluctuations of $t_p$ partially reflect animals' misestimation of $t_s$. This predicts that larger values of $t_p$ for the same $t_s$ result from slower neural trajectories whose location in state space is biased toward longer values of $t_s$. We tested this prediction by using KiNeT to examine the relative geometrical organization of neural trajectories associated with larger and smaller values of $t_p$ for the same $t_s$.
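A minimal version of the signed distance measure used in the analysis of distance might look like the following. The sign convention (positive toward slower trajectories) follows the text, but representing that side by an explicit unit vector `slow_axis` is a simplification introduced for this example.

```python
import numpy as np

def signed_distance(s_i, s_ref, slow_axis):
    """Length of the vector from the reference state s_ref to the matched
    state s_i, signed positive when s_i lies on the side of the reference
    trajectory associated with slower dynamics (larger t_p).

    slow_axis: unit vector pointing from the fast side toward the slow side."""
    delta = s_i - s_ref
    return float(np.sign(np.dot(delta, slow_axis)) * np.linalg.norm(delta))
```

With this convention, trajectories faster than the reference yield negative $D[i][j]$ and slower ones positive values, with magnitude growing as the speed difference grows.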
Results indicated that neural trajectories that correspond to larger values of $t_p$ evolved at slower speeds and were shifted in state space toward larger values of $t_s$ (Figure S3). This analysis extends the correspondence between behavior and the organization of neural trajectories to include animals' trial-by-trial variability. Together, these results provide strong evidence for our first hypothesis: that activity during the Ready-Set epoch parametrically adjusts the system's initial condition (i.e., the neural state at the time of Set), which in turn controls the speed of the neural trajectory in the Set-Go epoch and the consequent $t_p$.

**Figure 4.** Kinematic analysis of neural trajectories (KiNeT). (A) Illustration of KiNeT. Top: a collection of trajectories $\Omega\{i\}$ that originate at Set, are organized by initial condition, and terminate at Go. Tick marks on the trajectories indicate unit time. Darker trajectories evolve at a lower speed, as demonstrated by the distance between tick marks and the dashed line connecting tick marks. KiNeT quantifies the position of trajectories, and the speed with which states evolve along them, relative to a reference trajectory (middle trajectory, $\Omega[\text{ref}]$). To do so, it finds a collection of states $s[i]\{j\}$ on each $\Omega[i]$ that are closest to $\Omega[\text{ref}]$ through time. Trajectories that evolve at a slower speed require more time to reach those states, leading to larger values of $t[i][j]$. KiNeT quantifies relative position by a distance measure, $D[i][j]$ (the distance between $\Omega[i]$ and $\Omega[\text{ref}]$ at $t[i][j]$), that is signed (blue arrows) and is considered positive when $\Omega[i]$ corresponds to larger values of $t_p$ (slower trajectories). Middle: trajectories rotated such that the time axis is normal to the plane of illustration, denoted by a circle with an inscribed cross. Filled circles represent the states $s\{i\}[j]$ aligned to $s[\text{ref}][j]$ for a particular $j$.
Vectors $\vec{\Delta}_{\Omega[i][j]}$ connect states on trajectories of shorter to longer $t_p$. Angles $\theta_{\Omega[i][j]}$ between successive $\vec{\Delta}_{\Omega[i][j]}$ provide a measure of $t_p$-related structure. Bottom: equations defining the relevant variables. (B) Speed of neural trajectories compared to $\Omega[\text{ref}]$, computed for each context separately. Shortly after Set, all trajectories evolved with similar speed (unity slope). Afterwards, $\Omega[i]$ associated with shorter $t_s$ evolved faster than $\Omega[\text{ref}]$, as indicated by a slope of less than unity (i.e., $t[i]\{j\}$ smaller than $t[\text{ref}]\{j\}$), and $\Omega[i]$ associated with longer $t_s$ evolved slower than $\Omega[\text{ref}]$. Filled circles on the unity line indicate $j$ values for which $t[i][j]$ was significantly correlated with $t_p[i]$ (bootstrap test, r > 0, p < 0.05, n = 100). (C) Relative position of adjacent neural trajectories, computed for each context separately. $\bar{\theta}_{\Omega\{i\}[j]}$ (the bar signifies the average across trajectories) was significantly smaller than 90 degrees (filled circles) for the majority of the Set-Go epoch (bootstrap test, $\bar{\theta}_{\Omega\{i\}[j]} < 90$, p < 0.05, n = 100), indicating that $\vec{\Delta}_{\Omega\{i\}[j]}$ were similar across $\Omega[i]$. (D) Distance of neural trajectories to $\Omega[\text{ref}]$, computed for each context separately. Distance measures ($D[i][j]$) indicated that $\Omega\{i\}$ had the same ordering as $t_p\{i\}$. Significance tested using bootstrap samples for each $j$ (p < 0.05, n = 100).

# Control of neural trajectories across contexts by external input

To identify the mechanism by which flexible speed control might be generalized across contexts, we first tested whether both gain and $t_s$ are encoded by initial conditions ($A_1$).
According to this alternative, neural trajectories should follow the organization of $t_p$ across both contexts (Figure 3A), in addition to within each context (Figure 4C). To test $A_1$, we sorted neural trajectories across the two contexts according to $t_p$ (Figure 5A, top), and asked whether the angle between vectors connecting nearest points ($\theta_{\Omega[i][j]}$) was significantly less than 90 degrees (Figure 5A, bottom). Unlike the within-context results (Figure 4C), when neural trajectories from both contexts were combined, the angle between nearby neural trajectories was significantly larger than 90 degrees ($p < 0.05$ for all $j$; Figure 5B). This indicates that trajectories across contexts do not have an orderly relationship to $t_p$ ($A_1$: less than 90 deg), even though they exhibit a structural organization that deviates from randomness (90 deg).

Next, we investigated the hypothesis that the context cue acts as a persistent external input ($A_2$; Figure 3B), leading the system to establish structurally similar but distinct collections of neural trajectories across contexts (Figure 6A,B). This hypothesis can be broken down into a set of specific geometrical constraints in the Set-Go epoch. We determined whether the data met these constraints by testing whether the converse of each could be rejected, as illustrated in Figure 6C-F. If we denote the collections of neural trajectories in the two contexts by $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$, these constraints and tests can be formalized as follows:

1. $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ should evolve in the same direction as a function of time with different average speeds (i.e., slower for $\Omega_{g=1.5}\{i\}$). If the converse were true (i.e., trajectories evolving in different directions, Figure 6C, left), we would expect no systematic relationship between time points across the two contexts.
Results from KiNeT across contexts (see Methods) revealed a monotonically increasing relationship between $t_{g=1}[\text{ref}]\{j\}$ and $t_{g=1.5}[\text{ref}]\{j\}$, confirming that Set-Go trajectories across contexts evolved in the same direction (Figure 6C, right). Moreover, $t_{g=1.5}[\text{ref}]\{j\}$ had a higher rate of change than $t_{g=1}[\text{ref}]\{j\}$, indicating that average speeds were slower in the $g=1.5$ condition. This suggests that speed control played a consistent role across contexts (Figure 6A).

2. $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ should be organized similarly with respect to $t_p$. In other words, the vector that connects nearby points in $\Omega_{g=1}\{i\}$ should be aligned to its counterpart that connects nearby points in $\Omega_{g=1.5}\{i\}$. To evaluate this constraint, we used the angle between pairs of vectors that connect nearby points within each context. We use an example to illustrate the procedure (Figure 6B). Consider one vector connecting nearby points in two successive neural trajectories in the gain of 1 (e.g., $\Omega_{g=1}[1]$ and $\Omega_{g=1}[2]$), and another vector connecting the corresponding points in the gain of 1.5 (e.g., $\Omega_{g=1.5}[1]$ and $\Omega_{g=1.5}[2]$). A similar orientation between the two groups of trajectories (Figure 6A) would cause the angle between these vectors ($\theta_g[i]$) to be significantly smaller than 90 degrees. If instead $\Omega_{g=1}$ and $\Omega_{g=1.5}$ were oriented differently (Figure 6D, left) or had no consistent relationship, these vectors would be on average orthogonal. Using KiNeT, we found that this angle ($\theta_g[i][j]$) was consistently smaller than 90 degrees throughout the Set-Go epoch, providing quantitative evidence that the collections of neural trajectories associated with the two gains were structurally similar (Figure 6A).

3.
If context information is provided as a tonic input, $\Omega_{g=1}$ and $\Omega_{g=1.5}$ should be separated in state space along a context axis throughout the Set-Go epoch. To verify this constraint, we assumed that the neural trajectories for each context were embedded in distinct manifolds and compared the minimum distance between the two manifolds ($D_g$) to an analogous distance metric within each manifold (Figure 6B; see Methods). These distance measures should be the same if the groups of trajectories associated with the two contexts overlap in state space (Figure 6E, left). However, we found distances to be substantially larger across contexts than within contexts (Figure 6E, right). This confirms that the groups of trajectories associated with the two contexts were separated in state space (Figure 6A).

4. The results so far reject a number of alternative hypotheses (Figure 6C,D,E) and leave two possibilities: either $\Omega_{g=1}$ and $\Omega_{g=1.5}$ are separated along the same dimension that separates trajectories within each context (Figure 6F, left), or they are separated along a distinct input axis in accordance with $A_2$ (Figure 6A). To distinguish between these two, we asked whether the vector associated with the minimum distance $D_g[j]$ ($\vec{\Delta}_g[j]$) was aligned to the vectors connecting nearby states within each context ($\vec{\Delta}_{\Omega\{i\}[j]}$). Analysis of the angle between these vectors ($\theta_g\{i\}[j]$) indicated that the two were orthogonal for almost all $j$ (Figure 6F, right). This ruled out the remaining possibility that trajectories across contexts were separated along the same dimension as within contexts (Figure 6F, left).

Having validated these constraints quantitatively, we concluded that population activity across gains formed two groups of isomorphic speed-dependent neural trajectories (Figure 6A).
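The across- versus within-context comparison in constraint 3 reduces to a minimum distance between two clouds of states. A brute-force sketch, assuming states are pooled into plain arrays (the actual Methods may compute this differently):

```python
import numpy as np

def min_set_distance(A, B):
    """Minimum Euclidean distance between any state in A and any state in B.
    A: (N, D) array of states pooled across one group of trajectories.
    B: (M, D) array of states pooled across the other group."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(d.min())
```

An analogous call on two subsets of trajectories drawn from the same context gives the within-context baseline; separation is supported when the across-context value is substantially larger than that baseline.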
These results support our primary hypothesis that flexible control of speed based on gain context was established by a context-dependent persistent external input (Figure 3B).

**Figure 5.** Neural trajectories across contexts do not form a single structure reflecting $t_p$. (A) A schematic illustrating neural trajectories across the two contexts after Set. Top: The expected geometrical structure under $A_1$. Neural trajectories for the gain of 1 (gray) and 1.5 (red) are organized along a single initial condition axis and ordered with respect to $t_p$. Bottom: A rotation of the top showing neural trajectories with the time axis normal to the plane of illustration. If the neural trajectories were organized as such, then the angle between vectors connecting nearby points (e.g., $\theta_{\Omega}[3][j]$) would be less than 90 degrees ($A_1$, **Figure 3A**). (B) Left: orientation of vectors connecting adjacent neural trajectories combined across the two contexts. Right: possible geometrical structures, including $A_1$ (bottom), $A_2$ (top), and unorganized (middle). $\bar{\theta}_{\Omega\{i\}[j]}$ was larger than 90 degrees for all $j$ in the Set-Go interval, consistent with $A_2$. Shaded regions represent 90% bootstrap confidence intervals.

**Figure 6.** Neural trajectories comprise distinct but similar structures across gains. (A) A schematic showing the organization of neural trajectories in a subspace spanned by Input, Initial condition, and Time if context were controlled by a persistent external input. If DMFC were to receive a gain-dependent input, we would expect neural trajectories from Set to Go to be separated along an input subspace, generating two similar but separated $t_p$-related structures for each context ($A_2$, **Figure 3B**). We verified this geometrical structure by excluding alternative structures (interdictory circles indicate rejected alternatives).
(B) An illustration of neural trajectories for $g=1$ (gray filled circles) and $g=1.5$ (red filled circles) with the time axis normal to the plane of illustration. Gray and red arrows show vectors connecting nearby points in each context independently ($\vec{\Delta}_{g=1}$ and $\vec{\Delta}_{g=1.5}$). When the neural trajectories associated with the two gains are structured similarly, these vectors are aligned and the angle between them ($\theta_g$) is less than 90 deg. We used KiNeT to test this possibility (see Methods). (C) Left: Schematic illustrating a condition in which the time axes for trajectories in the two contexts (gray and red) are not aligned. Right: $t_{g=1}[\text{ref}]\{j\}$ increased monotonically with $t_{g=1.5}[\text{ref}]\{j\}$, indicating that the time axes across contexts were aligned. Values of $t_{g=1.5}[\text{ref}]\{j\}$ above the unity line indicate that activity evolved at a slower speed in the $g=1.5$ context. The dashed gray line represents unity, and the dashed red line represents the expected values for $t_{g=1.5}[\text{ref}]\{j\}$ if speeds were scaled perfectly by a factor of 1.5. (D) Left: Schematic illustrating an example configuration in which $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ do not share the same $t_p$-related structure. Right: $\bar{\theta}_g[i][j]$ was significantly less than 90 degrees for all $j$, indicating that the $t_p$-related structure was similar across the two contexts. (E) Left: Schematic illustrating a condition in which $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ are overlapping. Right: The minimum distance $D_g$ across contexts (black line) was substantially larger than that found between subsets of trajectories within contexts (red and gray lines, see Methods), indicating that the two sets of trajectories were not overlapping. (F) Left: Schematic illustrating a condition in which $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ are separated along the same direction along which neural trajectories within each context were separated.
Right: $\vec{\Delta}_g[j]$ was orthogonal to $\vec{\Delta}_{\Omega\{i\}[j]}$ representing the $t_p$-related structure within each context (gray and red lines). In (C-E), shaded regions represent 90% bootstrap confidence intervals, and circles represent statistical significance (p < 0.05, bootstrap test, n = 100).

# RNN models recapitulate the predictions of inputs and initial conditions

The geometry and dynamics of DMFC responses were consistent with the hypothesis that behavioral flexibility in the RSG task relies on systematic adjustments of the initial conditions and external inputs of a dynamical system. Motivated by recent advances in the use of recurrent neural networks (RNNs) as a tool for testing hypotheses about cortical dynamics (Mante et al. 2013; Hennequin et al. 2014; Sussillo et al. 2015; Chaisangmongkon et al. 2017; Wang et al. 2017), we investigated whether RNNs trained to perform the RSG task would establish similar geometrical structures and dynamics.

We focused on a generic class of RNNs comprised of synaptically coupled nonlinear units that receive nonspecific background activity (see Methods). First, we tested whether RNNs could perform the RSG task in a single gain context ($g=1$ or $g=1.5$ only). To do so, we created RNNs that received an additional input encoding Ready and Set as two brief pulses separated by $t_s$. We trained these RNNs to generate a linear output function after Set that reached a threshold (Go) at the desired production interval, $t_t = gt_s$. Analysis of successfully trained RNNs revealed that they, like DMFC, controlled $t_p$ by adjusting the speed of neural trajectories within a low-dimensional geometrical structure parameterized by initial conditions (Figure S4).

Next, we investigated RNNs trained to perform the RSG task across multiple gain values.
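The training setup described above can be made concrete by sketching how one trial's input and target might be constructed. All parameter values (time step, pulse width, epoch paddings) and the two-channel layout are illustrative assumptions, not the specification of the networks used in the study.

```python
import numpy as np

def rsg_trial(t_s, gain, dt=10, tonic=True, pulse=20, pre=100, post=200):
    """Build one RSG trial as an input array (time bins x 2 channels) and a
    ramp target. Channel 0 carries brief Ready and Set pulses separated by
    t_s; channel 1 carries the gain context, either throughout the trial
    (tonic) or as a transient pulse before Ready. Times are in ms."""
    t_t = gain * t_s                       # target production interval
    n = (pre + t_s + int(t_t) + post) // dt
    x = np.zeros((n, 2))
    ready, set_ = pre // dt, (pre + t_s) // dt
    w = max(1, pulse // dt)
    x[ready:ready + w, 0] = 1.0            # Ready pulse
    x[set_:set_ + w, 0] = 1.0              # Set pulse
    if tonic:
        x[:, 1] = gain - 1.0               # persistent context input
    else:
        x[:w, 1] = gain - 1.0              # transient context cue before Ready
    # Target: linear ramp after Set that hits threshold 1 at Set + t_t
    y = np.zeros(n)
    y[set_:] = np.clip((np.arange(n)[set_:] - set_) * dt / t_t, 0.0, 1.0)
    return x, y
```

Training then amounts to minimizing the error between the network's read-out and `y` over trials sampled across $t_s$ (and, for the multi-gain networks, across contexts).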
Our primary aim was to verify the importance of a persistent gain-dependent input in establishing isomorphic geometrical structures similar to DMFC (Figures 3, 6). To do so, we created RNNs with two different architectures: one in which the gain information was provided by the level of a persistent input, and another in which the gain information was provided by a transient pulse before the Ready cue. We refer to these networks as tonic-input RNNs and transient-input RNNs, respectively (Figure 7A). We used the tonic-input RNN as a direct test of whether a gain-dependent persistent input could emulate the geometrical structure of responses in DMFC, and the transient-input RNN to test whether such persistence was necessary.

Using PCA and KiNeT, we found that neural trajectories in the two networks were structured differently. In the tonic-input RNN, trajectories formed two isomorphic structures separated along the dimension associated with the gain-dependent persistent input (Figure 7B). In contrast, trajectories generated by the transient-input RNN were better described as coalescing toward a single structure parameterized by initial condition (Figure 7C). To verify these observations quantitatively, we evaluated the geometry of neural trajectories in the two RNN variants using the same analyses we performed on DMFC activity. In particular, we sorted trajectories with respect to $t_p$ across the two gain contexts ($g=1$ and $g=1.5$) and quantified the angle between vectors connecting nearest points ($\theta_{\Omega}[i][j]$). As noted in the analysis of DMFC, this angle is expected to be acute if trajectories form a single structure ($A_1$: $\bar{\theta}_{\Omega}\{i\}[j] < 90^\circ$), and obtuse if trajectories form two gain-dependent
As predicted, the tonic-input RNNs solved the task by forming two isomorphic structures ($A_2$), indicating that when a persistent gain-dependent input is present, RNNs rely on a solution with separate gain-dependent geometrical structures (Figure 7D). In contrast, in the transient-input RNNs, angles between consecutive trajectories were acute ($A_1$; Figure 7E). This result strengthens the conclusion about the importance of a persistent gain-dependent input in establishing separate isomorphic structures.

We also compared the two RNNs in terms of the distance between trajectories across the two contexts using the same metric ($D_g$) we previously used for the analysis of DMFC (Figure 6E). The minimum distance between $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ at the time of Set was consistently smaller in the transient-input RNN compared to the tonic-input RNN (Figure 7F,G). In some of the successfully trained transient networks, $D_g$ was larger at the time of Set, but this distance consistently decayed from Set to Go. In contrast, in the tonic-input RNN, $D_g$ remained large throughout the production epoch. We compared the two types of RNN quantitatively by comparing values of $D_g$ in each RNN normalized by the distance between the trajectories that correspond to the shortest and longest $t_p$ bins for the $g=1$ context in the same RNN. In the tonic networks, the minimum normalized distance ranged between 0.4 and 1.6, which was nearly 10 times larger than that observed in the transient networks (0.003 to 0.04). Additionally, trajectories in all transient networks gradually established a $t_p$-related structure consistent with $A_1$. In contrast, trajectories in the tonic networks, like the DMFC data, were characterized by two separate $t_p$-related structures, one for each gain context.
These results provide an important theoretical confirmation of our original dynamical systems hypothesis: when gain information is provided as a persistent input, the system establishes distinct and isomorphic gain-dependent sets of neural trajectories.

**Figure 7.** RNNs with tonic but not transient input captured the structure of activity in DMFC. (A) Schematic illustration of the recurrent neural networks (RNNs). The networks are provided with brief Ready and Set pulses separated in time by $t_s$, after which the activity projected onto the output space by weighting function $z$ must generate a ramp to a threshold (dashed line) at the context-dependent $t_t$. Additionally, each network is provided with a context-dependent “input” which either terminates prior to Ready (“Transient input,” top) or persists throughout the trial (“Tonic input,” bottom). (B) Top: state-space projections of tonic-input RNN activity in the Set-Go epoch within the plane spanned by Initial condition (ordinate) and Time (abscissa). Within this plane of view, neural trajectories within each context are separated based on $t_p$ but overlap with respect to gain. Bottom: The same neural trajectories shown in the top panel viewed within the plane spanned by Input (ordinate) and Time (abscissa). In this view, neural trajectories are separated by gain but overlap with respect to $t_p$ within each gain. Results are shown with the same format as Figure 3. (C) Same as panel B for the transient-input RNN. Top: Trajectories, when viewed within the plane of Initial condition and Time, are organized with respect to $t_p$ across both gains. Bottom: when viewed within the plane of Input and Time, trajectories are highly overlapping irrespective of gain. (D) Analysis of direction in the tonic-input RNN with the same format as **Figure 5B**. $\bar{\theta}_{\Omega}[i][j]$ was larger than 90 deg for the entire Set-Go epoch.
This is consistent with a geometry in which the two gains form two separate sets of isomorphic neural trajectories (inset). (E) Same as panel D for the transient-input network, for which $\bar{\theta}_{\Omega}[i][j]$ was consistently less than 90 deg. This is consistent with a geometry in which neural trajectories are organized with respect to $t_p$ regardless of the gain context (inset). (F,G) Trajectory separation across contexts for the tonic-input (F) and transient-input (G) networks with the same format as **Figure 6E**. $D_g$ was substantially larger throughout the Set-Go epoch in the tonic-input network (F). In (D-G), shaded regions represent 90% bootstrap confidence intervals, and circles represent statistical significance ($p < 0.05$, bootstrap test, n = 100).

# Discussion

Linking behavioral computations to neural mechanisms requires that the space of models we consider suitably match the computational demands of the behavior. In this study, we focused on the computations that enable the brain to exert precise and flexible control over movement initiation time (Wang et al. 2017). Because such temporal control depends on intrinsically dynamic patterns of neural activity, we employed a dynamical systems perspective to understand the underlying computational logic. An important feature of the dynamical systems view is that it obviates the need for the system to harbor an explicit representation of experimentally defined task-relevant variables ($t_s$, $g$, and $t_t$). Instead, neural signals that control behavior may be more appropriately characterized in terms of constraints imposed by latent dynamics that hold an implicit representation of task-relevant variables. This viewpoint has a strong basis in current theories of motor control that posit an implicit representation of kinematic information in motor cortical activity during movements (Churchland et al. 2010; Churchland et al. 2012; Chaisangmongkon et al.
2017; Fetz 1992; Shenoy et al. 2013; Michaels et al. 2016). These theories cast movement control in terms of the function of an inverse model (Wolpert & Kawato 1998; Todorov & Jordan 2002; Sabes 2000) that maps a desired endpoint onto suitable control mechanisms during movement. We built upon this framework by evaluating the utility of dynamical systems theory in characterizing the control mechanisms the brain uses to produce a desired interval ($t_t$) jointly specified by gain and $t_s$ ($t_t = gt_s$).

Results indicated that flexible control of behavior could be parsed in terms of systematic adjustments to the initial conditions and external inputs of a dynamical system. The activity structure within each gain context indicated that the system's initial conditions controlled $t_p$ by parameterizing the speed of neural trajectories (Jazayeri & Shadlen 2015; Wang et al. 2017). The displacement of neural trajectories in the state space as a function of gain, and the lack of a structural representation of $t_p$ across both gains, suggested that DMFC received the gain information as a context-dependent tonic input. Following recent advances in using RNNs to generate and test hypotheses about dynamical systems (Mante et al. 2013; Rigotti et al. 2010; Hennequin et al. 2014; Rajan et al. 2016; Sussillo et al. 2015; Chaisangmongkon et al. 2017), we verified this interpretation by analyzing the behavior of different RNN models trained to perform the RSG task with either tonic or transient context-dependent inputs. Although both networks used initial conditions to set the speed of neural trajectories, only the tonic-input RNNs reliably established separate structures of neural trajectories across gains, similar to what we found in DMFC.

Although we do not know the constraints that led the brain to establish separate geometrical structures, we can speculate about potential computational advantages associated with this particular solution.
First and foremost, this may be a particularly robust solution; because the gain information was provided by a persistent visual cue, the brain could use this input as a reliable signal to modulate neural dynamics in RSG. This solution may also reflect animals' learning strategy. We trained monkeys to perform the RSG task with two gain contexts. At one extreme, animals could have treated these as completely different tasks, leading to completely unrelated response structures for the two gains. At the other extreme, animals could have established a single parametric solution that would enable them to perform the two contexts as part of a single continuum (e.g., represent $t_t$). DMFC responses, however, did not match either extreme. Instead, the system established what might be viewed as a modular solution comprised of two separate isomorphic structures. We take this as evidence that the brain sought similar solutions for the two contexts, but did so while keeping the solutions separated in the state space. This strategy preserves a separable, unambiguous representation of gain and $t_s$ at the population level (Machens et al. 2010; Mante et al. 2013; Kobak et al. 2016) and provides the additional flexibility of adjusting the two parameters independently. Future extensions of our experimental paradigm to cases where context information is not present throughout the trial (e.g., internally inferred rules) might provide a more direct test of these possibilities.

Regardless of the learning strategies and constraints that shaped DMFC responses, our results highlight an important computational role for inputs that deviates from traditional views. We found that changing the level of a static input can generalize an arbitrary stimulus-response mapping in the RSG task to a new context.
Similar inferences can be made from other recent studies that have evaluated the computational utility of inputs that encode task rules and behavioral contexts (Mante et al. 2013; Song et al. 2016; Chaisangmongkon et al. 2017). Extending this idea, it may be possible for the system to use multiple orthogonal input vectors to flexibly and rapidly switch between sensorimotor mappings along different dimensions. Together, these findings suggest that a key function of cortical inputs may be to flexibly reconfigure the intrinsic dynamics of cortical circuits by driving the system to different regions of the state space. This would allow the same group of neurons to access a reservoir of latent dynamics needed to perform different task-relevant computations.

Our results raise a number of additional important questions. First, future work should identify the neurobiological substrate of the putative context-dependent input to DMFC in the RSG task, which may lie among various cortical and subcortical areas (Lu et al. 1994; Bates & Goldman-Rakic 1993; Wang et al. 2005; Akkal et al. 2007; Wallis et al. 2001). The nature of the input is also unknown. In our RNN models, context information was provided by an external drive and was indistinguishable from recurrent inputs from the perspective of individual units. In cortex, reconfiguration of circuit dynamics may be achieved either by an external drive, similar to the function of thalamic relay signals, or through targeted modulation of neural activity (Harris & Thiele 2011; Nadim & Bucher 2014). Second, while the signals recorded in this study were consistent with a prominent role for DMFC in RSG, other brain areas, such as the thalamus (Guo et al. 2017; Schmitt et al. 2017) and prefrontal cortex (Miller & Cohen 2001), are also likely to help maintain the observed dynamics.
Third, although we assumed that recurrent interactions were fixed during our experiment, it is almost certain that synaptic plasticity plays a key role as the network learns to incorporate context-dependent inputs (Kleim et al. 1998; Pascual-Leone et al. 1995; Yang et al. 2014; Xu et al. 2009). Finally, the persistent separation of neural trajectories observed in DMFC allowed for a dynamical account that did not require the invocation of “hidden” network states to explain timing behavior (Buonomano & Merzenich 1995; Karmarkar & Buonomano 2007; Murray & Escola 2017) or contextual control (Stokes et al. 2013). However, it is possible that factors not measured by extracellular recording (e.g., short-term synaptic plasticity) contribute to both contextual control and timing behavior in RSG and similar tasks. These open questions aside, our results provide a novel way to bridge the divide between neural activity and behavior using the language of dynamical systems.

## References

Acerbi, L., Wolpert, D.M. & Vijayakumar, S., 2012. Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing. *PLoS Computational Biology*, 8(11), p.e1002771.

Afshar, A. et al., 2011. Single-trial neural correlates of arm movement preparation. *Neuron*, 71(3), pp.555–564.

Akkal, D., Dum, R.P. & Strick, P.L., 2007. Supplementary motor area and presupplementary motor area: targets of basal ganglia and cerebellar output. *The Journal of Neuroscience*, 27(40), pp.10659–10673.

Bates, J.F. & Goldman-Rakic, P.S., 1993. Prefrontal connections of medial motor areas in the rhesus monkey. *The Journal of Comparative Neurology*, 336(2), pp.211–228.

Brass, M. & von Cramon, D.Y., 2002. The role of the frontal cortex in task preparation. *Cerebral Cortex*, 12(9), pp.908–914.

Buonomano, D.V. & Merzenich, M.M., 1995.
Temporal information transformed into a spatial code by a neural network with realistic properties. *Science*, 267(5200), pp.1028–1030.

Carnevale, F. et al., 2015. Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty. *Neuron*. Available at: http://dx.doi.org/10.1016/j.neuron.2015.04.014.

Chaisangmongkon, W. et al., 2017. Computing by Robust Transience: How the Fronto-Parietal Network Performs Sequential, Category-Based Decisions. *Neuron*, 93(6), pp.1504–1517.e4.

Churchland, M.M. et al., 2010. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? *Neuron*, 68(3), pp.387–400.

Churchland, M.M. et al., 2012. Neural population dynamics during reaching. *Nature*, 487(7405), pp.51–56.

Churchland, M.M., Afshar, A. & Shenoy, K.V., 2006. A central source of movement variability. *Neuron*, 52(6), pp.1085–1096.

Coull, J.T. et al., 2004. Functional anatomy of the attentional modulation of time estimation. *Science*, 303(5663), pp.1506–1508.

Cui, X. et al., 2009. Ready...go: Amplitude of the FMRI signal encodes expectation of cue arrival time. *PLoS biology*, 7(8), p.e1000167.

Fetz, E.E., 1992. Are movement parameters recognizably coded in the activity of single neurons? *The Behavioral and brain sciences*. Available at: http://journals.cambridge.org/abstract_S0140525X00072599.

Garcia, C., 2012. A simple procedure for the comparison of covariance matrices. *BMC evolutionary biology*, 12, p.222.

Guo, Z.V. et al., 2017. Maintenance of persistent activity in a frontal thalamocortical loop. *Nature*, 545(7653), pp.181–186.

Halsband, U. et al., 1993. The role of premotor cortex and the supplementary motor area in the temporal control of movement in man. *Brain: a journal of neurology*, 116(Pt 1), pp.243–266.

Hanes, D.P. & Schall, J.D., 1996. Neural control of voluntary movement initiation. *Science*, 274(5286), pp.427–430.
Harris, K.D. & Thiele, A., 2011. Cortical state and attention. *Nature reviews. Neuroscience*, 12(9), pp.509–523.

Hennequin, G., Vogels, T.P. & Gerstner, W., 2014. Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. *Neuron*, 82(6), pp.1394–1406.

Isoda, M. & Hikosaka, O., 2007. Switching from automatic to controlled action by monkey medial frontal cortex. *Nature neuroscience*, 10(2), pp.240–248.

Isoda, M. & Tanji, J., 2003. Contrasting neuronal activity in the supplementary and frontal eye fields during temporal organization of multiple saccades. *Journal of neurophysiology*, 90(5), pp.3054–3065.

Jazayeri, M. & Shadlen, M.N., 2015. A Neural Mechanism for Sensing and Reproducing a Time Interval. *Current biology: CB*. Available at: http://dx.doi.org/10.1016/j.cub.2015.08.038.

Jazayeri, M. & Shadlen, M.N., 2010. Temporal context calibrates interval timing. *Nature neuroscience*, 13(8), pp.1020–1026.

Karmarkar, U.R. & Buonomano, D.V., 2007. Timing in the absence of clocks: encoding time in neural network states. *Neuron*, 53(3), pp.427–438.

Kim, J. et al., 2009. Inactivation of medial prefrontal cortex impairs time interval discrimination in rats. *Frontiers in behavioral neuroscience*, 3, p.38.

Kim, J. et al., 2013. Neural correlates of interval timing in rodent prefrontal cortex. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 33(34), pp.13834–13847.

Kleim, J.A., Barbay, S. & Nudo, R.J., 1998. Functional reorganization of the rat motor cortex following motor skill learning. *Journal of neurophysiology*, 80(6), pp.3321–3325.

Kobak, D. et al., 2016. Demixed principal component analysis of neural population data. *eLife*, 5. Available at: http://dx.doi.org/10.7554/eLife.10989.

Kunimatsu, J. & Tanaka, M., 2012. Alteration of the timing of self-initiated but not reactive saccades by electrical stimulation in the supplementary eye field.
*The European journal of neuroscience*, 36(9), pp.3258–3268.

Kurata, K. & Wise, S.P., 1988. Premotor and supplementary motor cortex in rhesus monkeys: neuronal activity during externally- and internally-instructed motor tasks. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 72(2), pp.237–248.

Lu, M.T., Preston, J.B. & Strick, P.L., 1994. Interconnections between the prefrontal cortex and the premotor areas in the frontal lobe. *The Journal of comparative neurology*, 341(3), pp.375–392.

Macar, F., Coull, J. & Vidal, F., 2006. The supplementary motor area in motor and perceptual time processing: fMRI studies. *Cognitive processing*, 7(2), pp.89–94.

Machens, C.K., Romo, R. & Brody, C.D., 2010. Functional, But Not Anatomical, Separation of "What" and "When" in Prefrontal Cortex. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 30(1), pp.350–360.

Maimon, G. & Assad, J.A., 2006. A cognitive signal for the proactive timing of action in macaque LIP. *Nature neuroscience*, 9(7), pp.948–955.

Mante, V. et al., 2013. Context-dependent computation by recurrent dynamics in prefrontal cortex. *Nature*, 503(7474), pp.78–84.

Matell, M.S., Meck, W.H. & Nicolelis, M.A.L., 2003. Interval timing and the encoding of signal duration by ensembles of cortical and striatal neurons. *Behavioral neuroscience*, 117(4), pp.760–773.

Matsuzaka, Y. & Tanji, J., 1996. Changing directions of forthcoming arm movements: neuronal activity in the presupplementary and supplementary motor area of monkey cerebral cortex. *Journal of neurophysiology*, 76(4), pp.2327–2342.

Meister, M.L.R., Hennig, J.A. & Huk, A.C., 2013. Signal multiplexing and single-neuron computations in lateral intraparietal area during decision-making. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 33(6), pp.2254–2267.

Merchant, H. et al., 2013.
Interval Tuning in the Primate Medial Premotor Cortex as a General Timing Mechanism. *Journal of Neuroscience*, 33(21), pp.9082–9096.

Merchant, H. et al., 2011. Measuring time with different neural chronometers during a synchronization-continuation task. *Proceedings of the National Academy of Sciences of the United States of America*, 108(49), pp.19784–19789.

Michaels, J.A. et al., 2015. Predicting Reaction Time from the Neural State Space of the Premotor and Parietal Grasping Network. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 35(32), pp.11415–11432.

Michaels, J.A., Dann, B. & Scherberger, H., 2016. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning. *PLoS computational biology*, 12(11), p.e1005175.

Miller, E.K. & Cohen, J.D., 2001. An integrative theory of prefrontal cortex function. *Annual review of neuroscience*, 24, pp.167–202.

Mita, A. et al., 2009. Interval time coding by neurons in the presupplementary and supplementary motor areas. *Nature neuroscience*, 12(4), pp.502–507.

Miyazaki, M., Nozaki, D. & Nakajima, Y., 2005. Testing Bayesian models of human coincidence timing. *Journal of neurophysiology*, 94(1), pp.395–399.

Murakami, M. et al., 2014. Neural antecedents of self-initiated actions in secondary motor cortex. *Nature neuroscience*, 17(11), pp.1574–1582.

Murray, J.M. & Escola, G.S., 2017. Learning multiple variable-speed sequences in striatum via cortical tutoring. *eLife*, 6. Available at: http://dx.doi.org/10.7554/eLife.26084.

Nadim, F. & Bucher, D., 2014. Neuromodulation of neurons and synapses. *Current opinion in neurobiology*, 29, pp.48–56.

Ohmae, S. et al., 2008. Neuronal activity related to anticipated and elapsed time in macaque supplementary eye field. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 184(4), pp.593–598.

Okano, K. & Tanji, J., 1987.
Neuronal activities in the primate motor fields of the agranular frontal cortex preceding visually triggered and self-paced movement. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 66(1), pp.155–166.

Pachitariu, M. et al., 2016. Kilosort: realtime spike-sorting for extracellular electrophysiology with hundreds of channels. *bioRxiv*, p.061481. Available at: http://www.biorxiv.org/content/early/2016/06/30/061481 [Accessed September 11, 2017].

Pascual-Leone, A. et al., 1995. Modulation of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine motor skills. *Journal of neurophysiology*, 74(3), pp.1037–1045.

Pfeuty, M., Ragot, R. & Pouthas, V., 2005. Relationship between CNV and timing of an upcoming event. *Neuroscience letters*, 382(1-2), pp.106–111.

Pruszynski, J.A. et al., 2011. Primary motor cortex underlies multi-joint integration for fast feedback control. *Nature*, 478(7369), pp.387–390.

Rajan, K. & Abbott, L.F., 2006. Eigenvalue spectra of random matrices for neural networks. *Physical review letters*, 97(18), p.188104.

Rajan, K., Harvey, C.D. & Tank, D.W., 2016. Recurrent Network Models of Sequence Generation and Memory. *Neuron*, 90(1), pp.128–142.

Rakitin, B.C. et al., 1998. Scalar expectancy theory and peak-interval timing in humans. *Journal of experimental psychology: Animal behavior processes*, 24(1), pp.15–33.

Rao, S.M., Mayer, A.R. & Harrington, D.L., 2001. The evolution of brain activation during temporal processing. *Nature neuroscience*, 4(3), pp.317–323.

Ray, S. & Heinen, S.J., 2015. A mechanism for decision rule discrimination by supplementary eye field neurons. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 233(2), pp.459–476.

Remington, E. & Jazayeri, M., 2017. Late Bayesian inference in sensorimotor behavior. *bioRxiv*, p.130062.
Available at: http://biorxiv.org/content/early/2017/04/24/130062 [Accessed April 24, 2017].

Rigotti, M. et al., 2010. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. *Frontiers in computational neuroscience*, 4, p.24.

Roitman, J.D. & Shadlen, M.N., 2002. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 22(21), pp.9475–9489.

Romo, R. & Schultz, W., 1987. Neuronal activity preceding self-initiated or externally timed arm movements in area 6 of monkey cortex. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 67(3), pp.656–662.

Romo, R. & Schultz, W., 1992. Role of primate basal ganglia and frontal cortex in the internal generation of movements. III. Neuronal activity in the supplementary motor area. *Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale*, 91(3), pp.396–407.

Rossant, C. et al., 2016. Spike sorting for large, dense electrode arrays. *Nature neuroscience*, 19(4), pp.634–641.

Sabes, P.N., 2000. The planning and control of reaching movements. *Current opinion in neurobiology*, 10(6), pp.740–746.

Schmitt, L.I. et al., 2017. Thalamic amplification of cortical connectivity sustains attentional control. *Nature*, 545(7653), pp.219–223.

Scott, S.H., 2004. Optimal feedback control and the neural basis of volitional motor control. *Nature reviews. Neuroscience*, 5(7), pp.532–546.

Seely, J.S. et al., 2016. Tensor Analysis Reveals Distinct Population Structure that Parallels the Different Computational Roles of Areas M1 and V1. *PLoS computational biology*, 12(11), p.e1005164.

Shenoy, K.V., Sahani, M. & Churchland, M.M., 2013. Cortical control of arm movements: a dynamical systems perspective.
*Annual review of neuroscience*, 36, pp.337–359.

Shima, K. et al., 1996. Role for cells in the presupplementary motor area in updating motor plans. *Proceedings of the National Academy of Sciences of the United States of America*, 93(16), pp.8694–8698.

Smith, N.J. et al., 2010. Reversible Inactivation of Rat Premotor Cortex Impairs Temporal Preparation, but not Inhibitory Control, During Simple Reaction-Time Performance. *Frontiers in integrative neuroscience*, 4, p.124.

Song, H.F., Yang, G.R. & Wang, X.-J., 2016. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework. *PLoS computational biology*, 12(2), p.e1004792.

Stokes, M.G. et al., 2013. Dynamic coding for cognitive control in prefrontal cortex. *Neuron*, 78(2), pp.364–375.

Sussillo, D. et al., 2015. A neural network that finds a naturalistic solution for the production of muscle activity. *Nature neuroscience*, 18(7), pp.1025–1033.

Sussillo, D. et al., 2016. LFADS - Latent Factor Analysis via Dynamical Systems. *arXiv* [cs.LG]. Available at: http://arxiv.org/abs/1608.06315.

Tanaka, M., 2005. Involvement of the central thalamus in the control of smooth pursuit eye movements. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 25(25), pp.5866–5876.

Thura, D. & Cisek, P., 2014. Deliberation and commitment in the premotor and primary motor cortex during dynamic decision making. *Neuron*, 81(6), pp.1401–1416.

Todorov, E. & Jordan, M.I., 2002. Optimal feedback control as a theory of motor coordination. *Nature neuroscience*, 5(11), pp.1226–1235.

Wallis, J.D., Anderson, K.C. & Miller, E.K., 2001. Single neurons in prefrontal cortex encode abstract rules. *Nature*, 411(6840), pp.953–956.

Wang, J. et al., 2017. Flexible timing by temporal scaling of cortical responses. *Nature neuroscience*. Available at: http://dx.doi.org/10.1038/s41593-017-0028-6.

Wang, Y. et al., 2005.
Prefrontal cortical cells projecting to the supplementary eye field and presupplementary motor area in the monkey. *Neuroscience research*, 53(1), pp.1–7.

Werbos, P.J., 1990. Backpropagation through time: what it does and how to do it. *Proceedings of the IEEE*, 78(10), pp.1550–1560.

Wolpert, D.M. & Kawato, M., 1998. Multiple paired forward and inverse models for motor control. *Neural networks: the official journal of the International Neural Network Society*, 11(7-8), pp.1317–1329.

Xu, M. et al., 2014. Representation of interval timing by temporally scalable firing patterns in rat prefrontal cortex. *Proceedings of the National Academy of Sciences of the United States of America*, 111(1), pp.480–485.

Xu, T. et al., 2009. Rapid formation and selective stabilization of synapses for enduring motor memories. *Nature*, 462(7275), pp.915–919.

Yang, G. et al., 2014. Sleep promotes branch-specific formation of dendritic spines after learning. *Science*, 344(6188), pp.1173–1178.

Yang, S.-N. & Heinen, S., 2014. Contrasting the roles of the supplementary and frontal eye fields in ocular decision making. *Journal of neurophysiology*, 111(12), pp.2644–2655.

# Methods

All experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the Committee on Animal Care at the Massachusetts Institute of Technology. Two monkeys (*Macaca mulatta*), one female (C) and one male (J), were trained to perform the Ready, Set, Go (RSG) behavioral task. Monkeys were seated comfortably in a dark and quiet room. Stimuli and behavioral contingencies were controlled using MWorks (https://mworks.github.io/) on a 2012 Mac Pro computer. Visual stimuli were presented on a frontoparallel 23-inch Acer H236HL monitor at a resolution of 1920x1080 and a refresh rate of 60 Hz, and auditory stimuli were played from the computer's internal speaker.
Eye positions were tracked with an infrared camera (Eyelink 1000; SR Research Ltd, Ontario, Canada) and sampled at 1 kHz.

## RSG Task

**Task contingencies.** Monkeys had to measure a sample interval, $t_s$, and subsequently produce a target interval, $t_t$, whose relationship to $t_s$ was specified by a context-dependent gain parameter ($t_t = g \times t_s$), which was set to either 1 (g=1 context) or 1.5 (g=1.5 context). On each trial, $t_s$ was drawn from a discrete uniform prior distribution (7 values, minimum = 500 ms, maximum = 1000 ms), and the gain ($g$) was switched across blocks of trials (101 ± 49 trials, mean ± s.d.).

**Trial structure.** Each trial began with the presentation of a central fixation point (FP, circular, 0.5 deg diameter), a secondary context cue (CC, square, 0.5 deg width, 3-5 deg below FP), an open circle centered at FP (OC, radius 8-10 deg, line width 0.05 deg, gray), and three rectangular stimuli (2.0x0.5 deg, gray) placed 90 deg apart on the perimeter of OC with their long side oriented radially. FP was red for the g=1 context and purple for the g=1.5 context. CC was placed directly below FP in the g=1 context, and was shifted 0.5 deg rightward in the g=1.5 context. Two of the rectangular stimuli were presented only briefly and served as placeholders for the subsequent 'Ready' and 'Set' flashes. The third rectangle served as the saccadic target ('Go'), which, together with FP, CC, and OC, remained visible throughout the trial. Ready was always positioned to the right or left of FP (3 o'clock or 9 o'clock position). Set was positioned 90 deg clockwise with respect to Ready, and the saccadic target was placed opposite Ready (**Figure 1A**).

Monkeys had to maintain their gaze within an electronic window around FP (2.5 and 5.5 deg window for C and J, respectively) or the trial was aborted. After a random delay (uniform hazard), first the Ready and then the Set cues were flashed (83 ms, white).
The two flashes were accompanied by a short auditory cue (the "pop" system sound), and were separated by $t_s$. The produced interval, $t_p$, was defined as the interval between the onset of the Set cue and the time the eye position entered a 5-deg electronic window around the saccadic target. Following the saccade, the response was deemed a "hit" if the error $\epsilon = |t_p - t_t|$ was smaller than a $t_t$-dependent threshold $\epsilon_{thresh} = \alpha t_t + \beta$, where $\alpha$ was between 0.2 and 0.25, and $\beta$ was 25 ms. The exact choice of these parameters was not critical for performing the task or for the observed behavior; rather, they were chosen to keep the animals motivated and willing to work for more trials per session. On hit trials, animals received juice reward and the target turned green. The reward amount, as a fraction of the maximum possible reward, decreased with increasing error according to $((\epsilon_{thresh} - \epsilon)/\epsilon_{thresh})^{1.5}$, with a minimum fraction of 0.1 (**Figure 1B**). Trials in which $t_p$ was more than 3.5 times the median absolute deviation (MAD) away from the mean were considered outliers and were excluded from further analyses.

As an initial analysis of whether monkeys learned the RSG task across gains, we fit linear regression models to the behavior separately for each gain to quantify the difference in slopes between the two contexts:

$$(1) \quad t_p = \beta_1 t_s + \beta_0$$

We also fit a model with an interaction term across both contexts:

$$(2) \quad t_p = \beta_1 t_s + \beta_2 g + \beta_3 g t_s$$

If the animals successfully learned to apply the gain, $\beta_3$ should be positive.

We further applied a Bayesian observer model (Jazayeri & Shadlen 2015; Acerbi et al. 2012; Miyazaki et al. 2005; Jazayeri & Shadlen 2010), which captured the behavior in both contexts (**Figure 1E**).
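To make the structure of such an observer concrete, the sketch below computes a Bayes least-squares (BLS) estimate of the target interval over the discrete prior used in the task. The function name and the value of the measurement-noise constant `W_M` are hypothetical (the fitted values are reported elsewhere); this is an illustration of the estimator, not the fitted model.

```python
import numpy as np

W_M = 0.07                          # assumed measurement-noise constant (illustrative)
PRIOR = np.linspace(500, 1000, 7)   # discrete uniform prior over t_s (ms)

def bls_estimate(t_m, w_m=W_M, prior=PRIOR, gain=1.0):
    """Bayes least-squares estimate of the target interval.

    Likelihood: t_m ~ Normal(t_s, w_m * t_s) (scalar variability).
    The BLS estimate is the posterior mean over the discrete prior,
    mapped through the context gain.
    """
    sigma = w_m * prior                                   # noise scales with base interval
    like = np.exp(-0.5 * ((t_m - prior) / sigma) ** 2) / sigma
    post = like / like.sum()                              # uniform prior cancels
    return gain * np.dot(post, prior)                     # posterior mean * gain
```

As expected from such a model, estimates near the edges of the prior are biased toward its center (e.g., a measurement of 500 ms yields an estimate above 500 ms), reproducing the regression-to-the-mean pattern in the behavior.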
Full details of the model can be found in previous work (Jazayeri & Shadlen 2010; Jazayeri & Shadlen 2015). Briefly, we assumed that both measurement and production of time intervals are noisy. Measurement and production noise were modeled as zero-mean Gaussians with standard deviations proportional to the base interval (Rakitin et al. 1998), with constants of proportionality $w_m$ and $w_p$, respectively. The Bayesian observer produced $t_p$ after deriving an optimal estimate of $t_t$ from the mean of the posterior. To account for the possibility that the mental operation of mapping $t_s$ to $t_t$ according to the gain factor might be noisier in the g=1.5 context than in the g=1 context (Remington & Jazayeri 2017), we allowed $w_m$ and $w_p$ to vary across contexts.

## Recording

We recorded neural activity in dorsomedial frontal cortex (DMFC) with 24-channel linear probes (Plexon, Inc.). Recording locations were selected according to stereotaxic coordinates and the existence of task-relevant modulation of neural activity. In monkey C, recordings were made between 3.5 mm and 7 mm lateral of the midline and from 1.5 mm posterior to 4.5 mm anterior of the genu of the arcuate sulcus. In monkey J, we recorded between 3 mm and 4.5 mm lateral of the midline and 0.75 mm to 5 mm anterior of the genu of the arcuate sulcus. Data were recorded and stored using a Cerebus Neural Signal Processor (NSP; Blackrock Microsystems). Preliminary spike sorting was performed online using the Blackrock NSP, followed by offline sorting with the Phy spike-sorting software package using the spikedetekt, klusta, and kilosort algorithms (Rossant et al. 2016; Pachitariu et al. 2016). Sorted spikes were then analyzed using custom code in MATLAB (The MathWorks, Inc.).

## Analysis of DMFC data

Average firing rates of individual neurons were estimated by applying a 150 ms smoothing filter to spike counts in 1 ms time bins.
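The firing-rate estimation step can be sketched as follows. The filter shape is not specified above, so a moving-average (boxcar) filter is assumed here; the function name is illustrative.

```python
import numpy as np

def firing_rate(spike_times_ms, t_max_ms, win_ms=150):
    """Estimate a firing rate (spikes/s) from a list of spike times.

    Spikes are counted in 1 ms bins over [0, t_max_ms] and smoothed with a
    win_ms-wide moving-average filter (boxcar assumed for illustration).
    """
    counts, _ = np.histogram(spike_times_ms, bins=np.arange(0, t_max_ms + 1))
    kernel = np.ones(win_ms) / win_ms            # 150 ms boxcar
    return np.convolve(counts, kernel, mode="same") * 1000.0  # counts/ms -> spikes/s
```

For a regular 100 Hz spike train, the estimate away from the edges is close to 100 spikes/s; edge bins are attenuated by the `mode="same"` convolution.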
We used PCA to visualize and analyze activity patterns across the population of neurons across animals. PCA was applied after a soft normalization: spike counts measured in 10 ms bins were divided by the square root of the maximum spike count across all bins and conditions. This normalization was implemented to minimize the possibility of high-firing-rate neurons dominating the analysis.

When binning data according to increasing values of $t_p$, we ensured that all bins had an equal number of trials, independently for each session. To average firing rates across trials within a group, we truncated trials to the median $t_p$ and averaged firing rates with attrition. Analyses of neural data were applied to all 10 sessions across both monkeys. For analyses, we included neurons for which at least 15 trials were recorded in each condition and which had a minimum unsmoothed modulation depth of 15 spikes per second. We did not separately analyze trials immediately following context switches because of the low number of context switches per session (mean = 6.8 switches).

For visualization of neural trajectories in state space, we identified dimensions along which responses were maximally separated with respect to context ("gain axis," **Figure 2B,C**; "initial condition," **Figure 3C**) and $t_p$ ("interval axis," **Figure 2B,C**, and "initial condition," **Figure 3C**). We first calculated the context component by projecting data onto the vector defined by the difference between neural activity averaged over time and $t_p$ for each context. This component of the activity was then subtracted from the full activity. For the Ready-Set epoch, we then performed PCA (PCs 1 and 2, **Figure 2B**) on the data with the context component removed. For the Set-Go epoch, we calculated the $t_p$ component by projecting data onto the vector defined by the difference between the activity associated with the longest and shortest values of $t_p$, averaged across time and context.
We then performed PCA (PC 1, **Figures 2C and 3C**) on the data with the context and $t_p$ components removed.

## Kinematic analysis of neural trajectories (KiNeT)

We developed KiNeT to compare the geometry, relative speed, and relative position of a group of neural trajectories that have an orderly organization and change smoothly with time. To describe KiNeT rigorously, we developed the following symbolic notation. Square and curly brackets refer to individual items and groups of items, respectively.

The algorithm for applying KiNeT can be broken down into the following steps: 1) Choose a Euclidean coordinate system in which to analyze the neural trajectories. We chose the first 10 PCs in the Set-Go epoch, which captured 89% of the variance in the data. 2) Designate one trajectory as the reference, $\Omega[\text{ref}]$. We used the trajectory associated with the middle $t_p$ bin as the reference. 3) On each of the non-reference trajectories $\Omega[i]$ ($i \neq \text{ref}$), find the states $s[i]\{j\}$ with minimum Euclidean distance to $s[\text{ref}]\{j\}$ and their associated times $t[i]\{j\}$ according to the following equations:

$$(3) \quad t[i][j] = \arg \min_t \|\Omega[i](t) - s[\text{ref}][j]\|$$

$$(4) \quad s[i][j] = \Omega[i](t[i][j])$$

**Organization of trajectories in state space:** The distances $D[i]\{j\}$ were used to characterize the position in neural state space of each $\Omega[i]$ relative to $\Omega[\text{ref}]$. The magnitude of $D[i][j]$ was defined as the norm of the vector connecting $s[i][j]$ to $s[\text{ref}][j]$, which we refer to as $\vec{\Delta}_{\text{ref}}[i][j]$. The sign of $D[i][j]$ was defined as follows: for the trajectory $\Omega[1]$ associated with the shortest $t_s$ or $t_p$, and $\Omega[N]$ associated with the longest, $D[i][j]$ was defined to be negative and positive, respectively.
For all other trajectories, $D[i][j]$ was positive if the angle between $\vec{\Delta}_{\text{ref}}[i][j]$ and $\vec{\Delta}_{\text{ref}}[N][j]$ was smaller than the angle between $\vec{\Delta}_{\text{ref}}[i][j]$ and $\vec{\Delta}_{\text{ref}}[1][j]$, and negative otherwise.

**Analysis of neural trajectories across contexts:** We analyzed the geometry across gains in three ways. First, we analyzed the relationships between the two sets of trajectories. This required aligning the activity between the two contexts in time. To do this, we started with the aligned times $t\{i\}\{j\}$ found within each context and, using successive groups of neural states in the g=1 context indexed by $t_{g=1}[\text{ref}]\{j\}$, found the reference time $t_{g=1.5}[\text{ref}]\{j\}$ in the g=1.5 context for which the mean distances between neural states in paired trajectories (i.e., the first $t_p$ bins of both gains, second $t_p$ bins, etc.) were smallest. This resulted in an array of times from $t_{g=1.5}[\text{ref}]\{j\}$, indexed by $t_{g=1}[\text{ref}]\{j\}$, such that the trajectories across gains were aligned in time for subsequent analyses (**Figure 6C**). The second way we analyzed geometry across gains was to collect trajectories across both gains, order them according to trajectory duration, and run the standard KiNeT procedure. Finally, we measured the distance $D_g$ between the structures using the across-context time alignment. For successive $j$, we measured the minimum distance between line segments connecting consecutive trajectories within each context. For five $t_p$ bins, this meant four line segments for each context, and $4^2=16$ distances. We chose the minimum of these distance values as the value of $D_g$ between the two structures.
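The nearest-state alignment at the core of KiNeT (Eqs. 3 and 4) can be sketched as follows; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def kinet_align(ref, others):
    """KiNeT core alignment (cf. Eqs. 3 and 4).

    ref:    (T_ref, d) array of reference-trajectory states
            (rows = time bins, columns = state-space dimensions, e.g. PCs).
    others: list of (T_i, d) arrays, the non-reference trajectories.

    For each reference state s[ref][j], find on each other trajectory the
    time index t[i][j] of the state with minimum Euclidean distance, and
    that nearest state s[i][j].
    """
    t_aligned, s_aligned = [], []
    for traj in others:
        # pairwise distances between all states on traj and all reference states
        d = np.linalg.norm(traj[:, None, :] - ref[None, :, :], axis=-1)
        t_ij = d.argmin(axis=0)          # Eq. (3): time index of the nearest state
        t_aligned.append(t_ij)
        s_aligned.append(traj[t_ij])     # Eq. (4): the nearest states themselves
    return t_aligned, s_aligned
```

For a trajectory that traverses the same path as the reference at twice the speed, the aligned times come out at roughly half the reference indices, which is how KiNeT exposes differences in speed along a common geometry.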
As a point of comparison, we generated a set of "null" distances by splitting the trajectories from each context into odd- and even-numbered trajectories and calculating the minimum distance between the sets of connecting line segments (**Figure 6E**).
| Symbol | Description |
| --- | --- |
| $\Omega[i]$ | The $i$-th neural trajectory |
| $\Omega[i](t)$ | The state on the $i$-th trajectory at time $t$, $1 \leq i \leq N$, where $N$ is the number of trajectories |
| $\Omega\{i\}$ | A collection of neural trajectories |
| $\Omega[\text{ref}]$ | "Reference" neural trajectory |
| $\Omega[1]$ | The trajectory of shortest duration |
| $\Omega[N]$ | The trajectory of longest duration |
| $t[\text{ref}][j]$ | Elapsed time for the $j$-th time bin on $\Omega[\text{ref}]$ |
| $s[\text{ref}][j]$ | Neural state on $\Omega[\text{ref}]$ at $t[\text{ref}][j]$ |
| $s[i][j]$ | Neural state on $\Omega[i]$ with minimum distance to $s[\text{ref}][j]$ |
| $s[i]\{j\}$ | $s[i][j]$ across all time bins |
| $s\{i\}[j]$ | $s[i][j]$ on all trajectories at the $j$-th time bin |
| $t[i][j]$ | Elapsed time on $\Omega[i]$ at $s[i][j]$ |
| $t[i]\{j\}$ | Elapsed time on $\Omega[i]$ across all time bins |
| $t\{i\}[j]$ | Elapsed time on all trajectories at the $j$-th time bin |
| $D[i][j]$ | Euclidean distance between $s[i][j]$ and $s[\text{ref}][j]$ |
| $D[i]\{j\}$ | Array of Euclidean distances between $s[i]\{j\}$ and $s[\text{ref}]\{j\}$ |
| $\vec{\Delta}_{\text{ref}}[i][j]$ | Vector traveling from $\Omega[\text{ref}]$ to $\Omega[i]$ at the $j$-th time bin, $i \neq \text{ref}$ |
| $\vec{\Delta}_{\Omega}[i][j]$ | Vector traveling from $s[i][j]$ to $s[i+1][j]$, $1 \leq i \leq N-1$ |
| $\theta_{\Omega}[i][j]$ | Angle between $\vec{\Delta}_{\Omega}[i][j]$ and $\vec{\Delta}_{\Omega}[i+1][j]$, $1 \leq i \leq N-2$ |
| $\bar{\theta}_{\Omega}\{i\}[j]$ | Average of $\theta_{\Omega}[i][j]$ across $i$ for the $j$-th time bin |
| $\theta_g[i][j]$ | Angle between $\vec{\Delta}_{\Omega,g=1}[i][j]$ and $\vec{\Delta}_{\Omega,g=1.5}[i][j]$, $1 \leq i \leq N-1$ |
| $\vec{\Delta}_g[j]$ | Vector connecting the nearest points on the line segments connecting $s_{g=1}\{i\}[j]$ and $s_{g=1.5}\{i\}[j]$ |
| $D_g[j]$ | Magnitude (length) of $\vec{\Delta}_g[j]$ |
| $\theta_{g,\Omega}[j]$ | Angle between $\vec{\Delta}_g[j]$ and the mean of $\vec{\Delta}_{\Omega}\{i\}[j]$ over $i$ |
**Statistics:** Confidence intervals for KiNeT performed on trajectories binned according to $t_p$ were computed by a bootstrap procedure, randomly selecting trials with replacement 100 times. To test for statistical significance of metrics generated through the KiNeT procedure, we used bootstrap tests, where p was the fraction of bootstrap iterations for which the metric was consistent with the null hypothesis. Unless otherwise stated, significance of a measure for individual time points was set at p < 0.05. KiNeT applied to neural data from individual monkeys produced similar results, which were also robust to different methods of data smoothing.

## Recurrent neural network

We constructed a firing-rate recurrent neural network (RNN) model with $N = 200$ nonlinear units. The network dynamics were governed by the following differential equation:

$$\tau \dot{x}(t) = -x(t) + Jr(t) + Bu + c_x + \rho_x(t)$$

$$r(t) = \tanh[x(t)]$$

$x(t)$ is a vector containing the activity of all units, and $r(t)$ represents the firing rates of those units, obtained by transforming $x$ through a $\tanh$ nonlinearity. Time $t$ was sampled every millisecond for a duration of $T = 3300$ ms. The time constant of decay for each unit was set to $\tau = 10$ ms. The unit activations also contain an offset $c_x$ and white noise $\rho_x(t)$ at each time step with standard deviation in the range [0.01-0.015]. The matrix $J$ represents the recurrent connections in the network. The network received multi-dimensional input $u$ through synaptic weights $B = [b_c; b_s]$. The input $u$ comprised a gain-dependent context cue $u_c(t)$ and an input $u_s(t)$ that provided the Ready and Set pulses. In $u_s(t)$, Ready and Set were encoded as 20 ms pulses with a magnitude of 0.4, separated by time $t_s$.

Two classes of networks were trained to perform the RSG task with multiple gains.
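The dynamics above can be integrated with a simple forward Euler scheme ($\Delta t$ = 1 ms). The sketch below shows the simulation step only (training by backpropagation through time is not shown); the function name, argument layout, and default noise level are illustrative assumptions.

```python
import numpy as np

def simulate_rnn(J, B, w_o, c_x, c_z, u, x0, tau=10.0, dt=1.0,
                 noise_sd=0.01, rng=None):
    """Forward Euler integration of tau*dx/dt = -x + J r + B u + c_x + rho_x.

    u: (T, n_inputs) input time series; x0: (N,) initial state.
    Returns the scalar output z(t) and the firing rates r(t).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, N = u.shape[0], x0.size
    x = x0.copy()
    z = np.empty(T)
    r_hist = np.empty((T, N))
    for t in range(T):
        r = np.tanh(x)                            # firing rates r(t) = tanh[x(t)]
        rho = noise_sd * rng.standard_normal(N)   # white noise rho_x(t)
        x = x + (dt / tau) * (-x + J @ r + B @ u[t] + c_x + rho)
        z[t] = w_o @ r + c_z                      # linear readout z(t)
        r_hist[t] = r
    return z, r_hist
```

With zero recurrence, zero input, and zero noise, the activations simply decay toward the origin with time constant $\tau$, which is a quick sanity check on the integration.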
In the tonic-input RNNs, the gain-dependent input $u_c(t)$ was set to a fixed offset for the entire duration of the trial. In contrast, in the transient-input RNNs, $u_c(t)$ was active transiently for 440 ms and was terminated 50-130 ms before the onset of the Ready pulse. The amplitude of $u_c(t)$ was set to 0.3 for g=1 and 0.4 for g=1.5. The transient networks received an additional gain-independent persistent input of magnitude 0.4, similar to the tonic networks. Both types of networks produced a one-dimensional output $z(t)$ through a weighted summation of the units, with weights $w_o$ and a bias term $c_z$:

$$z(t) = w_o^T r(t) + c_z$$

## Network Training

Prior to training, the model parameters ($\theta$), which comprised $J$, $B$, $w_o$, $c_x$, and $c_z$, were initialized. Initial values of the matrix $J$ were drawn from a normal distribution with zero mean and variance $1/N$, following previous work (Rajan & Abbott 2006). The synaptic weights $B = [b_c; b_s]$, the initial state vector $x(0)$, and the unit biases $c_x$ were initialized to random values drawn from a uniform distribution with range [-1,1]. The output weights $w_o$ and bias $c_z$ were initialized to zero. During training, model parameters were optimized by truncated Newton methods using backpropagation through time (Werbos 1990), minimizing a squared loss function between the network output $z_i(t)$ and a target function $f_i(t)$:

$$H(\theta) = \frac{1}{|I|\,T} \sum_{i \in I} \sum_{t} \left(z_{i}(t) - f_{i}(t)\right)^{2}$$

Here $i$ indexes trials in a training set ($I$ = different gains ($G$) × intervals ($t_s$) × repetitions ($r$)). The target function $f_i(t)$ was only defined in the Set-Go epoch (the output of the network was not constrained during the Ready-Set epoch). The value of $f_i(t)$ was zero during the Set pulse.
After Set, the target function was governed by two parameters that could be adjusted to make $f_i(t)$ nonlinear, scaling, non-scaling or approximately linear:

$$f_i(t) = \Lambda\left(e^{\frac{t}{\alpha t_t}} - 1\right)$$

For the networks reported, $f_i(t)$ was an approximately linear ramp function parametrized by $\Lambda = 3$ and $\alpha = 2.8$. Variable $t_t$ represents the transformed interval for a given $t_s$ and gain $G$. Solutions were robust with respect to parametric variations of the target function (e.g., nonlinear and non-scaling target functions). In trained networks, the production time, $t_p$, was defined as the time between the Set pulse and when the output ramped to a fixed threshold ($z_i = 1$).

During training, we employed two strategies to obtain robust solutions. First, we trained the networks to flexibly switch between three gain contexts, the two original values ($g=1$ and $g=1.5$) and an additional intermediate value of $g=1.25$ for which the amplitude of $u_c(t)$ was set to 0.35. However, the behavior of networks trained with the two original gains was qualitatively similar. Second, we set $\rho_x(t)$ to zero, and instead, the context-dependent input, $u_c(t)$, received white noise with standard deviation of 0.005, per unit time ($\Delta t = 0$).

---PAGE_BREAK---

Supplement

Go-aligned KiNeT

**Figure S1.** "Go"-aligned KiNeT, related to **Figure 4**. Applying KiNeT to neural trajectories aligned to the Set cue resulted in $t[i][j]$ that diverged from $t[\text{ref}]$ and scaled with trajectory length in a manner consistent with neural speed control as a means to produce different $t_p$. To rule out the possibility that this temporal scaling of trajectories was an artifact of temporal smearing of PSTHs near the time of Go caused by averaging trials of different lengths, we applied KiNeT to data aligned to Go (saccade). (A). Aligned times (speed) across both contexts.
As in the Set-aligned analysis, $t[i][j]$ for shorter $\Omega[i]$ diverged to shorter values, while $t[i][j]$ for longer $\Omega[i]$ diverged towards longer values as $t[\text{ref}]$ (here time before Go) increased. In contrast to the lack of temporal scaling proximal to the Set cue, $t[i][j]$ were ordered according to $t_p$ leading all the way up to the Go cue. Circles on the $t[\text{ref}]$ line indicate $j$ for which the ordering of $t[i][j]$ was significantly correlated with the $t_p$ bin (bootstrap test, r > 0.1, p < 0.05, n = 100). (B,C) $t_p$-related structure of $\Omega\{i\}$. (B) Analysis of direction. As

---PAGE_BREAK---

in the Set-aligned KiNeT, $\bar{\theta}_{\Omega\{i\}}[j]$ (bar signifies mean over the index $i$ in curly brackets) was significantly smaller than 90 degrees for the majority of the Set-Go interval (bootstrap test, $\bar{\theta}_{\Omega\{i\}}[j] < 90$, p < 0.05, n = 100), indicating that $\vec{\Delta}_{\Omega\{i\}}[j]$ were similar across $\Omega[i]$. (C) Analysis of distance. Euclidean distance to $\Omega[\text{ref}]$. Trajectories were ordered in neural space according to $D[i][j]$, with $\Omega[i]$ whose $t_p$ was more similar to the middle $t_p$ bin located closer to $\Omega[\text{ref}]$. Significance was tested by counting the number of times in which $D[i][j]$ was not ordered according to $t_p$ bin in bootstrap samples for each $j$ (p < 0.05, n = 100).
We estimated the degree to which the principal axes (PC directions) associated with nearest states along the five trajectories, $s\{i\}[j]$, changed with time relative to $t=0$ using two metrics: a similarity index ($SI(0, t)$) that measures the variance explained by PCs at time $t$ and $t=0$ (see below for full description), and a rotation index ($\theta_{PC_1}(0, t)$) measuring the angle of the first PC ($PC_1$) in the state space at time $t$ compared to $t=0$. (A) $SI(0, t)$. This index varies between 0 and 1 with 1 signifying matching PCs and 0 signifying orthogonal PCs. The gradual change in $SI(0, t)$ away from 1 and toward 0 indicated that $\Omega\{i\}$ gradually changed orientation with time. Shaded area represents 90% bootstrap confidence intervals (n = 100). Dashed lines represent the 90% confidence intervals for the similarity of two sets of $s\{i\}[j]$ drawn randomly from a multivariate Gaussian distribution with covariance matched to the data. $SI(0, t)$ captures the extent to which the orientation of $s\{i\}[j]$ in state space changes with time and is therefore sensitive to both rotations and scaling transformations. (B) $\theta_{PC_1}(0, t)$. The gradual change in $\theta_{PC_1}(0, t)$ away from 0 toward 90 deg indicates that trajectories underwent rotations through state space from Set to Go. Unlike $SI(0, t)$ that is sensitive to both rotations and scaling transformations, $\theta_{PC_1}(0, t)$ is only sensitive to rotations. These data-driven observations motivated the use of KiNeT for analyzing neural trajectories throughout the paper. +---PAGE_BREAK--- + +**Similarity Index:** The similarity index, adapted from (Garcia 2012), was calculated using the following procedure: 1) Select two datasets, one for neural activity patterns at the time of Set ($t=0$), denoted by $r_0$, and one at time $t$ after Set, denoted by $r_t$. 2) Calculate the principal component coefficients for each dataset. 
3) Project the points of each dataset onto their own and the other's principal component coefficients, creating four sets of principal component scores. 4) Calculate the fraction of variance explained by each principal component in each of the four sets of scores. $\sigma_{0,0}^{2,i}$ is the fraction of variance in $r_0$ explained by principal component $i$ of $r_0$, $\sigma_{t,0}^{2,i}$ is the fraction of variance of $r_t$ explained by principal component $i$ of $r_0$, $\sigma_{t,t}^{2,i}$ is the fraction of variance in $r_t$ explained by principal component $i$ of $r_t$, and $\sigma_{0,t}^{2,i}$ is the fraction of variance in $r_0$ explained by principal component $i$ of $r_t$. 5) For each component of each dataset, calculate the difference between (1) the fraction of variance explained by that component for its own dataset (e.g., $\sigma_{0,0}^{2,i}$) and (2) the fraction explained by that same component for the other dataset (e.g., $\sigma_{t,0}^{2,i}$). 6) Sum and normalize the calculated differences. This can be written as follows:

$$SI(0, t) = 1 - \frac{1}{4} \sum_i \left(|\sigma_{0,0}^{2,i} - \sigma_{t,0}^{2,i}| + |\sigma_{t,t}^{2,i} - \sigma_{0,t}^{2,i}|\right)$$

The similarity index is 0 when the associated covariance matrix of one dataset lies in the nullspace of the other, and 1 when the covariance matrices are identical.

In order to interpret the values of the similarity index in the DMFC dataset, we compared the similarity index for two surrogate datasets that matched the statistics of DMFC activity. Each dataset was constructed by drawing five samples (the number of $t_p$ bins) from a ten-dimensional Gaussian distribution (the number of principal components) with a diagonal covariance matrix constructed using the eigenvalues of the covariance matrix of the DMFC data. We calculated the similarity index for 1000 pairs of surrogate data (i.e., a null distribution), and used the 5th and 95th percentiles to generate 90% confidence intervals.
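Steps 1)-6) condense into a short function. The following is a minimal sketch of our reading of the procedure (names are ours; rows are samples, columns are dimensions):

```python
import numpy as np

def similarity_index(r0, rt):
    """Similarity index between two datasets (rows = samples):
    1 for matching principal components, 0 for orthogonal ones."""
    def pcs(X):
        # principal component coefficients via SVD of the centered data
        _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        return Vt.T  # columns are principal directions

    def var_frac(X, V):
        # fraction of X's total variance explained by each column of V
        Xc = X - X.mean(axis=0)
        return ((Xc @ V) ** 2).sum(axis=0) / (Xc ** 2).sum()

    V0, V1 = pcs(r0), pcs(rt)
    s00, st0 = var_frac(r0, V0), var_frac(rt, V0)  # own / cross on r0's PCs
    stt, s0t = var_frac(rt, V1), var_frac(r0, V1)  # own / cross on rt's PCs
    return 1.0 - 0.25 * (np.abs(s00 - st0).sum() + np.abs(stt - s0t).sum())
```

The surrogate confidence intervals described above would then be obtained by calling `similarity_index` on many pairs of matched Gaussian draws.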
With this procedure, a similarity index above the 90% confidence interval was considered more "similar" than expected by chance, whereas a similarity index below the 90% confidence interval was considered dissimilar.

---PAGE_BREAK---

Variability in neural trajectories systematically predicted behavioral variability

**Figure S3.** Relating neural variability to behavioral variability, related to Figure 4. (A). Schematic showing three neural trajectories between Set (circle) and Go (cross) associated with three different $t_s$ values. Neural states, $s[i][j]$, are indexed by trajectory $(i)$, which is specified by initial condition, and elapsed time ($j$). Noise may cause neural states to deviate from mean trajectories. We reasoned that deviations across and along trajectories may cause systematic biases in $t_p$. $s_{short}[i][j]$ (light star) shows an example in which noise moves the state in the direction of shorter $t_s$ (toward $s[i-1][j]$) and in the direction of the Go state ($s[i][j+1]$) by vectors $\epsilon_\Omega$ and $\epsilon_t$, respectively. Both deviations should lead to shorter $t_p$. (B) Prediction 1 ($P_1$): deviations $\epsilon_\Omega$ off of one trajectory toward a trajectory associated with larger $t_s$ should lead to larger $t_p$, and vice versa. To test $P_1$, we divided trials for each $t_s$ into two bins. One bin contained all trials in which $t_p$ was shorter than median $t_p$

---PAGE_BREAK---

and the other, all trials in which $t_p$ was longer than median $t_p$. We computed neural trajectories for the short and long $t_p$ bins, and denoted the corresponding states by $s_{short}[i][j]$ and $s_{long}[i][j]$ (dark star), respectively. If $P_1$ is correct, then the geometric relationship between $s_{short}[i][j]$ and $s_{long}[i][j]$ should be similar to that between $s[i-1][j]$ (shorter $t_s$) and $s[i+1][j]$ (longer $t_s$).
Therefore the vector pointing from $s_{short}[i][j]$ to $s_{long}[i][j]$ ($\vec{\Delta}_p[i][j]$, dashed arrow) and the vector pointing from $s[i-1][j]$ to $s[i+1][j]$ ($\vec{\Delta}_{\Omega}[i][j]$, blue arrow) should be aligned, and the angle between them, denoted by $\theta_{p,\Omega}[i][j]$, should be acute. See the description below for the calculation of $\vec{\Delta}_{\Omega}[i][j]$ for the shortest and longest $t_s$. (C) Prediction 2 ($P_2$): deviations $\epsilon_t$ along trajectories should influence the time it takes for activity to reach the Go state and should therefore influence $t_p$ (Afshar et al. 2011; Michaels et al. 2015). If $P_2$ is correct, then $s_{short}[i][j]$ should be ahead of $s_{long}[i][j]$. Therefore, $\vec{\Delta}_p[i][j]$ should point backwards in time, and the angle between $\vec{\Delta}_p[i][j]$ and the vector $\vec{\Delta}_t[i][j]$ that connects $s[i][j-1]$ to $s[i][j+1]$, denoted by $\theta_{p,t}[i][j]$, should be obtuse. See the description below for the calculation of $\vec{\Delta}_t[i][j]$ for the first and last time points.

(D,E) Testing $P_1$ and $P_2$ for the $g=1$ (D) and $g=1.5$ (E) contexts. Consistent with $P_1$, average $\theta_{p,\Omega\{i\}[j]}$ ($\bar{\theta}_{p,\Omega\{i\}[j]}$, blue) was less than 90 deg from Set to Go, indicating that $t_p$ was larger (smaller) when neural states deviated toward a trajectory associated with a larger (smaller) $t_s$. Importantly, this systematic relationship between $t_p$ and neural activity was already present at the time of Set, indicating that $t_p$ was influenced by variability during the Ready-Set measurement epoch. Consistent with $P_2$, $\bar{\theta}_{p,t\{i\}[j]}$ (green) was greater than 90 deg, indicating that $t_p$ was larger (smaller) when speed along the neural trajectory was slower (faster).
The angle between $\vec{\Delta}_p\{i\}[j]$ and $\vec{\Delta}_t\{i\}[j]$ was initially close to 90 deg, consistent with the observation that trajectories evolved at similar speeds early in the Set-Go epoch (Figure 4B).

We also measured the angle between $\vec{\Delta}_{\Omega\{i\}[j]}$ and $\vec{\Delta}_{t\{i\}[j]}$, denoted by $\bar{\theta}_{\Omega,t\{i\}[j]}$ (yellow). This angle was not significantly different from that expected by chance (90 deg) for most time points. We determined when (at what $j$) an angle was significantly different from 90 deg ($p < 0.05$) by comparing angles to the corresponding null distribution derived from 100 random shuffles with respect to $t_p$. Angles that were significantly different from 90 deg are shown by darker circles. Because the comparison of $t_s$- vs. $t_p$-related structure (Figure S3) required grouping trials into substantially more bins than the other analyses (14 vs. 7 or 5), we reduced the minimum number of trials required to 10 for this analysis (273 units; 95 from monkey C and 178 from monkey J). The results of these analyses did not depend on the specific threshold chosen, and results were similar in individual subjects.

---PAGE_BREAK---

**Calculation of $\vec{\Delta}_{\Omega}[i][j]$ for the shortest and longest $t_s$:**

Because there was a finite number of $t_s$ values, we could not compute $\vec{\Delta}_{\Omega}[i][j]$ for $i = 1$ and $i = N_i$ ($s[i-1][j]$ was not defined for $i = 1$ and $s[i+1][j]$ was not defined for $i = N_i$). Therefore, for the shortest $t_s$, we changed $\vec{\Delta}_{\Omega}[i][j]$ to $s[2][j] - s[1][j]$ (instead of $s[2][j] - s[0][j]$), and for the longest $t_s$, to $s[N_i][j] - s[N_i - 1][j]$ (instead of $s[N_i + 1][j] - s[N_i - 1][j]$).
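The difference vectors and angles used in these analyses reduce to a few lines of code. The sketch below (our own naming) applies one-sided differences at the boundary indices, exactly as described above; `s` stacks all mean states as an array:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def delta_omega(s, i, j):
    """Across-trajectory vector s[i+1][j] - s[i-1][j]; one-sided at the
    first and last trajectory. s has shape (N_traj, N_timebins, n_dims)."""
    lo, hi = max(i - 1, 0), min(i + 1, s.shape[0] - 1)
    return s[hi, j] - s[lo, j]

def delta_time(s, i, j):
    """Along-trajectory vector s[i][j+1] - s[i][j-1]; one-sided at the
    first and last time bin."""
    lo, hi = max(j - 1, 0), min(j + 1, s.shape[1] - 1)
    return s[i, hi] - s[i, lo]
```

For instance, $\theta_{p,\Omega}[i][j]$ would be `angle_deg(delta_p, delta_omega(s, i, j))`, with `delta_p` the short-to-long-$t_p$ difference vector.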
**Calculation of $\vec{\Delta}_t[i][j]$ for the earliest and latest times:**

Because there was a finite number of time points, we could not compute $\vec{\Delta}_t[i][j]$ for $j = 1$ and $j = N_j$ ($s[i][j-1]$ was not defined for $j = 1$ and $s[i][j+1]$ was not defined for $j = N_j$). Therefore, for the first time point, we changed $\vec{\Delta}_t[i][j]$ to $s[i][2] - s[i][1]$ (instead of $s[i][2] - s[i][0]$), and for the last time point, to $s[i][N_j] - s[i][N_j - 1]$ (instead of $s[i][N_j + 1] - s[i][N_j - 1]$).

---PAGE_BREAK---

Analysis of the recurrent neural networks

**Figure S4.** Analysis of the recurrent neural networks (RNNs), related to Figure 4 and Figure 7. (A-E) Tonic-input RNN. (A) "Behavior"; same format as in Figure 1E. The networks successfully learned the task as evidenced by positive regression slopes ($\beta_1$, larger for the $g = 1.5$ context) and a significant positive interaction between $t_s$ and $g$ ($p << 0.001$). For each network, we simulated 30 trials per $t_s$ and $g$, removing outliers in which $t_p$ was more than 3.5 times the median absolute deviation (MAD) away from the mean. (B-D) Organization of neural trajectories within each context; same format as Figure 4B-D. KiNeT analysis verified that the organization of neural trajectories in the tonic-input RNN matched the organization observed in DMFC (compare to Figure 4B-D). (E) Relating unit variability to behavioral variability; same format as in Figure S3. (F-J) Same analyses as in A-E for the transient-input RNN.

---PAGE_BREAK---
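The outlier criterion used for the RNN simulations (Figure S4) can be sketched in a few lines; following the text, deviation is measured from the mean in units of the median absolute deviation (the function name is ours):

```python
import numpy as np

def remove_outliers(tp, n_mad=3.5):
    """Drop trials whose t_p lies more than n_mad median absolute
    deviations away from the mean, as described for Figure S4."""
    tp = np.asarray(tp, dtype=float)
    mad = np.median(np.abs(tp - np.median(tp)))  # median absolute deviation
    return tp[np.abs(tp - tp.mean()) <= n_mad * mad]
```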
| Symbol | Description |
| --- | --- |
| Ω[i] | The i-th neural trajectory |
| Ω[i](t) | The state on the i-th trajectory at time t, 1 ≤ i ≤ N, where N is the number of trajectories |
| Ω{i} | A collection of neural trajectories |
| Ω[ref] | "Reference" neural trajectory |
| Ω[1] | The trajectory of shortest duration |
| Ω[N] | The trajectory of longest duration |
| l[ref][j] | Elapsed time for the j-th time bin on Ω[ref] |
| s[ref][j] | Neural state on Ω[ref] at l[ref][j] |
| s[i][j] | Neural state on Ω[i] with minimum distance to s[ref][j] |
| s[i]{j} | s[i][j] across all time bins |
| s{i}[j] | s[i][j] on all trajectories at the j-th time bin |
| l[i][j] | Elapsed time on Ω[i] at s[i][j] |
| l[i]{j} | Elapsed time on Ω[i] across all time bins |
| l{i}[j] | Elapsed time on all trajectories at the j-th time bin |
| D[i][j] | Euclidean distance between s[i][j] and s[ref][j] |
| D[i]{j} | Array of Euclidean distances between s[i]{j} and s[ref]{j} |
| Δ⃗ref[i][j] | Vector traveling from Ω[ref] to Ω[i] at the j-th time bin, i ≠ ref |
| Δ⃗Ω[i][j] | Vector traveling from s[i][j] to s[i + 1][j], 1 ≤ i ≤ N − 1 |
| θΩ[i][j] | Angle between Δ⃗Ω[i][j] and Δ⃗Ω[i + 1][j], 1 ≤ i ≤ N − 2 |
| θ̅Ω{i}[j] | Average of θΩ[i][j] across i for the j-th time bin |
| θg[i][j] | Angle between Δ⃗Ω,g=1[i][j] and Δ⃗Ω,g=1.5[i][j], 1 ≤ i ≤ N − 1 |
| Δ⃗g[j] | Vector connecting the nearest points on the line segments connecting sg=1{i}[j] and sg=1.5{i}[j] |
| Dg[j] | Magnitude (length) of Δ⃗g[j] |
| θg,Ω[j] | Angle between Δ⃗g[j] and the mean of Δ⃗Ω{i}[j] over i |
---PAGE_BREAK---

# THE EXISTENCE OF FIXED POINTS FOR THE $·/GI/1$ QUEUE

BY JEAN MAIRESSE AND BALAJI PRABHAKAR

CNRS-Université Paris 7 and Stanford University

A celebrated theorem of Burke's asserts that the Poisson process is a fixed point for a stable exponential single server queue; that is, when the arrival process is Poisson, the equilibrium departure process is Poisson of the same rate. This paper considers the following question: Do fixed points exist for queues which dispense i.i.d. services of finite mean, but otherwise of arbitrary distribution (i.e., the so-called $·/GI/1/∞$/FCFS queues)? We show that if the service time $S$ is nonconstant and satisfies $\int P\{S \ge u\}^{1/2} du < \infty$, then there is an unbounded set $\mathcal{S} \subset (E[S], \infty)$ such that for each $\alpha \in \mathcal{S}$ there exists a unique ergodic fixed point with mean inter-arrival time equal to $\alpha$. We conjecture that in fact $\mathcal{S} = (E[S], \infty)$.

## 1. Introduction.
Consider a single server First-Come-First-Served queue with infinite waiting room, at which the service times are i.i.d. (a $·/GI/1/∞$/FCFS queue). We are interested in the question of whether such queues possess fixed points: an inter-arrival process which has the same distribution as the corresponding inter-departure process.

The question of the existence of fixed points is intimately related to the limiting behavior of the distribution of departure processes from a tandem of queues. Specifically, consider an infinite tandem of $·/GI/1/∞$/FCFS queues. The queues are indexed by $k \in \mathbb{N}$ and the customers are indexed by $n \in \mathbb{Z}$.
The numbering of each customer is fixed at the first queue and remains the same as he/she passes through the tandem. Each customer leaving queue $k$ immediately enters queue $k+1$. At queue $k$, write $S(n, k)$ for the service time of customer $n$ and $A(n, k)$ for the inter-arrival time between customers $n$ and $n+1$. We assume that the initial inter-arrival process, $A^0 = (A(n, 0), n \in \mathbb{Z})$, is ergodic and independent of $(S(n, k), n \in \mathbb{Z}, k \in \mathbb{N})$. We also assume that the service variables $(S(n, k), n, k)$ are i.i.d. and that $E[S(0, 0)] < E[A(0, 0)] < \infty$. To avoid trivialities we assume that the service times are nonconstant, that is, $P\{S(0, 0) \neq E[S(0, 0)]\} > 0$. + +By Loynes' results [15], each of the equilibrium departure processes $A^k = (A(n, k), n \in \mathbb{Z})$ for $k \ge 1$ is ergodic of mean $E[A(0, 0)]$. The following are natural fixed point problems: + +Received February 2001; revised January 2003. +AMS 2000 subject classifications. 60K25, 60K35, 68M20, 90B15, 90B22. +Key words and phrases. Queue, tandem queueing networks, general independent services, stability, Loynes theorem, Burke theorem. +---PAGE_BREAK--- + +*Existence.* For a given service distribution, does there exist a mean $\alpha$ ergodic inter-arrival process such that the corresponding inter-departure process has the same distribution? If yes, call such a distribution an *ergodic fixed point* of mean $\alpha$. + +*Uniqueness.* If an ergodic fixed point of mean $\alpha$ exists, is it unique? + +*Convergence.* Assume there is a unique ergodic fixed point of mean $\alpha$. If the inter-arrival process to the first queue, $A^0$, is ergodic of mean $\alpha$, then does the law of $A^k$ converge weakly to the ergodic fixed point of mean $\alpha$ as $k \to \infty$? If yes, call the fixed point an *attractor*. + +A strand of research in stochastic network theory has pursued these questions for some time. 
Perhaps the earliest and best-known result is Burke's theorem [7], which shows that the Poisson process of rate $1/\alpha$ is a fixed point for exponential server queues with mean service time $\beta < \alpha$. Anantharam [1] established its uniqueness, and Mountford and Prabhakar [18] established that it is an attractor. + +For $·/GI/1/∞/FCFS$ queues, the subject of this paper, Chang [8] established the uniqueness of an ergodic fixed point, should it exist, assuming that the services have a finite mean and an unbounded support. Prabhakar [19] provides a complete solution to the problems of uniqueness and convergence assuming only a finite mean for the service time and the existence of an ergodic fixed point. However, the existence of such fixed points was only known for exponential and geometric service times. + +This paper establishes the existence of fixed points for a large class of service time distributions. We obtain the following result: if the service time $S$ has mean $\beta$ and if $\int P\{S \ge u\}^{1/2} du < \infty$, then there is a set $\mathcal{S}$ closed in $(\beta, \infty)$, with $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = \infty$ and such that: + +(a) For $\alpha \in \mathcal{S}$, there exists a mean $\alpha$ ergodic fixed point for the queue. Given this, [19] implies the attractiveness of the fixed point. + +(b) For $\alpha \notin \mathcal{S}$, consider the stationary (but not ergodic) process $F$ of mean $\alpha$ obtained as the convex combination of the ergodic fixed points of means $\underline{\alpha}$ and $\bar{\alpha}$ where $\underline{\alpha} = \sup\{u \in \mathcal{S}, u \le \alpha\}$ and $\bar{\alpha} = \inf\{u \in \mathcal{S}, \alpha \le u\}$. (Since $\mathcal{S}$ is closed, $\underline{\alpha}$ and $\bar{\alpha}$ belong to $\mathcal{S}$ and $F$ is a fixed point for the queue.) 
If the inter-arrival times of the input process have a mean $\alpha$, then the Cesaro average of the laws of the first $k$ inter-departure processes converges weakly to $F$ as $k \to \infty$. + +These results rely heavily on a strong law of large numbers for the total time spent by a customer in a tandem of queues proved in [2]. We conjecture that our results are suboptimal and that in fact $\mathcal{S} = (\beta, \infty)$. + +**2. Preliminaries.** The presence of an underlying probability space $(\Omega, \mathcal{F}, P)$ on which all the r.v.'s are defined is assumed all along. Given a measurable space $(K, \mathcal{K})$, we denote by $\mathcal{L}(K)$ the set of $K$-valued random variables, and by $\mathcal{M}(K)$ the set of probability measures on $(K, \mathcal{K})$. Throughout the paper, we +---PAGE_BREAK--- + +consider random variables valued in $\mathbb{R}_+^Z$. Equipped with the product topology, or +topology of coordinate-wise convergence, $\mathbb{R}_+^Z$ is a Polish space. We shall work +on the measurable space $(\mathbb{R}_+^Z, \mathcal{B})$ where $\mathcal{B}$ is the corresponding Borel $\sigma$-algebra. +With the topology of weak convergence, the space $\mathcal{M}(\mathbb{R}_+^Z)$ is a Polish space. For +details see, for instance, [3], [10] or [11]. The weak convergence of $(\mu_n)_n$ to $\mu$ is +denoted by $\mu_n \xrightarrow{w} \mu$. Furthermore, for $X_n, X \in \mathcal{L}(\mathbb{R}_+^Z)$, we say that $X_n$ converges +weakly to $X$ (and we write $X_n \xrightarrow{w} X$) if the law of $X_n$ converges weakly to the law +of $X$. A process $X \in \mathcal{L}(\mathbb{R}_+^Z)$ is *constant* if $X = (c)^Z$ a.s. for some $c \in \mathbb{R}_+$. + +We write $\mathcal{M}_s(\mathbb{R}_+^Z)$ for the set of stationary probability measures with finite one- +dimensional mean, and $\mathcal{M}_c(\mathbb{R}_+^Z)$ for the set of ergodic probability measures with +finite one-dimensional mean. 
For $\alpha \in \mathbb{R}_+$, we denote by $\mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ and $\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$ the sets of stationary and ergodic probability measures with one-dimensional mean $\alpha$.

The strong order on $\mathcal{M}(\mathbb{R}_+^Z)$, or $\mathcal{L}(\mathbb{R}_+^Z)$, is defined as follows (see [21] for more on strong orders). Consider $A, B \in \mathcal{L}(\mathbb{R}_+^Z)$ with respective distributions $\mu$ and $\nu$. We say that $A$ (resp. $\mu$) is strongly dominated by $B$ (resp. $\nu$), denoted $A \le_{\text{st}} B$ (resp. $\mu \le_{\text{st}} \nu$), if

$$E[f(A)] \le E[f(B)] \quad \left(\text{resp. } \int f\, d\mu \le \int f\, d\nu\right),$$

for any measurable $f: \mathbb{R}_+^Z \to \mathbb{R}$ which is increasing and such that the expectations are well defined. Here we consider the usual component-wise ordering of $\mathbb{R}_+^Z$.

PROPOSITION 2.1 ([22]). For $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$, $\mu \le_{\text{st}} \nu$ iff $\int f\, d\mu \le \int f\, d\nu$ for any increasing and continuous real function $f$ such that the expectations are well defined. For $\mu_n, \nu_n, n \in \mathbb{N}$, $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$, suppose that $\mu_n \xrightarrow{w} \mu$, $\nu_n \xrightarrow{w} \nu$ and that $\mu_n \le_{\text{st}} \nu_n$. Then $\mu \le_{\text{st}} \nu$.

We shall use the following fact a couple of times. Consider two random processes on $\mathbb{R}_+^Z$: $A$ which is ergodic and $B$ which is stationary. Assume that $A \le_{\text{st}} B$. Let $B$ be compatible with a $P$-stationary shift $\theta: \Omega \to \Omega$ and denote by $\tilde{\mathfrak{T}}$ the invariant $\sigma$-algebra. Then we have

$$ (1) \qquad E[A(0)] \le E[B(0)|\tilde{\mathfrak{T}}] \qquad \text{a.s.} $$

Furthermore, if $A$ is independent of $B$, then the conditional law of $B$ on the event $\{E[B(0)|\tilde{\mathfrak{T}}] = E[A(0)]\}$ is equal to the law of $A$.
To prove this, the two ingredients are a representation theorem such as Theorem 1 in [14] and Birkhoff's ergodic theorem.

The symbols $\sim$ and $\perp$ stand for "is distributed as" and "is independent of," respectively. We use the notation $\mathbb{N}^* = \mathbb{N} \setminus \{0\}$, $\mathbb{R}^* = \mathbb{R} \setminus \{0\}$, and $x^+ = \max(x, 0) = x \vee 0$. For $u, v$ in $\mathbb{R}^\mathbb{N}$ or $\mathbb{R}^\mathbb{Z}$, $u \le v$ denotes $u(n) \le v(n)$ for all $n$.

---PAGE_BREAK---

**3. The model.** We introduce successively the $·/·/1/∞/FCFS$ queue (Section 3.1), the $G/G/1/∞/FCFS$ queue (Section 3.2), and the infinite tandem $G/G/1/∞/FCFS → ·/GI/1/∞/FCFS → ...$ (Section 3.3). The presentation is made in an abstract and functional way. However, to help intuition, we use the queueing terminology and notation.

**3.1. The single queue.** Define the mapping

$$ (2) \qquad \begin{aligned} \Psi : \mathbb{R}_+^Z &\times \mathbb{R}_+^Z \rightarrow \mathbb{R}_+^Z \cup \{(+\infty)^Z\}, \\ (a,s) &\mapsto w = \Psi(a,s), \end{aligned} $$

with

$$ (3) \qquad w(n) = \Psi(a, s)(n) = \left[ \sup_{j \le n-1} \sum_{i=j}^{n-1} (s(i) - a(i)) \right]^+. $$

A priori, $\Psi$ is valued in $[0, +\infty]^Z$, but it is easily checked using (5) below that $\Psi$ actually takes values in $\mathbb{R}_+^Z \cup \{(+\infty)^Z\}$. The map $\Psi$ computes the workloads ($w$) from the inter-arrivals ($a$) and the services ($s$). Observe that we have, for $m < n$ (Lindley's equations),

$$ (4) \qquad w(n) = [w(n-1) + s(n-1) - a(n-1)]^+, $$

$$ (5) \qquad w(n) = \left[ \max\left( \max_{m < j \le n-1} \sum_{i=j}^{n-1} (s(i) - a(i)),\; w(m) + \sum_{i=m}^{n-1} (s(i) - a(i)) \right) \right]^+. $$

*The unstable case.* On the event $\{E[S(0)|\mathfrak{T}] > E[A(0)|\mathfrak{T}]\}$, we have $W = (\infty)^Z$ and $D = LS$ [i.e., $\forall n, D(n) = S(n+1)$].

*The critical case.* On the event $\{E[S(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\}$, we have $D=LS$ and anything may happen for $W$. For instance, if $A=S=(c)^Z$ for $c \in \mathbb{R}_+$, then $W=(0)^Z$. If $S$ is i.i.d. and nonconstant and $A \perp S$, then $W=(\infty)^Z$.
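Lindley's recursion (4) is straightforward to simulate. The sketch below (our own naming, not from the paper) computes waiting times and inter-departure times for a FCFS single-server queue started empty:

```python
import numpy as np

def queue_map(a, s):
    """From inter-arrival times a (length n-1) and service times s (length n),
    compute waiting times w (Lindley's recursion) and inter-departures d."""
    n = len(s)
    t_arr = np.concatenate([[0.0], np.cumsum(a)])   # arrival epochs
    t_dep = np.empty(n)                             # departure epochs
    t_dep[0] = t_arr[0] + s[0]
    for k in range(1, n):
        # wait until the server is free (FCFS), then serve
        t_dep[k] = max(t_arr[k], t_dep[k - 1]) + s[k]
    w = np.maximum(np.concatenate([[0.0], t_dep[:-1]]) - t_arr, 0.0)
    d = np.diff(t_dep)                              # inter-departure times
    return w, d
```

One checks that `w` satisfies $w(n) = [w(n-1) + s(n-1) - a(n-1)]^+$; feeding the departures of one queue into the next gives the tandem model of Section 3.3.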
Observe that a consequence of the above is that

$$ \{E[D(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\} = \{E[S(0)|\mathfrak{T}] \le E[A(0)|\mathfrak{T}]\} $$

(more rigorously, the symmetric difference of the two events has 0 probability).

When the shift $\theta$ is ergodic, we are a.s. in the stable case when $E[S(0)] < E[A(0)]$, respectively, in the unstable case when $E[S(0)] > E[A(0)]$, and in the critical case when $E[S(0)] = E[A(0)]$.

Let $\sigma$ be the law of $S$. Define

$$ (10) \qquad \begin{array}{l} \Phi_{\sigma}: \mathcal{M}_s(\mathbb{R}_+^Z) \to \mathcal{M}_s(\mathbb{R}_+^Z), \\ \mu \mapsto \Phi_{\sigma}(\mu), \end{array} $$

---PAGE_BREAK---

where $\Phi_{\sigma}(\mu)$ is the law of $\Phi(A, S)$ where $A \sim \mu$, $S \sim \sigma$ and $A \perp S$. The map $\Phi_{\sigma}$ is called the *queueing map*. A distribution $\mu$ such that $\Phi_{\sigma}(\mu) = \mu$ is called a *fixed point* for the queue. If the inter-arrival process is distributed as a fixed point $\mu$, then so is the inter-departure process. Consider now an ergodic queue. Rephrasing Loynes' results, we get

$$
\begin{align*}
\forall \alpha > \beta, \quad \Phi_{\sigma}: \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) &\rightarrow \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}), \\
\forall \alpha \leq \beta, \quad \Phi_{\sigma}: \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) &\rightarrow \{\sigma\}.
\end{align*}
$$

In particular, we have $\Phi_{\sigma}(\sigma) = \sigma$. We say that $\sigma$ is a *trivial* fixed point for the ergodic queue.

Below the main objective is to get nontrivial fixed points for $\Phi_{\sigma}$ in the special case of an i.i.d. queue. More precisely, we want to address the following question: for any $\alpha > \beta$, does there exist a fixed point which is ergodic and of mean $\alpha$?

3.3. *Stable i.i.d. queues in tandem.* Consider a family $\{S(n, k), n \in \mathbb{Z}, k \in \mathbb{N}\}$ of i.i.d.
random variables valued in $\mathbb{R}_+$ with $E[S(0, 0)] = \beta \in \mathbb{R}_+^*$. Assume that $S(0, 0)$ is nonconstant, that is, $P\{S(0, 0) = \beta\} < 1$. For $k$ in $\mathbb{N}$, define $S^k: \Omega \to \mathbb{R}_+^\mathbb{Z}$ by $S^k = (S(n,k))_{n \in \mathbb{Z}}$. Let $\sigma$ be the distribution of $S^k$. Consider $A^0 = (A(n,0))_{n \in \mathbb{Z}}: \Omega \to \mathbb{R}_+^\mathbb{Z}$ and assume that $A^0$ is stationary, independent of $S^k$ for all $k$, and satisfies $E[A(0,0)] = \alpha \in \mathbb{R}_+^*$. Let $\theta$ be a $P$-stationary shift such that $A^0$ and $S^k$ for all $k$ are compatible with $\theta$. Let $\mathfrak{T}$ be the corresponding invariant $\sigma$-algebra. We assume that the stability condition $\beta < E[A(0,0)|\mathfrak{T}]$ holds a.s. + +Define recursively for all $k \in \mathbb{N}$ + +$$ (11) \qquad W^k = (W(n,k))_{n \in \mathbb{Z}} = \Psi(A^k, S^k), $$ + +$$ (12) \qquad A^{k+1} = (A(n, k+1))_{n \in \mathbb{Z}} = \Phi(A^k, S^k). $$ + +The random processes $A^k$, $S^k$ and $W^k$ are, respectively, the inter-arrival, service and workload processes at queue $k$. The random process $A^{k+1}$ is the inter-departure process at queue $k$ and the inter-arrival process at queue $k+1$. Each $(A^k, S^k)$ defines a stable i.i.d. queue according to the terminology of Section 3.2. Globally, this model is called a *tandem of stable i.i.d. queues*. + +The sequence $(A^k)_k$ is a Markov chain on the state space $\mathbb{R}_+^\mathbb{Z}$. Clearly, $\mu$ is a stationary distribution of $(A^k)_k$ if and only if $\mu$ is a fixed point for the queue, that is, iff $\Phi_\sigma(\mu) = \mu$. Hence, the problem to be solved can be rephrased as: does the Markov chain $(A^k)_k$ admit nontrivial stationary distributions? + +**4. Uniqueness of fixed points and convergence.** In this section, we recall several results about the uniqueness of fixed points as well as convergence results. 
Together with the existence results to be proved in Section 5, the results recalled here complete the picture of fixed point theorems. More importantly, they will be instrumental in several of the later proofs.

---PAGE_BREAK---

THEOREM 4.1 ([2, 17]). Consider the stable i.i.d. tandem model defined in Section 3.3 with an ergodic inter-arrival process of mean $\alpha > \beta$. Assume that

$$ (13) \qquad \int_{\mathbb{R}_+} P\{S(0, 0) \ge u\}^{1/2}\, du < \infty. $$

Then there exists $M(\alpha) \in \mathbb{R}_+$ such that almost surely $\lim_{n \to +\infty} n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$, where $M(\alpha) = \sup_{x \ge 0} (\gamma(x) - \alpha x)$ and the function $\gamma: \mathbb{R}_+ \to \mathbb{R}_+$ depends only on the service process. If we further assume that the initial inter-arrival process satisfies

$$ (14) \qquad \exists c,\ E[S(0, 0)] < c < E[A(0, 0)], \qquad E\left[\sup_{n \in \mathbb{N}^*}\left[\sum_{i=-n}^{-1} (c - A(i, 0))\right]^+\right] < \infty, $$

then the convergence to $M(\alpha)$ also holds in $L_1$.

Observe that $M(\alpha)$ depends on the inter-arrival process only via its mean. The function $\gamma$ in Theorem 4.1 is continuous, strictly increasing, concave and satisfies $\gamma(0) = 0$. For details on $\gamma$, refer to [2, 12].

Theorem 4.1 is proved in [2] under the condition $E[S(0, 0)^{3+a}] < \infty$ for some $a > 0$. The above version is proved in [17] (using methods similar to those of [2]) and is stronger since we have

$$ \left[\exists a > 0,\ E[S(0, 0)^{2+a}] < \infty\right] \implies \int P\{S(0, 0) \ge u\}^{1/2}\, du < \infty \implies E[S(0, 0)^2] < \infty. $$

Condition (14) is slightly stronger than $E[W(0, 0)] < \infty$. Indeed, recall the following results from [9].
If $E[S(0, 0)^2] < \infty$, then, setting $\beta = E[S(0, 0)]$, + +$$ (15) \qquad \begin{aligned} \exists c > \beta,\ E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} (c - A(i, 0))\right]^+\right] < \infty &\implies E[W(0, 0)] < \infty, \\ E[W(0, 0)] < \infty &\implies E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} (\beta - A(i, 0))\right]^+\right] < \infty. \end{aligned} $$ + +Condition (14) is satisfied, for example, by the deterministic process $P\{A^0 = (\alpha)^Z\} = 1$. + +The next result requires some preparation. Let $\mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z)$ be the set of random processes $(X(n), Y(n))_{n \in \mathbb{Z}}$ which are stationary in $n$. Consider $\mu$ and $\nu$ +---PAGE_BREAK--- + +in $\mathcal{M}_s(\mathbb{R}_+^Z)$ and let $\mathcal{D}(\mu, \nu) = \{(X, Y) \in \mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z) | X \sim \mu, Y \sim \nu\}$. That is, $\mathcal{D}(\mu, \nu)$ is the set of jointly stationary processes whose marginals are distributed as $\mu$ and $\nu$. The $\bar{\rho}$ distance between $\mu$ and $\nu$ is given by + +$$ (16) \qquad \bar{\rho}(\mu, \nu) = \inf_{(X,Y) \in \mathcal{D}(\mu,\nu)} E[|X(0) - Y(0)|]. $$ + +See Gray [13], Chapter 8, for a proof that $\bar{\rho}$ is indeed a distance. Given two r.v.'s $A$ and $B$ with respective laws $\mu$ and $\nu$, set $\bar{\rho}(A, B) = \bar{\rho}(\mu, \nu)$. We recall a well-known fact (see also Section 7): convergence in the $\bar{\rho}$ distance implies weak convergence, but not conversely. + +**THEOREM 4.2 ([8, 19]).** Consider a stationary queue as in Section 3.2 with service process $S$ and two inter-arrival processes $A$ and $B$, possibly of different means. Assume that $A \perp S$ and $B \perp S$. Then, + +$$ (17) \qquad \bar{\rho}(\Phi(A, S), \Phi(B, S)) \le \bar{\rho}(A, B). $$ + +Consider now a stable i.i.d. tandem model as in Section 3.3 with inter-arrival processes $A^0$ and $B^0$ with different laws but such that $E[A(0,0)|\mathfrak{T}] = E[B(0,0)|\mathfrak{T}]$ a.s.
Recall that $(A^n)_n$ and $(B^n)_n$ are defined recursively by $A^{n+1} = \Phi(A^n, S^n)$ and $B^{n+1} = \Phi(B^n, S^n)$. Then there exists $k \in \mathbb{N}^*$ such that + +$$ (18) \qquad \bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0). $$ + +If we further assume that $B^1 = \Phi(B^0, S^0) \sim B^0$, then + +$$ (19) \qquad \lim_{n \to +\infty} \bar{\rho}(A^n, B^0) = 0 \quad \text{and hence} \quad A^n \xrightarrow{w} B^0. $$ + +Chang [8] gives an elegant proof of (17). He also proves (18) for bounded services. Prabhakar [19] removes this restriction and also establishes (19). As opposed to Theorem 4.1, observe that the convergence result in (19) is proved under the a priori assumption of existence of a fixed point. + +Define (“$p{:}\alpha$” stands for “pathwise means are equal to $\alpha$”) + +$$ (20) \qquad \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) = \left\{ \mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z) \mid X \sim \mu \Rightarrow \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} X(i) = \alpha \text{ a.s.} \right\}. $$ + +Obviously, $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z) \subset \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) \subset \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$. The ergodic components of $\chi \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ all have one-dimensional mean $\alpha$. An important consequence of (18) is the following uniqueness result. + +**COROLLARY 4.3.** Consider an i.i.d. queue as in Section 3.2. The corresponding queueing map $\Phi_\sigma$ has at most one fixed point in $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ for $\alpha > E[S(0)]$. + +In particular, there is at most one fixed point in $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. In fact, we have the following stronger result. +---PAGE_BREAK--- + +PROPOSITION 4.4. Consider an i.i.d. queue as in Section 3.2 and $\alpha > E[S(0)]$. If $\zeta \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ is a fixed point, then it is necessarily ergodic; that is, $\zeta \in \mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. + +PROOF.
Suppose that the ergodic decomposition of $\zeta$ is given by $\zeta = \int \mu\Gamma(d\mu)$, where $\Gamma$ is a probability measure on $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. Denote the support of $\Gamma$ by $\text{supp}(\Gamma) \subset \mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. Assume that $\zeta$ is nonergodic, meaning that $\text{supp}(\Gamma)$ is not a singleton. Let $S$ be a subset of $\text{supp}(\Gamma)$ such that $0 < \Gamma\{S\} < 1$. + +Consider a stable i.i.d. tandem model as in Section 3.3. Let $A^0$ and $B^0$ be two inter-arrival processes, independent of the services, and such that $A^0 \sim \zeta$, $B^0 \sim \zeta$, $A^0 \perp B^0$. Define $(A^k)_k$ and $(B^k)_k$ as in (12). Let $C_b(\mathbb{R}_+^Z)$ be the set of continuous and bounded functions from $\mathbb{R}_+^Z$ to $\mathbb{R}$. Recall that $L$ is the left translation shift of $\mathbb{R}_+^Z$ and define recursively $L^{i+1} = L \circ L^i$. Define the $\theta$-invariant events + +$$ A = \left\{ \exists \mu \in S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i A^0) = \int f d\mu \right\}, $$ + +$$ B = \left\{ \exists \mu \in \text{supp}(\Gamma) \setminus S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i B^0) = \int f d\mu \right\}. $$ + +Roughly speaking, on the event $A \cap B$, the processes $A^0$ and $B^0$ are distributed according to different components of the ergodic decomposition of $\zeta$. Using the independence of $A^0$ and $B^0$, we have + +$$ P\{A \cap B\} = P\{A\}P\{B\} = \Gamma\{S\}(1 - \Gamma\{S\}) > 0. $$ + +Define the processes + +$$ \tilde{A}^0 = A^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}, \quad \tilde{B}^0 = B^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}. $$ + +By construction, the laws of $\tilde{A}^0$ and $\tilde{B}^0$ are different and we have $E[\tilde{A}(0, 0)|\mathfrak{T}] = E[\tilde{B}(0, 0)|\mathfrak{T}] = \alpha$ almost surely.
Hence we can apply (18) in Theorem 4.2: there exists $k \in \mathbb{N}^*$ such that $\bar{\rho}(\tilde{A}^k, \tilde{B}^k) < \bar{\rho}(\tilde{A}^0, \tilde{B}^0)$. We deduce easily that $\bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0)$. This is in obvious contradiction with $\bar{\rho}(A^0, B^0) = 0$ which follows from $A^0 \sim B^0$. We conclude that the support of $\Gamma$ is a singleton. $\square$ + +**5. Existence of fixed points.** Consider the stable i.i.d. tandem model of Section 3.3. The objective is to prove Theorem 5.1, that is, to obtain nontrivial stationary distributions for $(A^k)_k$, or equivalently nontrivial fixed points for $\Phi_\sigma$. + +The first step is classical and consists of considering Cesaro averages of the laws of $A^k$. Consider the quadruple $(A^k, S^k, W^k, A^{k+1})$ and denote its law by +---PAGE_BREAK--- + +$\nu_k \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$. For $n \in \mathbb{N}^*$, define $\mu_n \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$ by + +$$\mu_n = \frac{1}{n} \sum_{k=0}^{n-1} \nu_k.$$ + +The following interpretation may be useful: $\mu_n$ is the law of $(A^N, S^N, W^N, A^{N+1})$ where $N$ is a r.v. uniformly distributed over $\{0, \dots, n-1\}$ and independent of all the other r.v.'s of the problem. + +For all $n \in \mathbb{N}^*$, consider a quadruple of random processes $(\hat{A}^n, \hat{S}^n, \hat{W}^n, \hat{D}^n)$ distributed according to $\mu_n$. We have + +$$ (21) \qquad \hat{S}^n \sim \sigma, \quad \hat{S}^n \perp \hat{A}^n, \quad \hat{W}^n = \Psi(\hat{A}^n, \hat{S}^n), \quad \hat{D}^n = \Phi(\hat{A}^n, \hat{S}^n). $$ + +First of all, we argue that the sequence $(\mu_n)_n$ is tight. Denote by $\mu_n^{(1)}$, $\mu_n^{(2)}$, $\mu_n^{(3)}$ and $\mu_n^{(4)}$ the marginals of $\mu_n$ corresponding respectively to the laws of $\hat{A}^n$, $\hat{S}^n$, $\hat{W}^n$ and $\hat{D}^n$.
Since $\mu_n^{(3)}$ is defined on the compact space $[0, \infty]^Z$ and since $\mu_n^{(2)} = \sigma$, the only point to be argued is that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight. According to Loynes' results, we have $\mu_n^{(1)}$, $\mu_n^{(4)} \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ [we even have $\mu_n^{(1)}$, $\mu_n^{(4)} \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$]. For $\varepsilon > 0$, the set $K = \prod_{i \in \mathbb{Z}}[0, 2^{|i|+2}/\varepsilon]$ is compact in the product topology according to Tychonoff's theorem. It is immediate to check that for $\mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$, we have $\mu\{K\} \ge 1 - \alpha\varepsilon$. We conclude that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight. + +Consequently, by Prohorov's theorem, $(\mu_n)_n$ admits weakly converging subsequences. Let $\mu$ be a subsequential limit of $(\mu_n)_n$. Consider a quadruple of random processes + +$$ (22) \qquad (\hat{A}, \hat{S}, \tilde{W}, \tilde{D}) \sim \mu. $$ + +It follows immediately from (21) that + +$$ (23) \qquad \hat{S} \sim \sigma, \quad \hat{S} \perp \hat{A}. $$ + +Recall that we have $\hat{D}^n = [\hat{A}^n - \hat{S}^n - \hat{W}^n]^+ + L\hat{S}^n$. By the continuous mapping theorem, we deduce that + +$$ (24) \qquad \tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}. $$ + +On the other hand, it is not a priori true that $\tilde{W} = \Psi(\hat{A}, \hat{S})$ and $\tilde{D} = \Phi(\hat{A}, \hat{S})$ (which is the reason for the notation $\hat{A}, \hat{S}$ on the one side and $\tilde{W}, \tilde{D}$ on the other). Using (5) we have, for all $k < l-1$, + +$$ \left[ \max_{k \le j \le l-1} \sum_{i=j}^{l-1} \bigl(\hat{S}^n(i) - \hat{A}^n(i)\bigr) \right]^+ \le \hat{W}^n(l). $$ + +The left-hand side is a continuous function of finitely many coordinates, so the inequality is preserved in the weak limit; letting $k \to -\infty$, we obtain + +$$ (26) \qquad \Psi(\hat{A}, \hat{S}) \le \tilde{W} \quad \text{and hence, by (24),} \quad \tilde{D} \le \Phi(\hat{A}, \hat{S}). $$ + +Define the $\theta$-invariant event + +$$ \mathcal{A} = \bigl\{ E[\hat{A}(0)|\mathfrak{T}] \le \beta \bigr\}, \qquad \text{so that} \qquad \mathcal{A}^c = \bigl\{ E[\hat{A}(0)|\mathfrak{T}] > \beta \bigr\}. $$ + +Using Loynes' results for the critical case, we have $\Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$. +Now using (26), we deduce that $\tilde{D} = \Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$.
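As an aside, the queueing operators $\Psi$ and $\Phi$ iterated in this section are elementary to simulate. The sketch below is an illustration only, not the paper's formalism: the finite-window truncation, the variable names and the choice of exponential distributions are our own assumptions. It runs a finite tandem of FCFS single-server queues using the standard identities (customer $n$ arrives at $a(n) = a(n-1) + A(n)$, waits $[d(n-1) - a(n)]^+$, departs at $d(n) = \max(a(n), d(n-1)) + S(n)$) and checks that the empirical mean of the inter-departure times stays at the inter-arrival mean $\alpha$, in line with the rate-conservation property of stable queues used throughout.

```python
import random

def queue_map(inter_arrivals, services):
    """Finite-window FCFS single-server queue.

    Returns (waits, inter_departures): waiting times and the
    inter-departure times that feed the next queue in the tandem
    (finite-window stand-ins for the operators Psi and Phi).
    """
    a = d = 0.0
    waits, deps = [], []
    for ia, s in zip(inter_arrivals, services):
        a += ia                          # arrival epoch of this customer
        waits.append(max(0.0, d - a))    # wait behind the previous departure
        d = max(a, d) + s                # FCFS departure epoch
        deps.append(d)
    inter_deps = [deps[0]] + [deps[i] - deps[i - 1] for i in range(1, len(deps))]
    return waits, inter_deps

random.seed(0)
n, alpha, beta = 50_000, 2.0, 1.0        # stability: alpha > beta
A = [random.expovariate(1 / alpha) for _ in range(n)]
for _ in range(5):                        # five i.i.d. queues in tandem
    S = [random.expovariate(1 / beta) for _ in range(n)]
    W, A = queue_map(A, S)
mean_out = sum(A) / n                     # empirical mean inter-departure time
```

With exponential services this is the classical setting of Burke's theorem [7]: the Poisson inter-arrival process is the known ergodic fixed point, and `mean_out` stays close to $\alpha$ after every stage.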
+ +Since $\hat{A} \ge_{st} \hat{S}$ and $\hat{A} \perp \hat{S}$, we have, according to (1), + +$$ \hat{A} = \bar{S}\, 1_{\mathcal{A}} + \hat{A}\, 1_{\mathcal{A}^c}, $$ +---PAGE_BREAK--- + +where $\bar{S} \sim \hat{S}$. Furthermore, we have just proved that + +$$ \tilde{D} = L\hat{S}\, 1_{\mathcal{A}} + \tilde{D}\, 1_{\mathcal{A}^c}. $$ + +Since $\hat{A} \sim \tilde{D}$, we deduce readily that $\hat{A} 1_{\mathcal{A}^c} \sim \tilde{D} 1_{\mathcal{A}^c}$. On the event $\mathcal{A}^c$, we have, using Birkhoff's ergodic theorem, + +$$ \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \hat{A}(i) = E[\hat{A}(0)|\mathfrak{T}] > \beta \implies \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \tilde{D}(i) > \beta. $$ + +In view of $\tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}$, we deduce that on $\mathcal{A}^c$, we have $\tilde{W} \in \mathbb{R}_+^Z$ a.s. For $k < l$, the finiteness of $\tilde{W}$ on $\mathcal{A}^c$ allows us to reverse the above inequalities there, and we conclude that + +$$ (28) \qquad \tilde{W} = \Psi(\hat{A}, \hat{S}), \qquad \tilde{D} = \Phi(\hat{A}, \hat{S}), $$ + +$$ (30) \qquad \{\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}}\} = \mathcal{A}^c = \{E[\hat{A}(0)|\mathfrak{T}] > \beta\}. $$ + +Let $\zeta$ be the law of $\hat{A}$. Since $\hat{A} \sim \tilde{D} = \Phi(\hat{A}, \hat{S})$, the law $\zeta$ satisfies $\Phi_\sigma(\zeta) = \zeta$. + +Consequently, if $\tilde{W} = (\infty)^{\mathbb{Z}}$ a.s. then $\zeta = \sigma$, and if $P\{\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}}\} > 0$ then $\zeta$ is a nontrivial fixed point for the queue. + +Assume now that the moment condition $\int P\{S(0, 0) \ge u\}^{1/2} du < \infty$ is satisfied. This is the condition needed in Theorem 4.1 to obtain that $\lim_n n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$ a.s. for a finite constant $M(\alpha)$. Let us prove that + +$$ (31) \qquad \lim_{n \to +\infty} \frac{1}{n} \sum_{i=0}^{n-1} W(0, i) = M(\alpha) \text{ a.s.} \implies \tilde{W}(0) \in \mathbb{R}_{+} \text{ a.s.} $$ + +We argue by contradiction; hence, suppose that $P\{\tilde{W}(0) = +\infty\} = a > 0$. Fix $K > 0$. Let $f$ be a strictly increasing function from $\mathbb{N}$ to $\mathbb{N}$ such that $\mu_{f(n)} \xrightarrow{w} \mu$. We have $\hat{W}^{f(n)}(0) \xrightarrow{w} \tilde{W}(0)$. Recall that $P\{\hat{W}^n(0) \ge K\} = n^{-1}\sum_{i=0}^{n-1} P\{W(0, i) \ge K\}$. We deduce that + +$$ \forall b \in (0, a), \exists N, \forall n = f(k) \ge N, \quad \frac{1}{n} \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \ge b.
$$ + +Fix $b \in (0, a)$, $c \in (0, b)$ and $n = f(k) \ge N$. Define the event $\mathcal{E} = \{n^{-1} \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} \ge c\}$ and set $q = P\{\mathcal{E}\}$. We have + +$$
\begin{aligned}
& \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} \\
& = \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}} + \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}^c} \le n1_{\mathcal{E}} + nc1_{\mathcal{E}^c}.
\end{aligned}
$$ + +Taking expectations, we get + +$$ nb \le \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \le nq + n(1-q)c. $$ + +We conclude that $q \ge (b-c)/(1-c) > 0$. Since this last inequality is valid for any $K$, we clearly have a contradiction with the a.s. convergence of $n^{-1}\sum_{i=0}^{n-1} W(0, i)$ to a finite constant. + +We conclude that under the assumptions of Theorem 4.1, the fixed point $\zeta$ is nontrivial. Summarizing all of the above, we obtain the following result. +---PAGE_BREAK--- + +**THEOREM 5.1.** Consider a single server infinite buffer FCFS queue with an i.i.d. service process $S$ satisfying: $E[S(0)] \in \mathbb{R}_+^*$, $P\{S(0) = E[S(0)]\} < 1$ and $\int P\{S(0) \ge u\}^{1/2} du < \infty$. Then there exists an ergodic inter-arrival process $A$ with $A \perp S$ and $E[S(0)] < E[A(0)] < \infty$, and such that the corresponding inter-departure process $D$ has the same distribution as $A$. + +PROOF. Consider a tandem of queues as in Section 3.3 where the service processes $S^k$ are distributed as $S$ with law $\sigma$. Consider the process $\hat{A}$ with law $\zeta$ as defined in (22). By the ergodic decomposition theorem and the linearity of $\Phi_\sigma$, we have + +$$ \zeta = \int_{\mathcal{M}_e(\mathbb{R}_+^Z)} \chi \Gamma(d\chi), \quad \Phi_\sigma(\zeta) = \int_{\mathcal{M}_e(\mathbb{R}_+^Z)} \Phi_\sigma(\chi) \Gamma(d\chi). $$ + +But $\zeta = \Phi_\sigma(\zeta)$.
Therefore, the uniqueness of ergodic decompositions and the mean preservation property of stable queues imply that + +$$ \zeta_\alpha = \int_{\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)} \chi \Gamma(d\chi) = \int_{\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)} \Phi_\sigma(\chi) \Gamma(d\chi) = \Phi_\sigma(\zeta_\alpha) $$ + +for every $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$. By Proposition 4.4, the distributions $\zeta_\alpha$ are ergodic. According to (31), which holds since $\int P\{S(0) \ge u\}^{1/2} du < \infty$, we have $P\{\tilde{W} \in \mathbb{R}_+^Z\} = 1$ and $E[\hat{A}(0)|\mathfrak{T}] > E[S(0)]$ according to (30). Hence any $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$ is such that $\alpha > E[S(0)]$ and we conclude that the corresponding distribution $\zeta_\alpha \in \mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$ is such that $\Phi_\sigma(\zeta_\alpha) = \zeta_\alpha$. $\square$ + +To the best of our knowledge, this provides the first positive answer (apart from the cases of exponential and geometric service times) to the intriguing question of the existence of nontrivial ergodic fixed points for a $\cdot/GI/1/\infty$/FCFS queue. + +**6. Values of the means for which a fixed point exists.** Consider a tandem of stable i.i.d. queues as in Section 3.3 and let $\Phi_\sigma$ be the corresponding queueing operator. Assume also that the condition (13) holds. Define + +$$ (32) \qquad \mathcal{S} = \{\alpha \in (\beta, +\infty) \mid \exists \mu \in \mathcal{M}_e^\alpha(\mathbb{R}_+^Z), \Phi_\sigma(\mu) = \mu\}. $$ + +According to Theorem 5.1, the set $\mathcal{S}$ is nonempty. We establish in Theorem 6.4 that $\mathcal{S}$ is unbounded and closed in $(\beta, \infty)$. We believe that $\mathcal{S} = (\beta, +\infty)$ but we have not been able to prove this last point (see Conjecture 6.6).
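The function $M(\alpha) = \sup_{x \ge 0}(\gamma(x) - \alpha x)$ from Theorem 4.1 drives several of the arguments below. It is a Legendre-type transform of the concave function $\gamma$, hence automatically nonnegative (take $x = 0$) and decreasing in $\alpha$. The toy computation below illustrates this numerically with a stand-in concave function $\gamma(x) = 2\sqrt{x}$ — our own assumption for illustration, not the last-passage limit shape of [2, 12, 17] — for which the supremum is attained at $x = 1/\alpha^2$ and $M(\alpha) = 1/\alpha$ in closed form.

```python
import math

def gamma(x):
    # Stand-in for the limit shape: concave, increasing, gamma(0) = 0.
    return 2.0 * math.sqrt(x)

def M(alpha, xmax=20.0, steps=100_000):
    # Brute-force the supremum sup_{x >= 0} (gamma(x) - alpha * x) on a grid.
    # For gamma(x) = 2*sqrt(x) the maximizer x = 1/alpha**2 lies well
    # inside [0, xmax] for the values of alpha used here.
    h = xmax / steps
    return max(gamma(k * h) - alpha * k * h for k in range(steps + 1))

alphas = [0.5, 1.0, 2.0, 4.0]
vals = [M(a) for a in alphas]    # close to the exact values 1/alpha
```

The computed values decrease in $\alpha$, matching the monotonicity of $M$ invoked in the proof of Theorem 6.4.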
Proposition 6.5 also describes the limiting behavior obtained when an ergodic inter-arrival process whose mean $\alpha$ does not belong to $\mathcal{S}$ is fed into the tandem (the case $\alpha \in \mathcal{S}$ is settled by Theorem 4.2). + +From now on, for $\alpha \in \mathcal{S}$, denote by $\zeta_\alpha$ the unique ergodic fixed point of mean $\alpha$ and by $A_\alpha$ an inter-arrival process distributed as $\zeta_\alpha$. Let $S$ be distributed as $\sigma$ and independent of all other r.v.'s. Also it is convenient to denote by $\mathcal{L}(A)$ the law of a r.v. $A$, and by $\operatorname{supp} A$ its support. +---PAGE_BREAK--- + +The following argument is used several times. Consider $\alpha \in \mathcal{S}$ and let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. According to (19), we have $A^n \xrightarrow{w} A_\alpha$. This implies that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(A^i) \xrightarrow{w} \mathcal{L}(A_\alpha)$. According to (28), we have + +$$ (33) \qquad \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)). $$ + +We now prove a series of preliminary lemmas. + +LEMMA 6.1. For any $\alpha > \beta$, $\mathcal{S} \cap (\beta, \alpha) \neq \emptyset$. + +PROOF. Fix $\alpha > \beta$. Let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. Recall from the proof of Theorem 5.1 that + +$$ (34) \qquad \operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S} \subset (\beta, \infty). $$ + +By Fatou's lemma, $E[\hat{A}(0)] \le \alpha$. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]]$, we conclude that $\mathcal{S} \cap (\beta, \alpha] \ne \emptyset$. Since $\alpha > \beta$ is arbitrary, applying this to some $\alpha' \in (\beta, \alpha)$ yields $\mathcal{S} \cap (\beta, \alpha) \ne \emptyset$. $\square$ + +LEMMA 6.2. Consider an ergodic inter-arrival process $A^0$ of mean $\alpha > \beta$.
Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. Consider $\delta \in \mathcal{S} \cap (\beta, \alpha]$ (resp. $\delta \in \mathcal{S} \cap [\alpha, \infty)$, assuming $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$), then $A_\delta \le_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \ge_{\text{st}} \Psi(A_\delta, S)$ [resp., $A_\delta \ge_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \le_{\text{st}} \Psi(A_\delta, S)$]. Further, if $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$, then $E[\hat{A}(0)] = \alpha$. + +PROOF. Consider the case $\delta \in \mathcal{S} \cap [\alpha, \infty)$. The other case can be treated similarly. Define the process $B^0 = \delta\alpha^{-1}A^0$, that is, + +$$ \forall n, \quad B(n, 0) = \frac{\delta}{\alpha} A(n, 0). $$ + +The process $B^0$ is ergodic and of mean $\delta$. At mean $\delta$, $\Phi_\sigma$ admits the fixed point $\zeta_\delta$. By (19), we have $B^k \xrightarrow{w} A_\delta$. By construction, we have $A^0 \le B^0$ almost surely. Using the monotonicity property (9), we get that, for all $k \in \mathbb{N}$, + +$$ A^k \le B^k \quad \text{and} \quad \Psi(A^k, S^k) \ge \Psi(B^k, S^k). $$ + +This implies that for all $k \in \mathbb{N}^*$, + +$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \le_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(B^i) $$ +---PAGE_BREAK--- + +and + +$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(A^i, S^i)) \geq_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(B^i, S^i)). $$ + +Going to the limit along an appropriate subsequence and applying (33), we obtain + +$$ \hat{A} \leq_{\text{st}} A_{\delta} \quad \text{and} \quad \Psi(\hat{A}, S) \geq_{\text{st}} \Psi(A_{\delta}, S). $$ + +It remains to show that $E[\hat{A}(0)] = \alpha$.
Observe that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i) \xrightarrow{w} \zeta_\delta$, and that the one-dimensional marginals converge in expectation since $k^{-1} \sum_{i=0}^{k-1} E[B(0, i)] = \delta = E[A_\delta(0)]$. It follows by Theorem 5.4 of [3] that the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i))_k$ is uniformly integrable. This implies that the dominated sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i))_k$ is also uniformly integrable. Along an appropriate subsequence, this last sequence converges weakly to the law of $\hat{A}$ and we conclude (Theorem 5.4 of [3]) that it also converges in expectation. Since $k^{-1} \sum_{i=0}^{k-1} E[A(0, i)] = \alpha$ for all $k$, we deduce that $E[\hat{A}(0)] = \alpha$. $\square$ + +LEMMA 6.3. *The following statements are true:* + +(a) for $\alpha, \delta \in \mathcal{S}$ and $\alpha < \delta$, $A_\alpha \leq_{\text{st}} A_\delta$ and $\Psi(A_\alpha, S) \geq_{\text{st}} \Psi(A_\delta, S)$; + +(b) for $\alpha \in \mathcal{S}, E[\Psi(A_\alpha, S)(0)] = M(\alpha)$, where $M(\alpha)$ is defined in Theorem 4.1. + +PROOF. Part (a) is a direct consequence of Lemma 6.2. Consider part (b). Fix $\alpha \in \mathcal{S}$. Consider $A^0$ an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). From Theorem 4.1, we have + +$$ \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$ + +Starting from (33) and applying Fatou's lemma, we get + +$$ E[\Psi(A_\alpha, S)(0)] \leq \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$ + +Now let us prove that $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. By Lemma 6.1, there exists $\delta \in \mathcal{S} \cap (\beta, \alpha)$. Define the process $B^0 = \alpha\delta^{-1}A_\delta$ and let $(B^n)_n$ be defined as in (12). The process $B^0$ is ergodic of mean $\alpha$. We also have $B^0 \geq A_\delta$ a.s.
Using (9), this implies + +$$ \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \leq_{\text{st}} \mathcal{L}(\Psi(A_\delta, S)(0)) \quad \text{for all } n. $$ + +Since $E[\Psi(A_\delta, S)(0)] \leq M(\delta) < \infty$, the sequence $\{n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)), n \in \mathbb{N}^*\}$ is uniformly integrable. Furthermore, we have from (33) +---PAGE_BREAK--- + +that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)(0))$. Applying Theorem 5.4 of [3], weak convergence plus uniform integrability implies convergence in expectation: + +$$\lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)] = E[\Psi(A_\alpha, S)(0)].$$ + +Now recall from Theorem 4.1 that we have $n^{-1} \sum_{i=0}^{n-1} \Psi(B^i, S^i)(0) \to M(\alpha)$ almost surely. Applying Fatou's lemma, we get + +$$M(\alpha) \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)].$$ + +Summarizing, we have $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. This completes the proof. $\square$ + +**THEOREM 6.4.** *The set $\mathcal{S}$ is closed in $(\beta, \infty)$ and $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = +\infty$.* + +**PROOF.** A direct consequence of Lemma 6.1 is that $\inf\{u \in \mathcal{S}\} = \beta$. We prove that $\sup\{u \in \mathcal{S}\} = +\infty$ by contradiction. Thus, suppose $\sup\{u \in \mathcal{S}\} < \infty$ and consider $\alpha > \sup\{u \in \mathcal{S}\}$. Let $A^0$ be an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, $A_\delta \le_{\text{st}} \hat{A}$ for any $\delta \in \mathcal{S}$. According to (1), this implies that $\delta \le E[\hat{A}(0)|\mathfrak{T}]$ a.s.
Since $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$, see (34), we conclude that almost surely + +$$E[\hat{A}(0)|\mathfrak{T}] = \sup\{u \in \mathcal{S}\} \in \mathcal{S}.$$ + +Set $\eta = \sup\{u \in \mathcal{S}\}$. Since $\hat{A}$ is a fixed point, we must have $\hat{A} \sim A_\eta$. In particular, along an appropriate subsequence, we have that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i))$ converges weakly to $\mathcal{L}(\Psi(A_\eta, S))$. Now, a sequential use of Lemma 6.3, Fatou's lemma and Theorem 4.1 gives us + +$$M(\eta) = E[\Psi(A_\eta, S)(0)] \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha).$$ + +It follows from the properties of $\gamma$ recalled after the statement of Theorem 4.1 that $M(x)$ is a nonnegative and decreasing function that is strictly decreasing on the interval $\{x | M(x) > 0\}$. Since $\eta < \alpha$ and $M(\eta) \le M(\alpha)$, we conclude that $M(\alpha) = M(\eta) = 0$. Thus, $E[\Psi(A_\eta, S)(0)] = 0$, that is, $P\{\Psi(A_\eta, S) = (0)^{\mathbb{Z}}\} = 1$. Let us input the process $A_\eta$ into the tandem of queues. Using (8) recursively, we obtain + +$$\begin{align*}
A_{\eta}^{k}(0) &= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)] + \sum_{i=0}^{k-1}[\Psi(A_{\eta}^{i}, S^{i})(1) - \Psi(A_{\eta}^{i}, S^{i})(0)] \\
&= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)].
\end{align*}$$ +---PAGE_BREAK--- + +Since the service times are i.i.d. and nonconstant, the partial sums $\sum_{i=0}^{k-1}[S(1, i) - S(0, i)]$ form a null-recurrent random walk. Thus there is a $k$ for which $A_{\eta}^{k}(0) < 0$ with strictly positive probability, which is impossible. Hence, we cannot have $M(\eta) = 0$. In turn, this implies $\sup\{u \in \mathcal{S}\} = \infty$, and via Lemma 6.2 we get that $E[\hat{A}(0)] = \alpha$. + +We now prove that $\mathcal{S}$ is closed in $(\beta, \infty)$. Consider a sequence $\alpha_k$ of elements of $\mathcal{S}$ that increases to $\alpha \in (\beta, \infty)$.
Let $A^0$ and $\hat{A}$ be defined as above (for the mean $\alpha$). Using Lemma 6.2, we have $A_{\alpha_k} \le_{\text{st}} \hat{A}$ and using (1), we have $\alpha_k \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Passing to the limit, we get $\alpha \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]] = \alpha$, we conclude that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\alpha\}$. This implies that $\alpha \in \mathcal{S}$. The proof works similarly when $\alpha_k$ is a decreasing sequence. $\square$ + +**PROPOSITION 6.5.** *Consider an ergodic inter-arrival process $A^0$ of mean $\alpha$. There are two possibilities:* + +1. if $\alpha \in \mathcal{S}$, then $\bar{\rho}(A^k, A_\alpha) \xrightarrow{k} 0$ and hence $A^k \xrightarrow{w} A_\alpha$; + +2. if $\alpha \notin \mathcal{S}$, then $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$, where + +$$ (35) \qquad \underline{\alpha} = \sup\{u \in \mathcal{S}; u \le \alpha\}, \qquad \overline{\alpha} = \inf\{u \in \mathcal{S}; u \ge \alpha\} \quad \text{and} \quad p = \frac{\overline{\alpha} - \alpha}{\overline{\alpha} - \underline{\alpha}}. $$ + +In words, the weak Cesaro limit is a convex combination of the largest ergodic fixed point of mean less than $\alpha$ and of the smallest ergodic fixed point of mean more than $\alpha$. The weak Cesaro limit always has mean $\alpha$. + +**PROOF.** The case $\alpha \in \mathcal{S}$ is a restatement of (19). Consider $\alpha \notin \mathcal{S}$. Denote by $\hat{A}$ a process whose law is a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, we have $A_u \le_{\text{st}} \hat{A} \le_{\text{st}} A_v$ for any $u, v \in \mathcal{S}$ such that $u < \alpha < v$. Therefore, using (1), we get that $u \le E[\hat{A}(0)|\mathfrak{T}] \le v$ a.s.
Since $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$ [see (34)] and $E[\hat{A}(0)] = \alpha$ (Lemma 6.2) we conclude that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, where $\underline{\alpha}$ and $\overline{\alpha}$ are defined as in (35). + +We know from Section 5 that the law of $\hat{A}$ is a fixed point. Given that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, Proposition 4.4 tells us that $\mathcal{L}(\hat{A}) = p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$ for some $p$. Therefore $E[\hat{A}(0)] = p\underline{\alpha} + (1-p)\overline{\alpha}$ and from $E[\hat{A}(0)] = \alpha$, we conclude that $p = (\overline{\alpha} - \alpha)/(\overline{\alpha} - \underline{\alpha})$. + +A consequence of the above argument is that any convergent subsequence of $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i)$ must converge weakly to $p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. Recalling an argument of Section 5, the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i), k \in \mathbb{N}^*)$ is tight, hence relatively compact. This implies that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. $\square$ + +The previous results characterize $\mathcal{S}$ to a certain extent. We believe that more is true. +---PAGE_BREAK--- + +CONJECTURE 6.6. For any $\alpha > \beta = E[S(0, 0)]$, there exists an ergodic fixed point of mean $\alpha$. That is, $\mathcal{S} = (\beta, +\infty)$. + +It is possible to show that $\mathcal{S}$ is equal to the image of the derivative of $\gamma$ defined in Theorem 4.1. (Since $\gamma$ is concave, its derivative $\gamma'$ is continuous except at a countable number of points. At the points of discontinuity, we consider that both the left and the right-hand limits belong to the image.)
Hence the conjecture is true if the function $\gamma$ has a continuous derivative. However, we have not been able to prove this. The function $\gamma$ defines the limit shape of an oriented last-passage percolation model on $\mathbb{N}^2$ with weights $(S(i, j))_{i,j}$ on the lattice points; see [2, 12, 17]. Establishing the smoothness of the limit shape in percolation models is usually a difficult question. + +**7. Complements.** In proving Theorem 5.1, an essential step was to establish the identity (28): $\tilde{D} = \Phi(\hat{A}, \hat{S})$. This can be rephrased as the weak continuity of the operator $\Phi_\sigma$ of an i.i.d. queue along the converging subsequences of the Cesaro averages of the laws of $A^k$. In fact a much stronger result holds: + +**THEOREM 7.1.** For a stationary queue defined as in Section 3.2, the operator $\Phi_\sigma$ is weakly continuous on $\mathcal{M}_s(\mathbb{R}_+^Z)$. + +Theorem 7.1 is a generalization of a result due to Borovkov ([4], Chapter 11 or [5], Chapter 4); see also [6]. Borovkov proves that for an ergodic queue, $\Phi_\sigma$ is weakly continuous on $\bigcup_{x > \beta} \mathcal{M}_s^x(\mathbb{R}_+^Z)$. + +In view of Theorem 7.1, it is tempting to look for a fixed point by applying a topological fixed point theorem (see, e.g., [20]) to a convex compact set $\mathcal{C} \subset \mathcal{M}_s(\mathbb{R}_+^Z)$ mapped into itself by $\Phi_\sigma$. Fix $\alpha > E[S(0)]$. The set $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$ is mapped into itself by $\Phi_\sigma$. However, it is not convex. Its convexification is the set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ defined in (20). The set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ is not weakly closed [as can be seen by considering $(\xi_n)_n$ defined in (36)]. Its closure is the set $\bigcup_{x \le \alpha} \mathcal{M}_s^x(\mathbb{R}_+^Z)$. +---PAGE_BREAK--- + +Since $\Phi_{\sigma}(\mu) \geq_{\text{st}} \sigma$ for all $\mu$, we deduce the following natural and “minimal” candidate for $\mathcal{C}$: + +$$ \mathcal{C} = \Bigl(\bigcup_{x \leq \alpha} \mathcal{M}_s^x(\mathbb{R}_+^Z)\Bigr) \cap \{\mu \mid \mu \geq_{\text{st}} \sigma\}. $$ + +It is easily checked that $\mathcal{C}$ is compact, convex, and mapped into itself by $\Phi_{\sigma}$. We therefore conclude that there exists a fixed point in $\mathcal{C}$.
The problem is that $\mathcal{C}$ is too large: it contains the trivial fixed point $\sigma$, and we have no way to assert the existence of a nontrivial fixed point. + +Building on the above idea, one could try the same approach with another topology on $\mathcal{M}_s(\mathbb{R}_+^Z)$: the one induced by the $\bar{\rho}$ distance defined in (16). According to Theorem 4.2, the map $\Phi_{\sigma}$ is 1-Lipschitz on $\mathcal{M}_s(\mathbb{R}_+^Z)$, hence continuous. However, there is no clear way to build a compact and convex set on which to work. Indeed, let $\xi_n \in \mathcal{M}_e^1(\mathbb{R}_+^Z)$ be the distribution of the periodic process whose period is given by + +$$ (36) \qquad (\underbrace{0, \dots, 0}_{n}, \underbrace{2, \dots, 2}_{n}). $$ + +It is easy to see that $(\xi_n)_n$ is not sequentially compact in $\mathcal{M}_s(\mathbb{R}_+^Z)$ for the $\bar{\rho}$ topology. Indeed, we have $\xi_n \xrightarrow{w} \xi$, where $\xi$ is the law of a process equal to $(0)^Z$ with probability $1/2$ and to $(2)^Z$ with probability $1/2$. Since convergence in the $\bar{\rho}$ topology implies weak convergence, if $(\xi_n)_n$ admits a subsequential limit in the $\bar{\rho}$ topology, then it has to be $\xi$. However, it is easy to check that $\bar{\rho}(\xi_n, \xi) = 1$ for all $n$. + +**Acknowledgment.** The authors would like to thank Tom Kurtz for a very careful reading and in particular for suggesting a simplification of the original proof of Theorem 5.1. This has led to an important shortening and overall improvement of the paper. + +## REFERENCES + +[1] ANANTHARAM, V. (1993). Uniqueness of stationary ergodic fixed point for a $\cdot/M/K$ node. *Ann. Appl. Probab.* **3** 154–172. [Correction (1994) *Ann. Appl. Probab.* **4** 607.] + +[2] BACCELLI, F., BOROVKOV, A. and MAIRESSE, J. (2000). Asymptotic results on infinite tandem queueing networks. *Probab. Theory Related Fields* **118** 365–405. + +[3] BILLINGSLEY, P. (1968). *Convergence of Probability Measures*. Wiley, New York. + +[4] BOROVKOV, A. (1976).
*Stochastic Processes in Queueing Theory*. Springer, Berlin. [Russian edition (1972), Nauka, Moscow.] + +[5] BOROVKOV, A. (1984). *Asymptotic Methods in Queueing Theory*. Wiley, New York. [Russian edition (1980), Nauka, Moscow.] + +[6] BRANDT, A., FRANKEN, P. and LISEK, B. (1990). *Stationary Stochastic Models*. Wiley, New York. + +[7] BURKE, P. (1956). The output of a queueing system. *Oper. Res.* **4** 699–704. + +[8] CHANG, C. S. (1994). On the input-output map of a $G/G/1$ queue. *J. Appl. Probab.* **31** 1128–1133. +---PAGE_BREAK--- + +[9] DALEY, D. and ROLSKI, T. (1992). Finiteness of waiting-time moments in general stationary single-server queues. *Ann. Appl. Probab.* **2** 987–1008. + +[10] DUDLEY, R. (1989). *Real Analysis and Probability*. Wadsworth & Brooks/Cole, Belmont, CA. + +[11] ETHIER, S. and KURTZ, T. (1986). *Markov Processes: Characterization and Convergence*. Wiley, New York. + +[12] GLYNN, P. and WHITT, W. (1991). Departures from many queues in series. *Ann. Appl. Probab.* **1** 546–572. + +[13] GRAY, R. (1988). *Probability, Random Processes, and Ergodic Properties*. Springer, Berlin. + +[14] KAMAE, T., KRENGEL, U. and O'BRIEN, G. L. (1977). Stochastic inequalities on partially ordered spaces. *Ann. Probab.* **5** 899–912. + +[15] LOYNES, R. (1962). The stability of a queue with non-independent interarrival and service times. *Proc. Cambridge Philos. Soc.* **58** 497–520. + +[16] MAIRESSE, J. and PRABHAKAR, B. (1999). On the existence of fixed points for the $·/GI/1$ queue. LIAFA Research Report 99/25, Université Paris 7. + +[17] MARTIN, J. (2002). Large tandem queueing networks with blocking. *Queueing Systems Theory Appl.* **41** 45–72. + +[18] MOUNTFORD, T. and PRABHAKAR, B. (1995). On the weak convergence of departures from an infinite sequence of $·/M/1$ queues. *Ann. Appl. Probab.* **5** 121–127. + +[19] PRABHAKAR, B. (2003). The attractiveness of the fixed points of a $·/GI/1$ queue. *Ann. Probab.* **31** 2237–2269. + +[20] RUDIN, W. 
(1991). *Functional Analysis*, 2nd ed. McGraw-Hill, New York. + +[21] STOYAN, D. (1984). *Comparison Methods for Queues and Other Stochastic Models*. Wiley, New York. + +[22] WHITT, W. (1980). Uniform conditional stochastic order. *J. Appl. Probab.* **17** 112–123. + +LIAFA +UNIVERSITY DENIS DIDEROT +CASE 7014 +2 PLACE JUSSIEU +F-75251 PARIS CEDEX 05 +FRANCE + +E-MAIL: jean.mairesse@liafa.jussieu.fr + +DEPARTMENTS OF ELECTRICAL ENGINEERING +AND COMPUTER SCIENCE + +STANFORD UNIVERSITY + +STANFORD, CALIFORNIA 94305-9510 + +E-MAIL: balaji@stanford.edu \ No newline at end of file diff --git a/samples/texts_merged/6743834.md b/samples/texts_merged/6743834.md new file mode 100644 index 0000000000000000000000000000000000000000..fdec9cfecea87a0ead2237590343f3d38b8622fa --- /dev/null +++ b/samples/texts_merged/6743834.md @@ -0,0 +1,93 @@ + +---PAGE_BREAK--- + +# Solutions Complex Analysis Stein Shakarchi + +When people should go to the ebook stores, search creation by shop, shelf by shelf, it is truly problematic. This is why we provide the ebook compilations in this website. It will agreed ease you to look guide **solutions complex analysis stein shakarchi** as you such as. + +By searching the title, publisher, or authors of guide you truly want, you can discover them rapidly. In the house, workplace, or perhaps in your method can be all best place within net connections. If you take aim to download and install the solutions complex analysis stein shakarchi, it is totally easy then, back currently we extend the link to purchase and make bargains to download and install solutions complex analysis stein shakarchi hence simple! + +is one of the publishing industry's leading distributors, providing a comprehensive and impressively high-quality range of fulfilment and print services, online book reading and download.
+ +## Solutions Complex Analysis Stein Shakarchi + +SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY STEIN AND SHAKARCHI 3 Solution 3. $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$. There are $n$ solutions, as there should be, since we are finding the roots of a degree $n$ polynomial in the algebraically closed field $\mathbb{C}$. + +## SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY ... + +Chapter 1. Preliminaries to Complex Analysis 1; 1.1 Complex numbers and the complex plane 1; 1.1.1 Basic properties 1; 1.1.2 Convergence 5; 1.1.3 Sets in the complex plane 5; 1.2 Functions on the complex plane 8; 1.2.1 Continuous functions 8; 1.2.2 Holomorphic functions 8; 1.2.3 Power series 14; 1.3 Integration along curves 18; 1.4 Exercises 24; Chapter 2. +---PAGE_BREAK---
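The $n$th-root formula quoted above is easy to verify numerically. The following sketch (ours, not part of any solutions manual) checks that the $n$ claimed values are distinct and all solve $z^n = s e^{i\varphi}$.

```python
import cmath
import math

def nth_roots(n, s, phi):
    """The n solutions of z**n = s * exp(i*phi) for s > 0."""
    r = s ** (1.0 / n)  # real nth root of the positive number s
    return [r * cmath.exp(1j * (phi + 2 * math.pi * k) / n) for k in range(n)]

n, s, phi = 5, 3.0, 0.7
roots = nth_roots(n, s, phi)
target = s * cmath.exp(1j * phi)
assert all(abs(z ** n - target) < 1e-9 for z in roots)               # each value is a root
assert len({(round(z.real, 9), round(z.imag, 9)) for z in roots}) == n  # all distinct
```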
+ +**Stein And Shakarchi Complex Analysis Manual Solution ...** + +SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY STEIN AND SHAKARCHI 3 Solution 3. $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$. + +**solution to complex analysis stein shakarchi - Análise Complex** + +Solutions Complex Analysis Stein Shakarchi Solutions Complex Analysis Stein Shakarchi 3 Solution 3. $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$. There are $n$ solutions as there should be since we are finding the roots of a degree $n$ polynomial in the algebraically Fourier Analysis Solutions Stein Shakarchi Stein Shakarchi Real Analysis Solutions FROM COMPLEX ANALYSIS BY STEIN AND + +**Read Online Real Analysis Stein Shakarchi Solutions** + +Stein And Shakarchi Complex Analysis Manual Solution. ... The starting point is the simple idea of extending a function initially given for real values of the argument to one that is defined when +---PAGE_BREAK--- + +the argument is complex. ... + +**Stein Real Analysis Solution - costamagarakis.com** + +Fourier Analysis Solutions Stein Shakarchi The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi and published by Princeton University Press between 2003 and 2011. + +**Download Stein Shakarchi Real Analysis** + +and the textbook is Complex Analysis by Stein and Shakarchi (ISBN13: 978-0-691-11385-2). Note to students: it's nice to include the statement of the problems, but I leave that up to you. I am only skimming the solutions. I will occasionally add some comments or mention alternate solutions. If + +**Math 302: Solutions to Homework - Williams College** + +Princeton Lectures in Analysis.
The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi and published by Princeton University Press between 2003 and 2011. They are, in order, Fourier Analysis: An Introduction; Complex Analysis; Real Analysis: Measure Theory, Integration, and Hilbert Spaces; and Functional Analysis: Introduction to Further Topics in Analysis. + +**Princeton Lectures in Analysis - Wikipedia** + +June 22nd, 2018 - Download and Read Stein Shakarchi Fourier Analysis Solutions Stein Shakarchi Fourier Analysis Solutions Give us 5 minutes and we will show you the best book to read today " COMPLEX ANALYSIS BY ELIAS M STEIN ANSWERS + +**Fourier Analysis Solutions Stein Shakarchi** + +Problem 4 (3.2 in Stein-Shakarchi) Integrate over the upper semicircular contour; the integral over the semicircular part is 0 since the degree of the denominator is greater than 2. Therefore the desired integral is just the sum of all residues that lie in the upper semicircular contour. The poles are the 4-th + +**Solution to Stein Complex Analysis | Holomorphic** +---PAGE_BREAK--- + +**Function ...** + +Numerous examples and applications throughout its four planned volumes, of which Complex Analysis is the second, highlight the far-reaching consequences of certain ideas in analysis to other fields... + +**Complex Analysis by Elias M. Stein, Rami Shakarchi - Books ...** + +Real Analysis: Measure Theory, Integration, and Hilbert Spaces +Elias M. Stein and Rami Shakarchi. Real Analysis is the third volume in the Princeton Lectures in Analysis, a series of four textbooks that aim to present, in an integrated manner, the core areas of analysis. Here the focus is on the development of measure and... 
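The residue computation mentioned in Problem 4 above breaks off, but the method it sketches can be sanity-checked numerically. Assuming the integrand is $1/(1+x^4)$ (our identification, consistent with the mention of the 4th roots as poles), summing the residues at the two poles in the upper half-plane gives $\int_{-\infty}^{\infty} \frac{dx}{1+x^4} = \frac{\pi}{\sqrt{2}}$, which the sketch below confirms by direct quadrature.

```python
import math

def f(x):
    return 1.0 / (1.0 + x ** 4)

# Trapezoidal rule on [-L, L]; the neglected tails decay like x**-4.
L, m = 200.0, 500_000
h = 2 * L / m
numeric = h * (0.5 * (f(-L) + f(L)) + sum(f(-L + i * h) for i in range(1, m)))

closed_form = math.pi / math.sqrt(2)  # 2*pi*i times the sum of upper-half-plane residues
assert abs(numeric - closed_form) < 1e-3
```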
+ +**Rami Shakarchi | Princeton University Press** + +and Shakarchi Real Analysis Solution(Stein………………) - कौशल The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi Stein Real Analysis Solution - food.whistleblower.org + +**Real Analysis Stein Shakarchi Solutions** + +**Harvard Mathematics Department : Home page** + +View for free the file Stein & Shakarchi - Complex Analysis - Solutions, uploaded for the course Análise Complexa +Category: Exercise - 5 - 30060137 + +Copyright code: d41d8cd98f00b204e9800998ecf8427e. \ No newline at end of file diff --git a/samples/texts_merged/6772016.md b/samples/texts_merged/6772016.md new file mode 100644 index 0000000000000000000000000000000000000000..296df04ae04bd451b7e8d920d770984e53faf3c0 --- /dev/null +++ b/samples/texts_merged/6772016.md @@ -0,0 +1,219 @@ + +---PAGE_BREAK--- + +GEOMETRIC EVOLUTION PROBLEMS AND +ACTION-MEASURES + +M. BULIGA + +1. INTRODUCTION + +Geometric evolution problems are connected to many interesting phenomena, such as ice melting, metal solidification, explosions, damage mechanics. Any such problem has a geometric object among its unknowns. The canonical example of a geometric evolution problem is the mean curvature flow of a surface. A more complex situation arises in the study of brittle crack propagation. The state of a brittle body is described by a displacement-crack pair, therefore the crack propagation problem has two unknowns. We have to suppose that, at any moment, the displacement has no discontinuities away from the crack. Moreover, the displacement is connected with the crack by the boundary conditions: these contain conditions such as unilateral contact of the lips of the crack. + +In most of the studies the fracture propagation is not recognized to have a geometrical nature.
It is the purpose of this paper to formulate a general geometric evolution problem based on the notion of action-measure, introduced here. For particular choices of the action-measure we obtain formulations of the mean curvature flow or the brittle fracture propagation problems. + +2. ACTION MEASURES AND VISCOSITY SOLUTIONS + +($L$, $\le$, $\tau$) is a sequential topological ordered set (or t.o.s.) if ($L$, $\le$) is an ordered set and for any sequence $(\beta_h)_h$ in $L$, converging to some $\beta \in L$, if there exists $\alpha \in L$ such that $\beta_h \le \alpha$ for any $h$, then $\beta \le \alpha$. + +Let us consider $F : X \to L$, where $X$ is a topological space and $L$ is a sequential t.o.s. A minimal element of $F$ is any $x \in X$ such that for any $y \in X$, if $F(y) \le F(x)$ then $F(y) = F(x)$. Remark however that, due to the + +*Key words and phrases.* geometric evolution problems, viscosity solutions, brittle fracture mechanics, mean curvature flow. +---PAGE_BREAK---
Consider now $L$, the polar of $H$, + +$$ +L(x, p) = \sup \{ \langle p, q \rangle - H(x, q) : q \in \mathbb{R}^n \} . +$$ + +For any fixed $T > 0$ we define the set + +$$ +\begin{align*} +\Lambda_{T} &= \{ c : \bar{\Omega} \times [0, T] \to \bar{\Omega} : c(x, \cdot) \in C^1([0, T]) \quad \forall x \in \Omega, \\ +&c(\cdot, 0) = id, c(x, T) \in \partial\Omega \quad \forall x \in \Omega \} +\end{align*} +$$ + +and the function $F: \Lambda_T \to M(\Omega)$ + +$$ +F(c)(B) = \int_B g(c(x, T)) \, dx + \int_B \int_0^T L(c(x,t), \dot{c}(x,t)) \, dt \, dx . +$$ + +Here $g$ is a positive function defined on $\partial\Omega$. This action-measure has minimal elements. Moreover it has minimizing elements. Let $c_0$ be any one of them. Then + +$$ +(1) \qquad F(c_0)(B) = \int_B u(x) \, dx \quad \forall B \in \mathcal{B}(\Omega) +$$ + +where $u$ is the viscosity solution of the problem + +$$ +(2) \qquad H(x, \nabla u) = 0 , \quad u = g \text{ on } \partial\Omega . +$$ + +Notice that in this setting of the problem (2) the primary unknown is the map $c_0$. The viscosity solution of (2), that is $u$, is the Lebesgue density of the measure $F(c_0)$. + +Any function $c \in \Lambda_T$ can be identified with a path of deformations of $\Omega$ by $t \mapsto c_t(\cdot) = c(\cdot, t) : \bar{\Omega} \to \bar{\Omega}$. This fact makes us formulate the following general +problem: +---PAGE_BREAK--- + +Consider a space $M$ of curves $t \mapsto \phi_t : \Omega \to \Omega$ and an action measure $\Lambda: M \to \mathrm{Meas}(\Omega)$, where $\mathrm{Meas}(\Omega)$ is a space of scalar measures over $\Omega$. Find and describe, under suitable conditions over $M$ and $\Lambda$, the minimal elements of the action measure $\Lambda$. + +### 3.
EVOLUTION DRIVEN BY DIFFEOMORPHISMS + +Diff$_0(\Omega)$ denotes the space of $C^\infty$ diffeomorphisms of $\Omega$ with compact support, that is the set of all $C^\infty$ functions $\phi: R^n \to R^n$ such that $\phi^{-1} \in C^\infty$ and $\mathrm{supp}(\phi - id) \subset \subset \Omega$. It is well known that any vector-field $\eta \in C_0^\infty(\Omega, R^n)$ (i.e. with compact support in $\Omega$) generates a one-parameter flow $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, solution of the problem: $\dot{\phi}_t = \eta \cdot \phi_t$, $\phi_0 = id$, where the dot "·" denotes function composition. + +Consider a sufficiently regular set $B \subset \Omega$. Let $\xi_B$ be the characteristic function of $B$. For any $\phi \in \mathrm{Diff}(\Omega)$ we have the equality: $\xi_{\phi(B)} = \xi_B \cdot \phi^{-1}$. + +A geometric evolution of the set $B$ is any curve $t \mapsto B(t)$, such that $B(0) = B$. A particular case of geometric evolution of $B$ is when $B(t)$ is isotopically equivalent to $B$. Such an evolution (which we call isotopic) can be obtained by considering a curve $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, $\phi_0 = id$. Any such curve induces a geometric evolution of $B$ by $B(t) = \phi_t(B)$. Therefore, this kind of geometric evolution of the set $B$ is equivalent to a curve in $\mathrm{Diff}_0(\Omega)$, with origin at $id$. + +We can make weaker assumptions upon the geometric evolution of $B$. In this paper we shall introduce the notion of geometric evolution driven by diffeomorphisms. The advantage of this notion is that potentially complex evolutions of $B$ are locally approximated by isotopic evolutions. We describe further what an evolution driven by diffeomorphisms is. + +The regularity assumptions upon the initial set $B$ are described first. $H^k$ denotes the $k$-dimensional Hausdorff measure. We shall suppose that $B$ has Hausdorff dimension $k$.
We suppose also that for any vector-field $\eta \in C_0^\infty(\Omega, R^n)$ there exists the derivative with respect to $t$ of the function $t \mapsto \xi_{\phi_t(B)}H^k$, where $\phi_t$ is the one-parameter flow generated by $\eta$. Moreover, this derivative is supposed to be absolutely continuous with respect to the measure $H^{k-1}$. + +An evolution of $B$ driven by diffeomorphisms is a curve $t \mapsto B(t)$, $B(0) = B$, such that: + +i) $d/dt \, \xi_{B(t)} H^k$ is absolutely continuous with respect to $H^{k-1}$. The support of this measure is denoted by $\partial^* B(t)$ and is called the border of $B(t)$. + +ii) there is a curve $t \mapsto \eta(t) \in C_0^\infty(\Omega, R^n)$ such that for almost any $t$ we have the inequality of measures: + +$$ \frac{d}{dt} \xi_{B(t)} H^k \leq \frac{d}{ds} \xi_{B(t)} \cdot \phi_{s,\eta(t)}^{-1} H^k $$ +---PAGE_BREAK--- + +where $s \mapsto \phi_{s,\eta(t)}$ is the one-parameter flow generated by $\eta(t)$ and the derivative with respect to $s$ is made for $s=0$. + +iii) the function $t \mapsto d/ds \, \xi_{B(t)} \cdot \phi_{s,\eta(t)}^{-1} \mathcal{H}^k(\Omega)$ is measurable. + +iv) for any $t < t'$ we have $B(t) \subset B(t')$. + +Let us denote by $Bar^+(t, Q)$ the set of all $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ with compact support in $Q \subset \Omega$ which satisfy: $d/dt \, \xi_{B(t)} \mathcal{H}^k \leq d/ds \, \xi_{B(t)} \cdot \phi_{s,\eta}^{-1} \mathcal{H}^k$. Obviously, the set $Bar^+(t, Q)$ depends on the evolution $t \mapsto B(t)$. + +We have the following result: for almost any $t$ there is a positive function $v(t)$, with support on $\partial^* B(t)$, called the normal velocity field, such that for any $\eta \in Bar^+(t, \Omega)$ we have + +$$ \frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq v(t) \mathcal{H}^{k-1} \leq \frac{d}{ds} \xi_{B(t)} \cdot \phi_{s,\eta}^{-1} \mathcal{H}^k . $$ + +### 4.
A GENERAL GEOMETRIC EVOLUTION PROBLEM + +Consider now a set $C \subset P(\Omega)$, which contains only regular closed sets $B \subset \Omega$ and let $M$ be a family of evolutions of an initial set $B_0 \in C$ driven by diffeomorphisms, such that for any $t$ and any curve $t \mapsto B(t) \in M$ we have $B(t) \in C$. Let us consider also a functional $E: C \to R$, such that $E(B) \geq E(B')$ if $B \subset B'$. $E$ is smooth in the following sense: for any $B \in C$ and any one-parameter flow $t \mapsto \phi_{t,\eta}$ the function $t \mapsto E(\phi_{t,\eta}(B))$ is differentiable at $t=0$. This derivative will be denoted by $dE(B, \eta)$. Given a geometric evolution $t \mapsto B(t) \in M$, for any Borel set $Q \in \mathcal{B}(\Omega)$, the variation of $E$ at $B(t) \in C$, inside $Q$, is defined by the formula: + +$$ dE(B(t))(Q) = \sup \left\{ dE(B(t), \eta) : \exists \lambda > 0, \lambda\eta \in Bar^+(t, Q), d(\partial^* B, \phi_{1,\eta}(\partial^* B)) \leq 1 \right\}. $$ + +Under suitable assumptions $-dE(B(t))$ is a positive measure. + +We introduce now the action-measure defined for any geometric evolution $t \mapsto B(t) \in M$ by the expression: + +$$ A(t \mapsto B(t))(Q) = \int_0^T \int_{\partial^* B(t) \cap Q} v(t) \, d\mathcal{H}^{k-1} \, dt + \int_0^T dE(B(t))(Q) \, dt . $$ + +Notice that the first term of $A$ can be written as the variation of $\mathcal{H}^k(B(t))$ from 0 to $T$. Remark also that we can consider functions $E = E(B, t)$, such that $E(B, t) \geq E(B', t)$ if $B \subset B'$. + +**Example 1.** Mean curvature flow. (see [1]) Let us take $k=n$ in the regularity assumptions, that is $B_0$ $n$-dimensional, and $E(B) = -\mathcal{H}^{n-1}(\partial^* B)$.
Then any minimal element of the action measure $A$ defined above is a super-solution of the mean curvature flow problem, that is, for almost any $t$ and +---PAGE_BREAK--- + +almost any $x \in \partial^{\ast}B(t)$ we have $v(t) \geq k(x,t)$, where $k(x,t)$ is the mean curvature of $\partial^{\ast}B(t)$ in $x$ (with the convention of positive curvature for spheres). + +**Example 2.** **Brittle crack propagation.** By a crack set in $\Omega$ we mean a closed, finite rectifiable set $B$. $\Omega$ represents the reference configuration and $\mathbf{u}: \bar{\Omega} \to \mathbb{R}^n$ is the deformation of a hyper-elastic body. The free energy density is $w(\nabla \mathbf{u})$; in the case of infinitesimal deformations $\mathbf{u}$ represents the displacement of the body and $w$ is a quadratic function of the symmetric gradient of $\mathbf{u}$. + +A path $t \mapsto \mathbf{v}(t)$ of deformations (or displacements) is given on $\partial\Omega$. The evolution of the body is supposed to be quasi-static. An initial crack set $B_0$ is present in the body. We are interested in the propagation of this crack under the path of imposed deformations. We introduce for this the following functional, defined for any crack set $B$ and any moment $t$: + +$$E(B, t) = \inf \left\{ \int_{\Omega} w(\nabla \mathbf{u}) \, dx : \mathbf{u} \in C^1(\bar{\Omega} \setminus B), \mathbf{u} = \mathbf{v}(t) \text{ on } \partial\Omega \setminus B \right\}.$$ + +Our principle of brittle crack propagation states that the evolution of the initial crack $B_0$ is a minimal element of the action-measure: + +$$\Lambda(t \mapsto B(t))(Q) = G H^{n-1}(B(T) \cap Q) + \int_{0}^{T} dE(B(t), t)(Q) \, dt ,$$ + +where $G$ is a material constant (the Griffith constant). The physical meaning of this principle is: choose the crack propagation $t \mapsto B(t)$ such that the energy consumed by the body in order to produce in $Q$ the crack growth $t \mapsto B(t) \cap Q$ is less than the energy released in $Q$ due only to crack propagation.
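To make the energy competition in this principle concrete, here is a toy one-dimensional caricature of our own (not from the paper): take the elastic energy of a specimen with crack length $\ell$ under load $v$ to be $E(\ell, v) = c\,v^2/\ell$, so the energy release rate is $c\,v^2/\ell^2$. Incremental minimization of elastic plus Griffith surface energy then reproduces the classical behaviour: no growth until the release rate reaches $G$, and growth with $\ell = v\sqrt{c/G}$ afterwards.

```python
import math

# Toy 1-D Griffith model (our own caricature): elastic energy E(l, v) = c*v**2/l,
# energy release rate -dE/dl = c*v**2/l**2, toughness G (the Griffith constant).
c, G = 1.0, 4.0

def step(l_prev, v):
    """One incremental minimization of l -> c*v**2/l + G*(l - l_prev) over l >= l_prev."""
    l_star = v * math.sqrt(c / G)  # stationary point of the unconstrained problem
    return max(l_prev, l_star)     # irreversibility: the crack never shrinks

l0 = 0.5
l = l0
history = []
for k in range(1, 101):
    v = 0.05 * k                   # quasi-static loading path v(t) = t
    l = step(l, v)
    history.append((v, l))

assert history[0][1] == l0                                  # no growth at small loads
assert abs(history[-1][1] - 5.0 * math.sqrt(c / G)) < 1e-9  # Griffith growth regime
```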
+ +In the particular case of infinitesimal deformations, if we take the constant curve $t \mapsto B_0(t) = B_0$ we see that $\Lambda(B_0(\cdot))(Q) = 0$ for any $Q$; therefore $\Lambda(B(\cdot))$ is a negative measure. Therefore, in this case, a generalization of the Griffith criterion holds. + +In [2], [3] we have proposed a minimizing movement model of brittle crack propagation in infinitesimal deformations ([3], definitions 4.1 and 5.1). The model is presented here in a condensed form. Let us consider the set $M$ of all $(\mathbf{u}, K)$ such that $K \subset \bar{\Omega}$ is a crack set, $\mathbf{u} \in C^1(\bar{\Omega} \setminus K, \mathbb{R}^n)$ and for $H^{n-1}$-almost any $x \in K$ the normal $\mathbf{n}(x)$ to $K$ at $x$ and the lateral limits $\mathbf{u}^+(x), \mathbf{u}^-(x)$ exist. + +We define the functions + +$$J: M \times M \to R,$$ + +$$J((\mathbf{u}, K), (\mathbf{v}, L)) = \int_{\Omega} w(\nabla \mathbf{v}) \, d\mathbf{x} + G H^{n-1}(L \setminus K),$$ + +$$\Psi: [0, \infty) \times M \to \{0, +\infty\},$$ +---PAGE_BREAK--- + +$$ \Psi(\lambda, (v, K)) = \begin{cases} 0 & \text{if } v = u_0(\lambda) \text{ on } \partial\Omega \setminus K \\ +\infty & \text{otherwise.} \end{cases} $$ + +We consider the initial data $(u_0, K) \in M$ such that $u_0 = u(u_0(0), K)$. For any $s \ge 1$ we define the sequences + +$$ k \in N \mapsto u^s(k), L^s(k), K^s(k), $$ + +$(u^s(k), L^s(k)) \in M$ and $(u^s(k), K^s(k)) \in M$, recursively: + +i) $(u^s, K^s)(0) = (u_0, K)$, $L^s(0) = K$, + +ii) for any $k \in N$, $(u^s, L^s)(k+1) \in M$ minimizes the functional + +$$ (v, L) \in M \mapsto J((u^s, K^s)(k), (v, L)) + \Psi((k+1)/s, (v, L)) $$ + +over $M$. $K^s(k+1)$ is defined by the formula: + +$$ K^s(k+1) = K^s(k) \cup L^s(k+1).
$$ + +$(u, L, K): [0, +\infty) \to M$ is an energy minimizing movement associated to $J$ with the constraint $\Psi$ and initial data $(u_0, K)$ if there is a diverging sequence $(s_i)$ such that for any $t > 0$ we have: $u^{s_i}([s_i t]) \to u(t)$ in $L^2(\Omega, R^n)$. $L(t)$ is called the active crack at the moment $t$ and + +$$ K(t) = \bigcup_{s \in [0, t]} L(s) $$ + +is the total damaged region at the same moment. + +We have the following result which connects the two models of brittle crack propagation presented here. + +**Theorem.** Let us consider an energy minimizing brittle crack propagation $t \mapsto (u, L(t), K(t))$. Suppose that $t \mapsto K(t)$ is driven by diffeomorphisms. Then the curve $t \mapsto K(t)$ is a minimal element of the action-measure $\Lambda$ defined above, in the case of infinitesimal deformations. + +REFERENCES + +[1] L. Ambrosio, Geometric evolution problems, distance function and viscosity solutions, *Università di Pisa Preprint* 2.245.986, 1996 + +[2] M. Buliga, Variational Formulations in Brittle Fracture Mechanics. PhD Thesis, Institute of Mathematics of the Romanian Academy, 1997 + +[3] M. Buliga, Energy minimizing brittle crack propagation, *Journal of Elasticity*, (to appear), 1998 + +[4] M.G. Crandall, P.L. Lions, Viscosity solutions of Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **277**, 1983, 1–43 + +[5] M.G. Crandall, L.C. Evans, P.L. Lions, Some properties of viscosity solutions to Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **282**, 1984, 487–502 + +[6] P.L.
Lions, Generalized solutions of Hamilton-Jacobi equations, *Research Notes in Math*, **69**, Pitman, 1982 \ No newline at end of file diff --git a/samples/texts_merged/6838080.md b/samples/texts_merged/6838080.md new file mode 100644 index 0000000000000000000000000000000000000000..8572f82cb7ce3f0a8db77d293942a28393b5f51a --- /dev/null +++ b/samples/texts_merged/6838080.md @@ -0,0 +1,1211 @@ + +---PAGE_BREAK--- + +# Unlinkable and Strongly Accountable Sanitizable Signatures from Verifiable Ring Signatures* + +Xavier Bultel¹,² and Pascal Lafourcade¹,² + +¹ CNRS, UMR 6158, LIMOS, F-63173 Aubière, France + +² Université Clermont Auvergne, BP 10448, 63000 Clermont-Ferrand, France + +**Abstract.** An *Unlinkable Sanitizable Signature* scheme (USS) allows a sanitizer to modify some parts of a signed message such that nobody can link the modified signature to the original one. A *Verifiable Ring Signature* scheme (VRS) allows users to sign messages anonymously within a group such that a user can prove *a posteriori* to a verifier that he is the signer of a given message. In this paper, we first revisit the notion of VRS: we improve the proof capabilities of the users, we give a complete security model for VRS and we give an efficient and secure scheme called EVeR. Our main contribution is GUSS, a generic USS based on a VRS scheme and an unforgeable signature scheme. We show that GUSS instantiated with EVeR and Schnorr's signature is twice as efficient as the best USS scheme of the literature. Moreover, we propose a stronger definition of accountability: a USS is accountable when the signer can prove whether a signature is sanitized. We formally define the notion of strong accountability, in which the sanitizer can also prove the origin of a signature. We show that the notion of strong accountability is important in practice.
Finally, we prove the security properties of GUSS (including the strong accountability) and EVeR under the Decisional Diffie-Hellman assumption in the random oracle model. + +## 1 Introduction + +Sanitizable Signatures (SS) were introduced by Ateniese et al. [1], but similar primitives were independently proposed in [23]. In this primitive, a signer allows a proxy (called the sanitizer) to modify some parts of a signed message. For example, a magistrate wishes to delegate to his secretary the power to summon someone to the court. He signs the message "Franz is summoned to court for an interrogation on Monday" and gives the signature to his secretary, where "Franz" and "Monday" are sanitizable and the other parts are fixed. Thus, in order to summon Joseph K. on Saturday in the name of the magistrate, the secretary can change the signed message into "Joseph K. is summoned to the court for an interrogation on Saturday". + +Ateniese et al. in [1] propose some applications of this primitive in privacy of health data, authenticated media streams and reliable routing information. They also introduced five security properties formalized by Brzuska et al. in [4]: + +**Unforgeability:** no unauthorised user can generate a valid signature. + +* This research was conducted with the support of the “Digital Trust” Chair from the University of Auvergne Foundation. +---PAGE_BREAK--- + +**Immutability:** the sanitizer cannot transform a signature into one on an unauthorised message. + +**Privacy:** no information about the original message is leaked by a sanitized signature. + +**Transparency:** nobody can say if a signature is sanitized or not. + +**Accountability:** the signer can prove that a signature is sanitized or is the original one. + +Finally, in [6] the authors point out a previously unstudied but relevant property called *unlinkability*: a scheme is unlinkable when it is not possible to link a sanitized signature to the original one.
The authors give a generic unlinkable scheme based on group signatures. In 2016, Fleischhacker et al. [16] give a more efficient construction based on signatures with re-randomizable keys. + +On the other hand, ring signatures are a well-studied cryptographic primitive, introduced by Rivest et al. in [22], where any user can sign anonymously within an ad-hoc group of users. Such a scheme is verifiable [21] when any user can prove a posteriori to a verifier that he is the signer of a given message. In this paper, we improve the proof properties of VRS, we give an efficient VRS scheme called EVeR and a generic unlinkable sanitizable signature scheme called GUSS that uses verifiable ring signatures. We also show that the definition of accountability is too weak for practical uses, and we propose a stronger definition. + +**Contributions:** Existing VRS schemes allow any user to prove that he is the signer of a given message. We extend the definition of VRS to allow a user to prove that he is not the signer of a given message. We give a formal security model for VRS that takes into account this property. We first extend the classical security properties of ring signatures to verifiable ring signatures, namely the *unforgeability* (no unauthorised user can forge a valid signature) and the *anonymity* (nobody can distinguish the signer in the group). In addition we define the *accountability* (a user cannot sign a message and prove that he is not the signer) and the *non-usurpability* (a user cannot prove that he is the signer of a message if it is not true, and a user cannot forge a message such that the other users cannot prove that they are not the signers). To the best of our knowledge, it is the first time that formal security models are proposed for VRS. We also design an efficient secure VRS scheme under the decisional Diffie-Hellman assumption in the random oracle model.
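To illustrate the mechanism that GUSS exploits (signing within an ad-hoc two-user ring so that either member could plausibly have produced the signature), here is a toy AOS-style ring signature over a tiny Schnorr group. This is our own illustrative sketch with deliberately insecure parameters; it is not the EVeR scheme or the authors' construction.

```python
import hashlib
import random

# Toy Schnorr group: p = 2*q + 1 with q prime; g = 4 generates the order-q subgroup.
# These parameters are far too small for real use; they only show the structure.
p, q, g = 10007, 5003, 4

def H(*args):
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (sk, pk)

def ring_sign(msg, ring, s, x):
    """AOS-style ring signature by member s (secret key x) over public keys `ring`."""
    n = len(ring)
    c = [0] * n
    z = [0] * n
    k = random.randrange(1, q)
    c[(s + 1) % n] = H(msg, pow(g, k, p))
    j = (s + 1) % n
    while j != s:                       # fill the other positions with random z_j
        z[j] = random.randrange(1, q)
        R = pow(g, z[j], p) * pow(ring[j], -c[j], p) % p
        c[(j + 1) % n] = H(msg, R)
        j = (j + 1) % n
    z[s] = (k + c[s] * x) % q           # only the real signer can close the ring
    return c[0], z

def ring_verify(msg, ring, sig):
    c0, z = sig
    c = c0
    for j in range(len(ring)):
        R = pow(g, z[j], p) * pow(ring[j], -c, p) % p
        c = H(msg, R)
    return c == c0

random.seed(7)
x0, y0 = keygen()
x1, y1 = keygen()
ring = [y0, y1]
sig = ring_sign("Joseph K. is summoned on Saturday", ring, 1, x1)
assert ring_verify("Joseph K. is summoned on Saturday", ring, sig)
assert not ring_verify("a different message", ring, sig)
```

A verifiable variant would additionally let a ring member use his secret key to prove, a posteriori, whether or not he produced a given signature, which is the property the VRS security model above formalizes.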
The usual definition of accountability for SS considers that the signer can prove the origin of a signature (signer or sanitizer) using a proof algorithm such that: + +1. The signer cannot forge a signature together with a proof that the signature comes from the sanitizer. + +2. The sanitizer cannot forge a signature such that the proof algorithm accuses the signer. + +The proof algorithm requires the secret key of the signer. To show that this definition is too weak, we consider a dishonest signer who refuses to prove the origin of a litigious signature. The dishonest signer claims that he lost his secret key because of problems with his hard drive. There is no way to verify whether the signer is lying. Unfortunately, without his secret key, the signer cannot generate the proof for the litigious signature. Then nobody can judge if the signature is sanitized or not, and there is a risk of wrongly accusing the honest sanitizer. To solve this problem, we add a second proof algorithm that allows the sanitizer to prove the origin of a signature. To achieve strong accountability, the following two additional properties are required: +---PAGE_BREAK--- + +1. The sanitizer cannot sanitize a signature $\sigma$ and prove that $\sigma$ is not sanitized. + +2. The signer cannot forge a signature such that the sanitizer proof algorithm accuses the sanitizer. + +The main contribution of this paper is to propose an efficient and generic unlinkable SS scheme called GUSS. This scheme is instantiated by a VRS and an unforgeable signature scheme. It is the first SS scheme that achieves strong accountability. We compare GUSS with the other schemes of the literature: + +**Brzuska et al. [6]** This scheme is based on group signatures. Our scheme is built on the same model, but it uses ring signatures instead of group signatures. The main advantage of group signatures is that the size of the signature is not proportional to the size of the group.
However, for small groups, ring signatures are much more efficient than group signatures. Since the scheme of Brzuska et al. and GUSS use group/ring signatures for groups of two users, GUSS is much more practical for an equivalent level of genericity.

**Fleischhacker et al. [16]** This scheme is based on signatures with re-randomizable keys. It is generic, but it uses different tools that must have special properties to be compatible with each other. To the best of our knowledge, it is the most efficient scheme in the literature. GUSS instantiated with EVeR and the Schnorr signature is twice as efficient as the best instantiation of this scheme. In Fig. 1, we compare the efficiency of each algorithm of our scheme and the scheme of Fleischhacker et al.

**Lai et al. [19]** Recently, Lai et al. proposed a USS that is secure in the standard model; however, it uses pairings and is much less efficient than the scheme of Fleischhacker et al., which is in the random oracle model, and thus it is much less efficient than our scheme. In their paper [19], Lai et al. give a comparison of the efficiency of the three schemes of the literature.
| Scheme | SiGen | SaGen | Sig | San | Ver | SiProof | SiJudge | Total | pk | spk | sk | ssk | σ | π |
|--------|-------|-------|-----|-----|-----|---------|---------|-------|----|-----|----|-----|---|---|
| [16]   | 7     | 1     | 15  | 14  | 17  | 2       | 3       | 67    | 3  | 7   | 1  | 1   | 41 | 144 |
| GUSS   | 2     | 1     | 8   | 7   | 10  | 3       | 2       | 36    | 2  | 1   | 2  | 1   | 12 | 4 |
**Fig. 1.** Comparison of GUSS and the scheme of Fleischhacker et al.: the first eight columns give the number of exponentiations of each algorithm of both schemes, namely the key generation algorithms of the signer (SiGen) and the sanitizer (SaGen), the signature algorithm (Sig), the sanitize algorithm (San), the verification algorithm (Ver), the proof algorithm (SiProof) and the judge algorithm (SiJudge). The last six columns give respectively the size of the public key of the signer (pk) and the sanitizer (spk), the size of the secret key of the signer (sk) and the sanitizer (ssk), the size of a signature ($\sigma$) and the size of a proof ($\pi$) outputted by SiProof. Sizes are measured in elements of a group $G$ of prime order. As in [16], for the sake of clarity, we do not distinguish between elements of $G$ and elements of $\mathbb{Z}_p^*$. We consider the best instantiation of the scheme of Fleischhacker et al. given in [16].

**Related works:** Sanitizable signatures (SS) were first introduced by Ateniese et al. [1]. Later, Brzuska et al. give formal security definitions [5] for unforgeability, immutability, privacy, transparency and accountability. Unlinkability was introduced and formally defined by Brzuska et al. in [6]. In [7], Brzuska et al. introduce an alternative definition of accountability called *non-interactive public accountability*, where the capability to prove the origin of a signature is given to a third party. One year later, the same authors propose a stronger definition of unlinkability [8] and design a scheme that is both strongly unlinkable and non-interactively publicly accountable. However, non-interactive public accountability is not compatible with transparency. In this paper, we focus on schemes that are unlinkable, transparent and interactively accountable. To the best of our knowledge, there are only three schemes with these three properties, namely [6, 16, 19].
Some works focus on other properties of SS that we do not consider here, such as SS with multiple sanitizers [10], or SS where the power of the sanitizer is limited [9]. Finally, there exist other primitives that solve related but different problems, such as homomorphic signatures [18], redactable signatures [3] and proxy signatures [17]. Differences between these primitives and sanitizable signatures are detailed in [16].

On the other hand, *Ring Signatures (RS)* [22] were introduced by Rivest et al. in 2001, and *Verifiable Ring Signatures (VRS)* [21] were introduced in 2003 by Lv. RS allow users to sign anonymously within a group, and VRS allow a user to prove that he is the signer of a given message. To the best of our knowledge, even though several VRS have been proposed [12, 24], there is no security model for this primitive in the literature. Convertible ring signatures [20] are very close to verifiable ring signatures: they allow the signer of an anonymous (ring) signature to transform it into a standard signature (*i.e.* a de-anonymized signature). They can be used as verifiable ring signatures, because the de-anonymized signature can be viewed as a proof that the user is the signer of a given message. However, in this paper we propose a stronger definition of VRS where a user can also prove that he is *not* the signer of a message, and this property cannot be achieved using convertible ring signatures.

A *List Signature* scheme (LS) [11] is a kind of RS with the following property: if a user signs two messages for the same *event-id*, then these signatures can be linked and the user's identity is publicly revealed. It can be used to design a VRS in our model: to prove whether he is the signer of a given message, the user signs a second message using the same event-id. If the two signatures are linked, then the judge is convinced that the user is the signer; otherwise, he is convinced that the user is not the signer.
However, LS requires security properties that are too strong for VRS (linkability and traceability), and using it would result in less efficient schemes.

**Outline:** In Section 2, we present the formal definitions and security models of both verifiable ring signatures and unlinkable sanitizable signatures. In Section 3, we present our two schemes, EVeR and GUSS, before concluding in Section 4. Moreover, we recall in Appendix A the standard cryptographic definitions used in this paper, namely the DDH assumption, deterministic digital signatures (DS), the Schnorr signature and non-interactive zero-knowledge proofs (NIZKP).

# 2 Formal Definitions

## 2.1 Verifiable Ring Signatures

We give the formal definition and security model of Verifiable Ring Signatures (VRS). A VRS is a ring signature scheme where a user can prove to a judge whether or not he is the signer of a message. It is composed of 6 algorithms. $V.Init$, $V.Gen$, $V.Sig$ and $V.Ver$ are defined as in the usual ring signature definitions: $V.Gen$ generates public and private keys, $V.Sig$ anonymously signs a message according to a set of public keys, and $V.Ver$ verifies the validity of a signature. A VRS has two additional algorithms: $V.Proof$ allows a user to prove whether or not he is the signer of a message, and $V.Judge$ verifies the proofs outputted by $V.Proof$.

**Definition 1 (Verifiable Ring Signature (VRS)).** A Verifiable Ring Signature scheme is a tuple of 6 algorithms defined by:

* $V.Init(1^k)$: It returns a setup value *init*.
* $V.Gen(init)$: It returns a pair of public/private keys ($pk, sk$).
* $V.Sig(L, m, sk)$: This algorithm computes a signature $\sigma$ using the key $sk$ for the message $m$ according to the set of public keys $L$.
* $V.Ver(L, m, \sigma)$: It returns a bit $b$: if the signature $\sigma$ of $m$ is valid according to the set of public keys $L$ then $b = 1$, else $b = 0$.
* $V.Proof(L, m, \sigma, pk, sk)$: It returns a proof $\pi$ for the signature $\sigma$ of $m$ according to the set of public keys $L$.
* $V.Judge(L, m, \sigma, pk, \pi)$: It returns a bit $b$ or the bottom symbol $\perp$: if $b = 1$ (resp. $0$) then $\pi$ proves that $\sigma$ comes from (resp. does not come from) the signer corresponding to the public key $pk$. It outputs $\perp$ when the proof is not well formed.

*Unforgeability*: We first adapt the unforgeability property of ring signatures to VRS. Informally, a VRS is unforgeable when no adversary is able to forge a signature for a ring of public keys without any corresponding secret key. In this model, the adversary has access to a signature oracle $V.Sig(\cdot, \cdot, \cdot)$ (that outputs signatures of chosen messages for chosen users in the ring) and a proof oracle $V.Proof(\cdot, \cdot, \cdot, \cdot, \cdot)$ (that computes proofs as the algorithm $V.Proof$ for chosen signatures and chosen users). The adversary succeeds in the attack when he outputs a valid signature that was not already computed by the signature oracle.

**Definition 2 (Unforgeability).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer. We consider two oracles:

* $V.Sig(\cdot, \cdot, \cdot)$: On input $(L, l, m)$, if $1 \le l \le n$ then this oracle returns the signature $V.Sig(L, m, sk_l)$, else it returns $\perp$.
* $V.Proof(\cdot, \cdot, \cdot, \cdot, \cdot)$: On input $(L, m, \sigma, l)$, if $1 \le l \le n$ then this proof oracle returns $V.Proof(L, m, \sigma, pk_l, sk_l)$, else it returns $\perp$.
$P$ is *n*-unf secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_S$ is the number of calls to the oracle $V.Sig(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{n\text{-unf}}(k): \\
\quad \text{init} \leftarrow V.\text{Init}(1^k) \\
\quad \forall 1 \le i \le n, (pk_i, sk_i) \leftarrow V.\text{Gen}(\text{init}) \\
\quad (L_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{V.\text{Sig}(\cdot, \cdot, \cdot), V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(\{pk_i\}_{1 \le i \le n}) \\
\quad \text{if } (V.\text{Ver}(L_*, m_*, \sigma_*) = 1) \text{ and } (L_* \subseteq \{pk_i\}_{1 \le i \le n}) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

$P$ is unforgeable when it is *n*-unf secure for any polynomially bounded *n*.

*Anonymity*: We adapt the anonymity property of ring signatures to VRS. Informally, a VRS is anonymous when no adversary is able to link a signature to the corresponding user. The adversary has access to the signature oracle and the proof oracle. During a first phase, he chooses two honest users in the ring; in the second phase, he has access to a challenge oracle $\text{LRSO}_b(d_0, d_1, \cdot, \cdot)$ that outputs signatures of chosen messages using the secret key of one of the two chosen users. The adversary succeeds in the attack if he guesses which of the two users was chosen by the challenge oracle. Note that the adversary cannot use the proof oracle on the signatures outputted by the challenge oracle.

**Definition 3 (Anonymity).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer.
Let the following oracle be:

* $\text{LRSO}_b(d_0, d_1, \cdot, \cdot)$: On input $(m, L)$, if $\{pk_{d_0}, pk_{d_1}\} \subseteq L$ then this oracle returns $V.\text{Sig}(L, m, sk_{d_b})$, else it returns $\bot$.

$P$ is *n*-ano secure when for any polynomial time adversary $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$, the difference between $1/2$ and the probability that $\mathcal{A}$ wins the following experiment is negligible, where $V.\text{Sig}(\cdot, \cdot, \cdot)$ and $V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ are defined as in Def. 2, $q_P$ is the number of calls to the oracle $V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$, $(L_i, m_i, \sigma_i, l_i)$ is the $i^{th}$ query sent to the oracle $V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ and $\sigma'_j$ is the $j^{th}$ signature outputted by the oracle $\text{LRSO}_b(d_0, d_1, \cdot, \cdot)$:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{n\text{-ano}}(k): \\
\quad \text{init} \leftarrow V.\text{Init}(1^k) \\
\quad \forall 1 \le i \le n, (pk_i, sk_i) \leftarrow V.\text{Gen}(\text{init}) \\
\quad (d_0, d_1) \leftarrow \mathcal{A}_1^{V.\text{Sig}(\cdot, \cdot, \cdot), V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(\{pk_i\}_{1 \le i \le n}) \\
\quad b \stackrel{\$}{\leftarrow} \{0, 1\} \\
\quad b_* \leftarrow \mathcal{A}_2^{V.\text{Sig}(\cdot, \cdot, \cdot), V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot), \text{LRSO}_b(d_0, d_1, \cdot, \cdot)}(\{pk_i\}_{1 \le i \le n}) \\
\quad \text{if } (b = b_*) \text{ and } (\forall i \in \{1, \dots, q_P\}, \forall j, (\sigma_i \neq \sigma'_j) \text{ or } (l_i \neq d_0 \text{ and } l_i \neq d_1)) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

$P$ is anonymous when it is *n*-ano secure for any polynomially bounded *n*.
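The freshness condition of the *n*-ano experiment (no proof query may concern a challenge signature for one of the two challenge users) is the part that is easiest to get wrong when implementing the game. The following sketch is our own illustration, with signatures modelled as opaque values; it encodes exactly the final check of the experiment.

```python
def n_ano_wins(b, b_star, proof_queries, challenge_sigs, d0, d1):
    """Win condition of Exp^{n-ano}: the guess b* must be correct, AND every
    proof query (L_i, m_i, sigma_i, l_i) must either concern a signature that
    was never output by LRSO_b, or concern a user other than d0 and d1."""
    fresh = all(
        sigma_i != sigma_j or (l_i != d0 and l_i != d1)
        for (_L_i, _m_i, sigma_i, l_i) in proof_queries
        for sigma_j in challenge_sigs
    )
    return (b == b_star) and fresh
```

For instance, a single proof query on a challenge signature for user `d0` makes the experiment return 0 even when the guess is correct.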
*Accountability*: We consider an adversary that has access to a proof oracle and a signature oracle. A VRS is accountable when no adversary is able to forge a signature $\sigma$ (that was not outputted by the signature oracle) together with a proof that he is not the signer of $\sigma$. Note that the ring of $\sigma$ must contain at most one public key that does not come from an honest user; thus the adversary knows at most one secret key corresponding to a public key in the ring.

**Definition 4 (Accountability).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer. $P$ is $n$-acc secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $V.\text{Sig}(\cdot, \cdot, \cdot)$ and $V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ are defined as in Def. 2, $q_S$ is the number of calls to the oracle $V.\text{Sig}(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{n\text{-acc}}(k): \\
\quad \text{init} \leftarrow V.\text{Init}(1^k) \\
\quad \forall 1 \le i \le n, (pk_i, sk_i) \leftarrow V.\text{Gen}(\text{init}) \\
\quad (L_*, m_*, \sigma_*, pk_*, \pi_*) \leftarrow \mathcal{A}^{V.\text{Sig}(\cdot, \cdot, \cdot), V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(\{pk_i\}_{1 \le i \le n}) \\
\quad \text{if } (L_* \subseteq \{pk_i\}_{1 \le i \le n} \cup \{pk_*\}) \text{ and } (V.\text{Ver}(L_*, m_*, \sigma_*) = 1) \text{ and} \\
\quad \quad (V.\text{Judge}(L_*, m_*, \sigma_*, pk_*, \pi_*) = 0) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

$P$ is accountable when it is *n*-acc secure for any polynomially bounded *n*.
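As a reading aid for Definitions 1–4, the six VRS algorithms can be collected into one abstract interface. The sketch below is our own illustration (the paper fixes no implementation language); `None` plays the role of the bottom symbol ⊥ in $V.Judge$.

```python
from abc import ABC, abstractmethod
from typing import Optional

class VRS(ABC):
    """Abstract interface mirroring the six algorithms of Definition 1."""

    @abstractmethod
    def v_init(self, k: int):
        """V.Init(1^k): return a setup value init."""

    @abstractmethod
    def v_gen(self, init):
        """V.Gen(init): return a key pair (pk, sk)."""

    @abstractmethod
    def v_sig(self, L, m, sk):
        """V.Sig(L, m, sk): sign m anonymously w.r.t. the ring L."""

    @abstractmethod
    def v_ver(self, L, m, sigma) -> bool:
        """V.Ver(L, m, sigma): accept (1) or reject (0) the ring signature."""

    @abstractmethod
    def v_proof(self, L, m, sigma, pk, sk):
        """V.Proof(L, m, sigma, pk, sk): prove whether pk's owner signed sigma."""

    @abstractmethod
    def v_judge(self, L, m, sigma, pk, pi) -> Optional[bool]:
        """V.Judge(...): True (signer), False (not the signer), None (⊥)."""
```

A concrete scheme such as EVeR would subclass this interface; the security experiments above then quantify over any implementation of it.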
*Non-usurpability*: We distinguish two experiments for this property:

– The first experiment, denoted non-usu-1, considers an adversary that has access to a proof oracle and a signature oracle. His goal is to forge a valid signature together with a proof that the signer is another user in the ring. Since this property is not required to build our generic USS, we give the formal definition of this security experiment in Appendix B.

– The second experiment, denoted non-usu-2, considers an adversary that has access to a proof oracle and a signature oracle and that receives the public key of an honest user as input. The goal of the adversary is to forge a signature $\sigma$ such that the proof run by the honest user convinces the judge that $\sigma$ was computed by the honest user (i.e. the judge algorithm returns 1) or is not valid (i.e. the judge algorithm returns $\perp$). Moreover, the signature $\sigma$ must not come from the signature oracle.

**Definition 5 (Non-usurpability).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer. $P$ is $n$-non-usu-2 secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $V.\text{Sig}(\cdot, \cdot, \cdot)$ and $V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ are defined as in Def.
2, $q_S$ is the number of calls to the oracle $V.\text{Sig}(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{n\text{-non-usu-2}}(k): \\
\quad \text{init} \leftarrow V.\text{Init}(1^k) \\
\quad (pk, sk) \leftarrow V.\text{Gen}(\text{init}) \\
\quad (L_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{V.\text{Sig}(\cdot, \cdot, \cdot), V.\text{Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(pk) \\
\quad \pi \leftarrow V.\text{Proof}(L_*, m_*, \sigma_*, pk, sk) \\
\quad \text{if } (V.\text{Ver}(L_*, m_*, \sigma_*) = 1) \text{ and} \\
\quad \quad (V.\text{Judge}(L_*, m_*, \sigma_*, pk, \pi) \neq 0) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

$P$ is non-usurpable when it is both *n*-non-usu-1 (see Appendix B) and *n*-non-usu-2 secure for any polynomially bounded *n*.

## 2.2 Sanitizable Signature

We give the formal definition and security properties of the sanitizable signature primitive. Compared to previous definitions, where only the signer can prove the origin of a signature, our definition introduces algorithms that allow the sanitizer to prove the origin of a signature. Moreover, in addition to the usual security models of [5], we present two new security experiments that strengthen the accountability definition.

A SS scheme contains 10 algorithms. Init outputs the setup values. SiGen and SaGen generate respectively the signer and the sanitizer public/private keys. As in classical signature schemes, the algorithms Sig and Ver allow the users to sign a message and to verify a signature. However, signatures are computed using a sanitizer public key and an admissible function ADM.
The algorithm San allows the sanitizer to transform a signature of a message $m$ according to a modification function MOD: if MOD is admissible according to the admissible function (i.e. $\text{ADM}(\text{MOD}) = 1$), this algorithm returns a signature of the message $\text{MOD}(m)$.

SiProof allows the signer to prove whether or not a signature is sanitized. Proofs outputted by this algorithm can be checked by anybody using the algorithm SiJudge. Finally, the algorithms SaProof and SaJudge have the same functionalities as SiProof and SiJudge, but the proofs are computed from the secret parameters of the sanitizer instead of the signer.

**Definition 6 (Sanitizable Signature (SS)).** A Sanitizable Signature scheme is a tuple of 10 algorithms defined as follows:

Init(1^k): It returns a setup value init.

SiGen(init): It returns a pair of signer public/private keys (pk, sk).

SaGen(init): It returns a pair of sanitizer public/private keys (spk, ssk).

Sig(m, sk, spk, ADM): This algorithm computes a signature σ using the key sk for the message m, the sanitizer key spk and the admissible function ADM. Note that we assume that ADM can be efficiently recovered from any signature.

San(m, MOD, σ, pk, ssk): Let ADM be the admissible function associated with the signature σ. If $\text{ADM}(\text{MOD}) = 1$ then this algorithm returns a signature σ' of the message $m' = \text{MOD}(m)$ computed from the signature σ, the signer public key pk and the sanitizer secret key ssk. Else it returns ⊥.

Ver(m, σ, pk, spk): It returns a bit b: if the signature σ of m is valid for the two public keys pk and spk then b = 1, else b = 0.

SiProof(sk, m, σ, spk): It returns a signer proof $\pi_{si}$ for the signature $\sigma$ of m using the signer secret key sk and the sanitizer public key spk.

SaProof(ssk, m, σ, pk): It returns a sanitizer proof $\pi_{sa}$ for the signature $\sigma$ of m using the sanitizer secret key ssk and the signer public key pk.
SiJudge(m, σ, pk, spk, π_si): It returns a bit d or the bottom symbol ⊥: if $\pi_{si}$ proves that $\sigma$ comes from the signer corresponding to the public key pk then $d = 1$; else if $\pi_{si}$ proves that $\sigma$ comes from the sanitizer corresponding to the public key spk then $d = 0$; else the algorithm outputs ⊥.

SaJudge(m, σ, pk, spk, π_sa): It returns a bit d or the bottom symbol ⊥: if $\pi_{sa}$ proves that $\sigma$ comes from the signer corresponding to the public key pk then $d = 1$; else if $\pi_{sa}$ proves that $\sigma$ comes from the sanitizer corresponding to the public key spk then $d = 0$; else the algorithm outputs ⊥.

As mentioned in the introduction, SS schemes have the following security properties: unforgeability, immutability, privacy, transparency and accountability. In [5], the authors show that if a scheme has the *immutability*, *transparency* and *accountability* properties, then it also has the *unforgeability* and *privacy* properties. Hence we do not need to prove these two properties, and we do not recall their formal definitions.

**Immutability:** A SS is immutable when no adversary is able to sanitize a signature without the corresponding sanitizer secret key, or to sanitize a signature using a modification function that is not admissible (i.e. $\text{ADM}(\text{MOD}) = 0$). To help him, the adversary has access to a signature oracle $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and a proof oracle $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$.

**Definition 7 (Immutability).** We consider the two following oracles:

$\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$: On input $(m, \text{ADM}, \text{spk})$, this oracle returns $\text{Sig}(m, \text{sk}, \text{spk}, \text{ADM})$.

$\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$: On input $(m, \sigma, \text{spk})$, this oracle returns $\text{SiProof}(\text{sk}, m, \sigma, \text{spk})$.

Let $P$ be a SS of security parameter $k$.
$P$ is *Immut* secure (or immutable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \text{spk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma_i$ is the corresponding response:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{\text{Immut}}(k): \\
\quad \text{init} \leftarrow \text{Init}(1^k) \\
\quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(\text{init}) \\
\quad (\text{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{Sig}(\cdot, \text{sk}, \cdot, \cdot), \text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)}(\text{pk}) \\
\quad \text{if } (\text{Ver}(m_*, \sigma_*, \text{pk}, \text{spk}_*) = 1) \text{ and } (\forall i \in \{1, \dots, q_{\text{Sig}}\}, (\text{spk}_* \neq \text{spk}_i) \text{ or} \\
\quad \quad (\forall \text{MOD such that } \text{ADM}_i(\text{MOD}) = 1, m_* \neq \text{MOD}(m_i))) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

**Transparency:** The transparency property guarantees that no adversary is able to distinguish whether or not a signature is sanitized. In addition to the signature oracle and the signer proof oracle, the adversary has access to a sanitize oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$ that sanitizes chosen signatures, and a sanitizer proof oracle $\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)$ that computes sanitizer proofs for given signatures. Moreover, the adversary has access to a challenge oracle $\text{Sa/Si}(b, \text{pk}, \text{spk}, \text{sk}, \text{ssk}, \cdot, \cdot, \cdot)$ that depends on a randomly chosen bit $b$: this oracle signs a given message and sanitizes it; if $b = 0$ then it outputs the original signature, else it outputs the sanitized signature. The adversary cannot use the proof oracles on the signatures outputted by the challenge oracle.
To succeed in the experiment, the adversary must guess $b$.

**Definition 8 (Transparency).** We consider the following oracles:

$\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$: On input $(m, \text{MOD}, \sigma, \text{pk})$, it returns $\text{San}(m, \text{MOD}, \sigma, \text{pk}, \text{ssk})$.

$\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)$: On input $(m, \sigma, \text{pk})$, this oracle returns $\text{SaProof}(\text{ssk}, m, \sigma, \text{pk})$.

$\text{Sa/Si}(b, \text{pk}, \text{spk}, \text{sk}, \text{ssk}, \cdot, \cdot, \cdot)$: On input $(m, \text{ADM}, \text{MOD})$, if $\text{ADM}(\text{MOD}) = 0$, this oracle returns $\perp$. Else if $b = 0$, this oracle returns $\text{Sig}(\text{MOD}(m), \text{sk}, \text{spk}, \text{ADM})$; else if $b = 1$, it returns $\text{San}(m, \text{MOD}, \text{Sig}(m, \text{sk}, \text{spk}, \text{ADM}), \text{pk}, \text{ssk})$.

Let $P$ be a SS of security parameter $k$. $P$ is *Trans* secure (or transparent) when for any polynomial time adversary $\mathcal{A}$, the difference between $1/2$ and the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7, and where $S_{\text{Sa/Si}}$ (resp. $S_{\text{SiProof}}$ and $S_{\text{SaProof}}$) is the set of all signatures outputted by the oracle $\text{Sa/Si}$ (resp. sent to the oracles $\text{SiProof}$ and $\text{SaProof}$):

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{\text{Trans}}(k): \\
\quad \text{init} \leftarrow \text{Init}(1^k) \\
\quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(\text{init}) \\
\quad (\text{spk}, \text{ssk}) \leftarrow \text{SaGen}(\text{init}) \\
\quad b \stackrel{\$}{\leftarrow} \{0, 1\} \\
\quad b' \leftarrow \mathcal{A}^{\text{Sig}(\cdot, \text{sk}, \cdot, \cdot), \text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk}), \text{SiProof}(\text{sk}, \cdot, \cdot, \cdot), \text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot), \text{Sa/Si}(b, \text{pk}, \text{spk}, \text{sk}, \text{ssk}, \cdot, \cdot, \cdot)}(\text{pk}, \text{spk}) \\
\quad \text{if } (b = b') \text{ and } (S_{\text{Sa/Si}} \cap (S_{\text{SiProof}} \cup S_{\text{SaProof}}) = \emptyset) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

**Unlinkability:** The unlinkability property ensures that a sanitized signature cannot be linked with the original one. We consider an adversary that has access to the signature oracle, the sanitize oracle, and both the signer and the sanitizer proof oracles. Moreover, the adversary has access to a challenge oracle $\text{LRSan}(b, \text{pk}, \text{ssk}, \cdot, \cdot)$ that depends on a bit $b$: this oracle takes as input two signatures $\sigma_0$ and $\sigma_1$, the two corresponding messages $m_0$ and $m_1$ and two modification functions $\text{MOD}_0$ and $\text{MOD}_1$ chosen by the adversary. If the two signatures have the same admissible function ADM, if $\text{MOD}_0$ and $\text{MOD}_1$ are admissible according to ADM, and if $\text{MOD}_0(m_0) = \text{MOD}_1(m_1)$, then the challenge oracle sanitizes $\sigma_b$ using $\text{MOD}_b$ and returns the result. The goal of the adversary is to guess the bit $b$.

**Definition 9 (Unlinkability).** Let the following oracle be:

$\text{LRSan}(b, \text{pk}, \text{ssk}, \cdot, \cdot)$: On input $((m_0, \text{MOD}_0, \sigma_0), (m_1, \text{MOD}_1, \sigma_1))$, if for $i \in \{0, 1\}$, $\text{Ver}(m_i, \sigma_i, \text{pk}, \text{spk}) = 1$ and $\text{ADM}_0 = \text{ADM}_1$ and $\text{ADM}_0(\text{MOD}_0) = 1$ and $\text{ADM}_1(\text{MOD}_1) = 1$ and $\text{MOD}_0(m_0) = \text{MOD}_1(m_1)$, then this oracle returns $\text{San}(m_b, \text{MOD}_b, \sigma_b, \text{pk}, \text{ssk})$, else it returns $\perp$.

Let $P$ be a SS of security parameter $k$. $P$ is *Unlink* secure (or unlinkable) when for any polynomial time adversary $\mathcal{A}$, the difference between $1/2$ and the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7 and $\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$ and $\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)$ are defined as in Def.
8:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{\text{Unlink}}(k): \\
\quad \text{init} \leftarrow \text{Init}(1^k) \\
\quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(\text{init}) \\
\quad (\text{spk}, \text{ssk}) \leftarrow \text{SaGen}(\text{init}) \\
\quad b \stackrel{\$}{\leftarrow} \{0, 1\} \\
\quad b' \leftarrow \mathcal{A}^{\text{Sig}(\cdot, \text{sk}, \cdot, \cdot), \text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk}), \text{SiProof}(\text{sk}, \cdot, \cdot, \cdot), \text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot), \text{LRSan}(b, \text{pk}, \text{ssk}, \cdot, \cdot)}(\text{pk}, \text{spk}) \\
\quad \text{if } (b = b') \text{ then return } 1, \text{ else return } 0
\end{array}
$$

**Accountability:** The standard definition of accountability is split into two security experiments: sanitizer accountability and signer accountability. In the sanitizer accountability experiment, the adversary has access to the signature oracle and the signer proof oracle. His goal is to forge a signature such that the signer proof algorithm returns a proof that this signature is not sanitized. To succeed in the experiment, this signature must not come from the signature oracle.
**Definition 10 (Sanitizer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is SaAcc-1 secure (or sanitizer accountable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7, $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \text{spk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma_i$ is the corresponding response:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{\text{SaAcc-1}}(k): \\
\quad \text{init} \leftarrow \text{Init}(1^k) \\
\quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(\text{init}) \\
\quad (\text{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{Sig}(\cdot, \text{sk}, \cdot, \cdot), \text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)}(\text{pk}) \\
\quad \pi_{si}^* \leftarrow \text{SiProof}(\text{sk}, m_*, \sigma_*, \text{spk}_*) \\
\quad \text{if } \forall i \in \{1, \dots, q_{\text{Sig}}\}, (\sigma_* \neq \sigma_i) \\
\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \text{pk}, \text{spk}_*) = 1) \\
\quad \quad \text{and } (\text{SiJudge}(m_*, \sigma_*, \text{pk}, \text{spk}_*, \pi_{si}^*) \neq 0) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

In the signer accountability experiment, the adversary knows the public key of the sanitizer and has access to the sanitize oracle and the sanitizer proof oracle. His goal is to forge a signature together with a proof that this signature is sanitized. To succeed in the experiment, this signature must not come from the sanitize oracle.

**Definition 11 (Signer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is SiAcc-1 secure (or signer accountable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$ and $\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)$ are defined as in Def.
8 and where $q_{\text{San}}$ is the number of calls to the oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$, $(m_i, \text{MOD}_i, \sigma_i, \text{pk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma'_i$ is the corresponding response:

$$
\begin{array}{l}
\textbf{Exp}_{P,\mathcal{A}}^{\text{SiAcc-1}}(k): \\
\quad \text{init} \leftarrow \text{Init}(1^k) \\
\quad (\text{spk}, \text{ssk}) \leftarrow \text{SaGen}(\text{init}) \\
\quad (\text{pk}_*, m_*, \sigma_*, \pi_{si}^*) \leftarrow \mathcal{A}^{\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk}), \text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)}(\text{spk}) \\
\quad \text{if } \forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \sigma'_i) \\
\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \text{pk}_*, \text{spk}) = 1) \\
\quad \quad \text{and } (\text{SiJudge}(m_*, \sigma_*, \text{pk}_*, \text{spk}, \pi_{si}^*) = 0) \\
\quad \text{then return } 1, \text{ else return } 0
\end{array}
$$

**Strong Accountability:** Since our definition of sanitizable signatures provides a second proof algorithm for the sanitizer, we define two additional security experiments (for signer and sanitizer accountability) to ensure the soundness of the proofs computed by this algorithm. We say that a scheme is strongly accountable when it is signer and sanitizer accountable for both the signer and the sanitizer proof algorithms.

Thus, in our second signer accountability experiment, we consider an adversary that has access to the sanitize oracle and the sanitizer proof oracle. His goal is to forge a signature such that the sanitizer proof algorithm returns a proof that this signature is sanitized. To win the experiment, this signature must not come from the sanitize oracle.

**Definition 12 (Strong Signer Accountability).** Let $P$ be a SS of security parameter $k$.
$P$ is SiAcc-2 secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_{\text{San}}$ is the number of calls to the oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, ssk)$, $(m_i, \text{MOD}_i, \sigma_i, pk_i)$ is the $i^{th}$ query asked to this oracle and $\sigma'_i$ is the corresponding response:

$$
\begin{align*}
\mathbf{Exp}_{P,\mathcal{A}}^{\text{SiAcc-2}}(k):
& \ init \leftarrow \text{Init}(1^k) \\
& \ (spk, ssk) \leftarrow \text{SaGen}(init) \\
& \ (pk_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{San}(\cdot, \cdot, \cdot, \cdot, ssk), \text{SaProof}(ssk, \cdot, \cdot, \cdot)}(spk) \\
& \ \pi_{sa} \leftarrow \text{SaProof}(ssk, m_*, \sigma_*, pk_*) \\
& \ \text{if } \forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \sigma'_i) \\
& \quad \text{and } (\text{Ver}(m_*, \sigma_*, pk_*, spk) = 1) \\
& \quad \text{and } (\text{SaJudge}(m_*, \sigma_*, pk_*, spk, \pi_{sa}) \neq 1) \\
& \ \text{then return } 1, \text{ else return } 0
\end{align*}
$$

$P$ is strongly signer accountable when it is both *SiAcc-1* and *SiAcc-2* secure.

Finally, in our second sanitizer accountability experiment, we consider an adversary that knows the public key of the signer and has access to the sign oracle and the signer proof oracle. Its goal is to sanitize a signature together with a proof that this signature is not sanitized. To win the experiment, this signature must not come from the sign oracle.

**Definition 13 (Strong Sanitizer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is *SaAcc-2* secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, sk, \cdot, \cdot)$ and $\text{SiProof}(sk, \cdot, \cdot, \cdot)$ are defined as in Def.
7, $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, sk, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \mathit{spk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma_i$ is the corresponding response:

$$
\begin{align*}
\mathbf{Exp}_{P,\mathcal{A}}^{\text{SaAcc-2}}(k):
& \ init \leftarrow \text{Init}(1^k) \\
& \ (pk, sk) \leftarrow \text{SiGen}(init) \\
& \ (\mathit{spk}_*, m_*, \sigma_*, \pi_{sa}) \leftarrow \mathcal{A}^{\text{Sig}(\cdot, sk, \cdot, \cdot), \text{SiProof}(sk, \cdot, \cdot, \cdot)}(pk) \\
& \ \text{if } \forall i \in \{1, \dots, q_{\text{Sig}}\}, (\sigma_* \neq \sigma_i) \\
& \quad \text{and } (\text{Ver}(m_*, \sigma_*, pk, \mathit{spk}_*) = 1) \\
& \quad \text{and } (\text{SaJudge}(m_*, \sigma_*, pk, \mathit{spk}_*, \pi_{sa}) = 1) \\
& \ \text{then return } 1, \text{ else return } 0
\end{align*}
$$

$P$ is strongly sanitizer accountable when it is both SaAcc-1 and SaAcc-2 secure.

# 3 Schemes

## 3.1 An Efficient Verifiable Ring Signature: EVeR

We present our VRS scheme called EVeR (for *Efficient Verifiable Ring signature*). It is based on the DDH assumption and uses a NIZKP showing, for one tuple out of $n$, the equality of two discrete logarithms. We show how to build this NIZKP. Let $G$ be a group of prime order $p$, let $n$ be an integer, and consider the following language:

$$
\mathcal{L}_n = \left\{ \{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n} \in (G^4)^n : \exists j \in \{1, \dots, n\}, \log_{g_j}(y_j) = \log_{h_j}(z_j) \right\}
$$

Consider the case $n = 1$. In [13], the authors present an interactive zero-knowledge proof of knowledge system for the language $\mathcal{L}_1$: it proves the equality of two discrete logarithms. For example, using $(h, z, g, y) \in \mathcal{L}_1$, a prover convinces a verifier that $\log_g(y) = \log_h(z)$. The witness used by the prover is $x = \log_g(y)$. This proof system is a *sigma protocol* in the sense that there are only three interactions: the prover sends a commitment, the verifier sends a challenge, and the prover returns a response.
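The three moves just described can be sketched in a few lines of Python. Everything here is an illustrative assumption, not part of the paper: the tiny group parameters ($q = 2039$, $p = 1019$, generator $g = 4$ of the subgroup of squares) and the function names.

```python
import secrets

# Toy Schnorr group: q = 2p + 1 with p, q prime; the squares mod q form a
# subgroup of prime order p. Parameters are illustrative only.
Q, P, G = 2039, 1019, 4

# Interactive Chaum-Pedersen sigma protocol for L_1: the prover knows
# x such that y = g^x and z = h^x.

def commit(h):
    """Prover, move 1: commit to a fresh nonce r."""
    r = secrets.randbelow(P)
    return r, (pow(G, r, Q), pow(h, r, Q))   # keep r secret, send (R, S)

def challenge():
    """Verifier, move 2: send a random challenge."""
    return secrets.randbelow(P)

def respond(r, c, x):
    """Prover, move 3: answer gamma = r + c * x (mod p)."""
    return (r + c * x) % P

def check(h, y, z, RS, c, gamma):
    """Verifier: accept iff g^gamma = R * y^c and h^gamma = S * z^c."""
    R, S = RS
    return (pow(G, gamma, Q) == R * pow(y, c, Q) % Q and
            pow(h, gamma, Q) == S * pow(z, c, Q) % Q)
```

A run `r, RS = commit(h)`, `c = challenge()`, `gamma = respond(r, c, x)` makes `check(h, y, z, RS, c, gamma)` accept exactly when $y = g^x$ and $z = h^x$ for the same $x$.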
To transform the proof system for $\mathcal{L}_1$ into a generic proof system for any $\mathcal{L}_n$, we use the generic transformation given in [14]. For any language $\mathcal{L}$ and any integer $n$, the authors show how to transform a proof that an element is in $\mathcal{L}$ into a proof that one out of $n$ elements is in $\mathcal{L}$, under the condition that the proof is a sigma protocol. Note that the resulting proof system is also a sigma protocol.

The final step is to make it non-interactive. We use the well-known Fiat-Shamir transformation [15], which turns any interactive proof system that is a sigma protocol into a non-interactive one. The resulting proof system is complete, sound and zero-knowledge in the random oracle model. Finally, we obtain the following scheme.

**Scheme 1 (LogEq$_n$)** Let $G$ be a group of prime order $p$, $H : \{0,1\}^* \to \mathbb{Z}_p^*$ be a hash function and $n$ be an integer. We define the NIZKP system LogEq$_n$ = (LEprove$_n$, LEverif$_n$) for $\mathcal{L}_n$ by:

LEprove$_n(\{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n}, x)$: We denote by $j$ the integer such that $x = \log_{g_j}(y_j) = \log_{h_j}(z_j)$. This algorithm picks $r_j \leftarrow \mathbb{Z}_p^*$ and computes $R_j = g_j^{r_j}$ and $S_j = h_j^{r_j}$. For all $i \in \{1, \dots, n\}$ with $i \neq j$, it picks $c_i \leftarrow \mathbb{Z}_p^*$ and $\gamma_i \leftarrow \mathbb{Z}_p^*$, and computes $R_i = g_i^{\gamma_i}/y_i^{c_i}$ and $S_i = h_i^{\gamma_i}/z_i^{c_i}$. It computes $c = H(R_1||S_1||\dots||R_n||S_n)$. It then computes $c_j = c/(\prod_{i=1; i \neq j}^{n} c_i)$ and $\gamma_j = r_j + c_j \cdot x$. It outputs $\pi = (\{R_i, S_i, c_i, \gamma_i\}_{1 \le i \le n})$.

LEverif$_n(\{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n}, \pi)$: It parses $\pi = (\{R_i, S_i, c_i, \gamma_i\}_{1 \le i \le n})$. If $\prod_{i=1}^{n} c_i \neq H(R_1||S_1||\dots||R_n||S_n)$ then it returns 0. Else, if there exists $i \in \{1, \dots, n\}$ such that $g_i^{\gamma_i} \neq R_i \cdot y_i^{c_i}$ or $h_i^{\gamma_i} \neq S_i \cdot z_i^{c_i}$, then it returns 0. Else it returns 1.
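The whole LogEq$_n$ construction fits in a short script. This is a sketch over a toy group ($q = 2039$, $p = 1019$); the parameters and helper names are illustrative assumptions, and for simplicity the prover is given the witness index $j$ explicitly instead of recomputing it from $x$.

```python
import hashlib
import secrets

# Toy group: q = 2p + 1; the squares mod q form a subgroup of prime order p.
Q, P, G0 = 2039, 1019, 4

def H(*parts):
    """Hash into Z_p^* (a result of 0 is mapped to 1)."""
    h = hashlib.sha256("||".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % P or 1

def le_prove(tuples, x, j):
    """LEprove_n: tuples = [(h_i, z_i, g_i, y_i)], witness x for index j."""
    n = len(tuples)
    R, S, c, gam = [0] * n, [0] * n, [0] * n, [0] * n
    r = secrets.randbelow(P - 1) + 1
    for i, (h, z, g, y) in enumerate(tuples):
        if i == j:                       # honest branch: real commitment
            R[i], S[i] = pow(g, r, Q), pow(h, r, Q)
        else:                            # simulated branches
            c[i] = secrets.randbelow(P - 1) + 1
            gam[i] = secrets.randbelow(P - 1) + 1
            R[i] = pow(g, gam[i], Q) * pow(pow(y, c[i], Q), -1, Q) % Q
            S[i] = pow(h, gam[i], Q) * pow(pow(z, c[i], Q), -1, Q) % Q
    ch = H(*[v for RS in zip(R, S) for v in RS])   # c = H(R_1||S_1||...)
    rest = 1
    for i in range(n):
        if i != j:
            rest = rest * c[i] % P
    c[j] = ch * pow(rest, -1, P) % P     # c_j = c / prod_{i != j} c_i
    gam[j] = (r + c[j] * x) % P          # gamma_j = r_j + c_j * x
    return list(zip(R, S, c, gam))

def le_verify(tuples, proof):
    """LEverif_n: the challenge product must hash-check, then 2n equations."""
    R, S, c, gam = zip(*proof)
    prod = 1
    for ci in c:
        prod = prod * ci % P
    if prod != H(*[v for RS in zip(R, S) for v in RS]):
        return 0
    for (h, z, g, y), Ri, Si, ci, gi in zip(tuples, R, S, c, gam):
        if pow(g, gi, Q) != Ri * pow(y, ci, Q) % Q:
            return 0
        if pow(h, gi, Q) != Si * pow(z, ci, Q) % Q:
            return 0
    return 1
```

Note how the simulated branches pick $c_i, \gamma_i$ first and solve for $R_i, S_i$, while the honest branch's challenge $c_j$ is forced by the hash, which is exactly what makes the OR-composition of [14] work.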
**Theorem 1.** The NIZKP LogEq$_n$ is a proof of knowledge; moreover, it is complete, sound, and zero-knowledge in the random oracle model.

The proof of this theorem follows directly from [13], [14] and [15]. Using this proof system, we build our VRS scheme called EVeR:

**Scheme 2 (Efficient Verifiable Ring Signature (EVeR))** EVeR is a VRS defined by:

V.Init($1^k$): It generates a prime order group setup $(G, p, g)$ and a hash function $H : \{0, 1\}^* \to G$. It returns a setup value $init = (G, p, g, H)$.

V.Gen($init$): It picks $sk \leftarrow \mathbb{Z}_p^*$, computes $pk = g^{sk}$ and returns a pair of signer public/private keys $(pk, sk)$.

V.Sig($L, m, sk$): It picks $r \leftarrow \mathbb{Z}_p^*$, computes $h = H(m||r)$ and $z = h^{sk}$, runs $P \leftarrow \text{LEprove}_{|L|}(\{(h, z, g, pk_l)\}_{pk_l \in L}, sk)$ and returns $\sigma = (r, z, P)$.

V.Ver($L, m, \sigma$): It parses $\sigma = (r, z, P)$, computes $h = H(m||r)$ and returns $b \leftarrow \text{LEverif}_{|L|}(\{(h, z, g, pk_l)\}_{pk_l \in L}, P)$.

V.Proof($L, m, \sigma, pk, sk$): It parses $\sigma = (r, z, P)$, computes $h = H(m||r)$ and $\bar{z} = h^{sk}$, runs $\bar{P} \leftarrow \text{LEprove}_1(\{(h, \bar{z}, g, pk)\}, sk)$ and returns $\pi = (\bar{z}, \bar{P})$.

V.Judge($L, m, \sigma, pk, \pi$): It parses $\sigma = (r, z, P)$ and $\pi = (\bar{z}, \bar{P})$, computes $h = H(m||r)$ and runs $b \leftarrow \text{LEverif}_1(\{(h, \bar{z}, g, pk)\}, \bar{P})$. If $b \neq 1$ then it returns $\perp$. Else, if $z = \bar{z}$ then it returns 1, else it returns 0.

All users have an ElGamal key pair $(pk, sk)$ such that $pk = g^{sk}$, where $g$ is a generator of a prime order group. To sign a message $m$ according to a set of public keys $L$ using her key pair $(pk, sk)$, Alice chooses a random $r$ and computes $h = H(m||r)$ and $z = h^{sk}$, where $H$ is a hash function. Alice produces a proof $\pi$ that there exists $pk_l \in L$ such that $\log_g(pk_l) = \log_h(z)$ using the NIZKP LogEq$_{|L|}$, where $|L|$ denotes the cardinality of $L$.
The signature is the triplet $(r, z, \pi)$. To verify a signature, it suffices to verify the proof $\pi$ according to $L$, $m$ and the other parts of the signature. To prove that she is the signer of the message $m$, Alice generates a proof that $\log_g(pk) = \log_h(z)$ using the NIZKP LogEq$_1$. Verifying this proof, a judge is convinced that $z = h^{sk}$. We then consider a second signature $(r', z', \pi')$ of a message $m'$ produced from another key pair $(pk', sk')$. We set $h' = H(m'||r')$, and we recall that $z' = (h')^{sk'}$. To prove that she is not the signer of $m'$, Alice computes $\bar{z}' = (h')^{sk}$ and generates a proof that $\log_g(pk) = \log_{h'}(\bar{z}')$. Since $\bar{z}' \neq z'$, Alice proves that $\log_g(pk) \neq \log_{h'}(z')$, hence that she is not the signer of $(r', z', \pi')$.

**Theorem 2.** *EVeR is unforgeable, anonymous, accountable and non-usurpable under the DDH assumption in the random oracle model.*

We give the intuition of the security properties; the proof of the theorem is given in Appendix C:

**Unforgeability:** The scheme is unforgeable since nobody can prove that $\log_g(pk_l) = \log_h(z)$ without the knowledge of $sk = \log_h(z)$.

**Anonymity:** Breaking the anonymity of such a signature is equivalent to breaking the DDH assumption. Indeed, to link a signature $z = h^{sk}$ with the corresponding public key of Alice $pk = g^{sk}$, an attacker must solve the DDH problem on the instance $(pk, h, z)$. Moreover, note that since the value $r$ randomizes the signature, it is not possible to link two signatures of the same message produced by Alice.

**Accountability:** To break the accountability, an adversary must forge a valid signature (i.e.
to prove that there exists $pk_l$ in the group such that $\log_g(pk_l) = \log_h(z)$) and prove that he is not the signer (i.e. $\log_g(pk) \neq \log_h(z)$, where $pk$ is the public key chosen by the adversary). However, since the adversary does not know the secret keys of the other members of the group, it would have to break the soundness of LogEq to win the experiment, which is not possible.

**Non-usurpability:** ($n$-non-usu-1) No adversary is able to forge a proof that he is the signer of a signature produced by another user, since this is equivalent to proving a false statement using a sound NIZKP. ($n$-non-usu-2) The proof algorithm run by an honest user with the public key $pk$ returns a proof that this user is the signer of a given signature only if $\log_g(pk) = \log_h(z)$. Since no adversary is able to compute $z$ such that $\log_g(pk) = \log_h(z)$ without the corresponding secret key, no adversary is able to break the non-usurpability of EVeR.

## 3.2 Our Unlinkable Sanitizable Signature Scheme: GUSS

We present our USS, instantiated by a digital signature (DS) scheme and a VRS.

**Scheme 3 (Generic Unlinkable Sanitizable Signature (GUSS))** Let $D$ be a deterministic digital signature scheme and $V$ be a verifiable ring signature scheme such that:
$$D = (\text{D.Init}, \text{D.Gen}, \text{D.Sig}, \text{D.Ver}) \quad V = (\text{V.Init}, \text{V.Gen}, \text{V.Sig}, \text{V.Ver}, \text{V.Proof}, \text{V.Judge})$$
GUSS instantiated with $(D, V)$ is a sanitizable signature scheme defined by:

**Init($1^k$):** It runs $init_d \leftarrow \text{D.Init}(1^k)$ and $init_v \leftarrow \text{V.Init}(1^k)$, and returns $init = (init_d, init_v)$.
**SiGen(init):** It parses $init = (init_d, init_v)$, runs $(pk_d, sk_d) \leftarrow \text{D.Gen}(init_d)$ and $(pk_v, sk_v) \leftarrow \text{V.Gen}(init_v)$, and returns $(pk, sk)$ where $pk = (pk_d, pk_v)$ and $sk = (sk_d, sk_v)$.

**SaGen(init):** It parses $init = (init_d, init_v)$ and runs $(spk, ssk) \leftarrow \text{V.Gen}(init_v)$. It returns $(spk, ssk)$.

**Sig(m, sk, spk, ADM):** It parses $sk = (sk_d, sk_v)$. It first computes the fixed message part $M \leftarrow \text{FIX}_{\text{ADM}}(m)$ and runs $\sigma_1 \leftarrow \text{D.Sig}(sk_d, (M||\text{ADM}||pk||spk))$ and $\sigma_2 \leftarrow \text{V.Sig}(\{pk_v, spk\}, sk_v, (\sigma_1||m))$. It returns $\sigma = (\sigma_1, \sigma_2, \text{ADM})$.

**San(m, MOD, σ, pk, ssk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. This algorithm first computes the modified message $m' \leftarrow \text{MOD}(m)$ and runs $\sigma'_2 \leftarrow \text{V.Sig}(\{pk_v, spk\}, ssk, (\sigma_1||m'))$. It returns $\sigma' = (\sigma_1, \sigma'_2, \text{ADM})$.

**Ver(m, σ, pk, spk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and computes the fixed message part $M \leftarrow \text{FIX}_{\text{ADM}}(m)$. It then runs $b_1 \leftarrow \text{D.Ver}(pk_d, (M||\text{ADM}||pk||spk), \sigma_1)$ and $b_2 \leftarrow \text{V.Ver}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2)$. It returns $b = (b_1 \land b_2)$.

**SiProof(sk, m, σ, spk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and the key $sk = (sk_d, sk_v)$. It runs $\pi_{si} \leftarrow \text{V.Proof}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, pk_v, sk_v)$ and returns it.

**SaProof(ssk, m, σ, pk):** It parses the signature $\sigma = (\sigma_1, \sigma_2, \text{ADM})$. It runs $\pi_{sa} \leftarrow \text{V.Proof}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, spk, ssk)$ and returns it.
**SiJudge(m, σ, pk, spk, π_si):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. It runs $b \leftarrow \text{V.Judge}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, pk_v, \pi_{si})$ and returns it.

**SaJudge(m, σ, pk, spk, π_sa):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. It runs $b \leftarrow \text{V.Judge}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, spk, \pi_{sa})$ and returns $(1-b)$.

The signer secret key $sk = (sk_d, sk_v)$ contains a secret key $sk_d$ compatible with the DS scheme and a secret key $sk_v$ compatible with the VRS scheme. The signer public key $pk = (pk_d, pk_v)$ contains the two corresponding public keys. The sanitizer public/secret key pair $(spk, ssk)$ is generated as in the VRS scheme.

Let $m$ be a message and $M$ be the fixed part chosen by the signer according to the admissible function ADM. To sign $m$, the signer first signs $M$ together with the public key of the sanitizer $spk$ and the admissible function ADM using the DS scheme. We denote this signature by $\sigma_1$. The signer then signs in $\sigma_2$ the full message $m$ together with $\sigma_1$ using the VRS scheme for the set of public keys $L = \{pk_v, spk\}$. In other words, he anonymously signs $(\sigma_1||m)$ within a group of two users: the signer and the sanitizer. The final sanitizable signature is $\sigma = (\sigma_1, \sigma_2)$. The verification algorithm works in two steps: it verifies the signature $\sigma_1$ and it verifies the anonymous signature $\sigma_2$.

To sanitize this signature $\sigma = (\sigma_1, \sigma_2)$, the sanitizer chooses an admissible message $m'$ according to ADM (i.e. $m$ and $m'$ have the same fixed part). He then anonymously signs $m'$ together with $\sigma_1$ using the VRS for the group $L = \{pk_v, spk\}$ and the secret key $ssk$. We denote by $\sigma'_2$ this signature. The final sanitized signature is $\sigma' = (\sigma_1, \sigma'_2)$.
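The Sig/San/Ver flow above can be sketched as pure data-flow plumbing. The `D_sig`/`V_sig` "schemes" below are deliberately toy stand-ins (hash tags with no unforgeability or anonymity), and every name is hypothetical; the point is only to show what GUSS signs, which ring it uses, and that $\sigma_1$ is reused unchanged by San.

```python
import hashlib
from typing import NamedTuple

def H(*parts):
    return hashlib.sha256(repr(parts).encode()).hexdigest()

# Toy stand-ins, NOT cryptographic schemes: D_sig mimics a deterministic DS,
# V_sig mimics a VRS tag over a ring of public keys (it ignores the secret
# key so the toy verifier can recompute it).
def D_sig(sk_d, msg):
    return H("DS", sk_d, msg)

def V_sig(ring, sk_or_ssk, msg):
    return H("VRS", tuple(sorted(ring)), msg)

class Sigma(NamedTuple):
    s1: str
    s2: str
    adm: tuple   # indices of the fixed blocks (FIX_ADM)

def fix_part(m, adm):
    return tuple(m[i] for i in adm)

def Sig(m, sk, pk, spk, adm):
    sk_d, sk_v = sk
    M = fix_part(m, adm)
    s1 = D_sig(sk_d, (M, adm, pk, spk))        # binds fixed part, ADM, keys
    s2 = V_sig([pk[1], spk], sk_v, (s1, m))    # ring {pk_v, spk} on (s1||m)
    return Sigma(s1, s2, adm)

def San(m, mod, sig, pk, spk, ssk):
    m2 = mod(m)                                # admissible modification
    s2 = V_sig([pk[1], spk], ssk, (sig.s1, m2))
    return m2, Sigma(sig.s1, s2, sig.adm)      # sigma_1 is reused unchanged

def Ver(m, sig, pk, spk, sk_d):
    # Toy verification recomputes both tags; a real D.Ver/V.Ver would use
    # only the public keys.
    M = fix_part(m, sig.adm)
    return (sig.s1 == D_sig(sk_d, (M, sig.adm, pk, spk)) and
            sig.s2 == V_sig([pk[1], spk], None, (sig.s1, m)))
```

Signing `("Alice", "owes", "Bob", "10")` with fixed blocks `(0, 1, 2)` and then sanitizing block 3 keeps `s1` byte-identical, which is the fact the unlinkability argument relies on.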
**Theorem 3.** For any deterministic and unforgeable DS scheme $D$ and any unforgeable, anonymous, accountable and non-usurpable VRS scheme $V$, GUSS instantiated with $(D, V)$ is immutable, transparent, strongly accountable and unlinkable.

We give the intuition of the security properties; the proof of the theorem is given in Appendix D:

**Transparency:** By the anonymity of $\sigma_2$ and $\sigma'_2$, nobody can guess whether a signature comes from the signer or the sanitizer, and since both signatures have the same structure, nobody can guess whether a signature is sanitized or not.

**Immutability:** Since it is produced by an unforgeable DS scheme, nobody can forge the signature $\sigma_1$ of the fixed part $M$ without the signer secret key. Thus the sanitizer cannot change the fixed part of the signatures. Moreover, since $\sigma_1$ signs the public key of the sanitizer in addition to $M$, the other users cannot forge a signature of an admissible message using $\sigma_1$.

**Unlinkability:** An adversary knows (i) two signatures $\sigma^0$ and $\sigma^1$ that have the same fixed part $M$ according to the same function ADM for the same sanitizer, and (ii) the sanitized signature $\sigma' = (\sigma'_1, \sigma'_2)$ computed from $\sigma^b$ for a given admissible message $m'$ and an unknown bit $b$. To achieve unlinkability, it must be hard to guess $b$. Since the DS scheme is deterministic, the two signatures $\sigma^0 = (\sigma^0_1, \sigma^0_2)$ and $\sigma^1 = (\sigma^1_1, \sigma^1_2)$ have the same first part (i.e. $\sigma^0_1 = \sigma^1_1$). As shown before, $\sigma'$ has the same first part $\sigma'_1$ as the original signature, thus $\sigma'_1 = \sigma^0_1 = \sigma^1_1$ and $\sigma'_1$ leaks no information about $b$. On the other hand, the second part of the sanitized signature $\sigma'_2$ is computed from the modified message $m'$ and the first part of the original signature.
Since $\sigma^0_1 = \sigma^1_1$, we deduce that $\sigma'_2$ leaks no information about $b$. Finally, the best strategy of the adversary is to guess $b$ at random.

**(Strong) Accountability:** The signer must be able to prove the provenance of a signature. This amounts to breaking the anonymity of the second part $\sigma_2$ of this signature: if it was created by the signer then it is the original signature, else it was created by the sanitizer and it is a sanitized signature. By definition, the VRS scheme used to generate $\sigma_2$ provides a way to prove whether a user is the author of a signature or not. GUSS uses it in its proof algorithms to achieve accountability. Note that since the sanitizer uses the same VRS scheme to sanitize a signature, it can also prove the origin of a given signature, which yields strong accountability.

# 4 Conclusion

In this paper, we revisit the notion of verifiable ring signatures. We improve its verifiability properties, give a security model for this primitive and design a simple, efficient and secure scheme named EVeR. We extend the security model of sanitizable signatures in order to allow the sanitizer to prove the origin of a signature. Finally, we design a generic unlinkable sanitizable signature scheme named GUSS based on verifiable ring signatures. This scheme is twice as efficient as the best scheme in the literature. In future work, we aim at finding other applications of verifiable ring signatures that are secure in our model.

# References

1. Giuseppe Ateniese, Daniel H. Chou, Breno de Medeiros, and Gene Tsudik. *Sanitizable Signatures*, pages 159–177. Springer Berlin Heidelberg, 2005.

2. Dan Boneh. The decision Diffie-Hellman problem. In *Third Algorithmic Number Theory Symposium (ANTS)*, volume 1423 of LNCS. Springer, 1998. Invited paper.

3.
Christina Brzuska, Heike Busch, Oezguer Dagdelen, Marc Fischlin, Martin Franz, Stefan Katzenbeisser, Mark Manulis, Cristina Onete, Andreas Peter, Bertram Poettering, and Dominique Schröder. *Redactable Signatures for Tree-Structured Data: Definitions and Constructions*. Springer Berlin Heidelberg, 2010.

4. Christina Brzuska, Marc Fischlin, Tobias Freudenreich, Anja Lehmann, Marcus Page, Jakob Schelbert, Dominique Schröder, and Florian Volk. Security of sanitizable signatures revisited. In Stanislaw Jarecki and Gene Tsudik, editors, *PKC 2009*, volume 5443 of LNCS, pages 317–336. Springer, March 2009.

5. Christina Brzuska, Marc Fischlin, Tobias Freudenreich, Anja Lehmann, Marcus Page, Jakob Schelbert, Dominique Schröder, and Florian Volk. *Security of Sanitizable Signatures Revisited*, pages 317–336. Springer Berlin Heidelberg, 2009.

6. Christina Brzuska, Marc Fischlin, Anja Lehmann, and Dominique Schröder. Unlinkability of sanitizable signatures. In Phong Q. Nguyen and David Pointcheval, editors, *PKC 2010*, volume 6056 of LNCS, pages 444–461. Springer, May 2010.

7. Christina Brzuska, Henrich C. Pöhls, and Kai Samelin. *Non-interactive Public Accountability for Sanitizable Signatures*, pages 178–193. Springer Berlin Heidelberg, 2013.

8. Christina Brzuska, Henrich C. Pöhls, and Kai Samelin. *Efficient and Perfectly Unlinkable Sanitizable Signatures without Group Signatures*, pages 12–30. Springer Berlin Heidelberg, 2014.

9. Sébastien Canard and Amandine Jambert. *On Extended Sanitizable Signature Schemes*, pages 179–194. Springer Berlin Heidelberg, 2010.

10. Sébastien Canard, Amandine Jambert, and Roch Lescuyer. *Sanitizable Signatures with Several Signers and Sanitizers*, pages 35–52. Springer Berlin Heidelberg, 2012.

11. Sébastien Canard, Berry Schoenmakers, Martijn Stam, and Jacques Traoré. List signature schemes. *Discrete Applied Mathematics*, 154(2):189–201, 2006.

12. Z. Changlun, L. Yun, and H.
Dequan. A new verifiable ring signature scheme based on Nyberg-Rueppel scheme. In *2006 8th International Conference on Signal Processing*, volume 4, 2006.

13. David Chaum and Torben P. Pedersen. Wallet databases with observers. In Ernest F. Brickell, editor, *CRYPTO'92*, volume 740 of LNCS, pages 89–105. Springer, August 1993.

14. R. Cramer, I. Damgård, and B. Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In *CRYPTO'94*, volume 839 of LNCS. Springer, 1994.

15. Amos Fiat and Adi Shamir. How to prove yourself: Practical solutions to identification and signature problems. In *Advances in Cryptology — CRYPTO'86*, pages 186–194. Springer Berlin Heidelberg, 1987.

16. N. Fleischhacker, J. Krupp, G. Malavolta, J. Schneider, D. Schröder, and M. Simkin. Efficient unlinkable sanitizable signatures from signatures with re-randomizable keys. In *Public-Key Cryptography — PKC 2016*, LNCS. Springer, 2016.

17. Georg Fuchsbauer and David Pointcheval. *Anonymous Proxy Signatures*. Springer Berlin Heidelberg, 2008.

18. Robert Johnson, David Molnar, Dawn Song, and David Wagner. *Homomorphic Signature Schemes*, pages 244–262. Springer Berlin Heidelberg, 2002.

19. Russell W. F. Lai, Tao Zhang, Sherman S. M. Chow, and Dominique Schröder. *Efficient Sanitizable Signatures Without Random Oracles*, pages 363–380. Springer International Publishing, 2016.

20. K. C. Lee, H. A. Wen, and T. Hwang. Convertible ring signature. *IEE Proceedings - Communications*, 152(4):411–414, 2005.

21. Jiqiang Lv and Xinmei Wang. *Verifiable Ring Signature*, pages 663–665. DMS Proceedings, 2003.

22. Ronald L. Rivest, Adi Shamir, and Yael Tauman. How to leak a secret. In Colin Boyd, editor, *ASIACRYPT 2001*, volume 2248 of LNCS, pages 552–565. Springer, December 2001.

23. Ron Steinfeld, Laurence Bull, and Yuliang Zheng.
*Content Extraction Signatures*, pages 285–304. Springer Berlin Heidelberg, 2002.

24. Shangping Wang, Rui Ma, Yaling Zhang, and Xiaofeng Wang. Ring signature scheme based on multivariate public key cryptosystems. *Computers and Mathematics with Applications*, 62(10):3973–3979, 2011.

# A Cryptographic Background

**Definition 14 (DDH [2]).** Let $\mathbb{G}$ be a multiplicative group of prime order $p$ and $g \in \mathbb{G}$ be a generator. Given an instance $(g^a, g^b, g^z)$ for unknown $a, b, z \leftarrow \mathbb{Z}_p^*$, the Decisional Diffie-Hellman (DDH) problem is to decide whether $z = a \cdot b$ or not. The DDH assumption states that there exists no PPT algorithm that solves the DDH problem with non-negligible advantage.

We recall the notion of a deterministic digital signature, and we recall the deterministic version of Schnorr's signature.

**Definition 15 ((Deterministic) Digital Signature (DS)).** A digital signature scheme $D$ is a tuple of 4 algorithms defined as follows:

D.Init($1^k$): It returns a setup value $init$.

D.Gen($init$): It returns a pair of signer public/private keys $(pk, sk)$.

D.Sig($m, sk$): This algorithm computes a signature $\sigma$ of $m$ using the key $sk$.

D.Ver($pk, m, \sigma$): It returns a bit $b$: if the signature $\sigma$ of $m$ is valid according to $pk$ then $b = 1$, else $b = 0$.
Such a scheme is unforgeable when no polynomial adversary wins the following experiment with non-negligible probability, where D.Sig($\cdot, sk$) is a signature oracle, $q_S$ is the number of queries to this oracle and $\sigma_i$ is the $i^{th}$ signature computed by this oracle:

$$
\begin{align*}
\mathbf{Exp}_{D,\mathcal{A}}^{\text{unf}}(k):
& \ init \leftarrow \text{D.Init}(1^k) \\
& \ (pk, sk) \leftarrow \text{D.Gen}(init) \\
& \ (m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{D.Sig}(\cdot, sk)}(pk) \\
& \ \text{if } (\text{D.Ver}(pk, m_*, \sigma_*) = 1) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
& \quad \text{then return } 1, \text{ else return } 0
\end{align*}
$$

Moreover, such a scheme is deterministic when the algorithm D.Sig($m, sk$) is deterministic. As mentioned in [16], any DS scheme can be transformed into a deterministic DS scheme without loss of efficiency or security using a pseudorandom function, which can be simulated by a hash function in the random oracle model.

**Definition 16 (Deterministic Schnorr's Signature).** The (deterministic) Schnorr's signature is defined by the following algorithms:

D.Init($1^k$): It returns a setup value $init = (G, p, g, H)$ where $G$ is a group of prime order $p$, $g \in G$ and $H: \{0, 1\}^* \to \mathbb{Z}_p^*$ is a hash function.

D.Gen($init$): It picks $sk \leftarrow \mathbb{Z}_p^*$, computes $pk = g^{sk}$ and returns $(pk, sk)$.

D.Sig($m, sk$): It computes $r = H(m||sk)$, $R = g^r$ and $z = r + sk \cdot H(R||m)$, and returns $\sigma = (R, z)$.

D.Ver($pk, m, \sigma$): It parses $\sigma = (R, z)$; if $g^z = R \cdot pk^{H(R||m)}$ then it returns 1, else 0.

This DS scheme is deterministic and unforgeable under the DL assumption in the random oracle model.
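Definition 16 translates almost line for line into Python. The toy group parameters below are illustrative assumptions, not part of the definition.

```python
import hashlib

# Toy group: q = 2p + 1; the squares mod q have prime order p (illustrative).
Q, P, G = 2039, 1019, 4

def H(*parts):
    """Hash into Z_p (stands in for H : {0,1}* -> Z_p^*)."""
    return int.from_bytes(hashlib.sha256(repr(parts).encode()).digest(), "big") % P

def d_gen(sk):
    return pow(G, sk, Q)                 # pk = g^sk

def d_sig(m, sk):
    r = H(m, sk)                         # r = H(m||sk): derandomized nonce
    R = pow(G, r, Q)
    z = (r + sk * H(R, m)) % P           # z = r + sk * H(R||m)
    return R, z

def d_ver(pk, m, sig):
    R, z = sig
    return pow(G, z, Q) == R * pow(pk, H(R, m), Q) % Q
```

Because the nonce is $H(m||sk)$ rather than fresh randomness, signing the same message twice yields byte-identical signatures, which is the property GUSS's unlinkability argument needs from $D$.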
A zero-knowledge proof (ZKP) allows a prover knowing a witness to convince a verifier that a statement $s$ is in a given language without leaking any information other than the validity of $s$. Such a proof is a proof of knowledge (PoK) when the verifier is also convinced that the prover knows the witness $w$. We recall the definition of a non-interactive zero-knowledge proof of knowledge.

**Definition 17 (NIZKP).** A non-interactive ZKP (NIZKP) for a language $\mathcal{L}$ is a couple of algorithms (Prove, Verify) such that:

Prove($s, w$): This algorithm outputs a proof $\pi$ that $s \in \mathcal{L}$ using the witness $w$.

Verify($s, \pi$): This algorithm checks whether $\pi$ is a valid proof that $s \in \mathcal{L}$ and outputs a bit.

A NIZKP verifies the following properties:

**Completeness:** For any statement $s \in \mathcal{L}$ and the corresponding witness $w$, we have that Verify($s$, Prove($s, w$)) = 1.

**Soundness:** There is no polynomial time adversary $\mathcal{A}$ such that $\mathcal{A}(\mathcal{L})$ outputs $(s, \pi)$ such that Verify($s, \pi$) = 1 and $s \notin \mathcal{L}$ with non-negligible probability.

**Zero-knowledge:** A proof $\pi$ leaks no information, i.e. there exists a PPT algorithm Sim (called the simulator) such that the outputs of Prove($s, w$) and the outputs of Sim($s$) follow the same probability distribution.

Moreover, such a proof is a proof of knowledge when, for any $s \in \mathcal{L}$ with corresponding witness $w$, any bit-string input $\mathit{in} \in \{0, 1\}^*$ and any algorithm $\mathcal{A}(s, \mathit{in})$, there exists a knowledge extractor $\mathcal{E}$ such that the probability that $\mathcal{E}^{\mathcal{A}(s, \mathit{in})}(s)$ outputs the witness $w$ given access to the oracle $\mathcal{A}(s, \mathit{in})$ is as high as the probability that $\mathcal{A}(s, \mathit{in})$ outputs a proof $\pi$ such that Verify($s, \pi$) = 1.
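As a concrete miniature of Definition 17, here is a Fiat-Shamir Schnorr proof of knowledge of a discrete logarithm (statement $y = g^x$, witness $x$); the group parameters are illustrative assumptions.

```python
import hashlib
import secrets

# Toy group of prime order p inside Z_q^* (q = 2p + 1); illustrative only.
Q, P, G = 2039, 1019, 4

def H(*parts):
    return int.from_bytes(hashlib.sha256(repr(parts).encode()).digest(), "big") % P

def Prove(y, x):
    """Prove knowledge of x with y = g^x; the challenge comes from a hash,
    which plays the random oracle in the Fiat-Shamir transformation."""
    r = secrets.randbelow(P)
    R = pow(G, r, Q)
    c = H(G, y, R)
    return R, (r + c * x) % P

def Verify(y, proof):
    R, s = proof
    return pow(G, s, Q) == R * pow(y, H(G, y, R), Q) % Q
```

The simulator required by the zero-knowledge property would pick $s$ and $c$ first and set $R = g^s y^{-c}$, programming the oracle so that $H(G, y, R) = c$.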
# B First experiment for non-usurpability

**Definition 18 (n-non-usu-1 experiment).** Let $P$ be a VRS of security parameter $k$. $P$ is $n$-non-usu-1 secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{V.Sig}(\cdot, \cdot, \cdot)$ and $\text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ are the sign and proof oracles, $q_S$ is the number of calls to the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ and $(L_i, l_i, m_i)$ (resp. $\sigma_i$) is the $i^{th}$ query to this oracle (resp. signature outputted by this oracle):

$$
\begin{align*}
\mathbf{Exp}_{P,\mathcal{A}}^{n\text{-non-usu-}1}(k):
& \ init \leftarrow \text{V.Init}(1^k) \\
& \ \forall 1 \le i \le n, (pk_i, sk_i) \leftarrow \text{V.Gen}(init) \\
& \ (L_*, m_*, \sigma_*, l_*, \pi_*) \leftarrow \mathcal{A}^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(\{pk_i\}_{1 \le i \le n}) \\
& \ \text{if } (\text{V.Ver}(L_*, m_*, \sigma_*) = 1) \text{ and} \\
& \quad (\text{V.Judge}(L_*, m_*, \sigma_*, pk_{l_*}, \pi_*) = 1) \text{ and} \\
& \quad (\forall i \in \{1, \dots, q_S\}, (L_i, l_i, m_i, \sigma_i) \neq (L_*, l_*, m_*, \sigma_*)) \\
& \ \text{then return } 1, \text{ else return } 0
\end{align*}
$$

# C Security proofs of EVeR

**Lemma 1.** EVeR is $n$-unf secure for any polynomially bounded $n$ under the DL assumption in the random oracle model.

*Proof.* We recall that since LogEq$_n$ is a proof of knowledge, for any $s \in \mathcal{L}$ with corresponding witness $w$, any bit-string input $\mathit{in} \in \{0, 1\}^*$ and any algorithm $\mathcal{A}(\mathit{in})$, there exists a knowledge extractor $\mathcal{E}$ such that the probability that $\mathcal{E}^{\mathcal{A}(\mathit{in})}(k)$ outputs the witness $w$ given access to the oracle $\mathcal{A}(\mathit{in})$ is as high as the probability that $\mathcal{A}(\mathit{in})$ outputs a proof $\pi$ such that Verify($s, \pi$) = 1. Moreover, since LogEq$_n$ is zero-knowledge, there exists a PPT algorithm Sim (called the simulator) such that the outputs of Prove($s, w$) and the outputs of Sim($s$) follow the same probability distribution.
Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\mathbf{Exp}_{\text{EVeR}, \mathcal{A}}^{n\text{-unf}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.

**$\mathcal{B}$ construction:** $\mathcal{B}$ receives the input $(G, p, g, y)$ where $g$ is a generator of the group $G$ of prime order $p$ and $y$ is an element of $G$. For all $i \in \{1, \dots, n\}$, it picks $x_i \leftarrow \mathbb{Z}_p^*$ and sets $pk_i = y^{x_i}$. $\mathcal{B}$ initializes an empty list $H_{\text{list}}$. $\mathcal{B}$ runs $x' \leftarrow \mathcal{E}^{\mathcal{A}'(\{pk_i\}_{1 \le i \le n})}(k)$ where $\mathcal{A}'$ is the following algorithm:

**Algorithm $\mathcal{A}'(\{pk_i\}_{1 \le i \le n})$:** It runs $(L_*, \sigma_*, m_*) \leftarrow \mathcal{A}(\{pk_i\}_{1 \le i \le n})$, simulating the oracles for $\mathcal{A}$ as follows:

**Random oracle $H(\cdot)$:** On the $i^{th}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$, else it picks $u_i \leftarrow \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.

**Oracle V.Sig($\cdot, \cdot, \cdot$):** On the $i^{th}$ input $(L_i, l_i, m_i)$, it picks $r_i \leftarrow \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m_i||r_i = M_j$. It computes $z_i = pk_{l_i}^{u_j}$ and runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, pk_l)\}_{pk_l \in L_i})$. It returns $(r_i, z_i, P_i)$ to $\mathcal{A}$.

**Oracle V.Proof($\cdot, \cdot, \cdot, \cdot$):** On the $i^{th}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m'_i||r'_i = M_j$.
It computes $\bar{z}_i = pk_{l'_i}^{u_j}$ and runs $\bar{P}_i \leftarrow \text{Sim}(\{(h'_i, \bar{z}_i, g, pk_{l'_i})\})$. It returns $(\bar{z}_i, \bar{P}_i)$ to $\mathcal{A}$.

Finally, $\mathcal{A}'$ parses $\sigma_* = (r_*, z_*, P_*)$ and returns $P_*$.

**Analysis:** First note that the experiment $n$-unf is perfectly simulated for $\mathcal{A}$, so it returns $(L_*, \sigma_*, m_*)$ such that for $\sigma_* = (r_*, z_*, P_*)$ and $h_* = H(m_*||r_*)$, we have $\Pr[\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, pk_l)\}_{pk_l \in L_*}, P_*) = 1] \geq \lambda(k)$ and $L_* \subseteq \{pk_i\}_{1 \leq i \leq n}$. We deduce that $\mathcal{A}'$ returns a proof $P_*$ that verifies with probability at least $\lambda(k)$, so $\mathcal{E}^{\mathcal{A}'(\{pk_i\}_{1 \leq i \leq n})}(k)$ returns the discrete logarithm $x'$ of one of the public keys in $\{pk_i\}_{1 \leq i \leq n}$ with probability at least $\lambda(k)$. Suppose that $\mathcal{A}'$ returns a valid proof. Since for all $i$, $pk_i = y^{x_i}$, and since there exists $j$ such that $pk_j = g^{x'}$, the discrete logarithm of $y$ is $x'/x_j$. We deduce that $\mathcal{B}$ returns the discrete logarithm of $y$ with probability at least $\lambda(k)$. $\square$

**Lemma 2.** *EVeR is $n$-ano secure for any polynomially bounded $n$ under the DDH assumption in the random oracle model.*

*Proof.* Let $n$-ano$_{\psi}$ be the same experiment as $n$-ano except that the oracle LRSO$_b$ can be called at most $\psi$ times. We prove the two following claims:

**Claim 1:** If $\exists \mathcal{A} \in \text{POLY}(k)$ such that $\lambda_1(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{n\text{-ano}_1}(k)$ is non-negligible, then $\exists \mathcal{B} \in \text{POLY}(k)$ that breaks the DDH assumption with non-negligible probability.
**Claim 2** Let $\psi \ge 1$ and suppose that $\epsilon(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi}}(k)$ is negligible. Then, if $\exists \mathcal{A} \in \text{POLY}(k)$ such that $\lambda_{\psi+1}(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi+1}}(k)$ is non-negligible, then $\exists \mathcal{B} \in \text{POLY}(k)$ that breaks the DDH assumption with non-negligible probability.

These two claims imply that $\text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi}}(k)$ is negligible for any $n$ and any $\psi$ that are polynomially bounded.

**Proof of Claim 1:** We show how to build the algorithm $\mathcal{B}$. It receives a DDH instance $((G, p, g), X, Y, Z)$ as input. It picks $d \stackrel{s}{\leftarrow} \{1, \dots, n\}$. For all $i \in \{1, \dots, n\}$:

- if $i=d$ then it sets $\mathbf{pk}_i = X$,

- else, it runs $(\mathbf{pk}_i, \mathbf{sk}_i) \leftarrow \text{V.Gen}(init)$ where $init = (G, p, g, H)$.

$\mathcal{B}$ runs $(d_0, d_1) \leftarrow \mathcal{A}_1(\{\mathbf{pk}_i\}_{1 \le i \le n})$. During the experiment, $\mathcal{B}$ simulates the oracles for $\mathcal{A}$ as follows:

**Random oracle** $H(\cdot)$: On the $i^{\text{th}}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$. Else it picks $u_i \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.

**Oracle V.Sig**$(\cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L_i, l_i, m_i)$, it picks $r_i \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m_i||r_i = M_j$.

- If $l_i = d$ then it computes $z_i = X^{u_j}$ and it runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_i})$. It returns $\sigma_i = (r_i, z_i, P_i)$ to $\mathcal{A}$.
- Else it runs and returns $\sigma_i \leftarrow \text{V.Sig}(L_i, \mathbf{sk}_{l_i}, m_i)$.

**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m'_i||r'_i = M_j$. It computes $\bar{z}_i = \mathbf{pk}_{l'_i}^{u_j}$ and it runs $P'_i \leftarrow \text{Sim}((h'_i, \bar{z}_i, g, \mathbf{pk}_{l'_i}))$. It returns $(\bar{z}_i, P'_i)$ to $\mathcal{A}$.

$\mathcal{B}$ runs $b_* \leftarrow \mathcal{A}_2(\{\mathbf{pk}_i\}_{1 \le i \le n})$. During this phase, $\mathcal{B}$ simulates the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ as in the first phase. It simulates the three other oracles as follows:

**Oracle LRSO$_b(d_0, d_1, \cdot, \cdot)$:** On input $(m'', L'')$, it picks $r'' \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$. If $\exists i$ such that $r_i = r''$ then $\mathcal{B}$ aborts the experiment and returns $b'_* \stackrel{s}{\leftarrow} \{0, 1\}$, else it runs $P'' \leftarrow \text{Sim}(\{(Y, Z, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L''})$ and returns $\sigma'' = (r'', Z, P'')$ to $\mathcal{A}$.

**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, if LRSO$_b$ has already been called and $\sigma'_i = \sigma''$ and ($l'_i = d_0$ or $l'_i = d_1$) then it returns $\bot$ to $\mathcal{A}$. Else, it proceeds as in the first phase.

**Random oracle** $H(\cdot)$: On the $i^{\text{th}}$ input $M_i$, if LRSO$_b$ has already been called and $M_i = (m''||r'')$ then it returns $Y$ to $\mathcal{A}$. Else, it proceeds as in the first phase.

Let $b'$ be the bit that verifies $d_{b'} = d$. If $b' = b_*$ then $\mathcal{B}$ returns $b'_* = 1$, else $b'_* = 0$.

*Analysis:* Let $q$ be the number of queries asked to $\text{V.Sig}(\cdot, \cdot, \cdot)$ and let $E$ be the event "$\mathcal{B}$ does not abort the experiment of $\mathcal{A}$".
We have:

$$
\begin{align*}
\Pr[\neg E] &= \Pr[(\exists i, r_i = r'') \lor (d_0 \neq d \land d_1 \neq d)] \\
&\leq \Pr[\exists i, r_i = r''] + \Pr[d_0 \neq d \land d_1 \neq d] \\
&\leq \sum_{i=1}^{q} \Pr[r_i = r''] + \Pr[d_0 \neq d \land d_1 \neq d] \\
&\leq \frac{q}{|G|} + \frac{n-1}{n}
\end{align*}
$$

We deduce that:

$$ \Pr[E] \geq 1 - \left( \frac{q}{|G|} + \frac{n-1}{n} \right) = \frac{1}{n} - \frac{q}{|G|} $$

Let $\alpha, \beta$ be the two elements of $\mathbb{Z}_p$ such that $X = g^\alpha$ and $Y = g^\beta$. Let $b$ be the solution to the DDH instance, i.e. $b = 1$ iff $Z = g^{\alpha \cdot \beta}$. We compute the probability that $\mathcal{B}$ wins its DDH experiment, using that $\mathcal{B}$ outputs a uniformly random bit when it aborts:

$$
\begin{align*}
\Pr[b'_* = b] &= \Pr[E] \cdot \Pr[b'_* = b|E] + (1 - \Pr[E]) \cdot \Pr[b'_* = b|\neg E] \\
&= \Pr[E] \cdot (\Pr[b'_* = b|E] - \Pr[b'_* = b|\neg E]) + \Pr[b'_* = b|\neg E] \\
&= \Pr[E] \cdot \left(\Pr[b'_* = b|E] - \frac{1}{2}\right) + \frac{1}{2} \\
&= \Pr[E] \cdot \Big(\Pr[Z = g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \land (Z = g^{\alpha \cdot \beta})] \\
&\quad + \Pr[Z \neq g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \land (Z \neq g^{\alpha \cdot \beta})] - \frac{1}{2}\Big) + \frac{1}{2} \\
&= \Pr[E] \cdot \left(\frac{1}{2} \cdot \left(\frac{1}{2} \pm \lambda_1(k)\right) + \frac{1}{2} \cdot \frac{1}{2} - \frac{1}{2}\right) + \frac{1}{2} \\
&= \pm \lambda_1(k) \cdot \frac{\Pr[E]}{2} + \frac{1}{2}
\end{align*}
$$

Finally, we deduce the advantage of $\mathcal{B}$ against the DDH problem:

$$ \left| \Pr[b'_* = b] - \frac{1}{2} \right| = \lambda_1(k) \cdot \frac{\Pr[E]}{2} \geq \lambda_1(k) \cdot \left( \frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|} \right) $$

This advantage is non-negligible, which concludes the proof of Claim 1.

**Proof of Claim 2:** We show how to build the algorithm $\mathcal{B}$.
It runs the same reduction as in Claim 1, except that the algorithm $\mathcal{B}$ simulates the oracles LRSO$_b(d_0, d_1, \cdot, \cdot)$ and V.Proof$(\cdot, \cdot, \cdot, \cdot)$ as follows during the second phase of the experiment of $\mathcal{A}$:

**Oracle LRSO$_b(d_0, d_1, \cdot, \cdot)$:** On the $i^{\text{th}}$ input $(m''_i, L''_i)$, if it is the first call to this oracle, then the oracle is defined as in the reduction of Claim 1. Else it runs the oracle V.Sig$(\cdot, \cdot, \cdot)$ on the input $(m''_i, d, L''_i)$ and returns the resulting signature $\sigma''_i$ to $\mathcal{A}$.

**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, if LRSO$_b$ has already been called and $\exists j$ such that $\sigma'_i = \sigma''_j$ and ($l'_i = d_0$ or $l'_i = d_1$) then it returns $\bot$ to $\mathcal{A}$. Else, it proceeds as in the reduction of Claim 1.

*Analysis:* Let $q$ be the number of queries asked to V.Sig$(\cdot, \cdot, \cdot)$ and let $E$ be the event "$\mathcal{B}$ does not abort the experiment of $\mathcal{A}$". As in Claim 1, we have:

$$ \Pr[E] \geq \frac{1}{n} - \frac{q}{|G|} $$

Let $\alpha, \beta$ be the two elements of $\mathbb{Z}_p$ such that $X = g^\alpha$ and $Y = g^\beta$. Let $b$ be the solution to the DDH instance, i.e. $b = 1$ iff $Z = g^{\alpha \cdot \beta}$.
We compute the probability that $\mathcal{B}$ wins its DDH experiment, writing $\lambda(k)$ for $\lambda_{\psi+1}(k)$:

$$
\begin{align*}
\Pr[b'_* = b] &= \Pr[E] \cdot \Pr[b'_* = b|E] + (1 - \Pr[E]) \cdot \Pr[b'_* = b|\neg E] \\
&= \Pr[E] \cdot \Big(\Pr[Z = g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \wedge (Z = g^{\alpha \cdot \beta})] \\
&\quad + \Pr[Z \neq g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \wedge (Z \neq g^{\alpha \cdot \beta})] - \frac{1}{2}\Big) + \frac{1}{2} \\
&= \Pr[E] \cdot \left(\frac{1}{2} \cdot \left(\frac{1}{2} \pm \lambda(k)\right) + \frac{1}{2} \cdot \left(\frac{1}{2} \pm \epsilon(k)\right) - \frac{1}{2}\right) + \frac{1}{2} \\
&= (\pm\lambda(k) \pm \epsilon(k)) \cdot \frac{\Pr[E]}{2} + \frac{1}{2}
\end{align*}
$$

Finally, we deduce the advantage of $\mathcal{B}$ against the DDH problem:

$$
\begin{align*}
\left|\Pr[b'_* = b] - \frac{1}{2}\right| &= \left|\pm \lambda(k) \pm \epsilon(k)\right| \cdot \frac{\Pr[E]}{2} \\
&\geq (\lambda(k) - \epsilon(k)) \cdot \left(\frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|}\right) \\
&= \lambda(k) \cdot \frac{1}{2 \cdot n} - \frac{q \cdot \lambda(k)}{2 \cdot |G|} - \epsilon(k) \cdot \left(\frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|}\right) \\
&\geq \lambda(k) \cdot \frac{1}{2 \cdot n} - \frac{q \cdot \lambda(k)}{2 \cdot |G|} - \frac{\epsilon(k)}{2 \cdot n}
\end{align*}
$$

This advantage is non-negligible, which concludes the proof of Claim 2 and of the lemma. $\square$

**Lemma 3.** *EVeR is* $n$*-acc secure for any polynomially bounded* $n$ *under the DL assumption in the random oracle model.*

*Proof.* We first recall that since LogEq$_n$ is valid, there exist a polynomial time extractor $\mathcal{E}$ and a polynomial time simulator Sim for LogEq$_n$. Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{EVeR},\mathcal{A}}^{\text{n-acc}}(k) = 1]$ is non-negligible.
We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.

$\mathcal{B}$ description: $\mathcal{B}$ receives the DL instance $(G, p, g, Y)$ as input. For all $i \in \{1, \dots, n\}$, it picks $x_i \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$ and sets $\mathbf{pk}_i = Y^{x_i}$. $\mathcal{B}$ runs $x' \leftarrow \mathcal{E}^{\mathcal{A'}(\{\mathbf{pk}_i\}_{1 \le i \le n})}(k)$ where $\mathcal{A}'$ is the following algorithm:

**Algorithm $\mathcal{A}'(\{\mathbf{pk}_i\}_{1 \le i \le n})$:** It runs $(L_*, m_*, \sigma_*, \mathbf{pk}_*, \pi_*) \leftarrow \mathcal{A}(\{\mathbf{pk}_i\}_{1 \le i \le n})$. It simulates the oracles for $\mathcal{A}$ as follows:

**Random oracle $H(\cdot)$:** On the $i^{\text{th}}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$. Else it picks $u_i \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.

**Oracle V.Sig**$(\cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L_i, l_i, m_i)$, it picks $r_i \stackrel{s}{\leftarrow} \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m_i||r_i = M_j$. It computes $z_i = \mathbf{pk}_{l_i}^{u_j}$ and it runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_i})$. It returns $\sigma_i = (r_i, z_i, P_i)$ to $\mathcal{A}$.

**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$; then there exists $j$ such that $m'_i||r'_i = M_j$. It computes $\bar{z}_i = \mathbf{pk}_{l'_i}^{u_j}$ and it runs $P'_i \leftarrow \text{Sim}((h'_i, \bar{z}_i, g, \mathbf{pk}_{l'_i}))$. It returns $(\bar{z}_i, P'_i)$ to $\mathcal{A}$.

Finally, $\mathcal{A}'$ parses $\sigma_* = (r_*, z_*, P_*)$, computes $h_* = H(m_*||r_*)$ using the random oracle $H(\cdot)$, and returns $P_*$.
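The reductions above all rely on programming the random oracle so that every hash value $H(M) = g^u$ has a known exponent $u$: for $\mathbf{pk} = g^{sk}$ and $h = g^u$ we have $\mathbf{pk}^u = g^{sk \cdot u} = h^{sk}$, which is exactly the tag an honest signer would compute, so the simulation is perfect. A minimal Python sketch of this consistency check (all parameters and names are illustrative toys, not part of the scheme; the group $G$ is approximated by exponentiation modulo a prime):

```python
# Toy check of the random-oracle programming used in the reductions:
# H is programmed to return g^u for a remembered u, so the simulator can
# compute the honest tag z = h^sk as pk^u without using sk.
import secrets

p = 2**127 - 1           # a Mersenne prime (illustrative modulus)
g = 3                    # illustrative base

table = {}               # plays the role of the list H_list

def H(msg: bytes) -> int:
    """Programmable random oracle: returns g^u mod p, remembering u."""
    if msg not in table:
        table[msg] = secrets.randbelow(p - 2) + 1   # u in [1, p-2]
    return pow(g, table[msg], p)

sk = secrets.randbelow(p - 2) + 1
pk = pow(g, sk, p)                   # public key pk = g^sk

h = H(b"message||r")                 # h = H(m||r) = g^u
u = table[b"message||r"]

honest_tag = pow(h, sk, p)           # what V.Sig would compute: z = h^sk
simulated_tag = pow(pk, u, p)        # what the reduction computes: z = pk^u

assert honest_tag == simulated_tag   # pk^u = g^(sk*u) = h^sk
```

The bookkeeping in `table` is also what lets the simulated V.Sig and V.Proof oracles answer repeated queries consistently, mirroring the "$\exists j < i$ such that $M_j = M_i$" case of the oracle $H(\cdot)$.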
*Analysis:* We parse $\sigma_* = (r_*, z_*, P_*)$ and $\pi_* = (\bar{z}_*, P_*')$. Suppose that $\mathcal{A}$ wins the experiment, then we have:

$$L_* \subseteq \{\mathbf{pk}_i\}_{1 \le i \le n} \cup \{\mathbf{pk}_*\} \qquad (1)$$

$$\text{V.Ver}(L_*, \sigma_*, m_*) = 1 \qquad (2)$$

$$\text{V.Judge}(L_*, m_*, \sigma_*, \mathbf{pk}_*, \pi_*) = 0 \qquad (3)$$

$$\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_* \qquad (4)$$

where $q_S$ is the number of queries to the oracle V.Sig$(\cdot,\cdot,\cdot)$. Moreover, equation (4) implies that $\forall i \in \{1, \dots, q_S\}, P_i \neq P_*$, so $P_*$ was not generated by the simulator Sim. We deduce the following equation from (2):

$$\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_*}, P_*) = 1 \qquad (5)$$

Thus $\mathcal{A}$ returns a valid proof with non-negligible probability $\lambda(k)$. Since $\mathcal{E}$ is an extractor for LogEq$_n$, it implies that:

$$\Pr[\exists \mathbf{pk} \in L_*, x' = \log_g(\mathbf{pk}) = \log_{h_*}(z_*)] \ge \lambda(k) \qquad (6)$$

We deduce the following equations from (3):

$$\bar{z}_* \neq z_* \qquad (7)$$

$$\text{LEverif}_1(\{(h_*, \bar{z}_*, g, \mathbf{pk}_*)\}, P_*') = 1 \qquad (8)$$

Since LogEq$_n$ is sound, we deduce that there exists a negligible function $\epsilon$ such that:

$$\Pr[\log_g(\mathbf{pk}_*) = \log_{h_*}(\bar{z}_*)] \ge 1 - \epsilon(k) \qquad (9)$$

$$\Rightarrow \Pr[\log_g(\mathbf{pk}_*) \neq \log_{h_*}(z_*)] \ge 1 - \epsilon(k) \qquad (10)$$

where (10) follows because $\bar{z}_* \neq z_*$ by (7), so $\log_{h_*}(\bar{z}_*) \neq \log_{h_*}(z_*)$. Hence:

$$\Rightarrow \Pr[\log_g(\mathbf{pk}_*) = \log_{h_*}(z_*)] \le \epsilon(k) \qquad (11)$$

Finally, from (1), (6) and (11) we deduce the probability that $\mathcal{B}$ wins the experiment (as in the proof of Lemma 1, when $x' = \log_g(\mathbf{pk}_j)$ for some index $j$, $\mathcal{B}$ computes the discrete logarithm $x = x'/x_j$ of $Y$):

$$\Pr[\exists \mathbf{pk} \in L_*, x' = \log_g(\mathbf{pk}) = \log_{h_*}(z_*)] \ge \lambda(k)$$

$$\Rightarrow \Pr[\exists \mathbf{pk} \in L_* \setminus \{\mathbf{pk}_*\}, x' = \log_g(\mathbf{pk}) = \log_{h_*}(z_*)] + \Pr[x' = \log_g(\mathbf{pk}_*) = \log_{h_*}(z_*)] \ge \lambda(k)$$

$$\Rightarrow \Pr[Y = g^x] + \Pr[x' = \log_g(\mathbf{pk}_*) = \log_{h_*}(z_*)] \ge \lambda(k)$$

$$\Rightarrow \Pr[Y = g^x] \ge \lambda(k) - \Pr[\log_g(\mathbf{pk}_*) = \log_{h_*}(z_*)]$$

$$\Rightarrow \Pr[Y = g^x] \ge \lambda(k) - \epsilon(k)$$

Since $\lambda(k) - \epsilon(k)$ is non-negligible, $\mathcal{B}$ solves the DL problem with non-negligible probability. $\square$

**Lemma 4.** *EVeR is* n*-non-usu-2 secure for any polynomially bounded* n *under the DL assumption in the random oracle model.*

*Proof.* We first recall that since LogEq$_n$ is valid, there exist a polynomial time extractor $\mathcal{E}$ and a polynomial time simulator Sim for LogEq$_n$. Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{EVeR},\mathcal{A}}^{\text{n-non-usu-2}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.

$\mathcal{B}$ description: $\mathcal{B}$ sets $\mathbf{pk} = Y$. $\mathcal{B}$ runs $x \leftarrow \mathcal{E}^{\mathcal{A}'(\mathbf{pk})}(k)$ where $\mathcal{A}'$ is the following algorithm:

**Algorithm $\mathcal{A}'(\mathbf{pk})$:** It runs $(L_*, m_*, \sigma_*) \leftarrow \mathcal{A}(\mathbf{pk})$. It simulates the oracles for $\mathcal{A}$ as in the reduction of the previous proof. Finally, $\mathcal{A}'$ parses $\sigma_* = (r_*, z_*, P_*)$, computes $h_* = H(m_*||r_*)$ using the random oracle $H(\cdot)$, and returns $P_*$.

Finally, $\mathcal{B}$ returns $x$.

*Analysis:* We parse $\sigma_* = (r_*, z_*, P_*)$. Suppose that $\mathcal{A}$ wins the experiment, then we have, for any $\pi_* \leftarrow \text{V.Proof}(L_*, m_*, \sigma_*, \mathbf{pk}, \mathbf{sk})$ where $\pi_* = (\tilde{z}_*, P'_*)$:

$$
\begin{align}
\text{V.Ver}(L_*, \sigma_*, m_*) &= 1 \tag{12} \\
\text{V.Judge}(L_*, m_*, \sigma_*, \mathbf{pk}, \pi_*) &= 1 \tag{13} \\
\forall i \in \{1, \dots, q_S\}, \sigma_i &\neq \sigma_* \tag{14}
\end{align}
$$

Moreover, equation (14) implies that $\forall i \in \{1, \dots, q_S\}, P_i \neq P_*$, so $P_*$ was not generated by the simulator Sim.
We deduce the following equation from (12):

$$
\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_*}, P_*) = 1 \quad (15)
$$

Thus $\mathcal{A}$ returns a valid proof with non-negligible probability $\lambda(k)$. Since $\mathcal{E}$ is an extractor for LogEq$_n$, it implies that:

$$
\Pr[\exists \mathbf{pk}_l \in L_*, x = \log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \quad (16)
$$

We deduce the following equations from (13):

$$
\tilde{z}_* = z_* \quad (17)
$$

$$
\text{LEverif}_1(\{(h_*, z_*, g, \mathbf{pk})\}, P'_*) = 1 \quad (18)
$$

Since LogEq$_n$ is sound, we deduce that there exists a negligible function $\epsilon$ such that:

$$
\Pr[\log_g(\mathbf{pk}) = \log_{h_*}(z_*)] \geq 1 - \epsilon(k) \quad (19)
$$

$$
\Rightarrow \Pr[\log_g(\mathbf{pk}) \neq \log_{h_*}(z_*)] \leq \epsilon(k) \quad (20)
$$

Finally, from (16) and (20) we deduce the probability that $\mathcal{B}$ wins the experiment (note that if some $\mathbf{pk}_l \neq \mathbf{pk}$ satisfies $\log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)$, then $\log_g(\mathbf{pk}) \neq \log_{h_*}(z_*)$):

$$
\begin{align*}
& \Pr[\exists \mathbf{pk}_l \in L_*, x = \log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \\
& \Rightarrow \Pr[x = \log_g(\mathbf{pk}) = \log_{h_*}(z_*)] + \Pr[\exists \mathbf{pk}_l \in L_* \setminus \{\mathbf{pk}\}, x = \log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \\
& \Rightarrow \Pr[x = \log_g(\mathbf{pk}) = \log_{h_*}(z_*)] \geq \lambda(k) - \Pr[\exists \mathbf{pk}_l \in L_* \setminus \{\mathbf{pk}\}, x = \log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)] \\
& \Rightarrow \Pr[Y = g^x] \geq \lambda(k) - \Pr[\log_g(\mathbf{pk}) \neq \log_{h_*}(z_*)] \\
& \Rightarrow \Pr[Y = g^x] \geq \lambda(k) - \epsilon(k)
\end{align*}
$$

Since $\lambda(k) - \epsilon(k)$ is non-negligible, $\mathcal{B}$ solves the DL problem with non-negligible probability. $\square$

**D Security proofs of GUSS**

**Lemma 5.** *If* D *is unf secure then GUSS is immut secure.*

*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{immut}}(k) = 1]$ is non-negligible.
We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1]$ is non-negligible.

$\mathcal{B}$ construction: $\mathcal{B}$ receives the public key $\mathbf{pk}_d$ as input. It runs $init_v \leftarrow \text{V.Init}(1^k)$ and $(\mathbf{pk}_v, \mathbf{sk}_v) \leftarrow \text{V.Gen}(init_v)$. It sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$ and runs $(\mathbf{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}(\mathbf{pk})$. During the experiment, $\mathcal{B}$ simulates the two oracles $\text{Sig}(\cdot, sk, \cdot, \cdot)$ and $\text{SiProof}(sk, \cdot, \cdot, \cdot)$ for $\mathcal{A}$ as follows:

$\text{Sig}(\cdot, sk, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m_i, \text{ADM}_i, \mathbf{spk}_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, sends $(M_i || \text{ADM}_i || \mathbf{pk} || \mathbf{spk}_i)$ to the oracle $\text{D.Sig}(sk_d, \cdot)$ and receives the signature $\sigma_{i,1}$. It runs $\sigma_{i,2} \leftarrow \text{V.Sig}(\{\mathbf{pk}_v, \mathbf{spk}_i\}, \mathbf{sk}_v, (\sigma_{i,1} || m_i))$. It returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$.

$\text{SiProof}(sk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m'_i, \sigma'_i, \mathbf{spk}'_i)$, it parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$. It runs $\pi'_{\text{si},i} \leftarrow \text{V.Proof}(\{\mathbf{pk}_v, \mathbf{spk}'_i\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \mathbf{pk}_v, \mathbf{sk}_v)$ and returns it.

Finally, $\mathcal{B}$ parses $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \text{ADM}_*)$, computes $M_* \leftarrow \text{FIX}_{\text{ADM}_*}(m_*)$ and returns the couple $((M_* || \text{ADM}_* || \mathbf{pk} || \mathbf{spk}_*), \sigma_{1,*})$.

*Analysis:* We show that if $\mathcal{A}$ wins its experiment, then $\mathcal{B}$ also wins its experiment.
Suppose that $\mathcal{A}$ wins its experiment, then the following equations hold:

$$
\begin{gather}
\text{Ver}(m_*, \sigma_*, \mathbf{pk}, \mathbf{spk}_*) = 1 \tag{21} \\
\forall i \in \{1, \dots, q_{\text{Sig}}\}, (\mathbf{spk}_* \neq \mathbf{spk}_i) \text{ or } (\text{FIX}_{\text{ADM}_*}(m_*) \neq \text{FIX}_{\text{ADM}_i}(m_i)) \tag{22}
\end{gather}
$$

(21) implies the following equation:

$$ \text{D.Ver}(\mathbf{pk}_d, (M_* || \text{ADM}_* || \mathbf{pk} || \mathbf{spk}_*), \sigma_{1,*}) = 1 $$

Moreover, (22) implies that:

$$
\forall i \in \{1, \dots, q_{\text{Sig}}\}, (M_* || \text{ADM}_* || \mathbf{pk} || \mathbf{spk}_*) \neq (M_i || \text{ADM}_i || \mathbf{pk} || \mathbf{spk}_i)
$$

We deduce that $\mathcal{B}$ never sent the message $(M_* || \text{ADM}_* || \mathbf{pk} || \mathbf{spk}_*)$ to the oracle $\text{D.Sig}(sk_d, \cdot)$. It follows that if $\mathcal{A}$ wins its experiment, then $\mathcal{B}$ wins its experiment, thus:

$$
\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1] \geq \lambda(k)
\hspace*{\fill} \square
$$

**Lemma 6.** *If* V *is 2-ano secure then GUSS is trans secure.*

*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{trans}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{\mathcal{V},\mathcal{B}}^{\text{2-ano}}(k) = 1]$ is non-negligible.

$\mathcal{B}_1$ receives $(\mathbf{pk}_v, \mathbf{spk})$ as input, and returns $(1, 2)$. $\mathcal{B}_2$ runs $(\mathbf{pk}_d, \mathbf{sk}_d) \leftarrow \text{D.Gen}(\text{D.Init}(1^k))$ and sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$. It runs $b' \leftarrow \mathcal{A}(\mathbf{pk}, \mathbf{spk})$ and returns $b'$.
During the experiment, $\mathcal{B}_2$ simulates the oracles for $\mathcal{A}$ as follows:

**Sig**$(\cdot, sk, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m_i, \text{ADM}_i, \mathbf{spk}_i)$, $\mathcal{B}_2$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, runs $\sigma_{i,1} \leftarrow \text{D.Sig}(\mathbf{sk}_d, (M_i || \text{ADM}_i || \mathbf{pk} || \mathbf{spk}_i))$ and sends $(\{\mathbf{pk}_v, \mathbf{spk}_i\}, 1, (m_i || \sigma_{i,1}))$ to the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ that returns the signature $\sigma_{i,2}$. $\mathcal{B}_2$ returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$ to $\mathcal{A}$.

**San**$(\cdot, \cdot, \cdot, \cdot, ssk)$: On the $i^{\text{th}}$ input $(m'_i, \text{MOD}'_i, \sigma'_i, \mathbf{pk}'_i)$, it parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$ and $\mathbf{pk}'_i = (\mathbf{pk}'_{d,i}, \mathbf{pk}'_{v,i})$. This algorithm first computes the modified message $\bar{m}'_i \leftarrow \text{MOD}'_i(m'_i)$ and sends $(\{\mathbf{pk}'_{v,i}, \mathbf{spk}\}, 2, (\bar{m}'_i || \sigma'_{i,1}))$ to the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ that returns the signature $\bar{\sigma}'_{i,2}$. $\mathcal{B}_2$ returns $\bar{\sigma}'_i = (\sigma'_{i,1}, \bar{\sigma}'_{i,2}, \text{ADM}'_i)$ to $\mathcal{A}$.

**SiProof**$(sk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m''_i, \sigma''_i, \mathbf{spk}''_i)$, $\mathcal{B}_2$ parses $\sigma''_i = (\sigma''_{i,1}, \sigma''_{i,2}, \text{ADM}''_i)$. It sends $(\{\mathbf{pk}_v, \mathbf{spk}''_i\}, (m''_i || \sigma''_{i,1}), \sigma''_{i,2}, \mathbf{pk}_v, 1)$ to the oracle $\text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ that returns the proof $\pi''_{\text{si},i}$. Finally, $\mathcal{B}_2$ returns $\pi''_{\text{si},i}$.

**SaProof**$(ssk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m'''_i, \sigma'''_i, \mathbf{pk}'''_i)$, $\mathcal{B}_2$ parses $\sigma'''_i = (\sigma'''_{i,1}, \sigma'''_{i,2}, \text{ADM}'''_i)$ and $\mathbf{pk}'''_i = (\mathbf{pk}'''_{d,i}, \mathbf{pk}'''_{v,i})$.
It sends $(\{\mathbf{pk}'''_{v,i}, \mathbf{spk}\}, (m'''_i || \sigma'''_{i,1}), \sigma'''_{i,2}, \mathbf{spk}, 2)$ to the oracle $\text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ that returns the proof $\pi'''_{\text{sa},i}$. Finally, $\mathcal{B}_2$ returns $\pi'''_{\text{sa},i}$.

**Sa/Si**$(b, \mathbf{pk}, \mathbf{spk}, sk, ssk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(\tilde{m}_i, \tilde{\text{ADM}}_i, \tilde{\text{MOD}}_i)$, if $\tilde{\text{ADM}}_i(\tilde{\text{MOD}}_i) = 0$, $\mathcal{B}_2$ returns $\bot$. Else $\mathcal{B}_2$ computes the fixed message part $\tilde{M}_i \leftarrow \text{FIX}_{\tilde{\text{ADM}}_i}(\tilde{m}_i)$. It runs $\tilde{\sigma}_{i,1} \leftarrow \text{D.Sig}(\mathbf{sk}_d, (\tilde{M}_i || \tilde{\text{ADM}}_i || \mathbf{pk} || \mathbf{spk}))$ and sends $((\tilde{\text{MOD}}_i(\tilde{m}_i) || \tilde{\sigma}_{i,1}), \{\mathbf{pk}_v, \mathbf{spk}\})$ to the oracle $\text{LRSO}_b(1, 2, \cdot, \cdot)$ that returns the signature $\tilde{\sigma}_{i,2}$. $\mathcal{B}_2$ returns $\tilde{\sigma}_i = (\tilde{\sigma}_{i,1}, \tilde{\sigma}_{i,2}, \tilde{\text{ADM}}_i)$ to $\mathcal{A}$.

*Analysis:* Suppose that $\mathcal{A}$ wins its experiment, then $b = b'$ and:

$$ S_{\text{Sa/Si}} \cap (S_{\text{SiProof}} \cup S_{\text{SaProof}}) = \emptyset $$

where $S_{\text{Sa/Si}}$ (resp. $S_{\text{SiProof}}$ and $S_{\text{SaProof}}$) is the set of all signatures output by the oracle Sa/Si (resp. sent to the oracles SiProof and SaProof). It implies that the signatures sent to the oracle $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$ were not produced by $\text{LRSO}_b(1, 2, \cdot, \cdot)$. More formally, we have:

$$ \forall i \in \{1,\dots,q_S\}, \forall j \in \{1,\dots,q_P\}, (\sigma_i \neq \sigma_j') $$

where $q_S$ (resp. $q_P$) is the number of calls to the oracle $\text{V.Sig}(\cdot,\cdot,\cdot)$ (resp. $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$).
Finally, the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:

$$ \Pr[\text{Exp}_{V,\mathcal{B}}^{\text{2-ano}}(k) = 1] \geq \lambda(k) $$

which concludes the proof. $\square$

**Lemma 7.** *If* D *is unf secure then GUSS is unlink secure.*

*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = |\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k) = 1] - 1/2|$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1]$ is non-negligible.

$\mathcal{B}$ construction: $\mathcal{B}$ receives $\mathbf{pk}_d$ as input. $\mathcal{B}$ runs $(\mathbf{pk}_v, \mathbf{sk}_v) \leftarrow \text{V.Gen}(\text{V.Init}(1^k))$ and $(\mathbf{spk}, \mathbf{ssk}) \leftarrow \text{V.Gen}(\text{V.Init}(1^k))$, and sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$. It chooses $b \stackrel{\$}{\leftarrow} \{0, 1\}$ and runs $b' \leftarrow \mathcal{A}(\mathbf{pk}, \mathbf{spk})$. During the experiment, $\mathcal{B}$ simulates the oracles for $\mathcal{A}$ as follows:

**Sig**$(\cdot, sk, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m_i, \text{ADM}_i, \mathbf{spk}_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, sends $(M_i || \text{ADM}_i || \mathbf{pk} || \mathbf{spk}_i)$ to the oracle $\text{D.Sig}(sk_d, \cdot)$ and receives the signature $\sigma_{i,1}$. It runs $\sigma_{i,2} \leftarrow \text{V.Sig}(\{\mathbf{pk}_v, \mathbf{spk}_i\}, \mathbf{sk}_v, (\sigma_{i,1} || m_i))$. It returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$.

**SiProof**$(sk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m'_i, \sigma'_i, \mathbf{spk}'_i)$, it parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$. It runs $\pi'_{\text{si},i} \leftarrow \text{V.Proof}(\{\mathbf{pk}_v, \mathbf{spk}'_i\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \mathbf{pk}_v, \mathbf{sk}_v)$ and returns it.
**San**$(\cdot, \cdot, \cdot, \cdot, ssk)$: On the $i^{\text{th}}$ input $(m''_i, \text{MOD}''_i, \sigma''_i, \mathbf{pk}''_i)$, $\mathcal{B}$ runs $\bar{\sigma}''_i \leftarrow \text{San}(m''_i, \text{MOD}''_i, \sigma''_i, \mathbf{pk}''_i, \mathbf{ssk})$ and returns $\bar{\sigma}''_i$ to $\mathcal{A}$.

**SaProof**$(ssk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m'''_i, \sigma'''_i, \mathbf{pk}'''_i)$, $\mathcal{B}$ runs $\pi'''_{\text{sa},i} \leftarrow \text{SaProof}(\mathbf{ssk}, m'''_i, \sigma'''_i, \mathbf{pk}'''_i)$ and returns $\pi'''_{\text{sa},i}$ to $\mathcal{A}$.

**LRSan**$(b, \mathbf{pk}, \mathbf{ssk}, \cdot, \cdot)$: On the $i^{\text{th}}$ input $((\tilde{m}_{0,i}, \tilde{\text{MOD}}_{0,i}, \tilde{\sigma}_{0,i}), (\tilde{m}_{1,i}, \tilde{\text{MOD}}_{1,i}, \tilde{\sigma}_{1,i}))$, if for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, \mathbf{pk}, \mathbf{spk}) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, then this oracle returns $\tilde{\sigma}'_i = (\tilde{\sigma}'_{1,b,i}, \tilde{\sigma}'_{2,b,i}, \tilde{\text{ADM}}'_{b,i}) \leftarrow \text{San}(\tilde{m}_{b,i}, \tilde{\text{MOD}}_{b,i}, \tilde{\sigma}_{b,i}, \mathbf{pk}, \mathbf{ssk})$ to $\mathcal{A}$, else it returns $\bot$. Moreover, if for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, \mathbf{pk}, \mathbf{spk}) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, and if $\exists x$ such that $\tilde{\sigma}_{x,i}$ was not already output by the oracle $\text{D.Sig}(sk_d, \cdot)$, then $\mathcal{B}$ returns $((\text{FIX}_{\tilde{\text{ADM}}_{x,i}}(\tilde{m}_{x,i})||\tilde{\text{ADM}}_{x,i}||\mathbf{pk}||\mathbf{spk}), \tilde{\sigma}_{x,i})$ to the challenger and aborts the experiment of $\mathcal{A}$.

If $\mathcal{B}$ has not already aborted the experiment, then it returns $\bot$.
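The case analysis of this reduction rests on $D$ being deterministic: when the two challenge inputs agree on the fixed part, the very same string $(\text{FIX}_{\text{ADM}}(m)||\text{ADM}||\mathbf{pk}||\mathbf{spk})$ is signed, so the first component of the sanitized signature cannot depend on $b$. A minimal Python sketch of this determinism argument, with HMAC-SHA256 standing in for an arbitrary deterministic signing algorithm and a purely illustrative FIX rule (all names hypothetical, not the paper's construction):

```python
# Sketch of the determinism argument: equal fixed parts -> equal signatures.
# HMAC-SHA256 is only a stand-in for a deterministic D.Sig; fix_adm keeps
# the blocks that ADM does not declare modifiable (illustrative rule).
import hmac, hashlib

def fix_adm(blocks, adm):
    """Fixed message part: the blocks that are not modifiable under ADM."""
    return "||".join(b for i, b in enumerate(blocks) if i not in adm)

def d_sig(sk: bytes, msg: str) -> bytes:
    """Deterministic 'signature' (stand-in for D.Sig)."""
    return hmac.new(sk, msg.encode(), hashlib.sha256).digest()

sk_d = b"signer-key"                       # hypothetical signer key
adm = {1}                                  # block 1 is modifiable
m0 = ["From: Alice", "draft v1", "2024"]
m1 = ["From: Alice", "draft v2", "2024"]   # differs only on block 1

s0 = d_sig(sk_d, fix_adm(m0, adm) + "||ADM||pk||spk")
s1 = d_sig(sk_d, fix_adm(m1, adm) + "||ADM||pk||spk")

assert s0 == s1   # same fixed part signed -> same sigma_1 for b = 0 and b = 1
```

With a randomized $D$, the two signings could differ and the sanitized signature could leak $b$, which is exactly why the lemma assumes a deterministic scheme.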
*Analysis:* First observe that if, for every $i \in \{1, \dots, q\}$ where $q$ is the number of queries to the oracle LRSan$(b, \mathbf{pk}, \mathbf{ssk}, \cdot, \cdot)$, for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, \mathbf{pk}, \mathbf{spk}) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, and $\tilde{\sigma}_{j,i}$ was already output by the oracle $\text{D.Sig}(sk_d, \cdot)$, then:

$$
\text{FIX}_{\tilde{\text{ADM}}_{0,i}}(\tilde{m}_{0,i}) || \tilde{\text{ADM}}_{0,i} || \mathbf{pk} || \mathbf{spk} = \text{FIX}_{\tilde{\text{ADM}}_{1,i}}(\tilde{m}_{1,i}) || \tilde{\text{ADM}}_{1,i} || \mathbf{pk} || \mathbf{spk}
$$

Since $D$ is deterministic, we deduce that the first part of the output signature satisfies $\tilde{\sigma}'_{1,b,i} = \tilde{\sigma}_{1,0,i} = \tilde{\sigma}_{1,1,i}$ (where $\tilde{\sigma}_{j,i} = (\tilde{\sigma}_{1,j,i}, \tilde{\sigma}_{2,j,i}, \tilde{\text{ADM}}_{j,i})$). On the other hand, the second part of the output signature $\tilde{\sigma}'_{2,b,i}$ does not depend on $b$. Finally, $\tilde{\text{ADM}}'_{b,i} = \tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$, so $\tilde{\text{ADM}}'_{b,i}$ does not depend on $b$. We deduce that the output signature $\tilde{\sigma}'_{b,i}$ leaks no information about $b$. In this case, the best strategy of $\mathcal{A}$ to win the experiment is to randomly guess the bit $b'$.

On the other hand, if there exists $i \in \{1, \dots, q\}$ such that, for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, \mathbf{pk}, \mathbf{spk}) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, and if $\exists x$ such that $\tilde{\sigma}_{x,i}$ was not already output by the oracle $\text{D.Sig}(sk_d, \cdot)$, then $\mathcal{B}$ returns $((\text{FIX}_{\tilde{\text{ADM}}_{x,i}}(\tilde{m}_{x,i})||\tilde{\text{ADM}}_{x,i}||\mathbf{pk}||\mathbf{spk}), \tilde{\sigma}_{x,i})$ to the challenger and wins its experiment. We denote this event by $E$.
We have:

$$
\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1] \geq \Pr[E]
$$

On the other hand, we have:

$$
\begin{align*}
\Pr[\text{Exp}_{\text{GUSS}, \mathcal{A}}^{\text{unlink}}(k) = 1] &= \Pr[E] \cdot \Pr[\text{Exp}_{\text{GUSS}, \mathcal{A}}^{\text{unlink}}(k) = 1|E] \\
&\quad + (1 - \Pr[E]) \cdot \Pr[\text{Exp}_{\text{GUSS}, \mathcal{A}}^{\text{unlink}}(k) = 1|\neg E] \\
&= \Pr[E] \cdot \Pr[\text{Exp}_{\text{GUSS}, \mathcal{A}}^{\text{unlink}}(k) = 1|E] + \frac{1}{2} - \frac{1}{2} \cdot \Pr[E]
\end{align*}
$$

It implies that:

$$
\Pr[E] = \frac{\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1] - \frac{1}{2}}{\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1|E] - \frac{1}{2}} = \frac{\pm\lambda(k)}{\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1|E] - \frac{1}{2}} \geq \lambda(k)
$$

since $\left|\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1|E] - \frac{1}{2}\right| \leq \frac{1}{2}$. Finally, we deduce that:

$$
\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1] \geq \lambda(k)
$$

which concludes the proof. $\square$

**Lemma 8.** *If* V *is 1-acc secure then GUSS is SiAcc-1 secure.*

*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{SiAcc-1}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{V,\mathcal{B}}^{\text{1-acc}}(k) = 1]$ is non-negligible.

$\mathcal{B}$ construction: $\mathcal{B}$ receives $\mathbf{spk}$ as input, and runs $(\mathbf{pk}_*, m_*, \sigma_*, \pi_{\text{si},*}) \leftarrow \mathcal{A}(\mathbf{spk})$. During the experiment, $\mathcal{B}$ simulates the oracles for $\mathcal{A}$ as follows:

**San**$(\cdot, \cdot, \cdot, \cdot, ssk)$: On the $i^{\text{th}}$ input $(m_i, \text{MOD}_i, \sigma_i, \mathbf{pk}_i)$, it parses $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$ and $\mathbf{pk}_i = (\mathbf{pk}_{d,i}, \mathbf{pk}_{v,i})$.
This algorithm first computes the modified message $\bar{m}_i \leftarrow \text{MOD}_i(m_i)$ and sends $(\{\mathbf{pk}_{v,i}, \mathbf{spk}\}, 1, (\bar{m}_i || \sigma_{i,1}))$ to the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ that returns the signature $\bar{\sigma}_{i,2}$. $\mathcal{B}$ returns $\bar{\sigma}_i = (\sigma_{i,1}, \bar{\sigma}_{i,2}, \text{ADM}_i)$ to $\mathcal{A}$.

**SaProof**$(ssk, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m'_i, \sigma'_i, \mathbf{pk}'_i)$, $\mathcal{B}$ parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$ and $\mathbf{pk}'_i = (\mathbf{pk}'_{d,i}, \mathbf{pk}'_{v,i})$. It sends $(\{\mathbf{pk}'_{v,i}, \mathbf{spk}\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \mathbf{spk}, 1)$ to the oracle $\text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ that returns the proof $\pi'_{\text{sa},i}$. Finally, $\mathcal{B}$ returns $\pi'_{\text{sa},i}$ to $\mathcal{A}$.

Finally, $\mathcal{B}$ parses $\mathbf{pk}_* = (\mathbf{pk}_{d,*}, \mathbf{pk}_{v,*})$ and $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \text{ADM}_*)$ and returns $(\{\mathbf{spk}, \mathbf{pk}_{v,*}\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \mathbf{pk}_{v,*}, \pi_{\text{si},*})$.

*Analysis:* Suppose that $\mathcal{A}$ wins its experiment, then:

$$
\forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \bar{\sigma}_i) \tag{23}
$$

$$
\text{Ver}(m_*, \sigma_*, \mathbf{pk}_*, \mathbf{spk}) = 1 \tag{24}
$$

$$
\text{SiJudge}(m_*, \sigma_*, \mathbf{pk}_*, \mathbf{spk}, \pi_{\text{si},*}) = 0 \tag{25}
$$

where $q_{\text{San}}$ is the number of calls to the oracle San$(\cdot, \cdot, \cdot, \cdot, ssk)$. First note that $\{\mathbf{spk}, \mathbf{pk}_{v,*}\} \subseteq \{\mathbf{spk}\} \cup \{\mathbf{pk}_{v,*}\}$. (23) implies that:

$$
\forall i \in \{1, \dots, q_S\}, \sigma_{2,*} \neq \bar{\sigma}_{i,2}
$$

where $q_S$ is the number of queries to $\text{V.Sig}(\cdot, \cdot, \cdot)$.
Indeed, if $\sigma_* \neq \sigma'_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ or $\sigma_{2,*} \neq \sigma_{2,i}$ or $\mathrm{ADM}_* \neq \mathrm{ADM}_i$: if $\mathrm{ADM}_* \neq \mathrm{ADM}_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$, because $\sigma_{1,*}$ (resp. $\sigma_{1,i}$) is a signature of $\mathrm{ADM}_*$ (resp. $\mathrm{ADM}_i$). If $\sigma_{1,*} \neq \sigma_{1,i}$ then $\sigma_{2,*} \neq \sigma_{2,i}$, because $\sigma_{2,*}$ (resp. $\sigma_{2,i}$) is a signature of $\sigma_{1,*}$ (resp. $\sigma_{1,i}$). Hence, in all cases, $\sigma_{2,*} \neq \bar{\sigma}_{i,2}$.

On the other hand, (24) implies that:

$$
V.\mathrm{Ver}(\{\mathbf{spk}, \mathbf{pk}_{v,*}\}, \sigma_{2,*}, m_* || \sigma_{1,*}) = 1
$$

Finally, (25) implies that:

$$
V.\mathrm{Judge}(\{\mathbf{spk}, \mathbf{pk}_{v,*}\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \mathbf{pk}_{v,*}, \pi_{si,*}) = 0
$$

We deduce that the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:

$$
\Pr[\mathrm{Exp}_{V,\mathcal{B}}^{\mathrm{1-acc}}(k) = 1] \geq \lambda(k)
$$

which concludes the proof. □

**Lemma 9.** If *V* is 1-*non-usu-2* secure then GUSS is SaAcc-1 secure.

*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \mathrm{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\mathrm{Exp}_{\mathrm{GUSS},\mathcal{A}}^{\mathrm{SaAcc-1}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \mathrm{POLY}(k)$ such that $\Pr[\mathrm{Exp}_{V,\mathcal{B}}^{\mathrm{1-non-usu-2}}(k) = 1]$ is non-negligible.

$\mathcal{B}$ construction: $\mathcal{B}$ receives $(\mathbf{pk}_v)$ as input, generates $(\mathbf{pk}_d, \mathbf{sk}_d) \leftarrow \mathrm{SiGen}(\mathrm{init}_d)$, sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$ and runs $(\mathbf{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}(\mathbf{pk})$.
During the experiment, $\mathcal{B}$ simulates the oracles to $\mathcal{A}$ as follows:

Sig(sk, ., ., .): On the i-th input $(m_i, \mathrm{ADM}_i, \mathbf{spk}_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \mathrm{FIX}_{\mathrm{ADM}_i}(m_i)$, runs $\sigma_{i,1} \leftarrow D.\mathrm{Sig}(\mathbf{sk}_d, (M_i||\mathrm{ADM}_i||\mathbf{pk}||\mathbf{spk}_i))$, and sends $(\{\mathbf{pk}_v, \mathbf{spk}_i\}, 1, (m_i||\sigma_{i,1}))$ to the oracle $V.\mathrm{Sig}(., ., .)$, which returns the signature $\sigma_{i,2}$. $\mathcal{B}$ returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \mathrm{ADM}_i)$ to $\mathcal{A}$.

SiProof(sk, ., ., .): On the i-th input $(m'_i, \sigma'_i, \mathbf{spk}'_i)$, $\mathcal{B}$ parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \mathrm{ADM}'_i)$. It sends $(\{\mathbf{pk}_v, \mathbf{spk}'_i\}, (m'_i||\sigma'_{i,1}), \sigma'_{i,2}, \mathbf{pk}_v, 1)$ to the oracle $V.\mathrm{Proof}(., ., ., ., .)$, which returns the proof $\pi'_{\mathrm{si},i}$. Finally, $\mathcal{B}$ returns $\pi'_{\mathrm{si},i}$.

Finally, $\mathcal{B}$ parses $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \mathrm{ADM}_*)$ and returns $(\{\mathbf{spk}_*, \mathbf{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*})$.

*Analysis*: Suppose that $\mathcal{A}$ wins its experiment; then, for any $\pi_{\mathrm{si},*} \leftarrow \mathrm{SiProof}(\mathbf{sk}, m_*, \sigma_*, \mathbf{spk}_*)$:

$$
\forall i \in \{1, \dots, q_{\text{Sig}}\}, (\sigma_* \neq \sigma'_i) \tag{26}
$$

$$
\mathrm{Ver}(m_*, \sigma_*, \mathbf{pk}, \mathbf{spk}_*) = 1 \tag{27}
$$

$$
\mathrm{SaJudge}(m_*, \sigma_*, \mathbf{pk}, \mathbf{spk}_*, \pi_{\mathrm{si},*}) = 1 \tag{28}
$$

where $q_{\text{Sig}}$ is the number of calls to the oracle Sig(sk, ., ., .). First note that (26) implies that:

$$
\forall i \in \{1, \dots, q_S\}, \sigma_{2,*} \neq \bar{\sigma}_{i,2}
$$
---PAGE_BREAK---

where $q_S$ is the number of queries to $V.\mathrm{Sig}(., ., .)$.
Indeed, if $\sigma_* \neq \sigma'_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ or $\sigma_{2,*} \neq \sigma_{2,i}$ or $\mathrm{ADM}_* \neq \mathrm{ADM}_i$: if $\mathrm{ADM}_* \neq \mathrm{ADM}_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$, because $\sigma_{1,*}$ (resp. $\sigma_{1,i}$) is a signature of $\mathrm{ADM}_*$ (resp. $\mathrm{ADM}_i$). If $\sigma_{1,*} \neq \sigma_{1,i}$ then $\sigma_{2,*} \neq \sigma_{2,i}$, because $\sigma_{2,*}$ (resp. $\sigma_{2,i}$) is a signature of $\sigma_{1,*}$ (resp. $\sigma_{1,i}$). Hence, in all cases, $\sigma_{2,*} \neq \bar{\sigma}_{i,2}$.

On the other hand, (27) implies that:

$$
V.\mathrm{Ver}(\{\mathbf{spk}_*, \mathbf{pk}_v\}, \sigma_{2,*}, m_* || \sigma_{1,*}) = 1
$$

Moreover, (28) implies that:

$$
V.\mathrm{Judge}(\{\mathbf{spk}_*, \mathbf{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \mathbf{pk}_v, \pi_{\mathrm{si},*}) = 0
$$

Indeed, $\pi_{\mathrm{si},*}$ cannot be equal to $\perp$ since it is computed by the proof algorithm from a valid signature. Finally, note that since $\pi_{\mathrm{si},*} \leftarrow \mathrm{SiProof}(\mathbf{sk}, m_*, \sigma_*, \mathbf{spk}_*)$, we have:

$$
\pi_{\mathrm{si},*} \leftarrow V.\mathrm{Proof}(\{\mathbf{spk}_*, \mathbf{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \mathbf{pk}_v, \mathbf{sk}_v)
$$

We deduce that the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:

$$
\Pr[\mathrm{Exp}_{V,\mathcal{B}}^{\mathrm{1-non-usu-2}}(k) = 1] \geq \lambda(k)
$$

which concludes the proof.
□
\ No newline at end of file
diff --git a/samples/texts_merged/6859646.md b/samples/texts_merged/6859646.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f787b531bd4bb8ab3e5a0d0a794f8d9ab429252
--- /dev/null
+++ b/samples/texts_merged/6859646.md
@@ -0,0 +1,2485 @@

---PAGE_BREAK---

# Secondary School Examination-2020
## Marking Scheme - MATHEMATICS STANDARD

**Subject Code: 041 Paper Code: 30/2/1, 30/2/2, 30/2/3**

### General instructions

1. You are aware that evaluation is the most important process in the actual and correct assessment of the candidates. A small mistake in evaluation may lead to serious problems which may affect the future of the candidates, education system and teaching profession. To avoid mistakes, it is requested that before starting evaluation, you must read and understand the spot evaluation guidelines carefully. Evaluation is a 10-12 days mission for all of us. Hence, it is necessary that you put in your best efforts in this process.

2. Evaluation is to be done as per instructions provided in the Marking Scheme. It should not be done according to one's own interpretation or any other consideration. The Marking Scheme should be strictly adhered to and religiously followed. However, while evaluating, answers which are based on latest information or knowledge and/or are innovative may be assessed for their correctness and marks awarded to them. In class-X, while evaluating the two competency based questions, please try to understand the given answer; even if the reply is not from the marking scheme but the correct competency is demonstrated by the candidate, marks should be awarded.

3. The Head-Examiner must go through the first five answer books evaluated by each evaluator on the first day, to ensure that evaluation has been carried out as per the instructions given in the Marking Scheme.
The remaining answer books meant for evaluation shall be given only after ensuring that there is no significant variation in the marking of individual evaluators. + +4. Evaluators will mark (√) wherever answer is correct. For wrong answer 'X' be marked. Evaluators will not put right kind of mark while evaluating which gives an impression that answer is correct and no marks are awarded. This is **most common mistake which evaluators are committing**. + +5. If a question has parts, please award marks on the right-hand side for each part. Marks awarded for different parts of the question should then be totaled up and written in the left-hand margin and encircled. This may be followed strictly. + +6. If a question does not have any parts, marks must be awarded in the left-hand margin and encircled. This may also be followed strictly. + +7. If a student has attempted an extra question, answer of the question deserving more marks should be retained and the other answer scored out. + +8. No marks to be deducted for the cumulative effect of an error. It should be penalized only once. + +9. A full scale of marks 0-80 marks as given in Question Paper) has to be used. Please do not hesitate to award full marks if the answer deserves it. + +10. Every examiner has to necessarily do evaluation work for full working hours i.e. 8 hours every day and evaluate 20 answer books per day in main subjects and 25 answer books per day in other subjects (Details are given in Spot Guidelines). + +11. Ensure that you do not make the following common types of errors committed by the Examiner in the past: +* Leaving answer or part thereof unassessed in an answer book. +* Giving more marks for an answer than assigned to it. +* Wrong totaling of marks awarded on a reply. +* Wrong transfer of marks from the inside pages of the answer book to the title page. +* Wrong question wise totaling on the title page. +* Wrong totaling of marks of the two columns on the title page. +* Wrong grand total. 
+* Marks in words and figures not tallying. +* Wrong transfer of marks from the answer book to online award list. +* Answers marked as correct, but marks not awarded. (Ensure that the right tick mark is correctly and clearly indicated. It should merely be a line. Same is with the X for incorrect answer.) +* Half or a part of answer marked correct and the rest as wrong, but no marks awarded. + +12. While evaluating the answer books if the answer is found to be totally incorrect, it should be marked as cross (X) and awarded zero (0) Marks. + +13. Any unassessed portion, non-carrying over of marks to the title page, or totaling error detected by the candidate shall damage the prestige of all the personnel engaged in the evaluation work as also of the Board. Hence, in order to uphold the prestige of all concerned, it is again reiterated that the instructions be followed meticulously and judiciously. + +14. The Examiners should acquaint themselves with the guidelines given in the Guidelines for spot Evaluation before starting the actual evaluation. + +15. Every Examiner shall also ensure that all the answers are evaluated, marks carried over to the title page, correctly totaled and written in figures and words. + +16. The Board permits candidates to obtain photocopy of the Answer Book on request in an RTI application and also separately as a part of the re-evaluation process on payment of the processing charges. +---PAGE_BREAK--- + +QUESTION PAPER CODE 30/2/1 +EXPECTED ANSWER/VALUE POINTS +SECTION - A + +Question numbers 1 to 10 are multiple choice questions of 1 mark each. + +You have to select the correct choice : + +Marks + +Q.No. + +1. The sum of exponents of prime factors in the prime-factorisation of 196 is + (a) 3 + (b) 4 + (c) 5 + (d) 2 + **Ans:** (b) 4 + +1 + +2. 
Euclid's division Lemma states that for two positive integers a and b, there exists unique integer q and r satisfying a = bq + r, and + (a) $0 < r < b$ + (b) $0 < r \leq b$ + (c) $0 \leq r < b$ + (d) $0 \leq r \leq b$ + **Ans:** (c) $0 \leq r < b$ + +1 + +3. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are + (a) $m, m+3$ + (b) $-m, m+3$ + (c) $m, -(m+3)$ + (d) $-m, -(m+3)$ + **Ans:** (b) $-m, m+3$ + +1 + +4. The value of k for which the system of linear equations $x + 2y = 3$, $5x + ky + 7 = 0$ is inconsistent is + (a) $-\frac{14}{3}$ + (b) $\frac{2}{5}$ + (c) 5 + (d) 10 + **Ans:** (d) 10 + +1 + +5. The roots of the quadratic equation $x^2 - 0.04 = 0$ are + (a) $\pm 0.2$ + (b) $\pm 0.02$ + (c) 0.4 + (d) 2 + **Ans:** (a) $\pm 0.2$ + +1 + +6. The common difference of the A.P. $\frac{1}{p}$, $\frac{1-p}{p}$, $\frac{1-2p}{p}$, ... is + (a) 1 + (b) $\frac{1}{p}$ + (c) -1 + (d) $\frac{-1}{p}$ + **Ans:** (c) -1 + +1 + +7. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is + (a) na + (b) $(2n-1)a$ + (c) $(2n+1)a$ + (d) 2na + **Ans:** (b) $(2n-1)a$ + +1 + +8. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is + (a) (2, 0) + (b) (0, 2) + (c) (3, 0) + (d) (2, 2) + **Ans:** (a) (2, 0) + +1 + +9. The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are + (a) (3, 5) + (b) (3, -5) + (c) (-3, -5) + (d) (-3, 5) + **Ans:** (c) (-3, -5) + +1 +---PAGE_BREAK--- + +10. + +If the point P (6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is + +(a) 4 + +(b) 3 + +(c) 2 + +(d) 1 + +**Ans:** 1 mark be awarded to everyone + +1 + +In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark. + +11. + +In fig. 1, MN || BC and AM : MB = 1 : 2, then $\frac{ar(\Delta AMN)}{ar(\Delta ABC)} = \underline{\hspace{2cm}}$ + +Fig. 1 + +**Ans:** $\frac{1}{9}$ + +1 + +12. + +In given Fig. 2, the length PB = _______ cm. + +**Ans:** 4 + +13. 
+ +In $\triangle ABC$, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then $\angle B = \underline{\hspace{2cm}}$. + +**Ans:** 90° + +OR + +Two triangles are similar if their corresponding sides are ______. + +**Ans:** proportional + +1 + +1 + +14. + +The value of $(\tan 1^\circ \tan 2^\circ \dots \tan 89^\circ)$ is equal to ______. + +**Ans:** 1 + +15. + +In Fig. 3, the angles of depressions from the observing positions O₁ and O₂ respectively of the object A are ______, ______. + +Fig. 3 + +**Ans:** 30°, 45° + +$\frac{1}{2} + \frac{1}{2}$ +---PAGE_BREAK--- + +Q. Nos. 16 to 20 are short answer type questions of 1 mark each. + +16. If $\sin A + \sin^2 A = 1$, then find the value of the expression $(\cos^2 A + \cos^4 A)$. + +$$ +\begin{array}{l} +\text{Ans: } \sin A = 1 - \sin^2 A \\ +\qquad \sin A = \cos^2 A +\end{array} +$$ + +$$ \cos^2 A + \cos^4 A = \sin A + \sin^2 A = 1 $$ + +1/2 + +1/2 + +17. In Fig. 4 is a sector of circle of radius 10.5 cm. Find the perimeter of the sector. (Take $\pi = \frac{22}{7}$) + +Fig. 4 + +$$ +\begin{aligned} +\text{Ans: Perimeter} &= 2r + \frac{\pi r \theta}{180^\circ} \\ +&= 2 \times 10.5 + \frac{22}{7} \times 10.5 \times \frac{60^\circ}{180^\circ} \\ +&= 21 + 11 = 32 \text{ cm} +\end{aligned} +$$ + +1/2 + +1/2 + +18. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of x² < 4. + +$$ +\begin{align*} +\text{Ans: Number of Favourable outcomes} &= 3 \text{ i.e., } \{-1, 0, 1\} \quad \therefore P(x^2 < 4) = \frac{3}{7} +\end{align*} +$$ + +OR + +What is the probability that a randomly taken leap year has 52 Sundays ? + +$$ +\text{Ans: } P(52 \text{ Sundays}) = \frac{5}{7} +$$ + +1 + +19. Find the class-marks of the classes 10-25 and 35-55. + +$$ +\text{Ans: Class Marks } \frac{10+25}{2} = 17.5; \frac{35+55}{2} = 45 +$$ + +1/2+1/2 + +20. A die is thrown once. What is the probability of getting a prime number. + +$$ +\begin{array}{l} +\text{Ans: Number of prime numbers} = 3 \text{ i.e. 
; } \{2, 3, 5\} \\[1em] +P(\text{Prime Number}) = \frac{3}{6} \text{ or } \frac{1}{2} +\end{array} +$$ + +1/2 + +1/2 +---PAGE_BREAK--- + +SECTION - B + +Q. Nos. 21 to 26 carry 2 marks each + +21. A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to handover the paper. The following were the answers given by the students: + +$$2x + 3, 3x^2 + 7x + 2, 4x^3 + 3x^2 + 2, x^3 + \sqrt{3x} + 7, 7x + \sqrt{7}, 5x^3 - 7x + 2,$$ + +$$2x^2 + 3 - \frac{5}{x}, 5x - \frac{1}{2}, ax^3 + bx^2 + cx + d, x + \frac{1}{x}.$$ + +Answer the following questions : + +(i) How many of the above ten, are not polynomials ? + +(ii) How many of the above ten, are quadratic polynomials ? + +Ans: (i) 3 + +(ii) 1 + +1 + +1 + +22. In Fig. 5, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that + +$$\frac{ar(\Delta ABC)}{ar(\Delta DBC)} = \frac{AO}{DO}$$ + +Fig. 5 + +Ans: + +Draw $AX \perp BC$, $DY \perp BC$ +$\triangle AOX \sim \triangle DOY$ + +$$\frac{AX}{DY} = \frac{AO}{DO} \quad \dots (i)$$ + +$$\frac{ar(\triangle ABC)}{ar(\triangle DBC)} = \frac{\frac{1}{2} \times BC \times AX}{\frac{1}{2} \times BC \times DY}$$ + +$$\frac{AX}{DY} = \frac{AO}{DO} \text{ (From (i))}$$ + +OR + +In Fig. 6, if $AD \perp BC$, then prove that $AB^2 + CD^2 = BD^2 + AC^2$. + +Fig. 6 + +Ans: In rt $\triangle ABD$ + +$AB^2 = BD^2 + AD^2$ ... (i) + +In rt $\triangle ADC$ + +$CD^2 = AC^2 - AD^2$ ... (ii) + +Adding (i) & (ii) + +$$AB^2 + CD^2 = BD^2 + AC^2$$ + +1/2 + +1/2 + +1/2 + +1/2 + +1/2 + +1 +---PAGE_BREAK--- + +23. 
Prove that $1 + \frac{\cot^2 \alpha}{1 + \operatorname{cosec} \alpha} = \operatorname{cosec} \alpha$

$$
\begin{align*}
\text{Ans: L.H.S} &= 1 + \frac{\operatorname{cosec}^2 \alpha - 1}{1 + \operatorname{cosec} \alpha} \\
&= 1 + \frac{(\operatorname{cosec} \alpha - 1)(\operatorname{cosec} \alpha + 1)}{\operatorname{cosec} \alpha + 1} \\
&= \operatorname{cosec} \alpha = \text{R.H.S}
\end{align*}
$$

OR

Show that $\tan^4 \theta + \tan^2 \theta = \sec^4 \theta - \sec^2 \theta$

$$
\begin{align*}
\text{Ans: L.H.S} &= \tan^4 \theta + \tan^2 \theta \\
&= \tan^2 \theta (\tan^2 \theta + 1) \\
&= (\sec^2 \theta - 1)(\sec^2 \theta) = \sec^4 \theta - \sec^2 \theta = \text{R.H.S}
\end{align*}
$$

24. The volume of a right circular cylinder with its height equal to the radius is $25\frac{1}{7}$ cm³. Find the height of the cylinder. (Use $\pi = \frac{22}{7}$)

Ans: Let the height and radius of the cylinder be $x$ cm.

$$
V = \frac{176}{7} \text{ cm}^3
$$

$$
\frac{22}{7} \times x^2 \times x = \frac{176}{7}
$$

$$
x^3 = 8 \Rightarrow x = 2
$$

∴ height of cylinder = 2 cm

25. A child has a die whose six faces show the letters as shown below :

The die is thrown once. What is the probability of getting (i) A, (ii) D ?

$$
\text{Ans: (i) } P(A) = \frac{2}{6} \text{ or } \frac{1}{3} \qquad \text{(ii) } P(D) = \frac{1}{6}
$$

1+1

26. Compute the mode for the following frequency distribution :
| Size of items (in cm) | 0-4 | 4-8 | 8-12 | 12-16 | 16-20 | 20-24 | 24-28 |
|---|---|---|---|---|---|---|---|
| Frequency | 5 | 7 | 9 | 17 | 12 | 10 | 6 |
$$
\text{Ans: } l = 12 \quad f_0 = 9 \quad f_1 = 17 \quad f_2 = 12 \quad h = 4
$$

$$
\text{Mode} = l + \frac{f_1 - f_0}{2f_1 - f_0 - f_2} \times h = 12 + \frac{17-9}{34-9-12} \times 4 = 14.46 \text{ cm (Approx)}
$$

1+1/2
---PAGE_BREAK---

SECTION - C

Question numbers 27 to 34 carry 3 marks each.

27. If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $\left(\frac{y}{x} - 2\right)$

**Ans:** $2x + y = 23$, $4x - y = 19$
Solving, we get $x = 7$, $y = 9$

$5y - 2x = 31$, $\frac{y}{x} - 2 = \frac{-5}{7}$

OR

Solve for x: $\frac{1}{x+4} - \frac{1}{x+7} = \frac{11}{30}$, $x \neq -4, 7$

**Ans:** (reading the equation as $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}$, consistent with the condition $x \neq -4, 7$)

$$
\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30} \Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30}
$$

$$
\Rightarrow x^2 - 3x + 2 = 0 \Rightarrow (x-2)(x-1) = 0 \Rightarrow x = 2, 1
$$

The following solution should also be accepted:

$$
\frac{1}{x+4} - \frac{1}{x+7} = \frac{11}{30} \Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30} \Rightarrow 11x^2 + 121x + 218 = 0
$$

Here, $D = 5049$

$$ x = \frac{-121 \pm
\sqrt{5049}}{22} $$

28. The first term of an A.P. is a, the second term is b and the last term is c. Show that the sum of all the terms of the A.P. is $\frac{(a+c)(b+c-2a)}{2(b-a)}$.

**Ans:**

Here $d = b - a$

Let c be the n-th term:
$c = a + (n-1)(b-a)$

$$ n = \frac{b+c-2a}{b-a} $$

$$ S_n = \frac{n}{2}(a+c) = \frac{(b+c-2a)(a+c)}{2(b-a)} $$
---PAGE_BREAK---

OR

Solve the equation : 1 + 4 + 7 + 10 + ... + x = 287.

**Ans:** Let sum of n terms = 287

$$ \frac{n}{2} [2 \times 1 + (n-1)3] = 287 $$

1/2

$$ 3n^2 - n - 574 = 0 $$

1/2

$$ (3n + 41)(n - 14) = 0 $$

1/2

$$ n = 14 \left( \text{Reject } n = \frac{-41}{3} \right) $$

1/2

$$ x = a_{14} = 1 + 13 \times 3 = 40 $$

1

29.
In a flight of 600 km, an aircraft was slowed down due to bad weather. The average speed of the trip was reduced by 200 km/hr and the time of flight increased by 30 minutes. Find the duration of flight.

**Ans:** Let actual speed = x km/hr
A.T.Q

$$ \frac{600}{x - 200} - \frac{600}{x} = \frac{1}{2} $$

1

$$ x^2 - 200x - 240000 = 0 $$

$$ (x - 600)(x + 400) = 0 $$

$$ x = 600 \ (x = -400 \text{ rejected}) $$

1/2

$$ \text{Duration of flight} = \frac{600}{600} = 1 \text{ hr} $$

1/2

30. If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and $x + y - 10 = 0$, find the value of k.

**Ans:** P(x, y) is the mid-point of A(3, 4) and B(k, 6), so

$$ x = \frac{3+k}{2}, \quad y = \frac{4+6}{2} = 5 $$

1/2 + 1/2

$$ x + y - 10 = 0 \Rightarrow \frac{3+k}{2} + 5 - 10 = 0 $$

$$ \Rightarrow k = 7 $$

1

OR

Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).

**Ans:** B(3, 2), C(-1, 2)

1/2 + 1/2

Area = $\frac{1}{2}|1(2-2) + 3(2+4) - 1(-4-2)| = 12$ sq. units

1+1
---PAGE_BREAK---

31. In Fig. 7, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle.

Fig. 7

**Ans:** As $\triangle ABC \sim \triangle DEF$

$$ \frac{2x-1}{18} = \frac{3x}{6x} = \frac{1}{2} \Rightarrow x = 5 $$

AB = 9 cm, DE = 18 cm

BC = 12 cm, EF = 24 cm

CA = 15 cm, FD = 30 cm

1/2+1/2

32.
If a circle touches the side BC of a triangle ABC at P and extended sides AB and AC at Q and R, respectively, prove that

$$AQ = \frac{1}{2}(BC + CA + AB)$$

**Ans:**

Correct Fig

$$ \begin{aligned} AQ &= \frac{1}{2} (AQ + AQ) \\ &= \frac{1}{2} (AQ + AR) \qquad [\because AQ = AR] \\ &= \frac{1}{2} (AB + BQ + AC + CR) \\ &= \frac{1}{2} (AB + BP + AC + CP) \qquad [\because BQ = BP, \ CR = CP] \\ &= \frac{1}{2} (AB + BC + CA) \end{aligned} $$

1/2

33. If $\sin \theta + \cos \theta = \sqrt{2}$, prove that $\tan \theta + \cot \theta = 2$.

**Ans:** $\sin \theta + \cos \theta = \sqrt{2}$

Dividing by $\cos \theta$: $\tan \theta + 1 = \sqrt{2} \sec \theta$

Squaring both sides:

$$ \tan^2 \theta + 1 + 2 \tan \theta = 2\sec^2 \theta = 2(1 + \tan^2 \theta) $$

$$ 2 \tan \theta = \tan^2 \theta + 1 $$

Dividing by $\tan \theta$: $\tan \theta + \cot \theta = 2$

1

1

1

1
---PAGE_BREAK---

**34.** The area of a circular play ground is 22176 cm². Find the cost of fencing this ground at the rate of ₹50 per metre.

**Ans:** Let the radius of the playground be r cm

$$ \pi r^2 = 22176 \text{ cm}^2 $$

$$ r = 84 \text{ cm} $$

1

$$ \text{Circumference} = 2\pi r = 2 \times \frac{22}{7} \times 84 = 528 \text{ cm} $$

1

$$ \text{Cost of fencing} = \frac{50}{100} \times 528 = ₹264 $$

1

### SECTION - D

Question numbers 35 to 40 carry 4 marks each.

**35.** Prove that $\sqrt{5}$ is an irrational number.

**Ans:** Let $\sqrt{5}$ be a rational number, so that

$$ \sqrt{5} = \frac{p}{q}, \text{ where } p \text{ and } q \text{ are coprime and } q \neq 0 $$

$$ 5q^2 = p^2 \Rightarrow 5 \text{ divides } p^2 \Rightarrow 5 \text{ divides } p \text{ also. Let } p = 5a \text{ for some integer } a. $$

$$ 5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5 \text{ divides } q^2 \Rightarrow 5 \text{ divides } q \text{ also} $$

∴ 5 is a common factor of p and q, which is not possible as p and q are coprime.

Hence the assumption is wrong and $\sqrt{5}$ is an irrational number.
1

1

1

1

**36.** It takes 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately?

**Ans:** Let the time taken by the pipe of larger diameter to fill the pool be x hr
and the time taken by the pipe of smaller diameter to fill the pool be y hr.
A.T.Q

$$ \frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2} $$

1+1

Solving, we get x = 20 hr, y = 30 hr

1+1

**37.** Draw a circle of radius 2 cm with centre O and take a point P outside the circle such that OP = 6.5 cm. From P, draw two tangents to the circle.

**Ans:** Correct construction of circle of radius 2 cm
Correct construction of tangents.

1

3

OR

Construct a triangle with sides 5 cm, 6 cm and 7 cm and then construct another triangle whose sides are $\frac{3}{4}$ times the corresponding sides of the first triangle.

**Ans:** Correct construction of given triangle
Construction of similar triangle

1

3
---PAGE_BREAK---

**38.** From a point on the ground, the angles of elevation of the bottom and the top of a tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower.

**Ans:** Let height of tower = h m

In rt. $\Delta BCD$, $\tan 45^\circ = \frac{BC}{CD}$

$$
\left.
\begin{array}{l}
1 = \frac{20}{CD} \\
CD = 20 \text{ m}
\end{array}
\right\}
$$

In rt. $\Delta ACD$, $\tan 60^\circ = \frac{AC}{CD}$

$$ \sqrt{3} = \frac{20+h}{20} $$

$$ h = 20(\sqrt{3}-1) \text{ m} $$

Correct fig.: 1

1

1

1

**39.** Find the area of the shaded region in Fig. 8, if PQ = 24 cm, PR = 7 cm and O is the centre of the circle.

Fig. 8

**Ans:**

$\angle P = 90^\circ$, $RQ = \sqrt{(24)^2 + 7^2} = 25 \text{ cm}$, $r = \frac{25}{2} \text{ cm}$

$$ \left.
\begin{array}{l}
\text{Area of shaded portion} = \text{Area of semicircle} - \operatorname{ar}(\Delta PQR) \\
= \frac{1}{2} \times \frac{22}{7} \times \left(\frac{25}{2}\right)^2 - 84 \\
= 161.54 \text{ cm}^2
\end{array}
\right\} $$

1/2, 2, 1/2

OR

Find the curved surface area of the frustum of a cone, the diameters of whose circular ends are 20 m and 6 m and its height is 24 m.

**Ans:**

$R = 10 \text{ m}$, $r = 3 \text{ m}$, $h = 24 \text{ m}$

$$ l = \sqrt{(24)^2 + (10-3)^2} = 25 \text{ m} $$

$$ CSA = \pi(10 + 3) \times 25 = 325 \pi \text{ m}^2 $$

1/2+1 1/2, 1, 1+1

**40.** The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f.
| Class interval | 11 – 13 | 13 – 15 | 15 – 17 | 17 – 19 | 19 – 21 | 21 – 23 | 23 – 25 |
|---|---|---|---|---|---|---|---|
| Frequency | 3 | 6 | 9 | 13 | f | 5 | 4 |
+---PAGE_BREAK--- + +**Ans:** + +
| C.I. | f | x | xf |
|---|---|---|---|
| 11-13 | 3 | 12 | 36 |
| 13-15 | 6 | 14 | 84 |
| 15-17 | 9 | 16 | 144 |
| 17-19 | 13 | 18 | 234 |
| 19-21 | f | 20 | 20f |
| 21-23 | 5 | 22 | 110 |
| 23-25 | 4 | 24 | 96 |
| Total | 40+f |  | 704 + 20f |
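As a quick numeric cross-check (an editorial addition, not part of the marking scheme), the column totals and the missing frequency can be recomputed in a few lines of Python, using the class marks and frequencies from the table:

```python
# Q.40 cross-check: grouped-data mean is 18; the frequency of class 19-21 is unknown.
x = [12, 14, 16, 18, 20, 22, 24]          # class marks
freq_known = [3, 6, 9, 13, None, 5, 4]    # None marks the missing frequency f

sum_f_known = sum(v for v in freq_known if v is not None)        # sum of known f = 40
sum_xf_known = sum(xi * v for xi, v in zip(x, freq_known) if v)  # sum of known xf = 704

# Mean condition: 18 = (704 + 20f) / (40 + f), a linear equation in f:
# 720 + 18f = 704 + 20f  =>  f = (18*40 - 704) / (20 - 18)
f = (18 * sum_f_known - sum_xf_known) / (20 - 18)
print(sum_f_known, sum_xf_known, f)  # 40 704 8.0
```

This confirms the totals in the table and that f = 8, matching the deduction below it.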
+ +$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$ + +OR + +The following table gives production yield per hectare of wheat of 100 farms of a village : + +
| Production yield | 40-45 | 45-50 | 50-55 | 55-60 | 60-65 | 65-70 |
|---|---|---|---|---|---|---|
| No. of farms | 4 | 6 | 16 | 20 | 30 | 24 |
+ +Change the distribution to a 'more than' type distribution and draw its ogive. + +**Ans:** + +
| Production yield | Number of farms |
|---|---|
| More than or equal to 40 | 100 |
| More than or equal to 45 | 96 |
| More than or equal to 50 | 90 |
| More than or equal to 55 | 74 |
| More than or equal to 60 | 54 |
| More than or equal to 65 | 24 |
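The 'more than or equal to' series above can be reproduced programmatically as running remainders of the total; a small sketch (an editorial addition; the class lower limits and farm counts are taken from the question):

```python
# Build the 'more than or equal to' cumulative series for the ogive:
# at each lower limit, record how many farms remain at or above it.
lower_limits = [40, 45, 50, 55, 60, 65]
farms = [4, 6, 16, 20, 30, 24]  # no. of farms per class, total 100

more_than = []
remaining = sum(farms)  # 100 farms in all
for limit, count in zip(lower_limits, farms):
    more_than.append((limit, remaining))
    remaining -= count

print(more_than)  # [(40, 100), (45, 96), (50, 90), (55, 74), (60, 54), (65, 24)]
```

These pairs are exactly the points (40, 100), (45, 96), ... plotted for the ogive.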
+ +Plotting of points (40, 100) (45, 96) (50, 90) (55, 74) (60, 54) (65, 24) join to get ogive. + +2 + +2 + +2 + +2 +---PAGE_BREAK--- + +QUESTION PAPER CODE 30/2/2 +EXPECTED ANSWER/VALUE POINTS +SECTION - A + +Question numbers 1 to 10 are multiple choice questions of 1 mark each. + +You have to select the correct choice : + +Marks + +Q.No. + +1. The value of k for which the system of linear equations x + 2y = 3, 5x + ky + 7 = 0 is inconsistent is + +(a) $-\frac{14}{3}$ + +(b) $\frac{2}{5}$ + +(c) 5 + +(d) 10 + +Ans: (d) 10 + +1 + +2. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are + +(a) m, m + 3 + +(b) -m, m + 3 + +(c) m, -(m + 3) + +(d) -m, -(m + 3) + +Ans: (b) -m, m + 3 + +1 + +3. Euclid's division Lemma states that for two positive integers a and b, there exists unique integer q and r satisfying $a = bq + r$, and + +(a) $0 < r < b$ + +(b) $0 < r \leq b$ + +(c) $0 \leq r < b$ + +(d) $0 \leq r \leq b$ + +Ans: (c) $0 \leq r < b$ + +1 + +4. The sum of exponents of prime factors in the prime-factorisation of 196 is + +(a) 3 + +(b) 4 + +(c) 5 + +(d) 2 + +Ans: (b) 4 + +1 + +5. If the point P(6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is + +(a) 4 + +(b) 3 + +(c) 2 + +(d) 1 + +Ans: 1 mark be awarded to everyone + +1 + +6. The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are + +(a) (3, 5) + +(b) (3, -5) + +(c) (-3, -5) + +(d) (-3, 5) + +Ans: (c) (-3, -5) + +1 + +7. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is + +(a) (2, 0) + +(b) (0, 2) + +(c) (3, 0) + +(d) (2, 2) + +Ans: (a) (2, 0) + +1 + +8. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is + +(a) na + +(b) $(2n-1)a$ + +(c) $(2n+1)a$ + +(d) 2na + +Ans: (b) $(2n-1)a$ + +1 + +9. The common difference of the A.P. $\frac{1}{p}, \frac{1-p}{p}, \frac{1-2p}{p}, ...$ is + +(a) 1 + +(b) $\frac{1}{p}$ + +(c) -1 + +(d) $-\frac{1}{p}$ + +Ans: (c) -1 + +1 +---PAGE_BREAK--- + +10. 
The roots of the quadratic equation $x^2 - 0.04 = 0$ are

(a) ± 0.2

(b) ± 0.02

(c) 0.4

(d) 2

Ans: (a) ± 0.2

In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark.

11. In Fig. 1, the angles of depression from the observing positions O₁ and O₂ respectively of the object A are ______, ______.

Fig. 1

Ans: 30°, 45°

$\frac{1}{2} + \frac{1}{2}$

12. In Fig. 2, MN || BC and AM : MB = 1 : 2, then $\frac{\text{ar}(ΔAMN)}{\text{ar}(ΔABC)} = $ ______.

Fig. 2

Ans: $\frac{1}{9}$

13. In given Fig. 3, the length PB = ______ cm.

Fig. 3

Ans: 4

14. In ΔABC, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then ∠B = ______.

Ans: 90°

OR

Two triangles are similar if their corresponding sides are ______.

Ans: proportional

1

1

15. The value of sin 23° cos 67° + cos 23° sin 67° is ______.

Ans: 1

1
---PAGE_BREAK---

Q. Nos. 16 to 20 are short answer type questions of 1 mark each.

16. In Fig. 4 is a sector of circle of radius 10.5 cm. Find the perimeter of the sector. (Take $\pi = \frac{22}{7}$)

Fig. 4

**Ans:** Perimeter $= 2r + \frac{\pi r \theta}{180^{\circ}}$
$= 2 \times 10.5 + \frac{22}{7} \times 10.5 \times \frac{60^{\circ}}{180^{\circ}}$
$= 21 + 11 = 32 \text{ cm}$

1/2

1/2

17. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of x² < 4.

**Ans:** Number of favourable outcomes = 3, i.e., {-1, 0, 1} ∴ P(x² < 4) = $\frac{3}{7}$

1/2+1/2

OR

What is the probability that a randomly taken leap year has 52 Sundays ?

**Ans:** P(52 Sundays) = $\frac{5}{7}$

1

18. A die is thrown once. What is the probability of getting a prime number.

**Ans:** Number of prime numbers = 3, i.e., {2, 3, 5}

P(Prime Number) = $\frac{3}{6}$ or $\frac{1}{2}$

1/2

1/2

19. If tan A = cot B, then find the value of (A + B).

**Ans:** $\tan A = \tan (90^\circ - B)$
$\therefore A + B = 90^\circ$

1/2

1/2

20.
Find the class marks of the classes 15 – 35 and 45 – 60. + +**Ans:** +$$\frac{15+35}{2} = 25$$ + +$$\frac{45+60}{2} = 52.5$$ + +1/2 + +1/2 + +SECTION - B + +Q. Nos. 21 to 26 carry 2 marks each + +21. A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to handover the paper. The following were the answers given by the students: +---PAGE_BREAK--- + +$$2x+3, 3x^2+7x+2, 4x^3+3x^2+2, x^3+\sqrt{3x}+7, 7x+\sqrt{7}, 5x^3-7x+2,$$ + +$$2x^2 + 3 - \frac{5}{x}, 5x - \frac{1}{2}, ax^3 + bx^2 + cx + d, x + \frac{1}{x}.$$ + +Answer the following questions : + +(i) How many of the above ten, are not polynomials ? + +(ii) How many of the above ten, are quadratic polynomials ? + +**Ans:** (i) 3 + +(ii) 1 + +1 + +1 + +**22. Compute the mode for the following frequency distribution :** + + + + + + + + + + + + + + + + + + + + + + +
+ Size of items (in cm) + + 0 - 4 + + 4 - 8 + + 8 - 12 + + 12 - 16 + + 16 - 20 + + 20 - 24 + + 24 - 28 +
+ Frequency + + 5 + + 7 + + 9 + + 17 + + 12 + + 10 + + 6 +
+ +1/2 + +$$ +\text{Mode} = 12 + \frac{17-9}{34-9-12} \times 4 = 14.46 \text{ cm (Approx)} +$$ + +$$ +1 + \frac{1}{2} +$$ + +**23.** In Fig. 5, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that + +$$ +\frac{\text{ar}(\Delta \text{ABC})}{\text{ar}(\Delta \text{DBC})} = \frac{\text{AO}}{\text{DO}} +$$ + +Fig. 5 + +$$ +\frac{\text{AX}}{\text{DY}} = \frac{\text{AO}}{\text{DO}} \quad \dots (i) +$$ + +$$ +\frac{\text{ar}(\Delta \text{ABC})}{\text{ar}(\Delta \text{DBC})} = \frac{\frac{1}{2} \times \text{BC} \times \text{AX}}{\frac{1}{2} \times \text{BC} \times \text{DY}} +$$ + +$$ +\frac{\mathrm{AX}}{\mathrm{DY}}=\frac{\mathrm{AO}}{\mathrm{DO}} \quad (\text { From } (1)) +$$ + +OR + +In Fig. 6, if AD ⊥ BC, then prove that AB² + CD² = BD² + AC². + +Fig. 6 +---PAGE_BREAK--- + +**Ans:** In rt $\triangle$ ABD + +$AB^2 = BD^2 + AD^2$ ... (i) + +1/2 + +In rt $\triangle$ ADC + +$CD^2 = AC^2 - AD^2$ ... (ii) + +1/2 + +Adding (i) & (ii) + +$AB^2 + CD^2 = BD^2 + AC^2$ + +1 + +**24.** Prove that $1 + \frac{\cot^2 \alpha}{1 + \cos \alpha} = \cos \alpha$ + +**Ans:** L.H.S = $1 + \frac{\cos ec^2\alpha - 1}{1 + \cos ec \alpha}$ + +1/2 + +$$ +\begin{aligned} +&= 1 + \frac{(\cos ec \alpha - 1)(\cos ec \alpha + 1)}{\cos ec \alpha + 1} \\ +&= \cosec \alpha = R.H.S +\end{aligned} + $$ + +1 + +1/2 + +OR + +Show that $\tan^4\theta + \tan^2\theta = \sec^4\theta - \sec^2\theta$ + +**Ans:** L.H.S = $\tan^4\theta + \tan^2\theta$ + +$$ +\begin{aligned} +&= \tan^2\theta (\tan^2\theta + 1) \\ +&= (\sec^2\theta - 1)(\sec^2\theta) = \sec^4\theta - \sec^2\theta = R.H.S +\end{aligned} + $$ + +1/2 + +1+1/2 + +**25.** A child has a die whose six faces show the letters as shown below : + +A B C D E + +The die is thrown once. What is the probability of getting (i) A, (ii) D ? + +**Ans:** (i) P(A) = $\frac{2}{6}$ or $\frac{1}{3}$ + +(ii) P(D) = $\frac{3}{6}$ or $\frac{1}{2}$ + +1+1 + +**26.** A solid is in the shape of a cone mounted on a hemisphere of same base radius. 
If the curved surface areas of the hemispherical part and the conical part are equal, then find the ratio of the radius and the height of the conical part. + +**Ans:** CSA of conical part = CSA of hemispherical part + +$$ +\begin{aligned} +& \pi rl = 2\pi r^2 \\ +& \sqrt{r^2 + h^2} = 2r \\ +& h^2 = 3r^2 \\ +& \frac{r}{h} = \frac{1}{\sqrt{3}} \Rightarrow \text{ratio is } 1 : \sqrt{3} +\end{aligned} + $$ + +1/2 + +1/2 + +1/2 + +1/2 +---PAGE_BREAK--- + +**SECTION - C** + +**Question numbers 27 to 34 carry 3 marks each.** + +27. In Fig. 7, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle. + +Fig. 7 + +**Ans:** As $\triangle ABC \sim \triangle DEF$ + +$$ \frac{2x-1}{18} = \frac{3x}{6x} $$ + +$1$ + +$x = 5$ + +1 + +AB = 9 cm DE = 18 cm + +BC = 12 cm EF = 24 cm + +CA = 15 cm FD = 30 cm + +$$ \frac{1}{2} + \frac{1}{2} = \frac{1}{2} $$ + +28. If a circle touches the side BC of a triangle ABC at P and extended sides AB and AC at Q and R, respectively, prove that + +$$ AQ = \frac{1}{2} (BC + CA + AB) $$ + +**Ans:** + +Correct Fig + +$$ AQ = \frac{1}{2} (2AQ) $$ + +$$ \frac{1}{2} $$ + +$$ = \frac{1}{2} (AQ + AQ) $$ + +$$ = \frac{1}{2} (AQ + AR) $$ + +$$ = \frac{1}{2} (AB + BQ + AC + CR) $$ + +$$ 1 $$ + +$$ = \frac{1}{2} (AB + BC + CA) $$ + +$$ 1 $$ + +$$ \therefore [BQ = BP, CR = CP] $$ + +29. The area of a circular play ground is $22176 \text{ cm}^2$. Find the cost of fencing this ground at the rate of 50 per metre. + +**Ans:** Let the radius of playground be r cm + +$$ \pi r^2 = 22176 \text{ cm}^2 $$ + +$$ r = 84 \text{ cm} $$ + +$$ \frac{22}{7} $$ + +Circumference = $2\pi r = 2 \times \frac{22}{7} \times 84 = 528 \text{ cm}$ + +$$ 1 $$ +---PAGE_BREAK--- + +Cost of fencing = $\frac{50}{100} \times 528 = 264$ + +30. 
+
+If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $(\frac{y}{x} - 2)$
+
+**Ans:** $2x + y = 23, 4x - y = 19$
+Solving, we get $x = 7, y = 9$
+
+$5y - 2x = 31, \frac{y}{x} - 2 = \frac{-5}{7}$
+
+1
+
+1+1
+
+$\frac{1}{2}+1\frac{1}{2}$
+
+OR
+
+Solve for x: $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}$, $x \neq -4, 7$
+
+**Ans:**
+
+$$
+\begin{align*}
+\frac{1}{x+4} - \frac{1}{x-7} &= \frac{11}{30} \\
+&\Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30}
+\end{align*}
+$$
+
+$$
+\Rightarrow x^2 - 3x + 2 = 0
+$$
+
+$$
+\Rightarrow (x-2) (x-1) = 0
+$$
+
+$$
+\Rightarrow x = 2, 1
+$$
+
+The following solution, based on reading the second term as $\frac{1}{x+7}$, should also be accepted:
+
+$$
+\begin{align*}
+\frac{1}{x+4} - \frac{1}{x+7} &= \frac{11}{30} \\
+&\Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30}
+\end{align*}
+$$
+
+$$
+\Rightarrow 11x^2 + 121x + 218 = 0
+$$
+
+Here, D = 5049
+
+$$
+x = \frac{-121 \pm \sqrt{5049}}{22}
+$$
+
+$\frac{1}{2}$
+
+31.
+
+If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and $x + y - 10 = 0$, find the value of k.
+
+**Ans:** P(x, y) is the mid-point of A(3, 4) and B(k, 6)
+
+$$
+x = \frac{3+k}{2} \quad y = 5
+$$
+
+$$
+x + y - 10 = 0 \Rightarrow \frac{3+k}{2} + 5 - 10 = 0
+$$
+
+$$
+\Rightarrow k = 7
+$$
+
+OR
+
+Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).
+
+**Ans:** B(3, 2), C(-1, 2)
+
+$$
+\text{Area} = \frac{1}{2} |(1(2-2) + 3(2+4) - 1(-4-2))| = 12 \text{ sq units}
+$$
+
+$\frac{1}{2}+1\frac{1}{2}$
+
+$1+1$
+---PAGE_BREAK---
+
+32. If in an A.P., the sum of first m terms is n and the sum of its first n terms is m, then prove that the sum of its first (m + n) terms is $-(m + n)$. 
+ +**Ans:** +$S_m = n$ and $S_n = m$ + +$$2a + (m-1)d = \frac{2n}{m} \quad \dots(i) \qquad 2a + (n-1)d = \frac{2m}{n} \quad \dots(ii)$$ + +1 + +Solving (i) & (ii), $a = \frac{m^2+n^2+mn-n-m}{mn}$ & $d = \frac{-2(n-m)}{mn}$ + +1 + +$$S_{m+n} = \frac{m+n}{2} \left[ \frac{2 \times m^2 + n^2 + mn - n - m}{mn} \right] + (m+n-1) \left\{ \frac{-2(n+m)}{mn} \right\}$$ + +$$= (-1)(m+n)$$ + +1/2 +1/2 + +OR + +Find the sum of all 11 terms of an A.P. whose middle term is 30. + +**Ans:** +Middle term = $\left(\frac{11+1}{2}\right)^{\text{th}}$ term = $a_6 = 30$ + +1 + +$$S_{11} = \frac{11}{2}[2a + 10d]$$ + +$$= 11(a + 5d)$$ + +$$= 11 a_6 = 11 \times 30 = 330$$ + +1/2 +1/2 +1 + +33. A fast train takes 3 hours less than a slow train for a journey of 600 km. If the speed of the slow train is 10 km/h less than that of the fast train, find the speed of each train. + +**Ans:** +Let the speeds of fast train & slow train be x km/hr +& (x - 10) km/hr respectively. +A.T.Q. + +$$\frac{600}{x-10} - \frac{600}{x} = 3$$ + +$$x^2 - 10x - 2000 = 0$$ + +$$(x - 50)(x + 40) = 0$$ + +$x = 50$ or $-40$ + +Speed is always positive, So, $x = 50$ + +1/2 + +∴ Speed of fast train & slow train are 50 km/hr & 40 km/hr respectively. + +1/2 + +34. If $1 + \sin^2\theta = 3 \sin\theta \cos\theta$, prove that $\tan\theta = 1$ or $\frac{1}{2}$ + +**Ans:** +$$\frac{1+\sin^2\theta}{\cos^2\theta} = \frac{3\sin\theta \cdot \cos\theta}{\cos^2\theta} \text{ (Dividing both sides by } \cos^2\theta\text{)}$$ + +$$\sec^2\theta + \tan^2\theta = 3\tan\theta$$ + +$$(1 + \tan^2\theta) + \tan^2\theta = 3\tan\theta$$ + +$$2\tan^2\theta - 3\tan\theta + 1 = 0$$ + +$$(\tan\theta - 1)(2\tan\theta - 1) = 0$$ + +1/2 +1/2 +1/2 +1/2 +1/2 +---PAGE_BREAK--- + +$$ \tan \theta = 1 \text{ or } \frac{1}{2} $$ + +## SECTION - D + +**Question numbers 35 to 40 carry 4 marks each.** + +**35.** The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f. + +
Class interval11 - 1313 - 1515 - 1717 - 1919 - 2121 - 2323 - 25
Frequency36913f54
+ +**Ans:** +C.I +11-13 +13-15 +15-17 +17-19 +19-21 +21-23 +23-25 +f +3 +6 +9 +13 +f +5 +4 +x +12 +14 +16 +18 +20 +22 +24 +\underline{40+f} +xf +36 +84 +144 +234 +20f +110 +96 +\underline{704 + 20f} + +$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$ + +OR + +The following table gives production yield per hectare of wheat of 100 farms of a village : + +
Production yield40-4545-5050-5555-6060-6565-70
No. of farms4616203024
+ +Change the distribution to a 'more than' type distribution and draw its ogive. + +**Ans:** + +
Production yieldNumber of farms
More than or equal to 40100
More than or equal to 4596
More than or equal to 5090
More than or equal to 5574
More than or equal to 6054
More than or equal to 6524
+ +Plotting of points (40, 100) (45, 96) (50, 90) (55, 74) (60, 54) (65, 24) join to get ogive. + +$$ \tan \theta = 1 \text{ or } \frac{1}{2} $$ + +2 + +2 + +2 + +2 +---PAGE_BREAK--- + +**36.** Find the area of the shaded region in Fig. 8, if PQ = 24 cm, PR = 7 cm and O is the centre of the circle. + +Fig. 8 + +$$ +\begin{aligned} +\text{Ans: } \angle P = 90^\circ \text{ RQ} &= \sqrt{(24)^2 + 7^2} = 25 \text{ cm}, r = \frac{25}{2} \text{ cm} \\ +&= \frac{1}{2} \times \frac{22}{7} \times \left(\frac{25}{2}\right)^2 - 84 \\ +&= 161.54 \text{ cm}^2 +\end{aligned} +$$ + +OR + +Find the curved surface area of the frustum of a cone, the diameters of whose circular ends are 20 m and 6 m and its height is 24 m. + +$$ +\begin{array}{l} +\text{Ans: } R = 10 \text{ m} \quad r = 3 \text{ m} \quad h = 24 \text{ m} \\[1em] +l = \sqrt{(24)^2 + (10-3)^2} = 25 \text{ m} \\ +CSA = \pi(10 + 3)25 = 325 \pi \text{ m}^2 +\end{array} +$$ + +**37.** Prove that $\sqrt{5}$ is an irrational number. + +$$ +\begin{array}{l} +\text{Ans: Let } \sqrt{5} \text{ be a rational number.} \\ +\sqrt{5} = \frac{p}{q}, p \text{ & q are coprimes & } q \neq 0 \\ +5q^2 = p^2 \Rightarrow 5 \text{ divides } p^2 \Rightarrow 5 \text{ divides } p \text{ also Let } p = 5a, \text{ for some integer } a \\ +5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5 \text{ divides } q^2 \Rightarrow 5 \text{ divides } q \text{ also} \\ +\therefore 5 \text{ is a common factor of } p, q, \text{ which is not possible as } \\ +\text{p, q are coprimes.} \\ +\text{Hence assumption is wrong } \sqrt{5} \text{ is irrational no.} +\end{array} +$$ + +**38.** It can take 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately ? 
+ +$$ +\begin{array}{l} +\text{Ans: Let time taken by pipe of larger diameter to fill the tank be x hr} \\ +\text{Let time taken by pipe of smaller diameter to fill the tank be y hr} \\ +\text{A.T.Q} \\ +\\ +\displaystyle \frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2} \\ +\\ +\text{Solving we get } x = 20 \text{ hr } y = 30 \text{ hr} +\end{array} +$$ +---PAGE_BREAK--- + +**39.** Draw two tangents to a circle of radius 4 cm, which are inclined to each other at an angle of 60°. + +**Ans:** Correct construction of circle of radius 4 cm + +Correct construction of tangents + +OR + +Construct a triangle ABC with sides 3 cm, 4 cm and 5 cm. Now, construct another triangle whose sides are $\frac{4}{5}$ times the corresponding sides of ΔABC. + +**Ans:** Correct construction of triangle with sides 3 cm, 4 cm & 5 cm + +Correct construction of similar triangle + +**40.** The angle of elevation of the top of a building from the foot of a tower is 30° and the angle of elevation of the top of a tower from the foot of the building is 60°. If the tower is 50 m high, then find the height of the building. + +**Ans:** Correct figure +Let the height of building be h m + +$$ \text{In rt. } \triangle \text{BCD, } \tan 60^\circ = \frac{50}{BC} $$ + +$$ \Rightarrow BC = \frac{50}{\sqrt{3}} \quad \dots (i) $$ + +$$ \text{In rt. } \triangle \text{ABC, } \tan 30^\circ = \frac{h}{BC} $$ + +$$ \Rightarrow \quad \frac{1}{\sqrt{3}} = \frac{h}{50/\sqrt{3}} \quad (\text{from (i)}) $$ + +$$ \therefore h = \frac{50}{3} \text{ or } 16\frac{2}{3} \text{ or } 16.67 \text{ m} $$ +---PAGE_BREAK--- + +QUESTION PAPER CODE 30/2/3 +EXPECTED ANSWER/VALUE POINTS +SECTION - A + +Question numbers 1 to 10 are multiple choice questions of 1 mark each. + +You have to select the correct choice : + +Marks + +Q.No. + +1. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is + +(a) (2, 0) + +(b) (0, 2) + +(c) (3, 0) + +(d) (2, 2) + +Ans: (a) (2, 0) + +1 + +2. 
The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are + +(a) (3, 5) + +(b) (3, -5) + +(c) (-3, -5) + +(d) (-3, 5) + +Ans: (c) (-3, -5) + +1 + +3. If the point P (6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is + +(a) 4 + +(b) 3 + +(c) 2 + +(d) 1 + +Ans: 1 mark be awarded to everyone + +1 + +4. The sum of exponents of prime factors in the prime-factorisation of 196 is + +(a) 3 + +(b) 4 + +(c) 5 + +(d) 2 + +Ans: (b) 4 + +1 + +5. Euclid's division Lemma states that for two positive integers a and b, there exists unique integer q and r satisfying $a = bq + r$, and + +(a) $0 < r < b$ + +(b) $0 < r \leq b$ + +(c) $0 \leq r < b$ + +(d) $0 \leq r \leq b$ + +Ans: (c) $0 \leq r < b$ + +1 + +6. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are + +(a) m, m + 3 + +(b) -m, m + 3 + +(c) m, -(m + 3) + +(d) -m, -(m + 3) + +Ans: (b) -m, m + 3 + +1 + +7. The value of k for which the system of linear equations $x + 2y = 3$, $5x + ky + 7 = 0$ is inconsistent is + +(a) $-\frac{14}{3}$ + +(b) $\frac{2}{5}$ + +(c) 5 + +(d) 10 + +Ans: (d) 10 + +1 + +8. The roots of the quadratic equation $x^2 - 0.04 = 0$ are + +(a) $\pm 0.2$ + +(b) $\pm 0.02$ + +(c) 0.4 + +(d) 2 + +Ans: (a) $\pm 0.2$ + +1 + +9. The common difference of the A.P. $\frac{1}{p}$, $\frac{1-p}{p}$, $\frac{1-2p}{p}$, ... is + +(a) 1 + +(b) $\frac{1}{p}$ + +(c) -1 + +(d) $-\frac{1}{p}$ + +Ans: (c) -1 + +1 +---PAGE_BREAK--- + +10. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is + +(a) na + +(b) (2n - 1)a + +(c) (2n + 1) a + +(d) 2na + +**Ans:** (b) (2n - 1)a + +1 + +In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark. + +11. In Fig. 1, the angles of depressions from the observing positions O₁ and O₂ respectively of the object A are __________, _________. + +Fig. 1 + +**Ans:** 30°, 45° + +$\frac{1}{2} + \frac{1}{2}$ + +12. In $\triangle ABC$, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then $\angle B = $ ________. 
+
+**Ans:** 90°
+
+OR
+
+Two triangles are similar if their corresponding sides are ________.
+
+**Ans:** proportional
+
+1
+
+1
+
+13. In given Fig. 2, the length PB = _______ cm.
+
+Fig. 2
+
+**Ans:** 4
+
+1
+
+14. In Fig. 3, MN || BC and AM : MB = 1 : 2, then $\frac{ar(\triangle AMN)}{ar(\triangle ABC)} = $ ________.
+
+Fig. 3
+
+**Ans:** $\frac{1}{9}$
+
+1
+
+15. The value of sin 32° cos 58° + cos 32° sin 58° is ________.
+
+**Ans:** 1
+
+1
+---PAGE_BREAK---
+
+OR
+
+The value of $\frac{\tan 35^\circ}{\cot 55^\circ} + \frac{\cot 78^\circ}{\tan 12^\circ}$ is ______.
+
+**Ans:** 2
+
+1
+
+Q. Nos. 16 to 20 are short answer type questions of 1 mark each.
+
+16. A die is thrown once. What is the probability of getting a prime number ?
+
+**Ans:** Number of prime numbers = 3 i.e. {2, 3, 5}
+
+$\text{P(Prime Number)} = \frac{3}{6} \text{ or } \frac{1}{2}$
+
+1/2
+
+1/2
+
+17. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of $x^2 < 4$.
+
+**Ans:** Number of Favourable outcomes = 3 i.e., {-1, 0, 1} $\therefore P(x^2 < 4) = \frac{3}{7}$
+
+1/2+1/2
+
+OR
+
+What is the probability that a randomly taken leap year has 52 Sundays ?
+
+**Ans:** $P(52 \text{ Sundays}) = \frac{5}{7}$
+
+1
+
+18. If $\sin A + \sin^2 A = 1$, then find the value of the expression ($\cos^2 A + \cos^4 A$).
+
+**Ans:**
+$$
+\begin{cases}
+\sin A = 1 - \sin^2 A \\
+\sin A = \cos^2 A
+\end{cases}
+\text{ }
+\begin{array}{l}
+\cos^2 A + \cos^4 A = \sin A + \sin^2 A = 1
+\end{array}
+$$
+
+1/2
+
+1/2
+
+19. Find the area of the sector of a circle of radius 6 cm whose central angle is 30°.
+(Take $\pi = 3.14$)
+
+**Ans:** Area = $3.14 \times (6)^2 \times \frac{30^\circ}{360^\circ}$
+= $9.42 \text{ cm}^2$
+
+1/2
+
+1/2
+
+20. Find the class marks of the classes 20 – 50 and 35 – 60.
+
+**Ans:**
+$$ \frac{20+50}{2} = 35 $$
+
+$$ \frac{35+60}{2} = 47.5 $$
+
+1/2
+
+1/2
+
+SECTION - B
+
+Q. Nos. 21 to 26 carry 2 marks each.
+
+21. 
A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to handover the paper. The following were the answers given by the students: + +$2x + 3$, $3x^2 + 7x + 2$, $4x^3 + 3x^2 + 2$, $x^3 + \sqrt{3x} + 7$, $7x + \sqrt{7}$, $5x^3 - 7x + 2$, +$2x^2 + 3 - \frac{5}{x}$, $5x - \frac{1}{2}$, $ax^3 + bx^2 + cx + d$, $x + \frac{1}{x}$ +---PAGE_BREAK--- + +Answer the following questions : + +(i) How many of the above ten, are not polynomials ? + +(ii) How many of the above ten, are quadratic polynomials ? + +**Ans:** (i) 3 + +(ii) 1 + +1 + +1 + +22. A child has a die whose six faces show the letters as shown below : + +The die is thrown once. What is the probability of getting (i) A, (ii) D ? + +**Ans:** (i) $P(A) = \frac{2}{6}$ or $\frac{1}{3}$ + +(ii) $P(D) = \frac{1}{6}$ + +1+1 + +23. In Fig. 4, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that + +$$\frac{ar(\Delta ABC)}{ar(\Delta DBC)} = \frac{AO}{DO}$$ + +Fig. 4 + +**Ans:** + +Draw $AX \perp BC$, $DY \perp BC$ +$\Delta AOX \sim \Delta DOY$ + +$$\frac{AX}{DY} = \frac{AO}{DO} \quad \dots(i)$$ + +$$\frac{ar(\triangle ABC)}{ar(\triangle DBC)} = \frac{\frac{1}{2} \times BC \times AX}{\frac{1}{2} \times BC \times DY}$$ + +$$\frac{AX}{DY} = \frac{AO}{DO} \text{ (From (i))}$$ + +OR + +In Fig. 5, if $AD \perp BC$, then prove that $AB^2 + CD^2 = BD^2 + AC^2$. + +**Ans:** +In rt $\triangle ABD$ $AB^2 = BD^2 + AD^2$ ... (i) +In rt $\triangle ADC$ $CD^2 = AC^2 - AD^2$ ... (ii) +Adding (i) & (ii) +$$AB^2 + CD^2 = BD^2 + AC^2$$ + +1/2 + +1/2 + +1/2 + +1/2 + +1/2 + +1 +---PAGE_BREAK--- + +24. 
+
+Prove that $1 + \frac{\cot^2 \alpha}{1 + \cosec \alpha} = \cosec \alpha$
+---PAGE_BREAK---
+
+**Ans:**
+
+Correct Fig
+
+$$ \begin{aligned} \text{AQ} &= \frac{1}{2} (2\text{AQ}) \\ &= \frac{1}{2} (\text{AQ} + \text{AQ}) \\ &= \frac{1}{2} (\text{AQ} + \text{AR}) \\ &= \frac{1}{2} (\text{AB} + \text{BQ} + \text{AC} + \text{CR}) \\ &= \frac{1}{2} (\text{AB} + \text{BP} + \text{AC} + \text{CP}) \quad [\because \text{BQ} = \text{BP}, \text{CR} = \text{CP}] \\ &= \frac{1}{2} (\text{AB} + \text{BC} + \text{CA}) \end{aligned} $$
+
+1/2
+
+1/2
+
+1
+
+1
+
+28. The area of a circular playground is 22176 cm². Find the cost of fencing this ground at the rate of 50 per metre.
+
+**Ans:** Let the radius of playground be r cm
+
+$$ \begin{aligned} \pi r^2 &= 22176 \text{ cm}^2 \\ r &= 84 \text{ cm} \end{aligned} $$
+
+1
+
+Circumference = $2\pi r = 2 \times \frac{22}{7} \times 84 = 528$ cm
+
+1
+
+Cost of fencing = $\frac{50}{100} \times 528 = 264$
+
+1
+
+29. If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and x + y - 10 = 0, find the value of k.
+
+**Ans:** P(x, y) is the mid-point of A(3, 4) and B(k, 6)
+
+$$ x = \frac{3+k}{2}, \quad y=5 $$
+
+$$ x+y-10=0 \Rightarrow \frac{3+k}{2}+5-10=0 $$
+
+$$ \Rightarrow k=7 $$
+
+OR
+
+Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).
+
+**Ans:** B(3, 2), C(-1, 2)
+
+$$ \text{Area} = \frac{1}{2} |1(2-2)+3(2+4)-1(-4-2)| = 12 \text{ sq units} $$
+
+1/2+1/2
+
+1
+
+1
+
+1/2+1/2
+
+1+1
+---PAGE_BREAK---
+
+30. In Fig. 6, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle.
+
+Fig. 6
+
+**Ans:** As $\triangle ABC \sim \triangle DEF$
+
+$$ \frac{2x-1}{18} = \frac{3x}{6x} $$
+
+$$ x = 5 $$
+
+$$ AB = 9 \text{ cm} $$
+
+DE = 18 cm
+
+BC = 12 cm
+
+EF = 24 cm
+
+CA = 15 cm
+
+FD = 30 cm
+
+$\frac{1}{2}+\frac{1}{2}$
+
+31. 
If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $(\frac{y}{x} - 2)$
+
+**Ans:** $2x + y = 23$, $4x - y = 19$
+
+Solving, we get $x = 7$, $y = 9$
+
+$$ 5y - 2x = 31, \quad \frac{y}{x} - 2 = \frac{-5}{7} $$
+
+OR
+
+Solve for $x$: $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}$, $x \neq -4, 7$
+
+**Ans:**
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x-7} &= \frac{11}{30} \\ &\Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30} \\ &\Rightarrow x^2 - 3x + 2 = 0 \\ &\Rightarrow (x-2)(x-1) = 0 \\ &\Rightarrow x = 2, 1 \end{aligned} $$
+
+The following solution, based on reading the second term as $\frac{1}{x+7}$, should also be accepted:
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x+7} &= \frac{11}{30} \\ &\Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30} \\ &\Rightarrow 11x^2 + 121x + 218 = 0 \end{aligned} $$
+
+Here, D = 5049
+
+$$ x = \frac{-121 \pm \sqrt{5049}}{22} $$
+
+$1\frac{1}{2}$
+
+$\frac{1}{2}$
+---PAGE_BREAK---
+
+**32.** Which term of the A.P. 20, 19$\frac{1}{4}$, 18$\frac{1}{2}$, 17$\frac{3}{4}$, ... is the first negative term ?
+
+$$ \text{Ans: } a = 20 \text{ & } d = 19\frac{1}{4} - 20 = -\frac{3}{4} $$
+
+$$ a_n < 0 $$
+
+$$ 20 + (n-1)\left(-\frac{3}{4}\right) < 0 $$
+
+$$ n > 27\frac{2}{3} $$
+
+∴ 28th term of the given A. P. is first negative term
+
+OR
+
+Find the middle term of the A.P. 7, 13, 19, ..., 247.
+
+$$ \text{Ans: } a = 7 \text{ & } d = 13 - 7 = 6 $$
+
+$$ 247 = 7 + (n - 1)6 $$
+
+$$ n = 41 $$
+
+$$ \text{Middle term} = \left(\frac{41+1}{2}\right)^{\text{th}} = 21^{\text{st}} \text{ term.} $$
+
+$$ a_{21} = 7 + 20 \times 6 = 127 $$
+
+**33.** Water in a canal, 6 m wide and 1.5 m deep, is flowing with a speed of 10 km/h.
+How much area will it irrigate in 30 minutes, if 8 cm standing water is
+required ? 
+ +$$ \text{Ans: Volume of water in canal in 1 hr} = 10000 \times 6 \times 1.5 = 90000 \text{ m}^3 $$ + +$$ \text{Volume of water in canal in 30 mins} = \frac{1}{2} \times 90000 = 45000 \text{ m}^3 $$ + +$$ \begin{aligned} \text{Area} &= \frac{45000}{8/100} \\ &= 562500 \text{ m}^2 \end{aligned} $$ + +**34.** Show that : + +$$ \frac{\cos^2(45^\circ + \theta) + \cos^2(45^\circ - \theta)}{\tan(60^\circ + \theta) \tan(30^\circ - \theta)} = 1 $$ + +$$ \text{Ans: L.H.S} = \frac{\cos^2(45^\circ + \theta) + \sin^2(90^\circ - 45^\circ + \theta)}{\tan(60^\circ + \theta) \cdot \cot(90^\circ - 30^\circ + \theta)} $$ + +$$ = \frac{\cos^2(45^\circ + \theta) + \sin^2(45^\circ + \theta)}{\tan(60^\circ + \theta) \cdot \cot(60^\circ + \theta)} $$ + +$$ = \frac{1}{1} = 1 = R.H.S $$ +---PAGE_BREAK--- + +SECTION - D + +Question numbers 35 to 40 carry 4 marks each. + +35. The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f. + +
Class interval11 - 1313 - 1515 - 1717 - 1919 - 2121 - 2323 - 25
Frequency36913f54
+ +**Ans:** + +C.I + +f + +x + +xf + +11-13 + +3 + +12 + +36 + +13-15 + +6 + +14 + +84 + +15-17 + +9 + +16 + +144 + +17-19 + +13 + +18 + +234 + +19-21 + +f + +20 + +20f + +21-23 + +5 + +22 + +110 + +23-25 + +$\frac{4}{40+f}$ + +24 + +96 +--- +$704 + 20f$ + +$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$ + +OR + +The following table gives production yield per hectare of wheat of 100 farms of a village : + +
Production yield40-4545-5050-5555-6060-6565-70
No. of farms4616203024
+ +Change the distribution to a 'more than' type distribution and draw its ogive. + +**Ans:** + +
Production yieldNumber of farms
More than or equal to 40100
More than or equal to 4596
More than or equal to 5090
More than or equal to 5574
More than or equal to 6054
More than or equal to 6524
+ +Plotting of points (40, 100) (45, 96) (50, 90) (55, 74) (60, 54) (65, 24) join to get ogive. + +2 + +2 + +36. From a point on the ground, the angles of elevation of the bottom and the top of a tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower. + +**Ans:** Let height of tower = h m +---PAGE_BREAK--- + +In rt. $\triangle BCD \tan 45° = \frac{BC}{CD}$ + +$$ +\left. +\begin{array}{l} +1 = \frac{20}{CD} \\ +CD = 20 \text{ m} +\end{array} +\right\} +$$ + +In rt. $\triangle ACD \tan 60° = \frac{AC}{CD}$ + +$$ +\sqrt{3} = \frac{20 + h}{20} +$$ + +$$ +h = 20(\sqrt{3}-1)m +$$ + +corr fig. 1 + +1 + +1 + +1 + +1 + +37. It can take 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately ? + +Ans: Let time taken by pipe of larger diameter to fill the tank be x hr +Let time taken by pipe of smaller diameter to fill the tank be y hr + +A.T.Q + +$$ +\frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2} +$$ + +Solving we get x = 20 hr y = 30 hr + +1+1 + +1+1 + +38. Prove that $\sqrt{5}$ is an irrational number. + +Ans: Let $\sqrt{5}$ be a rational number. + +$$ +\sqrt{5} = \frac{p}{q}, p \text{ & q are coprimes & } q \neq 0 +$$ + +1 + +$5q^2 = p^2 \Rightarrow 5$ divides $p^2 \Rightarrow 5$ divides $p$ also Let $p = 5a$, for some integer $a$ + +1 + +$5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5$ divides $q^2 \Rightarrow 5$ divides $q$ also + +1 + +∴ 5 is a common factor of p, q, which is not possible as p, q are coprimes. + +Hence assumption is wrong $\sqrt{5}$ is irrational no. + +1 + +39. Draw a circle of radius 3.5 cm. From a point P, 6 cm from its centre, draw two tangents to the circle. + +Ans: Correct construction of circle of radius 3.5 cm + +Correct construction of tangents. 
+ +OR + +Construct a $\triangle ABC$ with AB = 6 cm, BC = 5 cm and $\angle B = 60°$. + +Now construct another triangle whose sides are $\frac{2}{3}$ times the corresponding sides of $\triangle ABC$. +---PAGE_BREAK--- + +**Ans:** Correct construction of given triangle +Construction of Similar triangle + +1 + +3 + +40. A solid is in the shape of a hemisphere surmounted by a cone. If the radius of hemisphere and base radius of cone is 7 cm and height of cone is 3.5 cm, find the volume of the solid. + +$$ \left(\text{Take } \pi = \frac{22}{7}\right) $$ + +**Ans:** + +$$ +\begin{aligned} +& \text{Volume of solid} = \frac{1}{3} \times \frac{22}{7} \times (7)^2 \times 3.5 + \frac{2}{3} \times \frac{22}{7} \times (7)^3 \\ +&= \frac{22}{7} \times (7)^2 \times \left[ \frac{3.5}{3} + \frac{2}{3} \times 7 \right] \\ +&= 898\frac{1}{3} \text{ or } 898.33 \text{ cm}^3 +\end{aligned} +$$ + +2 + +1 + +1 \ No newline at end of file diff --git a/samples/texts_merged/692782.md b/samples/texts_merged/692782.md new file mode 100644 index 0000000000000000000000000000000000000000..8e000c825bf9e5ec3ce858af1de2617be81121ad --- /dev/null +++ b/samples/texts_merged/692782.md @@ -0,0 +1,220 @@ + +---PAGE_BREAK--- + +# Propagation with time-dependent Hamiltonian + +Gang Huang¹ + +¹Johannes Gutenberg University of Mainz + +July 16, 2020 + +## Abstract + +In this note, we introduce one basic concept in nonlinear optical spectroscopy: time-dependent Hamiltonian. Then we give one example of application of the time evolution operator. + +APS/123-QED + +Institute for Physics, Johannes Gutenberg University, Mainz, Germany gang@uni-mainz.de + +In optical spectroscopy, the choice we face is: (1) working with a time-independent Hamiltonian in a larger phase space that includes the matter and the radiation field (Shaul Mukamel, 1995); (2) using a time-dependent Hamiltonian in a smaller phase space of the matter alone. 
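As a small numerical illustration of the second choice — propagating a state with a given two-level Hamiltonian matrix — here is a sketch using NumPy. The energies `eps_a`, `eps_b` and the real coupling `V` are arbitrary test values (with ħ = 1), not quantities from the text; the evolution operator is built from the eigenbasis, exactly as is done analytically in the rest of this note, and the transition probability is checked against the standard Rabi formula:

```python
import numpy as np

# Arbitrary test values (hbar = 1): level energies and a real coupling
eps_a, eps_b, V = 1.0, 0.3, 0.25
H = np.array([[eps_a, V], [V, eps_b]], dtype=complex)

# U(t, t0=0) = sum_l |f_l> exp(-i E_l t) <f_l|, built from the eigenbasis of H
vals, vecs = np.linalg.eigh(H)
t = 2.7
U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

# Probability of ending in |phi_b> after starting in |phi_a>
P_ba = abs(U[1, 0]) ** 2

# The standard Rabi formula gives the same number
Omega = np.sqrt((eps_a - eps_b) ** 2 + 4 * V ** 2)
rabi = (4 * V ** 2 / Omega ** 2) * np.sin(Omega * t / 2) ** 2
assert np.isclose(P_ba, rabi)
```

Diagonalizing once and exponentiating the eigenvalues is exactly the spectral construction of the evolution operator used below; for a time-dependent Hamiltonian one would instead compose many such short-time propagators.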
+
+For any vector $|\psi\rangle$ in Hilbert space, its dynamical equation is the time-dependent Schrödinger equation:
+
+$$i\hbar \frac{\partial |\psi(t)\rangle}{\partial t} = \mathbf{H} |\psi(t)\rangle. \quad (1)$$
+
+Since
+
+$$|\psi(t)\rangle = \sum_l |f_l\rangle \langle f_l|\psi(t)\rangle, \quad (2)$$
+
+and
+
+$$\mathbf{H}|f_l\rangle = E_l|f_l\rangle, \quad (3)$$
+
+we have
+
+$$i\hbar \frac{\partial}{\partial t} \langle f_l |\psi(t)\rangle = E_l \langle f_l |\psi(t)\rangle,$$
+
+which is
+
+$$i\hbar \frac{\partial}{\partial t} c_l = E_l c_l,$$
+
+or
+
+$$\mathbf{H}\mathbf{c} = \mathbf{E}\mathbf{c}. \quad (4)$$
+
+We obtain the wave function at time $t$:
+
+$$\langle f_l | \psi(t) \rangle = e^{-\frac{i E_l (t-t_0)}{\hbar}} \langle f_l | \psi(t_0) \rangle, \quad (5)$$
+---PAGE_BREAK---
+
+where the $\langle f_l | \psi(t_0) \rangle$ are the initial expansion coefficients of the wavefunction. We then have
+
+$$ |\psi(t)\rangle = \sum_l e^{-\frac{iE_l(t-t_0)}{\hbar}} |f_l\rangle \langle f_l|\psi(t_0)\rangle. \quad (6) $$
+
+Therefore, the evolution operator $U(t, t_0)$ can be defined as:
+
+$$ |\psi(t)\rangle \equiv U(t, t_0)|\psi(t_0)\rangle, $$
+
+or
+
+$$ U(t, t_0) = \sum_l |f_l\rangle e^{-\frac{iE_l(t-t_0)}{\hbar}} \langle f_l|. \quad (7) $$
+
+It immediately follows that
+
+$$ U(t_0, t_0) = 1. \quad (8) $$
+
+Eq. 7 gives the evolution operator in a specific representation, i.e., the eigenstates of the Hamiltonian **H**.
+
+Here is one example of application of the time evolution operator. Calculate the time evolution operator of a coupled 2-level system ($|\psi_a\rangle$ and $|\psi_b\rangle$) with energies $\epsilon_a$, $\epsilon_b$, and a coupling $V_{ab}$, represented by the Hamiltonian
+
+$$ \begin{bmatrix} \epsilon_a & V_{ab} \\ V_{ba} & \epsilon_b \end{bmatrix}. $$
+
+Solution: Denote
+
+$$ V_{ab} = V_{ba}^* = |V_{ab}|e^{-i\chi} \ (0 < \chi < \pi/2). 
\quad (9) $$
+
+Denote $\lambda$ as the energy eigenvalue. Solving the secular equation
+
+$$ (\epsilon_a - \lambda)(\epsilon_b - \lambda) - |V_{ab}|^2 = 0, \quad (10) $$
+
+we get the energy eigenvalues: $\lambda_{\pm} = \frac{(\epsilon_a + \epsilon_b) \pm \sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}}{2}$. Then the eigenstates can be calculated.
+For $\lambda = \lambda_-$,
+
+$$ (\epsilon_b - \lambda_-)b = -|V_{ab}|e^{i\chi}a, \quad (11) $$
+---PAGE_BREAK---
+
+i.e.,
+
+$$
+\begin{align*}
+\frac{b}{a} &= \frac{-|V_{ab}|e^{i\chi}}{\epsilon_b - \lambda_{-}} \\
+&= \frac{-2|V_{ab}|e^{i\chi}}{(\epsilon_b - \epsilon_a) + \sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}} \\
+&= \frac{-2|V_{ab}|e^{i\chi}/(\epsilon_a - \epsilon_b)}{-1 + \sqrt{1 + \frac{4|V_{ab}|^2}{(\epsilon_a - \epsilon_b)^2}}} \\
+&= \frac{-\tan 2\theta}{-1 + \sec 2\theta} e^{i\chi} \\
+&= -\frac{\cos\theta}{\sin\theta} e^{i\chi},
+\end{align*}
+$$
+
+where we have set
+
+$$
+\tan 2\theta \equiv \frac{2|V_{ab}|}{\epsilon_a - \epsilon_b}, \quad 0 < \theta < \frac{\pi}{2}. \tag{12}
+$$
+
+Therefore,
+
+$$
+|\psi_-\rangle = \left[ \begin{array}{c} -\sin\theta e^{-i\chi/2} \\ \cos\theta e^{i\chi/2} \end{array} \right]. \qquad (13)
+$$
+
+Similarly, replacing $\lambda_-$ by $\lambda_+$, we obtain
+
+$$
+|\psi_+\rangle = \left[ \begin{array}{c} \cos\theta e^{-i\chi/2} \\ \sin\theta e^{i\chi/2} \end{array} \right]. \qquad (14)
+$$
+
+Thus, from eq. 7, the time evolution operator is
+
+$$
+U(t, t_0) = |\psi_+\rangle\langle\psi_+|e^{-\frac{i}{\hbar}\lambda_+(t-t_0)} + |\psi_-\rangle\langle\psi_-|e^{-\frac{i}{\hbar}\lambda_-(t-t_0)}. \quad (15)
+$$
+
+Using eqs. 
(13) and (14), we obtain the expression of $U(t, t_0)$:
+
+$$
+U(t, t_0) =
+\begin{bmatrix} \cos^2\theta & \cos\theta\sin\theta e^{-i\chi} \\ \cos\theta\sin\theta e^{i\chi} & \sin^2\theta \end{bmatrix} e^{-\frac{i}{\hbar}\lambda_{+}(t-t_0)} +
+\begin{bmatrix} \sin^2\theta & -\cos\theta\sin\theta e^{-i\chi} \\ -\cos\theta\sin\theta e^{i\chi} & \cos^2\theta \end{bmatrix} e^{-\frac{i}{\hbar}\lambda_{-}(t-t_0)}.
+\tag{16}
+$$
+
+Discussion: suppose the system is initially (at time $t_0 = 0$) in the $|\phi_a\rangle$ state, i.e., $|\psi(0)\rangle = |\phi_a\rangle$. We can calculate the probability of the system to be found in the $|\phi_b\rangle$ state at time $t$:
+---PAGE_BREAK---
+
+$$
+\begin{align}
+P_{ba}(t) &= |\langle \phi_b | \psi(t) \rangle|^2 \tag{17} \\
+&= |\langle \phi_b | U(t, t_0) | \phi_a \rangle|^2. \tag{18}
+\end{align}
+$$
+
+Since
+
+$$
+\langle \phi_b | U(t, t_0) | \phi_a \rangle =
+\begin{bmatrix} 0 & 1 \end{bmatrix}
+\begin{bmatrix} U_{aa}(t) & U_{ab}(t) \\ U_{ba}(t) & U_{bb}(t) \end{bmatrix}
+\begin{bmatrix} 1 \\ 0 \end{bmatrix}
+$$
+
+$$
+\begin{align*}
+&= U_{ba}(t) \\
+&= \sin\theta\cos\theta\, e^{i\chi} e^{-\frac{i}{\hbar}\lambda_{+}(t-t_0)} - \sin\theta\cos\theta\, e^{i\chi} e^{-\frac{i}{\hbar}\lambda_{-}(t-t_0)} \\
+&= \frac{1}{2}\sin 2\theta\, e^{i\chi} e^{-i\alpha}\left(e^{-i\beta} - e^{i\beta}\right) \\
+&= -i \sin 2\theta\, e^{i(\chi - \alpha)} \sin\beta,
+\end{align*}
+$$
+
+where we have used $\frac{\lambda_{\pm}(t-t_0)}{\hbar} = \alpha \pm \beta$ and defined
+
+$$
+\alpha = \frac{(\epsilon_a + \epsilon_b)(t - t_0)}{2\hbar}, \beta = \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}(t - t_0)}{2\hbar}. 
+$$
+
+So
+
+$$
+\begin{align}
+|\langle \phi_b | U(t, t_0) | \phi_a \rangle|^2 &= \sin^2 2\theta \sin^2 \beta \nonumber \\
+&= \frac{4|V_{ab}|^2}{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2} \sin^2 \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}(t-t_0)}{2\hbar}. \tag{15}
+\end{align}
+$$
+
+This is known as the Rabi formula, and
+
+$$
+\Omega_R \equiv \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}}{\hbar} \qquad (16)
+$$
+---PAGE_BREAK---
+
+is known as the Rabi frequency. For example, in the case of alkali atoms, the order of magnitude of the Rabi frequency is MHz. Assuming that $(\epsilon_a - \epsilon_b)^2$ and $4|V_{ab}|^2$ have the same order of magnitude, this means
+
+$$ \Omega_R = \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}}{\hbar} \sim \frac{2|V_{ab}|}{\hbar} \approx 10^6 \ \mathrm{s}^{-1}. $$
+
+---PAGE_BREAK---
+
+# A Systolic Design Methodology with Application to Full-Search Block-Matching Architectures
+
+YEN-KUANG CHEN AND S.Y. KUNG
+
+Princeton University
+
+Received May 21, 1997; Revised November 5, 1997
+
+**Abstract.** We present a systematic methodology to support the design tradeoffs of array processors for several emerging issues, such as (1) high performance and high flexibility, (2) low cost and low power, (3) efficient memory usage, and (4) system-on-a-chip or the ease of system integration. This methodology is algebra-based, so it can cope with high-dimensional data dependence. The methodology consists of transformation rules of data dependency graphs that facilitate flexible array designs. 
For example, two common partitioning approaches, LPGS and LSGP, can be unified under the methodology. It supports the design of high-speed and massively parallel processor arrays with efficient memory usage. More specifically, it leads to a novel *systolic cache* architecture comprising shift registers only (a cache without tags). To demonstrate how the methodology works, we present several systolic design examples based on the block-matching motion estimation algorithm (BMA). By multiprojecting a 4D DG of the BMA to a 2D mesh, we can reconstruct several existing array processors. By multiprojecting a 6D DG of the BMA, a novel 2D systolic array can be derived that features significantly improved rates in data reusability (96%) and processor utilization (99%).
+
+## 1. Introduction
+
+The rapid progress in VLSI technology will soon put more than 100 million transistors on a chip, implying tremendous computation power for many applications, e.g., real-time multimedia processing. Many important design issues emerge in the hardware design for these applications:
+
+1. High performance and high flexibility
+
+2. Low cost, low power, and efficient memory usage
+
+3. System-on-a-chip or the ease of system integration
+
+4. Fast design turn-around
+
+The challenge is that many of these design issues conflict with one another.
+
+In addressing these critical issues, we present a systematic methodology to support the design of a broad scope of array processors, which allows us to design and evaluate diverse designs easily and quickly. This algebraic methodology can handle algorithms with high-dimensional data dependency; it exploits a high degree of data reusability and can thus produce high-performance processor arrays with high efficiency in memory usage.
+
+In this paper, we focus on the block-matching motion estimation algorithm (BMA) [6] as an example. 
The basic idea of the BMA is to locate a displaced block, which is most similar to the current block, within the search area in the previous frame, as shown in Fig. 1. Various criteria have been presented for the BMA. The most popular one is to find the least sum of absolute differences (SAD):
+
+$$ \text{Motion Vector} = \arg \min_{[u,v]} \{SAD[u, v]\} $$
+
+$$ SAD[u, v] = \sum_{i=1}^{n} \sum_{j=1}^{n} \left| s[i+u, j+v] - r[i, j] \right| $$
+
+$$ -p \leq u \leq p, -p \leq v \leq p $$
+
+where *n* is the block width and height, *p* is the absolute value of the maximum possible vertical/horizontal motion, *r*[i,j] is the pixel intensity (luminance value) in the current block at (i, j), s[i+u, j+v] is the pixel intensity in the search area in the previous frame, and (u, v) represents the candidate displacement vector.
+---PAGE_BREAK---
+
+Fig. 1. In the process of the block-matching motion estimation algorithm, the current frame is divided into a number of non-overlapping current blocks, which are *n* pixels × *n* pixels. Each of the current blocks will be compared with (2*p* + 1) × (2*p* + 1) different displaced blocks in the search area of the previous frame.
+
+The BMA is extremely computationally intensive in current video coding [7, 15]. For example, a SAD for a block of 16 × 16 pixels requires 512 additions. For the search range {−32, ..., +32} × {−32, ..., +32}, there are 4225 SADs, and hence 2.16 × 10⁶ additions per current block. For a video with 720 pixels × 480 pixels × 30 frames per second, 88 × 10⁹ additions per second would be required for real-time MPEG-1 video coding. In order to tackle such a computationally demanding problem in real time, putting massively parallel processing elements (PEs) together as a computing engine, like a systolic array, is often mandatory.
+
+Such fully utilized processing power consumes a tremendous amount of data. In this example, each pixel in the previous frame will be revisited thousands of times. 
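To make the data-access pattern concrete, the exhaustive search defined above can be sketched in a few lines of NumPy. This is an illustrative scalar reference (the function and variable names are ours, not from the paper), not one of the systolic designs discussed here:

```python
import numpy as np

def full_search(cur_block, search_area, p):
    """Exhaustive BMA: return the (u, v) minimizing SAD[u, v].

    cur_block: n x n current block r[i, j].
    search_area: (n + 2p) x (n + 2p) window of the previous frame,
    indexed so that candidate (u, v) starts at row u + p, column v + p.
    """
    n = cur_block.shape[0]
    best_uv, best_sad = None, np.inf
    for u in range(-p, p + 1):          # (2p + 1)^2 candidate blocks
        for v in range(-p, p + 1):
            cand = search_area[u + p:u + p + n, v + p:v + p + n]
            # sum of absolute differences over the n x n block
            sad = np.abs(cand.astype(np.int64)
                         - cur_block.astype(np.int64)).sum()
            if sad < best_sad:
                best_uv, best_sad = (u, v), sad
    return best_uv
```

Note how each candidate window reuses almost all of its neighbor's pixels; this is precisely the data reusability that the array designs below try to exploit in hardware.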
If each visit involves a memory fetch, it would imply an extremely short memory read cycle time (32 ps) for real-time motion estimation of CCIR 601 pictures. So far, state-of-the-art memories fall far short of such a demand. In order to make the data flow keep up with the processing power, memory access localities must be exploited. In particular, data reusability plays a critical role in the systolic design of many important applications.
+
+In order to find a good tradeoff point between several conflicting design goals, a systematic and comprehensive design methodology must be used. Since most multimedia signal processing algorithms share the following features — localized operations, intensive computation, and matrix operations — high-level mapping methodologies prove very efficient. (For the reader's convenience, the Appendices review the basic systolic design notations and methodology.)
+
+## 1.1. Previous Approaches for Systolic BMA Design
+
+Because the BMA for a single current block is a 4-dimensional algorithm (as shown in Appendix A.1), it is impossible to get a 2D or 1D system implementation by one projection. Conventionally, the BMA is decomposed into subparts, which (1) are individually defined over index spaces with dimensions less than or equal to three and (2) are suitable for the canonical projection. The functional decomposition method simplifies the multi-dimensional time scheduling and projection problem [5, 10, 16, 20]. For example, one such decomposition is to take *u* out first and consider it later, as follows:
+
+$$
+\begin{equation}
+\begin{aligned}
+SAD[v] = & \sum_{i=1}^{n} \sum_{j=1}^{n} |s[i, j + v] - r[i, j]| \\
+& - p \le v \le p
+\end{aligned}
+\end{equation}
+$$
+
+As a result, we can get several existing DGs, as shown in Fig. 2.
+
+Many arrays in [10, 16] can be derived by canonically projecting the 3D DG shown in Fig. 2. 
However, most of these designs require a huge amount of memory bandwidth. For example, the design shown in Fig. 3(a) can be derived by projecting the DG in Fig. 2 along the *v*-direction. This design needs 16 bytes of data per cycle. Without sufficient memory bandwidth, the PEs are idle most of the time. Hence, most of these designs are not practical.
+
+Another method (called *index fixing*) fixes one loop index at a time, repeatedly. When two or fewer loop indices remain, the remaining algorithm can easily be transformed into a systolic design [4, 5, 10, 16]. For example, the design in Fig. 3(a) can also be derived by fixing the indices *u* and *v* of the 4-dimensional DG.
+---PAGE_BREAK---
+
+Fig. 2. Two 3D DG examples of the BMA [2, 10, 16].
+
+Fig. 3. Previous array design examples. (a) Projected without buffers. (b) Projected with buffers [8].
+
+A breakthrough design that greatly reduces the I/O bandwidth by exploiting *data reusability* is shown in [8] (cf. Fig. 3(b)). It carries some extra buffers. The advantage of this design is that the data are input serially, so the I/O demand is greatly reduced: the amount of input data per operation is only 1 byte. Furthermore, shift registers are used instead of random access memories, so the control is easier, the buffer area is smaller, and the data access rate is higher. Moreover, because the search windows of the current blocks overlap each other, a simple FIFO (based on this design) was proposed to capture more data reusability and thus further reduce the I/O bandwidth [14].
+
+However, the design shown in Fig. 3(b) is often criticized for inefficiency caused by unnecessary computations. The inefficiency comes from the following problem: in order to have only one I/O port for the whole array, the data running through the whole array must be unified. 
Hence, in this design, some processors may receive useless data and perform unnecessary computations (or no real computation at all) [1, 8]. The utilization rate is $\frac{(2p + 1)^2}{(n + 2p)^2}$.
+---PAGE_BREAK---
+
+A later 2D array design avoids running some unnecessary data through every PE by inputting the data from two memory ports [1]. It not only needs low I/O bandwidth but can also achieve high computational power.
+
+A snapshot transformation (called *slice and tile*) is employed to produce different forms of DGs [2]. It reduces the DG by one dimension; for example, an original 3D BMA becomes a 2D DG. After that, canonical single-projection approaches can be used. This technique can re-derive most of the existing architectures graphically. However, the memory organization must be designed with careful bookkeeping of the interfaces between subparts.
+
+## 1.2. Overview of this Work
+
+In this paper, we present a systematic methodology, multiprojection, to support the design of a broad scope of array processors. Many previous approaches, such as *functional decomposition*, *index fixing*, and *slice and tile*, can be regarded as its special cases.
+
+We also propose several useful rules essential for the implementation of multiprojection. For instance, by applying LPGS (locally parallel globally sequential) or LSGP (locally sequential globally parallel) during the multiprojection, the design can enjoy expandability without compromising data reusability. Other rules for reducing the number of buffers are also provided. These rules may be adopted to improve computational power and flexibility and to reduce I/O requirements and control overhead.
+
+We shall demonstrate how multiprojection can achieve this goal, based on a systolic design example of the BMA. 
Our methodology is applied to design (1) massively parallel systolic architectures and (2) fast *systolic cache* architectures for the MPEG application.
+
+## 2. Multiprojection Methodology for Optimal Systolic Design
+
+Conventional single projection can only map an $n$-dimensional DG directly onto an $(n-1)$-dimensional SFG. However, due to current VLSI technology constraints, it is hard to implement a 3D or 4D systolic array. In order to map an $n$-dimensional DG directly onto an $(n-k)$-dimensional SFG without DG decomposition, a multi-dimensional projection method has been introduced [11, 17, 18, 24].
+
+The projection method, which maps an $n$-dimensional DG to an $(n-1)$-dimensional SFG, can be applied $k$ times and thus reduces the dimension of the array to $n-k$. More elaborately, a similar projection method can be used to map an $(n-1)$-dimensional SFG into an $(n-2)$-dimensional SFG, and so on. This scheme is called *multiprojection*.
+
+*Functional decomposition*, *index fixing*, and *slice and tile* are special cases of multiprojection. Multiprojection can not only reproduce the DGs and SFGs obtained from functional decomposition but can also obtain other 3D DGs, 2D SFGs, and other designs that are difficult to obtain by other methods.
+
+Multiprojection is introduced here to design array processors that satisfy most of the following design criteria: (1) increased computational power, (2) reduced I/O requirements, (3) reduced control overhead, and (4) expandability. For example, a localized recursive algorithm for block matching has been derived so that the original 6D BMA is transformed into a 3D algorithm [22]. (We will see why the BMA is 6-dimensional later in Section 2.1 and Section 4.3.) After that, it is developed into two designs: a 1D systolic array and a 2D semi-systolic array. Both arrays are reported to achieve an almost 100% utilization rate. 
Nevertheless, since the original 6D algorithm is folded into 3D, the designs have more constraints. The former requires a massive number of I/O ports. The latter is only useful when the size of the current block is equal to twice the search range ($n = 2p$), and it requires a massive amount of data broadcasting.
+
+## 2.1. High Dimensional Algorithm
+
+Before we jump into the discussion of multiprojection, it is advisable to introduce the concept of high-dimensional algorithms first. An algorithm is said to be $n$-dimensional if it naturally contains $n$ nested recursive loops. For example, a block-matching algorithm for the whole frame is 6-dimensional, as shown in Fig. 4(a). The indices $x, y, u, v, i, j$ make the algorithm 6D.
+
+It is very important to respect the *read-after-read* data dependency. If a datum could be read time after time by hundreds of operations and those operations are scheduled close together, then a small cache can eliminate a large number of external memory accesses.
+---PAGE_BREAK---
+
+Fig. 4. (a) The 6D BMA, where $N_v$ is the number of current blocks in the vertical direction, $N_h$ is the number of current blocks in the horizontal direction, $n$ is the block size, and $p$ is the search range. The indices $x, y, u, v, i, j$ make the algorithm 6D. The inner four loops are exactly those shown in Fig. 22. (b) A 3D BMA that folds two loops in (a) into one loop. (c) On the other hand, a 7D BMA (with indices $x, y, u, v, i, j_1, j_2$) can be constructed by splitting the innermost loop index $j$ of the original algorithm into two indices $j_1$ and $j_2$.
+
+Since $s[x*n+i+u, y*n+j+v]$ will be read time after time for different $x, y, u, v, i, j$ combinations, this algorithm is 6D.
+
+On the other hand, if we ignore the read-after-read data dependency, the DG has only a two-dimensional read-after-write dependency based on the variable SAD. 
Although the DG becomes lower-dimensional, it is then harder to track data reusability and reduce the number of memory accesses.
+---PAGE_BREAK---
+
+*Transformation to Lower Dimension.* As shown in Fig. 4(b), two loops are folded into one loop to reduce the dimensionality of the algorithm [22].
+
+The DG becomes 3-dimensional because there are only 3 loop indices. Fewer projections are then needed in multiprojection, and the scheduling is easier to optimize. However, in this modified algorithm, the operation regarding $(u, v+1)$ must be executed directly after the operation regarding $(u, v)$. This makes the algorithm less flexible: efficient, expandable, and low-I/O designs are harder to achieve. Besides, folding the 6D DG makes it benefit less from some useful graph transformations, as shown in Section 3.
+
+*Transformation to Higher Dimension.* We can also construct artificial indices to turn a lower-dimensional DG into a higher-dimensional one. For example, the innermost loop of the original algorithm could be modified as shown in Fig. 4(c).
+
+The indices $x, y, u, v, i, j_1, j_2$ make this algorithm 7-dimensional. This approach is not generally recommended, because more multiprojection steps are then needed to reach a low-dimensional design. However, this method provides the option of executing in the order $j = \{1, N/2 + 1, 2, N/2 + 2, ...\}$ instead of $j = \{1, 2, ..., N/2, N/2 + 1, ...\}$ (simply exchanging the order of the $j_1$ loop and the $j_2$ loop). As we will see later in Section 3.7, LSGP and LPGS partitioning can be carried out via multiprojection after a DG is transformed into an artificial higher-dimensional DG.
+
+## 2.2. Algebraic Formulation of Multiprojection
+
+The process of multiprojection can be written as a number of single projections using the same algebraic formulation as introduced in Appendix A.1. 
In this section, we explain how to project the $(n-1)$-dimensional SFG to an $(n-2)$-dimensional SFG. The potential difficulties of this mapping are (1) the presence of delay edges in the $(n-1)$-dimensional SFG, and (2) the delay management of the edges in the $(n-2)$-dimensional SFG.
+
+*Double-Projection.* For simplicity, we first introduce how to obtain a 2D SFG from a 4D DG by multiprojection.
+
+**Step 1** We project the 4D DG into a 3D SFG by projection vector $\vec{d}_4$ (4 × 1 column vector), projection matrix $\mathbf{P}_4$ (3 × 4 matrix), and scheduling vector $\vec{s}_4$ (4 × 1 column vector) with three constraints: (1) $\vec{s}_4^T \vec{d}_4 > 0$, (2) $\mathbf{P}_4 \vec{d}_4 = 0$, and (3) $\vec{s}_4^T \vec{e}_i \ge 0 \ \forall i$. The computation node $\underline{\mathcal{C}}$ (4 × 1) in the 4D DG will be mapped into the 3D SFG by
+
+$$ \begin{bmatrix} T_3(\underline{\mathcal{C}}) \\ \underline{n}_3(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \vec{s}_4^T \\ \mathbf{P}_4 \end{bmatrix} \underline{\mathcal{C}} $$
+
+The data dependence edges will be mapped into the 3D SFG by
+
+$$ \begin{bmatrix} D_3(\vec{e}_i) \\ \vec{m}_3(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \vec{s}_4^T \\ \mathbf{P}_4 \end{bmatrix} \vec{e}_i $$
+
+**Theorem 1.** $D_3(\vec{e}_i) \neq 0$ for any $\vec{e}_i$ with $\vec{m}_3(\vec{e}_i) = 0$.
+
+*Proof:* If $\vec{m}_3(\vec{e}_i) = 0$, then $\vec{e}_i$ is proportional to $\vec{d}_4$; write $\vec{e}_i = \alpha\vec{d}_4$ with $\alpha \neq 0$. The basic constraint $\vec{s}_4^T\vec{d}_4 > 0$ implies $\alpha\vec{s}_4^T\vec{d}_4 \neq 0$; therefore, $D_3(\vec{e}_i) = \vec{s}_4^T\vec{e}_i \neq 0$. 
$\square$
+
+**Step 2** We project the 3D SFG into a 2D SFG by projection vector $\vec{d}_3$ (3 × 1 column vector), projection matrix $\mathbf{P}_3$ (2 × 3 matrix), and scheduling vector $\vec{s}_3$ (3 × 1 column vector) with three constraints: (1) $\vec{s}_3^T\vec{d}_3 > 0$, (2) $\mathbf{P}_3\vec{d}_3 = 0$, and (3) $\vec{s}_3^T\vec{m}_3(\vec{e}_i) \ge 0 \ \forall \vec{e}_i$ for broadcast data, or $\vec{s}_3^T\vec{m}_3(\vec{e}_i) > 0 \ \forall \vec{e}_i$ for non-broadcast data.
+The computation node $\underline{n}_3(\underline{\mathcal{C}})$ (3 × 1) in the 3D SFG, which is mapped from $\underline{\mathcal{C}}$ (4 × 1) in the 4D DG, will be mapped into the 2D SFG by
+
+$$ \begin{bmatrix} T'_2(\underline{\mathcal{C}}) \\ \underline{n}'_2(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \vec{s}_3^T \\ \mathbf{P}_3 \end{bmatrix} \underline{n}_3(\underline{\mathcal{C}}) $$
+
+The data dependence edges in the 3D SFG will further be mapped into the 2D SFG by
+
+$$ \begin{bmatrix} D'_2(\vec{e}_i) \\ \vec{m}'_2(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \vec{s}_3^T \\ \mathbf{P}_3 \end{bmatrix} \vec{m}_3(\vec{e}_i) $$
+
+**Step 3** We can combine the results from the previous two steps. Let the allocation matrix be $\mathbf{A} = \mathbf{P}_3\mathbf{P}_4$ and the scheduling vector be $\mathbf{S}^T = \vec{s}_3^T\mathbf{P}_4 + M_4\vec{s}_4^T$, where $M_4 \ge 1 + (N_4 - 1)\vec{s}_3^T\vec{d}_3$ and $N_4$ is the maximum number of nodes along the $\vec{d}_3$ direction in the 3D SFG.
+
+• Node mapping:
+
+$$ \begin{bmatrix} T_2(\underline{\mathcal{C}}) \\ \underline{n}_2(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \underline{\mathcal{C}} $$
+---PAGE_BREAK---
+
+where $\underline{n}_2(\underline{\mathcal{C}}) = \mathbf{A}\underline{\mathcal{C}}$ indicates where the original computation node $\underline{\mathcal{C}}$ is mapped. 
$T_2(\underline{\mathcal{C}}) = \mathbf{S}^T\underline{\mathcal{C}}$ indicates when the computation node is executed.
+
+• Edge mapping:
+
+$$ \begin{bmatrix} D_2(\vec{e}_i) \\ \vec{m}_2(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \vec{e}_i $$
+
+where $\vec{m}_2(\vec{e}_i) = \mathbf{A}\vec{e}_i$ indicates where the original data dependence relationship is mapped, and $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i$ gives the amount of time delay on the edge $\vec{m}_2(\vec{e}_i)$.
+
+**Constraints for Data and Processor Availability.** Every dependent datum comes from a previous computation. To ensure data availability, every edge that is not broadcasting data must carry at least one unit of delay.
+
+**Theorem 2.** **Data Availability.** $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i \ge 0$ if $\vec{e}_i$ broadcasts data; $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i > 0$ if $\vec{e}_i$ does not broadcast data.
+
+**Proof:**
+
+$$
+\begin{align*}
+D_2(\vec{e}_i) &= \mathbf{S}^T \vec{e}_i \\
+&= (\vec{s}_3^T \mathbf{P}_4 + M_4 \vec{s}_4^T) \vec{e}_i \\
+&= \vec{s}_3^T \mathbf{P}_4 \vec{e}_i + M_4 \vec{s}_4^T \vec{e}_i \\
+&\geq \vec{s}_3^T \mathbf{P}_4 \vec{e}_i \quad (\text{from constraint (3) in Step 1}) \\
+&> 0 \quad (\text{or } \geq 0) \quad (\text{from constraint (3) in Step 2})
+\end{align*}
+$$
+
+□
+
+Two computation nodes that are mapped into a single processor cannot be executed at the same time. To ensure processor availability, $T_2(\underline{\mathcal{C}}_i) \neq T_2(\underline{\mathcal{C}}_j)$ must be satisfied for any $\underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ with $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$. 
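The two availability conditions can be checked numerically. The sketch below instantiates the double-projection formulas with toy choices of the projection and scheduling vectors and a small node domain (these values are illustrative assumptions of ours, not a design from the paper):

```python
import numpy as np
from itertools import product

# Step 1: 4D DG -> 3D SFG  (s4.d4 > 0, P4 d4 = 0, s4.e_i >= 0)
d4 = np.array([0, 0, 0, 1])
P4 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
s4 = np.array([0, 0, 0, 1])

# Step 2: 3D SFG -> 2D SFG  (s3.d3 > 0, P3 d3 = 0)
d3 = np.array([0, 0, 1])
P3 = np.array([[1, 0, 0], [0, 1, 0]])
s3 = np.array([1, 1, 1])

N4 = 4                           # nodes along d3 in the 3D SFG
M4 = 1 + (N4 - 1) * (s3 @ d3)    # lower bound from Step 3; here M4 = 4

A = P3 @ P4                      # allocation matrix (2 x 4)
S = s3 @ P4 + M4 * s4            # scheduling vector S^T = s3^T P4 + M4 s4^T

# Data availability (Theorem 2): D_2(e_i) = S^T e_i > 0 for the unit edges
edges = np.eye(4, dtype=int)
assert (edges @ S > 0).all()

# Processor availability: no two nodes share both a PE and a time step
seen = set()
for c in product(range(4), repeat=4):
    key = (tuple(A @ np.array(c)), int(S @ np.array(c)))
    assert key not in seen
    seen.add(key)
```

With these choices $\mathbf{S}^T = (1, 1, 1, 4)$: the factor $M_4$ serializes the axis removed by the second projection, which is exactly why every node mapped to the same PE gets a distinct time step.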
+
+**Theorem 3.** **Processor Availability.** $T_2(\underline{\mathcal{C}}_i) \neq T_2(\underline{\mathcal{C}}_j)$ for any $\underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ with $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$.
+
+**Proof:** For any $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$
+$\Rightarrow \mathbf{P}_3\underline{n}_3(\underline{\mathcal{C}}_i) - \mathbf{P}_3\underline{n}_3(\underline{\mathcal{C}}_j) = 0$
+$\Rightarrow \underline{n}_3(\underline{\mathcal{C}}_i) - \underline{n}_3(\underline{\mathcal{C}}_j)$ is proportional to $\vec{d}_3$, i.e.,
+$\underline{n}_3(\underline{\mathcal{C}}_i) - \underline{n}_3(\underline{\mathcal{C}}_j) = \mathbf{P}_4(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) = \alpha\vec{d}_3.$
+
+Since $N_4$ is the maximum number of nodes along the $\vec{d}_3$ direction in the 3D SFG, $\alpha \in \{0, \pm 1, \pm 2, \dots, \pm(N_4-1)\}$.
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \mathbf{S}^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&= (\vec{s}_3^T \mathbf{P}_4 + M_4 \vec{s}_4^T)(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&= \vec{s}_3^T \mathbf{P}_4(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&= \alpha \vec{s}_3^T \vec{d}_3 + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j)
+\end{align*}
+$$
+
+1. If $\mathbf{P}_4\underline{\mathcal{C}}_i = \mathbf{P}_4\underline{\mathcal{C}}_j$, then $\alpha = 0$ and
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&\neq 0 \quad (\text{by the argument of Theorem 1})
+\end{align*}
+$$
+
+2. 
If $\mathbf{P}_4\underline{\mathcal{C}}_i \neq \mathbf{P}_4\underline{\mathcal{C}}_j$, then $\alpha \in \{\pm 1, \dots, \pm(N_4-1)\}$.
+
+(a) If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) = 0$, then
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \alpha \vec{s}_3^T \vec{d}_3 \\
+&\neq 0 \quad (\text{by the basic constraint of Step 2})
+\end{align*}
+$$
+
+(b) If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \neq 0$, then, assuming $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) > 0$ without loss of generality, we have
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \alpha \vec{s}_3^T \vec{d}_3 + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&\geq \alpha \vec{s}_3^T \vec{d}_3 + \left(1 + (N_4-1)\vec{s}_3^T \vec{d}_3\right) \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&\geq \alpha \vec{s}_3^T \vec{d}_3 + (N_4-1)\vec{s}_3^T \vec{d}_3 + \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \quad (\because \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \geq 1) \\
+&= (\alpha + N_4 - 1)\,\vec{s}_3^T \vec{d}_3 + \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&\geq \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \quad (\because \alpha + N_4 - 1 \geq 0 \text{ and } \vec{s}_3^T \vec{d}_3 > 0) \\
+&> 0.
+\end{align*}
+$$
+
+If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) < 0$, let $\underline{\mathcal{C}}'_i = \underline{\mathcal{C}}_j$ and $\underline{\mathcal{C}}'_j = \underline{\mathcal{C}}_i$; the same argument then shows $T_2(\underline{\mathcal{C}}'_i) \neq T_2(\underline{\mathcal{C}}'_j)$. 
The theorem thus follows from cases 1, 2(a), and 2(b). $\square$
+
+*Multiprojecting an n-Dimensional DG into a k-Dimensional SFG.*
+---PAGE_BREAK---
+
+**Step 1** Define the $n$-dimensional SFG to be the $n$-dimensional DG itself. That is, $\underline{n}_n(\underline{\mathcal{C}}_x) = \underline{\mathcal{C}}_x$ and $\vec{m}_n(\vec{e}_i) = \vec{e}_i$.
+
+**Step 2** We project the $l$-dimensional SFG into an $(l-1)$-dimensional SFG by projection vector $\vec{d}_l$ ($l$ × 1), projection matrix $\mathbf{P}_l$ ($(l-1)$ × $l$), and scheduling vector $\vec{s}_l$ ($l$ × 1) with the basic constraints $\vec{s}_l^T \vec{d}_l > 0$, $\mathbf{P}_l \vec{d}_l = 0$, and $\vec{s}_l^T \vec{m}_l(\vec{e}_i) \ge 0$ (or $> 0$) $\forall \vec{e}_i$.
+The computation node $\underline{\mathcal{C}}_i$ ($l$ × 1) and the data dependence edge $\vec{m}_l(\vec{e}_i)$ ($l$ × 1) in the $l$-dimensional SFG will be mapped into the $(l-1)$-dimensional SFG by
+
+$$
+\underline{n}_{l-1}(\underline{\mathcal{C}}_i) = \mathbf{P}_l \underline{n}_l(\underline{\mathcal{C}}_i) \quad (1)
+$$
+
+$$
+\vec{m}_{l-1}(\vec{e}_i) = \mathbf{P}_l \vec{m}_l(\vec{e}_i) \quad (2)
+$$
+
+**Step 3** After $(n-k)$ projections, the results can be combined. The allocation matrix will be
+
+$$
+\mathbf{A} = \mathbf{P}_{k+1} \mathbf{P}_{k+2} \cdots \mathbf{P}_n \qquad (3)
+$$
+
+The scheduling vector will be
+
+$$
+\begin{align}
+\mathbf{S}^T &= \vec{s}_{k+1}^T \mathbf{P}_{k+2} \mathbf{P}_{k+3} \cdots \mathbf{P}_n \nonumber \\
+&\quad + M_{k+2} \vec{s}_{k+2}^T \mathbf{P}_{k+3} \mathbf{P}_{k+4} \cdots \mathbf{P}_n \nonumber \\
+&\quad + M_{k+2} M_{k+3} \vec{s}_{k+3}^T \mathbf{P}_{k+4} \mathbf{P}_{k+5} \cdots \mathbf{P}_n \nonumber \\
+&\quad \vdots \nonumber \\
+&\quad + M_{k+2} M_{k+3} \cdots M_n \vec{s}_n^T \tag{4}
+\end{align}
+$$
+
+where $M_l \ge 1 + (N_l - 1)\vec{s}_{l-1}^T \vec{d}_{l-1}$ and $N_l$ is the maximum number of nodes along the $\vec{d}_{l-1}$ direction in the $l$-dimensional SFG. 
Therefore,
+
+• Node mapping will be:
+
+$$
+\begin{bmatrix} T_k(\underline{\mathcal{C}}_i) \\ \underline{n}_k(\underline{\mathcal{C}}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \underline{\mathcal{C}}_i \quad (5)
+$$
+
+• Edge mapping will be:
+
+$$
+\begin{bmatrix} D_k(\vec{e}_i) \\ \vec{m}_k(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \vec{e}_i \quad (6)
+$$
+
+**Constraints for Processor and Data Availability.** If no transmittent (broadcast) property is assumed, every edge must carry at least one delay, because every dependent datum comes from a previous computation. It is easy to show that data availability is satisfied, i.e., $D_k(\vec{e}_i) > 0 \ \forall i$.
+
+Following the same proof as Theorem 3, one can easily show that processor availability is also satisfied, i.e., $T_k(\underline{\mathcal{C}}_i) \neq T_k(\underline{\mathcal{C}}_j)$ for any $\underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ with $\underline{n}_k(\underline{\mathcal{C}}_i) = \underline{n}_k(\underline{\mathcal{C}}_j)$.
+
+## 2.3. Optimization in Multiprojection
+
+After the projection directions are fixed, the structure of the array is determined. The remaining part of the design is to find a scheduling that completes the computation in minimal time under the processor and data availability constraints. That is,
+
+$$
+\min_{\mathbf{S}} \left( \max_{\underline{\mathcal{C}}_x, \underline{\mathcal{C}}_y} \{\mathbf{S}^T (\underline{\mathcal{C}}_x - \underline{\mathcal{C}}_y)\} \right)
+$$
+
+under the following constraints:
+
+1. $\mathbf{S}^T\vec{e}_i > 0 \quad \forall \vec{e}_i$ (data availability)
+
+2. $\mathbf{S}^T\underline{\mathcal{C}}_i \neq \mathbf{S}^T\underline{\mathcal{C}}_j \quad \forall \underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ with $\mathbf{A}\underline{\mathcal{C}}_i = \mathbf{A}\underline{\mathcal{C}}_j$ (processor availability)
+
+A method using quadratic programming techniques has been proposed to tackle this optimization problem [26]. However, it takes non-polynomial time to find the optimal solution. A polynomial-time heuristic approach, which uses the branch-and-bound technique and tries to solve the problem by linear programming, has also been proposed [25]. 
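The optimization objective can be made concrete by brute force on a toy 2D DG: enumerate small integer scheduling vectors, discard those violating the two constraints, and keep the one with the smallest makespan. The node domain, edge set, and allocation below are illustrative values of ours, not taken from the paper:

```python
from itertools import product

N = 5
nodes = list(product(range(N), repeat=2))     # 2D DG nodes (i, j)
edges = [(1, 0), (0, 1), (1, -1)]             # dependence edges e_i
# allocation A = [1 0]: node (i, j) runs on PE i

def makespan(s):
    times = [s[0] * i + s[1] * j for i, j in nodes]
    return max(times) - min(times)

best = None
for s in product(range(1, 6), repeat=2):      # candidate S^T = (s1, s2)
    if any(s[0] * du + s[1] * dv <= 0 for du, dv in edges):
        continue                              # data availability: S^T e > 0
    slots = {(i, s[0] * i + s[1] * j) for i, j in nodes}
    if len(slots) < len(nodes):
        continue                              # processor availability
    if best is None or makespan(s) < makespan(best):
        best = s

# S^T = (1, 1) is ruled out by the edge (1, -1); the optimum is (2, 1)
assert best == (2, 1) and makespan(best) == 3 * (N - 1)
```

Exhaustive enumeration like this explodes for realistic DGs, which is exactly why the heuristic procedure proposed next searches one projection at a time.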
+
+Here, we propose another heuristic procedure to find a near-optimal scheduling in our multiprojection method. In each single projection, from $i$ dimensions to $(i-1)$ dimensions, find an $\vec{s}_i$ by
+
+$$
+\vec{s}_i = \arg\min_{\vec{s}} \left\{
+\max_{\underline{n}_i(\underline{\mathcal{C}}_x), \underline{n}_i(\underline{\mathcal{C}}_y)}
+\left\{
+\vec{s}^T [\underline{n}_i(\underline{\mathcal{C}}_x) - \underline{n}_i(\underline{\mathcal{C}}_y)]
+\right\}
+\right\}
+\quad \forall \underline{\mathcal{C}}_x, \underline{\mathcal{C}}_y \in \text{DG} \quad (7)
+$$
+
+under the following constraints:
+
+1. $\vec{s}_i^T \vec{d}_i > 0$
+
+2. $\vec{s}_i^T \vec{m}_i(\vec{e}_j) \ge 0 \quad \forall j$ if the $(i-1)$-dimensional array is not the final goal, and $\vec{s}_i^T \vec{m}_i(\vec{e}_j) > 0 \quad \forall j$ if it is.
+
+This procedure finds a linear scheduling vector in polynomial time when the given processor allocation function is linear. Although we have no proof of optimality yet, several design examples show that our method provides optimal scheduling when the DG is shift-invariant and the projection directions are along the axes. (Nevertheless, the problem remains NP-hard over all possible processor allocation and time allocation functions.)
+---PAGE_BREAK---
+
+**Table 1.** Graph transformation rules for equivalent DGs. Note that the *transmittent data*, which are used repeatedly by many computation nodes in the DG (see Appendix A.2), play a critical role here.
+
| Rules | Apply to | Function | Advantages |
|---|---|---|---|
| Assimilarity | 2D transmittent data | Keep only one edge and delete the others in the 2nd dimension | Saves links |
| Summation | 2D accumulation data | Keep only one edge and delete the others in the 2nd dimension | Saves links |
| Degeneration | 2D transmittent data | Reduce a long buffer to a single register | Saves buffers |
| Reformation | 2D transmittent data | Reduce a long delay to a shorter one | Saves buffers |
| Redirection | Order-independent data (e.g., transmittent or accumulation data) | Reverse the edge | Avoids problems with negative edges |
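Returning to the per-projection scheduling heuristic of Eq. (7): because the allocation is linear, the schedule spread over a rectangular DG is determined by the corner nodes, so small integer schedule vectors can simply be enumerated against the two constraints. A brute-force C sketch under assumed illustrative data (the 2D sizes, bound, and edges below are made up, not taken from the paper):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Brute-force sketch of the per-projection heuristic of Eq. (7):
 * over small integer schedule vectors s, minimize the schedule spread
 * max{ s^T (c_x - c_y) } over all DG node pairs, subject to
 *   (1) s^T d > 0   (the projection direction d gets a positive delay),
 *   (2) s^T e >= 0  for every dependence edge e. */
#define DIM 2

static int dot(const int a[DIM], const int b[DIM]) {
    return a[0] * b[0] + a[1] * b[1];
}

/* Search s in [-bound, bound]^2; the nodes form an n-by-n grid, so the
 * spread over node pairs is (|s_0| + |s_1|) * (n - 1).  Returns the best
 * spread (INT_MAX if no permissible schedule exists) and writes the
 * winning vector into s_out. */
int best_schedule(const int d[DIM], int edges[][DIM], int n_edges,
                  int n, int bound, int s_out[DIM]) {
    int best = INT_MAX;
    for (int s0 = -bound; s0 <= bound; s0++)
        for (int s1 = -bound; s1 <= bound; s1++) {
            int s[DIM] = {s0, s1};
            if (dot(s, d) <= 0) continue;            /* constraint 1 */
            int ok = 1;
            for (int k = 0; k < n_edges; k++)
                if (dot(s, edges[k]) < 0) { ok = 0; break; }
            if (!ok) continue;                       /* constraint 2 */
            int spread = (abs(s0) + abs(s1)) * (n - 1);
            if (spread < best) {
                best = spread;
                s_out[0] = s0; s_out[1] = s1;
            }
        }
    return best;
}
```

For a 4 × 4 grid with edges $(1,0)$ and $(0,1)$ and projection direction $(1,0)$, the search settles on $\vec{s} = (1, 0)$ with spread 3.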
Fig. 5. (a) A high-dimensional DG, where a datum is transmitted to a set of nodes by the solid 2D mesh. (b) There are several paths via which the datum can reach a certain node. (c) During the multiprojection, the dependencies in different directions get different delays. (d) Because the data could reach the nodes by two possible paths, the *assimilarity rule* is applied to this SFG. Only one of the edges in the second dimension is kept. Without changing the correctness of the algorithm, a number of links and buffers are removed.

## 3. Equivalent Graph Transformation Rules

In Appendix A.2 and Section 2.1, some transformation rules for the DG are introduced. In order to obtain better designs, we also provide some graph transformation rules that can help us reduce the number of connections between processors, the buffer size, or the power consumption. Table 1 shows a brief summary of the rules.

### 3.1. Assimilarity Rule

As shown in Fig. 5, the assimilarity rule can save some links without changing the correctness of the DG. If a datum is transmitted to a set of operation/computation nodes in the DG/SFG by a 2D (or higher-dimensional) mesh, then there are several possible paths via which the datum can reach a certain node. For example, in the BMA, the $s[i+u, j+v]$
---PAGE_BREAK---

Fig. 6. (a) A datum is the summation of a set of nodes by a 2D mesh in an SFG. During the multiprojection, the dependencies in different directions get different delays. (b) Without changing the correctness of the algorithm, only one of the edges in the second dimension is kept. By the summation rule, a number of links and buffers are removed.

Fig. 7. (a) When transforming an SFG description to a systolic array, the conventional delay management uses $(m-1)$ registers for $m$ units of delay on the links.
(b) If the data sets of two adjacent nodes overlap each other, the degeneration rule suggests that only one register is required because the remaining data can be obtained from the other direction.

can be passed by $s[(i+1)+(u-1), j+v]$ via loop *i*, or by $s[i+u, (j+1)+(v-1)]$ via loop *j*. Keeping only one edge in the second dimension is sufficient for the data to reach everywhere.

Keeping only one edge out of a set of edges can save a great number of interconnection buffers. Usually, this rule is applied after the final SFG is obtained; in this way, we can get rid of the edges with longer delays.

One of the major drawbacks of the assimilarity rule is that every node must use the same set of data before the rule can be applied. This is not true for every algorithm that uses a 2D mesh to transmit the data. Generally speaking, the data set of a node greatly overlaps with the data sets of the other nodes but is not identical to them. In order to reduce the connection edges, we can artificially make all the nodes process the same set of data (i.e., ask the nodes to do some useless computations) and then apply the rule.

### 3.2. Summation Rule

As shown in Fig. 6, the summation rule can save some links without changing the correctness of the DG. Because summation is associative, the order of the summation can be changed. If the output is obtained by aggregating a 2D (or higher-dimensional) mesh of computation nodes, we can accumulate the partial sums in one dimension first, then accumulate the total from the partial sums in the second dimension afterward. For example, in the BMA, the $SAD[u,v]$ is the 2D summation of $|s[i+u, j+v] - r[i, j]|$ over $1 \le i, j \le n$. We can accumulate the difference over index *i* first, or over
---PAGE_BREAK---

**Fig. 8.** (a) A high-dimensional DG, where a datum is transmitted to a set of nodes by a 2D mesh, is projected into an SFG.
During the multiprojection, the dependencies in different directions get different delays. Because the data could reach the nodes by more than two possible paths, the assimilarity rule is applied to this SFG. Only one of the edges in the second dimension is kept. (b) The delay (i.e., the number of buffers) could be further decreased when the *reformation rule* transforms the original 2D mesh into a tilted mesh.

index *j* first (cf. Fig. 2). We should calculate the data in the direction with fewer buffers first, then calculate the data in the other direction later.

### 3.3. Degeneration Rule

The degeneration rule reduces the data links when data are transmitted through a 2D (or higher-dimensional) mesh and (1) each node has its own data set and (2) the data sets of two adjacent nodes overlap each other significantly. One way to save the buffers is to let the overlapping data be transmitted thoroughly through one dimension (as in the assimilarity rule) and let the non-overlapping data be transmitted through the other dimension(s) (unlike the assimilarity rule). In the second dimension, it is only necessary to keep the non-overlapping data. Fig. 7 shows that only one register is required because the remaining data can be obtained from the other direction.

### 3.4. Reformation Rule

For 2D or higher-dimensional transmittent data, the structure of the mesh is not rigid. For example, in the BMA, the $s[i+u, j+v]$ can be passed by $s[(i+k)+(u-k), j+v]$ via loop *i* and by $s[i+u, (j+k)+(v-k)]$ via loop *j* for $1 \le k \le n$. For a different *k*, the structure of the 2D transmittent mesh is different, and the final delay in the designed SFG will be different. As a result, we should choose *k* depending on the required buffer size. Generally speaking, the shorter the delay, the fewer the buffers.

For example, Fig. 8(a) shows a design after applying the assimilarity rule. Only one long delayed edge is left. Moreover, the data are transmitted to the whole array. So, we detour the long delayed edge, make use of the delay in the first dimension, and get the design shown in Fig. 8(b), where the longest delay is now shorter.

### 3.5. Redirection Rule

Because some operations are associative (e.g., summation data, transmittent data), the arcs in the DG are reversible, and they can be reversed to help the design. For example, the datum $s[(i+1)+(u-1), j+v]$ is passed to $s[i+u, j+v]$ via loop *i* in the BMA. After mapping the DG to an SFG, the delay on the edge is negative. Conventionally, negative delay is not allowed, and we must find another scheduling vector $\vec{s}$. This rule tells us to move the data in the opposite direction (passing $s[i+u, j+v]$ to $s[(i+1)+(u-1), j+v]$) instead of re-calculating the scheduling vector (cf. Fig. 9).

**Fig. 9.** (a) Generally speaking, an SFG with a negative delay is not permissible. (b) However, if the dependencies have no polarization, then we apply the redirection rule to direct the edges with negative delay in the opposite direction. After that, the SFG becomes permissible.
---PAGE_BREAK---

### 3.6. Design Optimization vs. Equivalent Transformation Rules

None of these rules modifies the correctness of the implementation, but they can accomplish some degree of design optimization.

1. The assimilarity rule and the summation rule have no influence on the overall calculation time. However, these two rules reduce the buffers and links. Generally speaking, these two rules are applied after the SFG is yielded.

2. The degeneration rule does not influence the overall calculation time. It is applied when one would like to transform the SFG into a hardware design.
It helps reduce the buffers and links. However, extra control logic circuits are required.

3. The reformation rule and the redirection rule influence the scheduling problem because these two rules can make some prohibited scheduling vectors permissible.

These rules help the design optimization but also make the optimization process harder. Sometimes, the optimization process becomes an iterative procedure which consists of (1) scheduling optimization and (2) equivalent transformation.

### 3.7. Locally Parallel Globally Sequential and Locally Sequential Globally Parallel Systolic Design by Multiprojection

In Appendix A.4, LPGS and LSGP have been introduced briefly. In this section, we integrate a unified partitioning and scheduling scheme for LPGS and LSGP into our multiprojection method. The advantage of this unified partitioning model is that various partitioning methods can be achieved by choosing the projection vectors. The systematic scheduling scheme can explore more inter-processor parallelism.

*Equivalent Graph Transformation Rules for Index Folding.* A unified re-indexing method is adopted to fold the original DG into a higher-dimensional DG with a smaller size in a chosen dimension. Then, our multiprojection approach is applied to obtain the LPGS or LSGP designs. The only difference between LPGS and LSGP under our unified approach is the order of the projection. Our approach is even better in deciding the scheduling because our scheduling is automatically inherited from the multiprojection scheduling instead of a hierarchical scheduling.

*Index Folding.* In order to map an algorithm into a systolic array by LPGS or LSGP, we propose a re-

Fig. 10. (a) shows a 2 × 6 DG. (b) shows an equivalent 2 × 2 × 3 DG after index folding. (c) an LPGS partitioning when we project the 3D DG along the *a* direction. (d) an LSGP partitioning when we project the 3D DG along the *b* direction.
---PAGE_BREAK---

Fig. 11.
A core in the 4D DG of the BMA. There are $n \times n \times (2p+1) \times (2p+1)$ nodes in the DG. The node $(i, j, u, v)$ represents the computation $SAD[u, v] = SAD[u, v] + |s[i+u, j+v] - r[i, j]|$. We denote by $\vec{E}_1$ the data dependency between computation nodes for $s[i+u, j+v]$. Because $s[i+u, j+v]$ can come from two possible directions, (1) $s[(i-1)+(u+1), j+v]$ or (2) $s[i+u, (j-1)+(v+1)]$, $\vec{E}_1$ can be $(1, 0, -1, 0)$ and $(0, 1, 0, -1)$. By the same token, $\vec{E}_2$—the data dependency of the current block—could be $(0, 0, -1, 0)$ and $(0, 0, 0, -1)$. $\vec{E}_3$, which accumulates the difference, could be $(1, 0, 0, 0)$ and $(0, 1, 0, 0)$. The representation of the DG is not unique; most of the dependence edges can be redirected because of data transmittance.

indexing method for the computational nodes into a higher-dimensional DG.

An example is shown in Fig. 10. We want to map a $2 \times 6$ DG into a smaller 2D systolic array. Let $u, v$ be the indices $(0 \le u \le 1, 0 \le v \le 5)$ of the DG.

First, we re-index all the computational nodes $(u, v)$ into $(u, a, b)$. The 2D DG becomes a 3D DG $(2 \times 2 \times 3)$, where one unit of $a$ means 3 units of $v$, one unit of $b$ means 1 unit of $v$, and $0 \le a \le 1$, $0 \le b \le 2$. Then, a node at $(u, a, b)$ in the 3D DG is equivalent to the node at $(u, 3a + b)$ in the original 2D DG.

After this, by multiprojection, we can have the following two partitioning methods:

**1. LPGS**

If we project the 3D DG along the *a* direction, then the nodes that are close to each other in the *v* direction will be mapped into different nodes. That is, the computation nodes are going to be executed in parallel. This is an LPGS partitioning.

**2. LSGP**

If we project the 3D DG along *b*, then the nodes that are close to each other in the *v* direction will be mapped into the same node. That is, the computation nodes are going to be executed in a sequential order.
This is an LSGP partitioning.

Note that we must be careful about the data dependency after the transformation. One unit of the original *v* becomes 0 units of *a* and 1 unit of *b* when the dependence edge does not move across different packing segments. (In the example, a packing segment consists of all the computation nodes within three units of sequential *v*; that is, the packing boundary is where 3 divides *v*.) One unit of *v* becomes 1 unit of *a* and $-2$ units of *b* when the dependence edge crosses the packing boundary of the transformed DG one time.

## 4. Systolic Designs for Full-Search Block-Matching Algorithms by Multiprojection Approach

### 4.1. 4D DG of BMA

Fig. 22 shows the pseudo code of the BMA for a single current block, and Fig. 11 shows a core in the 4D DG of the BMA for a current block. The operations of taking the difference, taking the absolute value, and accumulating the residue are embedded in a 4-dimensional space $i, j, u, v$. The indices $i$ and $j$ ($1 \le i, j \le n$) are the indices of the pixels in a current block. The indices $u$ and $v$ ($-p \le u, v \le p$) are the indices of the potential displacement vector. The actual DG would be a 4-dimensional repeat of the same core. Although it is more difficult to visualize the actual DG, it is fairly straightforward to manipulate algebra on the core and thus carry out the multiprojection.

We use $\vec{E}_1$ to denote the data dependency of the search window. The $s[i+u, j+v]$ will be used repeatedly for (1) different $i, j$, (2) the same $i + u$, and (3) the same $j + v$. Therefore, $\vec{E}_1$ is a 2-dimensional reformable mesh. One possible choice is $(1, 0, -1, 0)$ and $(0, 1, 0, -1)$. The $r[i, j]$ will be used repeatedly for different $u, v$. Hence, $\vec{E}_2$, the data dependency of the current block, could be $(0, 0, -1, 0)$ and $(0, 0, 0, -1)$. The summation can be done in *i*-first order or *j*-first order. $\vec{E}_3$, which accumulates the difference, could be $(1, 0, 0, 0)$ and $(0, 1, 0, 0)$.
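For reference, the computation that this 4D DG encodes is just the SAD recurrence over the four loop indices. A direct, non-systolic C sketch with assumed tiny sizes ($n = 2$, $p = 1$, chosen here only to keep the example small):

```c
#include <assert.h>
#include <stdlib.h>

/* Reference computation encoded by the 4D DG of the BMA:
 * SAD[u][v] = sum over 1 <= i, j <= n of |s[i+u, j+v] - r[i, j]|,
 * with -p <= u, v <= p.  Sizes are tiny illustrative values. */
#define BN 2            /* block size n */
#define SP 1            /* search range p */
#define SW (BN + 2*SP)  /* search-window width n + 2p */

/* s is stored with an offset of SP so that i+u, j+v map to indices >= 0 */
int sad(int s[SW][SW], int r[BN][BN], int u, int v) {
    int acc = 0;
    for (int i = 1; i <= BN; i++)
        for (int j = 1; j <= BN; j++)
            acc += abs(s[(i - 1) + u + SP][(j - 1) + v + SP] - r[i - 1][j - 1]);
    return acc;
}
```

Every node $(i, j, u, v)$ of the DG contributes exactly one `abs` term to one accumulator, which is why the whole algorithm is a 4-dimensional repeat of the same core.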
The representation of the DG is not unique; most of the dependence edges can be redirected because of data transmittance.
---PAGE_BREAK---

**Constructing Previous Designs.** As mentioned before, our multiprojection can cover most of the previous design methods. Here is the first example.

The following is the 4D DG of the BMA:

| | Edge | Delay |
|---|---|---|
| Search Window ($\vec{E}_1$) | (1, 0, -1, 0) | $D_4 = 0$ |
| | (0, 1, 0, -1) | $D_4 = 0$ |
| Current Blocks ($\vec{E}_2$) | (0, 0, -1, 0) | $D_4 = 0$ |
| | (0, 0, 0, -1) | $D_4 = 0$ |
| Partial Sum of SAD ($\vec{E}_3$) | (1, 0, 0, 0) | $D_4 = 0$ |
| | (0, 1, 0, 0) | $D_4 = 0$ |

After our first projection with $\vec{d}_4^T = (0, 0, -1, 0)$, $\vec{s}_4^T = (0, 0, -1, 0)$, and

$$P_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}$$

the SFG will be

Fig. 12. (a) A 2D BMA systolic design from double-projecting the 4D DG using Eq. (9). (b) The design after the assimilarity rule is applied. (c) The design after the reformation rule is applied (cf. Fig. 8). (d) The design by applying the degeneration rule. Its timing diagram is shown in Fig. 13.
---PAGE_BREAK---

Fig. 13. The timing diagram of the design in Fig. 12(d).

Fig. 14. (a) The data sets of different current blocks indicate the possibilities of data reuse. (b) The 5D DG of the BMA.
| | Edge | Delay |
|---|---|---|
| Search Window ($\vec{E}_1$) | (1, 0, 0) | $D_3 = 1$ |
| | (0, 1, 1) | $D_3 = 0$ |
| Current Blocks ($\vec{E}_2$) | (0, 0, 0) | $D_3 = 1$ |
| | (0, 0, 1) | $D_3 = 0$ |
| Partial Sum of SAD ($\vec{E}_3$) | (1, 0, 0) | $D_3 = 0$ |
| | (0, 1, 0) | $D_3 = 0$ |
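The table above can be checked mechanically: each 4D edge is projected by $\mathbf{P}_4$ and its delay is the inner product with $\vec{s}_4$. A small C sketch (the matrices are exactly those of the first projection in the text):

```c
#include <assert.h>

/* First projection of the 4D BMA DG with d4 = (0, 0, -1, 0):
 * the projected edge is m(e) = P4 * e and its delay is D3 = s4^T * e. */
static const int P4[3][4] = {
    {1, 0, 0, 0},
    {0, 1, 0, 0},
    {0, 0, 0, -1},
};
static const int s4[4] = {0, 0, -1, 0};

/* D3 = s4^T e */
int delay3(const int e[4]) {
    int d = 0;
    for (int i = 0; i < 4; i++) d += s4[i] * e[i];
    return d;
}

/* m(e) = P4 e */
void project3(const int e[4], int m[3]) {
    for (int r = 0; r < 3; r++) {
        m[r] = 0;
        for (int i = 0; i < 4; i++) m[r] += P4[r][i] * e[i];
    }
}
```

For instance, the search-window edge $(1, 0, -1, 0)$ projects to $(1, 0, 0)$ with delay $D_3 = 1$, and $(0, 1, 0, -1)$ projects to $(0, 1, 1)$ with delay 0, matching the table.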
If we discard the edges that have delays, then $\vec{E}_1 = (0, 1, 1)$, $\vec{E}_2 = (0, 0, 1)$, and $\vec{E}_3 = (0, 1, 0)$ and $(1, 0, 0)$. This yields the 3D DG shown in Fig. 2, and many previous designs can be constructed based on this 3D DG.

If we keep the edges that have delays, then we can reconstruct the design in [8] (cf. Fig. 3(b)) by projecting the SFG one more time with $\vec{d}_3^T = (0, 0, 1)$, $\vec{s}_3^T = (1, 0, 1)$, and

$$P_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
---PAGE_BREAK---

To ensure processor availability,

$$M \geq 1 + (N - 1)(\vec{s}_3 \cdot \vec{d}_3) \quad (8)$$

where $N$ is the maximal number of nodes along the $\vec{d}_3$-direction in the SFG. Because the index $v$ ranges from $-p$ to $p$, $N$ is $2p+1$. Hence, $M = 2p+1$ and

$$\left\{ \begin{array}{l} \mathbf{A} = \mathbf{P}_3 \mathbf{P}_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \\ \mathbf{S}^T = \vec{s}_3^T \mathbf{P}_4 + M \vec{s}_4^T = (1, 0, -2p-1, -1) \end{array} \right. \quad (9)$$

We have
| | Edge | Delay |
|---|---|---|
| Search Window ($\vec{E}_1$) | (1, 0) | $D_2 = 2p + 2$ |
| | (0, 1) | $D_2 = 1$ |
| Current Blocks ($\vec{E}_2$) | (0, 0) | $D_2 = 2p + 1$ |
| | (0, 0) | $D_2 = 1$ |
| Partial Sum of SAD ($\vec{E}_3$) | (1, 0) | $D_2 = 1$ |
| | (0, 1) | $D_2 = 0$ |
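The composed mapping of Eq. (9) can likewise be verified numerically: the sketch below builds $\mathbf{S}^T = \vec{s}_3^T \mathbf{P}_4 + M \vec{s}_4^T$ with $p = 32$ and reproduces the $D_2$ delays in the table above.

```c
#include <assert.h>

/* Composed space-time schedule of Eq. (9):
 * S^T = s3^T * P4 + M * s4^T with M = 2p + 1, so S = (1, 0, -2p-1, -1).
 * p is the search range; here p = 32 as in the text. */
#define PRANGE 32
#define MFOLD  (2 * PRANGE + 1)

/* D2 = S^T e for a 4D dependence edge e */
int delay2(const int e[4]) {
    static const int p4[3][4] = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 0, -1}};
    static const int s3[3] = {1, 0, 1};
    static const int s4[4] = {0, 0, -1, 0};
    int d2 = 0;
    for (int i = 0; i < 4; i++) {
        int S = MFOLD * s4[i];              /* M * s4 term     */
        for (int r = 0; r < 3; r++)
            S += s3[r] * p4[r][i];          /* s3^T * P4 term  */
        d2 += S * e[i];
    }
    return d2;
}
```

The search-window edge $(1, 0, -1, 0)$ indeed gets $D_2 = 2p + 2$ and the current-block edge $(0, 0, -1, 0)$ gets $D_2 = 2p + 1$.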
Fig. 12(a) shows the design.

**Design Via Assimilarity and Reformation Rules.** This design requires a huge number of buffers, although it captures considerable data reusability. In order to reduce the number of buffers, we can apply the *assimilarity rule*, as suggested in Section 3.1. We make all the nodes process the same set of data ($s[-p+1, -p+1], \ldots, s[p+n, p+n]$) and delete most of the links in the second dimension, as shown in Fig. 12(b). We further apply the *reformation rule* to make the design smaller, and get the design shown in Fig. 12(c), which is identical to the design proposed in [8].

In terms of I/O bandwidth requirements, this design is superior to many other designs because the data are input serially and the I/O bandwidth is reduced by one order of magnitude. Shift registers instead of random access memories are used here. Thus, the control is easier, the buffer area is smaller, and the data access rate is higher. (The I/O rate of the current block is only 6% of the rate of the search window, and it is relatively easy to manage the data flow of the current data. Therefore, we focus on the I/O requirement of the search window in this paper.)

However, because of the artificial unifying of the input data, some unnecessary data must go through every PE. So, the utilization rate is only 66% when $n = 1$ and $p = 32$.

**Design Via Degeneration Rule.** Another approach to saving buffers for Fig. 12(a) is to apply the *degeneration rule*. As shown in Fig. 12(d), this design can also save a number of buffers as well as keep the processors busy. It has a 77% total utilization rate (including the loading

Fig. 15. (a) The design, proposed in [14], can be re-derived by multiprojecting the 5D DG of the BMA with the *assimilarity rule* and the *reformation rule*. (b) A new design can be devised by multiprojecting the 5D DG of the BMA with the degeneration rule.
---PAGE_BREAK---

Fig. 16.
(a) The data sets of different current blocks (in row-major order) indicate different possibilities of data reuse. (b) The design with data input in row-major order. Its timing diagram is shown in Fig. 17.

Fig. 17. The timing diagram of the design in Fig. 16(b).

phase and computation), and uses only one I/O port for the search window. Its timing diagram is shown in Fig. 13.

As shown in Fig. 14, two contiguous current blocks may share some parts of the search window.

### 4.2. Multiprojecting the 5D DG of BMA

Increasing the reusability of the data can reduce the I/O and, hence, increase the overall performance. This motivates the introduction of the 5D DG of the BMA.

Let $x, y$ denote the indices of the current blocks in a frame. In the 5D design, we fix $y$ at a constant value. $\vec{E}_4$ is new: it passes the data of the search window shared by the current blocks with the same $y$. $\vec{E}_1, \vec{E}_2, \vec{E}_3$ are the same as before; more specifically, $\vec{E}_1$ passes the data of the search window for a given current block.

If we project the 5D DG along the $x, u, v$ directions and apply the assimilarity and the reformation rules
---PAGE_BREAK---

Fig. 18. (a) The data reusability between current blocks. (b) The core of the 6D DG of the BMA. (The core is repeated for $0 \le x \le N_v$, $0 \le y \le N_h$, $1 \le i, j \le n$, $-p \le u, v \le p$.) $\vec{E}_1 = (0, 0, 1, 0, -1, 0)$ and $(0, 0, 0, 1, 0, -1)$. $\vec{E}_2 = (0, 0, 0, 0, -1, 0)$ and $(0, 0, 0, 0, 0, -1)$. $\vec{E}_3 = (0, 0, 1, 0, 0, 0)$ and $(0, 0, 0, 1, 0, 0)$. $\vec{E}_4 = (1, 0, 0, 0, -n, 0)$ and $(0, 1, 0, 0, 0, -n)$.

Fig. 19. The design by multiprojecting the 6D DG of the BMA with the degeneration rule. The basic structure of the processor array is the same as in the 5D design. Its systolic cache is detailed in Fig. 20.

to it, we have the same design as proposed in [14] (cf. Fig. 15(a)).
By adding some buffers to the chip, we can reuse a major part of the search window without reloading it. The ratio of reused data is $\frac{2p \times (n+2p)}{(n+2p) \times (n+2p)}$. When $n = 16$ and $p = 32$, the ratio amounts to about 80% when a 4 KB on-chip buffer is added. However, this design shares the same problem, a low utilization rate, as that in [8] (cf. Fig. 3(b)).

Fig. 15(b) shows the design after the degeneration rule is applied to the 5D DG. It has a 99% total utilization rate (including the loading phase and the computation phase) and uses only one I/O port for the search window.

**Row-Major 5D DG of BMA.** In the previous design, we assumed that the BMA is performed in column-major order over the current blocks. However, in an MPEG codec, current blocks are coded in row-major order. In order to work with current MPEG codecs, the previous column-major systolic design may require an extra buffer to save the motion vector information.

In order to avoid the extra buffer, the data overlapped between the current blocks in row-major order (cf. Fig. 16(a)) are also considered. Because the memory designed for the buffer is in row-major order, the data reused between two current blocks become piecewise continuous. The corresponding design and timing diagram are shown in Figs. 16(b) and 17.

### 4.3. Multiprojecting the 6D DG of BMA

As the full-frame BMA is 6D (cf. Fig. 4), Fig. 18 shows the 6D DG of the BMA. Let $x, y$ denote the indices of the current blocks in a frame. $\vec{E}_1, \vec{E}_2, \vec{E}_3$ are the same as above. The new feature is that $\vec{E}_4$ now represents the inter-block reusability shifted in both the $x$ and $y$ indices.
---PAGE_BREAK---

Fig. 20. The systolic cache of the design shown in Fig. 19: (a) Its timing diagram. (b) The overall picture. (c) The first-level systolic cache. (d) A subcell of the second-level systolic cache. (e) The second-level systolic cache.
---PAGE_BREAK---

Fig. 21.
A seamless design of expandable array processors (cf. Fig. 19).

Table 2. A comparison of several designs. Our algebraic design methodology can handle algorithms with high-dimensional data dependency and thus exploit the maximum degree of data reusability. Our design from multiprojecting the 6D DG of the BMA can achieve a 99% total utilization rate of the PEs and a 96% data reusability rate of the search window.
| | Advantage | Disadvantage |
|---|---|---|
| Our design from 4D DG (by degeneration rule, Fig. 12) | Only one I/O port | 81% total utilization rate |
| Our design from 5D DG (by degeneration rule, Fig. 15) | Only one I/O port; 99% total utilization rate | 80% data reusability rate |
| Our design from 6D DG (by degeneration rule, Fig. 21) | Only one I/O port; 99% total utilization rate; 96% data reusability rate; expandable | |
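The reusability rates in Table 2 follow from simple window-overlap arithmetic. A sketch assuming $n = 16$ and $p = 32$ as in the text (the "new data per block" strip sizes are our reading of the overlap figures, not formulas stated by the paper):

```c
#include <assert.h>

/* Search-window reuse ratios behind Table 2 (n = 16, p = 32):
 * the window is (n+2p) x (n+2p); stepping to the next current block
 * leaves only a strip of genuinely new data.
 *   5D (row reuse only):    new data = n * (n + 2p)
 *   6D (row + column reuse): new data = n * n                  */
int reuse_percent(int n, int p, int new_data) {
    int window = (n + 2 * p) * (n + 2 * p);
    return 100 * (window - new_data) / window;
}
```

With $n = 16$, $p = 32$ this gives 80% for the 5D design and 96% for the 6D design, matching the table.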
**Special Supporting Memory/Cache/Buffer Design.** Since it is hard to hold all the data in the same chip that holds the processor array, a small cache is important. Because the memory access pattern of the full-search BMA is very regular, the best replacement policy of the cache can be predetermined. Consequently, we can get rid of the tags for the cache between the main memory and the processing unit, because we know (1) where the data should go, (2) which data should be replaced, and (3) where we should fetch the data.

Based on this idea, we can design a so-called *systolic cache*—a pre-fetch external cache.

Fig. 19 shows the extended systolic design for the row-major 6D DG. The schematic design of the *systolic cache* to support such a row-major 6D DG design is detailed in Fig. 20. If the width of a frame $F_h$ is 1024 ($F_h = N_h \times n$) and half of the search window size $p$ is 32, then the size of the cache will be $2p \times F_h = 64\mathrm{K}$.

**LPGS and LSGP for Expandable Design.** In addition to the overlapping between the search windows of different current blocks, another important property is that there
---PAGE_BREAK---

Fig. 22. (a) The pseudo code of the BMA for a single current block. This pseudo code is exactly the inner four loops shown in Fig. 4(a). (b) A single assignment code for the BMA. Every element in the SAD[u, v, i, j] array is assigned a value only once—hence the name.
---PAGE_BREAK---

```c
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
    {
        R[u, -p-1, i, j] = r[i, j];
        S[u, -p-1, i, j] = s[u+i, -p-1+j];
    }

for (v = -p; v <= p; v++)
{
    SAD[u, v, 0, n] = 0;
    for (i = 1; i <= n; i++)
    {
        SAD[u, v, i, 0] = SAD[u, v, i - 1, n];
        for (j = 1; j <= n; j++)
        {
            R[u, v, i, j] = R[u, v-1, i, j];
            S[u, v, i, j] = S[u, v-1, i, j+1];
            SAD[u,v,i,j] = SAD[u,v,i,j-1]
                         + | S[u,v,i,j] - R[u,v,i,j] |;
        }
    }
}
```

Fig. 23. An example of the localized recursive BMA.
The variables $s[u+i, v+j]$ and $r[i, j]$ in the inner three loops of the single assignment code shown in Fig. 22(b) are replaced by the locally interconnected arrays $S[u,v,i,j]$ and $R[u,v,i,j]$, respectively.

Fig. 24. There are two methods for mapping the partitioned DG to an array: locally parallel globally sequential (LPGS) and locally sequential globally parallel (LSGP).

is **no overlap and no gap** between the search windows of different current blocks at any time. The search-window data departing one array can be used immediately by another array. The reusable data are taken over naturally by the next array without extra buffers or special links. This design has very high expandability: the chips can be cascaded easily without performance loss, as shown in Fig. 21.

## 5. Conclusions

In this work, we concentrate on an algebraic multiprojection methodology, capable of manipulating an algorithm with high-dimensional data dependence, to design the special data flow for highly reusable data.

Multiprojecting the 6D DG of the BMA gives us high-performance processor array designs with minimal supporting buffers (cf. Table 2). We can achieve very high data reusability rates with simple buffers, e.g., shift registers or caches without tags. The data in the search window are reused as many times as possible in the SAD computations at different search positions. Therefore, the problem of the input bandwidth for the search-area data can be alleviated.

It is desirable to have a chip flexible for different block sizes and search ranges so that it can be used in a variety of application systems. The size of the buffers and their scheduling can be derived automatically when array processors are designed via multiprojection.

In addition, the expandability of the array processor design is very important for some practical implementations.
The multiprojection can give us expandability not only for single-chip solutions but also for chip-array designs.

This work has also been extended to operation placement and scheduling in fine-grain parallel architectures [3]. Because this method exploits cache and communication localities, it results in highly efficient parallel codes.

# Appendix

## A.1. Common Systolic Design Approaches

Several useful transformation techniques have been proposed for mapping an algorithm into a parallel and/or pipelined VLSI architecture [11]. There are three stages in the common systolic design methodology: the first is dependence graph (DG) design, the second is mapping the DG to a signal flow graph (SFG), and the third is designing the array processor based on the SFG.

More precisely, a DG is a directed graph, $G = \langle V, E \rangle$, which shows the dependence of the computations that occur in an algorithm. Each operation is represented as one node, $\underline{c} \in V$, in the graph. The dependence relation is shown as an arc, $\vec{e} \in E$, between the corresponding operations. A DG can also be considered as the graphical representation of a single assignment algorithm. Our approach to the construction of a DG is based on the space-time indices in the recursive algorithm: corresponding to the space-time index space in the recursive algorithm, there is a natural lattice space (with the same indices) for the DG, with one node residing on each grid point. The data dependencies in the recursive algorithm may then be explicitly expressed by the arcs connecting the interacting nodes in the DG, while its functional description is embedded in the nodes. A high-dimensional looped algorithm leads to a high-dimensional DG. For example, the BMA for a single current block is a 4-dimensional recursive algorithm [22].

A complete SFG description includes both a functional and a structural description part.
The functional description defines the behavior within a node, whereas the structural description specifies the interconnections (edges and delays) between the nodes. The structural part of an SFG can be represented by a finite directed graph, $G = \langle V, E, D(E) \rangle$, since the SFG expression consists of processing nodes, communicating edges, and delays. In general, a node, $\underline{c} \in V$, represents an arithmetic or logic function performed with zero delay, such as multiplication or addition. The directed edges $\vec{e} \in E$ model the interconnections between the nodes. Each edge $\vec{e}$ of $E$ connects an output port of a node to an input port of some node and is weighted with a delay count $D(\vec{e})$. The delay count is determined by the timing and is equal to the number of time steps needed for the corresponding arcs. Often, input and output ports are referred to as sources and sinks, respectively.

Since a complete SFG description includes both parts, we can easily transform an SFG into a systolic array, wavefront array, SIMD, or MIMD. Therefore, most research in the systolic design methodology is on how to transform a DG into an SFG.

There are two basic considerations for mapping from a DG to an SFG:

1. **Placement:** To which processors should operations be assigned? (A criterion might be to minimize communication/exchange of data between processors.)

2. **Scheduling:** In what order should the operations be assigned to a processor? (A criterion might be to minimize total computing time.)

Two steps are involved in mapping a DG to an SFG array. The first step is the processor assignment. Once the processor assignment is fixed, the second step is the scheduling.
The allowable processor and schedule assignments can be quite general; however, in order to derive a regular systolic array, linear assignments and schedules attract the most attention.

*Processor Assignment.* Processor assignment decides which processor is going to execute which node of the DG. A processor could carry out the operations of a number of nodes. For example, a projection method may be applied, in which nodes of the DG along a straight line are assigned to a common processing element (PE). Since the DG of a locally recursive algorithm is regular, the projection maps the DG onto a lower-dimensional lattice of points, known as the processor space. Mathematically, a linear projection is often represented by a projection vector $\vec{d}$. The mapping assigns the node activities in the DG to processors. The index set of the nodes of the SFG is given by the mapping

$$ \mathbf{P}: I^n \rightarrow I^{n-1} $$

where $I^n$ is the index set of the nodes of the DG, and $I^{n-1}$ is the Cartesian product of $(n-1)$ copies of the integers. The mapping of a computation $\underline{c}_i$ in the DG onto a node $\underline{n}$ in the SFG is found by:

$$ \underline{n}(\underline{c}_i) = \mathbf{P}\underline{c}_i $$

where $\underline{n}(\cdot)$ denotes the mapping function from a node in the DG to a node in the SFG, and the processor basis $\mathbf{P}$, denoted by an $(n-1) \times n$ matrix, is orthogonal to $\vec{d}$. Mathematically,

$$ \mathbf{P}\vec{d} = 0 $$

This mapping also maps the arcs of the DG to the edges of the SFG. The set of edges $\vec{m}(\vec{e})$ into each node of the SFG is derived from the set of dependence edges $\vec{e}$ at each point in the DG by

$$ \vec{m}(\vec{e}_i) = \mathbf{P}\vec{e}_i $$

where $\vec{m}(\cdot)$ denotes the mapping function from an edge in the DG to an edge in the SFG.

In this paper, boldface letters (e.g., $\mathbf{P}$) represent matrices.
Letters with overhead arrows represent $n$-dimensional vectors, written as $n \times 1$ matrices, e.g., $\vec{e}_i$ (a dependency arc in the DG) and $\vec{m}(\vec{e}_i)$ (the SFG dependency edge that comes from $\vec{e}_i$). An $n$-tuple (a point in $n$-dimensional space), written as an $n \times 1$ matrix, is represented by underlined letters, e.g., $\mathcal{C}_i$ (a computation node in the DG) and $\underline{n}(\mathcal{C}_i)$ (the SFG computation node that comes from $\mathcal{C}_i$). + +**Scheduling.** The projection should be accompanied by a scheduling scheme, which specifies the sequence of the operations in all the PEs. A schedule function represents a mapping from the $n$-dimensional index space of the DG onto a one-dimensional scheduling time space. A linear schedule is based on a set of parallel and uniformly spaced hyper-planes in the DG. These hyper-planes are called equi-temporal hyper-planes: all the nodes on the same hyper-plane must be processed at the same time. Mathematically, the schedule can be represented by a schedule vector (column vector) $\vec{s}$, pointing in the normal direction of the hyper-planes. The time step of a computation $\mathcal{C}$ in the DG is found by: + +$$ T(\mathcal{C}) = \vec{s}^T \mathcal{C} $$ + +where $T(\cdot)$ denotes the timing function mapping a node in the DG to its execution time on the assigned processor in the SFG. + +The delay $D(\vec{e})$ on every edge is derived from the set of dependence edges $\vec{e}$ at each point in the DG by + +$$ D(\vec{e}_i) = \vec{s}^T \vec{e}_i $$ + +where $D(\cdot)$ denotes the timing function mapping an arc in the DG to the delay of the corresponding edge in the SFG. + +**Permissible Linear Schedules.** There is a partial ordering among the computations, inherent in the algorithm, as specified by the DG.
For example, if there is a directed path from node $\mathcal{C}_x$ to node $\mathcal{C}_y$, then the computation represented by node $\mathcal{C}_y$ must be executed after the computation represented by node $\mathcal{C}_x$ is completed. The feasibility of a schedule is determined by the partial ordering and the processor assignment scheme. + +The necessary and sufficient conditions are stated below: + +1. $\vec{s}^T \vec{e} \ge 0$, for any dependence arc $\vec{e}$; $\vec{s}^T \vec{e} \neq 0$, for non-broadcast data. + +2. $\vec{s}^T \vec{d} > 0$. + +The first condition stands for data availability and states that the precedent computation must be completed before the succeeding computation starts. Namely, if node $\mathcal{C}_y$ depends on node $\mathcal{C}_x$, then the time step assigned to $\mathcal{C}_y$ cannot be earlier than the time step assigned to $\mathcal{C}_x$. The first condition means that causality must be enforced in a permissible schedule. However, if a datum is used by many operations in the DG (read-after-read data dependencies), the causality constraint is slightly different. In a commonly adopted approach, the same data value is broadcast to all the operation nodes. Such data are called *broadcast data*. In this case, no delay is required. Alternatively, the same data may be propagated step by step to all the nodes via local +---PAGE_BREAK--- + +arcs without being modified. This kind of data, which is propagated without being modified, is called *transmittent data*. There should be at least one delay for transmittent data. + +The second condition stands for processor availability: two computation nodes cannot be executed at the same time if they are mapped to the same processing element. The second condition implies that nodes on an equi-temporal hyper-plane should not be projected to the same PE.
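These two permissibility conditions, together with the mapping rules $\underline{n}(\mathcal{C}) = \mathbf{P}\mathcal{C}$ and $D(\vec{e}) = \vec{s}^T\vec{e}$, can be checked mechanically. The sketch below uses a hypothetical two-dimensional DG (dependence arcs $(1,0)$ and $(0,1)$, as in a simple locally recursive algorithm) with one particular choice of projection vector, processor basis, and schedule; none of these choices is unique.

```python
# Sketch: checking a linear projection/schedule for a toy 2-D DG.
# The arcs, d, P, and s below are hypothetical illustrative choices.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

d = (0, 1)               # projection vector: project along the j-axis
P = [(1, 0)]             # processor basis, an (n-1) x n matrix with P d = 0
s = (1, 1)               # candidate linear schedule (normal to hyper-planes)
arcs = [(1, 0), (0, 1)]  # dependence arcs of the DG (non-broadcast data)

# Structural requirement: every row of the processor basis is orthogonal to d.
assert all(dot(row, d) == 0 for row in P)

# Condition 1: causality -- every non-broadcast arc crosses the
# equi-temporal hyper-planes in the forward direction.
assert all(dot(s, e) > 0 for e in arcs)

# Condition 2: processor availability -- s^T d > 0, so nodes on one
# equi-temporal hyper-plane are not mapped to the same PE.
assert dot(s, d) > 0

# Map a DG node and the arcs into the SFG: n(C) = P C, D(e) = s^T e.
C = (2, 3)
pe = tuple(dot(row, C) for row in P)   # PE executing node C
delays = [dot(s, e) for e in arcs]     # edge delays in the SFG
print(pe, delays)                      # → (2,) [1, 1]
```

With this projection, all nodes sharing the same $i$ index map to one PE, and each arc becomes an SFG edge with a single delay.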
In short, the schedule is permissible if and only if (1) all the dependency arcs flow in the same direction across the hyper-planes; and (2) the hyper-planes are not parallel to the projection vector $\vec{d}$. + +In general, the projection procedure involves the following steps: + +1. For a given projection direction, the processor space is orthogonal to the projection direction. A processor array may be obtained by projecting the index points onto the processor space. + +2. Replace the arcs in the DG with zero- or nonzero-delay edges between their corresponding processors. The delay on each edge is determined by the timing and is equal to the number of time steps needed for the corresponding arc. + +3. Since each node has been projected to a PE and each input (or output) datum is connected to some nodes, it is now possible to attach the input and output data to their corresponding processors. + +## A.2. The Transformation of DG + +Besides the direction of the projection and the schedule, the choice of a particular DG for an algorithm can greatly affect the performance of the resulting array. The following are the two most common transformations of the DG seen in the literature: + +### Reindexing + +A useful technique for modifying the DG is to apply a coordinate transformation to the index space (called *reindexing*). Examples of reindexing are plane-by-plane shifting and circular shifting in the index space. For instance, when there is no permissible linear schedule or systolic schedule for the original DG, it is often desirable to modify the DG so that such a schedule may be obtained. The effect of this method is equivalent to the retiming method [13]. + +### Localized dependence graph + +A locally recursive algorithm is an algorithm whose corresponding DG has only local dependencies: all variables are (directly) dependent upon the variables of neighboring nodes only. The length of each dependency arc is independent of the problem size.
+ +On the other hand, a non-localized recursive algorithm has global interconnections/dependencies. For example, the same datum may be used by many operations, i.e., the same data value will repeatedly appear in a set of index points in the recursive algorithm or DG. In a commonly adopted approach, the operation nodes receive the datum by broadcasting. Such data are called *broadcast data*, and this set of index points is termed a broadcast contour. Such a non-localized recursive algorithm, when mapped onto an array processor, is likely to result in an array with global interconnections. + +In general, global interconnections are more expensive than localized interconnections. In certain instances, such global arcs can be avoided by using a proper projection direction in the mapping scheme. To guarantee a locally interconnected array, a localized recursive algorithm (and, equivalently, a localized DG) should be derived. In many cases, such broadcasting can be avoided and replaced by local communication. For example, in Fig. 23, the variables $s[u+i, v+j]$ and $r[i, j]$ in the inner three loops of the BMA (cf. Fig. 22(b)) are replaced by the local variables $s[u,v,i,j]$ and $r[u,v,i,j]$, respectively. The key point is that instead of broadcasting the (public) data along a global arc, the same data may be propagated step by step to all the nodes via local arcs without being modified. This kind of data, which is propagated without being modified, is called *transmittent data*. + +## A.3. General Formulation of Optimization Problems + +It takes more effort to find an optimal permissible linear schedule than to find merely a permissible one. In this section, we show how to derive an optimal design. + +**Optimization Criteria.** Optimization plays an important role in implementing systems.
In terms of parallel processing, there are many ways to evaluate a +---PAGE_BREAK--- + +design: one is to measure the completion time ($T$); another is to measure the product of the VLSI chip area and the completion time ($A \times T$) [12]. In general, the optimization problems can be categorized into: + +1. Finding a best schedule that minimizes the execution time, for given constraints on the number of processing units [25]. + +2. Minimizing the cost (area, power, etc.) under given timing constraints [19]. + +In either case, the task has been proved to be NP-hard. In this paper, we focus on how to find an optimal schedule given an array structure; the timing is an optimization goal, not a constraint. + +**Basic Formula.** First, the computation time of a systolic array can be written as + +$$T = \max_{\mathcal{C}_x, \mathcal{C}_y} \{\vec{s}^T (\mathcal{C}_x - \mathcal{C}_y)\} + 1$$ + +where $\mathcal{C}_x$ and $\mathcal{C}_y$ are two computation nodes in the DG. + +The optimization problem becomes the following min-max formulation: + +$$\vec{s}_{op} = \arg \left[ \min_{\vec{s}} \left[ \max_{\mathcal{C}_x, \mathcal{C}_y} \{\vec{s}^T (\mathcal{C}_x - \mathcal{C}_y)\} + 1 \right] \right]$$ + +under the following two constraints: $\vec{s}^T \vec{d} > 0$ and $\vec{s}^T \vec{e} > 0$, for any dependence arc $\vec{e}$. + +The minimal computation time schedule $\vec{s}$ can be found by solving a proper integer linear program [12, 21, 25] or quadratic program [26]. + +## A.4. Partitioning Methods + +As DSP systems grow too complex to be contained in a single chip, partitioning is used to map a design onto multi-chip architectures. In general, the mapping scheme (including both the node assignment and scheduling) will be much more complicated than the regular projection methods discussed in the previous sections because it must optimize chip area while meeting constraints on throughput, input/output timing, and latency.
The design takes into consideration I/O pins, inter-chip communication, control overheads, and the tradeoff between external communication and local memory. + +For a systematic mapping from the DG onto a systolic array, the DG is regularly partitioned into many blocks, each consisting of a cluster of nodes in the DG. As shown in Fig. 24, there are two methods for mapping the partitioned DG to an array: the locally sequential globally parallel (LSGP) method and the locally parallel globally sequential (LPGS) method [11]. + +For convenience of presentation, we adopt the following mathematical notation. Suppose that an $n$-dimensional DG is linearly projected onto an $(n-1)$-dimensional SFG array of size $L_1 \times L_2 \times \cdots \times L_{n-1}$. The SFG is partitioned into $M_1 \times M_2 \times \cdots \times M_{n-1}$ blocks, where each block is of size $Z_1 \times Z_2 \times \cdots \times Z_{n-1}$, with $Z_i = L_i/M_i$ for $i \in \{1, 2, \cdots, n-1\}$. + +**Allocation.** + +1. In the LSGP scheme, one block is mapped to one PE. Each PE sequentially executes the nodes of the corresponding block. The number of blocks is equal to the number of PEs in the array, i.e., the array size equals the product $M_1 \times M_2 \times \cdots \times M_{n-1}$. + +2. In the LPGS scheme, the block size is chosen to match the array size, i.e., one block can be mapped to one array. All nodes within one block are processed concurrently, i.e., locally parallel. One block after another is loaded into the array and processed in a sequential manner, i.e., globally sequential. + +**Scheduling.** In LSGP, after processor allocation, the $Z_1 \times Z_2 \times \cdots \times Z_{n-1}$ nodes in each block of the SFG share one PE. An acceptable (i.e., sufficiently slow) schedule is chosen so that at any instant at most one node in each block is active.
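The block arithmetic behind the two allocation schemes can be illustrated with a small sketch; the SFG size and partition below are hypothetical values, not taken from the text.

```python
# Sketch of LSGP vs. LPGS bookkeeping for a hypothetical 2-D SFG of
# size L1 x L2, partitioned into M1 x M2 blocks of size Z1 x Z2,
# with Z_i = L_i / M_i.

L = (8, 6)   # SFG array size L1 x L2 (hypothetical)
M = (4, 3)   # number of blocks per dimension (hypothetical)
Z = tuple(l // m for l, m in zip(L, M))
assert all(l == m * z for l, m, z in zip(L, M, Z)), "Z_i = L_i / M_i must be integral"

def product(xs):
    r = 1
    for x in xs:
        r *= x
    return r

# LSGP: one block -> one PE, so the physical array has M1 * M2 PEs and
# each PE sequentially executes the Z1 * Z2 nodes of its block.
lsgp_num_pes = product(M)         # 12 PEs
lsgp_nodes_per_pe = product(Z)    # 4 nodes time-share each PE

# LPGS: one block matches the physical array (Z1 * Z2 PEs); the
# M1 * M2 blocks are loaded and processed one after another.
lpgs_num_pes = product(Z)         # 4 PEs
lpgs_num_passes = product(M)      # 12 sequential block passes

# Either way, every SFG node is covered exactly once.
assert lsgp_num_pes * lsgp_nodes_per_pe == product(L)
assert lpgs_num_pes * lpgs_num_passes == product(L)
```

The two schemes trade hardware for time in opposite directions: LSGP buys fewer, busier PEs, while LPGS buys a small array driven through many sequential passes.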
+ +As for the scheduling scheme of the LPGS method, a general rule is to select a (global) schedule that does not violate the data dependencies. Note that the LPGS design has the advantage that blocks can be executed one after another in a natural order. However, this simple ordering is valid only when there is no reverse data dependence between the chosen blocks. + +**Generalized Partitioning Method.** A unified partitioning and scheduling scheme is proposed for LPGS and LSGP in [9]. The main contribution includes a unified partitioning model and a systematic two-level scheduling scheme. The unified partitioning model can support LPGS and LSGP designs in the same manner. +---PAGE_BREAK--- + +The systematic two-level scheduling scheme can specify the intra-processor schedule and the inter-processor schedule independently. Hence, more inter-processor parallelism can be effectively exploited. + +A general framework for processor mapping is also proposed in [17, 18]. + +**Optimization for Partitioning.** The problem of finding an optimal (or reasonably small) schedule is NP-hard. A systematic methodology for optimal partitioning is described in [23]. + +## Acknowledgements + +This work was supported in part by Sarnoff Research Center, Mitsubishi Electric, and the George Van Ness Lothrop Honorific Fellowship. + +## References + +1. J. Baek, S. Nam, M. Lee, C. Oh, and K. Hwang, "A Fast Array Architecture for Block Matching Algorithm," *Proc. of IEEE Symposium on Circuits and Systems*, vol. 4, pp. 211–214, 1994. +2. S. Chang, J.-H. Hwang, and C.-W. Jen, "Scalable Array Architecture Design for Full Search Block Matching," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 5, no. 4, pp. 332–343, Aug. 1995. +3. Y.-K. Chen and S. Y. Kung, "An Operation Placement and Scheduling Scheme for Cache and Communication Localities in Fine-Grain Parallel Architectures," in *Proc. of Int'l Symposium on Parallel Architectures, Algorithms and Networks*, pp.
390–396, Dec. 1997. +4. L. De Vos, "VLSI-architectures for the Hierarchical Block-Matching Algorithm for HDTV Applications," *SPIE Visual Communications and Image Processing*, vol. 1360, pp. 398–409, 1990. +5. L. De Vos and M. Stegherr, "Parameterizable VLSI Architectures for Full-Search Block-Matching Algorithm," *IEEE Trans. on Circuits and Systems*, vol. 36, no. 10, pp. 1309–1316, Oct. 1989. +6. D. Le Gall, "MPEG: A Video Compression Standard for Multimedia Applications," *Communications of the ACM*, vol. 34, no. 4, Apr. 1991. +7. K. Guttag, R. J. Gove, and J. R. V. Aken, "A Single-Chip Multiprocessor For Multimedia: The MVP," *IEEE Computer Graphics & Applications*, vol. 11, no. 6, pp. 53–64, Nov. 1992. +8. C.-H. Hsieh and T.-P. Lin, "VLSI Architecture for Block-Matching Motion Estimation Algorithm," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 2, no. 2, pp. 169–175, June 1992. +9. Y.-T. Hwang and Y.-H. Hu, "A Unified Partitioning and Scheduling Scheme for Mapping Multi-Stage Regular Iterative Algorithms onto Processor Arrays," *Journal of VLSI Signal Processing*, vol. 11, pp. 133–150, Oct. 1995. + +10. T. Komarek and P. Pirsch, "Array Architectures for Block Matching Algorithms," *IEEE Trans. on Circuits and Systems*, vol. 36, no. 10, pp. 1301–1308, Oct. 1989. +11. S. Y. Kung, *VLSI Array Processors*. Englewood Cliffs, NJ: Prentice Hall, 1988. +12. G.-J. Li and B. W. Wah, "The Design of Optimal Systolic Arrays," *IEEE Trans. on Computers*, vol. 34, no. 1, pp. 66–77, Jan. 1985. +13. N. L. Passos and E. H.-M. Sha, "Achieving Full Parallelism Using Multidimensional Retiming," *IEEE Trans. on Parallel and Distributed Systems*, vol. 7, no. 11, pp. 1150–1163, Nov. 1996. +14. P. Pirsch, N. Demassieux, and W. Gehrke, "VLSI Architectures for Video Compression: A Survey," *Proceedings of the IEEE*, vol. 83, no. 2, pp. 220–246, Feb. 1995. +15. F. Sijstermans and J.
van der Meer, "CD-I Full-Motion Video Encoding on a Parallel Computer," *Communications of the ACM*, vol. 34, no. 4, pp. 81–91, Apr. 1991. +16. M.-T. Sun, "Algorithms and VLSI Architectures for Motion Estimation," *VLSI Implementations for Image Communications*, pp. 251–282, 1993. +17. J. Teich and L. Thiele, "Partitioning of Processor Arrays: A Piecewise Regular Approach," *INTEGRATION: The VLSI Journal*, vol. 14, no. 3, pp. 297–332, 1993. +18. J. Teich, L. Thiele, and L. Zhang, "Partitioning Processor Arrays under Resource Constraints," *Journal of VLSI Signal Processing*, vol. 17, no. 1, pp. 5–20, Sept. 1997. +19. W. F. Verhaegh, P. E. Lippens, E. H. Aarts, J. H. Korst, J. L. van Meerbergen, and A. van der Werf, "Improved Force-Directed Scheduling in High-Throughput Digital Signal Processing," *IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems*, vol. 14, no. 8, pp. 945–960, Aug. 1995. +20. B.-M. Wang, J.-C. Yen, and S. Chang, "Zero Waiting-Cycle Hierarchical Block Matching Algorithm and its Array Architectures," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 4, no. 4, pp. 18–28, Feb. 1994. +21. Y. Wong and J.-M. Delosme, "Optimization of Computation Time for Systolic Arrays," *IEEE Trans. on Computers*, vol. 41, no. 2, pp. 159–177, Feb. 1992. +22. H. Yeo and Y.-H. Hu, "A Novel Modular Systolic Array Architecture for Full-Search Block Matching Motion Estimation," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 5, no. 5, pp. 407–416, Oct. 1995. +23. K.-H. Zimmermann, "A Unifying Lattice-Based Approach for the Partitioning of Systolic Arrays via LPGS and LSGP," *Journal of VLSI Signal Processing*, vol. 17, no. 1, pp. 21–47, Sept. 1997. +24. K.-H. Zimmermann, "Linear Mappings of n-Dimensional Uniform Recurrences onto k-Dimensional Systolic Arrays," *Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology*, vol. 12, no. 2, pp. 187–202, May 1996.
+25. K.-H. Zimmermann and W. Achtziger, "Finding Space-Time Transformations for Uniform Recurrences via Branching Parametric Linear Programming," *Journal of VLSI Signal Processing*, vol. 15, no. 3, pp. 259–274, 1997. +26. K.-H. Zimmermann and W. Achtziger, "On Time Optimal Implementation of Uniform Recurrences onto Array Processors via Quadratic Programming," *Journal of VLSI Signal Processing*, vol. 19, no. 1, pp. 19–38, 1998. + diff --git a/samples/texts_merged/7100604.md b/samples/texts_merged/7100604.md new file mode 100644 index 0000000000000000000000000000000000000000..95d793b5b3df9bd220771c0df04e07b87d512f45 --- /dev/null +++ b/samples/texts_merged/7100604.md @@ -0,0 +1,1109 @@ + +---PAGE_BREAK--- + +# Efficient Market Making via Convex Optimization, and a Connection to Online Learning + +Jacob Abernethy, University of Pennsylvania +Yiling Chen, Harvard University +Jennifer Wortman Vaughan, University of California, Los Angeles + +We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution's bounded budget.
Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between cost function based markets and online learning and establish a correspondence between cost function based markets and market scoring rules for complete markets. + +**Categories and Subject Descriptors:** F.0 [Theory of Computation]: General; J.4 [Computer Applications]: Social and Behavioral Sciences + +**General Terms:** Algorithms, Economics, Theory + +**ACM Reference Format:** + +Abernethy, J., Chen, Y., Vaughan, J. W. 2012. Efficient Market Making via Convex Optimization, and a Connection to Online Learning. ACM TEAC 1, 1, Article X (2012), 38 pages. +DOI 10.1145/0000000.000000 http://doi.acm.org/10.1145/0000000.000000 + +**Additional Key Words and Phrases:** Market design, securities market, prediction market, automated market maker, convex analysis, online linear optimization + +Parts of this research initially appeared in Chen and Vaughan [2010] and Abernethy et al. [2011]. This work is supported by NSF grants CCF-0953516, CCF-0915016, IIS-1054911, and DMS-070706, DARPA grant FA8750-05-2-0249, and a Yahoo! PhD Fellowship, and is based on work that was supported by NSF under CNS-0937060 to the CRA for the CIFellows Project. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors alone. The authors are grateful to David Pennock for useful discussions about this work and Xiaolong Li and Michael Ruberry for comments on an earlier draft. + +Authors' addresses: J. Abernethy, Computer and Information Science Department, University of Pennsylvania; Y. Chen, School of Engineering and Applied Sciences, Harvard University; J. W.
Vaughan, Computer Science Department, University of California, Los Angeles. + +Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org. + +© 2012 ACM 0000-0000/2012/-ARTX $15.00 +DOI 10.1145/0000000.000000 http://doi.acm.org/10.1145/0000000.000000 +---PAGE_BREAK--- + +# 1. INTRODUCTION + +Securities markets play a fundamental role in economics and finance. A securities market offers a set of contingent securities whose payoffs depend on the future state of the world. For example, an Arrow-Debreu security pays $1 if a particular state of the world is reached and $0 otherwise [Arrow 1964; 1970]. Consider an Arrow-Debreu security that will pay off in the event that a category 4 or higher hurricane passes through Florida in 2012. A Florida resident who worries about his home being damaged might buy this security as a form of insurance to hedge his risk; if there is a hurricane powerful enough to damage his home, he will be compensated. Additionally, a risk-neutral trader who has reason to believe that the probability of a category 4 or higher hurricane landing in Florida in 2012 is *p* should be willing to buy this security at any price below *p* or (short) sell it at any price above *p* to capitalize on his information.
For this reason, the market price of the security can be viewed as the traders' collective estimate of how likely it is that a powerful hurricane will occur. Securities markets thus have dual functions: risk allocation and information aggregation. + +Insurance contracts, options, futures, and many other financial derivatives are examples of contingent securities. A securities market primarily focused on information aggregation is often referred to as a prediction market. The forecasts of prediction markets have proved to be accurate in a variety of domains [Ledyard et al. 2009; Berg et al. 2001; Wolfers and Zitzewitz 2004]. While our work builds on ideas from prediction market design [Chen and Vaughan 2010; Othman et al. 2010; Agrawal et al. 2011], our framework can be applied to any contingent securities. + +A securities market is said to be complete if it offers at least |O| linearly independent securities over a set O of mutually exclusive and exhaustive states of the world, which we refer to as outcomes [Arrow 1964; 1970; Mas-Colell et al. 1995]. For example, a prediction market with *n* Arrow-Debreu securities for *n* outcomes is complete. In a complete securities market without transaction fees, a trader may bet on any combination of the securities, allowing him to hedge any possible risk he may have. It is generally assumed that the trader may short sell a security, betting against the given outcome; in a market with short selling, the *n*th security is not strictly necessary, as a trader can substitute the purchase of this security by short selling all others. Furthermore, traders can change the market prices to reflect any valid probability distribution over the outcome space, allowing them to reveal any belief. Completeness therefore provides expressiveness for both risk allocation and information aggregation. + +Unfortunately, completeness is not always achievable. In many real-world settings, the outcome space is exponentially large or even infinite. 
For instance, a competitive race between *n* athletes results in an outcome space of *n!* rank orders, while the future price of a stock has an infinite outcome space, namely $\mathbb{R}_{\ge 0}$. In such situations operating a complete securities market is not practical for two reasons: (a) humans are notoriously bad at estimating small probabilities and (b) it is computationally intractable to manage such a large set of securities. Instead, it is natural to offer a smaller set of structured securities. For example, rather than offer a security corresponding to each rank ordering, in pair betting a market institution offers securities of the form "$1 if candidate A beats candidate B" [Chen et al. 2007a; Chen et al. 2008a]. There has been a surge of recent research examining the tractability of running standard prediction market mechanisms (such as the popular Logarithmic Market Scoring Rule (LMSR) market maker [Hanson 2003]) over combinatorial outcome spaces by limiting the space of available securities [Pennock and Sami 2007]. While this line of research has led to a few positive results [Chen et al. 2007b; Chen et al. 2008b; Guo and Pennock 2009; Agrawal et al. 2008], it has led more often to hardness results [Chen et al. 2007b; Chen +---PAGE_BREAK--- + +et al. 2008a] or to markets with undesirable properties such as unbounded loss of the market institution [Gao et al. 2009]. + +In this paper, we propose a general framework to design automated market makers for securities markets. An automated market maker is a market institution that adaptively sets prices for each security and is always willing to accept trades at these prices. Unlike previous research aimed at finding a space of securities that can be efficiently priced using an existing market maker like the LMSR, we start with an arbitrary space of securities and design a new market maker tailored to this space. 
Our framework is therefore very general and includes existing market makers for complete markets, such as the LMSR and Quad-SCPM [Agrawal et al. 2011], as special cases. + +We take an axiomatic approach. Given a relatively small space of securities with bounded payoff, we define a set of intuitive conditions that a reasonable market maker should satisfy. We prove that a market maker satisfying these conditions must price securities via a convex potential function (the cost function), and that the space of reachable security prices must be precisely the convex hull of the payoff vectors for each outcome (that is, the set of vectors, one per outcome, denoting the payoff for each security if that outcome occurs). We then incorporate ideas from online convex optimization [Hazan 2009; Rakhlin 2009] to define a convex cost function in terms of an optimization over this convex hull; the vector of prices is chosen as the optimizer of this convex objective. With this framework, instead of dealing with the exponentially large or infinite outcome space, we only need to deal with the lower-dimensional convex hull. The problem of automated market making is reduced to the problem of convex optimization, for which we have many efficient techniques to leverage. + +To demonstrate the advantages of our framework, we provide two new computationally efficient markets. The first market can efficiently price subset bets on permutations, which are known to be #P-hard to price using the LMSR [Chen et al. 2008a]. The second market can be used to price bets on the landing location of an object on a sphere. For situations where the convex hull cannot be efficiently represented, we show that we can relax the convex hull to gain computational tractability without compromising the market maker's bounded budget. This allows us to provide a computationally efficient market maker for the aforementioned pair betting, which is also known to be #P-hard to price using the LMSR [Chen et al. 2008a]. 
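To make the payoff-vector geometry concrete, consider a toy pair-betting instance over the $3! = 6$ orderings of three candidates (a hypothetical example for illustration only, not the construction used in the paper). Each outcome induces one payoff vector over the securities, and any coherent price vector is a convex combination of these vectors; the uniform mixture gives the centroid of the hull.

```python
# Toy illustration: payoff vectors for pair betting over the 3! = 6
# permutations of candidates A, B, C. Security (x, y) pays $1 iff
# x finishes ahead of y. Reachable prices lie in the convex hull of
# the per-outcome payoff vectors; here we take the uniform mixture.
from itertools import permutations

candidates = "ABC"
securities = [(x, y) for x in candidates for y in candidates if x != y]
outcomes = list(permutations(candidates))  # 6 rank orders

def payoff(order, sec):
    x, y = sec
    return 1.0 if order.index(x) < order.index(y) else 0.0

# One payoff vector per outcome (the rows of the payoff matrix).
payoff_vectors = [[payoff(o, s) for s in securities] for o in outcomes]

# Price vector under the uniform distribution over outcomes.
n = len(outcomes)
prices = [sum(v[j] for v in payoff_vectors) / n for j in range(len(securities))]

# Under the uniform mixture, every pair-bet price is 1/2, since each
# ordering of a pair is equally likely.
assert all(abs(p - 0.5) < 1e-12 for p in prices)
```

For $n$ candidates the hull lives in only $n(n-1)$ dimensions even though there are $n!$ outcomes, which is the dimensionality reduction the framework exploits.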
+ +Although our framework was designed with the goal of deriving novel, efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. With our framework, we illustrate the mathematical parallels between cost function based markets and online learning, and establish a correspondence between cost function based markets and market scoring rules for complete markets. + +**Roadmap of the paper:** The rest of the paper is organized as follows. We begin in Section 2 with a review of the relevant literature on automated market makers and prediction market design. In Section 3 we describe the problem of market design for large outcome spaces, discuss the difficulties inherent to this problem, and introduce our axiomatic approach. In Section 4 we give a detailed framework for constructing pricing mechanisms based on convex optimization and conjugate duality. We give a couple of examples of efficient duality-based cost function market makers in Section 5. In Section 6 we consider the computational issues associated with our framework, and show how the proposed convex optimization problem can be relaxed to gain tractability without increasing the worst-case loss of the market maker. We illustrate the mathematical parallels between our framework and online learning in Section 7. Finally, in +---PAGE_BREAK--- + +Section 8, we describe how our framework can be used to establish a correspondence between cost function based markets and market scoring rules for complete markets. + +## 2. BACKGROUND AND RELATED WORK + +Automated market makers for complete markets are well studied in both economics and finance. Our work builds on the literature on cost function based markets [Hanson 2003; 2007; Chen and Pennock 2007]. 
A simple cost function based market maker offers $|O|$ Arrow-Debreu securities, each corresponding to a potential outcome. The market maker determines how much each security should cost using a differentiable cost function, $C: \mathbb{R}^{|O|} \to \mathbb{R}$, which is simply a potential function specifying the amount of money currently wagered in the market as a function of the number of shares of each security that have been purchased. If $q_o$ is the number of shares of security $o$ currently held by traders, and a trader would like to purchase a bundle of $r_o$ shares for each security $o \in O$ (where each $r_o$ could be positive, representing a purchase, zero, or even negative, representing a sale), the trader must pay $C(q+r) - C(q)$ to the market maker. The instantaneous price of security $o$ (that is, the price per share of an infinitesimal portion of a security) is then $\partial C(q)/\partial q_o$, and is denoted $p_o(q)$. + +One example of a cost function based market that has received considerable attention is Hanson's Logarithmic Market Scoring Rule (LMSR) [Hanson 2003; 2007; Chen and Pennock 2007]. The cost function of the LMSR is + +$$C(\mathbf{q}) = b \log \sum_{o \in O} e^{q_o / b}, \quad (1)$$ + +where $b > 0$ is a parameter of the market controlling the rate at which prices change. The corresponding price function for each security $o$ is + +$$p_o(\mathbf{q}) = \frac{\partial C(\mathbf{q})}{\partial q_o} = \frac{e^{q_o/b}}{\sum_{o' \in O} e^{q_{o'}/b}}. \quad (2)$$ + +It is well known that the monetary loss of an automated market maker using the LMSR is upper bounded by $b \log |O|$. Additionally, the LMSR satisfies several other desirable properties, which are discussed in more detail in Section 3.1. + +When $|O|$ is large or infinite, calculating the cost of a purchase becomes intractable in general.
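For small outcome spaces, however, equations (1) and (2) can be evaluated directly. The sketch below implements the LMSR cost, price, and trade-cost rules for a three-outcome market; the quantity vector and the liquidity parameter $b$ are illustrative values, not taken from the text.

```python
# Minimal LMSR sketch: C(q) = b * log(sum_o exp(q_o / b)) and
# p_o(q) = exp(q_o / b) / sum_o' exp(q_o' / b).
# A trader buying bundle r pays C(q + r) - C(q).
import math

def lmsr_cost(q, b):
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_prices(q, b):
    z = sum(math.exp(x / b) for x in q)
    return [math.exp(x / b) / z for x in q]

def trade_cost(q, r, b):
    new_q = [x + y for x, y in zip(q, r)]
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

b = 10.0             # liquidity parameter (illustrative)
q = [0.0, 0.0, 0.0]  # no shares outstanding: prices are uniform
assert all(abs(p - 1 / 3) < 1e-12 for p in lmsr_prices(q, b))

# Buying 5 shares of outcome 0 raises its price above 1/3 ...
cost = trade_cost(q, [5.0, 0.0, 0.0], b)
p = lmsr_prices([5.0, 0.0, 0.0], b)
assert p[0] > 1 / 3 > p[1] == p[2]
# ... and, since the price rises continuously during the purchase, the
# total cost lies between the old and new per-share prices times 5 shares.
assert 5 * (1 / 3) < cost < 5 * p[0]
```

Note that the prices in (2) always sum to one, so they can be read as a probability distribution over outcomes.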
Recent research on automated market makers for large outcome spaces has focused on restricting the allowable securities over a combinatorial outcome space and examining whether the LMSR prices can be computed efficiently in the restricted space. If the outcome space contains $n!$ rank orders of $n$ competing candidates, it is #P-hard to price *pair bets* (securities of the form "$1 if and only if candidate A beats candidate B") or *subset bets* (for example, "$1 if one of the candidates in subset C finishes at position $k$") using the LMSR on the full set of permutations [Chen et al. 2008a]. If the outcome space contains $2^n$ Boolean values of $n$ binary base events, it is #P-hard to price securities on conjunctions of any two base events (for example, "$1 if and only if a Democrat wins Florida and Ohio") using the LMSR [Chen et al. 2008a]. This line of research has led to some positive results when the uncertain event enforces particular structure on the outcome space. In particular, for a single-elimination tournament of $n$ teams, securities such as "$1 if and only if team A wins a $k$th round game" and "$1 if and only if team A beats team B given they face off" can be priced efficiently using the LMSR [Chen et al. 2008b]. The tractability of these securities is due to a structure-preserving property: the market probability can be represented by a Bayesian network, and price updating does not change the structure of the network. Pennock and Xia [2011] significantly generalized this result and characterized all structure-preserving securities. For a taxonomy tree on some statistic where the value

---PAGE_BREAK---

of the statistic of a parent node is the sum of those of its children, securities such as "$1 if and only if the value of the statistic at node A belongs to $[x, y]$" can be priced efficiently using the LMSR [Guo and Pennock 2009].
One approach to combat the computational intractability of pricing over combinatorial spaces is to approximate the market prices using sampling techniques. Yahoo!'s Predictalot,¹ a play-money combinatorial prediction market for the NCAA Men's Basketball playoff, allows traders to bet on almost any combination of the 2⁶³ outcomes of the tournament. Predictalot is based on the LMSR. Instead of calculating the exact prices for securities, it uses importance sampling to approximate the prices. Xia and Pennock [2011] devised a Monte-Carlo algorithm that can efficiently compute the price of any security in disjunctive or conjunctive normal form with guaranteed error bounds. However, using sampling techniques introduces a new problem for pricing: in general, a sampling algorithm will not return the same prices if quoted twice, even when the market state remains unchanged. Because of this, traders can exploit the market to make a profit, which increases the loss of the market maker.

In this paper, we take a drastically different approach to combinatorial market design. Instead of searching for supportable spaces of securities for existing market makers, we design new market makers tailored to any security space of interest and with desirable theoretical properties. Additionally, rather than requiring that securities have a fixed (e.g., $1) payoff when the underlying event happens, we allow more general contingent securities with arbitrary, efficiently computable and bounded payoffs.

Our approach makes use of powerful techniques from convex optimization. Agrawal et al. [2011] and Peters et al. [2007] also use convex optimization for automated market making. One major difference is that they only consider complete markets, while we consider markets with an arbitrary set of securities.
They consider the setting in which traders submit limit orders, and formulate a convex optimization problem that can be solved by the market institution in order to decide what quantity of orders to accept. While formulating the problem in terms of limit orders leads to a syntactically different problem, their mechanisms can be turned into equivalent cost function based market makers. Agrawal et al. [2011] show that their mechanisms can be formulated as a risk minimization problem with an associated penalty function. Mathematically, the penalty function plays a role similar to that of the conjugate function $R$ in our framework, but they do not explicitly make the connection with conjugate duality.

This paper focuses on cost function based market makers. It is worth noting that there are other market mechanisms, with different properties, designed for securities markets. For complete markets, Dynamic Parimutuel Markets [Pennock 2004; Mangold et al. 2005] also use a cost function to price securities; however, the securities are parimutuel bets whose future payoffs are not fixed a priori, but depend on the market activity. Brahma et al. [2010] and Das and Magdon-Ismail [2008] design Bayesian learning market makers that maintain a belief distribution and update it based on the traders' behavior. Call markets have been studied to trade securities over combinatorial spaces. In a call market, participants submit limit orders and the market institution determines what orders to accept or reject. Researchers have studied the computational complexity of operating call markets for both permutation [Chen et al. 2007b; Agrawal et al. 2008; Ghodsi et al. 2008] and Boolean [Fortnow et al. 2004] combinatorics.

Related work on online learning and related work on market scoring rules are discussed in Sections 7 and 8, respectively.

¹http://labs.yahoo.com/project/336

---PAGE_BREAK---

## 3. AN AXIOMATIC APPROACH TO MARKET DESIGN

In this work, we are primarily interested in a market-design scenario in which the outcome space $\mathcal{O}$ is exponentially large, or even infinite, making it infeasible to run a complete market; not only is it generally intractable for the market maker to price an exponential number of securities, but it is notoriously difficult for human traders to reason about the probabilities of so many individually unlikely outcomes. To address both of these problems, we restrict the market maker to offer a menu of only $K$ securities for some reasonably sized $K$. These securities will be designed by the market maker, and one can interpret each security as corresponding to some "interesting" or "useful" query that we might like to make about the future outcome. For example, if a set of players compete in a tournament, the market maker can offer a security for every question of the form "does player X survive beyond round Y?"

We assume that the payoff of each security, which naturally depends on the future outcome $o$, can be described by an arbitrary but efficiently-computable function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$; if a trader purchases a share of security $i$ and the true outcome is $o$, then the trader is paid $\rho_i(o)$. We call such a security space *complex*. The complete security space is a special case of a complex security space in which $K = |\mathcal{O}|$ and for each $i \in \{1, \dots, K\}$, $\rho_i(o)$ equals 1 if $o$ is the $i$th outcome and 0 otherwise. The markets we design enable traders to purchase arbitrary security bundles $r \in \mathbb{R}^K$. A negative element of $r$ encodes a sale of such a security. The payoff for $r$ upon outcome $o$ is exactly $\rho(o) \cdot r$, where $\rho(o)$ denotes the vector of payoffs for each security for outcome $o$. Let us define $\rho(\mathcal{O}) := \{\rho(o) \,|\, o \in \mathcal{O}\}$.
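As a concrete illustration (our own toy code, not from the paper), the snippet below builds the payoff function $\rho$ for the complete security space and evaluates the payoff $\rho(o) \cdot r$ of a bundle that mixes a purchase with a short sale.

```python
def complete_rho(num_outcomes):
    """Payoff function for the complete security space: K = |O| securities,
    where rho_i(o) = 1 if o is the i-th outcome and 0 otherwise."""
    def rho(o):
        return [1.0 if i == o else 0.0 for i in range(num_outcomes)]
    return rho

def bundle_payoff(rho, o, r):
    """Payoff rho(o) . r of a security bundle r (negative entries encode sales)."""
    return sum(x * y for x, y in zip(rho(o), r))

rho = complete_rho(4)
bundle = [2.0, 0.0, -1.0, 0.0]  # long 2 shares of security 0, short 1 share of security 2
```

If outcome 0 occurs the bundle pays $2; if outcome 2 occurs the trader owes $1 on the short position; any other outcome pays nothing.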
It will be assumed, throughout the paper, that $\rho(\mathcal{O})$ is closed and bounded.

The first step in the design of automated market makers for complex security spaces is to determine an appropriate set of properties that we would like such market makers to satisfy. To build intuition about which properties might be desirable, we first step back and consider what it is that makes a market maker like the LMSR a good choice for complete markets.

### 3.1. What Makes A Market Maker Reasonable?

Consider the cost function associated with the Logarithmic Market Scoring Rule (Equation 1) and the corresponding instantaneous price functions (Equation 2). This cost function and the resulting market satisfy several natural properties that make the LMSR a "reasonable" choice:

(1) The cost function is differentiable everywhere. As a result, an instantaneous price $p_o(q) = \partial C(q)/\partial q_o$ can always be obtained for the security associated with any outcome $o$, regardless of the current quantity vector $q$.

(2) The market incorporates information from the traders, in the sense that the purchase of a security corresponding to outcome $o$ causes $p_o$ to increase.

(3) The market does not provide explicit opportunities for arbitrage. Since instantaneous prices are never negative, traders are never paid to obtain securities. Additionally, the sum of the instantaneous prices of the securities is always 1; if the prices summed to something less than (respectively, greater than) 1, a trader could purchase (respectively, short sell) small equal quantities of each security for a guaranteed profit. In addition to preventing arbitrage, these properties also ensure that prices can be interpreted naturally as probabilities, representing the market's current estimate of the distribution over outcomes.
---PAGE_BREAK---

(4) The market is *expressive* in the sense that a trader with sufficient funds can always set the market prices to reflect his beliefs about the probability of each outcome.²

As described in Section 2, previous research on cost function based markets for combinatorial outcome spaces has focused on developing algorithms to efficiently implement or approximate LMSR pricing [Chen et al. 2008a; Chen et al. 2008b; Guo and Pennock 2009]. Because of this, there has been no need to explicitly extend these properties to complex markets; the properties hold automatically for any implementation of the LMSR. This is no longer the case when our goal is to design new markets tailored to custom sets of securities.

To gain intuition about what makes an arbitrary complex market "reasonable," let us begin by considering the example of *pair betting* [Chen et al. 2007a; Chen et al. 2008a]. Suppose our outcome space consists of rankings of a set of $n$ competitors, such as $n$ horses in a race. The outcome of such a race is a permutation $\pi : [n] \to [n]$, where $[n]$ denotes the set $\{1, \dots, n\}$, and $\pi(i)$ is the final position of $i$, with $\pi(i) = 1$ being best. A typical market for this setting might offer $n$ securities, with the $i$th security paying off \$1 if $\pi(i) = 1$ and \$0 otherwise. Additionally, there might be separate, *independent* markets allowing bets on horses to place (come in first or second) or show (come in first, second, or third). However, running independent markets for sets of outcomes with clear correlations is wasteful in that information revealed in one market does not automatically propagate to the others. Instead, suppose that we would like to define a set of securities that allow traders to make arbitrary *pair bets*; that is, for every $i, j$, a trader can purchase a security which pays out \$1 whenever $\pi(i) < \pi(j)$. What properties would make a market for pair bets reasonable?
The first two properties described above have straightforward interpretations in this setting. We would still like the instantaneous price of each security to be well-defined at all times; intuitively, the instantaneous price of the security for $\pi(i) < \pi(j)$ should represent the traders' collective belief about the probability that horse $i$ finishes ahead of horse $j$. Call this price $p_{i,j}$. We would still like the market to incorporate information, in the sense that buying the security corresponding to $\pi(i) < \pi(j)$ should never cause the price $p_{i,j}$ to drop.

The remaining two properties are trickier to quantify. Intuitively, these properties require us to define a set of constraints over the prices achievable in the market (to prevent arbitrage), and to ensure that any prices reflecting consistent beliefs about the distribution over outcomes can be achieved (for expressiveness). One can come up with various logical constraints that prices should satisfy. For example, $p_{i,j}$ must be nonnegative at all times for all $i$ and $j$, and $p_{i,j} + p_{j,i}$ must always equal 1, since exactly one of the two securities corresponding to $\pi(i) < \pi(j)$ and $\pi(j) < \pi(i)$ respectively will pay out \$1. Similar reasoning gives us the additional constraint that for all $i$, $j$, and $k$, $p_{i,j} + p_{j,k} + p_{k,i}$ must be at least 1 and no more than 2. But are these constraints enough to prevent arbitrage? Are they too strong to allow the expression of arbitrary consistent beliefs?

In general, this type of ad hoc reasoning can lead us to many apparently reasonable constraints, but does not yield an algorithm to determine whether or not we have generated the full set of constraints necessary to prevent arbitrage, and cannot be applied easily to more complicated security spaces. We address this problem in the next section.
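These ad hoc constraints can at least be sanity-checked on a small instance. The sketch below (our own illustration, not from the paper) enumerates all $3! = 6$ rankings of three horses, draws an arbitrary belief distribution over them, and computes the induced pair-bet prices $p_{i,j} = \Pr[\pi(i) < \pi(j)]$.

```python
import itertools
import random

n = 3
# An outcome is a permutation: perm[i] is the finishing position of horse i.
outcomes = list(itertools.permutations(range(n)))

def pair_payoff(perm, i, j):
    """Payoff of the pair bet that pays $1 iff pi(i) < pi(j)."""
    return 1.0 if perm[i] < perm[j] else 0.0

# An arbitrary belief distribution over the 6 rankings.
rng = random.Random(0)
weights = [rng.random() for _ in outcomes]
prob = [w / sum(weights) for w in weights]

# Prices consistent with the belief are expected payoffs.
price = {(i, j): sum(p * pair_payoff(o, i, j) for p, o in zip(prob, outcomes))
         for i in range(n) for j in range(n) if i != j}
```

Every pair indeed satisfies $p_{i,j} + p_{j,i} = 1$, and every directed triangle sums to a value in $[1, 2]$; whether such constraints are the *complete* set needed to prevent arbitrage is exactly the question taken up next.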
We start by formalizing the desirable market properties described above in the context of complex markets. We then provide a precise mathematical characterization of all cost functions that satisfy these properties.

²Othman et al. [2010] introduced a similar property for complete markets, which they called *surjectivity*.

---PAGE_BREAK---

### 3.2. An Axiomatic Characterization of Complex Markets

We are now ready to formalize a set of conditions or axioms that one might expect a market to satisfy, and show that these conditions lead to some natural mathematical restrictions on the costs of security bundles. (We consider relaxations of these conditions in Section 6.) We do not presuppose a cost function based market. However, we show that the use of a convex cost function is necessary given the assumption that security purchases are path independent.

**3.2.1. Path Independence and the Use of Cost Functions.** Imagine a sequence of traders entering the marketplace and purchasing security bundles. Let $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \dots$ be the sequence of security bundles purchased. After $t-1$ such purchases, the $t$th trader should be able to enter the marketplace and query the market maker for the cost of arbitrary bundles. The market maker must be able to furnish a cost, denoted $Cost(\mathbf{r}|\mathbf{r}_1, \dots, \mathbf{r}_{t-1})$, for any bundle $\mathbf{r}$ given a previous trade sequence $\mathbf{r}_1, \dots, \mathbf{r}_{t-1}$. If the trader chooses to purchase $\mathbf{r}_t$ at a cost of $Cost(\mathbf{r}_t|\mathbf{r}_1, \dots, \mathbf{r}_{t-1})$, the market maker may update the costs of each bundle accordingly. Our first condition requires that the cost of acquiring a bundle $\mathbf{r}$ must be the same regardless of how the trader splits up the purchase.

**CONDITION 1 (PATH INDEPENDENCE).** For any $\mathbf{r}, \mathbf{r}',$ and $\mathbf{r}''$ such that $\mathbf{r} = \mathbf{r}' + \mathbf{r}''$, for any $\mathbf{r}_1, \dots, \mathbf{r}_t$,

$$ Cost(\mathbf{r}|\mathbf{r}_1, \dots, \mathbf{r}_t) = Cost(\mathbf{r}'|\mathbf{r}_1, \dots, \mathbf{r}_t) + Cost(\mathbf{r}''|\mathbf{r}_1, \dots, \mathbf{r}_t, \mathbf{r}'). 
$$

Path independence helps to reduce both arbitrage opportunities and the strategic play of traders, as traders need not reason about the optimal path leading to some target position. However, it is worth pointing out that there are interesting markets that do not satisfy this condition, such as the continuous double auction and the market maker for continuous double auctions considered by Brahma et al. [2010] and Das and Magdon-Ismail [2008]. These markets do not fall into our framework and deserve separate treatment.

It turns out that path independence alone implies that prices can be represented by a cost function $C$, as the following theorem illustrates.

**THEOREM 3.1.** *Under Condition 1, there exists a cost function $C: \mathbb{R}^K \rightarrow \mathbb{R}$ such that we may always write*

$$ Cost(\mathbf{r}_t|\mathbf{r}_1, \dots, \mathbf{r}_{t-1}) = C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1} + \mathbf{r}_t) - C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1}). $$

**PROOF.** Let $C(\mathbf{q}) := Cost(\mathbf{q}|\emptyset)$. Clearly $C(\mathbf{0}) = Cost(\mathbf{0}|\emptyset) = 0$. We will show, via induction on $t$, that for any $t$ and any bundle sequence $\mathbf{r}_1, \dots, \mathbf{r}_t$,

$$ Cost(\mathbf{r}_t|\mathbf{r}_1, \dots, \mathbf{r}_{t-1}) = C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1} + \mathbf{r}_t) - C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1}). \quad (3) $$

When $t=1$, this holds trivially. Assume that Equation 3 holds for all bundle sequences of any length $t \le T$.
By Condition 1,

$$
\begin{align*}
\text{Cost}(\mathbf{r}_{T+1} | \mathbf{r}_1, \dots, \mathbf{r}_T) &= \text{Cost}(\mathbf{r}_{T+1} + \mathbf{r}_T | \mathbf{r}_1, \dots, \mathbf{r}_{T-1}) - \text{Cost}(\mathbf{r}_T | \mathbf{r}_1, \dots, \mathbf{r}_{T-1}) \\
&= C\left(\mathbf{r}_{T+1} + \mathbf{r}_T + \sum_{t=1}^{T-1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T-1} \mathbf{r}_t\right) - \left[ C\left(\mathbf{r}_T + \sum_{t=1}^{T-1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T-1} \mathbf{r}_t\right) \right] \\
&= C\left(\sum_{t=1}^{T+1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T} \mathbf{r}_t\right),
\end{align*}
$$

---PAGE_BREAK---

and we see that Equation 3 holds for $t = T + 1$ too. □

With this theorem in mind, we drop the cumbersome $Cost(\mathbf{r}|\mathbf{r}_1, \dots, \mathbf{r}_t)$ notation from now on, and write the cost of a bundle $\mathbf{r}$ as $C(\mathbf{q}+\mathbf{r}) - C(\mathbf{q})$, where $\mathbf{q} = \mathbf{r}_1 + \dots + \mathbf{r}_t$ is the vector of previous purchases.

**3.2.2. Formalizing the Properties of a Reasonable Market.** Recall that one of the functions of a securities market is to aggregate traders' beliefs into an accurate prediction. Each trader may have his own (potentially secret) information about the future, which we represent as a distribution $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ over the outcome space, where $\Delta_n = \{\mathbf{x} \in \mathbb{R}^n_+ : \sum_{i=1}^n x_i = 1\}$ denotes the $n$-simplex. The pricing mechanism should therefore incentivize the traders to reveal $\mathbf{p}$, but simultaneously avoid providing arbitrage opportunities. Towards this goal, we now revisit the relevant properties of the LMSR discussed in Section 3.1, and show how the ideas behind each of these properties can be extended to the complex market setting, yielding four additional conditions on our pricing mechanism.

The first condition ensures that the gradient of $C$, $\nabla C(\mathbf{q})$, is always well-defined. If we imagine that a trader can buy or sell an arbitrarily small bundle, we would like the cost of buying and selling an infinitesimal quantity of any particular bundle to be the same.
If $\nabla C(\mathbf{q})$ is well-defined, it can be interpreted as a vector of instantaneous prices for each security, with $\partial C(\mathbf{q})/\partial q_i$ representing the price per share of an infinitesimal amount of security $i$. Additionally, we can interpret $\nabla C(\mathbf{q})$ as the traders' current estimates of the expected payoff of each security, in the same way that $\partial C(\mathbf{q})/\partial q_o$ was interpreted as the probability of outcome $o$ when considering the complete security space.

**CONDITION 2 (EXISTENCE OF INSTANTANEOUS PRICES).** *$C$ is continuous and differentiable everywhere on $\mathbb{R}^K$.*

The next condition encompasses the idea that the market should react to trades in a sensible way in order to incorporate the private information of the traders. In particular, it says that the purchase of a security bundle $\mathbf{r}$ should never cause the market to lower the price of $\mathbf{r}$. This condition is closely related to incentive compatibility for a myopic trader. It is equivalent to requiring that a trader with a distribution $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ can never find it profitable (in expectation) to buy a bundle $\mathbf{r}$ and at the same time find it profitable to buy the bundle $-\mathbf{r}$. In other words, there cannot be more than one way to express one's information.

**CONDITION 3 (INFORMATION INCORPORATION).** *For any $\mathbf{q}, \mathbf{r} \in \mathbb{R}^K$, $C(\mathbf{q} + 2\mathbf{r}) - C(\mathbf{q} + \mathbf{r}) \ge C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q})$.*

The no arbitrage condition states that it is never possible for a trader to purchase a security bundle $\mathbf{r}$ and receive a positive profit regardless of the outcome. Without this property, the market maker would occasionally offer traders a chance to obtain a guaranteed profit, which is clearly suboptimal in terms of the market maker's loss. However, we do consider the relaxation of this property in Section 6.

**CONDITION 4 (NO ARBITRAGE).** *For all $\mathbf{q}, \mathbf{r} \in \mathbb{R}^K$, there exists an $o \in \mathcal{O}$ such that $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) \ge \mathbf{r} \cdot \rho(o)$.*
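For intuition, Conditions 3 and 4 can be checked numerically on a concrete cost function. The sketch below (our own) uses the complete-market LMSR, where $\rho(o)$ is the $o$th standard basis vector, so $\mathbf{r} \cdot \rho(o) = r_o$ and no arbitrage reduces to $C(\mathbf{q}+\mathbf{r}) - C(\mathbf{q}) \ge \min_o r_o$.

```python
import math
import random

def C(q, b=10.0):
    """LMSR cost function for a complete market, used as a concrete test case."""
    m = max(q)
    return m + b * math.log(sum(math.exp((x - m) / b) for x in q))

rng = random.Random(1)
for _ in range(200):
    q = [rng.uniform(-20.0, 20.0) for _ in range(4)]
    r = [rng.uniform(-5.0, 5.0) for _ in range(4)]
    q_r = [x + y for x, y in zip(q, r)]
    q_2r = [x + 2.0 * y for x, y in zip(q, r)]
    # Condition 3 (information incorporation): a repeat purchase never gets cheaper.
    assert C(q_2r) - C(q_r) >= C(q_r) - C(q) - 1e-9
    # Condition 4 (no arbitrage): the cost is at least r . rho(o) for some outcome o,
    # i.e., at least min_o r_o in the complete market.
    assert C(q_r) - C(q) >= min(r) - 1e-9
```

Condition 3 here is just midpoint convexity of $C$, and Condition 4 follows because the instantaneous prices always lie in the simplex.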
+ +Finally, the expressiveness condition specifies that any trader can set the market prices to reflect his beliefs, within any ε error, about the expected payoffs of each security if arbitrarily small portions of shares may be purchased. The ε approximation factor is necessary because the trader's beliefs may only be expressible in the limit; +---PAGE_BREAK--- + +note that the LMSR does not allow a trader to express the belief that an outcome will +occur with probability 1 except in the limit. + +CONDITION 5 (EXPRESSIVENESS). For any $\mathbf{p} \in \Delta_{|\mathcal{O}|}$, we write $\mathbf{x}^{\mathrm{P}} := \mathbb{E}_{o \sim p}[\rho(o)]$. Then for any $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ and any $\epsilon > 0$ there is some $\mathbf{q} \in \mathbb{R}^K$ for which $\|\nabla C(\mathbf{q}) - \mathbf{x}^{\mathrm{P}}\| < \epsilon$. + +Having formalized our set of conditions, we must now address the question of how to determine whether or not these conditions are satisfied for a particular cost function $C$. The following theorem precisely characterizes the set of all cost functions that satisfy these conditions. The statement and proof require the use of a few pieces of terminology from convex optimization, which will be our main tool for designing cost functions that satisfy Conditions 2-5; for more on why this is necessary, see the note in Section 4. In particular, the *relative boundary* of a convex set $S$ is its boundary in the “ambient” dimension of $S$. For example, if we consider the $n$-dimensional probability simplex $\Delta_n := \{\mathbf{x} \in \mathbb{R}^n : \sum_i x_i = 1, \forall i x_i \ge 0\}$, then the relative boundary of $\Delta_n$ is the set $\{\mathbf{x} \in \Delta_n : x_i = 0 \text{ for some } i\}$. We use relint($S$) to refer to the *relative interior* of a convex set $S$, which is the set $S$ minus all of the points on the relative boundary. The interior of a square in 3-dimensional space is empty, but the relative interior is not. 
We will use closure($S$) to refer to the closure of $S$, the smallest closed set containing all of the limit points of $S$. For any subset $S$ of $\mathbb{R}^d$, let $\mathcal{H}(S)$ denote the convex hull of $S$. An important object, which we will use throughout the paper, is $\mathcal{H}(\rho(\mathcal{O}))$, the convex hull of the set of outcome payoffs. (Recall that $\rho(\mathcal{O}) := \{\rho(o) \,|\, o \in \mathcal{O}\}$.) As we have assumed that $\rho(\mathcal{O})$ is a closed set, it follows easily that $\mathcal{H}(\rho(\mathcal{O}))$ is also closed, and hence closure($\mathcal{H}(\rho(\mathcal{O}))$) = $\mathcal{H}(\rho(\mathcal{O}))$.

**THEOREM 3.2.** *Under Conditions 2-5, $C$ must be convex with*

$$
\operatorname{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathcal{H}(\rho(\mathcal{O})). \quad (4)
$$

Moreover, any convex differentiable function $C : \mathbb{R}^K \to \mathbb{R}$ respecting (4) must also satisfy Conditions 2-5.

PROOF. We begin with the first direction. Take any $C$ satisfying Conditions 2-5. We first establish that $C$ is convex everywhere. Assume $C$ is non-convex somewhere. Then, since $C$ is continuous, there must exist some $\mathbf{q}$ and $\mathbf{r}$ such that $C(\mathbf{q}) > (1/2)C(\mathbf{q} + \mathbf{r}) + (1/2)C(\mathbf{q} - \mathbf{r})$. This means $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) < C(\mathbf{q}) - C(\mathbf{q} - \mathbf{r})$, which contradicts Condition 3, so $C$ must be convex.

To prove the equality, we will establish containment in both directions. We first prove that $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathcal{H}(\rho(\mathcal{O}))$, from which it follows that $\operatorname{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) \subseteq \mathcal{H}(\rho(\mathcal{O}))$ because $\mathcal{H}(\rho(\mathcal{O}))$ is already closed by assumption. Notice that Condition 2 trivially guarantees that $\nabla C(\mathbf{q})$ is well-defined for any $\mathbf{q}$.
Towards a contradiction, let us assume there exists some $\mathbf{q}'$ for which $\nabla C(\mathbf{q}') \notin \mathcal{H}(\rho(\mathcal{O}))$. Because the hull is a convex set, this can be reformulated in the following way: there must exist some halfspace, defined by a normal vector $\mathbf{r}$, that separates $\nabla C(\mathbf{q}')$ from every member of $\rho(\mathcal{O})$. More precisely,

$$
\nabla C(\mathbf{q}') \notin \mathcal{H}(\rho(\mathcal{O})) \iff \exists \mathbf{r}\ \forall o \in \mathcal{O} : \nabla C(\mathbf{q}') \cdot \mathbf{r} < \rho(o) \cdot \mathbf{r}.
$$

The strict inequality in this equation is due to the assumption that $\mathcal{H}(\rho(\mathcal{O}))$ is a closed convex set. On the other hand, letting $\mathbf{q} := \mathbf{q}' - \mathbf{r}$, we see by convexity of $C$ that $C(\mathbf{q}+\mathbf{r})-C(\mathbf{q}) \leq \nabla C(\mathbf{q}') \cdot \mathbf{r}$. Combining these last two inequalities, we see that the price of bundle $\mathbf{r}$ purchased with history $\mathbf{q}$ is always smaller than the payoff for any outcome. This implies that there exists some arbitrage opportunity, contradicting Condition 4.

We now show that $\mathcal{H}(\rho(\mathcal{O})) \subseteq \operatorname{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$. The statement of Condition 5 is equivalent to the statement that every element $\mathbf{x}^\mathrm{P} \in \mathcal{H}(\rho(\mathcal{O}))$ is a limit point

---PAGE_BREAK---

of the set $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$. But then we are done, as closure($S$) is defined to be $S$ together with all of its limit points.

We now prove the final statement, which is that (4) is also sufficient to achieve Conditions 2-5. Take some convex differentiable $C: \mathbb{R}^K \to \mathbb{R}$ for which (4) is true. Condition 2 follows by definition. As previously argued, Condition 3 is equivalent to the convexity of $C$.
Condition 5 is equivalent to the statement that $\mathcal{H}(\rho(\mathcal{O})) \subseteq \text{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$. Finally, to establish Condition 4, we have to reverse our previous argument. The existence of an arbitrage opportunity means that there exist some $\mathbf{q}, \mathbf{r}$ such that $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) < \rho(o) \cdot \mathbf{r}$ for each $o \in \mathcal{O}$. Using convexity of $C$, we also have that $\nabla C(\mathbf{q}) \cdot \mathbf{r} \le C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q})$. Combining these gives us that $\nabla C(\mathbf{q}) \cdot \mathbf{r} < \rho(o) \cdot \mathbf{r}$ for all $o \in \mathcal{O}$, but by the separation argument above this last statement is equivalent to the statement that $\nabla C(\mathbf{q}) \notin \mathcal{H}(\rho(\mathcal{O}))$. This is a contradiction and thus Condition 4 is satisfied. $\square$

What we have arrived at from the set of proposed conditions is that (a) a pricing mechanism can always be described precisely in terms of a convex cost function $C$ and (b) the set of reachable prices of a mechanism, that is, the set $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$, must be identically the convex hull of the payoff vectors for each outcome, $\mathcal{H}(\rho(\mathcal{O}))$, except possibly differing at the relative boundary of $\mathcal{H}(\rho(\mathcal{O}))$. For complete markets, this would imply that the set of achievable prices should be the convex hull of the $n$ standard basis vectors. Indeed, this comports exactly with the natural assumption that the vector of security prices in complete markets should represent a probability distribution, or equivalently that it should lie in the $n$-simplex [Agrawal et al. 2011].

## 4. DESIGNING THE COST FUNCTION VIA CONJUGATE DUALITY

The natural conditions we introduced above imply that to design a market for a set of $K$ securities with payoffs specified by an arbitrary payoff function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$, we should use a cost function based market with a convex, differentiable cost function such that $\text{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathcal{H}(\rho(\mathcal{O}))$. We now provide a general technique that can be used to design and compare properties of cost functions that satisfy these criteria. Our proposed framework uses the notion of *conjugate duality* to construct cost functions. The aim here is to simplify the task of designing a function $C$ which satisfies Conditions 2-5. We refer to any market mechanism belonging to our framework as a *Duality-based Cost Function Market Maker*.

**Duality-based Cost Function Market Maker**

*Input:* Outcome space $\mathcal{O}$

*Input:* $K$ securities specified by a payoff function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$

*Input:* Convex compact price space $\Pi$ (typically $\Pi \equiv \mathcal{H}(\rho(\mathcal{O}))$)

*Input:* Strictly convex $R$ with $\text{relint}(\Pi) \subseteq \text{dom}(R)$

*Output:* Market mechanism specified by the cost function $C: \mathbb{R}^K \to \mathbb{R}$ with

$$C(\mathbf{q}) := \sup_{\mathbf{x} \in \text{relint}(\Pi)} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x})$$

To understand this framework, we begin by reviewing the definition of a convex conjugate. Here and throughout the paper we use the notation $\text{dom}(f)$ to refer to the domain of a function $f$, i.e., where it is defined and finite valued.
---PAGE_BREAK---

*Definition 4.1 (Rockafellar [1970], Section 12).* For any convex function $f : \mathbb{R}^K \rightarrow [-\infty, \infty]$, the convex conjugate $f^*$ of $f$ is defined as

$$f^*(z) := \sup_{x \in \mathbb{R}^K} z \cdot x - f(x).$$

The curious reader can find good discussions of conjugate functions in, e.g., Boyd and Vandenberghe [2004] or Hiriart-Urruty and Lemaréchal [2001]. Rockafellar [1970] further shows that if $f$ is convex and proper³ then $f^*$ is also convex and proper. Properness shall be assumed throughout; that is, when we introduce a function and refer to it as *convex* we mean *convex and proper*.

The notion of convex duality has several nice features. For example, under weak conditions it holds that $f^{**} \equiv f$ for a convex $f$. We need more tools from convex analysis to give precise proofs of the results needed for the present discussion; however, we save the technical details for the appendix. We now state the key result that justifies the duality-based framework. The proof of this theorem can also be found in the appendix.

**THEOREM 4.2.** *Assume we have an outcome space $\mathcal{O}$ and a payoff function $\rho$ such that $\rho(\mathcal{O})$ is a bounded subset of $\mathbb{R}^K$. Then for any cost function $C: \mathbb{R}^K \to \mathbb{R}$ satisfying Conditions 2-5 and where $C$ is closed$^4$, there exists a strictly convex function $R: \mathbb{R}^K \to [-\infty, \infty]$ such that*

$$C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\mathcal{H}(\rho(\mathcal{O})))} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}). \quad (5)$$

Furthermore, for any convex function $R$ defined on $\text{relint}(\mathcal{H}(\rho(\mathcal{O})))$, if $R$ is strictly convex on its domain then the cost function defined by the conjugate, $C := R^*$, satisfies Conditions 2-5.

This theorem is the key result that will guide us in designing a market pricing mechanism.
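Theorem 4.2 can be exercised numerically. In the sketch below (ours), we take $R(\mathbf{x}) = b \sum_i x_i \log x_i$, a scaled negative entropy whose conjugate over the probability simplex is known to be the LMSR cost function, and we check that the supremum in Equation 5 is attained at the softmax point and equals $b \log \sum_i e^{q_i/b}$.

```python
import math
import random

b = 5.0  # liquidity parameter

def R(x):
    """Scaled negative entropy, a strictly convex function on the simplex."""
    return b * sum(xi * math.log(xi) for xi in x if xi > 0.0)

def objective(x, q):
    """The inner objective x . q - R(x) from Equation 5."""
    return sum(xi * qi for xi, qi in zip(x, q)) - R(x)

def softmax(q):
    """Candidate maximizer of the objective over the simplex."""
    m = max(q)
    w = [math.exp((qi - m) / b) for qi in q]
    return [wi / sum(w) for wi in w]

def lmsr_cost(q):
    """Closed-form conjugate: C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

rng = random.Random(3)
q = [rng.uniform(-10.0, 10.0) for _ in range(4)]
x_star = softmax(q)

# The softmax point attains the closed-form conjugate value...
assert abs(objective(x_star, q) - lmsr_cost(q)) < 1e-9
# ...and dominates randomly drawn points in the relative interior of the simplex.
for _ in range(500):
    w = [rng.random() + 1e-6 for _ in range(4)]
    x = [wi / sum(w) for wi in w]
    assert objective(x, q) <= objective(x_star, q) + 1e-9
```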
This mechanism relies on constructing a cost function $C: \mathbb{R}^K \to \mathbb{R}$ that satisfies Conditions 2-5, and we are now given ingredients to achieve this: pick any strictly convex function $R$ with domain containing $\mathcal{H}(\rho(\mathcal{O}))$, and let $C$ be defined as in (5). Moreover, any $C$ satisfying the desired conditions can be constructed in this fashion. + +## **4.1. Properties of Duality-based Cost Functions** + +We now devote a few paragraphs to some important details regarding the proposed duality-based pricing mechanism. + +In our definition, we introduce the concept of a “price space” denoted by $\Pi$. For the conditions of Theorem 4.2 to hold, we need $\Pi \equiv \mathcal{H}(\rho(\mathcal{O}))$. One might ask why we even introduce a price space $\Pi$ when it is already given by $\rho$. Indeed, we give the more general definition because, as we will discuss, there can be computational benefits to allowing $\Pi$ to be larger. We also require that $R$ be differentiable which, while not strictly necessary, is a reasonable condition and eases the notation as we can now discuss the gradient $\nabla R(\mathbf{x})$. + +This duality based approach to designing the market mechanism is convenient for several reasons. First, it leads to markets that are efficient to implement whenever $\mathcal{H}(\rho(\mathcal{O}))$ can be described by a polynomial number of simple constraints.⁵ The difficulty with combinatorial outcome spaces is that actually enumerating the set of out- + +³The properness of a function is defined in appendix. This is not to be confused with the properness of a scoring rule that we will discuss in Section 8. + +⁴See the appendix for the definition of closed convex functions. 
+ +⁵Under reasonable assumptions, a convex program can be solved with error $\epsilon$ in time polynomial in $1/\epsilon$ and the size of the problem input using standard techniques, e.g., the ellipsoid method and interior point +---PAGE_BREAK--- + +comes can be challenging or impossible. In our proposed framework we need only work with the convex hull of the payoff vectors for each outcome when represented by a low-dimensional payoff function $\rho(\cdot)$. This has significant benefits, as one often encounters convex sets which contain exponentially many vertices yet can be described by polynomially many constraints. Moreover, as the construction of $C$ is based entirely on convex programming, we reduce the problem of automated market making to the problem of optimization for which we have a wealth of efficient algorithms. Second, this method yields simple formulas for properties of markets that help us choose the best market to run. Two of these properties, worst-case monetary loss and worst-case information loss, are analyzed in Section 4.2. + +In order to establish precise statements, our discussions about certain convex sets – e.g., {$\nabla C$}, $\mathcal{H}(\rho(O))$, and $\Pi$ – have required precise definitions like the relative boundary and interior, and the closure of a set. One might ask whether this is necessary, as we might be focusing too heavily on “boundary cases.” While these details are occasionally cumbersome, they are important and do arise for very simple markets. For example, for the case of a complete market on $n$ outcomes using the LMSR cost function $C(\mathbf{q}) = b \log \sum_i \exp(q_i/b)$, we have that {$\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^n$} = $\text{relint}(\Delta_n)$; prices of 0 and 1 can be reached only in the limit. + +For the remainder of the paper, we shall further assume that our chosen $R$ is continuous and defined everywhere on $\mathcal{H}(\rho(O))$; that is, not just on the relative interior. 
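The LMSR boundary behavior just described is easy to verify concretely: the gradient of $C(\mathbf{q}) = b \log \sum_i \exp(q_i/b)$ is the softmax of $\mathbf{q}/b$, which always sums to one yet never exactly reaches 0 or 1 (a small sketch):

```python
import numpy as np

def lmsr_prices(q, b=1.0):
    # gradient of C(q) = b * log(sum_i exp(q_i / b)): the softmax of q / b
    z = np.exp((q - q.max()) / b)  # subtract the max for numerical stability
    return z / z.sum()

# even for a lopsided demand vector, all prices lie strictly in (0, 1);
# the extremes 0 and 1 are approached only in the limit
p = lmsr_prices(np.array([20.0, 0.0, 0.0]))
assert abs(p.sum() - 1.0) < 1e-12
assert np.all(p > 0.0) and np.all(p < 1.0)
```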
It is not entirely unreasonable to consider functions $R$ for which this is not the case; for example, we could imagine an $R$ that asymptotes towards the boundary of $\mathcal{H}(\rho(O))$. However, there are practical reasons why this is undesirable: as we will show, such cases lead to unbounded loss for the market maker. Notice also that if $R$ is continuous on the compact set $\mathcal{H}(\rho(O))$, it follows immediately that $R$ is also bounded on $\mathcal{H}(\rho(O))$. Furthermore, we can always write + +$$C(\mathbf{q}) = \max_{\mathbf{x} \in \mathcal{H}(\rho(O))} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}); \quad (6)$$ + +that is, where we have replaced $\sup$ with $\max$. Equation 6 is often convenient because we often need to consider the maximizer of the optimization. + +**LEMMA 4.3.** If $R$ is continuous and defined on all of $\mathcal{H}(\rho(O))$, the price vector at any $\mathbf{q} \in \mathbb{R}^K$ satisfies + +$$\nabla C(\mathbf{q}) = \arg\max_{\mathbf{x} \in \mathcal{H}(\rho(O))} \mathbf{q} \cdot \mathbf{x} - R(\mathbf{x}). \quad (7)$$ + +**PROOF.** We first note that the optimization problem in Equation 7 has a unique maximizer because $R$ is strictly convex. We know via conjugate duality that for any $\mathbf{q} \in \mathbb{R}^K$, + +$$R(\nabla C(\mathbf{q})) = \sup_{\mathbf{q}' \in \mathbb{R}^K} \mathbf{q}' \cdot \nabla C(\mathbf{q}) - C(\mathbf{q}').$$ + +Since the supremum is over all of $\mathbb{R}^K$, it is achieved anywhere the derivative of the objective function (with respect to $\mathbf{q}'$) vanishes. This holds when $\mathbf{q}' = \mathbf{q}$, which gives us that + +$$R(\nabla C(\mathbf{q})) + C(\mathbf{q}) = \mathbf{q} \cdot \nabla C(\mathbf{q}), \quad (8)$$ + +for every $\mathbf{q}$. Equation 7 follows immediately from Equation 8. $\square$ + +methods. Efficient techniques for convex optimization have been thoroughly studied and can be found in standard texts; hence we omit such discussions in the present work. 
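Lemma 4.3 can be exercised numerically. A minimal sketch (again using the LMSR instance, where both the cost function and the maximizer of (6) have closed forms): a finite-difference gradient of $C$ matches the argmax price vector.

```python
import numpy as np
from scipy.special import logsumexp, softmax

b = 1.0

def C(q):
    # LMSR cost function, the conjugate of R(x) = b * sum_i x_i log x_i
    return b * logsumexp(q / b)

def argmax_prices(q):
    # the maximizer of x.q - R(x) over the simplex is softmax(q / b)
    return softmax(q / b)

q = np.array([0.3, -1.2, 2.0])
eps = 1e-6
# central finite differences approximate nabla C(q)
grad = np.array([(C(q + eps * e) - C(q - eps * e)) / (2 * eps)
                 for e in np.eye(len(q))])
# Equation (7): nabla C(q) equals the argmax of the conjugate problem
assert np.allclose(grad, argmax_prices(q), atol=1e-6)
```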
+---PAGE_BREAK--- + +Lemma 4.3 shows that given any q, the instantaneous prices are simply the maximizer of the convex optimization problem (6) for any R that is continuous and defined on all of $H(\rho(O))$. This convenient fact will be used throughout the paper. + +Given an arbitrary smooth convex function $f$, we can define the *Legendre transformation*, which maps a point $x \in \text{dom}(f)$ via the rule $x \mapsto \nabla f(x)$. Indeed, under certain circumstances we get that this map is the inverse of the Legendre transformation of the conjugate $f^*$, i.e., $\nabla f^*(\nabla f(x)) = x$ and $\nabla f(\nabla f^*(y)) = y$ for every $x \in \text{dom}(f)$ and $y \in \text{dom}(f^*)$. Unfortunately, the required conditions are quite strong: we need that $f$ is strictly convex, the interior of $\text{dom}(f)$ is non-empty, and $\nabla f$ always diverges towards the boundary of $\text{dom}(f)$ (see Chapter 26 of Rockafellar [1970]). So while we would like to argue that $\nabla C$ is the inverse of the map $\nabla R$ for our framework, this will generally not be true. Assuming $R$ is differentiable, given any $x \in H(\rho(O))$, according to Lemma 4.3 we always have $\nabla C(\nabla R(x)) = x$ by setting $q = \nabla R(x)$. However, $\nabla R(\nabla C(q)) = q$ does not hold in general. On the other hand, if $H(\rho(O))$ has a non-empty interior, and the optimal solution to Equation 6 is always contained within the interior, then the statement $\nabla R(\nabla C(q)) = q$ will hold. Note, however, that these conditions are not satisfied for a complete market on $n$ outcomes, where $H(\rho(O))$ is the $n$-simplex $\Delta_n$ which has an empty interior (even though the relative interior is non-empty). Thus, cost function based market makers for complete markets do not satisfy $\nabla R(\nabla C(q)) = q$. In fact, while each $q$ maps to a single price $x = \nabla C(q)$, each price $x$ can be achieved at multiple values of $q$ in these markets. + +## 4.2. 
Bounding the Market Maker's Loss and Loss of Information + +We now discuss two key properties of our proposed market framework. We will make use of the notion of a Bregman divergence. The *Bregman divergence* with respect to a differentiable convex function $f$ is given by + +$$D_f(x, y) := f(x) - f(y) - \nabla f(y) \cdot (x - y).$$ + +It is clear by convexity that $D_f(x, y) \ge 0$ for all $x$ and $y$. + +### 4.2.1. Bounding the Market Maker's Monetary Loss. +When comparing market mechanisms, it is useful to consider the market maker's worst-case monetary loss, + +$$\sup_{q \in \mathbb{R}^K} \left( \sup_{o \in O} (\rho(o) \cdot q) - C(q) + C(0) \right).$$ + +This quantity is simply the worst-case difference between the maximum amount that the market maker might have to pay the traders ($\sup_{o \in O} \rho(o) \cdot q$) and the amount of money collected by the market maker ($C(q) - C(0)$). The following theorem provides a bound on this loss in terms of the conjugate function. + +**THEOREM 4.4.** *Consider any duality-based cost function market maker with $\Pi = H(\rho(O))$. The worst-case monetary loss of the market maker is no more than* + +$$\sup_{x \in \rho(O)} R(x) - \min_{x \in \mathcal{H}(\rho(O))} R(x). \quad (9)$$ + +Furthermore, the above bound is tight, as the supremum of the market maker's loss is exactly the value in Equation 9. + +**PROOF.** Let $q$ denote the final vector of quantities sold, $\nabla C(q)$ denote the final vector of instantaneous prices, and $o$ denote the true outcome. From Equations 6 and 7, we have that $C(q) = \nabla C(q) \cdot q - R(\nabla C(q))$ and $C(0) = -\min_{x \in \mathcal{H}(\rho(O))} R(x)$. 
The difference between the amount that the market maker must pay out and the amount that +---PAGE_BREAK--- + +the market maker has previously collected when outcome o happens is + +$$ +\begin{align*} +& \rho(\mathbf{o}) \cdot \mathbf{q} - C(\mathbf{q}) + C(\mathbf{0}) \\ +&= \rho(\mathbf{o}) \cdot \mathbf{q} - (\nabla C(\mathbf{q}) \cdot \mathbf{q} - R(\nabla C(\mathbf{q}))) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathbf{O}))} R(\mathbf{x}) \\ +&= \mathbf{q} \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q})) + R(\nabla C(\mathbf{q})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathbf{O}))} R(\mathbf{x}) + R(\rho(\mathbf{o})) - R(\rho(\mathbf{o})) \\ +&= R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathbf{O}))} R(\mathbf{x}) - (R(\rho(\mathbf{o})) - R(\nabla C(\mathbf{q})) - \mathbf{q} \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q}))) \tag{10} \\ +&\le R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathbf{O}))} R(\mathbf{x}) - (R(\rho(\mathbf{o})) - R(\nabla C(\mathbf{q})) - \nabla R(\nabla C(\mathbf{q})) \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q}))) \\ +&= R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathbf{O}))} R(\mathbf{x}) - D_R(\rho(\mathbf{o}), \nabla C(\mathbf{q})), +\end{align*} +$$ + +where $D_R$ is the Bregman divergence with respect to $R$, as defined above. The first equality follows from Equation 8. The inequality follows from the first-order optimality condition for convex optimization, which says that for any convex and differentiable $f$ defined on the domain $\Pi$, if $f$ is minimized at $x$, then + +$$ \nabla f(x) \cdot (y - x) \ge 0 \text{ for any } y \in \Pi. $$ + +Consider $f(x) = R(x) - q \cdot x$. The minimum of this function occurs at $x = \nabla C(q)$ via the duality assumption. Plugging in $y = \rho(o)$ yields the inequality. 
+ +Since the divergence is always nonnegative, this is upper bounded by $R(\rho(o)) - \min_{x \in \mathcal{H}(\rho(O))} R(x)$, which is in turn upper bounded by $\sup_{x \in \rho(O)} R(x) - \min_{x \in \mathcal{H}(\rho(O))} R(x)$. + +Finally, we show that this loss bound is tight. First, select any $\epsilon > 0$. Choose an outcome $o$ so that $\sup_{o' \in O} R(\rho(o')) - R(\rho(o)) < \epsilon/2$. Next, choose some $q'$ so that $D_R(\rho(o), \nabla C(q')) < \epsilon/2$. This is achievable because the space of gradients of $C$ is assumed to span relint($\mathcal{H}(\rho(O))$) via Theorem 3.2, and so we can ensure that $\nabla C(q')$ is arbitrarily close to $\rho(o)$. Finally, let $q := \nabla R(\nabla C(q'))$, and observe that by construction we have $\nabla C(q) = \nabla C(q')$. To compute the market maker's loss for this particular choice of $q$ and $o$, we apply Equation 10 to obtain: + +$$ +\begin{align*} +&R(\rho(\boldsymbol{o})) - \min_{\boldsymbol{x} \in \mathcal{H}(\rho(O))} R(\boldsymbol{x}) - (R(\rho(\boldsymbol{o})) - R(\nabla C(\boldsymbol{q})) - \boldsymbol{q} \cdot (\rho(\boldsymbol{o}) - \nabla C(\boldsymbol{q}))) \\ +&= R(\rho(\boldsymbol{o})) - \min_{\boldsymbol{x} \in \mathcal{H}(\rho(O))} R(\boldsymbol{x}) - D_R(\rho(\boldsymbol{o}), \nabla C(\boldsymbol{q})) \\ +&> \sup_{\boldsymbol{o}' \in O} R(\rho(\boldsymbol{o}')) - \min_{\boldsymbol{x} \in \mathcal{H}(\rho(O))} R(\boldsymbol{x}) - \epsilon +\end{align*} +$$ + +where the first equality holds by the definition of the Bregman divergence, because $q = \nabla R(\nabla C(q))$. $\square$ + +This theorem tells us that as long as the conjugate function is bounded on $\mathcal{H}(\rho(O))$, the market maker's worst-case loss is also bounded.⁶ It says further that this loss is actually realized, for a particular outcome $o$, at least when the price vector approaches $\rho(o)$. 
This suggests that loss to the market maker is worst when the traders are the most certain about the outcome. + +### 4.2.2. Bounding Information Loss. +Information loss can occur when securities are sold in discrete quantities (for example, single units), as they are in most real-world markets. + +⁶In Section 6, we will state a more general, stronger bound on market maker's loss capturing the intuitive notion that the market maker's profits should be higher when the distance between the final vector of prices and the payoff vector $\rho(o)$ of the true outcome $o$ is large; see Theorem 6.2. +---PAGE_BREAK--- + +Without the ability to purchase arbitrarily small bundles, traders may not be able to change the market prices to reflect their true beliefs about the expected payoff of each security, even if expressiveness is satisfied. We will argue that the amount of information loss is captured by the market's bid-ask spread for the smallest trading unit. Given some $q$, the current bid-ask spread of security bundle $r$ is defined to be $(C(q+r) - C(q)) - (C(q) - C(q-r))$. This is simply the difference between the current cost of buying the bundle $r$ and the current price at which $r$ could be sold. + +To see how the bid-ask spread relates to information loss, suppose that the current vector of quantities sold is $q$. If securities must be sold in unit chunks, a rational, risk-neutral trader will not buy security $i$ unless she believes the expected payoff of this security is at least $C(q+e^i) - C(q)$, where $e^i$ is the vector that has value 1 at its $i$th element and 0 everywhere else. Similarly, she will not sell security $i$ unless she believes the expected payoff is at most $C(q) - C(q-e^i)$. If her estimate of the expected payoff of the security is between these two values, she has no incentive to buy or sell the security. 
In this case, it is only possible to infer that the trader believes the true expected payoff lies somewhere in the range $[C(q) - C(q-e^i), C(q+e^i) - C(q)]$. The bid-ask spread is precisely the size of this range. + +The bid-ask spread depends on how fast instantaneous prices change as securities are bought or sold. Intuitively, the bid-ask spread relates to the depth of the market. When the bid-ask spread is large, new purchases or sales can change the prices of the securities dramatically; essentially, the market is shallow. When the bid-ask spread is small, purchases or sales may only move the prices slightly; the market is deep. Based on this intuition, for complete markets, Chen and Pennock [2007] use the inverse of $\partial^2 C(q)/\partial q_i^2$ to capture the notion of market depth for each security $i$ independently. In a similar spirit, we define a *market depth parameter* $\beta$. Larger values of $\beta$ correspond to deeper markets. We will bound the bid-ask spread in terms of this parameter, and use this parameter to show that there exists a clear tradeoff between worst-case monetary loss and information loss; this will be formalized in Theorem 4.7 below. + +To simplify discussion, assume that $C$ is twice-differentiable. Our parameter $\beta$ is related to the curvature of $C$. Given any unit vector $v$, the curvature (i.e., second derivative) of $C$ at $q$ in the direction of $v$ can be calculated as $v^\top \nabla^2 C(q)v$, where $\nabla^2 C(q)$ is the Hessian of $C$ at $q$. Furthermore, for any unit vector $v$, $v^\top \nabla^2 C(q)v$ is lower bounded by the smallest eigenvalue and upper bounded by the largest eigenvalue of $\nabla^2 C(q)$. To see this, note that the Hessian is a symmetric matrix, and therefore has $K$ linearly independent eigenvectors, each normalized to have length one. Let $u_i$ be the $i$th unit eigenvector of $\nabla^2 C(q)$ corresponding to eigenvalue $\lambda_i$. $\lambda_i$ is nonnegative due to convexity of $C$. 
Any unit vector $v$ can be represented as a linear combination of the $K$ unit eigenvectors, $v = \sum_i a_i u_i$ with $\sum_i a_i^2 = 1$. For any orthogonal eigenvectors $u_i$ and $u_j$, it is easy to see that $u_i^\top \nabla^2 C(q) u_i = \lambda_i$ and $u_i^\top \nabla^2 C(q) u_j = 0$. Thus, $v^\top \nabla^2 C(q) v = \sum_i a_i^2 \lambda_i$, which lies in $[\min_i \lambda_i, \max_i \lambda_i]$. + +*Definition 4.5.* For any duality-based cost function market maker with twice-differentiable cost function $C$, the *market depth parameter* $\beta(q)$ for a quantity vector $q$ is defined as $\beta(q) = 1/\lambda_C(q)$, where $\lambda_C(q)$ is the largest eigenvalue of $\nabla^2 C(q)$, the Hessian of $C$ at $q$. The worst-case market depth is $\beta = \inf_{q \in \mathbb{R}^K} \beta(q)$. + +As described above, this definition of worst-case market depth implies that $1/\beta$ is an upper bound on the curvature of $C$. We will derive the upper bound of the bid-ask spread and the lower bound of the worst-case loss of the market maker in terms of $\beta$. Our derivation makes use of the following lemma that establishes a convenient relationship between the Bregman divergence of a convex function $f$ and the eigenvalues of the Hessian of $f$. The proof of the lemma is in Appendix B. +---PAGE_BREAK--- + +**LEMMA 4.6.** Let $f(\mathbf{x})$ be a twice-differentiable convex function. If for all $\mathbf{x} \in \text{dom}(f)$, every eigenvalue of $\nabla^2 f(\mathbf{x})$ falls in the set $[a, b]$, $a \le b$, then for any $\mathbf{x}, \mathbf{x}' \in \text{dom}(f)$, + +$$ \frac{a\|\mathbf{x} - \mathbf{x}'\|^2}{2} \le D_f(\mathbf{x}, \mathbf{x}') \le \frac{b\|\mathbf{x} - \mathbf{x}'\|^2}{2}. \quad (11) $$ + +We now present a theorem showing an inherent tension between worst-case monetary loss and information loss. Here $\text{diam}(\mathcal{H}(\rho(\mathcal{O})))$ denotes the diameter of the hull of the payoff vectors for each outcome. 
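Lemma 4.6 itself is easy to sanity-check numerically before we apply it. For a quadratic $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^\top A \mathbf{x}$ the Hessian is $A$ everywhere, so the divergence should be sandwiched exactly as in Equation (11) (a small sketch; the matrix $A$ is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # symmetric positive definite (illustrative)

def f(x):
    return 0.5 * x @ A @ x

def bregman(x, y):
    # D_f(x, y) = f(x) - f(y) - grad f(y) . (x - y)
    return f(x) - f(y) - (A @ y) @ (x - y)

a, b = np.linalg.eigvalsh(A)  # smallest and largest eigenvalue of the Hessian
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d2 = np.sum((x - y) ** 2)
    # Equation (11): (a/2)||x - x'||^2 <= D_f(x, x') <= (b/2)||x - x'||^2
    assert a * d2 / 2 - 1e-9 <= bregman(x, y) <= b * d2 / 2 + 1e-9
```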
+ +**THEOREM 4.7.** For any duality-based cost function market maker with twice differentiable $C$ and worst-case market depth $\beta$, the bid-ask spread for bundle $r$ with previous purchases $q$ is no more than $\|\mathbf{r}\|^2/\beta$. The worst-case monetary loss of the market maker is at least $\beta \cdot \text{diam}^2(\mathcal{H}(\rho(\mathcal{O})))/8$. + +**PROOF.** The bid-ask spread can be written in terms of Bregman divergences. In particular, $C(q+r) - C(q) - (C(q) - C(q-r)) = D_C(q+r, q) + D_C(q-r, q)$. According to Lemma 4.6, because $1/\beta$ is the upper bound of the eigenvalues of $\nabla^2 C(q)$ at any $q$, both $D_C(q+r, q)$ and $D_C(q-r, q)$ are upper bounded by $\|\mathbf{r}\|^2/(2\beta)$. Thus, $C(q+r) - C(q) - (C(q) - C(q-r)) \le \|\mathbf{r}\|^2/\beta$. + +Let $\mathbf{x}_0 = \arg\min_{x \in \Pi} R(\mathbf{x})$. The first-order optimality condition for convex optimization gives that $\nabla R(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) \ge 0$ for all $\mathbf{x} \in \Pi$. According to Theorem 4.4, the worst-case loss of the market maker is + +$$ +\begin{align*} +\sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) &= \sup_{\mathbf{x} \in \rho(\mathcal{O})} (R(\mathbf{x}) - R(\mathbf{x}_0)) \\ +&= \sup_{\mathbf{x} \in \rho(\mathcal{O})} (D_R(\mathbf{x}, \mathbf{x}_0) + \nabla R(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)) \\ +&\ge \sup_{\mathbf{x} \in \rho(\mathcal{O})} D_R(\mathbf{x}, \mathbf{x}_0). +\end{align*} +$$ + +Because $C$ is twice-differentiable, for any $q$ such that $\nabla C(q) \in \text{relint}(\Pi)$, we have a correspondence between the Hessian of $C$ at $q$ and the Hessian of $R$ at $\nabla C(q)$. More precisely, we have that $u^\top\nabla^2 C(q)u = u^\top\nabla^{-2}R(\nabla C(q))u$ for any $u = x - x'$ with $x, x' \in \Pi$, where $\nabla^{-2}R(\nabla C(q))$ denotes the inverse of the Hessian of $R$ at $\nabla C(q)$. 
(See, for example, Gorni [1991] for more on the second-order properties of convex functions.) This means that $\beta(q)$ is equivalently defined as the smallest eigenvalue of $\nabla^2 R(\nabla C(q))|_{\Pi}$; that is, where we consider the second derivative only within the price region $\Pi$. Thus, $\beta$ lower bounds the eigenvalues of $\nabla^2 R(x)$ for all $x \in \Pi$. + +Applying Lemma 4.6, we have $D_R(x, x_0) \ge \frac{\beta}{2} \|x - x_0\|^2$. Regardless of where $x_0$ lies, there is some $x$ with $\|x - x_0\| \ge \text{diam}(\mathcal{H}(\rho(\mathcal{O}))) / 2$, since $\mathcal{H}(\rho(\mathcal{O}))$ contains two points at distance $\text{diam}(\mathcal{H}(\rho(\mathcal{O})))$ and $x_0$ cannot be within less than half this distance of both. The loss is therefore at least $(\beta/2)(\text{diam}(\mathcal{H}(\rho(\mathcal{O})))/2)^2 = \beta \cdot \text{diam}^2(\mathcal{H}(\rho(\mathcal{O})))/8$, which finishes the proof. $\square$ + +We can see that there is a direct tradeoff between the upper bound⁷ of the bid-ask spread, which shrinks as $\beta$ grows, and the lower bound of the worst-case loss of the market maker, which grows linearly in $\beta$. This tradeoff is very intuitive. When the market is shallow (small $\beta$), small trades have a large impact on market prices, and traders cannot purchase too many shares of the same security without paying a lot. When the market is deep (large $\beta$), prices change slowly, allowing the market maker + +⁷Strictly speaking, as we are emphasizing the necessary tradeoff between bid-ask spread and worst-case loss, we should have a *lower bound* on the bid-ask spread. On the other hand, if the worst-case market depth parameter is $\beta$ then there is some $q$ and $r$ such that $D_C(q+r, q)/\|\mathbf{r}\|^2 \approx 1/(2\beta)$ and this approximation can be made arbitrarily tight for small enough $r$ when $C$ is twice differentiable. +---PAGE_BREAK--- + +to gain more precise information, but simultaneously forcing the market maker to take on more risk since many shares of a security can be purchased at prices that are potentially too low. This tradeoff can be adjusted by scaling $R$, which scales $\beta$. This is analogous to adjusting the “liquidity parameter” $b$ of the LMSR. + +### 4.3. 
Selecting a Conjugate Function + +We have seen that the choice of the conjugate function $R$ impacts market properties such as worst-case loss and information loss. We now explore this choice in more detail. In many situations, the ideal choice of the conjugate is a function of the form + +$$R(x) := \frac{\lambda}{2} \|x - x_0\|^2. \quad (12)$$ + +Here $R(x)$ is simply the squared Euclidean distance between $x$ and an initial price vector $x_0 \in \Pi$, scaled by $\lambda/2$. By utilizing this quadratic conjugate function, we achieve a market depth $\beta(q)$ that is uniformly $\lambda$ for any $q$ for which $\nabla C(q) \in \text{relint}(\Pi)$. Furthermore, if $x_0$ is chosen as the “center” of $\Pi$, namely $x_0 = \arg\min_{x \in \Pi} \max_{y \in \Pi} \|x - y\|$, then the worst-case loss of the market maker is $\max_{x \in \Pi} R(x) = (\lambda/8)\text{diam}^2(\Pi)$. While the market maker can tune $\lambda$ appropriately according to the desired tradeoff between worst-case market depth and worst-case loss, the tradeoff is tightest when $R$ has a Hessian that is uniformly a scaled identity matrix, or more precisely where $R$ takes the form in Equation 12. + +Unfortunately, by selecting a conjugate of this form, or any $R$ with bounded derivative, the market maker does inherit one potentially undesirable property: security prices may become constant when $\nabla C(q)$ reaches a point at relbnd($\Pi$), the relative boundary of $\Pi$. That is, if we arrive at a total demand $q$ where $\nabla C(q) = \rho(o)$ for some outcome $o$, our mechanism begins offering securities at a price equal to the best-case payoff, akin to asking someone to bet a dollar for the chance to possibly win a dollar. The Quad-SCPM for complete markets is known to exhibit this behavior [Agrawal et al. 2011]. + +To avoid these undesirable pricing scenarios, it is sufficient to require that our conjugate function satisfies one condition. 
We say that a convex function $R$ defined on $\Pi$ is a pseudo-barrier⁸ for $\Pi$ if $\|\nabla R(x_t)\| \to \infty$ for any sequence of points $x_1, x_2, \dots \in \Pi$ which tends towards relbnd($\Pi$). If we require our conjugate function $R$ to be a pseudo-barrier, we are guaranteed that the instantaneous price vector $\nabla C(q)$ always lies in $\text{relint}(\Pi)$, and does not become constant near the boundary. + +It is important to note that, while it is desirable that $\|\nabla R(x_t)\| \to \infty$ as $x_t$ approaches relbnd($\Pi$), it is generally not desirable that $R(x_t) \to \infty$. Recall that the market maker's worst-case loss grows with the maximum value of $R$ on $\Pi$, and thus we restrict our attention to conjugate functions that are bounded on the domain. A perfect example of a convex function that is simultaneously bounded and a pseudo-barrier is the negative entropy function $H(x) = \sum_i x_i \log x_i$, defined on the $n$-simplex $\Delta_n$. It is perhaps no surprise that the LMSR, the most common market mechanism for complete security spaces, can be described by the choice $R(x) := bH(x)$ where the price space $\Pi = \Delta_n$ [Agrawal et al. 2011; Chen and Vaughan 2010]. + +⁸We use the term pseudo-barrier to distinguish this from the typical definition of a barrier function on a set $\Pi$, which is a function that grows without bound towards the boundary of $\Pi$. The term *Legendre* was used by Cesa-Bianchi and Lugosi [2006] for a similar notion, which may have originated in Rockafellar [1970], yet this definition requires the stronger condition that $\Pi$ contains a nonempty interior. +---PAGE_BREAK--- + +**5. EXAMPLES OF COMPUTATIONALLY EFFICIENT MARKETS** + +In the previous section, we provided a general framework for designing markets on combinatorial or infinite outcome spaces. We now provide some examples of markets that can be operated efficiently using this framework. + +**5.1. 
Subset Betting** + +Recall the scenario described in Section 3.1 in which the outcome is a ranking of a set of $n$ competitors, such as $n$ horses in a race, represented as a permutation $\pi : [n] \to [n]$. Chen et al. [2007a] proposed a betting language, *subset betting*, in which traders can place bets $(i, j)$, for any candidate $i$ and any slot $j$, that pay out \$1 in the event that $\pi(i) = j$ and \$0 otherwise.⁹ Chen et al. [2008a] showed that pricing bets of this form using the LMSR is #P-hard and provided an algorithm for approximating the prices by exploiting the structure of the market. Using our framework, it is simple to design a computationally efficient market for securities of this form. + +In order to set up such a combinatorial market within our framework, we must be able to efficiently work with the convex hull of the payoff vectors for each outcome. Notice that, for an outcome $\pi$, the associated payoff can be described by a matrix $M_\pi$, with $M_\pi(i,j) = I[\pi(i) = j]$, where $I[\cdot]$ is the indicator function. Taking this one step further, it is easy to verify that the convex hull of the set of permutation matrices is precisely the set of *doubly stochastic matrices* (this is the Birkhoff–von Neumann theorem), that is, the set + +$$ \Pi = \left\{ X \in \mathbb{R}^{n \times n}_{\ge 0} : \sum_{i'=1}^{n} X(i', j) = \sum_{j'=1}^{n} X(i, j') = 1 \ \forall i, j \right\}, $$ + +where $X(i, j)$ represents the element at the $i$th row and $j$th column of the matrix $X$. Notice, importantly, that this set is described by only $n^2$ nonnegative variables and $O(n)$ equality constraints. + +To fully specify the market maker, we must also select a conjugate function $R$ for our price space. While the quadratic conjugate function is an option, there is a natural extension of the negative entropy function, whose desirable properties were discussed in the previous section, for the space of stochastic matrices. 
For any $X \in \Pi$, let us set + +$$ R(X) = b \sum_{i,j} X(i,j) \log X(i,j) $$ + +for some parameter $b > 0$. The worst-case market depth is computed as the minimum of the smallest eigenvalue of the Hessian of $R$ within relint($\Pi$). This occurs when the $X$ matrix has all values $1/n$, hence the worst-case depth is $nb$. The worst-case loss, on the other hand, is easily computed as $bn \log n$. Note that this bound on worst-case loss is the same that would be obtained by running $n$ independent markets, one for each slot $j$, using the LMSR. + +**5.2. Sphere Betting** + +One important challenge of operating a combinatorial prediction market is to always maintain the logical consistency of security prices. Our framework offers a way to incorporate the constraints on security prices into pricing. Hence, in addition to combinatorial prediction markets, our framework can be used to design markets where security prices have some natural constraints due to their problem domains. + +⁹The original definition of subset betting allowed bets of the form "any candidate in set S will end up in slot j" or "candidate i will end up in one of the slots in set S." A bet of this form can be constructed easily using our betting language by bundling multiple securities. +---PAGE_BREAK--- + +We consider an example in which the outcome space is infinite. An object orbiting +the planet, perhaps a satellite, is predicted to fall to earth in the near future and will +land at an unknown location, which we would like to predict. We represent locations +on the earth as unit vectors $u \in \mathbb{R}^3$. The difficulty of this example arises from the fact +that the outcome must be a unit vector, imposing constraints on the three coordinates. +We will design a market with three securities, each corresponding to one coordinate +of the final location of the object. In particular, security $i$ will pay off $u_i + 1$ dollars if +the object lands in location $u$. 
(The addition of 1, while not strictly necessary, ensures +that the payoffs, and therefore prices, remain positive, though it will be necessary +for traders to sell securities to express certain beliefs.) This means that traders can +purchase security bundles $r \in \mathbb{R}^3$ and, when the object lands at a location $u$, receive +a payoff $(u+1) \cdot r$. Note that in this example, the outcome space is infinite, but the +security space is small. + +The price space $\mathcal{H}(\rho(O))$ for this market will be the 2-norm unit ball centered at 1. To construct a market for this scenario, let us make the simple choice of $R(x) = \lambda\|x-1\|^2$ for some parameter $\lambda > 0$. When $\|q\| \le 2\lambda$, there exists an $x$ such that $\nabla R(x) = q$. In particular, this is true for $x = (1/2)q/\lambda + 1$, and $q \cdot x - R(x)$ is maximized at this point. When $\|q\| > 2\lambda$, $q \cdot x - R(x)$ is maximized at an $x$ on the boundary of $\mathcal{H}(\rho(O))$. Specifically, it is maximized at $x = q/||q|| + 1$. From this, we can compute + +$$C(\mathbf{q}) = \begin{cases} \frac{1}{4\lambda} ||\mathbf{q}||^2 + \mathbf{q} \cdot \mathbf{1}, & \text{when } ||\mathbf{q}|| \le 2\lambda, \\ ||\mathbf{q}|| + \mathbf{q} \cdot \mathbf{1} - \lambda, & \text{when } ||\mathbf{q}|| > 2\lambda. \end{cases}$$ + +The market depth parameter $\beta$ is $2\lambda$; in fact, $\beta(x) = 2\lambda$ for any price vector $x$ in the interior of $\mathcal{H}(\rho(O))$. By Theorem 4.4, the worst-case loss of the market maker is no more than $\lambda$, which is precisely the lower bound implied by Theorem 4.7. Finally, the divergence $D_C(q+r, q) \le \|r\|^2/(4\lambda)$ for all $q, r$, with equality when $\|q\|, \|q+r\| \le 2\lambda$, implying that the bid-ask spread scales linearly with $\|r\|^2/\lambda$. + +We note that for this particular prediction problem, if we try to predict the latitude and longitude of the landing location, we don't have any constraints on prices. 
In particular, we can have two securities that pay off linearly with the latitude and longitude of the landing location respectively. These two securities are independent and can be traded in two independent markets. + +**6. COMPUTATIONAL COMPLEXITY AND RELAXATIONS** + +In Section 3, we argued that the space of feasible price vectors should be precisely +$\mathcal{H}(\rho(O))$, the convex hull of the payoff vectors for each outcome. In each of our exam- +ples, we have discussed market scenarios for which this hull has a polynomial number +of constraints, allowing us to efficiently calculate prices via convex optimization. Un- +fortunately, one should not necessarily expect that a given payoff function and outcome +space will lead to an efficiently describable convex hull. In this section, we explore a +couple of approaches to overcome such complexity challenges. First, we discuss the +case in which $\mathcal{H}(\rho(O))$ has exponentially (or infinitely) many constraints yet gives rise +to a separation oracle. Second, we show that the price space $\Pi$ can indeed be relaxed +beyond $\mathcal{H}(\rho(O))$ without increasing the risk to the market maker. Finally, we show +how this relaxation applies in practice. + +**6.1. Separation Oracles** + +If we encounter a convex hull $\mathcal{H}(\rho(O))$ with exponentially-many constraints, all may +not be lost. In order to calculate prices, we need to solve the optimization problem +$\max_{x \in \mathcal{H}(\rho(O))} q \cdot x - R(x)$. Under certain circumstances this can still be solved efficiently. +---PAGE_BREAK--- + +Consider a convex optimization problem with a concave objective function $f(x)$ and constraints $g_i(x) \le 0$ for all $i$ in some index set $I$. 
That is, we want to solve:

$$
\begin{array}{ll}
\max & f(\mathbf{x}) \\
\text{s.t.} & \mathbf{x} \in \mathbb{R}^d \\
& g_i(\mathbf{x}) \le 0 \quad \forall i \in I
\end{array}
$$

This can be converted to a problem with a linear objective in the standard way:

$$
\begin{array}{ll}
\max & c \\
\text{s.t.} & \mathbf{x} \in \mathbb{R}^d, \ c \in \mathbb{R} \\
& f(\mathbf{x}) \geq c \\
& g_i(\mathbf{x}) \leq 0 \quad \forall i \in I
\end{array}
$$

Of course, if $I$ is an exponentially or infinitely large set, we will have trouble solving this problem directly. On the other hand, the constraint set may admit an efficient separation oracle, defined as a function that takes as input a point $(\mathbf{x}, c)$ and returns true if all the necessary constraints are satisfied or, otherwise, returns false and specifies a violated constraint.¹⁰ Given an efficient separation oracle, one has access to alternative methods for optimization, the most famous being Khachiyan's ellipsoid method, that run in polynomial time. For more details see, for example, Grötschel et al. [1981].

This suggests that a fruitful direction for designing computationally efficient market makers is to examine the pricing problem on an instance-by-instance basis and, for a particular instance of interest, leverage the structure of the instance to develop an efficient algorithm for solving the specific separation problem. We leave this for future research.

## 6.2. Relaxation of the Price Space

When dealing with a convex hull $\mathcal{H}(\rho(O))$ that has a prohibitively large constraint set and does not admit an efficient separation oracle, we still have one tool at our disposal: we can modify $\mathcal{H}(\rho(O))$ to get an alternate price space $\Pi$ which we can work with efficiently. Recall that in Section 3, we arrived at the requirement that $\Pi = \mathcal{H}(\rho(O))$ as a necessary conclusion of the proposed conditions on our market maker.
If we wish to violate this requirement, we need to consider which conditions must be weakened and revise the resulting guarantees from Section 3.

We will continue to construct duality-based cost function market makers in the usual way, via the tuple $(O, \rho, \Pi, R)$, where $\Pi$ is still a convex compact set of feasible prices, but we now allow $\Pi$ to be distinct from $\mathcal{H}(\rho(O))$. Not surprisingly, the choice of $\Pi$ will affect the interests of the traders and the market maker. We prove several claims which will aid us in our market design. Theorem 6.1 tells us that the expressiveness condition should not be relaxed, while Theorem 6.2 tells us that the no-arbitrage condition can be. Together, these imply that we may safely choose $\Pi$ to be a *superset* of $\mathcal{H}(\rho(O))$.

The first (perhaps surprising) theorem tells us that expressiveness is not only useful for information aggregation, it is actually necessary for the market maker to avoid unbounded loss. The proof involves showing that if $o$ is the final outcome and $\rho(o) \notin \Pi$, then it is possible to make an infinite sequence of trades such that each trade causes a constant amount of loss to the market maker.

¹⁰More precisely, a separation oracle returns a separating hyperplane that divides the input from the feasible set.

**THEOREM 6.1.** *For any duality-based cost function market maker, the worst-case loss of the market maker is unbounded if $\rho(O) \not\subseteq \Pi$.*

**PROOF.** Consider some outcome $o$ such that $\rho(o) \notin \Pi$. By definition, the feasible price set $\Pi = \operatorname{cl}(\{\nabla C(q) : q \in \mathbb{R}^K\})$ is compact. Because $\rho(o) \notin \Pi$, there exists a hyperplane that strongly separates $\Pi$ and $\rho(o)$. In other words, there exists a $k > 0$ such that $\|\rho(o) - \nabla C(q)\| \ge k$ for all $q$.

When outcome $o$ is realized, $B(q) = \rho(o) \cdot q - C(q) + C(0)$ is the market maker's loss given $q$.
We have $\nabla B(q) = \rho(o) - \nabla C(q)$, which represents the instantaneous change of the market maker's loss. For infinitesimal $\epsilon > 0$, let $q' = q + \epsilon(\rho(o) - \nabla C(q))$. Then

$$
\begin{align*}
B(q') &= B(q) + \nabla B(q) \cdot [\epsilon(\rho(o) - \nabla C(q))] \\
&= B(q) + \epsilon \|\rho(o) - \nabla C(q)\|^2 \geq B(q) + \epsilon k^2.
\end{align*}
$$

This shows that for any $q$ we can find a $q'$ such that the market maker's loss increases by at least $\epsilon k^2$. This process can be repeated indefinitely. Hence, we conclude that the market maker's loss is unbounded. $\square$

In the following theorem, which is a simple extension of Theorem 4.4, we see that including additional price vectors in $\Pi$ does not adversely impact the market maker's worst-case loss, despite the fact that the no-arbitrage condition is violated.

**THEOREM 6.2.** *Consider any duality-based cost function market maker with $R$ and $\Pi$ satisfying $\sup_{x \in \mathcal{H}(\rho(O))} R(x) < \infty$ and $\mathcal{H}(\rho(O)) \subseteq \Pi$. Assume that the initial price vector satisfies $\nabla C(0) \in \mathcal{H}(\rho(O))$. Let $q$ denote the vector of quantities sold and $o$ denote the true outcome. The monetary loss of the market maker is no more than*

$$ R(\rho(o)) - \min_{x \in \mathcal{H}(\rho(O))} R(x) - D_R(\rho(o), \nabla C(q)). $$

**PROOF.** This proof is nearly identical to the proof of Theorem 4.4. The only major difference is that now $C(0) = -\min_{x \in \Pi} R(x)$ instead of $C(0) = -\min_{x \in \mathcal{H}(\rho(O))} R(x)$, but these coincide since we have assumed that $\nabla C(0) \in \mathcal{H}(\rho(O))$. $R(\rho(o))$ is still well-defined and finite since we have assumed that $\mathcal{H}(\rho(O)) \subseteq \Pi$. $\square$

This tells us that expanding $\Pi$ can only help the market maker; increasing the range of $\nabla C(q)$ can only increase the divergence term. This may seem somewhat counterintuitive.
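Theorem 6.2 can be illustrated numerically. In the sketch below (an assumed two-outcome market with a quadratic regularizer; all parameter choices are illustrative), the hull is the simplex edge between the payoff vectors $e_1$ and $e_2$, the price space is relaxed to the whole box $[0,1]^2$, and the realized loss never exceeds the bound $R(\rho(o)) - \min_{x \in \mathcal{H}} R(x) = \lambda/2$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                    # depth parameter (illustrative)
c = np.array([0.5, 0.5])     # initial price, inside the hull

def C(q):
    # cost for R(x) = lam*||x - c||^2 with relaxed price space Pi = [0,1]^2;
    # the sup of q.x - R(x) decomposes coordinate-wise
    x = np.clip(c + q / (2 * lam), 0.0, 1.0)
    return float(q @ x - lam * np.sum((x - c) ** 2))

payoffs = np.eye(2)          # two outcomes; hull = edge of the simplex
bound = lam * 0.5            # R(rho(o)) - min_{hull} R = lam/2 here

q = np.zeros(2)
for _ in range(500):
    q += rng.normal(scale=0.3, size=2)   # a sequence of arbitrary trades
    revenue = C(q) - C(np.zeros(2))
    worst_loss = max(payoffs[o] @ q - revenue for o in range(2))
    assert worst_loss <= bound + 1e-9
print("worst-case loss stayed below the bound", bound)
```

Even though traders can push prices off the simplex edge into the box, the market maker's loss bound is unaffected, exactly as the theorem predicts.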
We originally required that $\Pi \subseteq \mathcal{H}(\rho(O))$ as a consequence of the no-arbitrage condition, and by relaxing this condition, we are providing traders with potential arbitrage opportunities. However, these arbitrage opportunities do not hurt the market maker. As long as the initial price vector lies in $\mathcal{H}(\rho(O))$, any such situations where a trader can earn a guaranteed profit are effectively created (and paid for) by other traders! In fact, if the final price vector $\nabla C(q)$ falls outside the convex hull, the divergence term will be strictly positive, improving the bound.

To elaborate on this point, let us consider an example where $\Pi$ is strictly larger than $\mathcal{H}(\rho(O))$. Let $q$ be the current vector of purchases, and assume the associated price vector $x = \nabla C(q)$ lies in the interior of $\mathcal{H}(\rho(O))$. Consider a trader who purchases a bundle $r$ such that the new price vector leaves this set, i.e., $y := \nabla C(q+r) \notin \mathcal{H}(\rho(O))$. We claim that this choice can be strictly improved in the sense that there is an alternative bundle $r'$ whose associated profit, for any outcome $o$, is strictly greater than the profit for $r$.

For simplicity, assume $y$ is an interior point of $\Pi \setminus \mathcal{H}(\rho(O))$ so that $q+r = \nabla R(y)$. Define $\pi(y) := \arg\min_{y' \in \mathcal{H}(\rho(O))} D_R(y', y)$, the minimum divergence projection of $y$ onto $\mathcal{H}(\rho(O))$. The alternative bundle we consider is $r' = \nabla R(\pi(y)) - q$. Our trader pays $C(\mathbf{q}+\mathbf{r}) - C(\mathbf{q}+\mathbf{r}')$ less to purchase $\mathbf{r}'$ than to purchase $\mathbf{r}$.
Hence, for any outcome $\mathbf{o}$, we see that the increased profit for $\mathbf{r}'$ over $\mathbf{r}$ is

$$
\begin{align}
\rho(\mathbf{o}) \cdot (\mathbf{r}' - \mathbf{r}) - C(\mathbf{q} + \mathbf{r}') + C(\mathbf{q} + \mathbf{r}) &> \rho(\mathbf{o}) \cdot (\mathbf{r}' - \mathbf{r}) + \nabla C(\mathbf{q} + \mathbf{r}') \cdot (\mathbf{r} - \mathbf{r}') \\
&= (\rho(\mathbf{o}) - \pi(\mathbf{y})) \cdot (\mathbf{r}' - \mathbf{r}). \tag{13}
\end{align}
$$

Notice that we achieve strict inequality precisely because $\nabla C(\mathbf{q} + \mathbf{r}') = \pi(\mathbf{y}) \neq \mathbf{y} = \nabla C(\mathbf{q} + \mathbf{r})$. Now use the optimality condition for $\pi(\mathbf{y})$ to see that, since $\rho(\mathbf{o}) \in \mathcal{H}(\rho(\mathcal{O}))$, we have $\nabla_{\pi(\mathbf{y})}D_R(\pi(\mathbf{y}), \mathbf{y}) \cdot (\rho(\mathbf{o}) - \pi(\mathbf{y})) \ge 0$. It is easy to check that $\nabla_{\pi(\mathbf{y})}D_R(\pi(\mathbf{y}), \mathbf{y}) = \nabla R(\pi(\mathbf{y})) - \nabla R(\mathbf{y}) = \mathbf{r}' - \mathbf{r}$. Combining this last inequality with (13) tells us that the increase in profit is strictly greater than $(\rho(\mathbf{o}) - \pi(\mathbf{y})) \cdot (\mathbf{r}' - \mathbf{r}) \ge 0$. Simply put, the trader receives a guaranteed positive increase in profit for any outcome $\mathbf{o}$.

The next theorem shows that any time the price vector lies outside of $\mathcal{H}(\rho(\mathcal{O}))$, traders can profit by moving it back inside. The proof uses a nice application of minimax duality for convex-concave functions.

**THEOREM 6.3.** *For any duality-based cost function market maker, given a current quantity vector $\mathbf{q}_0$ with current price vector $\nabla C(\mathbf{q}_0) = \mathbf{x}_0$, a trader has the opportunity to earn a guaranteed profit of at least $\min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} D_R(\mathbf{x}, \mathbf{x}_0)$.*
**PROOF.** A trader looking to earn a guaranteed profit when the current quantity is $\mathbf{q}_0$ hopes to purchase a bundle $\mathbf{r}$ so that the worst-case profit $\min_{\mathbf{o} \in \mathcal{O}} \rho(\mathbf{o}) \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0)$ is as large as possible. Notice that this quantity is nonnegative, since $\mathbf{r} = \mathbf{0}$, which always yields 0 profit, is one option. Thus, a trader would like to solve the following objective:

$$
\begin{align*}
& \max_{\mathbf{r} \in \mathbb{R}^K} \min_{\mathbf{o} \in \mathcal{O}} \rho(\mathbf{o}) \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) \\
&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} \max_{\mathbf{r} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) \\
&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} \max_{\mathbf{r} \in \mathbb{R}^K} \mathbf{x} \cdot (\mathbf{q}_0 + \mathbf{r}) - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) + C(\mathbf{q}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) + \mathbf{x}_0 \cdot \mathbf{q}_0 - R(\mathbf{x}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
&\geq \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} D_R(\mathbf{x}, \mathbf{x}_0).
\end{align*}
$$

The first equality, with the $\min/\max$ swap, holds via Sion's minimax theorem [Sion 1958]. The last inequality follows from the first-order optimality condition of the solution $\mathbf{x}_0 = \arg\max_{\mathbf{x} \in \Pi} \mathbf{x} \cdot \mathbf{q}_0 - R(\mathbf{x})$, applied in the direction $\mathbf{x} - \mathbf{x}_0$, which is valid since $\mathbf{x} \in \Pi$. $\square$

When $\mathbf{x}_0 \in \mathcal{H}(\rho(\mathcal{O}))$, $D_R(\mathbf{x}, \mathbf{x}_0)$ is minimized when $\mathbf{x} = \mathbf{x}_0$ and the bound is vacuous, as we would expect.
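When instead $\mathbf{x}_0$ lies outside the hull, the theorem yields a strictly positive guaranteed profit. A numerical sketch (an assumed two-outcome market with quadratic $R$ and box price space; all numbers are illustrative) shows the bundle that moves the price back to the divergence projection earning exactly the bound $\min_{\mathbf{x}} D_R(\mathbf{x}, \mathbf{x}_0)$ under every outcome:

```python
import numpy as np

lam = 1.0  # R(x) = lam*||x||^2 over Pi = [0,1]^2 (illustrative)

def price(q):
    # coordinate-wise maximizer of q.x - R(x) over the box
    return np.clip(q / (2 * lam), 0.0, 1.0)

def C(q):
    x = price(q)
    return float(q @ x - lam * np.sum(x ** 2))

payoffs = np.eye(2)  # hull H = {x >= 0 : x1 + x2 = 1}

q0 = np.array([1.6, 1.4])
x0 = price(q0)                        # (0.8, 0.7), outside the hull
x_proj = x0 - (x0.sum() - 1.0) / 2    # divergence projection onto the hull
guaranteed = lam * np.sum((x_proj - x0) ** 2)  # min_x D_R(x, x0)

r = 2 * lam * x_proj - q0             # bundle moving the price to x_proj
for o in range(2):
    profit = payoffs[o] @ r - C(q0 + r) + C(q0)
    assert profit >= guaranteed - 1e-9
print("guaranteed profit for either outcome:", guaranteed)
```

For the quadratic regularizer the Bregman divergence is the scaled squared Euclidean distance, so the projection step is an ordinary Euclidean projection onto the simplex edge.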
The more interesting case occurs when the prices have fallen outside of $\mathcal{H}(\rho(O))$, in which case a trader is guaranteed a riskless profit by moving $\nabla C(q)$ to the closest point in $\mathcal{H}(\rho(O))$.

## 6.3. Pair Betting via Relaxation

We return our attention to the scenario in which the outcome is a ranking of $n$ competitors, as described in Section 3.1. Consider a complex market in which traders make arbitrary pair bets: for every $i, j$, a trader can purchase a security which pays out \$1 whenever $\pi(i) < \pi(j)$. Like subset bets, pricing pair bets using the LMSR is known to be #P-hard [Chen et al. 2008a].

We can represent the payoff structure of any such outcome $\pi$ by a matrix $M_{\pi}$ defined by

$$M_{\pi}(i,j) = \begin{cases} 1, & \text{if } \pi(i) < \pi(j), \\ \frac{1}{2}, & \text{if } i = j, \\ 0, & \text{if } \pi(i) > \pi(j). \end{cases}$$

We would like to choose our feasible price region as the set $\mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, where $S_n$ is the set of permutations on $[n]$. Unfortunately, the computation of this convex hull is necessarily hard: given only a separation oracle for the set $\mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, we could construct a linear program to solve the "minimum feedback arc set" problem, which is known to be NP-hard [Karp 1972].

On the positive side, we saw in the previous section that the market maker can work in a larger feasible price space without risking a larger loss. We thus relax our feasible price region $\Pi$ to the set of $n \times n$ real-valued matrices $X \in \mathbb{R}^{n^2}$ satisfying the intuitive set of constraints described in Section 3.1:

$$
\begin{align*}
X(i,j) &\ge 0 && \forall i,j \in [n] \\
X(i,j) &= 1 - X(j,i) && \forall i,j \in [n] \\
X(i,j) + X(j,k) + X(k,i) &\ge 1 && \forall i,j,k \in [n]
\end{align*}
$$

This relaxation was first discussed by Megiddo [1977], who referred to such matrices as *generalized order matrices*.
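The payoff matrices and the relaxed constraints are straightforward to check in code. A sketch (helper names are illustrative) verifies that every $M_{\pi}$, and hence every convex combination of them, satisfies the constraints above, confirming $\mathcal{H}(\{M_{\pi}\}) \subseteq \Pi$ for a small $n$:

```python
import numpy as np
from itertools import permutations

def order_matrix(perm):
    # M_pi(i, j) = 1 if i precedes j under pi, 1/2 on the diagonal
    n = len(perm)
    rank = {c: pos for pos, c in enumerate(perm)}
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = 0.5 if i == j else float(rank[i] < rank[j])
    return M

def is_generalized_order_matrix(X, tol=1e-9):
    n = X.shape[0]
    if (X < -tol).any() or not np.allclose(X + X.T, 1.0, atol=tol):
        return False
    # triple constraint X(i,j) + X(j,k) + X(k,i) >= 1
    return all(X[i, j] + X[j, k] + X[k, i] >= 1 - tol
               for i in range(n) for j in range(n) for k in range(n))

n = 4
rng = np.random.default_rng(2)
perms = list(permutations(range(n)))
assert all(is_generalized_order_matrix(order_matrix(p)) for p in perms)

# the constraints are linear, so any convex combination of order
# matrices also lies in the relaxed price space Pi
w = rng.dirichlet(np.ones(len(perms)))
mix = sum(wi * order_matrix(p) for wi, p in zip(w, perms))
assert is_generalized_order_matrix(mix)
print("all", len(perms), "order matrices and a random mixture lie in Pi")
```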
Megiddo proved that, for $n \le 4$, we do have $\Pi = \mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, but gave a counterexample showing strict containment for $n = 13$. By using this relaxed price space, the market maker allows traders to bring the price vector outside of the convex hull, yet enforces a set of basic (and natural) constraints on the prices. Such a market could be implemented with any strongly convex conjugate function (e.g., quadratic).

Notice that in this example, it is computationally hard in general for a trader to determine whether or not a particular price vector falls within the convex hull; if this were not the case, then we would be able to construct a separation oracle, and could price pair bets efficiently without the relaxation. Therefore, although arbitrage opportunities may be created, it is generally intractable for traders to find and exploit them.

# 7. RELATION TO ONLINE LEARNING

In this section, we use our framework to explore the striking mathematical connections between automated market makers and the class of Follow the Regularized Leader algorithms for online learning. While the problem of learning in an online environment appears quite different semantically from the problem of pricing securities in a market, we show that the two frameworks have a strong syntactic correspondence. We begin with a brief overview of no-regret learning and the online linear optimization problem.

## 7.1. Online Learning and Regret-Minimizing Algorithms

Perhaps the most canonical example of online, no-regret learning is the problem of *learning from expert advice*. In the expert setting, we imagine an algorithm that must make a sequence of predictions based on the advice of a set of $N$ experts and receive a corresponding sequence of losses.¹¹ The goal of the algorithm is to achieve a cumulative loss that is "almost as low" as the cumulative loss of the best performing expert in hindsight.
No statistical assumptions are made about these losses. Indeed, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.

Formally, at every time step $t \in \{1, \dots, T\}$, every expert $i \in \{1, \dots, N\}$ receives a loss $\ell_{i,t} \in [0, 1]$. The cumulative loss of expert $i$ at time $T$ is then defined as $L_{i,T} = \sum_{t=1}^{T} \ell_{i,t}$. An algorithm $\mathcal{A}$ maintains a weight $w_{i,t}$ for each expert $i$ at time $t$, where $\sum_{i=1}^{N} w_{i,t} = 1$. These weights can be viewed as a distribution over the experts. The algorithm then receives its own instantaneous loss $\ell_{\mathcal{A},t} = \sum_{i=1}^{N} w_{i,t}\ell_{i,t}$, which can be interpreted as the expected loss the algorithm would receive if it always chose an expert to follow according to the current distribution. The cumulative loss of $\mathcal{A}$ up to time $T$ is defined in the natural way as $L_{\mathcal{A},T} = \sum_{t=1}^{T} \ell_{\mathcal{A},t} = \sum_{t=1}^{T} \sum_{i=1}^{N} w_{i,t}\ell_{i,t}$. Below we use the symbols $\ell_t$, $L_t$, and $w_t$ to refer to the vector of losses, the vector of cumulative losses, and the vector of weights, respectively, on round $t$.

It is unreasonable to expect the algorithm to achieve a small cumulative loss if none of the experts perform well. For this reason, it is typical to measure the performance of an algorithm in terms of its *regret*, defined to be the difference between the cumulative loss of the algorithm and the loss of the best performing expert, that is,

$$L_{\mathcal{A},T} - \min_{i \in \{1, \dots, N\}} L_{i,T}.$$

An algorithm is said to have no regret if the average per-time-step regret approaches 0 as $T$ approaches infinity.

The popular Randomized Weighted Majority (WM) algorithm [Littlestone and Warmuth 1994; Freund and Schapire 1997] is an example of a no-regret algorithm.
Weighted Majority uses weights

$$w_{i,t} = \frac{e^{-\eta L_{i,t-1}}}{\sum_{j=1}^{N} e^{-\eta L_{j,t-1}}}, \quad (14)$$

where $\eta > 0$ is a tunable parameter known as the *learning rate*. It is well known that the regret of WM after $T$ trials can be bounded as

$$L_{WM(\eta),T} - \min_{i \in \{1, \dots, N\}} L_{i,T} \leq \eta T + \frac{\log N}{\eta}.$$

When $T$ is known in advance, setting $\eta = \sqrt{\log N/T}$ yields the standard $O(\sqrt{T \log N})$ regret bound.

It has been shown that the weights chosen by Weighted Majority are precisely those that minimize a combination of empirical loss and an entropy-based regularization term [Kivinen and Warmuth 1997; 1999; Helmbold and Warmuth 2009]. More specifically, the weight vector $w_t$ at time $t$ is precisely the solution to the following minimization problem:

$$\min_{w \in \Delta_N} w \cdot L_{t-1} - \frac{1}{\eta} H(w),$$

where $H$ is the entropy function, $H(w) := -\sum_{i=1}^{N} w_i \log w_i$. Indeed, Weighted Majority is an example of a broader class of algorithms collectively known as *Follow the Regularized Leader* (FTRL) algorithms [Shalev-Shwartz and Singer 2007; Hazan and Kale 2010; Hazan 2009]. The FTRL template can be applied to a wide class of learning problems that fall under a general framework commonly known as *online convex optimization* [Zinkevich 2003]. Other problems that fall into this framework include online linear pattern classification [Kivinen and Warmuth 1997], online Gaussian density estimation [Azoury and Warmuth 2001], and online portfolio selection [Cover 1991]. In Algorithm 1, we present a version of FTRL tailored to the *online linear optimization* problem, an extension of the expert setting in which weights $w_t$ are chosen from a fixed bounded convex action space $\mathcal{K} \subset \mathbb{R}^N$.

¹¹This framework could be formalized equally well in terms of rewards, but losses are more common in the literature.
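The Weighted Majority update in Equation 14 takes only a few lines. The following sketch (random losses standing in for an adversary; parameter choices are illustrative) runs WM and checks the measured regret against the bound stated above:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 2000, 10
eta = np.sqrt(np.log(N) / T)  # learning rate tuned as in the text

losses = rng.uniform(size=(T, N))     # adversary could pick these arbitrarily
L = np.zeros(N)                       # cumulative losses L_{t-1}
alg_loss = 0.0
for t in range(T):
    z = np.exp(-eta * (L - L.min()))  # shift by min for numerical stability
    w = z / z.sum()                   # Weighted Majority weights, Eq. (14)
    alg_loss += w @ losses[t]
    L += losses[t]

regret = alg_loss - L.min()
# regret bound from the text: eta*T + log(N)/eta = 2*sqrt(T log N)
assert regret <= eta * T + np.log(N) / eta + 1e-9
print(f"regret {regret:.2f} vs bound {2 * np.sqrt(T * np.log(N)):.2f}")
```

Shifting the cumulative losses by their minimum before exponentiating leaves the weights unchanged but avoids underflow when $\eta L_{i,t}$ grows large.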
Notice that the experts setting is just a special case of online linear optimization, where the set $\mathcal{K}$ is the $N$-simplex $\Delta_N$.

**ALGORITHM 1:** Follow the Regularized Leader (FTRL)

1: Input: convex compact decision set $\mathcal{K} \subset \mathbb{R}^N$
2: Input: strictly convex differentiable regularization function $\mathcal{R}(\cdot)$ defined on $\mathcal{K}$
3: Parameter: $\eta > 0$
4: Initialize: $\mathbf{L}_0 = \langle 0, \dots, 0 \rangle$
5: **for** $t = 1, \dots, T$ **do**
6:     The learner selects action $\mathbf{w}_t \in \mathcal{K}$ according to:

$$ \mathbf{w}_t := \underset{\mathbf{w} \in \mathcal{K}}{\operatorname{argmin}} \ \mathbf{L}_{t-1} \cdot \mathbf{w} + \frac{1}{\eta} \mathcal{R}(\mathbf{w}) \quad (15) $$

7:     Nature reveals $\ell_t$; the learner suffers loss $\ell_t \cdot \mathbf{w}_t$
8:     The learner updates $\mathbf{L}_t = \mathbf{L}_{t-1} + \ell_t$
9: **end for**

For a complete description of the FTRL algorithm, we refer the reader to the excellent notes of Rakhlin [2009]. We will make use of a result from these notes, but we first state two additional assumptions that we will use to make the connection to duality-based cost function market makers. In the remainder of this section, we use $\|\cdot\|$ to denote the L2 norm.

**ASSUMPTION 1.** For each time step $t$, $\|\ell_t\| \le 1$.

**ASSUMPTION 2.** The regularizer $\mathcal{R}(\cdot)$ has the Legendre property defined in Section 11.2 of Cesa-Bianchi and Lugosi [2006]: $\mathcal{R}$ is strictly convex on $\operatorname{relint}(\mathcal{K})$ and $\|\nabla \mathcal{R}(\mathbf{w})\| \to \infty$ as $\mathbf{w} \to \operatorname{relbnd}(\mathcal{K})$.

Under the latter assumption, the solution to Equation 15 will always occur in the relative interior of $\mathcal{K}$, which implies that the optimization is effectively unconstrained. We can now utilize Corollary 9 of Rakhlin [2009] to obtain the following.
**PROPOSITION 7.1.** *Under Assumptions 1 and 2, the FTRL algorithm enjoys the following regret bound: for any $\mathbf{w}^* \in \mathcal{K}$,*

$$ \sum_{t=1}^{T} \ell_t \cdot \mathbf{w}_t - \sum_{t=1}^{T} \ell_t \cdot \mathbf{w}^* \leq \frac{1}{\eta} \left( \mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1) - D_{\mathcal{R}}(\mathbf{w}^*, \mathbf{w}_{T+1}) + \sum_{t=1}^{T} D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \right). $$

This proposition may not be illuminating at first glance, but it expresses a fundamental tradeoff in the learning problem. If we choose a regularizer $\mathcal{R}$ with heavy curvature, or equivalently if we choose a small $\eta$, then given the nature of the optimization problem in Equation 15, we ensure that the updates $\mathbf{w}_t \to \mathbf{w}_{t+1}$ are "small" and hence $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1})$ will be small. On the other hand, we pay for either of these choices, since (a) the bound is proportional to $1/\eta$, and (b) the difference $\mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1)$ grows larger when $\mathcal{R}$ has more curvature.

Under certain reasonable assumptions on $\mathcal{R}$, it is possible to prove that $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le O(\eta^2)$. For example, if $\mathcal{R}$ is strongly convex (with respect to the L2 norm), then $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le \eta^2 \|\ell_t\|^2$. See Rakhlin [2009] for more details.

**COROLLARY 7.2.** *Suppose that there exists $B > 0$ such that for every $t$, $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le B\eta^2$, and that there exists $C > 0$ such that $\mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1) \le C$. Then $\text{Regret}(\text{FTRL}) \le C/\eta + \eta BT$. If $\eta = \sqrt{C/(BT)}$, then $\text{Regret}(\text{FTRL}) \le 2\sqrt{BCT}$.*

This final bound is quite powerful. It says that the regret of FTRL on any online linear optimization problem is always on the order of $\sqrt{T}$.
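As a concrete instance of Algorithm 1, the sketch below runs FTRL with a quadratic regularizer on the Euclidean unit ball, a setting chosen (as an assumption, for illustration) because the argmin in Equation 15 has a closed form: project the unconstrained minimizer $-\eta \mathbf{L}_{t-1}$ onto the ball. The measured regret stays within the $O(\sqrt{T})$ rate of Corollary 7.2:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 2000, 5
eta = 1.0 / np.sqrt(T)

def ftrl_step(L):
    # argmin over the unit ball of L.w + ||w||^2/(2*eta):
    # the unconstrained minimizer -eta*L, projected onto the ball
    w = -eta * L
    n = np.linalg.norm(w)
    return w / n if n > 1.0 else w

L = np.zeros(N)
total = 0.0
for t in range(T):
    w = ftrl_step(L)
    ell = rng.uniform(-1, 1, size=N)
    ell /= max(1.0, np.linalg.norm(ell))  # enforce ||ell_t|| <= 1
    total += ell @ w
    L += ell

# best fixed action in hindsight over the ball: w* = -L_T/||L_T||
best = -np.linalg.norm(L)
regret = total - best
assert regret <= 2 * np.sqrt(T)  # O(sqrt(T)), as Corollary 7.2 predicts
print(f"T={T}, regret {regret:.2f} <= {2 * np.sqrt(T):.2f}")
```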
The constant in front of this rate will depend on the total variation of the regularization function on $\mathcal{K}$ (that is, $\mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1)$) as well as the stability of the updates (that is, the terms $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1})$).

## 7.2. An Equivalence Between Online Learning and Market Making

Having reviewed the relevant literature on the design of online learning algorithms, we now pivot back to the primary topic at hand, the design of market makers for complex security spaces. We will see that the tools developed for the online learning setting are strikingly similar to those we have constructed for selecting pricing mechanisms. This is rather surprising, as the problem of learning in an online environment is semantically quite distinct from the problem of pricing securities in a prediction market: a learning algorithm receives *losses* and selects *weights*, whereas a market maker manages *trades* and sets *prices*. We now show how these two problems can be viewed as two sides of the same coin. The two frameworks have very different semantics yet, in a very strong sense, nearly identical syntax.

The relationship is described in full detail in Figure 1. We imagine that the learner uses the FTRL algorithm (Algorithm 1) to select weights, and the market uses the duality-based cost function market maker framework.

What we emphasize in Figure 1 is that, by identifying the objects $\Pi$, $R(\cdot)$, and $\{r_t\}$ with the objects $\mathcal{K}$, $\mathcal{R}(\cdot)/\eta$, and $\{-\ell_t\}$, respectively, the mechanisms for choosing an instantaneous price vector $x_t \in \Pi$ and selecting a weight vector $w_t \in \mathcal{K}$ are identical. Put another way, if we view the security bundles $r_t$ as negated loss vectors $-\ell_t$, then the duality-based cost function market maker becomes exactly FTRL.
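This syntactic identity can be checked directly in the complete-market case: with the negative entropy regularizer and $b = 1/\eta$, the instantaneous LMSR prices after trades $r_t = -\ell_t$ coincide with the Weighted Majority weights of Equation 14. A small numerical sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 5, 50
eta = 0.3
b = 1.0 / eta  # LMSR liquidity parameter b corresponds to 1/eta

def lmsr_price(q):
    # gradient of C(q) = b*log(sum_i exp(q_i/b))
    z = np.exp((q - q.max()) / b)
    return z / z.sum()

losses = rng.uniform(size=(T, N))
q = np.zeros(N)   # market: cumulative bundles sold, r_t = -ell_t
L = np.zeros(N)   # learner: cumulative losses
for t in range(T):
    q += -losses[t]
    L += losses[t]
    wm = np.exp(-eta * (L - L.min()))
    wm /= wm.sum()                         # WM weights, Eq. (14)
    assert np.allclose(lmsr_price(q), wm)  # identical at every round
print("LMSR prices matched WM weights at all", T, "rounds")
```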
Fig. 1. The similarities between the duality-based cost function market maker framework and the Follow the Regularized Leader algorithm for online linear optimization.

The connection seems to break down when we arrive at the last pair of statements, as the FTRL regret and the market maker's worst-case loss do not appear to be identical. Strictly speaking, this is true. However, these two quantities are not so far apart. Using the previous identification, we see that the term $\max_{x \in \Pi} x \cdot q_T$, representing the worst-case payout of the market maker, matches exactly the term $-\min_{w \in \mathcal{K}} w \cdot L_T$. Now let us apply a first-order approximation to the market maker's earnings from selling securities, $C(\mathbf{q}_T) - C(\mathbf{q}_0)$. We have

$$C(\mathbf{q}_T) - C(\mathbf{q}_0) = \sum_{t=1}^{T} C(\mathbf{q}_t) - C(\mathbf{q}_{t-1}) \approx \sum_{t=1}^{T} \nabla C(\mathbf{q}_{t-1}) \cdot (\mathbf{q}_t - \mathbf{q}_{t-1}) = \sum_{t=1}^{T} \mathbf{x}_t \cdot \mathbf{r}_t, \quad (16)$$

where we used the fact that the instantaneous price vector $x_t$ is equal to $\nabla C(q_{t-1})$. This is not too surprising. Every trader will pay roughly the instantaneous prices $x_t$ times the quantities $r_t$ of each security purchased. The total earned by the market maker, $C(q_T) - C(q_0)$, is then roughly the sum of these payments over all trades.

How bad is this approximation? We can quantify it explicitly, since the difference between $C(q_t) - C(q_{t-1})$ and $\nabla C(q_{t-1}) \cdot (q_t - q_{t-1})$ is exactly the value $D_C(q_t, q_{t-1})$. If $\mathcal{R}$ has the Legendre property (described in Assumption 2), then via standard arguments [Rockafellar 1970] we can also conclude that $D_C(q_t, q_{t-1}) = D_R(x_t, x_{t+1})$. Under this assumption, in other words, the worst-case loss of the market maker can be written as

$$ \max_{\mathbf{x} \in \Pi} \mathbf{x} \cdot \mathbf{q}_T - \sum_{t=1}^{T} \mathbf{x}_t \cdot \mathbf{r}_t - \sum_{t=1}^{T} D_R(\mathbf{x}_t, \mathbf{x}_{t+1}). $$

Putting everything together, this final bound is exactly what we should expect. Look again at Theorem 6.2 and Proposition 7.1. The bounds in these theorems are nearly identical under the translation matching $w^* \leftrightarrow \rho(o)$, $w_{T+1} \leftrightarrow \nabla C(\mathbf{q})$, and $R(x) \leftrightarrow \mathcal{R}(w)/\eta$, since by definition of FTRL, $w_1 = \arg\min_{w \in \mathcal{K}} \mathcal{R}(w)$. The key difference is that the sum of divergence terms seems to get "lost in translation" when we look at Theorem 6.2. The above equation tells us why.

It is worth looking further into this key difference between the FTRL algorithm for online linear optimization and our proposed automated market maker. We could imagine a modified market maker with a different mechanism: after the $(t-1)$th trade the market maker posts the (instantaneous) price vector $\mathbf{x}_t$, a trader arrives to purchase bundle $\mathbf{r}_t$, and the trader pays exactly $\mathbf{x}_t \cdot \mathbf{r}_t$. Notice this is different from the original framework, where the trader would pay $C(\mathbf{q} + \mathbf{r}_t) - C(\mathbf{q})$, although we observed in Equation 16 that these two values are not so far apart.

Under the mapping outlined in Figure 1, algorithms for the expert setting ($\mathcal{K} = \Delta_N$) correspond to complete markets. Weighted Majority corresponds directly to the LMSR, with the learning rate $\eta$ playing a similar role to the LMSR parameter $b$. The similarity between the Weighted Majority weights (Equation 14) and the LMSR prices (Equation 2) has been observed and exploited in the past [Chen et al. 2008a]. The Quad-SCPM market [Agrawal et al.
2011] can be mapped to online gradient descent, which is known to be equivalent to FTRL with a quadratic regularizer [Hazan et al. 2007; Hazan 2009].

## 8. RELATION TO MARKET SCORING RULES

We have described ways in which our optimization-based framework can be used to derive novel, efficient automated market makers for markets in which the outcome space is very large. Our framework also provides new insights into the complete market setting. In this section, we describe how our framework can be used to establish a correspondence between cost function based markets and market scoring rules.

Consider the special case of complete markets, and in particular, markets that offer $n$ Arrow-Debreu securities for the $n$ mutually exclusive and exhaustive outcomes. Our framework defines a set of market makers by equating the set of allowable prices $\Pi$ with the $n$-simplex. That is, a market maker for a complete market that satisfies Conditions 1–5 in Section 3 can use a cost function of the form

$$C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\Delta_n)} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}), \quad (17)$$

where $R(x)$ is strictly convex over $\Delta_n$. The market price $x(q) = \nabla C(q)$ is the optimal solution to this convex optimization problem. It is easy to check that when $R(x) = b \sum_{i=1}^n x_i \log x_i$, the negative entropy function, we recover the LMSR market maker. The LMSR is a popular example of a large class of market makers called *market scoring rules* (MSR). In this section, after reviewing the notion of a proper scoring rule and describing the class of MSRs, we use Equation 17 to establish a correspondence between MSRs and cost function based market makers for complete markets.

### 8.1. Proper Scoring Rules

*Scoring rules* have long been used in the evaluation of probabilistic forecasts.
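(As a quick check of the claim following Equation 17: with the negative entropy regularizer, the supremum over the simplex agrees with the LMSR closed form $C(\mathbf{q}) = b \log \sum_i e^{q_i/b}$. The sketch below compares the closed form against the objective evaluated at the softmax maximizer and at random simplex points; parameter values are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(6)
b, n = 2.0, 3

def C_closed(q):
    return b * np.log(np.sum(np.exp(q / b)))  # LMSR cost function

def objective(q, x):
    # the inner expression of Equation 17 with negative entropy R
    return q @ x - b * np.sum(x * np.log(x))

q = rng.normal(size=n)
x_star = np.exp(q / b) / np.sum(np.exp(q / b))  # softmax prices
val = objective(q, x_star)
assert abs(val - C_closed(q)) < 1e-9            # sup attained at softmax
for _ in range(5000):
    x = rng.dirichlet(np.ones(n))               # random simplex point
    assert objective(q, x) <= C_closed(q) + 1e-9
print("Equation 17 with negative entropy matches the LMSR closed form")
```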
In the context of information elicitation, scoring rules are used to encourage individuals to make careful assessments and truthfully report their beliefs [Savage 1971; Garthwaite et al. 2005; Lambert et al. 2008]. In the context of machine learning, scoring rules are used as loss functions to evaluate and compare the performance of different algorithms [Buja et al. 2005; Reid and Williamson 2009]. We briefly mention recent work of Abernethy and Frongillo [2011], who used a generalized notion of a scoring rule to construct a market mechanism for solving machine learning problems.

Formally, let $\{1, \dots, n\}$ be a set of mutually exclusive and exhaustive outcomes of a future event. A scoring rule maps a probability distribution $p$ over outcomes to a score $s_i(p)$ for each outcome $i$, with $s_i(p)$ taking values in the range $[-\infty, \infty]$. Intuitively, this score represents the reward that a forecaster receives for predicting the distribution $p$ if the outcome turns out to be $i$. A scoring rule is said to be *regular* relative to the probability simplex $\Delta_n$ if $\sum_{i=1}^n p_i s_i(p') \in [-\infty, \infty)$ for all $p, p' \in \Delta_n$, with $\sum_{i=1}^n p_i s_i(p) \in (-\infty, \infty)$. This implies that $s_i(p) \in (-\infty, \infty)$ whenever $p_i > 0$, while $s_i(p)$ may equal $-\infty$ when $p_i = 0$. A scoring rule is said to be *proper* if a risk-neutral forecaster who believes the true distribution over outcomes to be $p$ has no incentive to report any alternate distribution $p'$, that is, if $\sum_{i=1}^n p_i s_i(p) \ge \sum_{i=1}^n p_i s_i(p')$ for all
+ +Two examples of regular, strictly proper scoring rules commonly used in both information elicitation and machine learning are the quadratic scoring rule [Brier 1950]: + +$$s_i(\mathbf{p}) = a_i + b \left( 2p_i - \sum_{j=1}^{n} p_j^2 \right) \quad (18)$$ + +and the logarithmic scoring rule [Good 1952]: + +$$s_i(\mathbf{p}) = a_i + b \log(p_i) \quad (19)$$ + +where $b > 0$ and $a_1, \dots, a_n$ are parameters. + +Proper scoring rules are closely related to convex functions. In fact, the following characterization theorem of Gneiting and Raftery [2007], which is credited to McCarthy [1956] and Savage [1971], gives the precise relationship between convex functions and proper scoring rules. + +**THEOREM 8.1 (GNEITING AND RAFTERY [2007]).** A regular scoring rule is (strictly) proper if and only if there exists a (strictly) convex function $G: \Delta_n \to \mathbb{R}$ such that for all $i \in \{1, \dots, n\}$, + +$$s_i(\mathbf{p}) = G(\mathbf{p}) - G'(\mathbf{p}) \cdot \mathbf{p} + G'_i(\mathbf{p}),$$ + +where $G'(\mathbf{p})$ is any subgradient of $G$ at the point $\mathbf{p}$ and $G'_i(\mathbf{p})$ is the $i$-th element of $G'(\mathbf{p})$. + +Note that for a scoring rule defined in terms of a function $G$, + +$$\sum_{i=1}^{n} p_i s_i(\mathbf{p}) = \sum_{i=1}^{n} p_i (G(\mathbf{p}) - G'(\mathbf{p}) \cdot \mathbf{p} + G'_i(\mathbf{p})) = G(\mathbf{p}).$$ + +Theorem 8.1 therefore indicates that a regular scoring rule is (strictly) proper if and only if its expected score function $G(\mathbf{p})$ is (strictly) convex on $\Delta_n$, and the vector with elements $s_i(\mathbf{p})$ is a subgradient of $G$ at the point $\mathbf{p}$. Hence, every bounded convex function $G$ over $\Delta_n$ induces a proper scoring rule. + +Define $S(\tilde{\mathbf{p}}, \mathbf{p}) = \sum_{i=1}^n p_i s_i(\tilde{\mathbf{p}})$ to be the expected score of a forecaster who has belief $\mathbf{p}$ but predicts $\tilde{\mathbf{p}}$. Then, $G(\mathbf{p}) = S(\mathbf{p}, \mathbf{p})$.
If a scoring rule is regular and proper, $d(\tilde{\mathbf{p}}, \mathbf{p}) = S(\mathbf{p}, \mathbf{p}) - S(\tilde{\mathbf{p}}, \mathbf{p})$ is the associated divergence function that captures the expected loss in score if a forecaster predicts $\tilde{\mathbf{p}}$ rather than his true belief $\mathbf{p}$. It is known that if $G(\mathbf{p})$ is differentiable, the divergence function is the Bregman divergence for $G$, that is, $d(\tilde{\mathbf{p}}, \mathbf{p}) = D_G(\tilde{\mathbf{p}}, \mathbf{p})$. For a nice survey on uses, properties, and characterizations of proper scoring rules, see Gneiting and Raftery [2007]. + +### 8.2. Market Scoring Rules + +Market scoring rules (MSR) were developed by Hanson [2003; 2007] as a method of using scoring rules to pool opinions from many different forecasters. Market scoring rules are sequentially shared scoring rules. Formally, the market maintains a current probability distribution $\mathbf{p}$. At any time, a trader can enter the market and change this distribution to an arbitrary distribution $\mathbf{p}'$ of her choice.¹² If the outcome turns out to be $i$, she receives a (possibly negative) payoff of $s_i(\mathbf{p}') - s_i(\mathbf{p})$. For example, in the MSR defined using the logarithmic scoring rule in Equation 19, a trader + +¹²In some market scoring rules, such as the LMSR, distributions that place a weight of 0 on any outcome are not allowed since a trader would have to pay an infinite amount of money if the outcome with reported probability 0 actually occurred. +---PAGE_BREAK--- + +who changes the distribution from p to p' receives a payoff of $b \log(p'_i/p_i)$.
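This logarithmic payoff can be reproduced from the cost function side. The sketch below is my own numeric check, not the paper's: it uses the standard LMSR cost function $C(\mathbf{q}) = b \log \sum_i e^{q_i/b}$ and confirms that, for every outcome, the cost-function payoff $(q'_i - q_i) - (C(\mathbf{q}') - C(\mathbf{q}))$ equals the scoring-rule payoff $b \log(p'_i/p_i)$ at the corresponding prices:

```python
import math

def lmsr_cost(q, b=1.0):
    # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b=1.0):
    # instantaneous prices: x_i(q) = exp(q_i / b) / sum_j exp(q_j / b)
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

b = 1.0
q, q2 = [0.0, 0.0, 0.0], [1.0, 0.5, -0.2]
p, p2 = lmsr_price(q, b), lmsr_price(q2, b)
for i in range(3):
    cost_payoff = (q2[i] - q[i]) - (lmsr_cost(q2, b) - lmsr_cost(q, b))
    msr_payoff = b * math.log(p2[i] / p[i])
    assert abs(cost_payoff - msr_payoff) < 1e-12  # identical for every outcome
```

The identity is exact because $b \log(p'_i/p_i) = (q'_i - q_i) - b \log(Z'/Z)$, and $b \log Z$ is precisely the LMSR cost.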
This market formulation is equivalent to the cost function based formulation of the LMSR (hence its name) in the sense that a trader who changes the market probabilities from p to p' in the MSR formulation receives the same payoff for every outcome i as a trader who changes the quantity vectors from any q to q' such that market prices satisfy $x(q) = p$ and $x(q') = p'$ in the cost function based formulation. Because they use proper scoring rules, market scoring rules preserve the incentive compatibility of proper scoring rules for *myopic* traders. A trader who believes the true distribution to be p and cares only about the payoff of her current action maximizes her expected payoff by changing the market's distribution to p. + +One advantage of the market scoring rule formulation is the ease of bounding the market maker's worst-case loss. Each trader in a market scoring rule is essentially responsible for paying the previous trader's score. Thus the market maker is responsible only for paying the score of the final trader. Let $p_0$ be the initial probability distribution of the market. The worst-case loss of the market maker is then + +$$ \max_{i \in \{1, \dots, n\}} \sup_{\mathbf{p} \in \Delta_n} (s_i(\mathbf{p}) - s_i(\mathbf{p}_0)). $$ + +The LMSR market maker is not the only market that can be defined as either a market scoring rule or a cost function based market. The fact that there exists a correspondence between certain market scoring rules and certain cost function based markets was noted by Chen and Pennock [2007]. They pointed out that the MSR with scoring function $s$ and the cost function based market with cost function $C$ are equivalent if for all $\mathbf{q}$ and all outcomes $i$, $C(\mathbf{q}) = q_i - s_i(\mathbf{x}(\mathbf{q}))$.
However, they provided neither guarantees about the circumstances under which this condition can be satisfied nor a general way to find the cost function given a market scoring rule; $\mathbf{x}(\mathbf{q})$ is the gradient of $C(\mathbf{q})$ and the condition defines a differential equation. Agrawal et al. [2011] also made use of the equivalence between markets when this strong condition holds. In the next section, we give precise, general conditions under which an MSR is equivalent to a cost function based market, and provide a way to translate a market scoring rule into a cost function based market and vice versa. + +### 8.3. Equivalence between Market Scoring Rules and Cost Function Based Market Makers + +Recall that a convex cost function $C$ can be defined as $C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\Delta_n)} \sum_{i=1}^n x_i q_i - R(\mathbf{x})$ for a strictly convex function $R$, namely the convex conjugate of $C$. According to Theorem 8.1, there is a one-to-one and onto mapping between strictly convex and differentiable $R$ and strictly proper, regular scoring rules with differentiable scoring functions $s_i(\mathbf{x})$, where for every pair we have + +$$ R(\mathbf{x}) = \sum_{i=1}^{n} x_i s_i(\mathbf{x}), \quad (20) $$ + +and + +$$ s_i(\mathbf{x}) = R(\mathbf{x}) - \sum_{j=1}^{n} \frac{\partial R(\mathbf{x})}{\partial x_j} x_j + \frac{\partial R(\mathbf{x})}{\partial x_i}. \quad (21) $$ + +Theorem 8.2 below shows that the cost function based market using $R$ in (20) and the market scoring rule market using $s_i(\mathbf{x})$ in (21) are equivalent in terms of traders' profits, reachable price vectors, and the market maker's worst-case loss under some mild conditions.
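As a concrete instance of the pair (20)–(21), consider the LMSR, where $R(\mathbf{x}) = b \sum_i x_i \log x_i$ and $s_i(\mathbf{x}) = b \log x_i$. The sketch below is my own numeric check (with $b = 1$ and central-difference partial derivatives): it recovers the logarithmic scores from the negative entropy conjugate via Equation 21:

```python
import math

def neg_entropy(x, b=1.0):
    # R(x) = b * sum_i x_i log x_i, the conjugate associated with the LMSR
    return b * sum(xi * math.log(xi) for xi in x)

def partial(x, j, b=1.0, h=1e-6):
    # central-difference estimate of dR/dx_j
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (neg_entropy(xp, b) - neg_entropy(xm, b)) / (2 * h)

def score_from_R(x, i, b=1.0):
    # Equation 21: s_i(x) = R(x) - sum_j (dR/dx_j) x_j + dR/dx_i
    n = len(x)
    return neg_entropy(x, b) - sum(partial(x, j, b) * x[j] for j in range(n)) + partial(x, i, b)

x = [0.5, 0.3, 0.2]
for i in range(3):
    # for b = 1 this reproduces the logarithmic rule s_i(x) = log x_i
    assert abs(score_from_R(x, i) - math.log(x[i])) < 1e-4
```

Equation 20 can be checked the same way: $\sum_i x_i \cdot b \log x_i$ is exactly $R(\mathbf{x})$.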
+---PAGE_BREAK--- + +**THEOREM 8.2.** Given a strictly convex, continuous conjugate function $R$ and a strictly proper, regular scoring rule $\mathbf{s}$ with scoring functions $s_i$ satisfying the relationships in Equations 20 and 21, if both $R$ and $s_i$'s are differentiable everywhere in relint($\Delta_n$), the corresponding cost function based market and market scoring rule market are equivalent in the following three aspects: + +(a) A trade in the cost function based market bringing the quantity vector **q** to **q'** and the price vector $\mathbf{x}(\mathbf{q})$ to $\mathbf{x}(\mathbf{q}')$ gives the same profit as a trade in the MSR market bringing the market probability from $\mathbf{x}(\mathbf{q})$ to $\mathbf{x}(\mathbf{q}')$ for every outcome $i$ as long as $\mathbf{x}(\mathbf{q}), \mathbf{x}(\mathbf{q}') > 0$. + +(b) Given any probability vector $\mathbf{x}$ for which $s_i(\mathbf{x}) \in (-\infty, \infty) \forall i$ in the MSR market, there is always a quantity vector $\mathbf{q}$ such that $\nabla C(\mathbf{q}) = \mathbf{x}$ in the cost function based market. + +(c) If the initial probability vector $\mathbf{x}_0$ in the MSR market is equal to the initial price vector $\nabla C(0)$ in the cost function based market, and $\mathbf{x}_0 \in \text{relint}(\Delta_n)$, then both markets have the same worst-case loss for the market maker. + +PROOF. Because $R$ is continuous and defined on $\Delta_n$, $R$ is bounded on $\Delta_n$. According to Lemma 4.3, for any $\mathbf{q} \in \mathbb{R}^n$, + +$$ \mathbf{x}(\mathbf{q}) = \nabla C(\mathbf{q}) = \underset{\mathbf{x} \in \Delta_n}{\operatorname{argmax}} \left( \sum_{i=1}^{n} x_i q_i - R(\mathbf{x}) \right) \quad (22) $$ + +in the cost function based market. Below, we prove each part in turn. + +Part (a). 
Due to Equation 22, if $\mathbf{x}(\mathbf{q}) > 0$, $\mathbf{x}(\mathbf{q})$ must be the optimal solution to the unconstrained optimization problem $\max_{\mathbf{x}} \sum_{i=1}^{n} x_i q_i - R(\mathbf{x}) - \lambda_{\mathbf{q}} (\sum_{i=1}^{n} x_i - 1)$ for some $\lambda_{\mathbf{q}}$. Since $R$ is differentiable in $\text{relint}(\Delta_n)$, this means that + +$$ q_i = \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_i(\mathbf{q})} + \lambda_{\mathbf{q}} \quad (23) $$ + +for some $\lambda_{\mathbf{q}}$. + +Suppose in the cost function based market a trader changes the outstanding shares from **q** to **q'** and this trade changes the market price from **x**(**q**) > 0 to **x**(**q')** > 0. If outcome *i* occurs, the trader's profit is + +$$ +\begin{align*} +& (q'_i - q_i) - (C(\mathbf{q}') - C(\mathbf{q})) \\ +&= (q'_i - q_i) - \left( \sum_{j=1}^{n} x_j(\mathbf{q}') q'_j - R(\mathbf{x}(\mathbf{q}')) \right) + \left( \sum_{j=1}^{n} x_j(\mathbf{q}) q_j - R(\mathbf{x}(\mathbf{q})) \right) \\ +&= \left( q'_i - \sum_{j=1}^{n} x_j(\mathbf{q}') q'_j + R(\mathbf{x}(\mathbf{q}')) \right) - \left( q_i - \sum_{j=1}^{n} x_j(\mathbf{q}) q_j + R(\mathbf{x}(\mathbf{q})) \right) \\ +&= \left( \frac{\partial R(\mathbf{x}(\mathbf{q}'))}{\partial x_i(\mathbf{q}')} - \sum_{j=1}^{n} x_j(\mathbf{q}') \frac{\partial R(\mathbf{x}(\mathbf{q}'))}{\partial x_j(\mathbf{q}')} + R(\mathbf{x}(\mathbf{q}')) \right) \\ +&\quad - \left( \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_i(\mathbf{q})} - \sum_{j=1}^{n} x_j(\mathbf{q}) \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_j(\mathbf{q})} + R(\mathbf{x}(\mathbf{q})) \right) \\ +&= s_i(\mathbf{x}(\mathbf{q}')) - s_i(\mathbf{x}(\mathbf{q})). +\end{align*} +$$ + +The first equality follows since $\mathbf{x}(\mathbf{q})$ is the solution to $\max_{\mathbf{x}\in\Delta_n} (\sum_{i=1}^n x_i q_i - R(\mathbf{x}))$ and the third equality follows from Equation 23. 
Since $s_i(\mathbf{x}(\mathbf{q}')) - s_i(\mathbf{x}(\mathbf{q}))$ is the profit of +---PAGE_BREAK--- + +a trader who changes the market probability from $x(q)$ to $x(q')$ in the MSR market when outcome $i$ occurs, this completes the proof of part (a). + +*Part (b).* In the MSR market, only probability vectors in the set $Y = \{x \in \Delta_n : s_i(x) \in (-\infty, \infty) \forall i\}$ can possibly be reported by a trader with finite wealth. Since the scoring rule $s$ is regular, $s_i(x) \in [-\infty, \infty)$ and it can equal $-\infty$ only when $x_i$ is 0. However, any $x$ that sets $s_i(x) = -\infty$ for some $i$ is not allowed, as it would require the trader to pay an infinite amount of money when outcome $i$ actually happens. + +We show that in the cost function based market it is possible to achieve any price vector $x \in Y$ by setting $q_i = s_i(\mathbf{x})$ for all $i$. By strict properness of the scoring rule $s$, we know that $\sum_{i=1}^n x'_i s_i(\mathbf{x}) - \sum_{i=1}^n x'_i s_i(\mathbf{x}') \le 0$ for any $\mathbf{x}$ and $\mathbf{x}'$ and the equality holds only when $\mathbf{x}' = \mathbf{x}$. For any vector $\mathbf{x} \in Y$, $s(\mathbf{x}) \in \mathbb{R}^n$. By Equation 22, we have $\nabla C(s(\mathbf{x})) = \operatorname{argmax}_{\mathbf{x}' \in \Delta_n} \sum_{i=1}^n x_i' s_i(\mathbf{x}) - \sum_{i=1}^n x_i' s_i(\mathbf{x}') = \mathbf{x}$. Hence, the price vector in the cost function based market is exactly $\mathbf{x}$. + +*Part (c).* We know that $C(0) = \max_{\mathbf{x} \in \Delta_n} -R(\mathbf{x})$. If $\mathbf{x}_0 \in \text{relint}(\Delta_n)$, we have $\mathbf{x}_0 = \nabla C(0) = \text{argmin}_{\mathbf{x} \in \Delta_n} R(\mathbf{x})$ and $\mathbf{x}_0$ must satisfy + +$$ \nabla R(\mathbf{x}_0) = 0. \tag{24} $$ + +Combining Equation 24 with Equation 21, we have + +$$ s_i(\mathbf{x}_0) = R(\mathbf{x}_0).
\tag{25} $$ + +The worst-case loss of the cost function based market maker is + +$$ +\begin{align} +\sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) &= \sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - R(\mathbf{x}_0) \\ +&= \sup_{\mathbf{x} \in \rho(\mathcal{O})} \sum_i x_i s_i(\mathbf{x}) - R(\mathbf{x}_0) \\ +&= \max_{i \in \{1, \dots, n\}} s_i(\mathbf{e}^i) - R(\mathbf{x}_0) \tag{26} +\end{align} +$$ + +where $\mathbf{e}^i$ is the $n$-dimensional vector that has 1 for its $i$-th element and 0 everywhere else. The second equality is due to Equation 20. The third equality is because $\rho(\mathcal{O}) = \{\mathbf{e}^1, \dots, \mathbf{e}^n\}$ for the complete market we consider. + +The worst-case loss of the MSR market maker with the scoring functions $s_i(\mathbf{x})$ is + +$$ +\begin{align} +\max_{i \in \{1, \dots, n\}} \sup_{\mathbf{x} \in \Delta_n} (s_i(\mathbf{x}) - s_i(\mathbf{x}_0)) &= \max_{i \in \{1, \dots, n\}} \sup_{\mathbf{x} \in \Delta_n} s_i(\mathbf{x}) - R(\mathbf{x}_0) \\ +&= \max_{i \in \{1, \dots, n\}} s_i(\mathbf{e}^i) - R(\mathbf{x}_0). \tag{27} +\end{align} +$$ + +The first equality is due to Equation 25. The second equality holds because for a strictly proper scoring rule $s$ + +$$ s_i(\mathbf{e}^i) = \sum_{j=1}^{n} e_j^i s_j(\mathbf{e}^i) \geq \sum_{j=1}^{n} e_j^i s_j(\mathbf{x}) = s_i(\mathbf{x}) $$ + +for all $\mathbf{x} \in \Delta_n$. + +Equations 26 and 27 are identical. Hence, the worst-case loss of the market maker is the same in these two markets. $\square$ + +Theorem 8.2 shows that a trader's profit for moving the prices from $x$ to $x'$ can be different in these two markets only when $x$ or $x'$ (or both) lie on the relative boundary of $\Delta_n$, and the worst-case loss of the market maker can be different in these two markets only when the initial market price vector lies on the relative boundary of +---PAGE_BREAK--- + +$\Delta_n$. The reachable price vectors, however, are always the same.
The LMSR market is an example where both the initial market price vector and market prices at any subsequent time are in relint($\Delta_n$). The MSR market using a quadratic scoring rule is an example where the initial market price vector is in relint($\Delta_n$) but future market prices can reach the relative boundary of $\Delta_n$. Its corresponding cost function based market maker is equivalent to the Quad-SCPM market introduced by Agrawal et al. [2011]. + +## 9. CONCLUSION + +We conclude by mentioning one promising direction for future work. As we discussed, there is an inherent tradeoff between the bid-ask spread and the worst-case loss of the market maker. But if the market maker chooses to sell securities with an additional *transaction cost* for each security sold, then this money can not only help to cover the worst-case loss, but can also lead to a profit. Furthermore, if a market becomes popular, the market maker may wish to increase the market depth. This idea was explored by Othman et al. [2010] for the case of complete markets; they introduced a *liquidity sensitive* market maker and provided a new model with profit guarantees. Othman and Sandholm [2011] recently extended this work and characterized a family of market makers that are liquidity sensitive. Via our framework, we can define an alternative method for simultaneously including transaction costs and guaranteeing profit. In particular, this is achieved through relaxing the price space, as discussed in Section 6.2. We leave the details to future work. + +## REFERENCES + +ABERNETHY, J., CHEN, Y., AND VAUGHAN, J. W. 2011. An optimization-based framework for automated market-making. In *Proceedings of the 12th ACM Conference on Electronic Commerce*. 297–306. + +ABERNETHY, J. AND FRONGILLO, R. M. 2011. A collaborative mechanism for crowdsourcing prediction problems. In *Advances in Neural Information Processing Systems*. + +AGRAWAL, S., DELAGE, E., PETERS, M., WANG, Z., AND YE, Y. 2011.
A unified framework for dynamic prediction market design. *Operations Research* 59, 3, 550–568. + +AGRAWAL, S., WANG, Z., AND YE, Y. 2008. Parimutuel betting on permutations. In *Proceedings of the 4th International Workshop On Internet And Network Economics*. 126–137. + +ARROW, K. J. 1964. The role of securities in the optimal allocation of risk-bearing. *Review of Economic Studies* 31, 2, 91–96. + +ARROW, K. J. 1970. *Essays in the Theory of Risk Bearing*. North Holland, Amsterdam. + +AZOURY, K. S. AND WARMUTH, M. K. 2001. Relative loss bounds for on-line density estimation with the exponential family of distributions. *Machine Learning* 43, 3, 211–246. + +BERG, J. E., FORSYTHE, R., NELSON, F. D., AND RIETZ, T. A. 2001. Results from a dozen years of election futures markets research. In *Handbook of Experimental Economic Results*, C. A. Plott and V. Smith, Eds. + +BOYD, S. AND VANDENBERGHE, L. 2004. *Convex Optimization*. Cambridge University Press. + +BRAHMA, A., DAS, S., AND MAGDON-ISMAIL, M. 2010. Comparing prediction market structures, with an application to market making. Working paper. + +BRIER, G. 1950. Verification of forecasts expressed in terms of probability. *Monthly Weather Review* 78, 1, 1–3. + +BUJA, A., STUETZLE, W., AND SHEN, Y. 2005. Loss functions for binary class probability estimation and classification: Structure and applications. Working draft. + +CESA-BIANCHI, N. AND LUGOSI, G. 2006. *Prediction, Learning, and Games*. Cambridge University Press. + +CHEN, Y., FORTNOW, L., LAMBERT, N., PENNOCK, D. M., AND WORTMAN, J. 2008a. Complexity of combinatorial market makers. In *Proceedings of the 9th ACM Conference on Electronic Commerce*. 190–199. + +CHEN, Y., FORTNOW, L., NIKOLOVA, E., AND PENNOCK, D. M. 2007a. Betting on permutations. In *Proceedings of the 8th ACM Conference on Electronic Commerce*. ACM, 326–335. + +CHEN, Y., FORTNOW, L., NIKOLOVA, E., AND PENNOCK, D. M. 2007b. Betting on permutations. 
In *Proceedings of the 8th ACM conference on Electronic commerce*. 326–335. +---PAGE_BREAK--- + +CHEN, Y., GOEL, S., AND PENNOCK, D. M. 2008b. Pricing combinatorial markets for tournaments. In *Proceedings of the 40th ACM Symposium on Theory of Computing*. + +CHEN, Y. AND PENNOCK, D. M. 2007. A utility framework for bounded-loss market makers. In *Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence*. 49–56. + +CHEN, Y. AND VAUGHAN, J. W. 2010. A new understanding of prediction markets via no-regret learning. In *Proceedings of the 11th ACM Conference on Electronic Commerce*. 189–198. + +COVER, T. 1991. Universal portfolios. *Mathematical Finance* **1**, 1–29. + +DAS, S. AND MAGDON-ISMAIL, M. 2008. Adapting to a market shock: Optimal sequential market-making. In *Proceedings of the 21st Annual Conference on Neural Information Processing Systems*. 361–368. + +FORTNOW, L., KILIAN, J., PENNOCK, D. M., AND WELLMAN, M. P. 2004. Betting boolean-style: A framework for trading in securities based on logical formulas. *Decision Support Systems* **39**, 1, 87–104. + +FREUND, Y. AND SCHAPIRE, R. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of Comp. and System Sciences* **55**, 1, 119–139. + +GAO, X., CHEN, Y., AND PENNOCK, D. M. 2009. Betting on the real line. In *Proceedings of the 5th Workshop on Internet and Network Economics*. 553–560. + +GARTHWAITE, P. H., KADANE, J. B., AND O'HAGAN, A. 2005. Statistical methods for eliciting probability distributions. *Journal of the American Statistical Association* **100**, 680–701. + +GHODSI, M., MAHINI, H., MIRROKNI, V. S., AND ZADIMOGHADDAM, M. 2008. Permutation betting markets: Singleton betting with extra information. In *Proceedings of the 9th ACM conference on Electronic commerce*. 180–189. + +GNEITING, T. AND RAFTERY, A. 2007. Strictly proper scoring rules, prediction, and estimation. 
*Journal of the American Statistical Association* **102**, 477, 359–378. + +GOOD, I. J. 1952. Rational decisions. *Journal of the Royal Statistical Society, Series B (Methodological)* **14**, 1, 107–114. + +GORNI, G. 1991. Conjugation and second-order properties of convex functions. *Journal of Mathematical Analysis and Applications* **158**, 2, 293–315. + +GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1981. The ellipsoid method and its consequences in combinatorial optimization. *Combinatorica* **1**, 2, 169–197. + +GUO, M. AND PENNOCK, D. M. 2009. Combinatorial prediction markets for event hierarchies. In *Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems*. 201–208. + +HANSON, R. 2003. Combinatorial information market design. *Information Systems Frontiers* **5**, 1, 105–119. + +HANSON, R. 2007. Logarithmic market scoring rules for modular combinatorial information aggregation. *Journal of Prediction Markets* **1**, 1, 3–15. + +HAZAN, E. 2009. A survey: The convex optimization approach to regret minimization. Draft. + +HAZAN, E., AGARWAL, A., AND KALE, S. 2007. Logarithmic regret algorithms for online convex optimization. *Machine Learning* **69**, 2–3, 169–192. + +HAZAN, E. AND KALE, S. 2010. Extracting certainty from uncertainty: regret bounded by variation in costs. *Machine Learning* **80**, 165–188. + +HELMBOLD, D. AND WARMUTH, M. 2009. Learning permutations with exponential weights. *JMLR* **10**, 1705–1736. + +HIRIART-URRUTY, J.-B. AND LEMARÉCHAL, C. 2001. *Fundamentals of Convex Analysis*. Springer. + +KARP, R. 1972. Reducibility among combinatorial problems. In *Complexity of Computer Computations (Symposium Proceedings)*. Plenum Press, 85–103. + +KIVINEN, J. AND WARMUTH, M. 1997. Exponentiated gradient versus gradient descent for linear predictors. *Journal of Information and Computation* **132**, 1, 1–63. + +KIVINEN, J. AND WARMUTH, M. K. 1999. Averaging expert predictions. 
In *Computational Learning Theory: 4th European Conference (EuroCOLT '99)*. Springer, 153–167. + +LAMBERT, N., PENNOCK, D. M., AND SHOHAM, Y. 2008. Eliciting properties of probability distributions. In *Proceedings of the 9th ACM Conference on Electronic Commerce*. + +LEDYARD, J., HANSON, R., AND ISHIKIDA, T. 2009. An experimental test of combinatorial information markets. *Journal of Economic Behavior and Organization* **69**, 182–189. + +LITTLESTONE, N. AND WARMUTH, M. 1994. The weighted majority algorithm. *Info. and Computation* **108**, 2, 212–261. + +MANGOLD, B., DOOLEY, M., DORNFEST, R., FLAKE, G. W., HOFFMAN, H., KASTURI, T., AND PENNOCK, D. M. 2005. The tech buzz game. *IEEE Computer* **38**, 7, 94–97. + +MAS-COLELL, A., WHINSTON, M. D., AND GREEN, J. R. 1995. *Microeconomic Theory*. Oxford University Press, New York, NY. + +ACM Transactions on Economics and Computation, Vol. 1, No. 1, Article X, Publication date: 2012. +---PAGE_BREAK--- + +MCCARTHY, J. 1956. Measures of the value of information. *PNAS* **42**, 654–655. + +MEGIDDO, N. 1977. Mixtures of order matrices and generalized order matrices. *Discrete Mathematics* **19**, 2, 177–181. + +OTHMAN, A. AND SANDHOLM, T. 2011. Homogeneous risk measures and liquidity-sensitive automated market makers. In *Proceedings of the 7th Workshop on Internet and Network Economics*. 314–325. + +OTHMAN, A., SANDHOLM, T., PENNOCK, D. M., AND REEVES, D. M. 2010. A practical liquidity-sensitive automated market maker. In *Proceedings of the 11th ACM Conference on Electronic Commerce*. 377–386. + +PENNOCK, D. M. 2004. A dynamic pari-mutuel market for hedging, wagering, and information aggregation. In *Proceedings of the Fifth ACM Conference on Electronic Commerce (EC'04)*. + +PENNOCK, D. M. AND SAMI, R. 2007. Computational aspects of prediction markets. In *Algorithmic Game Theory*, N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, Eds. Cambridge University Press. + +PENNOCK, D. M. AND XIA, L. 2011.
Price updating in combinatorial prediction markets with Bayesian networks. In *Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence*. 581–588. + +PETERS, M., SO, A. M.-C., AND YE, Y. 2007. Pari-mutuel markets: Mechanisms and performance. In *Proceedings of the 3rd International Workshop on Internet and Network Economics*. 82–95. + +RAKHLIN, A. 2009. Lecture notes on online learning. Draft. + +REID, M. D. AND WILLIAMSON, R. C. 2009. Surrogate regret bounds for proper losses. In *ICML*. + +ROCKAFELLAR, R. T. 1970. *Convex Analysis*. Princeton University Press. + +SAVAGE, L. J. 1971. Elicitation of personal probabilities and expectations. *Journal of the American Statistical Association* **66**, 336, 783–801. + +SHALEV-SHWARTZ, S. AND SINGER, Y. 2007. A primal-dual perspective of online learning algorithms. *Machine Learning* **69**, 2–3, 115–142. + +SION, M. 1958. On general minimax theorems. *Pacific Journal of Mathematics* **8**, 1, 171–176. + +WOLFERS, J. AND ZITZEWITZ, E. 2004. Prediction markets. *Journal of Economic Perspectives* **18**, 2, 107–126. + +XIA, L. AND PENNOCK, D. M. 2011. An efficient Monte Carlo algorithm for pricing combinatorial prediction markets for tournaments. In *Proceedings of the International Joint Conference on Artificial Intelligence*. 305–314. + +ZINKEVICH, M. 2003. Online convex programming and generalized infinitesimal gradient ascent. In *ICML*. + +## A. CONVEX ANALYSIS RESULTS AND PROOF OF THEOREM 4.2 + +Towards proving Theorem 4.2, we provide another definition and a couple of results from Rockafellar [1970]. + +*Definition A.1 (Rockafellar [1970], Section 7).* A convex function $f$ is said to be proper if $f(x) > -\infty$ for all $x$ and $f(x) < +\infty$ for some $x$. Also, $f : \mathbb{R}^K \to [-\infty, \infty]$ is said to be closed when the epigraph of $f$ is a closed set, or equivalently, the set $\{x : f(x) \le \alpha\}$ is closed for all $\alpha \in \mathbb{R}$.
+ +**THEOREM A.2 (ROCKAFELLAR [1970], THEOREM 12.2 AND COROLLARY 12.2.2).** +For any closed convex function $f : \mathbb{R}^K \to [-\infty, \infty]$, the conjugate $f^*$ is also closed and convex, and $f^{**} = f$. Furthermore, we can write + +$$f^*(y) = \sup_{x \in \text{relint}(\text{dom}(f))} y \cdot x - f(x).$$ + +The preceding theorem tells us that the convex conjugate, which is usually defined in terms of a $\sup$ over all of $\mathbb{R}^K$, can also be written as a $\sup$ over just the relative interior of the domain of the function. This is useful for our duality-based framework, as we want to optimize only inside of the convex hull of the payoff vectors. + +**THEOREM A.3 (ROCKAFELLAR [1970], THEOREM 26.3).** Given a proper closed convex function $f : \mathbb{R}^K \to [-\infty, \infty]$, $f$ is finite and differentiable everywhere on $\mathbb{R}^K$ if and only if its conjugate $f^*$ is strictly convex on $\text{dom}(f^*)$. +---PAGE_BREAK--- + +PROOF OF THEOREM 4.2. We begin with the first part of the Theorem, showing that for any $C: \mathbb{R}^K \to \mathbb{R}$ satisfying Conditions 2-5, there exists a function $R$ such that Equation 5 is true for any $\mathbf{q} \in \mathbb{R}^K$. + +Let $C: \mathbb{R}^K \to \mathbb{R}$ be some cost function satisfying Conditions 2-5. Theorem 3.2 implies that closure($\{\nabla C(\mathbf{q}): \mathbf{q} \in \mathbb{R}^K\}$) = $\mathcal{H}(\rho(\mathcal{O}))$. It follows also that + +$$ \mathrm{relint}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O}))). \quad (28) $$ + +Let us now consider the convex conjugate of $C$, + +$$ C^*(\mathbf{x}) := \sup_{\mathbf{q} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{q} - C(\mathbf{q}). \quad (29) $$ + +Recall that we use the notation $\mathrm{dom}(f)$ to refer to the domain of a function $f$, i.e., where it is defined and finite valued. 
We can show that + +$$ \{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathrm{dom}(C^*) \subseteq \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}). \quad (30) $$ + +For the first containment, it is clear that if we set $\mathbf{x} = \nabla C(\mathbf{q}')$ for any $\mathbf{q}' \in \mathbb{R}^K$ then the supremum in (29) is achieved for $\mathbf{q} = \mathbf{q}'$ and hence $C^*(\nabla C(\mathbf{q}')) = \mathbf{q}' \cdot \nabla C(\mathbf{q}') - C(\mathbf{q}')$. Since $C^*$ is defined on $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$, we have $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathrm{dom}(C^*)$. For the second containment, take some $\mathbf{x} \notin \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$ and consider the derivative of the objective function in (29) with respect to any $\mathbf{q}$, which is $\mathbf{x} - \nabla C(\mathbf{q})$. By construction, this derivative always has norm bigger than some $\epsilon > 0$ for any $\mathbf{q}$, and hence the objective must increase without bound. Since any $\mathbf{x}$ that does not belong to $\mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$ cannot be in $\mathrm{dom}(C^*)$, we establish $\mathrm{dom}(C^*) \subseteq \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$. + +We now show that the choice of $R := C^*$ is strictly convex and satisfies (5). Indeed, strict convexity follows trivially from Theorem A.3. We establish (5) by observing that + +$$ C(\mathbf{q}) = C^{**}(\mathbf{q}) = \sup_{\mathbf{x} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{q} - C^*(\mathbf{x}) = \sup_{\mathbf{x} \in \mathrm{relint}(\mathrm{dom}(C^*))} \mathbf{x} \cdot \mathbf{q} - C^*(\mathbf{x}), $$ + +where the last equality follows because of Theorem A.2.
According to (28) and (30), we also have $\mathrm{relint}(\mathrm{dom}(C^*)) = \mathrm{relint}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O})))$ as desired. + +We now prove the other direction. Take any strictly convex $R$ defined on $\mathrm{relint}(\mathcal{H}(\rho(\mathcal{O})))$ and let $C(\mathbf{q}) := R^*(\mathbf{q}) = \sup_{\mathbf{x} \in \mathrm{dom}(R)} \mathbf{q} \cdot \mathbf{x} - R(\mathbf{x})$. To establish Conditions 2-5, Theorem 3.2 tells us that it is sufficient to establish three facts: (a) $C$ is defined on all of $\mathbb{R}^K$, (b) $C$ is everywhere differentiable, and (c) $C$ has the property that closure($\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$) = $\mathcal{H}(\rho(\mathcal{O}))$. It is easy to establish (a), since for any $\mathbf{q} \in \mathbb{R}^K$, $C(\mathbf{q})$ is defined as a supremum of a concave function on a bounded domain, which always exists. For (b), Theorem A.3 gives us that $R$ being strictly convex implies that $C$ is everywhere differentiable. To prove (c), we note that we already proved that for any differentiable $C$, the sets $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$ and $\mathrm{dom}(C^*)$ are identical except possibly for points occurring at their respective relative boundaries. Thus, $\mathrm{dom}(C^*) = \mathrm{dom}(R) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O}))),$ which implies (c). $\square$ + +## B. PROOF OF LEMMA 4.6 + +Let $g(t) := D_f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|, \mathbf{x})$. Notice that $g(0) = 0$, and $g'(0) = 0$ since $f$ is differentiable and $D_f(\boldsymbol{x}, \boldsymbol{x}')$ is minimized at $\boldsymbol{x} = \boldsymbol{x}'$ or, equivalently, $g(t)$ is minimized at $t = 0$.
Using the fundamental theorem of calculus, it follows that + +$$ D_f(\boldsymbol{x}+\boldsymbol{r}, \boldsymbol{x}) = g(\|\boldsymbol{r}\|) - g(0) = \int_0^{\|\boldsymbol{r}\|} g'(s)ds = \int_0^{\|\boldsymbol{r}\|} (g'(s)-g'(0))ds = \int_0^{\|\boldsymbol{r}\|} \int_0^s g''(t)\,dt\,ds. $$ +---PAGE_BREAK--- + +Because + +$$g(t) = D_f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|, \mathbf{x}) = f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|) - f(\mathbf{x}) - \nabla f(\mathbf{x}) \cdot (t\mathbf{r}/\|\mathbf{r}\|),$$ + +we obtain + +$$g'(t) = \nabla f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|) \cdot (\mathbf{r}/\|\mathbf{r}\|) - \nabla f(\mathbf{x}) \cdot (\mathbf{r}/\|\mathbf{r}\|).$$ + +Taking the derivative of the above expression with respect to $t$, we further have + +$$g''(t) = (\mathbf{r}/\|\mathbf{r}\|)^{\top} \nabla^2 f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|)(\mathbf{r}/\|\mathbf{r}\|).$$ + +Because the curvature of $f$ at $\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|$ is lower bounded by the smallest eigenvalue and upper bounded by the largest eigenvalue of $\nabla^2 f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|)$, it must be true that $a \le g''(t) \le b$. Thus, + +$$\int_0^{\|\mathbf{r}\|} \int_0^s a\, dt\, ds \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t)\, dt\, ds \le \int_0^{\|\mathbf{r}\|} \int_0^s b\, dt\, ds \implies$$ + +$$\int_0^{\|\mathbf{r}\|} as \, ds \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t) \, dt \, ds \le \int_0^{\|\mathbf{r}\|} b s \, ds \implies$$ + +$$\frac{a\|\mathbf{r}\|^2}{2} \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t)\, dt\, ds \le \frac{b\|\mathbf{r}\|^2}{2}.$$ + +As $\mathbf{r}$ can be chosen arbitrarily as long as $\mathbf{x}$ and $\mathbf{x} + \mathbf{r}$ are both in dom($f$), we establish Inequality 11.
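The two-sided bound of Inequality 11 is easy to confirm numerically for a function whose Hessian eigenvalues are known. In the sketch below (my own illustration, not from the paper), $f$ is a quadratic with Hessian $\mathrm{diag}(1, 3)$, so the eigenvalue bounds are $a = 1$ and $b = 3$, and the Bregman divergence must satisfy $\frac{a}{2}\|\mathbf{r}\|^2 \le D_f(\mathbf{x}+\mathbf{r}, \mathbf{x}) \le \frac{b}{2}\|\mathbf{r}\|^2$:

```python
def f(x):
    # quadratic with Hessian diag(1, 3); eigenvalue bounds a = 1, b = 3
    return 0.5 * (x[0] ** 2 + 3.0 * x[1] ** 2)

def grad_f(x):
    return [x[0], 3.0 * x[1]]

def bregman(y, x):
    # D_f(y, x) = f(y) - f(x) - grad f(x) . (y - x)
    g = grad_f(x)
    return f(y) - f(x) - sum(gi * (yi - xi) for gi, yi, xi in zip(g, y, x))

a, b = 1.0, 3.0
x, r = [0.4, -0.2], [0.05, 0.1]
norm_sq = sum(ri ** 2 for ri in r)
d = bregman([xi + ri for xi, ri in zip(x, r)], x)
assert a * norm_sq / 2 <= d <= b * norm_sq / 2  # Inequality 11
```

For a quadratic $f$ the divergence is exactly $\frac{1}{2}\mathbf{r}^\top \nabla^2 f\, \mathbf{r}$, so the bounds are tight when $\mathbf{r}$ aligns with an eigenvector of the Hessian.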
# Cosmology of a polynomial model for de Sitter gauge theory sourced by a fluid

Jia-An Lu¹

School of Physics, Sun Yat-sen University, Guangzhou 510275, China

**Abstract**

In the de Sitter gauge theory (DGT), the fundamental variables are the de Sitter (dS) connection and the gravitational Higgs/Goldstone field $\xi^A$. Previously, a model for DGT was analyzed, which generalizes the MacDowell–Mansouri gravity to have a variable cosmological constant $\Lambda = 3/l^2$, where $l$ is related to $\xi^A$ by $\xi^A\xi_A = l^2$. It was shown that the model sourced by a perfect fluid supports neither a radiation epoch nor the accelerated expansion of the parity-invariant universe. In this work, I consider a similar model, namely, the Stelle–West gravity, and couple it to a modified perfect fluid, such that the total Lagrangian 4-form is polynomial in the gravitational variables. The Lagrangian of the modified fluid has a nontrivial variational derivative with respect to $l$, and as a result, the problems encountered in the previous work no longer appear. Moreover, to display the elegance of the general theory, as well as to write down the basic framework, I perform the Lagrangian–Noether analysis for DGT sourced by a matter field, yielding the field equations and the identities associated with the symmetries of the system. The resulting formulas are dS covariant and do not rely on the existence of a metric field.
PACS numbers: 04.50.Kd, 98.80.Jk, 04.20.Cv

Key words: Stelle–West gravity, gauge theory of gravity, cosmic acceleration

# 1 Introduction

The gauge theories of gravity (GTG) aim at treating gravity as a gauge field, in particular, at constructing a Yang–Mills-type Lagrangian which reduces to GR in some limiting case while providing some novel falsifiable predictions. A well-founded subclass of GTG is the Poincaré gauge theory (PGT) [1-5], in which the gravitational field consists of the Lorentz connection and the co-tetrad field. Moreover, the PGT can be reformulated as de Sitter gauge theory (DGT), in which the Lorentz connection and the co-tetrad field are united into a de Sitter (dS) connection [6,7]. In fact, before the idea of DGT was realized, a related Yang–Mills-type Lagrangian for gravity was proposed by MacDowell and Mansouri [8], and reformulated into a dS-invariant form by West [9], which reads

$$
\begin{aligned}
\mathcal{L}^{\text{MM}} &= \epsilon_{ABCDE} \xi^E \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
&= \epsilon_{\alpha\beta\gamma\delta} (l R^{\alpha\beta} \wedge R^{\gamma\delta} - 2l^{-1} R^{\alpha\beta} \wedge e^{\gamma} \wedge e^{\delta} + l^{-3} e^{\alpha} \wedge e^{\beta} \wedge e^{\gamma} \wedge e^{\delta}),
\end{aligned}
\quad (1)
$$

where $\epsilon_{ABCDE}$ and $\epsilon_{\alpha\beta\gamma\delta}$ are the 5d and 4d Levi-Civita symbols, $\xi^A$ is a dS vector constrained by $\xi^A\xi_A = l^2$, $l$ is a positive constant, $\mathcal{F}^{AB}$ is the dS curvature, $R^{\alpha\beta}$ is the Lorentz curvature, and $e^\alpha$ is the orthonormal co-tetrad field. This theory is equivalent to the Einstein–Cartan (EC) theory with a cosmological constant $\Lambda = 3/l^2$ plus a Gauss–Bonnet (GB) topological term, as seen in Eq. (1).

¹Email: ljagdgz@163.com

Note that some special gauges with residual Lorentz symmetry can be defined by $\xi^A = \delta^A{}_4 l$.
Hence, $\xi^A$ is akin to an unphysical Goldstone field. To make $\xi^A$ physical, so that it becomes the gravitational Higgs field, one may replace the constant $l$ by a dynamical $l$, resulting in the Stelle–West (SW) theory [7]. The theory is further explored in Refs. [10,11] (see also the review [12]), in which the constraint $\xi^A\xi_A = l^2$ is completely removed; in other words, $\xi^A\xi_A$ need not be positive. Suppose that $\xi^A\xi_A = \sigma l^2$, where $\sigma = \pm 1$. When $l \neq 0$, the metric field can be defined by $g_{\mu\nu} = (\tilde{D}_\mu\xi^A)(\tilde{D}_\nu\xi_A)$, where $\tilde{D}_\mu\xi^A = \tilde{\delta}^A{}_B D_\mu\xi^B$, $\tilde{\delta}^A{}_B = \delta^A{}_B - \xi^A\xi_B/\sigma l^2$, $D_\mu\xi^A = d_\mu\xi^A + \Omega^A{}_{B\mu}\xi^B$, and $\Omega^A{}_{B\mu}$ is the dS connection. It was shown that $\sigma = \pm 1$ corresponds to the Lorentz/Euclidean signature of the metric field, and that the signature changes when $\xi^A\xi_A$ changes its sign [11].

On the other hand, it remains to check whether the SW gravity is viable. Although the SW Lagrangian reduces to the MM Lagrangian when $l$ is a constant, the field equations do not. In the SW theory, there is an additional field equation coming from the variation with respect to $l$, which is nontrivial even when $l$ is a constant. Indeed, a recent work [13] presents some negative results for a related model, whose Lagrangian is equal to the SW one times $(-l/2)$. For a homogeneous and isotropic universe with parity-invariant torsion, it is found that a constant $l$ implies a constant energy density of the material fluid, and so $l$ should not be a constant in the general case. Moreover, in the radiation epoch, the $l$ equation forces the energy density to vanish, while in the matter epoch a dynamical $l$ only works to renormalize the gravitational constant by some constant factor, and hence the cosmic expansion decelerates as in GR.
In this work, it is shown that the SW gravity suffers from problems similar to those encountered in the model considered by Ref. [13]. I then try to solve these problems by using a new fluid whose Lagrangian is polynomial in the gravitational variables. The merit of a Lagrangian polynomial in some variables is that it is simple and nonsingular with respect to those variables. In Refs. [14,15], polynomial Lagrangians for gravitation and other fundamental fields were proposed, while in this paper a polynomial Lagrangian for a perfect fluid is proposed, which reduces to the Lagrangian of a usual perfect fluid when $l$ is a constant. It turns out that, in contrast to the case with an ordinary fluid, the SW gravity coupled with the new fluid supports the radiation epoch and naturally drives the cosmic acceleration. In addition, when writing down the basic framework of DGT, a Lagrangian–Noether analysis is performed, which generalizes the results of Ref. [16] to the case with an arbitrary matter field and arbitrary $\xi^A$.

The article is organized as follows. In Sec. 2.1, a Lagrangian–Noether analysis is done for the general DGT sourced by a matter field. In Sec. 2.2, I reduce the analysis of Sec. 2.1 in the Lorentz gauges, and show how the two Noether identities in PGT can be elegantly unified into one identity in DGT. In Sec. 3.1, the SW model of DGT is introduced, with the field equations derived both in the general gauge and in the Lorentz gauges. Further, the matter source is discussed in Sec. 3.2, where a modified perfect fluid with a Lagrangian polynomial in the gravitational variables is constructed, and a general class of perfect fluids is defined, which contains both the usual and the modified perfect fluids. Then I couple the SW gravity with this class of fluids and study the coupled system in the homogeneous, isotropic and parity-invariant universe. The field equations are deduced in Sec. 4.1 and solved in Sec. 4.2, and the results are compared with observations in Sec. 4.3. In Sec. 5, I give some conclusions, and discuss the remaining problems and possible solutions.

# 2 de Sitter gauge theory

## 2.1 Lagrangian–Noether machinery

The DGT sourced by a matter field is described by the Lagrangian 4-form

$$ \mathcal{L} = \mathcal{L}(\psi, D\psi, \xi^A, D\xi^A, \mathcal{F}^{AB}), \quad (2) $$

where $\psi$ is a $p$-form valued in some representation space of the dS group $SO(1, 4)$, $D\psi = d\psi + \Omega^{AB}T_{AB} \wedge \psi$ is the covariant exterior derivative, $T_A{}^B$ are representations of the dS generators, $\xi^A$ is a dS vector, $D\xi^A = d\xi^A + \Omega^A{}_B\xi^B$, $\Omega^A{}_B$ is the dS connection 1-form, and $\mathcal{F}^A{}_B = d\Omega^A{}_B + \Omega^A{}_C \wedge \Omega^C{}_B$ is the dS curvature 2-form. The variation of $\mathcal{L}$ resulting from the variations of the explicit variables reads

$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge \partial\mathcal{L}/\partial\psi + \delta D\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial\xi^A + \delta D\xi^A \wedge \partial\mathcal{L}/\partial D\xi^A \\ & + \delta\mathcal{F}^{AB} \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \end{aligned} \quad (3) $$

where $(\partial\mathcal{L}/\partial\psi)_{\mu_{p+1}\cdots\mu_4} \equiv \partial\mathcal{L}_{\mu_1\cdots\mu_p\mu_{p+1}\cdots\mu_4}/\partial\psi_{\mu_1\cdots\mu_p}$, and the other partial derivatives are similarly defined.
The variations of $D\psi$, $D\xi^A$ and $\mathcal{F}^{AB}$ can be transformed into variations of the fundamental variables $\psi$, $\xi^A$, and $\Omega^{AB}$, leading to

$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge V_{\psi} + \delta\xi^A \cdot V_A + \delta\Omega^{AB} \wedge V_{AB} \\ & + d(\delta\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial D\xi^A + \delta\Omega^{AB} \wedge \partial\mathcal{L}/\partial \mathcal{F}^{AB}), \end{aligned} \quad (4) $$

where

$$ V_{\psi} = \delta \mathcal{L} / \delta \psi = \partial \mathcal{L} / \partial \psi - (-1)^p D\, \partial \mathcal{L} / \partial D \psi, \quad (5) $$

$$ V_A = \delta \mathcal{L} / \delta\xi^A = \partial \mathcal{L} / \partial\xi^A - D\, \partial \mathcal{L} / \partial D\xi^A, \quad (6) $$

$$ V_{AB} = \delta \mathcal{L} / \delta\Omega^{AB} = T_{AB}\psi \wedge \partial \mathcal{L} / \partial D\psi + \partial \mathcal{L} / \partial D\xi^{[A} \cdot \xi_{B]} + D\, \partial \mathcal{L} / \partial \mathcal{F}^{AB}. \quad (7) $$

The symmetry transformations in DGT consist of the diffeomorphism transformations and the dS transformations. The diffeomorphism transformations can be promoted to a gauge-invariant version [16,17], namely, the parallel transports in the fiber bundle with the gauge group as the structure group. The action of an infinitesimal parallel transport on a variable is a gauge-covariant Lie derivative² $L_v = v\lrcorner D + D\, v\lrcorner$, where $v$ is the vector field which generates the infinitesimal parallel transport, and $\lrcorner$ denotes a contraction, for example, $(v\lrcorner\psi)_{\mu_2\cdots\mu_p} = v^{\mu_1}\psi_{\mu_1\mu_2\cdots\mu_p}$. Put $\delta = L_v$ in Eq. (3), utilize the arbitrariness of $v$, and one obtains the chain rule

$$ v\lrcorner\mathcal{L} = (v\lrcorner\psi) \wedge \partial\mathcal{L}/\partial\psi + (v\lrcorner D\psi) \wedge \partial\mathcal{L}/\partial D\psi + (v\lrcorner D\xi^A) \cdot \partial\mathcal{L}/\partial D\xi^A + (v\lrcorner\mathcal{F}^{AB}) \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \quad (8) $$

and the first Noether identity

$$ (v\lrcorner D\psi) \wedge V_{\psi} + (-1)^p(v\lrcorner\psi) \wedge DV_{\psi} + (v\lrcorner D\xi^A) \cdot V_A + (v\lrcorner\mathcal{F}^{AB}) \wedge V_{AB} = 0. \quad (9) $$

²The gauge-covariant Lie derivative has been used in the metric-affine gauge theory of gravity [18].

On the other hand, the dS transformations are defined as vertical isomorphisms on the fiber bundle. The actions of an infinitesimal dS transformation on the fundamental variables are as follows:

$$ \delta\psi = B^{AB}T_{AB}\psi, \quad \delta\xi^A = B^{AB}\xi_B, \quad \delta\Omega^{AB} = -DB^{AB}, \qquad (10) $$

where $B^A{}_B$ is a dS algebra-valued function which generates the infinitesimal dS transformation. Substitute Eq. (10) and $\delta\mathcal{L} = 0$ into Eq. (4), make use of Eq. (7) and the arbitrariness of $B^{AB}$, and one arrives at the second Noether identity

$$ DV_{AB} = -T_{AB}\psi \wedge V_{\psi} - V_{[A} \cdot \xi_{B]}. \qquad (11) $$

The above analyses are so general that they do not require the existence of a metric field. In the special case where a metric field is defined, $\xi^A \xi_A$ equals a positive constant, and $p=0$, the above analyses coincide with those in Ref. [16].

## 2.2 Reduction in the Lorentz gauges

Consider the case with $\xi^A \xi_A = l^2$, where $l$ is a positive function.
Then we may define the projector $\tilde{\delta}^A{}_B = \delta^A{}_B - \xi^A \xi_B / l^2$, the generalized tetrad $\tilde{D} \xi^A = \tilde{\delta}^A{}_B D \xi^B$, and a symmetric rank-2 tensor³

$$ g_{\mu\nu} = \eta_{AB}(\tilde{D}_{\mu}\xi^{A})(\tilde{D}_{\nu}\xi^{B}), \qquad (12) $$

which is a localization of the dS metric $\hat{g}_{\mu\nu} = \eta_{AB}(d_{\mu}\hat{\xi}^{A})(d_{\nu}\hat{\xi}^{B})$, where $\hat{\xi}^{A}$ are the 5d Minkowski coordinates on the 4d dS space. Though Eq. (12) seems less natural than the choice $g^{*}_{\mu\nu} = \eta_{AB}(D_{\mu}\xi^{A})(D_{\nu}\xi^{B})$, it coincides with another natural identification (15) (the relation between Eqs. (12) and (15) will be discussed later). If $g_{\mu\nu}$ is non-degenerate, it is a metric field with Lorentz signature, and one may define $\tilde{D}^{\mu}\xi_A \equiv g^{\mu\nu}\tilde{D}_{\nu}\xi_A$. Put $v^\mu = \tilde{D}^\mu\xi_A$ in Eq. (9) and utilize $(\tilde{D}_\mu\xi^A)(\tilde{D}^\mu\xi_B) = \tilde{\delta}^A{}_B$; we get

$$ \begin{aligned} \tilde{V}_A = &-(\tilde{D}\xi_A\lrcorner D\psi) \wedge V_\psi - (-1)^p(\tilde{D}\xi_A\lrcorner\psi) \wedge DV_\psi - (\tilde{D}\xi_A\lrcorner d\ln l) \cdot V_C\xi^C \\ &-(\tilde{D}\xi_A\lrcorner\mathcal{F}^{CD}) \wedge V_{CD}, \end{aligned} \qquad (13) $$

where $\tilde{V}_A = \tilde{\delta}^B{}_AV_B$. When $l$ is a constant, Eq. (13) implies that the $\xi^A$ equation ($\tilde{V}_A = 0$ in this case) can be deduced from the other field equations ($V_\psi = 0$ and $V_{CD} = 0$), as pointed out by Ref. [19]. Substitute Eq. (13) into Eq. (11), and make use of $\tilde{V}_{[A} \cdot \xi_{B]} = V_{[A} \cdot \xi_{B]}$ and $\tilde{D}\xi_{[A} \cdot \xi_{B]} = D\xi_{[A} \cdot \xi_{B]}$; one attains

$$ \begin{aligned} DV_{AB} = &-T_{AB}\psi \wedge V_{\psi} + ((D\xi_{[A} \cdot \xi_{B]})\lrcorner D\psi) \wedge V_{\psi} + (-1)^p((D\xi_{[A} \cdot \xi_{B]})\lrcorner\psi) \wedge DV_{\psi} \\ &+((D\xi_{[A} \cdot \xi_{B]})\lrcorner d \ln l) \cdot V_C\xi^C + ((D\xi_{[A} \cdot \xi_{B]})\lrcorner\mathcal{F}^{CD}) \wedge V_{CD}. \end{aligned} \qquad (14) $$

When $l$ is a constant, Eq. (14) coincides with the corresponding result in Ref. [16]. As will be shown later, Eq. (14) unifies the two Noether identities in PGT.

To see this, let us define the Lorentz gauges by the condition $\xi^A = \delta^A{}_4 l$ [7]. If $h^A{}_B \in SO(1, 4)$ preserves these gauges, then $h^A{}_B = \text{diag}(h^\alpha{}_\beta, 1)$, where $h^\alpha{}_\beta$ belongs to the Lorentz group $SO(1, 3)$. In the Lorentz gauges, $\Omega^\alpha{}_\beta$ transforms as a Lorentz connection, and $\Omega^{\alpha}{}_4$ transforms as a co-tetrad field. Therefore, one may identify $\Omega^{\alpha}{}_{\beta}$ as the spacetime connection $\Gamma^{\alpha}{}_{\beta}$, and $\Omega^{\alpha}{}_4$ as the co-tetrad field $e^{\alpha}$ divided by some quantity with the dimension of length, a natural choice for which is $l$. As a result, $\Omega^{AB}$ is identified with a combination of geometric quantities as follows:

$$ \Omega^{AB} = \begin{pmatrix} \Gamma^{\alpha\beta} & l^{-1}e^{\alpha} \\ -l^{-1}e^{\beta} & 0 \end{pmatrix}. \qquad (15) $$

³This formula has been given by Refs. [11,19], and differs from the one originally proposed by Stelle and West [7] by a factor $(l_0/l)^2$, where $l_0$ is the vacuum expectation value of $l$.

In the case with constant $l$, this formula has been given by Refs. [7,20], and, in the case with varying $l$, by Refs. [10,19]. In the Lorentz gauges, $\tilde{D}\xi^4 = 0$ and $\tilde{D}\xi^{\alpha} = \Omega^{\alpha}{}_4 l = e^{\alpha}$ (where Eq. (15) is used), and so $g_{\mu\nu}$ defined by Eq. (12) satisfies $g_{\mu\nu} = \eta_{\alpha\beta}e^{\alpha}{}_{\mu}e^{\beta}{}_{\nu}$, implying that Eq. (12) coincides with Eq. (15). Moreover, according to Eq.
(15), one finds the expression for $\mathcal{F}^{AB}$ in the Lorentz gauges as follows [19]:

$$ \mathcal{F}^{AB} = \begin{pmatrix} R^{\alpha\beta} - l^{-2}e^{\alpha} \wedge e^{\beta} & l^{-1}[S^{\alpha} - d \ln l \wedge e^{\alpha}] \\ -l^{-1}[S^{\beta} - d \ln l \wedge e^{\beta}] & 0 \end{pmatrix}, \qquad (16) $$

where $R^{\alpha}{}_{\beta} = d\Gamma^{\alpha}{}_{\beta} + \Gamma^{\alpha}{}_{\gamma} \wedge \Gamma^{\gamma}{}_{\beta}$ is the spacetime curvature, and $S^{\alpha} = de^{\alpha} + \Gamma^{\alpha}{}_{\beta} \wedge e^{\beta}$ is the spacetime torsion.

We are now ready to interpret the results of Sec. 2.1 in the Lorentz gauges. In those gauges, $D\psi = D^{\Gamma}\psi + 2l^{-1}e^{\alpha}T_{\alpha4} \wedge \psi$, $D\xi^{\alpha} = e^{\alpha}$, $D\xi^4 = dl$, and so Eq. (2) becomes

$$ \mathcal{L} = \mathcal{L}^L(\psi, D^\Gamma \psi, l, dl, e^\alpha, R^{\alpha\beta}, S^\alpha), \qquad (17) $$

where $D^{\Gamma}\psi = d\psi + \Gamma^{\alpha\beta}T_{\alpha\beta} \wedge \psi$. It is the same as a Lagrangian 4-form in PGT [21], with the fundamental variables being $\psi$, $l$, $\Gamma^{\alpha\beta}$ and $e^{\alpha}$. The relations between the variational derivatives with respect to the PGT variables and those with respect to the DGT variables can be deduced from the following equality:

$$ \delta\xi^A \cdot V_A + 2\delta\Omega^{\alpha4} \wedge V_{\alpha4} = \delta l \cdot \Sigma_l + \delta e^\alpha \wedge \Sigma_\alpha, \qquad (18) $$

where $\Sigma_l \equiv \delta\mathcal{L}^L/\delta l$ and $\Sigma_\alpha \equiv \delta\mathcal{L}^L/\delta e^\alpha$. Explicitly, the relations are:

$$ \Sigma_{\psi} \equiv \delta \mathcal{L}^L / \delta \psi = V_{\psi}, \qquad (19) $$

$$ \Sigma_l = V_4 - 2l^{-2}e^\alpha \wedge V_{\alpha 4}, \qquad (20) $$

$$ \Sigma_{\alpha\beta} \equiv \delta\mathcal{L}^L/\delta\Gamma^{\alpha\beta} = V_{\alpha\beta}, \qquad (21) $$

$$ \Sigma_\alpha = 2l^{-1}V_{\alpha 4}. \qquad (22) $$

It is remarkable that the DGT variational derivative $V_{AB}$ unifies the two PGT variational derivatives $\Sigma_{\alpha\beta}$ and $\Sigma_{\alpha}$. With the help of Eqs. (19)-(22), the $\alpha\beta$ components and $\alpha 4$ components of Eq. (14) are found to be

$$ D^\Gamma \Sigma_{\alpha\beta} = -T_{\alpha\beta} \psi \wedge \Sigma_\psi + e_{[\alpha} \wedge \Sigma_{\beta]}, \qquad (23) $$

$$ D^\Gamma \Sigma_\alpha = (e_\alpha\lrcorner D^\Gamma \psi) \wedge \Sigma_\psi + (-1)^p (e_\alpha\lrcorner \psi) \wedge D^\Gamma \Sigma_\psi + \partial_\alpha l \cdot \Sigma_l + (e_\alpha\lrcorner R^{\beta\gamma}) \wedge \Sigma_{\beta\gamma} + (e_\alpha\lrcorner S^\beta) \wedge \Sigma_\beta, \qquad (24) $$

which are just the two Noether identities in PGT [21], with both $\psi$ and $l$ as matter fields. This completes the proof of the earlier statement that the DGT identity (14) unifies the two Noether identities in PGT.

# 3 Polynomial models for DGT

## 3.1 Stelle–West gravity

It is natural to require that the Lagrangian for DGT be regular with respect to the fundamental variables. The simplest regular Lagrangians are polynomial in the variables, and, in order to recover the EC theory, the polynomial Lagrangian should be at least linear in the gauge curvature. Moreover, to ensure that $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution, the polynomial Lagrangian should be at least quadratic in $\mathcal{F}^{AB}$.⁴
The general Lagrangian quadratic in $\mathcal{F}^{AB}$ reads:

$$
\begin{aligned}
\mathcal{L}^G &= (\kappa_1 \epsilon_{ABCDE} \xi^E + \kappa_2 \eta_{AC} \xi_B \xi_D + \kappa_3 \eta_{AC} \eta_{BD}) \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
&= \kappa_1 \mathcal{L}^{\text{SW}} + \kappa_2 (S^\alpha \wedge S_\alpha - 2S^\alpha \wedge d \ln l \wedge e_\alpha) \\
&\quad + \kappa_3 [R^{\alpha\beta} \wedge R_{\alpha\beta} + d(2l^{-2} S^\alpha \wedge e_\alpha)],
\end{aligned}
\quad (25)
$$

where the $\kappa_1$ term is the SW Lagrangian, the $\kappa_2$ and $\kappa_3$ terms are parity odd, and the $\kappa_3$ term is a sum of the Pontryagin and modified Nieh–Yan topological terms. This quadratic Lagrangian is a special case of the at most quadratic Lagrangian proposed in Refs. [10,22]; note that the quadratic Lagrangian satisfies the above requirement concerning the vacuum solution, while the at most quadratic Lagrangian does not always satisfy it.

Among the three terms in Eq. (25), the SW term is the only one that can be reduced to the EC Lagrangian in the case with positive and constant $\xi^A\xi_A$. Thus the SW Lagrangian is the simplest choice for the gravitational Lagrangian which (i) is regular with respect to the fundamental variables; (ii) can be reduced to the EC Lagrangian; and (iii) ensures that $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution.

The SW Lagrangian 4-form $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the first line of Eq. (1), while $\xi^A$ is not constrained by any condition. Substitute Eq. (1) into Eqs. (6)-(7), make use of $\partial\mathcal{L}^{\text{SW}}/\partial\mathcal{F}^{AB} = \epsilon_{ABCDE} \xi^E \mathcal{F}^{CD}$ and the Bianchi identity $D\mathcal{F}^{AB} = 0$, and one immediately gets the gravitational field equations

$$ -\kappa \epsilon_{ABCDE} \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \xi^E, \quad (26) $$

$$ -\kappa \epsilon_{ABCDE} D\xi^E \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \Omega^{AB}, \quad (27) $$

where $\mathcal{L}^m$ is the Lagrangian of the matter field coupled to the SW gravity, with $\kappa$ as the coupling constant. In the vacuum case, Eq. (27) has been given by Ref. [22] by direct computation, while here Eq. (27) is obtained from the general formula (7).

In the Lorentz gauges, $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the second line of Eq. (1), while $l$ becomes a dynamical field. The gravitational field equations read

$$ -(\kappa/4)\epsilon_{\alpha\beta\gamma\delta}\epsilon^{\mu\nu\sigma\rho}e^{-1}R^{\alpha\beta}{}_{\mu\nu}R^{\gamma\delta}{}_{\sigma\rho} - 4\kappa l^{-2}R + 72\kappa l^{-4} = \delta S_m/\delta l, \quad (28) $$

$$ -\kappa \epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}{}_{\sigma\rho} + 8\kappa e_{[\alpha}{}^{\mu} e_{\beta]}{}^{\nu} \partial_{\nu} l^{-1} + 4\kappa l^{-1} T^{\mu}{}_{\alpha\beta} = \delta S_m / \delta \Gamma^{\alpha\beta}{}_{\mu}, \quad (29) $$

$$ -8\kappa l^{-1}(G^{\mu}{}_{\alpha} + \Lambda e_{\alpha}{}^{\mu}) = \delta S_m / \delta e^{\alpha}{}_{\mu}, \quad (30) $$

where $e = \det(e^{\alpha}{}_{\mu})$, $R$ is the scalar curvature, $G^{\mu}{}_{\alpha}$ is the Einstein tensor, $T^{\mu}{}_{\alpha\beta} = S^{\mu}{}_{\alpha\beta} + 2e_{[\alpha}{}^{\mu} S^{\nu}{}_{\beta]\nu}$, and $S_m$ is the action of the matter field.
⁴When the Lagrangian is linear in $\mathcal{F}^{AB}$, we may add some 'constant term' (independent of $\mathcal{F}^{AB}$) to ensure that $\mathcal{F}^{AB}=0$ is a vacuum solution, but this way is not so natural.

## 3.2 Polynomial dS fluid

For the same reason that led us to a polynomial Lagrangian for DGT, we intend to use matter sources with polynomial Lagrangians. It has been shown that the Lagrangians of fundamental fields can be reformulated into polynomial forms [14,15]. However, when describing the universe, it is more appropriate to use a fluid as the matter source. The Lagrangian of an ordinary perfect fluid [23] can be written in a Lorentz-invariant form:

$$ \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = -\epsilon_{\alpha\beta\gamma\delta} e_{\mu}^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \rho + \epsilon_{\alpha\beta\gamma\delta} J^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \wedge \partial_{\mu}\phi, \quad (31) $$

where $\phi$ is a scalar field, $J^\alpha$ is the particle number current, which is Lorentz covariant and satisfies $J^\alpha J_\alpha < 0$, $\rho = \rho(n)$ is the energy density, and $n \equiv \sqrt{-J^\alpha J_\alpha}$ is the particle number density. The Lagrangian (31) is polynomial in the PGT variable $e^\alpha_\mu$, but it is not polynomial in the DGT variables when it is reformulated into a dS-invariant form, in which case the Lagrangian reads

$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)(\xi^E/l)\rho \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge (\xi^E/l)\partial_\mu\phi, \end{aligned} \quad (32) $$

where $J^A$ is a dS-covariant particle number current, which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$, $\rho = \rho(n)$, and $n \equiv \sqrt{-J^AJ_A}$. Because $l^{-1}$ appears in Eq. (32), the Lagrangian is not polynomial in $\xi^A$.
A straightforward way to modify Eq. (32) into a polynomial Lagrangian is to multiply it by $l$. In the Lorentz gauges, $J^4 = 0$, and we may define the invariant $J^\mu \equiv J^\alpha e_\alpha{}^\mu$. The modified Lagrangian then reads $\mathcal{L}_{\mu\nu\rho\sigma}^{\prime\text{PF}} = -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge l \cdot \partial_\mu\phi$. It can be verified that this Lagrangian violates the particle number conservation law $\nabla_\mu J^\mu = 0$, where $\nabla_\mu$ is the linearly covariant, metric-compatible and torsion-free derivative. To preserve particle number conservation, we may replace $l \cdot \partial_\mu\phi$ by $\partial_\mu(l\phi)$, and the corresponding dS-invariant Lagrangian is

$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)\xi^E\rho(n) \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge \left(\frac{1}{4}D_\mu\xi^E \cdot \phi + \xi^E \partial_\mu\phi\right). \end{aligned} \quad (33) $$

The perfect fluid described by the above Lagrangian is called the polynomial dS fluid, or dS fluid for short. In the Lorentz gauges,

$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + \epsilon_{\alpha\beta\gamma\delta}J^\alpha e^\beta_\nu e^\gamma_\rho e^\delta_\sigma \wedge (\partial_\mu l \cdot \phi + l \cdot \partial_\mu \phi) \\ &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge \partial_\mu(l\phi), \end{aligned} \quad (34) $$

which is equivalent to Eq. (31) when $l$ is a constant.

Define the Lagrangian function $\mathcal{L}_{\text{DF}}$ by $\mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = \mathcal{L}_{\text{DF}}\, e\epsilon_{\mu\nu\rho\sigma}$; then $\mathcal{L}_{\text{DF}} = -\rho l + J^\mu \partial_\mu(l\phi)$.
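The conservation law $\nabla_\mu J^\mu = 0$ invoked above can be illustrated numerically: in a spatially flat FRW background with comoving 4-velocity it reduces to $\dot{n} + 3n\dot{a}/a = 0$, solved by $n \propto a^{-3}$ (a standard fact). The toy scale factor below is an assumption made only for this check.

```python
# Sketch (not from the paper): check that n(t) ∝ a(t)**-3 satisfies the FRW
# form of particle number conservation, ndot + 3 n adot/a = 0, by finite
# differences. The matter-era scale factor a(t) = t**(2/3) is a toy choice.

def a(t):
    return t ** (2.0 / 3.0)

def n(t, n0=1.0):
    return n0 * a(t) ** -3        # comoving number density dilutes as a^-3

eps = 1e-6
for t in [1.0, 2.0, 4.0]:
    ndot = (n(t + eps) - n(t - eps)) / (2 * eps)
    adot = (a(t + eps) - a(t - eps)) / (2 * eps)
    assert abs(ndot + 3 * n(t) * adot / a(t)) < 1e-6
print("particle number conservation verified")
```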
To compare the polynomial dS fluid with the ordinary perfect fluid, let us consider a general model with the Lagrangian function

$$ \mathcal{L}_m = -\rho l^k + J^\mu \partial_\mu (l^k \phi), \quad (35) $$

where $k \in \mathbb{R}$. When $k=0$, it describes the ordinary perfect fluid; when $k=1$, it describes the polynomial dS fluid. The variation of $S_m = \int d^4x\, e\, \mathcal{L}_m$ with respect to $\phi$ gives the particle number conservation law $\nabla_{\mu}J^{\mu} = 0$. The variation with respect to $J^{\alpha}$ yields $\partial_{\mu}(l^{k}\phi) = -\mu U_{\mu}l^{k}$, where $\mu \equiv d\rho/dn = (\rho+p)/n$ is the chemical potential, $p = p(n)$ is the pressure, and $U^{\mu} \equiv J^{\mu}/n$ is the 4-velocity of the fluid particles. Making use of these results, one may check that the on-shell Lagrangian function is equal to $pl^{k}$, and that the variational derivatives are

$$ \delta S_m / \delta l = -k \rho l^{k-1}, \quad (36) $$

$$ \delta S_m / \delta \Gamma^{\alpha\beta}{}_{\mu} = 0, \quad (37) $$

$$ \delta S_m / \delta e^\alpha{}_\mu = (\rho + p) l^k U^\mu U_\alpha + p l^k e_\alpha{}^\mu. \quad (38) $$

It is seen that $\delta S_m / \delta l = 0$ for the ordinary perfect fluid, while $\delta S_m / \delta l = -\rho$ for the polynomial dS fluid.

Finally, it should be noted that the polynomial dS fluid does not support a signature change corresponding to $\xi^A\xi_A$ varying from negative to positive. The reason is that when $\xi^A\xi_A < 0$, there exists no $J^A$ which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$.
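The thermodynamic relation quoted above, $\mu \equiv d\rho/dn = (\rho+p)/n$, i.e. $p = n\,d\rho/dn - \rho$, can be checked for a sample equation of state. The power-law $\rho(n)$ below is an assumption chosen so that the resulting pressure obeys $p = w\rho$.

```python
# Sketch (not from the paper): verify p = n * d(rho)/dn - rho numerically for
# the assumed barotropic density rho(n) = n**(1+w), which should give p = w*rho.

w = 1.0 / 3.0                       # radiation-like equation of state

def rho(n):
    return n ** (1.0 + w)

def pressure(n, eps=1e-6):
    drho_dn = (rho(n + eps) - rho(n - eps)) / (2 * eps)   # numerical chemical potential mu
    return n * drho_dn - rho(n)

for n in [0.5, 1.0, 2.0]:
    assert abs(pressure(n) - w * rho(n)) < 1e-6
print("p = w * rho verified")
```

This is why, with $p = w\rho$ and constant $w$, the continuity equation derived in Sec. 4.2 integrates to the familiar power law $\rho \propto a^{-3(1+w)}$.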
# 4 Cosmological solutions

## 4.1 Field equations for the universe

In this section, the coupled system of the SW gravity and the fluid model (35) will be analyzed in the homogeneous, isotropic, parity-invariant and spatially flat universe characterized by the following ansatz [13]:

$$ e^0{}_\mu = d_\mu t, \quad e^i{}_\mu = a \, d_\mu x^i, \quad (39) $$

$$ S^0{}_{\mu\nu} = 0, \quad S^i{}_{\mu\nu} = b\, e^0{}_\mu \wedge e^i{}_\nu, \quad (40) $$

where $a$ and $b$ are functions of the cosmic time $t$, and $i = 1, 2, 3$. On account of Eqs. (39)-(40), the Lorentz connection $\Gamma^{\alpha\beta}{}_\mu$ and curvature $R^{\alpha\beta}{}_{\mu\nu}$ can be calculated [13]. Further, assume that $U^\mu = e_0{}^\mu$; then $U_\mu = -e^0{}_\mu$, and so $U_\alpha = -\delta^0_\alpha$. Now the reduced form of each term of Eqs. (28)-(30) can be attained. In particular,

$$ \epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} R^{\alpha\beta}{}_{\mu\nu} R^{\gamma\delta}{}_{\sigma\rho} = 96(ha)^{\cdot} a^{-1} h^2, \quad (41) $$

$$ R = 6[(ha)^{\cdot}a^{-1} + h^2], \quad (42) $$

$$ \epsilon_{0i\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}{}_{\sigma\rho} = -4h^2 \dot{l}\, e_i{}^\mu, \quad (43) $$

$$ \epsilon_{ij\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}{}_{\sigma\rho} = 0, \quad (44) $$

$$ T^{\mu}{}_{0i} = -2b\, e_i{}^{\mu}, \quad T^{\mu}{}_{ij} = 0, \quad (45) $$

$$ G^{\mu}{}_{0} = -3h^{2}e_{0}{}^{\mu}, \quad (46) $$

$$ G^{\mu}{}_i = -[2(ha)^{\cdot} a^{-1} + h^2] e_i{}^{\mu}, \quad (47) $$

$$ \delta S_m / \delta e^0{}_\mu = -\rho l^k e_0{}^\mu, \quad (48) $$

$$ \delta S_m / \delta e^i{}_\mu = p l^k e_i{}^\mu, \quad (49) $$

where a dot, written on top of a quantity or as a superscript, denotes differentiation with respect to $t$, and $h = \dot{a}/a - b$. Substitution of the above equations into Eqs. (28)-(30) leads to

$$ (ha)^{\cdot} a^{-1} (h^2 + l^{-2}) + l^{-2} (h^2 - \Lambda) = k \rho l^{k-1} / 24\kappa, \quad (50) $$

$$ (h^2 + l^{-2})\dot{l} - 2bl^{-1} = 0, \qquad (51) $$

$$ 8\kappa l^{-1}(-3h^2 + \Lambda) = \rho l^k, \qquad (52) $$

$$ 8\kappa l^{-1}[-2(ha)^{\cdot}a^{-1} - h^2 + \Lambda] = -pl^k, \qquad (53) $$

which constitute the field equations for the universe.

## 4.2 Solutions for the field equations

Before solving the field equations (50)-(53), let us first derive the continuity equation from them. Rewrite Eq. (52) as

$$ h^2 = l^{-2} - \rho l^{k+1}/24\kappa. \qquad (54) $$

Substituting Eq. (54) into Eq. (53) yields

$$ (ha)^{\cdot}a^{-1} = l^{-2} + (\rho + 3p)l^{k+1}/48\kappa. \qquad (55) $$

Multiply Eq. (55) by $2h$; making use of Eq. (54) and $h = \dot{a}/a - b$, one gets

$$ 2h\dot{h} = (\rho + p)l^{k+1}\dot{a}a^{-1}/8\kappa - 2b(ha)^{\cdot}a^{-1}, \qquad (56) $$

in which, according to Eqs. (50), (51) and (54),

$$ 2b(ha)^{\cdot}a^{-1} = \dot{l}[(k+1)\rho l^k/24\kappa + 2l^{-3}]. \qquad (57) $$

Differentiate Eq. (54) with respect to $t$, and compare it with Eqs. (56)-(57); one arrives at the continuity equation

$$ \dot{\rho} + 3(\rho + p)\dot{a}a^{-1} = 0, \qquad (58) $$

which is, perhaps unexpectedly, the same as the usual one. Suppose that $p = w\rho$, where $w$ is a constant. Then Eq. (58) has the solution

$$ \rho = \rho_0(a/a_0)^{-3(1+w)}, \qquad (59) $$

where $a_0$ and $\rho_0$ are the values of $a$ and $\rho$ at some moment $t_0$.

Now we are ready to solve Eqs. (50)-(52), while Eq. (53) is replaced by Eq. (58) with the solution (59). Firstly, substitute Eqs. (54)-(55) into Eq. (50); one finds

$$ \rho l^{k+3} = 48\kappa(3w - k - 1)/(3w + 1). \qquad (60) $$

Assume that $\kappa < 0$; then, according to the above relation, $\rho l^{k+3} > 0$ implies $(3w - k - 1)/(3w + 1) < 0$.
We are only concerned with the cases $k=0, 1$, and so assume that $k+1 > -1$; then $\rho l^{k+3} > 0$ constrains $w$ by + +$$ -\frac{1}{3} < w < \frac{k+1}{3}. \qquad (61) $$ + +For the ordinary fluid ($k=0$), pure radiation ($w=1/3$) cannot exist. In fact, on account of Eq. (60), $\rho l^3 = 0$ in this case, which is unreasonable. This problem is similar to the one that appeared in Ref. [13]. On the other hand, for the dS fluid ($k=1$), Eq. (61) +---PAGE_BREAK--- + +becomes $-1/3 < w < 2/3$, which contains both the case of pure matter ($w = 0$) and that of pure radiation ($w = 1/3$). Generally, the combination of Eqs. (59) and (60) yields + +$$l = l_0(a/a_0)^{\frac{3(w+1)}{k+3}}, \quad (62)$$ + +where $l_0$ is the value of $l$ when $t = t_0$, and is related to $\rho_0$ by Eq. (60). +Second, substituting Eq. (54) into Eq. (51) and utilizing Eqs. (60) and (62), one gets + +$$b = \frac{3(w + 1)(k + 2)}{(3w + 1)(k + 3)} \dot{a} a^{-1}, \qquad (63)$$ + +and hence + +$$h = \frac{3w - 2k - 3}{(3w + 1)(k + 3)} \dot{a} a^{-1}. \qquad (64)$$ + +Third, substitution of Eqs. (60) and (64) into Eq. (52) leads to + +$$\dot{a}a^{-1} = H_0(l_0/l), \qquad (65)$$ + +where $H_0 \equiv (\dot{a}a^{-1})_{t_0}$ is the Hubble constant, related to $l_0$ by + +$$H_0 = \sqrt{\frac{3w+1}{-3w+2k+3}} \cdot (k+3)l_0^{-1}. \qquad (66)$$ + +Note that Eq. (61) implies $3w + 1 > 0$ and $-3w + k + 1 > 0$; together with $k + 1 > -1$, these give $-3w + 2k + 3 > 0$. By virtue of Eqs. (63), (65) and (62), one has + +$$b = b_0(a_0/a)^{\frac{3(w+1)}{k+3}}, \qquad (67)$$ + +where $b_0$ is related to $H_0$ by Eq. (63). Moreover, substituting Eq. (62) into Eq. (65) and solving the resulting equation, one obtains + +$$(a/a_0)^{\frac{3(w+1)}{k+3}} - 1 = \frac{3(w+1)}{k+3} \cdot H_0(t-t_0). \qquad (68)$$ + +In conclusion, the solutions of the field equations (50)-(53) are given by Eqs. (59), (62), (67) and (68), with the independent constants $a_0$, $H_0$ and $t_0$.
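As a sanity check on the closed-form solution above (my own verification sketch, not part of the original derivation), one can confirm symbolically that the scale factor solving Eq. (68) satisfies Eq. (65) with $l$ given by Eq. (62), and that the deceleration parameter $-a\ddot{a}/\dot{a}^2$ comes out constant, here for the representative dS-fluid case $k=1$, $w=0$ in units with $H_0 = a_0 = 1$ and $t_0 = 0$:

```python
import sympy as sp

# Representative case: dS fluid (k = 1) with pure matter (w = 0),
# in units where H0 = a0 = 1 and t0 = 0.
t = sp.symbols('t', positive=True)
k, w = 1, 0
p = sp.Rational(3*(w + 1), k + 3)        # common exponent 3(w+1)/(k+3) = 3/4

a = (1 + p*t)**(1/p)                     # Eq. (68) solved for a(t)
l_over_l0 = a**p                         # Eq. (62): l/l0 = (a/a0)^p

# Eq. (65): adot/a = H0 * (l0/l)
assert sp.simplify(sp.diff(a, t)/a - 1/l_over_l0) == 0

# Deceleration parameter q = -a*addot/adot^2 is the constant (3w-k)/(k+3)
q = sp.simplify(-a*sp.diff(a, t, 2)/sp.diff(a, t)**2)
assert q == sp.Rational(3*w - k, k + 3)  # -1/4: an accelerating universe
```

The same check goes through for any other admissible pair $(k, w)$ obeying Eq. (61).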
+ +**4.3 Comparison with observations** + +If $k$ is specified, we can determine the value of the coupling constant $\kappa$ from the observed values of $H_0 = 67.4 \text{ km} \cdot \text{s}^{-1} \cdot \text{Mpc}^{-1}$ and $\Omega_0 \equiv 8\pi\rho_0/3H_0^2 = 0.315$ [24]. For example, put $k=1$; then, according to Eq. (66) (with $w=0$), one has + +$$l_0 = 4/(\sqrt{5}H_0) = 8.19 \times 10^{17} \text{ s}. \qquad (69)$$ + +Substitution of Eq. (69) and $\rho_0 = 3H_0^2\Omega_0/8\pi = 1.79 \times 10^{-37} \text{ s}^{-2}$ into Eq. (60) yields + +$$\kappa = -\rho_0 l_0^4 / 96 = -8.41 \times 10^{32} \text{ s}^2. \qquad (70)$$ + +This value is an important reference for future work exploring the viability of the model on the solar-system scale. +---PAGE_BREAK--- + +Also, we can compare the deceleration parameter $q \equiv -a\ddot{a}/\dot{a}^2$ derived from the above models with the observed one. With the help of Eqs. (65) and (62), one finds $\dot{a} \sim a^{(k-3w)/(k+3)}$; then $\ddot{a} = \frac{k-3w}{k+3} \cdot \dot{a}^2 a^{-1}$, and so + +$$q = \frac{3w-k}{k+3}. \quad (71)$$ + +Putting $w=0$, one sees that the universe accelerates ($q<0$) if $k>0$, expands linearly ($q=0$) if $k=0$, and decelerates ($q>0$) if $k<0$. In particular, for the model with an ordinary fluid ($k=0$), the universe expands linearly⁵; while for the model with a dS fluid ($k=1$), the universe accelerates with $q=-1/4$, which is consistent with the observational result $-1 \le q_0 < 0$ [25–27], where $q_0$ is the present-day value of $q$. It should be noted that Eq. (71) implies that $q$ is constant when $w$ is constant, and so the models cannot describe the transition from deceleration to acceleration for constant $w$.
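The numbers quoted in Eqs. (69)–(70) can be reproduced directly from the stated inputs. The following is my own numerical cross-check, not the author's computation; the only assumed ingredient is the standard unit conversion 1 Mpc = 3.0857 × 10¹⁹ km:

```python
from math import pi, sqrt

# Inputs quoted above (Planck 2018): H0 = 67.4 km/s/Mpc, Omega0 = 0.315.
H0 = 67.4 / 3.0857e19                  # Hubble constant in s^-1
Omega0 = 0.315

l0 = 4 / (sqrt(5) * H0)                # Eq. (69), case k = 1, w = 0
rho0 = 3 * H0**2 * Omega0 / (8 * pi)   # from Omega0 = 8*pi*rho0 / (3*H0^2)
kappa = -rho0 * l0**4 / 96             # Eq. (70)

print(f"l0 = {l0:.3g} s, rho0 = {rho0:.3g} s^-2, kappa = {kappa:.3g} s^2")
# l0 = 8.19e+17 s, rho0 = 1.79e-37 s^-2, kappa = -8.41e+32 s^2
```

All three values agree with Eqs. (69)–(70) to the displayed precision.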
+ +## 5 Remarks + +It has been shown that the requirement of a regular Lagrangian may be crucial for DGT: the SW gravity coupled with an ordinary perfect fluid (whose Lagrangian is not regular with respect to $\xi^A$ when $\xi^A\xi_A = 0$) permits neither a radiation epoch nor an accelerating universe, while the SW gravity coupled with a polynomial dS fluid (whose Lagrangian is regular with respect to $\xi^A$) is free of these problems. Yet the latter model is still not a realistic one, because it cannot describe the transition from deceleration to acceleration in the matter epoch. + +There are two possible ways to find a more reasonable model. The first is to modify the gravitational part to the general quadratic model (25), which is a special case of the at most quadratic model proposed in Refs. [10, 22], but whose coupling with the polynomial dS fluid is unexplored. It is unknown whether the effect of the $\kappa_2$ term could solve the problem encountered in the SW gravity. + +The second way is to modify the matter part. Although the Lagrangian of the polynomial dS fluid is regular with respect to $\xi^A$, it is not regular with respect to $J^A$ when $\xi^A\xi_A = 0$, in which case there should be $J^AJ_A \ge 0$, and so the number density $n \equiv \sqrt{-J^AJ_A}$ is not regular. Perhaps one could find a new fluid model whose Lagrangian is regular with respect to all the variables, based on the polynomial models for fundamental fields proposed in Refs. [14, 15]. + +## Acknowledgments + +I thank Profs. S.-D. Liang and Z.-B. Li for their abiding help. I would also like to thank my parents and my wife. This research is supported by the National Natural Science Foundation for Young Scientists of China under Grant No. 12005307. + +⁵This result is different from that in Ref. [13], where the cosmological solution describes a decelerating universe. It shows that the SW model is not equivalent to the model considered in Ref. [13].
+---PAGE_BREAK--- + +References + +[1] T. W. B. Kibble. Lorentz invariance and the gravitational field. J. Math. Phys. 2, 212-221 (1961) + +[2] D. W. Sciama. On the analogy between charge and spin in general relativity, in: Recent Developments in General Relativity, Festschrift for Infeld (Pergamon Press, Oxford, 1962) pp. 415–439 + +[3] M. Blagojević and F. W. Hehl. Gauge Theories of Gravitation. A Reader with Commentaries. Imperial College Press, London, 2013 + +[4] V. N. Ponomariov, A. O. Barvinsky and Y. N. Obukhov. Gauge Approach and Quantization Methods in Gravity Theory (Nauka, Moscow, 2017) + +[5] E. W. Mielke. Geometrodynamics of Gauge Fields, 2nd ed. (Springer, Switzerland, 2017) + +[6] K. S. Stelle and P. C. West. De Sitter gauge invariance and the geometry of the Einstein-Cartan theory. J. Phys. A 12, L205-L210 (1979) + +[7] K. S. Stelle and P. C. West. Spontaneously broken de Sitter symmetry and the gravitational holonomy group. Phys. Rev. D 21, 1466-1488 (1980) + +[8] S. W. MacDowell and F. Mansouri. Unified geometric theory of gravity and supergravity. Phys. Rev. Lett. 38, 739-742 (1977) + +[9] P. C. West. A geometric gravity Lagrangian. Phys. Lett. B 76, 569 (1978) + +[10] H. Westman and T. Złośnik. Exploring Cartan gravity with dynamical symmetry breaking. Class. Quant. Grav. 31, 095004 (2014) + +[11] J. Magueijo, M. Rodríguez-Vázquez, H. Westman and T. Złośnik. Cosmological signature change in Cartan Gravity with dynamical symmetry breaking. Phys. Rev. D 89, 063542 (2014) + +[12] H. Westman and T. Złośnik. An introduction to the physics of Cartan gravity. Ann. Phys. 361, 330-376 (2015) + +[13] S. Alexander, M. Cortês, A. Liddle, J. Magueijo, R. Sims, and L. Smolin. The cosmology of minimal varying Lambda theories. Phys. Rev. D 100, 083507 (2019) + +[14] H. R. Pagels. Gravitational gauge fields and the cosmological constant. Phys. Rev. D 29, 1690-1698 (1984) + +[15] H. Westman and T. Złośnik.
Cartan gravity, matter fields, and the gauge principle. Ann. Phys. 334, 157-197 (2013) + +[16] J.-A. Lu. Energy, momentum and angular momentum conservation in de Sitter gravity. Class. Quantum Grav. 33, 155009 (2016) + +[17] F. W. Hehl, P. von der Heyde, G. D. Kerlick, and J. M. Nester. General relativity with spin and torsion: Foundations and prospects. Rev. Mod. Phys. 48, 393 (1976) +---PAGE_BREAK--- + +[18] F. W. Hehl, J. D. McCrea, E. W. Mielke, and Y. Ne'eman. Metric-affine gauge theory of gravity: field equations, Noether identities, world spinors, and breaking of dilation invariance. Phys. Rep. 258, 1-171 (1995) + +[19] J.-A. Lu and C.-G. Huang. Kaluza-Klein-type models of de Sitter and Poincaré gauge theories of gravity. Class. Quantum Grav. 30, 145004 (2013) + +[20] H.-Y. Guo. The local de Sitter invariance. Kexue Tongbao 21, 31-34 (1976) + +[21] Y. N. Obukhov. Poincaré gauge gravity: selected topics. Int. J. Geom. Meth. Mod. Phys. 3, 95-138 (2006) + +[22] H. Westman and T. Złośnik. Gravity, Cartan geometry, and idealized waywisers. arXiv:1203.5709 (2012) + +[23] J. D. Brown. Action functionals for relativistic perfect fluids. Class. Quant. Grav. 10, 1579 (1993) + +[24] Planck Collaboration. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020) + +[25] A. G. Riess et al. Observational evidence from supernovae for an accelerating universe and a cosmological constant. Astron. J. 116, 1009-1038 (1998) + +[26] B. Schmidt et al. The high-Z supernova search: measuring cosmic deceleration and global curvature of the universe using type IA supernovae. Astrophys. J. 507, 46-63 (1998) + +[27] S. Perlmutter et al. Measurements of Omega and Lambda from 42 high redshift supernovae. Astrophys. J. 
517, 565-586 (1999) \ No newline at end of file diff --git a/samples/texts_merged/7618174.md b/samples/texts_merged/7618174.md new file mode 100644 index 0000000000000000000000000000000000000000..0c5404fc31c41dae905eebf1044b064b93fba0e9 --- /dev/null +++ b/samples/texts_merged/7618174.md @@ -0,0 +1,712 @@ + +---PAGE_BREAK--- + +QUADRATIC BOUNDS +ON THE QUASICONVEXITY OF +NESTED TRAIN TRACK SEQUENCES + +by +TARIK AOUGAB + +Electronically published on March 4, 2014 + +Topology Proceedings + +Web: http://topology.auburn.edu/tp/ + +Mail: Topology Proceedings +Department of Mathematics & Statistics +Auburn University, Alabama 36849, USA + +E-mail: topolog@auburn.edu + +ISSN: 0146-4124 + +COPYRIGHT © by Topology Proceedings. All rights reserved. +---PAGE_BREAK--- + +QUADRATIC BOUNDS ON THE QUASICONVEXITY OF +NESTED TRAIN TRACK SEQUENCES + +TARIK AOUGAB + +**ABSTRACT.** Let $S_{g,p}$ denote the genus $g$ orientable surface with $p$ punctures. We show that nested train track sequences constitute $O((g,p)^2)$-quasiconvex subsets of the curve graph, effectivizing a theorem of Howard A. Masur and Yair N. Minsky. As a consequence, the genus $g$ disk set is $O(g^2)$-quasiconvex. We also show that splitting and sliding sequences of birecurrent train tracks project to $O((g,p)^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface, an effective version of a theorem of Masur, Lee Mosher, and Saul Schleimer. + +# 1. INTRODUCTION + +Let $S_{g,p}$ denote the orientable surface of genus $g$ with $p \ge 0$ punctures, and let $\mathcal{C}(S_{g,p})$ be the corresponding curve complex. Finally, let $\mathcal{C}_k(S_{g,p})$ denote the corresponding $k$-skeleton. + +Let $(\tau_i)_i$ be a sequence of train tracks on $S_{g,p}$ such that $\tau_{i+1}$ is carried by $\tau_i$ for each $i$. Such a collection of train tracks defines a subset of $\mathcal{C}_0(S_{g,p})$ called a *nested train track sequence*. 
A train track splitting sequence is an important special case of such a sequence, in which $\tau_i$ is obtained from $\tau_{i-1}$ via one of two simple combinatorial moves, *splitting* and *sliding*. + +A nested train track sequence is said to have *R*-bounded steps if the $\mathcal{C}_1$-distance between the vertex cycles of $\tau_i$ and those of $\tau_{i+1}$ is bounded above by $R$. Howard A. Masur and Yair N. Minsky [13] show that any + +2010 Mathematics Subject Classification. 57M07, 20F65. +Key words and phrases. curve complex, disk set, mapping class group. +The author was partially supported by an NSF grant during the completion of this work. +©2014 Topology Proceedings. +---PAGE_BREAK--- + +nested train track sequence with $R$-bounded steps is a $K = K(R, g, p)$-quasigeodesic. Our first result provides some effective control on $K$ as a function of $g$ and $p$; in what follows, let $\omega(g, p) = 3g + p - 4$. + +**Theorem 1.1.** There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $C_1(S_{g,p})$, which is $(K(g,p) + R)$-quasiconvex. + +Masur, Lee Mosher, and Saul Schleimer [14] use Masur and Minsky's result [13] to show that if $Y \subseteq S_{g,p}$ is any essential subsurface, then a sliding and splitting sequence on $S_{g,p}$ maps to a uniform unparameterized quasigeodesic under the subsurface projection map to $\mathcal{C}(Y)$. Using Theorem 1.1, we show the following theorem. + +**Theorem 1.2.** *There exists a function $A(g,p) = O(\omega(g,p)^2)$ satisfying the following. Suppose $Y \subseteq S_{g,p}$ is an essential subsurface, and let $(\tau_i)_i$ be a splitting and sliding sequence of birecurrent train tracks on $S_{g,p}$.
Then $(\tau_i)_i$ projects to an $A(g,p)$-unparameterized quasigeodesic in $C_1(Y)$.* + +Let $H_g$ denote the genus $g$ handlebody and let $D(g) \subset C_1(S_g)$ denote the set of meridians, curves on $S_g$ that bound disks in $H_g$. Also due to Masur and Minsky [13] is the fact that any two meridians in $D(g)$ can be connected by a 15-bounded nested train track sequence. Therefore, we obtain the following corollary of Theorem 1.1. + +**Corollary 1.3.** *There exists a function $f(g) = O(g^2)$ such that $D(g)$ is an $f(g)$-quasiconvex subset of $C_1(S_g)$.* + +The mapping class group, denoted Mod($S$), is the group of isotopy classes of orientation preserving homeomorphisms of a surface $S$ (see [5] for a thorough exposition). + +As an application of Corollary 1.3, we obtain a more effective approach for detecting when a pseudo-Anosov mapping class $\phi$ is generic. Here, *generic* means that the stable lamination of $\phi$ is not a limit of meridians; the term “generic” is warranted by a theorem of Steven P. Kerckhoff [10], which states that the set of all projective measured laminations which are limits of meridians constitutes a measure 0 subset of $\mathcal{PML}(S)$, the space of all projective measured laminations on a surface $S$. + +In what follows, let $d_{C(S)}$ denote distance in $C_1(S)$; when there is no confusion, the reference to $S$ will be omitted. Masur and Minsky [11] showed that $C_1(S)$ is a $\delta$-hyperbolic metric space. + +Using Theorem 1.2, [1], and the fact that the curve graphs are uniformly hyperbolic (as shown by the author in [2], and independently in [3], [4], and [9]), we have the following corollary. 
+---PAGE_BREAK--- + +**Corollary 1.4.** There exists a function $r(g) = O(g^2)$ such that $\phi \in Mod(S_g)$ is a generic pseudo-Anosov mapping class if and only if there exists some $k \in \mathbb{N}$ such that for all $n > k$, + +$$d_C(D(g), \phi^n(D(g))) > r(g).$$ + +**Remark 1.5.** By the argument of Aaron Abrams and Saul Schleimer [1], it suffices to take $r(g) = 2\delta + 2f(g)$ for $\delta$ the hyperbolicity constant of $C_1$, and $f(g)$ as in the statement of Corollary 1.3. + +We also note that quasiconvexity of $D(g)$ and the fact that splitting sequences map to quasigeodesics under subsurface projection are main ingredients in the proof due to Masur and Schleimer [15] that the disk complex is $\delta$-hyperbolic. Thus, the effective control discussed above is perhaps a first step to studying the growth of the hyperbolicity constant of the disk complex. + +The proof of the main theorem, Theorem 1.1, relies on the ability to control + +(1) the hyperbolicity constant $\delta(g,p)$ of $C_1$; + +(2) $B = B(g,p)$, a bound on the diameter of a set of vertex cycles of a fixed train track $\tau \subset S_{g,p}$; and + +(3) the “nesting lemma constant” $k(g,p)$. + +As mentioned above, due to work of the author and the authors of [3], [4], and [9], curve graphs are uniformly hyperbolic. Furthermore, [9] shows that all curve graphs are 17-hyperbolic. + +Regarding (2), the author [2] has also shown that for sufficiently large $\omega$, $B(g,p) \le 3$. + +Therefore, all that remains is to analyze the growth of $k(g,p)$, which we address in section 5 by following Masur and Minsky's original argument [11] while keeping track of the constants that pop up along the way. However, in order to do this, we have need of an effective criterion for determining when a train track $\tau$ is non-recurrent, which we address in section 4. + +In section 2, we review some preliminaries about curve complexes and subsurface projections. 
In section 3, we review train tracks on surfaces and bounds on curve graph distance given by intersection number, as obtained in previous work. In section 4, we obtain an effective way of detecting non-recurrence of train tracks by analyzing the linear algebra of the corresponding branch-switch incidence matrix. In section 5, we obtain an effective version of Masur and Minsky's nesting lemma [11], which is the main tool needed to prove Theorem 1.1. In section 6 we complete the proofs of Theorems 1.1 and 1.2, and Corollary 1.3. +---PAGE_BREAK--- + +## 2. PRELIMINARIES: COARSE GEOMETRY, COMBINATORIAL COMPLEXES, AND SUBSURFACE PROJECTIONS + +Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces. For some $k \ge 1$, a relation $f : X \to Y$ is a *k-quasi-isometric embedding* of $X$ into $Y$ if, for any $x_1, x_2 \in X$, we have + +$$ \frac{1}{k} d_Y(f(x_1), f(x_2)) - k \le d_X(x_1, x_2) \le k \cdot d_Y(f(x_1), f(x_2)) + k. $$ + +Since $f$ is not necessarily a map, $f(x)$ and $f(y)$ need not be singletons, and the distance $d_Y(f(x), f(y))$ is defined to be the diameter in the metric $d_Y$ of the union $f(x) \cup f(y)$. If the $k$-neighborhood of $f(X)$ is all of $Y$, then $f$ is a *k-quasi-isometry* between $X$ and $Y$, and we refer to $X$ and $Y$ as being *quasi-isometric*. + +Given an interval $[a, b] \subset \mathbb{Z}$, a *k-quasigeodesic* in $X$ is a $k$-quasi-isometric embedding $f : [a, b] \to X$. If $f : [a, b] \to X$ is any relation such that there exists an interval $[c, d]$ and a strictly increasing function $g : [c, d] \to [a, b]$ such that $f \circ g$ is a $k$-quasigeodesic, we say that $f$ is a *k-unparameterized quasigeodesic*. In this case we also require that, for each $i \in [c, d - 1]$, the diameter of $f([g(i), g(i+1)])$ is at most $k$. We will sometimes refer to a quasigeodesic by its image in the metric space $X$. + +A simple closed curve on $S_{g,p}$ is *essential* if it is homotopically non-trivial and not homotopic into a neighborhood of a puncture.
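The quasi-isometric embedding inequality above can be checked mechanically on finite samples. As a toy illustration (my own example, not from the paper), the following sketch brute-forces the inequality for self-maps of $\mathbb{Z}$ over a finite window:

```python
def is_k_qie(f, points, d_X, d_Y, k):
    """Brute-force the k-quasi-isometric-embedding inequality
    (1/k)*d_Y(f(x), f(y)) - k <= d_X(x, y) <= k*d_Y(f(x), f(y)) + k
    over a finite sample of points (f here is an honest map, so images
    are singletons and the diameter convention is trivial)."""
    return all(
        d_Y(f(x), f(y)) / k - k <= d_X(x, y) <= k * d_Y(f(x), f(y)) + k
        for x in points for y in points
    )

d = lambda a, b: abs(a - b)                     # metric on Z
pts = range(-50, 51)
assert is_k_qie(lambda n: 2 * n, pts, d, d, k=2)      # n -> 2n passes
assert not is_k_qie(lambda n: n * n, pts, d, d, k=2)  # n -> n^2 fails
```

The map $n \mapsto 2n$ distorts distances by a bounded multiplicative factor and passes; $n \mapsto n^2$ distorts them unboundedly and fails for any fixed $k$ once the window is large enough.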
+ +The *curve complex* of $S_{g,p}$, denoted $\mathcal{C}(S_{g,p})$, is the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves on $S_{g,p}$, such that $k+1$ vertices span a $k$-simplex exactly when the corresponding $k+1$ isotopy classes can be realized disjointly on $S_{g,p}$. The curve complex is made into a metric space by identifying each simplex with the standard Euclidean simplex with unit length edges. Let $\mathcal{C}_k(S)$ denote the $k$-skeleton of $\mathcal{C}(S)$. + +The curve complex is a locally infinite, infinite diameter metric space. By a theorem of Masur and Minsky [11], $\mathcal{C}(S)$ is $\delta$-hyperbolic for some $\delta = \delta(S) > 0$, meaning that the $\delta$-neighborhood of the union of any two edges of a geodesic triangle contains the third edge. + +The curve complex admits an isometric (but not properly discontinuous) action of $\text{Mod}(S)$, and it is a flag complex, so that its combinatorics are completely encoded by $\mathcal{C}_1(S)$, the *curve graph*; note also that $\mathcal{C}(S)$ is quasi-isometric to $\mathcal{C}_1(S)$, and therefore, to study the coarse geometry of $\mathcal{C}$, it suffices to consider the curve graph. Let $d_\mathcal{C}$ denote distance in the curve graph. + +If $p \ne 0$, we can consider more general combinatorial complexes, which also allow vertices to represent essential arcs connecting punctures, up to isotopy. As such, define $\mathcal{A}\mathcal{C}(S)$, the *arc and curve complex of* $S$, to +---PAGE_BREAK--- + +be the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves and arcs on $S$. In case $S$ has boundary, the isotopy classes of arcs which constitute a vertex of $\mathcal{A}C$ are not required to be rel boundary; that is, two arcs represent the same vertex if they are isotopic via an isotopy which need not fix the boundary pointwise. 
+ +As with $\mathcal{C}(S)$, two vertices are connected by an edge if and only if the corresponding isotopy classes can be realized disjointly, and the higher dimensional skeleta are defined by requiring $\mathcal{A}C(S)$ to be flag. As with $\mathcal{C}$, denote by $\mathcal{A}C_k(S)$ the $k$-skeleton of $\mathcal{A}C(S)$. It is worth noting that $\mathcal{A}C(S)$ is quasi-isometric to $\mathcal{C}(S)$, with quasiconstants not depending on the topological type of $S$. + +A non-annular subsurface $Y$ of $S$ is the closure of a complementary component of an essential multi-curve on $S$; an annular subsurface $Y \subseteq S$ is a closed neighborhood of an essential simple closed curve on $S$, homeomorphic to $[0, 1] \times S^1$. A subsurface is essential if its boundary components are all essential curves and it is not homotopy equivalent to a thrice-punctured sphere. + +Let $Y \subseteq S$ be an essential, embedded subsurface of $S$. Then there is a covering space $S^Y$ associated to the inclusion $\pi_1(Y) < \pi_1(S)$. While $S^Y$ is not compact, note that the Gromov compactification of $S^Y$ is homeomorphic to $Y$, and via this homeomorphism, we identify $\mathcal{A}C(Y)$ with $\mathcal{A}C(S^Y)$. Then, given $\alpha \in \mathcal{A}C_0(S)$, the subsurface projection map $\pi_Y : \mathcal{A}C(S) \to \mathcal{A}C(Y)$ is defined by setting $\pi_Y(\alpha)$ equal to its preimage under the covering map $S^Y \to S$. + +Technically, this defines a map from $\mathcal{A}C_0(S)$ into $2^{\mathcal{A}C_0(Y)}$ since there may be multiple connected components of the pre-image of a curve or arc, but the image of any point in the domain is a bounded subset of the range. Thus, to make $\pi_Y$ a map, we can simply choose some component of this pre-image for each point in the domain and then extend the map $\pi_Y$ simplicially to the higher dimensional skeleta.
+ +Given an arc $a \in \mathcal{A}C(S)$, there is a closely related simple closed curve $\tau(a) \in C_1(S)$, obtained from $a$ by surgering along the boundary components that $a$ meets. More concretely, let $\mathcal{N}(a)$ denote a thickening of the union of $a$ together with the (at most two) boundary components of $S$ that $a$ meets, and define $\tau(a) \in 2^{C_1(S)}$ to be the components of $\partial(\mathcal{N}(a))$. + +Thus, we obtain a *subsurface projection map* + +$$\psi_Y := \tau \circ \pi_Y : \mathcal{C}(S) \to \mathcal{C}(Y)$$ + +for $Y \subseteq S$ any essential subsurface. + +Then, given $\alpha, \beta \in \mathcal{C}(S)$, define $d_Y(\alpha, \beta)$ by + +$$d_Y(\alpha, \beta) := \text{diam}_{\mathcal{C}(Y)}(\psi_Y(\alpha) \cup \psi_Y(\beta)).$$ +---PAGE_BREAK--- + +### 3. TRAIN TRACKS AND INTERSECTION NUMBERS + +In this section, we recall some basic terminology of train tracks on surfaces; we refer the reader to [18] and [16] for a more in-depth discussion. A *train track* $\tau \subset S$ is an embedded 1-complex whose vertices and edges are called *switches* and *branches*, respectively. Branches are smooth parameterized paths with well-defined tangent vectors at the initial and terminal switches. At each switch $v$ there is a unique line $L \subset T_v S$ such that the tangent vector of any branch incident at $v$ coincides with $L$. + +As part of the data of $\tau$, we choose a preferred direction along this line at each switch $v$; a half branch incident at $v$ is called *incoming* if its tangent vector at $v$ is parallel to this chosen direction and is called *outgoing* if it is anti-parallel. Therefore, at each switch, the incident half branches are partitioned disjointly into two orientation classes, the *incoming germ* and *outgoing germ*. + +The valence of each switch must be at least 3 unless $\tau$ has a connected component consisting of a simple closed curve; in this case, $\tau$ has one bivalent switch for such a component. 
+ +Finally, we require that every complementary component of $S \setminus \tau$ has a negative generalized Euler characteristic, that is + +$$\chi(Q) - \frac{1}{2}V(Q) < 0$$ + +for any complementary component $Q$; here, $\chi(Q)$ is the usual Euler characteristic and $V(Q)$ is the number of cusps on $\partial(Q)$. + +A *train path* is a path $\gamma : [0, 1] \to \tau$, smooth on $(0, 1)$, which traverses a switch only by entering via one germ and exiting from the other; a *closed train path* is a train path with $\gamma(0) = \gamma(1)$. A *proper closed train path* is a closed train path with $\gamma'(0) = \gamma'(1)$; here, $\gamma'(t)$ is the unit tangent vector to the path $\gamma$ at time $t$. + +Let $\mathcal{B}$ denote the set of branches of $\tau$; then a non-negative, real-valued function $\mu : \mathcal{B} \to \mathbb{R}_+$ is called a *transverse measure* on $\tau$ if for each switch $v$ of $\tau$, we have + +$$\sum_{b \in i(v)} \mu(b) = \sum_{b' \in o(v)} \mu(b')$$ + +where $i(v)$ is the set of incoming branches and $o(v)$ is the set of outgoing ones. These are called the *switch conditions*. $\tau$ is called *recurrent* if it admits a strictly positive transverse measure, that is, one that assigns a positive weight to every branch. A switch of $\tau$ is called *semi-generic* if exactly one of the two germs of half branches consists of a single half branch. $\tau$ is called semi-generic if all switches are semi-generic, and $\tau$ is *generic* if $\tau$ is semi-generic and each switch has degree at most 3. $\tau$ +---PAGE_BREAK--- + +is called *large* if each connected component of its complement is simply connected. + +Any positive scaling of a transverse measure is also a transverse measure, and therefore the set of all transverse measures, viewed as a subset of $\mathbb{R}^\mathcal{B}$, is a cone over a compact polyhedron in projective space. Let $P(\tau)$ denote the projective polyhedron of transverse measures. 
A projective measure class $[\mu] \in P(\tau)$ is called a *vertex cycle* if it is an extreme point of $P(\tau)$. It is worth noting that if $\tau$ is any train track on $S$, there exists a generic, recurrent train track $\tau'$ such that $P(\tau) = P(\tau')$. + +A lamination $\lambda$ is *carried* by $\tau$ if there is a smooth map $\phi: S \to S$, called the $\tau$-carrying map for $\lambda$, which is isotopic to the identity, satisfies $\phi(\lambda) \subset \tau$, and is such that the restriction of the differential $d\phi$ to any tangent line of $\lambda$ is non-singular. If $c$ is any simple closed curve carried by $\tau$, then $c$ induces an integral transverse measure called the *counting measure*, for which each branch of $\tau$ is assigned the natural number equal to the number of times the image of $c$ under its carrying map traverses that branch. + +A train track $\tau'$ is *carried* by $\tau$ if there exists a smooth map $\phi: S \to S$ isotopic to the identity, such that for any lamination $\lambda$ carried by $\tau'$, $\phi$ is a $\tau$-carrying map for $\lambda$. + +A subset $\tau' \subset \tau$ is called a *subtrack* of $\tau$ if it is also a train track on $S$. In this case, we write $\tau' < \tau$. + +Given any train track $\tau$ with branch set $\mathcal{B}$, we can distinguish branches as being one of three types: If $b \in \mathcal{B}$ and each half branch of $b$ is the only half branch in its germ, $b$ is called *large*. If both half branches of $b$ are in germs containing more than one half branch, $b$ is *small*; otherwise, $b$ is *mixed* (Figure 1). + +FIGURE 1. Branch Classes. Left: $b_1$ is small; Middle: $b_2$ is mixed; Right: $b_3$ is large. + +If $[v]$ is a vertex cycle of $\tau$, then there is a unique (up to isotopy) simple closed curve $c(v)$ such that $c(v)$ is carried by $\tau$, and the counting measure on $c(v)$ is an element of $[v]$.
Therefore, if $[v_1]$ and $[v_2]$ are two vertex cycles of $\tau$, we can define the distance $d([v_1], [v_2])$ between them to be the curve graph +---PAGE_BREAK--- + +distance between their respective simple closed curve representatives: + +$$d([v_1], [v_2]) := d_C(c(v_1), c(v_2)).$$ + +Using this, we can also define the distance between two train tracks $\tau$ and $\tau'$ to be the distance between their vertex cycle sets: + +$$d(\tau, \tau') := \min\{d([v_\tau], [v_{\tau'}]) : [v_\tau] \text{ is a vertex cycle of } \tau \text{ and } [v_{\tau'}] \text{ is a vertex cycle of } \tau'\}.$$ + +A train track $\tau$ is called *transversely recurrent* if, for each branch $b$ of $\tau$, there exists a simple closed curve $c$ intersecting $b$, such that $S \setminus (\tau \cup c)$ contains no bigon complementary regions. A track $\tau$ which is both recurrent and transversely recurrent is called *birecurrent*. + +A *nested train track sequence* is a sequence $(\tau_i)_i$ on $S_{g,p}$ of birecurrent train tracks such that $\tau_j$ is carried by $\tau_{j+1}$ for each $j$. This, in turn, determines a collection of vertices in $C_1(S_{g,p})$ by associating the track $\tau_j$ with its collection of vertex cycles. + +Given $R > 0$, a nested train track sequence $(\tau_i)_i$ is said to have $R$-bounded steps if + +$$d(\tau_i, \tau_{i+1}) \le R$$ + +for each $i$. An important special case is the example of a *splitting and sliding sequence*. This is any train track sequence where $\tau_i$ is obtained from $\tau_{i+1}$ via one of two combinatorial moves, *splitting* (Figure 2) or *sliding* (Figure 3). + +FIGURE 2. Any large branch admits three possible “splittings.” +---PAGE_BREAK--- + +FIGURE 3. Any mixed branch admits a “sliding.” + +We will need the following theorem, as seen in [2].
+ +**Theorem 3.1.** There exists a natural number $n \in \mathbb{N}$ such that if $\omega(g,p) > n$, the following holds: Suppose $\tau \subset S_{g,p}$ is any train track and $[v_1]$ and $[v_2]$ are vertex cycles of $\tau$. Then + +$$d([v_1], [v_2]) \le 3.$$ + +Let $\text{int}(P(\tau)) \subset P(\tau)$ denote the set of strictly positive transverse measures on $\tau$. Then, $\tau$ is recurrent if and only if $\text{int}(P(\tau)) \neq \emptyset$. For $\tau$ a large track, a *diagonal extension* $\sigma$ of $\tau$ is a track such that $\tau < \sigma$ and each branch of $\sigma \setminus \tau$ has the property that its endpoints are incident at corners of complementary regions of $\tau$. + +Following [11], let $E(\tau)$ denote the set of all diagonal extensions of $\tau$, and define + +$$PE(\tau) := \bigcup_{\sigma \in E(\tau)} P(\sigma).$$ + +Let $N(\tau)$ be the union of $E(\kappa)$ over all large, recurrent subtracks $\kappa < \tau$: + +$$N(\tau) := \bigcup_{\kappa < \tau, \kappa \text{ large, recurrent}} E(\kappa),$$ + +and define + +$$PN(\tau) := \bigcup_{\kappa \in N(\tau)} P(\kappa).$$ + +Define $\text{int}(PE(\tau))$ to be the measures in $PE(\tau)$ whose restrictions to $\tau$ are strictly positive, and define + +$$\text{int}(PN(\tau)) := \bigcup_{\kappa} \text{int}(PE(\kappa)).$$ + +The following theorem will be heavily relied upon in section 5. + +**Theorem 3.2 ([2]).** For $\epsilon \in (0,1)$, there is some $\eta = \eta(\epsilon)$ such that if $\alpha, \beta \in C_0(S_{g,p})$, whenever $\omega(g,p) > \eta(\epsilon)$ and $d_C(\alpha, \beta) \ge k$, + +$$i(\alpha, \beta) \ge \left( \frac{\omega(g,p)^{\epsilon}}{q(g,p)} \right)^{k-2}$$ + +where $q(g,p) = O(\log_2(\omega))$. +---PAGE_BREAK--- + +**Remark 3.3.** In the above, $i(\alpha, \beta)$ is the geometric intersection number between $\alpha$ and $\beta$, defined by + +$$i(\alpha, \beta) := \min |x \cap \beta|$$ + +where the minimum is taken over all $x$ isotopic to $\alpha$.
We can explicitly write down the function $q(g,p)$ from the statement of Theorem 3.2: $q(g,p)$ is an upper bound on the girth of a finite graph with at most $8(6g+3p-7)$ vertices and average degree larger than $2.02$. As seen in [6],

$$
\begin{aligned}
q(g,p) &= \left( \frac{8}{\log_2(1.01)} + 5 \right) \log_2(8(6g + 3p - 7)) \\
&< 1000 \cdot \log_2(100\omega).
\end{aligned}
$$

This upper bound will be used in section 5.

## 4. DETECTING RECURRENCE FROM THE INCIDENCE MATRIX

Let $\tau = (S, \mathcal{B}) \subset S_{g,p}$ be a train track with branch set $\mathcal{B}$ and switch set $S$.

Label the branches $\mathcal{B} = \{b_1, \dots, b_n\}$ and the switches $S = \{s_1, \dots, s_m\}$, and identify $\mathbb{R}^n$ with the real-valued functions on $\mathcal{B}$. Then, associated to $\tau$ is a linear map $L_\tau: \mathbb{R}^n \to \mathbb{R}^m$ (with a corresponding matrix in the standard basis) defined as follows: given $u \in \mathbb{R}^n$, the $j^{th}$ coordinate of $L_\tau(u)$ is the sum of the incoming weights minus the sum of the outgoing weights at the $j^{th}$ switch, $1 \le j \le m$. Let $\mathbb{R}_+^n$ denote the strictly positive orthant of $\mathbb{R}^n$, the collection of vectors with all coordinates positive.

We call $L_\tau$ the incidence matrix for $\tau$. Note that $\mu \in \mathbb{R}^n$ is a transverse measure on $\tau$ if and only if $\mu \in \ker(L_\tau)$; thus, $\tau$ is recurrent if and only if $\ker(L_\tau)$ intersects $\mathbb{R}_+^n$ non-trivially.

As mentioned in the proof of Lemma 4.1 of [11], if $\ker(L_\tau) \cap \mathbb{R}_+^n = \emptyset$, then there is some $\delta > 0$ such that

$$ \|L_{\tau}(u)\| \geq \delta \cdot u_{\min}, \quad \forall u \in \mathbb{R}_{+}^{n}. $$

Here, $u_{\min}$ is the minimum over all coordinates of the vector $u$, and $\|\cdot\|$ is the standard Euclidean norm on $\mathbb{R}^m$.
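The recurrence criterion above is directly checkable in practice: since any strictly positive kernel vector can be rescaled, $\ker(L_\tau) \cap \mathbb{R}^n_+ \neq \emptyset$ exactly when the linear system $L_\tau u = 0$, $u \ge 1$ is feasible, which is a linear programming feasibility problem. The following is a minimal sketch of this check; the two toy matrices are illustrative stand-ins, not incidence matrices of actual train tracks.

```python
# Recurrence test: tau is recurrent iff ker(L_tau) contains a strictly
# positive vector, which (after rescaling) is equivalent to feasibility of
#   L u = 0,  u >= 1.
# The matrices below are toy examples, not incidence matrices of real tracks.
import numpy as np
from scipy.optimize import linprog

def has_positive_kernel_vector(L: np.ndarray) -> bool:
    """Return True iff L u = 0 has a solution with every coordinate >= 1."""
    m, n = L.shape
    res = linprog(c=np.zeros(n), A_eq=L, b_eq=np.zeros(m),
                  bounds=[(1, None)] * n)
    return res.status == 0  # 0 = feasible optimum found, 2 = infeasible

# One "switch" with two incoming branches and one outgoing branch (and its
# reverse): the kernel contains (1, 1, 2), so this system is "recurrent".
L_rec = np.array([[1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]])

# A single condition u1 + u2 = 0 forces opposite signs: no positive solution.
L_nonrec = np.array([[1.0, 1.0]])

print(has_positive_kernel_vector(L_rec))     # expect True
print(has_positive_kernel_vector(L_nonrec))  # expect False
```

Minimizing the zero objective makes the solver report feasibility only, which is all the recurrence test needs.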
The main goal of this section is to effectivize this statement, that is, to obtain explicit control on the size of $\delta$ as a function of $g$ and $p$.

**Theorem 4.1.** Let $\tau = (S, \mathcal{B})$ be a non-recurrent train track on $S_{g,p}$ with $|\mathcal{B}| = n$ and $|S| = m$, and let $u \in \mathbb{R}_+^n$. Then

$$ \|L_{\tau}(u)\|_{\sup} \geq \frac{u_{\min}}{12g + 4p - 12}, $$

where $\|\cdot\|_{\sup}$ is the sup norm on $\mathbb{R}^m$.

*Proof.* We begin by observing that non-recurrence is equivalent to the existence of “extra” branches, ones that must be assigned 0 by any transverse measure:

**Lemma 4.2.** Suppose that for each branch $b \in \mathcal{B}$, there is some corresponding transverse measure $\mu_b$ on $\tau$ such that $\mu_b(b) > 0$. Then $\tau$ is recurrent.

Therefore, the existence of a branch $b$ which is assigned 0 by every transverse measure on $\tau$ is equivalent to $\tau$ being non-recurrent. We will call such a branch *invisible*.

Given $s \in S$, the switch condition at $s$ represents a row vector of the matrix corresponding to the linear transformation $L_{\tau}$. This is the vector $v_s$ that has 1's in the coordinates corresponding to the incoming half branches incident to $s$ and $-1$'s in the coordinates corresponding to the outgoing half branches incident to $s$. Note that $v_s$ could also have a $\pm 2$ in place of two 1's if both ends of a single branch are incident to $s$. Let $R(L_{\tau})$ denote the row space of $L_{\tau}$, the vector space spanned by the row vectors.

The following is an immediate corollary of Lemma 4.2.

**Lemma 4.3.** Suppose $b \in \mathcal{B}$ is an invisible branch. Then $b$ is not contained in a closed train path.

For $b$ a branch of $\tau$, let $S(b) \subset S$ denote the switches of $\tau$ incident to $b$; thus, $|S(b)| = 1$ or 2.
For $x \in S(b)$, consider the pointed universal cover $(\tilde{\tau}, \tilde{x})$ with associated covering projection $\pi : (\tilde{\tau}, \tilde{x}) \to (\tau, x)$. We define $\mathcal{P}(\tilde{\tau}, \tilde{x}) \subseteq \tilde{\tau}$ to be the subset of the universal cover consisting of the train paths in $\tilde{\tau}$ emanating from $\tilde{x}$ that do not traverse any branch which projects to $b$ under $\pi$.

Any train path emanating from $\tilde{x}$ has a natural choice of orientation obtained by declaring its initial point to be $\tilde{x}$. This induces an orientation on any branch $e$ contained in $\mathcal{P}(\tilde{\tau}, \tilde{x})$. Note that this is well defined because $\tilde{\tau}$ does not contain closed train paths (proper or otherwise).

We say that $\mathcal{P}(\tilde{\tau}, \tilde{x})$ is *unidirectional* if, whenever $e_i, e_j \subseteq \mathcal{P}(\tilde{\tau}, \tilde{x})$ project to the same branch $e$ of $\tau$, the orientations of $e$ induced by $e_i$ and $e_j$ agree.

Given $u \in \mathbb{R}^n$, define the *deviation* of $u$ at $s \in S$, denoted by $d_s(u)$, to be the absolute value of the coordinate of $L_\tau(u)$ corresponding to $s$. It suffices to assume that, for $u$ as in the statement of the theorem,

$$
(4.1) \qquad d_s(u) < \frac{u_{\min}}{12g + 4p - 12}, \quad \forall s \in S.
$$

We will use this assumption to obtain a contradiction.

Since $\tau$ is non-recurrent, it must contain an invisible branch $b$.

**Lemma 4.4.** Let $s_1, s_2 \in S(b)$ be the two (possibly non-distinct) switches incident to the invisible branch $b$, and let $\tilde{s}_1, \tilde{s}_2 \in \tilde{\tau}$ be corresponding lifts which together bound a lift of $b$. Then at least one of $\mathcal{P}(\tilde{\tau}, \tilde{s}_i)$, $i = 1, 2$, is unidirectional.

*Proof.* Suppose not.
Then for $j = 1, 2$ there exist branches $e_j^1, e_j^2 \subseteq \mathcal{P}(\tilde{\tau}, \tilde{s}_j)$ such that $e_1^1$ and $e_1^2$ project to a branch $e_1$ of $\tau$ with opposite orientations, and similarly $e_2^1$ and $e_2^2$ project to a branch $e_2$ of $\tau$ with opposite orientations. Thus, in $\tau$ there exist two train paths starting from $s_1$ and ending at $e_1$, but which traverse $e_1$ in opposite directions. Concatenating these two paths produces a loop in $\tau$ which is a train path away from $s_1$.

By the exact same argument, there is another loop containing the switch $s_2$ and the branch $e_2$ which is a train path away from $s_2$. We can then concatenate these two paths across the branch $b$ to obtain a "dumbbell"-shaped closed train path which contains $b$ (see Figure 4). This contradicts Lemma 4.3. $\square$

FIGURE 4. If neither train path set emanating from $b$ is unidirectional, then there exist non-closed train paths starting and ending at $s_1$ and $s_2$. Joining these paths across $b$ yields a closed train path containing $b$, pictured above.

Therefore, we assume henceforth that $\mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ is unidirectional; let $\mathcal{Q}(s_1) \subseteq \tau$ be the projection of $\mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ to $\tau$. That $\mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ is unidirectional will allow us to redefine which half branches are incoming and which are outgoing (without changing the linear algebraic structure of $L_\tau$) in such a way that each branch of $\mathcal{Q}(s_1)$ is mixed.

More concretely, orient each edge $e \subseteq \mathcal{Q}(s_1)$ by projecting the orientation on $\tilde{e}$ down to $e$, where $\tilde{e} \subseteq \mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ is any branch of $\tilde{\tau}$ with $\pi(\tilde{e}) = e$; unidirectionality implies that this construction is well defined. Then we simply define a half-branch $e' \subset e \in \mathcal{Q}$ to be outgoing at a switch $s$ if the orientation of $e'$ coming from $e$ points away from $s$, and similarly for incoming half-branches.
Note that this is well defined in the sense that two half-branches incident to the same switch in distinct germs will be assigned opposite directional classes.

This rule then defines an assignment of direction for all half branches of $\tau$ as follows. The half branches of $\tau$ which are not contained in $\mathcal{Q}$ can be partitioned into two disjoint subcollections: the *frontier* half branches (those which are incident to a switch contained in $\mathcal{Q}$) and the *interior* half branches (those for which the incident switch is not contained in $\mathcal{Q}$). Once directions have been assigned to the half branches of $\mathcal{Q}$ as above, directions for frontier half branches are determined by which germ they belong to at the corresponding switch. For interior half branches, simply assign the original directions coming from $\tau$.

Let $S(\mathcal{Q}) \subseteq S$ denote the switches of $\tau$ contained in $\mathcal{Q}$, and recall that $v_s$ denotes the row vector of $L_{\tau}$ corresponding to the switch $s \in S$.

**Lemma 4.5.** The vector $V = \sum_{s \in S(\mathcal{Q})} v_s \in R(L_{\tau})$ is a non-zero integer vector, all of whose coordinates are non-negative.

*Proof.* Since every branch of $\mathcal{Q}$ is mixed, each component of $V$ corresponding to a branch of $\mathcal{Q}$ is 0. The same is true for any branch not in $\mathcal{Q}$ which does not contain a frontier half-branch.

We claim that each frontier half branch must be incoming at the switch of $S(\mathcal{Q})$ to which it is incident; this will imply that $V$ takes a positive value in each component corresponding to a branch containing a frontier half branch.

Indeed, let $e$ be a branch containing a frontier half branch and let $s \in S(\mathcal{Q})$ be incident to $e$. Since $s \in S(\mathcal{Q})$, there is another branch $e'$ incident to $s$ such that $e'$ is a branch of $\mathcal{Q}$ and $e'$ is incoming at $s$.
Thus, if $e$ were outgoing at $s$, there would exist a train path emanating from $s_1$ which traverses $e$, obtained by concatenating the train path starting at $s_1$ and ending at $e'$ with the train path connecting $e'$ to $e$ over $s$. This contradicts the assumption that $e \notin \mathcal{Q}$.

Thus, to complete the argument, it suffices to show that the collection of frontier half branches is non-empty. Recall that $b$ is an invisible branch and is therefore not contained in any closed train path. It then follows that the half branch of $b$ incident to $s_1$ is frontier. $\square$

We now use the following elementary fact regarding train tracks on $S_{g,p}$ (see [18] for a proof).

**Lemma 4.6.** Let $\tau = (S, \mathcal{B}) \subset S_{g,p}$ be a train track. Then

$$|\mathcal{B}| \leq 18g + 6p - 18 \quad \text{and} \quad |S| \leq 12g + 4p - 12.$$

Therefore, there are at most $12g+4p-12$ row vectors of $L_{\tau}$ in the sum $V$. Furthermore, since the components of $V$ are all non-negative integers,

$$|V \cdot u| \geq u_{\min},$$

where $\cdot$ denotes the standard Euclidean dot product. On the other hand, assuming the validity of (4.1), one obtains

$$
\begin{aligned}
|V \cdot u| &= \left| \sum_{s \in S(\mathcal{Q})} v_s \cdot u \right| \le \sum_{s \in S(\mathcal{Q})} |v_s \cdot u| \\
&= \sum_{s \in S(\mathcal{Q})} d_s(u) < (12g + 4p - 12) \cdot \frac{u_{\min}}{12g + 4p - 12} = u_{\min},
\end{aligned}
$$

a contradiction. $\square$

## 5. AN EFFECTIVE NESTING LEMMA

In this section, we will use Theorem 3.2 and Theorem 4.1 to establish the following effective version of Masur and Minsky’s [11] nesting lemma.
**Lemma 5.1.** There exists a function $k(g,p) = O(\omega^2)$ such that if $\sigma$ and $\tau$ are large train tracks, $\sigma$ is carried by $\tau$, and $d(\tau,\sigma) > k(g,p)$, then

$$PN(\sigma) \subset \text{int}(PN(\tau)).$$

**Remark 5.2.** When convenient, we will assume our train tracks to be generic; as mentioned in [13], the proof of the nesting lemma in the generic case extends easily to the general setting.

If $\mu \in P(\tau)$, define the *combinatorial length* of $\mu$ with respect to $\tau$, denoted $l_{\tau}(\mu)$, to be the integral of $\mu$ over $\mathcal{B}$, that is,

$$l_{\tau}(\mu) := \sum_{b} \mu(b).$$

We also define

$$l_{N(\tau)}(\mu) := \min_{\sigma} l_{\sigma}(\mu),$$

where the minimum is taken over all tracks $\sigma \in N(\tau)$ carrying $\mu$.

We will need the following lemma, as seen in [8].

**Lemma 5.3.** Let $c$ be a simple closed curve carried by a train track $\tau$. Then the counting measure on $c$ is a vertex cycle of $\tau$ if and only if, for any branch $b$ of $\tau$, the image of $c$ under its corresponding carrying map traverses $b$ at most twice, and never twice in the same direction.

Since the vertex cycles are the extreme points of $P(\tau)$, by the classical Krein-Milman theorem, any projective transverse measure class can be written as a convex combination of vertex cycles; that is, given $\kappa \in P(\tau)$, there exist non-negative coefficients $(a_i)$ such that

$$
(5.1) \qquad \kappa = \sum_i a_i \alpha_i,
$$

where $(\alpha_i)$ are the vertex cycles of $\tau$. Any train track on $S_{g,p}$ has at most $18g + 6p - 18$ branches, and therefore, by Lemma 5.3, if $\tau$ is any train track and $\alpha$ is a vertex cycle,

$$
l_{\tau}(\alpha) \leq 2(18g + 6p - 18).
$$

Lemma 5.3 also implies that any train track $\tau$ has at most $3^{18g+6p-18}$ vertex cycles, since any branch is traversed once, twice, or not at all.
We therefore conclude that, given $\kappa$ as in equation (5.1),

$$
(5.2) \quad \max_i a_i \le l_\tau(\kappa) < \left[ (2(18g + 6p - 18)) \cdot 3^{18g+6p} \right] \max_i a_i
$$

$$
(5.3) \qquad = C \cdot \max_i a_i.
$$

**Lemma 5.4.** Given $L > 0$, there exists a function $h_L(g,p) = O(\log_{\omega(g,p)}(L))$ such that if $\alpha \in P(\tau)$ and $l_{\tau}(\alpha) \le L$, then $d_C(\alpha, \tau) < h_L(g,p)$.

*Proof.* Suppose $l_{\tau}(\alpha) \le L$. We will abuse notation and refer to the image of $\alpha$ under its carrying map by $\alpha$. Each time $\alpha$ traverses a branch of $\tau$, by Lemma 5.3, it can intersect a vertex cycle at most twice. Therefore, if $v$ is any vertex cycle of $\tau$,

$$
i(v, \alpha) \le 2L,
$$

and hence, by Theorem 3.2, for any $\epsilon \in (0,1)$ and $\omega = \omega(\epsilon)$ sufficiently large,

$$
(5.4) \quad
\begin{aligned}
d_C(v, \alpha) &\le \frac{\log_\omega(2L)}{\lambda(\log_\omega(3)+1) - \log_\omega(1000 \cdot \log_2(100\omega))} + 2 \\
&= O(\log_\omega(L)). \qquad \square
\end{aligned}
$$

**Remark 5.5.** One needs to be cautious in manipulating the inequality in Theorem 3.2 to obtain equation (5.4); if

$$
\rho(\omega, \lambda) := \lambda(\log_{\omega}(3) + 1) - \log_{\omega}(1000 \cdot \log_{2}(100\omega)) < 0,
$$

the direction of the inequality changes, and we will not get the desired upper bound on curve graph distance. However,

$$
\lim_{\omega \to \infty} \rho(\omega, \lambda) = \lambda > 0,
$$

and therefore, for sufficiently large $\omega$, this is not an issue.

**Lemma 5.6.** Suppose $\sigma$ is a large recurrent train track carried by $\tau$ on $S_{g,p}$, and let $\sigma' \in E(\sigma)$ and $\tau' \in E(\tau)$ be such that $\sigma'$ is carried by $\tau'$. Then the total number of times, counting multiplicity, that branches of $\sigma'$ traverse any branch of $\tau' \setminus \tau$ is bounded above by $m_0 = 36g + 12p$.
*Proof.* The complete argument may be found in Masur and Minsky's original paper [11] on the hyperbolicity of the curve complex. For our purposes and for the sake of brevity, it suffices here to remark that they show that any given branch of $\sigma'$ can traverse branches of $\tau' \setminus \tau$ at most twice. Then, since any track has fewer than $18g + 6p$ branches, the result follows. $\square$

To prove the following lemma, we use the results from section 4.

**Lemma 5.7.** There exists $R = R(g,p)$ with

$$ \frac{1}{R(g,p)} = O(\omega^2), $$

such that if $\sigma < \tau$, $\sigma$ is large, $\tau$ is generic, $\mu \in P(\tau)$, and every branch $b$ of $\tau \setminus \sigma$ and $b'$ of $\sigma$ satisfies $\mu(b) < R(g,p)\mu(b')$, then $\mu \in \text{int}(PE(\sigma))$ and $\sigma$ is recurrent.

*Proof.* We follow Masur and Minsky's original argument [11]. The main tools are the elementary moves on train tracks called splitting and sliding, as introduced in section 3 (see Figures 2 and 3), which can be used to take $\tau$ to a diagonal extension of $\sigma$. In order to do this, we need to move any branch of $\tau \setminus \sigma$ into a corner of a complementary region of $\sigma$. A split or a slide applied to any such branch either reduces the number of branches of $\tau \setminus \sigma$ incident to a given branch of $\sigma$ or decreases the distance between a branch of $\tau \setminus \sigma$ and a corner of a complementary region of $\sigma$.

Thus, a bounded number of such moves produces a track carried by a diagonal extension of $\sigma$. If a splitting is performed involving a branch $b$ of $\tau \setminus \sigma$ and a branch $c$ of $\sigma$, the resulting track contains a new branch $c'$ of $\sigma$, and we can extend $\mu$ to $c'$ consistently with the switch conditions by assigning $\mu(c') = \mu(c) - \mu(b)$.
In particular, a sufficient condition for being able to define $\mu$ on the new track is

$$ (5.5) \qquad \mu(c) > \mu(b). $$

There are at most $18g + 6p$ branches of $\tau \setminus \sigma$ and at most $18g + 6p$ branches of $\sigma$ or $\tau$. As mentioned earlier, a splitting move either reduces the number of branches of $\tau \setminus \sigma$ incident to $\sigma$, or it reduces the number of edges of $\sigma$ between a given branch of $\tau \setminus \sigma$ and a corner that it faces. Once a branch of $\tau \setminus \sigma$ is separated from a corner of a complementary region of $\sigma$ by only edges of $\sigma$ for which no splitting moves can be performed, a slide move takes such an edge to a corner point. Therefore, each edge of $\tau \setminus \sigma$ is taken to a corner of $\sigma$ after no more than $18g+6p+1$ slidings and splittings, and therefore we obtain $\tau'$ after at most $(18g+6p)(18g+6p+1)$ such moves.

Now, let $R(g,p) = \frac{1}{(18g+6p)(18g+6p+1)+1}$, and assume that for this value of $R$, the hypothesis of the statement is satisfied. In light of equation (5.5), $\mu$ is definable on the diagonal extension $\tau'$ that we obtain after splitting and sliding as long as

$$ (5.6) \qquad \min_{\sigma} \mu > \frac{1}{R(g,p)} \max_{\tau \setminus \sigma} \mu, $$

which is precisely what the hypothesis of the lemma guarantees. Therefore, $\mu$ is extendable to a diagonal extension of $\sigma$ such that all branches receive positive weights; hence, $\mu \in \text{int}(PE(\sigma))$.

It remains to show that $\sigma$ is recurrent; suppose not. Let $B(\sigma)$ denote the branch set of $\sigma$. Then Theorem 4.1 implies that if $u \in \mathbb{R}^{|B(\sigma)|}$ is a vector with all positive coordinates,

$$ \|L_{\sigma}(u)\|_{\sup} \geq \frac{u_{\min}}{12g + 4p - 12}. $$

In light of equation (5.6), since $\mu$ satisfies the switch conditions on $\sigma$, the vector $\mu$ has small deviations up to the additive error coming from the weight it assigns to any branch of $\tau \setminus \sigma$, which is less than

$$ R(g,p) \cdot \mu_{\min};$$

since we assumed that $\tau$ is generic, there are at most two branches of $\tau \setminus \sigma$ incident to any branch of $\sigma$, and therefore the deviations of $\mu$ are all less than $\frac{\mu_{\min}}{12g+4p-12}$, contradicting Theorem 4.1. $\square$

**Lemma 5.8.** Let $L > 0$ be given. Then there exist functions $s_L(g,p)$ and $y(g,p) = O(\omega^3 3^{18\omega})$ satisfying the following: if $\sigma$ is large and carried by $\tau$, if $\sigma' \in E(\sigma)$ and $\tau' \in E(\tau)$ are such that $\tau'$ carries $\sigma'$, and if $d(\sigma, \tau) \ge s_L$, then any simple closed curve $\beta$ carried on $\sigma'$ can be written in $P(\tau')$ as $\beta_{\tau} + \beta'_{\tau'}$, such that

$$ l_{\tau'}(\beta'_{\tau'}) \le y(g,p) \cdot l_{\sigma'}(\beta) \quad \text{and} \quad l_{\tau}(\beta_{\tau}) \ge s_L(g,p) \, l_{\sigma'}(\beta). $$

*Proof.* The details of the argument are not entirely relevant for the proof of our main theorem but may be found in [11]; we therefore omit the particulars of the proof and remark only that, in their argument, Masur and Minsky show that it suffices to take

$$ y(g,p) := C \cdot m_0 W_0 C_0, $$

where $C$ is the constant from equation (5.3), $m_0$ is the constant from the statement of Lemma 5.6, $W_0$ is a bound on the weights that a vertex cycle can place on any one branch of $\sigma'$ (and therefore it suffices to take $W_0 = 3$ by Lemma 5.3), and $C_0$ is a bound on the combinatorial length of any vertex cycle on any train track on $S_{g,p}$. Putting all of this together, we obtain

$$y(g,p) := [(2(18g + 6p - 18)) \cdot 3^{18g+6p}] (3(36g+12p-36)^2) = O(\omega^3 3^{18\omega}),$$

as claimed.
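Since the displayed expression for $y(g,p)$ is entirely explicit, its size for particular surfaces is a direct computation. The following sketch merely tabulates the displayed formula; the sample $(g,p)$ values are our own illustrative choices.

```python
# Direct evaluation of the displayed bound
#   y(g,p) = [2(18g + 6p - 18) * 3^(18g + 6p)] * [3 (36g + 12p - 36)^2],
# tabulated for a few small (g, p).  The sample pairs are illustrative only.

def y_bound(g: int, p: int) -> int:
    branches = 18 * g + 6 * p - 18            # bound on the branch count
    c = 2 * branches * 3 ** (18 * g + 6 * p)  # the constant C of eq. (5.3)
    return c * 3 * (36 * g + 12 * p - 36) ** 2

if __name__ == "__main__":
    for g, p in [(2, 0), (2, 1), (3, 0)]:
        print(g, p, y_bound(g, p))
```

Even for a closed genus-2 surface the bound is astronomically large, which is why only its asymptotic order $O(\omega^3 3^{18\omega})$ matters in what follows.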
Masur and Minsky [11] also show that it suffices to take

$$s_L(g,p) := h_{C_0 L + y(g,p)}(g,p) + 2B,$$

where $B$ is a bound on the curve graph distance between any two vertex cycles of the same train track.

Therefore, by Theorem 3.1, for sufficiently large $\omega$,

$$ (5.7) \qquad s_L(g,p) \le h_{C_0 L + y(g,p)}(g,p) + 6. \qquad \square $$

*Proof of Lemma 5.1.* Again with concision in mind, we do not include the entirety of Masur and Minsky’s argument [11]; we simply remark that, in our notation, it suffices to choose

$$k(g,p) := s_{C m_0 \left( \frac{m_2}{R(g,p)} \right)^{m_3}} (g,p).$$

Here, $m_0$ is as in Lemma 5.6 and is thus bounded above by $36g + 12p$, $m_2 < (18g + 6p)^{18g+6p}$, and $m_3 < 18g + 6p$. Thus,

$$ C m_0 \cdot \left( \frac{m_2}{R(g,p)} \right)^{m_3} < [(2(18g + 6p - 18)) \cdot 3^{18g+6p}] \cdot (36g + 12p) \left((18g + 6p)^{18g+6p+2}\right)^{18g+6p} =: D, $$

and therefore, by equation (5.7), for $\omega(g,p)$ sufficiently large,

$$
\begin{aligned}
k(g,p) &< h_{C_0 D + y(g,p)}(g,p) + 6 \\
&= O(\log_\omega(\omega^3 3^{18\omega}(18\omega)^{324\omega^2+36\omega})) \\
&= O(\omega^2). \qquad \square
\end{aligned}
$$

## 6. PROOF OF THE MAIN THEOREM AND COROLLARIES

In this section, we prove the main results.

**Theorem 1.1.** There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $C_1(S_{g,p})$, which is $(K(g,p) + R)$-quasiconvex.

*Proof.* Where possible, we use the same notation as Masur and Minsky [11] to avoid confusion. Let $\delta$ be the hyperbolicity constant of $C_1(S)$. By [9], it suffices to take $\delta = 17$. Let $B$ be a bound on the diameter of the set of vertex cycles of a given train track $\tau \subset S_{g,p}$. As mentioned above, for sufficiently large $\omega$ it suffices to take $B = 3$ (see [2] for a proof of this).
Given a nested train track sequence $(\tau_i)_i$, consider a subsequence $(\tau_{i_j})_j$ such that

$$k(g, p) \le d(\tau_{i_j}, \tau_{i_{j+1}}) < k(g, p) + R,$$

and such that if $\tau_n$ is any track not in the subsequence $(\tau_{i_j})_j$, then there is some $c$ for which

$$d(\tau_{i_c}, \tau_n) < k(g, p).$$

Then, since $d(\tau_{i_j}, \tau_{i_{j+1}}) \ge k(g, p)$, the effective nesting lemma implies that

$$PN(\tau_{i_{j+1}}) \subset \text{int}(PN(\tau_{i_j})).$$

For any train track $\tau$, one always has

$$N_1(\text{int}(PN(\tau))) \subset PN(\tau),$$

where $N_m(\text{int}(PN(\tau)))$ denotes the set of multi-curves at distance at most $m$ in $C_1$ from some multi-curve representing a measure in $\text{int}(PN(\tau))$. Combining these two inclusions and inducting yields

$$N_{m-1}(PN(\tau_{i_{j+m}})) \subset \text{int}(PN(\tau_{i_j})).$$

Masur and Minsky [11] then make use of a lemma which implies that no vertex cycle of $\tau_{i_j}$ is in $\text{int}(PN(\tau_{i_j}))$, and therefore

$$d(\tau_{i_j}, \tau_{i_k}) \ge |k-j|.$$

Thus, if $(v_{i_j})_j$ is any sequence of vertices of $(\tau_{i_j})_j$, we have

$$|m-n| \le d_C(v_{i_n}, v_{i_m}) < (k(g,p) + R + 2B)|m-n|,$$

which implies that $(v_{i_j})_j$ is a $(k(g,p)+R+2B)$-quasigeodesic. This proves the first part of Theorem 1.1, with $K(g,p) := 2k(g,p) + 46$. (We have shown the sequence to be a $(k(g,p)+R+6)$-quasigeodesic, but we will need the extra $k(g,p)+40$ for the quasiconvexity statement.)

We now show that $(\tau_i)_{i \in I_1}$ is $(K(g,p)+R)$-quasiconvex. In any $\delta$-hyperbolic metric space, a geodesic segment $\gamma$ connecting the endpoints of a $K$-quasigeodesic segment $\gamma'$ is contained in a $W$-neighborhood of $\gamma'$, where $W = W(K, \delta)$; $W$ is sometimes known as the *stability constant*.

Therefore, a geodesic segment connecting any two elements of the vertex cycle sequence $(v_{i_j})_j$ is contained in a $W(k(g,p)+R+6, 17)$-neighborhood of the sequence.
**Lemma 6.1.** For sufficiently large $\omega$, $W < K(g,p) + R$.

*Proof.* We only give a sketch here; the main idea follows an argument of Ken'ichi Ohshika [17, p. 35], to which we refer for a more complete argument. Hyperbolicity of $C_1$ implies the existence of an exponential divergence function; that is, if $\alpha_1, \alpha_2 : [0, \infty) \to C_1$ are two geodesic rays based at the same point $x_0 \in C_1$, then there is some exponential function $f$ so that, for sufficiently large $r$ (depending on the choice of geodesic rays), the length of any arc connecting $\alpha_1(r)$ and $\alpha_2(r)$ outside of a ball of radius $r$ centered at $x_0$ is at least $f(r)$.

Let $x$ and $y$ be two elements of a vertex cycle sequence $(v_{i_j})_j$ and let $h$ be a geodesic segment connecting them. Denote by $w$ the $(k(g,p)+R+6)$-quasigeodesic segment obtained by following along the vertex sequence from $x$ to $y$.

Let $D = \sup_{s \in h} d_C(s, w)$ and suppose $s \in h$ with $d_C(s, w) = D$. Let $a$ and $b$ be two points on $h$ whose distance from $s$ is $D$ and such that $a$ and $b$ are on different sides of $s$. Note that we can assume that such points exist because the endpoints of $w$ are also the endpoints of $h$, and therefore $s$ must be at least $D$ from the endpoints of $h$.

Let $a'$ (respectively, $b'$) be the point located $2D$ from $s$ on its corresponding side of $s$ along $h$; if $s$ is closer than $2D$ to one of the endpoints of $h$, simply define $a'$ (respectively, $b'$) to be this corresponding endpoint of $h$. Let $y', z' \in w$ be points whose distances from $a'$ and $b'$, respectively, are at most $D$. Then, travelling from $y'$ to $a'$, along $h$ from $a'$ to $b'$, and then from $b'$ to $z'$, we obtain

$$
\begin{aligned}
d_C(y', z') &\le d_C(y', a') + d_C(a', b') + d_C(b', z') \\
&\le D + 4D + D = 6D.
\end{aligned}
$$

This gives a bound on the length of the segment of $w$ connecting $y'$ and $z'$, since $w$ is a quasigeodesic:

$$ \text{length}_w(y',z') \leq (k(g,p) + R + 6) \cdot 6D. $$

Let $\beta$ be the arc obtained by concatenating the following 5 arcs: the arc along $h$ from $a$ to $a'$, the arc connecting $a'$ to $y'$, the arc along $w$ from $y'$ to $z'$, the arc connecting $z'$ to $b'$, and the arc along $h$ from $b'$ to $b$ (see Figure 5).

It follows that

$$ \mathrm{length}(\beta) \leq 4D + 6(k(g,p) + R + 6)D. $$

Now we use the divergence function $f$ for $C_1$ to bound the length of $\beta$ from below. Indeed, for sufficiently large $D$, we have

$$ \mathrm{length}(\beta) \geq f(D-c), $$

FIGURE 5. The length of the path $\beta$ (the dotted path) is bounded above by $4D + 6(k(g,p) + R + 6)D$.

where $c$ is a constant related to $f(0)$ which does not affect the growth rate of the function $f$. Therefore,

$$f(D-c) \leq 4D + 6(k(g,p) + R + 6)D.$$

Therefore, if $D > k(g,p) + R + 6$, then $\omega$ cannot be arbitrarily large, because $f(x)$ eventually dominates $x^2$. This completes the proof of the lemma. $\square$

**Remark 6.2.** We note that the conclusion of Lemma 6.1 is not at all sharp; indeed, the same argument shows that $W$ is eventually smaller than $(k(g,p) + R + 6)^\lambda$ for any $\lambda \in (0, 1)$. However, we do not concern ourselves with this because the contribution to the quasiconvexity of nested sequences coming from $W$ will be dominated by a larger term, as will be seen below.

We have now shown that the collection of vertices of the sequence $(\tau_{i_j})_j$ is quasiconvex with quasiconvexity constant $k(g,p) + R + 6$. It remains to analyze the vertex cycles of tracks that are not in this subsequence. If $v$ is such a vertex and $\omega$ is sufficiently large, we know that $v$ is within $k(g,p)+6$ of some vertex of one of the $\tau_{i_j}$'s.
In any $\delta$-hyperbolic space, geodesics with nearby endpoints fellow travel, in the sense that they remain within a bounded neighborhood of one another whose diameter depends only on $\delta$ and the distance between the endpoints.

Indeed, if $h$ is any geodesic segment connecting arbitrary vertices $v_1$ and $v_2$, then $h$ must remain within $2\delta + k(g,p) + 6 \leq 40 + k(g,p)$ of some geodesic connecting vertices of the $\tau_{i_j}$.

Therefore, the collection of all vertices of the sequence $(\tau_i)_{i \in I_1}$ is a $(46+R+2k(g,p))$-quasiconvex subset of $C_1$. This completes the proof of Theorem 1.1. $\square$

*Proof of Corollary 1.3.* Masur and Minsky [13] complete their argument showing the quasiconvexity of $D(g) \subset C_1(S_g)$ by noting that any two disks in $D(g)$ can be connected by a path in $D(g)$ representing a *well-nested curve replacement sequence*, a certain kind of nested train track sequence with $R$-bounded steps for which one can take $R$ to be 15.

Thus, we see that $D(g)$ is $(61 + 4k(g, 0))$-quasiconvex, and this completes the proof of Corollary 1.3. $\square$

### 6.1. PROOF OF THEOREM 1.2

The purpose of this subsection is to prove Theorem 1.2, which states that splitting and sliding sequences project to $O(\omega^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface $Y \subseteq S$. To do this, we simply follow the original argument of [14], effectivizing along the way.

We first introduce some terminology. Given a subsurface $Y$, as in section 2, let $S^Y$ denote the (non-compact) covering space of $S$ corresponding to $Y$. Then, if $\tau$ is a train track on $S$, let $\tau^Y$ denote the pre-image of $\tau$ under the covering projection to $S^Y$.
Then let $C(\tau^Y)$ and $\mathcal{A}C(\tau^Y)$ denote the collections of essential, non-peripheral, simple closed curves (respectively, curves and arcs) in the Gromov compactification of $S^Y$ whose interiors are train paths on $\tau^Y$. Let $V(\tau)$ denote the collection of vertex cycles of a track $\tau$.

Then, if $Y$ is not an annulus, define the *induced track*, denoted $\tau|_Y$, to be the union of the branches of $\tau^Y$ traversed by some element of $C(\tau^Y)$.

*Proof of Theorem 1.2.* We first note that any splitting and sliding sequence $(\tau_i)_i$ is a nested train track sequence with $Z$-bounded steps, for $Z$ some uniform constant. Indeed, if $\tau_i$ is obtained from $\tau_{i-1}$ by either a splitting or a sliding, any vertex cycle of $\tau_i$ may intersect a vertex cycle of $\tau_{i-1}$ at most 6 times over any branch of $\tau_{i-1}$. Thus, there is some linear function $f: \mathbb{N} \to \mathbb{N}$ such that $i(v_i, v_{i-1}) < f(\omega(g,p))$ for $(\tau_i)_i$ a sliding and splitting sequence on $S_{g,p}$ and $v_i$ (respectively, $v_{i-1}$) any vertex cycle of $\tau_i$ (respectively, $\tau_{i-1}$), and therefore, as a consequence of Theorem 3.2, for sufficiently large $\omega$,

$$d_C(v_i, v_{i-1}) < 4.$$

To show that $(\psi_Y(\tau_i))_i$ is an $O(\omega^2)$-unparameterized quasigeodesic in $C(Y)$, we will exhibit a splitting and sliding sequence $(\sigma_i)_i$ on $Y$ such that $d_Y(\tau_i, \sigma_i) = O(1)$. Then we will be done by applying Theorem 1.1 to the sequence $(\sigma_i)_i$.

Given a vertex cycle $\alpha$ of $\tau_j|_Y$, define $\sigma_j \subset \tau_j|_Y$ to be the minimal track carrying $\alpha$; thus, $\sigma_j$ is recurrent by construction, and Masur, Mosher, and Schleimer [14] show $\sigma_j$ to be transversely recurrent as well.

Furthermore, they show that $\sigma_{j+1}$ is obtained from $\sigma_j$ by a slide or a split so long as $\sigma_j \neq \sigma_{j+1}$.
Therefore, $(\sigma_i)_i$ constitutes a sliding and splitting sequence of birecurrent train tracks and thus is a nested train track sequence on $Y$ with $Z$-bounded steps. + +Since $\sigma_j$ is a subtrack of $\tau_j|_Y$, by Lemma 5.3, any vertex cycle of $\sigma_j$ is a vertex cycle of $\tau_j|_Y$, and therefore the diameter of $V(\tau_j|_Y) \cup V(\sigma_j)$ is no more than 6 for sufficiently large $\omega$. + +Since $\alpha$ is carried by $\tau_j|_Y$, it is also carried by $\tau_j$. Masur, Mosher, and Schleimer [14] then make use of a lemma which implies the existence of a vertex cycle $\beta_j$ of $\tau_j$ which intersects the subsurface $Y$ essentially. By [14, Lemma 2.8 and Lemma 5.4], + +$$i(\pi_Y(\beta_j), v_j) < 8|\mathcal{B}(\tau_j)|,$$ + +and therefore, by Lemma 4.6 and Theorem 3.2, for $\omega$ sufficiently large, + +$$d_C(\pi_Y(\beta_j), v_j) < 4.$$ + +This same argument applies to any vertex cycle of $\tau_j$ which projects non-trivially to $Y$, and thus we conclude that + +$$d_Y(\sigma_j, \tau_j) \le d_Y(\sigma_j, \tau_j|_Y) + d_Y(\tau_j|_Y, \tau_j) < 6 + 4 = 10,$$ + +for all $\omega$ sufficiently large. $\square$ + +**Acknowledgments.** The author would primarily like to thank his adviser, Yair Minsky, for his guidance and for many helpful suggestions. He would also like to thank Ian Biringer, Catherine Pfaff, Saul Schleimer, and Harold Sultan for their time and for the many motivating conversations they’ve had with the author regarding this work. Finally, the author thanks the referee for several helpful comments. + +REFERENCES + +[1] Aaron Abrams and Saul Schleimer, *Distances of Heegaard splittings*, Geom. Topol. **9** (2005), 95–119 (electronic). + +[2] Tarik Aougab. *Uniform hyperbolicity of the graphs of curves*. arXiv:1212.3160 [math.GT]. Available at http://arxiv.org/pdf/1212.3160.pdf. + +[3] Brian H. Bowditch, *Uniform hyperbolicity of the curve graphs*. Available at http://homepages.warwick.ac.uk/masgak/papers/uniformhyp.pdf.
+ +[4] Matt Clay, Kasra Rafi, and Saul Schleimer, *Uniform hyperbolicity of the curve graph via surgery sequences*. arXiv:1302.5519 [math.GT]. Available at http://arxiv.org/pdf/1302.5519.pdf. + +[5] Benson Farb and Dan Margalit, *A Primer on Mapping Class Groups*. Princeton Mathematical Series, 49. Princeton, NJ: Princeton University Press, 2012. +---PAGE_BREAK--- + +[6] Samuel Fiorini, Gwenaël Joret, Dirk Oliver Theis, and David R. Wood, *Small minors in dense graphs*, European J. Combin. **33** (2012), no. 6, 1226–1245. arXiv:1005.0895 [math.CO]. Available at http://arxiv.org/pdf/1005.0895.pdf. + +[7] John Hempel, *3-manifolds as viewed from the curve complex*, Topology **40** (2001), no. 3, 631–657. + +[8] Ursula Hamenstädt, *Geometry of the complex of curves and of Teichmüller space* in Handbook of Teichmüller Theory. Vol. I. Ed. Athanase Papadopoulos. IRMA Lectures in Mathematics and Theoretical Physics, 11. Zürich: Eur. Math. Soc., 2007. 447–467. + +[9] Sebastian Hensel, Piotr Przytycki, and Richard C. H. Webb, *Slim unicorns and uniform hyperbolicity for arc graphs and curve graphs*. arXiv:1301.5577 [math.GT]. Available at http://arxiv.org/pdf/1301.5577.pdf. + +[10] Steven P. Kerckhoff, *The measure of the limit set of the handlebody group*, Topology **29** (1990), no. 1, 27–40. + +[11] Howard A. Masur and Yair N. Minsky, *Geometry of the complex of curves. I. Hyperbolicity*, Invent. Math. **138** (1999), no. 1, 103–149. + +[12] ————, *Geometry of the complex of curves. II. Hierarchical structure*, Geom. Funct. Anal. **10** (2000), no. 4, 902–974. + +[13] ————, *Quasiconvexity in the curve complex* in In the Tradition of Ahlfors and Bers, III. Ed. William Abikoff and Andrew Haas. Contemporary Mathematics, 355. Providence, RI: Amer. Math. Soc., 2004. 309–320. + +[14] Howard Masur, Lee Mosher, and Saul Schleimer, *On train-track splitting sequences*, Duke Math. J. **161** (2012), no. 9, 1613–1656.
+ +[15] Howard Masur and Saul Schleimer, *The geometry of the disk complex*, J. Amer. Math. Soc. **26** (2013), no. 1, 1–62. + +[16] Lee Mosher, *Train track expansions of measured foliations*. Available at http://andromeda.rutgers.edu/sinmosher/arationality031228.pdf. 2003. + +[17] Ken'ichi Ohshika, *Discrete Groups*. Translated from the 1998 Japanese original by the author. Translations of Mathematical Monographs, 207. Iwanami Series in Modern Mathematics. Providence, RI: American Mathematical Society, 2002. + +[18] R. C. Penner and J. L. Harer, *Combinatorics of Train Tracks*. Annals of Mathematics Studies, 125. Princeton, NJ: Princeton University Press, 1992. + +[19] William P. Thurston, *On the geometry and dynamics of diffeomorphisms of surfaces*, Bull. Amer. Math. Soc. (N.S.) **19** (1988), no. 2, 417–431. + +DEPARTMENT OF MATHEMATICS; YALE UNIVERSITY; 10 HILLHOUSE AVENUE; NEW HAVEN, CT 06510 USA + +E-mail address: tarik.aougab@yale.edu + +---PAGE_BREAK--- + +Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis + +Hicham Badri, Hussein Yahia, Khalid Daoudi + +► To cite this version: + +Hicham Badri, Hussein Yahia, Khalid Daoudi. Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis. European Conference on Computer Vision, ECCV 2014, Sep 2014, Zürich, Switzerland. [hal-01064793](https://hal.inria.fr/hal-01064793) + +HAL Id: hal-01064793 + +https://hal.inria.fr/hal-01064793 + +Submitted on 17 Sep 2014 + +**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not.
The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. +---PAGE_BREAK--- + +# Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis + +Hicham Badri, Hussein Yahia, and Khalid Daoudi + +INRIA Bordeaux Sud-Ouest, 33405 Talence, France +{hicham.badri,hussein.yahia,khalid.daoudi}@inria.fr + +**Abstract.** A fast and accurate texture recognition system is presented. The new approach consists in extracting locally and globally invariant representations. The locally invariant representation is built on a multi-resolution convolutional network with a local pooling operator to improve robustness to local orientation and scale changes. This representation is mapped into a globally invariant descriptor using multifractal analysis. We propose a new multifractal descriptor that captures rich texture information and is mathematically invariant to various complex transformations. In addition, two more techniques are presented to further improve the robustness of our system. The first technique consists in combining the generative PCA classifier with multiclass SVMs. The second technique consists of two simple strategies to boost classification results by synthetically augmenting the training set. Experiments show that the proposed solution outperforms existing methods on three challenging public benchmark datasets, while being computationally efficient. + +## 1 Introduction + +Texture classification is one of the most challenging computer vision and pattern recognition problems.
A powerful texture descriptor should be invariant to scale, illumination, occlusions, perspective/affine transformations and even non-rigid surface deformations, while being computationally efficient. Modeling textures via statistics of spatial local textons is probably the most popular approach to build a texture classification system [1,2,3,4,5,6,7]. Based on this Bag-of-Words architecture, these methods try to design a robust local descriptor. Distributions over these textons are then compared using a proper distance and a nearest neighbor or kernel SVMs classifier [8]. Another alternative to regular histograms consists in using multifractal analysis [9,10,11,12,13]. The VG-fractal method [9] statistically represents the textures with the full PDF of the local fractal dimensions or lengths, while the methods in [10,11,12,13] make use of the box-counting method to estimate the multifractal spectrum. Multifractal-based descriptors are theoretically globally invariant to bi-Lipschitz transforms that include perspective transforms and texture deformations. A different approach recently presented in [14] consists in building a powerful local descriptor by cascading wavelet scattering transformations of image patches and using a generative PCA classifier [15]. Unfortunately, while these methods achieve high accuracy on some standard benchmark datasets, little attention is given to the computational efficiency, which is crucial in a real-world system. +---PAGE_BREAK--- + +We present in this paper a new texture classification system which is both accurate and computationally efficient. The motivation behind the proposed work comes from the success of multifractal analysis [10,9,11,12,13]. Given an input texture, the image is filtered with a small filter bank for various filter orientations. A pooling operator is then applied to improve robustness to local orientation change. This process is repeated for different resolutions for a richer representation. 
This first step generates various low-pass and high-pass responses that form a *locally invariant* representation. The mapping towards the final descriptor is done via multifractal analysis. It is well known that the *multifractal spectrum* encodes rich texture information. The methods in [10, 11, 12, 13] use the box-counting method to estimate the multifractal spectrum. However, this method is unstable due to the limited resolution of real-world images. We present a new multifractal descriptor that is more stable and improves invariance to bi-Lipschitz transformations. This improvement is validated by extensive experiments on public benchmark datasets. The second part of our work concerns training strategies to improve classification rates. We propose to combine the generative PCA classifier [14,15] with kernel SVMs [8] for classification. We also introduce two strategies called "synthetic training" to artificially add more training data based on illumination and scale change. Results outperforming the state-of-the-art are obtained over challenging public datasets, with high computational efficiency. + +The paper is organized as follows: section 2 describes the proposed descriptor, section 3 presents the proposed training strategies, section 4 presents classification results conducted on 3 public datasets as well as a comparison with 9 state-of-the-art methods. + +## 2 Robust Invariant Texture Representation + +The main goal of a texture recognition system is to build an *invariant* representation, a mapping which reduces the large intra-class variability. This is a very challenging problem because the invariance must include various complex transformations such as translation, rotation, occlusion, illumination change, non-rigid deformations, perspective view, among others. As a result, two similar textures with different transformation parameters must have similar descriptors. An example is given in Figure 1. 
Not only should the system be accurate, it should also be computationally efficient; otherwise, its use in a real-world system would be limited by the long processing time needed to extract the descriptor. Our goal in this paper is to build both an *accurate* and *fast* texture recognition system. Our non-optimized Matlab implementation takes around 0.7 seconds to extract the descriptor on a medium-size image (480 × 640) using a modern laptop. The processing time can be further decreased by reducing the resolution of the image without sacrificing much accuracy. This is due to the strong robustness of our descriptor to scale changes via accurate multifractal statistics that encode rich multi-scale texture information. We explain in this section how we build the proposed descriptor, the motivation behind the approach and the connection with previous work. + +### 2.1 Overview of the Proposed Approach + +The proposed descriptor is based on two main steps: +---PAGE_BREAK--- + +Fig. 1: Intra-class variability demonstration. The three textures 1, 2 and 3 exhibit strong changes in scale and orientation as well as non-rigid deformations. As can be seen, the proposed descriptor is nearly invariant to these transformations (see section 2). + +1. Building a *locally* invariant representation: using multiple high-pass filters, we generate different sparse representations for different filter orientations. A pooling operator is applied over the orientations to increase the local invariance to orientation change. The process is repeated for multiple image resolutions for a richer representation. + +2. Building a *globally* invariant representation: the first step generates various images that encode different texture information. We also include the multi-resolution versions of the input to provide low-pass information. We need a mapping that transforms this set of images into a stable, fixed-size descriptor.
We use multifractal analysis to statistically describe each one of these images. We present a new method that extracts rich information directly from local singularity exponents. The local exponents encode rich multi-scale texture information. Their log-normalized distribution represents a stable mapping which is invariant to complex bi-Lipschitz transforms. As a result, the proposed multifractal descriptor is mathematically proven to be robust to strong environmental changes. + +## 2.2 Locally Invariant Representation + +A locally invariant representation aims at increasing the similarity of local statistics between textures of the same class. To build this representation, we construct a simple convolutional network where the input image is convolved with a filter bank for various orientations, and then pooled to reduce local orientation change. The multilayer extension consists in repeating the same process for various image resolutions on the low-pass output of the previous resolution, which offers a richer representation. +---PAGE_BREAK--- + +Given an input texture *I*, the image is first low-pass filtered with a filter $\psi_l$ to reduce small image domain perturbations and produce an image $J_{1,0}$. This image is then filtered with multiple zero-mean high-pass filters $\psi_{k,\theta}$, where *k* denotes the filter number and $\theta$ its orientation. High-pass responses encode higher-order statistics that are not present in the low-pass response $J_{1,0}$. A more stable approach consists in applying the modulus on the high-pass responses, which imposes symmetric statistics and improves invariance of the local statistics. Applying several different filters naturally increases the amount of texture information that is later extracted via multifractal analysis.
In order to increase the local invariance to orientation, we apply a pooling operator $\phi_\theta: \mathcal{R}^{i \times j \times n} \rightarrow \mathcal{R}^{i \times j}$ on the oriented outputs for each filter: + +$$ J_{1,k} = \phi_{\theta}(|J_{1,0} \star \psi_{k,\theta}|, \theta = \theta_1, \dots, \theta_n), \quad k = 1, \dots, K, \qquad (1) $$ + +where *n* is the number of orientations and *i* × *j* is the size of the low-pass image. As a result, we obtain 1 low-pass response and *K* high-pass responses, each encoding different statistics. For a richer representation, we repeat the same operation for different resolutions $s = 2^0, \dots, 2^{-L}$, where $s = 1$ is the finest resolution and $s = 2^{-L}$ is the coarsest resolution. The image generation process is then generalized as follows: + +$$ J_{s,k} = \begin{cases} I \star \psi_l & k=0, s=1 \\ \downarrow (J_{2s,0} \star \psi_l) & k=0, s \neq 1 \\ \phi_\theta(|J_{s,0} \star \psi_{k,\theta}|, \theta=\theta_1, \dots, \theta_n) & k=1, \dots, K, \end{cases} \qquad (2) $$ + +where $\downarrow$ denotes the downsampling operator. We found that calculating statistics on multiple resolutions instead of a single one significantly increases the robustness of the descriptor. This is to be expected, because two textures may seem "more similar" at a lower resolution. As a result, the intra-class variability decreases as the resolution decreases, but keeping higher-resolution images is important to ensure extra-class decorrelation. + +## Dimensionality Reduction with Pooling + +Using multiple filters $\psi_{k,\theta}$ dramatically increases the size of the image set. Since each image $J_{s,k}$ will be used to extract statistics via multifractal analysis, this results in a very large descriptor. One resulting issue is the high dimensionality of the training set. Another is the processing time, as the statistics must be computed on each image.
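The orientation pooling $\phi_\theta$ of equation (1) can be condensed into a short sketch. The following numpy illustration takes the pooling operator to be a pixel-wise maximum over the modulus responses; the derivative kernels and the naive convolution helper are illustrative choices, not the filter bank used in the paper:

```python
import numpy as np

def conv2_same(img, ker):
    """Naive 'same'-size 2-D convolution with zero padding (illustrative
    stand-in for a fast FFT- or library-based convolution)."""
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    kflip = ker[::-1, ::-1]                     # convolution flips the kernel
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kflip)
    return out

def orientation_pooling(low_pass, oriented_kernels):
    """Eq. (1): modulus of each oriented high-pass response, then a
    pixel-wise maximum over the orientations (phi_theta taken as max)."""
    stack = np.stack([np.abs(conv2_same(low_pass, k)) for k in oriented_kernels])
    return stack.max(axis=0)

# Toy usage: a vertical edge probed by 0- and 90-degree derivative kernels.
img = np.zeros((5, 5)); img[:, 3:] = 1.0
k0 = np.array([[-1.0, 0.0, 1.0]])
k90 = k0.T
pooled = orientation_pooling(img, [k0, k90])
```

The pooled image has the same size as the input and dominates each individual modulus response; the same pixel-wise maximum can also serve as the merging operator used below for dimensionality reduction.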
We propose to merge different high-pass responses $J_{s,k}$ together to reduce the number of images. A straightforward approach is to gather various images $\{J_{s,k}, k=t, \dots, u\}$ and then apply a pooling operator $\phi_r$ that merges each image subset into one single image $J_{s,k_{t,\dots,u}}$: + +$$ J_{s,k_{t,\dots,u}} = \phi_r(J_{s,k}, k=t, \dots, u). \qquad (3) $$ + +As a result, the number of high-pass responses is decreased, which leads to a reduced-size descriptor. The pooling operator $\phi_r$ can be either the mean or the min/max functions. We take $\phi_r$ to be the maximum in this paper. An example is given in Figure 2 for one resolution $s = 2^0$ using 6 high-pass filters and one low-pass filter. +---PAGE_BREAK--- + +The number of images is reduced from 7 to 3. For 5 resolutions ($s = 2^0, \dots, 2^{-4}$), the total number of images goes from 35 to 15, which is an important reduction. + +Fig. 2: Image generation example applied on the texture input $I$ for one resolution using 6 high-pass filters. The images $J_{0,1,\dots,6}$ are a result of the orientation pooling (eq. 2). The 6 images are reduced to 2 images using a pooling operator $\phi_r$ on similar responses to reduce the dimensionality. The same process is repeated for multiple resolutions. + +## 2.3 Globally Invariant Representation + +Once the set of low-pass and high-pass images is generated, we need to extract global statistics, a mapping into a fixed-size descriptor, which is *globally invariant* to the complex physical transformations. We propose to use a new multifractal approach to statistically describe textures suffering from strong environmental changes. To understand the difference between the proposed method and the previous work, we first present the standard fractal and multifractal analysis framework used by the previous methods; we then introduce the proposed approach.
+ +**Multifractal Analysis** In a nutshell, a fractal object $E$ is self-similar across scales. One characteristic of its irregularity is the so-called *box fractal dimension*. By measuring a fractal object at multiple scales $r$, the box fractal dimension is defined through a power-law relationship between the scale $r$ and the smallest number of sets of length $r$ covering $E$ [16]: + +$$ \dim(E) = \lim_{r \to 0} \frac{\log N(r, E)}{-\log r}. \quad (4) $$ + +Using square boxes of size $r$, this dimension can be estimated numerically; the procedure is known as the *box-counting method*. Multifractal analysis is an extension of this important notion. A multifractal object $F$ is composed of many fractal components $F_{1,...,f}$. In this
We present a different method to statistically describe textures using multifractal analysis. Contrary to previous methods, we use a new measure which is based on the distribution of local singularity exponents. It can be shown in fact that this measure is related to the true multifractal spectrum, and its precision is proven by the high-accuracy of the proposed descriptor. Moreover, this approach is computationally efficient, which permits to achieve high accuracy at reduced processing time. + +**Proposed Multifractal Descriptor** The proposed method first estimates the local singularity exponents $h(x)$ on each pixel $x$, and then applies the empirical histogram followed by log operator to extract the global statistics $\phi_h = \log(\rho_h + \epsilon)$. This operation is performed on all the resulting images of the first step, which results in multiple histograms $\phi_{h_i}$. The concatenation of all these histograms forms the final descriptor. + +Let $J$ be an image, and $\mu_\psi(B(x,r)) = \int_{B(x,r)} (J \star \psi_r)(y)dy$ a positive measure, where $\psi_r$ is an appropriate wavelet at scale $r$ (Gaussian in our case) and $B(x,r)$ a closed disc of radius $r > 0$ centered at $x$. Multifractal analysis states that the wavelet projections scale as power laws in $r$ [19,20,21]. We use a microcanonical evaluation [20] which consists in assessing an exponent $h(x)$ for each pixel $x$: + +$$ \mu_{\psi}(B(x, r)) \approx \alpha(x)r^{h(x)}, \quad r \to 0. \qquad (5) $$ + +The validity of equation (5) has been tested on a large dataset [21], which proves that natural images exhibit a strong multifractal behavior. Introducing the log, the formula is expressed as a linear fit: + +$$ \log(\mu_{\psi}(B(x, r))) \approx \log(\alpha(x)) + h(x)\log(r), \quad r \to 0. 
\qquad (6) $$ + +Rewriting the equation in matrix form permits calculating all the exponents at once by solving the following linear system: + +$$ \underbrace{\begin{bmatrix} 1 & \log(r_1) \\ \vdots & \vdots \\ 1 & \log(r_l) \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} \log(\alpha(x_1)) & \cdots & \log(\alpha(x_N)) \\ h(x_1) & \cdots & h(x_N) \end{bmatrix}}_{\eta} = \underbrace{\begin{bmatrix} \log(\mu_{\psi}(B(x_1, r_1))) & \cdots & \log(\mu_{\psi}(B(x_N, r_1))) \\ \vdots & & \vdots \\ \log(\mu_{\psi}(B(x_1, r_l))) & \cdots & \log(\mu_{\psi}(B(x_N, r_l))) \end{bmatrix}}_{b}, \quad (7) $$ +---PAGE_BREAK--- + +$$ \underset{\eta}{\operatorname{argmin}} ||A\eta - b||_2^2, \quad h(x_i) = \eta(2, i), \quad (8) $$ + +where *N* is the number of pixels of the image *J* and *l* is the number of scales used in the log-log regression. This matrix formulation is computationally efficient and plays an important role in the speed of the proposed method. Given the local exponents *h*(*x*), which form an image of the same size as *J* describing the local irregularity at each pixel, we now need to extract a fixed-size measure that globally describes the statistics of *h*(*x*). Using the box-counting method, this would require extracting all the fractal sets $F_h = \{x \mid h(x) \approx h\}$ and then calculating the box-counting dimension of each set $F_h$. As discussed before, this approach leads to a crude estimation of the true multifractal spectrum due to the actual low resolution of real-world images. Moreover, a log-log regression would have to be performed on each fractal set. Instead, we propose to use the empirical histogram $\rho_h$ followed by a log operator: + +$$ \varphi_h = \log(\rho_h + \epsilon), \quad (9) $$ + +where $\epsilon \ge 1$ is set to provide stability. The distribution of the local exponents is an invariant representation which encodes the multi-scale properties of the texture.
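Equations (5)-(9) admit a compact sketch. In the numpy illustration below, the wavelet measure $\mu_\psi$ is replaced by a plain box average over a $(2r+1) \times (2r+1)$ window (an assumption made for brevity; the paper uses a Gaussian wavelet), and the exponents of all pixels are obtained by one joint least-squares solve, as in equations (7)-(8):

```python
import numpy as np

def singularity_exponents(J, radii=(1, 2, 3)):
    """Eqs. (5)-(8): least-squares fit of log mu(B(x,r)) against log r,
    jointly for all pixels.  mu is a simple box average here, an
    illustrative stand-in for the Gaussian wavelet measure."""
    h, w = J.shape
    rows = []
    for r in radii:
        padded = np.pad(J, r, mode='edge')
        acc = np.zeros((h, w))
        for di in range(2 * r + 1):          # sliding-window sum
            for dj in range(2 * r + 1):
                acc += padded[di:di + h, dj:dj + w]
        mu = acc / (2 * r + 1) ** 2
        rows.append(np.log(mu + 1e-12).ravel())
    A = np.column_stack([np.ones(len(radii)), np.log(radii)])  # l x 2, eq. (7)
    b = np.stack(rows)                                         # l x N, eq. (7)
    eta, *_ = np.linalg.lstsq(A, b, rcond=None)                # eq. (8)
    return eta[1].reshape(h, w)                                # h(x): 2nd row

def log_histogram(hx, bins=8, eps=1.0):
    """Eq. (9): log-normalized empirical histogram of the exponents."""
    rho, _ = np.histogram(hx, bins=bins)
    return np.log(rho + eps)

# Toy usage: a constant image has vanishing exponents under this measure.
hx = singularity_exponents(np.ones((8, 8)))
desc = log_histogram(hx)
```

The joint solve mirrors the speed argument in the text: one small $l \times 2$ system is factored once and applied to every pixel column at the same time.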
The log acts as a normalization operator that nearly linearizes histogram scaling and makes the descriptor more robust to small perturbations. This way, we have access to reliable statistics¹. This log-histogram is calculated on each image generated in the first step, which results in a set of histograms $\varphi_{h_1,...,h_M}$, where *M* is the total number of generated images. The final descriptor $\varphi$ is constructed by concatenating $(\uplus)$ all the generated histograms: + +$$ \varphi = \biguplus_{m=1}^{M} \varphi_{h_m}. \quad (10) $$ + +A descriptor example is given in Figure 3. This descriptor $\varphi$ is the result of the concatenation of 14 log exponent histograms calculated on the images generated with the first step of the method presented in section 2.2 and further explained in Figure 2. Three images are generated for each scale *s*; a low-pass response is presented in red, and two high-pass responses are presented in black and gray in the figure². + +## 2.4 Analysis + +The basic multifractal framework consists in generating multiple images and then extracting statistics using multifractal analysis. Multifractal descriptors are mathematically invariant to bi-Lipschitz transforms, which even include non-rigid transformations and viewpoint change. The proposed method follows the same strategy, but is substantially different from the previous methods. The differences lie in both the image generation step and the statistical description. For instance, the WMFS method [13] + +¹ A mathematical relationship between the log exponents histogram and the multifractal spectrum is presented in the supplementary material. + +² A histogram was discarded for $s = 2^{-4}$ in the second high-pass response (in gray) because the corresponding filter is larger than the input image at resolution $s = 2^{-4}$. +---PAGE_BREAK--- + +Fig.
3: A descriptor example using a low-pass response and two high-pass responses for 5 resolutions $s = 2^0, \dots, 2^{-4}$. The exponents log-histogram is calculated for each response and for multiple image resolutions $s$. + +generates multiple images for multiple orientations; each oriented image is then analyzed using the Daubechies discrete wavelet transform as well as the wavelet leaders [22]. The multifractal spectrum (MFS) is then estimated for each image, for a given orientation, using the box-counting method. The MFS vectors are concatenated for each orientation, and the final descriptor is defined as the mean of all the descriptors over the orientations. Contrary to this method, we use different high-pass filters instead of one single analyzing wavelet, which allows different statistics to be extracted. Generating multiple descriptors for multiple orientations is computationally expensive. In contrast, we generate only one descriptor. To ensure local robustness to orientation, we apply a pooling operator on the *filtered responses*. This approach is much more computationally efficient. Finally, the core of our method is the new multifractal descriptor, which extracts accurate statistics, contrary to the popular box-counting method, as explained in the previous section. The proposed method takes about 0.7 seconds to extract the whole descriptor on an image of size 480 × 640, compared to the 37 seconds reported for the state-of-the-art multifractal method [13]. Experiments show that the proposed descriptor also achieves higher accuracy, especially in large-scale situations where extra-class decorrelation is a challenging issue. + +## 2.5 Pre and Post Processing + +Pre-processing and post-processing can improve the robustness of a texture recognition system. For instance, the method in [12] performs a scale normalization step on each input texture using blob detection.
This step first estimates the scale of the texture and then applies a normalization, which aims at increasing the robustness to scale change. Other texture classification methods such as [9] use Weber's law normalization to improve robustness to illumination. We do not use any scale normalization step as in [12,13]; rather, we sometimes use histogram equalization to improve robustness to illumination change. We also apply a post-processing to the feature vector $\varphi$ using wavelet-domain soft-thresholding [?]. This step aims at increasing the intra-class correlation by +---PAGE_BREAK--- + +reducing small histogram perturbations (for more details, please refer to the supplementary material). + +## 3 Classification and Training Strategies + +The second part of our work concerns the training aspect of the texture recognition problem. The globally invariant representation is theoretically stable thanks to accurate multifractal statistics. However, other small transformations and perturbations may occur in real-world images, and this is where a good training strategy helps us take advantage of the proposed descriptor in practice. We work on two ideas: + +1. The choice of the classifier can improve recognition rates: we introduce a simple combination between the Generative PCA classifier [14] and SVMs [8]. + +2. The lack of data is an issue: how can we get more data? Given an input training texture image, we synthetically generate more images by changing its illumination and scale. We call this strategy "synthetic training". + +Experiments on challenging public benchmark datasets, including a large-scale dataset with 250 classes, validate the robustness of the proposed solution. + +## 3.1 Classification + +**Support Vector Machines** SVMs [8] are widely used in texture classification [10,12,13,17,6]. Commonly used kernels are mainly the RBF Gaussian kernel, polynomials and the $\chi^2$ kernel.
Extension to multiclass can be done via strategies such as one-vs-one and one-vs-all. In this paper, we use the one-vs-all strategy with an RBF kernel. It consists in building a binary classifier for each class as follows: for each class, a positive label is assigned to the corresponding instances and a negative label is assigned to all the remaining instances. The winning class $c_{svm}$ can be chosen based on probability estimates [23] or a simple score maximization: + +$$ c_{svm} = \underset{1 \le c \le N_c}{\operatorname{argmax}} \{f_{svm}(x,c)\} , \quad f_{svm}(x,c) = \sum_{i=1}^{M_c} \alpha_i^c y_i^c K(x_i^c, x) + b_c, \quad (11) $$ + +where $\alpha_i^c$ are the optimal Lagrange multipliers of the classifier representing the class $c$, $x_i^c$ are the support vectors of the class $c$, $y_i^c$ are the corresponding $\pm 1$ labels, $N_c$ is the number of classes and $x$ is the instance to classify. + +**Generative PCA Classifier** The generative PCA (GPCA) classifier is a simple PCA-based classifier recently used in [15,14]. Given a test descriptor $x$, GPCA finds the closest class centroid $\mathbb{E}(\{x_c\})$ to $x$, after ignoring the first $D$ principal variability directions. Let $V_c$ be the linear space generated by the $D$ eigenvectors of the covariance matrix with the largest eigenvalues, and $V_c^\perp$ its orthogonal complement. The generative PCA classifier uses the projection distance associated to $P_{V_c^\perp}$: + +$$ c_{pca} = \underset{1 \le c \le N_c}{\operatorname{argmin}} \| P_{V_c^\perp} (x - \mathbb{E}(\{x_c\})) \|^2. \quad (12) $$ +---PAGE_BREAK--- + +Classification consists in choosing the class $c_{pca}$ with the minimum projection distance. + +**GPCA-SVM Classifier** We propose to combine GPCA and SVMs into one single classifier. The idea behind this combination comes from the observation that SVMs and GPCA often fail on different instances.
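For concreteness, the projection-distance rule of equation (12) admits a compact sketch. The numpy code below, including the per-class SVD fit, the toy classes, and all names, is an illustrative reconstruction rather than the authors' implementation:

```python
import numpy as np

def gpca_fit(class_samples, D=1):
    """Per class: centroid E({x_c}) and the top-D principal directions V_c
    (rows of Vt from an SVD of the centered data)."""
    models = {}
    for c, X in class_samples.items():
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        models[c] = (mean, Vt[:D])
    return models

def gpca_classify(x, models):
    """Eq. (12): minimize ||P_{V_c^perp}(x - E({x_c}))||^2 over the classes,
    i.e. the residual after projecting out the D variability directions."""
    best, best_d = None, np.inf
    for c, (mean, V) in models.items():
        r = x - mean
        r = r - V.T @ (V @ r)        # remove the V_c component
        d = float(r @ r)
        if d < best_d:
            best, best_d = c, d
    return best

# Toy usage: class 0 varies along the x-axis, class 1 along the y-axis.
X0 = np.array([[0., 0., 0.], [2., 0., 0.], [-2., 0., 0.], [0., 0.1, 0.]])
X1 = np.array([[10., 0., 1.], [10., 0., -1.], [10., 2., 0.], [10., -2., 0.]])
models = gpca_fit({0: X0, 1: X1}, D=1)
```

Ignoring the top variability directions is what makes the rule generative: a test point far along a class's own principal direction still yields a small residual for that class.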
A well-designed combination of these classifiers should therefore lead to improved performance. We propose a combination based on the score separation of the SVM output:

$$ c_{final} = \begin{cases} c_{svm} & \text{if } f_{svm}(x, c_{svm}) - f_{svm}(x, c_{pca}) \geq th_{svm} \\ c_{pca} & \text{otherwise,} \end{cases} \quad (13) $$

where $th_{svm}$ is a threshold parameter. The score separation gives an idea of the SVM's confidence in classifying a given instance. A similar approach would be to use probability estimates [23] instead of the score. If the separation $f_{svm}(x, c_{svm}) - f_{svm}(x, c_{pca})$ is relatively large, the SVM is quite "confident" about the result; otherwise, the classifier selects the GPCA result. Determining the best threshold $th_{svm}$ for each instance is an open problem. In this paper, we instead fix a threshold value for each experiment. We generally select a small threshold for small training sets and larger thresholds for larger sets. Even if this strategy is not optimal, experiments show that the combination improves the classification rates as expected.

## 3.2 Synthetic Training

One important problem in training is coping with the small number of examples. We propose a simple strategy to artificially add more data to the training set by changing the illumination and scale of each instance of the training set. While this idea seems simple, it can have a dramatic impact on performance, as we will see in the next section.

**Multi-Illumination Training** Given an input image *I*, multi-illumination training consists of generating other images with the same content as *I* but under different illumination. We consider two illumination cases: the first is a *uniform* change, obtained by intensity scaling of the form *a*I, where *a* is a given scalar. The second is a *nonuniform* change, obtained by histogram matching against a set of histograms.
The histograms can come from external images, or even from the training set itself (for example, by transforming or combining a set of histograms).

**Multi-Scale Training** Given an input image *I*, multi-scale training consists of generating other images of the same size as *I* by zooming in and out. In this paper, we use 4 generated images: 2 by zooming in and 2 by zooming out.

# 4 Texture Classification Experiments

In this section, we present texture classification results on the standard public datasets **UIUC** [24,1], **UMD** [25] and **ALOT** [26,27], as well as a comparison with 9 state-of-the-art methods.

---PAGE_BREAK---

**Datasets Description** The UIUC dataset [24,1] is one of the most challenging texture datasets presented so far. It is composed of 25 classes; each class contains 40 grayscale images of size 480 × 640 with strong scale, rotation and viewpoint changes in an uncontrolled illumination environment. Some images also exhibit strong non-rigid deformations. Some samples are presented in Figure 4. The UMD dataset [25] is similar to UIUC with higher-resolution images (1280 × 960), but exhibits fewer non-rigid deformations and stronger illumination changes compared to UIUC. To evaluate the proposed method on a large-scale dataset, we choose the ALOT dataset [26,27]. It consists of 250 classes with 100 samples each. We use the same setup as the previous multifractal methods [13]: grayscale images at half resolution (768 × 512). The ALOT dataset is very challenging as it contains a significantly larger number of classes (250) compared to UIUC and UMD (25) and very strong illumination changes (8 illumination levels). The viewpoint change is, however, less dramatic compared to UIUC and UMD.

Fig. 4: Texture samples from the **UIUC** dataset [24,1]. Each row represents images from the same class with strong environmental changes.
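The synthetic-training strategy of §3.2 is straightforward to prototype. The sketch below is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the helper names and the nearest-neighbour zoom are hypothetical. It generates the uniform-illumination copies $aI$ and four zoomed copies of a grayscale training image:

```python
import numpy as np

def uniform_illumination(img, scales=(0.9, 0.95, 1.05, 1.1)):
    """Uniform illumination change: pixel-wise scaling a*I, clipped to [0, 255]."""
    return [np.clip(a * img.astype(np.float64), 0.0, 255.0) for a in scales]

def zoom(img, factor):
    """Zoom in (factor > 1) or out (factor < 1) while keeping the image size,
    using nearest-neighbour resampling about the image center."""
    h, w = img.shape
    rows = np.clip(((np.arange(h) - h / 2) / factor + h / 2).astype(int), 0, h - 1)
    cols = np.clip(((np.arange(w) - w / 2) / factor + w / 2).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

def synthetic_training_set(img):
    """Illumination- and scale-perturbed copies of one grayscale training image."""
    copies = uniform_illumination(img)
    copies += [zoom(img, f) for f in (0.8, 0.9, 1.1, 1.25)]  # 2 zoom-out, 2 zoom-in
    return copies
```

The nonuniform illumination change via histogram matching, and the descriptor re-extraction on the generated images, are omitted for brevity.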
**Implementation details** In order to build a fast texture classification system, we use only two high-pass filtering responses, which results in 3 histograms per image resolution³. The number of image scales is fixed to 5. The filter bank consists of high-pass wavelet filters (Daubechies, Symlets and Gabor). A more robust descriptor can be built by increasing the number of filters and orientations. Filtering can be parallelized for faster processing. While augmenting the number of filters slightly improves classification results, the minimalist setup presented above, coupled with the training strategies introduced in this paper, outperforms existing techniques while additionally offering computational efficiency.

**Evaluation**

We evaluate the proposed system and compare it with state-of-the-art methods over 50 random splits between training and testing. The evaluation consists of three steps:

³ Except for the **ALOT** dataset, where we use 3 high-pass responses for a more robust representation.

---PAGE_BREAK---

1. log-histogram vs. box-counting: We evaluate the precision of our log-histogram method and compare it with the box-counting method used in previous methods.

2. Learning efficiency: We compare the proposed GPCA-SVM combination with single GPCA and SVM results, and assess how the proposed synthetic training strategy improves classification rates.

3. We compare our main results with **9** state-of-the-art results.

**log-histogram vs. box-counting** In this experiment, we replace the log-histogram step of our approach with the box-counting method widely used in previous multifractal methods, to check whether the proposed log-histogram leads to a more accurate bi-Lipschitz invariance. The results are presented in Figure 5. As can be seen, the log-histogram approach leads to higher performance, especially when more data is available.
This confirms that the log-histogram leads to a better bi-Lipschitz invariance, as discussed theoretically above. The log-histogram is a simple operation that permits our system to achieve high computational efficiency.

Fig. 5: Comparison between the box-counting method and the proposed log-histogram approach for various training sizes (5, 10 and 20). The proposed approach leads to a more accurate descriptor.

**Learning Efficiency** In this experiment, we first compare the proposed GPCA-SVM combination with single GPCA and SVM classifiers using the proposed descriptor. Each dataset is denoted $D_{(y)}^{x}$, where $x$ is the name of the dataset and $y$ is the training size in number of images. The best results are in bold. As can be seen in Table 1, GPCA-SVM does indeed improve classification rates. We expect even better results with a better strategy for setting the threshold parameter $th_{svm}$, since in the reported experiments the threshold is fixed for all instances. Next, we compare the results with and without the proposed synthetic training strategy. As can be seen, synthetic training leads to a dramatic improvement. This is a very attractive approach, as it increases only the training time: the system achieves higher recognition accuracy at almost the same test-time cost. For the **UMD** and **ALOT** datasets, we use uniform illumination change with the multiplicative parameter $a$ taking values in {0.9, 0.95, 1.05, 1.1}. For the **UIUC** dataset, we use the nonuniform illumination change

---PAGE_BREAK---

with two histograms. For multi-scale training, we use only four generated images (two by zooming in and two by zooming out), which increases the training set 9 times for the **UMD** and **UIUC** datasets (no multi-scale training is used for the **ALOT** dataset).
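For concreteness, the GPCA decision rule (12) and the score-separation combination (13) can be sketched as follows. This is an illustrative NumPy version, not the authors' code; the SVM scores $f_{svm}(x,c)$ are assumed to be given, e.g. by (11):

```python
import numpy as np

def gpca_fit(X_by_class, D=2):
    """Per-class centroid and top-D principal directions (setup for Eq. 12)."""
    model = []
    for Xc in X_by_class:                      # Xc has shape (n_samples, n_features)
        mu = Xc.mean(axis=0)
        # Right singular vectors of the centered data = eigenvectors of the
        # class covariance matrix, ordered by decreasing eigenvalue.
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        model.append((mu, Vt[:D].T))           # columns of V span V_c
    return model

def gpca_scores(model, x):
    """Squared projection distances ||P_{V_c^perp}(x - mu_c)||^2 (Eq. 12)."""
    dists = []
    for mu, V in model:
        r = x - mu
        r_perp = r - V @ (V.T @ r)             # remove the top-D variability
        dists.append(float(r_perp @ r_perp))
    return np.array(dists)

def gpca_svm_decide(f_svm, gpca_dists, th_svm):
    """Eq. 13: keep the SVM label only if its score separation is large enough."""
    c_svm = int(np.argmax(f_svm))
    c_pca = int(np.argmin(gpca_dists))
    if f_svm[c_svm] - f_svm[c_pca] >= th_svm:
        return c_svm
    return c_pca
```

The prediction for a descriptor is GPCA's class unless the SVM separates its winning class from GPCA's candidate by at least `th_svm`.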
| | $D_{(5)}^{UIUC}$ | $D_{(10)}^{UIUC}$ | $D_{(20)}^{UIUC}$ | $D_{(5)}^{UMD}$ | $D_{(10)}^{UMD}$ | $D_{(20)}^{UMD}$ | $D_{(10)}^{ALOT}$ | $D_{(30)}^{ALOT}$ | $D_{(50)}^{ALOT}$ |
|---|---|---|---|---|---|---|---|---|---|
| Proposed / GPCA | 91.15% | 97.12% | 99.07% | 95.07% | 97.85% | 99.40% | 89.30% | 98.03% | 99.27% |
| Proposed / SVM | 91.23% | 96.30% | 98.47% | 94.43% | 97.44% | 99.25% | 88.96% | 98.16% | 99.14% |
| Proposed / GPCA-SVM | 92.58% | 97.17% | 99.10% | 95.23% | 98.04% | 99.44% | 90.67% | 98.45% | 99.34% |
| + Synthetic Train / GPCA | 95.84% | 98.77% | 99.67% | 98.02% | 99.13% | 99.62% | 91.54% | 98.81% | 99.59% |
| + Synthetic Train / SVM | 95.40% | 98.43% | 99.46% | 97.75% | 99.06% | 99.72% | 92.23% | 98.80% | 99.51% |
| + Synthetic Train / GPCA-SVM | **96.13%** | **98.93%** | **99.78%** | **98.20%** | **99.24%** | **99.79%** | **92.82%** | **99.03%** | **99.64%** |
Table 1: Comparison of classification rates using GPCA-SVM and synthetic training.

**Discussions** We compare the proposed method MCMA (Multilayer Convolution - Multifractal Analysis) with 9 state-of-the-art methods over 50 random splits between training and testing, for different training sizes. Results are presented in Table 2; the best results are in bold⁴. As can be seen, the proposed method outperforms the published results on the 3 datasets. Compared to the leading method [14], our system seems to better handle viewpoint change and non-rigid deformations. This is clearly shown in the results on the **UIUC** dataset, which exhibits strong environmental changes. This result can be expected, as the scattering method builds invariants to translation, rotation and scale changes, which do not include viewpoint change and non-rigid deformations. In contrast, using accurate multifractal statistics, our solution produces descriptors that are invariant to these complex transformations. The proposed system maintains a high performance on the **UMD** dataset. It is worth noting that on this dataset the images are of high resolution (1280 × 960), which gives an advantage over the **UIUC** dataset. However, we did not use the original resolution; rather, we rescaled the images to half-size for faster processing. The high accuracy shows that the proposed multifractal method is able to extract robust invariant statistics even on low-resolution images.

On the large-scale dataset **ALOT**, the proposed method maintains high performance. Recall that this dataset contains 250 classes with 100 samples each. This is a very challenging dataset that evaluates the extra-class decorrelation of the produced descriptors: a robust descriptor should increase the intra-class correlation, but should also decrease the extra-class correlation, which is best evaluated on a large-scale dataset containing as many different classes as possible.
The results on the **ALOT** dataset clearly show a significant performance drop for the leading multifractal method WMFS. The proposed solution in fact outperforms the WMFS method even without synthetic training, as can be seen in Table 1. This demonstrates that the proposed descriptor is able to extract a robust invariant representation.

⁴ Detailed results with standard deviations can be found in the supplementary material.

---PAGE_BREAK---
| | $D_{(5)}^{UIUC}$ | $D_{(10)}^{UIUC}$ | $D_{(20)}^{UIUC}$ | $D_{(5)}^{UMD}$ | $D_{(10)}^{UMD}$ | $D_{(20)}^{UMD}$ | $D_{(10)}^{ALOT}$ | $D_{(30)}^{ALOT}$ | $D_{(50)}^{ALOT}$ |
|---|---|---|---|---|---|---|---|---|---|
| MFS [10] | - | - | 92.74% | - | - | 93.93% | 71.35% | 82.57% | 85.64% |
| OTF-MFS [11] | - | - | 97.40% | - | - | 98.49% | 81.04% | 93.45% | 95.60% |
| WMFS [13] | 93.40% | 97.00% | 97.62% | 93.40% | 97.00% | 98.68% | 82.95% | 93.57% | 96.94% |
| VG-Fractal [9] | 85.35% | 91.64% | 95.40% | - | - | 96.36% | - | - | - |
| Varma [28] | - | - | 98.76% | - | - | - | - | - | - |
| Lazebnik [1] | 91.12% | 94.42% | 97.02% | 90.71% | 94.54% | 96.95% | - | - | - |
| BIF [5] | - | - | 98.80% | - | - | - | - | - | - |
| SRP [7] | - | - | 98.56% | - | - | 99.30% | - | - | - |
| Scattering [14] | 93.30% | 97.80% | 99.40% | 96.60% | 98.90% | 99.70% | - | - | - |
| MCMA | **96.13%** | **98.93%** | **99.78%** | **98.20%** | **99.24%** | **99.79%** | **92.82%** | **99.03%** | **99.64%** |
Table 2: Classification rates on the UIUC, UMD and ALOT datasets.

# 5 Conclusion

This paper presents a fast and accurate texture classification system. The proposed solution builds a locally invariant representation using a multilayer convolution architecture that performs convolutions with a filter bank, applies a pooling operator to increase the local invariance, and repeats the process for various image resolutions. The resulting images are mapped into a stable descriptor via multifractal analysis. We present a new multifractal descriptor that extracts rich texture information from the local singularity exponents. The descriptor is mathematically shown to be invariant to bi-Lipschitz transformations, which include complex environmental changes. The second part of the paper tackles the training part of the recognition system. We propose the GPCA-SVM classifier, which combines the generative PCA classifier with the popular kernel SVMs to achieve higher accuracy. In addition, a simple and efficient "synthetic training" strategy is proposed that consists of synthetically generating more training data by changing the illumination and scale of the training instances. Results outperforming the state of the art are obtained and compared with 9 recent methods on 3 challenging public benchmark datasets, while ensuring high computational efficiency.

# Acknowledgements

Hicham Badri's PhD is funded by an INRIA (Direction of Research) CORDI-S grant. He is pursuing a PhD in co-supervision between INRIA and Mohammed V-Agdal University - LRIT, Associated Unit to CNRST (URAC 29).

# References

1. Lazebnik, S., Schmid, C., Ponce, J.: A sparse texture representation using local affine regions. PAMI **27** (2005) 1265–1278

2. Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C.: Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Comput. Vision **73**(2) (June 2007) 213–238

---PAGE_BREAK---

3.
Varma, M., Zisserman, A.: A statistical approach to material classification using image patch exemplars. PAMI 31(11) (November 2009) 2032–2047

4. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. PAMI 24(7) (July 2002) 971–987

5. Crosier, M., Griffin, L.D.: Texture classification with a dictionary of basic image features. In: CVPR, IEEE Computer Society (2008)

6. Liu, L., Fieguth, P.W.: Texture classification from random features. PAMI 34(3) (2012) 574–586

7. Liu, L., Fieguth, P.W., Kuang, G., Zha, H.: Sorted random projections for robust texture classification. In: ICCV (2011) 391–398

8. Scholkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA (2001)

9. Varma, M., Garg, R.: Locally invariant fractal features for statistical texture classification. In: CVPR, Rio de Janeiro, Brazil (October 2007)

10. Xu, Y., Ji, H., Fermuller, C.: A projective invariant for textures. In: CVPR (2006) 1932–1939

11. Xu, Y., Huang, S.B., Ji, H., Fermuller, C.: Combining powerful local and global statistics for texture description. In: CVPR, IEEE (2009) 573–580

12. Xu, Y., Yang, X., Ling, H., Ji, H.: A new texture descriptor using multifractal analysis in multi-orientation wavelet pyramid. In: CVPR (2010) 161–168

13. Ji, H., Yang, X., Ling, H., Xu, Y.: Wavelet domain multifractal analysis for static and dynamic texture classification. IEEE Transactions on Image Processing 22(1) (2013) 286–299

14. Sifre, L., Mallat, S.: Rotation, scaling and deformation invariant scattering for texture discrimination. In: CVPR (2013)

15. Bruna, J., Mallat, S.: Invariant scattering convolution networks. PAMI 35(8) (August 2013) 1872–1886

16. Falconer, K.: Techniques in Fractal Geometry. Wiley (1997)

17.
Xu, Y., Ji, H., Fermüller, C.: Viewpoint invariant texture description using fractal analysis. Int. J. Comput. Vision 83(1) (June 2009) 85–100

18. Arneodo, A., Bacry, E., Muzy, J.F.: The thermodynamics of fractals revisited with wavelets. Physica A: Statistical and Theoretical Physics 213(1-2) (January 1995) 232–275

19. Turiel, A., del Pozo, A.: Reconstructing images from their most singular fractal manifold. IEEE Trans. Img. Proc. 11(4) (April 2002) 345–350

20. Yahia, H., Turiel, A., Perez-Vicente, C.: Microcanonical multifractal formalism: a geometrical approach to multifractal systems. Part I: singularity analysis. Journal of Physics A: Math. Theor (41) (2008)

21. Turiel, A., Parga, N.: The multifractal structure of contrast changes in natural images: From sharp edges to textures. Neural Computation 12(4) (2000) 763–793

22. Wendt, H., Roux, S.G., Jaffard, S., Abry, P.: Wavelet leaders and bootstrap for multifractal analysis of images. Signal Process. 89(6) (June 2009) 1100–1114

23. Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology 2 (2011) 27:1–27:27

24. UIUC dataset: http://www-cvr.ai.uiuc.edu/once_grp/data/.

25. UMD dataset: http://www.cfar.umd.edu/~fer/website-texture/texture.htm.

26. Burghouts, G.J., Geusebroek, J.M.: Material-specific adaptation of color invariant features. Pattern Recognition Letters 30 (2009) 306–313

27. ALOT dataset: http://staff.science.uva.nl/~aloi/public_alot/.

28. Varma, M.: Learning the discriminative power-invariance trade-off. In: ICCV
(2007)
\ No newline at end of file
diff --git a/samples/texts_merged/7774888.md b/samples/texts_merged/7774888.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b9878b92849f009742a8cbc92fe07675806bfb2
--- /dev/null
+++ b/samples/texts_merged/7774888.md
@@ -0,0 +1,807 @@

---PAGE_BREAK---

# Spectral theory and operator ergodic theory on super-reflexive Banach spaces

by

EARL BERKSON (Urbana, IL)

**Abstract.** On reflexive spaces trigonometrically well-bounded operators have an operator-ergodic-theory characterization as the invertible operators *U* such that

$$ (*) \quad \sup_{n \in \mathbb{N}, z \in \mathbb{T}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) k^{-1} z^k U^k \right\| < \infty. $$

Trigonometrically well-bounded operators permeate many settings of modern analysis, and this note highlights the advances in both their spectral theory and operator ergodic theory made possible by a recent rekindling of interest in the R. C. James inequalities for super-reflexive spaces. When the James inequalities are combined with Young-Stieltjes integration for the spaces $V_p(\mathbb{T})$ of functions having bounded $p$-variation, it transpires that every trigonometrically well-bounded operator on a super-reflexive space $X$ has a norm-continuous $V_p(\mathbb{T})$-functional calculus for a range of values of $p > 1$, and we investigate the ways this outcome logically simplifies and simultaneously expands the structure theory, Fourier analysis, and operator ergodic theory of trigonometrically well-bounded operators on $X$. In particular, on a super-reflexive space $X$ (but not on a general reflexive space) a theorem of Tauberian type holds: the (C, 1) averages in (*) corresponding to a trigonometrically well-bounded operator $U$ can be replaced by the set of all the rotated ergodic Hilbert averages of $U$, which, in fact, is a precompact set relative to the strong operator topology.
This circle of ideas is facilitated by the development of a convergence theorem for nets of spectral integrals of $V_p(\mathbb{T})$-functions. In the Hilbert space setting we apply the foregoing to the operator-weighted shifts which are known to provide a universal model for trigonometrically well-bounded operators on Hilbert space.

## 1. Introduction and notation.

The set of positive integers, the set of all integers, the real line, and the complex plane will be denoted by $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$, respectively. The unit circle $\{z \in \mathbb{C} : |z| = 1\}$ will be designated by $\mathbb{T}$. The symbol “K” with a (possibly empty) set of subscripts will be used to denote a constant which depends only on its subscripts, and which can change in value from one occurrence to another.

2010 Mathematics Subject Classification: Primary 26A45, 46B20, 47A35, 47B40.
Key words and phrases: ergodic Hilbert transform, super-reflexive Banach space, spectral decomposition, p-variation, trigonometrically well-bounded operator.

---PAGE_BREAK---

Except where otherwise indicated, the convergence of a bilateral series $\sum_{k=-\infty}^{\infty} a_k$ will mean the convergence of its sequence of bilateral partial sums $\{\sum_{k=-n}^{n} a_k\}_{n=1}^{\infty}$. Throughout all that follows, $\mathcal{X}$ will be an arbitrary Banach space, and we shall symbolize by $\mathfrak{B}(\mathcal{X})$ the Banach algebra of all continuous linear operators mapping $\mathcal{X}$ into $\mathcal{X}$, the identity operator on $\mathcal{X}$ being denoted by $I$. A trigonometric polynomial will be a linear combination of a finite subset of the functions $\epsilon_n(z) \equiv z^n \ (z \in \mathbb{T}, n \in \mathbb{Z})$. Given a trigonometric polynomial $Q(z) \equiv \sum_n a_n z^n$ and an invertible $T \in \mathfrak{B}(\mathcal{X})$, we shall denote by $Q(T)$ the operator $\sum_n a_n T^n$.
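As a concrete finite-dimensional illustration of this notation (our own sketch, not part of the paper): for a matrix $T$, evaluating $Q(T)$ amounts to summing powers of $T$, with negative powers supplied by the inverse.

```python
import numpy as np

def apply_trig_poly(coeffs, T):
    """Q(T) = sum_n a_n T^n for a trigonometric polynomial Q(z) = sum_n a_n z^n.

    `coeffs` maps each (possibly negative) exponent n to a_n; negative powers
    use the inverse, so T must be invertible.
    """
    T_inv = np.linalg.inv(T)
    Q = np.zeros_like(T, dtype=complex)
    for n, a in coeffs.items():
        Q += a * np.linalg.matrix_power(T if n >= 0 else T_inv, abs(n))
    return Q
```

For instance, with $Q(z) = z + z^{-1}$ and $T$ a diagonal unitary, $Q(T)$ has entries $2\cos\theta$.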
Deferring the precise details from spectral theory to §2, we use this introductory section to fix some notation and to outline our considerations, beginning with the abstract notions of spectral decomposability and spectral integration. An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-bounded ([5]) provided that $U$ has a “unitary-like” spectral representation

$$ (1.1) \qquad U = \int_{0-}^{2\pi} e^{it} dE(t), $$

where $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ is a bounded idempotent-valued function possessing certain additional properties reminiscent of, but weaker than, those that would be inherited from a countably additive Borel spectral measure in $\mathbb{R}$, and where the integral in (1.1) is a Riemann–Stieltjes integral existing in the strong operator topology. After suitable normalization, the idempotent-valued function $E(\cdot)$ in (1.1) is uniquely determined, and is called the *spectral decomposition* of $U$. The spectral decomposition $E(\cdot)$ gives rise to a notion of Riemann–Stieltjes *spectral integration* against the integrator $E(\cdot)$. Spectral integration with respect to $E(\cdot)$ provides the trigonometrically well-bounded operator $U$ with a norm-continuous functional calculus implemented by $BV(\mathbb{T})$, the Banach algebra of all complex-valued functions $\psi$ on $\mathbb{T}$ having bounded variation and furnished with the $BV([0, 2\pi])$-norm of the corresponding function $\psi^\dagger(\cdot) \equiv \psi(e^{i(\cdot)})$.

Trigonometrically well-bounded operators abound in the structures of modern analysis that require weakened forms of orthogonality to treat delicate convergence phenomena beyond the reach of the unconditional convergence associated with spectral measures. For a variety of naturally occurring examples of trigonometrically well-bounded operators, see, e.g., [8], §4 of [10], and [20].
In particular, if $\mathcal{X}$ is a UMD space, then any invertible $U \in \mathfrak{B}(\mathcal{X})$ such that $U$ is power-bounded (that is, $\sup_{n \in \mathbb{Z}} \|U^n\| < \infty$) is trigonometrically well-bounded. For some applications of trigonometrically well-bounded operators to operator ergodic theory and transference methods, see [3], [13], [14], [15], [17], and [18].

Our starting point for this article is the following operator-ergodic-theory characterization of trigonometrically well-bounded operators on an arbitrary reflexive Banach space $\mathcal{X}_0$ (see the equivalence of conditions (i) and (ii) of Theorem (2.4) in [6]).

---PAGE_BREAK---

PROPOSITION 1.1. Let $\mathcal{X}_0$ be a reflexive Banach space, and let $U \in \mathfrak{B}(\mathcal{X}_0)$ be an invertible operator. Then $U$ is trigonometrically well-bounded if and only if

$$ (1.2) \quad \sup \left\{ \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{z^k}{k} U^k \right\| : n \in \mathbb{N}, z \in \mathbb{T} \right\} < \infty. $$

This article features results in both spectral theory and operator ergodic theory made possible by a recent renewal of interest in the consequences of R. C. James' inequalities for super-reflexive Banach spaces. (For these inequalities, see [30]; for the basic notions and fundamental features of super-reflexive spaces, see [31] as well as the celebrated result of P. Enflo in [26], which characterizes super-reflexivity as the property of having an equivalent uniformly convex norm.)
When the James inequalities from [30] are combined with Young's inequalities in [40] for the spaces of functions having bounded $p$-variation on the circle (the $V_p(\mathbb{T})$ spaces), $1 < p < \infty$, it transpires that for every trigonometrically well-bounded operator on a super-reflexive Banach space, spectral integration against its spectral decomposition extends its $BV(\mathbb{T})$-functional calculus to a norm-continuous $V_p(\mathbb{T})$-functional calculus, for a suitable range of values of $p > 1$ (Theorem 3.7 below). One indicator of the scope of this extension is that, in contrast to $BV(\mathbb{T})$, every class $V_p(\mathbb{T})$ contains a continuous, nowhere differentiable function of Hardy-Weierstrass type (see Remark 2.8(ii) below).

The spectral integration of function classes of “higher variation” was initiated in [11], but heretofore has been confined to integrating against the spectral decompositions of: invertible power-bounded operators on classical UMD spaces [19], or invertible operators that are separation-preserving and modulus mean-bounded on reflexive Lebesgue spaces of sigma-finite measures [18]. Consequently, the results below ensuring spectral integration of $V_p(\mathbb{T})$ in the wide setting of super-reflexive spaces markedly expand the scope of spectral integration. Since functions of higher variation act as Fourier multipliers in classical unweighted settings as well as in classical weighted settings (see, e.g., Theorem 8 of [18], Théorème 1 and Lemme 3 of [24]), the spectral integration of the spaces $V_p(\mathbb{T})$ provided by Theorem 3.7 below can be viewed as a mechanism for the transference to super-reflexive spaces of a wide family of classical Fourier multipliers, with ramifications for the Fourier analysis of operators.
In this regard let us recall that in various contexts where the left bilateral shift is a trigonometrically well-bounded operator (with spectral decomposition $\mathcal{E}(\cdot)$, say) on a sequence space, any bounded complex-valued function $f$ which is continuous a.e. on the circle, and such that the spectral integral $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ exists, will act as a Fourier multiplier for the given sequence space, with $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ serving as the multiplier transform of $f$ (p. 16 of [9], Scholium (5.13) of [10], Theorem 4.3 of [16]). Theorem 5.5 below illustrates this point with a new application.

---PAGE_BREAK---

By drawing on §3, the treatment in §4 furnishes a number of pleasant consequences for the operator ergodic theory of trigonometrically well-bounded operators that logically simplify and expand their machinery in the super-reflexive space setting. In particular, if $U$ is a trigonometrically well-bounded operator on a super-reflexive space $X$, then a Tauberian-type theorem holds (Theorem 4.3 below). Specifically, the $(C, 1)$ averages appearing in the uniform boundedness condition (1.2) can be replaced by the rotated ergodic Hilbert averages of $U$:

$$ (1.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\}. $$

In fact, the set $\tilde{\mathcal{W}}$ is precompact relative to $\sigma_X$, the strong operator topology of $\mathfrak{B}(X)$. In the general reflexive space setting, this norm-boundedness of $\tilde{\mathcal{W}}$ need not hold for a trigonometrically well-bounded operator $U$ (see Remark 2.5 below). However, thanks to Hardy's Tauberian Theorem (see, e.g., Theorem II.2.2 in [32]), in the general Banach space setting the set $\tilde{\mathcal{W}}$ corresponding to a power-bounded trigonometrically well-bounded operator is norm-bounded (Theorem (3.21) of [7]).
So the streamlining effect of Theorem 4.3 below is that for boundedness of $\tilde{\mathcal{W}}$, the hypothesis of power-boundedness can be dropped provided the underlying Banach space is super-reflexive. In the realm of Fourier analysis of operators on super-reflexive spaces, this streamlining effect is illustrated below by the strong convergence of the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $BV(\mathbb{T})$-functions (Theorem 4.4). (In this setting, it is further shown that the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $V_p(\mathbb{T})$-functions converge $(C, 1)$ in the strong operator topology (Theorem 4.5 below).) The foregoing circle of ideas is facilitated by the development of a suitable convergence theorem for the spectral integrals of $V_p(\mathbb{T})$-functions (Theorem 3.9 below). + +Since, when taken as a whole, the foregoing results can fail to hold in the general reflexive space setting, it is a pleasant surprise to find them valid throughout the broad context furnished by super-reflexive spaces, which include the UMD spaces ([1], [34]) properly ([22], [35]). In §5, we confine attention to the Hilbert space context by taking up some applications of the foregoing to operator-weighted shifts, which have been shown in [16] to furnish a universal model for estimates regarding trigonometrically well-bounded operators on Hilbert space. +---PAGE_BREAK--- + +In the course of the exchanges during the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis (July 25–31, 2004), Nigel Kalton offered the seminal suggestion that the James inequalities for super-reflexive spaces ([30]) might prove to be a useful tool for advances in spectral integration. The author wishes to thank Nigel Kalton for subsequently informing him of this perceptive viewpoint, which forms the basis for the developments below. 
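To make the set (1.3) concrete, the following toy computation (our own illustration, not from the paper) forms the rotated ergodic Hilbert averages of a diagonal unitary matrix and observes that their operator norms stay bounded as $n$ grows, as the theory predicts for this trivially well-bounded case.

```python
import numpy as np

def hilbert_average(U, z, n):
    """Sum over 0 < |k| <= n of (z^k / k) U^k: one element of the set (1.3)."""
    U_inv = U.conj().T                         # inverse of a unitary matrix
    acc = np.zeros_like(U)
    Up = np.eye(len(U), dtype=complex)         # running power U^k
    Um = np.eye(len(U), dtype=complex)         # running power U^{-k}
    for k in range(1, n + 1):
        Up, Um = Up @ U, Um @ U_inv
        acc += (z**k / k) * Up + (z**(-k) / (-k)) * Um
    return acc

# A diagonal unitary operator; its rotated averages stay uniformly norm-bounded.
theta = np.array([0.3, 1.1, 2.5])
U = np.diag(np.exp(1j * theta))
norms = [np.linalg.norm(hilbert_average(U, np.exp(0.7j), n), 2)
         for n in (10, 100, 1000)]
```

Each diagonal entry is a partial sum of $2i\sum_{k\ge 1}\sin(k\varphi)/k$, whose uniform boundedness in $n$ is the classical scalar case behind the boundedness of $\tilde{\mathcal{W}}$.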
On the heels of the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis, work aimed in the direction of Kalton’s suggestion was carried out in a doctoral dissertation at the University of Edinburgh [21]. This thesis work and the present article spiritually overlap each other in two places, and this state of affairs will be described below in Remark 3.8, where we discuss the anatomy of the present article’s methods.

## 2. Background items.

In this section, we recall requisite notions, starting with the basic machinery of spectral families and their associated spectral integration.

**DEFINITION 2.1.** A *spectral family* in a Banach space $\mathcal{X}$ is an idempotent-valued function $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ with the following properties:

(i) $E(\lambda)E(\tau) = E(\tau)E(\lambda) = E(\lambda)$ if $\lambda \le \tau$;

(ii) $\|E\|_u = \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\} < \infty$;

(iii) with respect to the strong operator topology, $E(\cdot)$ is right continuous and has a left-hand limit $E(\lambda^{-})$ at each point $\lambda \in \mathbb{R}$;

(iv) $E(\lambda) \to I$ as $\lambda \to \infty$ and $E(\lambda) \to 0$ as $\lambda \to -\infty$, each limit being with respect to the strong operator topology.

If, in addition, there exist $a, b \in \mathbb{R}$ with $a \le b$ such that $E(\lambda) = 0$ for $\lambda < a$ and $E(\lambda) = I$ for $\lambda \ge b$, then $E(\cdot)$ is said to be *concentrated on* $[a, b]$.

Given a spectral family $E(\cdot)$ in the Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$, an associated theory of spectral integration can be developed as follows.
For each bounded function $\psi : J \to \mathbb{C}$ and each partition $\mathcal{P} = (\lambda_0, \lambda_1, \dots, \lambda_n)$ of $J$, where we take $\lambda_0 = a$ and $\lambda_n = b$, set + +$$ (2.1) \qquad S(\mathcal{P}; \psi, E) = \sum_{k=1}^{n} \psi(\lambda_k) \{E(\lambda_k) - E(\lambda_{k-1})\}. $$ + +If the net $\{S(\mathcal{P}; \psi, E)\}$ converges in the strong operator topology of $\mathfrak{B}(\mathcal{X})$ as $\mathcal{P}$ runs through the set of partitions of $J$ directed to increase by refinement, then the strong limit is called the *spectral integral* of $\psi$ with respect to $E(\cdot)$, and is denoted by $\int_J \psi(\lambda) dE(\lambda)$ or, more briefly, by $\int_J \psi dE$. +---PAGE_BREAK--- + +In this case, we define $\int_J^\oplus \psi(\lambda) dE(\lambda)$ by writing + +$$\int_J^\oplus \psi(\lambda) dE(\lambda) = \psi(a)E(a) + \int_J \psi(\lambda) dE(\lambda),$$ + +and so $\int_J^\oplus \psi(\lambda) dE(\lambda)$ is the limit in the strong operator topology of the sums + +$$ (2.2) \quad \tilde{S}(\mathcal{P}; \psi, E) = \psi(a)E(a) + \sum_{k=1}^n \psi(\lambda_k)\{E(\lambda_k) - E(\lambda_{k-1})\}. $$ + +It can be shown that the spectral integral $\int_J \psi(\lambda) dE(\lambda)$ exists for each $\psi \in \text{BV}(J)$, and that the mapping + +$$ (2.3) \qquad \psi \mapsto \int_J^\oplus \psi(\lambda) dE(\lambda) $$ + +is an identity-preserving algebra homomorphism of $BV(J)$ into $\mathfrak{B}(\mathcal{X})$ satisfying + +$$ (2.4) \qquad \left\| \int_J^\oplus \psi(t) dE(t) \right\| \le \|\psi\|_{\text{BV}(J)} \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\}, $$ + +where $\|\cdot\|_{\text{BV}(J)}$ denotes the usual Banach algebra norm expressed by + +$$ \|\psi\|_{\text{BV}(J)} = \sup_{x \in J} |\psi(x)| + \text{var}(\psi, J). $$ + +In this connection, we recall here a key oscillation notion for the spectral family $E(\cdot)$ in the arbitrary Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$. 
For each $x \in \mathcal{X}$, and each partition of $[a, b]$, $\mathcal{P} = (a = a_0 < a_1 < \dots < a_N = b)$, we put

$$ \omega(\mathcal{P}, E, x) = \max_{1 \le j \le N} \sup \{\|E(t)x - E(a_{j-1})x\| : a_{j-1} \le t < a_j\}. $$

Now, as $\mathcal{P}$ increases through the set of all partitions of $[a, b]$ directed to increase by refinement, we have (see Lemma 4 of [38])

$$ (2.5) \qquad \lim_{\mathcal{P}} \omega(\mathcal{P}, E, x) = 0. $$

In the setting of the arbitrary Banach space $\mathcal{X}$, one can establish with the aid of (2.5) the following "workhorse" convergence theorem for spectral integrals of $BV(J)$-functions taken with respect to $E(\cdot)$. In the setting of super-reflexive spaces, Theorems 3.9 and 3.11 below show that this convergence theorem has counterparts for functions of higher variation.

**THEOREM 2.2.** Let $\{\psi_\alpha\}_{\alpha \in \mathcal{A}}$ be a net in $BV(J)$, and let $\psi$ be a complex-valued function on $J$ such that

(i) $\sup_{\alpha \in \mathcal{A}} \text{var}(\psi_\alpha, J) < \infty$,

(ii) $\psi_\alpha \to \psi$ pointwise on $J$.

Then $\psi \in \text{BV}(J)$, and $\{\int_J^\oplus \psi_\alpha dE\}_{\alpha \in \mathcal{A}}$ converges to $\int_J^\oplus \psi dE$ in the strong operator topology.

The foregoing basic theory of spectral integration was developed in [38]. We refer the reader to §2 of [7] for a simplified account using the above notation. We shall also consider in connection with the above matters the Banach algebra $\text{BV}(\mathbb{T})$, which consists of all $\psi : \mathbb{T} \to \mathbb{C}$ such that the function $\psi^\dagger(t) = \psi(e^{it})$ belongs to $\text{BV}([0, 2\pi])$, furnished with the norm $\|\psi\|_{\text{BV}(\mathbb{T})} = \|\psi^\dagger\|_{\text{BV}([0, 2\pi])}$. The following notation will come in handy, particularly whenever Fejér's Theorem is invoked.
Given any function $f : \mathbb{R} \to \mathbb{C}$ which has a right-hand limit and a left-hand limit at each point of $\mathbb{R}$, we shall denote by $f^\# : \mathbb{R} \to \mathbb{C}$ the function defined for every $t \in \mathbb{R}$ by putting

$$f^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} f(s) + \lim_{s \to t^-} f(s) \right\}.$$

In the case of a function $\phi : \mathbb{T} \to \mathbb{C}$ such that $\phi(e^{i\cdot}) : \mathbb{R} \to \mathbb{C}$ has everywhere a right-hand limit and a left-hand limit, we shall, by a slight abuse of notation, write

$$ (2.6) \qquad \phi^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} \phi(e^{is}) + \lim_{s \to t^-} \phi(e^{is}) \right\} \quad \text{for all } t \in \mathbb{R}. $$

In particular, for each $\phi \in \text{BV}(\mathbb{T})$, it is clear that we may regard the $(2\pi)$-periodic function $\phi^\#$ as an element of $\text{BV}(\mathbb{T})$. (In general, when there is no danger of confusion, we shall, as convenient, tacitly indulge in the conventional practice of identifying a function $\Psi$ defined on $\mathbb{T}$ with its $(2\pi)$-periodic counterpart $\Psi(e^{i\cdot})$ defined on $\mathbb{R}$.)

**DEFINITION 2.3.** An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-bounded if there is a spectral family $E(\cdot)$ in $\mathcal{X}$ concentrated on $[0, 2\pi]$ such that $U = \int_{[0,2\pi]} e^{i\lambda} dE(\lambda)$. In this case, it is possible to arrange that $E((2\pi)^{-}) = I$, and with this additional property the spectral family $E(\cdot)$ is uniquely determined by $U$, and is called the *spectral decomposition* of $U$.

**REMARK 2.4.** The above discussion regarding (2.3) and (2.4) shows that a trigonometrically well-bounded operator on a Banach space has a norm-continuous $\text{BV}(\mathbb{T})$-functional calculus.
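In finite dimensions, spectral integration reduces to a finite sum over eigenprojections, which makes Definitions 2.1 and 2.3 concrete and easy to experiment with. The following numerical sketch is entirely ours (the matrices, angles, and tolerances are illustrative assumptions, not taken from the text): it builds the spectral family of a diagonalizable matrix with unimodular spectrum and checks that the approximating sums (2.1) for $\psi(\lambda) = e^{i\lambda}$ recover the operator, as Definition 2.3 requires.

```python
import numpy as np

rng = np.random.default_rng(0)
thetas = np.array([0.5, 1.2, 2.0, 4.0])      # spectrum {e^{i theta_j}}, all angles in (0, 2*pi)
S = rng.normal(size=(4, 4)) + np.eye(4)      # a (generically invertible) similarity
Sinv = np.linalg.inv(S)

def E(lam):
    """Spectral family: projection onto the eigenvectors with theta_j <= lam."""
    return S @ np.diag((thetas <= lam).astype(float)) @ Sinv

U = S @ np.diag(np.exp(1j * thetas)) @ Sinv

# Riemann-Stieltjes sum (2.1) over a partition of [0, 2*pi] containing the jump points;
# since every theta_j > 0, E(0) = 0 and the extra term in (2.2) vanishes here.
part = np.unique(np.concatenate([np.linspace(0.0, 2 * np.pi, 200), thetas]))
psi = lambda lam: np.exp(1j * lam)
approx = sum(psi(part[k]) * (E(part[k]) - E(part[k - 1])) for k in range(1, len(part)))

print(np.max(np.abs(approx - U)))   # rounding-error level: the sums recover U
```

Because the partition contains each jump point of $E(\cdot)$ and $\psi$ is evaluated at right endpoints, the sum telescopes exactly to $\sum_j e^{i\theta_j} P_j = U$; refining further changes nothing, mirroring the net convergence in the definition of the spectral integral.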
In the setting of super-reflexive spaces, Theorem 3.7 below will extend this $\text{BV}(\mathbb{T})$-functional calculus to a norm-continuous functional calculus based on functions of appropriately higher variation.

After the development in [4] of an intimately related precursor class (the "well-bounded operators of type (B)"), the class of trigonometrically well-bounded operators was introduced in [5], and its fundamental structural theory further developed in [6]. In the general Banach space setting (resp., in the reflexive space setting described in Proposition 1.1), trigonometrically well-bounded operators can be characterized by the precompactness in the weak operator topology (resp., the uniform boundedness) of the $(C, 1)$ means of their full set of rotated discrete ergodic Hilbert averages. (For the general Banach space case, see Theorem 5.2 of [14].) In order to discuss this recurring theme, it will be convenient to establish a notation for the sequence of trigonometric polynomials underlying it via spectral integration: specifically, for each $n \in \mathbb{N}$ and each $z \in \mathbb{T}$, we write

$$
(2.7) \qquad \mathfrak{s}_n(z) = \sum_{0 < |k| \le n} \frac{z^k}{k}
$$

(thus, $\{\mathfrak{s}_n\}_{n=1}^{\infty}$ is the sequence of partial sums for the Fourier series of $\phi_0 \in \mathrm{BV}(\mathbb{T})$ defined by $\phi_0(1) = 0$ and $\phi_0(e^{it}) = i(\pi - t)$ for $0 < t < 2\pi$). The fact that $\mathrm{var}(\mathfrak{s}_n, \mathbb{T}) \to \infty$ as $n \to \infty$ is a well-known consequence of the properties of the Lebesgue constants (see, e.g., (3.9) of [14]), and renders (2.4) incapable of bounding the sequence $\{\|\mathfrak{s}_n(T)\|\}_{n=1}^{\infty}$ in the case of an arbitrary trigonometrically well-bounded operator on an arbitrary Banach space $\mathcal{X}$.
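The unbounded growth of $\mathrm{var}(\mathfrak{s}_n, \mathbb{T})$ just described can be observed numerically. A minimal sketch, under assumptions of our own choosing (a fine uniform grid, whose summed increments give a lower bound for the true $1$-variation):

```python
import numpy as np

def s_n(n, t):
    # s_n(e^{it}) = sum_{0<|k|<=n} e^{ikt}/k = 2i * sum_{k=1}^n sin(kt)/k
    return sum(2j * np.sin(k * t) / k for k in range(1, n + 1))

t = np.linspace(0.0, 2 * np.pi, 20001)
for n in (8, 64, 512):
    values = s_n(n, t)
    var1 = np.abs(np.diff(values)).sum()   # grid lower bound for var_1(s_n, T)
    print(n, round(float(var1), 2), round(float(var1 / np.log(n)), 2))
```

The first printed column grows without bound as $n$ increases, while the ratio against $\log n$ stays of moderate size, consistent with the logarithmic growth driven by the Lebesgue constants.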
The following remark guarantees that there is no way out of this, even in the setting of a general reflexive Banach space, and this fact serves to underscore the aforementioned felicitous properties which Theorem 4.3 confers on the set $\tilde{\mathcal{W}}$ in (1.3) when the underlying Banach space is super-reflexive.

**REMARK 2.5.** Example (3.1) in [6] exhibits a reflexive Banach space $\mathcal{X}_0$ and a trigonometrically well-bounded operator $T_0 \in \mathfrak{B}(\mathcal{X}_0)$ such that for each trigonometric polynomial $Q$, we have

$$
\|Q(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} = |Q(1)| + \text{var}(Q, \mathbb{T}).
$$

Hence $\|\mathfrak{s}_n(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} \to \infty$ as $n \to \infty$. A noteworthy feature of the reflexive Banach space $\mathcal{X}_0$ used in this example is that, by virtue of [25] (note, e.g., Lemma 1.e.4 in [33]), $\mathcal{X}_0$ cannot be made uniformly convex by equivalent renorming (in view of Corollary 3 of [26], this last can be equivalently restated by saying that the reflexive Banach space $\mathcal{X}_0$ is not super-reflexive).

On a more positive note, we mention here that trigonometrically well-bounded operators do enjoy the following operator-valued variant of Fejér's Theorem (see Theorem (3.10)(i) of [7]). (For a marked improvement on the conclusion of this next theorem in the presence of super-reflexivity, see Theorem 4.4 below.)

**THEOREM 2.6.** Suppose that $U$ is a trigonometrically well-bounded operator on a Banach space $\mathcal{X}$, and $E(\cdot)$ is the spectral decomposition of $U$. Let $f \in \text{BV}(\mathbb{T})$, and let $f^{\#}$ be as in (2.6). Then the series $\sum_{k=-\infty}^{\infty} \hat{f}(k) U^k$ is $(C, 1)$-summable in the strong operator topology; that is, the sequence

$$ \begin{gather*} \left\{ \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) \hat{f}(k) U^k \right\}_{n=1}^{\infty} \text{ converges in the strong operator topology to} \\ \int_{[0,2\pi]}^{\oplus} f^{\#}(t)\, dE(t).
\end{gather*} $$

The centerpiece of our considerations in §3 will be a proof that, in the context of super-reflexivity, spectral integration against $E(\cdot)$ can be extended from BV($\mathbb{T}$) to the broader classes $V_p(\mathbb{T})$ consisting of the functions of bounded $p$-variation, where $p$ ranges over an appropriate subinterval of $(1, \infty)$ (see Theorem 3.7 below). To avoid later digressions, we take up here the definition of the $p$-variation of a function $\psi$.

**DEFINITION 2.7.** Let $J = [a, b]$ be a compact interval of $\mathbb{R}$. For $1 \le p < \infty$, the $p$-variation of a function $\psi: J \to \mathbb{C}$ is specified by writing

$$ \mathrm{var}_p(\psi, [a,b]) = \sup \left\{ \sum_{k=1}^{N} |\psi(x_k) - \psi(x_{k-1})|^p \right\}^{1/p}, $$

where the supremum is extended over all partitions $a = x_0 < x_1 < \dots < x_N = b$ of $[a, b]$.

By definition, the class $V_p(J)$ consists of all functions $\psi: J \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, [a,b]) < \infty$. It is readily verified that $V_p(J)$ becomes a unital Banach algebra under pointwise operations when endowed with the norm $\|\cdot\|_{V_p(J)}$ specified by

$$ \|\psi\|_{V_p(J)} = \sup\{|\psi(x)| : x \in J\} + \mathrm{var}_p(\psi, J). $$

Moreover, if $\psi \in V_p(J)$, then $\lim_{x \to y^+} \psi(x)$ exists for each $y \in [a, b)$ and $\lim_{x \to y^-} \psi(x)$ exists for each $y \in (a, b]$, and the set of discontinuities of $\psi$ in $J$ is countable. It is elementary that $V_1(J)$ and BV$(J)$ consist of the same functions, and also that $V_q(J) \subseteq V_r(J)$ when $1 \le q \le r < \infty$, since $\|\psi\|_{V_p(J)}$ is a decreasing function of $p$. For additional fundamental features of $V_p(J)$, see, e.g., §2 in [11].
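The supremum in Definition 2.7 can be computed exactly for a function known only at finitely many points: one maximizes over all subsequences containing the two endpoints, which a quadratic-time dynamic program handles. The following small illustration is our own (names and sample data are invented for the demonstration); it also exhibits two facts noted above: for $p > 1$ a coarser partition can beat a finer one, and $\mathrm{var}_p$ decreases as $p$ increases.

```python
import numpy as np

def var_p(xs, p):
    """Exact p-variation of the finite sample xs: sup over all subsequences
    that contain the first and last point, via O(N^2) dynamic programming."""
    best = [-np.inf] * len(xs)
    best[0] = 0.0
    for j in range(1, len(xs)):
        for i in range(j):
            best[j] = max(best[j], best[i] + abs(xs[j] - xs[i]) ** p)
    return best[-1] ** (1.0 / p)

zigzag = [0, 1, 0, 1, 0, 1]
print(var_p(zigzag, 1))       # 5.0: every jump counts when p = 1
print(var_p(zigzag, 2))       # ~2.236 = sqrt(5): var_p decreases as p increases
print(var_p([0, 0.5, 1], 2))  # 1.0: dropping the middle point beats the full partition
```

The last line is the reason the dynamic program is needed at all for $p > 1$: since $(a+b)^p \ge a^p + b^p$ for $a, b \ge 0$, refining a partition along a monotone run can only lower the sum, so the optimal partition is generally a strict subsequence of the sample.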
For $\psi: \mathbb{T} \to \mathbb{C}$, we define $\mathrm{var}_p(\psi, \mathbb{T})$ to be $\mathrm{var}_p(\psi(e^{i\cdot}), [0, 2\pi])$, and we designate by $V_p(\mathbb{T})$ the class consisting of all functions $\psi: \mathbb{T} \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, \mathbb{T}) < \infty$. With pointwise operations on $\mathbb{T}$, $V_p(\mathbb{T})$ likewise becomes a unital Banach algebra when furnished with the norm

$$ \|\psi\|_{V_p(\mathbb{T})} = \|\psi(e^{i\cdot})\|_{V_p([0,2\pi])} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \mathrm{var}_p(\psi, \mathbb{T}). $$

**REMARK 2.8.** (i) For $1 \le p < \infty$ and $\psi: \mathbb{T} \to \mathbb{C}$, there is also a rotation-invariant notion for the $p$-variation of $\psi$ on $\mathbb{T}$, which serves as an alternative to $\mathrm{var}_p(\psi, \mathbb{T})$ defined above. Specifically, we can define

$$ \nu_p(\psi, \mathbb{T}) = \sup \left\{ \sum_{k=1}^{N} |\psi(e^{it_k}) - \psi(e^{it_{k-1}})|^p \right\}^{1/p}, $$

where the supremum is taken over all finite sequences $-\infty < t_0 < t_1 < \dots < t_N = t_0 + 2\pi < \infty$. It is evident that

$$ (2.8) \qquad \mathrm{var}_p(\psi, \mathbb{T}) \le \nu_p(\psi, \mathbb{T}) \le 2 \mathrm{var}_p(\psi, \mathbb{T}), $$

and that $\nu_1(\psi, \mathbb{T}) = \mathrm{var}_1(\psi, \mathbb{T})$. Moreover, for $1 \le p < \infty$, $V_p(\mathbb{T})$ is also a unital Banach algebra under the norm $\|\cdot\|_{\nu_p(\mathbb{T})}$ given by

$$ \|\psi\|_{\nu_p(\mathbb{T})} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \nu_p(\psi, \mathbb{T}), $$

which, by virtue of (2.8), is obviously equivalent to the Banach algebra norm $\|\cdot\|_{V_p(\mathbb{T})}$ defined above. (When convenient, we shall use the equivalence of the norms $\|\cdot\|_{\nu_p(\mathbb{T})}$ and $\|\cdot\|_{V_p(\mathbb{T})}$ without comment.)
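The first inequality in (2.8) can be strict: anchoring partitions at $z = 1$ splits any monotone run of $\psi(e^{i\cdot})$ that crosses $1$, and $\psi(z) = \operatorname{Im} z$ already separates the two quantities for $p = 2$. A numerical sketch of our own (the grid size, the set of shifts, and the dynamic program are our choices; the shifted maximum only approximates $\nu_p$ from below):

```python
import numpy as np

def var_p_seq(xs, p):
    # exact p-variation of a finite sample (sup over subsequences containing
    # both endpoints), via O(N^2) dynamic programming
    best = [-np.inf] * len(xs)
    best[0] = 0.0
    for j in range(1, len(xs)):
        for i in range(j):
            best[j] = max(best[j], best[i] + abs(xs[j] - xs[i]) ** p)
    return best[-1] ** (1.0 / p)

p, N = 2, 201
grid = np.linspace(0.0, 2 * np.pi, N)
anchored = var_p_seq(np.sin(grid), p)             # approximates var_p(Im z, T)
rotated = max(var_p_seq(np.sin(t0 + grid), p)     # approximates nu_p(Im z, T) from below
              for t0 in np.linspace(0.0, 2 * np.pi, 16, endpoint=False))
print(anchored, rotated)   # ~ sqrt(6) and ~ sqrt(8): var_p < nu_p <= 2 var_p
```

Starting the window at an extremum of $\sin$ merges the two half-runs adjacent to $z = 1$ into full runs of height $2$, raising the sum of squared increments from $1 + 4 + 1$ to $4 + 4$, in accordance with (2.8).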
Straightforward application of the Generalized Minkowski Inequality shows that if $F \in L^1(\mathbb{T})$ and $\psi \in V_p(\mathbb{T})$, then the convolution $F * \psi$ belongs to $V_p(\mathbb{T})$, with

$$ (2.9) \qquad \|F * \psi\|_{V_p(\mathbb{T})} \le \|F\|_{L^1(\mathbb{T})} \|\psi\|_{\nu_p(\mathbb{T})} \le 2 \|F\|_{L^1(\mathbb{T})} \|\psi\|_{V_p(\mathbb{T})}. $$

(ii) It is worth noting here that if $1 < q < \infty$, then $\bigcup_{1 \le p < q} V_p(\mathbb{T})$ is not dense in $V_q(\mathbb{T})$. To see this, first note that if $1 \le p < \infty$ and $f \in V_p(\mathbb{T})$, then, in the notation of [29], we have $f \in \Lambda_p$. This is a standard inclusion, established for $p=1$ in Lemma 9 of [29], and for $1 < p < \infty$ on pages 259 and 260 of [40] (nowadays this inclusion for $1 < p < \infty$ is also transparent via, e.g., Theorem 3.1 of [23]). Hence Lemma 11 of [29] shows that $\{\hat{f}(k)\}_{k=-\infty}^{\infty}$, the sequence of Fourier coefficients of $f$, satisfies

$$ (2.10) \qquad \sup\{|k|^{1/p}|\hat{f}(k)| : k \in \mathbb{Z}\} < \infty. $$

In view of this, we can define for $1 \le p < \infty$ the linear mapping $\mathfrak{T}_p : V_p(\mathbb{T}) \to \ell^\infty(\mathbb{Z})$ by writing $\mathfrak{T}_p(f) = \{|k|^{1/p} \hat{f}(k)\}_{k=-\infty}^{\infty}$. It follows via the Closed Graph Theorem that $\mathfrak{T}_p$ is continuous, and so the following set $\mathcal{N}_p(\mathbb{T})$, which coincides with $(\mathfrak{T}_p)^{-1}(c_0(\mathbb{Z}))$, is a closed subspace of $V_p(\mathbb{T})$:

$$ \mathcal{N}_p(\mathbb{T}) = \{g \in V_p(\mathbb{T}) : |k|^{1/p} \hat{g}(k) \to 0 \text{ as } |k| \to \infty\}. $$

It is clear from (2.10) that $\bigcup_{1 \le p < q} V_p(\mathbb{T}) \subseteq \mathcal{N}_q(\mathbb{T})$.
However, $F_q$, Hardy's $(2\pi)$-periodic, Weierstrass-type, continuous, nowhere differentiable function from [28], which is specified by

$$ F_q(t) = \sum_{n=0}^{\infty} 2^{-n/q} \cos(2^n t) \quad \text{for all } t \in \mathbb{R}, $$

belongs to $\mathrm{Lip}_{1/q}(\mathbb{R})$ by 1.33 of [28], and hence its restriction $F_q|_{[0, 2\pi]}$ can be regarded as belonging to $V_q(\mathbb{T})$. It is clear that for each non-negative integer $n$,

$$ 2^{n/q} \widehat{F}_q(2^n) = \frac{1}{2}, $$

whence $F_q|_{[0, 2\pi]}$ does not belong to $\mathcal{N}_q(\mathbb{T})$. (Compare (9.4) of [40].)

If we replace absolute values by norms in the foregoing definitions of $p$-variation, we arrive at the corresponding definitions for vector-valued functions. Furthermore, for a vector-valued function $f$ defined on $\mathbb{R}$ (including the scalar-valued case), the standard counterpart for $\mathbb{R}$ of $p$-variation is given by

$$ \operatorname{var}_p(f, \mathbb{R}) = \sup_{-\infty < a < b < \infty} \operatorname{var}_p(f, [a, b]). $$

If $E(\cdot)$ is a spectral family of projections in an arbitrary Banach space $\mathcal{X}$, and $1 \le p < \infty$, we shall also use the symbol $\operatorname{var}_p(E)$ to denote

$$ \sup\{\operatorname{var}_p(E(\cdot)x, \mathbb{R}) : \|x\| \le 1\}. $$

**3. Super-reflexivity and spectral integration of $V_p(\mathbb{T})$ with $p > 1$.**

For extensive details and terminology regarding the structure theory of super-reflexive spaces, we refer the interested reader to, e.g., Part 4 of [2]. One of R. C. James' inequalities for super-reflexive spaces (Theorem 3 of [30]) states the following.

**THEOREM 3.1.** Let $X$ be a super-reflexive Banach space.
If $\phi$ and $K$ are real numbers such that

$$ 0 < 2\phi < 1/K \le 1, $$

then there is $q = q(X, \phi, K) \in (1, \infty)$ such that for any normalized basic sequence $\{y_j\}$ in $X$ with basis constant not exceeding $K$, we have

$$ (3.1) \qquad \phi\left\{\sum_j |a_j|^q\right\}^{1/q} \le \left\|\sum_j a_j y_j\right\|, $$

for all scalar sequences $\{a_j\}$ such that $\sum_j a_j y_j$ converges.

In the context of a spectral family of projections in a super-reflexive space, James's Theorem 3.1 above readily specializes so as to take the following form.

**PROPOSITION 3.2.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, and $\phi$ is a real number satisfying

$$ (3.2) \qquad 0 < \phi < \frac{1}{4\|E\|_u}, $$

then there is a real number $q = q(X, \phi, \|E\|_u) \in (1, \infty)$ such that

$$ (3.3) \qquad \operatorname{var}_q(E) \le \frac{2\|E\|_u}{\phi}. $$

*Proof.* Let $x \in X \setminus \{0\}$, and suppose that $-\infty < \lambda_0 < \lambda_1 < \dots < \lambda_N < \infty$. Let $\{z_j\}_{j=1}^M$ be the basic sequence consisting of all non-zero terms extracted from $\{\{E(\lambda_k) - E(\lambda_{k-1})\}x\}_{k=1}^N$, let $\{y_j\}_{j=1}^M$ be the normalized basic sequence $\{z_j/\|z_j\|\}_{j=1}^M$ (whose basis constant clearly does not exceed $2\|E\|_u$), and let $\{a_j\}_{j=1}^M$ be the sequence of real numbers $\{\|z_j\|\}_{j=1}^M$. Then, in the present context, (3.1) becomes the desired conclusion (3.3), since the sum in the majorant of (3.1) telescopes here.
■

Since we shall not require any specificity for the roles played by the constants $\phi$, $\|E\|_u$, and $q = q(X, \phi, \|E\|_u)$ in Proposition 3.2, we include here the following condensed version (which can also be derived directly from Proposition IV.II.3 on pages 249–250 of [2] by similar reasoning to that above, after using the equivalent renorming of $X$ specified by $\|x\|' = \sup_{-\infty < \lambda < \infty} \|E(\lambda)x\|$).

**PROPOSITION 3.3.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, then there is $q \in (1, \infty)$ such that $\operatorname{var}_q(E) < \infty$.

We shall also need the following fundamental theorem of Young–Stieltjes integration.

**THEOREM 3.4.** Suppose that $J = [a, b]$ is a compact interval of $\mathbb{R}$, that $p > 1$ and $q > 1$ satisfy $p^{-1} + q^{-1} > 1$, and that $f \in V_p(J)$, $g \in V_q(J)$ have no common discontinuities. Then the Riemann–Stieltjes integral $\int_a^b f(t) dg(t)$ exists and obeys the estimate

$$ \left| \int_a^b f(t) dg(t) \right| \le \left\{ 1 + \zeta \left( \frac{1}{p} + \frac{1}{q} \right) \right\} \|f\|_{V_p(J)} \operatorname{var}_q(g, J). $$

*(Here $\zeta$ designates the Riemann zeta function specified by $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for $s > 1$.)*

**THEOREM 3.5.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3, so that $\operatorname{var}_q(E) < \infty$. Let $u \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$. Then, in terms of the notation of (2.6), for every $f \in \text{BV}(\mathbb{T})$ we have

$$ (3.4) \quad \left\| \int_{[0,2\pi]}^\oplus f^{\#}(t) dE(t) \right\| \le 3 \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \|f\|_{V_u(\mathbb{T})} \operatorname{var}_q(E). $$

*Proof.* Here and henceforth we denote by $\{\kappa_n\}_{n=0}^\infty$ the Fejér kernel for $\mathbb{T}$,

$$ \kappa_n(z) = \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) z^k. $$

Clearly $u^{-1} + q^{-1} > 1$. For $f \in \text{BV}(\mathbb{T})$, each trigonometric polynomial $\kappa_n * f$ is in $BV(\mathbb{T}) \subseteq V_u(\mathbb{T})$, with

$$
\|\kappa_n * f\|_{BV(\mathbb{T})} \leq \|f\|_{BV(\mathbb{T})}.
$$

For the integral

$$
\int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dx^*(E(t)x)
$$

(which automatically exists for arbitrary $x \in X$, and $x^*$ in the dual space $X^*$), we now apply Theorem 3.4 to the pair of functions $\kappa_n * f \in V_u(\mathbb{T})$ and $x^*(E(\cdot)x) \in V_q([0, 2\pi])$ to obtain the estimate

$$
\left| \int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dx^*(E(t)x) \right|
\le \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \mathrm{var}_q(E) \|x\| \|x^*\|,
$$

and consequently for each $n$, we see with the aid of this last estimate that

$$
(3.5) \quad \left\| \int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dE(t) \right\|
\le \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E)
\le 2 \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E).
$$

Since $\{\kappa_n * f\}_{n=0}^\infty$ converges pointwise to $f^\#$ on $\mathbb{T}$ while its terms have uniformly bounded $1$-variations, we can infer via Theorem 2.2 above that, in the strong operator topology,

$$
\int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dE(t) \to \int_{[0,2\pi]} f^\#(t) dE(t).
$$

Hence (3.5) shows that (3.4) holds. ■

In order to pass from the estimate in (3.4) for the spectral integral of $f^\#$ when $f \in BV(\mathbb{T})$ to the spectral integration of $V_p(\mathbb{T})$-functions, we shall need to rely on the following exemplar of the tools which spectral integration furnishes for such situations.

**THEOREM 3.6.** Suppose that $U$ is a trigonometrically well-bounded operator on an arbitrary Banach space $\mathcal{X}$, $E(\cdot)$ is the spectral decomposition of $U$, and $1 < u < \infty$.
Suppose further that there is a constant $\tau$ such that

$$
(3.6) \quad \left\| \int_{[0,2\pi]}^{\oplus} \psi^{\#}(t) dE(t) \right\| \leq \tau \|\psi\|_{V_u(\mathbb{T})} \quad \text{for all } \psi \in BV(\mathbb{T}).
$$

Then if $1 \le p < u$, the spectral integral $\int_{[0,2\pi]} \phi(e^{it}) dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(\mathcal{X})$ such that

$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t) \right\| \leq \tau K_{p,u} \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}), $$

where $K_{p,u}$ is a constant depending only on $p$ and $u$.

*Proof*. A demonstration of the current theorem can readily be modeled after the proof of Theorem 2.1 in [11] by replacing the Fourier multiplier norm estimate in Proposition 2.3 et seq. of [11] by the present hypothesis (3.6). Alternatively, one can extract key elements of a proof for the current theorem by making suitable modifications to the reasoning for its Marcinkiewicz power-classes counterpart in Theorem 12 of [18]. ■

By taking $u = 2^{-1}(p+q')$ in Theorem 3.5 while combining Theorems 3.5 and 3.6, we arrive at the following principal result, which guarantees spectral integration of $V_p(\mathbb{T})$ spaces in the presence of super-reflexivity, and thereby extends to each $V_p(\mathbb{T})$ space, throughout an appropriate range of $p > 1$, the BV$(\mathbb{T})$-functional calculus for trigonometrically well-bounded operators.

**THEOREM 3.7.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3, so that $\operatorname{var}_q(E) < \infty$. Let $p \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$.
Then the spectral integral $\int_{[0,2\pi]} \phi(e^{it}) dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(X)$ such that

$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t) \right\| \leq K_{p,q} \text{var}_q(E) \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}). $$

**REMARK 3.8.** (i) As already indicated above, from both a conceptual and historical standpoint, Proposition 3.2 (along with its abbreviated version in Proposition 3.3) can best be viewed as the immediate specialization to spectral families of James' celebrated estimate for super-reflexive spaces here quoted as Theorem 3.1. On the basis of extensive calculations aided by [30], Theorem 2.1 of [21] asserts what amounts to Proposition 3.2 above. The reasoning devoted to Theorem 2.1 in [21] occurs there on pp. 14–28, 31, with the following description on page 23: "The proof of Theorem 2.1 is rather involved, and requires several technical results".

(ii) Some generic spectral integration tool for the general Banach space setting, such as Theorem 3.6, seems to be required for the transition from Proposition 3.2 and the fundamental theorem of Young–Stieltjes integration reproduced in Theorem 3.4 in order to arrive at Theorem 3.7. The reasoning offered for Theorem 4.1 in [21], which purports to establish the same result as Theorem 3.7 above without such a transitional tool, is flawed, primarily because it rests on the false premise that $V_1(\mathbb{T})$ is norm-dense in $V_p(\mathbb{T})$ when $1 < p < \infty$, in contradiction to the result in Remark 2.8(ii) above.

We now proceed to associate with Theorem 3.7 a useful convergence theorem for appropriate nets of spectral integrals in the context of super-reflexivity.
This (as well as Theorem 3.11 below) furnishes the promised extension of Theorem 2.2 to functions of higher variation.

**THEOREM 3.9.** Assume the hypotheses on $X$, $E(\cdot)$, $U$, and $q$ of Theorem 3.7, and let $p \in (1, q')$. Suppose that $\{g_\beta\}_{\beta \in B}$ is a net of mappings from $\mathbb{T}$ into $\mathbb{C}$ satisfying

$$ (3.7) \qquad \rho \equiv \sup\{\operatorname{var}_p(g_\beta, \mathbb{T}) : \beta \in B\} < \infty, $$

and such that for each $\beta \in B$ and each $t_0 \in \mathbb{R}$,

$$ (3.8) \qquad \lim_{t \to t_0^-} g_\beta(e^{it}) = g_\beta(e^{it_0}). $$

Suppose further that $\{g_\beta\}_{\beta \in B}$ converges pointwise on $\mathbb{T}$ to a complex-valued function $g$. Then $g \in V_p(\mathbb{T})$, and the net

$$
\left\{ \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t) \right\}_{\beta \in B}
$$

converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)$.

*Proof.* Clearly, $\operatorname{var}_p(g, \mathbb{T}) \le \rho < \infty$. Choose $q_1$ so that $1 < q < q_1 < \infty$ and $p^{-1} + q_1^{-1} > 1$. Fix $x \in X \setminus \{0\}$, let $\varepsilon > 0$ be given, and use (2.5) to infer that $[0, 2\pi]$ has a partition $\mathcal{P}_\varepsilon = (0 = t_0 < t_1 < \dots < t_J = 2\pi)$ such that

$$
(3.9) \qquad \omega(\mathcal{U}, E, x) < \varepsilon \quad \text{for any refinement } \mathcal{U} \text{ of } \mathcal{P}_{\varepsilon}.
$$

For an arbitrary pair of refinements of $\mathcal{P}_\varepsilon$, say $\mathcal{P} = (0 = a_0 < a_1 < \dots < a_N = 2\pi)$, $\mathcal{Q} = (0 = b_0 < b_1 < \dots < b_M = 2\pi)$, and for any $\beta \in B$, we shall now consider the following two sums:

$$
S_1 \equiv \sum_{j=1}^{N} E(a_{j-1})x\{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\},
$$

$$
S_2 \equiv \sum_{m=1}^{M} E(b_{m-1})x\{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\}.
$$

For $1 \le \nu \le J$, let $I_\nu = [y_\nu, z_\nu]$ be the rightmost subinterval of $\mathcal{P}$ contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$, and let $S'_1$ denote the sum $S_1$ after the replacement of the terms $E(y_\nu)x\{g_\beta(e^{iz_\nu}) - g_\beta(e^{iy_\nu})\}$, $1 \le \nu \le J$, by corresponding terms $E(y_\nu)x\{g_\beta(e^{iz'_\nu}) - g_\beta(e^{iy_\nu})\}$, where $y_\nu < z'_\nu < z_\nu$, $1 \le \nu \le J$. Moreover, we can choose these points $z'_\nu$, $1 \le \nu \le J$, so that we can similarly form $S'_2$ from $S_2$ by truncating to the same right end-point $z'_\nu$ the rightmost interval in the string of subintervals of $\mathcal{Q}$ contained in each $[t_{\nu-1}, t_\nu]$. In terms of this notation, we can write

$$S'_1 - S'_2 = \sum_{\nu=1}^{J} (\Omega_{\nu} - \Lambda_{\nu}),$$

where, for $1 \le \nu \le J$, $\Omega_\nu$ (resp., $\Lambda_\nu$) represents the contribution to $S'_1$ (resp., $S'_2$) of the string of intervals that are contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$. Provided that the pair of reciprocal indices involved has sum exceeding $1$ (as is true here for $q_1^{-1}, p^{-1}$), the reasoning leading up to and including Young's estimate (6.4) in [40] can be applied to any pair of qualifying functions such that one is vector-valued and the other is scalar-valued (a quick way to see this is to apply temporarily an arbitrary continuous linear functional, then invoke directly the results in [40] for a pair of scalar-valued functions, and then revert to norms in the ultimate vector-valued expressions).
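The parenthetical scalarization device can be recorded explicitly. In ad hoc notation of our own (with $\Delta_k\psi = \psi(t_k) - \psi(t_{k-1})$ the scalar increments, $\Phi$ the $\mathcal{X}$-valued function, and $C$ standing for whatever constant the scalar Young estimate produces), the Hahn–Banach theorem gives, for any Riemann–Stieltjes-type sum,

$$
\Bigl\| \sum_k \Phi(s_k)\,\Delta_k\psi \Bigr\|_{\mathcal{X}}
= \sup_{\|x^*\| \le 1} \Bigl| \sum_k (x^*\Phi)(s_k)\,\Delta_k\psi \Bigr|
\le \sup_{\|x^*\| \le 1} C \operatorname{var}_{q_1}(x^*\Phi, J) \operatorname{var}_p(\psi, J)
\le C \operatorname{var}_{q_1}(\Phi, J) \operatorname{var}_p(\psi, J),
$$

since $\operatorname{var}_{q_1}(x^*\Phi, J) \le \|x^*\| \operatorname{var}_{q_1}(\Phi, J)$ for every $x^* \in \mathcal{X}^*$. Thus any estimate established for pairs of scalar-valued functions passes verbatim to the mixed vector-scalar setting.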
Applying Young's estimate (6.4), and then the technique in (10.8) of [40], together with (3.9) above, we can infer that for $1 \le \nu \le J$ we have, in terms of the Riemann zeta function $\zeta$,

$$
(3.10) \quad \left\| \Omega_{\nu} - \Lambda_{\nu} \right\| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_{q}^{q/q_1}(E(\cdot)x, [t_{\nu-1}, t_{\nu}]) \operatorname{var}_{p}(g_{\beta}, [t_{\nu-1}, t_{\nu}]).
$$

Summing the estimates in (3.10) from $\nu = 1$ to $J$, and then applying Hölder's inequality (for the pair of indices $q_1, p$) to the resulting majorant, we find that

$$
(3.11) \quad \| S'_{1} - S'_{2} \| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi]) \operatorname{var}_p(g_\beta, \mathbb{T}).
$$

If in the sums $S'_1$ and $S'_2$ we now let each $z'_\nu$ approach from the left the corresponding point $t_\nu$, then (3.8) gives

$$
(3.12) \quad \left\| \sum_{j=1}^{N} E(a_{j-1})x \{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\} - \sum_{m=1}^{M} E(b_{m-1})x \{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\} \right\| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1 + \zeta(q_1^{-1} + p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi])\rho.
$$

For notational convenience, let us denote by $\delta_\varepsilon$ the majorant in (3.12), while keeping in mind that $\delta_\varepsilon \to 0$ as $\varepsilon \to 0^+$. After a summation by parts is performed on each of the vector-valued sums appearing in the minorant of (3.12), we find that, in the notation of (2.2), the estimate (3.12) can be rewritten as follows:

$$
(3.13) \quad \| \tilde{\mathcal{S}}(\mathcal{P}; g_\beta(e^{i\cdot}), E) x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E) x \| \le \delta_\varepsilon.
$$

Upon letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in (3.13), while simultaneously holding fixed both the arbitrary refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$ and the arbitrary $\beta \in B$, we get

$$
(3.14) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_\beta(e^{it}) dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E)x \right\| \le \delta_\varepsilon.
$$

Next, while holding $\mathcal{P}$, $\mathcal{Q}$ fixed in (3.13), we let $\beta$ run through $B$ to obtain, via the pointwise convergence on $\mathbb{T}$,

$$
\|\tilde{\mathcal{S}}(\mathcal{P}; g(e^{i\cdot}), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x\| \le \delta_\varepsilon.
$$

Letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in this estimate yields, for every refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$,

$$
\left\| \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x \right\| \leq \delta_{\varepsilon}.
$$

Combining this estimate with (3.14), we find that for all $\beta \in B$, and every refinement $\mathcal{Q}$ of $\mathcal{P}_{\varepsilon}$,

$$
(3.15) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x \right\| \le 2\delta_{\varepsilon} + \| \tilde{\mathcal{S}}(\mathcal{Q}; g_{\beta}(e^{i\cdot}), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x \|.
$$

In (3.15), we now specialize $\mathcal{Q}$ to be $\mathcal{P}_{\varepsilon}$, and we see from the pointwise convergence of $\{g_{\beta}\}_{\beta \in B}$ to $g$ on $\mathbb{T}$ that for all sufficiently large $\beta \in B$,

$$
\left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x \right\| \le 3\delta_{\varepsilon}.
\blacksquare
$$

**REMARK 3.10.** Our treatment of the spectral integration of functions of higher variation emphasizes applications thereof to a unified framework of trigonometrically well-bounded operators and related periodic functions. For this purpose $[0, 2\pi]$ conveniently serves as the fundamental interval. It is worth noting, however, that the above Theorems 3.7 and 3.9 do not need to be tied directly to trigonometrically well-bounded operators, since they readily imply their analogues for spectral families concentrated on arbitrary intervals by using simple affine changes of the real variable (e.g., mapping $[0, 2\pi]$ onto an interval $J = [a, b]$). The outcome, which includes an extension of the BV($J$)-functional calculus induced by spectral families (2.3), can be stated as follows.

**THEOREM 3.11.** Let $E(\cdot)$ be a spectral family of projections in a super-reflexive Banach space $X$. Suppose that $E(\cdot)$ is concentrated on a compact interval $J = [a, b]$, and let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3, so that $\text{var}_q(E) < \infty$. Let $p \in (1, q')$. Then the spectral integral $\int_J \Phi dE$ exists for each $\Phi \in V_p(J)$, and the mapping $\Phi \in V_p(J) \mapsto \int_J^\oplus \Phi dE$ is a continuous identity-preserving homomorphism of the Banach algebra $V_p(J)$ into the Banach algebra $\mathfrak{B}(X)$ such that

$$ \left\| \int_J^\oplus \Phi dE \right\| \le K_{p,q} \operatorname{var}_q(E) \| \Phi \|_{V_p(J)} \quad \text{for all } \Phi \in V_p(J).
$$ + +If $\{\Phi_\beta\}_{\beta \in B}$ is a net of mappings from $J$ into $\mathbb{C}$ satisfying + +$$ \sup\{\operatorname{var}_p(\Phi_\beta, J) : \beta \in B\} < \infty, $$ + +and such that for each $\beta \in B$, and each $t_0 \in (a, b]$, + +$$ \lim_{t \to t_0^-} \Phi_\beta(t) = \Phi_\beta(t_0), $$ + +and if $\{\Phi_\beta\}_{\beta \in B}$ converges pointwise on $J$ to a complex-valued function $\Phi$, then $\Phi \in V_p(J)$, and the net + +$$ \left\{ \int_J^\oplus \Phi_\beta dE \right\}_{\beta \in B} $$ + +converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_J^\oplus \Phi dE$. + +**4. Some consequences.** The stage is almost set for the main result of this section (Theorem 4.3), which will establish the precompactness relative to the strong operator topology of the set of rotated Hilbert averages $\tilde{W}$ corresponding to a trigonometrically well-bounded operator $U$ on a super-reflexive space. In order to obtain this result, we shall also require the following two auxiliary items from the literature. + +**PROPOSITION 4.1.** Suppose that $1 \le p < \infty$. Then we have, for the sequence of trigonometric polynomials $\{s_n\}_{n=1}^\infty$ in (2.7), + +$$ (4.1) \qquad \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if and only if} \quad p > 1. $$ + +*Proof*. Since, as was noted in conjunction with (2.7), $\operatorname{var}_1(s_n, \mathbb{T}) \to \infty$ as $n \to \infty$, it suffices to have + +$$ \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if } p > 1. $$ + +The derivation of this is included in §12 of the article [40]. 
■

In view of this, the set $\mathfrak{S}$ consisting of all rotates of $\{s_n : n \in \mathbb{N}\}$ must satisfy

$$ (4.2) \qquad \sup_{n \in \mathbb{N},\, z \in \mathbb{T}} \|s_n((\cdot)z)\|_{V_p(\mathbb{T})} < \infty \quad \text{if } p > 1, $$

by virtue of (2.8), and because $\{s_n\}_{n=1}^\infty$ is the sequence of partial sums for the Fourier series of a BV($\mathbb{T}$)-function, whence

$$ \sup_{n \in \mathbb{N}} \|s_n\|_{L^\infty(\mathbb{T})} < \infty. $$
---PAGE_BREAK---

The second auxiliary item we shall rely on is the following convenient formulation of the “Helly Selection Theorem for Functions of Bounded p-Variation” (Theorem 2.4 of [36]). (Although it will not be an issue for us, we note that in the parlance of [36], the symbol $\text{var}_p$ denotes what is, in the sense of our notation, $\text{var}_p^p$.)

**THEOREM 4.2.** Let $\mathcal{F}$ be a sequence of functions mapping a subset $\mathcal{M}$ of $\mathbb{R}$ to a metric space $\mathcal{Y}$, and such that, for some $p \in [1, \infty)$, $\mathcal{F}$ has uniformly bounded $p$-variation on $\mathcal{M}$ (in symbols, $\sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty$). Suppose further that for each $t \in \mathcal{M}$, $\{F(t) : F \in \mathcal{F}\}$ has compact closure in $\mathcal{Y}$. Then $\mathcal{F}$ has a subsequence $\{f_n\}_{n=1}^\infty$ pointwise convergent on $\mathcal{M}$ to a function $f : \mathcal{M} \to \mathcal{Y}$ such that

$$ \text{var}_p(f, \mathcal{M}) \leq \sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty.
$$ + +**THEOREM 4.3.** If $U$ is a trigonometrically well-bounded operator on a super-reflexive Banach space $X$, then the closure, relative to the strong operator topology, of the class $\tilde{\mathcal{W}}$ specified in (1.3) by + +$$ (4.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\} $$ + +is compact in the strong operator topology, and hence, in particular, + +$$ (4.4) \qquad \sup\{\|T\| : T \in \tilde{\mathcal{W}}\} < \infty. $$ + +Conversely, if $\mathcal{X}_0$ is a reflexive Banach space, and $U \in \mathfrak{B}(\mathcal{X}_0)$ is an invertible operator such that (4.4) holds, then $U$ is trigonometrically well-bounded. + +*Proof.* Let $E(\cdot)$ be the spectral decomposition of $U$, and choose $q,p$ as in the hypotheses of Theorem 3.7. Let $x \in X \setminus \{0\}$. We are required to show that the set $\tilde{\mathcal{W}}x$ is totally bounded in the metric space defined by the norm of $X$. For this purpose, let $\mathcal{G}$ be a sequence in $\tilde{\mathcal{W}}x$. Hence for some sequence $\mathcal{F}$ taken from the set of trigonometric polynomials $\mathfrak{S}$ appearing in the minorant of (4.2), we can express $\mathcal{G}$ as $\mathcal{F}(U)x$. By virtue of (4.2) and Theorem 4.2, we can extract from the sequence of trigonometric polynomials $\mathcal{F}$ a subsequence $\{f_k\}_{k=1}^\infty$ pointwise convergent on $\mathbb{T}$ to a function $f : \mathbb{T} \to \mathbb{C}$ such that + +$$ \text{var}_p(f, \mathbb{T}) \leq \sup\{\text{var}_p(F, \mathbb{T}) : F \in \mathfrak{S}\} < \infty. $$ + +By Theorem 3.9, applied to $\{f_k\}_{k=1}^\infty$, we see that $\{f_k(U)\}_{k=1}^\infty$ converges in the strong operator topology to $\int_{[0,2\pi]} f(e^{it}) dE(t)$. + +The converse conclusion follows directly from Proposition 1.1, since for each $z \in \mathbb{T}$, the $(C, 1)$ averages appearing in (1.2) are the means of the corresponding discrete Hilbert averages in (4.3). 
■ + +An application of Theorem 3.7 of [12] to (4.4) yields the following improvement of Theorem 2.6. +---PAGE_BREAK--- + +**THEOREM 4.4.** Let $X$ be a super-reflexive Banach space, let $U \in \mathfrak{B}(X)$ be trigonometrically well-bounded, and let $E(\cdot)$ be the spectral decomposition of $U$. Then for each $f \in \text{BV}(\mathbb{T})$, the series $\sum_{k=-\infty}^{\infty} \hat{f}(k)U^k$ converges in the strong operator topology to $\int_{[0,2\pi]} f^{\#}(t) dE(t)$. + +In the presence of super-reflexivity, we now also have the following extension of Theorem 2.6 from $\text{BV}(\mathbb{T})$ to spaces $V_p(\mathbb{T})$, for appropriate $p > 1$. + +**THEOREM 4.5.** Let $X$ be a super-reflexive Banach space, and let $U \in \mathfrak{B}(X)$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$, let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $x \in X$, + +$$ (4.5) \quad \left\| \sum_{\nu=-n}^{n} \left(1 - \frac{|\nu|}{n+1}\right) \hat{\phi}(\nu) U^\nu x - \left\{ \int_{[0,2\pi]}^\oplus \phi^{\#}(t) dE(t) \right\} x \right\| \to 0 \quad \text{as } n \to \infty. $$ + +*Proof.* Clearly, the sequence of trigonometric polynomials $\{\kappa_n * \phi\}_{n \ge 0}$ has the property that $\sup_{n \ge 0} \| \kappa_n * \phi \|_{V_p(\mathbb{T})} < \infty$, and by Fejér's Theorem, $(\kappa_n * \phi)(e^{it}) \to \phi^{\#}(t)$ for all $t \in \mathbb{R}$. The desired conclusion is now an immediate consequence of Theorem 3.9 applied to the pointwise convergent sequence $\{\kappa_n * \phi\}_{n \ge 0}$. 
■

**REMARK 4.6.** In contrast to the situation for $\text{BV}(\mathbb{T})$-functions in Theorem 4.4, it is an open question whether or not one can, for the general $\phi \in V_p(\mathbb{T})$, improve the strong $(C, 1)$-convergence in (4.5) to strong convergence of the series $\sum_{\nu=-\infty}^{\infty} \hat{\phi}(\nu)U^{\nu}$. In this regard, one can use Theorem 3.1 of [37] in combination with Theorem 4.5 to obtain the following partial result in the positive direction. We omit the details for expository reasons.

**PROPOSITION 4.7.** Suppose that $\mathcal{Y}$ is a UMD space having an unconditional basis, and let $U \in \mathfrak{B}(\mathcal{Y})$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $y \in \mathcal{Y}$ we have, for almost all $z \in \mathbb{T}$,

$$ \left\| \left( \sum_{k=-n}^{n} \hat{\phi}(k) U^k z^k \right) y - \left( \int_{[0,2\pi]}^\oplus (\phi_z)^{\#}(t) dE(t) \right) y \right\|_{\mathcal{Y}} \to 0 \quad \text{as } n \to \infty. $$

**REMARK 4.8.** Since the Haar system is an unconditional basis for $L^r([0, 1])$, $1 < r < \infty$, the space $L^r(\mathbb{T})$ satisfies the hypotheses on $\mathcal{Y}$ of Proposition 4.7. In particular, by specializing to the value $r = 2$, we see that any separable Hilbert space (finite-dimensional or infinite-dimensional) satisfies these hypotheses on $\mathcal{Y}$.
---PAGE_BREAK---

**5. Operator-weighted Hilbert sequence spaces and trigonometrically well-bounded shift operators.** Henceforth, $\mathcal{R}$ will be an arbitrary Hilbert space with inner product $\langle \cdot, \cdot \rangle$.
As shown in Theorem 2.3 of [16], shifts on appropriate operator-weighted Hilbert sequence spaces serve as a model for the general behavior of trigonometrically well-bounded operators on arbitrary Hilbert spaces. More specifically, to any invertible operator $V \in \mathfrak{B}(\mathcal{R})$ there correspond a bilateral operator-valued weight sequence $\mathfrak{W}_V \subseteq \mathfrak{B}(\mathcal{R})$ and an affiliated Hilbert sequence space $\ell^2(\mathfrak{W}_V)$ such that $V$ is trigonometrically well-bounded on $\mathcal{R}$ if and only if the right bilateral shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W}_V)$; moreover, if this is the case, then the norm properties of trigonometric polynomials of $\mathcal{R}$ mirror the norm properties of trigonometric polynomials of $V$. (See (5.6) below. For additional background facts regarding these matters, see [12].) In this section, we shall discuss how application of the preceding sections to this circle of ideas in Hilbert space affords some new insights into the role of the Hilbert transform and of multiplier theory in non-commutative analysis. + +We begin by describing the relevant class of operator-weighted Hilbert sequence spaces. An *operator-valued weight sequence* on $\mathcal{R}$ will be a bilateral sequence $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty} \subseteq \mathfrak{B}(\mathcal{R})$ such that for each $k \in \mathbb{Z}$, $W_k$ is a positive, invertible, self-adjoint operator. We associate with $\mathfrak{W}$ the weighted Hilbert space $\ell^2(\mathfrak{W})$ consisting of all sequences $x = \{x_k\}_{k=-\infty}^{\infty} \subseteq \mathcal{R}$ such that + +$$ \sum_{k=-\infty}^{\infty} \langle W_k x_k, x_k \rangle < \infty, $$ + +and furnished with the inner product $\langle \langle \cdot, \cdot \rangle \rangle$ specified by + +$$ \langle\langle x, y \rangle\rangle = \sum_{k=-\infty}^{\infty} \langle W_k x_k, y_k \rangle. 
$$

Thus, $\ell^2(\mathfrak{W})$ is a generalization to non-commutative analysis of the $\ell^2$-spaces defined by scalar-valued weight sequences in the special case where $\mathcal{R} = \mathbb{C}$. (For the continuous variable generalization from scalar-valued weights to operator-valued weights, see [39].) Note that for each $z \in \mathbb{T}$, there is a natural unitary operator $\Delta_z$ defined on $\ell^2(\mathfrak{W})$ by writing $\Delta_z(\{x_k\}_{k=-\infty}^{\infty}) = \{z^k x_k\}_{k=-\infty}^{\infty}$.

The links between the considerations of the previous sections and $\ell^2(\mathfrak{W})$ stem from the interplay between $\ell^2(\mathfrak{W})$ and the discrete Hilbert kernel $h: \mathbb{Z} \to \mathbb{R}$, which, in terms of the function $\phi_0 \in \text{BV}(\mathbb{T})$ specified in conjunction with (2.7), is expressed by $h = \hat{\phi}_0$. Thus $h(0) = 0$, and $h(k) = k^{-1}$ for $k \in \mathbb{Z} \setminus \{0\}$. The truncates $\{h_N\}_{N=1}^{\infty}$ of the discrete Hilbert kernel $h$ are defined by writing, for each $N \in \mathbb{N}$ and each $k \in \mathbb{Z}$, $h_N(k) = h(k)$ if $|k| \le N$, and $h_N(k) = 0$ if $|k| > N$. The formal operator of convolution by
---PAGE_BREAK---

$h$ on $\ell^2(\mathfrak{W})$ will be referred to as the discrete Hilbert transform, and will be symbolized by $D$ (convolution by $h_N$ on $\ell^2(\mathfrak{W})$ will be denoted by $D_N$). If $h$ defines a bounded convolution operator from $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$, we shall say that $\mathfrak{W}$ possesses the *Treil–Volberg property*. It was shown in [12] that in the context of $\ell^2(\mathfrak{W})$, one can define an operator-valued counterpart (the discrete analogue of [39]) for the Muckenhoupt $A_2$-weight condition—if this condition is satisfied by $\mathfrak{W}$, we write $\mathfrak{W} \in A_2(\mathcal{R})$.
Since we do not need this $A_2(\mathcal{R})$ weight condition for our present considerations, we shall not pursue it further, except to note that the condition $\mathfrak{W} \in A_2(\mathcal{R})$ is always necessary for $\mathfrak{W}$ to possess the Treil–Volberg property, but, for the continuous-variable case and infinite-dimensional $\mathcal{R}$, is known not to be sufficient (see, respectively, Proposition 4.4 of [12] and Theorem 1.1 of [27]).

The connection between the Treil–Volberg property and the right (bilateral) shift $\mathcal{R}: \ell^2(\mathfrak{W}) \to \mathcal{R}^\mathbb{Z}$ specified by

$$ \mathcal{R}(\{x_k\}_{k=-\infty}^{\infty}) = \{x_{k-1}\}_{k=-\infty}^{\infty} $$

is expressed as follows (Theorem 4.12 of [12]).

**PROPOSITION 5.1.** Let $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ be an operator-valued weight sequence on the arbitrary Hilbert space $\mathcal{R}$. Then the following assertions are equivalent:

(i) $\mathfrak{W}$ has the Treil–Volberg property.

(ii) The right shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W})$.

(iii) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that

$$ (5.1) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{\mathcal{R}^k}{k} \right\| < \infty. $$

**REMARK 5.2.** If $\mathcal{R} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$, then for each $z \in \mathbb{T}$, $\Delta_z \mathcal{R} \Delta_{\bar{z}} = z \mathcal{R}$, and hence the condition (1.2) reduces to (5.1) in the context of Proposition 5.1(iii).

By virtue of (4.4), we can add the following two conditions to the list of equivalent conditions in Proposition 5.1.
**PROPOSITION 5.3.** Under the hypotheses of Proposition 5.1, each of the following two conditions is equivalent to the conditions (i)–(iii) listed therein:

(iv) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that

$$ (5.2) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \frac{\mathcal{R}^k}{k} \right\| < \infty. $$
---PAGE_BREAK---

(v) $\{D_N\}_{N=1}^{\infty} \subseteq \mathfrak{B}(\ell^2(\mathfrak{W}))$, with

$$ (5.3) \qquad \sup_{N \in \mathbb{N}} \|D_N\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} < \infty. $$

*Proof*. It is elementary that (iv) ⇒ (iii). The implication (ii) ⇒ (iv) is a consequence of (4.4). If (iv) holds, then for each $N \in \mathbb{N}$,

$$ (5.4) \qquad s_N(\mathcal{R}) = D_N, $$

and hence (v) holds. So the proof of Proposition 5.3 reduces to showing that (v) implies any one of the conditions (i) through (iv). Since there is no a priori reason to infer from (v) that $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$, we cannot make immediate use of (5.4), and so we shall sidestep this difficulty by establishing (i) directly. Since the Hilbert space $\ell^2(\mathfrak{W})$ is, in particular, reflexive, it follows from (5.3) that the closure of

$$ \mathcal{D} = \{D_N : N \in \mathbb{N}\} $$

in the weak operator topology of $\mathfrak{B}(\ell^2(\mathfrak{W}))$ is compact in the weak operator topology of $\mathfrak{B}(\ell^2(\mathfrak{W}))$. Consequently, there are a subnet $\{D_{N_\gamma}\}_{\gamma \in \Gamma}$ and an operator $\mathfrak{H} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$ such that

$$ (5.5) \qquad D_{N_\gamma} \to \mathfrak{H} \quad \text{in the weak operator topology of } \mathfrak{B}(\ell^2(\mathfrak{W})). $$

Hence it will suffice to verify that for every vector $y = \{y_k\}_{k=-\infty}^{\infty} \in \ell^2(\mathfrak{W})$ such that the support of $y$ is a singleton, $\mathfrak{H}$ acts on $y$ as convolution by $h$.
It is a routine matter to perform this verification by using (5.5) in conjunction with such vectors. ■

**REMARK 5.4.** In classical single-variable Fourier analysis, as well as in its generalizations to norm inequalities involving scalar-valued weights, the boundedness of the relevant Hilbert transform goes hand-in-hand with the boundedness of pillars like the Hardy–Littlewood maximal function and the maximal Hilbert transform—which leave in their wake the uniform boundedness of the Hilbert transform’s truncates. This familiar scenario ultimately entails the validity of the relevant version of the Marcinkiewicz Multiplier Theorem and of the Littlewood–Paley Theorem. However, in the framework of condition (i) of Proposition 5.1 such underpinnings as maximal operators are lacking, and moreover, Theorem 6.1 of [16] shows that there is an operator-valued weight sequence $\mathfrak{W}_0$ on the Hilbert space $\ell^2(\mathbb{N})$ such that $\mathfrak{W}_0$ enjoys the Treil–Volberg property, but the analogues of the classical Marcinkiewicz Multiplier Theorem and the Littlewood–Paley Theorem fail to hold on $\ell^2(\mathfrak{W}_0)$. One motivation for obtaining the above implication (i) ⇒ (v) is that it, nevertheless, confirms the survival of the uniform boundedness of the Hilbert transform’s truncates, in an environment where so many mainstays fail to carry over. The next theorem adds still more to the
---PAGE_BREAK---

positive side of the ledger by extending this type of boundedness result to appropriate function classes.

**THEOREM 5.5.** Suppose that $\mathcal{R}$ is an arbitrary Hilbert space, and $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ is an operator-valued weight sequence on $\mathcal{R}$ having the Treil–Volberg property.
Then there is $\gamma \in (1, \infty)$ such that for each $p$ satisfying $1 \le p < \gamma$, and each function $\phi \in V_p(\mathbb{T})$, convolution by the inverse Fourier transform $\phi^\vee$ on $\ell^2(\mathfrak{W})$ is a bounded linear mapping $\mathfrak{F}_\phi$ of $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$ satisfying

$$
\|\mathfrak{F}_{\phi}\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} \leq K_{\mathfrak{W},p} \|\phi\|_{V_p(\mathbb{T})}.
$$

*Proof.* Combine Theorems 4.2 and 4.3 of [16] and Corollary 4.4 of [16] with Theorem 3.7 above. ■

We finish this section with a brief sketch of how the above setting furnishes a model for estimates with trigonometrically well-bounded operators on Hilbert spaces. Suppose that $V \in \mathfrak{B}(\mathcal{R})$ is an invertible operator, and let $\mathfrak{W}_V$ be the operator-valued weight sequence on the Hilbert space $\mathcal{R}$ given by $\mathfrak{W}_V = \{(V^k)^* V^k\}_{k=-\infty}^{\infty}$. Lemma 2.2 of [16] and Theorem 2.3 of [16] guarantee that the right shift $\mathcal{R}$ is a bounded invertible linear mapping of $\ell^2(\mathfrak{W}_V)$ onto itself such that for every trigonometric polynomial $Q$,

$$
(5.6) \qquad \|Q(\mathcal{R})\|_{\mathfrak{B}(\ell^2(\mathfrak{W}_V))} = \sup_{z \in \mathbb{T}} \|Q(zV)\|_{\mathfrak{B}(\mathcal{R})}.
$$

In view of Proposition 1.1 and the equivalence of conditions (ii) and (iii) in Proposition 5.1, it follows directly from (5.6) that the right shift $\mathcal{R}$ is trigonometrically well-bounded on $\ell^2(\mathfrak{W}_V)$ if and only if $V$ is trigonometrically well-bounded on $\mathcal{R}$.

References

[1] D. J. Aldous, *Unconditional bases and martingales in $L_p(F)$*, Math. Proc. Cambridge Philos. Soc. 85 (1979), 117-123.

[2] B. Beauzamy, *Introduction to Banach Spaces and Their Geometry*, North-Holland Math. Stud. 68 (Notas de Mat. 86), Elsevier Science, New York, 1982.

[3] E. Berkson, J. Bourgain, and T. A.
Gillespie, *On the almost everywhere convergence of ergodic averages for power-bounded operators on $L^p$-subspaces*, Integral Equations Operator Theory 14 (1991), 678-715. + +[4] E. Berkson and H. R. Dowson, *On uniquely decomposable well-bounded operators*, Proc. London Math. Soc. (3) 22 (1971), 339-358. + +[5] E. Berkson and T. A. Gillespie, *AC functions on the circle and spectral families*, J. Operator Theory 13 (1985), 33-47. + +[6] —, —, *Fourier series criteria for operator decomposability*, Integral Equations Operator Theory 9 (1986), 767–789. + +[7] —, —, *Stečkin's theorem, transference, and spectral decompositions*, J. Funct. Anal. 70 (1987), 140–170. +---PAGE_BREAK--- + +[8] E. Berkson and T. A. Gillespie, *The spectral decomposition of weighted shifts and the $A_p$ condition*, Colloq. Math. (special volume dedicated to A. Zygmund) 60-61 (1990), 507-518. + +[9] —, —, *Spectral decompositions and harmonic analysis on UMD spaces*, Studia Math. 112 (1994), 13-49. + +[10] —, —, *Mean-boundedness and Littlewood-Paley for separation-preserving operators*, Trans. Amer. Math. Soc. 349 (1997), 1169-1189. + +[11] —, —, *The q-variation of functions and spectral integration of Fourier multipliers*, Duke Math. J. 88 (1997), 103-132. + +[12] —, —, *Mean₂-bounded operators on Hilbert space and weight sequences of positive operators*, Positivity 3 (1999), 101-133. + +[13] —, —, *Spectral integration from dominated ergodic estimates*, Illinois J. Math. 43 (1999), 500-519. + +[14] —, —, *Spectral decompositions, ergodic averages, and the Hilbert transform*, Studia Math. 144 (2001), 39-61. + +[15] —, —, *A Tauberian theorem for ergodic averages, spectral decomposability, and the dominated ergodic estimate for positive invertible operators*, Positivity 7 (2003), 161-175. + +[16] —, —, *Shifts as models for spectral decomposability on Hilbert space*, J. Operator Theory 50 (2003), 77-106. 
+ +[17] —, —, *Operator means and spectral integration of Fourier multipliers*, Houston J. Math. 30 (2004), 767-814. + +[18] —, —, *The q-variation of functions and spectral integration from dominated ergodic estimates*, J. Fourier Anal. Appl. 10 (2004), 149-177. + +[19] —, —, *An $M_q(T)$-functional calculus for power-bounded operators on certain UMD spaces*, Studia Math. 167 (2005), 245-257. + +[20] E. Berkson, T. A. Gillespie, and P. S. Muhly, *Abstract spectral decompositions guaranteed by the Hilbert transform*, Proc. London Math. Soc. (3) 53 (1986), 489-517. + +[21] D. Blagojevic, *Spectral families and geometry of Banach spaces*, PhD thesis, Univ. of Edinburgh, 2007; http://www.era.lib.ed.ac.uk/handle/1842/2389. + +[22] J. Bourgain, *Some remarks on Banach spaces in which martingale difference sequences are unconditional*, Ark. Mat. 21 (1983), 163-168. + +[23] V. V. Chistyakov and O. E. Galkin, *On maps of bounded p-variation with p > 1*, Positivity 2 (1998), 19-45. + +[24] R. Coifman, J. L. Rubio de Francia, et S. Semmes, *Multiplicateurs de Fourier de $L^p(\mathbb{R})$ et estimations quadratiques*, C. R. Acad. Sci. Paris Sér. I Math. 306 (1988), 351-354. + +[25] M. M. Day, *Reflexive Banach spaces not isomorphic to uniformly convex spaces*, Bull. Amer. Math. Soc. 47 (1941), 313-317. + +[26] P. Enflo, *Banach spaces which can be given an equivalent uniformly convex norm*, Israel J. Math. 13 (1972), 281-288. + +[27] T. A. Gillespie, S. Pott, S. Treil, and A. Volberg, *Logarithmic growth for weighted Hilbert transforms and vector Hankel operators*, J. Operator Theory 52 (2004), 103-112. + +[28] G. H. Hardy, *Weierstrass's non-differentiable function*, Trans. Amer. Math. Soc. 17 (1916), 301-325. + +[29] G. H. Hardy and J. E. Littlewood, *A convergence criterion for Fourier series*, Math. Z. 28 (1928), 612-634. + +[30] R. C. James, *Super-reflexive spaces with bases*, Pacific J. Math. 41 (1972), 409-419. + +[31] —, *Super-reflexive Banach spaces*, Canad. J. 
Math. 24 (1972), 896-904. +---PAGE_BREAK--- + +[32] Y. Katznelson, *An Introduction to Harmonic Analysis*, Dover, New York, 1976. + +[33] J. Lindenstrauss and L. Tzafriri, *Classical Banach Spaces II: Function Spaces*, Ergeb. Math. Grenzgeb. 97, Springer, New York, 1979. + +[34] B. Maurey, *Système de Haar*, in: Séminaire Maurey-Schwartz 1974-1975, Centre Math. École Polytechnique, Paris, 1975, 26 pp. + +[35] G. Pisier, *Un exemple concernant la super-réflexivité*, ibid., 12 pp. + +[36] J. E. Porter, *Helly's selection principle for functions of bounded p-variation*, Rocky Mountain J. Math. 35 (2005), 675-679. + +[37] J. L. Rubio de Francia, *Fourier series and Hilbert transforms with values in UMD Banach spaces*, Studia Math. 81 (1985), 95-105. + +[38] P. G. Spain, *On well-bounded operators of type (B)*, Proc. Edinburgh Math. Soc. (2) 18 (1972), 35-48. + +[39] S. Treil and A. Volberg, *Wavelets and the angle between past and future*, J. Funct. Anal. 143 (1997), 269-308. + +[40] L. C. Young, *An inequality of the Hölder type, connected with Stieltjes integration*, Acta Math. 67 (1936), 251-282. + +Earl Berkson +Department of Mathematics +University of Illinois +1409 W. Green Street +Urbana, IL 61801 U.S.A. +E-mail: berkson@math.uiuc.edu + +Received January 30, 2010 + +Revised version July 7, 2010 + +(6804) \ No newline at end of file diff --git a/samples/texts_merged/822209.md b/samples/texts_merged/822209.md new file mode 100644 index 0000000000000000000000000000000000000000..b853e54328468a23032f581d5199278dec09f18c --- /dev/null +++ b/samples/texts_merged/822209.md @@ -0,0 +1,738 @@ + +---PAGE_BREAK--- + +XHX – A Framework for Optimally Secure +Tweakable Block Ciphers from Classical Block +Ciphers and Universal Hashing + +Ashwin Jha¹, Eik List², Kazuhiko Minematsu³, +Sweta Mishra⁴, and Mridul Nandi¹ + +¹ Indian Statistical Institute, Kolkata, India. {ashwin_r, mridul}@isical.ac.in + +² Bauhaus-Universität Weimar, Weimar, Germany. 
eik.list@uni-weimar.de

³ NEC Corporation, Tokyo, Japan. k-minematsu@ah.jp.nec.com

⁴ IIIT, Delhi, India. swetam@iiitd.ac.in

**Abstract.** Tweakable block ciphers are important primitives for designing cryptographic schemes with high security. In the absence of a standardized tweakable block cipher, constructions built from classical block ciphers remain an interesting research topic in both theory and practice. Motivated by Mennink's $\tilde{F}[2]$ publication from 2015, Wang et al. proposed 32 optimally secure constructions at ASIACRYPT'16, all of which employ two calls to a classical block cipher each. Yet, those constructions were still limited to *n*-bit keys and *n*-bit tweaks. Thus, applications with more general key or tweak lengths still lack support. This work proposes the XHX family of tweakable block ciphers from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys in an efficient manner from the block cipher, where we generalize the constructions by Wang et al.; finally, we propose efficient instantiations for the used hash functions.

**Keywords:** Provable security · ideal-cipher model · tweakable block cipher

# 1 Introduction

*Tweakable Block Ciphers.* In addition to the usual key and plaintext inputs of classical block ciphers, a tweakable block cipher (TBC, for short) is a cryptographic transform that takes an additional public parameter called *tweak*. So, a tweakable block cipher $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ is a permutation on the plaintext/ciphertext space $\mathcal{M}$ for every combination of key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, where $\mathcal{K}, \mathcal{T}$, and $\mathcal{M}$ are assumed to be non-empty sets.
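This interface can be illustrated with a minimal Python sketch. Everything below is assumed for illustration only: an 8-bit message space and a keyed shuffle with no cryptographic strength whatsoever; the point is merely that each fixed pair $(K, T)$ selects its own permutation of $\mathcal{M}$.

```python
import random

M_SIZE = 256  # toy message space M = {0, ..., 255}

def toy_tweakable_cipher(key: int, tweak: int):
    """Model only the *interface* of a TBC: for one (key, tweak) pair,
    return a permutation of M and its inverse (a toy keyed shuffle)."""
    rng = random.Random(f"{key}/{tweak}")   # deterministic per (K, T)
    perm = list(range(M_SIZE))
    rng.shuffle(perm)
    inv = [0] * M_SIZE
    for x, y in enumerate(perm):
        inv[y] = x
    return perm.__getitem__, inv.__getitem__

enc, dec = toy_tweakable_cipher(key=42, tweak=7)
assert all(dec(enc(m)) == m for m in range(M_SIZE))   # a permutation for this (K, T)
enc2, _ = toy_tweakable_cipher(key=42, tweak=8)       # a different tweak selects
assert any(enc(m) != enc2(m) for m in range(M_SIZE))  # a different permutation
```

Changing only the tweak switches the permutation, which is exactly what distinguishes the signature $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ from a classical block cipher.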
Their first use in the literature was due to Schroeppel and Orman in the Hasty Pudding Cipher, where the tweak was still called *Spice* [18]. Liskov, Rivest, and Wagner [11] then formalized the concept in 2002.

In the recent past, the status of tweakable block ciphers has become more prominent, last but not least due to the advent of efficient dedicated constructions,
---PAGE_BREAK---

such as Deoxys-BC or Joltik-BC that were proposed alongside the TWEAKEY framework [6], or e.g., SKINNY [1]. However, in the absence of a standard, tweakable block ciphers based on classical ones remain a highly interesting topic.

**Blockcipher-based Constructions.** Liskov et al. [11] had described two constructions, known as LRW1 and LRW2. Rogaway [17] proposed XE and XEX as refinements of LRW2 for updating tweaks efficiently and reducing the number of keys. These schemes are efficient in the sense that they need one call to the block cipher plus one call to a universal hash function. Both XE and XEX are provably secure in the standard model, i.e., assuming the block cipher is a (strong) pseudorandom permutation, they are secure up to $O(2^{n/2})$ queries, when using an $n$-bit block cipher. Since this bound results from the birthday paradox on input collisions, the security of those constructions is inherently limited by the birthday bound (BB-secure).

**Constructions with Stronger Security.** Constructions with beyond-birthday-bound (BBB) security have been an interesting research topic. In [13], Minematsu introduced a rekeying-based construction. Landecker, Shrimpton and Terashima [9] analyzed the cascade of two independent LRW2 instances, called CLRW2. Both constructions are secure up to $O(2^{2n/3})$ queries, however, at the price of requiring two block-cipher calls per block plus per-tweak rekeying or plus two calls to a universal hash function, respectively.
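The LRW2 structure $\tilde{E}(T, M) = E_K(M \oplus h(T)) \oplus h(T)$, with one block-cipher call and one universal-hash call, can be sketched in a few lines. The following toy Python instantiation is an assumption for illustration: an 8-bit block, a keyed shuffle standing in for the real block cipher $E_K$, and multiplication in GF($2^8$) serving as the XOR-universal hash $h$.

```python
import random

def gf256_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) with the AES polynomial; h_L(T) = L*T is
    an XOR-universal hash of the tweak."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B          # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return r

def toy_blockcipher(key: int):
    """Stand-in for E_K: a keyed permutation of {0,...,255}; toy only."""
    perm = list(range(256))
    random.Random(key).shuffle(perm)
    inv = [0] * 256
    for x, y in enumerate(perm):
        inv[y] = x
    return perm, inv

def lrw2_enc(perm, hash_key: int, tweak: int, m: int) -> int:
    mask = gf256_mul(hash_key, tweak)   # h(T), the per-tweak mask
    return perm[m ^ mask] ^ mask        # E_K(M xor h(T)) xor h(T)

def lrw2_dec(inv, hash_key: int, tweak: int, c: int) -> int:
    mask = gf256_mul(hash_key, tweak)
    return inv[c ^ mask] ^ mask

perm, inv = toy_blockcipher(key=1234)
c = lrw2_enc(perm, hash_key=0x57, tweak=0x2A, m=0x99)
assert lrw2_dec(inv, hash_key=0x57, tweak=0x2A, c=c) == 0x99
```

The birthday bound mentioned above comes from collisions of the masked inputs $M \oplus h(T)$: once two queries collide there, the construction leaks, which happens after about $2^{n/2}$ queries.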
For settings that demand stronger security, Lampe and Seurin [8] proved that the chained cascade of more instances of LRW2 could asymptotically approach a security of up to $O(2^n)$ queries, i.e. full $n$-bit security. However, the disadvantage is drastically decreased performance. An alternative direction has been initiated by Mennink [12], who also proposed TBC constructions from classical block ciphers, but proved the security in the ideal-cipher model. Mennink's constructions could achieve full $n$-bit security quite efficiently when both input and key are $n$ bits. In particular, his $\tilde{F}$[2] construction required only two block-cipher calls.

Following Mennink's work, Wang et al. [20] proposed 32 constructions of optimally secure tweakable block ciphers from classical block ciphers. Their designs share an $n$-bit key, $n$-bit tweak and $n$-bit plaintext, and linearly mix tweak, key, and the result of a second offline call to the block cipher. Their constructions have the desirable property of allowing to cache the result of the first block-cipher call; moreover, given a-priori known tweaks, some of their constructions further allow precomputing the result of the key schedule.

All constructions by Wang et al. were restricted to $n$-bit keys and tweaks. While this limit was reasonable, it did not address tweakable block ciphers with tweaks longer than $n$ bits. Such constructions, however, are useful in applications with increased security needs such as for authenticated encryption or variable-input-length ciphers (e.g., [19]). Moreover, disk-encryption schemes are typically based on wide-block tweakable ciphers, where the physical location on disk (e.g., the sector ID) is used as tweak, which can be arbitrarily long.
Moreover, many ciphers, like the AES-192 or AES-256, possess key and block lengths for which the constructions in [12,20] are inapplicable. In general, the tweak represents additional data accompanying the plaintext/ciphertext block, and no general reason exists why tweaks must be limited to the block length.

Before proving the security of a construction, we have to specify the employed model. The standard model is well-established in the cryptographic community despite the fact that proofs are based on a few unproven assumptions, such as that a block cipher is a PRP, or ignore practical side-channel attacks. In the standard model, the adversary is given access only to either the *real construction* $\tilde{E}$ or an *ideal construction* $\tilde{\pi}$. In contrast, the ideal-cipher model assumes an ideal primitive—in our case the classical ideal block cipher $E$ which is used in $\tilde{E}$— which the adversary also has access to in both worlds. Although a proof in the ideal-cipher model is not an unexceptional guarantee that no attacks may exist when instantiated in practice [3], it allows us to abstract away the details of the primitive for the sake of focusing on the security of the construction.

A good example for TBCs proven in the standard model is XTX [14] by Minematsu and Iwata. XTX extended the tweak domain of a given tweakable block cipher $\tilde{E}: \{0,1\}^k \times \{0,1\}^t \times \{0,1\}^n \rightarrow \{0,1\}^n$ by hashing the arbitrary-length tweak to an $(n+t)$-bit value. The first $t$ bits serve as tweak and the latter $n$ bits are XORed to both input and output of $\tilde{E}$. Given an $\epsilon$-AXU family of hash functions and an ideal tweakable cipher, XTX is secure for up to $O(2^{(n+t)/2})$ queries in the standard model. However, no alternative to XTX exists in the ideal-cipher model yet.
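The XTX tweak extension just described can be made concrete with a minimal Python sketch. Everything below is an assumption for illustration: the underlying short-tweak TBC is a toy keyed shuffle on 8-bit blocks, and a truncated SHA-256 digest stands in for the keyed $\epsilon$-AXU hash that maps an arbitrary-length tweak to the pair $(T_1, T_2)$; $T_1$ feeds the short tweak input and $T_2$ masks plaintext and ciphertext.

```python
import hashlib
import random

def toy_tbc(key: int, short_tweak: int):
    """Stand-in for the underlying E~ with a short tweak: one keyed shuffle
    of {0,...,255} per (key, tweak); returns the permutation and its inverse."""
    rng = random.Random(f"{key}/{short_tweak}")
    perm = list(range(256))
    rng.shuffle(perm)
    inv = [0] * 256
    for x, y in enumerate(perm):
        inv[y] = x
    return perm, inv

def xtx_hash(hash_key: bytes, tweak: bytes):
    """Hypothetical keyed hash splitting a digest into (T1, T2).  A real XTX
    instance uses an eps-AXU family here, not SHA-256."""
    d = hashlib.sha256(hash_key + tweak).digest()
    return d[0], d[1]           # T1 -> short tweak, T2 -> input/output mask

def xtx_enc(key: int, hash_key: bytes, tweak: bytes, m: int) -> int:
    t1, t2 = xtx_hash(hash_key, tweak)
    perm, _ = toy_tbc(key, t1)
    return perm[m ^ t2] ^ t2    # E~(T1, M xor T2) xor T2

def xtx_dec(key: int, hash_key: bytes, tweak: bytes, c: int) -> int:
    t1, t2 = xtx_hash(hash_key, tweak)
    _, inv = toy_tbc(key, t1)
    return inv[c ^ t2] ^ t2

# The tweak may now be arbitrarily long:
c = xtx_enc(5, b"hk", b"sector-000042-some-very-long-tweak", 0xA7)
assert xtx_dec(5, b"hk", b"sector-000042-some-very-long-tweak", c) == 0xA7
```

The structural point is that the hash compresses an arbitrary-length tweak into only $n + t$ bits of influence on $\tilde{E}$, which is where the $O(2^{(n+t)/2})$ bound comes from.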
**Contribution.** This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. [20]. Like them, the present work also uses the ideal-cipher model for its security analysis. As the major difference to their work, our proposal allows arbitrary tweak lengths and works for any block cipher with $n$-bit block and $k$-bit key. The security is guaranteed for up to $O(2^{(n+k)/2})$ queries, which yields $n$-bit security when $k \ge n$.

Our contributions in the remainder of this work are threefold: First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys efficiently from the block cipher, generalizing the constructions by Wang et al. Finally, we propose efficient instantiations of the employed hash functions for concreteness.

*Remark 1.* Recently, Naito [15] proposed the XKX framework of beyond-birthday-secure tweakable block ciphers, which shares similarities with the proposal in the present work. He proposed two instances, the birthday-secure XKX(1) and the beyond-birthday-secure XKX(2). In more detail, the nonce is processed by a block-cipher-based PRF which yields the block-cipher key for the current message; the counter is hashed with a universal hash function under a second, independent key to mask the input. In contrast to other proposals including ours, Naito's construction demands both a counter and a nonce as parameters to overcome the birthday bound; as a standalone construction, its security reduces to $n/2$ bits if an adversary could use the same "nonce" value for all queries. Hence, XKX(2) is tailored only to certain domains, e.g., modes of operation in nonce-based authenticated encryption schemes. Our proposal differs from XKX in four aspects: (1) we do not pose limitations on the reuse of input parameters; (2) we do not require a minimum key length of $n + k$ bits; (3) we do not use several independent keys, but employ the block cipher to derive the hashing keys; (4) finally, Naito's construction is proved in the standard model, whereas we consider the ideal-cipher model.

**Table 1:** Comparison of XHX to earlier highly secure TBCs built upon classical block ciphers. ICM(n, k) denotes the ideal-cipher model for a block cipher with $n$-bit block and $k$-bit key; BC(n, k) and TBC(n, t, k) denote a standard-model (tweakable) block cipher with $n$-bit block, $t$-bit tweak, and $k$-bit key. #Enc. = number of calls to the (tweakable) block cipher; #Mult. = number of multiplications over GF($2^n$). a (b) means that b out of a calls can be precomputed with the secret key; we define $s = \lceil k/n \rceil$.

| Scheme | Model | Tweak length (bits) | Key length (bits) | Security (bits) | #Enc. | #Mult. | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F̃[2] | ICM(n, n) | n | n | n | 2 | – | [12] |
| Ẽ1, …, Ẽ32 | ICM(n, n) | n | n | n | 2 (1) | – | [20] |
| XTX | TBC(n, t, k) | any l | k + 2n | (n + t)/2 | 1 | 2⌈l/n⌉ | [14] |
| XKX(2) | BC(n, k) | –* | k + n | min{n, k/2} | 1 | 1 | [15] |
| XHX | ICM(n, k) | any l | k | (n + k)/2 | s + 1 (s) | s⌈l/n⌉ | This work |
| XHX | ICM(n, k) | 2n | k | n | s + 1 (s) | s | This work |

\* XKX(2) employs a counter as tweak.

The remainder is structured as follows: Section 2 briefly gives the preliminaries necessary for the rest of this work. Section 3 then defines the general construction, which we call GXHX for simplicity, and which hashes the tweak to three outputs. Section 4 continues with the definition and analysis of XHX, which derives the hashing keys from the block cipher. Section 5 describes and analyzes efficient instantiations of our hash functions depending on the tweak length. In particular, we propose instantiations for $2n$-bit and arbitrary-length tweaks.

## 2 Preliminaries

**General Notation.** We use lowercase letters $x$ for indices and integers, uppercase letters $X, Y$ for binary strings and functions, and calligraphic uppercase letters $\mathcal{X}, \mathcal{Y}$ for sets. We denote the concatenation of binary strings $X$ and $Y$ by $X \parallel Y$ and the result of their bitwise XOR by $X \oplus Y$.
For tuples of bit strings $(X_1, \dots, X_n)$, $(Y_1, \dots, Y_n)$ of equal domain, we denote by $(X_1, \dots, X_n) \oplus (Y_1, \dots, Y_n)$ the element-wise XOR, i.e., $(X_1 \oplus Y_1, \dots, X_n \oplus Y_n)$. We indicate the length of $X$ in bits by $|X|$ and write $X_i$ for the $i$-th block. Furthermore, we denote by $X \leftarrow \mathcal{X}$ that $X$ is chosen uniformly at random from the set $\mathcal{X}$. We define three sets of particular interest: $\text{Func}(\mathcal{X}, \mathcal{Y})$ denotes the set of all functions $F : \mathcal{X} \to \mathcal{Y}$, $\text{Perm}(\mathcal{X})$ the set of all permutations $\pi : \mathcal{X} \to \mathcal{X}$, and $\text{TPerm}(\mathcal{T}, \mathcal{X})$ the set of tweaked permutations over $\mathcal{X}$ with associated tweak space $\mathcal{T}$. $(X_1, \dots, X_x) \stackrel{n}{\leftarrow} X$ denotes that $X$ is split into $n$-bit blocks, i.e., $X_1 \parallel \dots \parallel X_x = X$, $|X_i| = n$ for $1 \le i \le x-1$, and $|X_x| \le n$. Moreover, we define $\langle X \rangle_n$ to denote the encoding of a non-negative integer $X$ into its $n$-bit representation. Given an integer $x \in \mathbb{N}$, we define the function $\text{TRUNC}_x : \{0,1\}^* \to \{0,1\}^x$ that returns the leftmost $x$ bits of its input if the input length is at least $x$ bits, and the input itself otherwise. For two sets $\mathcal{X}$ and $\mathcal{Y}$, a uniform random function $\rho : \mathcal{X} \to \mathcal{Y}$ maps inputs $X \in \mathcal{X}$ independently of other inputs and uniformly at random to outputs $Y \in \mathcal{Y}$. For an event $E$, we denote by $\Pr[E]$ the probability of $E$. For positive integers $n$ and $k$, we denote the falling factorial as $(n)_k := \frac{n!}{(n-k)!}$.

**Adversaries.** An adversary $\mathbf{A}$ is an efficient Turing machine that interacts with a given set of oracles that appear as black boxes to $\mathbf{A}$. We denote by $\mathbf{A}^{\mathcal{O}}$ the output of $\mathbf{A}$ after interacting with some oracle $\mathcal{O}$.
We write $\Delta_{\mathbf{A}}(\mathcal{O}_1; \mathcal{O}_2) := |\Pr[\mathbf{A}^{\mathcal{O}_1} \Rightarrow 1] - \Pr[\mathbf{A}^{\mathcal{O}_2} \Rightarrow 1]|$ for the advantage of $\mathbf{A}$ in distinguishing between the oracles $\mathcal{O}_1$ and $\mathcal{O}_2$. All probabilities are defined over the random coins of the oracles and those of the adversary, if any. W.l.o.g., we assume that $\mathbf{A}$ never asks queries to which it already knows the answer.

A block cipher $E$ with associated key space $\mathcal{K}$ and message space $\mathcal{M}$ is a mapping $E: \mathcal{K} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$, $E(K, \cdot)$ is a permutation over $\mathcal{M}$. We define $\text{Block}(\mathcal{K}, \mathcal{M})$ as the set of all block ciphers with key space $\mathcal{K}$ and message space $\mathcal{M}$. A tweakable block cipher $\tilde{E}$ with associated key space $\mathcal{K}$, tweak space $\mathcal{T}$, and message space $\mathcal{M}$ is a mapping $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, $\tilde{E}(K, T, \cdot)$ is a permutation over $\mathcal{M}$. We also write $\tilde{E}_K^T(\cdot)$ as a short form in the remainder.

The STPRP security of $\tilde{E}$ is defined via upper bounding the advantage of a distinguishing adversary $\mathbf{A}$ in a game, where we consider the ideal-cipher model throughout this work. There, $\mathbf{A}$ has access to oracles $(\mathcal{O}, E^\pm)$, where $E^\pm$ is the usual notation for access to the encryption oracle $E$ and to the decryption oracle $E^{-1}$. $\mathcal{O}$ is called the construction oracle and is either the real construction $\tilde{E}_K^\pm(\cdot, \cdot)$ or $\tilde{\pi}^\pm(\cdot, \cdot)$ for $\tilde{\pi} \leftarrow \text{TPerm}(\mathcal{T}, \mathcal{M})$; $E \leftarrow \text{Block}(\mathcal{K}, \mathcal{M})$ is the ideal block cipher underneath $\tilde{E}$.
The STPRP advantage of $\mathbf{A}$ is defined as $\Delta_{\mathbf{A}}(\tilde{E}_K^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot); \tilde{\pi}^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot))$, where the probabilities are taken over the random and independent choice of $K, E, \tilde{\pi}$, and the coins of $\mathbf{A}$, if any. For the remainder, we say that $\mathbf{A}$ is a $(q_C, q_P)$-distinguisher if it asks at most $q_C$ queries to its construction oracle and at most $q_P$ queries to its primitive oracle.

**Definition 1 (Almost-Uniform Hash Function).** Let $\mathcal{H}: \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions. We call $\mathcal{H}$ $\epsilon$-almost-uniform ($\epsilon$-AUniform) if, for all $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}}[\mathcal{H}(K, X) = Y] \le \epsilon$.

**Definition 2 (Almost-XOR-Universal Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions with $\mathcal{Y} \subseteq \{0,1\}^*$. We say that $\mathcal{H}$ is $\epsilon$-almost-XOR-universal ($\epsilon$-AXU) if, for all distinct $X, X' \in \mathcal{X}$ and any $\Delta \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = \Delta] \le \epsilon$.

Minematsu and Iwata [14] defined partial-almost-XOR-universality to capture the probability of partial output collisions.

**Definition 3 (Partial-AXU Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \to \{0,1\}^n \times \{0,1\}^k$ be a family of hash functions. We say that $\mathcal{H}$ is $(n, k, \epsilon)$-partial-AXU ($(n, k, \epsilon)$-pAXU) if, for all distinct $X, X' \in \mathcal{X}$ and all $\Delta \in \{0,1\}^n$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = (\Delta, 0^k)] \le \epsilon$.
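As a concrete toy example of Definition 2 (my own illustration, not taken from this work): multiplication by the key over GF($2^8$), $\mathcal{H}(K, X) := K \cdot X$, is $2^{-8}$-AXU, since for distinct $X, X'$ the difference $K \cdot (X \oplus X')$ hits each $\Delta$ for exactly one key. The small field makes the bound checkable by exhaustion:

```python
from collections import Counter

def gf_mul(a: int, b: int, poly: int = 0x11B) -> int:
    # Carry-less multiplication modulo x^8 + x^4 + x^3 + x + 1 (the AES polynomial).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def axu_bound(x: int, x2: int) -> float:
    # max over Delta of Pr_K[ H(K, x) xor H(K, x2) = Delta ] for H(K, X) = K * X.
    counts = Counter(gf_mul(k, x) ^ gf_mul(k, x2) for k in range(256))
    return max(counts.values()) / 256
```

For any pair of distinct inputs, the function returns exactly $1/256 = 2^{-8}$, because $K \mapsto K \cdot (X \oplus X')$ is a bijection of the field whenever $X \neq X'$.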
**The H-Coefficient Technique.** The H-coefficient technique is a proof method due to Patarin [4,16]. It assumes that the results of the interaction of an adversary $\mathbf{A}$ with its oracles are collected in a transcript $\tau$. The task of $\mathbf{A}$ is to distinguish the real world $\mathcal{O}_{\text{real}}$ from the ideal world $\mathcal{O}_{\text{ideal}}$. A transcript $\tau$ is called *attainable* if the probability to obtain $\tau$ in the ideal world is non-zero. One assumes that $\mathbf{A}$ does not ask duplicate queries, queries prohibited by the game, or queries to which it already knows the answer. Denote by $\Theta_{\text{real}}$ and $\Theta_{\text{ideal}}$ the distributions of transcripts in the real and the ideal world, respectively. Then, the fundamental lemma of the H-coefficient technique states:

**Lemma 1 (Fundamental Lemma of the H-coefficient Technique [16]).**
Assume that the set of attainable transcripts is partitioned into two disjoint sets GOODT and BADT. Further assume that there exist $\epsilon_1, \epsilon_2 \ge 0$ such that for any transcript $\tau \in$ GOODT, it holds that

$$
\frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \geq 1 - \epsilon_1, \quad \text{and} \quad \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \leq \epsilon_2.
$$

Then, for all adversaries $\mathbf{A}$, it holds that $\Delta_{\mathbf{A}}(\mathcal{O}_{\text{real}}; \mathcal{O}_{\text{ideal}}) \le \epsilon_1 + \epsilon_2$.

The proof is given in [4,16].

## 3 The Generic GXHX Construction

Let $n, k, l \ge 1$ be integers and $\mathcal{K} = \{0,1\}^k$, $\mathcal{L} = \{0,1\}^l$, and $\mathcal{T} \subseteq \{0,1\}^*$. Let $E: \mathcal{K} \times \{0,1\}^n \rightarrow \{0,1\}^n$ be a block cipher and $\mathcal{H}: \mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^n \times \mathcal{K} \times \{0,1\}^n$ be a family of hash functions.
Then, we define GXHX[$E$, $\mathcal{H}$] : $\mathcal{L} \times \mathcal{T} \times \{0,1\}^n \rightarrow \{0,1\}^n$ as the tweakable block cipher instantiated with $E$ and $\mathcal{H}$ that, for a given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and message $M \in \{0,1\}^n$, computes the ciphertext $C$ as shown on the left side of Algorithm 1. Likewise, given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and ciphertext $C \in \{0,1\}^n$, the plaintext $M$ is computed by $M \leftarrow$ GXHX[$E$, $\mathcal{H}]_L^{-1}(T, C)$, as shown on the right side of Algorithm 1. Clearly, GXHX[$E$, $\mathcal{H}$] is a correct and tidy tweakable permutation, i.e., for all keys $L \in \mathcal{L}$, all tweak-plaintext inputs $(T, M) \in \mathcal{T} \times \{0, 1\}^n$, and all tweak-ciphertext inputs $(T, C) \in \mathcal{T} \times \{0, 1\}^n$, it holds that

$$ \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, \text{GXHX}[E, \mathcal{H}]_L(T, M)) = M \text{ and} \\ \text{GXHX}[E, \mathcal{H}]_L(T, \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, C)) = C. $$

Figure 1 illustrates the encryption process schematically.

**Fig. 1:** Schematic illustration of the encryption process of a message $M$ and a tweak $T$ with the general GXHX[$E$, $\mathcal{H}$] tweakable block cipher. $E: \mathcal{K} \times \{0, 1\}^n \rightarrow \{0, 1\}^n$ is a keyed permutation and $\mathcal{H}: \mathcal{L} \times \mathcal{T} \rightarrow \{0, 1\}^n \times \mathcal{K} \times \{0, 1\}^n$ is a keyed universal hash function.

**Algorithm 1** Encryption and decryption algorithms of the general GXHX[$E$, $\mathcal{H}$] construction.

11: **function** GXHX[$E$, $\mathcal{H}$]$_L(T, M)$
12: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
13: $C \leftarrow E_{H_2}(M \oplus H_1) \oplus H_3$
14: **return** $C$

21: **function** GXHX[$E$, $\mathcal{H}$]$_L^{-1}(T, C)$
22: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
23: $M \leftarrow E_{H_2}^{-1}(C \oplus H_3) \oplus H_1$
24: **return** $M$

## 4 XHX: Deriving the Hash Keys from the Block Cipher

In the following, we adapt the general GXHX construction to XHX, which differs from the former in two aspects: first, XHX splits the hash function into three functions $\mathcal{H}_1$, $\mathcal{H}_2$, and $\mathcal{H}_3$; second, since we need at least $n + k$ bits of key material for the hash functions, it derives the hash-function key from a key $K$ using the block cipher $E$. We denote by $s \ge 0$ the number of derived hash-function keys $L_i$ and collect them together with the user-given key $K \in \{0, 1\}^k$ in a vector $L := (K, L_1, \dots, L_s)$. Moreover, we define a set of variables $I_i$ and $K_i$, for $1 \le i \le s$, which denote input and key to the block cipher $E$ for computing $L_i := E_{K_i}(I_i)$. We allow flexible, use-case-specific definitions for the values $I_i$ and $K_i$ as long as they fulfill certain properties that will be listed in Section 4.1. We redefine the key space of the hash functions to $\mathcal{L} \subseteq \{0, 1\}^k \times (\{0, 1\}^n)^s$. Note that the values $L_i$ are equal for all encryptions and decryptions and hence can be precomputed and stored for all encryptions under the same key.

**Fig. 2:** Schematic illustration of the XHX[$E$, $\mathcal{H}$] construction where we derive the hash-function keys $L_i$ from the block cipher $E$.

**Algorithm 2** Encryption and decryption algorithms of XHX where the keys are derived from the block cipher. We define $\mathcal{H} := (\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3)$. Note that the exact definitions of $I_i$ and $K_i$ are use-case-specific.

11: **function** XHX[$E$, $\mathcal{H}$].KEYSETUP$(K)$
12: **for** $i \leftarrow 1$ **to** $s$ **do**
13: $L_i \leftarrow E_{K_i}(I_i)$
14: $L \leftarrow (K, L_1, \dots, L_s)$
15: **return** $L$

31: **function** XHX[$E$, $\mathcal{H}$]$_K(T, M)$
32: $L \leftarrow$ XHX[$E$, $\mathcal{H}$].KEYSETUP$(K)$
33: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
34: $C \leftarrow E_{H_2}(M \oplus H_1) \oplus H_3$
35: **return** $C$

41: **function** XHX[$E$, $\mathcal{H}$]$_K^{-1}(T, C)$
42: $L \leftarrow$ XHX[$E$, $\mathcal{H}$].KEYSETUP$(K)$
43: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
44: $M \leftarrow E_{H_2}^{-1}(C \oplus H_3) \oplus H_1$
45: **return** $M$

*The Constructions by Wang et al.* The 32 constructions $\tilde{\mathbb{E}}[2]$ by Wang et al. are a special case of our construction with the parameters $s=1$, key length $k=n$, inputs $I_i, K_i \in \{0^n, K\}$, and the option $(I_i, K_i) = (0^n, 0^n)$ excluded. Their constructions compute exactly one value $L_1$ by $L_1 := E_{K_1}(I_1)$. One can easily describe their constructions in terms of the XHX framework, with three variables $X_1, X_2, X_3 \in \{K, L_1, K \oplus L_1\}$ for which it holds that $X_1 \neq X_2$ and $X_3 \neq X_2$, and which are used in XHX as follows:

$$
\begin{align*}
\mathcal{H}_1(L,T) &:= X_1, \\
\mathcal{H}_2(L,T) &:= X_2 \oplus T, \\
\mathcal{H}_3(L,T) &:= X_3.
\end{align*}
$$

## 4.1 Security Proof of XHX

This section concerns the security of the XHX construction in the ideal-cipher model, where the hash-function keys are derived with the (ideal) block cipher $E$.

**Properties of $\mathcal{H}$.** For our security analysis, we list a set of properties that we require from $\mathcal{H}$. We assume that $L$ is sampled uniformly at random from $\mathcal{L}$.
To address parts of the output of $\mathcal{H}$, we also use the notation $\mathcal{H}_i : \mathcal{L} \times \mathcal{T} \to \{0,1\}^{o_i}$ to refer to the function that computes the $i$-th output of $\mathcal{H}(L,T)$, for $1 \le i \le 3$, with $o_1 := n$, $o_2 := k$, and $o_3 := n$. Moreover, we define $\mathcal{H}_{1,2}(T) := (\mathcal{H}_1(L,T), \mathcal{H}_2(L,T))$ and $\mathcal{H}_{3,2}(T) := (\mathcal{H}_3(L,T), \mathcal{H}_2(L,T))$.

**Property P1.** For all distinct $T, T' \in \mathcal{T}$ and all $\Delta \in \{0,1\}^n$, it holds that

$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) \oplus \mathcal{H}_{i,2}(T') = (\Delta, 0^k)] \le \epsilon_1. $$

**Property P2.** For all $T \in \mathcal{T}$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that

$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) = (c_1, c_2)] \le \epsilon_2. $$

Note that Property P1 is equivalent to saying that $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $(n, k, \epsilon_1)$-pAXU; Property P2 is equivalent to the statement that $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $\epsilon_2$-AUniform. Clearly, it must hold that $\epsilon_1, \epsilon_2 \ge 2^{-(n+k)}$.

**Property P3.** For all $T \in \mathcal{T}$, all chosen $I_i, K_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that

$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{1,2}(T) \oplus (I_i, K_i) = (\Delta, 0^k)] \le \epsilon_3. $$

**Property P4.** For all $T \in \mathcal{T}$, all chosen $K_i, L_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that

$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{3,2}(T) \oplus (L_i, K_i) = (\Delta, 0^k)] \le \epsilon_4. $$

Properties P3 and P4 represent the probabilities that an adversary's query hits the inputs that have been chosen for computing a hash-function key.
We list a further property, which gives the probability that constants chosen by the adversary hit the values $I_i$ and $K_i$ used for generating the keys $L_i$:

**Property P5.** For $1 \le i \le s$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that

$$ \Pr_{K \leftarrow \mathcal{K}} [(I_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$

In other words, the tuples $(I_i, K_i)$ must contain close to $n$ bits of entropy and must not be predictable by an adversary with significantly higher probability; i.e., $\epsilon_5$ should not be larger than a small multiple of $1/2^n$. From Property P5 and the fact that the values $L_i$ are computed as $E_{K_i}(I_i)$ with the ideal cipher $E$, it follows for $1 \le i \le s$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$ that

$$ \Pr_{K \leftarrow \mathcal{K}} [(L_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$

**Fig. 3:** Schematic illustration of the oracles available to $\mathbf{A}$.

**Theorem 1.** Let $E \leftarrow \text{Block}(\mathcal{K}, \{0,1\}^n)$ be an ideal cipher. Further, let $\mathcal{H}_i: \mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^{o_i}$, for $1 \le i \le 3$, be families of hash functions for which Properties P1 through P4 hold, and let $K \leftarrow \mathcal{K}$. Moreover, let Property P5 hold for the choice of all $I_i$ and $K_i$. Let $s$ denote the number of keys $L_i$, $1 \le i \le s$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}]_K$. Then

$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$

*Proof Idea.* The proof of Theorem 1 follows from Lemma 1 together with Lemmas 2 and 3 below, whose proofs are given in Appendix A. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. Figure 3 illustrates the oracles available to $\mathbf{A}$. The queries by $\mathbf{A}$ are collected in a transcript $\tau$.
We will define a series of bad events that can happen during the interaction of $\mathbf{A}$ with its oracles:

- Collisions between two construction queries,
- Collisions between a construction and a primitive query,
- Collisions between two primitive queries,
- The case that the adversary finds an input-key tuple, in either a primitive or a construction query, that was used to derive a key $L_i$.

Lemma 2 bounds the probability that any of these events occurs in the transcript. We define a transcript as **bad** if it satisfies at least one such bad event, and define BADT as the set of all attainable bad transcripts.

**Lemma 2.** It holds that

$$ \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$

The proof is given in Appendix A.1.

**Good Transcripts.** Above, we have considered bad events. In contrast, we define GOODT as the set of all good transcripts, i.e., all attainable transcripts that are *not* bad.

**Lemma 3.** Let $\tau \in \text{GOODT}$ be a good transcript. Then

$$ \frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \ge 1. $$

The full proof can be found in Appendix A.2.

**Algorithm 3** The universal hash function $\mathcal{H}^*$.

- **Case k < n.** In this case, we could simply truncate $H_2$ from $n$ to $k$ bits. Theoretically, we could derive a longer key from $K$ for the computation of $H_1$ and $H_3$; however, we disregard this case since ciphers with a smaller key size than state length are very uncommon.

- **Case k > n.** In the third case, we truncate the hash key $K$ for the computation of $H_1$ and $H_3$ to $n$ bits. Moreover, we derive $s$ hashing keys $L_1, \dots, L_s$ from the block cipher $E$. For $H_2$, we concatenate the outputs of $s$ instances of $\mathcal{F}$.
This construction is well-known to be $\epsilon^s(m)$-pAXU if $\mathcal{F}$ is $\epsilon(m)$-pAXU. Finally, we truncate the result to $k$ bits if necessary.

**Lemma 4.** $\mathcal{H}^*$ is $2^{sn-k}\epsilon^{s+1}(m)$-pAXU and $2^{sn-k}\rho^{s+1}(m)$-AUniform. Moreover, it satisfies Properties P3 and P4 with probability $2^{sn-k}\rho^{s+1}(m)$ each, and Property P5 with $\epsilon_5 \le 2/2^k$ for our choice of the values $I_i$ and $K_i$.

*Remark 2.* The term $2^{sn-k}$ results from the potential truncation of $H_2$ if the key length $k$ of the block cipher is not a multiple of the state size $n$. $H_2$ is computed by concatenating the results of multiple independent invocations of a polynomial hash function $\mathcal{F}$ in $\text{GF}(2^n)$ under assumed independent keys. Clearly, if $\mathcal{F}$ is $\epsilon$-AXU, then their $sn$-bit concatenation is $\epsilon^s$-AXU. However, after truncating from $sn$ to $k$ bits, we may lose information, which results in the factor of $2^{sn-k}$. For the case $k=n$, it follows that $s=1$, and the terms $2^{sn-k}\epsilon^{s+1}(m)$ and $2^{sn-k}\rho^{s+1}(m)$ simplify to $\epsilon^2(m)$ and $\rho^2(m)$, respectively.

Our instantiation of $\mathcal{F}$ has $\epsilon(m) = \rho(m) = (m+2)/2^n$. Before we prove Lemma 4, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^*$.

**Corollary 1.** Let $E$ and XHX[$E, \mathcal{H}^*$] be defined as in Theorem 1, where the length of any tweak is limited to at most $m$ $n$-bit blocks. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}^*$]. Then

$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}^*], E^\pm; \tilde{\pi}^\pm, E^\pm) \le \frac{(q_C^2+2q_Cq_P+2q_Cs)(m+2)^{s+1}}{2^{n+k}} + \frac{4q_P s}{2^k} + \frac{s^2}{2^{n+1}}. $$

The proof of the corollary follows from combining Lemma 4 with Theorem 1 and is therefore omitted.
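To make the skeleton of Algorithm 2 concrete, the following is a minimal, runnable toy sketch of my own, not the concrete instantiation of Section 5: $n = k = 8$ bits, the ideal cipher $E$ is modelled by a seeded random permutation per key, the hash $\mathcal{H}$ is a mere placeholder (a real instantiation must satisfy Properties P1-P5), and the key-derivation inputs follow the choice $I_i = \langle i-1 \rangle_n$, $K_i = K$ used later in the analysis.

```python
import random

N = 8  # toy block and key length in bits

def E(key: int, x: int, inverse: bool = False) -> int:
    # Model of an ideal cipher: an independent, fixed permutation per key.
    perm = list(range(2 ** N))
    random.Random(key).shuffle(perm)
    return perm.index(x) if inverse else perm[x]

def key_setup(K: int, s: int = 1):
    # L_i := E_{K_i}(I_i) with the toy choice I_i = i - 1 and K_i = K.
    return (K,) + tuple(E(K, i) for i in range(s))

def H(L, T: int):
    # Placeholder hash returning (H1, H2, H3); for illustration only.
    rnd = random.Random(str((L, T)))
    return rnd.randrange(2 ** N), rnd.randrange(2 ** N), rnd.randrange(2 ** N)

def xhx_encrypt(K: int, T: int, M: int) -> int:
    H1, H2, H3 = H(key_setup(K), T)
    return E(H2, M ^ H1) ^ H3

def xhx_decrypt(K: int, T: int, C: int) -> int:
    H1, H2, H3 = H(key_setup(K), T)
    return E(H2, C ^ H3, inverse=True) ^ H1
```

Since the masks and the cipher key depend only on $(L, T)$, decryption inverts encryption for every key and tweak, mirroring the correctness argument for GXHX.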
*Proof of Lemma 4.* In the following, we assume that $T, T' \in \{0, 1\}^*$ are distinct tweaks of at most $m$ blocks each. We consider the pAXU property first.

**Partial Almost-XOR-Universality.** This is the probability that for any $\Delta \in \{0, 1\}^n$:

$$
\begin{align*}
& \Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) \oplus (\mathcal{F}_{K'}(T'), \mathcal{F}_{L_1, \dots, L_s}(T')) = (\Delta, 0^k)] \\
&= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) \oplus \mathcal{F}_{K'}(T') = \Delta, \mathcal{F}_{L_1, \dots, L_s}(T) \oplus \mathcal{F}_{L_1, \dots, L_s}(T') = 0^k] \\
&\le 2^{sn-k} \cdot \epsilon^{s+1}(m).
\end{align*}
$$

We assume independent hashing keys $K', L_1, \dots, L_s$ here. When $k=n$, it holds that $s=1$, and this probability is upper bounded by $\epsilon^2(m)$ since $\mathcal{F}$ is $\epsilon(m)$-AXU. In the case $k>n$, we compute $s$ words of $H_2$ that are concatenated and truncated to $k$ bits. Hence, $\mathcal{F}_{L_1, \dots, L_s}$ is $2^{sn-k} \cdot \epsilon^s(m)$-AXU. In combination with the AXU bound for $\mathcal{F}_{K'}$, we obtain the pAXU bound for $\mathcal{H}^*$ above.

**Almost-Uniformity.** Here, for any $(\Delta_1, \Delta_2) \in \{0,1\}^n \times \{0,1\}^k$, it must hold that

$$
\begin{align*}
\Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) = (\Delta_1, \Delta_2)] &= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta_1, \mathcal{F}_{L_1, \dots, L_s}(T) = \Delta_2] \\
&\le 2^{sn-k} \cdot \rho^{s+1}(m)
\end{align*}
$$

since $\mathcal{F}$ is $\rho(m)$-AUniform, using a similar argumentation for the cases $k=n$ and $k>n$ as for partial almost-XOR universality.
**Property P3.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P3 is equivalent to bounding

$$
\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta \oplus I_i, \mathcal{F}_{L_1, \dots, L_s}(T) = K]
$$

for a fixed $1 \le i \le s$. Here, this property is equivalent to almost uniformity; hence, the probability of the latter equality is at most $2^{sn-k} \cdot \rho^s(m)$. The probability of the former equality is at most $\rho(m)$ since the property considers a fixed $i$. Since we assume independence of $K$ and $L_1, \dots, L_s$, it holds that $\epsilon_3 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.

**Property P4.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P4 is equivalent to bounding

$$
\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta \oplus L_i, \mathcal{F}_{L_1, \dots, L_s}(T) = K]
$$

for a fixed $1 \le i \le s$. Using a similar argumentation as for Property P3, the probability is upper bounded by $\epsilon_4 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.

**Property P5.** We derive the hashing keys $L_i$ with the help of $E$ and the secret key $K$. In the simple case $s=1$, the probability that the adversary guesses the tuple $(I_1, K_1)$ used to derive the hashing key $L_1$, or the tuple $(L_1, K_1)$, is at most $1/2^k$. Under the reasonable assumption $s < 2^{k-1}$, the probability in the general case becomes, for fixed $i$:

$$
\Pr_{K \leftarrow \mathcal{K}} [ (I_i, K_i) = (c_1, c_2) ] \leq \frac{1}{2^k - s} \leq \frac{2}{2^k}.
$$

A similar argument bounds the probability that the adversary guesses any tuple $(L_i, K_i)$, for $1 \le i \le s$. Hence, it holds for $\mathcal{H}^*$ that $\epsilon_5 \le 2/2^k$.

**$\epsilon(m)$ and $\rho(m)$.** It remains to determine $\epsilon(m)$ and $\rho(m)$ for our instantiation of $\mathcal{F}_K(\cdot)$.
It maps a tweak $T = (T_1, \dots, T_m)$ to the result of

$$
\mathcal{F}_K(T) := \left( \bigoplus_{i=1}^{m} T_i \cdot K^{m+3-i} \right) \oplus \left( \langle |T| \rangle_n \cdot K \right) \oplus K.
$$

**Algorithm 4** The universal hash function $\mathcal{H}^2$.

11: **function** $\mathcal{H}^2_L(T)$
12: $(K, L_1, \dots, L_s) \leftarrow L$
13: $(T_1, T_2) \stackrel{n}{\leftarrow} T$
14: $K' \leftarrow \text{TRUNC}_n(K)$
15: $H_1 \leftarrow T_1 \boxdot K'$
16: $H_2 \leftarrow \text{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$
17: $H_3 \leftarrow T_1 \boxdot K'$
18: **return** $(H_1, H_2, H_3)$

21: **function** $\mathcal{F}_{L_i}(T_1 \parallel T_2)$
22: **return** $(T_1 \boxdot L_i) \oplus T_2$

This is a polynomial of degree at most $m+2$ in $K$, which is $(m+2)/2^n$-AXU. Moreover, it lacks fixed points, and for every $\Delta \in \{0, 1\}^n$ and any fixed choice of the blocks $T_1, \dots, T_m$, there are at most $m+2$ out of $2^n$ keys $K$ that fulfill $\mathcal{F}_K(T) = \Delta$. Hence, $\mathcal{F}$ is also $(m+2)/2^n$-AUniform. $\square$

$\mathcal{H}^*$ is a general construction which supports arbitrary tweak lengths. However, if we used $\mathcal{H}^*$ for $2n$-bit tweaks, we would need four Galois-field multiplications. We can hash more efficiently in this case, in fact optimally in terms of the number of multiplications. For this purpose, we define $\mathcal{H}^2$.

**$\mathcal{H}^2$ - A Hash Function for 2n-bit Tweaks.** Naively, for two-block tweaks $|T| = 2n$, an $\epsilon$-pAXU construction with $\epsilon \approx 1/2^{2n}$ could be achieved by simply multiplying the tweak with some key $L \in \mathrm{GF}(2^{2n})$ sampled uniformly over $\mathrm{GF}(2^{2n})$. However, we can realize a similarly secure construction more efficiently by using two multiplications over the smaller field $\mathrm{GF}(2^n)$.
Additional conditions, such as uniformity, are satisfied by introducing a squaring in the field, which avoids fixed points in multiplication-based universal hash functions. Following the notation of the previous sections, let $L = (K, L_1, \dots, L_s)$ be the key of our hash function. For $X, Y \in \mathrm{GF}(2^n)$, we define the operation $\boxdot : \mathrm{GF}(2^n) \times \mathrm{GF}(2^n) \to \mathrm{GF}(2^n)$ as

$$ X \boxdot Y := \begin{cases} X \cdot Y & \text{if } X \neq 0 \\ Y^2 & \text{otherwise.} \end{cases} $$

We assume a common encoding between the bit space and $\mathrm{GF}(2^n)$, i.e., a polynomial in the field is represented by its coefficient vector; e.g., the all-zero vector denotes the zero element, and the bit string $(0 \dots 01)$ denotes the identity element. Hereafter, we write $X$ interchangeably as an element of $\mathrm{GF}(2^n)$ or of $\{0, 1\}^n$. For $\mathcal{L} = \{0,1\}^k \times (\{0,1\}^n)^s$, $\mathcal{X} = (\{0, 1\}^n)^2$, and $\mathcal{Y} = \{0, 1\}^n \times \{0, 1\}^k \times \{0, 1\}^n$, the construction $\mathcal{H}^2 : \mathcal{L} \times \mathcal{X} \to \mathcal{Y}$ is defined in Algorithm 4. We note that the usage of the keys has been chosen carefully; e.g., a swap of $K$ and $L_1$ in $\mathcal{H}^2$ would invalidate Property P4.

**Lemma 5.** $\mathcal{H}^2$ is $2^{s+1}/2^{n+k}$-pAXU and $2^s/2^{n+k}$-AUniform, satisfies Properties P3 and P4 with probability $2/2^{n+k}$ each, and Property P5 with $\epsilon_5 = s/2^n$ for our choices of $I_i$ and $K_i$, for $1 \le i \le s$.

Before proving Lemma 5, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^2$.

**Corollary 2.** Let $E$ and XHX[$E$, $\mathcal{H}^2$] be defined as in Theorem 1. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E$, $\mathcal{H}^2$]$_K$.
Then

$$ \Delta_{\mathbf{A}}(\mathrm{XHX}[E, \mathcal{H}^2], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le \frac{2^{s+2}q_C^2 + 2^{s+1}q_Cq_P + 4q_Cs}{2^{n+k}} + \frac{2q_Ps^2}{2^n} + \frac{s^2}{2^{n+1}}. $$

Again, the proof of the corollary follows from combining Lemma 5 with Theorem 1 and is therefore omitted.

*Proof of Lemma 5.* Since $H_1$ and $H_3$ are computed identically, we can restrict the analysis of the properties of $\mathcal{H}^2$ to the outputs $(H_1, H_2)$. Note that $K$ and $L_1$ are independent. In the following, we denote the hash-function results for some tweak $T$ as $H_1, H_2, H_3$, and those for some tweak $T' \ne T$ as $H'_1, H'_2, H'_3$. Moreover, we denote the $n$-bit words of $H_2$ as $(H_2^1, \dots, H_2^s)$, and those of $H'_2$ as $(H'^1_2, \dots, H'^s_2)$.

**Partial Almost-XOR-Universality.** First, let us consider the pAXU property. It holds that $H_1 := T_1 \boxdot K'$ and $H_2 := \text{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$. Considering $H_1$, it must hold that $H'_1 = H_1 \oplus \Delta$, with

$$ \Delta = (T'_1 \boxdot K') \oplus (T_1 \boxdot K'). $$

For any $X \ne 0^n$, it is well-known that $X \boxdot Y$ is $1/2^n$-AXU. So, for any fixed $T_1$ and fixed $\Delta \in \{0, 1\}^n$, there is exactly one value $T'_1$ that fulfills the equation if $H'_1 \ne K' \boxdot K'$, and exactly two values if $H'_1 = K' \boxdot K'$, namely $T'_1 \in \{0^n, K'\}$. So

$$ \Pr_{K \leftarrow \{0,1\}^k} [ (T_1 \boxdot K') \oplus (T'_1 \boxdot K') = \Delta ] \le 2/2^n. $$

The argumentation for $H_2$ is similar. The probability that $L_i = 0^n$, for fixed $1 \le i \le s$, is at most $1/(2^n - s + 1)$, which is smaller than the probability of $H_2^i = H'^i_2$. So, in the remainder, we can concentrate on the case that all $L_i \ne 0^n$. W.l.o.g., we focus on the first word of $H_2$, i.e., $H_2^1$, in the following. For fixed $(T_1, T_2)$ and $T'_2$, there is exactly one value $T'_1$ such that
$H_2^1 = H'^1_2$ if $H_2^1 \ne (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 := T_1 \oplus (T_2 \oplus T'_2) \boxdot L_1^{-1}$. There exist exactly two values $T'_1$ if $H_2^1 = (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 \in \{0^n, L_1\}$. Hence, it holds that

$$ \Pr_{L_1 \leftarrow \{0,1\}^n} [H_2^1 = H'^1_2] \le 2/2^n. $$

The same argumentation follows for $H_2^i = H'^i_2$, for $2 \le i \le s$, since the keys $L_i$ are pairwise independent. Since $sn - k$ bits of $H_2$ and $H'_2$ are truncated if $k$ is not a multiple of $n$, the bound has to be multiplied by $2^{sn-k}$. With the factor of $2/2^n$ for $H_1$, it follows for fixed $\Delta \in \{0, 1\}^n$ that $\mathcal{H}^2$ is $\epsilon$-pAXU for $\epsilon$ upper bounded by

$$ \frac{2}{2^n} \cdot 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^{s+1}}{2^{n+k}}. $$
---PAGE_BREAK---

**Almost-Uniformity.** Here, we consider, for any $H_1$ and $H_2$, the probability

$$ \mathrm{Pr}_{L \leftarrow \mathcal{L}} [T_1 \boxdot K' = H_1, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T), \dots, \mathcal{F}_{L_s}(T)) = H_2]. $$

If $K' = 0^n$ and $H_1 = 0^n$, then the first equation may be fulfilled for any $T_1$. However, the probability for $K' = 0^n$ is $1/2^n$. So, we can assume $K' \neq 0^n$ in the remainder. Next, we focus again on the first word of $H_2$, i.e., $H_2^1$. For fixed $L_1$ and $H_2^1$, there exist at most two values $(T_1, T_2)$ that fulfill $(T_1 \boxdot L_1) \oplus T_2 = H_2^1$. In the case $H_1 \neq K' \boxdot K'$, there is exactly one value $T_1 := H_1 \boxdot K'^{-1}$ that yields $H_1$. Then, $T_1$, $L_1$, and $H_2^1$ determine $T_2 := H_2^1 \oplus (T_1 \boxdot L_1)$ uniquely. In the opposite case that $H_1 = K' \boxdot K'$, there exist exactly two values $(T_1, T'_1)$ that yield $H_1$, namely $0^n$ and $K'$. Each of those determines $T_2$ uniquely.
The probability that the so-fixed values $T_1, T_2$ yield also $H_2^2, \dots, H_2^s$ is at most $(2/2^n)^{s-1}$ if $k$ is a multiple of $n$ since the keys $L_i$ are pairwise independent; if $k$ is not a multiple of $n$, we have again an additional factor of $2^{sn-k}$ from the truncation. So, $\mathcal{H}^2$ is $\epsilon$-AUniform for $\epsilon$ at most

$$ 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^s}{2^{n+k}}. $$

**Property P3.** Given $I_i = \langle i - 1 \rangle$ and $K_i = K$, for $1 \le i \le s$, $\epsilon_3$ is the maximal probability that a chosen $(T_1, T_2)$ yields $\mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i - 1 \rangle, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T), \dots, \mathcal{F}_{L_s}(T)) = K]$, for some $i$. This can be rewritten to

$$
\begin{aligned}
& \mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle] \\
& \quad \cdot \mathrm{Pr}[\mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T), \dots, \mathcal{F}_{L_s}(T)) = K \mid T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle].
\end{aligned}
$$

For fixed $\Delta \neq K' \boxdot K'$, there is exactly one value $T_1$ that satisfies the first part of the equation; otherwise, there are exactly two values $T_1$ if $\Delta = K' \boxdot K'$. Moreover, $K'$ is secret; so, the values $T_1$ require that the adversary guesses $K'$ correctly. Given fixed $T_1$, $\Delta$, and $K'$, there is exactly one value $T_2$ that matches the first $n$ bits of $K$: $T_2 := (T_1 \boxdot L_1) \oplus K[k-1..k-n]$. The remaining bits of $K$ are matched with probability $2^{sn-k}/2^{(s-1)n}$, assuming that the keys $L_i$ are independent. Hence, it holds that $\epsilon_3$ is at most

$$ \frac{2}{2^n} \cdot \frac{2^{sn-k}}{2^{sn}} = \frac{2}{2^{n+k}}. $$

**Property P4.** This follows from a similar argumentation as for Property P3. Hence, it holds that $\epsilon_4 \le 2/2^{n+k}$.
$\square$

**Acknowledgments.** This work was initiated during the group sessions of the 6th Asian Workshop on Symmetric Cryptography (ASK 2016) held in Nagoya. We thank the anonymous reviewers of ToSC 2017 and Latincrypt 2017 for their fruitful comments. We thank Ashwin Jha and Mridul Nandi for their remark in [7], wherein they pointed us to a subtle error in our formulation of Fact 1 that has been corrected in this version of 08 March 2021. As they noted, our proof of Lemma 3 implicitly used a special case of compressing sequences, for which the fact already held. Therefore, our proof only had to be slightly augmented to point this out, but does not change otherwise.
---PAGE_BREAK---

References

1. Christof Beierle, Jérémy Jean, Stefan Kölbl, Gregor Leander, Amir Moradi, Thomas Peyrin, Yu Sasaki, Pascal Sasdrich, and Siang Meng Sim. The SKINNY Family of Block Ciphers and Its Low-Latency Variant MANTIS. In Matthew Robshaw and Jonathan Katz, editors, *CRYPTO (2)*, volume 9815 of *Lecture Notes in Computer Science*, pages 123–153. Springer, 2016.

2. Mihir Bellare and Phillip Rogaway. The Security of Triple Encryption and a Framework for Code-Based Game-Playing Proofs. In Serge Vaudenay, editor, *EUROCRYPT*, volume 4004 of *Lecture Notes in Computer Science*, pages 409–426. Springer, 2006.

3. John Black. The Ideal-Cipher Model, Revisited: An Uninstantiable Blockcipher-Based Hash Function. In Matthew J. B. Robshaw, editor, *FSE*, volume 4047 of *Lecture Notes in Computer Science*, pages 328–340. Springer, 2006.

4. Shan Chen and John P. Steinberger. Tight Security Bounds for Key-Alternating Ciphers. In Phong Q. Nguyen and Elisabeth Oswald, editors, *EUROCRYPT*, volume 8441 of *Lecture Notes in Computer Science*, pages 327–350. Springer, 2014.

5. Peter Gazi and Ueli M. Maurer. Cascade Encryption Revisited. In Mitsuru Matsui, editor, *ASIACRYPT*, volume 5912 of *Lecture Notes in Computer Science*, pages 37–51. Springer, 2009.

6.
Jérémy Jean, Ivica Nikolic, and Thomas Peyrin. Tweaks and Keys for Block Ciphers: The TWEAKEY Framework. In Palash Sarkar and Tetsu Iwata, editors, *ASIACRYPT (2)*, volume 8874 of *Lecture Notes in Computer Science*, pages 274–288. Springer, 2014.

7. Ashwin Jha and Mridul Nandi. Tight Security of Cascaded LRW2. *J. Cryptol.*, 33(3):1272–1317, 2020.

8. Rodolphe Lampe and Yannick Seurin. Tweakable Blockciphers with Asymptotically Optimal Security. In Shiho Moriai, editor, *FSE*, volume 8424 of *Lecture Notes in Computer Science*, pages 133–151. Springer, 2013.

9. Will Landecker, Thomas Shrimpton, and R. Seth Terashima. Tweakable Blockciphers with Beyond Birthday-Bound Security. In Reihaneh Safavi-Naini and Ran Canetti, editors, *CRYPTO*, volume 7417 of *Lecture Notes in Computer Science*, pages 14–30. Springer, 2012.

10. Jooyoung Lee. Towards Key-Length Extension with Optimal Security: Cascade Encryption and XOR-cascade Encryption. In Thomas Johansson and Phong Q. Nguyen, editors, *EUROCRYPT*, volume 7881 of *Lecture Notes in Computer Science*, pages 405–425. Springer, 2013.

11. Moses Liskov, Ronald L. Rivest, and David Wagner. Tweakable Block Ciphers. In Moti Yung, editor, *CRYPTO*, volume 2442 of *Lecture Notes in Computer Science*, pages 31–46. Springer, 2002.

12. Bart Mennink. Optimally Secure Tweakable Blockciphers. In Gregor Leander, editor, *FSE*, volume 9054 of *Lecture Notes in Computer Science*, pages 428–448. Springer, 2015.

13. Kazuhiko Minematsu. Beyond-Birthday-Bound Security Based on Tweakable Block Cipher. In Orr Dunkelman, editor, *FSE*, volume 5665 of *Lecture Notes in Computer Science*, pages 308–326. Springer, 2009.

14. Kazuhiko Minematsu and Tetsu Iwata. Tweak-Length Extension for Tweakable Blockciphers. In Jens Groth, editor, *IMA Int. Conf.*, volume 9496 of *Lecture Notes in Computer Science*, pages 77–93. Springer, 2015.
---PAGE_BREAK---

15. Yusuke Naito.
Tweakable Blockciphers for Efficient Authenticated Encryptions with Beyond the Birthday-Bound Security. *IACR Transactions on Symmetric Cryptology*, 2017(2):1–26, 2017.

16. Jacques Patarin. The "Coefficients H" Technique. In Roberto Maria Avanzi, Liam Keliher, and Francesco Sica, editors, *SAC*, volume 5381 of *Lecture Notes in Computer Science*, pages 328–345. Springer, 2008.

17. Phillip Rogaway. Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes OCB and PMAC. In *ASIACRYPT*, volume 3329 of *Lecture Notes in Computer Science*, pages 16–31. Springer, 2004.

18. Richard Schroeppel and Hilarie Orman. The Hasty Pudding Cipher. *AES candidate submitted to NIST*, 1998.

19. Thomas Shrimpton and R. Seth Terashima. A Modular Framework for Building Variable-Input-Length Tweakable Ciphers. In Kazue Sako and Palash Sarkar, editors, *ASIACRYPT (1)*, volume 8269 of *Lecture Notes in Computer Science*, pages 405–423. Springer, 2013.

20. Lei Wang, Jian Guo, Guoyan Zhang, Jingyuan Zhao, and Dawu Gu. How to Build Fully Secure Tweakable Blockciphers from Classical Blockciphers. In Jung Hee Cheon and Tsuyoshi Takagi, editors, *ASIACRYPT (1)*, volume 10031 of *Lecture Notes in Computer Science*, pages 455–483. Springer, 2016.

A Proof Details

The proof of Theorem 1 follows from Lemmas 1, 2, and 3. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. W.l.o.g., we assume that **A** asks neither duplicate queries nor trivial queries to which it already knows the answer, e.g., by feeding the result of an encryption query to the corresponding decryption oracle or vice versa. The queries by **A** are collected in a transcript $\tau$.
We define that $\tau$ is composed of two disjoint sets of queries $\tau_C$ and $\tau_P$ plus the key $L$, i.e., $\tau = \tau_C \cup \tau_P \cup \{L\}$, where $\tau_C := \{(M^i, C^i, T^i, H_1^i, H_2^i, H_3^i, X^i, Y^i, d^i)\}_{1\le i\le q_C}$ denotes the queries by **A** to the construction oracle plus the internal variables $H_1^i, H_2^i, H_3^i$ (i.e., the outputs of $\mathcal{H}_1, \mathcal{H}_2$, and $\mathcal{H}_3$, respectively), $X^i$, and $Y^i$ (where $X^i \leftarrow H_1^i \oplus M^i$ and $Y^i \leftarrow H_3^i \oplus C^i$, respectively); and $\tau_P := \{(\hat{K}^i, \hat{X}^i, \hat{Y}^i, d^i)\}_{1\le i\le q_P}$ the queries to the primitive oracle. Both sets also store binary variables $d^i$ that indicate the direction of the $i$-th query, where $d^i = 1$ represents the fact that the $i$-th query is an encryption query, and $d^i = 0$ that it is a decryption query. The internal variables for one call to XHX are as given in Algorithm 2 and Figure 2.

We apply a common strategy for handling the bad events in both worlds: in the real world, all secrets (i.e., the hash-function key $L$) are revealed to **A** after it has finished its interaction with the available oracles, but before it has output its decision bit regarding which world it interacted with. Similarly, in the ideal world, the oracle samples the hash-function key uniformly at random, independently from the choice of $E$ and $\tilde{\pi}$, $L \leftarrow \mathcal{L}$, and also reveals $L$ to **A** after the adversary has finished its interaction and before it has output its decision bit. The internal variables in construction queries – $H_1^i, H_2^i, H_3^i, X^i, Y^i$ – can then be computed and added to the transcript also in the ideal world, using the revealed key $L$ and the oracle inputs and outputs $T^i$, $M^i$, and $C^i$.
---PAGE_BREAK---

Let $1 \le i \ne j \le q$.
We define that an attainable transcript $\tau$ is **bad**, i.e., $\tau \in \text{BADT}$, if one of the following conditions is met:

- bad$_1$: There exist $i \neq j$ s.t. $(H_2^i, X^i) = (H_2^j, X^j)$.

- bad$_2$: There exist $i \neq j$ s.t. $(H_2^i, Y^i) = (H_2^j, Y^j)$.

- bad$_3$: There exist $i, j$ s.t. $(H_2^i, X^i) = (\hat{K}^j, \hat{X}^j)$.

- bad$_4$: There exist $i, j$ s.t. $(H_2^i, Y^i) = (\hat{K}^j, \hat{Y}^j)$.

- bad$_5$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{X}^i) = (\hat{K}^j, \hat{X}^j)$.

- bad$_6$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{Y}^i) = (\hat{K}^j, \hat{Y}^j)$.

- bad$_7$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(X^j, H_2^j) = (I_i, K_i)$ and $d^j = 1$.

- bad$_8$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(Y^j, H_2^j) = (L_i, K_i)$ and $d^j = 0$.

- bad$_9$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\hat{X}^j, \hat{K}^j) = (I_i, K_i)$.

- bad$_{10}$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\hat{Y}^j, \hat{K}^j) = (L_i, K_i)$.

- bad$_{11}$: There exist $i, j \in \{1, \dots, s\}$ with $i \neq j$ s.t. $(K_i, L_i) = (K_j, L_j)$ but $I_i \neq I_j$.

The events

- bad$_1$ and bad$_2$ consider collisions between two construction queries,

- bad$_3$ and bad$_4$ consider collisions between primitive and construction queries,

- bad$_5$ and bad$_6$ consider collisions between two primitive queries,

- bad$_7$ through bad$_{10}$ address the case that the adversary may find an input–key tuple in either a primitive or construction query that has been used to derive one of the subkeys $L_i$, and

- bad$_{11}$ addresses the event that the ideal oracle produces a collision while sampling the hash-function keys independently and uniformly at random.
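As a side illustration (not part of the proof), the collision events bad$_1$–bad$_6$ amount to simple duplicate checks over the transcript sets $\tau_C$ and $\tau_P$. The following Python sketch assumes toy integer representations of the query tuples; all function names are ours.

```python
from itertools import combinations

def has_collision(pairs):
    """True iff two entries of the list are equal as tuples."""
    return any(a == b for a, b in combinations(pairs, 2))

def bad_events(constr, prim):
    """constr: list of (H2, X, Y) per construction query;
    prim: list of (K, X, Y) per primitive query.
    Returns the set of triggered event indices among bad_1..bad_6."""
    bad = set()
    if has_collision([(h2, x) for h2, x, _ in constr]):
        bad.add(1)  # bad_1: same (H2, X) in two construction queries
    if has_collision([(h2, y) for h2, _, y in constr]):
        bad.add(2)  # bad_2: same (H2, Y) in two construction queries
    prim_in = {(k, x) for k, x, _ in prim}
    prim_out = {(k, y) for k, _, y in prim}
    if any((h2, x) in prim_in for h2, x, _ in constr):
        bad.add(3)  # bad_3: construction (H2, X) hits a primitive (K, X)
    if any((h2, y) in prim_out for h2, _, y in constr):
        bad.add(4)  # bad_4: construction (H2, Y) hits a primitive (K, Y)
    if has_collision([(k, x) for k, x, _ in prim]):
        bad.add(5)  # bad_5: duplicate primitive input tuple
    if has_collision([(k, y) for k, _, y in prim]):
        bad.add(6)  # bad_6: duplicate primitive output tuple
    return bad
```

Events bad$_5$ and bad$_6$ are checked only for symmetry; as the text notes, they cannot occur for an adversary that asks no duplicate or trivial queries.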
Note that the events bad$_5$ and bad$_6$ are listed here only for the sake of completeness. We will show briefly that these events can never occur.

## A.1 Proof of Lemma 2

*Proof.* In the following, we upper bound the probabilities of each bad event.

**bad$_1$ and bad$_2$.** Events bad$_1$ and bad$_2$ represent the cases that two distinct construction queries would feed the same tuple of key and input to the underlying primitive $E$ if the construction were the real $\tilde{E}$; bad$_1$ considers the case that the values collide as $H_2^i = H_2^j$ and $X^i = X^j$. In the real world, it follows that $Y^i = Y^j$, while this holds only with small probability in the ideal world. The event bad$_2$ concerns the case that the values collide as $H_2^i = H_2^j$ and $Y^i = Y^j$. Again, in the real world, it then follows that $X^i = X^j$, whereas this holds only with small probability in the ideal world. So, both events would allow **A** to distinguish the two worlds. Let us consider bad$_1$ first, and let us start in the real
---PAGE_BREAK---

world. Since **A** asks no duplicate queries, it must hold that two distinct queries $(M^i, T^i)$ and $(M^j, T^j)$ yielded

$$X^i = (M^i \oplus H_1^i) = (M^j \oplus H_1^j) = X^j \quad \text{and} \quad H_2^i = H_2^j.$$

We define $\Delta := M^i \oplus M^j$ and consider two subcases: in the subcase that $T^i = T^j$, it automatically holds that $H_2^i = H_2^j$ and $H_1^i = H_1^j$. However, this also implies that $M^i = M^j$, i.e., **A** would have asked a duplicate query, which is prohibited. So, it must hold that $T^i \neq T^j$ in the real world.

If $T^i = T^j$ in the ideal world, the plaintexts must be distinct, $M^i \neq M^j$, since we assumed that **A** does not make duplicate queries. Since $\tilde{\pi}(T^i, \cdot)$ is a permutation, the resulting ciphertexts are also distinct: $C^i \neq C^j$.
From $T^i = T^j$, it follows that $H_1^i = H_1^j$ and thus, $X^i$ and $X^j$ cannot be equal:

$$X^i = M^i \oplus H_1^i \neq M^j \oplus H_1^j = X^j,$$

which contradicts our definition of bad$_1$. So, it must hold that $T^i \neq T^j$ also in the ideal world. From Property P1 and over $L \leftarrow \mathcal{L}$, it then holds that

$$
\begin{align*}
\Pr[\text{bad}_1] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (X^i, H_2^i) = (X^j, H_2^j)] \\
&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{1,2}(T^i) \oplus \mathcal{H}_{1,2}(T^j) = (\Delta, 0^k)] \le \binom{q_C}{2} \epsilon_1.
\end{align*}
$$

Using a similar argumentation, it follows also from Property P1 that, for $T^i \neq T^j$,

$$
\begin{align*}
\Pr[\text{bad}_2] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (Y^i, H_2^i) = (Y^j, H_2^j)] \\
&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{3,2}(T^i) \oplus \mathcal{H}_{3,2}(T^j) = (\Delta, 0^k)] \le \binom{q_C}{2} \epsilon_1.
\end{align*}
$$

**bad$_3$ and bad$_4$.** Events bad$_3$ and bad$_4$ represent the cases that a construction query to the real construction $\tilde{E}$ would feed the same key and input $(H_2^i, X^i)$ to the underlying primitive $E$ as a primitive query $(\hat{K}^j, \hat{X}^j)$. This is equivalent to guessing the hash-function output for the $i$-th query. Let us consider bad$_3$ first. Over $L \leftarrow \mathcal{L}$ and for all $(\hat{K}^j, \hat{X}^j)$, the probability of bad$_3$ is upper bounded by

$$
\begin{align*}
\Pr[\text{bad}_3] &= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (X^i, H_2^i) = (\hat{X}^j, \hat{K}^j)] \\
&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (H_1^i = M^i \oplus \hat{X}^j) \land (H_2^i = \hat{K}^j)] \\
&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{1,2}(T^i) = (M^i \oplus \hat{X}^j, \hat{K}^j)] \\
&\le q_C \cdot q_P \cdot \epsilon_2
\end{align*}
$$
---PAGE_BREAK---

due to Property P2.
Using a similar argumentation, it holds that

$$
\begin{align*}
\Pr[\text{bad}_4] &= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (Y^i, H_2^i) = (\hat{Y}^j, \hat{K}^j)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (H_3^i = C^i \oplus \hat{Y}^j) \land (H_2^i = \hat{K}^j)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{3,2}(T^i) = (C^i \oplus \hat{Y}^j, \hat{K}^j)\right] \\
&\le q_C \cdot q_P \cdot \epsilon_2.
\end{align*}
$$

**bad$_5$ and bad$_6$.** Events bad$_5$ and bad$_6$ represent the cases that two distinct primitive queries feed the same key and the same input to the primitive $E$. Clearly, in both worlds, this implies that **A** either has asked a duplicate primitive query or has fed the result of an earlier primitive query to the primitive's inverse oracle. Both types of queries are forbidden; so, these events cannot occur.

**bad$_7$ and bad$_8$.** Let us consider bad$_7$ first, which considers the case that the $j$-th construction query in encryption direction matches the inputs to $E$ used for generating a hash-function subkey $L_i$, for some $j \in [1..q_C]$ and $i \in [1..s]$; bad$_8$ considers the equivalent case in decryption direction. For the event bad$_7$, it must hold that $M^j \oplus \mathcal{H}_1(L, T^j) = I_i$ and $\mathcal{H}_2(L, T^j) = K_i$. Concerning the tuples $(I_i, K_i)$, we cannot exclude in general that all values $K_1, \dots, K_s$ are equal and that, therefore, all $L_i$ are outputs of the same permutation. From Property P3, the fact that there have been $q_C$ queries and that the adversary can hit one out of $s$ values, and over $L \leftarrow \mathcal{L}$, it follows that the probability for this event can be upper bounded by

$$
\begin{align*}
\Pr[\text{bad}_7] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (X^j, H_2^j) = (I_i, K_i)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{1,2}(T^j) \oplus (I_i, K_i) = (M^j, 0^k)\right] \\
&\le q_C \cdot s \cdot \epsilon_3.
\end{align*}
$$

Using a similar argument, it follows from Property P4 that

$$
\begin{align*}
\Pr[\text{bad}_8] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (Y^j, H_2^j) = (L_i, K_i)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{3,2}(T^j) \oplus (L_i, K_i) = (C^j, 0^k)\right] \\
&\le q_C \cdot s \cdot \epsilon_4.
\end{align*}
$$

**bad$_9$ and bad$_{10}$.** The event bad$_9$ models the case that a primitive query in encryption direction matches the key and input used for generating $L_i$, for some $i \in [1..s]$: $(\hat{X}^j, \hat{K}^j) = (I_i, K_i)$. The event bad$_{10}$ considers the equivalent case in decryption direction. From our assumption that Property P5 holds, the fact that the adversary can hit one out of $s$ values, and over $K \leftarrow \mathcal{K}$, the probability for the former event can be upper bounded by

$$
\Pr[\text{bad}_9] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{X}^j, \hat{K}^j) = (I_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
$$
---PAGE_BREAK---

We can use a similar argument and Property P5 to upper bound the probability that the $j$-th query of **A** hits $(L_i, K_i)$ by

$$
\Pr[\text{bad}_{10}] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{Y}^j, \hat{K}^j) = (L_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
$$

**bad$_{11}$.**
It is possible that a number of key inputs are equal, i.e., $K_i = K_j$ for some $i, j \in \{1, \dots, s\}$, $i \neq j$. The event bad$_{11}$ models the case that the ideal oracle produces a collision $(K_i, L_i) = (K_j, L_j)$ although it holds that $I_i \neq I_j$, which indicates that the hash-function keys cannot be the result of computing them from the block cipher $E$. In the worst case, all keys $K_i$, for $1 \le i \le s$, are equal. So, the probability for this event can be upper bounded by

$$
\mathrm{Pr}[\mathrm{bad}_{11}] = \mathrm{Pr}[\exists i, j \in \{1, \dots, s\}, i \neq j : (K_i, L_i) = (K_j, L_j), I_i \neq I_j] \leq \frac{s^2}{2^{n+1}}.
$$

Our claim in Lemma 2 follows from summing up the probabilities of all bad events.

Before proceeding with the analysis of good transcripts, we formulate a short fact that will prove useful later on. In the remainder, we denote the falling factorial as $(n)_k := \frac{n!}{(n-k)!}$. First, we recall a definition from [7].

**Definition 4 (Compressing Sequences [7]).** For integers $r \le s$, let $U = (u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences over $\mathbb{N}$. We say that $V$ compresses to $U$ if there exists a partition $\mathcal{P}$ of $\{1, \dots, s\}$ such that $\mathcal{P}$ contains exactly $r$ entries, say $\mathcal{P}_1, \dots, \mathcal{P}_r$, and for all $i \in \{1, \dots, r\}$, it holds that $u_i = \sum_{j \in \mathcal{P}_i} v_j$.

The following fact has been updated to match Proposition 1 of [7], where we changed the condition to $r \le s$. The proof is given there.

**Fact 1 (A Variant of Proposition 1 in [7]).** For integers $r \le s$, let $U=(u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences of positive integers such that $V$ compresses to $U$.
Then, it holds for any positive integer $n$ with $N := 2^n \ge \sum_{i=1}^{r} u_i$ that

$$
\prod_{i=1}^{r} (N)_{u_i} \leq \prod_{i=1}^{s} (N)_{v_i} \quad \text{and thus} \quad \prod_{i=1}^{r} \frac{1}{(N)_{u_i}} \geq \prod_{i=1}^{s} \frac{1}{(N)_{v_i}}.
$$

A.2 Proof of Lemma 3

*Proof.* Fix a good transcript $\tau$. In the ideal world, the probability to obtain $\tau$ is

$$
\begin{align*}
\Pr[\Theta_{\text{ideal}} = \tau] &= \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \Pr_{\forall g} [L_g \leftarrow \{0,1\}^n : L_g] \\
&\qquad \cdot \Pr[K \leftarrow \mathcal{K} : K].
\end{align*}
$$

In the real world, the probability to obtain a transcript $\tau$ is given by

$$
\Pr[\Theta_{\text{real}} = \tau] = \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \cdot \Pr[K \leftarrow \mathcal{K} : K].
$$
---PAGE_BREAK---

First, we consider the distribution of keys. In the ideal world, all components of $L = (K, L_1, \dots, L_s)$ are sampled uniformly and independently at random; the real world employs the block cipher $E$ for generating $L_1, \dots, L_s$. Let us focus on $K$, which is sampled uniformly in both worlds:

$$ \Pr[K \leftarrow \mathcal{K} : K] = \frac{1}{|\mathcal{K}|}. $$

The remaining hash-function keys $L_1, \dots, L_s$ will be considered in turn. To prove the remainder of our claim in Lemma 3, we have to show that

$$ \begin{align} & \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \tag{1} \\ & \ge \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \prod_{g=1}^s \Pr[L_g \leftarrow \{0, 1\}^n : L_g]. \nonumber \end{align} $$

We reindex the keys used in primitive queries to $\hat{\mathbb{K}}^1, \dots, \hat{\mathbb{K}}^\ell$ to eliminate duplicates. Given those indices, we group all primitive queries into sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, s.t. all sets are distinct and each set $\hat{\mathcal{K}}^j$ contains exactly the primitive queries with key $\hat{\mathbb{K}}^j$:

$$ \hat{\mathcal{K}}^j := \left\{ (\hat{K}^i, \hat{X}^i, \hat{Y}^i) : \hat{K}^i = \hat{\mathbb{K}}^j \right\}. $$

We denote by $\hat{k}^j = |\hat{\mathcal{K}}^j|$ the number of queries with key $\hat{\mathbb{K}}^j$. Clearly, it holds that $\ell \le q_P$ and $\sum_{j=1}^\ell \hat{k}^j = q_P$.

Moreover, we also reindex the tweaks of the construction queries to $\mathbb{T}^1, \dots, \mathbb{T}^r$ for the purpose of eliminating duplicates. Given these new indices, we group all construction queries into sets $\mathcal{T}^j$, for $1 \le j \le r$, s.t. all sets are distinct and each set $\mathcal{T}^j$ contains exactly the construction queries with tweak $\mathbb{T}^j$:

$$ \mathcal{T}^j := \left\{ (T^i, M^i, C^i) : T^i = \mathbb{T}^j \right\}. $$

We denote by $t^j = |\mathcal{T}^j|$ the number of queries with tweak $\mathbb{T}^j$. It holds that $r \le q_C$ and $\sum_{j=1}^r t^j = q_C$.

First, we consider the probability of an obtained good transcript in the ideal world. Therein, all components $L_1, \dots, L_s$ are sampled independently and uniformly at random from $\{0, 1\}^n$. So, in the ideal world, it holds that

$$ \prod_{g=1}^{s} \Pr[L_g \leftarrow \{0,1\}^n : L_g] = \frac{1}{(2^n)^s}. $$

Recall that every $\tilde{\pi}(\mathbb{T}^j, \cdot)$ and $\tilde{\pi}^{-1}(\mathbb{T}^j, \cdot)$ is a permutation, and recall the assumption that **A** does not ask duplicate queries or such to which it already knows the answer. So, all queries are pairwise distinct.
The probability to obtain the outputs of our transcript for some fixed tweak $\mathbb{T}^j$ is given by

$$ \frac{1}{2^n \cdot (2^n - 1) \cdots (2^n - t^j + 1)} = \frac{1}{(2^n)_{t^j}}. $$
---PAGE_BREAK---

The same applies to the outputs of the primitive queries in our transcript for some fixed key $\hat{\mathbb{K}}^j$:

$$ \frac{1}{(2^n)_{\hat{k}^j}}. $$

The outputs of construction and primitive queries are independent from each other in the ideal world. Over all disjoint key and tweak sets, the probability for obtaining $\tau$ in the ideal world is given by

$$ \mathrm{Pr}[\Theta_{\mathrm{ideal}} = \tau] = \left(\prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}\right) \cdot \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \frac{1}{(2^n)^s} \cdot \frac{1}{|\mathcal{K}|}. \quad (2) $$

It remains to lower bound the probability of obtaining $\tau$ in the real world. We observe that for every pair of queries $i$ and $j$ with $T^i = T^j$, it holds that $H_2^i = H_2^j$, i.e., both queries always target the same underlying permutation. Moreover, in the real world, two distinct tweaks $T^i \neq T^j$ can still collide in their hash-function outputs $H_2^i = H_2^j$. In this case, the queries with tweaks $T^i$ and $T^j$ also use the same permutation. Furthermore, there may be hash-function outputs $H_2^i$ from construction queries that are identical to keys $\hat{K}^j$ that were used in primitive queries. In this case, both queries also employ the same permutation, and so, the outputs from primitive and construction queries are not independent as in the ideal world. Moreover, the derived keys $L_i$ are also constructed from the same block cipher $E$; hence, the inputs $K_i$ may also use the same permutation as primitive and construction queries.
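The regrouping arguments that follow repeatedly invoke Fact 1. As a quick numerical sanity check (illustrative only; the function names are ours), the falling-factorial inequality can be tested on small compressing sequences:

```python
from math import prod

def falling(N, u):
    """Falling factorial (N)_u = N * (N-1) * ... * (N-u+1)."""
    return prod(N - i for i in range(u))

def check_fact1(N, U, V):
    """Fact 1: if V compresses to U (each u_i is the sum of one block of
    a partition of V) and N >= sum(U), then
    prod_i (N)_{u_i} <= prod_j (N)_{v_j}."""
    assert N >= sum(U)
    return prod(falling(N, u) for u in U) <= prod(falling(N, v) for v in V)

# Example: V = (2, 3, 1) compresses to U = (5, 1), since 5 = 2 + 3 and 1 = 1.
```

Intuitively, merging blocks of queries under one permutation forces more outputs to be pairwise distinct, which can only decrease the count of compatible permutations.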
For our purpose, we also reindex the keys in all primitive queries to $\hat{\mathbb{K}}^1, \dots, \hat{\mathbb{K}}^\ell$, and reindex the tweaks in construction queries to $\mathbb{T}^1, \dots, \mathbb{T}^r$, to eliminate duplicates. We define key sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, and tweak sets $\mathcal{T}^j$, for $1 \le j \le r$, analogously as we did for the ideal world. Moreover, for every so-indexed tweak $\mathbb{T}^i$, we compute its corresponding value $H_2^i$. We also reindex the hash values to $\mathbb{H}_2^1, \dots, \mathbb{H}_2^u$ for duplicate elimination, and group the construction queries into sets

$$ \mathcal{H}_2^j := \left\{ (T^i, M^i, C^i) : \mathcal{H}_2(L, T^i) = \mathbb{H}_2^j \right\}. $$

We denote by $h_2^j = |\mathcal{H}_2^j|$ the number of queries whose tweak maps to $\mathbb{H}_2^j$. Clearly, it still holds that $\sum_{j=1}^{u} h_2^j = q_C$. We can define an ordering s.t. for all $1 \le i \le u$, $\mathbb{T}^i$ is mapped to $\mathbb{H}_2^i$. Since, for all $1 \le i \le r$, all queries with tweak $\mathbb{T}^i$ are contained in exactly one set $\mathcal{H}_2^j$, for some $j \in \{1, \dots, u\}$, it holds that

$$ \sum_{j=1}^{u} h_2^{j} = \sum_{i=1}^{r} t^{i} = q_{C}, \quad u \le r, \quad \text{and} \quad h_{2}^{i} \ge t^{i}, \text{ for all } 1 \le i \le u. $$

Note that the sequence containing the numbers of occurrences of the tweak values compresses to the sequence containing the numbers of occurrences of the hash values. Equal tweaks map to the same hash value; if the hashes of distinct tweaks $\mathbb{T}^i$ and $\mathbb{T}^j$ are identical, then the number of occurrences of their common hash value is the sum of (at least) their numbers of occurrences. Thus, the sequences are compressing, and it follows from Fact 1 that

$$
\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}} \geq \prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}.
$$

In addition, we reindex the key inputs $K_i$ that are used for generating the keys $L_1, \dots, L_s$ to $\mathbb{K}^1, \dots, \mathbb{K}^w$ to eliminate duplicates, and group all tuples $(I_i, K_i)$ into sets $\mathcal{K}^j$, for $1 \le j \le w$, s.t. all sets are distinct and each set contains exactly the key-generating tuples with key $\mathbb{K}^j$:

$$
\mathcal{K}^j := \left\{ (I_i, K_i) : K_i = \mathbb{K}^j \right\}.
$$

On this base, we unify and reindex the values $\mathbb{H}_2^j$, $\hat{\mathbb{K}}^j$, and $\mathbb{K}^j$ to values $\mathbb{P}^1, \dots, \mathbb{P}^v$ (using $\mathbb{P}$ for permutation). We group all queries into sets $\mathcal{P}^j$, for $1 \le j \le v$, s.t. all sets are distinct and each set $\mathcal{P}^j$ consists of exactly the union of all construction queries with hash value $H_2 = \mathbb{P}^j$, all primitive queries with key $\hat{K} = \mathbb{P}^j$, and all key-generating tuples with key $K = \mathbb{P}^j$:

$$
\mathcal{P}^j := \{\mathcal{H}_2^i : \mathbb{H}_2^i = \mathbb{P}^j\} \cup \{\hat{\mathcal{K}}^i : \hat{\mathbb{K}}^i = \mathbb{P}^j\} \cup \{\mathcal{K}^i : \mathbb{K}^i = \mathbb{P}^j\}.
$$

We denote by $p^j = |\mathcal{P}^j|$ the number of queries that use the same permutation. Clearly, it holds that $\sum_{j=1}^v p^j = q_P + q_C + s$. Recall that Block$(k,n)$ denotes the set of all $k$-bit-key, $n$-bit-block ciphers. In the following, we call a block cipher $E$ compatible with $\tau$ iff

1. for all $1 \le i \le q_C$, it holds that $C^i = E_{H_2^i}(M^i \oplus H_1^i) \oplus H_3^i$, where $H_1^i = \mathcal{H}_1(L, T^i)$, $H_2^i = \mathcal{H}_2(L, T^i)$, and $H_3^i = \mathcal{H}_3(L, T^i)$,

2. for all $1 \le j \le q_P$, it holds that $\hat{Y}^j = E_{\hat{K}^j}(\hat{X}^j)$, and

3. for all $1 \le g \le s$, it holds that $L_g = E_{K_g}(I_g)$.

Let $\text{Comp}(\tau)$ denote the set of all block ciphers $E$ compatible with $\tau$.
Then,

$$
\Pr[\Theta_{\text{real}} = \tau] = \Pr[E \leftarrow \text{Block}(k,n) : E \in \text{Comp}(\tau)] \cdot \Pr[K \leftarrow \mathcal{K} : K]. \quad (3)
$$

We focus on the first factor on the right-hand side. Since we assume that no bad events have occurred, the fraction of compatible block ciphers is given by

$$
\mathrm{Pr}[E \leftarrow \text{Block}(k, n) : E \in \mathrm{Comp}(\tau)] = \prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}}.
$$

It holds that

$$
\sum_{i=1}^{v} p^i = q_P + q_C + s = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{r} t^j + \sum_{j=1}^{w} k^j = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{u} h_2^j + \sum_{j=1}^{w} k^j.
$$
---PAGE_BREAK---

We can substitute the variables $\hat{k}^j$, $h_2^j$, and $k^j$ on the right-hand side by auxiliary variables $z^j$:

$$ \sum_{i=1}^{v} p^i = \sum_{j=1}^{\ell+u+w} z^j \quad \text{where} \quad z^j = \begin{cases} \hat{k}^j & \text{if } j \le \ell, \\ h_2^{j-\ell} & \text{if } \ell < j \le \ell+u, \\ k^{j-\ell-u} & \text{otherwise.} \end{cases} $$

It holds that $v \le \ell+u+w \le \ell+r+w$. Since each permutation set $\mathcal{P}^i$ consists of all queries in $\tau$ that use a certain key $\hat{\mathbb{K}}^j$, and/or all queries in $\tau$ that use one hash value $\mathbb{H}_2^j$, and/or all tuples $(I_i, K_i)$ that use one value $\mathbb{K}^j$, it further holds that for all $1 \le i \le v$, there exists some $j \in \{1, \dots, \ell+u+w\}$ s.t.

$$ p^i \ge z^j. $$

Again, the sequences are compressing, and we can directly apply Fact 1.
It follows that + +$$ +\begin{align} +\prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}} &\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \tag{4} \\ +&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \\ +&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \frac{1}{(2^n)^s}. +\end{align} +$$ + +Using the combined knowledge from Equations (1) through (4), we can derive that the probability for obtaining the construction and primitive outputs in the transcript is at least as high as the probability in the ideal world: + +$$ \Pr[\Theta_{\text{real}} = \tau] \ge \Pr[\Theta_{\text{ideal}} = \tau]. $$ + +So, we obtain our claim in Lemma 3. □ + +---PAGE_BREAK--- + +# Analysis of Power Matching on Energy Savings of a Pneumatic Rotary Actuator Servo-Control System + +Yeming Zhang¹*, Hongwei Yue¹, Ke Li² and Maolin Cai³ + +**Abstract** + +When saving energy in a pneumatic system, the problem of energy losses is usually addressed by reducing the air supply pressure. The power-matching method is applied to optimize the air-supply pressure of the pneumatic system, and the energy-saving effect is verified by experiments. First, the experimental platform of a pneumatic rotary actuator servo-control system is built, and the mechanism of the valve-controlled cylinder system is analyzed. 
Then, the output power characteristics and load characteristics of the system are derived, and their characteristic curves are drawn. The employed air compressor is regarded as a constant-pressure source of a quantitative pump, and the power characteristic of the system is matched. The power source characteristic curve should envelop the output characteristic curve and the load characteristic curve. The minimum gas supply pressure obtained by power matching represents the optimal gas supply pressure. The comparative experiments under two different gas supply pressure conditions show that the system under the optimal gas supply pressure can greatly reduce energy losses. + +**Keywords:** Pneumatic rotary actuator, Energy savings, Gas supply pressure, Characteristic curve, Power matching + +## 1 Introduction + +The problem of energy shortages has become increasingly significant with the rapid development of society. In addition to discovering new energy sources, energy conservation is the most effective and important measure to fundamentally solve the energy problem [1]. Energy saving has increasingly become a hot topic of concern. Energy has always been a constraint to economic development, which makes energy-saving research more urgent and practical [2]. Currently, pneumatic technology is widely used in various fields of industry, and has become an important technical means of transmission and control [3, 4]. The use of existing technology to improve the energy utilization rate of energy-consuming equipment is an important energy-saving method [5]. + +However, the energy efficiency of pneumatic technology is relatively low [6]. Therefore, improving the efficiency of energy utilization and reducing the energy loss of pneumatic systems have become the concern of scholars all over the world [7, 8]. 
+ +Pneumatic systems have three aspects of energy wastage [9, 10]: (1) gas and power losses during compressor gas production, (2) pressure loss in the gas supply pipeline, and (3) gas leakage from the gas equipment [11]. Accordingly, many methods are available to solve these problems. For the pressure loss in the air source, the timing of opening and closing of multiple air compressors can be optimized, and the gas production process of the air compressors can also be optimized, such as making full use of the expansion of compressed air to reduce unnecessary power consumption [12]. In order to reduce pressure loss in the pipeline, the method of reducing the pressure in the gas supply pipeline can be adopted [13]. When necessary, a supercharger can be added in front of the terminal equipment. For gas leakage from the gas equipment, optimizing the component structure is usually implemented to solve this problem. + +*Correspondence: zym@hpu.edu.cn + +¹ School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China +Full list of author information is available at the end of the article +---PAGE_BREAK--- + +The pneumatic servo-control system precisely controls the angle of rotation; however, energy loss still occurs in the system. For this system, reducing the gas supply pressure is the most effective way of reducing the energy loss. Determining the critical pressure and reducing the gas supply pressure as much as possible while ensuring normal operation of the system are the key. The power-matching method can solve the optimization problem of the gas supply pressure based on the power required by the system [14]. In flow compensation, different compensation controllers can also be designed to match the flow and the system to realize the purpose of energy savings [15, 16]. 
Problems arise with regard to the high energy consumption and poor controllability of the rotary system of a hydraulic excavator due to throttle loss and overflow loss in the control valve during frequent acceleration and deceleration with large inertia. Therefore, Huang et al. [17] proposed the flow matching of a pump valve joint control and an independent measurement method of the hydraulic excavator rotary system to improve the energy efficiency of the system and reduce throttle loss. Xu et al. [18] designed a dynamic bypass pressure compensation circuit of a load sensing system, which solved the problems of pressure shock and energy loss caused by excessive flow and improved the efficiency and controllability of the system. Kan et al. [19] analyzed the basic characteristics of a hydraulic transmission system for wheel loaders using numerical calculation and adopted the optimal design method of a power-matching system. This improved the efficient working area of the system and average efficiency in the transportation process, and reduced the average working fuel consumption rate. Yang et al. designed an electro-hydraulic flow-matching controller with shunt ability to improve the dynamic characteristics and energy-saving effect and improve the stability of the system [20]. Guo et al. [21] used genetic algorithm to optimize the parameters of an asynchronous motor to achieve energy savings and consumption reduction, which proved the effectiveness and practicability of the power matching method of an electric pump system. Wang et al. [22] matched an engine and a generator to achieve efficiency optimization and obtained a common high efficiency area. They proposed a partial power tracking control strategy. Lai et al. 
[23] proposed a parameter matching method for an accumulator in a parallel hydraulic hybrid excavator and optimized the parameter matching process of the main components such as the engine, accumulator, and hydraulic secondary regulatory pump using a genetic algorithm to reduce the installed power. Yan et al. [24] focused on the problem in which the flow of a constant displacement pump could not match the changing load, resulting in energy loss. They proposed an electro-hydraulic flow-matching steering control method, which used a servo motor to drive a constant displacement pump independently to reduce the energy consumption of the system. At present, many studies on energy savings are conducted using the power matching method in the hydraulic system, but only a few focus on the pneumatic system [25]. + +In the present study, a method of reducing the gas supply pressure is implemented to reduce energy loss of a pneumatic rotary actuator servo-control system. The output and load characteristic curves of the system are derived, and the power source characteristic curve is matched to determine the optimal gas supply pressure. Finally, the experiment verifies the energy-saving effect under this gas supply pressure. + +Through theoretical analysis and experimental verification of the application platform of the pneumatic rotary actuator, a method of function matching and energy optimization for the pneumatic rotary actuator under normal working conditions is proposed for the first time. + +## 2 Experimental Platform + +Figure 1 shows the schematic diagram of the pneumatic rotary actuator servo-control system. + +As a gas source, the air compressor provides power to the system. The air filter, air regulator, and air lubricator are used to filter and clean the gas. 
When the driving voltage signal of the proportional directional control valve is given, the proportional valve controls the flow and direction of the gas, and then controls the rotary motion of the pneumatic rotary actuator. The rotary encoder measures the angular displacement and transmits the TTL (Transistor-Transistor Logic) level signals to the data acquisition card. The data acquisition card is installed in the industrial personal computer, which calls the program of the upper computer, samples the encoder signal, and outputs a 0–10 V voltage signal through the controller calculation. The driving voltage signal output by the controller further regulates the flow and direction of the proportional directional control valve to reduce the angle error. After continuous iteration, the angle error of the system decreases and tends to stabilize. + +Figure 2 shows the experimental platform of the pneumatic rotary actuator servo-control system. The round steel passes through the pneumatic rotary actuator and is connected to the rotary encoder through the coupling. The pneumatic rotary actuator is horizontally installed. + +By selecting the MPYE-5-M5-010-B model proportional valve with a smaller range, we can more easily ensure the control accuracy of the system. The SMC MSQA30A pneumatic rotary actuator is adopted. The actuator has a high-precision ball bearing and belongs to a high-precision actuator type. +---PAGE_BREAK--- + +**Figure 1** Schematic diagram of the pneumatic rotary actuator servo-control system + +**Figure 2** Experimental diagram of the pneumatic rotary servo-control system + +The rotating platform of the actuator contains many symmetrical threaded holes for easy introduction of loads. A high-precision rotary encoder is used, and its 20000 P/R resolution corresponds to an angular resolution of $1.8 \times 10^{-2}$°, which satisfies the high-precision measurement requirement for the rotation angle. In addition, the air compressor and the filter, regulator, and lubricator (F. 
R. L.) units support a gas supply pressure of up to 0.8 MPa. The digital I/O port and analog output port of the data-acquisition card meet the experimental requirements, and the 32-bit counter in the data-acquisition card improves the system response speed. The models and parameters of the components are listed in Table 1. + +In some experimental tests, measuring the flow rate, pressure, and temperature of the gas is necessary, which can be performed using a flow sensor, a pressure transmitter, and a temperature transmitter (thermocouple), respectively. The flow rate in the inlet and outlet is measured using a flow sensor in the FESTO SFAB series + +**Table 1** Models and parameters of the components + +
| Component | Model | Parameter |
|---|---|---|
| Air compressor | PANDA 750-30L | Maximum supply pressure: 0.8 MPa |
| F. R. L. units | AC3000-03 | Maximum working pressure: 1.0 MPa |
| Proportional-directional control valve | FESTO MPYE-5-M5-010-B | 3-position, 5-way valve; 0–10 V driving voltage |
| Pneumatic rotary actuator | SMC MSQA30A | Bore: 30 mm; stroke: 190° |
| Rotary encoder | GSS06-LDH-RAG2000Z1 | Resolution: 20000 P/R |
| Data-acquisition card | NI PCI-6229 | 32-bit counter; −10 V to +10 V output voltage |
| Industrial personal computer | ADVANTECH IPC-610H | Standard configuration |
+---PAGE_BREAK--- + +with a range of 2–200 L/min, and the flow rate of the leak port is measured using a flow sensor with a range of 0.1–5 L/min in the SFAH series. The MIK-P300 pressure transmitter has high accuracy and fast response and can accurately measure the pressure changes. A thermocouple is used as a temperature transmitter to measure the gas temperature. To prevent signal interference, a temperature isolator is added to the circuit for the temperature signal transmission. The models and parameters of the test components are listed in Table 2. The circuit connection of the experimental platform is shown in Figure 3. + +The schematic diagram of the valve-controlled cylinder system is constructed according to the experimental platform, as shown in Figure 4. The system consists of Chamber **a** and Chamber **b**. The dashed lines represent the boundaries of the chambers. Figure 4 shows the gas-flow mechanism when the spool moves to the right, and $\dot{m}_a$, $\dot{m}_b$ represent the mass flow rates of Chamber **a** and Chamber **b**, respectively. $p_a$, $p_b$ and $T_a$, $T_b$ represent the corresponding pressure and temperature of Chamber **a** and Chamber **b**, respectively. $p_s$ is the gas supply pressure, $p_e$ is the atmospheric pressure, and $\theta$ is the rotation angle of the pneumatic rotary actuator. + +Figure 3 Circuit connection of the experimental platform + +## 3 Power Characteristic Matching + +### 3.1 Output Characteristics of the Valve-Controlled Cylinder + +The output characteristic of the valve-controlled cylinder system refers to the relationship between the total load moment and angular velocity when the power source is known. The output characteristic can be obtained by the following method. + +When supply pressure $p_s$ is relatively low, i.e., when $0.1013 \text{ MPa} \le p_s \le 0.4824 \text{ MPa}$, the condition is satisfied, i.e., $p_a/p_s > b = 0.21$, where *b* denotes the critical pressure ratio. 
The gas flow in the proportional-directional control valve is a subsonic flow. Here, the mass flow equation through the proportional valve is [26] + +$$ \dot{m}_a = \frac{S_e p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (1) $$ + +$$ \dot{m}_b = \frac{S_e p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (2) $$ + +Table 2 Models and parameters of the test components + +
| Component | Model | Parameter |
|---|---|---|
| Pressure transmitter | MIK-P300 | Range: 0–1.0 MPa; accuracy: 0.3% FS |
| Flow sensor 1 | FESTO SFAB-200U-HQ8-2SV-M12 | Range: 2–200 L/min; accuracy: 3% o.m.v. + 0.3% FS |
| Flow sensor 2 | FESTO SFAH-5U-Q6S-PNLK-PNVBA-M8 | Range: 0.1–5 L/min; accuracy: 2% o.m.v. + 1% FS |
| Temperature transmitter (thermocouple) | TT-K-36 (K type, diameter: 0.1 mm) | Range: 0–260 °C; accuracy: 0.4% FS |
| Temperature isolator | SLDTR-2P11 | Response time: ≤ 10 ms; accuracy: 0.1% FS |
+---PAGE_BREAK--- + +**Figure 4** Schematic diagram of the valve-controlled cylinder system + +where $S_e$ is the effective area of the proportional valve orifice, $R$ is the gas constant, $T_s$ is the gas supply temperature, and $\kappa$ is the isentropic index. + +When the opening of the proportional-directional control valve is maximum, the mass flow rates of the two chambers are maximum, which can be expressed as + +$$ \dot{m}_{\text{a-max}} = \frac{C \pi r^2 p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (3) $$ + +$$ \dot{m}_{\text{b-max}} = \frac{C \pi r^2 p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (4) $$ + +where C is the flow coefficient and r is the radius of the orifice. + +Under adiabatic condition, $p_a/\rho_a^\kappa = p_s/\rho_s^\kappa$ and $p_b/\rho_b^\kappa = p_e/\rho_e^\kappa$, where $\rho_a$, $\rho_b$, $\rho_s$, and $\rho_e$ represent the gas density in Chamber a, gas density in Chamber b, gas supply density, and atmospheric density, respectively. For the pneumatic rotary actuator, these can be obtained from the mass flow-rate formulas: + +$$ \dot{m}_{\text{a-max}} = \rho_a \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_a}{\rho_s} \rho_s A d_f \dot{\theta} = \left(\frac{p_a}{p_s}\right)^{\frac{1}{\kappa}} \frac{p_s}{RT_s} A d_f \dot{\theta}, \quad (5) $$ + +$$ \dot{m}_{\text{b-max}} = \rho_b \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_e}{\rho_b} \rho_b A d_f \dot{\theta} = \left(\frac{p_e}{p_b}\right)^{\frac{1}{\kappa}} \frac{p_b}{RT_s} A d_f \dot{\theta}, \quad (6) $$ + +where A is the effective area of a single piston, $d_f$ is the pitch diameter of the gear, and $\dot{\theta}$ is the angular velocity of the pneumatic rotary actuator. 
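As a concrete illustration of Eqs. (1)–(4), the subsonic mass-flow relation is easy to evaluate numerically. The sketch below is ours, not code from the paper; the constants follow Table 3, while the function name, the operating point, and the supply temperature of 293 K are assumptions for illustration:

```python
import math

# Sketch of Eqs. (1) and (3): subsonic mass flow through the proportional
# valve orifice. Constants follow Table 3; the operating point is assumed.
KAPPA = 1.4   # isentropic index
R = 287.0     # gas constant, J/(kg*K)

def mass_flow_subsonic(S_e, p_up, p_down, T_s):
    """Mass flow (kg/s) for subsonic flow, i.e., p_down / p_up > b = 0.21."""
    ratio = p_down / p_up
    assert ratio > 0.21, "below the critical pressure ratio the flow is choked"
    term = ratio ** (2.0 / KAPPA) - ratio ** ((KAPPA + 1.0) / KAPPA)
    return (S_e * p_up / math.sqrt(R * T_s)) * math.sqrt(2.0 * KAPPA / (KAPPA - 1.0) * term)

# Fully open valve: S_e = C * pi * r^2, as in Eq. (3)
C, r = 0.6437, 1.0e-3
S_e_max = C * math.pi * r ** 2
m_dot_a = mass_flow_subsonic(S_e_max, p_up=0.3367e6, p_down=0.25e6, T_s=293.0)
```

Note that the flow vanishes as the pressure ratio approaches 1, consistent with the bracketed term in Eqs. (1)–(4).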
+ +**Table 3** Known parameters in Eq. (8) + +
| Parameter | Value |
|---|---|
| $A$ (m²) | $3.4636 \times 10^{-4}$ |
| $d_f$ (m) | 0.014 |
| $\kappa$ | 1.4 |
| $C$ | 0.6437 |
| $r$ (m) | $1.00 \times 10^{-3}$ |
| $R$ (J/(kg·K)) | 287 |
+ +The dynamic equation of the pneumatic rotary actuator can be expressed as follows: + +$$ p_a - p_b = \frac{f}{d_f A}, \quad (7) $$ + +where f is the total load moment. + +Combining Eqs. (3)–(6) yields $p_a$ and $p_b$. Substituting the expressions of $p_a$ and $p_b$ into Eq. (7) yields + +$$ p_s \left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa - 1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}} - \frac{p_e}{\left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa-1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}}} = \frac{f}{d_f A}. \quad (8) $$ + +Eq. (8) is the expression of the output characteristic curve of the valve-controlled cylinder. The known parameters in the equation are shown in Table 3. + +To further characterize the output behavior of the system, the influence of the fixed parameters is also analyzed theoretically. Figure 5 shows the output characteristic curves. The following characteristics can be found in plane $\dot{\theta}-f$: + +(1) Figure 5(a) shows that when pressure $p_s$ increases from 0.3 MPa to 0.4 MPa, the curve is a parabola and $p_s$ is a variable parameter. Increasing $p_s$ makes the whole parabola move to the right while the shape does not change. + +(2) Figure 5(b) shows that when the maximum opening area of the valve increases from $\pi r^2$ to $2\pi r^2$, the whole parabola becomes wider but the vertices remain the same. + +(3) Figure 5(c) shows that the increase in effective working area A of the piston makes the top of the parabola move to the right and the parabola simultaneously becomes narrower. + +We can see from Eq. (8) that when $\dot{\theta}=0$, the maximum total load moment can be expressed as + +$$ f_{\max} = Ad_f(p_s - p_e). 
\quad (9) $$ + +When $f=0$, the maximum angular velocity is +---PAGE_BREAK--- + +**Figure 5** Output characteristic curve of the valve-controlled cylinder: (a) Output characteristics of the pressure variation, (b) Output characteristics of the change in the valve port area, (c) Output characteristics of the variation in the effective piston area + +$$ \dot{\theta}_{\max} = \sqrt{\frac{2C^2 \pi^2 r^4 \kappa R T_s}{A^2 d_f^2 (\kappa - 1)} \left[ 1 - \left( \frac{p_e}{p_s} \right)^{\frac{\kappa-1}{2\kappa}} \right]}. \quad (10) $$ + +### 3.2 Load Characteristic + +The load characteristic refers to the relationship between the moment required for the load to move and the position, velocity, and acceleration of the load itself [27]. The load characteristic can be expressed by the angular velocity–moment curve. + +The load characteristic is related to the form of load movement. When the load sinusoidally moves, the motion of the load is expressed as + +$$ \theta = \theta_m \sin \omega t, \quad (11) $$ + +where $\theta_m$ is the maximum angular value of the load motion and $\omega$ is the sinusoidal motion frequency of the load. + +The angular velocity and acceleration of the load are + +$$ \dot{\theta} = \theta_m \omega \cos \omega t, \quad (12) $$ + +$$ \ddot{\theta} = -\theta_m \omega^2 \sin \omega t. \quad (13) $$ + +The total load moment of the pneumatic rotary actuator is + +$$ f = \left( \frac{1}{2} m_p d_f^2 + J \right) \ddot{\theta} + \frac{1}{2} d_f F_f \\ = - \left( \frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2 \sin \omega t \\ + \frac{1}{2} d_f \left[ F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta} \right], \quad (14) $$ + +where $m_p$ is the mass of a single piston and $J$ is the moment of inertia of the pneumatic rotary actuator. $F_f$ is the friction force and can be represented by the Stribeck friction model. 
+ +$$ F_f = F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta}, \quad (15) $$ + +where $F_s$ is the maximum static friction, $F_c$ is the Coulomb friction, $\dot{\theta}_s$ is the critical Stribeck velocity, and $\sigma$ is the viscous friction coefficient. +---PAGE_BREAK--- + +**Table 4** Known parameters in Eq. (16) + +
| Parameter | Value |
|---|---|
| $F_s$ (N) | 10.60 |
| $F_c$ (N) | 6.03 |
| $\dot{\theta}_s$ (rad/s) | 0.19 |
| $\sigma$ (N·s/rad) | 0.87 |
| $m_p$ (kg) | 0.21 |
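The Stribeck model of Eq. (15), with the Table 4 parameters above, can be sketched in a few lines of Python. This is an illustrative sketch of ours, not code from the paper; the function name is an assumption:

```python
import math

# Sketch of Eq. (15): Stribeck friction force as a function of the
# angular velocity theta_dot (rad/s), using the Table 4 parameters.
F_S, F_C, THETA_DOT_S, SIGMA = 10.60, 6.03, 0.19, 0.87

def friction_force(theta_dot):
    if theta_dot == 0.0:
        return 0.0  # sign(0) = 0; the static regime is not resolved here
    sgn = math.copysign(1.0, theta_dot)
    stribeck = (F_S - F_C) * math.exp(-(theta_dot / THETA_DOT_S) ** 2)
    return F_C * sgn + stribeck * sgn + SIGMA * theta_dot

# Near zero velocity the force approaches the maximum static friction F_s;
# at higher speed the exponential term vanishes and viscous drag dominates.
```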
+ +**Figure 6** Load characteristic curve + +Combining Eqs. (12)–(14) yields + +$$ \left[ \frac{f - \frac{1}{2} d_f F_c \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f (F_s - F_c) e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f \sigma \dot{\theta}}{\left(\frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2} \right]^2 + \left(\frac{\dot{\theta}}{\theta_m \omega}\right)^2 = 1. \quad (16) $$ + +The known parameters in Eq. (16) are listed in Table 4. + +The load characteristic curve can be obtained from Eq. (16) when $\theta_m=180°$ and $\omega=10$ rad/s, as shown in Figure 6. + +### 3.3 Power Source Characteristics and Matching + +The power source characteristic refers to the characteristic of the flow and pressure provided by the power source, which can be expressed by the flow–pressure curve. The air compressor used in this work can be approximately regarded as a constant-pressure source for a quantitative pump. Therefore, the power source characteristic curve is shown in Figure 7, where $\dot{m}_s$ is the gas supply mass flow, $p_s$ is the gas supply pressure, $\dot{m}_L$ is the driving mass flow, and $p_L$ is the driving pressure. + +**Figure 7** Power source characteristic curve + +**Figure 8** Power source characteristic matching + +The output and power source characteristics of the valve-controlled cylinder should envelop the load characteristic curve. To minimize unnecessary energy consumption, the output characteristic curve should be tangent to the load characteristic curve, and the power source characteristic curve should be tangent to the output characteristic curve in the f-axis direction and the load characteristic curve in the $\dot{\theta}$-axis direction, as shown in Figure 8. + +In this manner, the maximum total load moment is obtained, i.e., $f_{\max}=0.96$ N·m. The optimum gas supply pressure can be obtained from Eq. (9), i.e., $p_s=f_{\max}/(d_f A) + p_e= 0.3367$ MPa. 
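For readers who want to reproduce curves like those in Figures 5 and 8, the output characteristic of Eq. (8) is straightforward to evaluate numerically. The sketch below is ours; it uses the Table 3 parameters, while the ambient conditions ($p_e = 0.1013$ MPa, $T_s = 293$ K) and the function name are assumptions for illustration:

```python
import math

# Sketch of Eq. (8): total load moment f as a function of the angular
# velocity theta_dot for a given supply pressure p_s (all SI units).
A, D_F, KAPPA, C, R_ORIFICE, R_GAS = 3.4636e-4, 0.014, 1.4, 0.6437, 1.0e-3, 287.0
P_E, T_S = 0.1013e6, 293.0  # assumed ambient pressure and supply temperature

def load_moment(theta_dot, p_s):
    X = (A ** 2 * D_F ** 2 * (KAPPA - 1.0)) / (
        2.0 * C ** 2 * math.pi ** 2 * R_ORIFICE ** 4 * KAPPA * R_GAS * T_S)
    bracket = (1.0 - X * theta_dot ** 2) ** (KAPPA / (KAPPA - 1.0))
    return D_F * A * (p_s * bracket - P_E / bracket)

# At theta_dot = 0 the bracket equals 1 and Eq. (8) reduces to Eq. (9),
# f_max = A * d_f * (p_s - p_e); f then decreases with increasing speed.
```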
+---PAGE_BREAK--- + +## 4 Experimental Verification of the Energy Savings + +To verify the calculation results presented in the previous section, low-speed uniform-motion experiments of the pneumatic rotary actuator were carried out using 0.6 and 0.3367 MPa supply pressure. The total energy and effective energy consumed by the valve-controlled cylinder system were measured and calculated. In the experiment, the input-angle signal was set as a ramp signal, and Chamber **a** was used as the intake chamber. The motion curve of the uniform-velocity period was considered, and the angular strokes in the two experiments were the same. Two flow sensors were used to measure the volume flow of the gas supply pipeline and the Chamber **a** port. Temperature sensors were used to measure the gas temperature of the gas supply pipeline and Chamber **a**. + +Figures 9 and 10 show the system response curves at gas supply pressure values of 0.6 and 0.3367 MPa, respectively, including the angle curve, gas supply flow curve, gas supply temperature curve, pressure curve of Chamber **a**, volume-flow curve of Chamber **a**, and temperature curve of Chamber **a**. Figures 9(f) and 10(f) show that the temperature in Chamber **a** changed with the change in the velocity, which first increased, then decreased, and then entered a stable stage. + +The total power consumed by the pneumatic system is expressed as [28, 29]: + +$$P_T = p_s \dot{V}_s \left[ \ln \frac{p_s}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_s - T_e}{T_e} - \ln \frac{T_s}{T_e} \right) \right], \quad (17)$$ + +where $\dot{V}_s$ is the volume flow through the gas supply pipeline, and its numerical variation curves are shown in Figures 9(b) and 10(b). The $T_s$ curves are shown in Figures 9(c) and 10(c). 
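Equation (17) (and likewise Eq. (18)) can be evaluated directly from the measured pressure, volume flow, and temperature. A minimal sketch of ours, assuming ambient conditions of $p_e = 101.3$ kPa and $T_e = 293$ K and an illustrative operating point:

```python
import math

# Sketch of Eq. (17): pneumatic (availability-based) power carried by the
# air, given pressure p (Pa), volume flow V_dot (m^3/s), and temperature T (K).
KAPPA = 1.4
P_E, T_E = 101.3e3, 293.0  # assumed ambient state

def air_power(p, V_dot, T):
    return p * V_dot * (math.log(p / P_E)
                        + KAPPA / (KAPPA - 1.0) * ((T - T_E) / T_E - math.log(T / T_E)))

# Example: 0.3367 MPa supply at 20 L/min, with the gas at ambient temperature
P_total = air_power(0.3367e6, 20.0e-3 / 60.0, 293.0)
```

Integrating such a power signal over the sampled time series gives the consumed energy, which is how the totals in Section 4 are obtained from Figure 11.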
+ +The effective power of the pneumatic rotary actuator can be expressed as + +$$P_E = p_a \dot{V}_a \left[ \ln \frac{p_a}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_a - T_e}{T_e} - \ln \frac{T_a}{T_e} \right) \right], \quad (18)$$ + +where $\dot{V}_a$ is the volume flow into Chamber **a**, and its numerical variation curves are shown in Figures 9(e) and 10(e). The $T_a$ curves are shown in Figures 9(f) and 10(f). + +By substituting the data in Figures 9 and 10 into Eqs. (17) and (18), the total and effective power of the pneumatic system at different supply pressure values can be obtained, as shown in Figure 11. The total and effective energy consumed by the pneumatic system can be obtained by integrating the data shown in Figure 11 using the Origin software. + +The actual work done by the gas on the pneumatic rotary actuator is equal to the sum of the rotational kinetic energy of the rotating platform, the kinetic energy of the cylinder piston, and the work done by the piston to overcome the friction force, which can be expressed as + +$$ +\begin{aligned} +W &= \frac{1}{2} J \dot{\theta}^2 + \frac{1}{2} \cdot 2m_p \cdot (\dot{y})^2 + F_f y \\ +&= \frac{1}{2} \left( J + \frac{1}{2} m_p d_f^2 \right) \dot{\theta}^2 + \frac{1}{2} F_f d_f \theta, +\end{aligned} +\quad (19) $$ + +where $y$ is the displacement of the actuator piston and $\dot{\theta}$ is replaced by the average value of the angular velocity. + +The calculation results are described as follows. When the gas supply pressure is 0.6 MPa, the total energy consumed by the system is 195.552 J, the effective energy is 32.666 J, and the actual work done by the pneumatic rotary actuator is 3.513 J. When the gas supply pressure is 0.3367 MPa, the total energy consumed by the system is 32.207 J, the effective energy is 9.481 J, and the actual work done is 3.517 J. 
In both cases, the actual work of the pneumatic rotary actuator is almost the same, and when the gas supply pressure is 0.3367 MPa, the energy consumption is greatly reduced. + +## 5 Further Discussions + +According to the matching method of the power characteristics, for the constant-pressure source servo system with a quantitative pump, we need to calculate the optimal air-supply pressure and then manually adjust the air-supply pressure to this optimal value. Matching efficiency $\eta$ represents the ratio of the power output of the pneumatic system to the input power of the gas source. The matching efficiency is expressed as + +$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s}. \quad (20)$$ + +Figure 7 shows that the matching efficiency of this method is low. The adaptive power source can adaptively change the gas supply pressure or flow to meet the system requirements and improve the matching efficiency. It can be divided into the following three types [30]. + +(1) Flow adaptive power source + +This power source can adaptively adjust the supply flow from the power source according to the system flow demand to reduce the loss in the flow. The characteristic curve is shown in Figure 12(a). The matching efficiency is expressed as + +$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s'} \approx \frac{p_L}{p_s}. 
\quad (21)$$ +---PAGE_BREAK--- + +**Figure 9** System-response curve at gas supply pressure of 0.6 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a +---PAGE_BREAK--- + +**Figure 10** System response curve at gas supply pressure of 0.3367 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a +---PAGE_BREAK--- + +**Figure 11** Total and effective power of the pneumatic system under different supply pressure values: (a) Total power, (b) Effective power + +(2) Pressure adaptive power source + +This power source can adaptively adjust the gas supply pressure of the power source according to the system pressure demand to reduce the pressure loss. The characteristic curve is shown in Figure 12(b). The matching efficiency is expressed as + +$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}_s} \approx \frac{\dot{m}_L}{\dot{m}_s}. \qquad (22) $$ + +(3) Power adaptive power source + +This power source can adaptively adjust the gas supply pressure and flow from the power source according to the system pressure and flow demand to minimize the loss in power. $\dot{m}'_s$ denotes the air-supply flow. + +**Figure 12** Power characteristics of the adaptive power sources: (a) Flow adaptive power source, (b) Pressure adaptive power source, (c) Power adaptive power source +---PAGE_BREAK--- + +The characteristic curve is shown in Figure 12(c). The matching efficiency is expressed as + +$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}'_s} \approx 1. \qquad (23) $$ + +Therefore, the power adaptive power source demonstrates better energy-saving effect, and its matching efficiency is closer to 100%. 
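Equation (20) and the approximations in Eqs. (21)–(23) are straightforward to check numerically. A small sketch of ours, with illustrative names and operating values that are not from the paper:

```python
# Sketch of Eq. (20): matching efficiency of the power source, the ratio
# of the driving pneumatic power to the supplied pneumatic power.
def matching_efficiency(p_L, m_dot_L, p_s, m_dot_s):
    return (p_L * m_dot_L) / (p_s * m_dot_s)

# Pressure-adaptive source, Eq. (22): p_s' ~= p_L, so eta ~= m_dot_L / m_dot_s
eta = matching_efficiency(p_L=0.3e6, m_dot_L=1.5e-3, p_s=0.3e6, m_dot_s=2.0e-3)
# Power-adaptive source, Eq. (23): p_s' ~= p_L and m_dot_s' ~= m_dot_L, so eta ~= 1
```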
+ +## 6 Conclusions + +Power matching of the pneumatic rotary actuator involves optimizing the relevant parameters of the pneumatic rotary actuator system based on the premise of satisfying the normal operation of the pneumatic rotary actuator, realizing the power demand and output matching, and achieving energy savings. In this study, the derivation process of the output-power and load characteristics of the pneumatic rotary actuator servo-control system is described. The employed air compressor is regarded as a constant-pressure source of the quantitative pump, and the power characteristics of the system are matched. The following conclusions are obtained. + +(1) The minimum gas supply pressure obtained by the power-matching method represents the optimal gas supply pressure. The optimum gas supply pressure is 0.3367 MPa. + +(2) By comparing the system-response experiments at 0.6 and 0.3367 MPa, the total energy consumed by the system is reduced by 163.345 J. This saving verifies that the system under the optimal gas supply pressure can significantly reduce energy loss. + +(3) According to the characteristic curves of the adaptive power sources, the matching efficiency of the power adaptive power source is higher than that of the flow and pressure adaptive power sources. + +### Acknowledgments + +The authors would like to thank Henan Polytechnic University and Beihang University for providing the necessary facilities and machinery to build the prototype of the pneumatic servo system. The authors are sincerely grateful to the reviewers for their valuable review comments, which substantially improved the paper. + +### Authors' Contributions + +YZ provided guidance for the whole research. KL and HY established the model, designed the experiments and wrote the initial manuscript. KL and MC assisted with sampling and laboratory analyses. YZ and HY revised the manuscript, performed the experiments and analysed the data. 
All authors read and approved the final manuscript. + +### Authors' Information + +Yeming Zhang, born in 1979, is currently an associate professor at School of Mechanical and Power Engineering, Henan Polytechnic University, China. He received his PhD degree from Beihang University, China, in 2011. His research interests include complex mechatronics system design and simulation, intelligent control, reliability and fault diagnosis, pneumatic system energy saving, and flow measurement. + +Hongwei Yue, born in 1992, is currently a master candidate at School of Mechanical and Power Engineering, Henan Polytechnic University, China. + +Ke Li, born in 1991, is currently a PhD candidate at School of Mechanical and Electrical Engineering, Harbin Institute of Technology, China. He received his master's degree in mechatronic engineering from Henan Polytechnic University, China, in 2019. + +Maolin Cai, born in 1972, is currently a professor and PhD supervisor at Beihang University, China. He received his PhD degree from Tokyo Institute of Technology, Japan, in 2002. His main research directions include pneumatic and hydraulic fluidics, compressed air energy storage, and pneumatic pipeline systems. + +### Funding + +Supported by Henan Province Science and Technology Key Project of China (Grant Nos. 202102210081, 202102210082), Fundamental Research Funds for Henan Province Colleges and Universities of China (Grant No. NSFRF140120), and Doctor Foundation of Henan Polytechnic University (Grant No. B2012-101). + +### Competing Interests + +The authors declare no competing financial interests. + +### Author Details + +¹School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China. ²School of Mechanical and Electrical Engineering, Harbin Institute of Technology, Harbin 150001, China. ³School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China.
+ +Received: 6 July 2019 Revised: 22 February 2020 Accepted: 18 March 2020 +Published online: 09 April 2020 + +### References + +[1] L Ge, L Quan, X G Zhang, et al. Power matching and energy efficiency improvement of hydraulic excavator driven with speed and displacement variable power source. *Chinese Journal of Mechanical Engineering*, 2019, 32:100, https://doi.org/10.1186/s10033-019-0415-x. + +[2] T Chen, L Cai, X F Ma, et al. Modeling and matching performance of a hybrid-power gas engine heat pump system with continuously variable transmission. *Building Simulation*, 2019, 12(2): 273-283. + +[3] G W Jia, W Q Xu, M L Cai, et al. Micron-sized water spray-cooled quasi-isothermal compression for compressed air energy storage. *Experimental Thermal and Fluid Science*, 2018, 96: 470-481. + +[4] D Shaw, J-J Yu, C Chieh. Design of a hydraulic motor system driven by compressed air. *Energies*, 2013, 6(7): 3149-3166. + +[5] M Cheng, B Xu, J H Zhang, et al. Pump-based compensation for dynamic improvement of the electrohydraulic flow matching system. *IEEE Transactions on Industrial Electronics*, 2017, 64(4): 2903-2913. + +[6] Y M Zhang, K Li, G Wang, et al. Nonlinear model establishment and experimental verification of a pneumatic rotary actuator position servo system. *Energies*, 2019, 12(6): 1096. + +[7] T L Brown, V P Atluri, J P Schmiedeler. A low-cost hybrid drivetrain concept based on compressed air energy storage. *Applied Energy*, 2014, 134: 477-489. + +[8] Y M Zhang, M L Cai. Overall life cycle comprehensive assessment of pneumatic and electric actuator. *Chinese Journal of Mechanical Engineering*, 2014, 27(3): 584-594. + +[9] M L Cai. Energy saving technology on pneumatic systems. *Chinese Hydraulics & Pneumatics*, 2013(8): 1-8. (in Chinese) + +[10] J F Li. Energy saving of pneumatic system. Beijing: Machinery Industry Press, 1997. (in Chinese) + +[11] R Saidur, N A Rahim, M Hasanuzzaman. A review on compressed-air energy use and energy savings. 
*Renewable and Sustainable Energy Reviews*, 2010, 14(4): 1135-1153. + +[12] Y M Zhang, S Wang, S L Wei, et al. Optimization of control method of air compressor group under intermittent large flow condition. *Fluid Machinery*, 2017, 45(7): 7-11. +---PAGE_BREAK--- + +[13] K Baghestan, S M Rezaei, H A Talebi, et al. An energy-saving nonlinear position control strategy for electro-hydraulic servo systems. *ISA Transactions*, 2015, 59: 268-279. +[14] S P Yang, H Yu, J G Liu, et al. Research on power matching and energy saving control of power system in hydraulic excavator. *Journal of Mechanical Engineering*, 2014, 50(5): 152-160. (in Chinese) +[15] M Cheng, B Xu, J H Zhang, et al. Valve-based compensation for controllability improvement of the energy-saving electrohydraulic flow matching system. *Journal of Zhejiang University: Science A*, 2017, 18(6): 430-442. +[16] B Xu, M Cheng, H Y Yang, et al. A hybrid displacement/pressure control scheme for an electrohydraulic flow matching system. *IEEE/ASME Transactions on Mechatronics*, 2015, 20(6): 2771-2782. +[17] W N Huang, L Quan, J H Huang, et al. Flow matching with combined control of the pump and the valves for the independent metering swing system of a hydraulic excavator. *Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering*, 2018, 232(10): 1310-1322. +[18] B Xu, M Cheng, H Y Yang, et al. Electrohydraulic flow matching system with bypass pressure compensation. *Journal of Zhejiang University (Engineering Science)*, 2015, 49(9): 1762-1767. (in Chinese) +[19] Y Z Kan, D Y Sun, Y Luo, et al. Optimal design of power matching for wheel loader based on power reflux hydraulic transmission system. *Mechanism and Machine Theory*, 2019, 137: 67-82. +[20] H Y Yang, W Liu, B Xu, et al. Characteristic analysis of electro-hydraulic flow matching control system in hydraulic excavator. *Journal of Mechanical Engineering*, 2012, 48(14): 156-163. (in Chinese) +[21] X Guo, C Lu, J Li, et al.
Analysis of motor-pump system power matching based on genetic algorithm. *EEA - Electrotehnica, Electronica, Automatica*, 2018, 66(1): 93-99. + +[22] X Wang, H Lv, Q Sun, et al. A proportional resonant control strategy for efficiency improvement in extended range electric vehicles. *Energies*, 2017, 10(2): 204. +[23] X L Lai, C Guan. A parameter matching method of the parallel hydraulic hybrid excavator optimized with genetic algorithm. *Mathematical Problems in Engineering*, 2013: 1-6. +[24] X D Yan, L Quan, J Yang. Analysis on steering characteristics of wheel loader based on electric-hydraulic flow matching principle. *Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(18): 71-78. (in Chinese) +[25] L C Xu, X M Hou. Power matching on loader engine and hydraulic torque converter based on typical operating conditions. *Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(7): 80-84. (in Chinese) +[26] X H Fu, M L Cai, W Q Xu, et al. Optimization study on expansion energy used air-powered vehicle with pneumatic-hydraulic transmission. *Chinese Journal of Mechanical Engineering*, 2018, 31:3, https://doi.org/10.1186/s10033-018-0220-y. +[27] H B Yuan, H Na, Y Kim. Robust MPC-PIC force control for an electro-hydraulic servo system with pure compressive elastic load. *Control Engineering Practice*, 2018, 79: 170-184. +[28] Y Shi, M L Cai, W Q Xu, et al. Methods to evaluate and measure power of pneumatic system and their applications. *Chinese Journal of Mechanical Engineering*, 2019, 32:42, https://doi.org/10.1186/s10033-019-0354-6. +[29] Y Shi, T C Wu, M L Cai, et al. Energy conversion characteristics of a hydro-pneumatic transformer in a sustainable-energy vehicle. *Applied Energy*, 2016, 171: 77-85. +[30] C C Zhan, X Y Chen. *Hydraulic reliability optimization and intelligent fault diagnosis*. Beijing: Metallurgical Industry Press, 2015.
(in Chinese) \ No newline at end of file diff --git a/samples/texts_merged/879988.md new file mode 100644 index 0000000000000000000000000000000000000000..8c107a98b95b994888509c4b170f2d6a2fa73765 --- /dev/null +++ b/samples/texts_merged/879988.md @@ -0,0 +1,435 @@ + +---PAGE_BREAK--- + +# The Poisson Process and Associated Probability Distributions on Time Scales + +Dylan R. Poulsen +Department of Mathematics +Baylor University +Waco, TX 76798 + +Email: Dylan_Poulsen@baylor.edu + +Michael Z. Spivey +Department of Mathematics and +Computer Science +University of Puget Sound +Tacoma, WA 98416 + +Email: mspivey@pugetsound.edu + +Robert J. Marks II +Department of Electrical and +Computer Engineering +Baylor University +Waco, TX 76798 + +Email: Robert_Marks@baylor.edu + +**Abstract**—Duals of probability distributions on continuous $\mathbb{R}$ domains exist on discrete $\mathbb{Z}$ domains. The Poisson distribution on $\mathbb{R}$, for example, manifests itself as a binomial distribution on $\mathbb{Z}$. Time scales are a domain generalization in which $\mathbb{R}$ and $\mathbb{Z}$ are special cases. We formulate a generalized Poisson process on an arbitrary time scale and show that the conventional Poisson distribution on $\mathbb{R}$ and binomial distribution on $\mathbb{Z}$ are special cases. The waiting times of the generalized Poisson process are used to derive the Erlang distribution on a time scale and, in particular, the exponential distribution on a time scale. The memoryless property of the exponential distribution on $\mathbb{R}$ is well known.
We find conditions on the time scale which preserve the memorylessness property in the generalized case. + +## I. INTRODUCTION + +The theory of continuous and discrete time stochastic processes is well developed [7], [8]. Stochastic processes on general closed subsets of the real numbers, also known as *time scales*, allow a generalization to other domains [4], [9]. The notion of a stochastic process on time scales naturally leads to questions about probability theory on time scales, which has been developed by Kahraman [5]. We begin by introducing a generalized Poisson process on time scales and show it reduces to the conventional Poisson process on $\mathbb{R}$ and the binomial distribution on $\mathbb{Z}$. We then use properties of the Poisson process to motivate generalized Erlang and exponential distributions on time scales. Finally, we show that the generalized exponential distribution has an analogue of the memorylessness property under periodicity conditions on the time scale. + +## II. FOUNDATIONS + +A time scale, $\mathbb{T}$, is any closed subset of the real line. We restrict attention to causal time scales [6] where $0 \in \mathbb{T}$ and $t \ge 0$ for all $t \in \mathbb{T}$. The forward jump operator [2], [10], $\sigma(t)$, is defined as the point immediately to the right of $t$, in the sense that $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$. The graininess is the distance between points, defined as $\mu(t) := \sigma(t) - t$. For $\mathbb{R}$, $\sigma(t) = t$ and $\mu(t) = 0$. + +The time scale or Hilger derivative of a function $x(t)$ on $\mathbb{T}$ is defined as + +$$x^{\Delta}(t) := \frac{x(\sigma(t)) - x(t)}{\mu(t)}. \quad (II.1)$$ + +On $\mathbb{R}$, this is interpreted in the limiting case $\mu(t) \to 0$, so that $x^\Delta(t) = \frac{d}{dt}x(t)$. The Hilger integral can be viewed as the antiderivative in the sense that, if $y(t) = x^\Delta(t)$, then for $s, t \in \mathbb{T}$, + +$$\int_{\tau=s}^{t} y(\tau)\Delta\tau = x(t) - x(s).$$ + +The solution to the differential equation + +$$x^{\Delta}(t) = zx(t); x(0) = 1,$$ + +is $x(t) = e_z(t, 0)$ where [2], [10] + +$$e_z(t, s) := \exp \left( \int_{\tau=s}^{t} \frac{\log(1 + \mu(\tau)z)}{\mu(\tau)} \Delta\tau \right).$$ + +For an introduction to time scales, there is an online tutorial [10] or, for a more thorough treatment, see the text by Bohner and Peterson [2]. + +## III. THE POISSON PROCESS ON TIME SCALES + +We begin by presenting the derivation for a particular stochastic process on time scales which mirrors a derivation for the Poisson process on $\mathbb{R}$ [3]. + +Let $\lambda > 0$. Assume that the probability that an event occurs in the interval $[t, \sigma(s))_{\mathbb{T}}$ is given by + +$$-(\ominus\lambda)(t)(\sigma(s) - t) + o(s - t),$$ + +where $\ominus z := -z/(1 + \mu(t)z)$ [2], [10]. Hence the probability that no event occurs on the interval is given by + +$$1 + (\ominus\lambda)(t)(\sigma(s) - t) + o(s - t).$$ + +We also assume that at $t=0$ no events have occurred. + +We now define a useful notation. Let $X : \mathbb{T} \to \mathbb{N}^0$ be a counting process [8] where $\mathbb{N}^0$ denotes all nonnegative integers. For $k \in \mathbb{N}^0$, define $p_k(t) = \mathbb{P}[X(t) = k]$, the probability that $k$ events have occurred by time $t \in \mathbb{T}$. Let $t, s \in \mathbb{T}$ with $s > t$. Consider the successive intervals $[0, t)_{\mathbb{T}}$ +---PAGE_BREAK--- + +and $[t, \sigma(s))_{\mathbb{T}}$.
We can therefore set up the system of equations + +$$ +\begin{align*} +p_0(\sigma(s)) &= p_0(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\ +p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\ +&\quad + p_0(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\ +&\vdots \\ +p_k(\sigma(s)) &= p_k(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\ +&\quad + p_{k-1}(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\ +&\vdots +\end{align*} +$$ + +with initial conditions $p_0(0) = 1$ and $p_k(0) = 0$ for $k > 0$. We will let $s \to t$ and solve these equations recursively. Consider the $p_0$ equation. By the definition of the derivative on time scales, we have + +$$ +p_0^\Delta(t) = \lim_{s \to t} \frac{p_0(\sigma(s)) - p_0(t)}{\sigma(s) - t} = (\ominus\lambda)(t)p_0(t), +$$ + +which, using the initial value $p_0(0) = 1$, has a solution + +$$ +p_0(t) = e_{\ominus\lambda}(t, 0). \tag{III.1} +$$ + +Now consider the $p_1$ equation. Substituting the solution of the $p_0$ equation yields + +$$ +\begin{align*} +p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\ +&\quad + e_{\ominus\lambda}(t, 0)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t), +\end{align*} +$$ + +which, using (II.1), yields + +$$ +p_1^{\Delta}(t) = (\ominus\lambda)(t)p_1(t) - (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0). 
\quad (III.2) +$$ + +Using the variation of constants formula on time scales [2], we arrive at the solution + +$$ +\begin{align*} +p_1(t) &= - \int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\ +&= - \int_0^t e_\lambda(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\ +&= \lambda \int_0^t e_\lambda(\tau, 0)e_\lambda(0, t)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\ +&= \lambda \int_0^t e_{\ominus\lambda}(t, 0)\Delta\tau \\ +&= \lambda t e_{\ominus\lambda}(t, 0) \\ +&= \frac{\lambda}{1 + \mu(0)\lambda} t e_{\ominus\lambda}(t, \sigma(0)) \\ +&= -( \ominus \lambda )(0) t e_{\ominus \lambda }(t, \sigma(0)). +\end{align*} +$$ + +Now consider the $p_2$ equation. Substituting the solution of the $p_1$ equation yields + +$$ +\begin{align*} +p_2(\sigma(s)) &= p_2(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\ +&\quad - (\ominus\lambda)(0)te_{\ominus\lambda}(t, \sigma(0))[-(\ominus\lambda)(t)(\sigma(s) - t)] \\ +&\quad + o(s - t), +\end{align*} +$$ + +which, using (II.1) yields + +$$ +p_2^{\Delta}(t) = (\ominus\lambda)(t)p_2(t) + (\ominus\lambda)(0)(\ominus\lambda)(t)te_{\ominus\lambda}(t, \sigma(0)). 
+$$ + +Again, using the variation of constants formula on time scales, we arrive at the solution + +$$ +\begin{align*} +p_2(t) &= (\ominus\lambda)(0) \\ +& \quad \times \int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)\tau e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\ +&= (\ominus\lambda)(0) \\ +& \quad \times \int_0^t e_{\lambda}(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)\tau e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\ +&= -\lambda(\ominus\lambda)(0) \\ +& \quad \times \int_0^t \tau e_{\lambda}(\tau, \sigma(0)) e_{\lambda}(\sigma(0), t) e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\ +&= -\lambda(\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma(0)) \int_0^t \tau \Delta\tau \\ +&= -\lambda(\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma(0)) h_2(t, 0) \\ +&= \frac{-\lambda}{1 + \mu(\sigma(0))\lambda} (\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma^2(0)) h_2(t, 0) \\ +&= (\ominus\lambda)(\sigma(0)) (\ominus\lambda)(0) h_2(t, 0) e_{\ominus\lambda}(t, \sigma^2(0)). +\end{align*} +$$ + +In general, it can be shown via induction that + +$$ +p_k(t) = (-1)^k h_k(t, 0) e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)), +$$ + +where $h_k(t, 0)$ is the $k^{\text{th}}$ generalized Taylor monomial [2]. + +The above derivation motivates the following definition: + +**Definition III.1.** Let $\mathbb{T}$ be a time scale. We say $S: \mathbb{T} \rightarrow \mathbb{N}^0$ is a $\mathbb{T}$-Poisson process with rate $\lambda > 0$ if for $t \in \mathbb{T}$ and $k \in \mathbb{N}^0$, + +$$ +\mathbb{P}[S(t; \lambda) = k] = (-1)^k h_k(t, 0) e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)). \quad (III.3) +$$ + +Each fixed $t \in \mathbb{T}$ generates a discrete distribution of the number of arrivals at $t$. We now examine the specific examples of $\mathbb{R}$, $\mathbb{Z}$ and the harmonic time scale [2]. + +### A. On $\mathbb{R}$ and $\mathbb{Z}$ + +Let $S: \mathbb{R} \to \mathbb{N}^0$ be an $\mathbb{R}$-Poisson process.
Then $\sigma^i(0) = 0$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = -\lambda$ for all $t \in \mathbb{R}$ and $h_k(t, 0) = \frac{t^k}{k!}$. Thus we have + +$$ +\mathbb{P}[S(t; \lambda) = k] = \frac{(\lambda t)^k}{k!} e^{-\lambda t}, +$$ + +which we recognize as the Poisson distribution. + +Now let $S: \mathbb{Z} \to \mathbb{N}^0$ be a $\mathbb{Z}$-Poisson process. We have $\sigma^i(0) = i$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = \frac{-\lambda}{1+\lambda} := -p$, and $h_k(t, 0) = \binom{t}{k}$. Thus we have + +$$ +\mathbb{P}[S(t; \lambda) = k] = \binom{t}{k} p^k (1-p)^{t-k}, +$$ + +which we recognize as the binomial distribution. +---PAGE_BREAK--- + +Fig. 1. Probability against number of events and time for the $\mathbb{H}_n$-Poisson process with rate 1. + +Fig. 2. A comparison of probability versus number of events near $t = 2$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process than the $\mathbb{R}$-Poisson process. + +### B. On the Harmonic Time Scale + +Now let $S: \mathbb{H}_n \to \mathbb{N}^0$ be an $\mathbb{H}_n$-Poisson process with rate $\lambda$, where + +$$ t \in \mathbb{H}_n \text{ if and only if } t = \sum_{k=1}^{n} \frac{1}{k} \text{ for some } n \in \mathbb{N}, $$ + +which we call the harmonic time scale. To help understand later figures and emphasize that $S$ yields a distinct discrete distribution for each value of $t$, we show the probability against the number of events and time in Figure 1. The choice of $\mathbb{H}_n$ as the time scale reveals very informative behavior. Near $t=0$, where the graininess is large, we find behavior that is more like the integers. In contrast, away from $t=0$, where the graininess is small, we find behavior that is more like the real numbers. This behavior is demonstrated in Figures 2–4. + +Fig. 3.
A comparison of probability versus number of events near $t = 4$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{R}$-Poisson process than the $\mathbb{Z}$-Poisson process. + +Fig. 4. A comparison of probability versus time when we fix the number of events at 2 for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process near $t = 0$ and more like the $\mathbb{R}$-Poisson process away from $t = 0$. + +## IV. THE ERLANG DISTRIBUTION ON TIME SCALES + +A time scales generalization of the Erlang distribution can be generated by examining the waiting times between any number of events in the $\mathbb{T}$-Poisson process. To that end, let $\mathbb{T}$ be a time scale. Let $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda$. Let $T_n$ be a random variable which denotes the time until the $n^{\text{th}}$ event. We have + +$$ +\begin{aligned} +\mathbb{P}[S(t; \lambda) < n] &= \mathbb{P}[T_n > t] \\ +&= 1 - \mathbb{P}[T_n \leq t], +\end{aligned} + $$ + +which implies + +$$ 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] = \mathbb{P}[T_n \leq t], $$ + +which motivates the following definition. + +**Definition IV.1.** Let $\mathbb{T}$ be a time scale, $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda > 0$. We say $F(t; n, \lambda)$ is the $\mathbb{T}$-Erlang cumulative distribution function with shape parameter +---PAGE_BREAK--- +
We would like to know the probability that the $n^{th}$ event is in any subset of $\mathbb{T}$. To this end, we introduce the $\mathbb{T}$-Erlang probability density function in the next definition. + +**Definition IV.2.** Let $\mathbb{T}$ be a time scale, $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson Process with rate $\lambda > 0$. We say $f(t; n, \lambda)$ is the $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$ provided + +$$f(t; n, \lambda) = - \sum_{k=0}^{n-1} [\mathbb{P}[S(t; \lambda) = k]]^\Delta.$$ + +where the $\Delta$-differentiation is with respect to $t$. + +We want to show that $f(t; n, \lambda)$ can rightly be called a probability density with respect to some accumulation function. Thus, we have the following theorem. + +**Theorem IV.1.** Let $\mathbb{T}$ be a time scale. Let $F(t; n, \lambda)$ be a $\mathbb{T}$-Erlang cumulative distribution function with shape parameter $n$ and rate $\lambda$ and let $f(t; n, \lambda)$ be a $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$. Then + +$$\int_0^t f(\tau; n, \lambda) \Delta\tau = F(t; n, \lambda) \quad (IV.1)$$ + +and in particular + +$$\int_{\mathbb{T}} f(\tau; n, \lambda) \Delta\tau = 1. \quad (IV.2)$$ + +*Proof:* Implicit in the definition of the $\mathbb{T}$-Erlang probability distribution is a $\mathbb{T}$-Poisson process $S: \mathbb{T} \to \mathbb{N}^0$. 
By the assumption that + +$$\mathbb{P}[S(0; \lambda) = k] = \begin{cases} 1 & k = 0 \\ 0 & k > 0, \end{cases}$$ + +we have + +$$\begin{align*} +\int_0^t f(\tau; n, \lambda) \Delta\tau &= \int_0^t -\sum_{k=0}^{n-1} \mathbb{P}[S(\tau; \lambda) = k]^{\Delta} \Delta\tau \\ +&= -\sum_{k=0}^{n-1} \int_0^t \mathbb{P}[S(\tau; \lambda) = k]^{\Delta} \Delta\tau \\ +&= -\sum_{k=0}^{n-1} \mathbb{P}[S(\tau; \lambda) = k]|_0^t \\ +&= -\sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] \\ +&\qquad + \sum_{k=0}^{n-1} \mathbb{P}[S(0; \lambda) = k] \\ +&= 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] \\ +&= F(t; n, \lambda), +\end{align*}$$ + +which proves (IV.1). To prove (IV.2), we note for all $k < n$, + +$$\lim_{t \to \infty} \mathbb{P}[S(t; \lambda) = k] = 0,$$ + +by repeated application of L'Hôpital's rule for time scales on (III.3) [1]. This fact proves (IV.2) by the same argument as the proof of (IV.1). ■ + +We note that the moments of the $\mathbb{T}$-Erlang distribution cannot in general be calculated explicitly without some knowledge of the time scale. + +## V. THE EXPONENTIAL DISTRIBUTION ON TIME SCALES + +Of particular interest to us is the $\mathbb{T}$-Erlang distribution with shape parameter 1. By the above discussion and equation (III.1), the probability density function of this distribution is given by + +$$f(t; 1, \lambda) = -[\mathbb{P}[S(t; \lambda) = 0]]^{\Delta} = -(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0).$$ + +**Definition V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-Erlang random variable with shape parameter 1 and rate $\lambda$. Then we say $T$ is a $\mathbb{T}$-exponential random variable with rate $\lambda$. + +### A. The Expected Value + +The $\mathbb{T}$-exponential distribution gives us the rare opportunity to calculate a moment without any knowledge of the time scale. + +**Lemma V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$.
Then + +$$\mathbb{E}(T) = \frac{1}{\lambda}.$$ +---PAGE_BREAK--- + +*Proof:* Using integration by parts on time scales, we find + +$$ +\begin{align*} +\mathbb{E}(T) &= \int_0^\infty t[-(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)]\Delta t \\ +&= -te_{\ominus\lambda}(t, 0)|_0^\infty + \int_0^\infty e_{\ominus\lambda}(\sigma(t), 0)\Delta t \\ +&= 0 + \int_0^\infty (1 + \mu(t)(\ominus\lambda)(t))e_{\ominus\lambda}(t, 0)\Delta t \\ +&= \int_0^\infty \frac{1}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\Delta t \\ +&= -\frac{1}{\lambda}\int_0^\infty \frac{-\lambda}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\Delta t \\ +&= -\frac{1}{\lambda}\int_0^\infty (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)\Delta t \\ +&= -\frac{1}{\lambda}e_{\ominus\lambda}(t, 0)|_0^\infty \\ +&= -\frac{1}{\lambda}[0 - 1] \\ +&= \frac{1}{\lambda}, +\end{align*} +$$ + +which proves our claim. + +■ + +### B. On $\mathbb{R}$ and $\mathbb{Z}$ + +We note that if $\mathbb{T} = \mathbb{R}$, then we have + +$$f(t; 1, \lambda) = \lambda e^{-\lambda t},$$ + +which we recognize as the exponential distribution. By Lemma V.1, we find the mean of the exponential distribution is $1/\lambda$, which is well known. + +Now if $\mathbb{T} = \mathbb{Z}$, then we have + +$$f(t; 1, \lambda) = \frac{\lambda}{1+\lambda} \left(1 - \frac{\lambda}{1+\lambda}\right)^t = p(1-p)^t,$$ + +where $p := \frac{\lambda}{1+\lambda}$. We recognize the above as the geometric distribution. By Lemma V.1, we find the mean of the geometric distribution is $1/\lambda = (1-p)/p$. + +### C. The $\omega$-Memorylessness Property + +Both the geometric and exponential distributions are completely characterized by the fact that they have the memorylessness property [8].
We recall that the memoryless property on $\mathbb{R}$ is the property that if $T$ is a continuous random variable, then for all $t, \tau \in \mathbb{R}$, + +$$\mathbb{P}[T > t + \tau | T > t] = \mathbb{P}[T > \tau]$$ + +and that the memoryless property on $\mathbb{Z}$ is the property that if $T$ is a discrete random variable, then for all $t, \tau \in \mathbb{Z}$, + +$$\mathbb{P}[T > t + \tau | T > t] = \mathbb{P}[T > \tau].$$ + +We would like to find conditions on the time scale $\mathbb{T}$ such that the $\mathbb{T}$-exponential distribution on time scales has this property. Suppose $\mathbb{T}$ is $\omega$-periodic; that is, if $t \in \mathbb{T}$, then $t+\omega \in \mathbb{T}$. Then we can define a property much like the memorylessness property. + +**Definition V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale. We say a probability distribution on $\mathbb{T}$ has the $\omega$-memorylessness property provided for all $t \in \mathbb{T}$, + +$$\mathbb{P}[T > t + \omega | T > t] = \mathbb{P}[T > \omega].$$ + +We note that this definition generalizes the memorylessness property on $\mathbb{R}$ and $\mathbb{Z}$ since $\mathbb{R}$ and $\mathbb{Z}$ are $\omega$-periodic for any $\omega$ in $\mathbb{R}$ and $\mathbb{Z}$, respectively. + +Let $\mathbb{T}$ be $\omega$-periodic and let $T$ be a $\mathbb{T}$-exponential random variable. Then we claim the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. To show this claim, we first prove two lemmas. + +**Lemma V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then for $t, t_0 \in \mathbb{T}$, $e_{\ominus\lambda}(t+\omega, t_0) = e_{\ominus\lambda}(t, t_0 - \omega)$.
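Lemma V.2 can be sanity-checked on a concrete periodic time scale with nonconstant graininess. The sketch below is illustrative (the scale and rate are chosen only for the check): it uses the 1-periodic discrete scale $\mathbb{T} = \{n, n + 1/4 : n \in \mathbb{Z}\}$, on which $e_{\ominus\lambda}(t, s)$ for $t \ge s$ reduces to a finite product of factors $1/(1 + \mu(\tau)\lambda)$, computed exactly with rational arithmetic:

```python
from fractions import Fraction

# A 1-periodic, purely discrete time scale with nonconstant graininess:
# T = { n, n + 1/4 : n integer }, so mu alternates between 1/4 and 3/4.
lam = Fraction(3, 2)
omega = Fraction(1)

def sigma(t):
    """Forward jump operator on T."""
    return t + Fraction(1, 4) if t.denominator == 1 else t + Fraction(3, 4)

def mu(t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(t) - t

def e_circ_minus(t, s):
    """e_{(circle-minus)lam}(t, s) for t >= s on this discrete scale:
    the product of 1 / (1 + mu(tau) * lam) over grid points tau in [s, t)."""
    val, tau = Fraction(1), s
    while tau < t:
        val /= 1 + mu(tau) * lam
        tau = sigma(tau)
    return val

# Lemma V.2: e(t + omega, t0) == e(t, t0 - omega) at sample points of T.
for t in [Fraction(2), Fraction(9, 4), Fraction(3)]:
    for t0 in [Fraction(0), Fraction(1, 4), Fraction(1)]:
        assert e_circ_minus(t + omega, t0) == e_circ_minus(t, t0 - omega)
print("shift identity verified on the sample points")
```

The equality holds exactly because shifting the integration grid by $\omega$ permutes the same graininess values, which is precisely the mechanism of the proof below.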
+ +*Proof:* By the definition of the time scales exponential function, + +$$ +\begin{align*} +e_{\ominus\lambda}(t+\omega, t_0) &= \exp\left(\int_{t_0}^{t+\omega} \frac{\log(1+(\ominus\lambda)(s)\mu(s))}{\mu(s)}\Delta s\right) \\ +&= \exp\left(\int_{t_0}^{t+\omega} \frac{\log\left(1+\frac{-\lambda\mu(s)}{1+\lambda\mu(s)}\right)}{\mu(s)}\Delta s\right) \\ +&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log\left(1+\frac{-\lambda\mu(\tau+\omega)}{1+\lambda\mu(\tau+\omega)}\right)}{\mu(\tau+\omega)}\Delta\tau\right) \\ +&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log\left(1+\frac{-\lambda\mu(\tau)}{1+\lambda\mu(\tau)}\right)}{\mu(\tau)}\Delta\tau\right) \\ +&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log(1+(\ominus\lambda)(\tau)\mu(\tau))}{\mu(\tau)}\Delta\tau\right) \\ +&= e_{\ominus\lambda}(t, t_0 - \omega), +\end{align*} +$$ + +where we use the fact that for $\omega$-periodic time scales $\mu(t+\omega) = \mu(t)$ for all $t \in \mathbb{T}$ and the change of variables $\tau = s-\omega$. + +■ + +**Lemma V.3.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and $\lambda > 0$. Then for all $t \in \mathbb{T}$, $e_{\ominus\lambda}^{\Delta}(t+\omega, t) = 0$. + +*Proof:* By the product rule on time scales and Lemma V.2, + +$$ +\begin{align*} +e_{\ominus\lambda}^{\Delta}(t+\omega,t) &= (e_{\ominus\lambda}(t+\omega,t_0)e_{\ominus\lambda}(t_0,t))^{\Delta} \\ +&= (e_{\ominus\lambda}(t,t_0-\omega)e_{\ominus\lambda}(t_0,t))^{\Delta} \\ +&= (e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0))^{\Delta} \\ +&= e_{\ominus\lambda}(\sigma(t), t_0-\omega)\lambda e_{\lambda}(t,t_0) \\ +&+ (\ominus\lambda)(t)e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\ +&= \lambda(1+(\ominus\lambda)(t)\mu(t))e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\ +&+ (\ominus\lambda)(t)e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\ +&= [-(\ominus\lambda)(t) + (\ominus\lambda)(t)]e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\ +&= 0.
+\end{align*} +$$ + +■ +---PAGE_BREAK--- + +The above lemmas allow us to prove the following result. + +**Theorem V.4.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then the $\mathbb{T}$-exponential distribution with rate $\lambda$ has the $\omega$-memorylessness property. + +*Proof:* Let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$. By Lemma V.2 and Lemma V.3, + +$$ +\begin{aligned} +\mathbb{P}[T > t + \omega | T > t] &= \frac{\mathbb{P}[T > t + \omega]}{\mathbb{P}[T > t]} \\ +&= \frac{\int_{t+\omega}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau}{\int_{t}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau} \\ +&= \frac{e_{\ominus\lambda}(t+\omega, 0)}{e_{\ominus\lambda}(t, 0)} \\ +&= e_{\ominus\lambda}(t+\omega, t) \\ +&= e_{\ominus\lambda}(\omega, 0) \\ +&= \mathbb{P}[T > \omega], +\end{aligned} +$$ + +since $e_{\ominus\lambda}(t+\omega, t)$ is independent of $t$ by Lemma V.3 and may therefore be evaluated at $t = 0$. Thus the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. ■ + +## REFERENCES + +[1] M. Bohner and A. Peterson, *Advances in Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2003. + +[2] M. Bohner and A. Peterson, *Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2001. + +[3] W. Ching and M. Ng, *Markov chains: models, algorithms and applications*, Springer, New York, 2006. + +[4] John M. Davis, Ian A. Gravagne and Robert J. Marks II, "Bilateral Laplace Transforms on Time Scales: Convergence, Convolution, and the Characterization of Stationary Stochastic Time Series," Circuits, Systems, and Signal Processing, Birkhäuser, Boston, Volume 29, Issue 6 (2010), Page 1141. [DOI 10.1007/s00034-010-9196-2] + +[5] S. Kahraman, "Probability Theory Applications on Time Scales," M.S. Thesis, İzmir Institute of Technology, 2008. + +[6] Robert J. Marks II, Ian A. Gravagne and John M.
Davis, "A Generalized Fourier Transform and Convolution on Time Scales," Journal of Mathematical Analysis and Applications, Volume 340, Issue 2, 15 April 2008, Pages 901-919. + +[7] R.J. Marks II, *Handbook of Fourier Analysis and Its Applications*, Oxford University Press, 2009. + +[8] A. Papoulis, *Probability, Random Variables and Stochastic Processes*, 3rd Edition, McGraw-Hill, New York, 1991. + +[9] S. Sanyal, "Stochastic Dynamic Equations," Ph.D. Thesis, Missouri University of Science and Technology, 2008. + +[10] Baylor Time Scales Group, http://timescales.org/ \ No newline at end of file diff --git a/samples/texts_merged/88513.md b/samples/texts_merged/88513.md new file mode 100644 index 0000000000000000000000000000000000000000..b3010a5d04a873f1f38f8b29d5eafe598a2856bb --- /dev/null +++ b/samples/texts_merged/88513.md @@ -0,0 +1,161 @@ + +---PAGE_BREAK--- + +# VALIDATION OF THE GAMMA SUBMERSION CALCULATION OF THE REMOTE POWER PLANT MONITORING SYSTEM OF THE FEDERAL STATE OF BADEN-WÜRTTEMBERG + +Janis Lapins¹, Wolfgang Bernnat², Walter Scheuermann² + +¹Institute of Nuclear Technology and Energy Systems, Pfaffenwaldring 31, University of Stuttgart, +Stuttgart, Germany + +²KE-Technologie GmbH, Stuttgart, Germany + +**Abstract:** The radioactive dispersion model used in the framework of the remote nuclear power plant monitoring system of the federal state of Baden-Württemberg applies the method of adjoint fluxes to calculate the sky shine from gamma rays, taking into account the gamma energy spectrum of the released nuclides. The spectrum is represented by 30 energy groups. A procedure has been developed to calculate the dose distribution on the ground in case of an accident with a release of radioactivity. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using the Los Alamos code MCNP6.
+ +**Key words:** adjoint method, MCNP, validation, gamma submersion + +## THE MODULAR DISPERSION TOOL “ABR” + +The federal state of Baden-Württemberg, Germany, operates a remote power plant monitoring system that has online access to the main safety-relevant parameters of the power plant as well as to the meteorological data provided by the German weather service (DWD). The data are sent to a server system that is operated for the Ministry of Environment of the federal state. The radioactive dispersion tool “ABR” is an integral part of this system and is used to calculate the radiological consequences of an accident and to prepare and perform emergency exercises for civil protection. For a dispersion calculation, the ABR has to account for the following: + +* Interpolation of forecasted or measured precipitation onto the grid (precipitation module) + +* Calculation of the wind field from forecast or measurement on the grid (terrain-following wind field module) + +* Release of the amount of radioactivity to the environment, accounting for the decay of nuclides between shutdown of the reactor and the time of emission (release module) + +* Transport of radioactivity with the wind, including fallout and washout due to deposition and rain, respectively (Lagrange particle transport module) + +* Sky shine to a detector 1 m above the ground (sky shine module) + +* Calculation of the doses from various exposure paths (gamma submersion, beta submersion, inhalation and ground shine) for 25 organs and one effective dose (dose module) + +All of this is performed by the different modules of the programme system mentioned above. However, this paper will focus on the validation of the sky shine module in conjunction with the dose module, which calculates the gamma submersion by the method of adjoint fluxes [1]. For validation, the reference code system MCNP6 [2] is used, and results produced with the ABR are benchmarked against it.
+ +## METHOD OF CALCULATION + +The dose calculation applies the method of adjoint fluxes to compute the gamma cloud radiation, taking into account the gamma-ray energy spectrum of the released nuclides, represented by 30 energy groups. This procedure enables an efficient algorithm to calculate the dose rates or integrated doses in case of an accident with a release of radioactivity. The system is part of the emergency preparedness and response and is in online operational service. The adjoint fluxes were produced from MCNP6 results [2]. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using MCNP6. The +---PAGE_BREAK--- + +computational procedure comprises the following steps: From a point or a volume source, photons are started isotropically, either at the average energies of the 30 energy groups or with the distinct gamma spectrum of a single nuclide. Travelling through space, these photons collide with atoms present in the air or the ground and are scattered until they reach the detector. With the help of point detectors, the flux density spectrum can be estimated, and, by making use of a dose-flux relation, the resulting gamma submersion dose on the ground can be determined. + +The backward method in the ABR uses the adjoint fluxes to evaluate the influence of a certain nuclide (spectrum) in the cloud at a certain distance from a detector point on the ground. To obtain these adjoint fluxes, a large number of calculations has been performed to determine the adjoint flux for all energy groups and distances (radii). The radii for which the fluxes were produced serve as support points; fluxes at radii between support points are interpolated. Depending on the energy group under consideration, different exponential fitting functions are used that account for both energy and distance.
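The interpolation between support radii can be sketched as follows. The support radii, the flux values, and the single-exponential fit per interval below are illustrative assumptions; they are not the actual ABR fitting functions or data.

```python
import math

# Hypothetical adjoint-flux support points for one energy group:
# radii in metres, adjoint flux in arbitrary units (illustrative only).
support_r = [100.0, 200.0, 400.0, 800.0, 1600.0]
support_phi = [3.2e-6, 9.5e-7, 1.8e-7, 2.1e-8, 1.4e-9]

def adjoint_flux(r):
    """Piecewise-exponential interpolation between support radii."""
    if r <= support_r[0]:
        return support_phi[0]
    if r >= support_r[-1]:
        return support_phi[-1]
    for i in range(len(support_r) - 1):
        r0, r1 = support_r[i], support_r[i + 1]
        if r0 <= r <= r1:
            # fit phi(r) = phi(r0) * exp(-k (r - r0)) through both endpoints
            k = math.log(support_phi[i] / support_phi[i + 1]) / (r1 - r0)
            return support_phi[i] * math.exp(-k * (r - r0))
```

By construction, the interpolant reproduces the support values and decays monotonically within each interval, mirroring the exponential character of the fitted fluxes.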
The energy deposited within human tissue is accounted for by age class, using dose factors for organs and for the effective dose from the German Radiation Protection Ordinance [5]. + +## SOLUTION OF THE TRANSPORT EQUATION + +The transport equation in operator notation is + +$$M\Phi = Q \quad (1)$$ + +with + +$$M = \vec{\Omega} \cdot \text{grad} + \Sigma_T(E) - \iint_{\vec{\Omega}', E'} \Sigma_s(\vec{\Omega}' \rightarrow \vec{\Omega}, E' \rightarrow E)\, dE'\, d\Omega' \quad (2)$$ + +In equation (1) above, $Q(\vec{r}, \vec{\Omega}, E)$ represents the source vector and $\Phi(\vec{r}, \vec{\Omega}, E)$ the flux density vector, both of which depend on the location $\vec{r}$, the direction $\vec{\Omega}$, and the energy $E$. In equation (2), the first term represents the leakage, $\Sigma_T(E)$ the total collision term, and the integral the scattering from any direction $\vec{\Omega}'$ and energy $E'$ into the direction $\vec{\Omega}$ and energy $E$ of interest. + +After solution of the transport equation, reaction rates, e.g. dose rates $\bar{D}$, can be calculated with the help of a response function $R(\vec{r}, E)$ such that the condition + +$$\bar{D} = \langle \Phi R \rangle = \int_V \int_E \Phi(\vec{r}, E) R(\vec{r}, E)\, d\vec{r}\, dE \quad (3)$$ + +is valid. The equation adjoint to equation (1) is + +$$M^+ \Phi^+ = R \quad (4)$$ + +The adjoint operator has to be defined in such a way that the condition + +$$\langle \Phi^+ M \Phi \rangle = \langle \Phi M^+ \Phi^+ \rangle \quad (5)$$ + +holds. If this is the case, the following is also valid: + +$$\bar{D} = \langle \Phi R \rangle = \langle \Phi M^+ \Phi^+ \rangle = \langle \Phi^+ M \Phi \rangle = \langle \Phi^+ Q \rangle \quad (6)$$ + +That is, instead of eq. (1), the adjoint equation (4) can be solved, and the dose is then determined from $\bar{D} = \langle \Phi^+ Q \rangle$.
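The duality in eq. (6) can be checked on a small discrete analogue, where an invertible matrix stands in for the transport operator $M$ and its transpose for the adjoint $M^+$; the matrix and vectors are arbitrary illustrative values.

```python
# Discrete analogue of eqs. (1), (4) and (6): for M Phi = Q and
# M^T Phi+ = R, the inner products <Phi, R> and <Phi+, Q> coincide.
M = [[4.0, -1.0],
     [2.0, 3.0]]
Q = [1.0, 2.0]   # "source" vector
R = [0.5, 1.0]   # "response" (dose) vector

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

phi = solve2(M, Q)                 # forward solution, cf. eq. (1)
phi_adj = solve2(transpose(M), R)  # adjoint solution, cf. eq. (4)

# both routes give the same "dose", cf. eq. (6)
assert abs(dot(phi, R) - dot(phi_adj, Q)) < 1e-12
```

This is exactly the trade the adjoint approach exploits: the adjoint problem is solved once per detector response, after which any source distribution only requires the cheap inner product with $\Phi^+$.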
The solution of the adjoint transport equation provides a relation between the photon emission of a certain energy/energy range of a point/volume regarded and the dose at a computational point. + +## CALCULATION OF ADJOINT FLUXES WITH MCNP + +The calculation of the gamma submersion caused by radioactive nuclides in the radioactive cloud can be achieved if the spatial and energy distributions of the gamma sources in relation to certain computational points at the ground are known, together with the composition of air and soil. The computation necessitates the solution of the photon transport equation with respect to the energy dependence of the possible reactions of photons with atoms in air or soil (photoelectric effect, Compton effect, pair production etc.). The solution of the transport equation yields photon spectra for computational points that enable dose calculations. Relevant dose/flux relations are defined by the ICRP [3]; for photons, ICRP 74 can be applied. The dose/flux relation is presented in **Figure 2**. With Monte Carlo codes and their continuous-energy cross sections, a direct solution of the adjoint transport equation is not possible. Nevertheless, these codes can be used to estimate the contribution of a source point/volume to the dose at a computational point, see **Figure 1**. To do this, from the source +---PAGE_BREAK--- + +point/volume a sufficiently large number of photon trajectories has to be simulated and their contribution to the dose calculated.
Computing the dose rates at a computational point of interest, the relevant contributions from all source points/volumes of the whole emission field have to be summed up, such that the dose at the computational point (x, y, z) can be estimated with + +$$ +D(x, y, z) = \sum_q \sum_g \Phi_g^+ (r_q, z_q - z) \cdot Q_g(x_q, y_q, z_q) \cdot V_q \quad (7) +$$ + +with + +$$ +r_q = \sqrt{(x_q - x)^2 + (y_q - y)^2} \quad (8) +$$ + +Here $\Phi_g^+$ is the adjoint flux depending on the radius and the height, $Q_g$ the specific source concentration, and $V_q$ the volume that contains the concentration. + +**Figure 1.** Source point/volume $Q(r_q, z_q)$ and computational point of interest $P(x, y, z)$ in dose calculations + +The index $q$ corresponds to the source; the index $g$ corresponds to the energy group or to the gamma line of the photon emission energy of the source. The coordinates $x, y, z$ correspond to the computational point of interest. The coordinates $x_q, y_q$ (resp. $r_q$), $z_q$ correspond to the centre point of the source volume $V_q$, see Figure 1. + +**Figure 2.** Dose/flux relation for gamma energies from 0.01 – 10 MeV at 0.07 cm depth in the body according to ICRP 74 [3] + +## TWO SCENARIOS FOR DOSE COMPARISONS: A HOMOGENEOUS AND A NON-HOMOGENEOUS RADIOACTIVE CLOUD OF REFERENCE NUCLIDES + +For comparison of the gamma submersion dose rates, two scenarios have been defined. The base scenario assumes a homogeneous concentration distribution of the three reference nuclides Xe-133, Cs-137 and I-131 with flat topography for both the ABR and MCNP. The dispersion module of the ABR is not used; instead, the concentrations are directly input into the sky shine and dose modules of the ABR. The computational domain and the boundary conditions for this scenario are presented in Table 1. A sketch of the scenario is shown in Figure 5. + +An advanced scenario with a 3-D cloud is also presented.
For this scenario, a realistic concentration distribution has been generated with the ABR, i.e. a release height of 150 metres, a wind speed of 4 m/s at 10 m height, wind speed increasing with height, and diffusion category D (neutral conditions). The released activity is transported with the wind, and after one time step the doses are compared. Since MCNP cannot simulate the transport of radioactive particles with the wind, the concentration distribution of the isotope regarded is imported into MCNP via an interface. The results of the dose calculation are then compared. The radioactive cloud together with the wind speed is presented in +---PAGE_BREAK--- + +Figure 6. For this paper, the shape of the cloud is regarded as given, since the dose rates, not the cloud shape, are the subject of the comparison. The boundary conditions and general assumptions for this case are given in Table 2. + +The gamma lines of the reference nuclides are shown in **Figure 3** and **Figure 4** [4]. These gamma emissions are accounted for in the 30-group spectrum of the ABR with their respective intensities. For the MCNP calculation, the gamma energies and their respective intensities are directly input. + +**Table 1.** Simulation set-up for homogeneous cloud from 120 – 160 m
| Constant source | ABR | MCNP6 |
| --- | --- | --- |
| Computational area (x, y, z) | 20 km x 20 km x 1 km | 20 km x 20 km x 1 km |
| Mesh number (x, y, z) | 100 x 100 x 25 | – |
| Mesh size in x, y, z direction | 200 m, 200 m, 40 m | – |
| Cloud height | 120 – 160 m | 120 – 160 m |
| Activity in cloud [Bq/m³] | | |
| Cs-137 | 6.0E+04 | 6.0E+04 |
| Xe-133 | 2.0E+10 | 2.0E+10 |
| I-131 | 1.0E+06 | 1.0E+06 |
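As a plausibility check on Table 1, the concentrations can be converted into the total activity contained in the homogeneous layer (20 km x 20 km x 40 m). These totals are derived here for illustration only and are not values quoted in the paper.

```python
# volume of the homogeneous emission layer from Table 1 (120 - 160 m)
volume_m3 = 20_000 * 20_000 * 40   # = 1.6e10 m^3

concentration_bq_per_m3 = {        # concentrations from Table 1
    "Cs-137": 6.0e4,
    "Xe-133": 2.0e10,
    "I-131": 1.0e6,
}

total_bq = {nuc: c * volume_m3 for nuc, c in concentration_bq_per_m3.items()}
# e.g. Cs-137: 6.0e4 Bq/m^3 * 1.6e10 m^3 = 9.6e14 Bq in the cloud
```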
+ +**Table 2.** Simulation set-up for a non-homogeneous cloud + +
| Realistic source | ABR | MCNP6 |
| --- | --- | --- |
| Computational area (x, y, z) | 20 km x 20 km x 1 km | 20 km x 20 km x 1 km |
| Mesh number (x, y, z) | 100 x 100 x 25 | 100 x 100 x 25 |
| Mesh size in x, y, z direction | 200 m, 200 m, 40 m | 200 m, 200 m, 40 m |
| Emission height | 150 m | 150 m |
| Total activity released [Bq] | | Activity imported via interface |
| Cs-137 | 6.0E+09 | 6.0E+09 |
| Xe-133 | 2.0E+17 | 2.0E+17 |
| I-131 | 1.0E+10 | 1.0E+10 |
| Wind speed in 10 m height | 4 m/s | – |
| Diffusion category | D | – |
| Emission duration | 1 hour | – |
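The dose estimate of eq. (7) reduces to a double sum over source voxels and energy groups. A minimal sketch follows; the adjoint-flux model (a simple exponential fall-off with slant distance) and the voxel concentrations are purely hypothetical stand-ins for the fitted ABR tables, while the voxel volume matches the mesh of Table 1.

```python
import math

def phi_adj(g, r_q, dz):
    """Hypothetical adjoint flux per energy group g (NOT the ABR fits)."""
    slant = math.hypot(r_q, dz)
    return [1.0e-15, 5.0e-16][g] * math.exp(-slant / 300.0)

# hypothetical source voxels: (x_q, y_q, z_q, concentration per group [Bq/m^3])
voxels = [
    (100.0, 0.0, 140.0, [2.0e4, 1.0e4]),
    (300.0, 0.0, 140.0, [1.5e4, 0.5e4]),
]
V_q = 200.0 * 200.0 * 40.0  # voxel volume for the 200 m x 200 m x 40 m mesh

def dose(x, y, z):
    """Discrete dose sum of eq. (7), with r_q from eq. (8)."""
    total = 0.0
    for x_q, y_q, z_q, conc in voxels:
        r_q = math.hypot(x_q - x, y_q - y)   # eq. (8)
        for g, Q_g in enumerate(conc):
            total += phi_adj(g, r_q, z_q - z) * Q_g * V_q
    return total

d = dose(0.0, 0.0, 1.0)  # detector 1 m above the ground at the origin
```

In the ABR, $\Phi_g^+$ comes from the interpolated support-point tables rather than a closed-form expression, but the summation structure is the same.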
+ +**Figure 3.** Gamma lines and intensities of Cs-137 and Xe-133 (NUDAT 2.6) [4] + +**Figure 4.** Gamma lines of I-131 (NUDAT 2.6) [4] +---PAGE_BREAK--- + +**Figure 5.** Sketch of the scenario with homogeneous emission layer and exemplary paths from the cloud to the detector (direct, indirect via air and ground reflection, or both) + +**Figure 6.** Non-homogeneous distribution of aerosols after 1 hour with a wind speed of 4 m/s at a height of 10 m, simulated with the ABR. The concentration is exported to MCNP. + +## RESULTS OF COMPARISON + +The results of the comparison are presented in the tables below. One can see that the results are in good agreement for all three reference nuclides. + +**Table 3.** Results for the base case with homogeneous cloud
| Nuclide | MCNP6 [Sv/h] | ABR [Sv/h] | Ratio ABR/MCNP6 |
| --- | --- | --- | --- |
| Cs-137 | 9.31E-07 | 8.33E-07 | 0.89 |
| Xe-133 | 1.36E-02 | 1.30E-02 | 0.96 |
| I-131 | 1.01E-05 | 1.03E-05 | 1.02 |
+ +**Table 4.** Results for the advanced case with a non-homogeneous cloud + +
| Nuclide | MCNP6 [Sv/h] | ABR [Sv/h] | Ratio ABR/MCNP6 |
| --- | --- | --- | --- |
| Cs-137 | 1.42E-10 | 1.36E-10 | 0.96 |
| Xe-133 | 4.49E-04 | 4.9E-04 | 1.09 |
| I-131 | 1.49E-10 | 1.57E-10 | 1.05 |
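The ratio columns of Tables 3 and 4 follow directly from the two dose-rate columns and can be reproduced (values copied from the tables):

```python
# (nuclide, MCNP6 dose rate [Sv/h], ABR dose rate [Sv/h])
base = [("Cs-137", 9.31e-7, 8.33e-7),
        ("Xe-133", 1.36e-2, 1.30e-2),
        ("I-131", 1.01e-5, 1.03e-5)]
advanced = [("Cs-137", 1.42e-10, 1.36e-10),
            ("Xe-133", 4.49e-4, 4.9e-4),
            ("I-131", 1.49e-10, 1.57e-10)]

for name, mcnp, abr in base + advanced:
    print(f"{name}: ratio ABR/MCNP6 = {abr / mcnp:.2f}")

# the largest deviation is Cs-137 in the base case: ratio 0.89, about -11 %
```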
+ +## CONCLUSION + +The comparison of the gamma submersion dose rates shows good agreement between the ABR and MCNP6 for the cases analysed. For the base case, the maximum deviation over the three reference nuclides occurs for the dose rate of Cs-137 and amounts to -11%. + +For the non-homogeneous concentration distribution of the reference nuclides, the agreement is better than 10%. Keeping in mind that a real dispersion calculation involves a multitude of uncertainties, e.g. in the emitted nuclide vector, the meteorological prediction and the transport of the cloud, the agreement obtained for the dose rates of each reference nuclide can be regarded as excellent. + +## REFERENCES + +[1] Sohn, G. Pfister, W. Bernnat, G. Hehn: Dose, ein neuer Dosismodul zur Berechnung der effektiven Dosis von 21 Organdosen für die Dosispfade Submersion, Inhalation und Bodenstrahlung (German: DOSE, a new dose module for the calculation of the effective dose from 21 organ doses for the exposure paths submersion, inhalation and ground radiation), IKE 6 UM 3, Nov. 1994. + +[2] D. B. Pelowitz: MCNP6™ User's Manual, Version 1.0, LA-CP-13-00634, Rev. 0 (2013). + +[3] ICRP, 1996: Conversion Coefficients for use in Radiological Protection against External Radiation. ICRP Publication 74, Ann. ICRP 26 (3-4). + +[4] NUDAT 2.6, National Nuclear Data Centre, Brookhaven National Laboratory. + +[5] Entwurf zur AVV zu §47 Strahlenschutzverordnung, Anhang 3 (German: Draft of the General Administrative Regulation for §47 of the German Radiation Protection Ordinance, Appendix 3), (2005). \ No newline at end of file