**Table 1.** Families of integrable variable coefficient nonlinear Schrödinger equations reducible to the standard NLS by the similarity transformation of Lemma 1. In each equation, the coefficient of $x^2\psi$ is $-(c'(t)+c^2(t))/(4l_0)$, the drift term is $-ic(t)x\psi_x$, and the nonlinearity coefficient is $-\lambda l_0\mu(t)$ with $\mu(t) = e^{-\int_0^t c(z)\,dz}$.

1. $i\psi_t = l_0 \psi_{xx} - \frac{bmt^{m-1} + b^2t^{2m}}{4l_0} x^2 \psi - ibt^m x \psi_{x} - \lambda l_0 e^{-\frac{bt^{m+1}}{m+1}} |\psi|^2 \psi$
2. $i\psi_t = l_0 \psi_{xx} - \frac{1}{2l_0t^2} x^2 \psi + \frac{i}{t} x \psi_{x} - \lambda l_0 t |\psi|^2 \psi$
3. $i\psi_t = l_0 \psi_{xx} - \frac{c^2}{4l_0} x^2 \psi + icx \psi_{x} - \lambda l_0 e^{ct} |\psi|^2 \psi$
4. $i\psi_t = l_0 \psi_{xx} - \frac{b^2}{4l_0} x^2 \psi + ibx \psi_{x} - \lambda l_0 e^{bt} |\psi|^2 \psi$
5. $i\psi_t = l_0 \psi_{xx} - \frac{abe^{bt} + a^2 e^{2bt}}{4l_0} x^2 \psi - ia e^{bt} x \psi_{x} - \lambda l_0 e^{-\frac{ae^{bt}}{b}} |\psi|^2 \psi$
6. $i\psi_t = l_0 \psi_{xx} - \frac{bt^{-1} + b^2 \ln^2 t}{4l_0} x^2 \psi - ib\ln(t)\, x \psi_{x} - \lambda l_0 t^{-bt} e^{bt} |\psi|^2 \psi$
7. $i\psi_t = l_0 \psi_{xx} + \frac{1}{4l_0} x^2 \psi - i\cot(t)\, x \psi_{x} - \lambda l_0 \csc(t) |\psi|^2 \psi$
8. $i\psi_t = l_0 \psi_{xx} + \frac{1}{4l_0} x^2 \psi + i\tan(t)\, x \psi_{x} - \lambda l_0 \sec(t) |\psi|^2 \psi$
9. $i\psi_t = l_0 \psi_{xx} - \frac{1}{4l_0} x^2 \psi - i\coth(t)\, x \psi_{x} - \lambda l_0 \,\mathrm{csch}(t) |\psi|^2 \psi$
10. $i\psi_t = l_0 \psi_{xx} - \frac{1}{4l_0} x^2 \psi - i\tanh(t)\, x \psi_{x} - \lambda l_0 \,\mathrm{sech}(t) |\psi|^2 \psi$
11. $i\psi_t = l_0 \psi_{xx} - \frac{a^2 + ab\sinh(bt) + a^2 \sinh^2(bt)}{4l_0} x^2 \psi - ia\cosh(bt)\, x \psi_{x} - \lambda l_0 e^{-\frac{a\sinh(bt)}{b}} |\psi|^2 \psi$
12. $i\psi_t = l_0 \psi_{xx} - \frac{a^2 - ab\sinh(bt) + a^2 \sinh^2(bt)}{4l_0} x^2 \psi + ia\cosh(bt)\, x \psi_{x} - \lambda l_0 e^{\frac{a\sinh(bt)}{b}} |\psi|^2 \psi$
13. $i\psi_t = l_0 \psi_{xx} - \frac{-a^2 + ab\cosh(bt) + a^2 \cosh^2(bt)}{4l_0} x^2 \psi - ia\sinh(bt)\, x \psi_{x} - \lambda l_0 e^{-\frac{a\cosh(bt)}{b}} |\psi|^2 \psi$
14. $i\psi_t = l_0 \psi_{xx} - \frac{a(a-b)\tanh^2(bt) + ab}{4l_0} x^2 \psi - ia\tanh(bt)\, x \psi_{x} - \lambda l_0 |\cosh(bt)|^{-\frac{a}{b}} |\psi|^2 \psi$
15. $i\psi_t = l_0 \psi_{xx} - \frac{a(a+b)\tanh^2(bt) - ab}{4l_0} x^2 \psi + ia\tanh(bt)\, x \psi_{x} - \lambda l_0 |\cosh(bt)|^{\frac{a}{b}} |\psi|^2 \psi$
16. $i\psi_t = l_0 \psi_{xx} - \frac{a(a-b)\coth^2(bt) + ab}{4l_0} x^2 \psi - ia\coth(bt)\, x \psi_{x} - \lambda l_0 |\sinh(bt)|^{-\frac{a}{b}} |\psi|^2 \psi$
17. $i\psi_t = l_0 \psi_{xx} - \frac{a(a+b)\coth^2(bt) - ab}{4l_0} x^2 \psi + ia\coth(bt)\, x \psi_{x} - \lambda l_0 |\sinh(bt)|^{\frac{a}{b}} |\psi|^2 \psi$
+
+
+
+**Table 2.** Riccati equations used to generate the similarity transformations.
+
| # | Riccati Equation | Similarity Transformation from Table 1 |
|---|---|---|
| 1 | $y'_x = ax^n y^2 + bmx^{m-1} - ab^2 x^{n+2m}$ | 1 |
| 2 | $(ax^n + b)y'_x = by^2 + ax^{n-2}$ | 2 |
| 3 | $y'_x = ax^n y^2 + bx^m y + bcx^m - ac^2 x^n$ | 3 |
| 4 | $y'_x = ax^n y^2 + bx^m y + ckx^{k-1} - bcx^{m+k} - ac^2 x^{n+2k}$ | 1 |
| 5 | $xy'_x = ax^n y^2 + my - ab^2 x^{n+2m}$ | 3 |
| 6 | $(ax^n + bx^m + c)y'_x = \alpha x^k y^2 + \beta x^s y - \alpha b^2 x^k + \beta b x^s$ | 4 |
| 7 | $y'_x = be^{\mu x} y^2 + ace^{cx} - a^2 be^{(\mu+2c)x}$ | 5 |
| 8 | $y'_x = ae^{\mu x} y^2 + cy - ab^2 e^{(\mu+2c)x}$ | 3 |
| 9 | $y'_x = ae^{cx} y^2 + bnx^{n-1} - ab^2 e^{cx} x^{2n}$ | 1 |
| 10 | $y'_x = ax^n y^2 + bce^{cx} - ab^2 x^n e^{2cx}$ | 8 |
| 11 | $y'_x = ax^n y^2 + cy - ab^2 x^n e^{2cx}$ | 3 |
| 12 | $y'_x = [a\sinh^2(cx) - c]y^2 - a\sinh^2(cx) + c - a$ | 6 |
| 13 | $2y'_x = [a - b + a\cosh(bx)]y^2 + a + b - a\cosh(bx)$ | 7 |
| 14 | $y'_x = a(\ln x)^n y^2 + bmx^{m-1} - ab^2 x^{2m}(\ln x)^n$ | 1 |
| 15 | $xy'_x = ax^n y^2 + b - ab^2 x^n \ln^2 x$ | 8 |
| 16 | $y'_x = [b + a\sin^2(bx)]y^2 + b - a + a\sin^2(bx)$ | 9 |
| 17 | $2y'_x = [a + b + a\cos(bx)]y^2 + b - a + a\cos(bx)$ | 10 |
| 18 | $y'_x = [b + a\cos^2(bx)]y^2 + b - a + a\cos^2(bx)$ | 10 |
| 19 | $y'_x = c(\arcsin x)^n y^2 + ay + ab - b^2 c(\arcsin x)^n$ | 3 |
| 20 | $y'_x = a(\arcsin x)^n y^2 + \beta m x^{m-1} - a\beta^2 x^{2m}(\arcsin x)^n$ | 1 |
| ⋮ | ⋮ | ⋮ |
| 38 | $y'_x = fy^2 - a^2 f + ab\sinh(bx) - a^2 f\sinh^2(bx)$ | 14 |
| 39 | $y'_x = fy^2 - a^2 f + ab\sin(bx) + a^2 f\sin^2(bx)$ | 15 |
| 40 | $y'_x = fy^2 - a^2 f + ab\cos(bx) + a^2 f\cos^2(bx)$ | 16 |
| 41 | $y'_x = fy^2 - a\tan^2(bx)(af - b) + ab$ | 17 |
| 42 | $y'_x = fy^2 - a\cot^2(bx)(af - b) + ab$ | 18 |
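Rows 38-42 share an arbitrary coefficient $f = f(x)$ and each admits an elementary particular solution, for example $y = a\cosh(bx)$ for row 38 and $y = a\tan(bx)$ for row 41. This observation (ours, not stated in the table) can be spot-checked numerically with central differences; the sample coefficient, point, and tolerance below are arbitrary choices:

```python
import math

def residual(y, rhs, x, h=1e-6):
    """Central-difference residual y'(x) - rhs(x, y(x)) of a Riccati equation."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy - rhs(x, y(x))

a, b = 0.7, 1.3

def f(x):
    """Arbitrary smooth coefficient f(x) appearing in rows 38-42."""
    return math.sin(x) + 2.0

# (right-hand side, candidate particular solution) for rows 38-42
cases = [
    (lambda x, y: f(x)*y**2 - a**2*f(x) + a*b*math.sinh(b*x) - a**2*f(x)*math.sinh(b*x)**2,
     lambda x: a*math.cosh(b*x)),        # row 38
    (lambda x, y: f(x)*y**2 - a**2*f(x) + a*b*math.sin(b*x) + a**2*f(x)*math.sin(b*x)**2,
     lambda x: -a*math.cos(b*x)),        # row 39
    (lambda x, y: f(x)*y**2 - a**2*f(x) + a*b*math.cos(b*x) + a**2*f(x)*math.cos(b*x)**2,
     lambda x: a*math.sin(b*x)),         # row 40
    (lambda x, y: f(x)*y**2 - a*math.tan(b*x)**2*(a*f(x) - b) + a*b,
     lambda x: a*math.tan(b*x)),         # row 41
    (lambda x, y: f(x)*y**2 - a*(1.0/math.tan(b*x))**2*(a*f(x) - b) + a*b,
     lambda x: -a/math.tan(b*x)),        # row 42
]

for rhs, y in cases:
    assert abs(residual(y, rhs, x=0.6)) < 1e-5
```

Since $f$ drops out of the residual for these candidates identically, the check passes for any smooth choice of $f$.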
+
+Symmetry **2016**, *8*, 38
+
+**2. Soliton Solutions for VCNLS through Riccati Equations and Similarity Transformations**
+
In this section, by means of a similarity transformation introduced in [42], and using computer
algebra systems, we show the existence of Peregrine, bright and dark solitons for the family of
Equation (1). Thanks to computer algebra systems, we are able to find an extensive list of integrable
variable coefficient nonlinear Schrödinger equations (see Table 1). For similar work and applications to
Bose-Einstein condensates, we refer the reader to [1].
+
+**Lemma 1.** ([42]) Suppose that $h(t) = -l_0\lambda\mu(t)$ with $\lambda \in \mathbb{R}$, $l_0 = \pm 1$ and that $c(t)$, $\alpha(t)$, $\delta(t)$, $\kappa(t)$, $\mu(t)$ and $g(t)$ satisfy the equations:
+
+$$
+\begin{align}
+\alpha(t) &= l_0 \frac{c(t)}{4}, \quad \delta(t) = -l_0 \frac{g(t)}{2}, \quad h(t) = -l_0 \lambda \mu(t), \tag{2} \\
+\kappa(t) &= \kappa(0) - \frac{l_0}{4} \int_0^t g^2(z) dz, \tag{3} \\
\mu(t) &= \mu(0) \exp \left( \int_0^t (2d(z) - c(z)) dz \right), \quad \mu(0) \neq 0, \tag{4} \\
+g(t) &= g(0) - 2l_0 \exp \left( -\int_0^t c(z) dz \right) \int_0^t \exp \left( \int_0^z c(y) dy \right) f(z) dz. \tag{5}
+\end{align}
+$$
+
+Then,
+
+$$
+\psi(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u(t,x) \quad (6)
+$$
+
+is a solution to the Cauchy problem for the nonautonomous Schrödinger equation
+
+$$
+i\psi_t - l_0\psi_{xx} - b(t)x^2\psi + ic(t)x\psi_x + id(t)\psi + f(t)x\psi - ig(t)\psi_x - h(t)|\psi|^2\psi = 0, \quad (7)
+$$
+
+$$
\psi(0, x) = \psi_0(x), \quad (8)
+$$
+
+if and only if $u(t,x)$ is a solution of the Cauchy problem for the standard Schrödinger equation
+
+$$
iu_t - l_0 u_{xx} + l_0 \lambda |u|^2 u = 0, \quad (9)
+$$
+
+with initial data
+
+$$
+u(0,x) = \sqrt{\mu(0)}e^{-i(\alpha(0)x^2+\delta(0)x+\kappa(0))}\psi_0(x). \quad (10)
+$$
+
Now, we proceed to use Lemma 1 to discuss how we can construct NLS equations with variable
coefficients that can be reduced to the standard NLS and therefore be solved explicitly. We start by
recalling that
+
+$$
+u_1(t, x) = A \exp\left(2iA^2t\right) \left(\frac{3 + 16iA^2t - 16A^4t^2 - 4A^2x^2}{1 + 16A^4t^2 + 4A^2x^2}\right), A \in \mathbb{R} \quad (11)
+$$
+
+is a solution for ($l_0 = -1$ and $\lambda = -2$)
+
+$$
+iu_t + u_{xx} + 2|u|^2 u = 0, t, x \in \mathbb{R}. \tag{12}
+$$
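The claim can be spot-checked numerically: evaluating the residual $iu_t + u_{xx} + 2|u|^2u$ of Equation (12) on the function of Equation (11) with central differences gives values at truncation-error level. This is a verification sketch of ours; the step size and sample points are arbitrary:

```python
import cmath

def u1(t, x, A=1.0):
    """Peregrine-type solution, Equation (11)."""
    num = 3 + 16j*A**2*t - 16*A**4*t**2 - 4*A**2*x**2
    den = 1 + 16*A**4*t**2 + 4*A**2*x**2
    return A * cmath.exp(2j*A**2*t) * num / den

def nls_residual(t, x, A=1.0, h=1e-4):
    """Central-difference residual of i u_t + u_xx + 2|u|^2 u at (t, x)."""
    u = u1(t, x, A)
    ut = (u1(t + h, x, A) - u1(t - h, x, A)) / (2*h)
    uxx = (u1(t, x + h, A) - 2*u + u1(t, x - h, A)) / h**2
    return 1j*ut + uxx + 2*abs(u)**2*u

for t, x in [(0.1, 0.2), (0.3, -0.5), (-0.2, 1.0)]:
    assert abs(nls_residual(t, x)) < 1e-4
```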
+
+In addition,
+
+$$
+u_2(\xi, \tau) = A \tanh(A\xi)e^{-2iA^2\tau} \quad (13)
+$$
+
+is a solution of ($l_0 = -1$ and $\lambda = 2$)
+
+$$
+iu_{\tau} + u_{\xi\xi} - 2|u|^2 u = 0, \quad (14)
+$$
+
+and
+
+$$
+u_3(\tau, \xi) = \sqrt{v} \operatorname{sech}(\sqrt{v}\xi) \exp(-iv\tau), v > 0 \quad (15)
+$$
+
+is a solution of ($l_0 = 1$ and $\lambda = -2$),
+
+$$
+iu_{\tau} - u_{\xi\xi} - 2|u|^2 u = 0. \tag{16}
+$$
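Equations (13) and (15) can likewise be checked numerically against Equations (14) and (16) by evaluating the residual of the generic reduced equation $iu_\tau - l_0 u_{\xi\xi} + l_0\lambda|u|^2u = 0$ with central differences (a verification sketch of ours; parameter values are arbitrary):

```python
import cmath
import math

def u2(tau, xi, A=0.9):
    """Dark soliton, Equation (13)."""
    return A * math.tanh(A*xi) * cmath.exp(-2j*A**2*tau)

def u3(tau, xi, v=1.5):
    """Bright soliton, Equation (15)."""
    return math.sqrt(v) / math.cosh(math.sqrt(v)*xi) * cmath.exp(-1j*v*tau)

def residual(u, l0, lam, tau, xi, h=1e-4):
    """Central-difference residual of i u_tau - l0 u_xixi + l0 lam |u|^2 u."""
    w = u(tau, xi)
    ut = (u(tau + h, xi) - u(tau - h, xi)) / (2*h)
    uxx = (u(tau, xi + h) - 2*w + u(tau, xi - h)) / h**2
    return 1j*ut - l0*uxx + l0*lam*abs(w)**2*w

for tau, xi in [(0.0, 0.3), (0.5, -1.2), (1.0, 2.0)]:
    assert abs(residual(u2, -1, 2, tau, xi)) < 1e-6   # Equation (14)
    assert abs(residual(u3, 1, -2, tau, xi)) < 1e-6   # Equation (16)
```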
+
+**Example 1.** Consider the NLS:
+
+$$
+i\psi_t + \psi_{xx} - \frac{c^2}{4} x^2 \psi - icx\psi_x \pm 2e^{ct} |\psi|^2 \psi = 0. \quad (17)
+$$
+
+Our intention is to construct a similarity transformation from Equation (17) to standard NLS Equation (9) by means of Lemma 1. Using the latter, we obtain
+
+$$
+b(t) = \frac{c^2}{4}, c(t) = c, \mu(t) = e^{ct},
+$$
+
+and
+
+$$
+\alpha(t) = -\frac{c}{4}, h(t) = \pm 2e^{ct}.
+$$
+
+Therefore,
+
+$$
\psi(x,t) = \frac{e^{-i\frac{c}{4}x^2}}{\sqrt{e^{ct}}} u_j(x,t), \quad j = 1, 2,
+$$
+
is a solution of the form Equation (6), and $u_j(x,t)$, $j = 1, 2$, are given by Equations (11) and (13).
+
+**Example 2.** Consider the NLS:
+
+$$
+i\psi_t + \psi_{xx} - \frac{1}{2t^2}x^2\psi - i\frac{1}{t}x\psi_x \pm 2t|\psi|^2\psi = 0. \quad (18)
+$$
+
+By Lemma 1, a Riccati equation associated to the similarity transformation is given by
+
+$$
+\frac{dc}{dt} + c(t)^2 - 2t^{-2} = 0, \tag{19}
+$$
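Indeed, $c(t) = -1/t$ (used below) satisfies Equation (19), since $c'(t) = 1/t^2$ and $c^2(t) = 1/t^2$. A throwaway numerical confirmation of this step (our sketch, not part of the paper):

```python
def c(t):
    """Candidate solution of the Riccati equation (19)."""
    return -1.0 / t

def riccati_residual(t, h=1e-6):
    """Central-difference residual of dc/dt + c^2 - 2 t^{-2}, Equation (19)."""
    dc = (c(t + h) - c(t - h)) / (2*h)
    return dc + c(t)**2 - 2.0 / t**2

for t in (0.5, 1.0, 3.0):
    assert abs(riccati_residual(t)) < 1e-8
```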
+
+and we obtain the functions
+
+$$
+b(t) = \frac{1}{2t^2}, c(t) = -\frac{1}{t}, \mu(t) = t,
+$$
+
+$$
+\alpha(t) = -\frac{1}{4t}, h_1(t) = -2t, h_2(t) = 2t.
+$$
+
Using $u_j(x,t)$, $j=1$ and $2$, given by Equations (11) and (13), we get the solutions
+
+$$
\psi_j(x,t) = \frac{e^{-i\frac{1}{4t}x^2}}{\sqrt{t}} u_j(x,t). \quad (20)
+$$
+
+Table 1 shows integrable variable coefficient NLS and the corresponding similarity transformation to constant coefficient NLS. Table 2 lists some Riccati equations that can be used to generate these transformations.
+
+**Example 3.** If we consider the following family (m and B are parameters) of variable coefficient NLS,
+
+$$i\psi_t + \psi_{xx} - \frac{Bmt^{m-1} + Bt^{2m}}{4}x^2\psi + iBt^m x\psi_x + \gamma e^{-\frac{Bt^{m+1}}{m+1}}|\psi|^2\psi = 0, \quad (21)$$
+
+by means of the Riccati equation
+
+$$y_t = At^n y^2 + Bmt^{m-1} - AB^2t^{n+2m}, \quad (22)$$
+
+and Lemma 1, we can construct soliton-like solutions for Equation (21). For this example, we restrict ourselves to taking $A = -1$ and $n = 0$. Furthermore, taking in Lemma 1 $l_0 = -1$, $\lambda = -2$, $a(t) = 1$, $b(t) = \frac{Bmt^{m-1}+Bt^{2m}}{4}$, $c(t) = Bt^m$, $\mu(t) = e^{-\frac{Bt^{m+1}}{m+1}}$, $h(t) = -2e^{-\frac{Bt^{m+1}}{m+1}}$, and $\alpha(t) = -Bt^m/4$, soliton-like solutions to the Equation (21) are given by
+
$$\psi_j(x,t) = e^{-i\frac{Bt^m}{4}x^2} e^{\frac{Bt^{m+1}}{2(m+1)}} u_j(x,t), \quad (23)$$
+
where $u_j(x,t)$, $j=1$ and $2$, are given by Equations (11) and (15). It is important to notice that if we consider $B=0$ in Equation (21), we obtain standard NLS models.
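The role of the Riccati Equation (22) is easy to check: $y = Bt^m$ is a particular solution for any $A$ and $n$, because the terms $At^ny^2$ and $AB^2t^{n+2m}$ cancel. A numerical spot check (our sketch; the parameter values are arbitrary):

```python
def seed(t, B=0.8, m=3):
    """Candidate particular solution y = B t^m of Equation (22)."""
    return B * t**m

def riccati22_residual(t, A=-1.0, n=0, B=0.8, m=3, h=1e-6):
    """Residual of y_t = A t^n y^2 + B m t^{m-1} - A B^2 t^{n+2m}."""
    yt = (seed(t + h, B, m) - seed(t - h, B, m)) / (2*h)
    rhs = A * t**n * seed(t, B, m)**2 + B*m*t**(m-1) - A*B**2*t**(n + 2*m)
    return yt - rhs

for t in (0.4, 1.0, 2.0):
    assert abs(riccati22_residual(t)) < 1e-6              # A = -1, n = 0, the case used here
    assert abs(riccati22_residual(t, A=2.0, n=2)) < 1e-5  # holds for other A, n as well
```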
+
**3. Riccati Systems with Parameters and Similarity Transformations**
+
In this section, we use different similarity transformations than those used in Section 2; they have been presented previously [26,35,39,42]. The advantage of the presentation in this section is a multiparameter approach. These parameters provide us with control over the center axis of bright and dark soliton solutions. Again, using Table 2 and computer algebra systems, we show that we can produce a very extensive number of integrable VCNLS equations admitting soliton-type solutions. The transformations will require:
+
+$$\frac{d\alpha}{dt} + b(t) + 2c(t)\alpha + 4a(t)\alpha^2 = 0, \quad (24)$$
+
+$$\frac{d\beta}{dt} + (c(t) + 4a(t)\alpha(t))\beta = 0, \quad (25)$$
+
+$$\frac{d\gamma}{dt} + l_0 a(t) \beta^2(t) = 0, l_0 = \pm 1, \quad (26)$$
+
+$$\frac{d\delta}{dt} + (c(t) + 4a(t)\alpha(t))\delta = f(t) + 2a(t)g(t), \quad (27)$$
+
+$$\frac{d\epsilon}{dt} = (g(t) - 2a(t)\delta(t))\beta(t), \quad (28)$$
+
+$$\frac{d\kappa}{dt} = g(t)\delta(t) - a(t)\delta^2(t). \quad (29)$$
+
+Considering the standard substitution
+
+$$\alpha(t) = \frac{1}{4a(t)} \frac{\mu'(t)}{\mu(t)} - \frac{d(t)}{2a(t)}, \quad (30)$$
+
+it follows that the Riccati Equation (24) becomes
+
+$$\mu'' - \tau(t)\mu' + 4\sigma(t)\mu = 0, \quad (31)$$
+
+with
+
+$$\tau(t) = \frac{a'}{a} - 2c + 4d, \sigma(t) = ab - cd + d^2 + \frac{d}{2}\left(\frac{a'}{a} - \frac{d'}{d}\right). \quad (32)$$
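For instance (our illustration, with arbitrarily chosen constants), take $a = b = 1$ and $c = d = 0$: then $\tau = 0$, $\sigma = 1$, the characteristic equation reads $\mu'' + 4\mu = 0$, one solution is $\mu(t) = \sin(2t)$, and the substitution (30) gives $\alpha(t) = \mu'/(4\mu) = \tfrac{1}{2}\cot(2t)$, which satisfies the Riccati Equation (24), $\alpha' + 1 + 4\alpha^2 = 0$:

```python
import math

def mu(t):
    """A solution of the characteristic equation mu'' + 4 mu = 0 (a = b = 1, c = d = 0)."""
    return math.sin(2*t)

def alpha(t):
    """Substitution (30): alpha = mu'/(4 a mu) with a = 1, d = 0."""
    return 2*math.cos(2*t) / (4*mu(t))

def riccati24_residual(t, h=1e-6):
    """Central-difference residual of alpha' + b + 2 c alpha + 4 a alpha^2 (a = b = 1, c = 0)."""
    da = (alpha(t + h) - alpha(t - h)) / (2*h)
    return da + 1.0 + 4.0*alpha(t)**2

for t in (0.3, 0.7, 1.2):
    assert abs(riccati24_residual(t)) < 1e-6
```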
+
We will refer to Equation (31) as the characteristic equation of the Riccati system. Here, $a(t)$, $b(t)$, $c(t)$, $d(t)$, $f(t)$ and $g(t)$ are real-valued functions depending only on the variable $t$. A solution of the Riccati system Equations (24)–(29) with multiparameters is given by the following expressions (with the respective inclusion of the parameter $l_0$) [26,35,39]:
+
+$$ \mu(t) = 2\mu(0)\mu_0(t)(\alpha(0) + \gamma_0(t)), \quad (33) $$
+
$$ \alpha(t) = \alpha_0(t) - \frac{\beta_0^2(t)}{4(\alpha(0) + \gamma_0(t))}, \quad (34) $$
+
+$$ \beta(t) = -\frac{\beta(0)\beta_0(t)}{2(\alpha(0) + \gamma_0(t))} = \frac{\beta(0)\mu(0)}{\mu(t)}w(t), \quad (35) $$
+
+$$ \gamma(t) = l_0\gamma(0) - \frac{l_0\beta^2(0)}{4(\alpha(0) + \gamma_0(t))}, \quad l_0 = \pm 1, \quad (36) $$
+
+$$ \delta(t) = \delta_0(t) - \frac{\beta_0(t)(\delta(0) + \varepsilon_0(t))}{2(\alpha(0) + \gamma_0(t))}, \quad (37) $$
+
+$$ \varepsilon(t) = \varepsilon(0) - \frac{\beta(0)(\delta(0) + \varepsilon_0(t))}{2(\alpha(0) + \gamma_0(t))}, \quad (38) $$
+
$$ \kappa(t) = \kappa(0) + \kappa_0(t) - \frac{(\delta(0) + \varepsilon_0(t))^2}{4(\alpha(0) + \gamma_0(t))}, \quad (39) $$
+
+subject to the initial arbitrary conditions $\mu(0), \alpha(0), \beta(0) \neq 0, \gamma(0), \delta(0), \varepsilon(0)$ and $\kappa(0)$. $\alpha_0, \beta_0, \gamma_0, \delta_0, \varepsilon_0$ and $\kappa_0$ are given explicitly by:
+
$$ \alpha_0(t) = \frac{1}{4a(t)} \frac{\mu'_0(t)}{\mu_0(t)} - \frac{d(t)}{2a(t)}, \quad (40) $$
+
+$$ \beta_0(t) = -\frac{w(t)}{\mu_0(t)}, w(t) = \exp\left(-\int_0^t (c(s) - 2d(s))ds\right), \quad (41) $$
+
+$$ \gamma_0(t) = \frac{d(0)}{2a(0)} + \frac{1}{2\mu_1(0)} \frac{\mu_1(t)}{\mu_0(t)}, \quad (42) $$
+
+$$ \delta_0(t) = \frac{w(t)}{\mu_0(t)} \int_0^t \left[ \left(f(s) - \frac{d(s)}{a(s)}g(s)\right)\mu_0(s) + \frac{g(s)}{2a(s)}\mu'_0(s) \right] \frac{ds}{w(s)}, \quad (43) $$
+
+$$ \begin{aligned} \varepsilon_0(t) = & -\frac{2a(t)w(t)}{\mu'_0(t)}\delta_0(t) + 8 \int_0^t \frac{a(s)\varphi(s)w(s)}{(\mu'_0(s))^2}(\mu_0(s)\delta_0(s))ds \\ & + 2\int_0^t \frac{a(s)w(s)}{\mu'_0(s)}[f(s) - \frac{d(s)}{a(s)}g(s)]ds, \end{aligned} \quad (44) $$
+
+$$ \begin{aligned} \kappa_0(t) = & \frac{a(t)\mu_0(t)}{\mu'_0(t)}\delta_0^2(t) - 4\int_0^t \frac{a(s)\varphi(s)}{(\mu'_0(s))^2}(\mu_0(s)\delta_0(s))^2 ds \\ & - 2\int_0^t \frac{a(s)}{\mu'_0(s)}(\mu_0(s)\delta_0(s))[f(s) - \frac{d(s)}{a(s)}g(s)]ds, \end{aligned} \quad (45) $$
+
with $\delta_0(0) = g(0)/(2a(0))$, $\varepsilon_0(0) = -\delta_0(0)$, $\kappa_0(0) = 0$. Here, $\mu_0$ and $\mu_1$ represent the fundamental solutions of the characteristic equation subject to the initial conditions $\mu_0(0) = 0$, $\mu'_0(0) = 2a(0) \neq 0$ and $\mu_1(0) \neq 0$, $\mu'_1(0) = 0$.
+
+Using the system Equations (34)–(39), in [26], a generalized lens transformation is presented. Next, we recall this result (here we use a slight perturbation introducing the parameter $l_0 = \pm 1$ in order to use Peregrine type soliton solutions):
+
+**Lemma 2** ($l_0 = 1$, [26]). Assume that $h(t) = \lambda a(t) \beta^2(t) \mu(t)$ with $\lambda \in \mathbb{R}$. Then, the substitution
+
+$$ \psi(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u(\tau, \xi), \quad (46) $$
+
+where $\xi = \beta(t)x + \epsilon(t)$ and $\tau = \gamma(t)$, transforms the equation
+
+$$ i\psi_t = -a(t)\psi_{xx} + b(t)x^2\psi - ic(t)x\psi_x - id(t)\psi - f(t)x\psi + ig(t)\psi_x + h(t)|\psi|^2\psi $$
+
+into the standard Schrödinger equation
+
+$$ iu_{\tau} - l_{0}u_{\xi\xi} + l_{0}\lambda|u|^{2}u = 0, l_{0} = \pm 1, \quad (47) $$
+
+as long as $\alpha, \beta, \gamma, \delta, \varepsilon$ and $\kappa$ satisfy the Riccati system Equations (24)–(29) and also Equation (30).
+
+**Example 4.** Consider the NLS:
+
+$$ i\psi_t = \psi_{xx} - \frac{x^2}{4}\psi + h(0) \operatorname{sech}(t) |\psi|^2 \psi. \quad (48) $$
+
+It has the associated characteristic equation $\mu'' + a\mu = 0$, and, using this, we will obtain the functions:
+
+$$ \alpha(t) = \frac{\coth(t)}{4} - \frac{1}{2} \operatorname{csch}(t) \operatorname{sech}(t), \quad \delta(t) = -\operatorname{sech}(t), \quad (49) $$
+
+$$ \kappa(t) = 1 - \frac{\tanh(t)}{2}, \quad \mu(t) = \cosh(t), \quad (50) $$
+
+$$ h(t) = h(0) \operatorname{sech}(t), \quad \beta(t) = \frac{1}{\cosh(t)}, \quad (51) $$
+
+$$ \varepsilon(t) = -1 + \tanh(t), \quad \gamma(t) = 1 - \frac{\tanh(t)}{2}. \quad (52) $$
+
Then, we can construct solutions of the form
+
+$$ \psi_j(t,x) = \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} u_j\left(1 - \frac{\tanh(t)}{2}, \frac{x}{\cosh(t)} - 1 + \tanh(t)\right), \quad (53) $$
+
+with $u_j, j = 1$ and $2$, given by Equations (12) and (13).
+
+**Example 5.** Consider the NLS:
+
i\psi_t(x,t) = \psi_{xx}(x,t) + \frac{h(0)\beta(0)^2\mu(0)}{1+2\alpha(0)c_2t} |\psi(x,t)|^2 \psi(x,t).
+
It has the characteristic equation $\mu'' = 0$, and, using this, we obtain the functions:
+
$$ \alpha(t) = \frac{1}{4t} - \frac{1}{2+4\alpha(0)c_2^2t^2}, \quad \delta(t) = \frac{\delta(0)}{1+2\alpha(0)c_2t}, \quad (54) $$
+
$$ \kappa(t) = \kappa(0) - \frac{\delta(0)^2 c_2 t}{2 + 4\alpha(0)c_2 t}, \quad h(t) = \frac{h(0)\beta(0)^2\mu(0)}{1 + 2\alpha(0)c_2 t}, \quad (55) $$
+
$$ \mu(t) = (1 + 2\alpha(0)c_2t)\mu(0), \quad \beta(t) = \frac{\beta(0)}{1 + 2\alpha(0)c_2t}, $$
+
+$$
+\gamma(t) = \gamma(0) - \frac{\beta(0)^2 c_2 t}{2 + 4\alpha(0)c_2 t}, \quad \epsilon(t) = \epsilon(0) - \frac{\beta(0)\delta(0)c_2 t}{1 + 2\alpha(0)c_2 t}.
+$$
+
+Then, we can construct a solution of the form
+
+$$
+\begin{equation}
+\begin{split}
+\psi_j(t,x) ={}& \frac{1}{\sqrt{\mu(t)}} e^{i(\alpha(t)x^2 + \delta(t)x + \kappa(t))} \\
& u_j \left( \gamma(0) - \frac{\beta(0)^2 c_2 t}{2+4\alpha(0)c_2 t},\ \frac{\beta(0)x}{1+2\alpha(0)c_2 t} + \epsilon(0) - \frac{\beta(0)\delta(0)c_2 t}{1+2\alpha(0)c_2 t} \right),
+\end{split}
+\tag{56}
+\end{equation}
+$$
+
with $u_j$, $j = 1$ and $2$, given by Equations (11) and (13).
+
+Following Table 2 of Riccati equations, we can use Equation (24) and Lemma 2 to construct an extensive list of integrable variable coefficient nonlinear Schrödinger equations.
+
+**4. Crank-Nicolson Scheme for Linear Schrödinger Equation with Variable Coefficients Depending on Space**
+
In addition, in [35], a generalized Mehler's formula for a general linear Schrödinger equation of the one-dimensional generalized harmonic oscillator of the form Equation (1) with $h(t) = 0$ was presented. As a particular case, if $b = \lambda \frac{\omega^2}{2}$, $\omega > 0$, $\lambda \in \{-1, 0, 1\}$, and $c = f = g = 0$, then the evolution operator is given explicitly by the following formula (note: this formula is a consequence of Mehler's formula for Hermite polynomials):
+
+$$
\psi(x,t) = U_V(t)f := \frac{1}{\sqrt{2\pi i\,\mu_j(t)}} \int_{\mathbb{R}} e^{iS_V(x,y,t)} f(y)dy, \quad (57)
+$$
+
+where
+
+$$
S_V(x, y, t) = \frac{1}{\mu_j(t)} \left( \frac{x^2 + y^2}{2} l_j(t) - xy \right), \quad (58)
+$$
+
and the pair $\{\mu_j(t), l_j(t)\}$ is determined by the sign $\lambda$: trigonometric functions for $\lambda = 1$, the linear pair $\{2t, 1\}$ for $\lambda = 0$, and hyperbolic functions for $\lambda = -1$.
+
Using Riccati-Ermakov systems in [41], it was shown how computer algebra systems can be used to derive the multiparameter formulas (33)–(45). This multiparameter study was also used to study solutions of the inhomogeneous paraxial wave equation in a linear and quadratic approximation, including oscillating laser beams in a parabolic waveguide, spiral light beams, and more families of propagation-invariant laser modes in weakly varying media. However, the analytical method is restricted to Riccati equations that can be solved exactly, such as the ones presented in Table 2. In this section, we use a finite-difference method to compare analytical solutions described in [41] with numerical approximations. We aim (in future research) to extend numerical schemes to solve more general cases than the analytical method can handle. In particular, we will pursue solving equations of the general form

$$i\psi_t = -\Delta\psi + V(\mathbf{x}, t)\psi, \quad (59)$$
+
using polynomial approximations in two variables for the potential function $V(\mathbf{x}, t)$ ($V(\mathbf{x}, t) \approx b(t)(x_1^2 + x_2^2) + f(t)x_1 + g(t)x_2 + h(t)$). For this purpose, it is necessary to analyze the stability of different methods applied to this equation.
+
+We also will be interested in extending this process to nonlinear Schrödinger-type equations with potential terms dependent on time, such as
+
+$$i\psi_t = -\Delta\psi + V(\mathbf{x}, t)\psi + s|\psi|^2\psi. \quad (60)$$
+
In this section, we show that the Crank-Nicolson scheme seems to be the best method for numerically reconstructing the analytical solutions presented in [41].
+
Numerical methods arise as an alternative when it is difficult to find analytical solutions of the Schrödinger equation. Although numerical schemes do not provide explicit solutions to the problem, they yield approximations to the true solutions that allow us to recover relevant properties of the problem. Among the simplest and most frequently used methods are those based on finite differences.
+
In this section, the Crank-Nicolson scheme is used for the linear Schrödinger equation in the case of coefficients depending only on the space variable, because it is unconditionally stable and the matrix of the associated system does not vary from one iteration to the next.
+
+A rectangular mesh $(x_m, t_n)$ is introduced in order to discretize a bounded domain $\Omega \times [0, T]$ in space and time. In addition, $\tau$ and $\mathbf{h}$ represent the size of the time step and the size of space step, respectively. $\mathbf{x}_m$ and $\mathbf{h}$ are in $\mathbb{R}$ if one-dimensional space is considered; otherwise, they are in $\mathbb{R}^2$.
+
+The discretization is given by the matrix system
+
+$$\left(I + \frac{i\alpha\tau}{2h^2}\Delta + \frac{i\tau}{2}V(\mathbf{x})\right)\psi^{n+1} = \left(I - \frac{i\alpha\tau}{2h^2}\Delta - \frac{i\tau}{2}V(\mathbf{x})\right)\psi^n, \quad (61)$$
+
+where $I$ is the identity matrix, $\Delta$ is the discrete representation of the Laplacian operator in space, and $V(\mathbf{x})$ is the diagonal matrix that represents the operator of the external potential depending on $\mathbf{x}$.
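A minimal one-dimensional sketch of the scheme in Python (ours, not from the paper; it uses the Thomas algorithm for the tridiagonal solve, signs as in $i\psi_t = -\psi_{xx} + V(x)\psi$, and homogeneous Dirichlet boundary conditions). The discrete Crank-Nicolson propagator is a Cayley transform of a Hermitian matrix, so the discrete $\ell^2$ norm of the solution is preserved, which the final assertion checks:

```python
import cmath
import math

def crank_nicolson_step(psi, V, h, tau):
    """One CN step for i psi_t = -psi_xx + V(x) psi on interior grid points (Dirichlet)."""
    M = len(psi)
    r = 0.5j * tau / h**2
    off = -r                                          # off-diagonal of A = I + (i tau/2) H
    diag = [1 + 2*r + 0.5j*tau*V[m] for m in range(M)]
    # right-hand side: B psi^n with B = I - (i tau/2) H
    rhs = []
    for m in range(M):
        acc = (1 - 2*r - 0.5j*tau*V[m]) * psi[m]
        if m > 0:
            acc += r * psi[m-1]
        if m < M - 1:
            acc += r * psi[m+1]
        rhs.append(acc)
    # Thomas algorithm for the tridiagonal system (constant off-diagonals)
    cp, dp = [0j]*M, [0j]*M
    cp[0], dp[0] = off / diag[0], rhs[0] / diag[0]
    for m in range(1, M):
        denom = diag[m] - off*cp[m-1]
        cp[m] = off / denom
        dp[m] = (rhs[m] - off*dp[m-1]) / denom
    out = [0j]*M
    out[-1] = dp[-1]
    for m in range(M - 2, -1, -1):
        out[m] = dp[m] - cp[m]*out[m+1]
    return out

# harmonic potential V(x) = x^2 on [-10, 10], Gaussian initial data
L, M, tau = 10.0, 200, 0.01
h = 2*L / (M + 1)
xs = [-L + (m + 1)*h for m in range(M)]
V = [x*x for x in xs]
psi = [cmath.exp(-x*x/2) for x in xs]
norm0 = math.sqrt(h * sum(abs(p)**2 for p in psi))
for _ in range(100):
    psi = crank_nicolson_step(psi, V, h, tau)
norm1 = math.sqrt(h * sum(abs(p)**2 for p in psi))
assert abs(norm1 - norm0) < 1e-10  # the scheme conserves the discrete norm
```

The grid, step sizes, and potential here are illustrative choices, not the ones used for the figures below.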
+
The paraxial wave equation (also known as the harmonic oscillator equation)
+
+$$2i\psi_t + \Delta\psi - r^2\psi = 0, \quad (62)$$
+
+where $r = x$ for $\mathbf{x} \in \mathbb{R}$ or $r = \sqrt{x_1^2 + x_2^2}$ for $\mathbf{x} \in \mathbb{R}^2$, describes the wave function for a laser beam [40].
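The lowest one-dimensional Hermite-Gaussian mode makes a convenient sanity check: $\psi(x,t) = e^{-x^2/2}e^{-it/2}$ solves Equation (62), since $\psi_{xx} = (x^2-1)\psi$ and $2i\psi_t = \psi$. A numerical residual check (our sketch):

```python
import cmath

def mode(t, x):
    """Lowest mode of the paraxial equation 2 i psi_t + psi_xx - x^2 psi = 0."""
    return cmath.exp(-x*x/2 - 1j*t/2)

def paraxial_residual(t, x, h=1e-4):
    """Central-difference residual of 2 i psi_t + psi_xx - x^2 psi at (t, x)."""
    p = mode(t, x)
    pt = (mode(t + h, x) - mode(t - h, x)) / (2*h)
    pxx = (mode(t, x + h) - 2*p + mode(t, x - h)) / h**2
    return 2j*pt + pxx - x*x*p

for t, x in [(0.0, 0.5), (1.0, -1.0), (2.5, 1.5)]:
    assert abs(paraxial_residual(t, x)) < 1e-6
```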
+
+One solution for this equation can be presented as Hermite-Gaussian modes on a rectangular domain:
+
+$$ \begin{aligned} \psi_{nm}(\mathbf{x}, t) = & A_{nm} \frac{\exp[i(\kappa_1+\kappa_2)+2i(n+m+1)\gamma]}{\sqrt{2^{n+m}n!m!\pi}} \beta \\ & \times \exp\left[i(\alpha\mathbf{r}^2 + \delta_1x_1 + \delta_2x_2) - (\beta x_1 + \epsilon_1)^2/2 - (\beta x_2 + \epsilon_2)^2/2\right] \\ & \times H_n(\beta x_1 + \epsilon_1)H_m(\beta x_2 + \epsilon_2), \end{aligned} \quad (63) $$
+
+where $H_n(x)$ is the n-th order Hermite polynomial in the variable $x$, see [40,41].
+
+In addition, some solutions of the paraxial equation may be expressed by means of Laguerre-Gaussian modes in the case of cylindrical domains (see [43]):
+
+$$ \begin{aligned} \psi_n^m(\mathbf{x}, t) = & A_n^m \sqrt{\frac{n!}{\pi(n+m)!}\beta} \\ & \times \exp\left[i(\alpha\mathbf{r}^2 + \delta_1x_1 + \delta_2x_2 + \kappa_1 + \kappa_2) - (\beta x_1 + \epsilon_1)^2/2 - (\beta x_2 + \epsilon_2)^2/2\right] \\ & \times \exp[i(2n+m+1)\gamma](\beta(x_1 \pm ix_2) + \epsilon_1 \pm i\epsilon_2)^m \\ & \times L_n^m((\beta x_1 + \epsilon_1)^2 + (\beta x_2 + \epsilon_2)^2), \end{aligned} \quad (64) $$
+
with $L_n^m(x)$ being the $n$-th order generalized Laguerre polynomial with parameter $m$ in the variable $x$.
+
$\alpha, \beta, \gamma, \delta_1, \delta_2, \epsilon_1, \epsilon_2, \kappa_1$ and $\kappa_2$ are given by Equations (34)–(39) for both Hermite-Gaussian and Laguerre-Gaussian modes.
+
+Figures 1 and 2 show two examples of solutions of the one-dimensional paraxial equation with $\Omega = [-10, 10]$ and $T = 12$. The step sizes are $\tau = \frac{10}{200}$ and $h = \frac{10}{200}$.
+
**Figure 1.** (a) corresponding approximation for the one-dimensional Hermite-Gaussian beam with $t = 10$. The initial condition is $\sqrt{\frac{2}{3\sqrt{\pi}}}e^{-(\frac{3}{2}x)^2/2}$; (b) the exact solution for the one-dimensional Hermite-Gaussian beam with $t = 10$, $A_n = 1$, $\mu_0 = 1$, $\alpha_0 = 0$, $\beta_0 = \frac{4}{9}$, $n_0 = 0$, $\delta_0 = 0$, $\gamma_0 = 0$, $\epsilon_0 = 0$, $\kappa_0 = 0$.
+
**Figure 2.** (a) corresponding approximation for the one-dimensional Hermite-Gaussian beam with $t = 10$. The initial condition is $\sqrt{\frac{2}{3\sqrt{\pi}}}e^{-(\frac{3}{2}x)^2/2+ix}$; (b) the exact solution for the one-dimensional Hermite-Gaussian beam with $t = 10$, $A_n = 1$, $\mu_0 = 1$, $\alpha_0 = 0$, $\beta_0 = \frac{4}{9}$, $n_0 = 0$, $\delta_0 = 1$, $\gamma_0 = 0$, $\epsilon_0 = 0$, $\kappa_0 = 0$.
+
+Figure 3 shows four profiles of two-dimensional Hermite-Gaussian beams considering $\Omega = [-6,6] \times [-6,6]$ and $T = 10$. The corresponding step sizes are $\tau = \frac{10}{40}$ and $h = (\frac{12}{48}, \frac{12}{48})$.
+
**Figure 3.** (Left): corresponding approximations for the two-dimensional Hermite-Gaussian beams with $t = 10$. The initial conditions are (a) $\frac{1}{\sqrt{8\pi}}e^{-(x^2+y^2)}$; (b) $\frac{1}{\sqrt{2\pi}}e^{-(x^2+y^2)}x$; (c) $\sqrt{\frac{2}{\pi}}e^{-(x^2+y^2)}xy$; (d) $\frac{1}{4\sqrt{32\pi}}e^{-(x^2+y^2)}(8x^2-2)(8y^2-2)$. (Right): the exact solutions for the two-dimensional Hermite-Gaussian beams with $t = 10$ and parameters $A_{nm} = \frac{1}{4}$, $\alpha_0 = 0$, $\beta_0 = \sqrt{2}$, $\delta_{0,1} = 1$, $\gamma_{0,1} = 0$, $\epsilon_{0,1} = 0$, $\kappa_{0,1} = 0$. For (a) $n=0$ and $m=0$; for (b) $n=1$ and $m=0$; for (c) $n=1$ and $m=1$; for (d) $n=2$ and $m=2$.
+---PAGE_BREAK---
+
+Figure 4 shows two profiles of two-dimensional Laguerre-Gaussian beams considering $\Omega = [-6,6] \times [-6,6]$ and $T = 10$. The corresponding step sizes are $\tau = \frac{10}{40}$ and $h = (\frac{12}{48}, \frac{12}{48})$.
+
+**Figure 4.** (Left): corresponding approximations for the two-dimensional Laguerre-Gaussian beams with $t = 10$. The initial conditions are (a) $\frac{1}{\sqrt{4\pi}}e^{-(x^2+y^2)}(x+iy)$; (b) $\frac{1}{\sqrt{2\pi}}e^{-(x^2+y^2)}(x+iy)(1-x^2-y^2)$. (Right): the exact solutions for the two-dimensional Laguerre-Gaussian beams with $t = 10$ and parameters $A_n^m = \frac{1}{4}$, $a_0 = 0$, $\beta_0 = \sqrt{2}$, $\delta_{0,1} = 1$, $\gamma_{0,1} = 0$, $\epsilon_{0,1} = 0$, $\kappa_{0,1} = 0$.
+
+**5. Conclusions**
+
+Rajendran et al. in [1] used similarity transformations introduced in [28] to show a list of integrable NLS equations with variable coefficients. In this work, we have extended this list using similarity transformations introduced by Suslov in [26], presenting a more extensive list of families of integrable nonlinear Schrödinger (NLS) equations with variable coefficients (see Table 1 for a primary list). In both approaches, the Riccati equation plays a fundamental role. The reader can observe that, using computer algebra systems, the parameters (see Equations (33)–(39)) provide a change of the dynamics of the solutions; the Mathematica files are provided as a supplement for the readers. Finally, we have tested numerical approximations for the inhomogeneous paraxial wave equation by the Crank-Nicolson scheme against analytical solutions. These solutions include oscillating laser beams and Laguerre-Gaussian beams. These explicit solutions were found previously thanks to explicit solutions of Riccati-Ermakov systems [41].
+
+**Supplementary Materials:** The following are available online at http://www.mdpi.com/2073-8994/8/5/38/s1, Mathematica supplement file.
+
+**Acknowledgments:** The authors were partially funded by the Mathematical Association of America through NSF (grant DMS-1359016) and NSA (grant DMS-1359016). Also, the authors are thankful for the funding received from the Department of Mathematics and Statistical Sciences and the College of Liberal Arts and Sciences at the University of Puerto Rico, Mayagüez. E. S. is funded by the Simons Foundation Grant # 316295 and by the National Science Foundation Grant DMS-1440664. E. S. is also thankful for the start-up funds and the "Faculty
+---PAGE_BREAK---
+
+Development Funding Program Award" received from the School of Mathematics and Statistical Sciences and the College of Sciences at the University of Texas Rio Grande Valley.
+
+**Author Contributions:** The original results presented in this paper are the outcome of a research collaboration that started during Summer 2015 and continued until Spring 2016. Similarly, the selection of the examples, tables, graphics and extended bibliography is the result of a long, continuous interaction between the authors.
+
+**Conflicts of Interest:** The authors declare no conflict of interest.
+
+References
+
+1. Rajendran, S.; Muruganandam, P.; Lakshmanan, M. Bright and dark solitons in a quasi-1D Bose-Einstein condensates modelled by 1D Gross-Pitaevskii equation with time-dependent parameters. *Phys. D Nonlinear Phenom.* **2010**, *239*, 366–386. [CrossRef]
+
+2. Agrawal, G.-P. *Nonlinear Fiber Optics*, 4th ed.; Academic Press: New York, NY, USA, 2007.
+
+3. Al Khawaja, U. A comparative analysis of Painlevé, Lax Pair and similarity transformation methods in obtaining the integrability conditions of nonlinear Schrödinger equations. *J. Phys. Math.* **2010**, *51*. [CrossRef]
+
+4. Brugarino, T.; Sciacca, M. Integrability of an inhomogeneous nonlinear Schrödinger equation in Bose-Einstein condensates and fiber optics. *J. Math. Phys.* **2010**, *51*. [CrossRef]
+
+5. Chen, H.-M.; Liu, C.S. Solitons in nonuniform media. *Phys. Rev. Lett.* **1976**, *37*, 693–697. [CrossRef]
+
+6. He, X.G.; Zhao, D.; Li, L.; Luo, H.G. Engineering integrable nonautonomous nonlinear Schrödinger equations. *Phys. Rev. E* **2009**, *79*. [CrossRef] [PubMed]
+
+7. He, J.; Li, Y. Designable integrability of the variable coefficient nonlinear Schrödinger equations. *Stud. Appl. Math.* **2010**, *126*, 1–15. [CrossRef]
+
+8. He, J.S.; Charalampidis, E.G.; Kevrekidis, P.G.; Frantzeskakis, D.J. Rogue waves in nonlinear Schrödinger models with variable coefficients: Application to Bose-Einstein condensates. *Phys. Lett. A* **2014**, *378*, 577–583. [CrossRef]
+
+9. Kruglov, V.I.; Peacock, A.C.; Harvey, J.D. Exact solutions of the generalized nonlinear Schrödinger equation with distributed coefficients. *Phys. Rev. E* **2005**, *71*. [CrossRef] [PubMed]
+
+10. Marikhin, V.G.; Shabat, A.B.; Boiti, M.; Pimpinelli, F. Self-similar solutions of equations of the nonlinear Schrödinger type. *J. Exp. Theor. Phys.* **2000**, *90*, 553–561. [CrossRef]
+
+11. Ponomarenko, S.A.; Agrawal, G.P. Do Solitonlike self-similar waves exist in nonlinear optical media? *Phys. Rev. Lett.* **2006**, *97*. [CrossRef] [PubMed]
+
+12. Ponomarenko, S.A.; Agrawal, G.P. Optical similaritons in nonlinear waveguides. *Opt. Lett.* **2007**, *32*, 1659–1661. [CrossRef] [PubMed]
+
+13. Raghavan, S.; Agrawal, G.P. Spatiotemporal solitons in inhomogeneous nonlinear media. *Opt. Commun.* **2000**, *180*, 377–382. [CrossRef]
+
+14. Serkin, V.N.; Hasegawa, A. Novel Soliton solutions of the nonlinear Schrödinger Equation model. *Phys. Rev. Lett.* **2000**, *85*. [CrossRef] [PubMed]
+
+15. Serkin, V.; Matsumoto, M.; Belyaeva, T. Bright and dark solitary nonlinear Bloch waves in dispersion managed fiber systems and soliton lasers. *Opt. Commun.* **2001**, *196*, 159–171. [CrossRef]
+
+16. Tian, B.; Shan, W.; Zhang, C.; Wei, G.; Gao, Y. Transformations for a generalized variable-coefficient nonlinear Schrödinger model from plasma physics, arterial mechanics and optical fibers with symbolic computation. *Eur. Phys. J. B* **2005**, *47*, 329–332. [CrossRef]
+
+17. Dai, C.-Q.; Wang, Y.-Y. Infinite generation of soliton-like solutions for complex nonlinear evolution differential equations via the NLSE-based constructive method. *Appl. Math. Comput.* **2014**, *236*, 606–612. [CrossRef]
+
+18. Wang, M.; Shan, W.-R.; Lü, X.; Xue, Y.-S.; Lin, Z.-Q.; Tian, B. Soliton collision in a general coupled nonlinear Schrödinger system via symbolic computation. *Appl. Math. Comput.* **2013**, *219*, 11258–11264. [CrossRef]
+
+19. Yu, F.; Yan, Z. New rogue waves and dark-bright soliton solutions for a coupled nonlinear Schrödinger equation with variable coefficients. *Appl. Math. Comput.* **2014**, *233*, 351–358. [CrossRef]
+
+20. Fibich, G. *The Nonlinear Schrödinger Equation: Singular Solutions and Optical Collapse*; Springer: Berlin/Heidelberg, Germany, 2015.
+
+21. Kevrekidis, P.G.; Frantzeskakis, D.J.; Carretero-González, R. *Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment*; Springer Series on Atomic, Optical and Plasma Physics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 45.
+---PAGE_BREAK---
+
+22. Suazo, E.; Suslov, S.-K. Soliton-Like solutions for nonlinear Schrödinger equation with variable quadratic Hamiltonians. *J. Russ. Laser Res.* **2010**, *33*, 63–83. [CrossRef]
+
+23. Sulem, C.; Sulem, P.L. *The Nonlinear Schrödinger Equation*; Springer: New York, NY, USA, 1999.
+
+24. Tao, T. Nonlinear dispersive equations: Local and global analysis. In *CBMS Regional Conference Series in Mathematics*; American Mathematical Society: Providence, RI, USA, 2006.
+
+25. Zakharov, V.-E.; Shabat, A.-B. Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media. *Soviet. Phys. JETP* **1972**, *34*, 62–69.
+
+26. Suslov, S.-K. On integrability of nonautonomous nonlinear Schrödinger equations. *Proc. Am. Math. Soc.* **2012**, *140*, 3067–3082. [CrossRef]
+
+27. Talanov, V.I. Focusing of light in cubic media. *JETP Lett.* **1970**, *11*, 199–201.
+
+28. Perez-Garcia, V.M.; Torres, P.J.; Konotop, V.K. Similarity transformations for nonlinear Schrödinger equations with time-dependent coefficients. *Physica D* **2006**, *221*, 31–36. [CrossRef]
+
+29. Ablowitz, M.; Hirooka, T. Resonant intrachannel pulse interactions in dispersion-managed transmission systems. *IEEE J. Sel. Top. Quantum Electron.* **2002**, *8*, 603–615. [CrossRef]
+
+30. Marhic, M.E. Oscillating Hermite-Gaussian wave functions of the harmonic oscillator. *Lett. Nuovo Cim.* **1978**, *22*, 376–378. [CrossRef]
+
+31. Carles, R. Nonlinear Schrödinger equation with time dependent potential. *Commun. Math. Sci.* **2010**, *9*, 937–964. [CrossRef]
+
+32. López, R.M.; Suslov, S.K.; Vega-Guzmán, J.M. On a hidden symmetry of quantum harmonic oscillators. *J. Differ. Equ. Appl.* **2013**, *19*, 543–554. [CrossRef]
+
+33. Aldaya, V.; Cossio, F.; Guerrero, J.; López-Ruiz, F.F. The quantum Arnold transformation. *J. Phys. A Math. Theor.* **2011**, *44*, 1–6. [CrossRef]
+
+34. Feynman, R.P.; Hibbs, A.R. *Quantum Mechanics and Path Integrals*; McGraw-Hill: New York, NY, USA, 1965.
+
+35. Cordero-Soto, R.; Lopez, R.M.; Suazo, E.; Suslov, S.K. Propagator of a charged particle with a spin in uniform magnetic and perpendicular electric fields. *Lett. Math. Phys.* **2008**, *84*, 159–178. [CrossRef]
+
+36. Lanfear, N.; López, R.M.; Suslov, S.K. Exact wave functions for generalized harmonic oscillators. *J. Russ. Laser Res.* **2011**, *32*, 352–361. [CrossRef]
+
+37. López, R.M.; Suslov, S.K.; Vega-Guzmán, J.M. Reconstructing the Schrödinger groups. *Phys. Scr.* **2013**, *87*, 1–6. [CrossRef]
+
+38. Suazo, E.; Suslov, S.K. Cauchy problem for Schrödinger equation with variable quadratic Hamiltonians. 2011, to be submitted.
+
+39. Suazo, E. Fundamental Solutions of Some Evolution Equations. Ph.D. Thesis, Arizona State University, Tempe, AZ, USA, September 2009.
+
+40. Mahalov, A.; Suazo, E.; Suslov, S.K. Spiral laser beams in inhomogeneous media. *Opt. Lett.* **2013**, *38*, 2763–2766. [CrossRef] [PubMed]
+
+41. Koutschan, C.; Suazo, E.; Suslov, S.K. Fundamental laser modes in paraxial optics: From computer algebra and simulations to experimental observation. *Appl. Phys. B* **2015**, *121*, 315–336. [CrossRef]
+
+42. Escorcia, J.; Suazo, E. Blow-up results and soliton solutions for a generalized variable coefficient nonlinear Schrödinger equation. Available online: http://arxiv.org/abs/1605.07554 (accessed on 24 May 2016).
+
+43. Andrews, L.C.; Phillips, R.L. *Laser Beam Propagation through Random Media*, 2nd ed.; SPIE Press: Bellingham, WA, USA, 2005.
+
+© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
+
+Article
+
+# Coherent States of Harmonic and Reversed Harmonic Oscillator
+
+Alexander Rauh
+
+Department of Physics, University of Oldenburg, Oldenburg D-26111, Germany;
+alexander.rauh@uni-oldenburg.de; Tel.: +49-441-798-3460
+
+Academic Editor: Young Suh Kim
+
+Received: 16 January 2016; Accepted: 3 June 2016; Published: 13 June 2016
+
+**Abstract:** A one-dimensional wave function is assumed whose logarithm is a quadratic form in the configuration variable with time-dependent coefficients. This trial function allows for general time-dependent solutions both of the harmonic oscillator (HO) and the reversed harmonic oscillator (RO). For the HO, apart from the standard coherent states, a further class of solutions is derived with a time-dependent width parameter. The width of the corresponding probability density fluctuates, or "breathes" periodically with the oscillator frequency. In the case of the RO, one also obtains normalized wave packets which, however, show diffusion through exponential broadening with time. At the initial time, the integration constants give rise to complete sets of coherent states in the three cases considered. The results are applicable to the quantum mechanics of the Kepler-Coulomb problem when transformed to the model of a four-dimensional harmonic oscillator with a constraint. In the classical limit, as was shown recently, the wave packets of the RO basis generate the hyperbolic Kepler orbits, and, by means of analytic continuation, the elliptic orbits are also obtained quantum mechanically.
+
+**Keywords:** inverted harmonic oscillator; harmonic trap; Kepler-Coulomb problem; Kustaanheimo-Stiefel transformation
+
+## 1. Introduction
+
+Coherent states of the harmonic oscillator (HO) were already introduced at the beginning of wave mechanics [1]. Much later, such states were recognized as being useful as a basis to describe radiation fields [2] and optical correlations [3]. The reversed harmonic oscillator (RO) refers to a model with repulsive harmonic forces, and was discussed in [4] in the context of irreversibility. Recently, in [5], which also communicates historical remarks, the RO was applied to describe nonlinear optical phenomena. As mentioned in [5], the term “inverted harmonic oscillator” (IO) originally refers to a model with negative kinetic and potential energy, as proposed in [6]. Nevertheless, most articles under the headline IO actually consider the RO model; see, e.g., [7–9].
+
+The RO model can formally be obtained by assuming a purely imaginary oscillator frequency. It is then no longer possible to construct coherent states by means of creation and annihilation operators; for a textbook introduction, see [10]. In [9], the RO was generalized by assuming a time-dependent mass and frequency. The corresponding Schrödinger equation was solved by means of an algebraic method with the aim of describing quantum tunneling.
+
+In the present study, emphasis is laid on the derivation of complete sets of coherent states both for the HO and the RO model, together with their time evolution. In the case of the HO, in addition to the standard coherent states, a further function set is found with a time-dependent width parameter. Both in the HO and RO case, the integration constants of the time-dependent solutions induce complete function sets which, at time $t = 0$, are isomorphic to the standard coherent states of the HO.
+---PAGE_BREAK---
+
+In Section 6, an application to the quantum mechanics of the Kepler-Coulomb problem will be briefly discussed. As has first been observed by Fock [11], the underlying four-dimensional rotation symmetry of the non-relativistic Hamiltonian of the hydrogen atom permits the transformation to the problem of four isotropic harmonic oscillators with a constraint; for applications see, e.g., [12–14]. The transformation proceeds conveniently by means of the Kustaanheimo-Stiefel transformation [15]. In [14], the elliptic Kepler orbits were derived in the classical limit on the basis of coherent HO states. By means of coherent RO states, the classical limit for hyperbolic Kepler orbits was achieved in [16,17], whereby the elliptic regime could be obtained by analytic continuation from the hyperbolic side. Recently, by means of the same basis, a first order quantum correction to Kepler’s equation was derived in [18], whereby the smallness parameter was defined by the reciprocal angular momentum in units of $\hbar$.
+
+As compared to the classical elliptic Kepler orbits, the derivation of hyperbolic orbits from quantum mechanics was accomplished quite recently [16,17]. For this achievement, it was crucial to devise a suitable time-dependent ansatz for the wave function, see (1) below, in order to construct coherent RO states. As it turns out, the wave function (1) also contains the usual coherent HO states and, unexpectedly, a further set of coherent states, which we call type-II states. The latter are characterized by a time-dependent width parameter and are solutions of the time-dependent Schrödinger equation of the HO. Section 4 contains the derivation. Essentially, the type-II states offer a disposable width parameter which allows us, for instance, to describe arbitrarily narrowly peaked initial states together with their time evolution in a harmonic potential. In this paper, a unified derivation is presented of coherent states of the HO, the RO, and the type-II HO states. Furthermore, the connection of the HO and RO with the quantum mechanics of the Kepler-Coulomb problem is briefly discussed in the context of the derivation of the classical Kepler orbits from quantum mechanics.
+
+## 2. Introducing a Trial Wave Function
+
+In order to solve the Schrödinger equation for the harmonic oscillator (HO) and the reversed oscillator (RO), a trial wave function of Gaussian type is assumed as follows
+
+$$ \psi(x,t) = C_0 \exp \left[ C(t) + B(t)x - \Gamma(t)x^2 \right], \quad x \in \mathbf{R}, \quad \text{Real}(\Gamma) > 0, \qquad (1) $$
+
+where $C, B, \Gamma$ are complex functions of time $t$ and $C_0$ the time-independent normalization constant. When the Schrödinger operator $[\mathrm{i}\hbar\partial_t - H]$ is applied to $\psi$ for a Hamiltonian with harmonic potential, then the wave function $\psi$ is reproduced up to a factor which is a quadratic polynomial and must vanish identically in the configuration variable $x$:
+
+$$ 0 = p_0(t) + p_1(t)x + p_2(t)x^2. \qquad (2) $$
+
+The conditions $p_0 = 0$, $p_1 = 0$, and $p_2 = 0$, give rise to three first-order differential equations for the functions $C(t)$, $B(t)$, and $\Gamma(t)$. In the following we examine two cases for the HO: type-I and type-II are characterized by a constant and time-dependent function $\Gamma$, respectively. In the case of the RO, only a time-dependent $\Gamma$ leads to a solution. By a suitable choice of the parameters, the ansatz (1) solves the time-dependent Schrödinger equation both for the HO and the RO Hamiltonian
+
+$$ H = p^2/(2m) + (m\omega^2/2)x^2 \quad \text{and} \quad H_{\Omega} = p^2/(2m) - (m\Omega^2/2)x^2, \quad \omega, \Omega > 0, $$
+
+respectively.
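This reduction is easy to verify with a computer algebra system. The following SymPy sketch (symbol names are ours) applies $[i\hbar\partial_t - H]$ to the ansatz (1) for the HO Hamiltonian and confirms that the residual, divided by $\psi$, is a quadratic polynomial in $x$ whose coefficients $p_0, p_1, p_2$ yield the three differential equations:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, w, hbar = sp.symbols('m omega hbar', positive=True)
C = sp.Function('C')(t)
B = sp.Function('B')(t)
G = sp.Function('Gamma')(t)

psi = sp.exp(C + B*x - G*x**2)          # ansatz (1)
# [i*hbar*d_t - H] psi with H = -hbar^2/(2m) d_x^2 + (m w^2/2) x^2
residual = (sp.I*hbar*sp.diff(psi, t)
            + hbar**2/(2*m)*sp.diff(psi, x, 2)
            - m*w**2/2 * x**2 * psi)
q = sp.expand(sp.cancel(residual / psi))
# q = p0 + p1*x + p2*x^2; e.g. p1 = 0 gives i*hbar*B' = 2*hbar^2*G*B/m,
# and p2 = 0 is a Riccati-type equation for Gamma(t).
print(sp.degree(q, x))  # 2
```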
+
+## 3. Standard (Type-I) Coherent States of the HO
+
+In the following, the time-dependent solutions are derived, within the trial function scheme, for the Hamiltonian
+
+$$ H = p^2/(2m) + (m\omega^2/2)x^2 = (\hbar\omega/2) [-\partial_\zeta^2 + \zeta^2], \qquad (3) $$
+---PAGE_BREAK---
+
+where $\zeta = ax$ is dimensionless with $a^2 = m\omega/\hbar$. For later comparison, we list the standard definition of coherent states from the textbook [10], see Equations (4.72) and (4.75):
+
+$$|z\rangle = \exp\left[-\frac{1}{2}zz^*\right] \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}} |n\rangle, \quad (4)$$
+
+$$\psi_z(\zeta) = \pi^{-1/4} \exp\left[-\frac{1}{2}(zz^* + z^2)\right] \exp\left[-\frac{1}{2}\zeta^2 + \sqrt{2}\zeta z\right], \quad \zeta = ax, \quad a^2 = \frac{m\omega}{\hbar}, \quad (5)$$
+
+where $\psi_z(\zeta) = \langle\zeta|z\rangle$, $|n\rangle$ denotes the n-th energy eigenvector, and the star superscript means complex conjugation. The time evolution gives rise to, see [10],
+
+$$|z,t\rangle = \exp[-i\omega t/2] |z \exp[-i\omega t]\rangle, \quad (6)$$
+
+$$\psi_z(\zeta, t) = \exp[-i\omega t/2] \psi_{(z \exp[-i\omega t])}(\zeta). \quad (7)$$
+
+The state $|z\rangle$ is minimal with respect to the position-momentum uncertainty product $\Delta x \Delta p$, and there exists the following completeness property, see [3],
+
+$$\frac{1}{\pi} \int_0^\infty u du \int_0^{2\pi} d\varphi |z\rangle\langle z| = \sum_n |n\rangle\langle n|, \quad z = u \exp[i\varphi]. \quad (8)$$
+
+The relation (8) follows immediately from the definition (4). An equivalent statement is:
+
+$$\frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \delta(\zeta_2 - \zeta_1), \quad (9)$$
+
+which corresponds to the completeness of the energy eigenfunctions of the harmonic oscillator. In Appendix B, we reproduce a proof of (9), which is appropriate, since the proof has to be extended to the modified coherent states in the type-II HO and the RO cases.
+
+In terms of the scaled variables $\zeta$ and $\tau = t\omega$, the trial ansatz reads:
+
+$$\psi(\zeta, \tau) = C_0 \exp[c(\tau) + \beta(\tau)\zeta - \gamma(\tau)\zeta^2/2], \quad (10)$$
+
+where $c, \beta, \gamma$ are dimensionless functions of $\tau$, and the re-scaling factor of the probability density, $1/\sqrt{a}$, is taken into the normalization constant $C_0$.
+
+We assume that $\gamma = \gamma_0 = \text{const}$. Then, the polynomial (2) gives rise to the equations:
+
+$$\gamma_0^2 = 1, \quad i\beta'(\tau) = \beta(\tau), \quad 2ic'(\tau) = 1 - \beta^2(\tau), \quad (11)$$
+
+which implies that $\gamma_0 = 1$ is fixed. The further solutions emerge easily as:
+
+$$\beta(\tau) = C_2 \exp[-i\tau], \quad c(\tau) = -i\tau/2 - (C_2^2/4) \exp[-2i\tau] + C_3, \quad (12)$$
+
+where $C_2$ and $C_3$ are complex integration constants. A comparison with (5), at $t=0$, suggests setting:
+
+$$C_2 = \sqrt{2}z, \quad C_3 = -(1/2)zz^*, \quad (13)$$
+
+which specifies the functions $\beta$ and $c$ as follows:
+
+$$\beta(\tau) = \sqrt{2}(z \exp[-i\tau]), \quad c(\tau) = -i\tau/2 - (1/2)[zz^* + (z \exp[-i\tau])^2]. \quad (14)$$
+---PAGE_BREAK---
+
+The normalization integral with respect to $\zeta$ amounts to the condition
+
+$$C_0^2 \sqrt{\pi} = 1; \qquad (15)$$
+
+hence (7) with (5) is reproduced.
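As an independent sanity check, one can verify numerically, by central differences, that the type-I solution, i.e., the ansatz (10) with (14) and $C_0 = \pi^{-1/4}$, satisfies the scaled Schrödinger equation $i\partial_\tau\psi = \frac{1}{2}(-\partial_\zeta^2 + \zeta^2)\psi$ and stays normalized; the values of $z$ and $\tau$ below are arbitrary.

```python
import numpy as np

z = 0.7 - 0.4j                       # arbitrary state label
zeta = np.linspace(-8.0, 8.0, 2001)
dz = zeta[1] - zeta[0]

def psi(tau):
    w = z * np.exp(-1j * tau)
    c = -1j * tau / 2 - 0.5 * (z * np.conj(z) + w**2)   # Equation (14)
    return np.pi**-0.25 * np.exp(c + np.sqrt(2) * w * zeta - zeta**2 / 2)

tau, dt = 0.9, 1e-5
p = psi(tau)
lhs = 1j * (psi(tau + dt) - psi(tau - dt)) / (2 * dt)   # i d(psi)/d(tau)
# second-order Laplacian; the wrap-around at the ends is harmless (psi ~ 0)
lap = (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dz**2
rhs = 0.5 * (-lap + zeta**2 * p)
err = np.max(np.abs(lhs - rhs))
norm = np.sum(np.abs(p)**2) * dz     # should stay 1 for C0 = pi^{-1/4}
```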
+
+## 4. Type-II Solutions of the Harmonic Oscillator
+
+With $\gamma$ being a function of time, one obtains the following differential equations with prime denoting the derivative with respect to the scaled time $\tau$:
+
+$$i\gamma' = \gamma^2 - 1, \quad i\beta' = \gamma\beta; \quad 2ic' = \gamma - \beta^2. \qquad (16)$$
+
+The solution for $\gamma$ is
+
+$$\gamma(\tau) = \frac{\exp(2i\tau) - C_1}{\exp(2i\tau) + C_1}, \quad C_1 = \frac{1-\gamma_0}{1+\gamma_0}, \quad \gamma_0 = \gamma(0). \qquad (17)$$
+
+Splitting $\gamma$ into its real and imaginary parts, one can write
+
+$$\begin{aligned} \gamma(\tau) &= \gamma_R + i\gamma_I; & \gamma_R &= (1-C_1^2)N_1^{-1}, & \gamma_I &= 2C_1N_1^{-1}\sin(2\tau), \\ N_1(\tau) &= 1+C_1^2+2C_1\cos(2\tau) = 4(1+\gamma_0)^{-2}[1+(\gamma_0^2-1)\sin^2(\tau)]. & & & \end{aligned} \qquad (18)$$
+
+In order that the wave function is square integrable, $\gamma_R$ has to be positive, which implies that
+
+$$C_1^2 < 1, \text{ i.e., } \gamma_0 > 0. \qquad (19)$$
+
+The initial value $\gamma(t=0) = \gamma_0 > 0$ emerges as a disposable parameter.
+
+The probability density, $P = |\psi(\zeta, \tau)|^2$, is characterized by a width of order of magnitude $d = 1/\sqrt{\gamma_R}$:
+
+$$d(\tau) = \sqrt{[1 + (\gamma_0^2 - 1) \sin^2(\tau)] / \gamma_0}. \qquad (20)$$
+
+Obviously, the width fluctuates, or "breathes", periodically with time. Of course, this is not a breathing mode as observed in systems of confined interacting particles; see, e.g., [19,20].
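A quick numerical check of (17) and (20) confirms the breathing: $\gamma(\tau)$ satisfies the Riccati equation $i\gamma' = \gamma^2 - 1$ of (16), and the width $d(\tau)$ oscillates between $1/\sqrt{\gamma_0}$ and $\sqrt{\gamma_0}$ with period $\pi$ in the scaled time (the chosen $\gamma_0$ is arbitrary).

```python
import numpy as np

g0 = 0.3                           # arbitrary gamma_0 > 0
C1 = (1 - g0) / (1 + g0)

def gamma(tau):
    # Equation (17)
    return (np.exp(2j * tau) - C1) / (np.exp(2j * tau) + C1)

def d(tau):
    # width, Equation (20)
    return np.sqrt((1 + (g0**2 - 1) * np.sin(tau)**2) / g0)

# central-difference check of i gamma' = gamma^2 - 1 at an arbitrary tau
tau, dt = 0.77, 1e-6
riccati_err = abs(1j * (gamma(tau + dt) - gamma(tau - dt)) / (2 * dt)
                  - (gamma(tau)**2 - 1))
# d(0) = 1/sqrt(g0), d(pi/2) = sqrt(g0), and d has period pi
```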
+
+Integration of the $\beta$ equation leads to
+
+$$\beta = C_2 \exp(i\tau) [\exp(2i\tau) + C_1]^{-1} = C_2 N_1^{-1} [\exp(-i\tau) + \exp(i\tau) C_1]. \qquad (21)$$
+
+Later on, the complex integration constant $C_2 = A_2 + iB_2$ will serve as a state label. The third differential equation of (16) amounts to
+
+$$c(\tau) = i\tau/2 - C_2^2 [4(\exp(2i\tau) + C_1)]^{-1} - (1/2) \ln \left( \exp(2i\tau) + C_1 \right) + C_3. \qquad (22)$$
+
+For reasons explained in Appendix A, we dispose of the integration constant $C_3$ as follows
+
+$$C_3 = -(1 + \gamma_0)(8\gamma_0)^{-1}(A_2^2 + \gamma_0 B_2^2), \quad C_2 = A_2 + iB_2. \qquad (23)$$
+
+In Appendix A, the probability density $P$ is derived in the following form
+
+$$P(\xi, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp[-\gamma_R (\xi - \beta_R / \gamma_R)^2], \qquad (24)$$
+---PAGE_BREAK---
+
+where the time-dependent functions $\gamma_R$ and $N_1$ are defined through (17) and (18), and $\beta_R$ comes out as
+
+$$ \beta_R(\tau) = 2(1 + \gamma_0)^{-1} N_1^{-1} [A_2 \cos(\tau) + \gamma_0 B_2 \sin(\tau)]. \quad (25) $$
+
+The complex integration constant $C_2$ corresponds to the familiar complex quantum number $z$ in the case of the standard coherent states; hence, the real numbers $A_2, B_2$ characterize different states. The normalization constant $C_0$ obeys the following condition, see Appendix A,
+
+$$ 1 = (1/2) C_0^2 (1 + \gamma_0) \sqrt{\pi / \gamma_0}. \quad (26) $$
+
+## 4.1. Completeness of Type-II States
+
+Combining the above results, we write the time-dependent wave function as follows:
+
+$$ \psi(\xi, \tau) = \frac{C_0 \exp(i\tau/2)}{\sqrt{\exp(2i\tau) + C_1}} \exp \left[ C_3 - \frac{C_2^2 (\exp(-2i\tau) + C_1)}{4N_1} + \beta(\tau)\xi - \gamma(\tau)\frac{\xi^2}{2} \right], \quad (27) $$
+
+where $\gamma$, $\beta$, and $C_3$ are defined in (18), (21), and (23), respectively. Let us consider $\psi$ at zero time:
+
+$$ \psi(\xi, 0) = \frac{C_0}{\sqrt{1+C_1}} \exp \left[ C_3 - \frac{C_2^2}{4(1+C_1)} + C_2(1+\gamma_0)\xi/2 - \gamma_0\xi^2/2 \right]. \quad (28) $$
+
+In (28), we set $\tilde{\xi} = \sqrt{\gamma_0}\,\xi$ to write:
+
+$$ \psi(\tilde{\xi}, 0) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ C_3 - \frac{C_2^2}{4(1+C_1)} + \frac{C_2(1+\gamma_0)}{2\sqrt{\gamma_0}}\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right]. \quad (29) $$
+
+Now we substitute the complex variable $z$ for the integration constant $C_2$ as follows:
+
+$$ C_2 \frac{1 + \gamma_0}{2\sqrt{\gamma_0}} = \sqrt{2}z \quad (30) $$
+
+and obtain:
+
+$$ \psi(\tilde{\xi}, 0) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ C_3 - z^2 \frac{\gamma_0}{1+\gamma_0} + \sqrt{2}z\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right]. \quad (31) $$
+
+In $C_3$, given in (23), we make the following replacements which are induced by (30):
+
+$$ A_2 \rightarrow \kappa(z+z^*), \quad B_2 \rightarrow -i\kappa(z-z^*), \quad \kappa = \frac{\sqrt{2\gamma_0}}{1+\gamma_0}. \quad (32) $$
+
+There occur some nice cancellations, and one obtains:
+
+$$ \psi_z(\tilde{\xi}) = \frac{C_0 \gamma_0^{-1/4}}{\sqrt{1+C_1}} \exp \left[ -\frac{1}{2}(zz^* + z^2) + iD + \sqrt{2}z\tilde{\xi} - \frac{\tilde{\xi}^2}{2} \right], \quad D = \frac{1-\gamma_0}{2(1+\gamma_0)} \operatorname{Im}(z^2). \quad (33) $$
+
+Comparison with (5) shows that the wave function (33) has the same structure apart from the purely imaginary phase $iD$. The latter drops out in the completeness proof, see (A15) in Appendix B. As a consequence, the states (33) form a complete set of states with respect to the state label $z$.
+
+At $\tau=0$, the states (33) differ from the standard coherent states (5) by the state-dependent phase $D$, through the variables $\zeta$ and $\tilde{\xi}$, which denote differently scaled versions of the space variable $x$, and also through the different definition of the quantum number $z$, which for simplicity was denoted by the same symbol in (30). Essentially, type-I and type-II states differ by their time evolution and by the width parameter, which equals $a^2 = m\omega/\hbar$ in the first case and is an arbitrary positive number $\gamma_0$ in the second.
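The algebra leading to (33) is easy to misstep; the following numerical spot check (with arbitrary $\gamma_0$ and $z$, and $C_0$ fixed by normalization of the probability density) confirms that the $\tau = 0$ state (29), after the substitutions (30), (32), and (23), coincides pointwise with the coherent-state form (33), and that the overall prefactor reduces to $\pi^{-1/4}$.

```python
import numpy as np

g0 = 2.5                                   # disposable width gamma_0 > 0
C1 = (1 - g0) / (1 + g0)
z = 0.4 + 0.9j                             # arbitrary state label
kappa = np.sqrt(2 * g0) / (1 + g0)         # Equation (32)
A2 = kappa * (z + np.conj(z))
B2 = -1j * kappa * (z - np.conj(z))
C2 = A2 + 1j * B2
C3 = -(1 + g0) / (8 * g0) * (A2**2 + g0 * B2**2)             # Equation (23)
C0 = np.sqrt(2 * np.sqrt(g0) / ((1 + g0) * np.sqrt(np.pi)))  # normalization

xi = np.linspace(-4.0, 4.0, 9)             # a few values of tilde-xi
pref = C0 * g0**-0.25 / np.sqrt(1 + C1)
lhs = pref * np.exp(C3 - C2**2 / (4 * (1 + C1))
                    + C2 * (1 + g0) * xi / (2 * np.sqrt(g0)) - xi**2 / 2)
D = (1 - g0) / (2 * (1 + g0)) * (z**2).imag
rhs = pref * np.exp(-0.5 * (z * np.conj(z) + z**2) + 1j * D
                    + np.sqrt(2) * z * xi - xi**2 / 2)
```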
+---PAGE_BREAK---
+
+## 4.2. Mean Values and Uncertainty Product
+
+In the following, we list mean values for the time-dependent states (27) including the position-momentum uncertainty product $\Delta_{xp}$. They are periodic in time with the oscillator angular frequency $\omega \equiv 2\pi/T$. The uncertainty product is minimal at the discrete times $t_n = (1/4)nT$, $n = 0, 1, \dots$. For comparison, the traditional coherent states are always minimal [10]. We use the abbreviations $(\Delta_x)^2 = \langle x^2 \rangle - \langle x \rangle^2$ and $(\Delta_v)^2 = \langle v^2 \rangle - \langle v \rangle^2$ for the mean square deviations of position and velocity, respectively.
+
+$$ \langle x(\tau) \rangle = (1/a)(1 + \gamma_0)(2\gamma_0)^{-1} [A_2 \cos(\tau) + \gamma_0 B_2 \sin(\tau)]; \quad (34) $$
+
+$$ \langle v(\tau) \rangle = \hbar a (1+\gamma_0)(2m\gamma_0)^{-1} [-A_2 \sin(\tau) + \gamma_0 B_2 \cos(\tau)]; \quad (35) $$
+
+$$ (\Delta_x)^2 = (4a^2\gamma_0)^{-1} [1 + \gamma_0^2 + (1-\gamma_0^2)\cos(2\tau)]; \quad (36) $$
+
+$$ (\Delta_v)^2 = \hbar^2 a^2 (4m^2\gamma_0)^{-1} [1 + \gamma_0^2 + (\gamma_0^2 - 1)\cos(2\tau)]; \quad (37) $$
+
+$$ \langle H \rangle = \hbar\omega(8\gamma_0^2)^{-1} \left[ (1+\gamma_0)^2 (A_2^2 + \gamma_0^2 B_2^2) + 2\gamma_0(1+\gamma_0^2) \right]. \quad (38) $$
+
+It is noticed that the mean square deviations do not depend on the state label ($A_2, B_2$). The uncertainty product follows immediately from (36) and (37) as
+
+$$ \Delta_{xp} := (\Delta_x)^2 (\Delta_p)^2 = m^2 (\Delta x)^2 (\Delta v)^2 = \frac{\hbar^2}{16\gamma_0^2} [(1+\gamma_0^2)^2 - (1-\gamma_0^2)^2 \cos^2(2\tau)]. \quad (39) $$
+
+In the special case $\gamma_0 = 1$, the product is always minimal. As a matter of fact, $\gamma_0 = 1$ is the type-I case of Section 3.
+
+By (38), the mean energy does not depend on time and is positive definite, as it must be. The limit to the standard case with $\gamma_0 = 1$ gives the known result
+
+$$ \langle H \rangle_{\gamma_0=1} = \hbar\omega(zz^* + 1/2). \quad (40) $$
+
+The state with $z=0$ is the ground state of the HO with zero-point energy $\hbar\omega/2$.
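Formula (39) can also be probed numerically; in units with $\hbar = 1$, the product stays at its minimum $1/4$ for $\gamma_0 = 1$, while for $\gamma_0 \neq 1$ it oscillates above $1/4$ and touches the minimum at $\tau = n\pi/2$, i.e., at the times $t_n = nT/4$.

```python
import numpy as np

def uncertainty_product(tau, g0, hbar=1.0):
    # Equation (39)
    return hbar**2 / (16 * g0**2) * ((1 + g0**2)**2
                                     - (1 - g0**2)**2 * np.cos(2 * tau)**2)

tau = np.linspace(0.0, 2 * np.pi, 10001)
P_narrow = uncertainty_product(tau, 0.25)   # gamma_0 != 1: periodic, >= 1/4
P_coherent = uncertainty_product(tau, 1.0)  # gamma_0 = 1: always minimal
```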
+
+## 5. Wave Packet Solutions for the RO
+
+For convenience, we will keep the same symbols for the trial functions $\gamma(\tau)$, $\beta(\tau)$, and $c(\tau)$. Setting $\omega = i\Omega$ with $\Omega > 0$ implies that $a^2 = -m\Omega/\hbar$. In the coherent state (5), the exponential part, $-\zeta^2/2 = -(m\omega/\hbar)x^2/2$, is then replaced by $(m\Omega/\hbar)x^2/2$, which precludes normalization.
+
+We introduce $1/a_\Omega$ as the new length parameter and define the dimensionless magnitudes
+
+$$ \zeta = a_\Omega x, \quad \tau = t\Omega, \quad \text{with } a_\Omega^2 = m\Omega/\hbar. \quad (41) $$
+
+The Schrödinger equation, with the ansatz (10), has to be solved for the RO Hamiltonian
+
+$$ H_{\Omega} = p^2/(2m) - (m\Omega^2/2) x^2 = -(\hbar\Omega/2) [\partial_{\zeta}^{2} + \zeta^{2}]. \quad (42) $$
+
+From (2), the following differential equations result:
+
+$$ i\gamma'(\tau) = 1 + \gamma^2(\tau), \quad i\beta'(\tau) = \gamma(\tau)\beta(\tau), \quad 2ic'(\tau) = \gamma(\tau) - \beta^2(\tau), \quad (43) $$
+---PAGE_BREAK---
+
+where, as compared with the HO case in (16), only the equation for $\gamma$ differs. Beginning with $\gamma$, one successively obtains the following solutions
+
+$$ \gamma(\tau) = -i \tanh(\tau + iC_1), \quad (44) $$
+
+$$ \beta(\tau) = C_2 / \cosh(\tau + i C_1), \quad (45) $$
+
+$$ c(\tau) = C_3 - (1/2) \ln(\cosh(\tau + i C_1)) + (i/2) C_2^2 \tanh(\tau + i C_1), \quad (46) $$
+
+where $C_1, C_2, C_3$ are integration constants. We assume that
+
+$$ \gamma_0 \equiv \gamma(0) = \tan(C_1) > 0, \quad 0 < C_1 < \pi/2, \quad (47) $$
+
+which implies that
+
+$$ \cos(C_1) = (1 + \gamma_0^2)^{-1/2}, \quad \sin(C_1) = \gamma_0 (1 + \gamma_0^2)^{-1/2}. \quad (48) $$
+
+In order to decompose the functions $c(\tau)$, $\beta(\tau)$, $\gamma(\tau)$ into their real and imaginary parts, we take over the following abbreviations from [16]
+
+$$ f(\tau) = \cosh(\tau) - i\gamma_0 \sinh(\tau), \quad h(\tau) = [ff^*]^{-1}. \quad (49) $$
+
+After the decompositions $\beta = \beta_R + i\beta_I$, $\gamma = \gamma_R + i\gamma_I$, $C_2 = A_2 + iB_2$, we infer from (44) to (46):
+
+$$ \gamma_R = h(\tau)\gamma_0, \quad \gamma_I = -(h(\tau)/2)(1+\gamma_0^2)\sinh(2\tau); \quad (50) $$
+
+$$ \beta_R = h(\tau) \sqrt{1 + \gamma_0^2} [A_2 \cosh(\tau) + \gamma_0 B_2 \sinh(\tau)], $$
+
+$$ \beta_I = h(\tau) \sqrt{1 + \gamma_0^2} [B_2 \cosh(\tau) - \gamma_0 A_2 \sinh(\tau)]; \quad (51) $$
+
+$$ \exp[c(\tau)] = [\cosh(\tau + i C_1)]^{-1/2} \exp[C_3 - C_2^2 \gamma(\tau)/2]. \quad (52) $$
+
+According to (50), $\gamma_R$ is larger than zero, which makes the wave function (10) a normalizable wave packet. The probability density reads:
+
+$$ P(\zeta, \tau) = C_0^2 \exp[c + c^* + 2\beta_R\zeta - \gamma_R\zeta^2]. \quad (53) $$
+
+Integration with respect to $\zeta$ leads to the normalization condition
+
+$$ 1 = C_0^2 \sqrt{\pi/\gamma_R} \exp[c(\tau) + c^*(\tau) + \beta_R^2/\gamma_R]. \quad (54) $$
+
+The normalization constant $C_0$ was determined in [16] for real constants $C_2$. With $C_2 = A_2 + iB_2$, we dispose of the integration constant $C_3$ as
+
+$$ C_3 = -(1/2)(A_2^2/\gamma_0 + B_2^2\gamma_0) \quad (55) $$
+
+to obtain in a straightforward manner
+
+$$ C_0^2 = \left[\pi(\gamma_0^{-1} + \gamma_0)\right]^{-1/2}, \quad (56) $$
+
+which is time independent, as it must be.
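The time independence of the normalization, i.e., Equation (54) with $C_3$ from (55) and $C_0^2 = [\pi(\gamma_0^{-1}+\gamma_0)]^{-1/2}$, can be checked numerically; the sketch below evaluates the right-hand side of (54) at several times (sample parameters are arbitrary):

```python
import cmath, math

# Check that the normalization (54) equals 1 at several times tau when C3 is
# chosen according to (55), i.e. that C0 is time independent. Sample parameters.
g0, A2, B2 = 1.3, 0.4, -0.25
C1 = math.atan(g0)
C2 = complex(A2, B2)
C3 = -0.5 * (A2**2 / g0 + B2**2 * g0)                 # Equation (55)
C0_sq = 1.0 / math.sqrt(math.pi * (1.0 / g0 + g0))    # Equation (56)

def norm(tau):
    gamma = -1j * cmath.tanh(tau + 1j * C1)           # (44)
    beta = C2 / cmath.cosh(tau + 1j * C1)             # (45)
    expc = cmath.cosh(tau + 1j * C1)**-0.5 * cmath.exp(C3 - C2**2 * gamma / 2)  # (52)
    gR, bR = gamma.real, beta.real
    # right-hand side of (54)
    return C0_sq * math.sqrt(math.pi / gR) \
        * (expc * expc.conjugate()).real * math.exp(bR**2 / gR)

devs = [abs(norm(t) - 1.0) for t in (0.0, 0.5, 1.0, 2.0)]
```

The deviation from unity stays at machine precision for all sampled times.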
+
+With the aid of elementary trigonometric manipulations and the normalization constant $C_0$ given in (56), the wave function can be written as follows:
+
+$$ \psi(\zeta, \tau) = (\gamma_0/\pi)^{1/4} \sqrt{h(\tau)f(\tau)} \exp[C_3 - (1/2)C_2^2\gamma(\tau) + \beta(\tau)\zeta - \gamma(\tau)\zeta^2/2]. \quad (57) $$
+---PAGE_BREAK---
+
+5.1. Coherent States of the RO
+
+As before, let us consider the wave function at time $t = 0$, where in particular $h = f = 1$:
+
+$$
+\psi(\zeta, 0) \equiv \psi(\zeta, \tau = 0) = (\gamma_0 / \pi)^{1/4} \exp \left[ C_3 - \frac{1}{2} C_2^2 \gamma_0 + C_2 \sqrt{1 + \gamma_0^2}\, \zeta - \gamma_0 \zeta^2 / 2 \right]. \quad (58)
+$$
+
+After the re-scaling $\zeta \rightarrow \tilde{\zeta}$ with $\tilde{\zeta} = \sqrt{\gamma_0} \zeta$, one obtains
+
+$$
+\Psi(\tilde{\zeta}, 0) = \pi^{-1/4} \exp \left[ C_3 - \frac{1}{2} C_2^2 \gamma_0 + C_2 \sqrt{(1+\gamma_0^2)/\gamma_0} \tilde{\zeta} - \frac{\tilde{\zeta}^2}{2} \right]. \quad (59)
+$$
+
+In view of the standard HO wave function (5), we replace the integration constant $C_2$ by $z$:
+
+$$
+C_2 \sqrt{(1 + \gamma_0^2) / \gamma_0} = \sqrt{2} z \tag{60}
+$$
+
+and obtain
+
+$$
+\Psi_z(\xi) = \pi^{-1/4} \exp \left[ C_3 - \frac{\gamma_0^2 z^2}{(1+\gamma_0^2)} + \sqrt{2} z \xi - \frac{\xi^2}{2} \right]. \quad (61)
+$$
+
+In $C_3$, given in (55), the relation (60) gives rise to the substitutions
+
+$$
+A_2 \rightarrow \kappa_1(z+z^*), \quad B_2 \rightarrow -i\kappa_1(z-z^*), \quad \kappa_1 = (1/2)\sqrt{2\gamma_0/(1+\gamma_0^2)}, \qquad (62)
+$$
+
+and hence to
+
+$$
+C_3 = [4(1 + \gamma_0^2)]^{-1} [(\gamma_0^2 - 1)(z^2 + z^{*2}) - 2(1 + \gamma_0^2)zz^*]. \quad (63)
+$$
+
+After some elementary re-arrangements, one finds
+
+$$
+\Psi_z(\xi) = \frac{1}{\pi^{1/4}} \exp \left[ -\frac{1}{2}(zz^* + z^2) + iD_1 + \sqrt{2}z\xi - \frac{\xi^2}{2} \right], \quad D_1 = \frac{1-\gamma_0^2}{2(1+\gamma_0^2)} \operatorname{Im}(z^2). \quad (64)
+$$
+
+Apart from the purely imaginary phase $i D_1$, the wave functions $\Psi_z$ are the same as the standard coherent states (5). Since in the completeness proof the $D_1$ phase drops out, see (A15) in Appendix B, the states $\Psi_z$ form a complete function set.
+
+5.2. Mean Values
+
+With the aid of Mathematica [21], we get the following mean values for the position $x$, the velocity $v$,
+their mean square deviations $(\Delta x)^2$, $(\Delta v)^2$, and the mean energy $\langle H_\Omega \rangle$:
+
+$$
+\langle x \rangle = (a_{\Omega})^{-1} \sqrt{1 + \gamma_0^{-2}} [A_2 \cosh(\tau) + \gamma_0 B_2 \sinh(\tau)]; \quad (65)
+$$
+
+$$
+(\Delta x)^2 = (2a_\Omega^2 \gamma_0)^{-1} [\cosh^2(\tau) + \gamma_0^2 \sinh^2(\tau)]; \quad (66)
+$$
+
+$$
+\langle v \rangle = (\hbar a_{\Omega}/m) \sqrt{1 + \gamma_0^{-2}} [A_2 \sinh(\tau) + \gamma_0 B_2 \cosh(\tau)]; \quad (67)
+$$
+
+$$
+(\Delta v)^2 = (\hbar a_{\Omega} / (2m))^2 \gamma_0^{-1} [\gamma_0^2 - 1 + (1 + \gamma_0^2) \cosh(2\tau)]; \quad (68)
+$$
+
+$$
+\langle H_{\Omega} \rangle = \hbar\Omega(4\gamma_0)^{-1}[\gamma_0^2 - 1 + 2(\gamma_0 + \gamma_0^{-1})(\gamma_0^2 B_2^2 - A_2^2)]. \quad (69)
+$$
+
+The mean energy does not depend on time, as it must be. With the aid of (62), the mean energy
+could also be expressed in terms of the complex state label z. Since $A_2$ and $B_2$ are arbitrary real
+---PAGE_BREAK---
+
+numbers, the mean energy can have any positive or negative value. From (66) and (68) one infers the
+position-momentum uncertainty product $\Delta_{xp}$ as
+
+$$
+\Delta_{xp}^2(\tau) = \hbar^2 / (8\gamma_0^2) \left[ \cosh^2(\tau) + \gamma_0^2 \sinh^2(\tau) \right] \left[ \gamma_0^2 - 1 + (1+\gamma_0^2) \cosh(2\tau) \right]. \quad (70)
+$$
+
+This product obeys the inequality
+
+$$
+\Delta_{xp}^2(\tau) > \Delta_{xp}^2(0) = \frac{\hbar^2}{4}, \quad \tau > 0. \tag{71}
+$$
+
+Obviously, the uncertainty product is minimal at $\tau = 0$, that is, for the coherent states (64).
+By (66), the wave packets broaden exponentially with time.
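The chain (66), (68), (70) and the bound (71) can be verified numerically; in the sketch below, $\hbar = m = a_\Omega = 1$ and $\gamma_0$ is an arbitrary sample value:

```python
import math

# With hbar = m = a_Omega = 1, check that (Delta x)^2 (Delta v)^2 from (66)
# and (68) reproduces (70), equals 1/4 at tau = 0, and exceeds 1/4 for tau > 0.
g0 = 0.6   # sample gamma_0

def dx2(tau):   # Equation (66)
    return (math.cosh(tau)**2 + g0**2 * math.sinh(tau)**2) / (2 * g0)

def dv2(tau):   # Equation (68)
    return 0.25 / g0 * (g0**2 - 1 + (1 + g0**2) * math.cosh(2 * tau))

def dxp2(tau):  # Equation (70)
    return (math.cosh(tau)**2 + g0**2 * math.sinh(tau)**2) \
         * (g0**2 - 1 + (1 + g0**2) * math.cosh(2 * tau)) / (8 * g0**2)

taus = [0.0, 0.3, 1.0, 2.5]
prods = [dx2(t) * dv2(t) for t in taus]
errs = [abs(p - dxp2(t)) for p, t in zip(prods, taus)]
```

The product equals $1/4$ only at $\tau = 0$ and grows monotonically afterwards, in line with (71).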
+
+**6. Application to the Kepler-Coulomb Problem**
+
+The connection of the non-relativistic Hamiltonian for the hydrogen atom with the model
+of a four-dimensional oscillator is conveniently achieved by means of the Kustaanheimo-Stiefel
+transformation [15], which we write as follows [16,22]
+
+$$
+\begin{align*}
+u_1 &= \sqrt{r} \cos(\theta/2) \cos(\varphi - \Phi); & u_2 &= \sqrt{r} \cos(\theta/2) \sin(\varphi - \Phi); \\
+u_3 &= \sqrt{r} \sin(\theta/2) \cos(\Phi); & u_4 &= \sqrt{r} \sin(\theta/2) \sin(\Phi),
+\end{align*}
+\tag{72}
+$$
+
+where $r, \theta, \varphi$ are three-dimensional polar coordinates with $r > 0, 0 < \theta < \pi, 0 \le \varphi < 2\pi$,
+and $0 \le \Phi < 2\pi$ generates the extension to the fourth dimension. The vector **u** = {$u_1, u_2, u_3, u_4$}
+covers the $\mathbf{R}^4$ and the volume elements are related as [16]
+
+$$
+du_1 du_2 du_3 du_4 = (1/8) r \sin(\theta) dr d\theta d\varphi d\Phi. \quad (73)
+$$
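That the map (72) satisfies $\mathbf{u}\cdot\mathbf{u} = r$ is immediate to confirm numerically (the sample coordinate values below are arbitrary):

```python
import math

# Check that the Kustaanheimo-Stiefel map (72) satisfies u.u = r.
r, theta, phi, Phi = 2.7, 1.1, 4.0, 0.8   # arbitrary sample coordinates
u1 = math.sqrt(r) * math.cos(theta / 2) * math.cos(phi - Phi)
u2 = math.sqrt(r) * math.cos(theta / 2) * math.sin(phi - Phi)
u3 = math.sqrt(r) * math.sin(theta / 2) * math.cos(Phi)
u4 = math.sqrt(r) * math.sin(theta / 2) * math.sin(Phi)
uu = u1**2 + u2**2 + u3**2 + u4**2        # should equal r
```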
+
+The stationary Schrödinger equation $H\psi = E\psi$ for the Hamiltonian $H = p^2/(2m) - \lambda/r$ is
+transformed into the following form of a four-dimensional harmonic oscillator [14]:
+
+$$
+H_u \Psi(\mathbf{u}) = \lambda \Psi(\mathbf{u}), \quad H_u = -\frac{\hbar^2}{8m} \Delta_u - E\, \mathbf{u} \cdot \mathbf{u}, \quad \Delta_u = \partial_{u_1}^2 + \dots + \partial_{u_4}^2
+\qquad (74)
+$$
+
+with the constraint
+
+$$
+\partial_{\Phi} \Psi(\mathbf{u}) = 0. \tag{75}
+$$
+
+It should be noticed that, by (72), the components $u_i^2$ have the dimension of a length rather than
+a squared length. As a consequence, in the evolution equation $i\hbar\partial_\sigma\Psi = H_u\Psi$, the parameter $\sigma$, which has
+the dimension of time/length, is not the time parameter of the original problem. For negative energies
+with $E<0$, four-dimensional coherent oscillator states (of type-I) were used in [14] to show that elliptic
+orbits emerge in the classical limit, whereby $\sigma$ turns out to be proportional to the eccentric anomaly.
+
+In the spectrum of positive energies (ionized states of the hydrogen atom) with $E > 0$,
+coherent states of the RO were constructed in [16] and gave rise to hyperbolic orbits in the classical limit;
+by analytic continuation, also the elliptic orbits were derived from the RO states in the classical
+limit [17]. In addition, Kepler's equation was obtained by the assumption that time-dependence enters
+through the curve parameter $\sigma$ only. Recently [18], based on the coherent RO states, the first order
+quantum correction to Kepler's equation could be established for the smallness parameter $\epsilon = \hbar/L$
+where L denotes the orbital angular momentum.
+
+**7. Conclusions**
+
+Besides the standard coherent states of the harmonic oscillator (HO), a further solution family of
+the time-dependent Schrödinger equation was derived with the following properties: (i) The functions
+are normalizable of Gaussian type and contain a disposable width parameter. The latter allows us,
+for instance, to use arbitrarily concentrated one-particle states independently of the parameters of
+---PAGE_BREAK---
+
+a harmonic trap; (ii) The functions are complete and isomorphic to the standard coherent states at time $t=0$; (iii) The states minimize the position-momentum uncertainty product at the discrete times $T_n = n\pi/(2\omega)$, $n=0,1,\dots$; (iv) The width of the wave packets "breathes" periodically with period $T/2 = \pi/\omega$; (v) There is no diffusion; $T = 2\pi/\omega$ is the recurrence time of the states.
+
+In the case of the reversed harmonic oscillator (RO), there exists only one family of time-dependent solutions. They share the properties (i) and (ii) of the type-II HO states, and (iii) is fulfilled only at time $t=0$. There is no recurrence; instead there is diffusion, with a broadening which increases exponentially with time. The application to the Kepler-Coulomb problem was briefly discussed. The HO coherent states of type-I and the RO coherent states served as a basis to derive, in the classical limit, the elliptic Kepler orbits [14] and the hyperbolic ones [16,17], respectively.
+
+**Acknowledgments:** The author expresses his gratitude to Jürgen Parisi for his constant encouragement and support. He also profited from his critical reading of the manuscript.
+
+**Conflicts of Interest:** The author declares no conflict of interest.
+
+## Appendix A. Probability Density for Type-II States
+
+We have to decompose the functions $\beta(\tau)$ and $c(\tau)$, as given by (21) and (22), into their real and imaginary parts. To this end, we set $C_2 = A_2 + iB_2$ with real constants $A_2$ and $B_2$ and $\beta = \beta_R + i\beta_I$. Using the definitions of $N_1$ and $C_1$ in terms of $\gamma_0$, we obtain
+
+$$
+\begin{aligned}
+\beta_R &= \frac{1 + \gamma_0}{2} \frac{A_2 \cos(\tau) + B_2 \gamma_0 \sin(\tau)}{1 + (\gamma_0^2 - 1) \sin^2(\tau)}, \\
+\beta_I &= \frac{1 + \gamma_0}{2} \frac{B_2 \cos(\tau) - A_2 \gamma_0 \sin(\tau)}{1 + (\gamma_0^2 - 1) \sin^2(\tau)}.
+\end{aligned}
+\quad (A1) $$
+
+In view of the function $c(\tau)$, we make use of the following auxiliary relations
+
+$$ F_c \equiv -C_2^2 [4 (\exp(2i \tau) + C_1)]^{-1} = F_R + i F_I, $$
+
+$$ F_R = \left(1/(4N_1)\right) \left[(B_2^2 - A_2^2)\cos(2\tau) - 2A_2B_2\sin(2\tau) + (B_2^2 - A_2^2)C_1\right], $$
+
+$$ F_I = \left(1/(4N_1)\right) \left[(A_2^2 - B_2^2)\sin(2\tau) - 2A_2B_2\cos(2\tau) - 2A_2B_2C_1\right], \quad (A2) $$
+
+$$ \exp[c(\tau) + c^{*}(\tau)] = (1/\sqrt{N_1}) \exp[2C_3 + 2F_R], \quad (A3) $$
+
+where the integration constant $C_3$ is assumed to be real and the star denotes complex conjugation. The probability density $P$ results from the wave function (10) in the form
+
+$$ P(\zeta, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp \left[ 2C_3 + 2F_R + 2\beta_R \zeta - \gamma_R \zeta^2 \right], \quad (A4) $$
+
+where $C_0$ is defined through the normalization integral
+
+$$ 1 = \int_{-\infty}^{\infty} d\zeta\, P(\zeta, \tau) = \frac{C_0^2 \sqrt{\pi}}{\sqrt{N_1 \gamma_R}} \exp(G), \quad G = 2C_3 + 2F_R + \frac{\beta_R^2}{\gamma_R}. \quad (A5) $$
+
+From the expression of $G$, it is not obvious that $C_0$ is independent of $\tau$ which was assumed in (10). Clearly, since $\Phi := \psi/C_0$ obeys the Schrödinger equation and $H$ is hermitian, one has the property
+
+$$ \partial_{\tau}\langle\Phi|\Phi\rangle = 0. \quad (A6) $$
+
+As a matter of fact, it is straightforward to show that
+
+$$ 2F_R + \frac{\beta_R^2}{\gamma_R} = [B_2^2(C_1 - 1) - A_2^2(1 + C_1)] [2(C_1^2 - 1)]^{-1} \quad (A7) $$
+---PAGE_BREAK---
+
+does not depend on $\tau$. We now dispose of the integration constant $C_3$ such that the exponent $G$ vanishes:
+
+$$ C_3 = - [B_2^2(C_1-1) - A_2^2(1+C_1)] [4(C_1^2-1)]^{-1}. \quad (A8) $$
+
+In view of $G=0$, we replace $2C_3 + 2F_R$ by $-\beta_R^2/\gamma_R$, so that
+
+$$ P(\zeta, \tau) = \frac{C_0^2}{\sqrt{N_1}} \exp[-\gamma_R (\zeta - \beta_R/\gamma_R)^2], \quad (A9) $$
+
+which is the result (24). The normalization condition comes out immediately in the form
+
+$$ 1 = \frac{C_0^2 \sqrt{\pi}}{\sqrt{N_1 \gamma_R}} = \frac{C_0^2 \sqrt{\pi}}{\sqrt{1-C_1^2}} = \frac{C_0^2 \sqrt{\pi}(1+\gamma_0)}{2\sqrt{\gamma_0}}. \quad (A10) $$
+
+## Appendix B. Proof of Completeness
+
+In order to prove the completeness of the functions (5), i.e., for the type-I HO case, we take advantage of the following generating function of the Hermite polynomials [23]:
+
+$$ \exp[2XZ - Z^2] = \sum_{n=0}^{\infty} \frac{Z^n}{n!} H_n(X). \quad (A11) $$
+
+In the function (5), we replace $z$ by $\sqrt{2}Z$ to obtain
+
+$$ \psi_z(\zeta) = \pi^{-1/4} \exp[-ZZ^* - (1/2)\zeta^2] \exp[-Z^2 + 2\zeta Z]. \quad (A12) $$
+
+With the aid of (A11), one can write
+
+$$ \psi_z(\zeta) = \exp[-(1/2)zz^*] \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}} \varphi_n(\zeta), \quad (A13) $$
+
+where
+
+$$ \varphi_n(\zeta) = \frac{1}{\sqrt{n! 2^n \sqrt{\pi}}} H_n(\zeta) \exp[-(1/2)\zeta^2]. \quad (A14) $$
+
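The expansion (A13) can be tested numerically: a truncated sum over the eigenfunctions (A14) should reproduce the closed Gaussian form of the coherent state (5). A sketch with arbitrary sample values of $z$ and $\zeta$:

```python
import cmath, math

# Numerical check of (A13): the truncated series over the eigenfunctions (A14)
# converges to the closed Gaussian form of the coherent state (5).
z, zeta = complex(0.4, 0.3), 0.7
N = 40   # truncation order of the sum

def phi_n(n, x):
    """Oscillator eigenfunction (A14), using the Hermite recurrence
    H_{k+1} = 2x H_k - 2k H_{k-1}."""
    Hm, H = 0.0, 1.0          # H_{-1} placeholder and H_0
    for k in range(n):
        Hm, H = H, 2 * x * H - 2 * k * Hm
    return H * math.exp(-x * x / 2) / math.sqrt(
        math.factorial(n) * 2**n * math.sqrt(math.pi))

series = cmath.exp(-0.5 * z * z.conjugate()) * sum(
    z**n / math.sqrt(math.factorial(n)) * phi_n(n, zeta) for n in range(N))
closed = math.pi**-0.25 * cmath.exp(-0.5 * (z * z.conjugate() + z * z)
                                    + math.sqrt(2) * z * zeta - zeta**2 / 2)
err = abs(series - closed)
```

For $|z| < 1$ the series converges rapidly; forty terms already reach machine precision.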
+By means of (A13) and setting $z = u \exp[i\varphi]$, we obtain
+
+$$ \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \exp[-u^2] \sum_{m,n=0}^{\infty} \frac{u^{n+m} \exp[i(m-n)\varphi]}{\sqrt{m!n!}} \varphi_m(\zeta_2) \varphi_n(\zeta_1). \quad (A15) $$
+
+In (A15), the $\varphi$ integration projects out the terms $n=m$ with the result
+
+$$ \frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = 2 \int_{0}^{\infty} u du \exp[-u^2] \sum_{n=0}^{\infty} \frac{u^{2n}}{n!} \varphi_n(\zeta_2) \varphi_n(\zeta_1). \quad (A16) $$
+
+After changing the integration variable $u \to v$ with $v = u^2$, i.e., $u\,du = dv/2$, one uses
+
+$$ \int_{0}^{\infty} dv \frac{v^n}{n!} \exp[-v] = 1, \quad n = 0, 1, \dots \quad (A17) $$
+
+and, in view of the completeness of the Hermite polynomials, arrives at
+
+$$ \frac{1}{\pi} \int_{0}^{\infty} u du \int_{0}^{2\pi} d\varphi \langle \zeta_2 | z \rangle \langle z | \zeta_1 \rangle = \sum_{n=0}^{\infty} \varphi_n(\zeta_2) \varphi_n(\zeta_1) = \delta(\zeta_2 - \zeta_1). \quad (A18) $$
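The elementary integral (A17) underlying this projection can be confirmed by direct numerical quadrature (trapezoidal rule on a truncated range):

```python
import math

# Check (A17): integral of v^n exp(-v)/n! over [0, inf) equals 1 for each n.
# The range is truncated at vmax, where the integrand is negligible.
def integral(n, vmax=60.0, steps=200000):
    dv = vmax / steps
    f = lambda v: v**n * math.exp(-v) / math.factorial(n)
    s = 0.5 * (f(0.0) + f(vmax)) + sum(f(i * dv) for i in range(1, steps))
    return s * dv

vals = [integral(n) for n in (0, 1, 4)]   # sample values of n
```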
+---PAGE_BREAK---
+
+In the type-II HO and the RO cases, there appear additional purely imaginary phases in the
+wave function, which do not depend on $\zeta_1$, $\zeta_2$, and drop out at the step (A15) of the completeness
+proof above.
+
+References
+
+1. Schrödinger, E. Der stetige Übergang von der Mikro- zur Makromechanik. *Naturwissenschaften* **1926**, *14*, 664–666.
+
+2. Glauber, R.J. Coherent and incoherent states of the radiation field. *Phys. Rev.* **1963**, *131*, 2766.
+
+3. Glauber, R.J. Photon Correlations. *Phys. Rev. Lett.* **1963**, *10*, 84.
+
+4. Antoniou, I.E.; Prigogine, I. Intrinsic irreversibility and integrability of dynamics. *Phys. A Stat. Mech. Appl.* **1993**, *192*, 443–464.
+
+5. Gentilini, S.; Braidotti, M.C.; Marcucci, G.; DelRe, E.; Conti, C. Physical realization of the Glauber quantum oscillator. *Sci. Rep.* **2015**, *5*, 15816.
+
+6. Glauber, R.J. Amplifiers, attenuators, and Schrödinger's cat. *Ann. N. Y. Acad. Sci.* **1986**, *480*, 336–372.
+
+7. Barton, G. Quantum mechanics of the inverted oscillator potential. *Ann. Phys.* **1986**, *166*, 322–363.
+
+8. Bhaduri, R.K.; Khare, A.; Reimann, S.M.; Tomusiak, E.L. The Riemann zeta function and the inverted harmonic oscillator. *Ann. Phys.* **1997**, *254*, 25–40.
+
+9. Guo, G.-J.; Ren, Z.-Z.; Ju, G.-X.; Guo, X.-Y. Quantum tunneling effect of a time-dependent inverted harmonic oscillator. *J. Phys. A Math. Theor.* **2011**, *44*, 185301.
+
+10. Galindo, A.; Pascual, P. *Quantum Mechanics I*; Springer: Berlin, Germany, 1990.
+
+11. Fock, V.A. Zur Theorie des Wasserstoffatoms. *Z. Phys.* **1935**, *98*, 145–154.
+
+12. Chen, A.C. Hydrogen atom as a four-dimensional oscillator. *Phys. Rev. A* **1980**, *22*, 333–335.
+
+13. Gracia-Bondia, J.M. Hydrogen atom in the phase-space formulation of quantum mechanics. *Phys. Rev. A* **1984**, *30*, 691–697.
+
+14. Gerry, C.C. Coherent states and the Kepler-Coulomb problem. *Phys. Rev. A* **1986**, *33*, 6–11.
+
+15. Kustaanheimo, P.; Stiefel, E. Perturbation theory of Kepler motion based on spinor regularization. *J. Reine Angew. Math.* **1965**, *218*, 204–219.
+
+16. Rauh, A.; Parisi, J. Quantum mechanics of hyperbolic orbits in the Kepler problem. *Phys. Rev. A* **2011**, *83*, 042101.
+
+17. Rauh, A.; Parisi, J. Quantum mechanics of Kepler orbits. *Adv. Stud. Theor. Phys.* **2014**, *8*, 889–938.
+
+18. Rauh, A.; Parisi, J. Quantum mechanical correction to Kepler’s equation. *Adv. Stud. Theor. Phys.* **2016**, *10*, 1–22.
+
+19. Baletto, F.; Ferrando, R. Structural properties of nanoclusters: Energetic, thermodynamic, and kinetic effects. *Rev. Mod. Phys.* **2005**, *77*, 371–423.
+
+20. Bauch, S.; Balzer, K.; Bonitz, M. Quantum breathing mode of trapped bosons and fermions at arbitrary coupling. *Phys. Rev. B* **2009**, *80*, 054515.
+
+21. Wolfram Research, Inc. Mathematica; Version 10.1.0.0; Wolfram Research, Inc.: Champaign, IL, USA, 2015.
+
+22. Chen, C.; Kibler, M. Connection between the hydrogen atom and the four-dimensional oscillator. *Phys. Rev. A* **1985**, *31*, 3960–3963.
+
+23. Gradshteyn, I.S.; Ryzhik, I.M. *Table of Integrals, Series, and Products*; Academic Press: New York, NY, USA, 1965.
+
+© 2016 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
+
+Article
+
+Entangled Harmonic Oscillators and
+Space-Time Entanglement
+
+Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³
+
+¹ Department of Physics, Middle East Technical University, 06800 Ankara, Turkey; baskal@newton.physics.metu.edu.tr
+
+² Center for Fundamental Physics, University of Maryland College Park, College Park, MD 20742, USA
+
+³ Department of Radiology, New York University School of Medicine, New York, NY 10016, USA; marilyn.noz@med.nyu.edu
+
+* Correspondence: yskim@umd.edu; Tel.: +1-301-937-6306
+
+Academic Editor: Sergei D. Odintsov
+
+Received: 26 February 2016; Accepted: 20 June 2016; Published: 28 June 2016
+
+**Abstract:** The mathematical basis for the Gaussian entanglement is discussed in detail, as well as its implications in the internal space-time structure of relativistic extended particles. It is shown that the Gaussian entanglement shares the same set of mathematical formulas with the harmonic oscillator in the Lorentz-covariant world. It is thus possible to transfer the concept of entanglement to the Lorentz-covariant picture of the bound state, which requires both space and time separations between two constituent particles. These space and time variables become entangled as the bound state moves with a relativistic speed. It is shown also that our inability to measure the time-separation variable leads to an entanglement entropy together with a rise in the temperature of the bound state. As was noted by Paul A. M. Dirac in 1963, the system of two oscillators contains the symmetries of the $O(3,2)$ de Sitter group containing two $O(3,1)$ Lorentz groups as its subgroups. Dirac noted also that the system contains the symmetry of the $Sp(4)$ group, which serves as the basic language for two-mode squeezed states. Since the $Sp(4)$ symmetry contains both rotations and squeezes, one interesting case is the combination of rotation and squeeze, resulting in a shear. While the current literature is mostly on the entanglement based on squeeze along the normal coordinates, the shear transformation is an interesting future possibility. The mathematical issues on this problem are clarified.
+
+**Keywords:** Gaussian entanglement; two coupled harmonic oscillators; coupled Lorentz groups; space-time separation; Wigner's little groups; $O(3,2)$ group; Dirac's generators for two coupled oscillators
+
+**PACS:** 03.65.Fd, 03.65.Pm, 03.67.-a, 05.30.-d
+
+# 1. Introduction
+
+Entanglement problems deal with fundamental issues in physics. Among them, the Gaussian entanglement is of current interest not only in quantum optics [1–4], but also in other dynamical systems [3,5–8]. The underlying mathematical language for this form of entanglement is that of harmonic oscillators. In this paper, we present first the mathematical tools that are and may be useful in this branch of physics.
+
+The entangled Gaussian state is based on the formula:
+
+$$ \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(x) \chi_k(y) \quad (1) $$
+
+where $\chi_n(x)$ is the $n^{th}$ excited-state oscillator wave function.
+---PAGE_BREAK---
+
+In Chapter 16 of their book [9], Walls and Milburn discussed in detail the role of this formula in the theory of quantum information. Earlier, this formula played the pivotal role for Yuen to formulate his two-photon coherent states or two-mode squeezed states [10]. The same formula was used by Yurke and Patasek in 1987 [11] and by Ekert and Knight [12] for the two-mode squeezed state where one of the photons is not observed. The effect of entanglement is to be seen from the beam splitter experiments [13,14].
+
+In this paper, we point out first that the series of Equation (1) can also be written as a squeezed Gaussian form:
+
+$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta} (x+y)^2 + e^{2\eta} (x-y)^2 \right] \right\} \quad (2) $$
+
+which becomes:
+
+$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x^2 + y^2) \right\} \qquad (3) $$
+
+when $\eta = 0$.
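The equality of the series (1) and the squeezed Gaussian (2) can be verified numerically by truncating the sum; the sample values of $\eta$, $x$, $y$ below are arbitrary:

```python
import math

# Numerical check that the series of Equation (1) sums to the squeezed
# Gaussian of Equation (2).
eta, x, y = 0.6, 0.8, -0.3
N = 60   # truncation order; terms fall off like tanh(eta)^n

def chi(n, x):
    """Oscillator wave function H_n(x) exp(-x^2/2)/sqrt(sqrt(pi) 2^n n!),
    with H_n built from the Hermite recurrence."""
    Hm, H = 0.0, 1.0
    for k in range(n):
        Hm, H = H, 2 * x * H - 2 * k * Hm
    return H * math.exp(-x * x / 2) / math.sqrt(
        math.sqrt(math.pi) * 2**n * math.factorial(n))

series = sum(math.tanh(eta)**n * chi(n, x) * chi(n, y)
             for n in range(N)) / math.cosh(eta)          # Equation (1)
gauss = math.exp(-0.25 * (math.exp(-2 * eta) * (x + y)**2
                          + math.exp(2 * eta) * (x - y)**2)) / math.sqrt(math.pi)  # Equation (2)
err = abs(series - gauss)
```

This identity is a special case of Mehler's formula for Hermite polynomials with argument $\tanh\eta$.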
+
+We can obtain the squeezed form of Equation (2) by replacing $x$ and $y$ by $x'$ and $y'$, respectively, where:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cosh \eta & -\sinh \eta \\ -\sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (4) $$
+
+If $x$ and $y$ are replaced by $z$ and $t$, Equation (4) becomes the formula for the Lorentz boost along the $z$ direction. Indeed, the Lorentz boost is a squeeze transformation [3,15].
+
+The squeezed Gaussian form of Equation (2) plays the key role in studying boosted bound states in the Lorentz-covariant world [16–20], where $z$ and $t$ are the space and time separations between two constituent particles. Since the mathematics of this physical system is the same as the series given in Equation (1), the physical concept of entanglement can be transferred to the Lorentz-covariant bound state, as illustrated in Figure 1.
+
+**Figure 1.** One mathematics for two branches of physics. Let us look at Equations (1) and (2) applicable to quantum optics and special relativity, respectively. They are the same formula from the Lorentz group with different variables as in the case of the Inductor-Capacitor-Resistor (LCR) circuit and the mechanical oscillator sharing the same second-order differential equation.
+
+We can approach this problem from the system of two harmonic oscillators. In 1963, Paul A. M. Dirac studied the symmetry of this two-oscillator system and discussed all possible transformations
+---PAGE_BREAK---
+
+applicable to this oscillator [21]. He concluded that there are ten possible generators of transformations satisfying a closed set of commutation relations. He then noted that this closed set corresponds to the Lie algebra of the $O(3, 2)$ de Sitter group, which is the Lorentz group applicable to three space-like and two time-like dimensions. This $O(3, 2)$ group has two $O(3, 1)$ Lorentz groups as its subgroups.
+
+We note that the Lorentz group is the language of special relativity, while the harmonic oscillator is one of the major tools for interpreting bound states. Therefore, Dirac's two-oscillator system can serve as a mathematical framework for understanding quantum bound systems in the Lorentz-covariant world.
+
+Within this formalism, the series given in Equation (1) can be produced from the ten-generator Dirac system. In discussing the oscillator system, the standard procedure is to use the normal coordinates defined as:
+
+$$u = \frac{x+y}{\sqrt{2}}, \quad \text{and} \quad v = \frac{x-y}{\sqrt{2}} \qquad (5)$$
+
+In terms of these variables, the transformation given in Equation (4) takes the form:
+
+$$\begin{pmatrix} u' \\ v' \end{pmatrix} = \begin{pmatrix} e^{-\eta} & 0 \\ 0 & e^{\eta} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} \qquad (6)$$
+
+where this is a squeeze transformation along the normal coordinates. While the normal-coordinate transformation is a standard procedure, it is interesting to note that it also serves as a Lorentz boost [18].
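The statement can be checked by conjugating the boost matrix of Equation (4) with the 45° rotation that defines the normal coordinates (5); a sketch in pure Python:

```python
import math

# Check that the boost matrix of Equation (4) becomes the diagonal squeeze
# matrix of Equation (6) in the normal coordinates (5).
eta = 0.8
c, s = math.cosh(eta), math.sinh(eta)
M = [[c, -s], [-s, c]]            # Equation (4)
r = 1 / math.sqrt(2)
R = [[r, r], [r, -r]]             # (x, y) -> (u, v); R is its own inverse

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(matmul(R, M), R)       # should equal diag(exp(-eta), exp(eta))
err = max(abs(D[0][0] - math.exp(-eta)), abs(D[1][1] - math.exp(eta)),
          abs(D[0][1]), abs(D[1][0]))
```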
+
+With these preparations, we shall study in Section 2 the system of two oscillators and coordinate transformations of current interest. It is pointed out in Section 3 that there are ten different generators for transformations, including those discussed in Section 2. It is noted that Dirac derived ten generators of transformations applicable to these oscillators, and they satisfy the closed set of commutation relations, which is the same as the Lie algebra of the $O(3, 2)$ de Sitter group containing two Lorentz groups among its subgroups. In Section 4, Dirac's ten-generator symmetry is studied in the Wigner phase-space picture, and it is shown that Dirac's symmetry contains both canonical and Lorentz transformations.
+
+While the Gaussian entanglement starts from the oscillator wave function in its ground state, we study in Section 5 the entanglements of excited oscillator states. We give a detailed explanation of how the series of Equation (1) can be derived from the squeezed Gaussian function of Equation (2).
+
+In Section 6, we study in detail how the sheared state can be derived from a squeezed state. It appears to be a rotated squeezed state, but this is not the case. In Section 7, we study what happens when one of the two entangled variables is not observed within the framework of Feynman's rest of the universe [22,23].
+
+In Section 8, we note that most of the mathematical formulas in this paper have been used earlier for understanding relativistic extended particles in the Lorentz-covariant harmonic oscillator formalism [20,24–28]. These formulas allow us to transport the concept of entanglement from the current problem of physics to quantum bound states in the Lorentz-covariant world. The time separation between the constituent particles is not observable and is not known in the present form of quantum mechanics. However, this variable effects the real world by entangling itself with the longitudinal variable.
+
+## 2. Two-Dimensional Harmonic Oscillators
+
+The Gaussian form:
+
+$$\left[ \frac{1}{\sqrt{\pi}} \right]^{1/2} \exp \left( -\frac{x^2}{2} \right) \qquad (7)$$
+
+is used for many branches of science. For instance, we can construct this function by throwing dice.
+---PAGE_BREAK---
+
+In physics, this is the wave function for the one-dimensional harmonic oscillator in the ground state. This function is also used for the vacuum state in quantum field theory, as well as the zero-photon state in quantum optics. For excited oscillator states, the wave function takes the form:
+
+$$ \chi_n(x) = \left[ \frac{1}{\sqrt{\pi} 2^{n} n!} \right]^{1/2} H_n(x) \exp \left( -\frac{x^2}{2} \right) \quad (8) $$
+
+where $H_n(x)$ is the Hermite polynomial of the $n$-th degree. The properties of this wave function are well known, and it becomes the Gaussian form of Equation (7) when $n=0$.
+
+We can now consider the two-dimensional space with the orthogonal coordinate variables x and y and the same wave function with the y variable:
+
+$$ \chi_m(y) = \left[ \frac{1}{\sqrt{\pi} 2^{m} m!} \right]^{1/2} H_m(y) \exp \left( -\frac{y^2}{2} \right) \quad (9) $$
+
+and construct the function:
+
+$$ \psi^{n,m}(x,y) = [\chi_n(x)] [\chi_m(y)] \quad (10) $$
+
+This form is clearly separable in the x and y variables. If *n* and *m* are zero, the wave function becomes:
+
+$$ \psi^{0,0}(x,y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x^2 + y^2) \right\} \quad (11) $$
+
+Under the coordinate rotation:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (12) $$
+
+this function remains separable. This rotation is illustrated in Figure 2. This is a transformation very familiar to us.
+
+We can next consider the scale transformation of the form:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (13) $$
+
+This scale transformation is also illustrated in Figure 2. This area-preserving transformation is known as the squeeze. Under this transformation, the Gaussian function is still separable.
+
+If the direction of the squeeze is rotated by 45°, the transformation becomes the diagonal transformation of Equation (6). Indeed, this is a squeeze in the normal coordinate system. This form of squeeze is most commonly used for squeezed states of light, as well as the subject of entanglements. It is important to note that, in terms of the x and y variables, this transformation can be written as Equation (4) [18]. In 1905, Einstein used this form of squeeze transformation for the longitudinal and time-like variables. This is known as the Lorentz boost.
+
+In addition, we can consider the transformation of the form:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad (14) $$
+
+This transformation shears the system as is shown in Figure 2.
+
+After the squeeze or shear transformation, the wave function of Equation (10) becomes non-separable, but it can still be written as a series expansion in terms of the oscillator wave functions. It can take the form:
+
+$$ \psi(x,y) = \sum_{n,m} A_{n,m} \chi_n(x) \chi_m(y) \quad (15) $$
+---PAGE_BREAK---
+
+with:
+
+$$ \sum_{n,m} |A_{n,m}|^2 = 1 $$
+
+if $\psi(x, y)$ is normalized, as was the case for the Gaussian function of Equation (11).
+
+## 2.1. Squeezed Gaussian Function
+
+Under the squeeze along the normal coordinate, the Gaussian form of Equation (11) becomes:
+
+$$ \psi_{\eta}(x, y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta}(x+y)^2 + e^{2\eta}(x-y)^2 \right] \right\} \quad (16) $$
+
+which was given in Equation (2). This function is not separable in the x and y variables. These variables are now entangled. We obtain this form by replacing, in the Gaussian function of Equation (11), the x and y variables by $x'$ and $y'$, respectively, where:
+
+$$ x' = (\cosh \eta)x - (\sinh \eta)y, \quad \text{and} \quad y' = (\cosh \eta)y - (\sinh \eta)x \qquad (17) $$
+
+This form of squeeze is illustrated in Figure 3, and the expansion of this squeezed Gaussian function becomes the series given in Equation (1) [20,26]. This aspect will be discussed in detail in Section 5.
+
+**Figure 2.** Transformations in the two-dimensional space. The object can be rotated, squeezed or sheared. In all three cases, the area remains invariant.
+
+**Figure 3.** Squeeze along the 45° direction, discussed most frequently in the literature.
+---PAGE_BREAK---
+
+In 1976 [10], Yuen discussed two-photon coherent states, often called squeezed states of light. This series expansion served as the starting point for two-mode squeezed states. More recently, in 2003, Giedke et al. [1] used this formula to formulate the concept of the Gaussian entanglement.
+
+There is another way to derive the series. For the harmonic oscillator wave functions, there are step-down and step-up operators [17]. These are defined as:
+
+$$a = \frac{1}{\sqrt{2}} \left( x + \frac{\partial}{\partial x} \right), \quad \text{and} \quad a^{\dagger} = \frac{1}{\sqrt{2}} \left( x - \frac{\partial}{\partial x} \right) \qquad (18)$$
+
+If they are applied to the oscillator wave function, we have:
+
+$$a \chi_n(x) = \sqrt{n} \chi_{n-1}(x), \quad \text{and} \quad a^{\dagger} \chi_n(x) = \sqrt{n+1} \chi_{n+1}(x) \qquad (19)$$
+
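The step-down relation in Equation (19) can be verified numerically, approximating $\partial/\partial x$ by a central finite difference (the values of $n$ and $x$ are arbitrary samples):

```python
import math

# Check the step-down relation (19), a chi_n = sqrt(n) chi_{n-1},
# with d/dx approximated by a central finite difference.
def chi(n, x):
    """Oscillator wave function via the Hermite recurrence."""
    Hm, H = 0.0, 1.0
    for k in range(n):
        Hm, H = H, 2 * x * H - 2 * k * Hm
    return H * math.exp(-x * x / 2) / math.sqrt(
        math.sqrt(math.pi) * 2**n * math.factorial(n))

n, x, dh = 3, 0.9, 1e-5
deriv = (chi(n, x + dh) - chi(n, x - dh)) / (2 * dh)
lhs = (x * chi(n, x) + deriv) / math.sqrt(2)   # a chi_n, Equation (18)
rhs = math.sqrt(n) * chi(n - 1, x)             # right-hand side of (19)
err = abs(lhs - rhs)
```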
+Likewise, we can introduce $b$ and $b^\dagger$ operators applicable to $\chi_n(y)$:
+
+$$b = \frac{1}{\sqrt{2}} \left( y + \frac{\partial}{\partial y} \right), \quad \text{and} \quad b^{\dagger} = \frac{1}{\sqrt{2}} \left( y - \frac{\partial}{\partial y} \right) \qquad (20)$$
+
+Thus
+
+$$\begin{aligned} \left(a^{\dagger}\right)^{n} \chi_{0}(x) &= \sqrt{n!}\, \chi_{n}(x) \\ \left(b^{\dagger}\right)^{n} \chi_{0}(y) &= \sqrt{n!}\, \chi_{n}(y) \end{aligned} \qquad (21)$$
+
+and:
+
+$$a \chi_0(x) = b \chi_0(y) = 0 \qquad (22)$$
+
+In terms of these variables, the transformation leading the Gaussian function of Equation (11) to its squeezed form of Equation (16) can be written as:
+
+$$\exp\left\{\eta\,(a^{\dagger}b^{\dagger} - ab)\right\} \qquad (23)$$
+
+which can also be written as:
+
+$$\exp\left\{-\eta\left(x\frac{\partial}{\partial y} + y\frac{\partial}{\partial x}\right)\right\} \qquad (24)$$
+
+Next, we can consider the exponential form:
+
+$$\exp\left\{(\tanh \eta)a^{\dagger}b^{\dagger}\right\} \qquad (25)$$
+
+which can be expanded as:
+
+$$\sum_{n} \frac{1}{n!} (\tanh \eta)^n (a^{\dagger} b^{\dagger})^n \qquad (26)$$
+
+If this operator is applied to the ground state of Equation (11), the result is:
+
+$$\sum_{n} (\tanh \eta)^n \chi_n(x) \chi_n(y) \qquad (27)$$
+
+This form is not normalized, while the series of Equation (1) is. What is the origin of this difference?
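A quick way to see the discrepancy: the squared norm of the series of Equation (27) is the geometric sum $\sum_n \tanh^{2n}\eta = \cosh^2\eta$, so the series has norm $\cosh\eta$ rather than one; the missing factor $1/\cosh\eta$ is exactly the prefactor of the normalized series of Equation (65) at $n = 0$. A numerical sketch:

```python
import math

eta = 0.8
t2 = math.tanh(eta) ** 2
# Squared norm of the series of Eq. (27): a geometric series in tanh^2(eta)
norm_sq = sum(t2 ** n for n in range(200))
assert abs(norm_sq - math.cosh(eta) ** 2) < 1e-12
```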
+
+There is a similar problem with the one-photon coherent state [29,30]. There, the series comes from the expansion of the exponential form:
+
+$$\exp\{\alpha a^{\dagger}\} \qquad (28)$$
+
+which can be expanded to:
+
+$$ \sum_n \frac{1}{n!} \alpha^n (a^\dagger)^n \qquad (29) $$
+
+However, this operator is not unitary. In order to construct a unitary operator, we consider the exponential form:
+
+$$ \exp (\alpha a^{\dagger} - \alpha^* a) \qquad (30) $$
+
+which is unitary. This expression can then be written as:
+
+$$ e^{-\alpha \alpha^*/2} [\exp(\alpha a^{\dagger})] [\exp(-\alpha^* a)] \qquad (31) $$
+
+according to the Baker–Campbell–Hausdorff (BCH) relation [31,32]. If this is applied to the ground state, the last bracket can be dropped, and the result is:
+
+$$ e^{-\alpha \alpha^*/2} \exp[\alpha a^{\dagger}] \qquad (32) $$
+
+which produces the normalized coherent state, with the normalization constant:
+
+$$ e^{-\alpha \alpha^*/2} $$
+
+Likewise, we can conclude that the series of Equation (27) is different from that of Equation (1) due to the difference between the unitary operator of Equation (23) and the non-unitary operator of Equation (25). It may be possible to derive the normalization factor using the BCH formula, but it seems to be intractable at this time. The best way to resolve this problem is to present the exact calculation of the unitary operator leading to the normalized series of Equation (1). We shall return to this problem in Section 5, where squeezed excited states are studied.
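For the one-photon coherent state, the BCH normalization can be checked directly: with real $\alpha$, the expansion coefficients of the state are $e^{-\alpha^2/2}\,\alpha^n/\sqrt{n!}$, and their squared sum is one. A minimal sketch:

```python
import math

alpha = 1.7   # real alpha for simplicity
# Coherent-state coefficients: e^{-alpha^2/2} alpha^n / sqrt(n!)
coeffs = [math.exp(-alpha ** 2 / 2.0) * alpha ** n / math.sqrt(math.factorial(n))
          for n in range(80)]
norm_sq = sum(c * c for c in coeffs)
assert abs(norm_sq - 1.0) < 1e-12   # the BCH prefactor normalizes the state
```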
+
+## 2.2. Sheared Gaussian Function
+
+In addition, there is a transformation called "shear," where only one of the two coordinates is translated, as shown in Figure 2. This transformation takes the form:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (33) $$
+
+which leads to:
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x + 2\alpha y \\ y \end{pmatrix} \qquad (34) $$
+
+This shear is one of the basic transformations in engineering sciences. In physics, this transformation plays the key role in understanding the internal space-time symmetry of massless particles [33–35]. This matrix plays the pivotal role during the transition from the oscillator mode to the damping mode in classical damped harmonic oscillators [36,37].
+
+Under this transformation, the Gaussian form becomes:
+
+$$ \psi_{shr}(x,y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} \left[ (x - 2\alpha y)^2 + y^2 \right] \right\} \qquad (35) $$
+
+It is possible to expand this into a series of the form of Equation (15) [38].
+
+The transformation applicable to the Gaussian form of Equation (11) is:
+
+$$ \exp(-2\alpha y \frac{\partial}{\partial x}) \qquad (36) $$
+
+and the generator is:
+
+$$ -iy \frac{\partial}{\partial x} \tag{37} $$
+
+It is of interest to see where this generator stands among the ten generators of Dirac.
+
+However, the most pressing problem is whether the sheared Gaussian form can be regarded as a rotated squeezed state. The basic mathematical issue is that the shear matrix of Equation (33) is triangular and cannot be diagonalized; therefore, it cannot be reduced to a squeeze along orthogonal axes. Yet, the Gaussian form of Equation (35) appears to be a rotated squeezed state, though not along the normal coordinates. We shall look at this problem in detail in Section 6.
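The non-diagonalizability of the shear matrix is elementary to verify; a minimal sketch:

```python
# Shear matrix of Eq. (33): unit determinant (area-preserving), but the
# repeated eigenvalue 1 admits only one eigenvector direction, (1, 0)
alpha = 0.5
m = [[1.0, 2.0 * alpha], [0.0, 1.0]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
assert det == 1.0
# (M - I) is nonzero but nilpotent: rank 1, so M cannot be diagonalized
mi = [[m[0][0] - 1.0, m[0][1]], [m[1][0], m[1][1] - 1.0]]
assert mi == [[0.0, 1.0], [0.0, 0.0]]
sq = [[sum(mi[i][k] * mi[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert sq == [[0.0, 0.0], [0.0, 0.0]]   # (M - I)^2 = 0
```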
+
+## 3. Dirac's Entangled Oscillators
+
+Paul A. M. Dirac devoted much of his life-long efforts to the task of making quantum mechanics compatible with special relativity. Harmonic oscillators serve as an instrument for illustrating quantum mechanics, while special relativity is the physics of the Lorentz group. Thus, Dirac attempted to construct a representation of the Lorentz group using harmonic oscillator wave functions [17,21].
+
+In his 1963 paper [21], Dirac started from the two-dimensional oscillator whose wave function takes the Gaussian form given in Equation (11). He then considered unitary transformations applicable to this ground-state wave function. He noted that they can be generated by the following ten Hermitian operators:
+
+$$ L_1 = \frac{1}{2} (a^\dagger b + b^\dagger a), \quad L_2 = \frac{1}{2i} (a^\dagger b - b^\dagger a) $$
+
+$$ L_3 = \frac{1}{2} (a^\dagger a - b^\dagger b), \quad S_3 = \frac{1}{2} (a^\dagger a + b b^\dagger) $$
+
+$$ K_1 = -\frac{1}{4} (a^\dagger a^\dagger + aa - b^\dagger b^\dagger - bb) $$
+
+$$ K_2 = \frac{i}{4} (a^\dagger a^\dagger - aa + b^\dagger b^\dagger - bb) $$
+
+$$ K_3 = \frac{1}{2} (a^\dagger b^\dagger + ab) $$
+
+$$ Q_1 = -\frac{i}{4} (a^\dagger a^\dagger - aa - b^\dagger b^\dagger + bb) $$
+
+$$ Q_2 = -\frac{1}{4} (a^\dagger a^\dagger + aa + b^\dagger b^\dagger + bb) $$
+
+$$ Q_3 = \frac{i}{2} (a^\dagger b^\dagger - ab) \tag{38} $$
+
+He then noted that these operators satisfy the following set of commutation relations.
+
+$$ [L_i, L_j] = i\epsilon_{ijk}L_k, \quad [L_i, K_j] = i\epsilon_{ijk}K_k, \quad [L_i, Q_j] = i\epsilon_{ijk}Q_k $$
+
+$$ [K_i, K_j] = [Q_i, Q_j] = -i\epsilon_{ijk}L_k, \quad [L_i, S_3] = 0 $$
+
+$$ [K_i, Q_j] = -i\delta_{ij}S_3, \quad [K_i, S_3] = -iQ_i, \quad [Q_i, S_3] = iK_i \tag{39} $$
+
+Dirac then determined that these commutation relations constitute the Lie algebra of the $O(3,2)$ de Sitter group with ten generators. This de Sitter group is the Lorentz group applicable to three space coordinates and two time coordinates. Let us use the notation $(x, y, z, t, s)$, with $(x, y, z)$ as the space coordinates and $(t, s)$ as the two time coordinates. Then, the rotation around the $z$ axis is generated by:
+
+$$
+L_3 = \begin{pmatrix} 0 & -i & 0 & 0 & 0 \\ i & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \tag{40}
+$$
+
+The generators $L_1$ and $L_2$ can also be constructed. The $K_3$ and $Q_3$ generators will take the form:
+
+$$
+K_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i & 0 \\ 0 & 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \quad Q_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 \end{pmatrix} \tag{41}
+$$
+
+From these two matrices, the generators $K_1, K_2, Q_1, Q_2$ can be constructed. The generator $S_3$ can be written as:
+
+$$
+S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -i \\ 0 & 0 & 0 & i & 0 \end{pmatrix} \tag{42}
+$$
+
+The last five-by-five matrix generates rotations in the two-dimensional space of $(t, s)$. If we introduce these two time variables, the $O(3,2)$ group leads to two coupled Lorentz groups. Since the particle mass is invariant under Lorentz transformations, one Lorentz group cannot change the particle mass. With two coupled Lorentz groups, however, we can describe a world with variable masses, such as neutrino oscillations.
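The commutation relations of Equation (39) can be spot-checked with the five-by-five matrices themselves. The sketch below (helper functions are ours; coordinates ordered $(x, y, z, t, s)$, with $S_3$ taken as the generator of rotations in the $(t, s)$ plane) verifies $[K_3, Q_3] = -iS_3$:

```python
def mat(entries):
    # 5x5 complex matrix with the given nonzero entries
    m = [[0j] * 5 for _ in range(5)]
    for (i, j), v in entries.items():
        m[i][j] = v
    return m

def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

def comm(a, b):
    ab, ba = mmul(a, b), mmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(5)] for i in range(5)]

# Coordinates ordered (x, y, z, t, s); K3 and Q3 from Eq. (41),
# S3 generating the rotation in the (t, s) plane
K3 = mat({(2, 3): 1j, (3, 2): 1j})
Q3 = mat({(2, 4): 1j, (4, 2): 1j})
S3 = mat({(3, 4): -1j, (4, 3): 1j})

# [K3, Q3] = -i S3, as required by Eq. (39)
lhs = comm(K3, Q3)
assert all(lhs[i][j] == -1j * S3[i][j] for i in range(5) for j in range(5))
```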
+
+In Section 2, we used the operators $Q_3$ and $K_3$ as the generators for the squeezed Gaussian
+function. For the unitary transformation of Equation (23), we used:
+
+$$
+\exp(-i\eta Q_3) \tag{43}
+$$
+
+However, the exponential form of Equation (25) can be written as:
+
+$$
+\exp\{-i(\tanh \eta)(Q_3 + iK_3)\} \qquad (44)
+$$
+
+which is not unitary, as was seen before.
+
+From the space-time point of view, both $K_3$ and $Q_3$ generate Lorentz boosts along the z direction,
+with the time variables $t$ and $s$, respectively. The fact that the squeeze and Lorentz transformations
+share the same mathematical formula is well known. However, the non-unitary operator $iK_3$ does not
+seem to have a space-time interpretation.
+
+As for the sheared state, the generator can be written as:
+
+$$
+Q_3 - L_2 \tag{45}
+$$
+
+leading to the expression given in Equation (37). This is a Hermitian operator leading to the unitary
+transformation of Equation (36).
+
+## 4. Entangled Oscillators in the Phase-Space Picture
+
+Also in his 1963 paper, Dirac states that the Lie algebra of Equation (39) can serve as that of the four-dimensional symplectic group $Sp(4)$. This group allows us to study squeezed or entangled states in terms of the four-dimensional phase space consisting of two position and two momentum variables [15,39,40].
+
+In order to study the $Sp(4)$ contents of the coupled oscillator system, let us introduce the Wigner function defined as [41]:
+
+$$
+\begin{aligned}
+W(x,y;p,q) = & \left(\frac{1}{\pi}\right)^2 \int \exp\{-2i(px' + qy')\} \\
+& \times \psi^*(x+x',y+y')\psi(x-x',y-y')dx'dy'
+\end{aligned}
+\quad (46)
+$$
+
+If the wave function $\psi(x, y)$ is the Gaussian form of Equation (11), the Wigner function becomes:
+
+$$ W(x,y;p,q) = \left(\frac{1}{\pi}\right)^2 \exp\left\{-\left(x^2 + p^2 + y^2 + q^2\right)\right\} \quad (47) $$
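Equation (47) can be reproduced by direct numerical integration of Equation (46). The sketch below uses a midpoint Riemann sum; since $\psi$ is real and the product $\psi(x+x')\psi(x-x')$ is even in $(x', y')$, only the cosine part of the kernel $\exp\{-2i(px'+qy')\}$ contributes:

```python
import math

def psi(x, y):
    # Ground-state Gaussian of Eq. (11)
    return math.exp(-(x * x + y * y) / 2.0) / math.sqrt(math.pi)

def wigner(x, y, p, q, h=0.05, cut=5.0):
    # Midpoint Riemann sum for Eq. (46); only the cosine part survives
    n = int(2.0 * cut / h)
    total = 0.0
    for i in range(n):
        xp = -cut + (i + 0.5) * h
        for j in range(n):
            yp = -cut + (j + 0.5) * h
            total += (math.cos(2.0 * (p * xp + q * yp))
                      * psi(x + xp, y + yp) * psi(x - xp, y - yp))
    return total * h * h / math.pi ** 2

for x, y, p, q in [(0.0, 0.0, 0.0, 0.0), (0.5, -0.3, 0.2, 0.7)]:
    exact = math.exp(-(x * x + p * p + y * y + q * q)) / math.pi ** 2   # Eq. (47)
    assert abs(wigner(x, y, p, q) - exact) < 1e-6
```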
+
+The Wigner function is defined over the four-dimensional phase space of $(x, p, y, q)$ just as in the case of classical mechanics. The unitary transformations generated by the operators of Equation (38) are translated into Wigner transformations [39,40,42]. As in the case of Dirac's oscillators, there are ten corresponding generators applicable to the Wigner function. They are:
+
+$$
+\begin{aligned}
+L_1 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial q} - q \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial p} - p \frac{\partial}{\partial y} \right) \right\} \\
+L_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) + \left( p \frac{\partial}{\partial q} - q \frac{\partial}{\partial p} \right) \right\} \\
+L_3 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} - p \frac{\partial}{\partial x} \right) - \left( y \frac{\partial}{\partial q} - q \frac{\partial}{\partial y} \right) \right\} \\
+S_3 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} - p \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial q} - q \frac{\partial}{\partial y} \right) \right\}
+\end{aligned}
+\quad (48)
+$$
+
+and:
+
+$$
+\begin{aligned}
+K_1 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} + p \frac{\partial}{\partial x} \right) - \left( y \frac{\partial}{\partial q} + q \frac{\partial}{\partial y} \right) \right\} \\
+K_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y} \right) - \left( p \frac{\partial}{\partial p} + q \frac{\partial}{\partial q} \right) \right\} \\
+K_3 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial q} + q \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial p} + p \frac{\partial}{\partial y} \right) \right\} \\
+Q_1 &= +\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial x} + q \frac{\partial}{\partial q} \right) - \left( y \frac{\partial}{\partial y} + p \frac{\partial}{\partial p} \right) \right\} \\
+Q_2 &= -\frac{i}{2} \left\{ \left( x \frac{\partial}{\partial p} + p \frac{\partial}{\partial x} \right) + \left( y \frac{\partial}{\partial q} + q \frac{\partial}{\partial y} \right) \right\} \\
+Q_3 &= -\frac{i}{2} \left\{ \left( y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y} \right) - \left( q \frac{\partial}{\partial p} + p \frac{\partial}{\partial q} \right) \right\}
+\end{aligned}
+\quad (49)
+$$
+
+These generators also satisfy the Lie algebra of Equation (39). Transformations generated by these generators have been discussed in the literature [15,40,42].
+
+As in the case of Section 3, we are interested in the generators $Q_3$ and $K_3$. The transformation generated by $Q_3$ takes the form:
+
+$$ \left[ \exp \left\{ \eta \left( x \frac{\partial}{\partial y} + y \frac{\partial}{\partial x} \right) \right\} \right] \left[ \exp \left\{ -\eta \left( p \frac{\partial}{\partial q} + q \frac{\partial}{\partial p} \right) \right\} \right] \quad (50) $$
+
+This exponential form squeezes the Wigner function of Equation (47) in the $xy$ space, as well as in the corresponding momentum space. However, in the momentum space, the squeeze is in the opposite direction, as illustrated in Figure 4. This is what we expect from canonical transformations in classical mechanics. Indeed, this corresponds to the unitary transformation that played the major role in Section 2.
+
+**Figure 4.** Transformations generated by $Q_3$ and $K_3$. As the parameter $\eta$ becomes larger, both the space and momentum distributions become wider.
+
+Even though it appeared insignificant in Section 2, $K_3$ had a definite physical interpretation in Section 3. The transformation generated by $K_3$ takes the form:
+
+$$ \left[ \exp \left\{ \eta \left( x \frac{\partial}{\partial q} + q \frac{\partial}{\partial x} \right) \right\} \right] \left[ \exp \left\{ \eta \left( y \frac{\partial}{\partial p} + p \frac{\partial}{\partial y} \right) \right\} \right] \quad (51) $$
+
+This performs the squeeze in the $xq$ and $yp$ spaces. In this case, the squeezes have the same sign, and the rate of increase is the same in all directions. We can thus have the same picture of squeeze for both the $xy$ and $pq$ spaces, as illustrated in Figure 4. This parallel transformation corresponds to the Lorentz squeeze [20,25].
+
+As for the sheared state, the combination:
+
+$$ Q_3 - L_2 = -i \left( y \frac{\partial}{\partial x} - p \frac{\partial}{\partial q} \right) \quad (52) $$
+
+generates the corresponding shear in the $pq$ space.
+
+## 5. Entangled Excited States
+
+In Section 2, we discussed the entangled ground state and noted that the entangled state of Equation (1) is a series expansion of the squeezed Gaussian function. In this section, we are interested in what happens when we squeeze an excited oscillator state starting from:
+
+$$ \chi_n(x)\chi_m(y) \tag{53} $$
+
+In order to entangle this state, we should replace $x$ and $y$, respectively, by $x'$ and $y'$ given in Equation (17).
+
+The question is how the oscillator wave function is squeezed after this operation. Let us note first that the wave function of Equation (53) satisfies the equation:
+
+$$ \frac{1}{2} \left\{ \left( x^2 - \frac{\partial^2}{\partial x^2} \right) - \left( y^2 - \frac{\partial^2}{\partial y^2} \right) \right\} \chi_n(x) \chi_m(y) = (n-m) \chi_n(x) \chi_m(y) \tag{54} $$
+
+This equation is invariant under the squeeze transformation of Equation (17), and thus, the eigenvalue $(n-m)$ remains invariant. Unlike the usual two-oscillator system, the $x$ component and the $y$ component have opposite signs. This is the reason why the overall equation is squeeze-invariant [3,25,43].
+
+We then have to write this squeezed oscillator in the series form of Equation (15). The most interesting case is of course for $m=n=0$, which leads to the Gaussian entangled state given in Equation (16). Another interesting case is for $m=0$, while $n$ is allowed to take all integer values. This single-excitation system has applications in the covariant oscillator formalism where no time-like excitations are allowed. The Gaussian entangled state is a special case of this single-excited oscillator system.
+
+The most general case is for nonzero integers for both $n$ and $m$. The calculation for this case is available in the literature [20,44]. Seeing no immediate physical applications of this case, we shall not reproduce this calculation in this section.
+
+For the single-excitation system, we write the starting wave function as:
+
+$$ \chi_n(x)\chi_0(y) = \left[ \frac{1}{\pi 2^n n!} \right]^{1/2} H_n(x) \exp \left\{ -\left( \frac{x^2 + y^2}{2} \right) \right\} \tag{55} $$
+
+There are no excitations along the $y$ coordinate. In order to squeeze this function, our plan is to replace $x$ and $y$ by $x'$ and $y'$, respectively, and write $\chi_n(x')\chi_0(y')$ as a series in the form:
+
+$$ \chi_n(x')\chi_0(y') = \sum_{k',k} A_{k',k}(n)\chi_{k'}(x)\chi_k(y) \tag{56} $$
+
+Since $k' - k = n$ or $k' = n + k$, according to the eigenvalue of the differential equation given in Equation (54), we write this series as:
+
+$$ \chi_n(x')\chi_0(y') = \sum_{k',k} A_k(n)\chi_{(k+n)}(x)\chi_k(y) \tag{57} $$
+
+with:
+
+$$ \sum_k |A_k(n)|^2 = 1 \tag{58} $$
+
+This coefficient is:
+
+$$ A_k(n) = \int \chi_{k+n}(x)\chi_k(y)\chi_n(x')\chi_0(y') dx dy \tag{59} $$
+
+This calculation was given in the literature in a fragmentary way, in connection with a Lorentz-covariant description of extended particles, starting from Ruiz's 1974 paper [45], followed by Kim et al. in 1979 [26] and by Rotbart in 1981 [44]. In view of recent developments in physics, it seems necessary to give one coherent calculation of the coefficient of Equation (59).
+
+Written out explicitly, the coefficient of Equation (59) becomes:
+
+$$
+\begin{equation}
+\begin{aligned}
+A_k(n) = & \left[ \frac{1}{\pi^2\, 2^n n!\, 2^{n+k} (n+k)!\, 2^k k!} \right]^{1/2} \\
+& \times \int H_{n+k}(x) H_k(y) H_n(x') \exp \left\{ -\left( \frac{x^2 + y^2 + x'^2 + y'^2}{2} \right) \right\} dx dy
+\end{aligned}
+\tag{60}
+\end{equation}
+$$
+
+As was noted by Ruiz [45], the key to the evaluation of this integral is to introduce the generating
+function for the Hermite polynomials [46,47]:
+
+$$
+G(r,z) = \exp(-r^2 + 2rz) = \sum_m \frac{r^m}{m!} H_m(z) \quad (61)
+$$
+
+and evaluate the integral:
+
+$$
+I = \int G(r,x)G(s,y)G(r',x') \exp \left\{ - \left( \frac{x^2 + y^2 + (x'^2 + y'^2)}{2} \right) \right\} dx dy \quad (62)
+$$
+
+The integrand becomes one exponential function, and its exponent is quadratic in x and y.
+This quadratic form can be diagonalized, and the integral can be evaluated [20,26]. The result is:
+
+$$
+I = \left[ \frac{\pi}{\cosh \eta} \right] \exp(2rs \tanh \eta) \exp\left(\frac{2rr'}{\cosh \eta}\right) \quad (63)
+$$
+
+We can now expand this expression and collect the coefficients of $r^{n+k}$, $s^{k}$ and $r'^{n}$ for $H_{n+k}(x)$, $H_{k}(y)$ and $H_{n}(x')$, respectively. The result is:
+
+$$
+A_k(n) = \left( \frac{1}{\cosh \eta} \right)^{(n+1)} \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k \quad (64)
+$$
+
+Thus, the series becomes:
+
+$$
+\chi_n(x')\chi_0(y') = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_k \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^k \chi_{k+n}(x)\chi_k(y) \quad (65)
+$$
+
+If $n = 0$, it is the squeezed ground state, and this expression becomes the entangled state of
+Equation (16).
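The unitarity condition of Equation (58) can be checked directly from the coefficient of Equation (64):

```python
import math

def A(n, k, eta):
    # Coefficient of Eq. (64)
    binom = math.factorial(n + k) // (math.factorial(n) * math.factorial(k))
    return (1.0 / math.cosh(eta)) ** (n + 1) * math.sqrt(binom) * math.tanh(eta) ** k

# Unitarity condition of Eq. (58): sum_k |A_k(n)|^2 = 1
for n in (0, 1, 3):
    for eta in (0.5, 1.2):
        total = sum(A(n, k, eta) ** 2 for k in range(400))
        assert abs(total - 1.0) < 1e-9
```

The check works because $\sum_k \binom{n+k}{k} t^k = (1-t)^{-(n+1)}$ with $t = \tanh^2\eta$ exactly cancels the $(\cosh\eta)^{-2(n+1)}$ prefactor.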
+
+## 6. E(2)-Sheared States
+
+Let us next consider the effect of shear on the Gaussian form. From Figures 3 and 5, it is clear that
+the sheared state is a rotated squeezed state.
+
+In order to understand this transformation, let us note that the squeeze and rotation are generated
+by the two-by-two matrices:
+
+$$
+K = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad J = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{66}
+$$
+
+which generate the squeeze and rotation matrices of the form:
+
+$$
+\begin{align}
+\exp(-i\eta K) &= \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \notag \\
+\exp(-i\theta J) &= \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \tag{67}
+\end{align}
+$$
+
+respectively. We can then consider:
+
+$$
+S = K - J = \begin{pmatrix} 0 & 2i \\ 0 & 0 \end{pmatrix} \tag{68}
+$$
+
+This matrix has the property that S² = 0. Thus, the transformation matrix becomes:
+
+$$
+\exp(-i\alpha S) = \begin{pmatrix} 1 & 2\alpha \\ 0 & 1 \end{pmatrix} \qquad (69)
+$$
+
+Since $S^2 = 0$, the Taylor expansion of the exponential truncates after the linear term, yielding the triangular matrix of Equation (33) and the transformation:
+
+$$
+\begin{pmatrix} x \\ y \end{pmatrix} \rightarrow \begin{pmatrix} x + 2\alpha y \\ y \end{pmatrix} \qquad (70)
+$$
+
+The shear generator $S$ of Equation (68) indicates that the infinitesimal transformation is a rotation followed by a squeeze. Since both rotations and squeezes are area-preserving transformations, the shear is also an area-preserving transformation.
+
+Figure 5. Shear transformation of the Gaussian form given in Equation (11).
+
+In view of Figure 5, we should ask whether the triangular matrix of Equation (69) can be obtained from one squeeze matrix followed by one rotation matrix. This is not possible mathematically. It can, however, be written as a squeezed rotation matrix of the form:
+
+$$
+\begin{pmatrix} e^{\lambda/2} & 0 \\ 0 & e^{-\lambda/2} \end{pmatrix} \begin{pmatrix} \cos \omega & \sin \omega \\ -\sin \omega & \cos \omega \end{pmatrix} \begin{pmatrix} e^{-\lambda/2} & 0 \\ 0 & e^{\lambda/2} \end{pmatrix} \quad (71)
+$$
+
+resulting in:
+
+$$
+\left( \begin{array}{cc} \cos \omega & e^{\lambda} \sin \omega \\ -e^{-\lambda} \sin \omega & \cos \omega \end{array} \right) \qquad (72)
+$$
+
+If we let:
+
+$$
+\sin \omega = 2\alpha e^{-\lambda} \tag{73}
+$$
+
+Then:
+
+$$
+\begin{pmatrix}
+\cos \omega & 2\alpha \\
+-2\alpha e^{-2\lambda} & \cos \omega
+\end{pmatrix}
+\qquad (74)
+$$
+
+If $\lambda$ becomes infinite, the angle $\omega$ becomes zero, and this matrix becomes the triangular matrix of Equation (69). This is a singular process where the parameter $\lambda$ goes to infinity.
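This singular limit is easy to watch numerically. The sketch below (with a small hand-written matrix-product helper) builds the product of Equation (71) with $\sin\omega = 2\alpha e^{-\lambda}$ and lets $\lambda$ grow:

```python
import math

def mmul(a, b):
    # 2x2 matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

alpha = 0.7
for lam in (5.0, 15.0):
    omega = math.asin(2.0 * alpha * math.exp(-lam))   # Eq. (73)
    sq  = [[math.exp(lam / 2), 0.0], [0.0, math.exp(-lam / 2)]]
    rot = [[math.cos(omega), math.sin(omega)], [-math.sin(omega), math.cos(omega)]]
    usq = [[math.exp(-lam / 2), 0.0], [0.0, math.exp(lam / 2)]]
    m = mmul(mmul(sq, rot), usq)                      # squeezed rotation, Eq. (71)
    # The upper-right element is 2*alpha; the rest approach the shear matrix
    assert abs(m[0][1] - 2.0 * alpha) < 1e-9
    assert abs(m[0][0] - 1.0) < 1e-4 and abs(m[1][0]) < 1e-4
```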
+
+If this transformation is applied to the Gaussian form of Equation (11), it becomes:
+
+$$
+\psi(x, y) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} \left[ (x - 2\alpha y)^2 + y^2 \right] \right\} \quad (75)
+$$
+
+The question is whether the exponential portion of this expression can be written as:
+
+$$
+\exp \left\{ -\frac{1}{2} \left[ e^{-2\eta} (x \cos \theta + y \sin \theta)^2 + e^{2\eta} (x \sin \theta - y \cos \theta)^2 \right] \right\} \quad (76)
+$$
+
+The answer is yes. This is possible if:
+
+$$
+\begin{aligned}
+e^{2\eta} &= 1 + 2\alpha^2 + 2\alpha \sqrt{\alpha^2 + 1} \\
+e^{-2\eta} &= 1 + 2\alpha^2 - 2\alpha \sqrt{\alpha^2 + 1}
+\end{aligned}
+\tag{77}
+$$
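The two exponentials above are the eigenvalues of the quadratic form $(x - 2\alpha y)^2 + y^2$ of Equation (75), whose determinant is one; a short check:

```python
import math

alpha = 0.9
# Quadratic form of Eq. (75): (x - 2a y)^2 + y^2  ->  [[1, -2a], [-2a, 1 + 4a^2]]
tr = 2.0 + 4.0 * alpha ** 2
det = 1.0 * (1.0 + 4.0 * alpha ** 2) - (2.0 * alpha) ** 2
assert abs(det - 1.0) < 1e-12            # area-preserving
disc = math.sqrt(tr * tr / 4.0 - det)
lam_plus, lam_minus = tr / 2.0 + disc, tr / 2.0 - disc
# The larger eigenvalue matches 1 + 2a^2 + 2a sqrt(a^2 + 1)
e2eta = 1.0 + 2.0 * alpha ** 2 + 2.0 * alpha * math.sqrt(alpha ** 2 + 1.0)
assert abs(lam_plus - e2eta) < 1e-12
assert abs(lam_plus * lam_minus - 1.0) < 1e-12
```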
+
+In Equation (74), we needed a limiting case of $\lambda$ becoming infinite. This is necessarily a singular transformation. On the other hand, the derivation of the Gaussian form of Equation (75) appears to be analytic. How is this possible? In order to achieve the transformation from the Gaussian form of Equations (11) to (75), we need the linear transformation:
+
+$$
+\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \tag{78}
+$$
+
+If the initial form is invariant under rotations as in the case of the Gaussian function of Equation (11),
+we can add another rotation matrix on the right-hand side. We choose that rotation matrix to be:
+
+$$
+\begin{pmatrix} \cos(\theta - \pi/2) & -\sin(\theta - \pi/2) \\ \sin(\theta - \pi/2) & \cos(\theta - \pi/2) \end{pmatrix} \tag{79}
+$$
+
+and write the three matrices as:
+
+$$
+\begin{pmatrix} \cos \theta' & -\sin \theta' \\ \sin \theta' & \cos \theta' \end{pmatrix} \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} \cos \theta' & -\sin \theta' \\ \sin \theta' & \cos \theta' \end{pmatrix} \quad (80)
+$$
+
+with:
+
+$$
+\theta' = \theta - \frac{\pi}{4}
+$$
+
+The multiplication of these three matrices leads to:
+
+$$
+\begin{pmatrix}
+(\cosh \eta) \sin(2\theta) & \sinh \eta + (\cosh \eta) \cos(2\theta) \\
+\sinh \eta - (\cosh \eta) \cos(2\theta) & (\cosh \eta) \sin(2\theta)
+\end{pmatrix}
+\quad (81)
+$$
+
+The lower-left element can become zero when $\sinh\eta = \cosh(\eta)\cos(2\theta)$, and consequently, this matrix becomes:
+
+$$ \begin{pmatrix} 1 & 2 \sinh \eta \\ 0 & 1 \end{pmatrix} \qquad (82) $$
+
+Furthermore, this matrix can be written in the form of a squeezed rotation matrix given in Equation (72), with:
+
+$$ \cos \omega = (\cosh \eta) \sin(2\theta) $$
+
+$$ e^{-2\lambda} = \frac{\cos(2\theta) - \tanh \eta}{\cos(2\theta) + \tanh \eta} \qquad (83) $$
+
+The matrices of the form of Equations (72) and (81) are known as the Wigner and Bargmann decompositions, respectively [33,36,48–50].
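The chain from Equation (80) to Equation (82) can be verified numerically: imposing $\sinh\eta = (\cosh\eta)\cos(2\theta)$ and multiplying the three matrices reproduces the triangular matrix. A sketch (matrix-product helper is ours):

```python
import math

def mmul(a, b):
    # 2x2 matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

eta = 0.6
theta = 0.5 * math.acos(math.tanh(eta))   # condition sinh(eta) = cosh(eta) cos(2 theta)
tp = theta - math.pi / 4.0                # theta' of Eq. (80)
rot = [[math.cos(tp), -math.sin(tp)], [math.sin(tp), math.cos(tp)]]
boost = [[math.cosh(eta), math.sinh(eta)], [math.sinh(eta), math.cosh(eta)]]
m = mmul(mmul(rot, boost), rot)           # the product of Eq. (80)
# Triangular form of Eq. (82): [[1, 2 sinh(eta)], [0, 1]]
assert abs(m[0][0] - 1.0) < 1e-12 and abs(m[1][1] - 1.0) < 1e-12
assert abs(m[1][0]) < 1e-12
assert abs(m[0][1] - 2.0 * math.sinh(eta)) < 1e-12
```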
+
+## 7. Feynman's Rest of the Universe
+
+We need the concept of entanglement in quantum systems of two variables. The issue is how the measurement of one variable affects the other variable. The simplest case is what happens to the first variable while no measurements are taken on the second variable. This problem has a long history since von Neumann introduced the concept of the density matrix in 1932 [51]. While there are many books and review articles on this subject, Feynman stated this problem in his own colorful way. In his book on statistical mechanics [22], Feynman makes the following statement about the density matrix.
+
+*When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts—the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system.*
+
+Indeed, Yurke and Potasek [11] and also Ekert and Knight [12] studied this problem in the two-mode squeezed state using the entanglement formula given in Equation (16). Later in 1999, Han et al. studied this problem with two coupled oscillators where one oscillator is observed while the other is not and, thus, is in the rest of the universe as defined by Feynman [23].
+
+Somewhat earlier in 1990 [27], Kim and Wigner observed that there is a time separation wherever there is a space separation in the Lorentz-covariant world. The Bohr radius is a space separation. If the system is Lorentz-boosted, the time-separation becomes entangled with the space separation. However, in the present form of quantum mechanics, this time-separation variable is not measured and not understood.
+
+This variable was mentioned in the paper of Feynman et al. in 1971 [43], but the authors say they would drop this variable because they do not know what to do with it. While what Feynman et al. did was not quite respectable from the scientific point of view, they made a contribution by pointing out the existence of the problem. In 1990, Kim and Wigner [27] noted that the time-separation variable belongs to Feynman's rest of the universe and studied its consequences in the observable world.
+
+In this section, we first reproduce the work of Kim and Wigner using the *x* and *y* variables and then study the consequences. Let us introduce the notation $\psi_{\eta}^{n}(x,y)$ for the squeezed oscillator wave function given in Equation (65):
+
+$$ \psi_{\eta}^{n}(x,y) = \chi_{n}(x')\chi_{0}(y') \qquad (84) $$
+
+with no excitations along the *y* direction. For $\eta = 0$, this expression becomes $\chi_n(x)\chi_0(y)$.
+
+From this wave function, we can construct the pure-state density matrix as:
+
+$$ \rho_{\eta}^{n}(x, y; r, s) = \psi_{\eta}^{n}(x, y)\psi_{\eta}^{n}(r, s) \qquad (85) $$
+
+which satisfies the condition $\rho^2 = \rho$, which means:
+
+$$ \rho_{\eta}^{n}(x, y; r, s) = \int \rho_{\eta}^{n}(x, y; u, v) \rho_{\eta}^{n}(u, v; r, s) du dv \quad (86) $$
+
+As illustrated in Figure 6, it is not possible to make measurements on the variable $y$. We thus have to take the trace of this density matrix along the $y$ axis, resulting in:
+
+$$ \begin{aligned} \rho_{\eta}^{n}(x,r) &= \int \psi_{\eta}^{n}(x,y)\psi_{\eta}^{n}(r,y)dy \\ &= \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_{k} \frac{(n+k)!}{n!k!} (\tanh \eta)^{2k} \chi_{n+k}(x) \chi_{k+n}(r) \end{aligned} \quad (87) $$
+
+The trace of this density matrix is one, but the trace of $\rho^2$ is:
+
+$$ \begin{aligned} \mathrm{Tr} (\rho^2) &= \int \rho_{\eta}^{n}(x,r)\rho_{\eta}^{n}(r,x)drdx \\ &= \left(\frac{1}{\cosh \eta}\right)^{4(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^2 (\tanh \eta)^{4k} \end{aligned} \quad (88) $$
+
+which is less than one. This is due to the fact that we are not observing the $y$ variable. Our knowledge is less than complete.
+
+**Figure 6.** Feynman's rest of the universe. As the Gaussian function is squeezed, the $x$ and $y$ variables become entangled. If the $y$ variable is not measured, it affects the quantum mechanics of the $x$ variable.
+
+The standard way to measure this incompleteness is to calculate the entropy defined as [51–53]:
+
+$$ S = -\operatorname{Tr} (\rho(x, r) \ln[\rho(x, r)]) \quad (89) $$
+
+which leads to:
+
+$$ S = 2(n+1)[(\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta)] \\ - \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_k \frac{(n+k)!}{n!k!} \ln\left[\frac{(n+k)!}{n!k!}\right] (\tanh \eta)^{2k} \quad (90) $$
+
+Let us go back to the wave function given in Equation (84). As illustrated in Figure 6, its localization property is dictated by its Gaussian factor, which corresponds to the ground-state wave function. For this reason, we expect that much of the behavior of the density matrix and the entropy for the $n$th excited state will be the same as that for the ground state with $n = 0$. For the ground state, the density matrix is:
+
+$$ \rho_{\eta}(x, r) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left\{ -\frac{1}{4} \left[ \frac{(x+r)^2}{\cosh(2\eta)} + (x-r)^2 \cosh(2\eta) \right] \right\} \quad (91) $$
+
+and the entropy is:
+
+$$ S_{\eta} = 2 \left[ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right] \quad (92) $$
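Equation (92) can be cross-checked against the eigenvalues of the reduced density matrix: for $n = 0$, Equation (87) gives the eigenvalues $\tanh^{2k}\eta/\cosh^{2}\eta$, and the direct sum $-\sum_k \lambda_k \ln \lambda_k$ reproduces the closed form:

```python
import math

eta = 0.9
c, s, t = math.cosh(eta), math.sinh(eta), math.tanh(eta)
# For n = 0, the reduced density matrix of Eq. (87) is diagonal in the
# oscillator basis with eigenvalues tanh^{2k}(eta) / cosh^2(eta)
lams = [t ** (2 * k) / (c * c) for k in range(400)]
assert abs(sum(lams) - 1.0) < 1e-12          # unit trace
S_direct = -sum(l * math.log(l) for l in lams)
S_closed = 2.0 * (c * c * math.log(c) - s * s * math.log(s))   # Eq. (92)
assert abs(S_direct - S_closed) < 1e-9
```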
+
+The density distribution $\rho_\eta(x,x)$ becomes:
+
+$$ \rho_{\eta}(x,x) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left( -\frac{x^2}{\cosh(2\eta)} \right) \qquad (93) $$
+
+The width of the distribution is $\sqrt{\cosh(2\eta)}$, and the distribution becomes more widespread as $\eta$ becomes larger. Likewise, the momentum distribution becomes widespread, as can be seen in Figure 4. This simultaneous increase in the momentum and position distribution widths is due to our inability to measure the $y$ variable hidden in Feynman's rest of the universe [22].
+
+In their paper of 1990 [27], Kim and Wigner used the *x* and *y* variables as the longitudinal and time-like variables, respectively, in the Lorentz-covariant world. In the quantum world, it is a widely accepted view that there are no time-like excitations. Thus, it is fully justified to restrict the *y* component to its ground state, as we did in Section 5.
+
+## 8. Space-Time Entanglement
+
+The series given in Equation (1) plays the central role in the concept of the Gaussian or continuous-variable entanglement, where the measurement on one variable affects the quantum mechanics of the other variable. If one of the variables is not observed, it belongs to Feynman's rest of the universe.
+
+The series of the form of Equation (1) was developed earlier for studying harmonic oscillators in moving frames [20,24–28]. Here, *z* and *t* are the space-like and time-like separations between the two constituent particles bound together by a harmonic oscillator potential. There are excitations along the longitudinal direction; however, no excitations are allowed along the time-like direction. In 1927 [16], Dirac described this as the "c-number" time-energy uncertainty relation, although he was then considering the system without special relativity. In 1945 [17], Dirac attempted to construct space-time wave functions using harmonic oscillators. In 1949 [18], Dirac introduced his light-cone coordinate system for Lorentz boosts, demonstrating that the boost is a squeeze transformation. It is now possible to combine Dirac's three observations to construct the Lorentz-covariant picture of quantum bound states, as illustrated in Figure 7.
+
+If the system is at rest, we use the wave function:
+
+$$ \psi_0^n(z,t) = \chi_n(z)\chi_0(t) \qquad (94) $$
+
+which allows excitations along the *z* axis, but no excitations along the *t* axis, according to Dirac's c-number time-energy uncertainty relation.
+
+If the system is boosted, the *z* and *t* variables are replaced by *z'* and *t'* where:
+
+$$ z' = (\cosh \eta)z - (\sinh \eta)t, \quad \text{and} \quad t' = -(\sinh \eta)z + (\cosh \eta)t \qquad (95) $$
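As a quick numerical sketch (added for illustration, not part of the original paper), the transformation of Equation (95) has unit determinant, so it preserves the measure $dz\,dt$, and it simply rescales the light-cone combinations $z \pm t$ by $e^{\mp\eta}$:

```python
import numpy as np

eta = 0.7
# Boost matrix of Eq. (95) acting on the column vector (z, t)
L = np.array([[np.cosh(eta), -np.sinh(eta)],
              [-np.sinh(eta), np.cosh(eta)]])

# Unit determinant: the area element dz dt is preserved under the boost
assert abs(np.linalg.det(L) - 1.0) < 1e-12

# Light-cone combinations z +/- t are rescaled by exp(-eta) and exp(+eta)
z, t = 0.4, -1.1
zp, tp = L @ np.array([z, t])
assert abs((zp + tp) - np.exp(-eta) * (z + t)) < 1e-12
assert abs((zp - tp) - np.exp(eta) * (z - t)) < 1e-12
```

The reciprocal scaling of the two light-cone axes is exactly the elliptic squeeze used throughout the paper.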
+
+This is a squeeze transformation, as in the case of Equation (17). In terms of these space-time variables, the wave function of Equation (84) can be written as:
+
+$$ \psi_{\eta}^{n}(z, t) = \chi_{n}(z')\chi_{0}(t') \qquad (96) $$
+
+and the series of Equation (65) then becomes:
+
+$$ \psi_{\eta}^{n}(z, t) = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^{k} \chi_{k+n}(z) \chi_{k}(t) \quad (97) $$
+
+**Figure 7.** Dirac's form of Lorentz-covariant quantum mechanics. In addition to Heisenberg's uncertainty relation, which allows excitations along the spatial direction, there is the "c-number" time-energy uncertainty without excitations. This form of quantum mechanics can be combined with Dirac's light-cone picture of Lorentz boost, resulting in the Lorentz-covariant picture of quantum mechanics. The elliptic squeeze shown in this figure can be called the space-time entanglement.
+
+Since the Lorentz-covariant oscillator formalism shares the same set of formulas with the Gaussian entangled states, it is possible to explain some aspects of space-time physics using the concepts and terminologies developed in quantum optics, as illustrated in Figure 1.
+
+The time-separation variable is a case in point. The Bohr radius is a well-defined spatial separation between the proton and electron in the hydrogen atom. However, if the atom is boosted, this radius picks up a time-like separation. This time-separation variable does not exist in the Schrödinger picture of quantum mechanics. However, this variable plays the pivotal role in the covariant harmonic oscillator formalism. It is gratifying to note that this “hidden or forgotten” variable plays a role in the real world while being entangled with the observable longitudinal variable. With this point in mind, let us study some of the consequences of this space-time entanglement.
+
+First of all, does the wave function of Equation (96) carry a probability interpretation in the Lorentz-covariant world? Since $dzdt = dz'dt'$, the normalization:
+
+$$ \int |\psi_{\eta}^{n}(z, t)|^{2} dtdz = 1 \qquad (98) $$
+
+is Lorentz-invariant: it holds in every Lorentz frame. If the system is at rest, the z and t variables are completely dis-entangled, and the spatial component of the wave function satisfies the Schrödinger equation without the time-separation variable.
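The Lorentz-invariant normalization can be checked numerically for the boosted ground state of Equation (103); the sketch below (in Python with NumPy, added for illustration) integrates $|\psi_\eta^0(z,t)|^2$ on a grid and recovers unity:

```python
import numpy as np

def prob_density(z, t, eta):
    # Squared squeezed ground-state wave function of Eq. (103)
    psi = (1.0 / np.sqrt(np.pi)) * np.exp(
        -0.25 * (np.exp(-2*eta)*(z + t)**2 + np.exp(2*eta)*(z - t)**2))
    return psi**2

eta = 1.2
grid = np.linspace(-12.0, 12.0, 1201)
Z, T = np.meshgrid(grid, grid)
h = grid[1] - grid[0]
norm = prob_density(Z, T, eta).sum() * h * h
assert abs(norm - 1.0) < 1e-6   # normalization survives the boost
```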
+
+However, in the Lorentz-covariant world, we have to consider the inner product:
+
+$$ (\psi_{\eta}^{n}(z,t), \psi_{\eta'}^{m}(z,t)) = \int [\psi_{\eta}^{n}(z,t)]^{*} \psi_{\eta'}^{m}(z,t) dzdt \quad (99) $$
+
+The evaluation of this integral was carried out by Michael Ruiz in 1974 [45], and the result was:
+
+$$ \left( \frac{1}{|\cosh(\eta - \eta')|} \right)^{n+1} \delta_{nm} \qquad (100) $$
+
+In order to see the physical implications of this result, let us assume that one of the oscillators is at rest with $\eta' = 0$ and the other is moving with the velocity $\beta = \tanh(\eta)$. Then, the result is:
+
+$$ (\psi_{\eta}^{n}(z,t), \psi_{0}^{m}(z,t)) = (\sqrt{1-\beta^2})^{n+1} \delta_{nm} \qquad (101) $$
+
+Indeed, the wave functions are orthonormal if they are in the same Lorentz frame. If one of them is boosted, the inner product shows the effect of Lorentz contraction. We are familiar with the contraction $\sqrt{1-\beta^2}$ for the rigid rod. The ground state of the oscillator wave function is contracted like a rigid rod.
+
+The probability density $|\psi_\eta^0(z)|^2$ for the oscillator in the ground state has one hump. For the $n^{th}$ excited state, there are $(n+1)$ humps. If each hump is contracted by the factor $\sqrt{1-\beta^2}$, the net contraction is $(\sqrt{1-\beta^2})^{n+1}$ for the $n^{th}$ excited state. This result is illustrated in Figure 8.
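Equation (101) lends itself to a direct numerical check; the following sketch (a Python rendering using SciPy's Hermite polynomials, not the authors' code) reproduces both the contraction factor $(\sqrt{1-\beta^2})^{n+1}$ and the orthogonality for $n \neq m$:

```python
import numpy as np
from scipy.special import eval_hermite
from math import factorial

def chi(k, x):
    # Harmonic-oscillator eigenfunction chi_k
    norm = 1.0 / np.sqrt(2.0**k * factorial(k) * np.sqrt(np.pi))
    return norm * eval_hermite(k, x) * np.exp(-x**2 / 2.0)

def psi(n, eta, z, t):
    # Boosted oscillator wave function of Eqs. (95) and (96)
    zp = np.cosh(eta) * z - np.sinh(eta) * t
    tp = -np.sinh(eta) * z + np.cosh(eta) * t
    return chi(n, zp) * chi(0, tp)

grid = np.linspace(-10.0, 10.0, 801)
Z, T = np.meshgrid(grid, grid)
dA = (grid[1] - grid[0])**2

eta, n = 0.6, 2
beta = np.tanh(eta)
overlap = (psi(n, eta, Z, T) * psi(n, 0.0, Z, T)).sum() * dA
assert abs(overlap - np.sqrt(1 - beta**2)**(n + 1)) < 1e-5

# Different quantum numbers stay orthogonal even across frames
cross = (psi(n, eta, Z, T) * psi(1, 0.0, Z, T)).sum() * dA
assert abs(cross) < 1e-5
```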
+
+**Figure 8.** Orthogonality relations for two covariant oscillator wave functions. The orthogonality relation is preserved for different frames. However, they show the Lorentz contraction effect for two different frames.
+
+With this understanding, let us go back to the entanglement problem. The ground state wave function takes the Gaussian form given in Equation (11):
+
+$$ \psi_0(z,t) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (z^2 + t^2) \right\} \qquad (102) $$
+
+where the x and y variables are replaced by z and t, respectively. If Lorentz-boosted, this Gaussian function becomes squeezed to [20,24,25]:
+
+$$ \psi_{\eta}^{0}(z,t) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta}(z+t)^2 + e^{2\eta}(z-t)^2 \right] \right\} \qquad (103) $$
+
+leading to the series:
+
+$$ \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(z) \chi_k(t) \qquad (104) $$
+
+According to this formula, the z and t variables are entangled in the same way as the x and y variables are entangled.
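The entanglement series itself can be verified numerically. The sketch below (assuming NumPy/SciPy; not from the original paper) compares the squeezed Gaussian of Equation (103) with the truncated series of Equation (104) at a few sample points:

```python
import numpy as np
from scipy.special import eval_hermite
from math import factorial

def chi(k, x):
    # Harmonic-oscillator eigenfunction chi_k
    norm = 1.0 / np.sqrt(2.0**k * factorial(k) * np.sqrt(np.pi))
    return norm * eval_hermite(k, x) * np.exp(-x**2 / 2.0)

def gaussian_form(z, t, eta):
    # Squeezed Gaussian of Eq. (103)
    return (1.0 / np.sqrt(np.pi)) * np.exp(
        -0.25 * (np.exp(-2*eta)*(z + t)**2 + np.exp(2*eta)*(z - t)**2))

def series_form(z, t, eta, kmax=60):
    # Entangled series of Eq. (104), truncated at kmax terms
    s = sum(np.tanh(eta)**k * chi(k, z) * chi(k, t) for k in range(kmax))
    return s / np.cosh(eta)

eta = 0.8
for z, t in [(0.3, -0.5), (1.0, 0.7), (0.0, 0.0)]:
    assert abs(gaussian_form(z, t, eta) - series_form(z, t, eta)) < 1e-8
```

Since $\tanh\eta < 1$, the truncated series converges geometrically, which is why a modest `kmax` suffices.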
+
+Here, the z and t variables are space and time separations between two particles bound together by the oscillator force. The concept of the space separation is well defined, as in the case of the Bohr radius. On the other hand, the time separation is still hidden or forgotten in the present form of quantum mechanics. In the Lorentz-covariant world, this variable affects what we observe in the real world by entangling itself with the longitudinal spatial separation.
+
+In Chapter 16 of their book [9], Walls and Milburn wrote down the series of Equation (1) and discussed what would happen when the $\eta$ parameter becomes infinitely large. We note that the series given in Equation (104) has the same form as the expression given by Walls and Milburn, as well as those in other papers dealing with the Gaussian entanglement. As in the case of Walls and Milburn, we are interested in what happens when $\eta$ becomes very large.
+
+As we emphasized throughout the present paper, it is possible to study the entanglement series using the squeezed Gaussian function given in Equation (103), and hence to study this problem using the ellipse. Indeed, we can carry out the mathematics of entanglement using the ellipse shown in Figure 9. This figure is the same as Figure 6, but it illustrates the entanglement of the space and time separations, instead of the x and y variables. If the particle is at rest, with $\eta = 0$, the Gaussian form corresponds to the circle in Figure 9. When the particle gains speed, this Gaussian function becomes squeezed into an ellipse, which becomes concentrated along the light cone $t = z$ as $\eta$ becomes very large.
+
+The point is that we are able to observe this effect in the real world. These days, the velocity of protons from high-energy accelerators is very close to that of light. According to Gell-Mann [54], the proton is a bound state of three quarks. Since quarks are confined in the proton, they have never been observed, and the binding force must be like that of the harmonic oscillator. Furthermore, the observed mass spectra of the hadrons exhibit the degeneracy of the three-dimensional harmonic oscillator [43]. We use the word “hadron” for the bound state of the quarks. The simplest hadron is thus the bound state of two quarks.
+
+In 1969 [55], Feynman observed that the same proton, when moving with a velocity close to that of light, can be regarded as a collection of partons, with the following peculiar properties.
+
+1. The parton picture is valid only for protons moving with velocity close to that of light.
+
+2. The interaction time between the quarks becomes dilated, and partons are like free particles.
+
+3. The momentum distribution becomes wide-spread as the proton moves faster. Its width is proportional to the proton momentum.
+
+4. The number of partons is not conserved, while the proton starts with a finite number of quarks.
+
+**Figure 9.** Feynman's rest of the universe. This figure is the same as Figure 6. Here, the space variable z and the time variable t are entangled.
+
+Indeed, Figure 10 tells why the quark and parton models are two limiting cases of one Lorentz-covariant entity. In the oscillator regime, the three-particle system can be reduced to two independent two-particle systems [43]. Also in the oscillator regime, the momentum-energy wave function takes the same form as the space-time wave function, thus with the same squeeze or entanglement property as illustrated in this figure. This leads to the wide-spread momentum distribution [20,56,57].
+
+**Figure 10.** The transition from the quark to the parton model through space-time entanglement. When $\eta = 0$, the system is called the quark model where the space separation and the time separation are dis-entangled. Their entanglement becomes maximum when $\eta = \infty$. The quark model is transformed continuously to the parton model as the $\eta$ parameter increases from zero to $\infty$. The mathematics of this transformation is given in terms of circles and ellipses.
+
+Also in Figure 10, the time-separation between the quarks becomes large as $\eta$ becomes large, leading to a weaker spring constant. This is why the partons behave like free particles [20,56,57].
+
+As $\eta$ becomes very large, all of the particles are confined into a narrow strip around the light cone. The number of particles is not constant for massless particles as in the case of black-body radiation [20,56,57].
+
+Indeed, the oscillator model explains the basic features of the hadronic spectra [43]. Does the oscillator model tell the basic feature of the parton distribution observed in high-energy laboratories? The answer is yes. In his 1982 paper [58], Paul Hussar compared the parton distribution observed in a high-energy laboratory with the Lorentz-boosted Gaussian distribution. They are close enough to justify that the quark and parton models are two limiting cases of one Lorentz-covariant entity.
+
+To summarize, the proton makes a phase transition from the bound state into a plasma state as it moves faster, as illustrated in Figure 10. The unobserved time-separation variable becomes more prominent as $\eta$ becomes larger. We can now go back to the form of this entropy given in Equation (92) and calculate it numerically. It is plotted against $(\tanh \eta)^2 = \beta^2$ in Figure 11. The entropy is zero when the hadron is at rest, and it becomes infinite as the hadronic speed reaches the speed of light.
+
+**Figure 11.** Entropy and temperature as functions of $[\tanh(\eta)]^2 = \beta^2$. They are both zero when the hadron is at rest, but they become infinitely large when the hadronic speed becomes close to that of light. The curvature of the temperature plot changes suddenly around $[\tanh(\eta)]^2 \approx 0.8$, indicating a phase transition.
+
+Let us go back to the expression given in Equation (87). For this ground state, the density matrix becomes:
+
+$$ \rho_{\eta}(z, z') = \left( \frac{1}{\cosh \eta} \right)^2 \sum_k (\tanh \eta)^{2k} \chi_k(z) \chi_k(z') \quad (105) $$
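The expansion coefficients in Equation (105) form a normalized geometric distribution over the oscillator levels, with mean excitation $\sinh^2\eta$; a short numerical sketch (added for illustration, assuming NumPy) makes this explicit:

```python
import numpy as np

eta = 1.0
# Level probabilities read off from Eq. (105): p_k = (tanh eta)^(2k) / cosh^2(eta)
p = [np.tanh(eta)**(2*k) / np.cosh(eta)**2 for k in range(200)]

# The geometric series sums to one, so rho_eta is a proper density matrix
assert abs(sum(p) - 1.0) < 1e-9

# The mean excitation grows with the boost as sinh^2(eta)
mean_k = sum(k * pk for k, pk in enumerate(p))
assert abs(mean_k - np.sinh(eta)**2) < 1e-9
```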
+
+We can now compare this expression with the density matrix for the thermally-excited oscillator state [22]:
+
+$$ \rho_{T}(z, z') = \left(1 - e^{-1/T}\right) \sum_{k} e^{-k/T} \chi_{k}(z) \chi_{k}(z') \quad (106) $$
+
+By comparing these two expressions, we arrive at:
+
+$$ [\tanh(\eta)]^2 = e^{-1/T} \quad (107) $$
+
+and thus:
+
+$$ T = \frac{-1}{\ln[(\tanh \eta)^2]} \quad (108) $$
+
+This temperature is also plotted against $(\tanh \eta)^2$ in Figure 11. The temperature is zero if the hadron is at rest, but it becomes infinite when the hadronic speed becomes close to that of light. The slope of the curvature changes suddenly around $(\tanh \eta)^2 \approx 0.8$, indicating a phase transition from the bound state to the plasma state.
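The boost-temperature relation of Equations (107) and (108) is elementary to evaluate; the following sketch (not part of the paper) confirms the limiting behavior plotted in Figure 11:

```python
import numpy as np

def temperature(eta):
    # Eq. (108): effective temperature of the Lorentz-boosted oscillator
    return -1.0 / np.log(np.tanh(eta)**2)

# Consistency with Eq. (107): (tanh eta)^2 = exp(-1/T)
eta = 1.5
assert abs(np.exp(-1.0 / temperature(eta)) - np.tanh(eta)**2) < 1e-12

# T -> 0 for a hadron at rest, and T grows without bound as beta -> 1
assert temperature(0.001) < 0.1
assert temperature(3.0) > 50.0
```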
+
+In this section, we have shown how useful the concept of entanglement is in understanding the role of the time-separation in high energy hadronic physics including Gell-Mann's quark model and Feynman's parton model as two limiting cases of one Lorentz-covariant entity.
+
+**9. Concluding Remarks**
+
+The main point of this paper is the mathematical identity:
+
+$$ \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} \left[ e^{-2\eta} (x+y)^2 + e^{2\eta} (x-y)^2 \right] \right\} = \frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k \chi_k(x) \chi_k(y) \quad (109) $$
+
+which says that the series of Equation (1) is an expansion of the Gaussian form given in Equation (2).
+
+The first derivation of this series was published in 1979 [26] as a formula from the Lorentz group. Since this identity is not well known, we explained in Section 5 how this formula can be derived from the generating function of the Hermite polynomials.
+
+While the series serves useful purposes in understanding the physics of entanglement, the Gaussian form can be used to transfer this idea to high-energy hadronic physics. The hadron, such as the proton, is a quantum bound state. As was pointed out in Section 8, the squeezed Gaussian function of Equation (109) plays the pivotal role for hadrons moving with relativistic speeds.
+
+The Bohr radius is a very important quantity in physics. It is the spatial separation between the proton and electron in the hydrogen atom. Likewise, there is a space-like separation between constituent particles in a bound state at rest. When the bound state moves, it picks up a time-like component. However, in the present form of quantum mechanics, this time-like separation is not recognized. Indeed, this variable is hidden in Feynman's rest of the universe. When the system is Lorentz-boosted, this variable entangles itself with the measurable longitudinal variable. Our failure to measure this entangled variable appears in the form of entropy and temperature in the real world.
+
+While harmonic oscillators are applicable to many aspects of quantum mechanics, Paul A. M. Dirac observed in 1963 [21] that the system of two oscillators contains also the symmetries of the Lorentz group. We discussed in this paper one concrete case of Dirac's symmetry. There are different languages for harmonic oscillators, such as the Schrödinger wave function, step-up and step-down operators and the Wigner phase-space distribution function. In this paper, we used extensively a pictorial language with circles and ellipses.
+
+Let us go back to Equation (109); this mathematical identity was published in 1979 as textbook material in the American Journal of Physics [26], and the same formula was later included in a textbook on the Lorentz group [20]. It is gratifying to note that the same formula serves as a useful tool for the current literature in quantum information theory [59,60].
+
+**Author Contributions:** Each of the authors participated in developing the material presented in this paper and in writing the manuscript.
+
+**Conflicts of Interest:** The authors declare that no conflict of interest exists.
+
+## References
+
+1. Giedke, G.; Wolf, M.M.; Krueger, O.; Werner, R.F.; Cirac, J.I. Entanglement of formation for symmetric Gaussian states. Phys. Rev. Lett. **2003**, *91*, 107901.
+
+2. Braunstein, S.L.; van Loock, P. Quantum information with continuous variables. Rev. Mod. Phys. **2005**, *77*, 513–676.
+
+3. Kim, Y.S.; Noz, M.E. Coupled oscillators, entangled oscillators, and Lorentz-covariant Oscillators. J. Opt. B Quantum Semiclass. **2003**, *7*, s459–s467.
+
+4. Ge, W.; Tasgin, M.E.; Suhail Zubairy, S. Conservation relation of nonclassicality and entanglement for Gaussian states in a beam splitter. Phys. Rev. A **2015**, *92*, 052328.
+
+5. Gingrich, R.M.; Adami, C. Quantum Entanglement of Moving Bodies. Phys. Rev. Lett. **2002**, *89*, 270402.
+
+6. Dodd, P.J.; Halliwell, J.J. Disentanglement and decoherence by open system dynamics. Phys. Rev. A **2004**, *69*, 052105.
+
+7. Ferraro, A.; Olivares, S.; Paris, M.G.A. Gaussian States in Continuous Variable Quantum Information. EDIZIONI DI FILOSOFIA E SCIENZE (2005). Available online: http://arxiv.org/abs/quant-ph/0503237 (accessed on 24 June 2016).
+
+8. Adesso, G.; Illuminati, F. Entanglement in continuous-variable systems: Recent advances and current perspectives. J. Phys. A **2007**, *40*, 7821–7880.
+
+9. Walls, D.F.; Milburn, G.J. Quantum Optics, 2nd ed.; Springer: Berlin, Germany, 2008.
+
+10. Yuen, H.P. Two-photon coherent states of the radiation field. Phys. Rev. A **1976**, *13*, 2226–2243.
+
+11. Yurke, B.; Potasek, M. Obtainment of Thermal Noise from a Pure State. Phys. Rev. A **1987**, *36*, 3464–3466.
+
+12. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. Am. J. Phys. **1989**, *57*, 692–697.
+
+13. Paris, M.G.A. Entanglement and visibility at the output of a Mach-Zehnder interferometer. Phys. Rev. A **1999**, *59*, 1615.
+
+14. Kim, M.S.; Son, W.; Buzek, V.; Knight, P.L. Entanglement by a beam splitter: Nonclassicality as a prerequisite for entanglement. Phys. Rev. A **2002**, *65*, 032323.
+
+15. Han, D.; Kim, Y.S.; Noz, M.E. Linear Canonical Transformations of Coherent and Squeezed States in the Wigner phase Space III. Two-mode States. Phys. Rev. A **1990**, *41*, 6233-6244.
+
+16. Dirac, P.A.M. The Quantum Theory of the Emission and Absorption of Radiation. Proc. Roy. Soc. (Lond.) **1927**, A114, 243-265.
+
+17. Dirac, P.A.M. Unitary Representations of the Lorentz Group. Proc. Roy. Soc. (Lond.) **1945**, A183, 284-295.
+
+18. Dirac, P.A.M. Forms of relativistic dynamics. Rev. Mod. Phys. **1949**, *21*, 392-399.
+
+19. Yukawa, H. Structure and Mass Spectrum of Elementary Particles. I. General Considerations. Phys. Rev. **1953**, *91*, 415-416.
+
+20. Kim, Y.S.; Noz, M.E. Theory and Applications of the Poincaré Group; Reidel: Dordrecht, The Netherlands, 1986.
+
+21. Dirac, P.A.M. A Remarkable Representation of the 3 + 2 de Sitter Group. J. Math. Phys. **1963**, *4*, 901-909.
+
+22. Feynman, R.P. Statistical Mechanics; Benjamin Cummings: Reading, MA, USA, 1972.
+
+23. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative Example of Feynman's Rest of the Universe. Am. J. Phys. **1999**, *67*, 61-66.
+
+24. Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the quark model. Phys. Rev. D **1973**, *8*, 3521–3527.
+
+25. Kim, Y.S.; Noz, M.E.; Oh, S.H. Representations of the Poincaré group for relativistic extended hadrons. J. Math. Phys. **1979**, *20*, 1341-1344.
+
+26. Kim, Y.S.; Noz, M.E.; Oh, S.H. A simple method for illustrating the difference between the homogeneous and inhomogeneous Lorentz groups. Am. J. Phys. **1979**, *47*, 892-897.
+
+27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz Transformations. Phys. Lett. A **1990**, *147*, 343-347.
+
+28. Kim, Y.S.; Noz, M.E. Lorentz Harmonics, Squeeze Harmonics and Their Physical Applications. Symmetry **2011**, *3*, 16-36.
+
+29. Klauder, J.R.; Sudarshan, E.C.G. Fundamentals of Quantum Optics; Benjamin: New York, NY, USA, 1968.
+
+30. Saleh, B.E.A.; Teich, M.C. Fundamentals of Photonics, 2nd ed.; John Wiley and Sons: Hoboken, NJ, USA, 2007.
+
+31. Miller, W. Symmetry Groups and Their Applications; Academic Press: New York, NY, USA, 1972.
+
+32. Hall, B.C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, 2nd ed.; Springer International: Cham, Switzerland, 2015.
+
+33. Wigner, E. On Unitary Representations of the Inhomogeneous Lorentz Group. Ann. Math. **1939**, *40*, 149-204.
+
+34. Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. Phys. Rev. **1964**, *135*, B1049-B1056.
+
+35. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic-particles. J. Math. Phys. **1990**, *31*, 55-60.
+
+36. Başkal, S.; Kim, Y.S.; Noz, M.E. Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere. Symmetry **2014**, *6*, 473-515.
+
+37. Başkal, S.; Kim, Y.S.; Noz, M.E. Physics of the Lorentz Group; IOP Science; Morgan & Claypool Publishers: San Rafael, CA, USA, 2015.
+
+38. Kim, Y.S.; Yeh, Y. $E(2)$-symmetric two-mode sheared states. J. Math. Phys. **1992**, *33*, 1237-1246
+
+39. Kim, Y.S.; Noz, M.E. Phase Space Picture of Quantum Mechanics; World Scientific Publishing Company: Singapore, Singapore, 1991.
+
+40. Kim, Y.S.; Noz, M.E. Dirac Matrices and Feynman's Rest of the Universe. Symmetry **2012**, *4*, 626-643.
+
+41. Wigner, E. On the Quantum Corrections for Thermodynamic Equilibrium. Phys. Rev. **1932**, *40*, 749-759.
+
+42. Han, D.; Kim, Y.S.; Noz, M.E. $O(3,3)$-like Symmetries of Coupled Harmonic Oscillators. J. Math. Phys. **1995**, *36*, 3940-3954.
+
+43. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current Matrix Elements from a Relativistic Quark Model.
+Phys. Rev. D **1971**, *3*, 2706-2732.
+
+44. Rotbart, F.C. Complete orthogonality relations for the covariant harmonic oscillator.
+Phys. Rev. D **1981**, *23*, 3078-3090.
+
+45. Ruiz, M.J. Orthogonality relations for covariant harmonic oscillator wave functions.
+Phys. Rev. D **1974**, *10*, 4306-4307.
+
+46. Magnus, W.; Oberhettinger, F.; Soni, R.P. Formulas and Theorems for the Special Functions of Mathematical Physics;
+Springer-Verlag: Heidelberg, Germany, 1966.
+
+47. Doman, B.G.S. *The Classical Orthogonal Polynomials*; World Scientific: Singapore, Singapore, 2016.
+
+48. Bargmann, V. Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640.
+
+49. Han, D.; Kim, Y.S. Special relativity and interferometers. *Phys. Rev. A* **1988**, *37*, 4494–4496.
+
+50. Han, D.; Kim, Y.S.; Noz, M.E. Wigner rotations and Iwasawa decompositions in polarization optics. *Phys. Rev. E* **1999**, *60*, 1036–1041.
+
+51. Von Neumann, J. *Mathematische Grundlagen der Quantenmechanik*; Springer: Berlin, Germany, 1932. (von Neumann, J. *Mathematical Foundations of Quantum Mechanics*; Princeton University: Princeton, NJ, USA, 1955.)
+
+52. Fano, U. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. *Rev. Mod. Phys.* **1957**, *29*, 74–93.
+
+53. Wigner, E.P.; Yanase, M.M. Information Contents of Distributions. Proc. Natl. Acad. Sci. USA **1963**, *49*, 910–918.
+
+54. Gell-Mann, M. A Schematic Model of Baryons and Mesons. Phys. Lett. **1964**, *8*, 214-215.
+
+55. Feynman, R.P. Very High-Energy Collisions of Hadrons. Phys. Rev. Lett. **1969**, *23*, 1415-1417.
+
+56. Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the parton picture. Phys. Rev. D **1977**, *15*, 335-338.
+
+57. Kim, Y.S. Observable gauge transformations in the parton picture. Phys. Rev. Lett. **1989**, *63*, 348-351.
+
+58. Hussar, P.E. Valons and harmonic oscillators. Phys. Rev. D **1981**, *23*, 2781-2783.
+
+59. Leonhardt, U. *Essential Quantum Optics*; Cambridge University Press: London, UK, 2010.
+
+60. Furusawa, A.; Loock, P.V. *Quantum Teleportation and Entanglement: A Hybrid Approach to Optical Quantum Information Processing*; Wiley-VCH: Weinheim, Germany, 2010.
+
+© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+
+Article
+
+Massless Majorana-Like Charged Carriers in
+Two-Dimensional Semimetals
+
+Halina Grushevskaya † and George Krylov †,*
+
+Physics Department, Belarusian State University, 4 Nezaleznasti Ave., 220030 Minsk, Belarus; grushevskaja@bsu.by
+
+* Correspondence: krylov@bsu.by; Tel.: +375-296-62-44-97
+
+† These authors contributed equally to this work.
+
+Academic Editor: Young Suh Kim
+
+Received: 29 February 2016; Accepted: 1 July 2016; Published: 8 July 2016
+
+**Abstract:** The band structure of strongly correlated two-dimensional (2D) semimetal systems is found to be significantly affected by the spin-orbit coupling (SOC), resulting in SOC-induced Fermi surfaces. Dirac, Weyl and Majorana representations are used for the description of different semimetals, though the band structures of all these systems are very similar. We develop a theoretical approach to the band theory of two-dimensional semimetals within the Dirac–Hartree–Fock self-consistent field approximation. It reveals a partial breaking of the Dirac-cone symmetry by quasi-relativistic exchange interactions for 2D crystals with hexagonal symmetry. The Fermi velocity becomes an operator within this approach, and elementary excitations have been calculated in the tight-binding approximation, taking into account the exchange interaction of each $\pi(p_z)$-electron with its three nearest $\pi(p_z)$-electrons. These excitations are described by the massless Majorana equation instead of the Dirac one. The squared equation for this field is of the Klein–Gordon–Fock type. Such a feature of the band structure of 2D semimetals as the appearance of four pairs of nodes is shown to be described naturally within the developed formalism. Numerical simulation of the band structure has been performed for the proposed 2D model of graphene and for a monolayer of Pb atoms.
+
+**Keywords:** 2D semimetals; Dirac–Hartree–Fock self-consistent field approximation; Majorana-like field; Weyl-like nodes; Fermi velocity operator
+
+PACS: 73.22.-f, 81.05.Bx
+
+# 1. Introduction
+
+Strongly correlated materials, such as two-dimensional (2D) complex oxides of transition metals, graphene, oxides with a perovskite structure, and IV–VI semiconductors (three-dimensional (3D) analogues of graphene), can demonstrate unusual electronic and magnetic properties, such as half-metallicity. The linear dispersion law for such materials is stipulated by the simultaneous existence of positively and negatively charged carriers [1]. Conical singularities are generic in quantum crystals having honeycomb lattice symmetry [2]. The bipolarity of the material suggests that the state of an excitonic insulator is possible for it. Since an electron-hole pair is at the same time its own antiparticle, the Majorana representation has been used [3,4] to describe the interaction of pseudospins with the valley currents in monolayer graphene.
+
+The electron is a complex fermion, so if one decomposes it into its real and imaginary parts, which would be Majorana fermions, they are rapidly re-mixed by electromagnetic interactions. However, such a decomposition could be reasonable for a superconductor where, because of effective electrostatic screening, the Bogoliubov quasi-fermions behave as if they are neutral excitations [5].
+
+A helical magnetic ordering (commensurate magnetism) occurs due to strong spin-orbit coupling (SOC) between Fe and Pb atoms in the system where a chain of ferromagnetic Fe atoms is placed on the surface of a conventional superconductor composed of Pb atoms [6]. In this case, the imposition of SOC results in the appearance of Majorana-like excitations at the ends of the Fe-atom chain.
+
+The p-wave pairing discovered in this Fe chain allows one to assume that there exists a new mechanism of superconductivity in high-temperature superconductors, mediated by the exchange of Majorana particles rather than of phonons as in the Bardeen–Cooper–Schrieffer theory. Such a novel superconducting state emerges, for example, in the compound CeCoIn₅ in strong magnetic fields, in addition to the ordinary superconducting state [7]. It has been shown [8–10] that the coupling of electrons into Cooper pairs in pnictides (LiFeAs with FeAs slabs) is mediated by the mixing of d-electron orbitals surrounding the atomic cores of the transition metal. The new state is mediated by an anti-ferromagnetic order, and its fluctuations appear due to strong spin-orbit coupling [8,9,11]. This has been experimentally confirmed for LiFeAs in [10]. For the antiferromagnetic itinerant-electron system LaFe₁₂B₆, ultrasharp magnetization steps have been observed [12]. The latter can only be explained by the existence of an anti-ferromagnetic order whose fluctuations appear due to strong spin-orbit coupling.
+
+Thus, there is strong evidence that SOC may control the spin ordering in the absence of external magnetic fields. However, the mechanism that leads to such commensurate magnetism has not yet been established.
+
+The phenomenon of the contraction of the electron density distribution in one direction is called nematicity. It is observed in the pnictides BaFe₂(As₁₋ₓPₓ)₂ placed in a magnetic field, and the phenomenon persists in the superconducting state [13]. The nematicity is coupled with considerable stripe spin fluctuations in FeSe [14]. Very strong spin-orbit coupling leads to a contraction by about 10% and a rotation by 30° of the hexagonal Brillouin zone of the delafossite oxide PtCoO₂, which belongs to yet another class of topological insulators in which the metal atoms form layers with triangular lattices [15].
+
+Other topological insulators, namely the so-called Weyl materials with a linear dispersion law, are close in properties to layered perovskite-like materials (see [16] and references therein). Currently, the first candidate for such a material has been found, namely TaAs, whose Brillouin zone hosts Weyl-like nodes and Fermi arcs [17–19].
+
+Moreover, the experimental evidence of the similarities between the Fermi surfaces of insulator SmB₆ and metallic rare earth hexaborides (PrB₆ and LaB₆) has been presented in [20]. To explain the accompanying ordering phenomena, each associated with different symmetry breaking, it is necessary to develop a unified theory as it has been pointed out in [9].
+
+Electrically charged carriers in the strongly correlated semimetallic systems with half-filled bands are massless fermions [15,21,22].
+
+In a low-dimensional system, the exciton binding energy turns out to be high [23] and, correspondingly, the transition to the state of excitonic insulator is possible. Therefore, the Majorana rather than the Weyl representation is preferable for the description of 2D semimetals. An attempt to represent the transition to the excitonic-insulator state as the appearance of a Majorana zero-mode solution in graphene with trigonal warping [24] contradicts experimental data on the absence of a gap in the band structure of graphene [25], on the diminishing of the charged-carrier mobility [26], and on the minimal conductivity [27]. However, at the present time, there exist experimental signatures of graphene Majorana states in graphene–superconductor junctions without the need for spin-orbit coupling [28]. Moreover, modern quantum field theory of pseudo-Dirac quasiparticles in the random-phase approximation predicts a strong screening that destroys the excitonic pairing instability if the momentum-dependent fermion dynamic mass *m*(*p*) is small in comparison with the chemical potential *μ*: *m*(*p*) ≤ *μ* [29].
+
+In this paper, we would like to show how the above-described features of the layered materials can be formalized in 2D models, where the charged carriers are quasiparticles of the Majorana rather than of the Weyl type. We also show that, under certain conditions, these quasiparticles reveal themselves as Weyl-like states or massless Dirac pseudofermions.
+
+However, the use of the well-known Majorana representations to describe a semimetal as a massless-quasiparticle system encounters the puzzle of the absence of harmonic oscillatory solutions in the ultrarelativistic limit for Majorana particles of zero mass [30]. Such equations are known for massive Majorana particles only [31–33].
+
+In this paper, we reveal different aspects of the appearance of Majorana-like quasiparticle states in the band structure of semimetals. The 2D Hartree–Fock approximation for graphene predicts the experimentally observed increase of the Fermi velocity $v_F(\vec{p})$ at small momenta $p$ [25], but it leads to a logarithmically divergent $v_F(\vec{p})$ as $p \to 0$ [34]. To take this effect of long-range Coulomb interactions into account correctly, our calculation is based on the quasi-relativistic Dirac–Hartree–Fock self-consistent field approach developed earlier [35,36].
+
+The goal is to construct a 2D-semimetal model in which the equation of motion is a pseudo-relativistic massless Majorana-like one. We show that the squared equation for this field is of the Klein-Gordon-Fock type, and therefore the charged carriers in such 2D-semimetal models can be regarded as massless Majorana-like quasiparticles.
+
+We study quasiparticle excitations of the electronic subsystem of a hexagonal monoatomic layer (monolayer) of light or heavy atoms in the tight-binding approximation. The simulations are performed for atoms of C and Pb on the assumption that sp²-hybridization of the s- and p-electron orbitals is also possible for the atoms of Pb.
+
+We demonstrate that the band-structure features of the hexagonal monolayers are similar to each other due to the similarity of the external electronic shells of their atoms. Despite this similarity of the band structure, the charged carriers in such 2D-semimetal models can possess different features: e.g., the charged carriers in the monolayer of C atoms can be thought of as massless Dirac pseudofermions, whereas in the monolayer of Pb atoms they reveal themselves as Weyl-like states.
+
+The paper is organized as follows. In Section 2, we propose a semimetal model with coupling between pseudospin and valley currents and prove the pseudo-helicity conservation law. In Section 3, we briefly introduce the approach of Refs. [3,35–37] and use it in a simple tight-binding approximation to obtain the system of equations for a Majorana secondary quantized field. In Section 4, we support the statement that the squared equation for the constructed field is of the Klein-Gordon-Fock type for different model exchange operators. We also discuss features of our model manifesting themselves in the band structure of real semimetals. In Section 5, we discuss the proposed approximations for the exchange interactions in 2D semimetals and summarize our findings.
+
+## 2. Monolayer Semimetal Model with Partial Unfolding of Dirac Bands
+
+Semimetals are known to be bipolar materials with half-filled valence and conduction bands. A distinctive feature of the graphene band structure is the existence of Dirac cones at the Dirac points (valleys) K, K' of the Brillouin zone. In the present paper, these Dirac points are designated as $K_A, K_B$. We assume that the pseudo-spins of the hexagonally packed carbon atoms in the monoatomic layer (monolayer) graphene are anti-ordered, as shown schematically in Figure 1a. The pseudo-helicity (chirality) conservation law forbids massless charged carriers from occupying lattice sites with the opposite sign of pseudo-spin; this makes possible the existence of valley currents due to jumps through the forbidden sites, as shown schematically in Figure 1a. The coupling between the pseudo-spin and the valley current in the Majorana representation of bispinors can be determined in the following way.
+
+**Figure 1.** (a) The graphene lattice, composed of two sublattices {A} with spin “up” and {B} with spin “down”. The right and left valley currents $J_V^R$ and $J_V^L$ are shown as circular curves with arrows. Double arrows from site A to site $B_L$ and from A to $B_R$ indicate clockwise and anti-clockwise directions. The axis of mirror reflection from $A_R$ to $B_L$ is marked by a dash-dotted line; (b) transformations of a q-circumference into ellipses under the action of the exchange operators ($\Sigma_{rel}^x$)$_{AB}$ and ($\Sigma_{rel}^x$)$_{BA}$ (in color).
+
+According to Figure 1a, a particle can travel from a lattice site A to, e.g., a lattice site $A_R$ through the right or left sites $B_R$ or $B_L$, respectively. Since the particle is symmetric, its descriptions in the right and left reference frames have to be equivalent. Therefore, a bispinor wave function $\Psi'$ of graphene has to be chosen in the Majorana representation, and its upper and lower spinor components $\psi'_{\sigma}$, $\psi'_{-\sigma}$ transform under the left and right representations of the Lorentz group:
+
+$$ \Psi' = \begin{pmatrix} \psi'_{\sigma} \\ \psi'_{-\sigma} \end{pmatrix} = \begin{pmatrix} e^{\frac{i}{2}\vec{\sigma}\cdot\vec{n}}\psi_{\sigma} \\ e^{\frac{i}{2}(-\vec{\sigma})\cdot\vec{n}}\psi_{-\sigma} \end{pmatrix}. \quad (1) $$
+
+The wave function $\tilde{\chi}_{\sigma}^{\dagger}(\vec{r}_A) |0, +\sigma\rangle$ of a particle (in our case, of an electron-hole pair) located on site A behaves as the component $\psi_{\sigma}$, while the wave function $\tilde{\chi}_{-\sigma}^{\dagger}(\vec{r}_B) |0, -\sigma\rangle$ of a particle located on site B behaves as the component $\psi_{-\sigma}$ of the bispinor (1).
+
+Relativistic particles with non-zero spin possess the helicity $h$, which is the projection of the particle's spin onto the direction of motion [32]:
+
+$$ h = \vec{p} \cdot \vec{S} = \frac{1}{2} p_i \begin{pmatrix} \sigma_i & 0 \\ 0 & \sigma_i \end{pmatrix}, \quad (2) $$
+
+where $\vec{p}$ is the particle momentum, $\vec{S}$ is the spin operator of the particle, $\vec{\sigma}$ is the vector of the Pauli matrices $\sigma_i$, and $i = x, y$. In relativistic quantum field theory, the helicity of a massless particle is preserved under the transition from a reference frame moving with velocity $v_1$ to another one moving with velocity $v_2$ [32,38].
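
As a quick consistency check (not part of the original derivation), one can verify symbolically that the 2D contraction in Equation (2) has eigenvalues $\pm|\vec{p}|/2$, i.e., a spin projection of $\pm 1/2$ onto the direction of motion after normalization by $|\vec{p}|$. A minimal sketch using sympy:

```python
import sympy as sp

# Pauli matrices sigma_x, sigma_y (the 2D case, i = x, y)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])

px, py = sp.symbols('p_x p_y', real=True, positive=True)

# Helicity-type contraction p·S with S = sigma/2, as in Equation (2)
h = (px * sx + py * sy) / 2

# The eigenvalues are ±|p|/2, so the spin projection onto the
# direction of motion is ±1/2 once divided by |p|
eigs = list(h.eigenvals().keys())
print([sp.simplify(e**2 - (px**2 + py**2) / 4) for e in eigs])  # → [0, 0]
```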
+
+Let us introduce the two-dimensional pseudospins $\vec{S}_{AB}$ and $\vec{S}_{BA}$ of the quasi-particles in the valleys $K_A$ and $K_B$ through the transformed vector $\vec{\sigma}$ of the Pauli matrices $\sigma_i$, $i = x, y$, as $\vec{S}_{AB} = \hbar\vec{\sigma}_{AB}/2$ and $\vec{S}_{BA} = \hbar\vec{\sigma}_{BA}/2$. The explicit form of this transformation is given in Section 3.
+
+A valley current $J_V^R$ or $J_V^L$, flowing along the right or left closed contour $\{A \to B_R \to A_R \to B \to A_L \to B_L \to A\}$ or $\{A \to B_L \to A_L \to B \to A_R \to B_R \to A\}$, respectively, in Figure 1, is created by an electron (hole) with pseudo-angular momentum $\vec{l}_{AB_R}$ and momentum $\vec{p}_{AB_R}$ or by an electron (hole) with $\vec{l}_{AB_L}$ and $\vec{p}_{AB_L}$. The pseudo-helicity of the bispinors (1), describing the particles to the right or to the left of the lattice site A, is defined by expressions analogous to (2):
+
+$$h_{B_R A} = \vec{p}_{AB_R} \cdot \vec{S}_{B_R A}, \quad (3)$$
+
+$$h_{B_L A} = \vec{p}_{AB_L} \cdot \vec{S}_{B_L A}. \quad (4)$$
+
+Let us use the parity operator $P$, which mirrors the bispinor (1) with respect to the line passing through the points A and B. The pseudo-helicity of the mirrored bispinor is defined by the expression:
+
+$$P h_{B_R A_R} P = h_{A_L B_L} = \vec{p}_{B_L A_L} \cdot \vec{S}_{A_L B_L}. \quad (5)$$
+
+The pseudo-helicity $h_{AB}$ does not change its value, because the valley momentum and the pseudo-spin change signs simultaneously: $\vec{p}_{A_L B_L} = -\vec{p}_{B_R A_R}$ and $\vec{S}_{A_L B_L} = -\vec{S}_{B_R A_R}$.
+
+The pseudo-helicity $h_{AB}$ is expressed through the projection $\tilde{\mathcal{M}}_{AB} = \vec{\sigma}_{BA} \cdot (\vec{l}_{AB} + \hbar\vec{\sigma}_{BA}/2)$ of the total angular momentum onto the direction of the spin $\vec{\sigma}_{BA}$ as [39,40]:
+
+$$\vec{\sigma}_{BA} \cdot \vec{p}_{AB} = \sigma^r_{BA} \left( p_{r,BA} + i \frac{\tilde{\mathcal{M}}_{AB} - \hbar/2}{r} \right) = \sigma^r_{BA} \left( p_{r,BA} + i \frac{\vec{\sigma}_{BA} \cdot \vec{l}_{AB}}{r} \right), \quad (6)$$
+
+where $\sigma^r_{BA}$ and $p_{r,BA}$ are the radial components of the spin and the momentum, respectively. According to Equation (6), the pseudo-spin-orbit scalar $\vec{\sigma}_{BA} \cdot \vec{l}_{AB}$ describes the coupling (interaction) of the spin with the valley currents flowing along a closed loop clockwise or anti-clockwise, as shown in Figure 1a. Hence, there exists a preferred direction along which the spin projection of the bispinor (1) is not changed by a transition from one moving reference frame into another. In doing so, the spin of the particle precesses. The transformation of the electron and the hole into each other in an exciton is a pseudo-precession.
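
The radial decomposition in Equation (6) follows from the standard Pauli-matrix identity $(\vec{\sigma}\cdot\vec{a})(\vec{\sigma}\cdot\vec{b}) = \vec{a}\cdot\vec{b} + i\vec{\sigma}\cdot(\vec{a}\times\vec{b})$; a sketch of the derivation with plain Pauli matrices (i.e., before the exchange-induced transformation of $\vec{\sigma}_{BA}$, and up to operator-ordering subtleties):

```latex
% Insert 1 = (\vec\sigma\cdot\hat r)^2 and apply the Pauli identity:
\vec{\sigma}\cdot\vec{p}
 = (\vec{\sigma}\cdot\hat{r})\,(\vec{\sigma}\cdot\hat{r})(\vec{\sigma}\cdot\vec{p})
 = \sigma^{r}\Bigl(\hat{r}\cdot\vec{p}
   + i\,\vec{\sigma}\cdot(\hat{r}\times\vec{p})\Bigr)
 = \sigma^{r}\Bigl(p_{r} + \frac{i}{r}\,\vec{\sigma}\cdot\vec{l}\Bigr),
\qquad \vec{l} = \vec{r}\times\vec{p},\quad \sigma^{r} = \vec{\sigma}\cdot\hat{r}.
```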
+
+As a result, the coupling of the pseudo-spin and the valley currents gives rise to spin precession of the exciton charged carriers in graphene. In our model, the orientation of the non-equilibrium spin of the states of monolayer graphene in electromagnetic fields may be retained for a long time due to the prohibition of a change of the exciton pseudo-helicity. Pseudo-precession is possible if the spins of the p_z-electrons are anti-ordered (pseudo-antiferromagnetic ordering). Therefore, the pseudo-spin precession of the exciton can be implemented through the exchange interaction. In what follows, we determine the operators $\vec{\sigma}_{BA(AB)}$, $\vec{p}_{AB(BA)}$ and describe the effects of the coupling between the pseudo-spin and the valley currents.
+
+## 3. Effects of Coupling between Pseudo-Spin and Valley Current
+
+In the quasi-relativistic approximation ($c^{-1}$ expansion), the eigenproblem for the equation of motion of the secondary quantized field $\hat{\chi}_{-\sigma_A}^\dagger$ in the model shown in Figure 1a has the form [35–37]:
+
+$$\left\{ \vec{\sigma} \cdot \vec{p} \, \hat{v}_F^{qu} - \frac{1}{c} (i\Sigma_{rel}^x)_{AB} (i\Sigma_{rel}^x)_{BA} \right\} \hat{\chi}_{-\sigma_A}^\dagger (\vec{r}) |0, -\sigma\rangle = E_{qu}(p) \hat{\chi}_{-\sigma_A}^\dagger (\vec{r}) |0, -\sigma\rangle, \quad (7)$$
+
+where the Fermi velocity operator $\hat{v}_F^{qu}$ is defined as
+
+$$ \hat{v}_F^{qu} = (\Sigma_{rel}^x)_{BA} + c\hbar\,\vec{\sigma} \cdot (\vec{K}_A + \vec{K}_B). $$
+
+($\Sigma_{rel}^{x}$)$_{BA}$ and ($\Sigma_{rel}^{x}$)$_{AB}$ are determined through an ordinary exchange interaction contribution, for example [39,40]:
+
+$$
+\begin{align*}
+(\Sigma_{rel}^{x})_{AB} \hat{\chi}_{\sigma_B}^{\dagger}(\vec{r}) |0, \sigma\rangle &= \sum_{i=1}^{N_v N} \int d\vec{r}_i \hat{\chi}_{\sigma_i B}^{\dagger}(\vec{r}) |0, \sigma\rangle \\
+&\quad \times \langle 0, -\sigma_i | \hat{\chi}_{-\sigma_i A}^{\dagger}(\vec{r}_i) V(\vec{r}_i - \vec{r}) \hat{\chi}_{-\sigma_B}(\vec{r}_i) |0, -\sigma_i'\rangle.
+\end{align*}
+$$
+
+$V(\vec{r}_i - \vec{r})$ is the Coulomb interaction between two valence electrons with radius-vectors $\vec{r}_i$ and $\vec{r}$; $N$ is the total number of atoms in the system; $N_v$ is the number of valence electrons in an atom; $c$ is the speed of light.
+
+After applying the non-unitary transformation to the wave function in the form
+
+$$
+\tilde{\chi}_{-\sigma_A}^{\dagger} |0, -\sigma\rangle = (\Sigma_{rel}^{x})_{BA} \hat{\chi}_{-\sigma_A}^{\dagger} |0, -\sigma\rangle,
+$$
+
+we obtain (neglecting the mixing of the states of the Dirac points) an equation that is similar to the one
+in 2D quantum field theory (QFT) [41–43], but which describes the motion of a particle with pseudo-spin
+$\vec{S}_{AB} = \hbar\vec{\sigma}_{AB}/2$:
+
+$$
+\{\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA} - c^{-1}\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB}\} \tilde{\chi}_{-\sigma_A}^{\dagger}(\vec{r}) |0, -\sigma\rangle = \tilde{E}_{qu}(p) \tilde{\chi}_{-\sigma_A}^{\dagger}(\vec{r}) |0, -\sigma\rangle , \quad (8)
+$$
+
+with the transformed 2D vector $\vec{\sigma}_{2D}^{AB}$ of the Pauli matrices, determined as $\vec{\sigma}_{2D}^{AB} = (\Sigma_{rel}^{x})_{BA}\, \vec{\sigma}\, (\Sigma_{rel}^{x})_{BA}^{-1}$. The following notations are introduced: $\vec{p}_{BA}\tilde{\chi}_{-\sigma_A}^{\dagger} = (\Sigma_{rel}^{x})_{BA}\, \vec{p}\, (\Sigma_{rel}^{x})_{BA}^{-1}\tilde{\chi}_{-\sigma_A}^{\dagger} = [(\Sigma_{rel}^{x})_{BA}\vec{p}\,] \tilde{\chi}_{-\sigma_A}^{\dagger}$; $\tilde{E}_{qu} = E_{qu}/\hat{v}_{F}^{BA}$ with $\hat{v}_{F}^{BA} = (\Sigma_{rel}^{x})_{BA}$; and $\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB} = (\Sigma_{rel}^{x})_{BA}(i\Sigma_{rel}^{x})_{AB}(i\Sigma_{rel}^{x})_{BA}(\Sigma_{rel}^{x})_{BA}^{-1} = (i\Sigma_{rel}^{x})_{BA}(i\Sigma_{rel}^{x})_{AB}$. The product of the two capital sigmas, as one sees from the last chain of equalities, behaves like a scalar mass term.
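+
The passage from Equation (7) to Equation (8) is thus a similarity transformation. Writing $\Sigma \equiv (\Sigma_{rel}^{x})_{BA}$ and $\hat{H}$ for the operator in braces in Equation (7), the step reads schematically:

```latex
\hat{H}\,\hat{\chi}^{\dagger}_{-\sigma_A}|0,-\sigma\rangle
  = E_{qu}\,\hat{\chi}^{\dagger}_{-\sigma_A}|0,-\sigma\rangle
\;\Longrightarrow\;
\bigl(\Sigma\hat{H}\Sigma^{-1}\bigr)\,
  \Sigma\hat{\chi}^{\dagger}_{-\sigma_A}|0,-\sigma\rangle
  = E_{qu}\,\Sigma\hat{\chi}^{\dagger}_{-\sigma_A}|0,-\sigma\rangle ,
```

so every operator $O$ is conjugated, $O \to \Sigma O \Sigma^{-1}$, while $\hat{\chi}^{\dagger}_{-\sigma_A}$ is replaced by $\tilde{\chi}^{\dagger}_{-\sigma_A} = \Sigma\hat{\chi}^{\dagger}_{-\sigma_A}$ (the additional division by $\hat{v}_F^{BA}$ yields $\tilde{E}_{qu}$). Since $\Sigma$ is non-unitary, the transformed operators need not be Hermitian.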
+
+Further simulations are performed in the nearest-neighbor tight-binding approximation [44,45].
+This approximation correctly reproduces the graphene band structure in the energy range ±1 eV [46],
+which is sufficient for our purposes. We use the expressions for the exchange between
+$\pi(p_z)$-electrons only. One can find the explicit form of these expressions in [4].
+
+The action of the matrices ($\Sigma_{rel}^x$)$_{BA}$ and ($\Sigma_{rel}^x$)$_{AB}$ in momentum space is shown in Figure 1b.
+Since ($\Sigma_{rel}^x$)$_{BA}$ $\neq$ ($\Sigma_{rel}^x$)$_{AB}$, the vector $\vec{p}_{BA}$ is rotated with respect to $\vec{p}_{AB}$ and stretched. According to
+Figure 1b, the ellipses in the momentum spaces of the electrons and holes are rotated by 90° with respect to each
+other. Taking into account the hexagonal symmetry of the system, this explains the experimentally
+observed rotation by 30° of the hexagonal Brillouin zone of PtCoO$_2$ [15].
+
+Hence, the sequence of exchange interactions $(\Sigma_{rel}^x)_{AB}$ $(\Sigma_{rel}^x)_{BA}$ $(\Sigma_{rel}^x)_{AB}$ for the valley currents first rotates
+the electron Brillouin zone and Dirac band into the hole Brillouin zone and
+Dirac band, and then vice versa. Thus, the exchange $(\Sigma_{rel}^x)_{AB(BA)} \equiv \Sigma_{AB(BA)}$ interchanges the sublattice
+wave functions:
+
+$$
+|\psi_{AB}\rangle = \Sigma_{AB} |\psi_{BA}^*\rangle.
+$$
+
+Owing to this, and neglecting the very small mass term $c^{-1}\tilde{\Sigma}_{BA}\tilde{\Sigma}_{AB}$, the equation containing the operator of the Fermi velocity can be rewritten as follows:
+
+$$
+\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB} |\psi_{AB}\rangle = E_{qu} |\psi_{BA}^*\rangle . \qquad (9)
+$$
+
+Taking into account that $E \to i\frac{\partial}{\partial t}$ and $\vec{p} = -i\vec{\nabla}$, we transform the system of equations for the Majorana bispinor $(\psi_{AB}, \psi_{BA}^{*})^{T}$:
+
+$$ \vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB} |\psi_{AB}\rangle = i \frac{\partial}{\partial t} |\psi_{BA}^*\rangle, \quad (10) $$
+
+$$ \vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}^* |\psi_{BA}^*\rangle = -i \frac{\partial}{\partial t} |\psi_{AB}\rangle, \quad (11) $$
+
+into the wave equation of the form:
+
+$$ (\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}^*)(\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}) |\psi_{AB}\rangle = \frac{\partial^2}{\partial t^2} |\psi_{AB}\rangle. \quad (12) $$
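
Equation (12) is obtained from Equations (10) and (11) by eliminating $|\psi_{BA}^{*}\rangle$: acting on Equation (10) with $\vec{\sigma}_{2D}^{AB}\cdot\vec{p}_{BA}^{\,*}$ and then using Equation (11) under the time derivative gives

```latex
(\vec{\sigma}_{2D}^{AB}\cdot\vec{p}_{BA}^{\,*})
(\vec{\sigma}_{2D}^{BA}\cdot\vec{p}_{AB})\,|\psi_{AB}\rangle
 = i\frac{\partial}{\partial t}\,
   (\vec{\sigma}_{2D}^{AB}\cdot\vec{p}_{BA}^{\,*})\,|\psi_{BA}^{*}\rangle
 = i\frac{\partial}{\partial t}
   \Bigl(-i\frac{\partial}{\partial t}|\psi_{AB}\rangle\Bigr)
 = \frac{\partial^{2}}{\partial t^{2}}\,|\psi_{AB}\rangle ,
```

where the momentum operators commute with $\partial/\partial t$ because they are time-independent.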
+
+Equation (12) describes an oscillator with the energy operator $\hat{\omega}(\vec{p})$:
+
+$$ \hat{\omega}(\vec{p}) = \frac{1}{\sqrt{2}} [(\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA})(\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}) + (\vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB})(\vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA})]^{1/2}. \quad (13) $$
+
+One can now see that the obtained equation is the equation of motion for a Majorana bispinor wave function of the semimetal charged carriers.
+
+Thus, the Fermi velocity becomes an operator within this approach, and the elementary excitations are fermionic excitations described by the massless Majorana-like equation rather than by a Dirac-like one.
+
+## 4. Harmonic Analysis of the Problem
+
+Equation (13) can be rewritten in the following form:
+
+$$ \hat{\omega}^2(\vec{p}) = \frac{1}{2} (\hat{H}_{AB}\hat{H}_{BA} + \hat{H}_{BA}\hat{H}_{AB}), \quad (14) $$
+
+where $\hat{H}_{AB} = \vec{\sigma}_{2D}^{AB} \cdot \vec{p}_{BA}$ and $\hat{H}_{BA} = \vec{\sigma}_{2D}^{BA} \cdot \vec{p}_{AB}$.
+
+In order to describe the proposed secondary quantized field by a set of harmonic oscillators, it is necessary to show that the squared Equation (14), obtained by symmetrization of the product of the Hamiltonians $\hat{H}_{AB}$ and $\hat{H}_{BA}$, is a Klein-Gordon-Fock operator. This is the case if the non-diagonal matrix elements of the operator vanish identically, so that the components of the equation are independent. Then, $\hat{\omega}^2(\vec{p})$ can be considered as a "square of the energy operator".
+
+Unfortunately, because of the complex form of the exchange operator, this statement is difficult to prove in the general case. Therefore, we prove it for several approximations of the exchange interaction and demonstrate that Equation (14) is a Klein-Gordon-Fock one.
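
The algebra behind this diagonality can be illustrated in a toy setting that replaces the full exchange-transformed operators by plain Pauli matrices contracted with constant vectors (an assumption made only for this illustration): the symmetrized product then reduces to the identity $\{\vec{\sigma}\cdot\vec{a}, \vec{\sigma}\cdot\vec{b}\} = 2(\vec{a}\cdot\vec{b})\,\mathbb{1}$, which is manifestly diagonal.

```python
import sympy as sp

sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])

ax, ay, bx, by = sp.symbols('a_x a_y b_x b_y', real=True)

H_ab = ax * sx + ay * sy  # toy stand-in for H_AB (not the full exchange operator)
H_ba = bx * sx + by * sy  # toy stand-in for H_BA

# Symmetrized product, as in Equation (14)
omega2 = sp.expand((H_ab * H_ba + H_ba * H_ab) / 2)

print(omega2)  # (a·b) times the 2x2 identity: the off-diagonal elements vanish
```

In the model itself, the transformed matrices $\vec{\sigma}_{2D}^{AB}$, $\vec{\sigma}_{2D}^{BA}$ are not plain Pauli matrices, which is why the diagonality has to be checked separately for each approximation of the exchange.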
+
+As the first particular case in which the proposed Majorana-like field is proven to be a set of harmonic oscillators, we consider the $\epsilon$-neighborhood ($\epsilon \to 0$) of the Dirac point $K_A(K_B)$.
+
+Let us designate the momentum of a particle in a valley as $\vec{q}$, determined as $\vec{q} = \vec{p} - \hbar\vec{K}_A$. For very small values of $\vec{q}$, $q \to 0$, the exchange operator $\Sigma_{AB(BA)}$ is approximated by a power-series expansion up to fourth order in $q$. Then, an analytical calculation of the non-diagonal elements of the operator $\hat{\omega}^2(\vec{p})$, performed in the Mathematica system, proves that they are identically zero.
+
+Band structures for monolayer graphene and for a monolayer of Pb atoms are shown in Figure 2a,b. One can see that the Weyl nodes in graphene are located far enough from the Dirac point, whereas for the Pb monolayer the Weyl nodes are shifted towards the Dirac point. Therefore, a Weyl-like character in the behavior of the charged carriers may be exhibited for the Pb monolayer under the condition that the contributions up to fourth order in $q$ prevail in the exchange. In accordance with Figure 1b, the exchange operator matrices transform a circumference in momentum space into a highly stretched ellipse, which allows us to assume the presence of nematicity in the model.
+
+For a given $\vec{q}$, the eigenfunction of Equation (9) represents a 2D spinor $\Psi$; we choose its normalization in the form $\Psi(\vec{q}) = (\psi(\vec{q}), 1)^\dagger$ with the lower component equal to unity. Then, as can easily be shown for the massless Dirac pseudo-fermion model [47], the absolute value of the upper component $|\psi(\vec{q})|$ does not depend upon the wave vector $\vec{q}$, demonstrating the equivalence of all directions in $\vec{q}$ space. We construct $|\psi(\vec{q})|^2$ for Equation (9) in the $q^4$-approximation for the exchange. The results are shown in Figure 2c. The isotropy of $|\psi(\vec{q})|^2$ is broken in our model due to the appearance of preferred directions in momentum space.
+
+As one can see from Figure 2c, the existence of almost one-dimensional regions with a sharp jump in $|\psi(\vec{q})|^2$ should lead to some anisotropy already in the configuration space of the carriers, which we consider a manifestation of nematicity.
+
+The $q^4$-approximation for the exchange operator is of particular interest for systems with strong damping of the quasi-particle excitations.
+
+**Figure 2.** A splitting of Dirac cone replicas for graphene (a) and a Pb monolayer (b); one of the six pairs of Weyl-like nodes (source and sink) is indicated; (c) the square of the absolute value of the upper spinor component $|\psi|^2$ of the $\vec{q}$-eigenstate in the 2D semimetal model, $\vec{q} = \vec{p} - \vec{K}_A$ (in color).
+
+The second approximation of the exchange, for which we can prove the harmonic origin of the proposed Majorana-like field, is the model exchange with the full exponential factors taken into account, but with the phase difference between the $\pi(p_z)$-electron wave functions chosen to be identically zero (see Ref. [4] for details). A numerical simulation of $\omega^2(\vec{p})$ with this model exchange has been performed on a discrete lattice in the Brillouin zone. It has been demonstrated that the operator $\omega^2(\vec{p})$ is always diagonal in this case.
+
+Now, we perform the simulations with the exact expression for the exchange term.
+
+In this general case, the exchange between a $\pi(p_z)$-electron and its three nearest $\pi(p_z)$-electrons has been calculated based on the method proposed in [4]. The band structure of the 2D semimetal has the form of a degenerate Dirac cone in the neighborhood of the Dirac point. The emergence of unfolding then leads to the appearance of replicas, and the further splitting of these replicas gives the octagonal symmetry of the problem, as one can see in Figure 3. Hyperbolic points (saddle points) are located between the nodes and at the apex of the Dirac cone (Van Hove singularities), as one can see in Figure 2a,b [3,48–50]. Therefore, a fractal-like set of Fermi arcs, which is shown in Figure 4, is formed in the absence of damping in the system. Contrary to the graphene case, the splitting of the Dirac bands for the Pb monolayer occurs at sufficiently small $q$ and, therefore, can be observed experimentally. In addition, for the Pb monolayer, there exist regions with huge numbers of Fermi arcs and, respectively, regions with strong fluctuations of the antiferromagnetic ordering.
+
+Thus, the secondary quantized field described by Equation (9) represents a field whose quanta manifest themselves as Dirac pseudo-fermions at the apex of the Dirac cone and as Weyl-like particles for sufficiently large $q$ in the presence of damping in the system. For an ideal system ($\operatorname{Im} \epsilon(\vec{q}) = 0$), such behavior is similar to that of the mathematical pendulum in the vicinity of the separatrix [51,52].
+
+**Figure 3.** A band structure in the graphene model with partial unfolding of the Dirac cone: real (a) and imaginary (b) parts of $\epsilon(\vec{q})$ in the range of high momenta; $\vec{q} = \vec{p} - \vec{K}_A$ (in color).
+
+**Figure 4.** Density of the sets of Fermi arcs in the graphene (a) and Pb-monolayer (b) bands for values of the momentum $q$ in the range $0 \le q/|\vec{K}_A| \le 10^{-4}$, $\vec{q} = \vec{p} - \vec{K}_A$.
+
+## 5. Discussion
+
+Discussing the obtained results, we point out, firstly, that the excitations of the constructed secondary-quantized pseudo-fermionic field are Majorana-like massless quasiparticles.
+
+The set of Fermi arcs in our model shows that the splitting of the Dirac replicas into a huge number of Weyl-like states occurs everywhere in momentum space except at the Dirac cone apex.
+
+In contrast to the known massless Dirac and Weyl models, in the proposed model there is a partial removal of the degeneracy of the Dirac cone, and the octagonal symmetry of the bands emerges for sufficiently large $q$. Thus, the Majorana particles in our model can be represented as a wave packet of an infinitely large number of Weyl-like states.
+
+Secondly, the Dirac cone for the proposed 2D-semimetal model is degenerate in a very small neighborhood of the Dirac point $K_A(K_B)$ at $q \to 0$.
+
+Thirdly, the first approximation with damping demonstrates that a sufficiently strong decay leads to a diminished number of Weyl states and to the formation of bands with hexagonal symmetry. In accordance with the obtained results, in a system with strong damping, only six pairs of Weyl nodes survive. In this case, each Dirac hole (electron) cone is surrounded by three electron (hole) bands relating to three Weyl pairs. Provided the lifetime of the Weyl-like states is sufficiently large (small but finite damping) to preserve the octagonal symmetry of the bands, each Dirac hole (electron) cone is surrounded by four electron (hole) bands relating to four Weyl pairs.
+
+Important features of the proposed model are that the fractal set of Fermi arcs manifests pseudospin fluctuations and that the phenomenon of nematicity is possible.
+
+## 6. Conclusions
+
+In conclusion, contrary to the known Dirac and Weyl models, the constructed 2D-semimetal model allows one to describe, within a general formalism, the band structure of a wide class of existing strongly correlated semimetals.
+
+**Acknowledgments:** This work has been supported in part by Research grant No. 2.1.01.1 within the Basic Research Program "Microcosm and Universe" of the Republic of Belarus.
+
+**Author Contributions:** Both authors equally contributed to this work.
+
+**Conflicts of Interest:** The authors declare no conflict of interest.
+
+**References**
+
+1. Grushevskaya, H.V.; Hurski, L.I. Coherent charge transport in strongly correlated electron systems: Negatively charged exciton. *Quantum Matter* **2015**, *4*, 384–386.
+
+2. Fefferman, C.L.; Weinstein, M.I. Honeycomb lattice potentials and Dirac points. *J. Am. Math. Soc.* **2012**, *25*, 1169–1220.
+
+3. Grushevskaya, H.V.; Krylov, G. Quantum field theory of graphene with dynamical partial symmetry breaking. *J. Mod. Phys.* **2014**, *5*, 984–994.
+
+4. Grushevskaya, H.V.; Krylov, G. Semimetals with Fermi Velocity Affected by Exchange Interactions: Two Dimensional Majorana Charge Carriers. *J. Nonlinear Phenom. Complex Syst.* **2015**, *18*, 266–283.
+
+5. Semenoff, G.W.; Sodano, P. Stretched quantum states emerging from a Majorana medium. *J. Phys. B: At. Mol. Opt. Phys.* **2007**, *40*, 1479–1488.
+
+6. Nadj-Perge, S.; Drozdov, I.K.; Li, J.; Chen, H.; Jeon, S.; Seo, J.; MacDonald, A.H.; Bernevig, A.; Yazdani, A. Observation of Majorana fermions in ferromagnetic atomic chains on a superconductor. *Science* **2014**, *346*, 602–607.
+
+7. Gerber, S.; Bartkowiak, M.; Gavilano, J.L.; Ressouche, E.; Egetenmeyer, N.; Niedermayer, C.; Bianchi, A.D.; Movshovich, R.; Bauer, E.D.; Thompson, J.D.; et al. Switching of magnetic domains reveals spatially inhomogeneous superconductivity. *Nat. Phys.* **2014**, *10*, 126–129.
+
+8. Shimojima, T.; Sakaguchi, F.; Ishizaka, K.; Ishida, Y.; Kiss, T.; Okawa, M.; Togashi, T.; Chen, C.-T.; Watanabe, S.; Arita, M.; et al. Orbital-independent superconducting gaps in iron-pnictides. *Science* **2011**, *332*, 564–567.
+
+9. Davis, J.C.S.; Lee, D.-H. Concepts relating magnetic interactions, intertwined electronic orders, and strongly correlated superconductivity. *Proc. Natl. Acad. Sci. USA* **2013**, *110*, 17623–17630.
+
+10. Borisenko, S.V.; Evtushinsky, D.V.; Liu, Z.-H.; Morozov, I.; Kappenberger, R.; Wurmehl, S.; Büchner, B.; Yaresko, A.N.; Kim, T.K.; Hoesch, M.; et al. Direct observation of spin-orbit coupling in iron-based superconductors. *Nat. Phys.* **2015**, doi:10.1038/nphys3594.
+
+11. Hurski, L.I.; Grushevskaya, H.V.; Kalanda, N.A. Non-adiabatic paramagnetic model of pseudo-gap state in high-temperature cuprate superconductors. *Dokl. Nat. Acad. Sci. Belarus* **2010**, *54*, 55–62. (In Russian)
+
+12. Diop, L.V.B.; Isnard, O.; Rodriguez-Carvajal, J. Ultrasharp magnetization steps in the antiferromagnetic itinerant-electron system $LaFe_{12}B_6$. *Phys. Rev. B* **2016**, *93*, 014440.
+
+13. Kasahara, S.; Shi, H.J.; Hashimoto, K.; Tonegawa, S.; Mizukami, Y.; Shibauchi, T.; Sugimoto, K.; Fukuda, T.; Terashima, T.; Nevidomskyy, A.H.; et al. Electronic nematicity above the structural and superconducting transition in $BaFe_2(As_{1-x}P_x)_2$. *Nature* **2012**, *486*, 382–385.
+
+14. Wang, Q.; Shen, Y.; Pan, B.; Hao, Y.; Ma, M.; Zhou, F.; Steffens, P.; Schmalzl, K.; Forrest, T.R.; Abdel-Hafiez, M.; et al. Strong interplay between stripe spin fluctuations, nematicity and superconductivity in FeSe. *Nat. Mater.* **2016**, *15*, 159–163.
+
+15. Kushwaha, P.; Sunko, V.; Moll, Ph.J.W.; Bawden, L.; Riley, J.M.; Nandi, N.; Rosner, H.; Schmidt, M.P.; Arnold, F.; Hassinger, E.; et al. Nearly free electrons in a 5d delafossite oxide metal. *Sci. Adv.* **2015**, *1*, e1500692.
+
+16. Lv, M.; Zhang, S.-C. Dielectric function, Friedel oscillation and plasmons in Weyl semimetals. *Int. J. Mod. Phys. B* **2013**, *27*, 1350177.
+
+17. Xu, S.-Y.; Belopolski, I.; Alidoust, N.; Neupane, M.; Bian, G.; Zhang, C.; Sankar, R.; Chang, G.; Yuan, Z.; Lee, C.-C.; et al. Discovery of a Weyl Fermion semimetal and topological Fermi arcs. *Science* **2015**, *349*, 613–617.
+
+18. Lv, B.Q.; Xu, N.; Weng, H.M.; Ma, J.Z.; Richard, P.; Huang, X.C.; Zhao, L.X.; Chen, G.F.; Matt, C.E.; Bisti, F.; et al. Observation of Weyl nodes in TaAs. *Nat. Phys.* **2015**, *11*, 724–727.
+
+19. Huang, S.-M.; Xu, S.-Y.; Belopolski, I.; Lee, C.-C.; Chang, G.; Wang, B.K.; Alidoust, N.; Bian, G.; Neupane, M.; Zhang, C.; et al. A Weyl Fermion semimetal with surface Fermi arcs in the transition metal monopnictide TaAs class. *Nat. Commun.* **2015**, *6*, 7373.
+
+20. Tan, B.S.; Hsu, Y.-T.; Zeng, B.; Ciomaga Hatnean, M.; Harrison, N.; Zhu, Z.; Hartstein, M.; Kiourlappou, M.; Srivastava, A.; Johannes, M.D.; et al. Unconventional Fermi surface in an insulating state. *Science* **2015**, *349*, 287–290.
+
+21. Falkovsky, L.A. Optical properties of graphene and IV-VI semiconductors. *Phys.-Uspekhi* **2008**, *51*, 887–897.
+
+22. Novoselov, K.S.; Jiang, D.; Schedin, F.; Booth, T.J.; Khotkevich, V.V.; Morozov, S.V.; Geim, A.K. Two-dimensional atomic crystals. *Proc. Natl. Acad. Sci. USA* **2005**, *102*, 10451–10453.
+
+23. Keldysh, L.V. Coulomb interaction in thin semiconductor and semimetal films. *Lett. J. Exper. Theor. Phys.* **1979**, *29*, 716–719.
+
+24. Dora, B.; Gulacsi, M.; Sodano, P. Majorana zero modes in graphene with trigonal warping. *Phys. Status Solidi RRL* **2009**, *3*, 169–171.
+
+25. Elias, D.C.; Gorbachev, R.V.; Mayorov, A.S.; Morozov, S.V.; Zhukov, A.A.; Blake, P.; Ponomarenko, L.A.; Grigorieva, I.V.; Novoselov, K.S.; Guinea, F.; et al. Dirac cones reshaped by interaction effects in suspended graphene. *Nat. Phys.* **2012**, *8*, 172.
+
+26. Du, X.; Skachko, I.; Barker, A.; Andrei, E.Y. Approaching ballistic transport in suspended graphene. *Nat. Nanotechnol.* **2008**, *3*, 491–495.
+
+27. Cooper, D.R.; D'Anjou, B.; Ghattamaneni, N.A.; Harack, B.; Hilke, M.; Horth, A.; Majlis, N.; Massicotte, M.; Vandsburger, L.; Whiteway, E.; et al. Experimental Review of Graphene. *ISRN Condensed Matter Phys.* **2012**, 2012, Article ID 501686.
+
+28. San-Jose, P.; Lado, J. L.; Aguado, R.; Guinea, F.; Fernandez-Rossier, J. Majorana Zero Modes in Graphene. *Phys. Rev. X* **2015**, *5*, 041042.
+
+29. Wang, J.R.; Liu, G.Z. Eliashberg theory of excitonic insulating transition in graphene. *J. Phys. Condensed Matter* **2011**, *23*, 155602.
+
+30. Pessa, E. The Majorana Oscillator. *Electr. J. Theor. Phys.* **2006**, *3*, 285–292.
+
+31. Majorana, E. Theory of Relativistic Particles with Arbitrary Intrinsic Moment. *Nuovo Cimento* **1932**, *9*, 335.
+
+32. Peskin, M.E.; Schroeder, D.V. *An Introduction to Quantum Field Theory*; Addison-Wesley Publishing Company: Reading, MA, USA, 1995.
+
+33. Simpao, V.A. Exact Solution of Majorana Equation via Heaviside Operational Ansatz. *Electr. J. Theor. Phys.* **2006**, *3*, 239–247.
+
+34. Hainzl, C.; Lewin, M.; Sparber, C. Ground state properties of graphene in Hartree-Fock theory. *J. Math. Phys.* **2012**, *53*, 095220.
+
+35. Grushevskaya, H.V.; Krylov, G.G. Charge Carriers Asymmetry and Energy Minigaps in Monolayer Graphene: Dirac-Hartree-Fock approach. *Int. J. Nonlinear Phenom. Complex Syst.* **2013**, *16*, 189–208.
+
+36. Grushevskaya, H.V.; Krylov, G.G. Nanotechnology in the Security Systems, NATO Science for Peace and Security Series C: Environmental Security; Bonča, J., Kruchinin, S., Eds.; Springer: Dordrecht, The Netherlands, 2015; Chapter 3.
+
+37. Grushevskaya, H.V.; Krylov, G.G. Electronic Structure and Transport in Graphene: QuasiRelativistic Dirac-Hartree-Fock Self-Consistent Field Approximation. In *Graphene Science Handbook*. Vol. 3: Electrical and Optical Properties; Aliofkhazraei, M., Ali, N., Milne, W.I., Ozkan, C.S., Mitura, S., Gervasoni, J.L., Eds.; CRC Press—Taylor&Francis Group: Boca Raton, FL, USA, 2016.
+
+38. Gribov, V.N. *Quantum Electrodynamics*; R & C Dynamics: Izhevsk, Russia, 2001. (In Russian)
+
+39. Fock, V.A. *Principles of Quantum Mechanics*; Science: Moscow, Russia, 1976. (In Russian)
+
+40. Krylova, H.; Hursky, L. *Spin Polarization in Strong-Correlated Nanosystems*; LAP LAMBERT Academic Publishing, AV Akademikerverlag GmbH & Co.: Saarbrücken, Germany, 2013.
+
+41. Semenoff, G.W. Condensed-matter simulation of a three-dimensional anomaly. *Phys. Rev. Lett.* **1984**, *53*, 2449.
+
+42. Abergel, D.S.L.; Apalkov, V.; Berashevich, J.; Ziegler, K.; Chakraborty, T. Properties of graphene: A theoretical perspective. *Adv. Phys.* **2010**, *59*, 261.
+
+43. Gusynin, V.P.; Sharapov, S.G.; Carbotte, J.P. AC Conductivity of Graphene: From Tight-binding model to 2 + 1-dimensional quantum electrodynamics. *Int. J. Mod. Phys. B* **2007**, *21*, 4611.
+---PAGE_BREAK---
+
+44. Wallace, P.R. The band theory of graphite. *Phys. Rev.* **1971**, *71*, 622-634.
+
+45. Saito, R.; Dresselhaus, G.; Dresselhaus, M.S. *Physical Properties of Carbon Nanotubes*; Imperial: London, UK, 1998.
+
+46. Reich, S.; Maultzsch, J.; Thomsen, C.; Ordejón, P. Tight-binding description of graphene. *Phys. Rev. B* **2002**, *66*, 035412.
+
+47. Castro Neto, A.H.; Guinea, F.; Peres, N.M.; Novoselov, K.S.; Geim, A.K. The electronic properties of graphene. *Rev. Mod. Phys.* **2009**, *81*, 109.
+
+48. Brihuega, I.; Mallet, P.; González-Herrero, H.; Trambly de Laissardière, G.; Ugeda, M.M.; Magaud, L.; Gomez-Rodríguez, J.M.; Ynduráin, F.; Veuillen, J.-Y. Unraveling the Intrinsic and Robust Nature of van Hove Singularities in Twisted Bilayer Graphene by Scanning Tunneling Microscopy and Theoretical Analysis. *Phys. Rev. Lett.* **2012**, *109*, 196802; Erratum in *2012*, *109*, 209905.
+
+49. Andrei, E.Y.; Li, G.; Du, X. Electronic properties of graphene: A perspective from scanning tunneling microscopy and magnetotransport. *Rep. Prog. Phys.* **2012**, *75*, 056501.
+
+50. Grushevskaya, H.V.; Krylov, G.; Gaisyonok, V.A.; Serow, D.V. Symmetry of Model N = 3 for Graphene with Charged Pseudo-Excitons. *J. Nonliner Phenom. Complex Sys.* **2015**, *18*, 81-98.
+
+51. Zaslavsky, G. M.; Sagdeev, R.Z.; Usikov, D.A.; Chernikov, A.A. *Weak Chaos and Quasi-Regular Patterns*; Cambridge University Press: New York, NY, USA, 1991.
+
+52. Guckenheimer, J.; Holmes, P. *Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*; Springer-Verlag: New York, NY, USA, 1990; Volume 42.
+
+© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+
Chapter 3: Papers Published by This Issue Editor in *Symmetry*
+
+Article
+
+Lorentz Harmonics, Squeeze Harmonics and Their
+Physical Applications
+
+Young S. Kim ¹,* and Marilyn E. Noz ²
+
+¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA
+
+² Department of Radiology, New York University, New York, NY 10016, USA
+
+* E-Mail: yskim@umd.edu; Tel.: 301-405-6024.
+
+Received: 6 January 2011; in revised form: 7 February 2011 / Accepted: 11 February 2011 /
+Published: 14 February 2011
+
**Abstract:** Among the symmetries in physics, the rotation symmetry is most familiar to us. It is known that the spherical harmonics serve useful purposes when the world is rotated. Squeeze transformations are also becoming more prominent in physics, particularly in optical sciences and in high-energy physics. As can be seen from Dirac's light-cone coordinate system, Lorentz boosts are squeeze transformations. Thus the squeeze transformation is one of the fundamental transformations in Einstein's Lorentz-covariant world. It is possible to define a complete set of orthonormal functions for a given Lorentz frame. It is shown that the same set can be used for other Lorentz frames. Transformation properties are discussed. Physical applications are discussed in both optics and high-energy physics. It is shown that the Lorentz harmonics provide the mathematical basis for squeezed states of light. It is shown also that the same set of harmonics can be used for understanding Lorentz-boosted hadrons in high-energy physics. It is thus possible to transmit physics from one branch of physics to the other branch using the mathematical basis common to them.
+
+**Keywords:** Lorentz harmonics; relativistic quantum mechanics; squeeze transformation; Dirac's efforts; hidden variables; Lorentz-covariant bound states; squeezed states of light
+
+Classification: PACS 03.65.Ge, 03.65.Pm
+
+# 1. Introduction
+
+In this paper, we are concerned with symmetry transformations in two dimensions, and we are accustomed to the coordinate system specified by x and y variables. On the xy plane, we know how to make rotations and translations. The rotation in the xy plane is performed by the matrix algebra
+
+$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \qquad (1) $$
+
+but we are not yet familiar with
+
+$$ \begin{pmatrix} z' \\ t' \end{pmatrix} = \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} z \\ t \end{pmatrix} \qquad (2) $$
+
+We see this form when we learn Lorentz transformations, but there is a tendency in the literature to avoid this form, especially in high-energy physics. Since this transformation can also be written as
+
+$$ \begin{pmatrix} u' \\ v' \end{pmatrix} = \begin{pmatrix} \exp(\eta) & 0 \\ 0 & \exp(-\eta) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} \qquad (3) $$
+
+with
+
+$$ u = \frac{z+t}{\sqrt{2}}, \quad v = \frac{z-t}{\sqrt{2}} \qquad (4) $$
+
+where the variables *u* and *v* are expanded and contracted respectively, we call Equation (2) or Equation (3) **squeeze transformations** [1].
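
The algebra behind Equations (2)-(4) is easy to verify numerically. The following sketch (ours, in Python; the function names are our own choices) boosts a point with Equation (2) and confirms that its light-cone coordinates of Equation (4) are rescaled exactly as in Equation (3):

```python
import math

def boost(z, t, eta):
    # Lorentz boost of Equation (2)
    return (z * math.cosh(eta) + t * math.sinh(eta),
            z * math.sinh(eta) + t * math.cosh(eta))

def light_cone(z, t):
    # Light-cone variables u, v of Equation (4)
    return ((z + t) / math.sqrt(2), (z - t) / math.sqrt(2))

z, t, eta = 0.7, -1.3, 0.5
u, v = light_cone(z, t)
zb, tb = boost(z, t, eta)
ub, vb = light_cone(zb, tb)

# Equation (3): u is expanded by e^eta while v is contracted by e^-eta
assert abs(ub - math.exp(eta) * u) < 1e-12
assert abs(vb - math.exp(-eta) * v) < 1e-12
# The Minkowski form z^2 - t^2 = 2uv is left invariant
assert abs((zb * zb - tb * tb) - (z * z - t * t)) < 1e-12
```

The product $uv$ is invariant because one light-cone axis is expanded by exactly the factor by which the other is contracted.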
+
+From the mathematical point of view, the symplectic group $Sp(2)$ contains both the rotation and squeeze transformations of Equations (1) and (2), and its mathematical properties have been extensively discussed in the literature [1,2]. This group has been shown to be one of the essential tools in quantum optics. From the mathematical point of view, the squeezed state in quantum optics is a harmonic oscillator representation of this $Sp(2)$ group [1].
+
+We are interested in this paper in "squeeze transformations" of localized functions. We are quite familiar with the role of spherical harmonics in three dimensional rotations. We use there the same set of harmonics, but the rotated function has different linear combinations of those harmonics. Likewise, we are interested in a complete set of functions which will serve the same purpose for squeeze transformations. It will be shown that harmonic oscillator wave functions can serve the desired purpose. From the physical point of view, squeezed states define the squeeze or Lorentz harmonics.
+
+In 2003, Giedke et al. used the Gaussian function to discuss the entanglement problems in information theory [3]. This paper allows us to use the oscillator wave functions to address many interesting current issues in quantum optics and information theory. In 2005, the present authors noted that the formalism of Lorentz-covariant harmonic oscillators leads to a space-time entanglement [4]. We developed the oscillator formalism to deal with hadronic phenomena observed in high-energy laboratories [5]. It is remarkable that the mathematical formalism of Giedke et al. is identical with that of our oscillator formalism.
+
+While quantum optics or information theory is a relatively new branch of physics, the squeeze transformation has been the backbone of Einstein's special relativity. While Lorentz, Poincaré, and Einstein used the transformation of Equation (2) for Lorentz boosts, Dirac observed that the same equation can be written in the form of Equation (3) [6]. Unfortunately, this squeeze aspect of Lorentz boosts has not been fully addressed in high-energy physics dealing with particles moving with relativistic speeds.
+
+Thus, we can call the same set of functions "squeeze harmonics" and "Lorentz harmonics" in quantum optics and high-energy physics respectively. This allows us to translate the physics of quantum optics or information theory into that of high-energy physics.
+
+The physics of high-energy hadrons requires a Lorentz-covariant localized quantum system. This description requires one variable which is hidden in the present form of quantum mechanics. It is the time-separation variable between two constituent particles in a quantum bound system like the hydrogen atom, where the Bohr radius measures the separation between the proton and the electron. What happens to this quantity when the hydrogen atom is boosted and the time-separation variable starts playing its role? The Lorentz harmonics will allow us to address this question.
+
+In Section 2, it is noted that the Lorentz boost of localized wave functions can be described in terms of one-dimensional harmonic oscillators. Thus, those wave functions constitute the Lorentz harmonics. It is also noted that the Lorentz boost is a squeeze transformation.
+
+In Section 3, we examine Dirac's life-long efforts to make quantum mechanics consistent with special relativity, and present a Lorentz-covariant form of bound-state quantum mechanics. In Section 4,
+we construct a set of Lorentz-covariant harmonic oscillator wave functions, and show that they can be
+given a Lorentz-covariant probability interpretation.
+
+In Section 5, the formalism is shown to constitute a mathematical basis for squeezed states of light, and for quantum entangled states. In Section 6, this formalism can serve as the language for Feynman's rest of the universe [7]. Finally, in Section 7, we show that the harmonic oscillator formalism can be applied to high-energy hadronic physics, and what we observe there can be interpreted in terms of what we learn from quantum optics.
+
+## 2. Lorentz or Squeeze Harmonics
+
+Let us start with the two-dimensional plane. We are quite familiar with rigid transformations such as rotations and translations in two-dimensional space. Things are different for non-rigid transformations such as a circle becoming an ellipse.
+
+We start with the well-known one-dimensional harmonic oscillator eigenvalue equation
+
+$$ \frac{1}{2} \left[ -\left(\frac{\partial}{\partial x}\right)^2 + x^2 \right] \chi_n(x) = \left(n + \frac{1}{2}\right) \chi_n(x) \quad (5) $$
+
+For a given value of integer $n$, the solution takes the form
+
+$$ \chi_n(x) = \left[ \frac{1}{\sqrt{\pi 2^n n!}} \right]^{1/2} H_n(x) \exp \left( -\frac{x^2}{2} \right) \quad (6) $$
+
+where $H_n(x)$ is the Hermite polynomial of the n-th degree. We can then consider a set of functions with all integer values of $n$. They satisfy the orthogonality relation
+
$$ \int \chi_n(x) \chi_{n'}(x)\, dx = \delta_{nn'} \quad (7) $$
+
+This relation allows us to define $f(x)$ as
+
+$$ f(x) = \sum_{n} A_{n} \chi_{n}(x) \quad (8) $$
+
+with
+
+$$ A_n = \int f(x)\chi_n(x)dx \quad (9) $$
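
As a concrete numerical illustration of Equations (6)-(9) (a sketch of ours, with an arbitrarily chosen test function), the coefficients $A_n$ can be obtained by quadrature, and the truncated series of Equation (8) reconstructs $f(x)$:

```python
import math

def chi(n, x):
    # Harmonic oscillator wave function of Equation (6)
    h_prev, h_curr = 1.0, 2.0 * x  # Hermite recursion H_{k+1} = 2x H_k - 2k H_{k-1}
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    hermite = 1.0 if n == 0 else h_curr
    norm = math.sqrt(math.sqrt(math.pi) * 2.0 ** n * math.factorial(n))
    return hermite * math.exp(-x * x / 2.0) / norm

def integrate(g, a=-8.0, b=8.0, steps=4000):
    # Plain trapezoidal quadrature; adequate for these smooth, decaying integrands
    h = (b - a) / steps
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps)))

def f(x):
    # An arbitrary localized test function (a displaced Gaussian)
    return math.exp(-((x - 0.5) ** 2) / 2.0)

# Equation (9): expansion coefficients
A = [integrate(lambda x, n=n: f(x) * chi(n, x)) for n in range(20)]

# Equation (8): twenty terms already reproduce f to high accuracy
x0 = 0.3
reconstruction = sum(A[n] * chi(n, x0) for n in range(20))
assert abs(reconstruction - f(x0)) < 1e-6
```

For a function localized around the origin the coefficients fall off rapidly with $n$, which is why a short truncation suffices here.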
+
+Let us next consider another variable added to Equation (5), and the differential equation
+
+$$ \frac{1}{2} \left\{ \left[ -\left(\frac{\partial}{\partial x}\right)^2 + x^2 \right] + \left[ -\left(\frac{\partial}{\partial y}\right)^2 + y^2 \right] \right\} \phi(x,y) = \lambda \phi(x,y) \quad (10) $$
+
+This equation can be re-arranged to
+
+$$ \frac{1}{2} \left\{ -\left(\frac{\partial}{\partial x}\right)^2 - \left(\frac{\partial}{\partial y}\right)^2 + x^2 + y^2 \right\} \phi(x,y) = \lambda \phi(x,y) \quad (11) $$
+
+This differential equation is invariant under the rotation defined in Equation (1). In terms of the polar coordinate system with
+
+$$ r = \sqrt{x^2 + y^2}, \qquad \tan \theta = \left(\frac{y}{x}\right) \quad (12) $$
+
+this equation can be written:
+
+$$ \frac{1}{2} \left\{ -\frac{\partial^2}{\partial r^2} - \frac{1}{r} \frac{\partial}{\partial r} - \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + r^2 \right\} \phi(r, \theta) = \lambda \phi(r, \theta) \quad (13) $$
+
+and the solution takes the form
+
+$$
\phi(r, \theta) = e^{-r^2/2} R_{n,m}(r) \{ A_m \cos(m\theta) + B_m \sin(m\theta) \} \quad (14)
+$$
+
+The radial equation should satisfy
+
+$$
+\frac{1}{2} \left\{ -\frac{\partial^2}{\partial r^2} - \frac{1}{r} \frac{\partial}{\partial r} + \frac{m^2}{r^2} + r^2 \right\} R_{n,m}(r) = (n+m+1)R_{n,m}(r) \quad (15)
+$$
+
+In the polar form of Equation (14), we can achieve the rotation of this function by changing the angle variable $\theta$.
+
+On the other hand, the differential equation of Equation (10) is separable in the x and y variables.
+The eigen solution takes the form
+
+$$
+\Phi_{n_x, n_y}(x, y) = \chi_{n_x}(x) \chi_{n_y}(y) \tag{16}
+$$
+
+with
+
+$$
+\lambda = n_x + n_y + 1
+\quad
+(17)
+$$
+
+If a function $f(x,y)$ is sufficiently localized around the origin, it can be expanded as
+
+$$
+f(x,y) = \sum_{n_x, n_y} A_{n_x, n_y} \chi_{n_x}(x) \chi_{n_y}(y) \qquad (18)
+$$
+
+with
+
+$$
+A_{n_x, n_y} = \int f(x,y)\chi_{n_x}(x)\chi_{n_y}(y) dx dy \quad (19)
+$$
+
+If we rotate $f(x,y)$ according to Equation (1), it becomes $f(x^*, y^*)$, with
+
+$$
+x^* = (\cos \theta)x - (\sin \theta)y, \quad y^* = (\sin \theta)x + (\cos \theta)y \tag{20}
+$$
+
+This rotated function can also be expanded in terms of $\chi_{n_x}(x)$ and $\chi_{n_y}(y)$:
+
+$$
+f(x^*, y^*) = \sum_{n_x, n_y} A_{n_x, n_y}^* \chi_{n_x}(x) \chi_{n_y}(y) \quad (21)
+$$
+
+with
+
+$$
+A_{n_x, n_y}^* = \int f(x^*, y^*) \chi_{n_x}(x) \chi_{n_y}(y) dx dy \quad (22)
+$$
+
+Next, let us consider the differential equation
+
+$$
+\frac{1}{2} \left\{ -\left(\frac{\partial}{\partial z}\right)^2 + \left(\frac{\partial}{\partial t}\right)^2 + z^2 - t^2 \right\} \psi(z,t) = \lambda \psi(z,t) \quad (23)
+$$
+
+Here we use the variables *z* and *t*, instead of *x* and *y*. Clearly, this equation can be also separated in the
+*z* and *t* coordinates, and the eigen solution can be written as
+
+$$
\psi_{n_z, n_t}(z,t) = \chi_{n_z}(z)\,\chi_{n_t}(t) \tag{24}
+$$
+
+with
+
+$$
+\lambda = n_z - n_t. \tag{25}
+$$
+
+The oscillator equation is not invariant under coordinate rotations of the type given in Equation (1).
+It is however invariant under the squeeze transformation given in Equation (2).
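
Both statements can be checked directly (a finite-difference sketch of ours, not from the text): applying the operator of Equation (23) to the product solution of Equation (24) reproduces the eigenvalue $n_z - n_t$ of Equation (25), which may be negative:

```python
import math

def chi(n, x):
    # Harmonic oscillator wave function of Equation (6)
    h_prev, h_curr = 1.0, 2.0 * x  # Hermite recursion
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    hermite = 1.0 if n == 0 else h_curr
    norm = math.sqrt(math.sqrt(math.pi) * 2.0 ** n * math.factorial(n))
    return hermite * math.exp(-x * x / 2.0) / norm

def psi(nz, nt, z, t):
    # Separable solution of Equation (24)
    return chi(nz, z) * chi(nt, t)

def lhs(nz, nt, z, t, h=1e-3):
    # Left-hand side of Equation (23) via central finite differences
    f = psi(nz, nt, z, t)
    d2z = (psi(nz, nt, z + h, t) - 2.0 * f + psi(nz, nt, z - h, t)) / (h * h)
    d2t = (psi(nz, nt, z, t + h) - 2.0 * f + psi(nz, nt, z, t - h)) / (h * h)
    return 0.5 * (-d2z + d2t + (z * z - t * t) * f)

for nz, nt in [(0, 0), (3, 1), (2, 4)]:
    z, t = 0.4, -0.6
    # Equation (25): the eigenvalue is n_z - n_t
    assert abs(lhs(nz, nt, z, t) - (nz - nt) * psi(nz, nt, z, t)) < 1e-4
```

The sign pattern of Equation (23) makes the $t$ oscillator contribute $-(n_t + 1/2)$, so the half-integer zero-point terms cancel between the two directions.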
+
+The differential equation of Equation (23) becomes
+
+$$
\left\{ -\frac{\partial^2}{\partial u\, \partial v} + uv \right\} \psi(u, v) = \lambda\, \psi(u, v) \quad (26)
+$$
+
+Both Equation (11) and Equation (23) are two-dimensional differential equations. They are
+invariant under rotations and squeeze transformations respectively. They take convenient forms
+in the polar and squeeze coordinate systems respectively as shown in Equation (13) and Equation (26).
+
+The solutions of the rotation-invariant equation are well known, but the solutions of the squeeze-invariant equation are still strange to the physics community. Fortunately, both equations are separable in the Cartesian coordinate system. This allows us to study the latter in terms of the familiar rotation-invariant equation. This means that if the solution is sufficiently localized in the z and t plane, it can be written as
+
+$$
+\psi(z, t) = \sum_{n_z, n_t} A_{n_z, n_t} \chi_{n_z}(z) \chi_{n_t}(t) \tag{27}
+$$
+
+with
+
+$$
+A_{n_z, n_t} = \int \psi(z,t) \chi_{n_z}(z) \chi_{n_t}(t) \, dz \, dt \quad (28)
+$$
+
+If we squeeze the coordinate according to Equation (2),
+
+$$
+\psi(z^*, t^*) = \sum_{n_z, n_t} A_{n_z, n_t}^* \chi_{n_z}(z) \chi_{n_t}(t) \quad (29)
+$$
+
+with
+
+$$
+A_{n_z, n_t}^* = \int \psi(z^*, t^*) \chi_{n_z}(z) \chi_{n_t}(t) \, dz \, dt \quad (30)
+$$
+
+Here again both the original and transformed wave functions are linear combinations of the wave
+functions for the one-dimensional harmonic oscillator given in Equation (6).
+
+The wave functions for the one-dimensional oscillator are well known, and they play important
+roles in many branches of physics. It is gratifying to note that they could play an essential role in
squeeze transformations and Lorentz boosts; see Table 1. We choose to call them Lorentz harmonics
+or squeeze harmonics.
+
**Table 1.** Cylindrical and hyperbolic equations. The cylindrical equation is invariant under rotation, while the hyperbolic equation is invariant under squeeze transformation.

| Equation | Invariant under | Eigenvalue |
| --- | --- | --- |
| Cylindrical | Rotation | $\lambda = n_x + n_y + 1$ |
| Hyperbolic | Squeeze | $\lambda = n_z - n_t$ |

## 3. The Physical Origin of Squeeze Transformations
+
+Paul A. M. Dirac made it his life-long effort to combine quantum mechanics with special relativity.
+We examine the following four of his papers.
+
+* In 1927 [8], Dirac pointed out the time-energy uncertainty should be taken into consideration for efforts to combine quantum mechanics and special relativity.
+
+* In 1945 [9], Dirac considered four-dimensional harmonic oscillator wave functions with
+
+$$
+\exp\left\{-\frac{1}{2}\left(x^2 + y^2 + z^2 + t^2\right)\right\} \qquad (31)
+$$
+
+and noted that this form is not Lorentz-covariant.
+
* In 1949 [6], Dirac introduced the light-cone variables of Equation (4). He also noted that the construction of a Lorentz-covariant quantum mechanics is equivalent to the construction of a representation of the Poincaré group.
+
+* In 1963 [10], Dirac constructed a representation of the (3 + 2) deSitter group using two harmonic oscillators. This deSitter group contains three (3 + 1) Lorentz groups as its subgroups.
+
+In each of these papers, Dirac presented the original ingredients which can serve as building blocks for making quantum mechanics relativistic. We combine those elements using Wigner's little groups [11] and Feynman's observation of high-energy physics [12–14].
+
+First of all, let us combine Dirac’s 1945 paper and his light-cone coordinate system given in his 1949 paper. Since x and y variables are not affected by Lorentz boosts along the z direction in Equation (31), it is sufficient to study the Gaussian form
+
+$$ \exp\left\{-\frac{1}{2}(z^2 + t^2)\right\} \qquad (32) $$
+
+This form is certainly not invariant under Lorentz boost as Dirac noted. On the other hand, it can be written as
+
+$$ \exp\left\{-\frac{1}{2}(u^2 + v^2)\right\} \qquad (33) $$
+
+where *u* and *v* are the light-cone variables defined in Equation (4). If we make the Lorentz-boost or Lorentz squeeze according to Equation (3), this Gaussian form becomes
+
+$$ \exp\left\{-\frac{1}{2}\left(e^{-2\eta}u^2 + e^{2\eta}v^2\right)\right\} \qquad (34) $$
+
+If we write the Lorentz boost as
+
+$$ z' = \frac{z + \beta t}{\sqrt{1 - \beta^2}} \qquad t' = \frac{t + \beta z}{\sqrt{1 - \beta^2}} \qquad (35) $$
+
+where $\beta$ is the velocity parameter $v/c$, then $\beta$ is related to $\eta$ by
+
+$$ \beta = \tanh(\eta) \qquad (36) $$
+
Let us go back to the Gaussian form of Equation (32). This expression is consistent with Dirac's earlier paper on the time-energy uncertainty relation [8]. According to Dirac, this is a c-number uncertainty relation without excitations. The existence of the time-energy uncertainty is illustrated in the first part of Figure 1.
+
In his 1927 paper, Dirac noted the space-time asymmetry in uncertainty relations. While there are no time-like excitations, quantum mechanics allows excitations along the z direction. How can we take care of this problem?
+
+If we suppress the excitations along the *t* coordinate, the normalized solution of this differential equation, Equation (24), is
+
+$$ \psi(z,t) = \left( \frac{1}{\pi 2^n n!} \right)^{1/2} H_n(z) \exp \left\{ - \left( \frac{z^2 + t^2}{2} \right) \right\} \qquad (37) $$
+
+**Figure 1.** Space-time picture of quantum mechanics. In his 1927 paper, Dirac noted that there is a c-number time-energy uncertainty relation, in addition to Heisenberg's position-momentum uncertainty relations, with quantum excitations. This idea is illustrated in the first figure (upper left). In his 1949 paper, Dirac produced his light-cone coordinate system as illustrated in the second figure (upper right). It is then not difficult to produce the third figure, for a Lorentz-covariant picture of quantum mechanics. This Lorentz-squeeze property is observed in high-energy laboratories through Feynman's parton picture discussed in Section 7.
+
+If we boost the coordinate system, the Lorentz-boosted wave functions should take the form
+
$$ \begin{aligned} \psi_{\eta}^{n}(z, t) = & \left( \frac{1}{\pi 2^{n} n!} \right)^{1/2} H_n \left( z \cosh \eta - t \sinh \eta \right) \\ & \times \exp \left\{ - \left[ \frac{( \cosh 2\eta )(z^2 + t^2) - 2( \sinh 2\eta )zt }{2} \right] \right\} \end{aligned} \quad (38) $$
+
+These are the solutions of the phenomenological equation of Feynman *et al.* [12] for internal motion of the quarks inside a hadron. In 1971, Feynman *et al.* wrote down a Lorentz-invariant differential equation of the form
+
+$$ \frac{1}{2} \left\{ - \left( \frac{\partial}{\partial x_{\mu}} \right)^2 + x_{\mu}^2 \right\} \psi(x_{\mu}) = (\lambda + 1) \psi(x_{\mu}) \quad (39) $$
+
where $x_\mu$ is the Lorentz-covariant space-time four-vector. This oscillator equation is separable in the Cartesian coordinate system, and the transverse components can be separated out. Thus, the differential equation of Equation (23) contains the essential element of the Lorentz-invariant Equation (39).
+
+However, the solutions contained in Reference [12] are not normalizable and therefore cannot carry physical interpretations. It was shown later that there are normalizable solutions which constitute a representation of Wigner's O(3)-like little group [5,11,15]. The O(3) group is the three-dimensional rotation group without a time-like direction or time-like excitations. This addresses Dirac's concern about the space-time asymmetry in uncertainty relations [8]. Indeed, the expression of Equation (37) is considered to be the representation of Wigner's little group for quantum bound states [11,15]. We shall return to more physical questions in Section 7.
+
+## 4. Further Properties of the Lorentz Harmonics
+
+Let us continue our discussion of quantum bound states using harmonic oscillators. We are interested in this section to see how the oscillator solution of Equation (37) would appear to a moving observer.
+
The variables *z* and *t* are the longitudinal and time-like separations between the two constituent particles. In terms of the light-cone variables defined in Equation (4), the solution of Equation (37) takes the form
+
+$$
+\psi_0^n(z, t) = \left[ \frac{1}{\pi n! 2^n} \right]^{1/2} H_n \left( \frac{u+v}{\sqrt{2}} \right) \exp \left\{ - \left( \frac{u^2 + v^2}{2} \right) \right\} \quad (40)
+$$
+
+and
+
+$$
+\psi_{\eta}^{n}(z,t) = \left[ \frac{1}{\pi n! 2^n} \right]^{1/2} H_n \left( \frac{e^{-\eta} u + e^{\eta} v}{\sqrt{2}} \right) \exp \left\{ - \left( \frac{e^{-2\eta} u^2 + e^{2\eta} v^2}{2} \right) \right\} \quad (41)
+$$
+
+for the rest and moving hadrons respectively.
+
+It is mathematically possible to expand this as [5,16]
+
+$$
\psi_{\eta}^{n}(z, t) = \left(\frac{1}{\cosh \eta}\right)^{(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh \eta)^{k} \chi_{n+k}(z) \chi_{k}(t) \quad (42)
+$$
+
+where $\chi_n(z)$ is the $n$-th excited state oscillator wave function which takes the familiar form
+
+$$
+\chi_n(z) = \left[ \frac{1}{\sqrt{\pi 2^n n!}} \right]^{1/2} H_n(z) \exp \left( -\frac{z^2}{2} \right) \qquad (43)
+$$
+
+as given in Equation (6). This is an expansion of the Lorentz-boosted wave function in terms of the Lorentz harmonics.
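
For $n = 0$ this expansion is the Mehler kernel identity for Hermite functions, and it can be verified numerically (a sketch of ours; note that the summation index $k$ appears in both the $z$ and $t$ harmonics):

```python
import math

def chi(n, x):
    # Harmonic oscillator wave function of Equation (6)
    h_prev, h_curr = 1.0, 2.0 * x  # Hermite recursion
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    hermite = 1.0 if n == 0 else h_curr
    norm = math.sqrt(math.sqrt(math.pi) * 2.0 ** n * math.factorial(n))
    return hermite * math.exp(-x * x / 2.0) / norm

eta, z, t = 0.6, 0.5, -0.2
u, v = (z + t) / math.sqrt(2), (z - t) / math.sqrt(2)

# Closed form: the ground-state (n = 0) boosted Gaussian of Equation (41)
closed = math.exp(-(math.exp(-2 * eta) * u * u
                    + math.exp(2 * eta) * v * v) / 2.0) / math.sqrt(math.pi)

# Series over the Lorentz harmonics, with both indices equal to k
series = sum(math.tanh(eta) ** k * chi(k, z) * chi(k, t)
             for k in range(60)) / math.cosh(eta)

assert abs(closed - series) < 1e-10
```

The geometric factor $(\tanh\eta)^k$ guarantees rapid convergence for any finite boost.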
+
+If the hadron is at rest, there are no time-like oscillations. There are time-like oscillations for a moving hadron. This is the way in which the space and time variable mix covariantly. This also provides a resolution of the space-time asymmetry pointed out by Dirac in his 1927 paper [8]. We shall return to this question in Section 6. Our next question is whether those oscillator equations can be given a probability interpretation.
+
+Even though we suppressed the excitations along the *t* direction in the hadronic rest frame, it is an interesting mathematical problem to start with the oscillator wave function with an excited state in the time variable. This problem was addressed by Rotbart in 1981 [17].
+
+## 4.1. Lorentz-Invariant Orthogonality Relations
+
+Let us consider two wave functions $\psi_\eta^n(z, t)$. If two covariant wave functions are in the same Lorentz frame and have thus the same value of $\eta$, the orthogonality relation
+
+$$
+(\psi_{\eta}^{n'}, \psi_{\eta}^{n}) = \delta_{nn'} \quad (44)
+$$
+
+is satisfied.
+
+If those two wave functions have different values of η, we have to start with
+
+$$
+(\psi_{\eta'}^{n'}, \psi_{\eta}^{n}) = \int (\psi_{\eta'}^{n'}(z,t))^* \psi_{\eta}^{n}(z,t) dz dt \quad (45)
+$$
+
+Without loss of generality, we can assume $\eta' = 0$ in the system where $\eta = 0$, and evaluate the integration. The result is [18]
+
+$$
(\psi_0^{n'}, \psi_\eta^n) = \int \left(\psi_0^{n'}(z,t)\right)^{*} \psi_\eta^n(z,t)\, dz\, dt = \left(\sqrt{1-\beta^2}\right)^{(n+1)} \delta_{n,n'} \quad (46)
+$$
+
+where $\beta = \tanh(\eta)$, as given in Equation (36). This is like the Lorentz-contraction property of a rigid rod. The ground state is like a single rod. Since we obtain the first excited state by applying a step-up operator, this state should behave like a multiplication of two rods, and a similar argument can be given to *n* rigid rods. This is illustrated in Figure 2.
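
The contraction factor of Equation (46) can be reproduced by brute-force integration (a numerical sketch of ours; the grid limits and step counts are arbitrary choices):

```python
import math

def chi(n, x):
    # Harmonic oscillator wave function of Equation (6)
    h_prev, h_curr = 1.0, 2.0 * x  # Hermite recursion
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    hermite = 1.0 if n == 0 else h_curr
    norm = math.sqrt(math.sqrt(math.pi) * 2.0 ** n * math.factorial(n))
    return hermite * math.exp(-x * x / 2.0) / norm

def psi(n, eta, z, t):
    # Equation (41) expressed through the inversely boosted coordinates
    zb = z * math.cosh(eta) - t * math.sinh(eta)
    tb = t * math.cosh(eta) - z * math.sinh(eta)
    return chi(n, zb) * chi(0, tb)

def overlap(n, eta, half_width=7.0, steps=160):
    # Two-dimensional trapezoidal evaluation of Equation (45) with eta' = 0
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps + 1):
        z = -half_width + i * h
        wz = 0.5 if i in (0, steps) else 1.0
        for j in range(steps + 1):
            t = -half_width + j * h
            wt = 0.5 if j in (0, steps) else 1.0
            total += wz * wt * psi(n, 0.0, z, t) * psi(n, eta, z, t)
    return total * h * h

eta = 0.8
beta = math.tanh(eta)
for n in (0, 1, 2):
    # Equation (46): the overlap contracts like (sqrt(1 - beta^2))^(n+1)
    assert abs(overlap(n, eta) - math.sqrt(1.0 - beta * beta) ** (n + 1)) < 1e-6
```

The $n$-dependent power of $\sqrt{1-\beta^2}$ is the multi-rod contraction picture of Figure 2 in numerical form.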
+
+**Figure 2.** Orthogonality relations for the covariant harmonic oscillators. The orthogonality remains invariant. For the two wave functions in the orthogonality integral, the result is zero if they have different values of *n*. If both wave functions have the same value of *n*, the integral shows the Lorentz contraction property.
+
With these orthogonality properties, it is possible to give a quantum probability interpretation in the Lorentz-covariant world, and it was so stated in our 1977 paper [19].
+
+## 4.2. Probability Interpretations
+
+Let us study the probability issue in terms of the one-dimensional oscillator solution of Equation (6) whose probability interpretation is indisputable. Let us also go back to the rotationally invariant differential equation of Equation (11). Then the product
+
+$$ \chi_{n_x}(x) \chi_{n_y}(y) \quad (47) $$
+
also has a probability interpretation with the eigenvalue $(n_x + n_y + 1)$. Thus the series of the form [1,5]
+
$$ \phi_{\eta}^{n}(x, y) = \left( \frac{1}{\cosh \eta} \right)^{(n+1)} \sum_{k} \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k \chi_{n+k}(x) \chi_k(y) \quad (48) $$
+
also has its probability interpretation, but it is not in an eigenstate. Each term in this series has an eigenvalue $(n + 2k + 1)$. The expectation value of Equation (11) is
+
$$ \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_k \frac{(n+2k+1)(n+k)!}{n!k!} (\tanh \eta)^{2k} \quad (49) $$
+
If we replace the variables *x* and *y* by *z* and *t* respectively in the above expression of Equation (48), it becomes the Lorentz-covariant wave function of Equation (42). Each term $\chi_{n+k}(z)\chi_k(t)$ in the series has the eigenvalue *n*. Thus the series is in the eigenstate with the eigenvalue *n*.
+
+This difference does not prevent us from importing the probability interpretation from that of Equation (48).
+
+In the present covariant oscillator formalism, the time-separation variable can be separated from the rest of the wave function, and does not require further interpretation. For a moving
+hadron, time-like excitations are mixed with longitudinal excitations. Is it possible to give a physical interpretation to those time-like excitations? To address this issue, we shall study in Section 5 two-mode squeezed states also based on the mathematics of Equation (48). There, both variables have their physical interpretations.
+
## 5. Two-Mode Squeezed States
+
+Harmonic oscillators play the central role also in quantum optics. There the $n^{th}$ excited oscillator state corresponds to the *n*-photon state $|n\rangle$. The ground state means the zero-photon or vacuum state $|0\rangle$. The single-photon coherent state can be written as
+
$$|\alpha\rangle = e^{-\alpha\alpha^{*}/2} \sum_n \frac{\alpha^n}{\sqrt{n!}} |n\rangle \quad (50)$$
+
+which can be written as [1]
+
$$|\alpha\rangle = e^{-\alpha\alpha^{*}/2} \sum_n \frac{\alpha^n}{n!} (\hat{a}^\dagger)^n |0\rangle = \left\{e^{-\alpha\alpha^{*}/2}\right\} \exp\{\alpha \hat{a}^\dagger\} |0\rangle \quad (51)$$
+
+This aspect of the single-photon coherent state is well known. Here we are dealing with one kind of photon, namely with a given momentum and polarization. The state $|n\rangle$ means there are $n$ photons of this kind.
+
+Let us next consider a state of two kinds of photons, and write $|n_1, n_2\rangle$ as the state of $n_1$ photons of the first kind, and $n_2$ photons of the second kind [20]. We can then consider the form
+
+$$\frac{1}{\cosh \eta} \exp \{(\tanh \eta) \hat{a}_1^\dagger \hat{a}_2^\dagger\} |0, 0\rangle \quad (52)$$
+
+The operator $\hat{a}_1^\dagger \hat{a}_2^\dagger$ was studied by Dirac in connection with his representation of the deSitter group, as we mentioned in Section 3. After making a Taylor expansion of Equation (52), we arrive at
+
+$$\frac{1}{\cosh \eta} \sum_k (\tanh \eta)^k |k, k\rangle \quad (53)$$
+
+which is the squeezed vacuum state or two-photon coherent state [1,20]. This expression is the wave function of Equation (48) in a different notation. This form is also called the entangled Gaussian state of two photons [3] or the entangled oscillator state of space and time [4].
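
The photon-number statistics implied by Equation (53) can be spelled out in a few lines (our sketch): the probabilities $P_k = (\tanh\eta)^{2k}/\cosh^2\eta$ sum to one, and the mean photon number in each mode is $\sinh^2\eta$:

```python
import math

eta = 0.9
w = math.tanh(eta) ** 2
# Squares of the coefficients in Equation (53): P_k = tanh^(2k) / cosh^2
P = [w ** k / math.cosh(eta) ** 2 for k in range(200)]

# The distribution is normalized ...
assert abs(sum(P) - 1.0) < 1e-12
# ... and the mean photon number in each mode is sinh^2(eta)
mean = sum(k * p for k, p in enumerate(P))
assert abs(mean - math.sinh(eta) ** 2) < 1e-9
```

This is the familiar thermal-like distribution of each mode of the squeezed vacuum when the partner mode is ignored.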
+
+If we start with the *n*-particle state of the first photon, we obtain
+
+$$ \begin{aligned} & \left[ \frac{1}{\cosh \eta} \right]^{(n+1)} \exp \left\{ (\tanh \eta) \hat{a}_1^\dagger \hat{a}_2^\dagger \right\} |n, 0\rangle \\ &= \left[ \frac{1}{\cosh \eta} \right]^{(n+1)} \sum_k \left[ \frac{(n+k)!}{n!k!} \right]^{1/2} (\tanh \eta)^k |k+n, k\rangle \end{aligned} \quad (54) $$
+
+which is the wave function of Equation (42) in a different notation. This is the *n*-photon squeezed state [1].
+
+Since the two-mode squeezed state and the covariant harmonic oscillators share the same set of mathematical formulas, it is possible to transmit physical interpretations from one to the other. For two-mode squeezed state, both photons carry physical interpretations, while the interpretation is yet to be given to the time-separation variable in the covariant oscillator formalism. It is clear from Equation (42) and Equation (54) that the time-like excitations are like the second-photon states.
+
+What would happen if the second photon is not observed? This interesting problem was addressed by Yurke and Potasek [21] and by Ekert and Knight [22]. They used the density matrix formalism and
+integrated out the second-photon states. This increases the entropy and temperature of the system. We choose not to reproduce their mathematics, because we will be presenting the same mathematics in Section 6.
+
## 6. Time-Separation Variable in Feynman's Rest of the Universe
+
+As was noted in the previous section, the time-separation variable has an important role in the covariant formulation of the harmonic oscillator wave functions. It should exist wherever the space separation exists. The Bohr radius is the measure of the separation between the proton and electron in the hydrogen atom. If this atom moves, the radius picks up the time separation, according to Einstein [23].
+
+On the other hand, the present form of quantum mechanics does not include this time-separation variable. The best way we can interpret it at the present time is to treat this time-separation as a variable in Feynman's rest of the universe [24]. In his book on statistical mechanics [7], Feynman states
+
+> When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system.
+
+The failure to include what happens outside the system results in an increase of entropy. The entropy is a measure of our ignorance and is computed from the density matrix [25]. The density matrix is needed when the experimental procedure does not analyze all relevant variables to the maximum extent consistent with quantum mechanics [26]. If we do not take into account the time-separation variable, the result is an increase in entropy [27,28].
+
+For the covariant oscillator wave functions defined in Equation (42), the pure-state density matrix is
+
+$$ \rho_{\eta}^{n}(z, t; z', t') = \psi_{\eta}^{n}(z, t) \left(\psi_{\eta}^{n}(z', t')\right)^{*} \quad (55) $$
+
+which satisfies the condition $\rho^2 = \rho$:
+
+$$ \rho_{\eta}^{n}(z, t; z', t') = \int \rho_{\eta}^{n}(z, t; z'', t'') \rho_{\eta}^{n}(z'', t''; z', t') dz'' dt'' \quad (56) $$
+
+However, in the present form of quantum mechanics, it is not possible to take into account the time-separation variable. Thus, we have to take the trace of the density matrix over the $t$ variable. The resulting density matrix is:
+
+$$ \begin{aligned} \rho_{\eta}^{n}(z, z') &= \int \psi_{\eta}^{n}(z, t) \left(\psi_{\eta}^{n}(z', t)\right)^{*} dt \\ &= \left(\frac{1}{\cosh \eta}\right)^{2(n+1)} \sum_{k} \frac{(n+k)!}{n!k!} (\tanh \eta)^{2k} \psi_{n+k}(z) \psi_{n+k}^{*}(z') \end{aligned} \quad (57) $$
+
+The trace of this density matrix is one, but the trace of $\rho^2$ is less than one, as:
+
+$$ \begin{aligned} \mathrm{Tr}(\rho^2) &= \int \rho_{\eta}^{n}(z,z') \rho_{\eta}^{n}(z',z) dzdz' \\ &= \left(\frac{1}{\cosh \eta}\right)^{4(n+1)} \sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^2 (\tanh \eta)^{4k} \end{aligned} \quad (58) $$
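These two traces can be checked numerically. The sketch below assumes the eigenvalues of the density matrix of Equation (57) are $\lambda_k = \frac{(n+k)!}{n!k!}(\tanh\eta)^{2k}/(\cosh\eta)^{2(n+1)}$; their sum is one (a binomial series), while the sum of $\lambda_k^2$ gives Equation (58), which for $n = 0$ sums to $1/\cosh(2\eta)$.

```python
import math

def rho_eigenvalues(n, eta, kmax=400):
    """Eigenvalues lambda_k of the reduced density matrix of Equation (57)."""
    t2 = math.tanh(eta) ** 2
    pref = (1.0 / math.cosh(eta)) ** (2 * (n + 1))
    return [pref * math.comb(n + k, k) * t2 ** k for k in range(kmax)]

eta, n = 0.8, 2
lam = rho_eigenvalues(n, eta)
trace_rho = sum(lam)                    # Tr(rho): binomial series sums to 1
trace_rho2 = sum(x * x for x in lam)    # Tr(rho^2) of Equation (58), below 1
print(trace_rho, trace_rho2)
```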
+
+which is less than one. This is due to the fact that we do not know how to deal with the time-like separation in the present formulation of quantum mechanics. Our knowledge is less than complete.
+---PAGE_BREAK---
+
+The standard way to measure this ignorance is to calculate the entropy defined as
+
+$$S = -\mathrm{Tr}(\rho \ln(\rho)) \qquad (59)$$
+
+If we pretend to know the distribution along the time-like direction and use the pure-state density matrix given in Equation (55), then the entropy is zero. However, if we do not know how to deal with the distribution along $t$, then we should use the density matrix of Equation (57) to calculate the entropy, and the result is
+
+$$S = 2(n+1) \left\{ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right\} \\ - \left( \frac{1}{\cosh \eta} \right)^{2(n+1)} \sum_k \frac{(n+k)!}{n!k!} \ln \left[ \frac{(n+k)!}{n!k!} \right] (\tanh \eta)^{2k} \qquad (60)$$
+
+In terms of the velocity $v$ of the hadron,
+
+$$S = -(n+1) \left\{ \ln \left[ 1 - \left( \frac{v}{c} \right)^2 \right] + \frac{(v/c)^2 \ln(v/c)^2}{1 - (v/c)^2} \right\} \\ - \left[ 1 - \left( \frac{v}{c} \right)^2 \right]^{n+1} \sum_k \frac{(n+k)!}{n!k!} \ln \left[ \frac{(n+k)!}{n!k!} \right] \left( \frac{v}{c} \right)^{2k} \qquad (61)$$
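The entropy can be cross-checked numerically. The sketch below compares the direct von Neumann sum $-\sum_k \lambda_k \ln \lambda_k$ over the eigenvalues of Equation (57) with the closed form of Equation (60), written with the squared hyperbolic sine so that for $n = 0$ (where the binomial logarithms vanish) it reduces to Equation (63).

```python
import math

def eigenvalues(n, eta, kmax=400):
    t2 = math.tanh(eta) ** 2
    pref = (1.0 / math.cosh(eta)) ** (2 * (n + 1))
    return [pref * math.comb(n + k, k) * t2 ** k for k in range(kmax)]

def entropy_direct(n, eta):
    """von Neumann entropy -sum(lam ln lam) from the eigenvalues of Eq. (57)."""
    return -sum(x * math.log(x) for x in eigenvalues(n, eta) if x > 0.0)

def entropy_closed(n, eta, kmax=400):
    """Closed form of Equation (60), with the squared hyperbolic sine
    (consistent with the n = 0 result of Equation (63))."""
    ch, sh = math.cosh(eta), math.sinh(eta)
    t2 = math.tanh(eta) ** 2
    s = 2 * (n + 1) * (ch ** 2 * math.log(ch) - sh ** 2 * math.log(sh))
    pref = (1.0 / ch) ** (2 * (n + 1))
    s -= pref * sum(math.comb(n + k, k) * math.log(math.comb(n + k, k)) * t2 ** k
                    for k in range(kmax))
    return s

eta, n = 0.8, 2
print(entropy_direct(n, eta), entropy_closed(n, eta))
```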
+
+Let us go back to the wave function given in Equation (41). As is illustrated in Figure 3, its localization property is dictated by the Gaussian factor which corresponds to the ground-state wave function. For this reason, we expect that much of the behavior of the density matrix or the entropy for the $n^{th}$ excited state will be the same as that for the ground state with $n=0$. For this state, the density matrix and the entropy are
+
+$$\rho(z,z') = \left(\frac{1}{\pi \cosh(2\eta)}\right)^{1/2} \exp\left\{-\frac{1}{4}\left[\frac{(z+z')^2}{\cosh(2\eta)} + (z-z')^2 \cosh(2\eta)\right]\right\} \qquad (62)$$
+
+and
+
+$$S = 2 \left\{ (\cosh \eta)^2 \ln(\cosh \eta) - (\sinh \eta)^2 \ln(\sinh \eta) \right\} \qquad (63)$$
+
+respectively. The quark distribution $\rho(z, z)$ becomes
+
+$$\rho(z, z) = \left( \frac{1}{\pi \cosh(2\eta)} \right)^{1/2} \exp \left( \frac{-z^2}{\cosh(2\eta)} \right) \qquad (64)$$
+
+The width of the distribution becomes $\sqrt{\cosh(2\eta)}$, which means the distribution becomes wide-spread as the hadronic speed increases. Likewise, the momentum distribution becomes wide-spread [5,29]. This simultaneous increase in the widths of the position and momentum distributions is called the parton phenomenon in high-energy physics [13,14]. The position-momentum uncertainty becomes $\cosh(2\eta)$. This increase in uncertainty is due to our ignorance about the physical but unmeasurable time-separation variable.
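As a check, the second moment of the Gaussian in Equation (64) is $\cosh(2\eta)/2$, so the RMS width grows like $\sqrt{\cosh(2\eta)}$. A numerical sketch:

```python
import math

def width_squared(eta, zmax=20.0, steps=40001):
    """Numerical second moment <z^2> of the distribution of Equation (64)."""
    c = math.cosh(2 * eta)
    norm = 1.0 / math.sqrt(math.pi * c)
    dz = 2 * zmax / (steps - 1)
    m0 = m2 = 0.0
    for i in range(steps):
        z = -zmax + i * dz
        w = norm * math.exp(-z * z / c)
        m0 += w * dz
        m2 += z * z * w * dz
    return m2 / m0

eta = 0.6
print(width_squared(eta), math.cosh(2 * eta) / 2)
```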
+
+Let us next examine how this ignorance will lead to the concept of temperature. For the Lorentz-boosted ground state with $n=0$, the density matrix of Equation (62) becomes that of the harmonic oscillator in a thermal equilibrium state if $(\tanh\eta)^2$ is identified as the Boltzmann factor [29]. For other states, it is very difficult, if not impossible, to describe them as thermal equilibrium states. Unlike the case of temperature, the entropy is clearly defined for all values of $n$. Indeed, the entropy in this case is derivable directly from the hadronic speed.
+
+The time-separation variable exists in the Lorentz-covariant world, but we pretend not to know about it. It thus is in Feynman's rest of the universe. If we do not measure this time-separation, it becomes translated into the entropy.
+---PAGE_BREAK---
+
+Figure 3. Localization property in the $zt$ plane. When the hadron is at rest, the Gaussian form is concentrated within a circular region specified by $(z+t)^2 + (z-t)^2 = 1$. As the hadron gains speed, the region becomes deformed to $e^{-2\eta}(z+t)^2 + e^{2\eta}(z-t)^2 = 1$. Since it is not possible to make measurements along the $t$ direction, we have to deal with information that is less than complete.
+
+Figure 4. The uncertainty from the hidden time-separation coordinate. The small circle indicates the minimal uncertainty when the hadron is at rest. More uncertainty is added when the hadron moves. This is illustrated by a larger circle. The radius of this circle increases by $\sqrt{\cosh(2\eta)}$.
+
+We can see the uncertainty in our measurement process from the Wigner function defined as
+
+$$W(z,p) = \frac{1}{\pi} \int \rho(z+y,z-y)e^{2ipy} dy \quad (65)$$
+
+After integration, this Wigner function becomes
+
+$$W(z,p) = \frac{1}{\pi \cosh(2\eta)} \exp \left\{ - \left( \frac{z^2 + p^2}{\cosh(2\eta)} \right) \right\} \quad (66)$$
+
+This Wigner phase distribution is illustrated in Figure 4. The smaller inner circle corresponds to the minimal uncertainty of the single oscillator. The larger circle is for the total uncertainty including the statistical uncertainty from our failure to observe the time-separation variable. The two-mode squeezed state tells us how this happens. In the two-mode case, both the first and second photons are observable, but we can choose not to observe the second photon.
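The passage from Equation (62) to Equation (66) is a Gaussian integral and can be verified numerically. The sketch below evaluates the Wigner integral with the kernel $e^{2ipy}$ (the convention that reproduces the normalization of Equation (66)) and compares it with the closed form at a sample phase-space point.

```python
import math

def wigner_numeric(z, p, eta, ymax=12.0, steps=40001):
    """W(z,p) = (1/pi) * integral of rho(z+y, z-y) exp(2ipy) dy, rho from Eq. (62)."""
    c = math.cosh(2 * eta)
    dy = 2 * ymax / (steps - 1)
    total = 0.0
    for i in range(steps):
        y = -ymax + i * dy
        # Equation (62) with z + z' = 2z and z - z' = 2y
        rho = math.exp(-(z * z / c + y * y * c)) / math.sqrt(math.pi * c)
        total += rho * math.cos(2 * p * y) * dy  # odd imaginary part drops out
    return total / math.pi

def wigner_closed(z, p, eta):
    """Closed form of Equation (66)."""
    c = math.cosh(2 * eta)
    return math.exp(-(z * z + p * p) / c) / (math.pi * c)

print(wigner_numeric(0.5, 0.3, 0.6), wigner_closed(0.5, 0.3, 0.6))
```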
+
+## 7. Lorentz-Covariant Quark Model
+
+The hydrogen atom played the pivotal role while the present form of quantum mechanics was being developed. At that time, the proton was assumed to be fixed in an absolute Galilean frame of reference, and it was unthinkable that the proton could move with a speed close to that of light.
+
+Also, at that time, both the proton and electron were regarded as point particles. However, the discovery of Hofstadter *et al*. in 1955 changed this picture of the proton [30]. The proton charge has an internal
+---PAGE_BREAK---
+
+distribution. Within the framework of quantum electrodynamics, it is possible to calculate the Rutherford formula for the electron-proton scattering when both electron and proton are point particles. Because the proton is not a point particle, there is a deviation from the Rutherford formula. We describe this deviation using the formula called the “proton form factor” which depends on the momentum transfer during the electron-proton scattering.
+
+Indeed, the study of the proton form factor has been and still is one of the central issues in high-energy physics. The form factor decreases as the momentum transfer increases. Its behavior is called the “dipole cut-off” meaning an inverse-square decrease, and it has been a challenging problem in quantum field theory and other theoretical models [31]. Since the emergence of the quark model in 1964 [32], the hadrons are regarded as quantum bound states of quarks with space-time wave functions. Thus, the quark model is responsible for explaining this form factor. There are indeed many papers written on this subject. We shall return to this problem in Subsection 7.2.
+
+Another problem in high-energy physics is Feynman's parton picture [13,14]. If the hadron is at rest, we can approach this problem within the framework of bound-state quantum mechanics. If it moves with a speed close to that of light, it appears as a collection of an infinite number of partons, which interact with external signals incoherently. This phenomenon raises the question of whether the Lorentz boost destroys quantum coherence [33]. This leads to the concept of Feynman's decoherence [34]. We shall discuss this problem first.
+
+## 7.1. Feynman's Parton Picture and Feynman's Decoherence
+
+In 1969, Feynman observed that a fast-moving hadron can be regarded as a collection of many “partons” whose properties appear to be quite different from those of the quarks [5,14]. For example, the number of quarks inside a static proton is three, while the number of partons in a rapidly moving proton appears to be infinite. The question then is how the proton, which looks like a bound state of quarks to one observer, can appear so different to an observer in a different Lorentz frame. Feynman made the following systematic observations.
+
+a. The picture is valid only for hadrons moving with velocity close to that of light.
+
+b. The interaction time between the quarks becomes dilated, and partons behave as free independent particles.
+
+c. The momentum distribution of partons becomes widespread as the hadron moves fast.
+
+d. The number of partons seems to be infinite or much larger than that of quarks.
+
+Because the hadron is believed to be a bound state of two or three quarks, each of the above phenomena appears as a paradox, particularly (b) and (c) together. How can a free particle have a wide-spread momentum distribution?
+
+In order to address this question, let us go to Figure 5, which illustrates the Lorentz-squeeze property of the hadron as the hadron gains its speed. If we use the harmonic oscillator wave function, its momentum-energy wave function takes the same form as the space-time wave function. As the hadron gains its speed, both wave functions become squeezed.
+
+As the wave function becomes squeezed, the distribution becomes wide-spread, and the spring constant appears to become weaker. Consequently, the constituent quarks appear to become free particles.
+
+If the constituent particles are confined in the narrow elliptic region, they behave like massless particles. If those massless particles have a wide-spread momentum distribution, the system resembles black-body radiation, with its infinite number of photons.
+
+We have addressed this question extensively in the literature, and concluded that Gell-Mann's quark model and Feynman's parton model are two different manifestations of the same Lorentz-covariant quantity [19,35,36]. Thus, coherent quarks and incoherent partons are perfectly consistent within the framework of quantum mechanics and special relativity [33]. Indeed, this defines Feynman's decoherence [34].
+---PAGE_BREAK---
+
+**Figure 5.** Lorentz-squeezed space-time and momentum-energy wave functions. As the hadron’s speed approaches that of light, both wave functions become concentrated along their respective positive light-cone axes. These light-cone concentrations lead to Feynman’s parton picture.
+
+More recently, we were able to explain this decoherence problem in terms of the interaction time among the constituent quarks and the time required for each quark to interact with external signals [4].
+
+## 7.2. Proton Form Factors and Lorentz Coherence
+
+As early as 1970, Fujimura et al. calculated the electromagnetic form factor of the proton using the wave functions given in this paper and obtained the so-called “dipole” cut-off of the form factor [37]. At that time, these authors did not have the benefit of the differential equation of Feynman and his co-authors [12]. Since their wave functions can now be given a bona-fide covariant probability interpretation, their calculation can be placed between the two limiting cases of quarks and partons.
+
+Even before the calculation of Fujimura et al., the covariant wave functions were discussed by various authors [38–40]. In 1970, Licht and Pagnamenta also discussed this problem with Lorentz-contracted wave functions [41].
+
+In our 1973 paper [42], we attempted to explain the covariant oscillator wave function in terms of the coherence between the incoming signal and the width of the contracted wave function. This aspect was explained in terms of the overlap of the energy-momentum wave function in our book [5].
+
+In this paper, we would like to go back to the coherence problem we raised in 1973 and follow up on it. In the Lorentz frame where the momentum of the proton has the opposite signs before and after the collision, the four-momentum transfer is
+
+$$ (p, E) - (-p, E) = (2p, 0) \qquad (67) $$
+
+where the proton comes along the z direction with its momentum $p$, and its energy $\sqrt{p^2 + m^2}$.
+---PAGE_BREAK---
+
+Then the form factor becomes
+
+$$F(p) = \int e^{2ipz} (\psi_{\eta}(z,t))^* \psi_{-\eta}(z,t) dz dt \quad (68)$$
+
+If we use the ground-state oscillator wave function, this integral becomes
+
+$$\frac{1}{\pi} \int e^{2ipz} \exp \left\{ -\cosh(2\eta) (z^2 + t^2) \right\} dz dt \quad (69)$$
+
+After the $t$ integration, this integral becomes
+
+$$\frac{1}{\sqrt{\pi \cosh(2\eta)}} \int e^{2ipz} \exp \{-z^2 \cosh(2\eta)\} dz \quad (70)$$
+
+The integrand is a product of a Gaussian factor and a sinusoidal oscillation. The width of the Gaussian factor shrinks like $1/\sqrt{\cosh(2\eta)}$, which behaves as $\exp(-\eta)$ as $\eta$ becomes large. The wavelength of the sinusoidal factor is inversely proportional to the momentum $p$, and it also decreases at the rate of $\exp(-\eta)$. Thus, the rate of shrinkage is the same for the Gaussian and sinusoidal factors. For this reason, the cutoff of the form factor of Equation (68) should be less severe than that for
+
+$$\int e^{2ipz} (\psi_0(z,t))^* \psi_0(z,t) dz dt = \frac{1}{\sqrt{\pi}} \int e^{2ipz} \exp(-z^2) dz \quad (71)$$
+
+which corresponds to the form factor without the squeeze effect on the wave function. The integration of this expression leads to $\exp(-p^2)$, which corresponds to an exponential cut-off as $p^2$ becomes large. Let us go back to the form factor of Equation (68). If we complete the integral, it becomes
+
+$$F(p) = \frac{1}{\cosh(2\eta)} \exp \left\{ \frac{-p^2}{\cosh(2\eta)} \right\} \quad (72)$$
+
+As $p^2$ becomes large, the Gaussian factor approaches a constant, since $\cosh(2\eta)$ grows in proportion to $p^2$. The factor $1/\cosh(2\eta)$ then leads to a form-factor decrease like $1/p^2$, which is a much slower decrease than the exponential cut-off obtained without the squeeze effect.
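The step from Equation (70) to Equation (72) can be confirmed numerically. The sketch below uses the prefactor $1/\sqrt{\pi\cosh(2\eta)}$ that the $t$ integration produces, and keeps only the real part of $e^{2ipz}$, since the imaginary part integrates to zero by symmetry.

```python
import math

def form_factor_numeric(p, eta, zmax=15.0, steps=40001):
    """Numerical evaluation of the integral in Equation (70)."""
    c = math.cosh(2 * eta)
    dz = 2 * zmax / (steps - 1)
    total = 0.0
    for i in range(steps):
        z = -zmax + i * dz
        total += math.cos(2 * p * z) * math.exp(-z * z * c) * dz
    return total / math.sqrt(math.pi * c)

def form_factor_closed(p, eta):
    """Closed form of Equation (72)."""
    c = math.cosh(2 * eta)
    return math.exp(-p * p / c) / c

print(form_factor_numeric(1.2, 0.7), form_factor_closed(1.2, 0.7))
```

At $\eta = 0$ this reduces to the unsqueezed result $\exp(-p^2)$ of Equation (71).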
+
+There still is a gap between this mathematical formula and the observed experimental data. Before looking at the experimental curve, we have to realize that there are three quarks inside the hadron with two oscillator modes. This will lead to a $(1/p^2)^2$ cut-off, which is commonly called the dipole cut-off in the literature.
+
+There is still more work to be done. For instance, the effect of the quark spin should be addressed [43,44]. Also there are reports of deviations from the exact dipole cut-off [45]. There have been attempts to study the form factors based on the four-dimensional rotation group [46], and also on the lattice QCD [47].
+
+Yet, it is gratifying to note that the effect of Lorentz squeeze leads to the polynomial decrease in the momentum transfer, thanks to the Lorentz coherence illustrated in Figure 6. We started our logic from the fundamental principles of quantum mechanics and relativity.
+
+## 8. Conclusions
+
+In this paper, we presented one mathematical formalism applicable both to the entanglement problems in quantum optics [3] and to high-energy hadronic physics [4]. The formalism is based on harmonic oscillators familiar to us. We have presented a complete orthonormal set with a Lorentz-covariant probability interpretation.
+
+Since both branches of physics share the same mathematical base, it is possible to translate physics from one branch to the other. In this paper, we have given a physical interpretation to the
+---PAGE_BREAK---
+
+**Figure 6.** Coherence between the wavelength and the proton size. As the momentum transfer increases, the external signal sees Lorentz-contracting proton distribution. On the other hand, the wavelength of the signal also decreases. Thus, the cutoff is not as severe as the case where the proton distribution is not contracted.
+
+time-separation variable as a hidden variable in Feynman's rest of the universe, in terms of the two-mode squeezed state where both photons are observable.
+
+This paper is largely a review paper, organized to suit the current interest in physics. For instance, the concepts of entanglement and decoherence did not exist when those original papers were written. Furthermore, the probability interpretation given in Subsection 4.2 has not been published before.
+
+The rotation symmetry plays its role in all branches of physics. We noted that the squeeze symmetry plays active roles in two different subjects of physics. It is possible that the squeeze transformation can serve useful purposes in many other fields, although we are not able to specify them at this time.
+
+References
+
+1. Kim, Y.S.; Noz, M.E. *Phase Space Picture of Quantum Mechanics*; World Scientific Publishing Company: Singapore, 1991.
+
+2. Guillemin, V.; Sternberg, S. *Symplectic Techniques in Physics*; Cambridge University: Cambridge, UK, 1984.
+
+3. Giedke, G.; Wolf, M.M.; Krüger, O.; Werner, R.F.; Cirac, J.I. Entanglement of formation for symmetric Gaussian states. *Phys. Rev. Lett.* **2003**, *91*, 107901-107904.
+
+4. Kim, Y.S.; Noz, M.E. Coupled oscillators, entangled oscillators, and Lorentz-covariant harmonic oscillators. *J. Opt. B: Quantum Semiclass. Opt.* **2005**, *7*, S458-S467.
+
+5. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1986.
+
+6. Dirac, P.A.M. Forms of Relativistic Dynamics. *Rev. Mod. Phys.* **1949**, *21*, 392-399.
+
+7. Feynman, R.P. *Statistical Mechanics*; Benjamin/Cummings: Reading, MA, USA, 1972.
+
+8. Dirac, P.A.M. The Quantum Theory of the Emission and Absorption of Radiation. Proc. Roy. Soc. (London) **1927**, A114, 243-265.
+
+9. Dirac, P.A.M. Unitary Representations of the Lorentz Group. Proc. Roy. Soc. (London) **1945**, A183, 284-295.
+
+10. Dirac, P.A.M. A Remarkable Representation of the 3 + 2 de Sitter Group. J. Math. Phys. **1963**, *4*, 901-909.
+
+11. Wigner, E. On Unitary Representations of the Inhomogeneous Lorentz Group. Ann. Math. **1939**, *40*, 149-204.
+---PAGE_BREAK---
+
+12. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current Matrix Elements from a Relativistic Quark Model. Phys. Rev. D **1971**, *3*, 2706-2732.
+
+13. Feynman, R.P. Very High-Energy Collisions of Hadrons. Phys. Rev. Lett. **1969**, *23*, 1415-1417.
+
+14. Feynman, R.P. The Behavior of Hadron Collisions at Extreme Energies in High-Energy Collisions. In Proceedings of the Third International Conference; Gordon and Breach: New York, NY, USA, 1969; pp. 237-249.
+
+15. Kim, Y.S.; Noz, M.E.; Oh, S.H. Representations of the Poincaré group for relativistic extended hadrons. J. Math. Phys. **1979**, *20*, 1341-1344.
+
+16. Kim, Y.S.; Noz, M.E.; Oh, S.H. A Simple Method for Illustrating the Difference between the Homogeneous and Inhomogeneous Lorentz Groups. Am. J. Phys. **1979**, *47*, 892-897.
+
+17. Rotbart, F.C. Complete orthogonality relations for the covariant harmonic oscillator. Phys. Rev. D **1981**, *12*, 3078-3090.
+
+18. Ruiz, M.J. Orthogonality relations for covariant harmonic oscillator wave functions. Phys. Rev. D **1974**, *10*, 4306-4307.
+
+19. Kim, Y.S.; Noz, M.E. Covariant Harmonic Oscillators and the Parton Picture. Phys. Rev. D **1977**, *15*, 335-338.
+
+20. Yuen, H.P. Two-photon coherent states of the radiation field. Phys. Rev. A **1976**, *13*, 2226-2243.
+
+21. Yurke, B.; Potasek, M. Obtainment of Thermal Noise from a Pure State. Phys. Rev. A **1987**, *36*, 3464-3466.
+
+22. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. Am. J. Phys. **1989**, *57*, 692-697.
+
+23. Kim, Y.S.; Noz, M.E. The Question of Simultaneity in Relativity and Quantum Mechanics. In Quantum Theory: Reconsideration of Foundations-3; Adenier, G., Khrennikov, A., Nieuwenhuizen, T.M., Eds.; AIP Conference Proceedings 180, American Institute of Physics, College Park, MD, USA, 2006; pp. 168-178.
+
+24. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative Example of Feynman's Rest of the Universe. Am. J. Phys. **1999**, *67*, 61-66.
+
+25. von Neumann, J. *Mathematische Grundlagen der Quantenmechanik*; Springer: Berlin, Germany, 1932.
+
+26. Fano, U. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. Rev. Mod. Phys. **1957**, *29*, 74-93.
+
+27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz Transformations. Phys. Lett. A **1990**, *147*, 343-347.
+
+28. Kim, Y.S. Coupled oscillators and Feynman's three papers. J. Phys. Conf. Ser. **2007**, *70*, 012010: 1-19.
+
+29. Han, D.; Kim, Y.S.; Noz, M.E. Lorentz-Squeezed Hadrons and Hadronic Temperature. Phys. Lett. A **1990**, *144*, 111-115.
+
+30. Hofstadter, R.; McAllister, R.W. Electron Scattering from the Proton. Phys. Rev. **1955**, *98*, 217-218.
+
+31. Frazer, W.; Fulco, J. Effect of a Pion-Pion Scattering Resonance on Nucleon Structure. Phys. Rev. Lett. **1960**, *2*, 365-368.
+
+32. Gell-Mann, M. Nonleptonic Weak Decays and the Eightfold Way. Phys. Lett. **1964**, *12*, 155-156.
+
+33. Kim, Y.S. Does Lorentz Boost Destroy Coherence? Fortschr. der Physik **1998**, *46*, 713-724.
+
+34. Kim, Y.S.; Noz, M.E. Feynman's Decoherence. Optics Spectro. **2003**, *47*, 733-740.
+
+35. Hussar, P.E. Valons and harmonic oscillators. Phys. Rev. D **1981**, *23*, 2781-2783.
+
+36. Kim, Y.S. Observable gauge transformations in the parton picture. Phys. Rev. Lett. **1989**, *63*, 348-351.
+
+37. Fujimura, K.; Kobayashi, T.; Namiki, M. Nucleon Electromagnetic Form Factors at High Momentum Transfers in an Extended Particle Model Based on the Quark Model. Prog. Theor. Phys. **1970**, *43*, 73-79.
+
+38. Yukawa, H. Structure and Mass Spectrum of Elementary Particles. I. General Considerations. Phys. Rev. **1953**, *91*, 415-416.
+
+39. Markov, M. On Dynamically Deformable Form Factors in the Theory Of Particles. Suppl. Nuovo Cimento **1956**, *3*, 760-772.
+
+40. Ginzburg, V.L.; Man'ko, V.I. Relativistic oscillator models of elementary particles. Nucl. Phys. **1965**, *74*, 577-588.
+
+41. Licht, A.L.; Pagnamenta, A. Wave Functions and Form Factors for Relativistic Composite Particles I. Phys. Rev. D **1970**, *2*, 1150-1156.
+
+42. Kim, Y.S.; Noz, M.E. Covariant harmonic oscillators and the quark model. Phys. Rev. D **1973**, *8*, 3521-3627.
+
+43. Lipes, R. Electromagnetic Excitations of the Nucleon in a Relativistic Quark Model. Phys. Rev. D **1972**, *5*, 2849-2863.
+
+44. Henriques, A.B.; Keller, B.H.; Moorhouse, R.G. General three-spinor wave functions and the relativistic quark model. Ann. Phys. (NY) **1975**, *93*, 125-151.
+---PAGE_BREAK---
+
+45. Punjabi, V.; Perdrisat, C.F.; Aniol, K.A.; Baker, F.T.; Berthot, J.; Bertin, P.Y.; Bertozzi, W.; Besson, A.; Bimbot, L.; Boeglin, W.U.; et al. Proton elastic form factor ratios to Q2 = 3.5 GeV2 by polarization transfer. *Phys. Rev. C* **2005**, *71*, 055202-27.
+
+46. Alkofer, R.; Holl, A.; Kloker, M.; Krassnigg, A.; Roberts, C.D. On Nucleon Electromagnetic Form Factors. *Few-Body Sys.* **2005**, *37*, 1-31.
+
+47. Matevosyan, H.H.; Thomas, A.W.; Miller, G.A. Study of lattice QCD form factors using the extended Gari-Krumpelmann model. *Phys. Rev. C* **2005**, *72*, 065204-5.
+
+© 2011 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
+
+Article
+
+Dirac Matrices and Feynman's Rest of the Universe
+
+Young S. Kim ¹,* and Marilyn E. Noz ²
+
+¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA
+
+² Department of Radiology, New York University, New York, NY 10016, USA; marilyne.noz@gmail.com
+
+* Author to whom correspondence should be addressed; yskim@umd.edu; Tel.: +1-301-937-6306.
+
+Received: 25 June 2012; in revised form: 6 October 2012; Accepted: 23 October 2012; Published: 30 October 2012
+
+**Abstract:** There are two sets of four-by-four matrices introduced by Dirac. The first set consists of fifteen Majorana matrices derivable from his four $\gamma$ matrices. These fifteen matrices can also serve as the generators of the group $SL(4, r)$. The second set consists of ten generators of the $Sp(4)$ group which Dirac derived from two coupled harmonic oscillators. It is shown possible to extend the symmetry of $Sp(4)$ to that of $SL(4, r)$ if the area of the phase space of one of the oscillators is allowed to become smaller without a lower limit. While there are no restrictions on the size of phase space in classical mechanics, Feynman's rest of the universe makes this $Sp(4)$-to-$SL(4, r)$ transition possible. The ten generators are for the world where quantum mechanics is valid. The remaining five generators belong to the rest of the universe. It is noted that the groups $SL(4, r)$ and $Sp(4)$ are locally isomorphic to the Lorentz groups $O(3, 3)$ and $O(3, 2)$ respectively. This allows us to interpret Feynman's rest of the universe in terms of space-time symmetry.
+
+**Keywords:** Dirac gamma matrices; Feynman's rest of the universe; two coupled oscillators; Wigner's phase space; non-canonical transformations; group generators; $SL(4, r)$ isomorphic $O(3, 3)$; quantum mechanics interpretation
+
+# 1. Introduction
+
+In 1963, Paul A. M. Dirac published an interesting paper on the coupled harmonic oscillators [1]. Using step-up and step-down operators, Dirac was able to construct ten operators satisfying a closed set of commutation relations. He then noted that this set of commutation relations can also be used as the Lie algebra for the $O(3, 2)$ de Sitter group applicable to three space and two time dimensions. He noted further that this is the same as the Lie algebra for the four-dimensional symplectic group $Sp(4)$.
+
+His algebra later became the fundamental mathematical language for two-mode squeezed states in quantum optics [2–5]. Thus, Dirac’s ten oscillator matrices play a fundamental role in modern physics.
+
+In the Wigner phase-space representation, it is possible to write the Wigner function in terms of two position and two momentum variables. It was noted that those ten operators of Dirac can be translated into the operators with these four variables [4,6], which then can be written as four-by-four matrices. There are thus ten four-by-four matrices. We shall call them Dirac’s oscillator matrices. They are indeed the generators of the symplectic group $Sp(4)$.
+
+We are quite familiar with four Dirac matrices for the Dirac equation, namely $\gamma_1, \gamma_2, \gamma_3$, and $\gamma_0$. They all become imaginary in the Majorana representation. From them we can construct fifteen linearly independent four-by-four matrices. It is known that these four-by-four matrices can serve as the generators of the $SL(4, r)$ group [6,7]. It is also known that this $SL(4, r)$ group is locally isomorphic to the Lorentz group $O(3, 3)$ applicable to the three space and three time dimensions [6,7].
+
+There are now two sets of the four-by-four matrices constructed by Dirac. The first set consists of his ten oscillator matrices, and there are fifteen $\gamma$ matrices coming from his Dirac equation. There is
+---PAGE_BREAK---
+
+thus a difference of five matrices. The question is then whether this difference can be explained within
+the framework of the oscillator formalism with tangible physics.
+
+It was noted that his original $O(3,2)$ symmetry can be extended to that of the $O(3,3)$ Lorentz group applicable to the six-dimensional space consisting of three space and three time dimensions. This requires the inclusion of non-canonical transformations in classical mechanics [6]. These non-canonical transformations cannot be interpreted in terms of the present form of quantum mechanics.
+
+On the other hand, we can use this non-canonical effect to illustrate the concept of Feynman's rest of the universe. This oscillator system can serve as two different worlds. The first oscillator is the world in which we do quantum mechanics, and the second is for the rest of the universe. Our failure to observe the second oscillator results in the increase in the size of the Wigner phase space, thus increasing the entropy [8].
+
+Instead of ignoring the second oscillator, it is of interest to see what happens to it. In this paper, it is shown that the phase-space area of the second oscillator can become smaller without a lower limit, as if Planck's constant did not bound it from below. This is allowed in classical mechanics, but not in quantum mechanics.
+
+Indeed, Dirac's ten oscillator matrices explain the quantum world for both oscillators. The set of Dirac's fifteen $\gamma$ matrices contains his ten oscillator matrices as a subset. We discuss in this paper the physics of this difference.
+
+In Section 2, we start with Dirac’s four $\gamma$ matrices in the Majorana representation and construct all fifteen four-by-four matrices applicable to the Majorana form of the Dirac spinors. Section 3 reproduces Dirac’s derivation of the $O(3,2)$ symmetry with ten generators from two coupled oscillators. This group is locally isomorphic to $Sp(4)$, which allows canonical transformations in classical mechanics.
+
+In Section 4, we translate Dirac’s formalism into the language of the Wigner phase space.
+This allows us to extend the $Sp(4)$ symmetry into the non-canonical region in classical mechanics.
+The resulting symmetry is that of $SL(4,r)$, isomorphic to that of the Lorentz group $O(3,3)$ with fifteen
+generators. This allows us to establish the correspondence between Dirac’s Majorana matrices with
+those $SL(4,r)$ four-by-four matrices applicable to the two oscillator system, as well as the fifteen
+six-by-six matrices that serve as the generators of the $O(3,3)$ group.
+
+Finally, in Section 5, it is shown that the difference between the ten oscillator matrices and the
+fifteen Majorana matrices can serve as an illustrative example of Feynman’s rest of the universe [8,9].
+
+## 2. Dirac Matrices in the Majorana Representation
+
+Since all the generators for the two coupled oscillator system can be written as four-by-four
+matrices with imaginary elements, it is convenient to work with Dirac matrices in the Majorana
+representation, where all the elements are imaginary [7,10,11]. In the Majorana representation,
+the four Dirac $\gamma$ matrices are
+
+$$ \gamma_1 = i \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, \quad \gamma_2 = \begin{pmatrix} 0 & -\sigma_2 \\ \sigma_2 & 0 \end{pmatrix} $$
+
+$$ \gamma_3 = -i \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, \quad \gamma_0 = \begin{pmatrix} 0 & \sigma_2 \\ \sigma_2 & 0 \end{pmatrix} \qquad (1) $$
+
+where
+
+$$ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$
+
+These $\gamma$ matrices transform like a four-vector under Lorentz transformations. From these four matrices, we can construct one pseudo-scalar matrix
+
+$$ \gamma_5 = i\gamma_0\gamma_1\gamma_2\gamma_3 = \begin{pmatrix} \sigma_2 & 0 \\ 0 & -\sigma_2 \end{pmatrix} \qquad (2) $$
+---PAGE_BREAK---
+
+and a pseudo vector $i\gamma_5\gamma_\mu$ consisting of
+
+$$
+\begin{aligned}
+i\gamma_5\gamma_1 &= i \begin{pmatrix} -\sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, &
+i\gamma_5\gamma_2 &= -i \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \\
+i\gamma_5\gamma_0 &= i \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}, &
+i\gamma_5\gamma_3 &= i \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}
+\end{aligned} \qquad (3)
+$$
+
+In addition, we can construct an antisymmetric tensor from the $\gamma$ matrices as
+
+$$
+T_{\mu\nu} = \frac{i}{2} (\gamma_{\mu}\gamma_{\nu} - \gamma_{\nu}\gamma_{\mu}) \quad (4)
+$$
+
+This antisymmetric tensor has six components. They are
+
+$$
+i\gamma_0\gamma_1 = -i \begin{pmatrix} 0 & \sigma_1 \\ \sigma_1 & 0 \end{pmatrix}, i\gamma_0\gamma_2 = -i \begin{pmatrix} -I & 0 \\ 0 & I \end{pmatrix}, i\gamma_0\gamma_3 = -i \begin{pmatrix} 0 & \sigma_3 \\ \sigma_3 & 0 \end{pmatrix} \quad (5)
+$$
+
+and
+
+$$
+i\gamma_1\gamma_2 = i \begin{pmatrix} 0 & -\sigma_1 \\ \sigma_1 & 0 \end{pmatrix}, i\gamma_2\gamma_3 = -i \begin{pmatrix} 0 & -\sigma_3 \\ \sigma_3 & 0 \end{pmatrix}, i\gamma_3\gamma_1 = \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix} \quad (6)
+$$
+
+There are now fifteen linearly independent four-by-four matrices. They are all traceless and their components are imaginary [7]. We shall call these Dirac's Majorana matrices.
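+That these fifteen matrices are indeed traceless, purely imaginary, and linearly independent can be checked directly; a short numpy sketch (ours, with the $\gamma$ matrices as defined above):

```python
import numpy as np
from itertools import combinations

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

g1 = 1j * np.block([[s3, Z], [Z, s3]])
g2 = np.block([[Z, -s2], [s2, Z]])
g3 = -1j * np.block([[s1, Z], [Z, s1]])
g0 = np.block([[Z, s2], [s2, Z]])
g5 = 1j * g0 @ g1 @ g2 @ g3
gam = [g0, g1, g2, g3]

# the fifteen matrices: four gamma_mu, gamma_5, four i*gamma_5*gamma_mu,
# and the six tensor components i*gamma_mu*gamma_nu of Equations (5) and (6)
fifteen = gam + [g5] + [1j * g5 @ g for g in gam]
for mu, nu in combinations(range(4), 2):
    fifteen.append(1j * gam[mu] @ gam[nu])

for m in fifteen:
    assert abs(np.trace(m)) < 1e-12   # traceless
    assert np.allclose(m.real, 0)     # purely imaginary

# linear independence: the 15 flattened matrices span a 15-dimensional space
stack = np.array([m.flatten() for m in fifteen])
assert np.linalg.matrix_rank(stack) == 15
print("15 traceless, imaginary, linearly independent matrices")
```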
+
+In 1963 [1], Dirac constructed another set of four-by-four matrices from the system of two coupled harmonic oscillators, within the framework of quantum mechanics. He ended up with ten such matrices. It is of interest to compare these ten oscillator matrices with his fifteen Majorana matrices.
+
+## 3. Dirac’s Coupled Oscillators
+
+In his 1963 paper [1], Dirac started with the Hamiltonian for two harmonic oscillators. It can be written as
+
+$$
+H = \frac{1}{2} (p_1^2 + x_1^2) + \frac{1}{2} (p_2^2 + x_2^2) \tag{7}
+$$
+
+The ground-state wave function for this Hamiltonian is
+
+$$
+\psi_0(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{2} (x_1^2 + x_2^2) \right\} \qquad (8)
+$$
+
+We can now consider unitary transformations applicable to the ground-state wave function of
+Equation (8), and Dirac noted that those unitary transformations are generated by [1]
+
+$$
+\begin{align*}
+L_1 &= \frac{1}{2}(a_1^\dagger a_2 + a_2^\dagger a_1), & L_2 &= \frac{1}{2i}(a_1^\dagger a_2 - a_2^\dagger a_1) \\
+L_3 &= \frac{1}{2}(a_1^\dagger a_1 - a_2^\dagger a_2), & S_3 &= \frac{1}{2}(a_1^\dagger a_1 + a_2^\dagger a_2 + 1) \\
+K_1 &= -\frac{1}{4}(a_1^\dagger a_1^\dagger + a_1 a_1 - a_2^\dagger a_2^\dagger - a_2 a_2) \\
+K_2 &= \frac{i}{4}(a_1^\dagger a_1^\dagger - a_1 a_1 + a_2^\dagger a_2^\dagger - a_2 a_2) \\
+K_3 &= \frac{1}{2}(a_1^\dagger a_2^\dagger + a_1 a_2) \\
+Q_1 &= \frac{i}{4}(a_1^\dagger a_1^\dagger - a_1 a_1 - a_2^\dagger a_2^\dagger + a_2 a_2) \\
+Q_2 &= \frac{1}{4}(a_1^\dagger a_1^\dagger + a_1 a_1 + a_2^\dagger a_2^\dagger + a_2 a_2) \\
+Q_3 &= -\frac{i}{2}(a_1^\dagger a_2^\dagger - a_1 a_2)
+\end{align*}
+$$
+
+(9)
+
+where $a^\dagger$ and $a$ are the step-up and step-down operators applicable to harmonic oscillator wave functions. These operators satisfy the following set of commutation relations.
+
+$$
+\begin{align}
+[L_i, L_j] &= i\epsilon_{ijk}L_k, \quad [L_i, K_j] = i\epsilon_{ijk}K_k, \quad [L_i, Q_j] = i\epsilon_{ijk}Q_k \nonumber \\
+[K_i, K_j] &= [Q_i, Q_j] = -i\epsilon_{ijk}L_k, \quad [L_i, S_3] = 0 \nonumber \\
+[K_i, Q_j] &= -i\delta_{ij}S_3, \quad [K_i, S_3] = -iQ_i, \quad [Q_i, S_3] = iK_i \tag{10}
+\end{align}
+$$
+
+Dirac then determined that these commutation relations constitute the Lie algebra for the O(3,2) de Sitter group with ten generators. This de Sitter group is the Lorentz group applicable to three space coordinates and two time coordinates. Let us use the notation (x,y,z,t,s), with (x,y,z) as space coordinates and (t,s) as two time coordinates. Then the rotation around the z axis is generated by
+
+$$
+L_3 = \begin{pmatrix}
+0 & -i & 0 & 0 & 0 \\
+i & 0 & 0 & 0 & 0 \\
+0 & 0 & 0 & 0 & 0 \\
+0 & 0 & 0 & 0 & 0 \\
+0 & 0 & 0 & 0 & 0
+\end{pmatrix}
+\qquad (11)
+$$
+
+The generators $L_1$ and $L_2$ can also be constructed. The matrices $K_3$ and $Q_3$ take the form
+
+$$
+K_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i & 0 \\ 0 & 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, Q_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 \end{pmatrix} \tag{12}
+$$
+
+From these two matrices, the generators $K_1, K_2, Q_1, Q_2$ can be constructed. The generator $S_3$ can be written as
+
+$$
S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -i \\ 0 & 0 & 0 & i & 0 \end{pmatrix} \tag{13}
+$$
+
+The last five-by-five matrix generates rotations in the two-dimensional space of $(t,s)$.
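+A quick numerical check (a numpy sketch of ours) confirms that these five-by-five matrices obey the $i=j=3$ relations of Equation (10), with $S_3$ taken as the rotation generator in the $(t,s)$ plane:

```python
import numpy as np

# Five-by-five O(3,2) generators of Equations (11)-(13),
# rows/columns ordered as (x, y, z, t, s).
L3 = np.zeros((5, 5), dtype=complex)
L3[0, 1], L3[1, 0] = -1j, 1j

K3 = np.zeros((5, 5), dtype=complex)
K3[2, 3] = K3[3, 2] = 1j

Q3 = np.zeros((5, 5), dtype=complex)
Q3[2, 4] = Q3[4, 2] = 1j

# S3 rotates the (t, s) plane
S3 = np.zeros((5, 5), dtype=complex)
S3[3, 4], S3[4, 3] = -1j, 1j

def comm(a, b):
    return a @ b - b @ a

# [K3, Q3] = -i S3, [K3, S3] = -i Q3, [Q3, S3] = i K3  (Equation (10))
assert np.allclose(comm(K3, Q3), -1j * S3)
assert np.allclose(comm(K3, S3), -1j * Q3)
assert np.allclose(comm(Q3, S3), 1j * K3)
# L3 acts on (x, y) only and commutes with K3
assert np.allclose(comm(L3, K3), np.zeros((5, 5)))
print("Equation (10) relations verified for the five-by-five matrices")
```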
+
+In his 1963 paper [1], Dirac stated that the Lie algebra of Equation (10) can serve as that of the four-dimensional symplectic group $Sp(4)$. In order to see this point, let us go to the Wigner phase-space picture of the coupled oscillators.
+
+### 3.1. Wigner Phase-Space Representation
+
+For this two-oscillator system, the Wigner function is defined as [4,6]
+
+$$
+W(x_1, x_2; p_1, p_2) = \left(\frac{1}{\pi}\right)^2 \int \exp\{-2i(p_1 y_1 + p_2 y_2)\} \\
+\times \psi^*(x_1+y_1, x_2+y_2) \psi(x_1-y_1, x_2-y_2) dy_1 dy_2 \tag{14}
+$$
+
+Indeed, the Wigner function is defined over the four-dimensional phase space of $(x_1, p_1, x_2, p_2)$ just as in the case of classical mechanics. The unitary transformations generated by the operators of Equation (9) are translated into linear canonical transformations of the Wigner function [4]. The canonical transformations are generated by the differential operators [4]:
+
+$$
+L_1 = +i \frac{1}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_2} \right) \right\}
+$$
+
+$$
+\begin{align*}
+L_2 &= -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_1}\right) + \left(p_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial p_1}\right) \right\} \\
+L_3 &= +\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_1}\right) - \left(x_2 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_2}\right) \right\} \\
+S_3 &= -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_1}\right) + \left(x_2 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_2}\right) \right\}
+\end{align*}
+$$
+
+(15)
+
+and
+
+$$
+\begin{align}
+K_1 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_1} \right) - \left( x_2 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_2} \right) \right\} \\
+K_2 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial x_1} - p_1 \frac{\partial}{\partial p_1} \right) + \left( x_2 \frac{\partial}{\partial x_2} - p_2 \frac{\partial}{\partial p_2} \right) \right\} \\
+K_3 &= +\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_2} \right) \right\} \\
+Q_1 &= +\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial x_1} - p_1 \frac{\partial}{\partial p_1} \right) - \left( x_2 \frac{\partial}{\partial x_2} - p_2 \frac{\partial}{\partial p_2} \right) \right\} \\
+Q_2 &= -\frac{i}{2} \left\{ \left( x_1 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_1} \right) + \left( x_2 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_2} \right) \right\} \\
+Q_3 &= -\frac{i}{2} \left\{ \left( x_2 \frac{\partial}{\partial x_1} + x_1 \frac{\partial}{\partial x_2} \right) - \left( p_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial p_2} \right) \right\}
+\end{align}
+$$
+
+(16)
+
+The transformations generated by these operators are linear canonical transformations of the four-dimensional phase space of $(x_1, p_1, x_2, p_2)$. A linear transformation of these phase-space variables $\eta_i$ into new variables $\xi_i$ is canonical if it satisfies the symplectic condition
+
+$$
+M J \tilde{M} = J \qquad (17)
+$$
+
+where *M* is a four-by-four matrix defined by
+
+$$
+M_{ij} = \frac{\partial \xi_i}{\partial \eta_j} \qquad (18)
+$$
+
+and
+
+$$
+J = \begin{pmatrix}
+0 & 1 & 0 & 0 \\
+-1 & 0 & 0 & 0 \\
+0 & 0 & 0 & 1 \\
+0 & 0 & -1 & 0
+\end{pmatrix}
+$$
+
+(19)
+
+According to this form of the *J* matrix, the area of the phase space for *x*₁ and *p*₁ variables remains invariant, and the story is the same for the phase space of *x*₂ and *p*₂.
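+This invariance can be illustrated with a short numpy sketch (ours): canonical single-oscillator rotations and squeezes leave $J$ invariant, while the non-canonical radial expansion generated by $G_3$ (introduced in Section 4) does not, although it preserves the product of the two phase-space areas:

```python
import numpy as np

# J matrix of Equation (19), phase-space ordering (x1, p1, x2, p2)
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]], dtype=float)

eta, th = 0.7, 0.3
# squeeze of the first oscillator: x1 -> e^eta x1, p1 -> e^-eta p1
M_squeeze = np.diag([np.exp(eta), np.exp(-eta), 1.0, 1.0])
# rotation of the first oscillator's phase space
M_rot = np.eye(4)
M_rot[:2, :2] = [[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]]

for M in (M_squeeze, M_rot):
    assert np.allclose(M @ J @ M.T, J)     # canonical: J invariant

# non-canonical radial expansion/contraction (cf. Section 4): J is not
# preserved, but the product of the two areas is (det M = 1)
M_g3 = np.diag([np.exp(eta), np.exp(eta), np.exp(-eta), np.exp(-eta)])
assert not np.allclose(M_g3 @ J @ M_g3.T, J)
assert np.isclose(np.linalg.det(M_g3), 1.0)
print("canonical transformations leave J invariant")
```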
+
+We can then write the generators of the Sp(4) group as
+
+$$
+L_1 = -\frac{1}{2} \begin{pmatrix} 0 & \sigma_2 \\ \sigma_2 & 0 \end{pmatrix}, L_2 = \frac{i}{2} \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}
+$$
+
+$$
+L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix} \qquad (20)
+$$
+
+and
+
+$$
+K_1 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & -\sigma_1 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, K_3 = -\frac{i}{2} \begin{pmatrix} 0 & \sigma_1 \\ \sigma_1 & 0 \end{pmatrix}
+$$
+
+and
+
+$$Q_1 = \frac{i}{2} \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}, Q_3 = \frac{i}{2} \begin{pmatrix} 0 & \sigma_3 \\ \sigma_3 & 0 \end{pmatrix} \quad (21)$$
+
+These four-by-four matrices satisfy the commutation relations given in Equation (10). Indeed, the de Sitter group *O*(3,2) is locally isomorphic to the *Sp*(4) group. The remaining question is whether these ten matrices can serve as the fifteen Dirac matrices given in Section 2. The answer is clearly no. How can ten matrices describe fifteen matrices? We should therefore add five more matrices.
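+With the factor $1/2$ included in each generator, the matrices can be spot-checked against Equation (10); a numpy sketch (ours):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

def comm(a, b):
    return a @ b - b @ a

# the ten Sp(4) generators, each carrying a factor 1/2
L1 = -0.5 * blk(Z, s2, s2, Z)
L2 = 0.5j * blk(Z, -I2, I2, Z)
L3 = 0.5 * blk(-s2, Z, Z, s2)
S3 = 0.5 * blk(s2, Z, Z, s2)
K1 = 0.5j * blk(s1, Z, Z, -s1)
K2 = 0.5j * blk(s3, Z, Z, s3)
K3 = -0.5j * blk(Z, s1, s1, Z)
Q1 = 0.5j * blk(-s3, Z, Z, s3)
Q2 = 0.5j * blk(s1, Z, Z, s1)
Q3 = 0.5j * blk(Z, s3, s3, Z)

# spot checks of Equation (10)
assert np.allclose(comm(L1, L2), 1j * L3)
assert np.allclose(comm(K1, K2), -1j * L3)
assert np.allclose(comm(Q1, Q2), -1j * L3)
assert np.allclose(comm(K2, Q2), -1j * S3)
assert np.allclose(comm(L1, K2), 1j * K3)
assert np.allclose(comm(L1, Q2), 1j * Q3)
assert np.allclose(comm(L1, S3), np.zeros((4, 4)))
print("Sp(4) spot checks of Equation (10) pass")
```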
+
+## 4. Extension to O(3,3) Symmetry
+
+Unlike the case of the Schrödinger picture, it is possible to add five non-canonical generators to the above list. They are
+
+$$S_1 = +\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_1}\right) - \left(p_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial p_1}\right) \right\}$$
+
+$$S_2 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_2} - p_2 \frac{\partial}{\partial x_1}\right) + \left(x_2 \frac{\partial}{\partial p_1} - p_1 \frac{\partial}{\partial x_2}\right) \right\} \quad (22)$$
+
+as well as three additional squeeze operators:
+
+$$G_1 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_2} + x_2 \frac{\partial}{\partial x_1}\right) + \left(p_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial p_1}\right) \right\}$$
+
+$$G_2 = \frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial p_2} + p_2 \frac{\partial}{\partial x_1}\right) - \left(x_2 \frac{\partial}{\partial p_1} + p_1 \frac{\partial}{\partial x_2}\right) \right\}$$
+
+$$G_3 = -\frac{i}{2} \left\{ \left(x_1 \frac{\partial}{\partial x_1} + p_1 \frac{\partial}{\partial p_1}\right) - \left(x_2 \frac{\partial}{\partial x_2} + p_2 \frac{\partial}{\partial p_2}\right) \right\} \quad (23)$$
+
+These five generators perform well-defined operations on the Wigner function. However, the question is whether these additional generators are acceptable in the present form of quantum mechanics.
+
+In order to answer this question, let us note that the uncertainty principle in the phase-space picture of quantum mechanics is stated in terms of the minimum area in phase space for a given pair of conjugate variables. The minimum area is determined by Planck's constant. Thus we are allowed to expand the phase space, but are not allowed to contract it. With this point in mind, let us go back to $G_3$ of Equation (23), which generates transformations that simultaneously expand one phase space and contract the other. Thus, the $G_3$ generator is not acceptable in quantum mechanics even though it generates well-defined mathematical transformations of the Wigner function.
+
+If the five generators of Equations (22) and (23) are added to the ten generators given in Equations (15) and (16), there are fifteen generators. They satisfy the following set of commutation relations.
+
+$$
+\begin{align*}
+[L_i, L_j] &= i\epsilon_{ijk}L_k, & [S_i, S_j] &= i\epsilon_{ijk}S_k, & [L_i, S_j] &= 0 \\
+[L_i, K_j] &= i\epsilon_{ijk}K_k, & [L_i, Q_j] &= i\epsilon_{ijk}Q_k, & [L_i, G_j] &= i\epsilon_{ijk}G_k \\
+[K_i, K_j] &= [Q_i, Q_j] = [G_i, G_j] = -i\epsilon_{ijk}L_k \\
+[K_i, Q_j] &= -i\delta_{ij}S_3, & [Q_i, G_j] &= -i\delta_{ij}S_1, & [G_i, K_j] &= -i\delta_{ij}S_2 \\
+[K_i, S_3] &= -iQ_i, & [Q_i, S_3] &= iK_i, & [G_i, S_3] &= 0 \\
+[K_i, S_1] &= 0, & [Q_i, S_1] &= -iG_i, & [G_i, S_1] &= iQ_i \\
+[K_i, S_2] &= iG_i, & [Q_i, S_2] &= 0, & [G_i, S_2] &= -iK_i
+\end{align*}
+\tag{24}
+$$
+
+As we shall see in Section 4.2, this set of commutation relations serves as the Lie algebra for the group SL(4, r) and also for the *O*(3, 3) Lorentz group.
+
+These fifteen four-by-four matrices can be written in terms of Dirac's fifteen Majorana matrices, and are tabulated in Table 1. There are six anti-symmetric and nine symmetric matrices. The anti-symmetric matrices can be divided into two sets of three rotation generators in the four-dimensional phase space, and the nine symmetric matrices into three sets of three squeeze generators. This classification scheme is easier to understand in terms of the group $O(3,3)$, discussed in Section 4.2.
+
+**Table 1.** SL(4,*r*) and Dirac matrices. Two sets of rotation generators and three sets of boost generators.
+There are 15 generators.
+
+| | First component | Second component | Third component |
+|---|---|---|---|
+| Rotation | $L_1 = \frac{-i}{2}\gamma_0$ | $L_2 = \frac{i}{2}\gamma_5\gamma_0$ | $L_3 = \frac{-i}{2}\gamma_5$ |
+| Rotation | $S_1 = \frac{i}{2}\gamma_2\gamma_3$ | $S_2 = \frac{i}{2}\gamma_1\gamma_2$ | $S_3 = \frac{i}{2}\gamma_3\gamma_1$ |
+| Boost | $K_1 = \frac{-i}{2}\gamma_5\gamma_1$ | $K_2 = \frac{1}{2}\gamma_1$ | $K_3 = \frac{1}{2}\gamma_0\gamma_1$ |
+| Boost | $Q_1 = \frac{i}{2}\gamma_5\gamma_3$ | $Q_2 = \frac{-i}{2}\gamma_3$ | $Q_3 = -\frac{i}{2}\gamma_0\gamma_3$ |
+| Boost | $G_1 = \frac{-i}{2}\gamma_5\gamma_2$ | $G_2 = \frac{1}{2}\gamma_2$ | $G_3 = \frac{1}{2}\gamma_0\gamma_2$ |
+
+### 4.1. Non-Canonical Transformations in Classical Mechanics
+
+In addition to Dirac's ten oscillator matrices, we can consider the matrix
+
+$$ G_3 = \frac{i}{2} \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix} \qquad (25) $$
+
+which generates a radial expansion of the phase space of the first oscillator, while contracting that of the second [14], as illustrated in Figure 1. What is the physical significance of this operation? The expansion of phase space leads to an increase in uncertainty and entropy [8,14].
+
+**Figure 1.** Expanding and contracting phase spaces. Canonical transformations leave the area of each phase space invariant. Non-canonical transformations can change them, yet the product of these two areas remains invariant.
+
+The contraction of the second phase space has a lower limit in quantum mechanics, namely it cannot become smaller than Planck's constant. However, there is no such lower limit in classical mechanics. We shall go back to this question in Section 5.
+
+In the meantime, let us study what happens when the matrix $G_3$ is introduced into the set of matrices given in Equations (20) and (21). It commutes with $S_3$, $L_3$, $K_1$, $K_2$, $Q_1$, and $Q_2$. However, its commutators with the rest of the matrices produce four more generators:
+
+$$[G_3, L_1] = iG_2, [G_3, L_2] = -iG_1, [G_3, K_3] = iS_2, [G_3, Q_3] = -iS_1 \qquad (26)$$
+
+where
+
+$$G_1 = \frac{i}{2} \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, G_2 = \frac{1}{2} \begin{pmatrix} 0 & -\sigma_2 \\ \sigma_2 & 0 \end{pmatrix}$$
+
+$$S_1 = \frac{i}{2} \begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}, S_2 = \frac{i}{2} \begin{pmatrix} 0 & -\sigma_1 \\ \sigma_1 & 0 \end{pmatrix} \qquad (27)$$
+
+If we take into account the above five generators in addition to the ten generators of $Sp(4)$, there are fifteen generators. These generators satisfy the set of commutation relations given in Equation (24).
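+The first two commutators of Equation (26) can be confirmed numerically with a numpy sketch (ours):

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def comm(a, b):
    return a @ b - b @ a

G3 = 0.5j * np.block([[I2, Z], [Z, -I2]])   # Equation (25)
G1 = 0.5j * np.block([[Z, I2], [I2, Z]])    # Equation (27)
G2 = 0.5 * np.block([[Z, -s2], [s2, Z]])
L1 = -0.5 * np.block([[Z, s2], [s2, Z]])    # Equation (20)
L2 = 0.5j * np.block([[Z, -I2], [I2, Z]])

assert np.allclose(comm(G3, L1), 1j * G2)
assert np.allclose(comm(G3, L2), -1j * G1)
print("[G3, L1] = iG2 and [G3, L2] = -iG1 confirmed")
```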
+
+Indeed, the ten $Sp(4)$ generators together with the five new generators form the Lie algebra for the group $SL(4,r)$. There are thus fifteen four-by-four matrices. They can be written in terms of the fifteen Majorana matrices, as given in Table 1.
+
+### 4.2. Local Isomorphism between O(3,3) and SL(4,r)
+
+It is now possible to write the fifteen six-by-six matrices that generate Lorentz transformations on the three space coordinates and the three time coordinates [6]. However, those matrices are difficult to handle and do not readily exhibit their regularities. In this section, we write those matrices as two-by-two matrices of three-by-three matrices.
+
+For this purpose, we construct four sets of three-by-three matrices given in Table 2. There are two sets of rotation generators:
+
+$$L_i = \begin{pmatrix} A_i & 0 \\ 0 & 0 \end{pmatrix}, S_i = \begin{pmatrix} 0 & 0 \\ 0 & A_i \end{pmatrix} \qquad (28)$$
+
+applicable to the space and time coordinates respectively.
+
+There are also three sets of boost generators. In the two-by-two representation of the matrices given in Table 2, they are:
+
+$$K_i = \begin{pmatrix} 0 & B_i \\ \tilde{B}_i & 0 \end{pmatrix}, Q_i = \begin{pmatrix} 0 & C_i \\ \tilde{C}_i & 0 \end{pmatrix}, G_i = \begin{pmatrix} 0 & D_i \\ \tilde{D}_i & 0 \end{pmatrix} \qquad (29)$$
+
+where the three-by-three matrices $A_i$, $B_i$, $C_i$, and $D_i$ are given in Table 2, and $\tilde{B}_i$, $\tilde{C}_i$, and $\tilde{D}_i$ are their respective transposes.
+
+**Table 2.** Three-by-three matrices constituting the two-by-two representation of generators of the $O(3,3)$ group.
+
+| | i = 1 | i = 2 | i = 3 |
+|---|---|---|---|
+| $A_i$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ |
+| $B_i$ | $\begin{pmatrix} i & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}$ |
+| $C_i$ | $\begin{pmatrix} 0 & i & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & i & 0 \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & i & 0 \end{pmatrix}$ |
+| $D_i$ | $\begin{pmatrix} 0 & 0 & i \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & i \\ 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & i \end{pmatrix}$ |
+
+There is a four-by-four Majorana matrix corresponding to each of these fifteen six-by-six matrices, as given in Table 1.
+
+There are of course many interesting subgroups. The most interesting case is the $O(3,2)$ subgroup, and there are three of them. Another interesting feature is that there are three time dimensions. Thus, there are also $O(2,3)$ subgroups applicable to two space and three time coordinates. This symmetry between space and time coordinates could be an interesting future investigation.
+
+## 5. Feynman's Rest of the Universe
+
+In his book on statistical mechanics [9], Feynman makes the following statement: "When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system."
+
+We can use two coupled harmonic oscillators to illustrate what Feynman says about his rest of the universe. One of the oscillators can be used for the world in which we make physical measurements, while the other belongs to the rest of the universe [8].
+
+Let us start with a single oscillator in its ground state. In quantum mechanics, there are many kinds of excitations of the oscillator, and three of them are familiar to us. First, it can be excited to a state with a definite energy eigenvalue. We obtain the excited-state wave functions by solving the eigenvalue problem for the Schrödinger equation, and this procedure is well known.
+
+Second, the oscillator can go through coherent excitations. The ground-state oscillator can be excited to a coherent or squeezed state. During this process, the minimum uncertainty of the ground state is preserved. The coherent or squeezed state is not in an energy eigenstate. This kind of excited state plays a central role in coherent and squeezed states of light, which have recently become a standard item in quantum mechanics.
+
+Third, the oscillator can go through thermal excitations. This is not a quantum excitation but a statistical ensemble. We cannot express a thermally excited state by making linear combinations of wave functions. We should treat this as a canonical ensemble. In order to deal with this thermal state, we need a density matrix.
+
+For the thermally excited single-oscillator state, the density matrix takes the form [9,15,16].
+
+$$ \rho(x,y) = (1 - e^{-1/T}) \sum_k e^{-k/T} \phi_k(x) \phi_k^*(y) \quad (30) $$
+
+where the absolute temperature $T$ is measured in units of Boltzmann's constant, and $\phi_k(x)$ is the $k$-th excited-state oscillator wave function. The index $k$ ranges from 0 to $\infty$.
+
+We also use Wigner functions to deal with statistical problems in quantum mechanics. The Wigner function for this thermally excited state is [4,9,15]
+
+$$W_T(x, p) = \frac{1}{\pi} \int e^{-2ipz} \rho(x-z, x+z) dz \quad (31)$$
+
+which becomes
+
+$$W_T = \left[ \frac{\tanh(1/2T)}{\pi} \right] \exp \left[ - (x^2 + p^2) \tanh(1/2T) \right] \quad (32)$$
+
+This Wigner function becomes
+
+$$W_0 = \frac{1}{\pi} \exp[-(x^2 + p^2)] \quad (33)$$
+
+when $T=0$. As the temperature increases, the radius of this Gaussian form increases from one to [14].
+
+$$\frac{1}{\sqrt{\tanh(1/2T)}} \qquad (34)$$
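+Equations (30)-(32) can be checked numerically: build the thermal density matrix from oscillator wave functions, perform the Wigner integral by quadrature, and compare with the closed form. A numpy sketch (ours; the grid size and the cutoff `kmax` are arbitrary choices):

```python
import math
import numpy as np

T = 0.8
z = np.linspace(-6, 6, 1201)
dz = z[1] - z[0]

def phi(k, x):
    # k-th harmonic-oscillator wave function, via the Hermite recursion
    h_prev, h = np.ones_like(x), 2 * x
    if k == 0:
        h = h_prev
    else:
        for n in range(1, k):
            h_prev, h = h, 2 * x * h - 2 * n * h_prev
    norm = 1.0 / math.sqrt(2.0**k * math.factorial(k) * math.sqrt(math.pi))
    return norm * h * np.exp(-x**2 / 2)

def rho(xa, xb, kmax=60):
    # thermal density matrix of Equation (30), truncated at kmax
    out = np.zeros_like(xa, dtype=float)
    for k in range(kmax):
        out += math.exp(-k / T) * phi(k, xa) * phi(k, xb)
    return (1 - math.exp(-1 / T)) * out

t = math.tanh(1 / (2 * T))
for x0, p0 in [(0.0, 0.0), (0.5, -0.3), (1.0, 0.7)]:
    integrand = np.exp(-2j * p0 * z) * rho(x0 - z, x0 + z)
    W_num = (integrand.sum() * dz).real / math.pi             # Equation (31)
    W_exact = (t / math.pi) * math.exp(-(x0**2 + p0**2) * t)  # Equation (32)
    assert abs(W_num - W_exact) < 1e-6
print("thermal Wigner function matches Equation (32)")
```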
+
+The question is whether we can derive this expanding Wigner function from the concept of Feynman's rest of the universe. In their 1999 paper [8], Han et al. used two coupled harmonic oscillators to illustrate what Feynman said about his rest of the universe. One of their two oscillators is for the world in which we do quantum mechanics and the other is for the rest of the universe. However, these authors did not use canonical transformations. In Section 5.1, we summarize the main point of their paper using the language of canonical transformations developed in the present paper.
+
+Their work was motivated by the papers by Yurke et al. [17] and by Ekert et al. [18], and the Barnett-Phoenix version of information theory [19]. These authors asked the question of what happens when one of the photons is not observed in the two-mode squeezed state.
+
+In Section 5.2, we introduce another form of Feynman's rest of the universe, based on non-canonical transformations discussed in the present paper. For a two-oscillator system, we can define a single-oscillator Wigner function for each oscillator. Then non-canonical transformations allow one Wigner function to expand while forcing the other to shrink. The shrinking Wigner function has a lower limit in quantum mechanics, while there is none in classical mechanics. Thus, Feynman's rest of the universe consists of classical mechanics where Planck's constant has no lower limit.
+
+In Section 5.3, we translate the mathematics of the expanding Wigner function into the physical language of entropy.
+
+## 5.1. Canonical Approach
+
+Let us start with the ground-state wave function for the uncoupled system. Its Hamiltonian is given in Equation (7), and its wave function is
+
+$$\psi_0(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left[ -\frac{1}{2} (x_1^2 + x_2^2) \right] \quad (35)$$
+
+We can couple these two oscillators by making the following canonical transformations. First, let us rotate the coordinate system by 45° to get
+
+$$\frac{1}{\sqrt{2}}(x_1+x_2), \frac{1}{\sqrt{2}}(x_1-x_2) \qquad (36)$$
+
+Let us then squeeze the coordinate system:
+
+$$\frac{e^{\eta}}{\sqrt{2}}(x_1 + x_2), \frac{e^{-\eta}}{\sqrt{2}}(x_1 - x_2) \qquad (37)$$
+
+Likewise, we can transform the momentum coordinates to
+
+$$ \frac{e^{-\eta}}{\sqrt{2}}(p_1 + p_2), \quad \frac{e^{\eta}}{\sqrt{2}}(p_1 - p_2) \qquad (38) $$
+
+Equations (37) and (38) constitute a very familiar canonical transformation. The resulting wave function for this coupled system becomes
+
+$$ \psi_{\eta}(x_1, x_2) = \frac{1}{\sqrt{\pi}} \exp \left\{ -\frac{1}{4} [e^{\eta}(x_1 - x_2)^2 + e^{-\eta}(x_1 + x_2)^2] \right\} \quad (39) $$
+
+This transformed wave function is illustrated in Figure 2.
+
+As was discussed in the literature for several different purposes [4,20–22], this wave function can be expanded as
+
+$$ \psi_{\eta}(x_1, x_2) = \frac{1}{\cosh(\eta/2)} \sum_k \left( \tanh \frac{\eta}{2} \right)^k \phi_k(x_1) \phi_k(x_2) \quad (40) $$
+
+where the wave functions $\phi_k(x)$ and the range of the summation are defined in Equation (30). From this wave function, we can construct the pure-state density matrix
+
+$$ \rho(x_1, x_2; x'_1, x'_2) = \psi_\eta(x_1, x_2) \psi_\eta^*(x'_1, x'_2) \quad (41) $$
+
+which satisfies the condition $\rho^2 = \rho$:
+
+$$ \rho(x_1, x_2; x'_1, x'_2) = \int \rho(x_1, x_2; x''_1, x''_2) \rho(x''_1, x''_2; x'_1, x'_2) dx''_1 dx''_2 \quad (42) $$
+
+**Figure 2.** Two-dimensional Gaussian form for two-coupled oscillators. One of the variables is observable while the second variable is not observed. It belongs to Feynman's rest of the universe.
+
+If we are not able to make observations on the $x_2$ variable, we should take the trace of the $\rho$ matrix over that variable. Then the resulting density matrix is
+
+$$ \rho(x, x') = \int \psi_{\eta}(x, x_2) \{\psi_{\eta}(x', x_2)\}^* dx_2 \quad (43) $$
+
+Here, we have replaced $x_1$ and $x'_1$ by $x$ and $x'$ respectively. If we complete the integration over the $x_2$ variable,
+
+$$ \rho(x,x') = \left(\frac{1}{\pi \cosh \eta}\right)^{1/2} \exp\left\{-\frac{(x+x')^2 + (x-x')^2 \cosh^2 \eta}{4 \cosh \eta}\right\} \quad (44) $$
+
+The diagonal elements of the above density matrix are
+
+$$ \rho(x,x) = \left( \frac{1}{\pi \cosh \eta} \right)^{1/2} \exp(-x^2 / \cosh \eta) \quad (45) $$
+
+With this expression, we can confirm the property of the density matrix: $\text{Tr}(\rho) = 1$. As for the trace of $\rho^2$, we can perform the integration
+
+$$ \mathrm{Tr}(\rho^2) = \int \rho(x,x')\rho(x',x)dx'dx = \frac{1}{\cosh\eta} \quad (46) $$
+
+which is less than one for nonzero values of $\eta$.
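+Both traces can be checked by numerical quadrature of the Gaussian density matrix of Equation (44); a numpy sketch (ours):

```python
import numpy as np

eta = 1.2
c = np.cosh(eta)
x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]
X, Xp = np.meshgrid(x, x, indexing="ij")

# density matrix of Equation (44)
rho = (1 / (np.pi * c))**0.5 * np.exp(-((X + Xp)**2 + (X - Xp)**2 * c**2) / (4 * c))

trace = np.sum(np.diag(rho)) * dx          # Tr(rho), from Equation (45)
trace2 = np.sum(rho * rho.T) * dx * dx     # Tr(rho^2), Equation (46)

assert abs(trace - 1) < 1e-6
assert abs(trace2 - 1 / c) < 1e-6
print("Tr(rho) = 1 and Tr(rho^2) = 1/cosh(eta) confirmed")
```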
+
+The density matrix can also be calculated from the expansion of the wave function given in Equation (40). If we perform the integral of Equation (43), the result is
+
+$$ \rho(x,x') = \left( \frac{1}{\cosh(\eta/2)} \right)^2 \sum_k \left( \tanh \frac{\eta}{2} \right)^{2k} \phi_k(x) \phi_k^*(x') \quad (47) $$
+
+which leads to $\text{Tr}(\rho) = 1$. It is also straightforward to compute the integral for $\text{Tr}(\rho^2)$. The calculation leads to
+
+$$ \mathrm{Tr}(\rho^2) = \left(\frac{1}{\cosh(\eta/2)}\right)^4 \sum_k \left(\tanh \frac{\eta}{2}\right)^{4k} \quad (48) $$
+
+The sum of this series is $(1/\cosh\eta)$, as given in Equation (46).
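+Indeed, Equation (48) is a geometric series in $\tanh^4(\eta/2)$, and its closed-form sum can be compared with Equation (46) numerically:

```python
import numpy as np

# Equation (48) summed as a geometric series:
# (1/cosh(eta/2))^4 / (1 - tanh^4(eta/2)), which reduces to 1/cosh(eta).
for eta in (0.5, 1.0, 2.0):
    t = np.tanh(eta / 2)
    total = (1 / np.cosh(eta / 2))**4 / (1 - t**4)
    assert np.isclose(total, 1 / np.cosh(eta))
print("Equation (48) sums to 1/cosh(eta), as in Equation (46)")
```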
+
+We can approach this problem using the Wigner function. The Wigner function for the two oscillator system is [4]
+
+$$ W_0(x_1, p_1; x_2, p_2) = \left(\frac{1}{\pi}\right)^2 \exp\left[-(x_1^2 + p_1^2 + x_2^2 + p_2^2)\right] \quad (49) $$
+
+If we pretend not to make measurements on the second oscillator, the $x_2$ and $p_2$ variables have to be integrated out [8]. The net result is the Wigner function for the first oscillator.
+
+The canonical transformation of Equations (37) and (38) changes this Wigner function to
+
+$$ W(x_1, x_2; p_1, p_2) = \left(\frac{1}{\pi}\right)^2 \exp \left\{ -\frac{1}{2} [e^\eta (x_1 - x_2)^2 + e^{-\eta} (x_1 + x_2)^2 + e^{-\eta}(p_1 - p_2)^2 + e^\eta (p_1 + p_2)^2] \right\} \quad (50) $$
+
+If we do not observe the second pair of variables, we have to integrate this function over $x_2$ and $p_2$:
+
+$$ W_{\eta}(x_1, p_1) = \int W(x_1, x_2; p_1, p_2) dx_2 dp_2 \quad (51) $$
+
+and the evaluation of this integration leads to [8]
+
+$$ W_{\eta}(x,p) = \frac{1}{\pi \cosh \eta} \exp\left[-\left(\frac{x^2 + p^2}{\cosh \eta}\right)\right] \quad (52) $$
+
+where we use $x$ and $p$ for $x_1$ and $p_1$ respectively.
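+The integration in Equation (51) can be reproduced numerically; a numpy sketch (ours) integrates Equation (50) over $(x_2, p_2)$ on a grid and compares the result with Equation (52):

```python
import numpy as np

eta = 0.8
g = np.linspace(-7, 7, 701)
X2, P2 = np.meshgrid(g, g, indexing="ij")
dA = (g[1] - g[0])**2

def W_full(x1, p1):
    # two-oscillator Wigner function of Equation (50)
    e = -(np.exp(eta) * (x1 - X2)**2 + np.exp(-eta) * (x1 + X2)**2
          + np.exp(-eta) * (p1 - P2)**2 + np.exp(eta) * (p1 + P2)**2) / 2
    return np.exp(e) / np.pi**2

c = np.cosh(eta)
for x1, p1 in [(0.0, 0.0), (0.6, -0.4)]:
    W_num = np.sum(W_full(x1, p1)) * dA                    # Equation (51)
    W_exact = np.exp(-(x1**2 + p1**2) / c) / (np.pi * c)   # Equation (52)
    assert abs(W_num - W_exact) < 1e-6
print("marginal Wigner function matches Equation (52)")
```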
+
+This Wigner function is of the form given in Equation (32) for the thermal excitation, if we identify
+the squeeze parameter $\eta$ as [23]
+
+$$ \cosh \eta = \frac{1}{\tanh(1/2T)} \quad (53) $$
+
+The failure to make measurements on the second oscillator leads to the radial expansion of the Wigner phase space, as in the case of the thermal excitation.
+
+### 5.2. Non-Canonical Approach
+
+As we noted before, among the fifteen Dirac matrices, ten of them can be used for canonical transformations in classical mechanics, and thus in quantum mechanics. They play a special role in quantum optics [2–5].
+
+The remaining five can play their roles if changes in the phase-space area are allowed. In quantum mechanics, the area can be increased, but it has a lower limit, namely Planck's constant. In classical mechanics, this constraint does not exist. The mathematical formalism given in this paper allows us to study this aspect of the system of coupled oscillators.
+
+Let us choose the following three matrices from those in Equations (20) and (21).
+
+$$ S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix} \quad (54) $$
+
+They satisfy the closed set of commutation relations:
+
+$$ [S_3, K_2] = iQ_2, [S_3, Q_2] = -iK_2, [K_2, Q_2] = -iS_3 \quad (55) $$
+
+This is the Lie algebra of the $Sp(2)$ group, the symmetry group applicable to the single-oscillator phase space [4], with one rotation and two squeezes. These matrices generate the same transformation for the first and second oscillators.
+
+We can choose three other sets with similar properties. They are:
+
+$$ S_3 = \frac{1}{2} \begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, Q_1 = \frac{i}{2} \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, K_1 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & -\sigma_1 \end{pmatrix} \quad (56) $$
+
+$$ L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, K_2 = \frac{i}{2} \begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, K_1 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & -\sigma_1 \end{pmatrix} \quad (57) $$
+
+and
+
+$$ L_3 = \frac{1}{2} \begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix}, Q_1 = \frac{i}{2} \begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}, Q_2 = \frac{i}{2} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix} \quad (58) $$
+
+These matrices also satisfy the commutation relations given in Equation (55). In this case, the squeeze transformations take opposite directions in the second phase space.
+
+Since all these transformations are canonical, they leave the area of each phase space invariant. However, let us look at the non-canonical generator $G_3$ of Equation (25). It generates the transformation matrix of the form:
+
+$$ \begin{pmatrix} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{pmatrix} \quad (59) $$
+
+If $\eta$ is positive, this matrix expands the first phase space while contracting the second. This contraction of the second phase space is allowed in classical mechanics, but it has a lower limit in quantum mechanics.
+
+The expansion of the first phase space is exactly like the thermal expansion resulting from our failure to observe the second oscillator that belongs to the rest of the universe. If we expand the system of Dirac's ten oscillator matrices to the world of his fifteen Majorana matrices, we can expand and contract the first and second phase spaces without mixing them up. We can thus construct a model where the observed world and the rest of the universe remain separated. In the observable world, quantum mechanics remains valid with thermal excitations. In the rest of the universe, since the area of the phase space can decrease without lower limit, only classical mechanics is valid.
+
+During the expansion/contraction process, the product of the areas of the two phase spaces remains constant. This may or may not be an extended interpretation of the uncertainty principle, but we choose not to speculate further on this issue.
+
+Let us turn our attention to the fact that the groups $SL(4,r)$ and $Sp(4)$ are locally isomorphic to $O(3,3)$ and $O(3,2)$ respectively. This means that we can do quantum mechanics in one of the $O(3,2)$ subgroups of $O(3,3)$, as Dirac noted in his 1963 paper [1]. The remaining generators belong to Feynman's rest of the universe.
+
+### 5.3. Entropy and the Expanding Wigner Phase Space
+
+We have seen how Feynman's rest of the universe increases the radius of the Wigner function. It is important to note that the entropy of the system also increases.
+
Let us go back to the density matrix. The standard way to measure this ignorance is to calculate the entropy, defined as [16,24–27]:
+
+$$S = -\operatorname{Tr}(\rho \ln(\rho)) \qquad (60)$$
+
+where S is measured in units of Boltzmann's constant. If we use the density matrix given in Equation (44), the entropy becomes
+
+$$S = 2\left\{\cosh^2\left(\frac{\eta}{2}\right) \ln\left(\cosh\frac{\eta}{2}\right) - \sinh^2\left(\frac{\eta}{2}\right) \ln\left(\sinh\frac{\eta}{2}\right)\right\} \quad (61)$$
+
+In order to express this equation in terms of the temperature variable $T$, we write Equation (53) as
+
+$$\cosh \eta = \frac{1 + e^{-1/T}}{1 - e^{-1/T}} \qquad (62)$$
+
+which leads to
+
$$\cosh^2\left(\frac{\eta}{2}\right) = \frac{1}{1-e^{-1/T}}, \quad \sinh^2\left(\frac{\eta}{2}\right) = \frac{e^{-1/T}}{1-e^{-1/T}} \qquad (63)$$
+
+Then the entropy of Equation (61) takes the form [8]
+
+$$S = \left(\frac{1}{T}\right) \left\{ \frac{1}{\exp\left(\frac{1}{T}\right) - 1} \right\} - \ln\left(1 - e^{-1/T}\right) \qquad (64)$$
+
This familiar expression is the entropy of an oscillator state in thermal equilibrium. Thus, for this oscillator system, we can relate our ignorance of Feynman's rest of the universe, measured by the coupling parameter $\eta$, to the temperature.
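The passage from Equation (61) to Equation (64) is easy to verify numerically. The sketch below is our own illustration (the function names are ours, not from the paper): it computes the entropy once from the squeeze parameter $\eta$ and once from the temperature $T$, with $\eta$ obtained by inverting Equation (62), and checks that the two expressions agree.

```python
import math

def entropy_eta(eta):
    """Entropy of Equation (61), in units of Boltzmann's constant."""
    c, s = math.cosh(eta / 2), math.sinh(eta / 2)
    return 2 * (c**2 * math.log(c) - s**2 * math.log(s))

def entropy_T(T):
    """Entropy of Equation (64), as a function of temperature."""
    return (1 / T) / (math.exp(1 / T) - 1) - math.log(1 - math.exp(-1 / T))

def eta_from_T(T):
    """Invert Equation (62): cosh(eta) = (1 + e^{-1/T}) / (1 - e^{-1/T})."""
    x = math.exp(-1 / T)
    return math.acosh((1 + x) / (1 - x))

# The two entropy expressions agree at every temperature
for T in (0.5, 1.0, 2.0, 10.0):
    assert abs(entropy_eta(eta_from_T(T)) - entropy_T(T)) < 1e-10
```

The agreement at arbitrary temperatures is a useful check on the hyperbolic half-angle identities used in the derivation.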
+
+## 6. Concluding Remarks
+
+In this paper, we started with the fifteen four-by-four matrices for the Majorana representation of the Dirac matrices, and the ten generators of the $Sp(4)$ group corresponding to Dirac's oscillator matrices. Their explicit forms are given in the literature [6,7], and their roles in modern physics are well-known [3,4,11]. We re-organized them into tables.
+
+The difference between these two representations consists of five matrices. The physics of this difference is discussed in terms of Feynman's rest of the universe [9]. According to Feynman, this universe consists of the world in which we do quantum mechanics, and the rest of the universe. In the rest of the universe, our physical laws may or may not be respected. In the case of coupled oscillators, without the lower limit on Planck's constant, we can do classical mechanics but not quantum mechanics in the rest of the universe.
+
+In 1971, Feynman et al. [28] published a paper on the oscillator model of hadrons, where the proton consists of three quarks linked up by oscillator springs. In order to treat this problem, they use a three-particle symmetry group formulated by Dirac in his book on quantum mechanics [29,30]. An interesting problem could be to see what happens to the two quarks when one of them is not observed. Another interesting question could be to see what happens to one of the quarks when two of them are not observed.
+
Finally, we note here that group theory is a very powerful tool in approaching problems in modern physics. Different groups can share the same set of commutation relations for their generators. Recently, the group SL(2, c), through its correspondence with SO(3,1), has been shown to be the underlying language for classical and modern optics [4,31]. In this paper, we exploited the correspondence between SL(4, r) and O(3,3), as well as the correspondence between Sp(4) and O(3,2), which was first noted by Paul A. M. Dirac [1].
+
+There could be more applications of group isomorphisms in the future. A comprehensive list of those correspondences is given in Gilmore's book on Lie groups [32].
+
+**Acknowledgments:** We would like to thank Christian Baumgarten for telling us about the *Sp*(2) symmetry in classical mechanics.
+
## References
+
1. Dirac, P.A.M. A remarkable representation of the 3 + 2 de Sitter Group. J. Math. Phys. **1963**, *4*, 901–909. [CrossRef]
+
+2. Yuen, H.P. Two-photon coherent states of the radiation field. Phys. Rev. A **1976**, *13*, 2226-2243. [CrossRef]
+
3. Yurke, B.S.; McCall, S.L.; Klauder, J.R. SU(2) and SU(1,1) interferometers. Phys. Rev. A **1986**, *33*, 4033–4054. [CrossRef] [PubMed]
+
+4. Kim, Y.S.; Noz, M.E. Phase Space Picture of Quantum Mechanics; World Scientific Publishing Company: Singapore, 1991.
+
+5. Han, D.; Kim, Y.S.; Noz, M.E.; Yeh, L. Symmetries of two-mode squeezed states. J. Math. Phys. **1993**, *34*, 5493-5508. [CrossRef]
+
+6. Han, D.; Kim, Y.S.; Noz, M.E. O(3,3)-like symmetries of coupled harmonic oscillators. J. Math. Phys. **1995**, *36*, 3940-3954. [CrossRef]
+
+7. Lee, D.-G. The Dirac gamma matrices as "relics" of a hidden symmetry?: As fundamental representation of the algebra Sp(4,r). J. Math. Phys. **1995**, *36*, 524-530. [CrossRef]
+
+8. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative example of Feynman's rest of the universe. Am. J. Phys. **1999**, *67*, 61-66. [CrossRef]
+
+9. Feynman, R.P. Statistical Mechanics; Benjamin/Cummings: Reading, MA, USA, 1972.
+
+10. Majorana, E. Relativistic theory of particles with arbitrary intrinsic angular momentum. Nuovo Cimento **1932**, *9*, 335-341. [CrossRef]
+
11. Itzykson, C.; Zuber, J.B. Quantum Field Theory; McGraw-Hill: New York, NY, USA, 1980.
+
+12. Goldstein, H. *Classical Mechanics*, 2nd ed.; Addison-Wesley: Reading, MA, USA, 1980.
+
+13. Abraham, R.; Marsden, J.E. *Foundations of Mechanics*, 2nd ed.; Benjamin/Cummings: Reading, MA, USA, 1978.
+
+14. Kim, Y.S.; Li, M. Squeezed states and thermally excited states in the Wigner phase-space picture of quantum mechanics. Phys. Lett. A **1989**, *139*, 445-448. [CrossRef]
+
+15. Davies, R.W.; Davies, K.T.R. On the Wigner distribution function for an oscillator. Ann. Phys. **1975**, *89*, 261-273. [CrossRef]
+
+16. Landau, L.D.; Lifshitz, E.M. Statistical Physics; Pergamon Press: London, UK, 1958.
+
17. Yurke, B.; Potasek, M. Obtainment of thermal noise from a pure state. Phys. Rev. A **1987**, *36*, 3464–3466. [CrossRef] [PubMed]
+
18. Ekert, A.K.; Knight, P.L. Correlations and squeezing of two-mode oscillations. Am. J. Phys. **1989**, *57*, 692–697. [CrossRef]
+
+19. Barnett, S.M.; Phoenix, S.J.D. Information theory, squeezing and quantum correlations. Phys. Rev. A **1991**, *44*, 535-545. [CrossRef] [PubMed]
+
+20. Kim, Y.S.; Noz, M.E.; Oh, S.H. A simple method for illustrating the difference between the homogeneous and inhomogeneous Lorentz Groups. Am. J. Phys. **1979**, *47*, 892–897. [CrossRef]
+
+21. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, the Netherlands, 1986.
+
+22. Giedke, G.; Wolf, M.M.; Krueger, O.; Werner, R.F.; Cirac, J.J. Entanglement of formation for symmetric Gaussian states. Phys. Rev. Lett. **2003**, *91*, 107901–107904. [CrossRef] [PubMed]
+
+23. Han, D.; Kim, Y.S.; Noz, M.E. Lorentz-squeezed hadrons and hadronic temperature. Phys. Lett. A **1990**, *144*, 111–115. [CrossRef]
+
+24. von Neumann, J. *Mathematical Foundation of Quantum Mechanics*; Princeton University: Princeton, NJ, USA, 1955.
+
+25. Fano, U. Description of states in quantum mechanics by density matrix and operator techniques. Rev. Mod. Phys. **1957**, *29*, 74–93. [CrossRef]
+
+26. Blum, K. *Density Matrix Theory and Applications*; Plenum: New York, NY, USA, 1981.
+
+27. Kim, Y.S.; Wigner, E.P. Entropy and Lorentz transformations. Phys. Lett. A **1990**, *147*, 343–347. [CrossRef]
+
+28. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current matrix elements from a relativistic Quark Model. Phys. Rev. D **1971**, *3*, 2706–2732. [CrossRef]
+
+29. Dirac, P.A.M. *Principles of Quantum Mechanics*, 4th ed.; Oxford University: London, UK, 1958.
+
+30. Hussar, P.E.; Kim, Y.S.; Noz, M.E. Three-particle symmetry classifications according to the method of Dirac. Am. J. Phys. **1980**, *48*, 1038–1042. [CrossRef]
+
+31. Başkal, S.; Kim, Y.S. Lorentz Group in ray and polarization optics. In *Mathematical Optics: Classical, Quantum and Imaging Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Press: New York, NY, USA, 2012.
+
+32. Gilmore, R. *Lie Groups, Lie Algebras, and Some of Their Applications*; Wiley: New York, NY, USA, 1974.
+
+© 2012 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+
+Article
+
+# Symmetries Shared by the Poincaré Group and the Poincaré Sphere
+
+Young S. Kim ¹,* and Marilyn E. Noz ²
+
+¹ Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA
+
+² Department of Radiology, New York University, New York, NY 10016, USA; marilyne.noz@gmail.com
+
+* Author to whom correspondence should be addressed; yskim@umd.edu; Tel.: +1-301-937-1306.
+
+Received: 29 May 2013; in revised form: 9 June 2013; Accepted: 9 June 2013; Published: 27 June 2013
+
+**Abstract:** Henri Poincaré formulated the mathematics of Lorentz transformations, known as the Poincaré group. He also formulated the Poincaré sphere for polarization optics. It is shown that these two mathematical instruments can be derived from the two-by-two representations of the Lorentz group. Wigner's little groups for internal space-time symmetries are studied in detail. While the particle mass is a Lorentz-invariant quantity, it is shown to be possible to address its variations in terms of the decoherence mechanism in polarization optics.
+
+**Keywords:** Poincaré group; Poincaré sphere; Wigner's little groups; particle mass; decoherence mechanism; two-by-two representations; Lorentz group
+
+## 1. Introduction
+
+It was Henri Poincaré who worked out the mathematics of Lorentz transformations before Einstein and Minkowski, and the Poincaré group is the underlying language for special relativity. In order to analyze the polarization of light, Poincaré also constructed a graphic illustration known as the Poincaré sphere [1–3].
+
+It is of interest to see whether the Poincaré sphere can also speak the language of special relativity. In that case, we can study the physics of relativity in terms of what we observe in optical laboratories. For that purpose, we note first that the Lorentz group starts as a group of four-by-four matrices, while the Poincaré sphere is based on the two-by-two matrix consisting of four Stokes parameters. Thus, it is essential to find a two-by-two representation of the Lorentz group. Fortunately, this representation exists in the literature [4,5], and we shall use it in this paper.
+
+As for the problems in relativity, we shall discuss here Wigner’s little groups dictating the internal space-time symmetries of relativistic particles [6]. In his original paper of 1939 [7], Wigner considered the subgroups of the Lorentz group, whose transformations leave the four-momentum of a given particle invariant. While this problem has been extensively discussed in the literature, we propose here to study it using Naimark’s two-by-two representation of the Lorentz group [4,5].
+
+This two-by-two representation is useful for communicating with the symmetries of the Poincaré sphere based on the four Stokes parameters, which can take the form of two-by-two matrices. We shall prove here that the Poincaré sphere shares the same symmetry property as that of the Lorentz group, particularly in approaching Wigner’s little groups. By doing this, we can study the Lorentz symmetries of elementary particles from what we observe in optical laboratories.
+
+The present paper starts from an unpublished note based on an invited paper presented by one of the authors (YSK) at the Fedorov Memorial Symposium: Spins and Photonic Beams at Interface held in Minsk (2011) [8]. To this, we have added a detailed discussion of how the decoherence mechanism in polarization optics is mathematically equivalent to a massless particle gaining mass to become a massive particle. We are particularly interested in how the variation of mass can be accommodated in the study of internal space-time symmetries.
+
+In Section 2, we define the symmetry problem we propose to study in this paper. We are interested in the subgroups of the Lorentz group, whose transformations leave the four-momentum of a given particle invariant. This is an old problem and has been repeatedly discussed in the literature [6,7,9]. In this paper, we discuss this problem using the two-by-two formulation of the Lorentz group. This two-by-two language is directly applicable to polarization optics and the Poincaré sphere.
+
+While Wigner formulated his little groups for particles in their given Lorentz frames, we give a formalism applicable to all Lorentz frames. In his 1939 paper, Wigner pointed out that his little groups are different for massive, massless and imaginary-particles. In Section 3, we discuss the possibility of deriving the symmetry properties for massive and imaginary-mass particles from that of the massless particle.
+
+In Section 4, we assemble the variables in polarization optics, and define the matrix operators corresponding to transformations applicable to those variables. We write the Stokes parameters in the form of a two-by-two matrix. The Poincaré sphere can be constructed from this two-by-two Stokes matrix. In Section 5, we note that there can be two radii for the Poincaré sphere. Poincaré's original sphere has one fixed radius, but this radius can change, depending on the degree of coherence. Based on what we studied in Section 3, we can associate this change of the radius to the change in mass of the particle.
+
+## 2. Poincaré Group and Wigner's Little Groups
+
+Poincaré formulated the group theory of Lorentz transformations applicable to four-dimensional space consisting of three space coordinates and one time variable. There are six generators for this group consisting of three rotation and three boost generators.
+
+In addition, Poincaré considered translations applicable to those four space-time variables, with four generators. If we add these four generators to the six generators for the homogeneous Lorentz group, the result is the inhomogeneous Lorentz group [7] with ten generators. This larger group is called the Poincaré group in the literature.
+
+The four translation generators produce space-time four-vectors consisting of the energy and momentum. Thus, within the framework of the Poincaré group, we can consider the subgroup of the Lorentz group for a fixed value of momentum [7]. This subgroup defines the internal space-time symmetry of the particle. Let us consider a particle at rest. Its momentum consists of its mass as its time-like variable and zero for the three momentum components.
+
+$$ (m, 0, 0, 0) \qquad (1) $$
+
For convenience, we use the four-vector convention $(t, z, x, y)$ for space-time and $(E, p_z, p_x, p_y)$ for energy-momentum.
+
+This four-momentum of Equation (1) is invariant under three-dimensional rotations applicable only to the $z, x, y$ coordinates. The dynamical variable associated with this rotational degree of freedom is called the spin of the particle.
+
+We are then interested in what happens when the particle moves with a non-zero momentum. If it moves along the z direction, the four-momentum takes the value:
+
+$$ m(\cosh \eta, \sinh \eta, 0, 0) \qquad (2) $$
+
+which means:
+
$$ p_0 = m\cosh\eta, \qquad p_z = m\sinh\eta, \qquad e^{\eta} = \sqrt{\frac{p_0 + p_z}{p_0 - p_z}} \qquad (3) $$
+
Accordingly, the little group consists of Lorentz-boosted rotation matrices. This aspect of the little group has been discussed in the literature [6,9]. The question then is whether we could carry out the same logic using two-by-two matrices.
+
+Of particular interest is what happens when the transformation parameter, $\eta$, becomes very large and the four-momentum becomes that of a massless particle. This problem has also been discussed in the literature within the framework of four-dimensional Minkowski space. The $\eta$ parameter becomes large when the momentum becomes large, but it can also become large when the mass becomes very small. The two-by-two formulation allows us to study these two cases separately, as we will do in Section 3.
+
+If the particle has an imaginary mass, it moves faster than light and is not observable. Yet, particles of this kind play important roles in Feynman diagrams, and their space-time symmetry should also be studied. In his original paper [7], Wigner studied the little group as the subgroup of the Lorentz group whose transformations leave the four-momentum invariant of the form:
+
+$$ (0, k, 0, 0) \tag{4} $$
+
+Wigner observed that this four-momentum remains invariant under the Lorentz boost along the x or y direction.
+
+If we boost this four-momentum along the z direction, the four-momentum becomes:
+
+$$ k(\sinh\eta, \cosh\eta, 0, 0) \tag{5} $$
+
+with:
+
+$$ e^{\eta} = \sqrt{\frac{p_0 + p_z}{p_z - p_0}} \tag{6} $$
+
+The two-by-two formalism also allows us to study this problem.
+
+In Section 2.1, we shall present the two-by-two representation of the Lorentz group. In Section 2.2, we shall present Wigner's little groups in this two-by-two representation. While Wigner's analysis was based on particles in their fixed Lorentz frames, we are interested in what happens when they start moving. We shall deal with this problem in Section 3.
+
+## 2.1. Two-by-Two Representation of the Lorentz Groups
+
+The Lorentz group starts with a group of four-by-four matrices performing Lorentz transformations on the Minkowskian vector space of $(t, z, x, y)$, leaving the quantity:
+
+$$ t^2 - z^2 - x^2 - y^2 \tag{7} $$
+
+invariant. It is possible to perform this transformation using two-by-two representations [4,5]. This mathematical aspect is known as SL(2, c), the universal covering group for the Lorentz group.
+
+In this two-by-two representation, we write the four-vector as a matrix:
+
+$$ X = \begin{pmatrix} t+z&x-iy \\ x+iy&t-z \end{pmatrix} \tag{8} $$
+
+Then, its determinant is precisely the quantity given in Equation (7). Thus, the Lorentz transformation on this matrix is a determinant-preserving transformation. Let us consider the transformation matrix as:
+
$$ G = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad G^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix} \tag{9} $$
+
+with:
+
+$$ \det(G) = 1 \tag{10} $$
+
+The $G$ matrix starts with four complex numbers. Due to the above condition on its determinant, it has six independent parameters. The group of these $G$ matrices is known to be locally isomorphic to
+the group of four-by-four matrices performing Lorentz transformations on the four-vector (t,z,x,y).
+In other words, for each G matrix, there is a corresponding four-by-four Lorentz-transform matrix, as
+is illustrated in the Appendix A.
+
+The matrix, $G$, is not a unitary matrix, because its Hermitian conjugate is not always its inverse.
+The group can have a unitary subgroup, called $SU(2)$, performing rotations on electron spins. As far
+as we can see, this $G$-matrix formalism was first presented by Naimark in 1954 [4]. Thus, we call this
+formalism the Naimark representation of the Lorentz group. We shall see first that this representation
+is convenient for studying space-time symmetries of particles. We shall then note that this Naimark
+representation is the natural language for the Stokes parameters in polarization optics.
+
+With this point in mind, we can now consider the transformation:
+
+$$X' = GXG^{\dagger} \qquad (11)$$
+
+Since $G$ is not a unitary matrix, it is not a unitary transformation. In order to tell this difference, we call
+this the "Naimark transformation". This expression can be written explicitly as:
+
$$\begin{pmatrix} t' + z' & x' - iy' \\ x' + iy' & t' - z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (12)$$
+
+For this transformation, we have to deal with four complex numbers. However, for all practical
+purposes, we may work with two Hermitian matrices:
+
$$Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix}, \quad R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (13)$$
+
+and two symmetric matrices:
+
$$B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}, \quad S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \quad (14)$$
+
+whose Hermitian conjugates are not their inverses. The two Hermitian matrices in Equation (13) lead
+to rotations around the *z* and *y* axes, respectively. The symmetric matrices in Equation (14) perform
+Lorentz boosts along the *z* and *x* directions, respectively.
+
+Repeated applications of these four matrices will lead to the most general form of the $G$ matrix of
+Equation (9) with six independent parameters. For each two-by-two Naimark transformation, there is
+a four-by-four matrix performing the corresponding Lorentz transformation on the four-component
+four-vector. In the Appendix A, the four-by-four equivalents are given for the matrices of Equations (13)
+and (14).
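The determinant-preserving property of the Naimark transformation of Equation (11) can be checked numerically. The following sketch is our own construction (helper names are ours): it applies the boost matrix $B(\eta)$ of Equation (14) to the space-time matrix $X$ of Equation (8) and verifies that $\det X = t^2 - z^2 - x^2 - y^2$ is unchanged.

```python
import math

def naimark(G, X):
    """Naimark transformation X' = G X G-dagger, Equation (11)."""
    Gd = [[G[0][0].conjugate(), G[1][0].conjugate()],
          [G[0][1].conjugate(), G[1][1].conjugate()]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(mul(G, X), Gd)

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Space-time four-vector (t, z, x, y) written as the matrix of Equation (8)
t, z, x, y = 2.0, 0.3, 0.4, 0.5
X = [[t + z, x - 1j * y], [x + 1j * y, t - z]]

# Boost along z: the matrix B(eta) of Equation (14)
eta = 0.7
B = [[math.exp(eta / 2), 0.0], [0.0, math.exp(-eta / 2)]]

# The Minkowskian interval, i.e. det X, is invariant under the boost
assert abs(det(naimark(B, X)) - det(X)) < 1e-12
```

The same check passes for the rotation matrices of Equation (13) and the boost $S(\lambda)$ of Equation (14), since all four have unit determinant.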
+
+It was Einstein who defined the energy-momentum four-vector and showed that it also has the
+same Lorentz-transformation law as the space-time four-vector. We write the energy-momentum
+four-vector as:
+
+$$P = \begin{pmatrix} E + p_z & p_x - ip_y \\ p_x + ip_y & E - p_z \end{pmatrix} \qquad (15)$$
+
+with:
+
+$$\det(P) = E^2 - p_x^2 - p_y^2 - p_z^2 \qquad (16)$$
+
+which means:
+
+$$\det(P) = m^2 \qquad (17)$$
+
+where *m* is the particle mass.
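The relations of Equations (2), (3) and (15) to (17) amount to a few lines of arithmetic; the following is our own illustrative check that $\det P = m^2$ and that $e^{\eta}$ is fixed by the momentum as in Equation (3).

```python
import math

m, eta = 1.5, 0.8
# Four-momentum m(cosh eta, sinh eta, 0, 0) of Equation (2), in the form of Equation (15)
E, pz, px, py = m * math.cosh(eta), m * math.sinh(eta), 0.0, 0.0
P = [[E + pz, px - 1j * py], [px + 1j * py, E - pz]]

detP = (P[0][0] * P[1][1] - P[0][1] * P[1][0]).real
assert abs(detP - m**2) < 1e-12                                      # Equation (17)
assert abs(math.exp(eta) - math.sqrt((E + pz) / (E - pz))) < 1e-12   # Equation (3)
```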
+
+Now, Einstein's transformation law can be written as:
+
$$P' = GPG^{\dagger} \quad (18)$$
+
+or explicitly:
+
$$\begin{pmatrix} E' + p_z' & p_x' - ip_y' \\ p_x' + ip_y' & E' - p_z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} E + p_z & p_x - ip_y \\ p_x + ip_y & E - p_z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (19)$$
+
+## 2.2. Wigner's Little Groups
+
In his original paper of 1939 [7], Wigner was interested in constructing subgroups of the Lorentz group whose transformations leave a given four-momentum invariant. He called these subgroups "little groups". Thus, Wigner's little group consists of two-by-two matrices satisfying:
+
$$P = WPW^{\dagger} \quad (20)$$
+
+This two-by-two W matrix is not an identity matrix, but tells about the internal space-time symmetry of a particle with a given energy-momentum four-vector. This aspect was not known when Einstein formulated his special relativity in 1905. The internal space-time symmetry was not an issue at that time.
+
+If its determinant is a positive number, the P matrix can be brought to a form proportional to:
+
+$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (21)$$
+
+corresponding to a massive particle at rest.
+
+If the determinant is negative, it can be brought to a form proportional to:
+
+$$P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad (22)$$
+
corresponding to an imaginary-mass particle moving faster than light along the z direction, with a vanishing energy component.
+
+If the determinant is zero, we may write P as:
+
+$$P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad (23)$$
+
+which is proportional to the four-momentum matrix for a massless particle moving along the z direction.
+
+For all three of the above cases, the matrix of the form:
+
+$$Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \quad (24)$$
+
+will satisfy the Wigner condition of Equation (20). This matrix corresponds to rotations around the z axis, as is shown in the Appendix A.
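That $Z(\delta)$ of Equation (24) satisfies the Wigner condition of Equation (20) for all three momentum matrices can be confirmed directly; the sketch below is our own (the helper name is ours).

```python
import cmath

def wigner_invariant(W, P):
    """Check the Wigner condition P = W P W-dagger of Equation (20)."""
    Wd = [[W[0][0].conjugate(), W[1][0].conjugate()],
          [W[0][1].conjugate(), W[1][1].conjugate()]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    WP = mul(mul(W, P), Wd)
    return all(abs(WP[i][j] - P[i][j]) < 1e-12 for i in range(2) for j in range(2))

delta = 0.9
Z = [[cmath.exp(1j * delta / 2), 0], [0, cmath.exp(-1j * delta / 2)]]  # Equation (24)

# Massive (21), imaginary-mass (22) and massless (23) momentum matrices
for P in ([[1, 0], [0, 1]], [[1, 0], [0, -1]], [[1, 0], [0, 0]]):
    assert wigner_invariant(Z, P)
```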
+
+For the massive particle with the four-momentum of Equation (21), the Naimark transformations with the rotation matrix of the form:
+
+$$R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (25)$$
+
+also leave the *P* matrix of Equation (21) invariant. Together with the *Z*(*δ*) matrix, this rotation matrix
+leads to the subgroup consisting of the unitary subset of the *G* matrices. The unitary subset of *G* is
+*SU*(2), corresponding to the three-dimensional rotation group dictating the spin of the particle [9].
+
+For the massless case, the transformations with the triangular matrix of the form:
+
+$$
+\begin{pmatrix}
+1 & \gamma \\
+0 & 1
+\end{pmatrix}
+\qquad (26)
+$$
+
+leave the momentum matrix of Equation (23) invariant. The physics of this matrix has a stormy history,
+and the variable, $\gamma$, leads to gauge transformation applicable to massless particles [6,10].
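A direct multiplication confirms that the triangular matrix of Equation (26) leaves the massless momentum matrix of Equation (23) invariant; the following minimal check is our own, taking $\gamma$ real.

```python
gamma = 0.6
W = [[1.0, gamma], [0.0, 1.0]]      # triangular matrix of Equation (26)
P = [[1.0, 0.0], [0.0, 0.0]]        # massless momentum matrix, Equation (23)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Wt = [[W[0][0], W[1][0]], [W[0][1], W[1][1]]]  # W-dagger = transpose for real gamma
WPW = mul(mul(W, P), Wt)

# The massless four-momentum matrix is left invariant exactly
assert WPW == P
```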
+
+For a particle with its imaginary mass, the W matrix of the form:
+
+$$
+S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \tag{27}
+$$
+
+will leave the four-momentum of Equation (22) invariant. This unobservable particle does not appear
+to have observable internal space-time degrees of freedom.
+
+Table 1 summarizes the transformation matrices for Wigner’s subgroups for massive, massless and imaginary-mass particles. Of course, it is a challenging problem to have one expression for all those three cases, and this problem has been addressed in the literature [11].
+
+**Table 1.** Wigner’s Little Groups. The little groups are the subgroups of the Lorentz group, whose transformations leave the four-momentum of a given particle invariant. Thus, the little groups define the internal space-time symmetries of particles. The four-momentum remains invariant under the rotation around it. In addition, the four-momentum remains invariant under the following transformations. These transformations are different for massive, massless and imaginary-mass particles.
+
| Particle mass | Four-momentum | Transform matrices |
| --- | --- | --- |
| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ |
| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix}$ |
| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ |
+
## 3. Lorentz Completion of Wigner's Little Groups
+
+In his original paper [7], Wigner worked out his little groups for specific Lorentz frames. For the massive particle, he constructed his little group in the frame where the particle is at rest. For the imaginary-mass particle, the energy-component of his frame is zero.
+
For the massless particle moving along the *z* direction with a nonzero momentum, there is no particularly convenient Lorentz frame. Thus, the frame can be chosen for an arbitrary value of the momentum, and the triangular matrix of Equation (26) retains its form under Lorentz boosts along the *z* direction.
+
+For the massive particle, let us Lorentz-boost the four-momentum matrix of Equation (21) by performing a Naimark transformation:
+
+$$
+\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (28)
+$$
+
+which leads to:
+
+$$
\begin{pmatrix} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{pmatrix} \qquad (29)
+$$
+
+This resulting matrix corresponds to the Lorentz-boosted four-momentum given in Equation (2). For simplicity, we let $m = 1$ hereafter in this paper. The Lorentz transformation applicable to the four-momentum matrix is not a similarity transformation, but it is a Naimark transformation, as defined in Equation (11).
+
+On the other hand, the rotation matrix of Equation (25) is Lorentz-boosted as a similarity transformation:
+
+$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (30) $$
+
+and it becomes:
+
+$$ \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (31) $$
+
+If we perform the Naimark transformation of the four-momentum matrix of Equation (29) with this Lorentz-boosted rotation matrix:
+
$$ \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & e^{-\eta} \sin(\theta/2) \\ -e^{\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (32) $$
+
+the result is the four-momentum matrix of Equation (29). This means that the Lorentz-boosted rotation matrix of Equation (31) represents the little group, whose transformations leave the four-momentum matrix of Equation (29) invariant.
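This invariance can be confirmed numerically. In the sketch below (our own), $W$ is the Lorentz-boosted rotation matrix of Equation (31) and $D$ the boosted four-momentum matrix of Equation (29).

```python
import math

theta, eta = 0.8, 1.2
c, s = math.cos(theta / 2), math.sin(theta / 2)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Lorentz-boosted rotation matrix of Equation (31)
W = [[c, -math.exp(eta) * s], [math.exp(-eta) * s, c]]
# Boosted four-momentum matrix of Equation (29)
D = [[math.exp(eta), 0.0], [0.0, math.exp(-eta)]]

Wt = [[W[0][0], W[1][0]], [W[0][1], W[1][1]]]  # W-dagger = transpose (real entries)
WDW = mul(mul(W, D), Wt)

# W D W-dagger reproduces D: the boosted rotation is a little-group element
assert all(abs(WDW[i][j] - D[i][j]) < 1e-12 for i in range(2) for j in range(2))
```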
+
+For the imaginary-mass case, the Lorentz boosted four-momentum matrix becomes:
+
+$$ \begin{pmatrix} e^\eta & 0 \\ 0 & -e^{-\eta} \end{pmatrix} \quad (33) $$
+
+The little group matrix is:
+
+$$ \begin{pmatrix} \cosh(\lambda/2) & e^\eta \sinh(\lambda/2) \\ e^{-\eta} \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \quad (34) $$
+
+where $\eta$ is given in Equation (6).
+
+For the massless case, if we boost the four-momentum matrix of Equation (23), the result is:
+
+$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad (35) $$
+
+Here, the $\eta$ parameter is an independent variable and cannot be defined in terms of the momentum or energy.
+
+The remaining problem is to see whether the massive and imaginary-mass cases collapse to the massless case in the large $\eta$ limit. This variable becomes large when the momentum becomes large or the mass becomes small. We shall discuss these two cases separately.
+
+### 3.1. Large-Momentum Limit
+
+While Wigner defined his little group for the massive particle in its rest frame in his original paper [7], the little group represented by Equation (31) is applicable to the moving particle, whose four-momentum is given in Equation (29). This matrix can also be written as:
+
+$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & e^{-2\eta} \end{pmatrix} \quad (36) $$
+
+In the limit of large η, we can change the above expression into:
+
+$$e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (37)$$
+
This process is continuous, but not necessarily analytic [11]. After making this transition, we can come back to the original frame to obtain the four-momentum matrix of Equation (23).
+
+The remaining problem is the Lorentz-boosted rotation matrix of Equation (31). If this matrix is going to remain finite as $\eta$ approaches infinity, the upper-right element should be finite for large values of $\eta$. Let it be $\gamma$. Then:
+
$$-e^{\eta} \sin(\theta/2) = \gamma \qquad (38)$$
+
+This means that angle $\theta$ has to become zero. As a consequence, the little group matrix of Equation (31) becomes the triangular matrix given in Equation (26) for massless particles.
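The limiting process can be illustrated numerically. In this sketch (our own), $\sin(\theta/2)$ is scaled so that the condition of Equation (38) holds with a fixed $\gamma$; as $\eta$ grows, the boosted rotation matrix of Equation (31) approaches the triangular matrix of Equation (26).

```python
import math

gamma = 0.25
for eta in (5.0, 10.0, 20.0):
    # Choose theta so that -e^eta * sin(theta/2) = gamma stays fixed, Equation (38)
    s = -gamma * math.exp(-eta)
    c = math.sqrt(1.0 - s * s)
    W = [[c, -math.exp(eta) * s], [math.exp(-eta) * s, c]]  # Equation (31)

    # Upper-right element stays at gamma; the other elements approach
    # those of the triangular matrix of Equation (26)
    assert abs(W[0][1] - gamma) < 1e-12
    assert abs(W[0][0] - 1.0) < 1e-4 and abs(W[1][0]) < 1e-4
```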
+
+Imaginary-mass particles move faster than light, and they are not observable. On the other hand, the mathematics applicable to Wigner's little group for this particle has been useful in the two-by-two beam transfer matrix in ray and polarization optics [12].
+
+Let us go back to the four-momentum matrix of Equation (22). If we boost this matrix, it becomes:
+
+$$\begin{pmatrix} e^{\eta} & 0 \\ 0 & -e^{-\eta} \end{pmatrix} \qquad (39)$$
+
+which can be written as:
+
+$$e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & -e^{-2\eta} \end{pmatrix} \qquad (40)$$
+
+This matrix can be changed to the form of Equation (37) in the limit of large $\eta$.
+
+Indeed, the little groups for the massive, massless and imaginary-mass cases coincide in the large-$\eta$ limit. Thus, it is possible to jump from one little group to another; the process is continuous, but not necessarily analytic [12].
+
+The $\eta$ parameter can become large as the momentum becomes large or the mass becomes small. In this subsection, we considered the case for large momentum. However, it is of interest to see the limiting process when the mass becomes small, especially in view of the fact that neutrinos have small masses.
+
+### 3.2. Small-Mass Limit
+
+Let us start with a massive particle with fixed energy, $E$. Then, $p_0 = E$, and $p_z = E \cos \chi$. The four-momentum matrix is:
+
+$$E \begin{pmatrix} 1 + \cos \chi & 0 \\ 0 & 1 - \cos \chi \end{pmatrix} \qquad (41)$$
+
+The determinant of this matrix is $E^2 (\sin \chi)^2$. In the regime of the Lorentz group, this is the $(mass)^2$ and is a Lorentz-invariant quantity. There are no Lorentz transformations that change the angle, $\chi$. Thus, with this extra variable, it is possible to study the little groups for variable masses, including the small-mass limit and the zero-mass case.
+
+If $\chi = 0$, the matrix of Equation (41) becomes the four-momentum matrix for a massless particle. As $\chi$ becomes a small positive number, the matrix of Equation (41) can be written as:
+
+$$E(\sin\chi) \begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix} \qquad (42)$$
+---PAGE_BREAK---
+
+with
+
+$$e^{\eta} = \sqrt{\frac{1 + \cos \chi}{1 - \cos \chi}} \qquad (43)$$
+
+Here, again, the determinant of Equation (42) is $E^2(\sin \chi)^2$. With this matrix, we can construct Wigner's little group for each value of the angle, $\chi$. If $\chi$ is not zero, even if it is very small, the little group is $O(3)$-like, as in the case of all massive particles. As the angle, $\chi$, varies continuously from zero to 90°, the mass increases from zero to its maximum value.
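+
The equivalence of Equations (41) and (42) under the parametrization of Equation (43) is easy to confirm numerically; the values of $E$ and $\chi$ below are illustrative:

```python
import math

E, chi = 2.5, 0.3          # sample energy and angle chi
eta = math.log(math.sqrt((1 + math.cos(chi)) / (1 - math.cos(chi))))  # Equation (43)

p41 = (E * (1 + math.cos(chi)), E * (1 - math.cos(chi)))   # diagonal of Equation (41)
p42 = (E * math.sin(chi) * math.exp(eta),
       E * math.sin(chi) * math.exp(-eta))                 # diagonal of Equation (42)

assert all(abs(x - y) < 1e-12 for x, y in zip(p41, p42))
# the determinant E^2 sin^2 chi, i.e. the (mass)^2, is the same in both forms
assert abs(p41[0] * p41[1] - (E * math.sin(chi)) ** 2) < 1e-12
```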
+
+It is important to note that the little groups are different for the small-mass limit and for the zero-mass case. In this section, we studied the internal space-time symmetries dictated by Wigner's little groups, and we are able to present their Lorentz-covariant picture in Table 2.
+
+**Table 2.** Covariance of the energy-momentum relation and covariance of the internal space-time symmetry groups. The $\gamma$ parameter for the massless case has been studied in earlier papers in the four-by-four matrix formulation [6]. It corresponds to a gauge transformation. Among the three spin components, $S_3$ is along the direction of the momentum and remains invariant. It is called the "helicity".
+
+| Massive, Slow | Covariance | Massless, Fast |
+|---|---|---|
+| $E = p^2/2m$ | Einstein's $E = mc^2$ | $E = cp$ |
+| $S_3$ | | Helicity |
+| $S_1, S_2$ | Wigner's Little Group | Gauge Transformation |
+
+## 4. Jones Vectors and Stokes Parameters
+
+In studying polarized light propagating along the z direction, the traditional approach is to consider the x and y components of the electric fields. Their amplitude ratio and the phase difference determine the state of polarization. Thus, we can change the polarization either by adjusting the amplitudes, by changing the relative phase or both. For convenience, we call the optical device that changes amplitudes an "attenuator" and the device that changes the relative phase a "phase shifter".
+
+The traditional language for this two-component light is the Jones-vector formalism, which is discussed in standard optics textbooks [13]. In this formalism, the above two components are combined into one column matrix, with the exponential form for the sinusoidal function:
+
+$$\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp\{i(kz - \omega t + \phi_1)\} \\ b \exp\{i(kz - \omega t + \phi_2)\} \end{pmatrix} \qquad (44)$$
+
+This column matrix is called the Jones vector.
+
+When the beam goes through a medium with different indices of refraction along the x and y directions, we have to apply the matrix:
+
+$$\begin{pmatrix} e^{i\delta_1} & 0 \\ 0 & e^{i\delta_2} \end{pmatrix} = e^{i(\delta_1+\delta_2)/2} \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \qquad (45)$$
+
+with $\delta = \delta_1 - \delta_2$. In measurement processes, the overall phase factor, $e^{i(\delta_1+\delta_2)/2}$, cannot be detected and can therefore be deleted. The polarization effect of the filter is solely determined by the matrix:
+
+$$Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \qquad (46)$$
+
+which leads to a phase difference of $\delta$ between the x and y components. The form of this matrix is given in Equation (13), which serves as the rotation around the z axis in the Minkowski space and time.
+---PAGE_BREAK---
+
+Also along the x and y directions, the attenuation coefficients could be different. This will lead to
+the matrix [14]:
+
+$$
+\begin{pmatrix}
+e^{-\eta_1} & 0 \\
+0 & e^{-\eta_2}
+\end{pmatrix}
+=
+e^{-(\eta_1+\eta_2)/2}
+\begin{pmatrix}
+e^{\eta/2} & 0 \\
+0 & e^{-\eta/2}
+\end{pmatrix}
+\quad (47)
+$$
+
+with $\eta = \eta_2 - \eta_1$. If $\eta_1 = 0$ and $\eta_2 = \infty$, the above matrix becomes:
+
+$$
+\begin{pmatrix}
+1 & 0 \\
+0 & 0
+\end{pmatrix}
+\qquad (48)
+$$
+
+which eliminates the y component. This matrix is known as a polarizer in the textbooks [13] and is a
+special case of the attenuation matrix of Equation (47).
+
+This attenuation matrix tells us that the electric fields are attenuated at two different rates.
+The exponential factor, $e^{-(\eta_1+\eta_2)/2}$, reduces both components at the same rate and does not affect the
+state of polarization. The effect of polarization is solely determined by the squeeze matrix [14]:
+
+$$
+B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{49}
+$$
+
+This diagonal matrix is given in Equation (14). In the language of space-time symmetries, this matrix performs a Lorentz boost along the z direction.
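+
The factorization of Equation (47) and its polarizer limit of Equation (48) can be checked with sample attenuation parameters:

```python
import math

eta1, eta2 = 0.4, 1.1
eta = eta2 - eta1

lhs = (math.exp(-eta1), math.exp(-eta2))   # diagonal of Equation (47), left side
common = math.exp(-(eta1 + eta2) / 2)      # overall attenuation factor
rhs = (common * math.exp(eta / 2), common * math.exp(-eta / 2))  # factor times B(eta)
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))

# Equation (48): with eta1 = 0 and eta2 very large, the y component is removed
pol = (math.exp(-0.0), math.exp(-50.0))
assert pol[0] == 1.0 and pol[1] < 1e-20
```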
+
+The polarization axes are not always the x and y axes. For this reason, we need the rotation matrix:
+
+$$
+R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \quad (50)
+$$
+
+which, according to Equation (13), corresponds to the rotation around the *y* axis in the space-time symmetry.
+
+Among the rotation angles, the angle of 45° plays an important role in polarization optics.
+Indeed, if we rotate the squeeze matrix of Equation (49) by 45°, we end up with the squeeze matrix:
+
+$$
+S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \quad (51)
+$$
+
+which is also given in Equation (14). In the language of space-time physics, this matrix leads to a
+Lorentz boost along the x axis.
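+
That a 45° rotation of the axes turns the squeeze of Equation (49) into the squeeze of Equation (51) can be verified directly; since these matrices carry half-angles, the rotation is $R(\theta)$ with $\theta = 90°$:

```python
import math

def mul(m, n):  # 2x2 matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(theta):        # R(theta) of Equation (50)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def squeeze_z(eta):    # B(eta) of Equation (49)
    return [[math.exp(eta / 2), 0], [0, math.exp(-eta / 2)]]

lam = 0.8
rotated = mul(rot(math.pi / 2), mul(squeeze_z(lam), rot(-math.pi / 2)))
expected = [[math.cosh(lam / 2), math.sinh(lam / 2)],
            [math.sinh(lam / 2), math.cosh(lam / 2)]]   # Equation (51)
assert all(abs(rotated[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```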
+
+Indeed, the *G* matrix of Equation (9) is the most general form of the transformation matrix applicable to the Jones vector. Each of the above four matrices plays its important role in special relativity, as we discussed in Section 2. Their respective roles in optics and particle physics are given in Table 3.
+
+However, the Jones vector alone cannot tell us whether the two components are coherent with each other. In order to address this important degree of freedom, we use the coherency matrix [1,2]:
+
+$$
+C = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \tag{52}
+$$
+
+with:
+
+$$
+\langle \psi_i^* \psi_j \rangle = \frac{1}{T} \int_0^T \psi_i^*(t + \tau) \psi_j(t) dt \quad (53)
+$$
+---PAGE_BREAK---
+
+where $T$ is a sufficiently long time interval, much larger than $\tau$. Then, those four elements become [15]:
+
+$$
+\begin{aligned}
+S_{11} &= \langle \psi_1^* \psi_1 \rangle = a^2 & S_{12} &= \langle \psi_1^* \psi_2 \rangle = ab\,e^{-(\sigma+i\delta)} \\
+S_{21} &= \langle \psi_2^* \psi_1 \rangle = ab\,e^{-(\sigma-i\delta)} & S_{22} &= \langle \psi_2^* \psi_2 \rangle = b^2
+\end{aligned}
+\quad (54) $$
+
+The diagonal elements are the squared moduli of $\psi_1$ and $\psi_2$, respectively. The magnitude of the off-diagonal elements can be smaller than the product $ab$, if the two beams are not completely coherent. The $\sigma$ parameter specifies the degree of coherency.
+
+This coherency matrix is not always real, but it is Hermitian. Thus, it can be diagonalized by a unitary transformation. If this matrix is normalized so that its trace is one, it becomes a density matrix [16,17].
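+
The passage from the coherency matrix to a density matrix can be sketched numerically; the values of $a$, $b$, $\sigma$ and $\delta$ below are illustrative:

```python
import cmath
import math

a, b, sigma, delta = 1.2, 0.8, 0.5, 0.3
s12 = a * b * cmath.exp(-(sigma + 1j * delta))        # S_12 of Equation (54)
C = [[a * a + 0j, s12], [s12.conjugate(), b * b + 0j]]

# Hermitian: C equals its conjugate transpose
assert all(abs(C[i][j] - C[j][i].conjugate()) < 1e-12
           for i in range(2) for j in range(2))

# normalizing the trace to one gives a density matrix
tr = (C[0][0] + C[1][1]).real
rho = [[C[i][j] / tr for j in range(2)] for i in range(2)]
assert abs((rho[0][0] + rho[1][1]).real - 1) < 1e-12

# eigenvalues of the 2x2 Hermitian rho: real, non-negative, summing to one
disc = math.sqrt((rho[0][0].real - rho[1][1].real) ** 2 + 4 * abs(rho[0][1]) ** 2)
eigs = ((1 - disc) / 2, (1 + disc) / 2)
assert eigs[0] > 0 and abs(eigs[0] + eigs[1] - 1) < 1e-12
```

The positive eigenvalues show that a partially coherent beam corresponds to a mixed state.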
+
+**Table 3.** Polarization optics and special relativity sharing the same mathematics. Each matrix has its clear role in both optics and relativity. The determinant of the Stokes or the four-momentum matrix remains invariant under Lorentz transformations. It is interesting to note that the decoherence parameter (least fundamental) in optics corresponds to the mass (most fundamental) in particle physics.
+
+| Polarization Optics | Transformation Matrix | Particle Symmetry |
+|---|---|---|
+| Phase shift $\delta$ | $\begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix}$ | Rotation around $z$ |
+| Rotation around $z$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | Rotation around $y$ |
+| Squeeze along $x$ and $y$ | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | Boost along $z$ |
+| Squeeze along 45° | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | Boost along $x$ |
+| $(ab)^2 \sin^2 \chi$ | Determinant | $(\text{mass})^2$ |
+
+If we start with the Jones vector of the form of Equation (44), the coherency matrix becomes:
+
+$$ C = \begin{pmatrix} a^2 & ab e^{-(\sigma+i\delta)} \\ ab e^{-(\sigma-i\delta)} & b^2 \end{pmatrix} \qquad (55) $$
+
+We are interested in the symmetry properties of this matrix. Since the transformation matrix applicable to the Jones vector is the two-by-two representation of the Lorentz group, we are particularly interested in the transformation matrices applicable to this coherency matrix.
+
+The determinant and the trace of the above coherency matrix are:
+
+$$
+\begin{aligned}
+\det(C) &= (ab)^2 (1 - e^{-2\sigma}) \\
+\operatorname{tr}(C) &= a^2 + b^2
+\end{aligned}
+\quad (56) $$
+
+Since $e^{-\sigma}$ is always smaller than one, we can introduce an angle, $\chi$, defined as:
+
+$$ \cos \chi = e^{-\sigma} \quad (57) $$
+
+and call it the "decoherence angle". If $\chi = 0$, the decoherence is minimum, and it becomes maximum when $\chi = 90^\circ$. We can then write the coherency matrix of Equation (55) as:
+
+$$ C = \begin{pmatrix} a^2 & ab(\cos \chi)e^{-i\delta} \\ ab(\cos \chi)e^{i\delta} & b^2 \end{pmatrix} \quad (58) $$
+---PAGE_BREAK---
+
+The degree of polarization is defined as [13]:
+
+$$f = \sqrt{1 - \frac{4 \det(C)}{(tr(C))^2}} = \sqrt{1 - \frac{4(ab)^2 \sin^2 \chi}{(a^2 + b^2)^2}} \quad (59)$$
+
+This degree is one if $\chi = 0$. When $\chi = 90^\circ$, it becomes:
+
+$$\frac{a^2 - b^2}{a^2 + b^2} \qquad (60)$$
+
+Without loss of generality, we can assume that *a* is greater than *b*. If they are equal, this minimum degree of polarization is zero.
+
+Under the influence of the Naimark transformation given in Equation (11), this coherency matrix is transformed as:
+
+$$ C' = G\, C\, G^{\dagger} \qquad (61) $$
+
+It is more convenient to make the following linear combinations:
+
+$$
+\begin{aligned}
+S_0 &= \frac{S_{11} + S_{22}}{2} & S_3 &= \frac{S_{11} - S_{22}}{2} \\
+S_1 &= \frac{S_{12} + S_{21}}{2} & S_2 &= \frac{S_{12} - S_{21}}{2i}
+\end{aligned}
+\qquad (62) $$
+
+These four parameters are called Stokes parameters, and four-by-four transformations applicable to these parameters are widely known as Mueller matrices [1,3]. However, if the Naimark transformation given in Equation (61) is translated into the four-by-four Lorentz transformations according to the correspondence given in the Appendix A, the Mueller matrices constitute a representation of the Lorentz group.
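+
The group property rests on the invariance of $S_0^2 - S_3^2 - S_1^2 - S_2^2 = \det(C)$ under the Naimark transformation $C \to GCG^{\dagger}$. The sketch below checks this for one sample group element; the particular matrices chosen are illustrative, and the sign convention $S_2 = (S_{12} - S_{21})/2i$ is assumed:

```python
import cmath
import math

def mul(m, n):  # 2x2 complex matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(m):  # conjugate transpose
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def stokes(C):  # Stokes parameters of Equation (62)
    return (((C[0][0] + C[1][1]) / 2).real,
            ((C[0][0] - C[1][1]) / 2).real,
            ((C[0][1] + C[1][0]) / 2).real,
            ((C[0][1] - C[1][0]) / 2j).real)

a, b, chi = 1.0, 0.7, 0.4
off = a * b * math.cos(chi) * cmath.exp(-0.6j)        # delta = 0.6
C = [[a * a + 0j, off], [off.conjugate(), b * b + 0j]]

# one sample SL(2,c) element: phase shifter, squeeze, then rotation
G = mul([[cmath.exp(0.25j), 0], [0, cmath.exp(-0.25j)]],
        mul([[math.exp(0.3), 0], [0, math.exp(-0.3)]],
            [[math.cos(0.2), -math.sin(0.2)], [math.sin(0.2), math.cos(0.2)]]))
Cp = mul(G, mul(C, dagger(G)))

inv = lambda s: s[0] ** 2 - s[1] ** 2 - s[2] ** 2 - s[3] ** 2
assert abs(inv(stokes(C)) - inv(stokes(Cp))) < 1e-9
assert abs(inv(stokes(C)) - (a * b * math.sin(chi)) ** 2) < 1e-9  # equals det(C)
```

The invariant is exactly the Lorentz-invariant $(ab)^2 \sin^2 \chi$ of Table 3.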
+
+Another interesting aspect of the two-by-two matrix formalism is that the coherency matrix can be formulated in terms of quaternions [18–20]. The quaternion representation can be translated into rotations in four-dimensional space. There is a long history between the Lorentz group and the four-dimensional rotation group. It would be interesting to see what the quaternion representation of polarization optics will add to this history between those two similar, but different, groups.
+
+As for earlier applications of the two-by-two representation of the Lorentz group, we note the vector representation by Fedorov [21,22]. Fedorov showed that it is easier to carry out kinematical calculations using his two-by-two representation. For instance, the computation of the Wigner rotation angle is possible in the two-by-two representation [23]. Earlier papers on group theoretical approaches to polarization optics include also those on Mueller matrices [24] and on relativistic kinematics and polarization optics [25].
+
+**5. Geometry of the Poincaré Sphere**
+
+We now have the four-vector, ($S_0, S_3, S_1, S_2$), which is Lorentz-transformed like the space-time four-vector, $(t, z, x, y)$, or the energy-momentum four-vector of Equation (15). This Stokes four-vector has a three-component subspace, ($S_3, S_1, S_2$), which is like the three-dimensional Euclidean subspace
+---PAGE_BREAK---
+
+in the four-dimensional Minkowski space. In this three-dimensional subspace, we can introduce the
+spherical coordinate system with:
+
+$$
+\begin{align}
+&R = \sqrt{S_3^2 + S_1^2 + S_2^2} \notag \\
+&S_3 = R \cos \zeta \tag{63} \\
+&S_1 = R(\sin \zeta) \cos \delta \qquad S_2 = R(\sin \zeta) \sin \delta \notag
+\end{align}
+$$
+
+The radius of this sphere is:
+
+$$
+R = \frac{1}{2} \sqrt{(a^2 - b^2)^2 + 4(ab)^2 \cos^2 \chi} \quad (64)
+$$
+
+with:
+
+$$
+S_3 = \frac{a^2 - b^2}{2} \tag{65}
+$$
+
+This spherical picture is traditionally known as the Poincaré sphere [1–3]. Without loss of generality, we assume *a* is greater than *b*, and *S*₃ is non-negative. In addition, we can consider another sphere with its radius:
+
+$$
+S_0 = \frac{a^2 + b^2}{2} \tag{66}
+$$
+
+according to Equation (62).
+
+The radius, *R*, takes its maximum value, $S_0$, when $\chi = 0^\circ$. It decreases and reaches its minimum value, $S_3$, when $\chi = 90^\circ$. In terms of *R*, the degree of polarization given in Equation (59) is:
+
+$$
+f = \frac{R}{S_0} \tag{67}
+$$
+
+This aspect of the radius $R$ is illustrated in Figure 1a. The minimum value of $R$ is $S_3$ of Equation (65).
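+
The relation between Equations (59), (64), (66) and (67) can be confirmed numerically with illustrative amplitudes:

```python
import math

a, b, chi = 1.3, 0.9, 0.5
S0 = (a * a + b * b) / 2                                      # Equation (66)
R = 0.5 * math.sqrt((a * a - b * b) ** 2
                    + 4 * (a * b) ** 2 * math.cos(chi) ** 2)  # Equation (64)

f59 = math.sqrt(1 - 4 * (a * b) ** 2 * math.sin(chi) ** 2
                / (a * a + b * b) ** 2)                       # Equation (59)
assert abs(f59 - R / S0) < 1e-12                              # Equation (67)

# limits: f = 1 when chi = 0, and (a^2 - b^2)/(a^2 + b^2) when chi = 90 degrees
f_min = math.sqrt(1 - 4 * (a * b) ** 2 / (a * a + b * b) ** 2)
assert abs(f_min - (a * a - b * b) / (a * a + b * b)) < 1e-12
```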
+
+**Figure 1.** Radius of the Poincaré sphere. The radius, *R*, takes its maximum value, $S_0$, when the decoherence angle, $\chi$, is zero. It becomes smaller as $\chi$ increases. It becomes minimum when the angle reaches 90°. Its minimum value is $S_3$, as is illustrated in Figure 1a. The degree of polarization is maximum when $R = S_0$ and is minimum when $R = S_3$. According to Equation (65), $S_3$ becomes zero when $a = b$, and the minimum value of $R$ becomes zero, as is indicated in Figure 1b. Its maximum value is still $S_0$. This maximum radius can become larger because $b$ becomes larger to make $a = b$.
+---PAGE_BREAK---
+
+Let us go back to the four-momentum matrix of Equation (15). Its determinant is $m^2$ and remains invariant. Likewise, the determinant of the coherency matrix of Equation (58) should also remain invariant. The determinant in this case is:
+
+$$S_0^2 - R^2 = (ab)^2 \sin^2 \chi \quad (68)$$
+
+This quantity remains invariant. This aspect is shown on the last row of Table 3.
+
+Let us go back to Equation (49). This matrix changes the relative magnitude of the amplitudes, *a* and *b*. Thus, without loss of generality, we can study the Stokes parameters with *a* = *b*. The coherency matrix then becomes:
+
+$$C = a^2 \begin{pmatrix} 1 & (\cos \chi)e^{-i\delta} \\ (\cos \chi)e^{i\delta} & 1 \end{pmatrix} \quad (69)$$
+
+Since the angle, $\delta$, does not play any essential role, we can let $\delta = 0$ and write the coherency matrix as:
+
+$$C = a^2 \begin{pmatrix} 1 & \cos \chi \\ \cos \chi & 1 \end{pmatrix} \quad (70)$$
+
+Then, the minimum radius is $S_3 = 0$, and $S_0$ of Equation (62) and $R$ of Equation (64) become:
+
+$$S_0 = a^2, \qquad R = a^2 \cos \chi \quad (71)$$
+
+respectively. The Poincaré sphere becomes simplified to that of Figure 1b. This Poincaré sphere allows *R* to decrease to zero.
+
+The determinant of the above two-by-two matrix is:
+
+$$a^4 (1 - \cos^2 \chi) = a^4 \sin^2 \chi \quad (72)$$
+
+Since the Lorentz transformation leaves the determinant invariant, the change in this $\chi$ variable is not a Lorentz transformation. It is of course possible to construct a larger group in which this variable plays a role in a group transformation [23], but in this paper, we are more interested in its role in a particle gaining a mass. With this point in mind, let us diagonalize the coherency matrix of Equation (69). Then it takes the form:
+
+$$a^2 \begin{pmatrix} 1 + \cos \chi & 0 \\ 0 & 1 - \cos \chi \end{pmatrix} \quad (73)$$
+
+This form is the same as the four-momentum matrix given in Equation (41). There, we were not able to associate the variable, $\chi$, with any known physical process or symmetry operations of the Lorentz group. Fortunately, in this section, we noted that this variable comes from the degree of decoherence in polarization optics.
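+
The diagonalization leading to Equation (73) is elementary for the symmetric matrix of Equation (70); a sketch:

```python
import math

a, chi = 1.1, 0.35
c = a * a * math.cos(chi)
C = [[a * a, c], [c, a * a]]            # Equation (70)

# eigenvalues of [[p, q], [q, p]] are p + q and p - q (eigenvectors (1,1), (1,-1))
eig_hi, eig_lo = C[0][0] + C[0][1], C[0][0] - C[0][1]
assert abs(eig_hi - a * a * (1 + math.cos(chi))) < 1e-12   # Equation (73)
assert abs(eig_lo - a * a * (1 - math.cos(chi))) < 1e-12
assert abs(eig_hi * eig_lo - a ** 4 * math.sin(chi) ** 2) < 1e-12  # Equation (72)
```

The two eigenvalues match the diagonal entries of the four-momentum matrix of Equation (41) with $E = a^2$.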
+
+## 6. Concluding Remarks
+
+In this paper, we noted first that the group of Lorentz transformations can be formulated in terms of two-by-two matrices. This two-by-two formalism can also be used for transformations of the coherency matrix in polarization optics consisting of four Stokes parameters.
+
+Thus, this set of the four parameters is like a Minkowskian four-vector under four-by-four Lorentz transformations. In order to accommodate all four Stokes parameters, we noted that the radius of the Poincaré sphere should be allowed to vary from its maximum value to its minimum, corresponding to the fully coherent and minimally coherent cases, respectively.
+
+As in the case of the particle mass, the decoherence parameter in the Stokes formalism is invariant under Lorentz transformations. However, the Poincaré sphere, with a variable radius, provides the
+---PAGE_BREAK---
+
+mechanism for the variations of the decoherence parameter. It was noted that this variation gives a
+physical process whose mathematics correspond to that of the mass variable in particle physics.
+
+As for polarization optics, the traditional approach has been to work with two polarizer matrices, like:
+
+$$
+\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \qquad (74)
+$$
+
+We have replaced these two matrices by one attenuation matrix of Equation (47). This replacement enables us to formulate the Lorentz group for the Stokes parameters [15]. Furthermore, this attenuation matrix makes it possible to make a continuous transformation from one matrix to another by adjusting the attenuation parameters in optical media. It could be interesting to design optical experiments along this direction.
+
+**Acknowledgments:** This paper is in part based on an invited paper presented by one of the authors (YSK) at the Fedorov Memorial Symposium: International Conference "Spins and Photonic Beams at Interface", dedicated to the 100th anniversary of F.I. Fedorov (1911–1994) (Minsk, Belarus, 2011). He would like to thank Sergei Kilin for inviting him to the conference.
+
+In addition to numerous original contributions in optics, Fedorov wrote a book on two-by-two representations of the Lorentz group based on his own research on this subject. It was, therefore, quite appropriate for him (YSK) to present a paper on applications of the Lorentz group to optical science. He would like to thank V. A. Dluganovich and M. Glaynskii for bringing the papers and the book written by Academician Fedorov, as well as their own papers to his attention.
+
+**Conflicts of Interest:** The authors declare no conflict of interest.
+
+Appendix A
+
+In Section 2, we listed four two-by-two matrices whose repeated applications lead to the most general form of the two-by-two matrix, *G*. It is known that every *G* matrix can be translated into a four-by-four Lorentz transformation matrix through [4,9,15]:
+
+$$
+\begin{pmatrix}
+t' + z' \\
+x' - iy' \\
+x' + iy' \\
+t' - z'
+\end{pmatrix}
+=
+\begin{pmatrix}
+\alpha\alpha^* & \alpha\beta^* & \beta\alpha^* & \beta\beta^* \\
+\alpha\gamma^* & \alpha\delta^* & \beta\gamma^* & \beta\delta^* \\
+\gamma\alpha^* & \gamma\beta^* & \delta\alpha^* & \delta\beta^* \\
+\gamma\gamma^* & \gamma\delta^* & \delta\gamma^* & \delta\delta^*
+\end{pmatrix}
+\begin{pmatrix}
+t+z \\
+x-iy \\
+x+iy \\
+t-z
+\end{pmatrix}
+\tag{75}
+$$
+
+and:
+
+$$
+\begin{pmatrix} t \\ z \\ x \\ y \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & -1 \\ 0 & 1 & 1 & 0 \\ 0 & i & -i & 0 \end{pmatrix} \begin{pmatrix} t+z \\ x-iy \\ x+iy \\ t-z \end{pmatrix} \quad (76)
+$$
+
+These matrices appear to be complicated, but it is enough to study the matrices of Equation (13) and Equation (14) to cover all the matrices in this group. Thus, we give their four-by-four equivalents in this Appendix A:
+
+$$
+Z(\delta) = \begin{pmatrix} e^{i\delta/2} & 0 \\ 0 & e^{-i\delta/2} \end{pmatrix} \tag{77}
+$$
+
+leads to the four-by-four matrix:
+
+$$
+\begin{pmatrix}
+1 & 0 & 0 & 0 \\
+0 & 1 & 0 & 0 \\
+0 & 0 & \cos \delta & -\sin \delta \\
+0 & 0 & \sin \delta & \cos \delta
+\end{pmatrix}
+\qquad (78)
+$$
+---PAGE_BREAK---
+
+Likewise:
+
+$$
+B(\eta) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \rightarrow \begin{pmatrix} \cosh \eta & \sinh \eta & 0 & 0 \\ \sinh \eta & \cosh \eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (79)
+$$
+
+$$
+R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad (80)
+$$
+
+and:
+
+$$
+S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \rightarrow \begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad (81)
+$$
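+
The correspondence can be exercised numerically. The sketch below implements the map of Equations (75) and (76) for an arbitrary two-by-two matrix and confirms that $B(\eta)$ reproduces the boost matrix of Equation (79):

```python
import math

def mul4(m, n):  # 4x4 matrix product (complex entries allowed)
    return [[sum(m[i][k] * n[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def lorentz4(G):
    """Four-by-four matrix acting on (t, z, x, y) built from a 2x2 matrix G,
    following Equations (75) and (76)."""
    (al, be), (ga, de) = G
    M = [[al*al.conjugate(), al*be.conjugate(), be*al.conjugate(), be*be.conjugate()],
         [al*ga.conjugate(), al*de.conjugate(), be*ga.conjugate(), be*de.conjugate()],
         [ga*al.conjugate(), ga*be.conjugate(), de*al.conjugate(), de*be.conjugate()],
         [ga*ga.conjugate(), ga*de.conjugate(), de*ga.conjugate(), de*de.conjugate()]]
    # Equation (76): light-cone coordinates back to (t, z, x, y), factor 1/2 included
    P = [[0.5, 0, 0, 0.5], [0.5, 0, 0, -0.5], [0, 0.5, 0.5, 0], [0, 0.5j, -0.5j, 0]]
    # (t, z, x, y) to light-cone coordinates (t+z, x-iy, x+iy, t-z)
    Q = [[1, 1, 0, 0], [0, 0, 1, -1j], [0, 0, 1, 1j], [1, -1, 0, 0]]
    return mul4(P, mul4(M, Q))

eta = 0.6
B = [[complex(math.exp(eta / 2)), 0j], [0j, complex(math.exp(-eta / 2))]]
L = lorentz4(B)
expected = [[math.cosh(eta), math.sinh(eta), 0, 0],
            [math.sinh(eta), math.cosh(eta), 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]   # Equation (79)
assert all(abs(L[i][j] - expected[i][j]) < 1e-12 for i in range(4) for j in range(4))
```

The same routine can be applied to $Z(\delta)$, $R(\theta)$ and $S(\lambda)$ to recover Equations (78), (80) and (81).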
+
+References
+
+1. Azzam, R.M.A.; Bashara, N.M. *Ellipsometry and Polarized Light*; North-Holland: Amsterdam, The Netherlands, 1977.
+
+2. Born, M.; Wolf, E. *Principles of Optics*, 6th ed.; Pergamon: Oxford, NY, USA, 1980.
+
+3. Brosseau, C. *Fundamentals of Polarized Light: A Statistical Optics Approach*; John Wiley: New York, NY, USA, 1998.
+
+4. Naimark, M.A. Linear representation of the Lorentz group. *Uspekhi Mat. Nauk* **1954**, *9*, 19–93; Translated by Atkinson, F.V. *American Mathematical Society Translations*, Series 2, **1957**, *6*, 379–458.
+
+5. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon Press: Oxford, NY, USA, 1958; Translated by Swinfen, A.; Marstrand, O.J., 1964.
+
+6. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. *J. Math. Phys.* **1990**, *31*, 55–60. [CrossRef]
+
+7. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204. [CrossRef]
+
+8. Kim, Y.S. Poincaré Sphere and Decoherence Problems. Available online: http://arxiv.org/abs/1203.4539 (accessed on 17 June 2013).
+
+9. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.
+
+10. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725.
+
+11. Başkal, S.; Kim, Y.S. One analytic form for four branches of the ABCD matrix. *J. Mod. Opt.* **2010**, *57*, 1251–1259.
+[CrossRef]
+
+12. Başkal, S.; Kim, Y.S. Lorentz Group in Ray and Polarization Optics. In *Mathematical Optics: Classical, Quantum and Computational Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Taylor and Francis: New York, NY, USA, 2013; Chapter 9; pp. 303–349.
+
+13. Saleh, B.E.A.; Teich, M.C. *Fundamentals of Photonics*, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2007.
+
+14. Han, D.; Kim, Y.S.; Noz, M.E. Jones-vector formalism as a representation of the Lorentz group. *J. Opt. Soc. Am. A* **1997**, *14*, 2290–2298.
+
+15. Han, D.; Kim, Y.S.; Noz, M.E. Stokes parameters as a Minkowskian four-vector. *Phys. Rev. E* **1997**, *56*, 6065–6076.
+
+16. Feynman, R.P. *Statistical Mechanics*; Benjamin/Cummings: Reading, MA, USA, 1972.
+
+17. Han, D.; Kim, Y.S.; Noz, M.E. Illustrative example of Feynman's rest of the universe. *Am. J. Phys.* **1999**, *67*, 61–66. [CrossRef]
+
+18. Pellat-Finet, P. Geometric approach to polarization optics. II. Quarternionic representation of polarized light. *Optik* **1991**, *87*, 68–76.
+
+19. Dlugunovich, V.A.; Kurochkin, Y.A. Vector parameterization of the Lorentz group transformations and polar decomposition of Mueller matrices. *Opt. Spectrosc.* **2009**, *107*, 312–317. [CrossRef]
+---PAGE_BREAK---
+
+20. Tudor, T. Vectorial Pauli algebraic approach in polarization optics. I. Device and state operators. *Optik* **2010**, *121*, 1226–1235. [CrossRef]
+
+21. Fedorov, F.I. Vector parametrization of the Lorentz group and relativistic kinematics. *Theor. Math. Phys.* **1970**, *2*, 248–252. [CrossRef]
+
+22. Fedorov, F.I. *Lorentz Group*; [in Russian]; Global Science, Physical-Mathematical Literature: Moscow, Russia, 1979.
+
+23. Başkal, S.; Kim, Y.S. De Sitter group as a symmetry for optical decoherence. *J. Phys. A* **2006**, *39*, 7775–7788.
+
+24. Dargys, A. Optical Mueller matrices in terms of geometric algebra. *Opt. Commun.* **2012**, *285*, 4785–4792.
+[CrossRef]
+
+25. Pellat-Finet, P.; Basset, M. What is common to both polarization optics and relativistic kinematics? *Optik* **1992**, *90*, 101–106.
+
+© 2013 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
+
+Article
+
+Wigner's Space-Time Symmetries Based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere
+
+Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³
+
+¹ Department of Physics, Middle East Technical University, Ankara 06800, Turkey; E-Mail: baskal@newton.physics.metu.edu.tr
+
+² Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA
+
+³ Department of Radiology, New York University, New York, NY 10016, USA; E-Mail: marilyne.noz@gmail.com
+
+* E-Mail: yskim@umd.edu; Tel.: +1-301-937-1306.
+
+Received: 28 February 2014; in revised form: 28 May 2014 / Accepted: 9 June 2014 / Published: 25 June 2014
+
+**Abstract:** The second-order differential equation for a damped harmonic oscillator can be converted to two coupled first-order equations, with two two-by-two matrices leading to the group $Sp(2)$. It is shown that this oscillator system contains the essential features of Wigner's little groups dictating the internal space-time symmetries of particles in the Lorentz-covariant world. The little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. It is shown that the oscillation and damping modes of the oscillator correspond to the little groups for massive and imaginary-mass particles, respectively. When the system makes the transition from the oscillation to the damping mode, it corresponds to the little group for massless particles. Rotations around the momentum leave the four-momentum invariant. This degree of freedom extends the $Sp(2)$ symmetry to that of $SL(2, c)$, corresponding to the Lorentz group applicable to the four-dimensional Minkowski space. The Poincaré sphere contains the $SL(2, c)$ symmetry. In addition, it has a non-Lorentzian parameter allowing us to reduce the mass continuously to zero. It is thus possible to construct the little group for massless particles from that of the massive particle by reducing its mass to zero. Spin-1/2 particles and spin-1 particles are discussed in detail.
+
+**Keywords:** damped harmonic oscillators; coupled first-order equations; unimodular matrices; Wigner's little groups; Poincaré sphere; $Sp(2)$ group; $SL(2, c)$ group; gauge invariance; neutrinos; photons
+
+**PACS:** 03.65.Fd, 03.67.-a, 05.30.-d
+
+# 1. Introduction
+
+We are quite familiar with the second-order differential equation
+
+$$m \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + Ky = 0 \quad (1)$$
+
+for a damped harmonic oscillator. This equation has the same mathematical form as
+
+$$L \frac{d^2 Q}{dt^2} + R \frac{dQ}{dt} + \frac{1}{C} Q = 0 \quad (2)$$
+
+for electrical circuits, where L, R, and C are the inductance, resistance, and capacitance respectively. These two equations play fundamental roles in physical and engineering sciences. Since they start from the same set of mathematical equations, one set of problems can be studied in terms of the other. For instance, many mechanical phenomena can be studied in terms of electrical circuits.
+---PAGE_BREAK---
+
+In Equation (1), when $b = 0$, the equation is that of a simple harmonic oscillator with the frequency $\omega = \sqrt{K/m}$. As $b$ increases, the oscillation becomes damped. When $b$ is larger than $2\sqrt{Km}$, the oscillation disappears, as the solution is a damping mode.
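+
The three regimes are visible in the characteristic roots of Equation (1); the sketch below uses illustrative values of $m$ and $K$:

```python
import cmath

def characteristic_roots(m, b, K):
    """Roots of m s^2 + b s + K = 0 for the oscillator of Equation (1)."""
    disc = cmath.sqrt(complex(b * b - 4 * m * K))
    return (-b + disc) / (2 * m), (-b - disc) / (2 * m)

m, K = 1.0, 4.0    # omega = sqrt(K/m) = 2, damping threshold 2*sqrt(Km) = 4

r1, r2 = characteristic_roots(m, 0.0, K)   # b = 0: pure oscillation
assert abs(r1 - 2j) < 1e-12 and abs(r2 + 2j) < 1e-12

r1, r2 = characteristic_roots(m, 2.0, K)   # b < 2 sqrt(Km): damped oscillation
assert r1.imag != 0 and r1.real < 0

r1, r2 = characteristic_roots(m, 6.0, K)   # b > 2 sqrt(Km): pure damping
assert r1.imag == 0 and r1.real < 0 and r2.real < 0
```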
+
+While it is difficult to increase *b* continuously by mechanical means, this can be done electrically using Equation (2) by adjusting the resistance *R*. The transition from the oscillation mode to the damping mode is a continuous physical process.
+
+This *b* term leads to energy dissipation, but it is not regarded as a fundamental force. It is inconvenient in the Hamiltonian formulation of mechanics and troublesome in the transition to quantum mechanics, yet it plays an important role in classical mechanics. In this paper, this term will help us understand the fundamental space-time symmetries of elementary particles.
+
+We are interested in constructing the fundamental symmetry group for particles in the Lorentz-covariant world. For this purpose, we transform the second-order differential equation of Equation (1) to two coupled first-order equations using two-by-two matrices. Only two linearly independent matrices are needed. They are the anti-symmetric and symmetric matrices
+
+$$A = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad S = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \qquad (3)$$
+
+respectively. The anti-symmetric matrix *A* is Hermitian and corresponds to the oscillation part, while the symmetric *S* matrix corresponds to the damping.
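+
The division of labor between the two matrices can be made concrete with matrix exponentials: the anti-symmetric generator exponentiates to a bounded rotation (oscillation), while the symmetric one exponentiates to an unbounded squeeze (damping). The factors of $-i$ below are our bookkeeping for obtaining real matrices, and the truncated Taylor series stands in for a library matrix exponential:

```python
import math

def mul(m, n):  # 2x2 complex matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(m, terms=40):
    """Truncated Taylor-series exponential of a 2x2 complex matrix."""
    result = [[1 + 0j, 0j], [0j, 1 + 0j]]
    power = [[1 + 0j, 0j], [0j, 1 + 0j]]
    fact = 1.0
    for k in range(1, terms):
        power = mul(power, m)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0, -1j], [1j, 0]]   # anti-symmetric, Hermitian: oscillation part
S = [[0, 1j], [1j, 0]]    # symmetric: damping part

theta, lam = 0.7, 0.7
Rm = expm([[-1j * theta * A[i][j] for j in range(2)] for i in range(2)])
Sq = expm([[-1j * lam * S[i][j] for j in range(2)] for i in range(2)])

# exp(-i theta A) is a bounded rotation: the oscillation mode
assert abs(Rm[0][0] - math.cos(theta)) < 1e-10
assert abs(Rm[0][1] + math.sin(theta)) < 1e-10
# exp(-i lam S) is an unbounded squeeze: the damping mode
assert abs(Sq[0][0] - math.cosh(lam)) < 1e-10
assert abs(Sq[0][1] - math.sinh(lam)) < 1e-10
```

The rotation stays bounded for all $\theta$, while the squeeze grows without limit, mirroring the oscillation and damping solutions of Equation (1).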
+
+These two matrices lead to the *Sp*(2) group consisting of two-by-two unimodular matrices with real elements. This group is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates. This group is commonly called the *O*(2, 1) group.
+
+This *O*(2, 1) group can explain all the essential features of Wigner's little groups dictating internal space-time symmetries of particles [1]. Wigner defined his little groups as the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. He observed that the little groups are different for massive, massless, and imaginary-mass particles. It has been a challenge to design a mathematical model which will combine those three into one formalism, but we show that the damped harmonic oscillator provides the desired mathematical framework.
+
+For the two space-like coordinates, we can assign one of them to the direction of the momentum, and the other to the direction perpendicular to the momentum. Let the direction of the momentum be along the z axis, and let the perpendicular direction be along the x axis. We therefore study the kinematics of the group within the zx plane, then see what happens when we rotate the system around the z axis without changing the momentum [2].
+
+The Poincaré sphere for polarization optics contains the *SL*(2, *c*) symmetry isomorphic to the four-dimensional Lorentz group applicable to the Minkowski space [3–7]. Thus, the Poincaré sphere extends Wigner’s picture into the three space-like and one time-like coordinates. Specifically, this extension adds rotations around the given momentum which leaves the four-momentum invariant [2].
+
+While the particle mass is a Lorentz-invariant variable, the Poincaré sphere contains an extra variable which allows the mass to change. This variable allows us to take the mass-limit of the symmetry operations. The transverse rotational degrees of freedom collapse into one gauge degree of freedom and polarization of neutrinos is a consequence of the requirement of gauge invariance [8,9].
+
+The *SL*(2,*c*) group contains symmetries not seen in the three-dimensional rotation group. While we are familiar with two spinors for a spin-1/2 particle in nonrelativistic quantum mechanics, there are two additional spinors due to the reflection properties of the Lorentz group. There are thus 16 bilinear combinations of those four spinors. This leads to two scalars, two four-vectors, and one antisymmetric four-by-four tensor. The Maxwell-type electromagnetic field tensor can be obtained as a massless limit of this tensor [10].
+
+In Section 2, we review the damped harmonic oscillator in classical mechanics, and note that the solution can be either in the oscillation mode or damping mode depending on the magnitude of
+the damping parameter. The second-order equation can be translated into a first-order differential equation with two-by-two matrices. This first-order equation is similar to the Schrödinger equation for a spin-1/2 particle in a magnetic field.
+
+Section 3 shows that the two-by-two matrices of Section 2 can be formulated in terms of the $Sp(2)$ group. These matrices can be decomposed into the Bargmann and Wigner decompositions. Furthermore, this group is isomorphic to the three-dimensional Lorentz group with two space and one time-like coordinates.
+
+In Section 4, it is noted that this three-dimensional Lorentz group has all the essential features of Wigner's little groups which dictate the internal space-time symmetries of the particles in the Lorentz-covariant world. Wigner's little groups are the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. The Bargmann Wigner decompositions are shown to be useful tools for studying the little groups.
+
+In Section 5, we note that the given momentum is invariant under rotations around it. The addition of this rotational degree of freedom extends the $Sp(2)$ symmetry to the six-parameter $SL(2, c)$ symmetry. In the space-time language, this extends the three dimensional group to the Lorentz group applicable to three space and one time dimensions.
+
+Section 6 shows that the Poincaré sphere contains the symmetries of $SL(2, c)$ group. In addition, it contains an extra variable which allows us to change the mass of the particle, which is not allowed in the Lorentz group.
+
+In Section 7, the symmetries of massless particles are studied in detail. In addition to rotation around the momentum, Wigner's little group generates gauge transformations. While gauge transformations on spin-1 photons are well known, the gauge invariance leads to the polarization of massless spin-1/2 particles, as observed in neutrino polarizations.
+
+In Section 8, it is noted that there are four spinors for spin-1/2 particles in the Lorentz-covariant world. It is thus possible to construct 16 bilinear forms, applicable to two scalars, and two vectors, and one antisymmetric second-rank tensor. The electromagnetic field tensor is derived as the massless limit. This tensor is shown to be gauge-invariant.
+
+## 2. Classical Damped Oscillators
+
+For convenience, we write Equation (1) as
+
+$$ \frac{d^2 y}{dt^2} + 2\mu \frac{dy}{dt} + \omega^2 y = 0 \quad (4) $$
+
+with
+
+$$ \omega = \sqrt{\frac{K}{m}}, \quad \text{and} \quad \mu = \frac{b}{2m} \qquad (5) $$
+
+The damping parameter $\mu$ is positive when there are no external forces. When $\omega$ is greater than $\mu$, the solution takes the form
+
+$$ y = e^{-\mu t} [C_1 \cos(\omega't) + C_2 \sin(\omega't)] \quad (6) $$
+
+where
+
+$$ \omega' = \sqrt{\omega^2 - \mu^2} \qquad (7) $$
+
+and $C_1$ and $C_2$ are the constants to be determined by the initial conditions. This expression describes a damped oscillation. When $\mu$ is greater than $\omega$, the quantity inside the square-root sign becomes negative, and the solution becomes
+
+$$ y = e^{-\mu t} [C_3 \cosh(\mu't) + C_4 \sinh(\mu't)] \quad (8) $$
+
+with
+
+$$ \mu' = \sqrt{\mu^2 - \omega^2} \qquad (9) $$
+
+If $\omega = \mu$, both Equations (6) and (8) collapse into one solution
+
+$$y(t) = e^{-\mu t} [C_5 + C_6 t] \quad (10)$$
+
+These three different cases are treated separately in textbooks. Here we are interested in the transition from Equation (6) to Equation (8), via Equation (10). For convenience, we start from $\mu$ greater than $\omega$ with $\mu'$ given by Equation (9).
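These closed-form solutions can be checked numerically. Below is a minimal sketch (the parameter values and initial conditions are illustrative, not taken from the paper) that integrates Equation (4) directly and compares the result with the closed form of Equation (6) in the oscillation mode:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped oscillator of Equation (4): y'' + 2*mu*y' + omega**2 * y = 0.
# Illustrative parameters in the oscillation mode, omega > mu.
omega, mu = 2.0, 0.5
omega_p = np.sqrt(omega**2 - mu**2)          # omega' of Equation (7)

def rhs(t, s):
    y, v = s
    return [v, -2.0 * mu * v - omega**2 * y]

# Initial conditions y(0) = 1, y'(0) = 0 fix the constants of Equation (6):
# C1 = 1 and C2 = mu / omega'.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 5.0, 101)
y_numeric = sol.sol(t)[0]
y_closed = np.exp(-mu * t) * (np.cos(omega_p * t)
                              + (mu / omega_p) * np.sin(omega_p * t))
max_err = np.max(np.abs(y_numeric - y_closed))
```

The same comparison with Equation (8) works in the damping mode after swapping the roles of $\omega$ and $\mu$.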
+
+For a given value of $\mu$, the square root becomes zero when $\omega$ equals $\mu$. If $\omega$ becomes larger, the square root becomes imaginary and divides into two branches.
+
+$$\pm i \sqrt{\omega^2 - \mu^2} \quad (11)$$
+
+This is a continuous transition, but not an analytic continuation. To study this in detail, we translate the second order differential equation of Equation (4) into the first-order equation with two-by-two matrices.
+
+Given the solutions of Equations (6) and (10), it is convenient to use $\psi(t)$ defined as
+
+$$\psi(t) = e^{\mu t} y(t), \quad \text{and} \quad y = e^{-\mu t} \psi(t) \quad (12)$$
+
+Then $\psi(t)$ satisfies the differential equation
+
+$$\frac{d^2 \psi(t)}{dt^2} + (\omega^2 - \mu^2)\psi(t) = 0 \quad (13)$$
+
+## 2.1. Two-by-Two Matrix Formulation
+
+In order to convert this second-order equation to a first-order system, we introduce $\psi_1(t)$ and $\psi_2(t)$ satisfying two coupled differential equations
+
+$$\begin{align}
+\frac{d\psi_1(t)}{dt} &= (\mu - \omega)\psi_2(t) \tag{14} \\
+\frac{d\psi_2(t)}{dt} &= (\mu + \omega)\psi_1(t) \tag{15}
+\end{align}$$
+
+which can be written in matrix form as
+
+$$\frac{d}{dt} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \begin{pmatrix} 0 & \mu - \omega \\ \mu + \omega & 0 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \quad (16)$$
+
+Using the Hermitian and anti-Hermitian matrices of Equation (3) in Section 1, we construct the linear combination
+
+$$H = \omega \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} + \mu \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \quad (17)$$
+
+We can then consider the first-order differential equation
+
+$$i \frac{\partial}{\partial t} \psi(t) = H \psi(t) \quad (18)$$
+
+While this equation is like the Schrödinger equation for an electron in a magnetic field, the two-by-two matrix is not Hermitian. Its first matrix is Hermitian, but the second matrix is anti-Hermitian. It is of course an interesting problem to give a physical interpretation to this non-Hermitian matrix
+in connection with quantum dissipation [11], but this is beyond the scope of the present paper.
+The solution of Equation (18) is
+
+$$
+\psi(t) = \exp \left\{ \begin{pmatrix} 0 & -\omega + \mu \\ \omega + \mu & 0 \end{pmatrix} t \right\} \begin{pmatrix} C_7 \\ C_8 \end{pmatrix} \quad (19)
+$$
+
+where $C_7 = \psi_1(0)$ and $C_8 = \psi_2(0)$ respectively.
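As a quick consistency check (with illustrative numbers), the matrix exponential of Equation (19) can be evaluated with `scipy.linalg.expm`, and its first component should satisfy Equation (13):

```python
import numpy as np
from scipy.linalg import expm

# Equation (19): psi(t) = expm(M t) psi(0), with the matrix of Equation (16).
# Illustrative values in the oscillation mode, omega > mu.
omega, mu = 1.5, 0.4
M = np.array([[0.0, mu - omega],
              [mu + omega, 0.0]])
psi0 = np.array([1.0, 0.5])                  # C7, C8

def psi(t):
    return expm(M * t) @ psi0

# psi_1 should solve Equation (13): psi'' + (omega^2 - mu^2) psi = 0.
# Check with a central finite difference at an arbitrary time.
t, h = 0.7, 1e-4
second_deriv = (psi(t + h)[0] - 2.0 * psi(t)[0] + psi(t - h)[0]) / h**2
residual = second_deriv + (omega**2 - mu**2) * psi(t)[0]
```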
+
+## 2.2. Transition from the Oscillation Mode to the Damping Mode
+
+It appears straightforward to compute this expression by a Taylor expansion, but it is not.
+This issue was extensively discussed in the earlier papers by two of us [12,13]. The key idea is to write
+the matrix
+
+$$
+\begin{pmatrix}
+0 & -\omega + \mu \\
+\omega + \mu & 0
+\end{pmatrix}
+\qquad (20)
+$$
+
+as a similarity transformation of
+
+$$
+\omega' \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad (\omega > \mu) \tag{21}
+$$
+
+and as that of
+
+$$
+\mu' \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad (\mu > \omega) \tag{22}
+$$
+
+with $\omega'$ and $\mu'$ defined in Equations (7) and (9), respectively.
+Then the Taylor expansion leads to
+
+$$
+\begin{pmatrix} \cos(\omega' t) & -\sqrt{(\omega - \mu)/(\omega + \mu)}\, \sin(\omega' t) \\ \sqrt{(\omega + \mu)/(\omega - \mu)}\, \sin(\omega' t) & \cos(\omega' t) \end{pmatrix} \quad (23)
+$$
+
+when $\omega$ is greater than $\mu$. The solution $\psi(t)$ takes the form
+
+$$
+\begin{pmatrix}
+C_7 \cos(\omega't) - C_8 \sqrt{(\omega - \mu)/( \omega + \mu)} \sin(\omega't) \\
+C_7 \sqrt{(\omega + \mu)/( \omega - \mu)} \sin(\omega't) + C_8 \cos(\omega't)
+\end{pmatrix}
+\quad (24)
+$$
+
+If $\mu$ is greater than $\omega$, the Taylor expansion becomes
+
+$$
+\begin{pmatrix} \cosh(\mu' t) & \sqrt{(\mu - \omega)/(\mu + \omega)}\, \sinh(\mu' t) \\ \sqrt{(\mu + \omega)/(\mu - \omega)}\, \sinh(\mu' t) & \cosh(\mu' t) \end{pmatrix} \quad (25)
+$$
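Both Taylor-expansion results can be verified against a direct matrix exponential. The following sketch (with illustrative values of $\omega$, $\mu$, and $t$) checks the closed forms of Equations (23) and (25):

```python
import numpy as np
from scipy.linalg import expm

# Matrix of Equation (20), whose exponential gives the solution matrix.
def M(omega, mu):
    return np.array([[0.0, mu - omega], [mu + omega, 0.0]])

t = 0.8

# Oscillation mode, omega > mu: compare with Equation (23).
omega, mu = 2.0, 0.5
wp = np.sqrt(omega**2 - mu**2)               # omega' of Equation (7)
osc = np.array([
    [np.cos(wp * t), -np.sqrt((omega - mu) / (omega + mu)) * np.sin(wp * t)],
    [np.sqrt((omega + mu) / (omega - mu)) * np.sin(wp * t), np.cos(wp * t)],
])
err_osc = np.max(np.abs(expm(M(omega, mu) * t) - osc))

# Damping mode, mu > omega: compare with Equation (25).
omega, mu = 0.5, 2.0
mp = np.sqrt(mu**2 - omega**2)               # mu' of Equation (9)
damp = np.array([
    [np.cosh(mp * t), np.sqrt((mu - omega) / (mu + omega)) * np.sinh(mp * t)],
    [np.sqrt((mu + omega) / (mu - omega)) * np.sinh(mp * t), np.cosh(mp * t)],
])
err_damp = np.max(np.abs(expm(M(omega, mu) * t) - damp))
```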
+
+When $\omega$ is equal to $\mu$, both Equations (23) and (25) become
+
+$$
+\begin{pmatrix} 1 & 0 \\ 2\omega t & 1 \end{pmatrix} \tag{26}
+$$
+
+If $\omega$ is sufficiently close to but smaller than $\mu$, the matrix of Equation (25) becomes
+
+$$
+\begin{pmatrix}
+1 + (\epsilon/2)(2\omega t)^2 & +\epsilon(2\omega t) \\
+(2\omega t) & 1 + (\epsilon/2)(2\omega t)^2
+\end{pmatrix}
+\quad (27)
+$$
+
+with
+
+$$
+\epsilon = \frac{\mu - \omega}{\mu + \omega} \tag{28}
+$$
+
+If $\omega$ is sufficiently close to $\mu$, we can let
+
+$$ \mu + \omega = 2\omega, \quad \text{and} \quad \mu - \omega = 2\mu\epsilon \tag{29} $$
+
+If $\omega$ is greater than $\mu$, $\epsilon$ defined in Equation (28) becomes negative, and the matrix of Equation (23) becomes
+
+$$ \begin{pmatrix} 1 - (-\epsilon/2)(2\omega t)^2 & \epsilon(2\omega t) \\ 2\omega t & 1 - (-\epsilon/2)(2\omega t)^2 \end{pmatrix} \tag{30} $$
+
+We can rewrite this matrix as
+
+$$ \begin{pmatrix} 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 & -\sqrt{-\epsilon} \left[ (2\omega\sqrt{-\epsilon})t \right] \\ 2\omega t & 1 - (1/2) \left[ (2\omega\sqrt{-\epsilon})t \right]^2 \end{pmatrix} \tag{31} $$
+
+If $\epsilon$ becomes positive, Equation (27) can be written as
+
+$$ \begin{pmatrix} 1 + (1/2) \left[ (2\omega\sqrt{\epsilon})t \right]^2 & \sqrt{\epsilon} \left[ (2\omega\sqrt{\epsilon})t \right] \\ 2\omega t & 1 + (1/2) \left[ (2\omega\sqrt{\epsilon})t \right]^2 \end{pmatrix} \tag{32} $$
+
+The transition from Equation (31) to Equation (32) is continuous, as they become identical when $\epsilon = 0$. As $\epsilon$ changes its sign, the diagonal elements of the above matrices tell us how $\cos(\omega' t)$ becomes $\cosh(\mu' t)$. As for the upper-right element, $-\sin(\omega' t)$ becomes $\sinh(\mu' t)$. This non-analytic continuity is discussed in detail in one of the earlier papers by two of us on lens optics [13], where it was called "tangential continuity": the function and its first derivative are continuous, while the second derivative is not.
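The continuity at the transition point can also be seen numerically: approaching $\omega = \mu$ from either side, the exponential of Equation (20) tends to the triangular matrix of Equation (26). A sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import expm

# Near the transition point omega = mu, the exponential of Equation (20)
# approaches the triangular matrix of Equation (26) from both sides.
mu, t = 1.0, 0.6
limit = np.array([[1.0, 0.0], [2.0 * mu * t, 1.0]])   # Equation (26)

def transfer(omega):
    M = np.array([[0.0, mu - omega], [mu + omega, 0.0]])
    return expm(M * t)

gap_below = np.max(np.abs(transfer(mu - 1e-4) - limit))  # damping side
gap_above = np.max(np.abs(transfer(mu + 1e-4) - limit))  # oscillation side
```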
+
+## 2.3. Mathematical Forms of the Solutions
+
+In this section, we use the Heisenberg approach to the problem, and obtain the solutions in the form of two-by-two matrices. We note that
+
+1. For the oscillation mode, the trace of the matrix is smaller than 2. The solution takes the form of
+
+$$ \begin{pmatrix} \cos(x) & -e^{-\eta} \sin(x) \\ e^{\eta} \sin(x) & \cos(x) \end{pmatrix} \tag{33} $$
+
+with trace $2\cos(x)$. The trace is independent of $\eta$.
+
+2. For the damping mode, the trace of the matrix is greater than 2.
+
+$$ \begin{pmatrix} \cosh(x) & e^{-\eta} \sinh(x) \\ e^{\eta} \sinh(x) & \cosh(x) \end{pmatrix} \tag{34} $$
+
+with trace $2\cosh(x)$. Again, the trace is independent of $\eta$.
+
+3. For the transition mode, the trace is equal to 2, and the matrix is triangular and takes the form of
+
+$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \tag{35} $$
+
+When $x$ approaches zero, Equations (33) and (34) take the form
+
+$$ \begin{pmatrix} 1 - x^2/2 & -xe^{-\eta} \\ xe^{\eta} & 1 - x^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 + x^2/2 & xe^{-\eta} \\ xe^{\eta} & 1 + x^2/2 \end{pmatrix} \tag{36} $$
+
+respectively. These two matrices have the same lower-left element. Let us fix this element to be a
+positive number $\gamma$. Then
+
+$$
+x = \gamma e^{-\eta} \tag{37}
+$$
+
+Then the matrices of Equation (36) become
+
+$$
+\begin{pmatrix}
+1 - \gamma^2 e^{-2\eta} / 2 & -\gamma e^{-2\eta} \\
+\gamma & 1 - \gamma^2 e^{-2\eta} / 2
+\end{pmatrix},
+\quad
+\text{and}
+\quad
+\begin{pmatrix}
+1 + \gamma^2 e^{-2\eta} / 2 & \gamma e^{-2\eta} \\
+\gamma & 1 + \gamma^2 e^{-2\eta} / 2
+\end{pmatrix}
+\qquad (38)
+$$
+
+If we introduce a small number $\epsilon$ defined as
+
+$$
+\epsilon = \sqrt{\gamma} e^{-\eta} \tag{39}
+$$
+
+the matrices of Equation (38) become
+
+$$
+\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 - \gamma \epsilon^2/2 & -\sqrt{\gamma}\, \epsilon \\ \sqrt{\gamma}\, \epsilon & 1 - \gamma \epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{40}
+$$
+
+$$
+\begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} 1 + \gamma \epsilon^2/2 & \sqrt{\gamma} \epsilon \\ \sqrt{\gamma} \epsilon & 1 + \gamma \epsilon^2/2 \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}
+$$
+
+respectively, with $e^{-\eta} = \epsilon / \sqrt{\gamma}$.
+
+## 3. Groups of Two-by-Two Matrices
+
+If a two-by-two matrix has four complex elements, it has eight independent parameters. If the determinant of this matrix is one, it is known as an unimodular matrix and the number of independent parameters is reduced to six. The group of two-by-two unimodular matrices is called SL(2, c). This six-parameter group is isomorphic to the Lorentz group applicable to the Minkowski space of three space-like and one time-like dimensions [14].
+
+We can start with two subgroups of SL(2, c).
+
+1. While the matrices of SL(2, c) are not unitary, we can consider the subset consisting of unitary matrices. This subgroup is called SU(2), and is isomorphic to the three-dimensional rotation group. This three-parameter group is the basic scientific language for spin-1/2 particles.
+
+2. We can also consider the subset of matrices with real elements. This three-parameter group is called Sp(2) and is isomorphic to the three-dimensional Lorentz group applicable to two space-like and one time-like coordinates.
+
+In the Lorentz group, there are three space-like dimensions with x, y, and z coordinates.
+However, for many physical problems, it is more convenient to study the problem in the
+two-dimensional (x,z) plane first and generalize it to three-dimensional space by rotating the system
+around the z axis. This process can be called Euler decomposition and Euler generalization [2].
+
+First, we study *Sp*(2) symmetry in detail, and achieve the generalization by augmenting the
+two-by-two matrix corresponding to the rotation around the *z* axis. In this section, we study in detail
+properties of *Sp*(2) matrices, then generalize them to *SL*(2, *c*) in Section 5.
+
+There are three classes of Sp(2) matrices. Their traces can be smaller than two, greater than two, or equal to two. While these subjects are already discussed in the literature [15–17], our main interest is what happens as the trace goes from less than two to greater than two. Here we are guided by the model we have discussed in Section 2, which accounts for the transition from the oscillation mode to the damping mode.
+
+### 3.1. Lie Algebra of Sp(2)
+
+The two linearly independent matrices of Equation (3) can be written as
+
+$$ K_1 = \frac{1}{2} \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad J_2 = \frac{1}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad (41) $$
+
+However, the Taylor series expansion of the exponential form of Equation (23) or Equation (25) requires an additional matrix
+
+$$ K_3 = \frac{1}{2} \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \qquad (42) $$
+
+These matrices satisfy the following closed set of commutation relations.
+
+$$ [K_1, J_2] = iK_3, \quad [J_2, K_3] = iK_1, \quad [K_3, K_1] = -iJ_2 \qquad (43) $$
+
+These commutation relations remain invariant under Hermitian conjugation, even though $K_1$ and $K_3$ are anti-Hermitian. The algebra generated by these three matrices is known in the literature as the group $Sp(2)$ [17]. Furthermore, the closed set of commutation relations is commonly called the Lie algebra. Indeed, Equation (43) is the Lie algebra of the $Sp(2)$ group.
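The commutation relations of Equation (43) are easy to verify numerically:

```python
import numpy as np

# Numerical check of the Sp(2) Lie algebra of Equation (43).
K1 = 0.5 * np.array([[0, 1j], [1j, 0]])      # Equation (41)
J2 = 0.5 * np.array([[0, -1j], [1j, 0]])     # Equation (41)
K3 = 0.5 * np.array([[1j, 0], [0, -1j]])     # Equation (42)

def comm(a, b):
    return a @ b - b @ a

ok1 = np.allclose(comm(K1, J2), 1j * K3)
ok2 = np.allclose(comm(J2, K3), 1j * K1)
ok3 = np.allclose(comm(K3, K1), -1j * J2)
```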
+
+The Hermitian matrix $J_2$ generates the rotation matrix
+
+$$ R(\theta) = \exp(-i\theta J_2) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (44) $$
+
+and the anti-Hermitian matrices $K_1$ and $K_3$ generate the following squeeze matrices.
+
+$$ S(\lambda) = \exp(-i\lambda K_1) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \qquad (45) $$
+
+and
+
+$$ B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} \exp(\eta/2) & 0 \\ 0 & \exp(-\eta/2) \end{pmatrix} \qquad (46) $$
+
+respectively.
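One can confirm numerically that these generators exponentiate to the matrices of Equations (44)–(46), for illustrative values of the parameters:

```python
import numpy as np
from scipy.linalg import expm

# Generators of Equations (41) and (42).
K1 = 0.5 * np.array([[0, 1j], [1j, 0]])
J2 = 0.5 * np.array([[0, -1j], [1j, 0]])
K3 = 0.5 * np.array([[1j, 0], [0, -1j]])

theta, lam, eta = 0.7, 1.1, 0.9              # illustrative parameters

R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2), np.cos(theta / 2)]])   # Equation (44)
S = np.array([[np.cosh(lam / 2), np.sinh(lam / 2)],
              [np.sinh(lam / 2), np.cosh(lam / 2)]])     # Equation (45)
B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])         # Equation (46)

ok_R = np.allclose(expm(-1j * theta * J2), R)
ok_S = np.allclose(expm(-1j * lam * K1), S)
ok_B = np.allclose(expm(-1j * eta * K3), B)
```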
+
+Returning to the Lie algebra of Equation (43), since $K_1$ and $K_3$ are anti-Hermitian and $J_2$ is Hermitian, the set of commutation relations is invariant under Hermitian conjugation. In other words, the commutation relations remain invariant even if we change the sign of $K_1$ and $K_3$, while keeping that of $J_2$ invariant. Next, let us take the complex conjugate of the entire system. Then both the $J$ and $K$ matrices change their signs.
+
+### 3.2. Bargmann and Wigner Decompositions
+
+Since the $Sp(2)$ matrix has three independent parameters, it can be written as [15]
+
+$$ \begin{pmatrix} \cos(\alpha_1/2) & -\sin(\alpha_1/2) \\ \sin(\alpha_1/2) & \cos(\alpha_1/2) \end{pmatrix} \begin{pmatrix} \cosh\chi & \sinh\chi \\ \sinh\chi & \cosh\chi \end{pmatrix} \begin{pmatrix} \cos(\alpha_2/2) & -\sin(\alpha_2/2) \\ \sin(\alpha_2/2) & \cos(\alpha_2/2) \end{pmatrix} \qquad (47) $$
+
+This matrix can be written as
+
+$$ \begin{pmatrix} \cos(\delta/2) & -\sin(\delta/2) \\ \sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} \cos(\delta/2) & \sin(\delta/2) \\ -\sin(\delta/2) & \cos(\delta/2) \end{pmatrix} \qquad (48) $$
+
+where
+
+$$
+\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & \sinh \chi \\ \sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \quad (49)
+$$
+
+with
+
+$$
+\delta = \frac{1}{2}(\alpha_1 - \alpha_2), \quad \text{and} \quad \alpha = \frac{1}{2}(\alpha_1 + \alpha_2) \tag{50}
+$$
+
+If we complete the matrix multiplication of Equation (49), the result is
+
+$$
+\left(
+\begin{array}{cc}
+ (\cosh \chi) \cos \alpha & \sinh \chi - (\cosh \chi) \sin \alpha \\
+ \sinh \chi + (\cosh \chi) \sin \alpha & (\cosh \chi) \cos \alpha
+\end{array}
+\right)
+\qquad (51)
+$$
+
+We shall hereafter call the decomposition of Equation (49) the Bargmann decomposition. This means that every matrix in the Sp(2) group can be brought to the Bargmann decomposition by a similarity transformation of rotation, as given in Equation (48). This decomposition leads to an equi-diagonal matrix with two independent parameters.
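The matrix multiplication of Equation (49) can be confirmed numerically against the equi-diagonal form of Equation (51), for illustrative values of $\alpha$ and $\chi$:

```python
import numpy as np

# Multiply out the Bargmann decomposition of Equation (49) and compare
# with the equi-diagonal matrix of Equation (51).
alpha, chi = 0.6, 0.8                        # illustrative parameters

def rot(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2), np.cos(a / 2)]])

boost = np.array([[np.cosh(chi), np.sinh(chi)],
                  [np.sinh(chi), np.cosh(chi)]])

product = rot(alpha) @ boost @ rot(alpha)    # Equation (49)
expected = np.array([                        # Equation (51)
    [np.cosh(chi) * np.cos(alpha),
     np.sinh(chi) - np.cosh(chi) * np.sin(alpha)],
    [np.sinh(chi) + np.cosh(chi) * np.sin(alpha),
     np.cosh(chi) * np.cos(alpha)],
])
match = np.allclose(product, expected)
```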
+
+For the matrix of Equation (49), we can now consider the following three cases. Let us assume that $\chi$ is positive, and that the angle $\alpha$ is less than 90°. Let us look at the upper-right element.
+
+1. If it is negative with $[\sinh\chi < (\cosh\chi)\sin\alpha]$, then the trace of the matrix is smaller than 2, and the matrix can be written as
+
+$$
+\begin{pmatrix}
+\cos(\theta/2) & -e^{-\eta}\sin(\theta/2) \\
+e^{\eta}\sin(\theta/2) & \cos(\theta/2)
+\end{pmatrix}
+\qquad (52)
+$$
+
+with
+
+$$
+\cos(\theta/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{(\cosh\chi)\sin\alpha - \sinh\chi}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{53}
+$$
+
+2. If it is positive with $[\sinh \chi > (\cosh \chi) \sin \alpha]$, then the trace is greater than 2, and the matrix can be written as
+
+$$
+\begin{pmatrix}
+\cosh(\lambda/2) & e^{-\eta} \sinh(\lambda/2) \\
+e^{\eta} \sinh(\lambda/2) & \cosh(\lambda/2)
+\end{pmatrix}
+\qquad (54)
+$$
+
+with
+
+$$
+\cosh(\lambda/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{-2\eta} = \frac{\sinh\chi - (\cosh\chi)\sin\alpha}{(\cosh\chi)\sin\alpha + \sinh\chi} \tag{55}
+$$
+
+3. If it is zero with $[(\sinh \chi = (\cosh \chi) \sin \alpha)]$, then the trace is equal to 2, and the matrix takes the form
+
+$$
+\begin{pmatrix}
+1 & 0 \\
+2 \sinh \chi & 1
+\end{pmatrix}
+\qquad (56)
+$$
+
+The above repeats the mathematics given in Section 2.3.
+
+Returning to Equations (52) and (53), they can be decomposed into
+
+$$
+M(\theta, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (57)
+$$
+
+and
+
+$$
+M(\lambda, \eta) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (58)
+$$
+
+respectively. In view of the physical examples given in Section 6, we shall call this the “Wigner decomposition.” Unlike the Bargmann decomposition, the Wigner decomposition is in the form of a similarity transformation.
+
+We note that both Equations (57) and (58) are written as similarity transformations. Thus
+
+$$[M(\theta, \eta)]^n = \begin{pmatrix} \cos(n\theta/2) & -e^{-\eta} \sin(n\theta/2) \\ e^{\eta} \sin(n\theta/2) & \cos(n\theta/2) \end{pmatrix} \quad (59)$$
+
+$$[M(\lambda, \eta)]^n = \begin{pmatrix} \cosh(n\lambda/2) & e^{-\eta} \sinh(n\lambda/2) \\ e^{\eta} \sinh(n\lambda/2) & \cosh(n\lambda/2) \end{pmatrix} \quad (60)$$
+
+$$[M(\gamma)]^n = \begin{pmatrix} 1 & 0 \\ n\gamma & 1 \end{pmatrix} \quad (61)$$
+
+These expressions are useful for studying periodic systems [18].
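The power formulas follow directly from the similarity-transformation form; Equation (59), for instance, can be checked numerically with illustrative parameters:

```python
import numpy as np

# Equation (59): powers of the Wigner-decomposed matrix M(theta, eta)
# advance the angle linearly.
theta, eta, n = 0.5, 0.7, 6                  # illustrative values

def M(th):
    return np.array([
        [np.cos(th / 2), -np.exp(-eta) * np.sin(th / 2)],
        [np.exp(eta) * np.sin(th / 2), np.cos(th / 2)],
    ])

power = np.linalg.matrix_power(M(theta), n)
match = np.allclose(power, M(n * theta))
```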
+
+The question is what physics these decompositions describe in the real world. To address this, we study what the Lorentz group does in the real world, and study the isomorphism between the $Sp(2)$ group and the Lorentz group applicable to the three-dimensional space consisting of one time and two space coordinates.
+
+### 3.3. Isomorphism with the Lorentz Group
+
+The purpose of this section is to give physical interpretations of the mathematical formulas given in Section 3.2. We will interpret these formulae in terms of the Lorentz transformations which are normally described by four-by-four matrices. For this purpose, it is necessary to establish a correspondence between the two-by-two representation of Section 3.2 and the four-by-four representations of the Lorentz group.
+
+Let us consider the Minkowskian space-time four-vector
+
+$$ (t, z, x, y) \qquad (62) $$
+
+where $(t^2 - z^2 - x^2 - y^2)$ remains invariant under Lorentz transformations. The Lorentz group consists of four-by-four matrices performing Lorentz transformations in the Minkowski space.
+
+In order to give physical interpretations to the three two-by-two matrices given in Equations (44)–(46), we consider rotations around the *y* axis, boosts along the *x* axis, and boosts along the *z* axis. The transformation is restricted to the three-dimensional subspace of $(t,z,x)$. It is then straightforward to construct those four-by-four transformation matrices where the *y* coordinate remains invariant. They are given in Table 1, together with their generators. Those four-by-four generators satisfy the Lie algebra given in Equation (43).
+
+**Table 1.** Matrices in the two-by-two representation, and their corresponding four-by-four generators and transformation matrices.
+
+| Matrices | Generators | Four-by-Four Generators | Transformation Matrices |
+|---|---|---|---|
+| $R(\theta)$ | $J_2 = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| $B(\eta)$ | $K_3 = \frac{1}{2}\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$ | $\begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\eta & \sinh\eta & 0 & 0 \\ \sinh\eta & \cosh\eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| $S(\lambda)$ | $K_1 = \frac{1}{2}\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+
+
+
+## 4. Internal Space-Time Symmetries
+
+We have seen that to each four-by-four Lorentz transformation matrix there corresponds a two-by-two matrix. It is possible to give physical interpretations to those four-by-four matrices. It must thus be possible to attach a physical interpretation to each two-by-two matrix.
+
+Since 1939, when Wigner introduced the concept of the little groups [1], many papers have been published on this subject, but most of them were based on the four-by-four representation. In this section, we shall give the formalism of the little groups in the language of two-by-two matrices. In so doing, we provide physical interpretations to the Bargmann and Wigner decompositions introduced in Section 3.2.
+
+## 4.1. Wigner's Little Groups
+
+In [1], Wigner started with a free relativistic particle with momentum, then constructed subgroups of the Lorentz group whose transformations leave the four-momentum invariant. These subgroups thus define the internal space-time symmetry of the given particle. Without loss of generality, we assume that the particle momentum is along the z direction. Thus rotations around the momentum leave the momentum invariant, and this degree of freedom defines the helicity, or the spin parallel to the momentum.
+
+We shall use the word "Wigner transformation" for the transformation which leaves the four-momentum invariant:
+
+1. For a massive particle, it is possible to find a Lorentz frame where it is at rest with zero momentum. The four-momentum can be written as $m(1,0,0,0)$, where $m$ is the mass. This four-momentum is invariant under rotations in the three-dimensional $(z,x,y)$ space.
+
+2. For an imaginary-mass particle, there is the Lorentz frame where the energy component vanishes. The momentum four-vector can be written as $p(0,1,0,0)$, where $p$ is the magnitude of the momentum.
+
+3. If the particle is massless, its four-momentum becomes $p(1,1,0,0)$. Here the first and second components are equal in magnitude.
+
+The constant factors in these four-momenta do not play any significant roles. Thus we write them as $(1,0,0,0)$, $(0,1,0,0)$, and $(1,1,0,0)$ respectively. Since Wigner worked with these three specific four-momenta [1], we call them Wigner four-vectors.
+
+All of these four-vectors are invariant under rotations around the z axis. The rotation matrix is
+
+$$Z(\phi) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\phi & -\sin\phi \\ 0 & 0 & \sin\phi & \cos\phi \end{pmatrix} \quad (63)$$
+
+In addition, the four-momentum of a massive particle is invariant under the rotation around the y axis, whose four-by-four matrix is given in Table 1. The four-momentum of an imaginary-mass particle is invariant under the boost matrix $S(\lambda)$ given in Table 1. The problem for the massless particle is more complicated, and will be discussed in detail in Section 7. See Table 2.
+
+**Table 2.** Wigner four-vectors and Wigner transformation matrices applicable to two space-like and one time-like dimensions. Each Wigner four-vector remains invariant under the application of its Wigner matrix.
+
+| Mass | Wigner Four-Vector | Wigner Transformation |
+|---|---|---|
+| Massive | $(1, 0, 0, 0)$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| Massless | $(1, 1, 0, 0)$ | $\begin{pmatrix} 1 + \gamma^2/2 & -\gamma^2/2 & \gamma & 0 \\ \gamma^2/2 & 1 - \gamma^2/2 & \gamma & 0 \\ \gamma & -\gamma & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| Imaginary mass | $(0, 1, 0, 0)$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+
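As a sketch (in the $(t, z, x, y)$ ordering used here), one can verify that the rotation of Equation (63) leaves all three Wigner four-vectors invariant:

```python
import numpy as np

# The rotation Z(phi) of Equation (63), around the z axis,
# in the (t, z, x, y) ordering.
phi = 0.8                                    # illustrative angle
Zr = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, np.cos(phi), -np.sin(phi)],
    [0, 0, np.sin(phi), np.cos(phi)],
])

wigner_vectors = [
    np.array([1.0, 0.0, 0.0, 0.0]),   # massive particle at rest
    np.array([1.0, 1.0, 0.0, 0.0]),   # massless particle
    np.array([0.0, 1.0, 0.0, 0.0]),   # imaginary-mass particle
]
invariant = all(np.allclose(Zr @ v, v) for v in wigner_vectors)
```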
+## 4.2. Two-by-Two Formulation of Lorentz Transformations
+
+The Lorentz group is a group of four-by-four matrices performing Lorentz transformations on the Minkowskian vector space of $(t,z,x,y)$, leaving the quantity
+
+$$t^2 - z^2 - x^2 - y^2 \quad (64)$$
+
+invariant. It is possible to perform the same transformation using two-by-two matrices [7,14,19].
+
+In this two-by-two representation, the four-vector is written as
+
+$$X = \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \quad (65)$$
+
+where its determinant is precisely the quantity given in Equation (64) and the Lorentz transformation on this matrix is a determinant-preserving, or unimodular transformation. Let us consider the transformation matrix as [7,19]
+
+$$G = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \quad \text{and} \quad G^{\dagger} = \begin{pmatrix} \alpha^{*} & \gamma^{*} \\ \beta^{*} & \delta^{*} \end{pmatrix} \quad (66)$$
+
+with
+
+$$\det(G) = 1 \quad (67)$$
+
+and the transformation
+
+$$X' = GXG^{\dagger} \quad (68)$$
+
+Since $G$ is not a unitary matrix, Equation (68) is not a unitary transformation; we call it instead the “Hermitian transformation”. Equation (68) can be written as
+
+$$\begin{pmatrix} t' + z' & x' - iy' \\ x' + iy' & t' - z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \quad (69)$$
+
+It is still a determinant-preserving unimodular transformation, thus it is possible to write this as a four-by-four transformation matrix applicable to the four-vector $(t,z,x,y)$ [7,14].
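A short numerical sketch (with an arbitrary illustrative unimodular $G$ built from the matrices of Section 3) confirms that the Hermitian transformation of Equation (68) preserves the determinant of $X$, i.e., the interval of Equation (64):

```python
import numpy as np

# Four-vector matrix X of Equation (65) for illustrative coordinates.
t0, z, x, y = 2.0, 0.3, 0.5, -0.7
X = np.array([[t0 + z, x - 1j * y],
              [x + 1j * y, t0 - z]])

# An illustrative unimodular G: a boost B(eta) times a rotation R(theta).
eta, theta = 0.4, 0.9
B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2), np.cos(theta / 2)]])
G = B @ R                                    # det(G) = 1

Xp = G @ X @ G.conj().T                      # Equation (68)
interval_before = np.linalg.det(X).real      # t^2 - z^2 - x^2 - y^2
interval_after = np.linalg.det(Xp).real
```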
+
+Since the $G$ matrix starts with four complex numbers and its determinant is one by Equation (67), it has six independent parameters. The group of these $G$ matrices is known to be locally isomorphic
+to the group of four-by-four matrices performing Lorentz transformations on the four-vector $(t, z, x, y)$. In other words, for each $G$ matrix there is a corresponding four-by-four Lorentz-transform matrix [7].
+
+The matrix $G$ is not a unitary matrix, because its Hermitian conjugate is not always its inverse. This group has a unitary subgroup called $SU(2)$ and another consisting only of real matrices called $Sp(2)$. For this latter subgroup, it is sufficient to work with the three matrices $R(\theta), S(\lambda)$, and $B(\eta)$ given in Equations (44)–(46), respectively. Each of these matrices has its corresponding four-by-four matrix applicable to $(t, z, x, y)$. These matrices with their four-by-four counterparts are tabulated in Table 1.
+
+The energy-momentum four vector can also be written as a two-by-two matrix. It can be written as
+
+$$P = \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \qquad (70)$$
+
+with
+
+$$\det(P) = p_0^2 - p_x^2 - p_y^2 - p_z^2 \qquad (71)$$
+
+which means
+
+$$\det(P) = m^2 \qquad (72)$$
+
+where *m* is the particle mass.
+
+The Lorentz transformation can be written explicitly as
+
+$$P' = GPG^+ \qquad (73)$$
+
+or
+
+$$\begin{pmatrix} p'_0 + p'_z & p'_x - ip'_y \\ p'_x + ip'_y & p'_0 - p'_z \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \qquad (74)$$
+
+This is a unimodular transformation, and the mass is a Lorentz-invariant variable. Furthermore, it was shown in [7] that Wigner's little groups for massive, massless, and imaginary-mass particles can be explicitly defined in terms of two-by-two matrices.
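As a quick numerical sanity check of this invariance (an illustrative sketch, not part of the original derivation; the matrix $G$ below is an arbitrary SL(2, c) element built from a sample rotation and boost), one can verify Equations (71)–(73) with numpy:

```python
import numpy as np

def momentum_matrix(p0, pz, px, py):
    """Two-by-two form of the four-momentum, Equation (70)."""
    return np.array([[p0 + pz, px - 1j * py],
                     [px + 1j * py, p0 - pz]])

# An arbitrary SL(2, c) element: a rotation followed by a boost (det = 1).
theta, eta = 0.7, 0.4
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2), np.cos(theta / 2)]])
B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])
G = B @ R

P = momentum_matrix(p0=2.0, pz=1.0, px=0.5, py=0.3)
P_prime = G @ P @ G.conj().T   # Lorentz transformation, Equation (73)

# det(P) = m^2 is Lorentz invariant, Equations (71)-(72)
print(np.isclose(np.linalg.det(P_prime).real, np.linalg.det(P).real))
```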
+
+Wigner's little group consists of two-by-two matrices satisfying
+
+$$P = WPW^{+} \qquad (75)$$
+
+The two-by-two $W$ matrix is not an identity matrix, but tells about the internal space-time symmetry of a particle with a given energy-momentum four-vector. This aspect was not known when Einstein formulated his special relativity in 1905, hence the internal space-time symmetry was not an issue at that time. We call the two-by-two matrix $W$ the Wigner matrix, and call the condition of Equation (75) the Wigner condition.
+
+If the determinant of $P$ is positive, then $P$ is proportional to
+
+$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad (76)$$
+
+corresponding to a massive particle at rest, while if the determinant is negative, it is proportional to
+
+$$P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (77)$$
+---PAGE_BREAK---
+
+corresponding to an imaginary-mass particle moving faster than light along the z direction, with
+a vanishing energy component. If the determinant is zero, P is
+
+$$
+P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \tag{78}
+$$
+
+which is proportional to the four-momentum matrix for a massless particle moving along the z direction.
+
+For all three cases, the matrix of the form
+
+$$
+Z(\phi) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \quad (79)
+$$
+
+will satisfy the Wigner condition of Equation (75). This matrix corresponds to rotations around
+the z axis.
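The Wigner condition for $Z(\phi)$ can be checked numerically for all three momentum matrices; the following sketch (with an arbitrary sample angle) is illustrative only:

```python
import numpy as np

phi = 1.1
Z = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])  # Equation (79)

# Representative momentum matrices of Equations (76)-(78)
P_massive   = np.diag([1.0, 1.0])
P_imaginary = np.diag([1.0, -1.0])
P_massless  = np.diag([1.0, 0.0])

# Wigner condition, Equation (75): Z P Z^dagger = P for each case
for P in (P_massive, P_imaginary, P_massless):
    print(np.allclose(Z @ P @ Z.conj().T, P))
```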
+
+For the massive particle with the four-momentum of Equation (76), the transformations with the rotation matrix of Equation (44) leave the *P* matrix of Equation (76) invariant. Together with the *Z*(*φ*) matrix, this rotation matrix leads to the subgroup consisting of the unitary subset of the *G* matrices. The unitary subset of *G* is *SU*(2) corresponding to the three-dimensional rotation group dictating the spin of the particle [14].
+
+For the massless case, the transformations with the triangular matrix of the form
+
+$$
+\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix} \qquad (80)
+$$
+
+leave the momentum matrix of Equation (78) invariant. The physics of this matrix has a stormy history,
+and the variable $\gamma$ leads to a gauge transformation applicable to massless particles [8,9,20,21].
+
+For a particle with an imaginary mass, a W matrix of the form of Equation (45) leaves the
+four-momentum of Equation (77) invariant.
+
+Table 3 summarizes the transformation matrices for Wigner's little groups for massive, massless,
+and imaginary-mass particles. Furthermore, in terms of their traces, the matrices given in this
+subsection can be compared with those given in Section 2.3 for the damped oscillator. The comparisons
+are given in Table 4.
+
+Of course, it is a challenging problem to have one expression for all three classes. This problem
+has been discussed in the literature [12], and the damped oscillator case of Section 2 addresses the
+continuity problem.
+
+**Table 3.** Wigner vectors and Wigner matrices in the two-by-two representation. The trace of the transformation matrix tells whether the particle's $m^2$ is positive, zero, or negative.
+
+| Particle Mass | Four-Momentum | Transform Matrix | Trace |
+|---|---|---|---|
+| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | less than 2 |
+| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & \gamma \\ 0 & 1 \end{pmatrix}$ | equal to 2 |
+| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | greater than 2 |
+---PAGE_BREAK---
+
+**Table 4.** Damped Oscillators and Space-time Symmetries. Both share Sp(2) as their symmetry group.
+
+| Trace | Damped Oscillator | Particle Symmetry |
+|---|---|---|
+| Smaller than 2 | Oscillation Mode | Massive Particles |
+| Equal to 2 | Transition Mode | Massless Particles |
+| Larger than 2 | Damping Mode | Imaginary-mass Particles |
+
+## 5. Lorentz Completion of Wigner's Little Groups
+
+So far we have considered transformations applicable only to (t, z, x) space. In order to study the full symmetry, we have to consider rotations around the z axis. As previously stated, when a particle moves along this axis, this rotation defines the helicity of the particle.
+
+In [1], Wigner worked out the little group of a massive particle at rest. When the particle gains a momentum along the z direction, the single particle can reverse the direction of momentum, the spin, or both. What happens to the internal space-time symmetries is discussed in this section.
+
+### 5.1. Rotation around the z Axis
+
+In Section 3, our kinematics was restricted to the two-dimensional space of z and x, and thus includes rotations around the y axis. We now introduce the four-by-four matrix of Equation (63) performing rotations around the z axis. Its corresponding two-by-two matrix was given in Equation (79). Its generator is
+
+$$J_3 = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (81)$$
+
+If we add this matrix to the three generators we used in Sections 3 and 3.2, we end up with the closed set of commutation relations
+
+$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, K_j] = i\epsilon_{ijk}K_k, \quad [K_i, K_j] = -i\epsilon_{ijk}J_k \qquad (82)$$
+
+with
+
+$$J_i = \frac{1}{2}\sigma_i, \quad \text{and} \quad K_i = \frac{i}{2}\sigma_i \qquad (83)$$
+
+where $\sigma_i$ are the two-by-two Pauli spin matrices.
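The commutation relations of Equations (82) and (83) can be verified directly; the following numpy sketch (illustrative, with a brute-force loop over all index pairs) does so:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

J = [s / 2 for s in sigma]        # rotation generators, Equation (83)
K = [1j * s / 2 for s in sigma]   # boost generators, Equation (83)

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

ok = True
for i in range(3):
    for j in range(3):
        # Equation (82), checked term by term
        ok &= np.allclose(comm(J[i], J[j]), sum(1j * eps[i, j, k] * J[k] for k in range(3)))
        ok &= np.allclose(comm(J[i], K[j]), sum(1j * eps[i, j, k] * K[k] for k in range(3)))
        ok &= np.allclose(comm(K[i], K[j]), sum(-1j * eps[i, j, k] * J[k] for k in range(3)))
print(ok)
```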
+
+For each of these two-by-two matrices there is a corresponding four-by-four matrix generating Lorentz transformations on the four-dimensional Lorentz group. When these two-by-two matrices are imaginary, the corresponding four-by-four matrices were given in Table 1. If they are real, the corresponding four-by-four matrices were given in Table 5.
+---PAGE_BREAK---
+
+**Table 5.** Two-by-two and four-by-four generators not included in Table 1. The generators given there and given here constitute the set of six generators for SL(2, c) or of the Lorentz group given in Equation (82).
+
+| Generator | Two-by-Two | Four-by-Four |
+|---|---|---|
+| $J_3$ | $\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}$ |
+| $J_1$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \end{pmatrix}$ |
+| $K_2$ | $\frac{1}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix}$ |
+
+This set of commutation relations is known as the Lie algebra of SL(2, c), the group of two-by-two matrices with complex elements and unit determinant. It is also the Lie algebra of the Lorentz group performing Lorentz transformations on the four-dimensional Minkowski space.
+
+This set has many useful subgroups. For the group SL(2, c), there is a subgroup consisting only of real matrices, generated by the two-by-two matrices given in Table 1. This three-parameter subgroup is precisely the Sp(2) group we used in Sections 3 and 3.2. Their generators satisfy the Lie algebra given in Equation (43).
+
+In addition, this group has the following Wigner subgroups governing the internal space-time symmetries of particles in the Lorentz-covariant world [1]:
+
+1. The $J_i$ matrices form a closed set of commutation relations. The subgroup generated by these Hermitian matrices is SU(2) for electron spins. The corresponding rotation group does not change the four-momentum of the particle at rest. This is Wigner's little group for massive particles. If the particle is at rest, the two-by-two form of the four-vector is given by Equation (76). The Lorentz transformation generated by $J_3$ takes the form
+
+$$ \begin{pmatrix} e^{i\phi/2} & 0 \\ 0 & e^{-i\phi/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (84) $$
+
+Similar computations can be carried out for $J_1$ and $J_2$.
+
+2. There is another Sp(2) subgroup, generated by $K_1$, $K_2$, and $J_3$. They satisfy the commutation relations
+
+$$ [K_1, K_2] = -iJ_3, \quad [J_3, K_1] = iK_2, \quad [K_2, J_3] = iK_1. \quad (85) $$
+
+The Wigner transformations generated by these two-by-two matrices leave the momentum four-vector of Equation (77) invariant. For instance, the transformation matrix generated by $K_2$ takes the form
+
+$$ \exp(-i\xi K_2) = \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \quad (86) $$
+
+and the Wigner transformation takes the form
+
+$$ \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \cosh(\xi/2) & -i\sinh(\xi/2) \\ i\sinh(\xi/2) & \cosh(\xi/2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad (87) $$
+
+Computations with $K_1$ and $J_3$ lead to the same result.
+---PAGE_BREAK---
+
+Since the determinant of the four-momentum matrix is negative, the particle has an imaginary mass. In the language of the four-by-four matrix, the transformation matrices leave the four-momentum of the form (0, 1, 0, 0) invariant.
+
+3. Furthermore, we can consider the following combinations of the generators:
+
+$$N_1 = K_1 - J_2 = \begin{pmatrix} 0 & i \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = K_2 + J_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad (88)$$
+
+Together with $J_3$, they satisfy the following commutation relations.
+
+$$[N_1, N_2] = 0, \quad [N_1, J_3] = -iN_2, \quad [N_2, J_3] = iN_1 \qquad (89)$$
+
+In order to understand this set of commutation relations, we can consider an $xy$ coordinate system in a two-dimensional space. Rotation around the origin is then generated by
+
+$$J_3 = -i \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) \qquad (90)$$
+
+and the two translations are generated by
+
+$$N_1 = -i \frac{\partial}{\partial x}, \quad \text{and} \quad N_2 = -i \frac{\partial}{\partial y} \qquad (91)$$
+
+for the x and y directions respectively. These operators satisfy the commutation relations given in Equation (89).
+
+The two-by-two matrices of Equation (88) generate the following transformation matrix.
+
+$$G(\gamma, \phi) = \exp[-i\gamma(N_1 \cos\phi + N_2 \sin\phi)] = \begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \qquad (92)$$
+
+The two-by-two form for the four-momentum for the massless particle is given by Equation (78). The computation of the Hermitian transformation using this matrix is
+
+$$\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (93)$$
+
+confirming that $N_1$ and $N_2$, together with $J_3$, are the generators of the $E(2)$-like little group for massless particles in the two-by-two representation. The transformation that does this in the physical world is described in the following section.
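A short numerical sketch (sample values of $\gamma$ and $\phi$ chosen arbitrarily) confirms both the commutation relations of Equation (89) and the invariance of Equation (93):

```python
import numpy as np

N1 = np.array([[0, 1j], [0, 0]])                 # Equation (88)
N2 = np.array([[0, 1], [0, 0]], dtype=complex)   # Equation (88)
J3 = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# Equation (89)
print(np.allclose(comm(N1, N2), 0))
print(np.allclose(comm(N1, J3), -1j * N2))
print(np.allclose(comm(N2, J3), 1j * N1))

# Equations (92)-(93): G leaves the massless momentum matrix invariant
gamma, phi = 0.8, 0.3
G = np.array([[1, gamma * np.exp(-1j * phi)], [0, 1]])
P = np.diag([1.0, 0.0])
print(np.allclose(G @ P @ G.conj().T, P))
```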
+
+### 5.2. $E(2)$-Like Symmetry of Massless Particles
+
+From the four-by-four generators of $K_{1,2}$ and $J_{1,2}$, we can write
+
+$$N_1 = \begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & i & 0 \\ i & -i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = \begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ i & -i & 0 & 0 \end{pmatrix} \qquad (94)$$
+---PAGE_BREAK---
+
+These matrices lead to the transformation matrix of the form
+
+$$
+G(\gamma, \phi) = \begin{pmatrix}
+1 + \gamma^2/2 & -\gamma^2/2 & \gamma \cos \phi & \gamma \sin \phi \\
+\gamma^2/2 & 1 - \gamma^2/2 & \gamma \cos \phi & \gamma \sin \phi \\
+-\gamma \cos \phi & \gamma \cos \phi & 1 & 0 \\
+-\gamma \sin \phi & \gamma \sin \phi & 0 & 1
+\end{pmatrix} \quad (95)
+$$
+
+This matrix leaves the four-momentum invariant, as we can see from
+
+$$
+G(\gamma, \phi) \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \tag{96}
+$$
+
+When it is applied to the photon four-potential
+
+$$
+G(\gamma, \phi) \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} A_0 \\ A_3 \\ A_1 \\ A_2 \end{pmatrix} + \gamma (A_1 \cos \phi + A_2 \sin \phi) \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad (97)
+$$
+
+where the Lorentz condition leads to $A_3 = A_0$ in the zero-mass case. Gauge transformations are well known for electromagnetic fields and photons. Thus, Wigner's little group leads to gauge transformations.
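The four-by-four gauge transformation can be checked numerically; this sketch (with arbitrary sample values for $\gamma$, $\phi$, and the four-potential) verifies Equations (96) and (97):

```python
import numpy as np

def G4(gamma, phi):
    """Four-by-four little-group matrix of Equation (95), acting on (t, z, x, y)."""
    c, s = gamma * np.cos(phi), gamma * np.sin(phi)
    g2 = gamma**2 / 2
    return np.array([[1 + g2, -g2, c, s],
                     [g2, 1 - g2, c, s],
                     [-c, c, 1, 0],
                     [-s, s, 0, 1]])

gamma, phi = 0.6, 0.9
G = G4(gamma, phi)

# Equation (96): the light-like four-momentum is invariant
p = np.array([1.0, 1.0, 0.0, 0.0])
print(np.allclose(G @ p, p))

# Equation (97): gauge transformation of the four-potential, with A0 = A3
A = np.array([2.0, 2.0, 0.7, -0.4])   # (A0, A3, A1, A2)
expected = A + gamma * (A[2] * np.cos(phi) + A[3] * np.sin(phi)) * np.array([1, 1, 0, 0])
print(np.allclose(G @ A, expected))
```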
+
+In the two-by-two representation, the electromagnetic four-potential takes the form
+
+$$
+\begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} \qquad (98)
+$$
+
+with the Lorentz condition $A_3 = A_0$. Then the two-by-two form of Equation (97) is
+
+$$
+\begin{pmatrix} 1 & \gamma e^{-i\phi} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \gamma e^{i\phi} & 1 \end{pmatrix} \quad (99)
+$$
+
+which becomes
+
+$$
+\begin{pmatrix} 2A_0 & A_1 - iA_2 \\ A_1 + iA_2 & 0 \end{pmatrix} + \begin{pmatrix} 2\gamma (A_1 \cos \phi + A_2 \sin \phi) & 0 \\ 0 & 0 \end{pmatrix} \quad (100)
+$$
+
+This is the two-by-two equivalent of the gauge transformation given in Equation (97).
+
+For massless spin-1/2 particles, we start with the two-by-two expression of $G(\gamma, \phi)$ given in Equation (92), and consider the spinors
+
+$$
+u = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad v = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{101}
+$$
+
+for the spin-up and spin-down states respectively. Then
+
+$$
+Gu = u, \quad \text{and} \quad Gv = v + \gamma e^{-i\phi} u
+\quad (102)
+$$
+
+This means that the spinor $u$ for spin up is invariant under the gauge transformation, while $v$ is not. Thus, the polarization of massless spin-1/2 particles, such as neutrinos, is a consequence of gauge invariance. We shall continue this discussion in Section 7.
+---PAGE_BREAK---
+
+### 5.3. Boosts along the z Axis
+
+In Sections 4.1 and 5.1, we studied Wigner transformations for fixed values of the four-momenta.
+The next question is what happens when the system is boosted along the z direction, with the
+transformation
+
+$$
+\begin{pmatrix} t' \\ z' \end{pmatrix} = \begin{pmatrix} \cosh \eta & \sinh \eta \\ \sinh \eta & \cosh \eta \end{pmatrix} \begin{pmatrix} t \\ z \end{pmatrix} \qquad (103)
+$$
+
+Then the four-momenta become
+
+$$
+(\cosh \eta, \sinh \eta, 0, 0), \quad (\sinh \eta, \cosh \eta, 0, 0), \quad e^{\eta}(1, 1, 0, 0) \tag{104}
+$$
+
+respectively for the massive, imaginary-mass, and massless cases. In the two-by-two representation,
+the boost matrix is
+
+$$
+\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \tag{105}
+$$
+
+and the four-momenta of Equation (104) become
+
+$$
+\begin{pmatrix} e^\eta & 0 \\ 0 & e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & -e^{-\eta} \end{pmatrix}, \quad \begin{pmatrix} e^\eta & 0 \\ 0 & 0 \end{pmatrix} \tag{106}
+$$
+
+respectively. These matrices become Equations (76)–(78) respectively when $\eta = 0$.
+
+We are interested in Lorentz transformations which leave a given non-zero momentum invariant.
+We can consider a Lorentz boost preceded and followed by identical rotation
+matrices, as described in Figure 1, with the transformation matrix
+
+$$
+\begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \begin{pmatrix} \cosh \chi & -\sinh \chi \\ -\sinh \chi & \cosh \chi \end{pmatrix} \begin{pmatrix} \cos(\alpha/2) & -\sin(\alpha/2) \\ \sin(\alpha/2) & \cos(\alpha/2) \end{pmatrix} \quad (107)
+$$
+
+which becomes
+
+$$
+\begin{pmatrix}
+(\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\
+-\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
+\end{pmatrix}
+\quad (108)
+$$
+---PAGE_BREAK---
+
+Figure 1. Bargmann and Wigner decompositions. (a) Bargmann decomposition; (b) Wigner decomposition. In the Bargmann decomposition, we start from a momentum along the z direction. We rotate, boost, and rotate to bring the momentum back to its original position. The resulting matrix is the product of one boost and two rotation matrices. In the Wigner decomposition, the particle is boosted to the frame where the Wigner transformation can be applied, the Wigner transformation is made there, and the particle is boosted back to the original state of the momentum. This process can also be written as the product of three simple matrices.
+
+Except for the sign of $\chi$, the two-by-two matrices of Equations (107) and (108) are identical with those given in Section 3.2. We are thus ready to interpret this expression in terms of physics.
+
+1. If the particle is massive, the off-diagonal elements of Equation (108) have opposite signs, and this matrix can be decomposed into
+
+$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (109) $$
+
+with
+
+$$ \cos(\theta/2) = (\cosh \chi) \cos \alpha, \quad \text{and} \quad e^{2\eta} = \frac{\cosh(\chi) \sin \alpha + \sinh \chi}{\cosh(\chi) \sin \alpha - \sinh \chi} \quad (110) $$
+
+and
+
+$$ e^{2\eta} = \frac{p_0 + p_z}{p_0 - p_z} \quad (111) $$
+
+According to Equation (109) the first matrix (far right) reduces the particle momentum to zero. The second matrix rotates the particle without changing the momentum. The third matrix boosts the particle to restore its original momentum. This is the extension of Wigner's original idea to moving particles.
+
+2. If the particle has an imaginary mass, the off-diagonal elements of Equation (108) have the same sign,
+
+$$ \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix} \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (112) $$
+---PAGE_BREAK---
+
+with
+
+$$ \cosh(\lambda/2) = (\cosh\chi)\cos\alpha, \quad \text{and} \quad e^{2\eta} = \frac{\sinh\chi + \cosh(\chi)\sin\alpha}{\sinh\chi - \cosh(\chi)\sin\alpha} \qquad (113) $$
+
+and
+
+$$ e^{2\eta} = \frac{p_0 + p_z}{p_z - p_0} \qquad (114) $$
+
+This is also a three-step operation. The first matrix brings the particle momentum to the zero-energy state with $p_0 = 0$. Boosts along the x or y direction do not change this four-momentum. We can then boost the particle back to restore its momentum. This operation is also an extension of Wigner's original little group. Thus, it is quite appropriate to call the formulas of Equations (109) and (112) Wigner decompositions.
+
+3. If the particle mass is zero with
+
+$$ \sinh \chi = (\cosh \chi) \sin \alpha \qquad (115) $$
+
+the $\eta$ parameter becomes infinite, and the Wigner decomposition is no longer useful. We can then go back to the Bargmann decomposition of Equation (107). With the condition of Equation (115), Equation (108) becomes
+
+$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \qquad (116) $$
+
+with
+
+$$ \gamma = 2 \sinh \chi \qquad (117) $$
+
+The decomposition ending with a triangular matrix is called the Iwasawa decomposition [16,22] and its physical interpretation was given in Section 5.2. The $\gamma$ parameter does not depend on $\eta$.
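Both the massive-case Wigner decomposition of Equations (109) and (110) and the massless (Iwasawa) limit of Equations (115)–(117) can be checked numerically. The sketch below (arbitrary sample parameters; the inverse boost written explicitly) is illustrative only:

```python
import numpy as np

def bargmann(alpha, chi):
    """Two-by-two Bargmann matrix of Equation (108)."""
    C, S = np.cosh(chi), np.sinh(chi)
    return np.array([[np.cos(alpha) * C, -S - np.sin(alpha) * C],
                     [-S + np.sin(alpha) * C, np.cos(alpha) * C]])

# --- Massive case: cosh(chi) * sin(alpha) > sinh(chi) ---
alpha, chi = 0.5, 0.2
C, S = np.cosh(chi), np.sinh(chi)
theta = 2 * np.arccos(C * np.cos(alpha))   # Equation (110)
eta = 0.5 * np.log((C * np.sin(alpha) + S) / (C * np.sin(alpha) - S))

B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2), np.cos(theta / 2)]])
wigner = B @ R @ np.linalg.inv(B)   # boost to rest, rotate, boost back
print(np.allclose(wigner, bargmann(alpha, chi)))

# --- Massless limit: sinh(chi) = cosh(chi) * sin(alpha), Equation (115) ---
chi0 = 0.3
alpha0 = np.arcsin(np.tanh(chi0))
gamma = 2 * np.sinh(chi0)             # Equation (117)
T = np.array([[1, -gamma], [0, 1]])   # triangular matrix, Equation (116)
print(np.allclose(bargmann(alpha0, chi0), T))
```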
+
+Thus, we have given physical interpretations to the Bargmann and Wigner decompositions of Section 3.2. Consider what happens when the momentum becomes large. Then $\eta$ becomes large for the nonzero-mass cases, and all three four-momenta in Equation (106) become
+
+$$ e^{\eta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (118) $$
+
+As for the Bargmann-Wigner matrices, they become the triangular matrix of Equation (116), with $\gamma = \sin(\theta/2)e^{\eta}$ and $\gamma = \sinh(\lambda/2)e^{\eta}$, respectively for the massive and imaginary-mass cases.
+
+In Section 5.2, we concluded that the triangular matrix corresponds to gauge transformations. However, particles with imaginary mass are not observed. For massive particles, we can start with the three-dimensional rotation group. The rotation around the z axis defines the helicity, which remains invariant under the boost along the z direction. As for the transverse rotations, they become gauge transformations, as illustrated in Table 6.
+
+**Table 6.** Covariance of the energy-momentum relation, and covariance of the internal space-time symmetry. Under the Lorentz boost along the z direction, $J_3$ remains invariant, and this invariant component of the angular momentum is called the helicity. The transverse component $J_1$ and $J_2$ collapse into a gauge transformation. The $\gamma$ parameter for the massless case has been studied in earlier papers in the four-by-four matrix formulation of Wigner's little groups [8,21].
+
+| Massive, Slow | Covariance | Massless, Fast |
+|---|---|---|
+| $E = p^2/2m$ | Einstein's $E = mc^2$ | $E = cp$ |
+| $J_3$ | Wigner's Little Group | Helicity |
+| $J_1, J_2$ | | Gauge Transformation |
+---PAGE_BREAK---
+
+### 5.4. Conjugate Transformations
+
+The most general form of the SL(2, c) matrix is given in Equation (66). Transformation operators for the Lorentz group are given in exponential form as:
+
+$$
+D = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i + \eta_i K_i) \right\} \qquad (119)
+$$
+
+where the $J_i$ are the generators of rotations and the $K_i$ are the generators of proper Lorentz boosts. They satisfy the Lie algebra given in Equation (43). This set of commutation relations is invariant under the sign change of the boost generators $K_i$. Thus, we can consider “dot conjugation” defined as
+
+$$
+\dot{D} = \exp \left\{ -i \sum_{i=1}^{3} (\theta_i J_i - \eta_i K_i) \right\} \quad (120)
+$$
+
+Since $K_i$ are anti-Hermitian while $J_i$ are Hermitian, the Hermitian conjugate of the above expression is
+
+$$
+D^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i + \eta_i K_i) \right\} \qquad (121)
+$$
+
+while the Hermitian conjugate of $\dot{D}$ is
+
+$$
+\dot{D}^{\dagger} = \exp \left\{ -i \sum_{i=1}^{3} (-\theta_i J_i - \eta_i K_i) \right\} \qquad (122)
+$$
+
+Since we understand the rotation around the z axis, we can now restrict the kinematics to the
+zt plane, and work with the Sp(2) symmetry. Then the D matrices can be considered as Bargmann
+decompositions. First, $D$ and $\dot{D}$ are
+
+$$
+D(\alpha, \chi) = \begin{pmatrix}
+(\cos \alpha) \cosh \chi & \sinh \chi - (\sin \alpha) \cosh \chi \\
+\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
+\end{pmatrix} \tag{123}
+$$
+
+$$
+\dot{D}(\alpha, \chi) = \begin{pmatrix}
+(\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\
+-\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
+\end{pmatrix} \quad (124)
+$$
+
+These matrices correspond to the "D loops" given in Figure 2a,b respectively. The dot conjugation changes the direction of the boosts, and leads to the inversion of space, namely the parity operation.
+
+We can also consider changing the direction of rotations. This results in the Hermitian
+conjugates, whose matrices are
+
+$$
+D^{\dagger}(\alpha, \chi) = \begin{pmatrix}
+(\cos \alpha) \cosh \chi & \sinh \chi + (\sin \alpha) \cosh \chi \\
+\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
+\end{pmatrix} \quad (125)
+$$
+
+$$
+\dot{D}^{\dagger}(\alpha, \chi) = \begin{pmatrix}
+(\cos \alpha) \cosh \chi & -\sinh \chi + (\sin \alpha) \cosh \chi \\
+-\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi
+\end{pmatrix} \quad (126)
+$$
+
+From the exponential expressions of Equations (119)–(122), it is clear that
+
+$$
+\dot{D}^{\dagger} = D^{-1}, \quad \text{and} \quad D^{\dagger} = \dot{D}^{-1} \tag{127}
+$$
+
+The D loop given in Figure 1 corresponds to $\dot{D}$. We shall return to these loops in Section 7.
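The relations between $D$, $\dot{D}$, and their inverses can be verified numerically from the explicit matrices of Equations (123) and (124); the sample values of $\alpha$ and $\chi$ below are arbitrary:

```python
import numpy as np

alpha, chi = 0.6, 0.4
c, s = np.cos(alpha), np.sin(alpha)
C, S = np.cosh(chi), np.sinh(chi)

D = np.array([[c * C, S - s * C], [S + s * C, c * C]])         # Equation (123)
D_dot = np.array([[c * C, -S - s * C], [-S + s * C, c * C]])   # Equation (124)

# Dot conjugation composed with Hermitian conjugation inverts D, Equation (127)
print(np.allclose(D_dot.T.conj(), np.linalg.inv(D)))
print(np.allclose(D.T.conj(), np.linalg.inv(D_dot)))
```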
+---PAGE_BREAK---
+
+Figure 2. Four D-loops resulting from the Bargmann decomposition. (a) Bargmann decomposition from Figure 1; (b) Direction of the Lorentz boost is reversed; (c) Direction of rotation is reversed; (d) Both directions are reversed. These operations correspond to the space-inversion, charge conjugation, and the time reversal respectively.
+
+## 6. Symmetries Derivable from the Poincaré Sphere
+
+The Poincaré sphere serves as the basic language for polarization physics. Its underlying
+language is the two-by-two coherency matrix. This coherency matrix contains the symmetry of SL(2, c)
+isomorphic to the Lorentz group applicable to three space-like and one time-like dimensions [4,6,7].
+
+For polarized light propagating along the z direction, the amplitude ratio and phase difference of
+electric field x and y components traditionally determine the state of polarization. Hence, the polarization
+can be changed by adjusting the amplitude ratio or the phase difference or both. Usually, the optical
+device which changes amplitude is called an “attenuator” (or “amplifier”) and the device which changes
+the relative phase a “phase shifter”.
+
+Let us start with the Jones vector:
+
+$$
+\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t)] \\ a \exp[i(kz - \omega t)] \end{pmatrix} \tag{128}
+$$
+---PAGE_BREAK---
+
+To this matrix, we can apply the phase shift matrix of Equation (79) which brings the Jones vector to
+
+$$
+\begin{pmatrix} \psi_1(z,t) \\ \psi_2(z,t) \end{pmatrix} = \begin{pmatrix} a \exp[i(kz - \omega t - \phi/2)] \\ a \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \quad (129)
+$$
+
+The generator of this phase shifter is $J_3$, given in Table 5.
+
+The optical beam can be attenuated differently in the two directions. The resulting matrix is
+
+$$
+e^{-\mu} \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \qquad (130)
+$$
+
+with attenuation factors $\exp(-\mu + \eta/2)$ and $\exp(-\mu - \eta/2)$ for the x and y directions respectively. We are interested only in the relative attenuation, given in Equation (46), which leads to different amplitudes for the x and y components, and the Jones vector becomes
+
+$$
+\begin{pmatrix} \psi_1(z, t) \\ \psi_2(z, t) \end{pmatrix} = \begin{pmatrix} ae^{\eta/2} \exp[i(kz - \omega t - \phi/2)] \\ ae^{-\eta/2} \exp[i(kz - \omega t + \phi/2)] \end{pmatrix} \quad (131)
+$$
+
+The squeeze matrix of Equation (46) is generated by $K_3$ given in Table 1.
+
+The polarization is not always along the *x* and *y* axes, but can be rotated around the *z* axis using the rotation matrix of Equation (44), generated by $J_2$ given in Table 1.
+
+Among the rotation angles, the angle of 45° plays an important role in polarization optics. Indeed, if we rotate the squeeze matrix of Equation (46) by 45°, we end up with the squeeze matrix of Equation (45) generated by $K_1$ given also in Table 1.
+
+Each of these four matrices plays an important role in special relativity, as we discussed in Sections 3.2 and 6. Their respective roles in optics and particle physics are given in Table 7.
+
+**Table 7.** Polarization optics and special relativity share the same mathematics. Each matrix has its clear role in both optics and relativity. The determinant of the Stokes or the four-momentum matrix remains invariant under Lorentz transformations. It is interesting to note that the decoherence parameter (least fundamental) in optics corresponds to the (mass)$^2$ (most fundamental) in particle physics.
+
+| Polarization Optics | Transformation Matrix | Particle Symmetry |
+|---|---|---|
+| Phase shift by $\phi$ | $\begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}$ | Rotation around z |
+| Rotation around z | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | Rotation around y |
+| Squeeze along x and y | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | Boost along z |
+| Squeeze along 45° | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | Boost along x |
+| $a^4 (\sin\xi)^2$ | Determinant | $(\text{mass})^2$ |
+The most general form for the two-by-two matrix applicable to the Jones vector is the G matrix of Equation (66). This matrix is of course a representation of the SL(2, c) group. It brings the simplest Jones vector of Equation (128) to its most general form.
+---PAGE_BREAK---
+
+### 6.1. Coherency Matrix
+
+However, the Jones vector alone cannot tell us whether the two components are coherent with each other. In order to address this important degree of freedom, we use the coherency matrix defined as [3,23]
+
+$$ C = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \qquad (132) $$
+
+where
+
+$$ \langle \psi_i^* \psi_j \rangle = \frac{1}{T} \int_0^T \psi_i^*(t+\tau) \psi_j(t) dt \qquad (133) $$
+
+and $T$ is a sufficiently long time interval. Those four elements then become [4]
+
+$$ S_{11} = \langle \psi_1^* \psi_1 \rangle = a^2, \quad S_{12} = \langle \psi_1^* \psi_2 \rangle = a^2 (\cos \xi) e^{-i\phi} \qquad (134) $$
+
+$$ S_{21} = \langle \psi_2^* \psi_1 \rangle = a^2 (\cos \xi) e^{+i\phi}, \quad S_{22} = \langle \psi_2^* \psi_2 \rangle = a^2 \qquad (135) $$
+
+The diagonal elements are the averaged squared magnitudes of $\psi_1$ and $\psi_2$ respectively. The angle $\phi$ could be different from the phase-shift angle of Equation (79), but this difference does not play any role in the reasoning. The off-diagonal elements could be smaller than the product of the magnitudes of $\psi_1$ and $\psi_2$ if the two polarizations are not completely coherent.
+
+The angle $\xi$ specifies the degree of coherency. If it is zero, the system is fully coherent, while the system is totally incoherent if $\xi$ is $90^\circ$. This can therefore be called the "decoherence angle."
+
+While the most general form of the transformation applicable to the Jones vector is G of Equation (66), the transformation applicable to the coherency matrix is
+
+$$ C' = G C G^{\dagger} \qquad (136) $$
+
+The determinant of the coherency matrix is invariant under this transformation, and it is
+
+$$ \det(C) = a^4 (\sin \xi)^2 \qquad (137) $$
+
+Thus, the angle $\xi$ remains invariant. In the language of the Lorentz transformation applicable to the four-vector, the determinant is equivalent to $(\text{mass})^2$ and is therefore a Lorentz-invariant quantity.
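The invariance of the determinant of the coherency matrix can be checked numerically; in this sketch, the SL(2, c) matrix $G$ is an arbitrary illustrative element whose last entry is chosen to enforce unit determinant:

```python
import numpy as np

a, xi, phi = 1.3, 0.7, 0.4
# Coherency matrix built from Equations (134)-(135)
C = a**2 * np.array([[1, np.cos(xi) * np.exp(-1j * phi)],
                     [np.cos(xi) * np.exp(1j * phi), 1]])

# An arbitrary SL(2, c) element: delta = (1 + beta*gamma)/alpha gives det = 1
alpha_, beta_, gamma_ = 1.1, 0.2 + 0.1j, 0.3 - 0.2j
G = np.array([[alpha_, beta_], [gamma_, (1 + beta_ * gamma_) / alpha_]])

C_prime = G @ C @ G.conj().T   # transformation of Equation (136)

# Equation (137): det(C) = a^4 sin^2(xi) is invariant
print(np.isclose(np.linalg.det(C_prime), a**4 * np.sin(xi)**2))
```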
+
+### 6.2. Two Radii of the Poincaré Sphere
+
+Let us write explicitly the transformation of Equation (136) as
+
+$$ \begin{pmatrix} S'_{11} & S'_{12} \\ S'_{21} & S'_{22} \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix} \qquad (138) $$
+
+It is then possible to construct the following quantities,
+
+$$ S_0 = \frac{S_{11} + S_{22}}{2}, \quad S_3 = \frac{S_{11} - S_{22}}{2} \qquad (139) $$
+
+$$ S_1 = \frac{S_{12} + S_{21}}{2}, \quad S_2 = \frac{S_{12} - S_{21}}{2i} \qquad (140) $$
+
+These are known as the Stokes parameters, and constitute a four-vector ($S_0, S_3, S_1, S_2$) under the Lorentz transformation.
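The construction of the Stokes parameters from the coherency-matrix elements can be sketched as follows (sample values of $a$, the decoherence angle, and $\phi$ are arbitrary); the last line checks the invariant combination against Equation (137):

```python
import numpy as np

# Coherency-matrix elements of Equations (134)-(135), with sample values
a, xi, phi = 1.0, 0.5, 0.8
S11, S22 = a**2, a**2
S12 = a**2 * np.cos(xi) * np.exp(-1j * phi)
S21 = a**2 * np.cos(xi) * np.exp(1j * phi)

# Stokes parameters, Equations (139)-(140)
S0 = (S11 + S22) / 2
S3 = (S11 - S22) / 2
S1 = (S12 + S21) / 2
S2 = (S12 - S21) / (2j)

print(np.isclose(S0, a**2), np.isclose(S3, 0))
# Minkowskian length of (S0, S3, S1, S2) equals det(C) = a^4 sin^2(xi)
print(np.isclose(S0**2 - S3**2 - S1**2 - S2**2, a**4 * np.sin(xi)**2))
```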
+
+In the Jones vector of Equation (128), the amplitudes of the two orthogonal components are equal. Thus, the two diagonal elements of the coherency matrix are equal. This leads to $S_3 = 0$, and the
+---PAGE_BREAK---
+
+problem is reduced from the sphere to a circle. In the resulting two-dimensional subspace, we can
+introduce the polar coordinate system with
+
+$$
+\begin{align}
+R &= \sqrt{S_1^2 + S_2^2} \tag{141} \\
+S_1 &= R \cos \phi \tag{142} \\
+S_2 &= R \sin \phi \tag{143}
+\end{align}
+$$
+
+The radius $R$ is the radius of this circle, and is
+
+$$
+R = a^2 \cos \xi \quad (144)
+$$
+
+The radius $R$ takes its maximum value $S_0$ when $\zeta = 0^\circ$. It decreases as $\zeta$ increases and vanishes when $\zeta = 90^\circ$. This aspect of the radius $R$ is illustrated in Figure 3.
+
+**Figure 3.** Radius of the Poincaré sphere. The radius $R$ takes its maximum value $S_0$ when the decoherence angle $\xi$ is zero. It becomes smaller as $\xi$ increases, and becomes zero when the angle reaches $90^\circ$.
+
+In order to see its implications in special relativity, let us go back to the four-momentum matrix of $m(1,0,0,0)$. Its determinant is $m^2$ and remains invariant. Likewise, the determinant of the coherency matrix of Equation (132) should also remain invariant. The determinant in this case is
+
+$$
+S_0^2 - R^2 = a^4 \sin^2 \xi \quad (145)
+$$
+
+This quantity remains invariant under the Hermitian transformation of Equation (138), which is a Lorentz transformation as discussed in Sections 3.2 and 6. This aspect is shown on the last row of Table 7.
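+This invariance can also be checked numerically. The sketch below (Python/NumPy; the matrix $G$ is an arbitrary illustrative $SL(2,c)$ element, not one appearing in the text) applies the transformation of Equation (136) and confirms that the determinant, and hence $S_0^2 - R^2$, is unchanged.

```python
import numpy as np

# Coherency matrix with equal diagonal elements; a, xi, phi are illustrative
a, xi, phi = 1.1, 0.5, 0.3
C = a**2 * np.array([[1.0, np.cos(xi) * np.exp(-1j * phi)],
                     [np.cos(xi) * np.exp(1j * phi), 1.0]])

# An illustrative SL(2,c) element: rescale an invertible matrix to det = 1
M = np.array([[1.2 + 0.3j, 0.4 - 0.1j],
              [-0.2 + 0.5j, 0.9 + 0.2j]])
G = M / np.sqrt(np.linalg.det(M))

Cp = G @ C @ G.conj().T          # the transformation of Equation (136)
det_before = np.linalg.det(C)
det_after = np.linalg.det(Cp)
```

Since $\det(G C G^\dagger) = |\det G|^2 \det C$ and $\det G = 1$, the determinant survives every such transformation.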
+
+The coherency matrix then becomes
+
+$$
+C = a^2 \begin{pmatrix} 1 & (\cos \xi)e^{-i\phi} \\ (\cos \xi)e^{i\phi} & 1 \end{pmatrix} \qquad (146)
+$$
+---PAGE_BREAK---
+
+Since the angle $\phi$ does not play any essential role, we can let $\phi = 0$, and write the coherency matrix as
+
+$$ C = a^2 \begin{pmatrix} 1 & \cos \xi \\ \cos \xi & 1 \end{pmatrix} \qquad (147) $$
+
+The determinant of the above two-by-two matrix is
+
+$$ a^4 (1 - \cos^2 \xi) = a^4 \sin^2 \xi \qquad (148) $$
+
+Since the Lorentz transformation leaves the determinant invariant, the change in this $\xi$ variable is not a Lorentz transformation. It is of course possible to construct a larger group in which this variable plays a role in a group transformation [6], but here we are more interested in its role in a particle acquiring a mass from zero, or in the mass becoming zero.
+
+### 6.3. Extra-Lorentzian Symmetry
+
+The coherency matrix of Equation (146) can be diagonalized to
+
+$$ a^2 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \qquad (149) $$
+
+by a rotation. Let us then go back to the four-momentum matrix of Equation (70). If $p_x = p_y = 0$, and $p_z = p_0 \cos \xi$, we can write this matrix as
+
+$$ p_0 \begin{pmatrix} 1 + \cos \xi & 0 \\ 0 & 1 - \cos \xi \end{pmatrix} \qquad (150) $$
+
+Thus, with this extra variable, it is possible to study the little groups for variable masses, including the small-mass limit and the zero-mass case.
+
+For a fixed value of $p_0$, the $(mass)^2$ becomes
+
+$$ (mass)^2 = (p_0 \sin \xi)^2, \quad \text{and} \quad (momentum)^2 = (p_0 \cos \xi)^2 \qquad (151) $$
+
+resulting in
+
+$$ (energy)^2 = (mass)^2 + (momentum)^2 \qquad (152) $$
+
+This transition is illustrated in Figure 4. We are interested in reaching a point on the light cone from the mass hyperbola while keeping the energy fixed. According to this figure, we do not have to make an excursion to the infinite-momentum limit. If the energy is fixed during this process, Equation (152) gives the relation between the mass and the momentum, and Figure 5 illustrates this relation.
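+The triangular rule of Figure 5 amounts to elementary arithmetic; a minimal Python sketch (with illustrative numbers) checks Equations (151) and (152) for several values of $\xi$ at fixed $p_0$.

```python
import math

p0 = 2.0                                   # fixed energy (illustrative)
xis = [0.0, math.pi / 6, math.pi / 3, math.pi / 2]

masses = [p0 * math.sin(x) for x in xis]   # Equation (151)
momenta = [p0 * math.cos(x) for x in xis]
energies_sq = [m * m + p * p for m, p in zip(masses, momenta)]  # Equation (152)
```

At $\xi = 0$ the particle sits on the light cone (zero mass), and at $\xi = 90^\circ$ the momentum vanishes, while the energy stays fixed at $p_0$ throughout.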
+---PAGE_BREAK---
+
+Figure 4. Transition from the massive to massless case. (a) Transition within the framework of the Lorentz group; (b) Transition allowed by the symmetry of the Poincaré sphere. Within the framework of the Lorentz group, it is not possible to go from the massive to the massless case directly, because it requires a change in the mass, which is a Lorentz-invariant quantity. The only way is to move to infinite momentum, jump from the hyperbola to the light cone, and come back. The extra symmetry of the Poincaré sphere allows a direct transition.
+
+Figure 5. Energy-momentum-mass relation. This circle illustrates the case where the energy is fixed, while the mass and momentum are related according to the triangular rule. The value of the angle ξ changes from zero to 180°. The particle mass is negative for negative values of this angle; however, in the Lorentz group only (mass)$^2$ is a relevant variable, and negative masses might play a role only for theoretical purposes.
+
+Within the framework of the Lorentz group, it is possible, by making an excursion to infinite momentum where the mass hyperbola coincides with the light cone, to then come back to the desired point. On the other hand, the mass formula of Equation (151) allows us to go there directly. The decoherence mechanism of the coherency matrix makes this possible.
+---PAGE_BREAK---
+
+## 7. Small-Mass and Massless Particles
+
+We now have a mathematical tool to reduce the mass of a massive particle from its positive value to zero. During this process, the Lorentz-boosted rotation matrix becomes a gauge transformation for the spin-1 particle, as discussed in Section 5.2. For spin-1/2 particles, there are two issues.
+
+1. It was seen in Section 5.2 that the requirement of gauge invariance leads to the polarization of massless spin-1/2 particles, such as neutrinos. What happens to anti-neutrinos?
+
+2. There are strong experimental indications that neutrinos have a small mass. What happens to the $E(2)$ symmetry?
+
+### 7.1. Spin-1/2 Particles
+
+Let us go back to the two-by-two matrices of Section 5.4, and the two-by-two $D$ matrix. For a massive particle, its Wigner decomposition leads to
+
+$$ D = \begin{pmatrix} \cos(\theta/2) & -e^{-\eta} \sin(\theta/2) \\ e^{\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (153) $$
+
+This matrix is applicable to the spinors $u$ and $v$ defined in Equation (101) respectively for the spin-up and spin-down states along the $z$ direction.
+
+Since the Lie algebra of $SL(2,c)$ is invariant under the sign change of the $K_i$ matrices, we can consider the “dotted” representation, where the system is boosted in the opposite direction, while the direction of rotations remains the same. Thus, the Wigner decomposition leads to
+
+$$ \dot{D} = \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (154) $$
+
+with its spinors
+
+$$ \dot{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (155) $$
+
+For anti-neutrinos, the helicity is reversed but the momentum is unchanged. Thus, $D^\dagger$ is the appropriate matrix. However, $D^\dagger = \dot{D}^{-1}$, as was noted in Section 5.4. Thus, we shall use $\dot{D}$ for anti-neutrinos.
+
+When the particle mass becomes very small,
+
+$$ e^{-\eta} = \frac{m}{2p} \qquad (156) $$
+
+becomes small. Thus, if we let
+
+$$ e^{\eta} \sin(\theta/2) = \gamma, \quad \text{and} \quad e^{-\eta} \sin(\theta/2) = \epsilon^2 \qquad (157) $$
+
+then the $D$ matrix of Equation (153) and the $\dot{D}$ matrix of Equation (154) become
+
+$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\epsilon^2 \\ \gamma & 1 - \gamma\epsilon^2/2 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\gamma \\ \epsilon^2 & 1 - \gamma\epsilon^2/2 \end{pmatrix} \qquad (158) $$
+
+respectively, where $\gamma$ is an independent parameter and
+
+$$ \epsilon^2 = \gamma \left( \frac{m}{2p} \right)^2 \qquad (159) $$
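+A numerical sketch (Python/NumPy; the values of $\gamma$ and $\eta$ are illustrative) confirms that the exact $D$ matrix of Equation (153), with the substitutions of Equations (157) and (159), approaches the lower-triangular matrix of Equation (160) as $e^{-\eta}$ becomes small, with a deviation of order $\epsilon^2$.

```python
import numpy as np

gamma = 0.8                    # kept fixed as the mass decreases (illustrative)
eta = 8.0                      # large rapidity: e^{-eta} = m/(2p) is tiny
theta = 2 * np.arcsin(gamma * np.exp(-eta))  # so that e^{eta} sin(theta/2) = gamma
eps_sq = gamma * np.exp(-2 * eta)            # Equation (159)

# The exact Wigner-decomposed D matrix of Equation (153)
D = np.array([[np.cos(theta / 2), -np.exp(-eta) * np.sin(theta / 2)],
              [np.exp(eta) * np.sin(theta / 2), np.cos(theta / 2)]])

# The massless-limit gauge transformation of Equation (160)
D_massless = np.array([[1.0, 0.0],
                       [gamma, 1.0]])
deviation = np.abs(D - D_massless).max()     # of order eps^2
```

The determinant of $D$ stays exactly one, while the departure from the triangular limit shrinks as $(m/2p)^2$.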
+---PAGE_BREAK---
+
+When the particle mass becomes zero, they become
+
+$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \tag{160} $$
+
+respectively, applicable to the spinors $(u, v)$ and $(\dot{u}, \dot{v})$.
+
+For neutrinos,
+
+$$ \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ \gamma \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{161} $$
+
+For anti-neutrinos,
+
+$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -\gamma \\ 1 \end{pmatrix} \tag{162} $$
+
+It was noted in Section 5.2 that the triangular matrices of Equation (160) perform gauge transformations. Thus, for Equations (161) and (162), the requirement of gauge invariance leads to the polarization of neutrinos: the neutrinos are left-handed, while the anti-neutrinos are right-handed. Since, however, nature cannot tell the difference between the dotted and undotted representations, the Lorentz group cannot tell which neutrino is right-handed. It can say only that the neutrinos and anti-neutrinos are oppositely polarized.
+
+If the neutrino has a small mass, the gauge invariance is modified to
+
+$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\epsilon^2 \\ \gamma & 1 - \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} - \epsilon^2 \begin{pmatrix} 1 \\ \gamma/2 \end{pmatrix} \tag{163} $$
+
+and
+
+$$ \begin{pmatrix} 1 - \gamma\epsilon^2/2 & -\gamma \\ \epsilon^2 & 1 - \gamma\epsilon^2/2 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \epsilon^2 \begin{pmatrix} -\gamma/2 \\ 1 \end{pmatrix} \tag{164} $$
+
+respectively for neutrinos and anti-neutrinos. Thus, the violation of gauge invariance in both cases is proportional to $\epsilon^2$, which goes as $m^2/4p^2$.
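+The $\epsilon^2$-sized violation can be verified directly. The sketch below (Python/NumPy; the values of $\gamma$ and $\epsilon^2$ are illustrative) applies the first small-mass matrix of Equation (158), with both diagonal elements kept to order $\epsilon^2$, to the spinor $v$ and reproduces the right-hand side of Equation (163) exactly.

```python
import numpy as np

gamma, eps_sq = 0.8, 1e-4       # eps^2 = gamma * (m/2p)^2, Equation (159)

# First (small-mass) matrix of Equation (158)
D_small = np.array([[1 - gamma * eps_sq / 2, -eps_sq],
                    [gamma, 1 - gamma * eps_sq / 2]])

v = np.array([0.0, 1.0])
lhs = D_small @ v
rhs = v - eps_sq * np.array([1.0, gamma / 2])   # right-hand side of Equation (163)
```

The deviation of the transformed spinor from $v$ is bounded by $\epsilon^2$, confirming that gauge invariance is restored as the mass goes to zero.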
+
+## 7.2. Small-Mass Neutrinos in the Real World
+
+Whether or not neutrinos have mass, and the consequences of this for the Standard Model and lepton number, are the subject of much theoretical speculation [24,25], as well as of studies in cosmology [26], at nuclear reactors [27], and in high-energy experiments [28,29]. Neutrinos are fast becoming an important component of the search for dark matter and dark radiation [30]. Their importance within the Standard Model is reflected by the fact that they are the only particles which seem to exist with only one direction of chirality; i.e., only left-handed neutrinos have been confirmed to exist so far.
+
+It was speculated some time ago that neutrinos in constant electric and magnetic fields would acquire a small mass, and that right-handed neutrinos would be trapped within the interaction field [31]. Solving generalized electroweak models using left- and right-handed neutrinos has been discussed recently [32]. Today these right-handed neutrinos which do not participate in weak interactions are called “sterile” neutrinos [33]. A comprehensive discussion of the place of neutrinos in the scheme of physics has been given by Drewes [30]. We should note also that the three different neutrinos, namely $ν_e$, $ν_μ$, and $ν_τ$, may have different masses [34].
+---PAGE_BREAK---
+
+**8. Scalars, Four-Vectors, and Four-Tensors**
+
+In Sections 5 and 7, our primary interest has been the two-by-two matrices applicable to spinors for spin-1/2 particles. Since we also used four-by-four matrices, we indirectly studied the four-component particle consisting of spin-1 and spin-zero components.
+
+Given two spin-1/2 states, we are accustomed to constructing one spin-zero state and one spin-one state with three degenerate components.
+
+In this paper, we are confronted with two spinors, but each spinor can also be dotted. For this reason, there are 16 orthogonal states consisting of spin-one and spin-zero states. How many of them are spin-zero states, and how many are spin-one states?
+
+For particles at rest, it is known that the addition of two spin-1/2 states results in spin-zero and spin-one states. In this paper, we have two different spinors behaving differently under the Lorentz boost. Around the $z$ direction, both spinors are transformed by
+
+$$Z(\phi) = \exp(-i\phi J_3) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \quad (165)$$
+
+However, they are boosted by
+
+$$B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix} \quad (166)$$
+
+$$\dot{B}(\eta) = \exp(i\eta K_3) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix} \quad (167)$$
+
+applicable to the undotted and dotted spinors respectively. These two matrices commute with each other, and also with the rotation matrix Z(φ) of Equation (165). Since K₃ and J₃ commute with each other, we can work with the matrix Q(η, φ) defined as
+
+$$Q(\eta, \phi) = B(\eta)Z(\phi) = \begin{pmatrix} e^{(\eta-i\phi)/2} & 0 \\ 0 & e^{-(\eta-i\phi)/2} \end{pmatrix} \quad (168)$$
+
+$$\dot{Q}(\eta, \phi) = \dot{B}(\eta)\dot{Z}(\phi) = \begin{pmatrix} e^{-(\eta+i\phi)/2} & 0 \\ 0 & e^{(\eta+i\phi)/2} \end{pmatrix} \quad (169)$$
+
+When this combined matrix is applied to the spinors,
+
+$$Q(\eta, \phi)u = e^{(\eta-i\phi)/2}u, \quad Q(\eta, \phi)v = e^{-(\eta-i\phi)/2}v \quad (170)$$
+
+$$\dot{Q}(\eta, \phi)\dot{u} = e^{-(\eta+i\phi)/2}\dot{u}, \quad \dot{Q}(\eta, \phi)\dot{v} = e^{(\eta+i\phi)/2}\dot{v} \quad (171)$$
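+These eigenvalue relations are easy to verify numerically. The following is a minimal Python/NumPy sketch ($\eta$ and $\phi$ take illustrative values), with $\dot{u}$ and $\dot{v}$ represented by the same column vectors as $u$ and $v$, as in Equation (155).

```python
import numpy as np

eta, phi = 0.6, 0.9

# Q(eta, phi) of Equation (168) and its dotted counterpart of Equation (169)
Q = np.diag([np.exp((eta - 1j * phi) / 2), np.exp(-(eta - 1j * phi) / 2)])
Qdot = np.diag([np.exp(-(eta + 1j * phi) / 2), np.exp((eta + 1j * phi) / 2)])

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

Qu, Qv = Q @ u, Q @ v           # Equation (170)
Qdu, Qdv = Qdot @ u, Qdot @ v   # Equation (171), with u-dot = u and v-dot = v
```

Each spinor simply picks up the exponential factor listed in Equations (170) and (171); these factors are what propagate into the bilinear combinations of Table 8.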
+
+If the particle is at rest, we can construct the combinations
+
+$$uu, \quad \frac{1}{\sqrt{2}}(uv + vu), \quad vv \quad (172)$$
+
+to construct the spin-1 state, and
+
+$$\frac{1}{\sqrt{2}}(uv - vu) \qquad (173)$$
+
+for the spin-zero state. There are four bilinear states. In the $SL(2, c)$ regime, there are in addition two dotted spinors. If we include both dotted and undotted spinors, there are 16 independent bilinear combinations. They are given in Table 8. This table also gives the effect of the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$.
+---PAGE_BREAK---
+
+**Table 8.** Sixteen combinations of the SL(2, c) spinors. In the SU(2) regime, there are two spinors leading to four bilinear forms. In the SL(2, c) world, there are two undotted and two dotted spinors. These four spinors lead to 16 independent bilinear combinations.
+
+| Spin 1 | Spin 0 |
+|---|---|
+| $uu$, $\frac{1}{\sqrt{2}}(uv + vu)$, $vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
+| $\dot{u}\dot{u}$, $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u})$, $\dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
+| $u\dot{u}$, $\frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u})$, $v\dot{v}$ | $\frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u})$ |
+| $\dot{u}u$, $\frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u)$, $\dot{v}v$ | $\frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u)$ |
+
+After the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$:
+
+$$
+\begin{aligned}
+e^{-i\phi} e^{\eta} u u, & \quad \frac{1}{\sqrt{2}} (uv + vu), \quad e^{i\phi} e^{-\eta} v v, \quad \frac{1}{\sqrt{2}} (uv - vu) \\
+e^{-i\phi} e^{-\eta} \dot{u} \dot{u}, & \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} + \dot{v}\dot{u}), \quad e^{i\phi} e^{\eta} \dot{v} \dot{v}, \quad \frac{1}{\sqrt{2}} (\dot{u}\dot{v} - \dot{v}\dot{u}) \\
+e^{-i\phi} u \dot{u}, & \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} + e^{-\eta} v \dot{u}), \quad e^{i\phi} v \dot{v}, \quad \frac{1}{\sqrt{2}} (e^{\eta} u \dot{v} - e^{-\eta} v \dot{u}) \\
+e^{-i\phi} \dot{u} u, & \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v + e^{\eta} \dot{v} u), \quad e^{i\phi} \dot{v} v, \quad \frac{1}{\sqrt{2}} (e^{-\eta} \dot{u} v - e^{\eta} \dot{v} u)
+\end{aligned}
+$$
+
+Among the bilinear combinations given in Table 8, the following two are invariant under rotations and also under boosts.
+
+$$S = \frac{1}{\sqrt{2}}(uv - vu), \quad \text{and} \quad S' = -\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}) \qquad (174)$$
+
+They are thus scalars in the Lorentz-covariant world. Are they the same or different? Let us consider the following combinations
+
+$$S_+ = \frac{1}{\sqrt{2}} (S + S'), \quad \text{and} \quad S_- = \frac{1}{\sqrt{2}} (S - S') \qquad (175)$$
+
+Under the dot conjugation, $S_+$ remains invariant, but $S_-$ changes its sign.
+
+Under the dot conjugation, the boost is performed in the opposite direction. It is therefore the operation of space inversion; $S_+$ is thus a scalar, while $S_-$ is a pseudo-scalar.
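+The Lorentz invariance of the antisymmetric combination can be sketched numerically. Writing $uv - vu$ as the $\epsilon$-contraction $u^T \epsilon v$ of two two-component spinors, any matrix $G$ with $\det G = 1$ leaves it unchanged, since $G^T \epsilon G = (\det G)\,\epsilon$ for two-by-two matrices. In the Python/NumPy sketch below, the spinor entries and $G$ are illustrative values.

```python
import numpy as np

# Antisymmetric metric: u^T eps v plays the role of (uv - vu)
eps = np.array([[0.0, 1.0],
                [-1.0, 0.0]])

# An illustrative SL(2,c) element, rescaled to det G = 1
M = np.array([[1.2 + 0.3j, 0.4 - 0.1j],
              [-0.2 + 0.5j, 0.9 + 0.2j]])
G = M / np.sqrt(np.linalg.det(M))

u = np.array([0.7 + 0.2j, -0.3 + 1.1j])
v = np.array([0.5 - 0.6j, 1.3 + 0.4j])

before = u @ eps @ v
after = (G @ u) @ eps @ (G @ v)   # equals det(G) * before = before
```

This is the algebraic reason the combination of Equation (174) is invariant under both rotations and boosts.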
+
+## 8.1. Four-Vectors
+
+Let us consider the bilinear products of one dotted and one undotted spinor, namely $u\dot{u}$, $u\dot{v}$, $v\dot{u}$, and $v\dot{v}$, and construct the matrix
+
+$$U = \begin{pmatrix} u\dot{v} & u\dot{u} \\ v\dot{v} & v\dot{u} \end{pmatrix} \qquad (176)$$
+
+Under the rotation $Z(\phi)$ and the boost $B(\eta)$ they become
+
+$$
+\begin{pmatrix}
+e^{\eta} u \dot{v} & e^{-i\phi} u \dot{u} \\
+e^{i\phi} v \dot{v} & e^{-\eta} v \dot{u}
+\end{pmatrix}
+\qquad
+(177)
+$$
+
+Indeed, this matrix is consistent with the transformation properties given in Table 8, and transforms like the four-vector
+
+$$
+\begin{pmatrix}
+t+z & x-iy \\
+x+iy & t-z
+\end{pmatrix}
+\qquad
+(178)
+$$
+
+This form was given in Equation (65), and played the central role throughout this paper. Under the space inversion, this matrix becomes
+
+$$
+\begin{pmatrix}
+t-z & -(x-iy) \\
+-(x+iy) & t+z
+\end{pmatrix}
+\qquad
+(179)
+$$
+---PAGE_BREAK---
+
+This space inversion is known as the parity operation.
+
+For a particle or field with the four components $(V_0, V_z, V_x, V_y)$, the two-by-two form of Equation (176) is
+
+$$ U = \begin{pmatrix} V_0 + V_z & V_x - iV_y \\ V_x + iV_y & V_0 - V_z \end{pmatrix} \qquad (180) $$
+
+If boosted along the z direction, this matrix becomes
+
+$$ \begin{pmatrix} e^{\eta} (V_0 + V_z) & V_x - iV_y \\ V_x + iV_y & e^{-\eta} (V_0 - V_z) \end{pmatrix} \qquad (181) $$
+
+In the mass-zero limit, the four-vector matrix of Equation (181) becomes
+
+$$ \begin{pmatrix} 2A_0 & A_x - iA_y \\ A_x + iA_y & 0 \end{pmatrix} \qquad (182) $$
+
+with the Lorentz condition $A_0 = A_z$. The gauge transformation applicable to the photon four-vector was discussed in detail in Section 5.2.
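+A short numerical check (Python/NumPy; the four-vector components and $\eta$ are illustrative values) confirms that boosting the matrix of Equation (180) with $B(\eta)$ of Equation (166) reproduces Equation (181) and preserves the determinant $V_0^2 - V_z^2 - V_x^2 - V_y^2$.

```python
import numpy as np

V0, Vz, Vx, Vy = 2.0, 0.5, 0.3, -0.7
U = np.array([[V0 + Vz, Vx - 1j * Vy],
              [Vx + 1j * Vy, V0 - Vz]])        # Equation (180)

eta = 1.4
B = np.diag([np.exp(eta / 2), np.exp(-eta / 2)])  # boost along z, Equation (166)
Up = B @ U @ B.conj().T                           # reproduces Equation (181)

interval_before = np.linalg.det(U).real           # V0^2 - Vz^2 - Vx^2 - Vy^2
interval_after = np.linalg.det(Up).real
```

The diagonal elements are stretched and contracted by $e^{\pm\eta}$ while the off-diagonal (transverse) elements are untouched, exactly as in Equation (181).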
+
+Let us go back to the matrix of Equation (176). Since the dot conjugation leads to the space inversion, we can construct another matrix $\dot{U}$ as
+
+$$ \dot{U} = \begin{pmatrix} \dot{u}v & \dot{u}u \\ \dot{v}v & \dot{v}u \end{pmatrix} \qquad (183) $$
+
+Then
+
+$$ \dot{u}v \approx (t-z), \qquad \dot{v}u \approx (t+z) \qquad (184) $$
+
+$$ \dot{u}u \approx -(x-iy), \quad \dot{v}v \approx -(x+iy) \qquad (185) $$
+
+where the symbol $\approx$ means “transforms like”.
+
+Thus, $U$ of Equation (176) and $\dot{U}$ of Equation (183) use up eight of the 16 bilinear forms. Since two of them go into the scalar and pseudo-scalar of Equation (175), we have to give interpretations to the six remaining bilinear forms.
+
+## 8.2. Second-Rank Tensor
+
+In Section 8.1, each bilinear form consisted of one dotted and one undotted spinor. In this subsection, we study bilinear forms in which both spinors are dotted or both are undotted. We are interested in two sets of three quantities satisfying the $O(3)$ symmetry. They should therefore transform like
+
+$$ (x + iy)/\sqrt{2}, \quad (x - iy)/\sqrt{2}, \quad z \qquad (186) $$
+
+which are like
+
+$$ uu, \quad vv, \quad (uv + vu)/\sqrt{2} \qquad (187) $$
+
+respectively in the $O(3)$ regime. Since the dot conjugation is the parity operation, they are like
+
+$$ -\dot{u}\dot{u}, \quad -\dot{v}\dot{v}, \quad -(\dot{u}\dot{v} + \dot{v}\dot{u})/\sqrt{2} \qquad (188) $$
+
+In other words, under the dot conjugation,
+
+$$ uu \to -\dot{u}\dot{u}, \quad \text{and} \quad vv \to -\dot{v}\dot{v} \qquad (189) $$
+---PAGE_BREAK---
+
+We noticed a similar sign change in Equation (185).
+
+In order to construct the z component in this O(3) space, let us first consider
+
+$$f_z = \frac{1}{2} [(uv + vu) - (\dot{u}\dot{v} + \dot{v}\dot{u})], \quad g_z = \frac{1}{2i} [(uv + vu) + (\dot{u}\dot{v} + \dot{v}\dot{u})] \qquad (190)$$
+
+where $f_z$ and $g_z$ are respectively symmetric and anti-symmetric under the dot conjugation, or the parity operation. These quantities are invariant under boosts along the $z$ direction. They are also invariant under rotations around this axis, but they are not invariant under boosts along, or rotations around, the $x$ or $y$ axis. They are different from the scalars given in Equation (174).
+
+Next, in order to construct the $x$ and $y$ components, we start with $f_\pm$ and $g_\pm$ as
+
+$$f_+ = \frac{1}{\sqrt{2}} (uu - \dot{u}\dot{u}) \qquad g_+ = \frac{1}{\sqrt{2}i} (uu + \dot{u}\dot{u}) \qquad (191)$$
+
+$$f_- = \frac{1}{\sqrt{2}} (vv - \dot{v}\dot{v}) \qquad g_- = \frac{1}{\sqrt{2}i} (vv + \dot{v}\dot{v}) \qquad (192)$$
+
+Then
+
+$$f_x = \frac{1}{\sqrt{2}} (f_+ + f_-) = \frac{1}{2} [(uu - \dot{u}\dot{u}) + (vv - \dot{v}\dot{v})] \qquad (193)$$
+
+$$f_y = \frac{1}{\sqrt{2}i} (f_+ - f_-) = \frac{1}{2i} [(uu - \dot{u}\dot{u}) - (vv - \dot{v}\dot{v})] \qquad (194)$$
+
+and
+
+$$g_x = \frac{1}{\sqrt{2}} (g_+ + g_-) = \frac{1}{2i} [(uu + \dot{u}\dot{u}) + (vv + \dot{v}\dot{v})] \qquad (195)$$
+
+$$g_y = \frac{1}{\sqrt{2}i} (g_+ - g_-) = -\frac{1}{2} [(uu + \dot{u}\dot{u}) - (vv + \dot{v}\dot{v})] \qquad (196)$$
+
+Here $f_x$ and $f_y$ are symmetric under dot conjugation, while $g_x$ and $g_y$ are anti-symmetric.
+
+Furthermore, $f_z$, $f_x$, and $f_y$ of Equations (190), (193) and (194) transform like a three-dimensional vector. The same can be said for $g_z$, $g_x$, and $g_y$ of Equations (190), (195) and (196). Thus, they can be grouped into the second-rank tensor
+
+$$T = \begin{pmatrix}
+0 & -g_z & -g_x & -g_y \\
+g_z & 0 & -f_y & f_x \\
+g_x & f_y & 0 & -f_z \\
+g_y & -f_x & f_z & 0
+\end{pmatrix} \qquad (197)$$
+
+whose Lorentz-transformation properties are well known. The $g_i$ components change their signs under space inversion, while the $f_i$ components remain invariant. They are like the electric and magnetic fields respectively.
+
+If the system is Lorentz-boosted, $f_i$ and $g_i$ can be computed from Table 8. We are now interested in the symmetry of photons, obtained by taking the massless limit. According to the procedure developed in Section 6, we keep only the terms which become larger for larger values of $\eta$. Thus,
+
+$$f_x \rightarrow \frac{1}{2}(uu - \dot{v}\dot{v}), \qquad f_y \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}) \qquad (198)$$
+
+$$g_x \rightarrow \frac{1}{2i}(uu + \dot{v}\dot{v}), \qquad g_y \rightarrow -\frac{1}{2}(uu - \dot{v}\dot{v}) \qquad (199)$$
+
+in the massless limit.
+---PAGE_BREAK---
+
+Then the tensor of Equation (197) becomes
+
+$$F = \begin{pmatrix} 0 & 0 & -E_x & -E_y \\ 0 & 0 & -B_y & B_x \\ E_x & B_y & 0 & 0 \\ E_y & -B_x & 0 & 0 \end{pmatrix} \qquad (200)$$
+
+with
+
+$$B_x \approx \frac{1}{2}(uu - \dot{v}\dot{v}), \quad B_y \approx \frac{1}{2i}(uu + \dot{v}\dot{v}) \qquad (201)$$
+
+$$E_x \approx \frac{1}{2i}(uu + \dot{v}\dot{v}), \quad E_y \approx -\frac{1}{2}(uu - \dot{v}\dot{v}) \qquad (202)$$
+
+The electric and magnetic field components are perpendicular to each other. Furthermore,
+
+$$E_x = B_y, \quad E_y = -B_x \qquad (203)$$
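+Treating the two surviving bilinears as complex numbers, these relations imply that the combinations $E_x B_x + E_y B_y$ and $(E_x^2 + E_y^2) - (B_x^2 + B_y^2)$ vanish identically; they are the analogs of the two standard plane-wave invariants $\mathbf{E}\cdot\mathbf{B}$ and $E^2 - B^2$. A minimal sketch (plain Python; $s$ and $t$ stand for the bilinears $uu$ and $\dot{v}\dot{v}$, with illustrative values):

```python
# The bilinears uu and (v-dot)(v-dot) are represented by arbitrary
# complex numbers s and t (illustrative values).
s, t = 0.8 + 0.3j, -0.4 + 0.9j

Bx = (s - t) / 2           # cf. Equation (201)
By = (s + t) / 2j
Ex = (s + t) / 2j          # cf. Equation (202)
Ey = -(s - t) / 2

dot_EB = Ex * Bx + Ey * By                           # analog of E.B
diff_sq = (Ex * Ex + Ey * Ey) - (Bx * Bx + By * By)  # analog of E^2 - B^2
```

Both combinations vanish for any $s$ and $t$, which is the algebraic content of Equation (203).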
+
+In order to see how these field components are related to the photon spin, let us go back to Equation (191). In the massless limit,
+
+$$B_+ \approx E_+ \approx uu, \quad B_- \approx E_- \approx \dot{v}\dot{v} \qquad (204)$$
+
+The gauge transformations applicable to $u$ and $\dot{v}$ are the two-by-two matrices
+
+$$\begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ -\gamma & 1 \end{pmatrix} \qquad (205)$$
+
+respectively, as noted in Sections 5.2 and 7.1. Both $u$ and $\dot{v}$ are invariant under these gauge transformations, while $\dot{u}$ and $v$ are not.
+
+The $B_+$ and $E_+$ are for the photon spin along the $z$ direction, while $B_-$ and $E_-$ are for the opposite direction. In 1964 [35], Weinberg constructed gauge-invariant state vectors for massless particles starting from Wigner’s 1939 paper [1]. The bilinear spinors $uu$ and $\dot{v}\dot{v}$ correspond to Weinberg’s state vectors.
+
+### 8.3. Possible Symmetry of the Higgs Mechanism
+
+In this section, we discussed how the two-by-two formalism of the group $SL(2,c)$ leads to the scalar, four-vector, and tensor representations of the Lorentz group. We discussed in detail how the four-vector for a massive particle can be decomposed into the symmetry of a two-component massless particle and one gauge degree of freedom. This aspect was studied in detail by Kim and Wigner [20,21], and their results are illustrated in Figure 6. This decomposition is known in the literature as the group contraction.
+
+The four-dimensional Lorentz group can be contracted to the Euclidean and cylindrical groups. These contraction processes could transform a four-component massive vector meson into a massless spin-one particle with two spin components, and one gauge degree of freedom.
+
+Since this contraction procedure is spelled out in detail in [21], as well as in the present paper, its reverse process is also well understood: we start with a two-component massless particle with one gauge degree of freedom, and end up with a massive vector meson with its four components.
+
+The mathematics of this process is not unlike that of the Higgs mechanism [36,37], where one massless field with two degrees of freedom absorbs one gauge degree of freedom to become massive, yielding the quartet consisting of the $W^{\pm}$ and $Z$ bosons plus the Higgs boson. As is well known, this mechanism is the basis for the theory of electroweak interactions formulated by Weinberg and Salam [38,39].
+---PAGE_BREAK---
+
+**Figure 6.** Contractions of the three-dimensional rotation group. (a) Contraction in terms of the tangential plane and the tangential cylinder [20]; (b) Contraction in terms of the expansion and contraction of the longitudinal axis [21]. In both cases, the symmetry ends up with one rotation around the longitudinal direction and one translational degree of freedom along the longitudinal axis. The rotation and translation correspond to the helicity and gauge degrees of freedom, respectively.
+
+The word "spontaneous symmetry breaking" is used for the Higgs mechanism. It could be an interesting problem to see that this symmetry breaking for the two Higgs doublet model can be formulated in terms of the Lorentz group and its contractions. In this connection, we note an interesting recent paper by Dée and Ivanov [40].
+
+# 9. Conclusions
+
+The damped harmonic oscillator, Wigner's little groups, and the Poincaré sphere belong to the three different branches of physics. In this paper, it was noted that they are based on the same mathematical framework, namely the algebra of two-by-two matrices.
+
+The second-order differential equation for damped harmonic oscillators can be formulated in terms of two-by-two matrices. These matrices produce the algebra of the group $Sp(2)$. While there are three trace classes of the two-by-two matrices of this group, the damped oscillator tells us how to make transitions from one class to another.
+
+It was shown that Wigner's three little groups can be defined in terms of the trace classes of the $Sp(2)$ group. If the trace is smaller than two, the little group is that of massive particles. If it is greater than two, the little group is that of imaginary-mass particles. If the trace is equal to two, the little group is that of massless particles. Thus, the damped harmonic oscillator provides a procedure for the transition from one little group to another.
+
+The Poincaré sphere contains the symmetry of the six-parameter $SL(2, c)$ group. Thus, the sphere provides the procedure for extending the symmetry of the little group defined within the Lorentz group of three-dimensional Minkowski space to its full Lorentz group in the four-dimensional space-time. In addition, the Poincaré sphere offers the variable which allows us to change the symmetry of a massive particle to that of a massless particle by continuously decreasing the mass.
+
+In this paper, we extracted the mathematical properties of Wigner's little groups from the damped harmonic oscillator and the Poincaré sphere. In so doing, we have shown that the transition from one little group to another is tangentially continuous.
+
+This subject was initiated by İnönü and Wigner in 1953 as the group contraction [41]. In their paper, they discussed the three-dimensional rotation group becoming contracted to the two-dimensional Euclidean group with one rotational and two translational degrees of freedom. While the $O(3)$ rotation group can be illustrated by a three-dimensional sphere, the plane tangential at
+---PAGE_BREAK---
+
+the north pole is for the $E(2)$ Euclidean group. However, we can also consider a cylinder tangential at the equatorial belt. The resulting cylindrical group is isomorphic to the Euclidean group [20]. While the rotational degree of freedom of this cylinder is for the photon spin, the up and down translations on the surface of the cylinder correspond to the gauge degree of freedom of the photon, as illustrated in Figure 6.
+
+It was noted also that the Bargmann decomposition of two-by-two matrices, as illustrated in Figures 1 and 2, allows us to study more detailed properties of the little groups, including space and time reflection properties. Also in this paper, we have discussed how the scalars, four-vectors, and four-tensors can be constructed from the two-by-two representation in the Lorentz-covariant world.
+
+In addition, it should be noted that the symmetry of the Lorentz group is also contained in the squeezed state of light [14] and the ABCD matrix for optical beam transfers [18]. We also mentioned the possibility of understanding the mathematics of the Higgs mechanism in terms of the Lorentz group and its contractions.
+
+## Acknowledgements
+
+In his 1939 paper [1], Wigner worked out the subgroups of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. In so doing, he worked out their internal space-time symmetries. In spite of its importance, this paper remains one of the most difficult papers to understand. Wigner was eager to make his paper understandable to younger physicists.
+
+While he was the pioneer in introducing the mathematics of group theory to physics, he was also quite fond of using two-by-two matrices to explain group theoretical ideas. He asked one of the present authors (Young S. Kim) to rewrite his 1939 paper [1] using the language of those matrices. This is precisely what we did in the present paper.
+
+We are grateful to Eugene Paul Wigner for this valuable suggestion.
+
+## Author Contributions
+
+This paper is largely based on the earlier papers by Young S. Kim and Marilyn E. Noz, and those by Sibel Başkal and Young S. Kim. The two-by-two formulation of the damped oscillator in Section 2 was jointly developed by Sibel Başkal and Young S. Kim during the summer of 2012. Marilyn E. Noz developed the idea of the symmetry of small-mass neutrinos in Section 7. The limiting process in the symmetry of the Poincaré sphere was formulated by Young S. Kim. Sibel Başkal initially constructed the four-by-four tensor representation in Section 8.
+
+The initial organization of this paper was conceived by Young S. Kim in his attempt to follow Wigner's suggestion to translate his 1939 paper into the language of two-by-two matrices. Sibel Başkal and Marilyn E. Noz tightened the organization and filled in the details.
+
+## Conflicts of Interest
+
+The authors declare no conflicts of interest.
+
+## References
+
+1. Wigner, E. On unitary representations of the inhomogeneous Lorentz Group. *Ann. Math.* **1939**, *40*, 149–204.
+2. Han, D.; Kim, Y.S.; Son, D. Eulerian parametrization of Wigner little groups and gauge transformations in terms of rotations in 2-component spinors. *J. Math. Phys.* **1986**, *27*, 2228–2235.
+3. Born, M.; Wolf, E. *Principles of Optics*, 6th ed.; Pergamon: Oxford, UK, 1980.
+---PAGE_BREAK---
+
+4. Han, D.; Kim, Y.S.; Noz, M.E. Stokes parameters as a Minkowskian four-vector. *Phys. Rev. E* **1997**, *56*, 6065–6076.
+
+5. Brosseau, C. *Fundamentals of Polarized Light: A Statistical Optics Approach*; John Wiley: New York, NY, USA, 1998.
+
+6. Başkal, S.; Kim, Y.S. De Sitter group as a symmetry for optical decoherence. *J. Phys. A* **2006**, *39*, 7775–7788.
+
+7. Kim, Y.S.; Noz, M.E. Symmetries shared by the Poincaré Group and the Poincaré Sphere. *Symmetry* **2013**, *5*, 233–252.
+
+8. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725.
+
+9. Han, D.; Kim, Y.S.; Son, D. Photons, neutrinos and gauge transformations. *Am. J. Phys.* **1986**, *54*, 818–821.
+
+10. Başkal, S.; Kim, Y.S. Little groups and Maxwell-type tensors for massive and massless particles. *Europhys. Lett.* **1997**, *40*, 375–380.
+
+11. Leggett, A.; Chakravarty, S.; Dorsey, A.; Fisher, M.; Garg, A.; Zwerger, W. Dynamics of the dissipative two-state system. *Rev. Mod. Phys.* **1987**, *59*, 1–85.
+
+12. Başkal, S.; Kim, Y.S. One analytic form for four branches of the ABCD matrix. *J. Mod. Opt.* **2010**, *57*, 1251–1259.
+
+13. Başkal, S.; Kim, Y.S. Lens optics and the continuity problems of the ABCD matrix. *J. Mod. Opt.* **2014**, *61*, 161–166.
+
+14. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.
+
+15. Bargmann, V. Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640.
+
+16. Iwasawa, K. On some types of topological groups. *Ann. Math.* **1949**, *50*, 507–558.
+
+17. Guillemin, V.; Sternberg, S. *Symplectic Techniques in Physics*; Cambridge University Press: Cambridge, UK, 1984.
+
+18. Başkal, S.; Kim, Y.S. Lorentz Group in Ray and Polarization Optics. In *Mathematical Optics: Classical, Quantum and Computational Methods*; Lakshminarayanan, V., Calvo, M.L., Alieva, T., Eds.; CRC Taylor and Francis: New York, NY, USA, 2013; Chapter 9, pp. 303–340.
+
+19. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon: Oxford, UK, 1964.
+
+20. Kim, Y.S.; Wigner, E.P. Cylindrical group and massless particles. *J. Math. Phys.* **1987**, *28*, 1175–1179.
+
+21. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic particles. *J. Math. Phys.* **1990**, *31*, 55–60.
+
+22. Georgieva, E.; Kim, Y.S. Iwasawa effects in multilayer optics. *Phys. Rev. E* **2001**, *64*, 026602, doi:10.1103/PhysRevE.64.026602.
+
+23. Saleh, B.E.A.; Teich, M.C. *Fundamentals of Photonics*, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2007.
+
+24. Papoulias, D.K.; Kosmas, T.S. Exotic Lepton Flavour Violating Processes in the Presence of Nuclei. *J. Phys.: Conf. Ser.* **2013**, *410*, 012123:1–012123:5.
+
+25. Dinh, D.N.; Petcov, S.T.; Sasao, N.; Tanaka, M.; Yoshimura, M. Observables in neutrino mass spectroscopy using atoms. *Phys. Lett. B* **2013**, *719*, 154–163.
+
+26. Miramonti, L.; Antonelli, V. Advancements in Solar Neutrino physics. *Int. J. Mod. Phys. E* **2013**, *22*, 1–16.
+
+27. Li, Y.-F.; Cao, J.; Wang, Y.; Zhan, L. Unambiguous determination of the neutrino mass hierarchy using reactor neutrinos. *Phys. Rev. D* **2013**, *88*, 013008:1–013008:9.
+
+28. Bergstrom, J. Combining and comparing neutrinoless double beta decay experiments using different nuclei. *J. High Energy Phys.* **2013**, *02*, 093:1–093:27.
+
+29. Han, T.; Lewis, I.; Ruiz, R.; Si, Z.-G. Lepton number violation and $W'$ chiral couplings at the LHC. *Phys. Rev. D* **2013**, *87*, 035011:1–035011:25.
+
+30. Drewes, M. The phenomenology of right handed neutrinos. *Int. J. Mod. Phys. E* **2013**, *22*, 1330019:1–1330019:75.
+
+31. Barut, A.O.; McEwan, J. The four states of the massless neutrino with Pauli coupling by spin-gauge invariance. *Lett. Math. Phys.* **1986**, *11*, 67–72.
+
+32. Palcu, A. Neutrino mass as a consequence of the exact solution of 3-3-1 gauge models without exotic electric charges. *Mod. Phys. Lett. A* **2006**, *21*, 1203–1217.
+
+33. Bilenky, S.M. Neutrino. *Phys. Part. Nucl.* **2013**, *44*, 1–46.
+
+34. Alhendi, H.A.; Lashin, E.I.; Mudlej, A.A. Textures with two traceless submatrices of the neutrino mass matrix. *Phys. Rev. D* **2008**, *77*, 013009:1–013009:13.
+
+35. Weinberg, S. Photons and gravitons in S-matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. *Phys. Rev.* **1964**, *135*, B1049–B1056.
+
+36. Higgs, P.W. Broken symmetries and the masses of gauge bosons. *Phys. Rev. Lett.* **1964**, *13*, 508–509.
+
+Symmetry **2014**, *6*, 473–515
+---PAGE_BREAK---
+
+37. Guralnik, G.S.; Hagen, C.R.; Kibble, T.W.B. Global conservation laws and massless particles. *Phys. Rev. Lett.* **1964**, *13*, 585–587.
+
+38. Weinberg, S. A model of leptons. *Phys. Rev. Lett.* **1967**, *19*, 1264–1266.
+
+39. Weinberg, S. *Quantum Theory of Fields, Volume II, Modern Applications*; Cambridge University Press: Cambridge, UK, 1996.
+
+40. Dée, A.; Ivanov, I.P. Higgs boson masses of the general two-Higgs-doublet model in the Minkowski-space formalism. *Phys. Rev. D* **2010**, *81*, 015012:1–015012:8.
+
+41. Inönü, E.; Wigner, E.P. On the contraction of groups and their representations. *Proc. Natl. Acad. Sci. USA* **1953**, *39*, 510–524.
+
+© 2014 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution
+(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
+
+Article
+
+Loop Representation of Wigner's Little Groups
+
+Sibel Başkal ¹, Young S. Kim ²,* and Marilyn E. Noz ³
+
+¹ Department of Physics, Middle East Technical University, 06800 Ankara, Turkey; baskal@newton.physics.metu.edu.tr
+
+² Center for Fundamental Physics, University of Maryland, College Park, MD 20742, USA
+
+³ Department of Radiology, New York University, New York, NY 10016, USA; marilyn.noz@med.nyu.edu
+
+* Correspondence: yskim@umd.edu; Tel.: +1-301-937-6306
+
+Academic Editor: Sergei D. Odintsov
+
+Received: 12 May 2017; Accepted: 15 June 2017; Published: 23 June 2017
+
+**Abstract:** Wigner's little groups are the subgroups of the Lorentz group whose transformations leave the momentum of a given particle invariant. They thus define the internal space-time symmetries of relativistic particles. These symmetries take different mathematical forms for massive and for massless particles. However, it is shown possible to construct one unified representation using a graphical description. This graphical approach allows us to give a vivid description of parity, time reversal, and charge conjugation of the internal symmetry groups. As for the language of group theory, the two-by-two representation is used throughout the paper. While this two-by-two representation is for spin-1/2 particles, it is shown possible to construct the representations for spin-0 particles, for spin-1 particles, as well as for higher-spin particles, for both massive and massless cases. It is shown also that the four-by-four Dirac matrices constitute a two-by-two representation of Wigner's little groups.
+
+**Keywords:** Wigner's little groups; Lorentz group; unified picture of massive and massless particles; two-by-two representations; graphical approach to internal space-time symmetries
+
+PACS: 02.10.Yn; 02.20.Uw; 03.65.Fd
+
+# 1. Introduction
+
+In his 1939 paper [1], Wigner introduced subgroups of the Lorentz group whose transformations leave the momentum of a given particle invariant. These subgroups are called Wigner’s little groups in the literature and are known as the symmetry groups for internal space-time structure.
+
+For instance, a massive particle at rest can have spin that can be rotated in three-dimensional space.
+The little group in this case is the three-dimensional rotation group. For a massless particle moving
+along the z direction, Wigner noted that rotations around the z axis do not change the momentum.
+In addition, he found two more degrees of freedom, which together with the rotation, constitute a
+subgroup locally isomorphic to the two-dimensional Euclidean group.
+
+However, Wigner’s 1939 paper did not deal with the following critical issues.
+
+1. As for the massive particle, Wigner worked out his little group in the Lorentz frame where the particle is at rest with zero momentum, resulting in the three-dimensional rotation group. He could have Lorentz-boosted the O(3)-like little group to make the little group for a moving particle.
+
+2. While the little group for a massless particle is like *E*(2), it is not difficult to associate the rotational degree of freedom with the helicity. However, Wigner did not give physical interpretations to the two translation-like degrees of freedom.
+
+3. While the Lorentz group does not allow mass variations, particles with infinite momentum should behave like massless particles. The question is whether the Lorentz-boosted O(3)-like little group becomes the *E*(2)-like little group for particles with infinite momentum.
+---PAGE_BREAK---
+
+These issues have been properly addressed since then [2–5]. The translation-like degrees of freedom for massless particles collapse into one gauge degree of freedom, and the *E*(2)-like little group can be obtained as the infinite-momentum limit of the *O*(3)-like little group. This history is summarized in Figure 1.
+
+**Figure 1.** *O*(3)-like and *E*(2)-like internal space-time symmetries of massive and massless particles. The sphere corresponds to the *O*(3)-like little group for the massive particle. There is a plane tangential to the sphere at its north pole, which is *E*(2). There is also a cylinder tangent to the sphere at its equatorial belt. This cylinder gives one helicity and one gauge degree of freedom. This figure thus gives a unified picture of the little groups for massive and massless particles [5].
+
+In this paper, we shall present these developments using a mathematical language more transparent than those used in earlier papers.
+
+1. In his original paper [1], Wigner worked out his little group for the massive particle when its momentum is zero. How about moving massive particles? In this paper, we start with a moving particle with non-zero momentum. We then perform rotations and boosts whose net effect does not change the momentum [6–8]. This procedure can be applied to the massive, massless, and imaginary-mass cases.
+
+2. By now, we have a clear understanding of the group SL(2, c) as the universal covering group of the Lorentz group. The logic with two-by-two matrices is far more transparent than the mathematics based on four-by-four matrices. We shall thus use the two-by-two representation of the Lorentz group throughout the paper [5,9–11].
+
+The purpose of this paper is to make the physics contained in Wigner’s original paper more transparent. In Section 2, we give the six generators of the Lorentz group. It is possible to write them in terms of coordinate transformations, four-by-four matrices, and two-by-two matrices. In Section 3, we introduce Wigner’s little groups in terms of two-by-two matrices. In Section 4, it is shown possible to construct transformation matrices of the little group by performing rotations and a boost resulting in a non-trivial matrix, which leaves the given momentum invariant.
+
+Since we are more familiar with Dirac matrices than with the Lorentz group, it is shown in Section 5 that the Dirac matrices constitute a representation of the Lorentz group, and that Dirac's four-by-four matrices are two-by-two
+---PAGE_BREAK---
+
+representations of Wigner's little groups. In Section 6, we construct spin-0 and spin-1 particles from the SL(2, c) spinors. We also discuss massless higher-spin particles.
+
+## 2. Lorentz Group and Its Representations
+
+The group of four-by-four matrices, which performs Lorentz transformations on the four-dimensional Minkowski space leaving invariant the quantity ($t^2 - z^2 - x^2 - y^2$), forms the starting point for the Lorentz group. As there are three rotation and three boost generators, the Lorentz group is a six-parameter group.
+
+Einstein, by observing that this Lorentz group applies equally to the energy-momentum four-vector $(E, p_z, p_x, p_y)$, was able to derive his Lorentz-covariant energy-momentum relation commonly known as $E = mc^2$. Thus, the particle mass is a Lorentz-invariant quantity.
+
+The Lorentz group is generated by the three rotation operators:
+
+$$J_i = -i \left( x_j \frac{\partial}{\partial x_k} - x_k \frac{\partial}{\partial x_j} \right), \qquad (1)$$
+
+where $i, j, k = 1, 2, 3$, and three boost operators:
+
+$$K_i = -i \left( t \frac{\partial}{\partial x_i} + x_i \frac{\partial}{\partial t} \right). \qquad (2)$$
+
+These generators satisfy the closed set of commutation relations:
+
+$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, K_j] = i\epsilon_{ijk}K_k, \quad [K_i, K_j] = -i\epsilon_{ijk}J_k, \qquad (3)$$
+
+which are known as the Lie algebra for the Lorentz group.
+
+Under the space inversion, $x_i \rightarrow -x_i$, or the time reflection, $t \rightarrow -t$, the boost generators $K_i$ change sign. However, the Lie algebra remains invariant, which means that the commutation relations remain invariant under Hermitian conjugation.
+
+In terms of four-by-four matrices applicable to the Minkowskian coordinate of $(t,z,x,y)$, the generators can be written as:
+
+$$J_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}, \quad K_3 = \begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad (4)$$
+
+for rotations around and boosts along the z direction, respectively. Similar expressions can be written for the x and y directions. We see here that the rotation generators $J_i$ are Hermitian, but the boost generators $K_i$ are anti-Hermitian.
+
+We can also consider the two-by-two matrices:
+
+$$J_i = \frac{1}{2}\sigma_i, \quad \text{and} \quad K_i = \frac{i}{2}\sigma_i, \qquad (5)$$
+
+where $\sigma_i$ are the Pauli spin matrices. These matrices also satisfy the commutation relations given in Equation (3).
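As an illustrative check (our own numerical sketch, not part of the original text), the two-by-two generators of Equation (5) can be substituted into the Lie algebra of Equation (3) directly:

```python
import numpy as np

# Pauli matrices and the two-by-two generators of Equation (5):
# J_i = sigma_i / 2 (rotations), K_i = i sigma_i / 2 (boosts)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [s / 2 for s in sigma]
K = [1j * s / 2 for s in sigma]

def comm(a, b):
    return a @ b - b @ a

eps = np.zeros((3, 3, 3))  # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# Check all three sets of commutators in Equation (3)
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(J[i], J[j]),
                           sum(1j * eps[i, j, k] * J[k] for k in range(3)))
        assert np.allclose(comm(J[i], K[j]),
                           sum(1j * eps[i, j, k] * K[k] for k in range(3)))
        assert np.allclose(comm(K[i], K[j]),
                           sum(-1j * eps[i, j, k] * J[k] for k in range(3)))
```

The anti-Hermiticity of the boost generators is visible here: `K[i]` is $i/2$ times a Hermitian matrix.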
+
+There are interesting three-parameter subgroups of the Lorentz group. In 1939 [1], Wigner considered the subgroups whose transformations leave the four-momentum of a given particle invariant. First of all, consider a massive particle at rest. The momentum of this particle is invariant under rotations in three-dimensional space. What happens for the massless particle that cannot be brought to a rest frame? In this paper we shall consider this and other problems using the two-by-two representation of the Lorentz group.
+---PAGE_BREAK---
+
+### 3. Two-by-Two Representation of Wigner's Little Groups
+
+The six generators of Equation (5) lead to the group of two-by-two unimodular matrices of the form:
+
+$$ G = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \qquad (6) $$
+
+with $\det(G) = 1$, where the matrix elements are complex numbers. There are thus six independent real parameters to accommodate the six generators given in Equation (5). The group of matrices of this form is called SL(2, c) in the literature. Since the generators $K_i$ are not Hermitian, the matrix G is not always unitary, and its Hermitian conjugate is not necessarily its inverse.
+
+The space-time four-vector can be written as [5,9,11]:
+
+$$ \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix}, \qquad (7) $$
+
+whose determinant is $t^2 - z^2 - x^2 - y^2$, and remains invariant under the Hermitian transformation:
+
+$$ X' = G X G^{\dagger}. \qquad (8) $$
+
+This is thus a Lorentz transformation. This transformation can be explicitly written as:
+
+$$ \begin{pmatrix} t'+z' & x'-iy' \\ x'+iy' & t'-z' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix} \begin{pmatrix} \alpha^* & \gamma^* \\ \beta^* & \delta^* \end{pmatrix}. \qquad (9) $$
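The invariance can be checked numerically. In the sketch below (our own; the helper name `spacetime_matrix` is ours), a random unimodular G preserves both the determinant and the Hermiticity of X, so the transformed matrix again defines real space-time coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

def spacetime_matrix(t, z, x, y):
    # The two-by-two form of Equation (7); det = t^2 - z^2 - x^2 - y^2
    return np.array([[t + z, x - 1j * y], [x + 1j * y, t - z]])

# A random G in SL(2, c): rescale a random complex matrix so det(G) = 1
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
G = A / np.sqrt(np.linalg.det(A))

X = spacetime_matrix(1.0, 0.3, -0.2, 0.5)
Xp = G @ X @ G.conj().T  # the transformation of Equation (8)

# The Minkowskian interval (the determinant) is preserved
assert np.isclose(np.linalg.det(Xp), np.linalg.det(X))
# X' is again Hermitian, so it defines new real (t', z', x', y')
assert np.allclose(Xp, Xp.conj().T)
```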
+
+With these six independent real parameters, it is possible to construct four-by-four matrices for Lorentz transformations applicable to the four-dimensional Minkowskian space [5,12]. For the purpose of the present paper, we need some special cases, and they are given in Table 1.
+
+Table 1. Two-by-two and four-by-four representations of the Lorentz group.
+
+| Generators | Two-by-Two | Four-by-Four |
+|---|---|---|
+| $J_3 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} e^{i\phi/2} & 0 \\ 0 & e^{-i\phi/2} \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\phi & -\sin\phi \\ 0 & 0 & \sin\phi & \cos\phi \end{pmatrix}$ |
+| $K_3 = \frac{1}{2}\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$ | $\begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix}$ | $\begin{pmatrix} \cosh\eta & \sinh\eta & 0 & 0 \\ \sinh\eta & \cosh\eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| $J_1 = \frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & i\sin(\theta/2) \\ i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & 0 & -\sin\theta \\ 0 & 0 & 1 & 0 \\ 0 & \sin\theta & 0 & \cos\theta \end{pmatrix}$ |
+| $K_1 = \frac{1}{2}\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | $\begin{pmatrix} \cosh\lambda & 0 & \sinh\lambda & 0 \\ 0 & 1 & 0 & 0 \\ \sinh\lambda & 0 & \cosh\lambda & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| $J_2 = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ |
+| $K_2 = \frac{1}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & -i\sinh(\lambda/2) \\ i\sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ | $\begin{pmatrix} \cosh\lambda & 0 & 0 & \sinh\lambda \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sinh\lambda & 0 & 0 & \cosh\lambda \end{pmatrix}$ |
+
+The four-by-four matrices are applicable to the Minkowskian coordinates $(t, z, x, y)$.
+---PAGE_BREAK---
+
+Likewise, the two-by-two matrix for the four-momentum takes the form:
+
+$$P = \begin{pmatrix} p_0 + p_z & p_x - ip_y \\ p_x + ip_y & p_0 - p_z \end{pmatrix}, \qquad (10)$$
+
+with $p_0 = \sqrt{m^2 + p_z^2 + p_x^2 + p_y^2}$. The transformation property of Equation (9) is applicable also to this energy-momentum four-vector.
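A one-line numerical check (our own sketch; the helper name `momentum_matrix` is ours) confirms that the determinant of the matrix of Equation (10) is the Lorentz-invariant $m^2$:

```python
import numpy as np

def momentum_matrix(m, pz, px, py):
    # Two-by-two form of the energy-momentum four-vector, Equation (10)
    p0 = np.sqrt(m**2 + pz**2 + px**2 + py**2)
    return np.array([[p0 + pz, px - 1j * py], [px + 1j * py, p0 - pz]])

P = momentum_matrix(m=0.5, pz=1.0, px=0.3, py=-0.2)
# det(P) = p0^2 - pz^2 - px^2 - py^2 = m^2
assert np.isclose(np.linalg.det(P).real, 0.5**2)
```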
+
+In 1939 [1], Wigner considered the following three four-vectors.
+
+$$P_+ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad P_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad P_- = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad (11)$$
+
+whose determinants are 1, 0, and -1, respectively, corresponding to the four-momenta of massive, massless, and imaginary-mass particles, as shown in Table 2.
+
+Table 2. The Wigner momentum vectors in the two-by-two matrix representation together with the corresponding transformation matrix. These four-momentum matrices have determinants that are positive, zero, and negative for massive, massless, and imaginary-mass particles, respectively.
+
+| Particle Mass | Four-Momentum | Transform Matrix |
+|---|---|---|
+| Massive | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$ |
+| Massless | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 1 & -\xi \\ 0 & 1 \end{pmatrix}$ |
+| Imaginary mass | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}$ |
+
+He then constructed the subgroups of the Lorentz group whose transformations leave these four-momenta invariant. These subgroups are called Wigner's little groups in the literature. Thus, the matrices of these little groups should satisfy:
+
+$$W P_i W^\dagger = P_i, \qquad (12)$$
+
+where $i = +, 0, -$.
+
+Since the momentum of the particle is fixed, these little groups define the internal space-time symmetries of the particle. For all three cases, the momentum is invariant under rotations around the z axis, as can be seen from the expression given for the rotation matrix generated by $J_3$ given in Table 1.
+
+For the first case corresponding to a massive particle at rest, the requirement of the subgroup is:
+
+$$W P_+ W^\dagger = P_+. \qquad (13)$$
+
+This requirement tells that the subgroup is the rotation subgroup with the rotation matrix around the y direction:
+
+$$R(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}. \qquad (14)$$
+
+For the second case of $P_0$, the triangular matrix of the form:
+
+$$\Gamma(\xi) = \begin{pmatrix} 1 & -\xi \\ 0 & 1 \end{pmatrix}, \qquad (15)$$
+---PAGE_BREAK---
+
+satisfies the Wigner condition of Equation (12). If we allow rotations around the z axis, the expression becomes:
+
+$$ \Gamma(\xi, \phi) = \begin{pmatrix} 1 & -\xi \exp(-i\phi) \\ 0 & 1 \end{pmatrix}. \quad (16) $$
+
+This matrix is generated by:
+
+$$ N_1 = J_2 - K_1 = \begin{pmatrix} 0 & -i \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad N_2 = J_1 + K_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. \quad (17) $$
+
+Thus, the little group is generated by $J_3$, $N_1$, and $N_2$. They satisfy the commutation relations:
+
+$$ [N_1, N_2] = 0, \quad [J_3, N_1] = -iN_2, \quad [J_3, N_2] = iN_1. \quad (18) $$
+
+Wigner in 1939 [1] observed that this set is the same as that of the two-dimensional Euclidean group with one rotation and two translations. The physical interpretation of the rotation is easy to understand. It is the helicity of the massless particle. On the other hand, the physics of the $N_1$ and $N_2$ matrices has a stormy history, and the issue was not completely settled until 1990 [4]. They generate gauge transformations.
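The generators of Equation (17) and their commutators can be verified directly. The following sketch (our own, not from the paper) builds $N_1$ and $N_2$ from the two-by-two generators of Equation (5) and checks that they commute with each other while $J_3$ rotates them into one another:

```python
import numpy as np

# Two-by-two generators from Equation (5)
J1 = np.array([[0, 1], [1, 0]], dtype=complex) / 2
J2 = np.array([[0, -1j], [1j, 0]]) / 2
J3 = np.array([[1, 0], [0, -1]], dtype=complex) / 2
K1 = 1j * np.array([[0, 1], [1, 0]]) / 2
K2 = 1j * np.array([[0, -1j], [1j, 0]]) / 2

# The E(2)-like generators of Equation (17)
N1 = J2 - K1
N2 = J1 + K2

def comm(a, b):
    return a @ b - b @ a

# Explicit forms given in Equation (17)
assert np.allclose(N1, [[0, -1j], [0, 0]])
assert np.allclose(N2, [[0, 1], [0, 0]])

# The translations commute; J3 rotates N1 and N2 into each other
assert np.allclose(comm(N1, N2), 0)
assert np.allclose(comm(J3, N1), -1j * N2)
assert np.allclose(comm(J3, N2), 1j * N1)
```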
+
+For the third case of $P_-$, the matrix of the form:
+
+$$ S(\lambda) = \begin{pmatrix} \cosh(\lambda/2) & \sinh(\lambda/2) \\ \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}, \quad (19) $$
+
+satisfies the Wigner condition of Equation (12). This corresponds to the Lorentz boost along the x direction generated by $K_1$ as shown in Table 1. Because of the rotation symmetry around the z axis, the Wigner condition is satisfied also by the boost along the y axis. The little group is thus generated by $J_3$, $K_1$, and $K_2$, which satisfy:
+
+$$ [J_3, K_1] = iK_2, \quad [J_3, K_2] = -iK_1, \quad [K_1, K_2] = -iJ_3, \quad (20) $$
+
+and thus form the little group $O(2, 1)$, the Lorentz group applicable to two space-like dimensions and one time-like dimension.
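The Wigner condition of Equation (12) can be verified for all three cases at once. The following sketch (our own, with arbitrary parameter values) checks the matrices of Equations (14), (15) and (19) against the momenta of Equation (11):

```python
import numpy as np

theta, xi, lam = 0.7, 1.3, 0.9

# Wigner's three reference four-momenta, Equation (11)
P_plus = np.eye(2)             # massive particle at rest
P_zero = np.diag([1.0, 0.0])   # massless particle
P_minus = np.diag([1.0, -1.0]) # imaginary-mass particle

# The corresponding little-group matrices, Equations (14), (15) and (19)
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
Gamma = np.array([[1.0, -xi], [0.0, 1.0]])
S = np.array([[np.cosh(lam / 2), np.sinh(lam / 2)],
              [np.sinh(lam / 2), np.cosh(lam / 2)]])

# Each matrix satisfies the Wigner condition W P W† = P of Equation (12)
for W, P in [(R, P_plus), (Gamma, P_zero), (S, P_minus)]:
    assert np.allclose(W @ P @ W.conj().T, P)
```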
+
+Of course, we can add rotations around the z axis. Let us Lorentz-boost these matrices along the z direction with the diagonal matrix:
+
+$$ B(\eta) = \begin{pmatrix} \exp(\eta/2) & 0 \\ 0 & \exp(-\eta/2) \end{pmatrix}. \quad (21) $$
+
+Then, the matrices of Equations (14), (15), and (19) become:
+
+$$ B(\eta)R(\theta)B(-\eta) = \begin{pmatrix} \cos(\theta/2) & -e^{\eta} \sin(\theta/2) \\ e^{-\eta} \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}, \quad (22) $$
+
+$$ B(\eta)\Gamma(\xi)B(-\eta) = \begin{pmatrix} 1 & -e^{\eta}\xi \\ 0 & 1 \end{pmatrix}, \quad (23) $$
+
+$$ B(\eta)S(-\lambda)B(-\eta) = \begin{pmatrix} \cosh(\lambda/2) & -e^{\eta} \sinh(\lambda/2) \\ -e^{-\eta} \sinh(\lambda/2) & \cosh(\lambda/2) \end{pmatrix}, \quad (24) $$
+
+respectively. We have changed the sign of $\lambda$ for future convenience.
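These conjugations are easy to verify numerically. The sketch below (our own, with arbitrary parameter values) reproduces the right-hand sides of Equations (22)–(24):

```python
import numpy as np

eta, theta, xi, lam = 0.8, 0.5, 1.1, 0.6

B = lambda e: np.diag([np.exp(e / 2), np.exp(-e / 2)])  # Equation (21)
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
Gamma = np.array([[1.0, -xi], [0.0, 1.0]])
S = lambda l: np.array([[np.cosh(l / 2), np.sinh(l / 2)],
                        [np.sinh(l / 2), np.cosh(l / 2)]])

# Equation (22): boosted rotation matrix
assert np.allclose(B(eta) @ R @ B(-eta),
                   [[np.cos(theta / 2), -np.exp(eta) * np.sin(theta / 2)],
                    [np.exp(-eta) * np.sin(theta / 2), np.cos(theta / 2)]])

# Equation (23): boosted triangular matrix
assert np.allclose(B(eta) @ Gamma @ B(-eta),
                   [[1.0, -np.exp(eta) * xi], [0.0, 1.0]])

# Equation (24): boosted S(-lambda)
assert np.allclose(B(eta) @ S(-lam) @ B(-eta),
                   [[np.cosh(lam / 2), -np.exp(eta) * np.sinh(lam / 2)],
                    [-np.exp(-eta) * np.sinh(lam / 2), np.cosh(lam / 2)]])
```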
+---PAGE_BREAK---
+
+When $\eta$ becomes large, $\theta$, $\xi$, and $\lambda$ should become small if the upper-right elements of these three matrices are to remain finite. In that case, the diagonal elements become one, and all three matrices become like the triangular matrix:
+
+$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}. \tag{25} $$
+
+Here comes the question of whether the matrix of Equation (24) can be continued from Equation (22), via Equation (23). For this purpose, let us write Equation (22) as:
+
+$$ \begin{pmatrix} 1 - \frac{(\gamma\epsilon)^2}{2} & -\gamma \\ \gamma\epsilon^2 & 1 - \frac{(\gamma\epsilon)^2}{2} \end{pmatrix}, \tag{26} $$
+
+for small $\theta = 2\gamma\epsilon$, with $\epsilon = e^{-\eta}$. For Equation (24), we can write:
+
+$$ \begin{pmatrix} 1 + \frac{(\gamma\epsilon)^2}{2} & -\gamma \\ -\gamma\epsilon^2 & 1 + \frac{(\gamma\epsilon)^2}{2} \end{pmatrix}, \tag{27} $$
+
+with $\lambda = 2\gamma\epsilon$. Both of these expressions become the triangular matrix of Equation (25) when $\epsilon = 0$. For small values of $\epsilon$, the diagonal elements change from $\cos(\theta/2)$ to $\cosh(\lambda/2)$, while $\sin(\theta/2)$ becomes $-\sinh(\lambda/2)$. Thus, it is possible to continue from Equation (22) to Equation (24). The mathematical details of this process have been discussed in our earlier paper on this subject [13].
+
+We are then led to the question of whether there is one expression that will take care of all three cases. We shall discuss this issue in Section 4.
+
+**4. Loop Representation of Wigner's Little Groups**
+
+It was noted in Section 3 that matrices of Wigner’s little group take different forms for massive, massless, and imaginary-mass particles. In this section, we construct one two-by-two matrix that works for all three different cases.
+
+In his original paper [1], Wigner constructs those matrices in specific Lorentz frames. For instance, for a moving massive particle with a non-zero momentum, Wigner brings it to the rest frame and works out the *O*(3) subgroup of the Lorentz group as the little group for this massive particle. In order to complete the little group, we should boost this *O*(3) to the frame with the original non-zero momentum [4].
+
+In this section, we construct transformation matrices without changing the momentum. Let us assume that the momentum is along the z direction; the rotation around the z axis leaves the momentum invariant. According to the Euler decomposition, the rotation around the y axis, in addition, will accommodate rotations along all three directions. For this reason, it is enough to study what happens in transformations within the xz plane [14].
+
+It was Kupersztych [6] who showed in 1976 that it is possible to construct a momentum-preserving transformation by a rotation followed by a boost as shown in Figure 2. In 1981 [7], Han and Kim showed that the boost can be decomposed into two components as illustrated in Figure 2. In 1988 [8], Han and Kim showed that the same purpose can be achieved by one boost preceded and followed by the same rotation matrix, as shown also in Figure 2. We choose to call this loop the “D loop” and write the transformation matrix as:
+
+$$ D(\alpha, \chi) = R(\alpha)S(-2\chi)R(\alpha). \tag{28} $$
+---PAGE_BREAK---
+
+**Figure 2.** Evolution of the Wigner loop. In 1976 [6], Kupersztych considered a rotation followed by a boost whose net result will leave the momentum invariant. In 1981 [7], Han and Kim considered the same problem with simpler forms for boost matrices. In 1988, Han and Kim [8] constructed the Lorentz kinematics corresponding to the Bargmann decomposition [10] consisting of one boost matrix sandwiched by two rotation matrices. In the present case, the two rotation matrices are identical.
+
+The *D* matrix can now be written as three matrices. This form is known in the literature as the Bargmann decomposition [10]. This form gives additional convenience. When we take the inverse or the Hermitian conjugate, we have to reverse the order of matrices. However, this particular form does not require re-ordering.
+
+The *D* matrix of Equation (28) becomes:
+
+$$ D(\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi - (\sin \alpha) \cosh \chi \\ -\sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (29) $$
+
+If the diagonal elements are smaller than one, with $(\cos \alpha) \cosh \chi < 1$, the off-diagonal elements have opposite signs. Thus, this *D* matrix can serve as the Wigner matrix of Equation (22) for massive particles. If the diagonal elements are equal to one, one of the off-diagonal elements vanishes, and this matrix becomes triangular like Equation (23). If the diagonal elements are greater than one, with $(\cos \alpha) \cosh \chi > 1$, this matrix can become Equation (24). In this way, the matrix of Equation (28) can accommodate the three different expressions given in Equations (22)–(24).
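The decomposition of Equation (28) can be checked against the closed form of Equation (29). The sketch below (our own) also confirms that *D* is unimodular, as any Lorentz transformation matrix must be:

```python
import numpy as np

def R(a):  # rotation matrix, Equation (14)
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def S(l):  # boost matrix, Equation (19)
    return np.array([[np.cosh(l / 2), np.sinh(l / 2)],
                     [np.sinh(l / 2), np.cosh(l / 2)]])

def D(alpha, chi):
    # closed form of Equation (29)
    return np.array([
        [np.cos(alpha) * np.cosh(chi),
         -np.sinh(chi) - np.sin(alpha) * np.cosh(chi)],
        [-np.sinh(chi) + np.sin(alpha) * np.cosh(chi),
         np.cos(alpha) * np.cosh(chi)]])

alpha, chi = 0.6, 0.4
# The Bargmann decomposition of Equation (28): D = R(alpha) S(-2 chi) R(alpha)
assert np.allclose(R(alpha) @ S(-2 * chi) @ R(alpha), D(alpha, chi))
# D is unimodular
assert np.isclose(np.linalg.det(D(alpha, chi)), 1.0)
```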
+
+### 4.1. Continuity Problems
+
+Let us go back to the three separate formulas given in Equations (22)–(24). If $\eta$ becomes infinity, all three of them become triangular. For the massive particle, $\tanh \eta$ is the particle speed, and:
+
+$$ \tanh \eta = \frac{p}{p_0}, \quad (30) $$
+
+where *p* and $p_0$ are the momentum and energy of the particle, respectively.
+When the particle is massive with $m^2 > 0$, the ratio:
+
+$$ \frac{\text{lower-left element}}{\text{upper-right element}}, \quad (31) $$
+
+is negative and is:
+
+$$ -e^{-2\eta} = \frac{1 - \sqrt{1 + m^2/p^2}}{1 + \sqrt{1 + m^2/p^2}}. \quad (32) $$
+---PAGE_BREAK---
+
+If the mass is imaginary with $m^2 < 0$, the ratio is positive and:
+
+$$e^{-2\eta} = \frac{1 - \sqrt{1 + m^2/p^2}}{1 + \sqrt{1 + m^2/p^2}} \quad (33)$$
+
+This ratio is zero for massless particles. This means that when $m^2$ changes from positive to negative, the ratio changes from $-e^{-2\eta}$ to $e^{-2\eta}$. This transition is continuous, but not analytic. This aspect of non-analytic continuity has been discussed in one of our earlier papers [13].
+
+The *D* matrix of Equation (29) combines all three matrices given in Equations (22)–(24) into one matrix. For this matrix, the ratio of Equation (31) becomes:
+
+$$\frac{\tanh \chi - \sin \alpha}{\tanh \chi + \sin \alpha} = \frac{1 - \sqrt{1 + (m/p)^2}}{1 + \sqrt{1 + (m/p)^2}} \quad (34)$$
+
+Thus,
+
+$$\frac{m^2}{p^2} = \left( \frac{\sin \alpha}{\tanh \chi} \right)^2 - 1. \quad (35)$$
+
+For the *D* loop of Figure 2, both $\tanh \chi$ and $\sin \alpha$ range from 0–1, as illustrated in Figure 3. For small values of the mass at a fixed value of the momentum, the ratio of Equation (34) becomes:
+
+$$-\frac{m^2}{4p^2}. \quad (36)$$
+
+Thus, the change from positive values of $m^2$ to negative values is continuous and analytic. For massless particles, $m^2$ is zero, while it is negative for imaginary-mass particles.
+
+We realize that the mass cannot be changed within the frame of the Lorentz group and that both $\alpha$ and $\eta$ are parameters of the Lorentz group. On the other hand, their combinations according to the *D* loop of Figure 2 can change the value of $m^2$ according to Equation (35) and Figure 3.
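The relation between the loop parameters and the mass can be verified numerically. In the sketch below (our own), the chosen point has $\sin\alpha > \tanh\chi$, and the recovered $m^2/p^2$ is therefore positive, as expected for a massive particle:

```python
import numpy as np

def D(alpha, chi):  # Equation (29)
    return np.array([
        [np.cos(alpha) * np.cosh(chi),
         -np.sinh(chi) - np.sin(alpha) * np.cosh(chi)],
        [-np.sinh(chi) + np.sin(alpha) * np.cosh(chi),
         np.cos(alpha) * np.cosh(chi)]])

alpha, chi = 0.9, 0.5  # sin(alpha) > tanh(chi): massive-particle region
M = D(alpha, chi)
ratio = M[1, 0] / M[0, 1]  # lower-left over upper-right, Equation (31)

# Equation (34): the ratio expressed in terms of alpha and chi
lhs = (np.tanh(chi) - np.sin(alpha)) / (np.tanh(chi) + np.sin(alpha))
assert np.isclose(ratio, lhs)

# Equation (35): (m/p)^2 recovered from the loop parameters
m2_over_p2 = (np.sin(alpha) / np.tanh(chi))**2 - 1
rhs = (1 - np.sqrt(1 + m2_over_p2)) / (1 + np.sqrt(1 + m2_over_p2))
assert np.isclose(ratio, rhs)
assert m2_over_p2 > 0  # positive mass-squared in this region
```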
+
+**Figure 3.** Non-Lorentzian transformations allowing mass variations. The *D* matrix of Equation (29) allows us to change the $\chi$ and $\alpha$ analytically within the square region in (a). These variations allow the mass variations illustrated in (b), not allowed in Lorentz transformations. The Lorentz transformations are possible along the hyperbolas given in this figure.
+
+## 4.2. Parity, Time Reversal, and Charge Conjugation
+
+Space inversion leads to the sign change in $\chi$:
+
+$$D(\alpha, -\chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi - (\sin \alpha) \cosh \chi \\ \sinh \chi + (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}, \quad (37)$$
+---PAGE_BREAK---
+
+and time reversal leads to the sign change in both $\alpha$ and $\chi$:
+
+$$D(-\alpha, -\chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & \sinh \chi + (\sin \alpha) \cosh \chi \\ \sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (38)$$
+
+If we space-invert this expression, the result is a change only in the direction of rotation,
+
+$$D(-\alpha, \chi) = \begin{pmatrix} (\cos \alpha) \cosh \chi & -\sinh \chi + (\sin \alpha) \cosh \chi \\ -\sinh \chi - (\sin \alpha) \cosh \chi & (\cos \alpha) \cosh \chi \end{pmatrix}. \quad (39)$$
+
+The combined transformation of space inversion and time reversal is known as the “charge conjugation”. All of these transformations are illustrated in Figure 4.
+
+Figure 4. Parity, time reversal, and charge conjugation of Wigner’s little groups in the loop representation.
+
+Let us go back to the Lie algebra of Equation (3). This algebra is invariant under Hermitian conjugation. This means that there is another set of commutation relations,
+
+$$[J_i, J_j] = i\epsilon_{ijk}J_k, \quad [J_i, \dot{K}_j] = i\epsilon_{ijk}\dot{K}_k, \quad [\dot{K}_i, \dot{K}_j] = -i\epsilon_{ijk}J_k, \quad (40)$$
+
+where $K_i$ is replaced with $\dot{K}_i = -K_i$. As can be seen from the expression of Equation (2), this transition to the dotted representation is achieved by space inversion, namely by the parity operation.
+
+On the other hand, the complex conjugation of the Lie algebra of Equation (3) leads to:
+
+$$[J_i^*, J_j^*] = -i\epsilon_{ijk}J_k^*, \quad [J_i^*, K_j^*] = -i\epsilon_{ijk}K_k^*, \quad [K_i^*, K_j^*] = i\epsilon_{ijk}J_k^*. \quad (41)$$
+---PAGE_BREAK---
+
+It is possible to restore this algebra to that of the original form of Equation (3) if we replace $J_i^*$ by $-J_i$ and $K_i^*$ by $-K_i$. This corresponds to the time-reversal process. This operation is known as the anti-unitary transformation in the literature [15,16].
+
+Since the algebras of Equations (3) and (41) are invariant under the sign change of $K_i$ and $K_i^*$, respectively, there is another Lie algebra with $J_i^*$ replaced by $-J_i$ and $K_i^*$ by $K_i$. This is the parity operation followed by time reversal, resulting in charge conjugation. With the four-by-four matrices for spin-1 particles, this complex conjugation is trivial, and $J_i^* = -J_i$, as well as $K_i^* = -K_i$.
+
+On the other hand, for spin 1/2 particles, we note that:
+
+$$
+\begin{aligned}
+J_1^* &= J_1, & J_2^* &= -J_2, & J_3^* &= J_3, \\
+K_1^* &= -K_1, & K_2^* &= K_2, & K_3^* &= -K_3.
+\end{aligned}
+\quad (42) $$
+
+Thus, $J_i^*$ should be replaced by $\sigma_2 J_i \sigma_2$, and $K_i^*$ by $-\sigma_2 K_i \sigma_2$.
+
+**5. Dirac Matrices as a Representation of the Little Group**
+
+The Dirac equation, Dirac matrices, and Dirac spinors constitute the basic language for spin-1/2 particles in physics. Yet, they are not widely recognized as a package for Wigner's little group, even though the little group governs spins, and so do the Dirac matrices.
+
+Let us write the Dirac equation as:
+
+$$ (p \cdot \gamma - m)\psi(\vec{x}, t) = \lambda\psi(\vec{x}, t). \quad (43) $$
+
+This equation can be explicitly written as:
+
+$$ \left( -i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m \right) \psi(\vec{x}, t) = \lambda \psi(\vec{x}, t), \quad (44) $$
+
+where:
+
+$$ \gamma_0 = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, \quad \gamma_1 = \begin{pmatrix} 0 & \sigma_1 \\ -\sigma_1 & 0 \end{pmatrix}, \quad \gamma_2 = \begin{pmatrix} 0 & \sigma_2 \\ -\sigma_2 & 0 \end{pmatrix}, \quad \gamma_3 = \begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}, \quad (45) $$
+
+where *I* is the two-by-two unit matrix. We use here the Weyl representation of the Dirac matrices.
+
+The Dirac spinor has four components. Thus, we write the wave function for a free particle as:
+
+$$ \psi(\vec{x}, t) = U_{\pm} \exp [i (\vec{p} \cdot \vec{x} - p_0 t)], \quad (46) $$
+
+with the Dirac spinor:
+
+$$ U_{+} = \begin{pmatrix} u \\ \dot{u} \end{pmatrix}, \qquad U_{-} = \begin{pmatrix} v \\ \dot{v} \end{pmatrix}, \quad (47) $$
+
+where:
+
+$$ u = \dot{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad v = \dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \quad (48) $$
+
+In Equation (46), the exponential form $\exp[i(\vec{p} \cdot \vec{x} - p_0 t)]$ defines the particle momentum, and the column vector $U_{\pm}$ is for the representation space for Wigner's little group dictating the internal space-time symmetries of spin-1/2 particles.
+
+In this four-by-four representation, the generators for rotations and boosts take the form:
+
+$$ J_i = \frac{1}{2} \begin{pmatrix} \sigma_i & 0 \\ 0 & \sigma_i \end{pmatrix}, \quad \text{and} \quad K_i = \frac{i}{2} \begin{pmatrix} \sigma_i & 0 \\ 0 & -\sigma_i \end{pmatrix}. \quad (49) $$
+---PAGE_BREAK---
+
+This means that both the dotted and undotted spinors are transformed in the same way under rotations, while they are boosted in opposite directions.
+
+When this $\gamma_0$ matrix is applied to $U_\pm$:
+
+$$ \gamma_0 U_+ = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \begin{pmatrix} u \\ \dot{u} \end{pmatrix} = \begin{pmatrix} \dot{u} \\ u \end{pmatrix}, \quad \text{and} \quad \gamma_0 U_- = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix} \begin{pmatrix} v \\ \dot{v} \end{pmatrix} = \begin{pmatrix} \dot{v} \\ v \end{pmatrix}. \qquad (50) $$
+
+Thus, the $\gamma_0$ matrix interchanges the dotted and undotted spinors. The four-by-four matrix for the rotation around the y axis is:
+
+$$ R_{44}(\theta) = \begin{pmatrix} R(\theta) & 0 \\ 0 & R(\theta) \end{pmatrix}, \qquad (51) $$
+
+while the matrix for the boost along the z direction is:
+
+$$ B_{44}(\eta) = \begin{pmatrix} B(\eta) & 0 \\ 0 & B(-\eta) \end{pmatrix}, \qquad (52) $$
+
+with:
+
+$$ B(\pm\eta) = \begin{pmatrix} e^{\pm\eta/2} & 0 \\ 0 & e^{\mp\eta/2} \end{pmatrix}. \qquad (53) $$
+
+These $\gamma$ matrices satisfy the anticommutation relations:
+
+$$ \{\gamma_{\mu}, \gamma_{\nu}\} = 2g_{\mu\nu}, \qquad (54) $$
+
+where:
+
+$$ g_{00} = 1, \quad g_{11} = g_{22} = g_{33} = -1, $$
+
+$$ g_{\mu\nu} = 0 \quad \text{if } \mu \neq \nu. \qquad (55) $$
+
+Let us consider space inversion with the exponential form changing to $\exp[i(-\vec{p} \cdot \vec{x} - p_0t)]$. For this purpose, we can change the sign of $x$ in the Dirac equation of Equation (44). It then becomes:
+
+$$ \left(-i\gamma_0 \frac{\partial}{\partial t} + i\gamma_1 \frac{\partial}{\partial x} + i\gamma_2 \frac{\partial}{\partial y} + i\gamma_3 \frac{\partial}{\partial z} - m\right) \psi(-\vec{x}, t) = \lambda \psi(-\vec{x}, t). \qquad (56) $$
+
+Since $\gamma_0\gamma_i = -\gamma_i\gamma_0$ for $i=1,2,3$,
+
+$$ \left(-i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m\right) [\gamma_0\psi(-\vec{x}, t)] = \lambda[\gamma_0\psi(-\vec{x}, t)]. \qquad (57) $$
+
+This is the Dirac equation for the wave function under the space inversion or the parity operation. The Dirac spinor $U_\pm$ becomes $\gamma_0 U_\pm$, according to Equation (50). This operation is illustrated in Table 3 and Figure 4.
+
+**Table 3.** Parity, charge conjugation, and time reversal in the loop representation.
+
+| Operation | Loop representation |
+|---|---|
+| Start | $R(\alpha)S(-2\chi)R(\alpha)$ |
+| Time reversal | $R(-\alpha)S(2\chi)R(-\alpha)$ |
+| Parity (space inversion) | $R(\alpha)S(2\chi)R(\alpha)$ |
+| Charge conjugation | $R(-\alpha)S(-2\chi)R(-\alpha)$ |
+---PAGE_BREAK---
+
+We are now interested in changing the sign of $t$. One way is to change both the space and time variables and then change the space variable back. Let us first take the complex conjugate of the equation. Since $\gamma_2$ is imaginary, while all of the other $\gamma$ matrices are real, the Dirac equation becomes:
+
+$$ \left( i\gamma_0 \frac{\partial}{\partial t} + i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} + i\gamma_3 \frac{\partial}{\partial z} - m \right) \psi^*(\vec{x}, t) = \lambda \psi^*(\vec{x}, t). \quad (58) $$
+
+We are now interested in restoring this equation to the original form of Equation (44). In order to achieve this goal, let us consider $(\gamma_1 \gamma_3)$. This form commutes with $\gamma_0$ and $\gamma_2$ and anti-commutes with $\gamma_1$ and $\gamma_3$. Thus,
+
+$$ \left(-i\gamma_0 \frac{\partial}{\partial t} - i\gamma_1 \frac{\partial}{\partial x} - i\gamma_2 \frac{\partial}{\partial y} - i\gamma_3 \frac{\partial}{\partial z} - m\right) (\gamma_1 \gamma_3) \psi^*(\vec{x}, -t) = \lambda (\gamma_1 \gamma_3) \psi^*(\vec{x}, -t). \quad (59) $$
+
+Furthermore, since:
+
+$$ \gamma_1 \gamma_3 = \begin{pmatrix} i\sigma_2 & 0 \\ 0 & i\sigma_2 \end{pmatrix}, \quad (60) $$
+
+this four-by-four matrix changes the direction of the spin. Indeed, this form of time reversal is consistent with Table 3 and Figure 4.
+
+Finally, let us change the signs of both $\vec{x}$ and $t$. For this purpose, we go back to the complex-conjugated Dirac equation of Equation (58). Here, $\gamma_2$ anti-commutes with all of the other $\gamma$ matrices. Thus, the wave function:
+
+$$ \gamma_2 \psi^*(-\vec{x}, -t), \quad (61) $$
+
+should satisfy the Dirac equation. This form is known as the charge-conjugated wave function, and it is also illustrated in Table 3 and Figure 4.
+
+## 5.1. Polarization of Massless Neutrinos
+
+For massless neutrinos, the little group consists of rotations around the z axis, in addition to $N_i$ and $\tilde{N}_i$ applicable to the upper and lower components of the Dirac spinors. Thus, the four-by-four matrix for these generators is:
+
+$$ N_{44(i)} = \begin{pmatrix} N_i & 0 \\ 0 & \tilde{N}_i \end{pmatrix}. \quad (62) $$
+
+The transformation matrix is thus:
+
+$$ D_{44}(\alpha, \beta) = \exp(-i\alpha N_{44(1)} - i\beta N_{44(2)}) = \begin{pmatrix} D(\alpha, \beta) & 0 \\ 0 & \tilde{D}(\alpha, \beta) \end{pmatrix}, \quad (63) $$
+
+with:
+
+$$ D(\alpha, \beta) = \begin{pmatrix} 1 & \alpha - i\beta \\ 0 & 1 \end{pmatrix}, \qquad \tilde{D}(\alpha, \beta) = \begin{pmatrix} 1 & 0 \\ -\alpha - i\beta & 1 \end{pmatrix}. \quad (64) $$
+
+As is illustrated in Figure 1, the $D$ transformation performs the gauge transformation on massless photons. Thus, this transformation allows us to extend the concept of gauge transformations to massless spin-1/2 particles. With this point in mind, let us see what happens when this $D$ transformation is applied to the Dirac spinors.
+
+$$ D(\alpha, \beta)u = u, \qquad \tilde{D}(\alpha, \beta)\dot{v} = \dot{v}. \quad (65) $$
+
+Thus, $u$ and $\dot{v}$ are invariant under gauge transformations.
+---PAGE_BREAK---
+
+What happens to $v$ and $\dot{u}$?
+
+$$D(\alpha, \beta)v = v + (\alpha - i\beta)u, \quad \tilde{D}(\alpha, \beta)\dot{u} = \dot{u} - (\alpha + i\beta)\dot{v}. \qquad (66)$$
+
+These spinors are not invariant under gauge transformations [17,18].
+
+Thus, the Dirac spinor:
+
+$$U_{\text{inv}} = \begin{pmatrix} u \\ \dot{v} \end{pmatrix}, \qquad (67)$$
+
+is gauge-invariant while the spinor:
+
+$$U_{\text{non}} = \begin{pmatrix} v \\ \dot{u} \end{pmatrix}, \qquad (68)$$
+
+is not. Thus, gauge invariance leads to the polarization of massless spin-1/2 particles. Indeed, this is what we observe in the real world.
+
+## 5.2. Small-Mass Neutrinos
+
+Neutrino oscillation experiments presently suggest that neutrinos have a small, but finite mass [19]. If neutrinos have mass, there should be a Lorentz frame in which they can be brought to rest with an $O(3)$-like $SU(2)$ little group for their internal space-time symmetry. However, it is not likely that at-rest neutrinos will be found anytime soon. In the meantime, we have to work with the neutrino with a fixed momentum and a small mass [20]. Indeed, the present loop representation is suitable for this problem.
+
+Since the mass is so small, it is appropriate to approach this small-mass problem as a departure from the massless case. In Section 5.1, it was noted that the polarization of massless neutrinos is a consequence of gauge invariance. Let us start with a left-handed massless neutrino with the spinor:
+
+$$\dot{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (69)$$
+
+and the gauge transformation applicable to this spinor:
+
+$$\Gamma(\gamma) = \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}. \qquad (70)$$
+
+Since:
+
+$$\begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad (71)$$
+
+the spinor of Equation (69) is invariant under the gauge transformation of Equation (70).
+
+If the neutrino has a non-zero mass, the transformation matrix is no longer triangular and becomes that of a rotation. However, for a small mass, the deviation from the triangular form is small. The procedure for deriving the Wigner matrix for this case is given toward the end of Section 3. The matrix in this case is:
+
+$$\mathcal{D}(\gamma) = \begin{pmatrix} 1 - (\gamma\epsilon)^2/2 & -\gamma\epsilon^2 \\ \gamma & 1 - (\gamma\epsilon)^2/2 \end{pmatrix}, \qquad (72)$$
+
+with $\epsilon^2 = m/p$, where *m* and *p* are the mass and momentum of the neutrino, respectively. This matrix becomes the gauge transformation of Equation (70) for $\epsilon = 0$. If this matrix is applied to the spinor of Equation (69), it becomes:
+
+$$\mathcal{D}(\gamma)\dot{v} = \begin{pmatrix} -\gamma\epsilon^2 \\ 1 \end{pmatrix}. \qquad (73)$$
+---PAGE_BREAK---
+
+In this way, the left-handed neutrino gains a right-handed component. We took into account that $(\gamma\epsilon)^2$ is much smaller than one.
+
+Since massless neutrinos are gauge independent, we cannot measure the value of $\gamma$. For the small-mass case, we can determine this value from the measured values of $m/p$ and the density of right-handed neutrinos.
+
+## 6. Scalars, Vectors, and Tensors
+
+We are quite familiar with the process of constructing three spin-1 states and one spin-0 state from two spinors. Since each spinor has two states, there are four states when the two spinors are combined.
+
+In the Lorentz-covariant world, for each spin-1/2 particle, there are two additional two-component spinors coming from the dotted representation [12,21–23]. There are thus four states. If two spinors are combined, there are 16 states. In this section, we show that they can be partitioned into
+
+1. scalar with one state,
+
+2. pseudo-scalar with one state,
+
+3. four-vector with four states,
+
+4. axial vector with four states,
+
+5. second-rank tensor with six states.
+
+These quantities contain sixteen states. We made an attempt to construct these quantities in our earlier publication [5], but this earlier version is not complete. There, we did not take into account the parity operation properly. We thus propose to complete the job in this section.
+
+For particles at rest, it is known that the addition of two spin-1/2 states results in spin-zero and spin-one states. In the Lorentz-covariant world, we have two different spinors behaving differently under the Lorentz boost. Around the z direction, both spinors are transformed by:
+
+$$Z(\phi) = \exp(-i\phi J_3) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}. \qquad (74)$$
+
+However, they are boosted by:
+
+$$B(\eta) = \exp(-i\eta K_3) = \begin{pmatrix} e^{\eta/2} & 0 \\ 0 & e^{-\eta/2} \end{pmatrix},$$
+
+$$\dot{B}(\eta) = \exp(i\eta K_3) = \begin{pmatrix} e^{-\eta/2} & 0 \\ 0 & e^{\eta/2} \end{pmatrix}, \qquad (75)$$
+
+which are applicable to the undotted and dotted spinors, respectively. These two matrices commute with each other and also with the rotation matrix $Z(\phi)$ of Equation (74). Since $K_3$ and $J_3$ commute with each other, we can work with the matrix $Q(\eta, \phi)$ defined as:
+
+$$Q(\eta, \phi) = B(\eta)Z(\phi) = \begin{pmatrix} e^{(\eta-i\phi)/2} & 0 \\ 0 & e^{-(\eta-i\phi)/2} \end{pmatrix},$$
+
+$$\dot{Q}(\eta, \phi) = \dot{B}(\eta)\dot{Z}(\phi) = \begin{pmatrix} e^{-(\eta+i\phi)/2} & 0 \\ 0 & e^{(\eta+i\phi)/2} \end{pmatrix}. \qquad (76)$$
+
+When this combined matrix is applied to the spinors,
+
+$$Q(\eta, \phi)u = e^{(\eta-i\phi)/2}u, \quad Q(\eta, \phi)v = e^{-(\eta-i\phi)/2}v,$$
+
+$$\dot{Q}(\eta, \phi)\dot{u} = e^{-(\eta+i\phi)/2}\dot{u}, \quad \dot{Q}(\eta, \phi)\dot{v} = e^{(\eta+i\phi)/2}\dot{v}. \qquad (77)$$
+---PAGE_BREAK---
+
+If the particle is at rest, we can explicitly construct the combinations:
+
+$$uu, \quad \frac{1}{\sqrt{2}}(uv + vu), \quad vv, \tag{78}$$
+
+to obtain the spin-1 state and:
+
+$$\frac{1}{\sqrt{2}}(uv - vu), \tag{79}$$
+
+for the spin-zero state. This results in four bilinear states. In the $SL(2, c)$ regime, there are two dotted spinors, which result in four more bilinear states. If we include both dotted and undotted spinors, there are sixteen independent bilinear combinations. They are given in Table 4. This table also gives the effect of the operation of $Q(\eta, \phi)$.
+
+**Table 4.** Sixteen combinations of the $SL(2, c)$ spinors. In the $SU(2)$ regime, there are two spinors leading to four bilinear forms. In the $SL(2, c)$ world, there are two undotted and two dotted spinors. These four-spinors lead to sixteen independent bilinear combinations.
+
+| Spin 1 | Spin 0 |
+|---|---|
+| $uu$, $\frac{1}{\sqrt{2}}(uv + vu)$, $vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
+| $\dot{u}\dot{u}$, $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u})$, $\dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
+| $u\dot{u}$, $\frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u})$, $v\dot{v}$ | $\frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u})$ |
+| $\dot{u}u$, $\frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u)$, $\dot{v}v$ | $\frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u)$ |
+
+After the operation of $Q(\eta, \phi)$ and $\dot{Q}(\eta, \phi)$:
+
+| Spin 1 | Spin 0 |
+|---|---|
+| $e^{-i\phi}e^{\eta}uu$, $\frac{1}{\sqrt{2}}(uv + vu)$, $e^{i\phi}e^{-\eta}vv$ | $\frac{1}{\sqrt{2}}(uv - vu)$ |
+| $e^{-i\phi}e^{-\eta}\dot{u}\dot{u}$, $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} + \dot{v}\dot{u})$, $e^{i\phi}e^{\eta}\dot{v}\dot{v}$ | $\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u})$ |
+| $e^{-i\phi}u\dot{u}$, $\frac{1}{\sqrt{2}}(e^{\eta}u\dot{v} + e^{-\eta}v\dot{u})$, $e^{i\phi}v\dot{v}$ | $\frac{1}{\sqrt{2}}(e^{\eta}u\dot{v} - e^{-\eta}v\dot{u})$ |
+| $e^{-i\phi}\dot{u}u$, $\frac{1}{\sqrt{2}}(e^{-\eta}\dot{u}v + e^{\eta}\dot{v}u)$, $e^{i\phi}\dot{v}v$ | $\frac{1}{\sqrt{2}}(e^{-\eta}\dot{u}v - e^{\eta}\dot{v}u)$ |
+
+Among the bilinear combinations given in Table 4, the following two equations are invariant under rotations and also under boosts:
+
+$$S = \frac{1}{\sqrt{2}}(uv - vu), \quad \text{and} \quad \dot{S} = -\frac{1}{\sqrt{2}}(\dot{u}\dot{v} - \dot{v}\dot{u}). \tag{80}$$
+
+They are thus scalars in the Lorentz-covariant world. Are they the same or different? Let us consider the following combinations:
+
+$$S_+ = \frac{1}{\sqrt{2}}(S + \dot{S}), \quad \text{and} \quad S_- = \frac{1}{\sqrt{2}}(S - \dot{S}). \tag{81}$$
+
+Under the dot conjugation, $S_+$ remains invariant, but $S_-$ changes sign. Under this conjugation, the boost is performed in the opposite direction; it therefore corresponds to the operation of space inversion. Thus, $S_+$ is a scalar, while $S_-$ is a pseudo-scalar.
+
+## 6.1. Four-Vectors
+
+Let us go back to Equation (78) and make a dot-conjugation on one of the spinors.
+
+$$u\dot{u}, \quad \frac{1}{\sqrt{2}}(u\dot{v} + v\dot{u}), \quad v\dot{v}, \quad \frac{1}{\sqrt{2}}(u\dot{v} - v\dot{u}),$$
+
+$$\dot{u}u, \quad \frac{1}{\sqrt{2}}(\dot{u}v + \dot{v}u), \quad \dot{v}v, \quad \frac{1}{\sqrt{2}}(\dot{u}v - \dot{v}u). \tag{82}$$
+---PAGE_BREAK---
+
+We can make symmetric combinations under dot conjugation, which lead to:
+
+$$
+\frac{1}{\sqrt{2}} (u\dot{u} + \dot{u}u), \quad \frac{1}{2} [(u\dot{v} + v\dot{u}) + (\dot{u}v + \dot{v}u)], \quad \frac{1}{\sqrt{2}} (v\dot{v} + \dot{v}v), \quad \text{for spin 1},
+$$
+
+$$
+\frac{1}{2}[(u\dot{v}-v\dot{u})+(\dot{u}v-\dot{v}u)], \quad \text{for spin 0,} \tag{83}
+$$
+
+and anti-symmetric combinations, which lead to:
+
+$$
+\frac{1}{\sqrt{2}}(u\dot{u} - \dot{u}u), \quad \frac{1}{2}[(u\dot{v} + v\dot{u}) - (\dot{u}v + \dot{v}u)], \quad \frac{1}{\sqrt{2}}(v\dot{v} - \dot{v}v), \quad \text{for spin 1,}
+$$
+
+$$
+\frac{1}{2}[(u\dot{v} - v\dot{u}) - (\dot{u}v - \dot{v}u)], \quad \text{for spin } 0. \qquad (84)
+$$
+
+Let us rewrite the expression for the space-time four-vector given in Equation (7) as:
+
+$$
+\begin{pmatrix} t+z & x-iy \\ x+iy & t-z \end{pmatrix}, \tag{85}
+$$
+
+which, under the parity operation, becomes
+
+$$
+\begin{pmatrix}
+t-z & -x+iy \\
+-x-iy & t+z
+\end{pmatrix}.
+\qquad
+(86)
+$$
+
+If the expression of Equation (85) is for an axial vector, the parity operation leads to:
+
+$$
+\begin{pmatrix} -t+z & x-iy \\ x+iy & -t-z \end{pmatrix}, \qquad (87)
+$$
+
+where only the sign of *t* is changed. The off-diagonal elements remain invariant, while the diagonal elements are interchanged with sign changes.
+
+We note here that the parity operation corresponds to dot conjugation. Then, from the expressions given in Equations (83) and (84), it is possible to construct the four-vector as:
+
+$$
+V = \begin{pmatrix} u\dot{v} - v\dot{u} & v\dot{v} - \dot{v}v \\ u\dot{u} - \dot{u}u & \dot{u}v - \dot{v}u \end{pmatrix}, \qquad (88)
+$$
+
+where the off-diagonal elements change their signs under the dot conjugation, while the diagonal elements are interchanged.
+
+The axial vector can be written as:
+
+$$
+A = \begin{pmatrix} u\dot{v} + v\dot{u} & v\dot{v} + \dot{v}v \\ u\dot{u} + \dot{u}u & -(\dot{u}v + \dot{v}u) \end{pmatrix}. \qquad (89)
+$$
+
+Here, the off-diagonal elements do not change their signs under dot conjugation, and the diagonal elements become interchanged with a sign change. This matrix thus represents an axial vector.
+
+## 6.2. Second-Rank Tensor
+
+There are also bilinear spinors which are both dotted or both undotted. We are interested in two sets of three quantities satisfying the $O(3)$ symmetry. They should therefore transform like:
+
+$$
+(x + iy)/\sqrt{2}, \quad (x - iy)/\sqrt{2}, \quad z, \tag{90}
+$$
+---PAGE_BREAK---
+
+which are like:
+
+$$uu, \quad vv, \quad (uv + vu) / \sqrt{2}, \tag{91}$$
+
+respectively, in the $O(3)$ regime. Since the dot conjugation is the parity operation, they are like:
+
+$$-\dot{u}\dot{u}, \quad -\dot{v}\dot{v}, \quad -(\dot{u}\dot{v} + \dot{v}\dot{u})/\sqrt{2}. \tag{92}$$
+
+In other words,
+
+$$(uu) = -\dot{u}\dot{u}, \quad \text{and} \quad (vv) = -\dot{v}\dot{v}. \tag{93}$$
+
+We noticed a similar sign change in Equation (86).
+
+In order to construct the z component in this $O(3)$ space, let us first consider:
+
+$$f_z = \frac{1}{2} [(uv + vu) - (\dot{u}\dot{v} + \dot{v}\dot{u})], \qquad g_z = \frac{1}{2i} [(uv + vu) + (\dot{u}\dot{v} + \dot{v}\dot{u})]. \tag{94}$$
+
+Here, $f_z$ and $g_z$ are respectively symmetric and anti-symmetric under the dot conjugation or the parity operation. These quantities are invariant under the boost along the z direction. They are also invariant under rotations around this axis, but they are not invariant under boosts along or rotations around the x or y axis. They are different from the scalars given in Equation (80).
+
+Next, in order to construct the x and y components, we start with $f_{\pm}$ and $g_{\pm}$ as:
+
+$$f_+ = \frac{1}{\sqrt{2}}(uu - \dot{u}\dot{u}), \quad f_- = \frac{1}{\sqrt{2}}(vv - \dot{v}\dot{v}),$$
+
+$$g_+ = \frac{1}{\sqrt{2i}}(uu + \dot{u}\dot{u}), \quad g_- = \frac{1}{\sqrt{2i}}(vv + \dot{v}\dot{v}). \tag{95}$$
+
+Then:
+
+$$f_x = \frac{1}{\sqrt{2}}(f_+ + f_-) = \frac{1}{2}[(uu + vv) - (\dot{u}\dot{u} + \dot{v}\dot{v})],$$
+
+$$f_y = \frac{1}{\sqrt{2i}}(f_+ - f_-) = \frac{1}{2i}[(uu - vv) - (\dot{u}\dot{u} - \dot{v}\dot{v})], \tag{96}$$
+
+and:
+
+$$g_x = \frac{1}{\sqrt{2}}(g_+ + g_-) = \frac{1}{2}[(uu + vv) + (\dot{u}\dot{u} + \dot{v}\dot{v})],$$
+
+$$g_y = \frac{1}{\sqrt{2i}}(g_+ - g_-) = \frac{1}{2i}[(uu - vv) + (\dot{u}\dot{u} - \dot{v}\dot{v})]. \tag{97}$$
+
+Here, $f_x$ and $f_y$ are symmetric under dot conjugation, while $g_x$ and $g_y$ are anti-symmetric.
+
+Furthermore, $f_z$, $f_x$ and $f_y$ of Equations (94) and (96) transform like a three-dimensional vector. The same can be said for $g_i$ of Equations (94) and (97). Thus, they can be grouped into the second-rank tensor:
+
+$$\begin{pmatrix}
+0 & -f_z & -f_x & -f_y \\
+f_z & 0 & -g_y & g_x \\
+f_x & g_y & 0 & -g_z \\
+f_y & -g_x & g_z & 0
+\end{pmatrix}, \tag{98}$$
+
+whose Lorentz-transformation properties are well known. The $g_i$ components change their signs under space inversion, while the $f_i$ components remain invariant. They are like the electric and magnetic fields, respectively.
+---PAGE_BREAK---
+
+If the system is Lorentz-boosted, $f_i$ and $g_i$ can be computed from Table 4. We are now interested in the symmetry of photons, obtained by taking the massless limit. Thus, we keep only the terms that become dominant for large values of $\eta$:
+
+$$
+\begin{aligned}
+f_x & \rightarrow \frac{1}{2} (uu - \dot{v}\dot{v}), && f_y \rightarrow \frac{1}{2i} (uu + \dot{v}\dot{v}), \\
+g_x & \rightarrow \frac{1}{2i} (uu + \dot{v}\dot{v}), && g_y \rightarrow -\frac{1}{2} (uu - \dot{v}\dot{v}),
+\end{aligned}
+\quad (99) $$
+
+in the massless limit.
+
+Then, the tensor of Equation (98) becomes:
+
+$$ \begin{pmatrix} 0 & 0 & -E_x & -E_y \\ 0 & 0 & -B_y & B_x \\ E_x & B_y & 0 & 0 \\ E_y & -B_x & 0 & 0 \end{pmatrix}, \qquad (100) $$
+
+with:
+
+$$
+\begin{aligned}
+E_x &\approx \frac{1}{2}(uu - \dot{v}\dot{v}), && E_y \approx \frac{1}{2i}(uu + \dot{v}\dot{v}), \\
+B_x &\approx \frac{1}{2i}(uu + \dot{v}\dot{v}), && B_y \approx -\frac{1}{2}(uu - \dot{v}\dot{v}).
+\end{aligned}
+\quad (101) $$
+
+The electric and magnetic field components are perpendicular to each other. Furthermore,
+
+$$ B_x = E_y, \quad B_y = -E_x. \quad (102) $$
+
+In order to address the symmetry of photons, let us go back to Equation (95). In the massless limit,
+
+$$ B_+ \approx E_+ \approx uu, \quad B_- \approx E_- \approx \dot{v}\dot{v}. \quad (103) $$
+
+The gauge transformations applicable to $u$ and $\dot{v}$ are the two-by-two matrices:
+
+$$ \begin{pmatrix} 1 & -\gamma \\ 0 & 1 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}, \qquad (104) $$
+
+respectively. Both $u$ and $\dot{v}$ are invariant under gauge transformations, while $v$ and $\dot{u}$ are not.
+
+The $B_+$ and $E_+$ are for the photon spin along the z direction, while $B_-$ and $E_-$ are for the opposite direction.
+
+### 6.3. Higher Spins
+
+Since Wigner's original book of 1931 [24,25], the rotation group, without Lorentz transformations, has been extensively discussed in the literature [22,26,27]. One of the main issues was how to construct the most general spin state from the two-component spinors for the spin-1/2 particle.
+
+Since there are two states for the spin-1/2 particle, four states can be constructed from two spinors, leading to one state for the spin-0 state and three spin-1 states. With three spinors, it is possible to construct four spin-3/2 states and two spin-1/2 states, resulting in six states. This partition process is much more complicated [28,29] for the case of three spinors. Yet, this partition process is possible for all higher spin states.
+
+In the Lorentz-covariant world, there are four states for each spin-1/2 particle. With two spinors, we end up with sixteen (4 × 4) states, and they are tabulated in Table 4. There should be 64 states for
+---PAGE_BREAK---
+
+three spinors and 256 states for four spinors. We now know how to Lorentz-boost those spinors. We also know that the transverse rotations become gauge transformations in the limit of zero-mass or infinite-$\eta$. It is thus possible to bundle all of them into the table given in Figure 5.
+
+**Figure 5.** Unified picture of massive and massless particles. The gauge transformation is a Lorentz-boosted rotation matrix and is applicable to all massless particles. It is possible to construct higher-spin states starting from the four states of the spin-1/2 particle in the Lorentz-covariant world.
+
+In the relativistic regime, we are interested in photons and gravitons. As was noted in Sections 6.1 and 6.2, the observable components are invariant under gauge transformations. They are also the terms that become largest for large values of $\eta$.
+
+We have seen in Section 6.2 that the photon state consists of $uu$ and $\dot{v}\dot{v}$ for the spins parallel and anti-parallel to the momentum, respectively. Thus, for spin-2 gravitons, the states must be $uuuu$ and $\dot{v}\dot{v}\dot{v}\dot{v}$, respectively.
+
+In his effort to understand photons and gravitons, Weinberg constructed his states for massless particles [30], especially photons and gravitons [31]. He started with the conditions:
+
+$$N_1|\text{state}\rangle = 0, \quad \text{and} \quad N_2|\text{state}\rangle = 0, \qquad (105)$$
+
+where $N_1$ and $N_2$ are defined in Equation (17). Since they are now known as the generators of gauge transformations, Weinberg's states are gauge-invariant states. Thus, $uu$ and $\dot{v}\dot{v}$ are Weinberg's states for photons, and $uuuu$ and $\dot{v}\dot{v}\dot{v}\dot{v}$ are Weinberg's states for gravitons.
+
+## 7. Concluding Remarks
+
+Since the publication of Wigner's original paper [1], there have been many papers written on the subject. The issue is how to construct subgroups of the Lorentz group whose transformations do not change the momentum of a given particle. The traditional approach to this problem has been to work with a fixed mass, which remains invariant under Lorentz transformation.
+
+In this paper, we have presented a different approach. Since we are interested in transformations that leave the momentum invariant, we do not change the momentum throughout the mathematical processes. Figure 3 illustrates the difference. In our approach, we fix the momentum and allow transitions from one hyperbola to another analytically with one transformation matrix. It is an interesting future problem to see what larger group can accommodate this process.
+
+Since the purpose of this paper is to provide a simpler mathematics for understanding the physics of Wigner's little groups, we used the two-by-two $SL(2,c)$ representation, instead of four-by-four matrices, for the Lorentz group throughout the paper. During this process, it was noted in Section 5 that the Dirac equation is a representation of Wigner's little group.
+
+We also discussed how to construct higher-spin states starting from four-component spinors for the spin-1/2 particle. We studied how the spins can be added in the Lorentz-covariant world, as illustrated in Figure 5.
+
+**Author Contributions:** Each of the authors participated in developing the material presented in this paper and in writing the manuscript.
+---PAGE_BREAK---
+
+**Conflicts of Interest:** The authors declare no conflict of interest.
+
+References
+
+1. Wigner, E. On unitary representations of the inhomogeneous Lorentz group. *Ann. Math.* **1939**, *40*, 149–204.
+
+2. Han, D.; Kim, Y.S.; Son, D. Gauge transformations as Lorentz-boosted rotations. *Phys. Lett. B* **1983**, *131*, 327–329.
+
+3. Kim, Y.S.; Wigner, E.P. Cylindrical group and massless particles. *J. Math. Phys.* **1987**, *28*, 1175–1179.
+
+4. Kim, Y.S.; Wigner, E.P. Space-time geometry of relativistic-particles. *J. Math. Phys.* **1990**, *31*, 55–60.
+
+5. Başkal, S.; Kim, Y.S.; Noz, M.E. *Physics of the Lorentz Group*, IOP Concise Physics; Morgan & Claypool Publishers: San Rafael, CA, USA, 2015.
+
+6. Kupersztych, J. Is there a link between gauge invariance, relativistic invariance and Electron Spin? *Nuovo Cimento* **1976**, *31B*, 1–11.
+
+7. Han, D.; Kim, Y.S. Little group for photons and gauge transformations. *Am. J. Phys.* **1981**, *49*, 348–351.
+
+8. Han, D.; Kim, Y.S. Special relativity and interferometers. *Phys. Rev. A* **1988**, *37*, 4494–4496.
+
+9. Dirac, P.A.M. Applications of quaternions to Lorentz transformations. *Proc. R. Irish Acad.* **1945**, *A50*, 261–270.
+
+10. Bargmann, V. Irreducible unitary representations of the Lorentz group. *Ann. Math.* **1947**, *48*, 568–640.
+
+11. Naimark, M.A. *Linear Representations of the Lorentz Group*; Pergamon Press: Oxford, UK, 1954.
+
+12. Kim, Y.S.; Noz, M.E. *Theory and Applications of the Poincaré Group*; Reidel: Dordrecht, The Netherlands, 1986.
+
+13. Başkal, S.; Kim, Y.S.; Noz, M.E. Wigner's space-time symmetries based on the two-by-two matrices of the damped harmonic oscillators and the Poincaré sphere. *Symmetry* **2014**, *6*, 473–515.
+
+14. Han, D.; Kim, Y.S.; Son, D. Eulerian parametrization of Wigner little groups and gauge transformations in terms of rotations in 2-component spinors. *J. Math. Phys.* **1986**, *27*, 2228–2235.
+
+15. Wigner, E.P. Normal form of antiunitary operators. *J. Math. Phys.* **1960**, *1*, 409–413.
+
+16. Wigner, E.P. Phenomenological distinction between unitary and antiunitary symmetry operators. *J. Math. Phys.* **1960**, *1*, 413–416.
+
+17. Han, D.; Kim, Y.S.; Son, D. E(2)-like little group for massless particles and polarization of neutrinos. *Phys. Rev. D* **1982**, *26*, 3717–3725.
+
+18. Han, D.; Kim, Y.S.; Son, D. Photons, neutrinos, and gauge transformations. *Am. J. Phys.* **1986**, *54*, 818–821.
+
+19. Mohapatra, R.N.; Smirnov, A.Y. Neutrino mass and new physics. *Ann. Rev. Nucl. Part. Sci.* **2006**, *56*, 569–628.
+
+20. Kim, Y.S.; Maguire, G.Q., Jr.; Noz, M.E. Do small-mass neutrinos participate in gauge transformations? *Adv. High Energy Phys.* **2016**, 2016, 1847620, doi:10.1155/2016/1847620.
+
+21. Berestetskii, V.B.; Pitaevskii, L.P.; Lifshitz, E.M. *Quantum Electrodynamics*, Volume 4 of the Course of Theoretical Physics, 2nd ed.; Pergamon Press: Oxford, UK, 1982.
+
+22. Gel'fand, I.M.; Minlos, R.A.; Shapiro, A. *Representations of the Rotation and Lorentz Groups and their Applications*; MacMillan: New York, NY, USA, 1963.
+
+23. Weinberg, S. Feynman rules for any spin. *Phys. Rev.* **1964**, *133*, B1318-B1332.
+
+24. Wigner, E. *Gruppentheorie und ihre Anwendungen auf die Quantenmechanik der Atomspektren*; Friedrich Vieweg und Sohn: Braunsweig, Germany, 1931. (In German)
+
+25. Wigner, E.P. *Group Theory and Its Applications to the Quantum Mechanics of Atomic Spectra*, Translated from the German; Griffin, J.J., Ed.; Academic Press: New York, NY, USA, 1959.
+
+26. Condon, E.U.; Shortley, G.H. *The Theory of Atomic Spectra*; Cambridge University Press: London, UK, 1951.
+
+27. Hamermesh, M. *Group Theory and Application to Physical Problems*; Addison-Wesley: Reading, MA, USA, 1962.
+
+28. Feynman, R.P.; Kislinger, M.; Ravndal, F. Current matrix elements from a relativistic quark model. *Phys. Rev. D* **1971**, *3*, 2706–2732.
+
+29. Hussar, P.E.; Kim, Y.S.; Noz, M.E. Three-particle symmetry classifications according to the method of Dirac. *Am. J. Phys.* **1980**, *48*, 1038–1042.
+
+30. Weinberg, S. Feynman rules for any spin II. massless particles. *Phys. Rev.* **1964**, *134*, B882-B896.
+
+31. Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass. *Phys. Rev.* **1964**, *135*, B1049-B1056.
+
+© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
+---PAGE_BREAK---
\ No newline at end of file
diff --git a/samples/texts_merged/6026555.md b/samples/texts_merged/6026555.md
new file mode 100644
index 0000000000000000000000000000000000000000..edc588b73deca44abee75ce162eaa0a4ab0518b2
--- /dev/null
+++ b/samples/texts_merged/6026555.md
@@ -0,0 +1,180 @@
+
+---PAGE_BREAK---
+
+# Thermodynamics of Efflux Process of Liquids and Gases
+
+E. A. Mikaelian¹, Saif A. Mouhammad²*
+
+¹Gubkin Russian State University of Oil and Gas, Moscow, Russia
+
+²Physics Department, Faculty of Science, Taif University, Taif, Kingdom of Saudi Arabia
+
+Email: saifnet70@hotmail.com
+
+Received 29 March 2015; accepted 11 May 2015; published 14 May 2015
+
+Copyright © 2015 by authors and Scientific Research Publishing Inc.
+This work is licensed under the Creative Commons Attribution International License (CC BY).
+http://creativecommons.org/licenses/by/4.0/
+
+Open Access
+
+## Abstract
+
+The main objective of this work is to obtain calculated relations for the efflux processes of liquids, vapors and gases on the basis of a developed mathematical model. These relations make it possible to determine the characteristics of the channel profiles of nozzles and diffusers and to solve a number of subsequent applied problems of mode analysis. The calculated relations are based on the equations of the first law of thermodynamics for the flow of liquids and gases. They are extended to the case of the efflux of compressible liquids, vapors and gases and, as a special case, to incompressible liquids. The characteristics of the critical efflux regime of liquids are obtained, which make it possible to determine the linear and mass efflux rates of the critical regime and the calculated characteristics of the channel profiles of nozzles, diffusers and Laval nozzles for different modes of operation.
+
+## Keywords
+
+Thermodynamics, Efflux, Compressible, Incompressible, Liquids, Diffusers, Nozzles
+
+## 1. Introduction
+
+The efflux processes are quite common in various technological processes performed with power technology equipment in the gas and oil industry: in heat engines, pumps, compressor machines, mass-and-heat exchange units, pipelines, and in separate elements of machines and devices such as nozzles, diffusers, convergent nozzles, mud guns, fittings, locking devices, gate valves, valves, various calibration holes, etc. It is worth emphasising the special role of studying the processes of gas and liquid efflux through various kinds of leaks and gaps [1] [2].
+
+The efflux process can be considered as a special case of the occurrence and distribution of potential work. Effective work in the efflux process is distributed between the work transmitted directly to the bodies of the external system (in our case of the efflux process this work is absent: $\delta W_{ez}^* = 0$) and the change in the energy of the external position of the working medium itself ($de_{ez}$). The latter, in turn, consists of the kinetic energy $d(c^2/2)$ and the potential energy ($gdz$).
+
+*Corresponding author.
+---PAGE_BREAK---
+
+Thus, the initial equation of the theoretical efflux process has the following form:
+
+$$ \delta W = -VdP = d(c^2/2) + gdz. \quad (1) $$
+
+Switching to real efflux processes is then carried out by introducing correction factors: the velocity factor ($\varphi$) and the flow-rate factor ($\varphi_*$). The integral of the initial efflux equation for the potential flow work from the initial section 1 to the final section 2 of a flow has the following form:
+
+$$ W_{12} = c_2^2/2 - c_1^2/2 + g(z_2 - z_1); \quad (2) $$
+
+$$ W_{12} = \left[ 1 - \left( \frac{P_2}{P_1} \right)^{(n-1)/n} \right] \frac{P_1 V_1 n}{n-1}. \quad (3) $$
+
+The rate of gas efflux in the initial section can be regarded as the result of an efflux from a conditional initial state 0-0 with zero velocity $c_0 = 0$, graded level $z_0 = z_2$ and pressure $P_0$.
+
+Then the calculated expression for the potential work and linear velocity of the efflux of the final section of the flow is determined by the following equations:
+
+$$ W_{02} = W_{12} + W_{01} = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \frac{P_0 V_0 n}{n-1}; \quad (4) $$
+
+$$ c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}. \quad (5) $$
+
+The theoretical efflux process is regarded as adiabatic; then, based on the first law of thermodynamics for a flow, the potential flow work is determined as the specific heat drop of the flow, equal to the difference of its heat contents (enthalpies) [3] [4]:
+
+$$ W_{12} = h_1 - h_2; \quad q_{12} = 0. \quad (6) $$
+
+Further the mass rate of the efflux is entered in calculations:
+
+$$ u = G/f = V\rho/f = \rho c, \quad (7) $$
+
+where *f* is the cross-section of the flow under consideration; *G* and *V* are the mass and volumetric flow rates; *ρ* is the liquid density; *c* is the linear velocity of the liquid in the direction of motion (the average velocity over the section *f* in the direction of the normal to this section).
+
+The concept of the mass flow rate is the most essential in this research. The concept of linear velocity characterises only the kinetic energy of the flow; the averaging of such a velocity depends on the flow mode (laminar, transitional, turbulent) and is not identical to the mass flow rate.
+
+The calculated expression for the theoretical mass flow rate at the outlet section is obtained from the last equation, the expressions (4) and (5) for the linear velocity, and the equation of the efflux process:
+
+$$ u_2 = \left\{ \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}} \frac{2n}{n-1} \frac{P_0}{V_0} \right\}^{0.5}. \quad (8) $$
+
+For the transition to the real characteristics of a flow we introduce correction factors into the calculations:
+
+$$ c = \varphi c_2; \quad u = \varphi_* u_2; \quad G = uf = \varphi_* u_2 f = \varphi_* c_2 \rho_2 f. \quad (9) $$
+
+In formula (9) the velocity and flow-rate factors are defined as the ratios of the actual to the theoretical values:
+
+$$ \varphi = c/c_2 = V/(fc_2); \quad \varphi_* = u/u_2 = G/(fu_2). \quad (10) $$
+
+The work of irreversible energy losses associated with the real efflux process:
+
+$$ W^{**} = (c_2^2 - c^2)/2 = (1 - \varphi^2)c_2^2/2 = \xi c_2^2/2, \quad (11) $$
+---PAGE_BREAK---
+
+where $\xi$—the factor of energy losses in the real process.
+
+To calculate the velocity and flow-rate factors, as follows from formulas (10), it is necessary to arrange mass (volume) measurements of the liquid flow rates [5] [6].
+
+## 2. Efflux of Incompressible Liquids
+
+The initial condition ($\rho_1 = \rho_2 = \rho = 1/V = \text{idem}$):
+
+$$W_{12} = (P_1 - P_2)/\rho; W_{02} = (P_0 - P_2)/\rho. \quad (12)$$
+
+Further, using the initial general relations (5), (7), (9) and Equation (12), we obtain the calculated relations for the particular case of the efflux of incompressible liquids:
+
+$$c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5} = \left[ 2(P_0 - P_2)/\rho \right]^{0.5} \\ = \left[ 2(P_1 - P_2)/\rho + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}; \quad (13)$$
+
+$$u_2 = G/f = V\rho/f = \rho c_2 = \left[ 2(P_0 - P_2)\rho \right]^{0.5}; \quad (14)$$
+
+$$G = \varphi_* u_2 f. \quad (15)$$
+
+The obtained relations can also be applied to the efflux of compressible liquids (gases) under the condition of insignificant variation of the density. In this case the average density value, for example the arithmetic mean, should be introduced into Formulae (12)-(15):
+
+$$(\rho_1 + \rho_2)/2 = \rho_m$$
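As a numerical illustration of the incompressible-liquid relations (13)-(15), the sketch below evaluates the theoretical outlet velocity and mass flow rate; all numerical values (water density, pressures, cross-section, flow-rate factor) are illustrative assumptions, not data from this paper.

```python
import math

# Illustrative values (assumed, not from the paper)
rho = 1000.0       # water density, kg/m^3
P0 = 3.0e5         # stagnation pressure, Pa
P2 = 1.0e5         # outlet pressure, Pa
f = 1.0e-4         # outlet cross-section, m^2
phi_star = 0.97    # assumed flow-rate factor

# Eq. (13): theoretical outlet velocity (z-difference and c1 terms dropped)
c2 = math.sqrt(2.0 * (P0 - P2) / rho)

# Eq. (14): theoretical mass velocity; Eq. (15): real mass flow rate
u2 = rho * c2
G = phi_star * u2 * f

print(f"c2 = {c2:.2f} m/s, u2 = {u2:.1f} kg/(m^2 s), G = {G:.3f} kg/s")
```

For a 2 bar pressure drop in water this gives a theoretical outlet velocity of 20 m/s, which the flow-rate factor then reduces when computing the real mass flow rate.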
+
+## 3. Efflux of Compressible Liquids (Gases)
+
+The general solution of problems concerning the efflux of compressible liquids is obtained by a corresponding development of the previously obtained initial relationships.
+
+From a consideration of the original relation (8) it follows that the mass velocity becomes zero for the following values of the pressure ratio: 1) $P_2/P_0 = 1$, which takes place at the beginning of the efflux, where $c=0$ and $u=c\rho=0$ because of the velocity; 2) $P_2/P_0 = 0$, in the efflux into vacuum, where at the outlet section $\rho=0$ and $u=c\rho=0$ because of the density. Within this range the mass flow rate passes through a maximum (Rolle's theorem); that is, the variable factor of the radicand in (8) passes through a maximum:
+
+$$\Psi = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}}. \quad (16)$$
+
+Let us introduce the following designations:
+
+$$(P_2/P_0) = \tau^{n/(n-1)}; \quad (P_2/P_0)^{2/n} = \tau^{2/(n-1)}, \quad (17)$$
+
+Using Rolle's theorem to investigate this function for a maximum, we obtain the parameters of the critical efflux mode for compressible liquids:
+
+$$\tau_{cr} = (P_2/P_0)_{cr}^{(n-1)/n} = 2/(n+1), \quad (18)$$
+
+$$\beta = (P_2/P_0)_{cr} = \tau_{cr}^{n/(n-1)} = \left[ 2/(n+1) \right]^{n/(n-1)}, \quad (19)$$
+
+$$\Psi_{cr} = (1 - \tau_{cr}) \tau_{cr}^{2/(n-1)}. \quad (20)$$
+
+Depending on parameters of the critical efflux mode, the linear and mass efflux rates of the critical mode are determined:
+
+$$c_{cr} = \left[ n(PV)_{cr} \right]^{0.5}, \quad (21)$$
+---PAGE_BREAK---
+
+**Table 1. Characteristic values of the discharge critical mode.**
+
+| n | 1.1 | 1.2 | 1.3 | 1.4 | average |
+|---|---|---|---|---|---|
+| $\tau_{cr} = 2/(n+1)$ | 0.953 | 0.909 | 0.870 | 0.833 | |
+| $\beta = \tau_{cr}^{n/(n-1)}$ | 0.5847 | 0.5645 | 0.5457 | 0.5283 | ~0.55 |
+| $\Psi_{cr}$ | 1.9677 | 2.0309 | 2.0896 | 2.1443 | ~2.05 |
+
+$$u_{cr} = \left[ 2P_0 \rho_0 \Psi_{cr} n / (n-1) \right]^{0.5}. \quad (22)$$
+
+**Table 1** shows the values of the critical discharge characteristics depending on the index $n$ of the efflux process.
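Relations (18)-(20) are straightforward to evaluate numerically; the sketch below reproduces the $\tau_{cr}$ and $\beta$ rows of Table 1 (the $\Psi_{cr}$ row of the table appears to use a different scaling than formula (20) as printed here, so it is computed for inspection but not compared).

```python
def critical_params(n):
    """Critical-mode efflux parameters from Eqs. (18)-(20)."""
    tau = 2.0 / (n + 1.0)                          # Eq. (18)
    beta = tau ** (n / (n - 1.0))                  # Eq. (19)
    psi = (1.0 - tau) * tau ** (2.0 / (n - 1.0))   # Eq. (20)
    return tau, beta, psi

for n in (1.1, 1.2, 1.3, 1.4):
    tau, beta, psi = critical_params(n)
    print(f"n={n}: tau_cr={tau:.3f}, beta={beta:.4f}, psi_cr={psi:.4f}")
```

Evaluating (19) for $n = 1.1, \ldots, 1.4$ reproduces the tabulated $\beta$ values 0.5847, 0.5645, 0.5457, 0.5283 to four decimals.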
+
+## 4. Particular Cases of Efflux
+
+The ideal gas ($PV = RT$):
+
+$$c_{cr} = [nRT_{cr}]^{0.5}; \quad u_{cr} = P_0 \left[ \frac{2\Psi_{cr} n}{(n-1)RT_0} \right]^{0.5}, \quad (23)$$
+
+The incompressible liquids ($V$ = idem; $n = \infty$): $c_{cr} = \infty$.
+
+This means that the critical mode for incompressible fluids is unattainable. The critical linear velocity of the adiabatic efflux ($n = k$) is the velocity of sound:
+
+$$a^* = [k(PV)_{cr}]^{0.5},$$
+
+for the ideal gas:
+
+$$a^* = [kRT_{cr}]^{0.5}. \quad (24)$$
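As a quick numerical check of (24), the critical (sound) velocity can be evaluated for illustrative air-like values; the adiabatic index $k$, gas constant $R$ and temperature $T$ below are assumptions, not values from the paper.

```python
import math

k = 1.4      # adiabatic index of air (assumed)
R = 287.0    # specific gas constant of air, J/(kg K) (assumed)
T = 288.0    # temperature at the critical section, K (assumed)

a_star = math.sqrt(k * R * T)   # Eq. (24) with n = k
print(f"a* = {a_star:.1f} m/s")  # about 340 m/s, the familiar speed of sound in air
```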
+
+## 5. Conclusion
+
+On the basis of the energy conservation law, the equation of the occurrence and distribution of the potential work of an arbitrary thermodynamic system is obtained. Taking it as the basis of the theory of the efflux of gases and of compressible and incompressible liquids, the characteristic features of the critical mode of the liquid efflux are derived. The obtained relations further determine the calculated characteristics of the channel profiles of nozzles, diffusers and Laval nozzles for a range of operating modes.
+
+## References
+
+[1] Mikaelian, E.A. (2000) Maintenance Energotechnological Equipment, Gas Turbine Gas Compressor Units of Gas Gathering and Transportation. Methodology, Research, Analysis and Practice, Fuel and Energy, Moscow, 304.
+http://www.dobi.oglib.ru/bgl/5076.html
+
+[2] Mikaelian, E.A. (2001) Improving the Quality, to Ensure Reliability and Safety of the Main Pipelines. In: Margulov, G.D., Ed., Series: Sustainable Energy and Society, Fuel and Energy, Moscow, 640.
+http://www.dobi.oglib.ru/bgl/4625.html
+
+[3] Vladimirov, A.I. and Kershenbaum, Y.V. (2008) Industrial Safety Compressor Stations. Management of Safety and Reliability. Inter-Sector Foundation “National Institute of Oil and Gas”, Moscow, 640.
+http://www.mdk-arbat.ru/bookcard?book_id=3304125
+
+[4] Mikaelian, E.A. (2008) Diagnosis Energotechnological Equipment GGPA Based on Various Diagnostic Features. Gas Industry, **4**, 59-63.
+
+[5] Mikaelian, E.A. (2014) Determination of the Characteristic Features and Technical Condition of the Gas-Turbine and Gas-Compressor Units of Compressor Stations Based on a Simplified Thermodynamic Model. Quality Management in Oil and Gas Industry, **1**, 44-48.
+http://instoilgas.ru/ukang
+
+[6] Mikaelian, E.A. and Mouhammed, S.A. (2014) Survey Equipment Gas Transmission Systems. Quality Management in Oil and Gas Industry, **4**, 29-36.
+http://instoilgas.ru/ukang
\ No newline at end of file
diff --git a/samples/texts_merged/6080891.md b/samples/texts_merged/6080891.md
new file mode 100644
index 0000000000000000000000000000000000000000..6b29ead8802f4ca0abec08587dfaaab92de82cbe
--- /dev/null
+++ b/samples/texts_merged/6080891.md
@@ -0,0 +1,760 @@
+
+---PAGE_BREAK---
+
+# A Hankel matrix acting on Hardy and Bergman spaces
+
+by
+
+PETROS GALANOPOULOS and JOSÉ ÁNGEL PELÁEZ (Málaga)
+
+**Abstract.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$. Let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k} d\mu(t)$. The matrix $\mathcal{H}_\mu$ induces formally an operator on the space of all analytic functions in the unit disc by the formula
+
+$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}, $$
+
+where $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is an analytic function in $\mathbb{D}$.
+
+We characterize those positive Borel measures on $[0,1)$ such that $\mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t)$ for all $f$ in the Hardy space $H^1$, and among them we describe those for which $\mathcal{H}_\mu$ is bounded and compact on $H^1$. We also study the analogous problem for the Bergman space $A^2$.
+
+**1. Introduction.** We denote by $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$ the unit disc and by $\mathbb{T}$ the unit circle. Let $\mathcal{Hol}(\mathbb{D})$ be the space of analytic functions in $\mathbb{D}$ and let $H^p(0 < p \le \infty)$ be the classical Hardy space of analytic functions in $\mathbb{D}$ (see [D]).
+
+If $0 < p < \infty$ the Bergman space $A^p$ is the set of all $f \in \mathcal{Hol}(\mathbb{D})$ such that
+
+$$ \|f\|_{A^p} := \left( \int_{\mathbb{D}} |f(z)|^p \, dA(z) \right)^{1/p} < \infty, $$
+
+where $dA(z) = \pi^{-1}dx dy$ is the normalized Lebesgue area measure on $\mathbb{D}$.
+
+For the theory of these spaces we refer to [DS] and [Zh].
+
+Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k} d\mu(t)$. The matrix $\mathcal{H}_\mu$ induces formally an operator (which will also be denoted $\mathcal{H}_\mu$) on $\mathcal{Hol}(\mathbb{D})$ in the following sense. If $f(z) = \sum_{n \ge 0} a_n z^n \in \mathcal{Hol}(\mathbb{D})$, by multiplication of the
+
+2010 Mathematics Subject Classification: Primary 47B35; Secondary 30H10.
+Key words and phrases: Hankel matrices, Hardy spaces, Bergman spaces.
+---PAGE_BREAK---
+
+matrix with the sequence of Taylor coefficients of the function,
+
+$$ \{a_n\}_{n \ge 0} \mapsto \left\{ \sum_{k \ge 0} \mu_{n,k} a_k \right\}_{n \ge 0}, $$
+
+we can formally define
+
+$$ (1.1) \qquad \mathcal{H}_\mu(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}. $$
+
+If $\mu$ is the Lebesgue measure on the interval $[0,1]$ we get the classical Hilbert matrix $H = \{\frac{1}{n+k+1}\}_{n,k \ge 0}$. This matrix induces, in the same way as above, a bounded operator on $H^p$, $p \in (1, \infty)$ (see [DiS]), and on $A^p$, $p \in (2, \infty)$ (see [Di]); estimates on the norms have also been obtained. Recently, in [DJV], further progress has been achieved in this direction.
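The action (1.1) in the Lebesgue-measure (Hilbert-matrix) case can be explored numerically; a minimal sketch with an arbitrary truncation size and test polynomial, using exact rational arithmetic:

```python
from fractions import Fraction

N = 5  # truncation size (arbitrary choice)

# Entries mu_{n,k} = \int_0^1 t^{n+k} dt = 1/(n+k+1): the Hilbert matrix
H = [[Fraction(1, n + k + 1) for k in range(N)] for n in range(N)]

# Taylor coefficients of f(z) = 1 + z (a_k = 0 for k >= 2)
a = [Fraction(1), Fraction(1)] + [Fraction(0)] * (N - 2)

# Coefficients of H_mu(f) via (1.1): b_n = sum_k mu_{n,k} a_k
b = [sum(H[n][k] * a[k] for k in range(N)) for n in range(N)]
print(b[:3])  # b_0 = 1 + 1/2 = 3/2, b_1 = 1/2 + 1/3 = 5/6, ...
```

For this $f$ the coefficients of $\mathcal{H}_\mu(f)$ are $b_n = \frac{1}{n+1} + \frac{1}{n+2}$, matching the integral representation $\int_0^1 \frac{1+t}{1-tz}\,dt$ expanded at $z=0$.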
+
+In this paper we shall focus our attention on the limit cases $H^1$ and $A^2$, that is, we shall study the boundedness, compactness, and other related properties of $\mathcal{H}_\mu$ on these spaces in terms of $\mu$. Similar investigations have previously been conducted by several authors in different spaces of analytic functions in $\mathbb{D}$ (see e.g. [W], [Po]).
+
+The classical Hilbert matrix $\mathcal{H}$ is well defined but it is not bounded on $H^1$ (see [DiS]). It is known that the operator induced by the Hilbert matrix is not even well defined on $A^2$. Indeed, $f(z) = \sum_{n=1}^{\infty} \frac{1}{\log(n+1)}z^n \in A^2$ but $Hf(0) = \sum_{n=1}^{\infty} \frac{1}{(n+1)\log(n+1)} = \infty$ (see [DJV]). Thus, it is natural to study under which conditions on the measure $\mu$ the corresponding matrix $\mathcal{H}_\mu$ induces a well defined and bounded operator on $H^1$ and on $A^2$.
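The divergence of $Hf(0)$ for this $f$ is extremely slow (the partial sums grow roughly like $\log\log N$), which the quick check below illustrates; the cutoffs are arbitrary choices.

```python
import math

def partial_sum(N):
    """Partial sums of sum_{n>=1} 1/((n+1) log(n+1)), which diverge like log log N."""
    return sum(1.0 / ((n + 1) * math.log(n + 1)) for n in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    print(N, round(partial_sum(N), 3))
```

The partial sums keep growing without bound, but each extra factor of 100 in $N$ adds only about $\log\log 10^{2k+2} - \log\log 10^{2k}$ to the sum.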
+
+The structure of the paper is as follows. In Section 2 we deal with the case of the Hardy space $H^1$. Let $\mu$ be a positive Borel measure in $\mathbb{D}$. For $\alpha \ge 0$ and $s > 0$, we say that $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure, resp. a vanishing $\alpha$-logarithmic $s$-Carleson measure, if
+
+$$ \sup_{a \in \mathbb{D}} \frac{\mu(S(a)) (\log \frac{2}{1-|a|^2})^\alpha}{(1 - |a|^2)^s} < \infty, \quad \text{resp. } \lim_{|a| \to 1^{-}} \frac{\mu(S(a)) (\log \frac{2}{1-|a|^2})^\alpha}{(1 - |a|^2)^s} = 0. $$
+
+By $S(a)$ we denote the Carleson box with vertex at $a$, that is,
+
+$$ S(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 1 - |a|, \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}. $$
+
+The above definition is a generalization of the fundamental notion of *classical Carleson measure* introduced by Carleson (see [C]). These measures correspond to the case $\alpha = 0$ and $s = 1$.
+
+We shall prove that any classical Carleson measure induces a well defined operator on $H^1$; conversely, being a Carleson measure is necessary in the following sense.
+---PAGE_BREAK---
+
+**PROPOSITION 1.1.** Suppose that $\mu$ is a finite positive Borel measure on $[0, 1)$.
+
+(i) If $\mu$ is a classical Carleson measure then the power series $\mathcal{H}_\mu(f)(z)$ represents a function in $\text{Hol}(\mathbb{D})$ for any $f \in H^1$, and moreover
+
+$$ (1.2) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t), \quad f \in H^1. $$
+
+(ii) If the integral in (1.2) converges for each $z \in \mathbb{D}$ and $f \in H^1$, then $\mu$ is a classical Carleson measure.
+
+The hope that any classical Carleson measure $\mu$ induces a bounded operator $\mathcal{H}_\mu$ on $H^1$ is unjustified, because the Lebesgue measure does not. The next result describes the appropriate subclass of classical Carleson measures.
+
+**THEOREM 1.2.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$.
+
+(i) $\mathcal{H}_\mu : H^1 \to H^1$ is bounded if and only if $\mu$ is a 1-logarithmic 1-Carleson measure.
+
+(ii) $\mathcal{H}_\mu : H^1 \to H^1$ is compact if and only if $\mu$ is a vanishing 1-logarithmic 1-Carleson measure.
+
+In many papers (see [CS], [JPS], [T], [PV] and [Pe]), another approach to the study of Hankel operators on spaces of analytic functions is developed, using the symbol of the operator, which in our case is essentially the function
+
+$$ (1.3) \qquad h_\mu(z) = \sum_n \mu_n z^n, \quad \mu_n = \int_{[0,1)} t^n d\mu(t). $$
+
+A characterization of the boundedness and compactness of the operator $\mathcal{H}_\mu : H^1 \to H^1$ in terms of $h_\mu$ follows from [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]). We shall provide two proofs of Theorem 1.2, a first one based on the integral representation (1.2) and a second one which uses the last cited result.
+
+In the case of $H^2$, $\mathcal{H}_\mu$ is bounded if and only if $\mu$ is a classical Carleson measure (see [Pe]). Power, [Po, p. 428], proved that if $\int_{[0,1)} d\mu(t)/(1-t)^2 < \infty$, then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator, and raised the question of a necessary condition. The next result solves this problem.
+
+**THEOREM 1.3.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and suppose that the operator $\mathcal{H}_\mu$ is bounded on $H^2$. Then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if
+
+$$ (1.4) \qquad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} d\mu(t) < \infty. $$
+---PAGE_BREAK---
+
+In Section 3 we turn our attention to $A^2$. First we clarify for which measures the operator is well defined on this space and also gets an integral representation.
+
+**PROPOSITION 1.4.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$.
+
+(i) If $\mu$ satisfies (1.4) then the power series $\mathcal{H}_\mu(f)(z)$ is in $\text{Hol}(\mathbb{D})$ for any $f \in A^2$ and moreover
+
+$$ (1.5) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t), \quad f \in A^2. $$
+
+(ii) If for any choice of $f \in A^2$ and $z \in \mathbb{D}$ the integral in (1.5) converges, then (1.4) is satisfied.
+
+Unfortunately, condition (1.4) does not imply the boundedness of $\mathcal{H}_\mu$ on $A^2$ (see Theorem 1.5 and Proposition 1.7 below), so we need to look for a stronger one. Observe that (1.4) can be restated by saying that the analytic function $h_\mu$ belongs to the *Dirichlet space*
+
+$$ \mathcal{D} = \left\{ f(z) = \sum_{n=0}^{\infty} a_n z^n \in \text{Hol}(\mathbb{D}) : \int_{\mathbb{D}} |f'(z)|^2 dA(z) < \infty \right\}, $$
+
+which is a Hilbert space equipped with the inner product $\langle f, g \rangle_{\mathcal{D}} = a_0 \bar{b}_0 + \sum_{n \ge 0} (n+1)a_{n+1} \bar{b}_{n+1}$, where $g(z) = \sum_{n=0}^{\infty} b_n z^n$. We characterize in these terms the boundedness of the operator $\mathcal{H}_\mu$ on $A^2$.
+
+**THEOREM 1.5.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is bounded in $A^2$ if and only if the measure $|h'_\mu(z)|^2 dA(z)$ is a Dirichlet Carleson measure.
+
+We remind the reader that a finite positive Borel measure $\nu$ in $\mathbb{D}$ is called a *Dirichlet Carleson measure* if the identity operator is bounded from the Dirichlet space to $L^2(\mathbb{D}, \nu)$. We refer to [S] and [ARS] for descriptions of these measures.
+
+It would be nice to relate the boundedness of the operator directly to a condition on the measure. In this spirit, we are able to describe the Hilbert-Schmidt operators on $A^2$.
+
+**THEOREM 1.6.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $A^2$ if and only if
+
+$$ (1.6) \qquad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} \log \frac{1}{1-t} d\mu(t) < \infty. $$
+
+Obviously, (1.6) gives bounded operators $\mathcal{H}_\mu$ on $A^2$; maybe surprisingly, it is sharp for the boundedness in a certain sense.
+---PAGE_BREAK---
+
+PROPOSITION 1.7. For each $\beta \in [0,1)$ there is a finite positive Borel measure $\mu$ on $[0,1)$ such that
+
+$$ (1.7) \quad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} \left(\log \frac{1}{1-t}\right)^\beta d\mu(t) < \infty, $$
+
+and $\mathcal{H}_\mu$ is not bounded on $A^2$.
+
+**2. The Hankel matrix $\mathcal{H}_\mu$ acting on $H^1$.** Before we proceed to the proofs of Proposition 1.1 and Theorem 1.2 some results and definitions must be recalled. First, we present an equivalent description of the $\alpha$-logarithmic $s$-Carleson measures (see [Z]).
+
+LEMMA A. Suppose that $0 \le \alpha < \infty$ and $0 < s < \infty$ and $\mu$ is a positive Borel measure in $\mathbb{D}$. Then $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure if and only if
+
+$$ (2.1) \quad \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1 - |a|^2} \right)^\alpha \int_{\mathbb{D}} \left( \frac{1 - |a|^2}{|1 - \bar{a}z|^2} \right)^s d\mu(z) < \infty. $$
+
+We shall write $BMOA_{\log,\alpha}$, $\alpha \ge 0$, (see [Gi] and [PV]) for the space of those $H^1$ functions whose boundary values satisfy
+
+$$ (2.2) \quad \|f\|_{BMOA_{\log,\alpha}} = |f(0)| + \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1-|a|} \right)^\alpha \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta}) d\theta < \infty, $$
+
+where $P_a(e^{i\theta}) = \frac{1-|a|^2}{|1-\bar{a}e^{i\theta}|^2}$ is the Poisson kernel.
+
+We shall write $VMOA_{\log,\alpha}$ for the subspace of $H^1$ of those functions $f$ such that
+
+$$ \lim_{|a| \to 1^-} \left( \log \frac{2}{1 - |a|} \right)^\alpha \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta}) \, d\theta = 0. $$
+
+If $\alpha = 0$, we obtain the classical space BMOA [VMOA] of $H^1$-functions with bounded [vanishing] mean oscillation. For simplicity, we shall write $BMOA_{\log}$ [VMOA$_{\log}$] for the space $BMOA_{\log,1}$ [VMOA$_{\log,1}$].
+
+We shall also use Fefferman's result (see [Gi]) that $(H^1)^* \cong \text{BMOA}$ and $(\text{VMOA})^* \cong H^1$, under the Cauchy pairing
+
+$$ (2.3) \quad \langle f, g \rangle_{H^2} = \lim_{r \to 1^-} \frac{1}{2\pi} \int_0^{2\pi} f(re^{i\theta}) \overline{g(e^{i\theta})} d\theta, $$
+
+$f \in H^1$, $g \in \text{BMOA}$ (resp. VMOA).
+
+*Proof of Proposition 1.1.* (i) Let $f(z) = \sum_{n \ge 0} a_n z^n \in H^1$ and assume that $\mu$ is a classical Carleson measure. This means equivalently that (see
+---PAGE_BREAK---
+
+[Pe, p. 42]) $\sup_{n \in \mathbb{N}} \mu_n(n+1) < \infty$. This fact together with Hardy's inequality (see [D, p. 48]) implies that
+
+$$ \sum_{k=0}^{\infty} \mu_{n,k} |a_k| \le C \sum_{k=0}^{\infty} \frac{|a_k|}{n+k+1} \le C \|f\|_{H^1}, \quad n \in \mathbb{N}, $$
+
+so $H_\mu(f)(z) \in \text{Hol}(\mathbb{D})$. The above inequalities also justify that
+
+$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t) d\mu(t), \quad n \in \mathbb{N}. $$
+
+Then
+
+$$ H_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t) d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t), \quad z \in \mathbb{D}. $$
+
+The last equality is true since $\mu$ is a classical Carleson measure and so
+
+$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)| d\mu(t) \right) |z|^n \le C \|f\|_{H^1} \frac{1}{1-|z|}. $$
+
+(ii) Assume that for any choice of $f \in H^1$ and $z \in \mathbb{D}$ the integral in (1.2) converges. Fix $f \in H^1$ and choose $z=0$. This means that $\int_{[0,1)} |f(t)| d\mu(t) < \infty$. If for any $\beta \in [0, 1)$ we define $T_\beta : H^1 \to L^1(d\mu)$ by setting $T_\beta(f) = f \cdot \chi_{[0,\beta]}$, then there is $C = C(f) > 0$ such that
+
+$$ \|T_\beta(f)\|_{L^1(d\mu)} = \int_{[0,\beta]} |f(t)| d\mu(t) \le \int_{[0,1)} |f(t)| d\mu(t) \le C $$
+
+for any $\beta \in [0, 1)$, which together with the uniform boundedness principle gives $\sup_{\beta \in [0,1)} \|T_\beta\| < \infty$; that is, the identity operator from $H^1$ to $L^1(d\mu)$ is bounded, and thus by Carleson's result (see [D, Theorem 9.3]) $\mu$ is a classical Carleson measure. $\blacksquare$
+
+Now we are ready to prove our main result in this section.
+
+*Proof of Theorem 1.2.*
+
+*Proof of (i): Boundedness.* We observe that the duality relation (VMOA)* $\cong$ $H^1$, Proposition 1.1, Cauchy's integral representation for functions in $H^1$ (see [D, Theorem 3.9]) and Fubini's theorem imply that
+
+$$ (2.4) \qquad \mathcal{H}_{\mu}: H^{1} \rightarrow H^{1} \text{ is bounded} $$
+
+$$
+\begin{align*}
+&\Leftrightarrow \lim_{r \to 1^{-}} \left| \frac{1}{2\pi} \int_0^{2\pi} \left( \int_0^1 \frac{f(t)}{1 - tre^{i\theta}} d\mu(t) \right) \overline{g(e^{i\theta})} d\theta \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} \\
+&\Leftrightarrow \lim_{r \to 1^{-}} \left| \int_0^1 f(t) \overline{g(rt)} d\mu(t) \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}},
+\end{align*}
+$$
+
+for all $f \in H^1$ and $g \in \text{VMOA}$.
+---PAGE_BREAK---
+
+Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is bounded and select the families of test functions
+
+$$ (2.5) \qquad g_a(z) = \log \frac{2}{1-az}, \quad f_b(z) = \frac{1-b^2}{(1-bz)^2}, \quad a,b \in [0,1). $$
+
+A calculation shows that {$g_a$} $\subset$ VMOA and {$f_b$} $\subset$ $H^1$ with
+
+$$ (2.6) \quad \sup_{a \in [0,1)} \|g_a\|_{\text{BMOA}} < \infty \quad \text{and} \quad \sup_{b \in [0,1)} \|f_b\|_{H^1} < \infty. $$
+
+Next, taking $a=b \in [0,1)$ and $r \in [a, 1]$ we obtain
+
+$$ \begin{aligned} \left|\int_0^1 f_a(t) \overline{g_a(rt)} d\mu(t)\right| &\ge \int_a^1 \frac{1-a^2}{(1-rt)^2} \log \frac{2}{1-rat} d\mu(t), \\ &\ge C \frac{\log \frac{2}{1-a^2}}{1-a^2} \mu([a, 1)), \end{aligned} $$
+
+which bearing in mind (2.4) and (2.6) implies that $\mu$ is a 1-logarithmic 1-Carleson measure.
+
+Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then by Lemma A,
+
+$$ (2.7) \qquad K_\mu := \sup_{a \in \mathbb{D}} \log \frac{2}{1-|a|^2} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) < \infty. $$
+
+Let us see that $\mathcal{H}_\mu$ is bounded on $H^1$. Using (2.4), it is enough to prove
+
+$$ (2.8) \quad \lim_{r \to 1^-} \int_0^1 |f(t)| |g(rt)| d\mu(t) \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} $$
+
+for all $f \in H^1$ and $g \in \text{VMOA}$,
+
+which together with [D, Theorem 9.3] and Lemma A is equivalent to
+
+$$ (2.9) \quad \lim_{r \to 1^-} \sup_{a \in \mathbb{D}} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)| d\mu(z) \le C \|g\|_{\text{BMOA}} \quad \text{for all } g \in \text{VMOA}. $$
+
+On the other hand, for each $r \in (0,1)$, $a \in \mathbb{D}$ and $g \in \text{VMOA}$,
+
+$$ (2.10) \quad \begin{aligned} &\int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)| d\mu(z) \\ &\le |g(ra)| \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) + \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)-g(ra)| d\mu(z) \\ &= I_1(r,a) + I_2(r,a). \end{aligned} $$
+---PAGE_BREAK---
+
+Bearing in mind that any function $g$ in the Bloch space $\mathcal{B}$ (see [ACP]) has the growth
+
+$$|g(z)| \le 2 \|g\|_{\mathcal{B}} \log \frac{2}{1 - |z|} \quad \text{for all } z \in \mathbb{D}$$
+
+and BMOA $\subset \mathcal{B}$ (see Theorem 5.1 of [Gi]), by (2.7) we have
+
+$$
+\begin{align*}
+(2.11) \quad I_1(r, a) &\le C \|g\|_{\text{BMOA}} \log \frac{2}{1-|a|} \int_D \frac{1-|a|^2}{|1-\bar{a}z|^2} d\mu(z) \\
+&\le CK_\mu \|g\|_{\text{BMOA}} < \infty \quad \text{for all } r \in (0,1) \text{ and } a \in \mathbb{D}.
+\end{align*}
+$$
+
+Next, combining (2.7), [D, Theorem 9.3], (2.2) and the fact that BMOA is closed under subordination (see [Gi, Theorem 10.3]), we deduce that
+
+$$
+\begin{align*}
+I_2(r, a) &\le CK_\mu \int_T \frac{1-|a|^2}{|1-\bar{a}e^{i\theta}|^2} |g(re^{i\theta}) - g(ra)| d\theta \\
+&\le CK_\mu \|g_r\|_{\text{BMOA}} \\
+&\le CK_\mu \|g\|_{\text{BMOA}} \quad \text{for all } r \in (0,1), a \in \mathbb{D} \text{ and } g \in \text{VMOA},
+\end{align*}
+$$
+
+which together with (2.10) and (2.11) implies (2.9).
+
+*Proof of (ii): Compactness.* Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is compact. Let $\{f_b\}$ be the family of functions defined in (2.5) and let $\{b_n\}$ be a sequence of points of $(0,1)$ such that $\lim_{n\to\infty} b_n = 1$. Since $\{f_{b_n}\}$ is a bounded sequence in $H^1$, there is a subsequence $\{b_{n_k}\}$ and $g \in H^1$ such that $\lim_{k\to\infty} \| \mathcal{H}_\mu(f_{b_{n_k}}) - g \|_{H^1} = 0$. Now, as $\{f_{b_{n_k}}\}$ converges to 0 uniformly on compact subsets of $\mathbb{D}$ and $\mu$ is a 1-logarithmic 1-Carleson measure, $\{\mathcal{H}_\mu(f_{b_{n_k}})\}$ converges to 0 uniformly on compact subsets of $\mathbb{D}$, which implies that $g=0$. Thus, combining the fact that $\lim_{k\to\infty} \|\mathcal{H}_\mu(f_{b_{n_k}})\|_{H^1} = 0$ with the inequality (for all $g \in$ VMOA)
+
+$$
+\lim_{r \to 1^{-}} \left| \int_{0}^{1} f_{b_{n_k}}(t) \overline{g(rt)} d\mu(t) \right| \le C \| \mathcal{H}_{\mu}(f_{b_{n_k}}) \|_{H^1} \| g \|_{\text{BMOA}},
+$$
+
+and the reasoning used in the boundedness case, we deduce that
+
+$$
+\lim_{k \to \infty} \frac{\mu([b_{n_k}, 1)) \log \frac{2}{1-b_{n_k}}}{1-b_{n_k}} = 0.
+$$
+
+Consequently, $\mu$ is a vanishing 1-logarithmic 1-Carleson measure.
+
+Conversely, assume that $\mu$ is a vanishing 1-logarithmic 1-Carleson measure. The proof of the sufficiency for the boundedness yields
+
+$$
+(2.12) \quad \int_0^1 |f(t)| |g(t)| d\mu(t) \le CK_\mu \|f\|_{H^1} \|g\|_{\text{BMOA}}
+$$
+
+for all $f \in H^1$ and $g \in \text{VMOA}$.
+---PAGE_BREAK---
+
+So, it suffices to prove that for any sequence $\{f_n\}$ such that $\sup_{n \in \mathbb{N}} \|f_n\|_{H^1} < \infty$ and $\lim_{n \to \infty} f_n = 0$ on compact subsets of $\mathbb{D}$,
+
+$$ (2.13) \quad \lim_{n \to \infty} \int_0^1 |f_n(t)| |g(t)| d\mu(t) = 0 \quad \text{for all } g \in \text{VMOA.} $$
+
+Let us write $d\mu_r = \chi_{\{r<|z|<1\}}d\mu$. Since $\mu$ is a vanishing 1-logarithmic 1-Carleson measure, $\lim_{r \to 1^-} K_{\mu_r} = 0$. This together with the fact that $\lim_{n \to \infty} f_n = 0$ on compact subsets of $\mathbb{D}$, and (2.12), shows (using a standard argument) that $\mathcal{H}_\mu$ is compact on $H^1$. ■
+
+In order to present a second proof of Theorem 1.2 some definitions and known results are needed. Given $g(\xi) \sim \sum_{n=-\infty}^{\infty} \hat{g}(n)\xi^n \in L^2(\mathbb{T})$, the associated Hankel operator (see [Pe] or [PV]) is formally defined as
+
+$$ H_g(f) = P(gJf) $$
+
+where *P* is the Riesz projection and
+
+$$ Jf(\xi) = \bar{\xi}f(\bar{\xi}) = \sum_{n=-\infty}^{\infty} \hat{f}(-n-1)\xi^n, \quad \xi \in \mathbb{T}. $$
+
+Moreover, if $\mu$ is a classical Carleson measure, Nehari's Theorem implies that (see [Pe, p. 3] or [D, Theorem 6.8]) there is $g_\mu \in L^\infty(\mathbb{T})$ with $\mu_n = \hat{g}_\mu(n+1)$, so
+
+$$ \mathcal{H}_\mu(f)(z) = \overline{H_{g_\mu}(f)(\bar{z})}, $$
+
+and consequently $\mathcal{H}_\mu$ is bounded on $H^1$ if and only if $H_{g_\mu}$ is bounded on $H^1$. On the other hand,
+
+$$
+\begin{align*}
+P_1(g_\mu)(z) &:= P(g_\mu)(z) - \hat{g}_\mu(0) = \sum_{n=1}^{\infty} \hat{g}_\mu(n)z^n = \sum_{n=0}^{\infty} \hat{g}_\mu(n+1)z^{n+1} \\
+&= \sum_{n=0}^{\infty} \mu_n z^{n+1} = zh_\mu(z).
+\end{align*}
+$$
+
+Thus, we have the following result, which combines [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]).
+
+**THEOREM A.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$.
+
+(i) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded if and only if $h_{\mu} \in \text{BMOA}_{\log}$.
+
+(ii) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is compact if and only if $h_{\mu} \in \text{VMOA}_{\log}$.
+
+**Second proof of Theorem 1.2**
+
+*Proof of (i): Boundedness.* If $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded, then by Theorem A the function $h_{\mu}$ is in $\text{BMOA}_{\log}$. For any $a \in (0, 1)$ we deduce that
+---PAGE_BREAK---
+
+$$
+\begin{equation} \tag{2.14}
+\begin{aligned}
+& \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-a^2}{|1-ae^{i\theta}|^2} d\theta \\
+&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\
+&\ge \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \operatorname{Re} \left( \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right) d\theta \\
+&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \int_0^1 \frac{t(1-t\cos(\theta))}{|1-te^{i\theta}|^2(1-ta)} d\mu(t) d\theta \\
+&= \int_0^1 \frac{t(1-a^2)}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos(\theta)}{|1-te^{i\theta}|^2|1-ae^{i\theta}|} d\theta \right) d\mu(t) \\
+&\ge \frac{1}{2} \int_0^1 \frac{t(1-a^2)^2}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos(\theta)}{|1-te^{i\theta}|^2|1-ae^{i\theta}|^2} d\theta \right) d\mu(t).
+\end{aligned}
+\end{equation}
+$$
+
+Assume, for the moment, that
+
+$$
+(2.15) \quad \frac{1}{2\pi} \int_{0}^{2\pi} \frac{1 - t \cos(\theta)}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2} d\theta = \frac{1}{(1 - at)(1 - a^2)}
+$$
+
+for any $a, t \in [0, 1)$.
+
+This together with (2.14) yields
+
+$$
+\sup_{a \in [0,1)} \log \frac{2}{1-a} \int_0^1 \frac{t(1-a^2)}{(1-ta)^2} d\mu(t) \le C \|h_\mu\|_{BMOA_{\log}} < \infty,
+$$
+
+so $\mu$ is a 1-logarithmic 1-Carleson measure.
+
+Now, (2.15) will be proved. We assume that $a \neq t$ (if $a = t$ a similar calculation also gives (2.15)), and we write
+
+$$
+F(z) = \frac{z - \frac{t}{2}(z^2 + 1)}{(z - t)(1 - tz)(z - a)(1 - az)}.
+$$
+
+Therefore, using the residue theorem we see that
+
+$$
+\begin{align*}
+& \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - t \cos(\theta)}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2} d\theta \\
+&= \operatorname{Res}(F, t) + \operatorname{Res}(F, a) \\
+&= \frac{\frac{t}{2}}{(t-a)(1-at)} - \frac{a - \frac{t}{2}(a^2 + 1)}{(t-a)(1-at)(1-a^2)} \\
+&= \frac{1}{(1-at)(1-a^2)},
+\end{align*}
+$$
+
+which proves (2.15).
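
The closed form (2.15) is easy to sanity-check numerically. The following sketch is not part of the original argument, and the sample values of $a$ and $t$ are arbitrary; it compares a midpoint-rule discretisation of the left-hand side with the right-hand side:

```python
# Numerical check of identity (2.15):
#   (1/2π) ∫₀^{2π} (1 - t cos θ) / (|1 - t e^{iθ}|² |1 - a e^{iθ}|²) dθ
#     = 1 / ((1 - at)(1 - a²)).
# The integrand is smooth and 2π-periodic, so the midpoint rule converges fast.
import numpy as np

def lhs(a, t, n=100_000):
    theta = (np.arange(n) + 0.5) * 2 * np.pi / n
    z = np.exp(1j * theta)
    vals = (1 - t * np.cos(theta)) / (np.abs(1 - t * z) ** 2 * np.abs(1 - a * z) ** 2)
    return vals.mean()  # mean over the period = (1/2π) ∫₀^{2π}

def rhs(a, t):
    return 1.0 / ((1 - a * t) * (1 - a ** 2))

# Arbitrary test points, including the case a = t treated separately in the text.
max_err = max(abs(lhs(a, t) - rhs(a, t)) for a, t in [(0.5, 0.3), (0.9, 0.7), (0.2, 0.2)])
```

The agreement is to machine precision, reflecting the exactness of the residue computation.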
+---PAGE_BREAK---
+
+Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then $h_\mu$ has finite radial limit a.e. on $\mathbb{T}$, indeed $h_\mu \in H^2$ (see [Pe, p. 42]), and for any $a \in \mathbb{D}$,
+
+$$
+\begin{align*}
+(2.16) \quad & \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-|a|^2}{|1-ae^{i\theta}|^2} d\theta \\
+& = \frac{1}{2\pi} \int_0^{2\pi} \left| \frac{1-|a|^2}{|1-ae^{i\theta}|} \right| \left| \int_0^1 \frac{td\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\
+& \le \frac{1}{2\pi} \int_0^{2\pi} \frac{1-|a|^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{d\mu(t)}{|1-te^{i\theta}||1-ta|} \right| d\theta \\
+& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}||1-te^{i\theta}|} d\mu(t) \\
+& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \left( \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}|^2} \right)^{1/2} \left( \int_0^{2\pi} \frac{d\theta}{|1-te^{i\theta}|^2} \right)^{1/2} d\mu(t) \\
+& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{|1-ta|(1-t)^{1/2}} d\mu(t) \\
+& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{(1-t|a|)(1-t)^{1/2}} d\mu(t).
+\end{align*}
+$$
+
+Moreover, using that $\mu$ is a 1-logarithmic 1-Carleson measure and a standard argument (see [G] or [Z]) we conclude that
+
+$$
+\sup_{a \in (0,1)} (1-a^2)^{1/2} \int_0^1 \frac{1}{(1-ta)(1-t)^{1/2}} d\mu(t) < \infty,
+$$
+
+which together with (2.16) shows that $h_\mu \in \text{BMOA}_{\log}$, thus by Theorem A, $\mathcal{H}_\mu : H^1 \to H^1$ is bounded.
+
+The proof of (ii) is analogous, so it will be omitted. $\blacksquare$
+
+Proof of Theorem 1.3. We recall that $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if $\sum_{k \ge 0} \|\mathcal{H}_\mu(e_k)\|_{H^2}^2 < \infty$ for any orthonormal basis $\{e_k\}_{k=0}^\infty$. We choose the orthonormal basis given by $e_k(z) = z^k$. For $z = re^{i\theta} \in \mathbb{D}$, we observe that $\frac{1}{2\pi}\int_0^{2\pi} |\mathcal{H}_\mu(e_k)(re^{i\theta})|^2 d\theta = \sum_{n \ge 0} |\mu_{n,k}|^2 r^{2n}$. So
+
+$$
+\begin{align*}
+\sum_{k \ge 0} \| \mathcal{H}_\mu(e_k) \|_{H^2}^2 &= \sum_{k \ge 0} \sum_{n \ge 0} |\mu_{n,k}|^2 = \sum_{k \ge 0} \sum_{n \ge 0} \int_{[0,1)} \int_{[0,1)} (ts)^{n+k} d\mu(s) d\mu(t) \\
+&= \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \approx \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
+\end{align*}
+$$
+
+This finishes the proof. $\blacksquare$
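
The moment identity underlying this computation can be spot-checked for a discrete measure, where both sides are (rapidly converging) sums; the point masses below are arbitrary test data, not taken from the paper:

```python
# For μ = Σᵢ wᵢ δ_{tᵢ} with μ_{n,k} = ∫ t^{n+k} dμ(t), check
#   Σ_{n,k≥0} μ_{n,k}² = ∫∫ (1 - ts)^{-2} dμ(s) dμ(t).
import numpy as np

t = np.array([0.2, 0.5, 0.7])
w = np.array([1.0, 0.5, 0.25])

N = 400  # truncation of the double sum; terms decay like (0.7²)^{n+k}
moments = np.array([(w * t ** m).sum() for m in range(2 * N)])
idx = np.arange(N)
lhs_sum = (moments[idx[:, None] + idx[None, :]] ** 2).sum()
rhs_int = (w[:, None] * w[None, :] / (1 - t[:, None] * t[None, :]) ** 2).sum()
```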
+---PAGE_BREAK---
+
+Finally, we shall see that although $\mathcal{H}_\mu$ is not bounded on $H^1$ for a classical Carleson measure $\mu$, in some sense $\mathcal{H}_\mu$ is close to having this property.
+
+**THEOREM 2.1.** If $\mu$ is a classical Carleson measure supported on $[0, 1)$ and $0 < p < 1$, then $\mathcal{H}_\mu : H^1 \to H^p$ is bounded.
+
+*Proof.* As $\mu$ is a classical Carleson measure, Nehari's theorem yields $g_\mu \in L^\infty(\mathbb{T})$ with $\mathcal{H}_\mu(f)(z) = \overline{H_{g_\mu}(f)(\bar{z})}$ and $H_{g_\mu}(f) = P(g_\mu Jf) = P M_{g_\mu}(\bar{\xi}\, Tf)$, where $Tf(e^{it}) = f(e^{-it})$ and $M_g$ is the multiplication operator by $g$. Since the Riesz projection $P$ maps $L^1(\mathbb{T})$ boundedly into $H^p$ for every $0 < p < 1$ (Kolmogorov's theorem), we deduce that
+
+$$
+(2.17) \quad \| \mathcal{H}_\mu(f) \|_{H^p} = \| P(g_\mu Jf) \|_{H^p} \le C_p \| g_\mu Jf \|_{L^1} \le C_p \| g_\mu \|_{L^\infty} \| f \|_{H^1}
+$$
+
+for all $f \in H^1$. Thus, using standard techniques and well-known results we deduce that $\mathcal{H}_{\mu}$ is of weak type $(1,1)$ on Hardy spaces. ■
+
+**3. The Hankel matrix $\mathcal{H}_{\mu}$ acting on $A^2$.** We recall that the Bergman projection $Pf(z) = \int_{\mathbb{D}} f(w) \overline{K_z(w)} dA(w)$ is bounded from $L^2(dA)$ to $A^2$ (see [Zh]), where $K_z(w) = (1 - \bar{z}w)^{-2}$ is the Bergman kernel of $A^2$. It follows that any $f \in A^2$ can be represented by its Bergman projection and moreover $(A^2)^* \cong A^2$ under the pairing $\langle f, g \rangle_{A^2} = \int_{\mathbb{D}} f(z) \overline{g(z)} dA(z)$.
+
+*Proof of Proposition 1.4.* (i) Fix $n \in \mathbb{N}$. If $f(z) = \sum_{k=0}^{\infty} a_k z^k \in A^2$, then by the Cauchy–Schwarz inequality,
+---PAGE_BREAK---
+
+$$ (3.1) \quad \left| \sum_{k \ge 0} \mu_{n,k} a_k \right| \le \sum_{k \ge 0} \mu_{n,k} |a_k| \le \left\{ \sum_{k \ge 0} (k+1) \mu_{n,k}^2 \right\}^{1/2} \|f\|_{A^2}. $$
+
+But
+
+$$
+\begin{align*}
+(3.2) \quad \sum_{k \ge 0} (k+1)\mu_{n,k}^2 &= \int_{[0,1)} \int_{[0,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \\
+&\le 2 \int_{[0,1)} \int_{[t,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \le 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
+\end{align*}
+$$
+
+Thus, if $\mu$ satisfies (1.4) the power series (1.1) is well defined and it represents an analytic function in $\mathbb{D}$. Under (1.4) we can also write
+
+$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t) d\mu(t). $$
+
+So, for $z \in \mathbb{D}$,
+
+$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t) d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-zt} d\mu(t). $$
+
+The last equality is true since
+
+$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)| d\mu(t) \right) |z|^n \le \left\{ 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t) \right\}^{1/2} \|f\|_{A^2} \frac{1}{1-|z|}. $$
+
+(ii) Take $f \in A^2$. Assume that the integral in (1.5) converges for each $z \in \mathbb{D}$. We choose $z = 0$. So, there is $C > 0$ such that
+
+$$ (3.3) \quad \left| \int_{[0,\beta)} f(t) d\mu(t) \right| \le \int_{[0,\beta)} |f(t)| d\mu(t) \le \int_{[0,1)} |f(t)| d\mu(t) \le C $$
+
+for all $\beta \in (0, 1)$.
+
+On the other hand, the integral representation of $f \in A^2$ through the Bergman projection, and Fubini's theorem, imply that
+
+$$
+\begin{align*}
+\int_{[0,\beta)} f(t) d\mu(t) &= \int_{[0,\beta)} \int_{\mathbb{D}} \frac{f(w)}{(1-\bar{w}t)^2} dA(w) d\mu(t) \\
+&= \int_{\mathbb{D}} f(w) \overline{\int_{[0,\beta)} \frac{1}{(1-wt)^2} d\mu(t)} \, dA(w) = \langle f, g_\beta \rangle_{A^2},
+\end{align*}
+$$
+
+where $g_\beta(w) = \int_{[0,\beta)} \frac{1}{(1-wt)^2} d\mu(t) \in A^2$ for every $\beta$. Then, combining (3.3), the fact that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$, and the uniform bound-
+---PAGE_BREAK---
+
+edness principle, we conclude that $\sup_{\beta} \|g_{\beta}\|_{A^2} < \infty$. Thus, using that
+$\|g_{\beta}\|_{A^2}^2 = \int_{[0,\beta)} \int_{[0,\beta)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t)$, we get, on letting $\beta \to 1^-$,
+
+$$
+C \geq \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \geq \frac{1}{4} \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
+$$
+
+So condition (1.4) is true. ■
+
+Proof of Theorem 1.5. It is known that $(A^2)^* \cong \mathcal{D}$ and $\mathcal{D}^* \cong A^2$ under the Cauchy pairing $\langle f, g \rangle_{H^2} = \sum_{n \ge 0} a_n \bar{b}_n$ where $f(z) = \sum_n a_n z^n \in A^2$ and $g(z) = \sum_n b_n z^n \in \mathcal{D}$. We observe that, under this relation, $\mathcal{H}_\mu$ is self-adjoint. Therefore, $\mathcal{H}_\mu$ is bounded on $A^2$ if and only if it is on $\mathcal{D}$.
+
+If $f,g \in \mathcal{D}$ we shall write $f_1(z) = \sum_n |a_n|z^n$, $g_1(z) = \sum_n |b_n|z^n$ so that
+$\|f\|_D = \|f_1\|_D$ and $\|g\|_D = \|g_1\|_D$. Then
+
+$$
+\begin{align*}
+& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
+& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| + \mu_0 |a_0| |b_0| + |b_0| \sum_{k=0}^{\infty} \mu_{k+1} |a_{k+1}| \\
+& \leq \sum_{n \geq 0} \mu_{n+1} \left( \sum_{k=0}^{n} (k+1) |b_{k+1}| |a_{n-k}| \right) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
+& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z) \\
+& \leq \int_{\mathbb{D}} f_1(z) g'_1(z) \overline{h'_{\mu}(z)} dA(z) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
+& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z).
+\end{align*}
+$$
+
+So, if $|h'_\mu(z)|^2 dA(z)$ is a Dirichlet Carleson measure, we get
+
+$$
+\begin{align*}
+& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
+&\leq \left\{ \int_{\mathbb{D}} |f_1(z)|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
+&\quad + \left\{ \int_{\mathbb{D}} \left| \frac{f_1(z) - f_1(0)}{z} \right|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} \\
+&\leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}},
+\end{align*}
+$$
+
+and consequently $\mathcal{H}_\mu$ is bounded.
+---PAGE_BREAK---
+
+Conversely, assume that $\mathcal{H}_\mu$ is bounded on $\mathcal{D}$. Then
+
+$$
+\begin{align*}
+& \left| \int_{\mathbb{D}} f(z) g'(z) \overline{h'_\mu(z)} dA(z) \right| \\
+& \leq \int_0^1 \sum_{n \geq 0} (n+1) \mu_{n+1} \left( \sum_{k=0}^n (k+1) |b_{k+1}| |a_{n-k}| \right) r^{n+1} dr \\
+& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| \\
+& = |\langle \mathcal{H}_\mu(f_1), g_1 \rangle_\mathcal{D}| \leq C \|f\|_\mathcal{D} \|g\|_\mathcal{D}.
+\end{align*}
+$$
+
+So (exchanging also the roles of $f$ and $g$) we have
+
+$$
+\left| \int_{\mathbb{D}} (fg)'(z) \overline{h'_{\mu}(z)} dA(z) \right| \leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}}
+$$
+
+for every $f,g \in \mathcal{D}$. Finally, Theorem 1 of [ARSW] (see also [Wu]) implies
+that $|h'_{\mu}(z)|^2 dA(z)$ is a Dirichlet Carleson measure. $\blacksquare$
+
+**REMARK 3.1.** We recall that [ARS, Theorem 1] says that a positive Borel measure $\nu$ in $\mathbb{D}$ is a Dirichlet Carleson measure if and only if there is a positive constant $C$ such that for all $a \in \mathbb{D}$,
+
+$$
+(3.4) \quad \int_{\tilde{S}(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
+$$
+
+where
+
+$$
+\tilde{S}(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 2(1 - |a|), \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}.
+$$
+
+We note that if $\nu$ is finite, (3.4) is equivalent to the simpler condition
+
+$$
+(3.5) \quad \int_{S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
+$$
+
+because in this case
+
+$$
+\begin{align*}
+& \int_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1 - |z|^2)^2} \\
+&\le C(1 - |a|)^{-2} \iint_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 dA(z) \\
+&\le C(1 - |a|)^{-2}\nu(S(a))^2 \int_{\tilde{S}(a) \setminus S(a)} dA(z) \le C\nu(S(a)).
+\end{align*}
+$$
+
+Consequently, combining Proposition 1.4 and Theorem 1.5, if $\mu$ is a finite positive Borel measure on $[0,1)$ that satisfies (1.4), then $\mathcal{H}_{\mu}$ is bounded on $A^2$ if and only if the measure $\nu = |h'_{\mu}(z)|^2 dA(z)$ satisfies (3.5) for all $a \in \mathbb{D}$.
+---PAGE_BREAK---
+
+*Proof of Theorem 1.6.* Take the orthonormal basis $\{e_k\}_{k \ge 0}$ of $A^2$ given by $e_k(z) = (k+1)^{1/2} z^k$ and observe that
+
+$$
+\begin{align*}
+(3.6) \quad \sum_{k=0}^{\infty} \| \mathcal{H}_{\mu}(e_k) \|_{A^2}^2 &= \sum_{k=0}^{\infty} (k+1) \sum_{n=0}^{\infty} (n+1)^{-1} \mu_{n,k}^2 \\
+&= \sum_{k=0}^{\infty} (k+1) \int_{[0,1)} \int_{[0,1)} (ts)^k \frac{1}{ts} \log \frac{1}{1-ts} d\mu(t) d\mu(s) \\
+&\asymp \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \log \frac{1}{1-t} d\mu(t).
+\end{align*}
+$$
+
+So the operator is Hilbert–Schmidt if and only if (1.6) holds. ■
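As with Theorem 1.3, identity (3.6) can be verified numerically for a discrete measure; this sketch (not from the paper) uses arbitrary point masses and truncates the series, which converge geometrically:

```python
# For μ = Σᵢ wᵢ δ_{tᵢ}, check
#   Σ_k (k+1) Σ_n (n+1)⁻¹ μ_{n,k}²
#     = ∫∫ (1-ts)⁻² (ts)⁻¹ log(1/(1-ts)) dμ(t) dμ(s),
# using Σ_n xⁿ/(n+1) = x⁻¹ log(1/(1-x)) and Σ_k (k+1)xᵏ = (1-x)⁻².
import numpy as np

t = np.array([0.2, 0.5, 0.7])
w = np.array([1.0, 0.5, 0.25])

N = 400
moments = np.array([(w * t ** m).sum() for m in range(2 * N)])
n = np.arange(N)
weights = (n[None, :] + 1) / (n[:, None] + 1)        # rows: index n, columns: index k
lhs_sum = (weights * moments[n[:, None] + n[None, :]] ** 2).sum()

x = t[:, None] * t[None, :]
rhs_int = (w[:, None] * w[None, :] * np.log(1 / (1 - x)) / (x * (1 - x) ** 2)).sum()
```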
+
+Finally we shall prove Proposition 1.7.
+
+*Proof of Proposition 1.7.* We claim that if $\mathcal{H}_\mu$ is bounded on $A^2$ then
+
+$$
+(3.7) \quad \sup_{a \in (0,1)} \frac{\int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu(t)}{\frac{1}{a^2} \log \frac{1}{1-a^2}} < \infty.
+$$
+
+Assume (3.7) for the moment. Let $\beta \in [0,1)$, $\alpha \in ((1+\beta)/2, 1)$ and consider the measure $d\mu_\alpha(t) = (\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}dt$. Using that $\mu_\alpha([t,1)) \asymp (1-t)(\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}$, we deduce
+
+$$
+\int_0^1 \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^\beta d\mu_\alpha(t) \asymp \int_0^1 \frac{1}{(1-t)} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^{\beta-2\alpha} dt < \infty
+$$
+
+and
+
+$$
+\begin{align*}
+& \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu_\alpha(t) \\
+&\ge C \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,a]} \frac{1}{1-t} \left(\frac{1}{t} \log \frac{1}{1-t}\right)^{-2\alpha} \left(\frac{1}{t^2} \log \frac{1}{1-t^2}\right)^2 dt \\
+&\ge C \left(\log \frac{1}{1-a}\right)^{2-2\alpha},
+\end{align*}
+$$
+
+which in particular implies that
+
+$$
+\lim_{a \to 1^-} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 d\mu_\alpha(t) = \infty.
+$$
+
+So, $\mu_\alpha$ does not satisfy (3.7) and thus $\mathcal{H}_{\mu_\alpha}$ is not bounded.
+---PAGE_BREAK---
+
+In order to prove (3.7), using that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$,
+we obtain
+
+$$
+(3.8) \quad \mathcal{H}_\mu : A^2 \to A^2 \text{ is bounded}
+\quad \Leftrightarrow \quad
+\left| \int_{\mathbb{D}} \left( \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t) \right) \overline{g(z)} dA(z) \right| \le C \|f\|_{A^2} \|g\|_{A^2} \text{ for all } f,g \in A^2.
+$$
+
+Set $g_a(z) = \frac{1}{1-az}$, $a \in (0,1)$. Then $\|g_a\|_{A^2}^2 = \frac{1}{a^2} \log \frac{1}{1-a^2}$ and
+
+$$
+\begin{align*}
+\int_{\mathbb{D}} \frac{g_a(z)}{1-t\bar{z}} dA(z) &= \int_{\mathbb{D}} \left(\sum_{n=0}^\infty (az)^n\right) \left(\sum_{n=0}^\infty (t\bar{z})^n\right) dA(z) = \sum_{n=0}^\infty \frac{(at)^n}{n+1} \\
+&= \frac{1}{at} \log \frac{1}{1-at}, \quad a,t \in (0,1).
+\end{align*}
+$$
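
Both closed forms used here come from $\int_{\mathbb{D}} |z|^{2n}\, dA(z) = 1/(n+1)$ (normalised area measure) together with $\sum_{n\ge0} x^n/(n+1) = x^{-1}\log\frac{1}{1-x}$, applied with $x = a^2$ and $x = at$ respectively. A quick numerical confirmation of that series identity (the test values of $x$ are arbitrary):

```python
# Check Σ_{n≥0} xⁿ/(n+1) = (1/x) log(1/(1-x)) for a few x in (0,1).
import math

def partial_sum(x, terms=2000):
    return sum(x ** n / (n + 1) for n in range(terms))

def closed_form(x):
    return math.log(1.0 / (1.0 - x)) / x

err = max(abs(partial_sum(x) - closed_form(x)) for x in (0.25, 0.5, 0.81))
```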
+
+Then, by (3.8) (with $g = g_a$) and Fubini's theorem, we get
+
+$$
+(3.9) \quad \sup_{a \in (0,1)} \left| \int_0^1 f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2,
+$$
+
+where
+
+$$
+d\mu_a(t) = \frac{\frac{1}{at} \log \frac{1}{1-at}}{\left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{1/2}} d\mu(t).
+$$
+
+So, there is $C > 0$ such that
+
+$$
+(3.10) \quad \sup_{a, \beta \in (0,1)} \left| \int_0^\beta f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2.
+$$
+
+Next, arguing as in the proof of Proposition 1.4, we obtain
+
+$$
+(3.11) \quad \sup_{a, \beta \in (0,1)} \left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2} < \infty,
+$$
+
+which together with the fact that
+
+$$
+\begin{align*}
+\left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2}^2 &= \sum_{n=0}^\infty (n+1) \left[ \int_0^\beta t^n d\mu_a(t) \right]^2 \\
+&\geq \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \sum_{n=0}^\infty (n+1) \int_0^\beta t^{2n} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 \mu([t,\beta]) d\mu(t) \\
+&\geq \frac{1}{4} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_0^\beta \frac{\left( \frac{1}{at} \log \frac{1}{1-at} \right)^2}{(1-t)^2} \mu([t,\beta]) d\mu(t)
+\end{align*}
+$$
+
+finishes the proof. $\blacksquare$
+---PAGE_BREAK---
+
+**Acknowledgements.** The authors wish to thank Professor A. Aleman for his helpful comments and for interesting discussions on the topic of the paper.
+
+The first author is partially supported by the European Networking Programme “HCAA” of the European Science Foundation. The second author is partially supported by the Ramón y Cajal program of MICINN (Spain). Both authors are supported by grants from “Ministerio de Educación y Ciencia, Spain” (MTM2007-60854) and from “La Junta de Andalucía” (FQM210) and P09-FQM-4468.
+
+References
+
+[ACP] J. M. Anderson, J. Clunie and Ch. Pommerenke, *On Bloch functions and normal functions*, J. Reine Angew. Math. 270 (1974), 12–37.
+
+[ARS] N. Arcozzi, R. Rochberg and E. Sawyer, *Carleson measures for analytic Besov spaces*, Rev. Mat. Iberoamer. 18 (2002), 443–510.
+
+[ARSW] N. Arcozzi, R. Rochberg, E. Sawyer and B. Wick, *Bilinear forms on the Dirichlet space*, Anal. PDE 3 (2010), 21–47.
+
+[C] L. Carleson, *An interpolation problem for bounded analytic functions*, Amer. J. Math. 80 (1958), 921–930.
+
+[CS] J. Cima and D. Stegenga, *Hankel operators on $H^p$*, in: Analysis at Urbana, Vol. 1: Analysis in Function Spaces, London Math. Soc. Lecture Note Ser. 137, Cambridge Univ. Press, Cambridge, 1989, 133–150.
+
+[CSi] J. Cima and A. Siskakis, *Cauchy transforms and Cesàro averaging operators*, Acta Sci. Math. (Szeged) (1999), 505–513.
+
+[Di] E. Diamantopoulos, *Hilbert matrix on Bergman spaces*, Illinois J. Math. 48 (2004), 1067–1078.
+
+[DiS] E. Diamantopoulos and A. Siskakis, *Composition operators and the Hilbert matrix*, Studia Math. 140 (2000), 191–198.
+
+[DJV] M. Dostanić, M. Jevtić and D. Vukotić, *Norm of the Hilbert matrix on Bergman and Hardy spaces and a theorem of Nehari type*, J. Funct. Anal. 254 (2008), 2800–2815.
+
+[D] P. L. Duren, *Theory of $H^p$ Spaces*, Academic Press, New York, 1970. Reprint: Dover, Mineola, NY, 2000.
+
+[DS] P. L. Duren and A. P. Schuster, *Bergman Spaces*, Math. Surveys Monogr. 100, Amer. Math. Soc., Providence, RI, 2004.
+
+[G] J. B. Garnett, *Bounded Analytic Functions*, Academic Press, 1981.
+
+[Gi] D. Girela, *Analytic functions of bounded mean oscillation*, in: Complex Functions Spaces, R. Aulaskari (ed.), Univ. Joensuu Dept. Math. Rep. Ser. 4 (2001), 61–171.
+
+[JPS] S. Janson, J. Peetre and S. Semmes, *On the action of Hankel and Toeplitz operators on some function spaces*, Duke Math. J. 51 (1984), 937–958.
+
+[PV] M. Papadimitrakis and J. A. Virtanen, *Hankel and Toeplitz operators on $H^1$: continuity, compactness and Fredholm properties*, Integral Equations Operator Theory 61 (2008), 573–591.
+
+[Pe] V. Peller, *Hankel Operators and Their Applications*, Springer Monogr. Math., Springer, New York, 2003.
+---PAGE_BREAK---
+
+[Po] S. C. Power, *Hankel operators on Hilbert space*, Bull. London Math. Soc. 12 (1980), 422–442.
+
+[S] D. Stegenga, *Multipliers of the Dirichlet space*, Illinois J. Math. 24 (1980), 113–139.
+
+[T] V. A. Tolokonnikov, *Hankel and Toeplitz operators in Hardy spaces*, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 141 (1985), 165–175 (in Russian); English transl.: J. Soviet Math. 37 (1987), 1359–1364.
+
+[W] H. Widom, *Hankel matrices*, Trans. Amer. Math. Soc. 121 (1966), 1–35.
+
+[Wu] Z. Wu, *The dual and second predual of $W_\sigma$*, J. Funct. Anal. 116 (1993), 314–334.
+
+[Z] R. Zhao, *On logarithmic Carleson measures*, Acta Sci. Math. (Szeged) 69 (2003), 605–618.
+
+[Zh] K. Zhu, *Operator Theory in Function Spaces*, 2nd ed., Math. Surveys Monogr. 138, Amer. Math. Soc., Providence, RI, 2007.
+
+Petros Galanopoulos, José Ángel Peláez
+Departamento de Análisis Matemático
+Universidad de Málaga
+Campus de Teatinos, 29071 Málaga, Spain
+E-mail: galanopoulos_petros@yahoo.gr
+japelaez@uma.es
+
+Received December 9, 2009
+
+Revised version May 26, 2010
+
+(6764)
\ No newline at end of file
diff --git a/samples/texts_merged/6324184.md b/samples/texts_merged/6324184.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7c817b4e646a86b73629939cfd0b5344405877a
--- /dev/null
+++ b/samples/texts_merged/6324184.md
@@ -0,0 +1,395 @@
+
+---PAGE_BREAK---
+
+Supporting information for
+
+“Spatial structure, host heterogeneity and parasite virulence: implications for
+vaccine-driven evolution”
+
+Y. H. Zurita-Gutiérrez & S. Lion
+
+April 30, 2015
+
+**Appendix S1: Theory**
+
+S1.1 Spatial invasion fitness
+
+The dynamics of the mutant parasite are given by the following equations
+
+$$
+\begin{align*}
+\frac{dp_{I'_{N}}}{dt} &= \beta'_{NN}[S_N|I'_N]p_{I'_N} + \beta'_{TN}[S_N|I'_T]p_{I'_T} - (d+\alpha'_{N})p_{I'_N} \\
+\frac{dp_{I'_T}}{dt} &= \beta'_{NT}[S_T|I'_N]p_{I'_N} + \beta'_{TT}[S_T|I'_T]p_{I'_T} - (d+\alpha'_{T})p_{I'_T}
+\end{align*}
+$$
+
+or, in matrix form,
+
+$$
+\frac{d}{dt} \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} = M \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} \quad (S1.1)
+$$
+
+where
+
+$$
+M = \begin{pmatrix}
+\beta'_{NN}[S_N|I'_N] - (d+\alpha'_N) & \beta'_{TN}[S_N|I'_T] \\
+\beta'_{NT}[S_T|I'_N] & \beta'_{TT}[S_T|I'_T] - (d+\alpha'_T)
+\end{pmatrix}
+$$
+
+We can rewrite **M** as **M** = **F** − **V**, where
+
+$$
+\mathbf{F} = \begin{pmatrix} \beta'_{NN}[S_N | I'_N] & \beta'_{TN}[S_N | I'_T] \\ \beta'_{NT}[S_T | I'_N] & \beta'_{TT}[S_T | I'_T] \end{pmatrix}
+$$
+
+and
+
+$$
+\boldsymbol{V} = \begin{pmatrix} d + \alpha'_{N} & 0 \\ 0 & d + \alpha'_{T} \end{pmatrix}
+$$
+
+All the entries of **F** and $\mathbf{V}^{-1}$ are positive, and the dominant eigenvalue of −**V** is clearly negative, so we can use the Next-Generation Theorem. Thus, the mutant invades if the dominant eigenvalue of $\mathbf{A} = \mathbf{F}\mathbf{V}^{-1}$ is greater than 1. With the notations
+
+$$
+\begin{align*}
+R'_{NN} &= \beta'_{NN} / \delta'_{N} \\
+R'_{TN} &= \beta'_{TN} / \delta'_{N} \\
+R'_{NT} &= \beta'_{NT} / \delta'_{T} \\
+R'_{TT} &= \beta'_{TT} / \delta'_{T}
+\end{align*}
+$$
+
+we have
+
+$$
+\mathbf{A} = \begin{pmatrix}
+R'_{NN}[S_N | I'_N] & R'_{TN}[S_N | I'_T] \\
+R'_{NT}[S_T | I'_N] & R'_{TT}[S_T | I'_T]
+\end{pmatrix}
+$$
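
The Next-Generation argument can be illustrated numerically: for a nonnegative matrix $\mathbf{F}$ and a positive diagonal $\mathbf{V}$, the spectral radius of $\mathbf{F}\mathbf{V}^{-1}$ exceeds 1 exactly when the dominant eigenvalue of $\mathbf{M} = \mathbf{F} - \mathbf{V}$ is positive. The entries below are arbitrary illustrative values, not model estimates:

```python
# Threshold equivalence: ρ(F·V⁻¹) > 1  ⇔  max Re eig(F − V) > 0.
import numpy as np

def dominant_real(M):
    return max(np.linalg.eigvals(M).real)

V = np.diag([0.7, 0.9])
cases = [np.array([[0.6, 0.3], [0.2, 0.5]]),       # transmission strong enough to invade
         np.array([[0.3, 0.15], [0.1, 0.25]])]     # same structure, halved rates
agree = all((dominant_real(F @ np.linalg.inv(V)) > 1) == (dominant_real(F - V) > 0)
            for F in cases)
rho_first = dominant_real(cases[0] @ np.linalg.inv(V))   # > 1 for the first case
```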
+---PAGE_BREAK---
+
+Some straightforward algebra shows that the dominant eigenvalue of this matrix is
+
+$$
+\begin{align*}
+\mathcal{R} ={}& \frac{1}{2} (R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T]) \\
+& + \frac{1}{2} \sqrt{(R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T])^2 + 4(R'_{NT}R'_{TN}[S_N|I'_T][S_T|I'_N] - R'_{NN}R'_{TT}[S_N|I'_N][S_T|I'_T])}
+\end{align*}
+$$
+
+When $g_P = 1$ (global dispersal), we recover the expression found by Gandon (2004) for a well-mixed population.
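
The displayed expression is the standard dominant-eigenvalue formula for a $2\times2$ nonnegative matrix with entries $a, b, c, d$ (diagonal $a, d$), namely $\tfrac{1}{2}(a+d) + \tfrac{1}{2}\sqrt{(a+d)^2 + 4(bc - ad)}$. A cross-check against a numerical eigenvalue solver, with arbitrary entries:

```python
# Dominant eigenvalue of [[a, b], [c, d]] with b, c ≥ 0: closed form vs numpy.
import numpy as np

def dominant(a, b, c, d):
    return 0.5 * (a + d) + 0.5 * np.sqrt((a + d) ** 2 + 4 * (b * c - a * d))

a, b, c, d = 0.5, 0.3, 0.2, 0.8
rho_formula = dominant(a, b, c, d)
rho_numeric = max(np.linalg.eigvals(np.array([[a, b], [c, d]])).real)
```

For $b, c \ge 0$ the discriminant equals $(a-d)^2 + 4bc \ge 0$, so the formula always returns the larger real eigenvalue.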
+
+Noting $a_{ij}$ the elements of $\mathbf{A}$, we have
+
+$$
+\mathbf{A} = \begin{pmatrix} a_{NN} & a_{TN} \\ a_{NT} & a_{TT} \end{pmatrix}
+$$
+
+At equilibrium, the dominant eigenvalue is unity, $\mathcal{R} = 1$. An associated right eigenvector is the vector
+of densities of each class of infected hosts at equilibrium, $\mathbf{u} = (\hat{p}_{I_N} \ \hat{p}_{I_T})^T$. We therefore have
+
+$$
+\frac{p_{I_T}}{p_{I_N}} = \frac{1 - a_{NN}}{a_{TN}} = \frac{a_{NT}}{1 - a_{TT}} \quad (S1.2)
+$$
+
+An associated left eigenvector is the vector of reproductive values, $\mathbf{v}$ (Taylor, 1990; Rousset, 2004). Normalising $\mathbf{v}$ such that $\mathbf{v}^T\mathbf{u} = 1$, we find that the class reproductive values $c_j = v_j u_j$ at equilibrium satisfy $c_N + c_T = 1$, with
+
+$$
+c_N = \frac{a_{NT} p_{IN}^2}{a_{NT} p_{IN}^2 + a_{TN} p_{IT}^2}. \qquad (S1.3)
+$$
+
+Furthermore, at equilibrium, det($\mathbf{A} - \mathbf{I}$) = 0, which yields the following equilibrium condition
+
+$$
+1 - a_{NN} - a_{TT} = a_{NT}a_{TN} - a_{NN}a_{TT} \quad (S1.4)
+$$
+
+For the sake of simplicity, we now make the additional assumption that transmission can be written
+as the product of infectivity and susceptibility. Hence, we write $\beta_{ij} = \beta_i \sigma_j$, where $\sigma_N = 1$ and $\sigma_T$ is
+the relative susceptibility of treated hosts. We then have
+
+$$
+\begin{align*}
+R'_{NN} &= R'_N = \beta'_N / \delta'_N \\
+R'_{TT} &= \sigma_T R'_T = \sigma_T \beta'_T / \delta'_T \\
+R'_{TN} &= R'_T \frac{\delta'_T}{\delta'_N} \\
+R'_{NT} &= \sigma_T R'_N \frac{\delta'_N}{\delta'_T}
+\end{align*}
+$$
+
+and we obtain
+
+$$
+\mathcal{R} = \frac{1}{2} (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{1}{2} \sqrt{(R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C'} \quad (\text{S1.5})
+$$
+
+where
+
+$$
+C' = [S_N | I'_N][S_T | I'_T] - [S_N | I'_T][S_T | I'_N] \tag{S1.6}
+$$
+
+measures the spatial correlation of treatments experienced by mutant hosts. Equation (S1.4) can then
+be rewritten as
+
+$$
+1 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) = -\sigma_T R_N R_T C \quad (\text{S1.7})
+$$
+---PAGE_BREAK---
+
+## S1.2 Selection gradient
+
+Assuming that selection is weak, we can further calculate the selection gradient.
+
+$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{\sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C}} $$
+
+At neutrality, we have $\mathcal{R} = 1$ and therefore
+
+$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) > 0 \quad (\text{S1.8}) $$
+
+Using equation (S1.7), we thus have
+
+$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 1 - \sigma_T R_N R_T C > 0 \quad (\text{S1.9}) $$
+
+Hence
+
+$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{1 - \sigma_T R_N R_T C} $$
+
+The numerator of the right-hand side of the latter equation can be written as
+
+$$
+\begin{aligned}
+& \frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
+& \qquad + \frac{1}{4}\partial\left((R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T])^2 - 4\sigma_T R'_N R'_T C'\right) \\
+&=\frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
+& \qquad + \frac{1}{2}(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) \partial((R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T])) - \sigma_T \partial(R'_N R'_T C')
+\end{aligned}
+$$
+
+which yields the following expression for $\partial \mathcal{R}$
+
+$$ \partial \mathcal{R} = \frac{1}{1 - \sigma_T R_N R_T C} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T] - \sigma_T R'_{N} R'_{T} C') \quad (\text{S1.10}) $$
+
+## S1.3 Simplifications
+
+We can write $\partial \mathcal{R}$ as
+
+$$ \partial \mathcal{R} = \frac{\partial W + \partial S}{1 - \sigma_T R_N R_T C} \quad (\text{S1.11}) $$
+
+where $\partial W$ collects all direct selective effects, and $\partial S$ collects all indirect selective effects, i.e. the selective effects on local densities.
+
+### Direct effects
+
+We have
+
+$$ \partial W = [S_N | I_N] \partial R'_N + \sigma_T [S_T | I_T] \partial R'_T - \sigma_T C (R_N \partial R'_T + R_T \partial R'_N) \quad (\text{S1.12}) $$
+
+Plugging (S1.7) into the expression of $\partial W$, we obtain
+
+$$ \partial W = [S_N|I_N]\partial R'_N + \sigma_T [S_T|I_T]\partial R'_T + (R_N\partial R'_T + R_T\partial R'_N)\left(\frac{1-(R_N[S_N|I_N]+\sigma_T R_T[S_T|I_T])}{R_N R_T}\right) \quad (\text{S1.13}) $$
+
+which gives after simplifications
+
+$$ \partial W = \frac{\partial R'_{N}}{R_{N}} (1 - \sigma_{T} R_{T} [S_{T}|I_{T}]) + \frac{\partial R'_{T}}{R_{T}} (1 - R_{N}[S_{N}|I_{N}]) \quad (\text{S1.14}) $$
+---PAGE_BREAK---
+
+From the dynamics of $I_N$ and $I_T$, we have
+
+$$R_N[S_N|I_N] = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{h_N p_{S_N}} = 1 - \frac{\beta_T[I_T|S_N]}{h_N} \quad (S1.15)$$
+
+$$\sigma_T R_T [S_T | I_T] = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\delta_T p_{IT}} = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\sigma_T h_T p_{S_T}} = 1 - \frac{\beta_N [I_N | S_T]}{h_T} \quad (S1.16)$$
+
+so $\tau_T \equiv 1 - R_N[S_N|I_N]$ is the share of the force of infection on naive hosts that is caused by infections from the treated class, and $\tau_N = 1 - \sigma_T R_T[S_T|I_T]$ has the same interpretation for treated hosts. We then have
+
+$$\partial W = \tau_N \frac{\partial R'_N}{R_N} + \tau_T \frac{\partial R'_T}{R_T} \quad (S1.17)$$
+
+## Indirect effects
+
+We now turn to the “spatial” component of the selection gradient
+
+$$
+\begin{align}
+\partial S &= R_N \partial[S_N | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \nonumber \\
+&\quad + \sigma_T R_N R_T ([S_N | I_T] \partial[S_T | I'_N] + [S_T | I_N] \partial[S_N | I'_T] - [S_N | I_N] \partial[S_T | I'_T] - [S_T | I_T] \partial[S_N | I'_N]) \tag{S1.18} \\
+&= R_N (1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T (1 - R_N [S_N | I_N]) \partial[S_T | I'_T] \nonumber \\
+&\quad + \sigma_T R_N R_T [S_N | I_T] \partial[S_T | I'_N] + \sigma_T R_N R_T [S_T | I_N] \partial[S_N | I'_T] \tag{S1.19} \\
+&= R_N [(1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T [S_N | I_T] \partial[S_T | I'_N]] \nonumber \\
+&\quad + \sigma_T R_T [(1 - R_N [S_N | I_N]) \partial[S_T | I'_T] + R_N [S_T | I_N] \partial[S_N | I'_T]] \tag{S1.20}
+\end{align}
+$$
+
+Furthermore, we have
+
+$$R_T[S_N|I_T] = \frac{\delta_N p_{IN}}{\delta_T p_{IT}} (1 - R_N[S_N|I_N]) = \frac{h_N p_{S_N}}{\sigma_T h_T p_{ST}} \tau_T \quad (S1.21)$$
+
+$$\sigma_T R_N [S_T | I_N] = \frac{\delta_T p_{IT}}{\delta_N p_{IN}} (1 - \sigma_T R_T [S_T | I_T]) = \frac{\sigma_T h_T p_{ST}}{h_N p_{SN}} \tau_N \quad (S1.22)$$
+
+This yields
+
+$$\partial S = R_N \left[ \tau_N \partial[S_N | I'_N] + \frac{h_N p_{SN}}{h_T p_{ST}} \tau_T \partial[S_T | I'_N] \right] + \sigma_T R_T \left[ \tau_T \partial[S_T | I'_T] + \frac{h_T p_{ST}}{h_N p_{SN}} \tau_N \partial[S_N | I'_T] \right] \quad (S1.23)$$
+
+or equivalently
+
+$$
+\begin{align}
+\partial S ={}& \tau_N \left[ R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \partial[S_N | I'_T] \right] \nonumber \\
+& + \tau_T \left[ R_N \frac{h_N p_{S_N}}{\sigma_T h_T p_{S_T}} \sigma_T \partial[S_T | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \right] \tag{S1.24}
+\end{align}
+$$
+
+## Link with reproductive values
+
+The quantities $\tau_N$ and $\tau_T$ have a direct interpretation in terms of reproductive values. Indeed, we have
+
+$$\tau_T = \frac{\beta_T[I_T|S_N]}{h_N} = \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = a_{TN} \frac{p_{IT}}{p_{IN}} = 1 - a_{NN} \quad (S1.25)$$
+
+The last equation comes from equation (S1.2). Similarly, we have
+
+$$\tau_N = a_{NT} \frac{p_{IN}}{p_{IT}} = 1 - a_{TT} \quad (S1.26)$$
+---PAGE_BREAK---
+
+Hence, it follows from equation (S1.4) that
+
+$$
+\tau_N + \tau_T = 1 + \sigma_T R_N R_T C \tag{S1.27}
+$$
+
+and
+
+$$
+\frac{\tau_N}{\tau_N + \tau_T} = \frac{a_{NT} p_{IN}^2}{a_{NT} p_{IN}^2 + a_{TN} p_{IT}^2} \quad (\text{S1.28})
+$$
+
+where the last expression can be identified as $c_N$ in equation (S1.3).
+
+## Full selection gradient
+
+Plugging equations (S1.17) and (S1.24) into equation (S1.11), and noting that the denominator is $\tau_N + \tau_T$, we obtain the following expression for the selection gradient
+
+$$
+\begin{align}
+\partial R = c_N & \left[ \frac{\partial R'_{N}}{R_N} + R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \partial[S_N | I'_T] \right] \tag{S1.29a} \\
+& + c_T \left[ \frac{\partial R'_{T}}{R_T} + R_N \frac{h_N p_{S_N}}{h_T p_{S_T}} \partial[S_T | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \right] \tag{S1.29b}
+\end{align}
+$$
+
+Although we have obtained this result by direct differentiation of the invasion fitness, we note that an alternative derivation can be obtained by noting that the selection gradient can be written as
+
+$$
+\partial \mathcal{R} = \sum_{k,l} v_k u_l \partial(a_{lk})
+$$
+
+By writing $a_{\ell k} = F_\ell m_{\ell k}$, we can write an equation similar to equation (5) in Rousset (1999), and further simplifications lead to equation (S1.29).
+
+## S1.4 Uncorrelated landscapes
+
+If the landscape is uncorrelated, additional simplifications follow. First, the spatial correlation in treatment is always zero, hence $C = C' = 0$. It follows from equation (S1.5) that the invasion fitness of a rare mutant takes the following simple form:
+
+$$
+\mathcal{R} = R'_{N}[S_N | I'_{N}] + R'_{T}\sigma_{T}[S_T | I'_{T}] \quad (\text{S1.30})
+$$
+
+Then the selection gradient can be written simply as
+
+$$
+\partial \mathcal{R} = R_N[S_N | I'_N] \frac{\partial R'_N}{R_N} + \sigma_T R_T [S_T | I'_T] \frac{\partial R'_T}{R_T} + R_N \partial[S_N | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \quad (\text{S1.31})
+$$
+
+For a neutral mutant, we have at equilibrium $[S_N|I'_N] = [S_N|I_N]$ and $[S_T|I'_T] = [S_T|I_T]$. Furthermore, we have at equilibrium
+
+$$
+R_N[S_N | I_N] = c_N \quad (\text{S1.32})
+$$
+
+and
+
+$$
+\sigma_T R_T [S_T | I_T] = c_T = 1 - c_N \quad (\text{S1.33})
+$$
+
+Combining equations (S1.30)-(S1.33), and noting that $\partial[S_x|I'_y] = (1-g_P)q_{S_x/I'_y}$, we obtain equation (9) in the main text.
+
+## S1.5 Host reproduction
+
+So far, our results depend neither on host reproduction nor on the specific mechanism generating heterogeneity. The only assumption we make is that the parasite can only transmit horizontally (i.e. there is no vertical transmission). For the specific example of vaccination, we consider density-dependent reproduction, following previous spatial models of host-parasite interactions (Boots & Sasaki, 2000; Lion & Gandon, 2015).
+---PAGE_BREAK---
+
+We assume that host reproduction occurs at rate $b$ and can be either global (with probability $g_H$) or local (with probability $1-g_H$). We also assume that only susceptible hosts can reproduce. Reproduction takes place into empty sites, which introduces density-dependence. Offspring are produced at rates $\lambda_N = b[o|S_N]$ and $\lambda_T = b[o|S_T]$ for naive and treated susceptible hosts, respectively, where $[o|S_i] = g_H p_o + (1-g_H)q_o/S_i$.
+
+For the vaccination example, we further consider that offspring have a probability $\nu$ of entering the treated class at birth, as depicted in figure 1a. Note that, for a fully imperfect vaccine ($r_i = 0$), all hosts are identical for the parasite and, as a result, $c = \nu$.
+
+## S1.6 Stochastic simulations
+
+We performed stochastic individual-based simulations to analyse the effect of spatial structure and host quality on the evolution of host exploitation. The program was coded in C and implements the host-parasite life cycle (figure 1a in the main text) on a regular square lattice with 100×100 sites. Each site can contain at most one individual. The lattice is updated asynchronously in continuous time using the Gillespie algorithm (Gillespie, 1977).
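As a minimal illustration of the Gillespie algorithm mentioned above, the sketch below simulates a well-mixed (non-spatial) SIS model with only two event types. The function name and this reduction are our own simplifications for exposition; the paper's simulations instead run the full host-parasite life cycle asynchronously on a 100×100 lattice.

```python
import random

def gillespie_sis(beta, gamma, S0, I0, t_max, seed=0):
    """Minimal Gillespie simulation of a well-mixed SIS model.

    Two event types: infection at rate beta*S*I/N and recovery at rate
    gamma*I. This is a deliberately simplified, non-spatial sketch of
    the algorithm, not the paper's lattice model.
    """
    rng = random.Random(seed)
    S, I, t = S0, I0, 0.0
    N = S0 + I0
    while t < t_max and I > 0:
        r_inf = beta * S * I / N
        r_rec = gamma * I
        total = r_inf + r_rec
        t += rng.expovariate(total)        # exponential waiting time
        if rng.random() * total < r_inf:   # pick event proportionally to rate
            S, I = S - 1, I + 1
        else:
            S, I = S + 1, I - 1
    return t, S, I
```

The two Gillespie steps — draw an exponential waiting time from the total rate, then choose the event with probability proportional to its rate — carry over unchanged to the lattice model, where the event list is simply much longer.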
+
+For the simulations, we used the following trade-off:
+
+$$ \beta(x) = 20 \ln(x+1) \quad (\text{S1.34}) $$
+
+$$ \alpha(x) = x \quad (\text{S1.35}) $$
+
+Upon infection, parasites can mutate at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. All simulations were run with parameter values: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$.
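The trade-off (S1.34)-(S1.35) and the mutation kernel used in the simulations can be sketched as follows; clamping mutant trait values at zero is our own assumption, since the text does not specify how negative draws are handled.

```python
import math
import random

def beta(x):
    """Transmission trade-off beta(x) = 20 ln(x + 1), eq. (S1.34)."""
    return 20.0 * math.log(x + 1.0)

def alpha(x):
    """Virulence alpha(x) = x, eq. (S1.35)."""
    return x

def mutate(x, rng, sd=0.05):
    """Gaussian mutation of the host-exploitation trait (sd = 0.05).

    Clamping at zero is an assumption for illustration only.
    """
    return max(0.0, x + rng.gauss(0.0, sd))
```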
+
+## References
+
+[1] Gillespie, D. (1977). Exact stochastic simulation of coupled chemical reactions. *The Journal of Physical Chemistry.* **81**: 2340–2361.
+
+[2] Taylor, P. D. (1990). Allele-frequency change in a class-structured population. *Am. Nat.* **135**(1): 95–106. DOI: 10.1086/285034.
+
+[3] Rousset, F. (1999). Reproductive value vs sources and sinks. *Oikos*. **86**(3): 591–596.
+
+[4] Boots, M. & A. Sasaki (2000). The evolutionary dynamics of local infection and global reproduction in host-parasite interactions. *Ecol. Lett.* **3**: 181–185. DOI: 10.1046/j.1461-0248.2000.00139.x.
+
+[5] Gandon, S., M. J. Mackinnon, S. Nee & A. F. Read (2001). Imperfect vaccines and the evolution of pathogen virulence. *Nature*. **414**: 751–756. DOI: 10.1038/414751a.
+
+[6] Gandon, S., M. J. Mackinnon, S. Nee & A. F. Read (2003). Imperfect vaccination: some epidemiological and evolutionary consequences. *Proc. R. Soc. B.* **270**: 1129–1136. DOI: 10.1098/rspb.2003.2370.
+
+[7] Gandon, S. (2004). Evolution of multihost parasites. *Evolution*. **58**(3): 455–469. DOI: 10.1111/j.0014-3820.2004.tb01669.x.
+
+[8] Rousset, F. (2004). Genetic structure and selection in subdivided populations. Princeton University Press, Princeton, NJ, USA.
+
+[9] Lion, S. & M. Boots (2010). Are parasites "prudent" in space? *Ecol. Lett.* **13**(10): 1245–55. DOI: 10.1111/j.1461-0248.2010.01516.x.
+
+[10] Lion, S. & S. Gandon (2015). Evolution of spatially structured host-parasite interactions. *J. evol. Biol*. DOI: 10.1111/jeb.12551.
+---PAGE_BREAK---
+
+Appendix S2: Evolutionary consequences of an anti-growth vaccine:
+vaccine coverage (figure S2)
+
+We show here the impact of vaccination coverage on parasite prevalence and virulence, for near-perfect vaccines ($r_2 = 0.9$). We broadly recover the predictions of Gandon et al. (2001, 2003): increasing vaccination coverage has little impact on parasite prevalence, but may select for higher virulence (figure S2a). Note that, as parasite dispersal becomes more local, parasite prevalence is minimised at lower vaccination coverage (figure S2b). Lower parasite dispersal leads to lower prevalence and more prudent exploitation over the whole range of vaccination coverage, but selection for increased virulence is stronger at intermediate parasite dispersal.
+
+Figure S2: The evolutionarily stable host exploitation (a) and prevalence (b) of the parasite as a function of vaccine coverage for an anti-growth vaccine $r_2$. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The fractions represent the number of runs that went extinct out of the six runs. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.
+---PAGE_BREAK---
+
+Appendix S3: Evolutionary consequences of an anti-transmission vaccine (figure S3)
+
+Figure S3: The evolutionarily stable host exploitation (a,b) and prevalence (c,d) of the parasite as a function of parasite dispersal, vaccine efficacy, and vaccine coverage for an anti-transmission vaccine $r_3$. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.
+---PAGE_BREAK---
+
+Appendix S4: Effect of parasite evolution on total host density (figure S4)
+
+Figure S4: The total host density on the evolutionary attractor as a function of (a,c) vaccine efficacy and (b,d) vaccine coverage for (a,b) anti-infection ($r_1$) and (c,d) anti-growth ($r_2$) vaccines. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.
+---PAGE_BREAK---
+
+# Appendix S5: Effect of host dispersal (figure S5)
+
+In the main text, we investigate how changes in parasite dispersal affect parasite evolution when hosts reproduce locally. Here, we show the robustness of our results when host dispersal is either partially ($g_H = 0.5$) or fully global ($g_H = 1$). For anti-growth (b) and anti-toxin (c) vaccines, global host dispersal weakens the effect of local parasite dispersal on the evolution of virulence. For anti-infection vaccines (a), the interplay between global host dispersal and local parasite dispersal gives rise to a non-linear relationship between vaccine efficacy and ES virulence, with a maximum for near-perfect vaccines. A complete study of the interplay between host and parasite dispersal kernels is beyond the scope of this paper, but this result suggests that the evolutionary outcome depends on both host and parasite dispersal patterns (see also Lion & Gandon, 2015 for a discussion in homogeneous spatially structured populations). Note that, as expected, global host dispersal always leads to higher prevalence (d,e,f).
+
+Figure S5: The evolutionarily stable host exploitation (a,b,c) and prevalence (d,e,f) for (a,d) anti-infection ($r_1$), (b,e) anti-growth ($r_2$) and (c,f) anti-toxin ($r_4$) vaccines. For each figure, the results for fully local parasite dispersal ($g_P = 0$) and either fully local ($g_H = 0$, plain lines), partially global ($g_H = 0.5$, dotted lines), or fully global ($g_H = 1$, dashed lines) host dispersal are shown. The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.
+---PAGE_BREAK---
+
+# Appendix S6: Effect of host fecundity (figure S6)
+
+Previous studies have shown that, in the absence of vaccination, the kin competition effect is predicted to vanish when habitat saturation increases: as host fecundity increases, the differences between spatial and non-spatial models flatten out (Lion & Boots, 2010). Indeed, when host fecundity is infinite, the model converges towards a simple SIS model without demography, for which parasite dispersal only affects the speed of evolution, but not the endpoint. Stochastic simulations lead to the same result for anti-infection and anti-transmission vaccines, although for an anti-growth vaccine, the effect of host fecundity appears to be more complex (figure S6).
+
+Figure S6: The evolutionarily stable host exploitation (plain lines) and prevalence (dashed lines) of the parasite as a function of parasite dispersal for (a) an anti-infection vaccine ($r_1$), (b) an anti-growth vaccine ($r_2$) and (c) an anti-transmission vaccine ($r_3$) for a near-perfect vaccine ($\nu = 0.9$ and $r_i = 0.9$) and increasing values of host fecundity ($b = 8, 12, 24, 40, 100$). The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with 0 mean and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.
\ No newline at end of file
diff --git a/samples/texts_merged/6332297.md b/samples/texts_merged/6332297.md
new file mode 100644
index 0000000000000000000000000000000000000000..b181a9269b6f4db08f08bdfdf1d98a7c8b144bbe
--- /dev/null
+++ b/samples/texts_merged/6332297.md
@@ -0,0 +1,449 @@
+
+---PAGE_BREAK---
+
+# The Worst Case Finite Optimal Value in Interval Linear Programming
+
+Milan Hladík¹,*
+
+¹ Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University,
+Malostranské nám. 25, 11800, Prague, Czech Republic
+E-mail: *hladik@kam.mff.cuni.cz*
+
+**Abstract.** We consider a linear programming problem in which possibly all coefficients are subject to uncertainty in the form of deterministic intervals. The problem of computing the worst case optimal value has already been thoroughly investigated in the past. Notice that this value may be infinite due to infeasibility of some instances. This is a serious drawback if we know a priori that all instances should be feasible. Therefore we focus on the feasible instances only and study the problem of computing the worst case finite optimal value. We present a characterization for the general case and investigate special cases, too. We show that the problem is easy to solve provided interval uncertainty affects the objective function only, but the problem becomes intractable in the case of intervals in the right-hand side of the constraints. We also propose a finite reduction based on inspecting candidate bases. We show that processing a given basis is still an NP-hard problem even with a non-interval constraint matrix; however, the problem becomes tractable as long as uncertain coefficients are situated either in the objective function or in the right-hand side only.
+
+**Key words:** linear programming, interval analysis, sensitivity analysis, interval linear programming, NP-completeness
+
+Received: September 28, 2018; accepted: November 14, 2018; available online: December 13, 2018
+
+DOI: 10.17535/crorr.2018.0019
+
+## 1. Introduction
+
+Consider a linear programming (LP) problem
+
+$$f(A, b, c) = \min c^T x \text{ subject to } x \in M(A, b), \quad (1)$$
+
+where $M(A, b)$ is the feasible set with constraint matrix $A \in \mathbb{R}^{m \times n}$ and the right-hand side vector $b \in \mathbb{R}^m$. We use the convention $\min\emptyset = \infty$ and $\max\emptyset = -\infty$. Basically, one of the following canonical forms
+
+$$f(A,b,c) = \min c^T x \text{ subject to } Ax = b, x \ge 0, \qquad (\text{A})$$
+
+$$f(A,b,c) = \min c^T x \text{ subject to } Ax \le b, \qquad (\text{B})$$
+
+$$f(A,b,c) = \min c^T x \text{ subject to } Ax \le b, x \ge 0 \qquad (\text{C})$$
+
+is usually considered. As was repeatedly observed, in the interval setting, these forms are not equivalent to each other in general [10, 12, 17], so they have to be analyzed separately. We could consider a general form involving all the canonical forms together [13], but for the sake of exposition, it is better to treat the canonical forms separately.
+
+*Corresponding author.
+---PAGE_BREAK---
+
+**Interval data.** An interval matrix is defined as the set
+
+$$ \mathbf{A} = \{ A \in \mathbb{R}^{m \times n}; \underline{A} \leq A \leq \overline{A} \}, $$
+
+where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$, $\underline{A} \leq \overline{A}$ are given matrices. We will use also the notion of the midpoint and radius matrix defined respectively as
+
+$$ A_c := \frac{1}{2}(\underline{A} + \overline{A}), \quad A_{\Delta} := \frac{1}{2}(\overline{A} - \underline{A}). $$
+
+The set of all $m \times n$ interval matrices is denoted by $\mathbb{IR}^{m \times n}$. Similar notation is used for interval vectors, considered as one column interval matrices, and interval numbers. For interval arithmetic see, e.g., the textbooks [20, 22].
+
+**Interval linear programming.** Let $\mathbf{A} \in \mathbb{IR}^{m \times n}$, $\mathbf{b} \in \mathbb{IR}^m$ and $\mathbf{c} \in \mathbb{IR}^n$ be given. By an interval linear programming problem we mean a family of LP problems (1) with $\mathbf{A} \in \mathbf{A}$, $\mathbf{b} \in \mathbf{b}$ and $\mathbf{c} \in \mathbf{c}$. A particular LP problem from this family is called a *realization*.
+
+In the recent years, the optimal value range problem was intensively studied. The problem consists of determining the best case and worst case optimal values defined as
+
+$$
+\begin{align*}
+\underline{f} &:= \min f(A, b, c) && \text{subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, \\
+\overline{f} &:= \max f(A, b, c) && \text{subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}.
+\end{align*}
+$$
+
+The interval $\boldsymbol{f} = [\underline{f}, \overline{f}]$ then gives us the range of optimal values of the interval LP problem; each realization (1) has its optimal value in $\boldsymbol{f}$. If we define the image of optimal values
+
+$$ f(\mathbf{A}, \mathbf{b}, \mathbf{c}) := \{f(A, b, c) \mid A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}\}, $$
+
+then the optimal value range alternatively reads
+
+$$
+\begin{align*}
+\underline{f} &:= \min f(\mathbf{A}, \mathbf{b}, \mathbf{c}), \\
+\overline{f} &:= \max f(\mathbf{A}, \mathbf{b}, \mathbf{c}).
+\end{align*}
+$$
+
+References [6, 12] present a survey on this topic. Methods and formulae for determining $\underline{f}$ and $\overline{f}$ were discussed in [5, 11, 21, 24]. Some of the values are easily computable, but some are NP-hard, depending on the particular form (A)-(C) of the LP problem. The hard cases are $\overline{f}$ for type (A) and $\underline{f}$ for type (B); NP-hardness was proved in [6, 7, 26, 28]. Hladík [15] proposes an approximation method for the intractable cases. Garajová et al. [10] study, among other things, the effect of transformations of the constraints on the optimal value range.
+
+Besides the optimal value range problem, the effects of interval uncertainty on the optimal solution set were also investigated. See [2, 16, 19] for some of the recent results and the types of solutions considered.
+
+**Problem formulation.** The worst case optimal value $\overline{f}$ can be infinite (i.e., $\overline{f} = \infty$) due to infeasibility of some realization. However, in many situations, we know a priori or can assure that all instances are feasible; a typical example is the transportation problem [4]. Therefore, we focus on feasible realizations only and define the *worst case finite optimal value* as
+
+$$ \bar{f}_{fin} := \max f(A, b, c) \text{ subject to } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, f(A, b, c) < \infty. $$
+
+**Example 1.** Consider the interval LP problem
+
+$$
+\min x \quad \text{subject to} \quad x \le [-1, 1], x \ge 0.
+$$
+
+Choosing a negative value from the interval $[-1, 1]$, we obtain an infeasible LP problem. Choosing a nonnegative value, the resulting optimal value is zero. Therefore $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = \{0, \infty\}$ and $\boldsymbol{f} = [\underline{f}, \overline{f}] = [0, \infty]$, but $\bar{f}_{fin} = 0$.
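Example 1 can be checked numerically realization by realization; the helper below is our own illustration using `scipy.optimize.linprog` (function name is ours).

```python
from scipy.optimize import linprog

def example1_value(b):
    """Optimal value of  min x  s.t.  x <= b, x >= 0  for a fixed b
    drawn from the interval [-1, 1]; None marks infeasible realizations."""
    res = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[b], bounds=[(0, None)])
    return res.fun if res.status == 0 else None

# b = -1 is infeasible; every b >= 0 gives optimal value 0, so f_fin^bar = 0
print(example1_value(-1.0), example1_value(1.0))
```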
+---PAGE_BREAK---
+
+We will assume that there is at least one infeasible realization, that is, $f(A, b, c) = \infty$ for some $A \in \mathbf{A}$, $b \in \mathbf{b}$ and $c \in \mathbf{c}$; methods for checking this property are discussed in [6, 13], among others. Otherwise, if every realization is feasible, then $\bar{f}_{fin} = \bar{f}$, and we can use standard techniques for computing $\bar{f}$.
+
+## 2. General results
+
+As the following example shows, even the value of $\bar{f}_{fin}$ can be infinite. We will show later in Proposition 5 that this happens only if there are intervals in the constraint matrix.
+
+**Example 2.** Consider the interval LP problem
+
+$$ \min -x_1 \quad \text{subject to} \quad [0,1]x_2 = -1, x_1 - x_2 = 0, x_1, x_2 \le 0. $$
+
+By direct inspection, we observe that $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = [1, \infty]$ and $\mathbf{f} = [1, \infty]$. We have $\bar{f} = \infty$ because the LP problem is infeasible when choosing the zero from the interval $[0, 1]$. However, we have also $\bar{f}_{fin} = \infty$ since the optimal value $f(A, b, c) \to \infty$ as the selection from $[0, 1]$ tends to zero.
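The divergence in Example 2 can be observed numerically by solving realizations for decreasing values of the interval coefficient $a \in (0, 1]$; the function below is our own sketch using `scipy.optimize.linprog`.

```python
from scipy.optimize import linprog

def example2_value(a):
    """Optimal value of  min -x1  s.t.  a*x2 = -1, x1 - x2 = 0, x1, x2 <= 0
    for a fixed coefficient a drawn from (0, 1]."""
    res = linprog(c=[-1.0, 0.0],
                  A_eq=[[0.0, a], [1.0, -1.0]], b_eq=[-1.0, 0.0],
                  bounds=[(None, 0), (None, 0)])
    return res.fun if res.status == 0 else None

# the unique feasible point is x1 = x2 = -1/a, so the value 1/a blows up as a -> 0+
print([example2_value(a) for a in (1.0, 0.1, 0.01)])
```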
+
+Denote by
+
+$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c) \qquad (2) $$
+
+the dual problem to (1). For the canonical forms (A)–(C), the dual problems respectively read
+
+$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, \qquad (A) $$
+
+$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y = c, y \le 0, \qquad (B) $$
+
+$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, y \le 0. \qquad (C) $$
+
+By duality in linear programming, we can replace the inner optimization problem in the definition of $\bar{f}_{fin}$ by its dual problem with no additional assumptions. This is a bit surprising since duality in real or interval linear programming usually needs some kind of (strong) feasibility; see Novotná et al. [23].
+
+**Proposition 1.** We have
+
+$$ \bar{f}_{fin} = \max g(A,b,c) \quad \text{subject to} \quad A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, g(A,b,c) < \infty. \qquad (3) $$
+
+**Proof.** By strong duality in linear programming, both primal and dual problems have the same optimal value as long as at least one of them is feasible. If the primal problem is infeasible for every realization of interval data, then the dual problem is for every realization either infeasible or unbounded. In any case, both sides of (3) are equal to $-\infty$. Thus we will assume that the feasible set $M(A,b)$ is nonempty for at least one realization. The assumption ensures feasibility of at least one realization, so we can replace the primal problem by the dual one. Notice that feasibility of all realizations is not necessary to assume since primarily infeasible instances are idle for both primal and dual problems. $\square$
+
+The advantage of the formula (3) is that the “max min” optimization problem is reduced to the “max max” problem
+
+$$ \bar{f}_{fin} = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c), M(A,b) \neq \emptyset, A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, \qquad (4) $$
+
+which is hopefully easier to deal with.
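For small instances with intervals in the right-hand side only, one can simply enumerate endpoint realizations of $b$ and keep the largest finite optimal value. The brute-force sketch below is our own illustration: it is exponential in the number of constraints and relies on the extrema being attained at endpoint realizations, so it is not a general algorithm.

```python
from itertools import product
from scipy.optimize import linprog

def worst_finite_value(c, A_ub, b_lo, b_hi, bounds):
    """Brute-force sketch of the worst case finite optimal value when
    only the right-hand side is interval ([b_lo, b_hi] componentwise).

    Enumerates every endpoint realization b, solves min c^T x subject
    to A_ub x <= b with the given variable bounds, and keeps the
    largest finite optimum. Illustration only.
    """
    best = float("-inf")
    for b in product(*zip(b_lo, b_hi)):
        res = linprog(c, A_ub=A_ub, b_ub=list(b), bounds=bounds)
        if res.status == 0:        # feasible realization with finite optimum
            best = max(best, res.fun)
    return best
```

On Example 1 this returns 0, matching $\bar{f}_{fin} = 0$.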
+---PAGE_BREAK---
+
+## 3. Special cases with $A$ real
+
+In this section, we focus on certain sub-classes of the main problem. In particular, we consider the case with a real constraint matrix, i.e., $A_{\Delta} = 0$. This case is not much of a restriction on generality since the matrix $A$ characterizes the structure of the model and is often fixed. This is particularly true for transportation problems or network flows [1, 27]. In contrast, the costs $c$ in the objective function and the capacities corresponding to the right-hand side vector $b$ are typically affected by various kinds of uncertainty.
+
+As we already mentioned, the transformations between the LP forms (A)-(C) are not equivalence-preserving in general. Nevertheless, in some cases, they are. Garajová et al. [10] showed that provided $A$ is real, finite optimal values (and therefore also $\bar{f}_{fin}$) are not changed under the following transformations:
+
+* transform an interval LP problem of type (A)
+
+$$ \min c^T x \text{ subject to } Ax = b, x \ge 0 $$
+
+to form (C) by splitting equations into double inequalities
+
+$$ \min c^T x \text{ subject to } Ax \le b, Ax \ge b, x \ge 0, $$
+
+* transform an interval LP problem of type (B)
+
+$$ \min c^T x \text{ subject to } Ax \le b $$
+
+to form (C) by imposing nonnegativity of variables
+
+$$ \min c^T x^{+} - c^T x^{-} \text{ subject to } Ax^{+} - Ax^{-} \le b, x^{+}, x^{-} \ge 0. $$
+
+In Garajová et al. [10], it was also observed that the first transformation may change finite optimal values in the case with interval $\mathbf{A}$. Below, we show by an example that this is also true for the second transformation.
+
+**Example 3.** Consider the interval LP problem of type (B)
+
+$$ \min -x \text{ subject to } [0,1]x \le -1, -[1,2]x \le 5. $$
+
+It is easy to see that $f = [1, 5] \cup \{\infty\}$ and $\bar{f}_{fin} = 5$. Imposing nonnegativity of variables leads to the interval LP problem
+
+$$ \min -x^{+} + x^{-} \text{ subject to } [0,1]x^{+} - [0,1]x^{-} \le -1, -[1,2]x^{+} + [1,2]x^{-} \le 5. $$
+
+Now, the set of optimal values expands significantly. For instance, the realization
+
+$$ \min -x^{+} + x^{-} \text{ subject to } 0.1x^{+} - 0.1x^{-} \le -1, -2x^{+} + 1x^{-} \le 5 $$
+
+has the optimal value of 10. By direct inspection, we can see that $f = \{-\infty\} \cup [1, \infty]$. That is, the worst case finite optimal value grows to $\bar{f}_{fin} = \infty$.
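The value 10 for the displayed realization can be confirmed numerically; this check uses `scipy.optimize.linprog` and is only a verification of the example.

```python
from scipy.optimize import linprog

# The realization from Example 3 after imposing nonnegativity:
#   min -x+ + x-  s.t.  0.1 x+ - 0.1 x- <= -1,  -2 x+ + x- <= 5,  x+, x- >= 0.
# The first constraint forces x- - x+ >= 10, which equals the objective,
# and the point (x+, x-) = (5, 15) attains it, so the optimal value is 10.
res = linprog(c=[-1.0, 1.0],
              A_ub=[[0.1, -0.1], [-2.0, 1.0]],
              b_ub=[-1.0, 5.0],
              bounds=[(0, None), (0, None)])
print(res.fun)
```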
+
+### 3.1. Interval objective function
+
+If interval data are situated in the objective vector only, computation of $\bar{f}_{fin}$ is easy just by solving one LP problem.
+
+**Proposition 2.** If $A$ and $b$ are real, then computation of $\bar{f}_{fin}$ is a polynomial problem.
+---PAGE_BREAK---
+
+**Proof.** Under the assumptions, the problem (4) takes the form of an LP problem in variables $x, y, c$. Moreover, the variable $c$ can easily be eliminated. For types (A) and (C) in particular, the resulting LP problems read, respectively,
+
+$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax = b, x \ge 0, A^T y \le \bar{c}, \qquad (5) $$
+
+$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, x \ge 0, A^T y \le \bar{c}, y \le 0. \qquad (6) $$
+
+For type (B) we have
+
+$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, \underline{c} \le A^T y \le \bar{c}, y \le 0. \quad \square $$
+
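As a concrete sketch of Proposition 2, formula (5) can be solved by a single LP that stacks the primal feasibility constraints and the dual constraints with $c := \overline{c}$. The helper below is our own illustration for type (A), using `scipy.optimize.linprog` (function name is ours).

```python
import numpy as np
from scipy.optimize import linprog

def f_fin_bar_type_A(A, b, c_hi):
    """One-LP computation of the worst case finite optimal value for
    type (A) with real A, b and interval objective with upper bound c_hi:
        max b^T y  s.t.  A x = b, x >= 0, A^T y <= c_hi   (formula (5)).
    Returns +inf when no realization is feasible.
    """
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    m, n = A.shape
    # joint variable vector z = (x, y); linprog minimizes, so negate b^T y
    obj = np.concatenate([np.zeros(n), -b])
    A_eq = np.hstack([A, np.zeros((m, m))])        # A x = b (x part only)
    A_ub = np.hstack([np.zeros((n, n)), A.T])      # A^T y <= c_hi (y part only)
    bounds = [(0, None)] * n + [(None, None)] * m
    res = linprog(obj, A_ub=A_ub, b_ub=c_hi, A_eq=A_eq, b_eq=b, bounds=bounds)
    return -res.fun if res.status == 0 else float("inf")
```

For instance, for $\min c^T x$ s.t. $x_1 + x_2 = 1$, $x \ge 0$ with $\overline{c} = (1, 2)^T$, the routine returns 1, the worst case finite optimal value.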
+**Corollary 1.** Suppose that $A$ and $b$ are real and $M(A, b) \neq \emptyset$. For interval LP problems of types (A) and (C) the value of $\bar{f}_{fin}$ is attained at $c := \bar{c}$.
+
+**Proof.** Due to $M(A,b) \neq \emptyset$, problems (5) and (6) take respectively the form of
+
+$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \bar{c}, $$
+
+$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \bar{c}, y \le 0. $$
+
+Again by $M(A,b) \neq \emptyset$, we can replace the LP problems by their duals
+
+$$ \bar{f}_{fin} = \min \bar{c}^T x \text{ subject to } Ax = b, x \ge 0, $$
+
+$$ \bar{f}_{fin} = \min \bar{c}^T x \text{ subject to } Ax \le b, x \ge 0. $$
+
+The LP problems on the right-hand sides yield $\bar{f}_{fin}$ for the corresponding LP forms. $\square$
+
+Notice that for LP problems of type (B), this property is not true. In general, $\bar{f}_{fin}$ is not attained for extremal values of $c$, which is illustrated by the following example.
+
+**Example 4.** Consider the interval LP problem of type (B)
+
+$$ \min -x_1 + c_2 x_2 \text{ subject to } x_1 + x_2 \le 2, -x_1 + x_2 \le 0, $$
+
+where $c_2 \in \mathbf{c}_2 = [-2, -0.5]$. It is not hard to see that $\bar{f}_{fin} = \bar{f} = -2$, and it is attained for the value of $c_2 := -1$ at the point $x = (1, 1)^T$. For smaller $c_2$, the optimal value is $-1 + c_2 < -2$. For larger $c_2$, the optimal value is $-\infty$ since the problem is unbounded.
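The claims of Example 4 can be verified numerically: $c_2 = -1$ gives the value $-2$, smaller $c_2$ gives $-1 + c_2$, and larger $c_2$ makes the problem unbounded. The helper below is our own sketch using `scipy.optimize.linprog`; the variables are free, and any non-optimal solver status is treated as unbounded here.

```python
from scipy.optimize import linprog

def opt_value(c2):
    """Optimal value of  min -x1 + c2*x2  s.t.  x1 + x2 <= 2, -x1 + x2 <= 0
    with free variables; -inf signals an unbounded realization."""
    res = linprog(c=[-1.0, c2],
                  A_ub=[[1.0, 1.0], [-1.0, 1.0]], b_ub=[2.0, 0.0],
                  bounds=[(None, None), (None, None)])
    return res.fun if res.status == 0 else float("-inf")

print(opt_value(-1.0))   # bounded: value -2 at the vertex (1, 1)
print(opt_value(-1.5))   # bounded: value -1 + c2 = -2.5
print(opt_value(0.0))    # unbounded realization
```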
+
+### 3.2. Interval right-hand side
+
+In contrast to the previous case, if interval data are situated in the right-hand side vector only (i.e., $A_{\Delta} = 0$ and $c_{\Delta} = 0$), computation of $\bar{f}_{fin}$ is intractable.
+
+**Proposition 3.** If $A$ and $c$ are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (A).
+
+**Proof.** By [9], checking whether there is at least one feasible realization of the interval system
+
+$$ A^T y \le 0, b^T y > 0 $$
+
+is an NP-hard problem. Hence it is NP-hard to check $\bar{f} > 0$ (not yet speaking about $\bar{f}_{fin}$) for the interval LP problem
+
+$$ \max b^T y \text{ subject to } A^T y \le 0. $$
+---PAGE_BREAK---
+
+Due to positive homogeneity of the constraints, we can rewrite the problem as
+
+$$
+\max \mathbf{b}^T y \text{ subject to } A^T y \le 0, y \le e, -y \le e, \qquad (7)
+$$
+
+where $e = (1, \dots, 1)^T$. For this interval problem, checking $\bar{f}_{fin} > 0$ is NP-hard.
+
+The interval problem (7) follows the form (3); the condition $g(A, b, c) < \infty$ needn't be considered since the problem is feasible and finite for each realization. Thus we can view this problem as the dual of an interval LP problem of type (A), which has a fixed objective function vector and a fixed constraint matrix. $\square$
+
+**Corollary 2.** If *A* and *c* are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (B) and for type (C).
+
+*Proof.* By Proposition 3, checking $\bar{f}_{fin} > 0$ is NP-hard for an interval LP problem
+
+$$
+\min c^T x \text{ subject to } Ax = \mathbf{b}, x \ge 0.
+$$
+
+According to the discussion at the beginning of Section 3, the value of $\bar{f}_{fin}$ is not changed under
+the transformation of equations to double inequalities
+
+$$
+\min c^T x \text{ subject to } Ax \leq b, Ax \geq b, x \geq 0.
+$$
+
+This is, however, a type (C) problem, which must therefore be NP-hard.
+
+Type (B) problems are also NP-hard since every problem in the form of (C) is essentially
+in the form of (B). $\square$
+
+Despite intractability, computation of $\bar{f}_{fin}$ need not always be so hard. If $A$ is real, then (4) takes the form of a bilinear programming problem, that is, the constraints are linear and the objective function is bilinear (with respect to variables $y, b, c$). Even though it is NP-hard, some instances may be solved faster.
+
+**Example 5.** Consider an interval LP problem in the form
+
+$$
+\min c^T x \text{ subject to } Ax \geq b
+$$
+
+with $b > 0$. Then (4) reads
+
+$$
+\bar{f}_{\text{fin}} = \max b^T y \text{ subject to } Ax \geq b, A^T y = c, y \geq 0, b \in \mathbf{b}.
+$$
+
+Since the variables are nonnegative, it has the special form of a geometric program, and hence
+it is efficiently solvable [3].
+
+**4. Basis approach**
+
+If the LP problem (1) has a finite optimal value, then it possesses an optimal solution corresponding to an optimal basis. For concreteness, consider a type (A) problem. A basis $B$ is optimal if and only if the following two conditions are satisfied:
+
+$$
+A_B^{-1} b \ge 0, \tag{8a}
+$$
+
+$$
+c_N^T - c_B^T A_B^{-1} A_N \ge 0^T. \tag{8b}
+$$
+
+The optimal value then is $f(A, b, c) = c_B^T A_B^{-1} b.$
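As a minimal numerical sketch (not from the paper; the data below are hypothetical, chosen only so that the conditions hold), the optimality test (8) and the optimal value $c_B^T A_B^{-1} b$ can be checked directly:

```python
import numpy as np

def basis_is_optimal(A, b, c, B):
    """Check the optimality conditions (8a)-(8b) for the basic index set B.

    Returns (is_optimal, value), where value = c_B^T A_B^{-1} b.
    The columns of A not listed in B form the nonbasic part N.
    """
    N = [j for j in range(A.shape[1]) if j not in B]
    A_B, A_N = A[:, B], A[:, N]
    x_B = np.linalg.solve(A_B, b)                       # A_B^{-1} b, condition (8a)
    reduced = c[N] - c[B] @ np.linalg.solve(A_B, A_N)   # c_N^T - c_B^T A_B^{-1} A_N, (8b)
    return bool(np.all(x_B >= 0) and np.all(reduced >= 0)), float(c[B] @ x_B)

# Hypothetical data; B lists column indices 0 and 1
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-10.0, -20.0, 0.0, 0.0])
print(basis_is_optimal(A, b, c, [0, 1]))
```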
+
+Given a basis *B* and an interval LP problem, we now address the following question: what is the highest optimal value achievable at this basis? This can be formulated as the optimization problem
+
+$$
+\max c_B^T A_B^{-1} b \text{ subject to (8), } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}. \quad (9)
+$$
+---PAGE_BREAK---
+
+**Real constraint matrix.** Suppose from now on that $A$ is real. Then the optimization problem (9) reads
+
+$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \ c \in \mathbf{c}. \tag{10} $$
+
+Its constraints are linear in the variables $b, c$. Therefore, checking its feasibility is an easy task. In accordance with [12], we say that a basis $B$ is weakly optimal if it is optimal for some realization, that is, if the constraints of (10) are feasible. From the above reasoning, we have
+
+**Proposition 4.** *Checking whether a basis B is weakly optimal is a polynomial problem.*
+
+The feasible set of (10) is bounded, so the optimal value is bounded, too. Since there are finitely many bases, the worst case finite optimal value must be finite. Hence we just derived
+
+**Proposition 5.** If $A$ is real, then $\bar{f}_{fin} < \infty$.
+
+If $c$ is real, then (9) takes the form of an LP problem
+
+$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \tag{11} $$
+
+and so it is polynomially solvable. Similarly in the case when $b$ is real.
+
+**Proposition 6.** If $A, b$ are real or $A, c$ are real, then solving (9) is polynomial.
+
+Solving problem (9) with $A$ real and $\mathbf{b}, \mathbf{c}$ interval-valued is, however, still intractable.
+
+**Proposition 7.** If $A$ is real, then solving (9) is NP-hard.
+
+*Proof.* By Witsenhausen [29], it is NP-hard to find the maximum value of a bilinear form $u^T M v$ on interval domain $u, v \in [0, 1]^n$, where $M$ is symmetric nonsingular. We will reduce this problem to our problem. We put $\mathbf{b} := [0, 1]^n$ and $A_B := I_n$, where $I_n$ is the identity matrix. Next, we substitute $c_B := M u$. The condition
+
+$$ c_B = M u, \ u \in [0, 1]^n $$
+
+is equivalent to
+
+$$ 0 \leq M^{-1} c_B \leq 1, $$
+
+so we can formulate it as (8b) for $A_N = (M^{-1}, -M^{-1})$ and $c_N = (1^T, 0^T)^T$. The condition (8a) is trivially satisfied as $A_B^{-1} b = b \in [0, 1]^n$. This completes the reduction. $\square$
+
+**Real A and c.** By Proposition 3 we know that computing $\bar{f}_{fin}$ is NP-hard even when $A$ and $c$ are real, and intervals are situated in the right-hand side vector $\mathbf{b}$ only. The above considerations give us a finite reduction for computing $\bar{f}_{fin}$. For each basis $B$, check if it is weakly optimal and determine the worst case optimal value associated with $B$ by solving the LP problem (11).
+
+In this way, the box $\mathbf{b}$ splits into convex polyhedral sub-parts, which are usually called stability or critical regions in the context of sensitivity analysis and parametric programming [8]. Each region corresponds to a weakly optimal basis. In the area of interval linear programming, but in another context, stability regions were also discussed in Mráz [21].
+
+The obvious drawback of this approach is that there are exponentially many bases. On the other hand, the number of weakly optimal bases might be reasonably small. In order to process them, consider the following graph. The nodes correspond to weakly optimal bases. There is an edge between two nodes if and only if the corresponding bases are neighbors, that is, they differ in exactly one entry (the basic index sets differ in one entry). Since the set $\mathbf{b}$ of the objective vectors of the dual problem (2) is convex and compact, the graph of weakly
+---PAGE_BREAK---
+
+Figure 1: (Example 6) Illustration of the dual problem: for different values of the objective vector **b**, the optimal solution moves from $y^1$ to $y^2$ and to unbounded instances.
+
+optimal bases is connected. Therefore, we can start with one weakly optimal basis, inspect the neighboring bases for weak optimality and process until all weakly optimal bases are found.
+
+This method can be significantly faster than processing all possible bases. In particular, if the interval vector $\mathbf{b}$ is narrow, then we can expect that the number of weakly optimal bases is small, or even that there is a unique one. This case of a unique basis is called a *basis stable* problem and was investigated in [14, 18, 25]. Even though it is NP-hard to check basis stability of a basis $B$ for a general interval LP problem, there are practically efficient sufficient conditions; see [14].
+
+Moreover, basis stability is polynomially decidable provided $A, b$ or $A, c$ are real, which is our case. Concretely, we have to verify two conditions. First, check (8b), which is easy as all the data are constant. Second, compute by interval arithmetic the expression $A_B^{-1}\mathbf{b}$, and check that the lower bound is nonnegative.
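As a minimal sketch of this second check (the helper below is ours, not from the paper), the product of a real matrix with an interval vector can be enclosed by plain interval arithmetic, and the enclosure is exact componentwise:

```python
def real_matrix_times_interval_vector(M, b_lo, b_hi):
    """Enclosure of {M b : b_lo <= b <= b_hi} by interval arithmetic.

    Componentwise: y_i = sum_j M[i][j] * [b_lo[j], b_hi[j]].
    """
    m, n = len(M), len(M[0])
    lo, hi = [0.0] * m, [0.0] * m
    for i in range(m):
        for j in range(n):
            p, q = M[i][j] * b_lo[j], M[i][j] * b_hi[j]
            lo[i] += min(p, q)
            hi[i] += max(p, q)
    return lo, hi

# A_B^{-1} for the basis B = {1, 2} of Example 6 below, where
# A_B^{-1} b = (-b1 + 2*b2, b1 - b2), and b1 in [3, 5], b2 in [2, 4]
A_B_inv = [[-1.0, 2.0], [1.0, -1.0]]
lo, hi = real_matrix_times_interval_vector(A_B_inv, [3.0, 2.0], [5.0, 4.0])
print(lo, hi)  # basis stable w.r.t. (8a) iff every lower bound is >= 0
```

With these data both lower bounds come out negative, so condition (8a) can fail for some realizations; this is consistent with Example 6 having two weakly optimal bases rather than being basis stable.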
+
+**Example 6.** Consider the interval LP problem of type (A) with data
+
+$$A = \begin{pmatrix} 1 & 2 & 0 & -1 & -1 \\ 1 & 1 & 1 & 1 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} [3, 5] \\ [2, 4] \end{pmatrix}, \quad c = (10 \ 20 \ 5 \ 3 \ 1)^T.$$
+
+The dual problem is illustrated in Figure 1. There are two weakly optimal bases, $B = \{1, 2\}$ and $B' = \{1, 3\}$. In the figure, they correspond to the vertices $y^1 = (10,0)^T$ and $y^2 = (5,5)^T$.
+
+For basis B, the constraint $A_B^{-1}b \ge 0$ from (8a) takes the form
+
+$$
+\begin{aligned}
+-b_1 + 2b_2 &\geq 0, \\
+b_1 - b_2 &\geq 0.
+\end{aligned}
+ $$
+
+By solving the LP problem (11), we compute the highest optimal value corresponding to this basis as 50.
+
+For basis $B'$, the constraint $A_{B'}^{-1}b \ge 0$ takes the form
+
+$$
+\begin{aligned}
+b_1 &\geq 0, \\
+-b_1 + b_2 &\geq 0.
+\end{aligned}
+ $$
+
+The LP problem (11) now gives the value 40 as the highest optimal value associated with $B'$.
+
+In total, we see that the worst case optimal value is $\bar{f}_{fin} = 50$ and it is attained for basis B. Figure 2 depicts the interval vector **b** and its subparts corresponding to the optimal bases B and $B'$ and to infeasible instances.
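As a sketch, the two instances of the LP problem (11) from Example 6 can be solved with `scipy.optimize.linprog`; the reduced objectives $10 b_1$ and $5 b_1 + 5 b_2$ follow from expanding $c_B^T A_B^{-1} b$ with the example data (our derivation, using the constraint forms stated above):

```python
from scipy.optimize import linprog

def best_value_for_basis(obj, A_ub, b_ub, box):
    """Solve (11): maximize obj^T b over b in the interval box,
    subject to the basis-feasibility constraints written as A_ub b <= b_ub."""
    res = linprog(c=[-v for v in obj], A_ub=A_ub, b_ub=b_ub, bounds=box)
    return -res.fun

box = [(3, 5), (2, 4)]  # b1 in [3, 5], b2 in [2, 4]

# Basis B = {1, 2}: objective c_B^T A_B^{-1} b = 10*b1;
# (8a) reads -b1 + 2*b2 >= 0 and b1 - b2 >= 0.
print(best_value_for_basis([10, 0], [[1, -2], [-1, 1]], [0, 0], box))

# Basis B' = {1, 3}: objective c_{B'}^T A_{B'}^{-1} b = 5*b1 + 5*b2;
# (8a) reads b1 >= 0 and -b1 + b2 >= 0.
print(best_value_for_basis([5, 5], [[-1, 0], [1, -1]], [0, 0], box))
```

The two printed values reproduce the highest optimal values 50 and 40 discussed in the example.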
+---PAGE_BREAK---
+
+Figure 2: (Example 6) The sub-parts of interval vector **b** corresponding to the optimal bases **B** and **B'** and to infeasible instances.
+
+## 5. Conclusion
+
+We investigated the problem of computing the highest possible optimal value when the input data are subject to variations in given intervals and we restrict ourselves to feasible instances only. We analyzed the computational complexity issues by identifying the cases that are already polynomially solvable and those that are still NP-hard. The proposed basis approach is not a priori exponential even for the NP-hard cases.
+
+Several open questions arose during the work on this topic. These include, for example, the computational complexity of the following question: Is $\bar{f}_{fin}$ attained for a given basis $B$?
+
+## Acknowledgement
+
+The author was supported by the Czech Science Foundation Grant P403-18-04735S.
+
+## References
+
+[1] Ahuja, R. K., Magnanti, T. L. and Orlin, J. B. (1993). Network Flows. Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall.
+
+[2] Ashayerinasab, H. A., Nehi, H. M. and Allahdadi, M. (2018). Solving the interval linear programming problem: A new algorithm for a general case. Expert Systems with Applications, 93, Suppl. C, 39–49.
+
+[3] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
+
+[4] Cerulli, R., D'Ambrosio, C. and Gentili, M. (2017). Best and worst values of the optimal cost of the interval transportation problem. In Sforza, A., and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 367–374). Cham: Springer.
+
+[5] Chinneck, J. W. and Ramadan, K. (2000). Linear programming with interval coefficients. Journal of the Operational Research Society, 51(2), 209–220.
+---PAGE_BREAK---
+
+[6] Fiedler, M., Nedoma, J., Ramík, J., Rohn, J. and Zimmermann, K. (2006). Linear Optimization Problems with Inexact Data. New York: Springer.
+
+[7] Gabrel, V. and Murat, C. (2010). Robustness and duality in linear programming. Journal of the Operational Research Society, 61(8), 1288-1296.
+
+[8] Gal, T. and Greenberg, H. J. (Eds.) (1997). Advances in Sensitivity Analysis and Parametric Programming. Boston: Kluwer Academic Publishers.
+
+[9] Garajová, E., Hladík, M. and Rada, M. (2017). On the properties of interval linear programs with a fixed coefficient matrix. In Sforza, A., and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 393–401). Cham: Springer.
+
+[10] Garajová, E., Hladík, M. and Rada, M. (2018). Interval linear programming under transformations: Optimal solutions and optimal value range. Central European Journal of Operations Research. In press, doi: 10.1007/s10100-018-0580-5
+
+[11] Hladík, M. (2009). Optimal value range in interval linear programming. Fuzzy Optimization and Decision Making, 8(3), 283-294.
+
+[12] Hladík, M. (2012). Interval linear programming: A survey. In Mann, Z. A. (Ed.), Linear Programming – New Frontiers in Theory and Applications, chapter 2, (pp. 85–120). New York: Nova Science Publishers.
+
+[13] Hladík, M. (2013). Weak and strong solvability of interval linear systems of equations and inequalities. Linear Algebra and its Applications, 438(11), 4156–4165.
+
+[14] Hladík, M. (2014). How to determine basis stability in interval linear programming. Optimization Letters, 8(1), 375–389.
+
+[15] Hladík, M. (2014). On approximation of the best case optimal value in interval linear programming. Optimization Letters, 8(7), 1985–1997.
+
+[16] Hladík, M. (2017). On strong optimality of interval linear programming. Optimization Letters, 11(7), 1459–1468.
+
+[17] Hladík, M. (2017). Transformations of interval linear systems of equations and inequalities. Linear and Multilinear Algebra, 65(2), 211–223.
+
+[18] Koníčková, J. (2001). Sufficient condition of basis stability of an interval linear programming problem. ZAMM – Zeitschrift für Angewandte Mathematik und Mechanik, 81, Suppl. 3, 677–678.
+
+[19] Li, W., Liu, X. and Li, H. (2015). Generalized solutions to interval linear programmes and related necessary and sufficient optimality conditions. Optimization Methods and Software, 30(3), 516–530.
+
+[20] Moore, R. E., Kearfott, R. B., and Cloud, M. J. (2009). Introduction to Interval Analysis. Philadelphia, PA: SIAM.
+
+[21] Mráz, F. (1998). Calculating the exact bounds of optimal values in LP with interval coefficients. Annals of Operations Research, 81, 51–62.
+
+[22] Neumaier, A. (1990). Interval Methods for Systems of Equations. Cambridge: Cambridge University Press.
+
+[23] Novotná, J., Hladík, M. and Masařík, T. (2017). Duality gap in interval linear programming. In Zadnik Stirn et al., L. (Ed.), Proceedings of the 14th International Symposium on Operational Research SOR’17, Bled, Slovenia, September 27-29, 2017, (pp. 501–506). Ljubljana, Slovenia: Slovenian Society Informatika.
+
+[24] Rohn, J. (1984). Interval linear systems. Freiburger Intervall-Berichte 84/7, Albert-Ludwigs-Universität, Freiburg.
+
+[25] Rohn, J. (1993). Stability of the optimal basis of a linear program under uncertainty. Operations Research Letters, 13(1), 9–12.
+
+[26] Rohn, J. (1997). Complexity of some linear problems with interval data. Reliable Computing, 3(3), 315–323.
+
+[27] Schrijver, A. (2004). Combinatorial Optimization. Polyhedra and efficiency, volume 24 of Algorithms and Combinatorics. Berlin: Springer.
+
+[28] Serafini, P. (2005). Linear programming with variable matrix entries. Operations Research Letters, 33(2), 165–170.
+
+[29] Witsenhausen, H. S. (1986). A simple bilinear optimization problem. Systems & Control Letters, 8(1), 1–4.
\ No newline at end of file
diff --git a/samples/texts_merged/6376231.md b/samples/texts_merged/6376231.md
new file mode 100644
index 0000000000000000000000000000000000000000..ef29ecdba73d680195d947e3c446efa799cbf459
--- /dev/null
+++ b/samples/texts_merged/6376231.md
@@ -0,0 +1,621 @@
+
+---PAGE_BREAK---
+
+Project Choice from a Verifiable Proposal
+
+Yingni Guo
+
+Eran Shmaya*
+
+May 8, 2021
+
+Abstract
+
+An agent observes the set of available projects and proposes some, but not neces-
+sarily all, of them. A principal chooses one or none from the proposed set. We solve
+for a mechanism that minimizes the principal's worst-case regret. If the agent can pro-
+pose only one project, it is chosen for sure if the principal's payoff exceeds a threshold;
+otherwise, the probability that it is chosen decreases in the agent's payoff. If the agent
+can propose multiple projects, his payoff from a multiproject proposal equals the max-
+imal payoff from proposing each project alone. Our results highlight the benefits from
+randomization and from the ability to propose multiple projects.
+
+JEL: D81, D82, D86
+
+Keywords: verifiable disclosure, evidence, project choice, regret minimization
+
+# 1 Introduction
+
+Project choice is one of the most important functions of an organization. The process
+often involves two parties: (i) a party at a lower hierarchical level who has expertise and
+
+*Guo: Department of Economics, Northwestern University; email: yingni.guo@northwestern.edu.
+Shmaya: Department of Economics, Stony Brook University; email: eran.shmaya@stonybrook.edu. We
+thank seminar audiences at the One World Mathematical Game Theory Seminar, the Toulouse School of
+Economics, the University of Bonn, Northwestern University, the University of Pittsburgh, and Carnegie
+Mellon University for valuable feedback.
+---PAGE_BREAK---
+
+proposes projects, and (ii) a party at a higher hierarchical level who evaluates the proposed projects and makes the choice. This describes the relationship between a division and the headquarters when the division has a chance to choose a factory location or to choose an office building. It also applies to the relationship between a department and the university when the department has a hiring slot open.
+
+This process of project choice is naturally a principal-agent problem. The agent privately observes which projects are available and proposes a subset of the available projects. The principal chooses one from the proposed projects or rejects them all. If the two parties had identical preferences over projects, the agent would propose the project that is their shared favorite among the available ones, and the principal would always automatically approve the agent's proposal. In many applications, however, the two parties do not share the same preferences. For instance, the division may fail to internalize each project's externalities on other divisions; the department and the university may put different weights on candidates' research and nonresearch abilities. Armed with the proposal-setting power, the agent has a tendency to propose his favorite project and hide his less preferred ones, even if those projects are "superstars" for the principal. How shall the principal encourage the agent to propose the principal's preferred projects? What is the principal's optimal mechanism for choosing a project?
+
+It is easy to see that no mechanism can guarantee that the principal's favorite project among the available ones will always be chosen. We define the principal's *regret* as the difference between his payoff from his favorite project and his expected payoff from the project chosen under the mechanism. We look for a mechanism that works fairly well for the principal in all circumstances, i.e., a mechanism that minimizes the principal's worst-case regret. This worst-case regret approach to uncertainty can be traced back to Wald (1950) and Savage (1951). It has since been used widely in game theory, mechanism design, and machine learning. A decision theoretical axiomatization for the minimax regret criterion can
+---PAGE_BREAK---
+
+be found in Milnor (1954) and Stoye (2011).
+
+Depending on the principal's verification capacity, we distinguish two environments. In the *multiproject* environment, the agent can propose any subset of the available projects. In the *single-project* environment, the agent can propose only one available project. Besides project choice within organizations, the single-project environment also applies to antitrust regulation: a firm chooses a merger from available merger opportunities to propose and the regulator decides whether to approve or reject the firm's proposal (e.g., Lyons (2003), Neven and Röller (2005), Armstrong and Vickers (2010), Ottaviani and Wickelgren (2011), Nocke and Whinston (2013)).
+
+We take the environment as exogenous and derive the optimal mechanisms in both environments. In the single-project environment, the only way for the principal to incentivize the agent is to reject his proposal with positive probability. The multiproject environment, however, allows the principal to “spend” this rejection probability on other proposed projects. Therefore, even though the principal chooses at most one project, he expects to do better in the multiproject environment than in the single-project one. Comparing the two environments will also allow us to quantify the principal’s gain from higher verification capacity.
+
+We begin with the single-project environment. A mechanism specifies for each proposed single project the probability that it will be approved. In the optimal mechanism, if the proposed project gives the principal a sufficiently high payoff, it is approved for sure. We call such projects *good* projects for the principal. If, on the contrary, the proposed project is *mediocre* for the principal, it is approved only with some probability. The probability that a mediocre project is approved decreases in its payoff to the agent, in order to deter the agent from hiding projects that are more valuable for the principal. This mechanism aligns the incentives of the agent with those of the principal in the following ways. First, if the agent has at least one good project for the principal, he will propose a good project. Second, if all his projects are mediocre for the principal, he will propose the principal's favorite one.
+---PAGE_BREAK---
+
+In the multiproject environment, a mechanism specifies for each proposed set of projects a randomization over the proposed projects and “no project.” If the agent proposes only one project, the optimal mechanism takes a form similar to the one in the single-project environment. In particular, if the proposed project is sufficiently good for the principal, it is chosen for sure. Otherwise, the project is chosen with some probability that decreases in its payoff to the agent. If the agent proposes more than one project, the randomization maximizes the principal’s expected payoff, subject to the constraint that the agent is promised the maximal expected payoff he would get from proposing each project alone. Under this mechanism, the more projects the agent proposes, the weakly higher his expected payoff is, so the agent is willing to propose all available projects.
+
+Since the agent gets the maximal expected payoff from proposing each project alone, we call this mechanism the *project-wide maximal-payoff mechanism*. This mechanism implements a compromise between the two parties in the multiproject environment: with some probability the choice favors the agent and with some probability it favors the principal. We also show that randomization is crucial for the principal's minimal worst-case regret to be lower in the multiproject environment than in the single-project one. In other words, if the principal is restricted to deterministic mechanisms, his minimal worst-case regret is the same in both the single-project and multiproject environments.
+
+**Related literature.** Our paper is closely related to Armstrong and Vickers (2010) and Nocke and Whinston (2013), which study the project choice problem using the Bayesian approach. Armstrong and Vickers (2010) characterize the optimal deterministic mechanism in the single-project environment and show through examples that the principal does strictly better if randomization or multiproject proposals are allowed. Nocke and Whinston (2013) focus on mergers (i.e., projects) that are ex ante different and further incorporate the bargaining process among firms. They show that a tougher standard is imposed on mergers
+---PAGE_BREAK---
+
+involving larger partners. We take the worst-case regret approach to this multidimensional
+screening problem. This more tractable approach allows us to explore questions which are
+intractable under the Bayesian approach, including how much the principal benefits from
+randomization, from higher verification capacity and from a smaller project domain.
+
+Goel and Hann-Caruthers (2020) consider the project choice problem where the number of available projects is public information. The projects are only partially verifiable, since the agent's only constraint is not to overreport projects' payoffs to the principal. Because their agent cannot hide projects like our agent does, he loses the proposal-setting power. The resulting incentive schemes are thus quite different.
+
+Since in our model the agent can propose only those projects that are available, the agent's proposal is some evidence about his private information. Hence, our paper is closely related to research on verifiable disclosure (e.g., Grossman and Hart (1980), Grossman (1981), Milgrom (1981), Dye (1985)) and, more broadly, the evidence literature (see Dekel (2016) for a survey). We discuss the relation to this literature in more detail after we introduce the model.
+
+Our result relates to a theme in Aghion and Tirole (1997), namely, that the principal has formal authority, but the agent shares real authority due to his private information. We take this theme one step further. Our agent's real authority has two sources: he knows which projects are available, and he determines the proposal from which the principal chooses a project. The idea of striking a compromise is related to Bonatti and Rantakari (2016). They examine the compromise between two symmetric, competing agents whose efforts are crucial for discovering projects. We instead focus on the compromise between an agent who proposes projects and a principal who chooses one or none from the proposed projects.
+
+Finally, our paper contributes to the literature on mechanism design in which the designer minimizes his worst-case regret. Hurwicz and Shapiro (1978) examine a moral hazard problem. Bergemann and Schlag (2008, 2011) examine monopoly pricing. Renou and Schlag
+---PAGE_BREAK---
+
+(2011) apply the solution concept of $\epsilon$-minimax regret to the problem of implementing social choice correspondences. Beviá and Corchón (2019) examine the contest which minimizes the designer's worst-case regret. Guo and Shmaya (2019) study the optimal mechanism for monopoly regulation and Malladi (2020) studies the optimal approval rules for innovation. More broadly, we contribute to the growing literature of mechanism design with worst-case objectives. For a survey on robustness in mechanism design, see Carroll (2019).
+
+## 2 Model and mechanism
+
+Let $D$ be the domain of all possible *verifiable projects*. Let $u: D \to \mathbb{R}_+$ be the agent's payoff function, so his payoff is $u(a)$ if project $a$ is chosen. If no project is chosen, the agent's payoff is zero.
+
+The agent's private type $A \subseteq D$ is a finite set of available projects. The agent proposes a set $P$ of projects, and the principal can choose one project from this set. The set $P$ is called the agent's *proposal*. It must satisfy two conditions. First, the agent can propose only available projects. Hence, the agent's proposal must be a subset of his type, $P \subseteq A$. This is what we meant earlier when we said that projects are verifiable. Second, $P \in \mathcal{E}$ for some fixed set $\mathcal{E}$ of subsets of $D$. The set $\mathcal{E}$ captures all the exogenous restrictions on the proposal. For instance, in the setting of antitrust regulation, the agent is restricted to proposing at most one project. In many organizations, the principal has limited verification capacity or limited attention, so the agent can propose at most a certain number of projects.
+
+We begin with two environments which are natural first steps: *single-project* and *multiproject*. In the single-project environment, the agent can propose at most one available project, so $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. In the multiproject environment, the agent can propose any set of available projects so $\mathcal{E} = 2^D$, the power set of $D$. In subsection 6.1, we discuss the intermediate environments in which the agent can propose up to $k$ projects for some fixed
+---PAGE_BREAK---
+
+number $k \ge 2$.
+
+The agent's proposal *P* serves two roles. First, if we view a proposal as a message, then different types have access to different messages. Hence, the agent's proposal is some evidence about his type, as in Green and Laffont (1986). We explore the implication of this evidence role in section 3. Second, the proposal determines the set of projects from which the principal can choose. This second role is a key difference between our paper and the evidence literature. Once the agent puts his proposal on the table, there is no relevant information asymmetry left. This implies that cheap-talk communication will not help. We elaborate on this point in subsection 6.2.
+
+A subprobability measure over D with a finite support is given by $\pi: D \to [0, 1]$ such that
+
+$$\text{support}(\pi) = \{a \in D : \pi(a) > 0\}$$
+
+is finite, and $\sum_a \pi(a) \le 1$. When we say that a project *is chosen from* a subprobability measure $\pi$ with finite support, we mean that project *a* is chosen with probability $\pi(a)$, and that no project is chosen with probability $1 - \sum_a \pi(a)$.
+
+The principal's ability to reject all proposed projects (or equivalently, to choose no project) is crucial for him to retain some "bargaining power." If, on the contrary, the principal must choose a project as long as the agent has proposed some, then the agent effectively has all the bargaining power. The agent will propose only his favorite project which will be chosen for sure.
+
+A *mechanism* $\rho$ attaches to each proposal $P \in \mathcal{E}$ a subprobability measure $\rho(\cdot|P)$ such that $\text{support}(\rho(\cdot|P)) \subseteq P$. The interpretation is that, if the agent proposes $P$, then a project is chosen from the subprobability measure $\rho(\cdot|P)$. Thus, the agent's expected payoff under the mechanism $\rho$ if he proposes $P$ is $U(\rho, P) = \sum_{a \in P} u(a)\rho(a|P)$.
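These two objects can be sketched directly (a toy mechanism with hypothetical project names and payoffs, not one from the paper):

```python
import random

def sample_choice(pi):
    """Draw a project from a subprobability measure pi (dict: project -> prob);
    with the residual probability 1 - sum(pi.values()), no project is chosen."""
    r, acc = random.random(), 0.0
    for a, p in pi.items():
        acc += p
        if r < acc:
            return a
    return None  # "no project"

def agent_payoff(u, rho, P):
    """U(rho, P) = sum over a in P of u(a) * rho(a | P)."""
    pi = rho(P)
    return sum(u[a] * pi.get(a, 0.0) for a in P)

# Hypothetical payoffs; the toy mechanism splits probability evenly over the
# proposed projects, always reserving one equal share for "no project"
u = {'a': 2.0, 'b': 1.0}
rho = lambda P: {x: 1.0 / (len(P) + 1) for x in P}
print(agent_payoff(u, rho, frozenset({'a', 'b'})))
```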
+
+A *choice function* $f$ attaches to each type $A$ of the agent a subprobability measure $f(\cdot|A)$
+---PAGE_BREAK---
+
+such that $\text{support}(f(\cdot|A)) \subseteq A$. The interpretation is that, if the set of available projects is $A$, then a project is chosen from the subprobability measure $f(\cdot|A)$.
+
+A choice function $f$ is *implemented* by a mechanism $\rho$ if, for every type $A$ of the agent, there exists a probability measure $\mu$ with support over $\text{argmax}_{P\subseteq A, P\in\mathcal{E}} U(\rho, P)$ such that $f(a|A) = \sum_P \mu(P)\rho(a|P)$. The interpretation is that the agent selects only proposals that give him the highest expected payoff among the proposals that he can make, and that, if the agent has multiple optimal proposals, then he can randomize among them.
+
+# 3 The evidence structure
+
+When the agent proposes a set $P$ of projects, he provides evidence that his type $A$ satisfies $P \subseteq A$. In this section, we discuss the implication of this role of the agent's proposal as well as the relation to the evidence literature.
+
+## 3.1 Normality in the multiproject environment
+
+In our multiproject environment, where $\mathcal{E} = 2^D$, the agent has the ability to provide the maximal evidence for his type. This property is called *normality* in the literature (Lipman and Seppi (1995), Bull and Watson (2007), Ben-Porath, Dekel and Lipman (2019)). Another interpretation of the multiproject environment is to view an agent who proposes a set $P$ as an agent who claims that his type is $P$. The relation that “type $A$ can claim to be type $B$” between types is reflexive and transitive, by the corresponding properties of the inclusion relation between sets. Transitivity is called the nested range condition in Green and Laffont (1986) and is also assumed in Hart, Kremer and Perry (2017).
+
+In our single-project environment, where $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$, normality does not hold. The single-project environment is the main focus in Armstrong and Vickers (2010) and Nocke and Whinston (2013), and is similar to the assumption in Glazer and Rubinstein
+---PAGE_BREAK---
+
+(2006) and Sher (2014) that the speaker can make one and only one of the statements he has access to.
+
+## 3.2 Revelation principle in the multiproject environment
+
+Consider the multiproject environment $\mathcal{E} = 2^D$. A mechanism $\rho$ is incentive-compatible (IC) if the agent finds it optimal to propose his type $A$ truthfully. That is, $U(\rho, A) \ge U(\rho, P)$ for every finite set $A \subseteq D$ and every subset $P \subseteq A$. Equivalently, a mechanism $\rho$ is IC if and only if $U(\rho, P)$ weakly increases in $P$ with respect to set inclusion. The following proposition states the revelation principle in the multiproject environment.
+
+**Proposition 3.1.** *Assume $\mathcal{E} = 2^D$. If a choice function $f$ is implemented by some mechanism, then the mechanism $f$ is IC and implements the choice function $f$.*
+
+As we explained in subsection 3.1, the multiproject environment satisfies normality and the nested range condition. Previous papers (e.g., Green and Laffont (1986), Bull and Watson (2007)) have shown that the revelation principle holds under these assumptions. Our proposition 3.1 does not follow directly from their theorems, however, because the agent's proposal $P$ serves two roles in our model. In addition to providing evidence, the proposal also determines the set of projects from which the principal can choose. Nonetheless, a similar argument for the revelation principle can be made within our model as well.
+
+*Proof of Proposition 3.1.* Assume that the mechanism $\rho$ implements the choice function $f$. Then for every finite set $A \subseteq D$ and every subset $P \subseteq A$, we have:
+
+$$U(f, A) = \max_{Q \subseteq A} U(\rho, Q) \ge \max_{Q \subseteq P} U(\rho, Q) = U(f, P),$$
+
+where the inequality follows from the fact that $Q \subseteq P$ implies $Q \subseteq A$, and the two equalities follow from the fact that $\rho$ implements $f$. Hence, the mechanism $f$ is IC. Also, by definition,
+---PAGE_BREAK---
+
+if the mechanism $f$ is IC, then it implements the choice function $f$. $\square$
+
+Since an implementable choice function is itself an IC mechanism and vice versa, we will
+use both terms interchangeably whenever we discuss the multiproject environment.
+
+# 4 The principal's problem
+
+Let $v: D \rightarrow \mathbb{R}_{+}$ be the principal's payoff function, so his payoff is $v(a)$ if project $a$ is chosen.
+If no project is chosen, the principal's payoff is zero.
+
+The principal's *regret* from a choice function $f$ when the set of available projects is $A$ is:
+
+$$ \mathrm{RGRT}(f, A) = \max_{a \in A} v(a) - \sum_{a \in A} v(a)f(a|A). $$
+
+The regret is the difference between what the principal could have achieved if he knew the set $A$ of available projects and what he actually achieves. Savage (1951) calls this difference *loss*. We instead call it regret, following the more recent game theory and computer science literature. Wald (1950) and Savage (1972) propose to consider only *admissible* choice functions (i.e., choice functions that are not weakly dominated). A choice function $f$ is *admissible* if there exists no other choice function $f'$ such that the principal's regret is weakly higher under $f$ than under $f'$ for every type of the agent and strictly higher for some type. For the rest of the paper, we focus on admissible choice functions.
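As a concrete numerical illustration of the regret formula (a sketch, not part of the formal analysis; the dictionary encoding of a choice rule is our own device, not the paper's notation):

```python
def regret(choice_probs, A):
    """RGRT for a type A (a list of (u, v) projects) and a subprobability
    choice rule: {project: probability}, with total probability <= 1."""
    best = max(v for _, v in A)                       # full-information payoff
    actual = sum(v * choice_probs.get((u, v), 0.0) for u, v in A)
    return best - actual

# Type {(1, 1/2), (0, 1)}: always picking the agent's favorite (1, 1/2)
# yields regret 1/2, while a fifty-fifty randomization yields regret 1/4.
A = [(1.0, 0.5), (0.0, 1.0)]
print(regret({(1.0, 0.5): 1.0}, A))                   # 0.5
print(regret({(1.0, 0.5): 0.5, (0.0, 1.0): 0.5}, A))  # 0.25
```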
+
+The worst-case regret (WCR) from a choice function $f$ is:
+
+$$ \text{WCR}(f) = \sup_{A \subseteq D, |A| < \infty} \text{RGRT}(f, A), $$
+
+where the supremum ranges over all possible types of the agent (i.e., all possible finite sets of available projects). The principal's problem is to minimize WCR($f$) over all implementable
+---PAGE_BREAK---
+
+choice functions $f$. This step is our only departure from the Bayesian approach. The Bayesian approach would instead assign a prior belief over the number and the characteristics of the available projects; the principal's problem would then be to minimize the *expected* regret instead of the *worst-case* regret.
+
+Note that, while our principal takes the worst-case regret approach to uncertainty about the agent’s type, he calculates the expected payoff with respect to his own objective randomization. The same assumption is made by Savage (1972) when he discusses the use of randomized acts under the worst-case regret approach (Savage, 1972, Chapter 9.3). A similar assumption is made in the ambiguity aversion literature. For example, in Gilboa and Schmeidler (1989), the decision maker calculates his expected payoff with respect to random outcomes (i.e., “roulette lotteries”) but evaluates acts using the maxmin approach with non-unique priors. If we make the alternative assumption that the principal takes the worst-case regret approach even towards his own randomization, we effectively restrict the principal to deterministic mechanisms.
+
+From now on, we assume that the set $D$ of all possible verifiable projects is $[\underline{u}, 1] \times [\underline{v}, 1]$ for some parameters $\underline{u}, \underline{v} \in [0, 1]$, and that the functions $u(\cdot)$ and $v(\cdot)$ are projections over the first and second coordinates. Abusing notation, we denote a project $a \in D$ also by $a = (u, v)$, where $u$ and $v$ are the agent’s and the principal’s payoffs, respectively, if project $a$ is chosen.
+
+The parameters $\underline{u}$ and $\underline{v}$ quantify the uncertainty faced by the principal: the higher they are, the smaller the uncertainty. They also measure players’ preference intensity over projects. As $\underline{u}$ increases, the agent’s preferences over projects become less strong, so it becomes easier to align the incentives of the agent with those of the principal. As $\underline{v}$ increases, the principal’s preferences over projects become less strong, so the agent’s tendency to propose his own favorite project becomes less costly for the principal.
+---PAGE_BREAK---
+
+# 5 Optimal mechanisms
+
+## 5.1 Preliminary intuition
+
+We now use an example to illustrate the fundamental trade-off faced by the principal, as well as the intuition behind the optimal mechanisms. We first explain how randomization helps to reduce the WCR in the single-project environment. We then explain how the multiproject environment can further reduce the WCR. For this illustration, we assume that $\underline{v} = 0$, so $D = [\underline{u}, 1] \times [0, 1]$.
+
+Figure 1: Preliminary intuition, $\underline{v} = 0$
+
+Consider the single-project environment and assume first that the principal is restricted to deterministic mechanisms. In this case, a mechanism is a set of projects that the principal approves for sure, while all other projects are rejected outright. For each such mechanism, the principal has two fears. First, if the agent has multiple projects which will be approved, then he will propose what he likes the most, even if other available projects are more valuable to the principal. Second, if the agent has only projects which will be rejected, then the principal loses the payoff from these projects. Applied to the project $\bar{a} = (1, 1/2)$, these two fears imply that no matter how the principal designs the deterministic mechanism, his
+---PAGE_BREAK---
+
+WCR is at least 1/2. As shown in figure 1, this project $\bar{a}$ gives the agent his highest payoff 1, while giving the principal only a moderate payoff 1/2. If the mechanism approves $\bar{a}$ and the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$, then the agent will propose $\bar{a}$ rather than $(\underline{u}, 1)$, so the principal suffers regret 1/2. If the mechanism rejects $\bar{a}$ but $\bar{a}$ is the only available project, then the principal also suffers regret 1/2. Thus, the WCR under any deterministic mechanism is at least 1/2. On the other hand, the deterministic mechanism that approves project $(u, v)$ if and only if $v \ge 1/2$ achieves the WCR of 1/2, so it is optimal among all the deterministic mechanisms.
+
+We now explain how randomization can reduce the WCR in the single-project environment. We first note that, if $\underline{u} = 0$, then, even with randomized mechanisms, the principal cannot reduce his WCR below 1/2. This is because the only way to incentivize the agent to propose the project $(\underline{u}, 1) = (0, 1)$ when the set of available projects is $\{\bar{a}, (0, 1)\}$ is still to reject the project $\bar{a}$ outright if $\bar{a}$ is proposed. However, if $\underline{u} > 0$, then the principal can do better. He can approve the project $\bar{a}$ with probability $\underline{u}$, while still maintaining the agent's incentive to propose the principal's preferred project $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$. We carry out this idea in Theorem 5.1 in subsection 5.2.
+
+Let us now consider the multiproject environment. We again begin with deterministic mechanisms. Under deterministic mechanisms, more choice functions can be implemented in the multiproject environment than in the single-project one.¹ However, when restricted to deterministic mechanisms, the principal has the same minimal WCR in the multiproject environment as in the single-project one. This is because, if the principal wants to choose $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$, then the only way to incentivize the agent to include $(\underline{u}, 1)$ in his proposal is to reject the project $\bar{a}$ when $\bar{a}$ is proposed alone.
+
+We now explain how randomization can help in the multiproject environment, even when
+
+¹For example, the principal can implement the choice function that chooses (i) the agent’s favorite project, if there are at least two available projects, and (ii) nothing, if there is at most one available project.
+---PAGE_BREAK---
+
+$\underline{u} = 0$. While a deterministic mechanism must pick either $\bar{a}$ or $(0, 1)$ or nothing when the agent proposes $\{\bar{a}, (0, 1)\}$, a randomized mechanism can reach a compromise by choosing each project with probability 1/2. On the other hand, if the agent proposes only $\bar{a}$, the principal chooses $\bar{a}$ with probability 1/2, so the agent of type $\{\bar{a}, (0, 1)\}$ is willing to propose $\{\bar{a}, (0, 1)\}$ instead of just $\bar{a}$. The regret is 1/4 both when the agent's type is $\{\bar{a}, (0, 1)\}$ and when his type is $\{\bar{a}\}$. We carry out this idea of reaching a compromise in Theorem 5.2 in subsection 5.3. Specifically, when the agent proposes $P$, the principal gives the agent the maximal payoff he can offer, subject to the constraint that he can give the agent this same payoff if the agent proposes $P \cup \{(\underline{u}, 1)\}$ and can still keep his regret under control.
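The arithmetic behind this compromise is easy to verify directly (a numerical sketch, with $\bar{a} = (1, 1/2)$ as in figure 1):

```python
abar, b = (1.0, 0.5), (0.0, 1.0)   # projects as (agent payoff u, principal payoff v)

# Compromise mechanism from the text: on proposal {abar, b}, choose each
# project with probability 1/2; on proposal {abar} alone, choose abar
# with probability 1/2.
agent_both = 0.5 * abar[0] + 0.5 * b[0]   # agent's payoff from proposing both
agent_alone = 0.5 * abar[0]               # agent's payoff from proposing abar alone
assert agent_both >= agent_alone          # so the truthful proposal is optimal

regret_both = max(abar[1], b[1]) - (0.5 * abar[1] + 0.5 * b[1])  # type {abar, b}
regret_alone = abar[1] - 0.5 * abar[1]                           # type {abar}
print(agent_both, agent_alone, regret_both, regret_alone)  # 0.5 0.5 0.25 0.25
```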
+
+## 5.2 Optimal mechanism in the single-project environment
+
+Since the agent can propose at most one project, a mechanism specifies the approval probability for each proposed project. Instead of using our previous notation $\rho(a|\{a\})$, we let $\alpha(u, v) \in [0, 1]$ denote the approval probability if the agent proposes the project $(u, v)$.
+
+**Theorem 5.1 (Single-project environment).** Assume $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. Let
+
+$$R^s = \max_{v \in [\underline{v}, 1]} \min((1-\underline{u})v, 1-v) = \min\left(\frac{1-\underline{u}}{2-\underline{u}}, 1-\underline{v}\right).$$
+
+1. The WCR under any mechanism is at least $R^s$.
+
+2. The mechanism $\alpha^s$ is given by:
+
+$$\alpha^s(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^s \text{ or } u = 0, \\ \frac{\underline{u}}{u}, & \text{if } v < 1 - R^s \text{ and } u > 0. \end{cases}$$
+
+It implements a choice function that has the WCR of $R^s$ and is admissible.
+---PAGE_BREAK---
+
+3. If a mechanism $\alpha$ implements a choice function that has the WCR of $R^s$, then $\alpha(u, v) \le \alpha^s(u, v)$ for every $(u, v) \in D$.
+
+The mechanism $\alpha^s$ consists of an *automatic-approval* region and a *chance* region. If the proposed project is sufficiently good for the principal (i.e., $v \ge 1 - R^s$), then it is automatically approved. If the project is mediocre for the principal (i.e., $v < 1 - R^s$), then the approval probability equals $\underline{u}/u$, so the agent expects a payoff $\underline{u}$ from proposing a mediocre project.
+
+The agent will propose a project in the automatic-approval region if he has at least one such project. If all his projects are in the chance region, he will propose a project that gives the principal the highest payoff. The principal still suffers regret from two sources. First, if the agent has multiple projects that will be automatically approved, he will propose what he favors instead of what the principal favors. Second, if the agent has only projects in the chance region, his proposal is rejected with positive probability. The threshold for the automatic-approval region, $1 - R^s$, is chosen to keep the regret from both sources under control.
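The formulas of Theorem 5.1 can also be checked numerically. The sketch below is our own code, not part of the formal argument: it samples only singleton types and pairs containing $(\underline{u}, 1)$, breaks the agent's ties in the principal's favor, and confirms that the worst-case regret under $\alpha^s$ stays at $R^s$:

```python
def R_s(u_lo, v_lo):
    # Closed form from Theorem 5.1: min((1 - u_lo)/(2 - u_lo), 1 - v_lo).
    return min((1 - u_lo) / (2 - u_lo), 1 - v_lo)

def alpha_s(u, v, u_lo, v_lo):
    R = R_s(u_lo, v_lo)
    if v >= 1 - R or u == 0:
        return 1.0               # automatic-approval region
    return u_lo / u              # chance region: agent expects payoff u_lo

def regret(A, u_lo, v_lo):
    # The agent proposes a project maximizing alpha_s * u; among
    # (near-)indifferent projects he picks the principal's favorite.
    pay = {a: alpha_s(a[0], a[1], u_lo, v_lo) * a[0] for a in A}
    top = max(pay.values())
    u, v = max((a for a in A if pay[a] >= top - 1e-9), key=lambda a: a[1])
    return max(b[1] for b in A) - alpha_s(u, v, u_lo, v_lo) * v

u_lo, v_lo = 0.5, 0.0
grid = [i / 50 for i in range(51)]
singles = [[(u, v)] for u in grid if u >= u_lo for v in grid]
pairs = [A + [(u_lo, 1.0)] for A in singles]
wcr = max(regret(A, u_lo, v_lo) for A in singles + pairs)
print(R_s(u_lo, v_lo), wcr)    # wcr stays at or below R^s = 1/3
```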
+
+The approval probability $\alpha^s(u, v)$ increases in $v$ (the principal's payoff) and decreases in $u$ (the agent's payoff). This monotonicity in $v$ and $u$ is natural. In particular, the principal is less likely to approve projects that give the agent high payoffs in order to deter the agent from hiding projects that give the principal high payoffs. It is interesting to compare our optimal mechanism $\alpha^s$ in the single-project environment to that in Armstrong and Vickers (2010). They characterize the optimal deterministic mechanism in a Bayesian setting. Under the assumptions that (i) projects are i.i.d. and (ii) the number of available projects is independent of their characteristics, they show that the optimal deterministic mechanism $\alpha(u, v)$ increases in $v$: a project $(u, v)$ is approved if and only if $v \ge r(u)$ for some function $r(u)$. They also characterize the optimal $r(u)$ explicitly. Their argument can be generalized to show that the optimal randomized mechanism $\alpha(u, v)$ also increases in $v$,
+---PAGE_BREAK---
+
+but it is not clear how to solve for the optimal $\alpha(u, v)$. It is an open problem under which assumptions on the prior belief the optimal randomized mechanism $\alpha(u, v)$ in the Bayesian setting decreases in $u$.
+
+The typical situation under the worst-case regret approach to uncertainty is that multiple mechanisms can achieve the minimal WCR. Assertion 3 in Theorem 5.1 says that the mechanism $\alpha^s$ is uniformly more generous in approving the agent's proposal than any other mechanism that can have the WCR of $R^s$. This assertion has two implications. First, among all mechanisms that can have the WCR of $R^s$, the mechanism $\alpha^s$ is the agent's most preferred one. Second, compared to any mechanism that can have the WCR of $R^s$, the mechanism $\alpha^s$ gives the principal a higher payoff (or equivalently, a lower regret) for every singleton $A$ and a strictly higher payoff for some singleton $A$.
+
+## 5.3 Optimal mechanism in the multiproject environment
+
+We now present the optimal mechanism in the multiproject environment. Let $\alpha : [\underline{u}, 1] \times [\underline{u}, 1] \rightarrow [0, 1]$ be a function and consider the following *project-wide maximal-payoff mechanism* (PMP mechanism) induced by the function $\alpha$:
+
+1. If the proposal $P$ includes only one project $(u, v)$, it is approved with probability $\alpha(u, v)$.
+
+2. If the proposal $P$ includes multiple projects, the mechanism randomizes over the proposed projects and the option of choosing no project so as to maximize the principal's expected payoff, while promising the agent an expected payoff of $\max_{(u,v)\in P} \alpha(u, v)u$. This is the maximal expected payoff the agent could get from proposing each project alone.
+
+By the definition of a PMP mechanism, the more projects the agent proposes, the weakly higher his expected payoff will be. The agent is therefore willing to propose his type truthfully. In other words, PMP mechanisms are IC. Note that for a mechanism to be IC, the
+---PAGE_BREAK---
+
+agent's payoff from a multiproject proposal must be at least his payoff from proposing each project alone. A PMP mechanism has the feature that the agent is promised exactly the maximal payoff from proposing each project alone, but not more.
+
+Our next theorem shows that there exists an optimal PMP mechanism.
+
+**Theorem 5.2** (Multiproject environment). Assume $\mathcal{E} = 2^D$. For every $u \in [\underline{u}, 1]$ and $p \in [0, 1]$, let $\gamma(p, u)$ be
+
+$$ \gamma(p, u) = \min\{q \in [0, 1] : qu + (1-q)\underline{u} \ge pu\}. \quad (1) $$
+
+Let
+
+$$ R^m = \max_{(u,v) \in D} \min_{p \in [0,1]} \max(v(1-p), (1-v)\gamma(p,u)). \quad (2) $$
+
+1. The WCR under any mechanism is at least $R^m$.
+
+2. Let $\rho^m$ be the PMP mechanism induced by
+
+$$ \alpha^m(u, v) = \max\{p \in [0, 1] : (1-v)\gamma(p, u) \le R^m\}. \quad (3) $$
+
+It has the WCR of $R^m$ and is admissible.
+
+3. If $\rho$ is an IC, admissible mechanism which has the WCR of $R^m$, then $U(\rho, A) \le U(\rho^m, A)$ for every type $A$.
+
+The explicit expressions for $R^m$ and $\alpha^m(u, v)$ are presented at the end of this subsection.
+
+It follows from (1) and (3) that $\alpha^m(u, v) = 1$ if $v \ge 1 - R^m$ and $\alpha^m(u, v) < 1$ otherwise. Like in the case of the single-project environment, when the agent proposes only one project, the project is approved for sure if its payoff to the principal is sufficiently high and approved with some probability otherwise. For this reason, we still call $v \ge 1 - R^m$ and $v < 1 - R^m$ the automatic-approval and the chance regions, respectively. Figure 2 depicts these two regions.
+---PAGE_BREAK---
+
+When the agent proposes more than one project, the principal promises the agent an expected payoff of $\max_{(u,v) \in P} \alpha^m(u, v)u$. In both panels of figure 2, each dotted curve connects all the projects that induce the same value of $\alpha^m(u, v)u$, so it can be interpreted as an “indifference curve” for the agent. For a project in the automatic-approval region, the principal is willing to compensate the agent his full payoff. In contrast, for a project in the chance region, the principal is willing to compensate the agent only a discounted payoff. The lower the project’s payoff to the principal, the more severe the discounting. Hence, indifference curves are vertical in the automatic-approval region and tilt counterclockwise as the principal’s payoff $v$ further decreases. The agent’s expected payoff is determined by the project (among those proposed) that is on the highest indifference curve.
+
+Figure 2: Reaching a compromise when agent's favorite project is in chance region, $\underline{u} = \underline{v} = 0$
+
+Under the optimal mechanism $\rho^m$, if the agent's favorite project is in the automatic-approval region, then this project will be chosen for sure. In this case, there is no benefit to either party from proposing other available projects. The left panel of figure 2 gives such an example: ★ and ▲ denote the available projects and ▲ will be chosen for sure. In contrast, if the agent's favorite project is in the chance region, the benefit to the principal from the
+---PAGE_BREAK---
+
+agent's proposing multiple projects can be significant. The right panel of figure 2 illustrates such an example. Instead of rejecting ▲ with positive probability, the mechanism randomizes between ▲ and ★ while promising the agent the same payoff he would get from proposing ▲ alone. In such cases, the optimal mechanism imposes a compromise between the two parties: sometimes the choice favors the agent, and at other times it favors the principal.
+
+Lastly, the explicit expressions for $R^m$ and $\alpha^m$ are given by:
+
+$$R^m = \begin{cases} \frac{(1-\underline{u})(2-\underline{u}-2\sqrt{1-\underline{u}})}{\underline{u}^2} & \text{if } \underline{v} < \frac{1-\sqrt{1-\underline{u}}}{\underline{u}}, \\ \frac{(1-\underline{u})(1-\underline{v})\underline{v}}{1-\underline{u}\underline{v}} & \text{otherwise,} \end{cases}$$
+
+and
+
+$$\alpha^m(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^m \text{ or } u = 0, \\ \left(1 - \frac{R^m}{1-v}\right) \frac{\underline{u}}{u} + \frac{R^m}{1-v}, & \text{if } v < 1 - R^m \text{ and } u > 0. \end{cases}$$
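As a numerical sanity check (our own sketch, not part of the formal argument), the explicit expression for $R^m$ can be compared against a brute-force evaluation of definition (2), with $\gamma$ computed from (1):

```python
def gamma(p, u, u_lo):
    # Eq. (1): minimal q in [0, 1] with q*u + (1 - q)*u_lo >= p*u.
    if p * u <= u_lo:
        return 0.0
    return (p * u - u_lo) / (u - u_lo)

def R_m_closed(u_lo, v_lo):
    # Explicit expression above (assumes u_lo > 0).
    s = (1 - u_lo) ** 0.5
    if v_lo < (1 - s) / u_lo:
        return (1 - u_lo) * (2 - u_lo - 2 * s) / u_lo ** 2
    return (1 - u_lo) * (1 - v_lo) * v_lo / (1 - u_lo * v_lo)

def R_m_grid(u_lo, v_lo, n=60):
    # Definition (2): max over (u, v) in D of
    # min over p in [0, 1] of max(v*(1-p), (1-v)*gamma(p, u)).
    us = [u_lo + (1 - u_lo) * i / n for i in range(n + 1)]
    vs = [v_lo + (1 - v_lo) * i / n for i in range(n + 1)]
    ps = [i / n for i in range(n + 1)]
    return max(
        min(max(v * (1 - p), (1 - v) * gamma(p, u, u_lo)) for p in ps)
        for u in us for v in vs)

print(R_m_closed(0.5, 0.0))    # about 0.1716
print(R_m_grid(0.5, 0.0))      # agrees up to the grid resolution
```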
+
+## 5.4 Comparing the WCR under two environments
+
+Figure 3 compares the WCR under the single-project and the multiproject environments. The left panel depicts the WCR as a function of $\underline{u}$ for a fixed $\underline{v}$. The right panel depicts the WCR as a function of $\underline{v}$ for a fixed $\underline{u}$. Roughly speaking, the principal's gain from having the multiproject environment as compared to the single-project environment, measured by $R^m - R^s$, is larger when $\underline{u}$ or $\underline{v}$ is smaller (i.e., when the principal faces more uncertainty or when players can potentially have strong preferences over projects).
+---PAGE_BREAK---
+
+Figure 3: WCR: single-project (dashed curve) vs. multiproject (solid curve)
+
+# 6 Discussion
+
+## 6.1 Intermediate verification capacity
+
+We have focused on the single-project and the multiproject environments, which are natural first steps for us to study. Nonetheless, there are intermediate environments in which the principal can verify up to $k$ projects for some fixed $k \ge 2$, so $\mathcal{E} = \{P \subseteq D : |P| \le k\}$. We call this the $k$-project environment.
+
+**Proposition 6.1** (Two are enough). For any $k \ge 2$, the PMP mechanism induced by $\alpha^m(u, v)$ is optimal in the $k$-project environment. The WCR under this mechanism is $R^m$.
+
+*Proof*. Let $A$ be the set of available projects. Let $(u_p, v_p) \in \arg\max\{v : (u, v) \in A\}$ and $(u_a, v_a) \in \arg\max\{\alpha^m(u, v)u : (u, v) \in A\}$. Let $P = \{(u_p, v_p), (u_a, v_a)\}$. Then under the PMP mechanism induced by $\alpha^m(u, v)$, the agent is willing to propose $P$ since this proposal gives him $\alpha^m(u_a, v_a)u_a$, the maximal payoff he can get under the mechanism. The principal's payoff given the proposal $P$ equals his payoff if the set of available projects were actually $P$. By Theorem 5.2 this payoff is at least $v_p - R^m$, so the principal's regret is at most $R^m$. $\square$
+
+Proposition 6.1 shows that having the full benefit of compromise does not require infinite
+---PAGE_BREAK---
+
+or high verification capacity. A capacity of only two projects is sufficient. Furthermore, even if the principal can verify up to ten projects, it suffices to let the agent propose up to two, which provides a parsimonious way to get the full benefit of compromise.
+
+## 6.2 Cheap-talk communication does not help for any $\mathcal{E}$
+
+We could have started from a more general definition of a mechanism that chooses a project based on both the proposal $P$ and a cheap-talk message $m$ from the agent, as in Bull and Watson (2007) and Ben-Porath, Dekel and Lipman (2019). However, in our model cheap talk does not benefit the principal. This is because the principal can choose a project only from the proposed set $P$ and he knows the payoffs that each project in $P$ gives to both parties. Hence, no information asymmetry remains after the agent proposes $P$, and so there is no benefit to cheap talk.
+
+More specifically, for any proposal $P$ and any cheap-talk messages $m_1, m_2$, we argue that it is without loss for the principal to choose the same subprobability measure over $P$ after $(P, m_1)$ and after $(P, m_2)$. Suppose otherwise that the principal chooses a subprobability measure $\pi_1$ after $(P, m_1)$ and chooses $\pi_2$ after $(P, m_2)$. If the agent strictly prefers $\pi_1$ to $\pi_2$, then he can profitably deviate to $(P, m_1)$ whenever he is supposed to say $(P, m_2)$. Hence, $(P, m_2)$ never occurs on the equilibrium path. If the agent is indifferent between $\pi_1$ and $\pi_2$, then the principal can pick his preferred measure between $\pi_1$ and $\pi_2$ after both $(P, m_1)$ and $(P, m_2)$, without affecting the agent's incentives. This argument does not depend on the exogenous restriction $\mathcal{E}$ on the agent's proposal $P$, so cheap-talk communication does not help for any $\mathcal{E}$.
+---PAGE_BREAK---
+
+## 6.3 The commitment assumption
+
+Commitment is crucial for the principal to have some “bargaining power” in the project choice problem. If the principal has no commitment power, sequential rationality requires that he choose his favorite project among the proposed one(s). The agent then has all the bargaining power: he will propose only his favorite project, which will be chosen for sure.
+
+In the multiproject environment, the full-commitment solution involves two types of ex post suboptimality. First, no project is chosen even though the agent has proposed some. Second, a worse project for the principal is chosen even though a better project for him is also proposed. Some applications may fall between the full-commitment and the no-commitment settings: the principal can commit to choosing no project but cannot commit to choosing a worse project when a better project is also proposed. In such a partial-commitment setting, a multiproject proposal is effectively a single-project proposal of the principal's favorite project among those proposed. The optimal mechanism in this partial-commitment setting is then the same as that in the single-project environment characterized in Theorem 5.1.
+
+# 7 Proofs
+
+## 7.1 Proof of Theorem 5.1
+
+**Claim 7.1.** *The WCR from any mechanism is at least $R^s$.*
+
+*Proof.* Let $v \in [\underline{v}, 1]$. If $\alpha(1, v) > \underline{u}$, then, if the agent has two projects $(1, v)$ and $(\underline{u}, 1)$, the agent will propose $(1, v)$ and the regret will be $1 - \alpha(1, v)v \ge 1 - v$. If $\alpha(1, v) \le \underline{u}$, then, if the agent has only the project $(1, v)$, the regret is $v - \alpha(1, v)v \ge v(1 - \underline{u})$. Therefore, WCR $\ge \min((1 - \underline{u})v, 1 - v)$ for every $v \in [\underline{v}, 1]$. $\square$
+
+**Claim 7.2.** *The WCR from $\alpha^s$ is $R^s$.*
+---PAGE_BREAK---
+
+*Proof.* We call a project $(u, v)$ good if $v \ge 1 - R^s$ and mediocre if $v < 1 - R^s$. From the definition of $R^s$ it follows that $(1 - \underline{u})v \le R^s$ for every mediocre project.
+
+According to $\alpha^s$, if the agent proposes a mediocre project, then his expected payoff is $\underline{u}$; if the agent proposes a good project $(u, v)$, then his expected payoff is $u \ge \underline{u}$. Therefore, if the agent has some good project, he will propose a good project $(u, v)$ and the regret is at most $1 - v \le R^s$. If all projects are mediocre, then the agent will propose the project $(u, v)$ with the highest $v$, so the regret is at most $(1 - \alpha^s(u, v))v = (1 - \underline{u}/u)v \le (1 - \underline{u})v \le R^s$. $\square$
+
+**Claim 7.3.** If $\alpha$ has the WCR of $R^s$, then $\alpha(u,v) \le \alpha^s(u,v)$ for every $(u,v) \in D$. Hence, $\alpha^s$ is admissible.
+
+*Proof.* Fix a project $(u,v)$. If $v \ge 1 - R^s$ or $u=0$, then $\alpha^s(u,v)=1$ and therefore $\alpha(u,v) \le \alpha^s(u,v)$. If $v < 1 - R^s$ and $u > 0$, then since the WCR under $\alpha$ is $R^s$, it must be the case that if $A = \{(u,v), (\underline{u},1)\}$, then the agent proposes the project $(\underline{u},1)$. Otherwise, the regret is at least $1 - v > R^s$. Therefore $\alpha(u,v)u \le \alpha(\underline{u},1)\underline{u} \le \underline{u}$, which implies $\alpha(u,v) \le \underline{u}/u = \alpha^s(u,v)$, as desired.
+
+Finally, if $\alpha$ has the WCR of $R^s$ and $\alpha \ne \alpha^s$, then there exists $(u,v) \in D$ such that $\alpha(u,v) < \alpha^s(u,v)$. The regret is strictly higher under $\alpha$ than under $\alpha^s$ if $A = \{(u,v)\}$, so $\alpha^s$ is admissible. $\square$
+---PAGE_BREAK---
+
+## 7.2 Proof of Theorem 5.2
+
+Let $a^* = (\underline{u}, 1)$. Let $\bar{U}(P)$ be the optimal value of the following linear programming with variables $\pi(u, v)$ for every $(u, v) \in P$:
+
+$$ \bar{U}(P) = \max_{\pi} \; \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \quad (4a) $$
+
+$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (4b) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (4c) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v)(1-v) \le R^m. \quad (4d) $$
+
+The following claim explains the role of $\bar{U}(P)$ in our argument: $\bar{U}(P)$ is the maximal payoff that the principal can give the agent for the proposal $P$ such that the principal could give the agent this same payoff if the agent proposed $P \cup \{a^*\}$, while still keeping the regret at most $R^m$.
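Problem (4) is a linear program with only two nontrivial constraints, so some optimal solution puts positive probability on at most two projects. The sketch below (our own code, not part of the formal argument) exploits this to compute $\bar{U}(P)$ by enumerating LP vertices; by Claim 7.5 below, the value for a singleton $P$ should equal $\alpha^m(u, v)u$.

```python
def U_bar(P, u_lo, R):
    """Optimal value of problem (4); P is a list of (u, v) projects.

    Vertex enumeration: with constraints sum(pi) <= 1 and
    sum(pi*(1-v)) <= R, some optimum has at most two nonzero pi's."""
    cands = [{}]                                     # pi = 0 is feasible
    for i, (u, v) in enumerate(P):
        # one project, pushed against whichever constraint binds first
        cands.append({i: 1.0 if v >= 1.0 else min(1.0, R / (1 - v))})
    for i, (ui, vi) in enumerate(P):
        for j, (uj, vj) in enumerate(P[i + 1:], i + 1):
            if vi == vj:
                continue
            # both constraints binding: pi_i + pi_j = 1 and
            # pi_i*(1 - vi) + pi_j*(1 - vj) = R
            pi_i = (R - (1 - vj)) / (vj - vi)
            if 0.0 <= pi_i <= 1.0:
                cands.append({i: pi_i, j: 1.0 - pi_i})
    return max(u_lo + sum(p * (P[k][0] - u_lo) for k, p in pi.items())
               for pi in cands)

R = 0.1715728752538097       # R^m for u_lo = 0.5, v_lo = 0 (see Theorem 5.2)
print(U_bar([(1.0, 0.3)], 0.5, R))              # equals alpha^m(1, 0.3) * 1
print(U_bar([(1.0, 0.3), (0.6, 0.9)], 0.5, R))  # weakly larger: monotone in P
```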
+
+**Claim 7.4.** If $\rho$ is an IC mechanism which has the WCR of at most $R^m$, then $U(\rho, P) \le \bar{U}(P)$ for every proposal $P$.
+
+*Proof.* Let $\tilde{P} = P \cup \{a^*\}$. Let $\pi = \rho(\cdot|\tilde{P})$. Since the regret under the mechanism $\rho$ when the set of available projects is $\tilde{P}$ is at most $R^m$, it follows that $\sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m$. Therefore the restriction of $\pi$ to the set $P$ is a feasible point in problem (4). Moreover
+
+$$ U(\rho, \tilde{P}) = \pi(a^*)\underline{u} + \sum_{(u,v) \in P} \pi(u,v)u \le \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}), \quad (5) $$
+
+where the inequality follows from $\pi(a^*) + \sum_{(u,v) \in P} \pi(u,v) \le 1$. The right hand side of (5) is the objective function of (4) at $\pi$. Therefore, $U(\rho, \tilde{P}) \le \bar{U}(P)$. Finally, since the mechanism $\rho$ is IC, it follows that $U(\rho, P) \le U(\rho, \tilde{P})$. Therefore, $U(\rho, P) \le \bar{U}(P)$, as desired. $\square$
+
+When $P$ is a singleton $\{(u,v)\}$, we also denote $\bar{U}(\{(u,v)\})$ by $\bar{U}(u,v)$. The following
+---PAGE_BREAK---
+
+claim, which follows immediately from (1) and (3), explains the role of the function $\alpha^m(u, v)$
+in our argument.
+
+**Claim 7.5.** When $P$ is a singleton $\{(u, v)\}$, $\overline{U}(u, v) = \alpha^m(u, v)u$.
+
+For a proposal $P$, let $\underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u, v)u$. The following claim explains the role of $\underline{U}(P)$ in our argument.
+
+**Claim 7.6.** If $\rho$ is an IC mechanism that accepts the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$, then $U(\rho, P) \ge \underline{U}(P)$.
+
+*Proof.* Since $\rho$ is IC, we have that $U(\rho, P) \ge U(\rho, \{(u, v)\}) = \alpha^m(u, v)u$ for every $(u, v) \in P$. $\square$
+
+Claim 7.4 bounds from above the agent's expected payoff in an IC mechanism which has the WCR of at most $R^m$. Claim 7.6 bounds from below the agent's expected payoff in an IC mechanism which approves the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$. The following claim shows that the definition of $R^m$ is such that both bounds can be satisfied.
+
+**Claim 7.7.** $\underline{U}(P) \le \overline{U}(P)$ for every $P$.
+
+*Proof.* The function $\overline{U}(P)$ defined in (4) is increasing in $P$. Therefore, from Claim 7.5 we have:
+
+$$
+\alpha^m(u, v)u = \overline{U}(u, v) \le \overline{U}(P), \quad \forall (u, v) \in P.
+$$
+
+It follows that:
+
+$$
+\underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u,v)u \leq \overline{U}(P).
+\quad \square
+$$
+---PAGE_BREAK---
+
+By definition, the mechanism $\rho^m$ solves the following linear programming:
+
+$$ \rho^m(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (6a) $$
+
+$$ \text{s.t.} \quad \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (6b) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (6c) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v)u = \underline{U}(P). \quad (6d) $$
+
+It is possible that (6) has multiple optimal solutions. Since all the optimal solutions are payoff-equivalent for both the principal and the agent, we do not distinguish among them. From now on, the notation $\rho(\cdot|P) \neq \rho^m(\cdot|P)$ means that $\rho(\cdot|P)$ is not among the optimal solutions to (6).
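Problem (6) is again a small linear program, and the same vertex-enumeration idea applies since its two nontrivial constraints admit an optimal solution supported on at most two projects. The sketch below (our own code, not part of the formal argument) solves it for the example of subsection 5.1, where $\underline{u} = \underline{v} = 0$, so that definition (2) gives $R^m = 1/4$ and $\underline{U}(\{\bar{a}, (0,1)\}) = \alpha^m(1, 1/2) \cdot 1 = 1/2$; it recovers the fifty-fifty compromise.

```python
def rho_m(P, target):
    """An optimal solution to problem (6): maximize sum(pi*v) subject to
    pi >= 0, sum(pi) <= 1, sum(pi*u) = target; P is a list of (u, v).

    Returns {index: probability}. Enumerates LP vertices (support <= 2);
    assumes the target payoff is attainable, so some vertex is feasible."""
    cands = []
    if target == 0.0:
        cands.append({})
    for i, (u, v) in enumerate(P):
        if u > 0.0 and target / u <= 1.0:
            cands.append({i: target / u})       # equality constraint alone binds
        if u == 0.0 and target == 0.0:
            cands.append({i: 1.0})              # sum(pi) = 1 binds, u_i = 0
    for i, (ui, vi) in enumerate(P):
        for j, (uj, vj) in enumerate(P[i + 1:], i + 1):
            if ui == uj:
                continue
            pi_i = (target - uj) / (ui - uj)    # sum(pi) = 1 also binding
            if 0.0 <= pi_i <= 1.0:
                cands.append({i: pi_i, j: 1.0 - pi_i})
    return max(cands, key=lambda pi: sum(p * P[k][1] for k, p in pi.items()))

# abar = (1, 1/2) and (0, 1), with target = underline-U = 1/2:
print(rho_m([(1.0, 0.5), (0.0, 1.0)], 0.5))   # {0: 0.5, 1: 0.5}
```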
+
+The following lemma is the core of the argument. It gives an equivalent characterization of the mechanism $\rho^m$.
+
+**Lemma 7.8.** The optimal solutions to (6) and those to the following problem coincide. Hence, $\rho^m(\cdot|P)$ is also given by the solution to the following problem:
+
+$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (7a) $$
+
+$$ \text{s.t.} \quad \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (7b) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (7c) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v)u \ge \underline{U}(P), \quad (7d) $$
+
+$$ \sum_{(u,v) \in P} \pi(u, v)u \le \overline{U}(P). \quad (7e) $$
+
+*Proof of Lemma 7.8.* We discuss two cases separately.
+---PAGE_BREAK---
+
+Case 1. Assume that there exists some $(u, v) \in P$ such that $v \ge 1 - R^m$. Consider the following linear programming which is a relaxation of both problem (6) and (7):
+
+$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (8a) $$
+
+s.t.
+
+$$ \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (8b) $$
+
+$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (8c) $$
+
+$$ \sum_{(u,v) \in P} \pi(u,v) u \ge \underline{U}(P). \quad (8d) $$
+
+We claim that the constraint (8d) holds with equality at every optimal solution. Indeed, if (8d) is not binding then an optimal solution to (8) is also an optimal solution to the following linear programming:
+
+$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (9a) $$
+
+s.t.
+
+$$ \pi(u, v) \ge 0, \forall (u, v) \in P, \quad (9b) $$
+
+$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (9c) $$
+
+which is derived from (8) by removing (8d). Let $v_p = \max_{(u,v) \in P} v$ and $u_p = \max_{(u,v_p) \in P} u$. By the definition of $\alpha^m$ in (3), $\alpha^m(u_p, v_p) = 1$ given that $v_p \ge 1 - R^m$. Every optimal solution $\pi^*$ to problem (9) satisfies $\text{support}(\pi^*) \subseteq \text{argmax}_{(u,v) \in P} v$, which implies that
+
+$$ \sum_{(u,v) \in P} \pi^*(u,v)u \le u_p = \alpha^m(u_p, v_p)u_p \le \underline{U}(P). $$
+
+This implies that every optimal solution to (8) satisfies (8d) with equality, so it is a feasible point in both (6) and (7). Since problem (8) is a relaxation of both problem (6) and (7),
+---PAGE_BREAK---
+
+the optimal values of (6), (7), and (8) coincide. Hence, every optimal solution to (6) or (7)
+is optimal in (8). This, combined with the fact that every optimal solution to (8) is optimal
+in (6) and (7), implies that the optimal solutions to (6) and (7) coincide.
+
+Case 2. Assume now that $v < 1 - R^m$ for every $(u, v) \in P$. We claim that $\underline{U}(P) = \overline{U}(P)$, so that problems (6) and (7) coincide. Since $v < 1 - R^m$ for every $(u, v) \in P$, the constraint (4c) in problem (4) must be slack: if it held with equality, then (4d) would be violated. Therefore, in this case $\overline{U}(P)$ also satisfies
+
+$$
+\begin{aligned}
+\overline{U}(P) = \max_{\pi} \quad & \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \\
+\text{s.t.} \quad & \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m,
+\end{aligned}
+\tag{10} $$
+
+which is derived from problem (4) by removing (4c). Problem (10) admits a solution $\pi^*$ with the property that, for some $(u^*, v^*) \in P$, the only non-zero element of $\pi^*$ is $\pi^*(u^*, v^*)$. Therefore, by Claim 7.5,
+
+$$ \overline{U}(P) = \overline{U}(u^*, v^*) = \alpha^m(u^*, v^*)u^* \le \underline{U}(P). $$
+
+Therefore, by Claim 7.7 we get $\overline{U}(P) = \underline{U}(P)$, as desired.
+□
+
+We now show that, when the set of available projects is a singleton, the regret under the
+mechanism $\rho^m$ is at most $R^m$.
+
+**Claim 7.9.** For every singleton $A = \{(u, v)\}$, the regret under $\rho^m$ is at most $R^m$.
+
+*Proof.* In this case, $\rho^m$ accepts with probability $\alpha^m(u, v)$, so the regret is $v(1 - \alpha^m(u, v))$. By the definition of $R^m$, there exists some $\bar{p} \in [0, 1]$ such that $\max(v(1-\bar{p}), (1-v)\gamma(u, \bar{p})) \le R^m$.
+---PAGE_BREAK---
+
+By (3), $\bar{p} \le \alpha^m(u, v)$. Therefore, it also follows that $v(1 - \alpha^m(u, v)) \le v(1 - \bar{p}) \le R^m$. $\square$
+
+**Claim 7.10.** The optimal value in problem (7) is at least $\max_{(u,v)\in P} v - R^m$.
+
+*Proof.* Since the constraints (7d) and (7e) cannot both be binding, it is sufficient to prove that the optimal value in each of the two problems derived from (7) by removing either (7d) or (7e) is at least $v_p - R^m$, where $v_p = \max_{(u,v)\in P} v$. Let $(u_p, v_p) \in P$ denote a principal's favorite project.
+
+If we remove (7d), let $\pi$ be given by $\pi(u_p, v_p) = \alpha^m(u_p, v_p)$ and $\pi(u, v) = 0$ when $(u, v) \ne (u_p, v_p)$. Then $\sum_{(u,v)\in P} \pi(u,v)u = \alpha^m(u_p, v_p)u_p \le \underline{U}(P) \le \overline{U}(P)$, so (7e) is satisfied. Also, $v_p(1 - \alpha^m(u_p, v_p)) \le R^m$ by Claim 7.9, which implies that the value of the objective function in (7) at $\pi$ is at least $v_p - R^m$, as desired.
+
+If we remove (7e), let $\pi$ be the optimal solution to (4) and let $\pi'$ be the probability distribution over $P$ such that $\pi'(u, v) = \pi(u, v)$ when $(u, v) \ne (u_p, v_p)$ and $\pi'(u_p, v_p) = 1 - \sum_{(u,v)\in P\setminus\{(u_p,v_p)\}} \pi(u,v)$, so $\pi'$ is derived from $\pi$ by allocating the probability of choosing no project to $(u_p, v_p)$. Then
+
+$$ \sum_{(u,v) \in P} \pi'(u,v)u = u_p + \sum_{(u,v) \in P} \pi(u,v)(u-u_p) \ge \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) = \overline{U}(P) \ge \underline{U}(P), $$
+
+where the last equality follows from the fact that $\pi$ is optimal in (4). Therefore, $\pi'$ satisfies (7d). Also
+
+$$ \sum_{(u,v) \in P} \pi'(u,v)(v_p - v) = \sum_{(u,v) \in P} \pi(u,v)(v_p - v) \le \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m, $$
+
+where the last inequality follows from (4d), as desired. $\square$
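
The reallocation step in the second part of the proof can be checked arithmetically: moving the unallocated mass of $\pi$ onto the favorite project $(u_p, v_p)$ produces a full probability distribution whose agent payoff equals $u_p + \sum_{(u,v)} \pi(u,v)(u - u_p)$. A sketch with a hypothetical sub-probability $\pi$ (all numbers illustrative):

```python
# Hypothetical project set and sub-probability pi over it (total mass 0.6).
P = [(0.2, 0.4), (0.7, 0.6), (0.5, 0.9)]
pi = {(0.2, 0.4): 0.3, (0.7, 0.6): 0.2, (0.5, 0.9): 0.1}

u_p, v_p = max(P, key=lambda uv: uv[1])   # principal's favorite project

# pi' moves the leftover mass onto (u_p, v_p), as in the proof of Claim 7.10.
pi_prime = dict(pi)
pi_prime[(u_p, v_p)] = 1.0 - sum(m for uv, m in pi.items() if uv != (u_p, v_p))

total = sum(pi_prime.values())                           # should be 1
payoff = sum(m * uv[0] for uv, m in pi_prime.items())    # agent payoff of pi'
identity = u_p + sum(m * (uv[0] - u_p) for uv, m in pi.items())
```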
+
+*Proof of Theorem 5.2.*
+
+1. Fix $(u, v) \in D$ and let $P = \{(u, v)\}$ and $\tilde{P} = \{(u, v), (\underline{u}, 1)\}$.
+
+Let $p$ be the probability that $\rho$ accepts $(u, v)$ when the proposal is $P$. So, $RGRT(P, \rho) =$
+---PAGE_BREAK---
+
+$(1-p)v$. Since the mechanism is IC, the agent's expected payoff under $\tilde{P}$ must be at least $pu$. By definition of $\gamma(u,p)$, this implies that when the proposal is $\tilde{P}$ the mechanism accepts $(u,v)$ with probability at least $\gamma(u,p)$. So, $RGRT(\tilde{P}, \rho) \ge (1-v)\gamma(u,p)$. Therefore $WCR(\rho) \ge \max((1-p)v, (1-v)\gamma(u,p))$.
+
+2. The mechanism $\rho^m$ is IC, and it solves problem (7) by Lemma 7.8. By Claim 7.10, the optimal value in problem (7) is at least $\max_{(u,v) \in P} v - R^m$. Since the objective function in (7) is the principal's payoff under $\pi$, the principal's regret is at most $R^m$.
+
+We next argue that $\rho^m$ is admissible. Let $\rho$ be an IC mechanism which has the WCR of $R^m$ and let $\alpha(u, v)$ be the probability that $\rho$ accepts a singleton proposal $\{(u, v)\}$. Then, $\rho^m$ is not weakly dominated by $\rho$ based on the following two claims:
+
+(a) If the agent's type $A$ is a singleton $\{(u,v)\}$, then $\alpha(u,v) \le \alpha^m(u,v)$ by Claims 7.4 and 7.5. Hence, the principal's payoff is weakly higher under $\rho^m$ than under $\rho$ for singleton $A$.
+
+(b) Suppose that $\alpha(u,v) = \alpha^m(u,v)$ for every $(u,v)$. Fix a proposal $P$ and let $\pi = \rho(\cdot|P)$, so $U(\rho,P) = \sum_{(u,v) \in P} \pi(u,v)u$. Then, since $\rho$ is IC, it follows from Claim 7.6 that $U(\rho,P) \ge \underline{U}(P)$, and from Claim 7.4 that $U(\rho,P) \le \overline{U}(P)$. Therefore $\pi$ is a feasible point in problem (7). Since $\rho^m(\cdot|P)$ is the optimal solution to (7), the principal's payoff is weakly higher under $\rho^m$ than under $\rho$.
+
+3. Let $\rho$ be an IC, admissible mechanism which has the WCR of $R^m$ and which differs from $\rho^m$. We want to show that $U(\rho,P) \le U(\rho^m,P)$ for every finite $P \subseteq D$. Recall that $U(\rho^m,P) = \underline{U}(P)$ for every $P$.
+---PAGE_BREAK---
+
+We first construct a new mechanism $\tilde{\rho}$ based on $\rho$ and $\rho^m$:
+
+$$ \tilde{\rho}(\cdot|P) = \begin{cases} \rho^m(\cdot|P), & \text{if } U(\rho, P) \ge \underline{U}(P), \\ \rho(\cdot|P), & \text{if } U(\rho, P) < \underline{U}(P). \end{cases} $$
+
+By definition, $U(\tilde{\rho}, P) = \min(U(\rho, P), U(\rho^m, P))$. The functions $U(\rho, P)$ and $U(\rho^m, P)$ are increasing in $P$ since $\rho$ and $\rho^m$ are IC. Therefore $U(\tilde{\rho}, P)$ is increasing in $P$, so $\tilde{\rho}$ is also IC. Moreover, for every $P$, either $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$ or $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$. Therefore the WCR under $\tilde{\rho}$ is also $R^m$.
+
+We next argue that for every $P$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does.
+
+(a) Consider a set $P$ such that $U(\rho, P) < \underline{U}(P)$. Then $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$, so $\tilde{\rho}$ gives the principal the same payoff as $\rho$ does.
+
+(b) Consider a set $P$ such that $U(\rho, P) \ge \underline{U}(P)$. From Claim 7.4 we know that $U(\rho, P) \le \overline{U}(P)$ for every $P$. Therefore, $\rho(\cdot|P)$ is a feasible point in problem (7). It follows from Lemma 7.8 that $\rho^m$ gives the principal a weakly higher payoff than $\rho$ does. Moreover, if $\rho(\cdot|P) \ne \rho^m(\cdot|P)$, then $\rho^m$ gives the principal a strictly higher payoff than $\rho$ does.
+
+Since $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$ for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every such $P$.
+
+We have argued that $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every $P$. On the other hand, $\rho$ is admissible, so there cannot be a $P$ such that $\tilde{\rho}$ gives the principal a strictly higher payoff than $\rho$ does. This implies that for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\rho(\cdot|P) = \rho^m(\cdot|P)$, so $U(\rho, P)$ equals $\underline{U}(P)$. Hence, for every $P$, $U(\rho, P) \le \underline{U}(P)$.
+---PAGE_BREAK---
+
\ No newline at end of file
diff --git a/samples/texts_merged/6470527.md b/samples/texts_merged/6470527.md
new file mode 100644
index 0000000000000000000000000000000000000000..c55b8cae90f58b9c9e572f5ba5b9f7cb80af8170
--- /dev/null
+++ b/samples/texts_merged/6470527.md
@@ -0,0 +1,315 @@
+
+---PAGE_BREAK---
+
+# MECHANISM DESIGN AND MOTION PLANNING OF PARALLEL-CHAIN NONHOLONOMIC MANIPULATOR
+
+Li, L.
+
+School of Mechanical Engineering, Baoji University of Arts and Sciences, Baoji 721016, China
+E-Mail: leeliang@126.com
+
+## Abstract
+
+Inspired by nonholonomic theory, this paper proposes a parallel-chain nonholonomic manipulator with a chainable kinematics model. To build the manipulator, the friction disc motion synthesis and decomposition mechanism was taken as the joint transmission component. Based on Chow's theorem, the kinematics model of the manipulator was proved to be nonholonomic and controllable. Then, the system's configuration coordinates were mapped from the joint space to the chain space via coordinate transformation, and the manipulator motion was planned in the chain space. Two simulation experiments prove that all joints of the proposed manipulator can move to the target configuration within the specified time. To sum up, the author successfully built an underactuated manipulator that can drive the motion of four joints with two motors. The research findings lay the basis for the development of small, lightweight manipulators.
+
+(Received, processed and accepted by the Chinese Representative Office.)
+
+**Key Words:** Nonholonomic, Parallel-Chain, Chain Transformation, Motion Planning
+
+## 1. INTRODUCTION
+
+In analytical mechanics, a nonholonomic system refers to a system whose constraint equations contain the derivatives of the coordinates with respect to time. In other words, the system velocity or acceleration is under constraint. A nonholonomic mechanical system is underactuated, as it has fewer degrees of freedom (DoFs) than the number of dimensions in its configuration space. Hence, a multi-dimensional motion in the configuration space can be determined by a few control inputs, making it possible to design compact, lightweight multi-joint manipulators. The research into nonholonomic manipulators carries practical implications for the development of assistive robots such as small robots, medical robots and multi-fingered dexterous hands.
+
+In the field of robotics, research into nonholonomic systems mainly concentrates on the control of existing nonholonomic robots, such as wheeled mobile robots, spherical robots and underwater robots [1-3]. Owing to the motion nonlinearity of nonholonomic robots, a unique path planning method must be developed for each nonholonomic system, which adds to the difficulty of motion control for new nonholonomic robots.
+
+In reality, many kinematics models of existing nonholonomic robots (e.g. wheeled mobile robots and trailer systems) can be converted into the chained model, a drift-free controllable nonholonomic system model. A system whose kinematics equations can be described with a chained model is called a chained system. Such a system boasts excellent properties (nilpotency and smoothness) and a simply structured mathematical model. In view of these advantages, many scholars have created nonholonomic robots with chainable kinematics models. For example, Nakamura proposed an underactuated manipulator based on a friction ball vector synthesis and decomposition mechanism [4]. The manipulator supports path planning via the control method of a chained system, as its kinematics model can be converted into a chained model. Under the diffeomorphism of the chain transformation, paper [5] designs a gear steering connection mechanism for a nonpowered trailer, and constructs a chainable wheeled mobile trailer system that can accurately track the target trajectory. Yamaguchi developed a 4-DoF
+---PAGE_BREAK---
+
+wheeled mobile robot capable of chained transformation [6-8]; the wheeled mobile mechanism is controlled precisely with the drive angle and azimuth of the traction robot and the angle of the active steering system mounted on the connecting rod.
+
+Based on the previous research into a parallel-chain type chainable nonholonomic manipulator [9-11], this paper puts forward a two-motor, parallel-chain, four-joint nonholonomic manipulator. In the proposed manipulator, the friction disc motion synthesis and decomposition mechanism serves as the joint transmission component, and the motion is transferred by dual universal joints in parallel-chain mode. Compared to the earlier manipulator [9-11], the proposed design, with its concise structure and small power loss, offers an effective solution to the conflict between the number of drive units and the manipulator mass in multi-joint manipulators.
+
+The remainder of this paper is organized as follows: Section 2 introduces the design of the parallel-chain nonholonomic manipulator; Section 3 establishes the kinematics model of the manipulator, demonstrates its controllability, and analyses the chain transformation features; Section 4 plans a path in the chain space, based on the control law of time-polynomial motion planning, that maps back to the joint space; Section 5 concludes that the proposed manipulator can move from the initial configuration to the target configuration within the specified time under the control law of the chained system, and outperforms the earlier parallel-chain manipulator in trajectory simplicity and motion efficiency.
+
+# 2. PARALLEL-CHAIN NONHOLONOMIC MANIPULATOR MECHANISM
+
+### 2.1 Motion principle of friction disc
+
+As shown in Fig. 1, when the friction wheel with the radius $r$ rotates around axis $I$ at the angular velocity $W_i$, there is only pure rolling between the friction wheel and the friction disc; then, the friction disc will rotate around axis $O$ at the angular velocity $W_o$. The friction wheel and the friction disc are perpendicular to each other. Let $M$ be the contact point between the friction wheel and the friction disc. The friction wheel can also rotate relative to the friction disc around the connecting line between its own axis and point $M$. When the rotation angle reaches $\alpha$, the linear velocities of the friction wheel and the friction disc were plotted into a vector diagram (Fig. 1 b).
+
+Figure 1: Friction disc motion synthesis and decomposition mechanism.
+
+Then, the following equation holds: $V_o = W_o R = V_i \cos \alpha = W_i r \cos \alpha$.
+---PAGE_BREAK---
+
+Thus, we have:
+
+$$W_o = \frac{r}{R} W_i \cos \alpha \quad (1)$$
+
+where $R$ is the distance between point M and the centre of the friction disc; $V_i$ and $V_o$ are the linear velocities of the friction wheel and the friction disc at point M, respectively.
+
+It can be seen that the transmission ratio between the friction wheel and the friction disc can be controlled by adjusting the angle $\alpha$. Hence, $\alpha$ was defined as the transmission angle.
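
Eq. (1) can be written as a one-line helper; a minimal sketch (the function name and the numeric values in the check are illustrative only):

```python
import math

def disc_speed(w_i, r, R, alpha):
    """Friction-disc angular velocity from Eq. (1): W_o = (r/R) * W_i * cos(alpha)."""
    return (r / R) * w_i * math.cos(alpha)
```

At $\alpha = 0$ the pair behaves as a plain friction drive with ratio $r/R$; at $\alpha = \pi/2$ the rolling component vanishes and the disc stops.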
+
+The rolling-induced relative motion of the friction wheel on the friction disc depends on the relative change of configuration. Based on the relative configuration-variable structure, the designed friction disc motion synthesis and decomposition mechanism is subjected to the nonholonomic constraint [12-15].
+
+## 2.2 Design of parallel-chain nonholonomic manipulator
+
+A friction disc mechanism was arranged at each joint of the manipulator. In the mechanism, the friction wheel and the friction disc are permanently connected to the front and rear joints, respectively. The transmission ratio between the two components changes with the included angle between them (i.e. the joint angle). Fig. 2 illustrates the structure of parallel-chain four-joint manipulator.
+
+Figure 2: Mechanism of parallel-chain four-joint manipulator.
+
+The rotation of motor 2 directly drives joint 1 to rotate about its axis by the angle $\theta_1$. Since friction wheel 1 is fixed to the frame through the side plate and friction disc 1 is fixed to the first joint, motor 2 controls the rotation angle $\theta_1$ of joint 1 as if a transmission angle $\theta_1$ were added to the friction transmission between the friction wheel and the friction disc.
+
+Motor 1 transmits its energy in two directions. In one direction, the motor drives the friction wheel through gears, the friction wheel drives the friction disc via rolling friction, and the friction disc drives joint 2 to rotate by the angle $\theta_2$ through the synchronous belt; meanwhile, the motor adds a transmission angle $\theta_2$ between the friction wheel and the friction disc at joint 2. In the other direction, motor 1 transmits its energy to the nearest rear joint via the dual universal joint, so that each rear joint can transmit energy to its next rear joint in turns.
+
+In this way, the four joints can be driven by two motors. The prototype of the parallel-chain four-joint manipulator is presented in Fig. 3.
+---PAGE_BREAK---
+
+Figure 3: Prototype of parallel-chain four-joint manipulator.
+
+The following issues call for special attention in the production and assembly of the prototype:
+
+(1) To ensure effective, reliable and accurate transmission of motion and force, there should be sufficient friction between the friction wheel and the friction disc. Hence, the material should have a large friction coefficient. Besides, a certain amount of positive pressure should be applied at point M, such that there is no relative sliding but pure rolling between the friction wheel and the friction disc.
+
+(2) As shown in Fig. 4 a, point M should be placed on the axis of the joint. Otherwise, the friction wheel will slide on the friction disc when the joint rotates to a certain angle. The resulting change in the distance R between point M and the centre of the friction disc will reduce the transmission accuracy.
+
+(3) The input shaft and the output shaft of the dual universal joint should have the same rotational angular velocity. In other words, the centreline OO of the dual universal joint must be consistent with the joint axis. Moreover, the intermediate shaft should be retractable, so as to compensate for the change in the axial distance between the input and output shafts caused by the rotation of manipulator joints (Fig. 4 b).
+
+(4) For the compactness and lightweight of the whole structure, the periphery of the connecting rod should be made into large rounded corner and the central part of the rod should be grooved, without sacrificing the strength and rigidity. In the horizontal direction, the main energy transmission chain (dual universal joint) and the motion transmission chain (friction wheel and friction disc) should be arranged at the same distance from the edge of the manipulator. The distance should approximate the spacing between the two transmission chains. In the vertical direction, the two transmission chains should be placed symmetrically about the connecting rod. All these arrangements ensure that the centre of mass of the manipulator is close to its geometric centre, thereby improving the kinetic performance of the manipulator.
+
+Figure 4: a) location of point M, b) structure of dual universal joint.
+---PAGE_BREAK---
+
+# 3. KINEMATICS ANALYSIS AND CHAIN TRANSFORMATION
+
+## 3.1 Kinematics modelling
+
+The configuration of the four-joint nonholonomic manipulator is determined by the joint rotation angles $\theta_i$ ($i=1, 2, 3, 4$) and the angular displacement $\varphi$ of the friction wheel. Hence, the generalized coordinate vector of the manipulator system was defined as $q = [q_1, q_2, q_3, q_4, q_5] = [\varphi, \theta_1, \theta_2, \theta_3, \theta_4]$, and the control inputs as the angular velocities $u_1$ and $u_2$ of the two motors. According to the kinematics relationships, the kinematics model of the parallel-chain four-joint manipulator can be derived as:
+
+$$
+\begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \\ \dot{q}_4 \\ \dot{q}_5 \end{bmatrix} =
+\begin{bmatrix} \dot{\varphi} \\ \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \\ \dot{\theta}_4 \end{bmatrix} =
+\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \frac{r}{R}\cos\theta_1 & 0 \\ \frac{r}{R}\cos\theta_2 & 0 \\ \frac{r}{R}\cos\theta_3 & 0 \end{bmatrix}
+\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = [p_1(q) \enspace p_2(q)]
+\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}
+\quad (2)
+$$
+
+where *r* is the radius of the friction wheel.
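
The model in Eq. (2) can be sketched directly as a function returning $\dot{q}$ (the function name and the values in the check are illustrative):

```python
import math

def qdot(q, u1, u2, k):
    """Right-hand side of the kinematics model, Eq. (2).
    q = [phi, theta1, theta2, theta3, theta4]; k = r/R."""
    _, th1, th2, th3, _ = q
    return [u1,                      # phi-dot    = u1
            u2,                      # theta1-dot = u2
            k * math.cos(th1) * u1,  # theta2-dot
            k * math.cos(th2) * u1,  # theta3-dot
            k * math.cos(th3) * u1]  # theta4-dot
```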
+
+## 3.2 Controllability analysis
+
+Eq. (2) describes a drift-free control system. For such a drift-free symmetric affine system, the reachable space is expanded from the distribution $\Delta(q) = \text{span}\{p_1, p_2\}$.
+
+According to the controllability condition for nonholonomic systems (Chow's theorem) [16], a drift-free affine system is controllable if its reachable distribution $\Delta_p(q) = \text{span}\{p_1, p_2, [p_1, p_2], [p_1, [p_1, p_2]], ...\}$ has full rank. Note that $[p_1, p_2]$ and $[p_1, [p_1, p_2]]$ are the Lie bracket operations on the vector fields $p_1, p_2$ and $p_1, [p_1, p_2]$, respectively, where $[p_1, p_2] = \frac{\partial p_2(q)}{\partial q} p_1(q) - \frac{\partial p_1(q)}{\partial q} p_2(q)$.
+
+Thus, the reachable space of the parallel-chain nonholonomic four-joint manipulator can be expressed as:
+
+$$
+\Delta_p (q) = \operatorname{span} \{ p_1, p_2, [p_1, p_2], [p_1, [p_1, p_2]], [p_1, [p_1, [p_1, p_2]]] \} =
+\begin{bmatrix}
+1 & 0 & 0 & 0 & 0 \\
+0 & 1 & 0 & 0 & 0 \\
+k c_1 & 0 & k s_1 & 0 & 0 \\
+k c_2 & 0 & 0 & k^2 s_1 s_2 & k^3 s_1 c_1 c_2 \\
+k c_3 & 0 & 0 & 0 & k^3 s_1 s_2 s_3
+\end{bmatrix}
+\quad (3)
+$$
+
+where $k = \frac{r}{R}$, $c_i = \cos \theta_i$, $s_i = \sin \theta_i \neq 0$ ($i = 1, 2, 3$).
+
+It can be derived from Eq. (3) that $\dim \Delta_p(q) = 5$ if $\sin\theta_i \neq 0$ ($i = 1, 2, 3$). In this case, the rank of the matrix equals the number of dimensions in the configuration space; that is, the distribution spanned by the system satisfies the controllability rank condition. Therefore, the parallel-chain four-joint manipulator is nonholonomic and controllable in the five-dimensional configuration space, as long as its work space satisfies $\sin\theta_i \neq 0$ ($i = 1, 2, 3$). In this case, the motion of the five configuration variables can be controlled with two motors.
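
Because the only super-diagonal entry of the matrix in Eq. (3) sits in a row whose complementary sub-diagonal entry is zero, the determinant reduces to the product of the diagonal, $k^6 \sin^3\theta_1 \sin^2\theta_2 \sin\theta_3$, which vanishes exactly when some $\sin\theta_i = 0$. A small pure-Python check (the angle values are illustrative):

```python
import math

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) < 1e-15:
            return 0.0                     # rank-deficient
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

def delta_p(th1, th2, th3, k):
    """Reachable-distribution matrix of Eq. (3)."""
    c1, c2, c3 = math.cos(th1), math.cos(th2), math.cos(th3)
    s1, s2, s3 = math.sin(th1), math.sin(th2), math.sin(th3)
    return [[1,    0, 0,      0,             0],
            [0,    1, 0,      0,             0],
            [k*c1, 0, k*s1,   0,             0],
            [k*c2, 0, 0,      k**2*s1*s2,    k**3*s1*c1*c2],
            [k*c3, 0, 0,      0,             k**3*s1*s2*s3]]
```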
+
+## 3.3 Analysis of chain transformation features
+
+After investigating a wheeled mobile robot system with *n* trailers, Sørdalen proposed the conditions and methods for the chain transformation of a drift-free affine system with a triangular configuration [17], similar to Eq. (2):
+---PAGE_BREAK---
+
+$$
+\left\{
+\begin{array}{ll}
+\dot{q}_1 = u_1 \\
+\dot{q}_2 = u_2 \\
+\dot{q}_i = f_i(q_{i-1})u_1, & i \in \{3, \dots, n\}
+\end{array}
+\right.
+$$
+
+If the smooth function $f_i(q_{i-1})$ satisfies $\left. \frac{\partial f_i(q_{i-1})}{\partial q_{i-1}} \right|_{q=q_0} \neq 0$ ($\forall i \in \{3, 4, \dots, n\}$) in the neighbourhood of $q_0$, there exist a diffeomorphic coordinate transformation and an input transformation such that the system can be converted into a chained system.
+
+If $\sin\theta_i \neq 0$ ($i=1, 2, 3$), then the chain transformation and the input feedback transformation of the four-joint nonholonomic manipulator can be expressed as:
+
+$$
+\left\{
+\begin{aligned}
+Z_5 &= \theta_4 \\
+Z_4 &= k \cos \theta_3 \\
+Z_3 &= -k^2 \cos \theta_2 \sin \theta_3 \\
+Z_2 &= k^3 (\cos \theta_1 \sin \theta_2 \sin \theta_3 - \cos^2 \theta_2 \cos \theta_3) \\
+Z_1 &= \varphi
+\end{aligned}
+\right.
+\tag{4}
+$$
+
+$$
+\begin{equation}
+\begin{cases}
+v_1 = \dot{z}_1 = \dot{\varphi} = u_1 \\
+v_2 = \dot{z}_2 = k^4 c_2 (c_1^2 s_3 + 3 c_1 s_2 c_3 + c_2^2 s_3) u_1 - k^3 s_1 s_2 s_3 u_2
+\end{cases}
+\tag{5}
+\end{equation}
+$$
+
+# 4. MOTION PLANNING FOR PARALLEL-CHAIN FOUR-JOINT NONHOLONOMIC MANIPULATOR
+
+The basic idea of the motion planning for a chainable nonholonomic manipulator is to map the initial configuration $q^i$ and target configuration $q^f$ of the system into the initial configuration $z^i$ and target configuration $z^f$ in the chain space, to form a path from $z^i$ to $z^f$, and then to map the path back to the joint space through the inverse chain transformation.
+
+The relatively mature motion planning methods for chained systems include the piecewise constant input method, the trigonometric function input method, the polynomial input method, and the switching control method. Among them, the polynomial input method stands out for its simple integration and its ability to drive all variables to the target configuration along a smooth trajectory. The polynomial expression of the two time-varying control inputs is:
+
+$$
+\begin{equation}
+\begin{cases}
+v_1(t) = b_1 \\
+v_2(t) = b_2 + b_3t + b_4t^2
+\end{cases}
+\tag{6}
+\end{equation}
+$$
+
+The motion planning aims to find a bounded control input $v(t)$ such that the system reaches the target configuration $z^f$ from the initial configuration $z^i$ over the specified time $T$. In other words, the system satisfies the following constraints:
+
+$$
+\left\{
+\begin{array}{l}
+f_1 = Z_2(T) - Z_2^f = 0 \\
+f_2 = Z_3(T) - Z_3^f = 0 \\
+f_3 = Z_4(T) - Z_4^f = 0 \\
+f_4 = Z_5(T) - Z_5^f = 0
+\end{array}
+\right.
+\qquad (7)
+$$
+
+Through integration, the chained system can be expressed as:
+---PAGE_BREAK---
+
+$$
+\left\{
+\begin{aligned}
+z_2(T) &= b_2 T + \frac{T^2}{2} b_3 + \frac{T^3}{3} b_4 + z_2^i \\
+z_3(T) &= \frac{T^2}{2} b_1 b_2 + \frac{T^3}{6} b_1 b_3 + \frac{T^4}{12} b_1 b_4 + T z_2^i b_1 + z_3^i \\
+z_4(T) &= \frac{T^3}{6} b_1^2 b_2 + \frac{T^4}{24} b_1^2 b_3 + \frac{T^5}{60} b_1^2 b_4 + \frac{T^2}{2} b_1^2 z_2^i + z_3^i T b_1 + z_4^i \\
+z_5(T) &= \frac{T^4}{24} b_1^3 b_2 + \frac{T^5}{120} b_1^3 b_3 + \frac{T^6}{360} b_1^3 b_4 + \frac{T^3}{6} b_1^3 z_2^i + \frac{T^2}{2} z_3^i b_1^2 + T z_4^i b_1 + z_5^i
+\end{aligned}
+\right.
+\quad (8)
+$$
+
+Substituting Eq. (8) into Eq. (7), we have a set of nonlinear equations about $b_1, b_2, b_3$ and $b_4$. The Newton iteration form of the equation set is:
+
+$$
+b^{(k+1)} = b^{(k)} - [F'(b^{(k)})]^{+} F(b^{(k)}) \qquad (9)
+$$
+
+where $b = [b_1, b_2, b_3, b_4]^T$, $F = [f_1, f_2, f_3, f_4]^T$, $F'(b)$ is the Jacobian matrix of $F(b)$, and $[F'(b)]^{+}$ is the pseudo-inverse of $F'(b)$.
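
Putting Eqs. (7), (8) and (9) together, a pure-Python sketch of the planner follows. The boundary conditions, $T$ and $b^{(0)}$ used in the check are illustrative (not the paper's experiment), and the Jacobian is approximated by finite differences; since $F$ maps $\mathbb{R}^4$ to $\mathbb{R}^4$, the pseudo-inverse reduces to an ordinary inverse:

```python
def chain_state(b, zi, T):
    """Closed-form chained-system state (z2..z5) at time T, Eq. (8)."""
    b1, b2, b3, b4 = b
    z2i, z3i, z4i, z5i = zi
    z2 = b2*T + b3*T**2/2 + b4*T**3/3 + z2i
    z3 = b1*(b2*T**2/2 + b3*T**3/6 + b4*T**4/12 + z2i*T) + z3i
    z4 = b1**2*(b2*T**3/6 + b3*T**4/24 + b4*T**5/60 + z2i*T**2/2) + b1*z3i*T + z4i
    z5 = (b1**3*(b2*T**4/24 + b3*T**5/120 + b4*T**6/360 + z2i*T**3/6)
          + b1**2*z3i*T**2/2 + b1*z4i*T + z5i)
    return [z2, z3, z4, z5]

def residual(b, zi, zf, T):
    """Constraint functions f1..f4 of Eq. (7)."""
    return [z - zt for z, zt in zip(chain_state(b, zi, T), zf)]

def solve(zi, zf, T, b0, iters=50, h=1e-6):
    """Newton iteration of Eq. (9) with a finite-difference Jacobian."""
    b = list(b0)
    for _ in range(iters):
        F = residual(b, zi, zf, T)
        # Finite-difference Jacobian, column by column.
        J = [[0.0] * 4 for _ in range(4)]
        for j in range(4):
            bp = list(b)
            bp[j] += h
            Fp = residual(bp, zi, zf, T)
            for i in range(4):
                J[i][j] = (Fp[i] - F[i]) / h
        # Solve J * db = F by Gauss-Jordan elimination with partial pivoting.
        A = [J[i][:] + [F[i]] for i in range(4)]
        for i in range(4):
            p = max(range(i, 4), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            for r in range(4):
                if r != i:
                    f = A[r][i] / A[i][i]
                    for c in range(i, 5):
                        A[r][c] -= f * A[i][c]
        db = [A[i][4] / A[i][i] for i in range(4)]
        b = [bi - dbi for bi, dbi in zip(b, db)]
        if max(abs(x) for x in residual(b, zi, zf, T)) < 1e-10:
            break
    return b
```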
+
+Given the initial value $b^{(0)}$, $b$ can be calculated by the iteration Eq. (9). Then, the trajectory of $z_i(t)$ can be acquired by substituting $b$ into Eq. (8). Through the inverse chain transformation of Eq. (4), we can obtain the expression of the angular displacement of each joint in terms of the $z$-variables. Thus, the motion curves of the angular displacements of the four joints can be expressed as:
+
+$$
+\left\{
+\begin{array}{l}
+\theta_4 = Z_5 \\
+\theta_3 = \arccos(Z_4/k) \\
+\theta_2 = \arccos\left(-\displaystyle\frac{Z_3}{k^2 \sin\theta_3}\right) \\
+\theta_1 = \arccos\left(\displaystyle\frac{Z_2/k^3 + \cos\theta_3 \cos^2\theta_2}{\sin\theta_2 \sin\theta_3}\right)
+\end{array}
+\right.
+\qquad (10)
+$$
+
+# 5. SIMULATION EXPERIMENTS
+
+Experiment 1:
+
+Let the initial configuration $\theta^i = [\theta_1^i \ \theta_2^i \ \theta_3^i \ \theta_4^i]^T$ of a parallel-chain four-joint nonholonomic manipulator be $[5^{\circ} \ 5^{\circ} \ 5^{\circ} \ 5^{\circ}]^T$ and the target configuration of that manipulator be $\theta^f = [\theta_1^f \ \theta_2^f \ \theta_3^f \ \theta_4^f]^T = [15^{\circ} \ 15^{\circ} \ 15^{\circ} \ 15^{\circ}]^T$.
+
+Substituting the configurations into Eq. (4), the boundary conditions in the chain space
+can be derived as $z^i = [z_2^i \ z_3^i \ z_4^i \ z_5^i]^T = [-0.1958 \ -0.0297 \ 0.5822 \ 0.0873]^T$ and $z^f =
+[z_2^f \ z_3^f \ z_4^f \ z_5^f]^T = [-0.1670 \ -0.0854 \ 0.5645 \ 0.2618]^T$.
+
+Figure 5: Trajectory, a) of variable *z* in the chain space, b) of each joint in the joint space.
+---PAGE_BREAK---
+
+Let the motion time $T = 20$ s and $b^{(0)} = [0.1 \ 0.1 \ 0.1 \ 0.1]^T$. The termination condition of the iteration was defined by the error at the terminal time $T$:
+
+$$e = \sqrt{(z_2(T) - z_2^f)^2 + (z_3(T) - z_3^f)^2 + (z_4(T) - z_4^f)^2 + (z_5(T) - z_5^f)^2} < 10^{-6}.$$
+
+Then, Eq. (9) was solved by the Newton iteration method. After 9 iterations, we have $b = [b_1 \ b_2 \ b_3 \ b_4]^T = [0.0151831 \ 0.0007583 \ 0.0000772 \ -0.0000007]^T$. Substituting $b$ into Eq. (8) yields the time-variation curves of the variable $z$ (Fig. 5 a). According to Eq. (10), the path in the chain space can be mapped back to the joint space via the inverse transformation. Under the time polynomial input control, the output of the four joints of the nonholonomic manipulator is as shown in Fig. 5 b.
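The Experiment 1 numbers can be checked with a short script: evaluate the closed-form Eq. (8) at $T$, build a finite-difference Jacobian, and apply the pseudo-inverse update of Eq. (9). This is a sketch, not the authors' code; it assumes the four boundary values above correspond to $z_2 \dots z_5$:

```python
import numpy as np

T = 20.0
z_i = np.array([-0.1958, -0.0297, 0.5822, 0.0873])   # z2..z5 at t = 0 (Experiment 1)
z_f = np.array([-0.1670, -0.0854, 0.5645, 0.2618])   # z2..z5 targets at t = T

def z_T(b):
    """Closed-form chain state at time T, per Eq. (8)."""
    b1, b2, b3, b4 = b
    z2 = b2*T + T**2/2*b3 + T**3/3*b4 + z_i[0]
    z3 = T**2/2*b1*b2 + T**3/6*b1*b3 + T**4/12*b1*b4 + T*z_i[0]*b1 + z_i[1]
    z4 = (T**3/6*b1**2*b2 + T**4/24*b1**2*b3 + T**5/60*b1**2*b4
          + T**2/2*b1**2*z_i[0] + z_i[1]*T*b1 + z_i[2])
    z5 = (T**4/24*b1**3*b2 + T**5/120*b1**3*b3 + T**6/360*b1**3*b4
          + T**3/6*b1**3*z_i[0] + T**2/2*z_i[1]*b1**2 + T*z_i[2]*b1 + z_i[3])
    return np.array([z2, z3, z4, z5])

def F(b):
    """Residual of the boundary conditions: F(b) = z(T; b) - z^f."""
    return z_T(b) - z_f

def jac(b, eps=1e-6):
    """Central-difference Jacobian of F (the analytic one would also work)."""
    J = np.empty((4, 4))
    for j in range(4):
        d = np.zeros(4)
        d[j] = eps
        J[:, j] = (F(b + d) - F(b - d)) / (2 * eps)
    return J

b = np.array([0.1, 0.1, 0.1, 0.1])      # b^(0) from the text
for _ in range(100):
    r = F(b)
    if np.linalg.norm(r) < 1e-6:        # termination condition from the text
        break
    b = b - np.linalg.pinv(jac(b)) @ r  # Eq. (9) update
```

With these boundary values the iteration converges to coefficients close to those reported in the text.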
+
+At $T=20$ s, $\theta_1=14.9999999^{\circ}$, $\theta_2=14.9999999^{\circ}$, $\theta_3=14.9999999^{\circ}$ and $\theta_4=14.9999999^{\circ}$.
+
+Let the error of the target configuration be: $e = \left|\dfrac{\theta^r - \theta^f}{\theta^f - \theta^i}\right|$
+
+where $\theta^r$ is the actual displacement of joint rotation. The target configuration error of each joint is $e_{\theta_1}=0.0000001\%$, $e_{\theta_2}=0.0000001\%$, $e_{\theta_3}=0.0000001\%$ and $e_{\theta_4}=0.0000001\%$. The simulation results show that, under the time polynomial input control, all joints have smooth trajectories, except for a slight fluctuation of joint 1 in the initial phase, and arrive at the target configuration.
+
+### Experiment 2:
+
+Let the initial configuration of the proposed manipulator $\theta^i = [\theta_1^i \ \theta_2^i \ \theta_3^i \ \theta_4^i]^T$ be $[20^{\circ} \ 20^{\circ} \ 20^{\circ} \ 20^{\circ}]^T$ and its target configuration be $\theta^f = [\theta_1^f \ \theta_2^f \ \theta_3^f \ \theta_4^f]^T = [10^{\circ} \ 10^{\circ} \ 10^{\circ} \ 10^{\circ}]^T$. Suppose the motion time $T = 20$ s. Through simulation, the time-variation trajectories of the chain variable and joint variable are as shown in Figs. 6 a and 6 b, respectively.
+
+Figure 6: Trajectory, a) of variable $z$ in the chain space, b) of each joint in the joint space.
+
+At $T = 20$ s, $\theta_1 = 10.000000000016^{\circ}$, $\theta_2 = 10.000000000016^{\circ}$, $\theta_3 = 10^{\circ}$ and $\theta_4 = 10^{\circ}$. The simulation results show that each joint of the manipulator has a smooth trajectory and arrives at the target configuration within the specified time.
+
+Comparing the results of the two simulation experiments, it is clear that all joints of the parallel-chain four-joint manipulator can move accurately from the initial configuration to the target configuration within the specified time when the input is controlled by the time polynomial obtained through Newton iteration. The motion of each joint is stable, with virtually no large fluctuations. Therefore, Newton iteration-based polynomial input control is a feasible motion planning method for the parallel-chain four-joint nonholonomic manipulator.
+---PAGE_BREAK---
+
+# 6. CONCLUSIONS
+
+Considering the friction disc motion synthesis and decomposition mechanism, this paper proposes a chainable parallel-chain four-joint nonholonomic manipulator based on the parallel-chain nonholonomic manipulator. According to nonlinear control theory, the author proved that the reachable space generated by the manipulator system satisfies the involutive-distribution condition, i.e. the system is controllable. The nonholonomic motion planning problem was then transformed into solving a set of nonlinear equations, using the time polynomial input method for chained systems. The unknown coefficients of the time polynomial were solved for by the Newton iteration method. After that, two simulation experiments were performed on the motion between initial and target configurations. The results show that all joints of the proposed manipulator can move stably and accurately from the initial configuration to the target configuration within the specified time.
+
+Nevertheless, there is no guarantee that the planned path between the initial and target configurations in the chain space can be transformed back into the joint space without singularity, especially when the joint variables are tightly coupled due to an increase in the number of joints on the manipulator. Thus, the key to the path planning of the nonholonomic manipulator lies in the existence of a solution to the inverse transformation of the planned path from the chain space to the joint space. In future research, the author will construct mathematical expressions for the geometric and topological features of the nonholonomic path, identify the conditions under which the path between adjacent configurations converges in the chain space, and establish an existence criterion for the inverse transformation solution of the nonholonomic path.
+
+# ACKNOWLEDGEMENT
+
+This work is supported by the Special Scientific Research Plan of Shaanxi Provincial Department of Education (17JK0048), and the Specialized Research Fund for the Doctor Program of Baoji University of Arts and Sciences (ZK16044).
+
+# REFERENCES
+
+[1] Zhai, J.-Y.; Song, Z.-B. (2018). Adaptive sliding mode trajectory tracking control for wheeled mobile robots, *International Journal of Control*, 8 pages, doi:10.1080/00207179.2018.1436194
+
+[2] Van Loock, W.; Pipeleers, G.; Diehl, M.; De Schutter, J.; Swevers, J. (2014). Optimal path following for differentially flat robotic systems through a geometric problem formulation, *IEEE Transactions on Robotics*, Vol. 30, No. 4, 980-985, doi:10.1109/TRO.2014.2305493
+
+[3] Li, L. (2017). Nonholonomic motion planning using trigonometric switch inputs, *International Journal of Simulation Modelling*, Vol. 16, No. 1, 176-186, doi:10.2507/IJSIMM16(1)CO5
+
+[4] Chung, W.-J.; Nakamura, Y. (2002). Design and control of a chained form manipulator, *International Journal of Robotics Research*, Vol. 21, No. 5-6, 389-408, doi:10.1177/027836402761393351
+
+[5] Nakamura, Y.; Ezaki, H.; Tan, Y.-G.; Chung, W. (2001). Design of steering mechanism and control of nonholonomic trailer systems, *IEEE Transactions on Robotics and Automation*, Vol. 17, No. 3, 367-374, doi:10.1109/70.938393
+
+[6] Yamaguchi, H.; Mori, M.; Kawakami, A. (2011). Control of a five-axle, three-steering coupled-vehicle system and its experimental verification, *IFAC Proceedings Volumes*, Vol. 44, No. 1, 12976-12984, doi:10.3182/20110828-6-IT-1002.01455
+
+[7] Yamaguchi, H. (2012). Dynamical analysis of an undulatory wheeled locomotor: a trident steering walker, *IFAC Proceedings Volumes*, Vol. 45, No. 22, 157-164, doi:10.3182/20120905-3-HR-2030.00064
+---PAGE_BREAK---
+
+[8] Yamaguchi, H. (2007). A path following feedback control law for a trident steering walker, *Transactions of the Society of Instrument and Control Engineers*, Vol. 43, No. 7, 562-571, doi:10.9746/ve.sicetr1965.43.562
+
+[9] Dobrin, C.; Bondrea, I.; Pîrvu, B.-C. (2015). Modelling and simulation of collaborative processes in manufacturing, *Academic Journal of Manufacturing Engineering*, Vol. 13, No. 3, 18-25
+
+[10] Tan, Y.-G.; Li, L.; Liu, M.-Y.; Chen, G.-L. (2012). Design and path planning for controllable underactuated manipulator, *International Journal of Advancements in Computing Technology*, Vol. 4, No. 2, 212-221, doi:10.4156/ijact.vol4issue2.26
+
+[11] Li, L.; Tan, Y.-G.; Li, Z. (2014). Nonholonomic motion planning strategy for underactuated manipulator, *Journal of Robotics*, Vol. 2014, Paper 743857, 10 pages, doi:10.1155/2014/743857
+
+[12] Djedai, H.; Mdouki, R.; Mansouri, Z.; Aouissi, M. (2017). Numerical investigation of three-dimensional separation control in an axial compressor cascade, *International Journal of Heat and Technology*, Vol. 35, No. 3, 657-662, doi:10.18280/ijht.350325
+
+[13] Tan, Y.-G.; Jiang, Z.-Q.; Zhou, Z.-D. (2006). A nonholonomic motion planning and control based on chained form transformation, *Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems*, 3149-3153, doi:10.1109/IROS.2006.282337
+
+[14] Pamuk, M. T.; Savaş, A.; Seçgin, Ö.; Arda, E. (2018). Numerical simulation of transient heat transfer in friction-stir welding, *International Journal of Heat and Technology*, Vol. 36, No. 1, 26-30, doi:10.18280/ijht.360104
+
+[15] Medina, Y. C.; Fonticiella, O. M. C.; Morales, O. F. G. (2017). Design and modelation of piping systems by means of use friction factor in the transition turbulent zone, *Mathematical Modelling of Engineering Problems*, Vol. 4, No. 4, 162-167, doi:10.18280/mmep.040404
+
+[16] Li, Z. X. (1997). *A Mathematical Introduction to Robot Manipulation*, China Machine Press, Beijing
+
+[17] Sørdalen, O. J. (1993). Conversion of the kinematics of a car with n trailers into a chained form, *Proceedings of the 1993 IEEE International Conference on Robotics and Automation*, Vol. 1, 382-387, doi:10.1109/ROBOT.1993.292011
\ No newline at end of file
diff --git a/samples/texts_merged/6535016.md b/samples/texts_merged/6535016.md
new file mode 100644
index 0000000000000000000000000000000000000000..a7a458fc320ae505836052cb60232fe72059ab1b
--- /dev/null
+++ b/samples/texts_merged/6535016.md
@@ -0,0 +1,607 @@
+
+---PAGE_BREAK---
+
+Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics
+
+Evan D. Remington¹, Devika Narain¹,², Eghbal A. Hosseini², and Mehrdad Jazayeri¹,²,*
+
+
+
+¹McGovern Institute for Brain Research, ²Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
+
+*Correspondence
+
+Mehrdad Jazayeri, Ph.D.
+Robert A. Swanson Career Development Professor
+Assistant Professor, Department of Brain and Cognitive Sciences
+Investigator, McGovern Institute for Brain Research
+Investigator, Center for Sensorimotor Neural Engineering
+MIT 46-6041
+43 Vassar Street
+Cambridge, MA 02139, USA
+Phone: 617-715-5418
+Fax: 617-253-5659
+Email: mjaz@mit.edu
+
+Acknowledgements
+
+We thank S.W. Egger, H. Sohn, and V. Parks for their helpful suggestions on the manuscript. D.N. is supported by the Rubicon grant (2015/446-14-008) from the Netherlands organization for scientific research (NWO). M.J. is supported by NIH (NINDS-NS078127), the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.
+
+Author contributions
+
+E.D.R. and M.J. designed the main experimental paradigm. E.D.R. and E.A.H. trained the animals. E.D.R. collected neural data from both animals. E.D.R. developed KiNeT. E.D.R. performed all analyses. D.N. developed the recurrent neural network models. E.D.R. and M.J. interpreted the results and wrote the paper.
+
+Declaration of Interests
+
+The authors declare no competing interests.
+---PAGE_BREAK---
+
+## Summary
+
+Sensorimotor computations can be flexibly adjusted according to internal states and contextual inputs. The mechanisms supporting this flexibility are not understood. Here, we tested the utility of a dynamical system perspective to approach this problem. In a dynamical system whose state is determined by interactions among neurons, computations can be rapidly and flexibly reconfigured by controlling the system's inputs and initial conditions. To investigate whether the brain employs such control strategies, we recorded from the dorsomedial frontal cortex (DMFC) of monkeys trained to measure time intervals and subsequently produce timed motor responses according to multiple context-specific stimulus-response rules. Analysis of the geometry of neural states revealed a control mechanism that relied on the system's inputs and initial conditions. A tonic input specified by the behavioral context adjusted firing rates throughout each trial, while the dynamics in the measurement epoch allowed the system to establish initial conditions for the ensuing production epoch. This initial condition in turn set the speed of neural dynamics in the production epoch allowing the animal to aim for the target interval. These results provide evidence that the language of dynamical systems can be used to parsimoniously link brain activity to sensorimotor computations.
+---PAGE_BREAK---
+
+# Introduction
+
+Humans and nonhuman primates are capable of generating a vast array of behaviors, a feat dependent on the brain's ability to produce a vast repertoire of neural activity patterns. However, identifying the mechanisms by which the brain flexibly selects neural activity patterns across a multitude of contexts remains a fundamental and outstanding problem in systems neuroscience.
+
+Here, we aimed to answer this question using a dynamical systems approach. Work in the motor system has provided support for a hypothesis that movement-related activity in motor cortex can be described at the level of neural populations and viewed as low dimensional neural trajectories of a dynamical system (Churchland et al. 2010; Churchland et al. 2012; Seely et al. 2016; Fetz 1992; Michaels et al. 2016). More recently, a dynamical systems view has been used to provide explanations for neural trajectories in premotor and prefrontal cortical areas in various cognitive tasks (Mante et al. 2013; Rigotti et al. 2010; Carnevale et al. 2015; Hennequin et al. 2014; Rajan et al. 2016). This line of investigation has been complemented by efforts in developing, training, and analyzing recurrent neural network models that can emulate a range of motor and cognitive behaviors, leading to novel insights into the underlying latent dynamics (Mante et al. 2013; Hennequin et al. 2014; Sussillo et al. 2015; Chaisangmongkon et al. 2017; Wang et al. 2017). These early successes hold promise for the development of a more ambitious "computation-through-dynamics" (CTD) as a general framework for understanding how activity patterns in the brain support flexible behaviorally-relevant computations.
+
+The behavior of a dynamical system can be described in terms of three components: (1) the interaction between state variables that characterize the system's latent dynamics, (2) the system's initial state, and (3) the external inputs to the system. Accordingly, the hope for using the mathematics of dynamical systems to understand flexible generation of neural activity patterns and behavior depends on our ability to understand the co-evolution of behavioral and neural states in terms of these three components. Assuming that synaptic couplings between neurons and other biophysical properties are approximately constant on short timescales (i.e. trial to trial), we asked whether behavioral flexibility can be understood in terms of adjustments to initial state and external inputs.
+
+There is evidence that certain aspects of behavioral flexibility can be understood through these mechanisms. For example, it has been proposed that preparatory activity prior to movement initializes the system such that ensuing movement-related activity follows the appropriate trajectory (Churchland et al. 2010). Similarly, the presence of a context input can enable a recurrent neural network to perform flexible rule- (Mante et al. 2013; Song et al. 2016) and category-based decisions (Chaisangmongkon et al. 2017). However, whether these initial insights would apply more broadly and generalize when both inputs and initial conditions change is an important outstanding question.
+---PAGE_BREAK---
+
+For many behaviors, distinguishing the effects of the synaptic coupling, inputs and initial conditions in neural activity patterns is challenging. For example, neural activity during a reaching movement is likely governed by both local recurrent interactions and distal inputs from time-varying and condition-dependent reafferent signals (Todorov & Jordan 2002; Scott 2004; Pruszynski et al. 2011). Similarly, in many perceptual decision making tasks, it is not straightforward to disambiguate the sensory drive from recurrent activity representing the formation of a decision and the subsequent motor plan (Mante et al. 2013; Meister et al. 2013; Thura & Cisek 2014). This makes it difficult to tease apart the contribution of recurrent dynamics governed by initial conditions from the contribution of dynamic inputs (Sussillo et al. 2016). To address this challenge, we designed a sensorimotor task for nonhuman primates in which animals had to measure and produce time intervals using internally-generated patterns of neural activity in the absence of potentially confounding time varying sensory and reafferent inputs. Using a novel analysis of the geometry and dynamics of *in-vivo* activity in the dorsal medial frontal cortex (DMFC) and *in-silico* activity in recurrent neural network models trained to perform the same task, we found that behavioral flexibility is mediated by the complementary action of inputs and initial conditions controlling the structural organization of neural trajectories.
+---PAGE_BREAK---
+
+# Results
+
+## Ready, Set, Go (RSG) task
+
+Our aim was to ask whether flexible control of internally-generated dynamics could be understood in terms of systematic adjustments made to initial conditions and external inputs of a dynamical system. We designed a “Ready, Set, Go” (RSG) timing task to directly investigate the role of these two factors. The basic sensory and motor events in the task were as follows: following fixation of a central spot, monkeys viewed two peripheral visual flashes (“Ready” followed by “Set”) separated by a sample interval, $t_s$, and produced an interval, $t_p$, after Set by making a saccade to a visual target that was presented throughout the trial. In order to obtain juice reward, animals had to generate $t_p$ as close as possible to a target interval, $t_i$ (Figure 1B), which was equal to $t_s$ times a “gain factor”, $g$ ($t_i=gt_s$). The demand for flexibility was imposed in two ways (Figure 1C). First, $t_s$ varied between 0.5 and 1 sec on a trial-by-trial basis (drawn from a discrete uniform “prior” distribution). Second, $g$ switched between 1 ($g$=1 context) and 1.5 ($g$=1.5 context) across blocks of trials (Figure 1D, mean block length = 101, std = 49 trials).
+
+To verify that animals learned the task (Figure 1E), we used regression analyses to assess the dependence of $t_p$ on $t_s$ and $g$. First, we analyzed the relationship between $t_s$ and $t_p$ within each context ($t_p = \beta_0 + \beta_1 t_s$). Results indicated that $t_p$ increased monotonically with $t_s$ for both contexts ($\beta_1 > 0$, p << 0.001 for all sessions). Next, we assessed the influence of gain on $t_p$ in several complementary analyses. First, we compared regression slopes relating $t_p$ to $t_s$ within each context. The slopes were significantly higher in the $g$=1.5 compared to $g$=1 context (mean $\beta_1$= 0.84 vs. 1.2; signed-rank test p = 0.002, n = 10 sessions; Figure 1E, inset). Second, we fit a regression model to behavior across both gains that included additional regressors for gain and its interaction with $t_s$ ($t_p = \beta_0 + \beta_1 t_s + \beta_2 g + \beta_3 gt_s$). Results indicated a significant positive interaction between $t_s$ and $g$ (mean $\beta_3$ = 0.73; $\beta_3$ > 0, p < 0.0001 in each session). Finally, we fit a regression model relating $t_p$, z-scored for each $t_s$, to the number of trials following a context switch to determine how fast monkeys adjusted their behavior. There was no evidence for a slow adaptation of $t_p$ as a function of number of trials after switch (one-tailed test for $\beta_1$ in first 25 trials after switch, p > 0.25), indicating that the switching was rapid. Together, these results confirmed that animals used an estimate of $t_s$ to compute $t_p$ and flexibly adjusted their responses according to the gain information.
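The interaction regression $t_p = \beta_0 + \beta_1 t_s + \beta_2 g + \beta_3 g t_s$ can be illustrated on synthetic behavior. All numbers below are made up for the demonstration (this is not the monkeys' data); the point is only the design matrix and the within-context slopes $\beta_1 + \beta_3 g$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t_s = rng.choice(np.linspace(0.5, 1.0, 7), size=n)   # sample intervals (prior support)
g = rng.choice([1.0, 1.5], size=n)                   # gain context per trial
beta_true = np.array([0.1, 0.4, 0.0, 0.5])           # illustrative beta0..beta3

# Design matrix with the gain and interaction regressors.
X = np.column_stack([np.ones(n), t_s, g, g * t_s])
t_p = X @ beta_true + rng.normal(0.0, 0.05, n)       # additive timing noise (assumption)

beta_hat, *_ = np.linalg.lstsq(X, t_p, rcond=None)
slope_g1 = beta_hat[1] + beta_hat[3] * 1.0           # within-context slope, g = 1
slope_g15 = beta_hat[1] + beta_hat[3] * 1.5          # within-context slope, g = 1.5
```

A positive $\hat\beta_3$ makes the $g$=1.5 slope steeper than the $g$=1 slope, which is the signature the regression analysis tests for.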
+
+For both gains, responses were variable, and average responses exhibited a regression to the mean (mean $\beta_1$ < 1, p = 0.005 for $g$=1, and mean $\beta_1$ < 1.5, p = 0.0001 for $g$=1.5, one-sided signed-rank test). As with previous work (Jazayeri & Shadlen 2015; Acerbi et al. 2012; Miyazaki et al. 2005; Jazayeri & Shadlen 2010), behavior was accurately captured by a Bayesian model (Figure 1E, Methods) indicating that animals integrated their knowledge about the prior distribution, the sample interval and the gain to optimize their behavior.
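A minimal version of such a Bayesian observer shows why mean responses regress toward the middle of the prior. Gaussian measurement noise with a fixed `sigma` is an assumption for this sketch; the paper's model may differ in detail (e.g., scalar noise):

```python
import numpy as np

def bls_estimate(m, sigma=0.07):
    """Posterior-mean (BLS) estimate of t_s from a noisy measurement m,
    under the task's discrete uniform prior on [0.5, 1.0] s."""
    support = np.linspace(0.5, 1.0, 7)                        # prior support
    likelihood = np.exp(-0.5 * ((m - support) / sigma)**2)    # Gaussian noise (assumption)
    posterior = likelihood / likelihood.sum()                 # uniform prior cancels
    return float(np.sum(posterior * support))
```

Estimates for the shortest intervals are biased long and for the longest biased short, producing the characteristic regression to the mean in $t_p$.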
+---PAGE_BREAK---
+
+**Figure 1.** The RSG task and behavior. (A) RSG task. On each trial, three rectangular stimuli termed “Ready,” “Set,” and “Go” were shown on the screen arranged in a semi-circle. Following fixation, Ready and Set were extinguished. After a random delay, first Ready and then Set stimuli were flashed (small lines around the rectangles signify flashed stimuli). The time interval between Ready and Set demarcated a sample interval, *t*s. The monkey's task was to generate a saccade (“Go”) to a visual target such that the interval between Set and Go (produced interval, *t*p) was equal to a target interval, *t*t, given by *t*s multiplied by a gain factor, *g*. The animal had to perform the task in two behavioral contexts, one in which *t*t was equal to *t*s (*g*=1 context), and one in which *t*t was 50% longer than *t*s (*g*=1.5 context). The context was cued by the color of fixation and the position of a context stimulus (small white square below the fixation) throughout the trial. (B) Animals received juice reward when the error between *t*p and *t*t was small, and the reward magnitude decreased with the size of error (see Methods for details). On rewarded trials, the saccadic target turned green (panel A). (C) For both contexts, *t*s was drawn from a discrete uniform distribution with seven values equally spaced from 0.5 to 1 sec (left). The values of *t*s were chosen such that the corresponding values of *t*t across the two contexts were different but partially overlapping (right). (D) The context changed across blocks of trials. The number of trials in a block was varied pseudorandomly (mean and std shown). (E) *t*p as a function of *t*s for each context across all recording sessions. Circles indicate mean *t*p across all sessions, shaded regions indicate +/- one standard deviation from the mean, dashed lines indicate *t*t, and solid lines are the fits of a Bayesian observer model to behavior.
+Inset: Slope of the regression line (*β*1) relating *t*p to *t*s in the two contexts. Regression slopes were larger in the *g*=1.5 context, with a significant interaction between *t*s and *g* (*p* < 0.0001) for all sessions (see text; ** indicates *p* < 0.002 for signed-rank test). In all panels, different shades of gray and red are associated with *g*=1 and *g*=1.5, respectively.
+---PAGE_BREAK---
+
+Neural activity in the RSG task
+
+To assess the neural computations in RSG, we focused on the dorsal region of the medial frontal cortex (DMFC) comprising supplementary eye fields, supplementary motor area and presupplementary motor area. DMFC is a natural candidate for our task because it plays a crucial role in timing as shown by numerous studies in humans (Halsband et al. 1993; Rao et al. 2001; Coull et al. 2004; Pfeuty et al. 2005; Macar et al. 2006; Cui et al. 2009), monkeys (Okano & Tanji 1987; Merchant et al. 2013; Kunimatsu & Tanaka 2012; Isoda & Tanji 2003; Romo & Schultz 1992; Merchant et al. 2011; Mita et al. 2009; Ohmae et al. 2008; Kurata & Wise 1988), and rodents (Matell et al. 2003; Kim et al. 2009; Smith et al. 2010; Kim et al. 2013; Xu et al. 2014; Murakami et al. 2014), and because it is involved in context-specific control of actions (Isoda & Hikosaka 2007; Ray & Heinen 2015; Yang & Heinen 2014; Shima et al. 1996; Matsuzaka & Tanji 1996; Brass & von Cramon 2002).
+
+We recorded from 326 units (127 from monkey C and 199 from monkey J) in DMFC. Between 11 and 82 units were recorded simultaneously in a given session, however in this study, we combined data across all units irrespective of whether they were recorded simultaneously. Firing patterns were heterogeneous and varied across units, task epochs, and experimental contexts. In the Ready-Set epoch, responses were modulated by both gain and elapsed time (e.g. units #1, 3, and 5, **Figure 2A**). For many units, firing rate modulations underwent a salient change at the earliest expected time of Set (0.5 sec). For example, responses of some units increased monotonically in the first 0.5 sec but decreased afterwards (**Figure 2A**, units #1, 3).
+
+Following Set, firing rates were characterized by a mixture of 1) transient changes after Set (unit #1 and 3), 2) sustained modulations during the Set-Go epoch (units #1 and 5), and 3) monotonic changes in anticipation of the saccade (units #1, 2 and 4). These characteristics were not purely sensory or motor and varied systematically with $t_s$ and gain. For example, the amplitude of the early transient response (unit #1) depended on both $t_s$ and gain, indicating that it was not a visually-triggered response to Set. The same was true for the sustained modulations after Set and activity modulations prior to saccade initiation.
+
+We also examined the representation of $t_s$ and gain across the population by projecting the data on dimensions along which activity was strongly modulated by context and interval in state-space (i.e. the space spanned by the firing rates of all 326 units; see Methods). Similar to individual units, population activity was modulated by both elapsed time and gain during both the Ready-Set (**Figure 2B**) and Set-Go (**Figure 2C**) epochs. We used this rich dataset to investigate whether the flexible adjustment of intrinsic dynamics across the population with respect to $t_s$ and gain could be understood using the language of dynamical systems.
+---PAGE_BREAK---
+
+Figure 2. Neural responses in dorsomedial frontal cortex (DMFC) during the RSG task. (A) Firing rates of 5 example units during the various phases of the task aligned to Ready (left column), Set (middle) and Go (right). Responses aligned to Ready and Set were sorted by $t_s$. Responses aligned to Go were sorted into 5 bins, each with the same number of trials, ordered by $t_p$. Gray and red lines correspond to activity during the $g=1$ and $g=1.5$ contexts, respectively, with darker lines corresponding to longer intervals. (B) Visualization of population activity in the Ready-Set epoch sorted by $t_s$. The “gain axis” corresponds to the axis along which responses were maximally separated with respect to context. The other two dimensions (“PC 1 & PC 2”) correspond to the first two principal components of the data after removing the context dimension. (C) Visualization of population activity in the Set-Go epoch sorted into 5 bins, each with the same number of trials, ordered by $t_p$. Top: Activity plotted in 2 dimensions spanned by PC 1 and the dimension of maximum variance with respect to $t_p$ within each context (“Interval axis”). Bottom: Same as Top rotated 90 degree (circular arrow) to visualize activity in the plane spanned by the context axis (“Gain axis”) and PC 1. In both panels, PC1 was computed after removing the variance explained along the Interval axis and Gain axis dimensions. Squares, circles, and crosses in the state space plots represent Ready, Set, and Go, respectively.
+---PAGE_BREAK---
+
+Flexible neural computations: a dynamical systems perspective
+
+We pursued the idea that neural computations responsible for flexible control of saccade initiation time can be understood in terms of the behavior of a dynamical system established by interactions among neurons. To formulate a rigorous hypothesis for how a dynamical system could confer such flexibility, we considered the goal of the task and worked backwards logically. The goal of the animal is to flexibly control the saccade initiation time to a fixed target. Previous motor timing studies proposed that saccade initiation is triggered when the activity of a subpopulation of neurons with monotonically increasing firing rates (i.e., “ramping”) reaches a threshold (Mita et al. 2009; Kunimatsu & Tanaka 2012; Romo & Schultz 1987; Roitman & Shadlen 2002; Hanes & Schall 1996; Tanaka 2005; Maimon & Assad 2006). For these neurons, flexibility requires that the slope of the ramping activity be adjusted (Jazayeri & Shadlen 2015). More recently, it was found that actions are initiated when the collective activity of neurons with both ramping and more complex activity patterns reach an action-triggering state (Churchland et al. 2006; Wang et al. 2017), and that flexible control of initiation time can be understood in terms of the speed with which neural activity evolves toward that terminal state (Wang et al. 2017).
+
+In a dynamical system, the speed with which activity evolves over time is determined by the derivative of the state. If we denote the state of the system by $X$, the derivative is usually specified by two factors, a function of the current state, $f(X)$, and an external input, $U$, that may be constant or context- and/or time-dependent:
+
+$$ \frac{dX}{dt} = f(X) + U $$
+
+When analyzing the collective activity of a specific population of neurons, this formulation has a straightforward interpretation. The state represents the collective firing rate of neurons under investigation, $f(X)$ accounts for the interactions among those neurons, and $U$ corresponds to external input from another population of neurons, possibly controlled by an external sensory drive. The only additional information needed to determine the behavior of this system is its initial condition, $X_0$, which specifies the initial neural state prior to generating a desired dynamic pattern of activity.
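The roles of the three components can be caricatured in a toy system (purely illustrative, not a model of DMFC): a one-dimensional state $x$ integrates a "speed" variable $s$ that is set as an initial condition and held constant by the dynamics ($ds/dt = 0$), plus a tonic input $u$; the time for $x$ to reach a fixed action-triggering threshold is then jointly controlled by $s_0$ and $u$:

```python
def time_to_threshold(s0, u=0.0, thresh=1.0, dt=1e-3, t_max=10.0):
    """Euler-integrate dx/dt = s + u, ds/dt = 0, from x = 0
    until x crosses thresh; return the crossing time."""
    x, s, t = 0.0, s0, 0.0
    while x < thresh and t < t_max:
        x += (s + u) * dt   # state update driven by held speed plus tonic input
        t += dt
    return t
```

Doubling the initial "speed" condition roughly halves the time to threshold, and a tonic input shifts it further, which is the intuition behind the input/initial-condition hypotheses tested below.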
+
+To assess the utility of the dynamical systems perspective for understanding behavioral flexibility, we assumed that $f(X)$ (i.e., synaptic coupling in DMFC) is fixed across trials. This leaves inputs and initial conditions as the only “dials” for achieving flexibility (Figure 3). To formalize a set of concrete hypotheses for the potential role of inputs and initial conditions, we first focused on behavioral flexibility with respect to $t_s$ for each gain context. How can a dynamical system adjust the speed at which activity during Set-Go evolves in a $t_s$-dependent manner? In RSG, within each context, there are no sensory inputs (exafferent or reafferent) that could serve as a $t_s$-dependent input drive. Therefore, we hypothesized that the $t_s$-dependent adjustment of
+---PAGE_BREAK---
+
+speed in the Set-Go epoch results from a parametric control of initial conditions at the time of Set. The corollary to this hypothesis is that the time-varying activity during the Ready-Set epoch is responsible for adjusting this initial condition based on the desired speed during the ensuing Set-Go epoch (Wang et al. 2017).
+
+Second, we asked how speed might be controlled across the two gain contexts. One possibility is to establish initial conditions that generalize across the two contexts (Figure 3A). To do so, initial conditions must vary with speed requirements associated with producing $t_i=gt_s$, which has implicit information about both gain and $t_s$ (i.e., $X_0(gt_s)$). If both gain and $t_s$ are encoded by initial conditions, we would expect neural trajectories to form a single organized structure with respect to the target time ($t_i=gt_s$). In the extreme case, neural trajectories associated with the same value of $gt_s$ across the two contexts (e.g., $1.5 \times 0.5$ and $1.0 \times 0.75$) should terminate in the same state at the time of Set and should evolve along identical trajectories during the Set-Go epoch. We refer to this solution as $A_1$ (Figure 3A).
+
+Alternatively, DMFC responses may rely on a persistent gain-dependent input to adjust speed across the two gain contexts (Figure 3B). As exemplified by recurrent neural network models, in dynamical systems, a persistent input can rapidly reconfigure computations by driving the system to different regions of the state space (Mante et al. 2013; Sussillo et al. 2015; Hennequin et al. 2014; Chaisangmongkon et al. 2017; Song et al. 2016). This solution, which we refer to as $A_2$, predicts a qualitatively different geometrical organization of neural trajectories compared to $A_1$, with two key features. First, there should be a gain-dependent organization forming two sets of neural trajectories in two different regions of the state space. Second, neural trajectories should be organized with respect to $t_s$ and $t_p$ (i.e., within each context) but not necessarily with respect to $t_i$ (i.e., across contexts). Because the context information in RSG was provided as an external visual input (fixation cue), and was available throughout the trial, we reasoned that this solution offers the more plausible account of how the brain might solve the task.
+
+Therefore, the dynamical systems perspective in RSG leads us to the following specific hypotheses: 1) the evolution of activity in the Ready-Set epoch parametrizes the initial conditions needed to control the speed of dynamics in the production epoch for each context, and 2) the context cue acts as a persistent external input leading the system to establish structurally similar yet distinct sets of neural trajectories associated with the two gains, and no $t_p$-related structure across contexts, consistent with $A_2$.
+
+Visualization of neural trajectories from Set to Go in state space (Figure 3C, same as in Figure 2C) provided qualitative support for these hypotheses. First, within each context, neural trajectories for different $t_p$ bins were clearly associated with different initial conditions and remained separate and ordered throughout the Set-Go epoch. Second, context information seemed to displace the entire group of neural trajectories to a different region of neural state space without altering their relative organization as a function of $t_p$. Third, indexing time along nearby trajectories suggested that the speed with which responses evolved along each trajectory was systematically related to the desired $t_i$; i.e., slower for longer $t_i$. To validate these observations quantitatively, we
+developed an analysis technique which we termed “kinematic analysis of neural trajectories” (KiNeT) that
+helped us measure the relative speed and position of multiple, possibly curved (Figure S1), neural trajectories.
+
+**Figure 3.** Dynamical systems predictions for the RSG task. (A,B) Schematic illustrations of dynamical systems solutions that generalize RSG across contexts through manipulation of initial conditions or external inputs. (A) Gain control by initial condition ($A_1$). Top: The target interval $t_i=gt_s$ ($g$, gain; $t_s$, sample interval) is encoded by the initial conditions ($X_0(gt_s)$) generated during the Ready-Set epoch (not shown). Middle: After the Set cue (open circles), activity evolves towards an action-triggering state (crosses) with a speed (colored arrows) fully determined by position along the initial condition subspace (ordinate). Activity across contexts is organized according to $t_i=gt_s$. Bottom: same trajectories, rotated to show an oblique view. Trajectories are separated only along the initial condition axis across both contexts such that the trajectory structure reflects $t_i$ explicitly. There is no separation along the Input axis. (B) Gain control by external input ($A_2$). Top: $t_s$ is encoded by initial conditions ($X_0(t_s)$), and a persistent context-dependent input encodes the gain (red and gray arrows for the two gains). Middle: within each context, trajectories associated with the same $t_s$ evolve from the same position on the initial condition axis at different speeds due to the context-dependent input. Activity is organized according to $t_s$ and not $t_i$. Bottom: oblique view. A context-dependent external input creates two sets of neural trajectories in the state space for the two contexts in the Set-Go epoch. This input controls speed in conjunction with $t_s$-dependent initial conditions, generating a structure which reflects $t_s$ and $g$ explicitly, but not $t_i$. In both $A_1$ and $A_2$, responses would be initiated when activity projected onto the time axis reaches a threshold. (C) DMFC data. Top: unknown mechanism of RSG control in DMFC.
Middle, bottom: 3-dimensional projection of DMFC activity in the Set-Go epoch (from Figure 2C). Middle: qualitative assessment indicated that neural trajectories within each context for different $t_p$ bins were associated with different initial conditions and remained separate and ordered through the response. Bottom: Across the two contexts, neural trajectories formed two separated sets of neural trajectories without altering their relative organization as a function of $t_p$. Both of these features were consistent with $A_2$. Filled circles depict states along each trajectory at a constant fraction of the trajectory length, illustrating speed differences across trajectories.
+
+Control of neural trajectories by initial condition within contexts
+
+We first employed KiNeT to validate that animals' behavior was predicted by the speed with which neural trajectories evolved over time. We reasoned that neural states evolving faster will reach the same destination on the trajectory in a shorter amount of time. Therefore, we estimated relative speed across the trajectories by performing a time alignment to identify the times when neural activity reached nearby points on each trajectory (Figure 4A). We then used this approach to analyze the geometrical structure of trajectories through the Set-Go epoch.
+
+To perform KiNeT, we binned trials from each gain and recording session into five groups according to $t_p$.
+Neural responses from these trials were averaged, then PCA was applied to generate five neural trajectories
+within the state space spanned by the first 10 PCs that explained 89% of variance. We denote each trajectory
+by $\Omega[i](t)$ (or $\Omega[i]$ for shorthand; a table with definitions of all symbols is provided in Methods) where $i$
+indexes the trajectory and $t$ represents elapsed time since Set. We estimated speed and position along each
+$\Omega[i]$ relative to the trajectory associated with the middle (third) bin, which we refer to as the reference
+trajectory $\Omega[\text{ref}]$. We denoted neural states on the reference trajectory by $s[\text{ref}][j]$, where $j$ indexes states
+through time along $\Omega[\text{ref}]$. We used curly brackets to refer to a collection of indices. For example, $s[\text{ref}]\{j\}$
+refers to all states on $\Omega[\text{ref}]$, and $t[\text{ref}]\{j\}$ corresponds to the time points on $\Omega[\text{ref}]$ associated with those
+states.
+
+For each $s[\text{ref}][j]$, we found the nearest point on all non-reference trajectories ($i \neq \text{ref}$) as measured by
+Euclidean distance. We denoted the collection of the nearest states on $\Omega[i]$ by $s[i]\{j\}$, and the
+corresponding time points by $t[i]\{j\}$. The corresponding time points along different trajectories provided the
+means for comparing speed: if $t[i]\{j\}$ were systematically greater than $t[\text{ref}]\{j\}$, we could conclude that
+$\Omega[i]$ evolves at a slower speed compared to $\Omega[\text{ref}]$ (Figure 4A). This relationship can be readily inferred from
+the slope of the line that relates $t[i]\{j\}$ to $t[\text{ref}]\{j\}$. While a unity slope indicates that the speeds are the
+same, higher and lower values would indicate slower and faster speeds of $\Omega[i]$ compared to $\Omega[\text{ref}]$,
+respectively.
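The alignment step above can be sketched as follows. This is an illustrative reimplementation under simplifying assumptions (uniform sampling and exact nearest-neighbor matching), not the authors' code; `kinet_times` and `relative_speed_slope` are hypothetical helper names.

```python
import numpy as np

# Illustrative sketch of KiNeT's time alignment. Each trajectory is a
# (time x dims) array of PCA-projected states. For every state on the
# reference trajectory we find the nearest state (Euclidean distance) on
# trajectory i and record its time; the slope of t[i]{j} against
# t[ref]{j} then indexes relative speed (slope > 1: slower).

def kinet_times(omega_i, omega_ref, dt=1.0):
    """t[i]{j}: time on omega_i of the state nearest each s[ref][j]."""
    d = np.linalg.norm(omega_ref[:, None, :] - omega_i[None, :, :], axis=-1)
    return d.argmin(axis=1) * dt

def relative_speed_slope(omega_i, omega_ref, dt=1.0):
    """Least-squares slope of t[i]{j} against t[ref]{j}."""
    t_ref = np.arange(omega_ref.shape[0]) * dt
    t_i = kinet_times(omega_i, omega_ref, dt)
    return np.polyfit(t_ref, t_i, 1)[0]

# Toy check: the same 1-D path traversed at half the speed needs twice the
# time to reach each reference state, so the slope should be close to 2.
ref = np.arange(50, dtype=float)[:, None]          # position = t
slow = 0.5 * np.arange(100, dtype=float)[:, None]  # position = t / 2
slope = relative_speed_slope(slow, ref)
```

The slope interpretation matches the text: unity means matched speeds, values above unity mean $\Omega[i]$ lags (evolves more slowly than) $\Omega[\text{ref}]$.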
+
+Applying KiNeT to neural trajectories in the Set-Go epoch indicated that $\Omega\{i\}$ evolved at similar speeds
+immediately following the Set cue (unity slope). Later, speed profiles diverged such that neural trajectories
+associated with longer intervals slowed down and trajectories associated with shorter intervals sped up for
+both gain contexts (Figure 4B). This is consistent with previous work showing that the key variable predicting $t_p$ is the speed with which neural trajectories evolve (Wang et al. 2017). One common concern in this type of analysis is that averaging firing rates across trials of slightly different duration could lead to a biased estimate of the neural trajectory. To ensure that our estimates of average speed were robust, we applied KiNeT to neural trajectories while aligning trials to Go instead of Set. Results remained unchanged and confirmed that the speed of neural trajectories predicted $t_p$ across trials (Figure S2).
+
+Having validated speed as the key variable for predicting $t_p$, we focused on our first hypothesis that the evolution of activity in the Ready-Set epoch parametrizes the initial conditions needed to control the speed of dynamics in the production epoch for each context. Because speed is a scalar variable and has an orderly relationship to $t_p$, this hypothesis predicts that the neural trajectories (and their initial conditions) should also have an orderly organizational structure with respect to $t_p$. In other words, there should be a systematic relationship between the vectors connecting nearest points across neural trajectories and the $t_p$ to which they correspond. We tested this prediction in two complementary ways. First, we performed an *analysis of direction* testing whether the vectors connecting nearby trajectories were more aligned than expected by chance. Second, we performed an *analysis of distance* asking whether the distance between the reference trajectory and the other trajectories respected the distance between the corresponding speeds.
+
+**Analysis of direction.** We used KiNeT to measure the angle between vectors connecting nearest points (Euclidean distance) across consecutive trajectories ordered by $t_p$. Let us use $\vec{\Delta}_{\Omega[i][j]}$ to denote the difference vector ($\vec{\Delta}$) connecting nearest points across trajectories (subscript $\Omega$) between $s[i][j]$ and $s[i+1][j]$. According to our hypothesis, the direction of $\vec{\Delta}_{\Omega[i][j]}$ should be similar to $\vec{\Delta}_{\Omega[i+1][j]}$ connecting $s[i+1][j]$ to $s[i+2][j]$. To test this, we measured the angle between these two difference vectors, denoted by $\theta_{\Omega[i][j]}$. The null hypothesis of unordered trajectories predicts that $\vec{\Delta}_{\Omega[i][j]}$ and $\vec{\Delta}_{\Omega[i+1][j]}$ should be unaligned on average ($\bar{\theta}_{\Omega\{i\}[j]} = 90^\circ$; the bar signifies the mean of the angles over the index $i$ in curly brackets). Results indicated that $\bar{\theta}_{\Omega\{i\}[j]}$ was substantially smaller than 90 degrees for both contexts (Figure 4C). This provides the first line of quantitative evidence for an orderly organization of neural trajectories with respect to $t_p$.
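A minimal numerical sketch of this analysis of direction, assuming the nearest states across adjacent trajectories have already been matched at one index $j$; `mean_theta` is a hypothetical helper, not the paper's implementation.

```python
import numpy as np

# Sketch of the analysis of direction: angles between successive
# difference vectors connecting matched states on adjacent trajectories.
# ~90 deg on average under the null of unordered trajectories; much
# smaller if trajectories are orderly with respect to t_p.

def angle_deg(u, v):
    """Angle between two vectors in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mean_theta(states):
    """states: (n_trajectories x dims) matched states at one index j,
    ordered by t_p. Returns the mean angle between successive deltas."""
    deltas = np.diff(states, axis=0)     # difference vectors Δ_Ω[i][j]
    return np.mean([angle_deg(deltas[i], deltas[i + 1])
                    for i in range(len(deltas) - 1)])

# States laid out roughly in order along one direction give aligned
# deltas and hence a mean angle far below 90 degrees.
ordered = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.0]])
theta = mean_theta(ordered)
```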
+
+**Analysis of distance.** We used KiNeT to measure the length of the vectors connecting nearest points on $\Omega[i]$ and $\Omega[\text{ref}]$, denoted by $D[i][j]$, at different time points ($[j]$). This analysis revealed that trajectories evolving faster than $\Omega[\text{ref}]$ and those evolving slower than $\Omega[\text{ref}]$ were located on opposite sides of $\Omega[\text{ref}]$, and that the magnitude of $D[i][j]$ increased progressively for larger speed differences (Figure 4D). This analysis
+provided clear evidence that, for each context, the relative position of neural trajectories and their initial conditions in the state space were predictive of $t_p$.
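The signed distance measure can be sketched as follows, assuming a known ordering axis pointing from short- to long-$t_p$ trajectories; the helper name and toy geometry are illustrative, not the paper's implementation.

```python
import numpy as np

# Sketch of the analysis of distance: D[i][j] is the distance between
# matched states, signed by which side of the reference trajectory the
# trajectory lies on (positive for the longer-t_p, slower side).

def signed_distance(s_i, s_ref, order_axis):
    """Distance between matched states s[i][j] and s[ref][j], signed by
    the projection of their difference onto a unit ordering vector."""
    diff = np.asarray(s_i, dtype=float) - np.asarray(s_ref, dtype=float)
    return np.sign(np.dot(diff, order_axis)) * np.linalg.norm(diff)

axis = np.array([1.0, 0.0])    # hypothetical t_p-ordering direction
s_ref = np.array([0.0, 0.0])
d_long = signed_distance([2.0, 0.0], s_ref, axis)    # slower trajectory
d_short = signed_distance([-1.0, 0.0], s_ref, axis)  # faster trajectory
```

Opposite signs for faster versus slower trajectories, with magnitude growing with the speed difference, is exactly the pattern reported in Figure 4D.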
+
+To further substantiate the link between the geometry of neural trajectories and behavior, we asked whether trial-by-trial fluctuations of $t_p$ for each $t_s$ could be explained in terms of systematic fluctuations of the speed and location of neural trajectories in the state space. We reasoned that fluctuations of $t_p$ partially reflect animals' misestimation of $t_s$. This predicts that larger values of $t_p$ for the same $t_s$ result from slower neural trajectories whose location in state space is biased toward longer values of $t_s$. We tested this prediction by using KiNeT to examine the relative geometrical organization of neural trajectories associated with larger and smaller values of $t_p$ for the same $t_s$. Results indicated that neural trajectories that correspond to larger values of $t_p$ evolved at slower speeds and were shifted in state space toward larger values of $t_s$ (Figure S3). This analysis extends the correspondence between behavior and the organization of neural trajectories to include animals' trial-by-trial variability. Together, these results provide strong evidence for our first hypothesis: that activity during the Ready-Set epoch parametrically adjusts the system's initial condition (i.e., the neural state at the time of Set), which in turn controls the speed of the neural trajectory in the Set-Go epoch and the consequent $t_p$.
+
+**Figure 4.** Kinematic analysis of neural trajectories (KiNeT). (A) Illustration of KiNeT. Top: a collection of trajectories $\Omega\{i\}$ originate from Set, organized by initial condition, and terminate at Go. Tick marks on the trajectories indicate unit time. Darker trajectories evolve at a lower speed, as demonstrated by the distance between tick marks and the dashed line connecting tick marks. KiNeT quantifies the position of trajectories and the speed with which states evolve along them relative to a reference trajectory (middle trajectory, $\Omega[\text{ref}]$). To do so, it finds a collection of states $s[i]\{j\}$ on each $\Omega[i]$ that are closest to $\Omega[\text{ref}]$ through time. Trajectories which evolve at a slower speed require more time to reach those states, leading to larger values of $t[i][j]$. KiNeT quantifies relative position by a distance measure, $D[i][j]$ (distance between $\Omega[i]$ and $\Omega[\text{ref}]$ at $t[i][j]$), that is signed (blue arrows) and is considered positive when $\Omega[i]$ corresponds to larger values of $t_p$ (slower trajectories). Middle: trajectories rotated such that the time axis is normal to the plane of illustration, denoted by a circle with an inscribed cross. Filled circles represent the states $s\{i\}[j]$ aligned to $s[\text{ref}][j]$ for a particular $j$. Vectors $\vec{\Delta}_{\Omega}[i][j]$ connect states on trajectories of shorter to longer $t_p$. Angles $\theta_{\Omega}[i][j]$ between successive $\vec{\Delta}_{\Omega}[i][j]$ provide a measure of $t_p$-related structure. Bottom: equations defining the relevant variables. (B) Speed of neural trajectories compared to $\Omega[\text{ref}]$, computed for each context separately. Shortly after Set, all trajectories evolved with similar speed (unity slope).
Afterwards, $\Omega[i]$ associated with shorter $t_s$ evolved faster than $\Omega[\text{ref}]$, as indicated by a slope of less than unity (i.e., $t[i]\{j\}$ smaller than $t[\text{ref}]\{j\}$), while $\Omega[i]$ associated with longer $t_s$ evolved more slowly than $\Omega[\text{ref}]$. Filled circles on the unity line
+indicate $j$ values for which $t\{i\}[j]$ was significantly correlated with $t_p\{i\}$ (bootstrap test, r > 0, p < 0.05, n = 100). (C) Relative position of adjacent neural trajectories computed for each context separately. $\bar{\theta}_{\Omega\{i\}[j]}$ (bar signifies average across trajectories) were significantly smaller than 90 degrees (filled circle) for the majority of the Set-Go epoch (bootstrap test, $\bar{\theta}_{\Omega\{i\}[j]}$ < 90, p < 0.05, n = 100) indicating that $\vec{\Delta}_{\Omega\{i\}[j]}$ were similar across $\Omega[i]$. (D) Distance of neural trajectories to $\Omega[\text{ref}]$ computed for each context separately. Distance measures ($D[i][j]$) indicated that $\Omega\{i\}$ had the same ordering as $t_p\{i\}$. Significance tested using bootstrap samples for each $j$ (p < 0.05, n = 100).
+
+Control of neural trajectories across contexts by external input
+
+To identify the mechanism by which flexible speed control might be generalized across contexts, we first tested whether both gain and $t_s$ are encoded by initial conditions ($A_1$). According to this alternative, neural trajectories should follow the organization of $t_p$ across both contexts (Figure 3A), in addition to within each context (Figure 4C). To test $A_1$, we sorted neural trajectories across the two contexts according to $t_p$ (Figure 5A, top), and asked whether the angle between vectors connecting nearest points ($\theta_{\Omega[i][j]}$) was significantly less than 90 degrees (Figure 5A, bottom). Unlike the within-context results (Figure 4C), when neural trajectories from both contexts were combined, the angle between nearby neural trajectories was significantly larger than 90 degrees ($p < 0.05$ for all $j$; Figure 5B). This indicates that trajectories across contexts do not have an orderly relationship to $t_i$ ($A_1$: less than 90 deg), even though they exhibit a structural organization that deviates from randomness (90 deg).
+
+Next, we investigated the hypothesis that the context cue acts as a persistent external input ($A_2$; Figure 3B), leading the system to establish structurally similar but distinct collections of neural trajectories across contexts (Figure 6A,B). This hypothesis can be broken down into a set of specific geometrical constraints in the Set-Go epoch. We determined whether the data met these constraints by testing whether the converse of each could be rejected, as illustrated in Figure 6C-F. If we denote the collection of neural trajectories in the two contexts by $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$, these constraints and tests can be formalized as follows:
+
+1. $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ should evolve in the same direction as a function of time with different average speeds (i.e., slower for $\Omega_{g=1.5}\{i\}$). If the converse were true (i.e., trajectories evolving in different directions, Figure 6C, left), we would expect no systematic relationship between time points across the two contexts. Results from KiNeT across contexts (see Methods) revealed a monotonically increasing relationship between $t_{g=1}[\text{ref}]\{j\}$ and $t_{g=1.5}[\text{ref}]\{j\}$, confirming that Set-Go trajectories across contexts evolved in the same direction (Figure 6C, right). Moreover, $t_{g=1.5}[\text{ref}]\{j\}$ had a higher rate of change than $t_{g=1}[\text{ref}]\{j\}$, indicating that average speeds were slower in the $g=1.5$ condition. This suggests that speed control played a consistent role across contexts (Figure 6A).
+
+2. $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ should be organized similarly with respect to $t_p$. In other words, the vector that connects nearby points in $\Omega_{g=1}\{i\}$ should be aligned to its counterpart that connects nearby points in $\Omega_{g=1.5}\{i\}$. To evaluate this constraint, we used the angle between pairs of vectors that connect nearby points within each context. We use an example to illustrate the procedure (Figure 6B). Consider one vector connecting nearby points in two successive neural trajectories in the gain of 1 (e.g., $\Omega_{g=1}[1]$ and $\Omega_{g=1}[2]$),
+and another vector connecting the corresponding points in the gain of 1.5 (e.g., $\Omega_{g=1.5}[1]$ and $\Omega_{g=1.5}[2]$). A similar orientation between the two groups of trajectories (Figure 6A) would cause the angle between these vectors ($\theta_g[i][j]$) to be significantly smaller than 90 degrees. If instead, $\Omega_{g=1}$ and $\Omega_{g=1.5}$ were oriented differently (Figure 6D, left) or had no consistent relationship, these vectors would be on average orthogonal. Using KiNeT, we found that this angle ($\theta_g[i][j]$) was consistently smaller than 90 degrees throughout the Set-Go epoch, providing quantitative evidence that the collections of neural trajectories associated with the two gains were structurally similar (Figure 6A).
+
+3. If context information is provided as a tonic input, $\Omega_{g=1}$ and $\Omega_{g=1.5}$ should be separated in state space along a context axis throughout the Set-Go epoch. To verify this constraint, we assumed that neural trajectories for each context were embedded in distinct manifolds and compared the minimum distance between the two manifolds ($D_g$) to an analogous distance metric within each manifold (Figure 6B; see Methods). These distance measures should be the same if the groups of trajectories associated with the two contexts overlap in state space (Figure 6E, left). However, we found distances to be substantially larger across contexts compared to within contexts (Figure 6E, right). This confirms that the groups of trajectories associated with the two contexts were separated in state space (Figure 6A).
+
+4. The results so far reject a number of alternative hypotheses (Figure 6C,D,E) and leave two possibilities: either $\Omega_{g=1}$ and $\Omega_{g=1.5}$ are separated along the same dimension that separates trajectories within each context (Figure 6F, left), or they are separated along a distinct input axis in accordance with $A_2$ (Figure 6A). To distinguish between these two, we asked whether the vector associated with the minimum distance $D_g[j]$ ($\vec{\Delta}_g[j]$) was aligned to the vectors connecting nearby states within each context ($\vec{\Delta}_{\Omega}\{i\}[j]$). Analysis of the angle between these vectors ($\theta_g\{i\}[j]$) indicated that the two were orthogonal for almost all $j$ (Figure 6F, right). This ruled out the remaining possibility that trajectories across contexts were separated along the same dimension as within contexts (Figure 6F, left).
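Constraints 3 and 4 can be illustrated on toy geometry as follows; the point-cloud distance helper and the example coordinates are assumptions for exposition, not the exact procedure from Methods.

```python
import numpy as np

# Toy illustration of constraints 3 and 4: compute the minimum distance
# between the two context point clouds, and check that the cross-context
# separation vector Δ_g is orthogonal to the within-context t_p axis.

def min_between(A, B):
    """Minimum pairwise Euclidean distance between two point clouds
    (n x dims), plus the difference vector achieving it."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmin(), d.shape)
    return d[i, j], A[i] - B[j]

# Hypothetical A2-like geometry: states ordered along x (t_p axis) in
# each context, with the two contexts separated along y (input axis).
g1 = np.array([[float(i), 0.0] for i in range(5)])
g15 = g1 + np.array([0.0, 10.0])
D_g, delta_g = min_between(g1, g15)
tp_axis = np.array([1.0, 0.0])
cos_angle = np.dot(delta_g, tp_axis) / np.linalg.norm(delta_g)
```

In this configuration the across-context distance is large (constraint 3) while the separation vector is orthogonal to the $t_p$ axis (constraint 4), mirroring the pattern found in the data.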
+
+Having validated these constraints quantitatively, we concluded that population activity across gains formed two groups of isomorphic speed-dependent neural trajectories (Figure 6A). These results support our primary hypothesis that flexible control of speed based on gain context was established by a context-dependent persistent external input (Figure 3B).
+
+**Figure 5.** Neural trajectories across contexts do not form a single structure reflecting $t_p$. (A) A schematic illustrating neural trajectories across the two contexts after Set. Top: The expected geometrical structure under $A_1$. Neural trajectories for the gain of 1 (gray) and 1.5 (red) are organized along a single initial condition axis and ordered with respect to $t_p$. Bottom: A rotation of the top showing neural trajectories with the time axis normal to the plane of illustration. If the neural trajectories were organized as such, then the angle between vectors connecting nearby points (e.g., $\theta_{\Omega}[3][j]$) would be less than 90 degrees ($A_1$, **Figure 3A**). (B) Left: orientation of vectors connecting adjacent neural trajectories combined across the two contexts. Right: possible geometrical structures, including $A_1$ (bottom), $A_2$ (top), and unorganized (middle). $\bar{\theta}_{\Omega}\{i\}[j]$ was larger than 90 degrees for all $j$ in the Set-Go interval, consistent with $A_2$. Shaded regions represent 90% bootstrap confidence intervals.
+
+**Figure 6.** Neural trajectories comprise distinct but similar structures across gains. (A) A schematic showing the organization of neural trajectories in a subspace spanned by Input, Initial condition and Time if context were controlled by a persistent external input. If DMFC were to receive a gain-dependent input, we would expect neural trajectories from Set to Go to be separated along an input subspace, generating two similar but separated $t_p$-related structures for each context ($A_2$, **Figure 3B**). We verified this geometrical structure by excluding alternative structures (interdictory circles indicate rejected alternatives). (B) An illustration of neural trajectories for $g=1$ (gray filled circle) and $g=1.5$ (red filled circle) with the time axis normal to the plane of illustration. Gray and red arrows show vectors connecting nearby points in each context independently ($\vec{\Delta}_{g=1}$ and $\vec{\Delta}_{g=1.5}$). When the neural trajectories associated with the two gains are structured similarly, these vectors are aligned and the angle between them ($\theta_g$) is less than 90 deg. We used KiNeT to test this possibility (see Methods). (C) Left: Schematic illustrating a condition in which the time axes for trajectories in the two contexts (gray and red) are not aligned. Right: $t_{g=1}[\text{ref}]\{j\}$ increased monotonically with $t_{g=1.5}[\text{ref}]\{j\}$ indicating that the time axes across contexts were aligned. Values of $t_{g=1.5}[\text{ref}]\{j\}$ above the unity line indicate that activity evolved at a slower speed in the $g=1.5$ context. The dashed gray line represents unity and the dashed red line represents expected values for $t_{g=1.5}[\text{ref}]\{j\}$ if speeds were scaled perfectly by a factor of 1.5. (D) Left: Schematic illustrating an example configuration in which $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ do not share the same $t_p$-related structure.
Right: $\bar{\theta}_g[i][j]$ was significantly less than 90 degrees for all $j$ indicating that the $t_p$-structure was similar across the two contexts. (E) Left: Schematic illustrating a condition in which $\Omega_{g=1}\{i\}$ and
+$\Omega_{g=1.5}\{i\}$ are overlapping. Right: The minimum distance $D_g$ across contexts (black line) was substantially larger than that found between subsets of trajectories within contexts (red and gray lines, see Methods) indicating the two sets of trajectories were not overlapping. (F) Left: Schematic illustrating a condition in which $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ are separated along the same direction that separates neural trajectories within each context. Right: $\vec{\Delta}_g[j]$ was orthogonal to $\vec{\Delta}_{\Omega\{i\}[j]}$ representing $t_p$-related structure within each context (gray and red lines). In (C-E), shaded regions represent 90% bootstrap confidence intervals, and circles represent statistical significance (p < 0.05, bootstrap test, n = 100).
+
+RNN models recapitulate the predictions of inputs and initial conditions
+
+The geometry and dynamics of DMFC responses were consistent with the hypothesis that behavioral flexibility in the RSG task relies on systematic adjustments of initial conditions and external inputs of a dynamical system. Motivated by recent advances in the use of recurrent neural networks (RNNs) as a tool for testing hypotheses about cortical dynamics (Mante et al. 2013; Hennequin et al. 2014; Sussillo et al. 2015; Chaisangmongkon et al. 2017; Wang et al. 2017), we investigated whether RNNs trained to perform the RSG task would establish similar geometrical structures and dynamics.
+
+We focused on a generic class of RNNs comprised of synaptically coupled nonlinear units that receive nonspecific background activity (see Methods). First, we tested whether RNNs could perform the RSG task in a single gain context ($g=1$ or $g=1.5$ only). To do so, we created RNNs that received an additional input encoding Ready and Set as two brief pulses separated by $t_s$. We trained these RNNs to generate a linear output function after Set that reached a threshold (Go) at the desired production interval, $t_i = gt_s$. Analysis of successfully trained RNNs revealed that they, like DMFC, controlled $t_p$ by adjusting the speed of neural trajectories within a low-dimensional geometrical structure parameterized by initial conditions (Figure S4).
+
+Next, we investigated RNNs trained to perform the RSG task across multiple gain values. Our primary aim was to verify the importance of a persistent gain-dependent input in establishing isomorphic geometrical structures similar to DMFC (Figures 3, 6). To do so, we created RNNs with two different architectures, one in which the gain information was provided by the level of a persistent input, and another in which the gain information was provided by a transient pulse before the Ready cue. We refer to these networks as tonic-input RNNs and transient-input RNNs, respectively (Figure 7A). We used the tonic-input RNN as a direct test of whether a gain-dependent persistent input could emulate the geometrical structure of responses in DMFC, and the transient-input RNN to test whether such persistence was necessary.
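The two input regimes can be sketched as time series as follows. All timing parameters, amplitudes, and the helper name `make_inputs` are illustrative assumptions, not the trained networks' actual settings.

```python
import numpy as np

# Hypothetical sketch of the two context-input regimes fed to the RNNs:
# a context (gain) channel that either persists through the trial
# (tonic) or is a brief pulse that terminates before Ready (transient),
# alongside the Ready/Set timing pulses separated by t_s.

def make_inputs(t_s, gain, dt=0.01, pre=0.5, post=1.0, pulse=0.05):
    """Return time axis, Ready/Set timing pulses, and the tonic vs
    transient versions of the context (gain) channel."""
    ready, set_t = pre, pre + t_s
    t = np.arange(0.0, set_t + post, dt)
    # brief Ready and Set pulses separated by t_s
    timing = ((np.abs(t - ready) < pulse / 2) |
              (np.abs(t - set_t) < pulse / 2)).astype(float)
    tonic = np.full_like(t, gain)                           # all trial
    transient = np.where(t < ready - 2 * pulse, gain, 0.0)  # off before Ready
    return t, timing, tonic, transient

t, timing, tonic, transient = make_inputs(t_s=0.8, gain=1.5)
```

The only difference between the two architectures is the context channel; everything else (timing pulses, training objective) is held fixed, which is what makes the comparison a test of persistence per se.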
+
+Using PCA and KiNeT, we found that neural trajectories in the two networks were structured differently. In the tonic-input RNN, trajectories formed two isomorphic structures separated along the dimension associated with the gain-dependent persistent input (Figure 7B). In contrast, trajectories generated by the transient-input RNN were better described as coalescing towards a single structure parameterized by initial condition (Figure 7C). To verify these observations quantitatively, we evaluated the geometry of neural trajectories in the two RNN variants using the same analyses we performed on DMFC activity. In particular, we sorted trajectories with respect to $t_p$ across the two gain contexts ($g=1$ and $g=1.5$) and quantified the angle between vectors connecting nearest points ($\theta_{\Omega}[i][j]$). As noted in the analysis of DMFC, this angle is expected to be acute if trajectories form a single structure ($A_1$: $\bar{\theta}_{\Omega}\{i\}[j] < 90^\circ$), and obtuse if trajectories form two gain-dependent
+structures ($A_2$: $\bar{\theta}_{\Omega}\{i\}[j] > 90^\circ$). As predicted, the tonic-input RNNs solved the task by forming two isomorphic structures ($A_2$), indicating that when a persistent gain-dependent input is present, RNNs rely on a solution with separate gain-dependent geometrical structures (Figure 7D). In contrast, in the transient-input RNNs, angles between consecutive trajectories were acute ($A_1$; Figure 7E). This result strengthens the conclusion about the importance of a persistent gain-dependent input in establishing separate isomorphic structures.
+
+We also compared the two RNNs in terms of the distance between trajectories across the two contexts using the same metric ($D_g$) we previously used for the analysis of DMFC (Figure 6E). The minimum distance between $\Omega_{g=1}\{i\}$ and $\Omega_{g=1.5}\{i\}$ at the time of Set was consistently smaller in the transient-input RNN compared to the tonic-input RNN (Figure 7F,G). In some of the successfully trained transient networks, $D_g$ was relatively large at the time of Set, but this distance consistently decayed from Set to Go. In contrast, in the tonic-input RNN, $D_g$ remained large throughout the production epoch. We compared the two types of RNN quantitatively by comparing values of $D_g$ in each RNN normalized by the distance between the trajectories that correspond to the shortest and longest $t_p$ bins for the $g=1$ context in the same RNN. In the tonic networks, the minimum normalized distance ranged between 0.4 and 1.6, which was nearly 10 times larger than that observed in the transient networks (0.003 to 0.04). Additionally, trajectories in all transient networks gradually established a $t_i$-related structure consistent with $A_1$. In contrast, trajectories in the tonic networks, like the DMFC data, were characterized by two separate $t_p$-related structures, one for each gain context. These results provide an important theoretical confirmation of our original dynamical systems hypothesis that when gain information is provided as a persistent input, the system establishes distinct and isomorphic gain-dependent sets of neural trajectories.
+
+**Figure 7.** RNNs with tonic but not transient input captured the structure of activity in DMFC. (A) Schematic illustration of the recurrent neural networks (RNNs). The networks are provided with brief Ready and Set pulses separated in time by $t_s$, after which the activity projected onto the output space by weighting function $z$ must generate a ramp to a threshold (dashed line) at the context-dependent $t_i$. Additionally, each network is provided with a context-dependent “input” which either terminates prior to Ready (“Transient input,” top), or persists throughout the trial (“Tonic input,” bottom). (B) Top: state-space projections of tonic-input RNN activity in the Set-Go epoch within the plane spanned by Initial condition (ordinate) and Time (abscissa). Within this plane of view, neural trajectories within each context are separated based on $t_p$ but overlap with respect to gain. Bottom: Same neural trajectories shown in the top panel viewed within the plane spanned by Input (ordinate) and Time (abscissa). In this view, neural trajectories are separated by gain but overlap with respect to $t_p$ within each gain. Results are shown with the same format as Figure 3. (C) Same as panel B for the transient-input RNN. Top: Trajectories, when viewed within the plane of Initial condition and Time, are organized with respect to $t_p$ across both gains. Bottom: when viewed within the plane of Input and Time, trajectories are highly overlapping irrespective of gain. (D)
+---PAGE_BREAK---
+
+Analysis of direction in the tonic-input RNN with the same format as **Figure 5B**. $\bar{\theta}_{\Omega}[i][j]$ was larger than 90 deg for the entire Set-Go epoch. This is consistent with a geometry in which the two gains form two separate sets of isomorphic neural trajectories (inset).
+
+(E) Same as panel D for the transient-input network for which $\bar{\theta}_{\Omega}[i][j]$ was consistently less than 90 deg. This is consistent with a geometry in which neural trajectories are organized with respect to $t_p$ regardless of the gain context (inset). (F,G) Trajectory separation across contexts for the tonic-input (F) and transient-input (G) networks with the same format as **Figure 6E**. $D_g$ was substantially larger through the Set-Go epoch in the tonic-input network (F). In (D-G), shaded regions represent 90% bootstrap confidence intervals, and circles represent statistical significance ($p < 0.05$, bootstrap test, n = 100).
+---PAGE_BREAK---
+
+# Discussion
+
+Linking behavioral computations to neural mechanisms requires that the space of models we consider suitably match the computational demands of the behavior. In this study, we focused on the computations that enable the brain to exert precise and flexible control over movement initiation time (Wang et al. 2017). Because such temporal control depends on intrinsically dynamic patterns of neural activity, we employed a dynamical systems perspective to understand the underlying computational logic. An important feature of the dynamical systems view is that it obviates the need for the system to harbor an explicit representation of experimentally defined task-relevant variables ($t_s$, $g$, and $t_t$). Instead, neural signals that control behavior may be more appropriately characterized in terms of constraints imposed by latent dynamics that hold an implicit representation of task-relevant variables. This viewpoint has a strong basis in current theories of motor control that posit an implicit representation of kinematic information in motor cortical activity during movements (Churchland et al. 2010; Churchland et al. 2012; Chaisangmongkon et al. 2017; Fetz 1992; Shenoy et al. 2013; Michaels et al. 2016). These theories cast movement control in terms of the function of an inverse model (Wolpert & Kawato 1998; Todorov & Jordan 2002; Sabes 2000) that inverts a desired endpoint into suitable control signals during movement. We built upon this framework by evaluating the utility of dynamical systems theory in characterizing the control mechanisms the brain uses to produce a desired interval ($t_t$) jointly specified by the gain and $t_s$ ($t_t = g t_s$).
+
+Results indicated that flexible control of behavior could be parsed in terms of systematic adjustments to initial conditions and external inputs of a dynamical system. Activity structure within each gain context indicated that the system's initial conditions controlled $t_p$ by parameterizing the speed of neural trajectories (Jazayeri & Shadlen 2015; Wang et al. 2017). The displacement of neural trajectories in the state space as a function of gain, and the lack of structural representation of $t_p$ across both gains suggested that DMFC received the gain information as a context-dependent tonic input. Following recent advances in using RNNs to generate and test hypotheses about dynamical systems (Mante et al. 2013; Rigotti et al. 2010; Hennequin et al. 2014; Rajan et al. 2016; Sussillo et al. 2015; Chaisangmongkon et al. 2017), we verified this interpretation by analyzing the behavior of different RNN models trained to perform the RSG task with either tonic or transient context-dependent inputs. Although both networks used initial conditions to set the speed of neural trajectories, only the tonic-input RNNs reliably established separate structures of neural trajectories across gains, similar to what we found in DMFC.
+
+Although we do not know the constraints that led the brain to establish separate geometrical structures, we speculate about potential computational advantages associated with this particular solution. First and foremost, this may be a particularly robust solution; as the gain information was provided by a persistent visual cue, the brain could use this input as a reliable signal to modulate neural dynamics in RSG. This solution may also
+---PAGE_BREAK---
+
+reflect the animals' learning strategy. We trained monkeys to perform the RSG task with two gain contexts. At one extreme, animals could have treated these as completely different tasks, leading to completely unrelated response structures for the two gains. At the other extreme, animals could have established a single parametric solution that would enable them to perform the two contexts as part of a single continuum (e.g., represent $t_t$). DMFC responses, however, matched neither extreme. Instead, the system established what might be viewed as a modular solution comprising two separate isomorphic structures. We take this as evidence that the brain sought similar solutions for the two contexts, but did so while keeping the solutions separated in the state space. This strategy preserves a separable, unambiguous representation of gain and $t_s$ at the population level (Machens et al. 2010; Mante et al. 2013; Kobak et al. 2016) and provides the additional flexibility of adjusting the two parameters independently. Future extensions of our experimental paradigm to cases where context information is not present throughout the trial (e.g., internally inferred rules) might provide a more direct test of these possibilities.
+
+Regardless of the learning strategies and constraints that shaped DMFC responses, our results highlight an important computational role for inputs that deviates from traditional views. We found that changing the level of a static input can be used to generalize an arbitrary stimulus-response mapping in the RSG task to a new context. Similar inferences can be made from other recent studies that have evaluated the computational utility of inputs that encode task rules and behavioral contexts (Mante et al. 2013; Song et al. 2016; Chaisangmongkon et al. 2017). Extending this idea, it may be possible for the system to use multiple orthogonal input vectors to flexibly and rapidly switch between sensorimotor mappings along different dimensions. Together, these findings suggest that a key function of cortical inputs may be to flexibly reconfigure the intrinsic dynamics of cortical circuits by driving the system to different regions of the state space. This allows the same group of neurons to access a reservoir of latent dynamics needed to perform different task-relevant computations.
+
+Our results raise a number of additional important questions. First, future work should identify the neurobiological substrate of the putative context-dependent input to DMFC in the RSG task, which may originate from any of various cortical and subcortical areas (Lu et al. 1994; Bates & Goldman-Rakic 1993; Wang et al. 2005; Akkal et al. 2007; Wallis et al. 2001). The nature of the input is also unknown. In our RNN models, context information was provided by an external drive, and was indistinguishable from recurrent inputs from the perspective of individual units. In cortex, reconfiguration of circuit dynamics may be achieved either by an external drive, similar to the function of thalamic relay signals, or through targeted modulation of neural activity (Harris & Thiele 2011; Nadim & Bucher 2014). Second, while the signals recorded in this study were consistent with a prominent role for DMFC in RSG, other brain areas, such as the thalamus (Guo et al. 2017; Schmitt et al. 2017) and prefrontal cortex (Miller & Cohen 2001), are also likely to help maintain the observed dynamics. Third, although we assumed that recurrent interactions were fixed during our experiment, it is almost certain
+---PAGE_BREAK---
+
+that synaptic plasticity plays a key role as the network learns to incorporate context-dependent inputs (Kleim et al. 1998; Pascual-Leone et al. 1995; Yang et al. 2014; Xu et al. 2009). Finally, the persistent separation of neural trajectories observed in DMFC allowed for a dynamical account which did not require invocation of “hidden” network states to explain timing behavior (Buonomano & Merzenich 1995; Karmarkar & Buonomano 2007; Murray & Escola 2017) or contextual control (Stokes et al. 2013). However, it is possible that factors not measured by extracellular recording (e.g., short-term synaptic plasticity) contribute to both contextual control and timing behavior in RSG and similar tasks. These open questions aside, our results provide a novel way to bridge the divide between neural activity and behavior by using the language of dynamical systems.
+---PAGE_BREAK---
+
+## References
+
+Acerbi, L., Wolpert, D.M. & Vijayakumar, S., 2012. Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing. *PLoS computational biology*, 8(11), p.e1002771.
+
+Afshar, A. et al., 2011. Single-trial neural correlates of arm movement preparation. *Neuron*, 71(3), pp.555–564.
+
+Akkal, D., Dum, R.P. & Strick, P.L., 2007. Supplementary motor area and presupplementary motor area: targets of basal ganglia and cerebellar output. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 27(40), pp.10659–10673.
+
+Bates, J.F. & Goldman-Rakic, P.S., 1993. Prefrontal connections of medial motor areas in the rhesus monkey. *The Journal of comparative neurology*, 336(2), pp.211–228.
+
+Brass, M. & von Cramon, D.Y., 2002. The role of the frontal cortex in task preparation. *Cerebral cortex*, 12(9), pp.908–914.
+
+Buonomano, D.V. & Merzenich, M.M., 1995. Temporal information transformed into a spatial code by a neural network with realistic properties. *Science*, 267(5200), pp.1028–1030.
+
+Carnevale, F. et al., 2015. Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty. *Neuron*. Available at: http://dx.doi.org/10.1016/j.neuron.2015.04.014.
+
+Chaisangmongkon, W. et al., 2017. Computing by Robust Transience: How the Fronto-Parietal Network Performs Sequential, Category-Based Decisions. *Neuron*, 93(6), pp.1504–1517.e4.
+
+Churchland, M.M. et al., 2010. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? *Neuron*, 68(3), pp.387–400.
+
+Churchland, M.M. et al., 2012. Neural population dynamics during reaching. *Nature*, 487(7405), pp.51–56.
+
+Churchland, M.M., Afshar, A. & Shenoy, K.V., 2006. A central source of movement variability. *Neuron*, 52(6), pp.1085–1096.
+
+Coull, J.T. et al., 2004. Functional anatomy of the attentional modulation of time estimation. *Science*, 303(5663), pp.1506–1508.
+
+Cui, X. et al., 2009. Ready...go: Amplitude of the FMRI signal encodes expectation of cue arrival time. *PLoS biology*, 7(8), p.e1000167.
+
+Fetz, E.E., 1992. Are movement parameters recognizably coded in the activity of single neurons? *The Behavioral and brain sciences*. Available at: http://journals.cambridge.org/abstract_S0140525X00072599.
+
+Garcia, C., 2012. A simple procedure for the comparison of covariance matrices. *BMC evolutionary biology*, 12, p.222.
+
+Guo, Z.V. et al., 2017. Maintenance of persistent activity in a frontal thalamocortical loop. *Nature*, 545(7653), pp.181–186.
+
+Halsband, U. et al., 1993. The role of premotor cortex and the supplementary motor area in the temporal control of movement in man. *Brain: a journal of neurology*, 116 (Pt 1), pp.243–266.
+
+Hanes, D.P. & Schall, J.D., 1996. Neural control of voluntary movement initiation. *Science*, 274(5286),
+---PAGE_BREAK---
+
+pp.427–430.
+
+Harris, K.D. & Thiele, A., 2011. Cortical state and attention. *Nature reviews. Neuroscience*, 12(9), pp.509–523.
+
+Hennequin, G., Vogels, T.P. & Gerstner, W., 2014. Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. *Neuron*, 82(6), pp.1394–1406.
+
+Isoda, M. & Hikosaka, O., 2007. Switching from automatic to controlled action by monkey medial frontal cortex. *Nature neuroscience*, 10(2), pp.240–248.
+
+Isoda, M. & Tanji, J., 2003. Contrasting neuronal activity in the supplementary and frontal eye fields during temporal organization of multiple saccades. *Journal of neurophysiology*, 90(5), pp.3054–3065.
+
+Jazayeri, M. & Shadlen, M.N., 2015. A Neural Mechanism for Sensing and Reproducing a Time Interval. *Current biology: CB*. Available at: http://dx.doi.org/10.1016/j.cub.2015.08.038.
+
+Jazayeri, M. & Shadlen, M.N., 2010. Temporal context calibrates interval timing. *Nature neuroscience*, 13(8), pp.1020–1026.
+
+Karmarkar, U.R. & Buonomano, D.V., 2007. Timing in the absence of clocks: encoding time in neural network states. *Neuron*, 53(3), pp.427–438.
+
+Kim, J. et al., 2009. Inactivation of medial prefrontal cortex impairs time interval discrimination in rats. *Frontiers in behavioral neuroscience*, 3, p.38.
+
+Kim, J. et al., 2013. Neural correlates of interval timing in rodent prefrontal cortex. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 33(34), pp.13834–13847.
+
+Kleim, J.A., Barbay, S. & Nudo, R.J., 1998. Functional reorganization of the rat motor cortex following motor skill learning. *Journal of neurophysiology*, 80(6), pp.3321–3325.
+
+Kobak, D. et al., 2016. Demixed principal component analysis of neural population data. *eLife*, 5. Available at: http://dx.doi.org/10.7554/eLife.10989.
+
+Kunimatsu, J. & Tanaka, M., 2012. Alteration of the timing of self-initiated but not reactive saccades by electrical stimulation in the supplementary eye field. *The European journal of neuroscience*, 36(9), pp.3258–3268.
+
+Kurata, K. & Wise, S.P., 1988. Premotor and supplementary motor cortex in rhesus monkeys: neuronal activity during externally- and internally-instructed motor tasks. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 72(2), pp.237–248.
+
+Lu, M.T., Preston, J.B. & Strick, P.L., 1994. Interconnections between the prefrontal cortex and the premotor areas in the frontal lobe. *The Journal of comparative neurology*, 341(3), pp.375–392.
+
+Macar, F., Coull, J. & Vidal, F., 2006. The supplementary motor area in motor and perceptual time processing: fMRI studies. *Cognitive processing*, 7(2), pp.89–94.
+
+Machens, C.K., Romo, R. & Brody, C.D., 2010. Functional, But Not Anatomical, Separation of “What” and “When” in Prefrontal Cortex. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 30(1), pp.350–360.
+
+Maimon, G. & Assad, J.A., 2006. A cognitive signal for the proactive timing of action in macaque LIP. *Nature neuroscience*, 9(7), pp.948–955.
+
+Mante, V. et al., 2013. Context-dependent computation by recurrent dynamics in prefrontal cortex. *Nature*,
+---PAGE_BREAK---
+
+503(7474), pp.78–84.
+
+Matell, M.S., Meck, W.H. & Nicolelis, M.A.L., 2003. Interval timing and the encoding of signal duration by ensembles of cortical and striatal neurons. *Behavioral neuroscience*, 117(4), pp.760–773.
+
+Matsuzaka, Y. & Tanji, J., 1996. Changing directions of forthcoming arm movements: neuronal activity in the presupplementary and supplementary motor area of monkey cerebral cortex. *Journal of neurophysiology*, 76(4), pp.2327–2342.
+
+Meister, M.L.R., Hennig, J.A. & Huk, A.C., 2013. Signal multiplexing and single-neuron computations in lateral intraparietal area during decision-making. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 33(6), pp.2254–2267.
+
+Merchant, H. et al., 2013. Interval Tuning in the Primate Medial Premotor Cortex as a General Timing Mechanism. *Journal of Neuroscience*, 33(21), pp.9082-9096.
+
+Merchant, H. et al., 2011. Measuring time with different neural chronometers during a synchronization-continuation task. *Proceedings of the National Academy of Sciences of the United States of America*, 108(49), pp.19784–19789.
+
+Michaels, J.A. et al., 2015. Predicting Reaction Time from the Neural State Space of the Premotor and Parietal Grasping Network. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 35(32), pp.11415–11432.
+
+Michaels, J.A., Dann, B. & Scherberger, H., 2016. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning. *PLoS computational biology*, 12(11), p.e1005175.
+
+Miller, E.K. & Cohen, J.D., 2001. An integrative theory of prefrontal cortex function. *Annual review of neuroscience*, 24, pp.167–202.
+
+Mita, A. et al., 2009. Interval time coding by neurons in the presupplementary and supplementary motor areas. *Nature neuroscience*, 12(4), pp.502–507.
+
+Miyazaki, M., Nozaki, D. & Nakajima, Y., 2005. Testing Bayesian models of human coincidence timing. *Journal of neurophysiology*, 94(1), pp.395–399.
+
+Murakami, M. et al., 2014. Neural antecedents of self-initiated actions in secondary motor cortex. *Nature neuroscience*, 17(11), pp.1574–1582.
+
+Murray, J.M. & Escola, G.S., 2017. Learning multiple variable-speed sequences in striatum via cortical tutoring. *eLife*, 6. Available at: http://dx.doi.org/10.7554/eLife.26084.
+
+Nadim, F. & Bucher, D., 2014. Neuromodulation of neurons and synapses. *Current opinion in neurobiology*, 29, pp.48–56.
+
+Ohmae, S. et al., 2008. Neuronal activity related to anticipated and elapsed time in macaque supplementary eye field. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 184(4), pp.593–598.
+
+Okano, K. & Tanji, J., 1987. Neuronal activities in the primate motor fields of the agranular frontal cortex preceding visually triggered and self-paced movement. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 66(1), pp.155–166.
+
+Pachitariu, M. et al., 2016. Kilosort: realtime spike-sorting for extracellular electrophysiology with hundreds of channels. *bioRxiv*, p.061481. Available at: http://www.biorxiv.org/content/early/2016/06/30/061481
+---PAGE_BREAK---
+
+[Accessed September 11, 2017].
+
+Pascual-Leone, A. et al., 1995. Modulation of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine motor skills. *Journal of neurophysiology*, 74(3), pp.1037-1045.
+
+Pfeuty, M., Ragot, R. & Pouthas, V., 2005. Relationship between CNV and timing of an upcoming event. *Neuroscience letters*, 382(1-2), pp.106-111.
+
+Pruszynski, J.A. et al., 2011. Primary motor cortex underlies multi-joint integration for fast feedback control. *Nature*, 478(7369), pp.387-390.
+
+Rajan, K. & Abbott, L.F., 2006. Eigenvalue spectra of random matrices for neural networks. *Physical review letters*, 97(18), p.188104.
+
+Rajan, K., Harvey, C.D. & Tank, D.W., 2016. Recurrent Network Models of Sequence Generation and Memory. *Neuron*, 90(1), pp.128-142.
+
+Rakitin, B.C. et al., 1998. Scalar expectancy theory and peak-interval timing in humans. *Journal of experimental psychology: Animal behavior processes*, 24(1), pp.15-33.
+
+Rao, S.M., Mayer, A.R. & Harrington, D.L., 2001. The evolution of brain activation during temporal processing. *Nature neuroscience*, 4(3), pp.317-323.
+
+Ray, S. & Heinen, S.J., 2015. A mechanism for decision rule discrimination by supplementary eye field neurons. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 233(2), pp.459-476.
+
+Remington, E. & Jazayeri, M., 2017. Late Bayesian inference in sensorimotor behavior. *bioRxiv*, p.130062. Available at: http://biorxiv.org/content/early/2017/04/24/130062 [Accessed April 24, 2017].
+
+Rigotti, M. et al., 2010. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. *Frontiers in computational neuroscience*, 4, p.24.
+
+Roitman, J.D. & Shadlen, M.N., 2002. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 22(21), pp.9475-9489.
+
+Romo, R. & Schultz, W., 1987. Neuronal activity preceding self-initiated or externally timed arm movements in area 6 of monkey cortex. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 67(3), pp.656-662.
+
+Romo, R. & Schultz, W., 1992. Role of primate basal ganglia and frontal cortex in the internal generation of movements. III. Neuronal activity in the supplementary motor area. *Experimental brain research. Experimentelle Hirnforschung*. *Experimentation cerebrale*, 91(3), pp.396-407.
+
+Rossant, C. et al., 2016. Spike sorting for large, dense electrode arrays. *Nature neuroscience*, 19(4), pp.634-641.
+
+Sabes, P.N., 2000. The planning and control of reaching movements. *Current opinion in neurobiology*, 10(6), pp.740-746.
+
+Schmitt, L.I. et al., 2017. Thalamic amplification of cortical connectivity sustains attentional control. *Nature*, 545(7653), pp.219-223.
+
+Scott, S.H., 2004. Optimal feedback control and the neural basis of volitional motor control. *Nature reviews. Neuroscience*, 5(7), pp.532-546.
+---PAGE_BREAK---
+
+Seely, J.S. et al., 2016. Tensor Analysis Reveals Distinct Population Structure that Parallels the Different Computational Roles of Areas M1 and V1. *PLoS computational biology*, 12(11), p.e1005164.
+
+Shenoy, K.V., Sahani, M. & Churchland, M.M., 2013. Cortical control of arm movements: a dynamical systems perspective. *Annual review of neuroscience*, 36, pp.337–359.
+
+Shima, K. et al., 1996. Role for cells in the presupplementary motor area in updating motor plans. *Proceedings of the National Academy of Sciences of the United States of America*, 93(16), pp.8694–8698.
+
+Smith, N.J. et al., 2010. Reversible Inactivation of Rat Premotor Cortex Impairs Temporal Preparation, but not Inhibitory Control, During Simple Reaction-Time Performance. *Frontiers in integrative neuroscience*, 4, p.124.
+
+Song, H.F., Yang, G.R. & Wang, X.-J., 2016. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework. *PLoS computational biology*, 12(2), p.e1004792.
+
+Stokes, M.G. et al., 2013. Dynamic coding for cognitive control in prefrontal cortex. *Neuron*, 78(2), pp.364–375.
+
+Sussillo, D. et al., 2015. A neural network that finds a naturalistic solution for the production of muscle activity. *Nature neuroscience*, 18(7), pp.1025–1033.
+
+Sussillo, D. et al., 2016. LFADS - Latent Factor Analysis via Dynamical Systems. arXiv [cs.LG]. Available at: http://arxiv.org/abs/1608.06315.
+
+Tanaka, M., 2005. Involvement of the central thalamus in the control of smooth pursuit eye movements. *The Journal of neuroscience: the official journal of the Society for Neuroscience*, 25(25), pp.5866–5876.
+
+Thura, D. & Cisek, P., 2014. Deliberation and commitment in the premotor and primary motor cortex during dynamic decision making. *Neuron*, 81(6), pp.1401–1416.
+
+Todorov, E. & Jordan, M.I., 2002. Optimal feedback control as a theory of motor coordination. *Nature neuroscience*, 5(11), pp.1226–1235.
+
+Wallis, J.D., Anderson, K.C. & Miller, E.K., 2001. Single neurons in prefrontal cortex encode abstract rules. *Nature*, 411(6840), pp.953–956.
+
+Wang, J. et al., 2017. Flexible timing by temporal scaling of cortical responses. *Nature neuroscience*. Available at: http://dx.doi.org/10.1038/s41593-017-0028-6.
+
+Wang, Y. et al., 2005. Prefrontal cortical cells projecting to the supplementary eye field and presupplementary motor area in the monkey. *Neuroscience research*, 53(1), pp.1–7.
+
+Werbos, P.J., 1990. Backpropagation through time: what it does and how to do it. *Proceedings of the IEEE*, 78(10), pp.1550–1560.
+
+Wolpert, D.M. & Kawato, M., 1998. Multiple paired forward and inverse models for motor control. *Neural networks: the official journal of the International Neural Network Society*, 11(7-8), pp.1317–1329.
+
+Xu, M. et al., 2014. Representation of interval timing by temporally scalable firing patterns in rat prefrontal cortex. *Proceedings of the National Academy of Sciences of the United States of America*, 111(1), pp.480–485.
+
+Xu, T. et al., 2009. Rapid formation and selective stabilization of synapses for enduring motor memories. *Nature*, 462(7275), pp.915–919.
+
+Yang, G. et al., 2014. Sleep promotes branch-specific formation of dendritic spines after learning. *Science*,
+---PAGE_BREAK---
+
+344(6188), pp. 1173–1178.
+
+Yang, S.-N. & Heinen, S., 2014. Contrasting the roles of the supplementary and frontal eye fields in ocular decision making. *Journal of neurophysiology*, 111(12), pp.2644–2655.
+---PAGE_BREAK---
+
+# Methods
+
+All experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the Committee of Animal Care at the Massachusetts Institute of Technology. Two monkeys (*Macaca mulatta*), one female (C) and one male (J), were trained to perform the Ready, Set, Go (RSG) behavioral task. Monkeys were seated comfortably in a dark and quiet room. Stimuli and behavioral contingencies were controlled using MWorks (https://mworks.github.io/) on a 2012 Mac Pro computer. Visual stimuli were presented on a frontoparallel 23-inch Acer H236HL monitor at a resolution of 1920x1080 and a refresh rate of 60 Hz, and auditory stimuli were played from the computer's internal speaker. Eye positions were tracked with an infrared camera (Eyelink 1000; SR Research Ltd, Ontario, Canada) and sampled at 1 kHz.
+
+## RSG Task
+
+**Task contingencies.** Monkeys had to measure a sample interval, $t_s$, and subsequently produce a target interval $t_t$ whose relationship to $t_s$ was specified by a context-dependent gain parameter ($t_t = g \times t_s$), which was set to either 1 (g=1 context) or 1.5 (g=1.5 context). On each trial, $t_s$ was drawn from a discrete uniform prior distribution (7 values, minimum = 500 ms, maximum = 1000 ms), and *gain* (*g*) was switched across blocks of trials (101 ± 49 trials, mean ± std).
+
+**Trial structure.** Each trial began with the presentation of a central fixation point (FP, circular, 0.5 deg diameter), a secondary context cue (CC, square, 0.5 deg width, 3-5 deg below FP), an open circle centered at FP (OC, radius 8-10 deg, line width 0.05 deg, gray) and three rectangular stimuli (2.0x0.5 deg, gray) placed 90 deg apart over the perimeter of OC with their long side oriented radially. FP was red for the g=1 context and purple for the g=1.5 context. CC was placed directly below FP in the g=1 context, and was shifted 0.5 deg rightward in the g=1.5 context. Two of the rectangular stimuli were presented only briefly and served as placeholders for the subsequent 'Ready' and 'Set' flashes. The third rectangle served as the saccadic target ('Go'), which together with FP, CC, and OC remained visible throughout the trial. Ready was always positioned to the right or left of FP (3 o'clock or 9 o'clock position). Set was positioned 90 deg clockwise with respect to Ready and the saccadic target was placed opposite to Ready (**Figure 1A**).
+
+Monkeys had to maintain their gaze within an electronic window around FP (2.5 and 5.5 deg window for C and J, respectively) or the trial was aborted. After a random delay (uniform hazard), first the Ready and then the Set cues were flashed (83 ms, white). The two flashes were accompanied by a short auditory cue (the "pop" system sound), and were separated by $t_s$. The produced interval $t_p$ was defined as the interval between the onset of the Set cue and the time the eye position entered a 5-deg electronic window around the saccadic target. Following the saccade, the response was deemed a "hit" if the error $\epsilon = |t_p - t_t|$ was smaller than a
+---PAGE_BREAK---
+
+$t_t$-dependent threshold $\epsilon_{thresh} = \alpha t_t + \beta$, where $\alpha$ was between 0.2 and 0.25, and $\beta$ was 25 ms. The exact choice of these parameters was not critical for performing the task or for the observed behavior; instead, they were chosen to keep the animals motivated and willing to work for more trials per session. On hit trials, animals received a juice reward and FP turned green. The reward amount, as a fraction of the maximum possible reward, decreased with increasing error according to $((\epsilon_{thresh} - \epsilon)/\epsilon_{thresh})^{1.5}$, with a minimum fraction of 0.1 (Figure 1B). Trials in which $t_p$ was more than 3.5 times the median absolute deviation (MAD) away from the mean were considered outliers and were excluded from further analyses.
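The hit window and reward schedule described above amount to the following (an illustrative sketch with times in seconds and $\alpha = 0.2$; not the task-control code):

```python
def reward_fraction(t_p, t_t, alpha=0.2, beta=0.025):
    """Fraction of the maximum reward for a produced interval t_p (seconds).
    Misses (error at or beyond the threshold) earn nothing; hits earn at
    least a fraction of 0.1."""
    err = abs(t_p - t_t)
    thresh = alpha * t_t + beta        # hit threshold scales with t_t
    if err >= thresh:
        return 0.0
    return max(0.1, ((thresh - err) / thresh) ** 1.5)
```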
+
+As an initial analysis of whether monkeys learned the RSG task across gains, we fit linear regression models to the behavior separately for each gain:
+
+$$
+(1) t_p = \beta_1 t_s + \beta_0
+$$
+
+to quantify the difference in slopes between the two contexts. We also fit models with an interaction term across both contexts:
+
+$$
+(2) t_p = \beta_1 t_s + \beta_2 g + \beta_3 g t_s
+$$
+
+If the animals successfully learned to apply the gain, $\beta_3$ should be positive.
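For concreteness, the interaction model can be fit by ordinary least squares. The sketch below is illustrative only (note that we include an intercept column, which the equation as written omits), not the analysis code used in the study.

```python
import numpy as np

def fit_gain_interaction(t_s, g, t_p):
    """OLS fit of t_p = b0 + b1*t_s + b2*g + b3*(g*t_s)."""
    X = np.column_stack([np.ones_like(t_s), t_s, g, g * t_s])
    beta, *_ = np.linalg.lstsq(X, t_p, rcond=None)
    return beta  # [b0, b1, b2, b3]; b3 > 0 indicates the gain was applied
```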
+
+We further applied a Bayesian observer model (Jazayeri & Shadlen 2015; Acerbi et al. 2012; Miyazaki et al. 2005; Jazayeri & Shadlen 2010), which captured the behavior in both contexts (Figure 1E). Full details of the model can be found in previous work (Jazayeri & Shadlen 2010; Jazayeri & Shadlen 2015). Briefly, we assumed that both the measurement and the production of time intervals are noisy. Measurement and production noise were modeled as zero-mean Gaussian with standard deviation proportional to the base interval (Rakitin et al. 1998), with constants of proportionality $w_m$ and $w_p$, respectively. The Bayesian observer produced $t_p$ after deriving an optimal estimate of $t_t$ from the mean of the posterior. To account for the possibility that the mental operation of mapping $t_s$ to $t_t$ according to the gain factor might be noisier in the g=1.5 context than in the g=1 context (Remington & Jazayeri 2017), we allowed $w_m$ and $w_p$ to vary across contexts.
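A minimal sketch of the posterior-mean estimation step under scalar measurement noise is shown below; the function name and the discrete prior support are illustrative assumptions, and production noise and the gain mapping are omitted.

```python
import numpy as np

def bls_estimate(t_m, w_m, prior_support):
    """Posterior-mean estimate of t_s given a noisy measurement t_m.
    Measurement noise: zero-mean Gaussian with sd = w_m * t_s."""
    t_s = np.asarray(prior_support, dtype=float)
    sd = w_m * t_s
    likelihood = np.exp(-0.5 * ((t_m - t_s) / sd) ** 2) / sd  # p(t_m | t_s)
    posterior = likelihood / likelihood.sum()                 # uniform discrete prior
    return np.sum(posterior * t_s)
```

Estimates computed this way are biased toward the mean of the prior, the signature of Bayesian interval timing; the observer would then produce $t_p$ around $g$ times this estimate.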
+
+## Recording
+
+We recorded neural activity in dorsomedial frontal cortex (DMFC) with 24-channel linear probes (Plexon, inc.). Recording locations were selected according to stereotaxic coordinates and the existence of task-relevant modulation of neural activity. In monkey C, recordings were made between 3.5 mm to 7 mm lateral of the midline and 1.5 mm posterior to 4.5 mm anterior of the genu of the arcuate sulcus. In monkey J, we recorded
+---PAGE_BREAK---
+
+from between 3 mm to 4.5 mm lateral of the midline and 0.75 mm to 5 mm anterior of the genu of the arcuate sulcus. Data were recorded and stored using a Cerebus Neural Signal Processor (NSP; Blackrock Microsystems). Preliminary spike sorting was performed online using the Blackrock NSP, followed by offline sorting with the Phy spike-sorting software package using the spikedetekt, klusta, and kilosort algorithms (Rossant et al. 2016; Pachitariu et al. 2016). Sorted spikes were then analyzed using custom code in MATLAB (The MathWorks Inc.).
+
+## Analysis of DMFC data
+
+Average firing rates of individual neurons were estimated using a 150 ms smoothing filter applied to spike counts in 1 ms time bins. We used PCA to visualize and analyze activity patterns across the population of neurons across animals. PCA was applied after a soft normalization: spike counts measured in 10 ms bins were divided by the square root of the maximum spike count across all bins and conditions. The normalization was implemented to minimize the possibility of high-firing-rate neurons dominating the analysis.
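The normalization and dimensionality-reduction step can be sketched as follows, assuming the data are arranged as a (time-and-condition bins) × neurons matrix; the function name and the PCA-via-SVD implementation are our choices, not the study's code.

```python
import numpy as np

def soft_normalize_and_pca(counts, n_pcs=10):
    """Divide each neuron's counts by the square root of its maximum count,
    then project the mean-centered data onto the top principal components.
    Assumes every neuron fired at least once (nonzero column maxima)."""
    norm = counts / np.sqrt(counts.max(axis=0, keepdims=True))
    centered = norm - norm.mean(axis=0, keepdims=True)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    scores = centered @ vt[:n_pcs].T                         # PC projections
    var_explained = (s[:n_pcs] ** 2).sum() / (s ** 2).sum()
    return scores, var_explained
```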
+
+When binning data according to increasing values of $t_p$, we ensured that all bins had an equal number of trials, independently for each session. To average firing rates across trials within a group, we truncated trials to the median $t_p$ and averaged firing rates with attrition. Analyses of neural data were applied to all 10 sessions across both monkeys. For these analyses, we included neurons for which at least 15 trials were recorded in each condition and which had a minimum unsmoothed modulation depth of 15 spikes per second. We did not separately analyze trials immediately following context switches because of the low number of context switches per session (mean = 6.8 switches).
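Equal-count binning by $t_p$ can be implemented by ranking trials; the helper below is our own minimal sketch, applied per session.

```python
import numpy as np

def equal_count_bins(t_p, n_bins):
    """Assign each trial a t_p bin index (0..n_bins-1) so that bins have
    (near-)equal trial counts; counts are exactly equal when len(t_p) is
    divisible by n_bins."""
    ranks = np.argsort(np.argsort(t_p))  # rank of each trial's t_p
    return ranks * n_bins // len(t_p)
```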
+
+For visualization of neural trajectories in state space, we identified dimensions along which responses were maximally separated with respect to context ("gain axis," **Figure 2B,C**, "initial condition," **Figure 3C**) and $t_p$ ("interval axis," **Figure 2B,C**, and "initial condition," **Figure 3C**). We first calculated the context component by projecting data onto the vector defined by the difference between neural activity averaged over time and $t_p$ for each context. This component of the activity was then subtracted away from the full activity. For the Ready-Set epoch, we then performed PCA (PCs 1 and 2, **Figure 2B**) on the data with the context component removed. For the Set-Go epoch, we calculated the $t_p$ component by projecting data onto the vector defined by the difference between the activity associated with the longest and shortest values of $t_p$, averaged across time and context. We then performed PCA (PC 1, **Figures 2C and 3C**) on the data with the context and $t_p$ components removed.
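
Removing a context (or $t_p$) component before PCA amounts to projecting each neural state onto a difference-of-means axis and subtracting that projection. A minimal sketch, with hypothetical variable names:

```python
import numpy as np

def remove_component(activity, axis_vector):
    """Subtract the projection of each neural state onto a given axis, e.g.
    the context axis defined by the difference of condition-averaged states.

    activity: (n_timepoints, n_neurons); axis_vector: (n_neurons,)
    """
    u = axis_vector / np.linalg.norm(axis_vector)     # unit-normalize the axis
    return activity - np.outer(activity @ u, u)       # project out that direction
```

For the context axis described above, `axis_vector` would be the difference between the two contexts' time- and $t_p$-averaged activity; PCA is then applied to the returned residual.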
+---PAGE_BREAK---
+
+## Kinematic analysis of neural trajectories (KiNeT)
+
+We developed KiNeT to compare the geometry, relative speed and relative position along a group of neural
+trajectories that have an orderly organization and change smoothly with time. To describe KiNeT rigorously, we
+developed the following symbolic notations. Square and curly brackets refer to individual items and groups of
+items, respectively.
+
+The algorithm for applying KiNeT can be broken down into the following steps: 1) Choose a Euclidean coordinate system to analyze the neural trajectories. We chose the first 10 PCs in the Set-Go epoch, which captured 89% of the variance in the data. 2) Designate one trajectory as reference, $\Omega[\text{ref}]$. We used the trajectory associated with the middle $t_p$ bin as reference. 3) On each of the non-reference trajectories $\Omega[i]$ ($i \neq \text{ref}$), find $s[i]\{j\}$ with minimum Euclidean distance to $s[\text{ref}]\{j\}$ and their associated times $t[i]\{j\}$ according to the following equations:
+
+$$ (3) \quad t[i][j] = \arg \min_t ||\Omega[i](t) - s[\text{ref}][j]|| $$
+
+$$ (4) \quad s[i][j] = \Omega[i](t[i][j]) $$
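
A direct, if naive, implementation of Eqs. (3) and (4) — for each reference state, scan the comparison trajectory for its nearest state — could look like the following sketch (variable names are illustrative):

```python
import numpy as np

def kinet_align(traj_i, traj_ref, ref_indices):
    """For each reference state s[ref][j] = traj_ref[j], find the time index on
    traj_i whose state is closest in Euclidean distance (Eqs. 3 and 4).

    traj_i, traj_ref: (n_times, n_dims) trajectories in PC space.
    ref_indices: time bins j on the reference trajectory.
    Returns (t_i, s_i): aligned time indices and matched states on traj_i.
    """
    t_i, s_i = [], []
    for j in ref_indices:
        d = np.linalg.norm(traj_i - traj_ref[j], axis=1)  # distance to every state
        t = int(np.argmin(d))                             # Eq. (3)
        t_i.append(t)
        s_i.append(traj_i[t])                             # Eq. (4)
    return np.array(t_i), np.array(s_i)
```

If a comparison trajectory traverses the same path at a different speed, the aligned indices `t_i` grow faster or slower than the reference index, which is exactly the speed signature KiNeT measures.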
+
+**Organization of trajectories in state space:** The distances $D[i]\{j\}$ were used to characterize positions in neural state space of each $\Omega[i]$ relative to $\Omega[\text{ref}]$. The magnitude of $D[i][j]$ was defined as the norm of the vector connecting $s[i][j]$ to $s[\text{ref}][j]$, which we refer to as $\vec{\Delta}_{\text{ref}}[i][j]$. The sign of $D[i][j]$ was defined as follows: for the trajectory $\Omega[1]$ associated with the shortest $t_s$ or $t_p$, and $\Omega[N]$ associated with the longest, $D[i][j]$ was defined to be negative and positive, respectively. For all other trajectories, $D[i][j]$ was positive if the angle between $\vec{\Delta}_{\text{ref}}[i][j]$ and $\vec{\Delta}_{\text{ref}}[N][j]$ was smaller than the angle between $\vec{\Delta}_{\text{ref}}[i][j]$ and $\vec{\Delta}_{\text{ref}}[1][j]$, and negative otherwise.
+
+**Analysis of neural trajectories across contexts:** We analyzed the geometry across gains in three ways.
+First, we analyzed the relationships between the two sets of trajectories. This required aligning the activity
+between the two contexts in time. To do this, we started with the aligned times $t\{i\}\{j\}$ found within each
+context, and using successive groups of neural states in the $g=1$ context indexed by $t_{g=1}[\text{ref}]\{j\}$, found the
+reference time $l_{g=1.5}[\text{ref}]\{j\}$ in the $g=1.5$ context for which the mean distances between neural states in
+paired trajectories (i.e., the first $t_p$ bins of both gains, second $t_p$ bins, etc.) were smallest. This resulted in an
+---PAGE_BREAK---
+
+array of times from $l_{g=1.5}[\text{ref}]\{j\}$, indexed by $l_{g=1}[\text{ref}]\{j\}$, such that the trajectories across gains were aligned in time for subsequent analyses (Figure 6C). The second way that we analyzed geometry across gains was to collect trajectories across both gains, order them according to trajectory duration, and run the standard KiNeT procedure. Finally, we measured the distance $D_g$ between the structures using the across-context time alignment. For successive $j$, we measured the minimum distance between line segments connecting consecutive trajectories within each context. For five $t_p$ bins, this meant four line segments for each context, and $4^2 = 16$ distances. We chose the minimum of these distance values as the value of $D_g$ between the two structures. As a point of comparison, we generated a set of “null” distances by splitting trajectories from each context into odd- and even-numbered trajectories and calculating the minimum distance between the sets of connecting line segments (Figure 6E).
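
The distance $D_g$ reduces to minimum distances between pairs of line segments in PC space. One simple way to approximate the distance between two segments (a sampling-based sketch; an exact closed form also exists) is:

```python
import numpy as np

def min_segment_distance(a0, a1, b0, b1, n=200):
    """Approximate the minimum distance between segments a0-a1 and b0-b1 by
    dense sampling along each segment (an illustrative sketch)."""
    s = np.linspace(0.0, 1.0, n)[:, None]
    pa = a0 + s * (a1 - a0)                 # n points along segment a
    pb = b0 + s * (b1 - b0)                 # n points along segment b
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return d.min()
```

$D_g$ would then be the minimum of this quantity over all pairs of segments connecting consecutive trajectories in the two contexts.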
+
+| Symbol | Description |
+| --- | --- |
+| Ω[i] | The i-th neural trajectory |
+| Ω[i](t) | The state on the i-th trajectory at time t, 1 ≤ i ≤ N, where N is the number of trajectories |
+| Ω{i} | A collection of neural trajectories |
+| Ω[ref] | “Reference” neural trajectory |
+| Ω[1] | The trajectory of shortest duration |
+| Ω[N] | The trajectory of longest duration |
+| l[ref][j] | Elapsed time for j-th time bin on Ω[ref] |
+| s[ref][j] | Neural state on Ω[ref] at l[ref][j] |
+| s[i][j] | Neural state on Ω[i] with minimum distance to s[ref][j] |
+| s[i]{j} | s[i][j] across all time bins |
+| s{i}[j] | s[i][j] on all trajectories at j-th time bin |
+| l[i][j] | Elapsed time on Ω[i] at s[i][j] |
+---PAGE_BREAK---
+
+| Symbol | Description |
+| --- | --- |
+| l[i]{j} | Elapsed time on Ω[i] across all time bins |
+| l{i}[j] | Elapsed time on all trajectories at j-th time bin |
+| D[i][j] | Euclidean distance between s[i][j] and s[ref][j] |
+| D[i]{j} | Array of Euclidean distances between s{i}[j] and s[ref][j] |
+| Δ⃗ref[i][j] | Vector traveling from Ω[ref] to Ω[i] at the j-th time bin, i ≠ ref |
+| Δ⃗Ω[i][j] | Vector traveling from s[i][j] to s[i + 1][j], 1 ≤ i ≤ N − 1 |
+| θΩ[i][j] | Angle between Δ⃗Ω[i][j] and Δ⃗Ω[i + 1][j], 1 ≤ i ≤ N − 2 |
+| θ̅Ω{i}[j] | Average of θΩ[i][j] across i for the j-th time bin |
+| θg[i][j] | Angle between Δ⃗Ω,g=1[i][j] and Δ⃗Ω,g=1.5[i][j], 1 ≤ i ≤ N − 1 |
+| Δ⃗g[j] | Vector connecting the nearest points on line segments connecting sg=1{i}[j] and sg=1.5{i}[j] |
+| Dg[j] | Magnitude (length) of Δ⃗g[j] |
+| θg,Ω[j] | Angle between Δ⃗g[j] and the mean of Δ⃗Ω{i}[j] over i |
+
+**Statistics:** Confidence intervals for KiNeT performed on trajectories binned according to $t_p$ were computed by a bootstrapping procedure, randomly selecting trials with replacement 100 times. To test for statistical significance of metrics generated through the KiNeT procedure, we used bootstrap tests, where p was the fraction of bootstrap iterations for which the metric was consistent with the null hypothesis. Unless otherwise stated, significance of a measure for individual time points was set to p < 0.05. KiNeT applied to neural data from individual monkeys produced similar results, which were also robust to different methods of data smoothing.
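
The bootstrap procedure described above can be sketched as follows (toy data; the mean-angle metric and the 90 deg null are stand-ins for the KiNeT metrics in the text):

```python
import numpy as np

def bootstrap_metric(trials, metric, n_boot=100, rng=None):
    """Recompute a scalar metric on trials resampled with replacement."""
    rng = np.random.default_rng(rng)
    n = len(trials)
    return np.array([metric(trials[rng.integers(0, n, n)]) for _ in range(n_boot)])

# Example: is a mean angle significantly below the 90 deg null?
angles = np.random.default_rng(0).normal(80.0, 5.0, size=200)   # toy angle data
boot = bootstrap_metric(angles, np.mean, n_boot=100, rng=1)
ci = np.percentile(boot, [5, 95])        # 90% confidence interval
p = np.mean(boot >= 90.0)                # fraction consistent with the null
```

Here `p` is the fraction of bootstrap iterations for which the metric was consistent with the null hypothesis, matching the definition above.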
+---PAGE_BREAK---
+
+## Recurrent neural network
+
+We constructed a firing rate recurrent neural network (RNN) model with $N = 200$ nonlinear units. The network dynamics were governed by the following differential equation:
+
+$$\tau \dot{x}(t) = -x(t) + Jr(t) + Bu + c_x + \rho_x(t)$$
+
+$$r(t) = \tanh[x(t)]$$
+
+$x(t)$ is a vector containing the activity of all units, and $r(t)$ represents the firing rates of those units, obtained by transforming $x$ through a $\tanh$ nonlinearity. Time $t$ was sampled every millisecond for a duration of $T = 3300$ ms. The time constant of decay for each unit was set to $\tau = 10$ ms. The unit activations also contained an offset $c_x$ and white noise $\rho_x(t)$ at each time step, with standard deviation in the range [0.01, 0.015]. The matrix $J$ represents recurrent connections in the network. The network received multi-dimensional input $u$ through synaptic weights $B = [b_c; b_s]$. The input $u$ comprised a gain-dependent context cue $u_c(t)$ and an input $u_s(t)$ that provided Ready and Set pulses. In $u_s(t)$, Ready and Set were encoded as 20 ms pulses with a magnitude of 0.4, separated by time $t_s$.
+
+Two classes of networks were trained to perform the RSG task with multiple gains. In the tonic-input RNNs, the gain-dependent input $u_c(t)$ was set to a fixed offset for the entire duration of the trial. In contrast, in the transient-input RNNs, $u_c(t)$ was active transiently for 440 ms and was terminated 50-130 ms before the onset of the Ready pulse. The amplitude of $u_c(t)$ was set to 0.3 for g=1 and 0.4 for g=1.5. The transient network received an additional gain-independent persistent input of magnitude 0.4, similar to the tonic networks. Both types of networks produced a one-dimensional output $z(t)$ through summation of units with weights $w_o$ and a bias term $c_z$.
+
+$$z(t) = w_o^T r(t) + c_z$$
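
The dynamics and readout above can be integrated with a forward Euler scheme at the 1 ms sampling of $t$. This is an illustrative re-implementation of the model equations, not the trained model:

```python
import numpy as np

def simulate_rnn(J, B, u, c_x, w_o, c_z, x0, tau=10.0, dt=1.0,
                 noise_sd=0.01, rng=None):
    """Euler integration of tau*dx/dt = -x + J r + B u + c_x + noise,
    with r = tanh(x) and readout z = w_o^T r + c_z.

    u: (T, n_inputs) input time series sampled every dt (= 1 ms here).
    """
    rng = np.random.default_rng(rng)
    x = x0.copy()
    zs = []
    for u_t in u:
        r = np.tanh(x)
        noise = rng.normal(0.0, noise_sd, size=x.shape)
        x = x + (dt / tau) * (-x + J @ r + B @ u_t + c_x + noise)
        zs.append(w_o @ np.tanh(x) + c_z)
    return np.array(zs)
```

Training — adjusting $J$, $B$, $w_o$, $c_x$, and $c_z$ — is described in the next section; this sketch only runs the forward dynamics for given parameters.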
+
+## Network training
+
+Prior to training, the model parameters ($\theta$), comprising $J$, $B$, $w_o$, $c_x$, and $c_z$, were initialized. Initial values of the matrix $J$ were drawn from a normal distribution with zero mean and variance $1/N$, following previous work (Rajan & Abbott 2006). The synaptic weights $B = [b_c; b_s]$, the initial state vector $x(0)$, and the unit biases $c_x$ were initialized to random values drawn from a uniform distribution with range [-1,1]. The output weights $w_o$ and bias $c_z$ were initialized to zero. During training, model parameters were optimized by truncated Newton
+---PAGE_BREAK---
+
+methods using backpropagation-through-time (Werbos 1990) by minimizing a squared loss function between
+the network output $z_i(t)$ and a target function $f_i(t)$, as defined by:
+
+$$H(\theta) = \frac{1}{|T|\,|I|} \sum_{i \in I} \sum_{t \in T} (z_{i}(t) - f_{i}(t))^{2}$$
+
+Here $i$ indexes different trials in a training set ($I$ = different gains ($G$) × intervals ($t_s$) × repetitions ($r$)). The target function $f_i(t)$ was only defined in the Set-Go epoch (the output of the network was not constrained during the Ready-Set epoch). The value of $f_i(t)$ was zero during the Set pulse. After Set, the target function was governed by two parameters that could be adjusted to make $f_i(t)$ nonlinear, scaling, non-scaling, or approximately linear:
+
+$$f_i(t) = \Lambda\left(e^{\frac{t}{\alpha t_t}} - 1\right)$$
+
+For the networks reported, $f_i(t)$ was an approximately-linear ramp function parametrized by $\Lambda = 3$ and $\alpha = 2.8$. Variable $t_t$ represents the transformed interval for a given $t_s$ and gain $G$. Solutions were robust with respect to the parametric variations of the target function (e.g., nonlinear and non-scaling target functions). In trained networks, the production time, $t_p$ was defined as the time between the Set pulse and when the output ramped to a fixed threshold ($z_i = 1$).
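
A sketch of the target function and the threshold-crossing definition of $t_p$, using $\Lambda = 3$, $\alpha = 2.8$, and the transformed interval $t_t$ from the text (function names are illustrative):

```python
import numpy as np

def target_ramp(t, t_t, amplitude=3.0, alpha=2.8):
    """Approximately linear ramp f(t) = Lambda*(exp(t/(alpha*t_t)) - 1),
    with parameter values taken from the text."""
    return amplitude * (np.exp(t / (alpha * t_t)) - 1.0)

def production_time(z, dt=1.0, threshold=1.0):
    """Time from Set until the output first crosses the fixed threshold z = 1."""
    above = np.nonzero(z >= threshold)[0]
    return above[0] * dt if above.size else None
```

With $\alpha = 2.8$, the exponential is sampled well within its near-linear range before threshold, which is why the ramp is described as approximately linear.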
+
+During training, we employed two strategies to obtain robust solutions. First, we trained the networks to flexibly switch between three gain contexts: the two original values ($g=1$ and $g=1.5$) and an additional intermediate value of $g=1.25$, for which the amplitude of $u_c(t)$ was set to 0.35. However, the behavior of networks trained with only the two original gains was qualitatively similar. Second, we set $\rho_x(t)$ to zero and, instead, added white noise with a standard deviation of 0.005 per time step ($\Delta t = 1$ ms) to the context-dependent input $u_c(t)$.
+---PAGE_BREAK---
+
+## Supplement
+
+## Go-aligned KiNeT
+
+**Figure S1.** “Go”-aligned KiNeT, related to **Figure 4**. Applying KiNeT to neural trajectories aligned to the Set cue resulted in $t[i][j]$ that diverged from $t[\text{ref}]$ and scaled with trajectory length, consistent with control of neural speed as a means to produce different $t_p$. To rule out the possibility that this temporal scaling of trajectories was an artifact of temporal smearing of PSTHs near the time of Go caused by averaging trials of different lengths, we applied KiNeT to data aligned to Go (saccade). (A) Aligned times (speed) across both contexts. As in the Set-aligned analysis, $t[i][j]$ for shorter $\Omega[i]$ diverged toward shorter values, while $t[i][j]$ for longer $\Omega[i]$ diverged toward longer values as $t[\text{ref}]$ (here, time before Go) increased. In contrast to the lack of temporal scaling proximal to the Set cue, $t[i][j]$ remained ordered according to $t_p$ all the way up to the Go cue. Circles on the $t[\text{ref}]$ line indicate $j$ for which the ordering of $t[i][j]$ was significantly correlated with the $t_p$ bin (bootstrap test, r > 0.1, p < 0.05, n = 100). (B,C) $t_p$-related structure of $\Omega\{i\}$. (B) Analysis of direction. As
+---PAGE_BREAK---
+
+in the Set-aligned KiNeT, $\bar{\theta}_{\Omega\{i\}}[j]$ (bar signifies mean over the index $i$ in curly brackets) was significantly smaller than 90 degrees for the majority of the Set-Go interval (bootstrap test, $\bar{\theta}_{\Omega\{i\}}[j] < 90$, p < 0.05, n = 100), indicating that $\vec{\Delta}_{\Omega\{i\}}[j]$ were similar across $\Omega[i]$. (C) Analysis of distance (Euclidean distance to $\Omega[\text{ref}]$). Trajectories were ordered in neural space according to $D[i][j]$, with $\Omega[i]$ whose $t_p$ was more similar to that of the middle $t_p$ bin located closer to $\Omega[\text{ref}]$. Significance was tested by counting the number of times that $D[i][j]$ was not ordered according to $t_p$ bin in bootstrap samples for each $j$ (p < 0.05, n = 100).
+---PAGE_BREAK---
+
+## Rotation of trajectories through time
+
+**Figure S2.** Rotation of trajectories through time, related to Figure 4. We estimated the degree to which the principal axes (PC directions) associated with nearest states along the five trajectories, $s\{i\}[j]$, changed with time relative to $t=0$ using two metrics: a similarity index ($SI(0, t)$) that measures the variance explained by PCs at time $t$ and $t=0$ (see below for full description), and a rotation index ($\theta_{PC_1}(0, t)$) measuring the angle of the first PC ($PC_1$) in the state space at time $t$ compared to $t=0$. (A) $SI(0, t)$. This index varies between 0 and 1 with 1 signifying matching PCs and 0 signifying orthogonal PCs. The gradual change in $SI(0, t)$ away from 1 and toward 0 indicated that $\Omega\{i\}$ gradually changed orientation with time. Shaded area represents 90% bootstrap confidence intervals (n = 100). Dashed lines represent the 90% confidence intervals for the similarity of two sets of $s\{i\}[j]$ drawn randomly from a multivariate Gaussian distribution with covariance matched to the data. $SI(0, t)$ captures the extent to which the orientation of $s\{i\}[j]$ in state space changes with time and is therefore sensitive to both rotations and scaling transformations. (B) $\theta_{PC_1}(0, t)$. The gradual change in $\theta_{PC_1}(0, t)$ away from 0 toward 90 deg indicates that trajectories underwent rotations through state space from Set to Go. Unlike $SI(0, t)$ that is sensitive to both rotations and scaling transformations, $\theta_{PC_1}(0, t)$ is only sensitive to rotations. These data-driven observations motivated the use of KiNeT for analyzing neural trajectories throughout the paper.
+---PAGE_BREAK---
+
+**Similarity Index:** The similarity index, adapted from (Garcia 2012), was calculated using the following procedure: 1) Select two datasets, one for neural activity patterns at the time of Set ($t=0$), denoted by $r_0$, and one at time $t$ after Set, denoted by $r_t$. 2) Calculate the principal component coefficients for each dataset. 3) Project the points of each dataset onto their own and the others' principal coefficients, creating four sets of principal component scores. 4) Calculate the fraction of variance explained by each principal component in each of the four sets of scores. $\sigma_{0,0}^{2,i}$ is the fraction of variance in $r_0$ explained by principal component $i$ of $r_0$, $\sigma_{t,0}^{2,i}$ is the fraction of variance of $r_t$ explained by principal component $i$ of $r_0$, $\sigma_{t,t}^{2,i}$ is the fraction of variance in $r_t$ explained by principal component $i$ of $r_t$, and $\sigma_{0,t}^{2,i}$ is the fraction of variance in $r_0$ explained by principal component $i$ of $r_t$. 5) For each component of each dataset, calculate the difference between (1) the fraction of variance explained by that component for its own dataset (e.g. $\sigma_{0,0}^{2,i}$) and (2) the fraction explained by that same component for the other dataset (e.g. $\sigma_{t,0}^{2,i}$). 6) Sum and normalize the calculated differences. This can be written as follows:
+
+$$SI(0, t) = 1 - \frac{1}{4} \sum_i \left(|\sigma_{0,0}^{2,i} - \sigma_{t,0}^{2,i}| + |\sigma_{t,t}^{2,i} - \sigma_{0,t}^{2,i}|\right)$$
+
+The similarity index is 0 when the associated covariance matrix of one dataset lies in the nullspace of the other, and 1 when the covariance matrices are identical.
+
+In order to interpret the values of similarity index in the DMFC dataset, we compared similarity index for two surrogate datasets that matched the statistics of DMFC activity. Each dataset was constructed by drawing five samples (the number of $t_p$ bins) from a ten-dimensional Gaussian distribution (the number of principal components) with a diagonal covariance matrix constructed using the eigenvalues of the covariance matrix of the DMFC data. We calculated the similarity index for 1000 pairs of surrogate data (i.e., null distribution), and used the 5th and 95th percentiles to generate 90% confidence intervals. With this procedure, a similarity index above the 90% confidence interval was considered more "similar" than expected by chance, whereas a similarity index below the 90% confidence interval was considered dissimilar.
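
The six-step similarity index computation can be sketched as follows (PCA via SVD; the input shapes follow the surrogate construction above — e.g., five samples in ten dimensions — though any (samples, dims) arrays work):

```python
import numpy as np

def pcs(data):
    """Principal component directions (columns) of a (samples, dims) dataset."""
    centered = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt.T

def frac_var(data, components):
    """Fraction of a dataset's variance explained by each given component."""
    centered = data - data.mean(axis=0)
    var = (centered @ components) ** 2
    return var.sum(axis=0) / (centered ** 2).sum()

def similarity_index(r0, rt):
    """Similarity index (adapted from Garcia 2012) per steps 1-6 above."""
    p0, pt = pcs(r0), pcs(rt)                       # steps 1-2
    v00, vt0 = frac_var(r0, p0), frac_var(rt, p0)   # steps 3-4: own vs. other's PCs
    vtt, v0t = frac_var(rt, pt), frac_var(r0, pt)
    # Steps 5-6: summed, normalized differences of explained-variance fractions.
    return 1.0 - 0.25 * (np.abs(v00 - vt0).sum() + np.abs(vtt - v0t).sum())
```

Because each set of variance fractions sums to 1, each absolute-difference sum is at most 2, so the index stays in [0, 1], with 1 for identical covariance structure.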
+---PAGE_BREAK---
+
+## Variability in neural trajectories systematically predicted behavioral variability
+
+**Figure S3.** Relating neural variability to behavioral variability, related to Figure 4. (A). Schematic showing three neural trajectories between Set (circle) and Go (cross) associated with three different $t_s$ values. Neural states, $s[i][j]$, are indexed by trajectory $(i)$, which is specified by initial condition, and elapsed time ($j$). Noise may cause neural states to deviate from mean trajectories. We reasoned that deviations across and along trajectories may cause systematic biases in $t_p$. $s_{short}[i][j]$ (light star) shows an example in which noise moves the state in the direction of shorter $t_s$ (toward $s[i-1][j]$) and in the direction of the Go state ($s[i][j+1]$) by vectors $\epsilon_\Omega$ and $\epsilon_t$, respectively. Both deviations should lead to shorter $t_p$. (B) Prediction 1 ($P_1$): deviations $\epsilon_\Omega$ off of one trajectory toward a trajectory associated with larger $t_s$ should lead to larger $t_p$, and vice versa. To test $P_1$, we divided trials for each $t_s$ into two bins. One bin contained all trials in which $t_p$ was shorter than median $t_p$
+---PAGE_BREAK---
+
+and the other, all trials in which $t_p$ was longer than median $t_p$. We computed neural trajectories for the short and long $t_p$ bins, and denoted the corresponding states by $s_{short}[i][j]$ and $s_{long}[i][j]$ (dark star), respectively. If $P_1$ is correct, then the geometric relationship between $s_{short}[i][j]$ and $s_{long}[i][j]$ should be similar to that between $s[i-1][j]$ (shorter $t_s$) and $s[i+1][j]$ (longer $t_s$). Therefore the vector pointing from $s_{short}[i][j]$ to $s_{long}[i][j]$ ($\vec{\Delta}_p[i][j]$, dashed arrow) and the vector pointing from $s[i-1][j]$ to $s[i+1][j]$ ($\vec{\Delta}_{\Omega}[i][j]$, blue arrow) should be aligned, and the angle between them, denoted by $\theta_{p,\Omega}[i][j]$, should be acute. See below for the calculation of $\vec{\Delta}_{\Omega}[i][j]$ for the shortest and longest $t_s$. (C) Prediction 2 ($P_2$): deviations $\epsilon_t$ along trajectories should influence the time it takes for activity to reach the Go state and should therefore influence $t_p$ (Afshar et al. 2011; Michaels et al. 2015). If $P_2$ is correct, then $s_{short}[i][j]$ should be ahead of $s_{long}[i][j]$. Therefore, $\vec{\Delta}_p[i][j]$ should point backwards in time, and the angle between $\vec{\Delta}_p[i][j]$ and the vector $\vec{\Delta}_t[i][j]$ that connects $s[i][j-1]$ to $s[i][j+1]$, denoted by $\theta_{p,t}[i][j]$, should be obtuse. See below for the calculation of $\vec{\Delta}_t[i][j]$ for the first and last time points.
+
+(D,E) Testing $P_1$ and $P_2$ for the $g=1$ (D) and $g=1.5$ (E) contexts. Consistent with $P_1$, the average $\theta_{p,\Omega}\{i\}[j]$ ($\bar{\theta}_{p,\Omega}\{i\}[j]$, blue) was less than 90 deg from Set to Go, indicating that $t_p$ was larger (smaller) when neural states deviated toward a trajectory associated with a larger (smaller) $t_s$. Importantly, this systematic relationship between $t_p$ and neural activity was already present at the time of Set, indicating that $t_p$ was influenced by variability during the Ready-Set measurement epoch. Consistent with $P_2$, $\bar{\theta}_{p,t}\{i\}[j]$ (green) was greater than 90 deg, indicating that $t_p$ was larger (smaller) when speed along the neural trajectory was slower (faster). The angle between $\vec{\Delta}_p\{i\}[j]$ and $\vec{\Delta}_t\{i\}[j]$ was initially close to 90 deg, consistent with the observation that trajectories evolved at similar speeds early in the Set-Go epoch (Figure 4B).
+
+We also measured the angle between $\vec{\Delta}_{\Omega}\{i\}[j]$ and $\vec{\Delta}_t\{i\}[j]$, denoted by $\bar{\theta}_{\Omega,t}\{i\}[j]$ (yellow). This angle was not significantly different from chance (90 deg) for most time points. We determined when (at what $j$) an angle was significantly different from 90 deg ($p < 0.05$) by comparing angles to the corresponding null distribution derived from 100 random shuffles with respect to $t_p$. Angles that were significantly different from 90 deg are shown by darker circles. Because the comparison of $t_s$- vs. $t_p$-related structure (Figure S3) required grouping trials into substantially more bins than the other analyses (14 vs. 7 or 5), we reduced the minimum number of trials required to 10 for this analysis (273 units; 95 from monkey C and 178 from monkey J). The results of these analyses did not depend on the specific threshold chosen, and were similar in individual subjects.
+---PAGE_BREAK---
+
+**Calculation of $\vec{\Delta}_{\Omega}[i][j]$ for shortest and longest $t_s$:**
+
+Because there was a finite number of $t_s$ values, we could not compute $\vec{\Delta}_{\Omega}[i][j]$ for $i = 1$ and $i = N_i$ ($s[i-1][j]$ was not defined for $i = 1$ and $s[i+1][j]$ was not defined for $i = N_i$). Therefore, for the shortest $t_s$, we changed $\vec{\Delta}_{\Omega}[i][j]$ to $s[2][j] - s[1][j]$ (instead of $s[2][j] - s[0][j]$), and for the longest $t_s$, to $s[N_i][j] - s[N_i - 1][j]$ (instead of $s[N_i + 1][j] - s[N_i - 1][j]$).
+
+**Calculation of $\vec{\Delta}_t[i][j]$ for the earliest and latest times:**
+
+Because there was a finite number of time points, we could not compute $\vec{\Delta}_t[i][j]$ for $j = 1$ and $j = N_j$ ($s[i][j-1]$ was not defined for $j = 1$ and $s[i][j+1]$ was not defined for $j = N_j$). Therefore, for the first time point, we changed $\vec{\Delta}_t[i][j]$ to $s[i][2] - s[i][1]$ (instead of $s[i][2] - s[i][0]$), and for the last time point, to $s[i][N_j] - s[i][N_j - 1]$ (instead of $s[i][N_j + 1] - s[i][N_j - 1]$).
+---PAGE_BREAK---
+
+## Analysis of the recurrent neural networks
+
+**Figure S4.** Analysis of the recurrent neural networks (RNNs), related to Figure 4 and Figure 7. (A-E) Tonic-input RNN. (A) “Behavior”; same format as in Figure 1E. The networks successfully learned the task, as evidenced by positive regression slopes ($\beta_1$, larger for the $g = 1.5$ context) and a significant positive interaction between $t_s$ and $g$ ($p \ll 0.001$). For each network, we simulated 30 trials per $t_s$ and $g$, removing outliers in which $t_p$ was more than 3.5 times the median absolute deviation (MAD) away from the mean. (B-D) Organization of neural trajectories within each context; same format as Figure 4B-D. KiNeT analysis verified that the organization of neural trajectories in the tonic-input RNN matched the organization observed in DMFC (compare to Figure 4B-D). (E) Relating unit variability to behavioral variability; same format as in Figure S3. (F-J) Same analyses as in A-E for the transient-input RNN.
+---PAGE_BREAK---
+
+| Symbol | Description |
+| --- | --- |
+| Ω[i] | The i-th neural trajectory |
+| Ω[i](t) | The state on the i-th trajectory at time t, 1 ≤ i ≤ N, where N is the number of trajectories |
+| Ω{i} | A collection of neural trajectories |
+| Ω[ref] | "Reference" neural trajectory |
+| Ω[1] | The trajectory of shortest duration |
+| Ω[N] | The trajectory of longest duration |
+| l[ref][j] | Elapsed time for j-th time bin on Ω[ref] |
+| s[ref][j] | Neural state on Ω[ref] at l[ref][j] |
+| s[i][j] | Neural state on Ω[i] with minimum distance to s[ref][j] |
+| s[i]{j} | s[i][j] across all time bins |
+| s{i}[j] | s[i][j] on all trajectories at j-th time bin |
+| l[i][j] | Elapsed time on Ω[i] at s[i][j] |
+| l[i]{j} | Elapsed time on Ω[i] across all time bins |
+| l{i}[j] | Elapsed time on all trajectories at j-th time bin |
+| D[i][j] | Euclidean distance between s[i][j] and s[ref][j] |
+| D[i]{j} | Array of Euclidean distances between s{i}[j] and s[ref][j] |
+| Δ⃗ref[i][j] | Vector traveling from Ω[ref] to Ω[i] at the j-th time bin, i ≠ ref |
+| Δ⃗Ω[i][j] | Vector traveling from s[i][j] to s[i + 1][j], 1 ≤ i ≤ N − 1 |
+---PAGE_BREAK---
+
+| Symbol | Description |
+| --- | --- |
+| θΩ[i][j] | Angle between Δ⃗Ω[i][j] and Δ⃗Ω[i + 1][j], 1 ≤ i ≤ N − 2 |
+| θ̅Ω{i}[j] | Average of θΩ[i][j] across i for the j-th time bin |
+| θg[i][j] | Angle between Δ⃗Ω,g=1[i][j] and Δ⃗Ω,g=1.5[i][j], 1 ≤ i ≤ N − 1 |
+| Δ⃗g[j] | Vector connecting the nearest points on line segments connecting sg=1{i}[j] and sg=1.5{i}[j] |
+| Dg[j] | Magnitude (length) of Δ⃗g[j] |
+| θg,Ω[j] | Angle between Δ⃗g[j] and the mean of Δ⃗Ω{i}[j] over i |
\ No newline at end of file
diff --git a/samples/texts_merged/6724971.md b/samples/texts_merged/6724971.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f5802ce1e30cda564ce7a8398c693debd6281ab
--- /dev/null
+++ b/samples/texts_merged/6724971.md
@@ -0,0 +1,641 @@
+
+---PAGE_BREAK---
+
+# THE EXISTENCE OF FIXED POINTS FOR THE $·/GI/1$ QUEUE
+
+BY JEAN MAIRESSE AND BALAJI PRABHAKAR
+
+CNRS-Université Paris 7 and Stanford University
+
+A celebrated theorem of Burke's asserts that the Poisson process is a fixed point for a stable exponential single server queue; that is, when the arrival process is Poisson, the equilibrium departure process is Poisson of the same rate. This paper considers the following question: Do fixed points exist for queues which dispense i.i.d. services of finite mean, but otherwise of arbitrary distribution (i.e., the so-called $·/GI/1/∞$/FCFS queues)? We show that if the service time $S$ is nonconstant and satisfies $\int P\{S \ge u\}^{1/2} du < \infty$, then there is an unbounded set $\mathcal{S} \subset (E[S], \infty)$ such that for each $\alpha \in \mathcal{S}$ there exists a unique ergodic fixed point with mean inter-arrival time equal to $\alpha$. We conjecture that in fact $\mathcal{S} = (E[S], \infty)$.
+
+## 1. Introduction.
+Consider a single server First-Come-First-Served queue with infinite waiting room, at which the service times are i.i.d. (a $·/GI/1/∞$/FCFS queue). We are interested in the question of whether such queues possess fixed points: an inter-arrival process which has the same distribution as the corresponding inter-departure process.
+
+The question of the existence of fixed points is intimately related to the limiting behavior of the distribution of departure processes from a tandem of queues. Specifically, consider an infinite tandem of $·/GI/1/∞$/FCFS queues. The queues are indexed by $k \in \mathbb{N}$ and the customers are indexed by $n \in \mathbb{Z}$. The numbering of each customer is fixed at the first queue and remains the same as he/she passes through the tandem. Each customer leaving queue $k$ immediately enters queue $k+1$. At queue $k$, write $S(n, k)$ for the service time of customer $n$ and $A(n, k)$ for the inter-arrival time between customers $n$ and $n+1$. We assume that the initial inter-arrival process, $A^0 = (A(n, 0), n \in \mathbb{Z})$, is ergodic and independent of $(S(n, k), n \in \mathbb{Z}, k \in \mathbb{N})$. We also assume that the service variables $(S(n, k), n, k)$ are i.i.d. and that $E[S(0, 0)] < E[A(0, 0)] < \infty$. To avoid trivialities we assume that the service times are nonconstant, that is, $P\{S(0, 0) \neq E[S(0, 0)]\} > 0$.
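
The tandem recursion implicit in this setup — customer $n$ departs queue $k$ at the later of its arrival there (its departure from queue $k-1$) and the previous customer's departure, plus its service time $S(n, k)$ — can be sketched for finitely many customers and queues (a simulation sketch, not part of the proofs):

```python
import numpy as np

def tandem_departures(arrivals, services):
    """Pass customers through a tandem of FCFS single-server queues.

    arrivals: (n_customers,) increasing arrival times at queue 0.
    services: (n_customers, n_queues) service times S(n, k).
    Returns the departure time of every customer from every queue.
    """
    n, K = services.shape
    D = np.empty((n, K))
    for k in range(K):
        arr = arrivals if k == 0 else D[:, k - 1]  # output of queue k-1 feeds queue k
        prev = -np.inf
        for i in range(n):
            # FCFS: service starts at max(arrival, previous departure).
            prev = max(arr[i], prev) + services[i, k]
            D[i, k] = prev
    return D
```

Inter-departure times at queue $k$ are differences of consecutive entries in column $k$; with Poisson arrivals and exponential services of smaller mean, this inter-departure process is again Poisson of the same rate (Burke's theorem, discussed below).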
+
+By Loynes' results [15], each of the equilibrium departure processes $A^k = (A(n, k), n \in \mathbb{Z})$ for $k \ge 1$ is ergodic of mean $E[A(0, 0)]$. The following are natural fixed point problems:
+
+Received February 2001; revised January 2003.
+AMS 2000 subject classifications. 60K25, 60K35, 68M20, 90B15, 90B22.
+Key words and phrases. Queue, tandem queueing networks, general independent services, stability, Loynes theorem, Burke theorem.
+---PAGE_BREAK---
+
+*Existence.* For a given service distribution, does there exist a mean $\alpha$ ergodic inter-arrival process such that the corresponding inter-departure process has the same distribution? If yes, call such a distribution an *ergodic fixed point* of mean $\alpha$.
+
+*Uniqueness.* If an ergodic fixed point of mean $\alpha$ exists, is it unique?
+
+*Convergence.* Assume there is a unique ergodic fixed point of mean $\alpha$. If the inter-arrival process to the first queue, $A^0$, is ergodic of mean $\alpha$, then does the law of $A^k$ converge weakly to the ergodic fixed point of mean $\alpha$ as $k \to \infty$? If yes, call the fixed point an *attractor*.
+
+A strand of research in stochastic network theory has pursued these questions for some time. Perhaps the earliest and best-known result is Burke's theorem [7], which shows that the Poisson process of rate $1/\alpha$ is a fixed point for exponential server queues with mean service time $\beta < \alpha$. Anantharam [1] established its uniqueness, and Mountford and Prabhakar [18] established that it is an attractor.
+
+For $·/GI/1/∞/FCFS$ queues, the subject of this paper, Chang [8] established the uniqueness of an ergodic fixed point, should it exist, assuming that the services have a finite mean and an unbounded support. Prabhakar [19] provides a complete solution to the problems of uniqueness and convergence assuming only a finite mean for the service time and the existence of an ergodic fixed point. However, the existence of such fixed points was only known for exponential and geometric service times.
+
+This paper establishes the existence of fixed points for a large class of service time distributions. We obtain the following result: if the service time $S$ has mean $\beta$ and if $\int P\{S \ge u\}^{1/2} du < \infty$, then there is a set $\mathcal{S}$ closed in $(\beta, \infty)$, with $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = \infty$ and such that:
+
+(a) For $\alpha \in \mathcal{S}$, there exists a mean $\alpha$ ergodic fixed point for the queue. Given this, [19] implies the attractiveness of the fixed point.
+
+(b) For $\alpha \notin \mathcal{S}$, consider the stationary (but not ergodic) process $F$ of mean $\alpha$ obtained as the convex combination of the ergodic fixed points of means $\underline{\alpha}$ and $\bar{\alpha}$ where $\underline{\alpha} = \sup\{u \in \mathcal{S}, u \le \alpha\}$ and $\bar{\alpha} = \inf\{u \in \mathcal{S}, \alpha \le u\}$. (Since $\mathcal{S}$ is closed, $\underline{\alpha}$ and $\bar{\alpha}$ belong to $\mathcal{S}$ and $F$ is a fixed point for the queue.) If the inter-arrival times of the input process have a mean $\alpha$, then the Cesaro average of the laws of the first $k$ inter-departure processes converges weakly to $F$ as $k \to \infty$.
+
+These results rely heavily on a strong law of large numbers for the total time spent by a customer in a tandem of queues proved in [2]. We conjecture that our results are suboptimal and that in fact $\mathcal{S} = (\beta, \infty)$.
+
+**2. Preliminaries.** The presence of an underlying probability space $(\Omega, \mathcal{F}, P)$ on which all the r.v.'s are defined is assumed all along. Given a measurable space $(K, \mathcal{K})$, we denote by $\mathcal{L}(K)$ the set of $K$-valued random variables, and by $\mathcal{M}(K)$ the set of probability measures on $(K, \mathcal{K})$. Throughout the paper, we
+
+consider random variables valued in $\mathbb{R}_+^Z$. Equipped with the product topology, or
+topology of coordinate-wise convergence, $\mathbb{R}_+^Z$ is a Polish space. We shall work
+on the measurable space $(\mathbb{R}_+^Z, \mathcal{B})$ where $\mathcal{B}$ is the corresponding Borel $\sigma$-algebra.
+With the topology of weak convergence, the space $\mathcal{M}(\mathbb{R}_+^Z)$ is a Polish space. For
+details see, for instance, [3], [10] or [11]. The weak convergence of $(\mu_n)_n$ to $\mu$ is
+denoted by $\mu_n \xrightarrow{w} \mu$. Furthermore, for $X_n, X \in \mathcal{L}(\mathbb{R}_+^Z)$, we say that $X_n$ converges
+weakly to $X$ (and we write $X_n \xrightarrow{w} X$) if the law of $X_n$ converges weakly to the law
+of $X$. A process $X \in \mathcal{L}(\mathbb{R}_+^Z)$ is *constant* if $X = (c)^Z$ a.s. for some $c \in \mathbb{R}_+$.
+
+We write $\mathcal{M}_s(\mathbb{R}_+^Z)$ for the set of stationary probability measures with finite one-
+dimensional mean, and $\mathcal{M}_c(\mathbb{R}_+^Z)$ for the set of ergodic probability measures with
+finite one-dimensional mean. For $\alpha \in \mathbb{R}_+$, we denote by $\mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ and $\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$
+the sets of stationary and ergodic probability measures with one-dimensional
+mean $\alpha$.
+
+The strong order on $\mathcal{M}(\mathbb{R}_+^Z)$, or $\mathcal{L}(\mathbb{R}_+^Z)$, is defined as follows (see [21] for more
+on strong orders). Consider $A, B \in \mathcal{L}(\mathbb{R}_+^Z)$ with respective distributions $\mu$ and $\nu$.
+We say that $A$ (resp. $\mu$) is strongly dominated by $B$ (resp. $\nu$), denoted $A \le_{\text{st}} B$
+(resp. $\mu \le_{\text{st}} \nu$), if
+
+$$E[f(A)] \le E[f(B)] \quad \Bigl(\text{resp. } \int f \, d\mu \le \int f \, d\nu\Bigr),$$
+
+for any measurable $f: \mathbb{R}_+^Z \to \mathbb{R}$ which is increasing and such that the expectations
+are well defined. Here we consider the usual component-wise ordering of $\mathbb{R}_+^Z$.
+
+PROPOSITION 2.1 ([22]). For $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$, $\mu \le_{\text{st}} \nu$ iff
+$\int f \, d\mu \le \int f \, d\nu$ for any increasing and continuous real function $f$ such that the
+expectations are well defined. For $\mu_n, \nu_n, n \in \mathbb{N}$, $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$,
+suppose that $\mu_n \xrightarrow{w} \mu$, $\nu_n \xrightarrow{w} \nu$ and that $\mu_n \le_{\text{st}} \nu_n$. Then $\mu \le_{\text{st}} \nu$.
+
+We shall use the following fact a couple of times. Consider two random
+processes on $\mathbb{R}_+^Z$: $A$, which is ergodic, and $B$, which is stationary. Assume
+that $A \le_{\text{st}} B$. Let $B$ be compatible with a $P$-stationary shift $\theta: \Omega \to \Omega$ and denote
+by $\mathfrak{T}$ the invariant $\sigma$-algebra. Then we have
+
+$$ (1) \qquad E[A(0)] \le E[B(0)|\mathfrak{T}] \qquad \text{a.s.} $$
+
+Furthermore, if $A$ is independent of $B$, then the conditional law of $B$ on the event
+$\{E[B(0)|\mathfrak{T}] = E[A(0)]\}$ is equal to the law of $A$. To prove this, the two ingredients
+are a representation theorem such as Theorem 1 in [14] and Birkhoff's ergodic
+theorem.
+
+The symbols $\sim$ and $\perp$ stand for "is distributed as" and "is independent of," respectively. We use the notation $\mathbb{N}^* = \mathbb{N} \setminus \{0\}$, $\mathbb{R}^* = \mathbb{R} \setminus \{0\}$, and $x^+ = \max(x, 0) = x \vee 0$. For $u, v$ in $\mathbb{R}^N$ or $\mathbb{R}^Z$, $u \le v$ denotes $u(n) \le v(n)$ for all $n$.
+
+**3. The model.** We introduce successively the $·/·/1/∞/FCFS$ queue (Section 3.1), the $G/G/1/∞/FCFS$ queue (Section 3.2), and the infinite tandem $G/G/1/∞/FCFS → ·/GI/1/∞/FCFS → ...$ (Section 3.3). The presentation is made in an abstract and functional way. However, to help intuition, we use the queueing terminology and notation.
+
+**3.1. The single queue.** Define the mapping
+
+$$ (2) \qquad \begin{aligned} \Psi : \mathbb{R}_+^Z &\times \mathbb{R}_+^Z \rightarrow \mathbb{R}_+^Z \cup \{(+\infty)^Z\}, \\ (a,s) &\mapsto w = \Psi(a,s), \end{aligned} $$
+
+with
+
+$$ (3) \qquad \begin{aligned} w(n) &= \Psi(a, s)(n) \\ &= \left[ \sup_{j \le n-1} \sum_{i=j}^{n-1} \bigl(s(i) - a(i)\bigr) \right]^+. \end{aligned} $$
+
+A priori, $\Psi$ is valued in $[0, \infty]^Z$, but it is easily checked using (5) below that $\Psi$ actually takes values in $\mathbb{R}_+^Z \cup \{(+\infty)^Z\}$. The map $\Psi$ computes the workloads ($w$) from the inter-arrivals ($a$) and the services ($s$). Observe that we have, for $m < n$ (Lindley's equations),
+
+$$ (4) \qquad w(n) = [w(n-1) + s(n-1) - a(n-1)]^+, $$
+
+$$ (5) \qquad w(n) = \left[ \max_{m \le j \le n-1} \sum_{i=j}^{n-1} \bigl(s(i) - a(i)\bigr) \right]^+ \vee \left[ w(m) + \sum_{i=m}^{n-1} \bigl(s(i) - a(i)\bigr) \right]^+. $$
+
+The inter-departure process $d = \Phi(a, s)$ is obtained from the workloads by $d(n) = [a(n) - s(n) - w(n)]^+ + s(n+1)$; equivalently, $\Phi(a, s) = [a - s - \Psi(a, s)]^+ + Ls$, where $L$ denotes the left translation shift of $\mathbb{R}_+^Z$. Both maps are monotone in the inter-arrivals:
+
+$$ (9) \qquad a \le a' \implies \Psi(a', s) \le \Psi(a, s), \qquad \Phi(a, s) \le \Phi(a', s). $$
+
+**3.2. The $G/G/1/∞/FCFS$ queue.** Consider a stationary queue: inter-arrival and service processes $A$ and $S$, compatible with a $P$-stationary shift $\theta$ with invariant $\sigma$-algebra $\mathfrak{T}$, and set $W = \Psi(A, S)$, $D = \Phi(A, S)$. Loynes' results provide the following classification.
+
+*The stable case.* On the event $\{E[S(0)|\mathfrak{T}] < E[A(0)|\mathfrak{T}]\}$, we have $W \in \mathbb{R}_+^Z$ a.s. and $E[D(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]$.
+
+*The unstable case.* On the event $\{E[S(0)|\mathfrak{T}] > E[A(0)|\mathfrak{T}]\}$, we have $W = (\infty)^Z$ and $D = LS$ [i.e., $\forall n, D(n) = S(n+1)$].
+
+*The critical case.* On the event $\{E[S(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\}$, we have $D=LS$ and anything may happen for $W$. For instance, if $A=S=(c)^Z$ for $c \in \mathbb{R}_+$, then $W=(0)^Z$. If $S$ is i.i.d. and nonconstant and $A \perp S$, then $W=(\infty)^Z$.
+
+Observe that a consequence of the above is that
+
+$$ \{E[D(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\} = \{E[S(0)|\mathfrak{T}] \le E[A(0)|\mathfrak{T}]\} $$
+
+(more rigorously, the symmetric difference of the two events has 0 probability).
+
+When the shift $\theta$ is ergodic, we are a.s. in the stable case when $E[S(0)] < E[A(0)]$, respectively, in the unstable case when $E[S(0)] > E[A(0)]$, and in the critical case when $E[S(0)] = E[A(0)]$.
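As an aside, the maps $\Psi$ and $\Phi$ lend themselves to a direct numerical sketch. The helper functions below are our own illustration (the names are ours): Lindley's recursion (4) is started from an arbitrary finite initial workload `w0` instead of the two-sided Loynes construction, and the inter-departures use the formula $d(n) = [a(n) - s(n) - w(n)]^+ + s(n+1)$ recalled in Section 5:

```python
def lindley_workloads(a, s, w0=0.0):
    # Lindley's recursion (4): w(n) = [w(n-1) + s(n-1) - a(n-1)]^+,
    # started from an arbitrary initial workload w0 rather than from
    # the two-sided (Loynes) construction used in the text.
    w = [w0]
    for n in range(1, len(a)):
        w.append(max(w[n - 1] + s[n - 1] - a[n - 1], 0.0))
    return w


def inter_departures(a, s, w):
    # Inter-departure times: d(n) = [a(n) - s(n) - w(n)]^+ + s(n+1).
    return [max(a[n] - s[n] - w[n], 0.0) + s[n + 1] for n in range(len(a) - 1)]
```

For constant inputs $a \equiv 2$, $s \equiv 1$ (a stable queue), the workload stays at $0$ and the inter-departures coincide with the inter-arrivals; for $a \equiv 1$, $s \equiv 2$ (unstable), the workload grows linearly and the inter-departures follow the shifted services, as in the unstable case above.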
+
+Let $\sigma$ be the law of $S$. Define
+
+$$ (10) \qquad \begin{array}{l} \Phi_{\sigma}: \mathcal{M}_s(\mathbb{R}_+^Z) \to \mathcal{M}_s(\mathbb{R}_+^Z), \\ \mu \mapsto \Phi_{\sigma}(\mu), \end{array} $$
+
+where $\Phi_{\sigma}(\mu)$ is the law of $\Phi(A, S)$ where $A \sim \mu$, $S \sim \sigma$ and $A \perp S$. The map $\Phi_{\sigma}$ is called the *queueing map*. A distribution $\mu$ such that $\Phi_{\sigma}(\mu) = \mu$ is called a *fixed point* for the queue. If the inter-arrival process is distributed as a fixed point $\mu$, then so is the inter-departure process. Consider now an ergodic queue. Rephrasing Loynes' results, we get
+
+$$
+\begin{align*}
+\forall \alpha > \beta, \quad \Phi_{\sigma}: \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) &\rightarrow \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}), \\
+\forall \alpha \leq \beta, \quad \Phi_{\sigma}: \mathcal{M}_{c}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) &\rightarrow \{\sigma\}.
+\end{align*}
+$$
+
+In particular, we have $\Phi_{\sigma}(\sigma) = \sigma$. We say that $\sigma$ is a *trivial* fixed point for the ergodic queue.
+
+Below, the main objective is to obtain nontrivial fixed points for $\Phi_{\sigma}$ in the special case of an i.i.d. queue. More precisely, we want to address the following question: for any $\alpha > \beta$, does there exist a fixed point which is ergodic and of mean $\alpha$?
+
+3.3. *Stable i.i.d. queues in tandem.* Consider a family $\{S(n, k), n \in \mathbb{Z}, k \in \mathbb{N}\}$ of i.i.d. random variables valued in $\mathbb{R}_+$ with $E[S(0, 0)] = \beta \in \mathbb{R}_+^*$. Assume that $S(0, 0)$ is nonconstant, that is, $P\{S(0, 0) = \beta\} < 1$. For $k$ in $\mathbb{N}$, define $S^k: \Omega \to \mathbb{R}_+^\mathbb{Z}$ by $S^k = (S(n,k))_{n \in \mathbb{Z}}$. Let $\sigma$ be the distribution of $S^k$. Consider $A^0 = (A(n,0))_{n \in \mathbb{Z}}: \Omega \to \mathbb{R}_+^\mathbb{Z}$ and assume that $A^0$ is stationary, independent of $S^k$ for all $k$, and satisfies $E[A(0,0)] = \alpha \in \mathbb{R}_+^*$. Let $\theta$ be a $P$-stationary shift such that $A^0$ and $S^k$ for all $k$ are compatible with $\theta$. Let $\mathfrak{T}$ be the corresponding invariant $\sigma$-algebra. We assume that the stability condition $\beta < E[A(0,0)|\mathfrak{T}]$ holds a.s.
+
+Define recursively for all $k \in \mathbb{N}$
+
+$$ (11) \qquad W^k = (W(n,k))_{n \in \mathbb{Z}} = \Psi(A^k, S^k), $$
+
+$$ (12) \qquad A^{k+1} = (A(n, k+1))_{n \in \mathbb{Z}} = \Phi(A^k, S^k). $$
+
+The random processes $A^k$, $S^k$ and $W^k$ are, respectively, the inter-arrival, service and workload processes at queue $k$. The random process $A^{k+1}$ is the inter-departure process at queue $k$ and the inter-arrival process at queue $k+1$. Each $(A^k, S^k)$ defines a stable i.i.d. queue according to the terminology of Section 3.2. Globally, this model is called a *tandem of stable i.i.d. queues*.
+
+The sequence $(A^k)_k$ is a Markov chain on the state space $\mathbb{R}_+^\mathbb{Z}$. Clearly, $\mu$ is a stationary distribution of $(A^k)_k$ if and only if $\mu$ is a fixed point for the queue, that is, iff $\Phi_\sigma(\mu) = \mu$. Hence, the problem to be solved can be rephrased as: does the Markov chain $(A^k)_k$ admit nontrivial stationary distributions?
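The Markov chain $(A^k)_k$ can be simulated directly. The following sketch is our own illustration, with arbitrarily chosen parameters: i.i.d. exponential inter-arrivals of mean $\alpha = 2$ are fed through five queues with i.i.d. exponential services of mean $\beta = 1$, each pass applying the queueing map once, with each queue started empty rather than in its stationary regime. Empirically, the mean is preserved along the tandem and, in accordance with Burke's theorem for this exponential case, the inter-departure times remain approximately exponential of mean $\alpha$:

```python
import random

random.seed(0)
n, num_queues = 20000, 5
alpha, beta = 2.0, 1.0  # mean inter-arrival / mean service time, beta < alpha

a = [random.expovariate(1.0 / alpha) for _ in range(n)]  # A^0
for k in range(num_queues):
    s = [random.expovariate(1.0 / beta) for _ in range(len(a))]  # S^k
    # Workloads W^k via Lindley's recursion (4), starting from an empty queue:
    w = [0.0]
    for i in range(1, len(a)):
        w.append(max(w[i - 1] + s[i - 1] - a[i - 1], 0.0))
    # A^{k+1}: inter-departures, one application of the queueing map (12):
    a = [max(a[i] - s[i] - w[i], 0.0) + s[i + 1] for i in range(len(a) - 1)]

mean_out = sum(a) / len(a)
var_out = sum((x - mean_out) ** 2 for x in a) / len(a)
```

The empirical mean stays close to $\alpha = 2$, and the empirical variance stays close to $\alpha^2 = 4$ (the variance of an exponential law of mean $2$), consistent with the Poisson process being the fixed point here.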
+
+**4. Uniqueness of fixed points and convergence.** In this section, we recall several results about the uniqueness of fixed points as well as convergence results. Associated with the existence results to be proved in Section 5, the results recalled here complete the picture about fixed point theorems. More importantly, they will be instrumental in several of the later proofs.
+
+THEOREM 4.1 ([2, 17]). Consider the stable i.i.d. tandem model defined in Section 3.3 with an ergodic inter-arrival process of mean $\alpha > \beta$. Assume that
+
+$$ (13) \qquad \int_{\mathbb{R}_+} P\{S(0, 0) \ge u\}^{1/2} du < \infty. $$
+
+Then there exists $M(\alpha) \in \mathbb{R}_+$ such that almost surely $\lim_{n \to +\infty} n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$, where $M(\alpha) = \sup_{x \ge 0} (\gamma(x) - \alpha x)$ and the function $\gamma: \mathbb{R}_+ \to \mathbb{R}_+$ depends only on the service process. If we further assume that the initial inter-arrival process satisfies
+
+$$ (14) \qquad \exists c,\ E[S(0, 0)] < c < E[A(0, 0)], \qquad E\left[\sup_{n \in \mathbb{N}^*}\left[\sum_{i=-n}^{-1} \bigl(c - A(i, 0)\bigr)\right]^+\right] < \infty, $$
+
+then the convergence to $M(\alpha)$ also holds in $L_1$.
+
+Observe that $M(\alpha)$ depends on the inter-arrival process only via its mean. The function $\gamma$ in Theorem 4.1 is continuous, strictly increasing, concave and satisfies $\gamma(0) = 0$. For details on $\gamma$, refer to [2, 12].
+
+Theorem 4.1 is proved in [2] under the condition: $E[S(0, 0)^{3+a}] < \infty$ for some $a > 0$. The above version is proved in [17] (using similar methods as in [2]) and is better since we have
+
+$$ \left[\exists a > 0,\ E[S(0, 0)^{2+a}] < \infty\right] \implies \int P\{S(0, 0) \ge u\}^{1/2} \, du < \infty \implies E[S(0, 0)^2] < \infty. $$
+
+Condition (14) is slightly stronger than $E[W(0, 0)] < \infty$. Indeed, recall the following results from [9]. If $E[S(0, 0)^2] < \infty$, then, setting $\beta = E[S(0, 0)]$,
+
+$$ (15) \qquad \begin{aligned} \exists c > \beta,\ E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} \bigl(c - A(i, 0)\bigr)\right]^+\right] < \infty &\implies E[W(0, 0)] < \infty, \\ E[W(0, 0)] < \infty &\implies E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} \bigl(\beta - A(i, 0)\bigr)\right]^+\right] < \infty. \end{aligned} $$
+
+Condition (14) is satisfied, for example, by the deterministic process $P\{A^0 = (\alpha)^Z\} = 1$.
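Indeed, for the deterministic process the supremum in (14) can be computed directly: taking any $c \in (\beta, \alpha)$,

$$
\sup_{n \in \mathbb{N}^*} \left[ \sum_{i=-n}^{-1} \bigl(c - A(i, 0)\bigr) \right]^+ = \sup_{n \in \mathbb{N}^*} \bigl[ n(c - \alpha) \bigr]^+ = 0,
$$

since $c < \alpha$, so the expectation in (14) is $0 < \infty$.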
+
+The next result requires some preparation. Let $\mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z)$ be the set of random processes $(X(n), Y(n))_{n \in \mathbb{Z}}$ which are stationary in $n$. Consider $\mu$ and $\nu$
+
+in $\mathcal{M}_s(\mathbb{R}_+^Z)$ and let $\mathcal{D}(\mu, \nu) = \{(X, Y) \in \mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z) | X \sim \mu, Y \sim \nu\}$. That is, $\mathcal{D}(\mu, \nu)$ is the set of jointly stationary processes whose marginals are distributed as $\mu$ and $\nu$. The $\bar{\rho}$ distance between $\mu$ and $\nu$ is given by
+
+$$ (16) \qquad \bar{\rho}(\mu, \nu) = \inf_{(X,Y) \in \mathcal{D}(\mu,\nu)} E[|X(0) - Y(0)|]. $$
+
+See Gray [13], Chapter 8, for a proof that $\bar{\rho}$ is indeed a distance. Given two r.v.'s A and B with respective laws $\mu$ and $\nu$, set $\bar{\rho}(A, B) = \bar{\rho}(\mu, \nu)$. We recall a well-known fact (see also Section 7): convergence in the $\bar{\rho}$ distance implies weak convergence, but not conversely.
+
+**THEOREM 4.2 ([8, 19]).** Consider a stationary queue as in Section 3.2 with service process S and two inter-arrival processes A and B, possibly of different means. Assume that $A \perp S$ and $B \perp S$. Then,
+
+$$ (17) \qquad \bar{\rho}(\Phi(A, S), \Phi(B, S)) \le \bar{\rho}(A, B). $$
+
+Consider now a stable i.i.d. tandem model as in Section 3.3 with inter-arrival processes $A^0$ and $B^0$ with different laws but such that $E[A(0,0)|\mathfrak{T}] = E[B(0,0)|\mathfrak{T}]$ a.s. Recall that $(A^n)_n$ and $(B^n)_n$ are defined recursively by $A^{n+1} = \Phi(A^n, S^n)$ and $B^{n+1} = \Phi(B^n, S^n)$. Then there exists $k \in \mathbb{N}^*$ such that
+
+$$ (18) \qquad \bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0). $$
+
+If we further assume that $B^1 = \Phi(B^0, S^0) \sim B^0$, then
+
+$$ (19) \qquad \lim_{n \to +\infty} \bar{\rho}(A^n, B^0) = 0 \quad \text{and} \quad \text{hence } A^n \xrightarrow{w} B^0. $$
+
+Chang [8] gives an elegant proof of (17). He also proves (18) for unbounded services. Prabhakar [19] removes this restriction and also establishes (19). As opposed to Theorem 4.1, observe that the convergence result in (19) is proved under the a priori assumption of existence of a fixed point.
+
+Define (“*p*: α” stands for “pathwise means are equal to α”)
+
+$$ (20) \qquad \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) = \left\{ \mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z) \mid X \sim \mu \Rightarrow \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} X(i) = \alpha \text{ a.s.} \right\}. $$
+
+Obviously, $\mathcal{M}_c^\alpha(\mathbb{R}_+^Z) \subset \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) \subset \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$. The ergodic components of $\chi \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ all have one-dimensional mean $\alpha$. An important consequence of (18) is the following uniqueness result.
+
+**COROLLARY 4.3.** Consider an i.i.d. queue as in Section 3.2. The corresponding queueing map $\Phi_\sigma$ has at most one fixed point in $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ for $\alpha > E[S(0)]$.
+
+In particular, there is at most one fixed point in $\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$. In fact, we have the following stronger result.
+
+PROPOSITION 4.4. Consider an i.i.d. queue as in Section 3.2 and $\alpha > E[S(0)]$. If $\zeta \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ is a fixed point, then it is necessarily ergodic; that is, $\zeta \in \mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$.
+
+PROOF. Suppose that the ergodic decomposition of $\zeta$ is given by $\zeta = \int \mu\,\Gamma(d\mu)$, where $\Gamma$ is a probability measure on $\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$. Denote the support of $\Gamma$ by $\text{supp}(\Gamma) \subset \mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$. Assume that $\zeta$ is nonergodic, meaning that $\text{supp}(\Gamma)$ is not a singleton. Let $S$ be a subset of $\text{supp}(\Gamma)$ such that $0 < \Gamma\{S\} < 1$.
+
+Consider a stable i.i.d. tandem model as in Section 3.3. Let $A^0$ and $B^0$ be two inter-arrival processes, independent of the services, and such that $A^0 \sim \zeta$, $B^0 \sim \zeta$, $A^0 \perp B^0$. Define $(A^k)_k$ and $(B^k)_k$ as in (12). Let $C_b(\mathbb{R}_+^Z)$ be the set of continuous and bounded functions from $\mathbb{R}_+^Z$ to $\mathbb{R}$. Recall that $L$ is the left translation shift of $\mathbb{R}_+^Z$ and define recursively $L^{i+1} = L \circ L^i$. Define the $\theta$-invariant events
+
+$$ A = \left\{ \exists \mu \in S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i A^0) = \int f d\mu \right\}, $$
+
+$$ B = \left\{ \exists \mu \in \text{supp}(\Gamma) \setminus S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i B^0) = \int f d\mu \right\}. $$
+
+Roughly speaking, on the event $A \cap B$, the processes $A^0$ and $B^0$ are distributed according to different components of the ergodic decomposition of $\zeta$. Using the independence of $A^0$ and $B^0$, we have
+
+$$ P\{A \cap B\} = P\{A\}P\{B\} = \Gamma\{S\}(1 - \Gamma\{S\}) > 0. $$
+
+Define the processes
+
+$$ \tilde{A}^0 = A^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}, \quad \tilde{B}^0 = B^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}. $$
+
+By construction, the laws of $\tilde{A}^0$ and $\tilde{B}^0$ are different and we have $E[\tilde{A}(0, 0)|\mathfrak{T}] = E[\tilde{B}(0, 0)|\mathfrak{T}] = \alpha$ almost surely. Hence we can apply (18) in Theorem 4.2: there exists $k \in \mathbb{N}^*$ such that $\bar{\rho}(\tilde{A}^k, \tilde{B}^k) < \bar{\rho}(\tilde{A}^0, \tilde{B}^0)$. We deduce easily that $\bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0)$. This is in obvious contradiction with $\bar{\rho}(A^0, B^0) = 0$, which follows from $A^0 \sim B^0$. We conclude that the support of $\Gamma$ is a singleton. $\square$
+
+**5. Existence of fixed points.** Consider the stable i.i.d. tandem model of Section 3.3. The objective is to prove Theorem 5.1, that is, to obtain nontrivial stationary distributions for $(A^k)_k$, or equivalently nontrivial fixed points for $\Phi_\sigma$.
+
+The first step is classical and consists of considering Cesaro averages of the laws of $A^k$. Consider the quadruple $(A^k, S^k, W^k, A^{k+1})$ and denote its law by
+
+$v_k \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$. For $n \in \mathbb{N}^*$, define $\mu_n \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$ by
+
+$$\mu_n = \frac{1}{n} \sum_{k=0}^{n-1} v_k.$$
+
+The following interpretation may be useful: $\mu_n$ is the law of $(A^N, S^N, W^N, A^{N+1})$ where $N$ is a r.v. uniformly distributed over $\{0, \dots, n-1\}$ and independent of all the other r.v.'s of the problem.
+
+For all $n \in \mathbb{N}^*$, consider a quadruple of random processes $(\hat{A}^n, \hat{S}^n, \hat{W}^n, \hat{D}^n)$ distributed according to $\mu_n$. We have
+
+$$ (21) \qquad \hat{S}^n \sim \sigma, \quad \hat{S}^n \perp \hat{A}^n, \quad \hat{W}^n = \Psi(\hat{A}^n, \hat{S}^n), \quad \hat{D}^n = \Phi(\hat{A}^n, \hat{S}^n). $$
+
+First of all, we argue that the sequence $(\mu_n)_n$ is tight. Denote by $\mu_n^{(1)}$, $\mu_n^{(2)}$, $\mu_n^{(3)}$ and $\mu_n^{(4)}$ the marginals of $\mu_n$ corresponding respectively to the laws of $\hat{A}^n$, $\hat{S}^n$, $\hat{W}^n$ and $\hat{D}^n$. Since $\mu_n^{(3)}$ is defined on the compact space $[0, \infty]^Z$ and since $\mu_n^{(2)} = \sigma$, the only point to be argued is that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight. According to Loynes' results, we have $\mu_n^{(1)}$, $\mu_n^{(4)} \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ [we even have $\mu_n^{(1)}$, $\mu_n^{(4)} \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$]. For $\varepsilon > 0$, the set $K = \prod_{i \in \mathbb{Z}}[0, 2^{|i|+2}/\varepsilon]$ is compact in the product topology according to Tychonoff's theorem. It is immediate to check that for $\mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$, we have $\mu\{K\} \ge 1 - \alpha\varepsilon$. We conclude that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight.
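The bound $\mu\{K\} \ge 1 - \alpha\varepsilon$ is a coordinate-wise Markov inequality; spelling it out, since each coordinate of $\mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ has mean $\alpha$,

$$
\mu\{K^c\} \le \sum_{i \in \mathbb{Z}} \mu\left\{x : x(i) > \frac{2^{|i|+2}}{\varepsilon}\right\} \le \sum_{i \in \mathbb{Z}} \frac{\alpha \varepsilon}{2^{|i|+2}} = \frac{3 \alpha \varepsilon}{4} \le \alpha \varepsilon,
$$

using $\sum_{i \in \mathbb{Z}} 2^{-|i|} = 3$.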
+
+Consequently, by Prohorov's theorem, $(\mu_n)_n$ admits weakly converging subsequences. Let $\mu$ be a subsequential limit of $(\mu_n)_n$. Consider a quadruple of random processes
+
+$$ (22) \qquad (\hat{A}, \hat{S}, \tilde{W}, \tilde{D}) \sim \mu. $$
+
+It follows immediately from (21) that
+
+$$ (23) \qquad \hat{S} \sim \sigma, \quad \hat{S} \perp \hat{A}. $$
+
+Recall that we have $\hat{D}^n = [\hat{A}^n - \hat{S}^n - \hat{W}^n]^+ + L\hat{S}^n$. By the continuous mapping theorem, we deduce that
+
+$$ (24) \qquad \tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}. $$
+
+On the other hand, it is not a priori true that $\tilde{W} = \Psi(\hat{A}, \hat{S})$ and $\tilde{D} = \Phi(\hat{A}, \hat{S})$ (which is the reason for the notation $\hat{A}, \hat{S}$ on the one side and $\tilde{W}, \tilde{D}$ on the other). Using (5) we have, for all $k < l-1$,
+
+$$
+\left[ \max_{k \le j \le l-1} \sum_{i=j}^{l-1} \bigl(\hat{S}(i) - \hat{A}(i)\bigr) \right]^+ \le \tilde{W}(l).
+$$
+
+Letting $k \to -\infty$, this yields $\Psi(\hat{A}, \hat{S}) \le \tilde{W}$. Define the invariant event
+
+$$
+\mathcal{A} = \{ E[\hat{A}(0)|\mathfrak{T}] = \beta \}, \qquad \mathcal{A}^c = \{ E[\hat{A}(0)|\mathfrak{T}] > \beta \}.
+$$
+
+Using Loynes' results for the critical case, we have $\Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$.
+Now using (26), we deduce that $\tilde{D} = \Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$.
+
+Since $\hat{A} \ge_{\text{st}} \hat{S}$ and $\hat{A} \perp \hat{S}$, we have, according to (1),
+
+$$
+\hat{A} = \bar{S}\, 1_{\mathcal{A}} + \hat{A}\, 1_{\mathcal{A}^c},
+$$
+
+where $\bar{S} \sim \hat{S}$. Furthermore, we have just proved that
+
+$$ \tilde{D} = L\hat{S}\, 1_{\mathcal{A}} + \tilde{D}\, 1_{\mathcal{A}^c}. $$
+
+Since $\hat{A} \sim \tilde{D}$, we deduce readily that $\hat{A}1_{\mathcal{A}^c} \sim \tilde{D}1_{\mathcal{A}^c}$. On the event $\mathcal{A}^c$, we have, using Birkhoff's ergodic theorem,
+
+$$ \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \hat{A}(i) = E[\hat{A}(0)|\mathfrak{T}] > \beta \implies \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \tilde{D}(i) > \beta. $$
+
+In view of $\tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}$, we deduce that on $\mathcal{A}^c$, we have $\tilde{W} \in \mathbb{R}_+^Z$ a.s. Conversely, on the event $\mathcal{A}$, $\hat{A}$ is distributed as the i.i.d. nonconstant process $\hat{S}$, so that $\tilde{W} \ge \Psi(\hat{A}, \hat{S}) = (\infty)^Z$ by the critical case discussed in Section 3. Hence, up to a null event,
+
+$$ (30) \qquad \{\tilde{W} \in \mathbb{R}_+^Z\} = \mathcal{A}^c = \{E[\hat{A}(0)|\mathfrak{T}] > \beta\}. $$
+
+Let $\zeta$ denote the law of $\hat{A}$. Since $\hat{A} \sim \tilde{D}$ and $\tilde{D} = \Phi(\hat{A}, \hat{S})$, the law $\zeta$ is a fixed point for the queue.
+
+Consequently, if $\tilde{W} = (\infty)^{\mathbb{Z}}$ a.s. then $\zeta = \sigma$, and if $P\{\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}}\} > 0$ then $\zeta$ is a nontrivial fixed point for the queue.
+
+Assume now that the moment condition $\int P\{S(0, 0) \ge u\}^{1/2} du < \infty$ is satisfied. This is the condition needed in Theorem 4.1 to obtain that $\lim_n n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$ a.s. for a finite constant $M(\alpha)$. Let us prove that
+
+$$ (31) \qquad \lim_{n \to +\infty} \frac{1}{n} \sum_{i=0}^{n-1} W(0, i) = M(\alpha) \text{ a.s.} \implies \tilde{W}(0) \in \mathbb{R}_{+} \text{ a.s.} $$
+
+We argue by contradiction; hence, suppose that $P\{\tilde{W}(0) = +\infty\} = a > 0$. Fix $K > 0$. Let $f$ be a strictly increasing function of $\mathbb{N}$ such that $\mu_{f(n)} \xrightarrow{w} \mu$. We have $\widehat{W}^{f(n)}(0) \xrightarrow{w} \tilde{W}(0)$. Recall that $P\{\widehat{W}^n(0) \ge K\} = n^{-1}\sum_{i=0}^{n-1} P\{W(0, i) \ge K\}$. We deduce that
+
+$$ \forall b \in (0, a), \exists N, \forall n = f(k) \ge N, \quad \frac{1}{n} \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \ge b. $$
+
+Fix $b \in (0, a)$, $c \in (0, b)$ and $n = f(k) \ge N$. Define the event $\mathcal{E} = \{n^{-1} \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} \ge c\}$ and set $q = P\{\mathcal{E}\}$. We have
+
+$$
+\begin{aligned}
+& \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} \\
+& = \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}} + \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}^c} \le n1_{\mathcal{E}} + nc1_{\mathcal{E}^c}.
+\end{aligned}
+$$
+
+Taking expectations, we get
+
+$$ nb \le \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \le nq + n(1-q)c. $$
+
+We conclude that $q \ge (b-c)/(1-c) > 0$. Since this last inequality is valid for any $K$, we clearly have a contradiction with the a.s. convergence of $n^{-1}\sum_{i=0}^{n-1} W(0, i)$ to a finite constant.
+
+We conclude that under the assumptions of Theorem 4.1, the fixed point $\zeta$ is nontrivial. Summarizing all of the above, we obtain the following result.
+
+**THEOREM 5.1.** Consider a single server infinite buffer FCFS queue with an i.i.d. service process $S$ satisfying: $E[S(0)] \in \mathbb{R}_+^*$, $P\{S(0) = E[S(0)]\} < 1$ and $\int P\{S(0) \ge u\}^{1/2} du < \infty$. Then there exists an ergodic inter-arrival process $A$ with $A \perp S$ and $E[S(0)] < E[A(0)] < \infty$, and such that the corresponding inter-departure process $D$ has the same distribution as $A$.
+
+PROOF. Consider a tandem of queues as in Section 3.3 where the service processes $S^k$ are distributed as $S$ with law $\sigma$. Consider the process $\hat{A}$ with law $\zeta$ as defined in (22). By the ergodic decomposition theorem and the linearity of $\Phi_\sigma$, we have
+
+$$ \zeta = \int_{\mathcal{M}_c(\mathbb{R}_+^Z)} \chi \Gamma(d\chi), \quad \Phi_\sigma(\zeta) = \int_{\mathcal{M}_c(\mathbb{R}_+^Z)} \Phi_\sigma(\chi) \Gamma(d\chi). $$
+
+But $\zeta = \Phi_\sigma(\zeta)$. Therefore, the uniqueness of ergodic decompositions and the mean preservation property of stable queues imply that
+
+$$ \zeta_\alpha = \int_{\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)} \chi \Gamma(d\chi) = \int_{\mathcal{M}_c^\alpha(\mathbb{R}_+^Z)} \Phi_\sigma(\chi) \Gamma(d\chi) = \Phi_\sigma(\zeta_\alpha) $$
+
+for every $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$. By Proposition 4.4, the distributions $\zeta_\alpha$ are ergodic. According to (31), which holds since $\int P\{S(0) \ge u\}^{1/2} du < \infty$, we have $P\{\tilde{W} \in \mathbb{R}_+^Z\} = 1$ and $E[\hat{A}(0)|\mathfrak{T}] > E[S(0)]$ according to (30). Hence any $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$ is such that $\alpha > E[S(0)]$ and we conclude that the corresponding distribution $\zeta_\alpha \in \mathcal{M}_c^\alpha(\mathbb{R}_+^Z)$ is such that $\Phi_\sigma(\zeta_\alpha) = \zeta_\alpha$. $\square$
+
+To the best of our knowledge, this provides the first positive answer (apart from the cases of exponential and geometric service times) to the intriguing question of the existence of nontrivial ergodic fixed points for a $·/GI/1/∞/FCFS$ queue.
+
+**6. Values of the means for which a fixed point exists.** Consider a tandem of stable i.i.d. queues as in Section 3.3 and let $\Phi_\sigma$ be the corresponding queueing operator. Assume also that the condition (13) holds. Define
+
+$$ (32) \qquad \mathcal{S} = \{\alpha \in (\beta, +\infty) \mid \exists \mu \in \mathcal{M}_c^\alpha(\mathbb{R}_+^Z), \Phi_\sigma(\mu) = \mu\}. $$
+
+According to Theorem 5.1, the set $\mathcal{S}$ is nonempty. We establish in Theorem 6.4 that $\mathcal{S}$ is unbounded and closed in $(\beta, \infty)$. We believe that $\mathcal{S} = (\beta, +\infty)$, but we have not been able to prove this last point (see Conjecture 6.6). Proposition 6.5 also describes the limiting behavior obtained when the tandem is fed an ergodic inter-arrival process whose mean $\alpha$ does not belong to $\mathcal{S}$ (the case $\alpha \in \mathcal{S}$ is settled by Theorem 4.2).
+
+From now on, for $\alpha \in \mathcal{S}$, denote by $\zeta_\alpha$ the unique ergodic fixed point of mean $\alpha$ and by $A_\alpha$ an inter-arrival process distributed as $\zeta_\alpha$. Let $S$ be distributed as $\sigma$ and independent of all other r.v.'s. Also it is convenient to denote by $\mathcal{L}(A)$ the law of a r.v. $A$, and by supp $A$ its support.
+
+The following argument is used several times. Consider $\alpha \in \mathcal{S}$ and let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. According to (19), we have $A^n \xrightarrow{w} A_\alpha$. It implies that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(A^i) \xrightarrow{w} \mathcal{L}(A_\alpha)$. According to (28), we have
+
+$$ (33) \qquad \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)). $$
+
+We now prove a series of preliminary lemmas.
+
+LEMMA 6.1. For any $\alpha > \beta$, $\mathcal{S} \cap (\beta, \alpha] \neq \emptyset$.
+
+PROOF. Fix $\alpha > \beta$. Let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. Recall from the proof of Theorem 5.1 that
+
+$$ (34) \qquad \operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S} \subset (\beta, \infty). $$
+
+By Fatou's lemma, $E[\hat{A}(0)] \le \alpha$. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]]$, we conclude that $\mathcal{S} \cap (\beta, \alpha] \ne \emptyset$. $\square$
+
+LEMMA 6.2. Consider an ergodic inter-arrival process $A^0$ of mean $\alpha > \beta$. Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. If $\delta \in \mathcal{S} \cap (\beta, \alpha]$ (resp. $\delta \in \mathcal{S} \cap [\alpha, \infty)$, assuming $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$), then $A_\delta \le_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \le_{\text{st}} \Psi(A_\delta, S)$ [resp., $A_\delta \ge_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \ge_{\text{st}} \Psi(A_\delta, S)$]. Further, if $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$, then $E[\hat{A}(0)] = \alpha$.
+
+PROOF. Consider the case $\delta \in \mathcal{S} \cap [\alpha, \infty)$. The other case can be treated similarly. Define the process $B^0 = \delta\alpha^{-1}A^0$, that is,
+
+$$ \forall n, \quad B(n, 0) = \frac{\delta}{\alpha} A(n, 0). $$
+
+The process $B^0$ is ergodic and of mean $\delta$. At mean $\delta$, $\Phi_\sigma$ admits the fixed point $\zeta_\delta$. By (19), we have $B^k \xrightarrow{w} A_\delta$. By construction, we have $A^0 \le B^0$ almost surely. Using the monotonicity property (9), we get that, for all $k \in \mathbb{N}$,
+
+$$ A^k \le B^k \quad \text{and} \quad \Psi(A^k, S^k) \ge \Psi(B^k, S^k). $$
+
+It implies that for all $k \in \mathbb{N}^*$,
+
+$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \le_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(B^i) $$
+
+and
+
+$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(A^i, S^i)) \geq_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(B^i, S^i)). $$
+
+Going to the limit along an appropriate subsequence and applying (33), we obtain
+
+$$ \hat{A} \leq_{\text{st}} A_{\delta} \quad \text{and} \quad \Psi(\hat{A}, S) \geq_{\text{st}} \Psi(A_{\delta}, S). $$
+
+We are left with having to show that $E[\hat{A}(0)] = \alpha$. Observe that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i) \xrightarrow{w} \zeta_\delta$, and that the one-dimensional marginals converge in expectation since $k^{-1} \sum_{i=0}^{k-1} E[B(0, i)] = \delta = E[A_\delta(0)]$. It follows by Theorem 5.4 of [3] that the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i))_k$ is uniformly integrable. It implies that the dominated sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i))_k$ is also uniformly integrable. Along an appropriate subsequence, this last sequence converges weakly to the law of $\hat{A}$ and we conclude (Theorem 5.4 of [3]) that it also converges in expectation. Since $k^{-1} \sum_{i=0}^{k-1} E[A(0, i)] = \alpha$ for all $k$, we deduce that $E[\hat{A}(0)] = \alpha$. $\square$
+
+LEMMA 6.3. *The following statements are true:*
+
+(a) for $\alpha, \delta \in \mathcal{S}$ and $\alpha < \delta$, $A_\alpha \leq_{\text{st}} A_\delta$ and $\Psi(A_\alpha, S) \geq_{\text{st}} \Psi(A_\delta, S)$;
+
+(b) for $\alpha \in \mathcal{S}, E[\Psi(A_\alpha, S)(0)] = M(\alpha)$, where $M(\alpha)$ is defined in Theorem 4.1.
+
+PROOF. Part (a) is a direct consequence of Lemma 6.2. Consider part (b). Fix $\alpha \in \mathcal{S}$. Consider $A^0$ an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). From Theorem 4.1, we have
+
+$$ \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$
+
+Starting from (33) and applying Fatou's lemma, we get
+
+$$ E[\Psi(A_\alpha, S)(0)] \leq \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$
+
+Now let us prove that $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. By Lemma 6.1, there exists $\delta \in \mathcal{S} \cap (\beta, \alpha]$. Define the process $B^0 = \alpha\delta^{-1}A_\delta$ and let $(B^n)_n$ be defined as in (12). The process $B^0$ is ergodic of mean $\alpha$. We also have $B^0 \geq A_\delta$ a.s. Using (9), this implies
+
+$$ \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \leq_{\text{st}} \mathcal{L}(\Psi(A_\delta, S)(0)) \quad \text{for all } n. $$
+
+Since $E[\Psi(A_\delta, S)(0)] \leq M(\delta) < \infty$, the sequence $\{n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)), n \in \mathbb{N}^*\}$ is uniformly integrable. Furthermore, we have from (33)
+---PAGE_BREAK---
+
+that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)(0))$. By Theorem 5.4 of [3], weak convergence plus uniform integrability implies convergence in expectation:
+
+$$\lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)] = E[\Psi(A_\alpha, S)(0)].$$
+
+Now recall from Theorem 4.1 that we have $n^{-1} \sum_{i=0}^{n-1} \Psi(B^i, S^i)(0) \to M(\alpha)$ almost surely. Applying Fatou's lemma, we get
+
+$$M(\alpha) \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)].$$
+
+Summarizing, we have $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. This completes the proof. $\square$
+
+**THEOREM 6.4.** *The set $\mathcal{S}$ is closed in $(\beta, \infty)$ and $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = +\infty$.*
+
+**PROOF.** A direct consequence of Lemma 6.1 is that $\inf\{u \in \mathcal{S}\} = \beta$. We prove that $\sup\{u \in \mathcal{S}\} = +\infty$ by contradiction. Thus, suppose $\sup\{u \in \mathcal{S}\} < \infty$ and consider $\alpha > \sup\{u \in \mathcal{S}\}$. Let $A^0$ be an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, $A_\delta \leq_{\text{st}} \hat{A}$ for any $\delta \in \mathcal{S}$. According to (1), this implies that $\delta \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Since $\text{supp}\, E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$, see (34), we conclude that almost surely
+
+$$E[\hat{A}(0)|\mathfrak{T}] = \sup\{u \in \mathcal{S}\} \in \mathcal{S}.$$
+
+Set $\eta = \sup\{u \in \mathcal{S}\}$. Since $\hat{A}$ is a fixed point, we must have $\hat{A} \sim A_\eta$. In particular, along an appropriate subsequence, we have that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i))$ converges weakly to $\mathcal{L}(\Psi(A_\eta, S))$. Now, a sequential use of Lemma 6.3, Fatou's lemma and Theorem 4.1 gives us
+
+$$M(\eta) = E[\Psi(A_\eta, S)(0)] \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha).$$
+
+It follows from the properties of $\gamma$ recalled after the statement of Theorem 4.1 that $M(x)$ is a nonnegative and decreasing function that is strictly decreasing on the set $\{x \mid M(x) > 0\}$. Since $\eta < \alpha$ and $M(\eta) \le M(\alpha)$, we conclude that $M(\alpha) = M(\eta) = 0$. Thus, $E[\Psi(A_\eta, S)(0)] = 0$, that is, $P\{\Psi(A_\eta, S) = (0)^{\mathbb{Z}}\} = 1$. Let us input the process $A_\eta$ into the tandem of queues. Using (8) recursively, we obtain
+
+$$\begin{align*}
+A_{\eta}^{k}(0) &= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)] + \sum_{i=0}^{k-1}[\Psi(A_{\eta}^{i}, S^{i})(1) - \Psi(A_{\eta}^{i}, S^{i})(0)] \\
+&= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)].
+\end{align*}$$
+---PAGE_BREAK---
+
+Since the service times are i.i.d. and nonconstant, the partial sums $\sum_{i=0}^{k-1}[S(1, i) - S(0, i)]$ form a null-recurrent random walk. Thus there is a $k$ for which $A_{\eta}^{k}(0) < 0$ with strictly positive probability, which is impossible. Hence we cannot have $M(\eta) = 0$. This contradiction shows that $\sup\{u \in \mathcal{S}\} = +\infty$; recall also from Lemma 6.2 that $E[\hat{A}(0)] = \alpha$.
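To illustrate the recurrence step numerically (a hedged sketch; exponential service times are an assumption for illustration, not specified by the text): the increments $S(1,i) - S(0,i)$ are i.i.d., mean zero and nonconstant, so the partial-sum walk goes strictly negative along the way.

```python
import random

def partial_sum_minima(n_walks=50, n_steps=1000, seed=1):
    """Simulate walks with increments S(1,i) - S(0,i), where the S's are
    i.i.d. exponential(1) service times; each increment has mean zero."""
    rng = random.Random(seed)
    minima = []
    for _ in range(n_walks):
        s, m = 0.0, 0.0
        for _ in range(n_steps):
            s += rng.expovariate(1.0) - rng.expovariate(1.0)
            m = min(m, s)
        minima.append(m)
    return minima

minima = partial_sum_minima()
# Recurrence: some walk (in fact, typically most) dips strictly below 0.
print(sum(1 for m in minima if m < 0), "of", len(minima), "walks went negative")
```

With mean-zero, nonconstant increments, the probability that all 50 walks stay nonnegative for 1000 steps is vanishingly small, which is the content of the contradiction in the proof.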
+
+We now prove that $\mathcal{S}$ is closed in $(\beta, \infty)$. Consider a sequence $\alpha_k$ of elements of $\mathcal{S}$ that increases to $\alpha \in (\beta, \infty)$. Let $A^0$ and $\hat{A}$ be defined as above (for the mean $\alpha$). Using Lemma 6.2, we have $A_{\alpha_k} \leq_{\text{st}} \hat{A}$ and using (1), we have $\alpha_k \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Passing to the limit, we get $\alpha \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]] = \alpha$, we conclude that $\text{supp}\, E[\hat{A}(0)|\mathfrak{T}] = \{\alpha\}$. This implies that $\alpha \in \mathcal{S}$. The proof works similarly when $\alpha_k$ is a decreasing sequence. $\square$
+
+**PROPOSITION 6.5.** *Consider an ergodic inter-arrival process $A^0$ of mean $\alpha$. There are two possibilities:*
+
+1. if $\alpha \in \mathcal{S}$, then $\bar{\rho}(A^k, A_\alpha) \xrightarrow{k} 0$ and hence $A^k \xrightarrow{w} A_\alpha$;
+
+2. if $\alpha \notin \mathcal{S}$, then $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$, where
+
+$$ (35) \qquad \underline{\alpha} = \sup\{u \in \mathcal{S}; u \le \alpha\}, \qquad \overline{\alpha} = \inf\{u \in \mathcal{S}; u \ge \alpha\} \quad \text{and} \quad p = \frac{\overline{\alpha} - \alpha}{\overline{\alpha} - \underline{\alpha}}. $$
+
+In words, the weak Cesaro limit is a linear combination of the largest ergodic fixed point of mean less than $\alpha$ and of the smallest ergodic fixed point of mean more than $\alpha$. The weak Cesaro limit always has mean $\alpha$.
+
+**PROOF.** The case $\alpha \in \mathcal{S}$ is a restatement of (19). Consider $\alpha \notin \mathcal{S}$. Denote by $\hat{A}$ a process whose law is a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, we have $A_u \leq_{\text{st}} \hat{A} \leq_{\text{st}} A_v$ for any $u, v \in \mathcal{S}$ such that $u < \alpha < v$. Therefore, using (1), we get that $u \le E[\hat{A}(0)|\mathfrak{T}] \le v$ a.s. Since $\text{supp}\, E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$ [see (34)] and $E[\hat{A}(0)] = \alpha$ (Lemma 6.2), we conclude that $\text{supp}\, E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, where $\underline{\alpha}$ and $\overline{\alpha}$ are defined as in (35).
+
+We know from Section 5 that the law of $\hat{A}$ is a fixed point. Given that $\text{supp}\, E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, Proposition 4.4 tells us that $\hat{A} \sim pA_{\underline{\alpha}} + (1-p)A_{\overline{\alpha}}$ for some $p$. Therefore $E[\hat{A}(0)] = p\underline{\alpha} + (1-p)\overline{\alpha}$ and from $E[\hat{A}(0)] = \alpha$, we conclude that $p = (\overline{\alpha} - \alpha)/(\overline{\alpha} - \underline{\alpha})$.
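A quick numerical check of the mixing weight (with hypothetical values $\underline{\alpha} = 1$, $\overline{\alpha} = 3$, $\alpha = 2$, chosen only for illustration): with $p = (\overline{\alpha} - \alpha)/(\overline{\alpha} - \underline{\alpha})$, the mixture $p\,\underline{\alpha} + (1-p)\,\overline{\alpha}$ indeed has mean $\alpha$.

```python
def mixture_weight(alpha, alpha_lo, alpha_hi):
    """Weight p on the lower fixed point so that the Cesaro limit
    p*L(A_lo) + (1-p)*L(A_hi) has mean alpha (alpha_lo < alpha < alpha_hi)."""
    return (alpha_hi - alpha) / (alpha_hi - alpha_lo)

# Hypothetical values: fixed points of mean 1 and 3, input of mean 2.
p = mixture_weight(2.0, 1.0, 3.0)
mean = p * 1.0 + (1 - p) * 3.0
print(p, mean)  # 0.5 2.0
```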
+
+A consequence of the above argument is that any convergent subsequence of $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i)$ must converge weakly to $p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. Recalling an argument of Section 5, the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i), k \in \mathbb{N}^*)$ is tight, hence sequentially compact. This implies that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. $\square$
+
+The previous results characterize $\mathcal{S}$ to a certain extent. We believe that more is true.
+---PAGE_BREAK---
+
+CONJECTURE 6.6. For any $\alpha > \beta = E[S(0, 0)]$, there exists an ergodic fixed point of mean $\alpha$. That is, $\mathcal{S} = (\beta, +\infty)$.
+
+It is possible to show that $\mathcal{S}$ is equal to the image of the derivative of $\gamma$ defined in Theorem 4.1. (Since $\gamma$ is concave, its derivative $\gamma'$ is continuous except at a countable number of points. At the points of discontinuity, we consider that both the left and the right-hand limits belong to the image.) Hence the conjecture is true if the function $\gamma$ has a continuous derivative. However, we have not been able to prove this. The function $\gamma$ defines the limit shape of an oriented last-passage time percolation model on $\mathbb{N}^2$ with weights $(S(i, j))_{i,j}$ on the lattice points; see [2, 12, 17]. Establishing the smoothness of the limit shape in percolation models is usually a difficult question.
+
+**7. Complements.** In proving Theorem 5.1, an essential step was to establish the identity (28): $\tilde{D} = \Phi(\hat{A}, \hat{S})$. This can be rephrased as the weak continuity of the operator $\Phi_\sigma$ of an i.i.d. queue on the converging subsequences of the Cesaro averages of the laws of $A^k$. In fact a much stronger result holds:
+
+**THEOREM 7.1.** *For a stationary queue defined as in Section 3.2, the operator $\Phi_\sigma$ is weakly continuous on $\mathcal{M}_s(\mathbb{R}_+^Z)$.*
+
+Theorem 7.1 is a generalization of a result due to Borovkov ([4], Chapter 11 or [5], Chapter 4); see also [6]. Borovkov proves that for an ergodic queue, $\Phi_\sigma$ is weakly continuous on $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$ for $\alpha > \beta = E[S(0)]$. The set $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$ is mapped into itself by $\Phi_\sigma$. However, it is not convex. Its convexification is the set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ defined in (20). The set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ is not weakly closed [as can be seen by considering $(\xi_n)_n$ defined in (36)]. Its closure is the set $\bigcup_{x \le \alpha} \mathcal{M}_s^x(\mathbb{R}_+^Z)$. One could hope to establish the existence of fixed points via a fixed point theorem, which requires a convex, compact set $\mathcal{C}$ mapped into itself by $\Phi_\sigma$.
+---PAGE_BREAK---
+
+Since $\Phi_{\sigma}(\mu) \geq_{\text{st}} \sigma$ for all $\mu$, we deduce the following natural and “minimal” candidate for $\mathcal{C}$:
+
+$$ \mathcal{C} = \Big( \bigcup_{x \leq \alpha} \mathcal{M}_s^x(\mathbb{R}_+^Z) \Big) \cap \{\mu \mid \mu \geq_{\text{st}} \sigma\}. $$
+
+It is easily checked that $\mathcal{C}$ is compact, convex, and mapped into itself by $\Phi_{\sigma}$. We therefore conclude that there exists a fixed point in $\mathcal{C}$. The problem is that $\mathcal{C}$ is too large: it contains the trivial fixed point $\sigma$, and we have no way to assert the existence of a nontrivial fixed point.
+
+Building on the above idea, one could try the same approach with another topology on $\mathcal{M}_s(\mathbb{R}_+^Z)$: the one induced by the $\bar{\rho}$ distance defined in (16). According to Theorem 4.2, the map $\Phi_{\sigma}$ is 1-Lipschitz on $\mathcal{M}_s(\mathbb{R}_+^Z)$, hence continuous. However, there is no clear way to build a compact and convex set on which to work. Indeed, let $\xi_n \in \mathcal{M}_e^1(\mathbb{R}_+^Z)$ be the distribution of the periodic process whose period is given by
+
+$$ (36) \qquad (\underbrace{0, \dots, 0}_{n}, \underbrace{2, \dots, 2}_{n}). $$
+
+It is easy to see that $(\xi_n)_n$ is not sequentially compact in $\mathcal{M}_s(\mathbb{R}_+^Z)$ for the $\bar{\rho}$ topology. Indeed, we have $\xi_n \xrightarrow{w} \xi$, where $\xi$ is defined by $P\{\xi = (0)^{\mathbb{Z}}\} = P\{\xi = (2)^{\mathbb{Z}}\} = 1/2$. Since convergence in the $\bar{\rho}$ topology implies weak convergence, if $(\xi_n)_n$ admits a subsequential limit in the $\bar{\rho}$ topology, then it has to be $\xi$. However, it is easy to check that $\bar{\rho}(\xi_n, \xi) = 1$ for all $n$.
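The structure of the counterexample can be made concrete (an illustrative sketch, not part of the proof): one period of the process behind $\xi_n$ is $n$ zeros followed by $n$ twos, so the mean is $1$, while for $n$ much larger than a fixed window width, a uniformly placed window is, with probability about $1/2$ each, all zeros or all twos. This is exactly the weak limit $\xi$.

```python
def period(n):
    """One period of the process behind xi_n: n zeros followed by n twos."""
    return [0.0] * n + [2.0] * n

def window(n, start, width):
    """Window of the periodic sequence, starting at `start` (mod 2n)."""
    per = period(n)
    return [per[(start + i) % (2 * n)] for i in range(width)]

n = 100
per = period(n)
print(sum(per) / len(per))   # mean inter-arrival time: 1.0
# A short window falls, with probability ~1/2 each, in the zero block or in
# the two block: hence the weak limit puts mass 1/2 on (0)^Z and 1/2 on (2)^Z.
print(window(n, 10, 5), window(n, n + 10, 5))
```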
+
+**Acknowledgment.** The authors would like to thank Tom Kurtz for a very careful reading and in particular for suggesting a simplification of the original proof of Theorem 5.1. This has led to an important shortening and overall improvement of the paper.
+
+## REFERENCES
+
+[1] ANANTHARAM, V. (1993). Uniqueness of stationary ergodic fixed point for a $\cdot/M/K$ node. *Ann. Appl. Probab.* **3** 154–172. [Correction (1994) *Ann. Appl. Probab.* **4** 607.]
+
+[2] BACCELLI, F., BOROVKOV, A. and MAIRESSE, J. (2000). Asymptotic results on infinite tandem queueing networks. *Probab. Theory Related Fields* **118** 365–405.
+
+[3] BILLINGSLEY, P. (1968). *Convergence of Probability Measures*. Wiley, New York.
+
+[4] BOROVKOV, A. (1976). *Stochastic Processes in Queueing Theory*. Springer, Berlin. [Russian edition (1972), Nauka, Moscow.]
+
+[5] BOROVKOV, A. (1984). *Asymptotic Methods in Queueing Theory*. Wiley, New York. [Russian edition (1980), Nauka, Moscow.]
+
+[6] BRANDT, A., FRANKEN, P. and LISEK, B. (1990). *Stationary Stochastic Models*. Wiley, New York.
+
+[7] BURKE, P. (1956). The output of a queueing system. *Oper. Res.* **4** 699–704.
+
+[8] CHANG, C. S. (1994). On the input-output map of a $G/G/1$ queue. *J. Appl. Probab.* **31** 1128–1133.
+---PAGE_BREAK---
+
+[9] DALEY, D. and ROLSKI, T. (1992). Finiteness of waiting-time moments in general stationary single-server queues. *Ann. Appl. Probab.* **2** 987–1008.
+
+[10] DUDLEY, R. (1989). *Real Analysis and Probability*. Wadsworth & Brooks/Cole, Belmont, CA.
+
+[11] ETHIER, S. and KURTZ, T. (1986). *Markov Processes: Characterization and Convergence*. Wiley, New York.
+
+[12] GLYNN, P. and WHITT, W. (1991). Departures from many queues in series. *Ann. Appl. Probab.* **1** 546–572.
+
+[13] GRAY, R. (1988). *Probability, Random Processes, and Ergodic Properties*. Springer, Berlin.
+
+[14] KAMAE, T., KRENGEL, U. and O'BRIEN, G. L. (1977). Stochastic inequalities on partially ordered spaces. *Ann. Probab.* **5** 899–912.
+
+[15] LOYNES, R. (1962). The stability of a queue with non-independent interarrival and service times. *Proc. Cambridge Philos. Soc.* **58** 497–520.
+
+[16] MAIRESSE, J. and PRABHAKAR, B. (1999). On the existence of fixed points for the $·/GI/1$ queue. LIAFA Research Report 99/25, Université Paris 7.
+
+[17] MARTIN, J. (2002). Large tandem queueing networks with blocking. *Queueing Systems Theory Appl.* **41** 45–72.
+
+[18] MOUNTFORD, T. and PRABHAKAR, B. (1995). On the weak convergence of departures from an infinite sequence of $·/M/1$ queues. *Ann. Appl. Probab.* **5** 121–127.
+
+[19] PRABHAKAR, B. (2003). The attractiveness of the fixed points of a $·/GI/1$ queue. *Ann. Probab.* **31** 2237–2269.
+
+[20] RUDIN, W. (1991). *Functional Analysis*, 2nd ed. McGraw-Hill, New York.
+
+[21] STOYAN, D. (1984). *Comparison Methods for Queues and Other Stochastic Models*. Wiley, New York.
+
+[22] WHITT, W. (1980). Uniform conditional stochastic order. *J. Appl. Probab.* **17** 112–123.
+
+LIAFA
+UNIVERSITY DENIS DIDEROT
+CASE 7014
+2 PLACE JUSSIEU
+F-75251 PARIS CEDEX 05
+FRANCE
+
+E-MAIL: jean.mairesse@liafa.jussieu.fr
+
+DEPARTMENTS OF ELECTRICAL ENGINEERING
+AND COMPUTER SCIENCE
+
+STANFORD UNIVERSITY
+
+STANFORD, CALIFORNIA 94305-9510
+
+E-MAIL: balaji@stanford.edu
\ No newline at end of file
diff --git a/samples/texts_merged/6743834.md b/samples/texts_merged/6743834.md
new file mode 100644
index 0000000000000000000000000000000000000000..fdec9cfecea87a0ead2237590343f3d38b8622fa
--- /dev/null
+++ b/samples/texts_merged/6743834.md
@@ -0,0 +1,93 @@
+
+---PAGE_BREAK---
+
+# Solutions Complex Analysis Stein Shakarchi
+
+When people go to the ebook stores and search for a title shop by shop, shelf by shelf, it is truly problematic. This is why we provide the ebook compilations on this website. It will greatly ease you to find the guide **solutions complex analysis stein shakarchi** that you are looking for.
+
+By searching for the title, publisher, or authors of the guide you want, you can discover it rapidly. In the house, the workplace, or anywhere with a net connection can be the best place to read. If you aim to download and install the solutions complex analysis stein shakarchi, it is totally easy: we extend the link to purchase and download it.
+
+is one of the publishing industry's leading distributors, providing a comprehensive and impressively high-quality range of fulfilment and print services, online book reading and download.
+
+## Solutions Complex Analysis Stein Shakarchi
+
+SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY STEIN AND SHAKARCHI. Solution 3: $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$. There are $n$ solutions, as there should be, since we are finding the roots of a degree-$n$ polynomial in the algebraically closed field $\mathbb{C}$.
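The root formula can be checked mechanically (a minimal sketch, not from the text; the example $z^3 = 8$, i.e. $s = 8$, $\varphi = 0$, is chosen only for illustration):

```python
import cmath

def nth_roots(s, phi, n):
    """All solutions of z**n = s * exp(i*phi), s > 0:
    z_k = s**(1/n) * exp(i*(phi + 2*pi*k)/n), k = 0..n-1."""
    r = s ** (1.0 / n)  # the real n-th root of the positive number s
    return [r * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n) for k in range(n)]

# z**3 = 8: the roots are 2, -1 + i*sqrt(3), -1 - i*sqrt(3)
roots = nth_roots(8.0, 0.0, 3)
for z in roots:
    assert abs(z**3 - 8.0) < 1e-9  # each root really solves the equation
print(roots)
```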
+
+## SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY ...
+
+Chapter 1. Preliminaries to Complex Analysis: 1.1 Complex numbers and the complex plane 1; 1.1.1 Basic properties 1; 1.1.2 Convergence 5; 1.1.3 Sets in the complex plane 5; 1.2 Functions on the complex plane 8; 1.2.1 Continuous functions 8; 1.2.2 Holomorphic functions 8; 1.2.3 Power series 14; 1.3 Integration along curves 18; 1.4 Exercises 24. Chapter 2.
+---PAGE_BREAK---
+
+**Complex Analysis (Princeton Lectures in Analysis, Volume II)**
+
+Complex Analysis (Elias M. Stein, Rami Shakarchi)
+
+**(PDF) Complex Analysis (Elias M. Stein, Rami Shakarchi ...**
+
+solutions-complex-analysis-stein-shakarchi 1/1 Downloaded from datacenterdynamics.com.br on October 27, 2020 by guest [MOBI] Solutions Complex Analysis Stein Shakarchi Yeah, reviewing a book solutions complex analysis stein shakarchi could increase your close links listings. This is just one of the solutions for you to be successful.
+
+**Solutions Complex Analysis Stein Shakarchi ...**
+
+Stein and Shakarchi move from an introduction addressing Fourier series and integrals to in-depth considerations of complex analysis; measure and integration theory, and Hilbert spaces; and, finally, further topics such as functional analysis, distributions and elements of probability theory.
+
+**Stein And Shakarchi Complex Analysis Manual Solution ...**
+
+SOLUTIONS/HINTS TO THE EXERCISES FROM COMPLEX ANALYSIS BY STEIN AND SHAKARCHI. Solution 3: $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$.
+
+**solution to complex analysis stein shakarchi - Análise Complex**
+
+Solutions Complex Analysis Stein Shakarchi. Solution 3: $z^n = s e^{i\varphi}$ implies that $z = s^{1/n} e^{i(\varphi + 2\pi k)/n}$, where $k = 0, 1, \dots, n-1$ and $s^{1/n}$ is the real $n$th root of the positive number $s$; there are $n$ solutions, as there should be, since we are finding the roots of a degree-$n$ polynomial in the algebraically closed field $\mathbb{C}$. Fourier Analysis Solutions Stein Shakarchi; Stein Shakarchi Real Analysis Solutions.
+
+**Read Online Real Analysis Stein Shakarchi Solutions**
+
+Stein And Shakarchi Complex Analysis Manual Solution. ... The starting point is the simple idea of extending a function initially given for real values of the argument to one that is defined when
+---PAGE_BREAK---
+
+the argument is complex. ...
+
+**Stein Real Analysis Solution - costamagarakis.com**
+
+Fourier Analysis Solutions Stein Shakarchi The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi and published by Princeton University Press between 2003 and 2011.
+
+**Download Stein Shakarchi Real Analysis**
+
+and the textbook is Complex Analysis by Stein and Shakarchi (ISBN13: 978-0-691-11385-2). Note to students: it's nice to include the statement of the problems, but I leave that up to you. I am only skimming the solutions. I will occasionally add some comments or mention alternate solutions. If
+
+**Math 302: Solutions to Homework - Williams College**
+
+Princeton Lectures in Analysis. The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi and published by Princeton University Press between 2003 and 2011. They are, in order, Fourier Analysis: An Introduction; Complex Analysis; Real Analysis: Measure Theory, Integration, and Hilbert Spaces; and Functional Analysis: Introduction to Further Topics in Analysis.
+
+**Princeton Lectures in Analysis - Wikipedia**
+
+June 22nd, 2018 - Download and Read Stein Shakarchi Fourier Analysis Solutions Stein Shakarchi Fourier Analysis Solutions Give us 5 minutes and we will show you the best book to read today " COMPLEX ANALYSIS BY ELIAS M STEIN ANSWERS
+
+**Fourier Analysis Solutions Stein Shakarchi**
+
+Problem 4 (3.2 in Stein-Shakarchi) Integrate over the upper semicircular contour; the integral over the semicircular part is 0 since the degree of the denominator is greater than 2. Therefore the desired integral is just the sum of all residues that lie in the upper semicircular contour. The poles are the 4-th
+
+**Solution to Stein Complex Analysis | Holomorphic**
+---PAGE_BREAK---
+
+**Function ...**
+
+Numerous examples and applications throughout its four planned volumes, of which Complex Analysis is the second, highlight the far-reaching consequences of certain ideas in analysis to other fields...
+
+**Complex Analysis by Elias M. Stein, Rami Shakarchi - Books ...**
+
+Real Analysis: Measure Theory, Integration, and Hilbert Spaces
+Elias M. Stein and Rami Shakarchi. Real Analysis is the third volume in the Princeton Lectures in Analysis, a series of four textbooks that aim to present, in an integrated manner, the core areas of analysis. Here the focus is on the development of measure and...
+
+**Rami Shakarchi | Princeton University Press**
+
+Stein and Shakarchi Real Analysis Solution. The Princeton Lectures in Analysis is a series of four mathematics textbooks, each covering a different area of mathematical analysis. They were written by Elias M. Stein and Rami Shakarchi. Stein Real Analysis Solution - food.whistleblower.org
+
+**Real Analysis Stein Shakarchi Solutions**
+
+Harvard Mathematics Department : Home page
+
+**Harvard Mathematics Department : Home page**
+
+View for free the file Stein & Shakarchi - Complex Analysis - Solutions, uploaded for the Análise Complexa (Complex Analysis) course.
+Category: Exercise - 5 - 30060137
+
+Copyright code: d41d8cd98f00b204e9800998ecf8427e.
\ No newline at end of file
diff --git a/samples/texts_merged/6772016.md b/samples/texts_merged/6772016.md
new file mode 100644
index 0000000000000000000000000000000000000000..296df04ae04bd451b7e8d920d770984e53faf3c0
--- /dev/null
+++ b/samples/texts_merged/6772016.md
@@ -0,0 +1,219 @@
+
+---PAGE_BREAK---
+
+GEOMETRIC EVOLUTION PROBLEMS AND ACTION-MEASURES
+
+M. BULIGA
+
+1. INTRODUCTION
+
+Geometric evolution problems are connected to many interesting phenomena, such as ice melting, metal solidification, explosions, and damage mechanics. Any such problem counts a geometric object among its unknowns. The canonical example of a geometric evolution problem is the mean curvature flow of a surface. A more complex situation arises in the study of brittle crack propagation. The state of a brittle body is described by a displacement-crack pair, so the crack propagation problem has two unknowns. We have to suppose that, at any moment, the displacement has no discontinuities away from the crack. Moreover, the displacement is connected with the crack by the boundary conditions: these contain conditions such as unilateral contact of the lips of the crack.
+
+In most studies, fracture propagation is not recognized as having a geometric nature. The purpose of this paper is to formulate a general geometric evolution problem based on the notion of action-measure, introduced here. For particular choices of the action-measure we obtain formulations of the mean curvature flow or the brittle fracture propagation problems.
+
+2. ACTION MEASURES AND VISCOSITY SOLUTIONS
+
+($L$, $\le$, $\tau$) is a sequential topological ordered set (or t.o.s.) if ($L$, $\le$) is an ordered set and for any sequence $(\beta_h)_h$ in $L$, converging to some $\beta \in L$, if there exists $\alpha \in L$ such that $\beta_h \le \alpha$ for any $h$, then $\beta \le \alpha$.
+
+Let us consider $F : X \to L$, where $X$ is a topological space and $L$ is a
+sequential t.o.s. A minimal element of $F$ is any $x \in X$ such that for any
+$y \in X$, if $F(y) \le F(x)$ then $F(y) = F(x)$. Remark however that, due to the
+
+*Key words and phrases.* geometric evolution problems, viscosity solutions, brittle fracture mechanics, mean curvature flow.
+---PAGE_BREAK---
+
+lack of total ordering, a minimal element may not be a minimizer, i.e. even
+if $x \in X$ is a minimal element of $F$, it is not true that $F(x) \leq F(y)$ for any
+$y \in X$.
+
+A particular case of a t.o.s. is any space of measures. An action-measure is a function defined over a topological space with values in a space of measures. The direct method in the calculus of variations can be reformulated in this frame. In particular, if the space of measures is the topological dual of a space of functions, then the direct method can be written in a particular form. We leave to the reader the formulation of the general direct method and the reformulation of the theorem in this case.
+
+Action-measures are related to (first order) viscosity solutions (see [4], [5], [6]). Indeed, take a function
+
+$$
+H : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}
+$$
+
+$C^1$ in the first argument and positive one-homogeneous in the second. (Weaker assumptions may be taken.) Consider now $L$, the polar of $H$,
+
+$$
+L(x, p) = \sup \{ \langle p, q \rangle - H(x, q) : q \in \mathbb{R}^n \}
+$$
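As a numerical aside (a sketch under stated assumptions, not part of the paper): for the positively one-homogeneous choice $H(x,q) = |q|$, the polar is $L(x,p) = 0$ when $|p| \le 1$ and $+\infty$ otherwise. A grid approximation of the supremum shows this behaviour, with the grid bound standing in for $+\infty$.

```python
def polar(H, p, qs):
    """Grid approximation of L(p) = sup_q (<p, q> - H(q))."""
    return max(p * q - H(q) for q in qs)

H = abs  # H(q) = |q|: positive and one-homogeneous in q
qs = [0.1 * k for k in range(-100, 101)]  # grid on [-10, 10]

print(polar(H, 0.5, qs))  # 0.0: sup attained at q = 0 whenever |p| <= 1
print(polar(H, 2.0, qs))  # about 10: grows with the grid, the true sup is +inf
```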
+
+For any fixed $T > 0$ we define the set
+
+$$
+\begin{align*}
+A_{T} &= \{ c : \bar{\Omega} \times [0, T] \to \bar{\Omega} : c(x, \cdot) \in C^1([0, T]) \quad \forall x \in \Omega, \\
+&\quad c(\cdot, 0) = \mathrm{id}, \ c(x, T) \in \partial\Omega \quad \forall x \in \Omega \}
+\end{align*}
+$$
+
+and the function $F: A_T \to M(\Omega)$
+
+$$
+F(c)(B) = \int_B g(c(x,T)) \, dx + \int_B \int_0^T L(c(x,t), \dot{c}(x,t)) \, dt \, dx .
+$$
+
+Here $g$ is a positive function defined on $\partial\Omega$. This action-measure has minimal elements. Moreover it has minimizing elements. Let $c_0$ be any one of them. Then
+
+$$
+(1) \qquad F(c_0)(B) = \int_B u(x) \, dx \quad \forall B \in \mathcal{B}(\Omega)
+$$
+
+where *u* is the viscosity solution of the problem
+
+$$
+(2) \qquad H(x, \nabla u) = 0 , \quad u = g \text{ on } \partial\Omega .
+$$
+
+Notice that in this setting of the problem (2) the primary unknown is the map $c_0$. The viscosity solution of (2), that is $u$, is the Lebesgue density of the measure $F(c_0)$.
+
+Any function $c \in A_T$ can be identified with a path of deformations of $\Omega$ via $t \mapsto c_t(\cdot) := c(\cdot, t) : \bar{\Omega} \to \bar{\Omega}$. This fact leads us to formulate the following general problem:
+---PAGE_BREAK---
+
+Consider a space $M$ of curves $t \mapsto \phi_t : \Omega \to \Omega$ and an action measure $\Lambda: M \to \mathrm{Meas}(\Omega)$, where $\mathrm{Meas}(\Omega)$ is a space of scalar measures over $\Omega$. Find and describe, under suitable conditions over $M$ and $\Lambda$, the minimal elements of the action measure $\Lambda$.
+
+### 3. EVOLUTION DRIVEN BY DIFFEOMORPHISMS
+
+Diff$_0(\Omega)$ denotes the space of $C^\infty$ diffeomorphisms of $\Omega$ with compact support, that is, the set of all $C^\infty$ functions $\phi: \mathbb{R}^n \to \mathbb{R}^n$ such that $\phi^{-1} \in C^\infty$ and $\mathrm{supp}(\phi - \mathrm{id}) \subset \subset \Omega$. It is well known that any vector-field $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ (i.e. with compact support in $\Omega$) generates a one-parameter flow $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, solution of the problem $\dot{\phi}_t = \eta \circ \phi_t$, $\phi_0 = \mathrm{id}$, where $\circ$ denotes function composition.
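As an illustration (a hypothetical example, not from the paper): the flow equation $\dot{\phi}_t = \eta \circ \phi_t$ can be integrated numerically. We take the one-dimensional field $\eta(x) = -x$, which is not compactly supported but has the explicit flow $\phi_t(x) = x e^{-t}$, convenient for checking the integrator.

```python
import math

def flow(eta, x0, t, steps=1000):
    """Integrate phi' = eta(phi), phi(0) = x0, with the classical RK4 scheme."""
    h = t / steps
    phi = x0
    for _ in range(steps):
        k1 = eta(phi)
        k2 = eta(phi + 0.5 * h * k1)
        k3 = eta(phi + 0.5 * h * k2)
        k4 = eta(phi + h * k3)
        phi += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return phi

eta = lambda x: -x  # illustrative vector-field (not compactly supported)
x0, t = 1.5, 1.0
print(flow(eta, x0, t), x0 * math.exp(-t))  # the two values agree closely
```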
+
+Consider a sufficiently regular set $B \subset \Omega$. Let $\xi_B$ be the characteristic function of $B$. For any $\phi \in \mathrm{Diff}(\Omega)$ we have the equality $\xi_{\phi(B)} = \xi_B \circ \phi^{-1}$.
+
+A geometric evolution of the set $B$ is any curve $t \mapsto B(t)$, such that $B(0) = B$. A particular case of geometric evolution of $B$ is when $B(t)$ is isotopically equivalent to $B$. Such an evolution (which we call isotopic) can be obtained by considering a curve $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, $\phi_0 = id$. Any such curve induces a geometric evolution of $B$ by $B(t) = \phi_t(B)$. Therefore, this kind of geometric evolution of the set $B$ is equivalent to a curve in $\mathrm{Diff}_0(\Omega)$, with origin at $id$.
+
+We can make weaker assumptions upon the geometric evolution of $B$. In this paper we shall introduce the notion of geometric evolution driven by diffeomorphisms. The advantage of this notion is that potentially complex evolutions of $B$ are locally approximated by isotopic evolutions. We describe further what an evolution driven by diffeomorphisms is.
+
+The regularity assumptions upon the initial set $B$ are described first. $\mathcal{H}^k$ denotes the $k$-dimensional Hausdorff measure. We shall suppose that $B$ has Hausdorff dimension $k$. We suppose also that for any vector-field $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ the function $t \mapsto \xi_{\phi_t(B)}\mathcal{H}^k$ is differentiable with respect to $t$, where $\phi_t$ is the one-parameter flow generated by $\eta$. Moreover, this derivative is supposed to be absolutely continuous with respect to the measure $\mathcal{H}^{k-1}$.
+
+An evolution of $B$ driven by diffeomorphisms is a curve $t \mapsto B(t)$, $B(0) = B$, such that:
+
+i) $\frac{d}{dt} \xi_{B(t)} \mathcal{H}^k$ is absolutely continuous with respect to $\mathcal{H}^{k-1}$. The support of this measure is denoted by $\partial^* B(t)$ and is called the border of $B(t)$.
+
+ii) there is a curve $t \mapsto \eta(t) \in C_0^\infty(\Omega, R^n)$ such that for almost any $t$ we have the inequality of measures:
+
+$$ \frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta(t)}^{-1} \mathcal{H}^k $$
+---PAGE_BREAK---
+
+where $s \mapsto \phi_{s,\eta(t)}$ is the one-parameter flow generated by $\eta(t)$ and the derivative with respect to $s$ is taken at $s=0$.
+
+iii) the function $t \mapsto \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta(t)}^{-1} \mathcal{H}^k(\Omega)$ is measurable.
+
+iv) for any $t < t'$ we have $B(t) \subset B(t')$.
+
+Let us denote by $Bar^+(t, Q)$ the set of all $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ with compact support in $Q \subset \Omega$ which satisfy $\frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta}^{-1} \mathcal{H}^k$. Obviously, the set $Bar^+(t, Q)$ depends on the evolution $t \mapsto B(t)$.
+
+We have the following result: for almost any $t$ there is a nonnegative function $v(t)$, with support on $\partial^* B(t)$, called the normal velocity field, such that for any $\eta \in Bar^+(t, \Omega)$ we have
+
+$$ \frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq v(t) \mathcal{H}^{k-1} \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta}^{-1} \mathcal{H}^k . $$
+
+### 4. A GENERAL GEOMETRIC EVOLUTION PROBLEM
+
+Consider now a set $C \subset P(\Omega)$, which contains only regular closed sets $B \subset \Omega$, and let $M$ be a family of evolutions of an initial set $B_0 \in C$ driven by diffeomorphisms, such that for any $t$ and any curve $t \mapsto B(t) \in M$ we have $B(t) \in C$. Let us consider also a functional $E: C \to \mathbb{R}$ such that $E(B) \geq E(B')$ if $B \subset B'$. $E$ is smooth in the following sense: for any $B \in C$ and any one-parameter flow $t \mapsto \phi_{t,\eta}$ the function $t \mapsto E(\phi_{t,\eta}(B))$ is differentiable at $t=0$. This derivative will be denoted by $dE(B, \eta)$. Given a geometric evolution $t \mapsto B(t) \in M$, for any Borel set $Q \in \mathcal{B}(\Omega)$, the variation of $E$ at $B(t) \in C$ inside $Q$ is defined by the formula:
+
+$$ dE(B(t))(Q) = \sup \left\{ dE(B(t), \eta) : \exists \lambda > 0, \ \lambda\eta \in Bar^+(t, Q), \ d(\partial^* B(t), \phi_{1,\eta}(\partial^* B(t))) \leq 1 \right\}. $$
+
+Under suitable assumptions $-dE(B(t))$ is a positive measure.
+
+We introduce now the action-measure defined for any geometric evolution $t \mapsto B(t) \in M$ by the expression:
+
+$$ A(t \mapsto B(t))(Q) = \int_0^T \int_{\partial^* B(t) \cap Q} v(t) d\mathcal{H}^{k-1} dt + \int_0^T dE(B(t))(Q) dt . $$
+
+Notice that the first term of $A$ can be written as the variation of $\mathcal{H}^k(B(t))$ from $0$ to $T$. Remark also that we can consider time-dependent functionals $E = E(B, t)$, such that $E(B, t) \geq E(B', t)$ if $B \subset B'$.
+
+**Example 1.** Mean curvature flow. (see [1]) Let us take $k=n$ in the regularity assumptions, that is $B_0$ $n$-dimensional, and $E(B) = -\mathcal{H}^{n-1}(\partial^* B)$. Then any minimal element of the action measure $A$ defined above is a super-solution of the mean curvature flow problem, that is for almost any $t$ and
+---PAGE_BREAK---
+
+almost any $x \in \partial^{\ast}B(t)$ we have $v(t) \geq k(x,t)$, where $k(x,t)$ is the mean curvature of $\partial^{\ast}B(t)$ in $x$ (with the convention of positive curvature for spheres).
+
+**Example 2.** Brittle crack propagation. By a crack set in $\Omega$ we mean a closed, finite rectifiable set $B$. $\Omega$ represents the reference configuration and $\mathbf{u}: \bar{\Omega} \to \mathbb{R}^n$ is the deformation of a hyper-elastic body. The free energy density is $w(\nabla \mathbf{u})$; in the case of infinitesimal deformations $\mathbf{u}$ represents the displacement of the body and $w$ is a quadratic function of the symmetric gradient of $\mathbf{u}$.
+
+A path $t \mapsto \mathbf{v}(t)$ of deformations (or displacements) is given on $\partial\Omega$. The evolution of the body is supposed to be quasi-static. An initial crack set $B_0$ is present in the body. We are interested in the propagation of this crack under the path of imposed deformations. We introduce for this the following functional, defined for any crack set $B$ and any moment $t$:
+
+$$E(B, t) = \inf \left\{ \int_{\Omega} w(\nabla \mathbf{u}) dx : \mathbf{u} \in C^1(\bar{\Omega} \setminus B), \mathbf{u} = \mathbf{v}(t) \text{ on } \partial\Omega \setminus B \right\}.$$
+
+Our principle of brittle crack propagation states that the evolution of the initial crack $B_0$ is a minimal element of the action-measure:
+
+$$\Lambda(t \mapsto B(t))(Q) = G H^{n-1}(B(T) \cap Q) + \int_{0}^{T} dE(B(t), t)(Q) dt .$$
+
+The physical meaning of this principle is: choose the crack propagation $t \mapsto B(t)$ such that the energy consumed by the body in order to produce in $Q$ the crack growth $t \mapsto B(t) \cap Q$ is less than the energy released in $Q$ due only to crack propagation.
+
+In the particular case of infinitesimal deformations, if we take the constant curve $t \mapsto B_0(t) = B_0$ we see that $\Lambda(B_0(\cdot))(Q) = 0$ for any $Q$; therefore, for a minimal element, $\Lambda(B(\cdot))$ is a non-positive measure. Hence, in this case, a generalization of the Griffith criterion holds.
+
+In [2], [3] we have proposed a minimizing movement model of brittle crack propagation in infinitesimal deformations ([3], definitions 4.1 and 5.1). The model is presented here in a condensed form. Let us consider the set $M$ of all pairs $(\mathbf{u}, K)$ such that $K \subset \bar{\Omega}$ is a crack set, $\mathbf{u} \in C^1(\bar{\Omega} \setminus K, \mathbb{R}^n)$, and for $H^{n-1}$-almost any $x \in K$ the normal $\mathbf{n}(x)$ at $K$ in $x$ and the traces $\mathbf{u}^+(x), \mathbf{u}^-(x)$ exist.
+
+We define the functions
+
+$$J: M \times M \to \mathbb{R},$$
+
+$$J((\mathbf{u}, K), (\mathbf{v}, L)) = \int_{\Omega} w(\nabla \mathbf{v}) \, d\mathbf{x} + G H^{n-1}(L \setminus K),$$
+
+$$\Psi: [0, \infty) \times M \to \{0, +\infty\},$$
+---PAGE_BREAK---
+
+$$ \Psi(\lambda, (v, K)) = \begin{cases} 0 & \text{if } v = u_0(\lambda) \text{ on } \partial\Omega \setminus K \\ +\infty & \text{otherwise.} \end{cases} $$
+
+We consider the initial data $(u_0, K) \in M$ such that $\Psi(0, (u_0, K)) = 0$. For any $s \ge 1$ we define the sequences
+
+$$ k \in N \mapsto u^s(k), L^s(k), K^s(k), $$
+
+$(u^s(k), L^s(k)) \in M$ and $(u^s(k), K^s(k)) \in M$, recursively:
+
+i) $(u^s, K^s)(0) = (u_0, K)$, $L^s(0) = K$,
+
+ii) for any $k \in N$ $(u^s, L^s)(k+1) \in M$ minimizes the functional
+
+$$ (v, L) \in M \mapsto J((u^s, K^s)(k), (v, L)) + \Psi((k+1)/s, (v, L)) $$
+
+over $M$. $K^s(k+1)$ is defined by the formula:
+
+$$ K^s(k+1) = K^s(k) \cup L^s(k+1). $$
+
+$(u, L, K): [0, +\infty) \to M$ is an energy minimizing movement associated to $J$ with the constraint $\Psi$ and initial data $(u_0, K)$ if there is a diverging sequence $(s_i)$ such that for any $t > 0$ we have $u^{s_i}([s_i t]) \to u(t)$ in $L^2(\Omega, \mathbb{R}^n)$. $L(t)$ is called the active crack at the moment $t$ and
+
+$$ K(t) = \bigcup_{s \in [0, t]} L(s) $$
+
+is the total damaged region at the same moment.
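
The recursion i)–ii) above is an incremental energy minimization under an irreversibility constraint. A minimal numerical sketch on a scalar toy model — a single crack length $\ell$, a hypothetical elastic energy $(2t)^2/(1+\ell)$ standing in for $\int_\Omega w(\nabla v)\,dx$, and surface cost $G(\ell - \ell_k)$ standing in for $G H^{n-1}(L \setminus K)$ — could look as follows (all model choices are illustrative assumptions, not the paper's functional):

```python
# Toy minimizing-movement scheme: at each step k we minimize an
# elastic-plus-new-surface energy over crack lengths l >= l_prev,
# mimicking the irreversibility K^s(k+1) = K^s(k) U L^s(k+1).

G = 1.0                      # Griffith constant (toy value)

def elastic_energy(l, t):
    """Hypothetical stored elastic energy for crack length l at time t."""
    load = 2.0 * t           # imposed boundary displacement path v(t)
    return load ** 2 / (1.0 + l)

def step(l_prev, t, grid_size=2001, l_max=10.0):
    """One incremental minimization: argmin over l >= l_prev of
    elastic_energy(l, t) + G * (l - l_prev), by grid search."""
    best_l, best_j = l_prev, elastic_energy(l_prev, t)
    for i in range(grid_size):
        l = l_prev + (l_max - l_prev) * i / (grid_size - 1)
        j = elastic_energy(l, t) + G * (l - l_prev)
        if j < best_j:
            best_l, best_j = l, j
    return best_l

def evolve(s, T=1.0, l0=0.0):
    """Time-discrete evolution with step 1/s, as for the sequences u^s, K^s."""
    lengths = [l0]
    for k in range(int(T * s)):
        lengths.append(step(lengths[-1], (k + 1) / s))
    return lengths

lengths = evolve(s=20)
```

The crack stays put while the energy release rate is below $G$, then grows monotonically (irreversibility is enforced by searching only over $\ell \geq \ell_k$); for this convex toy energy the minimizer at time $t$ is $\ell = \max(\ell_k, 2t - 1)$.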
+
+We have the following result, which connects the two models of brittle crack propagation presented here.
+
+**Theorem.** Let us consider an energy minimizing brittle crack propagation $t \mapsto (u(t), L(t), K(t))$. Suppose that $t \mapsto K(t)$ is driven by diffeomorphisms. Then the curve $t \mapsto K(t)$ is a minimal element of the action-measure $\Lambda$ defined above, in the case of infinitesimal deformations.
+
+REFERENCES
+
+[1] L. Ambrosio, Geometric evolution problems, distance function and viscosity solutions, *Università di Pisa Preprint* 2.245.986, 1996
+
+[2] M. Buliga, Variational Formulations in Brittle Fracture Mechanics. PhD Thesis, Institute of Mathematics of the Romanian Academy, 1997
+
+[3] M. Buliga, Energy minimizing brittle crack propagation, *Journal of Elasticity*, (to appear), 1998
+
+[4] M.G. Crandall, P.L. Lions, Viscosity solutions of Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **277**, 1983, 1–43
+
+[5] M.G. Crandall, L.C. Evans, P.L. Lions, Some properties of viscosity solutions to Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **282**, 1984, 487–502
+
+[6] P.L. Lions, Generalized solutions of Hamilton-Jacobi equations, *Research Notes in Math*, **69**, Pitman, 1982
\ No newline at end of file
diff --git a/samples/texts_merged/6838080.md b/samples/texts_merged/6838080.md
new file mode 100644
index 0000000000000000000000000000000000000000..8572f82cb7ce3f0a8db77d293942a28393b5f51a
--- /dev/null
+++ b/samples/texts_merged/6838080.md
@@ -0,0 +1,1211 @@
+
+---PAGE_BREAK---
+
+# Unlinkable and Strongly Accountable Sanitizable Signatures from Verifiable Ring Signatures*
+
+Xavier Bultel¹,² and Pascal Lafourcade¹,²
+
+¹ CNRS, UMR 6158, LIMOS, F-63173 Aubière, France
+
+² Université Clermont Auvergne, BP 10448, 63000 Clermont-Ferrand, France
+
+**Abstract.** An *Unlinkable Sanitizable Signature* scheme (USS) allows a sanitizer to modify some parts of a signed message such that nobody can link the modified signature to the original one. A *Verifiable Ring Signature* scheme (VRS) allows users to sign messages anonymously within a group such that a user can prove *a posteriori* to a verifier that he is the signer of a given message. In this paper, we first revisit the notion of VRS: we improve the proof capabilities of the users, we give a complete security model for VRS and we give an efficient and secure scheme called EVeR. Our main contribution is GUSS, a generic USS based on a VRS scheme and an unforgeable signature scheme. We show that GUSS instantiated with EVeR and Schnorr's signature is twice as efficient as the best USS scheme in the literature. Moreover, we propose a stronger definition of accountability: a USS is accountable when the signer can prove whether a signature is sanitized. We formally define the notion of strong accountability, where the sanitizer can also prove the origin of a signature. We show that the notion of strong accountability is important in practice. Finally, we prove the security properties of GUSS (including strong accountability) and EVeR under the Decisional Diffie-Hellman assumption in the random oracle model.
+
+## 1 Introduction
+
+Sanitizable Signatures (SS) were introduced by Ateniese et al. [1], but similar primitives were independently proposed in [23]. In this primitive, a signer allows a proxy (called the sanitizer) to modify some parts of a signed message. For example, a magistrate wishes to delegate the power to summon someone to the court to his secretary. He signs the message "Franz is summoned to the court for an interrogation on Monday" and gives the signature to his secretary, where "Franz" and "Monday" are sanitizable and the other parts are fixed. Thus, in order to summon Joseph K. on Saturday in the name of the magistrate, the secretary can change the signed message into "Joseph K. is summoned to the court for an interrogation on Saturday".
+
+Ateniese et al. in [1] propose some applications of this primitive to privacy of health data, authenticated media streams and reliable routing information. They also introduced five security properties, formalized by Brzuska et al. in [4]:
+
+**Unforgeability:** no unauthorised user can generate a valid signature.
+
+* This research was conducted with the support of the “Digital Trust” Chair from the University of Auvergne Foundation.
+---PAGE_BREAK---
+
+**Immutability:** the sanitizer cannot transform a signature into a signature of an unauthorised message.
+
+**Privacy:** no information about the original message is leaked by a sanitized signature.
+
+**Transparency:** nobody can say if a signature is sanitized or not.
+
+**Accountability:** the signer can prove that a signature is sanitized or is the original one.
+
+Finally, in [6] the authors point out a previously unstudied but relevant property called *unlinkability*: a scheme is unlinkable when it is not possible to link a sanitized signature to the original one. The authors give a generic unlinkable scheme based on group signatures. In 2016, Fleischhacker et al. [16] gave a more efficient construction based on signatures with re-randomizable keys.
+
+On the other hand, ring signatures are a well-studied cryptographic primitive introduced by Rivest et al. in [22], where any user can sign anonymously within an ad-hoc group of users. Such a scheme is verifiable [21] when any user can prove a posteriori to a verifier that he is the signer of a given message. In this paper, we improve the proof properties of VRS, we give an efficient VRS scheme called EVeR and a generic unlinkable sanitizable signature scheme called GUSS that uses verifiable ring signatures. We also show that the usual definition of accountability is too weak for practical uses, and we propose a stronger definition.
+
+**Contributions:** Existing VRS schemes allow any user to prove that he is the signer of a given message. We extend the definition of VRS to allow a user to prove that he is not the signer of a given message. We give a formal security model for VRS that takes into account this property. We first extend the classical security properties of ring signatures to verifiable ring signatures, namely the *unforgeability* (no unauthorised user can forge a valid signature) and the *anonymity* (nobody can distinguish the signer in the group). In addition we define the *accountability* (a user cannot sign a message and prove that he is not the signer) and the *non-usurpability* (a user cannot prove that he is the signer of a message if it is not true, and a user cannot forge a message such that the other users cannot prove that they are not the signers). To the best of our knowledge, it is the first time that formal security models are proposed for VRS. We also design an efficient secure VRS scheme under the decisional Diffie-Hellman assumption in the random oracle model.
+
+The usual definition of accountability for SS considers that the signer can prove the origin of a signature (signer or sanitizer) using a proof algorithm such that:
+
+1. The signer cannot forge a signature together with a proof that the signature comes from the sanitizer.
+
+2. The sanitizer cannot forge a signature such that the proof algorithm accuses the signer.
+
+The proof algorithm requires the secret key of the signer. To show that this definition is too weak, we consider a dishonest signer who refuses to prove the origin of a litigious signature. The dishonest signer claims that he lost his secret key because of problems with his hard drive. There is no way to verify whether the signer is lying. Unfortunately, without his secret key, the signer cannot generate the proof for the litigious signature. Then nobody can judge whether the signature is sanitized or not, and there is a risk of wrongly accusing the honest sanitizer. To solve this problem, we add a second proof algorithm that allows the sanitizer to prove the origin of a signature. To achieve strong accountability, the two following additional properties are required:
+---PAGE_BREAK---
+
+1. The sanitizer cannot sanitize a signature $\sigma$ and prove that $\sigma$ is not sanitized.
+
+2. The signer cannot forge a signature such that the sanitizer proof algorithm accuses the sanitizer.
+
+The main contribution of this paper is to propose an efficient and generic unlinkable SS scheme called GUSS. This scheme is instantiated by a VRS and an unforgeable signature scheme. It is the first SS scheme that achieves strong accountability. We compare GUSS with the other schemes of the literature:
+
+**Brzuska et al. [6]** This scheme is based on group signatures. Our scheme is built on the same model, but it uses ring signatures instead of group signatures. The main advantage of group signatures is that the size of the signature is not proportional to the size of the group. However, for small groups, ring signatures are much more efficient than group signatures. Since the scheme of Brzuska et al. and GUSS use group/ring signatures for groups of two users, GUSS is much more practical for an equivalent level of genericity.
+
+**Fleischhacker et al. [16]** This scheme is based on signatures with re-randomizable keys. It is generic; however, it uses different tools that must have special properties to be compatible with each other. To the best of our knowledge, it is the most efficient scheme in the literature. GUSS instantiated with EVeR and Schnorr's signature is twice as efficient as the best instantiation of this scheme. In Fig. 1, we compare the efficiency of each algorithm of our scheme and the scheme of Fleischhacker et al.
+
+**Lai et al. [19]** Recently, Lai et al. proposed a USS that is secure in the standard model; however, it uses pairings and is much less efficient than the scheme of Fleischhacker et al. in the random oracle model, and thus much less efficient than our scheme. In their paper [19], Lai et al. give a comparison of the efficiency of the three schemes of the literature.
+
+| | SiGen | SaGen | Sig | San | Ver | SiProof | SiJudge | Total | pk | spk | sk | ssk | σ | π |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| [16] | 7 | 1 | 15 | 14 | 17 | 23 | 6 | 73 | 7 | 1 | 14 | 1 | 14 | 4 |
+| GUSS | 2 | 1 | 8 | 7 | 10 | 3 | 2 | 36 | 2 | 1 | 2 | 1 | 12 | 4 |
+
+**Fig. 1.** Comparison of GUSS and the scheme of Fleischhacker et al. The first eight columns give the number of exponentiations in each algorithm of both schemes, namely the key generation algorithms of the signer (SiGen) and the sanitizer (SaGen), the signature algorithm (Sig), the sanitize algorithm (San), the verification algorithm (Ver), the proof algorithm (SiProof) and the judge algorithm (SiJudge), together with the total (Total). The last six columns give respectively the size of the public key of the signer (pk) and the sanitizer (spk), the size of the secret key of the signer (sk) and the sanitizer (ssk), the size of a signature ($\sigma$) and the size of a proof ($\pi$) outputted by SiProof. Sizes are measured in elements of a group $G$ of prime order. As in [16], for the sake of clarity, we do not distinguish between elements of $G$ and elements of $\mathbb{Z}_p^*$. We consider the best instantiation of the scheme of Fleischhacker et al. given in [16].
+
+**Related works:** Sanitizable Signatures (SS) were first introduced by Ateniese et al. [1]. Later, Brzuska et al. gave formal security definitions [5] for unforgeability, immutability, privacy, transparency and accountability. Unlinkability was introduced and
+---PAGE_BREAK---
+
+formally defined by Brzuska et al. in [6]. In [7], Brzuska et al. introduce an alternative definition of accountability called *non-interactive public accountability*, where the capability to prove the origin of a signature is given to a third party. One year later, the same authors proposed a stronger definition of unlinkability [8] and designed a scheme that is both strongly unlinkable and non-interactively publicly accountable. However, non-interactive public accountability is not compatible with transparency. In this paper, we focus on schemes that are unlinkable, transparent and interactively accountable. To the best of our knowledge, there are only three schemes with these three properties, i.e. [6, 16, 19].
+
+Some works focus on other properties of SS that we do not consider here, such as SS with multiple sanitizers [10], or SS where the power of the sanitizer is limited [9]. Finally, there exist other primitives that solve related but different problems, such as homomorphic signatures [18], redactable signatures [3] or proxy signatures [17]. Differences between these primitives and sanitizable signatures are detailed in [16].
+
+On the other hand, *Ring Signatures* (RS) [22] were introduced by Rivest et al. in 2001 and *Verifiable Ring Signatures* (VRS) [21] were introduced in 2003 by Lv. RS allow the users to sign anonymously within a group, and VRS allow a user to prove that he is the signer of a given message. To the best of our knowledge, even if several VRS have been proposed [12, 24], there is no security model for this primitive in the literature. Convertible ring signatures [20] are very close to verifiable ring signatures: they allow the signer of an anonymous (ring) signature to transform it into a standard signature (*i.e.* a de-anonymized signature). They can be used as verifiable ring signatures, because the de-anonymized signature can be viewed as a proof that the user is the signer of a given message. However, in this paper we propose a stronger definition of VRS where a user can also prove that he is *not* the signer of a message, and this property cannot be achieved using convertible ring signatures.
+
+A *List Signature* scheme (LS) [11] is a kind of RS that has the following property: if a user signs two messages for the same *event-id*, then it is possible to link these signatures and the user identity is publicly revealed. It can be used to design a VRS in our model: to prove whether he is the signer of a given message, the user signs a second message using the same event-id. If the two signatures are linked, then the judge is convinced that the user is the signer, else he is convinced that the user is not the signer. However, LS require security properties that are too strong for VRS (linkability and traceability), and this would result in less efficient schemes.
+
+**Outline:** In Section 2, we present the formal definitions and the security models for both verifiable ring signatures and unlinkable sanitizable signatures. In Section 3, we present our two schemes EVeR and GUSS, before concluding in Section 4. Moreover, we recall in Appendix A the standard cryptographic definitions used in this paper, namely the DDH assumption, deterministic digital signatures (DS), Schnorr's signature and non-interactive zero-knowledge proofs (NIZKP).
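
For reference, Schnorr's signature (recalled in Appendix A) can be sketched as follows; the group parameters below are deliberately tiny toy values, chosen only so the example runs, and are nowhere near secure:

```python
import hashlib
import secrets

# Toy Schnorr signature in the subgroup of order q inside Z_p^*,
# with p = 2q + 1 and q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*args):
    """Hash to Z_q (random-oracle stand-in)."""
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return pow(g, x, p), x                # (public key y, secret key x)

def sign(m, x):
    k = secrets.randbelow(q - 1) + 1      # per-signature nonce
    r = pow(g, k, p)
    e = H(r, m)
    s = (k + e * x) % q
    return (e, s)

def verify(m, y, sig):
    e, s = sig
    # g^s * y^{-e} = g^{k + e*x - e*x} = g^k = r for an honest signature
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r, m) == e

y, x = keygen()
sig = sign("Joseph K. is summoned on Saturday", x)
```

The `pow(y, -e, p)` call uses Python's modular inverse for negative exponents (Python 3.8+).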
+---PAGE_BREAK---
+
+# 2 Formal Definitions
+
+## 2.1 Verifiable Ring Signatures
+
+We give the formal definitions and security models of Verifiable Ring Signatures (VRS). A VRS is a ring signature scheme where a user can prove to a judge whether he is the signer of a message or not. It is composed of 6 algorithms. $V.Init$, $V.Gen$, $V.Sig$ and $V.Ver$ are defined as in the usual ring signature definitions. $V.Gen$ generates public and private keys. $V.Sig$ anonymously signs a message according to a set of public keys. $V.Ver$ verifies the validity of a signature. A VRS has two additional algorithms: $V.Proof$ allows a user to prove whether he is the signer of a message or not, and $V.Judge$ allows anybody to verify the proofs outputted by $V.Proof$.
+
+**Definition 1 (Verifiable Ring Signature (VRS)).** A Verifiable Ring Signature scheme is a tuple of 6 algorithms defined by:
+
+* $V.Init(1^k)$: It returns a setup value *init*.
+* $V.Gen(init)$: It returns a pair of signer public/private keys ($pk, sk$).
+* $V.Sig(L, m, sk)$: This algorithm computes a signature $\sigma$ using the key $sk$ for the message $m$ according to the set of public keys $L$.
+* $V.Ver(L, m, \sigma)$: It returns a bit $b$: if the signature $\sigma$ of $m$ is valid according to the set of public keys $L$ then $b = 1$, else $b = 0$.
+* $V.Proof(L, m, \sigma, pk, sk)$: It returns a proof $\pi$ for the signature $\sigma$ of $m$ according to the set of public keys $L$.
+* $V.Judge(L, m, \sigma, pk, \pi)$: It returns a bit $b$ or the bottom symbol $\perp$: if $b = 1$ (resp. 0) then $\pi$ proves that $\sigma$ comes from (resp. does not come from) the signer corresponding to the public key $pk$. It outputs $\perp$ when the proof is not well formed.
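
For intuition about the $V.Sig$/$V.Ver$ pair, here is a toy discrete-log ring signature in the style of Abe, Ohkubo and Suzuki. It is only an illustrative sketch with insecure toy parameters, not the EVeR scheme of Section 3, and it omits the $V.Proof$/$V.Judge$ algorithms:

```python
import hashlib
import secrets

# Toy ring signature for a ring L = [y_0, ..., y_{n-1}] of public keys
# in the order-q subgroup of Z_p^*; insecure toy parameters.
p, q, g = 2039, 1019, 4

def H(L, m, r):
    """Hash ring, message and commitment to a challenge in Z_q."""
    data = ("|".join(map(str, L)) + "|" + m + "|" + str(r)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def gen():
    x = secrets.randbelow(q - 1) + 1
    return pow(g, x, p), x

def ring_sign(L, m, j, x):
    """Signer at index j holds x with L[j] == g^x mod p."""
    n = len(L)
    c, s = [0] * n, [0] * n
    a = secrets.randbelow(q - 1) + 1
    c[(j + 1) % n] = H(L, m, pow(g, a, p))
    i = (j + 1) % n
    while i != j:                          # simulate every other ring member
        s[i] = secrets.randbelow(q)
        r = (pow(g, s[i], p) * pow(L[i], c[i], p)) % p
        c[(i + 1) % n] = H(L, m, r)
        i = (i + 1) % n
    s[j] = (a - c[j] * x) % q              # close the ring with the real key
    return (c[0], s)

def ring_verify(L, m, sig):
    c0, s = sig
    c = c0
    for i in range(len(L)):
        r = (pow(g, s[i], p) * pow(L[i], c, p)) % p
        c = H(L, m, r)
    return c == c0                         # the challenge chain must close

y0, x0 = gen()
y1, x1 = gen()
ring = [y0, y1]
sig = ring_sign(ring, "message", 1, x1)
```

Either member of the two-user ring can produce a signature that verifies identically, which is exactly the anonymity property the security model below formalizes.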
+
+*Unforgeability*: We first adapt the unforgeability property of ring signatures to VRS. Informally, a VRS is unforgeable when no adversary is able to forge a signature for a ring of public keys without any corresponding secret key. In this model, the adversary has access to a signature oracle $V.Sig(\cdot, \cdot, \cdot)$ (that outputs signatures of chosen messages for chosen users in the ring) and a proof oracle $V.Proof(\cdot, \cdot, \cdot, \cdot, \cdot)$ (that computes proofs as the algorithm $V.Proof$ for chosen signatures and chosen users). The adversary succeeds in the attack when he outputs a valid signature that was not already computed by the signature oracle.
+
+**Definition 2 (Unforgeability).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer.
+
+We consider two oracles:
+
+* $V.Sig(\cdot, \cdot, \cdot):$ On input $(L, l, m)$, if $1 \le l \le n$ then this oracle returns the signature $V.Sig(L, m, sk_l)$, else it returns $\perp$.
+* $V.Proof(\cdot, \cdot, \cdot, \cdot, \cdot):$ On input $(L, m, \sigma, l)$, if $1 \le l \le n$ then this proof oracle returns $V.Proof(L, m, \sigma, pk_l, sk_l)$, else it returns $\perp$.
+
+$P$ is *n*-unf secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_S$ is the number of calls to the oracle $V.Sig(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:
+---PAGE_BREAK---
+
+$\mathbf{Exp}_{P,\mathcal{A}}^{\text{n-unf}}(k)$:
+
+$$
+\begin{align*}
+\text{init} & \leftarrow \text{V.Init}(1^k) \\
+& \forall 1 \le i \le n, (\mathbf{pk}_i, \mathbf{sk}_i) \leftarrow \text{V.Gen}(\text{init}) \\
+& (L_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)} (\{\mathbf{pk}_i\}_{1 \le i \le n}) \\
+& \text{if } \text{V.Ver}(L_*, m_*, \sigma_*) = 1 \text{ and } L_* \subseteq \{\mathbf{pk}_i\}_{1 \le i \le n} \text{ and } \forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_* \\
+& \text{then return } 1, \text{ else return } 0
+\end{align*}
+$$
+
+P is unforgeable when it is *n*-unf secure for any polynomially bounded *n*.
+
+Anonymity: we adapt the anonymity property of ring signatures to VRS. Informally, a VRS is anonymous when no adversary is able to link a signature to the corresponding user. The adversary has access to the signature oracle and the proof oracle. During a first phase, he chooses two honest users in the ring, and in the second phase, he has access to a challenge oracle $\mathrm{LRSO}_b(d_0, d_1, \cdot, \cdot)$ that outputs signatures of chosen messages using the secret key of one of the two chosen users. The adversary succeeds in the attack if he guesses which of the two users was chosen by the challenge oracle. Note that the adversary cannot use the proof oracle on the signatures outputted by the challenge oracle.
+
+**Definition 3 (Anonymity).** Let $P$ be a VRS of security parameter $k$, let $n$ be an integer.
+Let the following oracle be:
+
+$\mathcal{LRSO}_b(d_0, d_1, \cdot, \cdot):$ On input $(m, L)$, if $\{\mathbf{pk}_{d_0}, \mathbf{pk}_{d_1}\} \subseteq L$ then this oracle returns $\mathbf{V.Sig}(L, m, \mathbf{sk}_{d_b})$, else it returns $\bot$.
+
+$P$ is *n-ano* secure when for any polynomial time adversary $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$, the difference between $1/2$ and the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\mathbf{V.Sig}(\cdot, \cdot, \cdot)$ and $\mathbf{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ are defined as in Def. 2 and where $q_S$ (resp. $q_P$) is the number of calls to the oracle $\mathbf{V.Sig}(\cdot, \cdot, \cdot)$ (resp. $\mathbf{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$), $(L_i, m_i, \sigma_i, l_i)$ is the $i^{th}$ query sent to oracle $\mathbf{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)$ and $\sigma'_j$ is the $j^{th}$ signature outputted by the oracle $\mathcal{LRSO}_b(d_0, d_1, \cdot, \cdot)$:
+
+$\mathbf{Exp}_{P,\mathcal{A}}^{\text{n-ano}}(k)$:
+
+$$
+\begin{align*}
+\text{init} & \leftarrow \text{V.Init}(1^k) \\
+& \forall 1 \le i \le n, (\mathbf{pk}_i, \mathbf{sk}_i) \leftarrow \text{V.Gen}(\text{init}) \\
+& (d_0, d_1) \leftarrow \mathcal{A}_1^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)} (\{\mathbf{pk}_i\}_{1 \le i \le n}) \\
+& b \stackrel{\$}{\leftarrow} \{0, 1\} \\
+& b_* \leftarrow \mathcal{A}_2^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot), \text{LRSO}_b(d_0, d_1, \cdot, \cdot)} (\{\mathbf{pk}_i\}_{1 \le i \le n}) \\
+& \text{if } (b = b_*) \text{ and } (\forall i, j \in \{1, \ldots, \max(q_S, q_P)\}, (\sigma_i \neq \sigma'_j) \text{ or } (l_i \neq d_0 \text{ and } l_i \neq d_1)) \\
+& \text{then return } 1, \text{ else return } 0
+\end{align*}
+$$
+
+P is anonymous when it is *n*-ano secure for any polynomially bounded *n*.
+
+Accountability: We consider an adversary that has access to a proof oracle and a signature oracle. A VRS is accountable when no adversary is able to forge a signature $\sigma$ (that was not outputted by the signature oracle) together with a proof that he is not the signer of $\sigma$. Note that the ring of $\sigma$ must contain at most one public key that does not come from an honest user, thus the adversary knows at most one secret key that corresponds to a public key in the ring.
+
+**Definition 4 (Accountability).** Let $P$ be a VRS of security parameter $k$ and let $n$
+be an integer. $P$ is $n$-acc secure when for any polynomial time adversary $\mathcal{A}$, the
+---PAGE_BREAK---
+
+probability that $\mathcal{A}$ wins the following experiment is negligible, where $V.Sig(\cdot, \cdot, \cdot)$ and $V.Proof(\cdot, \cdot, \cdot, \cdot)$ are defined as in Def. 2 and where $q_S$ is the number of calls to the oracle $V.Sig(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:
+
+$$
+\begin{array}{l}
+\textbf{Exp}_{P,\mathcal{A}}^{\text{n-acc}}(k): \\
+\text{init} \leftarrow V.\text{Init}(1^k) \\
+\forall 1 \le i \le n, (\textit{pk}_i, \textit{sk}_i) \leftarrow V.\text{Gen}(\text{init}) \\
+(L_*, m_*, \sigma_*, \textit{pk}_*, \pi_*) \leftarrow \mathcal{A}^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot)}(\{\textit{pk}_i\}_{1 \le i \le n}) \\
+\quad \text{if } (L_* \subseteq \{\textit{pk}_i\}_{1 \le i \le n} \cup \{\textit{pk}_*\}) \text{ and } (\text{V.Ver}(L_*, m_*, \sigma_*) = 1) \text{ and} \\
+\quad \quad (\text{V.Judge}(L_*, m_*, \sigma_*, \textit{pk}_*, \pi_*) = 0) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+*P* is accountable when it is *n*-**acc** secure for any polynomially bounded *n*.
+
+Non-usurpability: We distinguish two experiments for this property:
+
+– The first experiment, denoted non-usu-1, considers an adversary that has access to a proof oracle and a signature oracle. Its goal is to forge a valid signature with a proof that the signer is another user in the ring. Since this property is not required to build our generic USS, we give the formal definition of this security experiment in Appendix B.
+
+– The second experiment, denoted non-usu-2, considers an adversary that has access to a proof oracle and a signature oracle and that receives the public key of an honest user as input. The goal of the adversary is to forge a signature $\sigma$ such that the proof algorithm run by the honest user returns a proof that $\sigma$ was computed by the honest user (i.e. the proof algorithm returns 1) or a non-valid proof (i.e. the proof algorithm returns $\perp$). Moreover, the signature $\sigma$ must not come from the signature oracle.
+
+**Definition 5 (Non-usurpability).** Let $P$ be a VRS of security parameter $k$ and let $n$ be an integer. $P$ is $n$-non-usu-$2$ secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $V.Sig(\cdot, \cdot, \cdot)$ and $V.Proof(\cdot, \cdot, \cdot, \cdot)$ are defined as in Def. 2 and where $q_S$ is the number of calls to the oracle $V.Sig(\cdot, \cdot, \cdot)$ and $\sigma_i$ is the $i^{th}$ signature outputted by this oracle:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,\mathcal{A}}^{\text{n-non-usu-2}}(k): \\
+\text{init} \leftarrow V.\text{Init}(1^k) \\
+(\textit{pk}, \textit{sk}) \leftarrow V.\text{Gen}(\text{init}) \\
+(L_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot)}(\textit{pk}) \\
+\pi \leftarrow V.\text{Proof}(L_*, m_*, \sigma_*, \textit{pk}, \textit{sk}) \\
+\text{if } (\text{V.Ver}(L_*, m_*, \sigma_*) = 1) \text{ and} \\
+\quad (\text{V.Judge}(L_*, m_*, \sigma_*, \textit{pk}, \pi) \neq 0) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+*P* is non-usurpable when it is both *n*-non-usu-1 (see Appendix B) and *n*-non-usu-2 secure for any polynomially bounded *n*.
+
+## 2.2 Sanitizable Signature
+
+We give the formal definition and security properties of the sanitizable signature primitive. Compared to previous definitions, where only the signer can prove the origin
+---PAGE_BREAK---
+
+of a signature, our definition introduces algorithms that allow the sanitizer to prove
+the origin of a signature. Moreover, in addition to the usual security models of [5], we
+present two new security experiments that improve the accountability definition.
+
+A SS scheme contains 10 algorithms. Init outputs the setup values. SiGen and SaGen generate respectively the signer and the sanitizer public/private keys. As in classical signature schemes, the algorithms Sig and Ver allow the users to sign a message and to verify a signature. However, signatures are computed using a sanitizer public key and an admissible function ADM. The algorithm San allows the sanitizer to transform a signature of a message $m$ according to a modification function MOD: if MOD is admissible according to the admissible function (i.e. $\text{ADM}(\text{MOD}) = 1$), this algorithm returns a signature of the message $\text{MOD}(m)$.
+
+SiProof allows the signer to prove whether a signature is sanitized or not. Proofs
+outputted by this algorithm can be checked by anybody using the algorithm SiJudge.
+Finally, algorithms SaProof and SaJudge have the same functionalities as SiProof
+and SiJudge, but the proof are computed from the secret parameters of the sanitizer
+instead of the signer.
+
+**Definition 6 (Sanitizable Signature (SS)).** A Sanitizable Signature scheme is a tuple of 10 algorithms defined as follows:
+
+Init(1^k): It returns a setup value init.
+
+SiGen(init): It returns a pair of signer public/private keys (pk, sk).
+
+SaGen(init): It returns a pair of sanitizer public/private keys (spk, ssk).
+
+Sig(m, sk, spk, ADM): This algorithm computes a signature σ using the key sk for the message m, the sanitizer key spk and the admissible function ADM. Note that we assume that ADM can be efficiently recovered from any signature.
+
+San(m, MOD, σ, pk, ssk): Let ADM be the admissible function associated with the signature σ. If $\text{ADM}(\text{MOD}) = 1$ then this algorithm returns a signature σ' of the message $m' = \text{MOD}(m)$ using the signature σ, the signer public key pk and the sanitizer secret key ssk. Else it returns ⊥.
+
+Ver(m, σ, pk, spk): It returns a bit b: if the signature σ of m is valid for the two public keys pk and spk then b = 1, else b = 0.
+
+SiProof(sk, m, σ, spk): It returns a signer proof $\pi_{si}$ for the signature $\sigma$ of m using the signer secret key sk and the sanitizer public key spk.
+
+SaProof(ssk, m, σ, pk): It returns a sanitizer proof $\pi_{sa}$ for the signature $\sigma$ of m using the sanitizer secret key ssk and the signer public key pk.
+
+SiJudge(m, σ, pk, spk, πsi): It returns a bit d or the bottom symbol ⊥: if $\pi_{si}$ proves that $\sigma$ comes from the signer corresponding to the public key pk then $d = 1$, else if $\pi_{si}$ proves that $\sigma$ comes from the sanitizer corresponding to the public key spk then $d = 0$, else the algorithm outputs ⊥.
+
+SaJudge(m, σ, pk, spk, πsa): It returns a bit d or the bottom symbol ⊥: if $\pi_{sa}$ proves that $\sigma$ comes from the signer corresponding to the public key pk then $d = 1$, else if $\pi_{sa}$ proves that $\sigma$ comes from the sanitizer corresponding to the public key spk then $d = 0$, else the algorithm outputs ⊥.
+
+As mentioned in the introduction, SS schemes have the following security properties: unforgeability, immutability, privacy, transparency and accountability. In [5], the authors show that if a scheme has the *immutability*, the *transparency* and the *accountability* properties, then it also has the *unforgeability* and the *privacy* properties. Hence we do not need to prove these two properties, and we do not recall their formal definitions.
+
+**Immutability:** A SS is immutable when no adversary is able to sanitize a signature without the corresponding sanitizer secret key, or to sanitize a signature using a modification function that is not admissible (i.e. $\text{ADM}(\text{MOD}) = 0$). To this end, the adversary has access to a signature oracle $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and a proof oracle $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$.
+
+**Definition 7 (Immutability).** We consider the two following oracles:
+
+$\text{Sig}(\cdot, \text{sk}, \cdot, \cdot):$ On input $(m, \text{ADM}, \text{spk})$, this oracle returns $\text{Sig}(m, \text{sk}, \text{spk}, \text{ADM})$.
+
+$\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot):$ On input $(m, \sigma, \text{spk})$, this oracle returns $\text{SiProof}(\text{sk}, m, \sigma, \text{spk})$.
+
+Let $P$ be a SS of security parameter $k$. $P$ is *Immut* secure (or immutable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \text{spk}_i)$ is the $i^{\text{th}}$ query asked to this oracle and $\sigma_i$ is the corresponding response:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{Immut}}(k): \\
+\quad init \leftarrow Init(1^k) \\
+\quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(init) \\
+\quad (\text{spk}_*, m_*, \sigma_*) \leftarrow A^{\text{Sig}(\cdot, \text{sk}, \cdot, \cdot), \text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)}(\text{pk}) \\
+\quad \quad \text{if } (\text{Ver}(m_*, \sigma_*, \text{pk}, \text{spk}_*) = 1) \text{ and } (\forall i \in \{1, \dots, q_{\text{Sig}}\}, (\text{spk}_* \neq \text{spk}_i) \text{ or } (\forall \text{ MOD such that } \text{ADM}_i(\text{MOD}) = 1, m_* \neq \text{MOD}(m_i))) \\
+\quad \quad \quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+**Transparency:** The transparency property guarantees that no adversary is able to distinguish whether a signature is sanitized or not. In addition to the signature oracle and the signer proof oracle, the adversary has access to a sanitize oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk})$ that sanitizes chosen signatures and a sanitizer proof oracle $\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot)$ that computes sanitizer proofs for given signatures. Moreover the adversary has access to a challenge oracle $\text{Sa}/\text{Si}(b, \text{pk}, \text{spk}, \text{sk}, \text{ssk}, \cdot, \cdot, \cdot)$ that depends on a randomly chosen bit $b$: this oracle signs a given message and sanitizes it; if $b=0$ then it outputs the original signature, else it outputs the sanitized signature. The adversary cannot use the proof oracles on the signatures output by the challenge oracle. To win the experiment, the adversary must guess $b$.
+
+**Definition 8 (Transparency).** We consider the following oracles:
+
+$\text{San}(\cdot, \cdot, \cdot, \cdot, \text{ssk}):$ On input $(m, \text{MOD}, \sigma, \text{pk})$, it returns $\text{San}(m, \text{MOD}, \sigma, \text{pk}, \text{ssk})$.
+
+$\text{SaProof}(\text{ssk}, \cdot, \cdot, \cdot):$ On input $(m, \sigma, \text{pk})$, this oracle returns $\text{SaProof}(\text{ssk}, m, \sigma, \text{pk})$.
+
+$\text{Sa}/\text{Si}(b, \text{pk}, \text{spk}, \text{sk}, \text{ssk}, \cdot, \cdot, \cdot):$ On input $(m, \text{ADM}, \text{MOD})$, if $\text{ADM}(\text{MOD}) = 0$, this oracle returns $\perp$. Else if $b=0$, this oracle returns $\text{Sig}(\text{MOD}(m), \text{sk}, \text{spk}, \text{ADM})$, else if $b=1$, this oracle returns $\text{San}(m, \text{MOD}, \text{Sig}(m, \text{sk}, \text{spk}, \text{ADM}), \text{pk}, \text{ssk})$.
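The branching of the challenge oracle Sa/Si can be sketched as follows. This is an illustrative stand-in, not part of the scheme: `sig`, `san`, `adm` and `mod` are placeholder callables playing the roles of Sig, San, ADM and MOD.

```python
def sa_si_oracle(b, sig, san, sk, ssk, pk, spk, m, adm, mod):
    """Sketch of the Sa/Si challenge oracle of Definition 8.

    sig/san stand in for Sig and San; adm(mod) plays the role of ADM(MOD).
    """
    if not adm(mod):
        return None  # stands for the bottom symbol: MOD is not admissible
    if b == 0:
        # Original branch: a fresh signature of the already-modified message
        return sig(mod(m), sk, spk, adm)
    # Sanitized branch: sign m, then sanitize the result with MOD
    return san(m, mod, sig(m, sk, spk, adm), pk, ssk)
```

In both branches the returned signature authenticates the same message $\text{MOD}(m)$, which is exactly why transparency can require the two outputs to be indistinguishable.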
+
+Let $P$ be a SS of security parameter $k$. $P$ is *Trans* secure (or transparent) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \text{sk}, \cdot, \cdot)$ and $\text{SiProof}(\text{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7, and where $S_{\text{Sa/Si}}$ (resp. $S_{\text{SiProof}}$ and $S_{\text{SaProof}}$) is the set of all signatures output by the oracle $\text{Sa/Si}$ (resp. sent to the oracles $\textit{SiProof}$ and $\textit{SaProof}$):
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{Trans}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{pk}, \mathit{sk}) \leftarrow \text{SiGen}(init) \\
+\quad (\mathit{spk}, \mathit{ssk}) \leftarrow \text{SaGen}(init) \\
+\quad b \stackrel{\$}{\leftarrow} \{0, 1\} \\
+\quad b' \leftarrow A^{\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot), \text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk}), \text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot), \text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot), \text{Sa/Si}(b, \mathit{pk}, \mathit{spk}, \mathit{sk}, \mathit{ssk}, \cdot, \cdot, \cdot)}(\mathit{pk}, \mathit{spk}) \\
+\quad \text{if } (b = b') \text{ and } (S_{\text{Sa/Si}} \cap (S_{\text{SiProof}} \cup S_{\text{SaProof}}) = \emptyset) \\
+\quad \quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+**Unlinkability:** The unlinkability property ensures that a sanitized signature cannot be linked with the original one. We consider an adversary that has access to the signature oracle, the sanitize oracle, and both the signer and the sanitizer proof oracles. Moreover, the adversary has access to a challenge oracle $\text{LRSan}(b, \mathit{pk}, \mathit{spk}, \mathit{ssk}, \cdot, \cdot)$ that depends on a bit $b$: this oracle takes as input two signatures $\sigma_0$ and $\sigma_1$, the two corresponding messages $m_0$ and $m_1$ and two modification functions $\text{MOD}_0$ and $\text{MOD}_1$ chosen by the adversary. If the two signatures have the same admissible function ADM, if $\text{MOD}_0$ and $\text{MOD}_1$ are admissible according to ADM and if $\text{MOD}_0(m_0) = \text{MOD}_1(m_1)$, then the challenge oracle sanitizes $\sigma_b$ using $\text{MOD}_b$ and returns the result. The goal of the adversary is to guess the bit $b$.
+
+**Definition 9 (Unlinkability).** We consider the following oracle:
+
+$\text{LRSan}(b, \mathit{pk}, \mathit{spk}, \mathit{ssk}, \cdot, \cdot):$ On input $((m_0, \text{MOD}_0, \sigma_0), (m_1, \text{MOD}_1, \sigma_1))$, if for all $i \in \{0, 1\}$,
+$\text{Ver}(m_i, \sigma_i, \mathit{pk}, \mathit{spk}) = 1$ and $\text{ADM}_0 = \text{ADM}_1$ and $\text{ADM}_0(\text{MOD}_0) = 1$ and
+$\text{ADM}_1(\text{MOD}_1) = 1$ and $\text{MOD}_0(m_0) = \text{MOD}_1(m_1)$, then this oracle returns
+$\text{San}(m_b, \text{MOD}_b, \sigma_b, \mathit{pk}, \mathit{ssk})$, else it returns $\perp$.
+
+Let $P$ be a SS of security parameter $k$. $P$ is *Unlink* secure (or unlinkable) when for any polynomial time adversary $\mathcal{A}$, the difference between $1/2$ and the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot)$ and $\text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7 and $\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk})$ and $\text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot)$ are defined as in Def. 8:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{Unlink}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{pk}, \mathit{sk}) \leftarrow \text{SiGen}(init) \\
+\quad (\mathit{spk}, \mathit{ssk}) \leftarrow \text{SaGen}(init) \\
+\quad b \stackrel{\$}{\leftarrow} \{0, 1\} \\
+\quad b' \leftarrow A^{\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot), \text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk}), \text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot), \text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot), \text{LRSan}(b, \mathit{pk}, \mathit{spk}, \mathit{ssk}, \cdot, \cdot)}(\mathit{pk}, \mathit{spk}) \\
+\quad \text{if } (b = b') \text{ then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+Accountability: The standard definition of accountability is split into two security experiments: sanitizer accountability and signer accountability. In the sanitizer accountability experiment, the adversary has access to the signature oracle and the signer proof oracle. Its goal is to forge a signature such that the signer proof algorithm returns a proof that this signature is not sanitized. To win the experiment, this signature must not come from the signature oracle.
+
+**Definition 10 (Sanitizer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is SaAcc-1 secure (or sanitizer accountable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot)$ and $\text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7, $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \mathit{spk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma_i$ is the corresponding response:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{SaAcc-1}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{pk}, \mathit{sk}) \leftarrow \text{SiGen}(init) \\
+\quad (\mathit{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}^{\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot), \text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot)}(\mathit{pk}) \\
+\quad \pi_{si}^* \leftarrow \text{SiProof}(\mathit{sk}, m_*, \sigma_*, \mathit{spk}_*) \\
+\quad \text{if } \forall i \in \{1,\dots,q_{\text{Sig}}\}, (\sigma_* \neq \sigma_i) \\
+\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \mathit{pk}, \mathit{spk}_*) = 1) \\
+\quad \quad \text{and } (\text{SiJudge}(m_*, \sigma_*, \mathit{pk}, \mathit{spk}_*, \pi_{si}^*) \neq 0) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+In the signer accountability experiment, the adversary knows the public key of the
+sanitizer and has access to the sanitize oracle and the sanitizer proof oracle. Its goal is
+to forge a signature together with a proof that this signature is sanitized. To win the
+experiment, this signature must not come from the sanitize oracle.
+
+**Definition 11 (Signer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is SiAcc-1 secure (or signer accountable) when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk})$ and $\text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot)$ are defined as in Def. 8 and where $q_{\text{San}}$ is the number of calls to the oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk})$, $(m_i, \text{MOD}_i, \sigma_i, \mathit{pk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma'_i$ is the corresponding response:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{SiAcc-1}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{spk}, \mathit{ssk}) \leftarrow \text{SaGen}(init) \\
+\quad (\mathit{pk}_*, m_*, \sigma_*, \pi_{si}^*) \leftarrow \mathcal{A}^{\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk}), \text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot)}(\mathit{spk}) \\
+\quad \text{if } \forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \sigma'_i) \\
+\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \mathit{pk}_*, \mathit{spk}) = 1) \\
+\quad \quad \text{and } (\text{SiJudge}(m_*, \sigma_*, \mathit{pk}_*, \mathit{spk}, \pi_{si}^*) = 0) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+Strong Accountability: Since our definition of sanitizable signatures provides a second proof algorithm for the sanitizer, we define two additional security experiments (for signer and sanitizer accountability) to ensure the soundness of the proofs computed by this algorithm. We say that a scheme is strongly accountable when it is signer and sanitizer accountable for both the signer and the sanitizer proof algorithms.
+
+Thus, in our second signer accountability experiment, we consider an adversary
+that has access to the sanitize oracle and the sanitizer proof oracle. Its goal is to forge
+a signature such that the sanitizer proof algorithm returns a proof that this signature is
+sanitized. To win the experiment, this signature must not come from the sanitize oracle.
+
+**Definition 12 (Strong Signer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is SiAcc-2 secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $q_{\text{San}}$ is the number of calls to the oracle $\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk})$, $(m_i, \text{MOD}_i, \sigma_i, \mathit{pk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma'_i$ is the corresponding response:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{SiAcc-2}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{spk}, \mathit{ssk}) \leftarrow \text{SaGen}(init) \\
+\quad (\mathit{pk}_*, m_*, \sigma_*) \leftarrow A^{\text{San}(\cdot, \cdot, \cdot, \cdot, \mathit{ssk}), \text{SaProof}(\mathit{ssk}, \cdot, \cdot, \cdot)}(\mathit{spk}) \\
+\quad \pi_{sa}^* \leftarrow \text{SaProof}(\mathit{ssk}, m_*, \sigma_*, \mathit{pk}_*) \\
+\quad \text{if } \forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \sigma'_i) \\
+\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \mathit{pk}_*, \mathit{spk}) = 1) \\
+\quad \quad \text{and } (\text{SaJudge}(m_*, \sigma_*, \mathit{pk}_*, \mathit{spk}, \pi_{sa}^*) \neq 1) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+P is strong signer accountable when it is both *SiAcc-1* and *SiAcc-2* secure.
+
+Finally, in our second sanitizer accountability experiment, we consider an adversary that knows the public key of the signer and has access to the signature oracle and the signer proof oracle. Its goal is to forge a signature together with a sanitizer proof that this signature is not sanitized. To win the experiment, this signature must not come from the signature oracle.
+
+**Definition 13 (Strong Sanitizer Accountability).** Let $P$ be a SS of security parameter $k$. $P$ is *SaAcc-2* secure when for any polynomial time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where $\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot)$ and $\text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot)$ are defined as in Def. 7, $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot)$, $(m_i, \text{ADM}_i, \mathit{spk}_i)$ is the $i^{th}$ query asked to this oracle and $\sigma_i$ is the corresponding response:
+
+$$
+\begin{array}{l}
+\mathbf{Exp}_{P,A}^{\text{SaAcc-2}}(k): \\
+\quad init \leftarrow \text{Init}(1^k) \\
+\quad (\mathit{pk}, \mathit{sk}) \leftarrow \text{SiGen}(init) \\
+\quad (\mathit{spk}_*, m_*, \sigma_*, \pi_{sa}^*) \leftarrow A^{\text{Sig}(\cdot, \mathit{sk}, \cdot, \cdot), \text{SiProof}(\mathit{sk}, \cdot, \cdot, \cdot)}(\mathit{pk}) \\
+\quad \text{if } \forall i \in \{1, \dots, q_{\text{Sig}}\}, (\sigma_* \neq \sigma_i) \\
+\quad \quad \text{and } (\text{Ver}(m_*, \sigma_*, \mathit{pk}, \mathit{spk}_*) = 1) \\
+\quad \quad \text{and } (\text{SaJudge}(m_*, \sigma_*, \mathit{pk}, \mathit{spk}_*, \pi_{sa}^*) = 1) \\
+\quad \text{then return } 1, \text{ else return } 0
+\end{array}
+$$
+
+P is strong sanitizer accountable when it is both SaAcc-1 and SaAcc-2 secure.
+
+# 3 Schemes
+
+## 3.1 An Efficient Verifiable Ring Signature: EVeR
+
+We present our VRS scheme called EVeR (for *Efficient Verifiable Ring signature*). It is based on the DDH assumption and uses a NIZKP of equality of two discrete logarithms out of $n$ elements. We show how to build this NIZKP. Let $G$ be a group of prime order $p$ and $n$ be an integer, and consider the following language:
+
+$$
+\mathcal{L}_n = \left\{ \{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n} \in (G^4)^n : \exists i \in \{1, \dots, n\}, \log_{g_i}(y_i) = \log_{h_i}(z_i) \right\}
+$$
+
+Consider the case $n = 1$. In [13], the authors present an interactive zero-knowledge proof of knowledge system for the language $\mathcal{L}_1$. It proves the equality of two discrete logarithms. For example, using $(h, z, g, y) \in \mathcal{L}_1$, a prover convinces a verifier that $\log_g(y) = \log_h(z)$. The witness used by the prover is $x = \log_g(y)$. This proof system is a *sigma protocol* in the sense that there are only three interactions: the prover sends a commitment, the verifier sends a challenge, and the prover returns a response.
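The three moves of this sigma protocol can be traced on a toy group; this is only an illustrative sketch with parameters far too small for security, and the bases and witness below are arbitrary values.

```python
import secrets

# Toy Schnorr group: P = 2q + 1 with P, q prime; g = 4 generates the order-q subgroup
P, q, g = 2039, 1019, 4

# The prover knows x with y = g^x and z = h^x (equal discrete logarithms)
h = pow(g, 57, P)
x = 444
y, z = pow(g, x, P), pow(h, x, P)

# 1. Commitment: the prover picks r and sends (R, S)
r = secrets.randbelow(q - 1) + 1
R, S = pow(g, r, P), pow(h, r, P)

# 2. Challenge: the verifier picks c at random
c = secrets.randbelow(q - 1) + 1

# 3. Response: the prover answers gamma = r + c*x mod q
gamma = (r + c * x) % q

# The verifier accepts iff both equations hold
assert pow(g, gamma, P) == R * pow(y, c, P) % P
assert pow(h, gamma, P) == S * pow(z, c, P) % P
```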
+
+To transform the proof system of $\mathcal{L}_1$ into a generic proof system of any $\mathcal{L}_n$, we use
+the generic transformation given in [14]. For any language $\mathcal{L}$ and any integer $n$, the
+authors show how to transform a proof that an element is in $\mathcal{L}$ into a proof that one out
+of $n$ elements is in $\mathcal{L}$, under the condition that the proof system is a sigma protocol. Note that the
+resulting proof system is also a sigma protocol.
+
+The final step is to transform it into a non-interactive proof system. We use the well-known Fiat-Shamir transformation [15]. This transformation outputs a non-interactive proof system from any interactive proof system that is a sigma protocol. The resulting proof system is complete, sound and zero-knowledge in the random oracle model. Finally, we obtain the following scheme.
+
+**Scheme 1 (LogEq$_n$)** Let $G$ be a group of prime order $p$, $H : \{0,1\}^* \to \mathbb{Z}_p^*$ be a hash function and $n$ be an integer. We define the NIZKP system $\text{LogEq}_n = (\text{LEprove}_n, \text{LEverif}_n)$ for $\mathcal{L}_n$ by:
+
+$\text{LEprove}_n(\{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n}, x)$: We denote by $j$ the integer such that $x = \log_{g_j}(y_j) = \log_{h_j}(z_j)$. This algorithm picks $r_j \leftarrow \mathbb{Z}_p^*$ and computes $R_j = g_j^{r_j}$ and $S_j = h_j^{r_j}$. For all $i \in \{1, \dots, n\}$ with $i \neq j$, it picks $c_i \leftarrow \mathbb{Z}_p^*$ and $\gamma_i \leftarrow \mathbb{Z}_p^*$, and computes $R_i = g_i^{\gamma_i}/y_i^{c_i}$ and $S_i = h_i^{\gamma_i}/z_i^{c_i}$. It computes $c = H(R_1||S_1||\dots||R_n||S_n)$. It then computes $c_j = c/(\prod_{i=1; i \neq j}^n c_i)$ and $\gamma_j = r_j + c_j \cdot x$. It outputs $\pi = (\{R_i, S_i, c_i, \gamma_i\}_{1 \le i \le n})$.
+
+$\text{LEverif}_n(\{(h_i, z_i, g_i, y_i)\}_{1 \le i \le n}, \pi)$: It parses $\pi = (\{R_i, S_i, c_i, \gamma_i\}_{1 \le i \le n})$. If $\prod_{i=1}^n c_i \neq H(R_1||S_1||\dots||R_n||S_n)$ then it returns 0. Else if there exists $i \in \{1, \dots, n\}$ such that $g_i^{\gamma_i} \neq R_i \cdot y_i^{c_i}$ or $h_i^{\gamma_i} \neq S_i \cdot z_i^{c_i}$ then it returns 0. Else it returns 1.
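The two algorithms can be sketched end to end on a toy group. This is an illustrative implementation only: the parameters are far too small for security, the hash is a stand-in for the random oracle, and the branch index `j` is passed explicitly although the paper recovers it from the witness.

```python
import hashlib
import secrets

# Toy Schnorr group: the order-q subgroup of Z_P^* (P = 2q + 1, both prime)
P, q, g = 2039, 1019, 4

def H(*elems):
    """Random-oracle stand-in: hash group elements to a nonzero scalar mod q."""
    data = "||".join(str(e) for e in elems).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q or 1

def le_prove(tuples, x, j):
    """LEprove_n sketch: tuples[i] = (h_i, z_i, g_i, y_i); witness x opens branch j."""
    n = len(tuples)
    R, S, c, gam = [0] * n, [0] * n, [0] * n, [0] * n
    # Simulate the n-1 branches for which no witness is known
    for i, (hi, zi, gi, yi) in enumerate(tuples):
        if i == j:
            continue
        c[i] = secrets.randbelow(q - 1) + 1
        gam[i] = secrets.randbelow(q - 1) + 1
        R[i] = pow(gi, gam[i], P) * pow(pow(yi, c[i], P), P - 2, P) % P
        S[i] = pow(hi, gam[i], P) * pow(pow(zi, c[i], P), P - 2, P) % P
    # Honest commitment on branch j
    hj, zj, gj, yj = tuples[j]
    r = secrets.randbelow(q - 1) + 1
    R[j], S[j] = pow(gj, r, P), pow(hj, r, P)
    # Fiat-Shamir challenge; fix c_j so that the product of all c_i equals it
    cc = H(*R, *S)
    prod = 1
    for i in range(n):
        if i != j:
            prod = prod * c[i] % q
    c[j] = cc * pow(prod, q - 2, q) % q
    gam[j] = (r + c[j] * x) % q
    return list(zip(R, S, c, gam))

def le_verif(tuples, proof):
    """LEverif_n sketch: challenge product plus both equations on every branch."""
    R = [t[0] for t in proof]
    S = [t[1] for t in proof]
    prod = 1
    for (_, _, ci, _) in proof:
        prod = prod * ci % q
    if prod != H(*R, *S):
        return 0
    for (hi, zi, gi, yi), (Ri, Si, ci, gami) in zip(tuples, proof):
        if pow(gi, gami, P) != Ri * pow(yi, ci, P) % P:
            return 0
        if pow(hi, gami, P) != Si * pow(zi, ci, P) % P:
            return 0
    return 1
```

On the honest branch the verification equation $g_j^{\gamma_j} = R_j \cdot y_j^{c_j}$ holds because $\gamma_j = r_j + c_j x$, while the simulated branches satisfy it by construction of $R_i$ and $S_i$.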
+
+**Theorem 1.** The NIZKP LogEq$_n$ is a proof of knowledge, moreover it is complete, sound, and zero-knowledge in the random oracle model.
+
+The proof of this theorem follows directly from [13], [14] and [15].
+Using this proof system, we build our VRS scheme called EVeR:
+
+**Scheme 2 (Efficient Verifiable Ring Signature (EVeR))** EVeR is a VRS defined by:
+
+V.Init(1^k): It generates a prime order group setup (G, p, g) and a hash function H :
+{0, 1}* → G. It returns a setup value init = (G, p, g, H).
+
+V.Gen(init): It picks sk ← Z*_p, computes pk = g^sk and returns a pair of signer public/private keys (pk, sk).
+
+V.Sig(L, m, sk): It picks r ← Z*_p, computes h = H(m||r) and z = h^sk, runs P ← LEprove_{|L|}({(h, z, g, pk_l)}_{pk_l∈L}, sk) and returns σ = (r, z, P).
+
+V.Ver(L, m, σ): It parses σ = (r, z, P), computes h = H(m||r) and returns b ← LEverif_{|L|}({(h, z, g, pk_l)}_{pk_l∈L}, P).
+
+V.Proof(L, m, σ, pk, sk): It parses σ = (r, z, P), computes h = H(m||r) and z̄ = h^sk, runs P̄ ← LEprove₁({(h, z̄, g, pk)}, sk) and returns π = (z̄, P̄).
+
+V.Judge($L, m, \sigma, \mathbf{pk}, \pi$): It parses $\sigma = (r, z, P)$ and $\pi = (\bar{z}, \bar{P})$, computes $h = H(m||r)$ and runs $b \leftarrow \text{LEverif}_1(\{(h, \bar{z}, g, \mathbf{pk})\}, \bar{P})$. If $b \neq 1$ then it returns $\perp$. Else, if $z = \bar{z}$ then it returns 1, else it returns 0.
+
+All users have an ElGamal key pair $(\mathbf{pk}, \mathbf{sk})$ such that $\mathbf{pk} = g^{\mathbf{sk}}$, where $g$ is a generator of a prime order group. To sign a message $m$ according to a set of public keys $L$ using her key pair $(\mathbf{pk}, \mathbf{sk})$, Alice chooses a random $r$ and computes $h = H(m||r)$ and $z = h^{\mathbf{sk}}$, where $H$ is a universal hash function. Alice produces a proof $\pi$ that there exists $\mathbf{pk}_l \in L$ such that $\log_g(\mathbf{pk}_l) = \log_h(z)$ using the NIZKP $\mathrm{LogEq}_{|L|}$, where $|L|$ denotes the cardinality of $L$. The signature is the triplet $(r, z, \pi)$. To verify a signature, it suffices to verify the proof $\pi$ according to $L$, $m$ and the other parts of the signature. To prove that she is the signer of the message $m$, Alice generates a proof that $\log_g(\mathbf{pk}) = \log_h(z)$ using the NIZKP $\mathrm{LogEq}_1$. Verifying this proof, a judge is convinced that $z = h^{\mathbf{sk}}$. We then consider a second signature $(r', z', \pi')$ of a message $m'$ produced from another key pair $(\mathbf{pk}', \mathbf{sk}')$. We set $h' = H(m'||r')$, and we recall that $z' = (h')^{\mathbf{sk}'}$. To prove that she is not the signer of $m'$, Alice computes $\bar{z}' = (h')^{\mathbf{sk}}$ and generates a proof that $\log_g(\mathbf{pk}) = \log_{h'}(\bar{z}')$. Since $\bar{z}' \neq z'$, this proves that $\log_g(\mathbf{pk}) \neq \log_{h'}(z')$, hence that Alice is not the signer of $(r', z', \pi')$.
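The disavowal step rests on pure exponent arithmetic. The following sketch (toy group, arbitrary illustrative keys, NIZKP layer omitted) shows why $\bar{z}' \neq z'$ convinces a judge:

```python
# Toy Schnorr group (P = 2q + 1, both prime; g = 4 has order q); values illustrative
P, q, g = 2039, 1019, 4

sk_alice, sk_bob = 321, 654            # two distinct secret keys
pk_alice = pow(g, sk_alice, P)

# Bob's signature on m': z' = h'^{sk_bob}, where h' = H(m'||r') is taken as given
h_prime = pow(g, 48, P)
z_prime = pow(h_prime, sk_bob, P)

# Alice computes z_bar = h'^{sk_alice} and would prove (with LogEq_1) that
# log_g(pk_alice) = log_{h'}(z_bar). Exponentiation is injective on the
# order-q subgroup, so z_bar != z' shows sk_alice is not the signing key of z'.
z_bar = pow(h_prime, sk_alice, P)
assert z_bar != z_prime               # Alice is provably not the signer of m'
```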
+
+**Theorem 2.** *EVeR is unforgeable, anonymous, accountable and non-usurpable under the DDH assumption in the random oracle model.*
+
+We give the intuition of the security properties; the proof of the theorem is given in Appendix C:
+
+**Unforgeability:** The scheme is unforgeable since nobody can prove that $\log_g(\mathbf{pk}_l) = \log_h(z)$ without the knowledge of $\mathbf{sk} = \log_h(z)$.
+
+**Anonymity:** Breaking the anonymity of such a signature is equivalent to breaking the DDH assumption. Indeed, to link a signature $z = h^{\mathbf{sk}}$ with the corresponding public key of Alice $\mathbf{pk} = g^{\mathbf{sk}}$, an attacker must solve the DDH problem on the instance $(\mathbf{pk}, h, z)$. Moreover, note that since the value $r$ randomizes the signature, it is not possible to link two signatures of the same message produced by Alice.
+
+**Accountability:** To break the accountability, an adversary must forge a valid signature (i.e. prove that there exists $\mathbf{pk}_l$ in the group such that $\log_g(\mathbf{pk}_l) = \log_h(z)$) and prove that he is not the signer (i.e. $\log_g(\mathbf{pk}) \neq \log_h(z)$ where $\mathbf{pk}$ is the public key chosen by the adversary). However, since the adversary does not know the secret keys of the other members of the group, it would have to break the soundness of $\mathrm{LogEq}$ to win the experiment, which is not possible.
+
+**Non-usurpability:** (non-usu-1) No adversary is able to forge a proof that he is the signer of a signature produced by another user, since this is equivalent to proving a false statement using a sound NIZKP. (non-usu-2) The proof algorithm run by an honest user with the public key $\mathbf{pk}$ returns a proof that this user is the signer of a given signature only if $\log_g(\mathbf{pk}) = \log_h(z)$. Since no adversary is able to compute $z$ such that $\log_g(\mathbf{pk}) = \log_h(z)$ without the corresponding secret key, no adversary is able to break the non-usurpability of EVeR.
+
+## 3.2 Our Unlinkable Sanitizable Signature Scheme: GUSS
+
+We present our USS scheme, which is built from a digital signature (DS) scheme and a VRS.
+
+**Scheme 3 (Generic Unlinkable Sanitizable Signature (GUSS))** Let $D$ be a deterministic digital signature scheme and $V$ be a verifiable ring signature scheme such that:
+$$D = (\mathcal{D.Init}, \mathcal{D.Gen}, \mathcal{D.Sig}, \mathcal{D.Ver}) \quad V = (\mathcal{V.Init}, \mathcal{V.Gen}, \mathcal{V.Sig}, \mathcal{V.Ver}, \mathcal{V.Proof}, \mathcal{V.Judge})$$
+GUSS instantiated with $(D, V)$ is a sanitizable signature scheme defined by:
+
+**Init(1k):** It runs $\text{init}_d \leftarrow \mathcal{D.Init}(1^k)$ and $\text{init}_v \leftarrow \mathcal{V.Init}(1^k)$, it returns $\text{init} = (\text{init}_d, \text{init}_v)$.
+
+**SiGen(init):** It parses $\text{init} = (\text{init}_d, \text{init}_v)$, runs $(pk_d, sk_d) \leftarrow \mathcal{D.Gen}(\text{init}_d)$ and $(pk_v, sk_v) \leftarrow \mathcal{V.Gen}(\text{init}_v)$, and returns $(pk, sk)$ where $pk = (pk_d, pk_v)$ and $sk = (sk_d, sk_v)$.
+
+**SaGen(init):** It parses $\text{init} = (\text{init}_d, \text{init}_v)$ and runs $(spk, ssk) \leftarrow \mathcal{V.Gen}(\text{init}_v)$. It returns $(spk, ssk)$.
+
+**Sig(m, sk, spk, ADM):** It parses $sk = (sk_d, sk_v)$. It first computes the fixed message part $M \leftarrow \text{FIX}_{\text{ADM}}(m)$ and runs $\sigma_1 \leftarrow \mathcal{D.Sig}(sk_d, (M||\text{ADM}||pk||spk))$ and $\sigma_2 \leftarrow \mathcal{V.Sig}(\{pk_v, spk\}, sk_v, (\sigma_1||m))$. It returns $\sigma = (\sigma_1, \sigma_2, \text{ADM})$.
+
+**San(m, MOD, σ, pk, ssk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. This algorithm first computes the modified message $m' \leftarrow \text{MOD}(m)$ and it runs $\sigma'_2 \leftarrow \mathcal{V.Sig}(\{pk_v, spk\}, ssk, (\sigma_1||m'))$. It returns $\sigma' = (\sigma_1, \sigma'_2, \text{ADM})$.
+
+**Ver(m, σ, pk, spk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and computes the fixed message part $M \leftarrow \text{FIX}_{\text{ADM}}(m)$. It then runs $b_1 \leftarrow \mathcal{D.Ver}(pk_d, (M||\text{ADM}||pk||spk), \sigma_1)$ and $b_2 \leftarrow \mathcal{V.Ver}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2)$. It returns $b = (b_1 \land b_2)$.
+
+**SiProof(sk, m, σ, spk):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and the key $sk = (sk_d, sk_v)$. It runs $\pi_{si} \leftarrow \mathcal{V.Proof}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, pk_v, sk_v)$ and returns it.
+
+**SaProof(ssk, m, σ, pk):** It parses the signature $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. It runs $\pi_{sa} \leftarrow \mathcal{V.Proof}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, spk, ssk)$ and returns it.
+
+**SiJudge(m, σ, pk, spk, πsi):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. It runs $b \leftarrow \mathcal{V.Judge}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, pk_v, \pi_{si})$ and returns it.
+
+**SaJudge(m, σ, pk, spk, πsa):** It parses $\sigma = (\sigma_1, \sigma_2, \text{ADM})$ and $pk = (pk_d, pk_v)$. It runs $b \leftarrow \mathcal{V.Judge}(\{pk_v, spk\}, (\sigma_1||m), \sigma_2, spk, \pi_{sa})$ and returns $(1-b)$.
+
+The signer secret key $sk = (sk_d, sk_v)$ contains a secret key $sk_d$ compatible with the DS scheme and a secret key $sk_v$ compatible with the VRS scheme. The signer public key $pk = (pk_d, pk_v)$ contains the two corresponding public keys. The sanitizer public/secret key pair $(spk, ssk)$ is generated as in the VRS scheme.
+
+Let $m$ be a message and $M$ be the fixed part chosen by the signer according to the admissible function $\text{ADM}$. To sign $m$, the signer first signs $M$ together with the public key of the sanitizer $spk$ and the admissible function $\text{ADM}$ using the DS scheme. We denote this signature by $\sigma_1$. The signer then signs in $\sigma_2$ the full message $m$ together with $\sigma_1$ using the VRS scheme for the set of public keys $L = \{pk_v, spk\}$. In other words, he anonymously signs $(\sigma_1||m)$ within a group of two users: the signer and the sanitizer. The final sanitizable signature is $\sigma = (\sigma_1, \sigma_2, \text{ADM})$. The verification algorithm proceeds in two steps: it verifies the signature $\sigma_1$ and it verifies the anonymous signature $\sigma_2$.
+
+To sanitize this signature $\sigma = (\sigma_1, \sigma_2, \text{ADM})$, the sanitizer chooses an admissible message $m'$ according to $\text{ADM}$ (i.e. $m$ and $m'$ have the same fixed part). He then anonymously signs $m'$ together with $\sigma_1$ using the VRS for the group $L = \{pk_v, spk\}$ and the secret key $ssk$. We denote this signature by $\sigma'_2$. The final sanitized signature is $\sigma' = (\sigma_1, \sigma'_2, \text{ADM})$.
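The reuse of $\sigma_1$ across sanitizations can be sketched with stand-in primitives. None of this is the real construction: a keyed hash replaces the deterministic DS scheme, a nonce-based tag replaces the VRS, and the ADM component of the signature tuple is omitted for brevity.

```python
import hashlib
import secrets

def ds_sign(skd, msg):
    """Deterministic stand-in for D.Sig (a keyed hash, NOT a real signature)."""
    return hashlib.sha256((skd + "|" + msg).encode()).hexdigest()

def vrs_sign(key, msg):
    """Randomized stand-in for V.Sig: a fresh nonce makes each output distinct."""
    nonce = secrets.token_hex(8)
    return (nonce, hashlib.sha256((key + "|" + nonce + "|" + msg).encode()).hexdigest())

def guss_sig(m, skd, skv, pk, spk, adm_fix):
    """GUSS.Sig sketch: sigma1 binds the fixed part, sigma2 the full message."""
    M = adm_fix(m)                                  # plays FIX_ADM(m)
    s1 = ds_sign(skd, M + "|ADM|" + pk + "|" + spk)
    s2 = vrs_sign(skv, s1 + "|" + m)
    return (s1, s2)

def guss_san(m, mod, sigma, ssk):
    """GUSS.San sketch: keep sigma1 unchanged, re-sign MOD(m) as the sanitizer."""
    s1, _ = sigma
    return (s1, vrs_sign(ssk, s1 + "|" + mod(m)))

# Two messages with the same fixed part (here: the first word) yield signatures
# with identical first components, so sigma1 cannot link a sanitized signature
# back to its origin.
fix = lambda m: m.split()[0]
s_a = guss_sig("hello alpha", "skd", "skv", "pk", "spk", fix)
s_b = guss_sig("hello beta", "skd", "skv", "pk", "spk", fix)
assert s_a[0] == s_b[0]                # deterministic sigma1 parts coincide
s_san = guss_san("hello alpha", lambda m: "hello gamma", s_a, "ssk")
assert s_san[0] == s_a[0]              # sanitizing preserves sigma1
```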
+
+**Theorem 3.** For any deterministic and unforgeable DS scheme *D* and any unforgeable, anonymous, accountable and non-usurpable VRS scheme *V*, GUSS instantiated with (*D*, *V*) is immutable, transparent, strongly accountable and unlinkable.
+
+We give the intuition of the security properties; the proof of the theorem is given in Appendix D:
+
+Transparency: By the anonymity of $\sigma_2$ and $\sigma'_2$, nobody can tell whether a signature comes from the signer or the sanitizer, and since both kinds of signatures have the same structure, nobody can tell whether a signature is sanitized or not.
+
+Immutability: Since it is produced by an unforgeable DS scheme, nobody can forge the signature $\sigma_1$ of the fixed part $M$ without the signer secret key. Thus the sanitizer cannot change the fixed part of the signatures. Moreover, since $\sigma_1$ signs the public key of the sanitizer in addition to $M$, the other users cannot forge a signature of an admissible message using $\sigma_1$.
+
+Unlinkability: An adversary knows (i) two signatures $\sigma^0$ and $\sigma^1$ that have the same fixed part $M$ according to the same function $\text{ADM}$ for the same sanitizer, and (ii) the sanitized signature $\sigma' = (\sigma'_1, \sigma'_2)$ computed from $\sigma^b$ for a given admissible message $m'$ and an unknown bit $b$. To achieve unlinkability, it must be hard to guess $b$. Since the DS scheme is deterministic, the two signatures $\sigma^0 = (\sigma^0_1, \sigma^0_2)$ and $\sigma^1 = (\sigma^1_1, \sigma^1_2)$ have the same first part (i.e. $\sigma^0_1 = \sigma^1_1$). As shown above, $\sigma'$ has the same first part $\sigma'_1$ as the original signature, thus $\sigma'_1 = \sigma^0_1 = \sigma^1_1$ and $\sigma'_1$ leaks no information about $b$. On the other hand, the second part of the sanitized signature $\sigma'_2$ is computed from the modified message $m'$ and the first part of the original signature. Since $\sigma^0_1 = \sigma^1_1$, we deduce that $\sigma'_2$ leaks no information about $b$. Finally, the best strategy of the adversary is to guess $b$ at random.
+
+(Strong) Accountability: The signer must be able to prove the provenance of a signature, which amounts to lifting the anonymity of the second part $\sigma_2$ of this signature: if it was created by the signer then it is the original signature, else it was created by the sanitizer and it is a sanitized signature. By definition, the VRS scheme used to generate $\sigma_2$ provides a way to prove whether a user is the author of a signature or not. GUSS uses it in its proof algorithms to achieve accountability. Note that since the sanitizer uses the same VRS scheme to sanitize a signature, it can also prove the origin of a given signature, which yields strong accountability.
+
+# 4 Conclusion
+
+In this paper, we revisit the notion of verifiable ring signatures. We improve its verifiability properties, we give a security model for this primitive and we design a simple, efficient and secure scheme named EVeR. We extend the security model of sanitizable signatures in order to allow the sanitizer to prove the origin of a signature. Finally, we design a generic unlinkable sanitizable signature scheme named GUSS based on verifiable ring signatures. This scheme is twice as efficient as the best scheme in the literature. In the future, we aim to find other applications of verifiable ring signatures that are secure in our model.
+
+References
+
+1. Giuseppe Ateniese, Daniel H. Chou, Breno de Medeiros, and Gene Tsudik. *Sanitizable Signatures*, pages 159–177. Springer Berlin Heidelberg, 2005.
+
+2. Dan Boneh. The decision Diffie-Hellman problem. In *Third Algorithmic Number Theory Symposium (ANTS)*, volume 1423 of LNCS. Springer, 1998. Invited paper.
+
+3. Christina Brzuska, Heike Busch, Oezguer Dagdelen, Marc Fischlin, Martin Franz, Stefan Katzenbeisser, Mark Manulis, Cristina Onete, Andreas Peter, Bertram Poettering, and Dominique Schröder. *Redactable Signatures for Tree-Structured Data: Definitions and Constructions*. Springer Berlin Heidelberg, 2010.
+
+4. Christina Brzuska, Marc Fischlin, Tobias Freudenreich, Anja Lehmann, Marcus Page, Jakob Schelbert, Dominique Schröder, and Florian Volk. Security of sanitizable signatures revisited. In Stanislaw Jarecki and Gene Tsudik, editors, *PKC 2009*, volume 5443 of LNCS, pages 317–336. Springer, March 2009.
+
+5. Christina Brzuska, Marc Fischlin, Tobias Freudenreich, Anja Lehmann, Marcus Page, Jakob Schelbert, Dominique Schröder, and Florian Volk. *Security of Sanitizable Signatures Revisited*, pages 317–336. Springer Berlin Heidelberg, 2009.
+
+6. Christina Brzuska, Marc Fischlin, Anja Lehmann, and Dominique Schröder. Unlinkability of sanitizable signatures. In Phong Q. Nguyen and David Pointcheval, editors, *PKC 2010*, volume 6056 of LNCS, pages 444–461. Springer, May 2010.
+
+7. Christina Brzuska, Henrich C. Pöhls, and Kai Samelin. *Non-interactive Public Accountability for Sanitizable Signatures*, pages 178–193. Berlin, Heidelberg, 2013.
+
+8. Christina Brzuska, Henrich C. Pöhls, and Kai Samelin. *Efficient and Perfectly Unlinkable Sanitizable Signatures without Group Signatures*, pages 12–30. Springer Berlin Heidelberg, Berlin, Heidelberg, 2014.
+
+9. Sébastien Canard and Amandine Jambert. *On Extended Sanitizable Signature Schemes*, pages 179–194. Springer Berlin Heidelberg, 2010.
+
+10. Sébastien Canard, Amandine Jambert, and Roch Lescuyer. *Sanitizable Signatures with Several Signers and Sanitizers*, pages 35–52. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
+
+11. Sébastien Canard, Berry Schoenmakers, Martijn Stam, and Jacques Traoré. List signature schemes. *Discrete Applied Mathematics*, 154(2):189–201, 2006.
+
+12. Z. Changlun, L. Yun, and H. Dequan. A new verifiable ring signature scheme based on the Nyberg-Rueppel scheme. In *2006 8th International Conference on Signal Processing*, volume 4, 2006.
+
+13. David Chaum and Torben P. Pedersen. Wallet databases with observers. In Ernest F. Brickell, editor, *CRYPTO'92*, volume 740 of LNCS, pages 89–105. Springer, August 1993.
+
+14. R. Cramer, I. Damgård, and B. Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In *CRYPTO'94*, volume 839 of LNCS. Springer, 1994.
+
+15. Amos Fiat and Adi Shamir. *Advances in Cryptology — CRYPTO' 86: Proceedings*, chapter How To Prove Yourself: Practical Solutions to Identification and Signature Problems, pages 186–194. Springer Berlin Heidelberg, Berlin, Heidelberg, 1987.
+
+16. N. Fleischhacker, J. Krupp, G. Malavolta, J. Schneider, D. Schröder, and M. Simkin. Efficient unlinkable sanitizable signatures from signatures with re-randomizable keys. In *Public-Key Cryptography – PKC 2016*, LNCS. Springer, 2016.
+
+17. Georg Fuchsbauer and David Pointcheval. *Anonymous Proxy Signatures*. Springer Berlin Heidelberg, 2008.
+
+18. Robert Johnson, David Molnar, Dawn Song, and David Wagner. *Homomorphic Signature Schemes*, pages 244–262. Springer Berlin Heidelberg, Berlin, Heidelberg, 2002.
+
+19. Russell W. F. Lai, Tao Zhang, Sherman S. M. Chow, and Dominique Schröder. *Efficient Sanitizable Signatures Without Random Oracles*, pages 363–380. Springer International Publishing, 2016.
+
+20. K. C. Lee, H. A. Wen, and T. Hwang. Convertible ring signature. *IEE Proceedings - Communications*, 152(4):411–414, 2005.
+
+21. Jiqiang Lv and Xinmei Wang. *Verifiable Ring Signature*, pages 663–665. DMS Proceedings, 2003.
+
+22. Ronald L. Rivest, Adi Shamir, and Yael Tauman. How to leak a secret. In Colin Boyd, editor, *ASIACRYPT 2001*, volume 2248 of LNCS, pages 552–565. Springer, December 2001.
+
+23. Ron Steinfeld, Laurence Bull, and Yuliang Zheng. *Content Extraction Signatures*, pages 285–304. Springer Berlin Heidelberg, 2002.
+
+24. Shangping Wang, Rui Ma, Yaling Zhang, and Xiaofeng Wang. Ring signature scheme based on multivariate public key cryptosystems. *Computers and Mathematics with Applications*, 62(10):3973–3979, 2011.
+
+## A Cryptographic Background
+
+**Definition 14 (DDH [2]).** Let $\mathbb{G}$ be a multiplicative group of prime order $p$ and $g \in \mathbb{G}$ be a generator. Given an instance $(g^a, g^b, g^z)$ for unknown $a, b, z \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$, the Decisional Diffie-Hellman (DDH) problem is to decide whether $z = a \cdot b$ or not. The DDH assumption states that there exists no PPT algorithm that solves the DDH problem with non-negligible advantage.
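The DDH game above can be sketched in a few lines of Python. This is an illustrative sketch only: the toy Schnorr group (p = 607, q = 101) and the `ddh_instance` sampler are hypothetical names of ours, and the parameters are far too small to be secure.

```python
import secrets

# Toy Schnorr group for illustration only: q = 101 divides p - 1 = 606,
# so Z_p^* has a subgroup of prime order q. Real groups use ~256-bit orders.
P, Q = 607, 101
G = pow(2, (P - 1) // Q, P)  # generator of the order-Q subgroup

def ddh_instance(real: bool):
    """Return (g^a, g^b, g^z) with z = a*b mod q when real, else z random."""
    a = secrets.randbelow(Q - 1) + 1
    b = secrets.randbelow(Q - 1) + 1
    z = (a * b) % Q if real else secrets.randbelow(Q - 1) + 1
    return pow(G, a, P), pow(G, b, P), pow(G, z, P)

A, B, Z = ddh_instance(real=True)
# All three components live in the order-Q subgroup:
assert pow(A, Q, P) == 1 and pow(B, Q, P) == 1 and pow(Z, Q, P) == 1
```

The DDH assumption says that, in a suitable group, no PPT distinguisher can tell `real=True` instances from `real=False` ones noticeably better than guessing.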
+
+We recall the notion of a deterministic digital signature, together with the deterministic version of Schnorr's signature.
+
+**Definition 15 ((Deterministic) Digital Signature (DS)).** A Digital Signature scheme *S* is a tuple of 4 algorithms defined as follows:
+
+D.Init($1^k$): It returns a setup value init.
+
+SiGen(init): It returns a pair of signer public/private keys (pk, sk).
+
+D.Sig($m$, sk): It computes a signature $\sigma$ of $m$ using the key sk.
+
+D.Ver(pk, $m$, $\sigma$): It returns a bit $b$: if the signature $\sigma$ of $m$ is valid according to pk then $b = 1$, else $b = 0$.
+
+Such a scheme is unforgeable when no polynomial-time adversary wins the following experiment with non-negligible probability, where D.Sig($\cdot$, sk) is a signature oracle, $q_S$ is the number of queries to this oracle and $\sigma_i$ is the $i^{\text{th}}$ signature computed by this oracle:
+
+$$
+\begin{align*}
+\text{Exp}_{S,A}^{\text{unf}}(k):
+& \quad \text{init} \leftarrow D.\text{Init}(1^k) \\
+& \quad (\text{pk}, \text{sk}) \leftarrow \text{SiGen}(\text{init}) \\
+& \quad (m_*, \sigma_*) \leftarrow A^{\text{D.Sig}(., \text{sk})}(\text{pk}) \\
+& \quad \text{if } (D.\text{Ver}(\text{pk}, m_*, \sigma_*) = 1) \text{ and } (\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_*) \\
+& \quad \quad \text{then return } 1, \text{ else return } 0
+\end{align*}
+$$
+
+Moreover, such a scheme is deterministic when the algorithm D.Sig($m$, sk) is deterministic. As mentioned in [16], any DS scheme can be transformed into a deterministic DS scheme without loss of efficiency or security by using a pseudo-random function, which can be simulated by a hash function in the random oracle model.
+
+**Definition 16 (Deterministic Schnorr's Signature).** The (Deterministic) Schnorr's Signature is defined by the following algorithms:
+
+D.Init($1^k$): It returns a setup value $init = (G, p, g, H)$ where $G$ is a group of prime order $p$, $g \in G$ is a generator and $H: \{0, 1\}^* \to \mathbb{Z}_p^*$ is a hash function.
+
+SiGen(init): It picks $sk \leftarrow \mathbb{Z}_p^*$, computes $pk = g^{sk}$ and returns ($pk, sk$).
+
+D.Sig($m, sk$): It computes $r = H(m||sk)$, $R = g^r$, $z = r + sk \cdot H(R||m) \bmod p$ and returns $\sigma = (R, z)$.
+
+D.Ver($pk, m, \sigma$): It parses $\sigma = (R, z)$; if $g^z = R \cdot pk^{H(R||m)}$ then it returns 1, else 0.
+
+This DS scheme is deterministic and unforgeable under the DL assumption in the random oracle model.
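The deterministic Schnorr scheme of Definition 16 can be sketched as follows. This is a minimal illustration, not a secure implementation: the group parameters are tiny toy values of ours, and the byte encodings fed to the hash are arbitrary choices.

```python
import hashlib

# Toy Schnorr group (q = 101 divides p - 1 = 606); far too small for security.
P, Q = 607, 101
G = pow(2, (P - 1) // Q, P)  # generator of the order-Q subgroup

def H(data: bytes) -> int:
    """Random-oracle stand-in mapping bytes into Z_q^*."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (Q - 1) + 1

def keygen(sk: int) -> int:
    return pow(G, sk, P)  # pk = g^sk

def sign(m: bytes, sk: int):
    r = H(m + sk.to_bytes(32, "big"))   # deterministic nonce r = H(m || sk)
    R = pow(G, r, P)
    z = (r + sk * H(R.to_bytes(32, "big") + m)) % Q
    return (R, z)

def verify(pk: int, m: bytes, sig) -> bool:
    R, z = sig
    c = H(R.to_bytes(32, "big") + m)
    return pow(G, z, P) == (R * pow(pk, c, P)) % P  # g^z =? R * pk^H(R||m)

sk = 42
pk = keygen(sk)
sig = sign(b"hello", sk)
assert verify(pk, b"hello", sig)
assert sign(b"hello", sk) == sig  # determinism: same message, same signature
```

Correctness follows from $g^z = g^{r + sk \cdot c} = R \cdot pk^c$; determinism comes from deriving the nonce as $H(m||sk)$ instead of sampling it, exactly the transformation mentioned above.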
+
+A zero-knowledge proof (ZKP) allows a prover knowing a witness $w$ to convince a verifier that a statement $s$ is in a given language without leaking any information except the validity of $s$. Such a proof is a proof of knowledge (PoK) when the verifier is also convinced that the prover knows the witness $w$. We recall the definition of a non-interactive zero-knowledge proof of knowledge.
+
+**Definition 17 (NIZKP).** A non-interactive ZKP (NIZKP) for a language $\mathcal{L}$ is a pair of algorithms (Prove, Verify) such that:
+
+Prove($s, w$). This algorithm outputs a proof $\pi$ that $s \in \mathcal{L}$ using the witness $w$.
+
+Verify($s, \pi$). This algorithm checks whether $\pi$ is a valid proof that $s \in \mathcal{L}$ and outputs a bit.
+
+A NIZKP proof verifies the following properties:
+
+**Completeness.** For any statement $s \in \mathcal{L}$ and the corresponding witness $w$, we have that Verify($s$, Prove($s$, $w$)) = 1.
+
+**Soundness.** There is no polynomial-time adversary $\mathcal{A}$ such that $\mathcal{A}(\mathcal{L})$ outputs $(s, \pi)$ such that Verify($s$, $\pi$) = 1 and $s \notin \mathcal{L}$ with non-negligible probability.
+
+**Zero-knowledge.** A proof $\pi$ leaks no information, i.e. there exists a PPT algorithm Sim (called the simulator) such that the outputs of Prove($s$, $w$) and the outputs of Sim($s$) follow the same probability distribution.
+
+Moreover, such a proof is a proof of knowledge when for any $s \in \mathcal{L}$ and the corresponding witness $w$, any bit-string input $\text{in} \in \{0, 1\}^*$ and any algorithm $\mathcal{A}(s, \text{in})$, there exists a knowledge extractor $\mathcal{E}$ such that the probability that $\mathcal{E}^{\mathcal{A}(s,\text{in})}(s)$ outputs the witness $w$ given access to the oracle $\mathcal{A}(s, \text{in})$ is as high as the probability that $\mathcal{A}(s, \text{in})$ outputs a proof $\pi$ such that Verify($s$, $\pi$) = 1.
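For intuition, the Chaum-Pedersen protocol [13] made non-interactive via Fiat-Shamir [15] gives a NIZKP of discrete-logarithm equality, the single-statement case of the LogEq proofs used in the security proofs below. The sketch is ours (toy group, illustrative function names), not a hardened implementation.

```python
import hashlib
import secrets

# Toy group: q = 101 divides p - 1 = 606; illustrative parameters only.
P, Q = 607, 101
G = pow(2, (P - 1) // Q, P)  # generator of the order-Q subgroup

def fs_challenge(*parts) -> int:
    """Fiat-Shamir challenge: hash the transcript into Z_q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(h, z, pk, x):
    """Prove log_g(pk) = log_h(z) (= x) without revealing x."""
    k = secrets.randbelow(Q - 1) + 1
    A, B = pow(G, k, P), pow(h, k, P)   # commitments g^k, h^k
    c = fs_challenge(A, B, G, h, pk, z)
    return (A, B, (k + c * x) % Q)      # response s = k + c*x mod q

def verify(h, z, pk, proof) -> bool:
    A, B, s = proof
    c = fs_challenge(A, B, G, h, pk, z)
    return (pow(G, s, P) == (A * pow(pk, c, P)) % P
            and pow(h, s, P) == (B * pow(z, c, P)) % P)

x = 7
h = pow(G, 5, P)                    # a second generator of the subgroup
pk, z = pow(G, x, P), pow(h, x, P)  # pk = g^x, z = h^x share the exponent x
assert verify(h, z, pk, prove(h, z, pk, x))
```

Both verification equations hold since $g^s = g^{k + c \cdot x} = A \cdot pk^c$ and $h^s = B \cdot z^c$; the simulator and extractor required by Definition 17 exist for this protocol in the random oracle model.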
+
+## B First experiment for non-usurpability
+
+**Definition 18 (n-non-usu-1 experiment).** Let $P$ be a SS of security parameter $k$. $P$ is n-non-usu-1 secure when for any polynomial-time adversary $\mathcal{A}$, the probability that $\mathcal{A}$ wins the following experiment is negligible, where V.Sig$(\cdot, \cdot, \cdot)$ and V.Proof$(\cdot, \cdot, \cdot, \cdot, \cdot)$ are oracles, $q_S$ is the number of calls to the oracle V.Sig$(\cdot, \cdot, \cdot)$ and $(L_i, l_i, m_i)$ (resp. $\sigma_i$) is the $i^{\text{th}}$ query to this oracle (resp. signature output by this oracle):
+
+$$
+\begin{align*}
+\text{Exp}_{P,\mathcal{A}}^{n\text{-non-usu-1}}(k):
+& \quad \text{init} \leftarrow \text{V.Init}(1^k) \\
+& \quad \forall 1 \leq i \leq n, (\text{pk}_i, \text{sk}_i) \leftarrow \text{V.Gen}(\text{init}) \\
+& \quad (L_*, m_*, \sigma_*, l_*, \pi_*) \leftarrow \mathcal{A}^{\text{V.Sig}(\cdot, \cdot, \cdot), \text{V.Proof}(\cdot, \cdot, \cdot, \cdot, \cdot)}(\{\text{pk}_i\}_{1 \leq i \leq n}) \\
+& \quad \pi \leftarrow \text{V.Proof}(L_*, m_*, \sigma_*, \text{pk}_{l_*}, \text{sk}_{l_*}) \\
+& \quad \text{if } (\text{V.Ver}(L_*, \sigma_*, m_*) = 1) \text{ and } (\text{V.Judge}(L_*, m_*, \sigma_*, \text{pk}_{l_*}, \pi_*) = 1) \\
+& \quad \text{and } (\forall i \in \{1, \dots, q_S\}, (L_i, l_i, m_i, \sigma_i) \neq (L_*, l_*, m_*, \sigma_*)) \\
+& \quad \quad \text{then return } 1, \text{ else return } 0
+\end{align*}
+$$
+
+## C Security proofs of EVeR
+
+**Lemma 1.** EVeR is $n$-unf secure for any polynomially bounded $n$ under the DL assumption in the random oracle model.
+
+*Proof.* We first recall that since LogEq$_n$ is valid, for any $s \in \mathcal{L}$ and the corresponding witness $w$, for any bit-string input $\text{in} \in \{0, 1\}^*$ and any algorithm $\mathcal{A}(\text{in})$, there exists a knowledge extractor $\mathcal{E}$ such that the probability that $\mathcal{E}^{\mathcal{A}(\text{in})}(k)$ outputs the witness $w$ given access to the oracle $\mathcal{A}(\text{in})$ is as high as the probability that $\mathcal{A}(\text{in})$ outputs a proof $\pi$ such that Verify($s, \pi$) = 1. Moreover, since LogEq$_n$ is zero-knowledge, there exists a PPT algorithm Sim (called the simulator) such that the outputs of Prove($s, w$) and the outputs of Sim($s$) follow the same probability distribution.
+
+Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \text{Pr}[\text{Exp}_{\text{EVeR}, \mathcal{A}}^{\text{n-unf}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.
+
+$\mathcal{B}$ construction: $\mathcal{B}$ receives the input $(G, p, g, y)$ where $g$ is a generator of the group $G$ of prime order $p$ and $y$ is an element of $G$. For all $i \in \{1, \dots, n\}$, it picks $x_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$ and sets $\mathbf{pk}_i = y^{x_i}$. $\mathcal{B}$ initializes an empty list $H_{\text{list}}$. $\mathcal{B}$ runs $x' \leftarrow \mathcal{E}^{\mathcal{A}'(\{\mathbf{pk}_i\}_{1 \le i \le n})}(k)$ where $\mathcal{A}'$ is the following algorithm:
+
+**Algorithm** $\mathcal{A}'(\{\mathbf{pk}_i\}_{1 \le i \le n})$: It runs $(L_*, \sigma_*, m_*) \leftarrow \mathcal{A}(\{\mathbf{pk}_i\}_{1 \le i \le n})$. It simulates the oracles to $\mathcal{A}$ as follows:
+
+**Random oracle** $H(\cdot)$: On the $i^{\text{th}}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$. Else it picks $u_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.
+
+**Oracle V.Sig**$(\cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L_i, l_i, m_i)$, it picks $r_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m_i||r_i = M_j$. It computes $z_i = \mathbf{pk}_{l_i}^{u_j}$ and it runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_i})$. It returns $(r_i, z_i, P_i)$ to $\mathcal{A}$.
+
+**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m'_i||r'_i = M_j$. It computes $\tilde{z}_i = \mathbf{pk}_{l'_i}^{u_j}$ and it runs $P'_i \leftarrow \text{Sim}((h'_i, \tilde{z}_i, g, \mathbf{pk}_{l'_i}))$. It returns $(\tilde{z}_i, P'_i)$ to $\mathcal{A}$.
+
+Finally, $\mathcal{A}'$ parses $\sigma_* = (r_*, z_*, P_*)$ and returns $P_*$.
+
+**Analysis:** First note that the experiment $n$-unf is perfectly simulated for $\mathcal{A}$, so $\mathcal{A}$ returns $(L_*, \sigma_*, m_*)$ such that, for $\sigma_* = (r_*, z_*, P_*)$ and $h_* = H(m_*||r_*)$, we have $\Pr[\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_*}, P_*) = 1] \geq \lambda(k)$ and $L_* \subseteq \{\mathbf{pk}_i\}_{1 \leq i \leq n}$. We deduce that $\mathcal{A}'$ returns a proof $P_*$ such that $\Pr[\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_*}, P_*) = 1] \geq \lambda(k)$, so $\mathcal{E}^{\mathcal{A}'(\{\mathbf{pk}_i\}_{1 \leq i \leq n})}(k)$ returns the discrete logarithm $x'$ of one of the public keys in $\{\mathbf{pk}_i\}_{1 \leq i \leq n}$ with probability at least $\lambda(k)$. Suppose that $\mathcal{A}'$ returns a valid proof. Since for all $i$, $\mathbf{pk}_i = y^{x_i}$, and since there exists $j$ such that $\mathbf{pk}_j = g^{x'}$, the discrete logarithm of $y$ is $x'/x_j \bmod p$. We deduce that $\mathcal{B}$ returns the discrete logarithm of $y$ with probability at least $\lambda(k)$. $\square$
+
+**Lemma 2.** *EVeR* is *n*-ano secure for any polynomially bounded *n* under the DDH assumption in the random oracle model.
+
+*Proof.* Let *n*-ano$_{\psi}$ be the same experiment as *n*-ano, except that the oracle LRSO$_b$ can be called at most $\psi$ times. We prove the two following claims:
+
+**Claim 1** If $\exists \mathcal{A} \in \text{POLY}(k)$ such that $\lambda_1(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_1}(k)$ is non-negligible, then $\exists \mathcal{B} \in \text{POLY}(k)$ that breaks the DDH assumption with non-negligible probability.
+
+**Claim 2** Let $\psi \ge 1$ and suppose that $\epsilon(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi}}(k)$ is negligible for any $\mathcal{A} \in \text{POLY}(k)$. Then, if $\exists \mathcal{A} \in \text{POLY}(k)$ such that $\lambda_{\psi+1}(k) = \text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi+1}}(k)$ is non-negligible, then $\exists \mathcal{B} \in \text{POLY}(k)$ that breaks the DDH assumption with non-negligible probability.
+
+These two claims imply that $\text{Adv}_{\text{EVeR},\mathcal{A}}^{\text{n-ano}_{\psi}}(k)$ is negligible for any $n$ and any $\psi$ that are polynomially bounded.
+
+**Proof of Claim 1:** We show how to build the algorithm $\mathcal{B}$. It receives a DDH instance $((G, p, g), X, Y, Z)$ as input. It picks $d \stackrel{\$}{\leftarrow} \{1, \dots, n\}$. For all $i \in \{1, \dots, n\}$:
+
+- if $i=d$ then it sets $\mathbf{pk}_i = X$
+
+- else, it runs $(\mathbf{pk}_i, \mathbf{sk}_i) \leftarrow \text{V.Gen}(init)$ where $init = (G, p, g, H)$.
+
+$\mathcal{B}$ runs $(d_0, d_1) \leftarrow \mathcal{A}_1(\{\mathbf{pk}_i\}_{1 \le i \le n})$. During the experiment, $\mathcal{B}$ simulates the oracle for $\mathcal{A}$ as follows:
+
+**Random oracle** $H(\cdot)$: On the $i^{\text{th}}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$. Else it picks $u_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.
+
+**Oracle V.Sig**$(\cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L_i, l_i, m_i)$, it picks $r_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m_i||r_i = M_j$.
+
+- If $l_i = d$ then it computes $z_i = X^{u_j}$ and it runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_i})$. It returns $\sigma_i = (r_i, z_i, P_i)$ to $\mathcal{A}$.
+
+- Else it runs and returns $\sigma_i \leftarrow \text{V.Sig}(L_i, \mathbf{sk}_{l_i}, m_i)$
+
+**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m'_i||r'_i = M_j$. It computes $\bar{z}_i = \mathbf{pk}_{l'_i}^{u_j}$ and it runs $P'_i \leftarrow \text{Sim}((h'_i, \bar{z}_i, g, \mathbf{pk}_{l'_i}))$. It returns $(\bar{z}_i, P'_i)$ to $\mathcal{A}$.
+
+$\mathcal{B}$ runs $b_* \leftarrow \mathcal{A}_2(\{\mathbf{pk}_i\}_{1 \le i \le n})$. During the experiment, $\mathcal{B}$ simulates the oracle $\text{V.Sig}(\cdot, \cdot, \cdot)$ as in the first phase. It simulates the three other oracles as follows:
+
+**Oracle LRSO$_b$($d_0$, $d_1$, $\cdot$, $\cdot$):** On input $(m'', L'')$, it picks $r'' \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. If $\exists i$ such that $r_i = r''$ then $\mathcal{B}$ aborts the experiment and returns $b'_* \stackrel{\$}{\leftarrow} \{0, 1\}$, else it runs $P'' \leftarrow \text{Sim}(\{(Y, Z, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L''})$ and returns $\sigma'' = (r'', Z, P'')$ to $\mathcal{A}$.
+
+**Oracle V.Proof$(\cdot, \cdot, \cdot, \cdot)$:** On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, if LRSO$_b$ has already been called and $\sigma'_i = \sigma''$ and ($l'_i = d_0$ or $l'_i = d_1$) then it returns $\perp$ to $\mathcal{A}$. Else, it proceeds as in the first phase.
+
+**Random oracle $H(\cdot)$:** On the $i^{\text{th}}$ input $M_i$, if LRSO$_b$ has already been called and $M_i = (m''||r'')$ then it returns $Y$ to $\mathcal{A}$. Else, it proceeds as in the first phase.
+
+If $d \notin \{d_0, d_1\}$ then $\mathcal{B}$ aborts the experiment and returns $b'_* \stackrel{\$}{\leftarrow} \{0, 1\}$. Else, let $b'$ be the bit such that $d_{b'} = d$: if $b' = b_*$ then $\mathcal{B}$ returns $b'_* = 1$, else $b'_* = 0$.
+
+*Analysis:* Let $q$ be the number of queries asked to V.Sig$(\cdot, \cdot, \cdot)$ and let $E$ be the event "$\mathcal{B}$ does not abort the experiment of $\mathcal{A}$". We have:
+
+$$
+\begin{align*}
+\Pr[\neg E] &= \Pr[(\exists i, r_i = r'') \lor (d_0 \neq d \land d_1 \neq d)] \\
+&\leq \Pr[\exists i, r_i = r''] + \Pr[d_0 \neq d \land d_1 \neq d] \\
+&\leq \sum_{i=1}^{q} \Pr[r_i = r''] + \Pr[d_0 \neq d] \\
+&\leq \frac{q}{|G|} + \frac{n-1}{n}
+\end{align*}
+$$
+
+We deduce that:
+
+$$ \mathrm{Pr}[E] \geq 1 - \left( \frac{q}{|G|} + \frac{n-1}{n} \right) = \frac{1}{n} - \frac{q}{|G|} $$
+
+Let $\alpha, \beta \in \mathbb{Z}_p^*$ be such that $X = g^\alpha$ and $Y = g^\beta$. Let $b$ be the solution to the DDH instance, i.e. $b = 1$ iff $Z = g^{\alpha \cdot \beta}$. We compute the probability that $\mathcal{B}$ wins its DDH experiment:
+
+$$
+\begin{align*}
+\Pr[b'_* = b] &= \Pr[E] \cdot \Pr[b'_* = b|E] + (1 - \Pr[E]) \cdot \Pr[b'_* = b|\neg E] \\
+&= \Pr[E] \cdot (\Pr[b'_* = b|E] - \Pr[b'_* = b|\neg E]) + \Pr[b'_* = b|\neg E] \\
+&= \Pr[E] \cdot (\Pr[b'_* = b|E] - \frac{1}{2}) + \frac{1}{2} \\
+&= \Pr[E] \cdot (\Pr[Z = g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \land (Z = g^{\alpha \cdot \beta})] \\
+&\quad + \Pr[Z \neq g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \land (Z \neq g^{\alpha \cdot \beta})] - \frac{1}{2}) + \frac{1}{2} \\
+&= \Pr[E] \cdot (\frac{1}{2} \cdot (\frac{1}{2} \pm \lambda_1(k)) + \frac{1}{2} \cdot \frac{1}{2} - \frac{1}{2}) + \frac{1}{2} \\
+&= \pm \lambda_1(k) \cdot \frac{\Pr[E]}{2} + \frac{1}{2}
+\end{align*}
+$$
+
+Finally, we deduce the advantage of $\mathcal{B}$ against the DDH problem:
+
+$$ \left| \Pr[b'_* = b] - \frac{1}{2} \right| = \lambda_1(k) \cdot \frac{\Pr[E]}{2} \geq \lambda_1(k) \cdot \left( \frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|} \right) $$
+
+This advantage is non-negligible, which concludes the proof of Claim 1.
+
+**Proof of Claim 2:** We show how to build the algorithm $\mathcal{B}$. It runs the same reduction as in Claim 1, except that $\mathcal{B}$ simulates the oracles LRSO$_b(d_0, d_1, \cdot, \cdot)$ and V.Proof$(\cdot, \cdot, \cdot, \cdot, \cdot)$ as follows during the second phase of the experiment of $\mathcal{A}$:
+
+**Oracle LRSO$_b$($d_0$, $d_1$, $\cdot$, $\cdot$):** On the $i^{\text{th}}$ input $(m_i''$, $L_i'')$, if $i = 1$ then this oracle is defined as in the reduction of Claim 1. Else it runs the oracle V.Sig$(\cdot, \cdot, \cdot)$ on the input $(L_i'', d, m_i'')$ and returns the resulting signature $\sigma_i''$ to $\mathcal{A}$.
+
+**Oracle V.Proof($\cdot, \cdot, \cdot, \cdot, \cdot$):** On the $i^{\text{th}}$ input $(L_i', m_i', \sigma_i', l_i')$, if LRSO$_b$ has already been called and $\exists j$ such that $\sigma_i' = \sigma_j''$ and ($l_i' = d_0$ or $l_i' = d_1$) then it returns $\perp$ to $\mathcal{A}$. Else, it proceeds as in the reduction of Claim 1.
+
+*Analysis:* Let $q$ be the number of queries asked to V.Sig$(\cdot, \cdot, \cdot)$ and let $E$ be the event "$\mathcal{B}$ does not abort the experiment of $\mathcal{A}$". As in Claim 1, we have:
+
+$$ \Pr[E] \geq \frac{1}{n} - \frac{q}{|G|} $$
+
+Let $\alpha, \beta$ be two elements of $G$ such that $X = g^\alpha$ and $Y = g^\beta$. Let $b$ be the solution to the DDH instance, i.e. $b = 1$ iff $Z = g^{\alpha \cdot \beta}$. We compute the probability that $\mathcal{B}$ wins its DDH experiment:
+
+$$
+\begin{align*}
+\Pr[b'_* = b] &= \Pr[E] \cdot \Pr[b'_* = b|E] + (1 - \Pr[E]) \cdot \Pr[b'_* = b|\neg E] \\
+&= \Pr[E] \cdot (\Pr[Z = g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \wedge (Z = g^{\alpha \cdot \beta})] \\
+&\quad + \Pr[Z \neq g^{\alpha \cdot \beta}] \cdot \Pr[b'_* = b|E \wedge (Z \neq g^{\alpha \cdot \beta})] - \frac{1}{2}) + \frac{1}{2} \\
+&= \Pr[E] \cdot (\frac{1}{2} \cdot (\frac{1}{2} \pm \lambda(k)) + \frac{1}{2} \cdot (\frac{1}{2} \pm \epsilon(k)) - \frac{1}{2}) + \frac{1}{2} \\
+&= (\pm\lambda(k) \pm \epsilon(k)) \cdot \frac{\Pr[E]}{2} + \frac{1}{2}
+\end{align*}
+$$
+
+Finally, we deduce the advantage of $\mathcal{B}$ against the DDH problem:
+
+$$
+\begin{align*}
+\left|\Pr[b'_* = b] - \frac{1}{2}\right| &= \left|\pm \lambda(k) \pm \epsilon(k)\right| \cdot \frac{\Pr[E]}{2} \\
+&\geq (\lambda(k) - \epsilon(k)) \cdot \left(\frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|}\right) \\
+&\geq \lambda(k) \cdot \frac{1}{2 \cdot n} - \left(\frac{q \cdot \lambda(k)}{2 \cdot |G|}\right) - \epsilon(k) \cdot \left(\frac{1}{2 \cdot n} - \frac{q}{2 \cdot |G|}\right) \\
+&\geq \lambda(k) \cdot \frac{1}{2 \cdot n} - \left(\frac{q \cdot \lambda(k)}{2 \cdot |G|}\right) - \frac{\epsilon(k)}{2 \cdot n}
+\end{align*}
+$$
+
+This advantage is non-negligible, which concludes the proof of Claim 2 and the proof of the lemma. $\square$
+
+**Lemma 3.** EVeR is $n$-acc secure for any polynomially bounded $n$ under the DL assumption in the random oracle model.
+
+*Proof.* We first recall that since LogEq$_n$ is valid, there exist a polynomial-time extractor $\mathcal{E}$ and a polynomial-time simulator Sim for LogEq$_n$. Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \text{Pr}[\text{Exp}_{\text{EVeR},\mathcal{A}}^{\text{n-acc}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.
+
+$\mathcal{B}$ description: $\mathcal{B}$ receives a DL instance $(G, p, g, Y)$ as input. For all $i \in \{1, \dots, n\}$, it picks $x_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$ and sets $\mathbf{pk}_i = Y^{x_i}$. $\mathcal{B}$ runs $x' \leftarrow \mathcal{E}^{\mathcal{A'}(\{\mathbf{pk}_i\}_{1 \le i \le n})}(k)$ where $\mathcal{A}'$ is the following algorithm:
+
+**Algorithm $\mathcal{A}'(\{\mathbf{pk}_i\}_{1 \le i \le n})$:** It runs $(L_*, m_*, \sigma_*, \mathbf{pk}_*, \pi_*) \leftarrow \mathcal{A}(\{\mathbf{pk}_i\}_{1 \le i \le n})$. It simulates the oracles to $\mathcal{A}$ as follows:
+
+**Random oracle $H(\cdot)$:** On the $i^{\text{th}}$ input $M_i$, if $\exists j < i$ such that $M_j = M_i$ then it sets $u_i = u_j$. Else it picks $u_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. Finally, it returns $g^{u_i}$.
+
+**Oracle V.Sig**$(\cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L_i, l_i, m_i)$, it picks $r_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$. It computes $h_i = H(m_i||r_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m_i||r_i = M_j$. It computes $z_i = \mathbf{pk}_{l_i}^{u_j}$ and it runs $P_i \leftarrow \text{Sim}(\{(h_i, z_i, g, \mathbf{pk}_l)\}_{\mathbf{pk}_l \in L_i})$. It returns $\sigma_i = (r_i, z_i, P_i)$ to $\mathcal{A}$.
+
+**Oracle V.Proof**$(\cdot, \cdot, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(L'_i, m'_i, \sigma'_i, l'_i)$, it parses $\sigma'_i = (r'_i, z'_i, P'_i)$. It computes $h'_i = H(m'_i||r'_i)$ using the oracle $H(\cdot)$, then there exists $j$ such that $m'_i||r'_i = M_j$. It computes $\bar{z}_i = \mathbf{pk}_{l'_i}^{u_j}$ and it runs $P'_i \leftarrow \text{Sim}((h'_i, \bar{z}_i, g, \mathbf{pk}_{l'_i}))$. It returns $(\bar{z}_i, P'_i)$ to $\mathcal{A}$.
+
+Finally, $\mathcal{A}'$ computes $h_* = H(m_*||r_*)$ using the random oracle $H(\cdot)$, parses $\sigma_* = (r_*, z_*, P_*)$ and returns $P_*$.
+
+*Analysis:* We parse $\sigma_* = (r_*, z_*, P_*)$ and $\pi_* = (\bar{z}_*, P_*')$. Suppose that $\mathcal{A}$ wins the experiment, then we have:
+
+$$L_* \subseteq \{\text{pk}_i\}_{1 \le i \le n} \cup \{\text{pk}_*\} \qquad (1)$$
+
+$$\text{V.Ver}(L_*, \sigma_*, m_*) = 1 \qquad (2)$$
+
+$$\text{V.Judge}(L_*, m_*, \sigma_*, \text{pk}_*, \pi_*) = 0 \qquad (3)$$
+
+$$\forall i \in \{1, \dots, q_S\}, \sigma_i \neq \sigma_* \qquad (4)$$
+
+Moreover, equation (4) implies that $\forall i \in \{1, \dots, q_S\}, P_i \neq P_*$, so $P_*$ was not generated by the simulator Sim. We deduce the following equation from (2):
+
+$$\text{LEverif}_{|L_*|}(\{(h_*, z_*, g, \text{pk}_l)\}_{\text{pk}_l \in L_*}, P_*) = 1 \qquad (5)$$
+
+Thus $\mathcal{A}$ returns a valid proof with non-negligible probability $\lambda(k)$. Since $\mathcal{E}$ is an extractor for LogEq$_n$, it implies that:
+
+$$\Pr[\exists \text{pk} \in L_*, x' = \log_g(\text{pk}) = \log_{h_*}(z_*)] \ge \lambda(k) \qquad (6)$$
+
+We deduce the following equation from (3):
+
+$$\bar{z}_* \neq z_* \qquad (7)$$
+
+$$\text{LEverif}_1(\{(h_*, \bar{z}_*, g, \text{pk}_*)\}, P_*') = 1 \qquad (8)$$
+
+Since LogEq$_n$ is sound, we deduce that there exists a negligible function $\epsilon$ such that:
+
+$$\Pr[\log_g(\text{pk}_*) = \log_{h_*}(\bar{z}_*)] \ge 1 - \epsilon(k) \qquad (9)$$
+
+$$\Rightarrow \Pr[\log_g(\text{pk}_*) \neq \log_{h_*}(z_*)] \ge 1 - \epsilon(k) \qquad (10)$$
+
+$$\Rightarrow \Pr[\log_g(\text{pk}_*) = \log_{h_*}(z_*)] \le \epsilon(k) \qquad (11)$$
+
+Finally, from (1), (6) and (11) we deduce the probability that $\mathcal{B}$ wins the experiment:
+
+$$\Pr[\exists \text{pk} \in L_*, x' = \log_g(\text{pk}) = \log_{h_*}(z_*)] \ge \lambda(k)$$
+
+$$\Rightarrow \Pr[\exists \text{pk} \in L_* \setminus \{\text{pk}_*\}, x' = \log_g(\text{pk}) = \log_{h_*}(z_*)] + \Pr[x' = \log_g(\text{pk}_*) = \log_{h_*}(z_*)] \ge \lambda(k)$$
+
+$$\Rightarrow \Pr[Y = g^x] + \Pr[x' = \log_g(\text{pk}_*) = \log_{h_*}(z_*)] \ge \lambda(k)$$
+
+$$\Rightarrow \Pr[Y = g^x] \ge \lambda(k) - \Pr[\log_g(\text{pk}_*) = \log_{h_*}(z_*)]$$
+
+$$\Rightarrow \Pr[Y = g^x] \ge \lambda(k) - \epsilon(k)$$
+
+Since $\Pr[Y = g^x]$ is non-negligible, $\mathcal{B}$ solves the DL problem with non-negligible probability. $\square$
+
+**Lemma 4.** EVeR is *n*-non-*usu*-2 secure for any polynomially bounded *n* under the DL assumption in the random oracle model.
+
+*Proof.* We first recall that since LogEq$_n$ is valid, there exist a polynomial-time extractor $\mathcal{E}$ and a polynomial-time simulator Sim for LogEq$_n$. Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{EVeR},\mathcal{A}}^{\text{n-non-usu-2}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ that solves the DL problem with non-negligible probability.
+
+$\mathcal{B}$ description: $\mathcal{B}$ receives a DL instance $(G, p, g, Y)$ as input and sets $\text{pk} = Y$. $\mathcal{B}$ runs $x \leftarrow \mathcal{E}^{\mathcal{A}'(\text{pk})}(k)$ where $\mathcal{A}'$ is the following algorithm:
+
+**Algorithm $\mathcal{A}'(\text{pk})$:** It runs $(L_*, m_*, \sigma_*) \leftarrow \mathcal{A}(\text{pk})$. It simulates the oracles to $\mathcal{A}$ as in the reduction of the previous proof. Finally, $\mathcal{A}'$ computes $h_* = H(m_*||r_*)$ using the random oracle $H(\cdot)$, parses $\sigma_* = (r_*, z_*, P_*)$ and returns $P_*$.
+
+Finally, $\mathcal{B}$ returns $x$.
+
+*Analysis:* We parse $\sigma_* = (r_*, z_*, P_*)$. Suppose that $\mathcal{A}$ wins the experiment, then we have, for any $\pi_* \leftarrow \text{V.Proof}(L_*, m_*, \sigma_*, \text{pk}, \text{sk})$ where $\pi_* = (\bar{z}_*, P'_*)$:
+
+$$
+\begin{align}
+\text{V.Ver}(L_*, \sigma_*, m_*) &= 1 \tag{12} \\
+\text{V.Judge}(L_*, m_*, \sigma_*, \text{pk}, \pi_*) &= 1 \tag{13} \\
+\forall i \in \{1, \dots, q_S\}, \sigma_i &\neq \sigma_* \tag{14}
+\end{align}
+$$
+
+Moreover, equation (14) implies that $\forall i \in \{1, \dots, q_S\}, P_i \neq P_*$, so $P_*$ was not generated by the simulator Sim. We deduce the following equation from (12):
+
+$$
+LEverif_{|L_*|}(\{(h_*, z_*, g, \text{pk}_l)\}_{\text{pk}_l \in L_*}, P_*) = 1 \quad (15)
+$$
+
+Thus $\mathcal{A}$ returns a valid proof with non-negligible probability $\lambda(k)$. Since $\mathcal{E}$ is an extractor for LogEq$_n$, it implies that:
+
+$$
+\Pr[\exists \mathbf{pk}_l \in L_*, x = \log_g(\mathbf{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \quad (16)
+$$
+
+We deduce the following equation from (13):
+
+$$
+\bar{z}_* = z_* \quad (17)
+$$
+
+$$
+LEverif_1(\{(h_*, z_*, g, \mathbf{pk})\}, P'_*) = 1 \quad (18)
+$$
+
+Since LogEq$_n$ is sound, we deduce that there exists a negligible function $\epsilon$ such that:
+
+$$
+\Pr[\log_g(\mathbf{pk}) = \log_{h_*}(z_*)] \geq 1 - \epsilon(k) \quad (19)
+$$
+
+$$
+\iff \Pr[\log_g(\mathbf{pk}) \neq \log_{h_*}(z_*)] \leq \epsilon(k) \quad (20)
+$$
+
+Finally, from (16) and (20) we deduce the probability that $\mathcal{B}$ wins the experiment:
+
+$$
+\begin{align*}
+& \Pr[\exists \text{pk}_l \in L_*, x = \log_g(\text{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \\
+& \Rightarrow \Pr[x = \log_g(\text{pk}) = \log_{h_*}(z_*)] + \Pr[\exists \text{pk}_l \in L_* \setminus \{\text{pk}\}, x = \log_g(\text{pk}_l) = \log_{h_*}(z_*)] \geq \lambda(k) \\
+& \Rightarrow \Pr[x = \log_g(\text{pk}) = \log_{h_*}(z_*)] \geq \lambda(k) - \Pr[\exists \text{pk}_l \in L_* \setminus \{\text{pk}\}, x = \log_g(\text{pk}_l) = \log_{h_*}(z_*)] \\
+& \Rightarrow \Pr[Y = g^x] \geq \lambda(k) - \Pr[\log_g(\text{pk}) \neq \log_{h_*}(z_*)] \\
+& \Rightarrow \Pr[Y = g^x] \geq \lambda(k) - \epsilon(k)
+\end{align*}
+$$
+
+Since $\Pr[Y = g^x]$ is non-negligible, $\mathcal{B}$ solves the DL problem with non-negligible probability. $\square$
+
+## D Security proofs of GUSS
+
+**Lemma 5.** If *D* is *unf* secure then *GUSS* is *immut* secure.
+
+*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \text{Pr}[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{immut}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\text{Pr}[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1]$ is non-negligible.
+
+$\mathcal{B}$ construction: $\mathcal{B}$ receives the public key $pk_d$ as input. It runs $init_v \leftarrow \text{V.Init}(1^k)$ and $(pk_v, sk_v) \leftarrow \text{V.Gen}(init_v)$. It sets $pk = (pk_d, pk_v)$ and runs $(spk_*, m_*, \sigma_*) \leftarrow \mathcal{A}(pk)$. During the experiment, $\mathcal{B}$ simulates the two oracles $\text{Sig}(\cdot, sk, \cdot, \cdot)$ and $\text{SiProof}(sk, \cdot, \cdot, \cdot)$ to $\mathcal{A}$ as follows:
+
+$\text{Sig}(\cdot, sk, \cdot, \cdot)$: On the $i^{\text{th}}$ input $(m_i, \text{ADM}_i, \text{spk}_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, sends $(M_i || \text{ADM}_i || \text{pk} || \text{spk}_i)$ to the oracle $\text{D.Sig}(\cdot, sk_d)$ and receives the signature $\sigma_{i,1}$. It runs $\sigma_{i,2} \leftarrow \text{V.Sig}(\{\text{pk}_v, \text{spk}_i\}, \text{sk}_v, (\sigma_{i,1} || m_i))$. It returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$.
+
+$\text{SiProof}(sk, \cdot, \cdot, \cdot):$ On the $i^{\text{th}}$ input $(m'_i, \sigma'_i, \text{spk}'_i)$, it parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$, runs $\pi'_{\text{Si},i} \leftarrow \text{V.Proof}(\{\text{pk}_v, \text{spk}'_i\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \text{pk}_v, \text{sk}_v)$ and returns it.
+
+Finally, $\mathcal{B}$ parses $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \text{ADM}_*)$, computes $M_* \leftarrow \text{FIX}_{\text{ADM}_*}(m_*)$ and returns the pair $((M_* || \text{ADM}_* || \text{pk} || \text{spk}_*), \sigma_{1,*})$.
+
+*Analysis:* We show that if $\mathcal{A}$ wins its experiment, then $\mathcal{B}$ also wins its experiment. Suppose that $\mathcal{A}$ wins its experiment; then the following equations hold:
+
+$$
+\begin{gather}
+\mathrm{Ver}(m_*, \sigma_*, \mathrm{pk}, \mathrm{spk}_*) = 1 \tag{21} \\
+\forall i \in \{1, \dots, q_{\mathrm{Sig}}\}, (\mathrm{spk}_* \neq \mathrm{spk}_i) \text{ or } (\mathrm{FIX}_{\mathrm{ADM}_*}(m_*) \neq \mathrm{FIX}_{\mathrm{ADM}_i}(m_i)) \tag{22}
+\end{gather}
+$$
+
+(21) implies the following equation:
+
+$$ D.\operatorname{Ver}(\mathrm{pk}_d, (M_* || \mathrm{ADM}_* || \mathrm{pk} || \mathrm{spk}_*), \sigma_{1,*}) = 1 $$
+
+Moreover, (22) implies that:
+
+$$
+\forall i \in \{1, \dots, q_{\mathrm{Sig}}\}, (M_* || \mathrm{ADM}_* || \mathrm{pk} || \mathrm{spk}_*) \neq (M_i || \mathrm{ADM}_i || \mathrm{pk} || \mathrm{spk}_i)
+$$
+
+We deduce that $\mathcal{B}$ never sends the message $(M_* || \mathrm{ADM}_* || \mathrm{pk} || \mathrm{spk}_*)$ to the oracle $\mathrm{D.Sig}(sk_d, \cdot)$. Hence, if $\mathcal{A}$ wins its experiment, then $\mathcal{B}$ wins its experiment, thus
+$$
+\Pr[\mathrm{Exp}_{\mathcal{D},\mathcal{B}}^{\mathrm{unf}}(k) = 1] \geq \lambda(k)
+\hspace*{\fill} \square
+$$
+
+**Lemma 6.** If *V* is *2-ano* secure then *GUSS* is *trans* secure.
+
+*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\textit{trans}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{\mathcal{V},\mathcal{B}}^{\textit{2-ano}}(k) = 1]$ is non-negligible.
+---PAGE_BREAK---
+
+$\mathcal{B}_1$ receives $(\mathbf{pk}_v, \mathbf{spk})$ as input and returns $(1, 2)$. $\mathcal{B}_2$ runs $(\mathbf{pk}_d, \mathbf{sk}_d) \leftarrow \text{D.Gen}(\text{D.Init}(1^k))$ and sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$. It runs $b' \leftarrow \mathcal{A}(\mathbf{pk}, \mathbf{spk})$ and returns $b'$. During the experiment, $\mathcal{B}_2$ simulates the oracles to $\mathcal{A}$ as follows:
+
+**Sig**$(\cdot, sk, \cdot, \cdot)$: On the $i$-th input $(m_i, \text{ADM}_i, \text{spk}_i)$, $\mathcal{B}_2$ first computes the fixed message part $\tilde{M}_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, runs $\sigma_{i,1} \leftarrow \text{D.Sig}(\text{sk}_d, (\tilde{M}_i || \text{ADM}_i || \mathbf{pk} || \text{spk}_i))$ and sends $(\{\mathbf{pk}_v, \text{spk}_i\}, 1, (m_i || \sigma_{i,1}))$ to the oracle $\text{V.Sig}(\cdot,\cdot,\cdot)$, which returns the signature $\sigma_{i,2}$. $\mathcal{B}_2$ returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$ to $\mathcal{A}$.
+
+**San**$(\cdot, \cdot, \cdot, \cdot, ssk)$: On the $i$-th input $(m'_i, \text{MOD}'_i, \sigma'_i, \mathbf{pk}'_i)$, $\mathcal{B}_2$ parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$ and $\mathbf{pk}'_i = (\mathbf{pk}'_{d,i}, \mathbf{pk}'_{v,i})$. It first computes the modified message $\bar{m}'_i \leftarrow \text{MOD}'_i(m'_i)$ and sends $(\{\mathbf{pk}'_{v,i}, \mathbf{spk}\}, 2, (\bar{m}'_i || \sigma'_{i,1}))$ to the oracle $\text{V.Sig}(\cdot,\cdot,\cdot)$, which returns the signature $\bar{\sigma}'_{i,2}$. $\mathcal{B}_2$ returns $\bar{\sigma}'_i = (\sigma'_{i,1}, \bar{\sigma}'_{i,2}, \text{ADM}'_i)$ to $\mathcal{A}$.
+
+**SiProof**$(\mathbf{sk}, \cdot, \cdot, \cdot)$: On the $i$-th input $(m''_i, \sigma''_i, \mathbf{spk}''_i)$, $\mathcal{B}_2$ parses $\sigma''_i = (\sigma''_{i,1}, \sigma''_{i,2}, \text{ADM}''_i)$. It sends $(\{\mathbf{pk}_v, \mathbf{spk}''_i\}, (m''_i || \sigma''_{i,1}), \sigma''_{i,2}, \mathbf{pk}_v, 1)$ to the oracle $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$, which returns the proof $\pi''_{\text{Si},i}$. Finally, $\mathcal{B}_2$ returns $\pi''_{\text{Si},i}$.
+
+**SaProof**$(\mathbf{ssk}, \cdot, \cdot, \cdot)$: On the $i$-th input $(m'''_i, \sigma'''_i, \mathbf{pk}'''_i)$, $\mathcal{B}_2$ parses $\sigma'''_i = (\sigma'''_{i,1}, \sigma'''_{i,2}, \text{ADM}'''_i)$ and $\mathbf{pk}'''_i = (\mathbf{pk}'''_{d,i}, \mathbf{pk}'''_{v,i})$. It sends $(\{\mathbf{pk}'''_{v,i}, \mathbf{spk}\}, (m'''_i || \sigma'''_{i,1}), \sigma'''_{i,2}, \mathbf{spk}, 2)$ to the oracle $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$, which returns the proof $\pi'''_{\text{Sa},i}$. Finally, $\mathcal{B}_2$ returns $\pi'''_{\text{Sa},i}$.
+
+**Sa/Si**$(b, \mathbf{pk}, \mathbf{spk}, sk, ssk, \cdot, \cdot, \cdot)$: On the $i$-th input $(\tilde{m}_i, \tilde{\text{ADM}}_i, \tilde{\text{MOD}}_i)$, if $\tilde{\text{ADM}}_i(\tilde{\text{MOD}}_i) = 0$, $\mathcal{B}_2$ returns $\perp$. Else $\mathcal{B}_2$ computes the fixed message part $\tilde{M}_i \leftarrow \text{FIX}_{\tilde{\text{ADM}}_i}(\tilde{m}_i)$, runs $\tilde{\sigma}_{i,1} \leftarrow \text{D.Sig}(\text{sk}_d, (\tilde{M}_i || \tilde{\text{ADM}}_i || \mathbf{pk} || \mathbf{spk}))$ and sends $((\tilde{\text{MOD}}_i(\tilde{m}_i) || \tilde{\sigma}_{i,1}), \{\mathbf{pk}_v, \mathbf{spk}\})$ to the oracle $\text{LRSO}_b(1, 2, \cdot, \cdot)$, which returns the signature $\tilde{\sigma}_{i,2}$. $\mathcal{B}_2$ returns $\tilde{\sigma}_i = (\tilde{\sigma}_{i,1}, \tilde{\sigma}_{i,2}, \tilde{\text{ADM}}_i)$ to $\mathcal{A}$.
+
+*Analysis:* Suppose that $\mathcal{A}$ wins its experiment; then $b = b'$ and:
+
+$$ S_{\text{Sa/Si}} \cap (S_{\text{SiProof}} \cup S_{\text{SaProof}}) = \emptyset $$
+
+where $S_{\text{Sa/Si}}$ (resp. $S_{\text{SiProof}}$ and $S_{\text{SaProof}}$) corresponds to the set of all signatures outputted by the oracle Sa/Si (resp. sent to the oracles SiProof and SaProof). It implies that the messages sent to the oracle $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$ were not already signed by $\text{LRSO}_b(1, 2, \cdot, \cdot)$. More formally, we have:
+
+$$ \forall i,j \in \{1,\dots,\max(q_S,q_P)\}, (\sigma_i \neq \sigma_j') $$
+
+where $q_S$ (resp. $q_P$) is the number of calls to the oracle $\text{V.Sig}(\cdot,\cdot,\cdot)$ (resp. $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$). Finally, the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:
+
+$$ \Pr[\mathrm{Exp}_{V,\mathcal{B}}^{2\text{-ano}}(k) = 1] \geq \lambda(k) $$
+
+which concludes the proof. □
+
+**Lemma 7.** If *D* is *unf* secure then *GUSS* is unlink secure.
+
+*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = |\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k) = 1] - 1/2|$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{\mathcal{D},\mathcal{B}}^{\text{unf}}(k) = 1]$ is non-negligible.
+---PAGE_BREAK---
+
+$\mathcal{B}$ construction: $\mathcal{B}$ receives $\mathrm{pk}_d$ as input, runs $(\mathrm{pk}_v, \mathrm{sk}_v) \leftarrow \text{V.Gen}(\text{V.Init}(1^k))$ and $(\mathrm{spk}, \mathrm{ssk}) \leftarrow \text{V.Gen}(\text{V.Init}(1^k))$, and sets $\mathrm{pk} = (\mathrm{pk}_d, \mathrm{pk}_v)$. It chooses $b \stackrel{\$}{\leftarrow} \{0, 1\}$ and runs $b' \leftarrow \mathcal{A}(\mathrm{pk}, \mathrm{spk})$. During the experiment, $\mathcal{B}$ simulates the oracles to $\mathcal{A}$ as follows:
+
+**Sig**$(\cdot, sk, \cdot, \cdot)$: On the $i^{th}$ input $(m_i, ADM_i, spk_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{ADM_i}(m_i)$, sends $(M_i || ADM_i || pk || spk_i)$ to the oracle $\text{D.Sig}(sk_d, \cdot)$ and receives the signature $\sigma_{i,1}$. It runs $\sigma_{i,2} \leftarrow \text{V.Sig}(\{pk_v, spk_i\}, sk_v, (\sigma_{i,1} || m_i))$. It returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, ADM_i)$.
+
+**SiProof**$(sk, \cdot, \cdot, \cdot)$: On the $i^{th}$ input $(m'_i, \sigma'_i, spk'_i)$, it parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, ADM'_i)$, runs $\pi'_{si,i} \leftarrow \text{V.Proof}(\{pk_v, spk'_i\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, pk_v, sk_v)$ and returns it.
+
+**San**$(\cdot, \cdot, \cdot, \cdot, ssk)$: On the $i^{th}$ input $(m''_i, MOD''_i, \sigma''_i, pk''_i)$, $\mathcal{B}$ runs $\bar{\sigma}''_i \leftarrow \text{San}(m''_i, MOD''_i, \sigma''_i, pk''_i, ssk)$ and returns $\bar{\sigma}''_i$ to $\mathcal{A}$.
+
+**SaProof**$(ssk, \cdot, \cdot, \cdot)$: On the $i^{th}$ input $(m'''_i, \sigma'''_i, pk'''_i)$, $\mathcal{B}$ runs $\pi'''_{sa,i} \leftarrow \text{SaProof}(ssk, m'''_i, \sigma'''_i, pk'''_i)$ and returns $\pi'''_{sa,i}$ to $\mathcal{A}$.
+
+**LRSan**$(b, pk, ssk, \cdot, \cdot)$: On the $i^{th}$ input $((\tilde{m}_{0,i}, \tilde{\text{MOD}}_{0,i}, \tilde{\sigma}_{0,i}), (\tilde{m}_{1,i}, \tilde{\text{MOD}}_{1,i}, \tilde{\sigma}_{1,i}))$, if for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, pk, spk) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, then this oracle returns $\tilde{\sigma}'_i = (\tilde{\sigma}'_{1,b,i}, \tilde{\sigma}'_{2,b,i}, \tilde{\text{ADM}}'_{b,i}) \leftarrow \text{San}(\tilde{m}_{b,i}, \tilde{\text{MOD}}_{b,i}, \tilde{\sigma}_{b,i}, pk, ssk)$ to $\mathcal{A}$, else it returns $\perp$. Moreover, if these conditions hold and there exists $x$ such that $\tilde{\sigma}_{x,i}$ was not already outputted by the oracle $\text{D.Sig}(sk_d, \cdot)$, then $\mathcal{B}$ returns $((\text{FIX}_{\tilde{\text{ADM}}_{x,i}}(\tilde{m}_{x,i}) || \tilde{\text{ADM}}_{x,i} || pk || spk), \tilde{\sigma}_{x,i})$ to the challenger and aborts the experiment for $\mathcal{A}$.
+
+If $B$ has not already aborted the experiment, then it returns $\perp$.
+
+*analyze*: First observe that, if for any $i \in \{1, ..., q\}$ where $q$ is the number of queries to the oracle LRSan($b$, `pk`, `ssk`, `..`), for $j \in \{0, 1\}$, Ver($\tilde{m}_{j,i}$, $\tilde{\sigma}_{j,i}$, `pk`, `spk`) = 1 and $\tilde{ADM}_{0,i} = \tilde{ADM}_{1,i}$ and $\tilde{ADM}_{j,i}(\tilde{MOD}_{j,i}) = 1$ and $\tilde{MOD}_{0,i}(\tilde{m}_{0,i}) = \tilde{MOD}_{1,i}(\tilde{m}_{1,i})$, and $\tilde{\sigma}_{j,i}$ was already outputted by the oracle D.Sig(`sk_d`, `·`), then
+
+$$
+\mathrm{FIX}_{\tilde{\mathrm{ADM}}_{0,i}}(\tilde{m}_{0,i}) \,||\, \tilde{\mathrm{ADM}}_{0,i} \,||\, \mathrm{pk} \,||\, \mathrm{spk} = \mathrm{FIX}_{\tilde{\mathrm{ADM}}_{1,i}}(\tilde{m}_{1,i}) \,||\, \tilde{\mathrm{ADM}}_{1,i} \,||\, \mathrm{pk} \,||\, \mathrm{spk}
+$$
+
+Since *D* is deterministic, we deduce that the first parts of the two challenge signatures are equal, so $\tilde{\sigma}'_{1,b,i}$ does not depend on *b*. On the other hand, the second part of the outputted signature, $\tilde{\sigma}'_{2,b,i}$, does not depend on *b*. Finally, $\tilde{\text{ADM}}'_{b,i} = \tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$, so $\tilde{\text{ADM}}'_{b,i}$ does not depend on *b*. We deduce that the outputted signature $\tilde{\sigma}'_{b,i}$ leaks no information about *b*. In this case, the best strategy for $\mathcal{A}$ to win the experiment is to guess the bit $b'$ at random.
+
+On the other hand, if there exists $i \in \{1, \dots, q\}$, where $q$ is the number of queries to the oracle LRSan($b$, pk, ssk, $\cdot$, $\cdot$), such that for $j \in \{0, 1\}$, $\text{Ver}(\tilde{m}_{j,i}, \tilde{\sigma}_{j,i}, pk, spk) = 1$ and $\tilde{\text{ADM}}_{0,i} = \tilde{\text{ADM}}_{1,i}$ and $\tilde{\text{ADM}}_{j,i}(\tilde{\text{MOD}}_{j,i}) = 1$ and $\tilde{\text{MOD}}_{0,i}(\tilde{m}_{0,i}) = \tilde{\text{MOD}}_{1,i}(\tilde{m}_{1,i})$, and there exists $x$ such that $\tilde{\sigma}_{x,i}$ was not already outputted by the oracle $\text{D.Sig}(sk_d, \cdot)$, then $\mathcal{B}$ returns $((\text{FIX}_{\tilde{\text{ADM}}_{x,i}}(\tilde{m}_{x,i}) || \tilde{\text{ADM}}_{x,i} || pk || spk), \tilde{\sigma}_{x,i})$ to the challenger and wins its experiment. We denote this event by $E$. We have:
+
+$$
+\Pr[\mathrm{Exp}_{D,B}^{\mathrm{unf}}(k) = 1] \geq \Pr[E]
+$$
+---PAGE_BREAK---
+
+On the other hand, we have:
+
+$$
+\begin{align*}
+\Pr[\mathrm{Exp}_{\mathrm{GUSS}, \mathcal{A}}^{\mathrm{unlink}}(k) = 1] &= \Pr[E] \cdot \Pr[\mathrm{Exp}_{\mathrm{GUSS}, \mathcal{A}}^{\mathrm{unlink}}(k) = 1|E] \\
+&\quad + (1 - \Pr[E]) \cdot \Pr[\mathrm{Exp}_{\mathrm{GUSS}, \mathcal{A}}^{\mathrm{unlink}}(k) = 1|\neg E] \\
+&= \Pr[E] \cdot \Pr[\mathrm{Exp}_{\mathrm{GUSS}, \mathcal{A}}^{\mathrm{unlink}}(k) = 1|E] + \frac{1}{2} - \frac{1}{2} \cdot \Pr[E]
+\end{align*}
+$$
+
+It implies that:
+
+$$
+\Pr[E] = \frac{\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1] - \frac{1}{2}}{\Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{unlink}}(k)=1 \mid E] - \frac{1}{2}} \geq \lambda(k)
+$$
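+
+The final inequality deserves a short justification (a sketch; it uses only that $\Pr[E] \geq 0$ and that the conditional probability lies in $[0,1]$):
+
+```latex
+% Pr[E] is nonnegative, so numerator and denominator have the same
+% sign and the quotient equals the quotient of absolute values.
+\Pr[E]
+  = \frac{\bigl|\Pr[\mathrm{Exp}^{\mathrm{unlink}}_{\mathrm{GUSS},\mathcal{A}}(k)=1] - \tfrac{1}{2}\bigr|}
+         {\bigl|\Pr[\mathrm{Exp}^{\mathrm{unlink}}_{\mathrm{GUSS},\mathcal{A}}(k)=1 \mid E] - \tfrac{1}{2}\bigr|}
+  = \frac{\lambda(k)}
+         {\bigl|\Pr[\mathrm{Exp}^{\mathrm{unlink}}_{\mathrm{GUSS},\mathcal{A}}(k)=1 \mid E] - \tfrac{1}{2}\bigr|}
+  \geq \lambda(k)
+% since the denominator is at most 1/2, hence at most 1.
+```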
+
+Finally, we deduce that
+
+$$
+\Pr[\mathrm{Exp}_{D,B}^{\mathrm{unf}}(k) = 1] \geq \lambda(k)
+$$
+
+which concludes the proof. □
+
+**Lemma 8.** If V is 1-acc secure then GUSS is SiAcc-1 secure.
+
+*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \mathrm{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\mathrm{Exp}_{\mathrm{GUSS},\mathcal{A}}^{\mathrm{SiAcc-1}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \mathrm{POLY}(k)$ such that $\Pr[\mathrm{Exp}_{V,\mathcal{B}}^{\mathrm{1-acc}}(k) = 1]$ is non-negligible.
+
+$\mathcal{B}$ construction: $\mathcal{B}$ receives $\mathbf{spk}$ as input and runs $(\mathbf{pk}_*, m_*, \sigma_*, \pi_{si,*}) \leftarrow \mathcal{A}(\mathbf{spk})$. During the experiment, $\mathcal{B}$ simulates the oracles to $\mathcal{A}$ as follows:
+
+San$(\cdot,\cdot,\cdot,\cdot,ssk)$: On the $i$-th input $(m_i, \mathrm{MOD}_i, \sigma_i, \mathbf{pk}_i)$, $\mathcal{B}$ parses $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \mathrm{ADM}_i)$ and $\mathbf{pk}_i = (\mathbf{pk}_{d,i}, \mathbf{pk}_{v,i})$. It first computes the modified message $\bar{m}_i \leftarrow \mathrm{MOD}_i(m_i)$ and sends $(\{\mathbf{pk}_{v,i}, \mathbf{spk}\}, 1, (\bar{m}_i || \sigma_{i,1}))$ to the oracle $\mathrm{V.Sig}(\cdot,\cdot,\cdot)$, which returns the signature $\bar{\sigma}_{i,2}$. $\mathcal{B}$ returns $\bar{\sigma}_i = (\sigma_{i,1}, \bar{\sigma}_{i,2}, \mathrm{ADM}_i)$ to $\mathcal{A}$.
+
+SaProof$(ssk,\cdot,\cdot,\cdot)$: On the $i$-th input $(m'_i, \sigma'_i, \mathbf{pk}'_i)$, $\mathcal{B}$ parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \mathrm{ADM}'_i)$ and $\mathbf{pk}'_i = (\mathbf{pk}'_{d,i}, \mathbf{pk}'_{v,i})$. It sends $(\{\mathbf{pk}'_{v,i}, \mathbf{spk}\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \mathbf{spk}, 1)$ to the oracle $\mathrm{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$, which returns the proof $\pi'_{\mathrm{sa},i}$. Finally, $\mathcal{B}$ returns $\pi'_{\mathrm{sa},i}$ to $\mathcal{A}$.
+
+Finally, $\mathcal{B}$ parses $\mathbf{pk}_* = (\mathbf{pk}_{d,*}, \mathbf{pk}_{v,*})$ and $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \mathrm{ADM}_*)$ and returns $(\{\mathbf{spk}, \mathbf{pk}_{v,*}\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \mathbf{pk}_{v,*}, \pi_{si,*})$ to the challenger.
+
+*Analysis:* Suppose that $\mathcal{A}$ wins its experiment; then:
+
+$$
+\forall i \in \{1, \dots, q_{\text{San}}\}, (\sigma_* \neq \sigma'_i) \tag{23}
+$$
+
+$$
+\operatorname{Ver}(m_*, \sigma_*, \mathbf{pk}_*, \mathbf{spk}) = 1 \tag{24}
+$$
+
+$$
+\operatorname{SiJudge}(m_*, \sigma_*, \mathbf{pk}_*, \mathbf{spk}, \pi_{si,*}) = 0 \tag{25}
+$$
+
+where $q_{\text{San}}$ is the number of calls to the oracle San$(\cdot,\cdot,\cdot,\cdot,ssk)$. First note that $\{\text{spk}, \text{pk}_{v,*}\} = \{\text{spk}\} \cup \{\text{pk}_{v,*}\}$. (23) implies that:
+
+$$
+\forall i \in \{1, \dots, q_S\}, \sigma_{2,*} \neq \bar{\sigma}_{i,2}
+$$
+---PAGE_BREAK---
+
+where $q_S$ is the number of queries to $\mathrm{V.Sig}(\cdot,\cdot,\cdot)$. Indeed, if $\sigma_* \neq \sigma'_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ or $\sigma_{2,*} \neq \sigma_{2,i}$ or $\text{ADM}_* \neq \text{ADM}_i$: if $\text{ADM}_* \neq \text{ADM}_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ because $\sigma_{1,*}$ (resp. $\sigma_{1,i}$) is a signature of $\text{ADM}_*$ (resp. $\text{ADM}_i$), and if $\sigma_{1,*} \neq \sigma_{1,i}$ then $\sigma_{2,*} \neq \sigma_{2,i}$ because $\sigma_{2,*}$ (resp. $\sigma_{2,i}$) is a signature of $\sigma_{1,*}$ (resp. $\sigma_{1,i}$). Thus, in all cases, $\sigma_{2,*} \neq \bar{\sigma}_{i,2}$.
+
+On the other hand, (24) implies that:
+
+$$
+V.Ver(\{\text{spk}, \text{pk}_{v,*}\}, \sigma_{2,*}, m_* || \sigma_{1,*}) = 1
+$$
+
+Finally, (25) implies that:
+
+$$
+\mathrm{V.Judge}(\{\text{spk}, \text{pk}_{v,*}\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \text{pk}_{v,*}, \pi_{si,*}) = 0
+$$
+
+We deduce that the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:
+
+$$
+\Pr[\mathrm{Exp}_{V,\mathcal{B}}^{\mathrm{1-acc}}(k) = 1] \geq \lambda(k)
+$$
+
+which concludes the proof. □
+
+**Lemma 9.** If *V* is 1-*non-usu-2* secure then GUSS is SaAcc-1 secure.
+
+*Proof.* Suppose that there exists an adversary $\mathcal{A} \in \text{POLY}(k)$ such that the advantage $\lambda(k) = \Pr[\text{Exp}_{\text{GUSS},\mathcal{A}}^{\text{SaAcc-1}}(k) = 1]$ is non-negligible. We show how to build an algorithm $\mathcal{B} \in \text{POLY}(k)$ such that $\Pr[\text{Exp}_{V,\mathcal{B}}^{\text{1-non-usu-2}}(k) = 1]$ is non-negligible.
+
+$\mathcal{B}$ construction: $\mathcal{B}$ receives $\mathbf{pk}_v$ as input, generates $(\mathbf{pk}_d, \mathbf{sk}_d) \leftarrow \text{D.Gen}(\text{D.Init}(1^k))$, sets $\mathbf{pk} = (\mathbf{pk}_d, \mathbf{pk}_v)$ and runs $(\mathbf{spk}_*, m_*, \sigma_*) \leftarrow \mathcal{A}(\mathbf{pk})$. During the experiment, $\mathcal{B}$ simulates the oracles to $\mathcal{A}$ as follows:
+
+Sig$(\cdot, sk, \cdot, \cdot)$: On the $i$-th input $(m_i, \text{ADM}_i, \text{spk}_i)$, $\mathcal{B}$ first computes the fixed message part $M_i \leftarrow \text{FIX}_{\text{ADM}_i}(m_i)$, runs $\sigma_{i,1} \leftarrow \text{D.Sig}(sk_d, (M_i || \text{ADM}_i || \mathbf{pk} || \mathbf{spk}_i))$ and sends $(\{\mathbf{pk}_v, \mathbf{spk}_i\}, 1, (m_i || \sigma_{i,1}))$ to the oracle $\text{V.Sig}(\cdot,\cdot,\cdot)$, which returns the signature $\sigma_{i,2}$. $\mathcal{B}$ returns $\sigma_i = (\sigma_{i,1}, \sigma_{i,2}, \text{ADM}_i)$ to $\mathcal{A}$.
+
+SiProof$(sk, \cdot, \cdot, \cdot)$: On the $i$-th input $(m'_i, \sigma'_i, \text{spk}'_i)$, $\mathcal{B}$ parses $\sigma'_i = (\sigma'_{i,1}, \sigma'_{i,2}, \text{ADM}'_i)$. It sends $(\{\mathbf{pk}_v, \mathbf{spk}'_i\}, (m'_i || \sigma'_{i,1}), \sigma'_{i,2}, \mathbf{pk}_v, 1)$ to the oracle $\text{V.Proof}(\cdot,\cdot,\cdot,\cdot,\cdot)$, which returns the proof $\pi'_{\text{si},i}$. Finally, $\mathcal{B}$ returns $\pi'_{\text{si},i}$.
+
+Finally, $\mathcal{B}$ parses $\sigma_* = (\sigma_{1,*}, \sigma_{2,*}, \text{ADM}_*)$ and returns $(\{\text{spk}_*, \text{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*})$ to the challenger.
+
+*Analysis:* Suppose that $\mathcal{A}$ wins its experiment. Then, for any $\pi_{\text{si},*} \leftarrow \text{SiProof}(\text{sk}, m_*, \sigma_*, \text{spk}_*)$:
+
+$$
+\forall i \in \{1, \dots, q_{\text{Sig}}\}, (\sigma_* \neq \sigma'_i) \tag{26}
+$$
+
+$$
+\mathrm{Ver}(m_*, \sigma_*, \mathrm{pk}, \mathrm{spk}_*) = 1 \tag{27}
+$$
+
+$$
+\mathrm{SaJudge}(m_*, \sigma_*, \mathrm{pk}, \mathrm{spk}_*, \pi_{\mathrm{si},*}) = 1 \tag{28}
+$$
+
+where $q_{\text{Sig}}$ is the number of calls to the oracle $\text{Sig}(\cdot, sk, \cdot, \cdot)$. First note that (26) implies that:
+
+$$
+\forall i \in \{1, \dots, q_S\}, \sigma_{2,*} \neq \bar{\sigma}_{i,2}
+$$
+---PAGE_BREAK---
+
+where $q_S$ is the number of queries to $\mathrm{V.Sig}(\cdot,\cdot,\cdot)$. Indeed, if $\sigma_* \neq \sigma'_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ or $\sigma_{2,*} \neq \sigma_{2,i}$ or $\text{ADM}_* \neq \text{ADM}_i$: if $\text{ADM}_* \neq \text{ADM}_i$ then $\sigma_{1,*} \neq \sigma_{1,i}$ because $\sigma_{1,*}$ (resp. $\sigma_{1,i}$) is a signature of $\text{ADM}_*$ (resp. $\text{ADM}_i$), and if $\sigma_{1,*} \neq \sigma_{1,i}$ then $\sigma_{2,*} \neq \sigma_{2,i}$ because $\sigma_{2,*}$ (resp. $\sigma_{2,i}$) is a signature of $\sigma_{1,*}$ (resp. $\sigma_{1,i}$). Thus, in all cases, $\sigma_{2,*} \neq \bar{\sigma}_{i,2}$.
+
+On the other hand, (27) implies that:
+
+$$
+V.Ver(\{\text{spk}_*, \text{pk}_v\}, \sigma_{2,*}, m_* || \sigma_{1,*}) = 1
+$$
+
+Moreover, $\pi_{\mathrm{si},*}$ cannot be equal to $\perp$ since it is computed by the proof algorithm from a valid signature. Hence (28) implies that:
+
+$$
+V.Judge(\{\text{spk}_*, \text{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \text{pk}_v, \pi_{si,*}) = 0
+$$
+
+Finally, note that since $\pi_{si,*} \leftarrow SiProof(\mathbf{sk}, m_*, \sigma_*, \mathbf{spk}_*)$ then:
+
+$$
+\pi_{si,*} \leftarrow V.Proof(\{\text{spk}_*, \text{pk}_v\}, m_* || \sigma_{1,*}, \sigma_{2,*}, \text{pk}_v, \text{sk}_v)
+$$
+
+We deduce that the probability that $\mathcal{B}$ wins its experiment is the same as the probability that $\mathcal{A}$ wins its experiment:
+
+$$
+\Pr[\text{Exp}_{V,\mathcal{B}}^{1\text{-non-usu-}2}(k) = 1] \geq \lambda(k)
+$$
+
+which concludes the proof.
+□
\ No newline at end of file
diff --git a/samples/texts_merged/6859646.md b/samples/texts_merged/6859646.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f787b531bd4bb8ab3e5a0d0a794f8d9ab429252
--- /dev/null
+++ b/samples/texts_merged/6859646.md
@@ -0,0 +1,2485 @@
+
+---PAGE_BREAK---
+
+# Secondary School Examination-2020
+## Marking Scheme - MATHEMATICS STANDARD
+
+**Subject Code: 041 Paper Code: 30/2/1, 30/2/2, 30/2/3**
+
+### General instructions
+
+1. You are aware that evaluation is the most important process in the actual and correct assessment of the candidates. A small mistake in evaluation may lead to serious problems which may affect the future of the candidates, education system and teaching profession. To avoid mistakes, it is requested that before starting evaluation, you must read and understand the spot evaluation guidelines carefully. Evaluation is a 10-12 days mission for all of us. Hence, it is necessary that you put in your best efforts in this process.
+
+2. Evaluation is to be done as per instructions provided in the Marking Scheme. It should not be done according to one's own interpretation or any other consideration. Marking Scheme should be strictly adhered to and religiously followed. However, while evaluating, answers which are based on latest information or knowledge and/or are innovative, they may be assessed for their correctness otherwise and marks be awarded to them. In class-X, while evaluating two competency based questions, please try to understand given answer and even if reply is not from marking scheme but correct competency is enumerated by the candidate, marks should be awarded.
+
+3. The Head-Examiner must go through the first five answer books evaluated by each evaluator on the first day, to ensure that evaluation has been carried out as per the instructions given in the Marking Scheme. The remaining answer books meant for evaluation shall be given only after ensuring that there is no significant variation in the marking of individual evaluators.
+
+4. Evaluators will mark (√) wherever the answer is correct. For a wrong answer, 'X' should be marked. Sometimes evaluators put a right mark (√) while evaluating, which gives the impression that the answer is correct, yet no marks are awarded. This is the **most common mistake which evaluators are committing**.
+
+5. If a question has parts, please award marks on the right-hand side for each part. Marks awarded for different parts of the question should then be totaled up and written in the left-hand margin and encircled. This may be followed strictly.
+
+6. If a question does not have any parts, marks must be awarded in the left-hand margin and encircled. This may also be followed strictly.
+
+7. If a student has attempted an extra question, answer of the question deserving more marks should be retained and the other answer scored out.
+
+8. No marks to be deducted for the cumulative effect of an error. It should be penalized only once.
+
+9. A full scale of marks (0-80 marks as given in Question Paper) has to be used. Please do not hesitate to award full marks if the answer deserves it.
+
+10. Every examiner has to necessarily do evaluation work for full working hours i.e. 8 hours every day and evaluate 20 answer books per day in main subjects and 25 answer books per day in other subjects (Details are given in Spot Guidelines).
+
+11. Ensure that you do not make the following common types of errors committed by the Examiner in the past:
+* Leaving answer or part thereof unassessed in an answer book.
+* Giving more marks for an answer than assigned to it.
+* Wrong totaling of marks awarded on a reply.
+* Wrong transfer of marks from the inside pages of the answer book to the title page.
+* Wrong question wise totaling on the title page.
+* Wrong totaling of marks of the two columns on the title page.
+* Wrong grand total.
+* Marks in words and figures not tallying.
+* Wrong transfer of marks from the answer book to online award list.
+* Answers marked as correct, but marks not awarded. (Ensure that the right tick mark is correctly and clearly indicated. It should merely be a line. Same is with the X for incorrect answer.)
+* Half or a part of answer marked correct and the rest as wrong, but no marks awarded.
+
+12. While evaluating the answer books if the answer is found to be totally incorrect, it should be marked as cross (X) and awarded zero (0) Marks.
+
+13. Any unassessed portion, non-carrying over of marks to the title page, or totaling error detected by the candidate shall damage the prestige of all the personnel engaged in the evaluation work as also of the Board. Hence, in order to uphold the prestige of all concerned, it is again reiterated that the instructions be followed meticulously and judiciously.
+
+14. The Examiners should acquaint themselves with the guidelines given in the Guidelines for spot Evaluation before starting the actual evaluation.
+
+15. Every Examiner shall also ensure that all the answers are evaluated, marks carried over to the title page, correctly totaled and written in figures and words.
+
+16. The Board permits candidates to obtain photocopy of the Answer Book on request in an RTI application and also separately as a part of the re-evaluation process on payment of the processing charges.
+---PAGE_BREAK---
+
+QUESTION PAPER CODE 30/2/1
+EXPECTED ANSWER/VALUE POINTS
+SECTION - A
+
+Question numbers 1 to 10 are multiple choice questions of 1 mark each.
+
+You have to select the correct choice :
+
+Marks
+
+Q.No.
+
+1. The sum of exponents of prime factors in the prime-factorisation of 196 is
+ (a) 3
+ (b) 4
+ (c) 5
+ (d) 2
+ **Ans:** (b) 4
+
+1
+
+2. Euclid's division Lemma states that for two positive integers a and b, there exists unique integer q and r satisfying a = bq + r, and
+ (a) $0 < r < b$
+ (b) $0 < r \leq b$
+ (c) $0 \leq r < b$
+ (d) $0 \leq r \leq b$
+ **Ans:** (c) $0 \leq r < b$
+
+1
+
+3. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are
+ (a) $m, m+3$
+ (b) $-m, m+3$
+ (c) $m, -(m+3)$
+ (d) $-m, -(m+3)$
+ **Ans:** (b) $-m, m+3$
+
+1
+
+4. The value of k for which the system of linear equations $x + 2y = 3$, $5x + ky + 7 = 0$ is inconsistent is
+ (a) $-\frac{14}{3}$
+ (b) $\frac{2}{5}$
+ (c) 5
+ (d) 10
+ **Ans:** (d) 10
+
+1
+
+5. The roots of the quadratic equation $x^2 - 0.04 = 0$ are
+ (a) $\pm 0.2$
+ (b) $\pm 0.02$
+ (c) 0.4
+ (d) 2
+ **Ans:** (a) $\pm 0.2$
+
+1
+
+6. The common difference of the A.P. $\frac{1}{p}$, $\frac{1-p}{p}$, $\frac{1-2p}{p}$, ... is
+ (a) 1
+ (b) $\frac{1}{p}$
+ (c) -1
+ (d) $\frac{-1}{p}$
+ **Ans:** (c) -1
+
+1
+
+7. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is
+ (a) na
+ (b) $(2n-1)a$
+ (c) $(2n+1)a$
+ (d) 2na
+ **Ans:** (b) $(2n-1)a$
+
+1
+
+8. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is
+ (a) (2, 0)
+ (b) (0, 2)
+ (c) (3, 0)
+ (d) (2, 2)
+ **Ans:** (a) (2, 0)
+
+1
+
+9. The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are
+ (a) (3, 5)
+ (b) (3, -5)
+ (c) (-3, -5)
+ (d) (-3, 5)
+ **Ans:** (c) (-3, -5)
+
+1
+---PAGE_BREAK---
+
+10.
+
+If the point P (6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is
+
+(a) 4
+
+(b) 3
+
+(c) 2
+
+(d) 1
+
+**Ans:** 1 mark to be awarded to everyone
+
+1
+
+In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark.
+
+11.
+
+In fig. 1, MN || BC and AM : MB = 1 : 2, then $\frac{ar(\Delta AMN)}{ar(\Delta ABC)} = \underline{\hspace{2cm}}$
+
+Fig. 1
+
+**Ans:** $\frac{1}{9}$
+
+1
+
+12.
+
+In given Fig. 2, the length PB = _______ cm.
+
+**Ans:** 4
+
+13.
+
+In $\triangle ABC$, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then $\angle B = \underline{\hspace{2cm}}$.
+
+**Ans:** 90°
+
+OR
+
+Two triangles are similar if their corresponding sides are ______.
+
+**Ans:** proportional
+
+1
+
+1
+
+14.
+
+The value of $(\tan 1^\circ \tan 2^\circ \dots \tan 89^\circ)$ is equal to ______.
+
+**Ans:** 1
+
+15.
+
+In Fig. 3, the angles of depressions from the observing positions O₁ and O₂ respectively of the object A are ______, ______.
+
+Fig. 3
+
+**Ans:** 30°, 45°
+
+$\frac{1}{2} + \frac{1}{2}$
+---PAGE_BREAK---
+
+Q. Nos. 16 to 20 are short answer type questions of 1 mark each.
+
+16. If $\sin A + \sin^2 A = 1$, then find the value of the expression $(\cos^2 A + \cos^4 A)$.
+
+$$
+\begin{array}{l}
+\text{Ans: } \sin A = 1 - \sin^2 A \\
+\qquad \sin A = \cos^2 A
+\end{array}
+$$
+
+$$ \cos^2 A + \cos^4 A = \sin A + \sin^2 A = 1 $$
+
+1/2
+
+1/2
+
+17. In Fig. 4 is a sector of circle of radius 10.5 cm. Find the perimeter of the sector. (Take $\pi = \frac{22}{7}$)
+
+Fig. 4
+
+$$
+\begin{aligned}
+\text{Ans: Perimeter} &= 2r + \frac{\pi r \theta}{180^\circ} \\
+&= 2 \times 10.5 + \frac{22}{7} \times 10.5 \times \frac{60^\circ}{180^\circ} \\
+&= 21 + 11 = 32 \text{ cm}
+\end{aligned}
+$$
+
+1/2
+
+1/2
+
+18. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of x² < 4.
+
+$$
+\begin{align*}
+\text{Ans: Number of Favourable outcomes} &= 3 \text{ i.e., } \{-1, 0, 1\} \quad \therefore P(x^2 < 4) = \frac{3}{7}
+\end{align*}
+$$
+
+OR
+
+What is the probability that a randomly taken leap year has 52 Sundays ?
+
+$$
+\text{Ans: } P(52 \text{ Sundays}) = \frac{5}{7}
+$$
+
+1
+
+19. Find the class-marks of the classes 10-25 and 35-55.
+
+$$
+\text{Ans: Class Marks } \frac{10+25}{2} = 17.5; \frac{35+55}{2} = 45
+$$
+
+1/2+1/2
+
+20. A die is thrown once. What is the probability of getting a prime number.
+
+$$
+\begin{array}{l}
+\text{Ans: Number of prime numbers} = 3 \text{ i.e. ; } \{2, 3, 5\} \\[1em]
+P(\text{Prime Number}) = \frac{3}{6} \text{ or } \frac{1}{2}
+\end{array}
+$$
+
+1/2
+
+1/2
+---PAGE_BREAK---
+
+SECTION - B
+
+Q. Nos. 21 to 26 carry 2 marks each
+
+21. A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to handover the paper. The following were the answers given by the students:
+
+$$2x + 3, 3x^2 + 7x + 2, 4x^3 + 3x^2 + 2, x^3 + \sqrt{3x} + 7, 7x + \sqrt{7}, 5x^3 - 7x + 2,$$
+
+$$2x^2 + 3 - \frac{5}{x}, 5x - \frac{1}{2}, ax^3 + bx^2 + cx + d, x + \frac{1}{x}.$$
+
+Answer the following questions :
+
+(i) How many of the above ten, are not polynomials ?
+
+(ii) How many of the above ten, are quadratic polynomials ?
+
+Ans: (i) 3
+
+(ii) 1
+
+1
+
+1
+
+22. In Fig. 5, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that
+
+$$\frac{ar(\Delta ABC)}{ar(\Delta DBC)} = \frac{AO}{DO}$$
+
+Fig. 5
+
+Ans:
+
+Draw $AX \perp BC$, $DY \perp BC$
+$\triangle AOX \sim \triangle DOY$
+
+$$\frac{AX}{DY} = \frac{AO}{DO} \quad \dots (i)$$
+
+$$\frac{ar(\triangle ABC)}{ar(\triangle DBC)} = \frac{\frac{1}{2} \times BC \times AX}{\frac{1}{2} \times BC \times DY}$$
+
+$$\frac{AX}{DY} = \frac{AO}{DO} \text{ (From (i))}$$
+
+OR
+
+In Fig. 6, if $AD \perp BC$, then prove that $AB^2 + CD^2 = BD^2 + AC^2$.
+
+Fig. 6
+
+Ans: In rt $\triangle ABD$
+
+$AB^2 = BD^2 + AD^2$ ... (i)
+
+In rt $\triangle ADC$
+
+$CD^2 = AC^2 - AD^2$ ... (ii)
+
+Adding (i) & (ii)
+
+$$AB^2 + CD^2 = BD^2 + AC^2$$
+
+1/2
+
+1/2
+
+1/2
+
+1/2
+
+1/2
+
+1
+---PAGE_BREAK---
+
+23. Prove that $1 + \frac{\cot^2 \alpha}{1 + \operatorname{cosec} \alpha} = \operatorname{cosec} \alpha$
+
+$$
+\begin{align*}
+\text{Ans: L.H.S} &= 1 + \frac{\operatorname{cosec}^2 \alpha - 1}{1 + \operatorname{cosec} \alpha} \\
+&= 1 + \frac{(\operatorname{cosec} \alpha - 1)(\operatorname{cosec} \alpha + 1)}{\operatorname{cosec} \alpha + 1} \\
+&= \operatorname{cosec} \alpha = \text{R.H.S}
+\end{align*}
+$$
+
+OR
+
+$$
+\tan^4 \theta + \tan^2 \theta = \sec^4 \theta - \sec^2 \theta
+$$
+
+$$
+\begin{align*}
+\text{Ans: L.H.S} &= \tan^4 \theta + \tan^2 \theta \\
+&= \tan^2 \theta (\tan^2 \theta + 1) \\
+&= (\sec^2 \theta - 1) (\sec^2 \theta) = \sec^4 \theta - \sec^2 \theta = R.H.S
+\end{align*}
+$$
+
+24. The volume of a right circular cylinder with its height equal to the radius is $25\frac{1}{7}$ cm³. Find the height of the cylinder. (Use $\pi = \frac{22}{7}$)
+
+$$
+\text{Ans: Let height and radius of cylinder be } x \text{ cm}
+$$
+
+$$
+V = \frac{176}{7} \text{cm}^3
+$$
+
+$$
+\frac{22}{7} \times x^2 \times x = \frac{176}{7}
+$$
+
+$$
+x^{3}=8 \Rightarrow x=2
+$$
+
+∴ height of cylinder = 2 cm
+
+25. A child has a die whose six faces show the letters as shown below :
+
+The die is thrown once. What is the probability of getting (i) A, (ii) D ?
+
+$$
+\text{Ans: (i) } P(A) = \frac{2}{6} \text{ or } \frac{1}{3} \qquad (\text{ii) } P(D) = \frac{1}{6}
+$$
+
+1+1
+
+26. Compute the mode for the following frequency distribution :
+
+
+
+| Size of items (in cm) | 0-4 | 4-8 | 8-12 | 12-16 | 16-20 | 20-24 | 24-28 |
+|---|---|---|---|---|---|---|---|
+| Frequency | 5 | 7 | 9 | 17 | 12 | 10 | 6 |
+
+
+$$
+\text{Ans: } l = 12 \quad f_0 = 9 \quad f_1 = 17 \quad f_2 = 12 \quad h = 4
+$$
+
+$$
+\text{Mode} = 12 + \frac{17-9}{34-9-12} \times 4 = 14.46 \text{ cm (Approx)}
+$$
+
+$$
+1 + \frac{1}{2}
+$$
+---PAGE_BREAK---
+
+SECTION - C
+
+Question numbers 27 to 34 carry 3 marks each.
+
+27. If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $\left(\frac{y}{x} - 2\right)$
+
+**Ans:** $2x + y = 23, 4x - y = 19$
+Solving, we get $x = 7, y = 9$
+
+$5y - 2x = 31, \frac{y}{x} - 2 = \frac{-5}{7}$
+
+OR
+
+Solve for x: $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}, x \neq -4, 7$
+
+**Ans:**
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x-7} &= \frac{11}{30} \\ &\Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30} \end{aligned} $$
+
+$$ \Rightarrow x^2 - 3x + 2 = 0 $$
+
+$$ \Rightarrow (x-2)(x-1) = 0 $$
+
+$$ \Rightarrow x = 2, 1 $$
+
+The Following solution should also be accepted
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x+7} &= \frac{11}{30} \\ &\Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30} \\ &\Rightarrow 11x^2 + 121x + 218 = 0 \end{aligned} $$
+
+Here, D = 5049
+
+$$ x = \frac{-121 \pm \sqrt{5049}}{22} $$
+
+
+28. If a, b and c are respectively the first, second and last terms of an A.P., prove that its sum is
+
+$$ \frac{(a+c)(b+c-2a)}{2(b-a)} $$
+
+**Ans:**
+
+Here $d = b - a$
+
+Let c be the n-th term
+$\therefore c = a + (n-1)(b-a)$
+$$ n = \frac{b+c-2a}{b-a} $$
+$$ S_n = \frac{n}{2}(a+c) = \frac{(a+c)(b+c-2a)}{2(b-a)} $$
+---PAGE_BREAK---
+
+OR
+
+Solve the equation : 1 + 4 + 7 + 10 + ... + x = 287.
+
+**Ans:** Let sum of n terms = 287
+
+$$ \frac{n}{2} [2 \times 1 + (n-1)3] = 287 $$
+
+$$ \frac{1}{2} $$
+
+$$ 3n^2 - n - 574 = 0 $$
+
+$$ \frac{1}{2} $$
+
+$$ (3n + 41)(n - 14) = 0 $$
+
+$$ \frac{1}{2} $$
+
+$$ n = 14 \left( \text{Reject } n = \frac{-41}{3} \right) $$
+
+$$ \frac{1}{2} $$
+
+$$ x = a_{14} = 1 + 13 \times 3 = 40 $$
+
+$$ 1 $$
+
+29. In a flight of 600 km, an aircraft was slowed down due to bad weather. The average speed of the trip was reduced by 200 km/hr and the time of flight increased by 30 minutes. Find the duration of flight.
+
+**Ans:** Let actual speed = x km/hr
+A.T.Q
+
+$$ \frac{600}{x - 200} - \frac{600}{x} = \frac{1}{2} $$
+
+$$ 1 $$
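+
+The cross-multiplication behind the next line, written out for the evaluator:
+
+$$ \frac{600 \times 200}{x(x-200)} = \frac{1}{2} \Rightarrow x(x-200) = 240000 $$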
+
+$$ x^2 - 200x - 240000 = 0 $$
+
+$$ (x - 600)(x + 400) = 0 $$
+
+$$ x = 600 \text{ (x = -400 Rejected)} $$
+
+$$ \frac{1}{2} $$
+
+$$ \text{Duration of flight} = \frac{600}{600} = 1 \text{ hr} $$
+
+$$ \frac{1}{2} $$
+
+30. If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and $x + y - 10 = 0$, find the value of k.
+
+**Ans:**
+
+$$ \text{P}(x, y) \text{ is the mid-point of } A(3, 4) \text{ and } B(k, 6) $$
+
+$$ x = \frac{3+k}{2}, \quad y=5 $$
+
+$$ \frac{1}{2} + \frac{1}{2} $$
+
+$$ x + y - 10 = 0 \Rightarrow \frac{3+k}{2} + 5 - 10 = 0 $$
+
+$$ \Rightarrow k = 7 $$
+
+$$ 1 $$
+
+OR
+
+Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).
+
+**Ans:** B(3, 2), C(-1, 2)
+
+Area = $\frac{1}{2}|1(2-2)+3(2+4)-1(-4-2)| = 12$ sq units
+
+$$ \frac{1}{2} + \frac{1}{2} $$
+
+$$ 1+1 $$
+---PAGE_BREAK---
+
+31. In Fig. 7, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle.
+
+Fig. 7
+
+**Ans:** As $\triangle ABC \sim \triangle DEF$
+
+$$ \frac{2x-1}{18} = \frac{3x}{6x} $$
+
+$x = 5$
+
+AB = 9 cm DE = 18 cm
+
+BC = 12 cm EF = 24 cm
+
+CA = 15 cm FD = 30 cm
+
+1/2+1/2
+
+32. If a circle touches the side BC of a triangle ABC at P and extended sides AB and AC at Q and R, respectively, prove that
+
+$$AQ = \frac{1}{2}(BC + CA + AB)$$
+
+**Ans:**
+
+Correct Fig
+
+$$ \begin{aligned} AQ &= \frac{1}{2} (2AQ) \\ &= \frac{1}{2} (AQ + AQ) \\ &= \frac{1}{2} (AQ + AR) \\ &= \frac{1}{2} (AB + BQ + AC + CR) \\ &= \frac{1}{2} (AB + BC + CA) \end{aligned} $$
+
+$\because$ [BQ = BP, CR = CP]
+
+1/2
+
+33. If $\sin \theta + \cos \theta = \sqrt{2}$, prove that $\tan \theta + \cot \theta = 2$.
+
+$$ \text{Ans: } \sin \theta + \cos \theta = \sqrt{2} $$
+
+$$ \begin{array}{l} \tan \theta + 1 = \sqrt{2} \sec \theta \\ \\ \text{Sq. both sides} \\ \tan^2 \theta + 1 + 2 \tan \theta = 2\sec^2 \theta \\ \\ \tan^2 \theta + 1 + 2 \tan \theta = 2(1 + \tan^2 \theta) \\ \\ 2 \tan \theta = \tan^2 \theta + 1 \\ \\ 2 = \tan \theta + \cot \theta \end{array} $$
+
+1
+
+1
+
+1
+
+1
+---PAGE_BREAK---
+
+**34.** The area of a circular play ground is 22176 cm². Find the cost of fencing this ground at the rate of 50 per metre.
+
+**Ans:** Let the radius of playground be r cm
+
+$$ \pi r^2 = 22176 \text{ cm}^2 $$
+
+$$ r = 84 \text{ cm} $$
+
+1
+
+$$ \text{Circumference} = 2\pi r = 2 \times \frac{22}{7} \times 84 = 528 \text{ cm} $$
+
+1
+
+$$ \text{Cost of fencing} = \frac{50}{100} \times 528 = 264 $$
+
+1
+
+### SECTION - D
+
+Question numbers 35 to 40 carry 4 marks each.
+
+**35.** Prove that $\sqrt{5}$ is an irrational number.
+
+**Ans:** Let $\sqrt{5}$ be a rational number.
+
+$$ \begin{array}{l} \sqrt{5} = \frac{p}{q}, \ p \text{ and } q \text{ are coprime and } q \neq 0 \\ 5q^2 = p^2 \Rightarrow 5 \text{ divides } p^2 \Rightarrow 5 \text{ divides } p \text{ also. Let } p = 5a, \text{ for some integer } a \\ 5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5 \text{ divides } q^2 \Rightarrow 5 \text{ divides } q \text{ also} \end{array} $$
+
+∴ 5 is a common factor of p, q, which is not possible as
+p, q are coprimes.
+
+Hence assumption is wrong $\sqrt{5}$ is irrational no.
+
+1
+
+1
+
+1
+
+1
+
+**36.** It takes 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately?
+
+**Ans:** Let time taken by pipe of larger diameter to fill the tank be x hr
+Let time taken by pipe of smaller diameter to fill the tank be y hr
+A.T.Q
+
+$$ \frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2} $$
+
+1+1
+
+Solving we get x = 20 hr y = 30 hr
+
+1+1
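+
+The elimination behind "solving", writing $u = \frac{1}{x}$ and $v = \frac{1}{y}$:
+
+$$ u + v = \frac{1}{12}, \quad 4u + 9v = \frac{1}{2} \Rightarrow 5v = \frac{1}{2} - \frac{4}{12} = \frac{1}{6} \Rightarrow v = \frac{1}{30}, \ u = \frac{1}{20} \Rightarrow x = 20, \ y = 30 $$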
+
+**37.** Draw a circle of radius 2 cm with centre O and take a point P outside the circle such that OP = 6.5 cm. From P, draw two tangents to the circle.
+
+**Ans:** Correct construction of circle of radius 2 cm
+Correct construction of tangents.
+
+1
+
+3
+
+OR
+
+Construct a triangle with sides 5 cm, 6 cm and 7 cm and then construct another triangle whose sides are $\frac{3}{4}$ times the corresponding sides of the first triangle.
+
+**Ans:** Correct construction of given triangle
+Construction of Similar triangle
+
+1
+
+3
+---PAGE_BREAK---
+
+**38.** From a point on the ground, the angles of elevation of the bottom and the top of a tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower.
+
+**Ans:** Let height of tower = h m
+
+In rt. $\Delta BCD \tan 45^\circ = \frac{BC}{CD}$
+
+$$
+\left.
+\begin{array}{l}
+1 = \frac{20}{CD} \\
+CD = 20 \text{ m}
+\end{array}
+\right\}
+$$
+
+In rt. $\Delta ACD \tan 60^\circ = \frac{AC}{CD}$
+
+$$ \sqrt{3} = \frac{20+h}{20} $$
+
+$$ h = 20(\sqrt{3}-1)m $$
+
+Correct fig. 1
+
+1
+
+1
+
+1
+
+**39.** Find the area of the shaded region in Fig. 8, if PQ = 24 cm, PR = 7 cm and O is the centre of the circle.
+
+Fig. 8
+
+**Ans:**
+
+$\angle P = 90^\circ \ RQ = \sqrt{(24)^2 + 7^2} = 25 \text{ cm}, r = \frac{25}{2} \text{ cm}$
+
+$$ \left.
+\begin{array}{l}
+\text{Area of shaded portion} = \text{Area of semicircle} - \text{ar}(\Delta PQR) \\
+= \frac{1}{2} \times \frac{22}{7} \times \left(\frac{25}{2}\right)^2 - 84 \\
+= 161.54 \text{ cm}^2
+\end{array}
+\right\} $$
+
+$$
+\begin{array}{l}
+\frac{1}{2} \\
+2 \\
+\frac{1}{2}
+\end{array}
+$$
+
+OR
+
+Find the curved surface area of the frustum of a cone, the diameters of whose circular ends are 20 m and 6 m and its height is 24 m.
+
+**Ans:**
+
+$R = 10 \text{ m}$ $r = 3 \text{ m}$ $h = 24 \text{ m}$
+
+$$ l = \sqrt{(24)^2 + (10-3)^2} = 25 \text{ m} $$
+
+$$ CSA = \pi(10 + 3)25 = 325 \pi \text{ m}^2 $$
+
+$$
+\begin{array}{l}
+\frac{1}{2}+1\frac{1}{2} \\
+1 \\
+1+1
+\end{array}
+$$
+
+**40.** The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f.
+
+| Class interval | 11 – 13 | 13 – 15 | 15 – 17 | 17 – 19 | 19 – 21 | 21 – 23 | 23 – 25 |
+|---|---|---|---|---|---|---|---|
| Frequency | 3 | 6 | 9 | 13 | f | 5 | 4 |
+---PAGE_BREAK---
+
+**Ans:**
+
+| C.I | f | x | xf |
+|---|---|---|---|
+| 11-13 | 3 | 12 | 36 |
+| 13-15 | 6 | 14 | 84 |
+| 15-17 | 9 | 16 | 144 |
+| 17-19 | 13 | 18 | 234 |
+| 19-21 | f | 20 | 20f |
+| 21-23 | 5 | 22 | 110 |
+| 23-25 | 4 | 24 | 96 |
+| Total | 40+f | | 704 + 20f |
+
+$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$
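+
+The arithmetic behind the last implication:
+
+$$ 18(40+f) = 704+20f \Rightarrow 720+18f = 704+20f \Rightarrow 2f = 16 \Rightarrow f = 8 $$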
+
+OR
+
+The following table gives production yield per hectare of wheat of 100 farms of a village :
+
+| Production yield | 40-45 | 45-50 | 50-55 | 55-60 | 60-65 | 65-70 |
+|---|---|---|---|---|---|---|
+| No. of farms | 4 | 6 | 16 | 20 | 30 | 24 |
+
+Change the distribution to a 'more than' type distribution and draw its ogive.
+
+**Ans:**
+
+| Production yield | Number of farms |
+|---|---|
+| More than or equal to 40 | 100 |
+| More than or equal to 45 | 96 |
+| More than or equal to 50 | 90 |
+| More than or equal to 55 | 74 |
+| More than or equal to 60 | 54 |
+| More than or equal to 65 | 24 |
+
+Plotting of points (40, 100) (45, 96) (50, 90) (55, 74) (60, 54) (65, 24) join to get ogive.
+
+2
+
+2
+
+2
+
+2
+---PAGE_BREAK---
+
+QUESTION PAPER CODE 30/2/2
+EXPECTED ANSWER/VALUE POINTS
+SECTION - A
+
+Question numbers 1 to 10 are multiple choice questions of 1 mark each.
+
+You have to select the correct choice :
+
+Marks
+
+Q.No.
+
+1. The value of k for which the system of linear equations x + 2y = 3, 5x + ky + 7 = 0 is inconsistent is
+
+(a) $-\frac{14}{3}$
+
+(b) $\frac{2}{5}$
+
+(c) 5
+
+(d) 10
+
+Ans: (d) 10
+
+1
+
+2. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are
+
+(a) m, m + 3
+
+(b) -m, m + 3
+
+(c) m, -(m + 3)
+
+(d) -m, -(m + 3)
+
+Ans: (b) -m, m + 3
+
+1
+
+3. Euclid's division Lemma states that for two positive integers a and b, there exist unique integers q and r satisfying $a = bq + r$, and
+
+(a) $0 < r < b$
+
+(b) $0 < r \leq b$
+
+(c) $0 \leq r < b$
+
+(d) $0 \leq r \leq b$
+
+Ans: (c) $0 \leq r < b$
+
+1
+
+4. The sum of exponents of prime factors in the prime-factorisation of 196 is
+
+(a) 3
+
+(b) 4
+
+(c) 5
+
+(d) 2
+
+Ans: (b) 4
+
+1
+
+5. If the point P(6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is
+
+(a) 4
+
+(b) 3
+
+(c) 2
+
+(d) 1
+
+Ans: 1 mark to be awarded to everyone
+
+1
+
+6. The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are
+
+(a) (3, 5)
+
+(b) (3, -5)
+
+(c) (-3, -5)
+
+(d) (-3, 5)
+
+Ans: (c) (-3, -5)
+
+1
+
+7. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is
+
+(a) (2, 0)
+
+(b) (0, 2)
+
+(c) (3, 0)
+
+(d) (2, 2)
+
+Ans: (a) (2, 0)
+
+1
+
+8. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is
+
+(a) na
+
+(b) $(2n-1)a$
+
+(c) $(2n+1)a$
+
+(d) 2na
+
+Ans: (b) $(2n-1)a$
+
+1
+
+9. The common difference of the A.P. $\frac{1}{p}, \frac{1-p}{p}, \frac{1-2p}{p}, ...$ is
+
+(a) 1
+
+(b) $\frac{1}{p}$
+
+(c) -1
+
+(d) $-\frac{1}{p}$
+
+Ans: (c) -1
+
+1
+---PAGE_BREAK---
+
+10. The roots of the quadratic equation $x^2 - 0.04 = 0$ are
+
+(a) ± 0.2
+
+(b) ± 0.02
+
+(c) 0.4
+
+(d) 2
+
+Ans: (a) ± 0.2
+
+In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark.
+
+11. In Fig. 1, the angles of depressions from the observing positions O₁ and O₂ respectively of the object A are ______, ______.
+
+Fig. 1
+
+Ans: 30°, 45°
+
+$\frac{1}{2} + \frac{1}{2}$
+
+12. In Fig. 2, MN || BC and AM : MB = 1 : 2, then $\frac{\text{ar}(ΔAMN)}{\text{ar}(ΔABC)} = $ ______.
+
+Fig. 2
+
+Ans: $\frac{1}{9}$
+
+13. In given Fig. 3, the length PB = ______ cm.
+
+Fig. 3
+
+Ans: 4
+
+14. In ΔABC, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then ∠B = ______.
+
+Ans: 90°
+
+OR
+Two triangles are similar if their corresponding sides are ______.
+
+Ans: proportional
+
+1
+
+1
+
+15. The value of sin 23° cos 67° + cos 23° sin 67° is ______.
+
+Ans: 1
+
+1
+---PAGE_BREAK---
+
+Q. Nos. 16 to 20 are short answer type questions of 1 mark each.
+
+16. Fig. 4 shows a sector of a circle of radius 10.5 cm. Find the perimeter of the sector. (Take $\pi = \frac{22}{7}$)
+
+Fig. 4
+
+**Ans:** Perimeter $= 2r + \frac{\pi r \theta}{180^{\circ}}$
+$= 2 \times 10.5 + \frac{22}{7} \times 10.5 \times \frac{60^{\circ}}{180^{\circ}}$
+$= 21 + 11 = 32 \text{ cm}$
+
+1/2
+
+1/2
+
+17. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of x² < 4.
+
+**Ans:** Number of Favourable outcomes = 3 i.e., {-1, 0, 1} : P(x² < 4) = $\frac{3}{7}$
+
+1/2+1/2
+
+OR
+
+What is the probability that a randomly taken leap year has 52 Sundays ?
+
+**Ans:** P(52 Sundays) = $\frac{5}{7}$
+
+1
+
+18. A die is thrown once. What is the probability of getting a prime number?
+
+**Ans:** Number of prime numbers = 3 i.e. {2, 3, 5}
+
+P(Prime Number) = $\frac{3}{6}$ or $\frac{1}{2}$
+
+1/2
+
+1/2
+
+19. If tan A = cot B, then find the value of (A + B).
+
+**Ans:** $\tan A = \tan (90^\circ - B)$
+$\therefore A + B = 90^\circ$
+
+1/2
+
+1/2
+
+20. Find the class marks of the classes 15 – 35 and 45 – 60.
+
+**Ans:**
+$$\frac{15+35}{2} = 25$$
+
+$$\frac{45+60}{2} = 52.5$$
+
+1/2
+
+1/2
+
+SECTION - B
+
+Q. Nos. 21 to 26 carry 2 marks each
+
+21. A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to handover the paper. The following were the answers given by the students:
+---PAGE_BREAK---
+
+$$2x+3, 3x^2+7x+2, 4x^3+3x^2+2, x^3+\sqrt{3x}+7, 7x+\sqrt{7}, 5x^3-7x+2,$$
+
+$$2x^2 + 3 - \frac{5}{x}, 5x - \frac{1}{2}, ax^3 + bx^2 + cx + d, x + \frac{1}{x}.$$
+
+Answer the following questions :
+
+(i) How many of the above ten, are not polynomials ?
+
+(ii) How many of the above ten, are quadratic polynomials ?
+
+**Ans:** (i) 3
+
+(ii) 1
+
+1
+
+1
+
+**22. Compute the mode for the following frequency distribution :**
+
+
+
+| Size of items (in cm) | 0 - 4 | 4 - 8 | 8 - 12 | 12 - 16 | 16 - 20 | 20 - 24 | 24 - 28 |
+|---|---|---|---|---|---|---|---|
+| Frequency | 5 | 7 | 9 | 17 | 12 | 10 | 6 |
+
+$$
+\text{Ans: } l = 12 \quad f_0 = 9 \quad f_1 = 17 \quad f_2 = 12 \quad h = 4
+$$
+
+1/2
+
+$$
+\text{Mode} = 12 + \frac{17-9}{34-9-12} \times 4 = 14.46 \text{ cm (Approx)}
+$$
+
+$$
+1 + \frac{1}{2}
+$$
+
+**23.** In Fig. 5, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that
+
+$$
+\frac{\text{ar}(\Delta \text{ABC})}{\text{ar}(\Delta \text{DBC})} = \frac{\text{AO}}{\text{DO}}
+$$
+
+Fig. 5
+
+Ans: Draw $AX \perp BC$, $DY \perp BC$, so that $\triangle AOX \sim \triangle DOY$
+
+$$
+\frac{\text{AX}}{\text{DY}} = \frac{\text{AO}}{\text{DO}} \quad \dots (i)
+$$
+
+$$
+\frac{\text{ar}(\Delta \text{ABC})}{\text{ar}(\Delta \text{DBC})} = \frac{\frac{1}{2} \times \text{BC} \times \text{AX}}{\frac{1}{2} \times \text{BC} \times \text{DY}}
+$$
+
+$$
+\frac{\mathrm{AX}}{\mathrm{DY}}=\frac{\mathrm{AO}}{\mathrm{DO}} \quad (\text{From (i)})
+$$
+
+OR
+
+In Fig. 6, if AD ⊥ BC, then prove that AB² + CD² = BD² + AC².
+
+Fig. 6
+---PAGE_BREAK---
+
+**Ans:** In rt $\triangle$ ABD
+
+$AB^2 = BD^2 + AD^2$ ... (i)
+
+1/2
+
+In rt $\triangle$ ADC
+
+$CD^2 = AC^2 - AD^2$ ... (ii)
+
+1/2
+
+Adding (i) & (ii)
+
+$AB^2 + CD^2 = BD^2 + AC^2$
+
+1
+
+**24.** Prove that $1 + \frac{\cot^2 \alpha}{1 + \operatorname{cosec} \alpha} = \operatorname{cosec} \alpha$
+
+**Ans:** L.H.S = $1 + \frac{\operatorname{cosec}^2\alpha - 1}{1 + \operatorname{cosec} \alpha}$
+
+1/2
+
+$$
+\begin{aligned}
+&= 1 + \frac{(\operatorname{cosec} \alpha - 1)(\operatorname{cosec} \alpha + 1)}{\operatorname{cosec} \alpha + 1} \\
+&= \operatorname{cosec} \alpha = \text{R.H.S}
+\end{aligned}
+ $$
+
+1
+
+1/2
+
+OR
+
+Show that $\tan^4\theta + \tan^2\theta = \sec^4\theta - \sec^2\theta$
+
+**Ans:** L.H.S = $\tan^4\theta + \tan^2\theta$
+
+$$
+\begin{aligned}
+&= \tan^2\theta (\tan^2\theta + 1) \\
+&= (\sec^2\theta - 1)(\sec^2\theta) = \sec^4\theta - \sec^2\theta = R.H.S
+\end{aligned}
+ $$
+
+1/2
+
+1+1/2
+
+**25.** A child has a die whose six faces show the letters as shown below :
+
+A B C D E
+
+The die is thrown once. What is the probability of getting (i) A, (ii) D ?
+
+**Ans:** (i) P(A) = $\frac{2}{6}$ or $\frac{1}{3}$
+
+(ii) P(D) = $\frac{1}{6}$
+
+1+1
+
+**26.** A solid is in the shape of a cone mounted on a hemisphere of same base radius. If the curved surface areas of the hemispherical part and the conical part are equal, then find the ratio of the radius and the height of the conical part.
+
+**Ans:** CSA of conical part = CSA of hemispherical part
+
+$$
+\begin{aligned}
+& \pi rl = 2\pi r^2 \\
+& \sqrt{r^2 + h^2} = 2r \\
+& h^2 = 3r^2 \\
+& \frac{r}{h} = \frac{1}{\sqrt{3}} \Rightarrow \text{ratio is } 1 : \sqrt{3}
+\end{aligned}
+ $$
+
+1/2
+
+1/2
+
+1/2
+
+1/2
+---PAGE_BREAK---
+
+**SECTION - C**
+
+**Question numbers 27 to 34 carry 3 marks each.**
+
+27. In Fig. 7, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle.
+
+Fig. 7
+
+**Ans:** As $\triangle ABC \sim \triangle DEF$
+
+$$ \frac{2x-1}{18} = \frac{3x}{6x} $$
+
+$1$
+
+$x = 5$
+
+1
+
+AB = 9 cm DE = 18 cm
+
+BC = 12 cm EF = 24 cm
+
+CA = 15 cm FD = 30 cm
+
+$$ \frac{1}{2} + \frac{1}{2} $$
+
+28. If a circle touches the side BC of a triangle ABC at P and extended sides AB and AC at Q and R, respectively, prove that
+
+$$ AQ = \frac{1}{2} (BC + CA + AB) $$
+
+**Ans:**
+
+Correct Fig
+
+$$ AQ = \frac{1}{2} (2AQ) $$
+
+$$ \frac{1}{2} $$
+
+$$ = \frac{1}{2} (AQ + AQ) $$
+
+$$ = \frac{1}{2} (AQ + AR) $$
+
+$$ = \frac{1}{2} (AB + BQ + AC + CR) $$
+
+$$ 1 $$
+
+$$ = \frac{1}{2} (AB + BC + CA) $$
+
+$$ 1 $$
+
+$$ \because \ [BQ = BP, CR = CP] $$
+
+29. The area of a circular play ground is $22176 \text{ cm}^2$. Find the cost of fencing this ground at the rate of 50 per metre.
+
+**Ans:** Let the radius of playground be r cm
+
+$$ \pi r^2 = 22176 \text{ cm}^2 $$
+
+$$ r = 84 \text{ cm} $$
+
+$$ 1 $$
+
+Circumference = $2\pi r = 2 \times \frac{22}{7} \times 84 = 528 \text{ cm}$
+
+$$ 1 $$
+---PAGE_BREAK---
+
+Cost of fencing = $\frac{50}{100} \times 528 = 264$
+
+30.
+
+If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $(\frac{y}{x} - 2)$
+
+**Ans:** $2x + y = 23, 4x - y = 19$
+Solving, we get $x = 7, y = 9$
+
+$5y - 2x = 31, \frac{y}{x} - 2 = \frac{-5}{7}$
+
+1
+
+1+1
+
+$\frac{1}{2}+1\frac{1}{2}$
+
+OR
+
+Solve for x: $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}, x \neq -4, 7$
+
+**Ans:**
+
+$$
+\begin{align*}
+\frac{1}{x+4} - \frac{1}{x-7} &= \frac{11}{30} \\
+&\Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30}
+\end{align*}
+$$
+
+$$
+\Rightarrow x^2 - 3x + 2 = 0
+$$
+
+$$
+\Rightarrow (x-2) (x-1) = 0
+$$
+
+$$
+\Rightarrow x = 2, 1
+$$
+
+The Following solution should also be accepted
+
+$$
+\begin{align*}
+\frac{1}{x+4} - \frac{1}{x+7} &= \frac{11}{30} \\
+&\Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30}
+\end{align*}
+$$
+
+$$
+\Rightarrow 11x^2 + 121x + 218 = 0
+$$
+
+Here, D = 5049
+
+$$
+x = \frac{-121 \pm \sqrt{5049}}{22}
+$$
+
+$\frac{1}{2}$
+
+31.
+
+If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and $x + y - 10 = 0$, find the value of k.
+
+**Ans:**
+
+$$
+\text{P}(x, y) \text{ is the mid-point of } A(3, 4) \text{ and } B(k, 6)
+$$
+
+$$
+x = \frac{3+k}{2} \quad y = 5
+$$
+
+$$
+x + y - 10 = 0 \Rightarrow \frac{3+k}{2} + 5 - 10 = 0
+$$
+
+$$
+\Rightarrow k = 7
+$$
+
+OR
+
+Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).
+
+**Ans:** B(3, 2), C(-1, 2)
+
+$$
+\text{Area} = \frac{1}{2} |(1(2-2) + 3(2+4) - 1(-4-2))| = 12 \text{ sq units}
+$$
+
+$\frac{1}{2}+1\frac{1}{2}$
+
+$1+1$
+---PAGE_BREAK---
+
+32. If in an A.P., the sum of first m terms is n and the sum of its first n terms is m, then prove that the sum of its first (m + n) terms is $-(m + n)$.
+
+**Ans:**
+$S_m = n$ and $S_n = m$
+
+$$2a + (m-1)d = \frac{2n}{m} \quad \dots(i) \qquad 2a + (n-1)d = \frac{2m}{n} \quad \dots(ii)$$
+
+1
+
+Solving (i) & (ii), $a = \frac{m^2+n^2+mn-n-m}{mn}$ & $d = \frac{-2(m+n)}{mn}$
+
+1
+
+$$S_{m+n} = \frac{m+n}{2} \left[ \frac{2(m^2 + n^2 + mn - n - m)}{mn} + (m+n-1) \left( \frac{-2(m+n)}{mn} \right) \right]$$
+
+$$= -(m+n)$$
+
+1/2
+1/2
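+
+The bracket simplifies as follows:
+
+$$ 2a + (m+n-1)d = \frac{2\left[(m^2+n^2+mn-m-n) - (m+n-1)(m+n)\right]}{mn} = \frac{-2mn}{mn} = -2 $$
+
+$$ \therefore S_{m+n} = \frac{m+n}{2} \times (-2) = -(m+n) $$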
+
+OR
+
+Find the sum of all 11 terms of an A.P. whose middle term is 30.
+
+**Ans:**
+Middle term = $\left(\frac{11+1}{2}\right)^{\text{th}}$ term = $a_6 = 30$
+
+1
+
+$$S_{11} = \frac{11}{2}[2a + 10d]$$
+
+$$= 11(a + 5d)$$
+
+$$= 11 a_6 = 11 \times 30 = 330$$
+
+1/2
+1/2
+1
+
+33. A fast train takes 3 hours less than a slow train for a journey of 600 km. If the speed of the slow train is 10 km/h less than that of the fast train, find the speed of each train.
+
+**Ans:**
+Let the speeds of fast train & slow train be x km/hr
+& (x - 10) km/hr respectively.
+A.T.Q.
+
+$$\frac{600}{x-10} - \frac{600}{x} = 3$$
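+
+Clearing denominators (the step the scheme leaves implicit):
+
+$$ \frac{600 \times 10}{x(x-10)} = 3 \Rightarrow x(x-10) = 2000 $$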
+
+$$x^2 - 10x - 2000 = 0$$
+
+$$(x - 50)(x + 40) = 0$$
+
+$x = 50$ or $-40$
+
+Speed is always positive, So, $x = 50$
+
+1/2
+
+∴ Speed of fast train & slow train are 50 km/hr & 40 km/hr respectively.
+
+1/2
+
+34. If $1 + \sin^2\theta = 3 \sin\theta \cos\theta$, prove that $\tan\theta = 1$ or $\frac{1}{2}$
+
+**Ans:**
+$$\frac{1+\sin^2\theta}{\cos^2\theta} = \frac{3\sin\theta \cdot \cos\theta}{\cos^2\theta} \text{ (Dividing both sides by } \cos^2\theta\text{)}$$
+
+$$\sec^2\theta + \tan^2\theta = 3\tan\theta$$
+
+$$(1 + \tan^2\theta) + \tan^2\theta = 3\tan\theta$$
+
+$$2\tan^2\theta - 3\tan\theta + 1 = 0$$
+
+$$(\tan\theta - 1)(2\tan\theta - 1) = 0$$
+
+1/2
+1/2
+1/2
+1/2
+1/2
+---PAGE_BREAK---
+
+$$ \tan \theta = 1 \text{ or } \frac{1}{2} $$
+
+## SECTION - D
+
+**Question numbers 35 to 40 carry 4 marks each.**
+
+**35.** The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f.
+
+| Class interval | 11 - 13 | 13 - 15 | 15 - 17 | 17 - 19 | 19 - 21 | 21 - 23 | 23 - 25 |
+|---|---|---|---|---|---|---|---|
| Frequency | 3 | 6 | 9 | 13 | f | 5 | 4 |
+
+**Ans:**
+
+| C.I | f | x | xf |
+|---|---|---|---|
+| 11-13 | 3 | 12 | 36 |
+| 13-15 | 6 | 14 | 84 |
+| 15-17 | 9 | 16 | 144 |
+| 17-19 | 13 | 18 | 234 |
+| 19-21 | f | 20 | 20f |
+| 21-23 | 5 | 22 | 110 |
+| 23-25 | 4 | 24 | 96 |
+| Total | 40+f | | 704 + 20f |
+
+$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$
+
+OR
+
+The following table gives production yield per hectare of wheat of 100 farms of a village :
+
+| Production yield | 40-45 | 45-50 | 50-55 | 55-60 | 60-65 | 65-70 |
+|---|---|---|---|---|---|---|
+| No. of farms | 4 | 6 | 16 | 20 | 30 | 24 |
+
+Change the distribution to a 'more than' type distribution and draw its ogive.
+
+**Ans:**
+
+| Production yield | Number of farms |
+|---|---|
+| More than or equal to 40 | 100 |
+| More than or equal to 45 | 96 |
+| More than or equal to 50 | 90 |
+| More than or equal to 55 | 74 |
+| More than or equal to 60 | 54 |
+| More than or equal to 65 | 24 |
+
+Plotting of points (40, 100) (45, 96) (50, 90) (55, 74) (60, 54) (65, 24) join to get ogive.
+
+2
+
+2
+
+2
+
+2
+---PAGE_BREAK---
+
+**36.** Find the area of the shaded region in Fig. 8, if PQ = 24 cm, PR = 7 cm and O is the centre of the circle.
+
+Fig. 8
+
+$$
+\begin{aligned}
+\text{Ans: } \angle P = 90^\circ, \text{ RQ} &= \sqrt{(24)^2 + 7^2} = 25 \text{ cm}, \quad r = \frac{25}{2} \text{ cm} \\
+\text{Area of shaded portion} &= \text{Area of semicircle} - \text{ar}(\Delta PQR) \\
+&= \frac{1}{2} \times \frac{22}{7} \times \left(\frac{25}{2}\right)^2 - 84 \\
+&= 161.54 \text{ cm}^2
+\end{aligned}
+$$
+
+OR
+
+Find the curved surface area of the frustum of a cone, the diameters of whose circular ends are 20 m and 6 m and its height is 24 m.
+
+$$
+\begin{array}{l}
+\text{Ans: } R = 10 \text{ m} \quad r = 3 \text{ m} \quad h = 24 \text{ m} \\[1em]
+l = \sqrt{(24)^2 + (10-3)^2} = 25 \text{ m} \\
+CSA = \pi(10 + 3)25 = 325 \pi \text{ m}^2
+\end{array}
+$$
+
+**37.** Prove that $\sqrt{5}$ is an irrational number.
+
+$$
+\begin{array}{l}
+\text{Ans: Let } \sqrt{5} \text{ be a rational number.} \\
+\sqrt{5} = \frac{p}{q}, p \text{ & q are coprimes & } q \neq 0 \\
+5q^2 = p^2 \Rightarrow 5 \text{ divides } p^2 \Rightarrow 5 \text{ divides } p \text{ also Let } p = 5a, \text{ for some integer } a \\
+5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5 \text{ divides } q^2 \Rightarrow 5 \text{ divides } q \text{ also} \\
+\therefore 5 \text{ is a common factor of } p, q, \text{ which is not possible as } \\
+\text{p, q are coprimes.} \\
+\text{Hence assumption is wrong } \sqrt{5} \text{ is irrational no.}
+\end{array}
+$$
+
+**38.** It takes 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately?
+
+$$
+\begin{array}{l}
+\text{Ans: Let time taken by pipe of larger diameter to fill the tank be x hr} \\
+\text{Let time taken by pipe of smaller diameter to fill the tank be y hr} \\
+\text{A.T.Q} \\
+\\
+\displaystyle \frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2} \\
+\\
+\text{Solving we get } x = 20 \text{ hr } y = 30 \text{ hr}
+\end{array}
+$$
+---PAGE_BREAK---
+
+**39.** Draw two tangents to a circle of radius 4 cm, which are inclined to each other at an angle of 60°.
+
+**Ans:** Correct construction of circle of radius 4 cm
+
+Correct construction of tangents
+
+OR
+
+Construct a triangle ABC with sides 3 cm, 4 cm and 5 cm. Now, construct another triangle whose sides are $\frac{4}{5}$ times the corresponding sides of ΔABC.
+
+**Ans:** Correct construction of triangle with sides 3 cm, 4 cm & 5 cm
+
+Correct construction of similar triangle
+
+**40.** The angle of elevation of the top of a building from the foot of a tower is 30° and the angle of elevation of the top of a tower from the foot of the building is 60°. If the tower is 50 m high, then find the height of the building.
+
+**Ans:** Correct figure
+Let the height of building be h m
+
+$$ \text{In rt. } \triangle \text{BCD, } \tan 60^\circ = \frac{50}{BC} $$
+
+$$ \Rightarrow BC = \frac{50}{\sqrt{3}} \quad \dots (i) $$
+
+$$ \text{In rt. } \triangle \text{ABC, } \tan 30^\circ = \frac{h}{BC} $$
+
+$$ \Rightarrow \quad \frac{1}{\sqrt{3}} = \frac{h}{50/\sqrt{3}} \quad (\text{from (i)}) $$
+
+$$ \therefore h = \frac{50}{3} \text{ or } 16\frac{2}{3} \text{ or } 16.67 \text{ m} $$
+---PAGE_BREAK---
+
+QUESTION PAPER CODE 30/2/3
+EXPECTED ANSWER/VALUE POINTS
+SECTION - A
+
+Question numbers 1 to 10 are multiple choice questions of 1 mark each.
+
+You have to select the correct choice :
+
+Marks
+
+Q.No.
+
+1. The point P on x-axis equidistant from the points A(-1, 0) and B(5, 0) is
+
+(a) (2, 0)
+
+(b) (0, 2)
+
+(c) (3, 0)
+
+(d) (2, 2)
+
+Ans: (a) (2, 0)
+
+1
+
+2. The co-ordinates of the point which is reflection of point (-3, 5) in x-axis are
+
+(a) (3, 5)
+
+(b) (3, -5)
+
+(c) (-3, -5)
+
+(d) (-3, 5)
+
+Ans: (c) (-3, -5)
+
+1
+
+3. If the point P (6, 2) divides the line segment joining A(6, 5) and B(4, y) in the ratio 3 : 1, then the value of y is
+
+(a) 4
+
+(b) 3
+
+(c) 2
+
+(d) 1
+
+Ans: 1 mark to be awarded to everyone
+
+1
+
+4. The sum of exponents of prime factors in the prime-factorisation of 196 is
+
+(a) 3
+
+(b) 4
+
+(c) 5
+
+(d) 2
+
+Ans: (b) 4
+
+1
+
+5. Euclid's division Lemma states that for two positive integers a and b, there exist unique integers q and r satisfying $a = bq + r$, and
+
+(a) $0 < r < b$
+
+(b) $0 < r \leq b$
+
+(c) $0 \leq r < b$
+
+(d) $0 \leq r \leq b$
+
+Ans: (c) $0 \leq r < b$
+
+1
+
+6. The zeroes of the polynomial $x^2 - 3x - m(m+3)$ are
+
+(a) m, m + 3
+
+(b) -m, m + 3
+
+(c) m, -(m + 3)
+
+(d) -m, -(m + 3)
+
+Ans: (b) -m, m + 3
+
+1
+
+7. The value of k for which the system of linear equations $x + 2y = 3$, $5x + ky + 7 = 0$ is inconsistent is
+
+(a) $-\frac{14}{3}$
+
+(b) $\frac{2}{5}$
+
+(c) 5
+
+(d) 10
+
+Ans: (d) 10
+
+1
+
+8. The roots of the quadratic equation $x^2 - 0.04 = 0$ are
+
+(a) $\pm 0.2$
+
+(b) $\pm 0.02$
+
+(c) 0.4
+
+(d) 2
+
+Ans: (a) $\pm 0.2$
+
+1
+
+9. The common difference of the A.P. $\frac{1}{p}$, $\frac{1-p}{p}$, $\frac{1-2p}{p}$, ... is
+
+(a) 1
+
+(b) $\frac{1}{p}$
+
+(c) -1
+
+(d) $-\frac{1}{p}$
+
+Ans: (c) -1
+
+1
+---PAGE_BREAK---
+
+10. The $n^{th}$ term of the A.P. a, 3a, 5a, ... is
+
+(a) na
+
+(b) (2n - 1)a
+
+(c) (2n + 1) a
+
+(d) 2na
+
+**Ans:** (b) (2n - 1)a
+
+1
+
+In Q. Nos. 11 to 15, fill in the blanks. Each question is of 1 mark.
+
+11. In Fig. 1, the angles of depressions from the observing positions O₁ and O₂ respectively of the object A are __________, _________.
+
+Fig. 1
+
+**Ans:** 30°, 45°
+
+$\frac{1}{2} + \frac{1}{2}$
+
+12. In $\triangle ABC$, AB = $6\sqrt{3}$ cm, AC = 12 cm and BC = 6 cm, then $\angle B = $ ________.
+
+**Ans:** 90°
+
+OR
+
+Two triangles are similar if their corresponding sides are ________.
+
+**Ans:** proportional
+
+1
+
+1
+
+13. In given Fig. 2, the length PB = _______ cm.
+
+Fig. 2
+
+**Ans:** 4
+
+1
+
+14. In Fig. 3, MN || BC and AM : MB = 1 : 2, then $\frac{ar(\triangle AMN)}{ar(\triangle ABC)} = $ ________.
+
+Fig. 3
+
+**Ans:** $\frac{1}{9}$
+
+1
+
+15. The value of sin 32° cos 58° + cos 32° sin 58° is
+
+**Ans:** 1
+
+1
+---PAGE_BREAK---
+
+OR
+
+The value of $\frac{\tan 35^\circ}{\cot 55^\circ} + \frac{\cot 78^\circ}{\tan 12^\circ}$ is ______.
+
+**Ans:** 2
+
+1
+
+Q. Nos. 16 to 20 are short answer type questions of 1 mark each.
+
+16. A die is thrown once. What is the probability of getting a prime number?
+
+**Ans:** Number of prime numbers = 3 i.e. {2, 3, 5}
+
+$\text{P(Prime Number)} = \frac{3}{6} \text{ or } \frac{1}{2}$
+
+1/2
+
+1/2
+
+17. If a number x is chosen at random from the numbers -3, -2, -1, 0, 1, 2, 3, then find the probability of $x^2 < 4$.
+
+**Ans:** Number of Favourable outcomes = 3 i.e., {-1, 0, 1} $\therefore P(x^2 < 4) = \frac{3}{7}$
+
+1/2+1/2
+
+OR
+
+What is the probability that a randomly taken leap year has 52 Sundays ?
+
+**Ans:** $P(52 \text{ Sunday}) = \frac{5}{7}$
+
+1
+
+18. If $\sin A + \sin^2 A = 1$, then find the value of the expression ($\cos^2 A + \cos^4 A$).
+
+**Ans:**
+
+$$ \sin A = 1 - \sin^2 A = \cos^2 A $$
+
+$$ \cos^2 A + \cos^4 A = \sin A + \sin^2 A = 1 $$
+
+1/2
+
+1/2
+
+19. Find the area of the sector of a circle of radius 6 cm whose central angle is 30°.
+(Take $\pi = 3.14$)
+
+**Ans:** Area = $3.14 \times (6)^2 \times \frac{30^\circ}{360^\circ}$
+= $9.42 \text{ cm}^2$
+
+1/2
+
+1/2
+
+20. Find the class marks of the classes 20 – 50 and 35 – 60.
+
+**Ans:**
+$$ \frac{20+50}{2} = 35 $$
+
+$$ \frac{35+60}{2} = 47.5 $$
+
+1/2
+
+1/2
+
+SECTION - B
+
+Q. Nos. 21 to 26 carry 2 marks each.
+
+21. A teacher asked 10 of his students to write a polynomial in one variable on a paper and then to hand over the paper. The following were the answers given by the students:
+
+$2x + 3$, $3x^2 + 7x + 2$, $4x^3 + 3x^2 + 2$, $x^3 + \sqrt{3x} + 7$, $7x + \sqrt{7}$, $5x^3 - 7x + 2$,
+$2x^2 + 3 - \frac{5}{x}$, $5x - \frac{1}{2}$, $ax^3 + bx^2 + cx + d$, $x + \frac{1}{x}$
+---PAGE_BREAK---
+
+Answer the following questions :
+
+(i) How many of the above ten, are not polynomials ?
+
+(ii) How many of the above ten, are quadratic polynomials ?
+
+**Ans:** (i) 3
+
+(ii) 1
+
+1
+
+1
+
+22. A child has a die whose six faces show the letters as shown below :
+
+The die is thrown once. What is the probability of getting (i) A, (ii) D ?
+
+**Ans:** (i) $P(A) = \frac{2}{6}$ or $\frac{1}{3}$
+
+(ii) $P(D) = \frac{1}{6}$
+
+1+1
+
+23. In Fig. 4, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that
+
+$$\frac{ar(\Delta ABC)}{ar(\Delta DBC)} = \frac{AO}{DO}$$
+
+Fig. 4
+
+**Ans:**
+
+Draw $AX \perp BC$, $DY \perp BC$
+$\Delta AOX \sim \Delta DOY$
+
+$$\frac{AX}{DY} = \frac{AO}{DO} \quad \dots(i)$$
+
+$$\frac{ar(\triangle ABC)}{ar(\triangle DBC)} = \frac{\frac{1}{2} \times BC \times AX}{\frac{1}{2} \times BC \times DY}$$
+
+$$\frac{AX}{DY} = \frac{AO}{DO} \text{ (From (i))}$$
+
+OR
+
+In Fig. 5, if $AD \perp BC$, then prove that $AB^2 + CD^2 = BD^2 + AC^2$.
+
+**Ans:**
+In rt $\triangle ABD$ $AB^2 = BD^2 + AD^2$ ... (i)
+In rt $\triangle ADC$ $CD^2 = AC^2 - AD^2$ ... (ii)
+Adding (i) & (ii)
+$$AB^2 + CD^2 = BD^2 + AC^2$$
+
+1/2
+
+1/2
+
+1/2
+
+1/2
+
+1/2
+
+1
+---PAGE_BREAK---
+
+24.
+
+Prove that $1 + \frac{\cot^2 \alpha}{1 + \csc \alpha} = \csc \alpha$
+---PAGE_BREAK---
+
+**Ans:**
+
+Correct Fig
+
+$$ \begin{aligned} \text{AQ} &= \frac{1}{2} (2\text{AQ}) = \frac{1}{2} (\text{AQ} + \text{AQ}) \\ &= \frac{1}{2} (\text{AQ} + \text{AR}) \\ &= \frac{1}{2} (\text{AB} + \text{BQ} + \text{AC} + \text{CR}) \\ &= \frac{1}{2} (\text{AB} + \text{BC} + \text{CA}) \quad [\because \text{BQ} = \text{BP}, \text{CR} = \text{CP}] \end{aligned} $$
+
+1/2
+
+1/2
+
+1
+
+1
+
+28. The area of a circular play ground is 22176 cm². Find the cost of fencing this ground at the rate of 50 per metre.
+
+**Ans:** Let the radius of playground be r cm
+
+$$ \begin{aligned} \pi r^2 &= 22176 \text{ cm}^2 \\ r &= 84 \text{ cm} \end{aligned} $$
+
+1
+
+Circumference = $2\pi r = 2 \times \frac{22}{7} \times 84 = 528$ cm
+
+1
+
+Cost of fencing = $\frac{50}{100} \times 528 = 264$
+
+1
+
+29. If the mid-point of the line segment joining the points A(3, 4) and B(k, 6) is P(x, y) and x + y - 10 = 0, find the value of k.
+
+**Ans:**
+
+
+$$ x = \frac{3+k}{2}, \quad y=5 $$
+
+$$ x+y-10=0 \Rightarrow \frac{3+k}{2}+5-10=0 $$
+
+$$ \Rightarrow k=7 $$
+
+OR
+
+Find the area of triangle ABC with A(1, -4) and the mid-points of sides through A being (2, -1) and (0, -1).
+
+**Ans:** B(3, 2), C(-1, 2)
+
+$$ \text{Area} = \frac{1}{2} |1(2-2)+3(2+4)-1(-4-2)| = 12 \text{ sq units} $$
+
+1/2+1/2
+
+1
+
+1
+
+1/2+1/2
+
+1+1
+---PAGE_BREAK---
+
+30. In Fig. 6, if $\triangle ABC \sim \triangle DEF$ and their sides of lengths (in cm) are marked along them, then find the lengths of sides of each triangle.
+
+Fig. 6
+
+**Ans:** As $\triangle ABC \sim \triangle DEF$
+
+$$ \frac{2x-1}{18} = \frac{3x}{6x} $$
+
+$$ x = 5 $$
+
+$$ AB = 9 \text{ cm} $$
+
+DE = 18 cm
+
+BC = 12 cm
+
+EF = 24 cm
+
+CA = 15 cm
+
+FD = 30 cm
+
+1½+1½
+
+31. If $2x + y = 23$ and $4x - y = 19$, find the value of $(5y - 2x)$ and $(\frac{y}{x} - 2)$
+
+**Ans:** $2x + y = 23$, $4x - y = 19$
+
+Solving, we get $x = 7$, $y = 9$
+
+$$ 5y - 2x = 31, \quad \frac{y}{x} - 2 = \frac{-5}{7} $$
+
+OR
+
+Solve for $x$: $\frac{1}{x+4} - \frac{1}{x-7} = \frac{11}{30}$, $x \neq -4, 7$
+
+**Ans:**
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x-7} &= \frac{11}{30} \\ &\Rightarrow \frac{-11}{(x+4)(x-7)} = \frac{11}{30} \\ &\Rightarrow x^2 - 3x + 2 = 0 \\ &\Rightarrow (x-2)(x-1) = 0 \\ &\Rightarrow x = 2, 1 \end{aligned} $$
+
+The following solution (reading the equation as $\frac{1}{x+4} - \frac{1}{x+7} = \frac{11}{30}$) should also be accepted:
+
+$$ \begin{aligned} \frac{1}{x+4} - \frac{1}{x+7} &= \frac{11}{30} \\ &\Rightarrow \frac{x+7-x-4}{(x+4)(x+7)} = \frac{11}{30} \\ &\Rightarrow 11x^2 + 121x + 218 = 0 \end{aligned} $$
+
+Here, D = 5049
+
+$$ x = \frac{-121 \pm \sqrt{5049}}{22} $$
+
+1½
+
+½
+---PAGE_BREAK---
+
+**32.** Which term of the A.P. 20,19$\frac{1}{4}$,18$\frac{1}{2}$,17$\frac{3}{4}$... is the first negative term.
+
+$$ \text{Ans: } a = 20 \text{ & } d = 19\frac{1}{4} - 20 = -\frac{3}{4} $$
+
+$$ a_n < 0 $$
+
+$$ 20 + (n-1)\left(-\frac{3}{4}\right) < 0 $$
+
+$$ n > 27\frac{2}{3} $$
+
+∴ The 28th term of the given A.P. is the first negative term
+
+OR
+
+Find the middle term of the A.P. 7, 13, 19, ..., 247.
+
+$$ \text{Ans: } a = 7 \text{ & } d = 13 - 7 = 6 $$
+
+$$ 247 = 7 + (n - 1)6 $$
+
+$$ n = 41 $$
+
+$$ \text{Middle term} = \left(\frac{41+1}{2}\right)^{\text{th}} = 21^{\text{st}} \text{ term.} $$
+
+$$ a_{21} = 7 + 20 \times 6 = 127 $$
+
+**33.** Water in a canal, 6 m wide and 1.5 m deep, is flowing with a speed of 10 km/h.
+How much area will it irrigate in 30 minutes, if 8 cm standing water is
+required ?
+
+$$ \text{Ans: Volume of water in canal in 1 hr} = 10000 \times 6 \times 1.5 = 90000 \text{ m}^3 $$
+
+$$ \text{Volume of water in canal in 30 mins} = \frac{1}{2} \times 90000 = 45000 \text{ m}^3 $$
+
+$$ \begin{aligned} \text{Area} &= \frac{45000}{8/100} \\ &= 562500 \text{ m}^2 \end{aligned} $$
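As a quick numerical cross-check of the values above (not part of the marking scheme; the variable names are ours), the canal arithmetic can be replayed in Python:

```python
# Water delivered in 30 minutes, spread as 8 cm of standing water.
speed_m_per_hr = 10_000        # 10 km/h expressed in metres per hour
width, depth = 6, 1.5          # canal cross-section in metres

volume_per_hour = speed_m_per_hr * width * depth   # 90000 m^3
volume_30_min = volume_per_hour / 2                # 45000 m^3
area = volume_30_min * 100 / 8                     # standing depth 8 cm = 8/100 m

print(volume_per_hour, volume_30_min, area)
```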
+
+**34.** Show that :
+
+$$ \frac{\cos^2(45^\circ + \theta) + \cos^2(45^\circ - \theta)}{\tan(60^\circ + \theta) \tan(30^\circ - \theta)} = 1 $$
+
+$$ \text{Ans: L.H.S} = \frac{\cos^2(45^\circ + \theta) + \sin^2(90^\circ - 45^\circ + \theta)}{\tan(60^\circ + \theta) \cdot \cot(90^\circ - 30^\circ + \theta)} $$
+
+$$ = \frac{\cos^2(45^\circ + \theta) + \sin^2(45^\circ + \theta)}{\tan(60^\circ + \theta) \cdot \cot(60^\circ + \theta)} $$
+
+$$ = \frac{1}{1} = 1 = R.H.S $$
+---PAGE_BREAK---
+
+SECTION - D
+
+Question numbers 35 to 40 carry 4 marks each.
+
+35. The mean of the following frequency distribution is 18. The frequency f in the class interval 19 – 21 is missing. Determine f.
+
+| Class interval | 11 - 13 | 13 - 15 | 15 - 17 | 17 - 19 | 19 - 21 | 21 - 23 | 23 - 25 |
+|---|---|---|---|---|---|---|---|
+| Frequency | 3 | 6 | 9 | 13 | f | 5 | 4 |
+
+**Ans:**
+
+| C.I. | f | x | xf |
+|---|---|---|---|
+| 11 - 13 | 3 | 12 | 36 |
+| 13 - 15 | 6 | 14 | 84 |
+| 15 - 17 | 9 | 16 | 144 |
+| 17 - 19 | 13 | 18 | 234 |
+| 19 - 21 | f | 20 | 20f |
+| 21 - 23 | 5 | 22 | 110 |
+| 23 - 25 | 4 | 24 | 96 |
+| Total | 40 + f | | 704 + 20f |
+
+$$ \text{Mean} = \frac{\sum xf}{\sum f} \Rightarrow 18 = \frac{704+20f}{40+f} \Rightarrow f=8 $$
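The value $f = 8$ can be cross-checked numerically (a helper sketch, not part of the marking scheme; names are ours):

```python
# Mean of the distribution as a function of the missing frequency f.
from fractions import Fraction

marks = [12, 14, 16, 18, 20, 22, 24]     # class marks x
freqs = [3, 6, 9, 13, None, 5, 4]        # frequency of 19-21 is unknown

def mean_given_f(f):
    fs = [v if v is not None else f for v in freqs]
    return Fraction(sum(x * fi for x, fi in zip(marks, fs)), sum(fs))

assert mean_given_f(8) == 18             # (704 + 20*8) / (40 + 8) = 18
```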
+
+OR
+
+The following table gives production yield per hectare of wheat of 100 farms of a village :
+
+| Production yield | 40-45 | 45-50 | 50-55 | 55-60 | 60-65 | 65-70 |
+|---|---|---|---|---|---|---|
+| No. of farms | 4 | 6 | 16 | 20 | 30 | 24 |
+
+Change the distribution to a 'more than' type distribution and draw its ogive.
+
+**Ans:**
+
+| Production yield | Number of farms |
+|---|---|
+| More than or equal to 40 | 100 |
+| More than or equal to 45 | 96 |
+| More than or equal to 50 | 90 |
+| More than or equal to 55 | 74 |
+| More than or equal to 60 | 54 |
+| More than or equal to 65 | 24 |
+
+Plotting the points (40, 100), (45, 96), (50, 90), (55, 74), (60, 54), (65, 24) and joining them gives the ogive.
+
+2
+
+2
+
+36. From a point on the ground, the angles of elevation of the bottom and the top of a tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower.
+
+**Ans:** Let height of tower = h m
+---PAGE_BREAK---
+
+In rt. $\triangle BCD \tan 45° = \frac{BC}{CD}$
+
+$$
+\left.
+\begin{array}{l}
+1 = \frac{20}{CD} \\
+CD = 20 \text{ m}
+\end{array}
+\right\}
+$$
+
+In rt. $\triangle ACD \tan 60° = \frac{AC}{CD}$
+
+$$
+\sqrt{3} = \frac{20 + h}{20}
+$$
+
+$$
+h = 20(\sqrt{3}-1)m
+$$
+
+corr fig. 1
+
+1
+
+1
+
+1
+
+1
+
+37. It can take 12 hours to fill a swimming pool using two pipes. If the pipe of larger diameter is used for four hours and the pipe of smaller diameter for 9 hours, only half of the pool can be filled. How long would it take for each pipe to fill the pool separately ?
+
+Ans: Let time taken by pipe of larger diameter to fill the tank be x hr
+Let time taken by pipe of smaller diameter to fill the tank be y hr
+
+A.T.Q
+
+$$
+\frac{1}{x} + \frac{1}{y} = \frac{1}{12}, \quad \frac{4}{x} + \frac{9}{y} = \frac{1}{2}
+$$
+
+Solving we get x = 20 hr y = 30 hr
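The pair $x = 20$ hr, $y = 30$ hr can be cross-checked by solving the linear system in $u = 1/x$, $v = 1/y$ (a sketch, not part of the marking scheme):

```python
# u + v = 1/12 and 4u + 9v = 1/2, so 5v = 1/2 - 4/12 = 1/6.
from fractions import Fraction as F

v = (F(1, 2) - 4 * F(1, 12)) / 5
u = F(1, 12) - v
x, y = 1 / u, 1 / v
print(x, y)   # times for the larger and smaller pipe, in hours
```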
+
+1+1
+
+1+1
+
+38. Prove that $\sqrt{5}$ is an irrational number.
+
+Ans: Let $\sqrt{5}$ be a rational number.
+
+$$
+\sqrt{5} = \frac{p}{q}, p \text{ & q are coprimes & } q \neq 0
+$$
+
+1
+
+$5q^2 = p^2 \Rightarrow 5$ divides $p^2 \Rightarrow 5$ divides $p$ also Let $p = 5a$, for some integer $a$
+
+1
+
+$5q^2 = 25a^2 \Rightarrow q^2 = 5a^2 \Rightarrow 5$ divides $q^2 \Rightarrow 5$ divides $q$ also
+
+1
+
+∴ 5 is a common factor of p and q, which is not possible as p, q are coprime.
+
+Hence the assumption is wrong; $\sqrt{5}$ is an irrational number.
+
+1
+
+39. Draw a circle of radius 3.5 cm. From a point P, 6 cm from its centre, draw two tangents to the circle.
+
+Ans: Correct construction of circle of radius 3.5 cm
+
+Correct construction of tangents.
+
+OR
+
+Construct a $\triangle ABC$ with AB = 6 cm, BC = 5 cm and $\angle B = 60°$.
+
+Now construct another triangle whose sides are $\frac{2}{3}$ times the corresponding sides of $\triangle ABC$.
+---PAGE_BREAK---
+
+**Ans:** Correct construction of given triangle
+Construction of Similar triangle
+
+1
+
+3
+
+40. A solid is in the shape of a hemisphere surmounted by a cone. If the radius of hemisphere and base radius of cone is 7 cm and height of cone is 3.5 cm, find the volume of the solid.
+
+$$ \left(\text{Take } \pi = \frac{22}{7}\right) $$
+
+**Ans:**
+
+$$
+\begin{aligned}
+& \text{Volume of solid} = \frac{1}{3} \times \frac{22}{7} \times (7)^2 \times 3.5 + \frac{2}{3} \times \frac{22}{7} \times (7)^3 \\
+&= \frac{22}{7} \times (7)^2 \times \left[ \frac{3.5}{3} + \frac{2}{3} \times 7 \right] \\
+&= 898\frac{1}{3} \text{ or } 898.33 \text{ cm}^3
+\end{aligned}
+$$
+
+2
+
+1
+
+1
\ No newline at end of file
diff --git a/samples/texts_merged/692782.md b/samples/texts_merged/692782.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e000c825bf9e5ec3ce858af1de2617be81121ad
--- /dev/null
+++ b/samples/texts_merged/692782.md
@@ -0,0 +1,220 @@
+
+---PAGE_BREAK---
+
+# Propagation with time-dependent Hamiltonian
+
+Gang Huang¹
+
+¹Johannes Gutenberg University of Mainz
+
+July 16, 2020
+
+## Abstract
+
+In this note, we introduce one basic concept in nonlinear optical spectroscopy: the time-dependent Hamiltonian. Then we give an example of the application of the time evolution operator.
+
+APS/123-QED
+
+Institute for Physics, Johannes Gutenberg University, Mainz, Germany gang@uni-mainz.de
+
+In optical spectroscopy, the choice we face is: (1) working with a time-independent Hamiltonian in a larger phase space that includes the matter and the radiation field (Shaul Mukamel, 1995); (2) using a time-dependent Hamiltonian in a smaller phase space of the matter alone.
+
+For any vector $|\psi\rangle$ in Hilbert space, its dynamical equation is the time-dependent Schrödinger equation:
+
+$$i\hbar \frac{\partial |\psi(t)\rangle}{\partial t} = \mathbf{H} |\psi(t)\rangle. \quad (1)$$
+
+Since
+
+$$|\psi(t)\rangle = \sum_l |f_l\rangle \langle f_l|\psi(t)\rangle, \quad (2)$$
+
+and
+
+$$\mathbf{H}|f_l\rangle = E_l|f_l\rangle, \quad (3)$$
+
+we have
+
+$$i\hbar \frac{\partial}{\partial t} \langle f_l |\psi(t)\rangle = E_l \langle f_l |\psi(t)\rangle,$$
+
+which is
+
+$$i\hbar \frac{\partial}{\partial t} c_l = E_l c_l,$$
+
+or
+
+$$\mathbf{H}\mathbf{c} = \mathbf{E}\mathbf{c}. \quad (4)$$
+
+We obtain the wave function at time $t$:
+
+$$\langle f_l | \psi(t) \rangle = e^{-\frac{i E_l (t-t_0)}{\hbar}} \langle f_l | \psi(t_0) \rangle, \quad (5)$$
+---PAGE_BREAK---
+
+where $\langle f_l | \psi(t_0) \rangle$ is the initial expansion coefficient of the wavefunction. We then have
+
+$$ |\psi(t)\rangle = \sum_l e^{-\frac{iE_l(t-t_0)}{\hbar}} |f_l\rangle \langle f_l|\psi(t_0)\rangle, \quad (6) $$
+
+Therefore, the evolution operator $U(t, t_0)$ can be defined as:
+
+$$ |\psi(t)\rangle \equiv U(t, t_0)|\psi(t_0)\rangle, $$
+
+or
+
+$$ U(t, t_0) = \sum_l |f_l\rangle e^{-\frac{iE_l(t-t_0)}{\hbar}} \langle f_l|. \quad (7) $$
+
+It immediately follows that
+
+$$ U(t_0, t_0) = 1. \quad (8) $$
+
+Eq. 7 gives the evolution operator in a specific representation, i.e., the basis of eigenstates of the Hamiltonian **H**.
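As a numerical sanity check, Eq. (7) can be evaluated for an arbitrary Hermitian Hamiltonian (a sketch with made-up matrix entries; $\hbar = 1$):

```python
# U(t, t0) = sum_l |f_l> exp(-i E_l (t - t0)) <f_l|, built from the eigenbasis of H.
import numpy as np

H = np.array([[1.0, 0.3 - 0.2j],
              [0.3 + 0.2j, 2.0]])           # any Hermitian Hamiltonian

def U(t, t0):
    E, F = np.linalg.eigh(H)                # columns of F are the eigenstates |f_l>
    phases = np.exp(-1j * E * (t - t0))
    return F @ np.diag(phases) @ F.conj().T

assert np.allclose(U(0.0, 0.0), np.eye(2))          # Eq. (8): U(t0, t0) = 1
M = U(1.7, 0.0)
assert np.allclose(M.conj().T @ M, np.eye(2))       # U is unitary
```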
+
+Here is one example of an application of the time evolution operator. Calculate the time evolution operator of a coupled 2-level system ($|\phi_a\rangle$ and $|\phi_b\rangle$) with energies $\epsilon_a$, $\epsilon_b$, and a coupling $V_{ab}$, represented by the Hamiltonian
+
+$$ \begin{bmatrix} \epsilon_a & V_{ab} \\ V_{ba} & \epsilon_b \end{bmatrix}. $$
+
+Solution: Denote
+
+$$ V_{ab} = V_{ba}^* = |V_{ab}|e^{-i\chi}(0 < \chi < \pi/2). \quad (9) $$
+
+Denote $\lambda$ as the eigenvalue of the energy and solve the secular equation
+
+$$ (\epsilon_a - \lambda)(\epsilon_b - \lambda) - |V_{ab}|^2 = 0, \quad (10) $$
+
+we get the eigenvalue of the energy: $\lambda_{\pm} = \frac{(\epsilon_a + \epsilon_b) \pm \sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}}{2}$. Then the eigenstates can be calculated.
+For $\lambda = \lambda_-$,
+
+$$ (\epsilon_b - \lambda_-)b = -|V_{ab}|e^{i\chi}a, \qquad (11) $$
+---PAGE_BREAK---
+
+
+i.e.,
+
+$$
+\begin{align*}
+\frac{b}{a} &= \frac{-|V_{ab}|e^{i\chi}}{\epsilon_b - \lambda_{-}} \\
+&= \frac{-2|V_{ab}|e^{i\chi}}{(\epsilon_b - \epsilon_a) + \sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}} \\
+&= \frac{-2|V_{ab}|e^{i\chi}/(\epsilon_a - \epsilon_b)}{-1 + \sqrt{1 + \frac{4|V_{ab}|^2}{(\epsilon_a - \epsilon_b)^2}}} \\
+&= \frac{-\tan 2\theta}{-1 + \sec 2\theta} e^{i\chi} \\
+&= -\frac{\cos\theta}{\sin\theta} e^{i\chi},
+\end{align*}
+$$
+
+where we have set
+
+$$
+\tan 2\theta \equiv \frac{2|V_{ab}|}{\epsilon_a - \epsilon_b}, \quad 0 < \theta < \frac{\pi}{2}. \tag{12}
+$$
+
+Therefore,
+
+$$
+|\psi_-\rangle = \left[ \begin{array}{c} -\sin\theta e^{-i\chi/2} \\ \cos\theta e^{i\chi/2} \end{array} \right]. \qquad (13)
+$$
+
+Similarly, replacing $\lambda_-$ by $\lambda_+$, we obtain
+
+$$
+|\psi_+\rangle = \left[ \begin{array}{c} \cos\theta e^{-i\chi/2} \\ \sin\theta e^{i\chi/2} \end{array} \right]. \qquad (14)
+$$
+
+Thus, from eq. 7, the time evolution operator is
+
+$$
+U(t, t_0) = |\psi_+\rangle\langle\psi_+|e^{-\frac{i}{\hbar}\lambda_+(t-t_0)} + |\psi_-\rangle\langle\psi_-|e^{-\frac{i}{\hbar}\lambda_-(t-t_0)}. \quad (15)
+$$
+
+Using eqs. (13) and (14), we obtain the expression of $U(t, t_0)$:
+
+$$
+U(t, t_0) =
+\begin{bmatrix}
+\cos^2\theta & \cos\theta\sin\theta e^{-i\chi} \\
+\cos\theta\sin\theta e^{i\chi} & \sin^2\theta
+\end{bmatrix}
+e^{-\frac{i}{\hbar}\lambda_{+}(t-t_0)}
++
+\begin{bmatrix}
+\sin^2\theta & -\cos\theta\sin\theta e^{-i\chi} \\
+-\cos\theta\sin\theta e^{i\chi} & \cos^2\theta
+\end{bmatrix}
+e^{-\frac{i}{\hbar}\lambda_{-}(t-t_0)}.
+\tag{16}
+$$
+
+Discussion: suppose the system is initially (at time $t_0 = 0$) in the $|\phi_a\rangle$ state, i.e., $|\psi(0)\rangle = |\phi_a\rangle$. We can calculate the probability of finding the system in the $|\phi_b\rangle$ state at time $t$:
+---PAGE_BREAK---
+
+$$
+\begin{align}
+P_{ba}(t) &= |\langle \phi_b | \psi(t) \rangle|^2 \tag{17} \\
+&= |\langle \phi_b | U(t, t_0) | \psi(0) \rangle|^2 \nonumber \\
+&= |\langle \phi_b | U(t, t_0) | \phi_a \rangle|^2. \tag{18}
+\end{align}
+$$
+
+Since
+
+$$
+\begin{align*}
+\langle \phi_b | U(t, t_0) | \phi_a \rangle
+&= \begin{bmatrix} 0 & 1 \end{bmatrix}
+\begin{bmatrix} U_{aa}(t) & U_{ab}(t) \\ U_{ba}(t) & U_{bb}(t) \end{bmatrix}
+\begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
+&= U_{ba}(t) \\
+&= \sin\theta\cos\theta\, e^{i\chi} \left( e^{-\frac{i}{\hbar}\lambda_{+}(t-t_0)} - e^{-\frac{i}{\hbar}\lambda_{-}(t-t_0)} \right) \\
+&= \tfrac{1}{2}\sin 2\theta\, e^{i\chi} e^{-i\alpha} \left( e^{-i\beta} - e^{i\beta} \right) \\
+&= -i \sin 2\theta\, e^{i(\chi - \alpha)} \sin\beta,
+\end{align*}
+$$
+
+where we have defined
+
+$$
+\alpha = \frac{(\epsilon_a + \epsilon_b)(t - t_0)}{2\hbar}, \quad \beta = \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}\,(t - t_0)}{2\hbar}. \tag{19}
+$$
+
+So
+
+$$
+\begin{align}
+|\langle \phi_b | U(t, t_0) | \phi_a \rangle|^2 &= \sin^2 2\theta \sin^2 \beta \nonumber \\
+&= \frac{4|V_{ab}|^2}{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2} \sin^2 \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}\,(t-t_0)}{2\hbar}. \tag{20}
+\end{align}
+$$
+
+This is known as the Rabi formula, and
+
+$$
+\Omega_R \equiv \frac{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}}{\hbar} \qquad (21)
+$$
+---PAGE_BREAK---
+
+is known as the Rabi frequency. For example, for alkali atoms the Rabi frequency is typically of the order of MHz. If we assume that $(\epsilon_a - \epsilon_b)^2$ and $4|V_{ab}|^2$ have the same order of magnitude, then
+
+$$ \frac{4|V_{ab}|^2}{\sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2}} \sim \sqrt{(\epsilon_a - \epsilon_b)^2 + 4|V_{ab}|^2} \sim \hbar\Omega_R, \quad \Omega_R \approx 10^6\ \mathrm{s}^{-1}. $$
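The Rabi formula can be checked numerically against the exact propagator obtained by diagonalization (a sketch with arbitrary test values for $\epsilon_a$, $\epsilon_b$, $|V_{ab}|$, $\chi$; $\hbar = 1$):

```python
# |U_ba(t)|^2 from exact diagonalization vs. the closed-form Rabi formula.
import numpy as np

eps_a, eps_b, V, chi = 1.0, 0.4, 0.25, 0.6   # arbitrary test values
H = np.array([[eps_a, V * np.exp(-1j * chi)],
              [V * np.exp(1j * chi), eps_b]])

def P_ba(t):
    E, F = np.linalg.eigh(H)
    U = F @ np.diag(np.exp(-1j * E * t)) @ F.conj().T
    return abs(U[1, 0]) ** 2                 # start in |phi_a> = (1, 0)

def rabi(t):
    d2 = (eps_a - eps_b) ** 2 + 4 * V ** 2
    return 4 * V ** 2 / d2 * np.sin(np.sqrt(d2) * t / 2) ** 2

ts = np.linspace(0.0, 20.0, 50)
assert np.allclose([P_ba(t) for t in ts], [rabi(t) for t in ts])
```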
+
+## References
+
+Mukamel, S. (1995). *Principles of Nonlinear Optical Spectroscopy*. Oxford University Press.
\ No newline at end of file
diff --git a/samples/texts_merged/7081601.md b/samples/texts_merged/7081601.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac196320060de82ae699faa87889eee1b1784ee0
--- /dev/null
+++ b/samples/texts_merged/7081601.md
@@ -0,0 +1,995 @@
+
+---PAGE_BREAK---
+
+# A Systolic Design Methodology with Application to Full-Search Block-Matching Architectures
+
+YEN-KUANG CHEN AND S.Y. KUNG
+
+Princeton University
+
+Received May 21, 1997; Revised November 5, 1997
+
+**Abstract.** We present a systematic methodology to support the design tradeoffs of array processors in several emerging issues, such as (1) high performance and high flexibility, (2) low cost and low power, (3) efficient memory usage, and (4) system-on-a-chip or the ease of system integration. This methodology is algebra-based, so it can cope with high-dimensional data dependence. The methodology consists of some transformation rules of data dependency graphs for facilitating flexible array designs. For example, two common partitioning approaches, LPGS and LSGP, could be unified under the methodology. It supports the design of high-speed and massively parallel processor arrays with efficient memory usage. More specifically, it leads to a novel *systolic cache* architecture comprising shift registers only (cache without tags). To demonstrate how the methodology works, we present several systolic design examples based on the block-matching motion estimation algorithm (BMA). By multiprojecting a 4D DG of the BMA to a 2D mesh, we can reconstruct several existing array processors. By multiprojecting a 6D DG of the BMA, a novel 2D systolic array can be derived that features significantly improved rates in data reusability (96%) and processor utilization (99%).
+
+## 1. Introduction
+
+The rapid progress in VLSI technology will soon reach more than 100 million transistors in a chip, implying tremendous computation power for many applications, e.g., real-time multimedia processing. Many important design issues emerge for the hardware design for these applications:
+
+1. High performance and high flexibility
+
+2. Low cost, low power, and efficient memory usage
+
+3. System-on-a-chip or the ease of system integration
+
+4. Fast design turn-around
+
+The challenge is that many of these design issues conflict with each other.
+
+In addressing these critical issues, we present a systematic methodology to support the design of a broad scope of array processors. This allows us to design and evaluate diverse designs easily and quickly. This algebraic methodology can handle algorithms with high-dimensional data dependency. It can exploit a high degree of data reusability and thus it can design high performance processor arrays with high efficiency in memory usage.
+
+In this paper, we focus on the block-matching motion estimation algorithm (BMA) [6] as an example. The basic idea of the BMA is to locate a displaced block, which is most similar to the current block, within the search area in the previous frame as shown in Fig. 1. Various criteria have been presented for the BMA. The most popular one is to find the least sum of the absolute difference (SAD) as
+
+$$ \text{Motion Vector} = \arg \min_{[u,v]} \{SAD[u, v]\} $$
+
+$$ SAD[u, v] = \sum_{i=1}^{n} \sum_{j=1}^{n} \left| s[i+u, j+v] - r[i, j] \right| $$
+
+$$ -p \leq u \leq p, -p \leq v \leq p $$
+
+where *n* is the block width and height, *p* is the absolute value of the maximum possible vertical/horizontal motion, *r*[i,j] is the pixel intensity (luminance value)
+---PAGE_BREAK---
+
+Fig. 1. In the process of the block-matching motion estimation algorithm, the current frame is divided into a number of non-overlapping current blocks, which are *n* pixels × *n* pixels. Each of the current blocks will be compared with (2*p* + 1) × (2*p* + 1) different displaced blocks in the search area of the previous frame.
+
+in the current block at (i, j), s[i+u, j+v] is the pixel intensity in the search area in the previous frame, and (u, v) represents the candidate displacement vector.
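For reference, a plain (non-systolic) implementation of the full-search SAD equations above can be sketched as follows; the function name and array layout are illustrative, not from the paper:

```python
# Exhaustive full-search block matching over all (u, v) candidates.
import numpy as np

def full_search(cur_block, search_area, p):
    """cur_block is n x n; search_area is the (n+2p) x (n+2p) window of the previous frame."""
    n = cur_block.shape[0]
    best_sad, best_uv = None, None
    for u in range(-p, p + 1):            # candidate vertical displacement
        for v in range(-p, p + 1):        # candidate horizontal displacement
            cand = search_area[u + p:u + p + n, v + p:v + p + n]
            sad = int(np.abs(cand.astype(int) - cur_block.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_uv = sad, (u, v)
    return best_uv, best_sad

# Demo: take the current block from displacement (2, -1) inside a random window.
rng = np.random.default_rng(0)
n, p = 4, 3
area = rng.integers(0, 256, size=(n + 2 * p, n + 2 * p))
cur = area[p + 2:p + 2 + n, p - 1:p - 1 + n].copy()
mv, sad = full_search(cur, area, p)
print(mv, sad)   # recovers the planted displacement with SAD 0
```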
+
+The BMA is extremely computationally intensive in current video coding [7, 15]. For example, a SAD for a block of 16 × 16 pixels requires 512 additions. For the search range {−32, ..., +32} × {−32, ..., +32}, there are 4225 SADs, and hence 2.16 × 10⁶ additions. For a video with 720 pixels × 480 pixels × 30 frames per second, 88 × 10⁹ additions per second would be required for real-time MPEG-1 video coding. In order to tackle such a computationally demanding problem in real time, putting massively parallel processing elements (PEs) together as a computing engine, like a systolic array, is often mandatory.
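The operation counts quoted above can be re-derived in a few lines (the per-SAD count of $2n^2$ follows the paper's convention of roughly one subtraction and one accumulation per pixel):

```python
# Re-deriving the BMA operation counts for n = 16, p = 32, CCIR 601-size frames.
n, p = 16, 32
adds_per_sad = 2 * n * n                    # ~512 operations per 16 x 16 SAD
sads_per_block = (2 * p + 1) ** 2           # 4225 candidate displacements
adds_per_block = adds_per_sad * sads_per_block
blocks_per_frame = (720 // n) * (480 // n)  # 45 x 30 = 1350 current blocks
adds_per_second = adds_per_block * blocks_per_frame * 30

assert sads_per_block == 4225
assert adds_per_block == 2_163_200          # ~2.16e6 additions per block
assert round(adds_per_second / 1e9) == 88   # ~88e9 additions per second
```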
+
+Such fully utilized processing power can process a tremendous amount of data. In the example, each pixel in the previous frame will be revisited thousands of times. If each visit involved a memory fetch, it would imply an extremely short memory read cycle time (32 ps) for real-time motion estimation of CCIR 601 pictures. So far, state-of-the-art memories fall far short of such demand. In order to make the data flow keep up with the processing power, memory access localities must be exploited. Particularly, data reusability plays a critical role in the systolic design of many important applications.
+
+In order to find a good tradeoff point between several conflicting design goals, a systematic/comprehensive design methodology must be used. Since most multimedia signal processing algorithms have the following features: localized operations, intensive computation, and matrix operation, high-level mapping methodologies are proving very efficient. (For the reader's convenience, in the Appendices, we review the basic systolic design notations and methodology.)
+
+**1.1. Previous Approaches for Systolic BMA Design**
+
+Because the BMA for a single current block is a 4-dimensional algorithm (as shown in Appendix A.1), it is impossible to get a 2D or 1D system implementation by one projection. Conventionally, the BMA is decomposed into subparts, which (1) are individually defined over index spaces with dimensions less than or equal to three and (2) are suitable for the canonical projection. The functional decomposition method simplifies the multi-dimensional time schedule and projection problem [5, 10, 16, 20]. For example, one such decomposition is to take *u* out first and consider it later as follows:
+
+$$
+\begin{equation}
+\begin{aligned}
+SAD[v] = & \sum_{i=1}^{n} \sum_{j=1}^{n} |s[i, j + v] - r[i, j]| \\
+& - p \le v \le p
+\end{aligned}
+\end{equation}
+$$
+
+As a result, we can get several existing DGs as shown in Fig. 2.
+
+There are many arrays in [10, 16] that can be derived by canonically projecting the 3D DG shown in Fig. 2. However, most of the designs require a huge amount of memory bandwidth. For example, the design shown in Fig. 3(a) can be derived by projecting the DG in Fig. 2 along the *v*-direction. This design needs 16 bytes of data per cycle. Without sufficient memory bandwidth, the PEs are idle most of the time. Hence, most of these designs are not practical.
+
+Another method (called *index fixing*) fixes one loop index at a time, repeatedly. When two or fewer loop indices remain, the remaining algorithm can be easily transformed into a systolic design [4, 5, 10, 16]. For example, the design in Fig. 3(a) can also be derived by fixing the indices *u* and *v* of the 4-dimensional DG.
+---PAGE_BREAK---
+
+Fig. 2. Two 3D DG examples of the BMA [2, 10, 16].
+
+Fig. 3. Previous array design examples. (a) Projected without buffers. (b) Projected with buffers [8].
+
+A breakthrough design that greatly reduces the I/O bandwidth by exploiting *data reusability* is shown in [8] (cf. Fig. 3(b)). It carries some extra buffers. The advantage of this design is that the data are input serially such that the hunger of the I/O is greatly reduced. The amount of input data per operation is only 1 byte. Furthermore, shift registers instead of random access memories are used here such that the control is easier, the buffer area is smaller, and the data access rate is higher. Moreover, because the search windows of the current blocks overlap each other, a simple FIFO (based on this design) is proposed to capture more data reusability and thus further reduce the I/O bandwidth [14].
+
+However, the design shown in Fig. 3(b) is one of the designs that is criticized for inefficiency because of unnecessary computations. The inefficiency comes from the following problem: in order to have only one I/O port for the whole array, the data running through the whole array must be unified. Hence, in this design, some processors may receive useless data and perform unnecessary computations (or no real computation) [1, 8]. The utilization rate = $\frac{(2p + 1)^2}{(n + 2p)^2}$.
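For a concrete sense of this utilization rate, it can be evaluated for a hypothetical parameter choice $n = 16$, $p = 16$ (our example values, not from the paper), which gives roughly 47%:

```python
# Utilization rate (2p+1)^2 / (n+2p)^2 of the design of Fig. 3(b).
def utilization(n, p):
    return (2 * p + 1) ** 2 / (n + 2 * p) ** 2

print(utilization(16, 16))   # 1089/2304, about 0.47
```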
+---PAGE_BREAK---
+
+Later, a 2D array design prevents some unnecessary data running through every PE by inputting the data from two memory ports [1]. It not only needs low I/O bandwidth but can also achieve high computational power.
+
+A transformation of snapshot (called *slice and tile*) is employed to produce different forms of DGs [2]. There will be a reduction of one dimension in the DG. For example, an original 3D BMA would become a 2D DG. After that, canonical single projection approaches can be used. This technique can re-design most of the existing architectures in graphs. However, the memory organization must be designed via a careful bookkeeping system on the information about the interface between subparts.
+
+## 1.2. Overview of this Work
+
+In this paper, we present a systematic methodology, multiprojection, to support the design of a broad scope of array processors. Many previous approaches, such as *functional decomposition*, *index fixing*, and *slice and tile*, can be regarded as its special cases.
+
+We also propose several useful rules essential for the implementation of multiprojection. For instance, by applying LPGS (locally parallel globally sequential) or LSGP (locally sequential globally parallel) during the multiprojection, the design can enjoy expandabilities without compromising the data reusability. Other rules for reducing the number of buffers are also made available. The rules may be adopted to improve computational power and flexibilities and reduce I/O requirement and control overhead.
+
+We shall demonstrate how the multiprojection can achieve this goal, based on a systolic design example of the BMA. Our methodology is applied to design (1) massively parallel systolic architectures and (2) fast *systolic cache* architectures for the MPEG application.
+
+# 2. Multiprojection Methodology for Optimal Systolic Design
+
+Conventional single projection can only map an $n$-dimensional DG directly onto an $(n-1)$-dimensional SFG. However, due to current VLSI technology constraints, it is hard to implement a 3D or 4D systolic array. In order to map an $n$-dimensional DG directly onto an $(n-k)$-dimensional SFG without DG decomposition, a multi-dimensional projection method is introduced [11, 17, 18, 24].
+
+The projection method, which maps an $n$-dimensional DG to an $(n-1)$-dimensional SFG, can be applied $k$ times and thus reduces the dimension of the array to $n-k$. More elaborately, a similar projection method can be used to map an $(n-1)$-dimensional SFG into an $(n-2)$-dimensional SFG, and so on. This scheme is called *multiprojection*.
+
+The *functional decomposition*, *index fixing*, and *slice and tile* are special cases of multiprojection. Multiprojection can not only obtain the DGs and SFGs from functional decomposition but can also obtain other 3D DGs, 2D SFGs, and other designs that are difficult to obtain with other methods.
+
+Multiprojection is introduced here to design array processors which satisfy most of the following design criteria: (1) increase the computational power, (2) reduce the I/O requirement, (3) reduce the control overhead, and (4) have some expandability. For example, a localized recursive algorithm for block matching has been derived so that the original 6D BMA is transformed into a 3D algorithm [22]. (We will see why the BMA is 6-dimensional later in Section 2.1 and Section 4.3.) From it, two designs are derived: a 1D systolic array and a 2D semi-systolic array. Both of the arrays are reported to achieve an almost 100% utilization rate. Nevertheless, since the original 6D is folded into 3D, the designs have more constraints. The former requires a massive number of I/O ports. The latter is only useful when the size of the current block ($n$) is equal to twice the search range ($2p$) and requires a massive amount of data broadcasting.
+
+## 2.1. High Dimensional Algorithm
+
Before we jump into the discussion of multiprojection, it is advisable to introduce the concept of high-dimensional algorithms first. An algorithm is said to be $n$-dimensional if it has $n$-deep recursive loops in nature. For example, a block-matching algorithm for the whole frame is 6-dimensional, as shown in Fig. 4(a). The indices $x, y, u, v, i, j$ make the algorithm 6D.
+
It is very important to respect the *read-after-read* data dependency. If a datum could be read time after time by hundreds of operations and those operations are placed close together, then a small cache can eliminate a large number of external memory accesses.
+---PAGE_BREAK---
+
Fig. 4. (a) The 6D BMA, where $N_v$ is the number of current blocks in the vertical direction, $N_h$ is the number of current blocks in the horizontal direction, $n$ is the block size, and $p$ is the search range. The indices $x, y, u, v, i, j$ make the algorithm 6D. The inner four loops are exactly those shown in Fig. 22. (b) A 3D BMA that folds two loops in (a) into one loop. (c) On the other hand, a 7D BMA (7-dimensional in $x, y, u, v, i, j_1, j_2$) can be constructed by splitting the inmost loop index $j$ of the original algorithm into two indices $j_1$ and $j_2$.
+
+Since $s[x*n+i+u, y*n+j+v]$ will be read time after time for different $x, y, u, v, i, j$ combinations, this algorithm is 6D.
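The six nested loops can be sketched in plain Python. This is a minimal sketch, not the paper's code: the function name, the frame layout, and the border-padding convention for the reference frame are assumptions made so the `s[.]` indices never go out of range.

```python
def full_search_bma(cur, ref, Nv, Nh, n, p):
    """Exhaustive 6D BMA: loops over x, y, u, v, i, j.

    cur - current frame, (Nv*n) x (Nh*n) list of lists
    ref - reference frame padded by p on every side,
          (Nv*n + 2p) x (Nh*n + 2p), so indices never go negative
    Returns the best displacement (u, v) for each current block (x, y).
    """
    best = {}
    for x in range(Nv):                      # loop 1: block row
        for y in range(Nh):                  # loop 2: block column
            best_sad, best_uv = None, None
            for u in range(-p, p + 1):       # loop 3: vertical displacement
                for v in range(-p, p + 1):   # loop 4: horizontal displacement
                    sad = 0
                    for i in range(n):       # loop 5: pixel row
                        for j in range(n):   # loop 6: pixel column
                            s = ref[p + x * n + i + u][p + y * n + j + v]
                            r = cur[x * n + i][y * n + j]
                            sad += abs(s - r)
                    if best_sad is None or sad < best_sad:
                        best_sad, best_uv = sad, (u, v)
            best[(x, y)] = best_uv
    return best
```

Note how `ref[p + x*n + i + u][p + y*n + j + v]` is re-read by many $(x, y, u, v, i, j)$ combinations; that read-after-read reuse is exactly what makes the algorithm 6D.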
+
On the other hand, if we ignore the read-after-read data dependency, the DG has only a two-dimensional read-after-write dependency based on the variable SAD. Although the DG becomes lower dimensional, it would be harder to track the data reusability and reduce the amount of memory accesses.
+---PAGE_BREAK---
+
*Transformation to Lower Dimension.* As shown in Fig. 4(b), two loops are folded into one loop to make the algorithm lower-dimensional [22].
+
The DG becomes 3-dimensional because there are only 3 loop indices. The number of projections in multiprojection becomes smaller, and it is easier to optimize the scheduling. However, in this modified algorithm, the operation regarding $(u, v+1)$ must be executed directly after the operation regarding $(u, v)$. This makes the algorithm less flexible: efficient, expandable, and low-I/O designs are harder to achieve. Besides, folding the 6D DG makes it benefit less from some useful graph transformations, as shown in Section 3.
+
*Transformation to Higher Dimension.* We can also construct artificial indices to turn a lower-dimensional DG into a higher-dimensional one. For example, the inmost loop of the original algorithm could be modified as shown in Fig. 4(c).
+
The indices $x, y, u, v, i, j_1, j_2$ make this algorithm 7-dimensional. This approach is not generally recommended because the number of multiprojection steps needed to reach a low-dimensional design increases. However, this method provides the option of executing in the order $j = \{1, N/2 + 1, 2, N/2 + 2, \ldots\}$ instead of $j = \{1, 2, \ldots, N/2, N/2 + 1, \ldots\}$ (simply exchanging the order of the $j_1$ loop and the $j_2$ loop). As we will see later in Section 3.7, LSGP and LPGS partitioning can be carried out via multiprojection after a DG is transformed into an artificial higher-dimensional DG.
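The index split and the resulting traversal orders can be sketched in a few lines. This is an illustrative sketch (0-based indices, helper names assumed) of the decomposition $j = j_1 + j_2 \cdot N/2$:

```python
def interleaved_order(N):
    """Order produced when the j1 loop is outermost: j = j1 + j2*(N//2).

    Yields 0, N//2, 1, N//2 + 1, ... (the paper's interleaved order,
    written 0-based here).
    """
    return [j1 + j2 * (N // 2) for j1 in range(N // 2) for j2 in range(2)]

def sequential_order(N):
    """Exchanging the two loops restores the plain order 0, 1, ..., N-1."""
    return [j1 + j2 * (N // 2) for j2 in range(2) for j1 in range(N // 2)]
```

Both orders visit each $j$ exactly once; only the loop nesting differs, which is why the 7D formulation buys scheduling freedom without changing the computation.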
+
+## 2.2. Algebraic Formulation of Multiprojection
+
The process of multiprojection can be written as a number of single projections using the same algebraic formulation as introduced in Appendix A.1. In this section, we explain how to project an $(n-1)$-dimensional SFG to an $(n-2)$-dimensional SFG. The potential difficulties of this mapping are (1) the presence of delay edges in the $(n-1)$-dimensional SFG, and (2) the delay management of the edges in the $(n-2)$-dimensional SFG.
+
*Double-Projection.* For simplicity, we first introduce how to obtain a 2D SFG from a 4D DG by multiprojection.
+
+**Step 1** We project the 4D DG into a 3D SFG by projection vector $\vec{d}_4$ (4 × 1 column vector), projection matrix $\mathbf{P}_4$ (3 × 4 matrix), and scheduling vector $\vec{s}_4$ (4 × 1 column vector) with three constraints: (1) $\vec{s}_4^T \vec{d}_4 > 0$, (2) $\mathbf{P}_4 \vec{d}_4 = 0$, and (3) $\vec{s}_4^T \vec{e}_i \ge 0 \ \forall i$. The computation node $\underline{\mathcal{C}}$ (4 × 1) in 4D DG will be mapped into the 3D SFG by
+
+$$ \begin{bmatrix} T_3(\underline{\mathcal{C}}) \\ \underline{n}_3(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \vec{s}_4^T \\ \mathbf{P}_4 \end{bmatrix} \underline{\mathcal{C}} $$
+
+The data dependence edges will be mapped into the 3D SFG by
+
+$$ \begin{bmatrix} D_3(\vec{e}_i) \\ \vec{m}_3(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \vec{s}_4^T \\ \mathbf{P}_4 \end{bmatrix} \vec{e}_i $$
+
+**Theorem 1.** $D_3(\vec{e}_i) \neq 0$ for any $\vec{m}_3(\vec{e}_i) = 0$.
+
*Proof:* If $\vec{m}_3(\vec{e}_i) = 0$, then $\vec{e}_i$ is proportional to $\vec{d}_4$; that is, $\vec{e}_i = \alpha\vec{d}_4$ for some $\alpha \neq 0$. The basic constraint $\vec{s}_4^T\vec{d}_4 > 0$ implies $\alpha\vec{s}_4^T\vec{d}_4 \neq 0$; therefore, $D_3(\vec{e}_i) = \vec{s}_4^T\vec{e}_i \neq 0$. $\square$
+
+**Step 2** We project the 3D SFG into a 2D SFG by projection vector $\vec{d}_3$ (3 × 1 column vector), projection matrix $\mathbf{P}_3$ (2 × 3 matrix), and scheduling vector $\vec{s}_3$ (3 × 1 column vector) with three constraints: (1) $\vec{s}_3^T\vec{d}_3 > 0$, (2) $\mathbf{P}_3\vec{d}_3 = 0$, and (3) $\vec{s}_3^T\vec{m}_3(\vec{e}_i) \ge 0 \ \forall \vec{e}_i$ for broadcasting data. Or, $\vec{s}_3^T\vec{m}_3(\vec{e}_i) > 0 \ \forall \vec{e}_i$ for non-broadcasting data.
+The computation node $\underline{n}_3(\underline{\mathcal{C}})$ (3 × 1) in the 3D SFG, which is mapped from $\underline{\mathcal{C}}$ (4 × 1) in the 4D DG, will be mapped into the 2D SFG by
+
+$$ \begin{bmatrix} T'_2(\underline{\mathcal{C}}) \\ \underline{n}'_2(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \vec{s}_3^T \\ \mathbf{P}_3 \end{bmatrix} \underline{n}_3(\underline{\mathcal{C}}) $$
+
+The data dependence edges in the 3D SFG will further be mapped into the 2D SFG by
+
+$$ \begin{bmatrix} D'_2(\vec{e}_i) \\ \vec{m}'_2(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \vec{s}_3^T \\ \mathbf{P}_3 \end{bmatrix} \vec{m}_3(\vec{e}_i) $$
+
+**Step 3** We can combine the results from the previous 2 steps. Let allocation matrix $\mathbf{A} = \mathbf{P}_3\mathbf{P}_4$ and scheduling vector $\mathbf{S}^T = \vec{s}_3^T\mathbf{P}_4 + M_4\vec{s}_4^T$. ($M_4 \ge 1 + (N_4 - 1)\vec{s}_3^T\vec{d}_3$ where $N_4$ is the maximum number of nodes along the $\vec{d}_3$ direction in the 3D SFG.)
+
+• Node mapping:
+
+$$ \begin{bmatrix} T_2(\underline{\mathcal{C}}) \\ \underline{n}_2(\underline{\mathcal{C}}) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \underline{\mathcal{C}} $$
+---PAGE_BREAK---
+
where $\underline{n}_2(\underline{\mathcal{C}}) = \mathbf{A}\underline{\mathcal{C}}$ specifies where the original computational node $\underline{\mathcal{C}}$ is mapped, and $T_2(\underline{\mathcal{C}}) = \mathbf{S}^T\underline{\mathcal{C}}$ specifies when the computation node is to be executed.
+
• Edge mapping:
+
+$$ \begin{bmatrix} D_2(\vec{e}_i) \\ \vec{m}_2(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \vec{e}_i $$
+
+where $\vec{m}_2(\vec{e}_i) = \mathbf{A}\vec{e}_i$ means where the original data dependency relationship is mapped. $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i$ means how much time delay should be in the edge $\vec{m}_2(\vec{e}_i)$.
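As a concrete numeric check of Step 3, the combined allocation matrix and scheduling vector can be computed directly. The axis-aligned projection vectors below are hypothetical choices for illustration, not a design from the paper:

```python
def matmul(A, B):
    """Matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def vecmat(v, Mat):
    """Row vector times matrix: v^T Mat."""
    return [sum(vi * row[j] for vi, row in zip(v, Mat))
            for j in range(len(Mat[0]))]

# Hypothetical axis-aligned choices (assumptions, not the paper's example):
P4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]   # drops the 4th index
s4 = [0, 0, 0, 1]
P3 = [[1, 0, 0], [0, 1, 0]]                        # drops the 3rd index
s3 = [0, 0, 1]
d3 = [0, 0, 1]

N4 = 4                      # assumed number of nodes along d3 in the 3D SFG
s3d3 = sum(a * b for a, b in zip(s3, d3))
M4 = 1 + (N4 - 1) * s3d3    # smallest admissible M4

A = matmul(P3, P4)                                    # allocation matrix A = P3 P4
S = [a + M4 * b for a, b in zip(vecmat(s3, P4), s4)]  # S^T = s3^T P4 + M4 s4^T
```

With these choices, `A == [[1, 0, 0, 0], [0, 1, 0, 0]]` and `S == [0, 0, 1, 4]`: a DG node $(i, j, u, v)$ is mapped to processor $(i, j)$ and executed at time $u + 4v$, so the $u$ axis is swept before stepping in $v$, as Theorem 3 requires.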
+
**Constraints for Data and Processor Availability.** Every dependent datum comes from a previous computation. To ensure data availability, every edge must carry at least one unit of delay unless it broadcasts data.
+
+**Theorem 2.** **Data Availability.** $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i \ge 0$ if $\vec{e}_i$ is for broadcasting data. $D_2(\vec{e}_i) = \mathbf{S}^T\vec{e}_i > 0$ if $\vec{e}_i$ is not for broadcasting data.
+
+**Proof:**
+
+$$
+\begin{align*}
+D_2(\vec{e}_i) &= \mathbf{S}^T \vec{e}_i \\
+&= (\vec{s}_3^T \mathbf{P}_4 + M_4 \vec{s}_4^T) \vec{e}_i \\
+&= \vec{s}_3^T \mathbf{P}_4 \vec{e}_i + M_4 \vec{s}_4^T \vec{e}_i \\
+&\geq \vec{s}_3^T \mathbf{P}_4 \vec{e}_i \\
+&\quad (\text{from the constraint (3) in step 1}) \\
+&> 0 \quad (\text{or, } \geq 0) \\
+&\quad (\text{from the constraint (3) in step 2})
+\end{align*}
+$$
+
+□
+
Two computational nodes that are mapped into a single processor cannot be executed at the same time. To ensure processor availability, $T_2(\underline{\mathcal{C}}_i) \neq T_2(\underline{\mathcal{C}}_j)$ must hold for any $\underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ with $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$.
+
+**Theorem 3.** **Processor Availability.** $T_2(\underline{\mathcal{C}}_i) \neq T_2(\underline{\mathcal{C}}_j)$ for any $\underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j$ and $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$.
+
+**Proof:** For any $\underline{n}_2(\underline{\mathcal{C}}_i) = \underline{n}_2(\underline{\mathcal{C}}_j)$
+$\Rightarrow \mathbf{P}_3\underline{n}_3(\underline{\mathcal{C}}_i) - \mathbf{P}_3\underline{n}_3(\underline{\mathcal{C}}_j) = 0$
+$\Rightarrow \underline{n}_3(\underline{\mathcal{C}}_i) - \underline{n}_3(\underline{\mathcal{C}}_j)$ is proportional to $\vec{d}_3$.
+$\Rightarrow \underline{n}_3(\underline{\mathcal{C}}_i) - \underline{n}_3(\underline{\mathcal{C}}_j) = \mathbf{P}_4(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) = \alpha\vec{d}_3$
+
Since $N_4$ is the maximum number of nodes along the $\vec{d}_3$ direction in the 3D SFG, $\alpha \in \{0, \pm 1, \pm 2, \dots, \pm(N_4-1)\}$.
+
$$
\begin{align*}
T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \mathbf{S}^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&= (\vec{s}_3^T \mathbf{P}_4 + M_4 \vec{s}_4^T)(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&= \vec{s}_3^T \mathbf{P}_4(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&= \alpha \vec{s}_3^T \vec{d}_3 + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j)
\end{align*}
$$
+
+1. If $\mathbf{P}_4\underline{\mathcal{C}}_i = \mathbf{P}_4\underline{\mathcal{C}}_j$, then $\alpha = 0$ and
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
+&\neq 0 && (\text{by Theorem 1})
+\end{align*}
+$$
+
2. If $\mathbf{P}_4\underline{\mathcal{C}}_i \neq \mathbf{P}_4\underline{\mathcal{C}}_j$, then $\alpha \in \{\pm 1, \dots, \pm(N_4-1)\}$
+
+(a) If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) = 0$, then
+
+$$
+\begin{align*}
+T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \alpha \vec{s}_3^T \vec{d}_3 \\
+&\neq 0 && (\text{by the basic constraint of step 2})
+\end{align*}
+$$
+
+(b) If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \neq 0$, then by assuming $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) > 0$ without losing generality, we have
+
$$
\begin{align*}
T_2(\underline{\mathcal{C}}_i) - T_2(\underline{\mathcal{C}}_j) &= \alpha \vec{s}_3^T \vec{d}_3 + M_4 \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&\geq \alpha \vec{s}_3^T \vec{d}_3 \\
&\quad + (1 + (N_4-1)\vec{s}_3^T \vec{d}_3)\vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&= (\alpha + (N_4-1)\vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j))\vec{s}_3^T \vec{d}_3 \\
&\quad + \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&\geq (\alpha + N_4 - 1)\vec{s}_3^T \vec{d}_3 + \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&\quad (\because \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \geq 1) \\
&\geq 0 + \vec{s}_4^T (\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) \\
&\quad (\because \alpha + N_4 - 1 \geq 0) \\
&> 0
\end{align*}
$$
+
If $\vec{s}_4^T(\underline{\mathcal{C}}_i - \underline{\mathcal{C}}_j) < 0$, then let $\underline{\mathcal{C}}'_i = \underline{\mathcal{C}}_j$ and $\underline{\mathcal{C}}'_j = \underline{\mathcal{C}}_i$. The condition $T_2(\underline{\mathcal{C}}'_i) \neq T_2(\underline{\mathcal{C}}'_j)$ for any $\underline{\mathcal{C}}'_i \neq \underline{\mathcal{C}}'_j$ with $\underline{n}_2(\underline{\mathcal{C}}'_i) = \underline{n}_2(\underline{\mathcal{C}}'_j)$ holds by the same argument, so the proof carries over.
+
The theorem follows from cases 1, 2(a), and 2(b). $\square$
+
*Multiprojection of an n-Dimensional DG into a k-Dimensional SFG.*
+---PAGE_BREAK---
+
**Step 1** Let the $n$-dimensional SFG be defined as the $n$-dimensional DG. That is, $\underline{n}_n(\mathcal{C}_x) = \mathcal{C}_x$ and $\vec{m}_n(\vec{e}_i) = \vec{e}_i$.
+
**Step 2** We project the $l$-dimensional SFG into an $(l-1)$-dimensional SFG by projection vector $\vec{d}_l$ ($l$ × 1), projection matrix $\mathbf{P}_l$ ($(l-1)$ × $l$), and scheduling vector $\vec{s}_l$ ($l$ × 1) with the basic constraints $\vec{s}_l^T \vec{d}_l > 0$, $\mathbf{P}_l \vec{d}_l = 0$, and $\vec{s}_l^T \vec{m}_l(\vec{e}_i) \ge 0$ (or $> 0$) $\forall \vec{e}_i$. The computation node $\mathcal{C}_i$ ($l$ × 1) and the data dependence edge $\vec{m}_l(\vec{e}_i)$ ($l$ × 1) in the $l$-dimensional SFG will be mapped into the $(l-1)$-dimensional SFG by
+
+$$
+\underline{n}_{l-1}(\underline{\mathcal{c}}_i) = \mathbf{P}_l \underline{n}_l(\underline{\mathcal{c}}_i) \quad (1)
+$$
+
+$$
+\vec{m}_{l-1}(\vec{e}_i) = \mathbf{P}_l \vec{m}_l(\vec{e}_i) \quad (2)
+$$
+
**Step 3** After $(n-k)$ projections, the results can be combined. The allocation matrix will be

$$
\mathbf{A} = \mathbf{P}_{k+1} \mathbf{P}_{k+2} \cdots \mathbf{P}_n \qquad (3)
$$
+
+The scheduling vector will be
+
$$
\begin{align}
\mathbf{S}^T &= \vec{s}_{k+1}^T \mathbf{P}_{k+2} \mathbf{P}_{k+3} \cdots \mathbf{P}_n \nonumber \\
&\quad + M_{k+2} \vec{s}_{k+2}^T \mathbf{P}_{k+3} \mathbf{P}_{k+4} \cdots \mathbf{P}_n \nonumber \\
&\quad + M_{k+2} M_{k+3} \vec{s}_{k+3}^T \mathbf{P}_{k+4} \mathbf{P}_{k+5} \cdots \mathbf{P}_n \nonumber \\
&\vdots \nonumber \\
&\quad + M_{k+2} M_{k+3} \cdots M_n \vec{s}_n^T \tag{4}
\end{align}
$$
+
where $M_l \ge 1 + (N_l - 1)\vec{s}_{l-1}^T \vec{d}_{l-1}$ and $N_l$ is the maximum number of nodes along the $\vec{d}_{l-1}$ direction in the $l$-dimensional SFG. Therefore,
+
+• Node mapping will be:
+
$$
\begin{bmatrix} T_k(\underline{\mathcal{c}}_i) \\ \underline{n}_k(\underline{\mathcal{c}}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \underline{\mathcal{c}}_i \quad (5)
$$
+
+• Edge mapping will be:
+
$$
\begin{bmatrix} D_k(\vec{e}_i) \\ \vec{m}_k(\vec{e}_i) \end{bmatrix} = \begin{bmatrix} \mathbf{S}^T \\ \mathbf{A} \end{bmatrix} \vec{e}_i \quad (6)
$$
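The composition in Eqs. (3) and (4) can be folded step by step using the recurrences $\mathbf{A} \leftarrow \mathbf{P}_l \mathbf{A}$ and $\mathbf{S}^T \leftarrow \vec{s}_l^T \mathbf{A}_{\text{prev}} + M_{l+1}\mathbf{S}^T_{\text{prev}}$, which expand exactly to the two equations. A sketch (the function name and list-of-lists representation are illustrative assumptions):

```python
def combine_projections(steps):
    """Fold a chain of single projections into (A, S) per Eqs. (3)-(4).

    steps lists the projections from the first one applied (dimension n)
    down to the last (dimension k+1); each entry is (P, s, M), where M is
    the delay-scaling constant of that stage (ignored for the first entry).
    """
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def vecmat(v, Mat):
        return [sum(vi * row[j] for vi, row in zip(v, Mat))
                for j in range(len(Mat[0]))]

    P, s, _ = steps[0]
    A, S = P, list(s)
    for P, s, M in steps[1:]:
        # S must be updated with the previous A before A itself is updated.
        S = [a + M * b for a, b in zip(vecmat(s, A), S)]
        A = matmul(P, A)
    return A, S
```

For two stages this reduces to the double-projection result $\mathbf{S}^T = \vec{s}_3^T\mathbf{P}_4 + M_4\vec{s}_4^T$ and $\mathbf{A} = \mathbf{P}_3\mathbf{P}_4$.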
+
**Constraints for Processor and Data Availability.** If no transmittance property is assumed, every edge must have at least one delay because every dependent datum comes from a previous computation. It is easy to show that data availability is satisfied, i.e., $D_k(\vec{e}_i) > 0 \ \forall i$.
+
Following the same proof as Theorem 3, one can easily show that processor availability is also satisfied, i.e., $T_k(\underline{\mathcal{c}}_i) \neq T_k(\underline{\mathcal{c}}_j)$ for any $\underline{\mathcal{c}}_i \neq \underline{\mathcal{c}}_j$ with $\underline{n}_k(\underline{\mathcal{c}}_i) = \underline{n}_k(\underline{\mathcal{c}}_j)$.
+
## 2.3. Optimization in Multiprojection
+
After the projection directions are fixed, the structure of the array is determined. The remaining part of the design is to find a scheduling that completes the computation in minimal time under the processor and data availability constraints. That is,
+
+$$
+\min_{\mathbf{S}} \left( \max_{\underline{\mathcal{c}}_x, \underline{\mathcal{c}}_y} \{\mathbf{S}^T (\underline{\mathcal{c}}_x - \underline{\mathcal{c}}_y)\} \right)
+$$
+
+under the following constraints:
+
+1. $\mathbf{S}^T\vec{e}_i > 0 \quad \forall \vec{e}_i$ (Data Availability)
+
2. $\mathbf{S}^T\underline{\mathcal{C}}_i \neq \mathbf{S}^T\underline{\mathcal{C}}_j \quad \forall \underline{\mathcal{C}}_i \neq \underline{\mathcal{C}}_j, \ \mathbf{A}\underline{\mathcal{C}}_i = \mathbf{A}\underline{\mathcal{C}}_j$ (Processor Availability)
+
A method using quadratic programming techniques has been proposed to tackle this optimization problem [26]. However, it takes non-polynomial time to find the optimal solution. A polynomial-time heuristic approach, which uses the branch-and-bound technique and tries to solve the problem by linear programming, has also been proposed [25].
+
Here, we propose another heuristic procedure to find a near-optimal scheduling in our multiprojection method. In each single projection, from $i$ dimensions to $(i-1)$ dimensions, find an $\vec{s}_i$ by
+
$$
\vec{s}_i = \arg\min_{\vec{s}} \left\{
\max_{\underline{n}_i(\underline{\mathcal{c}}_x), \underline{n}_i(\underline{\mathcal{c}}_y)}
\left\{
\vec{s}^T [\underline{n}_i(\underline{\mathcal{c}}_x) - \underline{n}_i(\underline{\mathcal{c}}_y)]
\right\}
\right\}
\quad \forall \underline{\mathcal{c}}_x, \underline{\mathcal{c}}_y \in \text{DG} \quad (7)
$$
+
+under the following constraints:
+
1. $\vec{s}_i^T \vec{d}_i > 0$

2. $\vec{s}_i^T \vec{m}_i(\vec{e}_j) \ge 0 \quad \forall j$ if the $(i-1)$-dimensional SFG is not the final goal;

$\vec{s}_i^T \vec{m}_i(\vec{e}_j) > 0 \quad \forall j$ if the $(i-1)$-dimensional SFG is the final goal.
+
This procedure will find a linear scheduling vector in polynomial time when the given processor allocation function is linear. Although we have no proof of optimality yet, several design examples show that our method can provide optimal scheduling when the DG is shift-invariant and the projection directions are along the axes. (Nevertheless, it is still an NP-hard problem over all possible processor allocation and time allocation functions.)
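For a tiny SFG, the minimization in Eq. (7) can be checked by exhaustive search over small integer scheduling vectors. This brute-force sketch (illustrative names; it stands in for the heuristic, and is only feasible for toy sizes) enforces the non-broadcast data availability constraint $\vec{s}^T\vec{e} > 0$:

```python
from itertools import product

def best_linear_schedule(nodes, edges, bound=3):
    """Exhaustively search integer scheduling vectors s with |s_i| <= bound.

    Minimizes the makespan max_{x,y} s^T (x - y) subject to s^T e > 0
    for every dependence edge e.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    dim = len(nodes[0])
    best = None
    for s in product(range(-bound, bound + 1), repeat=dim):
        if not all(dot(s, e) > 0 for e in edges):
            continue  # violates data availability
        span = max(dot(s, [a - b for a, b in zip(x, y)])
                   for x in nodes for y in nodes)
        if best is None or span < best[0]:
            best = (span, s)
    return best
```

For a 4 × 4 grid with unit dependence edges $(1,0)$ and $(0,1)$, the search returns the expected $\vec{s} = (1, 1)$ with makespan 6.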
+---PAGE_BREAK---
+
+**Table 1.** Graph transformation rules for equivalent DGs. Note that the *transmittent data*, which are used repeatedly by many computation nodes in the DG (see Appendix A.2), play a critical role here.
+
| Rules | Apply to | Function | Advantages |
|---|---|---|---|
| Assimilarity | 2D transmittent data | Keep only one edge and delete the others in the second dimension | Save links |
| Summation | 2D accumulation data | Keep only one edge and delete the others in the second dimension | Save links |
| Degeneration | 2D transmittent data | Reduce a long buffer to a single register | Save buffers |
| Reformation | 2D transmittent data | Reduce a long delay to a shorter one | Save buffers |
| Redirection | Order-independent data (e.g., transmittent or accumulation data) | Reverse the edge | Avoid problems with negative-delay edges |
+
Fig. 5. (a) A high-dimensional DG, where a datum is transmitted to a set of nodes by the solid 2D mesh. (b) There are several paths via which the datum can reach a certain node. (c) During the multiprojection, the dependencies in different directions get different delays. (d) Because the data could reach the nodes by two possible paths, the *assimilarity rule* is applied to this SFG. Only one of the edges in the second dimension is kept. Without changing the correctness of the algorithm, a number of links and buffers are removed.
+
+## 3. Equivalent Graph Transformation Rules
+
In Appendix A.2 and Section 2.1, some transformation rules for the DG are introduced. In order to obtain better designs, we also provide some graph transformation rules that can help us reduce the number of connections between processors, the buffer size, or the power consumption. Table 1 shows a brief summary of the rules.
+
+### 3.1. Assimilarity Rule
+
As shown in Fig. 5, the assimilarity rule can save some links without changing the correctness of the DG. If a datum is transmitted to a set of operation/computation nodes in the DG/SFG by a 2D (or higher-dimensional) mesh, then there are several possible paths via which the datum can reach a certain node. For example, in the BMA, the $s[i+u, j+v]$
+---PAGE_BREAK---
+
Fig. 6. (a) A datum is the summation of a set of nodes by a 2D mesh in an SFG. During the multiprojection, the dependencies in different directions get different delays. (b) Without changing the correctness of the algorithm, only one of the edges in the second dimension is kept. By the summation rule, a number of links and buffers are reduced.
+
+Fig. 7. (a) When transforming an SFG description to a systolic array, the conventional delay management uses $(m-1)$ registers for $m$ units of delay on the links. (b) If the data sets of two adjacent nodes overlap each other, the degeneration rule suggests that only a register is required because the other data could be obtained by the other direction.
+
+can be passed by $s[(i+1)+(u-1), j+v]$ via loop *i*, or by $s[i+u, (j+1)+(v-1)]$ via loop *j*. Keeping only one edge in the second dimension is sufficient for the data to reach everywhere.
+
The procedure of keeping only one edge out of a set of edges can save a great number of interconnection buffers. Usually, this rule is applied after the final SFG is obtained. In this way, we can eliminate the edges with longer delays and reduce the number of edges.
+
One of the major drawbacks of this assimilarity rule is that every node must use the same set of data before the rule can be applied. This does not hold for every algorithm that uses a 2D mesh to transmit the data. Generally speaking, the data set of a node greatly overlaps with the data sets of the other nodes but is not identical to them. In order to reduce the connection edges, we can make all the nodes process the same set of data artificially (i.e., ask the nodes to do some useless computations) and then apply this rule.
+
+## 3.2. Summation Rule
+
As shown in Fig. 6, the summation rule can save some links without changing the correctness of the DG. Because summation is associative, the order of the summation can be changed. If the output is obtained by aggregating a 2D (or higher-dimensional) mesh of computational nodes, we can accumulate the partial sums in one dimension first, then accumulate the total from the partial sums in the second dimension afterward. For example, in the BMA, the SAD[u,v] is the 2D summation of $|s[i+u, j+v] - r[i, j]|$ over $1 \le i, j \le n$. We can accumulate the difference over index *i* first, or over
+---PAGE_BREAK---
+
**Fig. 8.** (a) A high-dimensional DG, where a datum is transmitted to a set of nodes by a 2D mesh, is projected into an SFG. During the multiprojection, the dependencies in different directions get different delays. Because the data could reach the nodes by more than two possible paths, the assimilarity rule is applied to this SFG. Only one of the edges in the second dimension is kept. (b) The delay (i.e., the number of buffers) could be further decreased when the *reformation* rule transforms the original 2D mesh into a tilted mesh.
+
index *j* first (cf. Fig. 2). We should accumulate along the direction that requires fewer buffers first, and then accumulate along the other direction later.
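Since summation is associative and commutative, either accumulation order produces the same SAD. A tiny sketch (illustrative names; `diff` holds the precomputed $|s - r|$ values) makes the two orders explicit:

```python
def sad_i_first(diff):
    """Accumulate along i first (one partial sum per column j), then combine."""
    partial = [sum(col) for col in zip(*diff)]
    return sum(partial)

def sad_j_first(diff):
    """Accumulate along j first (one partial sum per row i), then combine."""
    return sum(sum(row) for row in diff)
```

The summation rule exploits exactly this freedom: the hardware keeps only the accumulation edges of one dimension and collects the partial sums along the other.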
+
+### 3.4. Reformation Rule
+
For 2D or higher-dimensional transmittent data, the structure of the mesh is not rigid. For example, in the BMA, the $s[i+u, j+v]$ can be passed by $s[(i+k)+(u-k), j+v]$ via loop *i* and by $s[i+u, (j+k)+(v-k)]$ via loop *j* for $1 \le k \le n$. For a different *k*, the structure of the 2D transmittent mesh is different, and the final delay in the designed SFG will be different. As a result, we should choose *k* depending on the required buffer size. Generally speaking, the shorter the delay, the fewer the buffers.
+
For example, Fig. 8(a) shows a design after applying the assimilarity rule. Only an edge with a long delay is left. Moreover, the data are transmitted to the whole array. So, we detour the long-delayed edge, make use of the delay in the first dimension, and get the design shown in Fig. 8(b), where the longest delay is now shorter.
+
+### 3.3. Degeneration Rule
+
The degeneration rule reduces the data links when data are transmitted through a 2D (or higher-dimensional) mesh and (1) each node has its own data set and (2) the data sets of two adjacent nodes overlap each other significantly. One way to save buffers is to let the overlapping data be transmitted along one dimension thoroughly (as in the assimilarity rule) and let the non-overlapping data be transmitted along the other dimension(s) (unlike the assimilarity rule). In the second dimension, it is only necessary to keep the non-overlapping data. Fig. 7 shows that only one register is required because the other data can be obtained from the other direction.
+
+### 3.5. Redirection Rule
+
Because some operations are associative (e.g., summation data, transmittent data), the arcs in the DG are reversible. The arcs are reversed to help the design. For example, the datum $s[(i+1)+(u-1), j+v]$ is passed to $s[i+u, j+v]$ via loop *i* in the BMA. After mapping the DG to an SFG, the delay on the edge is negative. Conventionally, negative delay is not allowed, and we must find another scheduling vector $\vec{s}$. This rule tells us to move the data in the opposite direction (passing $s[i+u, j+v]$ to $s[(i+1)+(u-1), j+v]$) instead of re-calculating the scheduling vector (cf. Fig. 9).
+
**Fig. 9.** (a) Generally speaking, an SFG with a negative delay is not permissible. (b) However, if the dependencies have no polarization, then we apply the redirection rule to redirect the edges with negative delay to the opposite direction. After that, the SFG becomes permissible.
+---PAGE_BREAK---
+
+### 3.6. Design Optimization vs. Equivalent Transformation Rules
+
None of these rules modifies the correctness of the implementation, but they can accomplish some degree of design optimization.
+
1. The assimilarity rule and the summation rule have no influence on the overall calculation time. However, these two rules reduce the buffers and links. Generally speaking, these two rules are applied after the SFG is obtained.
+
2. The degeneration rule does not influence the overall calculation time. It is applied when one would like to transform the SFG into a hardware design. It helps reduce the buffers and links. However, extra control logic circuits are required.
+
3. The reformation rule and the redirection rule influence the scheduling problem because these two rules can make some prohibited scheduling vectors become permissible.
+
These rules help the design optimization but also make the optimization process harder. Sometimes, the optimization process becomes an iterative procedure which consists of (1) scheduling optimization and (2) equivalent transformation.
+
+### 3.7. Locally Parallel Globally Sequential and Locally Sequential Globally Parallel Systolic Design by Multiprojection
+
In Appendix A.4, LPGS and LSGP have been introduced briefly. In this section, we integrate a unified partitioning and scheduling scheme for LPGS and LSGP into our multiprojection method. The advantage of this unified partitioning model is that various partitioning methods can be achieved by choosing projection vectors. The systematic scheduling scheme can exploit more inter-processor parallelism.
+
*Equivalent Graph Transformation Rules for Index Folding.* A unified re-indexing method is adopted to fold the original DG into a higher-dimensional DG with a smaller size in a chosen dimension. Then, our multiprojection approach is applied to obtain the LPGS or LSGP designs. The only difference between LPGS and LSGP under our unified approach is the order of the projections. Our approach is even better at deciding the scheduling because our scheduling is automatically inherited from the multiprojection scheduling instead of a hierarchical scheduling.
+
+*Index Folding.* In order to map an algorithm into a systolic array by LPGS or LSGP, we propose a re-
+
Fig. 10. (a) A 2 × 6 DG. (b) An equivalent 2 × 3 × 2 DG after index folding. (c) An LPGS partitioning obtained by projecting the 3D DG along the *a* direction. (d) An LSGP partitioning obtained by projecting the 3D DG along the *b* direction.
+---PAGE_BREAK---
+
Fig. 11. A core in the 4D DG of the BMA. There are $n \times n \times (2p+1) \times (2p+1)$ nodes in the DG. The node $(i, j, u, v)$ represents the computation $SAD[u, v] = SAD[u, v] + |s[i+u, j+v] - r[i, j]|$. We denote by $\vec{E}_1$ the data dependency between computation nodes for $s[i+u, j+v]$. Because $s[i+u, j+v]$ can arrive from two possible directions, via loop *i* or via loop *j*, $\vec{E}_1$ can be $(1, 0, -1, 0)$ and $(0, 1, 0, -1)$. By the same token, $\vec{E}_2$—the data dependency of the current block—could be $(0, 0, -1, 0)$ and $(0, 0, 0, -1)$. $\vec{E}_3$, which accumulates the difference, could be $(1, 0, 0, 0)$ and $(0, 1, 0, 0)$. The representation of the DG is not unique; most of the dependence edges can be redirected because of data transmittance.
+
+indexing method for the computational nodes into a
+higher-dimensional DG problem.
+
An example is shown in Fig. 10. We want to map a $2 \times 6$ DG into a smaller 2D systolic array. Let $u, v$ be the indices $(0 \le u \le 1, 0 \le v \le 5)$ of the DG.

First, we re-index all the computational nodes $(u, v)$ into $(u, a, b)$. The 2D DG becomes a 3D DG $(2 \times 2 \times 3)$, where one unit of $a$ means 3 units of $v$, one unit of $b$ means 1 unit of $v$, and $0 \le a \le 1$, $0 \le b \le 2$. Then, a node at $(u, a, b)$ in the 3D DG is equivalent to the node at $(u, 3a + b)$ in the original 2D DG.

After this, by multiprojection, we can obtain the following two partitioning methods:
+
**1. LPGS**

If we project the 3D DG along the *a* direction, then the nodes that are close to each other in the *v* direction will be mapped into different processors. That is, those computation nodes are going to be executed in parallel. This is an LPGS partitioning.

**2. LSGP**

If we project the 3D DG along the *b* direction, then the nodes that are close to each other in the *v* direction will be mapped into the same processor. That is, those computation nodes are going to be executed in sequential order. This is an LSGP partitioning.
+
Note that we must be careful about the data dependency after the transformation. One unit of the original *v* becomes 0 units of *a* and 1 unit of *b* when the dependence edge does not cross different packing segments. (In the example, a packing segment consists of all the computation nodes within three sequential units of *v*. That is, the packing boundary is where 3 divides *v*.) One unit of *v* becomes 1 unit of *a* and −2 units of *b* when the dependence edge crosses the packing boundary of the transformed DG once.
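The re-indexing and the mapped dependence edges for the $2 \times 6$ example can be sketched directly. The names are illustrative; `SEG = 3` matches the example's packing-segment length:

```python
SEG = 3  # packing-segment length: one unit of a = SEG units of v

def fold(v):
    """Re-index v as (a, b) with v = SEG*a + b and 0 <= b < SEG."""
    return (v // SEG, v % SEG)

def fold_edge(v):
    """Map the dependence step v -> v+1 into (delta_a, delta_b)."""
    a0, b0 = fold(v)
    a1, b1 = fold(v + 1)
    return (a1 - a0, b1 - b0)
```

Inside a segment the edge maps to $(\Delta a, \Delta b) = (0, 1)$; across a packing boundary it maps to $(1, -2)$, matching the text above.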
+
## 4. Systolic Designs for Full-Search Block-Matching Algorithms by the Multiprojection Approach
+
### 4.1. 4D DG of BMA
+
Fig. 22 shows the pseudo code of the BMA for a single current block, and Fig. 11 shows a core of the 4D DG of the BMA for a current block. The operations of taking the difference, taking the absolute value, and accumulating the residue are embedded in a 4-dimensional space $i, j, u, v$. The indices $i$ and $j$ ($1 \le i, j \le n$) are the indices of the pixels in a current block. The indices $u$ and $v$ ($-p \le u, v \le p$) are the indices of the potential displacement vector. The actual DG would be a 4-dimensional repetition of the same core. Although it is more difficult to visualize the actual DG, it is fairly straightforward to carry out the algebra on the core and thus carry out the multiprojection.
+
We use $\vec{E}_1$ to denote the data dependency of the search window. The $s[i+u, j+v]$ will be used repeatedly by different $(i, j, u, v)$ combinations that share the same $i + u$ and the same $j + v$. Therefore, $\vec{E}_1$ is a 2-dimensional reformable mesh. One possible choice is (1, 0, -1, 0) and (0, 1, 0, -1). The $r[i, j]$ will be used repeatedly for different $u, v$. Hence, $\vec{E}_2$, the data dependency of the current block, could be (0, 0, -1, 0) and (0, 0, 0, -1). The summation can be done in *i*-first order or *j*-first order. $\vec{E}_3$, which accumulates the difference, could be (1, 0, 0, 0) and (0, 1, 0, 0). The representation of the DG is not unique; most of the dependence edges can be redirected because of data transmittance.
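As a quick consistency check (a sketch with illustrative names), stepping along either $\vec{E}_1$ edge leaves the search-window pixel index $(i+u, j+v)$ unchanged, which is exactly what makes $\vec{E}_1$ a transmittent mesh:

```python
E1 = [(1, 0, -1, 0), (0, 1, 0, -1)]  # reformable mesh for the search window

def s_index(node):
    """The search-window pixel s[i+u, j+v] read by DG node (i, j, u, v)."""
    i, j, u, v = node
    return (i + u, j + v)

def step(node, e):
    """Follow dependence edge e from a DG node."""
    return tuple(a + b for a, b in zip(node, e))
```

The same check applied to the $\vec{E}_2$ edges would show that $r[i, j]$ (which depends only on $i, j$) is preserved, since those edges move only in $u$ or $v$.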
+---PAGE_BREAK---
+
Constructing Previous Designs. As mentioned before, our multiprojection can cover most of the previous design methods. Here is the first example.

The following is the 4D DG of the BMA:

| Search Window ($\vec{E}_1$) | 1, 0, -1, 0 | $D_4 = 0$ | | 0, 1, 0, -1 | $D_4 = 0$ |

| Current Blocks ($\vec{E}_2$) | 0, 0, -1, 0 | $D_4 = 0$ | | 0, 0, 0, -1 | $D_4 = 0$ |

| Partial Sum of SAD ($\vec{E}_3$) | 1, 0, 0, 0 | $D_4 = 0$ | | 0, 1, 0, 0 | $D_4 = 0$ |

After our first projection with $\vec{d}_4^T = (0, 0, -1, 0)$, $\vec{s}_4^T = (0, 0, -1, 0)$, and

$$P_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}$$

the SFG will be
+
Fig. 12. (a) A 2D BMA systolic design from double-projecting the 4D DG using Eq. (9). (b) The design after the assimilarity rule is applied. (c) The design after the reformation rule is applied (cf. [8]). (d) The design after the degeneration rule is applied. Its timing diagram is shown in Fig. 13.
+---PAGE_BREAK---
+
+Fig. 13. The timing diagram of the design in Fig. 12(d).
+
Fig. 14. (a) The data sets of different current blocks indicate the possibilities of data reuse. (b) The 5D DG of the BMA.
+
| Search Window ($\vec{E}_1$) | 1, 0, 0 | $D_3 = 1$ | | 0, 1, 1 | $D_3 = 0$ |

| Current Blocks ($\vec{E}_2$) | 0, 0, 0 | $D_3 = 1$ | | 0, 0, 1 | $D_3 = 0$ |

| Partial Sum of SAD ($\vec{E}_3$) | 1, 0, 0 | $D_3 = 0$ | | 0, 1, 0 | $D_3 = 0$ |
+
If we discard all the edges that have delays, then $\vec{E}_1 = (0, 1, 1)$, $\vec{E}_2 = (0, 0, 1)$, and $\vec{E}_3 = (0, 1, 0)$ and $(1, 0, 0)$. We can then construct the 3D DG shown in Fig. 2, and many previous designs can be reconstructed from this 3D DG.
+
If we keep the edges that have delays, then we can reconstruct the design in [8] (cf. Fig. 3(b)) by projecting the SFG one more time with $\vec{d}_3^T = (0, 0, 1)$, $\vec{s}_3^T = (1, 0, 1)$, and
+
+$$P_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
+---PAGE_BREAK---
+
+To ensure processor availability,
+
+$$M \geq 1 + (N - 1)(\vec{s}_3 \cdot \vec{d}_3) \quad (8)$$
+
where $N$ is the maximal number of nodes along the $\vec{d}_3$-direction in the SFG. Because the index $v$ ranges from $-p$ to $p$, $N$ is $2p+1$. Hence, $M = 2p+1$ and
+
$$\left\{ \begin{array}{l} \mathbf{A} = \mathbf{P}_3 \mathbf{P}_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \\ \vec{S}^T = \vec{s}_3^T \mathbf{P}_4 + M \vec{s}_4^T = [1, 0, -2p-1, -1] \end{array} \right. \quad (9)$$
+
+We have
+
+| Search Window ($\vec{E}_1$) | 1, 0 | $D_2 = 2p + 2$ | | 0, 1 | $D_2 = 1$ |
+
+| Current Blocks ($\vec{E}_2$) | 0, 0 | $D_2 = 2p + 1$ | | 0, 0 | $D_2 = 1$ |
+
+| Partial Sum of SAD ($\vec{E}_3$) | 1, 0 | $D_2 = 1$ | | 0, 1 | $D_2 = 0$ |
+
Fig. 12(a) shows the design.
+
Design Via Assimilarity and Reformation Rules. This design captures considerable data reusability, but it requires a huge number of buffers. To reduce the number of buffers, we can apply the *assimilarity rule*, as suggested in Section 3.1. We make all the nodes process the same set of data ($s[-p+1, -p+1], \ldots, s[p+n, p+n]$), and delete most of the links in the second dimension, as shown in Fig. 12(b). We further apply the *reformation rule* to make the design smaller, and get the design shown in Fig. 12(c), which is identical to the design proposed in [8].
+
In terms of I/O bandwidth requirements, this design is superior to many other designs because the data are input serially and the I/O bandwidth is reduced by an order of magnitude. Shift registers are used here instead of random access memories; thus, the control is easier, the buffer area is smaller, and the data access rate is higher. (The I/O rate of the current block is only 6% of the rate of the search window. It is relatively easy to manage the data flow of the current block. Therefore, we focus on the I/O requirement of the search window in this paper.)
+
However, because of this artificial unification of the input data, some unnecessary data must go through every PE, so the utilization rate is only 66% when $n = 16$ and $p = 32$.
+
Design Via Degeneration Rule. Another approach to saving buffers in Fig. 12(a) is to apply the *degeneration rule*. As shown in Fig. 12(d), this design saves a number of buffers while keeping the processors busy. It has a 77% total utilization rate (including the loading
+
+Fig. 15. (a) The design, proposed in [14], can be re-delivered by multiprojecting the 5D DG of the BMA with the *assimilarity rule* and the *reformation rule*. (b) A new design can be devised by multiprojecting the 5D DG of the BMA with the degeneration rule.
+---PAGE_BREAK---
+
Fig. 16. (a) The data sets of different current blocks (in row-major order) indicate different possibilities of data reuse. (b) The design with data input in row-major order. Its timing diagram is shown in Fig. 17.
+
+Fig. 17. The timing diagram of the design in Fig. 16(b).
+
phase and the computation), and uses only one I/O port for the search window. Its timing diagram is shown in Fig. 13.
+
+As shown in Fig. 14, two contiguous current blocks may share some parts of the search window.
+
+## 4.2. Multiprojecting 5D DG of BMA
+
+Increasing the reusability of the data can reduce the I/O and, hence, increase the overall performance. This motivates the introduction of the 5D DG of the BMA.
+
Let $x, y$ be the indices of the current blocks in a frame. In the 5D design, we fix $y$ at a constant value. $\vec{E}_4$ is new: it passes the data of the search window shared by the current blocks with the same $y$. $\vec{E}_1, \vec{E}_2, \vec{E}_3$ are the same as before; more specifically, $\vec{E}_1$ passes the data of the search window for a given current block.
+
+If we project the 5D DG along $x, u, v$ direction and apply the assimilarity and the reformation rule
+---PAGE_BREAK---
+
+Fig. 18. (a) The data reusability between current blocks. (b) The core of the 6D DG of the BMA. (The core will be repeated when $0 \le x \le N_v$, $0 \le y \le N_h$, $1 \le i, j \le n$, $-p \le u, v \le p$.) $\vec{E}_1 = (0, 0, 1, 0, -1, 0)$ and $(0, 0, 0, 1, 0, -1)$. $\vec{E}_2 = (0, 0, 0, 0, -1, 0)$ and $(0, 0, 0, 0, -1, 0)$. $\vec{E}_3 = (0, 0, 1, 0, 0, 0)$ and $(0, 0, 0, 1, 0, 0)$. $\vec{E}_4 = (1, 0, 0, 0, -n, 0)$ and $(0, 1, 0, 0, 0, -n)$.
+
+Fig. 19. The design by multiprojecting the 6D DG of the BMA with the degeneration rule. The basic structure of the processor array is the same as 5D design. Its systolic cache is detailed in Fig. 20.
+
to it, we have the same design as proposed in [14] (cf. Fig. 15(a)). By adding some buffers on the chip, we can reuse a major part of the search window without reloading it. The ratio of reused data is $\frac{2p \times (n+2p)}{(n+2p) \times (n+2p)}$. When $n = 16$ and $p = 32$, the ratio amounts to about 80% when a 4KB on-chip buffer is added. However, this design shares the same problem, a low utilization rate, as the design in [8] (cf. Fig. 3(b)).
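The quoted reuse ratio simplifies as follows (a worked step using the figures from the text, $n = 16$, $p = 32$):

$$\frac{2p\,(n+2p)}{(n+2p)(n+2p)} = \frac{2p}{n+2p} = \frac{64}{16+64} = 80\%.$$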
+
Fig. 15(b) shows the design after the degeneration rule is applied to the 5D DG. It has a 99% total utilization rate (including the loading phase and the computation phase), and uses only one I/O port for the search window.
+
**Row-Major 5D DG of BMA.** In the previous design, we assumed that the BMA is performed on the current blocks in column-major order. However, in an MPEG codec, current blocks are coded in row-major order. To work with current MPEG codecs, the previous column-major systolic design may require an extra buffer to save the motion vector information.
+
To avoid the extra buffer, the data overlap between current blocks in row-major order (cf. Fig. 16(a)) is also considered. Because the buffer memory is organized in row-major order, the data reused between two current blocks become piecewise contiguous. The corresponding design and its timing diagram are shown in Figs. 16(b) and 17.
+
+## 4.3. Multiprojecting 6D DG of BMA
+
As the full-frame BMA is 6D (cf. Fig. 4), Fig. 18 shows the 6D DG of the BMA. Let $x, y$ be the indices of the current blocks in a frame. $\vec{E}_1, \vec{E}_2, \vec{E}_3$ are the same as above. The new feature is that $\vec{E}_4$ now represents inter-block data reusability shifted in both the $x$ and $y$ indices.
+---PAGE_BREAK---
+
+Fig. 20. The systolic cache of the design shown in Fig. 19: (a) Its timing diagram. (b) The overall picture. (c) The first-level systolic cache. (d) A subcell of second-level systolic cache. (e) The second-level systolic cache.
+---PAGE_BREAK---
+
+Fig. 21. A seamless design of expandable array processors (cf. Fig 19).
+
+Table 2. A comparison of several designs. Our algebraic design methodology can handle algorithms with high-dimensional data dependency and thus exploit the maximum degree of data reusability. Our design from multiprojection the 6D DG of the BMA can achieve 99% total utilization rate of the PEs and 96% data reusability rate of the search window.
+
| | Advantage | Disadvantage |
|---|---|---|
| Our design from 4D DG (by degeneration rule, Fig. 12) | Only one I/O port | 81% total utilization rate |
| Our design from 5D DG (by degeneration rule, Fig. 15) | Only one I/O port; 99% total utilization rate | 80% data reusability rate |
| Our design from 6D DG (by degeneration rule, Fig. 21) | Only one I/O port; 99% total utilization rate; 96% data reusability rate; expandable | |
+
**Special Supporting Memory/Cache/Buffer Design.** Since it is hard to hold all the data in the same chip that holds the processor array, a small cache is important. Because the memory access pattern of the full-search BMA is very regular, the best replacement policy for the cache can be predetermined. Consequently, we can eliminate the tags of the cache between the main memory and the processing unit, because we know (1) where the data should go, (2) which data should be replaced, and (3) where we should fetch the data.
+
+Based on this idea, we can design a so-called *systolic cache*—a pre-fetch external cache.
+
Fig. 19 shows the extended systolic design for the row-major 6D DG. The schematic design of the *systolic cache* supporting such a row-major 6D DG design is detailed in Fig. 20. If the width of a frame $F_h$ is 1024 ($F_h = N_h \times n$) and half of the search window size $p$ is 32, then the size of the cache will be $2p \times F_h = 64\mathrm{K}$.
+
+**LPGS and LSGP for Expandable Design.** In addition to the overlapping between search windows of different current blocks, another important property is that there
+---PAGE_BREAK---
+
Fig. 22. (a) The pseudo code of the BMA for a single current block. This pseudo code is exactly the inner four loops shown in Fig. 4(a). (b) A single assignment code for the BMA. Every element in the SAD[u, v, i, j] array is assigned a value only once, hence the name.
+---PAGE_BREAK---
+
```c
/* Localized single-assignment pseudo code (C-like syntax with
   multi-dimensional subscripts).  The fragment is enclosed in an
   outer loop over u (-p <= u <= p); see Fig. 22(b). */
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
    {
        /* Initialization column at v = -p-1. */
        R[u, -p-1, i, j] = r[i, j];
        S[u, -p-1, i, j] = s[u+i, -p-1+j];
    }

for (v = -p; v <= p; v++)
{
    SAD[u, v, 0, n] = 0;
    for (i = 1; i <= n; i++)
    {
        SAD[u, v, i, 0] = SAD[u, v, i - 1, n];
        for (j = 1; j <= n; j++)
        {
            R[u, v, i, j] = R[u, v-1, i, j];     /* transmittent r[i,j]  */
            S[u, v, i, j] = S[u, v-1, i, j+1];   /* shifted search datum */
            SAD[u, v, i, j] = SAD[u, v, i, j-1]
                            + | S[u, v, i, j] - R[u, v, i, j] |;
        }
    }
}
```
+
Fig. 23. An example of the localized recursive BMA. The variables $s[u+i, v+j]$ and $r[i, j]$ in the inner three loops of the single assignment code shown in Fig. 22(b) are replaced by the locally-interconnected arrays $S[u,v,i,j]$ and $R[u,v,i,j]$, respectively.
+
+Fig. 24. There are two methods for mapping the partitioned DG to an array: locally parallel globally sequential (LPGS) and locally sequential globally parallel (LSGP).
+
is **no overlap and no gap** between the search windows of different current blocks at any time. The search-window data departing one array can be used immediately by another array; the reusable data are taken over naturally by the next array without extra buffers or special links. This design has very high expandability. The chips can be cascaded easily without performance loss, as shown in Fig. 21.
+
+## 5. Conclusions
+
In this work, we concentrate on an algebraic multiprojection methodology, capable of manipulating algorithms with high-dimensional data dependence, to design special data flows for highly reusable data.
+
Multiprojecting the 6D DG of the BMA gives us high-performance processor array designs with minimum supporting buffers (cf. Table 2). We can achieve very high data reusability rates with simple buffers, e.g.,
+---PAGE_BREAK---
+
+shift registers or cache without tags. The data in the search-window are reused as many times as possible in the SAD computations at different search-positions. Therefore, the problem of the input bandwidth for the search-area data can be alleviated.
+
+It is desirable to have a chip flexible for different block-sizes and search-ranges so that it can be used in a variety of application systems. The size of buffers and their scheduling could be derived automatically when array processors are designed via multiprojection.
+
+In addition, the expandability of the array processor design is very important for some practical implementations. The multiprojection can give us the expandability not only for single chip solution but also for the chip array design.
+
+This work has also been extended to operation placement and scheduling in fine-grain parallel architectures [3]. Because this method exploits cache and communication localities, it results in highly efficient parallel codes.
+
+# Appendix
+
+## A.1. Common Systolic Design Approaches
+
Several useful transformation techniques have been proposed for mapping algorithms onto parallel and/or pipelined VLSI architectures [11]. There are three stages in the common systolic design methodology: the first is dependence graph (DG) design, the second is mapping the DG to a signal flow graph (SFG), and the third is designing the array processor based on the SFG.
+
More precisely, a DG is a directed graph, $G = \langle V, E \rangle$, which shows the dependences of the computations that occur in an algorithm. Each operation is represented as one node, $\zeta \in V$, in the graph. A dependence relation is shown as an arc, $\vec{e} \in E$, between the corresponding operations. A DG can also be considered the graphical representation of a single assignment algorithm. Our approach to the construction of a DG is based on the space-time indices in the recursive algorithm: corresponding to the space-time index space in the recursive algorithm, there is a natural lattice space (with the same indices) for the DG, with one node residing on each grid point. The data dependencies in the recursive algorithm may then be explicitly expressed by the arcs connecting the interacting nodes in the DG, while the functional description is embedded in the nodes. A high-dimensional
+
+looped algorithm will lead to a high-dimensional DG. For example, the BMA for a single current block is a 4-dimensional recursive algorithm [22].
+
A complete SFG description includes both functional and structural description parts. The functional description defines the behavior within a node, whereas the structural description specifies the interconnection (edges and delays) between the nodes. The structural part of an SFG can be represented by a finite directed graph, $G = \langle V, E, D(E) \rangle$, since the SFG expression consists of processing nodes, communicating edges, and delays. In general, a node, $\zeta \in V$, represents an arithmetic or logic function performed with zero delay, such as multiplication or addition. The directed edges $\vec{e} \in E$ model the interconnections between the nodes. Each edge $\vec{e}$ of $E$ connects an output port of a node to an input port of some node and is weighted with a delay count $D(\vec{e})$. The delay count is determined by the timing and is equal to the number of time steps needed for the corresponding arcs. Often, input and output ports are referred to as sources and sinks, respectively.
+
Since a complete SFG description includes both a functional description (defining the behavior within a node) and a structural description (specifying the interconnection, i.e., edges and delays, between the nodes), we can easily transform an SFG into a systolic array, wavefront array, SIMD, or MIMD architecture. Therefore, most research in the systolic design methodology concerns how to transform a DG into an SFG.
+
+There are two basic considerations for mapping from a DG to an SFG:
+
+1. **Placement:** To which processors should operations be assigned? (A criterion might be to minimize communication/exchange of data between processors.)
+
+2. **Scheduling:** In what ordering should the operations be assigned to a processor? (A criterion might be to minimize total computing time.)
+
+Two steps are involved in mapping a DG to an SFG array. The first step is the processor assignment. Once the processor assignment is fixed, the second step is the scheduling. The allowable processor and schedule assignments can be quite general; however, in order to derive a regular systolic array, linear assignments and scheduling attract more attention.
+
+*Processor Assignment.* Processor assignment decides which processor is going to execute which node in the DG. A processor could carry out the opera-
+---PAGE_BREAK---
+
+tions of a number of nodes. For example, a projection method may be applied, in which nodes of the DG along a straight line are assigned to a common processing element (PE). Since the DG of a locally recursive algorithm is regular, the projection maps the DG onto a lower dimensional lattice of points, known as the processor space. Mathematically, a linear projection is often represented by a projection vector $\vec{d}$. The mapping assigns the node activities in the DG to processors. The index set of nodes of the SFG are represented by the mapping
+
+$$ \mathbf{P}: I^n \rightarrow I^{n-1} $$
+
where $I^n$ is the index set of the nodes of the DG, and $I^{n-1}$ is the Cartesian product of $(n-1)$ copies of the integers. The mapping of a computation $\mathcal{C}_i$ in the DG onto a node $\underline{n}$ in the SFG is found by:
+
+$$ \underline{n}(\mathcal{C}_i) = \mathbf{P}\mathcal{C}_i $$
+
where $\underline{n}(\cdot)$ denotes the mapping function from a node in the DG to a node in the SFG, and the processor basis $\mathbf{P}$, an $(n-1) \times n$ matrix, is orthogonal to $\vec{d}$. Mathematically,

$$ \mathbf{P}\vec{d} = \vec{0} $$
+
+This mapping also maps the arcs of the DG to the edges of the SFG. The set of edges $\vec{m}(\vec{e})$ into each node of the SFG is derived from the set of dependence edges $\vec{e}$ at each point in the DG by
+
+$$ \vec{m}(\vec{e}_i) = \mathbf{P}\vec{e}_i $$
+
+where $\vec{m}(\cdot)$ denotes the mapping function from an edge in the DG to an edge in the SFG.
+
In this paper, boldface letters (e.g., $\mathbf{P}$) represent matrices. Overhead arrows represent $n$-dimensional vectors, written as $n \times 1$ matrices, e.g., $\vec{e}_i$ (a dependence arc in the DG) and $\vec{m}(\vec{e}_i)$ (the SFG edge that comes from $\vec{e}_i$). An $n$-tuple (a point in $n$-dimensional space), written as an $n \times 1$ matrix, is represented by underlined letters, e.g., $\mathcal{C}_i$ (a computation node in the DG) and $\underline{n}(\mathcal{C}_i)$ (the SFG computation node that comes from $\mathcal{C}_i$).
+
**Scheduling.** The projection should be accompanied by a scheduling scheme, which specifies the sequence of the operations in all the PEs. A schedule function represents a mapping from the $n$-dimensional index space of the DG onto a 1D scheduling time space. A linear schedule is based on a set of parallel and uniformly spaced hyper-planes in the DG. These hyper-planes are called equi-temporal hyper-planes—all the nodes on the same hyper-plane must be processed at the same time. Mathematically, the schedule can be represented by a schedule vector (a column vector) $\vec{s}$ pointing in the normal direction of the hyper-planes. The scheduling of a computation $\mathcal{C}$ in the DG on a node $\underline{n}$ in the SFG is found by:
+
+$$ T(\underline{n}) = \vec{s}^T \underline{n} $$
+
+where $T(\cdot)$ denotes the timing function of a node in the DG to the execution time of the processor in the SFG.
+
+The delay $D(\vec{e})$ on every edge is derived from the set of dependence edges $\vec{e}$ at each point in the DG by
+
+$$ D(\vec{e}_i) = \vec{s}^T \vec{e}_i $$
+
+where $D(\cdot)$ denotes the timing function of an edge in the DG to the delay of the edge in the SFG.
+
+**Permissible Linear Schedules.** There is a partial ordering among the computations, inherent in the algorithm, as specified by the DG. For example, if there is a directed path from node $\mathcal{C}_x$ to node $\mathcal{C}_y$, then the computation represented by node $\mathcal{C}_y$ must be executed after the computation represented by node $\mathcal{C}_x$ is completed. The feasibility of a schedule is determined by the partial ordering and the processor assignment scheme.
+
+The necessary and sufficient conditions are stated below:
+
+1. $\vec{s}^T \vec{e} \ge 0$, for any dependence arc $\vec{e}$. $\vec{s}^T \vec{e} \neq 0$, for non-broadcast data.
+
+2. $\vec{s}^T \vec{d} > 0$.
+
The first condition stands for data availability and states that a precedent computation must be completed before the succeeding computation starts. Namely, if node $\mathcal{C}_y$ depends on node $\mathcal{C}_x$, then the time step assigned to $\mathcal{C}_y$ cannot be less than the time step assigned to $\mathcal{C}_x$. That is, causality must be enforced in a permissible schedule. If a datum is used by many operations in the DG (read-after-read data dependencies), however, the causality constraint can be slightly different. As popularly adopted, the same data value is broadcast to all the operation nodes; such data are called *broadcast data*, and no delay is required for them. Alternatively, the same data may be propagated step by step via local
+---PAGE_BREAK---
+
arcs, without being modified, to all the nodes. This kind of data, which is propagated without being modified, is called *transmittent data*. There should be at least one delay for transmittent data.
+
The second condition stands for processor availability, i.e., two computation nodes cannot be executed at the same time if they are mapped onto the same processor element. It implies that nodes on an equi-temporal hyper-plane should not be projected onto the same PE. In short, a schedule is permissible if and only if (1) all the dependency arcs flow in the same direction across the hyper-planes, and (2) the hyper-planes are not parallel to the projection vector $\vec{d}$.
+
+In general, the projection procedure involves the following steps:
+
+1. For any projection direction, a processor space is orthogonal to the projection direction. A processor array may be obtained by projecting the index points to the processor space.
+
+2. Replace the arcs in the DG with zero or nonzero delay edges between their corresponding processors. The delay on each edge is determined by the timing and is equal to the number of time steps needed for the corresponding arcs.
+
+3. Since each node has been projected to a PE and each input (or output) data is connected to some nodes, it is now possible to attach the input and output data to their corresponding processors.
+
+## A.2. The Transformation of DG
+
+Besides the direction of the projection and the schedule, the choice of a particular DG for an algorithm can greatly affect the performance of the resulting array. The following are the two most common transformations of the DG seen in the literature:
+
+### Reindexing
+
+A useful technique for modifying the DG is to apply a coordinate transformation to the index space (called *reindexing*). Examples for reindexing are plane-by-plane shifting or circular shifting in the index space. For instance, when there is no permissible linear schedule or systolic schedule for the original DG, it is often desirable to modify the DG so that such a desired schedule may be obtained. The effect of this method is equivalent to the re-timing method [13].
+
+### Localized dependence graph
+
+A locally recursive algorithm is an algorithm whose corresponding DG has only local dependencies—all variables are (directly) dependent upon the variables of neighboring nodes only. The length of each dependency arc is independent of the problem size.
+
On the other hand, a non-localized recursive algorithm has global interconnections/dependencies. For example, the same datum may be used by many operations, i.e., the same data value repeatedly appears at a set of index points in the recursive algorithm or DG. As popularly adopted, the operation nodes receive the datum by broadcasting; the data are called *broadcast data*, and this set of index points is termed a broadcast contour. Such a non-localized recursive algorithm, when mapped onto an array processor, is likely to result in an array with global interconnections.
+
In general, global interconnections are more expensive than localized interconnections. In certain instances, such global arcs can be avoided by using a proper projection direction in the mapping scheme. To guarantee a locally interconnected array, a localized recursive algorithm (and, equivalently, a localized DG) should be derived. In many cases, broadcasting can be avoided and replaced by local communication. For example, in Fig. 23, the variables $s[u+i, v+j]$ and $r[i, j]$ in the inner three loops of the BMA (cf. Fig. 22(b)) are replaced by the locally-interconnected arrays $S[u,v,i,j]$ and $R[u,v,i,j]$, respectively. The key point is that instead of broadcasting the (public) data along a global arc, the same data may be propagated step by step via local arcs, without being modified, to all the nodes. This kind of data, which is propagated without being modified, is called *transmittent data*.
+
+## A.3. General Formulation of Optimization Problems
+
It takes more effort to find an optimal permissible linear schedule than to find a merely permissible one. In this section, we show how to derive an optimal design.
+
*Optimization Criteria.* Optimization plays an important role in implementing systems. In terms of parallel processing, there are many ways to evaluate a
---PAGE_BREAK---

design: one is to measure the completion time ($T$); another is to measure the product of the VLSI chip area and the completion time ($A \times T$) [12]. In general, the optimization problems can be categorized into:
+
+1. To find a best scheduling that minimizes the execution time, for given constraints on the number of processing units [25].
+
+2. To minimize the cost (area, power, etc.) under certain given timing constraints [19].
+
+In either case, such tasks are proved to be NP-hard. In this paper, we focus on how to find an optimal schedule given an array structure—the timing is an optimization goal, not a constraint.
+
+**Basic Formula.** First, we know that the computation time of a systolic array can be written as
+
$$T = \max_{\mathcal{C}_x, \mathcal{C}_y} \{\vec{s}^T (\mathcal{C}_x - \mathcal{C}_y)\} + 1$$

where $\mathcal{C}_x$ and $\mathcal{C}_y$ are two computation nodes in the DG.

The optimization problem then becomes the following min-max formulation:

$$\vec{s}_{op} = \arg \min_{\vec{s}} \left[ \max_{\mathcal{C}_x, \mathcal{C}_y} \{\vec{s}^T (\mathcal{C}_x - \mathcal{C}_y)\} + 1 \right]$$

under the following two constraints: $\vec{s}^T \vec{d} > 0$ and $\vec{s}^T \vec{e} > 0$, for any dependence arc $\vec{e}$.
+
+The minimal computation time schedule $\vec{s}$ can be found by solving the proper integer linear programming [12, 21, 25] or quadratic programming [26].
+
+### A.4. Partitioning Methods
+
As DSP systems grow too complex to be contained in a single chip, partitioning is used to map a system onto multi-chip architectures. In general, the mapping scheme (including both node assignment and scheduling) is much more complicated than the regular projection methods discussed in the previous sections, because it must optimize chip area while meeting constraints on throughput, input/output timing, and latency. The design takes into consideration I/O pins, inter-chip communication, control overheads, and the tradeoff between external communication and local memory.
+
+For a systematic mapping from the DG onto a systolic array, the DG is regularly partitioned into many blocks, each consisting of a cluster of nodes in the DG. As shown in Fig. 24, there are two methods for mapping the partitioned DG to an array: the locally sequential globally parallel (LSGP) method and the locally parallel globally sequential (LPGS) method [11].
+
For convenience of presentation, we adopt the following mathematical notation. Suppose that an $n$-dimensional DG is linearly projected onto an $(n-1)$-dimensional SFG array of size $L_1 \times L_2 \times \cdots \times L_{n-1}$. The SFG is partitioned into $M_1 \times M_2 \times \cdots \times M_{n-1}$ blocks, where each block is of size $Z_1 \times Z_2 \times \cdots \times Z_{n-1}$ and $Z_i = L_i/M_i$ for $i \in \{1, 2, \cdots, n-1\}$.
+
+**Allocation.**
+
1. In the LSGP scheme, one block is mapped to one PE. Each PE sequentially executes the nodes of the corresponding block. The number of blocks is equal to the number of PEs in the array, i.e., the array size equals the product $M_1 \times M_2 \times \cdots \times M_{n-1}$.
+
+2. In the LPGS scheme, the block size is chosen to match the array size, i.e., one block can be mapped to one array. All nodes within one block are processed concurrently, i.e., locally parallel. One block after another block of node data is loaded into the array and processed in a sequential manner, i.e., globally sequential.
+
**Scheduling.** In LSGP, after processor allocation, from the processor-sharing perspective, the $Z_1 \times Z_2 \times \cdots \times Z_{n-1}$ nodes of each block in the SFG share one PE. An acceptable (i.e., sufficiently slow) schedule is chosen so that at any instant at most one node in each block is active.
+
+As to the scheduling scheme for the LPGS method, a general rule is to select a (global) scheduling that does not violate the data dependencies. Note that the LPGS design has the advantage that blocks can be executed one after another in a natural order. However, this simple ordering is valid only when there is no reverse data dependence for the chosen blocks.
+
+**Generalized Partitioning Method.** A unified partitioning and scheduling scheme is proposed for LPGS and LSGP in [9]. The main contribution includes a unified partitioning model and a systematic two-level scheduling scheme. The unified partitioning model can support LPGS and LSGP design in the same manner.
+---PAGE_BREAK---
+
The systematic two-level scheduling scheme can specify the intra-processor schedule and the inter-processor schedule independently. Hence, more inter-processor parallelism can be effectively exploited.
+
A general framework for processor mapping is also proposed in [17, 18].
+
Optimization for Partitioning. The problem of finding an optimal (or reasonably small) schedule is an NP-hard problem. A systematic methodology for optimal partitioning is described in [23].
+
+Acknowledgements
+
+This work was supported in part by Sarnoff Research
+Center, Mitsubishi Electric, and the George Van Ness
+Lothrop Honorific Fellowship.
+
+References
+
+1. J. Baek, S. Nam, M. Lee, C. Oh, and K. Hwang, "A Fast Array Architecture for Block Matching Algorithm," *Proc. of IEEE Symposium on Circuits and Systems*, vol. 4, pp. 211–214, 1994.
+2. S. Chang, J.-H. Hwang, and C.-W. Jen, "Scalable Array Architecture Design for Full Search Block Matching," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 5, no. 4, pp. 332–343, Aug. 1995.
+3. Y.-K. Chen and S. Y. Kung, "An Operation Placement and Scheduling Scheme for Cache and Communication Localities in Fine-Grain Parallel Architectures," in *Proc. of Int'l Symposium on Parallel Architectures, Algorithms and Networks*, pp. 390–396, Dec. 1997.
+4. L. De Vos, "VLSI-architectures for the Hierarchical Block-Matching Algorithm for HDTV Applications," *SPIE Visual Communications and Image Processing*, vol. 1360, pp. 398–409, 1990.
+5. L. De Vos and M. Stegherr, "Parameterizable VLSI Architectures for Full-Search Block-Matching Algorithm," *IEEE Trans. on Circuits and Systems*, vol. 36, no. 10, pp. 1309–1316, Oct. 1989.
+6. D. Le Gall, "MPEG: A Video Compression Standard for Multimedia Applications," *Communications of the ACM*, vol. 34, no. 4, Apr. 1991.
+7. K. Guttag, R. J. Gove, and J. R. V. Aken, "A Single-Chip Multiprocessor For Multimedia: The MVP," *IEEE Computer Graphics & Applications*, vol. 11, no. 6, pp. 53–64, Nov. 1992.
+8. C.-H. Hsieh and T.-P. Lin, "VLSI Architecture for Block-Matching Motion Estimation Algorithm," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 2, no. 2, pp. 169–175, June 1992.
+9. Y.-T. Hwang and Y.-H. Hu, "A Unified Partitioning and Scheduling Scheme for Mapping Multi-Stage Regular Iterative Algorithms onto Processor Arrays," *Journal of VLSI Signal Processing Applications*, vol. 11, pp. 133–150, Oct. 1995.
+
+10. T. Komarek and P. Pirsch, "Array Architectures for Block Matching Algorithms," *IEEE Trans. on Circuits and Systems*, vol. 36, no. 10, pp. 1301-1308, Oct. 1989.
+11. S. Y. Kung, *VLSI Array Processors*. Englewood Cliffs, NJ: Prentice Hall, 1988.
+12. G.-J. Li and B. W. Wah, "The Design of Optimal Systolic Array," *IEEE Trans. on Computer*, vol. 34, no. 1, pp. 66-77, Jan. 1985.
+13. N. L. Passos and E. H.-M. Sha, "Achieving Full Parallelism Using Multidimensional Retiming," *IEEE Trans. on Parallel and Distributed Systems*, vol. 7, no. 11, pp. 1150-1163, Nov. 1996.
+14. P. Pirsch, N. Demassieux, and W. Gehrke, "VLSI Architectures for Video Compression-A Survey," *Proceedings of the IEEE*, vol. 83, no. 2, pp. 220-246, Feb. 1995.
+15. F. Sijstermans and J. van der Meer, "CD-I Full-Motion Video Encoding on a Parallel Computer," *Communications of the ACM*, vol. 34, no. 4, pp. 81-91, Apr. 1991.
+16. M.-T. Sun, "Algorithms and VLSI Architectures for Motion Estimation," *VLSI Implementations for Image Communications*, pp. 251-282, 1993.
+17. J. Teich and L. Thiele, "Partitioning of Processor Arrays: a Piecewise Regular Approach," *INTEGRATION: The VLSI Journal*, vol. 14, no. 3, pp. 297-332, 1993.
+18. J. Teich, L. Thiele, and L. Zhang, "Partitioning Processor Arrays under Resource Constraints," *Journal of VLSI Signal Processing*, vol. 17, no. 1, pp. 5-20, Sept. 1997.
+19. W. F. Verhaegh, P. E. Lippens, E. H. Aarts, J. H. Korst, J. L. van Meerbergen, and A. van der Werf, "Improved Force-directed Scheduling in High-throughput Digital Signal Processing," *IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems*, vol. 14, no. 8, pp. 945-960, Aug. 1995.
+20. B.-M. Wang, J.-C. Yen, and S. Chang, "Zero Waiting-Cycle Hierarchical Block Matching Algorithm and its Array Architectures," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 4, no. 4, pp. 18-28, Feb. 1994.
+21. Y. Wong and J.-M. Delosme, "Optimization of Computation Time for Systolic Array," *IEEE Trans. on Computer*, vol. 41, no. 2, pp. 159-177, Feb. 1992.
+22. H. Yeo and Y.-H. Hu, "A Novel Modular Systolic Array Architecture for Full-Search Block Matching Motion Estimation," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 5, no. 5, pp. 407-416, Oct. 1995.
+23. K.-H. Zimmermann, "A Unifying Lattice-Based Approach for the Partitioning of Systolic Arrays via LPGS and LSGP," *Journal of VLSI Signal Processing*, vol. 17, no. 1, pp. 21-47, Sept. 1997.
+24. K.-H. Zimmermann, "Linear Mappings of n-Dimensional Uniform Recurrences onto k-Dimensional Systolic Arrays," *Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology*, vol. 12, no. 2, pp. 187-202, May 1996.
+25. K.-H. Zimmermann and W. Achtziger, "Finding Space-Time Transformations for Uniform Recurrences via Branching Parametric Linear Programming," *Journal of VLSI Signal Processing*, vol. 15, no. 3, pp. 259-274, 1997.
+26. K.-H. Zimmermann and W. Achtziger, "On Time Optimal Implementation of Uniform Recurrences onto Array Processors via Quadratic Programming," *Journal of VLSI Signal Processing*, vol. 19, no. 1, pp. 19-38, 1998.
+
diff --git a/samples/texts_merged/7100604.md b/samples/texts_merged/7100604.md
new file mode 100644
index 0000000000000000000000000000000000000000..95d793b5b3df9bd220771c0df04e07b87d512f45
--- /dev/null
+++ b/samples/texts_merged/7100604.md
@@ -0,0 +1,1109 @@
+
+---PAGE_BREAK---
+
+# Efficient Market Making via Convex Optimization, and a Connection to Online Learning
+
+Jacob Abernethy, University of Pennsylvania
+Yiling Chen, Harvard University
+Jennifer Wortman Vaughan, University of California, Los Angeles
+
+We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution's bounded budget. Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between cost function based markets and online learning and establish a correspondence between cost function based markets and market scoring rules for complete markets.
+
+**Categories and Subject Descriptors:** F.0 [Theory of Computation]: General; J.4 [Computer Applications]: Social and Behavioral Sciences
+
+**General Terms:** Algorithms, Economics, Theory
+
+**Additional Key Words and Phrases:** Market design, securities market, prediction market, automated market maker, convex analysis, online linear optimization
+
+**ACM Reference Format:**
+
+Abernethy, J., Chen, Y., Vaughan, J. W. 2012. Efficient Market Making via Convex Optimization, and a Connection to Online Learning. ACM TEAC 1, 1, Article X (2012), 38 pages.
+DOI 10.1145/0000000.000000 http://doi.acm.org/10.1145/0000000.000000
+
+Parts of this research initially appeared in Chen and Vaughan [2010] and Abernethy et al. [2011]. This work was supported in part by NSF grants CCF-0953516, CCF-0915016, IIS-1054911, and DMS-070706, DARPA grant FA8750-05-2-0249, and a Yahoo! PhD Fellowship, and is based on work that was supported by the NSF under grant CNS-0937060 to the CRA for the CIFellows Project. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors alone. The authors are grateful to David Pennock for useful discussions about this work and to Xiaolong Li and Michael Ruberry for comments on an earlier draft.
+
+Author's addresses: J. Abernethy, Computer and Information Science Department, University of Pennsylvania; Y. Chen, School of Engineering and Applied Sciences, Harvard University; J. W. Vaughan, Computer Science Department, University of California, Los Angeles.
+
+Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.
+
+© 2012 ACM 0000-0000/2012/-ARTX $15.00
+DOI 10.1145/0000000.000000 http://doi.acm.org/10.1145/0000000.000000
+---PAGE_BREAK---
+
+# 1. INTRODUCTION
+
+Securities markets play a fundamental role in economics and finance. A securities market offers a set of contingent securities whose payoffs depend on the future state of the world. For example, an Arrow-Debreu security pays $1 if a particular state of the world is reached and $0 otherwise [Arrow 1964; 1970]. Consider an Arrow-Debreu security that will pay off in the event that a category 4 or higher hurricane passes through Florida in 2012. A Florida resident who worries about his home being damaged might buy this security as a form of insurance to hedge his risk; if there is a hurricane powerful enough to damage his home, he will be compensated. Additionally, a risk-neutral trader who has reason to believe that the probability of a category 4 or higher hurricane landing in Florida in 2012 is *p* should be willing to buy this security at any price below *p* or (short) sell it at any price above *p* to capitalize on his information. For this reason, the market price of the security can be viewed as the traders' collective estimate of how likely it is that a powerful hurricane will occur. Securities markets thus have dual functions: risk allocation and information aggregation.
+
+Insurance contracts, options, futures, and many other financial derivatives are examples of contingent securities. A securities market primarily focused on information aggregation is often referred to as a prediction market. The forecasts of prediction markets have proved to be accurate in a variety of domains [Ledyard et al. 2009; Berg et al. 2001; Wolfers and Zitzewitz 2004]. While our work builds on ideas from prediction market design [Chen and Vaughan 2010; Othman et al. 2010; Agrawal et al. 2011], our framework can be applied to any contingent securities.
+
+A securities market is said to be complete if it offers at least |O| linearly independent securities over a set O of mutually exclusive and exhaustive states of the world, which we refer to as outcomes [Arrow 1964; 1970; Mas-Colell et al. 1995]. For example, a prediction market with *n* Arrow-Debreu securities for *n* outcomes is complete. In a complete securities market without transaction fees, a trader may bet on any combination of the securities, allowing him to hedge any possible risk he may have. It is generally assumed that the trader may short sell a security, betting against the given outcome; in a market with short selling, the *n*th security is not strictly necessary, as a trader can substitute the purchase of this security by short selling all others. Furthermore, traders can change the market prices to reflect any valid probability distribution over the outcome space, allowing them to reveal any belief. Completeness therefore provides expressiveness for both risk allocation and information aggregation.
+
+Unfortunately, completeness is not always achievable. In many real-world settings, the outcome space is exponentially large or even infinite. For instance, a competitive race between *n* athletes results in an outcome space of *n!* rank orders, while the future price of a stock has an infinite outcome space, namely $\mathbb{R}_{\ge 0}$. In such situations operating a complete securities market is not practical for two reasons: (a) humans are notoriously bad at estimating small probabilities and (b) it is computationally intractable to manage such a large set of securities. Instead, it is natural to offer a smaller set of structured securities. For example, rather than offer a security corresponding to each rank ordering, in pair betting a market institution offers securities of the form "$1 if candidate A beats candidate B" [Chen et al. 2007a; Chen et al. 2008a]. There has been a surge of recent research examining the tractability of running standard prediction market mechanisms (such as the popular Logarithmic Market Scoring Rule (LMSR) market maker [Hanson 2003]) over combinatorial outcome spaces by limiting the space of available securities [Pennock and Sami 2007]. While this line of research has led to a few positive results [Chen et al. 2007b; Chen et al. 2008b; Guo and Pennock 2009; Agrawal et al. 2008], it has led more often to hardness results [Chen et al. 2007b; Chen
+---PAGE_BREAK---
+
+et al. 2008a] or to markets with undesirable properties such as unbounded loss of the market institution [Gao et al. 2009].
+
+In this paper, we propose a general framework to design automated market makers for securities markets. An automated market maker is a market institution that adaptively sets prices for each security and is always willing to accept trades at these prices. Unlike previous research aimed at finding a space of securities that can be efficiently priced using an existing market maker like the LMSR, we start with an arbitrary space of securities and design a new market maker tailored to this space. Our framework is therefore very general and includes existing market makers for complete markets, such as the LMSR and Quad-SCPM [Agrawal et al. 2011], as special cases.
+
+We take an axiomatic approach. Given a relatively small space of securities with bounded payoff, we define a set of intuitive conditions that a reasonable market maker should satisfy. We prove that a market maker satisfying these conditions must price securities via a convex potential function (the cost function), and that the space of reachable security prices must be precisely the convex hull of the payoff vectors for each outcome (that is, the set of vectors, one per outcome, denoting the payoff for each security if that outcome occurs). We then incorporate ideas from online convex optimization [Hazan 2009; Rakhlin 2009] to define a convex cost function in terms of an optimization over this convex hull; the vector of prices is chosen as the optimizer of this convex objective. With this framework, instead of dealing with the exponentially large or infinite outcome space, we only need to deal with the lower-dimensional convex hull. The problem of automated market making is reduced to the problem of convex optimization, for which we have many efficient techniques to leverage.
+
+To demonstrate the advantages of our framework, we provide two new computationally efficient markets. The first market can efficiently price subset bets on permutations, which are known to be #P-hard to price using the LMSR [Chen et al. 2008a]. The second market can be used to price bets on the landing location of an object on a sphere. For situations where the convex hull cannot be efficiently represented, we show that we can relax the convex hull to gain computational tractability without compromising the market maker's bounded budget. This allows us to provide a computationally efficient market maker for the aforementioned pair betting, which is also known to be #P-hard to price using the LMSR [Chen et al. 2008a].
+
+Although our framework was designed with the goal of deriving novel, efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. With our framework, we illustrate the mathematical parallels between cost function based markets and online learning, and establish a correspondence between cost function based markets and market scoring rules for complete markets.
+
+**Roadmap of the paper:** The rest of the paper is organized as follows. We begin in Section 2 with a review of the relevant literature on automated market makers and prediction market design. In Section 3 we describe the problem of market design for large outcome spaces, discuss the difficulties inherent to this problem, and introduce our axiomatic approach. In Section 4 we give a detailed framework for constructing pricing mechanisms based on convex optimization and conjugate duality. We give a couple of examples of efficient duality-based cost function market makers in Section 5. In Section 6 we consider the computational issues associated with our framework, and show how the proposed convex optimization problem can be relaxed to gain tractability without increasing the worst-case loss of the market maker. We illustrate the mathematical parallels between our framework and online learning in Section 7. Finally, in
+---PAGE_BREAK---
+
+Section 8, we describe how our framework can be used to establish a correspondence between cost function based markets and market scoring rules for complete markets.
+
+## 2. BACKGROUND AND RELATED WORK
+
+Automated market makers for complete markets are well studied in both economics and finance. Our work builds on the literature on cost function based markets [Hanson 2003; 2007; Chen and Pennock 2007]. A simple cost function based market maker offers $|O|$ Arrow-Debreu securities, each corresponding to a potential outcome. The market maker determines how much each security should cost using a differentiable cost function, $C: \mathbb{R}^{|O|} \to \mathbb{R}$, which is simply a potential function specifying the amount of money currently wagered in the market as a function of the number of shares of each security that have been purchased. If $q_o$ is the number of shares of security $o$ currently held by traders, and a trader would like to purchase a bundle of $r_o$ shares for each security $o \in O$ (where each $r_o$ could be positive, representing a purchase, zero, or even negative, representing a sale), the trader must pay $C(q+r) - C(q)$ to the market maker. The instantaneous price of security $o$ (that is, the price per share of an infinitesimal portion of a security) is then $\partial C(q)/\partial q_o$, and is denoted $p_o(q)$.
+
+One example of a cost function based market that has received considerable attention is Hanson's Logarithmic Market Scoring Rule (LMSR) [Hanson 2003; 2007; Chen and Pennock 2007]. The cost function of the LMSR is
+
+$$C(\mathbf{q}) = b \log \sum_{o \in O} e^{q_o / b}, \quad (1)$$
+
+where $b > 0$ is a parameter of the market controlling the rate at which prices change. The corresponding price function for each security $o$ is
+
+$$p_o(\mathbf{q}) = \frac{\partial C(\mathbf{q})}{\partial q_o} = \frac{e^{q_o/b}}{\sum_{o' \in O} e^{q_{o'}/b}}. \quad (2)$$
+
+It is well known that the monetary loss of an automated market maker using the LMSR is bounded above by $b \log |O|$. Additionally, the LMSR satisfies several other desirable properties, which are discussed in more detail in Section 3.1.
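Equations 1 and 2 are straightforward to implement directly. The following minimal sketch (the liquidity parameter $b$ and the trade are arbitrary choices of ours) also checks that prices sum to 1 and that a purchase raises the corresponding price:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * log(sum_o exp(q_o / b))."""
    m = max(q)  # subtract the max before exponentiating for numerical stability
    return m + b * math.log(sum(math.exp((qo - m) / b) for qo in q))

def lmsr_prices(q, b):
    """Instantaneous prices p_o(q) = exp(q_o / b) / sum_o' exp(q_o' / b)."""
    m = max(q)
    w = [math.exp((qo - m) / b) for qo in q]
    s = sum(w)
    return [wi / s for wi in w]

b = 10.0
q = [0.0, 0.0, 0.0]                 # no shares held yet; prices are uniform
r = [5.0, 0.0, 0.0]                 # a trader buys 5 shares of security 1
q_new = [qi + ri for qi, ri in zip(q, r)]
pay = lmsr_cost(q_new, b) - lmsr_cost(q, b)   # amount the trader pays

p = lmsr_prices(q_new, b)
assert abs(sum(p) - 1.0) < 1e-12    # prices always sum to 1
assert p[0] > 1.0 / 3               # buying security 1 raised its price
assert 0.0 < pay < 5.0              # each share costs strictly between $0 and $1
```

The `max` subtraction leaves the cost and prices mathematically unchanged while avoiding overflow when $q_o / b$ is large, a standard log-sum-exp trick.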
+
+When $|O|$ is large or infinite, calculating the cost of a purchase becomes intractable in general. Recent research on automated market makers for large outcome spaces has focused on restricting the allowable securities over a combinatorial outcome space and examining whether the LMSR prices can be computed efficiently in the restricted space. If the outcome space contains $n!$ rank orders of $n$ competing candidates, it is #P-hard to price *pair bets* (securities of the form "$1 if and only if candidate A beats candidate B") or *subset bets* (for example, "$1 if one of the candidates in subset C finishes at position $k$") using the LMSR on the full set of permutations [Chen et al. 2008a]. If the outcome space contains $2^n$ Boolean values of $n$ binary base events, it is #P-hard to price securities on conjunctions of any two base events (for example, "$1 if and only if a Democrat wins Florida and Ohio") using the LMSR [Chen et al. 2008a]. This line of research has led to some positive results when the uncertain event enforces particular structure on the outcome space. In particular, for a single-elimination tournament of $n$ teams, securities such as "$1 if and only if team A wins a $k$th round game" and "$1 if and only if team A beats team B given they face off" can be priced efficiently using the LMSR [Chen et al. 2008b]. The tractability of these securities is due to a structure-preserving property: the market probability can be represented by a Bayesian network, and price updating does not change the structure of the network. Pennock and Xia [2011] significantly generalized this result and characterized all structure-preserving securities. For a taxonomy tree on some statistic where the value
+---PAGE_BREAK---
+
+of the statistic of a parent node is the sum of those of its children, securities such as "$1 if and only if the value of the statistic at node A belongs to $[x, y]$" can be priced efficiently using the LMSR [Guo and Pennock 2009].
+
+One approach to combat the computational intractability of pricing over combinatorial spaces is to approximate the market prices using sampling techniques. Yahoo!'s Predictalot,¹ a play-money combinatorial prediction market for the NCAA Men's Basketball playoff, allows traders to bet on almost any combination of the 2⁶³ outcomes of the tournament. Predictalot is based on the LMSR. Instead of calculating the exact prices for securities, it uses importance sampling to approximate the prices. Xia and Pennock [2011] devised a Monte-Carlo algorithm that can efficiently compute the price of any security in disjunctive or conjunctive normal form with guaranteed error bounds. However, using sampling techniques introduces a new problem for pricing: a sampling algorithm will not, in general, return the same prices when queried twice, even if the market state remains the same. Because of this, traders can exploit the market to make a profit, which increases the loss of the market maker.
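As a toy illustration of price estimation by sampling (our own sketch, not Predictalot's actual algorithm), one can estimate a pair-bet price by drawing outcomes from a known distribution over rank orders; note that two runs of such an estimator generally return slightly different prices, which is exactly the exploitability issue described above:

```python
import random
from itertools import permutations

random.seed(1)
outcomes = list(permutations(range(4)))           # 4! = 24 rank orders
probs = [1.0 / len(outcomes)] * len(outcomes)     # uniform, for simplicity

def sample_price(n_samples):
    """Monte-Carlo estimate of the price of '$1 iff horse 0 beats horse 1'."""
    hits = 0
    for _ in range(n_samples):
        pi = random.choices(outcomes, weights=probs)[0]
        hits += pi[0] < pi[1]
    return hits / n_samples

est = sample_price(20000)
assert abs(est - 0.5) < 0.03   # true price is exactly 1/2 by symmetry
```

The estimate converges at the usual $O(1/\sqrt{n})$ Monte-Carlo rate, so tight error guarantees require many samples per quote.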
+
+In this paper, we take a drastically different approach to combinatorial market design. Instead of searching for supportable spaces of securities for existing market makers, we design new market makers tailored to any security space of interest and with desirable theoretical properties. Additionally, rather than requiring that securities have a fixed (e.g., $1) payoff when the underlying event happens, we allow more general contingent securities with arbitrary, efficiently computable and bounded payoffs.
+
+Our approach makes use of powerful techniques from convex optimization. Agrawal et al. [2011] and Peters et al. [2007] also use convex optimization for automated market making. One major difference is that they only consider complete markets, while we consider markets with an arbitrary set of securities. They consider the setting in which traders submit limit orders, and formulate a convex optimization problem that can be solved by the market institution in order to decide what quantity of orders to accept. While formulating the problem in terms of limit orders leads to a syntactically different problem, their mechanisms can be turned into equivalent cost function based market makers. Agrawal et al. [2011] show that their mechanisms can be formulated as a risk minimization problem with an associated penalty function. Mathematically the penalty function plays a similar role as the conjugate function $R$ in our framework, but they do not explicitly make a connection with conjugate duality.
+
+This paper focuses on cost function based market makers. It is worth noting that there are other market mechanisms, with different properties, designed for securities markets. For complete markets, Dynamic Parimutuel Markets [Pennock 2004; Mangold et al. 2005] also use a cost function to price securities; however, the securities are parimutuel bets whose future payoff is not fixed a priori, but depends on the market activities. Brahma et al. [2010] and Das and Magdon-Ismail [2008] design Bayesian learning market makers that maintain a belief distribution and update it based on the traders' behavior. Call markets have been studied to trade securities over combinatorial spaces. In a call market, participants submit limit orders and the market institution determines which orders to accept or reject. Researchers have studied the computational complexity of operating call markets for both permutation [Chen et al. 2007b; Agrawal et al. 2008; Ghodsi et al. 2008] and Boolean [Fortnow et al. 2004] combinatorics.
+
+Related work on online learning and related work on market scoring rules are discussed in Sections 7 and 8 respectively.
+
+¹http://labs.yahoo.com/project/336
+---PAGE_BREAK---
+
+### 3. AN AXIOMATIC APPROACH TO MARKET DESIGN
+
+In this work, we are primarily interested in a market-design scenario in which the outcome space $\mathcal{O}$ is exponentially large, or even infinite, making it infeasible to run a complete market; not only is it generally intractable for the market maker to price an exponential number of securities, but it is notoriously difficult for human traders to reason about the probabilities of so many individually unlikely outcomes. To address both of these problems, we restrict the market maker to offer a menu of only $K$ securities for some reasonably-sized $K$. These securities will be designed by the market maker and one can interpret each security as corresponding to some “interesting” or “useful” query that we might like to make about the future outcome. For example, if a set of players compete in a tournament, the market maker can offer a security for every question of the form “does player X survive beyond round Y?”
+
+We assume that the payoff of each security, clearly depending on the future outcome $o$, can be described by an arbitrary but efficiently-computable function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$; if a trader purchases a share of security $i$ and the true outcome is $o$, then the trader is paid $\rho_i(o)$. We call such a security space complex. The complete security space is a special case of a complex security space in which $K = |\mathcal{O}|$ and for each $i \in \{1, \dots, K\}, \rho_i(o)$ equals 1 if $o$ is the $i$th outcome and 0 otherwise. The markets we design enable traders to purchase arbitrary security bundles $r \in \mathbb{R}^K$. A negative element of $r$ encodes a sale of such a security. The payoff for $r$ upon outcome $o$ is exactly $\rho(o) \cdot r$, where $\rho(o)$ denotes the vector of payoffs for each security for outcome $o$. Let us define $\rho(\mathcal{O}) := \{\rho(o)|o \in \mathcal{O}\}$. It will be assumed, throughout the paper, that $\rho(\mathcal{O})$ is closed and bounded.
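To make the notation concrete, here is a toy instance of a complex security space (our own example, previewing the pair betting of Section 3.1): securities over rank orders of three horses, with $\rho$ and the bundle payoff $\rho(o) \cdot r$ computed explicitly.

```python
from itertools import permutations

# Toy complex security space: n = 3 horses, one security per ordered pair
# (i, j) paying $1 if horse i finishes ahead of horse j.  An outcome is a
# tuple pi with pi[i] = horse i's final position (0 = best).
n = 3
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]  # K = 6

def rho(pi):
    """Payoff vector rho(o) in R^K for outcome pi."""
    return [1.0 if pi[i] < pi[j] else 0.0 for (i, j) in pairs]

def bundle_payoff(pi, r):
    """Payoff of bundle r on outcome pi: the inner product rho(o) . r."""
    return sum(x * y for x, y in zip(rho(pi), r))

outcomes = list(permutations(range(n)))   # |O| = 3! = 6 rank orders
assert len(outcomes) == 6 and len(pairs) == 6

# A negative entry encodes a sale: buying "0 beats 1" while selling
# "1 beats 0" pays +1 or -1 depending on the outcome, never anything else.
r = [0.0] * len(pairs)
r[pairs.index((0, 1))] = 1.0
r[pairs.index((1, 0))] = -1.0
assert {bundle_payoff(pi, r) for pi in outcomes} == {1.0, -1.0}
```

Here $\rho(\mathcal{O})$ is a finite set of 6 vectors in $\{0,1\}^6$, so it is trivially closed and bounded, as the framework requires.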
+
+The first step in the design of automated market makers for complex security spaces is to determine an appropriate set of properties that we would like such market makers to satisfy. To build intuition about which properties might be desirable, we first step back and consider what it is that makes a market maker like the LMSR a good choice for complete markets.
+
+#### 3.1. What Makes A Market Maker Reasonable?
+
+Consider the cost function associated with the Logarithmic Market Scoring Rule (Equation 1) and the corresponding instantaneous price functions (Equation 2). This cost function and the resulting market satisfy several natural properties that make the LMSR a “reasonable” choice:
+
+(1) The cost function is differentiable everywhere. As a result, an instantaneous price $p_o(q) = \partial C(q)/\partial q_o$ can always be obtained for the security associated with any outcome $o$, regardless of the current quantity vector $q$.
+
+(2) The market incorporates information from the traders, in the sense that the purchase of a security corresponding to outcome $o$ causes $p_o$ to increase.
+
+(3) The market does not provide explicit opportunities for arbitrage. Since instantaneous prices are never negative, traders are never paid to obtain securities. Additionally, the sum of the instantaneous prices of the securities is always 1; if the prices summed to something less than (respectively, greater than) 1, a trader could purchase (respectively, short sell) small equal quantities of each security for a guaranteed profit. In addition to preventing arbitrage, these properties also ensure that prices can be interpreted naturally as probabilities, representing the market's current estimate of the distribution over outcomes.
+---PAGE_BREAK---
+
+(4) The market is *expressive* in the sense that a trader with sufficient funds can always set the market prices to reflect his beliefs about the probability of each outcome.²
+
+As described in Section 2, previous research on cost function based markets for combinatorial outcome spaces has focused on developing algorithms to efficiently implement or approximate LMSR pricing [Chen et al. 2008a; Chen et al. 2008b; Guo and Pennock 2009]. Because of this, there has been no need to explicitly extend these properties to complex markets; the properties hold automatically for any implementation of the LMSR. This is no longer the case when our goal is to design new markets tailored to custom sets of securities.
+
+To gain intuition about what makes an arbitrary complex market “reasonable,” let us begin by considering the example of *pair betting* [Chen et al. 2007a; Chen et al. 2008a]. Suppose our outcome space consists of rankings of a set of *n* competitors, such as *n* horses in a race. The outcome of such a race is a permutation $\pi : [n] \to [n]$, where $[n]$ denotes the set $\{1, \dots, n\}$, and $\pi(i)$ is the final position of $i$, with $\pi(i) = 1$ being best. A typical market for this setting might offer *n* securities, with the $i$th security paying off \$1 if $\pi(i) = 1$ and \$0 otherwise. Additionally, there might be separate, *independent* markets allowing bets on horses to place (come in first or second) or show (come in first, second, or third). However, running independent markets for sets of outcomes with clear correlations is wasteful in that information revealed in one market does not automatically propagate to the others. Instead, suppose that we would like to define a set of securities that allow traders to make arbitrary *pair bets*; that is, for every $i, j$, a trader can purchase a security which pays out \$1 whenever $\pi(i) < \pi(j)$. What properties would make a market for pair bets reasonable?
+
+The first two properties described above have straightforward interpretations in this setting. We would still like the instantaneous price of each security to be well-defined at all times; intuitively, the instantaneous price of the security for $\pi(i) < \pi(j)$ should represent the traders' collective belief about the probability that horse $i$ finishes ahead of horse $j$. Call this price $p_{i,j}$. We would still like the market to incorporate information, in the sense that buying the security corresponding to $\pi(i) < \pi(j)$ should never cause the price $p_{i,j}$ to drop.
+
+The remaining two properties are trickier to quantify. Intuitively, these properties require us to define a set of constraints over the prices achievable in the market (to prevent arbitrage), and to ensure that any prices reflecting consistent beliefs about the distribution over outcomes can be achieved (for expressiveness). One can come up with various logical constraints that prices should satisfy. For example, $p_{i,j}$ must be nonnegative at all times for all $i$ and $j$, and $p_{i,j} + p_{j,i}$ must always equal 1, since exactly one of the two securities corresponding to $\pi(i) < \pi(j)$ and $\pi(j) < \pi(i)$ respectively will pay out \$1. Similar reasoning gives us the additional constraint that for all distinct $i$, $j$, and $k$, $p_{i,j} + p_{j,k} + p_{k,i}$ must be at least 1 and no more than 2. But are these constraints enough to prevent arbitrage? Are they too strong to allow the expression of arbitrary consistent beliefs?
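+These constraints can be sanity-checked numerically. The sketch below (illustrative code, not part of the original text) enumerates all rankings for $n = 3$, draws a random distribution over them, and verifies that the induced pair-bet prices satisfy $p_{i,j} + p_{j,i} = 1$ and $1 \le p_{i,j} + p_{j,k} + p_{k,i} \le 2$ for distinct $i, j, k$:
+
```python
import itertools
import random

n = 3
# perm[i] = finishing position of horse i; all n! rankings
perms = list(itertools.permutations(range(n)))

def pair_prices(dist):
    """p[i][j] = Pr[horse i finishes ahead of horse j] under a distribution over rankings."""
    p = [[0.0] * n for _ in range(n)]
    for perm, w in zip(perms, dist):
        for i in range(n):
            for j in range(n):
                if perm[i] < perm[j]:
                    p[i][j] += w
    return p

random.seed(0)
w = [random.random() for _ in perms]
total = sum(w)
dist = [x / total for x in w]  # a random consistent belief over rankings
p = pair_prices(dist)

for i in range(n):
    for j in range(n):
        if i != j:
            assert abs(p[i][j] + p[j][i] - 1.0) < 1e-12
for i, j, k in itertools.permutations(range(n), 3):
    s = p[i][j] + p[j][k] + p[k][i]
    assert 1.0 - 1e-12 <= s <= 2.0 + 1e-12
```
+
+The triple constraint holds because for any single ranking exactly one or two of the three events $\pi(i) < \pi(j)$, $\pi(j) < \pi(k)$, $\pi(k) < \pi(i)$ can occur (zero or three would form a cycle).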
+
+In general, this type of ad hoc reasoning can lead us to many apparently reasonable constraints, but it does not yield an algorithm to determine whether or not we have generated the full set of constraints necessary to prevent arbitrage, and it cannot be applied easily to more complicated security spaces. We address this problem in the next section. We start by formalizing the desirable market properties described above in the context of complex markets. We then provide a precise mathematical characterization of all cost functions that satisfy these properties.
+
+²Othman et al. [2010] introduced a similar property for complete markets, which they called *surjectivity*.
+
+## 3.2. An Axiomatic Characterization of Complex Markets
+
+We are now ready to formalize a set of conditions, or axioms, that one might expect a market to satisfy, and show that these conditions lead to some natural mathematical restrictions on the costs of security bundles. (We consider relaxations of these conditions in Section 6.) We do not presuppose a cost function based market. However, we show that the use of a convex cost function is necessary given the assumption of path independence of security purchases.
+
+**3.2.1. Path Independence and the Use of Cost Functions.** Imagine a sequence of traders entering the marketplace and purchasing security bundles. Let $r_1, r_2, r_3, \dots$ be the sequence of security bundles purchased. After $t-1$ such purchases, the $t$-th trader should be able to enter the marketplace and query the market maker for the cost of arbitrary bundles. The market maker must be able to furnish a cost, denoted $Cost(r|r_1, \dots, r_{t-1})$, for any bundle $r$ given a previous trade sequence $r_1, \dots, r_{t-1}$. If the trader chooses to purchase $r_t$ at a cost of $Cost(r_t|r_1, \dots, r_{t-1})$, the market maker may update the costs of each bundle accordingly. Our first condition requires that the cost of acquiring a bundle $r$ must be the same regardless of how the trader splits up the purchase.
+
+**CONDITION 1 (PATH INDEPENDENCE).** For any $r, r',$ and $r''$ such that $r = r' + r''$, for any $r_1, \dots, r_t$,
+
+$$ Cost(r|r_1, \dots, r_t) = Cost(r'|r_1, \dots, r_t) + Cost(r''|r_1, \dots, r_t, r'). $$
+
+Path independence helps to reduce both arbitrage opportunities and the strategic play of traders, as traders need not reason about the optimal path leading to some target position. However, it is worth pointing out that there are interesting markets that do not satisfy this condition, such as the continuous double auction and the market maker for continuous double auctions considered by Brahma et al. [2010] and Das and Magdon-Ismail [2008]. These markets do not fall into our framework and deserve separate treatment.
+
+It turns out that path independence alone implies that prices can be represented by a cost function $C$, as shown in the following theorem.
+
+**THEOREM 3.1.** *Under Condition 1, there exists a cost function $C: \mathbb{R}^K \rightarrow \mathbb{R}$ such that we may always write*
+
+$$ Cost(\mathbf{r}_t|\mathbf{r}_1, \dots, \mathbf{r}_{t-1}) = C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1} + \mathbf{r}_t) - C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1}). $$
+
+**PROOF.** Let $C(q) := Cost(q|\emptyset)$. Clearly $C(0) = Cost(0|\emptyset) = 0$. We will show, via induction on $t$, that for any $t$ and any bundle sequence $r_1, \dots, r_t$,
+
+$$ Cost(\mathbf{r}_t|\mathbf{r}_1, \dots, \mathbf{r}_{t-1}) = C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1} + \mathbf{r}_t) - C(\mathbf{r}_1 + \dots + \mathbf{r}_{t-1}). \quad (3) $$
+
+When $t=1$, this holds trivially. Assume that Equation 3 holds for all bundle sequences of any length $t \le T$. By Condition 1,
+
+$$
+\begin{align*}
+\text{Cost}(\mathbf{r}_{T+1} | \mathbf{r}_1, \dots, \mathbf{r}_T) &= \text{Cost}(\mathbf{r}_{T+1} + \mathbf{r}_T | \mathbf{r}_1, \dots, \mathbf{r}_{T-1}) - \text{Cost}(\mathbf{r}_T | \mathbf{r}_1, \dots, \mathbf{r}_{T-1}) \\
+&= C\left(\mathbf{r}_{T+1} + \mathbf{r}_T + \sum_{t=1}^{T-1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T-1} \mathbf{r}_t\right) - \left(C\left(\mathbf{r}_T + \sum_{t=1}^{T-1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T-1} \mathbf{r}_t\right)\right) \\
+&= C\left(\sum_{t=1}^{T+1} \mathbf{r}_t\right) - C\left(\sum_{t=1}^{T} \mathbf{r}_t\right),
+\end{align*}
+$$
+
+and we see that Equation 3 holds for $t = T + 1$ too. □
+
+With this theorem in mind, we drop the cumbersome Cost($r|r_1, \dots, r_t$) notation from now on, and write the cost of a bundle $r$ as $C(q+r) - C(q)$, where $q = r_1 + \dots + r_t$ is the vector of previous purchases.
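+Theorem 3.1 can be checked numerically for any concrete cost function. The following sketch (an illustration, not from the paper) uses the LMSR cost function as $C$ and verifies that buying a combined bundle costs the same as buying its parts in sequence:
+
```python
import math

def C(q, b=1.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def cost(r, q):
    """Price of bundle r given an accumulated purchase vector q: C(q + r) - C(q)."""
    return C([qi + ri for qi, ri in zip(q, r)]) - C(q)

q0 = [0.0, 0.0, 0.0]
r1 = [1.0, 0.0, 0.5]
r2 = [0.2, 0.3, 0.0]

# buying r1 + r2 at once costs the same as buying r1, then r2 (path independence)
whole = cost([x + y for x, y in zip(r1, r2)], q0)
split = cost(r1, q0) + cost(r2, [x + y for x, y in zip(q0, r1)])
assert abs(whole - split) < 1e-12
```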
+
+**3.2.2. Formalizing the Properties of a Reasonable Market.** Recall that one of the functions of a securities market is to aggregate traders' beliefs into an accurate prediction. Each trader may have his own (potentially secret) information about the future, which we represent as a distribution $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ over the outcome space, where $\Delta_n := \{\mathbf{x} \in \mathbb{R}^n_+ : \sum_{i=1}^n x_i = 1\}$ denotes the $n$-simplex. The pricing mechanism should therefore incentivize the traders to reveal $\mathbf{p}$, but simultaneously avoid providing arbitrage opportunities. Towards this goal, we now revisit the relevant properties of the LMSR discussed above, and show how the ideas behind each of these properties can be extended to the complex market setting, yielding four additional conditions on our pricing mechanism.
+
+The first condition ensures that the gradient of $C$, $\nabla C(\mathbf{q})$, is always well-defined. If we imagine that a trader can buy or sell an arbitrarily small bundle, we would like the cost of buying and the revenue from selling an infinitesimal quantity of any particular bundle to be the same. If $\nabla C(\mathbf{q})$ is well-defined, it can be interpreted as a vector of instantaneous prices for each security, with $\partial C(\mathbf{q})/\partial q_i$ representing the price per share of an infinitesimal amount of security $i$. Additionally, we can interpret $\nabla C(\mathbf{q})$ as the traders' current estimates of the expected payoff of each security, in the same way that $\partial C(\mathbf{q})/\partial q_o$ was interpreted as the probability of outcome $o$ when considering the complete security space.
+
+**CONDITION 2 (EXISTENCE OF INSTANTANEOUS PRICES).** *C* is continuous and differentiable everywhere on $\mathbb{R}^K$.
+
+The next condition encompasses the idea that the market should react to trades in a sensible way in order to incorporate the private information of the traders. In particular, it says that the purchase of a security bundle $\mathbf{r}$ should never cause the market to lower the price of $\mathbf{r}$. This condition is closely related to incentive compatibility for a myopic trader. It is equivalent to requiring that a trader with a distribution $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ can never find it profitable (in expectation) to buy a bundle $\mathbf{r}$ and at the same time find it profitable to buy the bundle $-\mathbf{r}$. In other words, there cannot be more than one way to express one's information.
+
+**CONDITION 3 (INFORMATION INCORPORATION).** For any $\mathbf{q}, \mathbf{r} \in \mathbb{R}^K$, $C(\mathbf{q} + 2\mathbf{r}) - C(\mathbf{q} + \mathbf{r}) \ge C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q})$.
+
+The no arbitrage condition states that it is never possible for a trader to purchase a security bundle $r$ and receive a positive profit regardless of the outcome. Without this property, the market maker would occasionally offer traders a chance to obtain a guaranteed profit, which is clearly suboptimal in terms of the market maker's loss. However, we do consider the relaxation of this property in Section 6.
+
+**CONDITION 4 (NO ARBITRAGE).** For all $\mathbf{q}, \mathbf{r} \in \mathbb{R}^K$, there exists an $o \in \mathcal{O}$ such that $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) \ge \mathbf{r} \cdot \rho(o)$.
+
+Finally, the expressiveness condition specifies that any trader can set the market prices to reflect his beliefs, within any $\epsilon$ error, about the expected payoffs of each security if arbitrarily small portions of shares may be purchased. The $\epsilon$ approximation factor is necessary because the trader's beliefs may only be expressible in the limit; note that the LMSR does not allow a trader to express the belief that an outcome will occur with probability 1 except in the limit.
+
+**CONDITION 5 (EXPRESSIVENESS).** For any $\mathbf{p} \in \Delta_{|\mathcal{O}|}$, we write $\mathbf{x}^{\mathrm{P}} := \mathbb{E}_{o \sim \mathbf{p}}[\rho(o)]$. Then for any $\mathbf{p} \in \Delta_{|\mathcal{O}|}$ and any $\epsilon > 0$ there is some $\mathbf{q} \in \mathbb{R}^K$ for which $\|\nabla C(\mathbf{q}) - \mathbf{x}^{\mathrm{P}}\| < \epsilon$.
+
+Having formalized our set of conditions, we must now address the question of how to determine whether or not these conditions are satisfied for a particular cost function $C$. The following theorem precisely characterizes the set of all cost functions that satisfy these conditions. The statement and proof require a few pieces of terminology from convex optimization, which will be our main tool for designing cost functions that satisfy Conditions 2-5; for more on why this is necessary, see the note in Section 4. In particular, the *relative boundary* of a convex set $S$ is its boundary in the “ambient” dimension of $S$. For example, if we consider the $n$-dimensional probability simplex $\Delta_n := \{\mathbf{x} \in \mathbb{R}^n : \sum_i x_i = 1, \forall i\; x_i \ge 0\}$, then the relative boundary of $\Delta_n$ is the set $\{\mathbf{x} \in \Delta_n : x_i = 0 \text{ for some } i\}$. We use relint($S$) to refer to the *relative interior* of a convex set $S$, which is the set $S$ minus all of the points on the relative boundary. The interior of a square in 3-dimensional space is empty, but its relative interior is not. We will use closure($S$) to refer to the closure of $S$, the smallest closed set containing $S$. For any subset $S$ of $\mathbb{R}^d$, let $\mathcal{H}(S)$ denote the convex hull of $S$. An important object, which we will use throughout the paper, is $\mathcal{H}(\rho(\mathcal{O}))$, the convex hull of the set of outcome payoffs. (Recall that $\rho(\mathcal{O}) := \{\rho(o) \mid o \in \mathcal{O}\}$.) As we have assumed that $\rho(\mathcal{O})$ is a closed set, it follows easily that $\mathcal{H}(\rho(\mathcal{O}))$ is also closed, and hence closure($\mathcal{H}(\rho(\mathcal{O}))$) = $\mathcal{H}(\rho(\mathcal{O}))$.
+
+**THEOREM 3.2.** *Under Conditions 2-5, C must be convex with*
+
+$$
+\operatorname{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathcal{H}(\rho(\mathcal{O})). \quad (4)
+$$
+
+Moreover, any convex differentiable function $C : \mathbb{R}^K \to \mathbb{R}$ respecting (4) must also satisfy Conditions 2-5.
+
+PROOF. We begin with the first direction. Take any $C$ satisfying Conditions 2-5. We first establish that $C$ is convex everywhere. Suppose not; since $C$ is continuous (Condition 2), there must then exist some $\mathbf{q}$ and $\mathbf{r}$ such that $C(\mathbf{q}) > (1/2)C(\mathbf{q} + \mathbf{r}) + (1/2)C(\mathbf{q} - \mathbf{r})$. This means $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) < C(\mathbf{q}) - C(\mathbf{q} - \mathbf{r})$, which contradicts Condition 3 (applied at $\mathbf{q} - \mathbf{r}$), so $C$ must be convex.
+
+To prove the equality, we will establish containment in both directions. We first prove that $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathcal{H}(\rho(\mathcal{O}))$, from which it follows that closure($\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$) $\subseteq \mathcal{H}(\rho(\mathcal{O}))$ because $\mathcal{H}(\rho(\mathcal{O}))$ is already closed by assumption. Notice that Condition 2 guarantees that $\nabla C(\mathbf{q})$ is well-defined for any $\mathbf{q}$. Towards a contradiction, assume there exists some $\mathbf{q}'$ for which $\nabla C(\mathbf{q}') \notin \mathcal{H}(\rho(\mathcal{O}))$. Because the hull is a convex set, this can be reformulated as follows: there must exist some halfspace, defined by a normal vector $\mathbf{r}$, that separates $\nabla C(\mathbf{q}')$ from every member of $\rho(\mathcal{O})$. More precisely,
+
+$$
+\nabla C(\mathbf{q}') \notin \mathcal{H}(\rho(\mathcal{O})) \iff \exists \mathbf{r} \forall o \in \mathcal{O} : \nabla C(\mathbf{q}') \cdot \mathbf{r} < \rho(o) \cdot \mathbf{r}.
+$$
+
+The strict inequality in this equation is due to the assumption that $\mathcal{H}(\rho(O))$ is a closed convex set. On the other hand, letting $\mathbf{q} := \mathbf{q}' - \mathbf{r}$, we see by convexity of $C$ that $C(\mathbf{q}+\mathbf{r})-C(\mathbf{q}) \leq \nabla C(\mathbf{q}') \cdot \mathbf{r}$. Combining these last two inequalities, we see that the price of bundle $\mathbf{r}$ purchased with history $\mathbf{q}$ is always smaller than the payoff for any outcome. This implies that there exists some arbitrage opportunity, contradicting Condition 4.
+
+We now show that $\mathcal{H}(\rho(\mathcal{O})) \subseteq \text{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$. The statement of Condition 5 is equivalent to the statement that every element $\mathbf{x}^{\mathrm{P}} \in \mathcal{H}(\rho(\mathcal{O}))$ is a limit point of the set $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$. But then we are done, as closure($S$) is defined to be $S$ together with all of its limit points.
+
+We now prove the final statement, which is that (4) is also sufficient to achieve Conditions 2-5. Take some convex differentiable $C: \mathbb{R}^K \to \mathbb{R}$ for which (4) is true. Condition 2 holds by assumption. As previously argued, Condition 3 is equivalent to the convexity of $C$. Condition 5 is equivalent to the statement that $\mathcal{H}(\rho(\mathcal{O})) \subseteq \text{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$, which holds by (4). Finally, to establish Condition 4, we reverse our previous argument. The existence of an arbitrage opportunity means that there exist some $\mathbf{q}, \mathbf{r}$ such that $C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q}) < \rho(o) \cdot \mathbf{r}$ for every $o \in \mathcal{O}$. Using convexity of $C$, we also have that $\nabla C(\mathbf{q}) \cdot \mathbf{r} \le C(\mathbf{q} + \mathbf{r}) - C(\mathbf{q})$. Combining gives us that $\nabla C(\mathbf{q}) \cdot \mathbf{r} < \rho(o) \cdot \mathbf{r}$ for all $o \in \mathcal{O}$, which implies that $\nabla C(\mathbf{q}) \notin \mathcal{H}(\rho(\mathcal{O}))$, contradicting (4). Thus Condition 4 is satisfied. $\square$
+
+What we have arrived at from the set of proposed conditions is that (a) a pricing mechanism can always be described precisely in terms of a convex cost function $C$, and (b) the set of reachable prices of the mechanism, that is, the set $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$, must be identical to the convex hull of the payoff vectors for each outcome, $\mathcal{H}(\rho(\mathcal{O}))$, except possibly differing on the relative boundary of $\mathcal{H}(\rho(\mathcal{O}))$. For complete markets, this implies that the set of achievable prices should be the convex hull of the $n$ standard basis vectors. Indeed, this comports exactly with the natural assumption that the vector of security prices in a complete market should represent a probability distribution, or equivalently that it should lie in the $n$-simplex [Agrawal et al. 2011].
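+For the complete market this is easy to check concretely: the LMSR's price vector is the softmax of $\mathbf{q}/b$, which always lies in the relative interior of the simplex. A small illustrative sketch (with $b = 1$, not taken from the paper):
+
```python
import math

b = 1.0  # LMSR liquidity parameter (illustrative value)

def grad_C(q):
    """Instantaneous LMSR prices: the softmax of q / b, computed stably."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [ei / s for ei in e]

# prices always form a probability vector strictly inside the simplex
for q in ([0.0, 0.0, 0.0], [5.0, -2.0, 1.0], [20.0, 0.0, -20.0]):
    p = grad_C(q)
    assert abs(sum(p) - 1.0) < 1e-12
    assert all(0.0 < pi < 1.0 for pi in p)
```
+
+Prices of exactly 0 or 1 are approached only as components of $\mathbf{q}$ diverge, matching the closure statement in Theorem 3.2.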
+
+# 4. DESIGNING THE COST FUNCTION VIA CONJUGATE DUALITY
+
+The natural conditions we introduced above imply that to design a market for a set of $K$ securities with payoffs specified by an arbitrary payoff function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$, we should use a cost function based market with a convex, differentiable cost function such that $\text{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathcal{H}(\rho(\mathcal{O}))$. We now provide a general technique that can be used to design and compare properties of cost functions that satisfy these criteria. Our proposed framework uses the notion of *conjugate duality* to construct cost functions. The aim here is to simplify the task of designing a function $C$ that satisfies Conditions 2-5. We refer to any market mechanism belonging to our framework as a *Duality-based Cost Function Market Maker*.
+
+**Duality-based Cost Function Market Maker**
+
+*Input:* Outcome space $\mathcal{O}$
+
+*Input:* $K$ securities specified by a payoff function $\rho: \mathcal{O} \to \mathbb{R}_{\ge 0}^K$
+
+*Input:* Convex compact price space $\Pi$ (typically $\Pi \equiv \mathcal{H}(\rho(\mathcal{O}))$)
+
+*Input:* Strictly convex $R$ with $\text{relint}(\Pi) \subseteq \text{dom}(R)$
+
+*Output:* Market mechanism specified by the cost function $C: \mathbb{R}^K \to \mathbb{R}$ with
+
+$$C(\mathbf{q}) := \sup_{\mathbf{x} \in \text{relint}(\Pi)} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x})$$
+
+To understand this framework, we begin by reviewing the definition of a convex conjugate. Here and throughout the paper we use the notation $\text{dom}(f)$ to refer to the domain of a function $f$, i.e., where it is defined and finite valued.
+
+*Definition 4.1 (Rockafellar [1970], Section 12).* For any convex function $f : \mathbb{R}^K \rightarrow [-\infty, \infty]$, the convex conjugate $f^*$ of $f$ is defined as
+
+$$f^*(z) := \sup_{x \in \mathbb{R}^K} z \cdot x - f(x).$$
+
+The curious reader can find good discussions of conjugate functions in, e.g., Boyd and Vandenberghe [2004] or Hiriart-Urruty and Lemaréchal [2001]. Rockafellar [1970] further shows that if $f$ is convex and proper³ then $f^*$ is also convex and proper. Properness shall be assumed throughout; that is, when we introduce a function and refer to it as *convex* we mean *convex and proper*.
+
+The notion of convex duality has several nice features. For example, under weak conditions it holds that $f^{**} \equiv f$ for a convex $f$. We need more tools from convex analysis to give precise proofs of the results needed for the present discussion, however we save the technical details for the appendix. We now state the key result that justifies the duality-based framework. The proof of this theorem can also be found in the appendix.
+
+**THEOREM 4.2.** *Assume we have an outcome space $\mathcal{O}$ and a payoff function $\rho$ such that $\rho(\mathcal{O})$ is a bounded subset of $\mathbb{R}^K$. Then for any cost function $C: \mathbb{R}^K \to \mathbb{R}$ satisfying Conditions 2-5 and where $C$ is closed$^4$, there exists a strictly convex function $R: \mathbb{R}^K \to [-\infty, \infty]$ such that*
+
+$$C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\mathcal{H}(\rho(\mathcal{O})))} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}). \quad (5)$$
+
+Furthermore, for any convex function $R$ defined on $\text{relint}(\mathcal{H}(\rho(\mathcal{O})))$, if $R$ is strictly convex on its domain then the cost function defined by the conjugate, $C := R^*$, satisfies Conditions 2-5.
+
+This theorem is the key result that will guide us in designing a market pricing mechanism. This mechanism relies on constructing a cost function $C: \mathbb{R}^K \to \mathbb{R}$ that satisfies Conditions 2-5, and we are now given ingredients to achieve this: pick any strictly convex function $R$ with domain containing $\mathcal{H}(\rho(\mathcal{O}))$, and let $C$ be defined as in (5). Moreover, any $C$ satisfying the desired conditions can be constructed in this fashion.
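+As a concrete instance of this recipe (an illustration, not taken from the paper): choosing $R$ to be the negative entropy on the simplex recovers the LMSR. The sketch below evaluates the conjugate (5) at its known closed-form maximizer, the softmax, rather than invoking a generic solver, and checks that it agrees with the LMSR's closed form:
+
```python
import math

b = 1.0  # liquidity parameter (an illustrative choice)

def R(x):
    """Negative entropy on the simplex, scaled by b; strictly convex."""
    return b * sum(xi * math.log(xi) for xi in x)

def softmax(q):
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [ei / s for ei in e]

def C_conjugate(q):
    """C(q) = sup_x x.q - R(x); the maximizer over the simplex is the softmax."""
    x = softmax(q)
    return sum(xi * qi for xi, qi in zip(x, q)) - R(x)

def C_lmsr(q):
    """Closed form: C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

q = [1.0, -0.5, 2.0]
assert abs(C_conjugate(q) - C_lmsr(q)) < 1e-12
```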
+
+## 4.1. Properties of Duality-based Cost Functions
+
+We now devote a few paragraphs to some important details regarding the proposed duality-based pricing mechanism.
+
+In our definition, we introduce the concept of a “price space,” denoted by $\Pi$. For the conditions of Theorem 4.2 to hold, we need $\Pi \equiv \mathcal{H}(\rho(\mathcal{O}))$. One might ask why we even introduce a price space $\Pi$ when it is already given by $\rho$. Indeed, we give the more general definition because, as we will discuss, there can be computational benefits to allowing $\Pi$ to be larger. We also require that $R$ be differentiable, which, while not strictly necessary, is a reasonable condition and eases the notation, as we can then discuss the gradient $\nabla R(\mathbf{x})$.
+
+This duality based approach to designing the market mechanism is convenient for several reasons. First, it leads to markets that are efficient to implement whenever $\mathcal{H}(\rho(\mathcal{O}))$ can be described by a polynomial number of simple constraints.⁵ The difficulty with combinatorial outcome spaces is that actually enumerating the set of outcomes can be challenging or impossible. In our proposed framework we need only work with the convex hull of the payoff vectors for each outcome when represented by a low-dimensional payoff function $\rho(\cdot)$. This has significant benefits, as one often encounters convex sets which contain exponentially many vertices yet can be described by polynomially many constraints. Moreover, as the construction of $C$ is based entirely on convex programming, we reduce the problem of automated market making to the problem of optimization, for which we have a wealth of efficient algorithms. Second, this method yields simple formulas for properties of markets that help us choose the best market to run. Two of these properties, worst-case monetary loss and worst-case information loss, are analyzed in Section 4.2.
+
+³The properness of a function is defined in the appendix. This is not to be confused with the properness of a scoring rule that we will discuss in Section 8.
+
+⁴See the appendix for the definition of closed convex functions.
+
+⁵Under reasonable assumptions, a convex program can be solved with error $\epsilon$ in time polynomial in $1/\epsilon$ and the size of the problem input using standard techniques, e.g., the ellipsoid method and interior point methods. Efficient techniques for convex optimization have been thoroughly studied and can be found in standard texts; hence we omit such discussions in the present work.
+
+In order to establish precise statements, our discussions about certain convex sets – e.g., $\{\nabla C(\mathbf{q})\}$, $\mathcal{H}(\rho(\mathcal{O}))$, and $\Pi$ – have required precise definitions like the relative boundary and interior, and the closure of a set. One might ask whether this is necessary, as we might be focusing too heavily on “boundary cases.” While these details are occasionally cumbersome, they are important and do arise for very simple markets. For example, for the case of a complete market on $n$ outcomes using the LMSR cost function $C(\mathbf{q}) = b \log \sum_i \exp(q_i/b)$, we have that $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^n\} = \text{relint}(\Delta_n)$; prices of 0 and 1 can be reached only in the limit.
+
+For the remainder of the paper, we shall further assume that our chosen $R$ is continuous and defined everywhere on $\mathcal{H}(\rho(\mathcal{O}))$; that is, not just on the relative interior. It is not entirely unreasonable to consider functions $R$ for which this is not the case; for example, we could imagine an $R$ which asymptotes towards the boundary of $\mathcal{H}(\rho(\mathcal{O}))$. However, there are practical reasons why this is undesirable, as we will show that such cases lead to unbounded loss for the market maker. Notice also that if $R$ is continuous on the compact set $\mathcal{H}(\rho(\mathcal{O}))$ it follows immediately that $R$ is also bounded on $\mathcal{H}(\rho(\mathcal{O}))$. Furthermore, we can always write
+
+$$C(\mathbf{q}) = \max_{\mathbf{x} \in \mathcal{H}(\rho(O))} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}); \quad (6)$$
+
+that is, where we have replaced $\sup$ with $\max$. Equation 6 is often convenient, as it allows us to work directly with the maximizer of the optimization.
+
+**LEMMA 4.3.** If $R$ is continuous and defined on all of $\mathcal{H}(\rho(O))$, the price vector at any $\mathbf{q} \in \mathbb{R}^K$ satisfies
+
+$$\nabla C(\mathbf{q}) = \arg\max_{\mathbf{x} \in \mathcal{H}(\rho(O))} \mathbf{q} \cdot \mathbf{x} - R(\mathbf{x}). \quad (7)$$
+
+**PROOF.** We first note that the optimization problem in Equation 7 has a unique maximizer because $R$ is strictly convex. We know via conjugate duality that for any $\mathbf{q} \in \mathbb{R}^K$,
+
+$$R(\nabla C(\mathbf{q})) = \sup_{\mathbf{q}' \in \mathbb{R}^K} \mathbf{q}' \cdot \nabla C(\mathbf{q}) - C(\mathbf{q}').$$
+
+Since the supremum is over all of $\mathbb{R}^K$, it is achieved anywhere the derivative of the objective function (with respect to $\mathbf{q}'$) vanishes. This holds when $\mathbf{q}' = \mathbf{q}$, which gives us that
+
+$$R(\nabla C(\mathbf{q})) + C(\mathbf{q}) = \mathbf{q} \cdot \nabla C(\mathbf{q}), \quad (8)$$
+
+for every $\mathbf{q}$. Equation 7 follows immediately from Equation 8. $\square$
+
+
+Lemma 4.3 shows that, given any $\mathbf{q}$, the instantaneous prices are simply the maximizer of the convex optimization problem (6) for any $R$ that is continuous and defined on $\mathcal{H}(\rho(\mathcal{O}))$. This convenient fact will be used throughout the paper.
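+Both Equation 8 and the argmax characterization in Equation 7 can be verified numerically for a concrete pair. The sketch below (illustrative, not from the paper) uses the LMSR cost function with $R$ the scaled negative entropy, and checks that no random point on the simplex beats $\nabla C(\mathbf{q})$ in the objective:
+
```python
import math
import random

b = 1.0  # LMSR liquidity parameter

def R(x):
    """Conjugate function: b times the negative entropy of x."""
    return b * sum(xi * math.log(xi) for xi in x)

def C(q):
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def grad_C(q):
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [ei / s for ei in e]

q = [0.3, 1.2, -0.7]
x_star = grad_C(q)

# Equation 8: R(grad C(q)) + C(q) = q . grad C(q)
lhs = R(x_star) + C(q)
rhs = sum(qi * xi for qi, xi in zip(q, x_star))
assert abs(lhs - rhs) < 1e-12

# Equation 7: grad C(q) maximizes x.q - R(x) over the simplex
random.seed(1)
best = rhs - R(x_star)
for _ in range(1000):
    w = [random.random() for _ in q]
    s = sum(w)
    x = [wi / s for wi in w]  # a random point on the simplex
    assert sum(qi * xi for qi, xi in zip(q, x)) - R(x) <= best + 1e-12
```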
+
+Given an arbitrary smooth convex function $f$, we can define the *Legendre transformation*, which maps a point $\mathbf{x} \in \text{dom}(f)$ via the rule $\mathbf{x} \mapsto \nabla f(\mathbf{x})$. Indeed, under certain circumstances this map is the inverse of the Legendre transformation of the conjugate $f^*$, i.e., $\nabla f^*(\nabla f(\mathbf{x})) = \mathbf{x}$ and $\nabla f(\nabla f^*(\mathbf{y})) = \mathbf{y}$ for every $\mathbf{x} \in \text{dom}(f)$ and $\mathbf{y} \in \text{dom}(f^*)$. Unfortunately, the required conditions are quite strong: we need that $f$ is strictly convex, the interior of $\text{dom}(f)$ is non-empty, and $\nabla f$ diverges towards the boundary of $\text{dom}(f)$ (see Chapter 26 of Rockafellar [1970]). So while we would like to argue that $\nabla C$ is the inverse of the map $\nabla R$ for our framework, this will generally not be true. Assuming $R$ is differentiable, then given any $\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))$, Lemma 4.3 tells us that $\nabla C(\nabla R(\mathbf{x})) = \mathbf{x}$ by setting $\mathbf{q} = \nabla R(\mathbf{x})$. However, $\nabla R(\nabla C(\mathbf{q})) = \mathbf{q}$ does not hold in general. On the other hand, if $\mathcal{H}(\rho(\mathcal{O}))$ has a non-empty interior, and the optimal solution to Equation 6 is always contained within the interior, then the statement $\nabla R(\nabla C(\mathbf{q})) = \mathbf{q}$ will hold. Note, however, that these conditions are not satisfied for a complete market on $n$ outcomes, where $\mathcal{H}(\rho(\mathcal{O}))$ is the $n$-simplex $\Delta_n$, which has an empty interior (even though the relative interior is non-empty). Thus, cost function based market makers for complete markets do not satisfy $\nabla R(\nabla C(\mathbf{q})) = \mathbf{q}$. In fact, while each $\mathbf{q}$ maps to a single price $\mathbf{x} = \nabla C(\mathbf{q})$, each price $\mathbf{x}$ can be achieved at multiple values of $\mathbf{q}$ in these markets.
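+This asymmetry is easy to see in the LMSR example (an illustrative sketch with $b = 1$, not from the paper): $\nabla C \circ \nabla R$ is the identity, while $\nabla R \circ \nabla C$ recovers $\mathbf{q}$ only up to a uniform shift across coordinates:
+
```python
import math

b = 1.0  # liquidity parameter; R is b times the negative entropy

def grad_C(q):
    """Softmax of q / b: the LMSR price map."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [ei / s for ei in e]

def grad_R(x):
    """Gradient of R(x) = b * sum_i x_i log x_i."""
    return [b * (math.log(xi) + 1.0) for xi in x]

# grad C inverts grad R: a target price vector x is recovered exactly
x = [0.2, 0.5, 0.3]
assert all(abs(a - c) < 1e-12 for a, c in zip(x, grad_C(grad_R(x))))

# but grad R does not invert grad C: q comes back shifted by a constant
q = [1.0, 2.0, 3.0]
diffs = [a - c for a, c in zip(q, grad_R(grad_C(q)))]
assert max(diffs) - min(diffs) < 1e-12   # the shift is uniform across coordinates
assert abs(diffs[0]) > 1e-6              # ...and nonzero for this q
```
+
+The uniform shift reflects exactly the fact noted above: every translate $\mathbf{q} + c\mathbf{1}$ maps to the same price vector.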
+
+## 4.2. Bounding the Market Maker's Loss and Loss of Information
+
+We now discuss two key properties of our proposed market framework. We will make use of the notion of a Bregman divergence. The *Bregman divergence* with respect to a differentiable convex function $f$ is given by
+
+$$D_f(x, y) := f(x) - f(y) - \nabla f(y) \cdot (x - y).$$
+
+It is clear by convexity that $D_f(x, y) \ge 0$ for all $x$ and $y$.
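+As a concrete example (not from the paper): taking $f$ to be the negative entropy, the Bregman divergence between two probability vectors reduces to the KL divergence, which is a quick way to build intuition for $D_f$:
+
```python
import math

def neg_entropy(x):
    return sum(xi * math.log(xi) for xi in x)

def bregman(x, y):
    """D_f(x, y) = f(x) - f(y) - grad f(y) . (x - y) for f = negative entropy."""
    grad_y = [math.log(yi) + 1.0 for yi in y]
    return neg_entropy(x) - neg_entropy(y) - sum(g * (a - c) for g, a, c in zip(grad_y, x, y))

def kl(x, y):
    return sum(a * math.log(a / c) for a, c in zip(x, y))

x, y = [0.7, 0.2, 0.1], [0.3, 0.3, 0.4]
assert abs(bregman(x, y) - kl(x, y)) < 1e-12  # on the simplex the two coincide
assert bregman(x, y) >= 0.0 and abs(bregman(x, x)) < 1e-12
```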
+
+### 4.2.1. Bounding the Market Maker's Monetary Loss.
+When comparing market mechanisms, it is useful to consider the market maker's worst-case monetary loss,
+
+$$\sup_{q \in \mathbb{R}^K} \left( \sup_{o \in O} (\rho(o) \cdot q) - C(q) + C(0) \right).$$
+
+This quantity is simply the worst-case difference between the maximum amount that the market maker might have to pay the traders ($\sup_{o \in O} \rho(o) \cdot q$) and the amount of money collected by the market maker ($C(q) - C(0)$). The following theorem provides a bound on this loss in terms of the conjugate function.
+
+**THEOREM 4.4.** *Consider any duality-based cost function market maker with $\Pi = H(\rho(O))$. The worst-case monetary loss of the market maker is no more than*
+
+$$\sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}). \quad (9)$$
+
+Furthermore, the above bound is tight, as the supremum of the market maker's loss is exactly the value in Equation 9.
+
+**PROOF.** Let $\mathbf{q}$ denote the final vector of quantities sold, $\nabla C(\mathbf{q})$ denote the final vector of instantaneous prices, and $o$ denote the true outcome. From Equations 6 and 7, we have that $C(\mathbf{q}) = \nabla C(\mathbf{q}) \cdot \mathbf{q} - R(\nabla C(\mathbf{q}))$ and $C(\mathbf{0}) = -\min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x})$. The difference between the amount that the market maker must pay out and the amount that the market maker has previously collected when outcome $o$ occurs is
+
+$$
+\begin{align*}
+& \rho(\mathbf{o}) \cdot \mathbf{q} - C(\mathbf{q}) + C(\mathbf{0}) \\
+&= \rho(\mathbf{o}) \cdot \mathbf{q} - (\nabla C(\mathbf{q}) \cdot \mathbf{q} - R(\nabla C(\mathbf{q}))) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) \\
+&= \mathbf{q} \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q})) + R(\nabla C(\mathbf{q})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) + R(\rho(\mathbf{o})) - R(\rho(\mathbf{o})) \\
+&= R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - (R(\rho(\mathbf{o})) - R(\nabla C(\mathbf{q})) - \mathbf{q} \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q}))) \tag{10} \\
+&\le R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - (R(\rho(\mathbf{o})) - R(\nabla C(\mathbf{q})) - \nabla R(\nabla C(\mathbf{q})) \cdot (\rho(\mathbf{o}) - \nabla C(\mathbf{q}))) \\
+&= R(\rho(\mathbf{o})) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - D_R(\rho(\mathbf{o}), \nabla C(\mathbf{q})),
+\end{align*}
+$$
+
+where $D_R$ is the Bregman divergence with respect to $R$, as defined above. The first equality follows from Equation 8. The inequality follows from the first-order optimality condition for convex optimization, which says that for any convex and differentiable $f$ defined on the domain $\Pi$, if $f$ is minimized at $x$, then
+
+$$ \nabla f(x) \cdot (y - x) \ge 0 \text{ for any } y \in \Pi. $$
+
+Consider $f(x) = R(x) - q \cdot x$. The minimum of this function occurs at $x = \nabla C(q)$ via the duality assumption. Plugging in $y = \rho(o)$ yields the inequality.
+
+Since the divergence is always nonnegative, this is upper bounded by $R(\rho(o)) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x})$, which is in turn upper bounded by $\sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x})$.
+
+Finally, we show that this loss bound is tight. First, select any $\epsilon > 0$. Choose an outcome $o$ so that $\sup_{o' \in \mathcal{O}} R(\rho(o')) - R(\rho(o)) < \epsilon/2$. Next, choose some $\mathbf{q}'$ so that $D_R(\rho(o), \nabla C(\mathbf{q}')) < \epsilon/2$. This is achievable because the set of gradients of $C$ is dense in $\mathcal{H}(\rho(\mathcal{O}))$ by Theorem 3.2, and so we can ensure that $\nabla C(\mathbf{q}')$ is arbitrarily close to $\rho(o)$. Finally, let $\mathbf{q} := \nabla R(\nabla C(\mathbf{q}'))$, and observe that by construction we have $\nabla C(\mathbf{q}) = \nabla C(\mathbf{q}')$. To compute the market maker's loss for this particular choice of $\mathbf{q}$ and $o$, we apply Equation 10 to obtain:
+
+$$
+\begin{align*}
+&R(\rho(o)) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - (R(\rho(o)) - R(\nabla C(\mathbf{q})) - \mathbf{q} \cdot (\rho(o) - \nabla C(\mathbf{q}))) \\
+&= R(\rho(o)) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - D_R(\rho(o), \nabla C(\mathbf{q})) \\
+&> \sup_{o' \in \mathcal{O}} R(\rho(o')) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) - \epsilon
+\end{align*}
+$$
+
+where the first equality holds by the definition of the Bregman divergence, because $q = \nabla R(\nabla C(q))$. $\square$
+
+This theorem tells us that as long as the conjugate function is bounded on $\mathcal{H}(\rho(O))$, the market maker's worst-case loss is also bounded.⁶ It says further that this loss is actually realized, for a particular outcome $o$, at least when the price vector approaches $\rho(o)$. This suggests that loss to the market maker is worst when the traders are the most certain about the outcome.
+
+### 4.2.2. Bounding Information Loss.
+Information loss can occur when securities are sold in discrete quantities (for example, single units), as they are in most real-world markets.
+
+⁶In Section 6, we will state a more general, stronger bound on the market maker's loss capturing the intuitive notion that the market maker's profits should be higher when the distance between the final vector of prices and the payoff vector $\rho(o)$ of the true outcome $o$ is large; see Theorem 6.2.
+---PAGE_BREAK---
+
+Without the ability to purchase arbitrarily small bundles, traders may not be able to change the market prices to reflect their true beliefs about the expected payoff of each security, even if expressiveness is satisfied. We will argue that the amount of information loss is captured by the market's bid-ask spread for the smallest trading unit. Given some $q$, the current bid-ask spread of security bundle $r$ is defined to be $(C(q+r) - C(q)) - (C(q) - C(q-r))$. This is simply the difference between the current cost of buying the bundle $r$ and the current price at which $r$ could be sold.
+
+To see how the bid-ask spread relates to information loss, suppose that the current vector of quantities sold is $q$. If securities must be sold in unit chunks, a rational, risk-neutral trader will not buy security $i$ unless she believes the expected payoff of this security is at least $C(q+e^i) - C(q)$, where $e^i$ is the vector that has value 1 at its $i$th element and 0 everywhere else. Similarly, she will not sell security $i$ unless she believes the expected payoff is at most $C(q) - C(q-e^i)$. If her estimate of the expected payoff of the security is between these two values, she has no incentive to buy or sell the security. In this case, it is only possible to infer that the trader believes the true expected payoff lies somewhere in the range $[C(q) - C(q-e^i), C(q+e^i) - C(q)]$. The bid-ask spread is precisely the size of this range.
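The bid-ask spread above is easy to compute numerically. As an illustration (not part of the original text), the following sketch uses the standard LMSR cost function $C(q) = b\log\sum_i e^{q_i/b}$; the state `q` and the unit bundle `e1` are arbitrary example values.

```python
import math

def lmsr_cost(q, b=1.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)  # shift by the max to stabilize the log-sum-exp
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def bid_ask_spread(q, r, b=1.0):
    """(C(q+r) - C(q)) - (C(q) - C(q-r)) for bundle r at state q."""
    plus = [qi + ri for qi, ri in zip(q, r)]
    minus = [qi - ri for qi, ri in zip(q, r)]
    return (lmsr_cost(plus, b) - lmsr_cost(q, b)) - (lmsr_cost(q, b) - lmsr_cost(minus, b))

q = [10.0, 12.0, 11.0]
e1 = [1.0, 0.0, 0.0]   # one unit of security 1
print(bid_ask_spread(q, e1, b=1.0))                                # positive, by convexity of C
print(bid_ask_spread(q, e1, b=10.0) < bid_ask_spread(q, e1, b=1.0))  # deeper market, tighter spread
```

The second line illustrates the depth intuition discussed next: scaling $b$ up flattens the cost function and shrinks the spread.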
+
+The bid-ask spread depends on how fast instantaneous prices change as securities are bought or sold. Intuitively, the bid-ask spread relates to the depth of the market. When the bid-ask spread is large, new purchases or sales can change the prices of the securities dramatically; essentially, the market is shallow. When the bid-ask spread is small, purchases or sales may only move the prices slightly; the market is deep. Based on this intuition, for complete markets, Chen and Pennock [2007] use the inverse of $\partial^2 C(q)/\partial q_i^2$ to capture the notion of market depth for each security $i$ independently. In a similar spirit, we define a *market depth parameter* $\beta$. Larger values of $\beta$ correspond to deeper markets. We will bound the bid-ask spread in terms of this parameter, and use this parameter to show that there exists a clear tradeoff between worst-case monetary loss and information loss; this will be formalized in Theorem 4.7 below.
+
+To simplify discussion, assume that $C$ is twice-differentiable. Our parameter $\beta$ is related to the curvature of $C$. Given any unit vector $v$, the curvature (i.e., second derivative) of $C$ at $q$ in the direction of $v$ can be calculated as $v^\top \nabla^2 C(q)v$, where $\nabla^2 C(q)$ is the Hessian of $C$ at $q$. Furthermore, for any unit vector $v$, $v^\top \nabla^2 C(q)v$ is lower bounded by the smallest eigenvalue and upper bounded by the largest eigenvalue of $\nabla^2 C(q)$. To see this, note that the Hessian is a symmetric matrix, and therefore has $K$ linearly independent eigenvectors, each normalized to have length one. Let $u_i$ be the $i$th unit eigenvector of $\nabla^2 C(q)$ corresponding to eigenvalue $\lambda_i$. $\lambda_i$ is nonnegative due to convexity of $C$. Any unit vector $v$ can be represented as a linear combination of the $K$ unit eigenvectors, $v = \sum_i a_i u_i$ with $\sum_i a_i^2 = 1$. For any orthogonal eigenvectors $u_i$ and $u_j$, it is easy to see that $u_i^\top \nabla^2 C(q) u_i = \lambda_i$ and $u_i^\top \nabla^2 C(q) u_j = 0$. Thus, $v^\top \nabla^2 C(q) v = \sum_i a_i^2 \lambda_i$, which lies in $[\min_i \lambda_i, \max_i \lambda_i]$.
+
+*Definition 4.5.* For any duality-based cost function market maker with twice-differentiable cost function $C$, the *market depth parameter* $\beta(q)$ for a quantity vector $q$ is defined as $\beta(q) = 1/\lambda_C(q)$, where $\lambda_C(q)$ is the largest eigenvalue of $\nabla^2 C(q)$, the Hessian of $C$ at $q$. The worst-case market depth is $\beta = \inf_{q \in \mathbb{R}^K} \beta(q)$.
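Definition 4.5 can be evaluated directly whenever the Hessian of $C$ is available. A minimal sketch, assuming the LMSR, whose Hessian at $q$ is $(\text{diag}(p) - pp^\top)/b$ with $p = \nabla C(q)$ the price vector (the example quantities are arbitrary):

```python
import numpy as np

def lmsr_prices(q, b=1.0):
    """Price vector nabla C(q) for the LMSR: the softmax of q / b."""
    z = np.exp((q - q.max()) / b)
    return z / z.sum()

def market_depth(q, b=1.0):
    """beta(q) = 1 / lambda_max(nabla^2 C(q)); for the LMSR the Hessian
    at q is (diag(p) - p p^T) / b, with p the current price vector."""
    p = lmsr_prices(q, b)
    H = (np.diag(p) - np.outer(p, p)) / b
    lam_max = np.linalg.eigvalsh(H)[-1]   # eigvalsh returns ascending order
    return 1.0 / lam_max

q = np.array([0.0, 1.0, 2.0])
print(market_depth(q, b=1.0))                              # depth at this q
print(market_depth(q, b=10.0) > market_depth(q, b=1.0))    # scaling R scales beta
```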
+
+As described above, this definition of worst-case market depth implies that $1/\beta$ is an upper bound on the curvature of $C$. We will derive the upper bound of the bid-ask spread and the lower bound of the worst-case loss of the market maker in terms of $\beta$. Our derivation makes use of the following lemma that establishes a convenient relationship between the Bregman divergence of a convex function $f$ and the eigenvalues of the Hessian of $f$. The proof of the lemma is in Appendix B.
+---PAGE_BREAK---
+
+**LEMMA 4.6.** Let $f(\mathbf{x})$ be a twice-differentiable convex function. If for all $\mathbf{x} \in \text{dom}(f)$, every eigenvalue of $\nabla^2 f(\mathbf{x})$ falls in the set $[a, b]$, $a \le b$, then for any $\mathbf{x}, \mathbf{x}' \in \text{dom}(f)$,
+
+$$ \frac{a\|\mathbf{x} - \mathbf{x}'\|^2}{2} \le D_f(\mathbf{x}, \mathbf{x}') \le \frac{b\|\mathbf{x} - \mathbf{x}'\|^2}{2}. \quad (11) $$
+
+We now present a theorem showing an inherent tension between worst-case monetary loss and information loss. Here $\text{diam}(\mathcal{H}(\rho(\mathcal{O})))$ denotes the diameter of the hull of the payoff vectors for each outcome.
+
+**THEOREM 4.7.** For any duality-based cost function market maker with twice differentiable $C$ and worst-case market depth $\beta$, the bid-ask spread for bundle $r$ with previous purchases $q$ is no more than $\|\mathbf{r}\|^2/\beta$. The worst-case monetary loss of the market maker is at least $\beta \cdot \text{diam}^2(\mathcal{H}(\rho(\mathcal{O})))/8$.
+
+**PROOF.** The bid-ask spread can be written in terms of Bregman divergences. In particular, $C(q+r) - C(q) - (C(q) - C(q-r)) = D_C(q+r, q) + D_C(q-r, q)$. According to Lemma 4.6, because $1/\beta$ is an upper bound on the eigenvalues of $\nabla^2 C(q)$ at any $q$, both $D_C(q+r, q)$ and $D_C(q-r, q)$ are upper bounded by $\|\mathbf{r}\|^2/(2\beta)$. Thus, $C(q+r) - C(q) - (C(q) - C(q-r)) \le \|\mathbf{r}\|^2/\beta$.
+
+Let $\mathbf{x}_0 = \arg\min_{x \in \Pi} R(\mathbf{x})$. The first-order optimality condition for convex optimization gives that $\nabla R(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) \ge 0$ for all $\mathbf{x} \in \Pi$. According to Theorem 4.4, the worst-case loss of the market maker is
+
+$$
+\begin{align*}
+\sup_{\mathbf{x} \in \rho(\mathcal{O})} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) &= \sup_{\mathbf{x} \in \rho(\mathcal{O})} (R(\mathbf{x}) - R(\mathbf{x}_0)) \\
+&= \sup_{\mathbf{x} \in \rho(\mathcal{O})} (D_R(\mathbf{x}, \mathbf{x}_0) + \nabla R(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)) \\
+&\ge \sup_{\mathbf{x} \in \rho(\mathcal{O})} D_R(\mathbf{x}, \mathbf{x}_0).
+\end{align*}
+$$
+
+Because $C$ is twice-differentiable, for any $q$ such that $\nabla C(q) \in \text{relint}(\Pi)$, we have a correspondence between the Hessian of $C$ at $q$ and the Hessian of $R$ at $\nabla C(q)$. More precisely, we have that $u^\top\nabla^2 C(q)u = u^\top\nabla^{-2}R(\nabla C(q))u$ for any $u = x - x'$ with $x, x' \in \Pi$, where $\nabla^{-2}R(\nabla C(q))$ denotes the inverse of the Hessian of $R$ at $\nabla C(q)$. (See, for example, Gorni [1991] for more on the second-order properties of convex functions.) This means that $\beta(q)$ is equivalently defined as the smallest eigenvalue of $\nabla^2 R(\nabla C(q))|_{\Pi}$; that is, where we consider the second derivative only within the price region $\Pi$. Thus, $\beta$ lower bounds the eigenvalues of $\nabla^2 R(x)$ for all $x \in \Pi$.
+
+Applying Lemma 4.6, we have $D_R(x, x_0) \ge \frac{\beta}{2} \|x - x_0\|^2$. For any $x_0$, since there are payoff vectors at distance $\text{diam}(\mathcal{H}(\rho(\mathcal{O})))$ from one another, the triangle inequality guarantees some $x \in \rho(\mathcal{O})$ with $\|x - x_0\| \ge \text{diam}(\mathcal{H}(\rho(\mathcal{O}))) / 2$, which finishes the proof. $\square$
+
+We can see that there is a direct tradeoff between the upper bound⁷ of the bid-ask spread, which shrinks as $\beta$ grows, and the lower bound of the worst-case loss of the market maker, which grows linearly in $\beta$. This tradeoff is very intuitive. When the market is shallow (small $\beta$), small trades have a large impact on market prices, and traders cannot purchase too many shares of the same security without paying a lot. When the market is deep (large $\beta$), prices change slowly, allowing the market maker
+
+⁷Strictly speaking, as we are emphasizing the necessary tradeoff between bid-ask spread and worst-case loss, we should have a *lower bound* on the bid-ask spread. On the other hand, if the worst-case market depth parameter is $\beta$ then there is some $q$ and $r$ such that $D_C(q+r, q)/\|\mathbf{r}\|^2 \approx 1/(2\beta)$ and this approximation can be made arbitrarily tight for small enough $r$ when $C$ is twice differentiable.
+---PAGE_BREAK---
+
+to gain more precise information, but simultaneously forcing the market maker to take on more risk since many shares of a security can be purchased at prices that are potentially too low. This tradeoff can be adjusted by scaling $R$, which scales $\beta$. This is analogous to adjusting the “liquidity parameter” $b$ of the LMSR.
+
+### 4.3. Selecting a Conjugate Function
+
+We have seen that the choice of the conjugate function $R$ impacts market properties such as worst-case loss and information loss. We now explore this choice in more detail. In many situations, the ideal choice of the conjugate is a function of the form
+
+$$R(x) := \frac{\lambda}{2} \|x - x_0\|^2. \quad (12)$$
+
+Here $R(x)$ is simply the squared Euclidean distance between $x$ and an initial price vector $x_0 \in \Pi$, scaled by $\lambda/2$. By utilizing this quadratic conjugate function, we achieve a market depth $\beta(q)$ that is uniformly $\lambda$ for any $q$ for which $\nabla C(q) \in \text{relint}(\Pi)$. Furthermore, if $x_0$ is chosen as the “center” of $\Pi$, namely $x_0 = \arg\min_{x \in \Pi} \max_{y \in \Pi} \|x - y\|$, then the worst-case loss of the market maker is $\max_{x \in \Pi} R(x) = (\lambda/8)\text{diam}^2(\Pi)$. While the market maker can tune $\lambda$ appropriately according to the desired tradeoff between worst-case market depth and worst-case loss, the tradeoff is tightest when $R$ has a Hessian that is uniformly a scaled identity matrix, or more precisely where $R$ takes the form in Equation 12.
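As a concrete illustration (a toy one-dimensional instance, not taken from the text), consider a single security paying \$0 or \$1, so $\Pi = [0,1]$ with $\text{diam}(\Pi) = 1$ and center $x_0 = 1/2$, and take $R(x) = (\lambda/2)(x - 1/2)^2$. The cost and price then have a closed form obtained by clipping the unconstrained maximizer of $qx - R(x)$ to $\Pi$:

```python
def quad_cost(q, lam=4.0, x0=0.5):
    """Cost for R(x) = (lam/2)(x - x0)^2 on the price interval [0, 1]:
    C(q) = max_{x in [0,1]} q*x - R(x)."""
    x = min(1.0, max(0.0, x0 + q / lam))   # clipped unconstrained maximizer
    return q * x - 0.5 * lam * (x - x0) ** 2

def quad_price(q, lam=4.0, x0=0.5):
    """Instantaneous price C'(q), clipped to the feasible interval."""
    return min(1.0, max(0.0, x0 + q / lam))

lam = 4.0
# worst-case loss bound: max_x R(x) = (lam/8) * diam^2, with diam = 1 here
print(lam / 8)
# the price reaches the boundary (and C becomes linear) once |q| >= lam / 2
print(quad_price(1.9, lam), quad_price(2.5, lam))
```

Larger `lam` deepens the market uniformly but raises the worst-case loss `lam / 8`, exactly the tradeoff described above.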
+
+Unfortunately, by selecting a conjugate of this form, or any $R$ with bounded derivative, the market maker does inherit one potentially undesirable property: security prices may become constant when $\nabla C(q)$ reaches a point on relbnd($\Pi$), the relative boundary of $\Pi$. That is, if we arrive at a total demand $q$ where $\nabla C(q) = \rho(o)$ for some outcome $o$, our mechanism begins offering securities at a price equal to the best-case payoff, akin to asking someone to bet a dollar for the chance to possibly win a dollar. The Quad-SCPM for complete markets is known to exhibit this behavior [Agrawal et al. 2011].
+
+To avoid these undesirable pricing scenarios, it is sufficient to require that our conjugate function satisfies one condition. We say that a convex function $R$ defined on $\Pi$ is a pseudo-barrier⁸ for $\Pi$ if $\|\nabla R(x_t)\| \to \infty$ for any sequence of points $x_1, x_2, \dots \in \Pi$ which tends towards relbnd($\Pi$). If we require our conjugate function $R$ to be a pseudo-barrier, we are guaranteed that the instantaneous price vector $\nabla C(q)$ always lies in $\text{relint}(\Pi)$, and does not become constant near the boundary.
+
+It is important to note that, while it is desirable that $\|\nabla R(x_t)\| \to \infty$ as $x_t$ approaches relbnd($\Pi$), it is generally not desirable that $R(x_t) \to \infty$. Recall that the market maker's worst-case loss grows with the maximum value of $R$ on $\Pi$, and thus we restrict our attention to conjugate functions that are bounded on their domain. A perfect example of a convex function that is simultaneously bounded and a pseudo-barrier is the negative entropy function $H(x) = \sum_i x_i \log x_i$, defined on the $n$-simplex $\Delta_n$. It is perhaps no surprise that the LMSR, the most common market mechanism for complete security spaces, can be described by the choice $R(x) := bH(x)$ where the price space $\Pi = \Delta_n$ [Agrawal et al. 2011; Chen and Vaughan 2010].
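The correspondence between the negative-entropy conjugate and the LMSR can be checked numerically: the supremum defining $C(q) = \max_{x \in \Delta_n}\, q \cdot x - bH(x)$ is attained at the softmax of $q/b$ and equals $b\log\sum_i e^{q_i/b}$. A small sketch (the values of `b` and `q` are arbitrary):

```python
import math

def neg_entropy(x, b):
    """R(x) = b * sum_i x_i log x_i (zero terms contribute nothing)."""
    return b * sum(xi * math.log(xi) for xi in x if xi > 0)

def softmax(q, b):
    """The maximizer of q.x - R(x) over the simplex: softmax of q / b."""
    m = max(q)
    z = [math.exp((qi - m) / b) for qi in q]
    s = sum(z)
    return [zi / s for zi in z]

def lmsr_cost(q, b):
    """C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

b, q = 2.0, [0.3, -1.0, 2.5]
x_star = softmax(q, b)   # this is nabla C(q), the price vector
objective = sum(qi * xi for qi, xi in zip(q, x_star)) - neg_entropy(x_star, b)
print(abs(objective - lmsr_cost(q, b)) < 1e-9)   # True: the softmax attains the sup
```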
+
+⁸We use the term pseudo-barrier to distinguish this from the typical definition of a barrier function on a set $\Pi$, which is a function that grows without bound towards the boundary of $\Pi$. The term *Legendre* was used by Cesa-Bianchi and Lugosi [2006] for a similar notion, which may have originated in Rockafellar [1970], yet this definition requires the stronger condition that $\Pi$ contains a nonempty interior.
+---PAGE_BREAK---
+
+**5. EXAMPLES OF COMPUTATIONALLY EFFICIENT MARKETS**
+
+In the previous section, we provided a general framework for designing markets on combinatorial or infinite outcome spaces. We now provide some examples of markets that can be operated efficiently using this framework.
+
+**5.1. Subset Betting**
+
+Recall the scenario described in Section 3.1 in which the outcome is a ranking of a set of $n$ competitors, such as $n$ horses in a race, represented as a permutation $\pi : [n] \to [n]$. Chen et al. [2007a] proposed a betting language, *subset betting*, in which traders can place bets $(i, j)$, for any candidate $i$ and any slot $j$, that pay out \$1 in the event that $\pi(i) = j$ and \$0 otherwise.⁹ Chen et al. [2008a] showed that pricing bets of this form using the LMSR is #P-hard and provided an algorithm for approximating the prices by exploiting the structure of the market. Using our framework, it is simple to design a computationally efficient market for securities of this form.
+
+In order to set up such a combinatorial market within our framework, we must be able to efficiently work with the convex hull of the payoff vectors for each outcome. Notice that, for an outcome $\pi$, the associated payoff can be described by a matrix $M_\pi$, with $M_\pi(i,j) = I[\pi(i) = j]$, where $I[\cdot]$ is the indicator function. Taking this one step further, it is easy to verify that the convex hull of the set of permutation matrices is precisely the set of *doubly stochastic matrices*, that is the set
+
+$$ \Pi = \left\{ X \in \mathbb{R}^{n \times n}_{\ge 0} : \sum_{i'=1}^{n} X(i', j) = \sum_{j'=1}^{n} X(i, j') = 1 \ \forall i, j \right\}, $$
+
+where $X(i, j)$ represents the element at the $i$th row and $j$th column of the matrix $X$. Notice, importantly, that this set is described by only $n^2$ variables and $O(n)$ constraints.
+
+To fully specify the market maker, we must also select a conjugate function $R$ for our price space. While the quadratic conjugate function is an option, there is a natural extension of the negative entropy function, whose desirable properties were discussed in the previous section, for the space of stochastic matrices. For any $X \in \Pi$, let us set
+
+$$ R(X) = b \sum_{i,j} X(i,j) \log X(i,j) $$
+
+for some parameter $b > 0$. The worst-case market depth is computed as the minimum of the smallest eigenvalue of the Hessian of $R$ within relint($\Pi$). This occurs when the $X$ matrix has all values $1/n$, hence the worst-case depth is $nb$. The worst-case loss, on the other hand, is easily computed as $bn \log n$. Note that this bound on worst-case loss is the same as would be obtained by running $n$ independent markets, one for each slot $j$, using the LMSR.
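Computing the price matrix $\nabla C(Q)$ here means solving $\max_{X \in \Pi} \langle Q, X\rangle - R(X)$, an entropy-regularized problem over the set of doubly stochastic matrices; its optimality conditions imply the solution is a row/column (Sinkhorn) rescaling of $e^{Q/b}$, a standard fact for such problems. A sketch, with an arbitrary fixed iteration budget:

```python
import numpy as np

def subset_betting_prices(Q, b=1.0, iters=500):
    """Sketch of the price matrix nabla C(Q) = argmax_{X doubly stochastic}
    <Q, X> - b * sum_ij X_ij log X_ij, computed by Sinkhorn balancing of
    exp(Q / b).  The shift by Q.max() only rescales u and v."""
    K = np.exp((Q - Q.max()) / b)
    u = np.ones(Q.shape[0])
    v = np.ones(Q.shape[1])
    for _ in range(iters):
        u = 1.0 / (K @ v)      # enforce unit row sums
        v = 1.0 / (K.T @ u)    # enforce unit column sums
    return np.diag(u) @ K @ np.diag(v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 4))    # quantities sold of each (i, j) security
X = subset_betting_prices(Q, b=1.0)
print(np.allclose(X.sum(axis=0), 1.0) and np.allclose(X.sum(axis=1), 1.0))  # True
```

The check at the end confirms the prices are logically consistent: each competitor's slot prices and each slot's competitor prices sum to one.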
+
+**5.2. Sphere Betting**
+
+One important challenge of operating a combinatorial prediction market is to always maintain the logical consistency of security prices. Our framework offers a way to incorporate the constraints on security prices into pricing. Hence, in addition to combinatorial prediction markets, our framework can be used to design markets where security prices have some natural constraints due to their problem domains.
+
+⁹The original definition of subset betting allowed bets of the form "any candidate in set S will end up in slot j" or "candidate i will end up in one of the slots in set S." A bet of this form can be constructed easily using our betting language by bundling multiple securities.
+---PAGE_BREAK---
+
+We consider an example in which the outcome space is infinite. An object orbiting the planet, perhaps a satellite, is predicted to fall to earth in the near future and will land at an unknown location, which we would like to predict. We represent locations on the earth as unit vectors $u \in \mathbb{R}^3$. The difficulty of this example arises from the fact that the outcome must be a unit vector, imposing constraints on the three coordinates. We will design a market with three securities, each corresponding to one coordinate of the final location of the object. In particular, security $i$ will pay off $u_i + 1$ dollars if the object lands in location $u$. (The addition of 1, while not strictly necessary, ensures that the payoffs, and therefore prices, remain positive, though it will be necessary for traders to sell securities to express certain beliefs.) This means that traders can purchase security bundles $r \in \mathbb{R}^3$ and, when the object lands at a location $u$, receive a payoff $(u+1) \cdot r$. Note that in this example, the outcome space is infinite, but the security space is small.
+
+The price space $\mathcal{H}(\rho(O))$ for this market will be the 2-norm unit ball centered at 1. To construct a market for this scenario, let us make the simple choice of $R(x) = \lambda\|x-1\|^2$ for some parameter $\lambda > 0$. When $\|q\| \le 2\lambda$, there exists an $x$ such that $\nabla R(x) = q$. In particular, this is true for $x = (1/2)q/\lambda + 1$, and $q \cdot x - R(x)$ is maximized at this point. When $\|q\| > 2\lambda$, $q \cdot x - R(x)$ is maximized at an $x$ on the boundary of $\mathcal{H}(\rho(O))$. Specifically, it is maximized at $x = q/||q|| + 1$. From this, we can compute
+
+$$C(\mathbf{q}) = \begin{cases} \frac{1}{4\lambda} ||\mathbf{q}||^2 + \mathbf{q} \cdot \mathbf{1}, & \text{when } ||\mathbf{q}|| \le 2\lambda, \\ ||\mathbf{q}|| + \mathbf{q} \cdot \mathbf{1} - \lambda, & \text{when } ||\mathbf{q}|| > 2\lambda. \end{cases}$$
+
+The market depth parameter $\beta$ is $2\lambda$; in fact, $\beta(x) = 2\lambda$ for any price vector $x$ in the interior of $\mathcal{H}(\rho(O))$. By Theorem 4.4, the worst-case loss of the market maker is no more than $\lambda$, which is precisely the lower bound implied by Theorem 4.7. Finally, the divergence $D_C(q+r, q) \le \|r\|^2/(4\lambda)$ for all $q, r$, with equality when $\|q\|, \|q+r\| \le 2\lambda$, implying that the bid-ask spread scales linearly with $\|r\|^2/\lambda$.
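The closed-form cost above is straightforward to implement and sanity-check. The following sketch (example values arbitrary) verifies that the numerical gradient of $C$ matches the stated price vector in the interior region, and that prices stay in the unit ball centered at $\mathbf{1}$:

```python
import numpy as np

def sphere_cost(q, lam=1.0):
    """Closed-form cost from the text, for R(x) = lam * ||x - 1||^2 on the
    unit ball centered at the all-ones vector."""
    n = np.linalg.norm(q)
    if n <= 2 * lam:
        return n ** 2 / (4 * lam) + q.sum()     # q . 1 = q.sum()
    return n + q.sum() - lam

def sphere_price(q, lam=1.0):
    """Price vector nabla C(q)."""
    n = np.linalg.norm(q)
    if n <= 2 * lam:
        return q / (2 * lam) + 1.0
    return q / n + 1.0

lam = 1.0
q = np.array([0.3, -0.4, 0.1])
p = sphere_price(q, lam)
print(np.linalg.norm(p - 1.0) <= 1.0)   # the price stays in the feasible ball
# numerical gradient check: C'(q) should equal the price vector
eps = 1e-6
g = np.array([(sphere_cost(q + eps * np.eye(3)[i], lam)
               - sphere_cost(q - eps * np.eye(3)[i], lam)) / (2 * eps)
              for i in range(3)])
print(np.allclose(g, p, atol=1e-5))     # True in the interior region
```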
+
+We note that for this particular prediction problem, if we try to predict the latitude and longitude of the landing location, we don't have any constraints on prices. In particular, we can have two securities that pay off linearly with the latitude and longitude of the landing location respectively. These two securities are independent and can be traded in two independent markets.
+
+**6. COMPUTATIONAL COMPLEXITY AND RELAXATIONS**
+
+In Section 3, we argued that the space of feasible price vectors should be precisely $\mathcal{H}(\rho(O))$, the convex hull of the payoff vectors for each outcome. In each of our examples, we have discussed market scenarios for which this hull has a polynomial number of constraints, allowing us to efficiently calculate prices via convex optimization. Unfortunately, one should not necessarily expect that a given payoff function and outcome space will lead to an efficiently describable convex hull. In this section, we explore a couple of approaches to overcome such complexity challenges. First, we discuss the case in which $\mathcal{H}(\rho(O))$ has exponentially (or infinitely) many constraints yet gives rise to a separation oracle. Second, we show that the price space $\Pi$ can indeed be relaxed beyond $\mathcal{H}(\rho(O))$ without increasing the risk to the market maker. Finally, we show how this relaxation applies in practice.
+
+**6.1. Separation Oracles**
+
+If we encounter a convex hull $\mathcal{H}(\rho(O))$ with exponentially-many constraints, all may not be lost. In order to calculate prices, we need to solve the optimization problem $\max_{x \in \mathcal{H}(\rho(O))} q \cdot x - R(x)$. Under certain circumstances this can still be solved efficiently.
+---PAGE_BREAK---
+
+Consider a convex optimization problem with a concave objective function $f(x)$ and constraints $g_i(x) \le 0$ for all $i$ in some index set $I$. That is, we want to solve:
+
+$$
+\begin{array}{ll}
+\max & f(\mathbf{x}) \\
+\text{s.t.} & \mathbf{x} \in \mathbb{R}^d \\
+& g_i(\mathbf{x}) \le 0 \quad \forall i \in I
+\end{array}
+$$
+
+This can be converted to a problem with a linear objective in the standard way:
+
+$$
+\begin{array}{ll}
+\max & c \\
+\text{s.t.} & x \in \mathbb{R}^d, c \in \mathbb{R} \\
+& f(x) \geq c \\
+& g_i(x) \leq 0 \quad \forall i \in I
+\end{array}
+$$
+
+Of course, if *I* is an exponentially or infinitely large set we will have trouble solving this problem directly. On the other hand, the constraint set may admit an efficient separation oracle, defined as a function that takes as input a point (x, c) and returns true if all the necessary constraints are satisfied or, otherwise, returns false and specifies a violated constraint.¹⁰ Given an efficient separation oracle, one has access to alternative methods for optimization, the most famous being Khachiyan's ellipsoid method, that run in polynomial time. For more details see, for example, Grötschel et al. [1981].
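As a toy illustration (not from the text) of how an exponentially large constraint set can admit cheap separation, consider $P = \{x \in \mathbb{R}^d : \sum_{i \in S} x_i \le 1 \ \forall S \subseteq [d]\}$. There are $2^d$ constraints, but the most violated one is always indexed by the strictly positive coordinates, so separation takes $O(d)$ time:

```python
def separation_oracle(x):
    """Toy oracle for P = {x : sum_{i in S} x_i <= 1 for all subsets S}.
    The subset maximizing sum_{i in S} x_i is the set of positive
    coordinates, so checking that single constraint suffices."""
    S = [i for i, xi in enumerate(x) if xi > 0]
    if sum(x[i] for i in S) <= 1:
        return True, None          # feasible: all 2^d constraints hold
    return False, S                # violated constraint: sum_{i in S} x_i <= 1

print(separation_oracle([0.2, -0.5, 0.3]))   # (True, None)
print(separation_oracle([0.8, 0.7, -1.0]))   # (False, [0, 1])
```

An oracle of this shape is exactly what ellipsoid-style methods consume: a membership verdict plus, on failure, a separating constraint.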
+
+This suggests that a fruitful direction for designing computationally efficient market makers is to examine the pricing problem on an instance-by-instance basis, and for a particular instance of interest, leverage the structure of the instance to develop an efficient algorithm for solving the specific separation problem. We leave this for future research.
+
+## 6.2. Relaxation of the Price Space
+
+When dealing with a convex hull $\mathcal{H}(\rho(O))$ that has a prohibitively large constraint set and does not admit an efficient separation oracle, we still have one tool at our disposal: we can modify $\mathcal{H}(\rho(O))$ to get an alternate price space $\Pi$ which we can work with efficiently. Recall that in Section 3, we arrived at the requirement that $\Pi = \mathcal{H}(\rho(O))$ as a necessary conclusion of the proposed conditions on our market maker. If we wish to violate this requirement, we need to consider which conditions must be weakened and revise the resulting guarantees from Section 3.
+
+We will continue to construct duality-based cost function market makers in the usual way, via the tuple $(O, \rho, \Pi, R)$. $\Pi$ is still a convex compact set of feasible prices. But we now allow $\Pi$ to be distinct from $\mathcal{H}(\rho(O))$. Not surprisingly, the choice of $\Pi$ will affect the interest of the traders and the market maker. We prove several claims which will aid us in our market design. Theorem 6.1 tells us that the expressiveness condition should not be relaxed, while Theorem 6.2 tells us that the no-arbitrage condition can be. Together, these imply that we may safely choose $\Pi$ to be a *superset* of $\mathcal{H}(\rho(O))$.
+
+The first (perhaps surprising) theorem tells us that expressiveness is not only useful for information aggregation, it is actually necessary for the market maker to avoid unbounded loss. The proof involves showing that if $o$ is the final outcome and $\rho(o) \notin \Pi$, then it is possible to make an infinite sequence of trades such that each trade causes a constant amount of loss to the market maker.
+
+¹⁰More precisely, a separation oracle returns any separating hyperplane that divides the input from the feasible set.
+---PAGE_BREAK---
+
+**THEOREM 6.1.** For any duality-based cost function market maker, the worst-case loss of the market maker is unbounded if $\rho(O) \not\subseteq \Pi$.
+
+**PROOF.** Consider some outcome $o$ such that $\rho(o) \notin \Pi$. By definition, the feasible price set $\Pi = \overline{\{\nabla C(q) : q \in \mathbb{R}^K\}}$ is compact. Because $\rho(o) \notin \Pi$, there exists a hyperplane that strongly separates $\Pi$ and $\rho(o)$. In other words, there exists a $k > 0$ such that $\|\rho(o) - \nabla C(q)\| \ge k$ for all $q$.
+
+When outcome $o$ is realized, $B(q) = \rho(o) \cdot q - C(q) + C(0)$ is the market maker's loss given $q$. We have $\nabla B(q) = \rho(o) - \nabla C(q)$, which represents the instantaneous change of the market maker's loss. For infinitesimal $\epsilon$, let $q' = q + \epsilon(\rho(o) - \nabla C(q))$. Then
+
+$$
+\begin{align*}
+B(q') &= B(q) + \nabla B(q) \cdot [\epsilon(\rho(o) - \nabla C(q))] \\
+&= B(q) + \epsilon ||\rho(o) - \nabla C(q)||^2 \geq B(q) + \epsilon k^2.
+\end{align*}
+$$
+
+This shows that for any $q$ we can find a $q'$ such that the market maker's worst-case loss increases by at least $\epsilon k^2$. This process can be repeated indefinitely. Hence, we conclude that the market maker's loss is unbounded. $\square$
+
+In the following theorem, which is a simple extension of Theorem 4.4, we see that including additional price vectors in $\Pi$ does not adversely impact the market maker's worst-case loss, despite the fact that the no-arbitrage condition is violated.
+
+**THEOREM 6.2.** *Consider any duality-based cost function market maker with $R$ and $\Pi$ satisfying $\sup_{x \in H(\rho(O))} R(x) < \infty$ and $H(\rho(O)) \subseteq \Pi$. Assume that the initial price vector satisfies $\nabla C(0) \in H(\rho(O))$. Let $q$ denote the vector of quantities sold and $o$ denote the true outcome. The monetary loss of the market maker is no more than*
+
+$$ R(\rho(o)) - \min_{x \in H(\rho(O))} R(x) - D_R(\rho(o), \nabla C(q)). $$
+
+**PROOF.** This proof is nearly identical to the proof of Theorem 4.4. The only major difference is that now $C(0) = -\min_{x \in \Pi} R(x)$ instead of $C(0) = -\min_{x \in H(\rho(O))} R(x)$, but this is equivalent since we have assumed that $\nabla C(0) \in H(\rho(O))$. $R(\rho(o))$ is still well-defined and finite since we have assumed that $H(\rho(O)) \subseteq \Pi$. $\square$
+
+This tells us that expanding $\Pi$ can only help the market maker; increasing the range of $\nabla C(q)$ can only increase the divergence term. This may seem somewhat counterintuitive. We originally required that $\Pi \subseteq H(\rho(O))$ as a consequence of the no-arbitrage condition, and by relaxing this condition, we are providing traders with potential arbitrage opportunities. However, these arbitrage opportunities do not hurt the market maker. As long as the initial price vector lies in $H(\rho(O))$, any such situations where a trader can earn a guaranteed profit are effectively created (and paid for) by other traders! In fact, if the final price vector $\nabla C(q)$ falls outside the convex hull, the divergence term will be strictly positive, improving the bound.
+
+To elaborate on this point, let's consider an example where $\Pi$ is strictly larger than $H(\rho(O))$. Let $q$ be the current vector of purchases, and assume the associated price vector $x = \nabla C(q)$ lies in the interior of $H(\rho(O))$. Consider a trader who purchases a bundle $r$ such that the new price vector leaves this set, i.e., $y := \nabla C(q+r) \notin H(\rho(O))$. We claim that this choice can be strictly improved in the sense that there is an alternative bundle $r'$ whose associated profit, for any outcome $o$, is strictly greater than the profit for $r$.
+
+For simplicity, assume $y$ is an interior point of $\Pi \setminus H(\rho(O))$ so that $q+r = \nabla R(y)$. Define $\pi(y) := \arg\min_{y' \in H(\rho(O))} D_R(y', y)$, the minimum divergence projection of $y$ into $H(\rho(O))$. The alternative bundle we consider is $r' = \nabla R(\pi(y)) - q$. Our trader
+---PAGE_BREAK---
+
+pays $C(\mathbf{q}+\mathbf{r}) - C(\mathbf{q}+\mathbf{r}')$ less to purchase $\mathbf{r}'$ than to purchase $\mathbf{r}$. Hence, for any outcome $\mathbf{o}$, we see that the increased profit for $\mathbf{r}'$ over $\mathbf{r}$ is
+
+$$
+\begin{align}
+\rho(\mathbf{o}) \cdot (\mathbf{r}' - \mathbf{r}) - C(\mathbf{q} + \mathbf{r}') + C(\mathbf{q} + \mathbf{r}) &> \rho(\mathbf{o}) \cdot (\mathbf{r}' - \mathbf{r}) + \nabla C(\mathbf{q} + \mathbf{r}') \cdot (\mathbf{r} - \mathbf{r}') \\
+&= (\rho(\mathbf{o}) - \pi(\mathbf{y})) \cdot (\mathbf{r}' - \mathbf{r}). \tag{13}
+\end{align}
+$$
+
+Notice that we achieve strict inequality precisely because $\nabla C(\mathbf{q} + \mathbf{r}') = \pi(\mathbf{y}) \neq \mathbf{y} = \nabla C(\mathbf{q} + \mathbf{r})$. Now use the optimality condition for $\pi(\mathbf{y})$ to see that, since $\rho(\mathbf{o}) \in \mathcal{H}(\rho(\mathcal{O}))$, $\nabla_{\pi(\mathbf{y})}(D_R(\pi(\mathbf{y}), \mathbf{y})) \cdot (\rho(\mathbf{o}) - \pi(\mathbf{y})) \ge 0$. It is easy to check that $\nabla_{\pi(\mathbf{y})}(D_R(\pi(\mathbf{y}), \mathbf{y})) = \nabla R(\pi(\mathbf{y})) - \nabla R(\mathbf{y}) = \mathbf{r}' - \mathbf{r}$. Combining this last expression with the inequality above and (13) tells us that the profit increase is strictly greater than $(\rho(\mathbf{o}) - \pi(\mathbf{y})) \cdot (\mathbf{r}' - \mathbf{r}) \ge 0$. Simply put, the trader receives a guaranteed positive increase in profit for any outcome $\mathbf{o}$.
+
+The next theorem shows that any time the price vector lies outside of $\mathcal{H}(\rho(\mathcal{O}))$, traders could profit by moving it back inside. The proof uses a nice application of minimax duality for convex-concave functions.
+
+**THEOREM 6.3.** For any duality-based cost function market maker, given a current quantity vector $\mathbf{q}_0$ with current price vector $\nabla C(\mathbf{q}_0) = \mathbf{x}_0$, a trader has the opportunity to earn a guaranteed profit of at least $\min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} D_R(\mathbf{x}, \mathbf{x}_0)$.
+
+**PROOF.** A trader looking to earn a guaranteed profit when the current quantity is $\mathbf{q}_0$ hopes to purchase a bundle $\mathbf{r}$ so that the worst-case profit $\min_{\mathbf{o} \in \mathcal{O}} \rho(\mathbf{o}) \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0)$ is as large as possible. Notice that this quantity is always nonnegative, since $\mathbf{r} = 0$, which yields 0 profit, is one option. Thus, a trader would like to solve the following objective:
+
+$$
+\begin{align*}
+& \max_{\mathbf{r} \in \mathbb{R}^K} \min_{\mathbf{o} \in \mathcal{O}} \rho(\mathbf{o}) \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) \\
+&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} \max_{\mathbf{r} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{r} - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) \\
+&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} \max_{\mathbf{r} \in \mathbb{R}^K} \mathbf{x} \cdot (\mathbf{q}_0 + \mathbf{r}) - C(\mathbf{q}_0 + \mathbf{r}) + C(\mathbf{q}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
+&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) + C(\mathbf{q}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
+&= \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} R(\mathbf{x}) + \mathbf{x}_0 \cdot \mathbf{q}_0 - R(\mathbf{x}_0) - \mathbf{x} \cdot \mathbf{q}_0 \\
+&\geq \min_{\mathbf{x} \in \mathcal{H}(\rho(\mathcal{O}))} D_R(\mathbf{x}, \mathbf{x}_0).
+\end{align*}
+$$
+
+The first equality with the $\min/\max$ swap holds via Sion's Minimax Theorem [Sion 1958]. The last inequality was obtained using the first-order optimality condition of the solution $\mathbf{x}_0 = \arg\max_{\mathbf{x} \in \Pi} \mathbf{x} \cdot \mathbf{q}_0 - R(\mathbf{x})$ for the vector $\mathbf{x} - \mathbf{x}_0$ which holds since $\mathbf{x} \in \Pi$. $\square$
+
+When $\mathbf{x}_0 \in \mathcal{H}(\rho(\mathcal{O}))$, $D_R(\mathbf{x}, \mathbf{x}_0)$ is minimized when $\mathbf{x} = \mathbf{x}_0$ and the bound is vacuous, as we would expect. The more interesting case occurs when the prices have fallen outside of $\mathcal{H}(\rho(\mathcal{O}))$, in which case a trader is guaranteed a riskless profit by moving $\nabla C(\mathbf{q})$ to the closest point in $\mathcal{H}(\rho(\mathcal{O}))$.
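+
+To make this concrete, here is a minimal numerical sketch (our own, not from the paper): a single security with payoff $\rho(o) \in \{0, 1\}$, a quadratic conjugate $R(x) = x^2/2$ over the fully relaxed price space $\Pi = \mathbb{R}$, so that $C(q) = q^2/2$ and $\nabla C(q) = q$. Starting from a price outside the hull $[0, 1]$, selling the security back to the hull boundary locks in exactly the divergence bound:

```python
# Toy example (our own) of Theorem 6.3: one security, payoff in {0, 1},
# quadratic conjugate R(x) = x^2/2 over the relaxed price space Pi = R,
# giving C(q) = q^2/2 and instantaneous price grad C(q) = q.

def C(q):
    return q * q / 2.0

def D_R(x, x0):
    # Bregman divergence of R(x) = x^2/2: R(x) - R(x0) - R'(x0) * (x - x0)
    return (x - x0) ** 2 / 2.0

q0 = 1.2                        # current quantity => price 1.2, outside [0, 1]
payoffs = [0.0, 1.0]

r = -0.2                        # sell 0.2 shares, moving the price back to 1
worst_case_profit = min(o * r - C(q0 + r) + C(q0) for o in payoffs)

# Theorem 6.3's guarantee: min over the hull [0, 1] of D_R(x, x0)
bound = min(D_R(x, q0) for x in [0.0, 1.0])
assert worst_case_profit >= bound - 1e-9    # both equal 0.02 here
```

+Here the trade that moves the price exactly to the nearest hull point attains the bound with equality, matching the discussion above.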
+
+## 6.3. Pair Betting via Relaxation
+
+We return our attention to the scenario in which the outcome is a ranking of *n* competitors, as described in Section 3.1. Consider a complex market in which traders make arbitrary pair bets: for every *i*, *j*, a trader can purchase a security which pays out \$1
+---PAGE_BREAK---
+
+whenever $\pi(i) < \pi(j)$. As with subset bets, pricing pair bets using the LMSR is known to be #P-hard [Chen et al. 2008a].
+
+We can represent the payoff structure of any such outcome $\pi$ by a matrix $M_{\pi}$ defined by
+
+$$M_{\pi}(i,j) = \begin{cases} 1, & \text{if } \pi(i) < \pi(j) \\ \frac{1}{2}, & \text{if } i = j \\ 0, & \text{if } \pi(i) > \pi(j). \end{cases}$$
+
+We would like to choose our feasible price region as the set $\mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, where $S_n$ is the set of permutations on $[n]$. Unfortunately, the computation of this convex hull is necessarily hard: if given only a separation oracle for the set $\mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, we could construct a linear program to solve the “minimum feedback arc set” problem, which is known to be NP-hard [Karp 1972].
+
+On the positive side, we see from the previous section that the market maker can work in a larger feasible price space without risking a larger loss. We thus relax our feasible price region $\Pi$ to the set of $n \times n$ real-valued matrices $X \in \mathbb{R}^{n^2}$ satisfying the intuitive set of constraints described in Section 3.1:
+
+$$
+\begin{align*}
+X(i,j) &\ge 0 && \forall i,j \in [n] \\
+X(i,j) &= 1 - X(j,i) && \forall i,j \in [n] \\
+X(i,j) + X(j,k) + X(k,i) &\ge 1 && \forall i,j,k \in [n]
+\end{align*}
+$$
+
+This relaxation was first discussed by Megiddo [1977], who referred to such matrices as *generalized order matrices*. He proved that, for $n \le 4$, we do have $\Pi = \mathcal{H}(\{M_{\pi} : \pi \in S_n\})$, but gave a counterexample showing strict containment for $n = 13$. By using this relaxed price space, the market maker allows traders to bring the price vector outside of the convex hull, yet includes a set of basic (and natural) constraints on the prices. Such a market could be implemented with any strongly convex conjugate function (e.g., quadratic).
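+
+As a quick sanity check (our own illustration; competitors are 0-indexed and $\pi$ is encoded as a rank array), every payoff matrix $M_\pi$ satisfies the three relaxed constraints, and the relaxed region also contains non-vertex points such as the all-$\frac{1}{2}$ matrix:

```python
from itertools import permutations

def payoff_matrix(rank):
    """M_pi(i, j) = 1 if competitor i finishes ahead of j, 1/2 on the diagonal."""
    n = len(rank)
    return [[0.5 if i == j else (1.0 if rank[i] < rank[j] else 0.0)
             for j in range(n)] for i in range(n)]

def is_generalized_order_matrix(X, tol=1e-9):
    """Check the three constraints defining the relaxed price region Pi."""
    n = len(X)
    nonneg = all(X[i][j] >= -tol for i in range(n) for j in range(n))
    skew = all(abs(X[i][j] - (1.0 - X[j][i])) <= tol
               for i in range(n) for j in range(n))
    tri = all(X[i][j] + X[j][k] + X[k][i] >= 1.0 - tol
              for i in range(n) for j in range(n) for k in range(n))
    return nonneg and skew and tri

# Every M_pi lies in Pi ...
for pi in permutations(range(4)):
    assert is_generalized_order_matrix(payoff_matrix(pi))

# ... and so do interior points such as "everything tied in expectation".
half = [[0.5] * 4 for _ in range(4)]
assert is_generalized_order_matrix(half)
```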
+
+Notice that in this example, it is computationally hard in general for a trader to determine whether or not a particular price vector falls within the convex hull; if this were not the case, then we would be able to construct a separation oracle, and could price pair bets efficiently without the relaxation. Therefore, although arbitrage opportunities may be created, it is generally intractable for traders to find and exploit these opportunities.
+
+# 7. RELATION TO ONLINE LEARNING
+
+In this section, we use our framework to explore the striking mathematical connections that exist between automated market makers and the class of Follow the Regularized Leader algorithms for online learning. While the problem of learning in an online environment appears quite different semantically from the problem of pricing securities in a market, we show that the two frameworks have a strong syntactic correspondence. We begin with a brief overview of no-regret learning and the online linear optimization problem.
+
+## 7.1. Online Learning and Regret-Minimizing Algorithms
+
+Perhaps the most canonical example of online, no-regret learning is the problem of *learning from expert advice*. In the expert setting, we imagine an algorithm that must make a sequence of predictions based on the advice of a set of *N* experts and receive a
+---PAGE_BREAK---
+
+corresponding sequence of losses.¹¹ The goal of the algorithm is to achieve a cumulative loss that is “almost as low” as the cumulative loss of the best performing expert in hindsight. No statistical assumptions are made about these losses. Indeed, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.
+
+Formally, at every time step $t \in \{1, \dots, T\}$, every expert $i \in \{1, \dots, N\}$ receives a loss $\ell_{i,t} \in [0, 1]$. The cumulative loss of expert $i$ at time $T$ is then defined as $L_{i,T} = \sum_{t=1}^{T} \ell_{i,t}$. An algorithm $\mathcal{A}$ maintains a weight $w_{i,t}$ for each expert $i$ at time $t$, where $\sum_{i=1}^{N} w_{i,t} = 1$. These weights can be viewed as a distribution over the experts. The algorithm then receives its own instantaneous loss $\ell_{\mathcal{A},t} = \sum_{i=1}^{N} w_{i,t}\ell_{i,t}$, which can be interpreted as the expected loss the algorithm would receive if it always chose an expert to follow according to the current distribution. The cumulative loss of $\mathcal{A}$ up to time $T$ is defined in the natural way as $L_{\mathcal{A},T} = \sum_{t=1}^{T} \ell_{\mathcal{A},t} = \sum_{t=1}^{T} \sum_{i=1}^{N} w_{i,t}\ell_{i,t}$. Below we use the symbols $\ell_t$, $L_t$, and $w_t$ to refer to the vector of losses, the vector of cumulative losses, and the vector of weights, respectively, for each expert on round $t$.
+
+It is unreasonable to expect the algorithm to achieve a small cumulative loss if none of the experts perform well. For this reason, it is typical to measure the performance of an algorithm in terms of its *regret*, defined to be the difference between the cumulative loss of the algorithm and the loss of the best performing expert, that is,
+
+$$L_{\mathcal{A},T} - \min_{i \in \{1, \dots, N\}} L_{i,T}.$$
+
+An algorithm is said to have no regret if the average per time step regret approaches 0 as $T$ approaches infinity.
+
+The popular Randomized Weighted Majority (WM) algorithm [Littlestone and Warmuth 1994; Freund and Schapire 1997] is an example of a no-regret algorithm. Weighted Majority uses weights
+
+$$w_{i,t} = \frac{e^{-\eta L_{i,t-1}}}{\sum_{j=1}^{N} e^{-\eta L_{j,t-1}}}, \quad (14)$$
+
+where $\eta > 0$ is a tunable parameter known as the *learning rate*. It is well known that the regret of WM after $T$ trials can be bounded as
+
+$$L_{WM(\eta),T} - \min_{i \in \{1, \dots, N\}} L_{i,T} \leq \eta T + \frac{\log N}{\eta}.$$
+
+When $T$ is known in advance, setting $\eta = \sqrt{\log N/T}$ yields the standard $O(\sqrt{T \log N})$ regret bound.
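+
+The weights in Equation 14 and the bound above are easy to verify empirically. The following sketch (our own) runs Weighted Majority on random losses and checks the quoted regret bound:

```python
import math, random

def weighted_majority(losses, eta):
    """Run Randomized WM (Equation 14); return algorithm loss and best expert loss."""
    N = len(losses[0])
    L = [0.0] * N                                  # cumulative losses L_{i,t-1}
    alg_loss = 0.0
    for loss_t in losses:
        Z = sum(math.exp(-eta * Li) for Li in L)
        w = [math.exp(-eta * Li) / Z for Li in L]  # Equation 14
        alg_loss += sum(wi * li for wi, li in zip(w, loss_t))
        L = [Li + li for Li, li in zip(L, loss_t)]
    return alg_loss, min(L)

random.seed(0)
T, N = 2000, 10
losses = [[random.random() for _ in range(N)] for _ in range(T)]
eta = math.sqrt(math.log(N) / T)                   # the tuning discussed above

alg_loss, best = weighted_majority(losses, eta)
regret = alg_loss - best
bound = eta * T + math.log(N) / eta                # the bound quoted above
assert regret <= bound
```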
+
+It has been shown that the weights chosen by Weighted Majority are precisely those that minimize a combination of empirical loss and an entropy-based regularization term [Kivinen and Warmuth 1997; 1999; Helmbold and Warmuth 2009]. More specifically, the weight vector $w_t$ at time $t$ is precisely the solution to the following minimization problem:
+
+$$\min_{w \in \Delta_N} w \cdot L_{t-1} - \frac{1}{\eta} H(w)$$
+
+where $H$ is the entropy function, $H(w) := -\sum_{i=1}^{N} w_i \log w_i$. Indeed, Weighted Majority is an example of a broader class of algorithms collectively known as *Follow the Regularized Leader* (FTRL) algorithms [Shalev-Shwartz and Singer 2007; Hazan and Kale
+
+¹¹This framework could be formalized equally well in terms of rewards, but losses are more common in the literature.
+---PAGE_BREAK---
+
+2010; Hazan 2009]. The FTRL template can be applied to a wide class of learning problems that fall under a general framework commonly known as *online convex optimization* [Zinkevich 2003]. Other problems that fall into this framework include online linear pattern classification [Kivinen and Warmuth 1997], online Gaussian density estimation [Azoury and Warmuth 2001], and online portfolio selection [Cover 1991]. In Algorithm 1, we present a version of FTRL tailored to the *online linear optimization* problem, an extension of the expert setting in which weights $w_t$ are chosen from a fixed bounded convex action space $\mathcal{K} \subset \mathbb{R}^N$. Notice that the experts setting is just a special case of online linear optimization, where the set $\mathcal{K}$ is the $N$-simplex $\Delta_N$.
+
+**ALGORITHM 1:** Follow the Regularized Leader (FTRL)
+
+1: Input: convex compact decision set $\mathcal{K} \subset \mathbb{R}^N$
+2: Input: strictly convex differentiable regularization function $\mathcal{R}(\cdot)$ defined on $\mathcal{K}$
+3: Parameter: $\eta > 0$
+4: Initialize: $\mathbf{L}_0 = \langle 0, \dots, 0 \rangle$
+5: **for** $t = 1, \dots, T$ **do**
+6: The learner selects action $w_t \in \mathcal{K}$ according to:
+
+$$ \mathbf{w}_t := \underset{\mathbf{w} \in \mathcal{K}}{\operatorname{argmin}} \mathbf{L}_{t-1} \cdot \mathbf{w} + \frac{1}{\eta} \mathcal{R}(\mathbf{w}) \quad (15) $$
+
+7: Nature reveals $\ell_t$, learner suffers loss $\ell_t \cdot w_t$
+8: The learner updates $\mathbf{L}_t = \mathbf{L}_{t-1} + \ell_t$
+9: **end for**
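+
+With the entropic regularizer $\mathcal{R}(\mathbf{w}) = \sum_i w_i \log w_i$ on $\mathcal{K} = \Delta_N$, the optimization in Equation 15 has a closed-form softmax solution, recovering the Weighted Majority weights of Equation 14. A small check (our own) that the closed form really minimizes the objective:

```python
import math, random

def ftrl_entropic(L, eta):
    """Solve Eq. 15 with R(w) = sum_i w_i log(w_i) on the simplex.
    A Lagrange-multiplier calculation gives w_i proportional to exp(-eta * L_i)."""
    m = max(-eta * Li for Li in L)                 # stabilize the exponentials
    u = [math.exp(-eta * Li - m) for Li in L]
    Z = sum(u)
    return [ui / Z for ui in u]

def objective(w, L, eta):
    """The FTRL objective of Eq. 15: L . w + (1/eta) * R(w)."""
    return (sum(wi * Li for wi, Li in zip(w, L))
            + sum(wi * math.log(wi) for wi in w if wi > 0) / eta)

random.seed(0)
N, eta = 6, 0.7
L = [random.uniform(0, 5) for _ in range(N)]
w_star = ftrl_entropic(L, eta)

# The softmax point should (weakly) beat every other point of the simplex.
for _ in range(200):
    raw = [random.random() + 1e-9 for _ in range(N)]
    w = [r / sum(raw) for r in raw]
    assert objective(w_star, L, eta) <= objective(w, L, eta) + 1e-12
```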
+
+For a complete description of the FTRL algorithm, we refer the reader to the excellent notes of Rakhlin [2009]. We will make use of a result from these notes, but we first include two additional assumptions that we will use to make the connection to duality-based cost function market makers. In the remainder of this section, we use $\|\cdot\|$ to denote the L2 norm.
+
+**ASSUMPTION 1.** For each time step $t$, $\|\ell_t\| \le 1$.
+
+**ASSUMPTION 2.** The regularizer $\mathcal{R}(\cdot)$ has the Legendre property defined in Section 11.2 of Cesa-Bianchi and Lugosi [2006]: $\mathcal{R}$ is strictly convex on relint($\mathcal{K}$) and $\|\nabla \mathcal{R}(\mathbf{w})\| \to \infty$ as $\mathbf{w} \to \text{relbnd}(\mathcal{K})$.
+
+Under the latter assumption, the solution to Equation 15 will always occur in the relative interior of $\mathcal{K}$, which implies that the optimization is effectively unconstrained. We can now utilize Corollary 9 of Rakhlin [2009] to obtain the following.
+
+**PROPOSITION 7.1.** Under Assumptions 1 and 2, the FTRL algorithm enjoys the following regret bound: For any $\mathbf{w}^* \in \mathcal{K}$,
+
+$$ \sum_{t=1}^{T} \ell_t \cdot \mathbf{w}_t - \sum_{t=1}^{T} \ell_t \cdot \mathbf{w}^* \leq \frac{1}{\eta} \left( \mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1) - D_{\mathcal{R}}(\mathbf{w}^*, \mathbf{w}_{T+1}) + \sum_{t=1}^{T} D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \right). $$
+
+This proposition may not be so illuminating at first glance, but it expresses a fundamental tradeoff in the learning problem. If we choose a regularizer $\mathcal{R}$ with heavy curvature, or equivalently if we choose a small $\eta$, then given the nature of the optimization problem in Equation 15, we ensure that the updates $w_t \to w_{t+1}$ are “small” and hence $D_{\mathcal{R}}(w_t, w_{t+1})$ will be small. On the other hand, we pay for either of these choices since (a) the bound is proportional to $1/\eta$, and (b) the difference $\mathcal{R}(w^*) - \mathcal{R}(w_1)$ grows larger when $\mathcal{R}$ has more curvature.
+---PAGE_BREAK---
+
+Under certain reasonable assumptions on $\mathcal{R}$, it is possible to prove that $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le O(\eta^2)$. For example, if $\mathcal{R}$ is strongly convex (with respect to the L2 norm), then $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le \eta^2 \|\ell_t\|^2$. See Rakhlin [2009] for more details.
+
+COROLLARY 7.2. Suppose that there exists $B > 0$ such that for every $t$, $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1}) \le B\eta^2$, and that there exists $C > 0$ such that $\mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1) \le C$. Then $\text{Regret}(\text{FTRL}) \le C/\eta + \eta BT$. If $\eta = \sqrt{C/BT}$, then $\text{Regret}(\text{FTRL}) \le 2\sqrt{BCT}$.
+
+This final bound is quite powerful. It says that the regret of FTRL on any online linear optimization problem is always on the order of $\sqrt{T}$. The constant in front of this rate will depend on the total variation of the regularization function on $\mathcal{K}$ (that is, $\mathcal{R}(\mathbf{w}^*) - \mathcal{R}(\mathbf{w}_1)$) as well as the stability of the updates (that is, the terms $D_{\mathcal{R}}(\mathbf{w}_t, \mathbf{w}_{t+1})$).
+
+## 7.2. An Equivalence Between Online Learning and Market Making
+
+Having reviewed much of the literature on the design of online learning algorithms, we now pivot back to the primary topic at hand, the design of market makers for complex security spaces. We will see that the tools that have been developed for the online learning setting are strikingly similar to those we have constructed for selecting pricing mechanisms. This is rather surprising, as the problem of learning in an online environment is semantically quite distinct from the problem of pricing securities in a prediction market: a learning algorithm receives *losses* and selects *weights* whereas a market maker manages *trades* and sets *prices*. We now show how these two problems can be viewed as two sides of the same coin. The two frameworks have very different semantics yet, in a very strong sense, have nearly identical syntax.
+
+The relationship is described in full detail in Figure 1. We imagine that the learner uses the FTRL algorithm (Algorithm 1) to select weights, and the market uses the duality-based cost function market maker framework.
+
+What we emphasize in Figure 1 is that, by identifying the objects $\Pi$, $R(\cdot)$, and $\{r_t\}$ with the objects $\mathcal{K}$, $\mathcal{R}(\cdot)/\eta$, and $\{-\ell_t\}$, respectively, the mechanisms for choosing an instantaneous price vector $x_t \in \Pi$ and selecting a weight vector $w_t \in \mathcal{K}$ are identical. Put another way, if we consider security bundles $r_t$ as the negative loss vectors $\ell_t$, then the duality-based cost function market maker becomes exactly FTRL.
+
+The connection seems to break down when we arrive at the last pair of statements, as the FTRL regret and the market maker's worst-case loss do not appear to be identical. Strictly speaking this is true. However, these two quantities are not so far apart. Using the previous identification, we see that the term $\max_{x \in \Pi} x \cdot q_T$, representing the worst-case payout of the market maker, matches exactly the term $-\min_{w \in \mathcal{K}} w \cdot L_T$. Now let us do a first-order approximation on the negation of the first term, i.e., the market maker's earnings from selling securities. We have
+
+$$C(\mathbf{q}_T) - C(\mathbf{q}_0) = \sum_{t=1}^{T} C(\mathbf{q}_t) - C(\mathbf{q}_{t-1}) \approx \sum_{t=1}^{T} \nabla C(\mathbf{q}_{t-1}) \cdot (\mathbf{q}_t - \mathbf{q}_{t-1}) = \sum_{t=1}^{T} \mathbf{x}_t \cdot \mathbf{r}_t, \quad (16)$$
+
+where we used the fact that the instantaneous price vector $x_t$ is equal to $\nabla C(q_{t-1})$. This is not too surprising. Every trader will roughly pay the instantaneous prices $x_t$ for the securities times the quantities $r_t$ of each security sold. The total earned by the market maker $C(q_T) - C(q_0)$ is then roughly the sum of these payments over all trades.
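+
+This first-order accounting is easy to illustrate. The sketch below (our own, using the LMSR cost $C(\mathbf{q}) = b \log \sum_i e^{q_i/b}$) computes the true earnings, the first-order term $\sum_t \mathbf{x}_t \cdot \mathbf{r}_t$, and the per-trade correction $C(\mathbf{q}_t) - C(\mathbf{q}_{t-1}) - \mathbf{x}_t \cdot \mathbf{r}_t$, whose sum closes the gap exactly:

```python
import math, random

b = 1.0

def C(q):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def price(q):
    """Instantaneous prices grad C(q): the softmax of q / b."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    Z = sum(e)
    return [ei / Z for ei in e]

random.seed(1)
q = [0.0, 0.0, 0.0]
first_order, correction = 0.0, 0.0
for _ in range(50):
    r = [random.uniform(-0.5, 0.5) for _ in range(3)]       # trade r_t
    x = price(q)                                            # prices x_t
    q_new = [qi + ri for qi, ri in zip(q, r)]
    paid = sum(xi * ri for xi, ri in zip(x, r))             # x_t . r_t
    first_order += paid
    correction += C(q_new) - C(q) - paid                    # >= 0 by convexity
    q = q_new

# True earnings = first-order term + sum of corrections, exactly (telescoping).
assert abs((C(q) - C([0.0, 0.0, 0.0])) - (first_order + correction)) < 1e-9
assert correction >= 0.0
```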
+
+How bad is this approximation? We can quantify this explicitly, since the difference between $C(\mathbf{q}_t) - C(\mathbf{q}_{t-1})$ and $\nabla C(\mathbf{q}_{t-1}) \cdot (\mathbf{q}_t - \mathbf{q}_{t-1})$ is exactly the value $D_C(\mathbf{q}_t, \mathbf{q}_{t-1})$. If $\mathcal{R}$ has the Legendre property (described in Assumption 2) then via standard arguments [Rockafellar 1970] we can also conclude that $D_C(\mathbf{q}_t, \mathbf{q}_{t-1}) = D_{\mathcal{R}}(\mathbf{x}_t, \mathbf{x}_{t+1})$. Under this assumption, in other words, the worst-case loss of the market maker can be written as
+
+$$ \max_{\mathbf{x} \in \Pi} \mathbf{x} \cdot \mathbf{q}_T - \sum_{t=1}^{T} \mathbf{x}_t \cdot \mathbf{r}_t - \sum_{t=1}^{T} D_R(\mathbf{x}_t, \mathbf{x}_{t+1}). $$
---PAGE_BREAK---
+
+Fig. 1. The similarities between the duality-based cost function market maker framework and the Follow the Regularized Leader algorithm for online linear optimization.
+
+Putting everything together, this final bound is exactly what we should expect. Look again at Theorem 6.2 and Proposition 7.1. The bounds in these theorems are nearly identical under the translation matching $w^* \leftrightarrow \rho(o)$, $w_{T+1} \leftrightarrow \nabla C(\mathbf{q})$, and $R(x) \leftrightarrow \mathcal{R}(w)/\eta$, since by definition of FTRL, $w_1 = \arg\min_{w \in \mathcal{K}} \mathcal{R}(w)$. The key difference is that the sum of divergence terms seems to get “lost in translation” when we look at Theorem 6.2. The above equation tells us why this is.
+
+It is worth looking further into this key difference between the FTRL algorithm for online linear optimization and our proposed automated market maker. We could imagine a modified market maker with a different mechanism: after the $(t-1)$th trade the market maker posts the (instantaneous) price vector $\mathbf{x}_t$, a trader arrives to purchase bundle $\mathbf{r}_t$, and the trader pays exactly $\mathbf{x}_t \cdot \mathbf{r}_t$. Notice this is different from the original
+---PAGE_BREAK---
+
+framework, where the trader would pay $C(\mathbf{q} + \mathbf{r}_t) - C(\mathbf{q})$, although we observed in Equation 16 that these two values are not so far apart.
+
+Under the mapping outlined in Figure 1, algorithms for the expert setting ($\mathcal{K} = \Delta_N$) correspond to complete markets. Weighted Majority corresponds directly to the LMSR, with the learning rate $\eta$ playing a similar role to the LMSR parameter $b$. The similarity between the Weighted Majority weights (Equation 14) and the LMSR prices (Equation 2) has been observed and exploited in the past [Chen et al. 2008a]. The Quad-SCPM market [Agrawal et al. 2011] can be mapped to online gradient descent, which is known to be equivalent to FTRL with a quadratic regularizer [Hazan et al. 2007; Hazan 2009].
+
+## 8. RELATION TO MARKET SCORING RULES
+
+We have described ways in which our optimization-based framework can be used to derive novel, efficient automated market makers for markets in which the outcome space is very large. Our framework also provides new insights into the complete market setting. In this section, we describe how our framework can be used to establish a correspondence between cost function based markets and market scoring rules.
+
+Consider the special case of complete markets, and in particular, markets that offer *n* Arrow-Debreu securities for the *n* mutually exclusive and exhaustive outcomes. Our framework defines a set of market makers by equating the set of allowable prices $\Pi$ with the *n*-simplex. That is, a market maker for a complete market that satisfies conditions 1–5 in Section 3 can use a cost function of the form
+
+$$C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\Delta_n)} \mathbf{x} \cdot \mathbf{q} - R(\mathbf{x}), \quad (17)$$
+
+where $R(x)$ is strictly convex over $\Delta_n$. The market price $x(q) = \nabla C(q)$ is the optimal solution to the convex optimization. It is easy to check that when $R(x) = b \sum_{i=1}^n x_i \log x_i$, the negative entropy function, we have the LMSR market maker. The LMSR is a popular example of a large class of market makers, called *market scoring rules* (MSR). In this section, after reviewing the notion of a proper scoring rule and describing the class of MSRs, we use Equation 17 to establish a correspondence between MSRs and cost function based market makers for complete markets.
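+
+As a quick numerical confirmation (our own) that the negative entropy conjugate in Equation 17 yields the LMSR, note that the supremum is attained at the softmax prices $x_i = e^{q_i/b} / \sum_j e^{q_j/b}$, where the objective evaluates to $b \log \sum_i e^{q_i/b}$:

```python
import math, random

b = 2.0

def R(x):
    """Negative entropy conjugate: R(x) = b * sum_i x_i log(x_i)."""
    return b * sum(xi * math.log(xi) for xi in x if xi > 0)

def lmsr_cost(q):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def softmax(q):
    Z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / Z for qi in q]

random.seed(2)
q = [random.uniform(-3, 3) for _ in range(4)]
inner = lambda x: sum(xi * qi for xi, qi in zip(x, q)) - R(x)

x_star = softmax(q)
# The softmax prices attain the supremum in Equation 17 ...
assert abs(inner(x_star) - lmsr_cost(q)) < 1e-9
# ... and dominate every other point of the simplex.
for _ in range(200):
    raw = [random.random() + 1e-9 for _ in range(4)]
    x = [ri / sum(raw) for ri in raw]
    assert inner(x) <= lmsr_cost(q) + 1e-12
```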
+
+## 8.1. Proper Scoring Rules
+
+*Scoring rules* have long been used in the evaluation of probabilistic forecasts. In the context of information elicitation, scoring rules are used to encourage individuals to make careful assessments and truthfully report their beliefs [Savage 1971; Garthwaite et al. 2005; Lambert et al. 2008]. In the context of machine learning, scoring rules are used as loss functions to evaluate and compare the performance of different algorithms [Buja et al. 2005; Reid and Williamson 2009]. We briefly mention recent work of Abernethy and Frongillo [2011] who used a generalized notion of a scoring rule in order to construct a market mechanism for solving machine learning problems.
+
+Formally, let $\{1, \dots, n\}$ be a set of mutually exclusive and exhaustive outcomes of a future event. A scoring rule maps a probability distribution $p$ over outcomes to a score $s_i(p)$ for each outcome $i$, with $s_i(p)$ taking values in the range $[-\infty, \infty]$. Intuitively, this score represents the reward that a forecaster receives for predicting the distribution $p$ if the outcome turns out to be $i$. A scoring rule is said to be *regular* relative to the probability simplex $\Delta_n$ if $\sum_{i=1}^n p_i s_i(p') \in [-\infty, \infty)$ for all $p, p' \in \Delta_n$, with $\sum_{i=1}^n p_i s_i(p) \in (-\infty, \infty)$. This implies that $s_i(p) \in (-\infty, \infty)$ whenever $p_i > 0$ and that $s_i(p)$ may equal $-\infty$ only when $p_i = 0$. A scoring rule is said to be *proper* if a risk-neutral forecaster who believes the true distribution over outcomes to be $p$ has no incentive to report any alternate distribution $p'$, that is, if $\sum_{i=1}^n p_i s_i(p) \ge \sum_{i=1}^n p_i s_i(p')$ for all
+---PAGE_BREAK---
+
+distributions $p' \in \Delta_n$. The rule is *strictly proper* if this inequality holds with equality only when $p = p'$.
+
+Two examples of regular, strictly proper scoring rules commonly used in both information elicitation and machine learning are the quadratic scoring rule [Brier 1950]:
+
+$$s_i(\mathbf{p}) = a_i + b \left( 2p_i - \sum_{j=1}^{n} p_j^2 \right) \quad (18)$$
+
+and the logarithmic scoring rule [Good 1952]:
+
+$$s_i(\mathbf{p}) = a_i + b \log(p_i) \quad (19)$$
+
+where $b > 0$ and $a_1, \dots, a_n$ are parameters.
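+
+Properness of both rules is straightforward to verify numerically. The sketch below (our own, with $a_i = 0$ and $b = 1$) checks that the truthful report maximizes expected score under the forecaster's belief:

```python
import math, random

def quadratic_score(p, i):
    """Brier score (Equation 18 with a_i = 0, b = 1)."""
    return 2.0 * p[i] - sum(pj * pj for pj in p)

def log_score(p, i):
    """Logarithmic score (Equation 19 with a_i = 0, b = 1)."""
    return math.log(p[i])

def expected_score(report, belief, rule):
    """Expected score of reporting `report` when the true belief is `belief`."""
    return sum(belief[i] * rule(report, i) for i in range(len(belief)))

random.seed(3)
belief = [0.5, 0.3, 0.2]
for rule in (quadratic_score, log_score):
    truthful = expected_score(belief, belief, rule)
    for _ in range(300):             # truthful reporting beats any other report
        raw = [random.random() + 1e-6 for _ in range(3)]
        report = [r / sum(raw) for r in raw]
        assert truthful >= expected_score(report, belief, rule) - 1e-12
```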
+
+Proper scoring rules are closely related to convex functions. In fact, the following characterization theorem of Gneiting and Raftery [2007], which is credited to McCarthy [1956] and Savage [1971], gives the precise relationship between convex functions and proper scoring rules.
+
+**THEOREM 8.1 (GNEITING AND RAFTERY [2007]).** A regular scoring rule is (strictly) proper if and only if there exists a (strictly) convex function $G: \Delta_n \to \mathbb{R}$ such that for all $i \in \{1, \dots, n\}$,
+
+$$s_i(\mathbf{p}) = G(\mathbf{p}) - G'(\mathbf{p}) \cdot \mathbf{p} + G'_i(\mathbf{p}),$$
+
+where $G'(\mathbf{p})$ is any subgradient of $G$ at the point $\mathbf{p}$ and $G'_i(\mathbf{p})$ is the $i$-th element of $G'(\mathbf{p})$.
+
+Note that for a scoring rule defined in terms of a function $G$,
+
+$$\sum_{i=1}^{n} p_i s_i(\mathbf{p}) = \sum_{i=1}^{n} p_i (G(\mathbf{p}) - G'(\mathbf{p}) \cdot \mathbf{p} + G'_i(\mathbf{p})) = G(\mathbf{p}).$$
+
+Theorem 8.1 therefore indicates that a regular scoring rule is (strictly) proper if and only if its expected score function $G(\mathbf{p})$ is (strictly) convex on $\Delta_n$, and the vector with elements $s_i(\mathbf{p})$ is a subgradient of $G$ at the point $\mathbf{p}$. Hence, every bounded convex function $G$ over $\Delta_n$ induces a proper scoring rule.
+
+Define $S(\tilde{\mathbf{p}}, \mathbf{p}) = \sum_{i=1}^n p_i s_i(\tilde{\mathbf{p}})$ to be the expected score of a forecaster who has belief $\mathbf{p}$ but predicts $\tilde{\mathbf{p}}$. Then, $G(\mathbf{p}) = S(\mathbf{p}, \mathbf{p})$. If a scoring rule is regular and proper, $d(\tilde{\mathbf{p}}, \mathbf{p}) = S(\mathbf{p}, \mathbf{p}) - S(\tilde{\mathbf{p}}, \mathbf{p})$ is the associated divergence function that captures the expected loss in score if a forecaster predicts $\tilde{\mathbf{p}}$ rather than his true belief $\mathbf{p}$. It is known that if $G(\mathbf{p})$ is differentiable, the divergence function is the Bregman divergence for $G$, that is, $d(\tilde{\mathbf{p}}, \mathbf{p}) = D_G(\tilde{\mathbf{p}}, \mathbf{p})$. For a nice survey on uses, properties, and characterizations of proper scoring rules, see Gneiting and Raftery [2007].
+
+## 8.2. Market Scoring Rules
+
+Market scoring rules (MSR) were developed by Hanson [2003; 2007] as a method of using scoring rules to pool opinions from many different forecasters. Market scoring rules are sequentially shared scoring rules. Formally, the market maintains a current probability distribution $\mathbf{p}$. At any time, a trader can enter the market and change this distribution to an arbitrary distribution $\mathbf{p}'$ of her choice.¹² If the outcome turns out to be $i$, she receives a (possibly negative) payoff of $s_i(\mathbf{p}') - s_i(\mathbf{p})$. For example, in the MSR defined using the logarithmic scoring rule in Equation 19, a trader
+
+¹²In some market scoring rules, such as the LMSR, distributions that place a weight of 0 on any outcome are not allowed since a trader would have to pay an infinite amount of money if the outcome with reported probability 0 actually occurred.
+---PAGE_BREAK---
+
+who changes the distribution from p to p' receives a payoff of $b \log(p'_i/p_i)$. This market formulation is equivalent to the cost function based formulation of the LMSR (hence its name) in the sense that a trader who changes the market probabilities from p to p' in the MSR formulation receives the same payoff for every outcome i as a trader who changes the quantity vectors from any q to q' such that market prices satisfy $x(q) = p$ and $x(q') = p'$ in the cost function based formulation. Because they are built on proper scoring rules, market scoring rules preserve the incentive compatibility of proper scoring rules for *myopic* traders: a trader who believes the true distribution to be p and cares only about the payoff of her current action maximizes her expected payoff by changing the market's distribution to p.
+
+One advantage of the market scoring rule formulation is the ease of bounding the market maker's worst-case loss. Each trader in a market scoring rule is essentially responsible for paying the previous trader's score. Thus the market maker is responsible only for paying the score of the final trader. Let $p_0$ be the initial probability distribution of the market. The worst-case loss of the market maker is then
+
+$$ \max_{i \in \{1, \dots, n\}} \sup_{\mathbf{p} \in \Delta_n} (s_i(\mathbf{p}) - s_i(\mathbf{p}_0)). $$
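+
+For instance (our own calculation), under the logarithmic scoring rule $s_i(\mathbf{p}) = b \log p_i$ we have $\sup_{\mathbf{p}} s_i(\mathbf{p}) = 0$, so the worst-case loss is $\max_i (-b \log p_{0,i})$, which equals $b \log n$ when the initial distribution is uniform:

```python
import math

b, n = 1.0, 4
p0 = [1.0 / n] * n                 # uniform initial distribution

# sup over reports p of s_i(p) = b*log(p_i) is 0 (approached as p_i -> 1),
# so the worst-case loss reduces to max_i (0 - b*log(p0_i)).
worst_case_loss = max(0.0 - b * math.log(p0i) for p0i in p0)
assert abs(worst_case_loss - b * math.log(n)) < 1e-12
```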
+
+The LMSR market maker is not the only market that can be defined as either a market scoring rule or a cost function based market. The fact that there exists a correspondence between certain market scoring rules and certain cost function based markets was noted by Chen and Pennock [2007]. They pointed out that the MSR with scoring function $s$ and the cost function based market with cost function $C$ are equivalent if for all $\mathbf{q}$ and all outcomes $i$, $C(\mathbf{q}) = q_i - s_i(\mathbf{x}(\mathbf{q}))$. However, they provided no guarantees about the circumstances under which this condition can be satisfied, nor a general way to find the cost function given a market scoring rule; $\mathbf{x}(\mathbf{q})$ is the gradient of $C(\mathbf{q})$, so the condition defines a differential equation. Agrawal et al. [2011] also made use of the equivalence between markets when this strong condition holds. In the next section, we give precise and very general conditions under which an MSR is equivalent to a cost function based market, and provide a way to translate a market scoring rule into a cost function based market and vice versa.
+
+## 8.3. Equivalence between Market Scoring Rules and Cost Function Based Market Makers
+
+Recall that a convex cost function $C$ can be defined as $C(\mathbf{q}) = \sup_{\mathbf{x} \in \text{relint}(\Delta_n)} \sum_{i=1}^n x_i q_i - R(\mathbf{x})$ for a strictly convex function $R$, namely the convex conjugate of $C$. According to Theorem 8.1, there is a one-to-one and onto mapping between strictly convex and differentiable $R$ and strictly proper, regular scoring rules with differentiable scoring functions $s_i(\mathbf{x})$, where for every pair we have
+
+$$ R(\mathbf{x}) = \sum_{i=1}^{n} x_i s_i(\mathbf{x}), \quad (20) $$
+
+and
+
+$$ s_i(\mathbf{x}) = R(\mathbf{x}) - \sum_{j=1}^{n} \frac{\partial R(\mathbf{x})}{\partial x_j} x_j + \frac{\partial R(\mathbf{x})}{\partial x_i}. \quad (21) $$
+
+Theorem 8.2 below shows that the cost function based market using $R$ in (20) and the market scoring rule market using $s_i(\mathbf{x})$ in (21) are equivalent in terms of traders' profits, reachable price vectors, and the market maker's worst-case loss under some mild conditions.
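+
+For example (our own check), with the negative entropy conjugate $R(\mathbf{x}) = \sum_i x_i \log x_i$ (the $b = 1$ LMSR), $\partial R / \partial x_j = \log x_j + 1$, so Equation 21 collapses to $s_i(\mathbf{x}) = \log x_i$, the logarithmic scoring rule, and Equation 20 recovers $R$:

```python
import math, random

def R(x):
    """Negative entropy conjugate (b = 1)."""
    return sum(xi * math.log(xi) for xi in x)

def dR(x, j):
    """Partial derivative of R with respect to x_j."""
    return math.log(x[j]) + 1.0

def s(x, i):
    """Scoring function from Equation 21."""
    n = len(x)
    return R(x) - sum(dR(x, j) * x[j] for j in range(n)) + dR(x, i)

random.seed(4)
raw = [random.random() + 0.1 for _ in range(5)]
x = [r / sum(raw) for r in raw]

for i in range(5):                 # Equation 21 recovers the log scoring rule
    assert abs(s(x, i) - math.log(x[i])) < 1e-9
# Equation 20: R(x) = sum_i x_i s_i(x)
assert abs(R(x) - sum(x[i] * s(x, i) for i in range(5))) < 1e-9
```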
+---PAGE_BREAK---
+
+**THEOREM 8.2.** Given a strictly convex, continuous conjugate function $R$ and a strictly proper, regular scoring rule $\mathbf{s}$ with scoring functions $s_i$ satisfying the relationships in Equations 20 and 21, if both $R$ and $s_i$'s are differentiable everywhere in relint($\Delta_n$), the corresponding cost function based market and market scoring rule market are equivalent in the following three aspects:
+
+(a) A trade in the cost function based market bringing the quantity vector **q** to **q'** and the price vector $\mathbf{x}(\mathbf{q})$ to $\mathbf{x}(\mathbf{q}')$ gives the same profit as a trade in the MSR market bringing the market probability from $\mathbf{x}(\mathbf{q})$ to $\mathbf{x}(\mathbf{q}')$ for every outcome $i$ as long as $\mathbf{x}(\mathbf{q}), \mathbf{x}(\mathbf{q}') > 0$.
+
+(b) Given any probability vector $\mathbf{x}$ for which $s_i(\mathbf{x}) \in (-\infty, \infty) \forall i$ in the MSR market, there is always a quantity vector $\mathbf{q}$ such that $\nabla C(\mathbf{q}) = \mathbf{x}$ in the cost function based market.
+
+(c) If the initial probability vector $\mathbf{x}_0$ in the MSR market is equal to the initial price vector $\nabla C(0)$ in the cost function based market, and $\mathbf{x}_0 \in \text{relint}(\Delta_n)$, then both markets have the same worst-case loss for the market maker.
+
+PROOF. Because $R$ is continuous and defined on $\Delta_n$, $R$ is bounded on $\Delta_n$. According to Lemma 4.3, for any $\mathbf{q} \in \mathbb{R}^n$,
+
+$$ \mathbf{x}(\mathbf{q}) = \nabla C(\mathbf{q}) = \underset{\mathbf{x} \in \Delta_n}{\operatorname{argmax}} \left( \sum_{i=1}^{n} x_i q_i - R(\mathbf{x}) \right) \quad (22) $$
+
+in the cost function based market. Below, we prove each part in turn.
+
+Part (a). Due to Equation 22, if $\mathbf{x}(\mathbf{q}) > 0$, $\mathbf{x}(\mathbf{q})$ must be the optimal solution to the unconstrained optimization problem $\max_{\mathbf{x}} \sum_{i=1}^{n} x_i q_i - R(\mathbf{x}) - \lambda_{\mathbf{q}} (\sum_{i=1}^{n} x_i - 1)$ for some $\lambda_{\mathbf{q}}$. Since $R$ is differentiable in $\text{relint}(\Delta_n)$, this means that
+
+$$ q_i = \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_i(\mathbf{q})} + \lambda_{\mathbf{q}} \quad (23) $$
+
+for some $\lambda_{\mathbf{q}}$.
+
+Suppose in the cost function based market a trader changes the outstanding shares from **q** to **q'** and this trade changes the market price from **x**(**q**) > 0 to **x**(**q')** > 0. If outcome *i* occurs, the trader's profit is
+
+$$
+\begin{align*}
+& (q'_i - q_i) - (C(\mathbf{q}') - C(\mathbf{q})) \\
+&= (q'_i - q_i) - \left( \sum_{j=1}^{n} x_j(\mathbf{q}') q'_j - R(\mathbf{x}(\mathbf{q}')) \right) + \left( \sum_{j=1}^{n} x_j(\mathbf{q}) q_j - R(\mathbf{x}(\mathbf{q})) \right) \\
+&= \left( q'_i - \sum_{j=1}^{n} x_j(\mathbf{q}') q'_j + R(\mathbf{x}(\mathbf{q}')) \right) - \left( q_i - \sum_{j=1}^{n} x_j(\mathbf{q}) q_j + R(\mathbf{x}(\mathbf{q})) \right) \\
+&= \left( \frac{\partial R(\mathbf{x}(\mathbf{q}'))}{\partial x_i(\mathbf{q}')} - \sum_{j=1}^{n} x_j(\mathbf{q}') \frac{\partial R(\mathbf{x}(\mathbf{q}'))}{\partial x_j(\mathbf{q}')} + R(\mathbf{x}(\mathbf{q}')) \right) \\
+&\quad - \left( \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_i(\mathbf{q})} - \sum_{j=1}^{n} x_j(\mathbf{q}) \frac{\partial R(\mathbf{x}(\mathbf{q}))}{\partial x_j(\mathbf{q})} + R(\mathbf{x}(\mathbf{q})) \right) \\
+&= s_i(\mathbf{x}(\mathbf{q}')) - s_i(\mathbf{x}(\mathbf{q})).
+\end{align*}
+$$
+
+The first equality follows since $\mathbf{x}(\mathbf{q})$ is the solution to $\max_{\mathbf{x}\in\Delta_n} (\sum_{i=1}^n x_i q_i - R(\mathbf{x}))$ and the third equality follows from Equation 23. Since $s_i(\mathbf{x}(\mathbf{q}')) - s_i(\mathbf{x}(\mathbf{q}))$ is the profit of
+---PAGE_BREAK---
+
+a trader who changes the market probability from $\mathbf{x}(\mathbf{q})$ to $\mathbf{x}(\mathbf{q}')$ in the MSR market when outcome $i$ occurs, this completes the proof of part (a).
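
As a concrete sanity check of part (a) (an illustration, not part of the proof), the profit identity can be verified numerically for the LMSR, where $R(\mathbf{x}) = b\sum_i x_i \ln x_i$, $s_i(\mathbf{x}) = b \ln x_i$, and $C(\mathbf{q}) = b \ln \sum_i e^{q_i/b}$; the share vectors below are arbitrary choices:

```python
import math

b = 1.0  # LMSR liquidity parameter

def C(q):
    # LMSR cost function C(q) = b * ln(sum_i e^{q_i / b})
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q):
    # instantaneous prices x(q) = grad C(q) (the softmax of q / b)
    Z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / Z for qi in q]

def s(x):
    # logarithmic scoring rule s_i(x) = b * ln(x_i)
    return [b * math.log(xi) for xi in x]

q, q2 = [1.0, 0.0, -0.5], [1.5, 0.2, -0.5]  # arbitrary before/after share vectors
i = 0                                       # realized outcome

profit_cost = (q2[i] - q[i]) - (C(q2) - C(q))  # cost function market profit
profit_msr = s(price(q2))[i] - s(price(q))[i]  # MSR profit for the same price move
print(abs(profit_cost - profit_msr) < 1e-9)    # True
```

For the LMSR the identity is transparent: $s_i(\mathbf{x}(\mathbf{q})) = q_i - C(\mathbf{q})$, so the two profit expressions agree term by term.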
+
+*Part (b).* In the MSR market, only probability vectors in the set $Y = \{\mathbf{x} \in \Delta_n : s_i(\mathbf{x}) \in (-\infty, \infty)\ \forall i\}$ can possibly be reported by a trader with finite wealth. Since the scoring rule $s$ is regular, $s_i(\mathbf{x}) \in [-\infty, \infty)$ and it can equal $-\infty$ only when $x_i$ is 0. However, any $\mathbf{x}$ that sets $s_i(\mathbf{x}) = -\infty$ for some $i$ is not allowed, as it would require the trader to pay an infinite amount of money when outcome $i$ actually occurs.
+
+We show that in the cost function based market it is possible to achieve any price vector $x \in Y$ by setting $q_i = s_i(\mathbf{x})$ for all $i$. By strict properness of the scoring rule $s$, we know that $\sum_{i=1}^n x'_i s_i(\mathbf{x}) - \sum_{i=1}^n x'_i s_i(\mathbf{x}') \le 0$ for any $\mathbf{x}$ and $\mathbf{x}'$ and the equality holds only when $\mathbf{x}' = \mathbf{x}$. For any vector $\mathbf{x} \in Y$, $s(\mathbf{x}) \in \mathbb{R}^n$. By Equation 22, we have $\nabla C(s(\mathbf{x})) = \operatorname{argmax}_{\mathbf{x}' \in \Delta_n} \sum_{i=1}^n x_i' s_i(\mathbf{x}) - \sum_{i=1}^n x_i' s_i(\mathbf{x}') = \mathbf{x}$. Hence, the price vector in the cost function based market is exactly $\mathbf{x}$.
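
This construction can be illustrated numerically for the LMSR (with $b = 1$, a hypothetical choice): setting $q_i = s_i(\mathbf{x}) = \ln x_i$ and evaluating $\nabla C$ recovers $\mathbf{x}$:

```python
import math

def s(x):
    # log scoring rule s_i(x) = ln(x_i)
    return [math.log(xi) for xi in x]

def grad_C(q):
    # price vector of the log-sum-exp cost: (grad C(q))_i = e^{q_i} / sum_j e^{q_j}
    exps = [math.exp(qi) for qi in q]
    Z = sum(exps)
    return [ei / Z for ei in exps]

x = [0.2, 0.3, 0.5]       # any price vector in Y (here, in relint of the simplex)
x_back = grad_C(s(x))     # Equation 22 applied at q = s(x)
print(x_back)             # recovers [0.2, 0.3, 0.5] up to rounding
```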
+
+*Part (c).* We know that $C(0) = \max_{\mathbf{x} \in \Delta_n} -R(\mathbf{x})$. If $\mathbf{x}_0 \in \text{relint}(\Delta_n)$, we have $\mathbf{x}_0 = \nabla C(0) = \text{argmin}_{\mathbf{x} \in \Delta_n} R(\mathbf{x})$ and $\mathbf{x}_0$ must satisfy
+
+$$ \nabla R(\mathbf{x}_0) = 0. \tag{24} $$
+
+Combining Equation 24 with Equation 21, we have
+
+$$ s_i(\mathbf{x}_0) = R(\mathbf{x}_0). \tag{25} $$
+
+The worst-case loss of the cost function based market maker is
+
+$$
+\begin{align}
+\sup_{\mathbf{x} \in \rho(O)} R(\mathbf{x}) - \min_{\mathbf{x} \in \mathcal{H}(\rho(O))} R(\mathbf{x}) &= \sup_{\mathbf{x} \in \rho(O)} R(\mathbf{x}) - R(\mathbf{x}_0) \\
+&= \sup_{\mathbf{x} \in \rho(O)} \sum_i x_i s_i(\mathbf{x}) - R(\mathbf{x}_0) \\
+&= \max_{i \in \{1, \dots, n\}} s_i(\mathbf{e}^i) - R(\mathbf{x}_0) \tag{26}
+\end{align}
+$$
+
+where $\mathbf{e}^i$ is the $n$-dimensional vector that has 1 for its $i$-th element and 0 everywhere else. The second equality is due to Equation 20. The third equality is because $\rho(O) = \{\mathbf{e}^1, \dots, \mathbf{e}^n\}$ for the complete market we consider.
+
+The worst-case loss of the MSR market maker with the scoring functions $s_i(\mathbf{x})$ is
+
+$$
+\begin{align}
+\max_{i \in \{1, \dots, n\}} \sup_{\mathbf{x} \in \Delta_n} (s_i(\mathbf{x}) - s_i(\mathbf{x}_0)) &= \max_{i \in \{1, \dots, n\}} \sup_{\mathbf{x} \in \Delta_n} s_i(\mathbf{x}) - R(\mathbf{x}_0) \\
+&= \max_{i \in \{1, \dots, n\}} s_i(\mathbf{e}^i) - R(\mathbf{x}_0). \tag{27}
+\end{align}
+$$
+
+The first equality is due to Equation 25. The second equality holds because, for a strictly proper scoring rule $s$,
+
+$$ s_i(\mathbf{e}^i) = \sum_{j=1}^{n} e_j^i s_j(\mathbf{e}^i) \geq \sum_{j=1}^{n} e_j^i s_j(\mathbf{x}) = s_i(\mathbf{x}) $$
+
+for all $\mathbf{x} \in \Delta_n$.
+
+Equations 26 and 27 are identical. Hence, the worst-case loss of the market maker is the same in these two markets. $\square$
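
For the LMSR the common worst-case loss in Equations 26 and 27 can be computed directly (a numerical illustration with $b = 1$ and $n = 4$, assumed here): $s_i(\mathbf{e}^i) = \ln 1 = 0$ and $R(\mathbf{x}_0) = -\ln n$ at the uniform initial price vector, giving the familiar bound $\ln n$:

```python
import math

n = 4

def R(x):
    # negative entropy: the conjugate of the log-sum-exp cost function
    return sum(xi * math.log(xi) for xi in x if xi > 0)

def s(x):
    # log scoring rule; -inf on the boundary, as for any regular scoring rule
    return [math.log(xi) if xi > 0 else float('-inf') for xi in x]

x0 = [1.0 / n] * n   # uniform vector: the argmin of R over the simplex
basis = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]

# worst-case loss from Equations 26 and 27: max_i s_i(e^i) - R(x0)
loss = max(s(basis[i])[i] for i in range(n)) - R(x0)
print(loss)  # ln(4) ~ 1.3863
```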
+
+Theorem 8.2 shows that a trader's profit for moving the prices from $x$ to $x'$ can be different in these two markets only when $x$ or $x'$ (or both) lie on the relative boundary of $\Delta_n$, and the worst-case loss of the market maker can be different in these two markets only when the initial market price vector lies on the relative boundary of
+---PAGE_BREAK---
+
+$\Delta_n$. The reachable price vectors, however, are always the same. The LMSR market is an example where both the initial market price vector and market prices at any subsequent time are in relint($\Delta_n$). The MSR market using a quadratic scoring rule is an example where the initial market price vector is in relint($\Delta_n$) but future market prices can reach the relative boundary of $\Delta_n$. Its corresponding cost function based market maker is equivalent to the Quad-SCPM market introduced by Agrawal et al. [2011].
+
+## 9. CONCLUSION
+
+We conclude by mentioning one promising direction for future work. As we discussed, there is an inherent tradeoff between the bid-ask spread and the worst-case loss of the market maker. But if the market maker chooses to sell securities with an additional *transaction cost* for each security sold, then this money can not only help to cover the worst-case loss, but can also lead to a profit. Furthermore, if a market becomes popular, the market maker may wish to increase the market depth. This idea has been explored by Othman et al. [2010] for the case of complete markets: they introduce a *liquidity sensitive* market maker and provide a new model with profit guarantees. Othman and Sandholm [2011] recently extended this work and characterized a family of market makers that are liquidity sensitive. Via our framework, we can define an alternative method for simultaneously including transaction costs and guaranteeing profit. In particular, this is achieved through relaxing the price space, as discussed in Section 6.2. We leave the details to future work.
+
+## REFERENCES
+
+ABERNETHY, J., CHEN, Y., AND VAUGHAN, J. W. 2011. An optimization-based framework for automated market-making. In *Proceedings of the 12th ACM Conference on Electronic Commerce*. 297–306.
+
+ABERNETHY, J. AND FRONGILLO, R. M. 2011. A collaborative mechanism for crowdsourcing prediction problems. In *Advances in Neural Information Processing Systems*.
+
+AGRAWAL, S., DELAGE, E., PETERS, M., WANG, Z., AND YE, Y. 2011. A unified framework for dynamic prediction market design. *Operations Research* 59, 3, 550–568.
+
+AGRAWAL, S., WANG, Z., AND YE, Y. 2008. Parimutuel betting on permutations. In *Proceedings of the 4th International Workshop On Internet And Network Economics*. 126–137.
+
+ARROW, K. J. 1964. The role of securities in the optimal allocation of risk-bearing. *Review of Economic Studies* 31, 2, 91–96.
+
+ARROW, K. J. 1970. *Essays in the Theory of Risk Bearing*. North Holland, Amsterdam.
+
+AZOURY, K. S. AND WARMUTH, M. K. 2001. Relative loss bounds for on-line density estimation with the exponential family of distributions. *Machine Learning* 43, 3, 211–246.
+
+BERG, J. E., FORSYTHE, R., NELSON, F. D., AND RIETZ, T. A. 2001. Results from a dozen years of election futures markets research. In *Handbook of Experimental Economic Results*, C. A. Plott and V. Smith, Eds.
+
+BOYD, S. AND VANDENBERGHE, L. 2004. *Convex Optimization*. Cambridge University Press.
+
+BRAHMA, A., DAS, S., AND MAGDON-ISMAIL, M. 2010. Comparing prediction market structures, with an application to market making. Working paper.
+
+BRIER, G. 1950. Verification of forecasts expressed in terms of probability. *Monthly Weather Review* 78, 1, 1–3.
+
+BUJA, A., STUETZLE, W., AND SHEN, Y. 2005. Loss functions for binary class probability estimation and classification: Structure and applications. Working draft.
+
+CESA-BIANCHI, N. AND LUGOSI, G. 2006. *Prediction, Learning, and Games*. Cambridge University Press.
+
+CHEN, Y., FORTNOW, L., LAMBERT, N., PENNOCK, D. M., AND WORTMAN, J. 2008a. Complexity of combinatorial market makers. In *Proceedings of the 9th ACM Conference on Electronic Commerce*. 190–199.
+
+CHEN, Y., FORTNOW, L., NIKOLOVA, E., AND PENNOCK, D. M. 2007a. Betting on permutations. In *Proceedings of the 8th ACM Conference on Electronic Commerce*. ACM, 326–335.
+---PAGE_BREAK---
+
+CHEN, Y., GOEL, S., AND PENNOCK, D. M. 2008b. Pricing combinatorial markets for tournaments. In *Proceedings of the 40th ACM Symposium on Theory of Computing*.
+
+CHEN, Y. AND PENNOCK, D. M. 2007. A utility framework for bounded-loss market makers. In *Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence*. 49–56.
+
+CHEN, Y. AND VAUGHAN, J. W. 2010. A new understanding of prediction markets via no-regret learning. In *Proceedings of the 11th ACM Conference on Electronic Commerce*. 189–198.
+
+COVER, T. 1991. Universal portfolios. *Mathematical Finance* **1**, 1–29.
+
+DAS, S. AND MAGDON-ISMAIL, M. 2008. Adapting to a market shock: Optimal sequential market-making. In *Proceedings of the 21st Annual Conference on Neural Information Processing Systems*. 361–368.
+
+FORTNOW, L., KILIAN, J., PENNOCK, D. M., AND WELLMAN, M. P. 2004. Betting boolean-style: A framework for trading in securities based on logical formulas. *Decision Support Systems* **39**, 1, 87–104.
+
+FREUND, Y. AND SCHAPIRE, R. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of Comp. and System Sciences* **55**, 1, 119–139.
+
+GAO, X., CHEN, Y., AND PENNOCK, D. M. 2009. Betting on the real line. In *Proceedings of the 5th Workshop on Internet and Network Economics*. 553–560.
+
+GARTHWAITE, P. H., KADANE, J. B., AND O'HAGAN, A. 2005. Statistical methods for eliciting probability distributions. *Journal of the American Statistical Association* **100**, 680–701.
+
+GHODSI, M., MAHINI, H., MIRROKNI, V. S., AND ZADIMOGHADDAM, M. 2008. Permutation betting markets: Singleton betting with extra information. In *Proceedings of the 9th ACM conference on Electronic commerce*. 180–189.
+
+GNEITING, T. AND RAFTERY, A. 2007. Strictly proper scoring rules, prediction, and estimation. *Journal of the American Statistical Association* **102**, 477, 359–378.
+
+GOOD, I. J. 1952. Rational decisions. *Journal of the Royal Statistical Society, Series B (Methodological)* **14**, 1, 107–114.
+
+GORNI, G. 1991. Conjugation and second-order properties of convex functions. *Journal of Mathematical Analysis and Applications* **158**, 2, 293–315.
+
+GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1981. The ellipsoid method and its consequences in combinatorial optimization. *Combinatorica* **1**, 2, 169–197.
+
+GUO, M. AND PENNOCK, D. M. 2009. Combinatorial prediction markets for event hierarchies. In *Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems*. 201–208.
+
+HANSON, R. 2003. Combinatorial information market design. *Information Systems Frontiers* **5**, 1, 105–119.
+
+HANSON, R. 2007. Logarithmic market scoring rules for modular combinatorial information aggregation. *Journal of Prediction Markets* **1**, 1, 3–15.
+
+HAZAN, E. 2009. A survey: The convex optimization approach to regret minimization. Draft.
+
+HAZAN, E., AGARWAL, A., AND KALE, S. 2007. Logarithmic regret algorithms for online convex optimization. *Machine Learning* **69**, 2–3, 169–192.
+
+HAZAN, E. AND KALE, S. 2010. Extracting certainty from uncertainty: regret bounded by variation in costs. *Machine Learning* **80**, 165–188.
+
+HELMBOLD, D. AND WARMUTH, M. 2009. Learning permutations with exponential weights. *JMLR* **10**, 1705–1736.
+
+HIRIART-URRUTY, J.-B. AND LEMARÉCHAL, C. 2001. *Fundamentals of Convex Analysis*. Springer.
+
+KARP, R. 1972. Reducibility among combinatorial problems. In *Complexity of Computer Computations (Symposium Proceedings)*. Plenum Press, 85–103.
+
+KIVINEN, J. AND WARMUTH, M. 1997. Exponentiated gradient versus gradient descent for linear predictors. *Journal of Information and Computation* **132**, 1, 1–63.
+
+KIVINEN, J. AND WARMUTH, M. K. 1999. Averaging expert predictions. In *Computational Learning Theory: 4th European Conference (EuroCOLT '99)*. Springer, 153–167.
+
+LAMBERT, N., PENNOCK, D. M., AND SHOHAM, Y. 2008. Eliciting properties of probability distributions. In *Proceedings of the 9th ACM Conference on Electronic Commerce*.
+
+LEDYARD, J., HANSON, R., AND ISHIKIDA, T. 2009. An experimental test of combinatorial information markets. *Journal of Economic Behavior and Organization* **69**, 182–189.
+
+LITTLESTONE, N. AND WARMUTH, M. 1994. The weighted majority algorithm. *Info. and Computation* **108**, 2, 212–261.
+
+MANGOLD, B., DOOLEY, M., DORNFEST, R., FLAKE, G. W., HOFFMAN, H., KASTURI, T., AND PENNOCK, D. M. 2005. The tech buzz game. *IEEE Computer* **38**, 7, 94–97.
+
+MAS-COLELL, A., WHINSTON, M. D., AND GREEN, J. R. 1995. *Microeconomic Theory*. Oxford University Press, New York, NY.
+
+ACM Transactions on Economics and Computation, Vol. 1, No. 1, Article X, Publication date: 2012.
+---PAGE_BREAK---
+
+MCCARTHY, J. 1956. Measures of the value of information. *PNAS* **42**, 654–655.
+
+MEGIDDO, N. 1977. Mixtures of order matrices and generalized order matrices. *Discrete Mathematics* **19**, 2, 177–181.
+
+OTHMAN, A. AND SANDHOLM, T. 2011. Homogeneous risk measures and liquidity-sensitive automated market makers. In *Proceedings of the 7th Workshop on Internet and Network Economics*. 314–325.
+
+OTHMAN, A., SANDHOLM, T., PENNOCK, D. M., AND REEVES, D. M. 2010. A practical liquidity-sensitive automated market maker. In *Proceedings of the 11th ACM Conference on Electronic Commerce*. 377–386.
+
+PENNOCK, D. M. 2004. A dynamic pari-mutuel market for hedging, wagering, and information aggregation. In *Proceedings of the Fifth ACM Conference on Electronic Commerce (EC'04)*.
+
+PENNOCK, D. M. AND SAMI, R. 2007. Computational aspects of prediction markets. In *Algorithmic Game Theory*, N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, Eds. Cambridge University Press.
+
+PENNOCK, D. M. AND XIA, L. 2011. Price updating in combinatorial prediction markets with bayesian networks. In *Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence*. 581–588.
+
+PETERS, M., SO, A. M.-C., AND YE, Y. 2007. Pari-mutuel markets: Mechanisms and performance. In *Proceedings of the 3rd International Workshop on Internet and Network Economics*. 82–95.
+
+RAKHLIN, A. 2009. Lecture notes on online learning. Draft.
+
+REID, M. D. AND WILLIAMSON, R. C. 2009. Surrogate regret bounds for proper losses. In *ICML*.
+
+ROCKAFELLAR, R. T. 1970. *Convex analysis*. Princeton Univ Press.
+
+SAVAGE, L. J. 1971. Elicitation of personal probabilities and expectations. *Journal of the American Statistical Association* **66**, 336, 783–801.
+
+SHALEV-SHWARTZ, S. AND SINGER, Y. 2007. A primal-dual perspective of online learning algorithms. *Machine Learning* **69**, 2–3, 115–142.
+
+SION, M. 1958. On general minimax theorems. *Pacific Journal of Mathematics* **8**, 1, 171–176.
+
+WOLFERS, J. AND ZITZEWITZ, E. 2004. Prediction markets. *Journal of Economic Perspective* **18**, 2, 107–126.
+
+XIA, L. AND PENNOCK, D. M. 2011. An efficient monte-carlo algorithm for pricing combinatorial prediction markets for tournaments. In *Proceedings of the International Joint Conferences on Artificial Intelligence*. 305–314.
+
+ZINKEVICH, M. 2003. Online convex programming and generalized infinitesimal gradient ascent. In *ICML*.
+
+## A. CONVEX ANALYSIS RESULTS AND PROOF OF THEOREM 4.2
+
+Towards proving Theorem 4.2, we provide another definition and a couple of results from Rockafellar [1970].
+
+*Definition A.1 (Rockafellar [1970], Section 7).* A convex function $f$ is said to be proper if $f(x) > -\infty$ for all $x$ and $f(x) < +\infty$ for some $x$. Also, $f : \mathbb{R}^K \to [-\infty, \infty]$ is said to be closed when the epigraph of $f$ is a closed set, or equivalently, the set $\{x : f(x) \le \alpha\}$ is closed for all $\alpha \in \mathbb{R}$.
+
+**THEOREM A.2 (ROCKAFELLAR [1970], THEOREM 12.2 AND COROLLARY 12.2.2).**
+For any closed convex function $f : \mathbb{R}^K \to [-\infty, \infty]$, the conjugate $f^*$ is also closed and convex, and $f^{**} = f$. Furthermore, we can write
+
+$$f^*(y) = \sup_{x \in \text{relint}(\text{dom}(f))} y \cdot x - f(x).$$
+
+The preceding theorem tells us that the convex conjugate, which is usually defined in terms of a $\sup$ over all of $\mathbb{R}^K$, can also be written as a $\sup$ over just the relative interior of the domain of the function. This is useful for our duality-based framework, as we want to optimize only inside of the convex hull of the payoff vectors.
+
+**THEOREM A.3 (ROCKAFELLAR [1970], THEOREM 26.3).** Given a proper closed convex function $f : \mathbb{R}^K \to [-\infty, \infty]$, $f$ is finite and differentiable everywhere on $\mathbb{R}^K$ if and only if its conjugate $f^*$ is strictly convex on $\text{dom}(f^*)$.
+---PAGE_BREAK---
+
+PROOF OF THEOREM 4.2. We begin with the first part of the Theorem, showing that for any $C: \mathbb{R}^K \to \mathbb{R}$ satisfying Conditions 2-5, there exists a function $R$ such that Equation 5 is true for any $\mathbf{q} \in \mathbb{R}^K$.
+
+Let $C: \mathbb{R}^K \to \mathbb{R}$ be some cost function satisfying Conditions 2-5. Theorem 3.2 implies that closure($\{\nabla C(\mathbf{q}): \mathbf{q} \in \mathbb{R}^K\}$) = $\mathcal{H}(\rho(\mathcal{O}))$. It follows also that
+
+$$ \mathrm{relint}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O}))). \quad (28) $$
+
+Let us now consider the convex conjugate of $C$,
+
+$$ C^*(\mathbf{x}) := \sup_{\mathbf{q} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{q} - C(\mathbf{q}). \quad (29) $$
+
+Recall that we use the notation $\mathrm{dom}(f)$ to refer to the domain of a function $f$, i.e., where it is defined and finite valued. We can show that
+
+$$ \{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathrm{dom}(C^*) \subseteq \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}). \quad (30) $$
+
+For the first containment, it is clear that if we set $\mathbf{x} = \nabla C(\mathbf{q}')$ for any $\mathbf{q}' \in \mathbb{R}^K$ then the supremum in (29) is achieved at $\mathbf{q} = \mathbf{q}'$ and hence $C^*(\nabla C(\mathbf{q}')) = \mathbf{q}' \cdot \nabla C(\mathbf{q}') - C(\mathbf{q}')$. Since $C^*$ is defined on $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$, we have $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\} \subseteq \mathrm{dom}(C^*)$. For the second containment, take some $\mathbf{x} \notin \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$ and consider the derivative of the objective function in (29) with respect to $\mathbf{q}$, which is $\mathbf{x} - \nabla C(\mathbf{q})$. By construction, this derivative has norm larger than some $\epsilon > 0$ for every $\mathbf{q}$, and hence the objective increases without bound. Since any $\mathbf{x}$ that does not belong to $\mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$ cannot be in $\mathrm{dom}(C^*)$, we establish $\mathrm{dom}(C^*) \subseteq \mathrm{closure}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\})$.
+
+We now show that the choice of $R := C^*$ is strictly convex and satisfies (5). Indeed, strict convexity follows trivially from Theorem A.3. We establish (5) by observing that
+
+$$ C(\mathbf{q}) = C^{**}(\mathbf{q}) = \sup_{\mathbf{x} \in \mathbb{R}^K} \mathbf{x} \cdot \mathbf{q} - C^*(\mathbf{x}) = \sup_{\mathbf{x} \in \mathrm{relint}(\mathrm{dom}(C^*))} \mathbf{x} \cdot \mathbf{q} - C^*(\mathbf{x}), $$
+
+where the last equality follows because of Theorem A.2. According to (28) and (30), we also have $\mathrm{relint}(\mathrm{dom}(C^*)) = \mathrm{relint}(\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O})))$ as desired.
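
The biconjugation step can be checked numerically on a simple instance (a sketch, assuming a two-outcome log-sum-exp cost as $C$): with $C^*$ the negative entropy on the simplex, a grid search for $\sup_{\mathbf{x}} \mathbf{x}\cdot\mathbf{q} - C^*(\mathbf{x})$ over $\mathrm{relint}(\Delta_2)$ recovers $C(\mathbf{q})$:

```python
import math

def C(q):
    # two-outcome log-sum-exp cost function
    return math.log(math.exp(q[0]) + math.exp(q[1]))

def C_star(x):
    # its convex conjugate: negative entropy, finite on the simplex
    return sum(xi * math.log(xi) for xi in x if xi > 0)

def C_biconj(q, steps=100000):
    # sup over relint(Delta_2) of x.q - C*(x), approximated on a fine grid
    best = -float('inf')
    for k in range(1, steps):
        p = k / steps
        val = p * q[0] + (1 - p) * q[1] - C_star((p, 1 - p))
        best = max(best, val)
    return best

q = (0.7, -0.3)
print(C(q), C_biconj(q))  # the two values agree to high precision
```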
+
+We now prove the other direction. Take any strictly convex $R$ defined on $\mathrm{relint}(\mathcal{H}(\rho(\mathcal{O})))$ and let $C(\mathbf{q}) := R^*(\mathbf{q}) = \sup_{\mathbf{x} \in \mathrm{dom}(R)} \mathbf{q} \cdot \mathbf{x} - R(\mathbf{x})$. To establish Conditions 2-5, Theorem 3.2 tells us that it is sufficient to establish three facts: (a) $C$ is defined on all of $\mathbb{R}^K$, (b) $C$ is everywhere differentiable, and (c) $C$ has the property that closure($\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$) = $\mathcal{H}(\rho(\mathcal{O}))$. It is easy to establish (a), since for any $\mathbf{q} \in \mathbb{R}^K$, $C(\mathbf{q})$ is defined as a supremum of a concave function on a bounded domain, which always exists. For (b), Theorem A.3 gives us that $R$ being strictly convex implies that $C$ is everywhere differentiable. To prove (c), we note that we already proved that for any differentiable $C$, the sets $\{\nabla C(\mathbf{q}) : \mathbf{q} \in \mathbb{R}^K\}$ and $\mathrm{dom}(C^*)$ are identical except possibly for points occurring at their respective relative boundaries. Thus, $\mathrm{dom}(C^*) = \mathrm{dom}(R) = \mathrm{relint}(\mathcal{H}(\rho(\mathcal{O}))),$ which implies (c). $\square$
+
+## B. PROOF OF LEMMA 4.6
+
+Let $g(t) := D_f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|, \mathbf{x})$. Notice that $g(0) = 0$, and $g'(0) = 0$ since $f$ is differentiable and $D_f(\mathbf{x}, \mathbf{x}')$ is minimized at $\mathbf{x} = \mathbf{x}'$ or, equivalently, $g(t)$ is minimized at $t = 0$. Using the fundamental theorem of calculus, it follows that
+
+$$ D_f(\mathbf{x}+\mathbf{r}, \mathbf{x}) = g(\|\mathbf{r}\|) - g(0) = \int_0^{\|\mathbf{r}\|} g'(s)\,ds = \int_0^{\|\mathbf{r}\|} (g'(s)-g'(0))\,ds = \int_0^{\|\mathbf{r}\|} \int_0^s g''(t)\,dt\,ds. $$
+---PAGE_BREAK---
+
+Because
+
+$$g(t) = D_f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|, \mathbf{x}) = f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|) - f(\mathbf{x}) - \nabla f(\mathbf{x}) \cdot (t\mathbf{r}/\|\mathbf{r}\|),$$
+
+we obtain
+
+$$g'(t) = \nabla f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|) \cdot (\mathbf{r}/\|\mathbf{r}\|) - \nabla f(\mathbf{x}) \cdot (\mathbf{r}/\|\mathbf{r}\|).$$
+
+Taking the derivative of the above expression with respect to $t$, we further have
+
+$$g''(t) = (\mathbf{r}/\|\mathbf{r}\|)^{\top} \nabla^2 f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|)(\mathbf{r}/\|\mathbf{r}\|).$$
+
+Because the curvature of $f$ at $\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|$ is lower bounded by the smallest eigenvalue and upper bounded by the largest eigenvalue of $\nabla^2 f(\mathbf{x} + t\mathbf{r}/\|\mathbf{r}\|)$, it must be true that $a \le g''(t) \le b$. Thus,
+
+$$\int_0^{\|\mathbf{r}\|} \int_0^s a dt ds \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t) dt ds \le \int_0^{\|\mathbf{r}\|} \int_0^s b dt ds \implies$$
+
+$$\int_0^{\|\mathbf{r}\|} as \, ds \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t) \, dt \, ds \le \int_0^{\|\mathbf{r}\|} b s \, ds \implies$$
+
+$$\frac{a\|\mathbf{r}\|^2}{2} \le \int_0^{\|\mathbf{r}\|} \int_0^s g''(t) dt ds \le \frac{b\|\mathbf{r}\|^2}{2}.$$
+
+As $\mathbf{r}$ can be chosen arbitrarily as long as $\mathbf{x}$ and $\mathbf{x} + \mathbf{r}$ are both in dom($f$), we establish Inequality 11.
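
The resulting two-sided bound can be sanity-checked on a simple instance (a hypothetical quadratic $f$, for which the Hessian is constant and its eigenvalues give exact values of $a$ and $b$):

```python
import math

# f(x) = 0.5 * x^T A x for a fixed positive definite matrix A
A = [[2.0, 0.5],
     [0.5, 1.0]]

def f(x):
    return 0.5 * sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

def grad_f(x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def bregman(y, x):
    # D_f(y, x) = f(y) - f(x) - grad f(x) . (y - x)
    g = grad_f(x)
    return f(y) - f(x) - sum(g[i] * (y[i] - x[i]) for i in range(2))

# eigenvalue bounds a, b of the (constant) Hessian A
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
a = (tr - math.sqrt(tr * tr - 4 * det)) / 2  # smallest eigenvalue
b = (tr + math.sqrt(tr * tr - 4 * det)) / 2  # largest eigenvalue

x, r = [0.3, 0.4], [0.05, -0.02]
d = bregman([x[0] + r[0], x[1] + r[1]], x)
n2 = r[0] ** 2 + r[1] ** 2
print(a * n2 / 2 <= d <= b * n2 / 2)  # True: the two-sided bound of Inequality 11
```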
\ No newline at end of file
diff --git a/samples/texts_merged/7604074.md b/samples/texts_merged/7604074.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c02851ae14ffbed39d4863e1944401ed27c8e46
--- /dev/null
+++ b/samples/texts_merged/7604074.md
@@ -0,0 +1,475 @@
+
+---PAGE_BREAK---
+
+Cosmology of a polynomial model for de Sitter
+gauge theory sourced by a fluid
+
+Jia-An Lu¹
+
+School of Physics, Sun Yat-sen University,
+Guangzhou 510275, China
+
+**Abstract**
+
+In the de Sitter gauge theory (DGT), the fundamental variables are the de Sitter (dS) connection and the gravitational Higgs/Goldstone field $\xi^A$. Previously, a model for DGT was analyzed, which generalizes the MacDowell–Mansouri gravity to have a variable cosmological constant $\Lambda = 3/l^2$, where $l$ is related to $\xi^A$ by $\xi^A\xi_A = l^2$. It was shown that the model sourced by a perfect fluid does not support a radiation epoch and the accelerated expansion of the parity invariant universe. In this work, I consider a similar model, namely, the Stelle–West gravity, and couple it to a modified perfect fluid, such that the total Lagrangian 4-form is polynomial in the gravitational variables. The Lagrangian of the modified fluid has a nontrivial variational derivative with respect to $l$, and as a result, the problems encountered in the previous work no longer appear. Moreover, to explore the elegance of the general theory, as well as to write down the basic framework, I perform the Lagrange–Noether analysis for DGT sourced by a matter field, yielding the field equations and the identities with respect to the symmetries of the system. The resulting formulas are dS covariant and do not rely on the existence of a metric field.
+
+PACS numbers: 04.50.Kd, 98.80.Jk, 04.20.Cv
+
+Key words: Stelle–West gravity, gauge theory of gravity, cosmic acceleration
+
+# 1 Introduction
+
+The gauge theories of gravity (GTG) aim at treating gravity as a gauge field, in particular, constructing a Yang–Mills-type Lagrangian which reduces to GR in some limiting case, while providing some novel falsifiable predictions. A well-founded subclass of GTG is the Poincaré gauge theory (PGT) [1–5], in which the gravitational field consists of the Lorentz connection and the co-tetrad field. Moreover, the PGT can be reformulated as de Sitter gauge theory (DGT), in which the Lorentz connection and the co-tetrad field are united into a de Sitter (dS) connection [6, 7]. In fact, before the idea of DGT is realized, a related Yang–Mills-type Lagrangian for gravity was proposed by MacDowell and Mansouri [8], and reformulated into a dS-invariant form by West [9], which reads
+
+$$
+\begin{aligned}
+\mathcal{L}^{\text{MM}} &= \epsilon_{ABCDE} \xi^E \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
+&= \epsilon_{\alpha\beta\gamma\delta} (l R^{\alpha\beta} \wedge R^{\gamma\delta} - 2l^{-1} R^{\alpha\beta} \wedge e^{\gamma} \wedge e^{\delta} + l^{-3} e^{\alpha} \wedge e^{\beta} \wedge e^{\gamma} \wedge e^{\delta}),
+\end{aligned}
+\quad (1)
+$$
+
+where $\epsilon_{ABCDE}$ and $\epsilon_{\alpha\beta\gamma\delta}$ are the 5d and 4d Levi-Civita symbols, $\xi^A$ is a dS vector constrained by $\xi^A\xi_A = l^2$, $l$ is a positive constant, $\mathcal{F}^{AB}$ is the dS curvature, $R^{\alpha\beta}$ is the
+
+¹Email: ljagdgz@163.com
+---PAGE_BREAK---
+
+Lorentz curvature, and $e^\alpha$ is the orthonormal co-tetrad field. This theory is equivalent to the Einstein–Cartan (EC) theory with a cosmological constant $\Lambda = 3/l^2$ and a Gauss–Bonnet (GB) topological term, as seen in Eq. (1).
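
To see this equivalence explicitly, one may work in a Lorentz gauge $\xi^A = \delta^A_4\, l$ and use the conventional identification $\Omega^{\alpha 4} = e^\alpha/l$ (a sketch; the signs below depend on the signature convention for $\eta_{AB}$), under which the dS curvature splits into Lorentz-curvature and torsion pieces:

$$ \mathcal{F}^{\alpha\beta} = R^{\alpha\beta} - l^{-2}\, e^\alpha \wedge e^\beta, \qquad \mathcal{F}^{\alpha 4} = l^{-1}\, T^\alpha, $$

where $T^\alpha$ is the torsion 2-form. Substituting the first relation into the first line of Eq. (1), with $\epsilon_{ABCDE}\,\xi^E = \epsilon_{\alpha\beta\gamma\delta 4}\, l$ in this gauge, reproduces the second line of Eq. (1) term by term.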
+
+Note that some special gauges with the residual Lorentz symmetry can be defined by $\xi^A = \delta^A_4 l$. Hence, $\xi^A$ is akin to an unphysical Goldstone field. To make $\xi^A$ physical, so that it becomes the gravitational Higgs field, one may replace the constant $l$ by a dynamical $l$, resulting in the Stelle–West (SW) theory [7]. The theory is further explored in Refs. [10, 11] (see also the review [12]), in which the constraint $\xi^A\xi_A = l^2$ is completely removed; in other words, $\xi^A\xi_A$ need not be positive. Suppose that $\xi^A\xi_A = \sigma l^2$, where $\sigma = \pm 1$. When $l \neq 0$, the metric field can be defined by $g_{\mu\nu} = (\tilde{D}_\mu\xi^A)(\tilde{D}_\nu\xi_A)$, where $\tilde{D}_\mu\xi^A = \tilde{\delta}^A_B D_\mu\xi^B$, $\tilde{\delta}^A_B = \delta^A_B - \xi^A\xi_B/\sigma l^2$, $D_\mu\xi^A = d_\mu\xi^A + \Omega^A{}_{B\mu}\xi^B$, and $\Omega^A{}_{B\mu}$ is the dS connection. It was shown that $\sigma = \pm 1$ corresponds to the Lorentz/Euclidean signature of the metric field, and the signature changes when $\xi^A\xi_A$ changes its sign [11].
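
As a consistency check of this definition (in a Lorentz gauge with $\sigma = +1$ and constant $l$, and assuming the identification $\Omega^{\alpha}{}_{4} = e^\alpha/l$), one has $D_\mu\xi^4 = 0$ and $\tilde{D}_\mu\xi^\alpha = \Omega^\alpha{}_{4\mu}\, l = e^\alpha{}_\mu$, so the metric reduces to its familiar tetrad form:

$$ g_{\mu\nu} = (\tilde{D}_\mu\xi^A)(\tilde{D}_\nu\xi_A) = \eta_{\alpha\beta}\, e^\alpha{}_\mu\, e^\beta{}_\nu. $$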
+
+On the other hand, it remains to check whether the SW gravity is viable. Although the SW Lagrangian reduces to the MM Lagrangian when $l$ is a constant, the field equations do not. In the SW theory, there is an additional field equation coming from the variation with respect to $l$, which is nontrivial even when $l$ is a constant. Actually, a recent work [13] presents some negative results for a related model, whose Lagrangian is equal to the SW one times $(-l/2)$. For a homogeneous and isotropic universe with parity-invariant torsion, it is found that $l$ being a constant implies that the energy density of the material fluid is a constant, and so $l$ should not be a constant in the general case. Moreover, in the radiation epoch, the $l$ equation forces the energy density to vanish, while in the matter epoch, a dynamical $l$ only works to renormalize the gravitational constant by some constant factor, and hence the cosmic expansion decelerates as in GR.
+
+In this work, it is shown that the SW gravity suffers from problems similar to those encountered in the model considered by Ref. [13]. Also, I try to solve these problems by using a new fluid whose Lagrangian is polynomial in the gravitational variables. The merits of a Lagrangian polynomial in some variables are that it is simple and nonsingular with respect to those variables. In Refs. [14, 15], polynomial Lagrangians for gravitation and other fundamental fields were proposed, while in this paper, a polynomial Lagrangian for a perfect fluid is proposed, which reduces to the Lagrangian of a usual perfect fluid when $l$ is a constant. It turns out that, in contrast to the case with an ordinary fluid, the SW gravity coupled with the new fluid supports the radiation epoch and naturally drives the cosmic acceleration. In addition, when writing down the basic framework of DGT, a Lagrange–Noether analysis is performed, which generalizes the results of Ref. [16] to the cases with an arbitrary matter field and an arbitrary $\xi^A$.
+
+The article is organized as follows. In Sec. 2.1, a Lagrange–Noether analysis is performed for the general DGT sourced by a matter field. In Sec. 2.2, I reduce the analysis of Sec. 2.1 to the Lorentz gauges, and show how the two Noether identities in PGT can be elegantly unified into one identity in DGT. In Sec. 3.1, the SW model of DGT is introduced, with the field equations derived both in the general gauge and in the Lorentz gauges. Further, the matter source is discussed in Sec. 3.2, where a modified perfect fluid with a Lagrangian polynomial in the gravitational variables is constructed, and a general class of perfect fluids is defined, which contains both the usual and the modified perfect fluids. Then I couple the SW gravity with this class of fluids and study the coupled system in the homogeneous, isotropic and parity-invariant universe. The field equations are deduced in Sec. 4.1 and solved in Sec. 4.2, and the results are compared with observations in Sec.
+---PAGE_BREAK---
+
+4.3. In Sec. 5, I give some conclusions, and discuss the remaining problems and possible solutions.
+
+# 2 de Sitter gauge theory
+
+## 2.1 Lagrange–Noether machinery
+
+The DGT sourced by a matter field is described by the Lagrangian 4-form
+
+$$ \mathcal{L} = \mathcal{L}(\psi, D\psi, \xi^A, D\xi^A, \mathcal{F}^{AB}), \quad (2) $$
+
+where $\psi$ is a $p$-form valued in some representation space of the dS group $SO(1, 4)$, $D\psi = d\psi + \Omega^{AB}T_{AB} \wedge \psi$ is the covariant exterior derivative, $T_{AB}$ are the representations of the dS generators, $\xi^A$ is a dS vector, $D\xi^A = d\xi^A + \Omega^A{}_B\xi^B$, $\Omega^A{}_B$ is the dS connection 1-form, and $\mathcal{F}^A{}_B = d\Omega^A{}_B + \Omega^A{}_C \wedge \Omega^C{}_B$ is the dS curvature 2-form. The variation of $\mathcal{L}$ resulting from the variations of the explicit variables reads
+
+$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge \partial\mathcal{L}/\partial\psi + \delta D\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial\xi^A + \delta D\xi^A \wedge \partial\mathcal{L}/\partial D\xi^A \\ & + \delta\mathcal{F}^{AB} \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \end{aligned} \quad (3) $$
+
+where $(\partial\mathcal{L}/\partial\psi)_{\mu_{p+1}\cdots\mu_4} \equiv \partial\mathcal{L}_{\mu_1\cdots\mu_p\mu_{p+1}\cdots\mu_4}/\partial\psi_{\mu_1\cdots\mu_p}$, and the other partial derivatives are similarly defined. The variations of $D\psi$, $D\xi^A$ and $\mathcal{F}^{AB}$ can be transformed into the variations of the fundamental variables $\psi$, $\xi^A$, and $\Omega^{AB}$, leading to
+
+$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge V_{\psi} + \delta\xi^A \cdot V_A + \delta\Omega^{AB} \wedge V_{AB} \\ & + d(\delta\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial D\xi^A + \delta\Omega^{AB} \wedge \partial\mathcal{L}/\partial \mathcal{F}^{AB}), \end{aligned} \quad (4) $$
+
+where
+
+$$ V_{\psi} = \delta \mathcal{L} / \delta \psi = \partial \mathcal{L} / \partial \psi - (-1)^p D \partial \mathcal{L} / \partial D \psi, \quad (5) $$
+
+$$ V_A = \delta \mathcal{L} / \delta\xi^A = \partial \mathcal{L} / \partial\xi^A - D \partial \mathcal{L} / \partial D\xi^A, \quad (6) $$
+
+$$ V_{AB} = \delta \mathcal{L} / \delta\Omega^{AB} = T_{AB}\psi \wedge \partial \mathcal{L} / \partial D\psi + \partial \mathcal{L} / \partial D\xi^{[A} \cdot \xi_{B]} + D\partial \mathcal{L} / \partial \mathcal{F}^{AB}. \quad (7) $$
+
+The symmetry transformations in DGT consist of the diffeomorphism transformations and the dS transformations. The diffeomorphism transformations can be promoted to a gauge-invariant version [16, 17], namely, the parallel transports in the fiber bundle with the gauge group as the structure group. The action of an infinitesimal parallel transport on a variable is a gauge-covariant Lie derivative$^2$ $L_v = v\rfloor D + Dv\rfloor$, where $v$ is the vector field which generates the infinitesimal parallel transport, and $\rfloor$ denotes a contraction, for example, $(v\rfloor\psi)_{\mu_2\cdots\mu_p} = v^{\mu_1}\psi_{\mu_1\mu_2\cdots\mu_p}$. Put $\delta = L_v$ in Eq. (3), utilize the arbitrariness of $v$, and one obtains the chain rule
+
+$$ v\rfloor\mathcal{L} = (v\rfloor\psi) \wedge \partial\mathcal{L}/\partial\psi + (v\rfloor D\psi) \wedge \partial\mathcal{L}/\partial D\psi + (v\rfloor D\xi^A) \cdot \partial\mathcal{L}/\partial D\xi^A \\ + (v\rfloor\mathcal{F}^{AB}) \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \quad (8) $$
+
+and the first Noether identity
+
+$$ (v\rfloor D\psi) \wedge V_{\psi} + (-1)^p(v\rfloor\psi) \wedge DV_{\psi} + (v\rfloor D\xi^A) \cdot V_A + (v\rfloor\mathcal{F}^{AB}) \wedge V_{AB} = 0. \quad (9) $$
+
+$^2$The gauge-covariant Lie derivative has been used in the metric-affine gauge theory of gravity [18].
+---PAGE_BREAK---
+
+On the other hand, the dS transformations are defined as vertical isomorphisms on the fiber bundle. The actions of an infinitesimal dS transformation on the fundamental variables are as follows:
+
+$$ \delta\psi = B^{AB}T_{AB}\psi, \quad \delta\xi^A = B^{AB}\xi_B, \quad \delta\Omega^{AB} = -DB^{AB}, \qquad (10) $$
+
+where $B^A{}_B$ is a dS algebra-valued function which generates the infinitesimal dS transformation. Substitute Eq. (10) and $\delta\mathcal{L} = 0$ into Eq. (4), and make use of Eq. (7) and the arbitrariness of $B^{AB}$, one arrives at the second Noether identity
+
+$$ DV_{AB} = -T_{AB}\psi \wedge V_{\psi} - V_{[A} \cdot \xi_{B]}. \qquad (11) $$
+
+The above analyses are so general that they do not require the existence of a metric field. In the special case with a metric field being defined, $\xi^A \xi_A$ equal to a positive constant, and $p=0$, the above analyses coincide with those in Ref. [16].
+
+## 2.2 Reduction in the Lorentz gauges
+
+Consider the case with $\xi^A \xi_A = l^2$, where $l$ is a positive function. Then we may define the projector $\tilde{\delta}^A{}_B = \delta^A{}_B - \xi^A \xi_B / l^2$, the generalized tetrad $\tilde{D} \xi^A = \tilde{\delta}^A{}_B D \xi^B$, and a symmetric rank-2 tensor³
+
+$$ g_{\mu\nu} = \eta_{AB}(\tilde{D}_{\mu}\xi^{A})(\tilde{D}_{\nu}\xi^{B}), \qquad (12) $$
+
+which is a localization of the dS metric $\hat{g}_{\mu\nu} = \eta_{AB}(d_{\mu}\dot{\xi}^{A})(d_{\nu}\dot{\xi}^{B})$, where $\dot{\xi}^{A}$ are the 5d Minkowski coordinates on the 4d dS space. Though Eq. (12) seems less natural than the choice $g^{*}_{\mu\nu} = \eta_{AB}(D_{\mu}\xi^{A})(D_{\nu}\xi^{B})$, it coincides with another natural identification (15) (the relation between Eqs. (12) and (15) will be discussed later). If $g_{\mu\nu}$ is non-degenerate, it is a metric field with Lorentz signature, and one may define $\tilde{D}^{\mu}\xi_A \equiv g^{\mu\nu}\tilde{D}_{\nu}\xi_A$. Put $v^\mu = \tilde{D}^\mu\xi_A$ in Eq. (9) and utilize $(\tilde{D}_\mu\xi^A)(\tilde{D}^\mu\xi_B) = \tilde{\delta}^A{}_B$, and we get
+
+$$ \begin{aligned} \tilde{V}_A = &-(\tilde{D}\xi_A\rfloor D\psi) \wedge V_\psi - (-1)^p(\tilde{D}\xi_A\rfloor\psi) \wedge DV_\psi - (\tilde{D}\xi_A\rfloor d\ln l) \cdot V_C\xi^C \\ &-(\tilde{D}\xi_A\rfloor\mathcal{F}^{CD}) \wedge V_{CD}, \end{aligned} \qquad (13) $$
+
+where $\tilde{V}_A = \tilde{\delta}^B{}_AV_B$. When $l$ is a constant, Eq. (13) implies that the $\xi^A$ equation ($\tilde{V}_A = 0$ for this case) can be deduced from the other field equations ($V_\psi = 0$ and $V_{CD} = 0$), as pointed out by Ref. [19]. Substitute Eq. (13) into Eq. (11), and make use of $\tilde{V}_{[A} \cdot \xi_{B]} = V_{[A} \cdot \xi_{B]}$ and $\tilde{D}\xi_{[A} \cdot \xi_{B]} = D\xi_{[A} \cdot \xi_{B]}$, one attains
+
+$$ \begin{aligned} DV_{AB} = &-T_{AB}\psi \wedge V_{\psi} + ((D\xi_{[A} \cdot \xi_{B]})\rfloor D\psi) \wedge V_{\psi} + (-1)^p((D\xi_{[A} \cdot \xi_{B]})\rfloor\psi) \wedge DV_{\psi} \\ &+((D\xi_{[A} \cdot \xi_{B]})\rfloor d \ln l) \cdot V_C\xi^C + ((D\xi_{[A} \cdot \xi_{B]})\rfloor\mathcal{F}^{CD}) \wedge V_{CD}. \end{aligned} \qquad (14) $$
+
+When $l$ is a constant, Eq. (14) coincides with the corresponding result in Ref. [16]. As will be shown later, Eq. (14) unifies the two Noether identities in PGT.
+
+To see this, let us define the Lorentz gauges by the condition $\xi^A = \delta^A{}_4l$ [7]. If $h^A{}_B \in SO(1, 4)$ preserves these gauges, then $h^A{}_B = \text{diag}(h^\alpha_\beta, 1)$, where $h^\alpha_\beta$ belongs to the Lorentz group $SO(1, 3)$. In the Lorentz gauges, $\Omega^\alpha_\beta$ transforms as a Lorentz connection,
+
+³This formula has been given by Refs. [11, 19], and is different from that originally proposed by Stelle and West [7] by a factor $(l_0/l)^2$, where $l_0$ is the vacuum expectation value of $l$.
+---PAGE_BREAK---
+
+and $\Omega^{\alpha}_4$ transforms as a co-tetrad field. Therefore, one may identify $\Omega^{\alpha}_{\beta}$ as the spacetime connection $\Gamma^{\alpha}_{\beta}$, and $\Omega^{\alpha}_4$ as the co-tetrad field $e^{\alpha}$ divided by some quantity with the dimension of length, a natural choice for which is $l$. As a result, $\Omega^{AB}$ is identified with a combination of geometric quantities as follows:
+
+$$ \Omega^{AB} = \begin{pmatrix} \Gamma^{\alpha\beta} & l^{-1}e^{\alpha} \\ -l^{-1}e^{\beta} & 0 \end{pmatrix}. \qquad (15) $$
+
+In the case with constant $l$, this formula has been given by Refs. [7,20], and, in the case with varying $l$, it has been given by Refs. [10, 19]. In the Lorentz gauges, $\tilde{D}\xi^4 = 0$, $\tilde{D}\xi^{\alpha} = \Omega^{\alpha}_4 l = e^{\alpha}$ (where Eq. (15) is used), and so $g_{\mu\nu}$ defined by Eq. (12) satisfies $g_{\mu\nu} = \eta_{\alpha\beta}e^{\alpha}_{\mu}e^{\beta}_{\nu}$, implying that Eq. (12) coincides with Eq. (15). Moreover, according to Eq. (15), one finds the expression for $\mathcal{F}^{AB}$ in the Lorentz gauges as follows [19]:
+
+$$ \mathcal{F}^{AB} = \begin{pmatrix} R^{\alpha\beta} - l^{-2}e^{\alpha} \wedge e^{\beta} & l^{-1}[S^{\alpha} - d \ln l \wedge e^{\alpha}] \\ -l^{-1}[S^{\beta} - d \ln l \wedge e^{\beta}] & 0 \end{pmatrix}, \qquad (16) $$
+
+where $R^{\alpha}_{\beta} = d\Gamma^{\alpha}_{\beta} + \Gamma^{\alpha}_{\gamma} \wedge \Gamma^{\gamma}_{\beta}$ is the spacetime curvature, and $S^{\alpha} = de^{\alpha} + \Gamma^{\alpha}_{\beta} \wedge e^{\beta}$ is the spacetime torsion.
+
+We are now ready to interpret the results of Sec. 2.1 in the Lorentz gauges. In those gauges, $D\psi = D^{\Gamma}\psi + 2l^{-1}e^{\alpha}T_{\alpha4} \wedge \psi$, $D\xi^{\alpha} = e^{\alpha}$, $D\xi^4 = dl$, and so Eq. (2) becomes
+
+$$ \mathcal{L} = \mathcal{L}^L(\psi, D^\Gamma \psi, l, dl, e^\alpha, R^{\alpha\beta}, S^\alpha), \qquad (17) $$
+
+where $D^{\Gamma}\psi = d\psi + \Gamma^{\alpha\beta}T_{\alpha\beta} \wedge \psi$. It is the same as a Lagrangian 4-form in PGT [21], with the fundamental variables being $\psi, l, \Gamma^{\alpha\beta}$ and $e^{\alpha}$. The relations between the variational derivatives with respect to the PGT variables and those with respect to the DGT variables can be deduced from the following equality:
+
+$$ \delta\xi^A \cdot V_A + 2\delta\Omega^{\alpha4} \wedge V_{\alpha4} = \delta l \cdot \Sigma_l + \delta e^\alpha \wedge \Sigma_\alpha, \qquad (18) $$
+
+where $\Sigma_l \equiv \delta\mathcal{L}^L/\delta l$ and $\Sigma_\alpha \equiv \delta\mathcal{L}^L/\delta e^\alpha$. Explicitly, the relations are:
+
+$$ \Sigma_{\psi} \equiv \delta \mathcal{L}^L / \delta \psi = V_{\psi}, \qquad (19) $$
+
+$$ \Sigma_l = V_4 - 2l^{-2}e^\alpha \wedge V_{\alpha 4}, \qquad (20) $$
+
+$$ \Sigma_{\alpha\beta} = \delta\mathcal{L}^L/\delta\Gamma^{\alpha\beta} = V_{\alpha\beta}, \qquad (21) $$
+
+$$ \Sigma_\alpha = 2l^{-1}V_{\alpha 4}. \qquad (22) $$
+
+It is remarkable that the DGT variational derivative $V_{AB}$ unifies the two PGT variational derivatives $\Sigma_{\alpha\beta}$ and $\Sigma_{\alpha}$. With the help of Eqs. (19)–(22), the $\alpha\beta$ components and $\alpha 4$ components of Eq. (14) are found to be
+
+$$ D^\Gamma \Sigma_{\alpha\beta} = -T_{\alpha\beta} \psi \wedge \Sigma_\psi + e_{[\alpha} \wedge \Sigma_{\beta]}, \qquad (23) $$
+
+$$ D^\Gamma \Sigma_\alpha = (e_\alpha\rfloor D^\Gamma \psi) \wedge \Sigma_\psi + (-1)^p (e_\alpha\rfloor \psi) \wedge D^\Gamma \Sigma_\psi + \partial_\alpha l \cdot \Sigma_l \\ + (e_\alpha\rfloor R^{\beta\gamma}) \wedge \Sigma_{\beta\gamma} + (e_\alpha\rfloor S^\beta) \wedge \Sigma_\beta, \qquad (24) $$
+
+which are just the two Noether identities in PGT [21], with both $\psi$ and $l$ as the matter fields. This completes our proof for the earlier statement that the DGT identity (14) unifies the two Noether identities in PGT.
+---PAGE_BREAK---
+
+# 3 Polynomial models for DGT
+
+## 3.1 Stelle-West gravity
+
+It is natural to require that the Lagrangian for DGT be regular with respect to the fundamental variables. The simplest regular Lagrangians are polynomial in the variables, and, in order to recover the EC theory, the polynomial Lagrangian should be at least linear in the gauge curvature. Moreover, to ensure that $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution, the polynomial Lagrangian should be at least quadratic in $\mathcal{F}^{AB}$.⁴ The general Lagrangian quadratic in $\mathcal{F}^{AB}$ reads:
+
+$$
+\begin{aligned}
+\mathcal{L}^G &= (\kappa_1 \epsilon_{ABCDE} \xi^E + \kappa_2 \eta_{AC} \xi_B \xi_D + \kappa_3 \eta_{AC} \eta_{BD}) \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
+&= \kappa_1 \mathcal{L}^{\text{SW}} + \kappa_2 (S^\alpha \wedge S_\alpha - 2S^\alpha \wedge d \ln l \wedge e_\alpha) \\
+&\quad + \kappa_3 [R^{\alpha\beta} \wedge R_{\alpha\beta} + d(2l^{-2} S^\alpha \wedge e_\alpha)],
+\end{aligned}
+\quad (25)
+$$
+
+where the $\kappa_1$ term is the SW Lagrangian, the $\kappa_2$ and $\kappa_3$ terms are parity odd, and the $\kappa_3$ term is a sum of the Pontryagin and modified Nieh-Yan topological terms. This quadratic Lagrangian is a special case of the at most quadratic Lagrangian proposed in Refs. [10,22], and one should note that the quadratic Lagrangian satisfies the requirement mentioned above about the vacuum solution, while the at most quadratic Lagrangian does not always satisfy that requirement.
+
+Among the three terms in Eq. (25), the SW term is the only one that can be reduced to the EC Lagrangian in the case with positive and constant $\xi^A\xi_A$. Thus the SW Lagrangian is the simplest choice for the gravitational Lagrangian which (i) is regular with respect to the fundamental variables; (ii) can be reduced to the EC Lagrangian; (iii) ensures $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution.
+
+The SW Lagrangian 4-form $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the first line of Eq. (1), while $\xi^A$ is not constrained by any condition. Substitute Eq. (1) into Eqs. (6)-(7), make use of $\partial\mathcal{L}^{\text{SW}}/\partial\mathcal{F}^{AB} = \epsilon_{ABCDE} \xi^E \mathcal{F}^{CD}$ and the Bianchi identity $D\mathcal{F}^{AB} = 0$, one immediately gets the gravitational field equations
+
+$$ -\kappa \epsilon_{ABCDE} \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \xi^E, \quad (26) $$
+
+$$ -\kappa \epsilon_{ABCDE} D\xi^E \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \Omega^{AB}, \quad (27) $$
+
+where $\mathcal{L}^m$ is the Lagrangian of the matter field coupled to the SW gravity, with $\kappa$ as the coupling constant. In the vacuum case, Eq. (27) has been given by Ref. [22] by direct computation, while here, Eq. (27) is obtained from the general formula (7).
+
+In the Lorentz gauges, $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the second line of Eq. (1), while $l$ becomes a dynamical field. The gravitational field equations read
+
+$$ -(\kappa/4)\epsilon_{\alpha\beta\gamma\delta}\epsilon^{\mu\nu\sigma\rho}e^{-1}R^{\alpha\beta}_{\quad\mu\nu}R^{\gamma\delta}_{\quad\sigma\rho} - 4\kappa l^{-2}R + 72\kappa l^{-4} = \delta S_m/\delta l, \quad (28) $$
+
+$$ -\kappa \epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\quad\sigma\rho} + 8\kappa e_{[\alpha}^{\mu} e_{\beta]}^{\nu} \partial_{\nu} l^{-1} + 4\kappa l^{-1} T^{\mu}_{\alpha\beta} = \delta S_m / \delta \Gamma^{\alpha\beta}_{\quad\mu}, \quad (29) $$
+
+$$ -8\kappa l^{-1}(G^{\mu}_{\alpha} + \Lambda e_{\alpha}^{\mu}) = \delta S_m / \delta e^{\alpha}_{\mu}, \quad (30) $$
+
+where $e = \det(e^{\alpha}_{\mu})$, $R$ is the scalar curvature, $G^{\mu}_{\alpha}$ is the Einstein tensor, $\Lambda \equiv 3l^{-2}$, $T^{\mu}_{\alpha\beta} = S^{\mu}_{\alpha\beta} + 2e_{[\alpha}^{\mu} S^{\nu}_{\beta]\nu}$, and $S_m$ is the action of the matter field.
+
+⁴When the Lagrangian is linear in $\mathcal{F}^{AB}$, we may add some ‘constant term’ (independent of $\mathcal{F}^{AB}$) to ensure $\mathcal{F}^{AB}=0$ is a vacuum solution, but this way is not so natural.
+---PAGE_BREAK---
+
+## 3.2 Polynomial dS fluid
+
+For the same reason that led us to choose a polynomial Lagrangian for DGT, we intend to use matter sources with polynomial Lagrangians. It has been shown that the Lagrangians of fundamental fields can be reformulated into polynomial forms [14, 15]. However, when describing the universe, it is more adequate to use a fluid as the matter source. The Lagrangian of an ordinary perfect fluid [23] can be written in a Lorentz-invariant form:
+
+$$ \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = -\epsilon_{\alpha\beta\gamma\delta} e_{\mu}^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \rho + \epsilon_{\alpha\beta\gamma\delta} J^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \wedge \partial_{\mu}\phi, \quad (31) $$
+
+where $\phi$ is a scalar field, $J^\alpha$ is the particle number current which is Lorentz covariant and satisfies $J^\alpha J_\alpha < 0$, $\rho = \rho(n)$ is the energy density, and $n \equiv \sqrt{-J^\alpha J_\alpha}$ is the particle number density. The Lagrangian (31) is polynomial in the PGT variable $e^\alpha_\mu$, but it is not polynomial in the DGT variables when it is reformulated into a dS-invariant form, in which case the Lagrangian reads
+
+$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)(\xi^E/l)\rho \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge (\xi^E/l)\partial_\mu\phi, \end{aligned} \quad (32) $$
+
+where $J^A$ is a dS-covariant particle number current, which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$, $\rho = \rho(n)$ and $n \equiv \sqrt{-J^AJ_A}$. Because $l^{-1}$ appears in Eq. (32), the Lagrangian is not polynomial in $\xi^A$.
+
+A straightforward way to modify Eq. (32) into a polynomial Lagrangian is to multiply it by $l$. In the Lorentz gauges, $J^4 = 0$, and we may define the invariant $J^\mu \equiv J^\alpha e_\alpha^\mu$. Then the modified Lagrangian reads $\mathcal{L}_{\mu\nu\rho\sigma}^{\prime \text{PF}} = -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge l \cdot \partial_\mu\phi$. It can be verified that this Lagrangian violates the particle number conservation law $\nabla_\mu J^\mu = 0$, where $\nabla_\mu$ is the metric-compatible and torsion-free covariant derivative. To preserve the particle number conservation, we may replace $l \cdot \partial_\mu\phi$ by $\partial_\mu(l\phi)$, and the corresponding dS-invariant Lagrangian is
+
+$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)\xi^E\rho(n) \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge \left(\frac{1}{4}D_\mu\xi^E \cdot \phi + \xi^E \partial_\mu\phi\right). \end{aligned} \quad (33) $$
+
+The perfect fluid depicted by the above Lagrangian is called the polynomial dS fluid, or dS fluid for short. In the Lorentz gauges,
+
+$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + \epsilon_{\alpha\beta\gamma\delta}J^\alpha e^\beta_\nu e^\gamma_\rho e^\delta_\sigma \wedge (\partial_\mu l \cdot \phi + l \cdot \partial_\mu \phi) \\ &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge \partial_\mu(l\phi), \end{aligned} \quad (34) $$
+
+which is equivalent to Eq. (31) when $l$ is a constant.
+
+Define the Lagrangian function $\mathcal{L}_{\text{DF}}$ by $\mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = \mathcal{L}_{\text{DF}} e\epsilon_{\mu\nu\rho\sigma}$, then $\mathcal{L}_{\text{DF}} = -\rho l + J^\mu \partial_\mu(l\phi)$. To compare the polynomial dS fluid with the ordinary perfect fluid, let us consider a general model with the Lagrangian function
+
+$$ \mathcal{L}_m = -\rho l^k + J^\mu \partial_\mu (l^k \phi), \quad (35) $$
+
+where $k \in \mathbb{R}$. When $k=0$, it describes the ordinary perfect fluid; when $k=1$, it describes the polynomial dS fluid. The variation of $S_m = \int d^4x \, e \, \mathcal{L}_m$ with respect to $\phi$ gives the
+---PAGE_BREAK---
+
+particle number conservation law $\nabla_{\mu}J^{\mu} = 0$. The variation with respect to $J^{\alpha}$ yields
+$\partial_{\mu}(l^{k}\phi) = -\mu U_{\mu}l^{k}$, where $\mu \equiv d\rho/dn = (\rho+p)/n$ is the chemical potential, $p = p(n)$
+is the pressure, and $U^{\mu} \equiv J^{\mu}/n$ is the 4-velocity of the fluid particle. Making use of
+these results, one may check that the on-shell Lagrangian function is equal to $pl^{k}$, and
+that the variational derivatives are
+
+$$
+\delta S_m / \delta l = -k \rho l^{k-1}, \tag{36}
+$$
+
+$$
+\delta S_m / \delta \Gamma^{\alpha\beta}_{\mu} = 0, \quad (37)
+$$
+
+$$
+\delta S_m / \delta e^\alpha_\mu = (\rho + p) l^k U^\mu U_\alpha + pl^k e_\alpha{}^\mu . \quad (38)
+$$
+
+It is seen that $\delta S_m / \delta l = 0$ for the ordinary perfect fluid, while $\delta S_m / \delta l = -\rho$ for the polynomial dS fluid.
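The relation $\mu \equiv d\rho/dn = (\rho+p)/n$ quoted above is equivalent to the standard thermodynamic relation $p = n\,d\rho/dn - \rho$ for a barotropic fluid. A minimal sympy check (my own; the power-law equation of state $\rho = n^\gamma$ is an illustrative assumption, not taken from the paper):

```python
import sympy as sp

n, gamma = sp.symbols('n gamma', positive=True)

# Illustrative barotropic equation of state (an assumption for this check)
rho = n**gamma

mu = sp.diff(rho, n)             # chemical potential mu = d(rho)/dn
p = n*sp.diff(rho, n) - rho      # thermodynamic relation p = n*d(rho)/dn - rho

# The identity mu = (rho + p)/n used above
assert sp.simplify(mu - (rho + p)/n) == 0
```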
+
+Finally, it should be noted that the polynomial dS fluid does not support a signature change corresponding to $\xi^A\xi_A$ varying from negative to positive. The reason is that when $\xi^A\xi_A < 0$, there exists no $J^A$ which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$.
+
+# 4 Cosmological solutions
+
+## 4.1 Field equations for the universe
+
+In this section, the coupling system of the SW gravity and the fluid model (35) will be analyzed in the homogeneous, isotropic, parity-invariant and spatially flat universe characterized by the following ansatz [13]:
+
+$$
+e^0_\mu = d_\mu t, \quad e^i_\mu = a \, d_\mu x^i, \tag{39}
+$$
+
+$$
+S^0_{\mu\nu} = 0, \quad S^i_{\mu\nu} = b e^0_\mu \wedge e^i_\nu, \tag{40}
+$$
+
+where $a$ and $b$ are functions of the cosmic time $t$, and $i = 1, 2, 3$. On account of Eqs. (39)–(40), the Lorentz connection $\Gamma^{\alpha\beta}{}_{\mu}$ and curvature $R^{\alpha\beta}{}_{\mu\nu}$ can be calculated [13]. Further, assume that $U^\mu = e_0{}^\mu$; then $U_\mu = -e^0{}_\mu$, and so $U_\alpha = -\delta^0{}_\alpha$. Now the reduced form of each term of Eqs. (28)–(30) can be attained. In particular,
+
+$$
+\epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} R^{\alpha\beta}_{\mu\nu} R^{\gamma\delta}_{\sigma\rho} = 96(ha)^{\cdot} a^{-1} h^2, \quad (41)
+$$
+
+$$
+R = 6[(ha)^{\cdot}a^{-1} + h^2], \tag{42}
+$$
+
+$$
+\epsilon_{0i\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\sigma\rho} = -4h^2 \dot{l} e_i{}^\mu, \quad (43)
+$$
+
+$$
+\epsilon_{ij\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\sigma\rho} = 0, \quad (44)
+$$
+
+$$
+T^{\mu}_{0i} = -2b e_i{}^{\mu}, \quad T^{\mu}_{ij} = 0, \tag{45}
+$$
+
+$$
+G^{\mu}_{0} = -3h^{2}e_{0}^{\mu}, \qquad (46)
+$$
+
+$$
+G^{\mu}_i = -[2(ha)^{\cdot} a^{-1} + h^2] e_i^{\mu}, \quad (47)
+$$
+
+$$
+\delta S_m / \delta e^0_\mu = -\rho l^k e_0{}^\mu, \quad (48)
+$$
+
+$$
+\delta S_m / \delta e^i_\mu = p l^k e_i{}^\mu, \quad (49)
+$$
+
+where an overdot on a quantity, or a superscript $\cdot$, denotes differentiation with respect to $t$, and $h = \dot{a}/a - b$. Substitution of the above equations into Eqs. (28)–(30) leads to
+
+$$
+(ha)^{\cdot} a^{-1} (h^2 + l^{-2}) + l^{-2} (h^2 - \Lambda) = k \rho l^{k-1} / 24\kappa, \quad (50)
+$$
+---PAGE_BREAK---
+
+$$ (h^2 + l^{-2})\dot{l} - 2bl^{-1} = 0, \qquad (51) $$
+
+$$ 8\kappa l^{-1}(-3h^2 + \Lambda) = \rho l^k, \qquad (52) $$
+
+$$ 8\kappa l^{-1}[-2(ha)^{\cdot}a^{-1} - h^2 + \Lambda] = -pl^k, \qquad (53) $$
+
+which constitute the field equations for the universe.
+
+## 4.2 Solutions for the field equations
+
+Before solving the field equations (50)–(53), let us first derive the continuity equation from the field equations. Rewrite Eq. (52) as
+
+$$ h^2 = l^{-2} - \rho l^{k+1}/24\kappa. \qquad (54) $$
+
+Substituting Eq. (54) into Eq. (53) yields
+
+$$ (ha)^{\cdot}a^{-1} = l^{-2} + (\rho + 3p)l^{k+1}/48\kappa. \qquad (55) $$
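As a consistency check (my own, not from the paper), Eq. (55) follows from Eqs. (53)–(54) once one uses $\Lambda = 3l^{-2}$, the value implied by comparing Eqs. (52) and (54); below, `X` stands for the combination $(ha)^{\cdot}a^{-1}$:

```python
import sympy as sp

h, l, rho, p, k, kappa, X = sp.symbols('h l rho p k kappa X')
Lam = 3/l**2   # Lambda = 3*l**(-2), implied by comparing Eqs. (52) and (54)

# Eq. (53), with X standing for (ha)^{.} a^{-1}, solved for X
X_sol = sp.solve(sp.Eq(8*kappa/l*(-2*X - h**2 + Lam), -p*l**k), X)[0]

# Substitute Eq. (54): h**2 = l**(-2) - rho*l**(k+1)/(24*kappa)
X_sub = X_sol.subs(h**2, 1/l**2 - rho*l**(k+1)/(24*kappa))

# Expected result, Eq. (55)
expected = 1/l**2 + (rho + 3*p)*l**(k+1)/(48*kappa)
assert sp.simplify(X_sub - expected) == 0
```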
+
+Multiply Eq. (55) by $2h$, making use of Eq. (54) and $h = \dot{a}/a - b$, one gets
+
+$$ 2h\dot{h} = (\rho + p)l^{k+1}\dot{a}a^{-1}/8\kappa - 2b(ha)^{\cdot}a^{-1}, \qquad (56) $$
+
+in which, according to Eqs. (50), (51) and (54),
+
+$$ 2b(ha)^{\cdot}a^{-1} = \dot{l}[(k+1)\rho l^k/24\kappa + 2l^{-3}]. \qquad (57) $$
+
+Differentiate Eq. (54) with respect to $t$, and compare it with Eqs. (56)–(57), one arrives at the continuity equation
+
+$$ \dot{\rho} + 3(\rho + p)\dot{a}a^{-1} = 0, \qquad (58) $$
+
+which is, unexpectedly, the same as the usual one. Suppose that $p = w\rho$, where $w$ is a constant. Then Eq. (58) has the solution
+
+$$ \rho = \rho_0(a/a_0)^{-3(1+w)}, \qquad (59) $$
+
+where $a_0$ and $\rho_0$ are the values of $a$ and $\rho$ at some moment $t_0$.
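The power-law solution (59) can be checked mechanically: with $p = w\rho$, Eq. (58) becomes $\dot\rho + 3(1+w)\rho\,\dot a/a = 0$, which the ansatz satisfies for an arbitrary scale factor. A sympy sketch (my own; `adot` stands for $\dot a$, handled via the chain rule):

```python
import sympy as sp

a, adot, w, rho0, a0 = sp.symbols('a adot w rho0 a0', positive=True)

# Ansatz (59): rho = rho0*(a/a0)**(-3*(1+w))
rho = rho0*(a/a0)**(-3*(1+w))

# Continuity equation (58) with p = w*rho:
# rhodot + 3*(1+w)*rho*adot/a, where rhodot = (d rho/d a)*adot
lhs = sp.diff(rho, a)*adot + 3*(1 + w)*rho*adot/a
assert sp.simplify(sp.expand_power_base(lhs, force=True)) == 0
```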
+
+We are now ready to solve Eqs. (50)–(52), with Eq. (53) replaced by Eq. (58) and its solution (59). Firstly, substituting Eqs. (54)–(55) into Eq. (50), one finds
+
+$$ \rho l^{k+3} = 48\kappa(3w - k - 1)/(3w + 1). \qquad (60) $$
+
+Assume that $\kappa < 0$; then, according to the above relation, $\rho l^{k+3} > 0$ implies $(3w - k - 1)/(3w + 1) < 0$. We are only concerned with the cases $k=0, 1$, and so assume that $k+1 > -1$; then $\rho l^{k+3} > 0$ constrains $w$ by
+
+$$ -\frac{1}{3} < w < \frac{k+1}{3}. \qquad (61) $$
+
+For the ordinary fluid ($k=0$), the pure radiation ($w=1/3$) cannot exist. In fact, on account of Eq. (60), $\rho l^3 = 0$ in this case, which is unreasonable. This problem is similar to one that appeared in Ref. [13]. On the other hand, for the dS fluid ($k=1$), Eq. (61) becomes $-1/3 < w < 2/3$, which contains both the case of pure matter ($w = 0$) and
+that of pure radiation ($w = 1/3$). Generally, the combination of Eqs. (59) and (60) yields
+
+$$l = l_0(a/a_0)^{\frac{3(w+1)}{k+3}}, \quad (62)$$
+
+where $l_0$ is the value of $l$ when $t = t_0$, and is related to $\rho_0$ by Eq. (60).
+Secondly, substitute Eq. (54) into Eq. (51), and utilize Eqs. (60) and (62), one gets
+
+$$b = \frac{3(w + 1)(k + 2)}{(3w + 1)(k + 3)} \dot{a} a^{-1}, \qquad (63)$$
+
+and hence
+
+$$h = \frac{3w - 2k - 3}{(3w + 1)(k + 3)} \dot{a} a^{-1}. \qquad (64)$$
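Since $h = \dot a/a - b$, the coefficient in Eq. (64) must equal one minus that in Eq. (63); a quick symbolic check (my own):

```python
import sympy as sp

k, w = sp.symbols('k w')

coeff_b = 3*(w + 1)*(k + 2)/((3*w + 1)*(k + 3))   # Eq. (63)
coeff_h = (3*w - 2*k - 3)/((3*w + 1)*(k + 3))     # Eq. (64)

# h = adot/a - b implies coeff_h = 1 - coeff_b
assert sp.simplify(1 - coeff_b - coeff_h) == 0
```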
+
+Thirdly, substitution of Eqs. (60) and (64) into Eq. (52) leads to
+
+$$\dot{a}a^{-1} = H_0(l_0/l), \qquad (65)$$
+
+where $H_0 \equiv (\dot{a}a^{-1})_{t_0}$ is the Hubble constant, being related to $l_0$ by
+
+$$H_0 = \sqrt{\frac{3w+1}{-3w+2k+3}} \cdot (k+3)l_0^{-1}. \qquad (66)$$
+
+Here note that Eq. (61) implies that $3w + 1 > 0$, $-3w + k + 1 > 0$, $k + 1 > -1$, and so
+$-3w + 2k + 3 > 0$. In virtue of Eqs. (63), (65) and (62), one has
+
+$$b = b_0(a_0/a)^{\frac{3(w+1)}{k+3}}, \qquad (67)$$
+
+where $b_0$ is related to $H_0$ by Eq. (63). Moreover, substitute Eq. (62) into Eq. (65) and
+solve the resulting equation, one attains
+
+$$(a/a_0)^{\frac{3(w+1)}{k+3}} - 1 = \frac{3(w+1)}{k+3} \cdot H_0(t-t_0). \qquad (68)$$
+
+In conclusion, the solutions for the field equations (50)-(53) are given by Eqs. (59),
+(62), (67) and (68), with the independent constants $a_0$, $H_0$ and $t_0$.
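Eq. (68) can be verified directly: by Eq. (62), $l/l_0 = (a/a_0)^m$ with $m \equiv 3(w+1)/(k+3)$, so Eq. (65) reads $\dot a = H_0 (a/a_0)^{-m}\,a$, which the claimed solution satisfies. A sympy sketch (my own; $\tau = t - t_0$):

```python
import sympy as sp

tau, H0, m, a0 = sp.symbols('tau H0 m a0', positive=True)  # tau = t - t0

# Candidate solution, Eq. (68): (a/a0)**m = 1 + m*H0*tau, with m = 3(w+1)/(k+3)
a = a0*(1 + m*H0*tau)**(1/m)

# Eq. (65) combined with Eq. (62): da/dtau = H0*(a/a0)**(-m)*a
lhs = sp.diff(a, tau)
rhs = H0*(a/a0)**(-m)*a
assert sp.simplify(lhs - rhs) == 0
```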
+
+## 4.3 Comparison with observations
+
+If $k$ is specified, we can determine the value of the coupling constant $\kappa$ from the observed values of $H_0 = 67.4 \text{ km} \cdot \text{s}^{-1} \cdot \text{Mpc}^{-1}$ and $\Omega_0 \equiv 8\pi\rho_0/3H_0^2 = 0.315$ [24]. For example, put $k=1$, then according to Eq. (66) (with $w=0$), one has
+
+$$l_0 = 4/(\sqrt{5}H_0) = 8.19 \times 10^{17} \text{ s}. \qquad (69)$$
+
+Substitution of Eq. (69) and $\rho_0 = 3H_0^2\Omega_0/8\pi = 1.79 \times 10^{-37} \text{ s}^{-2}$ into Eq. (60) yields
+
+$$\kappa = -\rho_0 l_0^4 / 96 = -8.41 \times 10^{32} \text{ s}^2. \qquad (70)$$
+
+This value is an important reference for future work, which will explore the viability
+of the model on the solar-system scale.
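The numbers in Eqs. (69)–(70) can be reproduced from the quoted $H_0$ and $\Omega_0$; the megaparsec-to-metre conversion factor is supplied by me, and units with $c=1$ are used, so $l_0$ comes out in seconds as in Eq. (69):

```python
import math

# Observed inputs [24]
H0_km_s_Mpc = 67.4
Omega0 = 0.315

Mpc_m = 3.0857e22                      # metres per megaparsec (my conversion factor)
H0 = H0_km_s_Mpc*1e3/Mpc_m             # Hubble constant in 1/s

l0 = 4/(math.sqrt(5)*H0)               # Eq. (69), with k = 1, w = 0
rho0 = 3*H0**2*Omega0/(8*math.pi)      # energy density in 1/s^2
kappa = -rho0*l0**4/96                 # Eq. (70)

print(f"l0    = {l0:.3e} s")           # about 8.19e17 s
print(f"kappa = {kappa:.3e} s^2")      # about -8.41e32 s^2
```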
+---PAGE_BREAK---
+
+Also, we can compare the deceleration parameter $q \equiv -a\ddot{a}/\dot{a}^2$ derived from the above models with the observed one. With the help of Eqs. (65) and (62), one finds $\dot{a} \sim a^{(k-3w)/(k+3)}$, then $\ddot{a} = \frac{k-3w}{k+3} \cdot \dot{a}^2 a^{-1}$, and so
+
+$$q = \frac{3w-k}{k+3}. \quad (71)$$
+
+Putting $w=0$, it is seen that the universe accelerates ($q<0$) if $k>0$, expands linearly ($q=0$) if $k=0$, and decelerates ($q>0$) if $k<0$. In particular, for the model with an ordinary fluid ($k=0$), the universe expands linearly⁵; while for the model with a dS fluid ($k=1$), the universe accelerates with $q=-1/4$, which is consistent with the observational result $-1 \le q_0 < 0$ [25–27], where $q_0$ is the present-day value of $q$. It should be noted that Eq. (71) implies that $q$ is constant when $w$ is constant, and so the models cannot describe the transition from deceleration to acceleration for constant $w$.
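The derivation of Eq. (71) can be checked symbolically: with $\dot a = Ca^{(k-3w)/(k+3)}$ and the chain rule $\ddot a = (d\dot a/da)\,\dot a$, one recovers $q = (3w-k)/(k+3)$. A sympy sketch (my own):

```python
import sympy as sp

a, C, k, w = sp.symbols('a C k w', positive=True)

m = (k - 3*w)/(k + 3)
adot = C*a**m                     # from Eqs. (62) and (65)
addot = sp.diff(adot, a)*adot     # chain rule: addot = (d adot/da)*adot

q = sp.simplify(-a*addot/adot**2) # deceleration parameter
assert sp.simplify(q - (3*w - k)/(k + 3)) == 0

# w = 0: q = -1/4 for the dS fluid (k = 1), q = 0 for the ordinary fluid (k = 0)
assert q.subs({w: 0, k: 1}) == sp.Rational(-1, 4)
assert q.subs({w: 0, k: 0}) == 0
```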
+
+## 5 Remarks
+
+It has been shown that the requirement of a regular Lagrangian may be crucial for DGT: the SW gravity coupled with an ordinary perfect fluid (whose Lagrangian is not regular with respect to $\xi^A$ when $\xi^A\xi_A = 0$) permits neither a radiation epoch nor an accelerating universe, while the SW gravity coupled with a polynomial dS fluid (whose Lagrangian is regular with respect to $\xi^A$) is free of these problems. Yet the latter is still not a realistic model, because it cannot describe the transition from deceleration to acceleration in the matter epoch.
+
+There are two possible ways to find a more reasonable model. The first is to modify the gravitational part to be the general quadratic model (25), which is a special case of the at most quadratic model proposed in Refs. [10, 22], but the coupling of which with the polynomial dS fluid is unexplored. It is unknown whether the effect of the $\kappa_2$ term could solve the problem encountered in the SW gravity.
+
+The second way is to modify the matter part. Although the Lagrangian of the polynomial dS fluid is regular with respect to $\xi^A$, it is not regular with respect to $J^A$ when $\xi^A\xi_A = 0$, in which case there should be $J^AJ_A \ge 0$, and so the number density $n \equiv \sqrt{-J^AJ_A}$ is not regular. Maybe one could find a new fluid model whose Lagrangian is regular with respect to all the variables, based on the polynomial models for fundamental fields proposed in Refs. [14, 15].
+
+## Acknowledgments
+
+I thank Profs. S.-D. Liang and Z.-B. Li for their abiding help. I would also like to thank my parents and my wife. This research is supported by the National Natural Science Foundation for Young Scientists of China under Grant No. 12005307.
+
+⁵This result is different from that in Ref. [13], where the cosmological solution describes a decelerating universe. It shows that the SW model is not equivalent to the model considered in Ref. [13].
+---PAGE_BREAK---
+
+References
+
+[1] T. W. B. Kibble. Lorentz invariance and the gravitational field. J. Math. Phys. 2, 212-221 (1961)
+
+[2] D. W. Sciama. On the analogy between charge and spin in general relativity, in: Recent Developments in General Relativity, Festschrift for Infeld (Pergamon Press, Oxford, 1962) pp. 415–439
+
+[3] M. Blagojević and F. W. Hehl. Gauge Theories of Gravitation. A Reader with Commentaries. Imperial College Press, London, 2013
+
+[4] V. N. Ponomariov, A. O. Barvinsky and Y. N. Obukhov. Gauge Approach and Quantization Methods in Gravity Theory (Nauka, Moscow, 2017)
+
+[5] E. W. Mielke. Geometrodynamics of Gauge Fields, 2nd. ed. (Springer, Switzerland, 2017)
+
+[6] K. S. Stelle and P. C. West. De Sitter gauge invariance and the geometry of the Einstein-Cartan theory. J. Phys., A12, L205-L210 (1979)
+
+[7] K. S. Stelle and P. C. West. Spontaneously broken de Sitter symmetry and the gravitational holonomy group. Phys. Rev. D 21, 1466-1488 (1980)
+
+[8] S. W. MacDowell and F. Mansouri. Unified geometric theory of gravity and supergravity. Phys. Rev. Lett. 38, 739-742 (1977)
+
+[9] P. C. West. A geometric gravity Lagrangian. Phys. Lett. B 76, 569 (1978)
+
+[10] H. Westman and T. Złośnik. Exploring Cartan gravity with dynamical symmetry breaking. Class. Quant. Grav. 31, 095004 (2014)
+
+[11] J. Magueijo, M. Rodríguez-Vázquez, H. Westman and T. Złośnik. Cosmological signature change in Cartan Gravity with dynamical symmetry breaking. Phys. Rev. D 89, 063542 (2014)
+
+[12] H. Westman and T. Złośnik. An introduction to the physics of Cartan gravity. Ann. Phys. 361, 330-376 (2015)
+
+[13] S. Alexander, M. Cortês, A. Liddle, J. Magueijo, R. Sims, and L. Smolin. The cosmology of minimal varying Lambda theories. Phys. Rev. D 100, 083507 (2019)
+
+[14] H. R. Pagels. Gravitational gauge fields and the cosmological constant. Phys. Rev. D 29, 1690-1698 (1984)
+
+[15] H. Westman and T. Złośnik. Cartan gravity, matter fields, and the gauge principle. Ann. Phys. 334, 157-197 (2013)
+
+[16] J.-A. Lu. Energy, momentum and angular momentum conservation in de Sitter gravity. Class. Quantum Grav. 33, 155009 (2016)
+
+[17] F. W. Hehl, P. von der Heyde, G. D. Kerlick, and J. M. Nester. General relativity with spin and torsion: Foundations and prospects. Rev. Mod. Phys. 48, 393 (1976)
+---PAGE_BREAK---
+
+[18] F. W. Hehl, J. D. McCrea, E. W. Mielke, and Y. Ne'eman. Metric-affine gauge theory of gravity: field equations, Noether identities, world spinors, and breaking of dilation invariance. Phys. Rep. 258, 1-171 (1995)
+
+[19] J.-A. Lu and C.-G. Huang. Kaluza-Klein-type models of de Sitter and Poincaré gauge theories of gravity. Class. Quantum Grav. 30, 145004 (2013)
+
+[20] H.-Y. Guo. The local de Sitter invariance. Kexue Tongbao 21, 31-34 (1976)
+
+[21] Y. N. Obukhov. Poincaré gauge gravity: selected topics. Int. J. Geom. Meth. Mod. Phys. 3, 95-138 (2006)
+
+[22] H. Westman and T. Złośnik. Gravity, Cartan geometry, and idealized waywisers. arXiv:1203.5709 (2012)
+
+[23] J. D. Brown. Action functionals for relativistic perfect fluids. Class. Quant. Grav. 10, 1579 (1993)
+
+[24] Planck Collaboration. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020)
+
+[25] A. G. Riess et al. Observational evidence from supernovae for an accelerating universe and a cosmological constant. Astron. J. 116, 1009-1038 (1998)
+
+[26] B. Schmidt et al. The high-Z supernova search: measuring cosmic deceleration and global curvature of the universe using type IA supernovae. Astrophys. J. 507, 46-63 (1998)
+
+[27] S. Perlmutter et al. Measurements of Omega and Lambda from 42 high redshift supernovae. Astrophys. J. 517, 565-586 (1999)
\ No newline at end of file
diff --git a/samples/texts_merged/7618174.md b/samples/texts_merged/7618174.md
new file mode 100644
index 0000000000000000000000000000000000000000..0c5404fc31c41dae905eebf1044b064b93fba0e9
--- /dev/null
+++ b/samples/texts_merged/7618174.md
@@ -0,0 +1,712 @@
+
+---PAGE_BREAK---
+
+QUADRATIC BOUNDS
+ON THE QUASICONVEXITY OF
+NESTED TRAIN TRACK SEQUENCES
+
+by
+TARIK AOUGAB
+
+Electronically published on March 4, 2014
+
+Topology Proceedings
+
+Web: http://topology.auburn.edu/tp/
+
+Mail: Topology Proceedings
+Department of Mathematics & Statistics
+Auburn University, Alabama 36849, USA
+
+E-mail: topolog@auburn.edu
+
+ISSN: 0146-4124
+
+COPYRIGHT © by Topology Proceedings. All rights reserved.
+---PAGE_BREAK---
+
+QUADRATIC BOUNDS ON THE QUASICONVEXITY OF
+NESTED TRAIN TRACK SEQUENCES
+
+TARIK AOUGAB
+
+**ABSTRACT.** Let $S_{g,p}$ denote the genus $g$ orientable surface with $p$ punctures. We show that nested train track sequences constitute $O((g,p)^2)$-quasiconvex subsets of the curve graph, effectivizing a theorem of Howard A. Masur and Yair N. Minsky. As a consequence, the genus $g$ disk set is $O(g^2)$-quasiconvex. We also show that splitting and sliding sequences of birecurrent train tracks project to $O((g,p)^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface, an effective version of a theorem of Masur, Lee Mosher, and Saul Schleimer.
+
+# 1. INTRODUCTION
+
+Let $S_{g,p}$ denote the orientable surface of genus $g$ with $p \ge 0$ punctures, and let $\mathcal{C}(S_{g,p})$ be the corresponding curve complex. Finally, let $\mathcal{C}_k(S_{g,p})$ denote the corresponding $k$-skeleton.
+
+Let $(\tau_i)_i$ be a sequence of train tracks on $S_{g,p}$ such that $\tau_{i+1}$ is carried by $\tau_i$ for each $i$. Such a collection of train tracks defines a subset of $\mathcal{C}_0(S_{g,p})$ called a *nested train track sequence*. A train track splitting sequence is an important special case of such a sequence, in which $\tau_i$ is obtained from $\tau_{i-1}$ via one of two simple combinatorial moves, *splitting* and *sliding*.
+
+A nested train track sequence is said to have *R*-bounded steps if the $\mathcal{C}_1$-distance between the vertex cycles of $\tau_i$ and those of $\tau_{i+1}$ is bounded above by R. Howard A. Masur and Yair N. Minsky [13] show that any
+
+2010 Mathematics Subject Classification. 57M07, 20F65.
+Key words and phrases. curve complex, disk set, mapping class group.
+The author was partially supported by an NSF grant during the completion of this work.
+©2014 Topology Proceedings.
+---PAGE_BREAK---
+
+nested train track sequence with $R$-bounded steps is a $K = K(R, g, p)$-quasigeodesic. Our first result provides some effective control on $K$ as a function of $g$ and $p$; in what follows, let $\omega(g, p) = 3g + p - 4$.
+
+**Theorem 1.1.** There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $C_1(S_{g,p})$, which is $(K(g,p) + R)$-quasiconvex.
+
+Masur, Lee Mosher, and Saul Schleimer [14] use Masur and Minsky's result [13] to show that if $Y \subseteq S_{g,p}$ is any essential subsurface, then a sliding and splitting sequence on $S_{g,p}$ maps to a uniform unparameterized quasigeodesic under the subsurface projection map to $\mathcal{C}(Y)$. Using Theorem 1.1, we show the following theorem.
+
+**Theorem 1.2.** *There exists a function $A(g,p) = O(\omega(g,p)^2)$ satisfying the following. Suppose $Y \subseteq S_{g,p}$ is an essential subsurface, and let $(\tau_i)_i$ be a splitting and sliding sequence of birecurrent train tracks on $S_{g,p}$. Then $(\tau_i)_i$ projects to an $A(g,p)$-unparameterized quasigeodesic in $C_1(Y)$.*
+
+Let $H_g$ denote the genus $g$ handlebody and let $D(g) \subset C_1(S_g)$ denote the set of meridians, curves on $S_g$ that bound disks in $H_g$. Also due to Masur and Minsky [13] is the fact that any two meridians in $D(g)$ can be connected by a 15-bounded nested train track sequence. Therefore, we obtain the following corollary of Theorem 1.1.
+
+**Corollary 1.3.** *There exists a function $f(g) = O(g^2)$ such that $D(g)$ is an $f(g)$-quasiconvex subset of $C_1(S_g)$.*
+
+The mapping class group, denoted Mod($S$), is the group of isotopy classes of orientation preserving homeomorphisms of a surface $S$ (see [5] for a thorough exposition).
+
+As an application of Corollary 1.3, we obtain a more effective approach for detecting when a pseudo-Anosov mapping class $\phi$ is generic. Here, *generic* means that the stable lamination of $\phi$ is not a limit of meridians; the term “generic” is warranted by a theorem of Steven P. Kerckhoff [10], which states that the set of all projective measured laminations which are limits of meridians constitutes a measure 0 subset of $\mathcal{PML}(S)$, the space of all projective measured laminations on a surface $S$.
+
+In what follows, let $d_{C(S)}$ denote distance in $C_1(S)$; when there is no confusion, the reference to $S$ will be omitted. Masur and Minsky [11] showed that $C_1(S)$ is a $\delta$-hyperbolic metric space.
+
+Using Theorem 1.2, [1], and the fact that the curve graphs are uniformly hyperbolic (as shown by the author in [2], and independently in [3], [4], and [9]), we have the following corollary.
+---PAGE_BREAK---
+
+**Corollary 1.4.** There exists a function $r(g) = O(g^2)$ such that $\phi \in Mod(S_g)$ is a generic pseudo-Anosov mapping class if and only if there exists some $k \in \mathbb{N}$ such that for all $n > k$,
+
+$$d_C(D(g), \phi^n(D(g))) > r(g).$$
+
+**Remark 1.5.** By the argument of Aaron Abrams and Saul Schleimer [1], it suffices to take $r(g) = 2\delta + 2f(g)$ for $\delta$ the hyperbolicity constant of $C_1$, and $f(g)$ as in the statement of Corollary 1.3.
+
+We also note that quasiconvexity of $D(g)$ and the fact that splitting sequences map to quasigeodesics under subsurface projection are main ingredients in the proof due to Masur and Schleimer [15] that the disk complex is $\delta$-hyperbolic. Thus, the effective control discussed above is perhaps a first step to studying the growth of the hyperbolicity constant of the disk complex.
+
+The proof of the main theorem, Theorem 1.1, relies on the ability to control
+
+(1) the hyperbolicity constant $\delta(g,p)$ of $C_1$;
+
+(2) $B = B(g,p)$, a bound on the diameter of a set of vertex cycles of a fixed train track $\tau \subset S_{g,p}$; and
+
+(3) the “nesting lemma constant” $k(g,p)$.
+
+As mentioned above, due to work of the author and the authors of [3], [4], and [9], curve graphs are uniformly hyperbolic. Furthermore, [9] shows that all curve graphs are 17-hyperbolic.
+
+Regarding (2), the author [2] has also shown that for sufficiently large $\omega$, $B(g,p) \le 3$.
+
+Therefore, all that remains is to analyze the growth of $k(g,p)$, which we address in section 5 by following Masur and Minsky's original argument [11] while keeping track of the constants that pop up along the way. However, in order to do this, we have need of an effective criterion for determining when a train track $\tau$ is non-recurrent, which we address in section 4.
+
+In section 2, we review some preliminaries about curve complexes and subsurface projections. In section 3, we review train tracks on surfaces and bounds on curve graph distance given by intersection number, as obtained in previous work. In section 4, we obtain an effective way of detecting non-recurrence of train tracks by analyzing the linear algebra of the corresponding branch-switch incidence matrix. In section 5, we obtain an effective version of Masur and Minsky's nesting lemma [11], which is the main tool needed to prove Theorem 1.1. In section 6 we complete the proofs of theorems 1.1 and 1.2, and Corollary 1.3.
+---PAGE_BREAK---
+
+## 2. PRELIMINARIES: COARSE GEOMETRY, COMBINATORIAL COMPLEXES, AND SUBSURFACE PROJECTIONS
+
+Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces. For some $k \ge 1$, a relation $f : X \to Y$ is a *k-quasi-isometric embedding* of $X$ into $Y$ if, for any $x_1, x_2 \in X$, we have
+
+$$ \frac{1}{k}\, d_X(x_1, x_2) - k \le d_Y(f(x_1), f(x_2)) \le k \cdot d_X(x_1, x_2) + k. $$
+
+Since $f$ is not necessarily a map, $f(x)$ and $f(y)$ need not be singletons, and the distance $d_Y(f(x), f(y))$ is defined to be the diameter in the metric $d_Y$ of the union $f(x) \cup f(y)$. If the $k$-neighborhood of $f(X)$ is all of $Y$, then $f$ is a *k-quasi-isometry* between $X$ and $Y$, and we refer to $X$ and $Y$ as being *quasi-isometric*.
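The two-sided inequality in the definition can be checked mechanically on finite samples of points. The following minimal sketch (the function and variable names are illustrative, not from the paper) tests the standard quasi-isometric-embedding inequality for the map $n \mapsto 2n$ from $\mathbb{Z}$ to $\mathbb{R}$:

```python
# Minimal sketch: check the k-quasi-isometric-embedding inequality
#   (1/k) d_X(x1, x2) - k <= d_Y(f(x1), f(x2)) <= k * d_X(x1, x2) + k
# on a finite sample of points.

def is_k_qi_embedding(points, d_X, d_Y, f, k):
    """Test the two-sided inequality for every pair drawn from `points`."""
    for i, x1 in enumerate(points):
        for x2 in points[i + 1:]:
            dx, dy = d_X(x1, x2), d_Y(f(x1), f(x2))
            if not (dx / k - k <= dy <= k * dx + k):
                return False
    return True

# Example: n -> 2n is a 2-quasi-isometric embedding of (Z, |.|) into (R, |.|),
# since d_Y(f(x1), f(x2)) = 2 * d_X(x1, x2) for every pair.
sample = list(range(-5, 6))
dist = lambda a, b: abs(a - b)
print(is_k_qi_embedding(sample, dist, dist, lambda n: 2 * n, 2))  # True
```

With $k = 1$ the same map fails the upper bound as soon as $d_X(x_1, x_2) \ge 2$, so the checker returns `False` there.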
+
+Given an interval $[a, b] \subset \mathbb{Z}$, a *k-quasigeodesic* in $X$ is a $k$-quasi-isometric embedding $f : [a, b] \to X$. If $f : [a, b] \to X$ is any relation such that there exists an interval $[c, d]$ and a strictly increasing function $g : [c, d] \to [a, b]$ such that $f \circ g$ is a $k$-quasigeodesic, we say that $f$ is a *k-unparameterized quasigeodesic*. In this case we also require that, for each $i \in [c, d - 1]$, the diameter of $f([g(i), g(i+1)])$ is at most $k$. We will sometimes refer to a quasigeodesic by its image in the metric space $X$.
+
+A simple closed curve on $S_{g,p}$ is *essential* if it is homotopically non-trivial and not homotopic into a neighborhood of a puncture.
+
+The *curve complex* of $S_{g,p}$, denoted $\mathcal{C}(S_{g,p})$, is the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves on $S_{g,p}$, such that $k+1$ vertices span a $k$-simplex exactly when the corresponding $k+1$ isotopy classes can be realized disjointly on $S_{g,p}$. The curve complex is made into a metric space by identifying each simplex with the standard Euclidean simplex with unit length edges. Let $\mathcal{C}_k(S)$ denote the $k$-skeleton of $\mathcal{C}(S)$.
+
+The curve complex is a locally infinite, infinite diameter metric space. By a theorem of Masur and Minsky [11], $\mathcal{C}(S)$ is $\delta$-hyperbolic for some $\delta = \delta(S) > 0$, meaning that the $\delta$-neighborhood of the union of any two edges of a geodesic triangle contains the third edge.
+
+The curve complex admits an isometric (but not properly discontinuous) action of $\text{Mod}(S)$, and it is a flag complex, so that its combinatorics are completely encoded by $\mathcal{C}_1(S)$, the *curve graph*; note also that $\mathcal{C}(S)$ is quasi-isometric to $\mathcal{C}_1(S)$, and therefore, to study the coarse geometry of $\mathcal{C}$, it suffices to consider the curve graph. Let $d_\mathcal{C}$ denote distance in the curve graph.
+
+If $p \ne 0$, we can consider more general combinatorial complexes, which also allow vertices to represent essential arcs connecting punctures, up to isotopy. As such, define $\mathcal{A}\mathcal{C}(S)$, the *arc and curve complex of* $S$, to
+---PAGE_BREAK---
+
+be the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves and arcs on $S$. In case $S$ has boundary, the isotopy classes of arcs which constitute a vertex of $\mathcal{A}C$ are not required to be rel boundary; that is, two arcs represent the same vertex if they are isotopic via an isotopy which need not fix the boundary pointwise.
+
+As with $\mathcal{C}(S)$, two vertices are connected by an edge if and only if the corresponding isotopy classes can be realized disjointly, and the higher dimensional skeleta are defined by requiring $\mathcal{A}C(S)$ to be flag. As with $\mathcal{C}$, denote by $\mathcal{A}C_k(S)$ the $k$-skeleton of $\mathcal{A}C(S)$. It is worth noting that $\mathcal{A}C(S)$ is quasi-isometric to $\mathcal{C}(S)$, with quasiconstants not depending on the topological type of $S$.
+
+A non-annular subsurface $Y$ of $S$ is the closure of a complementary component of an essential multi-curve on $S$; an annular subsurface $Y \subseteq S$ is a closed neighborhood of an essential simple closed curve on $S$, homeomorphic to $[0, 1] \times S^1$. A subsurface is essential if its boundary components are all essential curves and it is not homotopy equivalent to a thrice-punctured sphere.
+
+Let $Y \subseteq S$ be an essential, embedded subsurface of $S$. Then there is a covering space $S^Y$ associated to the inclusion $\pi_1(Y) < \pi_1(S)$. While $S^Y$ is not compact, note that the Gromov compactification of $S^Y$ is homeomorphic to $Y$, and via this homeomorphism, we identify $\mathcal{A}C(Y)$ with $\mathcal{A}C(S^Y)$. Then, given $\alpha \in \mathcal{A}C_0(S)$, the subsurface projection map $\pi_Y : \mathcal{A}C(S) \to \mathcal{A}C(Y)$ is defined by setting $\pi_Y(\alpha)$ equal to its preimage under the covering map $S^Y \to S$.
+
+Technically, this defines a map from $\mathcal{A}C_0(S)$ into $2^{\mathcal{A}C_0(S)}$ since there may be multiple connected components of the pre-image of a curve or arc, but the image of any point in the domain is a bounded subset of the range. Thus, to make $\pi_Y$ a map, we can simply choose some component of this pre-image for each point in the domain and then extend the map $\pi_Y$ simplicially to the higher dimensional skeleta.
+
+Given an arc $a \in \mathcal{A}C(S)$, there is a closely related simple closed curve $\tau(a) \in C_1(S)$, obtained from $a$ by surgering along the boundary components that $a$ meets. More concretely, let $\mathcal{N}(a)$ denote a thickening of the union of $a$ together with the (at most two) boundary components of $S$ that $a$ meets, and define $\tau(a) \in 2^{C_1(S)}$ to be the components of $\partial(\mathcal{N}(a))$.
+
+Thus, we obtain a *subsurface projection map*
+
+$$\psi_Y := \tau \circ \pi_Y : \mathcal{C}(S) \to \mathcal{C}(Y)$$
+
+for $Y \subseteq S$ any essential subsurface.
+
+Then, given $\alpha, \beta \in \mathcal{C}(S)$, define $d_Y(\alpha, \beta)$ by
+
+$$d_Y(\alpha, \beta) := \text{diam}_{\mathcal{C}(Y)}(\psi_Y(\alpha) \cup \psi_Y(\beta)).$$
+---PAGE_BREAK---
+
+### 3. TRAIN TRACKS AND INTERSECTION NUMBERS
+
+In this section, we recall some basic terminology of train tracks on surfaces; we refer the reader to [18] and [16] for a more in-depth discussion. A *train track* $\tau \subset S$ is an embedded 1-complex whose vertices and edges are called *switches* and *branches*, respectively. Branches are smooth parameterized paths with well-defined tangent vectors at the initial and terminal switches. At each switch $v$ there is a unique line $L \subset T_v S$ such that the tangent vector of any branch incident at $v$ coincides with $L$.
+
+As part of the data of $\tau$, we choose a preferred direction along this line at each switch $v$; a half branch incident at $v$ is called *incoming* if its tangent vector at $v$ is parallel to this chosen direction and is called *outgoing* if it is anti-parallel. Therefore, at each switch, the incident half branches are partitioned disjointly into two orientation classes, the *incoming germ* and *outgoing germ*.
+
+The valence of each switch must be at least 3 unless $\tau$ has a connected component consisting of a simple closed curve; in this case, $\tau$ has one bivalent switch for such a component.
+
+Finally, we require that every complementary component of $S \setminus \tau$ has a negative generalized Euler characteristic, that is
+
+$$\chi(Q) - \frac{1}{2}V(Q) < 0$$
+
+for any complementary component $Q$; here, $\chi(Q)$ is the usual Euler characteristic and $V(Q)$ is the number of cusps on $\partial(Q)$.
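The generalized Euler characteristic condition rules out exactly the low-complexity complementary regions one expects, such as smooth disks and bigons. A minimal sketch (the helper name is hypothetical):

```python
def is_allowed_region(chi, cusps):
    """Generalized Euler characteristic condition chi(Q) - V(Q)/2 < 0
    for a complementary region Q with `cusps` cusps on its boundary."""
    return chi - cusps / 2 < 0

print(is_allowed_region(1, 2))  # False: a bigon (disk, 2 cusps) is forbidden
print(is_allowed_region(1, 3))  # True: a disk needs at least 3 cusps
print(is_allowed_region(0, 1))  # True: a once-cusped annulus is allowed
```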
+
+A *train path* is a path $\gamma : [0, 1] \to \tau$, smooth on $(0, 1)$, which traverses a switch only by entering via one germ and exiting from the other; a *closed train path* is a train path with $\gamma(0) = \gamma(1)$. A *proper closed train path* is a closed train path with $\gamma'(0) = \gamma'(1)$; here, $\gamma'(t)$ is the unit tangent vector to the path $\gamma$ at time $t$.
+
+Let $\mathcal{B}$ denote the set of branches of $\tau$; then a non-negative, real-valued function $\mu : \mathcal{B} \to \mathbb{R}_+$ is called a *transverse measure* on $\tau$ if for each switch $v$ of $\tau$, we have
+
+$$\sum_{b \in i(v)} \mu(b) = \sum_{b' \in o(v)} \mu(b')$$
+
+where $i(v)$ is the set of incoming branches and $o(v)$ is the set of outgoing ones. These are called the *switch conditions*. $\tau$ is called *recurrent* if it admits a strictly positive transverse measure, that is, one that assigns a positive weight to every branch. A switch of $\tau$ is called *semi-generic* if exactly one of the two germs of half branches consists of a single half branch. $\tau$ is called semi-generic if all switches are semi-generic, and $\tau$ is *generic* if $\tau$ is semi-generic and each switch has degree at most 3. $\tau$
+---PAGE_BREAK---
+
+is called *large* if each connected component of its complement is simply connected.
+
+Any positive scaling of a transverse measure is also a transverse measure, and therefore the set of all transverse measures, viewed as a subset of $\mathbb{R}^\mathcal{B}$, is a cone over a compact polyhedron in projective space. Let $P(\tau)$ denote the projective polyhedron of transverse measures. A projective measure class $[\mu] \in P(\tau)$ is called a *vertex cycle* if it is an extreme point of $P(\tau)$. It is worth noting that if $\tau$ is any train track on $S$, there exists a generic, recurrent train track $\tau'$ such that $P(\tau) = P(\tau')$.
+
+A lamination $\lambda$ is *carried* by $\tau$ if there is a smooth map $\phi: S \to S$, called the $\tau$-carrying map for $\lambda$, which is isotopic to the identity, satisfies $\phi(\lambda) \subset \tau$, and is such that the restriction of the differential $d\phi$ to any tangent line of $\lambda$ is non-singular. If $c$ is any simple closed curve carried by $\tau$, then $c$ induces an integral transverse measure called the *counting measure*, for which each branch of $\tau$ is assigned the natural number equaling the number of times the image of $c$ under its carrying map traverses that branch.
+
+A train track $\tau'$ is *carried* by $\tau$ if there exists a smooth map $\phi: S \to S$ isotopic to the identity, such that for any lamination $\lambda$ carried by $\tau'$, $\phi$ is a $\tau$-carrying map for $\lambda$.
+
+A subset $\tau' \subset \tau$ is called a *subtrack* of $\tau$ if it is also a train track on $S$. In this case, we write $\tau' < \tau$.
+
+Given any train track $\tau$ with branch set $\mathcal{B}$, we can distinguish branches as being one of three types: If $b \in \mathcal{B}$ and both half branches of $b$ are the only half branch in their respective germs, $b$ is called *large*. If both half branches of $b$ are in germs containing more than one half branch, $b$ is *small*; otherwise, $b$ is *mixed* (Figure 1).
+
+FIGURE 1. Branch Classes. Left: $b_1$ is small; Middle: $b_2$ is mixed; Right: $b_3$ is large.
+
+If $[v]$ is a vertex cycle of $\tau$, then there is a unique (up to isotopy) simple closed curve $c(v)$ such that $c$ is carried by $\tau$, and the counting measure on $c$ is an element of $[v]$. Therefore, if $[v_1]$ and $[v_2]$ are two vertex cycles of $\tau$, we can define the distance $d([v_1], [v_2])$ between them to be the curve graph
+---PAGE_BREAK---
+
+distance between their respective simple closed curve representatives:
+
+$$d([v_1], [v_2]) := d_C(c(v_1), c(v_2)).$$
+
+Using this, we can also define the distance between two train tracks $\tau$ and $\tau'$ to be the distance between their vertex cycle sets:
+
+$$d(\tau, \tau') := \min\{d([v_\tau], [v_{\tau'}]): [v_\tau] \text{ is a vertex cycle of } \tau \text{ and } [v_{\tau'}] \text{ is a vertex cycle of } \tau'\}.$$
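Concretely, this is an ordinary min-min distance between two finite sets of representatives. The following sketch assumes some curve-graph distance function `d` is available; here a toy integer metric stands in for it, purely to exercise the definition:

```python
def track_distance(vertex_cycles_1, vertex_cycles_2, d):
    """d(tau, tau'): minimum of d over all pairs of vertex-cycle
    representatives; `d` stands in for curve-graph distance."""
    return min(d(a, b) for a in vertex_cycles_1 for b in vertex_cycles_2)

# toy stand-in metric on integers, purely illustrative
d = lambda a, b: abs(a - b)
print(track_distance([0, 4], [6, 9], d))  # 2, realized by the pair (4, 6)
```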
+
+A train track $\tau$ is called *transversely recurrent* if, for each branch $b$ of $\tau$, there exists a simple closed curve $c$ intersecting $b$, such that $S \setminus (\tau \cup c)$ contains no bigon complementary regions. A track $\tau$ which is both recurrent and transversely recurrent is called *birecurrent*.
+
+A *nested train track sequence* is a sequence $(\tau_i)_i$ on $S_{g,p}$ of birecurrent train tracks such that $\tau_{j+1}$ is carried by $\tau_j$ for each $j$. This, in turn, determines a collection of vertices in $C_1(S_{g,p})$ by associating the track $\tau_j$ with its collection of vertices.
+
+Given $R > 0$, a nested train track sequence $(\tau_i)_i$ is said to have $R$-bounded steps if
+
+$$d(\tau_i, \tau_{i+1}) \le R$$
+
+for each $i$. An important special case is the example of a *splitting and sliding sequence*. This is any train track sequence where $\tau_{i+1}$ is obtained from $\tau_i$ via one of two combinatorial moves, *splitting* (Figure 2) or *sliding* (Figure 3).
+
+FIGURE 2. Any large branch admits three possible “splittings.”
+---PAGE_BREAK---
+
+FIGURE 3. Any mixed branch admits a “sliding.”
+
+We will need the following theorem, as seen in [2].
+
+**Theorem 3.1.** There exists a natural number $n \in \mathbb{N}$ such that if $\omega(g,p) > n$, the following holds: Suppose $\tau \subset S_{g,p}$ is any train track and $[v_1]$ and $[v_2]$ are vertex cycles of $\tau$. Then
+
+$$d([v_1], [v_2]) \le 3.$$
+
+Let $\text{int}(P(\tau)) \subset P(\tau)$ denote the set of strictly positive transverse measures on $\tau$. Then $\tau$ is recurrent if and only if $\text{int}(P(\tau)) \neq \emptyset$. For $\tau$ a large track, a *diagonal extension* $\sigma$ of $\tau$ is a track such that $\tau < \sigma$ and each branch of $\sigma \setminus \tau$ has the property that its endpoints are incident at corners of complementary regions of $\tau$.
+
+Following [11], let $E(\tau)$ denote the set of all diagonal extensions of $\tau$, and define
+
+$$PE(\tau) := \bigcup_{\sigma \in E(\tau)} P(\sigma).$$
+
+Let $N(\tau)$ be the union of $E(\kappa)$ over all large, recurrent subtracks $\kappa < \tau$:
+
+$$N(\tau) := \bigcup_{\kappa < \tau, \kappa \text{ large, recurrent}} E(\kappa),$$
+
+and define
+
+$$PN(\tau) := \bigcup_{\kappa \in N(\tau)} P(\kappa).$$
+
+Define $\text{int}(PE(\tau))$ to be the measures in $PE(\tau)$ whose restrictions to $\tau$ are strictly positive, and define
+
+$$\text{int}(PN(\tau)) := \bigcup_{\kappa} \text{int}(PE(\kappa)).$$
+
+The following theorem will be heavily relied upon in section 5.
+
+**Theorem 3.2 ([2]).** For $\epsilon \in (0,1)$, there is some $\eta = \eta(\epsilon)$ such that if $\alpha, \beta \in C_0(S_{g,p})$, whenever $\omega(g,p) > \eta(\epsilon)$ and $d_C(\alpha, \beta) \ge k$,
+
+$$i(\alpha, \beta) \ge \left( \frac{\omega(g,p)^{\epsilon}}{q(g,p)} \right)^{k-2}$$
+
+where $q(g,p) = O(\log_2(\omega))$.
+---PAGE_BREAK---
+
+**Remark 3.3.** In the above, $i(\alpha, \beta)$ is the geometric intersection number between $\alpha$ and $\beta$, defined by
+
+$$i(\alpha, \beta) := \min |x \cap \beta|$$
+
+where the minimum is taken over all $x$ isotopic to $\alpha$.
+
+We can explicitly write down the function $q(g,p)$ from the statement of Theorem 3.2. $q(g,p)$ is an upper bound on the girth of a finite graph with at most $8(6g+3p-7)$ vertices and average degree larger than 2.02. As seen in [6],
+
+$$
+\begin{aligned}
+q(g,p) = & \left( \frac{8}{\log_2(1.01)} + 5 \right) \log_2(8(6g + 3p - 7)) \\
+& < 1000 \cdot \log_2(100\omega).
+\end{aligned}
+$$
+
+This upper bound will be used in section 5.
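As a sanity check, the displayed inequality $q(g,p) < 1000 \cdot \log_2(100\omega)$ can be verified numerically for sample surfaces. The constants below are copied directly from the display above; this is only a spot-check sketch, not part of the paper's argument:

```python
import math

def q(g, p):
    # the displayed upper bound from [6]
    return (8 / math.log2(1.01) + 5) * math.log2(8 * (6 * g + 3 * p - 7))

def rhs(g, p):
    # the claimed cruder bound 1000 * log2(100 * omega)
    omega = 3 * g + p - 4
    return 1000 * math.log2(100 * omega)

# spot-check q(g,p) < 1000 log2(100 omega) on a few surfaces
for g, p in [(2, 0), (3, 1), (10, 5), (100, 0)]:
    assert q(g, p) < rhs(g, p)
print("inequality holds on all samples")
```

The check succeeds because the leading coefficient $8/\log_2(1.01) + 5 \approx 562$ is below $1000$, while $8(6g+3p-7) < 100\,\omega$ for the surfaces in range.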
+
+## 4. DETECTING RECURRENCE FROM THE INCIDENCE MATRIX
+
+Let $\tau = (S, \mathcal{B}) \subset S_{g,p}$ be a train track with branch set $\mathcal{B}$ and switch set $S$.
+
+Label the branches $\mathcal{B} = \{b_1, \dots, b_n\}$ and switches $S = \{s_1, \dots, s_m\}$, and identify $\mathbb{R}^n$ with the space of real-valued functions on $\mathcal{B}$. Associated to $\tau$ is a linear map $L_\tau: \mathbb{R}^n \to \mathbb{R}^m$ (with a corresponding matrix in the standard basis) defined as follows: given $u \in \mathbb{R}^n$, the $j^{th}$ coordinate of $L_\tau(u)$ is the sum of the incoming weights minus the sum of the outgoing weights at the $j^{th}$ switch, $1 \le j \le m$. Let $\mathbb{R}_+^n$ denote the strictly positive orthant of $\mathbb{R}^n$, the collection of vectors with all positive coordinates.
+
+We call $L_\tau$ the incidence matrix for $\tau$. Note that a non-negative $\mu \in \mathbb{R}^n$ is a transverse measure on $\tau$ if and only if $\mu \in \ker(L_\tau)$; thus, $\tau$ is recurrent if and only if $\ker(L_\tau)$ intersects $\mathbb{R}_+^n$ non-trivially.
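To make the construction concrete, consider a hypothetical two-switch, three-branch track shaped like a theta graph (this example is ours, not the paper's). The row of $L_\tau$ at a switch has $+1$ in the coordinates of incoming branches and $-1$ in those of outgoing branches, and exhibiting a strictly positive kernel vector certifies recurrence:

```python
# Incidence matrix of a "theta" track: branches b1, b2, b3; at switch s1
# the branch b1 is incoming and b2, b3 are outgoing; at s2 the roles flip.
L = [
    [ 1, -1, -1],   # switch s1
    [-1,  1,  1],   # switch s2
]

def apply_matrix(L, u):
    """Compute L(u): one switch condition per row."""
    return [sum(r * x for r, x in zip(row, u)) for row in L]

mu = [2, 1, 1]        # candidate transverse measure
print(apply_matrix(L, mu))  # [0, 0]: mu lies in ker(L) and is strictly
                            # positive, so this track is recurrent
```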
+
+As mentioned in the proof of Lemma 4.1 of [11], if $\ker(L_\tau) \cap \mathbb{R}_+^n = \emptyset$, then there is some $\delta > 0$ such that
+
+$$ \|L_{\tau}(u)\| \geq \delta \cdot u_{min}, \quad \forall u \in \mathbb{R}_{+}^{n}. $$
+
+Here, $u_{min}$ is the minimum over all coordinates of the vector $u$, and $\|\cdot\|$ is the standard Euclidean norm in $\mathbb{R}^m$. The main goal of this section is to effectivize this statement, that is, to obtain explicit control on the size of $\delta$ as a function of $g$ and $p$.
+
+**Theorem 4.1.** Let $\tau = (S, \mathcal{B})$, $|\mathcal{B}| = n$, and $|S| = m$ be a non-recurrent train track on $S_{g,p}$, and let $u \in \mathbb{R}_+^n$. Then
+
+$$ \|L_{\tau}(u)\|_{sup} \geq \frac{u_{min}}{12g + 4p - 12}, $$
+
+where $\|\cdot\|_{sup}$ is the sup norm on $\mathbb{R}^m$.
+---PAGE_BREAK---
+
+*Proof.* We begin by observing that non-recurrence is equivalent to the existence of “extra” branches, ones that must be assigned 0 by any transverse measure:
+
+**Lemma 4.2.** Suppose that for each branch $b \in \mathcal{B}$, there is some corresponding transverse measure $\mu_b$ on $\tau$ such that $\mu_b(b) > 0$. Then $\tau$ is recurrent.
+
+Therefore, the existence of a branch $b$, which is assigned 0 by every
+transverse measure on $\tau$, is equivalent to $\tau$ being non-recurrent. We will
+call such a branch *invisible*.
+
+Given $s \in S$, the switch condition at $s$ represents a row vector of the
+matrix corresponding to the linear transformation $L_{\tau}$. This is the vector
+$v_s$ that has 1's in the coordinates corresponding to the incoming half
+branches incident to $s$ and -1's in the coordinates corresponding to the
+outgoing half branches incident to $s$. Note that $v_s$ could also have a $\pm 2$
+in place of two 1's if both ends of a single branch are incident to $s$. Let
+$R(L_{\tau})$ denote the row space of $L_{\tau}$, the vector space spanned by the row
+vectors.
+
+The following is an immediate corollary of Lemma 4.2.
+
+**Lemma 4.3.** Suppose $b \in \mathcal{B}$ is an invisible branch. Then $b$ is not contained in a closed train path.
+
+For $b$, a branch of $\tau$, let $S(b) \subset S$ denote the switches of $\tau$ incident to $b$; thus, $|S(b)| = 1$ or 2. For $x \in S(b)$, consider the pointed universal cover $(\tilde{\tau}, \tilde{x})$ with associated covering projection $\pi : (\tilde{\tau}, \tilde{x}) \to (\tau, x)$. We define $P(\tilde{\tau}, \tilde{x}) \subseteq \tilde{\tau}$ to be the subset of the universal cover consisting of train paths in $\tilde{\tau}$ emanating from $\tilde{x}$ that do not traverse any branch which projects to $b$ under $\pi$.
+
+Any train path emanating from $\tilde{x}$ has a natural choice of orientation
+by defining its initial point to be $\tilde{x}$. This induces an orientation on any
+branch $e$ contained in $\tilde{P}$. Note that this is well defined because $\tilde{\tau}$ does
+not contain closed train paths (proper or otherwise).
+
+We say that $P(\tilde{\tau}, \tilde{x})$ is unidirectional if, whenever $e_i, e_j \subseteq P(\tilde{\tau}, \tilde{x})$ project to the same branch $e$ of $\tau$, the orientations of $e$ induced by $e_i$ and $e_j$ agree.
+
+Given $u \in \mathbb{R}^n$, define the *deviation* of $u$ at $s \in S$, denoted by $d_s(u)$, to be the absolute value of the coordinate of $L_\tau(u)$ corresponding to $s$. It suffices to assume that for $u$, as in the statement of the theorem,
+
+$$
+(4.1) \qquad d_s(u) < \frac{u_{\min}}{12g + 4p - 12}, \quad \forall s \in S.
+$$
+
+We will use this assumption to obtain a contradiction.
+
+Since $\tau$ is non-recurrent, it must contain an invisible branch $b$.
+---PAGE_BREAK---
+
+**Lemma 4.4.** Let $s_1, s_2 \in S(b)$ be the two (possibly non-distinct) switches incident to the invisible branch $b$ and let $\tilde{s}_1$ and $\tilde{s}_2 \in \tilde{\tau}$ be corresponding lifts which together bound a lift of $b$. Then at least one of $\mathcal{P}(\tilde{\tau}, \tilde{s}_i)$, $i = 1, 2$ is unidirectional.
+
+*Proof.* Suppose not. Then there exist branches $(e_j^i)_{i,j=1,2} \subseteq \mathcal{P}(\tilde{\tau}, \tilde{x})$ such that $e_1^i$, $i = 1, 2$ project to a branch $e_1$ of $\tau$ with opposite orientations, and similarly for $e_2^i$, $i = 1, 2$. Thus, in $\tau$ there exist two train paths starting from $s_1$ and ending at $e_1$, but which traverse $e_1$ in opposite directions. Concatenating these two paths produces a loop in $\tau$, which is a train path away from $s_1$.
+
+By the same exact argument, there is another loop containing the switch $s_2$ and the branch $e_2$, which is a train path away from $s_2$. We can then concatenate these two paths across the branch $b$ to obtain a "dumbbell"-shaped closed train path, which contains $b$ (see Figure 4). This contradicts Lemma 4.3. □
+
+FIGURE 4. If neither train path set emanating from $b$ is unidirectional, then there exist non-closed train paths starting and ending at $s_1$ and $s_2$. Joining these paths across $b$ yields a closed train path containing $b$, pictured above.
+
+Therefore, we assume henceforth that $\mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ is unidirectional; let $\mathcal{Q}(s_1) \subseteq \tau$ be the projection of $\mathcal{P}$ to $\tau$. That $\mathcal{P}$ is unidirectional will allow us to redefine which half branches are incoming and which are outgoing (without changing the linear algebraic structure of $L_\tau$) such that each branch of $\mathcal{Q}$ is mixed.
+
+More concretely, orient each edge $e \subseteq \mathcal{Q}(s_1)$ by projecting the orientation on $\tilde{e}$ down to $e$, where $\tilde{e} \subseteq \tilde{\mathcal{P}}$ is any branch of $\tilde{\tau}$ with $\pi(\tilde{e}) = e$; unidirectionality implies that this construction is well defined. Then we
+---PAGE_BREAK---
+
+simply define a half-branch $e' \subset e \in Q$ to be outgoing at a switch $s$ if the orientation of $e'$ coming from $e$ points away from $s$, and similarly for in-coming branches. Note that this is well defined in that two half-branches incident to the same switch in distinct germs will be assigned opposing directional classes.
+
+This rule then defines an assignment of direction for all half branches of $\tau$ as follows. The half branches of $\tau$ which are not contained in $Q$ can be partitioned disjointly into two subcollections: the *frontier* half branches (those which are incident to a switch contained in $Q$) and the *interior* half branches (those for which the incident switch is not contained in $Q$). Once directions have been assigned to the half branches of $Q$ as above, directions for frontier half branches are determined by which germ they belong to at the corresponding switch. For interior half branches, simply assign the original directions coming from $\tau$.
+
+Let $S(Q) \subseteq S$ denote the switches of $\tau$ contained in $Q$ and recall that $v_s$ denotes the row vector of $L_{\tau}$ corresponding to the switch $s \in S$.
+
+**Lemma 4.5.** The vector $V = \sum_{s \in S(Q)} v_s \in R(L_{\tau})$ is a non-zero integer vector, all of whose coordinates are non-negative.
+
+*Proof.* Since every branch of $Q$ is mixed, each component of $V$ corresponding to a branch of $Q$ is 0. The same is true for any branch not in $Q$ which does not contain a frontier half-branch.
+
+We claim that each frontier half branch must be incoming at the switch in $S(Q)$ to which it is incident; this will imply that $V$ takes a positive value on each component corresponding to a branch containing a frontier half branch.
+
+Indeed, let $e$ be a branch containing a frontier half branch and let $s \in S(Q)$ be incident to $e$. Since $s \in S(Q)$, there is another branch $e'$ incident to $s$ such that $e'$ is a branch of $Q$ and $e'$ is incoming at $s$. Thus, if $e$ were outgoing at $s$, there would exist a train path emanating from $s_1$ which traverses $e$, obtained by concatenating the train path starting at $s_1$ and ending at $e'$ with the train path connecting $e'$ to $e$ over $s$. This contradicts the assumption that $e \notin Q$.
+
+Thus, to complete the argument, it suffices to show that the collection of frontier half branches is non-empty. Recall that $b$ is an invisible branch, and is therefore not contained in any closed train path. It then follows that the half branch of $b$ incident to $s_1$ is frontier. $\square$
+
+We now use the following elementary fact regarding train tracks on $S_{g,p}$ (see [18] for proof).
+
+**Lemma 4.6.** Let $\tau = (\mathcal{B}, \mathcal{S}) \subset S_{g,p}$ be a train track. Then
+
+$$|\mathcal{B}| \leq 18g + 6p - 18;$$
+---PAGE_BREAK---
+
+$$|\mathcal{S}| \leq 12g + 4p - 12.$$
+
+Therefore, there are at most $12g+4p-12$ row vectors of $L_{\tau}$ in the sum
+$V$. Furthermore, since the components of $V$ are all non-negative integers,
+
+$$|V \cdot u| \geq u_{min},$$
+
+where $\cdot$ denotes the standard Euclidean dot product. On the other hand, assuming the validity of (4.1), one obtains
+
+$$
+\begin{align*}
+|V \cdot u| &= \left| \sum_{s \in S(Q)} v_s \cdot u \right| \le \sum_{s \in S(Q)} |v_s \cdot u| \\
+&= \sum_{s \in S(Q)} d_s(u) < (12g + 4p - 12) \cdot \frac{u_{min}}{12g + 4p - 12} = u_{min},
+\end{align*}
+$$
+
+a contradiction.
+□
+
+5. AN EFFECTIVE NESTING LEMMA
+
+In this section, we will use Theorem 3.2 and Lemma 4.3 to establish
+the following effective version of Masur and Minsky’s [11] nesting lemma.
+
+**Lemma 5.1.** There exists a function $k(g,p) = O(\omega^2)$ such that if $\sigma$ and $\tau$ are large train tracks and $\sigma$ is carried by $\tau$, and $d(\tau,\sigma) > k(g,p)$, then
+
+$$PN(\sigma) \subset \text{int}(PN(\tau)).$$
+
+**Remark 5.2.** When convenient, we will assume our train tracks to be generic; as mentioned in [13], the proof of the nesting lemma in the generic case is easily extendable to the general setting.
+
+If $\mu \in P(\tau)$, define the *combinatorial length* of $\mu$ with respect to $\tau$,
+$l_{\tau}(\mu)$, to be the integral of $\mu$ over $\mathcal{B}$, that is
+
+$$l_{\tau}(\mu) := \sum_{b} \mu(b).$$
+
+We also define
+
+$$l_{N(\tau)}(\mu) := \min_{\sigma} l_{\sigma}(\mu)$$
+
+where the minimum is taken over all tracks $\sigma \in N(\tau)$ carrying $\mu$.
+
+We will need the following lemma, as seen in [8].
+
+**Lemma 5.3.** Let $c$ be a simple closed curve carried by a train track $\tau$. Then the counting measure on $c$ is a vertex cycle of $\tau$ if and only if, for any branch $b$ of $\tau$, the image of $c$ under its corresponding carrying map traverses $b$ at most twice, and never twice in the same direction.
+---PAGE_BREAK---
+
+Since the vertex cycles are the extreme points of $P(\tau)$, by the classical
+Krein-Milman theorem, any projective transverse measure class can be
+written as a convex combination of vertex cycles; that is, given $\kappa \in P(\tau)$,
+there exists $(a_i)$ such that
+
+$$
+(5.1) \qquad \kappa = \sum_i a_i \alpha_i,
+$$
+
+where $(\alpha_i)$ are the vertex cycles of $\tau$. Any train track on $S_{g,p}$ has at most
+$18g + 6p - 18$ branches, and therefore, by Lemma 5.3, if $\tau$ is any train
+track and $\alpha$ is a vertex cycle,
+
+$$
+l_{\tau}(\alpha) \leq 2(18g + 6p - 18).
+$$
+
+Lemma 5.3 also implies that any train track $\tau$ has at most $3^{18g+6p-18}$ vertex cycles since any branch is traversed once, twice, or no times. We therefore conclude that, given $\kappa$ as in equation (5.1),
+
+$$
+(5.2) \quad \max_i a_i \le l_\tau(\kappa) < \left[ (2(18g + 6p - 18)) \cdot 3^{18g+6p} \right] \max_i a_i
+$$
+
+$$
+(5.3) \qquad = C \cdot \max_i a_i.
+$$
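
For concreteness, the constant $C$ can be evaluated directly. The following sketch (ours, not part of the original text) simply transcribes the formula $C = 2(18g + 6p - 18) \cdot 3^{18g+6p}$ for a few small surfaces:

```python
# Evaluate the constant C = 2(18g + 6p - 18) * 3^(18g + 6p)
# from equation (5.3) for a few small surfaces S_{g,p}.
def C_constant(g, p):
    n = 18 * g + 6 * p  # the quantity 18g + 6p appearing throughout the bounds
    return 2 * (n - 18) * 3 ** n

for g, p in [(2, 0), (2, 1), (3, 0)]:
    print(f"g={g}, p={p}: C = {C_constant(g, p)}")
```

Even for a closed genus-2 surface, $C = 36 \cdot 3^{36}$, which illustrates why the resulting distance bounds are effective but very large.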
+
+**Lemma 5.4.** Given $L > 0$, there exists a function $h_L(g,p) =$ $O(\log_{\omega(g,p)}(L))$ such that if $\alpha \in P(\tau)$ and $l_{\tau}(\alpha) \le L$, then $d_C(\alpha, \tau) < h_L(g,p)$.
+
+*Proof*. Suppose $l_{\tau}(\alpha) \le L$. We will abuse notation and refer to the image of $\alpha$ under its carrying map by $\alpha$. Then every time $\alpha$ traverses a branch of $\tau$, by Lemma 5.3, it can intersect a vertex cycle at most twice. Therefore, if $v$ is any vertex cycle of $\tau$,
+
+$$
+i(v, \alpha) \le 2L,
+$$
+
+and hence, by Theorem 3.2, for any $\epsilon \in (0,1)$ and $\omega = \omega(\epsilon)$ sufficiently large,
+
+$$
+\begin{align}
+(5.4) \quad d_C(v, \alpha) &\le \frac{\log_\omega(2L)}{\lambda(\log_\omega(3)+1) - \log_\omega(1000 \cdot \log_2(100\omega))} + 2 \\
+&= O(\log_\omega(L)). \tag*{\hspace*{\fill} \square}
+\end{align}
+$$
+
+**Remark 5.5.** One needs to be cautious in manipulating the inequality
+in Theorem 3.2 to obtain equation (5.4); if
+
+$$
+\rho(\omega, \lambda) := \lambda(\log_{\omega}(3) + 1) - \log_{\omega}(1000 \cdot \log_{2}(100\omega)) < 0,
+$$
+
+the direction of the inequality changes and we will not get the desired
+upper bound on curve graph distance. However,
+
+$$
+\lim_{\omega \to \infty} \rho(\omega, \lambda) = \lambda > 0,
+$$
+
+and therefore, for sufficiently large $\omega$, this is not an issue.
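
The sign change of $\rho$ is easy to verify numerically. In the sketch below (our own check, not part of the argument), the value $\lambda = 1$ is an illustrative assumption, since the constant $\lambda$ from Theorem 3.2 is not pinned down in this section:

```python
import math

# Check that rho(omega, lambda) = lambda*(log_omega(3) + 1)
#                                 - log_omega(1000 * log2(100*omega))
# is negative for small omega but eventually positive, as claimed.
def rho(omega, lam=1.0):
    log_w = lambda x: math.log(x) / math.log(omega)  # log base omega
    return lam * (log_w(3) + 1) - log_w(1000 * math.log2(100 * omega))

assert rho(10) < 0       # small omega: the denominator in (5.4) is negative
assert rho(10**6) > 0    # large omega: rho is positive, as the limit predicts
```

This is exactly the caution of Remark 5.5: the manipulation giving (5.4) is only valid once $\omega$ is large enough that $\rho(\omega, \lambda) > 0$.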
+---PAGE_BREAK---
+
+**Lemma 5.6.** Suppose $\sigma$ is a large recurrent train track carried by $\tau$ on $S_{g,p}$, and let $\sigma' \in E(\sigma)$ and $\tau' \in E(\tau)$ such that $\sigma'$ is carried by $\tau'$. Then the total number of times, counting multiplicity, that branches of $\sigma'$ traverse any branch of $\tau' \setminus \tau$ is bounded above by $m_0 = 36g + 12p$.
+
+*Proof.* The complete argument may be found in Masur and Minsky's original paper [11] on the hyperbolicity of the curve complex. For our purposes and for the sake of brevity, it suffices here to simply remark that they show that any given branch of $\sigma'$ can only traverse branches of $\tau' \setminus \tau$ at most twice. Then, since any track has less than $18g + 6p$ branches, the result follows. $\square$
+
+To prove the following lemma, we use the results from section 4.
+
+**Lemma 5.7.** There exists $R = R(g,p)$ with
+
+$$ \frac{1}{R(g,p)} = O(\omega^2), $$
+
+such that if $\sigma < \tau$, $\sigma$ is large, $\tau$ is generic, $\mu \in P(\tau)$, and every branch $b$ of $\tau \setminus \sigma$ and $b'$ of $\sigma$ satisfies $\mu(b) < R(g,p)\mu(b')$, then $\mu \in \text{int}(PE(\sigma))$ and $\sigma$ is recurrent.
+
+*Proof.* We follow Masur and Minsky's original argument [11]. The main tools are the elementary moves on train tracks called splitting and sliding as introduced in section 3 (see figures 2 and 3), which can be used to take $\tau$ to a diagonal extension of $\sigma$. In order to do this, we need to move any branch of $\tau \setminus \sigma$ into a corner of a complementary region of $\sigma$. A split or a slide applied to any such branch either reduces the number of branches of $\tau \setminus \sigma$ incident to a given branch of $\sigma$ or decreases the distance between a branch of $\tau \setminus \sigma$ and a corner of a complementary region of $\sigma$.
+
+Thus, a bounded number of such moves produces a track carried by a diagonal extension of $\sigma$. If a splitting is performed involving a branch $b$ of $\tau \setminus \sigma$ and a branch $c$ of $\sigma$, the resulting track contains a new branch $c'$ of $\sigma$, and we can extend $\mu$ to $c'$ to be consistent with the switch conditions by assigning $\mu(c') = \mu(c) - \mu(b)$. In particular, a sufficient condition for being able to define $\mu$ on the new track is
+
+$$ (5.5) \qquad \mu(c) > \mu(b). $$
+
+There are at most $18g + 6p$ branches of $\tau \setminus \sigma$ and at most $18g + 6p$ branches of $\sigma$ or $\tau$. As earlier mentioned, a splitting move either reduces the number of branches of $\tau \setminus \sigma$ incident to $\sigma$, or it reduces the number of edges of $\sigma$ between a given branch of $\tau \setminus \sigma$ and a corner that it faces. Once a branch of $\tau \setminus \sigma$ is separated from a corner of a complementary region of $\sigma$ by only edges of $\sigma$ for which no splitting moves can be performed, a slide move takes such an edge to a corner point. Therefore, each edge of
+---PAGE_BREAK---
+
+$\tau \setminus \sigma$ is taken to a corner of $\sigma$ after no more than $18g+6p+1$ slidings and splittings, and therefore we obtain $\tau'$ after at most $(18g+6p)(18g+6p+1)$ such moves.
+
+Now, let $R(g,p) = \frac{1}{(18g+6p)(18g+6p+1)+1}$, and assume that for this value of $R$, the hypothesis of the statement is satisfied. In light of equation (5.5), $\mu$ is definable on the diagonal extension $\tau'$ that we obtain after splitting and sliding as long as
+
+$$ (5.6) \qquad \min_{\sigma} \mu > \frac{1}{R(g,p)} \max_{\tau \setminus \sigma} \mu, $$
+
+which is precisely the hypothesis of the lemma. Therefore, $\mu$ is extendable to a diagonal extension of $\sigma$ such that all branches receive positive weights; hence, $\mu \in \text{int}(PE(\sigma))$.
+
+It remains to show that $\sigma$ is recurrent; suppose not. Let $B(\sigma)$ denote the branch set of $\sigma$. Then Lemma 4.3 implies that if $u \in \mathbb{R}^{|B(\sigma)|}$ is a vector with all positive coordinates,
+
+$$ \|L_{\sigma}(u)\| \geq \frac{u_{\min}}{12g + 4p - 12}. $$
+
+In light of equation (5.6), since $\mu$ satisfies the switch conditions on $\sigma$, the vector $\mu$ has small deviations up to the additive error coming from the weight it assigns to any branch of $\tau \setminus \sigma$, which is less than
+
+$$ R(g,p) \cdot \mu_{\min}; $$
+
+since we assumed that $\tau$ is generic, there are at most two branches of $\tau \setminus \sigma$ incident to any branch of $\sigma$, and therefore the deviations of $\mu$ are all less than $\frac{\mu_{\min}}{12g+4p-12}$, contradicting Lemma 4.3. $\square$
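
The final comparison of constants can be checked numerically. The sketch below (our own verification, not in the paper) confirms that with $R(g,p) = \frac{1}{(18g+6p)(18g+6p+1)+1}$, the deviation bound coming from at most two branches of $\tau \setminus \sigma$ per branch of $\sigma$ is indeed smaller than the threshold $\frac{\mu_{\min}}{12g+4p-12}$ of Lemma 4.3:

```python
# With R(g,p) = 1/((18g+6p)(18g+6p+1)+1), check that the deviation bound
# 2*R(g,p)*mu_min is smaller than mu_min / (12g + 4p - 12) whenever the
# latter quantity is positive (we range over a few surfaces with g >= 2).
def R(g, p):
    n = 18 * g + 6 * p
    return 1.0 / (n * (n + 1) + 1)

for g in range(2, 6):
    for p in range(0, 6):
        assert 2 * R(g, p) < 1.0 / (12 * g + 4 * p - 12)
```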
+
+**Lemma 5.8.** Let $L > 0$ be given. Then there exist functions $s_L(g,p)$ and $y(g,p) = O(\omega^3 3^{18\omega})$ satisfying the following: If $\sigma$ is large and carried by $\tau$ and $\sigma' \in E(\sigma)$, $\tau' \in E(\tau)$ such that $\tau'$ carries $\sigma'$, and if $d_C(\sigma, \tau) \ge s_L$, then any simple closed curve $\beta$ carried on $\sigma'$ can be written in $P(\tau')$ as $\beta_{\tau} + \beta'_{\tau'}$, such that
+
+$$ l_{\tau'}(\beta'_{\tau'}) \le y(g,p) \cdot l_{\sigma'}(\beta) \quad \text{and} \quad l_{\tau}(\beta_{\tau}) \ge s_L(g,p) \, l_{\sigma'}(\beta). $$
+
+*Proof.* The details of the argument are not entirely relevant for the proof of our main theorem but may be found in [11]; therefore, we omit the particulars of the proof, and remark only that in their argument, Masur and Minsky show that it suffices to take
+
+$$ y(g,p) := C \cdot m_0 W_0 C_0, $$
+---PAGE_BREAK---
+
+where $C$ is the constant from equation (5.3), $m_0$ is the constant from the statement of Lemma 5.6, $W_0$ is a bound on the weights that a vertex cycle can place on any one branch of $\sigma'$ (and therefore it suffices to take $W_0 = 3$ by Lemma 5.3), and $C_0$ is a bound on the combinatorial length of any vertex cycle on any train track on $S_{g,p}$. Putting all of this together, we obtain
+
+$$y(g,p) := [(2(18g + 6p - 18)) \cdot 3^{18g+6p}] (3(36g+12p-36)^2) = O(\omega^3 3^{18\omega}),$$
+as claimed.
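
For concreteness, the displayed formula for $y(g,p)$ can be evaluated directly; the sketch below (ours, with $S_{2,0}$ chosen as an arbitrary example) transcribes it:

```python
# Direct transcription of the displayed formula
# y(g,p) = [2(18g + 6p - 18) * 3^(18g+6p)] * (3 * (36g + 12p - 36)^2).
def y(g, p):
    n = 18 * g + 6 * p
    C = 2 * (n - 18) * 3 ** n              # the constant C from equation (5.3)
    rest = 3 * (36 * g + 12 * p - 36) ** 2  # the remaining factor m0*W0*C0
    return C * rest

print(y(2, 0))  # closed genus-2 surface
```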
+
+Masur and Minsky [11] also show that it suffices to take
+
+$$s_L(g,p) := h_{C_0 L + y(g,p)}(g,p) + 2B,$$
+
+where $B$ is a bound on the curve graph distance between any two vertex cycles of the same train track.
+
+Therefore, by Theorem 3.1, for sufficiently large $\omega$,
+
+$$ (5.7) \qquad s_L(g,p) \le h_{C_0 L + y(g,p)}(g,p) + 6. \qquad \square $$
+
+*Proof of Lemma 5.1.* Again with concision in mind, we do not include the entirety of Masur and Minsky’s argument [11]; we simply remark here that in our notation, it suffices to choose
+
+$$k(g,p) := s_{C m_0 \left( m_2/R(g,p) \right)^{m_3}}(g,p).$$
+
+Here, $m_0$ is as in Lemma 5.6 and is thus bounded above by $36g + 12p$, $m_2 < (18g + 6p)^{18g+6p}$, and $m_3 < 18g + 6p$. Thus,
+
+$$ C m_0 \cdot \left( \frac{m_2}{R(g,p)} \right)^{m_3} < \left[ (2(18g + 6p - 18)) \cdot 3^{18g+6p} \right] \cdot (36g + 12p) \left( (18g + 6p)^{18g+6p+2} \right)^{18g+6p} =: D, $$
+
+and therefore, by equation (5.7), for $\omega(g,p)$ sufficiently large,
+
+$$ \begin{align*} k(g,p) &< h_{C_0 D + y(g,p)}(g,p) + 6 \\ &= O(\log_\omega(\omega^3 3^{18\omega}(18\omega)^{324\omega^2+36\omega})) \\ &= O(\omega^2). \end{align*} \qquad \square $$
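
As a numerical sanity check on this asymptotic estimate (our own experiment, not part of the proof), one can evaluate $\log_\omega\big(\omega^3\, 3^{18\omega} (18\omega)^{324\omega^2+36\omega}\big)$ and compare it to $\omega^2$:

```python
import math

# The logarithm of the big expression, taken in base omega; computed via
# natural logs so that the huge powers never need to be formed explicitly.
def log_omega_expr(omega):
    ln_w = math.log(omega)
    numer = 3 * ln_w + 18 * omega * math.log(3) \
            + (324 * omega**2 + 36 * omega) * math.log(18 * omega)
    return numer / ln_w

ratios = [log_omega_expr(w) / w**2 for w in (10**4, 10**6, 10**8)]
# The ratio decreases toward the limiting constant 324 = 18^2,
# confirming the O(omega^2) growth rate.
assert ratios[0] > ratios[1] > ratios[2] > 324
```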
+
+## 6. PROOF OF THE MAIN THEOREM AND COROLLARIES
+
+In this section, we prove the main results.
+
+**Theorem 1.1.** There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $C_1(S_{g,p})$, which is $(K(g,p) + R)$-quasiconvex.
+---PAGE_BREAK---
+
+*Proof.* Where possible, we use the same notation that Masur and Minsky [11] do to avoid confusion. Let $\delta$ be the hyperbolicity constant of $C_1(S)$. By [9], it suffices to take $\delta = 17$. Let $B$ be a bound on the diameter of the set of vertex cycles of a given train track $\tau \subset S_{g,p}$. As mentioned above, for sufficiently large $\omega$ it suffices to take $B=3$ (see [2] for a proof of this).
+
+Given a nested train track sequence $(\tau_i)_i$, consider a subsequence $(\tau_{ij})_j$ such that
+
+$$k(g, p) \le d(\tau_{ij}, \tau_{ij+1}) < k(g, p) + R,$$
+
+and such that if $\tau_n$ is any track not in the subsequence $(\tau_{ij})_j$, then there is some $c$ for which
+
+$$d(\tau_{ic}, \tau_n) < k(g, p).$$
+
+Then, since $d(\tau_{ij}, \tau_{ij+1}) \ge k(g, p)$, the effective nesting lemma implies that
+
+$$PN(\tau_{ij+1}) \subset int(PN(\tau_{ij})).$$
+
+For any train track $\tau$, one always has
+
+$$N_1(int(PN(\tau))) \subset PN(\tau),$$
+
+where $N_m(int(PN(\tau)))$ denotes the set of multi-curves at distance at most $m$ in $C_1$ from some multi-curve representing a measure in $int(PN(\tau))$. Combining these two inclusions and inducting yields
+
+$$N_{m-1}(PN(\tau_{ij+m})) \subset int(PN(\tau_{ij})).$$
+
+Masur and Minsky [11] then make use of a lemma which implies that no vertex cycle of $\tau_{ij}$ is in $int(PN(\tau_{ij}))$, and therefore
+
+$$d(\tau_{ij}, \tau_{ik}) \ge |k-j|.$$
+
+Thus, if $(v_{ij})_j$ is any sequence of the vertices of $(\tau_{ij})_j$, we have
+
+$$|m-n| \le d_C(v_{in}, v_{im}) < (k(g,p) + R + 2B)|m-n|,$$
+
+which implies that $(v_{ij})_j$ is a $(k(g,p)+R+2B)$-quasigeodesic. This proves the first part of Theorem 1.1, with $K(g,p) := 2k(g,p) + 46$. (We have shown the sequence to be a $(k(g,p)+R+6)$-quasigeodesic, but we will need the extra $k(g,p)+40$ for the quasiconvexity statement.)
+
+We now show $(\tau_i)_{i \in I_1}$ is $(K(g,p)+R)$-quasiconvex. In any $\delta$-hyperbolic metric space, a geodesic segment $\gamma$ connecting the endpoints of a $K$-quasigeodesic segment $\gamma'$ is contained in a $W$-neighborhood of $\gamma'$, where $W = W(K, \delta)$. $W$ is sometimes known as the *stability constant*.
+
+Therefore, a geodesic segment connecting any two elements of the vertex cycle sequence $(v_{ij})_j$ is contained in a $W(K, \delta) = W(k(g,p)+R+6, 17)$-neighborhood of the sequence.
+
+**Lemma 6.1.** For sufficiently large $\omega$, $W < K(g,p) + R$.
+---PAGE_BREAK---
+
+*Proof.* We only give a sketch here; the main idea of the proof follows an argument of Ken'ichi Ohshika [17, p. 35], and we refer to this for a more complete argument. Hyperbolicity of $C_1$ implies the existence of an exponential divergence function; that is, if $\alpha_1, \alpha_2 : [0, \infty) \to C_1$ are two geodesic rays based at the same point $x_0 \in C_1$, then there is some exponential function $f$ such that, for sufficiently large $r$ (depending on the choice of geodesic rays), the length of any arc outside of a ball of radius $r$ centered at $x_0$, connecting $\alpha_1(r)$ and $\alpha_2(r)$, is at least $f(r)$.
+
+Let $x$ and $y$ be two elements of a vertex cycle sequence $(v_{ij})_j$ and let $h$ be a geodesic segment connecting them. Denote by $w$ the $(k(g,p)+R+6)$-quasigeodesic segment obtained by following along the vertex sequence from $x$ to $y$.
+
+Let $D = \sup_{p \in h} d_C(p, w)$ and suppose $s \in h$ with $d_C(s, w) = D$. Let $a$ and $b$ be two points on $w$ whose distance from $s$ is $D$, with $a$ and $b$ on different sides of $s$. Note that such points exist because the endpoints of $w$ are also the endpoints of $h$, and therefore $s$ must be at distance at least $D$ from the endpoints of $w$.
+
+Let $a'$ (respectively, $b'$) be a point located $2D$ from $s$ on either side of $s$ on $w$; if $s$ is closer than $2D$ to one of the endpoints of $w$, simply define $a'$ (respectively, $b'$) to be the corresponding endpoint of $w$. Let $y', z' \in h$ be points whose distances from $a'$ and $b'$, respectively, are less than $D$. Note that there is an arc $\sigma$ joining $y'$ to $z'$ obtained by first connecting $y'$ to $a'$, then $a'$ to $b'$ along $w$, and then jumping back over to $h$. Thus,
+
+$$
+\begin{align*}
+d_C(y', z') &\le d_C(y', a') + d_C(a', b') + d_C(b', z') \\
+&\le D + 4D + D = 6D.
+\end{align*}
+$$
+
+This gives a bound on the length of the segment of $w$ connecting $y'$ and
+$z'$, since $w$ is a quasigeodesic:
+
+$$ \text{length}_w(y',z') \leq (k(g,p) + R + 6) \cdot 6D. $$
+
+Let $\beta$ be the arc obtained by concatenating the following 5 arcs: the arc along $h$ from $a$ to $a'$, the arc connecting $a'$ to $y'$, the arc along $w$ from $y'$ to $z'$, the arc connecting $z'$ to $b'$, and the arc along $h$ from $b'$ to $b$ (see Figure 5).
+
+It follows that
+
+$$ \mathrm{length}(\beta) \leq 4D + (k(g,p) + R + 6)D. $$
+
+Now we use the divergence function $f$ for $C_1$ to bound the length of $\beta$
+from below. Indeed, for sufficiently large $D$, we have
+
+$$ \mathrm{length}(\beta) \geq f(D-c), $$
+---PAGE_BREAK---
+
+FIGURE 5. The length of the path $\beta$ (the dotted path) is bounded above by $4D + (k(g,p) + R + 6)D$.
+
+where $c$ is a constant related to $f(0)$, and which does not affect the growth rate of the function $f$. Therefore,
+
+$$f(D-c) \leq 4D + (k(g,p) + R + 6)D.$$
+
+Therefore, if $D > k(g,p) + R + 6$, $\omega$ cannot be arbitrarily large because $f(x)$ eventually dominates $x^2$. This completes the proof of the lemma. $\square$
+
+**Remark 6.2.** We note that the conclusion of Lemma 6.1 is not at all sharp; indeed, the same argument would have shown that $W$ is eventually smaller than $(k(g,p) + R + 6)^\lambda$ for any $\lambda \in (0, 1)$. However, we do not concern ourselves with this because the contribution to the quasiconvexity of nested sequences coming from $W$ will be dominated by a larger term, as will be seen below.
+
+We have now shown that the collection of vertices of the sequence $(\tau_{ij})_j$ is quasiconvex with quasiconvexity constant $k(g,p) + R + 6$. It remains to analyze the vertex cycles of tracks that are not in this subsequence. If $v$ is such a vertex and $\omega$ is sufficiently large, we know that $v$ is within $k(g,p)+6$ from some vertex of one of the $\tau_{ij}$'s. In any $\delta$-hyperbolic space, geodesics with nearby end points fellow travel, in that they remain within a bounded neighborhood of one another, whose diameter depends only on $\delta$ and the distance between endpoints.
+
+Indeed, if $h$ is any geodesic segment connecting arbitrary vertices $v_1$ and $v_2$, $h$ must remain within $2\delta + k(g,p) + 6 \leq 40 + k(g,p)$ of some geodesic connecting vertices of the $\tau_{ij}$.
+
+Therefore, the collection of all vertices of the sequence $(\tau_i)_{i \in I_1}$ is a $(46+R+2k(g,p))$-quasiconvex subset of $C_1$. This completes the proof of Theorem 1.1. $\square$
+---PAGE_BREAK---
+
+*Proof of Corollary 1.3.* Masur and Minsky [13] complete their argument showing the quasiconvexity of $D(g) \subset C_1(S_g)$ by noting that any two disks in $D(g)$ can be connected by a path in $D(g)$ representing a *well-nested curve replacement sequence*, a certain kind of nested train track sequence with $R$-bounded steps for which one can take $R$ to be 15.
+
+Thus, we see that $D(g)$ is $(61 + 2k(g, 0))$-quasiconvex, and this completes the proof of Corollary 1.3. $\square$
+
+## 6.1. PROOF OF THEOREM 1.2.
+
+The purpose of this subsection is to prove Theorem 1.2, which states that the splitting and sliding sequences project to $O(\omega^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface $Y \subseteq S$. To do this, we simply follow the original argument of [14], effectivizing along the way.
+
+We first introduce some terminology. Given a subsurface $Y$, as in section 2, let $S^Y$ denote the (non-compact) covering space of $S$ corresponding to $Y$. If $\tau$ is a train track on $S$, let $\tau^Y$ denote the pre-image of $\tau$ under the covering projection $S^Y \to S$. Let $C(\tau^Y)$ (respectively, $\mathcal{A}C(\tau^Y)$) denote the collection of essential, non-peripheral, simple closed curves (respectively, curves and arcs) in the Gromov compactification of $S^Y$ whose interiors are train paths on $\tau^Y$. Let $V(\tau)$ denote the collection of vertex cycles of a track $\tau$.
+
+Then, if $Y$ is not an annulus, define the *induced track*, denoted $\tau|_Y$, to be the union of branches of $\tau^Y$ traversed by some element of $C(\tau^Y)$.
+
+*Proof of Theorem 1.2.* We first note that any splitting and sliding sequence $(\tau_i)_i$ is a nested train track sequence with $Z$-bounded steps, for some uniform constant $Z$. Indeed, if $\tau_i$ is obtained from $\tau_{i-1}$ by either a splitting or a sliding, any vertex cycle of $\tau_i$ may intersect a vertex cycle of $\tau_{i-1}$ at most 6 times over any branch of $\tau_{i-1}$. Thus, there is some linear function $f: \mathbb{N} \to \mathbb{N}$ such that $i(v_i, v_{i-1}) < f(\omega(g,p))$ whenever $(\tau_i)_i$ is a sliding and splitting sequence on $S_{g,p}$ and $v_i$ (respectively, $v_{i-1}$) is any vertex cycle of $\tau_i$ (respectively, $\tau_{i-1}$); therefore, as a consequence of Theorem 3.2, for sufficiently large $\omega$,
+
+$$d_C(v_i, v_{i-1}) < 4.$$
+
+To show that $(\psi_Y(\tau_i))_i$ is an $O(\omega^2)$-unparameterized quasigeodesic in $C(Y)$, we will exhibit a splitting and sliding sequence $(\sigma_i)_i$ on $Y$ such that $d_C(\tau_i, \sigma_i) = O(1)$. Then we will be done by applying Theorem 1.1 to the sequence $(\sigma_i)$.
+
+Given a vertex cycle $\alpha$ of $\tau_j|_Y$, define $\sigma_j \subset \tau_j|_Y$ to be the minimal track carrying $\alpha$; thus, $\sigma_j$ is recurrent by construction, and Masur, Mosher, and Schleimer [14] show $\sigma_j$ to be transversely recurrent as well.
+---PAGE_BREAK---
+
+Furthermore, they show that $\sigma_{j+1}$ is obtained from $\sigma_j$ by a slide or a split so long as $\sigma_j \neq \sigma_{j+1}$. Therefore, $(\sigma_i)_i$ constitutes a sliding and splitting sequence of birecurrent train tracks and thus is a nested train track sequence on $Y$ with $Z$-bounded steps.
+
+Since $\sigma_j$ is a subtrack of $\tau_j|_Y$, by Lemma 5.3, any vertex cycle of $\sigma_j$ is a vertex cycle of $\tau_j|_Y$, and therefore the diameter of $V(\tau_j|_Y) \cup V(\sigma_j)$ is no more than 6 for sufficiently large $\omega$.
+
+Since $\alpha$ is carried by $\tau_j|_Y$, it is also carried by $\tau_j$. Masur, Mosher, and Schleimer [14] then make use of a lemma which implies the existence of a vertex cycle $\beta_j$ of $\tau_j$ which intersects the subsurface $Y$ essentially. By [14, Lemma 2.8 and Lemma 5.4],
+
+$$i(\pi_Y(\beta_j), v_j) < 8|\mathcal{B}(\tau_j)|,$$
+
+and therefore, by Lemma 4.6 and Theorem 3.2, for $\omega$ sufficiently large,
+
+$$d_C(\pi_Y(\beta_j), v_j) < 4.$$
+
+This same argument applies to any vertex cycle of $\tau_j$ which projects non-trivially to $Y$, and thus we conclude that
+
+$$d_Y(\sigma_j, \tau_j) \le d_Y(\sigma_j, \tau_j|_Y) + d_Y(\tau_j|_Y, \tau_j) < 6 + 4 = 10,$$
+
+for all $\omega$ sufficiently large. $\square$
+
+**Acknowledgments.** The author would primarily like to thank his adviser, Yair Minsky, for his guidance and for many helpful suggestions. He would also like to thank Ian Biringer, Catherine Pfaff, Saul Schleimer, and Harold Sultan for their time and for the many motivating conversations they’ve had with the author regarding this work. Finally, the author thanks the referee for several helpful comments.
+
+REFERENCES
+
+[1] Aaron Abrams and Saul Schleimer, *Distances of Heegaard splittings*, Geom. Topol. **9** (2005), 95–119 (electronic).
+
+[2] Tarik Aougab. *Uniform hyperbolicity of the graphs of curves*. arXiv:1212.3160 [math.GT]. Available at http://arxiv.org/pdf/1212.3160.pdf.
+
+[3] Brian H. Bowditch, *Uniform hyperbolicity of the curve graphs*. Available at http://homepages.warwick.ac.uk/masgak/papers/uniformhyp.pdf.
+
+[4] Matt Clay, Kasra Rafi, and Saul Schleimer, *Uniform hyperbolicity of the curve graph via surgery sequences*. arXiv:1302.5519 [math.GT]. Available at http://arxiv.org/pdf/1302.5519.pdf.
+
+[5] Benson Farb and Dan Margalit, *A Primer on Mapping Class Groups*. Princeton Mathematical Series, 49. Princeton, NJ: Princeton University Press, 2012.
+---PAGE_BREAK---
+
+[6] Samuel Fiorini, Gwenaël Joret, Dirk Oliver Theis, and David R. Wood, *Small minors in dense graphs*, European J. Combin. **33** (2012), no. 6, 1226–1245. arXiv:1005.0895 [math.CO]. Available at http://arxiv.org/pdf/1005.0895.pdf.
+
+[7] John Hempel, *3-manifolds as viewed from the curve complex*, Topology **40** (2001), no. 3, 631–657.
+
+[8] Ursula Hamenstädt, *Geometry of the complex of curves and of Teichmüller space* in Handbook of Teichmüller Theory. Vol. I. Ed. Athanase Papadopoulos. IRMA Lectures in Mathematics and Theoretical Physics, 11. Zürich: Eur. Math. Soc., 2007. 447–467.
+
+[9] Sebastian Hensel, Piotr Przytycki, and Richard C. H. Webb, *Slim unicorns and uniform hyperbolicity for arc graphs and curve graphs*. arXiv:1301.5577 [math.GT]. Available at http://arxiv.org/pdf/1301.5577.pdf.
+
+[10] Steven P. Kerckhoff, *The measure of the limit set of the handlebody group*, Topology **29** (1990), no. 1, 27–40.
+
+[11] Howard A. Masur and Yair N. Minsky, *Geometry of the complex of curves. I. Hyperbolicity*, Invent. Math. **138** (1999), no. 1, 103–149.
+
+[12] ————, *Geometry of the complex of curves. II. Hierarchical structure*, Geom. Funct. Anal. **10** (2000), no. 4, 902–974.
+
+[13] ————, *Quasiconvexity in the curve complex* in In the Tradition of Ahlfors and Bers, III. Ed. William Abikoff and Andrew Haas. Contemporary Mathematics, 355. Providence, RI: Amer. Math. Soc., 2004. 309–320.
+
+[14] Howard Masur, Lee Mosher, and Saul Schleimer, *On train-track splitting sequences*, Duke Math. J. **161** (2012), no. 9, 1613–1656.
+
+[15] Howard Masur and Saul Schleimer, *The geometry of the disk complex*, J. Amer. Math. Soc. **26** (2013), no. 1, 1–62.
+
+[16] Lee Mosher, *Train track expansions of measured foliations*. Available at http://andromeda.rutgers.edu/sinmosher/arationality031228.pdf. 2003.
+
+[17] Ken'ichi Ohshika, *Discrete Groups*. Translated from the 1998 Japanese original by the author. Translations of Mathematical Monographs, 207. Iwanami Series in Modern Mathematics. Providence, RI: American Mathematical Society, 2002.
+
+[18] R. C. Penner and J. L. Harer, *Combinatorics of Train Tracks*. Annals of Mathematics Studies, 125. Princeton, NJ: Princeton University Press, 1992.
+
+[19] William P. Thurston, *On the geometry and dynamics of diffeomorphisms of surfaces*, Bull. Amer. Math. Soc. (N.S.) **19** (1988), no. 2, 417–431.
+
+DEPARTMENT OF MATHEMATICS; YALE UNIVERSITY; 10 HILLHOUSE AVENUE; NEW HAVEN, CT 06510 USA
+
+E-mail address: tarik.aougab@yale.edu
\ No newline at end of file
diff --git a/samples/texts_merged/7642017.md b/samples/texts_merged/7642017.md
new file mode 100644
index 0000000000000000000000000000000000000000..3551ceee9fc6dddc1b098efac5464bd729379fc5
--- /dev/null
+++ b/samples/texts_merged/7642017.md
@@ -0,0 +1,313 @@
+
+---PAGE_BREAK---
+
+Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis
+
+Hicham Badri, Hussein Yahia, Khalid Daoudi
+
+► To cite this version:
+
+Hicham Badri, Hussein Yahia, Khalid Daoudi. Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis. European Conference on Computer Vision, ECCV 2014, Sep 2014, Zürich, Switzerland. [hal-01064793](https://hal.inria.fr/hal-01064793)
+
+HAL Id: hal-01064793
+
+https://hal.inria.fr/hal-01064793
+
+Submitted on 17 Sep 2014
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+# Fast and Accurate Texture Recognition with Multilayer Convolution and Multifractal Analysis
+
+Hicham Badri, Hussein Yahia, and Khalid Daoudi
+
+INRIA Bordeaux Sud-Ouest, 33405 Talence, France
+{hicham.badri,hussein.yahia,khalid.daoudi}@inria.fr
+
+**Abstract.** A fast and accurate texture recognition system is presented. The new approach consists in extracting locally and globally invariant representations. The locally invariant representation is built on a multi-resolution convolutional network with a local pooling operator to improve robustness to local orientation and scale changes. This representation is mapped into a globally invariant descriptor using multifractal analysis. We propose a new multifractal descriptor that captures rich texture information and is mathematically invariant to various complex transformations. In addition, two more techniques are presented to further improve the robustness of our system. The first technique consists in combining the generative PCA classifier with multiclass SVMs. The second technique consists of two simple strategies to boost classification results by synthetically augmenting the training set. Experiments show that the proposed solution outperforms existing methods on three challenging public benchmark datasets, while being computationally efficient.
+
+## 1 Introduction
+
+Texture classification is one of the most challenging computer vision and pattern recognition problems. A powerful texture descriptor should be invariant to scale, illumination, occlusions, perspective/affine transformations and even non-rigid surface deformations, while being computationally efficient. Modeling textures via statistics of spatial local textons is probably the most popular approach to build a texture classification system [1,2,3,4,5,6,7]. Based on this Bag-of-Words architecture, these methods try to design a robust local descriptor. Distributions over these textons are then compared using a proper distance and a nearest neighbor or kernel SVMs classifier [8]. Another alternative to regular histograms consists in using multifractal analysis [9,10,11,12,13]. The VG-fractal method [9] statistically represents the textures with the full PDF of the local fractal dimensions or lengths, while the methods in [10,11,12,13] make use of the box-counting method to estimate the multifractal spectrum. Multifractal-based descriptors are theoretically globally invariant to bi-Lipschitz transforms that include perspective transforms and texture deformations. A different approach recently presented in [14] consists in building a powerful local descriptor by cascading wavelet scattering transformations of image patches and using a generative PCA classifier [15]. Unfortunately, while these methods achieve high accuracy on some standard benchmark datasets, little attention is given to the computational efficiency, which is crucial in a real-world system.
+---PAGE_BREAK---
+
+We present in this paper a new texture classification system which is both accurate and computationally efficient. The motivation behind the proposed work comes from the success of multifractal analysis [10,9,11,12,13]. Given an input texture, the image is filtered with a small filter bank for various filter orientations. A pooling operator is then applied to improve robustness to local orientation change. This process is repeated for different resolutions for a richer representation. This first step generates various low-pass and high-pass responses that form a *locally invariant* representation. The mapping towards the final descriptor is done via multifractal analysis. It is well known that the *multifractal spectrum* encodes rich texture information. The methods in [10, 11, 12, 13] use the box-counting method to estimate the multifractal spectrum. However, this method is unstable due to the limited resolution of real-world images. We present a new multifractal descriptor that is more stable and improves invariance to bi-Lipschitz transformations. This improvement is validated by extensive experiments on public benchmark datasets. The second part of our work concerns training strategies to improve classification rates. We propose to combine the generative PCA classifier [14,15] with kernel SVMs [8] for classification. We also introduce two strategies called "synthetic training" to artificially add more training data based on illumination and scale change. Results outperforming the state-of-the-art are obtained over challenging public datasets, with high computational efficiency.
+
+The paper is organized as follows: section 2 describes the proposed descriptor, section 3 presents the proposed training strategies, section 4 presents classification results conducted on 3 public datasets as well as a comparison with 9 state-of-the-art methods.
+
+## 2 Robust Invariant Texture Representation
+
+The main goal of a texture recognition system is to build an *invariant* representation, a mapping which reduces the large intra-class variability. This is a very challenging problem because the invariance must cover various complex transformations such as translation, rotation, occlusion, illumination change, non-rigid deformations and perspective view, among others. As a result, two similar textures with different transformation parameters must have similar descriptors. An example is given in Figure 1. Not only should the system be accurate, it should also be computationally efficient; otherwise, its use in a real-world system would be limited by the long processing time needed to extract the descriptor. Our goal in this paper is to build both an *accurate* and *fast* texture recognition system. Our non-optimized Matlab implementation takes around 0.7 seconds to extract the descriptor from a medium-size image (480 × 640) on a modern laptop. The processing time can be further decreased by reducing the resolution of the image without sacrificing much accuracy. This is due to the strong robustness of our descriptor to scale changes via accurate multifractal statistics that encode rich multi-scale texture information. We explain in this section how we build the proposed descriptor, the motivation behind the approach and the connection with previous work.
+
+### 2.1 Overview of the Proposed Approach
+
+The proposed descriptor is based on two main steps:
+---PAGE_BREAK---
+
+Fig. 1: Intra-class variability demonstration. The three textures 1, 2 and 3 exhibit strong changes in scale and orientation as well as non-rigid deformations. As can be seen, the proposed descriptor is nearly invariant to these transformations (see section 2).
+
+1. Building a *locally* invariant representation: using multiple high-pass filters, we generate different sparse representations for different filter orientations. A pooling operator is applied over orientations to increase the local invariance to orientation change. The process is repeated for multiple image resolutions for a richer representation.
+
+2. Building a *globally* invariant representation: the first step generates various images that encode different texture information. We also include the multi-resolution versions of the input to provide low-pass information. We need a mapping that transforms this set of images into a stable, fixed-size descriptor. We use multifractal analysis to statistically describe each one of these images. We present a new method that extracts rich information directly from local singularity exponents. The local exponents encode rich multi-scale texture information. Their log-normalized distribution represents a stable mapping which is invariant to complex bi-Lipschitz transforms. As a result, the proposed multifractal descriptor is mathematically proven to be robust to strong environmental changes.
+
+## 2.2 Locally Invariant Representation
+
+A locally invariant representation aims at increasing the similarity of local statistics between textures of the same class. To build this representation, we construct a simple convolutional network where the input image is convolved with a filter bank for various orientations, and then pooled to reduce local orientation change. The multilayer extension consists in repeating the same process for various image resolutions on the low-pass output of the previous resolution, which offers a richer representation.
+---PAGE_BREAK---
+
+Given an input texture *I*, the image is first low-pass filtered with a filter $\psi_l$ to reduce small image-domain perturbations and produce an image $J_{1,0}$. This image is then filtered with multiple zero-mean high-pass filters $\psi_{k,\theta}$, where *k* denotes the filter number and $\theta$ its orientation. High-pass responses encode higher-order statistics that are not present in the low-pass response $J_{1,0}$. A more stable approach consists in applying the modulus to the high-pass responses, which imposes symmetric statistics and improves the invariance of the local statistics. Filtering with multiple different filters naturally increases the amount of texture information that is extracted further on via multifractal analysis. In order to increase the local invariance to orientation, we apply a pooling operator $\phi_\theta: \mathcal{R}^{i \times j \times n} \rightarrow \mathcal{R}^{i \times j}$ to the oriented outputs of each filter:
+
+$$ J_{1,k} = \phi_{\theta}(|J_{1,0} \star \psi_{k,\theta}|, \theta = \theta_1, \dots, \theta_n), \quad k = 1, \dots, K, \qquad (1) $$
+
+where *n* is the number of orientations and *i* × *j* is the size of the low-pass image. As a result, we obtain 1 low-pass response and *K* high-pass responses, each image encoding different statistics. For a richer representation, we repeat the same operation for different resolutions $s = 2^0, \dots, 2^{-L}$, where $s = 1$ is the finest resolution and $s = 2^{-L}$ is the coarsest. The image generation process is then generalized as follows:
+
+$$ J_{s,k} = \begin{cases} I \star \psi_l & k=0, s=1 \\ \downarrow (J_{2s,0} \star \psi_l) & k=0, s \neq 1 \\ \phi_\theta(|J_{s,0} \star \psi_{k,\theta}|, \theta=\theta_1, \dots, \theta_n) & k=1, \dots, K, \end{cases} \qquad (2) $$
+
+where $\downarrow$ denotes the downsampling operator. We found that calculating statistics on multiple resolutions instead of a single one increases significantly the robustness of the descriptor. This can be expected because two textures may seem "more similar" at a lower resolution. As a result, the intra-class variability decreases as the resolution decreases, but keeping higher resolution images is important to ensure extra-class decorrelation.
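+As a rough sketch of eqs. (1)-(2), the following NumPy/SciPy snippet uses steered derivative-of-Gaussian filters as a stand-in for the paper's filter bank $\psi_{k,\theta}$, a Gaussian low-pass for $\psi_l$, and a max as the pooling operator $\phi_\theta$; the filter choices and all parameters are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np
from scipy import ndimage

def oriented_responses(img, angles, sigma=1.0):
    # |J * psi_{k,theta}|: a derivative-of-Gaussian filter steered to
    # several orientations (illustrative stand-in for the paper's bank)
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    return [np.abs(np.cos(t) * gx + np.sin(t) * gy) for t in angles]

def generate_images(I, levels=3,
                    angles=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Eq. (2): one low-pass response J_{s,0} per resolution plus the
    # orientation-pooled high-pass response (phi_theta taken as a max)
    images = []
    J = ndimage.gaussian_filter(I, 1.0)          # J_{1,0} = I * psi_l
    for _ in range(levels):
        images.append(J)                          # low-pass response
        images.append(np.max(oriented_responses(J, angles), axis=0))
        J = ndimage.gaussian_filter(J, 1.0)[::2, ::2]  # low-pass, downsample
    return images
```

+A max over orientations is only one possible choice of pooling; an average would also fit the definition of $\phi_\theta$.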
+
+## Dimensionality Reduction with Pooling
+
+Using multiple filters $\psi_{k,\theta}$ dramatically increases the size of the image set. Since each image $J_{s,k}$ will be used to extract statistics via multifractal analysis, this would result in a very large descriptor. One resulting issue is the high dimensionality of the training set. Another is the processing time, as the statistics must be computed for each image. We propose to merge different high-pass responses $J_{s,k}$ together to reduce the number of images. A straightforward approach is to gather various images $\{J_{s,k}, k=t, \dots, u\}$ and then apply a pooling operator $\phi_r$ that merges each image subset into one single image $J_{s,k_{t,\dots,u}}$:
+
+$$ J_{s,k_{t,\dots,u}} = \phi_r(J_{s,k}, k=t, \dots, u). \qquad (3) $$
+
+As a result, the number of high-pass responses is decreased, which leads to a reduced-size descriptor. The pooling operator $\phi_r$ can be either the mean or the min/max functions. We take $\phi_r$ as a maximum function in this paper. An example is given in Figure 2 for one resolution ($s = 2^0$) using 6 high-pass filters and one low-pass filter. The
+---PAGE_BREAK---
+
+number of images is reduced from 7 to 3. For 5 resolutions ($s = 2^0, \dots, 2^{-4}$), the total number of images goes from 35 to 15, which is an important reduction.
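+Taking $\phi_r$ as a pixelwise maximum, as stated above, the response merging of eq. (3) reduces to a few lines; the grouping of the responses below is an illustrative choice:

```python
import numpy as np

def merge_responses(responses, groups):
    # Eq. (3): phi_r = pixelwise max over each subset {J_{s,k}, k=t..u}
    return [np.max([responses[k] for k in g], axis=0) for g in groups]
```

+With 6 high-pass responses grouped into two subsets of similar filters, this reproduces the 6-to-2 reduction of Figure 2.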
+
+Fig. 2: Image generation example applied on the texture input $I$ for one resolution using 6 high-pass filters. The images $J_{0,1,\dots,6}$ are a result of the orientation pooling (eq. 2). The 6 images are reduced to 2 images using a pooling operator $\phi_r$ on similar responses to reduce the dimensionality. The same process is repeated for multiple resolutions.
+
+## 2.3 Globally Invariant Representation
+
+Once the set of low-pass and high-pass images is generated, we need to extract global statistics: a mapping into a fixed-size descriptor which is *globally invariant* to the complex physical transformations. We propose to use a new multifractal approach to statistically describe textures suffering from strong environmental changes. To understand the difference between the proposed method and previous work, we first present the standard fractal and multifractal analysis framework used by previous methods, and then introduce the proposed approach.
+
+**Multifractal Analysis** In a nutshell, a fractal object $E$ is self-similar across scales. One characteristic of its irregularity is the so-called *box fractal dimension*. By measuring a fractal object at multiple scales $r$, the box fractal dimension is defined as a power-law relationship between the scale $r$ and the smallest number of sets of length $r$ covering $E$ [16]:
+
+$$ \dim(E) = \lim_{r \to 0} \frac{\log N(r, E)}{-\log r}. \quad (4) $$
+
+Using squared boxes of size $r$, this dimension can be estimated numerically, known as the *box-counting method*. Multifractal analysis is an extension of this important notion. A multifractal object $F$ is composed of many fractal components $F_{1,...,f}$. In this
+---PAGE_BREAK---
+
+case, a single fractal dimension is not sufficient to describe this object. The *multifractal spectrum* is the collection of all the associated fractal dimensions that describe the multifractal object.
+
+It is easy to show mathematically that the fractal dimension is invariant to bi-Lipschitz transformations [17], which include various transformations such as non-rigid transformations, viewpoint change, translation, rotation, etc. As a result, the multifractal spectrum is also invariant to these transformations. This makes the multifractal spectrum an attractive tool to globally describe textures. However, the box-counting method gives a rather crude estimation of the real fractal dimension. The fractal dimension is estimated for each fractal set using a log-log regression. As the resolution $r$ is supposed to be very small ($r \to 0$), using small-sized boxes on a relatively low-resolution image results in a biased estimation due to the limited resolution of real-world images [18]. Nevertheless, the box-counting method has been used as the core of various recent multifractal texture descriptors [10,11,12,13]. We present a different method to statistically describe textures using multifractal analysis. Contrary to previous methods, we use a new measure which is based on the distribution of local singularity exponents. It can be shown that this measure is related to the true multifractal spectrum, and its precision is attested by the high accuracy of the proposed descriptor. Moreover, this approach is computationally efficient, which permits achieving high accuracy at a reduced processing time.
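+For reference, the box-counting estimate of eq. (4) criticized above can be sketched as follows (a minimal NumPy illustration for a binary set; the box sizes are arbitrary choices):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    # Eq. (4): N(r) = number of r x r boxes touching the set;
    # dim(E) is minus the slope of log N(r) against log r.
    counts = []
    for r in sizes:
        h, w = mask.shape
        H, W = h - h % r, w - w % r                      # crop to a multiple of r
        blocks = mask[:H, :W].reshape(H // r, r, W // r, r)
        counts.append(blocks.any(axis=(1, 3)).sum())     # occupied boxes
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope
```

+On a filled square the estimate is 2, and on a straight line it is 1, as expected; on real textures the small range of usable box sizes is precisely what makes the estimate crude.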
+
+**Proposed Multifractal Descriptor** The proposed method first estimates the local singularity exponents $h(x)$ at each pixel $x$, and then applies the empirical histogram followed by a log operator to extract the global statistics $\varphi_h = \log(\rho_h + \epsilon)$. This operation is performed on all the images resulting from the first step, which yields multiple histograms $\varphi_{h_i}$. The concatenation of all these histograms forms the final descriptor.
+
+Let $J$ be an image, and $\mu_\psi(B(x,r)) = \int_{B(x,r)} (J \star \psi_r)(y)dy$ a positive measure, where $\psi_r$ is an appropriate wavelet at scale $r$ (Gaussian in our case) and $B(x,r)$ a closed disc of radius $r > 0$ centered at $x$. Multifractal analysis states that the wavelet projections scale as power laws in $r$ [19,20,21]. We use a microcanonical evaluation [20] which consists in assessing an exponent $h(x)$ for each pixel $x$:
+
+$$ \mu_{\psi}(B(x, r)) \approx \alpha(x)r^{h(x)}, \quad r \to 0. \qquad (5) $$
+
+The validity of equation (5) has been tested on a large dataset [21], which proves that natural images exhibit a strong multifractal behavior. Introducing the log, the formula is expressed as a linear fit:
+
+$$ \log(\mu_{\psi}(B(x, r))) \approx \log(\alpha(x)) + h(x)\log(r), \quad r \to 0. \qquad (6) $$
+
+Rewriting the equation in matrix form permits calculating all the exponents at once by solving the following linear system:
+
+$$ \underbrace{\begin{bmatrix} 1 & \log(r_1) \\ \vdots & \vdots \\ 1 & \log(r_l) \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} \log(\alpha(x_1)) & \cdots & \log(\alpha(x_N)) \\ h(x_1) & \cdots & h(x_N) \end{bmatrix}}_{\eta} = \underbrace{\begin{bmatrix} \log(\mu_{\psi}(B(x_1, r_1))) & \cdots & \log(\mu_{\psi}(B(x_N, r_1))) \\ \vdots & & \vdots \\ \log(\mu_{\psi}(B(x_1, r_l))) & \cdots & \log(\mu_{\psi}(B(x_N, r_l))) \end{bmatrix}}_{b}, \quad (7) $$
+---PAGE_BREAK---
+
+$$ \underset{\eta}{\operatorname{argmin}} \, \|A\eta - b\|_2^2, \qquad h(x_i) = \eta(2, i), \quad (8) $$
+
+where $N$ is the number of pixels of the image $J$ and $l$ is the number of scales used in the log-log regression. This matrix formulation is computationally efficient and plays an important role in the speed of the proposed method. Given the local exponents $h(x)$, which form an image of the same size as $J$ describing the local irregularity at each pixel, we now need to extract a fixed-size measure that globally describes the statistics of $h(x)$. Using the box-counting method, this would require extracting all the fractal sets $F_h = \{x \mid h(x) \approx h\}$ and then calculating the box-counting dimension of each set $F_h$. As discussed before, this approach leads to a crude estimation of the true multifractal spectrum due to the low resolution of real-world images. Moreover, a log-log regression would have to be performed on each fractal set. Instead, we propose to use the empirical histogram $\rho_h$ followed by a log operator:
+
+$$ \varphi_h = \log(\rho_h + \epsilon), \quad (9) $$
+
+where $\epsilon \ge 1$ is set to provide stability. The distribution of the local exponents is an invariant representation which encodes the multi-scale properties of the texture. The log acts as a normalization operator that nearly linearizes histogram scaling and makes the descriptor more robust to small perturbations. This way, we have access to reliable statistics¹. This log-histogram is calculated on each image generated in the first step, which results in a set of histograms $\varphi_{h_1,...,h_M}$, where *M* is the total number of generated images. The final descriptor $\varphi$ is constructed by concatenating $(\uplus)$ all the generated histograms:
+
+$$ \varphi = \biguplus_{m=1}^{M} \varphi_{h_m}. \quad (10) $$
+
+A descriptor example is given in Figure 3. This descriptor $\varphi$ is the result of the concatenation of 14 log exponents histograms calculated on the images generated with the first step of the method presented in section 2.2 and further explained in Figure 2. Three images are generated for each scale *s*; a low-pass response is presented in red, and two high-pass responses are presented in black and gray in the figure ².
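+The chain from exponent estimation (eqs. 7-8) to the final descriptor (eqs. 9-10) can be sketched as below. A Gaussian blur at scale $r$ stands in for the measure $\mu_\psi(B(x,r))$, and the radii, bin range and $\epsilon$ are illustrative choices, not the authors' exact settings:

```python
import numpy as np
from scipy import ndimage

def singularity_exponents(J, radii=(1, 2, 4, 8)):
    # Eqs. (7)-(8): one least-squares log-log fit for all pixels at once
    A = np.stack([np.ones(len(radii)), np.log(radii)], axis=1)   # l x 2
    b = np.stack([np.log(ndimage.gaussian_filter(np.abs(J), float(r)) + 1e-12).ravel()
                  for r in radii])                               # l x N
    eta = np.linalg.lstsq(A, b, rcond=None)[0]                   # 2 x N
    return eta[1].reshape(J.shape)                               # h(x) = eta(2, i)

def log_histogram(h, bins=32, rng=(-2.0, 2.0), eps=1.0):
    # Eq. (9): phi_h = log(rho_h + eps) over a fixed exponent range
    rho = np.histogram(h, bins=bins, range=rng, density=True)[0]
    return np.log(rho + eps)

def descriptor(images):
    # Eq. (10): concatenate the log-histograms of all M generated images
    return np.concatenate([log_histogram(singularity_exponents(J)) for J in images])
```

+The single `lstsq` call over all pixels is what the matrix formulation of eq. (7) buys: no per-pixel or per-fractal-set regression is needed.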
+
+## 2.4 Analysis
+
+The basic multifractal framework consists in generating multiple images and then extracting statistics using multifractal analysis. Multifractal descriptors are mathematically invariant to bi-Lipschitz transforms, which even includes non-rigid transformation and view-point change. The proposed method follows the same strategy, but is substantially different from the previous methods. The differences lie in both the image generation step and the statistical description. For instance, the WMFS method [13]
+
+¹ A mathematical relationship between the log exponents histogram and the multifractal spectrum is presented in the supplementary material.
+
+² A histogram was discarded for $s = 2^{-4}$ in the second high response (in gray) due to the large size of the filter which is larger than the actual size of the input image at resolution $s = 2^{-4}$.
+---PAGE_BREAK---
+
+Fig. 3: A descriptor example using a low-pass response and two high-pass responses for 5 resolutions $s = 2^0, \dots, 2^{-4}$. The exponents log-histogram is calculated for each response and for multiple image resolutions $s$.
+
+generates multiple images for multiple orientations; each oriented image is then analyzed using the Daubechies discrete wavelet transform as well as wavelet leaders [22]. The multifractal spectrum (MFS) is then estimated for each image, for a given orientation, using the box-counting method. Each MFS is concatenated for a given orientation and the final descriptor is defined as the mean of all the descriptors over the orientations. Contrary to this method, we use different high-pass filters instead of one single analyzing wavelet, which permits extracting different statistics. Generating multiple descriptors for multiple orientations is computationally expensive. In contrast, we generate only one descriptor. To ensure local robustness to orientation, we apply a pooling operator on the *filtered responses*. This approach is much more computationally efficient. Finally, the core of our method is the new multifractal descriptor, which permits extracting accurate statistics, contrary to the popular box-counting method, as explained in the previous section. The proposed method takes about 0.7 seconds to extract the whole descriptor on an image of size 480 × 640, compared to 37 seconds as reported for the state-of-the-art multifractal method [13]. Experiments show that the proposed descriptor also permits achieving higher accuracy, especially in large-scale situations where extra-class decorrelation is a challenging issue.
+
+## 2.5 Pre and Post Processing
+
+Pre-processing and post-processing can improve the robustness of a texture recognition system. For instance, the method in [12] performs a scale normalization step on each input texture using blob detection. This step first estimates the scale of the texture and then applies a normalization, which aims at increasing the robustness to scale change. Other texture classification methods such as [9] use Weber's law normalization to improve robustness to illumination. We do not use any scale normalization step such as [12,13]; we rather sometimes use histogram equalization to improve robustness to illumination change. We also use a post-processing step on the feature vector $\varphi$ using wavelet-domain soft-thresholding [?]. This step aims at increasing the intra-class correlation by
+---PAGE_BREAK---
+
+reducing small histogram perturbations (for more details, please refer to the supplementary material).
+
+# 3 Classification and Training Strategies
+
+The second part of our work concerns the training aspect of the texture recognition problem. The globally invariant representation offers a theoretically stable invariant representation via accurate multifractal statistics. However, there are other small transformations and perturbations that may occur in real-world images, and this is where a good training strategy helps us take advantage of the proposed descriptor in practice. We work on two ideas:
+
+1. The choice of the classifier can improve recognition rates: we introduce a simple combination between the Generative PCA classifier [14] and SVMs [8].
+
+2. The lack of training data is an issue: how can we get more data? Given an input training texture image, we synthetically generate more images by changing its illumination and scale. We call this strategy "synthetic training".
+
+Experiments on challenging public benchmark datasets, including a large-scale dataset with 250 classes, validate the robustness of the proposed solution.
+
+## 3.1 Classification
+
+**Support Vector Machines** SVMs [8] are widely used in texture classification [10,12,13,17,6]. Commonly used kernels include the Gaussian RBF, polynomial and $\chi^2$ kernels. Extension to the multiclass case can be done via strategies such as one-vs-one and one-vs-all. In this paper, we use the one-vs-all strategy with an RBF kernel. It consists in building a binary classifier for each class as follows: for each class, a positive label is assigned to the corresponding instances and a negative label is assigned to all the remaining instances. The winning class $c_{svm}$ can be chosen based on probability estimates [23] or a simple score maximization:
+
+$$ c_{svm} = \underset{1 \le c \le N_c}{\operatorname{argmax}} \{f_{svm}(x,c)\} , \quad f_{svm}(x,c) = \sum_{i=1}^{M_c} \alpha_i^c y_i^c K(x_i^c, x) + b_c, \quad (11) $$
+
+where $\alpha_i^c$ are the optimal Lagrange multipliers of the classifier representing the class $c$, $x_i^c$ are the support vectors of the class $c$, $y_i^c$ are the corresponding $\pm 1$ labels, $N_c$ is the number of classes and $x$ is the instance to classify.
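+A minimal sketch of the one-vs-all decision rule of eq. (11). The support vectors, multipliers and biases below are hand-set for illustration; in practice they come from SVM training:

```python
import numpy as np

def rbf(u, v, gamma=0.5):
    # RBF kernel K(u, v) = exp(-gamma * ||u - v||^2)
    return np.exp(-gamma * np.sum((u - v) ** 2))

def svm_score(x, alphas, labels, svs, bias, gamma=0.5):
    # Eq. (11): f_svm(x, c) = sum_i alpha_i^c y_i^c K(x_i^c, x) + b_c
    return sum(a * y * rbf(sv, x, gamma)
               for a, y, sv in zip(alphas, labels, svs)) + bias

def one_vs_all(x, classifiers, gamma=0.5):
    # winning class c_svm = argmax of the per-class decision values
    scores = [svm_score(x, *clf, gamma=gamma) for clf in classifiers]
    return int(np.argmax(scores)), scores
```

+Returning the raw scores alongside the winning class is deliberate: the score separation is reused by the GPCA-SVM combination of section 3.1.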
+
+**Generative PCA Classifier** The generative PCA (GPCA) classifier is a simple PCA-based classifier recently used in [15,14]. Given a test descriptor $x$, GPCA finds the closest class centroid $\mathbb{E}(\{x_c\})$ to $x$, after ignoring the first $D$ principal variability directions. Let $V_c$ be the linear space generated by the $D$ eigenvectors of the covariance matrix of largest eigenvalues, and $V_c^\perp$ its orthogonal complement. The generative PCA classifier uses the projection distance associated to $P_{V_c^\perp}$:
+
+$$ c_{pca} = \underset{1 \le c \le N_c}{\operatorname{argmin}} \| P_{V_c^\perp} (x - \mathbb{E}(\{x_c\})) \|^2. \quad (12) $$
+---PAGE_BREAK---
+
+Classification consists in choosing the class $c_{pca}$ with the minimum projection distance.
+
+**GPCA-SVM Classifier** We propose to combine GPCA and SVMs into one single classifier. The idea behind this combination comes from the observation that SVMs and GPCA often fail on different instances. As a result, a well-designed combination of these classifiers should lead to improved performance. We propose a combination based on the SVM score separation between the two classifiers' candidate outputs:
+
+$$ c_{final} = \begin{cases} c_{svm} & \text{if } f_{svm}(x, c_{svm}) - f_{svm}(x, c_{pca}) \geq th_{svm} \\ c_{pca} & \text{otherwise,} \end{cases} \quad (13) $$
+
+where $th_{svm}$ is a threshold parameter. The score separation gives an idea of the SVM's confidence in classifying a given instance. Another similar approach would be to use probability estimates [23] instead of the score. If the measure $f_{svm}(x, c_{svm}) - f_{svm}(x, c_{pca})$ is relatively large, the SVM is quite "confident" about the result. Otherwise, the classifier selects the GPCA result. Determining the best threshold $th_{svm}$ for each instance is an open problem. In this paper, we rather fix a threshold value for each experiment. We generally select a small threshold for small training sets and larger thresholds for larger sets. Even if this strategy is not optimal, experiments show that the combination improves classification rates as expected.
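+The combination rule of eq. (13) is then a one-line decision, sketched here with the per-class SVM scores given as a list; the threshold value is an illustrative choice:

```python
def combine_gpca_svm(scores_svm, c_svm, c_pca, th_svm=0.2):
    # Eq. (13): keep the SVM label only when its score separation over the
    # GPCA candidate is at least th_svm; otherwise fall back to GPCA.
    if scores_svm[c_svm] - scores_svm[c_pca] >= th_svm:
        return c_svm
    return c_pca
```

+When the two classifiers already agree ($c_{svm} = c_{pca}$), the separation is zero and the common label is returned either way.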
+
+## 3.2 Synthetic Training
+
+One important problem in training is coping with the low amount of examples. We propose a simple strategy to artificially add more data to the training set by changing illumination and scale of each instance of the training set. While this idea seems simple, it can have a dramatic impact on the performance as we will see in the next section.
+
+**Multi-Illumination Training** Given an input image *I*, multi-illumination training consists in generating other images with the same content as *I* but different illumination. There are two illumination cases: the first consists in a *uniform* change, by intensity scaling of the form *a*I, where *a* is a given scalar. The second consists in a *nonuniform* change, using histogram matching against a set of histograms. The histograms can come from external images, or even from the training set itself (for example by transforming or combining a set of histograms).
+
+**Multi-Scale Training** Given an input image *I*, multi-scale training consists simply in generating other images of the same size as *I* by zooming in and out. In this paper, we use around 4 generated images, 2 by zooming in and 2 others by zooming out.
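+A minimal sketch of the two synthetic-training augmentations, assuming images normalized to [0, 1]; the gain and zoom factors are illustrative, and a nearest-neighbour resampling stands in for proper zooming:

```python
import numpy as np

def synthetic_training_set(I, gains=(0.9, 0.95, 1.05, 1.1),
                           zooms=(0.8, 0.9, 1.1, 1.25)):
    # Uniform illumination change a*I, plus zoom-in/out images resampled
    # back to the input size (nearest-neighbour, to stay dependency-free).
    out = [np.clip(a * I, 0.0, 1.0) for a in gains]
    h, w = I.shape
    for s in zooms:
        ys = np.clip(np.round(np.arange(h) / s).astype(int), 0, h - 1)
        xs = np.clip(np.round(np.arange(w) / s).astype(int), 0, w - 1)
        out.append(I[np.ix_(ys, xs)])   # s > 1 zooms in, s < 1 zooms out
    return out
```

+Each training image thus yields several extra instances at training time only; the test-time pipeline is unchanged.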
+
+# 4 Texture Classification Experiments
+
+We present in this section texture classification results conducted on standard public datasets **UIUC** [24,1], **UMD** [25] and **ALOT** [26,27], as well as a comparison with 9 state-of-the-art methods.
+---PAGE_BREAK---
+
+**Datasets Description** The UIUC dataset [24,1] is one of the most challenging texture datasets presented so far. It is composed of 25 classes; each class contains 40 grayscale images of size 480 × 640 with strong scale, rotation and viewpoint changes in an uncontrolled illumination environment. Some images also exhibit strong non-rigid deformations. Some samples are presented in Figure 4. The UMD dataset [25] is similar to UIUC with higher-resolution images (1280 × 960), but exhibits fewer non-rigid deformations and stronger illumination changes compared to UIUC. To evaluate the proposed method on a large-scale dataset, we choose the ALOT dataset [26,27]. It consists of 250 classes with 100 samples each. We use the same setup as the previous multifractal methods [13]: grayscale images at half resolution (768 × 512). The ALOT dataset is very challenging as it contains a significantly larger number of classes (250, compared to 25 for UIUC and UMD) and very strong illumination changes (8 illumination levels). The viewpoint change is, however, less dramatic compared to UIUC and UMD.
+
+Fig. 4: Texture samples from the **UIUC** dataset [24,1]. Each row represents images from the same class with strong environmental changes.
+
+**Implementation details** In order to build a fast texture classification system, we use only two high-pass filtering responses, which results in 3 histograms per image resolution³. The number of image scales is fixed to 5. The filter bank consists of high-pass wavelet filters (Daubechies, Symlets and Gabor). A more robust descriptor can be built by increasing the number of filters and orientations. Filtering can be parallelized for faster processing. While augmenting the number of filters slightly improves classification results, the minimalist setup presented above, coupled with the training strategies introduced in this paper, permits outperforming existing techniques while additionally offering computational efficiency.
+
+**Evaluation**
+
+We evaluate the proposed system and compare it with state-of-the-art methods for 50 random splits between training and testing. The evaluation consists in three steps:
+
+³ Except for **ALOT** dataset, we use 3 high-pass responses for a more robust representation.
+---PAGE_BREAK---
+
+1. log-histogram vs. box-counting: We evaluate the precision of our log-histogram method and compare it with the box-counting method used in previous methods.
+
+2. Learning efficiency: We compare the proposed GPCA-SVM combination with single GPCA and SVM results and see how the proposed synthetic training strategy improves classification rates.
+
+3. We compare our main results with **9** state-of-the-art results.
+
+**log-histogram vs. box-counting** In this experiment, we replace the log-histogram step of our approach with the box-counting method widely used in the previous multifractal methods to see if the proposed log-histogram leads to a more accurate bi-Lipschitz invariance. The results are presented in Figure 5. As can be seen, the log-histogram approach leads to higher performance, especially when more data is available. This clearly shows that indeed, the log-histogram leads to a better bi-Lipschitz invariance, as theoretically discussed before. The log-histogram is a simple operation that permits our system to achieve high computational efficiency.
+
+Fig. 5: Comparison between the box-counting method and the proposed log-histogram approach for various dataset training sizes (5, 10 and 20). The proposed approach leads to a more accurate descriptor.
+
+**Learning Efficiency** In this experiment, we first compare the proposed GPCA-SVM combination with single GPCA and SVM classifiers using the proposed descriptor. Each dataset is presented in the form $D_{(y)}^x$, where $x$ is the name of the dataset and $y$ is the training size in number of images. The best results are in bold. As can be seen in Table 1, GPCA-SVM does indeed improve classification rates. We expect even better results with a better strategy to set the threshold parameter $th_{svm}$, as in the presented experiments the threshold is fixed for all the instances. Now we compare the results with and without the proposed synthetic training strategy. As can be seen, synthetic training leads to a dramatic improvement. This is a very interesting approach as it increases only the training time. The system can achieve higher recognition accuracy for almost the same computational cost. For the **UMD** and **ALOT** datasets, we use uniform illumination change with the multiplicative parameter $a$ in the set $\{0.9, 0.95, 1.05, 1.1\}$. For the **UIUC** dataset, we use the nonuniform illumination change
+---PAGE_BREAK---
+
+with two histograms. For the multi-scale training, we use only four generated images (two by zooming in and two by zooming out), which increases the training set 9 times for the **UMD** and **UIUC** datasets (no multi-scale training is used for the **ALOT** dataset).
+
+| | | D(5)UIUC | D(10)UIUC | D(20)UIUC | D(5)UMD | D(10)UMD | D(20)UMD | D(10)ALOT | D(30)ALOT | D(50)ALOT |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Proposed | GPCA | 91.15% | 97.12% | 99.07% | 95.07% | 97.85% | 99.40% | 89.30% | 98.03% | 99.27% |
+| | SVM | 91.23% | 96.30% | 98.47% | 94.43% | 97.44% | 99.25% | 88.96% | 98.16% | 99.14% |
+| | GPCA-SVM | 92.58% | 97.17% | 99.10% | 95.23% | 98.04% | 99.44% | 90.67% | 98.45% | 99.34% |
+| + Synthetic Train | GPCA | 95.84% | 98.77% | 99.67% | 98.02% | 99.13% | 99.62% | 91.54% | 98.81% | 99.59% |
+| | SVM | 95.40% | 98.43% | 99.46% | 97.75% | 99.06% | 99.72% | 92.23% | 98.80% | 99.51% |
+| | GPCA-SVM | **96.13%** | **98.93%** | **99.78%** | **98.20%** | **99.24%** | **99.79%** | **92.82%** | **99.03%** | **99.64%** |
+
+Table 1: Classification rates comparison using GPCA-SVM and synthetic training.
+
+**Discussions** We compare the proposed method MCMA (Multilayer Convolution - Multifractal Analysis) with 9 state-of-the-art methods for 50 random splits between training and testing, for different training sizes. Results are presented in Table 2. The best results are in bold ⁴. As can be seen, the proposed method outperforms the published results on the 3 datasets. Compared to the leading method [14], our system seems to better handle viewpoint change and non-rigid deformations. This is clearly shown in the results on the **UIUC** dataset, which exhibits strong environmental changes. This result can be expected as the scattering method builds invariants to translation, rotation and scale changes, which do not include viewpoint change and non-rigid deformations. Contrary to this, using accurate multifractal statistics, our solution produces descriptors that are invariant to these complex transformations. The proposed system maintains a high performance on the **UMD** dataset. It is worth noting that on this dataset the images are of high resolution (1280 × 960), which gives an advantage over the **UIUC** dataset. However, we did not use the original resolution; we rather rescale the images to half size for faster processing. The high accuracy shows that the proposed multifractal method is able to extract robust invariant statistics even on low-resolution images.
+
+On the large-scale dataset **ALOT**, the proposed method maintains high performance.
+
+Recall that this dataset contains 250 classes with 100 samples each. This is a very challenging dataset that evaluates the extra-class decorrelation of the produced descriptors.
+
+A robust descriptor should increase the intra-class correlation, but should also decrease the extra-class correlation; this is best evaluated on a large-scale dataset that contains as many different classes as possible. The results on the **ALOT** dataset clearly show a significant performance drop for the leading multifractal method WMFS. In fact, the proposed solution outperforms the WMFS method even without synthetic training, as can be seen in Table 1. This shows that the proposed descriptor is able to extract a robust invariant representation.
+
+⁴ Detailed results with standard deviation can be found in the supplementary material.
+---PAGE_BREAK---
+
+ | Method | UIUC(5) | UIUC(10) | UIUC(20) | UMD(5) | UMD(10) | UMD(20) | ALOT(10) | ALOT(30) | ALOT(50) |
+ |---|---|---|---|---|---|---|---|---|---|
+ | MFS [10] | - | - | 92.74% | - | - | 93.93% | 71.35% | 82.57% | 85.64% |
+ | OTF-MFS [11] | - | - | 97.40% | - | - | 98.49% | 81.04% | 93.45% | 95.60% |
+ | WMFS [13] | 93.40% | 97.00% | 97.62% | 93.40% | 97.00% | 98.68% | 82.95% | 93.57% | 96.94% |
+ | VG-Fractal [9] | 85.35% | 91.64% | 95.40% | - | - | 96.36% | - | - | - |
+ | Varma [28] | - | - | 98.76% | - | - | - | - | - | - |
+ | Lazebnik [1] | 91.12% | 94.42% | 97.02% | 90.71% | 94.54% | 96.95% | - | - | - |
+ | BIF [5] | - | - | 98.80% | - | - | - | - | - | - |
+ | SRP [7] | - | - | 98.56% | - | - | 99.30% | - | - | - |
+ | Scattering [14] | 93.30% | 97.80% | 99.40% | 96.60% | 98.90% | 99.70% | - | - | - |
+ | MCMA | **96.13%** | **98.93%** | **99.78%** | **98.20%** | **99.24%** | **99.79%** | **92.82%** | **99.03%** | **99.64%** |
+
+Table 2: Classification rates on the UIUC, UMD and ALOT datasets.
+
+# 5 Conclusion
+
+This paper presents a fast and accurate texture classification system. The proposed solution builds a locally invariant representation using a multilayer convolution architecture that performs convolutions with a filter bank, applies a pooling operator to increase the local invariance, and repeats the process at various image resolutions. The resulting images are mapped into a stable descriptor via multifractal analysis. We present a new multifractal descriptor that extracts rich texture information from the local singularity exponents. The descriptor is mathematically validated to be invariant to bi-Lipschitz transformations, which include complex environmental changes. The second part of the paper tackles the training stage of the recognition system. We propose the GPCA-SVM classifier, which combines the generative PCA classifier with the popular kernel SVMs to achieve higher accuracy. In addition, a simple and efficient "synthetic training" strategy is proposed, which consists in synthetically generating more training data by changing the illumination and scale of the training instances. The system outperforms the state of the art, as shown by comparison with 9 recent methods on 3 challenging public benchmark datasets, while ensuring high computational efficiency.
+
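The "synthetic training" strategy lends itself to a compact implementation. A minimal sketch in NumPy, assuming grayscale images with values in [0, 1]; the particular gamma values and scale factor are illustrative choices, not the paper's settings:

```python
import numpy as np

def synthesize(image, gammas=(0.7, 1.4), scales=(0.5,)):
    """Generate extra training instances by changing illumination
    (gamma correction) and scale (block-average downsampling)."""
    out = []
    for g in gammas:                       # illumination change
        out.append(np.clip(image, 0.0, 1.0) ** g)
    for s in scales:                       # scale change, s = 1/k for integer k
        k = int(round(1.0 / s))
        h, w = (image.shape[0] // k) * k, (image.shape[1] // k) * k
        out.append(image[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3)))
    return out
```

Each original sample then contributes `len(gammas) + len(scales)` additional instances to the training set before the classifier is fit.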
+# Acknowledgements
+
+Hicham Badri's PhD is funded by an INRIA (Direction of Research) CORDI-S grant. His PhD is carried out under joint supervision between INRIA and Mohammed V-Agdal University - LRIT, Associated Unit to CNRST (URAC 29).
+
+# References
+
+1. Lazebnik, S., Schmid, C., Ponce, J.: A sparse texture representation using local affine regions. PAMI **27** (2005) 1265–1278
+
+2. Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C.: Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Comput. Vision **73**(2) (June 2007) 213–238
+---PAGE_BREAK---
+
+3. Varma, M., Zisserman, A.: A statistical approach to material classification using image patch exemplars. PAMI 31(11) (November 2009) 2032–2047
+
+4. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. PAMI 24(7) (July 2002) 971–987
+
+5. Crosier, M., Griffin, L.D.: Texture classification with a dictionary of basic image features. In: CVPR, IEEE Computer Society (2008)
+
+6. Liu, L., Fieguth, P.W.: Texture classification from random features. PAMI 34(3) (2012) 574–586
+
+7. Liu, L., Fieguth, P.W., Kuang, G., Zha, H.: Sorted random projections for robust texture classification. In: ICCV. (2011) 391–398
+
+8. Scholkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA (2001)
+
+9. Varma, M., Garg, R.: Locally invariant fractal features for statistical texture classification. In: CVPR, Rio de Janeiro, Brazil. (October 2007)
+
+10. Xu, Y., Ji, H., Fermuller, C.: A projective invariant for textures. 2006 CVPR 2 (2006) 1932–1939
+
+11. Xu, Y., Huang, S.B., Ji, H., Fermuller, C.: Combining powerful local and global statistics for texture description. In: CVPR, IEEE (2009) 573–580
+
+12. Xu, Y., Yang, X., Ling, H., Ji, H.: A new texture descriptor using multifractal analysis in multi-orientation wavelet pyramid. In: CVPR. (2010) 161–168
+
+13. Ji, H., Yang, X., Ling, H., Xu, Y.: Wavelet domain multifractal analysis for static and dynamic texture classification. IEEE Transactions on Image Processing 22(1) (2013) 286–299
+
+14. Sifre, L., Mallat, S.: Rotation, scaling and deformation invariant scattering for texture discrimination. In: CVPR. (2013)
+
+15. Bruna, J., Mallat, S.: Invariant scattering convolution networks. PAMI 35(8) (August 2013) 1872–1886
+
+16. Falconer, K.: Techniques in Fractal Geometry. Wiley (1997)
+
+17. Xu, Y., Ji, H., Fermüller, C.: Viewpoint invariant texture description using fractal analysis. Int. J. Comput. Vision 83(1) (June 2009) 85–100
+
+18. Arneodo, A., Bacry, E., Muzy, J.F.: The thermodynamics of fractals revisited with wavelets. Physica A: Statistical and Theoretical Physics 213(1-2) (January 1995) 232–275
+
+19. Turiel, A., del Pozo, A.: Reconstructing images from their most singular fractal manifold. IEEE Trans. Img. Proc. 11(4) (April 2002) 345–350
+
+20. Yahia, H., Turiel, A., Perez-Vicente, C.: Microcanonical multifractal formalism: a geometrical approach to multifractal systems. Part I: singularity analysis. Journal of Physics A: Math. Theor (41) (2008)
+
+21. Turiel, A., Parga, N.: The multifractal structure of contrast changes in natural images: From sharp edges to textures. Neural Computation 12(4) (2000) 763–793
+
+22. Wendt, H., Roux, S.G., Jaffard, S., Abry, P.: Wavelet leaders and bootstrap for multifractal analysis of images. Signal Process. 89(6) (June 2009) 1100–1114
+
+23. Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology 2 (2011) 27:1–27:27
+
+24. UIUC dataset: http://www-cvr.ai.uiuc.edu/once_grp/data/.
+
+25. UMD dataset: http://www.cfar.umd.edu/~fer/website-texture/texture.htm.
+
+26. Burghouts, G.J., Geusebroek, J.M.: Material-specific adaptation of color invariant features. Pattern Recognition Letters 30 (2009) 306–313
+
+27. ALOT dataset: http://staff.science.uva.nl/~aloi/public_alot/.
+
+28. Varma, M.: Learning the discriminative power-invariance trade-off. In: ICCV. (2007)
\ No newline at end of file
diff --git a/samples/texts_merged/7774888.md b/samples/texts_merged/7774888.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b9878b92849f009742a8cbc92fe07675806bfb2
--- /dev/null
+++ b/samples/texts_merged/7774888.md
@@ -0,0 +1,807 @@
+
+---PAGE_BREAK---
+
+# Spectral theory and operator ergodic theory on super-reflexive Banach spaces
+
+by
+
+EARL BERKSON (Urbana, IL)
+
+**Abstract.** On reflexive spaces trigonometrically well-bounded operators have an operator-ergodic-theory characterization as the invertible operators *U* such that
+
+$$ (*) \quad \sup_{n \in \mathbb{N},\, z \in \mathbb{T}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) k^{-1} z^k U^k \right\| < \infty. $$
+
+Trigonometrically well-bounded operators permeate many settings of modern analysis, and this note highlights the advances in both their spectral theory and operator ergodic theory made possible by a recent rekindling of interest in the R. C. James inequalities for super-reflexive spaces. When the James inequalities are combined with Young-Stieltjes integration for the spaces $V_p(\mathbb{T})$ of functions having bounded $p$-variation, it transpires that every trigonometrically well-bounded operator on a super-reflexive space $X$ has a norm-continuous $V_p(\mathbb{T})$-functional calculus for a range of values of $p > 1$, and we investigate the ways this outcome logically simplifies and simultaneously expands the structure theory, Fourier analysis, and operator ergodic theory of trigonometrically well-bounded operators on $X$. In particular, on a super-reflexive space $X$ (but not on a general reflexive space) a theorem of Tauberian type holds: the (C, 1) averages in (*) corresponding to a trigonometrically well-bounded operator $U$ can be replaced by the set of all the rotated ergodic Hilbert averages of $U$, which, in fact, is a precompact set relative to the strong operator topology. This circle of ideas is facilitated by the development of a convergence theorem for nets of spectral integrals of $V_p(\mathbb{T})$-functions. In the Hilbert space setting we apply the foregoing to the operator-weighted shifts which are known to provide a universal model for trigonometrically well-bounded operators on Hilbert space.
+
+## 1. Introduction and notation.
+The set of positive integers, the set of all integers, the real line, and the complex plane will be denoted by $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$, respectively. The unit circle $\{z \in \mathbb{C} : |z| = 1\}$ will be designated by $\mathbb{T}$. The symbol “K” with a (possibly empty) set of subscripts will be used to denote a constant which depends only on its subscripts, and which can change in value from one occurrence to another. Except where other-
+
+2010 Mathematics Subject Classification: Primary 26A45, 46B20, 47A35, 47B40.
+Key words and phrases: ergodic Hilbert transform, super-reflexive Banach space, spectral decomposition, p-variation, trigonometrically well-bounded operator.
+---PAGE_BREAK---
+
+wise indicated, the convergence of a bilateral series $\sum_{k=-\infty}^{\infty} a_k$ will mean
+the convergence of its sequence of bilateral partial sums $\{\sum_{k=-n}^{n} a_k\}_{n=1}^{\infty}$.
+Throughout all that follows, $\mathcal{X}$ will be an arbitrary Banach space, and we
+shall symbolize by $\mathfrak{B}(\mathcal{X})$ the Banach algebra of all continuous linear oper-
+ators mapping $\mathcal{X}$ into $\mathcal{X}$, the identity operator on $\mathcal{X}$ being denoted by $I$.
+A trigonometric polynomial will be a linear combination of a finite subset of
+the functions $\epsilon_n(z) \equiv z^n \ (z \in \mathbb{T}, n \in \mathbb{Z})$. Given a trigonometric polynomial
+$Q(z) \equiv \sum_n a_n z^n$ and an invertible $T \in \mathfrak{B}(\mathcal{X})$, we shall denote by $Q(T)$ the
+operator $\sum_n a_n T^n$.
+
+Deferring the precise details from spectral theory to §2, we use this in-
+troductory section to fix some notation and to outline our considerations,
+beginning with the abstract notions of spectral decomposability and spec-
+tral integration. An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-
+bounded ([5]) provided that $U$ has a “unitary-like” spectral representation
+
+$$
+(1.1) \qquad U = \int_{0-}^{2\pi} e^{it} dE(t),
+$$
+
+where $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ is a bounded idempotent-valued function possessing certain additional properties reminiscent of, but weaker than, those that would be inherited from a countably additive Borel spectral measure in $\mathbb{R}$, and where the integral in (1.1) is a Riemann–Stieltjes integral existing in the strong operator topology. After suitable normalization, the idempotent-valued function $E(\cdot)$ in (1.1) is uniquely determined, and is called the *spectral decomposition* of $U$. The spectral decomposition $E(\cdot)$ gives rise to a notion of Riemann–Stieltjes *spectral integration* against the integrator $E(\cdot)$. Spectral integration with respect to $E(\cdot)$ provides the trigonometrically well-bounded operator $U$ with a norm-continuous functional calculus implemented by $BV(\mathbb{T})$, the Banach algebra of all complex-valued functions $\psi$ on $\mathbb{T}$ having bounded variation and furnished with the $BV([0, 2\pi])$-norm of the corresponding function $\psi^\dagger(\cdot) \equiv \psi(e^{i(\cdot)})$.
+
+Trigonometrically well-bounded operators abound in the structures of modern analysis that require weakened forms of orthogonality to treat delicate convergence phenomena beyond the reach of the unconditional convergence associated with spectral measures. For a variety of naturally occurring examples of trigonometrically well-bounded operators, see, e.g., [8], §4 of [10], and [20]. In particular, if $\mathcal{X}$ is a UMD space, then any invertible $U \in \mathfrak{B}(\mathcal{X})$ such that $U$ is power-bounded (that is, $\sup_{n \in \mathbb{Z}} \|U^n\| < \infty$) is trigonometrically well-bounded. For some applications of trigonometrically well-bounded operators to operator ergodic theory and transference methods, see [3], [13], [14], [15], [17], and [18].
+
+Our starting point for this article is the following operator-ergodic-theory
+characterization of trigonometrically well-bounded operators on an arbitrary
+---PAGE_BREAK---
+
+reflexive Banach space $\mathcal{X}_0$ (see the equivalence of conditions (i) and (ii) of
+Theorem (2.4) in [6]).
+
+PROPOSITION 1.1. Let $\mathcal{X}_0$ be a reflexive Banach space, and let $U \in \mathfrak{B}(\mathcal{X}_0)$ be an invertible operator. Then $U$ is trigonometrically well-bounded if and only if
+
+$$
+(1.2) \quad \sup \left\{ \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{z^k}{k} U^k \right\| : n \in \mathbb{N}, z \in \mathbb{T} \right\} < \infty.
+$$
+
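Condition (1.2) can be explored numerically for a normal operator on a finite-dimensional Hilbert space. For $U = \mathrm{diag}(e^{i\theta_j})$ the averaged operator is diagonal, and its entries are exactly Fejér means of the function $\phi_0(e^{it}) = i(\pi - t)$ evaluated at rotated spectrum points, so the norms are uniformly bounded by $\pi$. A sketch (the spectrum and the sampled values of $n$ and $z$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, size=8)    # spectrum of U = diag(e^{i theta_j})

def cesaro_norm(n, z):
    """Norm of sum_{0<|k|<=n} (1 - |k|/(n+1)) z^k U^k / k for diagonal U."""
    ks = np.concatenate([np.arange(-n, 0), np.arange(1, n + 1)])
    w = 1 - np.abs(ks) / (n + 1)
    vals = (w[:, None] * z ** ks[:, None]
            * np.exp(1j * np.outer(ks, thetas)) / ks[:, None]).sum(axis=0)
    return np.abs(vals).max()                 # operator norm of a diagonal matrix

sup = max(cesaro_norm(n, np.exp(1j * t))
          for n in (1, 5, 25, 125)
          for t in np.linspace(0, 2 * np.pi, 16, endpoint=False))
assert sup <= np.pi + 1e-9                    # uniform bound, consistent with (1.2)
```

The bound $\pi$ comes from positivity of the Fejér kernel together with $\sup |\phi_0| = \pi$.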
+This article features results in both spectral theory and operator er-
+godic theory made possible by a recent renewal of interest in the conse-
+quences of R. C. James' inequalities for super-reflexive Banach spaces. (For
+these inequalities, see [30]; for the basic notions and fundamental features
+of super-reflexive spaces, see [31] as well as the celebrated result of P. Enflo
+in [26], which characterizes super-reflexivity as the property of having an
+equivalent uniformly convex norm.) When the James inequalities from [30]
+are combined with Young's inequalities in [40] for the spaces of functions
+having bounded $p$-variation on the circle (the $V_p(\mathbb{T})$ spaces), $1 < p < \infty$,
+it transpires that for every trigonometrically well-bounded operator on a
+super-reflexive Banach space, spectral integration against its spectral de-
+composition extends its BV$(\mathbb{T})$-functional calculus to a norm-continuous
+$V_p(\mathbb{T})$-functional calculus, for a suitable range of values of $p > 1$ (Theorem
+3.7 below). One indicator of the scope of this extension is that, in contrast
+to BV$(\mathbb{T})$, every class $V_p(\mathbb{T})$ contains a continuous, nowhere differentiable
+function of Hardy-Weierstrass type (see Remark 2.8(ii) below).
+
+The spectral integration of function classes of “higher variation” was initiated in [11], but heretofore has been confined to integrating against the spectral decompositions of: invertible power-bounded operators on classical UMD spaces [19], or invertible operators that are separation-preserving and modulus mean-bounded on reflexive Lebesgue spaces of sigma-finite measures [18]. Consequently, the results below ensuring spectral integration of $V_p(\mathbb{T})$ in the wide setting of super-reflexive spaces markedly expand the scope of spectral integration. Since functions of higher variation act as Fourier multipliers in classical unweighted settings as well as in classical weighted settings (see, e.g., Theorem 8 of [18], Théorème 1 and Lemme 3 of [24]), the spectral integration of the spaces $V_p(\mathbb{T})$ provided by Theorem 3.7 below can be viewed as a mechanism for the transference to super-reflexive spaces of a wide family of classical Fourier multipliers, with ramifications for the Fourier analysis of operators. In this regard let us recall that in various contexts where the left bilateral shift is a trigonometrically well-bounded operator (with spectral decomposition $\mathcal{E}(\cdot)$, say) on a sequence space, any bounded complex-valued function $f$ which is continuous a.e. on the circle, and
+---PAGE_BREAK---
+
+such that the spectral integral $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ exists, will act as a Fourier multiplier for the given sequence space, with $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ serving as the multiplier transform of $f$ (p. 16 of [9], Scholium (5.13) of [10], Theorem 4.3 of [16]). Theorem 5.5 below illustrates this point with a new application.
+
+By drawing on §3, the treatment in §4 furnishes a number of pleasant consequences for the operator ergodic theory of trigonometrically well-bounded operators that logically simplifies and expands their machinery in the super-reflexive space setting. In particular, if $U$ is a trigonometrically well-bounded operator on a super-reflexive space $X$, then a Tauberian-type theorem holds (Theorem 4.3 below). Specifically, the $(C, 1)$ averages appearing in the uniform boundedness condition (1.2) can be replaced by the rotated ergodic Hilbert averages of $U$:
+
+$$ (1.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\}. $$
+
+In fact, the set $\tilde{\mathcal{W}}$ is precompact relative to $\sigma_X$, the strong operator topology of $\mathfrak{B}(X)$. In the general reflexive space setting, this norm-boundedness of $\tilde{\mathcal{W}}$ need not hold for a trigonometrically well-bounded operator $U$ (see Remark 2.5 below). However, thanks to Hardy's Tauberian Theorem (see, e.g., Theorem II.2.2 in [32]), in the general Banach space setting the set $\tilde{\mathcal{W}}$ corresponding to a power-bounded trigonometrically well-bounded operator is norm-bounded (Theorem (3.21) of [7]). So the streamlining effect of Theorem 4.3 below is that for boundedness of $\tilde{\mathcal{W}}$, the hypothesis of power-boundedness can be dropped provided the underlying Banach space is super-reflexive. In the realm of Fourier analysis of operators on super-reflexive spaces, this streamlining effect is illustrated below by the strong convergence of the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $BV(\mathbb{T})$-functions (Theorem 4.4). (In this setting, it is further shown that the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $V_p(\mathbb{T})$-functions converge $(C, 1)$ in the strong operator topology (Theorem 4.5 below).) The foregoing circle of ideas is facilitated by the development of a suitable convergence theorem for the spectral integrals of $V_p(\mathbb{T})$-functions (Theorem 3.9 below).
+
+Since, when taken as a whole, the foregoing results can fail to hold in the general reflexive space setting, it is a pleasant surprise to find them valid throughout the broad context furnished by super-reflexive spaces, which include the UMD spaces ([1], [34]) properly ([22], [35]). In §5, we confine attention to the Hilbert space context by taking up some applications of the foregoing to operator-weighted shifts, which have been shown in [16] to furnish a universal model for estimates regarding trigonometrically well-bounded operators on Hilbert space.
+---PAGE_BREAK---
+
+In the course of the exchanges during the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis (July 25–31, 2004), Nigel Kalton offered the seminal suggestion that the James inequalities for super-reflexive spaces ([30]) might prove to be a useful tool for advances in spectral integration. The author wishes to thank Nigel Kalton for subsequently informing him of this perceptive viewpoint, which forms the basis for the developments below. On the heels of the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis, work aimed in the direction of Kalton’s suggestion was carried out in a doctoral dissertation at the University of Edinburgh [21]. This thesis work and the present article spiritually overlap each other in two places, and this state of affairs will be described below in Remark 3.8, where we discuss the anatomy of the present article’s methods.
+
+**2. Background items.** In this section, we recall requisite notions, starting with the basic machinery of spectral families and their associated spectral integration.
+
+**DEFINITION 2.1.** A *spectral family* in a Banach space $\mathcal{X}$ is an idempotent-valued function $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ with the following properties:
+
+(i) $E(\lambda)E(\tau) = E(\tau)E(\lambda) = E(\lambda)$ if $\lambda \le \tau$;
+
+(ii) $\|E\|_u = \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\} < \infty$;
+
+(iii) with respect to the strong operator topology, $E(\cdot)$ is right continuous and has a left-hand limit $E(\lambda^{-})$ at each point $\lambda \in \mathbb{R}$;
+
+(iv) $E(\lambda) \to I$ as $\lambda \to \infty$ and $E(\lambda) \to 0$ as $\lambda \to -\infty$, each limit being with respect to the strong operator topology.
+
+If, in addition, there exist $a, b \in \mathbb{R}$ with $a \le b$ such that $E(\lambda) = 0$ for $\lambda < a$ and $E(\lambda) = I$ for $\lambda \ge b$ then $E(\cdot)$ is said to be *concentrated on* $[a, b]$.
+
+Given a spectral family $E(\cdot)$ in the Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$, an associated theory of spectral integration can be developed as follows. For each bounded function $\psi : J \to \mathbb{C}$ and each partition $\mathcal{P} = (\lambda_0, \lambda_1, \dots, \lambda_n)$ of $J$, where we take $\lambda_0 = a$ and $\lambda_n = b$, set
+
+$$ (2.1) \qquad S(\mathcal{P}; \psi, E) = \sum_{k=1}^{n} \psi(\lambda_k) \{E(\lambda_k) - E(\lambda_{k-1})\}. $$
+
+If the net $\{S(\mathcal{P}; \psi, E)\}$ converges in the strong operator topology of $\mathfrak{B}(\mathcal{X})$ as $\mathcal{P}$ runs through the set of partitions of $J$ directed to increase by refinement, then the strong limit is called the *spectral integral* of $\psi$ with respect to $E(\cdot)$, and is denoted by $\int_J \psi(\lambda) dE(\lambda)$ or, more briefly, by $\int_J \psi dE$.
+---PAGE_BREAK---
+
+In this case, we define $\int_J^\oplus \psi(\lambda) dE(\lambda)$ by writing
+
+$$\int_J^\oplus \psi(\lambda) dE(\lambda) = \psi(a)E(a) + \int_J \psi(\lambda) dE(\lambda),$$
+
+and so $\int_J^\oplus \psi(\lambda) dE(\lambda)$ is the limit in the strong operator topology of the sums
+
+$$ (2.2) \quad \tilde{S}(\mathcal{P}; \psi, E) = \psi(a)E(a) + \sum_{k=1}^n \psi(\lambda_k)\{E(\lambda_k) - E(\lambda_{k-1})\}. $$
+
+It can be shown that the spectral integral $\int_J \psi(\lambda) dE(\lambda)$ exists for each $\psi \in \text{BV}(J)$, and that the mapping
+
+$$ (2.3) \qquad \psi \mapsto \int_J^\oplus \psi(\lambda) dE(\lambda) $$
+
+is an identity-preserving algebra homomorphism of $BV(J)$ into $\mathfrak{B}(\mathcal{X})$ satisfying
+
+$$ (2.4) \qquad \left\| \int_J^\oplus \psi(t) dE(t) \right\| \le \|\psi\|_{\text{BV}(J)} \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\}, $$
+
+where $\|\cdot\|_{\text{BV}(J)}$ denotes the usual Banach algebra norm expressed by
+
+$$ \|\psi\|_{\text{BV}(J)} = \sup_{x \in J} |\psi(x)| + \text{var}(\psi, J). $$
+
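In finite dimensions the machinery of (2.1)–(2.4) is easy to make concrete: for a Hermitian matrix $A$, taking $E(\lambda)$ to be the orthogonal projection onto the eigenspaces with eigenvalue at most $\lambda$ yields a spectral family concentrated on any interval containing the spectrum, and the refinement limit of the sums (2.2) recovers the functional calculus $\psi(A)$. A numerical sketch (this illustrates only the finite-dimensional Hilbert space case, not the general Banach-space theory):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                        # Hermitian matrix
evals, V = np.linalg.eigh(A)

def E(lam):
    """Spectral family: projection onto eigenspaces with eigenvalue <= lam."""
    mask = evals <= lam
    return V[:, mask] @ V[:, mask].T

def spectral_sum(psi, partition):
    """The sum of (2.2) for a partition a = l_0 < ... < l_n = b of J."""
    S = psi(partition[0]) * E(partition[0])
    for lo, hi in zip(partition[:-1], partition[1:]):
        S = S + psi(hi) * (E(hi) - E(lo))
    return S

a, b = evals.min() - 1.0, evals.max() + 1.0
partition = np.linspace(a, b, 2000)       # fine partition ~ refinement limit
approx = spectral_sum(np.cos, partition)
exact = V @ np.diag(np.cos(evals)) @ V.T  # psi(A) via the functional calculus
assert np.allclose(approx, exact, atol=1e-2)
```

Refining the partition drives the sum to $\psi(A)$, mirroring the strong-operator-topology limit in the definition of the spectral integral.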
+In this connection, we recall here a key oscillation notion for the spectral family $E(\cdot)$ in the arbitrary Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$. For each $x \in \mathcal{X}$, and each partition of $[a, b]$, $\mathcal{P} = (a = a_0 < a_1 < \dots < a_N = b)$, we put
+
+$$ \omega(\mathcal{P}, E, x) = \max_{1 \le j \le N} \sup \{\|E(t)x - E(a_{j-1})x\| : a_{j-1} \le t < a_j\}. $$
+
+Now, as $\mathcal{P}$ increases through the set of all partitions of $[a, b]$ directed to increase by refinement, we have (see Lemma 4 of [38])
+
+$$ (2.5) \qquad \lim_{\mathcal{P}} \omega(\mathcal{P}, E, x) = 0. $$
+
+In the setting of the arbitrary Banach space $\mathcal{X}$, one can establish with the aid of (2.5) the following “workhorse” convergence theorem for spectral integrals of $BV(J)$-functions taken with respect to $E(\cdot)$. In the setting of super-reflexive spaces, Theorems 3.9 and 3.11 below show that this convergence theorem has counterparts for functions of higher variation.
+
+**THEOREM 2.2.** Let $\{\psi_\alpha\}_{\alpha \in \mathcal{A}}$ be a net in $BV(J)$, and let $\psi$ be a complex-valued function on $J$ such that
+
+(i) $\sup_{\alpha \in \mathcal{A}} \text{var}(\psi_\alpha, J) < \infty$,
+
+(ii) $\psi_\alpha \to \psi$ pointwise on $J$.
+---PAGE_BREAK---
+
+Then $\psi \in \text{BV}(J)$, and $\{\int_J^\oplus \psi_\alpha dE\}_{\alpha \in \mathcal{A}}$ converges to $\int_J^\oplus \psi dE$ in the strong operator topology.
+
+The foregoing basic theory of spectral integration was developed in [38]. We refer the reader to §2 of [7] for a simplified account using the above notation. We shall also consider in connection with the above matters the Banach algebra $\text{BV}(\mathbb{T})$, which consists of all $\psi : \mathbb{T} \to \mathbb{C}$ such that the function $\psi^\dagger(t) = \psi(e^{it})$ belongs to $\text{BV}([0, 2\pi])$, furnished with the norm $\|\psi\|_{\text{BV}(\mathbb{T})} = \|\psi^\dagger\|_{\text{BV}([0, 2\pi])}$. The following notation will come in handy—particularly whenever Fejér's Theorem is invoked. Given any function $f : \mathbb{R} \to \mathbb{C}$ which has a right-hand limit and a left-hand limit at each point of $\mathbb{R}$, we shall denote by $f^\# : \mathbb{R} \to \mathbb{C}$ the function defined for every $t \in \mathbb{R}$ by putting
+
+$$f^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} f(s) + \lim_{s \to t^-} f(s) \right\}.$$
+
+In the case of a function $\phi : \mathbb{T} \to \mathbb{C}$ such that $\phi(e^{i\cdot}) : \mathbb{R} \to \mathbb{C}$ has everywhere a right-hand limit and a left-hand limit, we shall, by a slight abuse of notation, write
+
+$$ (2.6) \qquad \phi^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} \phi(e^{is}) + \lim_{s \to t^-} \phi(e^{is}) \right\} \quad \text{for all } t \in \mathbb{R}. $$
+
+In particular, for each $\phi \in \text{BV}(\mathbb{T})$, it is clear that we may regard the $(2\pi)$-periodic function $\phi^\#$ as an element of $\text{BV}(\mathbb{T})$. (In general, when there is no danger of confusion, we shall, as convenient, tacitly indulge in the conventional practice of identifying a function $\Psi$ defined on $\mathbb{T}$ with its $(2\pi)$-periodic counterpart $\Psi(e^{i\cdot})$ defined on $\mathbb{R}$.)
+
+**DEFINITION 2.3.** An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-bounded if there is a spectral family $E(\cdot)$ in $\mathcal{X}$ concentrated on $[0, 2\pi]$ such that $U = \int_{[0,2\pi]} e^{i\lambda} dE(\lambda)$. In this case, it is possible to arrange that $E((2\pi)^{-}) = I$, and with this additional property the spectral family $E(\cdot)$ is uniquely determined by $U$, and is called the *spectral decomposition* of $U$.
+
+**REMARK 2.4.** The above discussion regarding (2.3) and (2.4) shows that a trigonometrically well-bounded operator on a Banach space has a norm-continuous $\text{BV}(\mathbb{T})$-functional calculus. In the setting of super-reflexive spaces, Theorem 3.7 below will extend this $\text{BV}(\mathbb{T})$-functional calculus to a norm-continuous functional calculus based on functions of appropriately higher variation.
+
+After the development in [4] of an intimately related precursor class (the “well-bounded operators of type (B)”), the class of trigonometrically well-bounded operators was introduced in [5], and its fundamental structural theory further developed in [6]. In the general Banach space setting
+---PAGE_BREAK---
+
+(resp., in the reflexive space setting described in Proposition 1.1), trigonometrically well-bounded operators can be characterized by the precompactness in the weak operator topology (resp., the uniform boundedness) of the
+(C, 1) means of their full set of rotated discrete ergodic Hilbert averages.
+(For the general Banach space case, see Theorem 5.2 of [14].) In order to
+discuss this recurring theme, it will be convenient to establish a notation
+for the sequence of trigonometric polynomials underlying it via spectral
+integration—specifically, for each $n \in \mathbb{N}$ and each $z \in \mathbb{T}$, we write
+
+$$
+(2.7) \qquad \mathfrak{s}_n(z) = \sum_{0 < |k| \le n} \frac{z^k}{k}
+$$
+
+(thus, $\{\mathfrak{s}_n\}_{n=1}^{\infty}$ is the sequence of partial sums for the Fourier series of $\phi_0 \in$
+BV($\mathbb{T}$) defined by $\phi_0(1) = 0$ and $\phi_0(e^{it}) = i(\pi - t)$ for $0 < t < 2\pi$). The
+fact that var($\mathfrak{s}_n$, $\mathbb{T}$) $\to \infty$ as $n \to \infty$ is a well-known consequence of the
+properties of the Lebesgue constants (see, e.g., (3.9) of [14]), and renders
+(2.4) incapable of bounding the sequence {||$\mathfrak{s}_n(T)$||}$_{n=1}^{\infty}$ in the case of an
+arbitrary trigonometrically well-bounded operator on an arbitrary Banach
+space $\mathcal{X}$. The following remark guarantees that there is no way out of this,
+even in the setting of a general reflexive Banach space, and this fact serves
+to underscore the aforementioned felicitous properties which Theorem 4.3
+confers on the set $\tilde{\mathcal{W}}$ in (1.3) when the underlying Banach space is super-
+reflexive.
+
+**REMARK 2.5.** Example (3.1) in [6] exhibits a reflexive Banach space $\mathcal{X}_0$
+and a trigonometrically well-bounded operator $T_0 \in \mathfrak{B}(\mathcal{X}_0)$ such that for
+each trigonometric polynomial $Q$, we have
+
+$$
+\|Q(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} = |Q(1)| + \text{var}(Q, \mathbb{T}).
+$$
+
+Hence $\|\mathfrak{s}_n(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} \to \infty$ as $n \to \infty$. A noteworthy feature of the reflexive Banach space $\mathcal{X}_0$ used in this example is that, by virtue of [25] (note, e.g., Lemma 1.e.4 in [33]), $\mathcal{X}_0$ cannot be made uniformly convex by equivalent renorming (in view of Corollary 3 of [26], this last can be equivalently restated by saying that the reflexive Banach space $\mathcal{X}_0$ is not super-reflexive).
+
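The divergence $\mathrm{var}(\mathfrak{s}_n, \mathbb{T}) \to \infty$ underlying Remark 2.5 is easy to observe numerically: since $\mathfrak{s}_n(e^{it}) = 2i \sum_{k=1}^{n} \sin(kt)/k$, the variation of $\mathfrak{s}_n$ over a fine grid (a lower bound for the true variation) grows without bound, at the logarithmic rate of the Lebesgue constants. A quick sketch:

```python
import numpy as np

def grid_variation(n, m=20001):
    """Grid lower bound for var(s_n, T), where s_n(e^{it}) = 2i sum sin(kt)/k."""
    t = np.linspace(0.0, 2 * np.pi, m)
    k = np.arange(1, n + 1)
    s = 2.0 * np.sin(np.outer(t, k)) @ (1.0 / k)   # s_n / i, a real function
    return np.abs(np.diff(s)).sum()

v = [grid_variation(n) for n in (8, 64, 256)]
assert v[0] < v[1] < v[2]    # the variations keep growing with n
```

Consequently no bound of the form (2.4) can control $\{\|\mathfrak{s}_n(T)\|\}_{n=1}^{\infty}$ uniformly in $n$.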
+On a more positive note, we mention here that trigonometrically well-
+bounded operators do enjoy the following operator-valued variant of Fejér’s
+Theorem (see Theorem (3.10)(i) of [7]). (For a marked improvement on
+the conclusion of this next theorem in the presence of super-reflexivity, see
+Theorem 4.4 below.)
+
+**THEOREM 2.6.** Suppose that $U$ is a trigonometrically well-bounded operator on a Banach space $\mathcal{X}$, and $E(\cdot)$ is the spectral decomposition of $U$. Let $f \in \text{BV}(\mathbb{T})$, and let $f^{\#}$ be as in (2.6). Then the series $\sum_{k=-\infty}^{\infty} \hat{f}(k)U^k$ is $(C, 1)$-summable in the strong operator topology to $\int_{[0,2\pi]} f^{\#}(t)\,dE(t)$; that is, the sequence $\left\{ \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) \hat{f}(k) U^k \right\}_{n=1}^{\infty}$ converges in the strong operator topology to $\int_{[0,2\pi]} f^{\#}(t)\,dE(t)$.
+---PAGE_BREAK---
+
+The centerpiece of our considerations in §3 will be a proof that, in the context of super-reflexivity, spectral integration against $E(\cdot)$ can be extended from BV($\mathbb{T}$) to the broader classes $V_p(\mathbb{T})$ consisting of the functions of bounded $p$-variation, where $p$ ranges over an appropriate subinterval of $(1, \infty)$ (see Theorem 3.7 below). To avoid later digressions, we take up here the definition of the $p$-variation of a function $\psi$.
+
+**DEFINITION 2.7.** Let $J = [a, b]$ be a compact interval of $\mathbb{R}$. For $1 \le p < \infty$, the $p$-variation of a function $\psi: J \to \mathbb{C}$ is specified by writing
+
+$$ \mathrm{var}_p(\psi, [a,b]) = \sup \left\{ \sum_{k=1}^{N} |\psi(x_k) - \psi(x_{k-1})|^p \right\}^{1/p}, $$
+
+where the supremum is extended over all partitions $a = x_0 < x_1 < \dots < x_N = b$ of $[a, b]$.
+
+By definition, the class $V_p(J)$ consists of all functions $\psi: J \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, [a,b]) < \infty$. It is readily verified that $V_p(J)$ becomes a unital Banach algebra under pointwise operations when endowed with the norm $\|\cdot\|_{V_p(J)}$ specified by
+
+$$ \|\psi\|_{V_p(J)} = \sup\{|\psi(x)| : x \in J\} + \mathrm{var}_p(\psi, J). $$
+
+Moreover, if $\psi \in V_p(J)$, then $\lim_{x \to y^+} \psi(x)$ exists for each $y \in [a, b)$ and $\lim_{x \to y^-} \psi(x)$ exists for each $y \in (a, b]$, and the set of discontinuities of $\psi$ in $J$ is countable. It is elementary that $V_1(J)$ and BV$(J)$ consist of the same functions, and also that $V_q(J) \subseteq V_r(J)$ when $1 \le q \le r < \infty$, since $\|\psi\|_{V_p(J)}$ is a decreasing function of $p$. For additional fundamental features of $V_p(J)$, see, e.g., §2 in [11].
+
+For $\psi: \mathbb{T} \to \mathbb{C}$, we define $\mathrm{var}_p(\psi, \mathbb{T})$ to be $\mathrm{var}_p(\psi(e^{i\cdot}), [0, 2\pi])$, and we designate by $V_p(\mathbb{T})$ the class consisting of all functions $\psi: \mathbb{T} \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, \mathbb{T}) < \infty$. With pointwise operations on $\mathbb{T}$, $V_p(\mathbb{T})$ likewise becomes a unital Banach algebra when furnished with the norm
+
+$$ \|\psi\|_{V_p(\mathbb{T})} = \|\psi(e^{i\cdot})\|_{V_p([0,2\pi])} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \mathrm{var}_p(\psi, \mathbb{T}). $$
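
Definition 2.7 is algorithmic in nature: on any fixed partition the bracketed quantity is simply the $\ell^p$ norm of the vector of increments, and this norm is non-increasing in $p$, which is the source of the inclusions $V_q \subseteq V_r$ for $q \le r$ noted above. A minimal numerical sketch (our own toy function and grid; the fixed-partition quantity is only a lower bound for the supremum defining $\mathrm{var}_p$):

```python
import numpy as np

def pvar_on_partition(values, p):
    """l^p norm of the successive increments of `values` along one fixed
    partition; a lower bound for var_p over all partitions."""
    inc = np.abs(np.diff(values))
    return np.sum(inc ** p) ** (1.0 / p)

# psi(x) = x * sin(1/x): on intervals shrinking to 0 its 1-variation blows
# up (harmonic series), while its 2-variation stays bounded.
x = np.linspace(1e-3, 1.0, 10001)
psi = x * np.sin(1.0 / x)

v1 = pvar_on_partition(psi, 1.0)
v2 = pvar_on_partition(psi, 2.0)
v3 = pvar_on_partition(psi, 3.0)
assert v1 >= v2 >= v3 > 0.0   # l^p norms of increments decrease in p
```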
+
**REMARK 2.8.** (i) For $1 \le p < \infty$ and $\psi: \mathbb{T} \to \mathbb{C}$, there is also a rotation-invariant notion for the $p$-variation of $\psi$ on $\mathbb{T}$, which serves as an alternative to $\mathrm{var}_p(\psi, \mathbb{T})$ defined above. Specifically, we can define

$$ \nu_p(\psi, \mathbb{T}) = \sup \left\{ \sum_{k=1}^{N} |\psi(e^{it_k}) - \psi(e^{it_{k-1}})|^p \right\}^{1/p}, $$
+
+where the supremum is taken over all finite sequences $-\infty < t_0 < t_1 < \dots < t_N = t_0 + 2\pi < \infty$. It is evident that
+
+$$ (2.8) \qquad \mathrm{var}_p(\psi, \mathbb{T}) \le \nu_p(\psi, \mathbb{T}) \le 2 \mathrm{var}_p(\psi, \mathbb{T}), $$
+
+and that $\nu_1(\psi, \mathbb{T}) = \mathrm{var}_1(\psi, \mathbb{T})$. Moreover, for $1 \le p < \infty$, $V_p(\mathbb{T})$ is also a unital Banach algebra under the norm $\|\cdot\|_{\nu_p(\mathbb{T})}$ given by
+
+$$ \|\psi\|_{\nu_p(\mathbb{T})} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \nu_p(\psi, \mathbb{T}), $$
+
+which, by virtue of (2.8), is obviously equivalent to the Banach algebra norm $\|\cdot\|_{V_p(\mathbb{T})}$ defined above. (When convenient, we shall use the equivalence of the norms $\|\cdot\|_{\nu_p(\mathbb{T})}$ and $\|\cdot\|_{V_p(\mathbb{T})}$ without comment.) Straightforward application of the Generalized Minkowski Inequality shows that if $F \in L^1(\mathbb{T})$ and $\psi \in V_p(\mathbb{T})$, then the convolution $F * \psi$ belongs to $V_p(\mathbb{T})$, with
+
+$$ (2.9) \qquad \|F * \psi\|_{V_p(\mathbb{T})} \le \|F\|_{L^1(\mathbb{T})} \|\psi\|_{\nu_p(\mathbb{T})} \le 2 \|F\|_{L^1(\mathbb{T})} \|\psi\|_{V_p(\mathbb{T})}. $$
+
(ii) It is worth noting here that if $1 < q < \infty$, then $\bigcup_{1 \le p < q} V_p(\mathbb{T})$ is not dense in $V_q(\mathbb{T})$. To see this, first note that if $1 \le p < \infty$ and $f \in V_p(\mathbb{T})$, then, in the notation of [29], we have $f \in \Lambda_p$. This is a standard inclusion, established for $p=1$ in Lemma 9 of [29], and for $1 < p < \infty$ on pages 259–260 of [40] (nowadays this inclusion for $1 < p < \infty$ is also transparent via, e.g., Theorem 3.1 of [23]). Hence Lemma 11 of [29] shows that $\{\hat{f}(k)\}_{k=-\infty}^{\infty}$, the sequence of Fourier coefficients of $f$, satisfies
+
+$$ (2.10) \qquad \sup\{|k|^{1/p}|\hat{f}(k)| : k \in \mathbb{Z}\} < \infty. $$
+
+In view of this, we can define for $1 \le p < \infty$ the linear mapping $\mathfrak{T}_p : V_p(\mathbb{T}) \to \ell^\infty(\mathbb{Z})$ by writing $\mathfrak{T}_p(f) = \{|k|^{1/p} \hat{f}(k)\}_{k=-\infty}^{\infty}$. It follows via the Closed Graph Theorem that $\mathfrak{T}_p$ is continuous, and so the following set $\mathcal{N}_p(\mathbb{T})$, which coincides with $(\mathfrak{T}_p)^{-1}(c_0(\mathbb{Z}))$, is a closed subspace of $V_p(\mathbb{T})$:
+
+$$ \mathcal{N}_p(\mathbb{T}) = \{g \in V_p(\mathbb{T}) : |k|^{1/p} \hat{g}(k) \to 0 \text{ as } |k| \to \infty\}. $$
+
+It is clear from (2.10) that $\bigcup_{1 \le p < q} V_p(\mathbb{T}) \subseteq \mathcal{N}_q(\mathbb{T})$. However, $F_q$, Hardy's $(2\pi)$-periodic, Weierstrass-type, continuous, nowhere differentiable function from [28], which is specified by
+
+$$ F_q(t) = \sum_{n=0}^{\infty} 2^{-n/q} \cos(2^n t) \quad \text{for all } t \in \mathbb{R}, $$
+
belongs to $\mathrm{Lip}_{1/q}(\mathbb{R})$ by 1.33 of [28], and hence its restriction $F_q|_{[0, 2\pi]}$ can be regarded as belonging to $V_q(\mathbb{T})$. It is clear that for each non-negative integer $n$,
+
+$$ 2^{n/q} \widehat{F}_q(2^n) = \frac{1}{2}, $$
+
+whence $F_q|_{[0, 2\pi]}$ does not belong to $\mathcal{N}_q(\mathbb{T})$. (Compare (9.4) of [40].)
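
The displayed coefficient identity can be checked numerically on a truncation of Hardy's series: the truncated sum is a trigonometric polynomial, so sampling it on a grid beyond the Nyquist limit lets the DFT recover its Fourier coefficients exactly up to rounding. (A sketch with our own choice of $q$, truncation level, and grid size.)

```python
import numpy as np

q = 3.0
M = 10                     # truncate Hardy's series at frequency 2**M
N = 4 * 2 ** M             # grid size, above the Nyquist limit 2 * 2**M
t = 2 * np.pi * np.arange(N) / N
F = sum(2.0 ** (-n / q) * np.cos(2 ** n * t) for n in range(M + 1))

coeffs = np.fft.fft(F) / N     # coeffs[m] = F^(m) for 0 <= m < N/2
for n in range(M + 1):
    c = coeffs[2 ** n].real
    # 2^{n/q} F^(2^n) = 1/2 for every n, so |m|^{1/q} F^(m) cannot tend
    # to 0 along m = 2^n: F_q restricted to [0, 2*pi] lies outside N_q(T).
    assert abs(2.0 ** (n / q) * c - 0.5) < 1e-9
```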
+
+If we replace absolute values by norms in the foregoing definitions of $p$-variation, we arrive at the corresponding definitions for vector-valued functions. Furthermore, for a vector-valued function $f$ defined on $\mathbb{R}$ (including the scalar-valued case), the standard counterpart for $\mathbb{R}$ of $p$-variation is given by
+
+$$ \operatorname{var}_p(f, \mathbb{R}) = \sup_{-\infty < a < b < \infty} \operatorname{var}_p(f, [a, b]). $$
+
+If $E(\cdot)$ is a spectral family of projections in an arbitrary Banach space $\mathcal{X}$, and $1 \le p < \infty$, we shall also use the symbol $\operatorname{var}_p(E)$ to denote
+
+$$ \sup\{\operatorname{var}_p(E(\cdot)x, \mathbb{R}) : \|x\| \le 1\}. $$
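
For orientation: when the $E(\lambda)$ are orthogonal projections on a Hilbert space, the increments $(E(\lambda_k) - E(\lambda_{k-1}))x$ are pairwise orthogonal, so the sum of their squared norms telescopes to $\|(E(\lambda_N) - E(\lambda_0))x\|^2 \le \|x\|^2$, whence $\mathrm{var}_2(E) \le 1$. A minimal numerical sketch of this telescoping (our own finite-dimensional toy example, with diagonal projections on $\mathbb{C}^d$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# Toy spectral family on C^d: E(l) projects onto the coordinates whose
# "eigenvalue" lam[j] is <= l (commuting orthogonal projections,
# increasing in l).
lam = np.sort(rng.uniform(0.0, 2 * np.pi, d))

def E_diag(l):
    return np.where(lam <= l, 1.0, 0.0)    # diagonal of the projection

# Squared norms of the increments of E(.)x over a fine partition: each
# coordinate is captured by exactly one increment, so the sum telescopes
# to the squared norm of x.
grid = np.linspace(-1.0, 7.0, 400)
sq = sum(np.linalg.norm((E_diag(b) - E_diag(a)) * x) ** 2
         for a, b in zip(grid[:-1], grid[1:]))
assert abs(sq - np.linalg.norm(x) ** 2) < 1e-9
```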
+
+**3. Super-reflexivity and spectral integration of $V_p(\mathbb{T})$ with $p > 1$.**
+
+For extensive details and terminology regarding the structure theory of super-reflexive spaces, we refer the interested reader to, e.g., Part 4 of [2]. One of R. C. James' inequalities for super-reflexive spaces (Theorem 3 of [30]) states the following.
+
+**THEOREM 3.1.** Let $X$ be a super-reflexive Banach space. If $\phi$ and $K$ are real numbers such that
+
+$$ 0 < 2\phi < 1/K \le 1, $$
+
+then there is $q = q(X, \phi, K) \in (1, \infty)$ such that for any normalized basic sequence $\{y_j\}$ in $X$ with basis constant not exceeding $K$, we have
+
+$$ (3.1) \qquad \phi\left\{\sum_j |a_j|^q\right\}^{1/q} \le \left\|\sum_j a_j y_j\right\|, $$
+
+for all scalar sequences $\{a_j\}$ such that $\sum_j a_j y_j$ converges.
+
+In the context of a spectral family of projections in a super-reflexive space, James's Theorem 3.1 above readily specializes so as to take the following form.
+
+**PROPOSITION 3.2.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, and $\phi$ is a real number satisfying
+
+$$ (3.2) \qquad 0 < \phi < \frac{1}{4\|E\|_u}, $$
+
+then there is a real number $q = q(X, \phi, \|E\|_u) \in (1, \infty)$ such that
+
+$$ (3.3) \qquad \operatorname{var}_q(E) \le \frac{2\|E\|_u}{\phi}. $$
+
*Proof.* Let $x \in X \setminus \{0\}$, and suppose that $-\infty < \lambda_0 < \lambda_1 < \dots < \lambda_N < \infty$. Let $\{z_j\}_{j=1}^M$ be the basic sequence consisting of all non-zero terms extracted from $\{(E(\lambda_k) - E(\lambda_{k-1}))x\}_{k=1}^N$, let $\{y_j\}_{j=1}^M$ be the normalized basic sequence $\{z_j/\|z_j\|\}_{j=1}^M$ (whose basis constant clearly does not exceed $2\|E\|_u$), and let $\{a_j\}_{j=1}^M$ be the sequence of real numbers $\{\|z_j\|\}_{j=1}^M$. Then, in the present context, (3.1) becomes the desired conclusion (3.3), since the sum in the majorant of (3.1) telescopes here. ■
+
Since we shall not require any specificity for the roles played by the constants $\phi$, $\|E\|_u$, and $q = q(X, \phi, \|E\|_u)$ in Proposition 3.2, we include here the following condensed version (which can also be derived directly from Proposition IV.II.3 on pages 249–250 of [2] by similar reasoning to that above, after using the equivalent renorming of $X$ specified by $\|x\|_1 = \sup_{-\infty < \lambda < \infty} \|E(\lambda)x\|$).

**PROPOSITION 3.3.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, then there is an index $q \in (1, \infty)$ such that $\operatorname{var}_q(E) < \infty$.

The other main ingredient in our considerations is the following fundamental theorem of Young–Stieltjes integration (see [40]).

**THEOREM 3.4.** Let $J = [a, b]$ be a compact interval of $\mathbb{R}$. Suppose that $1 \le p, q < \infty$ satisfy $p^{-1} + q^{-1} > 1$, and that $f \in V_p(J)$, $g \in V_q(J)$ have no common discontinuities. Then the Riemann–Stieltjes integral $\int_a^b f(t)\, dg(t)$ exists and obeys the estimate
+
+$$ \left| \int_a^b f(t) dg(t) \right| \le \left\{ 1 + \zeta \left( \frac{1}{p} + \frac{1}{q} \right) \right\} \|f\|_{V_p(J)} \text{var}_q(g, J). $$
+
*(Here $\zeta$ designates the Riemann zeta function, specified by $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for $s > 1$.)*
+
+**THEOREM 3.5.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$. Let $u \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$. Then, in terms of the notation of (2.6), for every $f \in \text{BV}(\mathbb{T})$ we have
+
$$ (3.4) \quad \left\| \int_{[0,2\pi]}^{\oplus} f^{\#}(t)\, dE(t) \right\| \le 3 \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \|f\|_{V_u(\mathbb{T})} \operatorname{var}_q(E). $$
+
+*Proof.* Here and henceforth we denote by $\{\kappa_n\}_{n=0}^\infty$ the Fejér kernel for $\mathbb{T}$,
+
+$$ \kappa_n(z) = \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) z^k. $$
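
For reference (standard facts about the Fejér kernel, recorded here because they feed into the estimates of this proof): $\kappa_n$ has the familiar closed form and normalization

$$ \kappa_n(e^{it}) = \frac{1}{n+1} \left( \frac{\sin((n+1)t/2)}{\sin(t/2)} \right)^2 \ge 0, \qquad \frac{1}{2\pi} \int_0^{2\pi} \kappa_n(e^{it})\, dt = 1, $$

so that $\|\kappa_n\|_{L^1(\mathbb{T})} = 1$, and (2.9) then gives $\|\kappa_n * f\|_{V_u(\mathbb{T})} \le 2 \|f\|_{V_u(\mathbb{T})}$.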
+
+Clearly $u^{-1} + q^{-1} > 1$. For $f \in \text{BV}(\mathbb{T})$, each trigonometric polynomial $\kappa_n * f$
+is in $BV(\mathbb{T}) \subseteq V_u(\mathbb{T})$, with
+
+$$
+\|\kappa_n * f\|_{BV(\mathbb{T})} \leq \|f\|_{BV(\mathbb{T})}.
+$$
+
+For the integral
+
+$$
+\int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dx^*(E(t)x)
+$$
+
+(which automatically exists for arbitrary $x \in X$, and $x^*$ in the dual space $X^*$), we now apply Theorem 3.4 to the pair of functions $\kappa_n * f \in V_u(\mathbb{T})$ and $x^*(E(\cdot)x) \in V_q([0, 2\pi])$ to obtain the estimate
+
+$$
+\left| \int_{[0,2\pi]} (\kappa_n * f)(e^{it}) dx^*(E(t)x) \right|
+\le \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \mathrm{var}_q(E) \|x\| \|x^*\|,
+$$
+
+and consequently for each $n$, we see with the aid of this last estimate that
+
$$
(3.5) \quad \left\| \int_{[0,2\pi]}^{\oplus} (\kappa_n * f)(e^{it})\, dE(t) \right\|
\le \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E)
\le 2 \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E).
$$
+
Since $\{\kappa_n * f\}_{n=0}^\infty$ converges pointwise to $f^\#$ on $\mathbb{T}$ while its terms have uniformly bounded 1-variations, we can infer via Theorem 2.2 above that, in the strong operator topology,

$$
\int_{[0,2\pi]}^{\oplus} (\kappa_n * f)(e^{it})\, dE(t) \to \int_{[0,2\pi]}^{\oplus} f^\#(t)\, dE(t).
$$
+
+Hence (3.5) shows that (3.4) holds. ■
+
+In order to pass from the estimate in (3.4) for the spectral integral of $f^\#$ when $f \in BV(\mathbb{T})$ to the spectral integration of $V_p(\mathbb{T})$-functions, we shall need to rely on the following exemplar of the tools which spectral integration furnishes for such situations.
+
**THEOREM 3.6.** Suppose that $U$ is a trigonometrically well-bounded operator on an arbitrary Banach space $\mathcal{X}$, $E(\cdot)$ is the spectral decomposition of $U$, and $1 < u < \infty$. Suppose further that there is a constant $\tau$ such that
+
$$
(3.6) \quad \left\| \int_{[0,2\pi]}^{\oplus} \psi^{\#}(t)\, dE(t) \right\| \leq \tau \|\psi\|_{V_u(\mathbb{T})} \quad \text{for all } \psi \in \mathrm{BV}(\mathbb{T}).
$$
+
Then if $1 \le p < u$, the spectral integral $\int_{[0,2\pi]} \phi(e^{it})\, dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it})\, dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(\mathcal{X})$ such that
+
$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it})\, dE(t) \right\| \leq \tau K_{p,u} \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}), $$

where the constant $K_{p,u}$ depends only on $p$ and $u$.
+
+*Proof*. A demonstration of the current theorem can readily be modeled after the proof of Theorem 2.1 in [11] by replacing the Fourier multiplier norm estimate in Proposition 2.3 et seq. of [11] by the present hypothesis (3.6). Alternatively, one can extract key elements of a proof for the current theorem by making suitable modifications to the reasoning for its Marcinkiewicz power-classes counterpart in Theorem 12 of [18]. ■
+
+By taking $u = 2^{-1}(p+q')$ in Theorem 3.5 while combining Theorems 3.5 and 3.6 we arrive at the following principal result, which guarantees spectral integration of $V_p(\mathbb{T})$ spaces in the presence of super-reflexivity, and thereby extends to each $V_p(\mathbb{T})$ space, throughout an appropriate range of $p > 1$, the BV$(\mathbb{T})$-functional calculus for trigonometrically well-bounded operators.
+
+**THEOREM 3.7.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$. Let $p \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$. Then the spectral integral $\int_{[0,2\pi]} \phi(e^{it}) dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(X)$ such that
+
+$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t) \right\| \leq K_{p,q} \text{var}_q(E) \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}). $$
+
+**REMARK 3.8.** (i) As already indicated above, from both a conceptual and historical standpoint, Proposition 3.2 (along with its abbreviated version in Proposition 3.3) can best be viewed as the immediate specialization to spectral families of James’ celebrated estimate for super-reflexive spaces here quoted as Theorem 3.1. On the basis of extensive calculations aided by [30], Theorem 2.1 of [21] asserts what amounts to Proposition 3.2 above. The reasoning devoted to Theorem 2.1 in [21] occurs there on pp. 14–28, 31, with the following description on page 23: “The proof of Theorem 2.1 is rather involved, and requires several technical results”.
+
+(ii) Some generic spectral integration tool for the general Banach space setting, such as Theorem 3.6, seems to be required for the transition from Proposition 3.2 and the fundamental theorem of Young–Stieltjes integration reproduced in Theorem 3.4 in order to arrive at Theorem 3.7. The reasoning offered for Theorem 4.1 in [21], which purports to establish the same result as Theorem 3.7 above without such a transitional tool, is flawed, primarily
+because it rests on the false premise that $V_1(\mathbb{T})$ is norm-dense in $V_p(\mathbb{T})$, if
+$1 < p < \infty$, in contradiction to the result in Remark 2.8(ii) above.
+
+We now proceed to associate with Theorem 3.7 a useful convergence theorem for appropriate nets of spectral integrals in the context of super-reflexivity. This (as well as Theorem 3.11 below) furnishes the promised extension of Theorem 2.2 to functions of higher variation.
+
+**THEOREM 3.9.** *Assume the hypotheses on X, E(·), U, and q of Theorem 3.7, and let p ∈ (1, q'). Suppose that {gβ}β∈B is a net of mappings from T into C satisfying*
+
$$
(3.7) \qquad \rho \equiv \sup\{\operatorname{var}_p(g_\beta, \mathbb{T}) : \beta \in B\} < \infty,
$$

and such that for each $\beta \in B$ and each $t_0 \in \mathbb{R}$,

$$
(3.8) \qquad \lim_{t \to t_0^-} g_\beta(e^{it}) = g_\beta(e^{it_0}).
$$
+
+*Suppose further that {g_β}_{β∈B} converges pointwise on T to a complex-valued function g. Then g ∈ V_p(T), and the net*
+
+$$
+\left\{ \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t) \right\}_{\beta \in B}
+$$
+
+converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t).$
+
*Proof.* Clearly, $\operatorname{var}_p(g, \mathbb{T}) \le \rho < \infty$. Choose $q_1$ so that $1 < q < q_1 < \infty$ and $p^{-1} + q_1^{-1} > 1$. Fix $x \in X \setminus \{0\}$, let $\varepsilon > 0$ be given, and use (2.5) to infer that $[0, 2\pi]$ has a partition $\mathcal{P}_\varepsilon = (0 = t_0 < t_1 < \dots < t_J = 2\pi)$ such that
+
$$
(3.9) \qquad \omega(\mathcal{U}, E, x) < \varepsilon \quad \text{for any refinement } \mathcal{U} \text{ of } \mathcal{P}_{\varepsilon}.
$$
+
+For an arbitrary pair of refinements of $\mathcal{P}_\varepsilon$, say $\mathcal{P} = (0 = a_0 < a_1 < \dots < a_N = 2\pi)$, $\mathcal{Q} = (0 = b_0 < b_1 < \dots < b_M = 2\pi)$, and for any $\beta \in B$, we shall now consider the following two sums:
+
+$$
+S_1 \equiv \sum_{j=1}^{N} E(a_{j-1})x\{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\},
+$$
+
+$$
+S_2 \equiv \sum_{m=1}^{M} E(b_{m-1})x\{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\}.
+$$
+
For $1 \le \nu \le J$, let $I_\nu = [y_\nu, z_\nu]$ be the rightmost subinterval of $\mathcal{P}$ contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$, and let $S'_1$ denote the sum $S_1$ after the replacement of the terms $E(y_\nu)x\{g_\beta(e^{iz_\nu}) - g_\beta(e^{iy_\nu})\}$, $1 \le \nu \le J$, by corresponding terms $E(y_\nu)x\{g_\beta(e^{iz'_\nu}) - g_\beta(e^{iy_\nu})\}$, where $y_\nu < z'_\nu < z_\nu$, $1 \le \nu \le J$. Moreover, we can choose these points $z'_\nu$, $1 \le \nu \le J$, so that we can similarly form $S'_2$ from $S_2$ by truncating to the same right end-point $z'_\nu$ the rightmost subinterval in the string of subintervals of $\mathcal{Q}$ contained in each $[t_{\nu-1}, t_\nu]$. In terms of this notation, we can write
+
+$$S'_1 - S'_2 = \sum_{\nu=1}^{J} (\Omega_{\nu} - \Lambda_{\nu}),$$
+
+where, for $1 \le \nu \le J$, $\Omega_\nu$ (resp., $\Lambda_\nu$) represents the contribution to $S'_1$ (resp., $S'_2$) of the string of intervals that are contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$. Provided that the pair of reciprocal indices involved has sum exceeding 1 (as is true here for $q_1^{-1}, p^{-1}$), the reasoning leading up to and including Young's estimate (6.4) in [40] can be applied to any pair of qualifying functions such that one is vector-valued, and the other is scalar-valued (a quick way to see this is to apply temporarily an arbitrary continuous linear functional, then invoke directly the results in [40] for a pair of scalar-valued functions, and then revert to norms in the ultimate vector-valued expressions).
+
+Applying Young's estimate (6.4), and then the technique in (10.8) of [40],
+together with (3.9) above, we can infer that for $1 \le \nu \le J$ we have, in terms
+of the Riemann zeta function $\zeta$,
+
$$
(3.10) \quad \left\| \Omega_{\nu} - \Lambda_{\nu} \right\| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_{q}^{q/q_1}(E(\cdot)x, [t_{\nu-1}, t_{\nu}]) \operatorname{var}_{p}(g_{\beta}, [t_{\nu-1}, t_{\nu}]).
$$
+
+Summing the estimates in (3.10) from $\nu = 1$ to $J$, and then applying Hölder's inequality (for the pair of indices $q_1, p$) to the resulting majorant, we find that
+
$$
(3.11) \quad \| S'_{1} - S'_{2} \| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi]) \operatorname{var}_p(g_\beta, \mathbb{T}).
$$
+
+If in the sums $S'_1$ and $S'_2$ we now let each $z'_\nu$ approach from the left the
+corresponding point $t_\nu$, then (3.8) gives
+
$$
(3.12) \quad \left\| \sum_{j=1}^{N} E(a_{j-1})x \{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\} - \sum_{m=1}^{M} E(b_{m-1})x \{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\} \right\|
\le 2(2\varepsilon)^{(q_1-q)/q_1} \{1 + \zeta(q_1^{-1} + p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi])\, \rho.
$$
+
+For notational convenience, let us denote by $\delta_\epsilon$ the majorant in (3.12), while
+keeping in mind that $\delta_\epsilon \to 0$ as $\epsilon \to 0^+$. After a summation by parts is
+performed on each of the vector-valued sums appearing in the minorant of
+(3.12), we find that, in the notation of (2.2), the estimate (3.12) can be
+rewritten as follows:
+
+$$
+(3.13) \quad \| \tilde{\mathcal{S}}(\mathcal{P}; g_\beta(e^{i\cdot}), E) x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E) x \| \le \delta_\varepsilon.
+$$
+
+Upon letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in (3.13), while simultaneously holding fixed both the arbitrary refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$ and the arbitrary $\beta \in B$, we get
+
$$
(3.14) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_\beta(e^{it})\, dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E)x \right\| \le \delta_\varepsilon.
$$
+
+Next, while holding $\mathcal{P}, \mathcal{Q}$ fixed in (3.13), we let $\beta$ run through $B$ to obtain,
+via the pointwise convergence on $\mathbb{T}$,
+
+$$
+\|\tilde{\mathcal{S}}(\mathcal{P}; g(e^{i\cdot}), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x\| \le \delta_\varepsilon.
+$$
+
Letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in this last estimate yields, for every refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$,

$$
\left\| \int_{[0,2\pi]}^{\oplus} g(e^{it})\, dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x \right\| \leq \delta_{\varepsilon}.
$$
+
+Combining this estimate with (3.14), we find that for all $\beta \in B$, and every
+refinement $\mathcal{Q}$ of $\mathcal{P}_{\varepsilon}$,
+
$$
(3.15) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it})\, dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it})\, dE(t)x \right\|
\le 2\delta_{\varepsilon} + \| \tilde{\mathcal{S}}(\mathcal{Q}; g_{\beta}(e^{i\cdot}), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x \| .
$$
+
+In (3.15), we now specialize $\mathcal{Q}$ to be $\mathcal{P}_{\varepsilon}$, and we see from the pointwise
+convergence of $\{g_{\beta}\}_{\beta \in B}$ to $g$ on $\mathbb{T}$ that for all sufficiently large $\beta \in B$,
+
+$$
+\left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x \right\| \le 3\delta_{\varepsilon}. \blacksquare
+$$
+
**REMARK 3.10.** Our treatment of the spectral integration of functions of higher variation emphasizes applications thereof to a unified framework of trigonometrically well-bounded operators and related periodic functions. For this purpose $[0, 2\pi]$ conveniently serves as the fundamental interval. It is worth noting, however, that the above Theorems 3.7 and 3.9 do not need to be tied directly to trigonometrically well-bounded operators, since they readily imply their analogues for spectral families concentrated on arbitrary intervals by using simple affine changes of the real variable (e.g., mapping $[0, 2\pi]$ onto an interval $J = [a, b]$). The outcome, which includes an extension of the BV($J$)-functional calculus induced by spectral families (2.3), can be stated as follows.
+
**THEOREM 3.11.** Let $E(\cdot)$ be a spectral family of projections in a super-reflexive Banach space $X$. Suppose that $E(\cdot)$ is concentrated on a compact interval $J = [a, b]$, and let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\operatorname{var}_q(E) < \infty$. Let $p \in (1, q')$. Then the spectral integral $\int_J \Phi\, dE$ exists for each $\Phi \in V_p(J)$, and the mapping $\Phi \in V_p(J) \mapsto \int_J^\oplus \Phi\, dE$ is a continuous identity-preserving homomorphism of the Banach algebra $V_p(J)$ into the Banach algebra $\mathfrak{B}(X)$ such that
+
+$$ \left\| \int_J^\oplus \Phi dE \right\| \le K_{p,q} \operatorname{var}_q(E) \| \Phi \|_{V_p(J)} \quad \text{for all } \Phi \in V_p(J). $$
+
+If $\{\Phi_\beta\}_{\beta \in B}$ is a net of mappings from $J$ into $\mathbb{C}$ satisfying
+
+$$ \sup\{\operatorname{var}_p(\Phi_\beta, J) : \beta \in B\} < \infty, $$
+
+and such that for each $\beta \in B$, and each $t_0 \in (a, b]$,
+
+$$ \lim_{t \to t_0^-} \Phi_\beta(t) = \Phi_\beta(t_0), $$
+
+and if $\{\Phi_\beta\}_{\beta \in B}$ converges pointwise on $J$ to a complex-valued function $\Phi$, then $\Phi \in V_p(J)$, and the net
+
+$$ \left\{ \int_J^\oplus \Phi_\beta dE \right\}_{\beta \in B} $$
+
+converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_J^\oplus \Phi dE$.
+
+**4. Some consequences.** The stage is almost set for the main result of this section (Theorem 4.3), which will establish the precompactness relative to the strong operator topology of the set of rotated Hilbert averages $\tilde{W}$ corresponding to a trigonometrically well-bounded operator $U$ on a super-reflexive space. In order to obtain this result, we shall also require the following two auxiliary items from the literature.
+
+**PROPOSITION 4.1.** Suppose that $1 \le p < \infty$. Then we have, for the sequence of trigonometric polynomials $\{s_n\}_{n=1}^\infty$ in (2.7),
+
+$$ (4.1) \qquad \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if and only if} \quad p > 1. $$
+
*Proof*. Since, as was noted in conjunction with (2.7), $\operatorname{var}_1(s_n, \mathbb{T}) \to \infty$ as $n \to \infty$, it suffices to show that

$$ \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if } p > 1. $$
+
+The derivation of this is included in §12 of the article [40]. ■
+
In view of this, the set $\mathfrak{S}$ consisting of all rotates of $\{s_n : n \in \mathbb{N}\}$ must satisfy
+
+$$ (4.2) \qquad \sup_{n \in \mathbb{N}, z \in \mathbb{T}} \|s_n((\cdot)z)\|_{V_p(\mathbb{T})} < \infty \quad \text{if } p > 1, $$
+
+by virtue of (2.8), and because $\{s_n\}_{n=1}^\infty$ is the sequence of partial sums for the Fourier series of a BV($\mathbb{T}$)-function, whence
+
+$$ \sup_{n \in \mathbb{N}} \|s_n\|_{L^\infty(\mathbb{T})} < \infty. $$
+
+The second auxiliary item we shall rely on is the following convenient formulation of the “Helly Selection Theorem for Functions of Bounded p-Variation” (Theorem 2.4 of [36]). (Although it will not be an issue for us, we note that in the parlance of [36], the symbol $\text{var}_p$ denotes what is, in the sense of our notation, $\text{var}_p^p$.)
+
+**THEOREM 4.2.** Let $\mathcal{F}$ be a sequence of functions mapping a subset $\mathcal{M}$ of $\mathbb{R}$ to a metric space $\mathcal{Y}$, and such that, for some $p \in [1, \infty)$, $\mathcal{F}$ has uniformly bounded $p$-variation on $\mathcal{M}$ (in symbols, $\sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty$). Suppose further that for each $t \in \mathcal{M}$, $\{F(t) : F \in \mathcal{F}\}$ has compact closure in $\mathcal{Y}$. Then $\mathcal{F}$ has a subsequence $\{f_n\}_{n=1}^\infty$ pointwise convergent on $\mathcal{M}$ to a function $f : \mathcal{M} \to \mathcal{Y}$ such that
+
+$$ \text{var}_p(f, \mathcal{M}) \leq \sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty. $$
+
+**THEOREM 4.3.** If $U$ is a trigonometrically well-bounded operator on a super-reflexive Banach space $X$, then the closure, relative to the strong operator topology, of the class $\tilde{\mathcal{W}}$ specified in (1.3) by
+
+$$ (4.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\} $$
+
+is compact in the strong operator topology, and hence, in particular,
+
+$$ (4.4) \qquad \sup\{\|T\| : T \in \tilde{\mathcal{W}}\} < \infty. $$
+
+Conversely, if $\mathcal{X}_0$ is a reflexive Banach space, and $U \in \mathfrak{B}(\mathcal{X}_0)$ is an invertible operator such that (4.4) holds, then $U$ is trigonometrically well-bounded.
+
+*Proof.* Let $E(\cdot)$ be the spectral decomposition of $U$, and choose $q,p$ as in the hypotheses of Theorem 3.7. Let $x \in X \setminus \{0\}$. We are required to show that the set $\tilde{\mathcal{W}}x$ is totally bounded in the metric space defined by the norm of $X$. For this purpose, let $\mathcal{G}$ be a sequence in $\tilde{\mathcal{W}}x$. Hence for some sequence $\mathcal{F}$ taken from the set of trigonometric polynomials $\mathfrak{S}$ appearing in the minorant of (4.2), we can express $\mathcal{G}$ as $\mathcal{F}(U)x$. By virtue of (4.2) and Theorem 4.2, we can extract from the sequence of trigonometric polynomials $\mathcal{F}$ a subsequence $\{f_k\}_{k=1}^\infty$ pointwise convergent on $\mathbb{T}$ to a function $f : \mathbb{T} \to \mathbb{C}$ such that
+
+$$ \text{var}_p(f, \mathbb{T}) \leq \sup\{\text{var}_p(F, \mathbb{T}) : F \in \mathfrak{S}\} < \infty. $$
+
By Theorem 3.9, applied to $\{f_k\}_{k=1}^\infty$, we see that $\{f_k(U)\}_{k=1}^\infty$ converges in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f(e^{it})\, dE(t)$. In particular, $\{f_k(U)x\}_{k=1}^\infty$ converges in $X$, and so $\tilde{\mathcal{W}}x$ is totally bounded, as required.
+
+The converse conclusion follows directly from Proposition 1.1, since for each $z \in \mathbb{T}$, the $(C, 1)$ averages appearing in (1.2) are the means of the corresponding discrete Hilbert averages in (4.3). ■
+
+An application of Theorem 3.7 of [12] to (4.4) yields the following improvement of Theorem 2.6.
+
**THEOREM 4.4.** Let $X$ be a super-reflexive Banach space, let $U \in \mathfrak{B}(X)$ be trigonometrically well-bounded, and let $E(\cdot)$ be the spectral decomposition of $U$. Then for each $f \in \text{BV}(\mathbb{T})$, the series $\sum_{k=-\infty}^{\infty} \hat{f}(k)U^k$ converges in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f^{\#}(t)\, dE(t)$.
+
+In the presence of super-reflexivity, we now also have the following extension of Theorem 2.6 from $\text{BV}(\mathbb{T})$ to spaces $V_p(\mathbb{T})$, for appropriate $p > 1$.
+
+**THEOREM 4.5.** Let $X$ be a super-reflexive Banach space, and let $U \in \mathfrak{B}(X)$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$, let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $x \in X$,
+
+$$ (4.5) \quad \left\| \sum_{\nu=-n}^{n} \left(1 - \frac{|\nu|}{n+1}\right) \hat{\phi}(\nu) U^\nu x - \left\{ \int_{[0,2\pi]}^\oplus \phi^{\#}(t) dE(t) \right\} x \right\| \to 0 \quad \text{as } n \to \infty. $$
+
+*Proof.* Clearly, the sequence of trigonometric polynomials $\{\kappa_n * \phi\}_{n \ge 0}$ has the property that $\sup_{n \ge 0} \| \kappa_n * \phi \|_{V_p(\mathbb{T})} < \infty$, and by Fejér's Theorem, $(\kappa_n * \phi)(e^{it}) \to \phi^{\#}(t)$ for all $t \in \mathbb{R}$. The desired conclusion is now an immediate consequence of Theorem 3.9 applied to the pointwise convergent sequence $\{\kappa_n * \phi\}_{n \ge 0}$. ■
+
+**REMARK 4.6.** In contrast to the situation for $\text{BV}(\mathbb{T})$-functions in Theorem 4.4, it is an open question whether or not one can, for the general $\phi \in V_p(\mathbb{T})$, improve the strong $(C, 1)$-convergence in (4.5) to strong convergence of the series $\sum_{\nu=-\infty}^{\infty} \hat{\phi}(\nu)U^{\nu}$. In this regard, one can use Theorem 3.1 of [37] in combination with Theorem 4.5 to obtain the following partial result in the positive direction. We omit the details for expository reasons.
+
+**PROPOSITION 4.7.** Suppose that $\mathcal{Y}$ is a UMD space having an unconditional basis, let $U \in \mathfrak{B}(\mathcal{Y})$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $y \in \mathcal{Y}$ we have, for almost all $z \in \mathbb{T}$,
+
+$$ \left\| \left( \sum_{k=-n}^{n} \hat{\phi}(k) U^k z^k \right) y - \left( \int_{[0,2\pi]}^\oplus (\phi_z)^{\#}(t) dE(t) \right) y \right\|_{\mathcal{Y}} \to 0 \quad \text{as } n \to \infty. $$
+
+**REMARK 4.8.** Since the Haar system is an unconditional basis for $L^r([0, 1])$, $1 < r < \infty$, the space $L^r(\mathbb{T})$ satisfies the hypotheses on $\mathcal{Y}$ of Proposition 4.7. In particular, by specializing to the value $r = 2$, we see that any separable Hilbert space (finite-dimensional or infinite-dimensional) satisfies these hypotheses on $\mathcal{Y}$.
+---PAGE_BREAK---
+
+**5. Operator-weighted Hilbert sequence spaces and trigonometrically well-bounded shift operators.** Henceforth, $\mathcal{R}$ will be an arbitrary Hilbert space with inner product $\langle \cdot, \cdot \rangle$. As shown in Theorem 2.3 of [16], shifts on appropriate operator-weighted Hilbert sequence spaces serve as a model for the general behavior of trigonometrically well-bounded operators on arbitrary Hilbert spaces. More specifically, to any invertible operator $V \in \mathfrak{B}(\mathcal{R})$ there correspond a bilateral operator-valued weight sequence $\mathfrak{W}_V \subseteq \mathfrak{B}(\mathcal{R})$ and an affiliated Hilbert sequence space $\ell^2(\mathfrak{W}_V)$ such that $V$ is trigonometrically well-bounded on $\mathcal{R}$ if and only if the right bilateral shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W}_V)$; moreover, if this is the case, then the norm properties of trigonometric polynomials of $\mathcal{R}$ mirror the norm properties of trigonometric polynomials of $V$. (See (5.6) below. For additional background facts regarding these matters, see [12].) In this section, we shall discuss how application of the preceding sections to this circle of ideas in Hilbert space affords some new insights into the role of the Hilbert transform and of multiplier theory in non-commutative analysis.
+
+We begin by describing the relevant class of operator-weighted Hilbert sequence spaces. An *operator-valued weight sequence* on $\mathcal{R}$ will be a bilateral sequence $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty} \subseteq \mathfrak{B}(\mathcal{R})$ such that for each $k \in \mathbb{Z}$, $W_k$ is a positive, invertible, self-adjoint operator. We associate with $\mathfrak{W}$ the weighted Hilbert space $\ell^2(\mathfrak{W})$ consisting of all sequences $x = \{x_k\}_{k=-\infty}^{\infty} \subseteq \mathcal{R}$ such that
+
+$$ \sum_{k=-\infty}^{\infty} \langle W_k x_k, x_k \rangle < \infty, $$
+
+and furnished with the inner product $\langle \langle \cdot, \cdot \rangle \rangle$ specified by
+
+$$ \langle\langle x, y \rangle\rangle = \sum_{k=-\infty}^{\infty} \langle W_k x_k, y_k \rangle. $$
+
+Thus, $\ell^2(\mathfrak{W})$ is a generalization to non-commutative analysis of the $\ell^2$-spaces defined by scalar-valued weight sequences in the special case where $\mathcal{R} = \mathbb{C}$. (For the continuous variable generalization from scalar-valued weights to operator-valued weights, see [39].) Note that for each $z \in \mathbb{T}$, there is a natural unitary operator $\Delta_z$ defined on $\ell^2(\mathfrak{W})$ by writing $\Delta_z(\{x_k\}_{k=-\infty}^{\infty}) = \{z^k x_k\}_{k=-\infty}^{\infty}$.
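For finitely supported sequences, the inner product $\langle\langle \cdot, \cdot \rangle\rangle$ can be evaluated directly. The following sketch is our own illustration, assuming $\mathcal{R} = \mathbb{C}^2$ and two hypothetical positive definite weights; it computes $\sum_k \langle W_k x_k, y_k \rangle$:

```python
def mat_vec(W, x):
    """Apply a small matrix W to a vector x."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

def dot(u, v):
    """Complex inner product <u, v> = sum_i u_i * conj(v_i)."""
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def weighted_inner(weights, x, y):
    """<<x, y>> = sum_k <W_k x_k, y_k> over the finite support."""
    return sum(dot(mat_vec(weights[k], x[k]), y[k]) for k in x if k in y)

# R = C^2; two positive definite weights on indices 0 and 1 (illustrative values)
W = {0: [[2, 0], [0, 1]], 1: [[1, 0.5], [0.5, 1]]}
x = {0: [1 + 0j, 0j], 1: [0j, 1 + 0j]}
print(weighted_inner(W, x, x))   # -> (3+0j): <W_0 x_0, x_0> + <W_1 x_1, x_1> = 2 + 1
```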
+
+The links between the considerations of the previous sections and $\ell^2(\mathfrak{W})$ stem from the interplay between $\ell^2(\mathfrak{W})$ and the discrete Hilbert kernel $h: \mathbb{Z} \to \mathbb{R}$, which, in terms of the function $\phi_0 \in \text{BV}(\mathbb{T})$ specified in conjunction with (2.7), is expressed by $h = \hat{\phi}_0$. Thus $h(0) = 0$, and $h(k) = k^{-1}$ for $k \in \mathbb{Z} \setminus \{0\}$. The truncates $\{h_N\}_{N=1}^{\infty}$ of the discrete Hilbert kernel $h$ are defined by writing, for each $N \in \mathbb{N}$ and each $k \in \mathbb{Z}$, $h_N(k) = h(k)$ if $|k| \le N$, and $h_N(k) = 0$ if $|k| > N$. The formal operator of convolution by
+---PAGE_BREAK---
+
+$h$ on $\ell^2(\mathfrak{W})$ will be referred to as the discrete Hilbert transform, and will be symbolized by $D$ (convolution by $h_N$ on $\ell^2(\mathfrak{W})$ will be denoted by $D_N$). If $h$ defines a bounded convolution operator from $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$, we shall say that $\mathfrak{W}$ possesses the *Treil–Volberg property*. It was shown in [12] that in the context of $\ell^2(\mathfrak{W})$, one can define an operator-valued counterpart (the discrete analogue of [39]) of the Muckenhoupt $A_2$-weight condition; if this condition is satisfied by $\mathfrak{W}$, we write $\mathfrak{W} \in A_2(\mathcal{R})$. Since we do not need this $A_2(\mathcal{R})$ weight condition for our present considerations, we shall not pursue it further, except to note that the condition $\mathfrak{W} \in A_2(\mathcal{R})$ is always necessary, but, in the continuous-variable case with infinite-dimensional $\mathcal{R}$, is known not to be sufficient, for $\mathfrak{W}$ to possess the Treil–Volberg property (see, respectively, Proposition 4.4 of [12] and Theorem 1.1 of [27]).
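In the scalar special case $\mathcal{R} = \mathbb{C}$ with trivial weights, the truncates $D_N$ are ordinary convolutions by $h_N$ and can be computed directly. A minimal sketch (the helper names are ours):

```python
def h(k):
    """Discrete Hilbert kernel: h(0) = 0, h(k) = 1/k otherwise."""
    return 0.0 if k == 0 else 1.0 / k

def D_N(x, N):
    """Convolution by the truncate h_N on a finitely supported sequence.

    x is a dict {index: value}; scalar case (R = C, trivial weights)."""
    out = {}
    for j, xj in x.items():
        for k in range(-N, N + 1):
            if k != 0:
                out[j + k] = out.get(j + k, 0.0) + h(k) * xj
    return out

delta = {0: 1.0}        # unit mass at index 0
y = D_N(delta, 3)       # h_N itself: {k: 1/k for 0 < |k| <= 3}
print(y)
```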
+
+The connection between the Treil–Volberg property and the right (bilateral) shift $\mathcal{R}: \ell^2(\mathfrak{W}) \to \mathcal{R}^\mathbb{Z}$ specified by
+
+$$ \mathcal{R}(\{x_k\}_{k=-\infty}^{\infty}) = \{x_{k-1}\}_{k=-\infty}^{\infty} $$
+
+is expressed as follows (Theorem 4.12 of [12]).
+
+**PROPOSITION 5.1.** Let $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ be an operator-valued weight sequence on the arbitrary Hilbert space $\mathcal{R}$. Then the following assertions are equivalent:
+
+(i) $\mathfrak{W}$ has the Treil–Volberg property.
+
+(ii) The right shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W})$.
+
+(iii) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that
+
+$$ (5.1) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{\mathcal{R}^k}{k} \right\| < \infty. $$
+
+**REMARK 5.2.** If $\mathcal{R} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$, then for each $z \in \mathbb{T}$, $\Delta_z \mathcal{R} \Delta_{\bar{z}} = z \mathcal{R}$, and hence the condition (1.2) reduces to (5.1) in the context of Proposition 5.1(iii).
+
+By virtue of (4.4), we can add the following two conditions to the list of equivalent conditions in Proposition 5.1.
+
+**PROPOSITION 5.3.** Under the hypotheses of Proposition 5.1, each of the following two conditions is equivalent to the conditions (i)–(iii) listed therein:
+
+(iv) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that
+
+$$ (5.2) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \frac{\mathcal{R}^k}{k} \right\| < \infty. $$
+---PAGE_BREAK---
+
+(v) $\{D_N\}_{N=1}^{\infty} \subseteq \mathfrak{B}(\ell^2(\mathfrak{W}))$, with
+
+$$ (5.3) \qquad \sup_{N \in \mathbb{N}} \|D_N\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} < \infty. $$
+
+*Proof*. It is elementary that (iv) ⇒ (iii). The implication (ii) ⇒ (iv) is a consequence of (4.4). If (iv) holds, then for each $N \in \mathbb{N}$,
+
+$$ (5.4) \qquad s_N(\mathcal{R}) = D_N, $$
+
+and hence (v) holds. So the proof of Proposition 5.3 boils down to assuming (v) in order to show any one of the conditions (i) through (iv). Since there is no a priori reason to infer from (v) that $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$, we cannot make immediate use of (5.4), and so we shall sidestep this difficulty by establishing (i) directly. Since the Hilbert space $\ell^2(\mathfrak{W})$ is, in particular, reflexive, it follows from (5.3) that the closure of
+
+$$ \mathcal{D} = \{D_N : N \in \mathbb{N}\} $$
+
+in the weak operator topology of $\mathfrak{B}(\ell^2(\mathfrak{W}))$ is compact in the weak operator topology of $\mathfrak{B}(\ell^2(\mathfrak{W}))$. Consequently, there are a subnet $\{D_{N_\gamma}\}_{\gamma \in \Gamma}$ and an operator $\mathfrak{H} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$ such that
+
+$$ (5.5) \qquad D_{N_\gamma} \to \mathfrak{H} \quad \text{in the weak operator topology of } \mathfrak{B}(\ell^2(\mathfrak{W})). $$
+
+Hence it will suffice to verify that for every vector $y = \{y_k\}_{k=-\infty}^{\infty} \in \ell^2(\mathfrak{W})$ such that the support of $y$ is a singleton, $\mathfrak{H}$ acts on $y$ as convolution by $h$. It is a routine matter to perform this verification by using (5.5) in conjunction with such vectors. ■
+
+**REMARK 5.4.** In classical single-variable Fourier analysis, as well as in its generalizations to norm inequalities involving scalar-valued weights, the boundedness of the relevant Hilbert transform goes hand-in-hand with the boundedness of pillars like the Hardy–Littlewood maximal function and the maximal Hilbert transform—which leave in their wake the uniform boundedness of the Hilbert transform’s truncates. This familiar scenario ultimately entails the validity of the relevant version of the Marcinkiewicz Multiplier Theorem and of the Littlewood–Paley Theorem. However, in the framework of condition (i) of Proposition 5.1 such underpinnings as maximal operators are lacking, and moreover, Theorem 6.1 of [16] shows that there is an operator-valued weight sequence $\mathfrak{W}_0$ on the Hilbert space $\ell^2(\mathbb{N})$ such that $\mathfrak{W}_0$ enjoys the Treil–Volberg property, but the analogues of the classical Marcinkiewicz Multiplier Theorem and the Littlewood–Paley Theorem fail to hold on $\ell^2(\mathfrak{W}_0)$. One motivation for obtaining the above implication (i) ⇒ (v) is that it, nevertheless, confirms the survival of the uniform boundedness for the Hilbert transform’s truncates, in an environment where so many mainstays fail to carry over. The next theorem adds still more to the
+---PAGE_BREAK---
+
+positive side of the ledger by extending this type of boundedness result to
+appropriate function classes.
+
+**THEOREM 5.5.** Suppose that $\mathcal{R}$ is an arbitrary Hilbert space, and $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ is an operator-valued weight sequence on $\mathcal{R}$ having the Treil–Volberg property. Then there is $\gamma \in (1, \infty)$ such that for each $p$ satisfying $1 \le p < \gamma$ and each function $\phi \in V_p(\mathbb{T})$, convolution by the inverse Fourier transform $\phi^\vee$ on $\ell^2(\mathfrak{W})$ is a bounded linear mapping $\mathfrak{F}_\phi$ of $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$ satisfying
+
+$$
+\|\mathfrak{F}_{\phi}\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} \leq K_{\mathfrak{W},p} \|\phi\|_{V_p(\mathbb{T})}.
+$$
+
+*Proof.* Combine Theorems 4.2 and 4.3 of [16] and Corollary 4.4 of [16] with Theorem 3.7 above. ■
+
+We finish this section with a brief sketch of how the above scene furnishes a model for estimates with trigonometrically well-bounded operators on Hilbert spaces. Suppose that $V \in \mathfrak{B}(\mathcal{R})$ is an invertible operator, and let $\mathfrak{W}_V$ be the operator-valued weight sequence on the Hilbert space $\mathcal{R}$ given by $\mathfrak{W}_V = \{(V^k)^* V^k\}_{k=-\infty}^{\infty}$. Lemma 2.2 of [16] and Theorem 2.3 of [16] guarantee that the right shift $\mathcal{R}$ is a bounded invertible linear mapping of $\ell^2(\mathfrak{W}_V)$ onto itself such that for every trigonometric polynomial $Q$,
+
+$$
+(5.6) \qquad \|Q(\mathcal{R})\|_{\mathfrak{B}(\ell^2(\mathfrak{W}_V))} = \sup_{z \in \mathbb{T}} \|Q(zV)\|_{\mathfrak{B}(\mathcal{R})}.
+$$
+
+In view of Proposition 1.1 and the equivalence of conditions (ii) and (iii)
+in Proposition 5.1, it follows directly from (5.6) that the right shift $\mathcal{R}$ is
+trigonometrically well-bounded on $\ell^2(\mathfrak{W}_V)$ if and only if $V$ is trigonometrically well-bounded on $\mathcal{R}$.
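For a concrete feel of $\ell^2(\mathfrak{W}_V)$ with the weight choice $W_k = (V^k)^* V^k$, note that $\langle\langle x, x \rangle\rangle = \sum_k \|V^k x_k\|^2$ for finitely supported $x$. The sketch below is our own illustration, restricted to sequences supported on nonnegative indices and a real $2 \times 2$ matrix $V$:

```python
def mat_vec(A, x):
    """Apply a 2x2 matrix A to a vector x."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def apply_power(V, k, x):
    """Compute V^k x for k >= 0 by repeated application."""
    for _ in range(k):
        x = mat_vec(V, x)
    return x

def norm_sq(u):
    return sum(c * c for c in u)

V = [[1.0, 1.0], [0.0, 1.0]]   # invertible, not unitary

def weighted_norm_sq(x):
    """<<x, x>> with W_k = (V^k)* V^k equals sum_k ||V^k x_k||^2."""
    return sum(norm_sq(apply_power(V, k, xk)) for k, xk in x.items())

x = {0: [1.0, 0.0], 2: [0.0, 1.0]}
print(weighted_norm_sq(x))   # -> 6.0 = ||x_0||^2 + ||V^2 x_2||^2 = 1 + (4 + 1)
```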
+
+References
+
+[1] D. J. Aldous, *Unconditional bases and martingales in $L_p(F)$*, Math. Proc. Cambridge Philos. Soc. 85 (1979), 117-123.
+
+[2] B. Beauzamy, *Introduction to Banach Spaces and Their Geometry*, North-Holland Math. Stud. 68 (Notas de Mat. 86), Elsevier Science, New York, 1982.
+
+[3] E. Berkson, J. Bourgain, and T. A. Gillespie, *On the almost everywhere convergence of ergodic averages for power-bounded operators on $L^p$-subspaces*, Integral Equations Operator Theory 14 (1991), 678-715.
+
+[4] E. Berkson and H. R. Dowson, *On uniquely decomposable well-bounded operators*, Proc. London Math. Soc. (3) 22 (1971), 339-358.
+
+[5] E. Berkson and T. A. Gillespie, *AC functions on the circle and spectral families*, J. Operator Theory 13 (1985), 33-47.
+
+[6] —, —, *Fourier series criteria for operator decomposability*, Integral Equations Operator Theory 9 (1986), 767–789.
+
+[7] —, —, *Stečkin's theorem, transference, and spectral decompositions*, J. Funct. Anal. 70 (1987), 140–170.
+---PAGE_BREAK---
+
+[8] E. Berkson and T. A. Gillespie, *The spectral decomposition of weighted shifts and the $A_p$ condition*, Colloq. Math. (special volume dedicated to A. Zygmund) 60-61 (1990), 507-518.
+
+[9] —, —, *Spectral decompositions and harmonic analysis on UMD spaces*, Studia Math. 112 (1994), 13-49.
+
+[10] —, —, *Mean-boundedness and Littlewood-Paley for separation-preserving operators*, Trans. Amer. Math. Soc. 349 (1997), 1169-1189.
+
+[11] —, —, *The q-variation of functions and spectral integration of Fourier multipliers*, Duke Math. J. 88 (1997), 103-132.
+
+[12] —, —, *Mean₂-bounded operators on Hilbert space and weight sequences of positive operators*, Positivity 3 (1999), 101-133.
+
+[13] —, —, *Spectral integration from dominated ergodic estimates*, Illinois J. Math. 43 (1999), 500-519.
+
+[14] —, —, *Spectral decompositions, ergodic averages, and the Hilbert transform*, Studia Math. 144 (2001), 39-61.
+
+[15] —, —, *A Tauberian theorem for ergodic averages, spectral decomposability, and the dominated ergodic estimate for positive invertible operators*, Positivity 7 (2003), 161-175.
+
+[16] —, —, *Shifts as models for spectral decomposability on Hilbert space*, J. Operator Theory 50 (2003), 77-106.
+
+[17] —, —, *Operator means and spectral integration of Fourier multipliers*, Houston J. Math. 30 (2004), 767-814.
+
+[18] —, —, *The q-variation of functions and spectral integration from dominated ergodic estimates*, J. Fourier Anal. Appl. 10 (2004), 149-177.
+
+[19] —, —, *An $M_q(T)$-functional calculus for power-bounded operators on certain UMD spaces*, Studia Math. 167 (2005), 245-257.
+
+[20] E. Berkson, T. A. Gillespie, and P. S. Muhly, *Abstract spectral decompositions guaranteed by the Hilbert transform*, Proc. London Math. Soc. (3) 53 (1986), 489-517.
+
+[21] D. Blagojevic, *Spectral families and geometry of Banach spaces*, PhD thesis, Univ. of Edinburgh, 2007; http://www.era.lib.ed.ac.uk/handle/1842/2389.
+
+[22] J. Bourgain, *Some remarks on Banach spaces in which martingale difference sequences are unconditional*, Ark. Mat. 21 (1983), 163-168.
+
+[23] V. V. Chistyakov and O. E. Galkin, *On maps of bounded p-variation with p > 1*, Positivity 2 (1998), 19-45.
+
+[24] R. Coifman, J. L. Rubio de Francia, et S. Semmes, *Multiplicateurs de Fourier de $L^p(\mathbb{R})$ et estimations quadratiques*, C. R. Acad. Sci. Paris Sér. I Math. 306 (1988), 351-354.
+
+[25] M. M. Day, *Reflexive Banach spaces not isomorphic to uniformly convex spaces*, Bull. Amer. Math. Soc. 47 (1941), 313-317.
+
+[26] P. Enflo, *Banach spaces which can be given an equivalent uniformly convex norm*, Israel J. Math. 13 (1972), 281-288.
+
+[27] T. A. Gillespie, S. Pott, S. Treil, and A. Volberg, *Logarithmic growth for weighted Hilbert transforms and vector Hankel operators*, J. Operator Theory 52 (2004), 103-112.
+
+[28] G. H. Hardy, *Weierstrass's non-differentiable function*, Trans. Amer. Math. Soc. 17 (1916), 301-325.
+
+[29] G. H. Hardy and J. E. Littlewood, *A convergence criterion for Fourier series*, Math. Z. 28 (1928), 612-634.
+
+[30] R. C. James, *Super-reflexive spaces with bases*, Pacific J. Math. 41 (1972), 409-419.
+
+[31] —, *Super-reflexive Banach spaces*, Canad. J. Math. 24 (1972), 896-904.
+---PAGE_BREAK---
+
+[32] Y. Katznelson, *An Introduction to Harmonic Analysis*, Dover, New York, 1976.
+
+[33] J. Lindenstrauss and L. Tzafriri, *Classical Banach Spaces II: Function Spaces*, Ergeb. Math. Grenzgeb. 97, Springer, New York, 1979.
+
+[34] B. Maurey, *Système de Haar*, in: Séminaire Maurey-Schwartz 1974-1975, Centre Math. École Polytechnique, Paris, 1975, 26 pp.
+
+[35] G. Pisier, *Un exemple concernant la super-réflexivité*, ibid., 12 pp.
+
+[36] J. E. Porter, *Helly's selection principle for functions of bounded p-variation*, Rocky Mountain J. Math. 35 (2005), 675-679.
+
+[37] J. L. Rubio de Francia, *Fourier series and Hilbert transforms with values in UMD Banach spaces*, Studia Math. 81 (1985), 95-105.
+
+[38] P. G. Spain, *On well-bounded operators of type (B)*, Proc. Edinburgh Math. Soc. (2) 18 (1972), 35-48.
+
+[39] S. Treil and A. Volberg, *Wavelets and the angle between past and future*, J. Funct. Anal. 143 (1997), 269-308.
+
+[40] L. C. Young, *An inequality of the Hölder type, connected with Stieltjes integration*, Acta Math. 67 (1936), 251-282.
+
+Earl Berkson
+Department of Mathematics
+University of Illinois
+1409 W. Green Street
+Urbana, IL 61801 U.S.A.
+E-mail: berkson@math.uiuc.edu
+
+Received January 30, 2010
+
+Revised version July 7, 2010
+
+(6804)
\ No newline at end of file
diff --git a/samples/texts_merged/822209.md b/samples/texts_merged/822209.md
new file mode 100644
index 0000000000000000000000000000000000000000..b853e54328468a23032f581d5199278dec09f18c
--- /dev/null
+++ b/samples/texts_merged/822209.md
@@ -0,0 +1,738 @@
+
+---PAGE_BREAK---
+
+XHX – A Framework for Optimally Secure
+Tweakable Block Ciphers from Classical Block
+Ciphers and Universal Hashing
+
+Ashwin Jha¹, Eik List², Kazuhiko Minematsu³,
+Sweta Mishra⁴, and Mridul Nandi¹
+
+¹ Indian Statistical Institute, Kolkata, India. {ashwin_r, mridul}@isical.ac.in
+
+² Bauhaus-Universität Weimar, Weimar, Germany. eik.list@uni-weimar.de
+
+³ NEC Corporation, Tokyo, Japan. k-minematsu@ah.jp.nec.com
+
+⁴ IIIT, Delhi, India. swetam@iiitd.ac.in
+
+**Abstract.** Tweakable block ciphers are important primitives for designing cryptographic schemes with high security. In the absence of a standardized tweakable block cipher, constructions built from classical block ciphers remain an interesting research topic in both theory and practice. Motivated by Mennink's $\tilde{F}[2]$ publication from 2015, Wang et al. proposed 32 optimally secure constructions at ASIACRYPT'16, each of which employs two calls to a classical block cipher. Yet, those constructions were still limited to *n*-bit keys and *n*-bit tweaks. Thus, applications with more general key or tweak lengths still lack support. This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys in an efficient manner from the block cipher, generalizing the constructions by Wang et al.; finally, we propose efficient instantiations for the hash functions used.
+
+**Keywords:** Provable security · ideal-cipher model · tweakable block cipher
+
+# 1 Introduction
+
+*Tweakable Block Ciphers.* In addition to the usual key and plaintext inputs of classical block ciphers, tweakable block ciphers (TBCs, for short) are cryptographic transforms that take an additional public parameter called a *tweak*. So, a tweakable block cipher $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ is a permutation on the plaintext/ciphertext space $\mathcal{M}$ for every combination of key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, where $\mathcal{K}, \mathcal{T}$, and $\mathcal{M}$ are assumed to be non-empty sets. Their first use in the literature was due to Schroeppel and Orman in the Hasty Pudding Cipher, where the tweak was still called *spice* [18]. Liskov, Rivest, and Wagner [11] then formalized the concept in 2002.
+
+In the recent past, the status of tweakable block ciphers has become more prominent, last but not least due to the advent of efficient dedicated constructions,
+---PAGE_BREAK---
+
+such as Deoxys-BC or Joltik-BC that were proposed alongside the TWEAKEY framework [6], or e.g., SKINNY [1]. However, in the absence of a standard, tweakable block ciphers based on classical ones remain a highly interesting topic.
+
+**Blockcipher-based Constructions.** Liskov et al. [11] described two constructions, known as LRW1 and LRW2. Rogaway [17] proposed XE and XEX as refinements of LRW2 for updating tweaks efficiently and for reducing the number of keys. These schemes are efficient in the sense that they need one call to the block cipher plus one call to a universal hash function. Both XE and XEX are provably secure in the standard model, i.e., assuming the block cipher is a (strong) pseudorandom permutation, they are secure up to $O(2^{n/2})$ queries when using an $n$-bit block cipher. Since this bound results from the birthday paradox on input collisions, the security of those constructions is inherently limited by the birthday bound (BB-secure).
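The XE/XEX masking idea is simple to state in code: LRW2 computes $E_K(M \oplus \Delta) \oplus \Delta$, where the mask $\Delta$ is a universal hash of the tweak. The sketch below is a toy illustration with an 8-bit "cipher" and an affine stand-in hash, both our own assumptions rather than real instantiations:

```python
import random

def toy_blockcipher(key, n_bits=8):
    """Key-dependent random permutation standing in for E_K (toy, insecure)."""
    perm = list(range(2 ** n_bits))
    random.Random(key).shuffle(perm)
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return perm.__getitem__, inv.__getitem__

def h(tweak):
    """Stand-in for an (almost-)XOR-universal hash of the tweak."""
    return (37 * tweak + 11) % 256

def lrw2_enc(E, tweak, m):
    delta = h(tweak)              # mask derived from the tweak
    return E(m ^ delta) ^ delta   # E_K(M xor Delta) xor Delta

def lrw2_dec(E_inv, tweak, c):
    delta = h(tweak)
    return E_inv(c ^ delta) ^ delta

E, E_inv = toy_blockcipher(key=42)
c = lrw2_enc(E, tweak=5, m=100)
assert lrw2_dec(E_inv, tweak=5, c=c) == 100   # decryption inverts encryption
```

For each fixed tweak, the masking yields a permutation of the message space, which is exactly what the TBC syntax requires.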
+
+**Constructions with Stronger Security.** Constructions with beyond-birthday-bound (BBB) security have been an interesting research topic. In [13], Minematsu proposed a rekeying-based construction. Landecker, Shrimpton, and Terashima [9] analyzed the cascade of two independent LRW2 instances, called CLRW2. Both constructions are secure up to $O(2^{2n/3})$ queries, however, at the price of two block-cipher calls per block, plus per-tweak rekeying or two calls to a universal hash function, respectively.
+
+For settings that demand stronger security, Lampe and Seurin [8] proved that the chained cascade of more instances of LRW2 could asymptotically approach a security of up to $O(2^n)$ queries, i.e. full $n$-bit security. However, the disadvantage is drastically decreased performance. An alternative direction has been initiated by Mennink [12], who also proposed TBC constructions from classical block ciphers, but proved the security in the ideal-cipher model. Mennink's constructions could achieve full $n$-bit security quite efficiently when both input and key are $n$ bits. In particular, his $\tilde{F}$[2] construction required only two block-cipher calls.
+
+Following Mennink's work, Wang et al. [20] proposed 32 constructions of optimally secure tweakable block ciphers from classical block ciphers. Their designs share an $n$-bit key, $n$-bit tweak, and $n$-bit plaintext, and linearly mix tweak, key, and the result of a second offline call to the block cipher. Their constructions have the desirable property of allowing to cache the result of the first block-cipher call; moreover, given a-priori known tweaks, some of their constructions further allow precomputing the result of the key schedule.
+
+All constructions by Wang et al. were restricted to $n$-bit keys and tweaks. While this limit was reasonable, it did not address tweakable block ciphers with tweaks longer than $n$ bits. Such constructions, however, are useful in applications with increased security needs, such as authenticated encryption or variable-input-length ciphers (e.g., [19]). Moreover, disk-encryption schemes are typically based on wide-block tweakable ciphers, where the physical location on disk (e.g., the sector ID) is used as the tweak, which can be arbitrarily long.
+---PAGE_BREAK---
+
+In general, extending the key length in the ideal-cipher model is far from trivial (see, e.g., [2,5,10]), and the key size in this model does not necessarily match the required tweak length. Moreover, many ciphers, like the AES-192 or AES-256, possess key and block lengths for which the constructions in [12,20] are inapplicable. In general, the tweak represents additional data accompanying the plaintext/ciphertext block, and no general reason exists why tweaks must be limited to the block length.
+
+Before proving the security of a construction, we have to specify the employed model. The standard model is well-established in the cryptographic community, despite the fact that its proofs rest on a few unproven assumptions, such as that a block cipher is a PRP, and ignore practical side-channel attacks. In the standard model, the adversary is given access only to either the *real construction* $\tilde{E}$ or an *ideal construction* $\tilde{\pi}$. In contrast, the ideal-cipher model assumes an ideal primitive (in our case, the classical ideal block cipher $E$ that is used in $\tilde{E}$) to which the adversary also has access in both worlds. Although a proof in the ideal-cipher model is no unconditional guarantee that no attacks exist when the construction is instantiated in practice [3], for us it allows to abstract away the details of the primitive for the sake of focusing on the security of the construction.
+
+A good example of TBCs proven in the standard model is XTX [14] by Minematsu and Iwata. XTX extends the tweak domain of a given tweakable block cipher $\tilde{E}: \{0,1\}^k \times \{0,1\}^t \times \{0,1\}^n \rightarrow \{0,1\}^n$ by hashing the arbitrary-length tweak to an $(n+t)$-bit value. The first $t$ bits serve as the tweak and the latter $n$ bits are XORed to both input and output of $\tilde{E}$. Given an $\epsilon$-AXU family of hash functions and a secure underlying tweakable block cipher, XTX is secure for up to $O(2^{(n+t)/2})$ queries in the standard model. However, no alternative to XTX exists in the ideal-cipher model yet.
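The XTX tweak extension described above is easy to sketch: hash the long tweak to an $(n+t)$-bit value, feed the first $t$ bits to the underlying TBC as its tweak, and XOR the remaining $n$ bits to input and output. Everything below (the 8/4-bit toy sizes, the affine stand-in hash, the function names) is our own assumption, purely for illustration:

```python
import random

N_BITS, T_BITS = 8, 4

def toy_tbc(key):
    """Toy tweakable cipher: an independent random permutation per (key, tweak)."""
    def enc(tw, m):
        perm = list(range(2 ** N_BITS))
        random.Random(key * 2 ** T_BITS + tw).shuffle(perm)
        return perm[m]
    return enc

def hash_tweak(long_tweak):
    """Stand-in for the universal hash: maps an arbitrary tweak to an
    (n + t)-bit value, split into a t-bit inner tweak and an n-bit mask."""
    v = (1000003 * long_tweak + 12345) % 2 ** (N_BITS + T_BITS)
    return v >> N_BITS, v % 2 ** N_BITS

def xtx_enc(tbc_enc, long_tweak, m):
    tw, mask = hash_tweak(long_tweak)
    return tbc_enc(tw, m ^ mask) ^ mask   # mask both input and output

enc = toy_tbc(key=7)
# for any fixed long tweak, XTX is a permutation of the n-bit message space:
assert len({xtx_enc(enc, 123456789, m) for m in range(2 ** N_BITS)}) == 256
```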
+
+**Contribution.** This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. [20]. Like them, the present work also uses the ideal-cipher model for its security analysis. As the major difference from their work, our proposal allows arbitrary tweak lengths and works for any block cipher of $n$-bit block size and $k$-bit key size. The security is guaranteed for up to $O(2^{(n+k)/2})$ queries, which yields $n$-bit security when $k \ge n$.
+
+Our contributions in the remainder of this work are threefold: First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys in an efficient manner from the block cipher, generalizing the constructions by Wang et al.; finally, we propose efficient instantiations for the employed hash functions for concreteness.
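The construction itself is specified later in the paper; purely to fix ideas, here is a toy sketch of the general shape suggested by the description above, namely a tweak hashed by three independently keyed hash functions whose outputs serve as block-cipher key, input mask, and output mask. Every name, parameter, and composition detail below is our own assumption, not the paper's specification:

```python
import random

N, K = 8, 8   # toy block and key sizes in bits

def ideal_cipher(key, m):
    """Toy stand-in for an ideal cipher: a fresh random permutation per key."""
    perm = list(range(2 ** N))
    random.Random(key).shuffle(perm)
    return perm[m]

def three_hashes(master_key, tweak):
    """Stand-in for three independently keyed universal hashes of the tweak."""
    r = random.Random(master_key * 1000003 + tweak)
    return r.randrange(2 ** K), r.randrange(2 ** N), r.randrange(2 ** N)

def gxhx_enc(master_key, tweak, m):
    subkey, in_mask, out_mask = three_hashes(master_key, tweak)
    return ideal_cipher(subkey, m ^ in_mask) ^ out_mask

# for every (key, tweak) pair, this is a permutation of the n-bit block space:
assert len({gxhx_enc(1, 99, m) for m in range(2 ** N)}) == 256
```

Deriving the subkey from the tweak is what lifts the collision bound from $2^{n/2}$ toward $2^{(n+k)/2}$: two queries only collide at the cipher if both the derived key and the masked input agree.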
+
+*Remark 1.* Recently, Naito [15] proposed the XKX framework of beyond-birthday-secure tweakable block ciphers, which shares similarities with the proposal in the present work. He proposed two instances, the birthday-secure XKX(1) and the beyond-birthday-secure XKX(2). In more detail, the nonce is processed by a block-cipher-based PRF which yields the block-cipher key for the current message; the counter is hashed with a universal hash function under a second, in-
+---PAGE_BREAK---
+
+**Table 1:** Comparison of XHX to earlier highly secure TBCs built upon classical block ciphers. ICM(n, k) denotes the ideal-cipher model for a block cipher with n-bit block and k-bit key; BC(n, k) and TBC(n, t, k) denote the standard-model (tweakable) block cipher of n-bit block, t-bit tweak, and k-bit key. \#Enc. = \#calls to the (tweakable) block cipher, and \#Mult. = \#multiplications over GF(2^n). a(b) = b out of a calls can be precomputed with the secret key; we define s = ⌈k/n⌉.
+
+| Scheme | Model | Tweak length in bit | Key in bit | Security | #Enc. | #Mult. | Reference |
+|---|---|---|---|---|---|---|---|
+| F̃[2] | ICM(n,n) | n | n | n | 2 | – | [12] |
+| Ẽ1, ..., Ẽ32 | ICM(n,n) | n | n | n | 2 (1) | – | [20] |
+| XTX | TBC(n,t,k) | any l | k + 2n | (n + t)/2 | 1 | 2⌈l/n⌉ | [14] |
+| XKX(2) | BC(n,k) | –* | k + n | min{n, k/2} | 1 | 1 | [15] |
+| XHX | ICM(n,k) | any l | k | (n + k)/2 | s + 1 (s) | s⌈l/n⌉ | This work |
+| XHX | ICM(n,k) | 2n | k | n | s + 1 (s) | s | This work |
+
+* XKX(2) employs a counter as tweak.
+
+dependent key to mask the input. In contrast to other proposals, including ours, Naito's construction demands both a counter and a nonce as parameters to overcome the birthday bound; as a standalone construction, its security reduces to n/2 bits if an adversary could use the same "nonce" value for all queries. Hence, XKX(2) is tailored only to certain domains, e.g., modes of operation in nonce-based authenticated encryption schemes. Our proposal differs from XKX in four aspects: (1) we do not pose limitations on the reuse of input parameters; moreover, (2) we do not require a minimum key length of n + k bits; (3) we do not use several independent keys, but employ the block cipher to derive the hashing keys; (4) finally, Naito's construction is proved in the standard model, whereas we consider the ideal-cipher model.
+
+The remainder is structured as follows: Section 2 briefly gives the preliminaries necessary for the rest of this work. Section 3 then defines the general construction, which we call GXHX for simplicity, and which hashes the tweak to three outputs. Section 4 continues with the definition and analysis of XHX, which derives the hashing keys from the block cipher. Section 5 describes and analyzes efficient instantiations of our hash functions depending on the tweak length. In particular, we propose instantiations for 2n-bit and arbitrary-length tweaks.
+
+## 2 Preliminaries
+
+**General Notation.** We use lowercase letters $x$ for indices and integers, uppercase letters $X, Y$ for binary strings and functions, and calligraphic uppercase letters $\mathcal{X}, \mathcal{Y}$ for sets. We denote the concatenation of binary strings $X$ and $Y$ by $X \parallel Y$ and the result of their bitwise XOR by $X \oplus Y$. For tuples of bit
+---PAGE_BREAK---
+
+strings $(X_1, \dots, X_n)$, $(Y_1, \dots, Y_n)$ of equal domain, we denote by $(X_1, \dots, X_n) \oplus (Y_1, \dots, Y_n)$ the element-wise XOR, i.e., $(X_1 \oplus Y_1, \dots, X_n \oplus Y_n)$. We indicate the length of $X$ in bits by $|X|$ and write $X_i$ for the $i$-th block. Furthermore, we denote by $X \leftarrow \mathcal{X}$ that $X$ is chosen uniformly at random from the set $\mathcal{X}$. We define three sets of particular interest: let $\text{Func}(\mathcal{X}, \mathcal{Y})$ be the set of all functions $F : \mathcal{X} \to \mathcal{Y}$, $\text{Perm}(\mathcal{X})$ the set of all permutations $\pi : \mathcal{X} \to \mathcal{X}$, and $\text{TPerm}(\mathcal{T}, \mathcal{X})$ the set of tweaked permutations over $\mathcal{X}$ with associated tweak space $\mathcal{T}$. $(X_1, \dots, X_x) \stackrel{n}{\leftarrow} X$ denotes that $X$ is split into $n$-bit blocks, i.e., $X_1 \| \dots \| X_x = X$, with $|X_i| = n$ for $1 \le i \le x-1$ and $|X_x| \le n$. Moreover, we define $\langle X \rangle_n$ to denote the encoding of a non-negative integer $X$ into its $n$-bit representation. Given an integer $x \in \mathbb{N}$, we define the function $\text{TRUNC}_x : \{0,1\}^* \to \{0,1\}^x$ to return the leftmost $x$ bits of its input if the input's length is at least $x$, and the input itself otherwise. For two sets $\mathcal{X}$ and $\mathcal{Y}$, a uniform random function $\rho : \mathcal{X} \to \mathcal{Y}$ maps inputs $X \in \mathcal{X}$ independently of other inputs and uniformly at random to outputs $Y \in \mathcal{Y}$. For an event $E$, we denote by $\Pr[E]$ the probability of $E$. For positive integers $n$ and $k$, we denote the falling factorial by $(n)_k := \frac{n!}{(n-k)!}$.
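The block-splitting, truncation, and falling-factorial notation can be mirrored in a few lines of code (byte-level rather than bit-level, purely for illustration; the helper names are ours):

```python
import math

def split_blocks(X: bytes, n: int):
    """(X_1, ..., X_x) <- X: n-byte blocks, the last one possibly shorter.
    (The paper splits into n-*bit* blocks; bytes keep the sketch simple.)"""
    return [X[i:i + n] for i in range(0, max(len(X), 1), n)]

def trunc(X: bytes, x: int):
    """TRUNC_x: leftmost x bytes if |X| >= x, else X unchanged."""
    return X[:x] if len(X) >= x else X

def falling_factorial(n: int, k: int):
    """(n)_k = n! / (n - k)! = n (n-1) ... (n-k+1)."""
    return math.prod(range(n - k + 1, n + 1))

assert split_blocks(b"ABCDEFG", 3) == [b"ABC", b"DEF", b"G"]
assert trunc(b"ABCDE", 3) == b"ABC"
assert falling_factorial(5, 2) == 20   # 5 * 4
```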
+
+**Adversaries.** An adversary **A** is an efficient Turing machine that interacts with a given set of oracles that appear as black boxes to **A**. We denote by $\mathbf{A}^{\mathcal{O}}$ the output of **A** after interacting with some oracle $\mathcal{O}$. We write $\Delta_{\mathbf{A}}(\mathcal{O}^1; \mathcal{O}^2) := |\Pr[\mathbf{A}^{\mathcal{O}^1} \Rightarrow 1] - \Pr[\mathbf{A}^{\mathcal{O}^2} \Rightarrow 1]|$ for the advantage of **A** to distinguish between oracles $\mathcal{O}^1$ and $\mathcal{O}^2$. All probabilities are defined over the random coins of the oracles and those of the adversary, if any. W.l.o.g., we assume that **A** never asks queries to which it already knows the answer.
+
+A block cipher $E$ with associated key space $\mathcal{K}$ and message space $\mathcal{M}$ is a mapping $E: \mathcal{K} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$, it holds that $E(K, \cdot)$ is a permutation over $\mathcal{M}$. We define Block($\mathcal{K}, \mathcal{M}$) as the set of all block ciphers with key space $\mathcal{K}$ and message space $\mathcal{M}$. A tweakable block cipher $\tilde{E}$ with associated key space $\mathcal{K}$, tweak space $\mathcal{T}$, and message space $\mathcal{M}$ is a mapping $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, it holds that $\tilde{E}(K, T, \cdot)$ is a permutation over $\mathcal{M}$. We also write $\tilde{E}_K^\mathrm{T}(\cdot)$ as a shorthand in the remainder.
+
+The STPRP security of $\tilde{E}$ is defined via upper bounding the advantage of a distinguishing adversary **A** in a game, where we consider the ideal-cipher model throughout this work. There, **A** has access to oracles $(\mathcal{O}, E^\pm)$, where $E^\pm$ is the usual notation for access to the encryption oracle $E$ and to the decryption oracle $E^{-1}$. $\mathcal{O}$ is called the construction oracle and is either the real construction $\tilde{E}_K^\pm(\cdot, \cdot)$ or $\tilde{\pi}^\pm(\cdot, \cdot)$ for $\tilde{\pi} \leftarrow \text{TPerm}(\mathcal{T}, \mathcal{M})$; $E \leftarrow \text{Block}(\mathcal{K}, \mathcal{M})$ is an ideal block cipher underneath $\tilde{E}$. The STPRP advantage of **A** is defined as $\Delta_{\mathbf{A}}(\tilde{E}_K^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot); \tilde{\pi}^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot))$, where the probabilities are taken over the random and independent choice of $K$, $E$, $\tilde{\pi}$, and the coins of **A**, if any. For the remainder, we say that **A** is a $(q_C, q_P)$-distinguisher if it asks at most $q_C$ queries to its construction oracle and at most $q_P$ queries to its primitive oracle.
+
+**Definition 1 (Almost-Uniform Hash Function).** Let $\mathcal{H}: \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions. We call $\mathcal{H}$ $\epsilon$-almost-uniform ($\epsilon$-AUniform) if, for all $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}}[\mathcal{H}(K, X) = Y] \le \epsilon$.
+
+**Definition 2 (Almost-XOR-Universal Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions with $\mathcal{Y} \subseteq \{0,1\}^*$. We say that $\mathcal{H}$ is $\epsilon$-almost-XOR-universal ($\epsilon$-AXU) if, for $K \leftarrow \mathcal{K}$, and for all distinct $X, X' \in \mathcal{X}$ and any $\Delta \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = \Delta] \le \epsilon$.
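+
+The canonical example of an AXU family is multiplication over a finite field: $\mathcal{H}(K, X) := K \cdot X$ over $\mathrm{GF}(2^n)$ is $2^{-n}$-AXU, since for distinct $X, X'$ the equation $K \cdot (X \oplus X') = \Delta$ has exactly one solution $K$. The sketch below (our illustration, not from the paper) verifies this exhaustively for $n = 8$:

```python
def gf_mul(x: int, y: int, n: int = 8, poly: int = 0x11B) -> int:
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:
            x ^= poly
    return r

def max_axu_probability(x1: int, x2: int) -> float:
    """max over Delta of Pr_K[(K * x1) xor (K * x2) = Delta] for uniform K."""
    counts = {}
    for k in range(256):
        delta = gf_mul(k, x1) ^ gf_mul(k, x2)
        counts[delta] = counts.get(delta, 0) + 1
    return max(counts.values()) / 256
```

+For any distinct inputs, e.g. `max_axu_probability(0x53, 0xCA)`, the result is exactly $1/256 = 2^{-n}$.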
+
+Minematsu and Iwata [14] defined partial-almost-XOR-universality to capture
+the probability of partial output collisions.
+
+**Definition 3 (Partial-AXU Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \to \{0,1\}^n \times \{0,1\}^k$ be a family of hash functions. We say that $\mathcal{H}$ is $(n, k, \epsilon)$-partial-AXU $((n, k, \epsilon)$-pAXU) if, for $K \leftarrow \mathcal{K}$, and for all distinct $X, X' \in \mathcal{X}$ and all $\Delta \in \{0,1\}^n$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = (\Delta, 0^k)] \le \epsilon$.
+
+**The H-Coefficient Technique.** The H-coefficient technique is a proof method due to Patarin [4,16]. It assumes that the results of the interaction of an adversary **A** with its oracles are collected in a transcript $\tau$. The task of **A** is to distinguish the real world $\mathcal{O}_{\text{real}}$ from the ideal world $\mathcal{O}_{\text{ideal}}$. A transcript $\tau$ is called *attainable* if the probability to obtain $\tau$ in the ideal world is non-zero. One assumes that **A** does not ask duplicate queries, queries prohibited by the game, or queries to which it already knows the answer. Denote by $\Theta_{\text{real}}$ and $\Theta_{\text{ideal}}$ the distributions of transcripts in the real and the ideal world, respectively. Then, the fundamental lemma of the H-coefficient technique states:
+
+**Lemma 1 (Fundamental Lemma of the H-coefficient Technique [16]).**
+Assume that the set of attainable transcripts is partitioned into two disjoint sets GOODT and BADT. Further assume that there exist $\epsilon_1, \epsilon_2 \ge 0$ such that for any transcript $\tau \in$ GOODT, it holds that
+
+$$
+\frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \geq 1 - \epsilon_1, \quad \text{and} \quad \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \leq \epsilon_2.
+$$
+
+Then, for all adversaries **A**, it holds that $\Delta_A(\mathcal{O}_{\text{real}}; \mathcal{O}_{\text{ideal}}) \le \epsilon_1 + \epsilon_2$.
+
+The proof is given in [4,16].
+
+## 3 The Generic GXHX Construction
+
+Let $n, k, l \ge 1$ be integers and $\mathcal{K} = \{0,1\}^k$, $\mathcal{L} = \{0,1\}^l$, and $\mathcal{T} \subseteq \{0,1\}^*$. Let $E: \mathcal{K} \times \{0,1\}^n \rightarrow \{0,1\}^n$ be a block cipher and $\mathcal{H}: \mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^n \times \mathcal{K} \times \{0,1\}^n$ be a family of hash functions. Then, we define by GXHX[$E$, $\mathcal{H}$] : $\mathcal{L} \times \mathcal{T} \times \{0,1\}^n \rightarrow \{0,1\}^n$ the tweakable block cipher instantiated with $E$ and $\mathcal{H}$ that, for given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and message $M \in \{0,1\}^n$, computes the ciphertext $C$, as shown on the left side of Algorithm 1. Likewise, given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and ciphertext $C \in \{0,1\}^n$, the plaintext $M$ is computed by $M \leftarrow$ GXHX[$E$, $\mathcal{H}]_L^{-1}(T, C)$, as shown on the right side of Algorithm 1.
+
+**Fig. 1:** Schematic illustration of the encryption process of a message $M$ and a tweak $T$ with the general GXHX[$E$, $\mathcal{H}$] tweakable block cipher. $E: \mathcal{K} \times \{0, 1\}^n \to \{0, 1\}^n$ is a keyed permutation and $\mathcal{H}: \mathcal{L} \times \mathcal{T} \to \{0, 1\}^n \times \mathcal{K} \times \{0, 1\}^n$ is a keyed universal hash function.
+
+**Algorithm 1** Encryption and decryption algorithms of the general GXHX[$E$, $\mathcal{H}$] construction.
+
+11: **function** GXHX[$E$, $\mathcal{H}$]$_L(T, M)$
+12: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
+13: $C \leftarrow E_{H_2}(M \oplus H_1) \oplus H_3$
+14: **return** $C$
+21: **function** GXHX[$E$, $\mathcal{H}$]$_L^{-1}(T, C)$
+22: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
+23: $M \leftarrow E_{H_2}^{-1}(C \oplus H_3) \oplus H_1$
+24: **return** $M$
+
+Clearly, GXHX[$E$, $\mathcal{H}$] is a correct and tidy tweakable permutation, i.e., for all keys $L \in \mathcal{L}$, all tweak-plaintext inputs $(T, M) \in \mathcal{T} \times \{0, 1\}^n$, and all tweak-ciphertext inputs $(T, C) \in \mathcal{T} \times \{0, 1\}^n$, it holds that
+
+$$ \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, \text{GXHX}[E, \mathcal{H}]_L(T, M)) = M \text{ and} \\ \text{GXHX}[E, \mathcal{H}]_L(T, \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, C)) = C. $$
+
+Figure 1 illustrates the encryption process schematically.
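+
+The two directions of Algorithm 1 can be exercised with toy components. The sketch below is purely illustrative: `E_toy` and `H_toy` are insecure placeholders of our own choosing with $n = 8$; the check confirms that decryption inverts encryption for every message byte.

```python
def rotl8(x: int, r: int) -> int:
    """Left-rotation of an 8-bit value; a permutation for every r."""
    r %= 8
    return ((x << r) | (x >> (8 - r))) & 0xFF

def E_toy(key: int, m: int) -> int:
    """Toy keyed permutation standing in for the block cipher E (insecure)."""
    return rotl8(m ^ (key & 0xFF), key >> 8)

def E_toy_inv(key: int, c: int) -> int:
    """Inverse of E_toy: undo the rotation, then the xor."""
    return rotl8(c, -(key >> 8)) ^ (key & 0xFF)

def H_toy(L: int, T: int) -> tuple:
    """Toy stand-in for the hash H(L, T) = (H1, H2, H3) (not universal)."""
    return ((31 * L + 17 * T + 3) % 256,
            (7 * L + 13 * T + 5) % 2048,
            (5 * L + 29 * T + 7) % 256)

def gxhx_enc(L: int, T: int, M: int) -> int:
    h1, h2, h3 = H_toy(L, T)
    return E_toy(h2, M ^ h1) ^ h3      # C <- E_{H2}(M xor H1) xor H3

def gxhx_dec(L: int, T: int, C: int) -> int:
    h1, h2, h3 = H_toy(L, T)
    return E_toy_inv(h2, C ^ h3) ^ h1  # M <- E_{H2}^{-1}(C xor H3) xor H1
```

+Iterating over all $256$ messages for a fixed key and tweak confirms both the correctness and the tidiness conditions stated above.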
+
+## 4 XHX: Deriving the Hash Keys from the Block Cipher
+
+In the following, we adapt the general GXHX construction to XHX, which differs from the former in two aspects: first, XHX splits the hash function into three functions $\mathcal{H}_1$, $\mathcal{H}_2$, and $\mathcal{H}_3$; second, since we need at least $n + k$ bits of key material for the hash functions, it derives the hash-function key from a key $K$ using the block cipher $E$. We denote by $s \ge 0$ the number of derived hash-function keys $L_i$ and collect them together with the user-given key $K \in \{0, 1\}^k$ into a vector $L := (K, L_1, \dots, L_s)$. Moreover, we define a set of variables $I_i$ and $K_i$, for $1 \le i \le s$, which denote input and key to the block cipher $E$ for computing $L_i := E_{K_i}(I_i)$. We allow flexible, use-case-specific definitions for the values $I_i$ and $K_i$ as long as they fulfill certain properties that will be listed in Section 4.1. We redefine the key space of the hash functions to $\mathcal{L} \subseteq \{0, 1\}^k \times (\{0, 1\}^n)^s$. Note that the values $L_i$ are equal for all encryptions and decryptions and hence can be precomputed and stored for all encryptions under the same key.
+
+**Fig. 2:** Schematic illustration of the XHX[$E$, $\mathcal{H}$] construction where we derive the hash-function keys $L_i$ from the block cipher $E$.
+
+**Algorithm 2** Encryption and decryption algorithms of XHX where the keys are derived from the block cipher. We define $\mathcal{H} := (\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3)$. Note that the exact definitions of $I_i$ and $K_i$ are use-case-specific.
+
+11: **function** XHX[E, $\mathcal{H}$].KEYSETUP(K)
+12: **for** i ← 1 to s **do**
+13: $L_i \leftarrow E_{K_i}(I_i)$
+14: $L \leftarrow (K, L_1, \dots, L_s)$
+15: **return** $L$
+31: **function** XHX[E, $\mathcal{H}$]$_K(T, M)$
+32: $L \leftarrow$ XHX[E, $\mathcal{H}$].KEYSETUP(K)
+33: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
+34: $C \leftarrow E_{H_2}(M \oplus H_1) \oplus H_3$
+35: **return** $C$
+41: **function** XHX[E, $\mathcal{H}$]$_K^{-1}(T, C)$
+42: $L \leftarrow$ XHX[E, $\mathcal{H}$].KEYSETUP(K)
+43: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
+44: $M \leftarrow E_{H_2}^{-1}(C \oplus H_3) \oplus H_1$
+45: **return** $M$
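+
+The key-setup step of Algorithm 2 is small enough to spell out. The sketch below is our illustration: `E_toy` is an insecure stand-in for the block cipher, and the choice $I_i = \langle i-1 \rangle_n$, $K_i = K$ is one possible use-case-specific instantiation (the one analyzed for the concrete hash functions later).

```python
def E_toy(key: int, m: int) -> int:
    """Insecure toy stand-in for the ideal cipher E."""
    return m ^ (key & 0xFF)

def xhx_key_setup(K: int, s: int) -> tuple:
    """L <- (K, L_1, ..., L_s) with L_i := E_{K_i}(I_i), here with
    I_i = i - 1 and K_i = K. The L_i depend only on K, so the tuple
    can be computed once and cached across all encryptions."""
    return (K,) + tuple(E_toy(K, i - 1) for i in range(1, s + 1))
```

+For instance, `xhx_key_setup(0xAB, 2)` returns the user key followed by two derived subkeys.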
+
+*The Constructions by Wang et al.* The 32 constructions $\tilde{\mathbb{E}}[2]$ by Wang et al. are a special case of our construction with the parameters $s=1$, key length $k=n$, the inputs $I_i, K_i \in \{0^n, K\}$, and the option $(I_i, K_i) = (0^n, 0^n)$ excluded. Their constructions compute exactly one value $L_1$ by $L_1 := E_{K_1}(I_1)$. One can easily describe their constructions in terms of the XHX framework, with three variables $X_1, X_2, X_3 \in \{K, L_1, K \oplus L_1\}$ for which it holds that $X_1 \neq X_2$ and $X_3 \neq X_2$, and which are used in XHX as follows:
+
+$$
+\begin{align*}
+\mathcal{H}_1(L,T) &:= X_1, \\
+\mathcal{H}_2(L,T) &:= X_2 \oplus T, \\
+\mathcal{H}_3(L,T) &:= X_3.
+\end{align*}
+ $$
+
+## 4.1 Security Proof of XHX
+
+This section concerns the security of the XHX construction in the ideal-cipher model where the hash-function keys are derived by the (ideal) block cipher E.
+
+**Properties of $\mathcal{H}$**. For our security analysis, we list a set of properties that we require for $\mathcal{H}$. We assume that $L$ is sampled uniformly at random from $\mathcal{L}$. To address parts of the output of $\mathcal{H}$, we also use the notation $\mathcal{H}_i : \mathcal{L} \times \mathcal{T} \to \{0,1\}^{o_i}$ to refer to the function that computes the $i$-th output of $\mathcal{H}(L,T)$, for $1 \le i \le 3$, with $o_1 := n$, $o_2 := k$, and $o_3 := n$. Moreover, we define $\mathcal{H}_{1,2}(T) := (\mathcal{H}_1(L,T), \mathcal{H}_2(L,T))$, and $\mathcal{H}_{3,2}(T) := (\mathcal{H}_3(L,T), \mathcal{H}_2(L,T))$.
+
+**Property P1.** For all distinct $T, T' \in \mathcal{T}$ and all $\Delta \in \{0,1\}^n$, it holds that
+
+$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) \oplus \mathcal{H}_{i,2}(T') = (\Delta, 0^k)] \le \epsilon_1. $$
+
+**Property P2.** For all $T \in \mathcal{T}$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that
+
+$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) = (c_1, c_2)] \le \epsilon_2. $$
+
+Note that Property P1 is equivalent to saying $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $(n, k, \epsilon_1)$-pAXU; Property P2 is equivalent to the statement that $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $\epsilon_2$-AUniform. Clearly, it must hold that $\epsilon_1, \epsilon_2 \ge 2^{-(n+k)}$.
+
+**Property P3.** For all $T \in \mathcal{T}$, all chosen $I_i, K_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that
+
+$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{1,2}(T) \oplus (I_i, K_i) = (\Delta, 0^k)] \le \epsilon_3. $$
+
+**Property P4.** For all $T \in \mathcal{T}$, all chosen $K_i, L_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that
+
+$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{3,2}(T) \oplus (L_i, K_i) = (\Delta, 0^k)] \le \epsilon_4. $$
+
+Properties P3 and P4 represent the probabilities that an adversary's query hits the inputs that have been chosen for computing a hash-function key. We list a further property which bounds the probability that a set of constants chosen by the adversary hits the values $I_i$ and $K_i$ used for generating the keys $L_i$:
+
+**Property P5.** For $1 \le i \le s$, and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that
+
+$$ \Pr_{K \leftarrow \mathcal{K}} [(I_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$
+
+In other words, the tuples $(I_i, K_i)$ contain close to $n$ bits of entropy and cannot be predicted by an adversary with greater probability; i.e., $\epsilon_5$ should not be larger than a small multiple of $1/2^n$. From Property P5 and the fact that the values $L_i$ are computed as $E_{K_i}(I_i)$ with an ideal cipher $E$, it follows for $1 \le i \le s$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$ that
+
+$$ \Pr_{K \leftarrow \mathcal{K}} [(L_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$
+
+**Fig. 3:** Schematic illustration of the oracles available to **A**.
+
+**Theorem 1.** Let $E \leftarrow \text{Block}(\mathcal{K}, \{0,1\}^n)$ be an ideal cipher. Further, let $\mathcal{H}_i$: $\mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^{o_i}$, for $1 \le i \le 3$, be families of hash functions for which Properties P1 through P4 hold, and let $K \leftarrow \mathcal{K}$. Moreover, let Property P5 hold for the choice of all $I_i$ and $K_i$. Let $s$ denote the number of keys $L_i$, $1 \le i \le s$. Let **A** be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}]_K$. Then
+
+$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$
+
+*Proof Idea.* The proof of Theorem 1 follows from Lemma 1 in combination with Lemmas 2 and 3 below; the proofs of the latter two can be found in Appendix A. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. Figure 3 illustrates the oracles available to **A**. The queries by **A** are collected in a transcript $\tau$. We will define a series of bad events that can happen during the interaction of **A** with its oracles:
+
+- Collisions between two construction queries,
+
+- Collisions between a construction and a primitive query,
+
+- Collisions between two primitive queries,
+
+- The case that the adversary finds an input-key tuple in either a primitive or construction query that was used to derive a key $L_i$.
+
+The proof will bound the probability of these events to occur in the transcript in Lemma 2. We define a transcript as **bad** if it satisfies at least one such **bad** event, and define BADT as the set of all attainable **bad** transcripts.
+
+**Lemma 2.** It holds that
+
+$$ \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$
+
+The proof is given in Appendix A.1.
+
+**Good Transcripts.** Above, we have considered **bad** events. In contrast, we define GOODT as the set of all good transcripts, i.e., all attainable transcripts that are *not* bad.
+
+**Lemma 3.** Let $\tau \in \text{GOODT}$ be a good transcript. Then
+
+$$ \frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \ge 1. $$
+
+The full proof can be found in Appendix A.2.
+
+**Algorithm 3** The universal hash function $\mathcal{H}^*$.
+
+- **Case k < n.** In this case, we could simply truncate $H_2$ from $n$ to $k$ bits. Theoretically, we could derive a longer key from $K$ for the computation of $H_1$ and $H_3$; however, we disregard this case since ciphers whose key is shorter than their state are very uncommon.
+
+- **Case k > n.** In the third case, we truncate the hash key $K$ for the computation of $H_1$ and $H_3$ to $n$ bits. Moreover, we derive $s$ hashing keys $L_1, \dots, L_s$ from the block cipher $E$. For $H_2$, we concatenate the output of $s$ instances of $\mathcal{F}$. This construction is well-known to be $\epsilon^s(m)$-pAXU if $\mathcal{F}$ is $\epsilon(m)$-pAXU. Finally, we truncate the result to $k$ bits if necessary.
+
+**Lemma 4.** $\mathcal{H}^*$ is $2^{sn-k}\epsilon^{s+1}(m)$-pAXU and $2^{sn-k}\rho^{s+1}(m)$-AUniform. Moreover, it satisfies Properties P3 and P4 with probability $2^{sn-k}\rho^{s+1}(m)$ each, and Property P5 with $\epsilon_5 \le 2/2^k$ for our choice of the values $I_i$ and $K_i$.
+
+*Remark 2.* The term $2^{sn-k}$ results from the potential truncation of $H_2$ if the key length $k$ of the block cipher is not a multiple of the state size $n$. $H_2$ is computed by concatenating the results of multiple independent invocations of a polynomial hash function $\mathcal{F}$ in $\text{GF}(2^n)$ under assumed independent keys. Clearly, if $\mathcal{F}$ is $\epsilon$-AXU, then their $sn$-bit concatenation is $\epsilon^s$-AXU. However, after truncating $sn$ to $k$ bits, we may lose information, which results in the factor of $2^{sn-k}$. For the case $k=n$, it follows that $s=1$, and the terms $2^{sn-k}\epsilon^{s+1}(m)$ and $2^{sn-k}\rho^{s+1}(m)$ simplify to $\epsilon^2(m)$ and $\rho^2(m)$, respectively.
+
+Our instantiation of $\mathcal{F}$ has $\epsilon(m) = \rho(m) = (m+2)/2^n$. Before we prove Lemma 4, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^*$.
+
+**Corollary 1.** Let $E$ and XHX[$E, \mathcal{H}^*$] be defined as in Theorem 1, where the maximum length of any tweak is limited to at most $m$ $n$-bit blocks. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}^*$]. Then
+
+$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}^*], E^\pm; \tilde{\pi}^\pm, E^\pm) \le \frac{(q_C^2+2q_Cq_P+2q_Cs)(m+2)^{s+1}}{2^{n+k}} + \frac{4q_P s}{2^k} + \frac{s^2}{2^{n+1}}. $$
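+
+To get a feeling for the bound, one can plug in concrete parameters. The following sketch (our arithmetic, not part of the paper) evaluates the right-hand side exactly for $n = k = 128$, $s = 1$, tweaks of at most $m = 2$ blocks, and $2^{64}$ construction and primitive queries each; the resulting advantage stays far below $2^{-60}$:

```python
from fractions import Fraction

def xhx_hstar_bound(n, k, s, m, qC, qP):
    """Right-hand side of the bound above as an exact rational number."""
    t1 = Fraction((qC ** 2 + 2 * qC * qP + 2 * qC * s) * (m + 2) ** (s + 1),
                  2 ** (n + k))
    t2 = Fraction(4 * qP * s, 2 ** k)
    t3 = Fraction(s ** 2, 2 ** (n + 1))
    return t1 + t2 + t3

# n = k = 128, s = 1, m = 2, and 2^64 queries of each kind
adv = xhx_hstar_bound(n=128, k=128, s=1, m=2, qC=2 ** 64, qP=2 ** 64)
```

+Note that $2^{64}$ construction queries already exceed the usual birthday limit for an $n = 128$-bit block cipher, so the construction indeed offers security beyond the birthday bound in this setting.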
+
+The proof of the corollary follows from combining Lemma 4 with Theorem 1 and is therefore omitted.
+
+*Proof of Lemma 4.* In the following, we assume that $T, T' \in \{0, 1\}^*$ are distinct tweaks of at most $m$ blocks each. Again, we consider the pAXU property first.
+
+**Partial Almost-XOR-Universality.** This is the probability that for any $\Delta \in \{0, 1\}^n$:
+
+$$
+\begin{align*}
+& \Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) \oplus (\mathcal{F}_{K'}(T'), \mathcal{F}_{L_1, \dots, L_s}(T')) = (\Delta, 0^k)] \\
+&= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) \oplus \mathcal{F}_{K'}(T') = \Delta, \mathcal{F}_{L_1, \dots, L_s}(T) \oplus \mathcal{F}_{L_1, \dots, L_s}(T') = 0^k] \\
+&\le 2^{sn-k} \cdot \epsilon^{s+1}(m).
+\end{align*}
+$$
+
+We assume independent hashing keys $K', L_1, \dots, L_s$ here. When $k=n$, it holds that $s=1$, and this probability is upper bounded by $\epsilon^2(m)$ since $\mathcal{F}$ is $\epsilon(m)$-AXU. In the case $k>n$, we compute $s$ words of $H_2$ that are concatenated and truncated to $k$ bits. Hence, $\mathcal{F}_{L_1, \dots, L_s}$ is $2^{sn-k} \cdot \epsilon^s(m)$-AXU. In combination with the AXU bound for $\mathcal{F}_{K'}$, we obtain the pAXU bound for $\mathcal{H}^*$ above.
+
+**Almost-Uniformity.** Here, for any $(\Delta_1, \Delta_2) \in \{0,1\}^n \times \{0,1\}^k$, it must hold that
+
+$$
+\begin{align*}
+\Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) = (\Delta_1, \Delta_2)] &= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta_1, \mathcal{F}_{L_1, \dots, L_s}(T) = \Delta_2] \\
+&\le 2^{sn-k} \cdot \rho^{s+1}(m)
+\end{align*}
+$$
+
+since $\mathcal{F}$ is $\rho(m)$-AUniform, and using a similar argumentation for the cases $k=n$ and $k>n$ as for partial-almost-XOR universality.
+
+**Property P3.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P3 is equivalent to
+
+$$
+\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = (\Delta \oplus I_i), \mathcal{F}_{L_1, \dots, L_s}(T) = K]
+$$
+
+for a fixed $1 \le i \le s$. Here, this property is equivalent to almost uniformity; hence,
+the probability for the latter equality is at most $2^{sn-k} \cdot \rho^s(m)$. The probability for
+the former equality is at most $\rho(m)$ since the property considers a fixed $i$. Since
+we assume independence of $K$ and $L_1, \dots, L_s$, it holds that $\epsilon_3 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.
+
+**Property P4.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P4 is equivalent to
+
+$$
+\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = (\Delta \oplus L_i), \mathcal{F}_{L_1, \dots, L_s}(T) = K]
+$$
+
+for a fixed $1 \le i \le s$. Using a similar argumentation as for Property P3, the
+probability is upper bounded by $\epsilon_4 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.
+
+**Property P5.** We derive the hashing keys $L_i$ with the help of $E$ and the secret key $K$. So, in the simple case that $s=1$, the probability that the adversary can guess any tuple $(I_i, K_i)$, for $1 \le i \le s$, that is used to derive the hashing keys $L_i$, or guess any tuple $(L_i, K_i)$ is at most $1/2^k$. Under the reasonable assumption $s < 2^{k-1}$, the probability becomes for fixed $i$ in the general case:
+
+$$
+\Pr_{K \leftarrow \mathcal{K}} [ (I_i, K_i) = (c_1, c_2) ] \leq \frac{1}{2^k - s} \leq \frac{2}{2^k}.
+$$
+
+A similar argument holds for the probability that the adversary can guess any tuple $(L_i, K_i)$, for
+$1 \le i \le s$. Hence, it holds for $\mathcal{H}^*$ that $\epsilon_5 \le 2/2^k$.
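+
+The inequality $1/(2^k - s) \le 2/2^k$ used here holds precisely because $s < 2^{k-1}$ implies $2^k - s > 2^{k-1}$. A quick exhaustive check for toy key lengths (our illustration):

```python
from fractions import Fraction

def p5_inequality_holds(k: int) -> bool:
    """Verify 1 / (2^k - s) <= 2 / 2^k for all 1 <= s < 2^(k-1)."""
    return all(Fraction(1, 2 ** k - s) <= Fraction(2, 2 ** k)
               for s in range(1, 2 ** (k - 1)))
```

+The check passes for every key length we tried; the general case follows from the one-line argument above.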
+
+$\epsilon(m)$ **and** $\rho(m)$. It remains to determine $\epsilon(m)$ and $\rho(m)$ for our instantiation of $\mathcal{F}_K(\cdot)$, which maps a tweak $T = (T_1, \dots, T_m)$ to the result of
+
+$$
+\left( \bigoplus_{i=1}^{m} T_i \cdot K^{m+3-i} \right) \oplus \langle |T| \rangle_n \cdot K^2 \oplus K.
+$$
+
+**Algorithm 4** The universal hash function $\mathcal{H}^2$.
+
+11: **function** $\mathcal{H}_L^2(T)$
+12: $(K, L_1, \dots, L_s) \leftarrow L$
+13: $(T_1, T_2) \stackrel{n}{\leftarrow} T$
+14: $K' \leftarrow \text{TRUNC}_n(K)$
+15: $H_1 \leftarrow T_1 \boxdot K'$
+16: $H_2 \leftarrow \text{TRUNC}_k (\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$
+17: $H_3 \leftarrow T_1 \boxdot K'$
+18: **return** $(H_1, H_2, H_3)$
+
+21: **function** $\mathcal{F}_{L_i}(T_1 \parallel T_2)$
+22: **return** $(T_1 \boxdot L_i) \oplus T_2$
+
+This is a polynomial in $K$ of degree at most $m+2$, which makes $\mathcal{F}$ $(m+2)/2^n$-AXU. Moreover, for every $\Delta \in \{0, 1\}^n$ and every tweak $T$ of at most $m$ blocks, $\mathcal{F}_K(T) = \Delta$ is a non-trivial polynomial equation of degree at most $m+2$ in $K$; hence, at most $m+2$ out of the $2^n$ possible keys $K$ fulfill it. Hence, $\mathcal{F}$ is also $(m+2)/2^n$-AUniform. $\square$
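+
+Under one plausible reading of the definition of $\mathcal{F}$ above, namely $\mathcal{F}_K(T) = \left( \bigoplus_{i=1}^{m} T_i \cdot K^{m+3-i} \right) \oplus \langle |T| \rangle_n \cdot K^2 \oplus K$, the AUniform claim can be checked exhaustively in a toy field. The sketch below (our illustration under this assumed reading) counts, for $n = 8$ and one-block tweaks, the keys $K$ with $\mathcal{F}_K(T) = \Delta$ and confirms that this count never exceeds $m + 2 = 3$:

```python
def gf_mul(x: int, y: int, n: int = 8, poly: int = 0x11B) -> int:
    """Multiplication in GF(2^8) modulo the AES polynomial."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:
            x ^= poly
    return r

def gf_pow(x: int, e: int) -> int:
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

def F(K: int, blocks: list, n: int = 8) -> int:
    """Assumed reading: xor_i T_i * K^(m+3-i)  xor  <|T|>_n * K^2  xor  K."""
    m = len(blocks)
    r = 0
    for i, t in enumerate(blocks, start=1):
        r ^= gf_mul(t, gf_pow(K, m + 3 - i))
    r ^= gf_mul((m * n) % 2 ** n, gf_mul(K, K))  # <|T|>_n for m full blocks
    return r ^ K

def max_roots_one_block() -> int:
    """max over (T_1, Delta) of #{K in GF(2^8) : F_K([T_1]) = Delta}."""
    worst = 0
    for t1 in range(256):
        counts = {}
        for key in range(256):
            v = F(key, [t1])
            counts[v] = counts.get(v, 0) + 1
        worst = max(worst, max(counts.values()))
    return worst
```

+Since $\mathcal{F}_K([T_1])$ is then a non-constant polynomial of degree at most $3$ in $K$, each equation has at most $3$ roots, matching the $(m+2)/2^n$-AUniform bound.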
+
+$\mathcal{H}^*$ is a general construction which supports arbitrary tweak lengths. However, if we used $\mathcal{H}^*$ for $2n$-bit tweaks, we would need four Galois-field multiplications. In this case, we can hash more efficiently, and even optimally in terms of the number of multiplications. For this purpose, we define $\mathcal{H}^2$.
+
+**$\mathcal{H}^2$ - A Hash Function for 2n-bit Tweaks.** Naively, for two-block tweaks $|T| = 2n$, an $\epsilon$-pAXU construction with $\epsilon \approx 1/2^{2n}$ could be achieved by simply multiplying the tweak with some key $L$ sampled uniformly at random from $\mathrm{GF}(2^{2n})$. However, we can realize a similarly secure construction more efficiently by using two multiplications over the smaller field $\mathrm{GF}(2^n)$. Additional conditions, such as uniformity, are satisfied by introducing squaring in the field to avoid fixed points in multiplication-based universal hash functions. Following the notation from the previous sections, let $L = (K, L_1)$ be the $2n$-bit key of our hash function. For $X, Y \in \mathrm{GF}(2^n)$, we define the operation $\boxdot : \mathrm{GF}(2^n) \times \mathrm{GF}(2^n) \to \mathrm{GF}(2^n)$ as
+
+$$ X \boxdot Y := \begin{cases} X \cdot Y & \text{if } X \neq 0 \\ Y^2 & \text{otherwise.} \end{cases} $$
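+
+The case distinction behind $\boxdot$ can be checked mechanically. The sketch below (our illustration over the toy field $\mathrm{GF}(2^8)$) implements $\boxdot$ and verifies the property exploited in the proof of Lemma 5: for distinct $X, X'$ and any $\Delta$, at most two values $Y$ satisfy $(X \boxdot Y) \oplus (X' \boxdot Y) = \Delta$.

```python
def gf_mul(x: int, y: int, n: int = 8, poly: int = 0x11B) -> int:
    """Multiplication in GF(2^8) modulo the AES polynomial."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:
            x ^= poly
    return r

def boxdot(x: int, y: int) -> int:
    """X boxdot Y := X * Y if X != 0, and Y^2 otherwise."""
    return gf_mul(x, y) if x != 0 else gf_mul(y, y)

def max_delta_multiplicity(x1: int, x2: int) -> int:
    """max over Delta of #{Y : (x1 boxdot Y) xor (x2 boxdot Y) = Delta}."""
    counts = {}
    for y in range(256):
        d = boxdot(x1, y) ^ boxdot(x2, y)
        counts[d] = counts.get(d, 0) + 1
    return max(counts.values())
```

+If both operands are nonzero, $Y \mapsto (X \oplus X') \cdot Y$ is a bijection (multiplicity $1$); if one operand is $0^n$, the map $Y \mapsto Y^2 \oplus X' \cdot Y$ is two-to-one, which explains the factor $2/2^n$ appearing in the proof of Lemma 5.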
+
+We assume a common encoding between the bit space and $\mathrm{GF}(2^n)$, i.e., a polynomial in the field is represented as its coefficient vector, e.g., the all-zero vector denotes the zero element $0$, and the bit string $(0 \dots 01)$ denotes the identity element. Hereafter, we write $X$ interchangeably as an element of $\mathrm{GF}(2^n)$ or of $\{0, 1\}^n$. For $\mathcal{L} \subseteq \{0, 1\}^k \times (\{0, 1\}^n)^s$, $\mathcal{X} = (\{0, 1\}^n)^2$, and $\mathcal{Y} = \{0, 1\}^n \times \{0, 1\}^k \times \{0, 1\}^n$, the construction $\mathcal{H}^2 : \mathcal{L} \times \mathcal{X} \to \mathcal{Y}$ is defined in Algorithm 4. We note that the usage of keys has been chosen carefully, e.g., a swap of $K$ and $L_1$ in $\mathcal{H}^2$ would invalidate Property P4.
+
+**Lemma 5.** $\mathcal{H}^2$ is $2^{s+1}/2^{n+k}$-pAXU, $2^s/2^{n+k}$-AUniform, satisfies Properties P3 and P4 with probability $2/2^{n+k}$ each, and Property P5 with $\epsilon_5 = s/2^n$ for our choices of $I_i$ and $K_i$, for $1 \le i \le s$.
+
+Before proving Lemma 5, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^2$.
+
+**Corollary 2.** Let $E$ and XHX[$E$, $\mathcal{H}^2$] be defined as in Theorem 1. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E$, $\mathcal{H}^2$]$_K$. Then
+
+$$ \Delta_{\mathbf{A}}(\mathrm{XHX}[E, \mathcal{H}^2], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le \frac{2^{s+2}q_C^2 + 2^{s+1}q_Cq_P + 4q_Cs}{2^{n+k}} + \frac{2q_Ps^2}{2^n} + \frac{s^2}{2^{n+1}}. $$
+
+Again, the proof of the corollary follows from combining Lemma 5 with Theorem 1 and is therefore omitted.
+
+*Proof of Lemma 5.* Since $H_1$ and $H_3$ are computed identically, we can restrict the analysis of the properties of $\mathcal{H}^2$ to only the outputs $(H_1, H_2)$. Note that $K$ and $L_1$ are independent. In the following, we denote the hash-function results for some tweak $T$ as $H_1, H_2, H_3$, and those for some tweak $T' \ne T$ as $H'_1, H'_2, H'_3$. Moreover, we denote the $n$-bit words of $H_2$ as $(H_2^1, \dots, H_2^s)$, and those of $H'_2$ as $(H_2'^1, \dots, H_2'^s)$.
+
+**Partial Almost-XOR-Universality.** First, let us consider the pAXU property. It holds that $H_1 := T_1 \boxdot K'$ and $H_2 := \text{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$. Considering $H_1$, it must hold that $H'_1 = H_1 \oplus \Delta$, with
+
+$$ \Delta = (T'_1 \boxdot K') \oplus (T_1 \boxdot K'). $$
+
+For any $X \ne 0^n$, it is well-known that $X \boxdot Y$ is $1/2^n$-AXU. So, for any fixed $T_1$ and fixed $\Delta \in \{0, 1\}^n$, there is exactly one value $T'_1$ that fulfills the equation if $H'_1 \ne K' \boxdot K'$, and exactly two values if $H'_1 = K' \boxdot K'$, namely $T'_1 \in \{0^n, K'\}$. So
+
+$$ \Pr_{K \leftarrow \{0,1\}^k} [ (T_1 \boxdot K') \oplus (T'_1 \boxdot K') = \Delta ] \le 2/2^n. $$
+
+The argumentation for $H_2$ is similar. The probability that any $L_i = 0^n$, for fixed $1 \le i \le s$, is at most $1/(2^n - s + 1)$, which will be smaller than the probability of $H_2^i = H_2'^i$. So, in the remainder, we can concentrate on the case that all $L_i \ne 0^n$. W.l.o.g., we focus on the first word of $H_2$, i.e., $H_2^1$, in the following. For fixed $(T_1, T_2)$, $H_2'^1$, and $T'_2$, there is exactly one value $T'_1$ s.t. $H_2'^1 = H_2^1$ if $H_2'^1 \ne (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 := T_1 \oplus ((T_2 \oplus T'_2) \cdot L_1^{-1})$. There exist exactly two values $T'_1$ if $H_2'^1 = (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 \in \{0^n, L_1\}$. Hence, it holds that
+
+$$ \Pr_{L_1 \leftarrow \mathcal{L}} [H_2^1 = H_2'^1] \le 2/2^n. $$
+
+The same argumentation follows for $H_2^i = H_2'^i$, for $2 \le i \le s$, since the keys $L_i$ are pairwise independent. Since the $sn$ bits of $H_2$ and $H'_2$ are truncated to $k$ bits if $k$ is not a multiple of $n$, the bound has to be multiplied with $2^{sn-k}$. With the factor of $2/2^n$ for $H_1$, it follows for fixed $\Delta \in \{0, 1\}^n$ that $\mathcal{H}^2$ is $\epsilon$-pAXU for $\epsilon$ upper bounded by
+
+$$ \frac{2}{2^n} \cdot 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^{s+1}}{2^{n+k}}. $$
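+
+The simplification in the last step can be double-checked with exact rational arithmetic (our illustration; the identity holds for any parameters with $sn \ge k$):

```python
from fractions import Fraction

def paxu_epsilon(n: int, k: int, s: int) -> Fraction:
    """The product (2/2^n) * 2^(sn-k) * (2/2^n)^s derived above."""
    return Fraction(2, 2 ** n) * 2 ** (s * n - k) * Fraction(2, 2 ** n) ** s

def closed_form(n: int, k: int, s: int) -> Fraction:
    """The claimed closed form 2^(s+1) / 2^(n+k)."""
    return Fraction(2 ** (s + 1), 2 ** (n + k))
```

+Both expressions agree, e.g., for $(n, k, s) = (8, 12, 2)$ as well as for the real-world setting $(128, 128, 1)$.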
+
+**Almost-Uniformity.** Here, we bound the following probability for any $H_1$ and $H_2$:
+
+$$ \mathrm{Pr}_{L \leftarrow \mathcal{L}} [T_1 \boxdot K' = H_1, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = H_2]. $$
+
+If $K' = 0^n$ and $H_1 = 0^n$, then the first equation may be fulfilled for any $T_1$. Though, the probability for $K' = 0^n$ is $1/2^n$. So, we can assume $K' \neq 0^n$ in the remainder. Next, we focus again on the first word of $H_2$, i.e., $H_2^1$. For fixed $L_1$ and $H_2^1$, there exist at most two values $(T_1, T_2)$ that fulfill $(T_1 \boxdot L_1) \oplus T_2 = H_2^1$. In the case $H_1 \neq K' \boxdot K'$, there is exactly one value $T_1 := H_1 \boxdot K'^{-1}$ that yields $H_1$. Then, $T_1$, $L_1$, and $H_2^1$ determine $T_2 := H_2^1 \oplus (T_1 \boxdot L_1)$ uniquely. In the opposite case that $H_1 = K' \boxdot K'$, there exist exactly two values $(T_1, T'_1)$ that yield $H_1$, namely $0^n$ and $K'$. Each of those determines $T_2$ uniquely. The probability that the so-fixed values $T_1, T_2$ yield also $H_2^2, \dots, H_2^s$ is at most $(2/2^n)^{s-1}$ if $k$ is a multiple of $n$ since the keys $L_i$ are pairwise independent; if $k$ is not a multiple of $n$, we have again an additional factor of $2^{sn-k}$ from the truncation. So, $\mathcal{H}^2$ is $\epsilon$-AUniform for $\epsilon$ at most
+
+$$ 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^s}{2^{n+k}}. $$
+
+**Property P3.** Given $I_i = \langle i - 1 \rangle_n$ and $K_i = K$, for $1 \le i \le s$, $\epsilon_3$ is equivalent to the probability that a chosen $(T_1, T_2)$ yields $\mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i - 1 \rangle_n, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = K]$, for some $i$. This can be rewritten to
+
+$$
+\begin{aligned}
+& \mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle_n] \\
+& \quad \cdot \mathrm{Pr}[\mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = K \mid T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle_n].
+\end{aligned}
+$$
+
+For fixed $\Delta \neq K' \boxdot K'$, there is exactly one value $T_1$ that satisfies the first part of the equation; otherwise, there are exactly two values $T_1$ if $\Delta = K' \boxdot K'$. Moreover, $K'$ is secret; so, the values $T_1$ require that the adversary guesses $K'$ correctly. Given fixed $T_1$, $\Delta$, and $K'$, there is exactly one value $T_2$ that matches the first $n$ bits of $K$: $T_2 := (T_1 \boxdot L_1) \oplus K[k-1..k-n]$. The remaining bits of $K$ are matched with probability $2^{sn-k}/2^{(s-1)n}$, assuming that the keys $L_i$ are independent. Hence, it holds that $\epsilon_3$ is at most
+
+$$ \frac{2}{2^n} \cdot \frac{2^{sn-k}}{2^{sn}} = \frac{2}{2^{n+k}}. $$
+
+**Property P4.** This follows from a similar argumentation as for Property P3. Hence, it holds that $\epsilon_4 \le 2/2^{n+k}$. $\square$
+
+**Acknowledgments.** This work was initiated during the group sessions of the 6th Asian Workshop on Symmetric Cryptography (ASK 2016) held in Nagoya. We thank the anonymous reviewers of the ToSC 2017 and Latincrypt 2017 for their fruitful comments. We thank Ashwin Jha and Mridul Nandi for their remark in [7] wherein they pointed us to a subtle error in our formulation of Fact 1 that has been corrected in this version of 08 March 2021. As they noted, our Proof of Lemma 3 implicitly used a special case of compressing sequences, where the fact already held. Therefore, our proof was only slightly augmented to point it out, but does not change.
+---PAGE_BREAK---
+
+References
+
+1. Christof Beierle, Jérémy Jean, Stefan Kölbl, Gregor Leander, Amir Moradi, Thomas Peyrin, Yu Sasaki, Pascal Sasdrich, and Siang Meng Sim. The SKINNY Family of Block Ciphers and Its Low-Latency Variant MANTIS. In Matthew Robshaw and Jonathan Katz, editors, *CRYPTO (2)*, volume 9815 of *Lecture Notes in Computer Science*, pages 123–153. Springer, 2016.
+
+2. Mihir Bellare and Phillip Rogaway. The Security of Triple Encryption and a Framework for Code-Based Game-Playing Proofs. In Serge Vaudenay, editor, *EUROCRYPT*, volume 4004 of *Lecture Notes in Computer Science*, pages 409–426. Springer, 2006.
+
+3. John Black. The Ideal-Cipher Model, Revisited: An Uninstantiable Blockcipher-Based Hash Function. In Matthew J. B. Robshaw, editor, *FSE*, volume 4047 of *Lecture Notes in Computer Science*, pages 328–340. Springer, 2006.
+
+4. Shan Chen and John P. Steinberger. Tight Security Bounds for Key-Alternating Ciphers. In Phong Q. Nguyen and Elisabeth Oswald, editors, *EUROCRYPT*, volume 8441 of *Lecture Notes in Computer Science*, pages 327–350. Springer, 2014.
+
+5. Peter Gazi and Ueli M. Maurer. Cascade Encryption Revisited. In Mitsuru Matsui, editor, *ASIACRYPT*, volume 5912 of *Lecture Notes in Computer Science*, pages 37–51. Springer, 2009.
+
+6. Jérémy Jean, Ivica Nikolic, and Thomas Peyrin. Tweaks and Keys for Block Ciphers: The TWEAKEY Framework. In Palash Sarkar and Tetsu Iwata, editors, *ASIACRYPT (2)*, volume 8874 of *Lecture Notes in Computer Science*, pages 274–288, 2014.
+
+7. Ashwin Jha and Mridul Nandi. Tight Security of Cascaded LRW2. *Journal of Cryptology*, 33(3):1272–1317, 2020.
+
+8. Rodolphe Lampe and Yannick Seurin. Tweakable Blockciphers with Asymptotically Optimal Security. In Shiho Moriai, editor, *FSE*, volume 8424 of *Lecture Notes in Computer Science*, pages 133–151. Springer, 2013.
+
+9. Will Landecker, Thomas Shrimpton, and R. Seth Terashima. Tweakable Blockciphers with Beyond Birthday-Bound Security. In Reihaneh Safavi-Naini and Ran Canetti, editors, *CRYPTO*, volume 7417 of *Lecture Notes in Computer Science*, pages 14–30. Springer, 2012.
+
+10. Jooyoung Lee. Towards Key-Length Extension with Optimal Security: Cascade Encryption and XOR-cascade Encryption. In Thomas Johansson and Phong Q. Nguyen, editors, *EUROCRYPT*, volume 7881 of *Lecture Notes in Computer Science*, pages 405–425. Springer, 2013.
+
+11. Moses Liskov, Ronald L. Rivest, and David Wagner. Tweakable Block Ciphers. In Moti Yung, editor, *CRYPTO*, volume 2442 of *Lecture Notes in Computer Science*, pages 31–46. Springer, 2002.
+
+12. Bart Mennink. Optimally Secure Tweakable Blockciphers. In Gregor Leander, editor, *FSE*, volume 9054 of *Lecture Notes in Computer Science*, pages 428–448. Springer, 2015.
+
+13. Kazuhiko Minematsu. Beyond-Birthday-Bound Security Based on Tweakable Block Cipher. In Orr Dunkelman, editor, *FSE*, volume 5665 of *Lecture Notes in Computer Science*, pages 308–326. Springer, 2009.
+
+14. Kazuhiko Minematsu and Tetsu Iwata. Tweak-Length Extension for Tweakable Blockciphers. In Jens Groth, editor, *IMA Int. Conf.*, volume 9496 of *Lecture Notes in Computer Science*, pages 77–93. Springer, 2015.
+---PAGE_BREAK---
+
+15. Yusuke Naito. Tweakable Blockciphers for Efficient Authenticated Encryptions with Beyond the Birthday-Bound Security. *IACR Transactions on Symmetric Cryptology*, 2017(2):1–26, 2017.
+
+16. Jacques Patarin. The "Coefficients H" Technique. In Roberto Maria Avanzi, Liam Keliher, and Francesco Sica, editors, *SAC*, volume 5381 of *Lecture Notes in Computer Science*, pages 328–345. Springer, 2008.
+
+17. Phillip Rogaway. Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes OCB and PMAC. In *ASIACRYPT*, volume 3329 of *Lecture Notes in Computer Science*, pages 16–31. Springer, 2004.
+
+18. Richard Schroeppel and Hilarie Orman. The Hasty Pudding Cipher. *AES candidate submitted to NIST*, 1998.
+
+19. Thomas Shrimpton and R. Seth Terashima. A Modular Framework for Building Variable-Input-Length Tweakable Ciphers. In Kazue Sako and Palash Sarkar, editors, *ASIACRYPT (1)*, volume 8269 of *Lecture Notes in Computer Science*, pages 405–423. Springer, 2013.
+
+20. Lei Wang, Jian Guo, Guoyan Zhang, Jingyuan Zhao, and Dawu Gu. How to Build Fully Secure Tweakable Blockciphers from Classical Blockciphers. In Jung Hee Cheon and Tsuyoshi Takagi, editors, *ASIACRYPT (1)*, volume 10031 of *Lecture Notes in Computer Science*, pages 455–483, 2016.
+
+## A Proof Details
+
+The proof of Theorem 1 follows from Lemmas 1, 2, and 3. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. W.l.o.g., we assume that **A** asks neither duplicate queries nor trivial queries to which it already knows the answer, e.g., feeding the result of an encryption query to the corresponding decryption oracle or vice versa. The queries by **A** are collected in a transcript $\tau$. We define $\tau$ to be composed of two disjoint sets of queries, $\tau_C$ and $\tau_P$, together with $L$: $\tau = \tau_C \cup \tau_P \cup \{L\}$. Here, $\tau_C := \{(M^i, C^i, T^i, H_1^i, H_2^i, H_3^i, X^i, Y^i, d^i)\}_{1\le i\le q_C}$ denotes the queries by **A** to the construction oracle plus the internal variables $H_1^i, H_2^i, H_3^i$ (i.e., the outputs of $\mathcal{H}_1, \mathcal{H}_2$, and $\mathcal{H}_3$, respectively) and $X^i$ and $Y^i$ (where $X^i \leftarrow H_1^i \oplus M^i$ and $Y^i \leftarrow H_3^i \oplus C^i$, respectively); and $\tau_P := \{(\hat{K}^i, \hat{X}^i, \hat{Y}^i, d^i)\}_{1\le i\le q_P}$ denotes the queries to the primitive oracle. Both sets also store binary variables $d^i$ that indicate the direction of the $i$-th query, where $d^i = 1$ represents an encryption query and $d^i = 0$ a decryption query. The internal variables for one call to XHX are as given in Algorithm 2 and Figure 2.
+We apply a common strategy for handling bad events from both worlds: in the real world, all secrets (i.e., the hash-function key $L$) are revealed to **A** after it has finished its interaction with the available oracles, but before it outputs its decision bit regarding which world it interacted with. Similarly, in the ideal world, the oracle samples the hash-function key uniformly at random, $L \leftarrow \mathcal{L}$, independently from the choice of $E$ and $\tilde{\pi}$, and also reveals $L$ to **A** after the adversary has finished its interaction and before it outputs its decision bit. The internal variables in construction queries – $H_1^i, H_2^i, H_3^i, X^i, Y^i$ – can then be computed and added to the transcript also in the ideal world, using $L$ together with the oracle inputs and outputs $T^i$, $M^i$, and $C^i$.
+---PAGE_BREAK---
+
+We define that an attainable transcript $\tau$ is **bad**, i.e., $\tau \in \text{BADT}$, if one of the following conditions is met (where $i \neq j$ range over the respective query indices):
+
+- bad$_1$: There exist $i \neq j$ s.t. $(H_2^i, X^i) = (H_2^j, X^j)$.
+
+- bad$_2$: There exist $i \neq j$ s.t. $(H_2^i, Y^i) = (H_2^j, Y^j)$.
+
+- bad$_3$: There exist $i, j$ s.t. $(H_2^i, X^i) = (\hat{K}^j, \hat{X}^j)$.
+
+- bad$_4$: There exist $i, j$ s.t. $(H_2^i, Y^i) = (\hat{K}^j, \hat{Y}^j)$.
+
+- bad$_5$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{X}^i) = (\hat{K}^j, \hat{X}^j)$.
+
+- bad$_6$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{Y}^i) = (\hat{K}^j, \hat{Y}^j)$.
+
+- bad$_7$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(X^j, H_2^j) = (I_i, K_i)$ and $d^j = 1$.
+
+- bad$_8$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(Y^j, H_2^j) = (L_i, K_i)$ and $d^j = 0$.
+
+- bad$_9$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\tilde{X}^j, \tilde{K}^j) = (I_i, K_i)$.
+
+- bad$_{10}$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\tilde{Y}^j, \tilde{K}^j) = (L_i, K_i)$.
+
+- bad$_{11}$: There exist $i, j \in \{1, \dots, s\}$ and $i \neq j$ s.t. $(K_i, L_i) = (K_j, L_j)$ but $I_i \neq I_j$.
+
+The events
+
+- bad$_1$ and bad$_2$ consider collisions between two construction queries,
+
+- bad$_3$ and bad$_4$ consider collisions between primitive and construction queries,
+
+- bad$_5$ and bad$_6$ consider collisions between two primitive queries, and
+
+- bad$_7$ through bad$_{10}$ address the case that the adversary could find an input-key tuple, in either a primitive or a construction query, that has been used to derive one of the subkeys $L_i$.
+
+- bad$_{11}$ addresses the event that the ideal oracle produces a collision while sampling the hash-function keys independently uniformly at random.
+
+Note that the events bad$_5$ and bad$_6$ are listed here only for the sake of completeness. We will show briefly that these events can never occur.
+
+## A.1 Proof of Lemma 2
+
+*Proof.* In the following, we upper bound the probabilities of each bad event.
+
+**bad$_1$ and bad$_2$.** Events bad$_1$ and bad$_2$ represent the cases that two distinct construction queries would feed the same tuple of key and input to the underlying primitive *E* if the construction were the real $\tilde{E}$; bad$_1$ considers the case when the values $H_2^i = H_2^j$ and $X^i = X^j$ collide. In the real world, it follows that $Y^i = Y^j$, while this holds only with small probability in the ideal world. The event bad$_2$ concerns the case when the values $H_2^i = H_2^j$ and $Y^i = Y^j$ collide. Again, in the real world, it follows then that $X^i = X^j$, whereas this holds only with small probability in the ideal world. So, both events would allow **A** to distinguish the two worlds. Let us consider bad$_1$ first, and let us start in the real
+---PAGE_BREAK---
+
+world. Since **A** asks no duplicate queries, it must hold that two distinct queries $(M^i, T^i)$ and $(M^j, T^j)$ yielded
+
+$$X^i = (M^i \oplus H_1^i) = (M^j \oplus H_1^j) = X^j \quad \text{and} \quad H_2^i = H_2^j.$$
+
+We define $\Delta := M^i \oplus M^j$ and consider two subcases: in the subcase that $T^i = T^j$, it automatically holds that $H_2^i = H_2^j$ and $H_1^i = H_1^j$. However, this also implies that $M^i = M^j$, i.e., **A** would have asked a duplicate query, which is prohibited. So, it must hold that $T^i \neq T^j$ in the real world.
+
+If $T^i = T^j$ in the ideal world, the queries must differ in their remaining inputs since we assumed that **A** does not make duplicate queries. For encryption queries, this means $M^i \neq M^j$ directly; for decryption queries, $C^i \neq C^j$, and since $\tilde{\pi}(T^i, \cdot)$ is a permutation, the resulting plaintexts are also distinct: $M^i \neq M^j$. From $T^i = T^j$, it follows that $H_1^i = H_1^j$; thus, $X^i$ and $X^j$ cannot be equal:
+
+$$X^i = M^i \oplus H_1^i \neq M^j \oplus H_1^j = X^j,$$
+
+which contradicts our definition of bad$_1$. So, it must hold that $T^i \neq T^j$ also in the ideal world. From Property P1, over $L \leftarrow \mathcal{L}$, it then holds that
+
+$$
+\begin{align*}
+\Pr[\text{bad}_1] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (X^i, H_2^i) = (X^j, H_2^j)] \\
+&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{1,2}(T^i) \oplus \mathcal{H}_{1,2}(T^j) = (\Delta, 0^k)] \le \binom{q_C}{2} \epsilon_1.
+\end{align*}
+$$
+
+Using a similar argument, it also follows from Property P1 that, for $T^i \neq T^j$,
+
+$$
+\begin{align*}
+\Pr[\text{bad}_2] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (Y^i, H_2^i) = (Y^j, H_2^j)] \\
+&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{3,2}(T^i) \oplus \mathcal{H}_{3,2}(T^j) = (\Delta, 0^k)] \le \binom{q_C}{2} \epsilon_1.
+\end{align*}
+$$
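The bounds for bad$_1$ and bad$_2$ are union bounds over the $\binom{q_C}{2}$ query pairs. This can be illustrated empirically with a toy model (illustrative only: a uniform random function stands in for the hash, so the per-pair collision probability playing the role of $\epsilon_1$ is $1/M$; the values of $M$, $q$, and the trial count are arbitrary):

```python
import random
from math import comb

random.seed(0)
M, q, trials = 64, 6, 20000
eps = 1.0 / M  # per-pair collision probability of a uniform random function

bad = 0
for _ in range(trials):
    outs = [random.randrange(M) for _ in range(q)]
    if len(set(outs)) < q:  # some pair of queries collided
        bad += 1

estimate = bad / trials
# Union bound over the C(q,2) pairs, as in the proof: Pr <= C(q,2) * eps.
assert estimate <= comb(q, 2) * eps
print(f"empirical Pr = {estimate:.4f} <= union bound {comb(q, 2) * eps:.4f}")
```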
+
+**bad$_3$ and bad$_4$.** Events bad$_3$ and bad$_4$ represent the cases that a construction query to the *real* construction $\tilde{E}$ would feed the same key and input $(H_2^i, X^i)$ to the underlying primitive *E* as a primitive query $(\hat{K}^j, \hat{X}^j)$. This is equivalent to guessing the hash-function output for the *i*-th query. Let us consider bad$_3$ first. Over $L \leftarrow \mathcal{L}$ and for all $(\hat{K}^j, \hat{X}^j)$, the probability of bad$_3$ is upper bounded by
+
+$$
+\begin{align*}
+\Pr[\text{bad}_3] &= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (X^i, H_2^i) = (\hat{X}^j, \hat{K}^j)] \\
+&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (H_1^i = M^i \oplus \hat{X}^j) \land (H_2^i = \hat{K}^j)] \\
+&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{1,2}(T^i) = (M^i \oplus \hat{X}^j, \hat{K}^j)] \\
+&\le q_C \cdot q_P \cdot \epsilon_2
+\end{align*}
+$$
+---PAGE_BREAK---
+
+due to Property P2. Using a similar argument, it holds that
+
+$$
+\begin{align*}
+\Pr[\text{bad}_4] &= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (X^i, H_2^i) = (\hat{Y}^j, \hat{K}^j)\right] \\
+&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (H_3^i = C^i \oplus \hat{Y}^j) \land (H_2^i = \hat{K}^j)\right] \\
+&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{3,2}(T^i) = (C^i \oplus \hat{Y}^j, \hat{K}^j)\right] \\
+&\le q_C \cdot q_P \cdot \epsilon_2.
+\end{align*}
+$$
+
+**bad$_5$ and bad$_6$.** Events bad$_5$ and bad$_6$ represent the cases that two distinct primitive queries feed the same key and the same input to the primitive *E*. Clearly, in both worlds, this implies that **A** either has asked a duplicate primitive query or has fed the result of an earlier primitive query to the primitive's inverse oracle. Both types of queries are forbidden; so, these events cannot occur.
+
+**bad$_7$ and bad$_8$.** Let us consider bad$_7$ first, which covers the case that the *j*-th construction query in encryption direction matches the inputs to *E* used for generating a hash-function subkey $L_i$, for some $j \in [1..q_C]$ and $i \in [1..s]$; bad$_8$ covers the equivalent case in decryption direction. We define $\Delta := M^j \oplus \mathcal{H}_1(L, T^j)$. For this bad event, it must hold that $M^j \oplus \mathcal{H}_1(L, T^j) = I_i$ and $\mathcal{H}_2(L, T^j) = K_i$. Concerning the tuples $(I_i, K_i)$, we cannot exclude in general that all values $K_1 = \dots = K_s$ are equal and that, therefore, the subkeys $L_i$ are outputs of the same permutation. From Property P3, the fact that each of the $q_C$ queries can hit one out of $s$ values, and over $L \leftarrow \mathcal{L}$, it follows that the probability of this event can be upper bounded by
+
+$$
+\begin{align*}
+\Pr[\text{bad}_7] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (X^j, H_2^j) \oplus (I_i, K_i) = (\Delta, 0^k)\right] \\
+&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{1,2}(T^j) \oplus (I_i, K_i) = (\Delta, 0^k)\right] \\
+&\le q_C \cdot s \cdot \epsilon_3.
+\end{align*}
+$$
+
+Using a similar argument, it follows from Property P4 that
+
+$$
+\begin{align*}
+\Pr[\text{bad}_8] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (Y^j, H_2^j) \oplus (L_i, K_i) = (\Delta, 0^k)\right] \\
+&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{3,2}(T^j) \oplus (L_i, K_i) = (\Delta, 0^k)\right] \\
+&\le q_C \cdot s \cdot \epsilon_4.
+\end{align*}
+$$
+
+**bad$_9$ and bad$_{10}$.** The event bad$_9$ models the case that a primitive query in encryption direction matches the key and input used for generating $L_i$, for some $i \in [1..s]$: $(\hat{X}^j, \hat{K}^j) = (I_i, K_i)$. The event bad$_{10}$ covers the equivalent case in decryption direction. From our assumption that Property P5 holds, the fact that the adversary can hit one out of $s$ values, and over $K \leftarrow \mathcal{K}$, the probability of this event can be upper bounded by
+
+$$
+\Pr[\text{bad}_9] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{X}^j, \hat{K}^j) = (I_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
+$$
+---PAGE_BREAK---
+
+We can use a similar argument and Property P5 to upper bound the probability
+that the *j*-th query of **A** hits $(L_i, K_i)$ by
+
+$$
+\Pr[\text{bad}_{10}] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{Y}^j, \hat{K}^j) = (L_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
+$$
+
+**bad$_{11}$.** It is possible that some of the key inputs are equal, $K_i = K_j$ for some $i, j \in \{1, \dots, s\}$, $i \neq j$. The event bad$_{11}$ models the case that the ideal oracle produces a collision $(K_i, L_i) = (K_j, L_j)$ although $I_i \neq I_j$, which would indicate that the hash-function keys cannot be the result of computing them from the block cipher $E$. In the worst case, all keys $K_i$, for $1 \le i \le s$, are equal. So, the probability of this event can be upper bounded by
+
+$$
+\mathrm{Pr}[\mathrm{bad}_{11}] = \mathrm{Pr}[\exists i, j \in \{1, \dots, s\}, i \neq j : (K_i, L_i) = (K_j, L_j), I_i \neq I_j] \leq \frac{s^2}{2^{n+1}}.
+$$
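The bound above is a standard birthday bound: there are at most $\binom{s}{2} \le s^2/2$ pairs, each colliding with probability $2^{-n}$ over the uniform choice of the $L_i$. A quick exact check with toy parameters (illustrative only, not part of the proof):

```python
from fractions import Fraction

def collision_prob(s: int, N: int) -> Fraction:
    # Exact probability that s independent uniform samples from N values
    # contain at least one collision: 1 - (N)_s / N^s.
    no_coll = Fraction(1)
    for i in range(s):
        no_coll *= Fraction(N - i, N)
    return 1 - no_coll

# Check the bound s^2 / 2^(n+1) for n = 8 and a range of s values.
n = 8
N = 2**n
for s in range(2, 17):
    assert collision_prob(s, N) <= Fraction(s * s, 2 * N)
print("birthday bound s^2/2^(n+1) verified for n=8, s=2..16")
```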
+
+Our claim in Lemma 2 follows from summing up the probabilities of all bad events. $\square$
+
+Before proceeding with the proof for good transcripts, we formulate a short fact
+that will prove useful later on. In the remainder, we denote the falling factorial
+as $(n)_k := \frac{n!}{(n-k)!}$. First, we recall a definition from [7].
+
+**Definition 4 (Compressing Sequences [7]).** For integers $r \le s$, let $U = (u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences over $\mathbb{N}$. We say that $V$ compresses to $U$ if there exists a partition $\mathcal{P}$ of $\{1, \dots, s\}$ into exactly $r$ parts, say $\mathcal{P}_1, \dots, \mathcal{P}_r$, such that for all $i \in \{1, \dots, r\}$, it holds that $u_i = \sum_{j \in \mathcal{P}_i} v_j$.
+
+The following fact has been updated to match Proposition 1 of [7], where we
+changed the condition on $r$ and $s$ to $r \le s$. The proof is given there.
+
+**Fact 1 (A Variant of Proposition 1 in [7]).** For integers $r \le s$, let $U=(u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences of positive integers such that $V$ compresses to $U$. Then, for any positive integer $n$ and $N := 2^n$ such that $N \ge \sum_{i=1}^r u_i$, it holds that
+
+$$
+\prod_{i=1}^{r} (N)_{u_i} \leq \prod_{i=1}^{s} (N)_{v_i} \quad \text{and thus} \quad \prod_{i=1}^{r} \frac{1}{(N)_{u_i}} \geq \prod_{i=1}^{s} \frac{1}{(N)_{v_i}}.
+$$
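Fact 1 can be spot-checked numerically. The sketch below (illustrative only, with small random instances) builds a sequence $V$, compresses it to a shorter sequence $U$ via a random partition, and verifies the product inequality for the falling factorials:

```python
import random
from math import prod

def falling(N: int, k: int) -> int:
    # Falling factorial (N)_k = N * (N-1) * ... * (N-k+1).
    out = 1
    for i in range(k):
        out *= N - i
    return out

random.seed(1)
for _ in range(200):
    s = random.randint(2, 6)
    V = [random.randint(1, 4) for _ in range(s)]
    r = random.randint(1, s)
    # Random partition of the index set {0,...,s-1} into r non-empty parts.
    idx = list(range(s))
    random.shuffle(idx)
    cuts = sorted(random.sample(range(1, s), r - 1)) if r > 1 else []
    parts = [idx[a:b] for a, b in zip([0] + cuts, cuts + [s])]
    U = [sum(V[j] for j in P) for P in parts]
    N = 2 ** max(5, sum(U).bit_length())  # ensures N = 2^n >= sum of the u_i
    assert prod(falling(N, u) for u in U) <= prod(falling(N, v) for v in V)
print("Fact 1 verified on 200 random compressing sequences")
```

The inequality holds because $(N)_{a+b} = (N)_a \cdot (N-a)_b \le (N)_a (N)_b$, applied part by part.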
+
+## A.2 Proof of Lemma 3
+
+*Proof.* Fix a good transcript $\tau$. In the ideal world, the probability to obtain $\tau$ is
+
+$$
+\begin{align*}
+\Pr[\Theta_{\text{ideal}} = \tau] &= \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \Pr_{\forall g} [L_g] \\
+&\qquad \cdot \Pr[K \leftarrow \mathcal{K} : K].
+\end{align*}
+$$
+
+In the real world, the probability to obtain a transcript $\tau$ is given by
+
+$$
+\begin{align*}
+\Pr[\Theta_{\text{real}} = \tau] &= \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \\
+&\qquad \cdot \Pr[K \leftarrow \mathcal{K} : K].
+\end{align*}
+$$
+---PAGE_BREAK---
+
+First, we consider the distribution of keys. In the ideal world, all components of $L = (K, L_1, \dots, L_s)$ are sampled uniformly and independently at random; the real world employs the block cipher $E$ for generating $L_1, \dots, L_s$. Let us focus on $K$, which is sampled uniformly in both worlds:
+
+$$ \Pr[K \leftarrow \mathcal{K} : K] = \frac{1}{|\mathcal{K}|}. $$
+
+The remaining hash-function key $L_1, \dots, L_s$ will be considered in turn. To prove the remainder of our claim in Lemma 3, we have to show that
+
+$$ \begin{align} & \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \tag{1} \\ & \ge \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \prod_{g=1}^s \Pr[L_g \leftarrow \{0, 1\}^n : L_g]. \nonumber \end{align} $$
+
+We reindex the distinct keys used in primitive queries to $\hat{K}^1, \dots, \hat{K}^\ell$ to eliminate duplicates. Given those indices, we group all primitive queries into sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, s.t. all sets are disjoint and each set $\hat{\mathcal{K}}^j$ contains exactly the primitive queries with key $\hat{K}^j$:
+
+$$ \hat{\mathcal{K}}^j := \left\{ (\hat{K}^i, \hat{X}^i, \hat{Y}^i) : \hat{K}^i = \hat{K}^j \right\}. $$
+
+We denote by $\hat{k}^j = |\hat{\mathcal{K}}^j|$ the number of queries with key $\hat{K}^j$. Clearly, it holds that $\ell \le q_P$ and $\sum_{j=1}^\ell \hat{k}^j = q_P$.
+
+Moreover, we also reindex the distinct tweaks of the construction queries to $T^1, \dots, T^r$ to eliminate duplicates. Given these new indices, we group all construction queries into sets $\mathcal{T}^j$, for $1 \le j \le r$, s.t. all sets are disjoint and each set $\mathcal{T}^j$ contains exactly the construction queries with tweak $T^j$:
+
+$$ \mathcal{T}^j := \left\{ (T^i, M^i, C^i) : T^i = T^j \right\}. $$
+
+We denote by $t^j = |\mathcal{T}^j|$ the number of queries with tweak $T^j$. It holds that $r \le q_C$ and $\sum_{j=1}^r t^j = q_C$.
+
+First, we consider the probability of an obtained good transcript in the ideal world. Therein, all components $L_1, \dots, L_s$ are sampled independently uniformly at random from $\{0, 1\}^n$. So, in the ideal world, it holds that
+
+$$ \prod_{g=1}^{s} \Pr[L_g \leftarrow \{0,1\}^n : L_g] = \frac{1}{(2^n)^s}. $$
+
+Recall that every $\tilde{\pi}(T^j, \cdot)$ and $\tilde{\pi}^{-1}(T^j, \cdot)$ is a permutation, and recall the assumption that **A** does not ask duplicate queries or queries to which it already knows the answer. So, all queries are pairwise distinct. The probability to obtain the outputs of our transcript for some fixed tweak $T^j$ is given by
+
+$$ \frac{1}{2^n \cdot (2^n - 1) \cdots (2^n - t^j + 1)} = \frac{1}{(2^n)_{t^j}}. $$
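The term $1/(2^n)_{t^j}$ is exactly the probability that a uniformly random permutation maps $t^j$ fixed distinct inputs to $t^j$ fixed distinct outputs. For a toy domain size, this can be checked by exhaustive enumeration (the concrete values of $N$, $t$, and the input/output points below are arbitrary):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

N, t = 5, 3
inputs = (0, 1, 2)
outputs = (4, 2, 0)  # any t pairwise-distinct target values

# Count permutations of {0,...,N-1} mapping the fixed inputs to the outputs.
hits = sum(1 for p in permutations(range(N))
           if all(p[x] == y for x, y in zip(inputs, outputs)))
total = factorial(N)

falling = 1
for i in range(t):
    falling *= N - i  # (N)_t = N * (N-1) * ... * (N-t+1)

assert Fraction(hits, total) == Fraction(1, falling)
print(f"Pr = {hits}/{total} = 1/(N)_t = 1/{falling}")
```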
+---PAGE_BREAK---
+
+The same applies for the outputs of the primitive queries in our transcript for some fixed key $\hat{K}^j$:
+
+$$ \frac{1}{(2^n)_{\hat{k}^j}}. $$
+
+The outputs of construction and primitive queries are independent from each other in the ideal world. Over all disjoint key and tweak sets, the probability for obtaining $\tau$ in the ideal world is given by
+
+$$ \mathrm{Pr}[\Theta_{\mathrm{ideal}} = \tau] = \left(\prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}\right) \cdot \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \frac{1}{(2^n)^s} \cdot \frac{1}{|\mathcal{K}|}. \quad (2) $$
+
+It remains to upper bound the probability of $\tau$ in the real world. We observe that for every pair of queries $i$ and $j$ with $T^i = T^j$, it holds that $H_2^i = H_2^j$, i.e., both queries always target the same underlying permutation. Moreover, in the real world, two distinct tweaks $T^i \neq T^j$ can still collide in their hash-function outputs $H_2^i = H_2^j$. In this case, the queries with tweaks $T^i$ and $T^j$ also use the same permutation. Furthermore, there may be hash-function outputs $H_2^i$ from construction queries that are identical to keys $\hat{K}^j$ that were used in primitive queries. In this case, both queries also employ the same permutation, and so, the outputs from primitive and construction queries are not independent as in the ideal world. Moreover, the derived keys $L_i$ are also constructed from the same block cipher $E$; hence, the inputs $K_i$ may also use the same permutation as primitive and construction queries.
+
+For our purpose, we also reindex the keys of all primitive queries to $\hat{K}^1, \dots, \hat{K}^\ell$ and the tweaks of all construction queries to $T^1, \dots, T^r$ to eliminate duplicates. We define key sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, and tweak sets $\mathcal{T}^j$, for $1 \le j \le r$, analogously as we did for the ideal world. Moreover, for every so-indexed tweak $T^i$, we compute its corresponding value $H_2^i$. We also reindex the distinct hash values to $H_2^1, \dots, H_2^u$ to eliminate duplicates, and group the construction queries into sets
+
+$$ \mathcal{H}_2^j := \left\{ (T^i, M^i, C^i) : \mathcal{H}_2(L, T^i) = H_2^j \right\}. $$
+
+We denote by $h_2^j = |\mathcal{H}_2^j|$ the number of queries whose tweak maps to $H_2^j$. Clearly, it still holds that $\sum_{j=1}^u h_2^j = q_C$. We can define an ordering s.t. for all $1 \le i \le u$, $T^i$ is mapped to $H_2^i$. Since, for all $1 \le i \le r$, all queries with tweak $T^i$ are contained in exactly one set $\mathcal{H}_2^j$, for some $j \in \{1, \dots, u\}$, it holds that
+
+$$ \sum_{j=1}^{u} h_2^{j} = \sum_{i=1}^{r} t^{i} = q_{C}, \quad u \le r, \quad \text{and for every } i \text{ there exists some } j \text{ with } h_{2}^{j} \ge t^{i}. $$
+
+Note that the sequence that contains the numbers of occurrences of the tweak values compresses to the sequence that contains the numbers of occurrences of the hash values: distinct tweaks $T^i$ and $T^j$ may map to the same hash value $H_2$. If the
+---PAGE_BREAK---
+
+hashes of $T^i$ and $T^j$ are identical, then the count of that hash value is the sum of (at least) their numbers of occurrences. Thus, the sequences are compressing, and it follows from Fact 1 that
+
+$$
+\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}} \geq \prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}.
+$$
+
+In addition, we reindex the distinct key inputs $K_i$ that are used for generating the keys $L_1, \dots, L_s$ to $K^1, \dots, K^w$ to eliminate duplicates, and group all tuples $(I_i, K_i)$ into sets $\mathcal{K}^j$, for $1 \le j \le w$, s.t. all sets are disjoint and each set contains exactly the key-generating tuples with key $K^j$:
+
+$$
+\mathcal{K}^j := \{(I_i, K_i) : K_i = K^j\}.
+$$
+
+We denote by $k^j = |\mathcal{K}^j|$ the number of tuples with key $K^j$; clearly, $\sum_{j=1}^w k^j = s$.
+
+On this basis, we unify and reindex the values $H_2^j$, $\hat{K}^j$, and $K^j$ to values $\mathbb{P}^1, \dots, \mathbb{P}^v$ (using $\mathbb{P}$ for permutation). We group all queries into sets $\mathcal{P}^j$, for $1 \le j \le v$, s.t. all sets are disjoint and each set $\mathcal{P}^j$ consists of exactly the union of all construction queries with hash value $H_2 = \mathbb{P}^j$, all primitive queries with key $\hat{K} = \mathbb{P}^j$, and all key-generating tuples with key $K = \mathbb{P}^j$:
+
+$$
+\mathcal{P}^j := \bigcup_{i : H_2^i = \mathbb{P}^j} \mathcal{H}_2^i \;\cup \bigcup_{i : \hat{K}^i = \mathbb{P}^j} \hat{\mathcal{K}}^i \;\cup \bigcup_{i : K^i = \mathbb{P}^j} \mathcal{K}^i.
+$$
+
+We denote by $p^j = |\mathcal{P}^j|$ the number of queries that use the same permutation.
+Clearly, it holds that $\sum_{j=1}^v p^j = q_P + q_C + s$. Recall that Block$(k,n)$ denotes the
+set of all $k$-bit key, $n$-bit block ciphers. In the following, we call a block cipher
+$E$ compatible with $\tau$ iff
+
+1. For all $1 \le i \le q_C$, it holds that $C^i = E_{H_2^i}(M^i \oplus H_1^i) \oplus H_3^i$, where $H_1^i = \mathcal{H}_1(L, T^i)$, $H_2^i = \mathcal{H}_2(L, T^i)$, and $H_3^i = \mathcal{H}_3(L, T^i)$;
+
+2. for all $1 \le j \le q_P$, it holds that $\hat{Y}^j = E_{\hat{K}^j}(\hat{X}^j)$; and
+
+3. for all $1 \le g \le s$, it holds that $L_g = E_{K_g}(I_g)$.
+
+Let $\text{Comp}(\tau)$ denote the set of all block ciphers $E$ compatible with $\tau$. Then,
+
+$$
+\Pr[\Theta_{\text{real}} = \tau] = \Pr[E \leftarrow \text{Block}(k,n) : E \in \text{Comp}(\tau)] \cdot \Pr[K \leftarrow \mathcal{K} : K]. \quad (3)
+$$
+
+We focus on the first factor on the right-hand side. Since we assume that no bad
+events have occurred, the fraction of compatible block ciphers is given by
+
+$$
+\mathrm{Pr}[E \leftarrow \text{Block}(k, n) : E \in \mathrm{Comp}(\tau)] = \prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}}.
+$$
+
+It holds that
+
+$$
+\sum_{i=1}^{v} p^i = q_P + q_C + s = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{r} t^j + \sum_{j=1}^{w} k^j = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{u} h_2^j + \sum_{j=1}^{w} k^j.
+$$
+---PAGE_BREAK---
+
+We can substitute the variables $\hat{k}^j, h_2^j$, and $k^j$ on the right-hand side by auxiliary variables $z^j$
+
+$$ \sum_{i=1}^{v} p^i = \sum_{j=1}^{\ell+u+w} z^j \quad \text{where} \quad z^j = \begin{cases} \hat{k}^j & \text{if } j \le \ell, \\ h_2^j & \text{if } \ell < j \le \ell+u, \\ k^j & \text{otherwise.} \end{cases} $$
+
+It holds that $v \le \ell+u+w \le \ell+r+w$. Since each permutation set $\mathcal{P}^i$ consists of all queries in $\tau$ that use a certain key $\hat{K}^j$, and/or all queries in $\tau$ that use one hash $H_2^j$, and/or all tuples $(I_i, K_i)$ that use one value $K^j$, it further holds that for all $1 \le i \le v$, there exists some $j \in \{1, \dots, \ell+u+w\}$ s.t.
+
+$$ p^i \ge z^j. $$
+
+Again, the sequences are compressing, and we can directly apply Fact 1. It follows that
+
+$$
+\begin{align}
+\prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}} &\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \tag{4} \\
+&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \\
+&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \frac{1}{(2^n)^s}.
+\end{align}
+$$
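The last step of the chain only uses $(2^n)_{k^j} \le (2^n)^{k^j}$ together with $\sum_j k^j = s$. A quick numerical spot-check (illustrative only, with toy parameters):

```python
from fractions import Fraction
import random

def falling(N: int, k: int) -> int:
    # Falling factorial (N)_k = N * (N-1) * ... * (N-k+1).
    out = 1
    for i in range(k):
        out *= N - i
    return out

random.seed(2)
N = 2**6
for _ in range(100):
    ks = [random.randint(1, 3) for _ in range(random.randint(1, 5))]
    s = sum(ks)
    lhs = Fraction(1)
    for k in ks:
        lhs *= Fraction(1, falling(N, k))
    # Since (N)_k <= N^k, each factor 1/(N)_k >= 1/N^k.
    assert lhs >= Fraction(1, N**s)
print("final inequality of Eq. (4) verified on 100 random instances")
```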
+
+Using the combined knowledge from Equations (1) through (4), we can derive that the probability of obtaining the construction and primitive outputs of the transcript in the real world is at least as high as in the ideal world:
+
+$$ \Pr[\Theta_{\text{real}} = \tau] \ge \Pr[\Theta_{\text{ideal}} = \tau]. $$
+
+So, we obtain our claim in Lemma 3. □
+
+---PAGE_BREAK---
+
+# Analysis of Power Matching on Energy Savings of a Pneumatic Rotary Actuator Servo-Control System
+
+Yeming Zhang¹*, Hongwei Yue¹, Ke Li² and Maolin Cai³
+
+**Abstract**
+
+When saving energy in a pneumatic system, the problem of energy losses is usually solved by reducing the air supply pressure. The power-matching method is applied to optimize the air-supply pressure of the pneumatic system, and the energy-saving effect is verified by experiments. First, the experimental platform of a pneumatic rotary actuator servo-control system is built, and the mechanism of the valve-controlled cylinder system is analyzed. Then, the output power characteristics and load characteristics of the system are derived, and their characteristic curves are drawn. The employed air compressor is treated as a fixed-displacement pump with a constant-pressure source, and the power characteristics of the system are matched. The power source characteristic curve should envelop the output characteristic curve and the load characteristic curve. The minimum gas supply pressure obtained by power matching represents the optimal gas supply pressure. The comparative experiments under two different gas supply pressure conditions show that the system under the optimal gas supply pressure can greatly reduce energy losses.
+
+**Keywords:** Pneumatic rotary actuator, Energy savings, Gas supply pressure, Characteristic curve, Power matching
+
+## 1 Introduction
+
+The problem of energy shortages has become increasingly significant with the rapid development of society. In addition to discovering new energy sources, energy conservation is the most effective and important measure to fundamentally solve the energy problem [1]. Energy saving has increasingly become a hot topic of concern. Energy has always been a constraint to economic development, which makes energy-saving research more urgent and practical [2]. Currently, pneumatic technology is widely used in various fields of industry, and has become an important technical means of transmission and control [3, 4]. The use of existing technology to improve the energy utilization rate of energy-consuming equipment is an important energy-saving method [5].
+
+However, the energy efficiency of pneumatic technology is relatively low [6]. Therefore, improving the efficiency of energy utilization and reducing the energy loss of pneumatic systems have become the concern of scholars all over the world [7, 8].
+
+Pneumatic systems have three aspects of energy wastage [9, 10]: (1) gas and power losses during compressor gas production, (2) pressure loss in the gas supply pipeline, and (3) gas leakage from the gas equipment [11]. Accordingly, many methods are available to solve these problems. For the losses during compressor gas production, the timing of the opening and closing of multiple air compressors can be optimized, and the gas production process of the air compressors can also be optimized, for example, by making full use of the expansion of compressed air to reduce unnecessary power consumption [12]. To reduce the pressure loss in the pipeline, the pressure in the gas supply pipeline can be lowered [13]. When necessary, a supercharger can be added in front of the terminal equipment. For gas leakage from the gas equipment, optimizing the component
+
+*Correspondence: zym@hpu.edu.cn
+
+¹ School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China
+Full list of author information is available at the end of the article
+---PAGE_BREAK---
+
+structure is usually implemented to solve this problem. The pneumatic servo-control system precisely controls the angle of rotation; however, energy loss still occurs in the system. For this system, reducing the gas supply pressure is the most effective way of reducing the energy loss. Determining the critical pressure and reducing the gas supply pressure as much as possible while ensuring normal operation of the system are the key. The power-matching method can solve the optimization problem of the gas supply pressure based on the power required by the system [14]. In flow compensation, different compensation controllers can also be designed to match the flow and the system to realize the purpose of energy savings [15, 16]. Problems arise with regard to the high energy consumption and poor controllability of the rotary system of a hydraulic excavator due to throttle loss and overflow loss in the control valve during frequent acceleration and deceleration with large inertia. Therefore, Huang et al. [17] proposed the flow matching of a pump valve joint control and an independent measurement method of the hydraulic excavator rotary system to improve the energy efficiency of the system and reduce throttle loss. Xu et al. [18] designed a dynamic bypass pressure compensation circuit of a load sensing system, which solved the problems of pressure shock and energy loss caused by excessive flow and improved the efficiency and controllability of the system. Kan et al. [19] analyzed the basic characteristics of a hydraulic transmission system for wheel loaders using numerical calculation and adopted the optimal design method of a power-matching system. This improved the efficient working area of the system and average efficiency in the transportation process, and reduced the average working fuel consumption rate. Yang et al. 
+designed an electro-hydraulic flow-matching controller with shunt capability to improve the dynamic characteristics, the energy-saving effect, and the stability of the system [20]. Guo et al. [21] used a genetic algorithm to optimize the parameters of an asynchronous motor to achieve energy savings and consumption reduction, which proved the effectiveness and practicability of the power-matching method for an electric pump system. Wang et al. [22] matched an engine and a generator to achieve efficiency optimization, obtained a common high-efficiency area, and proposed a partial power tracking control strategy. Lai et al. [23] proposed a parameter-matching method for the accumulator in a parallel hydraulic hybrid excavator and optimized the parameter matching of the main components, such as the engine, accumulator, and hydraulic secondary-regulation pump, using a genetic algorithm to reduce the installed power. Yan et al. [24] focused on the problem in which the flow of a constant-displacement pump could not match the changing load, resulting in energy loss.
+
+They proposed an electro-hydraulic flow-matching steering control method, which used a servo motor to drive a constant displacement pump independently to reduce the energy consumption of the system. At present, many studies on energy savings are conducted using the power matching method in the hydraulic system, but only few focus on the pneumatic system [25].
+
+In the present study, a method of reducing the gas supply pressure is implemented to reduce energy loss of a pneumatic rotary actuator servo-control system. The output and load characteristic curves of the system are derived, and the power source characteristic curve is matched to determine the optimal gas supply pressure. Finally, the experiment verifies the energy-saving effect under this gas supply pressure.
+
+Through theoretical analysis and experimental verification on the application platform of the pneumatic rotary actuator, a power-matching and energy-optimization method for the pneumatic rotary actuator under normal working conditions is proposed for the first time.
+
+## 2 Experimental Platform
+
+Figure 1 shows the schematic diagram of the pneumatic rotary actuator servo-control system.
+
+As a gas source, the air compressor provides power to the system. The air filter, air regulator, and air lubricator are used to filter and clean the gas. When the driving voltage signal of the proportional directional control valve is given, the proportional valve controls the flow and direction of the gas, and then controls the rotary motion of the pneumatic rotary actuator. The rotary encoder measures the angular displacement and transmits the TTL (Transistor-Transistor Logic) level signals to the data acquisition card. The data acquisition card is installed in the industrial personal computer which calls the program of the upper computer, samples the encoder signal, and outputs a 0–10 V voltage signal through the controller calculation. The driving voltage signal output by the controller further regulates the flow and direction of the proportional directional control valve to reduce the angle error. After continuous iteration, the angle error of the system decreases and tends to stabilize.
+
+Figure 2 shows the experimental platform of the pneumatic rotary actuator servo-control system. The round steel passes through the pneumatic rotary actuator and is connected to the rotary encoder through the coupling. The pneumatic rotary actuator is horizontally installed.
+
+By selecting the MPYE-5-M5-010-B proportional valve with a smaller range, we can more easily ensure the control accuracy of the system. The SMC MSQA30A pneumatic rotary actuator is adopted. The actuator has a high-precision ball bearing and belongs
+---PAGE_BREAK---
+
+**Figure 1** Schematic diagram of the pneumatic rotary actuator servo-control system
+
+**Figure 2** Experimental diagram of the pneumatic rotary servo-control system
+
+to a high-precision actuator type. The rotating platform of the actuator contains many symmetrical threaded holes for the easy introduction of loads. A high-precision rotary encoder is used, and its 20000 P/R resolution corresponds to an angular resolution of $1.8 \times 10^{-2}$°, which satisfies the high-precision measurement of the rotation angle. In addition, the air compressor and the filter, regulator, and lubricator (F. R. L.) units provide a gas supply pressure of up to 0.8 MPa. The digital I/O port and analog output port of the data-acquisition card meet the experimental requirements, and the 32-bit counter in the data-acquisition card improves the system response speed. The models and parameters of the components are listed in Table 1.
+
+In some experimental tests, measuring the flow rate, pressure, and temperature of the gas is necessary, which can be performed using a flow sensor, a pressure transmitter, and a temperature transmitter (thermocouple), respectively. The flow rate in the inlet and outlet is measured using a flow sensor in the FESTO SFAB series
+
+**Table 1** Models and parameters of the components
+
+| Component | Model | Parameter |
+|---|---|---|
+| Air compressor | PANDA 750-30L | Maximum supply pressure: 0.8 MPa |
+| F. R. L. units | AC3000-03 | Maximum working pressure: 1.0 MPa |
+| Proportional-directional control valve | FESTO MPYE-5-M5-010-B | 3-position 5-way valve, 0–10 V driving voltage |
+| Pneumatic rotary actuator | SMC MSQA30A | Bore: 30 mm; stroke: 190° |
+| Rotary encoder | GSS06-LDH-RAG2000Z1 | Resolution: 20000 P/R |
+| Data-acquisition card | NI PCI-6229 | 32-bit counter; output voltage: −10 V to +10 V |
+| Industrial personal computer | ADVANTECH IPC-610H | Standard configuration |
+---PAGE_BREAK---
+
+with a range of 2–200 L/min, and the flow rate of the leak port is measured using a flow sensor with a range of 0.1–5 L/min in the SFAH series. The MIK-P300 pressure transmitter has high accuracy and fast response and can accurately measure the pressure changes. A thermocouple is used as a temperature transmitter to measure the gas temperature. To prevent signal interference, a temperature isolator is added to the circuit for the temperature signal transmission. The models and parameters of the test components are listed in Table 2. The circuit connection of the experimental platform is shown in Figure 3.
+
+The schematic diagram of the valve-controlled cylinder system is constructed according to the experimental platform, as shown in Figure 4. The system consists of Chamber **a** and Chamber **b**. The dashed lines represent the boundaries of the chambers. Figure 4 shows the gas-flow mechanism when the spool moves to the right, and $\dot{m}_a$, $\dot{m}_b$ represent the mass flow rates of Chamber **a** and Chamber **b**, respectively. $p_a$, $p_b$ and $T_a$, $T_b$ represent the corresponding pressure and temperature of Chamber **a** and Chamber **b**, respectively. $p_s$ is the gas supply pressure, $p_e$ is the atmospheric pressure, and $\theta$ is the rotation angle of the pneumatic rotary actuator.
+
+Figure 3 Circuit connection of the experimental platform
+
+## 3 Power Characteristic Matching
+
+### 3.1 Output Characteristics of the Valve-Controlled Cylinder
+
+The output characteristic of the valve-controlled cylinder system refers to the relationship between the total load moment and angular velocity when the power source is known. The output characteristic can be obtained by the following method.
+
+When the supply pressure $p_s$ is relatively low, i.e., when $0.1013 \text{ MPa} \le p_s \le 0.4824 \text{ MPa}$, the condition $p_a/p_s > b = 0.21$ is satisfied, where $b$ denotes the critical pressure ratio, and the gas flow in the proportional-directional control valve is subsonic. The mass-flow equations through the proportional valve are then [26]
+
+$$ \dot{m}_a = \frac{S_e p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (1) $$
+
+$$ \dot{m}_b = \frac{S_e p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (2) $$
+
+Table 2 Models and parameters of the test components
+
+| Component | Model | Parameter |
|---|---|---|
| Pressure transmitter | MIK-P300 | Range: 0–1.0 MPa; accuracy: 0.3% FS |
| Flow sensor 1 | FESTO SFAB-200U-HQ8-2SV-M12 | Range: 2–200 L/min; accuracy: 3% o.m.v. + 0.3% FS |
| Flow sensor 2 | FESTO SFAH-5U-Q6S-PNLK-PNVBA-M8 | Range: 0.1–5 L/min; accuracy: 2% o.m.v. + 1% FS |
| Temperature transmitter (thermocouple) | TT-K-36 (K type, diameter: 0.1 mm) | Range: 0–260 °C; accuracy: 0.4% FS |
| Temperature isolator | SLDTR-2P11 | Response time: ≤ 10 ms; accuracy: 0.1% FS |
+---PAGE_BREAK---
+
+**Figure 4** Schematic diagram of the valve-controlled cylinder system
+
+where $S_e$ is the effective area of the proportional valve orifice, $R$ is the gas constant, $T_s$ is the gas supply temperature, and $\kappa$ is the isentropic index.
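As a numerical illustration, Eqs. (1) and (2) can be evaluated with a short script. The following is a minimal Python sketch; the operating point (pressures and temperature) is an illustrative assumption, not a measured value, and the maximum effective area is taken as $C\pi r^2$ from Table 3.

```python
import math

KAPPA = 1.4   # isentropic index of air
R = 287.0     # gas constant of air, J/(kg*K)

def subsonic_mass_flow(S_e, p_up, p_down, T_s):
    """Mass flow through a valve orifice in the subsonic regime, Eq. (1)/(2).

    S_e: effective orifice area (m^2); p_up, p_down: upstream and downstream
    pressures (Pa); T_s: upstream temperature (K). Valid only while the
    pressure ratio stays above the critical ratio b = 0.21.
    """
    ratio = p_down / p_up
    term = ratio ** (2.0 / KAPPA) - ratio ** ((KAPPA + 1.0) / KAPPA)
    return S_e * p_up / math.sqrt(R * T_s) * math.sqrt(2.0 * KAPPA / (KAPPA - 1.0) * term)

# Illustrative operating point (assumed): maximum valve opening, supply at
# 0.3367 MPa, chamber at 0.25 MPa, supply temperature 293 K.
S_e_max = 0.6437 * math.pi * (1.0e-3) ** 2   # C * pi * r^2 from Table 3
mdot_a = subsonic_mass_flow(S_e_max, 0.3367e6, 0.25e6, 293.0)
```

For the exhaust side, Eq. (2), the same function applies with the chamber pressure upstream and atmospheric pressure downstream.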
+
+When the opening of the proportional-directional control valve is maximum, the mass flow rates of the two chambers are maximum, which can be expressed as
+
+$$ \dot{m}_{\text{a-max}} = \frac{C \pi r^2 p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (3) $$
+
+$$ \dot{m}_{\text{b-max}} = \frac{C \pi r^2 p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (4) $$
+
+where C is the flow coefficient and r is the radius of the orifice.
+
+Under adiabatic conditions, $p_a/\rho_a^\kappa = p_s/\rho_s^\kappa$ and $p_b/\rho_b^\kappa = p_e/\rho_e^\kappa$, where $\rho_a$, $\rho_b$, $\rho_s$, and $\rho_e$ represent the gas density in Chamber **a**, the gas density in Chamber **b**, the gas supply density, and the atmospheric density, respectively. For the pneumatic rotary actuator, the following can be obtained from the mass flow-rate formulas:
+
+$$ \dot{m}_{\text{a-max}} = \rho_a \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_a}{\rho_s} \rho_s A d_f \dot{\theta} = \left(\frac{p_a}{p_s}\right)^{\frac{1}{\kappa}} \frac{p_s}{RT_s} A d_f \dot{\theta}, \quad (5) $$
+
+$$ \dot{m}_{\text{b-max}} = \rho_b \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_e}{\rho_b} \rho_b A d_f \dot{\theta} = \left(\frac{p_e}{p_b}\right)^{\frac{1}{\kappa}} \frac{p_b}{RT_s} A d_f \dot{\theta}, \quad (6) $$
+
+where A is the effective area of a single piston, $d_f$ is the pitch diameter of the gear, and $\dot{\theta}$ is the angular velocity of the pneumatic rotary actuator.
+
+**Table 3** Known parameters in Eq. (8)
+
+| Parameter | Value |
+|---|---|
+| A (m²) | 3.4636 × 10⁻⁴ |
+| d_f (m) | 0.014 |
+| κ | 1.4 |
+| C | 0.6437 |
+| r (m) | 1.00 × 10⁻³ |
+| R (J/(kg·K)) | 287 |
+
+The dynamic equation of the pneumatic rotary actuator can be expressed as follows:
+
+$$ p_a - p_b = \frac{f}{d_f A}, \quad (7) $$
+
+where f is the total load moment.
+
+Combining Eqs. (3)–(6) yields $p_a$ and $p_b$. Substituting the expressions of $p_a$ and $p_b$ into Eq. (7) yields
+
+$$ p_s \left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa - 1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}} - \frac{p_e}{\left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa-1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}}} = \frac{f}{d_f A}. \quad (8) $$
+
+Eq. (8) is the expression of the output characteristic curve of the valve-controlled cylinder. The known parameters in the equation are shown in Table 3.
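The output characteristic of Eq. (8) can be reproduced numerically with the Table 3 parameters. The following is a minimal Python sketch, assuming a supply temperature of $T_s = 293$ K (not listed in Table 3).

```python
import math

# Fixed parameters from Table 3.
A = 3.4636e-4      # effective piston area, m^2
d_f = 0.014        # gear pitch diameter, m
KAPPA = 1.4        # isentropic index
C = 0.6437         # flow coefficient
r = 1.0e-3         # orifice radius, m
R = 287.0          # gas constant, J/(kg*K)
p_e = 0.1013e6     # atmospheric pressure, Pa

def load_moment(theta_dot, p_s, T_s=293.0):
    """Total load moment f sustained at angular velocity theta_dot, Eq. (8)."""
    x = 1.0 - (A ** 2 * d_f ** 2 * theta_dot ** 2 * (KAPPA - 1.0)
               / (2.0 * C ** 2 * math.pi ** 2 * r ** 4 * KAPPA * R * T_s))
    e = KAPPA / (KAPPA - 1.0)
    return d_f * A * (p_s * x ** e - p_e / x ** e)

# At theta_dot = 0, Eq. (8) reduces to Eq. (9): f_max = A * d_f * (p_s - p_e).
f0 = load_moment(0.0, p_s=0.3367e6)
```

Sweeping `theta_dot` at a fixed `p_s` traces one of the parabola-like curves of Figure 5(a).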
+
+To further explore the output characteristics of the system, the influence of the fixed parameters is also analyzed theoretically. Figure 5 shows the output characteristic curves. The following characteristics can be observed in the $\dot{\theta}$–$f$ plane:
+
+(1) Figure 5(a) shows that the curve is a parabola with $p_s$ as a variable parameter; when $p_s$ increases from 0.3 MPa to 0.4 MPa, the whole parabola moves to the right while its shape does not change.
+
+(2) Figure 5(b) shows that when the maximum opening area of the valve increases from $\pi r^2$ to $2\pi r^2$, the whole parabola becomes wider but the vertices remain the same.
+
+(3) Figure 5(c) shows that the increase in effective working area A of the piston makes the top of the parabola move to the right and the parabola simultaneously becomes narrower.
+
+We can see from Eq. (8) that when $\dot{\theta}=0$, the maximum total load moment can be expressed as
+
+$$ f_{\max} = Ad_f(p_s - p_e). \quad (9) $$
+
+When $f=0$, the maximum angular velocity is
+---PAGE_BREAK---
+
+**Figure 5** Output characteristic curve of the valve-controlled cylinder: (a) Output characteristics of the pressure variation, (b) Output characteristics of the change in the valve port area, (c) Output characteristics of the variation in the effective piston area
+
+$$ \dot{\theta}_{\max} = \sqrt{\frac{2C^2 \pi^2 r^4 \kappa R T_s}{A^2 d_f^2 (\kappa - 1)} \left[ 1 - \left( \frac{p_e}{p_s} \right)^{\frac{\kappa-1}{2\kappa}} \right]}. \quad (10) $$
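Since Eq. (10) follows from Eq. (8) at $f = 0$, with the bracketed pressure-ratio term appearing under the square root, the no-load maximum angular velocity can be sketched numerically as follows (again assuming $T_s = 293$ K).

```python
import math

# Table 3 parameters; T_s = 293 K is an assumed supply temperature.
A, d_f, KAPPA, C, r, R = 3.4636e-4, 0.014, 1.4, 0.6437, 1.0e-3, 287.0
p_e = 0.1013e6   # atmospheric pressure, Pa

def theta_dot_max(p_s, T_s=293.0):
    """No-load maximum angular velocity of Eq. (10), rad/s."""
    prefactor = (2.0 * C ** 2 * math.pi ** 2 * r ** 4 * KAPPA * R * T_s
                 / (A ** 2 * d_f ** 2 * (KAPPA - 1.0)))
    bracket = 1.0 - (p_e / p_s) ** ((KAPPA - 1.0) / (2.0 * KAPPA))
    return math.sqrt(prefactor * bracket)

# A higher supply pressure yields a higher no-load speed.
w_opt = theta_dot_max(0.3367e6)
w_high = theta_dot_max(0.6e6)
```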
+
+## 3.2 Load Characteristic
+
+The load characteristic refers to the relationship between the moment required for the load to move and the position, velocity, and acceleration of the load itself [27]. The load characteristic can be expressed by the angular velocity–moment curve.
+
+The load characteristic is related to the form of load movement. When the load sinusoidally moves, the motion of the load is expressed as
+
+$$ \theta = \theta_m \sin \omega t, \quad (11) $$
+
+where $\theta_m$ is the maximum angular value of the load motion and $\omega$ is the sinusoidal motion frequency of the load.
+
+The angular velocity and acceleration of the load are
+
+$$ \dot{\theta} = \theta_m \omega \cos \omega t, \quad (12) $$
+
+$$ \ddot{\theta} = -\theta_m \omega^2 \sin \omega t. \quad (13) $$
+
+The total load moment of the pneumatic rotary actuator is
+
+$$ f = \left( \frac{1}{2} m_p d_f^2 + J \right) \ddot{\theta} + \frac{1}{2} d_f F_f \\ = - \left( \frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2 \sin \omega t \\ + \frac{1}{2} d_f \left[ F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta} \right], \quad (14) $$
+
+where $m_p$ is the mass of a single piston and $J$ is the moment of inertia of the pneumatic rotary actuator. $F_f$ is the friction force and can be represented by the Stribeck friction model.
+
+$$ F_f = F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta}, \quad (15) $$
+
+where $F_s$ is the maximum static friction, $F_c$ is the Coulomb friction, $\dot{\theta}_s$ is the critical Stribeck velocity, and $\sigma$ is the viscous friction coefficient.
+---PAGE_BREAK---
+
+**Table 4** Known parameters in Eq. (16)
+
+| Parameter | Value |
+|---|---|
+| F_s (N) | 10.60 |
+| F_c (N) | 6.03 |
+| θ̇_s (rad/s) | 0.19 |
+| σ (N·s/rad) | 0.87 |
+| m_p (kg) | 0.21 |
+
+**Figure 6** Load characteristic curve
+
+Combining Eqs. (12)–(14) yields
+
+$$ \left[ \frac{f - \frac{1}{2} d_f F_c \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f (F_s - F_c) e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f \sigma \dot{\theta}}{-\left( \frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2} \right]^2 + \left( \frac{\dot{\theta}}{\theta_m \omega} \right)^2 = 1. \quad (16) $$
+
+The known parameters in Eq. (16) are listed in Table 4.
+
+The load characteristic curve can be obtained from Eq. (16) when $\theta_m=180°$ and $\omega=10$ rad/s, as shown in Figure 6.
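The closed angular-velocity–moment curve of Figure 6 can also be traced parametrically from Eqs. (12)–(14). The following is a minimal Python sketch; the moment of inertia $J$ is not listed in Table 4, so a small illustrative value is assumed here.

```python
import math

# Table 4 parameters.
F_s, F_c = 10.60, 6.03       # maximum static / Coulomb friction, N
theta_dot_s = 0.19           # critical Stribeck velocity, rad/s
sigma = 0.87                 # viscous friction coefficient, N*s/rad
m_p, d_f = 0.21, 0.014       # piston mass (kg), gear pitch diameter (m)
J = 1.6e-4                   # moment of inertia, kg*m^2 (assumed value)

def load_point(t, theta_m=math.pi, omega=10.0):
    """(angular velocity, total load moment) at time t, from Eqs. (12)-(14)."""
    theta_dot = theta_m * omega * math.cos(omega * t)
    theta_ddot = -theta_m * omega ** 2 * math.sin(omega * t)
    s = math.copysign(1.0, theta_dot)
    F_f = (F_c * s + (F_s - F_c) * math.exp(-(theta_dot / theta_dot_s) ** 2) * s
           + sigma * theta_dot)                          # Stribeck model, Eq. (15)
    f = (0.5 * m_p * d_f ** 2 + J) * theta_ddot + 0.5 * d_f * F_f
    return theta_dot, f

# Sampling one period of the sinusoidal motion traces the closed curve.
period = 2.0 * math.pi / 10.0
curve = [load_point(k * period / 200.0) for k in range(200)]
```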
+
+### 3.3 Power Source Characteristics and Matching
+
+The power source characteristic refers to the characteristic of the flow and pressure provided by the power source, which can be expressed by the flow–pressure curve. The air compressor used in this work can be approximately regarded as a constant-pressure source with a fixed-displacement
+
+**Figure 7** Power source characteristic curve
+
+**Figure 8** Power source characteristic matching
+
+pump. Therefore, the power source characteristic curve is shown in Figure 7, where $\dot{m}_s$ is the gas supply mass flow, $p_s$ is the gas supply pressure, $\dot{m}_L$ is the driving mass flow, and $p_L$ is the driving pressure.
+
+The output and power source characteristics of the valve-controlled cylinder should envelope the load characteristic curve. To minimize unnecessary energy consumption, the output characteristic curve should be tangent to the load characteristic curve, and the power source characteristic curve should be tangent to the output characteristic curve in the f-axis direction and the load characteristic curve in the $\dot{\theta}$-axis direction, as shown in Figure 8.
+
+In this manner, the maximum total load moment is obtained, i.e., $f_{\max}=0.96$ N·m. The optimal gas supply pressure can then be obtained from Eq. (9), i.e., $p_s=f_{\max}/(d_f A) + p_e= 0.3367$ MPa.
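The inversion of Eq. (9) performed here can be expressed as a one-line helper; the numerical result depends on the $f_{\max}$ read off Figure 8.

```python
# Inverting Eq. (9): the minimum supply pressure that can still deliver a
# required maximum total load moment f_max.
A, d_f = 3.4636e-4, 0.014   # Table 3 values
p_e = 0.1013e6              # atmospheric pressure, Pa

def optimal_supply_pressure(f_max):
    """Minimum supply pressure (Pa) able to deliver the moment f_max, Eq. (9)."""
    return f_max / (d_f * A) + p_e
```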
+---PAGE_BREAK---
+
+**4 Experimental Verification of the Energy Savings**
+
+To verify the calculation results presented in the previous section, low-speed uniform-motion experiments of the pneumatic rotary actuator were carried out using 0.6 and 0.3367 MPa supply pressure. The total energy and effective energy consumed by the valve-controlled cylinder system were measured and calculated. In the experiment, the input-angle signal was set as the slope signal, and Chamber **a** was used as the intake chamber. The motion curve of the uniform-velocity period was considered, and the angular strokes in the two experiments were the same. Two flow sensors were used to measure the volume flow of the gas supply pipeline and the Chamber **a** port. Temperature sensors were used to measure the gas temperature of the gas supply pipeline and Chamber **a**.
+
+Figures 9 and 10 show the system response curves at gas supply pressure values of 0.6 and 0.3367 MPa, respectively, including the angle curve, gas supply flow curve, gas supply temperature curve, pressure curve of Chamber **a**, volume-flow curve of Chamber **a**, and temperature curve of Chamber **a**. Figures 9(f) and 10(f) show that the temperature in Chamber **a** changed with the change in the velocity, which first increased, then decreased, and then entered a stable stage.
+
+The total power consumed by the pneumatic system is expressed as [28, 29]:
+
+$$P_T = p_s \dot{V}_s \left[ \ln \frac{p_s}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_s - T_e}{T_e} - \ln \frac{T_s}{T_e} \right) \right], \quad (17)$$
+
+where $\dot{V}_s$ is the volume flow through the gas supply pipeline, and its numerical variation curves are shown in Figures 9(b) and 10(b). The $T_s$ curves are shown in Figures 9(c) and 10(c).
+
+The effective power of the pneumatic rotary actuator can be expressed as
+
+$$P_E = p_a \dot{V}_a \left[ \ln \frac{p_a}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_a - T_e}{T_e} - \ln \frac{T_a}{T_e} \right) \right], \quad (18)$$
+
+where $\dot{V}_a$ is the volume flow into Chamber **a**, and its numerical variation curves are shown in Figures 9(e) and 10(e). The $T_a$ curves are shown in Figures 9(f) and 10(f).
+
+By substituting the data in Figures 9 and 10 into Eqs. (17) and (18), the total and effective power of the pneumatic system at different supply pressure values can be obtained, as shown in Figure 11. The total and effective energy consumed by the pneumatic system can be obtained by integrating the data shown in Figure 11 using the Origin software.
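The post-processing of Eqs. (17) and (18) amounts to evaluating the air-power expression on each sample and integrating the result. The following is a minimal Python sketch, assuming an ambient temperature $T_e = 293$ K (an assumption; the paper does not state it).

```python
import math

KAPPA = 1.4
p_e = 0.1013e6   # atmospheric pressure, Pa
T_e = 293.0      # ambient temperature, K (assumed)

def air_power(p, V_dot, T):
    """Pneumatic power of a gas flow, as in Eqs. (17)/(18).
    p: absolute pressure (Pa), V_dot: volume flow (m^3/s), T: temperature (K)."""
    return p * V_dot * (math.log(p / p_e)
                        + KAPPA / (KAPPA - 1.0) * ((T - T_e) / T_e - math.log(T / T_e)))

def energy(times, powers):
    """Trapezoidal integration of a sampled power trace (as done here with Origin)."""
    return sum(0.5 * (powers[i] + powers[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
```

Applying `air_power` to the supply-line samples gives the total power of Eq. (17); applying it to the Chamber **a** samples gives the effective power of Eq. (18).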
+
+The actual work done by the gas on the pneumatic rotary actuator is equal to the sum of the rotational kinetic energy of the rotating platform, the kinetic energy of the cylinder pistons, and the work done by the pistons to overcome the friction force, which can be expressed as
+
+$$
+\begin{aligned}
+W &= \frac{1}{2} J \dot{\theta}^2 + \frac{1}{2} \cdot 2m_p \cdot (\dot{y})^2 + F_f y \\
+&= \frac{1}{2} \left( J + \frac{1}{2} m_p d_f^2 \right) \dot{\theta}^2 + \frac{1}{2} F_f d_f \theta,
+\end{aligned}
+\quad (19) $$
+
+where $y$ is the displacement of the actuator piston and $\dot{\theta}$ is replaced by the average value of the angular velocity.
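Eq. (19) can likewise be evaluated from the measured angle and the average angular velocity. A minimal sketch; $J$ is an assumed value since it is not listed, and the Stribeck friction of Eq. (15) is written for non-negative velocity only.

```python
import math

# Table 4 parameters.
F_s, F_c = 10.60, 6.03       # maximum static / Coulomb friction, N
theta_dot_s = 0.19           # critical Stribeck velocity, rad/s
sigma = 0.87                 # viscous friction coefficient, N*s/rad
m_p, d_f = 0.21, 0.014       # piston mass (kg), gear pitch diameter (m)
J = 1.6e-4                   # moment of inertia, kg*m^2 (assumed value)

def friction(theta_dot):
    """Stribeck friction of Eq. (15) for non-negative angular velocity, N."""
    return F_c + (F_s - F_c) * math.exp(-(theta_dot / theta_dot_s) ** 2) + sigma * theta_dot

def actual_work(theta, theta_dot_avg):
    """Work done by the gas on the actuator, Eq. (19), using the average velocity."""
    return (0.5 * (J + 0.5 * m_p * d_f ** 2) * theta_dot_avg ** 2
            + 0.5 * friction(theta_dot_avg) * d_f * theta)
```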
+
+The calculation results are described as follows. When the gas supply pressure is 0.6 MPa, the total energy consumed by the system is 195.552 J, the effective energy is 32.666 J, and the actual work done by the pneumatic rotary actuator is 3.513 J. When the gas supply pressure is 0.3367 MPa, the total energy consumed by the system is 32.207 J, the effective energy is 9.481 J, and the actual work done is 3.517 J. In both cases, the actual work of the pneumatic rotary actuator is almost the same, and when the gas supply pressure is 0.3367 MPa, the energy consumption is greatly reduced.
+
+**5 Further Discussions**
+
+According to the power-characteristic matching method, for a constant-pressure-source servo system with a fixed-displacement pump, the optimal air-supply pressure must first be calculated, after which the air-supply pressure is manually adjusted to this optimal value. Matching efficiency $\eta$ represents the ratio of the output power of the pneumatic system to the input power of the gas source and is expressed as
+
+$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s}. \quad (20)$$
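Eq. (20) is a simple ratio of driving power to supply power. The operating point below is purely hypothetical, chosen only to show that a constant-pressure, constant-flow source leaves $\eta$ well below one.

```python
def matching_efficiency(p_L, m_dot_L, p_s, m_dot_s):
    """Matching efficiency of Eq. (20): driving power over supply power."""
    return (p_L * m_dot_L) / (p_s * m_dot_s)

# Hypothetical operating point: driving pressure and mass flow both below
# the supply values, so the matching efficiency is well below one.
eta = matching_efficiency(p_L=0.25e6, m_dot_L=0.8e-3, p_s=0.3367e6, m_dot_s=1.4e-3)
```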
+
+Figure 7 shows that the matching efficiency of this method is low. The adaptive power source can adaptively change the gas supply pressure or flow to meet the system requirements and improve the matching efficiency. It can be divided into the following three types [30].
+
+(1) Flow adaptive power source
+
+This power source can adaptively adjust the supply flow from the power source according to the system flow demand to reduce the loss in the flow. The characteristic curve is shown in Figure 12(a). The matching efficiency is expressed as
+
+$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s'} \approx \frac{p_L}{p_s}. \quad (21)$$
+---PAGE_BREAK---
+
+**Figure 9** System-response curve at gas supply pressure of 0.6 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a
+---PAGE_BREAK---
+
+**Figure 10** System response curve at gas supply pressure of 0.3367 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a
+---PAGE_BREAK---
+
+**Figure 11** Total and effective power of the pneumatic system under different supply pressure values: (a) Total power, (b) Effective power
+
+(2) Pressure adaptive power source
+
+This power source can adaptively adjust the gas supply pressure of the power source according to the system pressure demand to reduce the pressure loss. The characteristic curve is shown in Figure 12(b). The matching efficiency is expressed as
+
+$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}_s} \approx \frac{\dot{m}_L}{\dot{m}_s}. \qquad (22) $$
+
+(3) Power adaptive power source
+
+This power source can adaptively adjust both the gas supply pressure and the flow from the power source according to the system pressure and flow demand to minimize the power loss, where $p'_s$ and $\dot{m}'_s$ denote the adjusted air-supply pressure and flow, respectively. The characteristic
+
+**Figure 12** Power characteristics of the adaptive power sources: (a) Flow adaptive power source, (b) Pressure adaptive power source, (c) Power adaptive power source
+---PAGE_BREAK---
+
+curve is shown in Figure 12(c). The matching efficiency is expressed as
+
+$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}'_s} \approx 1. \qquad (23) $$
+
+Therefore, the power adaptive power source demonstrates better energy-saving effect, and its matching efficiency is closer to 100%.
+
+## 6 Conclusions
+
+Power matching of the pneumatic rotary actuator involves optimizing the relevant parameters of the pneumatic rotary actuator system on the premise of satisfying its normal operation, matching the power demand with the power output, and thereby achieving energy savings. In this study, the derivation of the output-power and load characteristics of the pneumatic rotary actuator servo-control system is described. The employed air compressor is regarded as a constant-pressure source with a fixed-displacement pump, and the power characteristics of the system are matched. The following conclusions are obtained.
+
+(1) The minimum gas supply pressure obtained by the power-matching method represents the optimal gas supply pressure. The optimum gas supply pressure is 0.3367 MPa.
+
+(2) By comparing the system-response experiments at 0.6 and 0.3367 MPa, the total energy consumed by the system is reduced by 163.345 J at the optimal pressure. This verifies that the system under the optimal gas supply pressure can significantly reduce energy loss.
+
+(3) According to the characteristic curves of the adaptive power sources, the matching efficiency of the power adaptive power source is higher than that of the flow and pressure adaptive power sources.
+
+### Acknowledgments
+
+The authors would like to thank Henan Polytechnic University and Beihang University for providing the necessary facilities and machinery to build the prototype of the pneumatic servo system. The authors are sincerely grateful to the reviewers for their valuable review comments, which substantially improved the paper.
+
+### Authors' Contributions
+
+YZ provided guidance for the whole research. KL and HY established the model, designed the experiments and wrote the initial manuscript. KL and MC assisted with sampling and laboratory analyses. YZ and HY revised the manuscript, performed the experiments and analysed the data. All authors read and approved the final manuscript.
+
+### Authors' Information
+
+Yeming Zhang, born in 1979, is currently an associate professor at School of Mechanical and Power Engineering, Henan Polytechnic University, China. He received his PhD degree from Beihang University, China, in 2011. His research interests include complex mechatronics system design and simulation,
+
+intelligent control, reliability and fault diagnosis, pneumatic system energy saving and flow measurement.
+
+Hongwei Yue, born in 1992, is currently a master candidate at School of Mechanical and Power Engineering, Henan Polytechnic University, China.
+
+Ke Li, born in 1991, is currently a PhD candidate at School of Mechanical and Electrical Engineering, Harbin Institute of Technology, China. He received his master degree on mechano-electronic from Henan Polytechnic University, China, in 2019.
+
+Maolin Cai, born in 1972, is currently a professor and a PhD supervisor at Beihang University, China. He received his PhD degree from Tokyo Institute of Technology, Japan, in 2002. His main research directions include pneumatic and hydraulic fluidics, compressed air energy storage, and pneumatic pipeline systems.
+
+### Funding
+
+Supported by Henan Province Science and Technology Key Project of China (Grant Nos. 202102210081, 202102210082), Fundamental Research Funds for Henan Province Colleges and Universities of China (Grant No. NSFRF140120), and Doctor Foundation of Henan Polytechnic University (Grant No. B2012-101).
+
+### Competing Interests
+
+The authors declare no competing financial interests.
+
+### Author Details
+
+¹School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China. ²School of Mechanical and Electrical Engineering, Harbin Institute of Technology, Harbin 150001, China. ³School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China.
+
+Received: 6 July 2019 Revised: 22 February 2020 Accepted: 18 March 2020
+Published online: 09 April 2020
+
+### References
+
+[1] L Ge, L Quan, X G Zhang, et al. Power matching and energy efficiency improvement of hydraulic excavator driven with speed and displacement variable power source. *Chinese Journal of Mechanical Engineering*, 2019, 32:100, https://doi.org/10.1186/s10033-019-0415-x.
+
+[2] T Chen, L Cai, X F Ma, et al. Modeling and matching performance of a hybrid-power gas engine heat pump system with continuously variable transmission. *Building Simulation*, 2019, 12(2): 273-283.
+
+[3] G W Jia, W Q Xu, M L Cai, et al. Micron-sized water spray-cooled quasi-isothermal compression for compressed air energy storage. *Experimental Thermal and Fluid Science*, 2018, 96: 470-481.
+
+[4] D Shaw, J-J Yu, C Chieh. Design of a hydraulic motor system driven by compressed air. *Energies*, 2013, 6(7): 3149-3166.
+
+[5] M Cheng, B Xu, J H Zhang, et al. Pump-based compensation for dynamic improvement of the electrohydraulic flow matching system. *IEEE Transactions on Industrial Electronics*, 2017, 64(4): 2903-2913.
+
+[6] Y M Zhang, K Li, G Wang, et al. Nonlinear model establishment and experimental verification of a pneumatic rotary actuator position servo system. *Energies*, 2019, 12(6): 1096.
+
+[7] T L Brown, V P Atluri, J P Schmiedeler. A low-cost hybrid drivetrain concept based on compressed air energy storage. *Applied Energy*, 2014, 134: 477-489.
+
+[8] Y M Zhang, M L Cai. Overall life cycle comprehensive assessment of pneumatic and electric actuator. *Chinese Journal of Mechanical Engineering*, 2014, 27(3): 584-594.
+
+[9] M L Cai. Energy saving technology on pneumatic systems. *Chinese Hydraulics & Pneumatics*, 2013(8): 1-8. (in Chinese)
+
+[10] J F Li. Energy saving of pneumatic system. Beijing: Machinery Industry Press, 1997. (in Chinese)
+
+[11] R Saidur, N A Rahim, M Hasanuzzaman. A review on compressed-air energy use and energy savings. *Renewable and Sustainable Energy Reviews*, 2010, 14(4): 1135-1153.
+
+[12] Y M Zhang, S Wang, S L Wei, et al. Optimization of control method of air compressor group under intermittent large flow condition. *Fluid Machinery*, 2017, 45(7): 7-11.
+---PAGE_BREAK---
+
+[13] K Baghestan, S M Rezaei, H A Talebi, et al. An energy-saving nonlinear position control strategy for electro-hydraulic servo systems. *ISA Trans.*, 2015, 59: 268-279.
+[14] S P Yang, H Yu, J G Liu, et al. Research on power matching and energy saving control of power system in hydraulic excavator. *Journal of Mechanical Engineering*, 2014, 50(5): 152-160. (in Chinese)
+[15] M Cheng, B Xu, J H Zhang, et al. Valve-based compensation for controllability improvement of the energy-saving electrohydraulic flow matching system. *Journal of Zhejiang University: Science A*, 2017, 18(6): 430-442.
+[16] B Xu, M Cheng, H Y Yang, et al. A hybrid displacement/pressure control scheme for an electrohydraulic flow matching system. *IEEE/ASME Transactions on Mechatronics*, 2015, 20(6): 2771-2782.
+[17] W N Huang, L Quan, J H Huang, et al. Flow matching with combined control of the pump and the valves for the independent metering swing system of a hydraulic excavator. *Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering*, 2018, 232(10): 1310-1322.
+[18] B Xu, M Cheng, H Y Yang, et al. Electrohydraulic flow matching system with bypass pressure compensation. *Journal of Zhejiang University (Engineering Science)*, 2015, 49(9): 1762-1767. (in Chinese)
+[19] Y Z Kan, D Y Sun, Y Luo, et al. Optimal design of power matching for wheel loader based on power reflux hydraulic transmission system. *Mechanism and Machine Theory*, 2019, 137: 67-82.
+[20] H Y Yang, W Liu, B Xu, et al. Characteristic analysis of electro-hydraulic flow matching control system in hydraulic excavator. *Journal of Mechanical Engineering*, 2012, 48(14): 156-163. (in Chinese)
+[21] X Guo, C Lu, J Li, et al. Analysis of motor-pump system power matching based on genetic algorithm. *EEA - Electrotehnica, Electronica, Automatica*, 2018, 66(1): 93-99.
+
+[22] X Wang, H Lv, Q Sun, et al. A proportional resonant control strategy for efficiency improvement in extended range electric vehicles. *Energies*, 2017, 10(2): 204.
+[23] X L Lai, C Guan. A parameter matching method of the parallel hydraulic hybrid excavator optimized with genetic algorithm. *Mathematical Problems in Engineering*, 2013: 1-6.
+[24] X D Yan, L Quan, J Yang. Analysis on steering characteristics of wheel loader based on electric-hydraulic flow matching principle. *Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(18): 71-78. (in Chinese)
+[25] L C Xu, X M Hou. Power matching on loader engine and hydraulic torque converter based on typical operating conditions. *Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(7): 80-84. (in Chinese)
+[26] X H Fu, M L Cai, Y X Wang, et al. Optimization study on expansion energy used air-powered vehicle with pneumatic-hydraulic transmission. *Chinese Journal of Mechanical Engineering*, 2018, 31:3, https://doi.org/10.1186/s10033-018-0220-y.
+[27] H B Yuan, H Na, Y Kim. Robust MPC-PIC force control for an electro-hydraulic servo system with pure compressive elastic load. *Control Engineering Practice*, 2018, 79: 170-184.
+[28] Y Shi, M L Cai, W Q Xu, et al. Methods to evaluate and measure power of pneumatic system and their applications. *Chinese Journal of Mechanical Engineering*, 2019, 32:42, https://doi.org/10.1186/s10033-019-0354-6.
+[29] Y Shi, T C Wu, M L Cai, et al. Energy conversion characteristics of a hydro-pneumatic transformer in a sustainable-energy vehicle. *Applied Energy*, 2016, 171: 77-85.
+[30] C C Zhan, X Y Chen. *Hydraulic reliability optimization and intelligent fault diagnosis*. Beijing: Metallurgical Industry Press, 2015. (in Chinese)
+
\ No newline at end of file
diff --git a/samples/texts_merged/879988.md b/samples/texts_merged/879988.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c107a98b95b994888509c4b170f2d6a2fa73765
--- /dev/null
+++ b/samples/texts_merged/879988.md
@@ -0,0 +1,435 @@
+
+---PAGE_BREAK---
+
+# The Poisson Process and Associated Probability Distributions on Time Scales
+
+Dylan R. Poulsen
+Department of Mathematics
+Baylor University
+Waco, TX 76798
+
+Email: Dylan_Poulsen@baylor.edu
+
+Michael Z. Spivey
+Department of Mathematics and
+Computer Science
+University of Puget Sound
+Tacoma, WA 98416
+
+Email: mspivey@pugetsound.edu
+
+Robert J. Marks II
+Department of Electrical and
+Computer Engineering
+Baylor University
+Waco, TX 76798
+
+Email: Robert_Marks@baylor.edu
+
+**Abstract**—Duals of probability distributions on continuous $\mathbb{R}$ domains exist on discrete $\mathbb{Z}$ domains. The Poisson distribution on $\mathbb{R}$, for example, manifests itself as a binomial distribution on $\mathbb{Z}$. Time scales are a domain generalization in which $\mathbb{R}$ and $\mathbb{Z}$ are special cases. We formulate a generalized Poisson process on an arbitrary time scale and show that the conventional Poisson distribution on $\mathbb{R}$ and binomial distribution on $\mathbb{Z}$ are special cases. The waiting times of the generalized Poisson process are used to derive the Erlang distribution on a time scale and, in particular, the exponential distribution on a time scale. The memoryless property of the exponential distribution on $\mathbb{R}$ is well known. We find conditions on the time scale which preserve the memorylessness property in the generalized case.
+
+On $\mathbb{R}$, this is interpreted in the limiting sense as $\mu(t) \to 0$, so that $x^\Delta(t) = \frac{d}{dt}x(t)$. The Hilger integral can be viewed as the antiderivative in the sense that, if $y(t) = x^\Delta(t)$, then for $s, t \in \mathbb{T}$,
+
+$$\int_{\tau=s}^{t} y(\tau)\Delta\tau = x(t) - x(s).$$
+
+The solution to the differential equation
+
+$$x^{\Delta}(t) = zx(t); x(0) = 1,$$
+
+is $x(t) = e_z(t, 0)$ where [2], [10]
+
+$$e_z(t, s) := \exp \left( \int_{\tau=s}^{t} \frac{\log(1 + \mu(\tau)z)}{\mu(\tau)} \Delta\tau \right).$$
+
+For an introduction to time scales, there is an online tutorial [10] or, for a more thorough treatment, see the text by Bohner and Peterson [2].
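On a purely discrete time scale the integral in the definition of $e_z(t,s)$ reduces to a sum over the points of the scale, so the exponential collapses to the finite product $\prod_{\tau \in [s,t)} (1 + \mu(\tau)z)$. A minimal sketch of this (the helper name `ts_exp` and the list representation of the time scale are ours, not from the paper):

```python
def ts_exp(z, t, s, points):
    """Time-scale exponential e_z(t, s) on a discrete time scale.

    `points` is a sorted list of points of the time scale containing
    [s, t]; on a discrete scale the Hilger integral defining e_z is a
    sum weighted by the graininess mu(tau) = sigma(tau) - tau, and
    exponentiating turns it into the product of (1 + mu(tau) * z)
    over tau in [s, t).
    """
    acc = 1.0
    for a, b in zip(points, points[1:]):
        if s <= a < t:
            acc *= 1.0 + (b - a) * z    # b - a is the graininess mu(a)
    return acc

# On Z (graininess 1), e_z(t, 0) = (1 + z)^t:
print(ts_exp(0.5, 10, 0, list(range(11))))  # 1.5**10
```

On $\mathbb{R}$ the corresponding limit gives $e_z(t,s) = e^{z(t-s)}$, matching the classical solution of $x^\Delta(t) = zx(t)$.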
+
+## I. INTRODUCTION
+
+The theory of continuous and discrete time stochastic processes is well developed [7], [8]. Stochastic processes on general closed subsets of the real numbers, also known as *time scales*, allow a generalization to other domains [4], [9]. The notion of a stochastic process on time scales naturally leads to questions about probability theory on time scales, which has been developed by Kahraman [5]. We begin by introducing a generalized Poisson process on time scales and show it reduces to the conventional Poisson process on $\mathbb{R}$ and the binomial distribution on $\mathbb{Z}$. We then use properties of the Poisson process to motivate generalized Erlang and exponential distributions on time scales. Finally, we show that the generalized exponential distribution has an analogue of the memorylessness property under periodicity conditions on the time scale.
+
+## II. FOUNDATIONS
+
+A time scale, $\mathbb{T}$, is any closed subset of the real line. We restrict attention to causal time scales [6] where $0 \in \mathbb{T}$ and $t \ge 0$ for all $t \in \mathbb{T}$. The forward jump operator [2], [10], $\sigma(t)$, is defined as the point immediately to the right of $t$, in the sense that $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$. The graininess is the distance between points, defined as $\mu(t) := \sigma(t) - t$. For $\mathbb{R}$, $\sigma(t) = t$ and $\mu(t) = 0$.
+
+The time scale or Hilger derivative of a function $x(t)$ on $\mathcal{T}$ is defined as
+
+$$x^{\Delta}(t) := \frac{x(\sigma(t)) - x(t)}{\mu(t)}. \quad (II.1)$$
+
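For a quick numerical illustration of (II.1) at a right-scattered point of a discrete time scale (the function name and the list representation of the scale are ours):

```python
def hilger_derivative(x, t, timescale):
    """Delta-derivative (II.1) of x at a right-scattered point t:
    (x(sigma(t)) - x(t)) / mu(t), where sigma(t) is the next point of
    the time scale and mu(t) = sigma(t) - t is the graininess."""
    sigma_t = min(s for s in timescale if s > t)   # forward jump operator
    mu_t = sigma_t - t
    return (x(sigma_t) - x(t)) / mu_t

# On Z this is the forward difference; for x(t) = t^2 it gives 2t + 1:
print(hilger_derivative(lambda t: t * t, 3, range(20)))  # 7.0
```

At a right-dense point the quotient is replaced by the limit in the usual way, recovering $\frac{d}{dt}x(t)$ on $\mathbb{R}$.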
+## III. THE POISSON PROCESS ON TIME SCALES
+
+We begin by presenting the derivation for a particular stochastic process on time scales which mirrors a derivation for the Poisson process on $\mathbb{R}$ [3].
+
+Let $\lambda > 0$. Assume the probability an event occurs in the interval $[t, \sigma(s))_{\mathcal{T}}$ is given by
+
+$$-(\ominus\lambda)(t)(\sigma(s) - t) + o(s - t),$$
+
+where $(\ominus z)(t) := -z/(1 + \mu(t)z)$ [2], [10]. Hence the probability that no event occurs on the interval is given by
+
+$$1 + (\ominus\lambda)(t)(\sigma(s) - t) + o(s - t).$$
+
+We also assume that at $t=0$ no events have occurred.
+
+We now introduce some notation. Let $X : \mathbb{T} \to \mathbb{N}^0$ be a counting process [8], where $\mathbb{N}^0$ denotes the nonnegative integers. For $k \in \mathbb{N}^0$, define $p_k(t) = \mathbb{P}[X(t) = k]$, the probability that $k$ events have occurred by time $t \in \mathbb{T}$. Let $t, s \in \mathbb{T}$ with $s > t$. Consider the successive intervals $[0, t)_{\mathbb{T}}$
+---PAGE_BREAK---
+
+and $[t, \sigma(s))_{\mathbb{T}}$. We can therefore set up the system of equations
+
+$$
+\begin{align*}
+p_0(\sigma(s)) &= p_0(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
+p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
+&\quad + p_0(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
+&\vdots \\
+p_k(\sigma(s)) &= p_k(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
+&\quad + p_{k-1}(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
+&\vdots
+\end{align*}
+$$
+
+with initial conditions $p_0(0) = 1$ and $p_k(0) = 0$ for $k > 0$. We will let $s \to t$ and solve these equations recursively. Consider the $p_0$ equation. By the definition of the derivative on time scales, we have
+
+$$
+p_0^\Delta(t) = \lim_{s \to t} \frac{p_0(\sigma(s)) - p_0(t)}{\sigma(s) - t} = (\ominus\lambda)(t)p_0(t),
+$$
+
+which, using the initial value $p_0(0) = 1$, has a solution
+
+$$
+p_0(t) = e_{\ominus\lambda}(t, 0). \tag{III.1}
+$$
+
+Now consider the $p_1$ equation. Substituting the solution of the $p_0$ equation yields
+
+$$
+\begin{align*}
+p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
+&\quad + e_{\ominus\lambda}(t, 0)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t),
+\end{align*}
+$$
+
+which, using (II.1), yields
+
+$$
+p_1^{\Delta}(t) = (\ominus\lambda)(t)p_1(t) - (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0). \quad (III.2)
+$$
+
+Using the variation of constants formula on time scales [2], we arrive at the solution
+
+$$
+\begin{align*}
+p_1(t) &= - \int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\
+&= - \int_0^t e_\lambda(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\
+&= \lambda \int_0^t e_\lambda(\tau, 0)e_\lambda(0, t)e_{\ominus\lambda}(\tau, 0)\Delta\tau \\
+&= \lambda \int_0^t e_{\ominus\lambda}(t, 0)\Delta\tau \\
+&= \lambda t e_{\ominus\lambda}(t, 0) \\
+&= \frac{\lambda}{1 + \mu(0)\lambda} t e_{\ominus\lambda}(t, \sigma(0)) \\
+&= -( \ominus \lambda )(0) t e_{\ominus \lambda }(t, \sigma(0)).
+\end{align*}
+$$
+
+Now consider the $p_2$ equation. Substituting the solution of the $p_1$ equation yields
+
+$$
+\begin{align*}
+p_2(\sigma(s)) &= p_2(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
+&\quad - (\ominus\lambda)(0)te_{\ominus\lambda}(t, \sigma(0))[-(\ominus\lambda)(t)(\sigma(s) - t)] \\
+&\quad + o(s - t),
+\end{align*}
+$$
+
+which, using (II.1) yields
+
+$$
+p_2^{\Delta}(t) = (\ominus\lambda)(t)p_2(t) + (\ominus\lambda)(0)(\ominus\lambda)(t)te_{\ominus\lambda}(t, \sigma(0)).
+$$
+
+Again, using the variation of constants formula on time scales,
+we arrive at the solution
+
+$$
+\begin{align*}
+p_2(t) &= (\ominus\lambda)(0) \\
+& \quad \times \int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)\tau e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\
+&= (\ominus\lambda)(0) \\
+& \quad \times \int_0^t e_{\lambda}(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)\tau e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\
+&= -\lambda(\ominus\lambda)(0) \\
+& \quad \times \int_0^t \tau e_{\lambda}(\tau, \sigma(0)) e_{\lambda}(\sigma(0), t) e_{\ominus\lambda}(\tau, \sigma(0)) \Delta\tau \\
+&= -\lambda(\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma(0)) \int_0^t \tau \Delta\tau \\
+&= -\lambda(\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma(0)) h_2(t, 0) \\
+&= \frac{-\lambda}{1 + \mu(\sigma(0))\lambda} (\ominus\lambda)(0) e_{\ominus\lambda}(t, \sigma^2(0)) h_2(t, 0) \\
+&= (\ominus\lambda)(\sigma(0)) (\ominus\lambda)(0) h_2(t, 0) e_{\ominus\lambda}(t, \sigma^2(0)).
+\end{align*}
+$$
+
+In general, it can be shown via induction that
+
+$$
+p_k(t) = (-1)^k h_k(t, 0) e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)),
+$$
+
+where $h_k(t, 0)$ is the $k^{\text{th}}$ generalized Taylor monomial [2].
+
+The above derivation motivates the following definition:
+
+**Definition III.1.** Let $\mathbb{T}$ be a time scale. We say $S: \mathbb{T} \rightarrow \mathbb{N}^0$ is a $\mathbb{T}$-Poisson process with rate $\lambda > 0$ if for $t \in \mathbb{T}$ and $k \in \mathbb{N}^0$,
+
+$$
+\mathbb{P}[S(t; \lambda) = k] = (-1)^k h_k(t, 0) e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)). \quad (III.3)
+$$
+
+Each fixed $t \in \mathbb{T}$ generates a discrete distribution of the number of arrivals by time $t$. We now examine the specific examples of $\mathbb{R}$, $\mathbb{Z}$, and the harmonic time scale [2].
+
+## A. On $\mathbb{R}$ and $\mathbb{Z}$
+
+Let $S: \mathbb{R} \to \mathbb{N}^0$ be an $\mathbb{R}$-Poisson process. Then $\sigma^i(0) = 0$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = -\lambda$ for all $t \in \mathbb{R}$, and $h_k(t, 0) = \frac{t^k}{k!}$. Thus
+
+$$
+\mathbb{P}[S(t; \lambda) = k] = \frac{(\lambda t)^k}{k!} e^{-\lambda t},
+$$
+
+which we recognize as the Poisson distribution.
+
+Now let $S: \mathbb{Z} \to \mathbb{N}^0$ be a $\mathbb{Z}$-Poisson process. We have $\sigma^i(0) = i$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = \frac{-\lambda}{1+\lambda} := -p$, and $h_k(t, 0) = \binom{t}{k}$. Thus we have
+
+$$
+\mathbb{P}[S(t; \lambda) = k] = \binom{t}{k} p^k (1-p)^{t-k},
+$$
+
+which we recognize as the binomial distribution.
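The reduction to the binomial distribution can be checked numerically by evaluating (III.3) term by term on $\mathbb{Z}$, where $h_k(t,0) = \binom{t}{k}$, $\sigma^k(0) = k$, and $e_{\ominus\lambda}(t, k) = (1-p)^{t-k}$. A sketch (the function name is ours):

```python
import math

def z_poisson_pmf(k, t, lam):
    """P[S(t) = k] for the Z-Poisson process, evaluated factor by
    factor from the general formula (III.3) with T = Z."""
    p = lam / (1.0 + lam)                 # -(ominus lam) on Z
    h_k = math.comb(t, k)                 # Taylor monomial h_k(t, 0)
    e_factor = (1.0 - p) ** (t - k)       # e_{ominus lam}(t, sigma^k(0))
    prod = (-p) ** k                      # prod_i (ominus lam)(sigma^i(0))
    return (-1.0) ** k * h_k * e_factor * prod

# Agrees with the binomial pmf C(t,k) p^k (1-p)^(t-k):
lam, t = 2.0, 6
p = lam / (1 + lam)
for k in range(t + 1):
    binom = math.comb(t, k) * p ** k * (1 - p) ** (t - k)
    assert abs(z_poisson_pmf(k, t, lam) - binom) < 1e-12
```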
+---PAGE_BREAK---
+
+Fig. 1. Probability against number of events and time for the $\mathbb{H}_n$-Poisson process with rate 1.
+
+Fig. 2. A comparison of probability versus number of events near $t = 2$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process than the $\mathbb{R}$-Poisson process.
+
+## B. On the Harmonic Time Scale
+
+Now let $S: \mathbb{H}_n \to \mathbb{N}^0$ be an $\mathbb{H}_n$-Poisson process with rate $\lambda$, where
+
+$$ t \in \mathbb{H}_n \text{ if and only if } t = \sum_{k=1}^{m} \frac{1}{k} \text{ for some } m \in \mathbb{N}^0, $$
+
+which we call the harmonic time scale. To help understand later figures and emphasize that $S$ yields a distinct discrete distribution for each value of $t$, we show the probability against the number of events and time in Figure 1. The choice of $\mathbb{H}_n$ as the time scale shows very informative behavior. Near $t=0$, when the graininess is large, we find behavior that is more like the integers. In contrast, away from $t=0$, where the graininess is small, we find behavior that is more like the real numbers. This behavior is demonstrated in Figures 2–4.
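The figures can be reproduced on any discrete time scale by evaluating (III.3) with the Taylor monomials computed from the recursion $h_0 \equiv 1$, $h_{k+1}(t,0) = \int_0^t h_k(\tau,0)\,\Delta\tau$. A sketch for a finite truncation of $\mathbb{H}_n$ (the helper names and the list representation of the scale are ours):

```python
def pmf_on_discrete_scale(points, lam):
    """Evaluate P[S(t) = k] of the T-Poisson process, formula (III.3),
    on a finite discrete time scale given by the sorted list `points`
    (with points[0] = 0). Returns {t: [p_0(t), ..., p_j(t)]}."""
    n = len(points)
    mu = [points[j + 1] - points[j] for j in range(n - 1)]
    om = [-lam / (1 + m * lam) for m in mu]     # (ominus lam) at each point

    # Taylor monomials h_k(t_j, 0): Delta-integrate the previous row
    h = [[1.0] * n]                             # h_0 = 1
    for k in range(1, n):
        prev, row = h[-1], [0.0]
        for j in range(1, n):
            row.append(row[-1] + mu[j - 1] * prev[j - 1])
        h.append(row)

    def e_om(j_t, j_s):                         # e_{ominus lam}(t_{j_t}, t_{j_s})
        acc = 1.0
        for j in range(j_s, j_t):
            acc *= 1.0 + mu[j] * om[j]
        return acc

    out = {}
    for j in range(n):
        probs = []
        for k in range(j + 1):
            prod = 1.0
            for i in range(k):
                prod *= om[i]                   # (ominus lam)(sigma^i(0))
            probs.append((-1.0) ** k * h[k][j] * e_om(j, k) * prod)
        out[points[j]] = probs
    return out

# First points of the harmonic time scale: 0, 1, 1 + 1/2, ...
H = [sum(1.0 / k for k in range(1, m + 1)) for m in range(12)]
for probs in pmf_on_discrete_scale(H, 1.0).values():
    assert abs(sum(probs) - 1.0) < 1e-9        # a distribution at each t
```

On $\mathbb{Z}$ (graininess 1 everywhere) the same routine reproduces the binomial probabilities.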
+
+Fig. 3. A comparison of probability versus number of events near $t = 4$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{R}$-Poisson process than the $\mathbb{Z}$-Poisson process.
+
+Fig. 4. A comparison of probability versus time when we fix the number of events at 2 for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1 and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process near $t = 0$ and more like the $\mathbb{R}$-Poisson process away from $t = 0$.
+
+## IV. THE ERLANG DISTRIBUTION ON TIME SCALES
+
+A time scales generalization of the Erlang distribution can be generated by examining the waiting time until the $n^{\text{th}}$ event in the $\mathbb{T}$-Poisson process. To that end, let $\mathbb{T}$ be a time scale and let $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda$. Let $T_n$ be a random variable denoting the time until the $n^{\text{th}}$ event. We have
+
+$$
+\begin{aligned}
+\mathbb{P}[S(t; \lambda) < n] &= \mathbb{P}[T_n > t] \\
+&= 1 - \mathbb{P}[T_n \leq t].
+\end{aligned}
+$$
+
+which implies
+
+$$ 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] = \mathbb{P}[T_n \leq t], $$
+
+which motivates the following definition.
+
+**Definition IV.1.** Let $\mathbb{T}$ be a time scale, $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson Process with rate $\lambda > 0$. We say $F(t; n, \lambda)$ is the $\mathbb{T}$-Erlang cumulative distribution function with shape parameter
+---PAGE_BREAK---
+
+$n$ and rate $\lambda$ provided
+
+$$F(t; n, \lambda) = 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k].$$
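On $\mathbb{Z}$, for instance, Definition IV.1 gives the distribution of the time of the $n^{\text{th}}$ success in a Bernoulli sequence with $p = \lambda/(1+\lambda)$. A quick numerical sketch (the function name is ours):

```python
import math

def z_erlang_cdf(t, n, lam):
    """T-Erlang CDF on T = Z: F(t; n, lam) = 1 - sum_{k<n} P[S(t) = k],
    with the Z-Poisson probabilities C(t,k) p^k (1-p)^(t-k)."""
    p = lam / (1.0 + lam)
    return 1.0 - sum(math.comb(t, k) * p ** k * (1 - p) ** (t - k)
                     for k in range(n))

# F is a nondecreasing function of t that tends to 1:
vals = [z_erlang_cdf(t, 3, 1.0) for t in range(60)]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
assert abs(vals[-1] - 1.0) < 1e-6
```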
+
+From our derivation, it is clear that the $\mathbb{T}$-Erlang distribution models the time until the $n^{th}$ event in the $\mathbb{T}$-Poisson process. We would like to know the probability that the $n^{th}$ event is in any subset of $\mathbb{T}$. To this end, we introduce the $\mathbb{T}$-Erlang probability density function in the next definition.
+
+**Definition IV.2.** Let $\mathbb{T}$ be a time scale, $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson Process with rate $\lambda > 0$. We say $f(t; n, \lambda)$ is the $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$ provided
+
+$$f(t; n, \lambda) = - \sum_{k=0}^{n-1} [\mathbb{P}[S(t; \lambda) = k]]^\Delta,$$
+
+where the $\Delta$-differentiation is with respect to $t$.
+
+We want to show that $f(t; n, \lambda)$ can rightly be called a probability density with respect to some accumulation function. Thus, we have the following theorem.
+
+**Theorem IV.1.** Let $\mathbb{T}$ be a time scale. Let $F(t; n, \lambda)$ be a $\mathbb{T}$-Erlang cumulative distribution function with shape parameter $n$ and rate $\lambda$ and let $f(t; n, \lambda)$ be a $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$. Then
+
+$$\int_0^t f(\tau; n, \lambda) \Delta\tau = F(t; n, \lambda) \quad (IV.1)$$
+
+and in particular
+
+$$\int_{\mathbb{T}} f(\tau; n, \lambda) \Delta\tau = 1. \quad (IV.2)$$
+
+*Proof:* Implicit in the definition of the $\mathbb{T}$-Erlang probability distribution is a $\mathbb{T}$-Poisson process $S: \mathbb{T} \to \mathbb{N}^0$. By the assumption that
+
+$$\mathbb{P}[S(0; \lambda) = k] = \begin{cases} 1 & k = 0 \\ 0 & k > 0, \end{cases}$$
+
+we have,
+
+$$\begin{align*}
+\int_0^t f(\tau; n, \lambda) \Delta\tau &= \int_0^t -\sum_{k=0}^{n-1} \mathbb{P}[S(\tau; \lambda) = k]^{\Delta} \Delta\tau \\
+&= -\sum_{k=0}^{n-1} \int_0^t \mathbb{P}[S(\tau; \lambda) = k]^{\Delta} \Delta\tau \\
+&= -\sum_{k=0}^{n-1} \mathbb{P}[S(\tau; \lambda) = k]|_0^t \\
+&= -\sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] \\
+&\qquad + \sum_{k=0}^{n-1} \mathbb{P}[S(0; \lambda) = k] \\
+&= 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] \\
+&= F(t; n, \lambda),
+\end{align*}$$
+
+which proves (IV.1). To prove (IV.2), we note for all $k < n$,
+
+$$\lim_{t \to \infty} \mathbb{P}[S(t; \lambda) = k] = 0,$$
+
+by repeated application of L'Hôpital's rule for time scales [1] to (III.3). This fact proves (IV.2) by the same argument as in the proof of (IV.1). ■
+
+We note that the moments of the $\mathbb{T}$-Erlang distribution cannot in general be calculated explicitly without some knowledge of the time scale.
+
+## V. THE EXPONENTIAL DISTRIBUTION ON TIME SCALES
+
+Of particular interest to us is the $\mathbb{T}$-Erlang distribution with shape parameter 1. By the above discussion and equation (III.1), the probability density function of this distribution is given by
+
+$$f(t; 1, \lambda) = -[\mathbb{P}[S(t; \lambda) = 0]]^{\Delta} = -(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0).$$
+
+**Definition V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-Erlang random variable with shape parameter 1 and rate $\lambda$. Then we say $T$ is a $\mathbb{T}$-exponential random variable with rate $\lambda$.
+
+### A. The Expected Value
+
+The $\mathbb{T}$-exponential distribution gives us the rare opportunity to calculate a moment without any knowledge of the time scale.
+
+**Lemma V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$. Then
+
+$$\mathbb{E}(T) = \frac{1}{\lambda}.$$
+---PAGE_BREAK---
+
+**Proof:** Using integration by parts on time scales, we find
+
+$$
+\begin{align*}
+\mathbb{E}(T) &= \int_0^\infty t[-(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)]\Delta t \\
+&= -te_{\ominus\lambda}(t, 0)|_0^\infty + \int_0^\infty e_{\ominus\lambda}(\sigma(t), 0)\Delta t \\
+&= 0 + \int_0^\infty (1 + \mu(t)(\ominus\lambda)(t))e_{\ominus\lambda}(t, 0)\Delta t \\
+&= \int_0^\infty \frac{1}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\Delta t \\
+&= -\frac{1}{\lambda}\int_0^\infty \frac{-\lambda}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\Delta t \\
+&= -\frac{1}{\lambda}\int_0^\infty (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)\Delta t \\
+&= -\frac{1}{\lambda}e_{\ominus\lambda}(t, 0)|_0^\infty \\
+&= -\frac{1}{\lambda}[0 - 1] \\
+&= \frac{1}{\lambda},
+\end{align*}
+$$
+
+which proves our claim.
+
+■
+
+### B. On $\mathbb{R}$ and $\mathbb{Z}$
+
+We note that if $\mathbb{T} = \mathbb{R}$, then we have
+
+$$f(t; 1, \lambda) = \lambda e^{-\lambda t},$$
+
+which we recognize as the exponential distribution. By Lemma V.1, we find the mean of the exponential distribution is $1/\lambda$, which is well known.
+
+Now if $\mathbb{T} = \mathbb{Z}$, then we have
+
+$$f(t; 1, \lambda) = \frac{\lambda}{1+\lambda} \left(1 - \frac{\lambda}{1+\lambda}\right)^t = p(1-p)^t,$$
+
+where $p := \frac{\lambda}{1+\lambda}$. We recognize the above as the geometric distribution. By Lemma V.1, we find the mean of the geometric distribution is $1/\lambda = (1-p)/p$.
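Lemma V.1 can be sanity-checked here numerically: with $\lambda = 1/4$ we have $p = 1/5$, and truncating the series for $\mathbb{E}(T)$ at a large horizon (an approximation) recovers $1/\lambda = 4$:

```python
# Truncated-series check of E(T) = 1/lam for the Z-exponential
# (geometric) density f(t) = p (1-p)^t with p = lam / (1 + lam).
lam = 0.25
p = lam / (1 + lam)                       # p = 0.2
mean = sum(t * p * (1 - p) ** t for t in range(5000))
assert abs(mean - 1 / lam) < 1e-6         # 1/lam = (1 - p)/p = 4
```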
+
+### C. The $\omega$-Memorylessness Property
+
+Both the geometric and exponential distributions are completely characterized by the memorylessness property [8]. We recall that the memoryless property on $\mathbb{R}$ is the property that if $T$ is a continuous random variable, then for all $t, \tau \in \mathbb{R}$,
+
+$$\mathbb{P}[T > t + \tau | T > t] = \mathbb{P}[T > \tau]$$
+
+and that the memoryless property on $\mathbb{Z}$ is the property that if $T$ is a discrete random variable, then for all $t, \tau \in \mathbb{Z}$,
+
+$$\mathbb{P}[T > t + \tau | T > t] = \mathbb{P}[T > \tau].$$
+
+We would like to find conditions on the time scale $\mathbb{T}$ under which the $\mathbb{T}$-exponential distribution has this property. Suppose $\mathbb{T}$ is $\omega$-periodic, that is, if $t \in \mathbb{T}$ then $t+\omega \in \mathbb{T}$. Then we can define a property much like the memorylessness property.
+
+**Definition V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale. We say a probability distribution on $\mathbb{T}$ has the $\omega$-memorylessness property provided for all $t \in \mathbb{T}$,
+
+$$P(T > t + \omega \,|\, T > t) = P(T > \omega).$$
+
+We note that this definition generalizes the memorylessness property on $\mathbb{R}$ and $\mathbb{Z}$ since $\mathbb{R}$ and $\mathbb{Z}$ are $\omega$-periodic for any $\omega$ in $\mathbb{R}$ and $\mathbb{Z}$, respectively.
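The property is easy to observe numerically on $\mathbb{Z}$. Note that here $\mathbb{P}[T > t]$ denotes the $\Delta$-integral of the density from $t$ to $\infty$, which on $\mathbb{Z}$ includes the atom at $t$, so it equals $(1-p)^t$:

```python
# Check omega-memorylessness of the Z-exponential distribution, with
# the survival function P[T > t] = sum_{tau >= t} p (1-p)^tau = (1-p)^t.
lam, w = 0.5, 3
p = lam / (1 + lam)                        # p = 1/3
surv = lambda t: (1 - p) ** t
for t in range(25):
    cond = surv(t + w) / surv(t)           # P[T > t + w | T > t]
    assert abs(cond - surv(w)) < 1e-12
```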
+
+Let $\mathbb{T}$ be $\omega$-periodic and let $T$ be a $\mathbb{T}$-exponential random variable. Then we claim the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. To show this claim, we first prove two lemmas.
+
+**Lemma V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then for $t, t_0 \in \mathbb{T}$, $e_{\ominus\lambda}(t+\omega, t_0) = e_{\ominus\lambda}(t, t_0 - \omega)$.
+
+**Proof:** By the definition of the time scales exponential function,
+
+$$
+\begin{align*}
+e_{\ominus\lambda}(t+\omega, t_0) &= \exp\left(\int_{t_0}^{t+\omega} \frac{\log(1+(\ominus\lambda)(s)\mu(s))}{\mu(s)}\,\Delta s\right) \\
+&= \exp\left(\int_{t_0}^{t+\omega} \frac{\log\left(1+\frac{-\lambda\mu(s)}{1+\lambda\mu(s)}\right)}{\mu(s)}\,\Delta s\right) \\
+&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log\left(1+\frac{-\lambda\mu(\tau+\omega)}{1+\lambda\mu(\tau+\omega)}\right)}{\mu(\tau+\omega)}\,\Delta\tau\right) \\
+&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log\left(1+\frac{-\lambda\mu(\tau)}{1+\lambda\mu(\tau)}\right)}{\mu(\tau)}\,\Delta\tau\right) \\
+&= \exp\left(\int_{t_0-\omega}^{t} \frac{\log(1+(\ominus\lambda)(\tau)\mu(\tau))}{\mu(\tau)}\,\Delta\tau\right) \\
+&= e_{\ominus\lambda}(t, t_0 - \omega),
+\end{align*}
+$$
+
+where we use the fact that for $\omega$-periodic time scales $\mu(t+\omega) = \mu(t)$ for all $t \in \mathbb{T}$ and the change of variables $\tau = s-\omega$.
+
+■
+
+**Lemma V.3.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and $\lambda > 0$. Then for all $t \in \mathbb{T}$, $e_{\ominus\lambda}^{\Delta}(t+\omega, t) = 0$.
+
+**Proof:** By the product rule on time scales and Lemma V.2,
+
+$$
+\begin{align*}
+e_{\ominus\lambda}^{\Delta}(t+\omega,t) &= (e_{\ominus\lambda}(t+\omega,t_0)e_{\ominus\lambda}(t_0,t))^{\Delta} \\
+&= (e_{\ominus\lambda}(t,t_0-\omega)e_{\ominus\lambda}(t_0,t))^{\Delta} \\
+&= (e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0))^{\Delta} \\
+&= e_{\ominus\lambda}(\sigma(t), t_0-\omega)\lambda e_{\lambda}(t,t_0) \\
+&+ (\ominus\lambda)(t)e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\
+&= \lambda(1+(\ominus\lambda)(t)\mu(t))e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\
+&+ (\ominus\lambda)(t)e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\
+&= [-(\ominus\lambda)(t) + (\ominus\lambda)(t)]\,e_{\ominus\lambda}(t,t_0-\omega)e_{\lambda}(t,t_0) \\
+&= 0.
+\end{align*}
+$$
+
+■
+---PAGE_BREAK---
+
+The above lemmas allow us to prove the following result.
+
+**Theorem V.4.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then the $\mathbb{T}$-exponential distribution with rate $\lambda$ has the $\omega$-memorylessness property.
+
+*Proof:* Let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$. By Lemma V.2 and Lemma V.3,
+
+$$
+\begin{aligned}
+P(T > t + \omega | T > t) &= \frac{P(T > t + \omega)}{P(T > t)} \\
+&= \frac{\int_{t+\omega}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau}{\int_{t}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau} \\
+&= \frac{e_{\ominus\lambda}(t+\omega, 0)}{e_{\ominus\lambda}(t, 0)} \\
+&= e_{\ominus\lambda}(t+\omega, t) \\
+&= e_{\ominus\lambda}(\omega, 0) \\
+&= P(T > \omega),
+\end{aligned}
+$$
+
+since $e_{\ominus\lambda}(\omega, 0)$ is a constant independent of $t$ by Lemma V.3. Thus the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. ■
+
+## REFERENCES
+
+[1] M. Bohner and A. Peterson, *Advances in Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2003.
+
+[2] M. Bohner and A. Peterson, *Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2001.
+
+[3] W. Ching and M. Ng, *Markov chains: models, algorithms and applications*, Springer, New York, 2006.
+
+[4] John M. Davis, Ian A. Gravagne and Robert J. Marks II, "Bilateral Laplace Transforms on Time Scales: Convergence, Convolution, and the Characterization of Stationary Stochastic Time Series," Circuits, Systems, and Signal Processing, Birkhäuser, Boston, Volume 29, Issue 6 (2010), Page 1141. [DOI 10.1007/s00034-010-9196-2]
+
+[5] S. Kahraman, "Probability Theory Applications on Time Scales," M.S. Thesis, İzmir Institute of Technology, 2008.
+
+[6] Robert J. Marks II, Ian A. Gravagne and John M. Davis, "A Generalized Fourier Transform and Convolution on Time Scales," Journal of Mathematical Analysis and Applications Volume 340, Issue 2, 15 April 2008, Pages 901-919.
+
+[7] R.J. Marks II, *Handbook of Fourier Analysis and Its Applications*, Oxford University Press (2009).
+
+[8] A. Papoulis, *Probability, Random Variables and Stochastic Processes*, 3rd Edition, McGraw-Hill, New York (1991)
+
+[9] S. Sanyal, "Stochastic Dynamic Equations," Ph.D. Thesis, Missouri University of Science and Technology, 2008.
+
+[10] Baylor Time Scales Group, http://timescales.org/
\ No newline at end of file
diff --git a/samples/texts_merged/88513.md b/samples/texts_merged/88513.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3010a5d04a873f1f38f8b29d5eafe598a2856bb
--- /dev/null
+++ b/samples/texts_merged/88513.md
@@ -0,0 +1,161 @@
+
+---PAGE_BREAK---
+
+# VALIDATION OF THE GAMMA SUBMERSION CALCULATION OF THE REMOTE POWER PLANT MONITORING SYSTEM OF THE FEDERAL STATE OF BADEN-WÜRTTEMBERG
+
+Janis Lapins¹, Wolfgang Bernnat², Walter Scheuermann²
+
+¹Institute of Nuclear Technology and Energy Systems, Pfaffenwaldring 31, University of Stuttgart,
+Stuttgart, Germany
+
+²KE-Technologie GmbH, Stuttgart, Germany
+
+**Abstract:** The radioactive dispersion model used in the framework of the remote nuclear power plant monitoring system of the federal state of Baden-Württemberg applies the method of adjoint fluxes to calculate the sky shine from gamma rays, taking into account the gamma energy spectrum of the released nuclides. The spectrum is represented by 30 energy groups. A procedure has been developed to calculate the dose distribution on the ground in case of an accident with a release of radioactivity. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using the Los Alamos code MCNP6.
+
+**Key words:** adjoint method, MCNP, validation, gamma submersion
+
+## THE MODULAR DISPERSION TOOL “ABR”
+
+The federal state of Baden-Württemberg, Germany, operates a remote power plant monitoring system that has online access to the main safety-relevant parameters of the power plants as well as to the meteorological data provided by the German weather service (DWD). The data are sent to a server system that is operated for the Ministry of Environment of the federal state. The radioactive dispersion tool "ABR" is an integral part of this system and is used to calculate the radiological consequences of an accident, or to prepare and perform emergency exercises for civil protection. For a dispersion calculation, the ABR has to account for the following:
+
+* Interpolation of forecasted or measured precipitation to grid (precipitation module)
+
+* Calculation of the wind field from forecast or measurement on grid (terrain-following wind field module)
+
+* Release of the amount of radioactivity to the environment accounting for decay time of nuclides between shutdown of the reactor and the time of emission (release module)
+
+* Transport of radioactivity with wind, also washout and fallout due to deposition or rain, respectively (Lagrange particle transport module)
+
+* Sky shine to a detector 1 m above the ground (sky shine module)
+
+* Calculation of the doses from various exposure paths (gamma submersion, beta submersion, inhalation and ground shine) and for 25 organs and one effective dose (dose module)
+
+All of this is performed by the different modules of the programme system mentioned above. However, this paper focuses on the validation of the sky shine module in conjunction with the dose module, which calculates the gamma submersion by the method of adjoint fluxes [1]. For validation, the reference code system MCNP6 [2] is used; results produced with the ABR are benchmarked against it.
+
+## METHOD OF CALCULATION
+
+The dose calculation is performed applying the method of adjoint fluxes to calculate the gamma cloud radiation, taking into account the gamma-ray energy spectrum of the released nuclides, which comprises 30 energy groups. This procedure enables an efficient algorithm to calculate the dose rates or integrated doses in case of an accident with a release of radioactivity. The system is part of the emergency preparedness and response and is in online operational service. The adjoint fluxes were produced from results of MCNP6 [2] calculations. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using MCNP6. The
+---PAGE_BREAK---
+
+computational procedure comprises the following steps: From a point or a volume source, respectively, photons are started isotropically, either with the average energies of the 30 energy groups or with the distinct gamma spectra of single nuclides. Travelling through space, these photons collide with the atoms present in the air or the ground and are scattered until they reach the detector. With the help of point detectors, the flux density spectrum can be estimated, and, by making use of a dose/flux relation, the resulting gamma submersion dose on the ground can be determined.
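The forward Monte Carlo procedure just described can be pictured with a toy next-event (point-detector) estimator: every emission or collision event scores $e^{-\Sigma_T r}/(4\pi r^2)$ at the detector. The cross-section values below, the isotropic-scattering assumption and the neglect of the energy dependence are simplifications for illustration only; MCNP6 treats all of this in full, continuous-energy detail.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative one-group cross sections for air [1/m]; real values are
# energy dependent and handled by MCNP6 internally.
SIGMA_T = 1.0e-2   # total macroscopic cross section
SIGMA_S = 0.9e-2   # scattering part of it

def next_event(p, det):
    """Point-detector score of an event at p: attenuated 1/(4*pi*r^2) kernel."""
    r = np.linalg.norm(det - p)
    return np.exp(-SIGMA_T * r) / (4.0 * np.pi * r**2)

def estimate_flux(source, det, n_hist=20000, n_coll_max=5):
    """Crude next-event estimator for an isotropic point source with
    isotropic scattering (no energy dependence, infinite homogeneous air)."""
    total = 0.0
    for _ in range(n_hist):
        total += next_event(source, det)       # uncollided contribution
        p, w = source.copy(), 1.0
        for _ in range(n_coll_max):
            # sample an isotropic direction and an exponential free path
            mu = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            d = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            p = p + d * rng.exponential(1.0 / SIGMA_T)
            w *= SIGMA_S / SIGMA_T             # survival weight at the collision
            total += w * next_event(p, det)    # scattered contribution
    return total / n_hist
```

For a cloud, the emission position would additionally be sampled over the source volume; the per-history average then estimates the flux density spectrum that the dose/flux relation converts into a gamma submersion dose.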
+
+The backward method in the ABR uses the adjoint fluxes to evaluate the influence of a certain nuclide (spectrum) in the cloud at a certain distance from a detector point on the ground. To obtain these adjoint fluxes, a large number of calculations has been performed to determine the adjoint flux for all energy groups and distances (radii). The radii for which the fluxes were produced serve as support points; fluxes at radii between the support points are interpolated. Depending on the energy group under consideration, different exponential fitting functions account for both energy and distance. The energy deposited within human tissue is accounted for by age class, using the dose factors for organs and the effective dose provided by the German Radiation Protection Ordinance [5].
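The support-point interpolation can be sketched as follows. The tabulated values and the piecewise $a\,e^{-br}$ form are invented stand-ins: the actual ABR fitting functions are energy-group dependent and are not reproduced in this paper.

```python
import numpy as np

# Hypothetical support points: adjoint flux for one energy group at radii [m]
# (the numbers are illustrative only, not ABR data).
support_r   = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
support_phi = np.array([3.0e-6, 6.5e-7, 1.2e-7, 1.8e-8, 2.0e-9])

def adjoint_flux(r):
    """Interpolate between support radii assuming phi(r) = a * exp(-b * r)
    on each interval, i.e. log(phi) linear in r."""
    return np.exp(np.interp(r, support_r, np.log(support_phi)))
```

Interpolating the logarithm keeps the flux positive and reproduces the roughly exponential fall-off with distance between the tabulated radii.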
+
+## SOLUTION OF THE TRANSPORT EQUATION
+
+The transport equation in operator notation is
+
+$$M\Phi = Q \quad (1)$$
+
+with
+
+$$M = \vec{\Omega} \cdot \mathrm{grad} + \Sigma_T(E) - \int_{\vec{\Omega}'} \int_{E'} \Sigma_s(\vec{\Omega}' \rightarrow \vec{\Omega}, E' \rightarrow E)\, dE'\, d\Omega' \quad (2)$$
+
+In equation (1) above $Q(\vec{r}, \vec{\Omega}, E)$ represents the source vector and $\Phi(\vec{r}, \vec{\Omega}, E)$ represents the flux density vector, which both depend on the location $\vec{r}$, the direction $\vec{\Omega}$, and the energy $E$. In equation (2) the first term represents the leakage, $\Sigma_T(E)$ represents the total collision cross section, and the integral represents the in-scattering from any direction $\vec{\Omega}'$ and energy $E'$ into the direction $\vec{\Omega}$ and energy $E$ of interest.
+
+After solving the transport equation, reaction rates, e.g. dose rates $\bar{D}$, can be calculated with the help of a response function $R(\vec{r}, E)$ such that the condition
+
+$$\bar{D} = \langle \Phi R \rangle = \int_V \int_E \Phi(\vec{r}, E) R(\vec{r}, E)\, d\vec{r}\, dE \quad (3)$$
+
+is valid. The adjoint equation to the equation (1) is
+
+$$M^+ \Phi^+ = R \quad (4)$$
+
+The adjoint equation has to be defined in a way that the condition
+
+$$\langle \Phi^+ M \Phi \rangle = \langle \Phi M^+ \Phi^+ \rangle \quad (5)$$
+
+holds. If this is the case, the following is also valid:
+
+$$\bar{D} = \langle \Phi^+ M \Phi \rangle = \langle \Phi^+ Q \rangle = \langle \Phi M^+ \Phi^+ \rangle = \langle \Phi R \rangle = \bar{D} \quad (6)$$
+
+That is, instead of eq. (1), the adjoint equation (4) can be solved, and the dose is then obtained from the inner product $\langle \Phi^+ Q \rangle$ of eq. (6). The solution of the adjoint transport equation provides a relation between the photon emission of a certain energy/energy range of a regarded point/volume and the dose at a computational point.
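In a discretised setting the duality in eqs. (5) and (6) is easy to check numerically: the adjoint of a real matrix is its transpose, so a forward solve of $M\Phi = Q$ and an adjoint solve of $M^T\Phi^+ = R$ must yield the same dose, $\langle \Phi R \rangle = \langle \Phi^+ Q \rangle$. The matrix and vectors below are arbitrary illustrative data, not a real transport discretisation.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6
M = 3.0 * np.eye(n) + 0.1 * rng.random((n, n))  # diagonally dominant, invertible
Q = rng.random(n)                                # discrete source vector
R = rng.random(n)                                # discrete response function

# Forward problem:  M Phi = Q,    then D = <Phi, R>
Phi = np.linalg.solve(M, Q)
D_forward = Phi @ R

# Adjoint problem:  M^T Phi+ = R, then D = <Phi+, Q>
Phi_adj = np.linalg.solve(M.T, R)
D_adjoint = Phi_adj @ Q
```

The payoff is the same as in the ABR: once $\Phi^+$ is known, the dose for any new source distribution $Q$ is a single inner product, with no further transport solve.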
+
+## CALCULATION OF ADJOINT FLUXES WITH MCNP
+
+The calculation of the gamma submersion as a consequence of radioactive nuclides in the radioactive cloud can be achieved if the spatial and energy distribution of the gamma sources in relation to certain computational points on the ground is known, together with the composition of air and soil. The computation necessitates the solution of the photon transport equation with respect to the energy dependence of the possible reactions of photons with atoms in air or soil (photo-electric effect, Compton effect, pair production etc.). The solution of the transport equation yields photon spectra for the computational points that enable dose calculations. Relevant dose/flux relations are defined by the ICRP [3]; for photons, ICRP 74 can be applied. The dose/flux relation is presented in **Figure 2**. Because Monte Carlo codes use a continuous energy dependence of the cross sections, a direct solution of the adjoint transport equation is not possible with them. Nevertheless, these codes can be used to estimate the contribution of a source point/volume to the dose at a computational point, see **Figure 1**. To do this, starting from the source
+---PAGE_BREAK---
+
+point/volume, a sufficiently large number of photon trajectories has to be simulated, and their contribution to the dose has to be calculated. To compute the dose rate at a computational point of interest, the relevant contributions from all source points/volumes of the whole emission field have to be summed up, such that the dose at the computational point (x, y, z) can be estimated with
+
+$$
+D(x, y, z) = \sum_q \sum_g \Phi_g^+ (r_q, z_q - z) \cdot Q_g(x_q, y_q, z_q) \cdot V_q \quad (7)
+$$
+
+with
+
+$$
+r_q = \sqrt{(x_q - x)^2 + (y_q - y)^2} \quad (8)
+$$
+
+$\Phi_g^+$ as the adjoint flux depending on the radius and the height, $Q_g$ as the specific source concentration, and $V_q$ as the volume that contains the concentration.
+
+**Figure 1.** Source point/volume $Q(r_q, z_q)$ and computational point of interest $P(x, y, z)$ in dose calculations
+
+The index $q$ corresponds to the source; the index $g$ corresponds to the energy group or the gamma line of the photon emission energy of the source. The coordinates $x, y, z$ correspond to the computational point of interest. The coordinates $x_q, y_q$ (resp. $r_q$), $z_q$ correspond to the centre point of the source volume $V_q$, see Figure 1.
+
+**Figure 2.** Dose/flux relation for gamma energies from 0.01 – 10 MeV in 0.07 cm depth of the body according to ICRP 74 [3]
+
+## TWO SCENARIOS FOR DOSE COMPARISONS: A HOMOGENEOUS AND A NON-HOMOGENEOUS RADIOACTIVE CLOUD OF REFERENCE NUCLIDES
+
+For the comparison of the gamma submersion dose rates, two scenarios have been defined. The base scenario assumes a homogeneous concentration distribution of the three reference nuclides Xe-133, Cs-137 and I-131 with a flat topography, for both the ABR and MCNP. The dispersion module of the ABR is not used; instead, the concentrations are input directly into the sky shine and dose modules of the ABR. The computational domain and the boundary conditions for this scenario are presented in Table 1. A sketch of the scenario is shown in Figure 5.
+
+An advanced scenario with a 3-D cloud is also presented. For this scenario, a realistic concentration distribution has been generated with the ABR, i.e. a release height of 150 metres with a wind speed of 4 m/s at 10 m height, increasing with height, for diffusion category D (neutral conditions). The released activity is transported with the wind. After one time step the doses are compared. Since MCNP cannot simulate the transport of radioactive particles with the wind, the concentration distribution of the isotope regarded is imported into MCNP via an interface. The results for the dose calculation are also compared. The radioactive cloud together with the wind speed is presented in
+---PAGE_BREAK---
+
+Figure 6. For this paper, the shape of the cloud is regarded as given, since the dose rates, not the cloud shape, are subject to comparison. The boundary conditions and general assumptions for this case are given in Table 2.
+
+The gamma lines of the reference nuclides are shown in **Figure 3** and **Figure 4** [4]. These gamma emissions are accounted for in the 30-group spectrum of the ABR with their respective intensities. For the MCNP calculation, the gamma energies and their respective intensities are input directly.
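Collapsing the line spectra onto the group structure is a simple binning step. The 30 group boundaries below are an assumption (the ABR group structure is not listed in this paper); the example uses only the well-known principal lines of Cs-137 (via Ba-137m) and I-131.

```python
import numpy as np

# Assumed group structure: 30 logarithmically spaced groups, 10 keV - 10 MeV
# (the range of the ICRP 74 dose/flux relation in Figure 2).
edges = np.logspace(np.log10(0.01), np.log10(10.0), 31)   # group edges [MeV]

def to_groups(lines):
    """Collapse discrete gamma lines (energy [MeV], intensity) onto the groups."""
    spectrum = np.zeros(len(edges) - 1)
    for energy, intensity in lines:
        g = np.searchsorted(edges, energy, side="right") - 1
        spectrum[g] += intensity
    return spectrum

cs137 = to_groups([(0.662, 0.851)])   # principal line of Cs-137 (Ba-137m)
i131  = to_groups([(0.364, 0.815)])   # principal line of I-131
```

A full nuclide would pass all of its NUDAT lines to `to_groups`; the total intensity per group is then used as the group-wise source strength.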
+
+**Table 1.** Simulation set-up for homogeneous cloud from 120 – 160 m
+
+| Constant source | ABR | MCNP6 |
+|---|---|---|
+| Computational area (x, y, z) | 20 km x 20 km x 1 km | 20 km x 20 km x 1 km |
+| Mesh number (x, y, z) | 100 x 100 x 25 | - |
+| Mesh size in x, y, z direction | 200 m, 200 m, 40 m | - |
+| Cloud height | 120 – 160 m | 120 – 160 m |
+| Activity in cloud [Bq m⁻³] | | |
+| Cs-137 | 6.0E+04 | 6.0E+04 |
+| Xe-133 | 2.0E+10 | 2.0E+10 |
+| I-131 | 1.0E+06 | 1.0E+06 |
+
+**Table 2.** Simulation set-up for a non-homogeneous cloud
+
+| Realistic source | ABR | MCNP6 |
+|---|---|---|
+| Computational area (x, y, z) | 20 km x 20 km x 1 km | 20 km x 20 km x 1 km |
+| Mesh number (x, y, z) | 100 x 100 x 25 | 100 x 100 x 25 |
+| Mesh size in x, y, z direction | 200 m, 200 m, 40 m | 200 m, 200 m, 40 m |
+| Emission height | 150 m | 150 m |
+| Total activity released [Bq] | | Activity imported via interface |
+| Cs-137 | 6.0E+09 | 6.0E+09 |
+| Xe-133 | 2.0E+17 | 2.0E+17 |
+| I-131 | 1.0E+10 | 1.0E+10 |
+| Wind speed at 10 m height | 4 m/s | - |
+| Diffusion category | D | - |
+| Emission duration | 1 hour | - |
+
+**Figure 3.** Gamma lines and intensities of Cs-137 and Xe-133 (NUDAT 2.6) [4]
+
+**Figure 4.** Gamma lines of I-131 (NUDAT 2.6) [4]
+---PAGE_BREAK---
+
+**Figure 5.** Sketch of the scenario with homogeneous emission layer and exemplary paths from the cloud to the detector (direct, indirect via air and ground reflection, or both)
+
+**Figure 6.** Non-homogeneous distribution of aerosols after 1 hour with a wind speed of 4 m/s at a height of 10 m, simulated with the ABR. The concentration is exported to MCNP
+
+## RESULTS OF COMPARISON
+
+The results of the comparison are presented in the tables below. One can see that the results are in good agreement for the three reference nuclides.
+
+**Table 3.** Results for the base case with homogeneous cloud
+
+| Nuclide | MCNP6 [Sv/h] | ABR [Sv/h] | Ratio ABR/MCNP6 |
+|---|---|---|---|
+| Cs-137 | 9.31E-07 | 8.33E-07 | 0.89 |
+| Xe-133 | 1.36E-02 | 1.30E-02 | 0.96 |
+| I-131 | 1.01E-05 | 1.03E-05 | 1.02 |
+
+**Table 4.** Results for the advanced case with non-homogenous cloud
+
+| Nuclide | MCNP6 [Sv/h] | ABR [Sv/h] | Ratio ABR/MCNP6 |
+|---|---|---|---|
+| Cs-137 | 1.42E-10 | 1.36E-10 | 0.96 |
+| Xe-133 | 4.49E-04 | 4.9E-04 | 1.09 |
+| I-131 | 1.49E-10 | 1.57E-10 | 1.05 |
+
+## CONCLUSION
+
+The comparison of the gamma submersion dose rates shows good agreement between the ABR and MCNP6 for the cases analysed. For the base case, the maximum deviation among the three reference nuclides is -11%, found for the dose rate of Cs-137.
+
+For the non-homogeneous concentration distribution of the reference nuclides, the agreement is better than 10%. Keeping in mind that a real dispersion calculation involves a multitude of uncertainties, e.g. the emitted nuclide vector, the meteorological prediction and the transport of the cloud, the agreement presented here for the dose rates of the individual reference nuclides can be regarded as excellent.
+
+## REFERENCES
+
+[1] Sohn, G. Pfister, W. Bernnat, G. Hehn: Dose, ein neuer Dosismodul zur Berechnung der effektiven Dosis von 21 Organdosen für die Dosispfade Submersion, Inhalation und Bodenstrahlung (German: DOSE, a new dose module for the calculation of the effective dose from 21 organ doses for the exposure paths submersion, inhalation and ground radiation), IKE 6 UM 3, Nov. 1994.
+
+[2] D. B. Pelowitz: MCNP6™ User's Manual, Version 1.0, LA-CP-13-00634, Rev. 0 (2013).
+
+[3] ICRP, 1996: Conversion Coefficients for use in Radiological Protection against External Radiation. ICRP Publication 74, Ann. ICRP 26 (3-4).
+
+[4] NUDAT 2.6, National Nuclear Data Centre, Brookhaven National Laboratory.
+
+[5] Entwurf zur AVV zu §47 Strahlenschutzverordnung, Anhang 3 (German: General Administrative Regulation for §47 of the German Radiation Protection Ordinance, Appendix 3), (2005).
\ No newline at end of file