Add files using upload-large-folder tool
- .gitattributes +19 -0
- samples/pdfs/1168240.pdf +3 -0
- samples/pdfs/1836869.pdf +3 -0
- samples/pdfs/1885128.pdf +3 -0
- samples/pdfs/199837.pdf +3 -0
- samples/pdfs/230879.pdf +3 -0
- samples/pdfs/3193892.pdf +3 -0
- samples/pdfs/3327355.pdf +3 -0
- samples/pdfs/3495399.pdf +3 -0
- samples/pdfs/3884483.pdf +3 -0
- samples/pdfs/393503.pdf +3 -0
- samples/pdfs/4239587.pdf +3 -0
- samples/pdfs/4971236.pdf +3 -0
- samples/pdfs/500594.pdf +3 -0
- samples/pdfs/6016935.pdf +3 -0
- samples/pdfs/6218816.pdf +3 -0
- samples/pdfs/6426180.pdf +3 -0
- samples/pdfs/6813453.pdf +3 -0
- samples/pdfs/7089754.pdf +3 -0
- samples/pdfs/7569662.pdf +3 -0
- samples/texts_merged/1117773.md +241 -0
- samples/texts_merged/1131204.md +426 -0
- samples/texts_merged/174916.md +469 -0
- samples/texts_merged/213815.md +271 -0
- samples/texts_merged/250922.md +0 -0
- samples/texts_merged/2515306.md +523 -0
- samples/texts_merged/2590883.md +504 -0
- samples/texts_merged/2763593.md +364 -0
- samples/texts_merged/276850.md +386 -0
- samples/texts_merged/2779026.md +595 -0
- samples/texts_merged/2909063.md +56 -0
- samples/texts_merged/305525.md +295 -0
- samples/texts_merged/3147359.md +589 -0
- samples/texts_merged/3226827.md +194 -0
- samples/texts_merged/3251599.md +679 -0
- samples/texts_merged/3295535.md +0 -0
- samples/texts_merged/3438890.md +226 -0
- samples/texts_merged/3450399.md +67 -0
- samples/texts_merged/3461249.md +272 -0
- samples/texts_merged/3594993.md +309 -0
- samples/texts_merged/3723390.md +333 -0
- samples/texts_merged/4364106.md +764 -0
- samples/texts_merged/4409661.md +0 -0
- samples/texts_merged/450057.md +0 -0
- samples/texts_merged/4808858.md +28 -0
- samples/texts_merged/4872902.md +230 -0
- samples/texts_merged/4994833.md +529 -0
- samples/texts_merged/503850.md +169 -0
- samples/texts_merged/5396754.md +251 -0
- samples/texts_merged/5647681.md +487 -0
.gitattributes
CHANGED
@@ -380,3 +380,22 @@ samples/pdfs/2234121.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/7621530.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/4150074.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/5687555.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/7569662.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/3327355.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/4971236.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/1836869.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/3884483.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/199837.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/1168240.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/6016935.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/1885128.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/393503.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/3193892.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/6813453.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/6426180.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/500594.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/3495399.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/6218816.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/4239587.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/7089754.pdf filter=lfs diff=lfs merge=lfs -text
+samples/pdfs/230879.pdf filter=lfs diff=lfs merge=lfs -text
samples/pdfs/1168240.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25aeac281a45f2b60b35010a6b83cfeb3df054ea48abc28250479b648a75c694
+size 164000

samples/pdfs/1836869.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b3b420a7da248f295a8ea9698ff516d67161b540fb727297ee611beec80f736
+size 609919

samples/pdfs/1885128.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ccc17ede8e64c5a57e7ece9afe260e5fd63d40134ddb002627eb5b95daf26ce
+size 488805

samples/pdfs/199837.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06775c3b500f12171f1657b77930bbb60a3b2700f7483eae231bcab49e7a055d
+size 444863

samples/pdfs/230879.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0778cac5815650c4ee155089b69599e1b24b58535889ba2d99789053f9260a05
+size 366665

samples/pdfs/3193892.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:670f4c970cf14ea7130d1db817a38b6d97b6f56ba261f68ea05550fa57f2d4e4
+size 180892

samples/pdfs/3327355.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b5a6a355f367a37b1f1520ee23591aa795d65ebad17ad72630b2844b2c567dc
+size 694508

samples/pdfs/3495399.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70ccd2ebefeabb840e2eab6fc67948571c0c725b145ccdd34adfa1376121ee0a
+size 148653

samples/pdfs/3884483.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d5b428e4718f6eea9e4f339288c737ce6712885c927aa18629e1556b1cd84c8
+size 8815754

samples/pdfs/393503.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cbc18f5f59e1b64ffed302c73064ccd608f869a88770d1aa3122aa2b13dc380
+size 365246

samples/pdfs/4239587.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88d0a3d596c851224e35d178b7b49f8acc0fe42c9d19034200ed17ec481717c2
+size 873703

samples/pdfs/4971236.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a06a42b1dbde87db16afbdc6a60b6184cd33dcc82fdd6f30c28c8f2500efc083
+size 372574

samples/pdfs/500594.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:102ddf114ae1020b97bbd55520214c30550c64760e5608327445f0ac12a22ff8
+size 185636

samples/pdfs/6016935.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c225b82666897c57e9af8a6b7a6873024157e8cd9ac160e12f93503b7e62eda6
+size 1249342

samples/pdfs/6218816.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8adeb393ba9273c37d14720f4e5066c92aec95de2fe62d78d09e2f7a4d48415
+size 1641505

samples/pdfs/6426180.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf60e08e68a938f240186c4137ea053dd0e9c2e8cd1a24389d10f138deaa29ca
+size 188785

samples/pdfs/6813453.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbe4eb2560244abb0e94352d0cada9d71d7c9f5ea8e5166a4d23011bba532384
+size 645965

samples/pdfs/7089754.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4d59d451e710cebc205fd0dd2dc402e51c7670433f17a536b6d426e5c43c489
+size 447634

samples/pdfs/7569662.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4502ab7df7ec80f5adf2f232488fc92bbd4ae5bafec8ebe85abbc2d8e47eb94
+size 494074
samples/texts_merged/1117773.md
ADDED
@@ -0,0 +1,241 @@
+
+---PAGE_BREAK---
+
+Resolving electron transfer kinetics in porous electrodes via diffusion-less cyclic voltammetry
+
+Shida Yang,<sup>ac</sup> Yang Li,<sup>b</sup> Qing Chen<sup>ab*</sup>
+
+<sup>a</sup>Department of Chemistry, <sup>b</sup>Department of Mechanical and Aerospace Engineering, and <sup>c</sup>The Energy Institute, HKUST, Hong Kong.
+
+*Corresponding Author E-mail: chenqing@ust.hk (Qing Chen)
+---PAGE_BREAK---
+
+**Figure S1.** Background current on the Ti foil as assembled in the cell with the active electrolyte but without the carbon felt: (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. The currents are at least two orders of magnitude lower than those measured with the carbon felt in all three cases, so no background subtraction is necessary for the analysis.
+---PAGE_BREAK---
+
+**Figure S2.** Electrochemical surface area measurements of the carbon felt electrode in the electrolytes of (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. We scan CV over potential ranges with no visible Faradaic current and plot the average currents against the scan rates. The slopes are divided by a specific capacitance of 20 µF/cm² to derive the areas.
+---PAGE_BREAK---
+
+**Figure S3.** X-ray photoelectron spectra of different carbon felts.
+
+**Table S1.** O/C ratio of different carbon felts and the corresponding standard rate constants $k^0$ of VO<sup>2+</sup>/VO<sub>2</sub><sup>+</sup> on these electrodes.
+
+<table><thead><tr><th>Carbon Felt</th><th>C ratio/%</th><th>O ratio/%</th><th>O/C</th><th>k<sup>0</sup> (cm/s)</th></tr></thead><tbody><tr><td>CeTech CF020, 400 °C</td><td>92.51</td><td>7.49</td><td>0.081</td><td>1.56±0.15 × 10<sup>-6</sup></td></tr><tr><td>SGL GFA6EA, 400 °C</td><td>90.14</td><td>9.86</td><td>0.109</td><td>1.642±0.072 × 10<sup>-7</sup></td></tr><tr><td>SGL GFA6EA, 450 °C</td><td>89.34</td><td>10.66</td><td>0.119</td><td>2.095±0.518 × 10<sup>-7</sup></td></tr><tr><td>SGL GFA6EA, 500 °C</td><td>88.93</td><td>11.07</td><td>0.124</td><td>2.455±0.216 × 10<sup>-8</sup></td></tr></tbody></table>
+---PAGE_BREAK---
+
+**Figure S4.** Additional results of the RFB tests: (a) electrochemical impedance spectroscopy (EIS) and (b) IR-corrected polarization curves of the VRFB with CF baked at different temperatures.
+
+**Table S2.** Polarization resistance of the VRFB with different CFs.
+
+<table><thead><tr><th>SGL CF</th><th>R<sub>u</sub>/Ω cm²</th><th>Polarization resistance/Ω cm²</th><th>Corrected polarization resistance/Ω cm²</th></tr></thead><tbody><tr><td>400 °C</td><td>0.395</td><td>0.487</td><td>0.092</td></tr><tr><td>450 °C</td><td>0.421</td><td>0.540</td><td>0.119</td></tr><tr><td>500 °C</td><td>0.450</td><td>0.664</td><td>0.214</td></tr></tbody></table>
+---PAGE_BREAK---
+
+**Table S3.** Summary of standard rate constants *k* of VO<sup>2+</sup>/VO<sub>2</sub><sup>+</sup> reported in the literature.
+
+<table><thead><tr><th>Electrodes</th><th>Treatment</th><th>Method</th><th>Area</th><th>k (cm/s)</th><th>Ref</th></tr></thead><tbody><tr><td>SGL Carbon GFD4.6</td><td>Baked at 400 °C for 12 hrs</td><td>Symmetrical RFB</td><td>Electrochemical</td><td>2.38×10<sup>-6</sup></td><td>[1]</td></tr><tr><td>Disk made from carbon felt (SigraCELL GFA6, SGL Carbon)</td><td>Baked at 400 °C for 30 hrs</td><td>Linear sweep voltammetry (LSV)</td><td>Geometric</td><td>1.6-8.8×10<sup>-8</sup></td><td>[2]</td></tr><tr><td>Ultra-microelectrode made from carbon felts (GrafTech)</td><td>Electrochemical oxidation and reduction</td><td>LSV and EIS</td><td>Electrochemical</td><td>1.7-17×10<sup>-5</sup></td><td>[3]</td></tr><tr><td>Carbon felt (Sigratherm GFA5)</td><td>Not mentioned</td><td>Galvanic charging/discharging</td><td>Calculated</td><td>3×10<sup>-7</sup></td><td>[4]</td></tr><tr><td>Carbon felt (Liao Yang Carbon Fiber Sci-tech. Co., Ltd., China)</td><td>None</td><td>CV and EIS</td><td>Geometric</td><td>1.84×10<sup>-3</sup></td><td>[5]</td></tr><tr><td>Carbon paper (29, SGL group)</td><td>Baked at 450 °C for 30 hrs</td><td>Polarization curve and EIS in a RFB</td><td>Electrochemical</td><td>0.2-1.8×10<sup>-7</sup></td><td>[6]</td></tr><tr><td>Carbon paper (10AA, SGL group)</td><td>None</td><td>Symmetrical RFB</td><td>Gas adsorption</td><td>2.05×10<sup>-6</sup></td><td>[7]</td></tr><tr><td>Carbon paper (Shanghai Hesen, Ltd., HCP030 N)</td><td>Electrochemical oxidation and reduction</td><td>CV</td><td>Gas adsorption</td><td>1.04×10<sup>-3</sup></td><td>[8]</td></tr></tbody></table>
+
+SI references:
+
+[1] M. V. Holland-Cunz, J. Friedl, U. Stimming, *J. Electroanal. Chem.* **2018**, *819*, 306-311.
+---PAGE_BREAK---
+
+[2] Y. Li, J. Parrondo, S. Sankarasubramanian, V. Ramani, *J. Phys. Chem. C* **2019**, *123*, 6370-6378.
+
+[3] M. A. Miller, A. Bourke, N. Quill, J. S. Wainright, R. P. Lynch, D. N. Buckley, R. F. Savinell, *J. Electrochem. Soc.* **2016**, *163*, A2095.
+
+[4] A. A. Shah, M. J. Watt-Smith, F. C. Walsh, *Electrochim. Acta* **2008**, *53*, 8087-8100.
+
+[5] W. Li, Z. Zhang, Y. Tang, H. Bian, T.-W. Ng, W. Zhang, C.-S. Lee, *Adv. Sci.* **2016**, *3*, 1500276.
+
+[6] K. V. Greco, A. Forner-Cuenca, A. Mularczyk, J. Eller, F. R. Brushett, *ACS Appl. Mater. Interfaces* **2018**, *10*, 44430-44442.
+
+[7] D. Aaron, C.-N. Sun, M. Bright, A. B. Papandrew, M. M. Mench, T. A. Zawodzinski, *ECS Electrochem. Lett.* **2013**, *2*, A29.
+
+[8] X. W. Wu, T. Yamamura, S. Ohta, Q. X. Zhang, F. C. Lv, C. M. Liu, K. Shirasaki, I. Satoh, T. Shikama, D. Lu, S. Q. Liu, *J. Appl. Electrochem.* **2011**, *8*.
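The electrochemical surface area (ECSA) procedure described for Figure S2 above can be sketched as follows. This is a minimal illustration, not the authors' code: the scan rates and average capacitive currents are hypothetical values; only the 20 µF/cm² specific capacitance comes from the text.

```python
import numpy as np

# In a non-Faradaic potential window the capacitive current obeys
# i = C_dl * (scan rate), so the slope of average current vs. scan rate
# is the double-layer capacitance C_dl. Dividing by a specific
# capacitance of 20 uF/cm^2 (as in Figure S2) gives the area.
scan_rates = np.array([0.005, 0.010, 0.020, 0.050])  # V/s (hypothetical)
avg_currents = np.array([0.11, 0.21, 0.42, 1.04])    # mA (hypothetical)

slope_mA_per_Vps = np.polyfit(scan_rates, avg_currents, 1)[0]  # C_dl in mF
C_dl_uF = slope_mA_per_Vps * 1e3                     # mF -> uF
area_cm2 = C_dl_uF / 20.0                            # 20 uF/cm^2
print(f"estimated ECSA: {area_cm2:.0f} cm^2")
```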
samples/texts_merged/1131204.md
ADDED
@@ -0,0 +1,426 @@
+
+---PAGE_BREAK---
+
+# Appendices
+
+## A. Derivations and Additional Methodology
+
+### A.1. Generalized PointConv Trick
+
+The matrix notation becomes very cumbersome for manipulating these higher order n-dimensional arrays, so we will instead use index notation with Latin indices $i, j, k$ indexing points, Greek indices $\alpha, \beta, \gamma$ indexing feature channels, and $c$ indexing the coordinate dimensions, of which there are $d = 3$ for PointConv and $d = \dim(G) + 2\dim(Q)$ for LieConv.³ As the objects are not geometric tensors but simply n-dimensional arrays, we will make no distinction between upper and lower indices. After expanding into indices, it should be assumed that all values are scalars, and that any free indices can range over all of the values.
+
+Let $k_{ij}^{\alpha,\beta}$ be the output of the MLP $k_\theta$, which takes $\{a_{ij}^c\}$ as input and acts independently over the locations $i, j$. For PointConv, the input is $a_{ij}^c = x_i^c - x_j^c$, and for LieConv the input is $a_{ij}^c = \text{Concat}([\log(v_j^{-1}u_i), q_i, q_j])^c$.
+
+We wish to compute
+
+$$h_i^\alpha = \sum_{j,\beta} k_{ij}^{\alpha,\beta} f_j^\beta. \quad (12)$$
+
+In Wu et al. (2019), it was observed that since $k_{ij}^{\alpha,\beta}$ is the output of an MLP, $k_{ij}^{\alpha,\beta} = \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma$ for some final weight matrix $W$ and penultimate activations $s_{i,j}^\gamma$ ($s_{i,j}^\gamma$ is simply the result of the MLP after the last nonlinearity). With this in mind, we can rewrite (12) as
+
+$$h_i^\alpha = \sum_{j,\beta} \Big( \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma \Big) f_j^\beta \quad (13)$$
+
+$$= \sum_{\beta, \gamma} W_\gamma^{\alpha, \beta} \Big( \sum_j s_{i,j}^\gamma f_j^\beta \Big). \quad (14)$$
+
+In practice, the intermediate number of channels is much less than the product of $c_{in}$ and $c_{out}$, $|\gamma| < |\alpha||\beta|$, so this reordering of the computation leads to a massive reduction in both memory and compute. Furthermore, $b_i^{\gamma,\beta} = \sum_j s_{i,j}^\gamma f_j^\beta$ can be implemented with regular matrix multiplication, as can $h_i^\alpha = \sum_{\beta,\gamma} W_\gamma^{\alpha,\beta} b_i^{\gamma,\beta}$ by flattening $(\beta, \gamma)$ into a single axis $\varepsilon$: $h_i^\alpha = \sum_\varepsilon W^{\alpha,\varepsilon} b_i^\varepsilon$.
+
+The sum over index $j$ can be restricted to a subset $j(i)$ (such as a chosen neighborhood) by computing $f_j^\beta$ at each of the required indices, padding to the size of the maximum subset with zeros, and computing $b_i^{\gamma,\beta} = \sum_j s_{i,j(i)}^\gamma f_{j(i)}^\beta$ using dense matrix multiplication. Masking out the values at indices $i$ and $j$ is also necessary when examples with different numbers of points are batched together using zero padding. The generalized PointConv trick can thus be applied in batch mode with a varied number of points per example and a varied number of points per neighborhood.
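The reordering in Eqs. (12)-(14) can be checked with a minimal NumPy sketch. The shapes and variable names below are hypothetical illustration choices, not from the paper's code:

```python
import numpy as np

# Hypothetical shapes: n output points i, m neighbors j,
# c_in = |beta|, c_mid = |gamma| intermediate channels, c_out = |alpha|.
n, m, c_in, c_mid, c_out = 4, 5, 8, 3, 6
rng = np.random.default_rng(0)
s = rng.normal(size=(n, m, c_mid))         # penultimate activations s_{ij}^gamma
W = rng.normal(size=(c_mid, c_out, c_in))  # final weights W_gamma^{alpha,beta}
f = rng.normal(size=(m, c_in))             # input features f_j^beta

# Naive path: materialize the full kernel k_{ij}^{alpha,beta} (Eq. 12).
k = np.einsum('gab,ijg->ijab', W, s)       # O(n*m*c_out*c_in) memory
h_naive = np.einsum('ijab,jb->ia', k, f)

# Trick: contract over j first (Eq. 14); k is never built.
b = np.einsum('ijg,jb->igb', s, f)         # b_i^{gamma,beta}
h_fast = np.einsum('gab,igb->ia', W, b)

assert np.allclose(h_naive, h_fast)
```

Both paths give the same $h_i^\alpha$; the second only ever stores arrays of size $n \cdot |\gamma| \cdot |\beta|$ rather than $n \cdot m \cdot |\alpha| \cdot |\beta|$.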
+
+### A.2. Abelian G and Coordinate Transforms
+
+For Abelian groups that cover $\mathcal{X}$ in a single orbit, the computation is very similar to ordinary Euclidean convolution. Defining $a_i = \log(u_i)$ and $b_j = \log(v_j)$, and using the fact that $e^{-b_j} e^{a_i} = e^{a_i - b_j}$, we have $\log(v_j^{-1} u_i) = (\log \circ \exp)(a_i - b_j)$. Defining $\tilde{f} = f \circ \exp$ and $\tilde{h} = h \circ \exp$, we get
+
+$$\tilde{h}(a_i) = \frac{1}{n} \sum_{j \in \text{nbhd}(i)} (\tilde{k}_{\theta} \circ \text{proj})(a_i - b_j) \tilde{f}(b_j), \quad (15)$$
+
+where $\text{proj} = \log \circ \exp$ projects to the image of the logarithm map. Apart from a projection and a change to logarithmic coordinates, this is equivalent to Euclidean convolution in a vector space with the dimensionality of the group. When the group is Abelian and $\mathcal{X}$ is a homogeneous space, the dimension of the group equals the dimension of the input. In these cases we have a trivial stabilizer group $H$ and a single origin $o$, so we can view $f$ and $h$ as acting on the input $x_i = u_i o$.
+
+This directly generalizes some of the existing coordinate transform methods for achieving equivariance from the literature, such as log polar coordinates for rotation and scaling equivariance (Esteves et al., 2017), and hyperbolic coordinates for squeeze and scaling equivariance.
+
+**Log Polar Coordinates:** Consider the Abelian Lie group of positive scalings and rotations, $G = \mathbb{R}^* \times SO(2)$, acting on $\mathbb{R}^2$. Elements of the group $M \in G$ can be expressed as a $2 \times 2$ matrix
+
+$$M(r, \theta) = \begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}$$
+
+for $r \in \mathbb{R}^+$ and $\theta \in \mathbb{R}$. The matrix logarithm is⁴
+
+$$\log\left(\begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}\right) = \begin{bmatrix} \log(r) & -(\theta \bmod 2\pi) \\ \theta \bmod 2\pi & \log(r) \end{bmatrix},$$
+
+or more compactly $\log(M(r, \theta)) = \log(r)I + (\theta \bmod 2\pi)J$, which is $[\log(r), \theta \bmod 2\pi]$ in the basis $[I, J]$ for the Lie algebra. It is clear that $\text{proj} = \log \circ \exp$ is simply mod $2\pi$ on the $J$ component.
+
+As $\mathbb{R}^2$ is a homogeneous space of $G$, one can choose the global origin $o = [1, 0] \in \mathbb{R}^2$. A little algebra shows that lifting to the group yields the transformation $u_i = M(r_i, \theta_i)$ for each point $p_i = u_i o$, where $r = \sqrt{x^2 + y^2}$ and $\theta = \operatorname{atan2}(y, x)$ are the polar coordinates of the point $p_i$.
+
+³$\dim(Q)$ is the dimension of the space into which $Q$, the orbit identifiers, are embedded.
+
+⁴Here $\theta \bmod 2\pi$ is defined to mean $\theta + 2\pi n$ for the integer $n$ such that the value is in $(-\pi, \pi)$, consistent with the principal matrix logarithm; $(\theta + \pi)\,\%\,2\pi - \pi$ in programming notation.
+---PAGE_BREAK---
+
+Observe that the logarithm of $v_j^{-1} u_i$ has a simple expression highlighting the fact that it is invariant to scale and rotational transformations of the elements,
+
+$$\log(v_j^{-1} u_i) = \log(M(r_j, \theta_j)^{-1} M(r_i, \theta_i)) = \log(r_i/r_j)\, I + (\theta_i - \theta_j \bmod 2\pi)\, J.$$
+
+Now writing out our Monte Carlo estimate of the integral,
+
+$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j),\ \theta_i - \theta_j \bmod 2\pi) f(p_j),$$
+
+which is a discretization of the log polar convolution from Esteves et al. (2017). This can be trivially extended to encompass cylindrical coordinates with the group $T(1) \times \mathbb{R}^* \times \mathrm{SO}(2)$.
+
+**Hyperbolic coordinates:** For another nontrivial example, consider the group of scalings and squeezes $G = \mathbb{R}^* \times \mathrm{SQ}$ acting on the positive orthant $\mathcal{X} = \{(x, y) \in \mathbb{R}^2 : x > 0, y > 0\}$. Elements of the group can be expressed as the product of a squeeze mapping and a scaling
+
+$$M(r, s) = \begin{bmatrix} s & 0 \\ 0 & 1/s \end{bmatrix} \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} = \begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}$$
+
+for any $r, s \in \mathbb{R}^{+}$. As the group is Abelian, the logarithm splits nicely in terms of the two generators $I$ and $A$:
+
+$$\log\left(\begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}\right) = (\log r)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + (\log s)\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$
+
+Again $\mathcal{X}$ is a homogeneous space of $G$, and we choose a single origin $o = [1, 1]$. A little algebra shows that $M(r_i, s_i)\, o = p_i$, where $r = \sqrt{xy}$ and $s = \sqrt{x/y}$ are the hyperbolic coordinates of $p_i$.
+
+Expressed in the basis $B = [I, A]$ for the Lie algebra above, we see that
+
+$$\log(v_j^{-1} u_i) = \log(r_i / r_j)\, I + \log(s_i / s_j)\, A,$$
+
+yielding the expression for the convolution
+
+$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j), \log(s_i/s_j)) f(p_j),$$
+
+which is equivariant to squeezes and scalings.
+
+As demonstrated, equivariance to Abelian groups that contain the input space in a single orbit can be achieved with a simple coordinate transform; however, our approach generalizes to groups both 'larger' and 'smaller' than the input space, including coordinate transform equivariance as a special case.
+
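The log polar case above can be checked numerically. In this sketch, `lift_log_polar` and `rel_log` are hypothetical helper names; the assertion confirms that $\log(v_j^{-1}u_i)$ is unchanged when both points undergo the same rotation and scaling:

```python
import numpy as np

def lift_log_polar(p):
    # Lift a point in R^2 \ {0} to (log r, theta), the Lie-algebra
    # coordinates of M(r, theta) in the basis [I, J].
    x, y = p
    return np.array([0.5 * np.log(x**2 + y**2), np.arctan2(y, x)])

def rel_log(p_i, p_j):
    # log(v_j^{-1} u_i) in the basis [I, J]:
    # [log(r_i/r_j), (theta_i - theta_j) mod 2pi], wrapped into
    # (-pi, pi] as in the principal matrix logarithm.
    d = lift_log_polar(p_i) - lift_log_polar(p_j)
    d[1] = (d[1] + np.pi) % (2 * np.pi) - np.pi
    return d

# Invariance check: apply one rotation+scaling M to both points.
p_i, p_j = np.array([1.0, 2.0]), np.array([3.0, 0.5])
c, s, r = np.cos(1.1), np.sin(1.1), 2.5
M = r * np.array([[c, -s], [s, c]])
assert np.allclose(rel_log(M @ p_i, M @ p_j), rel_log(p_i, p_j))
```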
| 116 |
+
### A.3. Sufficient Conditions for Geodesic Distance
|
| 117 |
+
|
| 118 |
+
In general, the function $d(u, v) = \| \log(v^{-1}u) \|_F$, defined
|
| 119 |
+
on the domain of GL(d) covered by the exponential map,
|
| 120 |
+
satisfies the first three conditions of a distance metric but
|
| 121 |
+
not the triangle inequality, making it a semi-metric:
|
| 122 |
+
|
| 123 |
+
1. $d(u, v) \geq 0$
|
| 124 |
+
|
| 125 |
+
2. $d(u, v) = 0 \Leftrightarrow \log(u^{-1}v) = 0 \Leftrightarrow u = v$
|
| 126 |
+
|
| 127 |
+
3. $d(u, v) = \|\log(v^{-1}u)\| = \|- \log(u^{-1}v)\| = d(v, u).$
|
| 128 |
+
|
| 129 |
+
However for certain subgroups of GL(d) with additional
|
| 130 |
+
structure, the triangle inequality holds and the function is
|
| 131 |
+
the distance along geodesics connecting group elements u
|
| 132 |
+
and v according to the metric tensor
|
| 133 |
+
|
| 134 |
+
$$\langle A, B\rangle_u := \mathrm{Tr}(A^T u^{-T} u^{-1} B), \quad (16)$$
|
| 135 |
+
|
| 136 |
+
where $-T$ denotes inverse and transpose.
|
| 137 |
+
|
| 138 |
+
Specifically, if the subgroup $G$ is in the image of the exp :
|
| 139 |
+
$g \to G$ map and each infinitesimal generator commutes with
|
| 140 |
+
its transpose: $[A, A^T] = 0$ for $\forall A \in g$, then $d(u, v) =$
|
| 141 |
+
$\|\log(v^{-1}u)\|_F$ is the geodesic distance between $u, v$.

**Geodesic Equation:** Geodesics of (16) satisfying $\nabla_{\dot{\gamma}}\dot{\gamma} = 0$ can equivalently be derived by minimizing the energy functional

$$E[\gamma] = \int_{\gamma} \langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma}\, dt = \int_{0}^{1} \mathrm{Tr}(\dot{\gamma}^{T} \gamma^{-T} \gamma^{-1} \dot{\gamma})\, dt$$

using the calculus of variations. Minimizing curves $\gamma(t)$ connecting elements $u$ and $v$ in $G$ ($\gamma(0) = v$, $\gamma(1) = u$) satisfy

$$0 = \delta E = \delta \int_0^1 \mathrm{Tr}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \dot{\gamma})\, dt.$$

Noting that $\delta(\gamma^{-1}) = -\gamma^{-1}\delta\gamma\,\gamma^{-1}$ and the linearity of the trace,

$$2 \int_0^1 \operatorname{Tr} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \dot{\gamma}) - \operatorname{Tr} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \gamma\, \gamma^{-1} \dot{\gamma})\, dt = 0.$$

Using the cyclic property of the trace and integrating by parts, we have that

$$-2 \int_0^1 \operatorname{Tr} \left( \left(\frac{d}{dt}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1}) + \gamma^{-1} \dot{\gamma}\, \dot{\gamma}^T \gamma^{-T} \gamma^{-1}\right) \delta\gamma \right) dt = 0,$$

where the boundary term $\operatorname{Tr}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}\delta\gamma)|_{0}^{1}$ vanishes since $(\delta\gamma)(0) = (\delta\gamma)(1) = 0.$

As $\delta\gamma$ may be chosen to vary arbitrarily along the path, $\gamma$ must satisfy the geodesic equation:

$$\frac{d}{dt}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}) + \gamma^{-1}\dot{\gamma}\dot{\gamma}^T\gamma^{-T}\gamma^{-1} = 0. \quad (17)$$

**Solutions:** When $A = \log(v^{-1}u)$ satisfies $[A, A^T] = 0$, the curve $\gamma(t) = v \exp(t \log(v^{-1}u))$ is a solution to the geodesic equation (17). Clearly $\gamma$ connects $u$ and $v$: $\gamma(0) = v$ and $\gamma(1) = u$. Plugging $\dot{\gamma} = \gamma A$ into the left hand side of equation (17), we have

$$
\begin{align*}
\frac{d}{dt}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}) + \gamma^{-1}\dot{\gamma}\dot{\gamma}^T\gamma^{-T}\gamma^{-1} &= \frac{d}{dt}(A^T \gamma^{-1}) + AA^T \gamma^{-1} \\
&= -A^T \gamma^{-1} \dot{\gamma} \gamma^{-1} + AA^T \gamma^{-1} \\
&= -A^T A \gamma^{-1} + AA^T \gamma^{-1} \\
&= [A, A^T]\gamma^{-1} = 0.
\end{align*}
$$

**Length of $\gamma$:** The length of the curve $\gamma$ connecting $u$ and $v$ is $\|\log(v^{-1}u)\|_F$:

$$
\begin{align*}
L[\gamma] &= \int_{\gamma} \sqrt{\langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma}}\, dt = \int_{0}^{1} \sqrt{\operatorname{Tr}(\dot{\gamma}^{T}\gamma^{-T}\gamma^{-1}\dot{\gamma})}\, dt \\
&= \int_{0}^{1} \sqrt{\operatorname{Tr}(A^{T}A)}\, dt = \|A\|_{F} = \|\log(v^{-1}u)\|_{F}.
\end{align*}
$$

Of the Lie groups that we consider in this paper, all of which have a single connected component, the groups $G = T(d)$, $SO(d)$, $\mathbb{R}^* \times SO(d)$, and $\mathbb{R}^* \times SQ$ satisfy the property $[\mathfrak{g}, \mathfrak{g}^T] = 0$; however, the $SE(d)$ groups do not.
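
These properties can be verified numerically; a short scipy sketch for SO(3), where the generators are skew-symmetric so $[A, A^T] = 0$ holds automatically (function names are our own):

```python
import numpy as np
from scipy.linalg import expm, logm

def geodesic(v, u, t):
    """gamma(t) = v exp(t log(v^{-1} u)), connecting gamma(0)=v to gamma(1)=u."""
    A = logm(np.linalg.solve(v, u))
    return np.real(v @ expm(t * A))

def so3_exp(w):
    """Exponentiate the skew-symmetric generator built from axis-angle vector w."""
    W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    return expm(W)

u, v = so3_exp([0.3, -0.2, 0.5]), so3_exp([-0.1, 0.4, 0.2])
assert np.allclose(geodesic(v, u, 0.0), v)   # endpoint gamma(0) = v
assert np.allclose(geodesic(v, u, 1.0), u)   # endpoint gamma(1) = u
g = geodesic(v, u, 0.5)
assert np.allclose(g @ g.T, np.eye(3))       # midpoint stays in SO(3)
```

The curve stays on the group for all $t$, which is exactly why the matrix-exponential form is convenient.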

## A.4. Equivariant Subsampling

Even if all distances and neighborhoods are precomputed, the cost of computing equation (6) for $i = 1, \dots, N$ is still quadratic, $O(nN) = O(N^2)$, because the number of points $n$ in each neighborhood grows linearly with $N$ as $f$ is more densely evaluated. So that our method can scale to handle a large number of points, we show two ways to equivariantly subsample the group elements, which we can use both for the locations at which we evaluate the convolution and for the locations that we use for the Monte Carlo estimator. Since the elements are spaced irregularly, we cannot readily use the coset pooling method described in Cohen and Welling (2016a); instead we can perform:

**Random Selection:** Randomly selecting a subset of $p$ points from the original $n$ preserves the original sampling distribution, so it can be used directly.

**Farthest Point Sampling:** Given a set of group elements $S = \{u_i\}_{i=1}^k \subset G$, we can select a subset $S_p^*$ of size $p$ that maximizes the minimum distance between any two elements in that subset,

$$ \mathrm{Sub}_p(S) := S_p^* = \arg \max_{S_p \subset S} \min_{u,v \in S_p: u \neq v} d(u,v), \quad (18) $$

farthest point sampling on the group. Acting on a set of elements, $\mathrm{Sub}_p : S \mapsto S_p^*$, farthest point subsampling is equivariant: $\mathrm{Sub}_p(wS) = w\,\mathrm{Sub}_p(S)$ for any $w \in G$. That is, applying a group element to each of the elements does not change the chosen indices in the subsampled set, because the distances are left invariant: $d(u_i, u_j) = d(wu_i, wu_j)$.
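
The exact maximin objective of (18) is combinatorial; a common approximation is the greedy farthest-point heuristic. A sketch of that heuristic (our own illustration, not necessarily the paper's implementation), which also demonstrates the invariance of the selected indices for the translation group T(2):

```python
import numpy as np

def greedy_fps(points, p, dist):
    """Greedy farthest point sampling: repeatedly add the element farthest
    from the already-selected set. Returns the selected indices.
    (A standard heuristic for the maximin objective of Eq. (18).)"""
    selected = [0]                        # deterministic start for reproducibility
    d_min = np.array([dist(points[0], q) for q in points])
    while len(selected) < p:
        i = int(np.argmax(d_min))         # farthest from the current selection
        selected.append(i)
        d_min = np.minimum(d_min, [dist(points[i], q) for q in points])
    return selected

# For T(2), group elements are vectors and the left-invariant distance
# is the Euclidean distance between them.
dist = lambda a, b: np.linalg.norm(a - b)
rng = np.random.default_rng(0)
S = rng.normal(size=(50, 2))
w = np.array([3.0, -1.0])                 # a group element acting by translation
# Translating every element leaves all pairwise distances, and hence
# the chosen indices, unchanged.
assert greedy_fps(S, 10, dist) == greedy_fps(S + w, 10, dist)
```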

Now we can use either of these methods for $\mathrm{Sub}_p(\cdot)$ to equivariantly subsample the quadrature points in each neighborhood used to estimate the integral to a fixed number $p$,

$$ h_i = \frac{1}{p} \sum_{j \in \mathrm{Sub}_p(\mathrm{nbhd}(u_i))} k_\theta(v_j^{-1} u_i) f_j. \quad (19) $$

Doing so reduces the cost of estimating the convolution from $O(N^2)$ to $O(pN)$, ignoring the cost of computing $\mathrm{Sub}_p$ and $\{\mathrm{nbhd}(u_i)\}_{i=1}^N$.
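
As an illustration of the subsampled estimator (19) for the translation group, where $v_j^{-1}u_i$ reduces to the difference $u_i - v_j$; the Gaussian kernel below is a stand-in for the learned MLP $k_\theta$, and the function name is our own:

```python
import numpy as np

def lieconv_estimate(u, v, f, nbhd_radius, p, rng):
    """Monte Carlo estimate h_i = (1/p) sum_j k(v_j^{-1} u_i) f_j, Eq. (19),
    with each neighborhood subsampled to at most p elements at random."""
    k = lambda a: np.exp(-np.sum(a**2, axis=-1))   # stand-in for k_theta
    h = np.zeros(len(u))
    for i, ui in enumerate(u):
        nbhd = np.flatnonzero(np.linalg.norm(v - ui, axis=1) < nbhd_radius)
        sub = rng.choice(nbhd, size=min(p, len(nbhd)), replace=False)
        h[i] = np.mean(k(ui - v[sub]) * f[sub])    # O(p) work per output location
    return h

rng = np.random.default_rng(0)
v = rng.normal(size=(200, 2))      # input group elements (translations)
f = rng.normal(size=200)           # input features (one channel)
u = v[:50]                         # query locations
h = lieconv_estimate(u, v, f, nbhd_radius=1.0, p=25, rng=rng)
assert h.shape == (50,)
```

The total work is $O(pN)$ rather than $O(N^2)$, matching the cost analysis above.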

## A.5. Review and Implications of Noether's Theorem

In the Hamiltonian setting, Noether's theorem relates the continuous symmetries of the Hamiltonian of a system to conserved quantities, and has been deeply impactful in the understanding of classical physics. We give a review of Noether's theorem, loosely following Butterfield (2006).

### More on Hamiltonian Dynamics

As introduced earlier, the Hamiltonian $H(z) = H(q,p)$ is a function acting on the state (we will ignore time dependence for now). More formally, it can be viewed as a function on the cotangent bundle, $(q,p) = z \in M = T^*C$, where $C$ is the coordinate configuration space, and this is the setting for Hamiltonian dynamics.

In general, on a manifold $\mathcal{M}$, a vector field $X$ can be viewed as an assignment of a directional derivative along $\mathcal{M}$ for each point $z \in \mathcal{M}$. It can be expanded in a basis using coordinate charts, $X = \sum_{\alpha} X^{\alpha} \partial_{\alpha}$, where $\partial_{\alpha} = \frac{\partial}{\partial z^{\alpha}}$, and it acts on functions $f$ by $X(f) = \sum_{\alpha} X^{\alpha} \partial_{\alpha} f$. In the chart, each of the components $X^{\alpha}$ is a function of $z$.

In Hamiltonian mechanics, for two functions on $\mathcal{M}$ there is the Poisson bracket⁵, which can be written in terms of the canonical coordinates $q_i, p_i$:

$$ \{f,g\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i}. $$

The Poisson bracket can be used to associate each function $f$ to a vector field

$$ X_f = \{f, \cdot\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial}{\partial p_i}, $$

which specifies, by its action on another function $g$, the directional derivative of $g$ along $X_f$: $X_f(g) = \{f,g\}$. Vector fields that can be written in this way are known as Hamiltonian vector fields, and the Hamiltonian dynamics of the system is a special example, $X_H = \{H, \cdot\}$. In canonical coordinates $z = (q, p)$, this is the vector field $X_H = F(z) = J\nabla_z H$ (i.e. the symplectic gradient, as discussed in Section 6.1). Making this connection clear, a given scalar quantity evolves through time as $\dot{f} = \{H, f\}$. But this bracket can also be used to evaluate the rate of change of a scalar quantity along the flows of vector fields other than the dynamics, such as the flows of continuous symmetries.

⁵Here we take the definition of the Poisson bracket to be the negative of the usual definition in order to streamline notation.

### Noether's Theorem

The flow $\phi_{\lambda}^X$ by $\lambda \in \mathbb{R}$ of a vector field $X$ is the set of integral curves: the unique solution of the system of ODEs $\dot{z}^\alpha = X^\alpha$ with initial condition $z$, evaluated at parameter value $\lambda$, or more abstractly the iterated application of $X$: $\phi_{\lambda}^X = \exp(\lambda X)$. Continuous symmetry transformations are the transformations that can be written as the flow $\phi_{\lambda}^X$ of a vector field. The directional derivative characterizes how a function such as the Hamiltonian changes along the flow of $X$, and is a special case of the Lie derivative $\mathcal{L}$:

$$ \mathcal{L}_X H = \frac{d}{d\lambda} (H \circ \phi_\lambda^X)\big|_{\lambda=0} = X(H). $$

A scalar function is invariant to the flow of a vector field if and only if the Lie derivative is zero:

$$ H(\phi_{\lambda}^{X}(z)) = H(z) \Leftrightarrow \mathcal{L}_{X}H = 0. $$

For all transformations that respect the Poisson bracket⁶, which we add as a requirement for a symmetry, the vector field $X$ is (locally) Hamiltonian and there exists a function $f$ such that $X = X_f = \{f, \cdot\}$. If $M$ is a contractible domain such as $\mathbb{R}^{2n}$, then $f$ is globally defined. For every continuous symmetry $\phi_{\lambda}^{X_f}$,

$$ \mathcal{L}_{X_f} H = X_f(H) = \{f, H\} = -\{H, f\} = -X_H(f), $$

by the antisymmetry of the Poisson bracket. So if $\phi_{\lambda}^X$ is a symmetry of $H$, then $X = X_f$ for some function $f$, and $H(\phi_{\lambda}^{X_f}(z)) = H(z)$ implies

$$ \mathcal{L}_{X_f} H = 0 \Leftrightarrow \mathcal{L}_{X_H} f = 0 \Leftrightarrow f(\phi_{\tau}^{X_H}(z)) = f(z), $$

or in other words $f(z(t+\tau)) = f(z(t))$ and $f$ is a conserved quantity of the dynamics.
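
This equivalence can be observed numerically: a Hamiltonian that is invariant to translations conserves total linear momentum along its flow. A small self-contained sketch for two particles in 1D coupled by a spring, integrated with RK4 (this toy system is our own illustration, not one of the paper's benchmarks):

```python
import numpy as np

# H(q, p) = p1^2/2 + p2^2/2 + (q1 - q2)^2/2 depends on q only through
# q1 - q2, so it is invariant to the translation flow q -> q + lambda.
def dynamics(z):
    q1, q2, p1, p2 = z
    # zdot = J grad H: qdot = dH/dp, pdot = -dH/dq
    return np.array([p1, p2, -(q1 - q2), (q1 - q2)])

def rk4_step(z, h):
    k1 = dynamics(z)
    k2 = dynamics(z + h / 2 * k1)
    k3 = dynamics(z + h / 2 * k2)
    k4 = dynamics(z + h * k3)
    return z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.array([0.5, -0.3, 0.2, 0.1])
P0 = z[2] + z[3]                           # conserved quantity f = p1 + p2
for _ in range(1000):
    z = rk4_step(z, 0.01)
assert abs((z[2] + z[3]) - P0) < 1e-10     # total momentum is conserved
```

Because the momentum updates cancel exactly, the conservation holds to floating-point precision, even though the individual momenta oscillate.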

⁶More precisely, the Poisson bracket can be formulated in a coordinate-free manner in terms of a symplectic two-form $\omega$: $\{f,g\} = \omega(X_f, X_g)$. In the original coordinates $\omega = \sum_i dp_i \wedge dq^i$, and in this coordinate basis $\omega$ is represented by the matrix $J$ from earlier. The dynamics $X_H$ are determined by $dH = \omega(X_H, \cdot) = \iota_{X_H}\omega$. Transformations which respect the Poisson bracket are symplectic, $\mathcal{L}_{X}\omega = 0$. With Cartan's magic formula, this implies that $d(\iota_{X}\omega) = 0$. Because the form $\iota_{X}\omega$ is closed, Poincaré's lemma implies that locally $\iota_{X}\omega = df$ for some function $f$, and hence $X = X_f$ is (locally) a Hamiltonian vector field. For more details see Butterfield (2006).

This implication goes both ways: if $f$ is conserved then $\phi_{\lambda}^{X_f}$ is necessarily a symmetry of the Hamiltonian, and if $\phi_{\lambda}^{X_f}$ is a symmetry of the Hamiltonian then $f$ is conserved.

### Hamiltonian vs Dynamical Symmetries

So far we have been discussing Hamiltonian symmetries: invariances of the Hamiltonian. But in the study of dynamical systems there is a related concept of dynamical symmetries, symmetries of the equations of motion. This notion is also captured by the Lie derivative, but between vector fields. A dynamical system $\dot{z} = F(z)$ has a continuous dynamical symmetry $\phi_{\lambda}^X$ if the flow along the dynamical system commutes with the symmetry:

$$ \phi_{\lambda}^{X}(\phi_{t}^{F}(z)) = \phi_{t}^{F}(\phi_{\lambda}^{X}(z)). \quad (20) $$

This means that applying the symmetry transformation to the state and then flowing along the dynamical system is equivalent to flowing first and then applying the symmetry transformation. Equation (20) is satisfied if and only if the Lie derivative is zero:

$$ \mathcal{L}_X F = [X, F] = 0, $$

where $[\cdot,\cdot]$ is the Lie bracket on vector fields.⁷

For Hamiltonian systems, every Hamiltonian symmetry is also a dynamical symmetry. In fact, it is not hard to show that the Lie and Poisson brackets are related,

$$ [X_f, X_g] = X_{\{f,g\}}, $$

and this directly shows the implication. If $X_f$ is a Hamiltonian symmetry, then $\{f, H\} = 0$, and so

$$ [X_f, F] = [X_f, X_H] = X_{\{f,H\}} = 0. $$

However, the converse is not true: dynamical symmetries of a Hamiltonian system are not necessarily Hamiltonian symmetries and thus might not correspond to conserved quantities. Furthermore, even if the system has a dynamical symmetry which is the flow $\phi_{\lambda}^X$ along a Hamiltonian vector field $X = X_f = \{f, \cdot\}$, but the dynamics $F$ are not Hamiltonian, then the dynamics will not conserve $f$ in general. Both the symmetry and the dynamics must be Hamiltonian for the conservation laws to hold.

This fact is demonstrated by Figure 9, where the dynamics of the (non-Hamiltonian) equivariant LieConv-T(2) model have a T(2) dynamical symmetry with the generators $\partial_x, \partial_y$, which are Hamiltonian vector fields for $f = p_x$, $f = p_y$, and yet linear momentum is not conserved by the model.

⁷The Lie bracket on vector fields produces another vector field and is defined by how it acts on functions: for any smooth function $g$, $[X, F](g) = X(F(g)) - F(X(g))$.

Figure 9. Equivariance alone is not sufficient; for conservation we need both to model $\mathcal{H}$ and to incorporate the given symmetry. For comparison, LieConv-T(2) is T(2)-equivariant but models $F$, and HLieConv-Trivial models $\mathcal{H}$ but is not T(2)-equivariant. Only HLieConv-T(2) conserves linear momentum.

### Conserving Linear and Angular Momentum

Consider a system of $N$ interacting particles described in Euclidean coordinates with positions and momenta $q_{im}, p_{im}$, such as the multi-body spring problem. Here the first index $i = 1, 2, 3$ indexes the spatial coordinates and the second $m = 1, 2, \dots, N$ indexes the particles. We will use the bolded notation $\mathbf{q}_m, \mathbf{p}_m$ to suppress the spatial indices, while still indexing the particles by $m$ as in Section 6.1.

The total linear momentum along a given direction $\mathbf{n}$ is

$$ \mathbf{n} \cdot \mathbf{P} = \sum_{i,m} n_i p_{im} = \mathbf{n} \cdot \Big(\sum_m \mathbf{p}_m\Big). $$

Expanding the Poisson bracket, the Hamiltonian vector field is

$$ X_{nP} = \{\mathbf{n} \cdot \mathbf{P}, \cdot\} = \sum_{i,m} n_i \frac{\partial}{\partial q_{im}} = \mathbf{n} \cdot \sum_{m} \frac{\partial}{\partial \mathbf{q}_{m}}, $$

which has the flow $\phi_{\lambda}^{X_{nP}}(\mathbf{q}_m, \mathbf{p}_m) = (\mathbf{q}_m + \lambda\mathbf{n}, \mathbf{p}_m)$, a translation of all particles by $\lambda\mathbf{n}$. So our model of the Hamiltonian conserves linear momentum if and only if it is invariant to a global translation of all particles (e.g. T(2) invariance for a 2D spring system).

The total angular momentum along a given axis $\mathbf{n}$ is

$$ \mathbf{n} \cdot \mathbf{L} = \mathbf{n} \cdot \sum_m \mathbf{q}_m \times \mathbf{p}_m = \sum_{i,j,k,m} \epsilon_{ijk} n_i q_{jm} p_{km} = \sum_m \mathbf{p}_m^T A \mathbf{q}_m, $$

where $\epsilon_{ijk}$ is the Levi-Civita symbol and we have defined the antisymmetric matrix $A$ by $A_{kj} = \sum_i \epsilon_{ijk} n_i$. Expanding the Poisson bracket,

$$ X_{nL} = \{\mathbf{n} \cdot \mathbf{L}, \cdot\} = \sum_{j,k,m} A_{kj} q_{jm} \frac{\partial}{\partial q_{km}} - A_{jk} p_{jm} \frac{\partial}{\partial p_{km}} $$

$$ X_{nL} = \sum_m \Big(\mathbf{q}_m^T A^T \frac{\partial}{\partial \mathbf{q}_m} + \mathbf{p}_m^T A^T \frac{\partial}{\partial \mathbf{p}_m}\Big), $$

where the second line follows from the antisymmetry of $A$. We can find the flow of $X_{nL}$ from the differential equations $\dot{\mathbf{q}}_m = A\mathbf{q}_m$, $\dot{\mathbf{p}}_m = A\mathbf{p}_m$, which have the solution

$$ \phi_{\theta}^{X_{nL}}(\mathbf{q}_m, \mathbf{p}_m) = (e^{\theta A}\mathbf{q}_m, e^{\theta A}\mathbf{p}_m) = (R_{\theta}\mathbf{q}_m, R_{\theta}\mathbf{p}_m), $$

where $R_\theta$ is a rotation about the axis $\mathbf{n}$ by the angle $\theta$, which follows from the Rodrigues rotation formula. Therefore, the flow of the Hamiltonian vector field of angular momentum along a given axis is a global rotation of the positions and momenta of all particles about that axis. Again, the dynamics of a neural network modeling a Hamiltonian conserve total angular momentum if and only if the network is invariant to simultaneous rotation of all particle positions and momenta.
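
As a numeric sanity check of this construction (a sketch with $\mathbf{n}$ taken to be the z-axis; the variable names are our own):

```python
import numpy as np
from scipy.linalg import expm

n = np.array([0.0, 0.0, 1.0])                # rotation axis
eps = np.zeros((3, 3, 3))                    # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
A = np.einsum('ijk,i->kj', eps, n)           # A_{kj} = sum_i eps_{ijk} n_i
assert np.allclose(A, -A.T)                  # A is antisymmetric

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
assert np.allclose(expm(theta * A), Rz)      # e^{theta A} is rotation by theta about n

# The flow rotates positions and momenta simultaneously, and n.L is invariant.
q, p = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
q2, p2 = expm(theta * A) @ q, expm(theta * A) @ p
L0, L1 = n @ np.cross(q, p), n @ np.cross(q2, p2)
assert np.isclose(L0, L1)                    # n.L unchanged by its own flow
```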

# B. Additional Experiments

## B.1. Equivariance Demo

While (7) shows that the convolution estimator is equivariant, we have conducted the ablation study below examining the equivariance of the network empirically. We trained LieConv (Trivial, T(3), SO(3), SE(3)) models on a limited subset of 20k training examples (out of 100k) of the HOMO task on QM9 without any data augmentation. We then evaluate these models on a series of modified test sets where each example has been randomly transformed by an element of the given group. In Table 4 the rows are the models configured with a given group equivariance, and the columns N/G denote no augmentation at training time and transformations from G applied to the test set (the test translations in T(3) and SE(3) are sampled from a normal distribution with standard deviation 0.5).

<table><thead><tr><th>Model</th><th>N/N</th><th>N/T(3)</th><th>N/SO(3)</th><th>N/SE(3)</th></tr></thead><tbody><tr><td>Trivial</td><td><b>173</b></td><td>183</td><td>239</td><td>243</td></tr><tr><td>T(3)</td><td><b>113</b></td><td><b>113</b></td><td>133</td><td>133</td></tr><tr><td>SO(3)</td><td><b>159</b></td><td>238</td><td><b>160</b></td><td>240</td></tr><tr><td>SE(3)</td><td><b>62</b></td><td><b>62</b></td><td><b>63</b></td><td><b>62</b></td></tr></tbody></table>

Table 4. Test MAE (in meV) on the HOMO test set randomly transformed by elements of $\mathcal{G}$. Despite no data augmentation (N), $\mathcal{G}$-equivariant models perform as well on $\mathcal{G}$-transformed test data.

Notably, the performance of the LieConv-G models does not degrade when random G transformations are applied to the test set. Also, in this low data regime, the added equivariances are especially important.

## B.2. RotMNIST Comparison

While the RotMNIST dataset consists of 12k rotated MNIST digits, it is standard to separate out 10k to be used for training and 2k for validation. However, in TI-Pooling and E(2)-Steerable CNNs, it appears that after hyperparameters were tuned, the validation set is folded back into the training set to be used as additional training data, a common approach used on other datasets. Although in Table 1 we only use 10k training points, in the table below we report the performance with and without augmentation trained on the full 12k examples.

<table><thead><tr><th>Aug</th><th>Trivial</th><th>T<sub>y</sub></th><th>T(2)</th><th>SO(2)</th><th>SO(2)×R<sup>*</sup></th><th>SE(2)</th></tr></thead><tbody><tr><td>SO(2)</td><td>1.44</td><td>1.35</td><td>1.32</td><td>1.27</td><td>1.13</td><td>1.13</td></tr><tr><td>None</td><td>1.60</td><td>2.64</td><td>2.34</td><td>1.26</td><td>1.25</td><td>1.15</td></tr></tbody></table>

Table 5. Classification error (%) on the RotMNIST dataset for LieConv with different group equivariances and baselines.

# C. Implementation Details

## C.1. Practical Considerations

While the high-level summary of the lifting procedure (Algorithm 1) and the LieConv layer (Algorithm 2) provides a useful conceptual understanding of our method, there are some additional details that are important for a practical implementation.

1. According to Algorithm 2, $a_{ij}$ is computed in every LieConv layer, which is both highly redundant and costly. In practice, we precompute $a_{ij}$ once after lifting and feed it through the network with layers operating on the state $(\{a_{ij}\}_{i,j=1}^{N,N}, \{f_i\}_{i=1}^N)$ instead of $\{(u_i, q_i, f_i)\}_{i=1}^N$. Doing so requires fixing the group elements that will be used at each layer for a given forward pass.

2. In practice, only $p$ elements of $\mathrm{nbhd}(u_i)$ are sampled (randomly) for computing the Monte Carlo estimator, in order to limit the computational burden (see Appendix A.4).

3. We use the analytic forms for the exponential and logarithm maps of the various groups as described in Eade (2014).

## C.2. Sampling from the Haar Measure for Various Groups

When the lifting map from $\mathcal{X} \to G \times \mathcal{X}/G$ is multi-valued, we need to sample elements $u \in G$ that project down to $x$: $uo = x$, in a way consistent with the Haar measure $\mu(\cdot)$. In other words, since the restriction $\mu(\cdot)|_{\text{nbhd}}$ is a distribution, we must sample from the conditional distribution $u \sim \mu(u|uo = x)|_{\text{nbhd}}$. In general this can be done by parametrizing the distribution of $\mu$ as a collection of random variables that includes $x$, and then sampling the remaining variables.

In this paper, the groups we use in which the lifting map is multi-valued are SE(2), SO(3), and SE(3). The process is especially straightforward for SE(2) and SE(3), as these groups can be expressed as a semi-direct product of two groups $G = H \ltimes N$,

$$d\mu_G(h, n) = \delta(h)d\mu_H(h)d\mu_N(n), \quad (21)$$

where $\delta(h) = \frac{d\mu_N(n)}{d\mu_N(hnh^{-1})}$ (Willson, 2009). For $G = \text{SE}(d) = \text{SO}(d) \ltimes \text{T}(d)$, $\delta(h) = 1$ since the Lebesgue measure $d\mu_{\text{T}(d)}(x) = d\lambda(x) = dx$ is invariant to rotations. So simply $d\mu_{\text{SE}(d)}(R, x) = d\mu_{\text{SO}(d)}(R)dx$.

So lifts of a point $x \in \mathcal{X}$ to $\text{SE}(d)$ consistent with $\mu$ are just $T_x R$, the product of a translation by $x$ and a randomly sampled rotation $R \sim \mu_{\text{SO}(d)}(\cdot)$. There are multiple easy methods to sample uniformly from $\text{SO}(d)$ given in Kuffner (2004); for example, sampling uniformly from $\text{SO}(3)$ can be done by sampling a unit quaternion uniformly from the 3-sphere and identifying it with the corresponding rotation matrix.
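
A sketch of this SO(3) sampling step (normalizing a Gaussian 4-vector gives a uniform unit quaternion, which is then converted with the standard quaternion-to-rotation-matrix formula; the function name is our own):

```python
import numpy as np

def sample_so3(rng):
    """Sample R ~ Haar measure on SO(3) via a uniform unit quaternion."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)         # uniform on the 3-sphere
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

rng = np.random.default_rng(0)
R = sample_so3(rng)
assert np.allclose(R @ R.T, np.eye(3))       # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)     # proper rotation
```

The lift of a point $x$ is then the composition of the translation $T_x$ with a sampled $R$, drawn independently for each copy of the lift.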

## C.3. Model Architecture

We employ a ResNet-style architecture (He et al., 2016), using bottleneck blocks (Zagoruyko and Komodakis, 2016), and replacing ReLUs with Swish activations (Ramachandran et al., 2017). The convolutional kernel $g_\theta$ internal to each LieConv layer is parametrized by a 3-layer MLP with 32 hidden units, batch norm, and Swish nonlinearities. Not only do the Swish activations improve performance slightly, but unlike ReLUs they are twice differentiable, which is a requirement for backpropagating through the Hamiltonian dynamics. The stack of elementwise linear and bottleneck blocks is followed by a global pooling layer that computes the average over all elements, but not over channels. As with regular image bottleneck blocks, the number of channels in the middle convolutional layer is smaller by a factor of 4 for increased parameter and computational efficiency.

**Downsampling:** As is traditional for image data, we increase the number of channels and the receptive field at every downsampling step. The downsampling is performed with the farthest point downsampling method described in Appendix A.4. For downsampling by a factor of $s < 1$, the radius of the neighborhood is scaled up by $s^{-1/2}$ and the number of channels is scaled up by $s^{-1/2}$. When an image is downsampled with $s = (1/2)^2$, as is typical in a CNN, this results in 2x more channels and a radius or dilation of 2x. In the bottleneck block, the downsampling operation is fused with the LieConv layer, so that the convolution is only evaluated at the downsampled query locations. We perform downsampling only on the image datasets, which have more points.

**BatchNorm:** In order to handle the varied number of group elements per example and within each neighborhood, we use a modified batchnorm that computes statistics only over elements from a given mask. The batch norm is computed per channel, with statistics averaged over the batch size and each of the valid locations.
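
A minimal numpy sketch of such a masked normalization (training-mode statistics only; the function name and shapes are illustrative, and the learned affine parameters and running averages of a full batchnorm are omitted):

```python
import numpy as np

def masked_batchnorm(x, mask, eps=1e-5):
    """Normalize per channel over only the valid (masked-in) locations.
    x: (batch, elements, channels); mask: (batch, elements) boolean."""
    m = mask[..., None]                          # broadcast mask over channels
    count = m.sum()                              # number of valid locations
    mean = (x * m).sum(axis=(0, 1)) / count
    var = (((x - mean) * m) ** 2).sum(axis=(0, 1)) / count
    out = (x - mean) / np.sqrt(var + eps)
    return np.where(m, out, 0.0)                 # zero out invalid locations

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, size=(4, 10, 8))
mask = rng.random((4, 10)) < 0.7                 # variable number of valid elements
y = masked_batchnorm(x, mask)
valid = y[mask]                                  # (n_valid, channels)
assert np.allclose(valid.mean(axis=0), 0.0, atol=1e-6)
```

Masked-out locations contribute nothing to the statistics and are zeroed in the output, so padded examples of different sizes can share a batch.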

## C.4. Details for Hamiltonian Models

**Model Symmetries:**

As the position vectors are mean-centered in the model forward pass, $q_i' = q_i - \bar{q}$, HOGN and HLieConv-SO2* have additional T(2) invariance, yielding SE(2) invariance for HLieConv-SO2*. We also experimented with an HLieConv-SE2 equivariant model, but found that the exponential map for SE2 (involving Taylor expansions and masking) was not numerically stable enough for second derivatives, which are required for optimizing through the Hamiltonian dynamics. So instead we benchmark the HLieConv-SO2 (without centering) and HLieConv-SO2* (with centering) models separately. Layer equivariance is preferable for not prematurely discarding useful information and for better modeling performance, but invariance alone is sufficient for the conservation laws. Additionally, since we know a priori that the spring problem has Euclidean coordinates, we need not model the kinetic energy $K(\mathbf{p}, m) = \sum_{j=1}^n \|\mathbf{p}_j\|^2/m_j$ and can instead focus on modeling the potential $V(q, k)$. We observe that this additional inductive bias of Euclidean coordinates improves model performance. Table 6 shows the invariance and equivariance properties of the relevant models and baselines. For Noether conservation, we need both to model the Hamiltonian and to have the symmetry property.

**Dataset Generation:** To generate the spring dynamics datasets we generate $D$ systems, each with $N = 6$ particles connected by springs. The system parameters, masses and spring constants, are set by sampling $\{m_1^{(i)}, \dots, m_6^{(i)}, k_1^{(i)}, \dots, k_6^{(i)}\}_{i=1}^D$, with $m_j^{(i)} \sim U(0.1, 3.1)$ and $k_j^{(i)} \sim U(0, 5)$. Following Sanchez-Gonzalez et al. (2019), we set the spring constants as $k_{ij} = k_i k_j$. For each system $i$, the position and momentum of body $j$ are distributed as $\mathbf{q}_j^{(i)} \sim N(0, 0.16I)$, $\mathbf{p}_j^{(i)} \sim N(0, 0.36I)$. Using the analytic form of the Hamiltonian for the spring problem, $\mathcal{H}(\mathbf{q}, \mathbf{p}) = K(\mathbf{p}, m) + V(\mathbf{q}, k)$, we use the RK4 numerical integration scheme to generate 5-second ground truth trajectories broken up into 500 evaluation timesteps. We use a fixed step size scheme for RK4 chosen automatically (as implemented in Chen et al. (2018)) with a relative tolerance of 1e-8 in double precision arithmetic. We then randomly select a single segment from each trajectory, consisting of an initial state $\mathbf{z}_t$ and $\tau = 4$ transition states: $(\mathbf{z}_{t+1}^{(i)}, \dots, \mathbf{z}_{t+\tau}^{(i)})$.

<table><thead><tr><th>Model</th><th>F(z, t)</th><th>H(z, t)</th><th>T(2)</th><th>SO(2)</th></tr></thead><tbody><tr><td>FC</td><td>•</td><td></td><td></td><td></td></tr><tr><td>OGN</td><td>•</td><td></td><td></td><td></td></tr><tr><td>HOGN</td><td></td><td>•</td><td>⋆</td><td></td></tr><tr><td>LieConv-T(2)</td><td>•</td><td></td><td>★</td><td></td></tr><tr><td>HLieConv-Trivial</td><td></td><td>•</td><td></td><td></td></tr><tr><td>HLieConv-T(2)</td><td></td><td>•</td><td>★</td><td></td></tr><tr><td>HLieConv-SO(2)</td><td></td><td>•</td><td></td><td>★</td></tr><tr><td>HLieConv-SO(2)*</td><td></td><td>•</td><td>⋆</td><td>★</td></tr></tbody></table>

Table 6. Model characteristics. Models with layers invariant to *G* are denoted with ⋆, and those with equivariant layers with ★.

**Training:** All models were trained in single precision arithmetic (double precision did not make any appreciable difference) with an integrator tolerance of 1e-4. We use a cosine decay for the learning rate schedule and perform early stopping on the validation MSE. We trained with a minibatch size of 200 and for 100 epochs each, using the Adam optimizer (Kingma and Ba, 2014) without batch normalization. With 3k training examples, the HLieConv model takes about 20 minutes to train on one 1080Ti.

For the examination of performance over the range of dataset sizes in Figure 8, we cap the validation set to the size of the training set to make the setting more realistic, and we also scale the number of training epochs up as the size of the dataset shrinks (epochs $= 100\sqrt{10^3/D}$), which we found to be sufficient to fit the training set. For $D \le 200$ we use the full dataset in each minibatch.

**Hyperparameters:**

<table><thead><tr><th></th><th>channels</th><th>layers</th><th>lr</th></tr></thead><tbody><tr><td>(H)FC</td><td>256</td><td>4</td><td>1e-2</td></tr><tr><td>(H)OGN</td><td>256</td><td>1</td><td>1e-2</td></tr><tr><td>(H)LieConv</td><td>384</td><td>4</td><td>1e-3</td></tr></tbody></table>

**Hyperparameter tuning:** Model hyperparameters were tuned by grid search over channel width, number of layers, and learning rate. The models were tuned with training, validation, and test datasets consisting of 3000, 2000, and 2000 trajectory segments respectively.

## C.5. Details for Image and Molecular Experiments

**RotMNIST Hyperparameters:** For RotMNIST we train each model for 500 epochs using the Adam optimizer with learning rate 3e-3 and batch size 25. The first linear layer maps the 1-channel grayscale input to $k = 128$ channels, and the number of channels in the bottleneck blocks follows the scaling law from Appendix C.3 as the group elements are downsampled. We use 6 bottleneck blocks, and the total downsampling factor $S = 1/10$ is split geometrically between the blocks as $s = (1/10)^{1/6}$ per block. The initial radius $r$ of the local neighborhoods in the first layer is set so as to include 1/15 of the total number of elements in each neighborhood, and is scaled accordingly thereafter. The subsampled neighborhood used to compute the Monte Carlo convolution estimator uses $p = 25$ elements. The models take less than 12 hours to train on a 1080Ti.
|
| 418 |
+
|
| 419 |
+
**QM9 Hyperparameters:** For the QM9 molecular data, we use the featurization from Anderson et al. (2019), where the input features $f_i$ are determined by the atom type (C,H,N,O,F) and the atomic charge. The coordinates $x_i$ are simply the raw atomic coordinates measured in angstroms. A separate model is trained for each prediction task, all using the same hyperparameters and early stopping on the validation MAE. We use the same train, validation, test split as Anderson et al. (2019), with 100k molecules for train, 10% for test, and the remainder for validation. As with the other experiments, we use a cosine learning rate decay schedule. Each model is trained using the Adam optimizer for 1000 epochs with a learning rate of 3e-3 and batch size of 100. We use SO(3) data augmentation and 6 bottleneck blocks, each with $k = 1536$ channels. The radius of the local neighborhood is set to $r = \infty$ to include all elements. The model takes about 48 hours to train on a single 1080Ti.
|
| 420 |
+
|
| 421 |
+
### C.6. Local Neighborhood Visualizations
|
| 422 |
+
|
| 423 |
+
In Figure 10 we visualize the local neighborhood used with different groups under three different types of transformations: translations, rotations and scaling. The distance and neighborhood are defined for the tuples of group elements and orbit. For Trivial, T(2), SO(2), $\mathbb{R} \times SO(2)$ the correspondence between points and these tuples is one-to-one and we can identify the neighborhood in terms of the input points. For SE(2) each point is mapped to multiple tuples, each of which defines its own neighborhood in terms of other tuples. In the Figure, for SE(2) for a given point we visualize the distribution of points that enter the computation of the convolution at a specific tuple.
|
| 424 |
+
---PAGE_BREAK---
|
| 425 |
+
|
| 426 |
+
**Figure 10.** A visualization of the local neighborhood for different groups, in terms of the points in the input space. For the computation of the convolution at the point in red, elements are sampled from the colored region. In each panel, the top row shows translations, the middle row rotations, and the bottom row scalings of the same image. For $SE(2)$ we visualize the distribution of points entering the computation of the convolution over multiple lift samples. For each equivariant model that respects a given symmetry, the points that enter the computation are unaffected by the transformation.
|
samples/texts_merged/174916.md
ADDED
|
@@ -0,0 +1,469 @@
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
ON THE LOCATION OF ZEROS OF THE LAPLACIAN MATCHING POLYNOMIALS OF GRAPHS
|
| 6 |
+
|
| 7 |
+
JIANG-CHAO WAN, YI WANG, ALI MOHAMMADIAN
|
| 8 |
+
|
| 9 |
+
School of Mathematical Sciences, Anhui University, Hefei 230601, Anhui, China
|
| 10 |
+
|
| 11 |
+
**ABSTRACT.** The Laplacian matching polynomial of a graph $G$, denoted by $\mathcal{LM}(G,x)$, is a new graph polynomial all of whose roots are nonnegative real numbers. In this paper, we investigate the location of zeros of the Laplacian matching polynomials. Let $G$ be a connected graph. We show that $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a tree. We prove that the number of distinct positive zeros of $\mathcal{LM}(G,x)$ is at least equal to the length of the longest path in $G$. It is also established that the zeros of $\mathcal{LM}(G,x)$ and $\mathcal{LM}(G-e,x)$ interlace for each edge $e$ of $G$. Using the path-tree of $G$, we present a linear algebraic approach to investigate the largest zero of $\mathcal{LM}(G,x)$ and, in particular, to give tight upper and lower bounds on it.
|
| 12 |
+
|
| 13 |
+
# 1. INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Graph polynomials, such as the characteristic polynomial, the chromatic polynomial, the independence polynomial, the matching polynomial, and many others, are widely studied and play important roles in applications of graphs in several diverse fields. The location of zeros of graph polynomials is a main topic in algebraic combinatorics and can be used to describe some structures and parameters of graphs. In this paper, we focus on the location of zeros of the Laplacian matching polynomials of graphs. For more results on the location of zeros of graph polynomials, we refer to [9].
|
| 16 |
+
|
| 17 |
+
Throughout this paper, all graphs are assumed to be finite, undirected, and without loops or multiple edges. Let $G$ be a graph. We denote the vertex set of $G$ by $V(G)$ and the edge set of $G$ by $E(G)$. Let $M$ be a subset of $E(G)$. We denote by $V(M)$ the set of vertices of $G$ each of which is an endpoint of one of the edges in $M$. If no two distinct edges in $M$ share a common endpoint, then $M$ is called a *matching* of $G$. The set of matchings of $G$ is denoted by $\mathcal{M}(G)$. A matching $M \in \mathcal{M}(G)$ is said to be *perfect* if $V(M) = V(G)$. The *matching polynomial* of $G$ is
|
| 18 |
+
|
| 19 |
+
$$ \mathcal{M}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} x^{|V(G) \setminus V(M)|} $$
|
| 20 |
+
|
| 21 |
+
which was formally defined by Heilmann and Lieb [7] in studying statistical physics, although it has appeared independently in several different contexts.
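The definition can be checked directly on small graphs by enumerating matchings; the following is a minimal brute-force sketch (the helper names are ours, for illustration):

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching of a graph given as an edge list."""
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            endpoints = [v for e in M for v in e]
            if len(endpoints) == len(set(endpoints)):  # edges pairwise disjoint
                yield M

def matching_polynomial(n, edges):
    """Coefficients c with M(G,x) = sum_k c[k] * x^k for a graph on n vertices."""
    coeffs = [0] * (n + 1)
    for M in matchings(edges):
        coeffs[n - 2 * len(M)] += (-1) ** len(M)  # uncovered vertices: |V| - |V(M)|
    return coeffs
```

For the path $P_3$ this gives $\mathcal{M}(P_3, x) = x^3 - 2x$, and for the triangle $C_3$ it gives $x^3 - 3x$.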
|
| 22 |
+
|
| 23 |
+
The matching polynomial is a fascinating mathematical object and attracts considerable attention of researchers. For instance, by studying the multiplicity of zeros of the matching polynomials, Chen and Ku [8] gave a generalization of the Gallai–Edmonds theorem, which is a
|
| 24 |
+
|
| 25 |
+
2020 Mathematics Subject Classification. Primary: 05C31, 05C70. Secondary: 05C05, 05C50, 12D10.
|
| 26 |
+
Key words and phrases. Graph polynomial, Matching, Subdivision of graphs, Zeros of polynomials.
|
| 27 |
+
Email addresses: wanjc@stu.ahu.edu.cn (J.-C. Wan), wangy@ahu.edu.cn (Y. Wang, corresponding author), ali.m@ahu.edu.cn (A. Mohammadian).
|
| 28 |
+
Funding. The research of the second author is supported by the National Natural Science Foundation of China with grant numbers 11771016 and 11871073. The research of the third author is supported by the Natural Science Foundation of Anhui Province with grant number 2008085MA03.
|
| 29 |
+
---PAGE_BREAK---
|
| 30 |
+
|
| 31 |
+
structure theorem in classical graph theory. As another instance, using a well-known upper bound on zeros of the matching polynomials, Marcus, Spielman, and Srivastava [10] established that infinitely many bipartite Ramanujan graphs exist. Some earlier facts on the matching polynomials can be found in [4].
|
| 32 |
+
|
| 33 |
+
We want to summarize here some basic features of the zeros of the matching polynomial. For this, let us first introduce some more notations and terminology which we need. For a vertex $v$ of a graph $G$, we denote by $N_G(v)$ the set of all vertices of $G$ adjacent to $v$. The degree of $v$ is defined as $|N_G(v)|$ and is denoted by $d_G(v)$. The maximum degree and the minimum degree of the vertices of $G$ are denoted by $\Delta(G)$ and $\delta(G)$, respectively. For a subset $W$ of $V(G)$, we shall use $G[W]$ to denote the induced subgraph of $G$ induced by $W$ and we simply use $G-W$ instead of $G[V(G)\setminus W]$. Also, for a vertex $v$ of $G$, we simply write $G-v$ for $G - \{v\}$. For an edge $e$ of $G$, we denote by $G-e$ the subgraph of $G$ obtained by deleting the edge $e$.
|
| 34 |
+
|
| 35 |
+
Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_m$ be respectively the zeros of two real rooted polynomials $f$ and $g$ with $\deg f = n$ and $\deg g = m$. We say that the zeros of $f$ and $g$ interlace if either
|
| 36 |
+
|
| 37 |
+
$$\alpha_1 \le \beta_1 \le \alpha_2 \le \beta_2 \le \dots$$
|
| 38 |
+
|
| 39 |
+
or
|
| 40 |
+
|
| 41 |
+
$$\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots$$
|
| 42 |
+
|
| 43 |
+
in which case one clearly must have $|n-m| \le 1$. We adopt the convention that the zeros of any polynomial of degree 0 interlace the zeros of any other polynomial.
|
| 44 |
+
|
| 45 |
+
For any connected graph $G$, the assertions given in (1.1)-(1.3) are known.
|
| 46 |
+
|
| 47 |
+
(1.1) All the roots of $\mathcal{M}(G, x)$ are real. Moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{M}(G, x)$ lie in the interval $(-2\sqrt{\Delta(G)-1}, 2\sqrt{\Delta(G)-1})$ [7].
|
| 48 |
+
|
| 49 |
+
(1.2) The number of distinct roots of $\mathcal{M}(G, x)$ is at least equal to $\ell(G)+1$, where $\ell(G)$ is the length of the longest path in $G$ [5].
|
| 50 |
+
|
| 51 |
+
(1.3) For each vertex $v \in V(G)$, the zeros of $\mathcal{M}(G-v, x)$ interlace the zeros of $\mathcal{M}(G, x)$. In addition, the largest zero of $\mathcal{M}(G, x)$ has multiplicity 1 and is greater than the largest zero of $\mathcal{M}(G-v, x)$ [6].
|
| 54 |
+
|
| 55 |
+
Recently, Mohammadian [11] introduced a new graph polynomial that is called the *Laplacian matching polynomial* and is defined for a graph $G$ as
|
| 56 |
+
|
| 57 |
+
$$ (1.4) \qquad \mathcal{LM}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} (x - d_G(v)) \right). $$
|
| 58 |
+
|
| 59 |
+
Mohammadian proved that all roots of $\mathcal{LM}(G, x)$ are real and nonnegative, and moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ lie in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1})$. In view of this interval, it is natural to ask: what is a necessary and sufficient condition for 0 to be a root of $\mathcal{LM}(G, x)$? More generally, for a new real-rooted graph polynomial, it is natural to investigate properties of its zeros such as the interlacing of zeros, upper and lower bounds on the largest zero, the maximum multiplicity of zeros, and the number of distinct zeros. In this paper, we mainly prove that the assertions given in (1.5)-(1.7) hold for any connected graph $G$, where $\ell(G)$ denotes the length of the longest path in $G$.
|
| 60 |
+
|
| 61 |
+
(1.5) If $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ are contained in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1}\cos\frac{\pi}{2\ell(G)+2}]$, and in addition, the upper bound of the interval is a zero of $\mathcal{LM}(G, x)$ if and only if $G$ is a cycle.
|
| 64 |
+
---PAGE_BREAK---
|
| 65 |
+
|
| 66 |
+
(1.6) The number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots.
|
| 69 |
+
|
| 70 |
+
(1.7) For each edge $e \in E(G)$, the zeros of $\mathcal{L}\mathcal{M}(G,x)$ and $\mathcal{L}\mathcal{M}(G-e,x)$ interlace in the sense that, if $\alpha_1 \le \cdots \le \alpha_n$ and $\beta_1 \le \cdots \le \beta_n$ are respectively the zeros of $\mathcal{L}\mathcal{M}(G,x)$ and $\mathcal{L}\mathcal{M}(G-e,x)$ in which $n = |V(G)|$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \cdots \le \beta_n \le \alpha_n$. Further, the largest zero of $\mathcal{L}\mathcal{M}(G,x)$ has multiplicity 1 and is strictly greater than the largest zero of $\mathcal{L}\mathcal{M}(H,x)$ for any proper subgraph $H$ of $G$.
|
| 76 |
+
|
| 77 |
+
It should be mentioned that the Laplacian matching polynomial has recently been studied under a different name and expression by Chen and Zhang [17].
|
| 78 |
+
|
| 79 |
+
For a graph $G$, the *subdivision* of $G$, denoted by $S(G)$, is the graph derived from $G$ by replacing every edge $e = \{a,b\}$ of $G$ with two edges $\{a,v_e\}$ and $\{v_e,b\}$ along with the new vertex $v_e$ corresponding to the edge $e$. We know from a result of Yan and Yeh [16] that
|
| 82 |
+
|
| 83 |
+
$$
(1.8) \qquad \mathcal{M}(S(G),x) = x^{|E(G)|-|V(G)|} \mathcal{L}\mathcal{M}(G,x^2)
$$
|
| 86 |
+
|
| 87 |
+
for any graph $G$, which is also proved by Chen and Zhang [17] by a different method. The equality (1.8) shows that the problem of locating the zeros of the Laplacian matching polynomial of a graph $G$ can be transformed into the problem of locating the zeros of the matching polynomial of $S(G)$. For instance, using (1.8) and the first statement in (1.1), it immediately follows that the zeros of $\mathcal{LM}(G,x)$ are nonnegative real numbers. The assertion (1.6) is proved in Section 2 by the subdivision of graphs.
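The identity (1.8) can be sanity-checked numerically: build $S(G)$, evaluate both sides at a few points, and compare. A self-contained sketch (all helper names are ours):

```python
from itertools import combinations

def matchings(edges):
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            used = [v for e in M for v in e]
            if len(used) == len(set(used)):
                yield M

def M_eval(vertices, edges, x):
    """Evaluate the matching polynomial M(G,x)."""
    return sum((-1) ** len(M) * x ** (len(vertices) - 2 * len(M))
               for M in matchings(edges))

def LM_eval(vertices, edges, x):
    """Evaluate the Laplacian matching polynomial LM(G,x) from (1.4)."""
    deg = {v: 0 for v in vertices}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    total = 0.0
    for M in matchings(edges):
        used = {v for e in M for v in e}
        prod = 1.0
        for v in vertices:
            if v not in used:
                prod *= x - deg[v]
        total += (-1) ** len(M) * prod
    return total

def subdivision(vertices, edges):
    """S(G): replace each edge {a,b} by a - v_e - b with a new vertex v_e."""
    new_v = list(vertices) + [('e', i) for i in range(len(edges))]
    new_e = []
    for i, (a, b) in enumerate(edges):
        new_e += [(a, ('e', i)), (('e', i), b)]
    return new_v, new_e
```

For the triangle, $|E(G)| = |V(G)|$, so (1.8) reduces to $\mathcal{M}(S(G),x) = \mathcal{LM}(G,x^2)$, and indeed $S(C_3) = C_6$ with $\mathcal{M}(C_6,x) = x^6 - 6x^4 + 9x^2 - 2$.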
|
| 88 |
+
|
| 89 |
+
One of the most important tools in the theory of the matching polynomial is the concept of 'path-tree', which was introduced by Godsil [5]. Given a graph $G$ and a vertex $u \in V(G)$, the *path-tree* $T(G, u)$ is the tree whose vertices are the paths in $G$ starting at $u$, where two such paths are adjacent if one is a maximal proper subpath of the other. In Section 3, we show that the path-tree is also applicable to the Laplacian matching polynomial after some appropriate adjustments. Using this, we prove (1.5), which is a slight improvement of the second statement of Theorem 2.6 of [11]. The assertion (1.7) is proved in Section 3 by linear algebra arguments.
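The path-tree can be built by breadth-first extension of paths; a short sketch under our own naming, with the graph given as an adjacency-list dict:

```python
def path_tree(adj, u):
    """T(G,u): vertices are paths in G starting at u (stored as tuples);
    each path is adjacent to every one-vertex extension of itself."""
    paths, edges, frontier = [(u,)], [], [(u,)]
    while frontier:
        nxt = []
        for P in frontier:
            for w in adj[P[-1]]:
                if w not in P:        # an extension must remain a path
                    Q = P + (w,)
                    paths.append(Q)
                    edges.append((P, Q))
                    nxt.append(Q)
        frontier = nxt
    return paths, edges
```

For the triangle with $u = 0$, the paths starting at $u$ are $0$, $01$, $02$, $012$, $021$, so $T(C_3, 0)$ has 5 vertices and 4 edges; in general $|E(T)| = |V(T)| - 1$, confirming it is a tree.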
|
| 90 |
+
|
| 91 |
+
Let us introduce more notations and definitions before moving on to the next section. We use $\lambda(f(x))$ to denote the largest zero of a real-rooted polynomial $f(x)$. For a square matrix $M$, we shall use $\varphi(M, x)$ to denote the characteristic polynomial of $M$ in the indeterminate $x$. If all the roots of $\varphi(M, x)$ are real, then its largest zero is denoted by $\lambda(M)$. For a graph $G$, the *adjacency matrix* of $G$, denoted by $A(G)$, is the matrix whose rows and columns are indexed by $V(G)$ and whose $(u, v)$-entry is 1 if $u$ and $v$ are adjacent and 0 otherwise. Let $D(G)$ be the diagonal matrix whose rows and columns are indexed as the rows and the columns of $A(G)$ with $d_G(v)$ in the $v$th diagonal position. The matrices $L(G) = D(G) - A(G)$ and $Q(G) = D(G) + A(G)$ are respectively said to be the *Laplacian matrix* and the *signless Laplacian matrix* of $G$. It is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ if and only if $G$ is a forest [14]. In addition, it is proved that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Among other results, we present a generalization of these results in Section 2.
|
| 102 |
+
|
| 103 |
+
## 2. SUBDIVISION OF GRAPHS AND THE LAPLACIAN MATCHING POLYNOMIAL
|
| 104 |
+
|
| 105 |
+
In this section, we examine the location of zeros of the Laplacian matching polynomial by establishing a relation between the Laplacian matching polynomial of a graph and the matching polynomial of the subdivision of that graph. Then, by analysing the structures of the subdivision of graphs, we will prove (1.6). To begin with, we recall the multivariate matching polynomial that covers both the matching polynomial and the Laplacian matching polynomial. This multivariate graph polynomial was introduced by Heilmann and Lieb [7].
|
| 106 |
+
---PAGE_BREAK---
|
| 107 |
+
|
| 108 |
+
Let $G$ be a graph and associate the vector $\mathbf{x}_G = (x_v)_{v \in V(G)}$ with $G$, in which $x_v$ is an indeterminate corresponding to the vertex $v \in V(G)$. Notice that, for a subgraph $H$ of $G$, $\mathbf{x}_H$ is the vector that has the same coordinates as $\mathbf{x}_G$ in the positions corresponding to the vertices in $V(H)$. The *multivariate matching polynomial* of $G$ is defined as
|
| 109 |
+
|
| 110 |
+
$$ (2.1) \qquad \mathfrak{M}(G, \mathbf{x}_G) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} x_v \right). $$
|
| 111 |
+
|
| 112 |
+
Let $\mathbf{1}_G$ be the all one vector of length $|V(G)|$. Also, for a subgraph $H$ of $G$, we let $\mathbf{d}_{G,H} = (d_G(v))_{v \in V(H)}$. For simplicity, we write $\mathbf{d}_G$ instead of $\mathbf{d}_{G,G}$. We sometimes drop the subscript of the vector symbols if there is no possible confusion. It is easy to see that
|
| 113 |
+
|
| 114 |
+
$$ (2.2) \qquad \mathfrak{M}(G, x\mathbf{1}_G) = \mathcal{M}(G, x) $$
|
| 115 |
+
|
| 116 |
+
and
|
| 117 |
+
|
| 118 |
+
$$ (2.3) \qquad \mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G) = \mathcal{L}\mathcal{M}(G, x). $$
|
| 119 |
+
|
| 120 |
+
Note that
|
| 121 |
+
|
| 122 |
+
$$ \mathfrak{M}(G_1 \cup G_2, (\mathbf{x}_{G_1}, \mathbf{x}_{G_2})) = \mathfrak{M}(G_1, \mathbf{x}_{G_1})\mathfrak{M}(G_2, \mathbf{x}_{G_2}), $$
|
| 123 |
+
|
| 124 |
+
where $G_1 \cup G_2$ denotes the disjoint union of two graphs $G_1$ and $G_2$. So, in what follows, we often restrict our attention to connected graphs.
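The multiplicativity over disjoint unions is easy to confirm by direct evaluation of the multivariate polynomial; a minimal sketch with our own helper names:

```python
from itertools import combinations

def mmp_eval(vertices, edges, xs):
    """Evaluate the multivariate matching polynomial with xs[v] substituted for x_v."""
    total = 0.0
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            used = {v for e in M for v in e}
            if len(used) != 2 * k:            # edges overlap: not a matching
                continue
            prod = 1.0
            for v in vertices:
                if v not in used:             # product over uncovered vertices
                    prod *= xs[v]
            total += (-1) ** k * prod
    return total
```

Evaluating on two disjoint single edges shows the value on the union equals the product of the values on the components.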
|
| 125 |
+
|
| 126 |
+
We need the following useful lemma in the sequel.
|
| 127 |
+
|
| 128 |
+
**Lemma 2.1** (Amini [1]). Let $G$ be a graph. For any vertex $v \in V(G)$,
|
| 129 |
+
|
| 130 |
+
$$ \mathfrak{M}(G, \mathbf{x}_G) = x_v \mathfrak{M}(G - v, \mathbf{x}_{G-v}) - \sum_{w \in N_G(v)} \mathfrak{M}(G - v - w, \mathbf{x}_{G-v-w}). $$
|
| 131 |
+
|
| 132 |
+
By combining Lemma 2.1 and (2.2), we get
|
| 133 |
+
|
| 134 |
+
$$ (2.4) \qquad \mathcal{M}(G, x) = x\mathcal{M}(G-v, x) - \sum_{w \in N_G(v)} \mathcal{M}(G-v-w, x), $$
|
| 135 |
+
|
| 136 |
+
which is a well-known recursive formula for the matching polynomial.
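The recursion (2.4) translates directly into code; below it is checked against brute-force enumeration of matchings (names are ours):

```python
from itertools import combinations

def M_brute(vertices, edges, x):
    """M(G,x) by enumerating all matchings."""
    total = 0
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            used = [v for e in M for v in e]
            if len(used) == len(set(used)):
                total += (-1) ** k * x ** (len(vertices) - 2 * k)
    return total

def M_rec(vertices, edges, x):
    """M(G,x) = x*M(G-v,x) - sum over neighbors w of v of M(G-v-w,x)."""
    if not vertices:
        return 1
    v = vertices[0]

    def delete(S):
        # Remove the vertex set S together with all incident edges.
        return ([u for u in vertices if u not in S],
                [e for e in edges if S.isdisjoint(e)])

    total = x * M_rec(*delete({v}), x)
    for e in edges:
        if v in e:                    # one term per neighbor of v
            total -= M_rec(*delete(set(e)), x)
    return total
```

On the triangle, both routes produce $x^3 - 3x$.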
|
| 137 |
+
|
| 138 |
+
The following theorem, which is a generalization of (1.8), plays a crucial role in our proofs in Section 3.
|
| 139 |
+
|
| 140 |
+
**Theorem 2.2.** Let $G$ be a graph. For any subset $W$ of $V(G)$,
|
| 141 |
+
|
| 142 |
+
$$ \mathcal{M}(S(G) - W, x) = x^{|E(G)| - |V(G)| + |W|} \mathfrak{M}(G - W, x^2 \mathbf{1}_{G-W} - \mathbf{d}_{G,G-W}). $$
|
| 143 |
+
|
| 144 |
+
*Proof.* For simplicity, let $k = |V(G) \setminus W|$ and $m = |E(G)|$. We prove the assertion by induction on $k$. If $V(G) \setminus W = \{u\}$ for some vertex $u \in V(G)$, then $S(G) - W$ consists of a star on $d_G(u) + 1$ vertices and $|E(G)| - d_G(u)$ isolated vertices. Therefore,
|
| 145 |
+
|
| 146 |
+
$$ \mathcal{M}(S(G) - W, x) = x^{m+1} - d_G(u)x^{m-1} $$
|
| 147 |
+
|
| 148 |
+
and
|
| 149 |
+
|
| 150 |
+
$$ \mathfrak{M}(G - W, x^2 \mathbf{1} - \mathbf{d}) = x^2 - d_G(u). $$
|
| 151 |
+
---PAGE_BREAK---
|
| 152 |
+
|
| 153 |
+
So, the claimed equality holds for $k=1$. Assume that $k \ge 2$. Choose a vertex $u \in V(G) \setminus W$ and let $H = S(G) - W - u$. By Lemma 2.1, the induction hypothesis and (2.4), we have
|
| 154 |
+
|
| 155 |
+
$$
\begin{align*}
x^{m-k+2}\mathfrak{M}(G-W, x^2\mathbf{1}-\mathbf{d}) &= x(x^2-d_G(u))x^{m-k+1}\mathfrak{M}(G-W-u, x^2\mathbf{1}-\mathbf{d}) \\
&\quad - \sum_{v \in N_{G-W}(u)} x^{m-k+2}\mathfrak{M}(G-W-u-v, x^2\mathbf{1}-\mathbf{d}) \\
&= x(x^2-d_G(u))\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x) \\
&= x^2\mathcal{M}(S(G)-W,x) + x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) \\
&\quad - d_G(u)x\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x).
\end{align*}
$$
|
| 164 |
+
|
| 165 |
+
Hence, in order to complete the induction step, it suffices to prove that
|
| 166 |
+
|
| 167 |
+
$$ (2.5) \qquad d_G(u)x\mathcal{M}(H,x) = x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x). $$
|
| 168 |
+
|
| 169 |
+
To establish (2.5), let $N_G(u) \cap W = \{a_1, \dots, a_s\}$ and $N_G(u) \setminus W = \{b_1, \dots, b_t\}$. Also, for $i=1, \dots, s$, let $a'_i$ be the vertex of $S(G)$ corresponding to the edge $\{u, a_i\}$ of $G$ and, for $j=1, \dots, t$, let $b'_j$ be the vertex of $S(G)$ corresponding to the edge $\{u, b_j\}$ of $G$. Notice that, if one of $N_G(u) \cap W$ and $N_G(u) \setminus W$ is empty, then we may derive (2.5) by the same discussion as below. We have $d_G(u) = s+t$ and $N_{S(G)-W}(u) = N_{S(G)}(u) = \{a'_1, \dots, a'_s, b'_1, \dots, b'_t\}$. The structure of $H$ is illustrated in Figure 1.
|
| 170 |
+
|
| 171 |
+
**Figure 1.** The structure of $H$.
|
| 172 |
+
|
| 173 |
+
We have $d_H(a'_i) = 0$ for $i = 1, \dots, s$ and $d_H(b'_j) = 1$ for $j = 1, \dots, t$. By applying (2.4) for $a'_i$ and $b'_j$, we find that
|
| 174 |
+
|
| 175 |
+
$$ \mathcal{M}(H,x) = x\mathcal{M}(H-a'_i,x) $$
|
| 176 |
+
---PAGE_BREAK---
|
| 177 |
+
|
| 178 |
+
and
|
| 179 |
+
|
| 180 |
+
$$
\begin{align*}
x\mathcal{M}(H, x) &= x^2\mathcal{M}(H - b'_j, x) - x\mathcal{M}(H - b_j - b'_j, x) \\
&= x^2\mathcal{M}(H - b'_j, x) - \mathcal{M}(H - b_j, x).
\end{align*}
$$
|
| 186 |
+
|
| 187 |
+
Therefore,
|
| 188 |
+
|
| 189 |
+
$$
\begin{align*}
d_G(u)x\mathcal{M}(H,x) &= sx\mathcal{M}(H,x) + tx\mathcal{M}(H,x) \\
&= x^2 \sum_{i=1}^{s} \mathcal{M}(H - a'_i, x) + x^2 \sum_{j=1}^{t} \mathcal{M}(H - b'_j, x) - \sum_{j=1}^{t} \mathcal{M}(H - b_j, x) \\
&= x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x),
\end{align*}
$$
|
| 196 |
+
|
| 197 |
+
which is exactly (2.5). This completes the proof. $\square$
|
| 199 |
+
|
| 200 |
+
In what follows, we prove some results about the Laplacian matching polynomial by analysing the structures of the subdivision of graphs. The following consequence immediately follows from Theorem 2.2 and the first statement in (1.1). It is worth mentioning that the following result is proved in [17] for a different expression of the Laplacian matching polynomial.
|
| 204 |
+
|
| 205 |
+
**Corollary 2.3.** Let $G$ be a graph. Then
|
| 206 |
+
|
| 207 |
+
$$
\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|} \mathcal{L}\mathcal{M}(G, x^2).
$$
|
| 210 |
+
|
| 211 |
+
In particular, the zeros of $\mathcal{L}\mathcal{M}(G, x)$ are nonnegative real numbers.
|
| 212 |
+
|
| 213 |
+
For a graph $G$, it is proved that $\mathcal{L}\mathcal{M}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Since $0$ is an eigenvalue of $L(G)$, we deduce that $\mathcal{L}\mathcal{M}(G, 0) = 0$ if $G$ is a forest. From (1.4), we get the combinatorial identity
|
| 214 |
+
|
| 215 |
+
$$
\sum_{M \in \mathcal{M}(F)} (-1)^{|M|} \left( \prod_{v \in V(F) \setminus V(M)} d_F(v) \right) = 0
$$
|
| 218 |
+
|
| 219 |
+
for any forest F. The following theorem, which is proved in [17], gives a necessary and sufficient condition for 0 to be a root of the Laplacian matching polynomial. We present here a different proof for it.
|
| 220 |
+
|
| 221 |
+
**Theorem 2.4** (Chen, Zhang [17]). Let $G$ be a connected graph. Then, $0$ is a root of $\mathcal{LM}(G, x)$ if and only if $G$ is a tree.
|
| 222 |
+
|
| 223 |
+
*Proof.* If $G$ is a tree, then $|E(G)| = |V(G)| - 1$ and so $\mathcal{LM}(G, x^2) = x\mathcal{M}(S(G), x)$ by Corollary 2.3, implying that $0$ is a root of $\mathcal{LM}(G, x)$. We prove that $0$ is not a root of $\mathcal{LM}(G, x)$ if $G$ is not a tree. For this, assume that $|E(G)| \ge |V(G)|$. One may easily consider $S(G)$ as a bipartite graph with the bipartition $\{V(G), E(G)\}$ after identifying each new vertex $v_e$ of $S(G)$ with its corresponding edge $e$ of $G$.
|
| 224 |
+
|
| 225 |
+
We claim that $S(G)$ has a matching that saturates the part $V(G)$. If $G$ contains a vertex $u$ with degree 1 and $e$ is the edge incident to $u$ in $G$, then it suffices to prove that $S(G-u)$ has a matching that saturates the part $V(G-u)$, since the union of such a matching and the edge $\{u, v_e\}$ forms a matching of $S(G)$ that saturates the part $V(G)$. Thus, we may assume that $d_G(v) \ge 2$ for all vertices $v \in V(G)$. We are going to establish that $S(G)$ satisfies Hall's condition [2, Theorem 16.4]. For a subset $W$ of $V(G)$, we shall use $N_G(W)$ to denote the set of vertices of $G$ each of which is adjacent to a vertex in $W$ and $\partial_G(W)$ to denote the set of edges of $G$ each of which has exactly one endpoint in $W$. For any subset $U$ of the part $V(G)$, since $d_G(v) \ge 2$ for all vertices $v \in V(G)$,
|
| 226 |
+
|
| 227 |
+
$$
(2.6) \qquad |\partial_{S(G)}(U)| \ge 2|U|.
$$
|
| 230 |
+
---PAGE_BREAK---
|
| 231 |
+
|
| 232 |
+
On the other hand, $d_{S(G)}(v_e) = 2$ for each $e \in E(G)$, so
|
| 233 |
+
|
| 234 |
+
$$ (2.7) \qquad |\partial_{S(G)}(N_{S(G)}(U))| = 2|N_{S(G)}(U)|. $$
|
| 235 |
+
|
| 236 |
+
Clearly, $|\partial_{S(G)}(N_{S(G)}(U))| \ge |\partial_{S(G)}(U)|$ which implies that $|N_{S(G)}(U)| \ge |U|$ using (2.6) and (2.7). This means that $S(G)$ satisfies Hall's condition, as required.
|
| 237 |
+
|
| 238 |
+
We proved that $S(G)$ has a matching that saturates the part $V(G)$. This means that the smallest power of $x$ in $\mathcal{M}(S(G), x)$ is $|E(G)| - |V(G)|$ by (2.1) and (2.2). In view of Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that the constant term in $\mathcal{LM}(G, x)$ is nonzero. So, 0 is not a root of $\mathcal{LM}(G, x)$. This completes the proof. $\square$
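Theorem 2.4 is easy to confirm numerically: the constant term $\mathcal{LM}(G,0)$ vanishes for a tree but not for a connected graph containing a cycle. A self-contained sketch (helper names are ours):

```python
from itertools import combinations

def LM_at_zero(n, edges):
    """The constant term LM(G,0) for a graph on vertices 0..n-1,
    expanded directly from definition (1.4) at x = 0."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    total = 0
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            used = {v for e in M for v in e}
            if len(used) != 2 * k:
                continue
            prod = 1
            for v in range(n):
                if v not in used:
                    prod *= -deg[v]          # factor (0 - d_G(v))
            total += (-1) ** k * prod
    return total
```

The paths $P_3$ and $P_4$ (trees) give 0, while the triangle gives $-2$.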
|
| 239 |
+
|
| 240 |
+
In the next theorem, we give a lower bound on the number of distinct zeros of the Laplacian matching polynomial.
|
| 241 |
+
|
| 242 |
+
**Theorem 2.5.** Let $G$ be a connected graph and let $\ell(G)$ be the length of the longest path in $G$. Then the number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots.
|
| 243 |
+
|
| 244 |
+
*Proof.* For convenience, let $\ell = \ell(G)$. Denote by $\ell'$ the length of the longest path in $S(G)$. From (1.2), $\mathcal{M}(S(G), x)$ has at least $\ell' + 1$ distinct roots. By Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that $\mathcal{LM}(G, x^2)$ has at least $\ell'$ distinct nonzero roots. Since all roots of $\mathcal{LM}(G, x)$ are real and nonnegative by Corollary 2.3, it follows that $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil$ distinct positive roots.
|
| 245 |
+
|
| 246 |
+
For each edge $e \in E(G)$, denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $w_0, w_1, \dots, w_\ell$ be a path in $G$. Then, $w_0, v_{e_1}, w_1, \dots, v_{e_\ell}, w_\ell$ is a path in $S(G)$ of length $2\ell$, where $e_i = \{w_{i-1}, w_i\} \in E(G)$ for $i=1, \dots, \ell$. Thus, $\ell' \ge 2\ell$ and so $\mathcal{LM}(G, x)$ has at least $\ell$ distinct positive roots.
|
| 247 |
+
|
| 248 |
+
Now, assume that $\delta(G) \ge 2$. This assumption allows us to consider a vertex $w' \in N_G(w_0) \setminus \{w_1\}$. Then, $S(G)$ contains the path $v_{e'}, w_0, v_{e_1}, w_1, \dots, v_{e_\ell}, w_\ell$ of length $2\ell + 1$, where $e' = \{w', w_0\} \in E(G)$. Therefore, $\ell' \ge 2\ell + 1$ and so $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil \ge \ell + 1$ distinct positive roots. This completes the proof. $\square$
|
| 249 |
+
|
| 250 |
+
**Remark 2.6.** The second statement in Theorem 2.5 implies that, if $G$ is a graph with a Hamilton cycle, then the zeros of $\mathcal{LM}(G, x)$ are all distinct.
|
| 251 |
+
|
| 252 |
+
Given a graph $G$, it is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ if and only if $G$ is a forest [14]. Also, as we mentioned before, it is established that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Below, we present a general result which shows that the multivariate matching polynomial of a forest has a determinantal representation in terms of its adjacency matrix; this will be used in the next section.
|
| 253 |
+
|
| 254 |
+
**Theorem 2.7.** Let $F$ be a forest. Then $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$, where $\mathbf{X}_F$ is a diagonal matrix whose rows and columns are indexed by $V(F)$ and the $(v,v)$-entry is $x_v$ for any vertex $v \in V(F)$. In particular, $\mathcal{M}(F, x) = \varphi(A(F), x)$ and $\mathcal{LM}(F, x) = \varphi(L(F), x)$.
|
| 255 |
+
|
| 256 |
+
*Proof.* We prove that $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$ by induction on $|E(F)|$. The equality is trivially valid if $|E(F)| = 0$. So, assume that $|E(F)| \ge 1$. As $F$ is a forest, we may consider two vertices $u, v \in V(F)$ with $N_F(u) = \{v\}$. Without loss of generality, we may assume that the first row and column of $A(F)$ correspond to $u$ and the second row and column of $A(F)$ correspond to $v$. Expanding the determinant of $\mathbf{X}_F - A(F)$ along its first row, we obtain by the induction hypothesis and Lemma 2.1 that
|
| 257 |
+
|
| 258 |
+
$$
\begin{align*}
\det (\mathbf{X}_F - A(F)) &= x_u \det (\mathbf{X}_{F-u} - A(F-u)) - \det (\mathbf{X}_{F-u-v} - A(F-u-v)) \\
&= x_u \mathfrak{M}(F-u, \mathbf{x}_{F-u}) - \mathfrak{M}(F-u-v, \mathbf{x}_{F-u-v}) \\
&= \mathfrak{M}(F, \mathbf{x}_F),
\end{align*}
$$
|
| 263 |
+
|
| 264 |
+
as desired. The 'in particular' statement immediately follows from (2.2) and (2.3). $\square$
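The 'in particular' part of Theorem 2.7 can be checked numerically: for a tree, $\mathcal{LM}$ agrees with $\det(xI - L(G))$ at every point, while for a cycle the two differ (e.g. at $x = 0$, since $0$ is always a Laplacian eigenvalue). A sketch with our own helper names:

```python
import numpy as np
from itertools import combinations

def LM_eval(n, edges, x):
    """Evaluate LM(G,x) by enumerating matchings, per definition (1.4)."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    total = 0.0
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            used = {v for e in M for v in e}
            if len(used) != 2 * k:
                continue
            prod = 1.0
            for v in range(n):
                if v not in used:
                    prod *= x - deg[v]
            total += (-1) ** k * prod
    return total

def charpoly_L_eval(n, edges, x):
    """Evaluate det(x*I - L(G)) for the Laplacian L(G) = D(G) - A(G)."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1
        L[b, b] += 1
        L[a, b] -= 1
        L[b, a] -= 1
    return float(np.linalg.det(x * np.eye(n) - L))
```

On the path $P_4$ the two evaluations coincide; on the triangle they disagree at $x = 0$, where $\det(-L(C_3)) = 0$ but $\mathcal{LM}(C_3, 0) = -2$.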
|
| 265 |
+
---PAGE_BREAK---
|
| 266 |
+
|
| 267 |
+
**Corollary 2.8.** For a tree $T$, the multiplicity of $0$ as a root of $\mathcal{LM}(T, x)$ is $1$.
*Proof.* It is well known that the number of connected components of a graph $\Gamma$ is equal to the multiplicity of $0$ as a root of $\varphi(L(\Gamma), x)$ [3, Proposition 1.3.7]. So, the result follows from $\mathcal{LM}(T, x) = \varphi(L(T), x)$ which is given in Theorem 2.7. $\square$
### 3. THE LARGEST ZERO OF THE LAPLACIAN MATCHING POLYNOMIAL
The purpose of this section is to investigate the location of the largest zero of the Laplacian matching polynomial. We give a linear algebraic approach to studying the largest zero of the Laplacian matching polynomial and present sharp upper and lower bounds on it. The assertions (1.5) and (1.7) are also proved in this section using this approach.
Let $G$ be a connected graph and $u \in V(G)$. Let $T(G, u)$ be the path-tree of $G$ with respect to the vertex $u$, as introduced in Section 1. Consider two vectors $x_G = (x_v)_{v \in V(G)}$ and $x_{T(G,u)} = (x_P)_{P \in V(T(G,u))}$ of indeterminates associated with $G$ and $T(G, u)$, respectively. For every vertex $P \in V(T(G, u))$, we may identify $x_P$ with $x_{v(P)}$, in which $v(P)$ is the terminal vertex of the path $P$ in $G$. In this way, $G$ and $T(G, u)$ will be equipped with two vectors consisting of the same indeterminates, which are simply denoted by **x** when there is no ambiguity. In what follows, for every subgraph $H$ of $G$ and vertex $u \in V(H)$, we denote by $D_G(T(H, u))$ the diagonal matrix whose rows and columns are indexed by $V(T(H, u))$ and whose $(P, P)$-entry is $d_G(v(P))$.
The univariate version of the following theorem, which was proved by Godsil [5], plays a key role in the theory of the matching polynomial. Notice that, for a graph $G$ and a vertex $u \in V(G)$, $u$ is a path in $G$ and the corresponding vertex in $T(G, u)$ will also be referred to as $u$.
**Theorem 3.1 (Amini [1]).** Let $G$ be a connected graph and let $u \in V(G)$. Then
$$ \frac{\mathfrak{M}(G - u, \boldsymbol{x})}{\mathfrak{M}(G, \boldsymbol{x})} = \frac{\mathfrak{M}(T(G, u) - u, \boldsymbol{x})}{\mathfrak{M}(T(G, u), \boldsymbol{x})}, $$
and moreover, $\mathfrak{M}(G, \boldsymbol{x})$ divides $\mathfrak{M}(T(G, u), \boldsymbol{x})$.
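As an illustration of the univariate version of Theorem 3.1 (ours, obtained by setting every indeterminate equal to $x$), take $G = C_3$, so that $T(G, u)$ is the path $P_5$ and $T(G, u) - u$ is the disjoint union of two copies of $P_2$; the identity and the divisibility claim can then be verified with plain polynomial arithmetic, assuming `numpy`:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Univariate specialization (all indeterminates equal to x) for G = C3.
# Matching polynomials, coefficients listed from low degree to high:
m_G  = [0, -3, 0, 1]          # M(C3)     = x^3 - 3x
m_Gu = [-1, 0, 1]             # M(C3 - u) = M(P2) = x^2 - 1
# Path-tree T(C3, u) is P5; T - u splits into two copies of P2.
m_T  = [0, 3, 0, -4, 0, 1]    # M(P5)     = x^5 - 4x^3 + 3x
m_Tu = P.polymul([-1, 0, 1], [-1, 0, 1])   # (x^2 - 1)^2

# Cross-multiplied form of the identity: M(G-u) * M(T) == M(T-u) * M(G).
lhs = P.polymul(m_Gu, m_T)
rhs = P.polymul(m_Tu, m_G)
assert np.allclose(lhs, rhs)

# Divisibility: M(G) divides M(T(G,u)) with zero remainder.
quot, rem = P.polydiv(m_T, m_G)
assert np.allclose(rem, 0)
```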
For a connected graph $G$ and a vertex $u \in V(G)$, Theorem 3.1 and Theorem 2.7 yield that $\mathcal{M}(G, x)$ divides $\varphi(A(T(G, u)), x)$. Since all roots of the characteristic polynomial of a symmetric matrix are real, the first statement in (1.1) is obtained as an application of Theorem 3.1. For the Laplacian matching polynomial, we get the following result.
**Corollary 3.2.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then $\mathfrak{M}(H, x\mathbf{1}_H - d_{G,H})$ divides $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. In particular, $\varphi(D_G(T(G, u)) + A(T(G, u)), x)$ is divisible by $\mathcal{LM}(G, x)$ for every vertex $u \in V(G)$.
*Proof.* By Theorem 3.1, we find that $\mathfrak{M}(H, x\mathbf{1}_H - d_{G,H})$ divides $\mathfrak{M}(T(H, u), x\mathbf{1}_H - d_{G,H})$. It follows from Theorem 2.7 that
$$ \begin{aligned} \mathfrak{M}(T(H, u), x\mathbf{1}_H - d_{G,H}) &= \det (xI - D_G(T(H, u)) - A(T(H, u))) \\ &= \varphi(D_G(T(H, u)) + A(T(H, u)), x), \end{aligned} $$
which establishes what we require. Since $\mathfrak{M}(G, x\mathbf{1}_G - d_G) = \mathcal{LM}(G, x)$ using (2.3), the ‘in particular’ statement immediately follows. $\square$
**Remark 3.3.** The matrix $D_G(T(G, u)) + A(T(G, u))$, which appeared in Corollary 3.2, is a symmetric diagonally dominant matrix with nonnegative diagonal entries, so all of its eigenvalues are nonnegative real numbers. Hence, Corollary 3.2 gives us another proof for the fact that all roots of the Laplacian matching polynomial are real and nonnegative which was also proved in Corollary 2.3.
It is well known that the largest zero of the matching polynomial of a graph is equal to the largest eigenvalue of the adjacency matrix of a path-tree of that graph. This fact is obtained by combining the Perron-Frobenius theorem [3, Theorem 2.2.1] and Theorems 2.7 and 3.1. The following theorem can be considered as an analogue of this fact. Indeed, it presents a linear algebraic technique for handling the largest zero of the Laplacian matching polynomial.
**Theorem 3.4.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then
$$ (3.1) \qquad \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})) = \lambda(D_G(T(H,u)) + A(T(H,u))). $$
In particular, $\lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u)))$. Also, the largest root of $\mathcal{LM}(G,x)$ has multiplicity 1.
*Proof.* We prove (3.1) by induction on $|V(H)|$. Clearly, (3.1) is valid for $|V(H)| = 1$. Assume that $|V(H)| \ge 2$. We first show that
$$ (3.2) \qquad \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) < \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). $$
To see (3.2), we apply Theorem 2.2 and (1.3) to get that
$$ \begin{align*} \lambda(\mathfrak{M}(H-u, x^2\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) &= \lambda(\mathcal{M}(S(G)-W-u, x)) \\ &< \lambda(\mathcal{M}(S(G)-W, x)) \\ &= \lambda(\mathfrak{M}(H, x^2\mathbf{1}_H - \mathbf{d}_{G,H})), \end{align*} $$
where $W = V(G) \setminus V(H)$. This clearly proves (3.2). Now, let $N_H(u) = \{u_1, \dots, u_k\}$ and let $H_i$ be the connected component of $H-u$ containing $u_i$ for $i=1, \dots, k$. By the induction hypothesis,
$$ (3.3) \qquad \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) = \lambda(D_G(T(H_i, u_i)) + A(T(H_i, u_i))) $$
for $i=1, \dots, k$. It is not hard to see that the $k \times k$ block diagonal matrix whose $i$th diagonal block is $D_G(T(H_i, u_i)) + A(T(H_i, u_i))$, say $R$, is a principal submatrix of $D_G(T(H, u)) + A(T(H, u))$ of size $|V(T(H, u))|-1$. Hence, by the interlacing theorem [3, Corollary 2.5.2], it follows that $\lambda(R)$ is greater than or equal to the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. Further, it follows from (3.3) and (3.2) that
$$ \begin{align*} \lambda(R) &= \max \left\{ \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) \middle| 1 \le i \le k \right\} \\ &= \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) \\ &< \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). \end{align*} $$
Thus, $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is strictly greater than the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. On the other hand, Corollary 3.2 implies that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is a zero of $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. So, we conclude that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is the largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. This completes the induction step and demonstrates that (3.1) holds.
For the 'in particular' statement, note that (3.1) and (2.3) yield that
$$ \lambda(D_G(T(G,u)) + A(T(G,u))) = \lambda(\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G)) = \lambda(\mathcal{LM}(G,x)), $$
and further, the connectedness of $G$ implies that $D_G(T(G,u)) + A(T(G,u))$ is an irreducible matrix with nonnegative entries, and consequently, its largest eigenvalue has multiplicity 1 by the Perron-Frobenius theorem [3, Theorem 2.2.1]. $\square$
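To make Theorem 3.4 concrete, consider $G = C_3$ with any vertex $u$: the path-tree $T(G, u)$ is the path $P_5$ and every diagonal entry of $D_G(T(G,u))$ is 2, so the theorem predicts that the largest zero of $\mathcal{LM}(C_3, x)$ equals $\lambda(2I + A(P_5)) = 2 + \sqrt{3}$. A brute-force numerical check (our illustration, assuming `numpy`):

```python
import itertools
import numpy as np

def lm_roots(n, edges):
    """Zeros of LM(G,x) = sum over matchings M of
    (-1)^{|M|} prod_{v not covered by M} (x - d_G(v)), by brute force."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    x = np.polynomial.Polynomial([0, 1])
    total = x * 0
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            covered = {v for e in sub for v in e}
            if len(covered) < 2 * r:   # edges overlap: not a matching
                continue
            term = x * 0 + (-1.0) ** r
            for v in range(n):
                if v not in covered:
                    term *= x - deg[v]
            total += term
    return np.sort(total.roots().real)

largest = lm_roots(3, [(0, 1), (1, 2), (0, 2)])[-1]   # G = C3

# Path-tree T(C3, u) is P5; D_G(T(G,u)) = 2I since every degree in C3 is 2.
A_P5 = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
lam = np.max(np.linalg.eigvalsh(2 * np.eye(5) + A_P5))

assert abs(largest - lam) < 1e-8      # both equal 2 + sqrt(3)
```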
**Corollary 3.5.** Let $G$ be a connected graph and $u \in V(G)$. Then
$$ (3.4) \qquad \lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) $$
with equality if and only if $G$ is a tree.
*Proof.* We first recall the fact that a graph $\Gamma$ is bipartite if and only if $\varphi(L(\Gamma), x) = \varphi(Q(\Gamma), x)$ [3, Proposition 1.3.10]. For each $P \in V(T(G, u))$, we have $d_{T(G,u)}(P) \le d_G(v(P))$, where $v(P)$ is the terminal vertex of the path $P$ in $G$. Therefore, $R = D_G(T(G, u)) + A(T(G, u)) - Q(T(G, u))$ has nonnegative entries, and thus, Theorem 3.4, the Perron-Frobenius theorem [3, Theorem 2.2.1], and the above-mentioned fact yield that
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &= \lambda(R + Q(T(G,u))) \\ &\ge \lambda(Q(T(G,u))) \\ &= \lambda(L(T(G,u))), \end{align} \tag{3.5} $$
proving (3.4). If $G$ is a tree, then $G$ is isomorphic to $T(G, u)$ and since $\mathcal{LM}(G,x) = \varphi(L(G), x)$ by Theorem 2.7, the equality in (3.4) is attained. Conversely, assume that the equality in (3.4) holds. Consequently, the equality in (3.5) occurs, and hence, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $R=0$. This means that $d_{T(G,u)}(P) = d_G(v(P))$ for each $P \in V(T(G, u))$. We assert that $G$ is a tree. Towards a contradiction, suppose that there is a cycle $C$ in $G$. As $G$ is connected, there is a path $P_1$ in $G$ which starts at $u$, none of whose internal vertices is on $C$, and with $v(P_1) \in V(C)$. Fix $w \in N_G(v(P_1)) \cap V(C)$ and let $P_2$ be the path on $C$ between $v(P_1)$ and $w$ whose length is more than 1. If $P$ is the path between $u$ and $w$ formed by $P_1$ and $P_2$, then it is clear that $d_{T(G,u)}(P) < d_G(v(P))$. This contradiction completes the proof. $\square$
In the following consequence, we give some lower bounds on the largest zero of the Laplacian matching polynomial.
**Corollary 3.6.** Let $G$ be a connected graph. Then
$$ \lambda(\mathcal{LM}(G,x)) \ge \max \left\{ \Delta(G) + 1, \delta(G) + \sqrt{\Delta(G)} \right\} $$
with equality if and only if $G$ is a star.
*Proof.* Let $u \in V(G)$ be of degree $\Delta(G)$. Then $d_{T(G,u)}(u) = d_G(u)$ and therefore $\Delta(T(G,u)) = \Delta(G)$. For each connected graph $\Gamma$, Proposition 3.9.3 of [3] states that $\lambda(L(\Gamma)) \ge \Delta(\Gamma) + 1$ with equality if and only if $\Delta(\Gamma) = |V(\Gamma)| - 1$. By this fact and Corollary 3.5, we obtain that $\lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) \ge \Delta(T(G,u)) + 1 = \Delta(G) + 1$, and moreover, the equality $\lambda(\mathcal{LM}(G,x)) = \Delta(G) + 1$ holds if and only if $G$ is a star.
For each connected graph $\Gamma$, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $\lambda(A(\Gamma)) \ge \sqrt{\Delta(\Gamma)}$ with equality if and only if $\Gamma$ is a star. Using this fact, Theorem 3.4, and the Weyl inequality [3, Theorem 2.8.1], we derive
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\ge \delta(G) + \lambda(A(T(G,u))) \\ &\ge \delta(G) + \sqrt{\Delta(T(G,u))} \\ &= \delta(G) + \sqrt{\Delta(G)}. \end{align} \tag{3.6} $$
Suppose that the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ holds. So, the equality in (3.6) is attained, and thus, $T(G, u)$ is a star. This implies that $G$ is a star, and then, $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ forces that $|V(G)| \le 2$. Since the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ is valid for stars on at most 2 vertices, the proof is complete. $\square$
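As a numerical illustration of Corollary 3.6 (ours, assuming `numpy`): for $G = C_3$ the bound is $\max\{3, 2+\sqrt{2}\} = 3$ while the largest zero is $2+\sqrt{3}$, so the inequality is strict, and for the star $K_{1,3}$ the bound $\Delta(G)+1 = 4$ is attained with equality:

```python
import itertools
import numpy as np

def lm_largest_zero(n, edges):
    """Largest zero of LM(G,x), by enumerating all matchings of G."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    x = np.polynomial.Polynomial([0, 1])
    total = x * 0
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            covered = {v for e in sub for v in e}
            if len(covered) < 2 * r:   # edges overlap: not a matching
                continue
            term = x * 0 + (-1.0) ** r
            for v in range(n):
                if v not in covered:
                    term *= x - deg[v]
            total += term
    return max(total.roots().real)

z_c3 = lm_largest_zero(3, [(0, 1), (1, 2), (0, 2)])     # 2 + sqrt(3)
z_star = lm_largest_zero(4, [(0, 1), (0, 2), (0, 3)])   # K_{1,3}

assert z_c3 > max(2 + 1, 2 + np.sqrt(2))   # strict: C3 is not a star
assert abs(z_star - 4) < 1e-8              # equality for the star
```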
In the following theorem, we establish (1.5) which slightly improves the second statement of Theorem 2.6 of [11].
**Theorem 3.7.** Let $G$ be a connected graph with $\Delta(G) \ge 2$ and let $\ell(G)$ be the length of the longest path in $G$. Then,
$$ (3.7) \qquad \lambda(\mathcal{LM}(G, x)) \le \Delta(G) + 2\sqrt{\Delta(G)-1} \cos \frac{\pi}{2\ell(G)+2} $$
with equality if and only if $G$ is a cycle.
*Proof.* For simplicity, let $\Delta = \Delta(G)$ and $\ell = \ell(G)$. For all integers $d \ge 1$ and $k \ge 2$, the Bethe tree $B_{d,k}$ is a rooted tree with $k$ levels in which the root vertex is of degree $d$, the vertices on levels $2, \dots, k-1$ are of degree $d+1$, and the vertices on level $k$ are of degree 1. By Theorem 7 of [13],
$$ (3.8) \qquad \lambda(A(B_{d,k})) = 2\sqrt{d} \cos \frac{\pi}{k+1}. $$
Let $u \in V(G)$. It is not hard to check that $T(G, u)$ is isomorphic to a subgraph of $B_{\Delta-1,2\ell+1}$. For this, it is enough to map $u \in V(T(G, u))$ to an arbitrary vertex on level $\ell+1$ in $B_{\Delta-1,2\ell+1}$. By applying Theorem 3.4, the Weyl inequality [3, Theorem 2.8.1], the interlacing theorem [3, Corollary 2.5.2], and (3.8), we derive
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\le \lambda(D_G(T(G,u))) + \lambda(A(T(G,u))) \tag{3.9} \\ &\le \Delta + \lambda(A(B_{\Delta-1,2\ell+1})) \\ &= \Delta + 2\sqrt{\Delta-1} \cos \frac{\pi}{2\ell+2}, \end{align} $$
proving (3.7). Now, assume that the equality in (3.7) is achieved. Therefore, the equality in (3.9) occurs, and thus, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $T(G, u)$ is isomorphic to $B_{\Delta-1,2\ell+1}$. Since $\Delta \ge 2$, one can easily obtain that $G$ is a cycle. Conversely, if $G$ is a cycle, then $T(G, u)$ is a path on $2\ell+1$ vertices. By Theorem 3.4 and (3.8), we get
$$ \lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u))) = 2 + \lambda(A(B_{1,2\ell+1})) = 2 + 2\cos \frac{\pi}{2\ell+2}. $$
This completes the proof. $\square$
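Theorem 3.7 can also be checked numerically for a small cycle (our illustration, assuming `numpy`): for $G = C_4$ we have $\Delta = 2$ and $\ell = 3$, so the right-hand side of (3.7) is $2 + 2\cos(\pi/8)$, and the cycle attains it with equality:

```python
import itertools
import numpy as np

def lm_largest_zero(n, edges):
    """Largest zero of LM(G,x), by enumerating all matchings of G."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    x = np.polynomial.Polynomial([0, 1])
    total = x * 0
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            covered = {v for e in sub for v in e}
            if len(covered) < 2 * r:   # edges overlap: not a matching
                continue
            term = x * 0 + (-1.0) ** r
            for v in range(n):
                if v not in covered:
                    term *= x - deg[v]
            total += term
    return max(total.roots().real)

# C4: Delta = 2 and the longest path has length l = 3, so the bound of
# (3.7) is 2 + 2*cos(pi/8); equality should hold since C4 is a cycle.
zero = lm_largest_zero(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
bound = 2 + 2 * np.cos(np.pi / 8)
assert abs(zero - bound) < 1e-8
```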
Stevanović [15] proved that the eigenvalues of the adjacency matrix of a tree $T$ are less than $2\sqrt{\Delta(T)-1}$. The corollary below gives an improvement of this upper bound for the subdivision of trees.
**Corollary 3.8.** Let $G$ be a graph with $\Delta(G) \ge 2$. Then
$$ (3.10) \qquad \lambda(\mathcal{M}(S(G), x)) < 1 + \sqrt{\Delta(G)-1}. $$
In particular, if $F$ is a forest with $\Delta(F) \ge 2$, then $\lambda(A(S(F))) < 1 + \sqrt{\Delta(F)-1}$.
*Proof.* It follows from Theorem 3.7 that $\lambda(\mathcal{LM}(G, x)) < \Delta(G) + 2\sqrt{\Delta(G)-1}$. Moreover, it follows from Corollary 2.3 that $\lambda(\mathcal{M}(S(G), x)) = \sqrt{\lambda(\mathcal{LM}(G, x))}$. From these, we find that
$$ \lambda(\mathcal{M}(S(G), x)) < \sqrt{\Delta(G) + 2\sqrt{\Delta(G) - 1}} = 1 + \sqrt{\Delta(G) - 1}, $$
proving (3.10). As the subdivision of a forest is a forest, the 'in particular' statement follows from Theorem 2.7 and (3.10). $\square$
**Remark 3.9.** Note that $\Delta(S(G)) = \Delta(G)$ for every graph $G$ with $\Delta(G) \ge 2$. So, for the subdivision of a graph with maximum degree at least 2, the upper bound appearing in (3.10) is sharper than the upper bound that comes from (1.1).
We demonstrated in Theorem 3.4 that the largest zero of the Laplacian matching polynomial has multiplicity 1. In the following theorem, we prove the remaining statements of (1.7) as analogues of the results given in (1.3).
**Theorem 3.10.** Let $G$ be a graph and let $n = |V(G)|$. For each edge $e \in E(G)$, the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$ interlace in the sense that, if $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ are respectively the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$. Also, if $G$ is connected, then $\lambda(\mathcal{LM}(G, x)) > \lambda(\mathcal{LM}(H, x))$ for any proper subgraph $H$ of $G$.
*Proof.* Fix an edge $e \in E(G)$ and denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ be the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, respectively. Corollary 2.3 yields that $\sqrt{\alpha_1} \le \dots \le \sqrt{\alpha_n}$ is the end part of the nondecreasing sequence consisting of all the zeros of $\mathcal{M}(S(G), x)$ and $\sqrt{\beta_1} \le \dots \le \sqrt{\beta_n}$ is the end part of the nondecreasing sequence consisting of all the zeros of $\mathcal{M}(S(G-e), x)$. As $S(G-e) = S(G) - v_e$, it follows from (1.3) that the zeros of $\mathcal{M}(S(G), x)$ and $\mathcal{M}(S(G-e), x)$ interlace. So, we find that
$$ \sqrt{\beta_1} \le \sqrt{\alpha_1} \le \sqrt{\beta_2} \le \sqrt{\alpha_2} \le \dots \le \sqrt{\beta_n} \le \sqrt{\alpha_n} $$
which means that $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$, as desired.
Now, assume that $G$ is connected. Let $H$ be a proper subgraph of $G$ and let $u \in V(H)$. As $T(H, u)$ is a proper subgraph of $T(G, u)$, if $R$ denotes the submatrix of $D_G(T(G, u)) + A(T(G, u))$ corresponding to the vertices in $V(T(H, u))$, then $R - (D_H(T(H, u)) + A(T(H, u)))$ is a nonzero matrix with nonnegative entries. So, by applying Theorem 3.4 and the Perron-Frobenius theorem [3, Theorem 2.2.1], we get
$$ \begin{align*}
\lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\
&> \lambda(R) \\
&> \lambda(D_H(T(H,u)) + A(T(H,u))) \\
&= \lambda(\mathcal{LM}(H,x)). \quad \square
\end{align*} $$
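The interlacing statement of Theorem 3.10 can be observed numerically (our illustration, assuming `numpy`) for $G = C_4$ and $G - e = P_4$, by brute-force enumeration of matchings; note that $\mathcal{LM}(G-e, x)$ uses the degrees of $G - e$ itself:

```python
import itertools
import numpy as np

def lm_zeros(n, edges):
    """Sorted zeros of LM(G,x), by enumerating all matchings of G."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    x = np.polynomial.Polynomial([0, 1])
    total = x * 0
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            covered = {v for e in sub for v in e}
            if len(covered) < 2 * r:   # edges overlap: not a matching
                continue
            term = x * 0 + (-1.0) ** r
            for v in range(n):
                if v not in covered:
                    term *= x - deg[v]
            total += term
    return np.sort(total.roots().real)

alpha = lm_zeros(4, [(0, 1), (1, 2), (2, 3), (3, 0)])   # C4
beta = lm_zeros(4, [(0, 1), (1, 2), (2, 3)])            # C4 - e = P4

# beta_1 <= alpha_1 <= beta_2 <= alpha_2 <= ... <= beta_n <= alpha_n
tol = 1e-8
for i in range(4):
    assert beta[i] <= alpha[i] + tol
    if i + 1 < 4:
        assert alpha[i] <= beta[i + 1] + tol
```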
**Remark 3.11.** For every graph $G$ and real number $\alpha$, let $m_G(\alpha)$ denote the multiplicity of $\alpha$ as a root of $\mathcal{LM}(G,x)$. As a consequence of Theorem 3.10, we have $|m_G(\alpha) - m_{G-e}(\alpha)| \le 1$ for each edge $e \in E(G)$.
It is known that among all trees with a fixed number of vertices the path has the smallest value of the largest Laplacian eigenvalue [12]. The following result can be considered as an analogue of this fact and is obtained from Theorems 2.7 and 3.10.
**Corollary 3.12.** Let $P_n$ and $K_n$ be the path and the complete graph on $n$ vertices, respectively. For any connected graph $G$ on $n$ vertices which is neither $P_n$ nor $K_n$,
$$ \lambda(\mathcal{LM}(P_n, x)) < \lambda(\mathcal{LM}(G, x)) < \lambda(\mathcal{LM}(K_n, x)). $$
### 4. CONCLUDING REMARKS
In this paper, we have discovered some properties of the location of the zeros of the Laplacian matching polynomial. Most of our results can be considered as analogues of known results on the matching polynomial. Compared to the matching polynomial, the Laplacian matching polynomial contains not only the information on the sizes of matchings in the graph but also the vertex degrees of the graph. Hence, it seems that more structural properties of graphs can be reflected by the Laplacian matching polynomial than by the matching polynomial. For instance, $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a forest, while $0$ is a root of $\mathcal{M}(G,x)$ if and only if $G$ has no perfect matchings.
More interesting facts about the Laplacian matching polynomial deserve further investigation. For example, one may focus on the multiplicities of the zeros of the Laplacian matching polynomial, as there are many results on the multiplicities of the zeros of the matching polynomial. In view of Remark 3.11, for every graph $G$ and real number $\alpha$, one may divide $E(G)$ into three subsets based on how the multiplicity of $\alpha$ changes when an edge of $G$ is removed. The corresponding problem for the matching polynomial was investigated by Chen and Ku [8]. Also, it is a known result that the multiplicity of a zero of the matching polynomial is at most the path partition number of the graph, that is, the minimum number of vertex-disjoint paths required to cover all the vertices of the graph [4, Theorem 6.4.5]. It seems an interesting problem to find a sharp upper bound on the multiplicity of a zero of the Laplacian matching polynomial.
### REFERENCES
[1] N. Amini, Spectrahedrality of hyperbolicity cones of multivariate matching polynomials, Journal of Algebraic Combinatorics 50 (2019) 165–190.

[2] J.A. Bondy, U.S.R. Murty, Graph Theory, Graduate Texts in Mathematics, Volume 244, Springer, New York, 2008.

[3] A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Springer, New York, 2012.

[4] C.D. Godsil, Algebraic Combinatorics, Chapman and Hall Mathematics Series, Chapman & Hall, New York, 1993.

[5] C.D. Godsil, Matchings and walks in graphs, Journal of Graph Theory 5 (1981) 285–297.

[6] C.D. Godsil, I. Gutman, On the theory of the matching polynomial, Journal of Graph Theory 5 (1981) 137–144.

[7] O.J. Heilmann, E.H. Lieb, Theory of monomer-dimer systems, Communications in Mathematical Physics 25 (1972) 190–232.

[8] C.Y. Ku, W. Chen, An analogue of the Gallai–Edmonds structure theorem for non-zero roots of the matching polynomial, Journal of Combinatorial Theory—Series B 100 (2010) 119–127.

[9] J.A. Makowsky, E.V. Ravve, N.K. Blanchard, On the location of roots of graph polynomials, European Journal of Combinatorics 41 (2014) 1–19.

[10] A.W. Marcus, D.A. Spielman, N. Srivastava, Interlacing families I: Bipartite Ramanujan graphs of all degrees, Annals of Mathematics—Second Series 182 (2015) 307–325.

[11] A. Mohammadian, Laplacian matching polynomial of graphs, Journal of Algebraic Combinatorics 52 (2020) 33–39.

[12] M. Petrović, I. Gutman, The path is the tree with smallest greatest Laplacian eigenvalue, Kragujevac Journal of Mathematics 24 (2002) 67–70.

[13] O. Rojo, M. Robbiano, An explicit formula for eigenvalues of Bethe trees and upper bounds on the largest eigenvalue of any tree, Linear Algebra and its Applications 427 (2007) 138–150.

[14] H. Sachs, Beziehungen zwischen den in einem Graphen enthaltenen Kreisen und seinem charakteristischen Polynom, Publicationes Mathematicae Debrecen 11 (1964) 119–134.

[15] D. Stevanović, Bounding the largest eigenvalue of trees in terms of the largest vertex degree, Linear Algebra and its Applications 360 (2003) 35–42.

[16] W. Yan, Y.-N. Yeh, On the matching polynomial of subdivision graphs, Discrete Applied Mathematics 157 (2009) 195–200.

[17] Y. Zhang, H. Chen, The average Laplacian polynomial of a graph, Discrete Applied Mathematics 283 (2020) 737–743.
samples/texts_merged/213815.md
Design and Performance of a 24 GHz Band FM-CW Radar System and Its Application
Kazuhiro Yamaguchi\*, Mitsumasa Saito\†, Kohei Miyasaka\* and Hideaki Matsue\*
\* Tokyo University of Science, Suwa
† CQ-S net Inc., Japan
Email: yamaguchi@rs.tus.ac.jp, matsue@rs.suwa.tus.ac.jp, saitoh@kpe.biglobe.ne.jp
*Abstract*—This paper describes the design and performance of a 24 GHz band FM-CW (Frequency-Modulated Continuous-Wave) radar system. The principle for measuring the distance and the small displacement of a target object is described, and a differential detection method for detecting only the target is proposed for environments in which multiple objects are located. In computer simulation, the basic performance of the FM-CW radar system is analyzed in terms of the distance resolution and the error value for various sampling times and sweep bandwidths. Furthermore, the FM-CW radar system with the proposed differential detection method can clearly detect only the target object in a multiple-object environment, and a small displacement within 3.11 mm can be measured. In experiments, the performance in measuring distance and displacement is described using the designed 24 GHz FM-CW radar system. As a result, it is confirmed that the 24 GHz FM-CW radar system with the proposed differential detection method is effective for measuring a target in environments in which multiple objects are located.
Fig. 1. Sawtooth frequency modulation.
I. INTRODUCTION
Radar systems in the 24 GHz band are based on ARIB standard T73 [1] as sensors for detecting or measuring moving objects for specified low-power radio stations. The 24 GHz band radar system can be applied in various fields, such as security and medical imaging, in both indoor and outdoor environments. Various radar systems have been proposed [2], [3], [4], [5]. The pulsed radar system measures the period between the transmission and reception of the signal. The pulsed radar can detect the distance in the far field; however, a target in the near field cannot be detected correctly. The Doppler radar system measures the frequency difference between the reflected and transmitted signals. The Doppler radar can detect the moving velocity of the target; however, the distance of the target cannot be detected. The FM-CW (Frequency-Modulated Continuous-Wave) radar system [6], [7] is the most widely used for detecting the distance of a target object in the near field and the small displacement of the target.
In this paper, we developed a 24 GHz FM-CW radar system for measuring the distance and displacement of an object when the object is static or moves very slowly. The basic performance of the 24 GHz FM-CW radar system for measuring a target object is analyzed by computer simulation. Moreover, we propose a differential detection method for the signal processing in the FM-CW radar system in order to detect only the target object in environments where multiple objects are located. Furthermore, an example application of the 24 GHz FM-CW radar system is shown in the experiments.
This paper consists of the following sections. Section II describes the principle of the FM-CW radar system. Section III describes and analyzes the basic performance and the proposed differential detection method in computer simulation. Section IV shows the experimental results with the 24 GHz FM-CW radar system. Finally, Section V concludes this paper.
II. PRINCIPLE OF THE FM-CW RADAR
FM-CW (Frequency-Modulated Continuous-Wave) radar is a radar transmitting a continuous carrier modulated by a periodic function, such as a sawtooth wave, to provide range data, as shown in Fig. 1. Fig. 2 shows the block diagram of a FM-CW radar system [8].
In the FM-CW radar system, the frequency-modulated signal from the VCO is transmitted by the transmitter Tx, and the signals reflected from the targets are received at the receiver Rx. The transmitted and received signals are multiplied by a mixer, generating beat signals. The beat signal passes through a low-pass filter, and an output signal is obtained. In this process, the frequency of the input signal is varied with time at the VCO. The modulation waveform follows a linear sawtooth pattern [9], as shown in Fig. 1. This figure illustrates the frequency-time relation in the FM-CW radar; the red line denotes the transmitted signal and the blue line denotes the received signal. Here, f₀ denotes the center frequency, fₛ denotes the frequency bandwidth of the sweep, and tₛ denotes the period of the sweep.
We define the transmitted signal $V_T(f, x)$ at the transmitter Tx in Fig. 2 as
$$ V_T(f, x) = A e^{j \frac{2\pi f}{c} x} , \quad (1) $$
Fig. 2. Block diagram of a FM-CW radar system.
where *f* denotes the frequency at a given time, *x* denotes the distance between a target and the transmitter, *A* denotes an amplitude value and *c* denotes the speed of light.
The reflected signal $V_R(f, x)$ at the receiver Rx in Fig. 2 is represented as
$$ V_R(f, x) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k - x)} , \quad (2) $$
where $\gamma_k$ and $\varphi_k$ are the reflectivity coefficients for amplitude and phase of the $k$th target, respectively, $\alpha_k$ denotes the amplitude coefficient for the transmission loss from the $k$th target, and $d_k$ is the distance between the transmitter and the $k$th target.
Here, at the receiver whose position is $x = 0$, Eq. (2) is rewritten as
$$ V_R(f, 0) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k)} . \quad (3) $$
The beat signal is generated by multiplying the transmitted signal in Eq. (1) and the received signal in Eq. (3) at the position $x = 0$. After the LPF, the output signal $V_{\text{out}}(f, 0)$ is given by
|
| 74 |
+
|
| 75 |
+
$$ V_{\text{out}}(f, 0) = \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} . \quad (4) $$
|
| 76 |
+
|
By signal processing, the distance and the displacement of the target are obtained from the output signal in Eq. (4). Using the Fourier transform, the distance spectrum of the output signal $P(x)$ is calculated as follows:

$$
\begin{align}
P(x) &= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} V_{\text{out}} e^{-j \frac{4\pi f}{c} x} df \nonumber \\
&= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} e^{-j \frac{4\pi f x}{c}} df \nonumber \\
&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} e^{j \frac{4\pi f (d_k - x)}{c}} df \nonumber \\
&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} . \tag{5}
\end{align}
$$

The amplitude of the distance spectrum $|P(x)|$ in Eq. (5) is given as

$$
\begin{aligned}
|P(x)| &= A^2 \left| \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right| \\
&\leq A^2 f_w \sum_{k=1}^{K} \alpha_k \gamma_k \left| \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right|, \quad (6)
\end{aligned}
$$

with equality if and only if the phase components $\varphi_k + \frac{4\pi f_0 (d_k - x)}{c}$ are equal for all $k$.

Here, we assume that the number of targets is 1. The distance spectrum in Eq. (5) is then rewritten as

$$ P(x) = A^2 \alpha_1 \gamma_1 e^{j \varphi_1} e^{j \frac{4\pi f_0 (d_1 - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_1 - x)}{c}\right\}}{\frac{2\pi f_w (d_1 - x)}{c}}, \quad (7) $$

and the amplitude of the distance spectrum is given as

$$ |P(x)| = A^2 \alpha_1 \gamma_1 f_w \left| \frac{\sin\left\{\frac{2\pi f_w (d_1-x)}{c}\right\}}{\frac{2\pi f_w (d_1-x)}{c}} \right|. \quad (8) $$

This equation indicates that the distance to the target is obtained from the amplitude of the distance spectrum.

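As a numerical check of Eqs. (4)-(8), the following sketch (our illustration, not the authors' code) samples the de-ramped output of Eq. (4) over the sweep band and evaluates a discretized form of Eq. (5) on a distance grid; the magnitude peaks at the target distance.

```python
import numpy as np

# Our illustration of Eqs. (4)-(5), not the authors' code.
c = 3e8
f0, fw = 24.15e9, 200e6
n = 1024
f = f0 - fw / 2 + fw * np.arange(n) / n          # sampled sweep frequencies
d_true = 10.0                                     # single target at 10 m

v_out = np.exp(1j * 4 * np.pi * f * d_true / c)   # Eq. (4) with A = alpha_1 = gamma_1 = 1

# Discretized Eq. (5): correlate against exp(-j 4 pi f x / c) on a distance grid.
x = np.linspace(0, 50, 2001)
P = np.abs(v_out @ np.exp(-1j * 4 * np.pi * np.outer(f, x) / c))
d_est = x[np.argmax(P)]                           # peak of |P(x)| gives the distance
```

With these values the peak lands on the 10 m grid point, and its sinc-shaped width reflects the resolution set by the sweep bandwidth $f_w$.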
The phase of the distance spectrum $\angle P(x)$ is represented as

$$ \angle P(x) = \varphi_1 + \frac{4\pi f_0 (d_1 - x)}{c} = \theta_1(x) . \quad (9) $$

Since $\theta_1(x)$ satisfies $-\pi \leq \theta_1(x) \leq \pi$, the displacement of the target is bounded by

$$ \frac{c(-\pi - \varphi_1)}{4\pi f_0} \leq d_1 \leq \frac{c(\pi - \varphi_1)}{4\pi f_0} . \quad (10) $$

If the phase value satisfies $\varphi_1 = 0$, Eq. (10) becomes $-3.11 [\text{mm}] \leq d_1 \leq +3.11 [\text{mm}]$ with $f_0 = 24.15 [\text{GHz}]$. That is, a small displacement of the target within $\pm 3.11 [\text{mm}]$ is obtained from the phase of the distance spectrum.

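The ±3.11 mm figure follows directly from Eq. (10); a one-line check of the arithmetic:

```python
# Check of the +/-3.11 mm bound in Eq. (10): with varphi_1 = 0 and the phase
# wrapped to [-pi, pi], the unambiguous displacement is |d_1| <= c / (4 * f0).
c = 3e8          # speed of light [m/s]
f0 = 24.15e9     # center frequency [Hz]

d_bound_mm = c / (4 * f0) * 1e3   # metres -> millimetres, about 3.11 mm
```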
---PAGE_BREAK---

TABLE I. PARAMETERS IN COMPUTER SIMULATIONS

<table><thead><tr><td>Parameters</td><td>Value</td></tr></thead><tbody><tr><td>Center frequency</td><td>24.15 GHz</td></tr><tr><td>Bandwidth</td><td>50, 100, 200, 400 MHz</td></tr><tr><td>Sweep time</td><td>1024 µs</td></tr><tr><td>Sampling time of sweep</td><td>0.1, 1, 10 µs</td></tr><tr><td>Number of FFT points</td><td>4096</td></tr><tr><td>Window function</td><td>Hamming</td></tr></tbody></table>

Fig. 3. Resolution for distance spectrum according to sweep bandwidth.

On the other hand, the maximum measurable distance $d_{\max}$ is

$$
\begin{aligned}
\Delta f &= \frac{f_w}{t_w/t_s} [\text{Hz}] \, , \\
d_{\max} &= \frac{c}{4\Delta f} [\text{m}] \, ,
\end{aligned}
\quad (11) $$

where $t_w$ denotes the sweep time and $t_s$ denotes the sampling interval. For example, with $t_w = 1024$ [µs] and $t_s = 1$ [µs], the maximum distance is $d_{\max} = 384$ [m].

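The 384 m example can be reproduced directly from Eq. (11); the sweep bandwidth $f_w = 200$ MHz is assumed, as in Table I:

```python
# Eq. (11) with the paper's example values (f_w = 200 MHz assumed, as in Table I).
c = 3e8
f_w = 200e6        # sweep bandwidth [Hz]
t_w = 1024e-6      # sweep time [s]
t_s = 1e-6         # sampling interval [s]

delta_f = f_w / (t_w / t_s)   # frequency step per sample [Hz]
d_max = c / (4 * delta_f)     # maximum measurable distance [m], here 384 m
```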
III. COMPUTER SIMULATION

A. Basic Performance

First, we describe the basic performance of the FM-CW radar in the 24 GHz band. The parameters for the computer simulations are listed in Table I. The center frequency is 24.15 GHz, and the bandwidths are 50, 100, 200, and 400 MHz. Note that the 400 MHz bandwidth is used only in the computer simulations because of the standards of the Radio Law in Japan. The sweep time is 1024 µs, the sampling times of the sweep are 0.1, 1, and 10 µs, the number of FFT points is 4096, and the Hamming window is adopted as the window function in the signal processing.

We assume that a static target is located at 10 m from the transmitter and receiver, and the distance spectra are output with various parameters. Fig. 3 shows the amplitude of the distance spectrum versus measured distance for various sweep bandwidths. The result shows that the sweep bandwidth influences the distance resolution, and a wider bandwidth improves the resolution. In the case with $t_s = 1$ µs, the distance resolutions with $f_w = 50, 100, 200, 400$ MHz are ±5, ±1.5, ±1, ±0.5 m, respectively. Fig. 4 shows the amplitude of the distance spectrum versus measured distance for various sampling times. The result shows that

Fig. 4. Error value for distance spectrum according to sampling interval.

Fig. 5. Distance spectrum for measuring moving target.

the sampling interval influences the error in the measured distance, and a shorter sampling interval reduces the distance error. In the case with $f_w = 200$ MHz, the error in the measured distance with $t_s = 10$ µs is about 0.5 m.

Fig. 5 shows the result of measuring a slowly moving target with $f_w = 200$ MHz and $t_s = 1$ µs. The target moved from 10 m to 20 m at intervals of 0.5 m. Fig. 5(a) shows

---PAGE_BREAK---

Fig. 6. Measured displacement.

the amplitude value versus measured distance versus target distance in a 3-dimensional view, and Fig. 5(b) shows the measured distance versus target distance in a 2-dimensional view. The color in (b) corresponds to the strength of the amplitude in (a). From these figures, it is confirmed that the distance can be measured correctly according to the positions of the moving target.

Fig. 6 shows the result of measuring a target with a small displacement, where the measured displacement is plotted against the target displacement. The object is located at 10 m from the receiver and moved from -5 mm to 5 mm at intervals of 0.1 mm. The small displacement is measured from the phase of the distance spectrum, and the measured displacement corresponds to the target displacement. Note that the measured displacement is a relative displacement and does not correspond to the absolute distance between the receiver and the target object. A small displacement within ±3.11 mm is correctly measured with the parameters of the FM-CW radar system in this paper; however, displacements beyond ±3.11 mm are ambiguous.

## B. Proposed target detection

As mentioned in the above section, the FM-CW radar system can measure the distance and the small displacement of a single target object. However, it is a special case that only the signal reflected from one target is received at the receiver. In general, the receiver may receive reflected signals from many objects. Therefore, when other objects are present while measuring the target distance, signal processing that extracts the distance spectrum of only the target is required.

The proposed method removes the signals from the other objects by using differential detection of the distance spectrum. Fig. 7 shows the distance spectrum when the target object moves from 10 m to 20 m and the other objects are located at 15 m and 20 m. Because the transmitted signal is reflected by the target and the other objects, the receiver receives several reflected signals. Therefore, the distance spectra of the other objects are also generated by the FM-CW radar system in Fig. 7(a), and the distance spectrum of the target cannot be detected clearly. In particular, when the reflection coefficient of the target is lower than that of the other objects, the distance spectrum of the other objects has a higher amplitude than that of the target.

Fig. 7. Distance spectrum for measuring moving target distance with / without the differential detection under environments in which multiple objects are located.

In the proposed differential detection, first, the distance spectrum of the other objects, $P_0$, is generated beforehand, as in Fig. 7(a). Then, $P_0$ is subtracted from the distance spectrum $P$ of the target and the other objects. The differential detection yields $P - P_0$, a distance spectrum from which the responses of the other objects are removed. Therefore, only the distance spectrum of the desired target is detected. Fig. 7(b) shows the distance spectrum obtained with the proposed differential detection method, where the distance spectrum of the target is correctly measured. Comparing the measured distance spectra in Fig. 7(a) and (b), it is clearly confirmed that the proposed method can detect the target distance by using the differential detection. The proposed differential detection can effectively extract the distance of a moving or static target from the multiple reflections of the static background objects.

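The subtraction step can be sketched with the sinc-shaped peaks of Eq. (8). This is our own illustration (not the authors' code); the background positions mirror Fig. 7 (15 m and 20 m), while the weaker target at 12 m is our choice.

```python
import numpy as np

# Sketch of the proposed differential detection (our illustration): record the
# background spectrum P0 beforehand, then subtract it from the live spectrum P.
def peak(x, d, fw=200e6, c=3e8):
    """|sinc|-shaped single-target distance spectrum of Eq. (8), amplitude factor 1."""
    return np.abs(np.sinc(2 * fw * (d - x) / c))  # np.sinc(u) = sin(pi*u)/(pi*u)

x = np.linspace(0, 30, 3001)
P0 = peak(x, 15.0) + peak(x, 20.0)        # static background objects
P = P0 + 0.5 * peak(x, 12.0)              # weaker moving target added at 12 m

d_raw = x[np.argmax(P)]                   # without subtraction: a background peak wins
d_diff = x[np.argmax(P - P0)]             # with subtraction: the target at 12 m
```

Because the target's response is weaker than the background's, the raw spectrum's largest peak is a background object, while `P - P0` correctly peaks at the target.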
# IV. EXPERIMENTS

In order to evaluate the effectiveness of the proposed method for detecting the target distance and displacement, we developed an FM-CW radar system and carried out experiments with it in an actual environment. Table II lists the parameters. The developed FM-CW radar system obtained a certificate of conformity with the technical regulations in

---PAGE_BREAK---

TABLE II. PARAMETERS IN EXPERIMENTS

<table><thead><tr><td>Parameters</td><td>Value</td></tr></thead><tbody><tr><td>Center frequency f<sub>0</sub></td><td>24.15 GHz</td></tr><tr><td>Sweep bandwidth f<sub>w</sub></td><td>200 MHz</td></tr><tr><td>Sweep time t<sub>w</sub></td><td>1024 μs</td></tr><tr><td>Sampling time of sweep t<sub>s</sub></td><td>1 μs</td></tr><tr><td>Transmitter power output</td><td>0.007 W</td></tr><tr><td>Antenna gain</td><td>11 dBi</td></tr><tr><td>Range of distance</td><td>0 - 100 m</td></tr><tr><td>Range of relative displacement</td><td>±3.11 mm</td></tr></tbody></table>

Fig. 8. Distance spectrum for measuring moving target distance with / without the differential detection.

Article 38-6 Paragraph 1 of the Radio Law in Japan, and it conforms to the ARIB STD-T73 standard in Japan [1].

## A. Distance Spectrum

Fig. 8 shows the distance spectrum of a moving target. A person walked away from the FM-CW radar and then came close, between 2 [m] and 10 [m]. In Fig. 8(a), several distance spectra of the person and the background objects are output, and the distance spectrum of the moving person is not clearly detected. In order to detect the distance spectrum of the moving person with the differential detection method, the distance spectrum without the person is measured beforehand. By generating the distance spectrum of the background objects beforehand, the distance spectrum of the moving person is correctly detected in Fig. 8(b) with the proposed differential detection. Therefore, the FM-CW radar system can measure the movement of the target person effectively.

Fig. 9 shows the result of measuring the small displacement due to human breathing. The movement of the person's chest is measured within the range of relative small displacement. In Fig. 9, the period of breathing is detected to be about 4 [s], and the breathing movement is within about ±2 [mm].

## B. Example of application

Finally, we show an example of an application of the 24 GHz FM-CW radar system. Fig. 10 shows the setup of the FM-CW radar system for detecting human breathing in an actual environment. The FM-CW radar satisfies the safety guideline; the details of the safety guideline are described in the Appendix.

Fig. 11 shows the example of detecting human breathing.

Fig. 9. Displacement for measuring the movement of human breathing.

Fig. 10. Setup of FM-CW Radar for detecting human breathing.

Fig. 11. Example of application.

---PAGE_BREAK---

The distance spectrum in this example is measured according to the following flow:

1) Measure the distance spectrum without any person.

2) A person comes to the bed. The radar receives signals reflected from the person's body.

3) The person lies asleep on the bed. The radar detects the person's breathing movement.

By generating the distance spectrum of the background objects without the person beforehand, only the distance spectrum of the person is detected. When the person comes within the range of the radar, the radar system detects the signals reflected from the person, and the distance spectra of the person's body are detected. After the person lies on the bed, the radar system detects the small displacement due to the person's breathing movement. By using the differential detection method, the distance and the small displacement of the moving object are clearly detected.

## V. CONCLUSION

In this paper, the design and performance of an FM-CW radar system in the 24 GHz band are described. In computer simulations, the basic performance of the FM-CW radar system is analyzed in terms of the distance resolution and the error value according to the sweep bandwidth and the sampling interval, respectively. Moreover, a differential detection method that extracts only the target object is proposed for measuring the distance and the displacement of the target in environments where multiple objects are located. In the experiments, the distance spectrum of the target object is clearly detected by using the differential detection method in such environments. Furthermore, an example of an application for detecting human breathing movement is shown. As a result, the 24 GHz FM-CW radar with the proposed differential detection method effectively detects the distance and the small displacement in environments where multiple objects are located.

## ACKNOWLEDGMENT

A part of this work was supported by the "Ashita wo Ninau Kanagawa Venture Project" of Kanagawa Prefecture, Japan.

The authors thank Prof. Toshio Nojima of Hokkaido University, Japan, for his valuable advice on analyzing the safety properties of the developed FM-CW radar system according to the safety guideline.

## REFERENCES

[1] ARIB STD-T73 Rev. 1.1, *Sensors for Detecting or Measuring Mobile Objects for Specified Low Power Radio Station*, Association of Radio Industries and Businesses Std.

[2] S. Miyake and Y. Makino, "Application of millimeter-wave heating to materials processing (special issue: recent trends on microwave and millimeter wave application technology)," *IEICE Transactions on Electronics*, vol. 86, no. 12, pp. 2365-2370, Dec. 2003.

[3] M. Skolnik, *Introduction to Radar Systems*. McGraw Hill, 2003.

[4] S. Fujimori, T. Uebo, and T. Iritani, "Short-range high-resolution radar utilizing standing wave for measuring of distance and velocity of a moving target," *Electronics and Communications in Japan Part I: Communications*, vol. 89, no. 5, pp. 52-60, 2006.

[5] T. Uebo, Y. Okubo, and T. Iritani, "Standing wave radar capable of measuring distances down to zero meters," *IEICE Transactions on Communications*, vol. 88, no. 6, pp. 2609-2615, Jun. 2005.

[6] T. Saito, T. Ninomiya, O. Isaji, T. Watanabe, H. Suzuki, and N. Okubo, "Automotive FM-CW radar with heterodyne receiver," *IEICE Transactions on Communications*, vol. 79, no. 12, pp. 1806-1812, Dec. 1996.

[7] W. Butler, P. Poitevin, and J. Bjomholt, "Benefits of wide area intrusion detection systems using FMCW radar," in *Security Technology, 2007 41st Annual IEEE International Carnahan Conference on*, Oct. 2007, pp. 176-182.

[8] M. Skolnik, *Radar Handbook, Third Edition*. McGraw-Hill Education, 2008.

[9] W. Sediono and A. Lestari, "2D image reconstruction of radar INDERA," in *Mechatronics (ICOM), 2011 4th International Conference On*, May 2011, pp. 1-4.

[10] C95.1-2005, *IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz*, IEEE Std.

[11] Ministry of Internal Affairs and Communications. [Online]. Available: http://www.tele.soumu.go.jp/resource/j/material/dwn/guide38.pdf

# APPENDIX

In general, electromagnetic waves must satisfy the guidelines on human exposure to electromagnetic fields that have been instituted by various organizations. Examples are IEEE C95.1 in the USA [10] and the ICNIRP guidelines in Europe; the MIC has also instituted a guideline in Japan [11].

The 24 GHz FM-CW radar developed in this paper has the following properties. The transmitter power is 7 [mW], the transmitting antenna gain is 11 [dBi], the effective radiated power is 88 [mW], the radiation angle of the transmitted wave is about 50 [degrees], and the distance between the transmitter and the human is 2.5 [m]. According to the radar equation, the electric field strength $E$ and the power density $P$ on the human body are calculated as

$$
\begin{aligned}
E &= \frac{\sqrt{30 \times 0.088}}{2.5} = 0.65 \text{ [V/m]} , \\
P &= \frac{E^2}{z_0} = \frac{0.65^2}{120\pi} = 1.12 \times 10^{-3} \text{ [W/m}^2\text{]} = 1.12 \times 10^{-4} \text{ [mW/cm}^2\text{]} .
\end{aligned}
$$

According to the guideline [11], these values must satisfy

$$
\begin{aligned}
&E \leq 61.4 \text{ [V/m]} , \\
&P \leq 1 \text{ [mW/cm}^2\text{]} .
\end{aligned}
$$

Therefore, the developed 24 GHz FM-CW radar system sufficiently satisfies the conditions of the guideline.

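The Appendix arithmetic can be reproduced as follows (our check; the far-field estimate $E = \sqrt{30\,\mathrm{ERP}}/d$ and $z_0 = 120\pi$ are as used above):

```python
import math

# Reproducing the Appendix calculation (our arithmetic). ERP = 7 mW with an
# 11 dBi antenna gives about 88 mW of effective radiated power.
erp_w = 0.088    # effective radiated power [W]
dist_m = 2.5     # transmitter-to-body distance [m]

E = math.sqrt(30 * erp_w) / dist_m      # electric field strength [V/m], ~0.65
P_w_m2 = E**2 / (120 * math.pi)         # power density [W/m^2]
P_mw_cm2 = P_w_m2 * 0.1                 # 1 W/m^2 = 0.1 mW/cm^2, ~1.12e-4

within_guideline = (E <= 61.4) and (P_mw_cm2 <= 1.0)  # MIC limits [11]
```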
samples/texts_merged/250922.md
ADDED
The diff for this file is too large to render. See raw diff.

samples/texts_merged/2515306.md
ADDED
|
@@ -0,0 +1,523 @@
---PAGE_BREAK---

New Encoding for Translating Pseudo-Boolean Constraints into SAT

Amir Aavani and David Mitchell and Eugenia Ternovska

Simon Fraser University, Computing Science Department
{aaa78,mitchell,ter}@sfu.ca

Abstract

A Pseudo-Boolean (PB) constraint is a linear arithmetic constraint over Boolean variables. PB constraints are widely used in declarative languages for expressing NP-hard search problems. While there are solvers for sets of PB constraints, there are also reasons to be interested in transforming these to propositional CNF formulas, and a number of methods for doing this have been reported. We introduce a new, two-step method for transforming PB constraints to propositional CNF formulas. The first step re-writes each PB constraint as a conjunction of PB-Mod constraints, and the second transforms each PB-Mod constraint to CNF. The resulting CNF formulas are compact and make effective use of unit propagation, in that unit propagation can derive facts from these CNF formulas which it cannot derive from the CNF formulas produced by other commonly used transformations. We present a preliminary experimental evaluation of the method, using instances of the number partitioning problem as a benchmark set, which indicates that our method out-performs other transformations to CNF when the coefficients of the PB constraints are not small.

Introduction

A Pseudo-Boolean constraint (PB-constraint) is an equality or inequality on a linear combination of Boolean literals, of the form

$$ \sum_{i=1}^{n} a_i l_i \text{ op } b $$

where op is one of {<, ≤, =, ≥, >}, $a_1, \dots, a_n$ and $b$ are integers, and $l_1, \dots, l_n$ are Boolean literals. Under a truth assignment $\mathcal{A}$ for the literals, the left-hand side evaluates to the sum of the coefficients whose corresponding literals are mapped to true by $\mathcal{A}$. PB-constraints are also known as 0-1 integer linear constraints. By taking the variables to be propositional literals, rather than 0-1 valued arithmetic variables, we can consider the combination of PB-constraints with other logical expressions. Moreover, a propositional clause ($l_1 \lor \dots \lor l_k$) is equivalent to the PB-constraint $\sum_{i=1}^k l_i \ge 1$. Thus, PB-constraints are a natural generalization of propositional clauses with which it is easier to describe arithmetic

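The evaluation just described can be made concrete with a small evaluator. This is a hypothetical helper of our own, not code from the paper; literals are modelled as (variable, polarity) pairs, with polarity `False` meaning the negated variable.

```python
import operator

# Our sketch: evaluate a PB-constraint sum(a_i * l_i) op b under a total
# assignment mapping variable names to booleans.
OPS = {'<': operator.lt, '<=': operator.le, '=': operator.eq,
       '>=': operator.ge, '>': operator.gt}

def satisfies(coeffs, lits, op, b, assignment):
    # A literal (v, pos) is true iff assignment[v] == pos.
    lhs = sum(a for a, (v, pos) in zip(coeffs, lits) if assignment[v] == pos)
    return OPS[op](lhs, b)

# The clause (x1 or not x2) viewed as the PB-constraint x1 + (not x2) >= 1:
A = {'x1': False, 'x2': True}
clause_sat = satisfies([1, 1], [('x1', True), ('x2', False)], '>=', 1, A)
```

Under the assignment `A` the clause is falsified, and the PB evaluation agrees, illustrating the clause-to-PB correspondence above.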
properties of a problem. For example, the Knapsack problem has a trivial representation as a conjunction of two PB-constraints:

$$ \sum_{i=1}^{n} w_i l_i < C \quad \land \quad \sum_{i=1}^{n} v_i l_i > V, $$

but directly representing it with a propositional CNF formula is non-trivial.

Software which finds solutions to sets of PB-constraints (PB solvers) exists, for example PBS (Aloul et al. 2002) and PUEBLO (Sheini and Sakallah 2006), but there is no sustained effort to produce continually updated high-performance solvers. Integer linear programming (ILP) systems can be used to find solutions to sets of PB-constraints, but they are generally optimized for performance on certain types of optimization problems, and do not perform well on some important families of search problems. Moreover, the standard ILP input is a set of linear inequalities, and many problems are not effectively modelled this way, such as problems involving disjunctions of constraints, for example $(p \land q) \lor (r \land s)$. There are standard techniques for transforming these, involving additional variables, but extensive use of these techniques causes performance problems. (Transforming problems to propositional CNF also requires adding new variables, but there seems to be little performance penalty in this case.)

Another approach to solving problems modelled with PB-constraints is to transform them to a logically equivalent set of propositional clauses and then apply a SAT solver. There are at least two clear benefits of this approach. One is that high-performance SAT solvers are being improved constantly, and since they take a standard input format, there is always a selection of good, and frequently updated, solvers to make use of. A second is that solving problems involving Boolean combinations of constraints is straightforward. This approach is particularly attractive for problems which are naturally represented by a relatively small number of PB constraints together with a large number of purely Boolean constraints.

The question of how best to transform a set of PB-constraints to a set of clauses is complex. Several methods have been reported, but there is still much to be learned. Here, we describe a new method of transformation and present some preliminary evidence of its utility.

Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

---PAGE_BREAK---

We define a PBMod-constraint to be of the form:

$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}
$$

where $a_1, \cdots, a_n$ and $b$ are non-negative integers less than $M$, and $l_1, \cdots, l_n$ are literals.

Our method of transforming a PB-constraint to CNF involves first transforming it to a set of PB-Mod constraints, and then transforming these to CNF. Thus, we replace the question of how best to transform an arbitrary PB-constraint to CNF with two questions: how to choose a set of PB-Mod constraints, and how to transform each of these to CNF. There are benefits to this, due to properties of the PB-Mod constraints. For example, we show that there are many PB-constraints whose unsatisfiability can be proven by showing the unsatisfiability of a PBMod-constraint, which is much simpler.

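That observation can be illustrated concretely. The brute-force checker below is our own sketch, not the paper's construction: if $\sum a_i x_i = b$ is satisfiable, its projection modulo any $M$ must also be satisfiable, so a single cheap PBMod check can refute a PB equality.

```python
from itertools import product

# Our sketch: brute-force satisfiability of sum(a_i x_i) ≡ b (mod M) over
# Boolean x_i. If this fails for some M, the PB equality sum(a_i x_i) = b
# is unsatisfiable as well.
def pbmod_satisfiable(coeffs, b, M):
    return any(sum(a * x for a, x in zip(coeffs, xs)) % M == b % M
               for xs in product([0, 1], repeat=len(coeffs)))

# 3*x1 + 5*x2 = 4 has possible sums {0, 3, 5, 8}, so it is unsatisfiable;
# already its mod-3 projection 0*x1 + 2*x2 ≡ 1 (mod 3) witnesses this.
refuted = not pbmod_satisfiable([3, 5], 4, 3)
```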
We present two methods for translating PBMod-constraints to CNF. Both of these encodings allow unit propagation to infer inconsistency if the current assignment cannot be extended to a satisfying assignment for that PBMod-constraint, and hence unit propagation can infer inconsistency for the original PB-constraint. We also show that the number of PB-constraints for which unit propagation can infer inconsistency, given the output of the proposed translation, is much larger than for the other existing encodings. We also point out that it is impossible to translate all PB-constraints of the form $\sum a_i l_i = b$ into polynomial-size arc-consistent CNF unless P=NP.

We also present the results of an experimental study, using instances of the number partitioning problem as a benchmark, which indicates that our new method outperforms others in the literature.

For the sake of space, proofs are omitted from this paper. All proofs can be found in (Aavani 2011).

**Notation and Terminology**

Let $X$ be a set of Boolean variables. An assignment $\mathcal{A}$ to $X$ is a possibly partial function from $X$ to $\{\text{true, false}\}$. Assignment $\mathcal{A}$ to $X$ is a total assignment if it is defined at every variable in $X$. For any $S \subseteq X$, we write $\mathcal{A}[S]$ for the assignment obtained by restricting the domain of $\mathcal{A}$ to the variables in $S$. We say assignment $\mathcal{B}$ extends assignment $\mathcal{A}$ if $\mathcal{B}$ is defined on every variable that $\mathcal{A}$ is, and for every variable $x$ where $\mathcal{A}$ is defined, $\mathcal{A}(x) = \mathcal{B}(x)$.

A literal, *l*, is either a Boolean variable or the negation of a Boolean variable, and we denote by var(*l*) the variable underlying literal *l*. Assignment $\mathcal{A}$ satisfies literal *l*, written $\mathcal{A} \models l$, if *l* is an atom *x* and $\mathcal{A}(x) = \text{true}$, or *l* is a negated atom ¬*x* and $\mathcal{A}(x) = \text{false}$.

A clause $C = \{l_1, \dots, l_m\}$ over $X$ is a set of literals such that $\text{var}(l_i) \in X$. Assignment $\mathcal{A}$ satisfies clause $C = \{l_1, \dots, l_m\}$ if there exists at least one literal $l_i$ such that $\mathcal{A} \models l_i$. A total assignment falsifies clause $C$ if it does not satisfy any of its literals. An assignment satisfies a set of clauses if it satisfies all the clauses in that set.

A PB-constraint $Q$ on $X$ is an expression of the form:

| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
a_1 l_1 + \cdots + a_n l_n \quad \mathbf{op} \quad b \qquad (1)
|
| 95 |
+
$$

where $\mathbf{op}$ is one of $\{<, \le, =, \ge, >\}$; for each $i$, $a_i$ is an integer and $l_i$ a literal over $X$; and $b$ is an integer. We call $a_i$ the coefficient of $l_i$, and $b$ the bound.

Total assignment $\mathcal{A}$ to $X$ satisfies PB-constraint $Q$ on $X$, written $\mathcal{A} \models Q$, if $\sum_{i : \mathcal{A} \models l_i} a_i \ \mathbf{op}\ b$, that is, the sum of the coefficients of the literals mapped to true (the left-hand side) satisfies the given relation to the bound (the right-hand side).
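
The satisfaction check can be made concrete with a small sketch (our own helper, not code from the paper; the representation of literals as `(coefficient, variable, negated)` triples is an assumption of ours):

```python
import operator

# Comparison operators allowed in a PB-constraint.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def satisfies_pb(assignment, terms, op, bound):
    """Check A |= Q for Q = (sum of a_i * l_i) op b.

    assignment: dict variable name -> bool, total on the constraint's variables.
    terms: list of (coefficient, variable, negated) triples, one per literal.
    """
    # A literal is true under A when its variable's value differs from `negated`.
    lhs = sum(a for a, var, negated in terms if assignment[var] != negated)
    return OPS[op](lhs, bound)

# Example 1's constraint 2*x1 + 4*(not x2) = 3: under x1=true, x2=false the
# left-hand side is 2 + 4 = 6, so the constraint is falsified.
satisfies_pb({"x1": True, "x2": False}, [(2, "x1", False), (4, "x2", True)], "=", 3)
```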

## Canonical Form

In this paper, we focus on translating PB equality constraints with positive coefficients:

$$
a_1 x_1 + \cdots + a_n x_n = b \qquad (2)
$$

where the integers $a_1, \dots, a_n$ and $b$ are all positive.

**Definition 1** Constraints $Q_1$ on $X$ and $Q_2$ on $Y \supseteq X$ are equivalent iff for every total assignment $\mathcal{A}$ for $X$ which satisfies $Q_1$, there exists an extension of $\mathcal{A}$ to $Y$ which satisfies $Q_2$, and every total assignment $\mathcal{B}$ to $Y$ which satisfies $Q_2$ also satisfies $Q_1$.

It is not hard to show that every PB-constraint has an equivalent PB-constraint of the form (2). For the sake of space, we do not include the details, but refer interested readers to (Aavani 2011).

## Valid Translation

**Definition 2** Let $Q$ be a PB-constraint or PBMod-constraint over variables $X = \{x_1, \dots, x_n\}$, $Y$ a set of Boolean variables (called auxiliary variables) disjoint from $X$, $v$ a Boolean variable not occurring in $X \cup Y$, and $C = \{C_1, \dots, C_m\}$ a set of clauses on $X \cup Y \cup \{v\}$. Then we say the pair $\langle v, C \rangle$ is a valid translation of $Q$ if

1. $C$ is satisfiable, and

2. if $\mathcal{A}$ is a total assignment for $X \cup Y \cup \{v\}$ that satisfies $C$, then

$$
\mathcal{A} \models Q \iff \mathcal{A} \models v.
$$

Intuitively, $C$ ensures that $v$ always takes the same truth value as $Q$.

In (Bailleux, Boufkhad, and Roussel 2009), a translation is defined to be a set of clauses $C$ such that $\mathcal{A} \models Q$ iff some extension of $\mathcal{A}$ (to the auxiliary variables of $C$) satisfies $C$. If $\langle v, C \rangle$ is a valid translation by Definition 2, then $\{v\} \cup C$ is a translation in this other sense, and if $C$ is a translation in the other sense, then $\langle v, D \rangle$, where $D$ is equivalent to $v \leftrightarrow C$, is a valid translation. So these two definitions are essentially equivalent, except that our definition makes available a variable which always has the same truth value as $Q$, which can be convenient. For example, it makes it easy to use $Q$ conditionally.

**Example 1** Let $Q$ be the unsatisfiable PB-constraint $2x_1 + 4\neg x_2 = 3$. Then the pair $\langle v, \{C_1\} \rangle$, where $C_1 = \{\neg v\}$, is a valid translation of $Q$.

**Example 2** Let $Q$ be the satisfiable PB-constraint $1x_1 + 2x_2 = 2$. Then $\langle v, C \rangle$, where $C$ is any set of clauses logically equivalent to $(v \leftrightarrow \neg x_1) \land (v \leftrightarrow x_2)$, is a valid translation of $Q$. Here, $X = \{x_1, x_2\}$ and $Y = \emptyset$.

In describing the construction of translations, we will sometimes overload our notation, using a symbol for both a variable and a translation. For example, if $D$ is a valid translation, we may use $D$ as a variable in a clause for constructing another translation. Thus, $D$ is the pair $\langle D, C \rangle$.

## Tseitin Transformation

The usual method for transforming a propositional formula to CNF is that of Tseitin (Tseitin 1968). To transform formula $\varphi$ to CNF, a fresh propositional variable is used to represent the truth value of each subformula of $\varphi$. For each subformula $\psi$, denote by $\psi'$ the associated propositional variable. If $\psi$ is a variable, then $\psi'$ is just $\psi$. The CNF formula is the set of clauses containing the clause $\{\varphi'\}$ and, for each subformula $\psi$ of $\varphi$:

1. If $\psi = \psi_1 \lor \psi_2$, the clauses $\{\neg\psi', \psi'_1, \psi'_2\}$, $\{\psi', \neg\psi'_1\}$ and $\{\psi', \neg\psi'_2\}$;

2. If $\psi = \psi_1 \land \psi_2$, the clauses $\{\neg\psi', \psi'_1\}$, $\{\neg\psi', \psi'_2\}$, and $\{\psi', \neg\psi'_1, \neg\psi'_2\}$;

3. If $\psi = \neg\psi_1$, the clauses $\{\neg\psi', \neg\psi'_1\}$ and $\{\psi', \psi'_1\}$.
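
These three rules can be sketched directly; the fragment below is our own minimal illustration (the nested-tuple formula representation and the `aux` naming are ours, not the paper's):

```python
import itertools

def tseitin(formula):
    """Tseitin-transform a formula tree into CNF, following the three rules above.

    formula: a variable name (str), or ("not", f), ("and", f, g), ("or", f, g).
    Returns (clauses, root): clauses are frozensets of literals, a literal
    being a (name, polarity) pair; asserting the unit clause {root} asserts
    the formula itself.
    """
    counter = itertools.count(1)
    clauses = []

    def walk(f):
        if isinstance(f, str):
            return (f, True)          # a variable is represented by itself
        v = ("aux%d" % next(counter), True)   # fresh variable for subformula f
        nv = (v[0], False)
        if f[0] == "not":
            a = walk(f[1])
            clauses.append(frozenset([nv, (a[0], not a[1])]))
            clauses.append(frozenset([v, a]))
        elif f[0] == "or":
            a, b = walk(f[1]), walk(f[2])
            clauses.append(frozenset([nv, a, b]))
            clauses.append(frozenset([v, (a[0], not a[1])]))
            clauses.append(frozenset([v, (b[0], not b[1])]))
        elif f[0] == "and":
            a, b = walk(f[1]), walk(f[2])
            clauses.append(frozenset([nv, a]))
            clauses.append(frozenset([nv, b]))
            clauses.append(frozenset([v, (a[0], not a[1]), (b[0], not b[1])]))
        return v

    root = walk(formula)
    clauses.append(frozenset([root]))  # the clause {phi'}
    return clauses, root
```

For `("and", "x", ("not", "y"))` this yields the two negation clauses, the three conjunction clauses, and the root unit clause, six clauses in total.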

# New Method for PBMod-constraints

We define a normal PBMod-constraint to be of the form:

$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}, \qquad (3)
$$

where $0 \le a_i < M$ for all $1 \le i \le n$ and $0 \le b < M$. Total assignment $\mathcal{A}$ is a solution to a PBMod-constraint iff the value of the left-hand side summation under $\mathcal{A}$, minus the right-hand side value $b$, is a multiple of $M$.

**Definition 3** If $Q$ is the PB-constraint $\sum a_i l_i = b$ and $M$ an integer greater than 1, then by $Q[M]$ we denote the PBMod-constraint $\sum a'_i l_i \equiv b' \pmod{M}$ where:

1. $a'_i = a_i \bmod M$,

2. $b' = b \bmod M$.
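
Definition 3's reduction is just coefficient-wise reduction modulo $M$; a one-line sketch (the helper name is ours):

```python
def mod_reduce(coeffs, b, M):
    """Compute Q[M] from Q = (sum of a_i * l_i = b), per Definition 3:
    reduce every coefficient and the bound modulo M."""
    return [a % M for a in coeffs], b % M
```

Applied to $6x_1 + 5x_2 + 7x_3 = 12$, `mod_reduce([6, 5, 7], 12, 3)` gives `([0, 2, 1], 0)` and `mod_reduce([6, 5, 7], 12, 5)` gives `([1, 0, 2], 2)`.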

**Example 3** Let $Q$ be the constraint $6x_1 + 5x_2 + 7x_3 = 12$. Then we have that

$Q[3]$ is $0x_1 + 2x_2 + 1x_3 \equiv 0 \pmod{3}$, and

$Q[5]$ is $1x_1 + 0x_2 + 2x_3 \equiv 2 \pmod{5}$.

Every solution to a PB-constraint $Q$ is also a solution to $Q[M]$ for any $M \ge 2$. Also, for sufficiently large values of $M$, each solution to $Q[M]$ is a solution to $Q$.

**Proposition 1** If $Q$ is a PB-constraint $\sum a_i l_i = b$ and $M > \sum a_i$, then $Q[M]$ and $Q$ have the same satisfying assignments.

More interesting is that, for a given PB-constraint $Q$, we can construct a set of constraints $Q[M_i]$, none of which is equivalent to $Q$, but such that their conjunction has the same set of solutions as $Q$. Our goal will be to choose values of $M_i$ such that the resulting set of PBMod-constraints is easy to transform to CNF.

**Proposition 2** Let $Q$ be the PB-constraint $\sum a_i l_i = b$, and let $M_1$ and $M_2$ be integers with $M_3 = \text{lcm}(M_1, M_2)$. Further, let $S_1$ be the set of satisfying assignments for $Q[M_1]$, and $S_2$ the set of assignments satisfying $Q[M_2]$. Then the set of satisfying assignments for $Q[M_3]$ is $S_1 \cap S_2$.

Proposition 2 tells us that, in order to find the set of solutions to a PBMod-constraint modulo $M_3 = \text{lcm}(M_1, M_2)$, one can find the sets of solutions to two PBMod-constraints (modulo $M_1$ and $M_2$) and return their intersection. This generalizes in the obvious way.

**Lemma 1** Let $\{M_1, \dots, M_m\}$ be a set of $m$ positive integers and $M = \text{lcm}(M_1, \dots, M_m)$. Let $Q$ be the PB-constraint $\sum a_i l_i = b$. If $M > \sum a_i$, and $S_i$ is the set of satisfying assignments for $Q[M_i]$, then the set of satisfying assignments of $Q[M]$ is

$$
\bigcap_{i \in 1..m} S_i.
$$

We can now easily construct a valid translation of a PB-constraint from valid translations of a suitable set of PBMod-constraints.

**Theorem 1** Let $Q$ be a PB-constraint $\sum a_i l_i = b$, $\{M_1, \dots, M_m\}$ a set of positive integers, and $M = \text{lcm}(M_1, \dots, M_m)$ with $M > \sum a_i$. Suppose that, for each $i \in \{1, \dots, m\}$, $\langle v_i, C_i \rangle$ is a valid translation of $Q[M_i]$, each over a distinct set of auxiliary variables. Then for any set $C$ of clauses logically equivalent to $\bigcup_i C_i \cup C'$, where $C'$ is a set of clauses equivalent to $v \leftrightarrow (v_1 \wedge v_2 \wedge \cdots \wedge v_m)$, the pair $\langle v, C \rangle$ is a valid translation of $Q$.

Since $\text{lcm}(2, \dots, k) \ge 2^{k-1}$ (Farhi and Kane 2009), the set $\mathbb{M}^{\mathbb{N}} = \{2, \dots, \lceil \log \sum a_i \rceil + 1\}$ can be used as the set of moduli for encoding $\sum a_i l_i = b$.

Another candidate for the set of moduli is the first $m$ prime numbers, where $m$ is the smallest number such that the product of the first $m$ primes exceeds $\sum a_i$. We will denote this set by $\mathbb{M}^p$. As usual, we denote by $P_i$ the $i^{th}$ prime number. The following proposition gives an estimate for the size of the set $\mathbb{M}^p$, and for the value of $P_m$.

**Proposition 3** Let $m$ be the smallest integer such that the product of the first $m$ primes is greater than $S$. Then:

1. $m = |\mathbb{M}^p| = \Theta\left(\frac{\ln S}{\ln \ln S}\right)$.

2. $P_m < \ln S$.

A third candidate is the set

$$
\mathbb{M}^{\mathbb{P}} = \{ P_i^{n_i} \mid P_i^{n_i - 1} \leq \lg S \leq P_i^{n_i} \}.
$$

It is straightforward to observe that $|\mathbb{M}^{\mathbb{P}}| \le (\ln S)/(\ln \ln S)$ and that its maximum element is at most $\lg S$.
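
The prime moduli set $\mathbb{M}^p$ is easy to compute; the sketch below (function names are ours) takes successive primes until their product exceeds $S = \sum a_i$, which by Lemma 1 makes the conjunction of the constraints $Q[M_i]$ have the same solutions as $Q$:

```python
def primes():
    """Generate primes by trial division (adequate for the small values needed:
    by Proposition 3, the largest prime used is below ln S)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def prime_moduli(S):
    """The set M^p: the first m primes, with m smallest such that their
    product exceeds S."""
    ms, prod = [], 1
    for p in primes():
        ms.append(p)
        prod *= p
        if prod > S:
            return ms
```

For instance, `prime_moduli(100)` returns `[2, 3, 5, 7]`, since $2 \cdot 3 \cdot 5 = 30 \le 100 < 210$.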

In general, the size of a description of the PB-constraint $\sum a_i l_i = b$ is $\Theta(n \log a_{\text{Max}})$, where $n$ is the number of literals (coefficients) in the constraint and $a_{\text{Max}}$ is the value of the largest coefficient. The description of the PBMod-constraint $Q[M]$ has size $\Theta(n \log M)$. So, a translation for $Q[M]$ which produces a CNF with $O(n^{k_1} M^{k_2})$ clauses and variables, for some constants $k_1$ and $k_2$ (which may be exponential in the input size), provides a way to translate PB-constraints to CNF of size polynomial in the representation of the PB-constraint. Two such translations are described in the next section. We describe several others in (Aavani 2011).
# Encoding For PB-Mod Constraints

In this section, we describe translations of PBMod-constraints of the form (3) to CNF. Recall that our ultimate goal is the translation of PB-constraints. For simplicity, we assume all coefficients in each PBMod-constraint are non-zero.

## Dynamic Programming Based Transformation (DP)

The translation presented here encodes PBMod-constraints using a Dynamic Programming approach. Let $D_m^j$ be a valid translation for $\sum_{i=1}^j a_i l_i \equiv m \pmod{M}$. We can use the following set of clauses to describe the relationship among $D_m^j$, $D_m^{j-1}$, $D_{m-a_j}^{j-1}$ and $l_j$ (subscripts taken modulo $M$):

1. If both $D_{m-a_j}^{j-1}$ and $l_j$ are true, $D_m^j$ must be true, which can be represented by the clause $\{\neg D_{m-a_j}^{j-1}, \neg l_j, D_m^j\}$.

2. If $D_m^{j-1}$ is true and $l_j$ is false, $D_m^j$ must be true, i.e., $\{\neg D_m^{j-1}, l_j, D_m^j\}$.

3. If $D_m^j$ is true, either $D_m^{j-1}$ or $D_{m-a_j}^{j-1}$ must be true, i.e., $\{\neg D_m^j, D_m^{j-1}, D_{m-a_j}^{j-1}\}$.

For the base cases, when $j=0$, we have:

1. $D_0^0$ is true, i.e., $\{D_0^0\}$.

2. If $m \neq 0$, $D_m^0$ is false, i.e., $\{\neg D_m^0\}$.

**Proposition 4** Let $D = \{D_m^j\}$ and $C$ be the set of clauses used to describe the variables in $D$. Then the pair $\langle D_b^n, C \rangle$ is a valid translation for (3).

By applying standard dynamic programming techniques, we can avoid describing the unnecessary $D_m^j$, and obtain a smaller CNF.
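
To make the clause schema concrete, here is a sketch of ours (not the paper's code) that emits the DP clauses for a PBMod-constraint over positive literals; the tuple-based variable and clause representations are our own assumptions:

```python
def dp_clauses(coeffs, b, M):
    """Emit the DP clauses above for sum a_j*x_j = b (mod M), positive literals.

    A propositional variable is a tuple: ("D", j, m) stands for D_m^j, and
    ("x", j) for the input variable x_j. A clause is a list of
    (sign, variable) pairs, with sign True meaning positive occurrence.
    Returns (clauses, root); asserting the unit clause {root} imposes (3).
    """
    n = len(coeffs)
    clauses = [[(True, ("D", 0, 0))]]                         # base: D_0^0
    clauses += [[(False, ("D", 0, m))] for m in range(1, M)]  # base: not D_m^0
    for j in range(1, n + 1):
        a = coeffs[j - 1] % M
        for m in range(M):
            d_j = ("D", j, m)
            d_same = ("D", j - 1, m)               # D_m^{j-1}
            d_shift = ("D", j - 1, (m - a) % M)    # D_{m-a_j}^{j-1}, mod M
            x_j = ("x", j)
            # rule 1: D_{m-a_j}^{j-1} and x_j  ->  D_m^j
            clauses.append([(False, d_shift), (False, x_j), (True, d_j)])
            # rule 2: D_m^{j-1} and not x_j  ->  D_m^j
            clauses.append([(False, d_same), (True, x_j), (True, d_j)])
            # rule 3: D_m^j  ->  D_m^{j-1} or D_{m-a_j}^{j-1}
            clauses.append([(False, d_j), (True, d_same), (True, d_shift)])
    return clauses, ("D", n, b % M)
```

The output has $M$ base-case unit clauses plus $3nM$ recurrence clauses, matching the $O(nM)$ bound of Table 1.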

By adding the following clauses, we can boost the performance of unit propagation.

1. If $D_{m_1}^j$ is true, $D_{m_2}^j$ must be false ($m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^j, \neg D_{m_2}^j\}$.

2. There is at least one $m$ such that $D_m^j$ is true, i.e., $\{D_m^j \mid m = 0, \dots, M - 1\}$.

Binary Decision Diagrams (BDDs) are standard tools for translating constraints to SAT. One can construct a BDD-based encoding for PBMod-constraints similar to the BDD-based encoding for PB-constraints described in (Eén and Sorensson 2006). Unit propagation can infer more facts on the CNF generated by the boosted version of the DP-based encoding than on the CNF generated by the BDD-based encoding. Comparing the BDD-based and DP-based encodings, the former produces a larger CNF, while unit propagation infers the same facts on the output of both encodings.

**Remark 1** In (Aavani 2011), we proved that the DP-based encoding, plus the extra clauses, has the following property. Given a partial assignment $\mathcal{A}$, if there is no total assignment $\mathcal{B}$ extending $\mathcal{A}$ such that $\mathcal{B}$ satisfies both $C$ and $\sum_{i=1}^j a_i l_i \equiv m \pmod{M}$, then unit propagation infers false as the value of variable $D_m^j$.

## Divide and Conquer Based Transformation (DC)

The translation presented next reflects a Divide and Conquer approach. We define auxiliary variables in $D = \{D_a^{s,l}\}$ such that variable $D_a^{s,l}$ describes the necessary and sufficient condition for satisfiability of the subproblem $\sum_{i=s}^{s+l-1} a_i x_i \equiv a \pmod{M}$.

Let $D^{s,l} = \{D_a^{s,l} : 0 \le a < M\}$. We can use the following set of clauses to describe the relation among the $3M$ variables in the sets $D^{s,l}$, $D^{s,\frac{l}{2}}$ and $D^{s+\frac{l}{2},\frac{l}{2}}$:

1. If both $D_{m_1}^{s,\frac{l}{2}}$ and $D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}$ are true, $D_{(m_1+m_2) \bmod M}^{s,l}$ must be true, i.e., $\{\neg D_{m_1}^{s,\frac{l}{2}}, \neg D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}, D_{(m_1+m_2) \bmod M}^{s,l}\}$.

2. If $D_{m_1}^{s,l}$ is true, $D_{m_2}^{s,l}$ must be false ($m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^{s,l}, \neg D_{m_2}^{s,l}\}$.

3. There is at least one $m$ such that $D_m^{s,l}$ is true, i.e., $\{D_m^{s,l} \mid m = 0, \dots, M - 1\}$.

For the base cases, when $l=1$, we have:

1. $D_0^{s,1}$ is true iff $x_s$ is false, i.e., $\{x_s, D_0^{s,1}\}$ and $\{\neg x_s, \neg D_0^{s,1}\}$.

2. $D_1^{s,1}$ is true iff $x_s$ is true, i.e., $\{\neg x_s, D_1^{s,1}\}$ and $\{x_s, \neg D_1^{s,1}\}$.

**Proposition 5** Let $D = \{D_a^{s,l}\}$ and $C$ be the clauses which are used to describe the variables in $D$. Then the pair $\langle D_b^{1,n}, C \rangle$ is a valid translation for (3).

**Remark 2** In (Aavani 2011), we showed another version of the DC-based encoding which also has the property described in Remark 1.

**Theorem 2** The numbers of clauses and auxiliary variables used in the DP and DC translations of the PBMod-constraint $\sum a_i x_i \equiv b \pmod{M}$, and the depths of the formulas implicit in these CNF formulas, are as given in Table 1. The same properties, for the PB-constraint translations obtained from the DP and DC translations together with $\mathbb{M}^p$ or $\mathbb{M}^{\mathbb{P}}$ as moduli, are as given in Table 2.

<table><thead><tr><td>Encoder</td><td># of Aux. Vars.</td><td># of Clauses</td><td>Depth</td></tr></thead><tbody><tr><td>DP</td><td>O(nM)</td><td>O(nM)</td><td>O(n)</td></tr><tr><td>DC</td><td>O(nM)</td><td>O(nM<sup>2</sup>)</td><td>O(log n)</td></tr></tbody></table>

Table 1: Summary of size and depth of translations for $\sum a_i x_i \equiv b \pmod{M}$.

In the previous section, we described two candidates for sets of moduli, namely Prime and PrimePower, and in this section, we explained two encodings for transforming PBMod-constraints to SAT, namely DP and DC. This gives us four different translations from PB-constraints to SAT. Table 2 summarizes the number of clauses and variables and the depth of the corresponding formula for these translations, and also for the Sorting Network based encoding (Eén 2005) and the Binary Adder encoding (Eén 2005).

<table><thead><tr><th>Encoder</th><th># of Vars.</th><th># of Clauses</th><th>Depth</th></tr></thead><tbody><tr><td>Prime.DP</td><td>O(n log(S)/ln ln(S))</td><td>O(n ln(S))</td><td>O(n)</td></tr><tr><td>Prime.DC</td><td>O(n log(S)/ln ln(S))</td><td>O(n (log(S)/ln ln(S))<sup>2</sup>)</td><td>O(log n)</td></tr><tr><td>PPower.DP</td><td>O(n log(S)/log log(S))</td><td>O(n log(S)/log log(S))</td><td>O(n)</td></tr><tr><td>PPower.DC</td><td>O(n log(S)/log log(S))</td><td>O(n (log(S)/log log(S))<sup>2</sup>)</td><td>O(log n)</td></tr><tr><td>BAdder</td><td>O(n log(S))</td><td>O(n log(S))</td><td>O(log(S) * log n)</td></tr><tr><td>SN</td><td>O(n log(S/n) log<sup>2</sup>(n log(S/n)))</td><td>O(n log(S/n) log<sup>2</sup>(n log(S/n)))</td><td>O(log<sup>2</sup>(n log(S/n)))</td></tr></tbody></table>

Table 2: Summary of size and depth of different encodings for translating $\sum a_i x_i = b$, where $S = \sum a_i$.

## Performance of Unit Propagation

Here we examine some properties of the proposed encodings.

### Background

Generalized arc-consistency (GAC) is a desired property for an encoding, related to the performance of the unit propagation (UP) procedure inside a SAT solver. Bailleux et al., in (Bailleux, Boufkhad, and Roussel 2009), defined UP-detect inconsistency and UP-maintain GAC for encodings of PB-constraints. Although the way they define a translation is slightly different from ours, these two concepts can still be discussed in our context.

Let $E$ be an encoding method for PB-constraints, $Q$ be a PB-constraint on $X$, and $\langle v, C \rangle = E(Q)$ the translation for $Q$ obtained from encoding $E$. Then,

1. Encoding $E$ for constraint $Q$ supports UP-detect inconsistency if for every (partial) assignment $\mathcal{A}$, we have that every total extension of $\mathcal{A}[X]$ makes $Q$ false if and only if unit propagation derives $\{\neg v\}$ from $C \cup \{\{x\} \mid \mathcal{A} \models x\}$;

2. Encoding $E$ for constraint $Q$ is said to UP-maintain GAC if for every (partial) assignment $\mathcal{A}$ and any literal $l$ with $\text{var}(l) \in X$, we have that $l$ is true in every total extension of $\mathcal{A}$ that satisfies $Q$ if and only if unit propagation derives $\{l\}$ from $C \cup \{v\} \cup \{\{x\} \mid \mathcal{A} \models x\}$.

An encoding for PB-constraints is generalized arc-consistent, or simply arc-consistent, if it supports both UP-detect inconsistency and UP-maintain GAC for all possible constraints.

In this section, we show that there cannot be an encoding for PB-constraints of the form $\sum a_i l_i = b$ which always produces a polynomial-size arc-consistent CNF unless P = co-NP. We also study the arc-consistency of our encodings and discuss why one can expect the proposed encodings to perform well.

### Hardness Result

Here, we show that it is not very likely that there is a generalized arc-consistent encoding which always produces polynomial-size CNF.

**Theorem 3** There does not exist a UP-detectable encoding which always produces polynomial-size CNF unless P = co-NP. There does not exist a UP-maintainable encoding which always produces polynomial-size CNF unless P = co-NP.

**Proof (sketch)** The theorem can be proven by observing that a subset sum instance can be written as a PB-constraint, and having a UP-detectable encoding enables us to prove unsatisfiability whenever the original subset sum instance is not satisfiable. The proof for the hardness of having a UP-maintainable encoding is similar to this argument. For the complete proof, see (Aavani 2011).

## UP for Proposed Encodings

Although there is no polynomial-size arc-consistent encoding for PB-constraints (unless P = co-NP), both the DP-based and DC-based encodings for PBMod-constraints are generalized arc-consistent encodings.

Also, as mentioned before, unit propagation is able to infer inconsistency, on the CNF generated by these encodings, as soon as the current partial assignment cannot be extended to a total satisfying assignment. Notice that what we state here is stronger than arc-consistency, as it considers the auxiliary variables, too. More formally, let $\langle v, C \rangle$ be the output of the DP-based (DC-based) encoding for PBMod-constraint $Q$. Given a partial assignment $\mathcal{A}$ such that $v \in \mathcal{A}^+$,

$$ \mathcal{A} \not\models C \cup \{v\} \Leftrightarrow \mathcal{A} \not\models_{UP} C \cup \{v\}. \quad (4) $$

This feature enables the SAT solver to detect a mistake on any of the PBMod-constraints as soon as such a mistake occurs.

In the rest of this section, we study the cases for which we expect SAT solvers to perform well on the output of our encoding. Let $Q$ be a PB-constraint on $X$, $\mathcal{A}$ be a partial assignment, and $\text{Ans}(\mathcal{A})$ be the set of total assignments to $X$ satisfying $Q$ and extending $\mathcal{A}[X]$. There are two situations in which UP is able to infer the values of input variables:

1. Unit Propagation Detects Inconsistency: One can infer that the current partial assignment, $\mathcal{A}$, cannot be extended to satisfy $Q$ by knowing $\text{Ans}(\mathcal{A}) = \emptyset$. Recall that there are partial assignments and PB-constraints such that although $\text{Ans}(\mathcal{A}) = \emptyset$, each of the $m$ PBMod-constraints has a non-empty solution set (but the intersection of their solution sets is empty).

If at least one of the $m$ PBMod-constraints is inconsistent with the current partial assignment, UP can infer inconsistency in both the DP and DC encodings.

2. Unit Propagation Infers the Value of an Input Variable: One can infer that the value of input variable $x_k$ is true/false if $x_k$ takes the same value in all the solutions to $Q$. For this kind of constraint, UP might be able to infer the value of $x_k$, too.

If there exists a PBMod-constraint for which all solutions extending $\mathcal{A}$ map $x_k$ to the same value, UP can infer the value of $x_k$.

These two cases are illustrated in the following example.

**Example 4** Let $Q(X) = x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 = 12$.

1. If $\mathcal{A}$, the current partial assignment, is $\mathcal{A}=\{\neg x_2, \neg x_4\}$ and $M=5$, there is no total assignment satisfying $1x_1 + 3x_3 + 0x_5 \equiv 2 \pmod 5$.

2. If $\mathcal{A}$, the current partial assignment, is $\mathcal{A}=\{\neg x_3, \neg x_5\}$ and $M=2$, there are four total assignments extending $\mathcal{A}$ and satisfying the PBMod-constraint $1x_1 + 0x_2 + 0x_4 \equiv 0 \pmod 2$. In all of them, $x_1$ is mapped to false.

A special case of the second situation is when UP can detect the values of all $x \in X$ given the current partial assignment. In the rest of this section, we estimate the number of PB-constraints for which UP can solve the problem. More precisely, we give a lower bound on the number of PB-constraints for which, given their translation, UP detects inconsistency or expands the empty assignment to a solution.

Let us assume the constraints are selected, uniformly at random, from $\{a_1 l_1 + \dots + a_n l_n = b : 1 \le a_i \le A = 2^{R(n)} \text{ and } 1 \le b \le nA\}$, where $R(n)$ is a polynomial in $n$ and $R(n) > n$. To simplify the analysis, we use the same prime moduli $\mathbb{P}^n = \{P_1 = 2, \dots, P_m = \Theta(R(n)) > 2n\}$ for all constraints.

Consider the following PBMod-constraints:

$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n + 1 \pmod{P_m} \quad (5)$$

$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n \pmod{P_m} \quad (6)$$

It is not hard to verify that (5) does not have any solution and (6) has exactly one solution. It is straightforward to verify that UP can infer inconsistency given a translation obtained by the DP-based (DC-based) encoding for (5), even if the current assignment is empty. Also, UP expands the empty assignment to the assignment mapping all $x_i$ to true on a translation for (6) obtained by either the DP-based or the DC-based encoding. The Chinese Remainder Theorem (Ding, Pei, and Salomaa 1996) implies that there are $(A/P_m)^{n+1} = 2^{(n+1)R(n)}/R(n)^{n+1}$ different PB-constraints of the form $\sum a_i l_i = b$ whose corresponding PBMod-constraints, with modulus $P_m$, are the same as (5). The same claim holds for (6).

The above argument shows that, for the proposed encodings, the number of easy-to-solve PB-constraints is huge. In (Aavani 2011), we showed that this number is much smaller for the Sorting Network encoding:

**Observation 1** (Aavani 2011) There are at most $(\log A)^n$ instances where the CNF produced by the Sorting Network encoding maintains arc-consistency, while this number for our encoding is at least $(A/\log(A))^n$. So, if $A = 2^{R(n)}$, almost always we have $2^{R(n)}/R(n) \gg R(n)$.

**Observation 2** (Aavani 2011) There is a family of PB-constraints whose translation through the totalizer-based encoding is not arc-consistent but whose translation obtained by our encoding is arc-consistent.

## Experimental Evaluation

By combining any modulo-selection approach with any PBMod-constraint encoder, one can construct a PB-constraint solver. In this section, we selected the following configurations: Prime with DP (Prime.DP) and Prime with DC (Prime.DC). We used CryptoMiniSAT as the SAT solver for our encodings, as it performed better than MiniSAT in our initial benchmarking experiments.

To evaluate the performance of these configurations, we used the Number Partitioning Problem (NPP). Given a set of integers $S = \{a_1, \dots, a_n\}$, NPP asks whether there is a subset of $S$ such that the sum of its members is exactly $\sum a_i/2$. Following (Gent and Walsh 1998), we generated 100 random instances of NPP, for a given $n$ and $L$, as follows:

Create set $S = \{a_1, \dots, a_n\}$ such that each $a_i$ is selected independently at random from $[0 \dots 2^L]$.
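
This generation protocol can be sketched as follows (a minimal illustration of ours; the function names and the explicit even-total check are our own assumptions, not part of the paper's generator):

```python
import random

def npp_instance(n, L, seed=None):
    """Draw an NPP instance as in the experiments:
    n integers chosen uniformly and independently from [0 .. 2^L]."""
    rng = random.Random(seed)
    return [rng.randint(0, 2 ** L) for _ in range(n)]

def as_pb_constraint(S):
    """Rewrite the instance as the PB equality  sum a_i * x_i = (sum a_i) / 2.

    Only instances with an even total can admit a perfect partition; callers
    should check that before encoding."""
    return list(S), sum(S) // 2
```

For example, `as_pb_constraint([3, 5, 2, 4])` yields coefficients `[3, 5, 2, 4]` with bound `7`.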

We ran each instance on our two configurations and also on two other encodings, the Sorting Network based encoding (SN) and the Binary Adder encoding (BADD) (Eén and Sorensson 2006), provided by MiniSAT+¹. All running times reported in this paper are total running times (the sum of the time spent generating the CNF formula and the time spent solving it). We also tried to run the experiments with the BDD encoder, but as the CNF produced by the BDD encoder is exponentially big, it failed to solve medium and large instances.

Before we describe the results of the experiments, we discuss some properties of the number partitioning problem.

### Number Partitioning Problem

The number partitioning problem is NP-complete, and it can also be seen as a special case of the subset sum problem. In the SAT context, an instance of NPP can be rewritten as a PB-constraint whose comparison operator is "=". Neither this problem nor the subset sum problem has received much attention from the SAT community.

The size of an instance of NPP, where set $S$ has $n$ elements and $a_{\text{Max}}$ is the maximum absolute value in $S$, is $\Theta(n \log(a_{\text{Max}})) + n$. It is known that if the value of $a_{\text{Max}}$ is polynomial with respect to $n$, the standard dynamic programming approach can solve this problem in time $O(n \, a_{\text{Max}})$, which is polynomial with respect to the instance size. If $a_{\text{Max}}$ is very large, $2^{2^{\Omega(n)}}$, the naive algorithm, which generates all $2^n$ subsets of $S$, works in polynomial time with respect to the instance size. The hard instances for this problem are those in which $a_{\text{Max}}$ is neither too small nor too large with respect to $n$.

In (Borgs, Chayes, and Pittel 2001), the authors defined $k = L/n$ and showed that NPP has a phase transition at $k=1$: for $k < 1$, there are many perfect partitions with probability tending to 1 as $n \to \infty$, while for $k > 1$, there are no perfect partitions with probability tending to 1 as $n \to \infty$.

### Experiments

All the experiments were performed on a Linux cluster (Intel(R) Xeon(R) 2.66GHz). We set the time limit to be 10 minutes.

¹http://minisat.se/

Figure 1: The left-hand figure plots the best solver for pairs $n$ and $L$ ($n = 3 \cdots 30$, $L = 3 \cdots 2n$). The right-hand figure shows the average solving time, in seconds, of the engines which solved all 100 instances within the 10-minute timeout, for $n = L \in 3 \cdots 30$.

During our experiments, we noticed that the sorting network encoding in MiniSAT+ incorrectly announces some unsatisfiable instances to be satisfiable (an example of which is the following constraint). We did not investigate the cause of this issue in the MiniSAT+ source code, and all the reported timings are using the broken code.

$$5x_1 + 7x_2 + 1x_3 + 5x_4 = 9.$$

In our experiments, we generated 100 instances for each $n \in \{3..30\}$ and $L \in \{3..2n\}$. We say a solver wins on a set of instances if it solves more instances than the others; in the case of a tie, we decide the winner by the average running time. The instances on which each solver performed best are plotted in Figure 1. As the Sorting Network solver was never a winner on any of the sets, it does not show up in the graph.

One can observe the following patterns from the data presented in Figure 1:
+
|
| 457 |
+
1. For $n < 15$, all solvers successfully solve all the instances.
|
| 458 |
+
|
| 459 |
+
2. Sorting network fails to solve all the instances where $n = 20$.
|
| 460 |
+
|
| 461 |
+
3. BADD solves all the instances when $n = L = 24$ in a reasonable time, but it suddenly fails when the $n(L)$ gets larger.
|
| 462 |
+
|
| 463 |
+
4. For large enough $n$ ($n < 15$) BADD is the winner only when $L$ is small.
|
| 464 |
+
|
| 465 |
+
5. For large enough $n$ ($n < 15$) either PDC or PDP is the best performing solver.
### Conclusion and Future Work
We presented a method for translating Pseudo-Boolean constraints into CNF. The size of the produced CNF is polynomial with respect to the input size. We also showed that for exponentially many instances, the produced CNF is arc-consistent. The number of arc-consistent instances for our encodings is much bigger than that of the existing encodings.
In our experimental evaluation section, we described a set of randomly generated number partitioning instances with two parameters, *n* and *L*, where *n* is the size of the set and $2^L$ is the maximum value in the set. The experimental results suggest that the Prime.DP and Prime.DC encodings outperform the Binary Adder and Sorting Network encodings.
#### Future work
The upper bounds for our encodings, presented in Table 2, are not tight. We hope to improve these and give the exact asymptotic sizes. Further experimental evaluation is needed to determine the relative performance of the various methods on more practical instances, and on instances with larger numbers of variables. Finally, we hope to develop heuristics for automatically choosing the best encoding to use for any given PB constraint.
### References
Aavani, A. 2011. Translating pseudo-Boolean constraints into CNF. CoRR abs/1104.1479.

Aloul, F.; Ramani, A.; Markov, I.; and Sakallah, K. 2002. PBS: A backtrack-search pseudo-Boolean solver and optimizer. In Proceedings of the 5th International Symposium on Theory and Applications of Satisfiability, 346–353. Citeseer.

Bailleux, O.; Boufkhad, Y.; and Roussel, O. 2009. New encodings of pseudo-Boolean constraints into CNF. In Theory and Applications of Satisfiability Testing – SAT 2009, 181–194.

Borgs, C.; Chayes, J.; and Pittel, B. 2001. Phase transition and finite-size scaling for the integer partitioning problem. Random Structures & Algorithms 19(3-4):247–288.

---PAGE_BREAK---

Ding, C.; Pei, D.; and Salomaa, A. 1996. *Chinese Remainder Theorem: Applications in Computing, Coding, Cryptography*. World Scientific Publishing Co., Inc., River Edge, NJ, USA.

Eén, N., and Sörensson, N. 2006. Translating pseudo-Boolean constraints into SAT. *Journal on Satisfiability, Boolean Modeling and Computation* 2(3-4):1–25.

Eén, N. 2005. *SAT Based Model Checking*. Ph.D. Dissertation, Department of Computing Science, Chalmers University of Technology and Göteborg University.

Farhi, B., and Kane, D. 2009. New results on the least common multiple of consecutive integers. *Proc. Amer. Math. Soc.* 137:1933–1939.

Gent, I. P., and Walsh, T. 1998. Analysis of heuristics for number partitioning. *Computational Intelligence* 14(3):430–451.

Sheini, H., and Sakallah, K. 2006. Pueblo: A hybrid pseudo-Boolean SAT solver. *Journal on Satisfiability, Boolean Modeling and Computation* 2:61–96.

Tseitin, G. 1968. On the complexity of derivation in propositional calculus. *Studies in Constructive Mathematics and Mathematical Logic* 2:115–125.
samples/texts_merged/2590883.md

---PAGE_BREAK---
# A LOADING-DEPENDENT MODEL OF PROBABILISTIC CASCADING FAILURE

**IAN DOBSON**
Electrical & Computer Engineering Department
University of Wisconsin-Madison
Madison, WI 53706
E-mail: dobson@engr.wisc.edu

**BENJAMIN A. CARRERAS**
Oak Ridge National Laboratory
Oak Ridge, TN 37831
E-mail: carrerasba@ornl.gov

**DAVID E. NEWMAN**
Physics Department
University of Alaska
Fairbanks, AK 99775
E-mail: ffden@uaf.edu
We propose an analytically tractable model of loading-dependent cascading failure that captures some of the salient features of large blackouts of electric power transmission systems. This leads to a new application and derivation of the quasibinomial distribution and its generalization to a saturating form with an extended parameter range. The saturating quasibinomial distribution of the number of failed components has a power-law region at a critical loading and a significant probability of total failure at higher loadings.
# 1. INTRODUCTION
Cascading failure is the usual mechanism for large blackouts of electric power transmission systems. For example, long, intricate cascades of events caused the August 1996 blackout in northwestern America [25] that disconnected 30,390 MW of power
to 7.5 million customers [23]. An even more spectacular example is the August 2003 blackout in northeastern America that disconnected 61,800 MW of power to an area spanning 8 states and 2 provinces and containing 50 million people [33]. The vital importance of the electrical infrastructure to society motivates the construction and study of models of cascading failure.
In this article, we describe some of the salient features of cascading failure in blackouts with an analytically tractable probabilistic model. The features that we abstract from the formidable complexities of large blackouts are the large but finite number of components; components that fail when their load exceeds a threshold; an initial disturbance loading the system; and the additional loading of components by the failure of other components. The initial overall system stress is represented by upper and lower bounds on a range of initial component loadings. The model neglects the length of times between events and the diversity of power system components and interactions. Of course, an analytically tractable model is necessarily much too simple to represent with realism all of the aspects of cascading failure in blackouts; the objective is, rather, to help understand some global systems effects that arise in blackouts and in more detailed models of blackouts. Although our main motivation is large blackouts, the model is sufficiently simple and general that it could be applied to cascading failure of other large, interconnected infrastructures.
We summarize our cascading failure model and indicate some of the connections to the literature that are elaborated later. The model has many identical components randomly loaded. An initial disturbance adds load to each component and causes some components to fail by exceeding their loading limit. Failure of a component causes a fixed load increase for other components. As components fail, the system becomes more loaded and cascading failure of further components becomes likely. The probability distribution of the number of failed components is a saturating quasibinomial distribution. The quasibinomial distribution was introduced by Consul [11] and further studied by Burtin [3], Islam, O'Shaughnessy, and Smith [19], and Jaworski [20]. The saturation in our model extends the parameter range of the quasibinomial distribution, and the saturated distribution can represent highly stressed systems with a high probability of all components failing. Explicit formulas for the saturating quasibinomial distribution are derived using a recursion and via the quasimultinomial distribution of the number of failures in each stage of the cascade. These derivations of the quasibinomial distribution and its generalization to a saturating form appear to be novel. The cascading failure model can also be expressed as a queuing model, and in the nonsaturating case, the number of customers in the first busy period is known to be quasibinomial [10,32].
The article is organized as follows. Section 2 describes cascading failure blackouts and Section 3 describes the model and its normalization. Section 4 derives the saturating quasibinomial distribution of the number of failures and shows how the saturation generalizes the quasibinomial distribution and extends its parameter range. Section 5 illustrates the use of the model in studying the effect of system loading.
---PAGE_BREAK---
# 2. THE NATURE OF CASCADING FAILURE BLACKOUTS
Bulk electrical power transmission systems are complex networks of large numbers of components that interact in diverse ways. For example, most of America and Canada east of the Rocky Mountains is supplied by a single network running at a shared supply frequency. This network includes thousands of generators, tens of thousands of transmission lines and network nodes, and about 100 control centers that monitor and control the network flows. The flow of power and some dynamical effects propagate on a continental scale. All of the electrical components have limits on their currents and voltages. If these limits are exceeded, automatic protection devices or the system operators disconnect the component from the system. We regard the disconnected component as failed because it is not available to transmit power (in practice, it will be reconnected later). Components can also fail in the sense of misoperation or damage due to aging, fire, weather, poor maintenance, or incorrect design or operating settings. In any case, the failure causes a transient and causes the power flow in the component to be redistributed to other components according to circuit laws and subsequently redistributed according to automatic and manual control actions. The transients and readjustments of the system can be local in effect or can involve components far away, so that a component disconnection or failure can effectively increase the loading of many other components throughout the network. In particular, the propagation of failures is not limited to adjacent network components. The interactions involved are diverse and include deviations in power flows, frequency, and voltage, as well as operation or misoperation of protection devices, controls, operator procedures, and monitoring and alarm systems. However, all of the interactions between component failures tend to be stronger when components are highly loaded. 
For example, if a more highly loaded transmission line fails, it produces a larger transient, there is a larger amount of power to redistribute to other components, and failures in nearby protection devices are more likely. Moreover, if the overall system is more highly loaded, components have smaller margins so they can tolerate smaller increases in load before failure, the system nonlinearities and dynamical couplings increase, and the system operators have fewer options and more stress.
A typical large blackout has an initial disturbance or trigger event, followed by a sequence of cascading events. Each event further weakens and stresses the system and makes subsequent events more likely. Examples of an initial disturbance are short circuits of transmission lines through untrimmed trees, protection device misoperation, and bad weather. The blackout events and interactions are often rare, unusual, or unanticipated because the likely and anticipated failures are already routinely accounted for in power system design and operation. The complexity is such that it can take months after a large blackout to sift through the records, establish the events occurring, and reproduce with computer simulations and hindsight a causal sequence of events.
The historically high reliability of North American power transmission systems is largely due to estimating the transmission system capability and designing
and operating the system with margins with respect to a chosen subset of likely and serious contingencies. The analysis is usually either a deterministic analysis of estimated worst cases or a Monte Carlo simulation of moderately detailed probabilistic models that capture steady-state interactions [2]. Combinations of likely contingencies and some dependencies between events such as common mode or common cause are sometimes considered. The analyses address the first few likely failures rather than the propagation of many rare or unanticipated failures in a cascade.
We briefly review some other approaches to cascading failure in power system blackouts. Carreras, Lynch, Dobson, and Newman [4] represented cascading transmission line overloads and outages in a power system model using the DC load flow approximation and standard linear programming optimization of the generation dispatch. The model shows critical point behavior as load is increased and can show power tails similar to those observed in blackout data. Chen and Thorp [9] modeled power system blackouts using the DC load flow approximation and standard linear programming optimization of the generation dispatch and represented in detail hidden failures of the protection system. The expected blackout size is obtained using importance sampling and it shows some indications of a critical point as loading is increased. Rios, Kirschen, Jawayeera, Nedic, and Allan [30] evaluated expected blackout cost using Monte Carlo simulation of a power system model that represents the effects of cascading line overloads, hidden failures of the protection system, power system dynamic instabilities, and the operator responses to these phenomena. Ni, McCalley, Vittal, and Tayyib [26] evaluate expected contingency severities based on real-time predictions of the power system state to quantify the risk of operational conditions. The computations account for current and voltage limits, cascading line overloads, and voltage instability. Roy, Asavathiratham, Lesieutre, and Verghese [31] constructed randomly generated tree networks that abstractly represent influences between idealized components. Components can be failed or operational according to a Markov model that represents both internal component failure and repair processes and influences between components that cause failure propagation. The effects of the network degree and the intercomponent influences on the failure size and duration were studied. 
Pepyne, Panayiotou, Cassandras, and Ho [29] also used a Markov model for discrete state power system nodal components, but they propagated failures along the transmission lines of a power system network with a fixed probability. They studied the effect of the propagation probability and maintenance policies that reduce the probability of hidden failures. The challenging problem of determining cascading failure due to dynamic transients in hybrid nonlinear differential equation models was addressed by DeMarco [15] using Lyapunov methods applied to a smoothed model and by Parrilo, Lall, Paganini, Verghese, Lesieutre, and Marsden [28] using Karhunen-Loeve and Galerkin model reduction. Watts [34] described a general model of cascading failure in which failures propagate through the edges of a random network. Network nodes have a random threshold and fail when this threshold is exceeded by a sufficient fraction of failed nodes one edge away. Phase transitions causing large cascades can occur when the network becomes critically connected by having sufficiently high average degree or when a highly connected network has sufficiently low average degree so that the effect of a single failure is not swamped by a high connectivity to unfailed nodes. Lindley and Singpurwalla [24] described some foundations for causal and cascading failure in infrastructures and model cascading failure as an increase in a component failure rate within a time interval after another component fails. Initial versions of the cascading failure model of this article appear in Dobson, Chen, Thorp, Carreras, and Newman [18] and Dobson, Carreras, and Newman [16].
# 3. DESCRIPTION OF MODEL
The model has *n* identical components with random initial loads. For each component, the minimum initial load is $L^{\min}$ and the maximum initial load is $L^{\max}$. For $j = 1, 2, \dots, n$, component *j* has initial load $L_j$ that is a random variable uniformly distributed in [$L^{\min}, L^{\max}$]. $L_1, L_2, \dots, L_n$ are independent.
Components fail when their load exceeds $L^{\text{fail}}$. When a component fails, a fixed and positive amount of load *P* is transferred to each of the components.
To start the cascade, an initial disturbance loads each component by an additional amount *D*. Some components may then fail depending on their initial loads $L_j$, and the failure of each of these components will distribute an additional load *P* that can cause further failures in a cascade. The components become progressively more loaded as the cascade proceeds.
In particular, the model produces failures in stages *i* = 0,1,2,... according to the following algorithm, where $M_i$ is the number of failures in stage *i*.
**CASCADE Algorithm**
0. All *n* components are initially unfailed and have initial loads $L_1, L_2, \dots, L_n$ that are independent random variables uniformly distributed in [$L^{\min}, L^{\max}$].
1. Add the initial disturbance *D* to the load of each component. Initialize the stage counter *i* to zero.
2. Test each unfailed component for failure: For *j* = 1, ..., *n*, if component *j* is unfailed and its load is greater than $L^{\text{fail}}$, then component *j* fails. Suppose that $M_i$ components fail in this step.
3. Increment the component loads according to the number of failures $M_i$: Add $M_i P$ to the load of each component.
4. Increment *i* and go to step 2.
The CASCADE algorithm has the property that if there are no failures in stage *j* so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$ so that there are no subsequent failures (in step 2, $M_j$ can be zero either because all the components have already failed or because the loads of the unfailed components are less than $L^{\text{fail}}$). Since there are *n* components, it follows that $M_n = 0$ and that the outcome with the maximum number of stages with nonzero failures is $1 = M_0 = M_1 = \dots = M_{n-1}$. We are most interested in the total number of failures $S = M_0 + M_1 + \dots + M_{n-1}$.
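As a concrete illustration (our own sketch, not the authors' code), the CASCADE algorithm translates directly into a short simulation of the normalized model:

```python
import random

def cascade(n, d, p, seed=None):
    """Simulate the normalized CASCADE algorithm: n components with initial
    loads uniform in [0, 1], initial disturbance d, load transfer p, and
    failure threshold 1.  Returns [M_0, M_1, ...], the failures per stage,
    omitting trailing zero stages; the total S is the sum of the list."""
    rng = random.Random(seed)
    # Step 0: random initial loads; step 1: add the disturbance d.
    loads = [rng.random() + d for _ in range(n)]
    failed = [False] * n
    stages = []
    while True:
        # Step 2: fail every unfailed component whose load exceeds 1.
        just_failed = [j for j in range(n) if not failed[j] and loads[j] > 1]
        if not just_failed:
            break
        for j in just_failed:
            failed[j] = True
        stages.append(len(just_failed))
        # Step 3: each of the M_i failures adds load p to every component.
        loads = [load + len(just_failed) * p for load in loads]
    return stages
```

For example, `cascade(1000, 0.001, 0.001)` simulates one cascade at the loading $d = p = 1/n$ and `sum(...)` of the result gives the total number of failures $S$.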
---PAGE_BREAK---
When interpreting the model in an application, the load increment *P* need not correspond only to transfer of a physical load such as the power flow through a component. Many ways by which a component failure makes the failure of other components more likely can be thought of as increasing an abstract "load" on the other components until failure occurs when a threshold is reached.
It is useful to normalize the loads and model parameters so that the initial loads lie in [0,1] and $L^{\text{fail}} = 1$ while preserving the sequence of component failures and $M_0, M_1, \dots$. First, note that the sequence of component failures and $M_0, M_1, \dots$ are unchanged by adding the same constant to the initial disturbance *D* and the failure load $L^{\text{fail}}$. In particular, choosing the constant to be $L^{\max} - L^{\text{fail}}$, the initial disturbance *D* is modified to $D + (L^{\max} - L^{\text{fail}})$ and the failure load $L^{\text{fail}}$ is modified to $L^{\text{fail}} + (L^{\max} - L^{\text{fail}}) = L^{\max}$. Then all of the loads are shifted and scaled to yield normalized parameters. The normalized initial load on component *j* is $\ell_j = (L_j - L^{\min})/(L^{\max} - L^{\min})$ so that $\ell_j$ is a random variable uniformly distributed on [0,1]. The normalized minimum initial load is zero, and the normalized maximum initial load and the normalized failure load are both one. The normalized modified initial disturbance and the normalized load increase when a component fails are
$$d = \frac{D + L^{\max} - L^{\text{fail}}}{L^{\max} - L^{\min}}, \quad p = \frac{P}{L^{\max} - L^{\min}}. \qquad (1)$$
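A minimal sketch of the normalization in Eq. (1), with hypothetical function and argument names of our own:

```python
def normalize(D, P, L_min, L_max, L_fail):
    """Normalized disturbance d and load transfer p of Eq. (1).

    The shift by (L_max - L_fail) folds the failure threshold into the
    disturbance, so that the normalized failure load becomes 1 and the
    normalized initial loads are uniform in [0, 1]."""
    span = L_max - L_min
    d = (D + L_max - L_fail) / span
    p = P / span
    return d, p
```

When $L^{\text{fail}} = L^{\max}$ and the initial loads already lie in $[0,1]$, the parameters pass through unchanged, as expected.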
An alternative way to describe the model follows. It is convenient to use the normalized parameters in Eq. (1). Let $N(t)$ be the number of components with loads in $(1-t, 1]$. If the $n$ initial component loadings are regarded as $n$ points in $[0, 1] \subset \mathbb{R}$, then $N(t)$ is the number of points greater than $1-t$. Then $0 \le N(t) \le n$, the sample paths of $N$ are nondecreasing, and $N(t) = 0$ for $t \le 0$ and $N(t) = n$ for $t \ge 1$.
Let the number of components failed at or before stage *j* be $S_j = M_0 + M_1 + \dots + M_j$. Then, assuming $S_{-1} = 0$, the CASCADE algorithm generates $S_0, S_1, \dots$ according to
$$S_j = N(d + S_{j-1}p), \quad j = 0, 1, \dots \qquad (2)$$
Then $0 \le S_j \le n$, $S_j$ is nondecreasing, and $S_k = S_{k+1}$ implies that $S_j = S_{j+1}$ for $j \ge k$. The minimum such $k$ is the maximum stage number in which failures occur and $S_{-1} < S_0 < S_1 < \dots < S_k = S_{k+1} = \dots$ and the total number of failures $S = S_k$; that is,
$$N(d + Sp) = S, \qquad (3)$$
$$N(d + S_j p) > S_j, \quad -1 \le j < k. \qquad (4)$$
Moreover, for $j < k$ and $r = 0, 1, \dots, M_{j+1} - 1$,
$$N(d + (S_j + r)p) \ge N(d + S_j p) = S_{j+1} = S_j + M_{j+1} > S_j + r. \qquad (5)$$
---PAGE_BREAK---

Therefore, $N(d + sp) > s$ for $s = 0, 1, \dots, S - 1$, and this inequality and Eq. (3) allow the total number of failures to be characterized as
$$
S = \min\{s \mid N(d + sp) = s,\ s \in \{0,1,2,\dots\}\}. \quad (6)
$$
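The characterization (6) suggests a direct way to compute $S$ from a sample of normalized initial loads; a sketch with helper names of our own:

```python
def total_failures(loads, d, p):
    """Total failures S via Eq. (6): S = min{ s : N(d + s*p) = s }, where
    N(t) counts the normalized initial loads lying in (1 - t, 1]."""
    n = len(loads)

    def N(t):
        return sum(1 for load in loads if load > 1 - t)

    # Since N(d + s*p) > s for every s below S, scan upward until the
    # fixed point is reached (s = n is always a fixed point).
    s = 0
    while s < n and N(d + s * p) > s:
        s += 1
    return s
```

The scan terminates at $s = n$ at the latest, which corresponds to total failure of all components.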
If, at stage *j*, $d + S_j p > 1$, we say that the model saturates. Saturation implies $S_{j+1} = n$. Saturation never occurs if *d* and *p* are small enough that $d + np < 1$.
The model can be formulated as a queue with a single server. Exactly $n$ customers arrive during a given hour independently and uniformly. The server is available to serve these customers at time $d$ after the start of the hour because of completing some other task. The customer service time is $p$. Then, $S$ is the number of customers that arrive during the first busy period. The queue saturates when the first busy period runs past the end of the hour. Charalambides [10] and Takács [32] analyzed this queue in the nonsaturating case described in Section 4.3.
The model can also be recast in the form of an approximate and idealized fiber bundle model. There are $n$ identical, parallel fibers in the bundle. The $L_j$ of the unnormalized model now indicates breaking strength: Fiber $j$ has random breaking strength $L^{\text{fail}} - L_j$ that is uniformly distributed in [$L^{\text{fail}} - L^{\max}$, $L^{\text{fail}} - L^{\min}$]. Each fiber has zero load initially. Then, an initial force is applied to the bundle that increases the load of each fiber to $D$ and this starts a burst avalanche of fiber breaks of size $S$. When a fiber breaks, it distributes a constant amount of load $P$ to all the other fibers. In contrast, and with better physical justification, idealized fiber bundle models with global redistribution as described by Kloster, Hansen, and Hemmer [22] redistribute the current fiber load equally to the remaining fibers.
# 4. DISTRIBUTION OF NUMBER OF FAILURES
The main result is that the distribution of the total number of component failures $S$ is
$$
P[S=r] = \begin{cases}
\binom{n}{r} \phi(d) (d+rp)^{r-1} (\phi(1-d-rp))^{n-r}, & r=0,1,\ldots,n-1 \\
1 - \sum_{s=0}^{n-1} P[S=s], & r=n,
\end{cases} \tag{7}
$$
where $p \ge 0$ and the saturation function is
$$
\phi(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 1 \\ 1, & x > 1. \end{cases} \qquad (8)
$$
It is convenient to assume that $0^0 \equiv 1$ and $0/0 \equiv 1$ when these expressions arise in any formula in this article.
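For illustration, Eqs. (7) and (8) translate directly into code; this sketch (our own helper names) assumes $0 < d < 1$, so the $0^0$ and $0/0$ conventions are never needed:

```python
from math import comb

def phi(x):
    """Saturation function of Eq. (8)."""
    return min(max(x, 0.0), 1.0)

def p_failures(r, d, p, n):
    """Saturating quasibinomial P[S = r] of Eq. (7).

    Assumes 0 < d < 1; the r = n term is defined by complementation."""
    if r < n:
        return (comb(n, r) * phi(d) * (d + r * p) ** (r - 1)
                * phi(1 - d - r * p) ** (n - r))
    return 1.0 - sum(p_failures(s, d, p, n) for s in range(n))
```

In the nonsaturating case $d + np \le 1$, the complement term for $r = n$ agrees numerically with the quasibinomial formula $d(d+np)^{n-1}$, as Eq. (9) below requires.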
---PAGE_BREAK---
If $d \ge 0$ and $d + np \le 1$, then there is no saturation ($\phi(x) = x$) and Eq. (7) reduces to the quasibinomial distribution
$$P[S=r] = \binom{n}{r} d(d+rp)^{r-1}(1-d-rp)^{n-r}. \quad (9)$$
The quasibinomial distribution was introduced by Consul [11] to model an urn problem in which a player makes strategic decisions. Burtin [3] derived the distribution of the number of initially uninfected nodes that become infected in an inverse epidemic process in a random mapping. This distribution is quasibinomial, with $d$ the fraction of initially infected nodes and $p$ the uniform random mapping probability. Islam et al. [19] interpreted $d$ and $p$ as primary and secondary infection probabilities and applied the quasibinomial distribution to data on the final size of influenza epidemics. Jaworski [20] generalized the derivation to a random mapping with a general fixed-point probability.
The cascading failure model gives a new application and interpretation of the quasibinomial distribution. Moreover, the saturation in Eq. (7) extends the range of parameters of the quasibinomial distribution to allow $d + np > 1$. Section 5 shows that this extended parameter range can describe regimes with a high probability of all components failing.
The next two subsections derive Eq. (7) from the CASCADE algorithm in two ways: by means of a recursion and by means of the quasimultinomial joint distribution of $M_0, M_1, \dots, M_{n-1}$.
## 4.1. Recursion
It is convenient to show the dependence of the distribution of number of failures on the normalized parameters by writing $P[S=r] = f(r,d,p,n)$.
In the case of $n=0$ components,
$$f(0, d, p, 0) = 1. \qquad (10)$$
According to the CASCADE algorithm, when the initial disturbance $d \le 0$, no components fail, and when $d \ge 1$, all $n$ components fail. Then
$$f(r, d, p, n) = \begin{cases} 1 - \phi(d), & r=0 \\ 0, & 0 < r < n \\ \phi(d), & r=n \end{cases} \quad (d \le 0 \text{ or } d \ge 1) \text{ and } n > 0. \tag{11}$$
We assume $n > 0$ and $0 < d < 1$ for the rest of the subsection.
The initial disturbance $d$ causes stage 0 failure of the components that have initial load $\ell$ in $(1-d, 1]$. Therefore, the probability of any component failing in stage 0 is $d$ and
---PAGE_BREAK---
$$P[M_0 = k] = \binom{n}{k} d^k (1-d)^{n-k}. \quad (12)$$
Suppose that $M_0 = k$ and consider the $n-k$ components that did not fail in stage 0. Since none of the $n-k$ components failed in stage 0, their initial loads $\ell$ must lie in $[0, 1-d]$ and the distribution of their initial loads conditioned on not failing in stage 0 is uniform in $[0, 1-d]$. In stage 1, each of the $n-k$ components has had a load increase $d$ from the initial disturbance and an additional load increase $kp$ from the stage 0 failure of $k$ components. Therefore, the equivalent total initial disturbance for each of the $n-k$ components is $D = kp + d$.
|
| 223 |
+
|
| 224 |
+
To summarize, assuming $M_0 = k$, the failure of the $n-k$ components in stage 1 is governed by the model with initial disturbance $D = kp + d$, load transfer $P = p$, $L^{\min} = 0$, $L^{\max} = 1-d$, $L^{\text{fail}} = 1$, and $n-k$ components. Normalizing the parameters using Eq. (1) yields that the failure of the $n-k$ components is governed by the model with normalized initial disturbance $kp/(1-d)$ and normalized load transfer $p/(1-d)$; that is,
|
| 225 |
+
|
| 226 |
+
$$P[S=r|M_0=k] = f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right). \quad (13)$$
|
| 227 |
+
|
| 228 |
+
Combining Eqs. (12) and (13) yields the recursion
|
| 229 |
+
|
| 230 |
+
$$
|
| 231 |
+
\begin{align*}
|
| 232 |
+
f(r,d,p,n) &= \sum_{k=0}^{r} P[S=r|M_0=k] P[M_0=k] \\
|
| 233 |
+
&= \sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right), \\
|
| 234 |
+
&\qquad 0 \le r \le n, \quad 0 < d < 1, \quad n > 0. \tag{14}
|
| 235 |
+
\end{align*}
|
| 236 |
+
$$
|
| 237 |
+
|
| 238 |
+
Equations (10), (11), and (14) define $f(r,d,p,n) = P[S=r]$ for all $n \ge 0$ and $p \ge 0$. Equations (10) and (11) agree with Eq. (7). Moreover, it is routine to prove in the Appendix that Eq. (7) satisfies recursion (14). Therefore, Eq. (7) is the distribution of $S$ in the CASCADE algorithm. Thus, the recursion offers a simple way to derive the saturating quasibinomial distribution that avoids complicated algebra or combinatorics. It is also straightforward to use Eqs. (10) and (14) to confirm by induction on $n$ that Eq. (7) is a probability distribution.
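Because each term of recursion (14) either reduces $n$ or drives the disturbance argument to zero, the recursion terminates and can be checked numerically against the closed form. The following Python sketch is ours, not from the paper (function names are arbitrary); it implements Eqs. (10), (11), and (14) alongside the saturating quasibinomial formula of Eq. (7):

```python
from math import comb

def phi(x):
    """Saturating function: clamp x to [0, 1]."""
    return min(max(x, 0.0), 1.0)

def f_closed(r, d, p, n):
    """Saturating quasibinomial distribution of Eq. (7)."""
    if r < n:
        if d <= 0:
            return 1.0 if r == 0 else 0.0
        if d >= 1 or d + r * p >= 1:          # saturated terms vanish for r < n
            return 0.0
        return comb(n, r) * d * (d + r * p) ** (r - 1) * (1 - d - r * p) ** (n - r)
    # r == n collects whatever probability the nonsaturating terms leave over
    return 1.0 - sum(f_closed(k, d, p, n) for k in range(n))

def f_rec(r, d, p, n):
    """The same distribution computed from Eqs. (10), (11), and (14)."""
    if n == 0:                                # Eq. (10)
        return 1.0 if r == 0 else 0.0
    if d <= 0 or d >= 1:                      # Eq. (11)
        if r == 0:
            return 1.0 - phi(d)
        return phi(d) if r == n else 0.0
    return sum(comb(n, k) * d**k * (1 - d) ** (n - k)
               * f_rec(r - k, k * p / (1 - d), p / (1 - d), n - k)
               for k in range(r + 1))         # Eq. (14)
```

For both a nonsaturating parameter choice and a saturating one (such as $d = 0.1$, $p = 0.3$, $n = 6$, where $d + np > 1$), the two computations agree term by term and each distribution sums to 1.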
**4.2. A Quasimultinomial Distribution**

This subsection shows that the joint distribution of $M_0, M_1, \dots, M_{n-1}$ is quasimultinomial and hence derives Eq. (7). It is convenient throughout to assume $d \ge 0$, restrict $m_0, m_1, \dots$ to nonnegative integers, and write $s_i = m_0 + m_1 + \dots + m_i$ for $i = 0, 1, \dots$ and $s_{-1} = 0$.

Let $\alpha_0 = \phi(d)$, $\beta_0 = 1$, and, for $i=1,2,\dots$,

$$\alpha_i = \phi \left( \frac{m_{i-1} p}{1 - d - s_{i-2} p} \right), \quad \beta_i = \phi(1 - d - s_{i-2} p). \tag{15}$$

The identity

$$\beta_i(1 - \alpha_i) = \beta_{i+1}, \quad i = 0, 1, 2, \dots, \tag{16}$$

can be verified using $1 - \phi(x) = \phi(1-x)$ and $d \ge 0$ and considering all of the cases.

In step 2 of stage 0 in the CASCADE algorithm, the probability that the load increment of $d$ causes a given component to fail is $\alpha_0 = \phi(d)$ and the probability of $m_0$ failures in the $n$ components is

$$P[M_0 = m_0] = \binom{n}{m_0} \alpha_0^{m_0} (1-\alpha_0)^{n-m_0}. \tag{17}$$

Consider the end of step 2 of stage $i \ge 1$ in the CASCADE algorithm. The failures that have occurred are $M_0 = m_0, M_1 = m_1, \dots, M_i = m_i$ and there are $n - s_i$ unfailed components, but the component loads have not yet been incremented by $m_i p$ in step 3.

Suppose that $d + s_{i-1}p < 1$. Then, conditioned on the $n - s_i$ components not yet having failed, the loads of the $n - s_i$ unfailed components are uniformly distributed in $[d + s_{i-1}p, 1]$. In step 3, the probability that the load increment of $m_i p$ causes a given unfailed component to fail is $\alpha_{i+1}$ and the probability of $m_{i+1}$ failures in the $n - s_i$ unfailed components is

$$
\begin{align}
P[M_{i+1} &= m_{i+1} | M_i = m_i, \dots, M_0 = m_0] \nonumber \\
&= \binom{n-s_i}{m_{i+1}} \alpha_{i+1}^{m_{i+1}} (1-\alpha_{i+1})^{n-s_{i+1}}, \qquad m_{i+1} = 0, 1, \dots, n-s_i. \tag{18}
\end{align}
$$

Suppose that $d + s_{i-1}p \ge 1$. Then, all of the components must have failed on a previous step and $P[M_{i+1} = m_{i+1}|M_i = m_i, \dots, M_0 = m_0] = 1$ for $m_{i+1} = 0$ and is zero otherwise. In this case, $\alpha_{i+1} = 0$ and Eq. (18) is verified.

We claim that for $s_i \le n$,

$$P[M_i = m_i, \dots, M_0 = m_0] = \frac{n!}{m_0! m_1! \cdots m_i! (n-s_i)!} (\alpha_0 \beta_0)^{m_0} (\alpha_1 \beta_1)^{m_1} \cdots (\alpha_i \beta_i)^{m_i} \beta_{i+1}^{n-s_i}. \tag{19}$$

Equation (19) is proved by induction on $i$. For $i=0$, Eq. (19) reduces to Eq. (17). The inductive step is verified by multiplying Eqs. (18) and (19) and using Eq. (16) to obtain $P[M_{i+1} = m_{i+1}, \dots, M_0 = m_0]$ in the form of Eq. (19).

An expression equivalent to Eq. (19) obtained using Eq. (16) is

$$
\begin{align}
P[M_i &= m_i, \dots, M_0 = m_0] \nonumber \\
&= \frac{n!}{m_0! m_1! \cdots m_i! (n-s_i)!} (\beta_0 - \beta_1)^{m_0} (\beta_1 - \beta_2)^{m_1} \cdots (\beta_i - \beta_{i+1})^{m_i} \beta_{i+1}^{n-s_i}. \tag{20}
\end{align}
$$

The CASCADE algorithm has the property that if there are no failures in stage $j$ so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$ and there are no subsequent failures. This property is verified by Eq. (20) because $m_j = 0$ implies $\beta_{j+1} = \beta_{j+2}$ so that the factor $(\beta_{j+1} - \beta_{j+2})^{m_{j+1}} = 0^{m_{j+1}}$, which vanishes unless $m_{j+1} = 0$. Iterating this argument gives $0 = M_j = M_{j+1} = \dots$. Since the maximum number of failures is $n$, the longest sequence of failures has $n$ stages with $M_0 = M_1 = \dots = M_{n-1} = 1$. It follows that $0 = M_n = M_{n+1} = \dots$ and that the nontrivial part of the joint distribution is determined by $M_0, M_1, \dots, M_{n-1}$. It also follows that $M_{n-1} = 0$ if there are fewer than $n$ stages with failures.

Equation (20) can now be rewritten for $i=n-1$. Let $I$ be the largest integer not exceeding $n$ such that $1-d-s_{I-2}p > 0$. Then, Eq. (20) becomes, for $s_{n-1} \le n$,

$$
\begin{align}
P[M_{n-1} &= m_{n-1}, \dots, M_0 = m_0] \nonumber \\
&= \frac{n!}{m_0! m_1! \cdots m_{n-1}! (n-s_{n-1})!} (\phi(d))^{m_0} (m_0 p)^{m_1} (m_1 p)^{m_2} \cdots (m_{I-2} p)^{m_{I-1}} \nonumber \\
&\qquad \times (\phi(1-d-s_{I-2}p))^{n-s_{I-1}} A(\mathbf{m}, I), \tag{21}
\end{align}
$$

where $A(\mathbf{m}, n) = 1$ and $A(\mathbf{m}, I) = 0^{m_{I+1}} \cdots 0^{m_{n-1}} 0^{n-s_{n-1}}$ for $I < n$. It follows from the definition of $A(\mathbf{m}, I)$ that Eq. (21) vanishes for $I < n$ unless $0 = M_{I+1} = \cdots = M_{n-1}$ and $S = M_0 + \cdots + M_I = n$. (Although Eq. (21) was derived assuming $d \ge 0$, it also holds for $d < 0$. In particular, for $d < 0$, Eq. (21) implies $P[M_{n-1} = 0, \dots, M_0 = 0] = 1$.)

Equation (21) generalizes the quasibinomial distribution and is a form of quasimultinomial distribution. It is a different generalization of the quasibinomial distribution than the quasitrinomial distribution considered by Berg and Mutafchiev [1] to describe numbers of nodes in central components of random mappings.

Suppose that $S = M_0 + \dots + M_{n-1} = r < n$. Then, $M_{n-1} = 0$ and $M_0 + \dots + M_{n-2} = r - M_{n-1} = r$, and Eq. (21) vanishes unless $I=n$. Summing Eq. (21) over nonnegative integers $m_0, \dots, m_{n-1}$ that sum to $r$ yields

$$
\begin{align*}
P[S=r] &= \sum_{s_{n-1}=r} \frac{n!}{m_0! m_1! \cdots m_{n-1}! (n-r)!} (\phi(d))^{m_0} (m_0 p)^{m_1} \cdots (m_{n-2} p)^{m_{n-1}} (\phi(1-d-rp))^{n-r} \\
&= \binom{n}{r} (\phi(1-d-rp))^{n-r} p^r \sum_{s_{n-1}=r} \frac{r!}{m_0! m_1! \cdots m_{n-1}!} \left(\frac{\phi(d)}{p}\right)^{m_0} m_0^{m_1} \cdots m_{n-2}^{m_{n-1}},
\end{align*}
$$

which reduces to Eq. (7) using a lemma by Katz [21]. (The context of Katz's lemma assumes $\phi(d)/p$ is a positive integer, but the generalization is immediate.)
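As a cross-check of Eq. (7), the CASCADE algorithm itself is easy to simulate. The Monte Carlo sketch below is our own illustration, assuming the normalized model (initial loads uniform on $[0,1]$, failure threshold 1, load transfer $p$ per failure); the empirical distribution of $S$ is compared with the saturating quasibinomial formula, coded here directly from Eq. (7):

```python
import random
from math import comb

def quasibinomial(r, d, p, n):
    """P[S = r] from Eq. (7) for 0 < d < 1; r = n takes the remaining mass."""
    if r < n:
        if d + r * p >= 1:
            return 0.0
        return comb(n, r) * d * (d + r * p) ** (r - 1) * (1 - d - r * p) ** (n - r)
    return 1.0 - sum(quasibinomial(k, d, p, n) for k in range(n))

def cascade_sample(d, p, n, rng):
    """One run of the normalized CASCADE algorithm; returns the total failures S."""
    loads = [rng.random() + d for _ in range(n)]        # apply initial disturbance d
    s = 0
    while True:
        new = sum(1 for x in loads if x > 1)            # failures in this stage
        if new == 0:
            return s
        s += new
        loads = [x + new * p for x in loads if x <= 1]  # transfer p per failure

rng = random.Random(0)
d, p, n, runs = 0.05, 0.05, 10, 200_000
counts = [0] * (n + 1)
for _ in range(runs):
    counts[cascade_sample(d, p, n, rng)] += 1
empirical = [c / runs for c in counts]
```

With 200,000 runs the empirical frequencies match Eq. (7) to within Monte Carlo noise of a few parts per thousand.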
**4.3. Applying a Generalized Ballot Theorem**

Charalambides [10] explained how the quasibinomial distribution appears as a consequence of generalized ballot theorems in the theory of fluctuations of stochastic processes [32]. We summarize this approach and note that it derives only the nonsaturating cases of Eq. (7).

We assume $0 < d < 1$. Consider $p$ multiplied by the number of components $N(t)$ with loads in $(1-t, 1]$. For $0 \le t \le 1$, $pN(t)$ is a stochastic process with interchangeable increments whose sample functions are nondecreasing step functions with $pN(0) = 0$. According to Eq. (6), the first passage time of $t - pN(t)$ through $d$ is $\min\{t \mid pN(t) = t - d\} = \min\{d + sp \mid N(d + sp) = s\} = d + Sp$. Then, according to Takács [32, Sect. 17, Thm. 4],

$$P[d + Sp \le t] = \sum_{d \le y \le t} \frac{d}{y} P[pN(y) = y - d] \tag{22}$$

for $0 < d \le t \le 1$; that is,

$$\sum_{k=0}^{\lfloor (t-d)/p \rfloor} P[S=k] = \sum_{k=0}^{\lfloor (t-d)/p \rfloor} \frac{d}{d+kp} P[N(d+kp)=k]. \tag{23}$$

Setting $t = d + rp$ in Eq. (23) for $r = 0, 1, \dots, \min\{n, (1-d)/p\}$, differencing the resulting equations, and using the binomial distribution of $N(t)$ for $0 \le t \le 1$ yields the nonsaturating cases of Eq. (7). However, the approach does not extend to the saturating cases because $pN(t)$ does not have interchangeable increments when $t > 1$.

**4.4. Approximate Power Tail Exponent at a Critical Case**

We describe standard approximations of the quasibinomial distribution that yield a power tail exponent at the critical case. For parameters satisfying $np + d \le 1$ (no saturation), the distribution of $S$ is quasibinomial and can be approximated by letting $n \to \infty$, $p \to 0$, and $d \to 0$ in such a way that $\lambda = np$ and $\theta = nd$ are fixed to give the generalized (or Lagrangian) Poisson distribution [12–14]

$$P[S=r] \approx \theta(r\lambda + \theta)^{r-1} \frac{\exp(-r\lambda - \theta)}{r!}, \tag{24}$$

which is the distribution of the number of offspring in a Galton–Watson–Bienaymé branching process, with the first generation produced by a Poisson distribution with parameter $\theta$ and subsequent generations produced by a Poisson distribution with parameter $\lambda$. The critical case for the branching process is $np = \lambda = 1$ and Otter [27] proved that at criticality, the distribution of the number of offspring has a power tail with exponent $-1.5$. Further implications of the branching process approximation for cascading failure are considered in Dobson, Carreras, and Newman [17].
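The exponent can be checked numerically from Eq. (24). This Python sketch is ours; it evaluates the distribution in the log domain with `math.lgamma` to avoid overflow and measures the local log-log slope of the tail at the critical case $\lambda = 1$:

```python
from math import exp, lgamma, log

def log_gen_poisson(r, lam, theta):
    """log of Eq. (24): P[S = r] ~ theta (r*lam + theta)^(r-1) exp(-r*lam - theta) / r!"""
    return (log(theta) + (r - 1) * log(r * lam + theta)
            - (r * lam + theta) - lgamma(r + 1))

# Local log-log slope of the tail at criticality (lambda = 1, theta = 1):
slope = ((log_gen_poisson(400, 1.0, 1.0) - log_gen_poisson(100, 1.0, 1.0))
         / (log(400) - log(100)))
```

The measured slope is close to Otter's exponent of $-1.5$, and for a subcritical $\lambda$ the probabilities sum to 1 as a distribution should.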
## 5. EFFECT OF LOADING

How much can an electric power transmission system be loaded before there is undue risk of cascading failure? This section discusses qualitative effects of loading on the distribution of blackout size and then applies the model to describe the effect of loading and illustrate its use.

### 5.1. Distribution of Blackout Size at Extremes of Loading

Consider cascading failure in a power transmission system in the impractically extreme cases of very low and very high loading. At very low loading near zero, any failures that occur have minimal impact on other components and these other components have large operating margins. Multiple failures are possible, but they are approximately independent, so the probability of multiple failures is approximately the product of the probabilities of each of the failures. Since the blackout size is roughly proportional to the number of failures, the probability distribution of the blackout size will have an exponential tail. The probability distribution of the blackout size is different if the power system were to be operated recklessly at a very high loading in which every component was close to its loading limit. Then any initial disturbance would necessarily cause a cascade of failures leading to total or near total blackout. It is clear that the probability distribution of the blackout size must somehow change continuously from the exponential tail form to the certain total blackout form as loading increases from a very low to a very high loading. We are interested in the nature of the transition between these two extremes.

### 5.2. Effect of Loading in the Model

This subsection describes one way to represent a load increase in the model and how this leads to a parameterization of the normalized model. Then the effect of the load increase on the distribution of the number of components failed is described.

For purposes of illustration, the system has $n = 1000$ components. Suppose that the system is operated so that the initial component loadings vary from $L^{\min}$ to $L^{\max} = L^{\text{fail}} = 1$. Then the average initial component loading $L = (L^{\min} + 1)/2$ may be increased by increasing $L^{\min}$. The initial disturbance $D = 0.0004$ is assumed to be the same as the load transfer amount $P = 0.0004$. These modeling choices for component load lead, via the normalization of Eq. (1), to the parameterization $p = d = 0.0004/(2 - 2L)$, $0.5 \le L < 1$. The increase in the normalized load transfer $p$ with increased $L$ can be thought of as strengthening the component interactions that cause cascading failure.
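This parameterization is easy to explore directly, since Eq. (7) gives the no-failure probability in closed form as $P[S=0] = (1-d)^n$. A quick Python check (ours):

```python
n, D = 1000, 0.0004

def d_of_L(L):
    """Normalized initial disturbance and load transfer: p = d = 0.0004 / (2 - 2L)."""
    return D / (2 - 2 * L)

for L in (0.6, 0.8, 0.9):
    d = d_of_L(L)
    # P[S = 0] = (1 - d)^n from the r = 0 term of Eq. (7)
    print(f"L={L}: d=p={d:.4f}, np={n * d:.1f}, P[S=0]={(1 - d) ** n:.2f}")
```

This reproduces the criticality condition $np = 1$ at $L = 0.8$ and the no-failure probabilities 0.61, 0.37, and 0.14 quoted in the caption of Figure 1.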
The probability distribution of the number $S$ of components failed as $L$ increases from 0.6 is shown in Figure 1. The distribution for the nonsaturating case $L = 0.6$ has a tail that is approximately exponential. The tail becomes heavier as $L$ increases, and the distribution for the critical case $L = 0.8$, $np = 1$ has an approximate power-law region over a range of $S$. The power-law region has an exponent of approximately $-1.4$, which compares with the exponent of $-1.5$ obtained by the analytic approximation in Section 4.4. The distribution for the saturated case $L = 0.9$ has an approximately exponential tail for small $r$, zero probability of intermediate $r$, and a probability of 0.80 of all 1000 components failing. If an intermediate number of components fail in a saturated case, then the cascade always proceeds to all 1000 components failing.

**FIGURE 1.** Log-log plot of the distribution of the number of components failed $S$ for three values of average initial load $L$. Note the power-law region for the critical loading $L = 0.8$. $L = 0.9$ has an isolated point at (1000, 0.80), indicating probability 0.80 of all 1000 components failing. The probability of no failures is 0.61 for $L = 0.6$, 0.37 for $L = 0.8$, and 0.14 for $L = 0.9$.

The increase in the mean number of failures $ES$ as the average initial component loading $L$ is increased is shown in Figure 2. The sharp change in gradient at the critical loading $L = 0.8$ corresponds to the saturation of Eq. (7) and the consequent increasing probability of all components failing. Indeed, at $L = 0.8$, the change in gradient in Figure 2 together with the power-law region in the distribution of $S$ in Figure 1 suggests a type 2 phase transition in the system. If we interpret the number of components failed as corresponding to blackout size, the power-law region is consistent with North American blackout data and blackout simulation results [4,8,18]. In particular, North American blackout data suggest an empirical distribution of blackout size with a power tail with exponent between $-1$ and $-2$ [6,7,8]. This power tail indicates a significant risk of large blackouts that is not present when the distribution of blackout sizes has an exponential tail [5].

**FIGURE 2.** Mean number of components failed $ES$ as a function of average initial component loading $L$. Note the change in gradient at the critical loading $L = 0.8$. There are $n = 1000$ components and $ES$ becomes 1000 at the highest loadings.

The model results show how system loading can influence the risk of cascading failure. At low loading, there is an approximately exponential tail in the distribution of the number of components failed and a low risk of large cascading failure. There is a critical loading at which there is a power-law region in the distribution of the number of components failed and a sharp increase in the gradient of the mean number of components failed. As loading is increased past the critical loading, the distribution of the number of components failed saturates, there is an increasingly significant probability of all components failing, and there is a significant risk of large cascading failure.

**Acknowledgments**

The work was coordinated by the Consortium for Electric Reliability Technology Solutions and funded in part by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Power Technologies, Transmission Reliability Program of the U.S. Department of Energy under contract 9908935 and Interagency Agreement DE-A1099EE35075 with the National Science Foundation. The work was funded in part by NSF grants ECS-0214369 and ECS-0216053. Part of this research has been carried out at Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
References

1. Berg, S. & Mutafchiev, L. (1990). Random mappings with an attracting center: Lagrangian distributions and a regression function. *Journal of Applied Probability* 27: 622–636.

2. Billington, R. & Allan, R.N. (1996). *Reliability evaluation of power systems*, 2nd ed. New York: Plenum Press.

3. Burtin, Y.D. (1980). On a simple formula for random mappings and its applications. *Journal of Applied Probability* 17: 403–414.

4. Carreras, B.A., Lynch, V.E., Dobson, I., & Newman, D.E. (2002). Critical points and transitions in an electric power transmission model for cascading failure blackouts. *Chaos* 12(4): 985–994.

5. Carreras, B.A., Lynch, V.E., Newman, D.E., & Dobson, I. (2003). Blackout mitigation assessment in power transmission systems. In *36th Hawaii International Conference on System Sciences*.

6. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2001). Evidence for self-organized criticality in electric power system blackouts. In *34th Hawaii International Conference on System Sciences*.

7. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2004). Evidence for self-organized criticality in a time series of electric power system blackouts. *IEEE Transactions on Circuits and Systems I: Regular Papers* 51(9): 1733–1740.

8. Chen, J., Thorp, J.S., & Parashar, M. (2001). Analysis of electric power disturbance data. In *34th Hawaii International Conference on System Sciences*.

9. Chen, J. & Thorp, J.S. (2002). A reliability study of transmission system protection via a hidden failure DC load flow model. In *IEE Fifth International Conference on Power System Management and Control*, pp. 384–389.

10. Charalambides, Ch.A. (1990). Abel series distributions with applications to fluctuations of sample functions of stochastic functions. *Communications in Statistics: Theory and Methods* 19(1): 317–335.

11. Consul, P.C. (1974). A simple urn model dependent upon predetermined strategy. *Sankhyā: The Indian Journal of Statistics, Series B* 36(4): 391–399.

12. Consul, P.C. (1988). On some models leading to a generalized Poisson distribution. *Communications in Statistics: Theory and Methods* 17(2): 423–442.

13. Consul, P.C. (1989). *Generalized Poisson distributions*. New York: Marcel Dekker.

14. Consul, P.C. & Shoukri, M.M. (1988). Some chance mechanisms leading to a generalized Poisson probability model. *American Journal of Mathematical and Management Sciences* 8(1&2): 181–202.

15. DeMarco, C.L. (2001). A phase transition model for cascading network failure. *IEEE Control Systems Magazine* 21(6): 40–51.

16. Dobson, I., Carreras, B.A., & Newman, D.E. (2003). A probabilistic loading-dependent model of cascading failure and possible implications for blackouts. In *36th Hawaii International Conference on System Sciences*.

17. Dobson, I., Carreras, B.A., & Newman, D.E. (2004). A branching process approximation to cascading load-dependent system failure. In *37th Hawaii International Conference on System Sciences*.

18. Dobson, I., Chen, J., Thorp, J.S., Carreras, B.A., & Newman, D.E. (2002). Examining criticality of blackouts in power system models with cascading events. In *35th Hawaii International Conference on System Sciences*.

19. Islam, M.N., O'Shaughnessy, C.D., & Smith, B. (1996). A random graph model for the final-size distribution of household infections. *Statistics in Medicine* 15: 837–843.

20. Jaworski, J. (1998). Predecessors in a random mapping. *Random Structures and Algorithms* 14: 501–519.

21. Katz, L. (1955). Probability of indecomposability of a random mapping function. *Annals of Mathematical Statistics* 26: 512–517.

22. Kloster, M., Hansen, A., & Hemmer, P.C. (1997). Burst avalanches in solvable models of fibrous materials. *Physical Review E* 56(3).

23. Kosterev, D.N., Taylor, C.W., & Mittelstadt, W.A. (1999). Model validation for the August 10, 1996 WSCC system outage. *IEEE Transactions on Power Systems* 13(3): 967–979.

24. Lindley, D.V. & Singpurwalla, N.D. (2002). On exchangeable, causal and cascading failures. *Statistical Science* 17(2): 209–219.

25. NERC (North American Electric Reliability Council) (2002). *1996 system disturbances*. Princeton, NJ: NERC.

26. Ni, M., McCalley, J.D., Vittal, V., & Tayyib, T. (2003). Online risk-based security assessment. *IEEE Transactions on Power Systems* 18(1): 258–265.

27. Otter, R. (1949). The multiplicative process. *Annals of Mathematical Statistics* 20: 206–224.

28. Parrilo, P.A., Lall, S., Paganini, F., Verghese, G.C., Lesieutre, B.C., & Marsden, J.E. (1999). Model reduction for analysis of cascading failures in power systems. *Proceedings of the American Control Conference* 6: 4208–4212.

29. Pepyne, D.L., Panayiotou, C.G., Cassandras, C.G., & Ho, Y.-C. (2001). Vulnerability assessment and allocation of protection resources in power systems. *Proceedings of the American Control Conference* 6: 4705–4710.

30. Rios, M.A., Kirschen, D.S., Jawayeera, D., Nedic, D.P., & Allan, R.N. (2002). Value of security: modeling time-dependent phenomena and weather conditions. *IEEE Transactions on Power Systems* 17(3): 543–548.

31. Roy, S., Asavathiratham, C., Lesieutre, B.C., & Verghese, G.C. (2001). Network models: growth, dynamics, and failure. In *34th Hawaii International Conference on System Sciences*, pp. 728–737.

32. Takács, L. (1967). *Combinatorial methods in the theory of stochastic processes*. New York: Wiley.

33. U.S.–Canada Power System Outage Task Force (2004). *Final Report on the August 14th blackout in the United States and Canada*. United States Department of Energy and National Resources Canada.

34. Watts, D.J. (2002). A simple model of global cascades on random networks. *Proceedings of the National Academy of Sciences USA* 99(9): 5766–5771.
# APPENDIX

## Saturating Quasibinomial Formula Satisfies Recursion

We prove that the saturating quasibinomial formula (7) satisfies recursion (14) for $0 < d < 1$ and $n > 0$.

In the case $d + rp < 1$ and $r < n$, since

$$d + rp < 1 \Leftrightarrow \frac{kp}{1-d} + (r-k) \frac{p}{1-d} < 1, \tag{25}$$

none of the instances of $f$ on the right-hand side of Eq. (14) saturate, so the right-hand side of Eq. (14) becomes

$$
\begin{align*}
&\sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} \binom{n-k}{r-k} \frac{kp}{1-d} \left(\frac{rp}{1-d}\right)^{r-k-1} \left(1 - \frac{rp}{1-d}\right)^{n-r} \\
&\quad = \binom{n}{r} \sum_{k=0}^{r} \binom{r}{k} \frac{k}{r} d^k (rp)^{r-k} (1-d-rp)^{n-r} = \binom{n}{r} d(d+rp)^{r-1} (1-d-rp)^{n-r}.
\end{align*}
$$
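The last equality is the Abel-type identity $\sum_{k=0}^{r} \binom{r}{k} \frac{k}{r} d^k x^{r-k} = d(d+x)^{r-1}$ with $x = rp$, which follows from $\sum_{k} \binom{r}{k} k\, d^k x^{r-k} = rd(d+x)^{r-1}$. It can be confirmed in exact rational arithmetic; the small Python check below is ours:

```python
from fractions import Fraction
from math import comb

def abel_lhs(r, d, x):
    """sum_{k=0}^{r} C(r,k) (k/r) d^k x^(r-k), evaluated exactly."""
    return sum(Fraction(comb(r, k) * k, r) * d**k * x**(r - k)
               for k in range(r + 1))

def abel_rhs(r, d, x):
    """d (d + x)^(r-1)."""
    return d * (d + x) ** (r - 1)

# Exact agreement for x = r*p at several r, with rational d and p
d, p = Fraction(1, 10), Fraction(1, 7)
checks = [abel_lhs(r, d, r * p) == abel_rhs(r, d, r * p) for r in range(1, 9)]
```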
In the case $d + rp \ge 1$ and $r < n$, Eq. (25) and $r - k < n - k$ imply that all of the instances of $f$ on the right-hand side of Eq. (14) vanish.

In the case $r=n$, substituting the expression from Eq. (7) for $f(n-k, kp/(1-d), p/(1-d), n-k)$ into the right-hand side of Eq. (14) leads to

$$1 - \sum_{t=0}^{n-1} \sum_{k=0}^{t} \binom{n}{k} d^k (1-d)^{n-k} f\left(t-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right) = 1 - \sum_{s=0}^{n-1} f(s,d,p,n),$$

where the last step uses the result established above that Eq. (7) satisfies Eq. (14) for $r < n$. This is the value of $f(n,d,p,n)$ given by Eq. (7), which completes the proof.
samples/texts_merged/2763593.md
ADDED
# Face Recognition with One Sample Image per Class
Shaokang Chen
Intelligent Real-Time Imaging and Sensing (IRIS) Group
The University of Queensland
Brisbane, Queensland, Australia
shaokang@itee.uq.edu.au

Brian C. Lovell
Intelligent Real-Time Imaging and Sensing (IRIS) Group
The University of Queensland
Brisbane, Queensland, Australia
lovell@itee.uq.edu.au

## Abstract

There are two main approaches to face recognition under varying lighting conditions. One is to represent images with features that are insensitive to illumination in the first place. The other is to construct a linear subspace for every class under the different lighting conditions. Both of these techniques have been applied to face recognition with some success, but it is hard to extend them to recognition under varying facial expressions. It is observed that features insensitive to illumination are highly sensitive to expression variations, which makes face recognition under changes in both lighting conditions and expressions a difficult task. We propose a new method called Affine Principal Components Analysis in an attempt to solve both of these problems. This method extracts features to construct a subspace for face representation and warps this space to achieve better class separation. The proposed technique is evaluated using face databases with both variable lighting and facial expressions. We achieve more than 90% accuracy for face recognition using only one sample image per class.
|
| 23 |
+
|
| 24 |
+
## 1. Introduction
|
| 25 |
+
|
| 26 |
+
One of the difficulties in face recognition (FR) is the numerous variations between images of the same face due to changes in lighting conditions, view points or facial expressions. A good face recognition system should recognize faces and be immune to these variations as mush as possible. Yet, it is been reported in [19] that differences between images of the same face due to these variations are normally greater than those between different faces. Therefore, most of the systems designed to date can only deal with face images taken under constrained conditions. So these major problems must be
|
| 27 |
+
|
| 28 |
+
overcome in the quest to produce robust face recognition systems.
|
| 29 |
+
|
| 30 |
+
In the past few years, different approaches have been proposed to reduce the impact of these nuisance factors on face recognition. Two main approaches are used for illumination invariant recognition. One is to represent images with features that are less sensitive to illumination changes, such as the edge maps of the image. But edges generated from shadows are related to illumination changes and may have an impact on recognition. Experiments in [19] show that even with the best image representations using illumination insensitive features and distance measurement, the misclassification rate is more than 20%. The second approach, presented in [21] and [22], is to prove that images of convex Lambertian objects under different lighting conditions can be approximated by low-dimensional linear subspaces. Kriegman, Belhumeur and Georghiades proposed an appearance-based method in [7] for recognizing faces under variations in lighting and viewpoint based on this concept. Nevertheless, these methods all suppose that the surface reflectance of human faces is Lambertian, and it is hard for these systems to deal with cast shadows. Furthermore, these systems need several images of the same face taken under different lighting source directions to construct a model of a given face, and it is sometimes hard to obtain images of a given face under such specific conditions.

As for expression invariant recognition, it remains unsolved for machine recognition and is a difficult task even for humans. In [23] and [24], images are morphed to have the same shape as the one used for training. But it is not guaranteed that all images can be morphed correctly; for example, an image with closed eyes cannot be morphed to a neutral image because of the lack of texture inside the eyes. It is also hard to learn the local motions within the feature space to determine the expression changes of each face, since the way one person expresses a certain emotion is normally somewhat different from others. Martinez proposed a method to deal with variations in facial expressions in [20]. An image is divided into several local areas, and those that are less sensitive to expressional changes are chosen and weighted independently. But features that are insensitive to expression changes may be sensitive to illumination variation. This is discussed in [19], which says that "when a given representation is sufficient to overcome a single image variation, it may still be affected by other processing stages that control other imaging parameters".

It is known that the performance of face recognition systems depends acutely on the choice of features [3], which is thus the key step in the recognition methodology. Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD) [1] are two well-known statistical feature extraction techniques for face recognition. PCA, a standard decorrelation technique, derives an orthogonal projection basis which allows representation of faces in a vastly reduced feature space; this dimensionality reduction increases generalisation ability. PCA finds a set of orthogonal features which provide a maximally compact representation of the majority of the variation of the facial data. But PCA might extract some noise features that degrade the performance of the system. For this reason, Swets and Weng [8] argue in favor of methods such as FLD which seek to determine the most discriminatory features by taking into account both within-class and between-class variation to derive the Most Discriminating Features (MDF). However, compared to PCA, it has been shown that FLD overfits to the training data, resulting in a lack of generalization ability [2].

We propose a new method, Affine Principal Component Analysis (APCA), that can deal with variations in both illumination and facial expression. This paper discusses APCA and presents results which show that the recognition performance of APCA greatly exceeds that of both PCA and FLD when recognizing known faces with unknown changes in illumination and expression.

## 2. Review of PCA & FLD

PCA and FLD are two popular techniques for face recognition. They extract features from training face images to generate orthogonal sets of feature vectors which span a subspace of the face images. Recognition is then performed within this space based on some distance metric (possibly Euclidean).

### 2.1. PCA (Principal Component Analysis)

PCA is a second-order method for finding a linear representation of faces using only the covariance of the data; it determines the set of orthogonal components (feature vectors) which minimise the reconstruction error for a given number of feature vectors. Consider the face image set $I = [I_1, I_2, ..., I_n]$, where $I_i$ is a p×q image, $i \in [1..n]$, p,q,n ∈ Z⁺. The average face $\Psi$ of the image set is defined by:

$$ \Psi = \frac{1}{n} \sum_{i=1}^{n} I_i . \quad (1) $$

Normalizing each image by subtracting the average face, we have the normalized difference image:

$$ \tilde{D}_i = I_i - \Psi . \quad (2) $$

Unpacking $\tilde{D}_i$ row-wise, we form the $N$ ($N = p \times q$) dimensional column vector $d_i$. We define the covariance matrix $C$ of the normalized image set $D = [d_1, d_2, ..., d_n]$ by:

$$ C = \sum_{i=1}^{n} d_i d_i^T = DD^T . \quad (3) $$

An eigendecomposition of $C$ yields eigenvalues $\lambda_i$ and eigenvectors $u_i$ which satisfy:

$$ Cu_i = \lambda_i u_i, \quad (4) $$

$$ DD^T = C = \sum_{i=1}^{N} \lambda_i u_i u_i^T, \quad (5) $$

where $i \in [1..N]$. Since the eigenvectors obtained physically resemble human faces, they are also called eigenfaces. Generally, we select a small subset of $m<n$ eigenvectors to define a reduced-dimensionality facespace that yields higher recognition performance on unseen examples of faces. Choosing $m=10$ or thereabouts seems to yield good performance in practice. Although PCA defines a face subspace that contains the greatest covariance, it is not necessarily the best choice for classification, since it may retain principal components carrying large noise and nuisance factors [2].

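The eigenface construction of Eqs. (1)-(5) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name is ours, and we use the standard trick of eigendecomposing the small $n \times n$ Gram matrix $D^T D$ (whose nonzero eigenvalues coincide with those of $C = DD^T$) since $N \gg n$ for real images:

```python
import numpy as np

def pca_eigenfaces(images, m):
    """Sketch of Eqs. (1)-(5): images is an (n, p, q) array of n face images.
    Returns the mean face, the m principal eigenfaces, and their eigenvalues."""
    n = images.shape[0]
    D = images.reshape(n, -1).astype(float)     # unpack each image into an N-vector
    psi = D.mean(axis=0)                        # Eq. (1): average face
    D = (D - psi).T                             # Eq. (2): columns d_i = I_i - Psi, shape (N, n)
    # Eq. (3): C = D D^T is N x N; for N >> n, eigendecompose the small n x n
    # Gram matrix D^T D instead and map its eigenvectors back to image space.
    gram = D.T @ D
    lam, V = np.linalg.eigh(gram)               # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1][:m]           # keep the m largest (Eqs. (4)-(5))
    lam, V = lam[order], V[:, order]
    U = D @ V / np.sqrt(np.maximum(lam, 1e-12)) # unit-norm eigenfaces u_i of C
    return psi, U, lam

# toy usage on random "images" standing in for a face database
rng = np.random.default_rng(0)
imgs = rng.random((20, 8, 8))
psi, U, lam = pca_eigenfaces(imgs, m=5)
features = (imgs.reshape(20, -1) - psi) @ U     # project faces into the facespace
```

Projecting a probe image the same way and comparing feature vectors with a nearest-neighbor rule gives the baseline PCA recognizer that APCA builds on.
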
### 2.2. FLD (Fisher Linear Discriminant)

FLD finds the optimum projection for classification of the training data by simultaneously diagonalizing the within-class and between-class scatter matrices [2]. The FLD procedure consists of two operations: whitening and diagonalization [2]. Given $M$ classes $S_j$, $j \in [1...M]$, we denote the exemplars of class $S_j$ by $s_{j,1}, s_{j,2}, ..., s_{j,K_j}$, where $K_j$ is the number of exemplars in class $j$. Let $\mu_j$ denote the mean of class $j$ and $\bar{\mu}$ denote the grand mean of all the exemplars. Then the between-class scatter matrix is defined by:

$$ B = \sum_{j=1}^{M} K_j (\mu_j - \bar{\mu})(\mu_j - \bar{\mu})^T, \quad (6) $$

and the within-class scatter matrix is defined by:

$$ W = \sum_{j=1}^{M} \sum_{k=1}^{K_j} (s_{j,k} - \mu_j)(s_{j,k} - \mu_j)^T . \quad (7) $$

FLD then seeks the projection matrix that maximizes the ratio of between-class to within-class scatter:

$$ W_{FLD} = \arg \max_A \frac{|A^T B A|}{|A^T W A|} . \quad (8) $$

In other words, FLD extracts features that are strong between classes but weak within a class. While FLD often yields higher recognition performance than PCA, it tends to overfit to the training data, since it relies heavily on how well the within-class scatter captures reliable variations for a specific class [2]. In addition, it is optimised for specific classes, so it needs several samples in every class and can determine at most $M-1$ features.

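The scatter matrices of Eqs. (6)-(7) and the maximization in Eq. (8) can be sketched as follows. This is our own minimal NumPy illustration (the function name, ridge term, and toy data are ours): Eq. (8) reduces to an eigenproblem for $W^{-1}B$, and since $B$ has rank at most $M-1$, only $M-1$ directions carry discriminative information.

```python
import numpy as np

def fld(X, y, r):
    """Sketch of Eqs. (6)-(8): X is (num_samples, d), y holds class labels.
    Returns r <= M-1 discriminant directions maximizing |A^T B A| / |A^T W A|."""
    classes = np.unique(y)
    mu_bar = X.mean(axis=0)
    d = X.shape[1]
    B = np.zeros((d, d))
    W = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_j = Xc.mean(axis=0)
        dm = (mu_j - mu_bar)[:, None]
        B += len(Xc) * dm @ dm.T          # Eq. (6): between-class scatter
        Z = (Xc - mu_j).T
        W += Z @ Z.T                      # Eq. (7): within-class scatter
    # Eq. (8) reduces to an eigenproblem for W^{-1} B; a tiny ridge keeps W invertible.
    vals, vecs = np.linalg.eig(np.linalg.solve(W + 1e-6 * np.eye(d), B))
    order = np.argsort(-vals.real)[:r]
    return vecs[:, order].real

# toy usage: three classes (M = 3), so at most M - 1 = 2 useful directions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(30, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)
A = fld(X, y, r=2)
```

The overfitting concern raised above shows up here directly: $W$ is estimated only from the training exemplars of each class, so with few samples per class the learned directions track training-set quirks.
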
## 3. Proposed Method

An Affine PCA method is introduced in this section in an attempt to overcome some of the limitations of both PCA and FLD. First, we apply PCA for dimensionality reduction and to obtain the eigenfaces $U$. Every face image can be projected into this subspace to form an $m$-dimensional feature vector $s_{j,k}$, where $m < n$ denotes the number of principal eigenfaces chosen for the projection, and $k = 1,2,...,K_j$ denotes the $k$-th sample of the class $S_j$, where $j = 1,2,...,M$. We often use the nearest neighbor method for classification, where the distance between two face vectors represents the energy difference between them. In the case of variable illumination, lighting changes dominate over the characteristic differences between faces. It has also been shown in [19] that distances between face vectors arising from facial expression variations are generally greater than those arising from changes in face identity. This is the main reason why PCA does not work well under variable lighting and expression. In fact, not all features have the same importance in recognition: features that are strong between classes and weak within a class are much more useful for the recognition task. Therefore, we propose an affine model (Affine PCA) to resolve this problem. The affine procedure involves three steps: eigenspace rotation, whitening transformation and eigenface filtering.

### 3.1. Eigenspace Rotation

The eigenfaces extracted by PCA are Most Expressive Features (MEF), and these are not necessarily optimal for face recognition performance, as stated in [8]. Applying FLD we can obtain the Most Discriminating Features, but these overfit to the training data and lack generalization capacity. Therefore, in order not to lose generalization ability while still keeping the discrimination, we prefer to rotate the space and find the most variant features that can represent changes due to lighting or expression variation. That is, we extract the within-class covariance and apply PCA to find the eigenfeatures that maximally represent within-class variations. The within-class difference matrix is defined as:

$$ D_{Within} = [\, s_{j,k} - \mu_j \,]_{\,j = 1,\dots,M;\ k = 1,\dots,K_j} \,, \quad (9) $$

the matrix whose columns are the deviations of each sample from its class mean, and the within-class covariance becomes:

$$ Cov_{Within} = D_{Within} D_{Within}^{T} , \quad (10) $$

which is an $m \times m$ matrix. Applying singular value decomposition (SVD) to the within-class covariance matrix, we have

$$ Cov_{Within} = U S V^T = \sum_{i=1}^{m} \sigma_i v_i v_i^T . $$

The rotation matrix $M$ is then the set of eigenvectors of the covariance matrix, $M = [v_1, v_2, ..., v_m]$. All vectors represented in the original subspace are transformed into the new space by multiplying by $M$.

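The rotation step of Eqs. (9)-(10) can be sketched as below. This is our own illustrative NumPy version (function name and toy data are ours); since the within-class covariance is symmetric positive semi-definite, the SVD factor $V$ coincides with its eigenvector basis:

```python
import numpy as np

def rotation_matrix(S, labels):
    """Sketch of Eqs. (9)-(10): S is (num_samples, m) of PCA feature vectors.
    Builds the within-class difference matrix, its m x m covariance, and the
    rotation M from the SVD of that covariance."""
    diffs = []
    for c in np.unique(labels):
        Sc = S[labels == c]
        diffs.append(Sc - Sc.mean(axis=0))   # Eq. (9): columns s_{j,k} - mu_j
    D_within = np.concatenate(diffs).T       # shape (m, total samples)
    cov = D_within @ D_within.T              # Eq. (10): within-class covariance
    U, sigma, Vt = np.linalg.svd(cov)        # cov = U S V^T; U == V (symmetric PSD)
    return Vt.T                              # M = [v_1, ..., v_m]

# toy usage: 8 classes with 5 samples each in a 6-dimensional facespace
rng = np.random.default_rng(2)
S = rng.normal(size=(40, 6))
labels = np.repeat(np.arange(8), 5)
M = rotation_matrix(S, labels)
S_rot = S @ M                                # transform vectors into the new space
```

Because $M$ is orthogonal, the rotation preserves distances; it only re-aligns the axes so that the subsequent whitening and filtering steps act on directions of within-class variation.
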
### 3.2. Whitening Transformation

The purpose of whitening is to normalize the scatter matrix for uniform gain control. Since, as stated in [3], "mean square error underlying PCA preferentially weights low frequencies", we need to compensate for that. The whitening parameter $\Gamma$ is related to the eigenvalues $\lambda_i$. Conventionally, we would use the standard deviation for whitening, that is, $\Gamma_i = \sqrt{\lambda_i}$, $i = [1...m]$. But this value appears to compress the eigenspace so much that class separability is diminished. We therefore use $\Gamma_i = \lambda_i^{\,p}$, where the exponent $p$ is determined empirically.

### 3.3. Filtering the Eigenfaces

The aim of filtering is to diminish the contribution of eigenfaces that are strongly affected by variations. We want to enhance features that capture the main differences between classes (faces) while diminishing the contribution of those that are largely due to lighting or expression variation (within-class differences). We thus define a filtering parameter $\Lambda$ which is related to the identity-to-variation (ITV) ratio. The ITV is a ratio measuring the correlation with a change in person versus a change in variations for each of the eigenfaces. For an $M$-class problem, assume that for each of the $M$ classes (persons) we have examples under $K$ standardized different variations in illumination or expression. In the case of illumination changes, the lighting source is positioned in front, above, below, left and right, as illustrated in Figure 1. The facial expression changes are normal, surprised and unpleasant, as shown in Figure 2. Let us denote the $i$-th eigenface component of the $k$-th sample for class (person) $S_j$ by $s_{i,j,k}$. Then

$$ ITV_i = \frac{\text{Between-Class Scatter}}{\text{Within-Class Scatter}} = \frac{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \bar{\sigma}_{i,k}|}{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \mu_{i,j}|}, \quad (11) $$

$$ \bar{\sigma}_{i,k} = \frac{1}{M} \sum_{j=1}^{M} s_{i,j,k}, $$

and

$$ \mu_{i,j} = \frac{1}{K} \sum_{k=1}^{K} s_{i,j,k}, \quad i = [1 \cdots m]. $$

Here $\bar{\sigma}_{i,k}$ represents the i-th element of the mean face vector for variation $k$ for all persons and $\mu_{i,j}$ represents the i-th element of the mean face vector for person $j$ under all different variations. We then define the scaling parameter $\Lambda$ by:

$$ \Lambda_i = ITV_i^{\,q} , \quad (12) $$

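The per-feature ITV ratio of Eq. (11) and the filter weights of Eq. (12) can be sketched as below. This is our own NumPy illustration (the function name and array layout are ours), assuming the features are arranged as an $(M, K, m)$ array of $M$ persons under $K$ standardized variations:

```python
import numpy as np

def itv_weights(S, q):
    """Sketch of Eqs. (11)-(12): S has shape (M, K, m) -- M persons, K
    standardized variations, m features. Returns Lambda_i = ITV_i ** q."""
    sigma_bar = S.mean(axis=0, keepdims=True)         # mean face per variation k
    mu = S.mean(axis=1, keepdims=True)                # mean face per person j
    between = np.abs(S - sigma_bar).mean(axis=(0, 1)) # numerator of Eq. (11)
    within = np.abs(S - mu).mean(axis=(0, 1))         # denominator of Eq. (11)
    itv = between / np.maximum(within, 1e-12)
    return itv ** q                                   # Eq. (12)

# toy usage: 10 persons, 5 variations, 6 features
rng = np.random.default_rng(3)
S = rng.normal(size=(10, 5, 6))
lam_weights = itv_weights(S, q=-0.4)
```

With a negative exponent $q$, features whose within-class (variation-driven) scatter dominates their between-class scatter receive small weights, which is exactly the filtering effect described above.
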
where $q$ is an exponential scaling factor determined empirically as before. Instead of this exponential scaling factor, other non-linear functions such as thresholding suggest themselves. These possibilities have been explored, but so far the exponential scaling performs best. After the affine transformation, the distance $d$ between two face vectors $s_{j,k}$ and $s_{j',k'}$ is:

$$ d_{jj',kk'} = \sqrt{\sum_{i=1}^{m} [\omega_i (s_{i,j,k} - s_{i,j',k'})]^2}, \quad (13) $$

$$ \omega_i = \Gamma_i \Lambda_i / |\Gamma \Lambda^T| . $$

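The weighted distance of Eq. (13), combining the whitening weights $\Gamma$ and the filter weights $\Lambda$, can be sketched as follows (our own minimal illustration; the function name and example values are ours):

```python
import numpy as np

def apca_distance(s_a, s_b, gamma, lam):
    """Sketch of Eq. (13): weighted Euclidean distance between two feature
    vectors, with omega_i = Gamma_i * Lambda_i normalized by |Gamma Lambda^T|."""
    omega = gamma * lam / np.abs(gamma @ lam)   # scalar normalizer |Gamma Lambda^T|
    return np.sqrt(np.sum((omega * (s_a - s_b)) ** 2))

# toy usage with hand-picked weights for a 3-feature space
gamma = np.array([1.0, 0.8, 0.5])
lam = np.array([2.0, 1.0, 0.5])
d = apca_distance(np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0]), gamma, lam)
```

Classification then proceeds exactly as in plain PCA nearest-neighbor matching, but with each eigenface axis scaled by $\omega_i$ before distances are compared.
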
The weights $\omega_i$ scale the corresponding eigenfaces. To determine the two exponents $p$ and $q$ for $\Gamma$ and $\Lambda$, we introduce a cost function and optimise them empirically. It is defined by:

$$ OPT = \sum_{j=1}^{M} \sum_{k=1}^{K} \sum_{m} \frac{d_{jj,k0}}{d_{jm,k0}} , \quad \forall\, m : d_{jm,k0} < d_{jj,k0} , \quad (14) $$

where $d_{jj,k0}$ is the distance between the sample $s_{j,k}$ and $s_{j,0}$, the standard reference image for class $S_j$ (typically the normally illuminated image). Note that the condition $d_{jm,k0} < d_{jj,k0}$ holds only when there is a misclassification error. Thus $OPT$ combines the error rate with the ratio of within-class distance to between-class distance. By minimizing $OPT$, we can determine the best choices for $p$ and $q$. Figure 1 shows the relationship between $OPT$ and $p, q$. For one of our training databases, a minimum was obtained at $p = -0.2, q = -0.4$.

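The empirical optimization of $p$ and $q$ can be sketched as a simple grid search over the cost of Eq. (14). This is our own illustration with synthetic data (function name, grid, and toy classes are ours, not the paper's setup):

```python
import numpy as np

def opt_cost(S, refs, eigvals, itv, p, q):
    """Sketch of Eq. (14): S has shape (M, K, m) of samples, refs (M, m) the
    standard (k = 0) reference per class. Sums d_jj/d_jm over every class m'
    that lies closer to a sample than its own class (i.e. over errors)."""
    omega = (eigvals ** p) * (itv ** q)     # Gamma_i * Lambda_i up to normalization
    total = 0.0
    for j in range(S.shape[0]):
        for k in range(S.shape[1]):
            d = np.sqrt((((S[j, k] - refs) * omega) ** 2).sum(axis=1))
            wrong = d < d[j]                # classes nearer than the true one
            total += (d[j] / np.maximum(d[wrong], 1e-12)).sum()
    return total

# toy usage: 6 classes, 5 variations each, in a 4-dimensional feature space
rng = np.random.default_rng(4)
centers = rng.normal(scale=3.0, size=(6, 1, 4))
S = centers + rng.normal(scale=0.5, size=(6, 5, 4))
refs = S[:, 0, :]
itv = rng.uniform(0.5, 2.0, size=4)
eigvals = rng.uniform(0.5, 2.0, size=4)
grid = [(p, q) for p in (-0.4, -0.2, 0.0) for q in (-0.6, -0.4, -0.2)]
best = min(grid, key=lambda pq: opt_cost(S, refs, eigvals, itv, *pq))
```

Since only two scalar parameters are tuned, an exhaustive grid (or any 2-D minimizer) is cheap; the paper reports a minimum near $p = -0.2$, $q = -0.4$ on one training database.
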
From the above, our final set of transformed eigenfaces would be:

$$ u_i' = \omega_i u_i M = \frac{1}{\sigma_i} \omega_i D v_i M , \quad (15) $$

where $i=[1...m]$. After transformation, we can apply PCA again on the compressed subspace to further reduce dimensionality (two-stage PCA).

## 4. Experimental Results

The method is tested on the Asian Face Image Database PF01 [6] for both changes in lighting source positions and facial expressions. The size of each image is 171×171 pixels with 256 grey levels per pixel. Figures 1 and 2 show some examples from the database. To evaluate the performance of our methods, we performed a 3-fold cross validation on the database as follows. We choose one-third of the 107 subjects to construct our APCA model and one-third for training. Then we add only the normally lit, neutral faces (pictures in the first column in Figures 1 and 2) of the remaining one-third of the data to our recognition database, and attempt to recognize these faces under all the other conditions. This process is repeated three-fold using different partitions and the performance is averaged. All the results listed in this paper are obtained from experiments on testing data only. Table 1 compares the recognition rates of APCA and PCA. It is clear from the results that Affine PCA performs much better than PCA in face recognition under variable lighting conditions. The proposed APCA outperforms PCA remarkably in recognition rate, with 99.3% for training data and 95.6% for testing data, with negligible reduction in performance for normally lit faces. Figure 3 displays the recognition rates against the number of eigenfaces used ($m$). It can be seen that selecting the principal 40 to 50 eigenfaces is sufficient for illumination invariant face recognition. This number is somewhat higher than is required for standard PCA, where selecting $m$ in the range 10 to 20 is sufficient; this is possibly a necessary consequence of the greater complexity of the APCA face subspace compared to standard PCA.

Figure 1. Examples of illumination changes in Asian Face Database PF01.

Figure 2. Examples of expression changes in Asian Face Database PF01.

As for variations in facial expression, APCA achieves a recognition rate 10% higher than that of PCA. For changes in both lighting condition and expression, APCA always performs better than PCA regardless of the number of eigenfaces, and the gain is almost stable at high subspace dimensions. It can also be seen from Figure 3 that, compared to illumination variations, the recognition rate under expression changes does not decrease dramatically as the number of eigenfeatures is reduced. Therefore, as few as 20 features are enough to recognize faces with facial expression variations.

We also test the performance of APCA under variations in illumination and expression simultaneously. The recognition rate of APCA in this case is less than 5% lower than under illumination changes or expression changes alone, and it is clearly higher than the recognition rate of PCA. This shows that the performance of APCA is stable in spite of the complexity of the variations. PCA, however, is not as robust as APCA across different variations: for illumination changes, PCA only achieves less than 60% accuracy, while the accuracy increases to more than 80% for expression variations and drops back to about 70% when illumination and expression changes are combined. This phenomenon has also been reported in [19]: a given representation is not sufficient to overcome variations in both illumination and expression.

Figure 3. Recognition Rate Vs. Number of features.

<table><thead><tr><th rowspan="2">Method</th><th colspan="3">Recognition rate</th></tr><tr><th>Illumination Variation</th><th>Expression Variation</th><th>Illumination and Expression Variations</th></tr></thead><tbody><tr><td>PCA</td><td>57.3%</td><td>84.6%</td><td>70.6%</td></tr><tr><td>Affine PCA</td><td>95.6%</td><td>92.2%</td><td>86.8%</td></tr></tbody></table>

Table 1. Comparison of recognition rate between APCA and PCA.

## Conclusion

We have described an easily computed and efficient face recognition algorithm based on warping the face subspace constructed by PCA. The affine procedure contains three steps: rotating the eigenspace, whitening transformation, and filtering the eigenfaces. After the affine transformation, features are assigned different weights for recognition, which in effect enlarges the between-class covariance while minimizing the within-class covariance. There are only two variable parameters in the optimization, far fewer than in other methods for high-dimensionality problems. The method can not only deal with variations in illumination and expression separately, but also performs well for the combination of both changes, with only one sample image per class. Experiments show that APCA is more robust to changes in illumination and expression and has better generalization capacity than the FLD method.

A shortcoming of the algorithm is that we cannot guarantee that the weights obtained are the best for recognition, since we only rotate the eigenspace to the direction that best represents the within-class covariance. Future work will be to search the eigenspace and find the eigenfeatures best suited to face recognition.

## References

[1] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, 1997, pp. 711-720.

[2] Chengjun Liu and Harry Wechsler, "Enhanced Fisher Linear Discriminant Models for Face Recognition", 14th International Conference on Pattern Recognition, ICPR'98, Queensland, Australia, August 17-20, 1998.

[3] Chengjun Liu and Harry Wechsler, "Evolution of Optimal Projection Axes (OPA) for Face Recognition", Third IEEE International Conference on Automatic Face and Gesture Recognition, FG'98, Nara, Japan, April 14-16, 1998.

[4] Dao-Qing Dai, Guo-Can Feng, Jian-Huang Lai and P.C. Yuen, "Face Recognition Based on Local Fisher Features", 2nd Int. Conf. on Multimodal Interface, Beijing, 2000.

[5] Hua Yu and Jie Yang, "A Direct LDA Algorithm for High-Dimensional Data with Application to Face Recognition", Pattern Recognition, Vol. 34, No. 10, 2001, pp. 2067-2070.

[6] Intelligent Multimedia Lab., "Asian Face Image Database PF01", http://nova.postech.ac.kr/.

[7] A.S. Georghiades, P.N. Belhumeur and D.J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 6, 2001, pp. 643-660.

[8] Daniel L. Swets and John Weng, "Using Discriminant Eigenfeatures for Image Retrieval", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 18, No. 8, 1996, pp. 831-836.

[9] X.W. Hou, S.Z. Li and H.J. Zhang, "Direct Appearance Models", in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Hawaii, December 2001.

[10] Z. Xue, S.Z. Li and E.K. Teoh, "Facial Feature Extraction and Image Warping Using PCA Based Statistic Model", in Proceedings of the 2001 International Conference on Image Processing, Thessaloniki, Greece, October 7-10, 2001.

[11] S.Z. Li, K.L. Chan and C.L. Wang, "Performance Evaluation of the Nearest Feature Line Method in Image Classification and Retrieval", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, 2000, pp. 1335-1339.

[12] G.D. Guo, H.J. Zhang and S.Z. Li, "Pairwise Face Recognition", in Proceedings of the 8th IEEE International Conference on Computer Vision, Vancouver, Canada, July 9-12, 2001.

[13] S. Mika, G. Ratsch, J. Weston, B. Scholkopf and K.-R. Muller, "Fisher Discriminant Analysis with Kernels", Neural Networks for Signal Processing IX, 1999, pp. 41-48.

[14] M.A. Turk and A.P. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86.

[15] Jie Zhou and David Zhang, "Face Recognition by Combining Several Algorithms", ICPR 2002.

[16] Alexandre Lemieux and Marc Parizeau, "Experiments on Eigenfaces Robustness", ICPR 2002.

[17] A.M. Martinez and A.C. Kak, "PCA versus LDA", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, 2001, pp. 228-233.

[18] A. Yilmaz and M. Gokmen, "Eigenhill vs. Eigenface and Eigenedge", in Proceedings of the International Conference on Pattern Recognition, Barcelona, Spain, 2000, pp. 827-830.

[19] Yael Adini, Yael Moses and Shimon Ullman, "Face Recognition: The Problem of Compensating for Changes in Illumination Direction", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, 1997.

[20] Aleix M. Martinez, "Recognizing Imprecisely Localized, Partially Occluded, and Expression Variant Faces from a Single Sample per Class", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 24, No. 6, 2002.

[21] Ronen Basri and David W. Jacobs, "Lambertian Reflectance and Linear Subspaces", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 25, No. 2, 2003.

[22] Peter W. Hallinan, "A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Conditions", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994.

[23] D. Beymer and T. Poggio, "Face Recognition from One Example View", Science, Vol. 272, No. 5250, 1996.

[24] M.J. Black, D.J. Fleet and Y. Yacoob, "Robustly Estimating Changes in Image Appearance", Computer Vision and Image Understanding, Vol. 78, No. 1, 2000.

[25] Shaokang Chen, Brian C. Lovell and Sai Sun, "Face Recognition with APCA in Variant Illuminations", Workshop on Signal Processing and Applications, Australia, December 2002.

samples/texts_merged/276850.md
ADDED
@@ -0,0 +1,386 @@

On the entropy for group actions on the circle

by

Eduardo Jorquera (Santiago)

**Abstract.** We show that for a finitely generated group of $C^2$ circle diffeomorphisms, the entropy of the action equals the entropy of the restriction of the action to the non-wandering set.

1. Introduction. Let $(X, \mathrm{dist})$ be a compact metric space and $G$ a group of homeomorphisms of $X$ generated by a finite family of elements $\Gamma = \{g_1, \dots, g_n\}$. To simplify, we will always assume that $\Gamma$ is symmetric, that is, $g^{-1} \in \Gamma$ for every $g \in \Gamma$. For each $n \in \mathbb{N}$ we denote by $B_{\Gamma}(n)$ the ball of radius $n$ in $G$ (with respect to $\Gamma$), that is, the set of elements $f \in G$ which may be written in the form $f = g_{i_m} \cdots g_{i_1}$ for some $m \le n$ and $g_{i_j} \in \Gamma$. For $f \in G$ we let $\|f\| = \|f\|_{\Gamma} = \min\{n : f \in B_{\Gamma}(n)\}$.

As in the classical case, given $\varepsilon > 0$ and $n \in \mathbb{N}$, two points $x, y$ in $X$ are said to be $(n, \varepsilon)$-separated if there exists $g \in B_{\Gamma}(n)$ such that $\mathrm{dist}(g(x), g(y)) \ge \varepsilon$. A subset $A \subset X$ is $(n, \varepsilon)$-separated if all $x \neq y$ in $A$ are $(n, \varepsilon)$-separated. We denote by $s(n, \varepsilon)$ the maximal possible cardinality (perhaps infinite) of an $(n, \varepsilon)$-separated set. The topological entropy of the action at the scale $\varepsilon$ is defined by

$$h_{\Gamma}(G \curvearrowright X, \varepsilon) = \limsup_{n \uparrow \infty} \frac{\log(s(n, \varepsilon))}{n},$$

and the *topological entropy* is defined by

$$h_{\Gamma}(G \curvearrowright X) = \lim_{\varepsilon \downarrow 0} h_{\Gamma}(G \curvearrowright X, \varepsilon).$$

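The definitions above can be made concrete with a toy computation. The sketch below (our own illustration; all names are ours) greedily builds an $(n, \varepsilon)$-separated set, giving a lower bound on $s(n, \varepsilon)$, for an action on $S^1 = \mathbb{R}/\mathbb{Z}$ generated by a rotation and its inverse; since rotations are isometries, separation never improves with $n$ and the counts stay bounded, consistent with zero entropy:

```python
import numpy as np
from itertools import product

# Generators of a toy action on S^1 = R/Z: a rotation and its inverse
# (a symmetric generating set Gamma).
def rot(a):
    return lambda x: (x + a) % 1.0

gens = [rot(0.3), rot(-0.3)]

def dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)      # arc-length metric on the circle

def _sep(x, y, word, eps):
    for i in word:              # apply the word g = g_{i_m} ... g_{i_1}
        x, y = gens[i](x), gens[i](y)
    return dist(x, y) >= eps

def separated_count(points, n, eps):
    """Greedy lower bound on s(n, eps): keep a point if it is (n, eps)-separated
    from every point kept so far, testing all words in the ball B_Gamma(n)."""
    words = [[]]
    for length in range(1, n + 1):
        words += [list(w) for w in product(range(len(gens)), repeat=length)]
    kept = []
    for x in points:
        if all(any(_sep(x, y, w, eps) for w in words) for y in kept):
            kept.append(x)
    return len(kept)

pts = np.linspace(0.0, 1.0, 50, endpoint=False)
count = separated_count(pts, n=3, eps=0.05)
```

For actions with a resilient orbit the counts would instead grow exponentially in $n$, which is what positive entropy detects.
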
Notice that, although $h_{\Gamma}(G \curvearrowright X, \varepsilon)$ depends on the system of generators, the properties of having zero, positive, or infinite entropy are independent of this choice.

2000 Mathematics Subject Classification: 20B27, 37A35, 37C85, 37E10.
Key words and phrases: topological entropy, group actions, circle diffeomorphisms.

The definition above was proposed in [5] as an extension of the classical topological entropy of single maps (the definition extends to pseudo-groups of homeomorphisms, and hence is suitable for applications in foliation theory). Indeed, for a homeomorphism $f$, the topological entropy of the action of $\mathbb{Z} \simeq \langle f \rangle$ equals twice the (classical) topological entropy of $f$. Nevertheless, the functorial properties of this notion remain unclear. For example, the following fundamental question is open.

**GENERAL QUESTION.** Is it true that $h_{\Gamma}(G \curvearrowright X)$ is equal to $h_{\Gamma}(G \curvearrowright \Omega)$?

Here $\Omega = \Omega(G \curvearrowright X)$ denotes the *non-wandering set* of the action, or in other words

$$ \Omega = \{x \in X : \text{for every neighborhood } U \text{ of } x, \text{ we have } f(U) \cap U \neq \emptyset \text{ for some } f \neq \mathrm{id} \text{ in } G\}. $$

This is a closed invariant set whose complement $\Omega^c$ corresponds to the *wandering set* of the action.
|
| 39 |
+
|
| 40 |
+
The notion of topological entropy for group actions is quite appropriate in the case where $X$ is a one-dimensional manifold. In fact, in this case, the topological entropy is necessarily finite (cf. §2). Moreover, in the case of actions by diffeomorphisms, the dichotomy $h_{\text{top}} = 0$ or $h_{\text{top}} > 0$ is well understood. Indeed, according to a result originally proved by Ghys, Langevin, and Walczak, for groups of $C^2$ diffeomorphisms [5], and extended by Hurder to groups of $C^1$ diffeomorphisms (see for instance [9]), we have $h_{\text{top}} > 0$ if and only if there exists a resilient orbit for the action. This means that there exists a group element $f$ contracting an interval towards a fixed point $x_0$ inside, and another element $g$ which sends $x_0$ into its basin of contraction under $f$.
|
| 41 |
+
|
| 42 |
+
The results of this work give a positive answer to the General Question above in the context of group actions on one-dimensional manifolds under certain mild assumptions.
|
| 43 |
+
|
| 44 |
+
**THEOREM A.** If $G$ is a finitely generated subgroup of $\operatorname{Diff}_+^2(S^1)$, then for every finite system of generators $\Gamma$ of $G$, we have

$$h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega).$$

Our proof of Theorem A actually works in the Denjoy class $C^{1+\mathrm{bv}}$, and applies to general codimension-one foliations on compact manifolds. In the class $C^{1+\mathrm{Lip}}$, it is quite possible that an alternative proof could be given using standard techniques from the theory of levels [2, 6].

It is unclear whether Theorem A extends to actions of lower regularity. However, it still holds under certain algebraic hypotheses. In fact (quite unexpectedly), the regularity hypothesis is used to rule out the existence of elements $f \in G$ that fix some connected component of the wandering set and which are *distorted*, that is, such that

$$\lim_{n \to \infty} \frac{\|f^n\|}{n} = 0.$$
Actually, for the equality between the entropies it suffices to require that no element in $G$ be subexponentially distorted. In other words, it suffices to require that, for each element $f \in G$ of infinite order, there exists a non-decreasing function $q : \mathbb{N} \to \mathbb{N}$ (depending on $f$) with subexponential growth satisfying $q(\|f^n\|) \ge n$ for every $n \in \mathbb{N}$. This is an algebraic condition which is satisfied by many groups, for example nilpotent or free groups. (We refer the reader to [1] for a nice discussion of distorted elements.) Under this hypothesis, the following result holds.
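To make the distinction concrete, here is a toy computation, not from the paper: in the discrete Heisenberg group $\langle a, b \rangle$ the commutator $z = [a,b]$ is distorted in the sense above, since $[a^n, b^n] = z^{n^2}$ gives $\|z^{n^2}\| \le 4n$, yet it is *not* subexponentially distorted, because the polynomial function $q(m) = m^2$ already satisfies $q(\|z^n\|) \ge n$. The sketch below verifies the first few word lengths by breadth-first search, encoding Heisenberg elements as integer triples.

```python
from collections import deque

# Toy illustration (not from the paper): in the discrete Heisenberg group,
# elements are triples with product
#   (x1, y1, z1) * (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2 + x1 * y2),
# where a = (1, 0, 0), b = (0, 1, 0) and z = [a, b] = (0, 0, 1).
def mul(u, v):
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2] + u[0] * v[1])

# Symmetric generating set {a, a^{-1}, b, b^{-1}}.
gens = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

# Breadth-first search computes word lengths up to RADIUS exactly.
RADIUS = 12
dist = {(0, 0, 0): 0}
queue = deque([(0, 0, 0)])
while queue:
    u = queue.popleft()
    if dist[u] == RADIUS:
        continue
    for g in gens:
        v = mul(u, g)
        if v not in dist:
            dist[v] = dist[u] + 1
            queue.append(v)

# Word lengths of z, z^4, z^9 grow like the square root of the power,
# so ||z^m|| / m -> 0: the element z is distorted.
print([dist[(0, 0, n * n)] for n in (1, 2, 3)])
```

The isoperimetric inequality on $\mathbb{Z}^2$ shows these lengths are exactly $4n$ for $z^{n^2}$, so the distortion is precisely quadratic, hence recoverable by a polynomial (in particular subexponential) function $q$.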
**THEOREM B.** If $G$ is a finitely generated subgroup of $\operatorname{Homeo}_+(S^1)$ without subexponentially distorted elements, then for every finite system of generators $\Gamma$ of $G$, we have

$$h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega).$$

The entropy of general group actions and distorted elements seem to be related in an interesting manner. Indeed, although the topological entropy of a single homeomorphism $f$ may be equal to zero, if this map appears as a subexponentially distorted element inside an acting group, then it may create positive entropy for the group action.
**2. Some background.** In this work we consider the normalized length on the circle, and every homeomorphism is assumed to be orientation preserving.

We begin by noticing that if $G$ is a finitely generated group of circle homeomorphisms and $\Gamma$ is a finite generating system for $G$, then for all $n \in \mathbb{N}$ and all $\varepsilon > 0$ one has

$$(1) \qquad s(n, \varepsilon) \le \frac{1}{\varepsilon} \, \#B_{\Gamma}(n).$$

Indeed, let $A$ be an $(n, \varepsilon)$-separated set of cardinality $s(n, \varepsilon)$. Then for any two adjacent points $x, y$ in $A$ there exists $f \in B_{\Gamma}(n)$ such that $\mathrm{dist}(f(x), f(y)) \ge \varepsilon$. For a fixed $f$, the intervals $[f(x), f(y)]$ which appear in this way have pairwise disjoint interiors. Since the total length of the circle is $1$, any given $f$ can be used in this construction at most $1/\varepsilon$ times, which immediately gives (1).
Notice that taking the logarithm on both sides of (1), dividing by $n$, and passing to the limit gives

$$h_{\Gamma}(G \curvearrowright S^1) \le \operatorname{gr}_{\Gamma}(G),$$

where $\operatorname{gr}_{\Gamma}(G)$ denotes the *growth* of $G$ with respect to $\Gamma$, that is,

$$\operatorname{gr}_{\Gamma}(G) = \lim_{n \to \infty} \frac{\log(\#B_{\Gamma}(n))}{n}.$$

Some easy consequences of this fact are the following:
* If $G$ has subexponential growth, that is, if $\operatorname{gr}_\Gamma(G) = 0$ (in particular, if $G$ is nilpotent, or if $G$ is the Grigorchuk–Machì group considered in [8]), then $h_\Gamma(G \curvearrowright S^1) = 0$ for all finite generating systems $\Gamma$.

* In the general case, if $\#\Gamma = q \ge 1$, then from the relations

$$\#B_{\Gamma}(n) \le 1 + \sum_{j=1}^{n} 2q(2q-1)^{j-1} = \begin{cases} 1 + \frac{q}{q-1}((2q-1)^n - 1), & q \ge 2, \\ 1 + 2n, & q = 1, \end{cases}$$

one concludes that

$$h_{\Gamma}(G \curvearrowright S^1) \le \log(2q - 1).$$

This shows in particular that the entropy of the action of $G$ on $S^1$ is finite. Notice that this may also be deduced from the probabilistic arguments of [3] (see Théorème D therein). However, these arguments only yield the weaker estimate $h_{\Gamma}(G \curvearrowright S^1) \le \log(2q)$ when $\Gamma$ has cardinality $q$.
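As a numerical sanity check, not from the paper, the ball-size bound above can be evaluated for a sample cardinality $q$: the normalized logarithms $\log(\#B_{\Gamma}(n))/n$ computed from the right-hand side decrease towards $\log(2q-1)$, which is exactly the entropy bound stated.

```python
import math

# Sanity check (not from the paper) of the ball-size bound
#   #B(n) <= 1 + (q/(q-1)) * ((2q-1)^n - 1)   for q >= 2,
# and of the resulting entropy bound log(2q - 1).
def ball_bound(q, n):
    """Upper bound for the number of elements of word length <= n."""
    if q == 1:
        return 1 + 2 * n
    return 1 + (q * ((2 * q - 1) ** n - 1)) // (q - 1)

q = 3  # arbitrary sample cardinality of the generating set
rates = [math.log(ball_bound(q, n)) / n for n in (10, 20, 40)]

# The normalized log-sizes decrease towards log(2q - 1) = log 5.
print(rates, math.log(2 * q - 1))
```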
**3. Some preparations for the proofs.** The statements of our results are obvious when the non-wandering set of the action equals the whole circle. Hence, in what follows we assume that $\Omega$ is a proper subset of $S^1$, and we denote by $I$ a connected component of the complement of $\Omega$. Let $\operatorname{St}(I)$ denote the stabilizer of $I$ in $G$.
LEMMA 1. *The stabilizer $\operatorname{St}(I)$ is either trivial or infinite cyclic.*

*Proof.* The (restrictions to $I$ of the) non-trivial elements of $\operatorname{St}(I)|_I$ have no fixed points, for otherwise these points would be non-wandering. Thus $\operatorname{St}(I)|_I$ acts freely on $I$, and according to Hölder's Theorem [4, 7], its action is semiconjugate to an action by translations. We claim that if $\operatorname{St}(I)|_I$ is non-trivial, then it is infinite cyclic. Indeed, if not, then the corresponding group of translations is dense. This implies that every point whose preimage under the semiconjugacy is a single point is non-wandering for the action. But this contradicts the fact that $I$ is contained in $\Omega^c$.

If $\operatorname{St}(I)|_I$ is trivial then $f|_I$ is trivial for every $f \in \operatorname{St}(I)$, and hence $f$ itself must be the identity. We then conclude that $\operatorname{St}(I)$ is trivial.

Analogously, $\operatorname{St}(I)$ is cyclic if $\operatorname{St}(I)|_I$ is cyclic. In this case, $\operatorname{St}(I)|_I$ is generated by the restriction to the interval $I$ of the generator of $\operatorname{St}(I)$. $\blacksquare$
**DEFINITION 1.** A connected component $I$ of $\Omega^c$ will be called *of type 1* if $\operatorname{St}(I)$ is trivial, and *of type 2* if $\operatorname{St}(I)$ is infinite cyclic.

Notice that the families of connected components of type 1 and type 2 are invariant, that is, for each $f \in G$ the interval $f(I)$ is of type 1 (resp. of type 2) if $I$ is of type 1 (resp. of type 2). Moreover, given two connected components of type 1 of $\Omega^c$, there exists at most one element in $G$ sending the former to the latter. Indeed, if $f(I) = g(I)$ then $g^{-1}f$ is in the stabilizer of $I$, and hence $f = g$ if $I$ is of type 1.
LEMMA 2. *Let $x_1, \dots, x_m$ be points contained in a single type 1 connected component of $\Omega^c$. If for some $\varepsilon > 0$ the points $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le 1 + 1/\varepsilon$.*
*Proof.* Let $I = ]a,b[$ be the connected component of type 1 of $\Omega^c$ containing the points $x_1, \dots, x_m$. After renumbering the $x_i$'s, we may assume that $a < x_1 < \dots < x_m < b$. For each $1 \le i \le m-1$ one can choose an element $g_i \in B_{\Gamma}(n)$ such that $\mathrm{dist}(g_i(x_i), g_i(x_{i+1})) \ge \varepsilon$. Now, since $I$ is of type 1, the intervals $]g_i(x_i), g_i(x_{i+1})[$ are pairwise disjoint. Therefore, the number of these intervals times their minimal length is at most the total length of the circle. This gives $(m-1)\varepsilon \le 1$, thus proving the lemma. $\blacksquare$
The case of connected components $I$ of type 2 of $\Omega^c$ is much more complicated. The difficulty is that if the generator of the stabilizer of $I$ is subexponentially distorted in $G$, then there exist exponentially many $(n, \varepsilon)$-separated points inside $I$, and hence a relevant part of the entropy is "concentrated" in $I$. To deal with this problem, for each connected component $I$ of type 2 of $\Omega^c$ we denote by $p_I$ its midpoint, and we define $\ell_I : G \to \mathbb{N}_0$ as follows. Let $h$ be the generator of the stabilizer of $I$ such that $h(x) > x$ for all $x$ in $I$. For each $f \in G$ the element $fhf^{-1}$ is the generator of the stabilizer of $f(I)$ with the analogous property. We then let $\ell_I(f) = |r|$, where $r$ is the unique integer such that

$$f h^r f^{-1}(p_{f(I)}) \le f(p_I) < f h^{r+1} f^{-1}(p_{f(I)}).$$
LEMMA 3. *For all $f, g$ in $G$ one has*

$$\ell_I(g \circ f) \le \ell_{f(I)}(g) + \ell_I(f) + 1.$$
*Proof.* Let $r$ be the unique integer such that

$$(2) \qquad (fhf^{-1})^r(p_{f(I)}) \le f(p_I) < (fhf^{-1})^{r+1}(p_{f(I)}),$$

and let $s$ be the unique integer for which

$$(gfhf^{-1}g^{-1})^s(p_{gf(I)}) \le g(p_{f(I)}) < (gfhf^{-1}g^{-1})^{s+1}(p_{gf(I)}),$$

so that

$$\ell_I(f) = |r|, \quad \ell_{f(I)}(g) = |s|.$$

We then have

$$g^{-1}(gfhf^{-1}g^{-1})^s(p_{gf(I)}) \le p_{f(I)} < g^{-1}(gfhf^{-1}g^{-1})^{s+1}(p_{gf(I)}),$$

that is,

$$(fhf^{-1})^s g^{-1}(p_{gf(I)}) \le p_{f(I)} < (fhf^{-1})^{s+1} g^{-1}(p_{gf(I)}).$$

Therefore,

$$(fhf^{-1})^r (fhf^{-1})^s g^{-1}(p_{gf(I)}) \le f(p_I) < (fhf^{-1})^{r+1} (fhf^{-1})^{s+1} g^{-1}(p_{gf(I)}),$$

and hence

$$(fhf^{-1})^{r+s} g^{-1}(p_{gf(I)}) \le f(p_I) < (fhf^{-1})^{r+s+2} g^{-1}(p_{gf(I)}).$$

This easily gives

$$g(fhf^{-1})^{r+s} g^{-1}(p_{gf(I)}) \le gf(p_I) < g(fhf^{-1})^{r+s+2} g^{-1}(p_{gf(I)}),$$

and thus

$$(gfhf^{-1}g^{-1})^{r+s}(p_{gf(I)}) \le gf(p_I) < (gfhf^{-1}g^{-1})^{r+s+2}(p_{gf(I)}).$$

This shows that $\ell_I(gf)$ equals either $|r+s|$ or $|r+s+1|$, which concludes the proof. $\blacksquare$
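The mechanism behind Lemma 3 is the near-additivity of counting fundamental domains, the same phenomenon as the failure of the floor function to be additive by more than one unit. In the following toy model, not from the paper, the group elements are translations $x \mapsto x + \alpha$ of the real line with $h$ the unit translation, so the analogue of $\ell$ is $|\lfloor \alpha \rfloor|$, the number of fundamental domains of $h$ between $0$ and its image.

```python
import math
import random

# Toy model (not from the paper): for translations of the real line with h
# the unit translation, ell(translation by a) = |floor(a)|.  Lemma 3's
# inequality ell(gf) <= ell(g) + ell(f) + 1 then mirrors the elementary fact
#   floor(a + b) in {floor(a) + floor(b), floor(a) + floor(b) + 1}.
random.seed(0)
for _ in range(1000):
    a = random.uniform(-50, 50)
    b = random.uniform(-50, 50)
    delta = math.floor(a + b) - math.floor(a) - math.floor(b)
    assert delta in (0, 1)
    # hence |floor(a + b)| <= |floor(a)| + |floor(b)| + 1
    assert abs(math.floor(a + b)) <= abs(math.floor(a)) + abs(math.floor(b)) + 1
print("near-additivity verified on 1000 random samples")
```

The discrepancy of at most one unit is exactly the "$r+s$ or $r+s+1$" alternative reached at the end of the proof.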
The following corollary is a direct consequence of the preceding lemma, but may also be proved directly.

**COROLLARY 1.** For every $f \in G$ one has

$$|\ell_I(f) - \ell_{f(I)}(f^{-1})| \le 1.$$

*Proof.* From (2) one obtains

$$h^{-(r+1)}(p_I) < f^{-1}(p_{f(I)}) \le h^{-r}(p_I),$$

and hence $\ell_{f(I)}(f^{-1})$ equals either $|r|$ or $|r+1|$. Since $\ell_I(f) = |r|$, the corollary follows. $\blacksquare$
**4. The proof in the smooth case.** To rule out the possibility of "concentration" of the entropy on a type 2 connected component $I$ of $\Omega^c$, in the $C^2$ case we will use classical control-of-distortion arguments in order to construct, starting from the function $\ell_I$, a kind of quasi-morphism from $G$ into $\mathbb{N}_0$. Slightly more generally, let $\mathcal{F}$ be any finite family of connected components of type 2 of $\Omega^c$. We denote by $\mathcal{F}^G$ the family of all intervals contained in the orbits of the intervals in $\mathcal{F}$. For each $f \in G$ we then define

$$\ell_{\mathcal{F}}(f) = \sup_{I \in \mathcal{F}^G} \ell_I(f).$$

*A priori*, the value of $\ell_{\mathcal{F}}$ could be infinite. We claim, however, that for groups of $C^2$ diffeomorphisms this value is necessarily finite for every element $f$.

**PROPOSITION 1.** For every finite family $\mathcal{F}$ of type 2 connected components of $\Omega^c$, the value of $\ell_{\mathcal{F}}(f)$ is finite for each $f \in G$.

To prove this proposition, we will need to estimate the function $\ell_I(f)$ in terms of the distortion of $f$ on the interval $I$.
LEMMA 4. *For each fixed type 2 connected component $I$ of $\Omega^c$ and every $g \in G$, the value of $\ell_I(g)$ is bounded from above by a number $L(V)$ depending on $V = \operatorname{var}(\log(g'|_I))$, the total variation of the logarithm of the derivative of the restriction of $g$ to $I$.*

*Proof.* Write $I = ]a,b[$ and $g(I) = ]\bar{a},\bar{b}[$. If $h$ is a generator of the stabilizer of $I$, then for every $f \in G$ the value of $\ell_I(f)$ corresponds (up to some constant $\pm 1$) to the number of fundamental domains for the dynamics of $fhf^{-1}$ on $f(I)$ between the points $p_{f(I)}$ and $f(p_I)$, which in turn equals the number of fundamental domains for the dynamics of $h$ on $I$ between $f^{-1}(p_{f(I)})$ and $p_I$. Therefore, it suffices to show that there exist $c < d$ in $]a,b[$ depending only on $V$ and such that $g^{-1}(p_{g(I)})$ belongs to $[c,d]$. We will show that this happens for the values

$$c = a + \frac{|I|}{2e^V} \quad \text{and} \quad d = b - \frac{|I|}{2e^V}.$$

We will just check that the first choice works, leaving the second one to the reader. By the Mean Value Theorem, there exist $x \in g(I)$ and $y \in [\bar{a}, p_{g(I)}]$ such that

$$(g^{-1})'(x) = \frac{|I|}{|g(I)|}$$

and

$$(g^{-1})'(y) = \frac{|g^{-1}([\bar{a}, p_{g(I)}])|}{|[\bar{a}, p_{g(I)}]|} = \frac{g^{-1}(p_{g(I)}) - a}{|g(I)|/2}.$$

By the definition of the constant $V$, we have $(g^{-1})'(x)/(g^{-1})'(y) \le e^V$. This gives

$$e^V \ge \frac{|I|/|g(I)|}{2(g^{-1}(p_{g(I)}) - a)/|g(I)|} = \frac{|I|}{2(g^{-1}(p_{g(I)}) - a)},$$

thus proving that $g^{-1}(p_{g(I)}) \ge a + |I|/(2e^V)$, as we wanted to show. $\blacksquare$
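The distortion estimate of Lemma 4 can be checked numerically on a hypothetical example, not from the paper and chosen purely to illustrate the inequality: take $I = ]0,1[$ and the orientation-preserving diffeomorphism $g(x) = x + 0.3\sin(\pi x)$ of $I$ (its derivative $1 + 0.3\pi\cos(\pi x)$ is positive), and verify that the preimage of the midpoint of $g(I)$ indeed stays in $[c, d]$.

```python
import math

# Numerical illustration (not from the paper) of the estimate in Lemma 4,
# for the hypothetical choice I = ]0,1[ and g(x) = x + 0.3*sin(pi*x).
a, b = 0.0, 1.0
g = lambda x: x + 0.3 * math.sin(math.pi * x)
dg = lambda x: 1 + 0.3 * math.pi * math.cos(math.pi * x)

# log(g') is monotone for this g, so its total variation on I is the
# difference of its values at the endpoints.
V = abs(math.log(dg(a)) - math.log(dg(b)))

# Midpoint of g(I) = ]g(0), g(1)[, and its preimage found by bisection
# (valid since g is strictly increasing).
p = (g(a) + g(b)) / 2
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < p:
        lo = mid
    else:
        hi = mid
preimage = (lo + hi) / 2

# Lemma 4's control: the preimage of the midpoint lies in [c, d].
c = a + (b - a) / (2 * math.exp(V))
d = b - (b - a) / (2 * math.exp(V))
print(c, preimage, d)
```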
*Proof of Proposition 1.* Let $J = ]\bar{a}, \bar{b}[$ be an interval in the $G$-orbit of $I = ]a, b[$. If $g = g_{i_n} \cdots g_{i_1}$, with $g_{i_j} \in \Gamma$, is an element of minimal length sending $I$ to $J$, then the intervals $I, g_{i_1}(I), g_{i_2}g_{i_1}(I), \dots, g_{i_{n-1}} \cdots g_{i_2}g_{i_1}(I)$ have pairwise disjoint interiors. Therefore,

$$\operatorname{var}(\log(g'|_I)) \le \sum_{j=0}^{n-1} \operatorname{var}(\log(g_{i_{j+1}}'|_{g_{i_j}\cdots g_{i_1}(I)})) \le \sum_{h \in \Gamma} \operatorname{var}(\log(h')) =: W.$$

Moreover, setting $V = \operatorname{var}(\log(f'))$, we have

$$\operatorname{var}(\log((fg)'|_I)) \le \operatorname{var}(\log(g'|_I)) + \operatorname{var}(\log(f')) \le W + V.$$
By Lemmas 3 and 4 and Corollary 1,

$$\ell_J(f) \le \ell_J(g^{-1}) + \ell_I(fg) + 1 \le \ell_I(g) + \ell_I(fg) + 2 \le L(W) + L(W+V) + 2.$$

This proves the assertion of the proposition when $\mathcal{F}$ consists of a single interval. The case of a general finite family $\mathcal{F}$ follows easily. $\blacksquare$
For a given $\varepsilon > 0$ we define $\ell_{\varepsilon} = \ell_{\mathcal{F}_{\varepsilon}}$, where $\mathcal{F}_{\varepsilon} = \{I_1, \dots, I_k\}$ is the family of all connected components of type 2 of $\Omega^c$ having length greater than or equal to $\varepsilon$, with $k = k(\varepsilon)$. Notice that, by Lemma 3, for every $f, g$ in $G$ one has

$$(3) \qquad \ell_{\varepsilon}(gf) \le \ell_{\varepsilon}(g) + \ell_{\varepsilon}(f) + 1.$$
LEMMA 5. *There exist constants $A(\varepsilon) > 0$ and $B(\varepsilon)$ with the following property: if $x_1, \dots, x_m$ are points in a single connected component of type 2 of $\Omega^c$ and $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le A(\varepsilon)n + B(\varepsilon)$.*
*Proof.* Write $c_\varepsilon = \max\{\ell_\varepsilon(g) : g \in \Gamma\}$ (according to Proposition 1, the value of $c_\varepsilon$ is finite). Let $I$ be the type 2 connected component of $\Omega^c$ containing $x_1, \dots, x_m$. We may assume that $x_1 < \dots < x_m$. For each $1 \le i \le k$ let $h_i$ be the generator of $\operatorname{St}(I_i)$. Notice that $\ell_\varepsilon(h_i^r) \ge |r|$ for all $r \in \mathbb{Z}$.

If $f$ is an element in $B_{\Gamma}(n)$ sending $I$ to some $I_i$, then the number of the points $x_1, \dots, x_m$ which are $\varepsilon$-separated by $f$ is at most $1/\varepsilon + 1$. We claim that the number of elements in $B_{\Gamma}(n)$ sending $I$ to $I_i$ is bounded above by $4nc_{\varepsilon} + 4n - 1$. Indeed, if $g$ also sends $I$ to $I_i$ then $gf^{-1} \in \operatorname{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore, using (3) one obtains $|r| \le \ell_{\varepsilon}(h_i^r) = \ell_{\varepsilon}(gf^{-1}) \le 2nc_{\varepsilon} + 2n - 1$, and there are at most $4nc_{\varepsilon} + 4n - 1$ possible values of $r$.

Since the previous arguments apply to each type 2 interval $I_i$, we have

$$m \le k(1/\varepsilon + 1)(4nc_{\varepsilon} + 4n - 1).$$

Therefore, letting

$$A(\varepsilon) = (4k + 4k/\varepsilon)(1 + c_{\varepsilon}) \quad \text{and} \quad B(\varepsilon) = -(k + k/\varepsilon)$$

concludes the proof. $\blacksquare$
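The final step is pure algebra: expanding $k(1/\varepsilon + 1)(4nc_{\varepsilon} + 4n - 1)$ gives exactly $A(\varepsilon)n + B(\varepsilon)$ with the stated constants. A quick exact-arithmetic check of this identity, on arbitrary sample values and not from the paper, can be sketched as:

```python
from fractions import Fraction

# Sanity check (not from the paper) that the constants A(eps), B(eps) of
# Lemma 5 reproduce the bound m <= k(1/eps + 1)(4n*c + 4n - 1) exactly,
# using rational arithmetic on a few arbitrary sample values.
def lemma5_bound(k, eps, c, n):
    return k * (1 / eps + 1) * (4 * n * c + 4 * n - 1)

def affine_form(k, eps, c, n):
    A = (4 * k + 4 * k / eps) * (1 + c)
    B = -(k + k / eps)
    return A * n + B

eps = Fraction(1, 10)  # arbitrary sample scale
for k in (1, 3):
    for c in (0, 5):
        for n in (1, 7, 20):
            assert lemma5_bound(k, eps, c, n) == affine_form(k, eps, c, n)
print("A(eps)*n + B(eps) reproduces the Lemma 5 bound exactly")
```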
To conclude the proof of Theorem A, the following notation will be useful.

**NOTATION.** Given $\varepsilon > 0$ and $n \in \mathbb{N}$, we denote by $s(n, \varepsilon)$ the largest cardinality of an $(n, \varepsilon)$-separated subset of $S^1$. Likewise, $s_{\Omega}(n, \varepsilon)$ denotes the largest cardinality of an $(n, \varepsilon)$-separated set contained in the non-wandering set $\Omega$.
*Proof of Theorem A.* Fix $0 < \varepsilon < 1/(2L)$, where $L$ is a common Lipschitz constant for the elements in $\Gamma$. We will show that, for some function $p_\varepsilon$ growing linearly in $n$ (and whose coefficients depend on $\varepsilon$), one has

$$(4) \qquad s(n, \varepsilon) \le p_{\varepsilon}(n) s_{\Omega}(n, \varepsilon) + p_{\varepsilon}(n).$$
Actually, any function $p_\varepsilon$ with subexponential growth satisfying such an inequality suffices. Indeed, taking the logarithm of both sides, dividing by $n$, and passing to the limit implies that

$$h_{\Gamma}(G \curvearrowright S^1, \varepsilon) = h_{\Gamma}(G \curvearrowright \Omega, \varepsilon).$$

Letting $\varepsilon$ go to zero gives

$$h_{\Gamma}(G \curvearrowright S^1) \le h_{\Gamma}(G \curvearrowright \Omega).$$

Since the opposite inequality is obvious, this shows the desired equality between the entropies.
To show (4), fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. Let $n_{\Omega}$ (resp. $n_{\Omega^c}$) be the number of points in $S$ which lie in $\Omega$ (resp. in $\Omega^c$). Obviously, $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$, where $[\cdot]$ denotes integer part. We will show that there exists an $(n, \varepsilon)$-separated set $T$ contained in $\Omega$ and having cardinality $l$. This will obviously give $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$, together with Lemmas 2 and 5, will then imply that

$$\begin{aligned} s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^c} \le n_{\Omega} + tk(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1) \\ &\le s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1), \end{aligned}$$

thus showing (4).
To show the existence of the set $T$ with the properties above, we proceed in a constructive way. Let us enumerate the connected components of $\Omega^c$ containing points in $S$ in a cyclic way as $I_1, \dots, I_t$. Now for each $1 \le i \le l$ choose a point $t_i \in \Omega$ between $I_{2i-1}$ and $I_{2i}$, and let $T = \{t_1, \dots, t_l\}$. We need to check that, for $i \neq j$, the points $t_i$ and $t_j$ are $(n, \varepsilon)$-separated. By construction, for each $i \neq j$ there exist at least two different points $x, y$ in $S$ contained in the shortest of the two intervals of $S^1$ joining $t_i$ and $t_j$. Since $S$ is $(n, \varepsilon)$-separated, there exist $m \le n$ and $g_{i_1}, \dots, g_{i_m}$ in $\Gamma$ such that $\mathrm{dist}(h(x), h(y)) \ge \varepsilon$, where $h = g_{i_m} \cdots g_{i_2}g_{i_1}$. Unfortunately, because of the topology of the circle, this does not immediately imply that $\mathrm{dist}(h(t_i), h(t_j)) \ge \varepsilon$. However, the proof will be finished if we show that

$$(5) \qquad \mathrm{dist}(g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)) \ge \varepsilon \quad \text{for some } 0 \le r \le m.$$

This claim is obvious if $\mathrm{dist}(t_i, t_j) \ge \varepsilon$. If this is not the case then, by the choice of the constants $\varepsilon$ and $L$, the length of the interval $[g_{i_1}(t_i), g_{i_1}(t_j)]$ is smaller than $1/2$, and hence it coincides with the distance between its endpoints. If this distance is at least $\varepsilon$, then we are done. If not, the same argument shows that the length of the interval $[g_{i_2}g_{i_1}(t_i), g_{i_2}g_{i_1}(t_j)]$ is smaller than $1/2$ and coincides with the distance between its endpoints. If this length is at least $\varepsilon$, then we are done. If not, we continue the procedure. Since the separated pair $x, y$ lies between $t_i$ and $t_j$, there must be some integer $r \le m$ such that the length of the interval $[g_{i_{r-1}} \cdots g_{i_1}(t_i), g_{i_{r-1}} \cdots g_{i_1}(t_j)]$ is smaller than $\varepsilon$, while that of $[g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)]$ is greater than or equal to $\varepsilon$. As before, the length of the latter interval is forced to be smaller than $1/2$, and hence it coincides with the distance between its endpoints. This shows (5) and concludes the proof of Theorem A. $\blacksquare$
**5. The proof in the absence of subexponentially distorted elements.** Recall that topological entropy is invariant under topological conjugacy. Therefore, due to [3, Théorème D], in order to prove Theorem B we may assume that $G$ is a group of bi-Lipschitz homeomorphisms. Let $L$ be a common Lipschitz constant for the elements in $\Gamma$. Fix again $0 < \varepsilon < 1/(2L)$, and let $I_1, \dots, I_k$ be the connected components of $\Omega^c$ having length greater than or equal to $\varepsilon$. Let $h_i$ be a generator of the stabilizer of $I_i$ (with $h_i = \mathrm{id}$ in case $I_i$ is of type 1). Consider the minimal non-decreasing function $q_\varepsilon$ such that, for each of the non-trivial $h_i$'s, one has $q_\varepsilon(\|h_i^r\|) \ge r$ for all positive $r$. We will show that (4) holds for the function

$$p_{\varepsilon}(n) = 2k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) + 1.$$

Notice that, by assumption, this function $p_\varepsilon$ grows at most subexponentially in $n$. Hence, as in the proof of Theorem A, inequality (4) allows us to conclude the equality between the entropies.
The main difficulty in showing (4) in this case is that Lemma 5 is no longer available. However, the following still holds.

LEMMA 6. *If $x_1, \dots, x_m$ are points in a single type 2 connected component $I$ of $\Omega^c$ having length at least $\varepsilon$, and $x_i, x_j$ are $(n, \varepsilon)$-separated for all $i \neq j$, then $m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1)$.*
*Proof.* We may assume that $x_1 < \dots < x_m$. If $f$ is an element in $B_{\Gamma}(n)$ sending $I$ to some $I_i$, then the number of the points $x_1, \dots, x_m$ which are $\varepsilon$-separated by $f$ is at most $1/\varepsilon + 1$. We claim that the number of elements in $B_{\Gamma}(n)$ sending $I$ to $I_i$ is bounded above by $2q_\varepsilon(2n) + 1$. Indeed, if $g$ also sends $I$ to $I_i$ then $gf^{-1} \in \operatorname{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore,

$$2n \ge \|gf^{-1}\| = \|h_i^r\|,$$

and hence

$$q_{\varepsilon}(2n) \ge q_{\varepsilon}(\|h_i^r\|) \ge |r|.$$

Since the previous arguments apply to each type 2 interval $I_i$, this gives

$$m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1),$$

thus proving the lemma. $\blacksquare$
To show (4) in the present case, we proceed as in the proof of Theorem A. We fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. We let $n_\Omega$ (resp. $n_{\Omega^c}$) be the number of points in $S$ which lie in $\Omega$ (resp. in $\Omega^c$), so that $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$. As before, one can show that there exists an $(n, \varepsilon)$-separated set contained in $\Omega$ and having cardinality $l$. This obviously gives $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$ still hold. Using Lemmas 2 and 6 one now obtains

$$\begin{aligned} s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^{c}} \le n_{\Omega} + tk(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) \\ &\le s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1). \end{aligned}$$

This concludes the proof of Theorem B. $\blacksquare$
**Acknowledgments.** I would like to thank Andrés Navas for introducing me to this subject and for his continuous support during this work, which was partially funded by the Research Network on Low Dimensional Dynamical Systems (PBCT-Conicyt project ADI 17). I would also like to extend my gratitude to both the referee and the editor for pointing out a subtle error in the original version of this paper.
References

[1] D. Calegari and M. Freedman, *Distortion in transformation groups*, Geom. Topol. 10 (2006), 267–293.

[2] J. Cantwell and L. Conlon, *Poincaré–Bendixson theory for leaves of codimension one*, Trans. Amer. Math. Soc. 265 (1981), 181–209.

[3] B. Deroin, V. Kleptsyn and A. Navas, *Sur la dynamique unidimensionnelle en régularité intermédiaire*, Acta Math. 199 (2007), 199–262.

[4] É. Ghys, *Groups acting on the circle*, Enseign. Math. 47 (2001), 329–407.

[5] É. Ghys, R. Langevin and P. Walczak, *Entropie géométrique des feuilletages*, Acta Math. 160 (1988), 105–142.

[6] G. Hector, *Architecture des feuilletages de classe C²*, Astérisque 107–108 (1983), 243–258.

[7] A. Navas, *Groups of Circle Diffeomorphisms*, forthcoming book; Spanish version: Ensaios Matemáticos 13, Brazil. Math. Soc., 2007.

[8] A. Navas, *Growth of groups and diffeomorphisms of the circle*, Geom. Funct. Anal. 18 (2008), 988–1028.

[9] P. Walczak, *Dynamics of Foliations, Groups and Pseudogroups*, IMPAN Monogr. Math. 64, Birkhäuser, Basel, 2004.
Departamento de Matemáticas
Facultad de Ciencias
Universidad de Chile
Las Palmeras 3425, Ñuñoa
Santiago, Chile
E-mail: ejorquer@u.uchile.cl

Received 15 September 2008;
in revised form 25 February 2009
samples/texts_merged/2779026.md
ADDED
|
@@ -0,0 +1,595 @@
---PAGE_BREAK---

Erdös-Rényi Sequences and Deterministic Construction of Expanding Cayley Graphs

V. Arvind*

Partha Mukhopadhyay†

Prajakta Nimbhorkar†

May 15, 2011

Abstract

Given a finite group $G$ by its multiplication table as input, we give a deterministic polynomial-time construction of a directed Cayley graph on $G$ with $O(\log|G|)$ generators, which has a rapid mixing property and a constant spectral expansion.

We prove a similar result in the undirected case, and give a new deterministic polynomial-time construction of an expanding Cayley graph with $O(\log|G|)$ generators, for any group $G$ given by its multiplication table. This gives a completely different and elementary proof of a result of Wigderson and Xiao [10].

For any finite group $G$ given by a multiplication table, we give a deterministic polynomial-time construction of a cube generating sequence that gives a distribution on $G$ which is arbitrarily close to the uniform distribution. This derandomizes the well-known construction of Erdös-Rényi sequences [2].

# 1 Introduction

Let $G$ be a finite group with $n$ elements, and let $J = \{g_1, g_2, \dots, g_k\}$ be a *generating set* for the group $G$.

The *directed Cayley graph* Cay$(G, J)$ is a directed graph with vertex set $G$ and directed edges of the form $(x, xg_i)$ for each $x \in G$ and $g_i \in J$. Clearly, since $J$ is a generating set for $G$, Cay$(G, J)$ is a strongly connected graph with every vertex of out-degree $k$.

The *undirected Cayley graph* Cay$(G, J \cup J^{-1})$ is an undirected graph on the vertex set $G$ with undirected edges of the form $\{x, xg_i\}$ for each $x \in G$ and $g_i \in J$. Again, since $J$ is a generating set for $G$, Cay$(G, J \cup J^{-1})$ is a connected regular graph of degree $|J \cup J^{-1}|$.

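As a concrete illustration (ours, not the paper's; the function name `cayley_adjacency` and the $\mathbb{Z}_6$ example are assumptions for the sketch), the directed Cayley graph can be read off directly from a multiplication table:

```python
# A minimal sketch: building the directed Cayley graph Cay(G, J) from a
# multiplication table, illustrated on the cyclic group Z_6.
def cayley_adjacency(table, gens):
    """table[x][y] encodes x*y in G (elements 0..n-1).
    Returns out-neighbour lists: x -> [x*g for g in gens]."""
    n = len(table)
    return {x: [table[x][g] for g in gens] for x in range(n)}

# Example: Z_6 with generating set J = {1, 5} (i.e. {1, -1}).
n = 6
table = [[(x + y) % n for y in range(n)] for x in range(n)]
adj = cayley_adjacency(table, [1, 5])
```

Every vertex has out-degree $|J|$, matching the definition above.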
Let $X = (V, E)$ be an undirected regular $n$-vertex graph of degree $D$. Consider the *normalized adjacency matrix* $A_X$ of the graph $X$. It is a symmetric matrix with largest eigenvalue 1. For $0 < \lambda < 1$, the graph $X$ is an $(n, D, \lambda)$-spectral expander if the second largest eigenvalue of $A_X$, in absolute value, is bounded by $\lambda$.

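This spectral condition is easy to check numerically. The sketch below (ours; it assumes `numpy` is available) computes the second largest absolute eigenvalue of the normalized adjacency matrix for two small graphs:

```python
# Checking spectral expansion of an undirected D-regular graph via the
# normalized adjacency matrix A/D.
import numpy as np

def second_eigenvalue(adj, degree):
    """adj: symmetric adjacency matrix of a D-regular undirected graph.
    Returns max |lambda| over all eigenvalues of A/D except the top one (= 1)."""
    A = np.asarray(adj, dtype=float) / degree
    eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return eigs[1]

K4 = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]   # complete graph on 4 vertices
C4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]   # 4-cycle (bipartite)
```

For $K_4$ the value is $1/3$, while the bipartite 4-cycle has an eigenvalue of absolute value 1, so it is not an $(n, D, \lambda)$-spectral expander for any $\lambda < 1$.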
The study of expander graphs and their properties is of fundamental importance in theoretical computer science; the Hoory-Linial-Wigderson monograph is an excellent source [4] for current

*The Institute of Mathematical Sciences, Chennai, India. Email: arvind@imsc.res.in

†Chennai Mathematical Institute, Siruseri, India. Emails: {partham,prajakta}@cmi.ac.in

---PAGE_BREAK---

developments and applications. A central problem is the explicit construction of expander graph families [4, 5]. By explicit it is meant that the family of graphs has efficient deterministic constructions, where the notion of efficiency is often tailored to a specific application, e.g. [9]. Explicit constructions with the best known (and near optimal) expansion and degree parameters are Cayley expander families (the so-called Ramanujan graphs) [5].

Does every finite group have an expanding generating set? Alon and Roichman [1] answered this in the affirmative using the probabilistic method. Let $G$ be any finite group with $n$ elements. Given any constant $\lambda > 0$, they showed that for a random multiset $J$ of size $O(\log n)$ picked uniformly at random from $G$, the graph $\text{Cay}(G, J \cup J^{-1})$ is, with high probability, a spectral expander with second largest eigenvalue bounded by $\lambda$. In other words, $\text{Cay}(G, J \cup J^{-1})$ is an $O(\log n)$ degree, $\lambda$-spectral expander with high probability. The theorem also gives a polynomial (in $n$) time randomized algorithm for constructing a Cayley expander on $G$: pick the elements of $J$ independently and uniformly at random and check that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander. A brute-force deterministic simulation of this runs in $n^{O(\log n)}$ time by cycling through all candidate sets $J$. Wigderson and Xiao [10] give a very interesting $n^{O(1)}$ time derandomized construction based on Chernoff bounds for matrix-valued random variables (and pessimistic estimators). Their result is the starting point of the study presented in this paper.

In this paper, we give an entirely different and elementary $n^{O(1)}$ time derandomized construction, based on analyzing mixing times of random walks on expanders rather than their spectral properties. Our construction is conceptually somewhat simpler and also works for directed Cayley graphs.

The connection between the mixing time of random walks on a graph and its spectral expansion is well studied. For undirected graphs we have the following.

**Theorem 1.1** [8, Theorem 1] Let $A$ be the normalized adjacency matrix of an undirected graph. Suppose that, for every initial distribution, the distribution obtained after $t$ steps of the random walk following $A$ is $\epsilon$-close to the uniform distribution in the $L_1$ norm. Then the spectral gap $(1 - |\lambda_1|)$ of $A$ is $\Omega(\frac{1}{t} \log(\frac{1}{\epsilon}))$.

In particular, if the graph is $\text{Cay}(G, J \cup J^{-1})$ for an $n$-element group $G$, and a $C \log n$ step random walk is $\frac{1}{n^c}$-close to the uniform distribution in the $L_1$ norm, then the spectral gap is at least a constant $\Omega(\frac{c}{C})$.

Even for directed graphs, a connection between mixing times of random walks and the spectral properties of the underlying Markov chain is known.

**Theorem 1.2** [6, Theorem 5.9] Let $\lambda_{max}$ denote the second largest magnitude (complex-valued) eigenvalue of the normalized adjacency matrix $P$ of a strongly connected aperiodic Markov chain. Then the mixing time is lower bounded by $\tau(\epsilon) \ge \frac{\log(1/2\epsilon)}{\log(1/|\lambda_{max}|)}$, where $\epsilon$ is the difference between the resulting distribution and the uniform distribution in the $L_1$ norm.

In [7], Pak uses this connection to prove an analogue of the Alon-Roichman theorem for directed Cayley graphs: let $G$ be an $n$-element group and let $J = \langle g_1, \dots, g_k \rangle$ consist of $k = O(\log n)$ group elements picked independently and uniformly at random from $G$. Pak shows that for any initial distribution on $G$, the distribution obtained by an $O(\log n)$-step *lazy random walk* on the directed graph $\text{Cay}(G, J)$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Then, by Theorem 1.2, it follows that the directed Cayley graph $\text{Cay}(G, J)$ has constant spectral expansion. Crucially, we note

---PAGE_BREAK---

that Pak considers lazy random walks, since his main technical tool is based on *cube generating sequences* for finite groups, introduced by Erdös and Rényi in [2].

**Definition 1.3** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ a sequence of group elements. For any $\delta > 0$, $J$ is said to be a cube generating sequence for $G$ with closeness parameter $\delta$ if the probability distribution $D_J$ on $G$ of the product $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$, where each $\epsilon_i$ is independently and uniformly distributed in $\{0, 1\}$, is $\delta$-close to the uniform distribution in the $L_2$ norm.

Erdös and Rényi [2] proved the following theorem.

**Theorem 1.4** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the distribution on $G$ generated by $J$, i.e. $D_J(x) = \text{Pr}_{\epsilon_1, \dots, \epsilon_k \in_R \{0,1\}} [g_1^{\epsilon_1} \cdots g_k^{\epsilon_k} = x]$ for $x \in G$, and let $U$ be the uniform distribution on $G$. Then the expected value $\mathbb{E}_J \|D_J - U\|_2^2 = \frac{1}{2^k}\left(1 - \frac{1}{n}\right)$.

In particular, if we choose $k = O(\log n)$, the resulting distribution $D_J$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_2$ norm.

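For a fixed sequence $J$, the cube distribution $D_J$ of Definition 1.3 can be computed exactly by one pass per generator. The sketch below is our illustration (the $\mathbb{Z}_8$ example and function names are assumptions, not from the paper); note that in $\mathbb{Z}_8$ the sequence $\langle 1, 2, 4 \rangle$ gives $\epsilon_1 + 2\epsilon_2 + 4\epsilon_3$, which hits every element exactly once, so $D_J$ is exactly uniform:

```python
# Computing the cube distribution D_J exactly by dynamic programming:
# process one generator at a time, splitting the mass over e_i = 0 and e_i = 1.
def cube_distribution(table, J, identity=0):
    """Distribution of g_1^{e_1} ... g_k^{e_k}, each e_i uniform in {0,1}."""
    n = len(table)
    dist = [0.0] * n
    dist[identity] = 1.0
    for g in J:
        new = [0.0] * n
        for x, p in enumerate(dist):
            new[x] += p / 2            # e_i = 0: factor g is skipped
            new[table[x][g]] += p / 2  # e_i = 1: multiply by g on the right
        dist = new
    return dist

def l2_to_uniform(dist):
    n = len(dist)
    return sum((p - 1.0 / n) ** 2 for p in dist) ** 0.5

n = 8
table = [[(x + y) % n for y in range(n)] for x in range(n)]  # Z_8
d = cube_distribution(table, J=[1, 2, 4])
```

This takes $O(kn)$ time per sequence, which is what makes the quantities in the derandomization below exactly computable.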
## Our Results

Let $G$ be a finite group with $n$ elements, given by its multiplication table. Our first result is a derandomization of a result of Pak [7]. We show a deterministic polynomial-time construction of a generating set $J$ of size $O(\log |G|)$ such that a lazy random walk on Cay$(G, J)$ mixes fast. Throughout the paper, we measure the distance between two distributions in the $L_2$ norm.

**Theorem 1.5** For any constant $c > 1$, there is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for the given group $G$, such that for any initial distribution on $G$ the lazy random walk of $O(\log n)$ steps on the directed Cayley graph Cay$(G, J)$ yields a distribution that is $\frac{1}{n^c}$-close (in $L_2$ norm) to the uniform distribution.

Theorem 1.5 and Theorem 1.2 together yield the following corollary.

**Corollary 1.6** Given a finite group $G$ and any $\epsilon > 0$, there is a deterministic polynomial-time algorithm to construct an $O(\log n)$ size generating set $J$ such that Cay$(G, J)$ is a spectral expander (i.e. its second largest eigenvalue in absolute value is bounded by $\epsilon$).

Our next result yields an alternative proof of the Wigderson-Xiao result [10]. In order to carry out an approach similar to the proof of Theorem 1.5 for undirected Cayley graphs, we need a suitable generalization of cube generating sequences, and in particular a generalization of [2]. Using this generalization, we give a deterministic poly($n$) time algorithm to compute $J = \langle g_1, g_2, \dots, g_k \rangle$ with $k = O(\log n)$ such that a lazy random walk of length $O(\log n)$ on Cay$(G, J \cup J^{-1})$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Here the lazy random walk is described by the symmetric transition matrix $A_J = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$, where $P_J$ and $P_{J^{-1}}$ are the adjacency matrices of the Cayley graphs Cay$(G, J)$ and Cay$(G, J^{-1})$ respectively.

**Theorem 1.7** Let $G$ be a finite group of order $n$ and $c > 1$ any constant. There is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for $G$, such that an $O(\log n)$ step lazy random walk on $G$, governed by the transition matrix $A_J$ described above, is $\frac{1}{n^c}$-close to the uniform distribution, for any given initial distribution on $G$.

---PAGE_BREAK---

Theorem 1.7 and the connection between mixing time and spectral expansion for undirected graphs given by Theorem 1.1 yield the following.

**Corollary 1.8 (Wigderson-Xiao)** [10] Given a finite group $G$ by its multiplication table, there is a deterministic polynomial (in $|G|$) time algorithm to construct a generating set $J$ such that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander.

Finally, we show that the construction of cube generating sequences can also be done in deterministic polynomial time.

**Theorem 1.9** For any constant $c > 1$, there is a deterministic polynomial (in $n$) time algorithm that outputs a cube generating sequence $J$ of size $O(\log n)$ such that the distribution $D_J$ on $G$, defined by the cube generating sequence $J$, is $\frac{1}{n^c}$-close to the uniform distribution.

## 1.1 Organization of the paper

The paper is organized as follows. We prove Theorem 1.5 and Corollary 1.6 in Section 2. The proofs of Theorem 1.7 and Corollary 1.8 are given in Section 3. We prove Theorem 1.9 in Section 4. Finally, we summarize in Section 5.

# 2 Expanding Directed Cayley Graphs

Let $D_1$ and $D_2$ be two probability distributions over the finite set $\{1, 2, \dots, n\}$. We use the $L_2$ norm to measure the distance between the two distributions: $$ \|D_1 - D_2\|_2 = \left[ \sum_{x \in [n]} |D_1(x) - D_2(x)|^2 \right]^{\frac{1}{2}}. $$ Let $U$ denote the uniform distribution on $[n]$. We say that a distribution $D$ is $\delta$-close to the uniform distribution if $$ \|D - U\|_2 \le \delta. $$

**Definition 2.1** The collision probability of a distribution $D$ on $[n]$ is defined as $\text{Coll}(D) = \sum_{i \in [n]} D(i)^2$. It is easy to see that $\text{Coll}(D) = \frac{1}{n} + \|D - U\|_2^2$; hence $\text{Coll}(D) \le 1/n + \delta$ if and only if $\|D - U\|_2^2 \le \delta$, and $\text{Coll}(D)$ attains its minimum value $1/n$ only for the uniform distribution.

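The identity $\text{Coll}(D) = \frac{1}{n} + \|D - U\|_2^2$ follows by expanding the square; it can be checked numerically (the example distribution below is ours, for illustration only):

```python
# Verifying Coll(D) = 1/n + ||D - U||_2^2 on a small example distribution.
def coll(dist):
    """Collision probability: probability two independent samples agree."""
    return sum(p * p for p in dist)

def l2sq_to_uniform(dist):
    n = len(dist)
    return sum((p - 1.0 / n) ** 2 for p in dist)

D = [0.5, 0.25, 0.125, 0.125]
n = len(D)
lhs = coll(D)
rhs = 1.0 / n + l2sq_to_uniform(D)
```

The minimum value $1/n$ is attained exactly at the uniform distribution, where the $L_2$ term vanishes.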
We prove Theorem 1.5 by giving a deterministic construction of a cube generating sequence $J$ such that a random walk on $\text{Cay}(G, J)$ mixes in $O(\log n)$ steps. We first describe a randomized construction in Section 2.1, which shows the existence of such a sequence. The construction is based on the analysis of [7]. It is then derandomized in Section 2.2.

## 2.1 Randomized construction

For a sequence of group elements $J = \langle g_1, \dots, g_k \rangle$, we consider the Cayley graph $\text{Cay}(G, J)$, which is, in general, a directed multigraph in which both the in-degree and the out-degree of every vertex is $k$. Let $A$ denote the normalized adjacency matrix of $\text{Cay}(G, J)$. The lazy random walk is defined by the probability transition matrix $(A+I)/2$, where $I$ is the identity matrix. Let $Q_J$ denote the probability distribution obtained after $m$ steps of the lazy random walk. Pak [7] analyzed the distribution $Q_J$ and showed that for a random $J$ of size $O(\log n)$ and $m = O(\log n)$, $Q_J$ is $1/n^{O(1)}$-close to the uniform distribution. We note that Pak works with the $L_\infty$ norm. Our aim is to give an efficient deterministic construction of $J$. It turns out that the $L_2$ norm and the collision probability

---PAGE_BREAK---

are the right tools for us to work with, since we can compute these quantities exactly as we fix the elements of $J$ one by one.

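The lazy walk itself is straightforward to simulate exactly by iterating the transition matrix. The sketch below is ours (it assumes `numpy`; the $\mathbb{Z}_5$ instance is an illustration, not the paper's), using the transition matrix $\frac{1}{2}(I + A)$ with $A$ the normalized adjacency matrix:

```python
# Exact simulation of the lazy random walk on Cay(G, J):
# one step stays put with probability 1/2, else follows a uniform out-edge.
import numpy as np

def lazy_walk(table, J, m, start):
    n, k = len(table), len(J)
    A = np.zeros((n, n))
    for x in range(n):
        for g in J:
            A[x, table[x][g]] += 1.0 / k   # normalized out-edges of Cay(G, J)
    P = (A + np.eye(n)) / 2.0              # lazy transition matrix
    q = np.array(start, dtype=float)
    for _ in range(m):
        q = q @ P                          # one walk step (row-vector update)
    return q

n = 5
table = [[(x + y) % n for y in range(n)] for x in range(n)]  # Z_5
start = [1.0] + [0.0] * (n - 1)
q = lazy_walk(table, J=[1, 2], m=60, start=start)
```

On this tiny instance the walk is already extremely close to uniform after a modest number of steps.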
Consider any length-$m$ sequence $I = \langle i_1, \dots, i_m \rangle \in [k]^m$, where the $i_j$ are indices referring to elements of the set $J$. Let $R_I^J$ denote the following probability distribution on $G$. For each $x \in G$: $R_I^J(x) = \text{Pr}_{\bar{\epsilon}}[g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = x]$, where $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_m)$ and each $\epsilon_i \in \{0, 1\}$ is picked independently and uniformly at random. Notice that for each $x \in G$ we have $Q_J(x) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(x)$.

Further, notice that $R_I^J$ is precisely the probability distribution defined by the cube generating sequence $\langle g_{i_1}, g_{i_2}, \dots, g_{i_m} \rangle$, and the above equation states that the distribution $Q_J$ is the average of the $R_I^J$ over all $I \in [k]^m$.

In general, the indices in $I \in [k]^m$ are not distinct. Let $L(I)$ denote the sequence of distinct indices occurring in $I$, in the order of their first occurrence in $I$, from left to right. We refer to $L(I)$ as the L-subsequence of $I$. Clearly, the sequence $L(I)$ itself defines a probability distribution $R_{L(I)}^J$ on the group $G$.

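The L-subsequence is a direct deduplication in order of first occurrence; a one-line transcription (ours) makes the definition concrete:

```python
# L(I): the distinct indices of I, in order of first occurrence, left to right.
def l_subsequence(I):
    seen, L = set(), []
    for i in I:
        if i not in seen:
            seen.add(i)
            L.append(i)
    return L
```

For example, `l_subsequence([3, 1, 3, 2, 1, 4])` returns `[3, 1, 2, 4]`.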
Suppose the elements of $J$ are picked independently and uniformly at random from $G$. The following lemma shows, for any $I \in [k]^m$, that if $R_{L(I)}^J$ is $\delta$-close to the uniform distribution (in the $L_2$ norm) in expectation, then so is $R_I^J$. We state it in terms of collision probabilities.

**Lemma 2.2** For a fixed $I$, if $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] \leq 1/n + \delta$ then $\mathbb{E}_J[\text{Coll}(R_I^J)] = \mathbb{E}_J[\sum_{g \in G} R_I^J(g)^2] \leq 1/n + \delta$.

A proof of Lemma 2.2 is given in the appendix to keep our presentation self-contained. A similar lemma for the $L_\infty$ norm is shown in [7, Lemma 1] (though it is not stated there in terms of the expectation).

When the elements of $J$ are picked uniformly and independently from $G$, by Theorem 1.4, $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] = \frac{1}{n} + \frac{1}{2^\ell}(1 - \frac{1}{n})$, where $\ell$ is the length of the L-subsequence. Thus the expectation is small provided $\ell$ is large enough. It turns out that most $I \in [k]^m$ have sufficiently long L-subsequences (Lemma 2.3). A similar result appears in [7]. We give a proof of Lemma 2.3 in the appendix.

**Lemma 2.3** [7] Let $a = \frac{k}{\ell-1}$. The probability that a sequence of length $m$ over $[k]$ does not have an L-subsequence of length $\ell$ is at most $\frac{(ae)^{\frac{k}{a}}}{a^m}$.

To ensure that the above probability is bounded by $\frac{1}{2^m}$, it suffices to choose $m > \frac{(k/a) \log(ae)}{\log(a/2)}$.

In the following lemma (which is again an $L_2$ norm version of a similar statement from [7]), we observe that the expected distance from the uniform distribution is small when $I \in [k]^m$ is picked uniformly at random. The proof of the lemma is given in the appendix.

**Lemma 2.4** $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \leq \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$.

We can make $\frac{1}{2^{\Theta(m)}} < \frac{1}{n^c}$ for some $c > 0$ by choosing $m = O(\log n)$. That also fixes $k$ to be $O(\log n)$, suitably.

---PAGE_BREAK---

## 2.2 Deterministic construction

Our goal is to compute, for any given constant $c > 0$, a multiset $J$ of $k$ group elements of $G$ such that $\text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le 1/n + 1/n^c$, where both $k$ and $m$ are $O(\log n)$. For each $J$ observe, by the Cauchy-Schwarz inequality, that

$$ \text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le \sum_{g \in G} \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(g)^2 = \frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J). \quad (1) $$

Our goal can now be restated: it suffices to construct, in deterministic polynomial time, a multiset $J$ of group elements such that the average collision probability satisfies $\frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J) \le 1/n + 1/n^c$.

Consider the random set $J = \{X_1, \dots, X_k\}$ with each $X_i$ a uniformly and independently distributed random variable over $G$. Combined with the proof of Lemma 2.4 (in particular from Equation 17), we observe that for any constant $c > 1$ there are $k$ and $m$, both $O(\log n)$, such that

$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le \frac{1}{n} + \frac{1}{n^c}. \quad (2) $$

Our deterministic algorithm fixes the elements of $J$ in stages. At stage 0 the set $J = J_0 = \{X_1, X_2, \dots, X_k\}$ consists of independent random elements $X_i$ drawn from the group $G$. Suppose at the $j^{th}$ stage, for $j < k$, the set we have is $J = J_j = \{x_1, x_2, \dots, x_j, X_{j+1}, \dots, X_k\}$, where each $x_r$ $(1 \le r \le j)$ is a fixed element of $G$ and the $X_s$ $(j+1 \le s \le k)$ are independent random elements of $G$, such that

$$ \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c. $$

**Remark.**

1. In the above expression, the expectation is over the random elements of $J$.

2. If we can compute in poly($n$) time a choice $x_{j+1}$ for $X_{j+1}$ such that $\mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c$ still holds, then we can compute the desired generating set $J$ in polynomial (in $n$) time.

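This stage-by-stage fixing is the method of conditional expectations. As a toy instantiation (ours, in the spirit of Section 2.2 and Theorem 1.9, not the paper's exact estimator): for the plain cube distribution of a prefix $x_1, \dots, x_j$ with the rest random, one can check that the conditional expectation works out to $\mathbb{E}[\text{Coll}] = (\text{Coll}(\text{prefix}) - 1/n)/2^{k-j} + 1/n$, so greedily minimizing the prefix collision probability at each stage is a valid pessimistic-estimator strategy in this simplified setting:

```python
# Toy greedy derandomization: fix one cube-sequence element per stage,
# each time choosing the group element that minimizes the collision
# probability of the resulting prefix distribution.
def coll(dist):
    return sum(p * p for p in dist)

def extend(table, dist, g):
    """One cube step: distribution of w * g^e with e uniform in {0,1}."""
    new = [p / 2 for p in dist]
    for x, p in enumerate(dist):
        new[table[x][g]] += p / 2
    return new

def greedy_cube_sequence(table, k, identity=0):
    n = len(table)
    dist = [0.0] * n
    dist[identity] = 1.0
    J = []
    for _ in range(k):
        g = min(range(n), key=lambda h: coll(extend(table, dist, h)))
        dist = extend(table, dist, g)
        J.append(g)
    return J, dist

n = 16
table = [[(x + y) % n for y in range(n)] for x in range(n)]  # Z_16
J, dist = greedy_cube_sequence(table, k=4)
```

On $\mathbb{Z}_{16}$ this greedy choice recovers the generators $1, 2, 4, 8$ and the exactly uniform cube distribution.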
Given $J = J_j = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ with $j$ fixed elements and $k-j$ random elements, it is useful to partition the set of sequences $[k]^m$ into subsets $S_{r,\ell}$, where $I \in S_{r,\ell}$ if and only if there are exactly $r$ indices in $I$ from $\{1, \dots, j\}$, and among the remaining $m-r$ indices of $I$ there are exactly $\ell$ distinct indices. We now define a suitable generalization of L-subsequences.

**Definition 2.5** An $(r, \ell)$-normal sequence for $J$ is a sequence $\langle n_1, n_2, \dots, n_r, \dots, n_{r+\ell} \rangle \in [k]^{r+\ell}$ such that the indices $n_s$, $1 \le s \le r$, are in $\{1, 2, \dots, j\}$ and the indices $n_s$, $s > r$, are all distinct and in $\{j+1, \dots, k\}$. I.e. the first $r$ indices (possibly with repetition) are from the fixed part of $J$ and the last $\ell$ are all distinct elements from the random part of $J$.

**Transforming $S_{r,\ell}$ to $(r, \ell)$-normal sequences**

We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform on $G$.

Let $I = \langle i_1, \dots, i_m \rangle \in S_{r,\ell}$ be a sequence. Let $F = \langle i_{f_1}, \dots, i_{f_r} \rangle$ be the subsequence of indices for the fixed elements in $I$. Let $R = \langle i_{s_1}, \dots, i_{s_{m-r}} \rangle$ be the subsequence of indices for the random elements in $I$, and $L = \langle i_{e_1}, \dots, i_{e_\ell} \rangle$ the L-subsequence of $R$. More precisely, notice that $R$ is a

---PAGE_BREAK---

sequence in $\{j+1, \dots, k\}^{m-r}$ and $L$ is the L-subsequence for $R$. The $(r, \ell)$-normal sequence $\hat{I}$ of $I \in S_{r,\ell}$ is the sequence $\langle i_{f_1}, \dots, i_{f_r}, i_{e_1}, \dots, i_{e_\ell} \rangle$.

We recall here that the multiset $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ is defined as before. For ease of notation we denote the list of elements of $J$ by $g_t$, $1 \le t \le k$; i.e. $g_t = x_t$ for $t \le j$ and $g_t = X_t$ for $t > j$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m}$, where the $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random. Then we can write

$$
\begin{aligned}
g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} &= z_0\, g_{i_{f_1}}^{\epsilon_{f_1}} z_1\, g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \cdots z_{r-1}\, g_{i_{f_r}}^{\epsilon_{f_r}} z_r, && \text{where} \\
z_0 z_1 \cdots z_r &= g_{i_{s_1}}^{\epsilon_{s_1}} g_{i_{s_2}}^{\epsilon_{s_2}} \cdots g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}}.
\end{aligned}
$$

By conjugation, we can rewrite the above expression as $g_{i_{f_1}}^{\epsilon_{f_1}} z\, z_1\, g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \cdots g_{i_{f_r}}^{\epsilon_{f_r}} z_r$, where $z = g_{i_{f_1}}^{-\epsilon_{f_1}} z_0\, g_{i_{f_1}}^{\epsilon_{f_1}}$.

We refer to this transformation as moving $g_{i_{f_1}}^{\epsilon_{f_1}}$ to the left. Successively moving the elements $g_{i_{f_1}}^{\epsilon_{f_1}}, g_{i_{f_2}}^{\epsilon_{f_2}}, \dots, g_{i_{f_r}}^{\epsilon_{f_r}}$ to the left, we can write

$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} z'_0 z'_1 \cdots z'_r, $$

where each $z'_t = u_t z_t u_t^{-1}$, and $u_t$ is a product of elements from the fixed element set $\{x_1, \dots, x_j\}$. Notice that each $z_t$ is a product of some consecutive sequence of elements from $\langle g_{i_{s_1}}^{\epsilon_{s_1}}, g_{i_{s_2}}^{\epsilon_{s_2}}, \dots, g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}} \rangle$. If $z_t = \prod_{a=b}^{c} g_{i_{s_a}}^{\epsilon_{s_a}}$ then $z'_t = \prod_{a=b}^{c} u_t g_{i_{s_a}}^{\epsilon_{s_a}} u_t^{-1}$. Thus, the product $z'_0 z'_1 \cdots z'_r$ is of the form

$$ z'_0 z'_1 \cdots z'_r = \prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}, $$

where each $h_{s_a} = y_a g_{i_{s_a}} y_a^{-1}$ for some element $y_a \in G$. In this expression, observe that for distinct indices $a$ and $b$ we may have $i_{s_a} = i_{s_b}$ and $y_a \neq y_b$, and hence, in general, $h_{s_a} \neq h_{s_b}$.

Recall that the L-subsequence $L = \langle i_{e_1}, \dots, i_{e_\ell} \rangle$ is a subsequence of $R = \langle i_{s_1}, \dots, i_{s_{m-r}} \rangle$. Consequently, let $(h_{e_1}, h_{e_2}, \dots, h_{e_\ell})$ be the sequence of all *independent* random elements in the above product $\prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}$ that correspond to the L-subsequence. To this product, we again apply the transformation of moving to the left the elements $h_{e_1}^{\epsilon_{e_1}}, h_{e_2}^{\epsilon_{e_2}}, \dots, h_{e_\ell}^{\epsilon_{e_\ell}}$, in that order. Putting it all together we have

$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} y(\bar{\epsilon}), $$

where $y(\bar{\epsilon})$ is an element of $G$ that depends on $J$, $I$ and $\bar{\epsilon}$, where $\bar{\epsilon}$ consists of all the $\epsilon_a$ for positions $a$ of $I$ outside $F \cup L$. Let $J(I)$ denote the multiset of group elements obtained from $J$ by replacing the subset $\{g_{i_{e_1}}, g_{i_{e_2}}, \dots, g_{i_{e_\ell}}\}$ with $\{h_{e_1}, h_{e_2}, \dots, h_{e_\ell}\}$. It follows from our discussion that $J(I)$ has exactly $j$ fixed elements $x_1, x_2, \dots, x_j$ and $k-j$ uniformly distributed independent random elements. Recall that $\hat{I} = \langle i_{f_1}, i_{f_2}, \dots, i_{f_r}, i_{e_1}, i_{e_2}, \dots, i_{e_\ell} \rangle$ is the $(r, \ell)$-normal sequence for $I$. Analogous to Lemma 2.2, we now compare the probability distributions $R_I^J$ and $\hat{R}_I^{J(I)}$. The proof of the lemma is in the appendix.

---PAGE_BREAK---

**Lemma 2.6** For each $j \le k$ and $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ (where $x_1, \dots, x_j \in G$ are fixed elements and $X_{j+1}, \dots, X_k$ are independent and uniformly distributed in $G$), and for each $I \in [k]^m$, $\mathbb{E}_J[\text{Coll}(R_I^J)] \le \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$.

**Remark 2.7** Here it is important to note that the expectation $\mathbb{E}_J[\text{Coll}(R_I^J)]$ is over the random elements in $J$. On the other hand, the expectation $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ is over the random elements in $J(I)$ (which are conjugates of the random elements in $J$). In the rest of this section, we need to keep this meaning clear when we use $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ for different $I \in [k]^m$.

By averaging the above inequality over all $I$ sequences and using Equation 1, we get

$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]. \quad (3) $$

Now, by Equation 2 and following the proof of Lemma 2.4, when all $k$ elements in $J$ are random we have $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$. Suppose that for any $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ we can compute $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]$ in deterministic polynomial (in $n$) time. Then, given the bound $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$ for $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$, we can clearly fix the $(j+1)^{st}$ element of $J$ by choosing $X_{j+1} := x_{j+1}$ to minimize the expectation $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]$. Also, it follows easily from Equation 3 and the above lemma that $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ implies $\mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \delta$. In particular, when $J$ is completely fixed after $k$ stages, if $\mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ then $\text{Coll}(Q_J) \le \delta$.

**Remark 2.8** In fact, the quantity $\mathbb{E}_{I \in [k]^m}[\text{Coll}(\hat{R}_I^{J(I)})]$ plays the role of a pessimistic estimator for $\mathbb{E}_{I \in [k]^m}[\text{Coll}(R_I^J)]$.

We now proceed to explain the algorithm that fixes $X_{j+1}$. To this end, it is useful to rewrite the estimator as

$$
\begin{align}
\mathbb{E}_J \mathbb{E}_I [\text{Coll}(\hat{R}_I^{J(I)})] &= \frac{1}{k^m} \left[ \sum_{r,\ell} \sum_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})] \right] \\
&= \sum_{r,\ell} \frac{|S_{r,\ell}|}{k^m} \mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})] \tag{4}
\end{align}
$$

For any $r, \ell$ the size of $S_{r,\ell}$ is computable in polynomial time (Lemma 2.9). We include a proof in the appendix.
**Lemma 2.9** For each $r$ and $\ell$, $|S_{r,\ell}|$ can be computed in time polynomial in $n$.
Since $r$ and $\ell$ are $O(\log n)$, it is clear from Equation 4 that it suffices to compute $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ in polynomial time for any given $r$ and $\ell$. We reduce this computation to counting the number of paths in weighted directed acyclic graphs. To make the reduction clear, we simplify the expression $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ as follows.
Let $\bar{u}$ be a sequence of length $r$ over the fixed elements $x_1, x_2, \dots, x_j$. We identify $\bar{u}$ with an element of $[j]^r$. The number of $I$ sequences in $S_{r,\ell}$ that have $\bar{u}$ as the prefix in the $(r, \ell)$-normal sequence $\hat{I}$ is $\frac{|S_{r,\ell}|}{j^r}$. Recall that $R_{\hat{I}}^{J(I)}(g) = \text{Prob}_{\bar{\epsilon}}[g_{i_{f_1}}^{\epsilon_1} \dots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \dots h_{e_\ell}^{\epsilon_{r+\ell}} = g]$. Let $\bar{u} = (g_{i_{f_1}}, \dots, g_{i_{f_r}})$. It is convenient to denote the element $g_{i_{f_1}}^{\epsilon_1} \dots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \dots h_{e_\ell}^{\epsilon_{r+\ell}}$ by $M(\bar{u}, \bar{\epsilon}, \hat{I}, J)$.
Let $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_{r+\ell})$ and $\bar{\epsilon}' = (\epsilon'_1, \dots, \epsilon'_{r+\ell})$ be randomly picked from $\{0, 1\}^{r+\ell}$. Then
$$
\begin{align}
\mathrm{Coll}(R_{\hat{I}}^{J(I)}) &= \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \\
&= \mathrm{Prob}_{\bar{\epsilon}, \bar{\epsilon}'} [M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)]. \tag{5}
\end{align}
$$
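Equation 5 is an instance of the standard identity that $\mathrm{Coll}(P) = \sum_g P(g)^2$ equals the probability that two independent samples from $P$ agree. A quick mechanical check on an arbitrary small distribution (pure Python; the distribution is illustrative):

```python
from fractions import Fraction
from itertools import product

# an arbitrary distribution P on a 3-element set
P = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(1, 6)}

# Coll(P) = sum of squared probabilities
coll = sum(p * p for p in P.values())

# probability that two independent samples from P collide
pair_prob = sum(P[a] * P[b] for a, b in product(P, repeat=2) if a == b)

assert coll == pair_prob  # Coll(P) = Pr[two independent samples agree]
```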
For fixed $\bar{\epsilon}, \bar{\epsilon}'$ and $\bar{u} \in [j]^r$, let $S_{r,\ell}^{\bar{u}}$ be the set of all $I \in S_{r,\ell}$ such that the subsequence of indices of $I$ for the fixed elements $\{x_1, x_2, \dots, x_j\}$ is precisely $\bar{u}$. Notice that $|S_{r,\ell}^{\bar{u}}| = \frac{|S_{r,\ell}|}{j^r}$.
Then we have the following.
$$
\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J \left[ \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \right] = \frac{1}{2^{2(\ell+r)}} \left[ \sum_{\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{\ell+r}} \frac{1}{|S_{r,\ell}|} \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J [\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \tag{6}
$$
where $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ is a 0-1 indicator random variable that takes value 1 when $M(\bar{u},\bar{\epsilon},\hat{I},J) = M(\bar{u},\bar{\epsilon}',\hat{I},J)$ and 0 otherwise. Crucially, we note the following:
**Claim 2.10** For each $I \in S_{r,\ell}^{\bar{u}}$ and for fixed $\bar{\epsilon}, \bar{\epsilon}'$, the random variables $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ are identically distributed.
The claim follows from the fact that for each $I \in S_{r,\ell}^{\bar{u}}$, the fixed part in $\hat{I}$ is $\bar{u}$ and elements in the unfixed part are identically and uniformly distributed in $G$. We simplify the expression in Equation 6 further.
$$
\begin{align}
\frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] &= \frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \frac{|S_{r,\ell}|}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \tag{7} \\
&= \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \tag{8}
\end{align}
$$
where Equation 7 follows from Claim 2.10. Let $p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}')$ be the number of different assignments of $\ell$ random elements in $J$ such that $M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)$. Then it is easy to see that
$$
\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)}] = \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^\ell}, \tag{9}
$$
where the factor $\frac{1}{n^\ell}$ accounts for the fact that $\ell$ unfixed elements of $J$ are picked uniformly and independently at random from the group $G$.
Notice that $2^{r+\ell} \le 2^m = n^{O(1)}$ for $m = O(\log n)$ and $\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{r+\ell}$. Then, combining Equations 4 and 9, it is clear that to compute $\mathbb{E}_J \mathbb{E}_I[\text{Coll}(R_{\hat{I}}^{J(I)})]$ in polynomial time, it suffices to compute $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ (for fixed $r, \ell, \bar{\epsilon}, \bar{\epsilon}'$) in polynomial time. We now turn to this problem.
## 2.3 Reduction to counting paths in weighted DAGs
We will interpret the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ as the sum of weights of paths between a source vertex $s$ and a sink vertex $t$ in a layered weighted directed acyclic graph $H = (V, E)$. The vertex set is $V = (G \times G \times [r+\ell+1]) \cup \{s,t\}$, where $s = (e, e, 0)$ and $e$ is the identity element of $G$. The source vertex $s$ is at the 0-th layer and the sink $t$ is at the $(r + \ell + 2)$-th layer. Let $S = \{x_1, x_2, \dots, x_j\}$. The edge set is the union $E = E_s \cup E_S \cup E_{G\setminus S} \cup E_t$, where
$$
\begin{align*}
E_s &= \{(s, (g, h, 1)) \mid g, h \in G\} \\
E_S &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in S, 1 \le t \le r\}, \\
E_{G\setminus S} &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in G, r < t \le r+\ell\}, \text{ and} \\
E_t &= \{((g, g, r+\ell+1), t) \mid g \in G\}.
\end{align*}
$$
All edges in $E_s$ and $E_t$ have weight 1. Each edge in $E_S$ has weight $\frac{1}{j}$, and each edge in $E_{G\setminus S}$ has weight $\frac{1}{n}$.
Each s-to-t directed path in the graph $H$ corresponds to an $(r, \ell)$-normal sequence $\hat{I}$ (corresponding to some $I \in S_{r,\ell}$), along with an assignment of group elements to the $\ell$ distinct independent random elements that occur in it. For a random $I \in S_{r,\ell}$, the group element corresponding to each of the $r$ “fixed” positions is from $\{x_1, x_2, \dots, x_j\}$ with probability $1/j$ each. Hence each edge in $E_S$ has weight $1/j$. Similarly, the $\ell$ distinct indices in $I$ (from $\{X_{j+1}, \dots, X_k\}$) are assigned group elements independently and uniformly at random. Hence the edges in $E_{G\setminus S}$ have weight $\frac{1}{n}$.
The weight of an s-to-t path is the product of the weights of the edges on the path. The graph depends on $j, \bar{\epsilon},$ and $\bar{\epsilon}'$; for fixed $r, \ell$, we denote it by $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$. The following claim is immediate from Equation 9.
**Claim 2.11** *The sum of the weights of all $s$-to-$t$ paths in $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ is $\left[\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}')\right] \frac{1}{n^{\ell}}$.*
In the following lemma we observe that $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in polynomial time. The proof is easy.
**Lemma 2.12** *For each $j, \bar{\epsilon}, \bar{\epsilon}', r, \ell$, the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in time polynomial in $n$.*
**Proof:** The graph $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ has $n^2$ vertices in each intermediate layer. For each $1 \le t \le r+\ell+2$, we define a matrix $M_{t-1}$ whose rows are indexed by the vertices of layer $t-1$ and whose columns are indexed by the vertices of layer $t$; the $(a,b)^{th}$ entry of $M_{t-1}$ is the weight of the edge $(a,b)$ in $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$, and 0 if there is no such edge. The product $M = \prod_{t=0}^{r+\ell+1} M_t$ is a scalar which is precisely $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$. As the product of the matrices $M_t$ can be computed in time polynomial in $n$, the lemma follows. $\square$
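The proof of Lemma 2.12 is the standard "count weighted paths by multiplying layer-transition matrices" argument. A small illustration on an arbitrary layered DAG (pure Python; the matrices here are illustrative, not the actual $M_t$ of the lemma):

```python
def sum_of_path_weights(layers):
    """layers[t][a][b] is the weight of the edge from vertex a of layer t
    to vertex b of layer t+1; the total weight of all source-to-sink
    paths (weight of a path = product of its edge weights) is obtained
    by multiplying the transition matrices left to right."""
    vec = [1.0]  # the single source vertex
    for M in layers:
        vec = [sum(vec[a] * M[a][b] for a in range(len(vec)))
               for b in range(len(M[0]))]
    assert len(vec) == 1  # the single sink vertex
    return vec[0]

# two s-to-t paths, with weights 0.5*0.25 and 0.5*0.75
layers = [[[0.5, 0.5]],        # s -> two middle vertices
          [[0.25], [0.75]]]    # two middle vertices -> t
assert abs(sum_of_path_weights(layers) - 0.5) < 1e-12
```

Each matrix-vector product takes time polynomial in the layer width, which is the point of the lemma.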
To summarize, we describe the $(j+1)^{st}$ stage of the algorithm, where a group element $x_{j+1}$ is chosen for $X_{j+1}$. The algorithm cycles through all $n$ choices for $x_{j+1}$. For each choice of $x_{j+1}$, and for each $\bar{\epsilon}, \bar{\epsilon}'$ and $r, \ell$, the graph $H_{r,\ell}(j+1, \bar{\epsilon}, \bar{\epsilon}')$ is constructed. Using Lemma 2.12, the expression in Equation 4 is computed for each choice of $x_{j+1}$, and the algorithm fixes the choice that minimizes this expression. This completes the proof of Theorem 1.5.
By Theorem 1.2 we can bound the absolute value of the second largest eigenvalue of the matrix for Cay($G$, $J$). Theorem 1.5 yields that the distribution resulting from an $O(\log n)$ step random walk on Cay($G$, $J$) is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_2$ norm. Theorem 1.2 is stated in terms of the $L_1$ norm. However, since $\|v\|_1 \le n\|v\|_\infty \le n\|v\|_2$, Theorem 1.5 guarantees that the resulting distribution is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in the $L_1$ norm as well. Choose $\tau = m = c' \log n$ and $\epsilon = \frac{1}{n^c}$ in Theorem 1.2, where $c, c'$ are fixed from Theorem 1.5. Then $\lambda_{\max} \le \frac{1}{2^{O(c/c')}} < 1$. This completes the proof of Corollary 1.6. $\square$
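The norm comparison used here, $\|v\|_1 \le n\|v\|_\infty \le n\|v\|_2$, is elementary (each coordinate is at most the maximum, and the maximum is at most the Euclidean length). A quick numeric sanity check (pure Python; the vector is arbitrary):

```python
import random

random.seed(1)
n = 8
v = [random.uniform(-1, 1) for _ in range(n)]

l1 = sum(abs(x) for x in v)
linf = max(abs(x) for x in v)
l2 = sum(x * x for x in v) ** 0.5

# ||v||_1 <= n * ||v||_inf <= n * ||v||_2
assert l1 <= n * linf <= n * l2
```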
# 3 Undirected Expanding Cayley Graphs
In this section, we show a deterministic polynomial-time construction of a generating set $J$ for any group $G$ (given by its multiplication table) such that a lazy random walk on the *undirected* Cayley graph Cay$(G, J \cup J^{-1})$ mixes well. As a consequence, we get Cayley graphs which have a constant spectral gap (an alternative proof of a result in [10]). Our construction is based on a simple adaptation of the techniques used in Section 2.
The key point in the undirected case is that we consider a generalization of Erdös-Rényi sequences: the distribution on $G$ defined by $g_1^{\epsilon_1} \dots g_k^{\epsilon_k}$ where $\epsilon_i \in_R \{-1, 0, 1\}$. The following lemma is an easy generalization of the Erdös-Rényi result (Theorem 1.4). A similar theorem appears in [3, Theorem 14]. Our main focus in the current paper is the derandomized construction of Cayley expanders; to keep the paper self-contained, we include a short proof of Lemma 3.1 in the appendix.
**Lemma 3.1** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the following distribution: $D_J(x) = \text{Pr}_{\{\epsilon_i \in_R \{-1, 0, 1\} : 1 \le i \le k\}} [g_1^{\epsilon_1} \cdots g_k^{\epsilon_k} = x]$ for $x \in G$, and let $U$ be the uniform distribution on $G$. Then $\mathbb{E}_J[\sum_{x \in G} (D_J(x))^2] = \mathbb{E}_J[\text{Coll}(D_J)] \le (\frac{8}{9})^k + \frac{1}{n}$.
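The bound of Lemma 3.1 can be checked by exact enumeration on a small group, here the cyclic group $\mathbb{Z}_5$ with $k = 3$ (written additively, so the product $g_1^{\epsilon_1}g_2^{\epsilon_2}g_3^{\epsilon_3}$ becomes $\epsilon_1 g_1 + \epsilon_2 g_2 + \epsilon_3 g_3 \bmod 5$); a brute-force sanity check, not part of the proof:

```python
from fractions import Fraction
from itertools import product

n, k = 5, 3
total = Fraction(0)
for J in product(range(n), repeat=k):          # all n^k choices of J
    # D_J(x) = Pr over eps in {-1,0,1}^k that eps . J = x (mod n)
    counts = {}
    for eps in product((-1, 0, 1), repeat=k):
        x = sum(e * g for e, g in zip(eps, J)) % n
        counts[x] = counts.get(x, 0) + 1
    coll = sum(Fraction(c, 3 ** k) ** 2 for c in counts.values())
    total += coll
avg = total / n ** k                            # E_J[Coll(D_J)]

assert avg <= Fraction(8, 9) ** k + Fraction(1, n)
```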
## Deterministic construction
First, we note that analogues of Lemmas 2.2, 2.3, and 2.4 hold in the undirected case too. In particular, when the elements of $J$ are picked uniformly and independently from $G$, Lemma 3.1 gives $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} (R_{L(I)}^J(g))^2] \le (\frac{8}{9})^\ell + \frac{1}{n}$, where $\ell$ is the length of the L-subsequence $L(I)$ of $I$. Now we state Lemma 3.2 below, which is a restatement of Lemma 2.4 for the undirected case; the proof is essentially the same as that of Lemma 2.4. As before, we again consider the probability that an $I$ sequence of length $m$ does not have an L-subsequence of length $\ell$, and we fix $\ell, m$ to $O(\log n)$ appropriately.
**Lemma 3.2** Let $Q_J(g) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I(g)$. Then $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \le 1/n + 2(\frac{8}{9})^{\Theta(m)}$.
Building on this, we can extend the results of Section 2.2 to the undirected case in a straightforward manner. In particular, we can use essentially the same algorithm as described in Lemma 2.12 to compute the quantity in Equation 5 in polynomial time in the undirected setting as well. The only difference we need to incorporate is that now $\bar{\epsilon}, \bar{\epsilon}' \in \{-1, 0, 1\}^{r+\ell}$. This essentially completes the proof of Theorem 1.7; we do not repeat all the details here.
Finally, we derive Corollary 1.8. The normalized adjacency matrix of the undirected Cayley graph (corresponding to the lazy walk we consider) is given by $A = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$, where $P_J$ and $P_{J^{-1}}$ are the permutation matrices defined by the sets $J$ and $J^{-1}$. As in the proof of Corollary 1.6, we bound the distance of the resulting distribution from the uniform distribution in the $L_1$ norm. Let $m = c' \log n$ be suitably fixed from the analysis, with $|A^m \bar{v} - \bar{u}|_1 \le \frac{1}{n^c}$. Then by Theorem 1.1, the spectral gap satisfies $1-|\lambda_1| \ge \frac{c}{c'}$. Hence the Cayley graph is a spectral expander. It follows easily that the standard undirected Cayley graph with adjacency matrix $\frac{1}{2k}(P_J + P_{J^{-1}})$ is also a spectral expander.
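As an illustration of the lazy-walk matrix $A = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$, here is a sketch for the cyclic group $\mathbb{Z}_7$ with an ad hoc small generating sequence (pure Python; the choice of $J$ is illustrative, not produced by the algorithm of this paper):

```python
# lazy random walk on the undirected Cayley graph of Z_7 (additive)
n = 7
J = [1, 2, 4]        # ad hoc generators, illustrative only
k = len(J)

# A = (1/3) I + (1/(3k)) (P_J + P_{J^{-1}})
A = [[0.0] * n for _ in range(n)]
for a in range(n):
    A[a][a] += 1.0 / 3.0
    for g in J:
        A[a][(a + g) % n] += 1.0 / (3.0 * k)   # step by a generator
        A[a][(a - g) % n] += 1.0 / (3.0 * k)   # step by an inverse

# run the walk from a point mass and measure L1 distance to uniform
dist = [1.0] + [0.0] * (n - 1)
for _ in range(40):
    dist = [sum(dist[a] * A[a][b] for a in range(n)) for b in range(n)]
l1 = sum(abs(p - 1.0 / n) for p in dist)
assert l1 < 1e-6   # the lazy walk has mixed to near-uniform
```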
# 4 Deterministic construction of Erdös-Rényi sequences
In this section, we prove Theorem 1.9. We use the method of conditional expectations as follows. From Theorem 1.4, we know that $E_J\|D_J - U\|_2^2 = \frac{1}{2^k}(1-\frac{1}{n})$. Therefore there exists a setting of $J$, say $J = \langle x_1, \dots, x_k \rangle$, such that $\|D_J - U\|_2^2 \le \frac{1}{2^k}(1-\frac{1}{n})$. We find such a setting of $J$ by fixing its elements one by one. Let $\delta = \frac{1}{n^c}$, $c > 1$, be the required closeness parameter. Thus we need $k$ such that $\frac{1}{2^k} \le \delta$; it suffices to take $k > c \log n$. We denote the expression $X_{i_1}^{\epsilon_1} \dots X_{i_t}^{\epsilon_t}$ by $\bar{X}^{\bar\epsilon}$ when the length $t$ of the sequence is clear from the context.
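The identity $\mathbb{E}_J\|D_J - U\|_2^2 = \frac{1}{2^k}(1-\frac{1}{n})$ from Theorem 1.4 can be checked by exhaustive enumeration on a small cyclic group (additive notation for $\mathbb{Z}_5$ with $k = 3$; a sanity check, not part of the argument):

```python
from fractions import Fraction
from itertools import product

n, k = 5, 3
u = Fraction(1, n)
total = Fraction(0)
for J in product(range(n), repeat=k):          # all n^k choices of J
    counts = {}
    for eps in product((0, 1), repeat=k):      # subset products in Z_n
        x = sum(e * g for e, g in zip(eps, J)) % n
        counts[x] = counts.get(x, 0) + 1
    dist2 = sum((Fraction(counts.get(x, 0), 2 ** k) - u) ** 2
                for x in range(n))
    total += dist2
avg = total / n ** k                           # E_J ||D_J - U||_2^2

assert avg == Fraction(1, 2 ** k) * (1 - u)
```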
Suppose that after the $i$th step, $x_1, \dots, x_i$ are fixed and $X_{i+1}, \dots, X_k$ remain to be picked. At this stage, by our choice of $x_1, \dots, x_i$, we have $\mathbb{E}_{(X_{i+1},\dots,X_k)}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$. Now we cycle through all the group elements for $X_{i+1}$ and fix $X_{i+1} = x_{i+1}$ such that $\mathbb{E}_{(X_{i+2},\dots,X_k)}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_{i+1}=x_{i+1}] \le \frac{1}{2^k}(1-\frac{1}{n})$. Such an $x_{i+1}$ always exists by a standard averaging argument. In the next theorem, we show that the conditional expectations are efficiently computable at every stage; Theorem 1.9 is an immediate corollary.
Assume that we have picked $x_1, \dots, x_i$ from $G$, and $X_{i+1}, \dots, X_k$ are still to be picked from $G$. Let the choice of $x_1, \dots, x_i$ be such that $\mathbb{E}_{(X_{i+1},\dots,X_k)}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$. For $x \in G$ and $J = \langle X_1, \dots, X_k \rangle$, let
$$Q_J(x) = \mathrm{Pr}_{\bar{\epsilon} \in \{0,1\}^k} [\bar{X}^{\bar{\epsilon}} = x]$$
When $J$ is partly fixed,
$$
\begin{align*}
\hat{Q}_J(x) &= \mathrm{Pr}_{\bar{\epsilon}_1 \in \{0,1\}^i, \bar{\epsilon}_2 \in \{0,1\}^{k-i}} [\bar{x}^{\bar{\epsilon}_1} \cdot \bar{X}^{\bar{\epsilon}_2} = x] \\
&= \sum_{y \in G} \mathrm{Pr}_{\bar{\epsilon}_1} [\bar{x}^{\bar{\epsilon}_1} = y] \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \hat{Q}_{\bar{X}}(y^{-1}x)
\end{align*}
$$
where $\mu(y) = \mathrm{Pr}_{\bar{\epsilon}_1}[\bar{x}^{\bar{\epsilon}_1} = y]$. Then $\mathbb{E}_J[\mathrm{Coll}(D_J)] = \mathbb{E}_J\|D_J - U\|_2^2 + \frac{1}{n}$, and $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J[\|D_J - U\|_2^2 \mid X_1 = x_1, X_2 = x_2, \dots, X_i = x_i] + \frac{1}{n}$.
The next theorem completes the proof.
**Theorem 4.1** For any finite group $G$ of order $n$ given as a multiplication table, $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)]$ can be computed in time polynomial in $n$.
**Proof:**
$$ \mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x). \quad (10) $$
Now we compute $\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x)$.
$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) &= \mathbb{E}_J \sum_{x \in G} \left( \sum_{y \in G} \mu(y) \hat{Q}_{\bar{X}}(y^{-1}x) \right) \left( \sum_{z \in G} \mu(z) \hat{Q}_{\bar{X}}(z^{-1}x) \right) \\
&= \sum_{y,z \in G} \mu(y)\mu(z) \mathbb{E}_J \sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)]. \tag{11}
\end{align}
$$
Now,
$$
\begin{align}
\sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)] &= \sum_{x \in G} \mathrm{Pr}_{\bar{\epsilon}}[\bar{X}^{\bar{\epsilon}} = y^{-1}x] \mathrm{Pr}_{\bar{\epsilon}'}[\bar{X}^{\bar{\epsilon}'} = z^{-1}x] \\
&= \frac{1}{2^{2k}} \sum_{x, \bar{\epsilon}, \bar{\epsilon}'} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \\
&= \frac{1}{2^{2k}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \right) \tag{12}
\end{align}
$$
where $\chi_a(\bar{\epsilon})$ is an indicator variable which is 1 if $\bar{X}^{\bar{\epsilon}} = a$ and 0 otherwise. If $\bar{\epsilon} = \bar{\epsilon}'$ then exactly one $x \in G$ satisfies $\bar{X}^{\bar{\epsilon}} = y^{-1}x$, so $\sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = \delta_{y,z}$, where $\delta_{a,b} = 1$ whenever $a=b$ and 0 otherwise.
For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = 1$ only if $y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'} = x$. Therefore for $\bar{\epsilon} \neq \bar{\epsilon}'$, we have
$$ \frac{1}{2^{2k}} \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} \delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}} (1-\delta_{\bar{\epsilon},\bar{\epsilon}'}). $$
Putting this in Equation 12, we get
$$ \frac{1}{2^{2k}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon}\neq\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') \right) = \frac{1}{2^k}\delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'}\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'}). $$
Therefore we get
$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) &= \frac{1}{2^k} \delta_{y,z} + \mathbb{E}_J [\mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})]] \\
&= \frac{1}{2^k} \delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathbb{E}_J [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}]] \\
&= \frac{1}{2^k} \delta_{y,z} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathrm{Pr}_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'})] \tag{13}
\end{align}
$$
**Claim 4.2** For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \frac{1}{n}$.
**Proof:** Let $j$ be the smallest index (from the left) such that $\epsilon_j \neq \epsilon'_j$. Let $X_{i+1}^{\epsilon_1} \cdots X_{i+j-1}^{\epsilon_{j-1}} = a$, and let $X_{i+j+1}^{\epsilon_{j+1}} \cdots X_k^{\epsilon_{k-i}} = b$ and $X_{i+j+1}^{\epsilon'_{j+1}} \cdots X_k^{\epsilon'_{k-i}} = b'$. Also, without loss of generality, let $\epsilon_j = 1$ and $\epsilon'_j = 0$. Then we have $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \Pr_{X_{i+j}}(yaX_{i+j}b = zab') = \frac{1}{n}$. $\square$
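The heart of Claim 4.2 is that for fixed $y, z, a, b, b'$ the equation $yaXb = zab'$ has exactly one solution $X$ in the group, so a uniform $X_{i+j}$ satisfies it with probability $1/n$. This can be checked mechanically in a small nonabelian group, here $S_3$ represented as permutation tuples (a sanity check under that representation, not part of the proof):

```python
from itertools import permutations, product

G = list(permutations(range(3)))   # the symmetric group S_3, n = 6
def mul(p, q):                     # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

for y, z, a, b, bp in product(G, repeat=5):
    rhs = mul(mul(z, a), bp)                          # z a b'
    sols = [X for X in G if mul(mul(mul(y, a), X), b) == rhs]
    assert len(sols) == 1          # unique X, hence probability 1/n
```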
Thus Equation 13 becomes
$$ \mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) = \frac{1}{2^k} \delta_{y,z} + \frac{2^{2k} - 2^k}{n2^{2k}} $$
Putting this in Equation 11, we get
$$ \mathbb{E}_J[\text{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) = \sum_{y,z \in G} \frac{1}{2^{2k}} \left[ 2^k \cdot \delta_{y,z} + (2^{2k} - 2^k) \cdot \frac{1}{n} \right] \mu(y)\mu(z) \quad (14) $$
Clearly, for any $y \in G$, $\mu(y)$ can be computed in time $O(2^i)$, which is polynomial in $n$ since $i \le k = O(\log n)$. It is then clear from Equation 14 that $\mathbb{E}_J[\text{Coll}(\hat{Q}_J)]$ is computable in time polynomial in $n$. $\square\square$
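The $O(2^i)$ computation of $\mu$ is a direct enumeration of sign vectors of the fixed prefix. A sketch for $\mathbb{Z}_n$ (additive notation; the prefix values are illustrative):

```python
from fractions import Fraction
from itertools import product

n = 7
prefix = [3, 5, 1]      # the fixed elements x_1, ..., x_i (illustrative)
i = len(prefix)

# mu(y) = Pr over eps in {0,1}^i that x_1^{e_1} ... x_i^{e_i} = y
mu = {y: Fraction(0) for y in range(n)}
for eps in product((0, 1), repeat=i):
    y = sum(e * x for e, x in zip(eps, prefix)) % n
    mu[y] += Fraction(1, 2 ** i)

assert sum(mu.values()) == 1    # mu is a probability distribution
```

With $\mu$ in hand, evaluating the double sum in Equation 14 over $y, z \in G$ takes $O(n^2)$ arithmetic operations.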
# 5 Summary
Constructing explicit Cayley expanders on finite groups is an important problem. In this paper, we give a simple deterministic construction of Cayley expanders that have a constant spectral gap. Our method is elementary and completely different from the existing techniques [10].
The main idea behind our work is a deterministic polynomial-time construction of a cube generating sequence $J$ of size $O(\log|G|)$ such that $\text{Cay}(G, J)$ has a rapid mixing property. In the randomized setting, Pak [7] used similar ideas to construct Cayley expanders. In particular, we also give a derandomization of a well-known result of Erdös and Rényi [2].
# References
[1] Noga Alon and Yuval Roichman. Random Cayley graphs and expanders. *Random Struct. Algorithms*, 5(2):271–285, 1994.
[2] Paul Erdös and Alfréd Rényi. Probabilistic methods in group theory. *Journal d'Analyse Mathématique*, 14(1):127–138, 1965.
[3] Martin Hildebrand. A survey of results on random random walks on finite groups. *Probability Surveys*, 2:33–63, 2005.
[4] Shlomo Hoory, Nati Linial, and Avi Wigderson. Expander graphs and their applications. *Bull. AMS*, 43(4):439–561, 2006.
[5] Alex Lubotzky, R. Phillips, and Peter Sarnak. Ramanujan graphs. *Combinatorica*, 8(3):261–277, 1988.
[6] Ravi Montenegro and Prasad Tetali. Mathematical aspects of mixing times in Markov chains. *Foundations and Trends in Theoretical Computer Science*, 1(3), 2005.
[7] Igor Pak. Random Cayley graphs with $O(\log|G|)$ generators are expanders. In *Proceedings of the 7th Annual European Symposium on Algorithms*, ESA '99, pages 521–526. Springer-Verlag, 1999.
[8] Dana Randall. Rapidly mixing Markov chains with applications in computer science and physics. *Computing in Science and Engineering*, 8(2):30–41, 2006.
[9] Omer Reingold. Undirected connectivity in log-space. *J. ACM*, 55(4), 2008.
[10] Avi Wigderson and David Xiao. Derandomizing the Ahlswede-Winter matrix-valued Chernoff bound using pessimistic estimators, and applications. *Theory of Computing*, 4(1):53–76, 2008.
# Appendix
We include a proof of Lemma 2.2.
## Proof of Lemma 2.2
**Proof:** We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform on $G$.
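This simple fact holds because, for each fixed $x$, the map $y \mapsto xyx^{-1}$ is a bijection of $G$, so it permutes the uniform distribution. A mechanical check in the nonabelian group $S_3$, as permutation tuples (a sanity check, not part of the proof):

```python
from itertools import permutations

G = list(permutations(range(3)))   # the symmetric group S_3
def mul(p, q):                     # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))
def inv(p):                        # inverse permutation
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

for x in G:
    conjugates = [mul(mul(x, y), inv(x)) for y in G]
    # every element of G is hit exactly once: conjugation by x is a bijection
    assert sorted(conjugates) == sorted(G)
```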
Let $I = \langle i_1, \dots, i_m \rangle$, and $L = \langle i_{r_1}, \dots, i_{r_\ell} \rangle$ be the corresponding L-subsequence (clearly, $r_1 = 1$). Let $J = \langle g_1, g_2, \dots, g_k \rangle$ be uniform and independent random elements from $G$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m}$ where $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random. Then we can write
$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \dots x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_\ell,$$
where, by definition of L-subsequence, notice that $x_j$ is a product of elements from $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_{j-1}}}\}$ for each $j$. By conjugation, we can rewrite the above expression as
$$g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \dots h^{\epsilon_{r_\ell}} x_{\ell-1} x_\ell, \text{ where}$$
$$h^{\epsilon_{r_\ell}} = x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_{\ell-1}^{-1}.$$
We refer to this transformation as moving $x_{\ell-1}$ to the right. Successively applying this transformation to $x_{\ell-2}, x_{\ell-3}, \dots, x_1$ we can write
$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = h_{i_{r_1}}^{\epsilon_{r_1}} h_{i_{r_2}}^{\epsilon_{r_2}} \dots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_1 x_2 \dots x_{\ell-1} x_\ell,$$
where each $h_{i_{r_j}}$ is a conjugate $z_j g_{i_{r_j}} z_j^{-1}$. Crucially, notice that the group element $z_j$ is a product of elements from $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_{j-1}}}\}$ for each $j$. As a consequence of this and the fact that $g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_\ell}}$ are all independent uniformly distributed elements of $G$, it follows that $h_{i_{r_1}}, h_{i_{r_2}}, \dots, h_{i_{r_\ell}}$ are all independent uniformly distributed elements of $G$. Let $J'$ denote the set of $k$ group elements obtained from $J$ by replacing the subset $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_\ell}}\}$ with $\{h_{i_{r_1}}, h_{i_{r_2}}, \dots, h_{i_{r_\ell}}\}$. Clearly, $J'$ is a set of $k$ independent, uniformly distributed random group elements of $G$.
Thus, we have
$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = h_{i_{r_1}}^{\epsilon_{r_1}} \dots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} x(\bar{\epsilon}),$$
where $x(\bar{\epsilon}) = x_1 x_2 \dots x_\ell$ is an element of $G$ that depends on $J, I$ and $\bar{\epsilon}$, where $\bar{\epsilon}$ consists of all the $\epsilon_j$ for $j \in I \setminus L$. Hence, for each $g \in G$, observe that we can write
$$
\begin{align*}
R_I^J(g) &= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{i_j}^{\epsilon_j} = g \right] \\
&= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} [h_{i_{r_1}}^{\epsilon_{r_1}} \cdots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} = g x(\bar{\epsilon})^{-1}] \\
&= E_{\bar{\epsilon}}[R_{L(I)}^{J'}(gx(\bar{\epsilon})^{-1})].
\end{align*}
$$
Therefore we have the following:
$$
\begin{align*}
\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \\
&= \mathbb{E}_J\left[\sum_g (\mathbb{E}_{\bar{\epsilon}} R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \\
&\le \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}(R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \tag{15} \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right]\right] \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right]\right] \\
&= \mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right] \\
&= \mathbb{E}_J[\mathrm{Coll}(R_{L(I)}^J)] \le \frac{1}{n} + \delta \tag{16}
\end{align*}
$$
where the inequality in 15 follows from the Cauchy-Schwarz inequality and the last step follows from the assumption of the lemma.
□□
We use a simple counting argument to prove Lemma 2.3. A similar lemma appears in [7].
**Proof of Lemma 2.3**
**Proof:** Consider the event that a sequence $X$ of length $m$ does not have an L-subsequence of length $\ell$. Then it has at most $\ell - 1$ distinct elements, which can be chosen in at most $\binom{k}{\ell-1}$ ways, and the length-$m$ sequence can be formed from them in at most $(\ell-1)^m$ ways. Therefore
$$
\begin{align*}
\Pr[X \text{ has L-subsequence of length } < \ell] & \leq \frac{\binom{k}{\ell-1} (\ell-1)^m}{k^m} \\
& \leq \left(\frac{ke}{\ell-1}\right)^{\ell-1} \cdot \left(\frac{\ell-1}{k}\right)^m \\
& = e^{\ell-1} \left(\frac{\ell-1}{k}\right)^{m-\ell+1} \\
& = \frac{e^{\ell-1}}{a^{m-(k/a)}} = \frac{(ae)^{k/a}}{a^m}. \tag*{$\square\square$}
\end{align*}
$$
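The first counting bound above can be compared against an exact count for small parameters; here $k = 4$, $m = 6$, $\ell = 3$, so "bad" sequences are those using at most $\ell - 1 = 2$ distinct values (exhaustive enumeration; a sanity check only):

```python
from math import comb
from itertools import product

k, m, ell = 4, 6, 3
# count length-m sequences over [k] with at most ell-1 distinct values
bad = sum(1 for seq in product(range(k), repeat=m)
          if len(set(seq)) <= ell - 1)
exact = bad / k ** m
bound = comb(k, ell - 1) * (ell - 1) ** m / k ** m

assert exact <= bound
```

The bound overcounts slightly (sequences using fewer than $\ell - 1$ distinct values are counted for several supports), which is harmless for the argument.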
Next we prove Lemma 2.4.
**Proof of Lemma 2.4**
**Proof:**
We call $I \in [k]^m$ good if it has an L-subsequence of length at least $\ell$, else we call it bad.
$$
\begin{align*}
\mathbb{E}_J[\mathrm{Coll}(Q_J)] &= \mathbb{E}_J\left[\sum_{g \in G} Q_J^2(g)\right] \\
&= \mathbb{E}_J\left[\sum_{g \in G} (\mathbb{E}_I[R_I(g)])^2\right] \\
&\leq \mathbb{E}_J\left[\sum_{g \in G} \mathbb{E}_I[R_I^2(g)]\right] \quad \text{by the Cauchy-Schwarz inequality} \tag{17} \\
&= \mathbb{E}_I[\mathbb{E}_J[\mathrm{Coll}(R_I)]] \\
&\leq \frac{1}{k^m} \mathbb{E}_J\left[\sum_{\substack{I \in [k]^m \\ I \text{ is good}}} \sum_{g \in G} (R_I^J(g))^2 + \sum_{\substack{I \in [k]^m \\ I \text{ is bad}}} 1\right] \\
&\leq \mathrm{Pr}_I[I \text{ is good}] \left(\frac{1}{n} + \frac{1}{2^\ell}\right) + \mathrm{Pr}_I[I \text{ is bad}] \tag{18}
\end{align*}
$$
|
| 531 |
+
|
| 532 |
+
Here the last step follows from Lemma 2.2 and Theorem 1.4. Now we fix $m$ from Lemma 2.3 appropriately to $O(\log n)$ such that $\mathrm{Pr}_I[I \text{ is bad}] \le \frac{1}{2^m}$ and choose $\ell = \Theta(m)$. Hence we get that $\mathbb{E}_J[\mathrm{Coll}(Q_J)] \le \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$. $\square\square$
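The averaging step (17) is the convexity inequality $\mathrm{Coll}(\mathbb{E}_I[R_I^J]) \le \mathbb{E}_I[\mathrm{Coll}(R_I^J)]$. A small numerical sanity check of this inequality on random distributions (illustrative only; the distributions here stand in for the $R_I^J$ and are not the paper's construction):

```python
import random

def coll(p):
    """Collision probability of a distribution given as a list of masses."""
    return sum(q * q for q in p)

def random_dist(n, rng):
    """A random probability distribution on n points."""
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(0)
n, trials = 8, 50
for _ in range(trials):
    dists = [random_dist(n, rng) for _ in range(5)]  # stand-ins for the R_I's
    # The averaged distribution, playing the role of Q = E_I[R_I]
    avg = [sum(d[g] for d in dists) / len(dists) for g in range(n)]
    # Coll of the average never exceeds the average of the Colls
    assert coll(avg) <= sum(coll(d) for d in dists) / len(dists) + 1e-12
```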
Next, we give the proof of Lemma 2.6.

# 6 Proof of Lemma 2.6

**Proof:** For each $g \in G$, we can write

$$
\begin{align*}
R_I^J(g) &= \mathrm{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{i_j}^{\epsilon_j} = g \right] = \mathrm{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} = gy(\bar{\epsilon})^{-1} \right] \\
&= \mathbb{E}_{\bar{\epsilon}}[R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1})].
\end{align*}
$$

Therefore we have the following:
$$
\begin{align}
\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \nonumber \\
&= \mathbb{E}_J\left[\sum_g (\mathbb{E}_{\bar{\epsilon}}[R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1})])^2\right] \nonumber \\
&\leq \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}[(R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2]\right] \tag{19} \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right]\right] \nonumber \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{\hat{I}}^{J(I)}(h))^2\right]\right] \nonumber \\
&= \mathbb{E}_J[\mathrm{Coll}(R_{\hat{I}}^{J(I)})], \nonumber
\end{align}
$$

where the inequality (19) follows from the Cauchy–Schwarz inequality. $\square$

We include a short proof of Lemma 2.9.
---PAGE_BREAK---

**Proof of Lemma 2.9**

**Proof:** There are $\binom{m}{r}$ ways of picking $r$ positions for the fixed elements in $I$. Each such index can be chosen in $j$ ways, giving $j^r$ choices in all. From the $(k-j)$ random elements of $J$, $\ell$ distinct elements can be picked in $\binom{k-j}{\ell}$ ways. Let $n_{m-r,\ell}$ be the number of sequences of length $m-r$ that can be constructed out of $\ell$ distinct integers such that every integer appears at least once. Clearly, $|S_{r,\ell}| = \binom{m}{r} j^r \binom{k-j}{\ell} n_{m-r,\ell}$. It is well known that $n_{m-r,\ell}$ is the coefficient of $x^{m-r}/(m-r)!$ in $(e^x - 1)^\ell$. Thus, by the binomial theorem, $n_{m-r,\ell} = \sum_{i=0}^\ell (-1)^i \binom{\ell}{i} (\ell-i)^{m-r}$. Since $m = O(\log n)$ and $\ell \le m$, $n_{m-r,\ell}$ can be computed in time polynomial in $n$. $\square$
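The inclusion–exclusion formula for $n_{m-r,\ell}$ can be verified against brute-force enumeration for small parameters; a sketch (function names are illustrative):

```python
from itertools import product
from math import comb

def n_surj(length: int, ell: int) -> int:
    """Number of sequences of the given length over ell symbols in which
    every symbol appears at least once (inclusion-exclusion formula)."""
    return sum((-1) ** i * comb(ell, i) * (ell - i) ** length
               for i in range(ell + 1))

def n_surj_brute(length: int, ell: int) -> int:
    """Same count by direct enumeration of all sequences."""
    return sum(1 for seq in product(range(ell), repeat=length)
               if len(set(seq)) == ell)

for length in range(1, 7):
    for ell in range(1, length + 1):
        assert n_surj(length, ell) == n_surj_brute(length, ell)
```

Since the formula is a sum of at most $\ell + 1 \le m + 1$ terms of integers with polynomially many bits, the polynomial-time claim in the proof is immediate.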
Next, we give a proof of Lemma 3.1.

**Proof of Lemma 3.1**

**Proof:** The proof closely follows the proof of Erdős–Rényi for the case $\bar{\epsilon} \in \{0,1\}^k$. We briefly sketch the argument below for the sake of completeness.

We denote the expression $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$ by $\bar{g}^{\bar{\epsilon}}$. For a given $J$, $\chi_x(\bar{\epsilon}) = 1$ if $\bar{g}^{\bar{\epsilon}} = x$ and $0$ otherwise. Let $S_1 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \exists i \text{ such that } \epsilon_i \neq \epsilon'_i \text{ and } \epsilon_i \epsilon'_i = 0\}$, and let $S_2 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \epsilon_i \neq \epsilon'_i \Rightarrow \epsilon_i \epsilon'_i = -1\}$.
$$
\begin{aligned}
\mathbb{E}_J[\mathrm{Coll}(D_J)] &= \mathbb{E}_J\left[\sum_{x \in G} (D_J(x))^2\right] \\
&= \mathbb{E}_J\left[\sum_{x \in G} (\mathrm{Pr}_{\bar{\epsilon}}[\bar{g}^{\bar{\epsilon}} = x])^2\right] \\
&= \frac{1}{3^{2k}} \mathbb{E}_J\left[\sum_{x \in G} \left(\sum_{\bar{\epsilon}} \chi_x(\bar{\epsilon})\right) \left(\sum_{\bar{\epsilon}'} \chi_x(\bar{\epsilon}')\right)\right] \\
&= \frac{1}{3^{2k}} \left[ \sum_{\bar{\epsilon}=\bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right] \\
&= \frac{1}{3^{2k}} \left( 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right) \\
&= \frac{1}{3^{2k}} \left[ 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \right] \\
&\leq \frac{1}{3^k} + \left(1 - \frac{1}{3^k} - \frac{5^k}{9^k}\right) \frac{1}{n} + \frac{5^k}{9^k} \\
&= \left(1 - \frac{1}{n}\right) \left(\frac{1}{3^k} + \frac{5^k}{9^k}\right) + \frac{1}{n} \\
&< \left(\frac{8}{9}\right)^k + \frac{1}{n}
\end{aligned}
$$

To see the last step, first notice that if $\bar{\epsilon} = \bar{\epsilon}'$ then $\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}') = 1$. A simple counting argument shows that $|S_2| \le \sum_{i=0}^k \binom{k}{i} 2^i 3^{k-i} = 5^k$, so $\sum_{(\bar{\epsilon},\bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \le 5^k$. Now consider
---PAGE_BREAK---

a pair $(\bar{\epsilon}, \bar{\epsilon}') \in S_1$ and let $j$ be the first position from the left such that $\epsilon_j \neq \epsilon'_j$. W.l.o.g. assume that $\epsilon_j = 1$ (or $\epsilon_j = -1$) and $\epsilon'_j = 0$. In that case write $\bar{g}^{\bar{\epsilon}} = a\, g_j^{\epsilon_j}\, b$ and $\bar{g}^{\bar{\epsilon}'} = a\, b'$, where $a$ is the common prefix coming from the positions before $j$. Since $g_j$ appears in neither $b$ nor $b'$, $\mathrm{Pr}_{g_j}[g_j^{\epsilon_j} = b'b^{-1}] = \frac{1}{n}$. Hence

$$ \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) = \frac{9^k - 3^k - 5^k}{n}. \quad \square $$
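The classification of pairs $(\bar{\epsilon}, \bar{\epsilon}')$ into equal pairs, $S_1$, and $S_2$, together with the bound $|S_2| \le 5^k$ used above, can be checked by enumeration for small $k$; a sketch:

```python
from itertools import product

def classify(k: int):
    """Partition the ordered pairs (eps, eps') in {-1,0,1}^k x {-1,0,1}^k into
    equal pairs, S1 (some differing position has product 0), and
    S2 (every differing position has product -1)."""
    eq = s1 = s2 = 0
    for e, f in product(product((-1, 0, 1), repeat=k), repeat=2):
        if e == f:
            eq += 1
        elif any(a != b and a * b == 0 for a, b in zip(e, f)):
            s1 += 1
        else:
            s2 += 1
    return eq, s1, s2

for k in range(1, 5):
    eq, s1, s2 = classify(k)
    assert eq == 3 ** k
    assert eq + s1 + s2 == 9 ** k   # every pair is classified exactly once
    assert s2 <= 5 ** k             # the bound used for the S2 term
```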
samples/texts_merged/2909063.md
ADDED
@@ -0,0 +1,56 @@
---PAGE_BREAK---

# y⁺ Calculation, Example 6D

Example 6D: Consider a high-velocity fluid flowing over a flat plate. It is desired to find the thickness of the viscous sublayer at $y^+=1$. The fluid is H₂O at 395 K and 1 MPa. Its free-stream velocity is 700 m/s, and the boundary-layer thickness is $\delta=0.1$ m.

## Solutions:

1) Use the "Yplus_LIKE_Eddy_Scales_Book_Version.m" application found in my CFD/turbulence book, "Applied Computational Fluid Dynamics and Turbulence Modeling", Springer International Publishing, 1st Ed., ISBN 978-3-030-28690-3, 2019, DOI: 10.1007/978-3-030-28691-0.

or

2) Get a free copy of "Yplus_LIKE_Eddy_Scales_Book_Version.m" at www.cfdturbulence.com, or email me at tayloreddydk1@gmail.com.

or

3) Use the free $y^+$ estimation GUI tool offered by cfd-online, which is at http://www.cfd-online.com/Tools/yplus.php

or

4) Follow the step-by-step solution shown on the next slide.

---PAGE_BREAK---
$y^+$ Calculation, Example 6D

From $P$ and $T$, $\rho = 942 \text{ kg/m}^3$ and $\mu = 2.28 \times 10^{-4} \text{ kg/m-s}$.

$$\nu = \frac{\mu}{\rho} = \frac{2.28 \times 10^{-4}}{942} = 2.43 \times 10^{-7} \text{ m}^2/\text{s}$$

$$Re_x = \frac{U_\infty \delta(x)}{\nu} = \frac{700 \times 0.1}{2.43 \times 10^{-7}} = 2.87 \times 10^{8} < 10^{9}$$

$$C_f = [2 \log_{10}(Re_x) - 0.65]^{-2.3} = [2 \log_{10}(2.87 \times 10^8) - 0.65]^{-2.3} = 1.60 \times 10^{-3}$$

$$\tau_w = C_f \frac{\rho U_\infty^2}{2} = 1.60 \times 10^{-3} \times \frac{942 \times 700^2}{2} = 3.78 \times 10^5 \text{ Pa}$$

$$u_* = \sqrt{\frac{\tau_w}{\rho}} = \sqrt{\frac{3.78 \times 10^5}{942}} = 20.0 \text{ m/s}$$

$$y(\text{at } y^+=1) = \frac{y^+ \nu}{u_*} = \frac{1 \times 2.43 \times 10^{-7}}{20} = 1.22 \times 10^{-8} \text{ m}$$
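The steps above can be reproduced in a few lines of Python. This is a sketch of the slide's arithmetic, not the MATLAB script mentioned earlier; the fluid properties and the skin-friction correlation are taken from the slide, and intermediate values are carried at full precision, so the results land near the Approach 1/2 numbers rather than the rounded hand calculation.

```python
from math import log10, sqrt

# Water properties at 395 K, 1 MPa (values from the slide)
rho = 942.0        # density, kg/m^3
mu = 2.28e-4       # dynamic viscosity, kg/(m*s)
U = 700.0          # free-stream velocity, m/s
delta = 0.1        # boundary-layer thickness, m

nu = mu / rho                               # kinematic viscosity, m^2/s
Re = U * delta / nu                         # Reynolds number
Cf = (2.0 * log10(Re) - 0.65) ** (-2.3)     # skin-friction correlation
tau_w = Cf * rho * U**2 / 2.0               # wall shear stress, Pa
u_star = sqrt(tau_w / rho)                  # friction velocity, m/s
y = 1.0 * nu / u_star                       # height where y+ = 1, m

print(f"Re = {Re:.3g}, y(y+=1) = {y:.3g} m")
```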
---PAGE_BREAK---

# y⁺ Calculation, Example 6D Solutions

## Approaches 1 and 2 (the MATLAB script, Yplus_LIKE_Eddy_Scales_Book_Version.m)

$$Re_x = 2.89 \times 10^8$$

$$y(\text{at } y^+=1) = 1.23 \times 10^{-8} \text{ m}$$

## Approach 4 (previous slide)

$$Re_x = 2.87 \times 10^8$$

$$y(\text{at } y^+=1) = 1.22 \times 10^{-8} \text{ m}$$

## Approach 3 (cfd-online tool)
samples/texts_merged/305525.md
ADDED
@@ -0,0 +1,295 @@
---PAGE_BREAK---

Topology Proceedings

**Web:** http://topology.auburn.edu/tp/

**Mail:** Topology Proceedings
Department of Mathematics & Statistics
Auburn University, Alabama 36849, USA

**E-mail:** topolog@auburn.edu

**ISSN:** 0146-4124

COPYRIGHT © by Topology Proceedings. All rights reserved.

---PAGE_BREAK---
# SPLITTABILITY OVER LINEAR ORDERINGS

A. J. Hanna* and T. B. M. McMaster†

## Abstract

A partial order $X$ is splittable over a partial order $Y$ if for every subset $A$ of $X$ there exists an order-preserving mapping $f : X \to Y$ such that $f^{-1}f(A) = A$. We define a cardinal function $sc(X)$ (the 'splittability ceiling' for $X$) to be the least cardinal $\beta$ such that the disjoint sum of $\beta$ copies of $X$ fails to split over a single copy of $X$. We allow $sc(X) = \infty$ to cover the case where arbitrarily many disjoint copies may be split. We investigate this cardinal function with respect to (linear) partial orders.

## 1. Introduction

A. V. Arhangel'skiǐ formulated and developed a range of definitions of splittability (or cleavability) in topology (see for example [1, 2]), of which the following are amongst the most basic.

**Definition 1.1.** For topological spaces $X$ and $Y$:

— *X is splittable over Y along the subset A of X if there exists continuous $f : X \to Y$ such that:*

* The research of the first author was supported by a distinction award scholarship from the Department of Education for Northern Ireland.

† The authors would like to express their gratitude to Steven Watson for his helpful comments and insight, especially regarding Theorem 1.8.

*Mathematics Subject Classification:* 06A05, 06A06, 54A25, 54C99

**Key words:** splittability, partially ordered set, splittability ceiling

---PAGE_BREAK---
(i) $f(A) \cap f(X \setminus A) = \emptyset$ or, equivalently,

(ii) $f^{-1}f(A) = A$.

— *X is splittable over Y if for every subset A of X there exists continuous $f : X \to Y$ such that $f^{-1}f(A) = A$.*

It quickly becomes apparent that splittability is not exclusively a topological idea. Indeed, only a routine translation into the language of the appropriate category is required for an analogous definition of splittability over other structures. (For example, splittability over semigroups is considered in [6].)

**Definition 1.2.** Let $X$ and $Y$ be partially ordered sets (posets).

— A map $f$ between partial orders is increasing (or order preserving) if $x \le y$ implies $f(x) \le f(y)$.

— *X is splittable over Y along the subset A of X if there exists increasing $f: X \to Y$ such that $f^{-1}f(A) = A$.*

— *X is splittable over Y if for every subset A of X there exists increasing $f : X \to Y$ such that $f^{-1}f(A) = A$.*

The following result was obtained by D. J. Marron [4, 5]:

**Theorem 1.3.** *A poset $X$ is splittable over the $n$-point chain if and only if:*

(i) *X does not contain a chain of height greater than n, and*

(ii) *X does not contain two disjoint chains of height n.*

**Note 1.4.** The previous result shows that it is not possible to split the (disjoint) sum of two copies of a finite chain over a single copy of the same finite chain. However, it is possible to disjointly embed two copies of $\omega$ (the positive integers with the usual ordering) into a single copy of $\omega$. Clearly, then, it is possible to split 'two disjoint copies' of $\omega$ over $\omega$.

---PAGE_BREAK---
In general, suppose that $\alpha$ copies of a poset $X$ can be disjointly embedded into a single copy of $X$. It is clear that the disjoint sum of $\alpha$ copies of $X$ will split over a single copy of $X$. Indeed, if $(X \cdot \alpha)$ can be embedded into $X$, we can split the sum of $\alpha$ copies of $X$ over $X$.

For notation and further information on linear orderings the interested reader is referred to [8].

**Definition 1.5** (the 'splittability ceiling' for $X$). Let $sc(X)$ be the least cardinal $\beta$ such that the (disjoint) sum of $\beta$ copies of $X$ fails to split over a single copy of $X$. We allow $sc(X) = \infty$ to cover the case where the sum of arbitrarily many disjoint copies may be split.

**Note 1.6.** The critical case for deciding $sc(X)$ is reached in attempting to split $2^{|X|}$ copies. If we have more than $2^{|X|}$ disjoint copies of $X$ and split along some subset of their sum, then there must be copies along which we are splitting the same subset (since $X$ has precisely $2^{|X|}$ subsets), and hence the 'same' map will do. In other words, if $sc(X) \ge 2^{|X|}$ then $sc(X) = \infty$.

**Definition 1.7.** [8] A cardinal number $\aleph_\alpha$ is said to be regular if it is not the sum of fewer than $\aleph_\alpha$ cardinal numbers smaller than $\aleph_\alpha$.

**Theorem 1.8.** *For any partial order $X$, if $sc(X) \neq \infty$ then $sc(X)$ is a regular cardinal.*

*Proof.* Suppose $sc(X) = \lambda < \infty$ is not a regular cardinal; then $\lambda$ can be expressed as the sum of $\alpha$ cardinals $\beta_i$ each less than $\lambda$, where $\alpha$ is less than $\lambda$. Let $Y = \bigcup_{i \in \lambda} X_i$ be the disjoint union of $\lambda$ copies of $X$. We can write $Y = \bigcup_{i \in \alpha} \left( \bigcup_{j \in \beta_i} X_j \right)$. For each $i \in \alpha$ we can split $\bigcup_{j \in \beta_i} X_j$ over a single copy $X_{\beta_i}$ of $X$, since $\beta_i < \lambda$.

---PAGE_BREAK---
Likewise we can split $\bigcup_{i \in \alpha} X_{\beta_i}$ over a single copy of $X$, since $\alpha < \lambda$.

Hence we can split $\lambda$ copies of $X$ (along any subset) over $X$, a contradiction. $\square$

**Proposition 1.9.** *The splittability ceiling for the chain of positive integers $\omega$ is infinity (i.e. $sc(\omega) = \infty$).*

*Proof.* Given $X$, the (disjoint) sum of copies of $\omega$, and a subset $A$ of $X$, we define a map $f : X \to \omega$ as follows:

$$f(x) = \begin{cases} x & (\text{if } x \in A \text{ and } x \text{ is odd}) \text{ or } (x \notin A \text{ and } x \text{ is even}), \\ x+1 & \text{otherwise.} \end{cases}$$

It is clear that $f$ is increasing and that $f(A)$ is a subset of the odds while $f(X \setminus A)$ is a subset of the evens. It follows that $f$ splits $X$ along $A$ over $\omega$ as required. $\square$
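The parity map in the proof is easy to realize concretely. A minimal sketch, checking on a finite initial segment of one copy of $\omega$ that the map is increasing and that it separates $A$ from its complement (the function name is illustrative):

```python
import random

def split_map(A):
    """The increasing map f: omega -> omega from the proof: points of A land
    on odd numbers, points outside A land on even numbers, and every point
    moves up by at most one (so order is preserved)."""
    def f(x):
        in_A = x in A
        if (in_A and x % 2 == 1) or (not in_A and x % 2 == 0):
            return x
        return x + 1
    return f

# Check on an initial segment with a random subset A.
rng = random.Random(1)
X = range(1, 200)
A = {x for x in X if rng.random() < 0.5}
f = split_map(A)
values = [f(x) for x in X]
assert all(values[i] <= values[i + 1] for i in range(len(values) - 1))  # increasing
assert {f(x) for x in A}.isdisjoint({f(x) for x in X if x not in A})    # f(A) and f(X\A) disjoint
```

Since $f(A)$ and $f(X \setminus A)$ are disjoint, $f^{-1}f(A) = A$, which is exactly the splitting condition.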
The corresponding result holds for the negative integers $\omega^*$ and for the integers $\omega^* + \omega$.

**Proposition 1.10.** *Let $\alpha$ be an ordinal (considered as a linear order). Then*

$$sc(\alpha) = \begin{cases} \infty & \text{if } \alpha \text{ is a limit ordinal,} \\ 2 & \text{if } \alpha \text{ is a non-limit ordinal.} \end{cases}$$

*Proof.* Note that each element in a limit ordinal has an immediate successor. The first part of the result follows by methods similar to those employed for $\omega$. If $\alpha$ is a non-limit ordinal we specify the subset $A$ to contain the 'odd' ordinals less than $\alpha$. Similarly we specify the subset $B$ to contain the 'even' ordinals less than $\alpha$. We can express $\alpha = \xi + n$ where $\xi$ is a limit ordinal and $n$ is finite. Now $f(x) \ge x$ for all $x \in \alpha$ and $f(x) = x$ for $x = \xi + i$ ($0 \le i < n$) whenever $f$ is a map splitting $\alpha$ along $A$ or $B$ over $\alpha$. Clearly it will not be possible to split the sum of two copies of $\alpha$ along $A$ and $B$ respectively over a single copy of $\alpha$. $\square$

---PAGE_BREAK---

**Proposition 1.11.** *The splittability ceiling for the chain of rationals $\eta$ is infinity (i.e. $sc(\eta) = \infty$).*

*Proof.* Decompose $\eta$ into two disjoint subsets $C$ and $D$, each of which is dense in $\eta$. Enumerate both $C$ and $D$ in an arbitrary fashion. Given disjoint copies of $\eta$ and a subset $A$ to split along, define a map for each copy. Begin by enumerating the copy $X_1 = \{x_1, x_2, x_3, \dots\}$. If $x_1 \in A$ (respectively $x_1 \notin A$), map $x_1$ to the first point in the enumeration of $C$ (respectively $D$). The process continues inductively (using a method similar to that devised by Cantor to show that every countable linear order can be embedded into $\eta$). $\square$

**Proposition 1.12.** *The splittability ceiling for the chain of the real numbers $\lambda$ is $c^+$ (i.e. $sc(\lambda) = c^+$).*

*Proof.* We first note that it is possible to disjointly embed continuum-many copies of $\lambda$ into $\lambda$. To prove the result we show that there are only continuum-many increasing maps from the reals into the reals. We know that there are only continuum-many maps from the rationals into the reals. Given increasing $f: \mathbb{R} \to \mathbb{R}$, consider its restriction to the rationals $f|_{\mathbb{Q}}$. For how many increasing maps $g: \mathbb{R} \to \mathbb{R}$ do we have $f|_{\mathbb{Q}} = g|_{\mathbb{Q}}$?

We can show that $f$ and $g$ can only differ at countably many points: for each irrational $x$ select both a strictly decreasing sequence $(a_n)$ and a strictly increasing sequence $(b_n)$ of rationals, each converging to $x$. Now $(f(a_n))$ converges to some limit $l$ while $(f(b_n))$ converges to some limit $l'$. If $l = l'$ then $f(x) = g(x) = l$; otherwise $f(x), g(x) \in [l', l]$ and $f(X) \cap [l', l] = \{f(x)\}$. Since there can only be countably many disjoint intervals in the reals, there can only be countably many points $x$ where $f(x) \neq g(x)$.

It follows that there can only be continuum-many maps within each equivalence class. Hence there are at most continuum-many increasing maps from the reals to the reals. Clearly if we have more than continuum-many disjoint copies of $\lambda$ and pick different subsets in them, then the union of these copies cannot be

---PAGE_BREAK---
split over a single copy of $\lambda$ along the union of these sets, due to the cardinality restriction on increasing maps. $\square$

Similar arguments can be used to locate an upper bound for the number of increasing maps from any linear order into itself. The most interesting case appears to be that of the countable linear orders. Moreover, unless the order is scattered it can be shown that the splittability ceiling will be infinity. This follows since any non-scattered linear order will contain a copy of $\eta$, and we already know that $sc(\eta) = \infty$.
## 2. Countable Linear Orderings

**Lemma 2.1.** *Let $X$ be a partial order. If $sc(X) > 2$ then $sc(X) \geq \aleph_0$.*

*Proof.* Let $X_1, X_2, X_3$ be disjoint copies of the partial order $X$, with subsets $A_1, A_2, A_3$ respectively. Let $X_4$ be a fourth copy of $X$. Since $sc(X) > 2$ we can split $X_1 \cup X_2$ along $A_1 \cup A_2$ over $X_4$ using an increasing map $f$ (i.e. $f^{-1}f(A_1 \cup A_2) = A_1 \cup A_2$). Now split $X_3 \cup X_4$ along $B \cup A_3$ (where $B = f(A_1 \cup A_2)$) over $X$ using an increasing map $g$ (i.e. $g^{-1}g(f(A_1 \cup A_2) \cup A_3) = f(A_1 \cup A_2) \cup A_3$).

Define a map $h : X_1 \cup X_2 \cup X_3 \to X$ by

$$h(x) = \begin{cases} g \circ f(x) & \text{if } x \in X_1 \cup X_2, \\ g(x) & \text{if } x \in X_3; \end{cases}$$

then $h$ splits $X_1 \cup X_2 \cup X_3$ along $A_1 \cup A_2 \cup A_3$ over $X$; for suppose $x \in X_1 \cup X_2 \cup X_3$ and

$$\begin{align*}
h(x) \in h(A_1 \cup A_2 \cup A_3) &= h(A_1 \cup A_2) \cup h(A_3) \\
&= g \circ f(A_1 \cup A_2) \cup g(A_3) \\
&= g(f(A_1 \cup A_2) \cup A_3).
\end{align*}$$

If $x \in X_3$ then $h(x) = g(x) \in g(f(A_1 \cup A_2) \cup A_3)$, so $x \in f(A_1 \cup A_2) \cup A_3$ and hence $x \in A_3$. If $x \in X_1 \cup X_2$ then $h(x) = g \circ f(x) \in g(f(A_1 \cup A_2) \cup A_3)$, so $f(x) \in f(A_1 \cup A_2) \cup A_3$.

---PAGE_BREAK---
Fig. 1. Splitting 3 copies of X over a single copy of X

Hence $f(x) \in f(A_1 \cup A_2)$ and $x \in A_1 \cup A_2$.

Clearly this argument can be extended by induction so that $sc(X) > n$ for all $n \in \mathbb{N}$. $\square$

**Corollary 2.2.** *Let $X$ be a finite partial order; then $sc(X) = 2$ or $\infty$.*

We now show that the previous result extends to countable linear partial orders. To do so, we employ the notion of an 'order shuffling' and a result due to J. L. Orr.

**Definition 2.3.** [7] Let $A$ be a countable linearly ordered set. A function $f : A \to \mathbb{N}^+$ is called an order shuffling on $A$. A linearly ordered set $B$ shuffles into $(A, f)$ if there is an increasing surjection $\sigma$ from $B$ onto $A$ such that the cardinality of $\sigma^{-1}\{a\}$ is at least $f(a)$ for all but finitely many $a \in A$. If this holds for all $a \in A$ then $B$ shuffles into $(A, f)$ exactly.

---PAGE_BREAK---
**Theorem 2.4.** [7] *Let $A$ be a countable scattered linear ordering and let $f$ be an order shuffling on $A$; then $A$ shuffles into $(A, f)$.*

**Lemma 2.5.** *Let $X$ be a countable scattered linear order; then there exist an order-preserving surjection $\pi : X \to X$ and points $\{a_1, a_2, \dots, a_n\}$ such that:*

(i) $|\pi^{-1}(x)| > 1$ for each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$, and

(ii) $\pi^{-1}(a_i) = \{a_i\}$ for each $i \in \{1, 2, \dots, n\}$.

*Proof.* We use Theorem 2.4 to find an order-preserving surjection $\pi : X \to X$ such that $|\pi^{-1}(x)| > 1$ for all but $n$ elements $\{a_1, a_2, \dots, a_n\}$. We assume that $n$ is minimal and that $a_1 < a_2 < \dots < a_n$.

Note that if $\pi^{-1}(\{a_1, a_2, \dots, a_n\}) \subseteq \{a_1, a_2, \dots, a_n\}$ then, since $\pi$ is order preserving, $\pi^{-1}(a_i) = \{a_i\}$. Suppose that $\pi$ does not exhibit property (ii); then there exists $i$ such that the singleton pre-image of $a_i$ under $\pi$ is not contained in $\{a_1, a_2, \dots, a_n\}$. Let $\rho = \pi \circ \pi$ and consider $\rho^{-1}(x)$ for some $x \in X$. If $x \notin \{a_1, a_2, \dots, a_n\}$ then $|\pi^{-1}(x)| > 1$, hence $|\rho^{-1}(x)| > 1$.

If $x = a_j$ for $j \neq i$ then clearly $|\rho^{-1}(x)| \ge 1$, but if $x = a_i$ then we can find $y \in X \setminus \{a_1, a_2, \dots, a_n\}$ such that $\pi(y) = x$. Now $|\pi^{-1}(y)| > 1$, so $|\rho^{-1}(x)| > 1$; but $\rho$ now contradicts the minimality of $n$. $\square$

**Theorem 2.6.** *Let $X$ be a countable linear ordering; then $sc(X) = 2$ or $sc(X) = \infty$.*

*Proof.* We know that if $X$ is not scattered, then $X$ contains a copy of the rationals, so $sc(X) = \infty$. We also know that if $sc(X) > 2$ then $sc(X) \ge \aleph_0$; that is, we can split the sum of any finite number of copies of $X$ over a single copy. Let $X$ be a countable scattered linear order such that $sc(X) > 2$. Using Lemma 2.5 it is possible to find an increasing surjection $\pi : X \to X$ and points $\{a_1, a_2, \dots, a_n\}$ such that:

(i) $|\pi^{-1}(x)| > 1$ for each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$, and

---PAGE_BREAK---
(ii) $\pi^{-1}(a_i) = \{a_i\}$ for each $i = 1, 2, \dots, n$.

For each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$ choose $x_1, x_2 \in \pi^{-1}(x)$ with $x_1 < x_2$.

Let $Y = \bigcup_{i \in \beta} X_i$ be the disjoint union of $\beta$ copies of $X$. Let $A = \bigcup_{i \in \beta} A_i$ where $A_i \subseteq X_i$. For each subset $B$ of $\{a_1, a_2, \dots, a_n\}$ let $X_B$ be a copy of $X$. For each $i \in \beta$ let $C_i = A_i \cap \{a_1, a_2, \dots, a_n\}$ and define a map $f_i : X_i \to X_{C_i}$ as follows:

$$f_i(x) = \begin{cases} a_j & \text{if } x = a_j \text{ for some } j, \\ x_1 & \text{if } x \in A_i \setminus \{a_1, a_2, \dots, a_n\}, \\ x_2 & \text{if } x \notin A_i \cup \{a_1, a_2, \dots, a_n\}. \end{cases}$$

These maps can be used to split $Y$ along $A$ over $2^n$ copies of $X$ (using $f$ say), which can in turn be split along $f(A)$ over a single copy of $X$. Hence we can split $\beta$ copies of $X$ over $X$, so $sc(X) = \infty$. $\square$
**Note 2.7.** Given a countable scattered linear order $X$, for $x, y \in X$ we set $x \equiv y$ if and only if there are only finitely many $z \in X$ such that $x < z < y$ or $y < z < x$, and thus obtain an equivalence relation on $X$. Let us denote the equivalence class of a point $x \in X$ by $e(x)$. Now we can determine a subset $A$ of $X$ such that between each two points in $A$ we can find a point not in $A$ and vice versa. The first step is to select a point $x$ from each equivalence class. We assign a point $y \in e(x)$ to the set $A$ if there are an even number of points between $x$ and $y$ (inclusive). We say that $A$ and $X \setminus A$ alternate in $X$. Note that this only works because the order under consideration is scattered.
|
| 207 |
+
|
| 208 |
+
**Lemma 2.8.** Let $X$ be a countable scattered linear order with $sc(X) > 2$. For each $x \in X$ there exists an order preserving injection $f: X \to X$ such that $x \notin f(X)$.
|
| 209 |
+
|
| 210 |
+
*Proof.* Let $x \in X$, where $X$ is a countable scattered linear order with $sc(X) > 2$. Choose a subset $A$ of $X$ that alternates in $X$
|
| 211 |
+
---PAGE_BREAK---
|
| 212 |
+
|
| 213 |
+
as described in Note 2.7. Let $Y = X_1 \cup X_2$ be the disjoint union of 2 copies of $X$. Let $B = A_1 \cup A_2$ where $A = A_1 \subseteq X_1$ and $X \setminus A = A_2 \subseteq X_2$. Choose $f$ that splits $Y$ along $B$ over $X$ and set $f_i = f|_{X_i}$ for $i=1,2$. The choice of $A$ ensures that both $f_1$ and $f_2$ are order preserving injections. If $f_1(X_1)$ or $f_2(X_2)$ do not contain $x$ we have found a suitable map. Otherwise we can find distinct $a_1, a_2 \in X$ such that $f_1(a_1) = f_2(a_2) = x$. Now $a_1 < a_2$ say, so define a map $g: X \to X$ by
|
| 214 |
+
|
| 215 |
+
$$ g(z) = \begin{cases} f_2(z) & \text{for } z < a_2 \\ f_1(z) & \text{for } z \ge a_2. \end{cases} $$
|
| 216 |
+
|
| 217 |
+
This map is an order preserving injection and $x \notin g(X)$. $\square$
|
| 218 |
+
|
| 219 |
+
**Lemma 2.9.** Let $X$ be a countable scattered linear order such that for each $x \in X$ there exists an order preserving injection $f : X \to X$ such that $x \notin f(X)$. If $A$ is a finite subset of $X$ there exists an order preserving injection $g : X \to X$ such that $A \cap g(X) = \emptyset$.

*Proof.* Let $A = \{a_1, a_2, \dots, a_n\}$ be a finite subset of $X$. Suppose that there exists an order preserving injection $g : X \to X$ such that $g(X) \cap \{a_1, a_2, \dots, a_k\} = \emptyset$ for some $k < n$ (for $k = 0$ the identity map will do). Then either $a_{k+1} \notin g(X)$ or there exists $b \in X$ such that $g(b) = a_{k+1}$. In the first case, let $h = g$, and in the second case, choose an order preserving injection $f : X \to X$ such that $b \notin f(X)$ and set $h = g \circ f$. Now $h$ is an order preserving injection and $h(X) \cap \{a_1, a_2, \dots, a_{k+1}\} = \emptyset$, and we repeat the above argument. When $k = n$, we are done. $\square$
**Theorem 2.10.** Let $X$ be a countable linear order; then $\mathrm{sc}(X) = \infty$ if and only if $2 \cdot X$ order embeds into $X$.

*Proof.* We need only prove that if $X$ is a countable linear order and $\mathrm{sc}(X) = \infty$ then $2 \cdot X$ order embeds into $X$. If $X$ is not scattered then $X$ contains a subset isomorphic to the rationals. Since every countable linear order embeds into the rationals (see

---PAGE_BREAK---

[8]), clearly $2 \cdot X$ order embeds into $X$. We assume now that $X$ is scattered. First find an increasing surjection $\sigma : X \to X$ such that $|\sigma^{-1}(x)| \ge 2$ for all $x \in X \setminus \{a_1, a_2, \dots, a_m\}$. It is possible (via Lemmas 2.8 and 2.9) to find an order preserving injection $f : X \to X$ such that $a_i \notin f(X)$ for all $i$.

Set $Y = \sigma^{-1}(f(X)) \subseteq X$ and define $\pi : Y \to X$ by $\pi = f^{-1} \circ \sigma$. It follows that $\pi$ is order preserving and that $|\pi^{-1}(x)| \ge 2$ for all $x \in X$.

Select, for each $x$, two points $x_0, x_1 \in \pi^{-1}(x)$ with $x_0 < x_1$. Define $\phi : \{0, 1\} \times X \to X$ by

$$ \phi(i, x) = \begin{cases} x_0 & \text{if } i = 0, \\ x_1 & \text{if } i = 1. \end{cases} $$

Clearly, $\phi$ order embeds $2 \cdot X$ into $X$. $\square$
**Lemma 2.11.** The following statements are equivalent for any linear order $X$:

(i) $2 \cdot X$ order embeds into $X$,

(ii) $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$,

(iii) $n \cdot X$ order embeds into $X$ for some $n \in \mathbb{N}$ where $n > 1$.
*Proof.* We prove first that (i) implies (ii). Let $X$ be a linear order such that $2 \cdot X$ order embeds into $X$; that is, there exists an order preserving injection $f : \{0, 1\} \times X \to X$. Suppose inductively that $(n-1) \cdot X$ order embeds into $X$ for some $n > 1$; that is, there exists an order preserving injection $g : \{0, 1, \dots, n-2\} \times X \to X$. Define a map $h : \{0, 1, \dots, n-1\} \times X \to 2 \cdot X$ as follows:

$$ h(i, x) = \begin{cases} (0, g(i, x)) & \text{if } i < n-1, \\ (1, g(n-2, x)) & \text{if } i = n-1. \end{cases} $$

Now define $\pi : \{0, 1, \dots, n-1\} \times X \to X$ as $\pi = f \circ h$. It follows that $\pi$ is an order preserving injection, so by induction we have shown that $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$. That (ii) implies (iii) is trivial. Finally, (iii) implies (i) since $2 \cdot X$ will clearly order embed into $n \cdot X$ for any $n > 1$. $\square$
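For a concrete linear order $X$ in which $2 \cdot X$ embeds, the inductive step $\pi = f \circ h$ can be traced computationally. The sketch below takes $X = \mathbb{Q}$ (approximated by floats) and an arctan-based embedding $f$ of our own choosing; both are illustrative assumptions, not part of the proof:

```python
import math

def f(i, x):
    # An order embedding of 2·Q into Q: copy i lands in the interval (i, i + 1).
    return i + math.atan(x) / math.pi + 0.5

def embed(n):
    """Order embedding of n·Q into Q, built inductively as in Lemma 2.11:
    h folds the last copy into the second component of 2·Q, then pi = f ∘ h."""
    if n == 1:
        return lambda i, x: x
    g = embed(n - 1)  # embeds (n-1)·Q, copies indexed 0 .. n-2
    def h(i, x):
        return (0, g(i, x)) if i < n - 1 else (1, g(n - 2, x))
    return lambda i, x: f(*h(i, x))

pi = embed(3)
# Order is preserved within and across the three copies:
assert pi(0, 100) < pi(1, -100) < pi(1, 100) < pi(2, -100) < pi(2, 100)
```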
---PAGE_BREAK---

**Theorem 2.12.** Let $X$ be a countable linear order. Then $\mathrm{sc}(X) = \infty$ if and only if $\mathrm{sc}(n \cdot X) = \infty$ for all $n \in \mathbb{N}$.

*Proof.* If $\mathrm{sc}(X) = \infty$ then $2 \cdot X$ (and hence $k \cdot X$ for all $k \in \mathbb{N}$) will order embed into $X$ by Lemma 2.11. It follows that $2n \cdot X$ will order embed into $n \cdot X$ and hence into $X$, a sufficient condition for $\mathrm{sc}(n \cdot X) = \infty$ by Theorem 2.10.

If $\mathrm{sc}(n \cdot X) = \infty$ then $2n \cdot X$ order embeds into $n \cdot X$ by Theorem 2.10. That is, we can find an order preserving injection $f : \{0, 1, \dots, 2n-1\} \times X \to \{0, 1, \dots, n-1\} \times X$. For any $x \in X$ we can find $x', x'' \in X$ such that

$$ f(0,x) \le (0, x') < (0, x'') \le f(2n-1,x). $$

Define a map $g : \{0, 1\} \times X \to X$ by

$$ g(i, x) = \begin{cases} x' & \text{if } i = 0, \\ x'' & \text{if } i = 1. \end{cases} $$

Clearly $g$ is an order preserving injection that order embeds $2 \cdot X$ into $X$, a sufficient condition for $\mathrm{sc}(X) = \infty$ by Theorem 2.10. $\square$

**Theorem 2.13.** Let $X$ be a countable linear order. Then $\mathrm{sc}(X) = \infty$ if and only if $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$.
## References

[1] A. V. Arhangel'skii, *A general concept of cleavability of topological spaces over a class of spaces*, Abstracts Tiraspol Symposium (1985) (Stiinca, Kishinev, 1985), 8–10 (in Russian).

[2] A. V. Arhangel'skii, *A survey of cleavability*, Topology and its Applications **54** (1993), 141–163.

[3] A. J. Hanna and T. B. M. McMaster, *Some results on cleavability*, submitted.

---PAGE_BREAK---

[4] D. J. Marron, *Splittability in ordered sets and in ordered spaces*, Ph.D. thesis, Queen's University Belfast (1997).

[5] D. J. Marron and T. B. M. McMaster, *Splittability in ordered sets spaces*, Proc. Eighth Prague Topological Symp. (1996), 280–282. [Located in Topology Atlas at http://www.unipissing.ca/topology]

[6] D. J. Marron and T. B. M. McMaster, *Cleavability in semigroups*, to appear in Semigroup Forum.

[7] J. L. Orr, *Shuffling of linear orders*, Canad. Math. Bull. **38**(2) (1995), 223–229.

[8] J. G. Rosenstein, *Linear orderings*, Pure and Applied Mathematics, Academic Press (1982).

Department of Pure Mathematics, The Queen's University of
Belfast, University Road, Belfast, BT7 1NN, United Kingdom

*E-mail address: a.hanna@qub.ac.uk*

Department of Pure Mathematics, The Queen's University of
Belfast, University Road, Belfast, BT7 1NN, United Kingdom
samples/texts_merged/3147359.md
ADDED
|
@@ -0,0 +1,589 @@
---PAGE_BREAK---

Conference Paper

# Implementing Hybrid Semantics: From Functional to Imperative

Sergey Goncharov
Renato Neves
José Proença*

*CISTER Research Centre
CISTER-TR-201008

2020/11/30

---PAGE_BREAK---

# Implementing Hybrid Semantics: From Functional to Imperative

Sergey Goncharov, Renato Neves, José Proença*

*CISTER Research Centre
Polytechnic Institute of Porto (ISEP P.Porto)
Rua Dr. António Bernardino de Almeida, 431
4200-072 Porto
Portugal
Tel.: +351.22.8340509, Fax: +351.22.8321159
E-mail: sergey.goncharov@fau.de, nevrenato@di.uminho.pt, pro@isep.ipp.pt
https://www.cister-labs.pt

## Abstract
Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of hybridness as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HybCore with a sound and adequate operational semantics has been proposed. The latter semantics however did not provide hints to implementing HybCore as a runnable language, suitable for hybrid system simulation (e.g. the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HybCore, whose semantics is simpler and runnable, and yet intimately related with the semantics of HybCore at the level of hybrid monads. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for Haskell and UTop for OCaml. The major asset of our implementation is that it formally follows the operational semantic rules.

---PAGE_BREAK---

# Implementing Hybrid Semantics: From Functional to Imperative

Sergey Goncharov¹, Renato Neves² and José Proença³

¹ Dept. of Comp. Sci., FAU Erlangen-Nürnberg, Germany

² University of Minho & INESC-TEC, Portugal

³ CISTER/ISEP, Portugal

**Abstract.** Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of *hybridness* as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HYBCORE with a sound and adequate operational semantics has been proposed. The latter semantics however did not provide hints to implementing HYBCORE as a runnable language, suitable for hybrid system simulation (e.g. the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HYBCORE, whose semantics is simpler and runnable, and yet intimately related with the semantics of HYBCORE at the level of *hybrid monads*. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for HASKELL and UTOP for OCAML. The major asset of our implementation is that it formally follows the operational semantic rules.

## 1 Introduction
**The core idea of hybrid programming.** Hybrid programming is a rapidly emerging computational paradigm [26,29] that aims at using principles and techniques from programming theory (e.g. compositionality [12,26], Hoare calculi [29,34], theory of iteration [2,8]) to provide formal foundations for developing computational systems that interact with physical processes. Cruise controllers are a typical example of this pattern; a very simple case is given by the hybrid program below.

```c
while true do {
  if v ≤ 10 then (v' = 1 for 1) else (v' = -1 for 1)   /* (cruise controller) */
}
```
---PAGE_BREAK---
In a nutshell, the program specifies a digital controller that periodically measures and regulates a vehicle's velocity ($v$): if the latter is less than or equal to 10, the controller accelerates during 1 time unit, as dictated by the program statement $v' = 1 \text{ for } 1$ ($v' = 1$ is a differential equation representing the velocity's rate of change over time; the value 1 on the right-hand side of for is the duration during which the program statement runs). Otherwise, it decelerates during the same amount of time ($v' = -1 \text{ for } 1$). Figure 1 shows the output of this hybrid program for an initial velocity of 5.
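Since each branch runs its differential statement for exactly 1 time unit with slope ±1, the controller's behaviour at whole time units can be reproduced with a simple discrete simulation (the function below is our illustration, not part of the language):

```python
def cruise(v0, steps):
    """Velocity at whole time units: each iteration runs v' = 1 for 1
    (if v <= 10) or v' = -1 for 1 (otherwise), so v changes by ±1."""
    v, traj = v0, [v0]
    for _ in range(steps):
        v = v + 1 if v <= 10 else v - 1
        traj.append(v)
    return traj

# Starting from v = 5 the velocity climbs to the target region and then
# oscillates between 10 and 11, as in Figure 1:
assert cruise(5, 20)[:7] == [5, 6, 7, 8, 9, 10, 11]
assert set(cruise(5, 20)[6:]) == {10, 11}
```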
Note that in contrast to standard programming, the cruise controller involves not only classical constructs (while-loops and conditional statements) but also differential ones (which are used for describing physical processes). This cross-disciplinary combination is the core feature of hybrid programming and has a notably wide range of application domains (see [29,30]). However, it also hinders the use of classical techniques of programming, and thus calls for a principled extension of programming theory to the hybrid setting.

Fig. 1: Vehicle's velocity

As is already apparent from the (cruise controller) example, we stick to an *imperative* programming style, in particular in order to keep in touch with the established denotational models of physical time and computation. A popular alternative for modelling real-time and hybrid systems is to use a *declarative* programming style, which is done e.g. in Real-Time Maude [27] or Modelica [10]. A well-known benefit of declarative programming is that programs are very easy to write; on the flip side, however, it is considerably more difficult to define what they exactly mean.
**Motivation and related work.** Most of the previous research on formal hybrid system modelling has been inspired by automata theory and Kleene algebra (as the corresponding algebraic counterpart). These approaches led to the well-known notion of hybrid automaton [17] and Kleene algebra based languages for hybrid systems [28,18,19]. From the purely semantic perspective, these formalizations are rather close and share such characteristic features as *nondeterminism* and what can be called *non-refined divergence*. The former is standardly justified by the focus on formal verification of safety-critical systems: in such contexts overabstraction is usually desirable and useful. However, coalescing *purely hybrid* behaviour with nondeterminism detaches semantic models from their prototypes as they exist in the wild. This brings up several issues. Most obviously, a nondeterministic semantics, especially one not given in an operational form, cannot directly serve as a basis for languages and tools for hybrid system testing and simulation. Moreover, models with nondeterminism baked in do not provide a clear indication of how to combine hybrid behaviour with effects other

---PAGE_BREAK---

than nondeterminism (e.g. probability), or how to combine it with nondeterminism in a different way (van Glabbeek's spectrum [36] gives an idea of the diversity of potentially arising options). Finally, the Kleene algebra paradigm strongly suggests a relational semantics for programs, with the underlying relations connecting a state on which the program is run with the states that the program can reach. As previously indicated by Höfner and Möller [18], this view is too coarse-grained and contrasts with the trajectory-based one, where a program is associated with a trajectory of states (recall Figure 1). The trajectory-based approach provides an appropriate abstraction for such aspects as notions of convergence, periodic orbits, and duration-based predicates [5]. This potentially enables analysis of properties such as *how fast* our (cruise controller) example reaches the target velocity or for *how long* it exceeds it.
The issue of *non-refined divergence* mentioned earlier arises from the Kleene algebra law $p;0 = 0$ in conjunction with Fischer-Ladner's encoding of while-loops `while b do { p }` as $(b;p)^*; \neg b$. This plays havoc with all divergent programs `while true do { p }`, as they become identified with divergence $0$, thus making the above (cruise controller) example meaningless. This issue is extensively discussed in Höfner and Möller's work [18] on a *nondeterministic* algebra of trajectories, which tackles the problem by disabling the law $p;0 = 0$ and by introducing a special operator for infinite iteration that inherently relies on nondeterminism. This iteration operator inflates trajectories at so-called 'Zeno points' with arbitrary values, which in our case would entail e.g. the program
$$ x := 1;\ \texttt{while true do}\ \{\ \texttt{wait}\ x;\ x := x/2\ \} \qquad (\text{zeno}) $$

to output at time instant 2 all possible values in the valuation space (the expression `wait t` represents a wait call of t time units). More details about Zeno points can be found in [18,14].
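The Zeno behaviour of the program above is easy to reproduce numerically: the iteration durations $1, 1/2, 1/4, \dots$ sum to 2, so time instant 2 is never reached by any finite number of unfoldings. A small sketch of ours, for illustration:

```python
def zeno_times(iterations):
    """Cumulative execution time after each iteration of the zeno program:
    wait x, then x := x / 2, starting from x = 1."""
    x, t, ts = 1.0, 0.0, []
    for _ in range(iterations):
        t += x      # wait x time units
        x /= 2      # x := x / 2
        ts.append(t)
    return ts

ts = zeno_times(50)
# The total time approaches, but never reaches, the Zeno point 2:
assert ts[-1] < 2 and 2 - ts[-1] < 1e-9
```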
In previous work [12,14], we pursued a *purely hybrid* semantics via a simple *deterministic functional* language HYBCORE, with while-loops for which we used Elgot's notion of iteration [8] as the underlying semantic structure. This resulted in a semantics of finite and infinite iteration, corresponding to a refined view of divergence. Specifically, we developed an operational semantics and also a denotational counterpart for HYBCORE. An important problem of that semantics, however, is that it involves infinitely many premises and requires calculating the total duration of programs, which precludes using such semantics directly in implementations. Both the above examples (cruise controller) and (zeno) are affected by this issue. In the present paper we propose an *imperative* language with a denotational semantics similar to that of HYBCORE, but now provide a clear recipe for executing the semantics in a constructive manner.
**Overview and contributions.** Building on our previous work [14], we devise operational and denotational semantics suitable for implementation purposes, and provide a soundness and adequacy theorem relating both these styles of semantics. Results of this kind are well-established yardsticks in programming language theory [37], and beneficial from a practical perspective. For example, small-step operational semantics naturally guides the implementation of compilers for

---PAGE_BREAK---

programming languages, whilst denotational semantics is more abstract, syntax-independent, and guides the study of program equivalence, of the underlying computational paradigm, and its combination with other computational effects.
As mentioned before, in our previous work [14] we introduced a simple functional hybrid language HYBCORE with operational and denotational monad-based semantics. Here, we work with a similar imperative while-language, whose semantics is given in terms of a global state space of trajectories over $\mathbb{R}^n$, which is a commonly used carrier when working with solutions of systems of differential equations. A key principle we have taken as a basis for our new semantics is the capacity to determine behaviours of a program $p$ by examining only some subterms of it. In order to illustrate this aspect, first note that our semantics does not reduce program terms $p$ and initial states $\sigma$ (corresponding to valuation functions $\sigma: \mathcal{X} \to \mathbb{R}$ on program variables $\mathcal{X}$) to states $\sigma'$, as usual in classical programming. Instead it reduces triples $p$, $\sigma$, $t$ of programs $p$, initial states $\sigma$ and time instants $t$ to a state $\sigma'$; such a reduction can be read as "given $\sigma$ as the initial state, program $p$ produces the state $\sigma'$ at time instant $t$". The reduction process of $p$, $\sigma$, $t$ to a state then only examines fragments of $p$, or unfolds it when strictly necessary, depending on the time instant $t$. For example, the reduction of the (cruise controller) unfolds the underlying loop only twice for the time instant $1 + 1/2$ (this time instant falls within the second iteration of the loop). This is directly reflected in our prototype implementation of an interactive evaluator of hybrid programs, LINCE. It is available online and comes with a series of examples for the reader to explore (http://arcatools.org/lince). The plot in Figure 1 was automatically obtained from LINCE, by calling the previously described reduction process on a predetermined sequence of time instants $t$.
For the denotational model, we build on our previous work [12,14] where hybrid programs are interpreted via a suitable monad **H**, called the *hybrid monad* and capturing the computational effect of *hybridness*, following the seminal approach of Moggi [24,25]. Our present semantics is more lightweight and is naturally couched in terms of another monad **H**<sub>S</sub>, parametrized by a set **S**. In our case, as mentioned above, **S** is the set of trajectories over $\mathbb{R}^n$, where *n* is the number of available program variables $\mathcal{X}$. The latter monad is in fact parametrized in a formal sense [35] and comes out as an instance of a recently emerged generic construction [7]. A remarkable feature of that construction is that it can be instantiated in a constructive setting (without using any choice principles). Although we do not touch upon this aspect here, in our view this reinforces the fundamental nature of our semantics. Among various benefits of **H**<sub>S</sub> over **H**, the former monad enjoys a construction of an iteration operator (in the sense of Elgot [8]) as a *least fixpoint*, calculated as a limit of an $\omega$-chain of approximations, while for **H** the construction of the iteration operator is rather intricate and no similar characterization is available. A natural question that arises is: how are **H** and **H**<sub>S</sub> related? We answer it by providing an instructive connection, which sheds light on the construction of **H**, by explicitly identifying the semantic ingredients which have to be added to **H**<sub>S</sub> to obtain **H**. Additionally, this results in "backward compatibility" with our previous work.

---PAGE_BREAK---
**Document structure.** After short preliminaries (Section 2), in Section 3 we introduce our while-language and its operational semantics. In Sections 4 and 5, we develop the denotational model for our language and connect it formally to the existing hybrid monad [12,14]. In Section 6, we prove a soundness and adequacy result for our operational semantics w.r.t. the developed model. Section 7 describes LINCE's architecture. Finally, Section 8 concludes and briefly discusses future work. Omitted proofs and examples are found in the extended version of the current paper [15].

## 2 Preliminaries
We assume familiarity with category theory [1]. By $\mathbb{R}$, $\mathbb{R}_+$ and $\bar{\mathbb{R}}_+$ we respectively denote the sets of reals, non-negative reals, and extended non-negative reals (i.e. $\mathbb{R}_+$ extended with the infinity value $\infty$). Let $[0, \bar{\mathbb{R}}_+)$ denote the set of downsets of $\bar{\mathbb{R}}_+$ having the form $[0, d]$ ($d \in \mathbb{R}_+$) or the form $[0, d)$ ($d \in \bar{\mathbb{R}}_+$). We call the elements of the dependent sum $\sum_{I \in [0, \bar{\mathbb{R}}_+)} X^I$ *trajectories* (over $X$). By $[0, \mathbb{R}_+]$, $[0, \bar{\mathbb{R}}_+)$ and $[0, \bar{\mathbb{R}}_+]$ we denote the following corresponding subsets of the set of downsets: $\{[0, d] \mid d \in \mathbb{R}_+\}$, $\{[0, d) \mid d \in \bar{\mathbb{R}}_+\}$ and $\{[0, d] \mid d \in \bar{\mathbb{R}}_+\}$. By $X \amalg Y$ we denote the disjoint union, which is the categorical coproduct in the category of sets, with the corresponding left and right injections $\mathsf{inl}: X \to X \amalg Y$, $\mathsf{inr}: Y \to X \amalg Y$. To reduce clutter, we often use the plain union $X \cup Y$ in place of $X \amalg Y$ if $X$ and $Y$ are disjoint by construction.
By $a \triangleleft b \triangleright c$ we denote the case distinction construct: $a$ if $b$ is true and $c$ otherwise. By $!$ we denote the empty function, i.e. the function with empty domain. For the sake of succinctness, we use the notation $e^t$ for the function application $e(t)$ when $t$ is a real value.

## 3 An imperative hybrid while-language and its semantics
This section introduces the syntax and operational semantics of our language. We first fix a stock of $n$ variables $\mathcal{X} = \{x_1, \dots, x_n\}$ over which we build atomic programs, according to the grammar

$$
\begin{aligned}
At(\mathcal{X}) &\ni x := t \mid x'_1 = t_1, \dots, x'_n = t_n \ \texttt{for}\ t \\
LTerm(\mathcal{X}) &\ni r \mid r \cdot x \mid t+s
\end{aligned}
$$
where $x \in \mathcal{X}$, $r \in \mathbb{R}$, $t_i, t, s \in LTerm(\mathcal{X})$. An atomic program is thus either a classical assignment $x := t$ or a differential statement $x'_1 = t_1, \dots, x'_n = t_n$ for $t$. The latter reads as "run the system of differential equations $x'_1 = t_1, \dots, x'_n = t_n$ for $t$ time units". We then define the while-language via the grammar

$$ Prog(\mathcal{X}) \ni a \mid p;\, q \mid \texttt{if}\ b\ \texttt{then}\ p\ \texttt{else}\ q \mid \texttt{while}\ b\ \texttt{do}\ \{\, p \,\} $$
where $p, q \in Prog(\mathcal{X})$, $a \in At(\mathcal{X})$ and $b$ is an element of the free Boolean algebra generated by the terms $t \leqslant s$ and $t \geqslant s$. The expression `wait t` (from the previous section) is encoded as the differential statement $x'_1 = 0, \dots, x'_n = 0$ for $t$.
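The grammar above translates directly into an abstract syntax tree. The sketch below (class names and string-typed terms are our own simplification) also shows the encoding of `wait t` as a vanishing differential statement:

```python
from dataclasses import dataclass

@dataclass
class Assign:            # x := t
    x: str
    t: str

@dataclass
class Diff:              # x1' = t1, ..., xn' = tn  for  d
    eqs: dict
    dur: float

@dataclass
class Seq:               # p ; q
    p: object
    q: object

@dataclass
class If:                # if b then p else q
    b: str
    p: object
    q: object

@dataclass
class While:             # while b do { p }
    b: str
    p: object

def wait(d, variables):
    """wait d, encoded as the differential statement x1' = 0, ..., xn' = 0 for d."""
    return Diff({x: "0" for x in variables}, d)

# The cruise controller of Section 1 as an AST:
cruise = While("true", If("v <= 10", Diff({"v": "1"}, 1.0), Diff({"v": "-1"}, 1.0)))
```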
---PAGE_BREAK---
*Remark 1.* The systems of differential equations that our language allows are always linear. This is not to say that we could not consider more expressive systems; in fact, the language could straightforwardly be extended in this direction, for its semantics (presented below) is not impacted by the specific choice of solvable systems of differential equations. Here, however, we do not focus on such choices regarding the expressivity of continuous dynamics, and concentrate instead on a core hybrid semantics on which to study the fundamentals of hybrid programming.
In the sequel we abbreviate differential statements $x_1' = t_1, \dots, x_n' = t_n$ for $t$ as $\bar{x}' = \bar{t}$ for $t$, where $\bar{x}'$ and $\bar{t}$ abbreviate the corresponding vectors of variables $x_1', \dots, x_n'$ and linear-combination terms $t_1, \dots, t_n$. We call functions of type $\sigma: \mathcal{X} \to \mathbb{R}$ *environments*; they map variables to the respective valuations. We use the notation $\sigma\nabla[\bar{v}/\bar{x}]$ to denote the environment that maps each $x_i$ in $\bar{x}$ to $v_i$ in $\bar{v}$ and the rest of the variables in the same way as $\sigma$. Finally, we denote by $\phi_{\sigma}^{\bar{x}'=\bar{t}}: [0, \infty) \to \mathbb{R}^n$ the solution of a system of differential equations $\bar{x}' = \bar{t}$ with $\sigma$ determining the initial condition. When clear from context, we omit the superscript in $\phi_{\sigma}^{\bar{x}'=\bar{t}}$. For a linear-combination term $t$, the expression $t\sigma$ denotes the corresponding interpretation according to $\sigma$, and analogously for $b\sigma$ where $b$ is a Boolean expression.
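Since the semantics only needs the solution trajectory $\phi_\sigma$ evaluated at time instants, a numerical stand-in suffices for illustration. The sketch below uses forward Euler, with right-hand sides given as Python functions rather than terms; both choices are ours, not the paper's:

```python
def solve(eqs, sigma, dur, h=1e-3):
    """Forward-Euler approximation of phi_sigma for a system x_i' = t_i,
    run for dur time units (eqs maps variable names to rhs functions)."""
    state, t = dict(sigma), 0.0
    while t < dur:
        step = min(h, dur - t)
        deriv = {x: rhs(state) for x, rhs in eqs.items()}
        for x in eqs:
            state[x] += step * deriv[x]
        t += step
    return state

# v' = 1 for 1, from v = 5: the exact solution gives v = 6 at the end.
end = solve({"v": lambda s: 1.0}, {"v": 5.0}, 1.0)
assert abs(end["v"] - 6.0) < 1e-6
```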
We now introduce a small-step operational semantics for our language. Intuitively, the semantics establishes a set of rules for reducing a triple $\langle program \rangle$ to an environment, via a *finite* sequence of reduction steps. The rules are presented in Figure 2. The terminal configuration $\langle skip, \sigma, t \rangle$ represents a successful end of a computation, which can then be fed into another computation (via rule (**seq-skip**→)). Contrastingly, $\langle stop, \sigma, t \rangle$ is a terminating configuration that inhibits the execution of subsequent computations. The latter is reflected in rules (**diff-stop**→) and (**seq-stop**→) which entail that, depending on the chosen time instant, we do not need to evaluate the whole program, but merely a part of it – consequently, infinite while-loops need not yield infinite reduction sequences (as explained in Remark 2). Note that time $t$ is consumed when applying the rules (**diff-stop**→) and (**diff-seq**→) in correspondence to the duration of the differential statement at hand. The rules (**seq**) and (**seq-skip**→) correspond to the standard rules of operational semantics for while languages over an imperative store [37].
*Remark 2.* Putatively infinite while-loops do not necessarily yield infinite reduction sequences. Take for example the while-loop below, whose iterations always have duration 1.
$$ x := 0;\ \mathtt{while}\ \mathtt{true}\ \mathtt{do}\ \{\, x := x + 1;\ \mathtt{wait}\ 1 \,\} \qquad (1) $$
It yields a finite reduction sequence for the time instant 1/2, as shown below:
$$
\begin{aligned}
& x := 0;\ \mathtt{while}\ \mathtt{true}\ \mathtt{do}\ \{\, x := x + 1;\ \mathtt{wait}\ 1 \,\}, \sigma, 1/2 \rightarrow \\
& \quad \{ \text{by the rules } (\mathbf{asg}{\rightarrow}) \text{ and } (\mathbf{seq\text{-}skip}{\rightarrow}) \} \\
& \mathtt{while}\ \mathtt{true}\ \mathtt{do}\ \{\, x := x + 1;\ \mathtt{wait}\ 1 \,\}, \sigma\nabla[0/x], 1/2 \rightarrow \\
& \quad \{ \text{by the rule } (\mathbf{wh\text{-}true}{\rightarrow}) \} \\
& x := x + 1;\ \mathtt{wait}\ 1;\ \mathtt{while}\ \mathtt{true}\ \mathtt{do}\ \{\, x := x + 1;\ \mathtt{wait}\ 1 \,\}, \sigma\nabla[0/x], 1/2 \rightarrow \\
& \quad \{ \text{by the rules } (\mathbf{asg}{\rightarrow}) \text{ and } (\mathbf{seq\text{-}skip}{\rightarrow}) \} \\
& \mathtt{wait}\ 1;\ \mathtt{while}\ \mathtt{true}\ \mathtt{do}\ \{\, x := x + 1;\ \mathtt{wait}\ 1 \,\}, \sigma\nabla[0 + 1/x], 1/2 \rightarrow \\
& \quad \{ \text{by the rules } (\mathbf{diff\text{-}stop}{\rightarrow}) \text{ and } (\mathbf{seq\text{-}stop}{\rightarrow}) \} \\
& \mathit{stop}, \sigma\nabla[0 + 1/x], 0
\end{aligned}
$$

Fig. 2: Small-step Operational Semantics
The gist is that to evaluate program (1) at time instant $1/2$, one only needs to unfold the underlying loop until surpassing $1/2$ in terms of execution time. Note that if the wait statement is removed from the program then the reduction sequence would not terminate, intuitively because all iterations would be instantaneous and thus the total execution time of the program would never reach $1/2$.
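The unfolding argument can be mimicked concretely. The sketch below (an illustration under our own encoding, not part of the paper's formal development) evaluates program (1) at a given time instant by consuming the remaining time at each wait statement:

```python
def run_program_1(t: float):
    """Evaluate `x := 0; while true do { x := x + 1; wait 1 }` at time t.

    Mimics the small-step rules: each `wait 1` either consumes one second
    of the requested time or, if less than one second remains, stops the
    computation there (as in (diff-stop) / (seq-stop))."""
    env = {"x": 0}                  # effect of x := 0
    remaining = t
    while True:                     # while true do { ... }
        env["x"] += 1               # x := x + 1
        if remaining < 1:           # wait 1 outlasts the remaining time:
            return "stop", env, 0   # terminate with time 0
        remaining -= 1              # otherwise consume one time unit
```

For instance, `run_program_1(0.5)` stops after one unfolding with `x = 1`, matching the derivation above; without the wait statement the loop would never consume time and the unfolding would not terminate.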
The following theorem entails that our semantics is deterministic, which is instrumental for our implementation.
**Theorem 1.** For every program *p*, environment *σ*, and time instant *t* there is at most one applicable reduction rule.
Let $\to^*$ be the reflexive-transitive closure of the reduction relation $\to$ that was previously presented.
**Corollary 1.** For every program term p, environments σ, σ', σ'', time instants t, t', t'', and termination flags s, s' ∈ {skip, stop}, if p, σ, t →* s, σ', t' and p, σ, t →* s', σ'', t'', then the equations s = s', σ' = σ'' and t' = t'' must hold.
*Proof.* Follows by induction on the number of reduction steps and Theorem 1. □
As alluded to above, the operational semantics treats time as a resource. This is formalised below.
**Proposition 1.** For all program terms $p$ and $q$, environments $\sigma$ and $\sigma'$, and time instants $t$, $t'$ and $s$, if $p, \sigma, t \to q, \sigma', t'$ then $p, \sigma, t+s \to q, \sigma', t'+s$; and if $p, \sigma, t \to \mathit{skip}, \sigma', t'$ then $p, \sigma, t+s \to \mathit{skip}, \sigma', t'+s$.
# 4 Towards Denotational Semantics: The Hybrid Monad
A mainstream subsuming paradigm in denotational semantics is due to Moggi [24,25], who proposed to identify a computational effect of interest as a monad, around which the denotational semantics is built using standard generic mechanisms, prominently provided by category theory. In this section we recall necessary notions and results, motivated by this approach, to prepare ground for our main constructions in the next section.
**Definition 1 (Monad).** A monad $\mathbf{T}$ (on the category of sets and functions) is given by a triple $(T, \eta, (-)^*)$, consisting of an endomap $T$ over the class of all sets, together with a set-indexed class of maps $\eta_X: X \to TX$ and a so-called Kleisli lifting sending each $f: X \to TY$ to $f^*: TX \to TY$ and obeying monad laws: $\eta^* = \text{id}, f^* \cdot \eta = f, (f^* \cdot g)^* = f^* \cdot g^*$ (it follows from this definition that $T$ extends to a functor and $\eta$ to a natural transformation).
A monad morphism $\theta: \mathbf{T} \to \mathbf{S}$ from $(T, \eta^{\mathbf{T}}, (-)^{*_{\mathbf{T}}})$ to $(S, \eta^{\mathbf{S}}, (-)^{*_{\mathbf{S}}})$ is a natural transformation $\theta: T \to S$ such that $\theta \cdot \eta^{\mathbf{T}} = \eta^{\mathbf{S}}$ and $\theta \cdot f^{*_{\mathbf{T}}} = (\theta \cdot f)^{*_{\mathbf{S}}} \cdot \theta$.
We will continue to use bold capitals (e.g. $\mathbf{T}$) for monads over the corresponding endofunctors written as Roman capitals (e.g. $T$).
In order to interpret while-loops one needs additional structure on the monad.
**Definition 2 (Elgot Monad).** A monad $\mathbf{T}$ is called Elgot if it is equipped with an iteration operator $(-)^{\dagger}$ that sends each $f: X \to T(Y \uplus X)$ to $f^{\dagger}: X \to TY$ in such a way that certain established axioms of iteration are satisfied [2,16].
Monad morphisms between Elgot monads are additionally required to preserve iteration: $\theta \cdot f^{\dagger_{\mathbf{T}}} = (\theta \cdot f)^{\dagger_{\mathbf{S}}}$ for $\theta: \mathbf{T} \to \mathbf{S}$, $f: X \to T(Y \uplus X)$.
For a monad $\mathbf{T}$, a map $f: X \to TY$, called a Kleisli map, is roughly to be regarded as the semantics of a program $p$, with $X$ as the semantics of the input and $Y$ as the semantics of the output. For example, with $T$ being the maybe monad $(-) \uplus \{\perp\}$, we obtain semantics of programs as partial functions. Let us record this example in more detail for further reference.
*Example 1 (Maybe Monad M)*. The maybe monad is determined by the following data: $MX = X \uplus \{\perp\}$, the unit is the left injection $\mathrm{inl}: X \to X \uplus \{\perp\}$ and, given $f: X \to Y \uplus \{\perp\}$, $f^*$ is the copairing $[f, \mathrm{inr}]: X \uplus \{\perp\} \to Y \uplus \{\perp\}$.
It follows by general considerations (enrichment of the category of Kleisli maps over complete partial orders) that $\mathbf{M}$ is an Elgot monad with the following iteration operator $(-)^{\sharp}$: given $f: X \to (Y \uplus X) \uplus \{\perp\}$ and $x_0 \in X$, let $x_0, x_1, \dots$ be the longest (finite or infinite) sequence over $X$ constructed inductively in such a way that $f(x_i) = \mathrm{inl}(\mathrm{inr}\, x_{i+1})$. Now, $f^{\sharp}(x_0) = \mathrm{inr} \perp$ if the sequence is infinite or $f(x_i) = \mathrm{inr} \perp$ for some $i$, and $f^{\sharp}(x_0) = \mathrm{inl}\, y$ if for the last element $x_n$ of the sequence, which must exist otherwise, $f(x_n) = \mathrm{inl}\, \mathrm{inl}\, y$.
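Example 1 can be transcribed into a short program; the encoding of the coproducts below (`None` for $\perp$, tagged pairs for the injections) is our own:

```python
# MX = X ⊎ {⊥}: encode inl x as x itself and inr ⊥ as None.
def unit(x):
    return x                        # η = inl

def kleisli(f):
    return lambda mx: None if mx is None else f(mx)   # f* = [f, inr]

# Iteration for f : X -> (Y ⊎ X) ⊎ {⊥}, encoded as ("out", y),
# ("again", x') or None; follows the chain x0, x1, ... of the text.
def iterate(f, x0, fuel=10_000):
    x = x0
    for _ in range(fuel):           # finite fuel approximates detecting
        r = f(x)                    # an infinite chain (result inr ⊥)
        if r is None:
            return None             # f(x_i) = inr ⊥
        tag, v = r
        if tag == "out":
            return v                # f(x_n) = inl inl y
        x = v                       # f(x_i) = inl inr x_{i+1}
    return None
```

For instance, the countdown `f = lambda n: ("out", "done") if n == 0 else ("again", n - 1)` satisfies `iterate(f, 3) == "done"`, while `lambda n: ("again", n)` yields `None`, i.e. divergence.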
Other examples of Elgot monads can be found, e.g., in [16].
The computational effect of *hybridness* can also be captured by a monad, called the *hybrid monad* [12,14], which we recall next (in a slightly different but equivalent form). To that end, we also need to recall *Minkowski addition* for subsets of the set $\bar{\mathbb{R}}_+$ of extended non-negative reals (see Section 2): $A + B = \{a + b \mid a \in A, b \in B\}$, e.g. $[a, b] + [c, d] = [a + c, b + d]$ and $[a, b] + [c, d) = [a + c, b + d)$.
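A minimal sketch of Minkowski addition for the interval shapes used here (our own encoding: a left-closed interval as a pair of endpoints plus a right-closedness flag):

```python
def minkowski(i, j):
    """Minkowski sum of left-closed intervals.

    An interval is (lo, hi, right_closed); e.g. [a,b] + [c,d) = [a+c, b+d),
    so the sum is right-closed exactly when both summands are."""
    (a, b, cb), (c, d, cd) = i, j
    return (a + c, b + d, cb and cd)
```

`minkowski((0, 1, True), (0, 2, False))` gives `(0, 3, False)`, i.e. $[0,1] + [0,2) = [0,3)$.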
**Definition 3 (Hybrid Monad H).** The hybrid monad **H** is defined as follows.
- $HX = \sum_{I \in [0, \bar{R}_+]} X^I \uplus \sum_{I \in [0, \bar{R}_+]} X^I$, i.e. it is a set of trajectories valued on $X$ and with the domain downclosed. For any $p = \mathrm{inj}\langle I, e \rangle \in HX$ with $\mathrm{inj} \in \{\mathrm{inl}, \mathrm{inr}\}$, let us use the notation $p_d = I$, $p_e = e$, the former being the duration of the trajectory and the latter the trajectory itself. Let also $\varepsilon = \langle \emptyset, ! \rangle$.
- $\eta(x) = \text{inl}\langle[0,0], \lambda t. x\rangle$, i.e. $\eta(x)$ is a trajectory of duration 0 that returns $x$.
- given $f: X \to HY$, we define $f^*: HX \to HY$ via the following clauses:
$$
\begin{align*}
f^*(\mathrm{inl}\langle I, e \rangle) &= \mathrm{inj}\langle I + J,\ \lambda t.\, (f(e^t))_e^0 \triangleleft t < d \triangleright (f(e^d))_e^{t-d} \rangle \\
&\qquad \text{if } I' = I = [0, d] \text{ for some } d,\ f(e^d) = \mathrm{inj}\langle J, e' \rangle
\end{align*}
$$
$$
\begin{align*}
f^*(\mathrm{inl}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t.\, (f(e^t))_e^0 \rangle & \text{if } I' \neq I \\
f^*(\mathrm{inr}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t.\, (f(e^t))_e^0 \rangle
\end{align*}
$$
where $I' = \bigcup \{[0,t] \subseteq I | \forall s \in [0,t]. f(e^s) \neq \mathrm{inr} \varepsilon\}$ and $\mathrm{inj} \in \{\mathrm{inl}, \mathrm{inr}\}$.
The definition of the hybrid monad **H** is somewhat intricate, so let us complement it with some explanations (details and further intuitions about the hybrid monad can be found in [12]). The domain $HX$ comprises three types of trajectories, representing different kinds of hybrid computation:
- (closed) convergent: $\text{inl}\langle[0,d],e\rangle \in HX$ (e.g. instant termination $\eta(x)$);
- open divergent: $\text{inr}\langle[0,d),e\rangle \in HX$ (e.g. instant divergence $\text{inr}\epsilon$ or a trajectory $[0,\infty) \rightarrow X$ which represents a computation that runs ad infinitum);
- closed divergent: $\text{inr}\langle[0,d],e\rangle \in HX$ (representing computations that start to diverge precisely after the time instant $d$).
The Kleisli lifting $f^*$ works as follows: for a given trajectory $\mathrm{inj}\langle I, e \rangle$, we first calculate the largest interval $I' \subseteq I$ on which the trajectory $\lambda t.\, f(e^t)$ does not instantly diverge (i.e. $f(e^t) \neq \mathrm{inr}\, \varepsilon$) throughout; hence $I'$ is either $[0, d']$ or $[0, d')$ for some $d'$. Now, the first clause in the definition of $f^*$ corresponds to the successful composition scenario: the argument trajectory $\langle I, e \rangle$ is convergent, and composing $f$ with $e$ as described in the definition of $I'$ does not yield divergence anywhere on $I$. In that case, we essentially concatenate $\langle I, e \rangle$ with $f(e^d)$, the latter being the trajectory computed by $f$ at the last point of $e$. The remaining two clauses correspond to various flavours of divergence, including divergence of the input ($\mathrm{inr}\langle I, e\rangle$) and divergences occurring along $f \cdot e$. Incidentally, this explains how closed divergent trajectories may arise: if $I' = [0, d']$ and $d'$ is properly smaller than $d$, then we diverge precisely *after* $d'$, which is possible e.g. if the program behind $f$ continuously checks a condition which did not fail up until $d'$.
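The successful-composition clause can be illustrated on a simplified model in which every trajectory is total on a closed interval $[0,d]$, so the truncation to $I'$ never triggers; this sketches only the first clause of the lifting, with our own `Traj` encoding:

```python
from typing import Callable, NamedTuple

class Traj(NamedTuple):
    dur: float                       # domain is the closed interval [0, dur]
    at: Callable[[float], object]    # the trajectory e : [0, dur] -> X

def lift(f: Callable[[object], Traj]) -> Callable[[Traj], Traj]:
    """First clause of f*: concatenate the input trajectory with the one
    f computes at its last point (the durations add, a la Minkowski)."""
    def star(p: Traj) -> Traj:
        q = f(p.at(p.dur))           # f(e^d), the continuation trajectory
        def e(t):
            # (f(e^t))_e^0 if t < d, else (f(e^d))_e^{t-d}
            return f(p.at(t)).at(0.0) if t < p.dur else q.at(t - p.dur)
        return Traj(p.dur + q.dur, e)
    return star
```

With `p = Traj(1.0, lambda t: t)` and `f = lambda x: Traj(2.0, lambda t: x + t)`, the composite `lift(f)(p)` has duration 3.0, follows the initial points of `f` along `p` on $[0,1)$, and then runs `f(1.0)` shifted by 1.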
# 5 Deconstructing the Hybrid Monad
As mentioned in the introduction, in [14] we used **H** for giving semantics to a functional language HYBCORE whose programs are interpreted as morphisms of type $X \to HY$. Here, we are dealing with an imperative language, which from a semantic point of view amounts to fixing a type of states *S*, shared between all programs; the semantics of a program is thus restricted to morphisms of type $S \to HS$. As explained next, this allows us to make do with a simpler monad **H**<sub>S</sub>, globally parametrized by *S*. The new monad **H**<sub>S</sub> has the property that $H_S S$ is naturally isomorphic to $HS$. Apart from being simpler than **H**, the new monad enjoys further benefits; specifically, **H**<sub>S</sub> is a mathematically better behaved structure: e.g., in contrast to **H**, Elgot iteration on **H**<sub>S</sub> is constructed as a least fixed point. Factoring the denotational semantics through **H**<sub>S</sub> thus allows us to bridge the gap to the operational semantics given in Section 3, and facilitates the soundness and adequacy proof in the forthcoming Section 6.
In order to define $H_S$, it is convenient to take a slightly broader perspective. We will also need to make a detour through the topic of ordered monoid modules with certain completeness properties so that we can characterise iteration on $H_S$ as a least fixed point.
**Definition 4 (Monoid Module, Generalized Writer Monad [14]).** Given a (not necessarily commutative) monoid ($\mathbb{M}, +, 0$), a monoid module is a set $\mathbb{E}$ equipped with a map $\triangleright: \mathbb{M} \times \mathbb{E} \to \mathbb{E}$ (monoid action), subject to the laws $0 \triangleright e = e$, $(m+n) \triangleright e = m \triangleright (n \triangleright e)$.
Every monoid-module pair $(\mathbb{M}, \mathbb{E})$ induces a generalized writer monad $\mathbf{T} = (T, \eta, (-)^*)$ with $TX = \mathbb{M} \times X \uplus \mathbb{E}$, $\eta_X(x) = \langle 0, x \rangle$, and
$$f^*(m, x) = (m + n, y) \quad \text{where} \quad m \in \mathbb{M}, x \in X, f(x) = \langle n, y \rangle \in \mathbb{M} \times Y$$
$$f^*(m, x) = m \triangleright e \quad \text{where} \quad m \in \mathbb{M}, x \in X, f(x) = e \in \mathbb{E}$$
$$f^*(e) = e \quad \text{where} \quad e \in \mathbb{E}$$
This generalizes the writer monad ($\mathbb{E} = \emptyset$) and the exception monad ($\mathbb{M} = 1$).
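Definition 4 admits a direct transcription; below, the tags `"ok"`/`"exc"` stand for the two summands of $TX = \mathbb{M} \times X \uplus \mathbb{E}$ (the encoding and helper names are ours), instantiated with the non-negative reals under addition as both monoid and module:

```python
def writer_monad(zero, add, act):
    """Generalized writer monad for a monoid (M, add, zero) acting on a
    module E via act: TX = M x X ⊎ E, as ("ok", m, x) / ("exc", e)."""
    def unit(x):
        return ("ok", zero, x)                 # η(x) = <0, x>
    def kleisli(f):
        def star(t):
            if t[0] == "exc":
                return t                       # f*(e) = e
            _, m, x = t
            r = f(x)
            if r[0] == "exc":
                return ("exc", act(m, r[1]))   # f*(m, x) = m ▷ e
            _, n, y = r
            return ("ok", add(m, n), y)        # f*(m, x) = (m + n, y)
        return star
    return unit, kleisli

# Instance: M = E = non-negative reals, addition as operation and action
unit, kleisli = writer_monad(0.0, lambda a, b: a + b, lambda a, b: a + b)
```

With this instance, composing two programs adds their accumulated durations, and an exception raised by the second program absorbs the duration of the first via the action.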
*Example 2.* A simple motivating example of a monoid-module pair $(\mathbb{M}, \mathbb{E})$ is the pair $(\mathbb{R}_+, \mathbb{R}_+)$ where the monoid operation is addition with 0 as the unit and the monoid action is also addition.
More specifically, we are interested in ordered monoids and (conservatively) complete monoid modules. These are defined as follows.
**Definition 5 (Ordered Monoids, (Conservatively) Complete Monoid Modules [7]).** We call a monoid $(\mathbb{M}, 0, +)$ an ordered monoid if it is equipped with a partial order $\leq$, such that $0$ is the least element of this order and $+$ is right-monotone (but not necessarily left-monotone).
An ordered $\mathbb{M}$-module w.r.t. an ordered monoid $(\mathbb{M}, +, 0, \leq)$ is an $\mathbb{M}$-module $(\mathbb{E}, \triangleright)$ together with a partial order $\sqsubseteq$ and a least element $\perp$, such that $\triangleright$ is monotone on the right and $(- \triangleright \perp)$ is monotone, i.e.
$$
\overline{\perp \sqsubseteq x} \qquad \frac{x \sqsubseteq y}{a \triangleright x \sqsubseteq a \triangleright y} \qquad \frac{a \le b}{a \triangleright \perp \sqsubseteq b \triangleright \perp}
$$
We call the last property restricted left monotonicity.
An ordered $\mathbb{M}$-module is $(\omega)$-complete if for every $\omega$-chain $s_1 \sqsubseteq s_2 \sqsubseteq \dots$ in $\mathbb{E}$ there is a least upper bound $\bigsqcup_i s_i$ and $\triangleright$ is continuous on the right, i.e.
$$
\overline{\forall i. s_i \sqsubseteq \bigsqcup_i s_i} \qquad \frac{\forall i. s_i \sqsubseteq x}{\bigsqcup_i s_i \sqsubseteq x} \qquad \overline{a \triangleright \bigsqcup_i s_i \sqsubseteq \bigsqcup_i a \triangleright s_i}
$$
(the law $\bigsqcup_i a \triangleright s_i \sqsubseteq a \triangleright \bigsqcup_i s_i$ is derivable). Such an $\mathbb{M}$-module is conservatively complete if additionally for every $\omega$-chain $a_1 \le a_2 \le \dots$ in $\mathbb{M}$ such that the least upper bound $\bigvee_i a_i$ exists, $(\bigvee_i a_i) \triangleright \perp = \bigsqcup_i a_i \triangleright \perp$.
A homomorphism $h: \mathbb{E} \to \mathbb{F}$ of (conservatively) complete monoid $\mathbb{M}$-modules is required to be monotone and structure-preserving in the following sense: $h(\perp) = \perp$, $h(a \triangleright x) = a \triangleright h(x)$, $h(\bigsqcup_i x_i) = \bigsqcup_i h(x_i)$.
The completeness requirement for $\mathbb{M}$-modules has a standard motivation coming from domain theory, where $\sqsubseteq$ is regarded as an *information order* and completeness is needed to ensure that the relevant semantic domain can accommodate infinite behaviours. The conservativity requirement additionally ensures that the least upper bounds that exist in $\mathbb{M}$ agree with those in $\mathbb{E}$. Our main example is as follows (we will use it for building $\mathbf{H}_S$ and its iteration operator).
**Definition 6 (Monoid Module of Trajectories).** The ordered monoid of finite open trajectories $(\text{Trj}_S, \hat{\wedge}, \langle\emptyset, !\rangle, \leqslant)$ over a given set $S$ is defined as follows: $\text{Trj}_S = \sum_{I \in [0, \bar{R}_+)} S^I$; the unit is the empty trajectory $\varepsilon = \langle\emptyset, !\rangle$; the monoid operation is concatenation of trajectories $\hat{\wedge}$, defined as follows:
$$
\langle[0, d_1), e_1\rangle^{\wedge} \langle[0, d_2), e_2\rangle = \langle[0, d_1 + d_2), \lambda t. e_1^t \triangleleft t < d_1 \triangleright e_2^{t-d_1}\rangle.
$$
The relation $\leqslant$ is defined as follows: $\langle[0, d_1), e_1\rangle \leqslant \langle[0, d_2), e_2\rangle$ if $d_1 \leqslant d_2$ and $e_1^t = e_2^t$ for every $t \in [0, d_1)$. We can additionally consider both sets $\sum_{I \in [0, \bar{R}_+)} S^I$ and $\sum_{I \in [0, \bar{R}_+]} S^I$ as $\text{Trj}_S$-modules, by defining the monoid action $\triangleright$ also as concatenation of trajectories and by equipping these sets with the order $\sqsubseteq$: $\langle I_1, e_1\rangle \sqsubseteq \langle I_2, e_2\rangle$ if $I_1 \subseteq I_2$ and $e_1^t = e_2^t$ for all $t \in I_1$.
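Concatenation $\hat{\wedge}$ and the prefix relation $\leqslant$ from Definition 6 can be sketched as follows (trajectories as duration-function pairs on $[0,d)$; the finite sampling in the prefix check is our own approximation of pointwise equality):

```python
class OTraj:
    """Finite open trajectory <[0, d), e> over a state set S."""
    def __init__(self, dur, at):
        self.dur, self.at = dur, at

EPS = OTraj(0.0, None)               # the empty trajectory <∅, !>, the unit

def concat(p, q):
    """<[0,d1),e1> ^ <[0,d2),e2>: run e1 first, then e2 shifted by d1."""
    if p.dur == 0.0:
        return q
    if q.dur == 0.0:
        return p
    return OTraj(p.dur + q.dur,
                 lambda t: p.at(t) if t < p.dur else q.at(t - p.dur))

def is_prefix(p, q, samples=100):
    """p ⩽ q: d1 <= d2 and e1 = e2 on [0, d1), checked on a finite grid."""
    if p.dur > q.dur:
        return False
    if p.dur == 0.0:
        return True                  # the empty trajectory precedes all
    return all(p.at(t) == q.at(t)
               for t in (p.dur * k / samples for k in range(samples)))
```

The order $\sqsubseteq$ on the module sides extends `is_prefix` to pairs carrying a value, in the same prefix fashion.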
Consider the following functors:
$$
H'_S X = \sum_{I \in [0, \bar{R}_+)} S^I \times X \uplus \sum_{I \in [0, \bar{R}_+)} S^I \tag{2}
$$

$$
H_S X = \sum_{I \in [0, \bar{R}_+)} S^I \times X \uplus \sum_{I \in [0, \bar{R}_+]} S^I \tag{3}
$$
Both of them extend to monads $\mathbf{H}'_S$ and $\mathbf{H}_S$ as they are instances of Definition 4. Moreover, it is laborious but straightforward to prove that both $H'_S X$ and $H_S X$ are conservatively complete $\text{Trj}_S$-modules on $X$ [7], i.e. conservatively complete $\text{Trj}_S$-modules equipped with distinguished maps $\eta: X \to H'_S X$, $\eta: X \to H_S X$. In each case $\eta$ sends $x \in X$ to $\langle \varepsilon, x \rangle$. The partial order on $H'_S X$ (which we will use for obtaining the least upper bound of a certain sequence of approximations) is given by the clauses below and relies on the previous order $\leqslant$ on trajectories:
$$
\frac{\langle I, e \rangle \le \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e', x \rangle}
\qquad
\frac{\langle I, e \rangle \le \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e' \rangle}
$$
The monad given by (2) admits a sharp characterization, which is an instance of a general result [7]. In more detail:
**Proposition 2.** The pair $(H'_S X, \eta)$ is a free conservatively complete $\text{Trj}_S$-module on $X$, i.e. for every conservatively complete $\text{Trj}_S$-module $\mathbb{E}$ and every map $f: X \to \mathbb{E}$, there is a unique homomorphism $\hat{f}: H'_S X \to \mathbb{E}$ such that $\hat{f} \cdot \eta = f$.
Intuitively, Proposition 2 ensures that $H'_S X$ is the least conservatively complete $\text{Trj}_S$-module generated by $X$. This characterization yields a construction of an iteration operator on $\mathbf{H}'_S$ as a least fixpoint. This, in fact, also transfers to $\mathbf{H}_S$ (as detailed in the proof of the following theorem).
**Theorem 2.** Both $\mathbf{H}'_S$ and $\mathbf{H}_S$ are Elgot monads, for which $f^\dagger$ is computed as the least fixpoint of the $\omega$-continuous endomap $g \mapsto [\eta, g]^* \cdot f$ over the function spaces $X \to H'_S Y$ and $X \to H_S Y$ correspondingly.
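The least-fixpoint construction of Theorem 2 can be illustrated in the simpler maybe-monad setting (also an Elgot monad): start from the everywhere-$\perp$ map $g_0$ and iterate $g \mapsto [\eta, g]^* \cdot f$. A finite state space and an iteration cutoff stand in for the $\omega$-limit; the encoding is ours:

```python
def elgot_by_kleene(f, xs, steps=1000):
    """Approximate f† for f : X -> (Y ⊎ X) ⊎ {⊥} over a finite X.

    Encoding: ("out", y) = inl inl y, ("again", x') = inl inr x', None = ⊥.
    g_0 = λx.⊥ and g_{n+1} = [η, g_n]* . f, i.e. take one f-step and
    consult the previous approximation on the "again" branch."""
    g = {x: None for x in xs}            # g_0: everywhere bottom
    for _ in range(steps):
        g_next = {}
        for x in xs:
            r = f(x)
            if r is None:
                g_next[x] = None         # f diverges at x
            elif r[0] == "out":
                g_next[x] = r[1]         # η-side of the copairing
            else:
                g_next[x] = g[r[1]]      # recurse into g_n
        if g_next == g:                  # least fixpoint reached
            return g
        g = g_next
    return g
```

For a countdown map this converges to a total result everywhere, while for `lambda n: ("again", n)` the approximations stay at the bottom map, matching divergence.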
In this section's remainder, we formally connect the monad **H**<sub>S</sub> with the monad **H**, the latter introduced in our previous work and used for providing a semantics to the functional language HYBCORE. In the following section we provide a semantics for the current imperative language via the monad **H**<sub>S</sub>. Specifically, in this section we will show how to build **H** from **H**<sub>S</sub> by considering additional semantic ingredients on top of the latter.
Let us subsequently write $\eta^S$, $(-)^{*_S}$ and $(-)^{\dagger_S}$ for the unit, the Kleisli lifting and the Elgot iteration of $\mathbf{H}_S$. Note that $S, X \mapsto H_S X$ is a parametrized monad in the sense of Uustalu [35]; in particular, $H_S X$ is functorial in $S$ and for every $f: S \to S'$, $H_f: \mathbf{H}_S \to \mathbf{H}_{S'}$ is a monad morphism.
Then we introduce the following technical natural transformations $\iota: H_S X \to X \uplus (S \uplus \{\perp\})$ and $\tau: H_{S \uplus Y} X \to H_S X$. First, let us define $\iota$:
$$
\iota(I, e, x) = \begin{cases} \operatorname{inr} \operatorname{inl} e^0, & \text{if } I \neq \emptyset \\ \operatorname{inl} x, & \text{otherwise} \end{cases} \qquad \iota(I, e) = \begin{cases} \operatorname{inr} \operatorname{inl} e^0, & \text{if } I \neq \emptyset \\ \operatorname{inr} \operatorname{inr} \perp, & \text{otherwise} \end{cases}
$$
In words: $\iota$ returns the initial point for non-zero-length trajectories, and otherwise returns either an accompanying value from $X$ or $\perp$, depending on whether the given trajectory is convergent or divergent. The functor $(-) \uplus E$ for every $E$ extends to a monad, called the *exception monad*. The following is easy to show for $\iota$.
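Under the same tagged encoding as before (ours, not the paper's), $\iota$ reads off as:

```python
def iota(p):
    """ι : H_S X -> X ⊎ (S ⊎ {⊥}).

    p = ("conv", dur, e, x): convergent trajectory on [0, dur) with
    result value x; p = ("div", dur, e): divergent trajectory; dur == 0
    encodes the empty domain.  Result tags name the coproduct injections."""
    if p[0] == "conv":
        _, dur, e, x = p
        return ("inr-inl", e(0.0)) if dur > 0 else ("inl", x)
    _, dur, e = p
    return ("inr-inl", e(0.0)) if dur > 0 else ("inr-inr", "bottom")
```

The first branch of each case is the "initial point" behaviour; the two zero-length branches distinguish the accompanying value from divergence.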
**Lemma 1.** For every $S$, $\iota: H_S \to (-) \uplus (S \uplus \{\perp\})$ is a monad morphism.
Next we define $\tau : H_{S \uplus Y} X \to H_S X$:
$$
\tau(I, e, x) = \begin{cases} \langle I, e, x \rangle, & \text{if } I = I' \\ \langle I', e' \rangle, & \text{otherwise} \end{cases} \qquad \tau(I, e) = \langle I', e' \rangle
$$
where $\langle I', e' \rangle$ is the largest trajectory such that $e^t = \mathrm{inl}\, e'^t$ for all $t \in I'$.
$$
\begin{align*}
[\mathbf{x} := \mathbf{t}](\sigma) &= \eta(\sigma \nabla [\mathbf{t}\sigma/\mathbf{x}]) \\
[\bar{\mathbf{x}}' = \bar{u} \text{ for } \mathbf{t}](\sigma) &= \langle [0, \mathbf{t}\sigma),\ \lambda t.\, \sigma \nabla [\phi_{\sigma}(t)/\bar{\mathbf{x}}],\ \sigma \nabla [\phi_{\sigma}(\mathbf{t}\sigma)/\bar{\mathbf{x}}] \rangle \\
[\mathbf{p}; \mathbf{q}](\sigma) &= [\mathbf{q}]^*([\mathbf{p}](\sigma)) \\
[\texttt{if } \mathbf{b} \texttt{ then } \mathbf{p} \texttt{ else } \mathbf{q}](\sigma) &= [\mathbf{p}](\sigma) \triangleleft \mathbf{b}\sigma \triangleright [\mathbf{q}](\sigma) \\
[\texttt{while } \mathbf{b} \texttt{ do } \{\mathbf{p}\}](\sigma) &= (\lambda \sigma.\, (\hat{H} \operatorname{inr})([\mathbf{p}](\sigma)) \triangleleft \mathbf{b}\sigma \triangleright \eta(\operatorname{inl} \sigma))^\dagger(\sigma)
\end{align*}
$$
Fig. 3: Denotational semantics.
**Lemma 2.** For all $S$ and $Y$, $\tau: H_{S \uplus Y} \to H_S$ is a monad morphism.
We now arrive at the main result of this section.
**Theorem 3.** The correspondence $S \mapsto H_S S$ extends to an Elgot monad as follows:
$$
\begin{align*}
\eta(x \in S) &= \eta^S(x), \\
(f: X \to H_S S)^* &= \big(H_X X \xrightarrow{\ H_{\iota' \cdot f}\ } H_{S \uplus \{\perp\}} X \xrightarrow{\ \tau\ } H_S X \xrightarrow{\ f^{*_S}\ } H_S S\big), \\
(f: X \to H_{S \uplus X}(S \uplus X))^{\dagger} &= \big(X \xrightarrow{\ f^{\dagger_{S \uplus X}}\ } H_{S \uplus X} S \xrightarrow{\ H_{[\mathrm{inl},\, (\iota' \cdot f)^{\sharp}]}\ } H_{S \uplus \{\perp\}} S \xrightarrow{\ \tau\ } H_S S\big).
\end{align*}
$$
where $\iota' = [\mathrm{inl}, \mathrm{id}] \cdot \iota : H_S S \to S \uplus \{\perp\}$ and $(-)^\sharp : (X \to (S \uplus X) \uplus \{\perp\}) \to (X \to S \uplus \{\perp\})$ is the iteration operator of the maybe monad $(-) \uplus \{\perp\}$ (as in Example 1). Moreover, the monad thus defined is isomorphic to $\mathbf{H}$.
*Proof (Proof Sketch).* It is first verified that the monad axioms are satisfied, using abstract properties of $\iota$ and $\tau$, mainly provided by Lemmas 1 and 2. Then the isomorphism $\theta: H_S S \cong HS$ is defined as expected: $\theta(\langle[0, d), e, x\rangle) = \mathrm{inl}\langle[0, d], \hat{e}\rangle$ where $\hat{e}^t = e^t$ for $t \in [0, d)$ and $\hat{e}^d = x$; and $\theta(\langle I, e\rangle) = \mathrm{inr}\langle I, e\rangle$. It is easy to see that $\theta$ respects the unit. The fact that $\theta$ respects Kleisli lifting amounts to a (tedious) verification by case distinction. Checking the formula for $(-)^\dagger$ amounts to transferring the definition of $(-)^\dagger$, as given in previous work [13], along $\theta$. See the full proof in [15]. □
# 6 Soundness and Adequacy
Let us start this section by providing a denotational semantics to our language using the results of the previous section. We will then provide a soundness and adequacy result that formally connects the thus established denotational semantics with the operational semantics presented in Section 3.
First, consider the monad in (3) and fix $S = \mathbb{R}^{\mathcal{X}}$. We denote the obtained instance of $\mathbf{H}_S$ by $\hat{\mathbf{H}}$. Intuitively, we interpret a program $p$ as a map $[[p]] : S \to \hat{H}S$ which, given an environment (a map from variables to values), returns a trajectory over $S$. The definition of $[[p]]$ is inductive over the structure of $p$ and is given in Figure 3.
In order to establish soundness and adequacy between the small-step operational semantics and the denotational semantics, we will use an auxiliary device. Namely, we introduce a *big-step* operational semantics that serves as a midpoint between the two previously introduced semantics. We show that the small-step semantics is equivalent to the big-step one and then establish soundness and adequacy between the big-step semantics and the denotational one. The desired result then follows by transitivity. The big-step rules are presented in Figure 4 and follow the same reasoning as the small-step ones. The expression $p, \sigma, t \Downarrow r, \sigma'$ means that $p$ paired with $\sigma$ evaluates to $r, \sigma'$ at time instant $t$.
Fig. 4: Big-step Operational Semantics
Next, we need the following result to formally connect both styles of operational semantics.
**Lemma 3.** *Given a program p, an environment σ and a time instant t*
1. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow skip, \sigma''$ then $p, \sigma, t \Downarrow skip, \sigma''$;
2. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow stop, \sigma''$ then $p, \sigma, t \Downarrow stop, \sigma''$.
*Proof.* The proof follows by induction over the derivation of the small-step relation. □
**Theorem 4.** *The small-step semantics and the big-step semantics are related as follows. Given a program p, an environment σ and a time instant t*
1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{skip}, \sigma', 0$;
2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{stop}, \sigma', 0.$
*Proof.* The right-to-left direction is obtained by induction over the length of the small-step reduction sequence using Lemma 3. The left-to-right direction follows by induction over the proof of the big-step judgement using Proposition 1. $\square$
Finally, we can connect the operational and the denotational semantics in the expected way.
**Theorem 5 (Soundness and Adequacy).** *Given a program p, an environment σ and a time instant t*
1. $p, \sigma, t \to^* \mathit{skip}, \sigma', 0 \text{ iff } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^\mathcal{X}, \sigma');$
2. $p, \sigma, t \to^* \mathit{stop}, \sigma', 0 \text{ iff either } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'') \text{ or } [\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \text{ and in either case with } t' > t \text{ and } h(t) = \sigma'.$
Here, “soundness” corresponds to the left-to-right directions of the equivalences and “adequacy” to the right-to-left ones.
*Proof.* By Theorem 4, we equivalently replace the goal as follows:
1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma' \text{ iff } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^{\mathcal{X}}, \sigma');$
2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma' \text{ iff either } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'') \text{ or } [\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \text{ and in either case with } t' > t \text{ and } h(t) = \sigma'.$
Then the “soundness” direction is obtained by induction over the derivation of the rules in Fig. 4. The “adequacy” direction follows by structural induction over $p$; for while-loops, we use the fixpoint law $[\eta, f^\dagger]^* \cdot f = f^\dagger$ of Elgot monads. $\square$
|
| 441 |
+
|
| 442 |
+
# 7 Implementation

This section presents our prototype implementation – LINCE – which is available online, both to run on our servers and to be compiled and executed locally (http://arcatools.org/lince). Its architecture is depicted in Figure 5. The dashed rectangles correspond to its main components. The one on the left (Core engine) provides the parser for the while-language and the engine that evaluates hybrid programs using the small-step operational semantics of Section 3. The one on the right (Inspector) depicts trajectories produced by hybrid programs according to parameters specified by the user and provides an interface for evaluating hybrid programs at specific time instants (the initial environment $\sigma: \mathcal{X} \to \mathbb{R}$ is assumed to be the constant-zero function). As already mentioned, plots are generated by automatically evaluating the input program at different time instants. Incoming arrows in the figure denote an input relation and outgoing arrows an output relation. The two main components are further explained below.

---PAGE_BREAK---

Fig. 5: Depiction of LINCE's architecture

**Core engine.** Our implementation extensively uses the computer algebra tool SAGEMATH [31]. This serves two purposes: (1) to solve systems of differential equations (present in hybrid programs); and (2) to correctly evaluate if-then-else statements. Regarding the latter, note that we do not merely use the predicate functions of programming languages for evaluating Boolean conditions, essentially because such functions tend to give wrong results in the presence of real numbers (due to finite-precision arithmetic). Instead, LINCE uses SAGEMATH and its ability to perform advanced symbolic manipulation to check whether a Boolean condition is true or not. However, note that this will not always produce an output, fundamentally because solutions of linear differential equations involve transcendental numbers, and real-number arithmetic with such numbers is undecidable [20]. We leave as future work the development of more sophisticated techniques for avoiding errors in the computational evaluation of hybrid programs.

**Inspector.** The user interacts with LINCE at two different stages: (a) when inputting a hybrid program and (b) when inspecting trajectories using LINCE's output interfaces. The latter case consists of adjusting different parameters for observing the generated plots in an optimal way.

**Event-triggered programs.** Observe that the differential statements $x_1' = t_1, \dots, x_n' = t_n$ **for** $t$ are *time-triggered*: they terminate precisely when the time instant $t$ is reached. In the area of hybrid systems it is also usual to consider *event-triggered* programs: those that terminate *as soon as* a specified condition $\psi$ becomes true [38,6,11]. So we next consider atomic programs of the type $x_1' = t_1, \dots, x_n' = t_n$ **until** $\psi$, where $\psi$ is an element of the free Boolean algebra generated by $t \le s$ and $t \ge s$ with $t, s \in \mathrm{LTerm}(X)$, signalling the termination of the program. In general, it is impossible to determine with *exact* precision when such programs terminate (again due to the undecidability of real-number arithmetic with transcendental numbers). A natural option is to tackle this problem by checking the condition $\psi$ periodically, which essentially reduces event-triggered programs to time-triggered ones. The cost is that the evaluation of a program might diverge greatly from the nominal behaviour, as discussed for instance in [4,6], where an analogous approach is described for the well-established simulation tools SIMULINK and MODELICA. In our case, we allow programs of the form $x_1' = t_1, \dots, x_n' = t_n$ **until**$_\epsilon$ $\psi$ in the tool and define them as an abbreviation of `while ¬ψ do { x_1' = t_1, …, x_n' = t_n for ε }`. This sort of abbreviation has the advantage of avoiding spurious evaluations of hybrid programs w.r.t. the established semantics. We could indeed easily allow such event-triggered programs natively in our language (i.e. without resorting to

---PAGE_BREAK---

Fig. 6: Position of the bouncing ball over time (plot on the left); zoomed in position of the bouncing ball at the first bounce (plot on the right).

abbreviations) and extend the semantics accordingly. But we prefer not to do this at the moment, because we wish first to fully understand the ways of limiting spurious computational evaluations arising from event-triggered programs.
*Remark 3.* SIMULINK and MODELICA are powerful tools for simulating hybrid systems, but lack a well-established, formal semantics. This is discussed for example in [3,9], where the authors aim to provide semantics to subsets of SIMULINK and MODELICA. Drawing inspiration from control theory, the language of SIMULINK is circuit-like and block-based; the language of MODELICA is *acausal* and thus particularly useful for modelling electric circuits and the like, which are traditionally described by systems of equations.
*Example 3 (Bouncing Ball)*. As an illustration of the approach described above for event-triggered programs, consider a bouncing ball dropped from a positive height $p$ with no initial velocity $v$. Due to the gravitational acceleration $g$, it falls to the ground and bounces back up, losing part of its kinetic energy in the process. This behaviour can be approximated by the following hybrid program
$$ (p' = v, v' = g \ \mathbf{until}_{0.01}\ p \le 0 \wedge v \le 0);\ (v := v \times -0.5) $$
where $0.5$ is the dampening factor of the ball. We now want to drop the ball from a specific height (e.g. 5 meters) and let it bounce until it stops. Abbreviating the previous program as $b$, this behaviour can be approximated by $p := 5; v := 0;$ while true do $\{\ b\ \}$. Figure 6 presents the trajectory generated by the ball (calculated by LINCE). Note that since $\epsilon = 0.01$ the ball dips below the ground, as shown in Figure 6 on the right. Other examples of event- and time-triggered programs can be seen on LINCE's website.
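To make the $\epsilon$-polling reading of **until**$_{0.01}$ concrete, here is a minimal Python sketch (ours, not LINCE's actual SAGEMATH-based engine) of the falling phase: the dynamics $p' = v$, $v' = g$ are integrated in closed form over each $\epsilon$-step, and the condition $p \le 0 \wedge v \le 0$ is only observed every $\epsilon = 0.01$ seconds. The value $g = -9.8$ is an assumed constant.

```python
# Sketch (not LINCE) of epsilon-polling evaluation of "until_eps":
# each "for eps" step is integrated exactly; the stopping condition
# is checked only at multiples of eps, so the ball overshoots.

EPS = 0.01   # polling period, as in the program above
G = -9.8     # gravitational acceleration (assumed value)

def fall_until_bounce(p, v):
    """Run p' = v, v' = G in exact eps-steps until p <= 0 and v <= 0."""
    while not (p <= 0 and v <= 0):
        p += v * EPS + 0.5 * G * EPS * EPS  # exact solution over one eps step
        v += G * EPS
    return p, v

p, v = 5.0, 0.0                  # p := 5; v := 0
p, v = fall_until_bounce(p, v)   # until_eps (p <= 0 and v <= 0)
v = v * -0.5                     # dampening at the bounce
print(p)  # slightly below 0: the event was detected one polling step late
```

Running this reproduces the effect visible in Figure 6: the detected bounce position is strictly below zero, by an amount bounded by what the ball can travel in one polling period.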
# 8 Conclusions and future work
We introduced small-step and big-step operational semantics for hybrid programs suitable for implementation purposes and provided a denotational counterpart via the notion of Elgot monad. These semantics were then linked by a soundness and adequacy theorem [37]. We regard these results as a stepping stone for developing computational tools and techniques for hybrid programming; which we attested

---PAGE_BREAK---

with the development of LINCE. With this work as a basis, we plan to explore the following research lines in the near future.

**Program equivalence.** Our denotational semantics entails a natural notion of program equivalence (denotational equality) which inherently includes classical laws of iteration and a powerful uniformity principle [33], thanks to the use of Elgot monads. We intend to further explore the equational theory of our language so that we can safely refactor/simplify hybrid programs. Note that the theory includes equational schema like `(x := a; x := b) = x := b` and `(wait a; wait b) = wait (a + b)` thus encompassing not only usual laws of programming but also axiomatic principles behind the notion of time.
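As an informal illustration of such schemas (a toy model of ours, not the Elgot-monad semantics of the paper), one can let a program denote a function from environments to (duration, environment) pairs and test the two quoted laws on sample inputs; note that the assignment law is exercised here with a right-hand side that does not mention `x`.

```python
# Toy denotational model: a program maps an environment to a pair
# (elapsed duration, final environment). Sequencing adds durations
# and threads environments through.

def assign(x, e):          # x := e, where e maps an environment to a value
    return lambda env: (0.0, {**env, x: e(env)})

def wait(a):               # wait a: let duration a pass, leave the store alone
    return lambda env: (a, env)

def seq(p, q):             # p; q
    def run(env):
        d1, env1 = p(env)
        d2, env2 = q(env1)
        return d1 + d2, env2
    return run

env = {"x": 0.0}
lhs = seq(wait(2.0), wait(3.0))(env)
rhs = wait(5.0)(env)
print(lhs == rhs)  # True: (wait 2; wait 3) agrees with wait 5

lhs = seq(assign("x", lambda e: 1.0), assign("x", lambda e: 7.0))(env)
rhs = assign("x", lambda e: 7.0)(env)
print(lhs == rhs)  # True: the first assignment is overwritten
```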
**New program constructs.** Our while-language is intended to be as simple as possible whilst harbouring the core, uncontroversial features of hybrid programming. This was decided so that we could use the language as both a theoretical and practical basis for advancing hybrid programming. A particular case that we wish to explore next is the introduction of new program constructs, including e.g. non-deterministic or probabilistic choice and exception-raising operations. Denotationally, the fact that we used monadic constructions readily provides a palette of techniques for this process, e.g. tensoring and distributive laws [22,23].
**Robustness.** A core aspect of hybrid programming is that programs should be *robust*: small variations in their input should *not* result in big changes in their output [32,21]. We wish to extend LINCE with features for detecting non-robust programs. A main source of non-robustness is the conditional statement `if b then p else q`: very small changes in its input may change the validity of `b` and consequently cause a switch between (possibly very different) execution branches. Currently, we are working on the systematic detection of non-robust conditional statements in hybrid programs by taking advantage of the notion of $\delta$-perturbation [20].
**Acknowledgements** The first author would like to acknowledge support of German Research Council (DFG) under the project A High Level Language for Monad-based Processes (GO 2161/1-2). The second author was financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia, within project POCI-01-0145-FEDER-030947. The third author was partially supported by National Funds through FCT/MCTES, within the CISTER Research Unit (UIDB/04234/2020); by COMPETE 2020 under the PT2020 Partnership Agreement, through ERDF, and by national funds through the FCT, within project POCI-01-0145-FEDER-029946; by the Norte Portugal Regional Operational Programme (NORTE 2020) under the Portugal 2020 Partnership Agreement, through ERDF and also by national funds through the FCT, within project NORTE-01-0145-FEDER-028550; and by the FCT within project ECSEL/0016/2019 and the ECSEL Joint Undertaking (JU) under grant agreement No 876852. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Czech Republic, Germany, Ireland, Italy, Portugal, Spain, Sweden, Turkey.

---PAGE_BREAK---

# References

1. J. Adámek, H. Herrlich, and G. Strecker. *Abstract and concrete categories*. John Wiley & Sons, New York, 1990.

2. J. Adámek, S. Milius, and J. Velebil. Elgot theories: a new perspective on the equational properties of iteration. *Mathematical Structures in Computer Science*, 21(2):417–480, 2011.

3. O. Bouissou and A. Chapoutot. An operational semantics for Simulink's simulation engine. In *ACM SIGPLAN Notices*, vol. 47, pp. 129–138. ACM, 2012.

4. D. Broman. Hybrid simulation safety: Limbos and zero crossings. In *Principles of Modeling*, pp. 106–121. Springer, 2018.

5. Z. Chaochen, C. A. R. Hoare, and A. P. Ravn. A calculus of durations. *Information Processing Letters*, 40(5):269–276, 1991.

6. D. A. Copp and R. G. Sanfelice. A zero-crossing detection algorithm for robust simulation of hybrid systems jumping on surfaces. *Simulation Modelling Practice and Theory*, 68:1–17, 2016.

7. T. L. Diezel and S. Goncharov. Towards constructive hybrid semantics. In Z. M. Ariola, ed., *5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)*, vol. 167 of LIPIcs, pp. 24:1–24:19, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.

8. C. Elgot. Monadic computation and iterative algebraic theories. In *Studies in Logic and the Foundations of Mathematics*, vol. 80, pp. 175–230. Elsevier, 1975.

9. S. Foster, B. Thiele, A. Cavalcanti, and J. Woodcock. Towards a UTP semantics for Modelica. In *International Symposium on Unifying Theories of Programming*, pp. 44–64. Springer, 2016.

10. P. Fritzson. *Principles of Object-Oriented Modeling and Simulation with Modelica 3.3: A Cyber-Physical Approach*. John Wiley & Sons, 2014.

11. R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid dynamical systems. *IEEE Control Systems*, 29(2):28–93, 2009.

12. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. In *29th International Conference on Concurrency Theory, CONCUR 2018*. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2018.

13. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. CoRR, abs/1807.01053, 2018.

14. S. Goncharov and R. Neves. An adequate while-language for hybrid computation. In *Proceedings of the 21st International Symposium on Principles and Practice of Programming Languages 2019*, PPDP '19, pp. 11:1–11:15, New York, NY, USA, 2019. ACM.

15. S. Goncharov, R. Neves, and J. Proença. Implementing hybrid semantics: From functional to imperative. CoRR, abs/2009.14322, 2020.

16. S. Goncharov, L. Schröder, C. Rauch, and M. Piróg. Unifying guarded and unguarded iteration. In *International Conference on Foundations of Software Science and Computation Structures*, pp. 517–533. Springer, 2017.

17. T. A. Henzinger. The theory of hybrid automata. In *LICS'96: Logic in Computer Science, 11th Annual Symposium, New Jersey, USA, July 27-30, 1996*, pp. 278–292. IEEE, 1996.

18. P. Höfner and B. Möller. An algebra of hybrid systems. *The Journal of Logic and Algebraic Programming*, 78(2):74–97, 2009.

19. J. J. Huerta y Munive and G. Struth. Verifying hybrid systems with modal Kleene algebra. In J. Desharnais, W. Guttmann, and S. Joosten, eds., *Relational and Algebraic Methods in Computer Science*, pp. 225–243, Cham, 2018. Springer International Publishing.

---PAGE_BREAK---

20. S. Kong, S. Gao, W. Chen, and E. Clarke. dReach: δ-reachability analysis for hybrid systems. In *International Conference on Tools and Algorithms for the Construction and Analysis of Systems*, pp. 200–205. Springer, 2015.

21. D. Liberzon and A. S. Morse. Basic problems in stability and design of switched systems. *IEEE Control Systems*, 19(5):59–70, 1999.

22. C. Lüth and N. Ghani. Composing monads using coproducts. In M. Wand and S. L. P. Jones, eds., *ICFP'02: Functional Programming, 7th ACM SIGPLAN International Conference, Pittsburgh, USA, October 04-06, 2002*, pp. 133–144. ACM, 2002.

23. E. Manes and P. Mulry. Monad compositions I: general constructions and recursive distributive laws. *Theory and Applications of Categories*, 18(7):172–208, 2007.

24. E. Moggi. Computational lambda-calculus and monads. In *Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS '89), Pacific Grove, California, USA, June 5-8, 1989*, pp. 14–23. IEEE Computer Society, 1989.

25. E. Moggi. Notions of computation and monads. *Information and Computation*, 93(1):55–92, 1991.

26. R. Neves. *Hybrid Programs*. PhD thesis, Minho University, 2018.

27. P. C. Ölveczky and J. Meseguer. Semantics and pragmatics of Real-Time Maude. *Higher-Order and Symbolic Computation*, 20(1-2):161–196, 2007.

28. A. Platzer. Differential dynamic logic for hybrid systems. *Journal of Automated Reasoning*, 41(2):143–189, 2008.

29. A. Platzer. *Logical Analysis of Hybrid Systems: Proving Theorems for Complex Dynamics*. Springer, Heidelberg, 2010.

30. R. R. Rajkumar, I. Lee, L. Sha, and J. Stankovic. Cyber-physical systems: the next computing revolution. In *DAC'10: Design Automation Conference, 47th ACM/IEEE Conference, Anaheim, USA, June 13-18, 2010*, pp. 731–736. IEEE, 2010.

31. W. Stein et al. *Sage Mathematics Software (Version 6.4.1)*. The Sage Development Team, 2015. http://www.sagemath.org/.

32. R. Shorten, F. Wirth, O. Mason, K. Wulff, and C. King. Stability criteria for switched and hybrid systems. *SIAM Review*, 49(4):545–592, 2007.

33. A. Simpson and G. Plotkin. Complete axioms for categorical fixed-point operators. In *Logic in Computer Science, LICS 2000*, pp. 30–41, 2000.

34. K. Suenaga and I. Hasuo. Programming with infinitesimals: A while-language for hybrid system modeling. In *International Colloquium on Automata, Languages, and Programming*, pp. 392–403. Springer, 2011.

35. T. Uustalu. Generalizing substitution. *RAIRO-Theoretical Informatics and Applications*, 37(4):315–336, 2003.

36. R. van Glabbeek. The linear time-branching time spectrum (extended abstract). In *Theories of Concurrency, CONCUR 1990*, vol. 458, pp. 278–297, 1990.

37. G. Winskel. *The Formal Semantics of Programming Languages: An Introduction*. MIT Press, 1993.

38. H. Witsenhausen. A class of hybrid-state continuous-time dynamic systems. *IEEE Transactions on Automatic Control*, 11(2):161–167, 1966.


samples/texts_merged/3226827.md

---PAGE_BREAK---

# EXPLAIN: A Tool for Performing Abductive Inference

Isil Dillig and Thomas Dillig

{idillig, tdillig}@cs.wm.edu

Computer Science Department, College of William & Mary

**Abstract.** This paper describes a tool called EXPLAIN for performing abductive inference. Logical abduction is the problem of finding a simple explanatory hypothesis that explains observed facts. Specifically, given a set of premises Γ and a desired conclusion φ, abductive inference finds a simple explanation ψ such that Γ ∧ ψ |= φ, and ψ is consistent with known premises Γ. Abduction has many useful applications in verification, including inference of missing preconditions, error diagnosis, and construction of compositional proofs. This paper gives a brief tutorial introduction to EXPLAIN and describes the basic inference algorithm.

## 1 Introduction

The fundamental ingredient of automated logical reasoning is *deduction*, which allows deriving valid conclusions from a given set of premises. For example, consider the following set of facts:

(1) $\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x))$

(2) $\forall x. ((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x))$

(3) $\text{duck}(\text{donald})$

Based on these premises, logical deduction allows us to reach the conclusion:

$$ \text{waddle}(\text{donald}) \land \text{quack}(\text{donald}) $$

This form of forward deductive reasoning forms the basis of all SAT and SMT solvers as well as first-order theorem provers and verification tools used today.

A form of logical reasoning complementary to deduction is *abduction*, introduced by Charles Sanders Peirce [1]. Specifically, abduction is a form of backward logical reasoning, which allows inferring likely premises from a given conclusion. Going back to our earlier example, suppose we know premises (1) and (2), and assume that we have observed that the formula $\text{waddle}(\text{donald}) \land \text{quack}(\text{donald})$ is true. Here, since the given premises do not imply the desired conclusion, we would like to find an explanatory hypothesis $\psi$ such that the following deduction is valid:

$$
\begin{array}{c}
\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x)) \\
\forall x. ((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x)) \\
\psi \\
\hline
\text{waddle}(\text{donald}) \land \text{quack}(\text{donald})
\end{array}
$$

---PAGE_BREAK---

The problem of finding a logical formula $\psi$ for which the above deduction is valid is known as *abductive inference*. For our example, many solutions are possible, including the following:

$$
\begin{align*}
\psi_1 &: \text{duck}(\text{donald}) \wedge \neg\text{quack}(\text{donald}) \\
\psi_2 &: \text{waddle}(\text{donald}) \wedge \text{quack}(\text{donald}) \\
\psi_3 &: \text{goose}(\text{donald}) \wedge \text{quack}(\text{donald}) \\
\psi_4 &: \text{duck}(\text{donald})
\end{align*}
$$

While all of these solutions make the deduction valid, some of these solutions are more desirable than others. For example, $\psi_1$ contradicts known facts and is therefore a useless solution. On the other hand, $\psi_2$ simply restates the desired conclusion, and despite making the deduction valid, gets us no closer to explaining the observation. Finally, $\psi_3$ and $\psi_4$ neither contradict the premises nor restate the conclusion, but, intuitively, we prefer $\psi_4$ over $\psi_3$ because it makes fewer assumptions.

At a technical level, given premises $\Gamma$ and desired conclusion $\phi$, abduction is the problem of finding an explanatory hypothesis $\psi$ such that:

(1) $\Gamma \wedge \psi \models \phi$

(2) $\Gamma \wedge \psi \nvDash \text{false}$

Here, the first condition states that $\psi$, together with known premises $\Gamma$, entails the desired conclusion $\phi$. The second condition stipulates that $\psi$ is consistent with known premises. As illustrated by the previous example, there are many solutions to a given abductive inference problem, but the most desirable solutions are usually those that are as simple and as general as possible.
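For intuition, conditions (1) and (2) can be checked by brute force when everything is purely propositional. The sketch below is an illustration of ours (not EXPLAIN's decision procedure, which handles Presburger arithmetic as well), encoding the duck/goose example from the introduction.

```python
# Brute-force check of the two abduction conditions over propositional
# variables: entailment Gamma /\ psi |= phi, and consistency of Gamma /\ psi.
from itertools import product

def assignments(vars_):
    for bits in product([False, True], repeat=len(vars_)):
        yield dict(zip(vars_, bits))

def is_abductive_solution(gamma, psi, phi, vars_):
    entails = all(phi(a) for a in assignments(vars_) if gamma(a) and psi(a))
    consistent = any(gamma(a) and psi(a) for a in assignments(vars_))
    return entails and consistent

# Premises: duck => quack and (duck \/ goose) => waddle; goal: waddle /\ quack.
V = ["duck", "goose", "quack", "waddle"]
gamma = lambda a: (not a["duck"] or a["quack"]) and \
                  (not (a["duck"] or a["goose"]) or a["waddle"])
phi = lambda a: a["waddle"] and a["quack"]

print(is_abductive_solution(gamma, lambda a: a["duck"], phi, V))  # psi4: True
print(is_abductive_solution(gamma,
      lambda a: a["duck"] and not a["quack"], phi, V))            # psi1: False
```

As expected, $\psi_4$ passes both conditions, while $\psi_1$ fails condition (2) because it contradicts the premise $\text{duck}(x) \Rightarrow \text{quack}(x)$.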

Recently, abductive inference has found many useful applications in verification, including inference of missing function preconditions [2, 3], diagnosis of error reports produced by verification tools [4], and computation of under-approximations [5]. Furthermore, abductive inference has also been used for inferring specifications of library functions [6] and for automatically synthesizing circular compositional proofs of program correctness [7].

In this paper, we describe our tool, called **EXPLAIN**, for performing logical abduction in the combined theory of Presburger arithmetic and propositional logic. The solutions computed by EXPLAIN are both simple and general: EXPLAIN always yields a logically weakest solution containing the fewest possible variables.

## 2 A Tutorial Introduction to EXPLAIN

The EXPLAIN tool is part of the SMT solver MISTRAL, which is available at http://www.cs.wm.edu/~tdillig/mistral under a GPL license. MISTRAL is written in C++ and provides a C++ interface for EXPLAIN. In this section, we give a brief tutorial on how to solve abductive inference problems using EXPLAIN.

As an example, consider the abduction problem defined by the premises $x \le 0$ and $y > 1$ and the desired conclusion $2x - y + 3z \le 10$ in the theory of linear integer arithmetic. In other words, we want to find a simple formula $\psi$ such that:

$$
\begin{array}{l}
x \le 0 \land y > 1 \land \psi \models 2x - y + 3z \le 10 \\
x \le 0 \land y > 1 \land \psi \not\models \text{false}
\end{array}
$$

---PAGE_BREAK---

```
 1. Term* x = VariableTerm::make("x");
 2. Term* y = VariableTerm::make("y");
 3. Term* z = VariableTerm::make("z");
 4. Constraint c1(x, ConstantTerm::make(0), ATOM_LEQ);
 5. Constraint c2(y, ConstantTerm::make(1), ATOM_GT);
 6. Constraint premises = c1 & c2;
 7. map<Term*, long int> elems;
 8. elems[x] = 2;
 9. elems[y] = -1;
10. elems[z] = 3;
11. Term* t = ArithmeticTerm::make(elems);
12. Constraint conclusion(t, ConstantTerm::make(10), ATOM_LEQ);
13. Constraint explanation = conclusion.abduce(premises);
14. cout << "Explanation: " << explanation << endl;
```

Fig. 1: C++ code showing how to use EXPLAIN for performing abduction

Figure 1 shows C++ code for using EXPLAIN to solve the above abductive inference problem. Here, lines 1-12 construct the constraints used in the example, while line 13 invokes the **abduce** method of EXPLAIN for performing abduction. Lines 1-3 construct variables *x*, *y*, *z*, and lines 4 and 5 form the constraints *x* ≤ 0 and *y* > 1 respectively. In MISTRAL, the operators &, |, ! are overloaded and are used for conjoining, disjoining, and negating constraints respectively. Therefore, line 6 constructs the premise *x* ≤ 0 ∧ *y* > 1 by conjoining c1 and c2. Lines 7-12 construct the desired conclusion 2*x* − *y* + 3*z* ≤ 10. For this purpose, we first construct the arithmetic term 2*x* − *y* + 3*z* (lines 7-11). An ArithmeticTerm consists of a map from terms to coefficients; for instance, for the term 2*x* − *y* + 3*z*, the coefficients of *x*, *y*, *z* are specified as 2, −1, 3 in the elems map respectively.
The more interesting part of Figure 1 is line 13, where we invoke the **abduce** method to compute a solution to our abductive inference problem. For this example, the solution computed by EXPLAIN (and printed out at line 14) is *z* ≤ 4. It is easy to confirm that *z* ≤ 4 ∧ *x* ≤ 0 ∧ *y* > 1 logically implies 2*x* − *y* + 3*z* ≤ 10 and that *z* ≤ 4 is consistent with our premises.
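As a quick, non-exhaustive sanity check of this solution, one can search a finite integer grid for counterexamples to the entailment. This is evidence rather than proof (the actual claim ranges over all integers), and the grid bounds below are our own choice.

```python
# Finite-grid sanity check that z <= 4 together with the premises
# x <= 0 and y > 1 implies 2x - y + 3z <= 10, and that z <= 4 is
# consistent with the premises.
from itertools import product

R = range(-20, 21)
violations = [
    (x, y, z)
    for x, y, z in product(R, R, R)
    if x <= 0 and y > 1 and z <= 4 and not (2 * x - y + 3 * z <= 10)
]
print(len(violations))  # 0: no counterexample on the grid

# Consistency: some grid point satisfies both premises and explanation.
print(any(x <= 0 and y > 1 and z <= 4 for x, y, z in product(R, R, R)))  # True
```

Indeed, over the integers the premises give $y \ge 2$, so $2x - y + 3z \le 0 - 2 + 12 = 10$ whenever $x \le 0$ and $z \le 4$.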
In general, the abductive solutions computed by EXPLAIN have two theoretical guarantees. First, they contain as few variables as possible. For instance, in our example, although $z - x \leq 4$ is also a valid solution to the abduction problem, EXPLAIN always yields a solution with the fewest variables, because such solutions are generally simpler and more concise. Second, among the class of solutions that contain the same set of variables, EXPLAIN always yields the

---PAGE_BREAK---

logically weakest explanation. For instance, in our example, while $z = 0$ is also a valid solution to the abduction problem, it is logically stronger than $z \le 4$. Intuitively, logically weak solutions to the abduction problem are preferable because they make fewer assumptions and are therefore more likely to be true.

## 3 Algorithm for Performing Abductive Inference

In this section, we describe the algorithm used in EXPLAIN for performing abductive inference. First, let us observe that the entailment $\Gamma \wedge \psi \models \phi$ can be rewritten as $\psi \models \Gamma \Rightarrow \phi$. Furthermore, in addition to entailing $\Gamma \Rightarrow \phi$, we want $\psi$ to obey the following three requirements:

1. The solution $\psi$ should be consistent with $\Gamma$, because an explanation that contradicts known premises is not useful.

2. To ensure the simplicity of the explanation, $\psi$ should contain as few variables as possible.

3. To capture the generality of the abductive explanation, $\psi$ should be no stronger than any other solution $\psi'$ satisfying the first two requirements.

Now, consider a minimum satisfying assignment (MSA) of $\Gamma \Rightarrow \phi$. An MSA of a formula $\varphi$ is a partial satisfying assignment of $\varphi$ that contains as few variables as possible. The formal definition of MSAs as well as an algorithm for computing them are given in [8]. Clearly, an MSA $\sigma$ of $\Gamma \Rightarrow \phi$ entails $\Gamma \Rightarrow \phi$ and satisfies condition (2). Unfortunately, an MSA of $\Gamma \Rightarrow \phi$ does not satisfy condition (3), as it is a logically strongest solution containing a given set of variables.
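For intuition, in the purely propositional case an MSA can be found by brute force, trying partial assignments of increasing size until one forces the formula true under every completion. This sketch is only an illustration of the definition, not the efficient algorithm of [8].

```python
# Brute-force MSA: smallest partial assignment that makes the formula
# true no matter how the remaining variables are set.
from itertools import combinations, product

def msa(formula, vars_):
    for k in range(len(vars_) + 1):                      # smallest size first
        for chosen in combinations(vars_, k):
            rest = [v for v in vars_ if v not in chosen]
            for vals in product([False, True], repeat=k):
                partial = dict(zip(chosen, vals))
                # the partial assignment must entail the formula
                if all(formula({**partial, **dict(zip(rest, bits))})
                       for bits in product([False, True], repeat=len(rest))):
                    return partial
    return None

# (a /\ b) \/ c has a one-variable MSA: setting c alone suffices.
f = lambda a: (a["a"] and a["b"]) or a["c"]
print(msa(f, ["a", "b", "c"]))  # {'c': True}
```

Note that the returned partial assignment, read as a conjunction of literals, is a logically *strongest* formula over its variables, which is exactly why the quantifier-elimination step discussed next is needed.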
Given an MSA of $\Gamma \Rightarrow \phi$ containing variables $V$, we observe that a logically weakest solution containing only $V$ is equivalent to $\forall \bar{V}.\ (\Gamma \Rightarrow \phi)$, where $\bar{V} = \text{free}(\Gamma \Rightarrow \phi) \setminus V$. Hence, given an MSA of $\Gamma \Rightarrow \phi$ consistent with $\Gamma$, an abductive solution satisfying all conditions (1)–(3) can be obtained by applying quantifier elimination to $\forall \bar{V}.\ (\Gamma \Rightarrow \phi)$.
Thus, to solve the abduction problem, what we want is a largest set of variables $X$ such that $(\forall X.(\Gamma \Rightarrow \phi)) \wedge \Gamma$ is satisfiable. We call such a set of variables $X$ a maximum universal subset (MUS) of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$. Given an MUS $X$ of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$, the desired solution to the abductive inference problem is obtained by eliminating quantifiers from $\forall X.(\Gamma \Rightarrow \phi)$ and then simplifying the resulting formula with respect to $\Gamma$ using the algorithm from [9].
Pseudo-code for our algorithm for solving an abductive inference problem defined by premises $\Gamma$ and conclusion $\phi$ is shown in Figure 2. The **abduce** function given in lines 1-5 first computes an MUS of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$ using the helper **find_mus** function. Given such a maximum universal subset $X$, we obtain a quantifier-free abductive solution $\chi$ by applying quantifier elimination to the formula $\forall X.(\Gamma \Rightarrow \phi)$. Finally, at line 4, to ensure that the final abductive solution does not contain redundant subparts that are implied by the premises, we apply the simplification algorithm from [9] to $\chi$. This yields our final abductive solution $\psi$ which satisfies our criteria of minimality and generality and that is not redundant with respect to the original premises.
|
| 141 |
+
---PAGE_BREAK---
|
| 142 |
+
|
| 143 |
+
```
|
| 144 |
+
abduce(φ, Γ) {
|
| 145 |
+
1. φ = (Γ ⇒ φ)
|
| 146 |
+
2. Set X = find_mus(φ, Γ, free(φ), 0)
|
| 147 |
+
3. χ = elim(∀X.φ)
|
| 148 |
+
4. ψ = simplify(χ, Γ)
|
| 149 |
+
5. return ψ
|
| 150 |
+
}
|
| 151 |
+
|
| 152 |
+
find_mus(φ, Γ, V, L) {
|
| 153 |
+
6. If V = ∅ or |V| ≤ L return ∅
|
| 154 |
+
7. U = free(φ) - V
|
| 155 |
+
8. if( UNSAT (Γ ∧ ∀U.φ)) return ∅
|
| 156 |
+
9. Set best = ∅
|
| 157 |
+
10. choose x ∈ V
|
| 158 |
+
|
| 159 |
+
11. if(SAT(∀x.φ)) {
|
| 160 |
+
12. Set Y = find_mus(∀x.φ, Γ, V \ {x}, L - 1);
|
| 161 |
+
13. If (|Y| + 1 > L) { best = Y ∪ {x}; L = |Y| + 1 }
|
| 162 |
+
14. Set Y = find_mus(φ, Γ, V \ {x}, L);
|
| 163 |
+
15. If (|Y| > L) { best = Y }
|
| 164 |
+
|
| 165 |
+
16. return best;
|
| 166 |
+
}
|
| 167 |
+
```
|
| 168 |
+
|
| 169 |
+
Fig. 2: Algorithm for performing abduction
|
| 170 |
+
|
| 171 |
+
The function `find_mus` used in `abduce` is shown in lines 6-16 of Figure 2. This algorithm directly extends the `find_mus` algorithm we presented earlier in [8] to exclude universal subsets that are inconsistent with Γ. At every recursive invocation, `find_mus` picks a variable x from the candidate set V. It then recursively invokes `find_mus` to compute universal subsets with and without x and returns the larger of the two. In this algorithm, L is a lower bound on the size of the MUS and is used to prune search branches that cannot improve upon an existing solution. The search along a branch therefore terminates if we either cannot improve upon an existing solution of size L, or the universal quantification over the set U computed at lines 7-8 is no longer consistent with Γ. The return value of `find_mus` is therefore a largest set X of variables for which Γ ∧ ∀X.φ is satisfiable.
|
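To make the specification concrete, the following is a minimal brute-force sketch of MUS search over propositional formulas (our illustration; it enumerates subsets largest-first instead of using the paper's branch-and-bound with the lower bound L, and it uses exhaustive truth-table SAT rather than an SMT solver). Formulas are modeled as Python predicates over variable assignments.

```python
from itertools import product, combinations

def sat(formula, variables):
    # Brute-force satisfiability check over the given Boolean variables.
    return any(formula(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

def forall(formula, xs):
    # Universally quantify the variables xs out of formula.
    xs = list(xs)
    return lambda a: all(formula({**a, **dict(zip(xs, bits))})
                         for bits in product([False, True], repeat=len(xs)))

def find_mus(phi, gamma, variables):
    # Return a maximum universal subset X of `variables`: a largest X
    # such that gamma AND (forall X. phi) is satisfiable.
    for k in range(len(variables), -1, -1):
        for X in combinations(variables, k):
            candidate = lambda a, X=X: gamma(a) and forall(phi, X)(a)
            if sat(candidate, variables):
                return set(X)
    return set()  # gamma AND phi itself is unsatisfiable

# Example: Gamma = x, phi = x or y, so Gamma => phi is valid and the
# whole variable set is a maximum universal subset.
gamma = lambda a: a['x']
phi_impl = lambda a: (not a['x']) or a['x'] or a['y']
print(find_mus(phi_impl, gamma, ['x', 'y']))
```

An SMT-based implementation would replace `sat` with a solver call and `forall` with quantifier introduction, but the branching structure is the same.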
| 172 |
+
|
| 173 |
+
# 4 Experimental Evaluation
|
| 174 |
+
|
| 175 |
+
To explore the size of abductive solutions and the cost of computing such solutions in practice, we collected 1455 abduction problems generated by the Compass program analysis system for inferring missing preconditions of functions. In each abduction problem $(\Gamma \land \psi) \Rightarrow \phi$, $\Gamma$ represents known invariants, and
|
| 176 |
+
---PAGE_BREAK---
|
| 177 |
+
|
| 178 |
+
Fig. 3: Size of Formula vs. Size of Abductive Solution and Time for Abduction
|
| 179 |
+
|
| 180 |
+
$\phi$ is the weakest precondition of an assertion in some function $f$. Hence, the solution $\psi$ to the abduction problem represents a potential missing precondition of $f$ sufficient to guarantee the safety of the assertion.
|
| 181 |
+
|
| 182 |
+
The left-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$, measured as the number of leaves in the formula, versus the size of the computed abductive solution. As this graph shows, the abductive solution is generally much smaller than the original formula, demonstrating that our abduction algorithm generates small explanations in practice. The right-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$ versus the time taken to solve the abduction problem. As expected, the time increases with formula size, but remains tractable even for the largest abduction problems in our benchmark set.
|
| 183 |
+
|
| 184 |
+
## References
|
| 185 |
+
|
| 186 |
+
1. Peirce, C.: Collected papers of Charles Sanders Peirce. Belknap Press (1932)
|
| 187 |
+
2. Calcagno, C., Distefano, D., O'Hearn, P., Yang, H.: Compositional shape analysis by means of bi-abduction. POPL 44(1) (2009) 289–300
|
| 188 |
+
3. Giacobazzi, R.: Abductive analysis of modular logic programs. In: Proceedings of the 1994 International Symposium on Logic programming, Citeseer (1994) 377–391
|
| 189 |
+
4. Dillig, I., Dillig, T., Aiken, A.: Automated error diagnosis using abductive inference. In: PLDI. (2012)
|
| 190 |
+
5. Gulwani, S., McCloskey, B., Tiwari, A.: Lifting abstract interpreters to quantified logical domains. In: POPL, ACM (2008) 235–246
|
| 191 |
+
6. Zhu, H., Dillig, I., Dillig, T.: Abduction-based inference of library specifications for source-sink property verification. In: Technical Report, College of William & Mary. (2012)
|
| 192 |
+
7. Li, B., Dillig, I., Dillig, T., McMillan, K., Sagiv, M.: Synthesis of circular compositional program proofs via abduction. In: To appear in TACAS. (2013)
|
| 193 |
+
8. Dillig, I., Dillig, T., McMillan, K., Aiken, A.: Minimum satisfying assignments for SMT, CAV (2012)
|
| 194 |
+
9. Dillig, I., Dillig, T., Aiken, A.: Small formulas for large programs: On-line constraint simplification in scalable static analysis. Static Analysis (2011) 236–252
|
samples/texts_merged/3251599.md
ADDED
|
@@ -0,0 +1,679 @@
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
Research Article
|
| 5 |
+
|
| 6 |
+
On Retarded Integral Inequalities for Dynamic Systems
|
| 7 |
+
on Time Scales
|
| 8 |
+
|
| 9 |
+
Qiao-Luan Li,¹ Xu-Yang Fu,¹ Zhi-Juan Gao,¹ and Wing-Sum Cheung²
|
| 10 |
+
|
| 11 |
+
¹College of Mathematics & Information Science, Hebei Normal University, Shijiazhuang 050024, China
|
| 12 |
+
|
| 13 |
+
²Department of Mathematics, The University of Hong Kong, Hong Kong
|
| 14 |
+
|
| 15 |
+
Correspondence should be addressed to Wing-Sum Cheung; wscheung@hku.hk
|
| 16 |
+
|
| 17 |
+
Received 13 September 2013; Accepted 16 January 2014; Published 20 February 2014
|
| 18 |
+
|
| 19 |
+
Academic Editor: Jaeyoung Chung
|
| 20 |
+
|
| 21 |
+
Copyright © 2014 Qiao-Luan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
| 22 |
+
|
| 23 |
+
The object of this paper is to establish some nonlinear retarded inequalities on time scales which can be used as handy tools in the theory of integral equations with time delays.
|
| 24 |
+
|
| 25 |
+
**1. Introduction**
|
| 26 |
+
|
| 27 |
+
Integral inequalities play an important role in the qualitative analysis of differential and integral equations. The well-known Gronwall inequality provides explicit bounds for solutions of many differential and integral equations. Motivated by a variety of applications, this inequality has been extended to various contexts (see, e.g., [1-4]), including many retarded ones (see, e.g., [5-9]).
|
| 28 |
+
|
| 29 |
+
Recently, Ye and Gao [7] obtained the following.
|
| 30 |
+
|
| 31 |
+
**Theorem A.** Let $I = [t_0, T) \subset \mathbb{R}$, $a(t), b(t) \in C(I, \mathbb{R}^+)$, $\phi(t) \in C([t_0 - r, t_0], \mathbb{R}^+)$, $a(t_0) = \phi(t_0)$, and $u(t) \in C([t_0 - r, T), \mathbb{R}^+)$ with
|
| 32 |
+
|
| 33 |
+
$$
|
| 34 |
+
\begin{aligned}
|
| 35 |
+
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r) ds, && t \in [t_0, T) \\
|
| 36 |
+
& u(t) \le \phi(t), && t \in [t_0 - r, t_0),
|
| 37 |
+
\end{aligned}
|
| 38 |
+
\quad (1) $$
|
| 39 |
+
|
| 40 |
+
where $\beta > 0$. Then, the following assertions hold.
|
| 41 |
+
|
| 42 |
+
(i) Suppose that $\beta > 1/2$. Then,
|
| 43 |
+
|
| 44 |
+
$$
|
| 45 |
+
\begin{aligned}
|
| 46 |
+
& u(t) \le e^t [w_1(t) + y_1(t)]^{1/2}, && t \in [t_0 + r, T), \\
|
| 47 |
+
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) ds, && (2) \\
|
| 48 |
+
& t \in [t_0, t_0+r),
|
| 49 |
+
\end{aligned}
|
| 50 |
+
$$
|
| 51 |
+
|
| 52 |
+
where $K_1 = \Gamma(2\beta - 1)e^{-2r}/4^{\beta-1}$, $C_1 = \max\{2, e^{2r}\}$, $w_1(t) = C_1e^{-2t_0}a^2(t)$, $\phi_1(t) = C_1e^{-2t_0}\phi^2(t)$, and
|
| 53 |
+
|
| 54 |
+
$$
|
| 55 |
+
\begin{aligned}
|
| 56 |
+
& y_1(t) \\
|
| 57 |
+
& = \int_{t_0}^{t_0+r} K_1 b^2(s) \phi_1(s-r) ds \\
|
| 58 |
+
& \quad \cdot \exp \left( \int_{t_0+r}^{t} K_1 b^2(\tau) d\tau \right) \\
|
| 59 |
+
& + \int_{t_0+r}^{t} w_1(s-r) K_1 b^2(s) \exp \left( \int_{s}^{t} K_1 b^2(\tau) d\tau \right) ds.
|
| 60 |
+
\end{aligned}
|
| 61 |
+
\quad (3) $$
|
| 62 |
+
|
| 63 |
+
If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions, then
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
\begin{aligned}
|
| 67 |
+
& u(t) \le \sqrt{C_1} a(t) \exp \left( t - t_0 + \frac{K_1}{2} \int_{t_0}^{t} b^2(s) ds \right), && (4) \\
|
| 68 |
+
& t \in [t_0, T).
|
| 69 |
+
\end{aligned}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
(ii) Suppose that $0 < \beta \le 1/2$. Then,
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\begin{aligned}
|
| 76 |
+
& u(t) \le e^t [w_2(t) + y_2(t)]^{1/q}, && t \in [t_0 + r, T), \\
|
| 77 |
+
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) ds, && (5) \\
|
| 78 |
+
& t \in [t_0, t_0 + r),
|
| 79 |
+
\end{aligned}
|
| 80 |
+
$$
|
| 81 |
+
---PAGE_BREAK---
|
| 82 |
+
|
| 83 |
+
where $K_2 = \left[ \Gamma(1 - p(1-\beta)) / p^{1-p(1-\beta)} \right]^{1/p}$, $C_2 =$
|
| 84 |
+
$\max\{2^{q-1}, e^{qr}\}$, $w_2(t) = C_2 e^{-qt_0} a^q(t)$, $\phi_2(t) = C_2 e^{-qt_0} \phi^q(t)$,
|
| 85 |
+
$\psi(t) = 2^{q-1} K_2^q e^{-qr} b^q(t)$, and
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\begin{equation}
|
| 89 |
+
\begin{aligned}
|
| 90 |
+
y_2(t) &= \int_{t_0}^{t_0+r} \psi(s) \phi_2(s-r) ds \cdot \exp \left( \int_{t_0+r}^{t} \psi(\tau) d\tau \right) \\
|
| 91 |
+
&\quad + \int_{t_0+r}^{t} w_2(s-r) \psi(s) \exp \left( \int_{s}^{t} \psi(\tau) d\tau \right) ds.
|
| 92 |
+
\end{aligned}
|
| 93 |
+
\tag{6}
|
| 94 |
+
\end{equation}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions,
|
| 98 |
+
then
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
u(t) \le C_2^{1/q} a(t) \exp \left( t - t_0 + \frac{1}{q} \int_{t_0}^t \psi(s) ds \right), \quad (7)
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
$$
|
| 105 |
+
t \in [t_0, T).
|
| 106 |
+
$$
|
| 107 |
+
|
| 108 |
+
In this paper, we will further investigate functions $u$
|
| 109 |
+
satisfying the following more general inequalities:
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\begin{align}
|
| 113 |
+
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u^{n/m} (s-r) \Delta s, \notag \\
|
| 114 |
+
& \phantom{u(t) \le a(t) + } t \in [t_0, T]_{\mathbb{T}}, \tag{8} \\
|
| 115 |
+
& u(t) \le \phi(t), \quad t \in [t_0-r, t_0]_{\mathbb{T}}, \notag \\
|
| 116 |
+
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} [b(s) u^{n/m}(s) + c(s) u^{n/m}(s-r)] \Delta s, \notag \\
|
| 117 |
+
& \phantom{u(t) \le a(t) + } t \in [t_0, T]_{\mathbb{T}}, \tag{9}
|
| 118 |
+
\end{align}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
u(t) \leq \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}},
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
where $\mathbb{T}$ is any time scale, $u(t)$, $a(t)$, $b(t)$, $c(t)$, and $\phi(t)$ are real-valued nonnegative rd-continuous functions defined on $\mathbb{T}$, $m$ and $n$ are positive constants with $m \ge n$ and $m \ge 1$, $(1/p) + (1/m) = 1$, $\beta > (p-1)/p$, and $[t_0, T)_\mathbb{T} := [t_0, T) \cap \mathbb{T}$.
|
| 135 |
+
|
| 136 |
+
First, we make a preliminary definition.
|
| 137 |
+
|
| 138 |
+
**Definition 1.** We say that a function $p : \mathbb{T} \to \mathbb{R}$ is regressive provided that
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
1 + \mu(t)p(t) \neq 0, \quad \forall t \in \mathbb{T}^{\kappa}
|
| 142 |
+
\quad (10)
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
holds, where $\mu(t)$ is the graininess function; that is, $\mu(t) := \sigma(t) - t$. The set of all regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ will be denoted by $\mathcal{R}$.
|
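On a discrete time scale, regressivity is exactly what keeps the time-scale exponential $e_p(t, t_0)$ well defined as a product of factors $1 + \mu(t_k)p(t_k)$. A minimal numerical sketch (our illustration, not part of the paper; for $\mathbb{T} = \mathbb{R}$ the product becomes $\exp(\int p)$):

```python
def ts_exponential(p, ts, i0, i1):
    # Time-scale exponential e_p(t_{i1}, t_{i0}) on the discrete time
    # scale given by the increasing grid points `ts`:
    #   e_p(t, t0) = product over [t0, t) of (1 + mu(t_k) * p(t_k)),
    # where mu(t_k) = sigma(t_k) - t_k is the graininess.
    result = 1.0
    for k in range(i0, i1):
        mu = ts[k + 1] - ts[k]          # graininess at t_k
        factor = 1.0 + mu * p(ts[k])
        if factor == 0.0:               # p fails the regressivity condition
            raise ValueError("p is not regressive at t_k")
        result *= factor
    return result
```

For $\mathbb{T} = \mathbb{Z}$ with constant $p \equiv c$ this reduces to $(1+c)^{t-t_0}$, the familiar discrete exponential.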
| 146 |
+
|
| 147 |
+
**2. Main Results**
|
| 148 |
+
|
| 149 |
+
For convenience, we first cite the following lemma.
|
| 150 |
+
|
| 151 |
+
**Lemma 2** (see [10]). Let $a \ge 0$, $p \ge q \ge 0$, $p \ne 0$; then
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
a^{q/p} \leq \frac{q}{p} K^{\frac{(q-p)}{p}} a + \frac{p-q}{p} K^{\frac{q}{p}}
|
| 155 |
+
\quad (11)
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
for any $K > 0$.
|
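Lemma 2 is the tangent-line (Young-type) estimate for the concave map $a \mapsto a^{q/p}$; the right-hand side is the tangent at $a = K$, so equality holds there. A quick numerical sanity check over randomly sampled admissible parameters (our illustration, not part of the paper):

```python
import random

def lemma2_rhs(a, p, q, K):
    # Right-hand side of Lemma 2:
    #   (q/p) K^((q-p)/p) a + ((p-q)/p) K^(q/p)
    return (q / p) * K ** ((q - p) / p) * a + (p - q) / p * K ** (q / p)

random.seed(0)
for _ in range(1000):
    p = random.uniform(0.5, 5.0)
    q = random.uniform(0.0, p)        # p >= q >= 0
    a = random.uniform(0.0, 10.0)     # a >= 0
    K = random.uniform(0.1, 10.0)     # K > 0
    assert a ** (q / p) <= lemma2_rhs(a, p, q, K) + 1e-9
```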
| 159 |
+
|
| 160 |
+
**Lemma 3.** Let $a(t) \ge 0$, $b(t) > 0$, $p(t) := nb(t)/m$, $-b \in$
|
| 161 |
+
$\mathcal{R}^+ := \{f \in \mathcal{R} : 1 + \mu(t)f(t) > 0, \text{ for all } t \in \mathbb{T}\}$, let $\phi(t) \ge 0$ be
|
| 162 |
+
rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, and let $r \ge 0$ and $m \ge n > 0$ be
|
| 163 |
+
real constants. If $u(t) \ge 0$ is rd-continuous and
|
| 164 |
+
|
| 165 |
+
$$
|
| 166 |
+
\begin{equation}
|
| 167 |
+
\begin{aligned}
|
| 168 |
+
& u^m(t) \le a(t) + \int_{t_0}^{t} b(s) u^n(s-r) \Delta s, && t \in [t_0, T]_{\mathbb{T}}, \\
|
| 169 |
+
& u(t) \le \phi(t), && t \in [t_0 - r, t_0]_{\mathbb{T}},
|
| 170 |
+
\end{aligned}
|
| 171 |
+
\tag{12}
|
| 172 |
+
\end{equation}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
then
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\begin{equation}
|
| 179 |
+
\begin{split}
|
| 180 |
+
u^m(t) &\le a(t) + \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t)\Delta s \\
|
| 181 |
+
&\quad + e_{-p}(t_0+r,t) \int_{t_0}^{t_0+r} b(s)\phi^n(s-r)\Delta s \\
|
| 182 |
+
&\quad + \frac{m-n}{n}(e_{-p}(t_0+r,t)-1)
|
| 183 |
+
\end{split}
|
| 184 |
+
\tag{13}
|
| 185 |
+
\end{equation}
|
| 186 |
+
$$
|
| 187 |
+
|
| 188 |
+
for $t \in [t_0 + r, T)_\mathbb{T}$ and
|
| 189 |
+
|
| 190 |
+
$$
|
| 191 |
+
u^m(t) \leq a(t) + \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s
|
| 192 |
+
\quad (14)
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
for $t \in [t_0, t_0 + r)_{\mathbb{T}}$.
|
| 196 |
+
|
| 197 |
+
Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then
|
| 198 |
+
|
| 199 |
+
$$
|
| 200 |
+
u^m(t) \le c(t)e_{-b}(t_0, t), \quad t \in [t_0, T)_{\mathbb{T}}, \quad (15)
|
| 201 |
+
$$
|
| 202 |
+
|
| 203 |
+
where $c(t) := a(t) + (m-n)/n$.
|
| 204 |
+
|
| 205 |
+
*Proof.* Let $z(t) = \int_{t_0}^t b(s)u^n(s-r)\Delta s$. Then, $z(t_0) = 0$, $u^m(t) \le a(t)+z(t)$, and $z(t)$ is nonnegative and nondecreasing for $t \in [t_0, T)_{\mathbb{T}}$. By Lemma 2, we get
|
| 206 |
+
|
| 207 |
+
$$
|
| 208 |
+
\begin{align*}
|
| 209 |
+
z^\Delta (t) &= b(t) u^n (t-r) \le b(t) [a(t-r) + z(t-r)]^{n/m} \\
|
| 210 |
+
&\le b(t) \left[ \frac{n}{m} (a(t-r) + z(t-r)) + \frac{m-n}{m} \right] \\
|
| 211 |
+
&\le \frac{n}{m} b(t) z(\sigma(t)) + \frac{n}{m} b(t) a(t-r) + \frac{m-n}{m} b(t) \\
|
| 212 |
+
&= p(t) z(\sigma(t)) + p(t) a(t-r) + \frac{m-n}{n} p(t)
|
| 213 |
+
\end{align*}
|
| 214 |
+
\tag{16}
|
| 215 |
+
$$
|
| 216 |
+
|
| 217 |
+
for $t \in [t_0 + r, T)_T$. Multiplying (16) by $e_{-p}(t, t_0 + r) > 0$, we get
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
\begin{aligned}
(z(t)e_{-p}(t, t_0 + r))^{\Delta} &\le p(t)a(t-r)e_{-p}(t, t_0 + r) \\
|
| 221 |
+
&\quad + \frac{m-n}{n}p(t)e_{-p}(t, t_0 + r).
\end{aligned}
|
| 222 |
+
\tag{17}
|
| 223 |
+
$$
|
| 224 |
+
---PAGE_BREAK---
|
| 225 |
+
|
| 226 |
+
Integrating both sides from $t_0 + r$ to $t$, we obtain
|
| 227 |
+
|
| 228 |
+
$$
|
| 229 |
+
\begin{align}
|
| 230 |
+
z(t) \le e_{-p}(t_0+r,t)z(t_0+r) & \nonumber \\
|
| 231 |
+
& + e_{-p}(t_0+r,t) \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t_0+r) \Delta s \nonumber \\
|
| 232 |
+
& + \frac{m-n}{n} (e_{-p}(t_0+r,t)-1). \tag{18}
|
| 233 |
+
\end{align}
|
| 234 |
+
$$
|
| 235 |
+
|
| 236 |
+
For $t \in [t_0, t_0 + r)_{\mathbb{T}}$, $z^{\Delta}(t) \le b(t)\phi^n(t-r)$, so
|
| 237 |
+
|
| 238 |
+
$$
|
| 239 |
+
z(t) \leq \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s. \quad (19)
|
| 240 |
+
$$
|
| 241 |
+
|
| 242 |
+
Using (18) and (19), we get
|
| 243 |
+
|
| 244 |
+
$$
|
| 245 |
+
\begin{align}
|
| 246 |
+
z(t) \le e_{-p}(t_0+r,t) & \int_{t_0}^{t_0+r} b(s) \phi^n(s-r) \Delta s \nonumber \\
|
| 247 |
+
& + \int_{t_0+r}^{t} p(s) a(s-r) e_{-p}(s,t) \Delta s \tag{20} \\
|
| 248 |
+
& + \frac{m-n}{n} (e_{-p}(t_0+r,t)-1) \nonumber
|
| 249 |
+
\end{align}
|
| 250 |
+
$$
|
| 251 |
+
|
| 252 |
+
for $t \in [t_0 + r, T)_\mathbb{T}$.
|
| 253 |
+
|
| 254 |
+
Noting that $u^m(t) \le a(t) + z(t)$, inequalities (13) and (14) follow.
|
| 255 |
+
|
| 256 |
+
Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then for $t \in [t_0, t_0 + r)_{\mathbb{T}}$, by (14), we have
|
| 257 |
+
|
| 258 |
+
$$
|
| 259 |
+
\begin{equation}
|
| 260 |
+
\begin{aligned}
|
| 261 |
+
u^m(t) &\le a(t) + \phi^n (t-r) \int_{t_0}^t b(s) \Delta s \\
|
| 262 |
+
&\le a(t) \left( 1 + \int_{t_0}^t b(s) \Delta s \right) \le c(t) e_{-b}(t_0, t).
|
| 263 |
+
\end{aligned}
|
| 264 |
+
\tag{21}
|
| 265 |
+
\end{equation}
|
| 266 |
+
$$
|
| 267 |
+
|
| 268 |
+
If $t \in [t_0 + r, T)_\mathbb{T}$, by (13),
|
| 269 |
+
|
| 270 |
+
$$
|
| 271 |
+
\begin{align*}
|
| 272 |
+
& u^m(t) \le a(t) + e_{-p}(t_0+r,t)a(t) \int_{t_0}^{t_0+r} b(s) \Delta s \\
|
| 273 |
+
& \phantom{u^m(t) \le} + a(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\
|
| 274 |
+
& \phantom{u^m(t) \le} + \frac{m-n}{n} \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\
|
| 275 |
+
& \le c(t) + e_{-p}(t_0+r,t)c(t) \int_{t_0}^{t_0+r} b(s) \Delta s \tag{22} \\
|
| 276 |
+
& \phantom{u^m(t) \le} + c(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \\
|
| 277 |
+
& = c(t)e_{-p}(t_0+r,t) \left(1 + \int_{t_0}^{t_0+r} b(s)\Delta s\right) \\
|
| 278 |
+
& \le c(t)e_{-b}(t_0,t).
|
| 279 |
+
\end{align*}
|
| 280 |
+
$$
|
| 281 |
+
|
| 282 |
+
The proof is complete. $\square$
|
| 283 |
+
|
| 284 |
+
**Theorem 4.** Assume that $u(t)$ satisfies condition (8), $a(t) \ge 0$, $K := 2^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}e^{-nr}$, $b_1(t) := (n/m)Kb^m(t)$, $-Kb^m \in \mathcal{R}^+$; then
|
| 285 |
+
|
| 286 |
+
$$
|
| 287 |
+
\begin{align}
|
| 288 |
+
u(t) &\le e^t [w_1(t) + y_1(t)]^{1/m}, && t \in [t_0 + r, T)_\mathbb{T}, \\
|
| 289 |
+
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi^{n/m} (s-r) \Delta s, && (23)
|
| 290 |
+
\end{align}
|
| 291 |
+
$$
|
| 292 |
+
|
| 293 |
+
$$
|
| 294 |
+
t \in [t_0, t_0 + r)_\mathbb{T},
|
| 295 |
+
$$
|
| 296 |
+
|
| 297 |
+
where $w_1(t) := 2^{m-1}a^m(t)e^{-mt_0}$, $\phi_1(t) := e^{-t_0}e^r\phi(t)$,
|
| 298 |
+
and $y_1(t) := \int_{t_0+r}^{t} b_1(s)w_1(s-r)e_{-b_1}(s,t)\Delta s + e_{-b_1}(t_0+r,t)\int_{t_0}^{t_0+r} K b^m(s)\phi_1^n(s-r)\Delta s + ((m-n)/n)(e_{-b_1}(t_0+r,t)-1)$.
|
| 299 |
+
|
| 300 |
+
If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and
|
| 301 |
+
$a^m(t_0) = 2^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then
|
| 302 |
+
|
| 303 |
+
$$
|
| 304 |
+
u(t) \le e^t [\alpha(t) e_{-Kb^m}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_\mathbb{T}, \quad (24)
|
| 305 |
+
$$
|
| 306 |
+
|
| 307 |
+
where $\alpha(t) := w_1(t) + (m-n)/n$.
|
| 308 |
+
|
| 309 |
+
*Proof.* The second inequality in (23) is obvious. Next, we will prove the first inequality in (23). For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices *p* and *m*, we obtain from (8)
|
| 310 |
+
|
| 311 |
+
$$
|
| 312 |
+
\begin{align}
|
| 313 |
+
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m} (s-r) \Delta s \notag \\
|
| 314 |
+
&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \notag \\
|
| 315 |
+
&\qquad \times \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m}. \tag{25}
|
| 316 |
+
\end{align}
|
| 317 |
+
$$
|
| 318 |
+
|
| 319 |
+
By Jensen's inequality $(\sum_{i=1}^n x_i)^{\sigma} \le n^{\sigma-1} (\sum_{i=1}^n x_i^{\sigma})$, we get
|
| 320 |
+
|
| 321 |
+
$$
|
| 322 |
+
u^m(t) \le 2^{m-1} a^m(t)
|
| 323 |
+
+ 2^{m-1} \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{m/p}
|
| 324 |
+
\times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. \tag{26}
|
| 325 |
+
$$
|
| 326 |
+
|
| 327 |
+
For the first integral in (26), we have the estimate
|
| 328 |
+
|
| 329 |
+
$$
|
| 330 |
+
\begin{align}
|
| 331 |
+
&\int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \\
|
| 332 |
+
&= \int_{0}^{t-t_0} \tau^{p\beta-p} e^{pn(t-\tau)/m} \Delta\tau \\
|
| 333 |
+
&\le e^{pnt/m} \int_{0}^{t} \tau^{p\beta-p} e^{-pn\tau/m} \Delta\tau \tag{27} \\
|
| 334 |
+
&= e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \int_{0}^{pnt/m} \sigma^{p\beta-p} e^{-\sigma}\Delta\sigma \\
|
| 335 |
+
&< e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \Gamma(p\beta - p + 1).
|
| 336 |
+
\end{align}
|
| 337 |
+
$$
|
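In the continuous case $\mathbb{T} = \mathbb{R}$, the estimate in (27) rests on the classical identity $\int_0^\infty \tau^{a} e^{-\lambda\tau}\, d\tau = \Gamma(a+1)/\lambda^{a+1}$ with $a = p\beta - p$ and $\lambda = pn/m$. A quick numerical check with arbitrary parameter values (our illustration):

```python
import math

def integrand(tau, a, lam):
    # tau^a * exp(-lam * tau), the kernel appearing in estimate (27)
    return tau ** a * math.exp(-lam * tau)

def integral(a, lam, T=80.0, n=200000):
    # Midpoint rule on [0, T]; T is large enough that the tail of the
    # exponentially decaying integrand is negligible.
    h = T / n
    return sum(integrand((k + 0.5) * h, a, lam) for k in range(n)) * h

# a = p*beta - p must exceed -1 for convergence; lam = p*n/m > 0.
a, lam = -0.3, 2.0
closed_form = lam ** (-(a + 1)) * math.gamma(a + 1)
assert abs(integral(a, lam) - closed_form) < 1e-2
```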
| 338 |
+
---PAGE_BREAK---
|
| 339 |
+
|
| 340 |
+
Hence,
|
| 341 |
+
|
| 342 |
+
$$
|
| 343 |
+
\begin{equation} \tag{28}
|
| 344 |
+
\begin{aligned}
|
| 345 |
+
& u^m(t) \le 2^{m-1} a^m(t) + 2^{m-1} e^{mt} \Gamma^{m-1}(p\beta - p + 1) \\
|
| 346 |
+
& \quad \times \left(\frac{m}{pn}\right)^{\beta m-1} \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s.
|
| 349 |
+
\end{aligned}
|
| 350 |
+
\end{equation}
|
| 351 |
+
$$
|
| 352 |
+
|
| 353 |
+
Dividing both sides by $e^{mt}$ and noting that $e^{-mt} \le e^{-mt_0}$ for $t \ge t_0$, we get
|
| 354 |
+
|
| 355 |
+
$$
|
| 356 |
+
\begin{align*}
|
| 357 |
+
& (u(t)e^{-t})^m \\
|
| 358 |
+
&\le 2^{m-1} a^m(t) e^{-mt_0} + 2^{m-1} \Gamma^{m-1}(p\beta - p+1) \left(\frac{m}{pn}\right)^{\beta m-1} \\
|
| 359 |
+
&\qquad \times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. \tag{29}
|
| 360 |
+
\end{align*}
|
| 361 |
+
$$
|
| 362 |
+
|
| 363 |
+
Let $v(t) := e^{-t}u(t)$; then we have
|
| 364 |
+
|
| 365 |
+
$$
|
| 366 |
+
\begin{equation}
|
| 367 |
+
\begin{aligned}
|
| 368 |
+
v^m(t) &\le w_1(t) + K \int_{t_0}^{t} b^m(s) v^n(s-r) \Delta s, \\
|
| 369 |
+
&\qquad t \in [t_0, T)_{\mathbb{T}}.
|
| 370 |
+
\end{aligned}
|
| 371 |
+
\tag{30}
|
| 372 |
+
\end{equation}
|
| 373 |
+
$$
|
| 374 |
+
|
| 375 |
+
For $t \in [t_0 - r, t_0]_{\mathbb{T}}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^r e^{-t_0}\phi(t)$;
|
| 376 |
+
that is, $v(t) \le \phi_1(t)$. By Lemma 3, we get
|
| 377 |
+
|
| 378 |
+
$$
|
| 379 |
+
\begin{equation}
|
| 380 |
+
\begin{aligned}
|
| 381 |
+
v^m(t) &\le w_1(t) + \int_{t_0+r}^{t} b_1(s) w_1(s-r) e_{-b_1}(s,t) \Delta s \\
|
| 382 |
+
&\quad + e_{-b_1}(t_0+r,t) \int_{t_0}^{t_0+r} K b^m(s) \phi_1^n(s-r) \Delta s \\
|
| 383 |
+
&\quad + \frac{m-n}{n} (e_{-b_1}(t_0+r,t)-1).
|
| 384 |
+
\end{aligned}
|
| 385 |
+
\tag{31}
|
| 386 |
+
\end{equation}
|
| 387 |
+
$$
|
| 388 |
+
|
| 389 |
+
Hence, the first inequality in (23) follows.
|
| 390 |
+
|
| 391 |
+
Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 2^{1-m}e^{(m-n)t_0}\phi^n(t_0)e^{nr}$, by Lemma 3, we have
|
| 392 |
+
|
| 393 |
+
$$
|
| 394 |
+
u(t) \le e^t [\alpha(t) e_{-Kb^m}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}. \quad (32)
|
| 395 |
+
$$
|
| 396 |
+
|
| 397 |
+
The proof is complete.
|
| 398 |
+
|
| 399 |
+
**Lemma 5.** Let $a(t) \ge 0$, $b(t) > 0$, $c(t) > 0$, $p(t) := (nb(t)/m)$,
|
| 400 |
+
$q(t) := (nc(t)/m)$, $\gamma(t) := a(t) + (m-n)/n$ and $-p, -(p+c) \in$
|
| 401 |
+
$\mathcal{R}^+$ and let $\phi(t) \ge 0$ be rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, where
|
| 402 |
+
$r \ge 0$ and $m \ge n > 0$ are real constants. If $u(t) \ge 0$ is rd-
|
| 403 |
+
continuous and
|
| 404 |
+
|
| 405 |
+
$$
|
| 406 |
+
\begin{equation}
|
| 407 |
+
\begin{aligned}
|
| 408 |
+
& u^m(t) \le a(t) + \int_{t_0}^{t} [b(s) u^n(s) + c(s) u^n(s-r)] \Delta s, \\
|
| 409 |
+
& \qquad t \in [t_0, T)_\mathbb{T},
|
| 410 |
+
\end{aligned}
|
| 411 |
+
\tag{33}
|
| 412 |
+
\end{equation}
|
| 413 |
+
$$
|
| 414 |
+
|
| 415 |
+
$$
|
| 416 |
+
u(t) \leq \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}},
|
| 417 |
+
$$
|
| 418 |
+
|
| 419 |
+
then
|
| 420 |
+
|
| 421 |
+
$$
|
| 422 |
+
\begin{align*}
|
| 423 |
+
& u^m(t) \\
|
| 424 |
+
&\leq a(t) \\
|
| 425 |
+
&\quad + \int_{t_0+r}^{t} [p(s)\gamma(s)+q(s)\gamma(s-r)] e_{-(p+q)}(s,t)\Delta s \\
|
| 426 |
+
&\quad + e_{-(p+q)}(t_0+r,t) \\
|
| 427 |
+
&\quad \times \int_{t_0}^{t_0+r} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t_0+r)\Delta s
|
| 428 |
+
\end{align*}
|
| 429 |
+
\tag{34}
|
| 430 |
+
$$
|
| 431 |
+
|
| 432 |
+
for $t \in [t_0 + r, T)_\mathbb{T}$ and
|
| 433 |
+
|
| 434 |
+
$$
|
| 435 |
+
u^m(t) \leq a(t) + \int_{t_0}^{t} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s \quad (35)
|
| 436 |
+
$$
|
| 437 |
+
|
| 438 |
+
for $t \in [t_0, t_0 + r]_{\mathbb{T}}$.
|
| 439 |
+
|
| 440 |
+
Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then
|
| 441 |
+
|
| 442 |
+
$$
|
| 443 |
+
u^m(t) \leq \gamma(t) e_{-(p+c)}(t_0, t), \quad t \in [t_0, T)_{\mathbb{T}}. \quad (36)
|
| 444 |
+
$$
|
| 445 |
+
|
| 446 |
+
*Proof.* Let $z(t) = \int_{t_0}^t [b(s)u^n(s)+c(s)u^n(s-r)]\Delta s$. Then, $z(t_0) = 0$, $u^m(t) \leq a(t) + z(t)$, and $z(t)$ is nonnegative and nondecreasing for $t \in [t_0, T)_{\mathbb{T}}$. Further, we have
|
| 447 |
+
|
| 448 |
+
$$
|
| 449 |
+
z^\Delta (t) = b (t) u^n (t) + c (t) u^n (t-r). \quad (37)
|
| 450 |
+
$$
|
| 451 |
+
|
| 452 |
+
For $t \in [t_0, t_0 + r]_{\mathbb{T}}$, using Lemma 2, we have
|
| 453 |
+
|
| 454 |
+
$$
|
| 455 |
+
\begin{aligned}
z^\Delta (t) &\le b (t) (a (t) + z (t))^{n/m} + c (t) \phi^n (t-r) \\
|
| 456 |
+
&\le b (t) \left[ \frac{n}{m} (a (t) + z (t)) + \frac{m-n}{m} \right] + c (t) \phi^n (t-r) \\
|
| 457 |
+
&\le p (t) \gamma (t) + p (t) z (\sigma (t)) + c (t) \phi^n (t-r),
\end{aligned}
|
| 458 |
+
$$
|
| 459 |
+
|
| 460 |
+
$$
|
| 461 |
+
(e_{-p}(t, t_0) z(t))^\Delta \le (p(t)\gamma(t)+c(t)\phi^n(t-r))e_{-p}(t, t_0). \quad (38)
|
| 462 |
+
$$
|
| 463 |
+
|
| 464 |
+
Integrating both sides from $t_0$ to $t$, we obtain
|
| 465 |
+
|
| 466 |
+
$$
|
| 467 |
+
z(t) \leq \int_{t_0}^{t} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s. \quad (39)
|
| 468 |
+
$$
|
| 469 |
+
---PAGE_BREAK---
|
| 470 |
+
|
| 471 |
+
For $t \in [t_0 + r, T)_{\mathbb{T}}$,
|
| 472 |
+
|
| 473 |
+
$$
|
| 474 |
+
\begin{aligned}
|
| 475 |
+
z^{\Delta}(t) &\le b(t)[a(t) + z(t)]^{n/m} \\
|
| 476 |
+
&\quad + c(t)[a(t-r) + z(t-r)]^{n/m} \\
|
| 477 |
+
&\le b(t)\left(\frac{n}{m}(a(t)+z(t)) + \frac{m-n}{m}\right) \\
|
| 478 |
+
&\quad + c(t)\left(\frac{n}{m}(a(t-r)+z(t-r)) + \frac{m-n}{m}\right) \\
|
| 479 |
+
&\le \left(\frac{n}{m}b(t) + \frac{n}{m}c(t)\right)z(\sigma(t)) + \frac{n}{m}b(t)a(t) \\
|
| 480 |
+
&\quad + \frac{n}{m}c(t)a(t-r) + \frac{m-n}{m}b(t) + \frac{m-n}{m}c(t) \\
|
| 481 |
+
&\le (p(t)+q(t))z(\sigma(t)) + p(t)\gamma(t) + q(t)\gamma(t-r).
|
| 482 |
+
\end{aligned}
|
| 483 |
+
\tag{40}
|
| 484 |
+
$$
|
| 485 |
Hence, we get

$$
\begin{align}
(e_{-(p+q)}(t, t_0 + r) z(t))^\Delta & \tag{41} \\
&\le (p(t) \gamma(t) + q(t) \gamma(t-r)) e_{-(p+q)}(t, t_0 + r). \nonumber
\end{align}
$$

Integrating both sides from $t_0 + r$ to $t$, we obtain

$$
\begin{align*}
z(t) &\le e_{-(p+q)}(t_0+r,t)z(t_0+r) \\
&\quad + e_{-(p+q)}(t_0+r,t) \\
&\quad \times \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t_0+r) \Delta s \\
&\le e_{-(p+q)}(t_0+r,t) \\
&\quad \times \int_{t_0}^{t_0+r} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t_0+r) \Delta s \\
&\quad + \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t) \Delta s.
\end{align*}
\tag{42}
$$

Using $u^m(t) \le a(t) + z(t)$, we get inequalities (34) and (35).
Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then, by (35),

$$
\begin{align}
u^m(t) &\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-p}(s,t) \Delta s \right) \notag \\
&\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-(p+c)}(s,t) \Delta s \right) \tag{43} \\
&\le \gamma(t) e_{-(p+c)}(t_0,t) \notag
\end{align}
$$

for $t \in [t_0, t_0 + r)_{\mathbb{T}}$. Furthermore, by (34),

$$
\begin{align*}
u^m(t) &\le \gamma(t) + \gamma(t) e_{-(p+q)}(t_0 + r, t) \\
&\quad \times \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-p}(s,t_0+r) \Delta s \\
&\quad + \gamma(t) \int_{t_0+r}^{t} (p(s)+q(s)) e_{-(p+q)}(s,t) \Delta s \\
&\le \gamma(t) e_{-(p+q)}(t_0+r, t) \\
&\quad \times \left( 1 + \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-(p+c)}(s,t_0+r) \Delta s \right) \\
&= \gamma(t) e_{-(p+c)}(t_0, t)
\end{align*}
\tag{44}
$$

for $t \in [t_0 + r, T)_{\mathbb{T}}$. The proof is complete. $\square$
**Theorem 6.** Assume that $u(t)$ satisfies condition (9), $a(t) \ge 0$, $K := 3^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}$, $p(t) := nKb^m(t)/m$, $c_1(t) := Ke^{-nr}c^m(t)$, $q(t) := (n/m)c_1(t)$, and $-p, -(p+c_1) \in \mathbb{R}^+$.

If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 3^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then

$$
u(t) \le e^{t} [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T]_{\mathbb{T}}, \tag{45}
$$

where $\gamma(t) = 3^{m-1}a^m(t)e^{-mt_0} + (m-n)/n$.

Proof. For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices $p$ and $m$, we obtain from (9) that

$$
\begin{align*}
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m}(s) \Delta s \\
&\quad + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} c(s) e^{-ns/m} u^{n/m}(s-r) \Delta s \\
&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \\
&\quad \times \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} \\
&\quad + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \\
&\quad \times \left( \int_{t_0}^{t} c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m}
\end{align*}
$$

---PAGE_BREAK---

$$
\begin{equation}
\begin{aligned}
& \le a(t) + e^{nt/m} \left(\frac{m}{pn}\right)^{\beta-1+1/p} \Gamma^{1/p}(p\beta - p + 1) \\
& \quad \times \left[ \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} \right. \\
& \qquad \left. + \left( \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m} \right].
\end{aligned}
\tag{46}
\end{equation}
$$

By Jensen's inequality $(\sum_{i=1}^n x_i)^\sigma \le n^{\sigma-1} \sum_{i=1}^n x_i^\sigma$, we get
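As a quick numerical sanity check of this discrete Jensen-type inequality (a sketch only; `n`, `sigma`, and the `x_i` here are the generic symbols of the statement, not the paper's $m$, $n$):

```python
# Numerical check of the discrete Jensen-type inequality
# (x_1 + ... + x_n)^sigma <= n^(sigma-1) * (x_1^sigma + ... + x_n^sigma),
# which holds for sigma >= 1 and nonnegative x_i (convexity of t -> t^sigma).
import random

def jensen_holds(xs, sigma):
    n = len(xs)
    lhs = sum(xs) ** sigma
    rhs = n ** (sigma - 1) * sum(x ** sigma for x in xs)
    # allow a tiny relative tolerance for floating-point rounding
    return lhs <= rhs * (1 + 1e-12) + 1e-12

random.seed(0)
ok = all(
    jensen_holds([random.uniform(0.0, 10.0) for _ in range(random.randint(1, 6))],
                 random.uniform(1.0, 5.0))
    for _ in range(1000)
)
print(ok)  # True
```

With two terms and $\sigma = m$ this is exactly the $3^{m-1}$-type constant used below (three terms there: $a$, and the two integrals).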
$$
\begin{align*}
& u^m(t) \\
&\le 3^{m-1}a^m(t) + 3^{m-1}e^{nt}\left(\frac{m}{pn}\right)^{m\beta-1}\Gamma^{m-1}(p\beta - p + 1) \\
&\quad \times \left(\int_{t_0}^t b^m(s)e^{-ns}u^n(s)\Delta s + \int_{t_0}^t c^m(s)e^{-ns}u^n(s-r)\Delta s\right). \tag{47}
\end{align*}
$$

So,

$$
\begin{equation}
\begin{aligned}
& (u(t)e^{-t})^m \\
&\le 3^{m-1} a^m(t) e^{-mt_0} \\
&\quad + 3^{m-1} \left(\frac{m}{pn}\right)^{m\beta-1} \Gamma^{m-1}(p\beta - p + 1) \\
&\quad \times \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s + \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right).
\end{aligned}
\tag{48}
\end{equation}
$$

Let $v(t) := e^{-t}u(t)$ and $w_2(t) := 3^{m-1}a^m(t)e^{-mt_0}$; we have

$$
\begin{equation}
\begin{aligned}
v^m(t) &\le w_2(t) + \int_{t_0}^t K b^m(s) v^n(s) \Delta s \\
&\quad + \int_{t_0}^t K e^{-nr} c^m(s) v^n(s-r) \Delta s
\end{aligned}
\tag{49}
\end{equation}
$$

for $t \in [t_0, T]_\mathbb{T}$. For $t \in [t_0 - r, t_0]_\mathbb{T}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^{-t_0}e^r\phi(t)$; that is, $v(t) \le \phi_1(t)$. By Lemma 5, we get

$$
u(t) \le e^t [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T]_{\mathbb{T}}. \tag{50}
$$

The proof is complete. $\square$
The following is a simple consequence of Theorem 4.

**Corollary 7.** Suppose that $m = n = 2$ and

$$
\begin{align}
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r) \Delta s, \notag \\
& \phantom{u(t) \le a(t) + {}} t \in [t_0, T), \tag{51} \\
& u(t) \le \phi(t), \quad t \in [t_0 - r, t_0); \notag
\end{align}
$$

then

$$
\begin{align*}
u(t) &\le e^t \left[ w_1(t) + \int_{t_0+r}^t Kb^2(s) w_1(s-r) e_{-Kb^2}(s,t) \Delta s \right. \\
&\qquad \left. + e_{-Kb^2}(t_0+r,t) \right. \\
&\qquad \left. \times \int_{t_0}^{t_0+r} Kb^2(s) \phi_1^2(s-r) \Delta s \right]^{1/2}, \\
&\qquad t \in [t_0+r,T)_\mathbb{T}, \\
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) \Delta s, \\
&\qquad t \in [t_0, t_0+r)_\mathbb{T},
\end{align*}
$$

where $K := \Gamma(2\beta - 1)e^{-2r}/4^{\beta-1}$, $w_1(t) := 2a^2(t)e^{-2t_0}$, and $\phi_1(t) := e^{-t_0}e^r\phi(t)$.

If $\mathbb{T} = \mathbb{R}$, then the conclusion reduces to that of Theorem A for $\beta > 1/2$.
**Conflict of Interests**

The authors declare that there is no conflict of interests regarding the publication of this paper.

**Acknowledgments**

The first author's research was supported by NNSF of China (11071054) and the Natural Science Foundation of Hebei Province (A2011205012). The corresponding author's research was partially supported by an HKU URG grant.

**References**

[1] R. P. Agarwal, S. Deng, and W. Zhang, “Generalization of a retarded Gronwall-like inequality and its applications,” *Applied Mathematics and Computation*, vol. 165, no. 3, pp. 599–612, 2005.

[2] B. G. Pachpatte, “Explicit bounds on certain integral inequalities,” *Journal of Mathematical Analysis and Applications*, vol. 267, no. 1, pp. 48–61, 2002.

[3] W.-S. Cheung, “Some new nonlinear inequalities and applications to boundary value problems,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 64, no. 9, pp. 2112–2128, 2006.

[4] C.-J. Chen, W.-S. Cheung, and D. Zhao, “Gronwall-Bellman-type integral inequalities and applications to BVPs,” *Journal of Inequalities and Applications*, vol. 2009, Article ID 258569, 15 pages, 2009.

[5] Y. G. Sun, “On retarded integral inequalities and their applications,” *Journal of Mathematical Analysis and Applications*, vol. 301, no. 2, pp. 265–275, 2005.

[6] H. Zhang and F. Meng, “On certain integral inequalities in two independent variables for retarded equations,” *Applied Mathematics and Computation*, vol. 203, no. 2, pp. 608–616, 2008.

[7] H. Ye and J. Gao, “Henry-Gronwall type retarded integral inequalities and their applications to fractional differential equations with delay,” *Applied Mathematics and Computation*, vol. 218, no. 8, pp. 4152–4160, 2011.

[8] O. Lipovan, “A retarded Gronwall-like inequality and its applications,” *Journal of Mathematical Analysis and Applications*, vol. 252, no. 1, pp. 389–401, 2000.

[9] O. Lipovan, “A retarded integral inequality and its applications,” *Journal of Mathematical Analysis and Applications*, vol. 285, no. 2, pp. 436–443, 2003.

[10] F. Jiang and F. Meng, “Explicit bounds on some new nonlinear integral inequalities with delay,” *Journal of Computational and Applied Mathematics*, vol. 205, no. 1, pp. 479–486, 2007.
samples/texts_merged/3295535.md (diff too large to render)

samples/texts_merged/3438890.md
---PAGE_BREAK---

# Footstep Planning Based on Univector Field Method for Humanoid Robot

Youngdae Hong and Jong-Hwan Kim

Department of Electrical Engineering and Computer Science, KAIST, Daejeon, Korea
{ydhong,johkim}@rit.kaist.ac.kr
http://rit.kaist.ac.kr

**Abstract.** This paper proposes a footstep planning algorithm for a humanoid robot, based on a univector field method optimized by evolutionary programming, that drives the robot to a target point in a dynamic environment. The univector field method determines the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, which extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is used to generate the joint trajectories that realize the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid static and moving obstacles but also to step over static obstacles. Its performance is demonstrated in computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.

**Keywords:** Footstep planning, univector field method, evolutionary programming, humanoid robot, modifiable walking pattern generator.

# 1 Introduction

Research on humanoid robots has recently made rapid progress toward dexterous motion, driven by advances in hardware. Various humanoid robots have demonstrated stable walking with suitable control schemes [1]-[5]. Looking ahead to humanoid robots serving as service robots, research on navigation in indoor environments such as homes and offices, which contain obstacles, is now needed.

In indoor environments, most navigation research has been carried out for differential drive mobile robots. Navigation methods for mobile robots fall into two categories: separated navigation and unified navigation. Separated navigation, such as structural navigation and deliberative navigation, treats path planning and path following as two isolated tasks. In the path planning step, a path generation algorithm connects the starting point with the end point without crossing the obstacles; to find the shortest path, search algorithms such as A\* and dynamic programming have been applied [6]. In unified navigation, such as the artificial potential field method [7], [8], path planning and path following are combined in one task.

In navigation research, differential drive mobile robots make a detour around obstacles on the way to a goal position. Humanoid robots, on the other hand, are able to

---PAGE_BREAK---

traverse obstacles with their legs. When they move around in an environment containing obstacles, the positions of their footprints matter. Thus, footstep planning for humanoid robots is an important research issue.

As footstep planning research, an algorithm that obtains the shape and location of obstacles from sensors was presented in [9]. With this information, a robot chooses its step length from three predefined lengths and selects a motion such as circumventing, stepping over, or stepping onto obstacles. An algorithm that finds an alternative path using A\* search with a heuristic cost function was also developed [10]: a stable region for the robot's footprints is predetermined, a discrete set of foot placements is selected from it, and collisions between the robot and obstacles are checked by a 2D polygon intersection test. A human-like strategy for footstep planning was also presented [11].

In this paper, a footstep planning algorithm based on the univector field method is proposed for a humanoid robot. The univector field method is a unified navigation method designed to enhance the performance of fast differential drive mobile robots; with it, a robot can navigate rapidly to a desired position and orientation without oscillations or inefficient motions [12], [13]. By employing the univector field method, the footstep planning algorithm determines the moving direction of the humanoid robot in real time at low computational cost. Moreover, it can modify foot placement depending on obstacle positions. Feeding the moving direction and step length at every footstep into the modifiable walking pattern generator [14] yields all joint trajectories. The proposed algorithm generates a path optimized by evolutionary programming (EP), taking the hardware limits of the robot into account, and makes the robot arrive at the goal with a desired direction. Computer simulations are carried out with a model of HanSaRam (HSR)-VIII, a small-sized humanoid robot developed in the Robot Intelligence Technology (RIT) Lab, KAIST.

The rest of the paper is organized as follows: Section 2 gives an overview of the univector field method and Section 3 explains the modifiable walking pattern generator (MWPG). In Section 4 the footstep planning algorithm is proposed. Computer simulation results are presented in Section 5. Finally, concluding remarks follow in Section 6.
# 2 Univector Field Method

The univector field method is a path planning method developed for differential drive mobile robots. The univector field consists of a *move-to-goal univector field*, which leads a robot to a destination, and an *avoid-obstacle univector field*, which makes a robot avoid obstacles. The robot's moving direction is decided by combining the two fields. The univector field method requires relatively low computing power because it does not generate a whole path from start to destination before moving; instead, it generates a moving direction at every step in real time. It also makes it easy to plan a path in a dynamic environment with moving obstacles. Thus, this path planning method is adopted and extended for a humanoid robot.

---PAGE_BREAK---

## 2.1 Move-to-Goal Univector Field

The move-to-goal univector field is defined as

$$ \mathbf{v}_{muf} = [-\cos(\theta_{muf}) \;\; -\sin(\theta_{muf})]^T, \quad (1) $$

where

$$ \theta_{muf} = \cos^{-1}\left(\frac{p_x - g_x}{d_{goal}}\right), \quad d_{goal} = \sqrt{(p_x - g_x)^2 + (p_y - g_y)^2}, $$

$\theta_{muf}$ is the angle between the x-axis and the vector from the goal to the robot's position, $d_{goal}$ is the distance between the center of the goal and the robot's position, and $(p_x, p_y)$ and $(g_x, g_y)$ are the robot's position and the goal position, respectively.

## 2.2 Avoid-Obstacle Univector Field

The avoid-obstacle univector field is defined as

$$ \mathbf{v}_{auf} = [\cos(\theta_{auf}) \;\; \sin(\theta_{auf})]^T, \quad (2) $$

where

$$ \theta_{auf} = \cos^{-1}\left(\frac{p_x - o_x}{d_{ob}}\right), \quad d_{ob} = \sqrt{(p_x - o_x)^2 + (p_y - o_y)^2}, $$

$\theta_{auf}$ is the angle between the x-axis and the vector from the obstacle to the robot's position, $d_{ob}$ is the distance between the center of the obstacle and the robot's position, and $(o_x, o_y)$ is the position of the obstacle.

The total univector field is determined by properly combining the move-to-goal and avoid-obstacle univector fields. The total univector $\mathbf{v}_{tuf}$ is defined as

$$ \mathbf{v}_{tuf} = w_{muf}\mathbf{v}_{muf} + w_{auf}\mathbf{v}_{auf}, \quad (3) $$

where $w_{muf}$ and $w_{auf}$ represent the scale factors of the move-to-goal univector field and the avoid-obstacle univector field, respectively.
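As an illustration, equations (1)-(3) can be sketched as follows. This is a hypothetical minimal implementation, not code from the paper; `atan2` is used in place of the $\cos^{-1}$ in the text so that the angle of the goal-to-robot (or obstacle-to-robot) vector keeps the correct sign:

```python
# Sketch of the move-to-goal (1), avoid-obstacle (2), and total (3)
# univector fields.
import math

def move_to_goal(p, g):
    # Unit vector at robot position p pointing toward goal g (eq. 1).
    theta = math.atan2(p[1] - g[1], p[0] - g[0])  # angle of goal->robot vector
    return (-math.cos(theta), -math.sin(theta))

def avoid_obstacle(p, o):
    # Unit vector at p pointing away from obstacle o (eq. 2).
    theta = math.atan2(p[1] - o[1], p[0] - o[0])
    return (math.cos(theta), math.sin(theta))

def total_univector(p, g, o, w_muf=1.0, w_auf=1.0):
    # Weighted combination of the two fields (eq. 3).
    m = move_to_goal(p, g)
    a = avoid_obstacle(p, o)
    return (w_muf * m[0] + w_auf * a[0], w_muf * m[1] + w_auf * a[1])

# Robot at the origin, goal on the +x axis: the move-to-goal field
# points in the +x direction.
vx, vy = move_to_goal((0.0, 0.0), (10.0, 0.0))
```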
# 3 Modifiable Walking Pattern Generator

The modifiable walking pattern generator (MWPG) extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase. The conventional 3D-LIPM without ZMP variation considers only the homogeneous solutions of the 3D-LIPM dynamic equation. By also taking the particular solutions into account, more extensive and unrestricted walking patterns can be generated through the ZMP variation. The solutions with both homogeneous and particular parts are as follows:

Sagittal motion:

$$ \begin{bmatrix} x_f \\ v_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} x_i \\ v_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{p}(t) dt \\ \int_0^T C_t \bar{p}(t) dt \end{bmatrix}, \quad (4) $$

---PAGE_BREAK---

Lateral motion:

$$ \begin{bmatrix} y_f \\ w_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} y_i \\ w_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{q}(t) dt \\ \int_0^T C_t \bar{q}(t) dt \end{bmatrix}, \quad (5) $$

where $(x_i, v_i)/(x_f, v_f)$ and $(y_i, w_i)/(y_f, w_f)$ represent the initial/final position and velocity of the CM in the sagittal and lateral planes, respectively. $C_t$ and $S_t$ are defined as $\cosh(t/T_c)$ and $\sinh(t/T_c)$, with time constant $T_c = \sqrt{Z_c/g}$. The functions $p(t)$ and $q(t)$ are the ZMP trajectories in the sagittal and lateral planes, respectively, and $\bar{p}(t) = p(T-t)$, $\bar{q}(t) = q(T-t)$. Through the variation of the ZMP, the walking state (WS), i.e., the state of the point mass of the 3D-LIPM in terms of CM position and linear velocity, can be moved to a desired WS within the region of feasible trajectories expanded by the particular solutions. By means of the MWPG, a humanoid robot can change both the sagittal and lateral step lengths, the rotation angle of the ankles, and the period of the walking pattern [14].
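The homogeneous part of (4) is the standard 3D-LIPM state transition. A minimal sketch follows, covering only the homogeneous dynamics (constant ZMP at the origin); the CM height `Zc` is an assumed value, not a parameter taken from the paper:

```python
# Homogeneous part of the 3D-LIPM transition (eq. 4): the CM state
# evolves as [x_f, v_f*Tc]^T = [[C_T, S_T], [S_T, C_T]] [x_i, v_i*Tc]^T
# with C_T = cosh(T/Tc), S_T = sinh(T/Tc), and Tc = sqrt(Zc/g).
import math

def lipm_step(x_i, v_i, T, Zc=0.24, g=9.81):
    Tc = math.sqrt(Zc / g)
    C_T, S_T = math.cosh(T / Tc), math.sinh(T / Tc)
    x_f = C_T * x_i + S_T * v_i * Tc
    v_f = (S_T * x_i + C_T * v_i * Tc) / Tc
    return x_f, v_f

# Composing two half-duration transitions equals one full transition
# (semigroup property of the homogeneous linear dynamics).
x1, v1 = lipm_step(0.01, 0.1, 0.2)
x2, v2 = lipm_step(x1, v1, 0.2)
xf, vf = lipm_step(0.01, 0.1, 0.4)
print(math.isclose(x2, xf), math.isclose(v2, vf))  # True True
```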
# 4 Footstep Planning Algorithm

In this section, the footstep planning algorithm for a humanoid robot is described. It decides the moving orientation at every footstep by the univector field navigation method and, using the determined orientations, calculates the exact foot placement. Subsequently, by inputting the moving direction and step length of the robot at every footstep to the MWPG, the joint trajectories satisfying the planned footsteps are generated.

## 4.1 Path Planning

To apply the univector field method to path generation for a humanoid robot, the following three issues are considered. To generate a natural and effective path, an obstacle boundary and a virtual obstacle [15] are introduced into the avoid-obstacle univector field, accounting for the obstacle's size and movement, respectively. In addition, a hyperbolic spiral univector field is developed as the move-to-goal univector field so that the robot reaches the destination with a desired orientation [13].

**Boundary of Avoid-Obstacle Univector Field.** The repulsive univector field of an obstacle is not generated at every position but only within a restricted range, by applying a boundary to the avoid-obstacle univector field. Moreover, the magnitude of the repulsive field decreases linearly as the robot moves away from the center of the obstacle; consequently, the robot is not influenced by the repulsive field in regions beyond the obstacle's boundary. Considering this boundary effect, the avoid-obstacle univector $\mathbf{v}_{auf}$ is redefined as

$$ \mathbf{v}_{auf} = k_b [\cos(\theta_{auf}) \;\; \sin(\theta_{auf})]^T, \quad (6) $$

where

$$ k_b = \frac{d_{boun} - (d_{ob} - o_{size})}{d_{boun}}, $$

---PAGE_BREAK---

$o_{size}$ is the obstacle's radius, $d_{boun}$ is the size of the boundary, and $k_b$ is the resulting scale factor. Introducing this boundary into the avoid-obstacle univector field yields a more effective path.
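A sketch of the bounded repulsive field (6) follows. Clamping $k_b$ to $[0, 1]$ is an assumption implied by the text ("not influenced beyond the boundary") rather than stated as a formula:

```python
# Bounded avoid-obstacle field (eq. 6): the repulsive magnitude k_b
# falls off linearly with distance from the obstacle surface and is
# taken as zero beyond the boundary d_boun (clamping assumed).
import math

def avoid_obstacle_bounded(p, o, o_size, d_boun):
    d_ob = math.hypot(p[0] - o[0], p[1] - o[1])
    k_b = (d_boun - (d_ob - o_size)) / d_boun
    k_b = max(0.0, min(1.0, k_b))  # no influence outside the boundary
    theta = math.atan2(p[1] - o[1], p[0] - o[0])
    return (k_b * math.cos(theta), k_b * math.sin(theta))

# At the obstacle surface the repulsion has full strength; beyond the
# boundary it vanishes.
near = avoid_obstacle_bounded((3.0, 0.0), (0.0, 0.0), o_size=3.0, d_boun=5.0)
far = avoid_obstacle_bounded((20.0, 0.0), (0.0, 0.0), o_size=3.0, d_boun=5.0)
print(near, far)  # (1.0, 0.0) (0.0, 0.0)
```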
**Virtual Obstacle.** The virtual obstacle is defined by adding a shifting vector to the center position of the real obstacle; the direction of the shifting vector is opposite to the robot's moving direction, and its magnitude is proportional to the robot's moving velocity. The center of the virtual obstacle is thus

$$[o_x^{\text{virtual}}, o_y^{\text{virtual}}]^T = [o_x^{\text{real}}, o_y^{\text{real}}]^T + \mathbf{s}, \quad (7)$$

$$\mathbf{s} = -k_v \mathbf{v}_{\text{robot}},$$

where $(o_x^{\text{virtual}}, o_y^{\text{virtual}})$ is the virtual obstacle's position, $(o_x^{\text{real}}, o_y^{\text{real}})$ is the real obstacle's position, $\mathbf{s}$ is the shifting vector, $k_v$ is the scale factor of the virtual obstacle, and $\mathbf{v}_{\text{robot}}$ is the robot's velocity vector. When calculating the avoid-obstacle univector, the virtual obstacle's position is used instead of the real one. With the virtual obstacle, the path generated at every step lets the robot avoid obstacles more safely and smoothly.
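Equation (7) can be sketched directly (a hypothetical helper; the $k_v$ value here is arbitrary, not the optimized one reported later):

```python
# Virtual obstacle (eq. 7): the obstacle center is shifted by
# s = -k_v * v_robot before the avoid-obstacle field is evaluated.
def virtual_obstacle(o_real, v_robot, k_v):
    s = (-k_v * v_robot[0], -k_v * v_robot[1])   # shifting vector
    return (o_real[0] + s[0], o_real[1] + s[1])  # shifted center

# A robot moving in +x shifts the virtual obstacle toward itself (-x),
# so avoidance starts earlier.
o_virt = virtual_obstacle((2.0, 1.0), v_robot=(0.5, 0.0), k_v=2.0)
print(o_virt)  # (1.0, 1.0)
```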
**Hyperbolic Spiral Univector Field.** The move-to-goal univector field is designed as a hyperbolic spiral so that the robot reaches the target point with a desired orientation. The hyperbolic spiral univector field $\mathbf{v}_{huf}$ is defined as

$$\mathbf{v}_{huf} = [\cos(\phi_h) \;\; \sin(\phi_h)]^T, \quad (8)$$

where

$$\phi_h = \begin{cases} \theta \pm \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho+k_r}\right) & \text{if } \rho > d_e \\ \theta \pm \frac{\pi}{2} \sqrt{\frac{\rho}{d_e}} & \text{if } 0 \le \rho \le d_e, \end{cases}$$

$\theta$ is the angle between the x-axis and the vector from the goal to the robot's position, and the sign $\pm$ encodes the direction of movement: $+$ when the robot moves clockwise and $-$ when it moves counterclockwise. $k_r$ is an adjustable parameter; the larger $k_r$ is, the smaller the maximal value of the curvature derivative and the smoother the contour of the spiral. $\rho$ is the distance between the center of the destination and the robot's position, and $d_e$ is a predefined radius that determines the size of the spiral.
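The piecewise heading of (8) can be sketched as follows (a hypothetical implementation; parameter values are arbitrary, and `atan2` replaces the implicit angle computation):

```python
# Hyperbolic spiral heading (eq. 8): outside the radius d_e the heading
# is the radial angle theta plus a spiral turn approaching pi/2 far away;
# inside d_e the turn shrinks like sqrt(rho/d_e).
import math

def spiral_heading(p, goal, d_e, k_r, clockwise=True):
    rho = math.hypot(p[0] - goal[0], p[1] - goal[1])
    theta = math.atan2(p[1] - goal[1], p[0] - goal[0])
    sign = 1.0 if clockwise else -1.0
    if rho > d_e:
        phi = theta + sign * (math.pi / 2) * (2 - (d_e + k_r) / (rho + k_r))
    else:
        phi = theta + sign * (math.pi / 2) * math.sqrt(rho / d_e)
    return phi

# At the goal itself (rho = 0) the spiral turn vanishes and the heading
# reduces to the radial angle.
phi0 = spiral_heading((0.0, 0.0), (0.0, 0.0), d_e=30.0, k_r=1.0)
print(phi0)  # 0.0
```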
By designing the move-to-goal univector field as a hyperbolic spiral, the robot can arrive at the destination with any orientation angle. In this paper, two hyperbolic spiral univector fields are combined to obtain the desired posture at the target position. The move-to-goal univector field is defined by

$$\phi_{\text{muf}} = \begin{cases} \theta_{\text{up}} + \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{up}}+k_r}\right) & \text{if } p_y^h > g_{\text{size}} \\ \theta_{\text{down}} - \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{down}}+k_r}\right) & \text{if } p_y^h < -g_{\text{size}} \\ \theta_{\text{dir}} & \text{otherwise}, \end{cases} \quad (9)$$

with

$$\rho_{\text{up}} = \sqrt{(p_x^h)^2 + (p_y^h - d_e - g_{\text{size}})^2}, \quad \rho_{\text{down}} = \sqrt{(p_x^h)^2 + (p_y^h + d_e + g_{\text{size}})^2},$$

---PAGE_BREAK---

$$ \theta_{up} = \tan^{-1}\left(\frac{p_y^h - d_e - g_{size}}{p_x^h}\right) + \theta_{dir}, \quad \theta_{down} = \tan^{-1}\left(\frac{p_y^h + d_e + g_{size}}{p_x^h}\right) + \theta_{dir}, $$

$$ \mathbf{p}^h = \mathbf{M}_{rot} \mathbf{M}_{trans} \mathbf{p}, $$

$$ \mathbf{M}_{trans} = \begin{bmatrix} 1 & 0 & -g_x \\ 0 & 1 & -g_y \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{M}_{rot} = \begin{bmatrix} \cos(-\theta_{dir}) & -\sin(-\theta_{dir}) & 0 \\ \sin(-\theta_{dir}) & \cos(-\theta_{dir}) & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$

$$ \mathbf{p} = [p_x \ p_y \ 1]^T, \quad \mathbf{p}^h = [p_x^h \ p_y^h \ 1]^T, $$

where $g_{size}$ is the radius of the goal region and $\theta_{dir}$ is the desired arrival angle at the target. Using this move-to-goal univector field, composed of two hyperbolic spiral univector fields, the robot can arrive at the goal at any arrival angle.
## 4.2 Footstep Planning

While a humanoid robot moves toward a destination, there are situations in which it should step over an obstacle, provided the obstacle is not too high. This is the main difference from path planning for a differential drive mobile robot, which must find a detour route around obstacles instead of stepping over them. In this section, a footstep planning algorithm is proposed that enables the robot to traverse obstacles effectively.

Stepping over obstacles instead of detouring is natural and efficient when the robot's moving direction can be maintained. The proposed algorithm enables the robot to step over obstacles with minimal step length while maintaining its moving direction.

**Fig. 1.** Stepping over an obstacle. (a) Left leg is supporting leg without additional step (b) Left leg is supporting leg with additional step (c) Right leg is supporting leg without additional step (d) Right leg is supporting leg with additional step.

---PAGE_BREAK---

**Fig. 2.** Stepping over an obstacle when an obstacle is in front of one leg

It is assumed that the obstacles are rectangles with narrow width and long length, as shown in Fig. 1.

The forward and backward step lengths from the supporting leg of a humanoid robot are restricted by hardware limitations. If an obstacle is wider than the robot's maximum step length, the robot cannot step over it. Hence, to step over the widest possible obstacle, the robot has to cross it with the shortest possible step length. That step length is determined by which leg is the supporting leg when the robot steps over the obstacle. Taking these facts into account, the proposed algorithm enables the robot to step over obstacles with the shortest step length. Fig. 1 shows the footprints produced by this algorithm when stepping over an obstacle. Figs. 1(a) and 1(d) show situations in which the left foot approaches the obstacle earlier than the right foot, and Figs. 1(b) and 1(c) show situations in which the right foot approaches the obstacle more closely than the left. In the cases of Figs. 1(a) and 1(b), the left leg is the appropriate supporting leg for the minimum step length; in Figs. 1(c) and 1(d), it is the right leg. Therefore, to make the left leg the supporting leg in Fig. 1(b) and the right leg the supporting leg in Fig. 1(d), one additional step is needed before stepping over the obstacle, while no such step is needed in Figs. 1(a) and 1(c).

There are also situations in which an obstacle lies in front of only one leg, so that the other leg can be placed without considering the obstacle. The proposed algorithm handles this case so that the robot steps over the obstacle effectively, like a human being. Fig. 2 shows the robot's footprints in this case.

## 4.3 Parameter Optimization by Evolutionary Programming
A humanoid robot is constrained in how far it can rotate its legs because of hardware limitations. Hence, when planning footsteps for a biped robot with the proposed algorithm, the maximum change in leg rotation has to be respected. The algorithm has seven parameters to assign: $k_v$ in the virtual obstacle; $d_{boun}$ in the avoid-obstacle univector field; $d_e$, $k_r$, and $g_{size}$ in the move-to-goal univector field; and $w_{muf}$ and $w_{auf}$ in the composition of the move-to-goal and avoid-obstacle univector fields. By selecting appropriate values for these parameters, the robot can reach the goal while keeping the change in leg rotation within the constraints. To generate the most effective path, EP is employed to choose the parameter values. The fitness function in EP is designed considering the following:

---PAGE_BREAK---

* The robot should arrive at the destination with minimum position error.
* The facing direction of the robot at the destination should be the desired one.
* The robot should not collide with obstacles.
* The change in leg rotation should not exceed the constraint value.

Consequently, the fitness function is defined as

$$f = -(k_p P_{err} + k_q | \theta_{err} | + k_{col} N_{col} + k_{const} N_{const}), \quad (10)$$

where $N_{const}$ is the number of violations of the leg-rotation constraint, $N_{col}$ is the number of collisions between the robot and obstacles, $\theta_{err}$ is the difference between the desired orientation and the robot's orientation at the goal, $P_{err}$ is the position error at the goal, and $k_{const}, k_{col}, k_q, k_p$ are constants.
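The scoring in (10) can be sketched as follows (the weight values `k_*` are illustrative assumptions; the paper does not report them):

```python
# Fitness function (eq. 10): a candidate parameter set is scored by
# penalizing position error, orientation error, collisions, and
# leg-rotation constraint violations. Weight values are assumed.
def fitness(p_err, theta_err, n_col, n_const,
            k_p=1.0, k_q=1.0, k_col=100.0, k_const=100.0):
    return -(k_p * p_err + k_q * abs(theta_err)
             + k_col * n_col + k_const * n_const)

# A collision-free trial that hits the goal exactly attains the
# maximum fitness of 0; any error or violation lowers it.
best = fitness(0.0, 0.0, 0, 0)
worse = fitness(0.5, -0.25, 1, 0)
print(best >= worse)  # True
```

In EP, candidates maximizing this fitness survive to the next generation, which is how the seven parameters reported in Section 5 were obtained.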
|
| 173 |
+
|
| 174 |
+
# 5 Simulation Results
|
| 175 |
+
|
| 176 |
+
HSR-VIII (Fig. 3(a)) is a small-sized humanoid robot that has been continuously undergoing redesign and development in RIT Lab, KAIST since 2,000. Its height and weight are 52.8 cm and 5.5 kg, respectively. It has 26 DOFs which consists of 12 DC motors with harmonic drives for reduction gears in the lower body and 14 RC servo motors in the upper body. HSR-VIII was modeled by Webot which is the 3D mobile robotics simulation software [16]. Simulations were carried out with Webot of the HSR-VIII model by applying the proposed footstep planning algorithm.
|
| 177 |
+
|
| 178 |
+
Through the simulation, seven parameters in the algorithm were optimized by EP. Maximum rotating angle of the robot's ankles was selected heuristically as 40°. After 100 generations, the parameters were optimized as $k_v=1.94$, $d_{boun}=20.09$, $d_e=30.04$, $k_r=0.99$, $g_{size}=0.94$, $w_{muf}=1.96$, $w_{auf}=1.46$.
Fig. 3(b) shows the sequence of the robot's footsteps as a 2D simulation result, where there were ten obstacles of three different kinds: five static circular obstacles
**Fig. 3.** (a) HSR-VIII. (b) Sequence of footsteps in the environment with ten obstacles of three different kinds.
**Fig. 4.** Snapshots of the 3D simulation result by Webots in the environment with ten obstacles of three different kinds. (The goal is the circle in the right bottom corner.)
and two moving circular obstacles and three static rectangular obstacles with a height of 1.0 cm. The desired angle at the destination was fixed at 90° from the x-axis. As shown in the figure, with the proposed algorithm the robot moves from the start point to the target goal in the right bottom corner, avoiding the static and moving circular obstacles and stepping over the static rectangular ones by adjusting its step length. In addition, the robot faces the desired orientation at the goal. Fig. 4 shows the 3D simulation result by Webots, where the environment is the same as that used in the 2D simulation. A similar result was obtained as in Fig. 3(b). In particular, in the third and sixth snapshots of Fig. 4, it can be seen that the robot turns before colliding with the moving circular obstacles by predicting their movement.

# 6 Conclusion

A real-time footstep planning algorithm was proposed for a humanoid robot to travel to a destination while avoiding and stepping over obstacles. The univector field method was adopted to determine the heading direction, and from the determined orientations, the exact foot placements were calculated. The proposed algorithm generated an efficient path by applying a boundary to the avoid-obstacle univector field and introducing the virtual obstacle concept. Furthermore, it enabled a robot to reach a destination with a desired orientation by employing the hyperbolic spiral univector field. The algorithm also allowed the robot to step over an obstacle with minimal step length while maintaining its heading orientation, and it considered the situation when an obstacle is in front of only one leg: in this case, the robot steps over the obstacle while placing the other leg properly as a supporting one. The effectiveness of the algorithm was demonstrated by computer simulations in a dynamic environment. As future work, experiments with the real small-sized humanoid robot HSR-VIII will be carried out using a global camera to demonstrate the applicability of the proposed algorithm.

# References


1. Nishiwaki, K., Sugihara, T., Kagami, S., Kanehiro, F., Inaba, M., Inoue, H.: Design and Development of Research Platform for Perception-Action Integration in Humanoid Robot: H6. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1559–1564 (2000)

2. Kaneko, K., Kanehiro, F., Kajita, S., Hirukawa, H., Kawasaki, T., Hirata, M., Akachi, K., Isozumi, T.: Humanoid Robot HRP-2. In: Proc. IEEE Int'l. Conf. on Robotics and Automation, ICRA 2004 (2004)

3. Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., Fujimura, K.: The intelligent ASIMO: system overview and integration. In: Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2478–2483 (2002)

4. Ogura, Y., Aikawa, H., Shimomura, K., Kondo, H., Morishima, A., Lim, H., Takanishi, A.: Development of a New Humanoid Robot WABIAN-2. In: Proc. IEEE Int'l. Conf. on Robotics and Automation, ICRA 2006 (2006)

5. Kim, Y.-D., Lee, B.-J., Ryu, J.-H., Kim, J.-H.: Landing Force Control for Humanoid Robot by Time-Domain Passivity Approach. IEEE Trans. on Robotics 23(6), 1294–1301 (2007)

6. Kanal, L., Kumar, V. (eds.): Search in Artificial Intelligence. Springer, New York (1988)

7. Borenstein, J., Koren, Y.: Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst., Man, Cybern. 20, 1179–1187 (1989)

8. Borenstein, J., Koren, Y.: The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Trans. Syst., Man, Cybern. 7, 278–288 (1991)

9. Yagi, M., Lumelsky, V.: Biped Robot Locomotion in Scenes with Unknown Obstacles. In: Proc. IEEE Int'l. Conf. on Robotics and Automation (ICRA 1999), Detroit, MI, May 1999, pp. 375–380 (1999)

10. Chestnutt, J., Lau, M., Cheung, G., Kuffner, J., Hodgins, J., Kanade, T.: Footstep planning for the Honda ASIMO humanoid. In: Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 631–636 (2005)

11. Ayaz, Y., Munawar, K., Bilal Malik, M., Konno, A., Uchiyama, M.: Human-Like Approach to Footstep Planning Among Obstacles for Humanoid Robots. In: Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 5490–5495 (2006)

12. Kim, Y.-J., Kim, J.-H., Kwon, D.-S.: Evolutionary Programming-Based Uni-vector Field Navigation Method for Fast Mobile Robots. IEEE Trans. on Systems, Man and Cybernetics - Part B - Cybernetics 31(3), 450–458 (2001)

13. Kim, Y.-J.: Univector Field Navigation Method for Fast Mobile Robots. Ph.D. Thesis, Korea Advanced Institute of Science and Technology

14. Lee, B.-J., Stonier, D., Kim, Y.-D., Yoo, J.-K., Kim, J.-H.: Modifiable Walking Pattern of a Humanoid Robot by Using Allowable ZMP Variation. IEEE Transactions on Robotics 24(4), 917–925 (2008)

15. Lim, Y.-S., Choi, S.-H., Kim, J.-H., Kim, D.-H.: Evolutionary Univector Field-based Navigation with Collision Avoidance for Mobile Robot. In: Proc. 17th World Congress of the International Federation of Automatic Control, Seoul, Korea (July 2008)

16. Michel, O.: Cyberbotics Ltd. Webots™: Professional mobile robot simulation. Int. J. of Advanced Robotic Systems 1(1), 39–42 (2004)
samples/texts_merged/3450399.md
ADDED
@@ -0,0 +1,67 @@
# Note 7 Supplement: RSA Extras

Computer Science 70
University of California, Berkeley

Summer 2018

## 1 One-Time Pad
The exclusive OR (XOR) $x \oplus y$ of two bits $x$ and $y$ is defined by:
| $x$ | $y$ | $x \oplus y$ |
|:---:|:---:|:------------:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In other words, $x \oplus y$ equals 1 if and only if $x$ and $y$ are different bits. Notice that $x \oplus y$ is the same as $x + y \bmod 2$. For any $x \in \{0, 1\}$, we have $x \oplus x = 0$ and $x \oplus 0 = x$. So, for any $y \in \{0, 1\}$, we have $y \oplus x \oplus x = y \oplus 0 = y$.
We can extend the XOR operation to work on bit strings $x$ and $y$ of the same length by applying the XOR operation bitwise.
**Example 1.** $01000 \oplus 11100 = 10100$.
For bit strings $x$ and $y$ of the same length, we again have $y \oplus x \oplus x = y$. This actually gives us the simplest method to encrypt our messages, known as the **one-time pad**. To send a message $m$ (a bit string), the sender and receiver both agree (in advance) on a secret key $k$, which is a bit string of the same length as the message. The sender sends $m \oplus k$ to the receiver, and the receiver decrypts the message by $m \oplus k \oplus k = m$.
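The scheme above can be sketched in a few lines of Python, operating on bytes rather than single bits (`secrets.token_bytes` draws the uniformly random key):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

m = b"ATTACK AT DAWN"
k = secrets.token_bytes(len(m))  # secret key, same length as the message
c = xor_bytes(m, k)              # sender transmits m XOR k

assert xor_bytes(c, k) == m      # receiver recovers m with the same key
```

Decryption works because XOR-ing with the same key twice cancels out, exactly as $m \oplus k \oplus k = m$.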
If an eavesdropper intercepts the encrypted message $m \oplus k$, then without knowledge of the secret key $k$, the one-time pad is unbreakable. Indeed, since the secret key is unknown, the eavesdropper must consider any secret key to be possible. Given any message $m'$, we have $m' \oplus (m \oplus k \oplus m') = m \oplus k$, which means that the encrypted message $m \oplus k$ could also have come from the message $m'$ with the secret key $m \oplus k \oplus m'$. We have just shown that the encrypted message could have come from *any* starting message, which means that the eavesdropper learns nothing about the original message.
The one-time pad is not very convenient, however, because in order to guarantee the safety of the scheme, the secret key should really be discarded after one use (hence the name “one-time pad”). Since the sender and receiver must agree upon the secret key beforehand, the inability to reuse the secret key significantly hinders the practicality of the scheme. Nevertheless, the one-time pad can be useful when combined with other schemes.
## 2 Application of RSA: Digital Signatures
A signature is meant to provide proof of an individual's identity. In order for the signature to be a valid proof, the signature must have the property that no other individual can produce the same signature. Unfortunately, in the real world, we know that signatures can be forged.
Inspired by this idea, we introduce the concept of a **digital signature**. As before, a digital signature is supposed to provide proof of an individual's identity. However, the property that “no other individual can produce the same signature” is replaced by the property that “no other individual can reliably produce the same signature *efficiently*”. The idea is that someone who wants to forge the signature must use some brute force method which is computationally infeasible, e.g., would require centuries or more to compute.
Suppose that you have an RSA public key $(N, e)$ with corresponding private key $d$. One way to provide a “signature” is to reveal your private key $d$. If we assume that RSA is unbreakable, then the private key cannot be computed efficiently from the public key, so this would indeed constitute a signature. Unfortunately, this has the drawback of revealing your private key.
Instead, the signature scheme proceeds as follows. A verifier provides the individual with some randomly chosen $x \in \{0, 1, \dots, N-1\}$ and asks the individual for $x^d \mod N$. The verifier can then check that $x^{ed} \equiv x \pmod N$.
If the individual knows the private key $d$, then this computation is fast. However, a forger without knowledge of the private key must labor to find the $y \in \{0, 1, \dots, N-1\}$ such that $y^e \equiv x \pmod N$. If RSA is unbreakable, then this cannot be done efficiently. Presently we believe that you cannot do meaningfully better than exhaustive search, which can easily take centuries if $N$ is large enough.
The verifier can play this game with the individual multiple times until the verifier is satisfied that the individual is not forging the signature.
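The challenge-response protocol can be sketched with toy numbers; here $N = 5 \cdot 11 = 55$, $e = 3$, $d = 27$ (since $ed = 81 \equiv 1 \pmod{40}$), whereas real keys are thousands of bits:

```python
N, e, d = 55, 3, 27            # toy key pair: ed = 81 ≡ 1 (mod 40)

def sign(x):                   # the individual, who knows the private key d
    return pow(x, d, N)

def verify(x, y):              # the verifier checks y^e ≡ x (mod N)
    return pow(y, e, N) == x

x = 42                         # verifier's random challenge
assert verify(x, sign(x))      # a genuine signer always passes
assert not verify(x, 7)        # a guessed response almost surely fails
```

The verifier needs only the public key $(N, e)$, while producing a passing $y$ for a random $x$ requires the private key $d$.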
## 3 RSA Attacks
The RSA scheme presented in the notes is known as “textbook RSA”. When RSA is used in practice, there are extra bells and whistles that are added to the scheme to improve its security. In this section we describe a couple of known attacks against the RSA scheme.
The first attack warns against using RSA alone. Suppose that you take your credit card $m$ and pass it to the encryption function $E$ to get your encrypted credit card number $E(m)$. The encrypted credit card number $E(m)$ is then sent to a company such as Amazon in order to complete a credit card transaction. However, an eavesdropper sees $E(m)$. The eavesdropper can then send $E(m)$ to the company again in order to make his or her own purchases, effectively stealing your credit card.
The method to prevent this attack is to take your credit card number $m$, and in each new transaction, pad your credit card number with a randomly generated string at the end to form a longer, random string $m'$. Then, send $E(m')$ to the company. This is called *RSA with padding*. The randomness ensures that even if you send the same message twice, the encrypted messages will most likely differ, so that if the company receives the same encrypted message $E(m)$ twice in a row, then it will know to be suspicious.
The second attack is about unwittingly giving away information. Say that an attacker intercepts the encrypted message $E(m)$. Since the attacker cannot decrypt the message, it asks the company to decrypt the message in a roundabout way. First the attacker picks a random number $r$, and asks the company to please decrypt the message $E(m) \cdot r^e \bmod N$, where $(N, e)$ is the public key. After multiplying $E(m)$ by $r^e$, the result is a seemingly innocuous string, so the company complies with the request, sending back the decrypted message $mr$. Now, since the attacker knows $r$, he or she also knows $r^{-1} \bmod N$, and using this, the attacker can recover the original message $m$.
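This blinding attack is easy to reproduce with the same toy key pair (an assumed example with $N = 55$, $e = 3$, $d = 27$; `pow(r, -1, N)` computes the modular inverse, Python 3.8+):

```python
N, e, d = 55, 3, 27          # toy RSA key pair (assumption for illustration)
m = 8                        # the original message
c = pow(m, e, N)             # intercepted ciphertext E(m) = m^e mod N

r = 6                        # attacker's random r, invertible mod N
blinded = (c * pow(r, e, N)) % N      # looks like an unrelated ciphertext
mr = pow(blinded, d, N)               # the company decrypts: m*r mod N
recovered = (mr * pow(r, -1, N)) % N  # attacker unblinds with r^{-1}
assert recovered == m
```

The attack works because RSA is multiplicative: $E(m) \cdot r^e \equiv (mr)^e \pmod N$, so decrypting the blinded ciphertext yields $mr$, and multiplying by $r^{-1}$ recovers $m$.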
It may be surprising to learn that our cryptosystems (such as RSA) are not *provably* secure, but nevertheless they are used every day.
samples/texts_merged/3461249.md
ADDED
@@ -0,0 +1,272 @@
# Generalizing Robot Imitation Learning with Invariant Hidden Semi-Markov Models
Ajay Kumar Tanwani¶,§, Jonathan Lee§, Brijen Thananjeyan§, Michael Laskey§, Sanjay Krishnan§, Roy Fox§, Ken Goldberg§, Sylvain Calinon¶
**Abstract.** Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, geometric aspects of the objects such as position, size and orientation, and the viewpoint of the observer in the demonstrations. In this paper, we propose an algorithm that learns a joint probability density function of the demonstrations with invariant formulations of hidden semi-Markov models to extract invariant segments (also termed sub-goals or options), and smoothly follows the generated sequence of states with a linear quadratic tracking controller. The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapts the segments to environmental changes in a systematic manner. We present variants of this algorithm in latent space with low-rank covariance decompositions, semi-tied covariances, and non-parametric online estimation of model parameters under small variance asymptotics, yielding considerably lower sample and model complexity for acquiring new manipulation skills. The algorithm allows a Baxter robot to learn a pick-and-place task while avoiding a movable obstacle based on only 4 kinesthetic demonstrations.
**Keywords:** hidden Markov models, imitation learning, adaptive systems
## 1 Introduction
Generative models are widely used in robot imitation learning to estimate the distribution of the data and to regenerate samples from the model [1]. Common applications include probability density function estimation, image regeneration, and dimensionality reduction. The parameters of the model encode the task structure, which is inferred from the demonstrations. In contrast to direct trajectory learning from demonstrations, many robotic applications require a higher-level contextual understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position and orientation of objects, and the viewpoint of the observer. A recent trend in imitation learning forgoes such task structure in favour of end-to-end supervised learning, which requires a large number of training demonstrations.
§University of California, Berkeley.
¶Idiap Research Institute, Switzerland.
Corresponding author: ajay.tanwani@berkeley.edu
Fig. 1: Conceptual illustration of hidden semi-Markov model (HSMM) for imitation learning: (left) 3-dimensional Z-shaped demonstrations composed of 5 equally spaced trajectory samples, (middle) demonstrations are encoded with a 3 state HMM represented by Gaussians (shown as ellipsoids) that represent the blue, green and red segments respectively. The transition graph shows a duration model (Gaussian) next to each node, (right) the generative model is combined with linear quadratic tracking (LQT) to synthesize motion in performing robot manipulation tasks from 5 different initial conditions marked with orange squares (see also Fig. 2).
The focus of this paper is to learn the joint probability density function of the human demonstrations with a family of **Hidden Markov Models (HMMs)** in an **unsupervised** manner [20]. We combine tools from statistical machine learning and optimal control to segment the demonstrations into different components or sub-goals that are sequenced together to perform manipulation tasks in a smooth manner. We first present a simple algorithm for imitation learning that combines the decoded state sequence of a hidden semi-Markov model [20,30] with a linear quadratic tracking controller to follow the demonstrated movement [2] (see Fig. 1). We then augment the model with a task-parameterized formulation such that it can be systematically adapted to changing situations such as pose/size of the objects in the environment [4,23,27]. We present latent space formulations of our approach to exploit the task structure using: 1) mixture of factor analyzers decomposition of the covariance matrix [14], 2) semi-tied covariance matrices of the mixture model [23], and 3) Bayesian non-parametric formulation of the model with Hierarchical Dirichlet process (HDP) for online learning under small variance asymptotics [24]. The paper unifies and extends our previous work on encoding manipulation skills in a task-adaptive manner [22,23,24]. Our objective is to reduce the number of demonstrations required for learning a new task, while ensuring effective generalization in new environmental situations.
## 1.1 Related Work
Imitation learning provides a promising approach to facilitate robot learning in the most 'natural' way. The main challenges in imitation learning include [16]: 1) **what-to-learn** – acquiring meaningful data to represent the important features of the task from demonstrations, and 2) **how-to-learn** – learning a control policy from the features to reproduce the demonstrated behaviour. Imitation learning algorithms typically fall into **behaviour cloning** or **inverse reinforcement learning (IRL)** approaches. IRL aims to recover the unknown reward function that is being optimized in the demonstrations, while behaviour cloning approaches directly learn from human demonstrations in a supervised manner. Prominent approaches to imitation learning include Dynamic Movement Primitives [9], Generative Adversarial Imitation Learning [8], one-shot imitation learning [5] and so on [18].
This paper emphasizes learning manipulation skills from human demonstrations in an unsupervised manner using a family of hidden Markov models by sequencing the atomic movement segments or primitives. HMMs have been typically used for recognition and generation of movement skills in robotics [13]. Other related application contexts in imitation learning include options framework [10], sequencing primitives [15], and neural task programs [29].
A number of variants of HMMs have been proposed to address some of its shortcomings, including: 1) how to bias learning towards models with longer self-dwelling states, 2) how to robustly estimate the parameters with high-dimensional noisy data, 3) how to adapt the model with newly observed data, and 4) how to estimate the number of states that the model should possess. For example, [11] used HMMs to incrementally group whole-body motions based on their relative distance in HMM space. [13] presented an iterative motion primitive refinement approach with HMMs. [17] used the Beta Process Autoregressive HMM for learning from unstructured demonstrations. Figueroa et al. used the transformation invariant covariance matrix for encoding tasks with a Bayesian non-parametric HMM [6].
In this paper, we address these shortcomings with an algorithm that learns a hidden semi-Markov model [20,30] from a few human demonstrations for segmentation, recognition, and synthesis of robot manipulation tasks (see Sec. 2). The algorithm observes the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest, and adapts the model according to the environmental changes in a systematic manner in Sec. 3. Capturing such invariant representations allows us to encode the task variations more compactly than a standard regression formulation. We present variants of the algorithm in latent space to exploit the task structure in Sec. 4. In Sec. 5, we show the application of our approach to learning a pick-and-place task from a few demonstrations, with an outlook to our future work.
## 2 Hidden Markov Models
**Hidden Markov models (HMMs)** encapsulate the spatio-temporal information by augmenting a mixture model with latent states that sequentially evolve over time in the demonstrations [20]. An HMM is thus defined as a doubly stochastic process, one with a sequence of hidden states and another with a sequence of observations/emissions. Spatio-temporal encoding with HMMs can handle movements with variable durations, recurring patterns, options in the movement, or partial/unaligned demonstrations. Without loss of generality, we will present our formulation with semi-Markov models for the remainder of the paper. Semi-Markov models relax the Markovian structure of state transitions by relying not only upon the current state but also on the duration/elapsed time in the current state, i.e., the underlying process is defined by a *semi-Markov chain* with a variable duration time for each state. The state duration is a random integer variable that assumes values in the set $\{1, 2, \dots, s^{\max}\}$, corresponding to the number of observations produced in a given state before transitioning to the next state. **Hidden semi-Markov models** (HSMMs) associate an observable output distribution with each state in a semi-Markov chain [30], similar to how a sequence of observations is associated with a Markov chain in an HMM.
Let $\{\xi_t\}_{t=1}^T$ denote the sequence of observations with $\xi_t \in \mathbb{R}^D$ collected while demonstrating a manipulation task. The observations may represent visual features, kinesthetic data such as the pose and velocities of the end-effector of the human arm, haptic information, or any arbitrary features defining the task variables of the environment. The observation sequence is associated with a hidden state sequence $\{z_t\}_{t=1}^T$ with $z_t \in \{1 \dots K\}$ belonging to the discrete set of $K$ cluster indices. The cluster indices correspond to different segments of the task such as reach, grasp, move, etc. We want to learn the joint probability density of the observation sequence and the hidden state sequence. The transition from one segment $i$ to another segment $j$ is encoded by the transition matrix $a \in \mathbb{R}^{K \times K}$ with $a_{i,j} \triangleq P(z_t = j | z_{t-1} = i)$. The parameters $\{\mu_j^S, \Sigma_j^S\}$ represent the mean and the variance of staying $s$ consecutive time steps in state $j$, with the duration probability $p(s)$ estimated by a Gaussian $\mathcal{N}(s|\mu_j^S, \Sigma_j^S)$. The hidden state follows a categorical distribution with $z_t \sim \text{Cat}(\pi_{z_{t-1}})$, where $\pi_{z_{t-1}} \in \mathbb{R}^K$ is the next-state transition distribution over state $z_{t-1}$ with $\Pi_i$ as the initial probability, and the observation $\xi_t$ is drawn from the output distribution of state $j$, described by a multivariate Gaussian with parameters $\{\mu_j, \Sigma_j\}$. The overall parameter set for an HSMM is defined by $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i, \mu_i^S, \Sigma_i^S\}_{i=1}^K$.
## 2.1 Encoding with HSMM
For learning and inference in a HMM [20], we make use of the intermediary variables as: 1) **forward variable**, $\alpha_{t,i}^{HMM} \triangleq P(z_t = i, \xi_1, ..., \xi_t|\theta)$: probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, ..., \xi_t\}$, 2) **backward variable**, $\beta_{t,i}^{HMM} \triangleq P(\xi_{t+1}, ..., \xi_T|z_t = i, \theta)$: probability of the partial observation sequence $\{\xi_{t+1}, ..., \xi_T\}$ given that we are in the $i$-th state at time step $t$, 3) **smoothed node marginal** $\gamma_{t,i}^{HMM} \triangleq P(z_t = i|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ given the full observation sequence $\xi$, and 4) **smoothed edge marginal** $\zeta_{t,i,j}^{HMM} \triangleq P(z_t = i, z_{t+1} = j|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ and in state $j$ at time step $t+1$ given the full observation sequence $\xi$. Parameters $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i\}_{i=1}^K$ are estimated using the EM algorithm for HMMs, and the duration parameters $\{\mu_i^S, \Sigma_i^S\}_{i=1}^K$ are estimated empirically from the data after training using the most likely hidden state sequence $z_t = \{z_1...z_T\}$ (see supplementary materials for details).
## 2.2 Decoding from HSMM
Given the learned model parameters, the probability of the observed sequence $\{\xi_1 \dots \xi_t\}$ to be in a hidden state $z_t = i$ at the end of the sequence (also known as the filtering problem) is computed with the help of the forward variable as
$$P(z_t | \xi_1, \dots, \xi_t) = h_{t,i}^{\text{HMM}} = \frac{\alpha_{t,i}^{\text{HMM}}}{\sum_{k=1}^{K} \alpha_{t,k}^{\text{HMM}}} = \frac{\pi_i \mathcal{N}(\xi_t | \mu_i, \Sigma_i)}{\sum_{k=1}^{K} \pi_k \mathcal{N}(\xi_t | \mu_k, \Sigma_k)}. \quad (1)$$
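Eq. (1) amounts to computing Gaussian responsibilities; a minimal sketch with hypothetical model parameters:

```python
import numpy as np

def filtering_probs(xi, priors, means, covs):
    """Gaussian responsibilities of Eq. (1): normalized products of the
    mixing weights and the state output densities at observation xi."""
    D = len(xi)
    w = np.empty(len(priors))
    for i, (pi_i, mu, sigma) in enumerate(zip(priors, means, covs)):
        diff = xi - mu
        norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(sigma))
        w[i] = pi_i * np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)) / norm
    return w / w.sum()
```

An observation close to one state's mean receives nearly all of the probability mass for that state, and the probabilities always sum to one.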
Sampling from the model for predicting the sequence of states over the next time horizon $P(z_t, z_{t+1}, \dots, z_{T_p} | \xi_1, \dots, \xi_t)$ can be done in two ways: **1) stochastic sampling:** the sequence of states is sampled in a probabilistic manner given the state duration and the state transition probabilities. By stochastic sampling, motions that contain different options and do not evolve only on a single path can also be represented. Starting from the initial state $z_t = i$, the $s$ duration steps are sampled from $\{\mu_i^S, \Sigma_i^S\}$, after which the next transition state is sampled $z_{t+s+1} \sim \pi_{z_{t+s}}$. The procedure is repeated for the given time horizon in a receding horizon manner; **2) deterministic sampling:** the most likely sequence of states is sampled and remains unchanged in successive sampling trials. We use the forward variable of HSMM for deterministic sampling from the model. The forward variable $\alpha_{t,i}^{\text{HMM}} \triangleq P(z_t = i, \xi_1, \dots, \xi_t|\theta)$ requires marginalizing over the duration steps along with all possible state sequences. The probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, \dots, \xi_t\}$ is now specified as [30]
$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} \mathcal{N}(s|\mu_i^S, \Sigma_i^S) \prod_{c=t-s+1}^{t} \mathcal{N}(\xi_c | \mu_i, \Sigma_i), \quad (2)$$
where the initialization is given by $\alpha_{1,i}^{\text{HSMM}} = \Pi_i \mathcal{N}(1|\mu_i^S, \Sigma_i^S) \mathcal{N}(\xi_1|\mu_i, \Sigma_i)$, and the output distribution in state $i$ is conditionally independent for the $s$ duration steps, given as $\prod_{c=t-s+1}^{t} \mathcal{N}(\xi_c | \mu_i, \Sigma_i)$. Note that for $t < s^{\max}$, the sum over duration steps is computed over $t-1$ steps instead of $s^{\max}$. Without the observation sequence for the next time steps, the forward variable simplifies to
$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} \mathcal{N}(s|\mu_i^S, \Sigma_i^S). \quad (3)$$
The forward variable is used to plan the movement sequence for the next $T_p$ steps with $t = t+1, \dots, T_p$. During prediction, we only use the transition matrix and the duration model to plan the future evolution of the initial/current state and omit the influence of the spatial data that we cannot observe, i.e., $\mathcal{N}(\xi_t|\mu_i, \Sigma_i) = 1$ for $t > 1$. This is used to retrieve a step-wise reference trajectory $\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$ from a given state sequence $z_t$ computed from the forward variable with,
$$\{z_t, \dots, z_{T_p}\} \;\;\text{with}\;\; z_t = \arg\max_i \alpha_{t,i}^{\text{HSMM}}, \quad \hat{\mu}_t = \mu_{z_t}, \quad \hat{\Sigma}_t = \Sigma_{z_t}. \quad (4)$$
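
As an illustration, the prediction step of Eqs. (3)-(4) can be sketched as follows; the function and variable names (`predict_state_sequence`, transition matrix `A`, duration parameters `mu_s`, `sigma_s`) are illustrative, and the duration Gaussian is simply evaluated at integer steps:

```python
import numpy as np

def predict_state_sequence(alpha0, A, mu_s, sigma_s, horizon, s_max=20):
    """Propagate the forward variable without new observations (Eq. (3))
    and decode the most likely state at each step (Eq. (4))."""
    K = len(mu_s)
    alphas = [np.asarray(alpha0, dtype=float)]
    for _ in range(horizon):
        t = len(alphas)
        alpha_new = np.zeros(K)
        for i in range(K):
            for s in range(1, min(s_max, t) + 1):
                # Gaussian duration probability N(s | mu_s[i], sigma_s[i]^2)
                p_dur = np.exp(-0.5 * ((s - mu_s[i]) / sigma_s[i]) ** 2) \
                    / (sigma_s[i] * np.sqrt(2.0 * np.pi))
                alpha_new[i] += alphas[t - s] @ A[:, i] * p_dur
        alphas.append(alpha_new / alpha_new.sum())  # rescale for stability
    return [int(np.argmax(a)) for a in alphas[1:]]
```

With a left-to-right transition matrix, the decoded sequence dwells in each state for roughly the learned duration before moving on.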
Fig. 2 shows a conceptual representation of the step-wise sequence of states generated by deterministically sampling from HSMM encoding of the Z-shaped data. In the next section, we show how to synthesise robot movement from this step-wise sequence of states in a smooth manner.
Fig. 2: Sampling from HSMM from an unseen initial state $\xi_0$ over the next time horizon and tracking the step-wise desired sequence of states $\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$ with a linear quadratic tracking controller. Note that this converges although $\xi_0$ was not previously encountered.
## 2.3 Motion Generation with Linear Quadratic Tracking
We formulate the motion generation problem given the step-wise desired sequence of states $\{\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)\}_{t=1}^{T_p}$ as sequential optimization of a scalar cost function with a linear quadratic tracker (LQT) [2]. The control policy $u_t$ at each time step is obtained by minimizing the cost function over the finite time horizon $T_p$,
$$ c(\xi_{1:T_p}, u_{1:T_p}) = \sum_{t=1}^{T_p} (\xi_t - \hat{\mu}_t)^{\top} Q_t (\xi_t - \hat{\mu}_t) + u_t^{\top} R_t u_t, \quad (5) $$
s.t. $\xi_{t+1} = A_d\xi_t + B_d u_t,$
starting from the initial state $\xi_1$ and following the discrete linear dynamical system specified by $A_d$ and $B_d$. We consider a linear time-invariant double integrator system to describe the system dynamics. Alternatively, a time-varying linearization of the system dynamics along the reference trajectory can also be used without loss of generality. Both the discrete- and continuous-time linear quadratic regulator/tracker formulations can be used to follow the desired trajectory. The discrete-time formulation, however, gives numerically stable results for a wide range of values of $R$. The control law $u_t^*$ that minimizes the cost function in Eq. (5) over the finite horizon, subject to the linear dynamics in discrete time, is given as
$$ u_t^* = K_t(\hat{\mu}_t - \xi_t) + u_t^{\text{FF}}, \quad (6) $$
where $K_t = [K_t^P, K_t^V]$ are the full stiffness and damping matrices of the feedback term, and $u_t^{\text{FF}}$ is the feedforward term (see supplementary materials for computing the gains). Fig. 2 shows the results of applying discrete LQT to the desired step-wise sequence of states sampled from an HSMM encoding of the Z-shaped demonstrations. Note that the gains can be precomputed before simulating the system if the reference trajectory does not change during the reproduction of the task. The resulting trajectory $\xi_t^*$ smoothly tracks the step-wise reference trajectory $\hat{\mu}_t$, and the gains $K_t^P, K_t^V$ locally stabilize the system along $\xi_t^*$ in accordance with the precision required during the task.

Fig. 3: Task-parameterized formulation of HSMM: four demonstrations on the left are observed from two coordinate systems that define the start and end position of each demonstration (starting at the purple position and ending at the green position). The generative model is learned in the respective coordinate systems. The model parameters in the respective coordinate systems are adapted to new, previously unseen object positions by computing the products of linearly transformed Gaussian mixture components. The resulting HSMM is combined with LQT for smooth retrieval of manipulation tasks.
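
As a minimal sketch of the discrete-time LQT of Eqs. (5)-(6), the following backward Riccati recursion computes the feedback gains and feedforward terms, and applies them to a 1-D double integrator tracking a step-wise reference. All names, weights, and dimensions are illustrative, not the values used in the paper:

```python
import numpy as np

def lqt_gains(A, B, Qs, R, mus):
    """Backward Riccati recursion for the finite-horizon discrete-time
    linear quadratic tracker; Qs and mus hold the step-wise precision
    matrices and reference means."""
    T = len(mus)
    P = Qs[-1].copy()
    p = -Qs[-1] @ mus[-1]
    Ks, ks = [None] * (T - 1), [None] * (T - 1)
    for t in range(T - 2, -1, -1):
        M = R + B.T @ P @ B
        K = np.linalg.solve(M, B.T @ P @ A)   # feedback gain
        k = -np.linalg.solve(M, B.T @ p)      # feedforward term
        P = Qs[t] + A.T @ P @ (A - B @ K)
        p = -Qs[t] @ mus[t] + (A - B @ K).T @ p
        Ks[t], ks[t] = K, k
    return Ks, ks

# usage: 1-D double integrator tracking a step-wise position reference
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
T = 100
mus = [np.zeros(2) if t < 50 else np.array([1.0, 0.0]) for t in range(T)]
Qs = [np.diag([100.0, 1.0]) for _ in range(T)]
R = 1e-3 * np.eye(1)
Ks, ks = lqt_gains(A, B, Qs, R, mus)
x = np.zeros(2)
for t in range(T - 1):
    u = -Ks[t] @ x + ks[t]   # equivalent to K_t (mu_t - x) + feedforward
    x = A @ x + B @ u
```

The control written as $u_t = -K_t \xi_t + k_t$ is algebraically the same as the feedback-plus-feedforward form of Eq. (6), with $u_t^{\text{FF}} = k_t - K_t \hat{\mu}_t$.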
# 3 Invariant Task-Parameterized HSMMs
A conventional approach to encode task variations, such as a change in the pose of an object, is to augment the state of the environment with the policy parameters [19]. Such an encoding, however, does not capture the geometric structure of the problem. Our approach exploits the problem structure by introducing task parameters in the form of coordinate systems that observe the demonstrations from multiple perspectives. Task-parameterization enables the model parameters to adapt in accordance with the external task parameters that describe the environmental situation, instead of hard coding the solution for each new situation or handling it in an *ad hoc* manner [27]. When a different situation occurs (e.g., the pose of the object changes), the changed task parameters/reference frames are used to modulate the model parameters in order to adapt the robot movement to the new situation.
## 3.1 Model Learning
We represent the task parameters with $F$ coordinate systems, defined by $\{A_j, b_j\}_{j=1}^F$, where $A_j$ denotes the orientation of the frame as a rotation matrix and $b_j$ represents the origin of the frame. We assume that the coordinate frames are specified by the user, based on prior knowledge about the task being carried out. Typically, coordinate frames will be attached to objects, tools or locations that could be relevant in the execution of the task. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames,
with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint observed with respect to frame $j$. The parameters of the task-parameterized HSMM are defined by
$$ \theta = \left\{ \{\mu_i^{(j)}, \Sigma_i^{(j)}\}_{j=1}^F, \{a_{i,m}\}_{m=1}^K, \mu_i^S, \Sigma_i^S \right\}_{i=1}^K, $$
where $\mu_i^{(j)}$ and $\Sigma_i^{(j)}$ define the mean and the covariance matrix of the $i$-th mixture component in frame $j$. The parameter updates of the task-parameterized HSMM remain the same as for the HSMM, except that the computation of the mean and the covariance matrix is repeated separately for each coordinate system. The emission distribution of the $i$-th state is represented by the product, over the coordinate systems, of the probabilities of the datapoint belonging to the $i$-th Gaussian in the $j$-th coordinate system. The forward variable of the HMM in the task-parameterized formulation is described as
$$ \alpha_{t,i}^{\text{TP-HMM}} = \left( \sum_{k=1}^{K} \alpha_{t-1,k}^{\text{TP-HMM}} a_{k,i} \right) \prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)}). \quad (7) $$
Similarly, the backward variable $\beta_{t,i}^{\text{TP-HMM}}$, the smoothed node marginal $\gamma_{t,i}^{\text{TP-HMM}}$, and the smoothed edge marginal $\zeta_{t,i,j}^{\text{TP-HMM}}$ can be computed by replacing the emission distribution $\mathcal{N}(\xi_t | \mu_i, \Sigma_i)$ with the product of probabilities of the datapoint in each frame $\prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)})$. The duration model $\mathcal{N}(s|\mu_i^S, \Sigma_i^S)$ is used as a replacement of the self-transition probabilities $a_{i,i}$. The hidden state sequence over all demonstrations is used to define the duration model parameters $\{\mu_i^S, \Sigma_i^S\}$ as the mean and the standard deviation of staying $s$ consecutive time steps in the $i$-th state.
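
The duration statistics described above can be extracted from a decoded hidden state sequence as in this sketch (the function name and interface are illustrative):

```python
import numpy as np

def duration_stats(state_seq, K):
    """Per-state duration statistics: mean and standard deviation of the
    number of consecutive time steps spent in each of the K states."""
    runs = [[] for _ in range(K)]      # run lengths collected per state
    count, prev = 1, state_seq[0]
    for z in state_seq[1:]:
        if z == prev:
            count += 1                 # still in the same state
        else:
            runs[prev].append(count)   # close the finished run
            count, prev = 1, z
    runs[prev].append(count)           # close the final run
    mu_s = [float(np.mean(r)) if r else 0.0 for r in runs]
    sigma_s = [float(np.std(r)) if r else 0.0 for r in runs]
    return mu_s, sigma_s
```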
## 3.2 Model Adaptation in New Situations
In order to combine the output of the coordinate frames of reference for an unseen situation represented by the frames $\{\tilde{\mathbf{A}}_j, \tilde{\mathbf{b}}_j\}_{j=1}^F$, we linearly transform the Gaussians back to the global coordinate system and retrieve the new model parameters $\{\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i\}$ of the $i$-th mixture component by computing the product of the linearly transformed Gaussians (see Fig. 3),
$$ \mathcal{N}(\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i) \propto \prod_{j=1}^{F} \mathcal{N}(\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j, \tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top). \quad (8) $$
The product of Gaussians in Eq. (8) defines the observation distribution of the adapted HSMM, whose decoded state sequence is combined with LQT for smooth motion generation as shown in the previous section. The parameters of the product are evaluated as
$$ \tilde{\Sigma}_i = \left( \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} \right)^{-1}, \qquad \tilde{\boldsymbol{\mu}}_i = \tilde{\Sigma}_i \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} (\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j). \quad (9) $$
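
A minimal sketch of the adaptation step in Eqs. (8)-(9), computing the product of linearly transformed Gaussians for one mixture component (all names are illustrative):

```python
import numpy as np

def adapt_component(mus, Sigmas, As, bs):
    """Product of the frame-wise Gaussians of one mixture component,
    mapped into the global frame with the new task parameters {A_j, b_j}.
    Accumulates precisions and precision-weighted means as in Eq. (9)."""
    Lambda = np.zeros_like(Sigmas[0])   # accumulated precision
    eta = np.zeros_like(mus[0])         # accumulated precision-weighted mean
    for mu, Sigma, A, b in zip(mus, Sigmas, As, bs):
        S = A @ Sigma @ A.T             # covariance in the global frame
        m = A @ mu + b                  # mean in the global frame
        P = np.linalg.inv(S)
        Lambda += P
        eta += P @ m
    Sigma_hat = np.linalg.inv(Lambda)
    return Sigma_hat @ eta, Sigma_hat
```

For two unit-variance frames with means 0 and 2 and identity task parameters, the product is centred at 1 with halved variance, matching the usual Gaussian product rule.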
Fig. 4: Parameter representations of a diagonal, full, and mixture of factor analyzers decomposition of the covariance matrix. Filled blocks represent non-zero entries.
# 4 Latent Space Representations
Dimensionality reduction has long been recognized as a fundamental problem in unsupervised learning. Model-based generative models such as HSMMs tend to suffer from the *curse of dimensionality* when only a few datapoints are available. To address this problem, we use statistical subspace clustering methods that reduce the number of parameters that need to be robustly estimated. A simple way to reduce the number of parameters is to constrain the covariance structure to a diagonal or spherical/isotropic matrix, restricting the number of parameters at the cost of treating each dimension separately. Such decoupling, however, cannot encode the important motor control principles of coordination, synergies and action-perception couplings [28].
Consequently, we seek a latent feature space in the high-dimensional data to reduce the number of model parameters that can be robustly estimated. We consider three formulations to this end: 1) low-rank decomposition of the covariance matrix using the *Mixture of Factor Analyzers (MFA)* approach [14], 2) partial tying of the covariance matrices of the mixture model with the same set of basis vectors, albeit with different scales, using semi-tied covariance matrices [7,23], and 3) Bayesian non-parametric sequence clustering under small variance asymptotics [12,21,24]. All the decompositions can readily be combined with the invariant task-parameterized HSMM and LQT for encapsulating reactive autonomous behaviour as shown in the previous section.
## 4.1 Mixture of Factor Analyzers
The basic idea of MFA is to perform subspace clustering by assuming the covariance structure for each component of the form,
$$ \Sigma_i = \Lambda_i \Lambda_i^\top + \Psi_i, \quad (10) $$
where $\Lambda_i \in \mathbb{R}^{D \times d}$ is the factor loadings matrix with $d < D$ for a parsimonious representation of the data, and $\Psi_i$ is the diagonal noise matrix (see Fig. 4 for the MFA representation in comparison to a diagonal and a full covariance matrix). Note that the mixture of probabilistic principal component analyzers (MPPCA) model is a special case of MFA with the distribution of the errors assumed to be isotropic, $\Psi_i = \sigma_i^2 I$ [26]. The MFA model assumes that $\xi_t$ is generated using a linear transformation of a $d$-dimensional vector of latent (unobserved) factors $f_t$,
$$ \xi_t = \Lambda_i f_t + \mu_i + \epsilon, \quad (11) $$
where $\mu_i \in \mathbb{R}^D$ is the mean vector of the $i$-th factor analyzer, $f_t \sim \mathcal{N}(0, I)$ is a normally distributed factor, and $\epsilon \sim \mathcal{N}(0, \Psi_i)$ is a zero-mean Gaussian noise with diagonal covariance $\Psi_i$. The diagonal assumption implies that the observed variables are independent given the factors. Note that the subspace of each cluster need not be spanned by orthogonal vectors, whereas orthogonality is a necessary condition in models based on eigendecomposition such as PCA. Each covariance matrix of the mixture has its own subspace spanned by the basis vectors of $\Sigma_i$. As the number of components increases to encode more complex skills, an increasingly large number of potentially redundant parameters is used to fit the data. Consequently, there is a need to share the basis vectors across the mixture components, which we achieve by semi-tying the covariance matrices of the mixture model.
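
The parameter savings of the MFA decomposition in Eq. (10) can be illustrated numerically; dimensions and values below are chosen arbitrarily for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 2
Lam = rng.standard_normal((D, d))          # factor loadings, d < D
Psi = np.diag(rng.uniform(0.1, 0.2, D))    # diagonal noise matrix
Sigma = Lam @ Lam.T + Psi                  # Eq. (10): low-rank + diagonal

# the covariance is full rank thanks to Psi, yet only D*d + D parameters
# are stored instead of the D*(D+1)/2 of a full symmetric matrix
n_mfa, n_full = D * d + D, D * (D + 1) // 2
```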
## 4.2 Semi-Tied Mixture Model
When the covariance matrices of the mixture model share the same set of parameters for the latent feature space, we call the model a *semi-tied* mixture model [23]. The main idea behind semi-tied mixture models is to decompose the covariance matrix $\Sigma_i$ into two terms: a common latent feature matrix $H \in \mathbb{R}^{D \times D}$ and a component-specific diagonal matrix $\Sigma_i^{(\text{diag})} \in \mathbb{R}^{D \times D}$, i.e.,
$$ \Sigma_i = H \Sigma_i^{(\text{diag})} H^\top. \quad (12) $$
The latent feature matrix encodes the locally important synergistic directions represented by $D$ non-orthogonal basis vectors that are shared across all the mixture components, while the diagonal matrix selects the appropriate subspace of each mixture component as a convex combination of a subset of the basis vectors of $H$. Note that the eigendecomposition $\Sigma_i = U_i \Sigma_i^{(\text{diag})} U_i^\top$ contains $D$ basis vectors of $\Sigma_i$ in $U_i$. In comparison, the semi-tied mixture model gives $D$ globally representative basis vectors that are shared across all the mixture components. The parameters $H$ and $\Sigma_i^{(\text{diag})}$ are updated in closed form with the EM updates of the HSMM [7].
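
A small sketch of the semi-tied decomposition in Eq. (12), sharing one latent feature matrix across all components. The values here are random placeholders; in practice $H$ and the diagonal matrices are learned by EM:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 4, 3
H = rng.standard_normal((D, D))            # shared latent feature matrix
diags = [np.diag(rng.uniform(0.5, 2.0, D)) for _ in range(K)]
Sigmas = [H @ Sd @ H.T for Sd in diags]    # Eq. (12), one per component

# K*D diagonal parameters + D*D shared parameters, instead of
# K*D*(D+1)/2 parameters for K full covariance matrices
```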
The underlying hypothesis in semi-tying the model parameters is that similar coordination patterns occur at different phases in a manipulation task. By exploiting the spatial and temporal correlation in the demonstrations, we reduce the number of parameters to be estimated while locking the most important synergies to cope with perturbations. This allows the reuse of the discovered synergies in different parts of the task having similar coordination patterns. In contrast, the MFA decomposition of each covariance matrix separately cannot exploit the temporal synergies, and has more flexibility in locally encoding the data.
## 4.3 Bayesian Non-Parametrics under Small Variance Asymptotics
Specifying the number of latent states in a mixture model is often difficult. Model selection methods such as cross-validation or the Bayesian Information Criterion (BIC) are typically used to determine the number of states. Bayesian non-parametric approaches based on Hierarchical Dirichlet Processes (HDPs) provide a principled model selection procedure by Bayesian inference in an HMM with an infinite number of states [25].
Fig. 5: Bayesian non-parametric clustering of Z-shaped streaming data under small variance asymptotics with: (left) online DP-GMM, (right) online DP-MPPCA. Note that the number of clusters and the subspace dimension of each cluster are adapted in a non-parametric manner.
These approaches provide flexibility in model selection; however, their widespread use is limited by the computational overhead of existing sampling-based and variational inference techniques. We take a **small variance asymptotics** approximation of the Bayesian non-parametric model that collapses the posterior to a simple deterministic model, while retaining the non-parametric characteristics of the algorithm.
Small variance asymptotic (SVA) analysis constrains the covariance matrices $\Sigma_i$ of all the Gaussians to isotropic noise $\sigma^2 I$, with the limit $\sigma^2 \to 0$ taken in the likelihood function and the prior distribution [12,3]. The analysis yields simple deterministic models, while retaining the non-parametric nature. For example, SVA analysis of the Bayesian non-parametric GMM leads to the DP-means algorithm [12]. Similarly, SVA analysis of the Bayesian non-parametric HMM under the Hierarchical Dirichlet Process (HDP) yields the segmental $k$-means problem [21].
Restricting the covariance matrix to isotropic/spherical noise, however, fails to encode the coordination patterns in the demonstrations. Consequently, we model the covariance matrix in its intrinsic affine subspace of dimension $d_i$ with projection matrix $\Lambda_i^{d_i} \in \mathbb{R}^{D \times d_i}$, such that $d_i < D$ and $\Sigma_i = \Lambda_i^{d_i} {\Lambda_i^{d_i}}^{\top} + \sigma^2 I$ (akin to the DP-MPPCA model), and apply the small variance asymptotic limit $\sigma^2 \to 0$ on the remaining $(D - d_i)$ dimensions to encode the most important coordination patterns while being parsimonious in the number of parameters (see Fig. 5). Performing small variance asymptotics on the joint likelihood of the HDP-HMM yields the maximum a posteriori estimates of the parameters by iteratively minimizing the loss function*
$$
\begin{aligned}
\mathcal{L}(z, d, \mu, U, a) = & \sum_{t=1}^{T} \mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d_{z_t}})^2 + \lambda(K-1) \\
& + \lambda_1 \sum_{i=1}^{K} d_i - \lambda_2 \sum_{t=1}^{T-1} \log(a_{z_t, z_{t+1}}) + \lambda_3 \sum_{i=1}^{K} (\tau_i - 1),
\end{aligned}
$$
where $\mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d_{z_t}})^2$ represents the distance of the datapoint $\xi_t$ to the subspace of cluster $z_t$ defined by the mean $\mu_{z_t}$ and the unit eigenvectors of the covariance matrix $U_{z_t}^{d_{z_t}}$ (see supplementary materials for details). The algorithm optimizes the number of clusters and the subspace dimension of each cluster while minimizing the distance of the datapoints to the respective subspaces of each cluster. The $\lambda_2$ term favours transitions to states with higher transition probability (states that have been visited more often before), $\lambda_3$ penalizes transitions to unvisited states with $\tau_i$ denoting the number of distinct transitions out of state $i$, while $\lambda$ and $\lambda_1$ are the penalty terms for increasing the number of states and the subspace dimension of each output state distribution.

*Setting $d_i = 0$ by choosing $\lambda_1 \gg 0$ gives the loss function formulation with isotropic Gaussians under small variance asymptotics [21].

Fig. 6: (left) Baxter robot picks the glass plate with a suction lever and places it on the cross after avoiding an obstacle of varying height, (centre-left) reproduction for a previously unseen object and obstacle position, (centre-right) left-right HSMM encoding of the task with the duration model shown next to each state ($s^{\max} = 100$), (right) evolution of the rescaled forward variable over time.
This analysis is used here for scalable online sequence clustering that is non-parametric in the number of clusters and the subspace dimension of each cluster. The resulting algorithm groups the data in its low-dimensional subspace with a non-parametric mixture of probabilistic principal component analyzers based on the Dirichlet process, and captures the state transition and state duration information in an HDP-HSMM. The cluster assignment and the parameter updates at each iteration minimize the loss function, thereby increasing the model fitness while penalizing new transitions, new dimensions and/or new clusters. The interested reader can find more details of the algorithm in [24].
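
For intuition, the DP-means algorithm [12] that results from SVA analysis of the Bayesian non-parametric GMM can be sketched as follows (batch version, isotropic case; names and the penalty value are illustrative):

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """DP-means: a point whose squared distance to every cluster mean
    exceeds the penalty lam spawns a new cluster, so the number of
    clusters is selected by the data rather than fixed in advance."""
    means = [X[0].copy()]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for n, x in enumerate(X):
            d2 = [np.sum((x - m) ** 2) for m in means]
            j = int(np.argmin(d2))
            if d2[j] > lam:             # too far from every cluster:
                means.append(x.copy())  # open a new one at this point
                j = len(means) - 1
            assign[n] = j
        # recompute the mean of each non-empty cluster
        means = [X[assign == j].mean(axis=0) for j in range(len(means))
                 if np.any(assign == j)]
    return means, assign
```

On two well-separated blobs with a suitable penalty, the algorithm discovers exactly two clusters without the number of components being specified in advance.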
# 5 Experiments, Results and Discussion
We now show how our proposed work enables a Baxter robot to learn a pick-and-place task from a few human demonstrations. The objective of the task is to place the object at a desired target position by picking it up from different initial poses, while adapting the movement to avoid the obstacle. The setup of the pick-and-place task with obstacle avoidance is shown in Fig. 6. The Baxter robot is required to grasp the glass plate with a suction lever placed in an initial configuration as marked on the setup. The obstacle can be vertically displaced to one of the 8 target configurations. We describe the task with two frames: one frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ for the initial configuration of the object, and another frame $\{\mathbf{A}_2, \mathbf{b}_2\}$ for the obstacle, with $\mathbf{A}_2 = \mathbf{I}$ and $\mathbf{b}_2$ specifying the centre of the obstacle. We collect 8 kinesthetic demonstrations with different initial configurations of the object and the obstacle successively displaced upwards as marked with the visual tags in the figure. Alternate demonstrations are used for the training set, while the rest are used for the test set. Each observation comprises the end-effector Cartesian position, quaternion orientation, gripper status (open/closed), linear velocity, quaternion derivative, and gripper status derivative, with $D = 16$, $P = 2$, and a total of 200 datapoints per demonstration.

Fig. 7: Task-parameterized HSMM performance on the pick-and-place with obstacle avoidance task: (top) training set reproductions, (bottom) testing set reproductions.

We represent the frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ as
$$ \mathbf{A}_1^{(n)} = \begin{bmatrix} \mathbf{R}_1^{(n)} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \varepsilon_1^{(n)} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{R}_1^{(n)} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \varepsilon_1^{(n)} \end{bmatrix}, \quad \mathbf{b}_1^{(n)} = \begin{bmatrix} \mathbf{p}_1^{(n)} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}, \qquad (13) $$
where $\mathbf{p}_1^{(n)} \in \mathbb{R}^3, \mathbf{R}_1^{(n)} \in \mathbb{R}^{3\times3}, \varepsilon_1^{(n)} \in \mathbb{R}^{4\times4}$ denote the Cartesian position, the rotation matrix and the quaternion matrix in the $n$-th demonstration respectively. Note that we do not consider time as an explicit variable as the duration model in HSMM encapsulates the timing information locally.
The experimental setting is as follows: $\{\pi_i, \mu_i, \Sigma_i\}_{i=1}^K$ are initialized using the k-means clustering algorithm, $R = 9I$ where $I$ is the identity matrix, and learning is considered converged when the difference in log-likelihood between successive iterations is less than $1 \times 10^{-4}$. Results of regenerating the movements with 7 mixture components are shown in Fig. 7. For a given initial configuration of the object, the model parameters are adapted by evaluating the product of Gaussians for the new frame configuration. The reference trajectory is then computed from the initial position of the robot arm using the forward variable of the HSMM and tracked using LQT. The robot arm moves from its initial configuration to align itself with the first frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ to grasp the object, then moves to avoid the obstacle and subsequently aligns with the second frame $\{\mathbf{A}_2, \mathbf{b}_2\}$ before placing the object and returning to a neutral position. The model exploits the variability in the observed demonstrations to statistically encode the different phases of the task, such as reach, grasp, move, place, and return. The imposed structure with task parameters and HSMM allows us to acquire a new task from a few human demonstrations, and to generalize effectively in picking and placing the object.

Fig. 8: Latent space representations of the invariant task-parameterized HSMM for a randomly chosen demonstration from the test set. Black dotted lines show the human demonstration, while the grey line shows the reproduction from the model (see supplementary materials for details).

Table 1: Performance analysis of invariant hidden Markov models with training MSE, testing MSE, and number of parameters for the pick-and-place task. MSE (in meters) is computed between the demonstrated trajectories and the generated trajectories (lower is better). Latent space formulations give comparable task performance with much fewer parameters.

<table><thead><tr><th>Model</th><th>Training MSE</th><th>Testing MSE</th><th>Number of Parameters</th></tr></thead><tbody><tr><td colspan="4">pick-and-place via obstacle avoidance (<i>K</i> = 7, <i>F</i> = 2, <i>D</i> = 16)</td></tr><tr><td>HSMM</td><td><b>0.0026</b> ± <b>0.0009</b></td><td>0.014 ± 0.0085</td><td>2198</td></tr><tr><td>Semi-Tied HSMM</td><td>0.0033 ± 0.0016</td><td>0.0131 ± 0.0077</td><td>1030</td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 1)</td><td>0.0037 ± 0.0011</td><td><b>0.0109</b> ± <b>0.0068</b></td><td><b>742</b></td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 4)</td><td>0.0025 ± 0.0007</td><td>0.0119 ± 0.0077</td><td>1414</td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 7)</td><td>0.0023 ± 0.0009</td><td>0.0123 ± 0.0084</td><td>2086</td></tr><tr><td>SVA HDP HSMM<br>(<i>K</i> = 8, <i>d̄</i><sub>k</sub> = 3.94)</td><td>0.0073 ± 0.0024</td><td>0.0149 ± 0.0072</td><td>1352</td></tr></tbody></table>

Table 1 evaluates the performance of the invariant task-parameterized HSMM with latent space representations. We observe a significant reduction in the number of model parameters, while achieving better generalization in unseen situations compared to the task-parameterized HSMM with full covariance matrices (see Fig. 8 for a comparison across models). The MFA decomposition gives the best performance on the test set with much fewer parameters.
# 6 Conclusions
Learning from demonstrations is a promising approach to teach manipulation skills to robots. In contrast to deep learning approaches that require extensive training data, generative mixture models are useful for learning from a few examples that are not explicitly labelled. The formulations are inspired by the need to make generative mixture models easy to use for robot learning in a variety of applications, while requiring considerably less learning time.
We have presented formulations for learning invariant task representations with hidden semi-Markov models for the recognition, prediction, and reproduction of manipulation tasks, along with latent space representations for robust parameter estimation of mixture models with high-dimensional data. By sampling the sequence of states from the model and following it with a linear quadratic tracking controller, we are able to perform manipulation tasks autonomously in a smooth manner. This has enabled a Baxter robot to tackle a pick-and-place via obstacle avoidance problem from previously unseen configurations of the environment. A relevant direction of future work is to avoid specifying the task parameters manually, and instead infer generalized task representations from videos of the demonstrations when learning the invariant segments. Moreover, learning the task model from a small set of labelled demonstrations in a semi-supervised manner is an important aspect of extracting meaningful segments from demonstrations.
**Acknowledgements:** This work was, in large part, carried out at Idiap Research Institute and Ecole Polytechnique Federale de Lausanne (EPFL) Switzerland. This work was in part supported by the DexROV project through the EC Horizon 2020 program (Grant 635491), and the NSF National Robotics Initiative Award 1734633 on Scalable Collaborative Human-Robot Learning (SCHooL). The information, data, comments, and views detailed herein may not necessarily reflect the endorsements of the sponsors.
## References
1. Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. *Robotics and Autonomous Systems*, 57(5):469–483, May 2009.
2. Francesco Borrelli, Alberto Bemporad, and Manfred Morari. *Predictive Control for Linear and Hybrid Systems*. Cambridge University Press, 2011.
3. Tamara Broderick, Brian Kulis, and Michael I. Jordan. MAD-Bayes: MAP-based asymptotic derivations from Bayes. In *Proceedings of the 30th International Conference on Machine Learning (ICML)*, pages 226–234, Atlanta, GA, USA, 2013.
4. S. Calinon. A tutorial on task-parameterized movement learning and retrieval. *Intelligent Service Robotics*, 9(1):1–29, 2016.
5. Yan Duan, Marcin Andrychowicz, Brad C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017.
6. Nadia Figueroa and Aude Billard. Transform-invariant non-parametric clustering of covariance matrices and its application to unsupervised joint segmentation and action discovery. CoRR, abs/1710.10060, 2017.
7. Mark J. F. Gales. Semi-tied covariance matrices for hidden Markov models. *IEEE Transactions on Speech and Audio Processing*, 7(3):272–281, 1999.
8. Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016.
9. A. Ijspeert, J. Nakanishi, P. Pastor, H. Hoffmann, and S. Schaal. Dynamical movement primitives: Learning attractor models for motor behaviors. *Neural Computation*, 25:328–373, 2013.
10. S. Krishnan, R. Fox, I. Stoica, and K. Goldberg. DDCO: Discovery of deep continuous options for robot learning from demonstrations. CoRR, 2017.
11. D. Kulic, W. Takano, and Y. Nakamura. Incremental learning, clustering and hierarchy formation of whole body motion patterns using adaptive hidden Markov chains. *International Journal of Robotics Research*, 27(7):761–784, 2008.
12. Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In *Proceedings of the 29th International Conference on Machine Learning (ICML)*, pages 513–520, New York, NY, USA, 2012.
13. D. Lee and C. Ott. Incremental motion primitive learning by physical coaching using impedance control. In *Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS)*, pages 4133–4140, Taipei, Taiwan, October 2010.
14. G. J. McLachlan, D. Peel, and R. W. Bean. Modelling high-dimensional data by mixtures of factor analyzers. *Computational Statistics and Data Analysis*, 41(3-4):379–388, 2003.
15. Jose R. Medina and Aude Billard. Learning stable task sequences from demonstration with linear parameter varying systems and hidden Markov models. In *Conference on Robot Learning (CoRL)*, 2017.
16. Chrystopher L. Nehaniv and Kerstin Dautenhahn, editors. *Imitation and Social Learning in Robots, Humans, and Animals: Behavioural, Social and Communicative Dimensions*. Cambridge University Press, 2004.
17. Scott Niekum, Sarah Osentoski, George Konidaris, and Andrew G. Barto. Learning and generalization of complex tasks from unstructured demonstrations. In *IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pages 5239–5246, 2012.
18. Takayuki Osa, Joni Pajarinen, Gerhard Neumann, Andrew Bagnell, Pieter Abbeel, and Jan Peters. *An Algorithmic Perspective on Imitation Learning*. Now Publishers Inc., Hanover, MA, USA, 2018.
19. Alexandros Paraschos, Christian Daniel, Jan R. Peters, and Gerhard Neumann. Probabilistic movement primitives. In *Advances in Neural Information Processing Systems 26*, pages 2616–2624. Curran Associates, Inc., 2013.
20. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. *Proceedings of the IEEE*, 77:257–285, 1989.
21. Anirban Roychowdhury, Ke Jiang, and Brian Kulis. Small-variance asymptotics for hidden Markov models. In *Advances in Neural Information Processing Systems 26*, pages 2103–2111. Curran Associates, Inc., 2013.
22. A. K. Tanwani. *Generative Models for Learning Robot Manipulation Skills from Humans*. PhD thesis, Ecole Polytechnique Federale de Lausanne, Switzerland, 2018.
23. A. K. Tanwani and S. Calinon. Learning robot manipulation tasks with task-parameterized semi-tied hidden semi-Markov model. *IEEE Robotics and Automation Letters*, 1(1):235–242, 2016.
24. Ajay Kumar Tanwani and Sylvain Calinon. Small variance asymptotics for non-parametric online robot learning. CoRR, abs/1610.02468, 2016.
25. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. *Journal of the American Statistical Association*, 101(476):1566–1581, 2006.
26. M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analyzers. *Neural Computation*, 11(2):443–482, 1999.
27. A. D. Wilson and A. F. Bobick. Parametric hidden Markov models for gesture recognition. *IEEE Trans. on Pattern Analysis and Machine Intelligence*, 21(9):884–900, 1999.
|
| 267 |
+
|
| 268 |
+
28. D. M. Wolpert, J. Diedrichsen, and J. R. Flanagan. Principles of sensorimotor learning. *Nature Reviews*, 12:739–751, 2011.
|
| 269 |
+
|
| 270 |
+
29. Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Neural task programming: Learning to generalize across hierarchical tasks. CoRR, abs/1710.01813, 2017.
|
| 271 |
+
|
| 272 |
+
30. S.-Z. Yu. Hidden semi-Markov models. Artificial Intelligence, 174:215–243, 2010.
samples/texts_merged/3594993.md
ADDED
@@ -0,0 +1,309 @@
---PAGE_BREAK---
# On coloring box graphs
Emilie Hogan<sup>a</sup>, Joseph O'Rourke<sup>b</sup>, Cindy Traub<sup>c</sup>, Ellen Veomett<sup>d,*</sup>
<sup>a</sup> Pacific Northwest National Laboratory, United States
<sup>b</sup> Smith College, United States
<sup>c</sup> Southern Illinois University Edwardsville, United States
<sup>d</sup> Saint Mary's College of California, United States
## ARTICLE INFO

**Article history:**
Received 5 November 2013
Received in revised form 6 September 2014
Accepted 13 September 2014
Available online 23 October 2014

**Keywords:**
Graph coloring
Box graph
Chromatic number

## ABSTRACT

We consider the chromatic number of a family of graphs we call box graphs, which arise from a box complex in *n*-space. It is straightforward to show that any box graph in the plane has an admissible coloring with three colors, and that any box graph in *n*-space has an admissible coloring with *n* + 1 colors. We show that for box graphs in *n*-space, if the lengths of the boxes in the corresponding box complex take on no more than two values from the set {1, 2, 3}, then the box graph is 3-colorable, and for some graphs three colors are required. We also show that box graphs in 3-space which do not have cycles of length four (which we call "string complexes") are 3-colorable.

© 2014 Elsevier B.V. All rights reserved.
## 1. Introduction and results

There are many geometrically-defined graphs whose chromatic numbers have been studied. Perhaps the most famous such example is the Four Color Theorem, which states that any planar graph is 4-colorable [1]. Another famous example is the chromatic number of the plane. More specifically, a graph $G = (V, E)$ is defined where $V = \mathbb{R}^2$ and $(x, y) \in E$ precisely when $\|x - y\|_2 = 1$ (where $\| \cdot \|_2$ is the usual Euclidean norm in the plane). Through simple geometric constructions, one can show that $4 \le \chi(G) \le 7$ for this graph, although the precise value is still not known; see [8], for example.

In this article, we consider graphs that arise from box complexes. We first define what a box complex is:

**Definition 1.** An *n*-dimensional box is a set $B \subset \mathbb{R}^n$ that can be defined as:

$$B = \{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$

where $a_i < b_i$ for $i = 1, 2, \dots, n$.

An *n*-dimensional *box complex* is a set of finitely many *n*-dimensional boxes $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ such that if the intersection of two boxes $B_i \cap B_j$ is nonempty, then $B_i \cap B_j$ is a face (of any dimension) of both $B_i$ and $B_j$, for any $i$ and $j$ (see Fig. 1).

Now we can define a box graph:

**Definition 2.** An *n*-dimensional *box graph* is a graph defined on an *n*-dimensional box complex. The box graph $G(\mathcal{B}) = (V, E)$ defined on the box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ is the undirected graph whose vertex set is the boxes:

$$V = \{B_1, B_2, \dots, B_m\}$$

* Corresponding author.
E-mail address: erv2@stmarys-ca.edu (E. Veomett).
---PAGE_BREAK---

Fig. 1. Examples in $\mathbb{R}^2$.

Fig. 2. Defining a 2-dimensional box graph.

and whose edges $(B_i, B_j) \in E$ record when $B_i \cap B_j$ is an $(n-1)$-dimensional face of both $B_i$ and $B_j$. In other words, the box graph is the dual graph of the box complex, and the colorings we are considering are in some sense “solid colorings.”

When it eases understanding, we may use the terms box complex and box graph interchangeably. We also may use boxes and vertices interchangeably.

The following proposition shows that, as far as the corresponding box graphs are concerned, we may as well restrict ourselves to box complexes where each of the vertices of the boxes has integer coordinates (and thus all boxes have integer lengths).

**Proposition 1.** Let $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ be a box complex and let $G(\mathcal{B}) = (V, E)$ be its corresponding box graph. There exists a box complex $\{C_1, C_2, \dots, C_m\}$ where the vertices of each $C_i$ ($i = 1, 2, \dots, m$) have all integer coordinates, such that the box graph corresponding to complex $\{C_1, C_2, \dots, C_m\}$ is the same graph $G$.

We will prove **Proposition 1** in Section 2.

We ask the following natural question:

**Question 1.** What is the minimum number of colors $k$ that are required so that every $n$-dimensional box graph has an admissible $k$-coloring?
From Fig. 2(c), we can see that three colors may be necessary to color a 2-dimensional box graph. In fact, three colors are also sufficient in the plane; this is the $n = 2$ case of the following proposition, which we prove in Section 2:

**Proposition 2.** Any box graph in $n$-space has an admissible coloring with $n + 1$ colors.

Our goal is to answer **Question 1** in dimension 3, which is still open. In the case where the "boxes" are zonotopes (as opposed to right-angled bricks), sometimes 4 colors are needed [4], and in the case where the "boxes" are now touching spheres, the chromatic number is between 5 and 13 [2]. Analogously, for simplicial complexes in $\mathbb{R}^n$, $n+1$ colors suffice [6]. We suspect that any 3-dimensional box graph is 3-colorable, and we can show that this is true for a few families of 3-dimensional box graphs. The following are the main results of this paper:

**Theorem 1.** Let $G$ be an $n$-dimensional box graph such that the lengths of all of the boxes in the corresponding box complex take on no more than two values from the set $\{1, 2, 3\}$. That is, all the side lengths of the boxes are 1 or 2, or all the side lengths are 1 or 3, or all the side lengths are 2 or 3. Then $G$ is 3-colorable.

**Theorem 2.** Let $G$ be a 3-dimensional box graph that has no cycles on four vertices. Then $G$ is 3-colorable.

The rest of this paper is organized as follows: in Section 2 we will state and prove some straightforward results on box graphs. We will prove **Theorem 1** in Section 3, and we will prove **Theorem 2** in Section 4.

## 2. Straightforward results on box graphs

As promised, we will start with proofs of **Propositions 1** and **2**.
---PAGE_BREAK---

**Proof of Proposition 1.** Suppose {$B_1, B_2, \dots, B_m$} is a box complex in $\mathbb{R}^n$, so that each vertex of each box has $n$ coordinates. Let $x_0, x_1, \dots, x_k$ be the list of all of the different first coordinates of all of the vertices of the boxes in the box complex. Order them so that

$$x_0 < x_1 < \cdots < x_k.$$

Now make a new box complex {$B_1^1, B_2^1, \dots, B_m^1$} such that the vertices are all the same except the first coordinates. Specifically, if the first coordinate of a vertex in $B_j$ is $x_i$, then the first coordinate of the corresponding vertex in $B_j^1$ is the integer $i$. Thus, the vertex $(x_i, y_2, y_3, \dots, y_n)$ of $B_j$ becomes the vertex $(i, y_2, y_3, \dots, y_n)$ of $B_j^1$.

Note that each $B_i^1$ is still a box, and this does not change the intersection pattern of the boxes. That is, if $B_j \cap B_\ell$ is $d$-dimensional, then so is $B_j^1 \cap B_\ell^1$. (And if $B_j \cap B_\ell$ was empty, then so is $B_j^1 \cap B_\ell^1$.)

We continue with this process for the 2nd, 3rd, ..., $n$th coordinates. Finally, we get a box complex {$B_1^n, B_2^n, \dots, B_m^n$} with the same intersection pattern as $B_1, B_2, \dots, B_m$ but with all integer coordinates for the vertices. Thus, the box graph for complex {$B_1^n, B_2^n, \dots, B_m^n$} is the same as the box graph for complex {$B_1, B_2, \dots, B_m$}. $\square$
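The coordinate replacement in this proof is exactly a coordinate-compression algorithm, and is easy to mechanize. Below is a minimal Python sketch (the representation of a box as a pair of corner tuples is our own convention, not from the paper): on each axis, every coordinate value is replaced by its rank among the values occurring on that axis.

```python
def compress(boxes):
    """Coordinate-compress a box complex (Proposition 1).

    boxes: list of (a, b) pairs of equal-length tuples with a[i] < b[i].
    Returns boxes with integer coordinates and the same intersection
    pattern, since the replacement is order-preserving on each axis.
    """
    n = len(boxes[0][0])
    boxes = [(list(a), list(b)) for a, b in boxes]
    for i in range(n):
        # All distinct values on axis i, in increasing order.
        vals = sorted({v[i] for a, b in boxes for v in (a, b)})
        rank = {x: r for r, x in enumerate(vals)}
        for a, b in boxes:
            a[i], b[i] = rank[a[i]], rank[b[i]]
    return [(tuple(a), tuple(b)) for a, b in boxes]
```

For example, two boxes sharing the face $x = 1.7$ still share a full face after compression.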

In order to prove Proposition 2 we first give the definition of *k*-degenerate graphs, and show the well-known result that *k*-degenerate graphs are $(k+1)$-colorable [5].

**Definition 3.** A graph $G$ is *k*-degenerate if each of its induced subgraphs has a vertex of degree at most $k$.

**Lemma 1.** Every $k$-degenerate graph is $(k+1)$-colorable.

**Proof.** Let $G = (V, E)$ be a $k$-degenerate graph. We will proceed by induction on $|V|$, the size of the vertex set. If $|V| = 1$ then certainly $G$ is $(k+1)$-colorable. Now, suppose that $|V| = m \ge 2$, and assume as the induction hypothesis that any $k$-degenerate graph on $m-1$ vertices is $(k+1)$-colorable.

Then, since $G$ is $k$-degenerate we know there exists a vertex $v \in V$ with $\deg(v) \le k$. Consider the graph $G-v$, formed by removing vertex $v$ and all of its incident edges, with $m-1$ vertices. This graph must be $k$-degenerate since it is an induced subgraph of $G$. Therefore, by the induction hypothesis we can color $G-v$ using $k+1$ colors. Now, when $v$ and its edges are added back into $G$ we must have at least one available color since $v$ has at most $k$ neighbors and there are $k+1$ total colors. Therefore, by induction, any $k$-degenerate graph is $(k+1)$-colorable. $\square$
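The inductive proof is constructive: peel off a vertex of degree at most $k$, color what remains, then reinsert. A short Python sketch of this greedy procedure (the adjacency-dictionary representation and function name are our own):

```python
def degeneracy_coloring(adj, k):
    """Color a k-degenerate graph with k+1 colors, following Lemma 1.

    adj: dict mapping each vertex to the set of its neighbors.
    Peel vertices of minimum remaining degree; coloring in reverse
    peel order, each vertex sees at most k colored neighbors.
    """
    live = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while live:
        # A vertex of degree <= k among the remaining vertices exists
        # because every induced subgraph has one (Definition 3).
        v = min(live, key=lambda u: len(live[u] & live.keys()))
        order.append(v)
        del live[v]
    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(k + 1) if c not in used)
    return color
```

For instance, a 5-cycle is 2-degenerate, so this yields a proper coloring with at most three colors.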

We now prove Proposition 2 by showing that any box graph is *n*-degenerate.

**Proof of Proposition 2.** Let $G = (V, E)$ be a box graph, so that each $v \in V$ is a box in the corresponding box complex. We will label each box in V by its "right, forward, top" vertex. More precisely, each box can be defined as

$$\{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$

where $a_i < b_i$ for $i = 1, 2, \dots, n$. We then label this box with $(b_1, b_2, \dots, b_n)$.

Now find a "right, forward, top" box in the graph. That is, find a vertex $u \in V$ with corresponding label $(u_1, u_2, \dots, u_n)$ such that for any other $v \in V$ with label $(v_1, v_2, \dots, v_n)$ and $(u, v) \in E$, we have

$$u_1 \ge v_1, u_2 \ge v_2, \dots, u_n \ge v_n.$$

(Such a box is guaranteed to exist because $G$ is finite: for instance, take a box whose label has maximal coordinate sum.) Note that, by our choice of $u$, $u$ has at most $n$ neighbors.

Since we began with an arbitrary box graph, the existence of a degree *n* vertex must be true for all induced subgraphs of G. Therefore, any box graph corresponding to a box complex in $\mathbb{R}^n$ is *n*-degenerate, and by Lemma 1 is $(n+1)$-colorable. $\square$

We note that the above argument is the *n*-dimensional analogue to the "elbow" argument in [7].

We state the following result as a reminder to the reader:

**Proposition 3.** Let $G = (V, E)$ be a graph. Then the following are equivalent:

1. The graph G contains no odd cycle.

2. The graph G is bipartite.

3. The graph G is 2-colorable.

**Proof.** Proposition 3 is a well-known introductory graph theory result. See Section I.2 of [3], for example. $\square$

The following proposition shows that if a box graph cannot be colored with just 2 colors, it must have some boxes with side lengths that are different from each other.

**Proposition 4.** Suppose a box complex only contains boxes that are cubes; that is, boxes with all side lengths equal. Then the corresponding box graph is 2-colorable.

**Proof.** Suppose a box complex contains only cubes, and let $G = (V, E)$ be the corresponding box graph. Without loss of generality, we may assume that G is connected. Thus, since all of the boxes in the corresponding box complex are cubes, they must all be cubes of the same size; let the side length of the cubes be $k$. By the proof of Proposition 1, we can assume that $k \in \mathbb{N}$ and the coordinates of all the vertices of the boxes in the box complex are integer multiples of $k$.

---PAGE_BREAK---

Just as we did in the proof of Proposition 2, label each $v \in V$ with the “right, forward, top” vertex. Let $(v_1, v_2, \ldots, v_n)$ be the label for vertex $v$. Color vertex $v$ with color

$$ \frac{1}{k} (v_1 + v_2 + \cdots + v_n) \pmod{2}. $$

Note that exactly two colors are used. If two vertices are adjacent: $(u, v) \in E$, then we know that their corresponding labels $(u_1, u_2, \ldots, u_n)$ and $(v_1, v_2, \ldots, v_n)$ must be the same in every coordinate except one, in which they differ by $k$. That is, there exists $i \in \{1, 2, \ldots, n\}$ such that

$$ \begin{aligned} u_j &= v_j & \text{if } j \in \{1, 2, \ldots, n\} \text{ and } j \neq i \\ u_i &= v_i \pm k. \end{aligned} $$

Thus, if two vertices are adjacent then their colors must be different. Thus, this is a valid 2-coloring of G. $\square$

In [4] it was proved that any box complex in $\mathbb{R}^3$ that is homeomorphic to a ball is 2-colorable.
## 3. Proof of Theorem 1

We shall prove Theorem 1 in parts via a few lemmas. Here is the first of our lemmas:

**Lemma 2.** Suppose that each side length of each box in a box complex is a positive integer which is congruent to either 1 or 2 mod 3. Then the corresponding box graph is 3-colorable.

**Proof.** Consider an $n$-dimensional box complex $\{B_1, B_2, \ldots, B_m\}$, and label each box again by its “right, forward, top” vertex coordinates, $(b_1, b_2, \ldots, b_n)$. Now, color each box by $(b_1 + b_2 + \cdots + b_n)$ mod 3. We claim that this is a valid coloring.

If two boxes, $B_i$, $B_j$ are adjacent then their right, forward, top vertices will differ in exactly one coordinate. Let $(b_{i,1}, b_{i,2}, \ldots, b_{i,n})$ be the label for $B_i$ and $(b_{j,1}, b_{j,2}, \ldots, b_{j,n})$ the label for $B_j$. Then, WLOG, $b_{i,1} \neq b_{j,1}$ and $b_{i,k} = b_{j,k}$ for $k=2, 3, \ldots, n$. These two boxes will have the same color iff $b_{i,1} - b_{j,1} \equiv 0 \pmod{3}$. However, this value is the side length of one of these boxes, which by assumption is not a multiple of 3. Therefore neighboring boxes may not have the same color, so this 3-coloring is admissible. $\square$
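The coloring in Lemma 2 is immediate to compute. A small Python sketch (using our own convention of a box as a pair of corner tuples, with the "right, forward, top" corner as its label):

```python
def color_mod3(boxes):
    """Color each box by the coordinate sum of its 'right, forward,
    top' corner, mod 3 (Lemma 2).  Valid whenever no side length of
    any box is a multiple of 3."""
    return [sum(b) % 3 for a, b in boxes]

# A chain of three boxes with side lengths 1 and 2 along the x-axis:
boxes = [((0, 0, 0), (1, 1, 1)),
         ((1, 0, 0), (3, 1, 1)),
         ((3, 0, 0), (4, 1, 1))]
```

Here consecutive boxes share a full face, and the computed colors of adjacent boxes always differ because their labels differ by a side length that is 1 or 2 mod 3.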

The following corollary follows directly from Lemma 2:

**Corollary 1.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 2. Then the corresponding box graph is 3-colorable.

The next in our series of lemmas:

**Lemma 3.** Suppose that each side length of each box in a box complex is an odd integer. Then the corresponding box graph is 2-colorable.

**Proof.** We will prove this by showing that there can be no odd cycles in the graph (see Proposition 3).

Assume we have a box complex $\mathcal{B} = \{B_1, \ldots, B_k\}$. Consider any cycle within the corresponding box graph. Label the vertices of this cycle by the “right, forward, top” corner of the corresponding box, and label each of the edges of the cycle with the distances between those corners, mod 2. In other words, if the neighboring vertices are labeled (1, 1, ..., 1) and (4, 1, ..., 1) then we label the edge with 3 mod 2 = 1. Moreover, we will choose a direction of travel around the cycle and sign the length of the edge positive if we are moving along that edge in the positive direction, and negative if we move along the edge in the negative direction. Thus, for example, if we move from vertex (1, 1, ..., 1) to (4, 1, ..., 1), the edge is labeled with 1 since moving from 1 to 4 is in the positive direction in the first coordinate, whereas if we move from vertex (4, 1, ..., 1) to (1, 1, ..., 1), the edge is labeled with -1.

We now claim that the sum of the signed labels along the cycle must be even. In each dimension, the total signed displacement around the cycle is zero: any length we move in the positive direction must be traveled again in the negative direction. Thus the signed edge lengths sum to zero, and reducing mod 2, the signed labels sum to an even number.

Finally, we note that, by assumption, all of the lengths are odd. Thus, all edge labels must be either 1 or -1. Since we have a list of edges labeled 1 or -1 whose sum is even, there must be an even number of edges in the cycle. $\square$
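The telescoping step can be illustrated numerically. In the Python sketch below (our own notation), the signed coordinate differences around any cycle of labels sum to the zero vector, so when every side length is odd the number of edges must be even:

```python
def signed_edge_lengths(labels):
    """Signed coordinate differences between consecutive 'right,
    forward, top' labels around a cycle (Lemma 3).  Because the walk
    returns to its start, the differences telescope to zero in each
    coordinate."""
    n = len(labels)
    return [tuple(b - a for a, b in zip(labels[i], labels[(i + 1) % n]))
            for i in range(n)]

# A 4-cycle of labels in the plane, every edge of odd length 3:
diffs = signed_edge_lengths([(1, 1), (4, 1), (4, 4), (1, 4)])
```

Each signed length here is odd, and since the signed lengths sum to zero, an odd cycle is impossible.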

The following corollary follows directly from Lemma 3:

**Corollary 2.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 3. Then the corresponding box graph is 3-colorable.

The proof for Theorem 1 when blocks have dimensions 2 or 3, given in the remainder of this section, relies on placing a partial order on the box graph corresponding to a given box complex. The elements of the partially ordered set (poset) are the vertices of the box graph, i.e., the individual boxes that comprise the box complex. As before, we label box $\{x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$ by its “right, forward, top” vertex coordinates, $(b_1, b_2, \ldots, b_n)$. The order relation for this poset is induced by the following cover relation: box $B_i$ with label $(b_1, b_2, \ldots, b_n)$ covers box $B_j$ with label

---PAGE_BREAK---

Fig. 3. All edges above the ones drawn do not change in length after *T* is applied.

$(c_1, c_2, \ldots, c_n)$ if and only if the two boxes are adjacent and $\sum_{k=1}^{n} b_k \ge \sum_{k=1}^{n} c_k$. Since these adjacent boxes must share an $(n-1)$-dimensional face, their labels will differ in exactly one coordinate, by a difference equal to the side length of box $B_i$ orthogonal to the shared face $B_i \cap B_j$.

We note further that the sum $r(B_i) = \sum_{k=1}^{n} b_k$ of the entries of the label of a given box is a rank function for this poset. We will use the rank function and the poset structure to describe valid colorings of the box graph. This technique will consider an initial drawing of the poset (and subsequent re-drawings) with all nodes at integer heights. We then refer to the *length* of an edge in the poset as the positive vertical distance between its endpoints.

Here is the last of the lemmas that we will need for Theorem 1:

**Lemma 4.** Suppose a box complex has boxes with side lengths only equal to 2 or 3. Then the corresponding box graph is 3-colorable.

**Proof.** Consider now the case in which all dimensions of the boxes in a box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ are 2 or 3. We produce the associated poset $\mathcal{P}$ described above, and make an initial drawing of $\mathcal{P}$ with nodes having heights corresponding to their ranks. Note that this implies that if two boxes $B_i$ and $B_j$ which are adjacent in the box graph are drawn with heights $h_i$ and $h_j$ respectively, then $r(B_i) - r(B_j) = h_i - h_j$, and $h_i - h_j$ is either 2 or 3 if $h_i > h_j$. In other words, all lengths of the edges in the poset are either 2 or 3. Without loss of generality, we can make this drawing so that all rank-minimal vertices have height $h$-value of 0. We now describe how to redraw the poset $\mathcal{P}$ in such a way that all adjacencies and cover relations are preserved, but all edges have lengths equivalent to 1 or 2 mod 3.

We now consider the lengths of edges in the poset, working our way in order of increasing height $h$ of the terminal endpoints. Since the first nodes occur on the line $h=0$ and all edges have length 2 or 3, no edges terminate on $h=1$, and edges that terminate on $h=2$ have length 2, which is among the desired values. Edges terminating on $h=3$ or above may have length 2 or length 3. We perform the following transformation on the drawing of the poset. Let $h_i$ denote the height of vertex $B_i$ in the initial drawing of the poset. We perform transformation $T$ below to the drawing of the poset:

$$ T(h_i) = \begin{cases} h_i & \text{if } h_i \le 2, \\ h_i + 2 & \text{if } h_i \ge 3. \end{cases} $$

Note that $T$ has no effect on the length of edges terminating at or below $h=2$, and no effect on the length of edges commencing at or above $h=3$. For edges that include the interval $[2, 3]$, two units are added to their length. In the new drawing of the poset, no edges will terminate on lines $h=3$ or $h=4$. Edges terminating on $h=5$ were either originally of length 3 commencing from $h=0$ or of length 2 commencing at $h=1$. The former now have length 5, while the length of the latter is now 4. In either case, edges terminating on $h=5$ have lengths equivalent to 1 or 2 mod 3. A similar argument shows that edges in the revised drawing that terminate on $h=6$ or $h=7$ are either of length 2, 4, or 5. (See Fig. 3.)

Any edges terminating on *h*-values of 8 or higher were not affected by the first stretch, and thus may have length 3. Continue the stretching/redrawing procedure as before, extending the interval [7, 8] by two units and redrawing the poset. This procedure only changes the lengths of edges which include the interval [7, 8], so in particular it does not change the lengths of any prior edges. Since our complex is finite, only finitely many re-drawings are needed to draw the poset with edges all having length equivalent to 1 or 2 mod 3. At that time, the nodes can be colored using the argument from Lemma 2. $\square$
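The stretching procedure can be sketched in Python. This is a simplification under our own assumptions: nodes are given by a height dictionary, edges as (lower, upper) pairs whose lengths are 2 or 3, and the successive cuts [2, 3], [7, 8], ... sit five units apart in the current drawing (the original cut positions 2, 5, 8, ... shifted by the accumulated stretches).

```python
def stretch(heights, edges):
    """Redraw the poset of Lemma 4: repeatedly stretch the interval
    [cut, cut + 1] by two units until every edge length is 1 or 2
    mod 3.  heights: dict node -> int height; edges: (lower, upper)
    node pairs.  Returns the new heights."""
    heights = dict(heights)
    cut = 2  # the first stretch happens across [2, 3]
    while any((heights[v] - heights[u]) % 3 == 0 for u, v in edges):
        for v in heights:
            if heights[v] > cut:   # integer heights: h > cut means h >= cut + 1
                heights[v] += 2
        cut += 5  # next cut ([7, 8] after the first stretch), in the new drawing
    return heights
```

On a chain with two edges of length 3, the first stretch fixes the lower edge and the second fixes the upper one, leaving both with length 5.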

We can now finally prove Theorem 1:

**Proof of Theorem 1.** This is a direct consequence of Corollaries **1** and **2** and **Lemma 4**. □

---PAGE_BREAK---

**Fig. 4.** This 2 × 2 pattern (a 4-cycle in the dual) is forbidden as part of a string complex.

**Fig. 5.** An example of a string complex.
## 4. Proof of Theorem 2

First, a couple of definitions:

**Definition 4.** A *string complex* is any box complex in $\mathbb{R}^3$ that does not contain the 2 × 2 pattern of boxes shown in Fig. 4. The dual of the forbidden pattern is a 4-cycle, which is the shortest cycle possible in a box graph. So in other words, a string complex is a 3-dimensional box complex whose corresponding box graph has no 4-cycle (see Fig. 5).

We use the term “string complex” because, without the 2 × 2 pattern in Fig. 4, the box complex is forced to have lots of “holes” and be “stringy.”

**Definition 5.** A 3-dimensional box complex {$B_1, B_2, B_3, \dots, B_m$} is *reducible* to the 3-dimensional box complex {$A_1, A_2, \dots, A_\ell$} ($\ell \le m$) if one can sequentially remove boxes from complex {$B_1, B_2, \dots, B_m$} of degree $\le 2$ in order to obtain complex {$A_1, A_2, \dots, A_\ell$}. More specifically, there exists an ordering $B_1, B_2, \dots, B_m$ such that

$$B_i = A_i \quad \text{for } i = 1, 2, \dots, \ell$$

and for $j = 0, 1, 2, \dots, m - \ell - 1$, the box $B_{m-j}$ has degree $\le 2$ in the box complex

$$\{B_1, B_2, \dots, B_{m-j}\}.$$

A box complex is *irreducible* if every vertex is of degree $\ge 3$.

Note that a complex may be reducible to a smaller complex which is itself irreducible.

The following lemma is analogous to the tools we used in the proof of Proposition 2:

**Lemma 5.** If a 3-dimensional box complex is reducible to the empty complex, then its corresponding box graph is 3-colorable.

**Proof.** We proceed by induction on $m$, the number of boxes in the box complex. Certainly if $m=1$, the box graph is 3-colorable. Suppose that $m \ge 2$, and that for any 3-dimensional box complex on $m-1$ boxes which is reducible to the empty complex, the corresponding box graph is 3-colorable. Suppose that the box complex {$B_1, B_2, \dots, B_m$} is reducible to the empty complex. That is, for each $i=1, 2, \dots, m$, the box $B_i$ has degree $\le 2$ in the complex

$$\{B_1, B_2, \dots, B_i\}.$$

Note that the box complex {$B_1, B_2, \dots, B_{m-1}$} is also reducible to the empty complex and has $m-1$ boxes in it. Thus, by our inductive assumption, the corresponding graph is 3-colorable. Now, because $B_m$ had degree $\le 2$ in the box complex {$B_1, B_2, \dots, B_m$}, we can choose to color $B_m$ a color which is different from the colors of its neighbors. Thus, we have proven the lemma. $\square$
|
| 260 |
+
---PAGE_BREAK---
|
| 261 |
+
|
| 262 |
+
**Fig. 6.** $b_0$ is the topmost, leftmost box in the top layer $T$.
By Lemma 5, Theorem 2 is a direct corollary of the following theorem and its subsequent corollary:

**Theorem 3.** Every string complex is reducible.

**Proof.** Assume to the contrary that $\mathcal{S} = \{S_1, S_2, \dots, S_m\}$ is an irreducible string complex. We will show that irreducibility forces the complex to contain a 2 × 2 pattern of boxes, which contradicts the assumption that $\mathcal{S}$ is a string complex.

Let $T_1, T_2, \dots, T_\ell$ be the top layer of boxes in $\mathcal{S}$; say their top faces lie in a plane parallel to the xy-plane, extreme in the +z direction. We first claim that every box of $\mathcal{T} = \{T_1, T_2, \dots, T_\ell\}$ must have degree $\ge 2$ within $\mathcal{T}$. Suppose otherwise; that is, suppose there is a box $T_i$ with degree $\le 1$ within $\mathcal{T}$. Then $T_i$ can have degree at most 2 in $\mathcal{S}$, by joining to a box beneath it. But every box in $\mathcal{S}$ must have degree $\ge 3$, because $\mathcal{S}$ is irreducible. Thus each $T_i$, $i = 1, 2, \dots, \ell$, indeed has degree $\ge 2$ in $\mathcal{T}$.

Now we look at an extreme corner box of $\mathcal{T}$. Specifically, let $b_0$ be backmost (extreme in the +y direction) and, among the topmost boxes of $\mathcal{T}$, leftmost (extreme in the -x direction). So $b_0$ is a type of “upper left corner”. Because it is extreme in two directions, two of its faces in $\mathcal{T}$ are exposed, so it has at most, and hence by the claim above exactly, degree 2 in $\mathcal{T}$. Because we assumed $\mathcal{S}$ is irreducible, $b_0$ (and indeed every box of $\mathcal{S}$) must have degree $\ge 3$. So $b_0$ must be adjacent to a box $b'_0$ beneath it (beneath in the $-z$ direction). See Fig. 6.

Let $b_1$ and $b_2$ be the boxes adjacent to $b_0$ in $\mathcal{T}$, with $b_1$ adjacent to $b_0$ in the x-direction as in the figure. Again, by the previous argument, $b_1$ must have degree $\ge 2$ in $\mathcal{T}$. It is already adjacent to $b_0$ on its left, and it cannot be adjacent to a box above it, because it is topmost. So it must be adjacent to one or both of the boxes labeled $b_3$ and $b_4$ in the figure.

However, $b_1$ cannot be adjacent to $b_3$, for then $\{b_0, b_1, b_2, b_3\}$ forms a 2 × 2 pattern, contradicting the assumption that $\mathcal{S}$ is a string complex. Therefore $b_1$ must be adjacent to $b_4$ in Fig. 6. Now $b_1$ has degree exactly 2 in $\mathcal{T}$. Because it must have degree $\ge 3$ for $\mathcal{S}$ to be irreducible, it must be adjacent to a box $b'_1$ underneath. But now $\{b_0, b_1, b'_0, b'_1\}$ forms a 2 × 2 pattern, again contradicting the assumption that $\mathcal{S}$ is a string complex.

We have now exhausted all possibilities, each of which leads to a contradiction. So the assumption that $\mathcal{S}$ is irreducible is false, and $\mathcal{S}$ must be reducible. ☐

**Corollary 3.** Every string complex can be reduced to the empty complex.
**Proof.** Let $\mathcal{S}$ be a string complex. It cannot be irreducible by Theorem 3, and so it must have a box $b$ of degree $\le 2$. Let $\mathcal{S}_1 = \mathcal{S} \setminus b$ be the complex with $b$ removed. We claim that $\mathcal{S}_1$ is again a string complex. The reason is that the forbidden 2 × 2 pattern cannot be created by the removal of a box. Therefore, applying Theorem 3 again, $\mathcal{S}_1$ is reducible. Continuing in this manner, we can reduce $\mathcal{S}$ to the empty complex. ☐
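The reduction order promised by Corollary 3 can be sketched as a simple greedy procedure. The representation below (an adjacency dict over arbitrary box labels) is ours, not the paper's; it abstracts a box complex to its face-adjacency graph.

```python
def reduce_complex(adjacency):
    """Greedily remove boxes of degree <= 2; return the removal order,
    or None if no such box exists (the complex is irreducible)."""
    adj = {b: set(nbrs) for b, nbrs in adjacency.items()}
    order = []
    while adj:
        # find any box of degree <= 2
        b = next((b for b, nbrs in adj.items() if len(nbrs) <= 2), None)
        if b is None:
            return None  # every remaining box has degree >= 3
        for nbr in adj.pop(b):
            adj[nbr].discard(b)
        order.append(b)
    return order

# A 1x1x3 stack of boxes is a string complex and reduces to the empty complex:
print(reduce_complex({0: {1}, 1: {0, 2}, 2: {1}}))  # → [0, 1, 2]
```

By Theorem 3, for a string complex the inner search never fails, so the loop always empties the complex; removing a box can only lower degrees, which is why the claim in the proof of Corollary 3 lets the procedure continue.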
**5. Conclusion**

That box complexes in $\mathbb{R}^2$ sometimes need 3 colors is a straightforward observation, but whether any box complex in $\mathbb{R}^3$ might need 4 colors is an open question. Although it is natural to expect that the chromatic number might be $n+1$ for boxes in $\mathbb{R}^n$, as it is for simplices, we in fact have no example that requires more than 3 colors for any $n \ge 3$.

**Acknowledgments**

We thank the participants of the 2012 AMS Mathematics Research Institute for stimulating discussions, and we thank the referees for their insightful comments. The proof of Theorem 2 was developed in collaboration with Smith students Lily Du, Jessica Lord, Micaela Mendlow, Emily Merrill, Viktoria Pardey, Rawia Salih, and Stephanie Wang. The first, third and last authors were supported by an AMS Mathematics Research Communities grant.

---PAGE_BREAK---

References

[1] Kenneth Appel, Wolfgang Haken, Every planar map is four colorable, Bull. Amer. Math. Soc. 82 (5) (1976) 711–712.

[2] Bhaskar Bagchi, Basudeb Datta, Higher-dimensional analogues of the map coloring problem, Amer. Math. Monthly 120 (8) (2013) 733–737.

[3] Béla Bollobás, Modern Graph Theory, in: Graduate Texts in Mathematics, vol. 184, Springer-Verlag, New York, 1998.

[4] Suzanne Gallagher, Joseph O'Rourke, Coloring objects built from bricks, in: Proc. 15th Canad. Conf. Comput. Geom., 2003, pp. 56–59.

[5] Alexandr V. Kostochka, On almost (k - 1)-degenerate (k + 1)-chromatic graphs and hypergraphs, Discrete Math. 313 (4) (2013) 366–374.

[6] Joseph O'Rourke, A note on solid coloring of pure simplicial complexes, December 2010, arXiv:1012.4017 [cs.DM].

[7] Tom Sibley, Stan Wagon, Rhombic Penrose tilings can be 3-colored, Amer. Math. Monthly 107 (3) (2000) 251–253.

[8] Alexander Soifer, Chromatic number of the plane & its relatives. I. The problem & its history, Geombinatorics 12 (3) (2003) 131–148.
samples/texts_merged/3723390.md
ADDED
@@ -0,0 +1,333 @@
---PAGE_BREAK---
# Capacity of multiservice WCDMA Networks with variable GoS
Nidhi Hegde and Eitan Altman

INRIA, 2004 route des Lucioles, B.P. 93, 06902 Sophia-Antipolis, France
Email: {Nidhi.Hegde, Eitan.Altman}@sophia.inria.fr

Abstract— Traditional definitions of the capacity of CDMA networks are either related to the number of calls they can handle (pole capacity) or to the arrival rate that guarantees that the rejection rate (or outage) is below a given fraction (Erlang capacity). We extend the latter definition to other quality of service (QoS) measures. We consider best-effort (BE) traffic sharing the network resources with real-time (RT) applications. BE applications can adapt their instantaneous transmission rate to the available one, and thus need not be subject to admission control or outages; their meaningful QoS measure is the average delay. The delay aware capacity is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute both the blocking probability of the RT traffic, which has an adaptive Grade of Service (GoS), and the expected delay of the BE traffic for an uplink multicell WCDMA system. This yields the Erlang capacity for the former and the delay capacity for the latter.
## I. INTRODUCTION

Third generation mobile networks, such as the Universal Mobile Telecommunications System (UMTS), will provide a wide variety of services to users, including multimedia and interactive real-time applications as well as best-effort applications such as file transfer, Internet browsing, and electronic mail. These services have varied quality of service (QoS) requirements: real-time (RT) applications need a guaranteed minimum transmission rate as well as delay bounds, which requires reservation of system capacity. We assume that RT traffic is subject to Call Admission Control (CAC) in order to guarantee the minimum rates for accepted RT calls. This implies that RT traffic may suffer rejections, whose rate is then an important QoS measure for such applications. In contrast, best-effort (BE) applications can adapt their transmission rate to the network's available resources and are therefore not subject to CAC. The relevant QoS measure for BE traffic is then the expected sojourn time (or delay) of a call in the system (e.g. the expected time to download a file).
We consider BE traffic sharing the network resources with RT applications. Our aim is to compute both the blocking (or rejection) probability of the RT traffic and the expected delay of the BE traffic for an uplink multicell WCDMA system. Although RT calls need a minimum guaranteed transmission rate, they are assumed to be able to adapt to network resources in a way similar to the BE traffic. For example, in the case of voice applications, UMTS will use the Adaptive Multi-Rate (AMR) codec, which offers eight different voice transmission rates varying from 4.75 kbps to 12.2 kbps that can be changed dynamically every 20 msec.

Although both RT and BE traffic have adaptive rates, we identify a key difference between the two: the *duration* of an RT call does not depend on the instantaneous rate it is assigned (only the quality may change), whereas for BE calls, the *total volume transmitted* during the call does not depend on the assigned rate; the duration of BE calls therefore does depend on the dynamic rate assignment. We propose a probabilistic model that takes these features into account and enables us to compute the performance measures of interest: the blocking probability and the average throughput per RT call, the expected number of RT and BE calls in the system, and the expected delay of a BE call.
We extend the notion of capacity in order to describe the amount of traffic for which the system can offer reasonable QoS. Traditional definitions of network capacity are either related to the number of calls the network can handle (pole capacity) or to the arrival rate that guarantees that the rejection rate (or outage) is below a given fraction (Erlang capacity, see [11]). We extend the latter definition to other QoS measures. The delay aware capacity, suitable in particular for BE traffic, is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute it as a function of the other parameters of the system (the arrival rate and characteristics of the RT traffic, and the CAC and downgrading policy applied to RT traffic).

We briefly mention related work. In [10], an uplink CDMA system with two classes is considered: the RT traffic is transmitted all the time, while the non-real-time (NRT) mobiles are time-shared. A related idea has also been analyzed in [6]; the benefits of time sharing are studied and conditions for silencing some mobiles are obtained. The capacity of voice/data CDMA systems is also analyzed in [7], where both classes are modeled as VBR traffic. Adaptive transmission rates are not considered in the above references. In [1], the author considers the influence of the value of a fixed (non-adaptive) bandwidth per BE call on the Erlang capacity of the system (which also includes RT calls), taking into account that a lower bandwidth implies longer call durations. A limiting capacity (as the fixed bandwidth vanishes) is identified and computed. Related work [2], [9] has also been done for wireline ATM networks (although without the power control aspects and the downgrading features of wireless).

The structure of this paper is as follows. The next section introduces the model and preliminaries. Section III computes the performance of RT and BE traffic in the case of a
---PAGE_BREAK---
single sector using a matrix-geometric approach. This is then extended in Section IV to the multisector, multicell case using a fixed-point argument. In Section V we provide numerical examples, and we end with a concluding section.

## II. PRELIMINARIES

We consider the uplink of a multi-service WCDMA system with K service classes. Let $X_j$ be the number of ongoing calls of type j in some given sector, and $\mathbf{X} = (X_1, \dots, X_K)$. In CDMA systems, in order for a signal to be received, the ratio of its received power to the sum of the background noise and interference must be greater than a given constant. For a given $\mathbf{X}$, this condition is as follows [5]:

$$ \frac{P_j}{N + I_{\text{own}} + I_{\text{other}} - P_j} \triangleq \gamma_j \ge \tilde{\Delta}'_j, \quad j = 1, \dots, K, \quad (1) $$

where N is the background noise, and $I_{\text{own}}$ and $I_{\text{other}}$ are the total powers received from the mobiles within the considered sector and within the other sectors or cells, respectively. $\gamma_j$ is the ratio of received power to total received noise and interference at the base station (the SIR), and $\tilde{\Delta}'_j$ is the required SIR for a call of class j, given by $\tilde{\Delta}'_j = \frac{E_j R_j}{N_o W}$, where $E_j$ is the energy per transmitted bit of type j, $N_o$ is the thermal noise density, $W$ is the WCDMA modulation bandwidth, and $R_j$ is the transmission rate of the type j call.

The interference received from mobiles in the same sector is simply $I_{\text{own}} = \sum_{j=1}^K X_j P_j$. When $X_j$ is fixed for all $j = 1, \dots, K$, we also make the standard assumption [5] that the other-cell interference is proportional to the own-cell interference by some constant $f$:
$$ I_{\text{other}} = f I_{\text{own}}. \quad (2) $$

Note that the above assumes perfect power control. Due to inaccuracies in the closed-loop fast power control mechanism, mainly caused by shadow fading of the radio signal, $\gamma_j$ may not equal $\tilde{\Delta}'_j$ at all times. We now define $\gamma_j$ to be a random variable of the form $\gamma_j = 10^{\xi_j/10}$, where $\xi_j \sim N(\mu_\xi, \sigma_\xi)$ includes the shadow fading component and $\sigma_\xi$ is the standard deviation of shadow fading, with typical values between 0.3 and 2 dB [4], [11]. It follows that $\gamma_j$ has a lognormal distribution given by $f_{\gamma_j}(x) = \frac{h}{x \sigma_\xi \sqrt{2\pi}} \exp\left(-\frac{(h \ln x - \mu_\xi)^2}{2\sigma_\xi^2}\right)$, where $h = 10/\ln 10$.
Since $\gamma_j$ is now a random variable, we can write condition (1) in terms of $\tilde{\gamma}_j$, the average received SIR. We would now like to determine the required SIR $\tilde{\Delta}_j$ such that $\tilde{\gamma}_j = \tilde{\Delta}_j$, where $\tilde{\Delta}_j$ accounts for power control errors and replaces $\tilde{\Delta}'_j$ in (1). We determine $\tilde{\Delta}_j$ from the outage condition $\Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta$ [12]. The reliability, $\beta$, is typically set to 99%. We have:

$$ \Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta = \int_{\tilde{\Delta}'_j}^{\infty} f_{\gamma_j}(x)\, dx = Q \left( \frac{h \ln \tilde{\Delta}'_j - \mu_{\xi}}{\sigma_{\xi}} \right) $$

where $Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt$.

By inverting the Q-function above, we have:

$$ \tilde{\Delta}'_j = 10^{\left(\frac{Q^{-1}(\beta)\sigma_\xi}{10} + \frac{\mu_\xi}{10}\right)} \quad (3) $$

Since $\gamma_j$ is a lognormal random variable, its expectation is given by $\tilde{\gamma}_j = \exp\left(\frac{\sigma_\xi^2}{2h^2} + \frac{\mu_\xi}{h}\right)$. Solving for $\mu_\xi$, we obtain:

$$ \mu_\xi = h \ln \tilde{\gamma}_j - \frac{\sigma_\xi^2}{2h} \quad (4) $$

We use (3) and (4) to get:

$$ \tilde{\Delta}'_j = \tilde{\gamma}_j 10^{\frac{Q^{-1}(\beta)\sigma_\xi}{10} - \frac{\sigma_\xi^2}{20h}} $$

We then have the SIR condition (1) modified as follows:

$$ \tilde{\gamma}_j \geq \tilde{\Delta}'_j \Gamma = \frac{E_j R_j}{N_o W} \Gamma \triangleq \tilde{\Delta}_j \quad (5) $$

where

$$ \Gamma = 10^{\frac{\sigma_{\xi}^{2}}{20h} - \frac{Q^{-1}(\beta)\sigma_{\xi}}{10}}. $$
Note that $\Gamma$ is independent of the service class. The value of $\Gamma$ is a function of the standard deviation $\sigma_\xi$ of the shadow fading of users, which varies with user mobility. Differences in signal fading due only to user mobility are not considered in this paper. The modified required SIR above now includes a correction for imperfect power control.
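As a numerical illustration (our own sketch; parameter values are not from the paper), $\Gamma$ and the corrected SIR target of (5) can be evaluated with the standard normal quantile function:

```python
import math
from statistics import NormalDist

def gamma_correction(sigma_xi, beta=0.99):
    """Power-control correction factor Gamma from (5)."""
    h = 10.0 / math.log(10.0)
    # Q^{-1}(beta): since Q(x) = Pr[Z > x], Q^{-1}(beta) = Phi^{-1}(1 - beta)
    q_inv = NormalDist().inv_cdf(1.0 - beta)
    return 10.0 ** (sigma_xi ** 2 / (20.0 * h) - q_inv * sigma_xi / 10.0)

def required_sir(E_over_N0, R, W, sigma_xi, beta=0.99):
    # eq. (5): Delta_j = (E_j R_j) / (N_o W) * Gamma
    return (E_over_N0 * R / W) * gamma_correction(sigma_xi, beta)

print(gamma_correction(1.0))  # > 1: imperfect power control raises the requirement
```

With $\sigma_\xi = 0$ (perfect power control) the factor collapses to 1, recovering the uncorrected requirement.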
Revisiting (1), we notice that in order to serve a large number of ongoing calls, that is, to keep the $X_j$s high, we must keep the $P_j$s as low as possible. We then solve for the minimum required received power $P_j$ satisfying (5), which is known to be the one that gives strict equality $\tilde{\gamma}_j = \tilde{\Delta}_j$ in (5):

$$ P_j = \frac{N\Delta_j}{1 - (1+f)\sum_{k=1}^{K} X_k \Delta_k} \quad (6) $$

where $\Delta_j = \frac{\tilde{\Delta}_j}{1+\tilde{\Delta}_j}$ turns out to be the signal-to-total-power ratio, STPR (see [1, eq. 4]).
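A direct evaluation of (6) can be sketched as follows (the occupancy vector and STPR values are illustrative, and the noise $N$ is normalized to 1):

```python
def min_received_powers(N, f, X, Delta):
    """Minimum received powers per class, eq. (6)."""
    load = sum(x * d for x, d in zip(X, Delta))   # sum_k X_k Delta_k
    denom = 1.0 - (1.0 + f) * load
    if denom <= 0.0:
        raise ValueError("loading exceeds the pole capacity")
    return [N * d / denom for d in Delta]

# two classes, 10 and 5 ongoing calls, illustrative STPRs:
powers = min_received_powers(N=1.0, f=0.55, X=[10, 5], Delta=[0.01, 0.02])
print(powers)
```

As the loading approaches $1/(1+f)$ the denominator vanishes and the required powers diverge, which is exactly the pole-capacity condition discussed below.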
Define the loading as:

$$ \theta = \sum_{j=1}^{K} X_j \Delta_j(\mathbf{X}). \quad (7) $$

This definition reflects the fact that $\Delta_j$ is a function of the number of calls of each type in the system (since it depends on the transmission rate $R_j$, and $R_j$ will be determined as a function of the system state). In this paper we consider both real-time (RT) and best-effort (BE) services that receive a variable rate. As explained in Section III, the rate received by RT calls, and thus $\Delta_{RT}$, depends on the number of RT calls. The rate received by BE calls depends on both $X_{RT}$ and $X_{BE}$. We maintain this dependence throughout the paper; however, for notational convenience we will sometimes drop the argument $(\mathbf{X})$.

Now we may define the integer capacity of the cell as the set $X^*$ of vectors $\mathbf{X}$ such that the received powers of the mobiles stay finite, i.e. such that the denominator of (6) does not vanish [1]. In the equation for the minimum received power shown in (6),

---PAGE_BREAK---

this implies the condition $\theta(1+f) < 1$. The system ensures, through Call Admission Control (CAC), that the denominator does not vanish; more generally, it is desirable to be even more conservative and to impose a bound on the capacity, $\Theta_{\epsilon} = 1 - \epsilon$ with $\epsilon > 0$. Thus the CAC will ensure that $\theta \le \Theta_{\epsilon}/(1+f)$. Later on we shall consider special policies for RT traffic that combine CAC with rate adaptation, along with rate adaptation for NRT traffic, which will result in a further restriction on the number of RT calls that the system can handle (also called, with some abuse of notation, the integer capacity of RT traffic).
## III. SINGLE SECTOR IN ISOLATION

Let us first consider a single sector, so that we may exclude interference from other sectors and other cells from the calculations, thereby setting $f = 0$ in this section. We consider a base station with uplink capacity such that

$$ \theta \le \Theta_{\epsilon}. \quad (8) $$

Here we define *capacity* in terms of the sum of the $\Delta$'s (STPRs) of all users. We denote by individual normalized bandwidth the individual required STPR that corresponds to a particular rate. For example, a call that requires a rate of $y$ bps requires a normalized bandwidth of $\Delta = \frac{E/N_o}{W/y+E/N_o}$, where $E/N_o$ is the requirement specified for the given service type of the call.
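For instance (a sketch of our own; the $E/N_o$ value is illustrative, while $W = 3.84$ Mcps is the standard WCDMA chip rate), the normalized bandwidth of a 12.2 kbps call:

```python
def normalized_bandwidth(y, W, E_over_N0):
    """Individual normalized bandwidth (required STPR) for a rate of y bps."""
    return E_over_N0 / (W / y + E_over_N0)

W = 3.84e6  # WCDMA modulation bandwidth (chip rate), Hz
delta = normalized_bandwidth(12200.0, W, E_over_N0=5.0)
print(round(delta, 4))  # → 0.0156
```

The value is dimensionless and lies in (0, 1): it is the fraction of the total received power that this single call must contribute.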
#### A. Real Time Calls

We assume a single type of RT call capable of accepting a variable rate, with a requested transmission rate $R_{\text{RT}}^r$. From (5) and the definition of $\Delta_j$ following (6), we derive the required bandwidth $\Delta_{\text{RT}}^r$ that corresponds to rate $R_{\text{RT}}^r$:

$$ \Delta_{\text{RT}}^{r} = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^{r} + E_{\text{RT}}/N_o}. $$
We now introduce the parameters of the call admission control for the RT traffic. All BE calls in the sector share equally the capacity remaining after RT calls have been allocated their required normalized bandwidth. In addition, we assume that some portion of the capacity is reserved for BE calls; thus the RT calls have a maximum capacity, denoted by $L_{\text{RT}}$. Let $L_{\text{BE}}$ denote the minimum portion of the total capacity available for BE calls. We then have $L_{\text{BE}} = \Theta_{\epsilon} - L_{\text{RT}}$. We have the following condition for the capacity bound on RT calls:

$$ X_{\text{RT}}\Delta_{\text{RT}} \le L_{\text{RT}} \quad (9) $$

where $\Delta_{\text{RT}}$ is the normalized bandwidth received by each RT call. Note that this value will depend on the number of RT calls, and thus may vary.

The integer capacity for RT calls, such that they all receive the requested rate $R_{\text{RT}}^r$ and bandwidth $\Delta_{\text{RT}}^r$, is then given by
$$ N_{\text{RT}} = \left\lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^r} \right\rfloor. $$

*1) CAC and GoS control:* In a strict call admission control scheme for RT calls, new RT call arrivals would be blocked and cleared when there are $N_{\text{RT}}$ RT calls in the sector. However, in UMTS, we can control the GoS by providing RT calls with a variable transmission rate [3]. In such a case, we may allow more than $N_{\text{RT}}$ RT calls, at the expense of reducing the transmission rate of all RT calls, thus keeping the total normalized bandwidth occupied by all RT calls within the limit. Let us then define a second threshold for admission of RT calls, $M_{\text{RT}} > N_{\text{RT}}$. Call admission control for RT calls is then as follows. As long as the number of RT calls is at most $N_{\text{RT}}$, all RT calls receive the requested normalized bandwidth $\Delta_{\text{RT}}^r$. When the number of RT calls is more than $N_{\text{RT}}$ but not more than $M_{\text{RT}}$, all RT calls equally receive a modified (reduced) normalized bandwidth such that (9) is satisfied with equality. If there are $M_{\text{RT}}$ RT calls in the sector, new RT call arrivals are blocked and cleared. $M_{\text{RT}}$ may be chosen so that RT calls receive a minimum transmission rate of $R_{\text{RT}}^m$, with normalized bandwidth $\Delta_{\text{RT}}^m$, even in the worst case. The integer capacity for RT calls is then $M_{\text{RT}} = \left\lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^m} \right\rfloor$, where $\Delta_{\text{RT}}^m = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^m+E_{\text{RT}}/N_o}$, as derived from (5). The bandwidth received by each RT call at some time $t$ is thus a function of $X_{\text{RT}}(t)$, as follows:

$$ \Delta_{\text{RT}}(X_{\text{RT}}(t)) = \begin{cases} \Delta_{\text{RT}}^{r}, & 1 \le X_{\text{RT}}(t) \le N_{\text{RT}}; \\ L_{\text{RT}}/X_{\text{RT}}(t), & N_{\text{RT}} < X_{\text{RT}}(t) \le M_{\text{RT}}. \end{cases} \quad (10) $$
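The assignment (10) translates directly into code. The sketch below uses illustrative values $\Delta_{\text{RT}}^r = 0.0625$ and $L_{\text{RT}} = 0.25$ (chosen to be exact in binary floating point), giving $N_{\text{RT}} = 4$:

```python
def delta_rt(x, delta_r=0.0625, L_rt=0.25):
    """Normalized bandwidth per RT call under the GoS policy (10)."""
    N_rt = int(L_rt / delta_r)   # N_RT = floor(L_RT / Delta^r) = 4 here
    if x <= N_rt:
        return delta_r           # requested bandwidth, no downgrading
    return L_rt / x              # equal share of L_RT, reduced rate

print([delta_rt(x) for x in range(1, 8)])
```

The per-call bandwidth is constant up to $N_{\text{RT}}$ and then decreases as $L_{\text{RT}}/x$, keeping the aggregate RT load pinned at $L_{\text{RT}}$.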
*2) RT Traffic Model:* We assume that RT calls arrive according to a Poisson process with rate $\lambda_{\text{RT}}$. The duration of an RT call is assumed to be exponentially distributed with mean $1/\mu_{\text{RT}}$ and is not affected by the allocated bandwidth. Let $X_1(t)$ and $X_2(t)$ represent the number of RT and BE customers, respectively, at time $t$ in the given sector. The number of RT calls in the system is not affected by the BE calls. Therefore, $X_1(t)$ follows a birth and death process with birth rate $\lambda_{\text{RT}}$ and death rate $x\mu_{\text{RT}}$ in state $x$. The steady-state probabilities $\pi_{\text{RT}}(x)$ of the number of RT calls $x$ in the system are given by:

$$ \mathrm{Pr}[X_{\mathrm{RT}} = x] = \lim_{t \to \infty} \mathrm{Pr}[X_{\mathrm{RT}}(t) = x] = \frac{\rho_{\mathrm{RT}}^x / x!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (11) $$

where $\rho_{\mathrm{RT}} = \lambda_{\mathrm{RT}}/\mu_{\mathrm{RT}}$. For RT calls, we are interested in the call blocking probability and the average throughput. The call blocking probability is given by:

$$ P_B^{\mathrm{RT}} = \pi_{\mathrm{RT}}(M_{\mathrm{RT}}) = \frac{\rho_{\mathrm{RT}}^{M_{\mathrm{RT}}}/M_{\mathrm{RT}}!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (12) $$
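Equation (12) is a truncated-Erlang (Erlang-B) expression and can be evaluated stably by accumulating the terms $\rho^i/i!$ iteratively rather than computing factorials directly; the load value below is illustrative:

```python
def rt_blocking(rho, M):
    """RT blocking probability, eq. (12): Erlang B with M servers, load rho."""
    terms = [1.0]
    for i in range(1, M + 1):
        terms.append(terms[-1] * rho / i)   # rho^i / i!
    return terms[-1] / sum(terms)

print(rt_blocking(rho=10.0, M=20))
```

For $M_{\text{RT}} = 1$ this reduces to the familiar $\rho/(1+\rho)$, a quick sanity check on the recursion.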
We define $r(x)$ to be the transmission rate received by each RT call when there are $x$ RT calls in the sector, as follows:

$$ r(x) = \frac{\Delta_{\text{RT}}(x)\, W}{(1 - \Delta_{\text{RT}}(x))\, E_{\text{RT}}/N_o} $$

Since the transmission rate of RT calls is affected by the number of RT calls, we would like to include in our definition of expected throughput a measure of the number of RT calls in the sector. We define the expected throughput per call as

---PAGE_BREAK---

the ratio of the expected global throughput to the expected number of RT calls in the sector, as follows:

$$ \mathbb{E}[r(X_{\mathrm{RT}})] = \frac{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x]\, x\, r(x)}{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x]\, x} \quad (13) $$

#### B. Best-Effort Calls
We define $C(x)$ to be the capacity available to BE calls when there are $x$ RT calls, as follows:

$$ C(x) = \begin{cases} \Theta_{\epsilon} - x \Delta_{\text{RT}}^r, & x \le N_{\text{RT}}; \\ L_{\text{BE}}, & N_{\text{RT}} < x \le M_{\text{RT}}. \end{cases} $$

All BE calls in the sector share equally the available bandwidth. We can then model BE service by a processor sharing (PS) discipline with a random service capacity. We study two performance metrics for BE calls: the average sojourn time of a BE call for given values of RT and BE load, and the maximum BE arrival rate such that the average delay is always bounded by a given constant.

Best-effort calls arrive according to a Poisson process with rate $\lambda_{\text{BE}}$. The required workloads of BE calls, i.e. file sizes, are i.i.d. exponentially distributed with mean $1/\mu_{\text{BE}}$. The departure rate of BE calls is given by $\nu(X_{\text{RT}}) = \mu_{\text{BE}}R_{\text{BE}}(X_{\text{RT}})$, where $R_{\text{BE}}(X_{\text{RT}})$ is the total BE rate corresponding to the available BE capacity $C(X_{\text{RT}})$, as follows:
$$ R_{\text{BE}}(X_{\text{RT}}) = \frac{C(X_{\text{RT}})\,W}{(1 - C(X_{\text{RT}}))\, E_{\text{BE}}/N_o}. $$

We assume no call admission control for BE calls. The process $(X_2(t), X_1(t))$ is an irreducible Markov chain. It is ergodic if and only if the average service capacity available to BE calls is greater than the BE load (as in [2]):

$$ \mu_{\text{BE}}\, \mathbb{E}[R_{\text{BE}}(X_{\text{RT}})] > \lambda_{\text{BE}}. \quad (14) $$

Specifically, the process $(X_2(t), X_1(t))$ is a homogeneous quasi birth and death process (QBD) with generator $Q$. The stationary distribution $\pi$ of this system is calculated from $\pi Q = 0$ with the normalization condition $\pi e = 1$, where $e$ is a vector of ones of the proper dimension. $\pi$ represents the steady-state probabilities of the two-dimensional process in lexicographic order: we partition $\pi$ as $[\pi(0), \pi(1), \dots]$ with the vector $\pi(i)$ for level $i$, where the levels correspond to the number of BE calls in the system. We may further partition each level by the number of RT calls: $\pi(i) = [\pi(i, 0), \pi(i, 1), \dots, \pi(i, M_{RT})]$ for $i \ge 0$.
The generator $Q$ has the form:

$$
Q = \begin{bmatrix}
B & A_0 & 0 & 0 & \cdots \\
A_2 & A_1 & A_0 & 0 & \cdots \\
0 & A_2 & A_1 & A_0 & \cdots \\
0 & 0 & \ddots & \ddots & \ddots
\end{bmatrix} \quad (15)
$$
where the matrices $B$, $A_0$, $A_1$, and $A_2$ are square matrices of size $(M_{\text{RT}} + 1)$. $A_0$ corresponds to a BE connection arrival and is given by $A_0 = \text{diag}(\lambda_{\text{BE}})$. $A_2$ corresponds to a departure of a BE call; since the departure rate for BE calls is $\nu(X_{\text{RT}})$, we have $A_2 = \text{diag}(\nu(i);\ 0 \le i \le M_{\text{RT}})$. $A_1$ corresponds to the arrival and departure processes of the RT calls and is tri-diagonal, as follows:

$$
\begin{align*}
A_1[i, i+1] &= \lambda_{\text{RT}} \\
A_1[i, i-1] &= i\mu_{\text{RT}} \\
A_1[i, i] &= -\lambda_{\text{RT}} - i\mu_{\text{RT}} - \lambda_{\text{BE}} - \nu(i)
\end{align*}
$$

We also have $B = A_1 + A_2$.
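The blocks can be assembled directly; below is a sketch with made-up rates for $M_{\text{RT}} = 2$. Note that at the boundary level $i = M_{\text{RT}}$ the $\lambda_{\text{RT}}$ term drops from the diagonal (RT arrivals are blocked there), so that the generator's rows sum to zero:

```python
import numpy as np

def qbd_blocks(lam_rt, mu_rt, lam_be, nu, M_rt):
    """Assemble the QBD blocks A0 (BE arrival), A1 (RT phase), A2 (BE departure)."""
    n = M_rt + 1
    A0 = lam_be * np.eye(n)                       # BE call arrival
    A2 = np.diag([nu(i) for i in range(n)])       # BE call departure, rate nu(i)
    A1 = np.zeros((n, n))
    for i in range(n):
        if i < M_rt:
            A1[i, i + 1] = lam_rt                 # RT arrival (blocked at i = M_rt)
        if i > 0:
            A1[i, i - 1] = i * mu_rt              # RT departure
        A1[i, i] = -(lam_rt * (i < M_rt) + i * mu_rt + lam_be + nu(i))
    return A0, A1, A2

A0, A1, A2 = qbd_blocks(1.0, 0.5, 2.0, lambda i: 3.0 - i, M_rt=2)
B = A1 + A2
print((A0 + A1 + A2).sum(axis=1))   # conservative generator: rows sum to zero
```

Here `nu` is a stand-in for $\nu(i) = \mu_{\text{BE}} R_{\text{BE}}(i)$, decreasing in the number of RT calls as fewer resources are left for BE traffic.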
The steady-state equations can be written as:

$$ 0 = \pi(0)B + \pi(1)A_2 \quad (16) $$

$$ 0 = \pi(i-1)A_0 + \pi(i)A_1 + \pi(i+1)A_2, \quad i \ge 1 \quad (17) $$
We follow the matrix-geometric solution of this QBD [8]. Assuming stability as in (14), the steady-state solution $\pi$ exists and is given by:

$$ \pi(i) = \pi(0)\mathbf{R}^i \quad (18) $$

where the matrix $\mathbf{R}$ is the minimal non-negative solution of the equation:

$$ A_0 + \mathbf{R} A_1 + \mathbf{R}^2 A_2 = 0 \quad (19) $$
In order to solve for $\mathbf{R}$, we find it efficient to write $A_1 = T - S$, where $S$ is a diagonal matrix and $T$ has a zero diagonal. The diagonal matrix $S$ is positive and invertible, and we may rewrite (19) as $\mathbf{R} = (A_0 + \mathbf{R}T + \mathbf{R}^2 A_2)S^{-1}$. This equation can then be solved by successive iterations starting from $\mathbf{R} = 0$, the zero matrix.
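The successive-substitution iteration can be sketched as follows; the scalar sanity check at the bottom is our own (an M/M/1-type QBD whose minimal solution is known in closed form):

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100000):
    """Iterate R <- (A0 + R T + R^2 A2) S^{-1} with A1 = T - S, from R = 0."""
    S = -np.diag(np.diag(A1))        # S: positive diagonal part of -A1
    T = A1 + S                       # T: off-diagonal part, zero diagonal
    S_inv = np.linalg.inv(S)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_next = (A0 + R @ T + R @ R @ A2) @ S_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("iteration did not converge")

# 1x1 check: for A0 = [lam], A1 = [-(lam+mu)], A2 = [mu], the minimal
# nonnegative root of lam - (lam+mu)R + mu R^2 = 0 is lam/mu.
lam, mu = 1.0, 2.0
R = solve_R(np.array([[lam]]), np.array([[-(lam + mu)]]), np.array([[mu]]))
print(R[0, 0])   # converges to lam/mu = 0.5
```

Starting from the zero matrix, the iterates increase monotonically toward the minimal non-negative solution, which is why the procedure picks out the correct root rather than the spurious one.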
Once the matrix $\mathbf{R}$ is known, we may find $\pi(0)$ using the boundary condition (16) and the normalization $\pi e = 1$, which by (18) is equivalent to $\pi(0)(I - \mathbf{R})^{-1}e = 1$. The marginal distribution of the number of RT calls can easily be obtained using (11). The marginal probability of the number of BE calls is

$$ \mathrm{Pr}[X_{\mathrm{BE}} = i] = \sum_{j=0}^{M_{\mathrm{RT}}} \pi(i,j) = \pi(i)e = \pi(0)\mathbf{R}^i e. $$
One way to compute the above is by finding the $M_{\mathrm{RT}} + 1$ eigenvalues and corresponding eigenvectors of the matrix $\mathbf{R}$. All $M_{\mathrm{RT}} + 1$ eigenvalues of $\mathbf{R}$ are distinct [9], and therefore $\mathbf{R}$ is diagonalizable. Define $D$ to be a diagonal matrix containing the eigenvalues $r_i$ of $\mathbf{R}$ on its diagonal, and $V$ to be a matrix containing the corresponding eigenvectors $v_i$ as columns. We then have:

$$ \mathrm{Pr}[X_{\mathrm{BE}} = i] = \pi(0)\mathbf{R}^i e = \pi(0)V D^i V^{-1}e = \sum_{k=0}^{M_{\mathrm{RT}}} a_k r_k^i $$
|
| 246 |
+
|
| 247 |
+
where $a_k = \pi(0)v_k e'_k V^{-1}e$ and $e'_k$ is a zero vector of proper dimension with the $k$th element equal to one. The expectation of $X_{BE}$ is as follows:
|
| 248 |
+
|
| 249 |
+
$$
|
| 250 |
+
\mathbb{E}[X_{\text{BE}}] = \sum_{k=0}^{M_{\text{RT}}} a_k \frac{r_k}{(1-r_k)^2} \quad (20)
|
| 251 |
+
$$
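The spectral evaluation above can be sketched as follows; this is a minimal illustration (the function name and test matrices are ours), using the eigendecomposition $\mathbf{R} = V D V^{-1}$:

```python
import numpy as np

def expected_BE(pi0, R):
    """E[X_BE] = sum_k a_k r_k / (1 - r_k)^2 with a_k = (pi0 v_k)(e_k' V^{-1} e),
    computed from the eigendecomposition R = V D V^{-1}."""
    r, V = np.linalg.eig(R)                 # eigenvalues r_k, eigenvectors as columns
    e = np.ones(R.shape[0])
    a = (pi0 @ V) * (np.linalg.inv(V) @ e)  # coefficients a_k
    return float(np.real(np.sum(a * r / (1.0 - r) ** 2)))
```

Equivalently, summing $\pi(0)\mathbf{R}^i e$ termwise gives the closed form $\mathbb{E}[X_{BE}] = \pi(0)\mathbf{R}(I-\mathbf{R})^{-2}e$, which is a convenient cross-check.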
|
| 252 |
+
|
| 253 |
+
We can now use Little’s Law to calculate the average sojourn time of a BE session, $T_{BE} = E[X_{BE}]/\lambda_{BE}$. Having obtained the expected delay of BE traffic in terms of the
|
| 254 |
+
---PAGE_BREAK---
|
| 255 |
+
|
| 256 |
+
system parameters, one can now obtain the delay aware capacity of BE traffic, i.e. the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant.
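The search for this delay aware capacity can be sketched as a bisection on the BE arrival rate, assuming the expected delay is increasing in the load (the helper name and the toy delay curve are illustrative, not from the paper):

```python
def delay_aware_capacity(T_BE, c, lam_max, tol=1e-6):
    """Largest BE arrival rate lam in [0, lam_max] with T_BE(lam) <= c,
    assuming the expected sojourn time T_BE is increasing in lam."""
    lo, hi = 0.0, lam_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if T_BE(mid) <= c:
            lo = mid   # delay constraint still met: capacity is at least mid
        else:
            hi = mid
    return lo

# Toy M/M/1-style delay curve: T(lam) = lam / (1 - lam); T(lam) <= 1 iff lam <= 0.5.
cap = delay_aware_capacity(lambda lam: lam / (1.0 - lam), c=1.0, lam_max=0.9)
```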
|
| 257 |
+
|
| 258 |
+
IV. EXTENSION TO MULTIPLE SECTORS
|
| 259 |
+
|
| 260 |
+
In this section we provide an analysis for the multi-sector multi-cell case, by including an approximation for the other-sector interference, $I_{\text{other}}$. Above in (2), we have made the assumption that $I_{\text{other}}$ is proportional to $I_{\text{own}}$ by a constant $f$. Such a definition of other-sector interference and the subsequent derivation of the minimum required received power in (6) holds for a static network with a fixed number of mobiles. However, in our dynamic model of stochastic arrivals and holding times, such a definition may not hold at all times. We therefore approximate the instantaneous interference $I_{\text{other}}$ by its average $\mathbb{E}[I_{\text{other}}]$. We modify (2) to $I_{\text{other}} = f\mathbb{E}[I_{\text{own}}] = f\sum_{j=1}^{K} \mathbb{E}[X_j \Delta_j(\mathbf{X})]$. The minimum required received power in (6) is now as follows:
|
| 261 |
+
|
| 262 |
+
$$P_j = \frac{N \Delta_j}{1 - \sum_{k=1}^{K} X_k \Delta_k - f \sum_{k=1}^{K} \mathbb{E}[X_k \Delta_k(\mathbf{X})]}$$
|
| 263 |
+
|
| 264 |
+
Let $G$ denote the expected other-sector (and cell) interference, $G = f \sum_{j=1}^{K} \mathbb{E}[X_j \Delta_j(\mathbf{X})]$. The equation for $P_j$, above then implies the condition $\theta \le 1-G$. This condition is equivalent to (8) with $\Theta_G = 1-G$ replacing $\Theta_\epsilon$.
|
| 265 |
+
|
| 266 |
+
The expected interference due to RT calls is calculated as follows:
|
| 267 |
+
|
| 268 |
+
$$f\mathbb{E}[X_{RT}\Delta_{RT}(X_{RT})] = f \sum_{i=0}^{M_{RT}} \pi_{RT}(i)\, i\, \Delta_{RT}(i)$$
|
| 269 |
+
|
| 270 |
+
where we use (11) for $\pi_{RT}(i)$. For BE calls, we need not calculate the steady state distribution $\pi$. Since BE calls use all of the remaining capacity, the sum of the STPRs of the BE calls, when there is at least one BE call, is simply the available BE capacity, $C(X_{RT})$. The expected interference due to BE calls is given by:
|
| 271 |
+
|
| 272 |
+
$$f\mathbb{E}[X_{BE}\Delta_{BE}(\mathbf{X})] = f(1-\pi(0)e)\sum_{i=0}^{M_{RT}} \pi_{RT}(i)C(i)$$
|
| 273 |
+
|
| 274 |
+
where $\pi(0)e$ is the probability that there are no BE calls in the sector, and can be calculated using only (16) and the normalization condition $\pi e = 1$. For each fixed value of $G$, say $g$, we can obtain the probabilities $\pi_{RT}$ and $\pi(0)$ using $\Theta_g$ instead of $\Theta_\epsilon$. We denote these values by $\pi_{RT}^g$ and $\pi^g(0)$ respectively, and the expectation operator corresponding to these probabilities as $\mathbb{E}^g$. Define $F(g) = f \sum_{j \in K} \mathbb{E}^g[X_j \Delta_j(\mathbf{X})]$. $G$ then is the solution of the fixed point equation:
|
| 275 |
+
|
| 276 |
+
$$g = F(g) \quad (21)$$
|
| 277 |
+
|
| 278 |
+
We can now set the BE threshold as $L_{\text{BE}}^g = \Theta_g - L_{\text{RT}}$. Under this definition, for a given $L_{\text{RT}}$, $F(g)$ can be shown to be continuous in $g$. Since $F$ also maps a closed interval into itself, the Brouwer Fixed Point Theorem guarantees that a solution exists. Moreover, $F(g)$ can be shown to be nonincreasing in $g$, implying uniqueness of the solution to (21).
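Because $F$ is continuous and nonincreasing, $g - F(g)$ is strictly increasing, so the unique solution of (21) can be located by bisection. A minimal sketch (the sample $F$ is illustrative only, standing in for the interference map of the paper):

```python
def solve_fixed_point(F, lo=0.0, hi=1.0, tol=1e-10):
    """Unique root of g = F(g) on [lo, hi] for a continuous, nonincreasing F
    mapping [lo, hi] into itself; g - F(g) is then increasing, so bisect."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid < F(mid):   # g - F(g) < 0: the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: F(g) = 0.5 * (1 - g) is nonincreasing with fixed point g = 1/3.
g = solve_fixed_point(lambda g: 0.5 * (1.0 - g))
```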
|
| 279 |
+
|
| 280 |
+
V. NUMERICAL RESULTS
|
| 281 |
+
|
| 282 |
+
In this section we perform numerical experiments to evaluate the performance of RT and BE calls. The rate requested by the RT calls is 12.2 kbps (the maximum rate for the AMR speech service in UMTS [3]). For the results shown here we have assumed a minimum acceptable rate of 7.95 kbps, which is one of the eight possible rates for the AMR speech class. We assume that the set of rates acceptable to RT calls is continuous. We assume no minimum rate for BE calls. The average file size of a BE call is assumed to be 20 kBytes. We assume $E_{\text{RT}}/N_o = 4.1\,\text{dB}$, $E_{\text{BE}}/N_o = 3.1\,\text{dB}$ [3], a chip rate $W = 3.84\,\text{Mcps}$ and $\Theta_\epsilon = 1-10^{-5}$. We define the load in terms of the total RT rate available, $R_T$. The total RT rate is in turn defined as the product of the minimum RT rate and the integer capacity for RT calls if there were no BE threshold, $R_T = \lfloor \frac{\Theta_\epsilon}{\Delta_{RT}^m} \rfloor R_{RT}^m$. The normalized load for RT calls is defined by $\tilde{\rho}_{RT} = \frac{\lambda_{RT} R_{RT}^r}{\mu_{RT} R_T}$, and the BE normalized load is $\tilde{\rho}_{BE} = \frac{\lambda_{BE}}{\mu_{BE} R_T}$.
|
| 283 |
+
|
| 284 |
+
We consider the heavy traffic regime, where $\tilde{\rho}_{RT} = 0.5$ and $\tilde{\rho}_{BE} = 0.55$. We keep the normalized loads constant and vary the holding time of the RT calls. We evaluate the performance metrics of interest as a function of the BE reserved capacity, $L_{\text{BE}}$.
|
| 285 |
+
|
| 286 |
+
Figure 1 shows the change in RT call blocking probability, computed using (12), as the BE Threshold, $L_{\text{BE}}$ is varied from 0 to $\Theta_\epsilon$. As expected, as $L_{\text{BE}}$ is increased, there is less capacity available for RT calls, and their call blocking probability increases. We may observe the tradeoff between the service qualities of BE and RT calls in Figures 2 and 3. These figures show the expected RT throughput and expected BE sojourn time, respectively. In Figure 2 we see that the expected RT throughput, computed using (13), is close to the requested rate of 12.2kbps up to a BE threshold of approximately $L_{\text{BE}} = 0.35$. As $L_{\text{BE}}$ is increased further, the expected RT throughput gradually drops, always remaining above the minimum rate of 7.95kbps.
|
| 287 |
+
|
| 288 |
+
Fig. 1. RT Call Blocking for heavy traffic
|
| 289 |
+
|
| 290 |
+
The sensitivity of BE service quality is seen in Figures 3 and 4 with respect to not only the BE threshold, but also the RT call duration. In Figure 3 the expected BE sojourn time, computed using (20) and Little's Law, decreases as $L_{\text{BE}}$ is increased.
|
| 291 |
+
---PAGE_BREAK---
|
| 292 |
+
|
| 293 |
+
Fig. 2. Expected RT Throughput
|
| 294 |
+
|
| 295 |
+
For small values of $L_{BE}$ we see that the expected BE sojourn time varies greatly with increasing $L_{BE}$ when the duration of RT calls is large (smaller values of $\mu_{RT}$). The duration of the RT calls determines the time scale of the evolution of the number of RT calls in the system, and thus of the capacity available to the BE calls. When the mean duration of RT calls is small, the number of RT calls evolves much faster relative to the BE calls, and thus we would expect the BE calls to obtain a capacity that is fairly constant. When the mean duration of RT calls is large, the changes in capacity received by BE calls might cause the BE queue to build up for long periods during which there are many ongoing RT calls, resulting in higher average sojourn times. For related results for non-variable RT GoS, see [2] and [9]. We observe from the figure that this effect can be diminished by increasing the BE threshold. An increase in $L_{BE}$ means that the capacity reserved for BE calls is substantial compared to the capacity remaining after the RT calls are served, an effect similar to having a constant capacity.
|
| 296 |
+
|
| 297 |
+
Fig. 3. Expected BE Sojourn Time
|
| 298 |
+
|
| 299 |
+
The delay aware capacity of BE calls for a fixed RT load is shown in Figure 4. Here, we find the maximum BE arrival rate such that $T_{BE} \le c$, where $c$ is a constant, set to 0.25 in this figure. As expected, the maximum BE arrival rate increases as $L_{BE}$ increases, allowing a larger portion of the total capacity for BE calls. We note again the sensitivity to mean RT call duration at smaller values of $L_{BE}$, where the delay capacity
|
| 300 |
+
|
| 301 |
+
approximately doubles when $\mu_{RT}$ is changed from 10 to 0.001.
|
| 302 |
+
|
| 303 |
+
Fig. 4. BE Delay Aware Capacity
|
| 304 |
+
|
| 305 |
+
## VI. CONCLUSION
|
| 306 |
+
|
| 307 |
+
We have modelled resource sharing of BE applications with RT applications in WCDMA networks. Both types of traffic have the flexibility to adapt to the available bandwidth, but unlike BE traffic, RT traffic requires strict minimum bounds on throughput. We studied the performance of both BE and RT traffic and examined the impact of reserving some portion of the bandwidth for the BE applications. We introduced a novel capacity definition related to the delay of BE traffic and showed how to compute it.
|
| 308 |
+
|
| 309 |
+
## REFERENCES
|
| 310 |
+
|
| 311 |
+
[1] Eitan Altman. Capacity of multi-service cdma cellular networks with best-effort applications. In *Proceedings of ACM MOBICOM*, September 2002.
|
| 312 |
+
|
| 313 |
+
[2] Eitan Altman, Damien Artiges, and Karim Traore. On the integration of best-effort and guaranteed performance services. In *European Transactions on Telecommunications, Special Issue on Architectures, Protocols and Quality of Service for the Internet of the Future*, 2, February-March 1999.
|
| 314 |
+
|
| 315 |
+
[3] Harri Holma and Antti Toskala, editors. WCDMA for UMTS, Radio Access For Third Generation Mobile Communications. John Wiley & Sons, Ltd., 2001.
|
| 316 |
+
|
| 317 |
+
[4] Insoo Koo, JeeHwan Ahn, Jeong-A Lee, and Kiseon Kim. Analysis of erlang capacity for the multimedia DS-CDMA systems. *IEICE Transactions on Fundamentals*, E82-A(5):849–55, May 1999.
|
| 318 |
+
|
| 319 |
+
[5] Jaana Laiho and Achim Wacker. Radio network planning process and methods for WCDMA. *Annales des Télécommunications*, 56(5-6):317–31, 2001.
|
| 320 |
+
|
| 321 |
+
[6] R. Leelahakriengkrai and R. Agrawal. Scheduling in multimedia CDMA wireless networks. Technical Report ECE-99-3, ECE Dept., University of Wisconsin-Madison, July 1999.
|
| 322 |
+
|
| 323 |
+
[7] N. Mandayam, J. Holtzman, and S. Barberis. Performance and capacity of a voice/data CDMA system with variable bit rate sources. In *Special Issue on Insights into Mobile Multimedia Communications*. Academic Press Inc., January 1997.
|
| 324 |
+
|
| 325 |
+
[8] M. F. Neuts. *Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach*. The Johns Hopkins University Press, 1981.
|
| 326 |
+
|
| 327 |
+
[9] R. Núñez Queija and O.J. Boxma. Analysis of a multi-server queueing model of ABR. *J. Appl. Math. Stoch. Anal.*, 11(3), 1998.
|
| 328 |
+
|
| 329 |
+
[10] S. Ramakrishna and Jack M. Holtzman. A scheme for throughput maximization in a dual-class CDMA system. *IEEE Journal on Selected Areas in Communications*, 16:830–44, 1998.
|
| 330 |
+
|
| 331 |
+
[11] Audrey M. Viterbi and Andrew J. Viterbi. Erlang capacity of a power controlled CDMA system. *IEEE Journal on Selected Areas in Communications*, 11(6):892–900, August 1993.
|
| 332 |
+
|
| 333 |
+
[12] Qiang Wu, Wei-Ling Wu, and Jiong-Pan Zhou. Effects of slow fading SIR errors on CDMA capacity. In *Proceedings of IEEE VTC*, pages 2215–17, 1997.
|
samples/texts_merged/4364106.md
ADDED
|
@@ -0,0 +1,764 @@
|
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
# ASYMPTOTIC BEHAVIOR OF COUPLED INCLUSIONS WITH VARIABLE EXPONENTS
|
| 5 |
+
|
| 6 |
+
PETER E. KLOEDEN*
|
| 7 |
+
|
| 8 |
+
Mathematisches Institut, Universität Tübingen
|
| 9 |
+
D-72076 Tübingen, Germany
|
| 10 |
+
|
| 11 |
+
JACSON SIMSEN
|
| 12 |
+
|
| 13 |
+
Instituto de Matemática e Computação, Universidade Federal de Itajubá
|
| 14 |
+
Av. BPS n. 1303, Bairro Pinheirinho, 37500-903, Itajubá - MG - Brazil
|
| 15 |
+
|
| 16 |
+
PETRA WITTBOLD
|
| 17 |
+
|
| 18 |
+
Fakultät für Mathematik, Universität Duisburg-Essen
|
| 19 |
+
Thea-Leymann-Str. 9, 45127 Essen, Germany
|
| 20 |
+
|
| 21 |
+
*(Communicated by Alain Miranville)*
|
| 22 |
+
|
| 23 |
+
**ABSTRACT.** This work concerns the study of asymptotic behavior of the solutions of a nonautonomous coupled inclusion system with variable exponents. We prove the existence of a pullback attractor and that the system of inclusions is asymptotically autonomous.
|
| 24 |
+
|
| 25 |
+
**1. Introduction.** Nonlinear reaction-diffusion equations have been studied extensively in recent years, and special attention has been given to coupled reaction-diffusion equations from various fields of the applied sciences, arising in epidemics, biochemistry and engineering [18]. Reaction-diffusion systems arise naturally in chemistry, where the most common application describes the change in space and time of the concentration of one or more chemical substances. One interest in chemical kinetics is the construction of mathematical models that can describe the characteristics of a chemical reaction. Mathematical models for electrorheological fluids were considered in [19, 20, 21], and variable exponents appear in the diffusion term (see also [7, 9]). Reaction-diffusion systems can be perturbed by discontinuous nonlinear terms, which leads to the study of differential inclusions rather than differential equations, for example, evolution differential inclusion systems with positively sublinear upper semicontinuous multivalued reaction terms *F* and *G* (see [6]).
|
| 26 |
+
|
| 27 |
+
2000 Mathematics Subject Classification. Primary: 35B40, 35B41, 35K57; Secondary: 35K55, 35K92.
|
| 28 |
+
|
| 29 |
+
**Key words and phrases.** Pullback attractor, reaction-diffusion coupled systems, variable exponents, asymptotically autonomous problems.
|
| 30 |
+
|
| 31 |
+
This work was initiated when the second author was supported with CNPq scholarship - process 202645/2014-2 (Brazil). The first author was supported by Chinese NSF grant 11571125. The second author was partially supported by the Brazilian research agency FAPEMIG process PPM 00329-16.
|
| 32 |
+
|
| 33 |
+
* Corresponding author.
|
| 34 |
+
---PAGE_BREAK---
|
| 35 |
+
|
| 36 |
+
This work concerns the coupled system of inclusions:
|
| 37 |
+
|
| 38 |
+
$$
|
| 39 |
+
(S) \quad \left\{
|
| 40 |
+
\begin{array}{ll}
|
| 41 |
+
\dfrac{\partial u_1}{\partial t} - \operatorname{div}(D_1(t, \cdot)|\nabla u_1|^{p(\cdot)-2}\nabla u_1) + |u_1|^{p(\cdot)-2}u_1 \in F(u_1, u_2) & t > \tau \\
|
| 42 |
+
\\
|
| 43 |
+
\dfrac{\partial u_2}{\partial t} - \operatorname{div}(D_2(t, \cdot)|\nabla u_2|^{q(\cdot)-2}\nabla u_2) + |u_2|^{q(\cdot)-2}u_2 \in G(u_1, u_2) & t > \tau \\
|
| 44 |
+
\\
|
| 45 |
+
\dfrac{\partial u_1}{\partial n}(t,x) = \dfrac{\partial u_2}{\partial n}(t,x) = 0 & \text{in } \partial\Omega, \\
|
| 46 |
+
\\
|
| 47 |
+
(u_1(\tau), u_2(\tau)) = (u_{0,1}, u_{0,2}) \text{ in } L^2(\Omega) \times L^2(\Omega), &
|
| 48 |
+
\end{array}
|
| 49 |
+
\right.
|
| 50 |
+
$$
|
| 51 |
+
|
| 52 |
+
on a bounded domain $\Omega \subset \mathbb{R}^n$, $n \ge 1$, with smooth boundary, where $F$ and $G$ are
|
| 53 |
+
bounded, upper semicontinuous and positively sublinear multivalued maps and the
|
| 54 |
+
exponents $p(\cdot), q(\cdot) \in C(\Omega)$ satisfy
|
| 55 |
+
|
| 56 |
+
$$
|
| 57 |
+
p^+ := \max_{x \in \bar{\Omega}} p(x) > p^- := \min_{x \in \bar{\Omega}} p(x) > 2, \quad q^+ := \max_{x \in \bar{\Omega}} q(x) > q^- := \min_{x \in \bar{\Omega}} q(x) > 2.
|
| 58 |
+
$$
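As a concrete illustration (this particular exponent is our own example, not taken from the references), a continuous variable exponent satisfying these conditions on a domain $\Omega$ containing the origin is

$$
p(x) = 3 + \frac{|x|^2}{1+|x|^2}, \qquad x \in \bar{\Omega},
$$

which is nonconstant on any such nontrivial domain, so that $p^- = 3 > 2$ and $p^+ > p^-$ as required.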
|
| 59 |
+
|
| 60 |
+
In addition, the diffusion coefficients $D_1, D_2$ are assumed to satisfy:
|
| 61 |
+
|
| 62 |
+
**Assumption D.** $D_1, D_2 : [\tau, T] \times \Omega \to \mathbb{R}$ are functions in $L^\infty([\tau, T] \times \Omega)$ satisfying:
|
| 63 |
+
(i) There is a positive constant $\beta$ such that $0 < \beta \le D_i(t, x)$ for almost all $(t, x) \in [\tau, T] \times \Omega$, $i = 1, 2$.
|
| 64 |
+
(ii) $D_i(t, x) \ge D_i(s, x)$ for a.a. $x \in \Omega$ and all $t \le s$ in $[\tau, T]$, $i = 1, 2$.
|
| 65 |
+
|
| 66 |
+
In this work we extend the results in [15] for a single inclusion to the case of a coupled inclusion system. We will prove that the strict generalized process (see Definition 2.7 in Section 2) defined by (S) possesses a pullback attractor. Moreover, we prove that the system (S) is in fact asymptotically autonomous. The proofs draw on ideas and results from several recent, distinct works [15, 22, 23, 27] of the authors, which are applied here to a new problem to yield interesting new results. In contrast to [13, 14, 15], where an equation and a single inclusion of this type were considered, the coupled system cannot be treated in the same way as the single case: the results must be adjusted to handle two inclusions, and the main technical difficulty lies in proving dissipativity.
|
| 67 |
+
|
| 68 |
+
The paper is organized as follows. First, in Section 2 we provide some definitions and results on the existence of global solutions and on generalized processes. In Section 3 we prove the existence of the pullback attractor for the system (S). In Section 4 we comment briefly on forward attraction, and in the last section we prove that the system (S) is asymptotically autonomous.
|
| 69 |
+
|
| 70 |
+
**2. Preliminaries, existence of global solutions and generalized processes.**
|
| 71 |
+
|
| 72 |
+
Consider now the system (S) in the following abstract form
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
(S2) \quad \left\{
|
| 76 |
+
\begin{array}{ll}
|
| 77 |
+
\dfrac{du}{dt}(t) + A(t)u(t) \in F(u(t), v(t)) & t > \tau \\
|
| 78 |
+
\\
|
| 79 |
+
\dfrac{dv}{dt}(t) + B(t)v(t) \in G(u(t), v(t)) & t > \tau \\
|
| 80 |
+
(u(\tau), v(\tau)) = (u_0, v_0) \in H \times H,
|
| 81 |
+
\end{array}
|
| 82 |
+
\right.
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where $F$ and $G$ are bounded, upper semicontinuous and positively sublinear mul-
|
| 86 |
+
tivalued maps (see Definitions 2.4, 2.3 and 2.5, respectively) and, for each $t > \tau$,
|
| 87 |
+
$A(t)$ and $B(t)$ are univalued maximal monotone operators of subdifferential type
|
| 88 |
+
in a real separable Hilbert space $H$. Specifically, $A(t) = \partial\varphi^t$ and $B(t) = \partial\psi^t$ for
|
| 89 |
+
---PAGE_BREAK---
|
| 90 |
+
|
| 91 |
+
nonnegative mappings $\varphi^t$, $\psi^t$ with $\partial\varphi^t(0) = \partial\psi^t(0) = 0$, $\forall t \in \mathbb{R}$ and the mappings $\varphi^t$, $\psi^t$ satisfy:
|
| 92 |
+
|
| 93 |
+
**Assumption A.** Let $T > \tau$ be fixed.
|
| 94 |
+
|
| 95 |
+
(A.1) There is a set $Z \subset (\tau, T]$ of zero measure such that $\phi^t$ is a lower semicontinuous proper convex function from $H$ into $(-\infty, \infty]$ with a nonempty effective domain for each $t \in [\tau, T] \setminus Z$.
|
| 96 |
+
|
| 97 |
+
(A.2) For any positive integer $r$ there exist a constant $K_r > 0$, an absolutely continuous function $g_r : [\tau, T] \to \mathbb{R}$ with $g'_r \in L^\beta(\tau, T)$ and a function of bounded variation $h_r : [\tau, T] \to \mathbb{R}$ such that if $t \in [\tau, T] \setminus Z$, $w \in D(\phi^t)$ with $|w| \le r$ and $s \in [t, T] \setminus Z$, then there exists an element $\tilde{w} \in D(\phi^s)$ satisfying
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\begin{align*}
|
| 101 |
+
|\tilde{w} - w| &\le |g_r(s) - g_r(t)|(\phi^t(w) + K_r)^{\alpha}, \\
|
| 102 |
+
\phi^s(\tilde{w}) &\le \phi^t(w) + |h_r(s) - h_r(t)|(\phi^t(w) + K_r),
|
| 103 |
+
\end{align*}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
where $\alpha$ is some fixed constant with $0 \le \alpha \le 1$ and
|
| 107 |
+
|
| 108 |
+
$$
|
| 109 |
+
\beta := \begin{cases} 2 & \text{if } 0 \le \alpha \le \frac{1}{2}, \\ \frac{1}{1-\alpha} & \text{if } \frac{1}{2} \le \alpha \le 1 \end{cases} .
|
| 110 |
+
$$
|
| 111 |
+
|
| 112 |
+
Let us first review some concepts and results from the literature, which will be useful in the sequel. We refer the reader to [2, 3, 29] for more details about multivalued analysis theory.
|
| 113 |
+
|
| 114 |
+
**2.1. Setvalued mappings.** Let $X$ be a real Banach space and $M$ a Lebesgue measurable subset in $\mathbb{R}^q$, $q \ge 1$.
|
| 115 |
+
|
| 116 |
+
**Definition 2.1.** The map $G : M \to P(X)$ is called measurable if for each closed subset $C$ in $X$ the set
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
G^{-1}(C) = \{y \in M; G(y) \cap C \neq \emptyset\}
|
| 120 |
+
$$
|
| 121 |
+
|
| 122 |
+
is Lebesgue measurable.
|
| 123 |
+
|
| 124 |
+
If $G$ is a univalued map, the above definition is equivalent to the usual definition
|
| 125 |
+
of a measurable function.
|
| 126 |
+
|
| 127 |
+
**Definition 2.2.** By a selection of $E: M \to P(X)$ we mean a function $f: M \to X$
|
| 128 |
+
such that $f(y) \in E(y)$ a.e. $y \in M$, and we denote by Sel$E$ the set
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
\mathrm{Sel}\,E \doteq \{ f : M \to X \mid f \text{ is a measurable selection of } E \}.
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
**Definition 2.3.** Let $U$ be a topological space. A mapping $G : U \to P(X)$ is called upper semicontinuous [weakly upper semicontinuous] at $u \in U$, if
|
| 135 |
+
|
| 136 |
+
(i) $G(u)$ is nonempty, bounded, closed and convex.
|
| 137 |
+
|
| 138 |
+
(ii) For each open subset [open set in the weak topology] $D$ in $X$ satisfying $G(u) \subset D$, there exists a neighborhood $V$ of $u$, such that $G(v) \subset D$, for each $v \in V$.
|
| 139 |
+
|
| 140 |
+
If $G$ is upper semicontinuous [weakly upper semicontinuous] at each $u \in U$, then it
|
| 141 |
+
is called upper semicontinuous [weakly upper semicontinuous] on $U$.
|
| 142 |
+
---PAGE_BREAK---
|
| 143 |
+
|
| 144 |
+
**Definition 2.4.** $F,G: H \times H \rightarrow P(H)$ are said to be bounded if, whenever $B_1, B_2 \subset H$ are bounded, then $F(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} F(u,v)$ and $G(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} G(u,v)$ are bounded in $H$.
|
| 145 |
+
|
| 146 |
+
In order to obtain global solutions we impose the following suitable conditions on terms $F$ and $G$.
|
| 147 |
+
|
| 148 |
+
**Definition 2.5 ([24]).** The pair $(F,G)$ of maps $F, G: H \times H \to P(H)$, which take bounded subsets of $H \times H$ into bounded subsets of $H$, is called positively sublinear if there exist $a > 0, b > 0, c > 0$ and $m_0 > 0$ such that for each $(u,v) \in H \times H$ with $\|u\| > m_0$ or $\|v\| > m_0$ for which either there exists $f_0 \in F(u,v)$ satisfying $\langle u, f_0 \rangle > 0$ or there exists $g_0 \in G(u,v)$ with $\langle v, g_0 \rangle > 0$, both
|
| 149 |
+
|
| 150 |
+
$$ \|f\| \le a\|u\| + b\|v\| + c \quad \text{and} \quad \|g\| \le a\|u\| + b\|v\| + c $$
|
| 151 |
+
|
| 152 |
+
hold for each $f \in F(u,v)$ and each $g \in G(u,v)$.
|
| 153 |
+
|
| 154 |
+
## 2.2. Generalized processes.
|
| 155 |
+
In order to study the asymptotic behavior of the solutions of the system (S) we will work with a multivalued process defined by a generalized process. We will review these concepts which had been considered in [22, 23] and can be used in the study of infinite dimensional dynamical systems.
|
| 156 |
+
|
| 157 |
+
**Definition 2.6.** Let $(X, \rho)$ be a complete metric space. A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ on $X$ is a family of function sets $\mathcal{G}(\tau)$ consisting of maps $\varphi : [\tau, \infty) \to X$, satisfying the conditions:
|
| 158 |
+
|
| 159 |
+
(C1) For each $\tau \in \mathbb{R}$ and $z \in X$ there exists at least one $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$;
|
| 160 |
+
|
| 161 |
+
(C2) If $\varphi \in \mathcal{G}(\tau)$ and $s \ge 0$, then $\varphi^{+s} \in \mathcal{G}(\tau + s)$, where $\varphi^{+s} := \varphi|_{[\tau+s,\infty)}$;
|
| 162 |
+
|
| 163 |
+
(C3) If $\{\varphi_j\}_{j \in \mathbb{N}} \subset \mathcal{G}(\tau)$ and $\varphi_j(\tau) \to z$, then there exists a subsequence $\{\varphi_\mu\}_{\mu \in \mathbb{N}}$ of $\{\varphi_j\}_{j \in \mathbb{N}}$ and $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$ such that $\varphi_\mu(t) \to \varphi(t)$ for each $t \ge \tau$.
|
| 164 |
+
|
| 165 |
+
**Definition 2.7.** A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ which satisfies the condition
|
| 166 |
+
(C4) (Concatenation) If $\varphi, \psi \in \mathcal{G}$ with $\varphi \in \mathcal{G}(\tau)$, $\psi \in \mathcal{G}(r)$ and $\varphi(s) = \psi(s)$ for
|
| 167 |
+
some $s \ge r \ge \tau$, then $\theta \in \mathcal{G}(\tau)$, where $\theta(t) := \begin{cases} \varphi(t), & t \in [\tau, s] \\ \psi(t), & t \in (s, \infty) \end{cases}$,
|
| 168 |
+
is called an exact (or strict) generalized process.
|
| 169 |
+
|
| 170 |
+
A multivalued process $\{U_G(t, \tau)\}_{t \ge \tau}$ defined by a generalized process $\mathcal{G}$ is a family of multivalued operators $U_G(t, \tau) : P(X) \to P(X)$ with $-\infty < \tau \le t < +\infty$, such that for each $\tau \in \mathbb{R}$
|
| 171 |
+
|
| 172 |
+
$$ U_G(t, \tau)E = \{\varphi(t); \varphi \in \mathcal{G}(\tau), \text{ with } \varphi(\tau) \in E\}, t \geq \tau. $$
|
| 173 |
+
|
| 174 |
+
**Theorem 2.8 ([22, 23]).** Let $\mathcal{G}$ be an exact generalized process. If $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is a multivalued process defined by $\mathcal{G}$, then $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is an exact multivalued process on $P(X)$, i.e.,
|
| 175 |
+
|
| 176 |
+
1. $U_{\mathcal{G}}(t, t) = Id_{P(X)}$,
|
| 177 |
+
|
| 178 |
+
2. $U_{\mathcal{G}}(t, \tau) = U_{\mathcal{G}}(t, s)U_{\mathcal{G}}(s, \tau)$ for all $-\infty < \tau \le s \le t < +\infty$.
|
| 179 |
+
|
| 180 |
+
A family of sets $K = \{K(t) \subset X : t \in \mathbb{R}\}$ will be called a nonautonomous set. The family $K$ is closed (compact, bounded) if $K(t)$ is closed (compact, bounded) for all $t \in \mathbb{R}$. The $\omega$-limit set $\omega(t, E)$ consists of the pullback limits of all converging sequences $\{\xi_n\}_{n \in \mathbb{N}}$ where $\xi_n \in U_{\mathcal{G}}(t, s_n)E$, $s_n \to -\infty$. Let $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ be a family of subsets of $X$. We have the following concepts of invariance:
|
| 181 |
+
---PAGE_BREAK---
|
| 182 |
+
|
| 183 |
+
• $\mathcal{A}$ is positively invariant if $U_G(t, \tau)\mathcal{A}(\tau) \subset \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$;
|
| 184 |
+
|
| 185 |
+
• $\mathcal{A}$ is negatively invariant if $\mathcal{A}(t) \subset U_G(t, \tau)\mathcal{A}(\tau)$ for all $-\infty < \tau \le t < \infty$;
|
| 186 |
+
|
| 187 |
+
• $\mathcal{A}$ is invariant if $U_G(t, \tau)\mathcal{A}(\tau) = \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$.
|
| 188 |
+
|
| 189 |
+
**Definition 2.9.** Let $t \in \mathbb{R}$.
|
| 190 |
+
|
| 191 |
+
1. A set $\mathcal{A}(t) \subset X$ pullback attracts a set $B \subset X$ at time $t$ if
$$ \mathrm{dist}(U_{\mathcal{G}}(t, s)B, \mathcal{A}(t)) \to 0 \quad \text{as } s \to -\infty. $$
2. A family $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback attracts bounded sets of $X$ if $\mathcal{A}(\tau) \subset X$ pullback attracts all bounded subsets at $\tau$, for each $\tau \in \mathbb{R}$. In this case, we say that the nonautonomous set $\mathcal{A}$ is pullback attracting.
3. A set $\mathcal{A}(t) \subset X$ pullback absorbs bounded subsets of $X$ at time $t$ if, for each bounded set $B$ in $X$, there exists $T = T(t, B) \le t$ such that $U_G(t, \tau)B \subset \mathcal{A}(t)$ for all $\tau \le T$.
4. A family $\{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback absorbs bounded subsets of $X$ if, for each $t \in \mathbb{R}$, $\mathcal{A}(t)$ pullback absorbs bounded sets at time $t$.
2.3. **Strong solutions.** Consider the following initial value problem:
$$ (P_t) \quad \left\{ \begin{aligned} \frac{du}{dt}(t) + A(t)u(t) &\ni f(t), & t > \tau \\ u(\tau) &= u_0 \end{aligned} \right. $$
where for each $t > \tau$, $A(t)$ is maximal monotone in a Hilbert space $H$, $f \in L^1(\tau, T; H)$ and $u_0 \in H$. Moreover, suppose $\mathcal{D}(A(t)) = \mathcal{D}(A(\tau))$, $\forall t, \tau \in \mathbb{R}$ and $\overline{\mathcal{D}(A(t))} = H$, for all $t \in \mathbb{R}$.
**Definition 2.10.** A function $u : [\tau, T] \to H$ is called a strong solution of $(P_t)$ on $[\tau, T]$ if
(i) $u \in C([\tau, T]; H)$;
(ii) $u$ is absolutely continuous on any compact subset of $(\tau, T)$;
(iii) $u(t)$ is in $D(A(t))$ for a.e. $t \in [\tau, T]$, $u(\tau) = u_0$ and satisfies the inclusion in $(P_t)$ for a.e. $t \in [\tau, T]$.
**Definition 2.11.** A strong solution of (S2) is a pair $(u, v)$ with $u, v \in C([\tau, T]; H)$ for which there exist $f, g \in L^1(\tau, T; H)$ with $f(t) \in F(u(t), v(t))$ and $g(t) \in G(u(t), v(t))$ a.e. in $(\tau, T)$, such that $(u, v)$ is a strong solution (see Definition 2.10) on $(\tau, T)$ of the system $(P_1)$ below:
$$ (P_1) \quad \left\{ \begin{aligned} \frac{du}{dt} + A(t)u &= f \\ \frac{dv}{dt} + B(t)v &= g \\ u(\tau) &= u_0, v(\tau) = v_0 \end{aligned} \right. $$
**Theorem 2.12 ([27]).** Let $A = \{A(t)\}_{t>\tau}$ and $B = \{B(t)\}_{t>\tau}$ be families of univalued operators $A(t) = \partial\varphi^t$, $B(t) = \partial\psi^t$ with $\varphi^t$, $\psi^t$ nonnegative maps satisfying **Assumption A** with $\partial\varphi^t(0) = \partial\psi^t(0) = 0$. Suppose also that each of $A$ and $B$ generates a compact evolution process, and let $F, G: H \times H \to P(H)$ be upper semicontinuous and bounded multivalued maps. Then, given a bounded subset $B_0 \subset H \times H$, there exists $T_0 > 0$ such that for each $(u_0, v_0) \in B_0$ there exists at least one strong solution $(u, v)$ of (S2) defined on $[\tau, T_0]$. If, in addition, the pair $(F, G)$ is positively sublinear, then for any given $T > \tau$ the same conclusion holds with $T_0 = T$.
Let $D(u_{\tau}, v_{\tau})$ be the set of solutions of (S2) with initial data $(u_{\tau}, v_{\tau})$ and define $G(\tau) := \bigcup_{(u_{\tau}, v_{\tau}) \in H \times H} D(u_{\tau}, v_{\tau})$. Consider $\mathbb{G} := \{G(\tau)\}_{\tau \in \mathbb{R}}$.
**Theorem 2.13 ([27]).** Under the conditions of Theorem 2.12, $\mathbb{G}$ is an exact generalized process.
Let $\Omega \subset \mathbb{R}^n$, $n \ge 1$, be a bounded smooth domain and write $H := L^2(\Omega)$ and $Y := W^{1,p(\cdot)}(\Omega)$ with $p^- > 2$. Then $Y \subset H \subset Y^*$ with continuous and dense embeddings. We refer the reader to [7, 8] and the references therein for properties of Lebesgue and Sobolev spaces with variable exponents. In particular, with
$$L^{p(\cdot)}(\Omega) := \{u : \Omega \to \mathbb{R} : u \text{ is measurable, } \int_{\Omega} |u(x)|^{p(x)} dx < \infty\}$$
and $L_+^\infty(\Omega) := \{q \in L^\infty(\Omega) : \text{ess inf } q \ge 1\}$, define
$$\rho(u) := \int_{\Omega} |u(x)|^{p(x)} dx, \quad \|u\|_{L^{p(\cdot)}(\Omega)} := \inf \left\{ \lambda > 0 : \rho\left(\frac{u}{\lambda}\right) \le 1 \right\}$$
for $u \in L^{p(\cdot)}(\Omega)$ and $p \in L_+^\infty(\Omega)$.
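As a concrete numerical illustration (not part of the original text), the Luxemburg norm can be approximated on a discretized domain by bisection, since $\lambda \mapsto \rho(u/\lambda)$ is nonincreasing. The grid, the sample exponent, and the helper names below are all hypothetical:

```python
# Illustrative sketch: discrete Luxemburg norm on a 1-D grid, computed as
# inf{ lam > 0 : rho(u/lam) <= 1 } with rho(v) = sum |v_i|^{p_i} * dx,
# using bisection (the modular rho(u/lam) is nonincreasing in lam).

def modular(u, p, dx):
    """Discrete analogue of rho(u) = integral of |u(x)|^{p(x)} dx."""
    return sum(abs(ui) ** pi for ui, pi in zip(u, p)) * dx

def luxemburg_norm(u, p, dx, tol=1e-10):
    lo, hi = tol, 1.0
    while modular([ui / hi for ui in u], p, dx) > 1.0:
        hi *= 2.0                      # grow hi until rho(u/hi) <= 1
    while hi - lo > tol:               # bisect on the monotone condition
        mid = 0.5 * (lo + hi)
        if modular([ui / mid for ui in u], p, dx) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    # Sanity check: for constant p = 2 the Luxemburg norm is the L^2 norm,
    # so u = 3 on (0,1) should give norm 3.
    n = 1000
    u, p = [3.0] * n, [2.0] * n
    print(round(luxemburg_norm(u, p, 1.0 / n), 6))
```

For constant exponent $p(x) \equiv p$ the condition $\rho(u/\lambda) \le 1$ reduces to $\lambda \ge \|u\|_{L^p}$, which is what the sanity check exploits.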
Consider the operator $A(t)$ defined on $Y$ that associates with each $u \in Y$ the following element $A(t)u \in Y^*$, viewed as a map $A(t)u : Y \to \mathbb{R}$ given by
$$A(t)u(v) := \int_{\Omega} D_1(t,x) |\nabla u(x)|^{p(x)-2} \nabla u(x) \cdot \nabla v(x) dx + \int_{\Omega} |u(x)|^{p(x)-2} u(x)v(x) dx.$$
The authors proved in [13] that:
• For each $t \in [\tau, T]$ the operator $A(t): Y \to Y^*$, with domain $Y = W^{1,p(\cdot)}(\Omega)$, is maximal monotone and $A(t)(Y) = Y^*$.
• The realization of the operator $A(t)$ in $H = L^2(\Omega)$, i.e.,
$$A_H(t)u = -\operatorname{div}(D_1(t,x)|\nabla u|^{p(x)-2}\nabla u) + |u|^{p(x)-2}u,$$
is maximal monotone in $H$ for each $t \in [\tau, T]$.
• The operator $A_H(t)$ is the subdifferential $\partial\varphi_{p(\cdot)}^t$ of the convex, proper and lower semicontinuous map $\varphi_{p(\cdot)}^t: L^2(\Omega) \to \mathbb{R} \cup \{+\infty\}$ given by
$$\varphi_{p(\cdot)}^t(u) = \begin{cases} \left[ \int_{\Omega} \frac{D_1(t,x)}{p(x)} |\nabla u|^{p(x)} dx + \int_{\Omega} \frac{1}{p(x)} |u|^{p(x)} dx \right] & \text{if } u \in Y \\ +\infty, & \text{otherwise.} \end{cases} \quad (1)$$
Using the following elementary assertion, we can estimate the operator by considering only two cases.
**Proposition 1 ([1]).** Let $\lambda, \mu$ be arbitrary nonnegative numbers. For all positive $\alpha, \theta$ with $\alpha \ge \theta$,
$$\lambda^{\alpha} + \mu^{\theta} \geq \frac{1}{2^{\alpha}} \begin{cases} (\lambda + \mu)^{\alpha} & \text{if } \lambda + \mu < 1, \\ (\lambda + \mu)^{\theta} & \text{if } \lambda + \mu \geq 1. \end{cases}$$
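A quick numerical check of this elementary inequality over a grid (illustrative only; `prop1_holds` is a hypothetical helper, not from [1]):

```python
# Grid check of Proposition 1: for lam, mu >= 0 and alpha >= theta > 0,
#   lam**alpha + mu**theta >= (lam+mu)**alpha / 2**alpha  if lam + mu < 1,
#   lam**alpha + mu**theta >= (lam+mu)**theta / 2**alpha  if lam + mu >= 1.
import itertools

def prop1_holds(lam, mu, alpha, theta):
    s = lam + mu
    rhs = (s ** alpha if s < 1 else s ** theta) / 2 ** alpha
    return lam ** alpha + mu ** theta >= rhs - 1e-12  # tolerance for rounding

if __name__ == "__main__":
    grid = [i / 7 for i in range(15)]            # lam, mu ranging over [0, 2]
    exps = [(0.5, 0.5), (1.0, 0.5), (2.0, 1.3), (3.7, 3.7)]
    ok = all(prop1_holds(l, m, a, t)
             for (l, m), (a, t) in itertools.product(
                 itertools.product(grid, grid), exps))
    print("Proposition 1 verified on the grid:", ok)
```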
Then it is easy to show that for every $u \in Y$
$$\langle A(t)u, u\rangle_{Y^*,Y} \geq \frac{\min\{\beta, 1\}}{2^{p^+}} \begin{cases} \|u\|_Y^{p^+} & \text{if } \|u\|_Y < 1, \\ \|u\|_Y^{p^-} & \text{if } \|u\|_Y \geq 1. \end{cases} \quad (2)$$
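The step from Proposition 1 to (2) can be sketched as follows (an outline of the standard modular argument, assuming $Y$ carries the norm $\|u\|_Y = \|\nabla u\|_{L^{p(\cdot)}} + \|u\|_{L^{p(\cdot)}}$; this is not a transcription of the authors' proof):

```latex
\langle A(t)u, u\rangle_{Y^*,Y}
  = \int_\Omega D_1(t,x)\,|\nabla u|^{p(x)}\,dx + \int_\Omega |u|^{p(x)}\,dx
  \;\ge\; \min\{\beta,1\}\,\bigl(\rho(\nabla u) + \rho(u)\bigr),
```

and each modular dominates a power of the corresponding norm: $\rho(v) \ge \|v\|_{L^{p(\cdot)}}^{p^+}$ when $\|v\|_{L^{p(\cdot)}} < 1$ and $\rho(v) \ge \|v\|_{L^{p(\cdot)}}^{p^-}$ when $\|v\|_{L^{p(\cdot)}} \ge 1$. Applying Proposition 1 with $\lambda = \|\nabla u\|_{L^{p(\cdot)}}$, $\mu = \|u\|_{L^{p(\cdot)}}$, $\alpha = p^+$ and $\theta = p^-$, and using $\lambda + \mu = \|u\|_Y$, yields (2).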
From Example 4.4 in the last section of [27] we can apply Theorems 2.12 and 2.13 to $A(t)u = -\operatorname{div}(D_1(t, \cdot)|\nabla u|^{p(\cdot)-2}\nabla u) + |u|^{p(\cdot)-2}u$ and $B(t)v = -\operatorname{div}(D_2(t, \cdot)|\nabla v|^{q(\cdot)-2}\nabla v) + |v|^{q(\cdot)-2}v$, and conclude that system (S) has global solutions and defines an exact generalized process $\mathbb{G}$.
**3. Existence of the pullback attractor.** First, we provide estimates on the solutions in the spaces $H \times H$ and $Y \times Y$.
**Lemma 3.1.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist a positive number $r_0$ and a constant $T_0$ which do not depend on the initial data, such that
$$\|(u_1(t), u_2(t))\|_{H \times H} \le r_0, \quad \forall t \ge T_0 + \tau.$$
*Proof.* Let $\varphi = (u_1, u_2) \in \mathbb{G}$ be a solution of (S). Then there exists a pair $(f,g) \in \text{Sel } F(u_1, u_2) \times \text{Sel } G(u_1, u_2)$ with $f, g \in L^1(\tau, T; H)$ for each $T > \tau$ such that $u_1$, $u_2$ satisfy the problem
$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega, \\
\\
u_1(\tau,x) = u_{1,0}(x), \quad u_2(\tau,x) = u_{2,0}(x) & \text{in } \Omega.
\end{array}
\right.
\qquad (3)
$$
Let $\alpha := 4(|\Omega| + 1)^2$ and $\sigma := \frac{\min\{\beta, 1\}}{2^{\max\{p^+, q^+\}}}$. Multiplying the first equation in (3) by $u_1$ and the second by $u_2$, and using (2), we obtain
$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \leq \begin{cases} -\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_H^{p^+} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_1, \\ -\frac{\sigma}{\alpha^{p^-}} \|u_1(t)\|_H^{p^-} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_2, \end{cases} \quad (4)
$$
where
$I_1 := \{t \in (\tau, T) : \|u_1(t)\|_Y < 1\}, \quad I_2 := \{t \in (\tau, T) : \|u_1(t)\|_Y \ge 1\},$
and
$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \leq
\begin{cases}
-\frac{\sigma}{\alpha^{q^+}} \|u_2(t)\|_H^{q^+} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_1, \\
-\frac{\sigma}{\alpha^{q^-}} \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_2,
\end{cases}
$$
where
$$
\tilde{I}_1 := \{t \in (\tau, T) : \|u_2(t)\|_Y < 1\}, \quad \tilde{I}_2 := \{t \in (\tau, T) : \|u_2(t)\|_Y \ge 1\}.
$$
Now, define $r := \frac{p^+}{p^-} > 1$ and let $r'$ be such that $\frac{1}{r} + \frac{1}{r'} = 1$. Then, by Young's inequality,
$$
-\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^+} \le r \left( -\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^-} + \frac{\sigma}{\alpha^{p^+} r'} \right). \quad (5)
$$
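For completeness, (5) is just Young's inequality with exponents $r$ and $r'$, applied with $a = \|u_1(t)\|_H^{p^-}$ and $b = 1$ (a short verification added here, not in the original):

```latex
\|u_1(t)\|_H^{p^-} = \|u_1(t)\|_H^{p^-}\cdot 1
  \le \frac{1}{r}\,\|u_1(t)\|_H^{p^- r} + \frac{1}{r'}
  = \frac{1}{r}\,\|u_1(t)\|_H^{p^+} + \frac{1}{r'},
```

since $p^- r = p^+$; multiplying through by $r\sigma/\alpha^{p^+}$ and rearranging gives (5).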
Using (5) in (4) we obtain
$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_{H}^{2} \leq -C_{2} \|u_{1}(t)\|_{H}^{p^{-}} + \langle f(t), u_{1}(t) \rangle_{H} + C_{1} \quad \forall t \in I := (\tau, T), \quad (6)
$$
where $C_1 := \frac{L\sigma}{p^{-}\alpha^{p^{-}}}$ and $C_2 := \frac{\min\{1,\beta\}}{(2\alpha)^L}$ with $L := \max\{p^+, q^+\}$.
In an analogous way, taking $\tilde{r} := \frac{q^+}{q^-} > 1$ and $\tilde{r}'$ such that $\frac{1}{\tilde{r}} + \frac{1}{\tilde{r}'} = 1$ we have
$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1, \quad \forall t \in I,
$$
where $\tilde{C}_1 := \frac{L\sigma}{q^{-}\alpha^{q^{-}}}$ and $\tilde{C}_2 = C_2 = \frac{\min\{1,\beta\}}{(2\alpha)^L}$.
We may suppose, without loss of generality, that $p^{-} \ge q^{-}$. If $p^{-} = q^{-}$ we obtain an expression similar to (6) with $q^{-}$ in place of $p^{-}$. If $p^{-} > q^{-}$, taking $\theta := \frac{p^{-}}{q^{-}} > 1$, $\theta'$ such that $\frac{1}{\theta} + \frac{1}{\theta'} = 1$ and $\epsilon > 0$, we have
$$
\|u_1(t)\|_H^{q^-} = \frac{1}{\epsilon}\,\epsilon\,\|u_1(t)\|_H^{q^-} \le \frac{1}{\theta' \epsilon^{\theta'}} + \frac{\epsilon^{\theta}}{\theta} \|u_1(t)\|_H^{p^-}
$$

and then

$$
-C_2 \|u_1(t)\|_H^{p^-} \le \frac{\theta}{\epsilon^{\theta}} \left[ \frac{C_2}{\theta' \epsilon^{\theta'}} - C_2 \|u_1(t)\|_H^{q^-} \right].
$$
Thus we obtain
$$
\left\{
\begin{array}{l}
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \le -\frac{C_2 \theta}{\epsilon^{\theta}} \|u_1(t)\|_H^{q^-} + \langle f(t), u_1(t) \rangle_H + C_1 + \frac{\theta C_2}{\theta' \epsilon^{\theta} \epsilon^{\theta'}} \\
\\
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1
\end{array}
\right.
\quad (7)
$$
We estimate $\langle f(t), u_1(t) \rangle_H$ and $\langle g(t), u_2(t) \rangle_H$ using the assumption that $(F, G)$ is positively sublinear (see Definition 2.5) and Young's inequality. Choosing a suitable, sufficiently small $\epsilon$ we obtain
$$
\begin{align*}
\frac{1}{2} \frac{d}{dt} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2) &\le -C_5 (\|u_1(t)\|_H^{q^-} + \|u_2(t)\|_H^{q^-}) + C_6 \\
&\le -\frac{C_5}{2^{q^-/2}} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2)^{q^-/2} + C_6,
\end{align*}
$$
where $C_5$, $C_6 > 0$ are constants that depend on the numbers $|\Omega|$, $\beta$, $p^-$, $p^+$, $q^-$, $q^+$, $a$, $b$, $c$ and $m_0$.
Hence, the function $y(t) := \|u_1(t)\|_H^2 + \|u_2(t)\|_H^2$ satisfies the inequality
$$
y'(t) \leq -\frac{2C_5}{2^{q^-/2}}\, y(t)^{q^-/2} + 2C_6, \quad t > \tau.
$$
From Lemma 5.1 in [28] we obtain
$$
y(t) \le \left( \frac{2^{q^-/2} C_6}{C_5} \right)^{2/q^-} + \left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right)(t-\tau) \right]^{-1/(q^-/2-1)}.
$$
Let $T_0 > 0$ be such that $\left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right) T_0 \right]^{-1/(q^-/2-1)} \le 1$. Then,
$$
\|u_1(t)\|_{H}^{2} + \|u_{2}(t)\|_{H}^{2} \leq \kappa_{0} := \left(C_{6}2^{q^{-}/2}/C_{5}\right)^{2/q^{-}} + 1 \quad \text{for all } t \geq T_{0} + \tau. \quad \square
$$
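The comparison estimate from Lemma 5.1 of [28] used above can be illustrated numerically (a sketch added here, not part of the paper): integrating $y' = -a\,y^{s} + b$ with $s > 1$ and checking the initial-data-independent bound $y(t) \le (b/a)^{1/s} + [a(s-1)t]^{-1/(s-1)}$, here with the hypothetical choice $a = b = 1$, $s = 2$:

```python
# Numerical sanity check of the comparison estimate (Lemma 5.1 in [28]):
# if y' <= -a*y**s + b with y >= 0 and s > 1, then for all t > 0
#   y(t) <= (b/a)**(1/s) + (a*(s-1)*t)**(-1/(s-1)),
# independently of the initial value y(0).

def bound(t, a=1.0, b=1.0, s=2.0):
    return (b / a) ** (1.0 / s) + (a * (s - 1.0) * t) ** (-1.0 / (s - 1.0))

def simulate(y0, T, a=1.0, b=1.0, s=2.0, dt=1e-4):
    """Forward-Euler integration of y' = -a*y**s + b; returns (t, y) samples."""
    y, t, out = y0, 0.0, []
    for _ in range(int(T / dt)):
        y += dt * (-a * y ** s + b)
        t += dt
        out.append((t, y))
    return out

if __name__ == "__main__":
    for t, y in simulate(y0=10.0, T=3.0):
        if t >= 0.1:  # the bound blows up as t -> 0+
            assert y <= bound(t) + 1e-6, (t, y, bound(t))
    print("comparison bound verified on the sample trajectory")
```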
**Lemma 3.2.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist positive constants $r_1$ and $T_1 > T_0$, which do not depend on the initial data, such that
$$
\|(u_1(t), u_2(t))\|_{Y \times Y} \le r_1, \quad \forall t \ge T_1 + \tau.
$$
*Proof.* Take $T_1 > T_0$. Since $(u_1, u_2)$ is a solution of (S), there exists a pair $(f,g) \in \operatorname{Sel} F(u_1,u_2) \times \operatorname{Sel} G(u_1,u_2)$ with $f, g \in L^1(\tau,T;H)$ such that $u_1$ and $u_2$ satisfy the problem
$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega.
\end{array}
\right.
$$
Consider $\varphi_{p(\cdot)}^t$ as in (1). Using Assumption D (ii),
$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_{1}(t)) \leq \left\langle \partial \varphi_{p(\cdot)}^{t}(u_{1}(t)), \frac{du_{1}}{dt}(t) \right\rangle $$
and then we obtain
$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_1(t)) + \frac{1}{2} \left\| f(t) - \frac{du_1}{dt}(t) \right\|_{H}^{2} \leq \frac{1}{2} \|f(t)\|_{H}^{2}. $$
Now by Lemma 3.1 and the fact that $F$ and $G$ are bounded, there exists a positive constant $C_0$ such that $\|f(t)\|_H \le C_0$ for all $t \ge T_0 + \tau$. Then, by the definition of a subdifferential and the Uniform Gronwall Lemma (see [28]), there exists a positive constant $C_1$ such that $\varphi_{p(\cdot)}^t(u_1(t)) \le C_1$ for all $t \ge T_1 + \tau$. Consequently, there exists a positive constant $K_1$ such that $\|u_1(t)\|_Y \le K_1$ for all $t \ge T_1 + \tau$.
In a similar way, we conclude $\|u_2(t)\|_Y \le K_2$ for all $t \ge T_1 + \tau$ for a positive constant $K_2$. The assertion of the lemma then follows. $\square$
Let $U_G$ be the multivalued process defined by the generalized process $\mathbb{G}$. We know from [23] that for all $t \ge s$ in $\mathbb{R}$ the map $x \mapsto U_G(t,s)x \in P(H \times H)$ is closed, so we obtain from Theorem 18 in [4] the following result.
**Theorem 3.3.** If for any $t \in \mathbb{R}$ there exists a nonempty compact set $D(t)$ which pullback attracts all bounded sets of $H \times H$ at time $t$, then the set $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ with $\mathcal{A}(t) = \bigcup_{B \in \mathcal{B}(H \times H)} \omega(t, B)$, is the unique compact, negatively invariant pullback attracting set which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.
**Theorem 3.4.** The multivalued evolution process $U_G$ associated with system (S) has a compact, negatively invariant pullback attracting set $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.
*Proof.* By Lemma 3.2 we have that the family $D(t) := \overline{B_{Y \times Y}(0, r_1)}^{H \times H}$ of compact sets of $H \times H$ is attracting. The result thus follows from Theorem 3.3. $\square$
**4. Forward attraction.** Pullback attractors contain all of the bounded entire solutions of the nonautonomous dynamical system [11, 12]. Simple counterexamples show that a pullback attractor need not be attracting in the forward sense [11]. However, since the pullback absorbing set $D$ above is also forward absorbing (the absorption time is independent of the initial time $\tau$), the forward omega limit sets $\omega_f(\tau, D)$ of the multivalued process starting at time $\tau$ are nonempty and compact subsets of the compact set $D$. Moreover, it follows from the positive invariance of $D$ and the two-parameter semigroup property that they are increasing in time. The forward limiting dynamics thus tends to the nonempty compact subset $\omega_f^\infty(D) = \bigcup_{\tau \ge 0} \omega_f(\tau, D) \subset D$, which was called the forward attracting set in [16]. (It is related to the Vishik uniform attractor, when that exists, but can be smaller since the attraction here need not be uniform in the initial time.)
As shown in Proposition 8 of [16] (in the context of single-valued difference equations, but a similar proof holds here), the forward attracting set $\omega_f^\infty(D)$ is asymptotically positively invariant with respect to the set-valued process $U_G(t, \tau)$, i.e., for any monotone decreasing sequence $\varepsilon_p \to 0$ as $p \to \infty$ there exists a monotone increasing sequence $T_p \to \infty$ as $p \to \infty$ such that for each $\tau \ge T_p$
$$U_G(t, \tau)\omega_f^\infty(D) \subset B_{\varepsilon_p}(\omega_f^\infty(D)), \quad t \ge \tau,$$
where $B_{\varepsilon_p}(\omega_f^\infty(D)) := \{x \in H \times H : \operatorname{dist}_{H \times H}(x, \omega_f^\infty(D)) < \varepsilon_p\}$.
Simple counterexamples show that the set $\omega_f^\infty(D)$ need not be invariant or even positively invariant, although it may be in special cases depending on the nature of the time-varying terms in the system. For asymptotically autonomous systems, $\omega_f^\infty(D)$ is contained in the global attractor $\mathcal{A}_\infty$ of the multivalued semigroup $G$ associated with the limiting autonomous system.
Moreover, it is possible to compare the global attractor $\mathcal{A}_\infty$ with the limit set $\mathcal{A}(\infty)$ defined by $\mathcal{A}(\infty) := \bigcap_{t \in \mathbb{R}} \overline{\bigcup_{r \ge t} \mathcal{A}(r)}$, which can be characterized by
$$\bigcup_{r_n \nearrow \infty} \{x \in X : \exists x_n \in \mathcal{A}(r_n) \text{ s. t. } x_n \to x\}.$$
This kind of comparison was done in [26] for the multivalued context.
**Theorem 4.1** ([26]). Suppose the pullback attractor $\mathcal{A}$ is forward compact, i.e., $\cup_{r \ge t} \mathcal{A}(r)$ is precompact for each $t \in \mathbb{R}$. Moreover, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9) such that $u(t+\tau) \to v(t)$ in $X$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $X$ as $\tau \to +\infty$. Then $\mathcal{A}_\infty \supset \mathcal{A}(\infty)$.
To obtain the equality $\mathcal{A}_\infty = \mathcal{A}(\infty)$ we need to assume stronger conditions as in the next result.
**Theorem 4.2** ([26]). Under the same assumptions of Theorem 4.1, we have $\mathcal{A}_\infty = \mathcal{A}(\infty)$ if we further assume the following conditions:
(a) $\mathcal{A}(\infty)$ forward attracts $\mathcal{A}_\infty$ by $U_G(\cdot, 0)$, i.e.,
$$\lim_{t \to +\infty} \operatorname{dist}(U_G(t, 0)\mathcal{A}_\infty, \mathcal{A}(\infty)) = 0;$$
(b) $\lim_{t \to +\infty} \sup_{x \in \mathcal{A}_\infty} \operatorname{dist}(G(t)x, U_G(t, 0)x) = 0.$
**5. Asymptotic upper semicontinuity.** In this section we establish the asymptotic upper semicontinuity of the elements of the pullback attractor. Specifically, we prove that the system (S) is asymptotically autonomous.
5.1. **Theoretical results.** In this subsection, motivated by problem (S), we study the asymptotic behavior of an abstract nonautonomous multivalued problem in a Hilbert space $H$ of the form
$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt}(t) + A(t)u_1(t) \in F(u_1(t), u_2(t)), & t > \tau, \\
\\
\displaystyle \frac{du_2}{dt}(t) + B(t)u_2(t) \in G(u_1(t), u_2(t)), & t > \tau, \\
\\
(u_1(\tau), u_2(\tau)) = (\psi_{1,\tau}, \psi_{2,\tau}) =: \psi_{\tau},
\end{array}
\right.
\qquad (8)
$$
compared with that of an autonomous multivalued problem of the form
$$
\left\{
\begin{array}{ll}
\displaystyle \frac{dv_1}{dt}(t) + A_\infty v_1(t) \in F(v_1(t), v_2(t)), & t > 0, \\
\\
\displaystyle \frac{dv_2}{dt}(t) + B_\infty v_2(t) \in G(v_1(t), v_2(t)), & t > 0, \\
\\
(v_1(0), v_2(0)) = (\psi_{1,0}, \psi_{2,0}) =: \psi_0,
\end{array}
\right.
\qquad (9)
$$
where $A(t), B(t), A_\infty$ and $B_\infty$ are univalued operators in $H \times H$ and $F, G: H \times H \to P(H \times H)$ are multivalued maps.
Under appropriate relationships between the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$, the autonomous problem (9) is the asymptotic autonomous version of the nonautonomous problem (8). In particular, we establish the convergence in the Hausdorff semi-distance of the component subsets of the pullback attractor of the nonautonomous problem (8) to the global autonomous attractor of the autonomous problem (9).
Some definitions concerning multivalued semigroups are recalled here; see, for example, [5, 17, 24] for more details.
**Definition 5.1.** Let $X$ be a complete metric space. The map $G : \mathbb{R}^+ \times X \to P(X)$ is called a multivalued semigroup (or *m-semiflow*) if
(1) $G(0, \cdot) = \mathbf{1}$ is the identity map;
(2) $G(t_1 + t_2, x) \subset G(t_1, G(t_2, x))$, for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.
It is called strict (or exact) if $G(t_1 + t_2, x) = G(t_1, G(t_2, x))$, for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.
**Definition 5.2.** Let $G$ be a multivalued semigroup on $X$. The set $A \subset X$ attracts the subset $B$ of $X$ if $\lim_{t \to \infty} \text{dist}_H(G(t, B), A) = 0$. The set $M$ is said to be a global $B$-attractor for $G$ if $M$ attracts any nonempty bounded subset $B \subset X$.
Suppose that the multivalued evolution process $\{U(t, \tau) : t \ge \tau\}$ in $H \times H$ associated with problem (8) has a pullback attractor $\mathcal{A} = \{\mathcal{A}(t) : t \in \mathbb{R}\}$ and that the multivalued semigroup $G : \mathbb{R}^+ \times H \times H \to P(H \times H)$ associated with problem (9) has a global autonomous $B$-attractor $\mathcal{A}_\infty$ in the Hilbert space $H \times H$. The following result will be used later to establish the convergence in the Hausdorff semi-distance of the component subsets $\mathcal{A}(t)$ of the pullback attractor $\mathcal{A}$ to $\mathcal{A}_\infty$ as $t \to \infty$.
**Theorem 5.3.** Suppose that $C := \bigcup_{\tau \ge 0} \mathcal{A}(\tau)$ is a compact subset of $H \times H$. In addition, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9), with initial values $\psi_\tau$ and $\psi_0$, respectively, such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$. Then
$$ \lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0. $$
*Proof.* Suppose that this is not true. Then there would exist an $\epsilon_0 > 0$ and a real sequence $\{\tau_n\}_{n \in \mathbb{N}}$ with $\tau_n \nearrow +\infty$ such that $\text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0$ for all $n \in \mathbb{N}$. Since the sets $\mathcal{A}(\tau_n)$ are compact, there exist $a_n \in \mathcal{A}(\tau_n)$ such that
$$ \text{dist}_{H \times H}(a_n, \mathcal{A}_\infty) = \text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0, \quad (10) $$
for each $n \in \mathbb{N}$. By the attraction property of the multivalued semigroup, we have $\text{dist}_{H \times H}(G(\tau_{n_0}, C), \mathcal{A}_\infty) \le \epsilon_0$ for $n_0 > 0$ large enough. Moreover, by the negative invariance of the pullback attractor there exist $b_n \in \mathcal{A}(\tau_n - \tau_{n_0}) \subset C$ for $n > n_0$ such that $a_n \in U(\tau_n, \tau_n - \tau_{n_0})b_n$ for each $n > n_0$. Since $C$ is compact, there is a convergent subsequence $b_{n'} \to b \in C$. Since $a_{n'} \in U(\tau_{n'}, \tau_{n'} - \tau_{n_0})b_{n'}$ there exists
a solution $u_{n'} = (u_{1n'}, u_{2n'})$ of
$$
\begin{cases}
\frac{du_{1n'}}{dt}(t) + A(t)u_{1n'}(t) \in F(u_{1n'}(t), u_{2n'}(t)) \\
\frac{du_{2n'}}{dt}(t) + B(t)u_{2n'}(t) \in G(u_{1n'}(t), u_{2n'}(t)) \\
u_{n'}(\tau_{n'} - \tau_{n_0}) = b_{n'},
\end{cases}
$$
such that $a_{n'} = u_{n'}(\tau_{n'})$.
Writing $\tau_{n'} = \tau_{n_0} + (\tau_{n'} - \tau_{n_0})$ and using the hypotheses with $t = \tau_{n_0}$ and $\tau = \tau_{n'} - \tau_{n_0} \to +\infty$ (as $n' \to +\infty$), there exists a solution $v_{n'}$ of
$$
\left\{
\begin{array}{l}
\displaystyle \frac{dv_{1n'}}{dt}(t) + A_\infty v_{1n'}(t) \in F(v_{1n'}(t), v_{2n'}(t)) \\
\displaystyle \frac{dv_{2n'}}{dt}(t) + B_\infty v_{2n'}(t) \in G(v_{1n'}(t), v_{2n'}(t)) \\
v_{n'}(0) = b,
\end{array}
\right.
$$
such that
$$
\| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} < \epsilon_0
$$
for $n'$ large enough. Hence,
$$
\begin{align*}
\mathrm{dist}_{H \times H} (a_{n'}, \mathcal{A}_{\infty}) &= \mathrm{dist}_{H \times H} (u_{n'}(\tau_{n'}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (v_{n'}(\tau_{n_0}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (G(\tau_{n_0}, C), \mathcal{A}_{\infty}) \\
&\leq 2\epsilon_0,
\end{align*}
$$
which contradicts (10). □
The next result is very useful for verifying that the hypothesis of asymptotic continuity of the nonautonomous flow in the preceding theorem holds for problems like (8). To obtain the result we suppose that the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$ satisfy the following assumption.
**Assumption G.** For each $\tau \in \mathbb{R}$ there exist nonincreasing functions $g_{1,\tau}, g_{2,\tau} : [0,+\infty) \to [0,+\infty)$ such that $g_{i,\tau}(t) \to 0$ as $\tau \to +\infty$ for each $t \ge 0$, $i=1,2$, and

$$\langle A(t+\tau)u_1(t+\tau) - A_\infty v_1(t), u_1(t+\tau) - v_1(t) \rangle \ge -g_{1,\tau}(t), \quad \text{for all } t \in \mathbb{R}^+, \ \tau \in \mathbb{R},$$

and

$$\langle B(t+\tau)u_2(t+\tau) - B_\infty v_2(t), u_2(t+\tau) - v_2(t) \rangle \ge -g_{2,\tau}(t), \quad \text{for all } t \in \mathbb{R}^+, \ \tau \in \mathbb{R},$$

for any solution $u = (u_1,u_2)$ of (8) and $v = (v_1,v_2)$ of (9).
**Lemma 5.4.** Suppose that Assumption G is satisfied. If $\psi_\tau = (\psi_{1,\tau}, \psi_{2,\tau}) \to \psi_0 = (\psi_{1,0}, \psi_{2,0})$ in $H \times H$ as $\tau \to +\infty$, then for each solution $u$ of (8) there exists a solution $v$ of (9) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$.
*Proof.* Let $u$ be a solution of (8). Then there exists $f = (f_1, f_2)$ with $f_1, f_2 \in L^2([\tau, T]; H)$ such that $f_1(t) \in F(u_1(t), u_2(t))$ and $f_2(t) \in G(u_1(t), u_2(t))$ a.e., and
$$
\left\{
\begin{array}{ll}
\dfrac{du_1}{dt}(t) + A(t)u_1(t) = f_1(t), & \text{a.e. in } (\tau, T], \\
\dfrac{du_2}{dt}(t) + B(t)u_2(t) = f_2(t), & \text{a.e. in } (\tau, T], \\
u(\tau) = \psi_{\tau}. &
\end{array}
\right.
\qquad (11)
$$
Consider $g \in L^2([0, T]; H \times H)$ such that $g(t) = f(t+\tau)$ and let $v$ be the unique solution of the problem
$$
\left\{
\begin{array}{ll}
\dfrac{dv_1}{dt}(t) + A_\infty v_1(t) = g_1(t), & \text{a.e. in } (0, T], \\
\dfrac{dv_2}{dt}(t) + B_\infty v_2(t) = g_2(t), & \text{a.e. in } (0, T], \\
v(0) = \psi_0. &
\end{array}
\right.
\qquad (12)
$$
Subtracting the equations in (11) from the equations in (12) gives
$$ \frac{d}{dt}(u_1(t+\tau) - v_1(t)) + A(t+\tau)u_1(t+\tau) - A_{\infty}v_1(t) = f_1(t+\tau) - g_1(t) $$
|
| 619 |
+
|
| 620 |
+
and
|
| 621 |
+
|
| 622 |
+
$$ \frac{d}{dt}(u_2(t+\tau) - v_2(t)) + B(t+\tau)u_2(t+\tau) - B_{\infty}v_2(t) = f_2(t+\tau) - g_2(t) $$
|
| 623 |
+
|
| 624 |
+
for a.e. $t \in [0, T]$. Taking the inner product with $u_i(t+\tau) - v_i(t)$ and using Assumption G, we obtain

$$ \frac{1}{2} \frac{d}{dt} \|u_i(t+\tau) - v_i(t)\|_H^2 \leq g_{i,\tau}(t), \quad i=1,2. $$

Integrating this last inequality from $0$ to $t$ gives

$$ \|u_i(t+\tau) - v_i(t)\|_H^2 \leq \| \psi_{i,\tau} - \psi_{i,0} \|_H^2 + 2tg_{i,\tau}(0). $$

Since $\psi_{i,\tau} \to \psi_{i,0}$ in $H$ and $g_{i,\tau}(0) \to 0$ as $\tau \to +\infty$, the result follows. $\square$
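
For the record, the integration step can be displayed explicitly. Writing $w_i(t) := u_i(t+\tau) - v_i(t)$, and assuming (as the stated bound requires) that $g_{i,\tau}$ is nonincreasing on $[0,t]$, integrating the differential inequality yields

$$ \|w_i(t)\|_H^2 \le \|w_i(0)\|_H^2 + 2\int_0^t g_{i,\tau}(s)\,ds \le \|\psi_{i,\tau} - \psi_{i,0}\|_H^2 + 2t\,g_{i,\tau}(0). $$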
5.2. **Application to system (S).** The results in Subsection 5.1 are applied here to the nonlinear system of inclusions with spatially variable exponents (S) in the Hilbert space $\tilde{H} = H \times H$, with $H := L^2(\Omega)$.

We assume that the diffusion coefficients satisfy Assumption D and the additional Assumption D (iii) that follows:

**Assumption D (iii).** For each $t \ge 0$, $D_i(t+\tau, \cdot) \to D_i^*(\cdot)$ in $L^\infty(\Omega)$ as $\tau \to +\infty$, for $i=1,2$.

Assumptions D (i) and D (ii) imply that the pointwise limit $D_i^*(x)$ as $t \to \infty$ exists and satisfies $0 < \beta \le D_i^*(x)$ for almost all $x \in \Omega$, $i=1,2$. Then the problem (S) with $D^*(x) = (D_1^*(x), D_2^*(x))$ is autonomous and has a global autonomous B-attractor as a particular case of the results in Section 3 (see also a direct proof in [25] for the autonomous system of inclusions without the nonlinear perturbation $|u|^{p(\cdot)-2}u$).

We will show that the dynamics of the original nonautonomous problem is asymptotically autonomous and that its pullback attractor converges upper semicontinuously

---PAGE_BREAK---

to the autonomous global B-attractor $\mathcal{A}_\infty$ of the problem

$$
\left\{
\begin{array}{l}
\displaystyle \frac{\partial v_1}{\partial t}(t) - \operatorname{div} (D_1^* |\nabla v_1(t)|^{p(x)-2} \nabla v_1(t)) + |v_1(t)|^{p(x)-2} v_1(t) \in F(v_1(t), v_2(t)), \\[6pt]
\displaystyle \frac{\partial v_2}{\partial t}(t) - \operatorname{div} (D_2^* |\nabla v_2(t)|^{q(x)-2} \nabla v_2(t)) + |v_2(t)|^{q(x)-2} v_2(t) \in G(v_1(t), v_2(t)), \\[6pt]
v(0) = \psi_0.
\end{array}
\right.
\tag{13}
$$

In particular, we consider the operators

$$
\begin{align*}
A(t)u_1 &:= -\operatorname{div} (D_1(t)|\nabla u_1|^{p(x)-2}\nabla u_1) + |u_1|^{p(x)-2}u_1, \\
B(t)u_2 &:= -\operatorname{div} (D_2(t)|\nabla u_2|^{q(x)-2}\nabla u_2) + |u_2|^{q(x)-2}u_2, \\
A_\infty v_1 &:= -\operatorname{div} (D_1^*|\nabla v_1|^{p(x)-2}\nabla v_1) + |v_1|^{p(x)-2}v_1, \\
B_\infty v_2 &:= -\operatorname{div} (D_2^*|\nabla v_2|^{q(x)-2}\nabla v_2) + |v_2|^{q(x)-2}v_2.
\end{align*}
$$

By Lemma 3.1, there exist positive constants $T_0$ and $B_0$ such that

$$
\|u(t)\|_{H \times H} \le B_0, \quad \forall t \ge T_0 + \tau.
$$

Moreover, applying Lemma 3.2 with $Y = W^{1,p(x)}(\Omega)$, there exist positive constants $T_1$ and $B_1$ such that

$$
\|u(t)\|_{Y \times Y} \le B_1, \quad \forall t \ge T_1 + \tau. \tag{14}
$$

Since also $\|v(t)\|_{Y \times Y} \le B_1$ for all $t \ge T_1 + \tau$ and $Y \subset H$ with compact embedding, it follows:

**Corollary 1.** $\overline{\cup_{\tau \in \mathbb{R}} \mathcal{A}(\tau)}$ is a compact subset of $H \times H$.

Using estimate (14), the proof of the next result follows the same lines as the proof of Theorem 4.2 of [14], and is therefore omitted here.

**Theorem 5.5.** If $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$, then Assumption G is satisfied with $g_{i,\tau}(t) = K \|D_i(t+\tau, \cdot) - D_i^*(\cdot)\|_{L^\infty(\Omega)}$ $(i=1,2)$, where $K$ is a positive constant.

Observe that by Assumption D (iii) the function $g_{i,\tau}: [0, +\infty) \to [0, +\infty)$ given in Theorem 5.5 satisfies $g_{i,\tau}(t) \to 0$ as $\tau \to +\infty$ for each $t \ge 0$. The next result gives the desired asymptotic upper semicontinuous convergence.

**Theorem 5.6.** $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$.

*Proof.* Suppose that $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$. Using the negative invariance of the pullback attractor and the estimate (14), it follows that $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$. Theorem 5.5 then guarantees that Assumption G is satisfied. Thus, by Lemma 5.4, for each solution $u = (u_1, u_2)$ of (S) there exists a solution $v = (v_1, v_2)$ of (13) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$. Theorem 5.3 then yields $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$. $\square$

REFERENCES

[1] C. O. Alves, S. Shmarev, J. Simsen and M. S. Simsen, The Cauchy problem for a class of parabolic equations in weighted variable Sobolev spaces: existence and asymptotic behavior, *J. Math. Anal. Appl.*, **443** (2016), 265–294.

---PAGE_BREAK---

[2] J. P. Aubin and A. Cellina, *Differential Inclusions: Set-Valued Maps and Viability Theory*, Springer-Verlag, Berlin, 1984.

[3] J. P. Aubin and H. Frankowska, *Set-Valued Analysis*, Birkhäuser, Berlin, 1990.

[4] T. Caraballo, J. A. Langa, V. S. Melnik and J. Valero, Pullback attractors for nonautonomous and stochastic multivalued dynamical systems, *Set-Valued Analysis*, **11** (2003), 153–201.

[5] T. Caraballo, P. Marín-Rubio and J. C. Robinson, A comparison between two theories for multivalued semiflows and their asymptotic behaviour, *Set-Valued Analysis*, **11** (2003), 297–322.

[6] J. I. Díaz and I. I. Vrabie, Existence for reaction diffusion systems. A compactness method approach, *J. Math. Anal. Appl.*, **188** (1994), 521–540.

[7] L. Diening, P. Harjulehto, P. Hästö and M. Růžička, *Lebesgue and Sobolev Spaces with Variable Exponents*, Springer-Verlag, Berlin, Heidelberg, 2011.

[8] X. L. Fan and Q. H. Zhang, Existence of solutions for $p(x)$-Laplacian Dirichlet problems, *Nonlinear Anal.*, **52** (2003), 1843–1852.

[9] P. Harjulehto, P. Hästö, U. Lê and M. Nuortio, Overview of differential equations with non-standard growth, *Nonlinear Analysis*, **72** (2010), 4551–4574.

[10] P. E. Kloeden and T. Lorenz, Construction of nonautonomous forward attractors, *Proc. Amer. Math. Soc.*, **144** (2016), 259–268.

[11] P. E. Kloeden and P. Marín-Rubio, Negatively invariant sets and entire trajectories of set-valued dynamical systems, *Set-Valued and Variational Analysis*, **19** (2011), 43–57.

[12] P. E. Kloeden and M. Rasmussen, *Nonautonomous Dynamical Systems*, Amer. Math. Soc., Providence, 2011.

[13] P. E. Kloeden and J. Simsen, Pullback attractors for non-autonomous evolution equations with spatially variable exponents, *Commun. Pure Appl. Anal.*, **13** (2014), 2543–2557.

[14] P. E. Kloeden and J. Simsen, Attractors of asymptotically autonomous quasilinear parabolic equation with spatially variable exponents, *J. Math. Anal. Appl.*, **425** (2015), 911–918.

[15] P. E. Kloeden, J. Simsen and M. S. Simsen, A pullback attractor for an asymptotically autonomous multivalued Cauchy problem with spatially variable exponent, *J. Math. Anal. Appl.*, **445** (2017), 513–531.

[16] P. E. Kloeden and Meihua Yang, Forward attraction in nonautonomous difference equations, *J. Difference Equ. Appl.*, **22** (2016), 513–525.

[17] V. S. Melnik and J. Valero, On attractors of multivalued semi-flows and differential inclusions, *Set-Valued Anal.*, **6** (1998), 83–111.

[18] C. V. Pao, On nonlinear reaction-diffusion systems, *J. Math. Anal. Appl.*, **87** (1982), 165–198.

[19] K. Rajagopal and M. Růžička, Mathematical modelling of electrorheological fluids, *Contin. Mech. Thermodyn.*, **13** (2001), 59–78.

[20] M. Růžička, Flow of shear dependent electrorheological fluids, *C. R. Acad. Sci. Paris, Série I*, **329** (1999), 393–398.

[21] M. Růžička, *Electrorheological Fluids: Modeling and Mathematical Theory*, Lecture Notes in Mathematics, vol. 1748, Springer-Verlag, Berlin, 2000.

[22] J. Simsen and J. Valero, Characterization of pullback attractors for multivalued nonautonomous dynamical systems, in *Advances in Dynamical Systems and Control*, 179–195, Stud. Syst. Decis. Control, vol. 69, Springer, Cham, 2016.

[23] J. Simsen and E. Capelato, Some properties for exact generalized processes, in *Continuous and Distributed Systems II*, 209–219, Stud. Syst. Decis. Control, vol. 30, Springer International Publishing, 2015.

[24] J. Simsen and C. B. Gentile, On p-Laplacian differential inclusions: global existence, compactness properties and asymptotic behavior, *Nonlinear Analysis*, **71** (2009), 3488–3500.

[25] J. Simsen and M. S. Simsen, Existence and upper semicontinuity of global attractors for $p(x)$-Laplacian systems, *J. Math. Anal. Appl.*, **388** (2012), 23–38.

[26] J. Simsen and M. S. Simsen, On asymptotically autonomous dynamics for multivalued evolution problems, *Discrete Contin. Dyn. Syst. Ser. B*, **24** (2019), no. 8, 3557–3567.

[27] J. Simsen and P. Wittbold, Compactness results with applications for nonautonomous coupled inclusions, *J. Math. Anal. Appl.*, **479** (2019), 426–449.

[28] R. Temam, *Infinite-Dimensional Dynamical Systems in Mechanics and Physics*, Springer-Verlag, New York, 1988.

[29] I. I. Vrabie, *Compactness Methods for Nonlinear Evolutions*, Second Edition, Pitman Monographs and Surveys in Pure and Applied Mathematics, New York, 1995.

---PAGE_BREAK---

Received March 2019; revised June 2019.

*E-mail address:* kloeden@na-uni.tuebingen.de

*E-mail address:* jacson@unifei.edu.br

*E-mail address:* petra.wittbold@uni-due.de

samples/texts_merged/4409661.md
ADDED

samples/texts_merged/450057.md
ADDED

samples/texts_merged/4808858.md
ADDED

---PAGE_BREAK---

**Problem 1** In an LC circuit with $C = 4.00$ μF, the maximum potential difference across the capacitor is 1.50 V and the maximum current through the inductor is 50 mA.

(a) What is the inductance $L$?

(b) What is the frequency of oscillations?

(c) How long does it take for the charge to rise from 0 to its maximum value?
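
A numerical check of Problem 1 (an illustrative sketch, not part of the original problem set) uses energy conservation, $\frac{1}{2}CV_{\max}^2 = \frac{1}{2}LI_{\max}^2$, and $f = 1/(2\pi\sqrt{LC})$:

```python
import math

C = 4.00e-6   # capacitance, F
V = 1.50      # maximum capacitor voltage, V
I = 50e-3     # maximum inductor current, A

# (a) Energy conservation: (1/2) C V^2 = (1/2) L I^2  =>  L = C V^2 / I^2
L = C * V**2 / I**2

# (b) Natural frequency of the LC oscillation
f = 1 / (2 * math.pi * math.sqrt(L * C))

# (c) Charge goes from 0 to its maximum in a quarter period
t_quarter = 1 / (4 * f)

print(f"L = {L*1e3:.2f} mH")          # 3.60 mH
print(f"f = {f:.0f} Hz")              # 1326 Hz
print(f"t = {t_quarter*1e6:.0f} us")  # 188 us
```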

**Problem 4** A circuit is composed of two metal rails 8 cm apart, a resistor with $R = 1\ \Omega$ connecting them, and a rod at the other end which moves at a speed of 0.45 m/s. A uniform magnetic field $B = 0.1$ T points perpendicular to the plane of the circuit.

(a) Find the induced emf in the circuit.

(b) Find the current in the circuit.

(c) If the rod moved in the opposite direction, how would your answers change?

**Problem 5** While upgrading the electronics in your car stereo, you calculate that you need to construct an LC circuit that oscillates at 20 Hz. If you have a 40 mH inductor, what capacitor do you need to buy from Radio Shack?

**Problem 6** You have an LC circuit that includes a small, unavoidable resistance from the wires. The inductor is 1.5 mH and the capacitor is 3 mF. The capacitor is initially charged to 30 μC. After 100 oscillations, the maximum charge on the capacitor is only 5 μC.

(a) What is the resistance of the circuit?

(b) How much energy has been lost?

(c) Where did this energy go?
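
The damped parts of Problem 6 can be checked the same way; the sketch below assumes light damping, so the charge envelope is $Q_0 e^{-Rt/2L}$ and the period is approximately $2\pi\sqrt{LC}$:

```python
import math

L = 1.5e-3    # inductance, H
C = 3e-3      # capacitance, F
Q0 = 30e-6    # initial charge, C
Qf = 5e-6     # maximum charge after 100 oscillations, C
n = 100

# Light damping: period ~ that of the undamped LC circuit
T = 2 * math.pi * math.sqrt(L * C)
t = n * T

# (a) Envelope decay Qf = Q0 * exp(-R t / (2 L))  =>  solve for R
R = 2 * L * math.log(Q0 / Qf) / t

# (b) Energy lost = difference of the peak capacitor energies Q^2 / (2C)
dE = (Q0**2 - Qf**2) / (2 * C)

print(f"R  = {R*1e3:.2f} mOhm")  # ~4.03 mOhm
print(f"dE = {dE*1e9:.0f} nJ")   # ~146 nJ
```

For part (c), this lost energy is dissipated as heat in the wire resistance.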

samples/texts_merged/4872902.md
ADDED

---PAGE_BREAK---

# Computation of Time-Domain Frequency Stability and Jitter from PM Noise Measurements*

W. F. Walls and F. L. Walls

Femtosecond Systems Inc., 4894 Van Gordon St. Suite 301N, Wheat Ridge, CO 80033, USA

National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80303, USA

**Abstract**

This paper explores the effect of phase modulation (PM), amplitude modulation (AM), and thermal noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources.

**1. Introduction**

In this paper we review the basic definitions generally used to describe phase modulation (PM) noise, amplitude modulation (AM) noise, fractional frequency stability, timing jitter and phase jitter in precision sources. From these basic definitions we can then compute the effect of frequency multiplication or division on these measures of performance. We find that under ideal frequency multiplication or division by a factor N, the PM noise and phase jitter of a source is intrinsically changed by a factor of N². The fractional frequency stability and timing jitter are, however, unchanged as long as we can determine the average zero crossings. After a sufficiently large N, the carrier power density is less than the PM noise power. This condition is often referred to as carrier collapse. Ideal frequency translation results in the addition of the PM noise of the two sources. The effect of AM noise on the multiplied or translated signals can be increased or decreased depending on the component non-linearity. Noise added to a precision signal results in equal amounts of PM and AM noise. The upper and lower PM (or AM) sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM) originate from random or coherent processes [1].

## 2. Basic Definitions

## 2.1 Descriptions of the Voltage Wave Form

The output of a precision source can be written as

$$
V(t) = [V_o + \varepsilon(t)]\cos[2\pi \nu_o t + \phi(t)], \quad (1)
$$

* Work of the US Government not subject to US copyright.

† Presently at Total Frequency, Boulder, CO 80303.

---PAGE_BREAK---

where $\nu_o$ is the average frequency and $V_o$ is the average amplitude. Phase/frequency variations are included in $\phi(t)$ and the amplitude variations are included in $\varepsilon(t)$ [2]. The instantaneous frequency is given by

$$ \nu = \nu_o + \frac{1}{2\pi} \frac{d}{dt} \phi(t) \quad (2a) $$

The instantaneous fractional frequency deviation is given by

$$ y(t) = \frac{1}{2\pi \nu_o} \frac{d}{dt} \phi(t) \quad (2b) $$

The power spectral density (PSD) of phase fluctuations $S_\phi(f)$ is the mean squared phase fluctuation $\delta\phi(f)$ at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. This includes the contributions of both the upper and lower sidebands, which are exactly equal in amplitude and 100% correlated [1]. Thus, experimentally,

$$ S_{\phi}(f) = \frac{[\delta\phi(f)]^2}{BW} \quad \text{radians}^2/\text{Hz}, \quad (3) $$

where $BW$ is the measurement bandwidth in Hz. Since $BW$ is small compared to $f$, $S_\phi(f)$ appears locally to be white and obeys Gaussian statistics. The fractional 1-sigma confidence interval is $1 \pm 1/\sqrt{N}$ [3].

Often the PM noise is specified as the single-sideband noise $\ell(f)$, which is defined as $\frac{1}{2} S_\phi(f)$. The units are generally given in dBc/Hz, which is shorthand for dB below the carrier in a 1 Hz bandwidth:

$$ \ell(f) = 10 \log \left[ \frac{1}{2} S_{\phi}(f) \right] \quad \text{dBc/Hz}. \quad (4) $$

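
The conversion in Eq. (4) and its inverse are one-liners; the sketch below uses an illustrative noise level:

```python
import math

def l_dbc_per_hz(S_phi):
    """Single-sideband PM noise l(f) in dBc/Hz from S_phi(f) in rad^2/Hz (Eq. 4)."""
    return 10 * math.log10(S_phi / 2)

def s_phi_from_l(l_dbc):
    """Invert Eq. (4): S_phi(f) in rad^2/Hz from l(f) in dBc/Hz."""
    return 2 * 10 ** (l_dbc / 10)

# Example: S_phi = 2e-12 rad^2/Hz corresponds to l(f) = -120 dBc/Hz
print(l_dbc_per_hz(2e-12))   # -120.0
print(s_phi_from_l(-120.0))  # ~2e-12
```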
Frequency modulation (FM) noise is often specified as $S_y(f)$, the PSD of fractional frequency fluctuations. $S_y(f)$ is related to $S_\phi(f)$ by

$$ S_y(f) = \frac{f^2}{\nu_o^2} S_\phi(f) \quad 1/\text{Hz}. \quad (5) $$

In the laser literature one often sees the frequency noise expressed as the PSD of frequency fluctuations $S_{\dot\phi}(f)$, which is related to $S_y(f)$ as

$$ S_{\dot\phi}(f) = \nu_o^2 S_y(f) = f^2 S_\phi(f) \quad \text{Hz}^2/\text{Hz}. \quad (6) $$

The amplitude modulation (AM) noise $S_a(f)$ is the mean squared fractional amplitude fluctuation at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. Thus, experimentally,

$$ S_a(f) = \left( \frac{\delta\varepsilon(f)}{V_o} \right)^2 \frac{1}{BW} \quad 1/\text{Hz}, \quad (7) $$

where $BW$ is the measurement bandwidth in Hz.

---PAGE_BREAK---

The rf power spectrum for small PM and AM noise is approximately given by

$$ V^2(f) \cong V_o^2 [e^{-\phi_c^2} + S_\phi(f) + S_a(f)], \quad (8) $$

where $e^{-\phi_c^2}$ is the approximate power in the carrier at Fourier frequencies from 0 to $f_c$, and $\phi_c^2$ is the mean squared phase fluctuation due to the PM noise at frequencies larger than $f_c$ [4]. $\phi_c^2$ is calculated from

$$ \phi_c^2 = \int_{f_c}^{\infty} S_{\phi}(f) df. \quad (9) $$

The half-power bandwidth of the signal, $2 f_c$, can be found by setting $\phi_c^2 = 0.7$. The difference between the half-power and the 3 dB bandwidth depends on the shape of $S_\phi(f)$ [4].

## 2.2 Frequency Stability in the Time Domain

The frequency of even a precision source is often not stationary in time, so traditional statistical methods to characterize it diverge with an increasing number of samples [2]. Special statistics have been developed to handle this problem. The most common is the two-sample or Allan variance (AVAR), which is based on analyzing the fluctuations of adjacent samples of fractional frequency averaged over a period $\tau$. The square root of the Allan variance, $\sigma_y(\tau)$, often called ADEV, is defined as

$$ \sigma_y(\tau) = \left\langle \frac{1}{2} \left[ \bar{y}(t+\tau) - \bar{y}(t) \right]^2 \right\rangle^{1/2} \quad (10) $$

$\sigma_y(\tau)$ can be estimated from a finite set of $M$ contiguous frequency averages $y_i$, each of length $\tau$, from

$$ \sigma_y(\tau) = \left[ \frac{1}{2(M-1)} \sum_{i=1}^{M-1} (y_{i+1} - y_i)^2 \right]^{1/2} \quad (11) $$

This assumes that there is no dead time between samples [2]. If there is dead time, the results are biased depending on the amount of dead time and the type of PM noise; see [2] for details.

$\sigma_y(\tau)$ can also be calculated from $S_\phi(f)$ using

$$ \sigma_y(\tau) = \frac{\sqrt{2}}{\pi \nu_o \tau} \left[ \int_0^\infty H_o(f) S_\phi(f) \sin^4(\pi f \tau) df \right]^{1/2} \quad (12) $$

where $H_o(f)$ is the transfer function of the system used for measuring $\sigma_y(\tau)$ or $\delta t$ below [2]. $H_o(f)$ must

---PAGE_BREAK---

Figure 1. Placement of the $y_i$ used in the computation of $\sigma_y(\tau)$ and $\delta t = \tau \sigma_y(\tau)$.

have a low-pass characteristic for $\sigma_y(\tau)$ to converge in the presence of white PM or flicker PM noise. In practice the measurement system always has a finite bandwidth, but if this bandwidth is not controlled or known, the results for $\sigma_y(\tau)$ will have little meaning [2]. See Table 1. If $H_o(f)$ has a low-pass characteristic with a very sharp roll-off at a maximum frequency $f_h$, it can be replaced by 1 and the integration terminated at $f_h$. Practical examples usually require the exact shape of $H_o(f)$. Programs exist that numerically compute $\sigma_y(\tau)$ for an arbitrary combination of the 5 common noise types [5]. Most sources contain at least three of them plus long-term drift or aging.

## 2.3 Effects of Frequency Multiplication, Division, and Translation

Frequency multiplication by a factor $N$ is the same as phase amplification by a factor $N$: for example, $2\pi$ radians is amplified to $2\pi N$ radians. Since PM noise is the mean squared phase fluctuation, the PM noise must increase by $N^2$. Thus

$$ S_{\phi}(N\nu_o, f) = N^2 S_{\phi}(\nu_o, f) + \text{Multiplication PM}, \quad (13) $$

where Multiplication PM is the noise added by the multiplication process.

We see from Eqs. (8), (9) and (13) that the power in the carrier decreases exponentially as $e^{-N^2 \phi_c^2}$. After a sufficiently large multiplication factor $N$, the carrier power density is less than the PM noise power. This is often referred to as carrier collapse [4]. Ideal frequency translation results in the addition of the PM noise of the two sources [2]. The half-power bandwidth of the signal also changes with frequency multiplication.

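
In dB terms, Eq. (13) with a noiseless multiplier means the PM noise degrades by $20\log_{10} N$; a quick sketch (the 10 MHz example values are illustrative):

```python
import math

def multiplied_pm_dbc(l_in_dbc, N):
    """Ideal multiplication by N: S_phi scales by N^2, i.e. +20*log10(N) in dB
    (Eq. 13, neglecting the multiplier's own added noise)."""
    return l_in_dbc + 20 * math.log10(N)

# A 10 MHz source at -150 dBc/Hz multiplied to 10 GHz (N = 1000)
print(multiplied_pm_dbc(-150.0, 1000))   # -90.0
```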
Frequency division can be considered as frequency multiplication by a factor $1/N$; the effect is to reduce the PM noise by a factor $1/N^2$. The only difference is that aliasing of the broadband PM noise at the input can significantly increase the output PM noise above that calculated for a perfect divider [6]. This effect can be avoided by using a narrowband filter at the input or intermediate stages. Ideal frequency multiplication or division does not change $\sigma_y(\tau)$.

---PAGE_BREAK---

Frequency translation adds the PM noise of the input signal $\nu_1$ and the reference signal $\nu_o$ to the PM noise of the nonlinear device providing the translation:

$$ S_{\phi}(\nu_2, f) = S_{\phi}(\nu_o, f) + S_{\phi}(\nu_1, f) + \text{Translation PM}. \quad (14) $$

Thus dividing a high frequency signal, rather than mixing two high frequency signals, generally produces a low frequency reference signal with less residual noise.

## 3. Effect of Multiplicative Noise

Multiplicative noise is noise modulation power that remains proportional to the signal level. For example, consider the case where the gain is modulated by some process with an index $\beta$ as

$$ \text{Gain} = G_o(1+\beta\cos\Omega t) \quad (15) $$

If we assume an input signal given by

$$ V_{in} = V_o \cos[2\pi \nu_o t + \phi(t)] \quad (16) $$

then the output voltage will have the form

$$ V_{out} = V_o G_o (1 + \beta \cos\Omega t) \cos[2\pi \nu_o t + \phi(t)] \quad (17) $$

The amplitude fluctuation is seen to be proportional to the input signal. Using Eqs. (1) and (7) we can compute the AM noise to be

$$ \frac{1}{2} S_a(f) = \frac{\beta^2}{2} \quad (18) $$

Similarly, if the phase is modulated as

$$ \phi(t) = \beta \cos\Omega t \quad (19) $$

the output voltage will be of the form

$$ V_{out} = V_o \cos[2\pi \nu_o t + \beta \cos\Omega t] \quad (20) $$

The phase fluctuation is proportional to the input signal, and the PM noise is calculated using Eqs. (1) and (3) to be

$$ \frac{1}{2} S_{\phi}(f) = \frac{\beta^2}{4} \quad (21) $$

---PAGE_BREAK---

## 4. Effect of Additive Noise

The addition of a noise signal $V_n(t)$ to the signal $V_o(t)$ yields a total signal

$$ V(t) = V_o(t) + V_n(t) \quad (22) $$

Since the noise term $V_n(t)$ is uncorrelated with $V_o(t)$, half the noise power contributes to AM noise and half to PM noise:

$$ V_{AM}(t) = \frac{V_n(t)}{\sqrt{2}}, \qquad V_{PM}(t) = \frac{V_n(t)}{\sqrt{2}} \quad (23) $$

$$ \ell(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW} \quad (24) $$

where $BW$ is the bandwidth in Hz. We see that the AM and PM noise are proportional to the inverse of the signal power. These results can be applied to amplifier and detection circuits as follows. The input noise power to the amplifier is $kT\,BW$. The gain of the amplifier from a matched source into a matched load is $G_o$. The noise power delivered to the load is $kT\,BW\,G_o F$, where $F$ is the noise figure, and the output power to the load is $P_o$. Using Eq. (24) we obtain

$$ \ell(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW} = \frac{2kT\,BW\,F G_{o}}{4P_{o}\,BW} = \frac{kT F G_{o}}{2P_{o}} = -177\ \text{dBc/Hz} \quad (25) $$

for $T = 300$ K, $F = 1$, and $P_o/G_o = P_{in} = 0$ dBm.
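
Eq. (25) is easy to check numerically; the sketch below reproduces the $-177$ dBc/Hz floor for the stated conditions:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

def thermal_pm_floor_dbc(T=300.0, F=1.0, P_in_dbm=0.0):
    """PM noise floor l(f) = kTF/(2*P_in) from Eq. (25), in dBc/Hz."""
    P_in = 1e-3 * 10 ** (P_in_dbm / 10)   # dBm -> W
    return 10 * math.log10(k * T * F / (2 * P_in))

print(round(thermal_pm_floor_dbc(), 1))   # -176.8, i.e. the -177 dBc/Hz of Eq. (25)
```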

## 5. Phase Jitter

The phase jitter $\delta\phi$ is computed from the PM noise spectrum using

$$ \delta\phi = \left[ \int_{0}^{\infty} S_{\phi}(f) H(f) df \right]^{1/2} \quad (26) $$

Generally $H(f)$ must have the shape of a high-pass filter, or a minimum cutoff frequency $f_{min}$ must be used to exclude low frequency changes from the integration; otherwise $\delta\phi$ will diverge due to random walk FM, flicker FM, or white FM noise processes. Usually $H(f)$ also has a low-pass characteristic at high frequencies to limit the effects of flicker PM and white PM [2]. See Table 1.

## 6. Timing Jitter

Recall that $\sigma_y(\tau)$ is the fractional frequency stability of adjacent samples, each of length $\tau$ (see Fig. 1). The timing jitter $\delta t$ is the timing error that accumulates after a period $\tau$; it is related to $\sigma_y(\tau)$ by

$$ \frac{\delta t}{\tau} = \frac{\delta \nu}{\nu} = \sigma_y(\tau), \qquad \delta t = \tau \sigma_y(\tau) \quad (27) $$

---PAGE_BREAK---

Table 1 shows the asymptotic forms of $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ for the 5 common noise types at frequency $\nu_o$ and $N\nu_o$, under the assumption that $2\pi f_h \tau > 1$. It is interesting to note that for white PM noise, all three measures are dominated by $f_h$ [5]. For random walk frequency modulation (FM) and flicker FM, $\sigma_y(\tau)$ is independent of $f_h$ and instead is dominated by $S_\phi(1/\tau)$ or $S_\phi(f_{\text{min}})$. Also, the timing jitter is independent of $N$ as long as we can still identify zero crossings, while the phase jitter, which is proportional to frequency, is multiplied by a factor $N$. Typical sources usually contain at least 3 of these noise types.

Table 1. $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ at carrier frequency $\nu_o$ and $N\nu_o$

| Noise type | $S_\phi(f)$ | $\sigma_y(\tau)$ | $\delta t$ at $\nu_o$ or $N\nu_o$ | $\delta\phi$ at $\nu_o$ | $\delta\phi$ at $N\nu_o$ |
|---|---|---|---|---|---|
| Random walk FM | $[\nu^2/f^4]h_{-2}$ | $\pi[(2/3)h_{-2}\tau]^{1/2}$ | $\tau\pi[(2/3)h_{-2}\tau]^{1/2}$ | $\nu[h_{-2}/(3f_{min}^3)]^{1/2}$ | $N\nu[h_{-2}/(3f_{min}^3)]^{1/2}$ |
| Flicker FM | $[\nu^2/f^3]h_{-1}$ | $[2\ln(2)h_{-1}]^{1/2}$ | $\tau[2\ln(2)h_{-1}]^{1/2}$ | $\nu[h_{-1}/(2f_{min}^2)]^{1/2}$ | $N\nu[h_{-1}/(2f_{min}^2)]^{1/2}$ |
| White FM | $[\nu^2/f^2]h_0$ | $[h_0/(2\tau)]^{1/2}$ | $[(\tau/2)h_0]^{1/2}$ | $\nu[(1/f_{min}-1/f_h)h_0]^{1/2}$ | $N\nu[(1/f_{min}-1/f_h)h_0]^{1/2}$ |
| Flicker PM | $[\nu^2/f]h_1$ | $\frac{1}{2\pi\tau}[(1.038+3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $\frac{1}{2\pi}[(1.038+3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $\nu[\ln(f_h/f_{min})h_1]^{1/2}$ | $N\nu[\ln(f_h/f_{min})h_1]^{1/2}$ |
| White PM | $\nu^2 h_2$ | $\frac{1}{2\pi\tau}[3f_h h_2]^{1/2}$ | $\frac{1}{2\pi}[3f_h h_2]^{1/2}$ | $\nu[f_h h_2]^{1/2}$ | $N\nu[f_h h_2]^{1/2}$ |
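
The white FM row of Table 1 can be exercised numerically; the sketch below assumes the usual $S_y(f)=\sum_\alpha h_\alpha f^\alpha$ convention for the $h$ coefficients, with an illustrative level $h_0$:

```python
import math

h0 = 1.0e-22              # white FM level (illustrative), 1/Hz
nu = 10e6                 # carrier frequency, Hz
f_min, f_h = 1.0, 1.0e4   # integration limits, Hz

def sigma_y(tau):
    """ADEV for white FM noise (Table 1): sqrt(h0/(2*tau))."""
    return math.sqrt(h0 / (2 * tau))

def delta_t(tau):
    """Timing jitter accumulated over tau (Eq. 27): tau * sigma_y(tau)."""
    return tau * sigma_y(tau)

def delta_phi():
    """Phase jitter for white FM (Table 1): nu*sqrt((1/f_min - 1/f_h)*h0)."""
    return nu * math.sqrt((1 / f_min - 1 / f_h) * h0)

tau = 1.0
print(sigma_y(tau))    # ~7.07e-12
print(delta_t(tau))    # ~7.07e-12 s
print(delta_phi())     # ~1.0e-4 rad
```

Note that $\sigma_y$ and $\delta t$ do not depend on $\nu$, so they are unchanged by multiplication, while $\delta\phi$ scales with $N\nu$, as the table states.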
## 7. Discussion

We have explored the effects of phase modulation (PM), amplitude modulation (AM), and additive noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources. Under ideal frequency multiplication or division by a factor $N$, the PM noise and phase jitter of a source are scaled by a factor of $N^2$. For sufficiently large $N$, the carrier power density falls below the PM noise power, a condition often referred to as carrier collapse. Noise added to a precision signal results in equal amounts of PM and AM noise. The upper and lower PM (or AM) sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM) originates from random or coherent processes.
## 8. Acknowledgements

We gratefully acknowledge helpful discussions with David A. Howe, A. Sen Gupta, and Jeff Vollin.
## References

[1] F.L. Walls, "Correlation Between Upper and Lower Sidebands," IEEE Trans. Ultrason., Ferroelectrics, and Freq. Cont., 47, 407-410, 2000.
[2] D.B. Sullivan, D.W. Allan, D.A. Howe, and F.L. Walls, "Characterization of Clocks and Oscillators," NIST Tech. Note 1337, 1-342, 1990.

[3] F.L. Walls, D.B. Percival, and W.R. Irelan, "Biases and Variances of Several FFT Spectral Estimators as a Function of Noise Type and Number of Samples," Proc. 43rd Ann. Symp. Freq. Control, Denver, CO, May 31-June 2, 336-341, 1989. Also found in [2].
---PAGE_BREAK---

[4] F.L. Walls and A. DeMarchi, "RF Spectrum of a Signal After Frequency Multiplication: Measurement and Comparison with a Simple Calculation," IEEE Trans. Instrum. Meas., **24**, 210-217, 1975.

[5] F.L. Walls, J. Gary, A. O'Gallagher, R. Sweet, and L. Sweet, "Time Domain Frequency Stability Calculated from the Frequency Domain Description: Use of the SIGINT Software Package to Calculate Time Domain Frequency Stability from the Frequency Domain," NISTIR 89-3916 (revised), 1-31, 1991.

[6] A. SenGupta and F.L. Walls, "Effect of Aliasing on Spurs and PM Noise in Frequency Dividers," Proc. Intl. IEEE Freq. Cont. Symp., Kansas City, MO, June 6-9, 2000.
samples/texts_merged/4994833.md
ADDED
@@ -0,0 +1,529 @@
---PAGE_BREAK---
Sampling variance update method in Monte Carlo Model Predictive Control*

Shintaro Nakatani* Hisashi Date**

* Graduate School of Systems and Information Engineering, University of Tsukuba, Ibaraki, Japan (e-mail: nakatani-s@roboken.iit.tsukuba.ac.jp).

** Faculty of Engineering, Information and Systems, University of Tsukuba, Ibaraki, Japan (e-mail: hdate@iit.tsukuba.ac.jp)
**Abstract:** This study describes the influence of user parameters on control performance in Monte-Carlo model predictive control (MCMPC). Because MCMPC relies on Monte-Carlo sampling, its performance depends significantly on the characteristics of the sampling distribution. We quantify the effect of user-determinable parameters on control performance by relating the MCMPC algorithm to its convergence toward the optimal solution. In particular, we investigate the limitation that the variance of the sampling distribution imposes a trade-off between convergence speed and estimation accuracy. To overcome this limitation, we propose two variance update methods and a new MCMPC algorithm, and verify their effectiveness through numerical simulation.

**Keywords:** Optimal control theory, Monte-Carlo methods, Randomized methods, Model predictive and optimization-based control
# 1. INTRODUCTION

In recent years, model predictive control (MPC) has attracted considerable attention in various fields owing to its ability to handle constraints explicitly Carlos E. Garcia and Morari (1989), Ohtsuka (2004). In MPC, the optimal control input is determined by repeatedly solving a constrained optimization problem over a finite future horizon. From the viewpoint of implementation, MPC can be separated into two categories, i.e., gradient-based and sample-based MPC.
The former category is actively being studied for application to various real-world systems. C/GMRES, proposed by Ohtsuka (2004), is a particularly efficient gradient-based MPC method for nonlinear systems Cairano and Kolmanovsky (2019) and has been considered for application in various systems such as smart grid systems Toru (2012) and vehicle collision avoidance control Masashi Nanno (2010).

In gradient-based MPC, the optimal input is determined by solving the optimal control problem using gradient information of the cost function. If the optimal control problem is simple, the optimal solution can therefore be derived quickly and accurately. On the other hand, the target system is limited to systems with a differentiable cost function.
In the other category, sample-based MPC, the optimal input is determined using a Monte-Carlo approximation. In general, the Monte-Carlo method requires significant computational resources; therefore, real-time implementation of sample-based MPC is difficult. However, it has been reported Williams et al. (2016); Ohyama and Date (2017) that an efficient approach is to exploit the parallel nature of sampling and implement it in real time on a graphics processing unit. In addition, because sample-based MPC does not require gradient information of the cost function, it has many significant advantages. Nakatani and Date (2019) describe the features of Monte-Carlo model predictive control (MCMPC), a type of sample-based MPC, and explain its capability to handle discontinuous events based on experiments with collisions of a pendulum on a cart.

From a theoretical point of view, the most successful method is the path integral optimal control framework Kappen (2007); Satoh et al. (2017). The key idea in this framework is that the solution of the optimal control problem is transformed into an expectation over all possible trajectories and the corresponding trajectory costs. This transformation allows stochastic optimal control problems to be solved using a Monte-Carlo approximation with guaranteed convergence. However, these studies did not consider the effect of the variance of the sampling distribution on convergence. Williams et al. (2015) mentions this problem and proposes a framework that allows users to freely determine the variance of the sampling distribution. These previous studies share the application of path-integral theory to stochastic optimal control problems.
In contrast, the MCMPC investigated herein addresses the optimal control problem for deterministic systems. Therefore, herein we discuss the convergence of MCMPC by considering the optimal control problem for

* This work was not supported by any organization

---PAGE_BREAK---

discrete-time linear systems, for which the unique optimal solution can be derived analytically.
This study mainly describes the trade-off relationship between the variance of the sampling distribution and convergence, i.e., if we choose a large sampling variance, convergence is fast but a large noise remains in the solution. This problem requires that the variance be properly controlled for the sub-optimal input to match the optimal solution; that is, the sampling variance must be adjusted appropriately to achieve fast convergence and precision at the same time. Two types of variance update methods are proposed: one is inspired by the cooling principle of the simulated annealing method, and the other is based on the most recent sample variance. These methods are compared in simulation of a linear system. Besides the variance update methods, we also introduce two types of optimization among the Monte-Carlo samples: the Top-1 sample and the weighted mean. Taking the best sample among all samples tends to achieve fast convergence but suffers from large estimation noise compared with the weighted mean. These are also compared in simulation.

Based on these results, we show that the newly proposed method is an effective approach to the problem discussed in this paper.
## 2. FINITE-TIME OPTIMAL CONTROL PROBLEM FOR DISCRETE-TIME LINEAR SYSTEMS

We consider an optimal control problem for discrete-time linear systems at the $k$-th control cycle, with prediction over $N$ steps indexed by $\{k|0\}, \dots, \{k|i\}, \dots, \{k|N\}$. Consider a class of linear discrete-time systems described by the following equation:

$$x_{\{k|i+1\}} = Ax_{\{k|i\}} + Bu_{\{k|i\}}, \quad (1)$$

where the state is denoted by $x_{\{k|i\}} \in \mathbb{R}^n$, the control input by $u_{\{k|i\}} \in \mathbb{R}^1$, and the system matrices by $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times 1}$. In addition, it is assumed that the initial state $x_{\{k|0\}}$ at each control cycle $k$ is known and, for simplicity, that there are no constraints on the input or state. For the system (1), the cost function of the finite-time optimal control problem from the current control cycle to $N$ steps into the future is described by the following equation:
$$J(x_k, u_k, k) = \frac{1}{2} \sum_{i=0}^{N-1} \left( x_{\{k|i+1\}}^T Q x_{\{k|i+1\}} + u_{\{k|i\}}^T R u_{\{k|i\}} \right), \quad (2)$$

where $Q \in \mathbb{R}^{n \times n}$ is the positive definite weight for the state and $R \in \mathbb{R}^1$ is the positive definite weight for the input. In the rest of this study, we use $J$ as the cost value unless otherwise noted. The solution of this optimal control problem is defined as

$$u_{\{k|i\}}^* = \arg \min_{u_{\{k|i\}}} J(x_k, u_k, k). \quad (3)$$
Using the fact that the time evolution of the system (1) can be expressed using only the initial state $x_{\{k|0\}}$ and the input sequence $u_{\{k|0\}}, \dots, u_{\{k|N-1\}}$, we can rewrite equation (2) as

$$J(x_k, u_k, k) = \frac{1}{2} \hat{\mathbf{u}}^T \hat{Q} \hat{\mathbf{u}} + x_{\{k|0\}}^T \hat{B} \hat{\mathbf{u}} + \frac{1}{2} x_{\{k|0\}}^T \hat{A} x_{\{k|0\}}, \quad (4)$$

where the matrices $\hat{A} \in \mathbb{R}^{n \times n}$, $\hat{B} \in \mathbb{R}^{n \times N}$, and $\hat{Q} \in \mathbb{R}^{N \times N}$ and the vector $\hat{\mathbf{u}} \in \mathbb{R}^N$ are given in (5) to (8).
$$\hat{A} = A^T QA + (A^2)^T QA^2 + \cdots + (A^N)^T QA^N \quad (5)$$

$$\hat{B} = \left[ \sum_{k=1}^{N} (A^k)^T QA^{k-1} B, \dots, \sum_{k=j}^{N} (A^k)^T QA^{k-j} B, \dots, (A^N)^T QB \right] \quad (6)$$

$$\hat{Q} = \begin{bmatrix} \hat{q}_{11} & \cdots & \hat{q}_{1j} & \cdots & \hat{q}_{1N} \\ \vdots & \ddots & \vdots & & \vdots \\ \hat{q}_{i1} & \cdots & \hat{q}_{ij} & \cdots & \hat{q}_{iN} \\ \vdots & & \vdots & \ddots & \vdots \\ \hat{q}_{N1} & \cdots & \hat{q}_{Nj} & \cdots & \hat{q}_{NN} \end{bmatrix} \quad (7)$$

$$\hat{\mathbf{u}} = [u_{\{k|0\}}, \dots, u_{\{k|N-1\}}]^T \quad (8)$$

The matrix $\hat{Q}$ is symmetric; its element $\hat{q}_{ij}$ in the $i$-th row and $j$-th column of the upper triangle is given by
$$\hat{q}_{ij} =
\begin{cases}
\displaystyle \sum_{k=0}^{N-i} B^T (A^k)^T Q A^k B + R, & (i=j) \\
\displaystyle \sum_{k=j-i}^{N-i} B^T (A^k)^T Q A^{k+i-j} B, & (i<j)
\end{cases}
\quad (9)$$

If the matrix $\hat{Q}$ is a positive definite symmetric matrix, the unique solution $\mathbf{u}^*$ can be obtained as

$$\mathbf{u}^* = -\hat{Q}^{-1}\hat{B}^T x_{\{k|0\}}. \quad (10)$$
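As a minimal sketch (the system and weight matrices in the usage test are placeholders, not from the paper), the condensed matrices $\hat{Q}$ and $\hat{B}$ of (4) can equivalently be built by stacking the $N$-step prediction $X = \Phi x_0 + \Gamma \mathbf{u}$ instead of evaluating the elementwise sums (5), (6), and (9); the minimizer then follows directly from (10):

```python
import numpy as np

def condensed_matrices(A, B, Q, R, N):
    """Build Q_hat and B_hat of eq. (4) by stacking the N-step prediction
    X = Phi x0 + Gamma u, so that J = 1/2 u^T Q_hat u + x0^T B_hat u + const."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):            # block row: state x_{i+1}
        for j in range(i + 1):    # block column: input u_j
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)  # block-diagonal state weights
    Rbar = np.kron(np.eye(N), R)  # block-diagonal input weights
    return Gamma.T @ Qbar @ Gamma + Rbar, Phi.T @ Qbar @ Gamma

def optimal_input(A, B, Q, R, N, x0):
    """Unique minimizer of (4), i.e. eq. (10): u* = -Q_hat^{-1} B_hat^T x0."""
    Q_hat, B_hat = condensed_matrices(A, B, Q, R, N)
    return np.linalg.solve(Q_hat, -B_hat.T @ x0)
```

Since $\hat{Q} = \Gamma^T \bar{Q} \Gamma + \bar{R}$ with positive definite $R$, it is positive definite, so `np.linalg.solve` applies directly.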
The discussion so far is the general theory for the finite-time optimal control problem with cost function (2) for discrete-time linear systems (1). In the next section, we discuss the relationship between convergence and the MCMPC algorithm, which takes the expectation over all possible trajectories as the sub-optimal input. We also propose an alternative method: the Top-1 sample algorithm for MCMPC.
## 3. ALGORITHM OF TWO TYPES OF MCMPC

In this section, we describe two different MCMPC algorithms. First, we describe the relationship between convergence and the normal-type MCMPC algorithm, which uses the expectation over all possible trajectories as the sub-optimal input. Next, we describe the Top-1 sample MCMPC algorithm, which uses the best trajectory among all sampled trajectories as the sub-optimal input.
### 3.1 Relation between the algorithm of normal-type MCMPC and convergence

Normal-type MCMPC consists of three main phases.

**Phase 1**
Generating input sequences

**Phase 2**
Running forward simulation in parallel
---PAGE_BREAK---
**Phase 3**
Estimating the sub-optimal input sequences $\tilde{\mathbf{u}}$
In Phase 1, input sequences are generated by random sampling from a normal distribution:

$$
\hat{\mathbf{u}} \sim \mathcal{N}(\bar{\mathbf{u}}, \Sigma), \quad (11)
$$
where the mean $\bar{\mathbf{u}}$ is initialized and updated by the following equation:

$$
\bar{\mathbf{u}} = \begin{cases} \mathbf{0}, & (k=0) \\ [\tilde{u}_{\{k|0\}}, \dots, \tilde{u}_{\{k|N-1\}}]^T, & (k \neq 0) \end{cases} \tag{12}
$$
where $\tilde{u}$ denotes the sub-optimal input estimated in the previous estimation. $\Sigma \in \mathbb{R}^{N \times N}$ is the variance-covariance matrix and satisfies the following two assumptions.

Assumption 1. The standard deviation $\sigma$ used in all prediction steps is constant.

Assumption 2. The elements $u_{\{k|i\}} \in \mathbb{R}^1$ are independent of each other:
$$
E(u_{\{k|i\}} u_{\{k|j\}}) = 0, \quad (i \neq j) \quad (13)
$$

where $E(\cdot)$ denotes the expected value.
Using these two assumptions, $\Sigma$ can be described by the following equation:

$$
\Sigma = \begin{bmatrix} \sigma^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma^2 \end{bmatrix} \tag{14}
$$

Therefore, $\hat{\mathbf{u}}$ can be regarded as a random variable with the probability density function (PDF) shown in the following equation:

$$
f(\hat{\mathbf{u}}) = \frac{1}{(2\pi)^{N/2}\sigma^{N}} \exp \left( -\frac{1}{2}(\hat{\mathbf{u}} - \bar{\mathbf{u}})^T \Sigma^{-1} (\hat{\mathbf{u}} - \bar{\mathbf{u}}) \right) \\
= \frac{1}{(2\pi)^{N/2}\sigma^{N}} \exp \left( -\frac{1}{2\sigma^2} (\hat{\mathbf{u}} - \bar{\mathbf{u}})^T (\hat{\mathbf{u}} - \bar{\mathbf{u}}) \right). \tag{15}
$$
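Phase 1 can be sketched in a few lines; under Assumptions 1 and 2, sampling from $\mathcal{N}(\bar{\mathbf{u}}, \sigma^2 I)$ reduces to adding scaled standard-normal noise to the mean sequence (the parameter values in the usage test are illustrative):

```python
import numpy as np

def sample_input_sequences(u_bar, sigma, M, rng=None):
    """Draw M candidate input sequences u_hat ~ N(u_bar, sigma^2 I), eqs. (11)/(14).

    u_bar : (N,) mean input sequence (previous sub-optimal estimate, eq. (12))
    sigma : scalar standard deviation shared by all prediction steps (Assumption 1)
    Returns an (M, N) array; each row is one candidate sequence.
    """
    rng = np.random.default_rng() if rng is None else rng
    u_bar = np.asarray(u_bar, dtype=float)
    return u_bar + sigma * rng.standard_normal((M, u_bar.size))
```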
In Phase 2, the system state for each of the samples used for prediction and estimation is updated using the system model (1) and the input sequences sampled randomly as shown in (11). The updated system states and the randomly sampled inputs are then used to calculate the cost values $J(x_k, u_k, k)$.
In Phase 3, the sub-optimal input sequence $\tilde{\mathbf{u}}$ is derived as the weighted sample mean of the randomly sampled inputs $\hat{\mathbf{u}}_m$ with weights $w(\hat{\mathbf{u}}_m)$:

$$
\tilde{\mathbf{u}} = \frac{\sum_{m=1}^{M} w(\hat{\mathbf{u}}_m)\hat{\mathbf{u}}_m}{\sum_{m=1}^{M} w(\hat{\mathbf{u}}_m)}, \quad (16)
$$
where $w(\hat{\mathbf{u}})$ can be derived as the following equation if $\hat{Q}$ is positive definite:

$$
\begin{align*}
w(\hat{\mathbf{u}}) &= \exp\left(-\frac{J}{\lambda^2}\right) \\
&= \exp\left(-\frac{1}{2\lambda^2}\hat{\mathbf{u}}^T\hat{Q}\hat{\mathbf{u}} - \frac{1}{\lambda^2}x_{\{k|0\}}^T\hat{B}\hat{\mathbf{u}} - \frac{1}{2\lambda^2}x_{\{k|0\}}^T\hat{A}x_{\{k|0\}}\right) \\
&= \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}} - \mathbf{u}^*)^T\hat{Q}(\hat{\mathbf{u}} - \mathbf{u}^*) + \text{const}\right),
\end{align*}
\tag{17}
$$
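Phase 3 of the normal-type algorithm can then be sketched directly from precomputed sample costs. Subtracting the minimum cost before exponentiating is our addition for numerical stability; the common factor cancels in the ratio (16):

```python
import numpy as np

def weighted_mean_estimate(u_samples, costs, lam):
    """Phase 3 of normal-type MCMPC: weighted mean (16) with weights (17).

    u_samples : (M, N) sampled input sequences
    costs     : (M,) cost J of each sample
    lam       : positive constant lambda
    """
    costs = np.asarray(costs, dtype=float)
    # Shift by the minimum cost so exp() never underflows for the best samples;
    # the shift cancels between numerator and denominator of (16).
    w = np.exp(-(costs - costs.min()) / lam**2)
    return (w[:, None] * u_samples).sum(axis=0) / w.sum()
```

As $\lambda \to 0$ the weights concentrate on the lowest-cost sample, so this estimator interpolates between the plain sample mean (large $\lambda$) and the Top-1 selection of Section 3.2 (small $\lambda$).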
where $\lambda$ is a positive constant. Then $E(\tilde{\mathbf{u}})$, the expected value of the sample mean (16), can be described by the following equation:

$$
E(\tilde{\mathbf{u}}) = \frac{\int \hat{\mathbf{u}}\, w(\hat{\mathbf{u}}) f(\hat{\mathbf{u}})\, d\hat{\mathbf{u}}}{\int w(\hat{\mathbf{u}}) f(\hat{\mathbf{u}})\, d\hat{\mathbf{u}}}. \quad (18)
$$
Note that the expectation is taken with respect to the random variable $\hat{\mathbf{u}}$ with the PDF (15) and the weight function (17). From the definition of the expectation of a function of random variables, equation (18) can be evaluated as

$$
E(\tilde{\mathbf{u}}) = (\sigma^2 \hat{Q} + \lambda^2 I)^{-1} (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}), \quad (19)
$$
where $I \in \mathbb{R}^{N \times N}$ is the identity matrix. The derivation of (19) is shown in Appendix A. The variance of the sample mean $\Sigma_S$ can then be expressed by the following equation:

$$
\Sigma_S = \frac{\sigma^2 \lambda^2}{M} (\sigma^2 \hat{Q} + \lambda^2 I)^{-1}, \quad (20)
$$
where $M$ is the total number of samples used for the prediction and estimation (see Appendix A for the derivation). Next, we consider the relationship between the iteration of prediction and estimation and the convergence of the sub-optimal input sequence. The expected value in (11) is updated by repeating the estimation shown in (18); if the sub-optimal input obtained by the $d$-th estimation is $\bar{\mathbf{u}}_d$, then $\bar{\mathbf{u}}_{d+1}$ can be described as

$$
\bar{\mathbf{u}}_{d+1} = E(\tilde{\mathbf{u}}) = (\sigma^2 \hat{Q} + \lambda^2 I)^{-1} (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}_d). \quad (21)
$$
If we define the error between the optimal input sequence $\mathbf{u}^*$ and the sub-optimal input $\bar{\mathbf{u}}_d$ estimated by the $d$-th estimation as $\boldsymbol{e}_d = \bar{\mathbf{u}}_d - \mathbf{u}^*$, the $(d+1)$-th estimation error can be described as

$$
\boldsymbol{e}_{d+1} = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \boldsymbol{e}_d. \quad (22)
$$
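The contraction in (21)-(22) can be checked numerically; the positive definite $\hat{Q}$ below is a randomly generated placeholder, and $\sigma$, $\lambda$ are our choices:

```python
import numpy as np

# Numerical check of the deterministic recursion (21)-(22): the error e_d
# should shrink geometrically, since every eigenvalue of Omega is below 1.
rng = np.random.default_rng(1)
N, sigma, lam = 4, 0.5, 1.0
S = rng.standard_normal((N, N))
Q_hat = S @ S.T + N * np.eye(N)          # positive definite symmetric placeholder
Omega = np.linalg.inv((sigma**2 / lam**2) * Q_hat + np.eye(N))
assert np.max(np.abs(np.linalg.eigvals(Omega))) < 1.0  # condition of Theorem 1

u_star = rng.standard_normal(N)
u_bar = np.zeros(N)
errs = []
for d in range(30):                       # iterate eq. (21)
    rhs = sigma**2 * Q_hat @ u_star + lam**2 * u_bar
    u_bar = np.linalg.solve(sigma**2 * Q_hat + lam**2 * np.eye(N), rhs)
    errs.append(np.linalg.norm(u_bar - u_star))
assert errs[-1] < 1e-6 * errs[0]          # e_d -> 0, as eq. (22) predicts
```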
From the above considerations, we obtain the following theorem on the relationship between convergence and the parameters specific to MCMPC.

Theorem 1. In (4), assume that the matrix $\hat{Q}$ is a real positive definite symmetric matrix and that the unique optimal input sequence exists as shown in (10).
Then, the sub-optimal input $\bar{\mathbf{u}}_d$ converges to $\mathbf{u}^*$ as $d \to \infty$.

**Proof.** The necessary and sufficient condition for the error $\boldsymbol{e}_d$ to asymptotically converge to 0 is that the

---PAGE_BREAK---

absolute value of every eigenvalue of the matrix $\Omega$ shown in (23) is less than 1.
$$ \Omega = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \qquad (23) $$

Note that for any real positive definite symmetric matrices $M_A, M_B$, the following inequality holds:

$$ \lambda_i(M_A + M_B) > \lambda_i(M_A), \qquad (24) $$

where $\lambda_i(Z)$ denotes the $i$-th eigenvalue of a matrix $Z$ (proof omitted). Since $\hat{Q}$ is a real positive definite symmetric matrix, the following holds:
$$ \lambda_i \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right) > \lambda_i(I) = 1. \qquad (25) $$

Since $\lambda_i(Z^{-1}) = \frac{1}{\lambda_i(Z)}$ holds for any non-singular matrix, the following inequality holds:

$$ \lambda_i(\Omega) = \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) < 1. \qquad (26) $$

Because the eigenvalues of a real positive definite symmetric matrix are positive real numbers, the absolute value of every eigenvalue of the matrix $\Omega$ is less than 1. The error $e_d$ then satisfies the following equation:
$$ \lim_{d \to \infty} e_d = 0. \qquad (27) $$

This means:

$$ \lim_{d \to \infty} (\bar{u}_d - u^*) = 0. \qquad (28) $$

Thus, the sub-optimal input sequence $\bar{u}_d$ converges asymptotically to $u^*$ as $d \to \infty$. $\square$
**Corollary 1.** When $\sigma \to \infty$, Eq. (26) satisfies the following equation:

$$ \lim_{\sigma \to \infty} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 0, \quad \forall i. \qquad (29) $$

Eq. (29) shows that if $\sigma \to \infty$, the first estimation result $\bar{u}^{(1)}$ satisfies $\bar{u}^{(1)} = u^*$. Therefore, the larger $\sigma$ is, the faster the sub-optimal input sequence $\bar{u}_d$ converges to the optimal values.
In this limit, the variance-covariance matrix of the sample mean $\Sigma_S$ shown in Eq. (20) becomes

$$ \lim_{\sigma \to \infty} \Sigma_S = \frac{\lambda^2 \hat{Q}^{-1}}{M}. \qquad (30) $$

Eq. (30) means that if $\lambda$ is sufficiently small, the variance of the sub-optimal input sequence is small; this observation is consistent with the results of path-integral analysis and implies a trade-off between convergence and variance. Moreover, equation (30) shows that the error of the Monte-Carlo approximation of the expected value $E(\tilde{u})$ decreases as $O(1/\sqrt{M})$ with the number of samples $M$.
**Corollary 2.** When $\sigma \to 0$, equation (20) satisfies

$$ \lim_{\sigma \to 0} \Sigma_S = 0. \qquad (31) $$

However, the eigenvalues of the coefficient matrix $\Omega$ in equation (22) become

$$ \lim_{\sigma \to 0} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 1, \quad \forall i. \qquad (32) $$

These equations show that there is a trade-off between convergence and the variance of the sample mean $\Sigma_S$. Equations (31) and (32) show that if the user chooses the variance $\sigma^2$ as small as possible to eliminate the variance of the sample mean $\Sigma_S$, the error $e_d$ from the previous estimation will remain. Moreover, if $\sigma$ is too small, the sub-optimal input sequence $\bar{u}_d$ converges only slowly to the optimal values.
From Corollary 1 and Corollary 2, it is understood that the variance needs to be controlled appropriately to achieve both high estimation accuracy and fast convergence.
### 3.2 Algorithm of Top-1 sample MCMPC

In Top-1 sample MCMPC, the optimization problem is solved by iterating the following three phases within the same control cycle.
**Phase 1**
Generating input sequences

**Phase 2**
Running forward simulation in parallel

**Phase 3**
Estimating the sub-optimal input sequences $\tilde{u}$ and updating the standard deviation $\sigma$.
Phases 1 and 2 are the same as in the MCMPC algorithm described above.

In Phase 3, the sub-optimal input sequence $\tilde{u}$ is described by the following equation:

$$ \tilde{u} = \arg\min_{\hat{u} \in U} J(x_k, u_k, k), \qquad (33) $$

where $U$ denotes the set of all input sequences $\hat{u}$ randomly sampled in Phase 1. In addition, the standard deviation $\sigma$ is updated as described in Section 4.
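Phase 3 of the Top-1 variant, eq. (33), reduces to a single argmin over precomputed sample costs:

```python
import numpy as np

def top1_estimate(u_samples, costs):
    """Phase 3 of Top-1 sample MCMPC, eq. (33): return the sampled input
    sequence with the lowest cost J among all M candidates in U."""
    return u_samples[int(np.argmin(costs))]
```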
### 3.3 Model predictive control algorithm

So far, we have described how the prediction is repeated within one control cycle. In the model predictive control we propose, the prediction is repeated every control cycle, and the sub-optimal input predicted in the previous control cycle is re-optimized. Thus, the sub-optimal input at the $k$-th control cycle corresponds to the result of $k \times d$ prediction iterations.
## 4. SAMPLING VARIANCE UPDATE METHODS

In this section, we describe two types of update methods that are applied at each iteration of the prediction. The first variance update method used in this study can be described as

$$ \sigma_d = \gamma^d \sigma_0, \qquad (34) $$

where $\gamma$ is a positive constant with $\gamma \in [0.8, 1.0)$, $d$ is the number of iterations, and $\sigma_0$ is the initial standard deviation, which should be designed by the user. Equation (34) is inspired by the cooling schedule used in the simulated annealing (SA) method. In SA, it is guaranteed that the estimated value reaches the optimal solution when the cooling is chosen appropriately and applied enough times. For example, if we choose $\gamma = 1/\log(1+d)$, the estimated value reliably converges to the optimal value. However, this cooling rate is too slow; therefore, in practice, the

---PAGE_BREAK---

cooling rate $\gamma \in [0.8, 1.0)$ is generally used Rosen and Nakano (1994).
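The geometric cooling schedule (34) is a one-liner:

```python
def geometric_cooling(sigma0, gamma, d):
    """Geometric cooling schedule of eq. (34): sigma_d = gamma^d * sigma_0.
    A gamma in [0.8, 1.0) shrinks the sampling standard deviation each
    iteration, trading exploration for estimation accuracy."""
    return (gamma ** d) * sigma0
```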
The second method can be described by the following equation:

$$\sigma_d = \sqrt{\frac{1}{\sum_{m=1}^{M} w_{d-1}(\hat{\mathbf{u}}_m)}}. \quad (35)$$

Equation (35) corresponds to the error variance of equation (16), which can be calculated based on the error propagation law. Note that equation (35) is a variance update method that reflects the quality of the estimation results. In the rest of this study, we refer to the former method as the geometric cooling method and to the latter as the latest sample variance method.
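A minimal sketch of (35), computing the weights from the previous iteration's raw costs via (17) (no min-shift here, since (35) depends on the absolute weight magnitudes):

```python
import numpy as np

def latest_sample_variance(costs, lam):
    """Variance update of eq. (35): sigma_d = 1 / sqrt(sum of the previous
    iteration's weights w_{d-1}, eq. (17)). Many low-cost samples -> large
    total weight -> small sigma; poor samples keep sigma large."""
    w = np.exp(-np.asarray(costs, dtype=float) / lam**2)
    return 1.0 / np.sqrt(w.sum())
```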
|
| 364 |
+
|
| 365 |
+
## 5. NUMERICAL SIMULATION
|
| 366 |
+
|
| 367 |
+
In this section, we first show the models used in two different numerical simulations. Next, we show the simulation results when using normal type MCMPC, which illustrate the effect of the variance $\sigma$ on convergence. Furthermore, we show the results of applying the two variance update methods described in Section 4 to normal type MCMPC and Top1 sample MCMPC. Finally, we show the results of applying them to the swing-up stabilization of a double inverted pendulum, which is a nonlinear system.
|
| 368 |
+
|
| 369 |
+
### 5.1 Simulation models
|
| 370 |
+
|
| 371 |
+
**Example 1.** As the first example, we consider the optimal control problem when MCMPC is applied to a three-dimensional unstable discrete-time linear system that can be described by the following equation:
|
| 372 |
+
|
| 373 |
+
$$ \begin{aligned} x_{k+1} &= Ax_k + Bu_k \\ x_k &\in \mathbb{R}^3, u_k \in \mathbb{R}^1 \end{aligned} \quad (36) $$
|
| 374 |
+
|
| 375 |
+
where the coefficient matrices A and B are as shown in the following equations:
|
| 376 |
+
|
| 377 |
+
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1.1364 & 0.273 \\ 0 & -0.1339 & -0.1071 \end{bmatrix} \quad (37)$$
|
| 378 |
+
|
| 379 |
+
$$B = \begin{bmatrix} 0 \\ 0 \\ 0.0893 \end{bmatrix}, \quad (38)$$
|
| 380 |
+
|
| 381 |
+
then the eigenvalues of A are $\Lambda = [0, -1.1059, -0.1376]^T$. Since one of the eigenvalues of A lies outside the unit circle, system (36) is unstable. We then consider an optimal control problem for system (36) with a prediction horizon $N = 15$, initial state $x_0 = [2.98, 0.7, 0.0]^T$, state weight matrix Q, and input weight R as follows:
|
| 382 |
+
|
| 383 |
+
$$Q = \operatorname{diag}(2.0, 1.0, 0.1), \quad R = 1. \quad (39)$$
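The instability claimed above can be checked numerically (a small sketch using NumPy):

```python
import numpy as np

# Coefficient matrix A from equation (37)
A = np.array([[0.0,  1.0,     0.0],
              [0.0, -1.1364,  0.273],
              [0.0, -0.1339, -0.1071]])

eigvals = np.linalg.eigvals(A)
# One eigenvalue magnitude exceeds 1, so the discrete-time system (36) is unstable;
# the zero first column of A also makes 0 an exact eigenvalue.
print(sorted(abs(eigvals)))
```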
|
| 384 |
+
|
| 385 |
+
Then, the optimal input sequence $\mathbf{u}^*$ can be easily calculated using equation (3). In this study, we show only the analytical solution $u_0^* = -2.69$, which is used in the following discussion.
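A standard finite-horizon Riccati recursion is one way to compute such a solution. The sketch below assumes a terminal weight equal to Q, which the text does not specify, so the resulting numbers are illustrative; by construction the rolled-out optimal sequence can do no worse than applying no input:

```python
import numpy as np

A = np.array([[0.0,  1.0,     0.0],
              [0.0, -1.1364,  0.273],
              [0.0, -0.1339, -0.1071]])
B = np.array([[0.0], [0.0], [0.0893]])
Q = np.diag([2.0, 1.0, 0.1])
R = np.array([[1.0]])
N = 15
x0 = np.array([2.98, 0.7, 0.0])

# Backward Riccati recursion for the time-varying feedback gains K_k.
P = Q.copy()                                  # terminal weight (assumed)
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()                               # gains[0] now applies at k = 0

def cost(x_init, inputs):
    """Finite-horizon quadratic cost with the (assumed) terminal weight Q."""
    x, J = x_init.copy(), 0.0
    for u in inputs:
        J += x @ Q @ x + R[0, 0] * u ** 2
        x = A @ x + B[:, 0] * u
    return J + x @ Q @ x

# Roll out the optimal policy and compare against applying no input at all.
x, u_opt = x0.copy(), []
for K in gains:
    u = -(K @ x).item()
    u_opt.append(u)
    x = A @ x + B[:, 0] * u
print(u_opt[0])
```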
|
| 386 |
+
|
| 387 |
+
**Example 2.** As the second example, we consider the swing-up stabilization of an arm type double inverted pendulum.
|
| 388 |
+
|
| 389 |
+
Table 1. Parameters of arm type double pendulum
|
| 390 |
+
|
| 391 |
+
<table><thead><tr><td>Name</td><td>Symbol (·)</td><td>Value</td></tr></thead><tbody><tr><td>Angle of the first link</td><td>$\theta_1$ (rad)</td><td>Variable</td></tr><tr><td>Angle of the second link</td><td>$\theta_2$ (rad)</td><td>Variable</td></tr><tr><td>First link drive torque</td><td>$\tau_1$ (N·m)</td><td>Variable</td></tr><tr><td>Mass of first link</td><td>$m_1$ (kg)</td><td>-</td></tr><tr><td>Mass of second link</td><td>$m_2$ (kg)</td><td>$9.60 \times 10^{-2}$</td></tr><tr><td>Coefficient of friction</td><td>$\mu_2$ (kg·m²s⁻¹)</td><td>$1.26 \times 10^{-4}$</td></tr><tr><td>Gravity acceleration</td><td>$g$ (ms⁻²)</td><td>9.81</td></tr><tr><td>Length of first link</td><td>$L_1$ (m)</td><td>$2.27 \times 10^{-1}$</td></tr><tr><td>Length of second link</td><td>$l_2$ (m)</td><td>$1.95 \times 10^{-1}$</td></tr><tr><td>Moment of inertia</td><td>$J_2$ (kg·m²)</td><td>$1.10 \times 10^{-3}$</td></tr><tr><td>Positive constant</td><td>$a_1$</td><td>6.29</td></tr><tr><td>Positive constant</td><td>$b_1$</td><td>$1.64 \times 10^1$</td></tr></tbody></table>
|
| 392 |
+
|
| 393 |
+
Fig. 1. Model of arm type double pendulum
|
| 394 |
+
|
| 395 |
+
The state equation of the arm type double inverted pendulum shown in Fig. 1 can be described by the following two equations:
|
| 396 |
+
|
| 397 |
+
$$\ddot{\theta}_1(t) = -a_1\dot{\theta}_1(t) + b_1u(t) \quad (40)$$
|
| 398 |
+
|
| 399 |
+
$$\alpha_1 \cos \theta_{12}(t) \cdot \ddot{\theta}_1(t) + \alpha_2 \ddot{\theta}_2(t) = \alpha_1 \dot{\theta}_1^2(t) \sin \theta_{12}(t) + \alpha_3 \sin \theta_2(t) \\ + \mu_2 \dot{\theta}_1(t) - \mu_2 \dot{\theta}_2(t) \quad (41)$$
|
| 400 |
+
|
| 401 |
+
The time-invariant parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ and the variable $\theta_{12}$ in Equation (40) and Equation (41) are as follows:
|
| 402 |
+
|
| 403 |
+
$$\begin{align} \alpha_1 &= m_2 L_1 l_2, & \alpha_2 &= J_2 + m_2 l_2^2 \\ \alpha_3 &= m_2 l_2 g, & \theta_{12}(t) &= \theta_1(t) - \theta_2(t). \end{align} \quad (42)$$
|
| 404 |
+
|
| 405 |
+
The parameters of equations (40) to (42) and Fig. 1 are listed in Table 1. We then consider an optimal control problem for this example with a prediction horizon $N = 80$, the initial state shown in equation (43), and the state weight matrix Q and input weight R shown in equation (44).
|
| 406 |
+
|
| 407 |
+
$$[\theta_1(0), \dot{\theta}_1(0), \theta_2(0), \dot{\theta}_2(0)] = [\pi, 0, \pi, 0]. \quad (43)$$
|
| 408 |
+
|
| 409 |
+
$$Q = \operatorname{diag}(5.0, 0.01, 5.0, 0.01), \quad R = 1. \quad (44)$$
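Equations (40) to (42) can be sketched as a dynamics function. Parameter values are taken from Table 1; this is a sketch of the stated equations, not the authors' simulation code:

```python
import numpy as np

# Parameters from Table 1
m2, L1, l2 = 9.60e-2, 2.27e-1, 1.95e-1
J2, mu2 = 1.10e-3, 1.26e-4
g, a1, b1 = 9.81, 6.29, 1.64e1

# Time-invariant parameters of equation (42)
alpha1 = m2 * L1 * l2
alpha2 = J2 + m2 * l2 ** 2
alpha3 = m2 * l2 * g

def dynamics(th1, dth1, th2, dth2, u):
    """Angular accelerations from equations (40) and (41)."""
    th12 = th1 - th2
    ddth1 = -a1 * dth1 + b1 * u                    # equation (40)
    ddth2 = (alpha1 * dth1 ** 2 * np.sin(th12)     # equation (41), solved for ddth2
             + alpha3 * np.sin(th2)
             + mu2 * dth1 - mu2 * dth2
             - alpha1 * np.cos(th12) * ddth1) / alpha2
    return ddth1, float(ddth2)

# The upright rest state with zero input is an equilibrium.
print(dynamics(0.0, 0.0, 0.0, 0.0, 0.0))  # (0.0, 0.0)
```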
|
| 410 |
+
|
| 411 |
+
### 5.2 Trade-off between precision and convergence
|
| 412 |
+
|
| 413 |
+
In this subsection, we consider the relationship between the variance $\sigma$ of the sampling distribution and convergence, using the results of applying normal type MCMPC to Example 1. Fig. 2 shows the average and the $3\sigma$ standard deviation range of the simulation results of 30 independent trials under each condition.
|
| 414 |
+
---PAGE_BREAK---
|
| 415 |
+
|
| 416 |
+
Table 2. Parameters (for Example 1)
|
| 417 |
+
|
| 418 |
+
<table><thead><tr><th>Name</th><th>Symbol</th><th>Value</th></tr></thead><tbody><tr><td>Num of predictive steps</td><td>N</td><td>15 step</td></tr><tr><td>Num of samples</td><td>M</td><td>5,000</td></tr><tr><td>Num of iterations</td><td>d</td><td>100</td></tr><tr><td>Variance</td><td>σ<sup>2</sup></td><td>Variable value</td></tr><tr><td>Parameter</td><td>λ</td><td>6.3</td></tr></tbody></table>
|
| 419 |
+
|
| 420 |
+
Fig. 2. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1
|
| 421 |
+
|
| 422 |
+
Table 2 lists the specific parameters of MCMPC used in this simulation to confirm the relationship between the variance $\sigma$ and convergence. In Fig. 2, we compare the results when $\sigma$ is gradually increased through 0.5, 1.0, 2.0, and 4.0. As $\sigma$ increases, the error $e_0$ converges to 0 with fewer iterations. However, it can also be confirmed that the variation in the error $e_0$ grows as the variance $\sigma$ increases. This result is a good example showing that the variance $\sigma$ of the sampling distribution causes a trade-off between the speed of convergence and the accuracy of the estimated sub-optimal inputs at the time of convergence.
|
| 423 |
+
|
| 424 |
+
From the results shown in Fig. 2, it is necessary to update the variance $\sigma$ appropriately to obtain the optimal inputs faster and more accurately.
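The sample-and-reweight step behind this trade-off can be illustrated in one dimension. This is a stand-in sketch, not the authors' controller: the cost is replaced by a simple quadratic around the known optimum $u_0^* = -2.69$, and exponential weights $\exp(-J/(2\lambda))$ are assumed for the weighting function:

```python
import numpy as np

rng = np.random.default_rng(0)
u_star = -2.69              # analytical optimum of Example 1 (first input)
lam, M, D = 6.3, 5000, 100  # lambda, samples, and iterations as in Table 2

def iterate(sigma):
    """Repeated weighted-sample-mean estimation of u0 with a fixed sigma."""
    mu = 0.0
    for _ in range(D):
        u = rng.normal(mu, sigma, M)       # sample inputs around current estimate
        J = (u - u_star) ** 2              # stand-in quadratic cost
        w = np.exp(-J / (2 * lam))         # assumed exponential weighting
        mu = np.sum(w * u) / np.sum(w)     # weighted sample mean update
    return mu

# Larger sigma contracts toward u_star in fewer iterations but with
# noisier per-iteration estimates; both runs end up near u_star.
print(iterate(2.0))
```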
|
| 425 |
+
|
| 426 |
+
### 5.3 Comparison of sampling variance update methods
|
| 427 |
+
|
| 428 |
+
Fig. 3 shows the results obtained using the geometric cooling method, as shown in (34). We plot the average of 30 independent trials and the $3\sigma$ standard deviation range in Fig. 3. The upper figure shows the result obtained using normal type MCMPC, whereas the lower figure shows the result obtained using Top1 sample MCMPC. We determined $\gamma$ in equation (34) using the following equation:
|
| 429 |
+
|
| 430 |
+
$$ \gamma = \exp \left( \frac{1}{D} \log \left( \frac{\delta}{\sigma_0} \right) \right) \quad (45) $$
|
| 431 |
+
|
| 432 |
+
where *D* is the number of iterations, $\sigma_0$ is the initial standard deviation $\sigma$ of the sampling distribution, and $\delta$ is the standard deviation $\sigma$ of the sampling distribution used in the *D*-th iteration. In this simulation, the conditions $D = 100$, $\delta = 10^{-5}$ were fixed, and the value of $\sigma_0$ was changed from 0.5 to 4.0. In the upper figure in Fig. 3, it can be confirmed that the error $e_0$ may or may not converge to 0 depending on the initial variance $\sigma_0$. On the contrary, in the lower figure in Fig. 3, the error $e_0$ converges to 0 for any initial variance. In either case, the variation with respect to the estimated sub-optimal
|
| 433 |
+
|
| 434 |
+
Fig. 3. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the geometric cooling method. (This figure shows the mean and $3\sigma$ range of 30 trials.)
|
| 435 |
+
|
| 436 |
+
input can be reduced. When normal type MCMPC was applied, the error $e_0$ did not converge to 0 when the initial variance $\sigma_0$ was set considerably small, because $\sigma_d$ converged earlier than the error $e_0$.
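Equation (45) simply chooses $\gamma$ so that the geometric schedule (34) decays from $\sigma_0$ to $\delta$ in exactly $D$ iterations, which can be verified directly:

```python
import math

D, delta, sigma0 = 100, 1e-5, 1.0
gamma = math.exp(math.log(delta / sigma0) / D)  # equation (45)

print(gamma)                # schedule rate, roughly 0.89 for these values
print(gamma ** D * sigma0)  # recovers delta after D iterations
```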
|
| 437 |
+
|
| 438 |
+
Fig. 4 shows the results obtained by applying the latest sample variance method, as shown in (35). In the upper figure, which shows the result of applying normal type MCMPC, it can be confirmed that the error $e_0$ did not converge because $\sigma$ converged earlier than the error $e_0$. In contrast, when TOP1 sample MCMPC is applied, as shown in the lower figure in Fig. 4, both the error $e_0$ and the variation in the error $e_0$ converged near 0.
|
| 439 |
+
|
| 440 |
+
The results shown in Fig. 3 and Fig. 4 indicate that the two variance update methods proposed in this study cannot improve the trade-off between convergence speed and estimation accuracy when normal type MCMPC is applied. However, when the update method shown in (34) is applied, choosing an appropriate (i.e., sufficiently large) initial variance can improve the trade-off. On the other hand, in the case of TOP1 sample MCMPC, either update method reliably converges to the optimal solution if sufficient iterations are performed. This means that TOP1 sample MCMPC has high affinity with any distribution update method.
|
| 441 |
+
|
| 442 |
+
### 5.4 Application to a nonlinear system
|
| 443 |
+
|
| 444 |
+
In this section, we show the results of applying what we have analyzed so far to a nonlinear system. The discussion of convergence for the linear system can be applied to a nonlinear system that can be linearly approximated around the optimal solution. The system model and cost function are shown in Example 2. The parameters of the controller used for this simulation are shown in Table 3. We set the initial variance to the lower bound given by:
|
| 445 |
+
|
| 446 |
+
$$ \sigma_0 \geq \frac{u_{max} - u_{min}}{6}. \quad (46) $$
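With the input constraint of Table 3, this bound evaluates to $\sigma_0 \geq 1.0$, consistent with the $\sigma_0^2 = 1.0$ listed there:

```python
u_min, u_max = -3.0, 3.0          # input constraint from Table 3
sigma0 = (u_max - u_min) / 6      # lower bound of equation (46)
print(sigma0)  # 1.0
```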
|
| 447 |
+
|
| 448 |
+
The method of determining the variance $\sigma_0$ as in equation (46) is also used in Nakatani and Date (2019). Fig. 5 shows the time responses of $\theta_1, \theta_2, \dot{\theta}_1, \dot{\theta}_2$, plotting the average value of 30 trials and a stan-
|
| 449 |
+
---PAGE_BREAK---
|
| 450 |
+
|
| 451 |
+
Fig. 4. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the latest sample variance method. (This figure shows the mean and $3\sigma$ range of 30 trials.)
|
| 452 |
+
|
| 453 |
+
dard deviation $3\sigma$. In the figure, (a) corresponds to the result of applying TOP1 sample MCMPC, and (b) to the result of applying normal type MCMPC. When the variance update methods considered in this study were applied to normal type MCMPC, none of them achieved swing-up stabilization. For this reason, the normal type MCMPC result shown in Fig. 5 was obtained without variance updating, whereas the TOP1 sample MCMPC result was obtained using the variance update method shown in equation (34). Moreover, the variance $\sigma$ used for normal type MCMPC was the one with the best performance among five simulations using $\sigma_0^2 = 0.5, 1.0, 2.0, 3.0, 4.0$. Both controllers achieved swing-up stabilization approximately 2.0 s after the start of control.
|
| 454 |
+
|
| 455 |
+
The upper figures in Fig. 6 and Fig. 7 show the input sequences. Immediately after the start of control, TOP1 sample MCMPC selects the smallest input that satisfies the input constraints. In contrast, normal type MCMPC selects a more conservative input. The lower figures in Fig. 6 and Fig. 7 show the value of the cost function calculated from the input sequences predicted in each control cycle. The smaller the value in each control cycle, the better the control performance. According to these results, TOP1 sample MCMPC demonstrates superior control performance. Moreover, this result was unchanged when the initial variance $\sigma_0$ and the variance update method were changed.
|
| 456 |
+
|
| 457 |
+
In normal type MCMPC, when the variance $\sigma$ or the variance update method was changed, the control performance deteriorated or swing-up stabilization could not be achieved, due to the trade-off relationship described in subsection 3.1.
|
| 458 |
+
|
| 459 |
+
## 6. CONCLUSION
|
| 460 |
+
|
| 461 |
+
Herein, we examined the relationship between the convergence of MCMPC and user-determinable parameters. Additionally, it was analytically verified that the variance $\sigma$ of the sampling distribution induces a trade-off between the convergence speed and the accuracy of estimation. Next, we proposed two types of variance update meth-
|
| 462 |
+
|
| 463 |
+
Table 3. Parameters (for Example 2)
|
| 464 |
+
|
| 465 |
+
<table><thead><tr><td>Name</td><td>Value</td></tr></thead><tbody><tr><td>Simulation time</td><td>5.0 (s)</td></tr><tr><td>Control cycle</td><td>100 (Hz)</td></tr><tr><td>Prediction horizon</td><td>0.8 (s)</td></tr><tr><td>Num of predictive steps</td><td>80 step</td></tr><tr><td>Num of samples</td><td>5,000</td></tr><tr><td>Num of iterations</td><td>100</td></tr><tr><td>σ<sup>2</sup><sub>0</sub> or σ<sup>2</sup></td><td>1.0</td></tr><tr><td>λ<sup>2</sup></td><td>40</td></tr><tr><td>γ</td><td>0.9</td></tr><tr><td>Input constraint</td><td>-3.0 ≤ u(t) ≤ 3.0 (V)</td></tr></tbody></table>
|
| 466 |
+
|
| 467 |
+
Fig. 5. Simulation result ((a) TOP1 sample MCMPC vs (b) Normal type MCMPC). Left side top: time response of $\theta_1$. Right side top: time response of $\theta_2$. Left side bottom: time response of $\dot{\theta}_1$. Right side bottom: time response of $\dot{\theta}_2$.
|
| 468 |
+
|
| 469 |
+
Fig. 6. Top: Simulation result of input sequences. Bottom: Cost value calculated in each control cycle. (This figure shows the mean and $3\sigma$ range of 30 trials.)
|
| 470 |
+
|
| 471 |
+
ods and TOP1 sample MCMPC to overcome this trade-off problem. Finally, we conducted numerical simulations and discussed the effects of applying the variance update methods and TOP1 sample MCMPC. We also showed an example of a numerical simulation applied to a nonlinear system and examined the applicability of the proposed analysis to the control of nonlinear systems.
|
| 472 |
+
---PAGE_BREAK---
|
| 473 |
+
|
| 474 |
+
Fig. 7. Top: Simulation result of input sequences. Bottom: Cost value calculated in each control cycle. (This figure shows the result of one trial out of 30.)
|
| 475 |
+
|
| 476 |
+
REFERENCES
|
| 477 |
+
|
| 478 |
+
Cairano, S.D. and Kolmanovsky, I.V. (2019). Automotive applications of model predictive control. In *Handbook of Model Predictive Control*, 493–527. Springer International Publishing, Cham.
|
| 479 |
+
|
| 480 |
+
Garcia, C.E., Prett, D.M., and Morari, M. (1989). Model predictive control: Theory and practice—a survey. *Automatica*, **25**, 335–348.
|
| 481 |
+
|
| 482 |
+
Kappen, H.J. (2007). An introduction to stochastic control theory, path integrals and reinforcement learning. *Proc. 9th Granada Seminar on Computational Physics: Cooperative Behavior in Neural Systems*, 149–181.
|
| 483 |
+
|
| 484 |
+
Nanno, M. and Ohtsuka, T. (2010). Nonlinear model predictive control for vehicle collision avoidance using C/GMRES algorithm. Presented at the 2010 IEEE International Conference on Control Applications, Yokohama, Japan, September 8–10.
|
| 485 |
+
|
| 486 |
+
Nakatani, S. and Date, H. (2019). Swing up control of inverted pendulum on a cart with collision by monte carlo model predictive control. *2019 58th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)*, 1050–1055.
|
| 487 |
+
|
| 488 |
+
Namekawa, T. (2012). Distributed and predictive control for smart grids. *Journal of the Society of Instrument and Control Engineers*, **51**, 62–68.
|
| 489 |
+
|
| 490 |
+
Ohtsuka, T. (2004). A continuation/GMRES method for fast computation of nonlinear receding horizon control. *Automatica*, **40**, 563–574.
|
| 491 |
+
|
| 492 |
+
Ohyama, S. and Date, H. (2017). Parallelized nonlinear model predictive control on GPU. *2017 11th Asian Control Conference (ASCC)*, Gold Coast, QLD, 1620–1625.
|
| 493 |
+
|
| 494 |
+
Rosen, E.B. and Nakano, R. (1994). Simulated annealing: Basics and recent topics on simulated annealing [in japanese]. *Journal of Japanese Society for Artificial Intelligence*, 365–372.
|
| 495 |
+
|
| 496 |
+
Satoh, S., Kappen, H.J., and Saeki, M. (2017). An iterative method for nonlinear stochastic optimal control based on path integrals. *IEEE Transactions on Automatic Control*, **62**, 262–276.
|
| 497 |
+
|
| 498 |
+
Williams, G., Aldrich, A., and Theodorou, E. (2015). Model predictive path integral control using covariance variable importance sampling. arXiv preprint arXiv:1509.01149.
|
| 499 |
+
|
| 500 |
+
Williams, G., Drews, P., Goldfain, B., Rehg, J.M., and Theodorou, E.A. (2016). Aggressive driving with model predictive path integral control. *IEEE International Conference on Robotics and Automation (ICRA)*, Stockholm, Sweden, 1433–1440.
|
| 501 |
+
|
| 502 |
+
Appendix A. DERIVATION OF SAMPLE MEAN EXPECTATION AND VARIANCE OF SAMPLE MEAN
|
| 503 |
+
|
| 504 |
+
In this section, we describe how to derive the analytical solution (21) from Eq. (18). Substituting the results of Eq. (15) and Eq. (17) into Eq. (18), it can be transformed as:
|
| 505 |
+
|
| 506 |
+
$$
\begin{align*}
E(\tilde{\mathbf{u}}) &= \frac{1}{\sqrt{2\pi}\sigma^2} \int \hat{\mathbf{u}} \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}}-\mathbf{u}^*)^T \hat{\mathcal{Q}} (\hat{\mathbf{u}}-\mathbf{u}^*) - \frac{1}{2}(\hat{\mathbf{u}}-\mathbf{u}^-)^T \Sigma^{-1} (\hat{\mathbf{u}}-\mathbf{u}^-)\right) d\hat{\mathbf{u}} \\
&= \bar{C}_1 \int \hat{\mathbf{u}} \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}}-\mathbf{u}^*)^T \hat{\mathcal{Q}} (\hat{\mathbf{u}}-\mathbf{u}^*) - \frac{1}{2\sigma^2}(\hat{\mathbf{u}}-\mathbf{u}^-)^T (\hat{\mathbf{u}}-\mathbf{u}^-)\right) d\hat{\mathbf{u}} \\
&= \bar{C}_2 \int \hat{\mathbf{u}} \exp\left(-\frac{1}{2}\hat{\mathbf{u}}^T \left(\frac{1}{\lambda^2}\hat{\mathcal{Q}} + \frac{1}{\sigma^2}I\right)\hat{\mathbf{u}} + \left(\frac{1}{\lambda^2}(\mathbf{u}^*)^T \hat{\mathcal{Q}} + \frac{1}{\sigma^2}(\mathbf{u}^-)^T\right)\hat{\mathbf{u}}\right) d\hat{\mathbf{u}} \\
&= \bar{C}_3 \int \hat{\mathbf{u}} \exp\left(-\frac{1}{2\lambda^2\sigma^2}\left[\hat{\mathbf{u}}^T (\sigma^2\hat{\mathcal{Q}} + \lambda^2 I)\hat{\mathbf{u}} - 2\left(\sigma^2(\mathbf{u}^*)^T \hat{\mathcal{Q}} + \lambda^2(\mathbf{u}^-)^T\right)\hat{\mathbf{u}}\right]\right) d\hat{\mathbf{u}} \\
&= \bar{C}_4 \int \hat{\mathbf{u}} \exp\left(-\frac{1}{2\lambda^2\sigma^2}(\hat{\mathbf{u}}-\bar{\mathbf{u}})^T (\sigma^2\hat{\mathcal{Q}} + \lambda^2 I)(\hat{\mathbf{u}}-\bar{\mathbf{u}})\right) d\hat{\mathbf{u}}
\end{align*}
\quad (\text{A.1})
$$
|
| 516 |
+
|
| 517 |
+
where $\bar{C}_1, \bar{C}_2, \bar{C}_3$, and $\bar{C}_4$ denote the constant factors collected at each step when the exponent is arranged into a quadratic form in $\hat{\mathbf{u}}$ plus terms independent of $\hat{\mathbf{u}}$. Then, we define the exponent on the fourth line of Eq. (A.1) as $g$, and obtain its stationary point by partial differentiation of $g$ with respect to $\hat{\mathbf{u}}$:
|
| 518 |
+
|
| 519 |
+
$$
|
| 520 |
+
\left. \frac{\partial g}{\partial \hat{\mathbf{u}}} \right|_{\hat{\mathbf{u}}=\bar{\mathbf{u}}} = (\sigma^2 \hat{\mathcal{Q}} + \lambda^2 I) \bar{\mathbf{u}} - (\sigma^2 \hat{\mathcal{Q}} \mathbf{u}^* + \lambda^2 \mathbf{u}^-) = 0. \quad (\text{A.2})
|
| 521 |
+
$$
|
| 522 |
+
|
| 523 |
+
Here, solving Eq. (A.2) for $\bar{\mathbf{u}}$ agrees with the result of Eq. (21).
|
| 524 |
+
|
| 525 |
+
Next, we find the variance of the sample mean $\tilde{\mathbf{u}}$ using Eq. (A.1). Let $\hat{\mathbf{u}}$ be a random variable that follows a multidimensional normal distribution with expected value $\bar{\mathbf{u}}$ and covariance $\Sigma_S$. From the PDF of this distribution and a coefficient comparison with the integrand on the fifth line of Eq. (A.1), the covariance $\Sigma_S$ satisfies:
|
| 526 |
+
|
| 527 |
+
$$
|
| 528 |
+
\frac{1}{2\lambda^2\sigma^2} (\sigma^2\hat{\mathcal{Q}} + \lambda^2 I) = \frac{1}{2}\Sigma_S^{-1} \qquad (\text{A.3})
|
| 529 |
+
$$
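In the scalar case, the mean and variance implied by (A.2) and (A.3) can be checked numerically against the weighted Gaussian density. This is a sketch with arbitrary illustrative values: $\hat{\mathcal{Q}}$ reduces to a scalar `q`, and the prior mean is written `u_minus`:

```python
import numpy as np

q, lam, sig = 2.0, 1.5, 0.8       # illustrative scalar Q-hat, lambda, sigma
u_star, u_minus = 1.0, -0.5       # illustrative optimum and prior mean

# Grid evaluation of the weighted density (normalization cancels in the ratios).
u = np.linspace(-10, 10, 200001)
dens = np.exp(-q * (u - u_star) ** 2 / (2 * lam ** 2)
              - (u - u_minus) ** 2 / (2 * sig ** 2))

mean_num = np.sum(u * dens) / np.sum(dens)
var_num = np.sum((u - mean_num) ** 2 * dens) / np.sum(dens)

# Closed forms read off from the scalar versions of (A.2) and (A.3).
mean_cf = (sig ** 2 * q * u_star + lam ** 2 * u_minus) / (sig ** 2 * q + lam ** 2)
var_cf = lam ** 2 * sig ** 2 / (sig ** 2 * q + lam ** 2)

print(abs(mean_num - mean_cf) < 1e-6, abs(var_num - var_cf) < 1e-6)
```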
|
samples/texts_merged/503850.md
ADDED
|
@@ -0,0 +1,169 @@
|
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
QUESTION 1
|
| 5 |
+
|
| 6 |
+
$A$ = the complement of $\angle B$ degrees
|
| 7 |
+
|
| 8 |
+
$B$ = the supplement of $\angle C$ degrees
|
| 9 |
+
|
| 10 |
+
$C$ = the supplement of the complement of $\angle D$ degrees
|
| 11 |
+
|
| 12 |
+
$D$ = the central angle of a circle with radius 4 with corresponding arc length of $\pi$
|
| 13 |
+
|
| 14 |
+
$$\text{Find } A + B + C + D$$
|
| 15 |
+
---PAGE_BREAK---
|
| 16 |
+
|
| 17 |
+
QUESTION 2
|
| 18 |
+
|
| 19 |
+
A = the number of diagonals of an icosagon (20 sided polygon)
|
| 20 |
+
|
| 21 |
+
B = the area of an isosceles trapezoid with base lengths 4 and 28 and a height of 5
|
| 22 |
+
|
| 23 |
+
C = the height of a rectangular prism with a length of 20, a width of 9, and a space diagonal of 25
|
| 24 |
+
|
| 25 |
+
D = the volume of a hemisphere with radius 6
|
| 26 |
+
|
| 27 |
+
Find $A+B+\frac{D}{C}$
|
| 28 |
+
---PAGE_BREAK---
|
| 29 |
+
|
| 30 |
+
QUESTION 3
|
| 31 |
+
|
| 32 |
+
Puneet lives in a box with dimensions $20ft \times 15ft \times 10ft$. There is a door with dimensions $7ft \times 4ft$. Each can of paint can cover $100 ft^2$.
|
| 33 |
+
|
| 34 |
+
A = the number of paint cans needed to paint the door
|
| 35 |
+
|
| 36 |
+
B = the number of paint cans needed to paint Puneet's house given that he paints the entire surface area of the house
|
| 37 |
+
|
| 38 |
+
C = the length of the longest sandwich Puneet can fit into his box
|
| 39 |
+
|
| 40 |
+
D = the ratio of the volume of the box to the surface area of the box
|
| 41 |
+
|
| 42 |
+
Find $AC + BD$
|
| 43 |
+
---PAGE_BREAK---
|
| 44 |
+
|
| 45 |
+
QUESTION 4
|
| 46 |
+
|
| 47 |
+
A semicircle is inscribed in an equilateral triangle so that the diameter rests on one side of the triangle and is tangent to
|
| 48 |
+
the other two sides. Let A be the radius of the semicircle when the side lengths of the triangle equals 24.
|
| 49 |
+
|
| 50 |
+
Two poles of height 6 ft. and 8 ft. are located 12 ft. away from each other. Jenny attaches two cables that connect the top of one pole to the bottom of the other. Let B be the height of the intersection of the two cables from the ground.
|
| 51 |
+
|
| 52 |
+
Jenny likes pie and $\pi$. She buys herself a two-dimensional pie with radius 14 in. Let C be the area of her pie in $in^2$.
|
| 53 |
+
|
| 54 |
+
Find $A + B + C$.
|
| 55 |
+
---PAGE_BREAK---
|
| 56 |
+
|
| 57 |
+
QUESTION 5
|
| 58 |
+
|
| 59 |
+
A = the length of the inradius of a triangle with side lengths 7, 8, and 9
|
| 60 |
+
|
| 61 |
+
B = the length of circumradius of a triangle with side lengths 10, 10, and 14
|
| 62 |
+
|
| 63 |
+
C = the area of a triangle with side lengths 14, 60, and 66
|
| 64 |
+
|
| 65 |
+
D = the area of a triangle with side lengths 12 and 15 and included angle of 60°
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
\text{Hint: Area} = \frac{1}{2} ab \sin C \text{ where C is the angle between } a \text{ and } b
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
Find $A\sqrt{5} + B\sqrt{51} - \frac{C}{\sqrt{2}} + \frac{D}{\sqrt{3}}$
|
| 72 |
+
---PAGE_BREAK---
|
| 73 |
+
|
| 74 |
+
**QUESTION 6**
|
| 75 |
+
|
| 76 |
+
A = the sum of the coordinates of the centroid of a triangle with vertices (5, 7), (-1, 5), and (8, 0)
|
| 77 |
+
|
| 78 |
+
B = the slope of the median from vertex B of a triangle with vertices A(31, 7), B(19, 21), C(25, 12)
|
| 79 |
+
|
| 80 |
+
C = the measure of ∠D in degrees in △DOG if the opposite side length is $4\sqrt{2}$, ∠G equals 45° and DO equals 8
|
| 81 |
+
|
| 82 |
+
Find A+B+C.
|
| 83 |
+
---PAGE_BREAK---
|
| 84 |
+
|
| 85 |
+
QUESTION 7
|
| 86 |
+
|
| 87 |
+
(Figure not drawn to scale. A quadrilateral is drawn over two parallel lines.)
|
| 88 |
+
|
| 89 |
+
What is the sum of $\angle B$ and $\angle F$ if $\angle A = 42^\circ$, $\angle C = 79^\circ$, $\angle E = 135^\circ$, and $\angle D = 51^\circ$?
|
| 90 |
+
---PAGE_BREAK---
|
| 91 |
+
|
| 92 |
+
QUESTION 8
|
| 93 |
+
|
| 94 |
+
Two spheres are inscribed in a rectangular box so that each sphere is tangent to five sides of the box and the other sphere.
|
| 95 |
+
If the radius of each of the spheres is 4 in, then the volume of the box is A in³.
|
| 96 |
+
|
| 97 |
+
If a frustum of the cone has radii 6 in and 8 in and a height of 4 in, then the lateral surface area is Bπ in².
|
| 98 |
+
|
| 99 |
+
An ant is sitting on the center of the top face of a right, cylindrical can of soup with radius 4 in and height 6π in. The ant wants to get down to the ground so it takes the shortest path to the edge of the face and climbs down the side of the can. The ant spirals down the can, rotating around once and arriving at the point directly underneath his position on the top edge. The length of the path the ant took from his original position to the ground is C in.
|
| 100 |
+
|
| 101 |
+
Find A+B+C.
|
| 102 |
+
---PAGE_BREAK---
|
| 103 |
+
|
| 104 |
+
QUESTION 9
|
| 105 |
+
|
| 106 |
+
Add the values in the parentheses to $x$ if they are true. Subtract them from $x$ if they are false. Begin with $x = 0$.
|
| 107 |
+
|
| 108 |
+
(5) The incenter of a triangle is the center of its inscribed circle
|
| 109 |
+
|
| 110 |
+
(-3) The circumcenter of a triangle is equidistant from the sides of the triangle
|
| 111 |
+
|
| 112 |
+
(-2) The orthocenter is the intersection of the altitudes of a triangle
|
| 113 |
+
|
| 114 |
+
(7) The centroid is the intersection of the medians of a triangle
|
| 115 |
+
|
| 116 |
+
(10) Euler's line is made up of the orthocenter, circumcenter, and the incenter
|
| 117 |
+
|
| 118 |
+
After performing these operations, what is $x$?
|
| 119 |
+
---PAGE_BREAK---
|
| 120 |
+
|
| 121 |
+
QUESTION 10
|
| 122 |
+
|
| 123 |
+
A cylinder with radius 3 and height $\frac{9}{4}$ is inscribed in a cone with radius 8.
|
| 124 |
+
|
| 125 |
+
$A$ = the volume of cylinder
|
| 126 |
+
|
| 127 |
+
$B$ = the height of the cone
|
| 128 |
+
|
| 129 |
+
$C$ = the volume of the cone
|
| 130 |
+
|
| 131 |
+
Find $\frac{AC}{B}$.
|
| 132 |
+
---PAGE_BREAK---
|
| 133 |
+
|
| 134 |
+
QUESTION 11
|
| 135 |
+
|
| 136 |
+
Siddarth is obsessed with the song Bang by Griana Arande. Jeewoo, unfortunately, has bad music taste and likes All the Single Men by Jeyonce. The song Bang by Griana Arande is 3 minutes long. All the Single Men by Jeyonce is also 3 minutes long. If Siddarth starts to listen to his song at a random time between 12:00 pm and 12:30 pm, and Jeewoo starts to listen to All the Single Men by Jeyonce at a random time between 12:00 and 12:30 p.m., what is the probability that their songs are both playing at a given time between 12:00 and 12:30 p.m.?
|
| 137 |
+
---PAGE_BREAK---
|
| 138 |
+
|
| 139 |
+
QUESTION 12
|
| 140 |
+
|
| 141 |
+
A = the number of sides of an undecagon
|
| 142 |
+
|
| 143 |
+
B = the number of faces of a hexahedron
|
| 144 |
+
|
| 145 |
+
C = the number of vertices of a figure with 12 edges and 8 faces
|
| 146 |
+
|
| 147 |
+
D = the number of space diagonals in a dodecahedron
|
| 148 |
+
|
| 149 |
+
Find (A+D) - (B+C)
|
| 150 |
+
---PAGE_BREAK---
|
| 151 |
+
|
| 152 |
+
QUESTION 13
|
| 153 |
+
|
| 154 |
+
A = sin 60°
|
| 155 |
+
|
| 156 |
+
B = sin 30°
|
| 157 |
+
|
| 158 |
+
C = cos 45°
|
| 159 |
+
|
| 160 |
+
D = tan 60°
|
| 161 |
+
|
| 162 |
+
Find ABCD.
|
| 163 |
+
---PAGE_BREAK---
|
| 164 |
+
|
| 165 |
+
QUESTION 14
|
| 166 |
+
|
| 167 |
+
(The figure is not drawn to scale.)
|
| 168 |
+
|
| 169 |
+
The lengths of *a* and *b* are 6 and 4, respectively. How many possible combinations of (*c*, *d*) exist if *c* and *d* are integer lengths?
|
samples/texts_merged/5396754.md
ADDED
|
@@ -0,0 +1,251 @@
|
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
Monte Carlo Sampling in Path
|
| 5 |
+
Space: Calculating Time Correlation
|
| 6 |
+
Functions by Transforming
|
| 7 |
+
Ensembles of Trajectories
|
| 8 |
+
|
| 9 |
+
Cite as: AIP Conference Proceedings 690, 192 (2003); https://doi.org/10.1063/1.1632129
|
| 10 |
+
Published Online: 06 November 2003
|
| 11 |
+
|
| 12 |
+
Christoph Dellago, and Phillip L. Geissler
|
| 13 |
+
|
| 14 |
+
|
| 21 |
+
---PAGE_BREAK---
|
| 22 |
+
|
| 23 |
+
Monte Carlo Sampling in Path Space:
|
| 24 |
+
Calculating Time Correlation Functions
|
| 25 |
+
by Transforming Ensembles of Trajectories
|
| 26 |
+
|
| 27 |
+
Christoph Dellago* and Phillip L. Geissler†
|
| 28 |
+
|
| 29 |
+
*Institute for Experimental Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria
|
| 30 |
+
|
| 31 |
+
\^\textsuperscript{†}Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139
|
| 32 |
+
|
| 33 |
+
**Abstract.** Computational studies of processes in complex systems with metastable states are often complicated by a wide separation of time scales. Such processes can be studied with transition path sampling, a computational methodology based on an importance sampling of reactive trajectories capable of bridging this time scale gap. Within this perspective, ensembles of trajectories are sampled and manipulated in close analogy to standard techniques of statistical mechanics. In particular, the population time correlation functions appearing in the expressions for transition rate constants can be written in terms of free energy differences between ensembles of trajectories. Here we calculate such free energy differences with thermodynamic integration, which, in effect, corresponds to reversibly changing between ensembles of trajectories.
INTRODUCTION

Transition path sampling is a computational technique developed by us and others to study rare events in complex systems [1, 2, 3]. Although rare, such events are crucially important in many condensed matter systems. Nucleation of first order phase transitions, transport in solids, chemical reactions in solution, and protein folding all occur on time scales which are long compared to basic molecular motions. Transition path sampling, which is based on an importance sampling in trajectory space, can provide insights into the mechanism and kinetics of processes involving dynamical bottlenecks. In the following we will give a brief overview of this methodology, focusing on the calculation of reaction rate constants. In this framework reaction rates are related to the reversible work required to manipulate ensembles of trajectories. As a consequence, rate constants can be calculated using free energy estimation methods familiar from equilibrium statistical mechanics, such as umbrella sampling and thermodynamic integration. For an in-depth treatment of all aspects of transition path sampling we refer the reader to the review articles [2] and [3].
In the path sampling approach dynamical pathways of length $t$ are represented by ordered sequences of $L = t/\Delta t + 1$ states, $x(t) \equiv \{x_0, x_{\Delta t}, x_{2\Delta t}, \dots, x_t\}$. Consecutive states are separated by a time increment $\Delta t$. Such dynamical pathways can be deterministic trajectories as generated by Newtonian dynamics or stochastic trajectories as constructed from Langevin dynamics or from Monte Carlo simulations. For Markovian single step transition probabilities $p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t})$ the statistical weight $\mathcal{P}[x(t)]$ of a particular trajectory $x(t)$ is

$$ \mathcal{P}[x(t)] = \rho(x_0) \prod_{i=0}^{L-2} p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t}), \quad (1) $$

where $\rho(x_0)$ is the distribution of initial states $x_0$. In many applications, $\rho(x_0)$ will be an equilibrium distribution such as the canonical distribution, but non-equilibrium distributions of initial conditions are possible as well.
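For a Markov chain, Eq. (1) is a simple product of single-step probabilities. A minimal Python sketch (the two-state chain, `rho0`, and `p` below are illustrative toys, not the system studied in this paper):

```python
def path_weight(path, rho0, p):
    """Statistical weight P[x(t)] of a discrete Markov path, Eq. (1):
    the initial density rho0(x_0) times the product of single-step
    transition probabilities p(x_i -> x_{i+1})."""
    w = rho0(path[0])
    for a, b in zip(path[:-1], path[1:]):
        w *= p(a, b)
    return w

# Toy two-state chain (states 0 and 1); all values are illustrative.
rho0 = lambda x: 0.5                      # uniform initial distribution
p = lambda a, b: 0.9 if a == b else 0.1   # "sticky" transition matrix

print(path_weight([0, 0, 1, 1], rho0, p))  # 0.5 * 0.9 * 0.1 * 0.9
```

Note that a path with $L$ states involves only $L-1$ transition factors, which is why the product in Eq. (1) runs to $L-2$.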
In applying transition path sampling one is usually interested in finding dynamical pathways connecting stable (or metastable) states, which we name *A* and *B*. Then, the probability of a *reactive* pathway, i.e., of a pathway starting in *A* and ending in *B*, is

$$ \mathcal{P}_{AB}[x(t)] = h_A(x_0) \mathcal{P}[x(t)] h_B(x_t) / Z_{AB}(t), \quad (2) $$

where $h_A(x)$ and $h_B(x)$ are the population functions for regions *A* and *B*. That is, $h_A(x)$ is 1 if $x$ is in *A* and 0 otherwise, and $h_B(x)$ is defined analogously. The factor $Z_{AB}(t)$,

$$ Z_{AB}(t) = \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t), \quad (3) $$

normalizes the reactive path probability, and the notation $\int \mathcal{D}x(t)$ indicates an integration over all time slices of the pathway. The quantity $Z_{AB}(t)$ can be viewed as a partition function characterizing the ensemble of all reactive pathways. This analogy between conventional equilibrium statistical mechanics and the statistics of trajectories will be important in the discussion of reaction kinetics in the next section. The distribution $\mathcal{P}_{AB}[x(t)]$, which weights trajectories in the *transition path ensemble*, is a statistical description of all dynamical pathways connecting regions *A* and *B*.

To sample the transition path ensemble we have developed several Monte Carlo simulation techniques [4, 5]. In these algorithms, which are importance sampling procedures in trajectory space, one proceeds by generating trial pathways from existing trajectories via what we call the shooting and shifting method [4]. Newly generated trial pathways are then accepted with a probability obeying the detailed balance condition, which guarantees that pathways are sampled according to their weight in the transition path ensemble. Detailed balance can be satisfied by choosing an acceptance probability according to the celebrated Metropolis rule [6]. Using such an acceptance probability in conjunction with the shooting and shifting algorithms one can efficiently explore trajectory space and harvest reactive pathways with their proper weight. Statistical analysis of the harvested pathways can then provide information on the kinetics of the transition. The basis for this type of analysis is discussed in the following section.
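The Metropolis acceptance step itself is generic: a trial path is accepted with probability $\min(1, \mathcal{P}[\text{new}]/\mathcal{P}[\text{old}])$. A hedged sketch using log-weights for numerical stability (generating the trial path by shooting or shifting is system-specific and omitted here):

```python
import math
import random

def metropolis_accept(logw_old, logw_new, rng=random):
    """Metropolis rule: accept the trial path with probability
    min(1, P[new]/P[old]), written with log path weights so that
    very small weights do not underflow."""
    return rng.random() < math.exp(min(0.0, logw_new - logw_old))
```

A trial path with a larger weight than the current one is always accepted; a much less probable trial path is almost never accepted, which is what preserves the transition path ensemble under repeated moves.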
REACTION RATES

The time correlation function of state populations

$$ C(t) = \frac{\langle h_A(x_0) h_B(x_t) \rangle}{\langle h_A(x_0) \rangle} \quad (4) $$

provides a link between the microscopic dynamics of the system and the phenomenological description of the kinetics in terms of the forward and backward reaction rate constants $k_{AB}$ and $k_{BA}$, respectively [7]. If the reaction time $\tau_{\text{rxn}} = (k_{AB} + k_{BA})^{-1}$ is significantly larger than the time $\tau_{\text{mol}}$ necessary to cross the barrier top, $C(t)$ approaches its long time value exponentially after the short molecular transient time $\tau_{\text{mol}}$:

$$ C(t) \approx \langle h_B \rangle (1 - \exp\{-t/\tau_{\text{rxn}}\}). \quad (5) $$

For $\tau_{\text{mol}} < t \ll \tau_{\text{rxn}}$ the population correlation function $C(t)$ grows linearly:

$$ C(t) \approx k_{AB} t. \quad (6) $$

Thus, the forward reaction rate constant can be determined from the slope of $C(t)$ in this time regime.
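Assuming $C(t)$ has already been computed on a grid of times inside the linear regime, extracting $k_{AB}$ is a one-parameter fit. A minimal sketch (the data below are synthetic, constructed with a known slope):

```python
def forward_rate(ts, cs):
    """Estimate k_AB as the least-squares slope of C(t) in the linear
    regime tau_mol < t << tau_rxn, where C(t) ~ k_AB * t (Eq. 6)."""
    n = len(ts)
    tbar = sum(ts) / n
    cbar = sum(cs) / n
    num = sum((t - tbar) * (c - cbar) for t, c in zip(ts, cs))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

ts = [0.2, 0.4, 0.6, 0.8]
cs = [0.01 * t for t in ts]   # synthetic data with k_AB = 0.01
print(forward_rate(ts, cs))
```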
To evaluate $C(t)$ in the transition path sampling framework we rewrite it in terms of sums over trajectories:

$$ C(t) = \frac{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t)}{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)]} = \frac{Z_{AB}(t)}{Z_A}. \quad (7) $$

The above expression can be viewed as the ratio between the "partition functions" for two different path ensembles: one, $Z_A$, in which pathways start in A and end anywhere, and one, $Z_{AB}(t)$, in which pathways start in A and end in B. This perspective suggests that we determine the correlation function $C(t)$ via calculation of $\Delta F(t) \equiv F_{AB}(t) - F_A = -\ln Z_{AB}(t) + \ln Z_A$, in effect a difference of free energies. From the free energy difference one can then immediately determine the time correlation function, $C(t) = \exp[-\Delta F(t)]$. The free energy difference $\Delta F(t)$ can be viewed as the work necessary to reversibly change from a path ensemble with free final points $x_t$ to a path ensemble in which the final points $x_t$ are required to reside in region B.
In principle, one can determine the reaction rate constant $k_{AB}$ by calculating the time correlation function $C(t)$ at various times and taking a numerical derivative with respect to $t$. This procedure is, however, numerically costly since it requires repeated free energy calculations. Fortunately, the reversible work $\Delta F(t')$ for a given time $t'$ can be written as the sum of the reversible work $\Delta F(t)$ for a different time $t$ and the reversible work $F(t',t)$ necessary to change $t$ to $t'$ [2]:

$$ \Delta F(t') = \Delta F(t) + F(t', t). \quad (8) $$

This reversible work $F(t',t)$ can then be calculated for all times between 0 and $t'$ in a single transition path sampling simulation, as described in detail in Ref. [2]. In the following sections we will focus on ways to determine the reversible work $\Delta F(t)$ for a single time $t$.
MODEL

To illustrate the numerical methods presented in this paper we have used them to calculate the time correlation function $C(t)$ for isomerizations occurring in a simple diatomic molecule immersed in a bath of purely repulsive particles, shown schematically in the left hand side panel of Fig. 1. A very similar model has been studied by Straub, Borkovec, and Berne [8]. This two dimensional model consists of $N$ point particles of unit mass interacting via the Weeks-Chandler-Andersen potential [9],
$$V_{\text{WCA}}(r) = \begin{cases} 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right] + \epsilon & \text{for } r \le r_{\text{WCA}} \equiv 2^{1/6}\sigma, \\ 0 & \text{for } r > r_{\text{WCA}}. \end{cases} \quad (9)$$

Here, $r$ is the interparticle distance, and $\epsilon$ and $\sigma$ specify the strength and the interaction radius of the potential, respectively. In addition, two of the $N$ particles are bound to each other by a double well potential

$$V_{\text{dw}}(r) = h \left[ 1 - \frac{(r - r_{\text{WCA}} - w)^2}{w^2} \right]^2, \quad (10)$$

where $h$ denotes the height of the potential energy barrier separating the potential energy wells located at $r_{\text{WCA}} = 2^{1/6}\sigma$ and $r_{\text{WCA}} + 2w$.
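Equations (9) and (10) are straightforward to code. A sketch in reduced units ($\epsilon = \sigma = 1$; the parameter defaults follow the caption of Fig. 1). Note that $V_{\text{WCA}}$ vanishes continuously at $r_{\text{WCA}}$, that $V_{\text{dw}}$ is zero at both wells, and that its barrier at $r_{\text{WCA}} + w$ has height $h$:

```python
def v_wca(r, eps=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential, Eq. (9): purely repulsive,
    truncated and shifted LJ, zero beyond r_wca = 2**(1/6) * sigma."""
    r_wca = 2 ** (1 / 6) * sigma
    if r > r_wca:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6) + eps

def v_dw(r, h=6.0, w=0.5, sigma=1.0):
    """Double well bonding potential, Eq. (10): minima at r_wca and
    r_wca + 2w, barrier of height h at r_wca + w."""
    r_wca = 2 ** (1 / 6) * sigma
    return h * (1 - (r - r_wca - w) ** 2 / w ** 2) ** 2
```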
**FIGURE 1.** (a) Schematic representation of the diatomic molecule (dark grey disks) held together by a spring immersed in the WCA fluid (light grey disks). (b) Intramolecular (solid line) and intermolecular (dashed line) potential energy. The parameters determining the height and width of the double well potential are $h = 6\epsilon$ and $w = 0.5\sigma$. The thin lines denote the "drawbridge" constraining potential used in the thermodynamic integration and are labelled from $\lambda = 10$ to $\lambda = 100$ according to their slopes. The limits $r_A$ and $r_B$ for states A and B, respectively, are shown as vertical dotted lines.

The diatomic molecule held together by the potential shown in Fig. 1 can reside in two states. In the *contracted* state the interatomic distance $r$ fluctuates around $r_{\text{WCA}}$, while in the *expanded* state $r$ is close to $r_{\text{WCA}} + 2w$. Due to interactions with the solvent particles, transitions between the two states can occur provided the total energy of the system is sufficiently high. Collisions with solvent particles provide the energy for activation as well as the dissipation necessary to stabilize the molecule in one of the wells after a barrier crossing has occurred. For high barriers, transitions between the extended and the contracted state are rare. In all calculations the system is defined to be in state A if the interatomic distance $r < r_A = 1.35\sigma$ and in state B if $r > r_B = 1.45\sigma$. These limiting values are denoted by vertical dotted lines in the right hand side panel of Fig. 1. The Newtonian equations of motion are integrated with the velocity Verlet algorithm [10] using a time step of $\Delta t = 0.002(m\sigma^2/\epsilon)^{1/2}$.
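A sketch of a single velocity-Verlet step as used for the propagation (the harmonic oscillator test problem below is illustrative, not the WCA system):

```python
def velocity_verlet(x, v, force, dt, m=1.0):
    """One velocity-Verlet step: update positions with the current
    acceleration, then velocities with the average of old and new
    accelerations."""
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

# Harmonic oscillator, unit frequency: energy should stay near 0.5.
force = lambda x: -x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = velocity_verlet(x, v, force, 0.002)
print(0.5 * v * v + 0.5 * x * x)
```

The near-conservation of the total energy over many steps is the usual practical check on the integrator and on the choice of $\Delta t$.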
THERMODYNAMIC INTEGRATION

In Ref. [4] we determined the time correlation function $C(t)$ with an umbrella sampling approach. Here we show how the time correlation function $C(t)$ from Equ. (7) can be calculated with a strategy analogous to thermodynamic integration, a method used to estimate the free energy difference between ensembles [11, 12]. In a conventional thermodynamic integration one introduces a coupling parameter $\lambda$ which, when changed from $\lambda_i$ to $\lambda_f$, transforms one ensemble into the other. Derivatives of the free energy with respect to $\lambda$ calculated at intermediate values of $\lambda$ can then be used to compute the free energy difference by numerical integration from $\lambda_i$ to $\lambda_f$.

Thermodynamic integration can also be used to calculate free energy differences between path ensembles. Such a strategy has in effect been used by S. Sun [13] to efficiently estimate free energy differences in the fast switching method recently proposed by Jarzynski [14, 15, 16, 17, 18]. For our purpose we introduce a function $\Theta(x, \lambda)$ depending on the configuration $x$ and on a parameter $\lambda$. The dependence on $\lambda$ is chosen such that $\Theta(x, \lambda_i) = 1$ and $\Theta(x, \lambda_f) = h_B(x)$. Using this function $\Theta$ one can then continuously transform an ensemble of paths starting in A and ending anywhere into an ensemble of pathways beginning in A and ending in B.
Introducing the partition function

$$Z(t, \lambda) \equiv \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda), \quad (11)$$

we generalize the time correlation function $C(t)$ from Equ. (7) to the ratio between the partition functions for $\lambda$ and $\lambda_i$:

$$C(t, \lambda) = Z(t, \lambda) / Z(t, \lambda_i). \quad (12)$$

For $\lambda = \lambda_f$ this function is just the correlation function $C(t) = \exp(-\Delta F)$ we wish to determine. We calculate the reversible work $F(t, \lambda) = -\ln Z(t, \lambda)$ by first taking its derivative with respect to $\lambda$:

$$\frac{\partial F(t, \lambda)}{\partial \lambda} = -\frac{\partial \ln Z(t, \lambda)}{\partial \lambda} = -\frac{1}{Z(t, \lambda)} \frac{\partial Z(t, \lambda)}{\partial \lambda}. \quad (13)$$

Using the definition of $Z$ we obtain:

$$\frac{\partial F(t, \lambda)}{\partial \lambda} = - \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \frac{\partial \Theta(x_t, \lambda)}{\partial \lambda} / Z(t, \lambda). \quad (14)$$

To bring this expression into a form amenable to a path sampling simulation we define an "energy" $U(x, \lambda)$ related to the function $\Theta$ by

$$U(x, \lambda) = -\ln \Theta(x, \lambda). \quad (15)$$
Inserting the above expression into Eq. (14) we finally obtain:

$$ \frac{\partial F(t, \lambda)}{\partial \lambda} = \frac{1}{Z(t, \lambda)} \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) \frac{\partial U(x_t, \lambda)}{\partial \lambda} = \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. \quad (16) $$

Here, $\langle \cdots \rangle_{\lambda}$ denotes a path average carried out in the ensemble described by

$$ \mathcal{P}[x(t), \lambda] \equiv h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) / Z(t, \lambda). \quad (17) $$

This is the ensemble of all pathways starting in region A with a bias $\Theta(x_t, \lambda)$ acting on $x_t$, the last time slice of the pathway. The biasing function $\Theta(x, \lambda)$ is designed to pull the path endpoints gradually towards region B as $\lambda$ is increased and to finally confine them to region B for $\lambda = \lambda_f$. From derivatives $\partial F(t, \lambda)/\partial \lambda$ computed for several values of $\lambda$ in the range between $\lambda_i$ and $\lambda_f$ one can then calculate the reversible work $\Delta F(t) = F(t, \lambda_f) - F(t, \lambda_i)$ by integration:

$$ \Delta F(t) = \int_{\lambda_i}^{\lambda_f} d\lambda \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. \quad (18) $$

The correlation function we originally set out to compute is then simply given by $C(t) = \exp[-\Delta F(t)]$.
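Numerically, Eq. (18) is a one-dimensional quadrature of the measured averages $\langle \partial U/\partial \lambda \rangle_{\lambda}$ over the $\lambda$ grid. A minimal sketch using the trapezoidal rule (the grid and the constant integrand below are artificial placeholders for simulation output):

```python
import math

def reversible_work(lambdas, dudlam_means):
    """Trapezoidal integration of <dU/dlambda>_lambda over the grid of
    lambda values (Eq. 18), giving Delta F(t)."""
    df = 0.0
    for i in range(len(lambdas) - 1):
        df += 0.5 * (dudlam_means[i] + dudlam_means[i + 1]) \
              * (lambdas[i + 1] - lambdas[i])
    return df

lam = [0.0, 25.0, 50.0, 75.0, 100.0]
means = [0.02] * len(lam)        # constant integrand => Delta F = 0.02 * 100
df = reversible_work(lam, means)
print(df, math.exp(-df))         # C(t) = exp(-Delta F(t))
```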
To study transitions of our solvated diatomic molecule, we introduce a "drawbridge" potential anchored at $r_B$:

$$ U(x, \lambda) \equiv \lambda \times [r_B - r(x)] \times \theta[r_B - r(x)]. \quad (19) $$

Here, $r_B$ is the lower limit of $r$ in region B and $\theta$ is the Heaviside theta function. By lifting the drawbridge from $\lambda = 0$ to $\lambda = \infty$ one can continuously confine the initially free endpoints of the pathways to the final region B. For this drawbridge biasing potential the derivative of the reversible work $F(t, \lambda)$ is given by

$$ \frac{\partial F(t, \lambda)}{\partial \lambda} = \left\langle [r_B - r(x_t)] \times \theta[r_B - r(x_t)] \right\rangle_{\lambda}. \quad (20) $$
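A sketch of the drawbridge bias and its $\lambda$-derivative, Eqs. (19) and (20), with $r_B = 1.45$ (in units of $\sigma$) as in the text; here `r` stands for the interatomic distance in the final time slice $x_t$:

```python
def drawbridge(r, lam, r_b=1.45):
    """Drawbridge bias U = lambda * (r_B - r) * theta(r_B - r), Eq. (19):
    zero inside region B (r >= r_B), a linear ramp outside it."""
    gap = r_b - r
    return lam * gap if gap > 0 else 0.0

def d_drawbridge_dlam(r, r_b=1.45):
    """dU/dlambda, the quantity averaged over path endpoints in Eq. (20)."""
    gap = r_b - r
    return gap if gap > 0 else 0.0
```

Averaging `d_drawbridge_dlam` over sampled path endpoints at fixed $\lambda$ gives one point of the integrand in Eq. (18).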
We have used Equ. (20) to calculate $\partial F(t, \lambda)/\partial \lambda$ for $t = 0.8(m\sigma^2/\epsilon)^{1/2}$ at 101 equidistant values of $\lambda$ in the range from $\lambda = 0$ to $\lambda = 100$. Each single path sampling simulation consisted of $2 \times 10^6$ attempted path moves. In this sequence of path sampling simulations starting at $\lambda = 0$ and ending at $\lambda = 100$, corresponding to a *compression* of pathways, the final path of simulation $n$ was used as the initial path for simulation $n + 1$. Results of these simulations are plotted in Fig. 2. Derivatives of the reversible work with respect to $\lambda$ are shown on the left hand side. The right panel contains the reversible work $F(t, \lambda)$ as a function of $\lambda$ as obtained by numerical integration. The plateau value of $F(t, \lambda) = 9.85$ reached at $\lambda \sim 40$ is the reversible work $\Delta F(t)$ necessary to confine the final points of the pathways to region B. To investigate whether these results are affected by hysteresis, we have carried out a sequence of path sampling simulations corresponding to an *expansion* of the path ensemble. In this sequence of simulations we started with pathways constrained to end in region B and then subsequently lowered $\lambda$ from an initial value of 100 to a final value of 0. The reversible work and its derivative obtained by path expansions are shown as dashed lines in Fig. 2. Path compression and path expansion yield almost identical results.

**FIGURE 2.** Results of path ensemble thermodynamic integration simulations. Left hand side: derivatives of the reversible work $F(t, \lambda)$ with respect to the coupling parameter $\lambda$ calculated in a path compression simulation (solid line) and in a path expansion simulation (dashed line). In both cases $\partial F/\partial \lambda$ was calculated at 101 equidistant values of $\lambda$ in the range from 0 to 100. Right hand side: reversible work $F(t, \lambda)$ as a function of $\lambda$ obtained by numerical integration of the curves shown on the left hand side. Again, the solid line denotes results of a path ensemble compression while the dashed line refers to a path ensemble expansion. The free energy difference obtained from these simulations is $\Delta F(t) = 9.85$, corresponding to a correlation function value of $C(t) = 5.27 \times 10^{-5}$.
In this work we have borrowed many familiar ideas and techniques from statistical thermodynamics (e.g., reversible work, thermodynamic integration) in order to compute intrinsically dynamical quantities (e.g., rate constants). Thermodynamic concepts become directly useful for this purpose once the dynamical problem has been reduced to characterizing the statistical consequences of imposing constraints (of reactivity) on stationary distributions (of dynamical pathways). This task, in the context of phase space ensembles, is the central challenge of classical statistical mechanics. Remarkably, such a thermodynamic interpretation extends even to the nonequilibrium realm. Recent results concerning *irreversible* transformations between equilibrium states [14, 15, 16, 17, 18] have analogous meaning for finite-time switching between ensembles of trajectories, opening new routes for rate constant calculations. We are working to develop transition path sampling methods exploiting this analogy.

ACKNOWLEDGMENTS

P.L.G. is an MIT Science Fellow. The calculations were performed on the Schrödinger II Linux cluster of the Vienna University Computer Center.
REFERENCES

1. C. Dellago, P. G. Bolhuis, F. S. Csajka, and D. Chandler, *J. Chem. Phys.* **108**, 1964 (1998).
2. C. Dellago, P. G. Bolhuis, and P. L. Geissler, *Adv. Chem. Phys.* **123**, 1 (2002).
3. P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, *Ann. Rev. Phys. Chem.* **53**, 291 (2002).
4. C. Dellago, P. G. Bolhuis, and D. Chandler, *J. Chem. Phys.* **108**, 9236 (1998).
5. P. G. Bolhuis, C. Dellago, and D. Chandler, *Faraday Discuss.* **110**, 421 (1998).
6. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, *J. Chem. Phys.* **21**, 1087 (1953).
7. D. Chandler, *Introduction to Modern Statistical Mechanics*, Oxford University Press (1987).
8. J. E. Straub, M. Borkovec, and B. J. Berne, *J. Chem. Phys.* **89**, 4833 (1988).
9. J. D. Weeks, D. Chandler, and H. C. Andersen, *J. Chem. Phys.* **54**, 5237 (1971).
10. M. P. Allen and D. J. Tildesley, *Computer Simulation of Liquids*, Oxford University Press, Oxford (1987).
11. J. G. Kirkwood, *J. Chem. Phys.* **3**, 300 (1935).
12. D. Frenkel and B. Smit, *Understanding Molecular Simulation*, 2nd edition, Academic Press (2002).
13. S. X. Sun, *J. Chem. Phys.* **118**, 5769 (2003).
14. C. Jarzynski, *Phys. Rev. Lett.* **78**, 2690 (1997).
15. C. Jarzynski, *Phys. Rev. E* **56**, 5018 (1997).
16. G. E. Crooks, *J. Stat. Phys.* **90**, 1481 (1998).
17. G. E. Crooks, *Phys. Rev. E* **60**, 2721 (1999).
18. G. E. Crooks, *Phys. Rev. E* **61**, 2361 (2000).
samples/texts_merged/5647681.md
A note on sufficiency in binary panel models

Koen Jochmans, Thierry Magnac

To cite this version:

Koen Jochmans, Thierry Magnac. A note on sufficiency in binary panel models. 2015. hal-01248065
HAL Id: hal-01248065
https://hal-sciencespo.archives-ouvertes.fr/hal-01248065

Preprint submitted on 23 Dec 2015

**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
# A NOTE ON SUFFICIENCY IN BINARY PANEL MODELS

Koen Jochmans and Thierry Magnac

December 4, 2015
Consider estimating the slope coefficients of a fixed-effect binary-choice model from two-period panel data. Two approaches to semiparametric estimation at the regular parametric rate have been proposed. One is based on a sufficient statistic, the other is based on a conditional-median restriction. We show that, under standard assumptions, both approaches are equivalent.
KEYWORDS: binary choice, fixed effects, panel data, regular estimation, sufficiency.

INTRODUCTION
A classic problem in panel data analysis is the estimation of the vector of slope coefficients, $\beta$, in fixed-effect linear-index models from binary response data on $n$ observations.
In seminal work, Rasch (1960) constructed a conditional maximum-likelihood estimator for the fixed-effect logit model by building on a sufficiency argument. Chamberlain (2010) and Magnac (2004) have shown that sufficiency is necessary for estimation at the $n^{-1/2}$ rate to be possible in general.

Manski (1987) proposed a maximum-score estimator of $\beta$. His estimator relies on a conditional-median restriction and does not require sufficiency. However, it converges at the slow rate $n^{-1/3}$. Horowitz (1992) suggested smoothing the maximum-score criterion function and showed that, by doing so, the convergence rate can be improved, although the $n^{-1/2}$ rate remains unattainable.

Lee (1999) has given an alternative conditional-median restriction and derived an $n^{-1/2}$-consistent maximum rank-correlation estimator of $\beta$. He provided sufficient conditions for this restriction to hold that constrain the distribution of the fixed effects and the covariates. It can be shown that these restrictions involve the unknown parameter $\beta$ through index-sufficiency requirements on the distribution of the covariates, and that they can severely restrict the values that $\beta$ is allowed to take.

In this note we reconsider the conditional-median restriction of Lee (1999) under standard assumptions and look for conditions that imply that it holds for any $\beta$. We find that imposing the conditional-median restriction is equivalent to requiring sufficiency.

Department of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France. koen.jochmans@sciencespo.fr.

GREMAQ and IDEI, Toulouse School of Economics, 21 Allée de Brienne, 31000 Toulouse, France. thierry.magnac@tse-fr.eu.

1. MODEL AND ASSUMPTIONS
Suppose that binary outcomes $y_i = (y_{i1}, y_{i2})$ relate to a set of observable covariates $x_i = (x_{i1}, x_{i2})$ through the threshold-crossing model

$$y_{i1} = 1\{x_{i1}\beta + \alpha_i \geq u_{i1}\}, \quad y_{i2} = 1\{x_{i2}\beta + \alpha_i \geq u_{i2}\},$$

where $u_i = (u_{i1}, u_{i2})$ are latent disturbances, $\alpha_i$ is an unobserved effect, and $\beta$ is a parameter vector of conformable dimension, say $k$. The challenge is to construct an estimator of $\beta$ from a random sample $\{(y_i, x_i) : i = 1, \dots, n\}$ that converges at the regular $n^{-1/2}$ rate.
|
| 77 |
+
|
| 78 |
+
Let $\Delta y_i = y_{i2} - y_{i1}$ and $\Delta x_i = x_{i2} - x_{i1}$. The following assumption will be maintained throughout.
|
| 79 |
+
|
| 80 |
+
ASSUMPTION 1 (Identification and regularity)
|
| 81 |
+
|
| 82 |
+
(a) $u_i$ is independent of $(x_i, \alpha_i)$.
|
| 83 |
+
|
| 84 |
+
(b) $\Delta x_i$ is not contained in a proper linear subspace of $\mathbb{R}^k$.
(c) The first component of $\Delta x_i$ varies continuously over $\mathbb{R}$ (for almost all values of the other components) and the first component of $\beta$ is not equal to zero.
(d) $\alpha_i$ varies continuously over $\mathbb{R}$ (for almost all values of $x_i$).
(e) The distribution of $u_i$ admits a strictly positive, continuous, and bounded density function with respect to Lebesgue measure.
Parts (a)–(c) collect sufficient conditions that ensure that $\beta$ is identified, while Parts (d)–(e) are conventional regularity conditions (see Magnac 2004). From here on we omit the 'almost surely' qualifier from all conditional statements.
Assumption 1 does not parametrize the distribution of $u_i$ nor does it restrict the dependence between $\alpha_i$ and $x_i$ beyond the complete-variation requirement of Assumption 1(d). As such, our approach is semiparametric and we treat the $\alpha_i$ as fixed effects.
## 2. CONDITIONS FOR REGULAR ESTIMATION
Magnac (2004, Theorem 1) has shown that, under Assumption 1, the semiparametric efficiency bound for $\beta$ is zero unless $y_{i1} + y_{i2}$ is a sufficient statistic for $\alpha_i$. Sufficiency can be stated as follows.
**CONDITION 1 (Sufficiency)** There exists a real function $G$, independent of $\alpha_i$, such that
$$ \mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = G(\Delta x_i \beta) $$
for all $\alpha_i \in \mathbb{R}$.
Condition 1 states that data in first-differences follow a single-indexed binary-choice model. This yields a variety of estimators of $\beta$, such as semiparametric maximum likelihood (Klein and Spady 1993), that are $n^{-1/2}$-consistent under standard assumptions.
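The canonical case in which Condition 1 holds is that of i.i.d. standard logistic disturbances, where $G$ is the logistic distribution function itself (the conditional-logit setup going back to Rasch 1960). A quick numerical sanity check of this well-known fact, in pure Python with arbitrary index values:

```python
import math

def logistic(v):
    # standard logistic cdf
    return 1.0 / (1.0 + math.exp(-v))

def p_switch_up(x1b, x2b, alpha):
    # Pr(dy = 1 | x, dy != 0, alpha) under i.i.d. standard logistic disturbances
    l1 = logistic(x1b + alpha)   # Pr(y_1 = 1 | alpha)
    l2 = logistic(x2b + alpha)   # Pr(y_2 = 1 | alpha)
    p_up = (1 - l1) * l2         # Pr(y_1 = 0, y_2 = 1), i.e. dy = 1
    p_down = l1 * (1 - l2)       # Pr(y_1 = 1, y_2 = 0), i.e. dy = -1
    return p_up / (p_up + p_down)

x1b, x2b = 0.3, 1.1              # hypothetical index values x_{i1}'b, x_{i2}'b
target = logistic(x2b - x1b)     # G(dx'b): free of the fixed effect
for alpha in (-5.0, -1.0, 0.0, 2.0, 10.0):
    assert abs(p_switch_up(x1b, x2b, alpha) - target) < 1e-9
```

However far the fixed effect is pushed, the conditional switching probability stays at the same function of the differenced index only.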
Magnac (2004, Theorem 3) derived conditions on the distributions of $u_i$ and $\Delta u_i$ under which Condition 1 holds.
On the other hand, Lee (1999) considered estimation of $\beta$ based on a sign restriction. We write $\mathrm{med}(x)$ for the median of the random variable $x$ and let $\operatorname{sgn}(x) = 1\{x > 0\} - 1\{x < 0\}$.
**CONDITION 2 (Median restriction)** For any two observations *i* and *j*,
$$ \mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \mid x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \mathrm{sgn}(\Delta x_i \beta - \Delta x_j \beta) $$
holds.
Condition 2 suggests a rank estimator for $\beta$. Conditions for this estimator to be $n^{-1/2}$-consistent are stated in Sherman (1993).
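To fix ideas, the pairwise rank objective behind such an estimator can be sketched as follows. This is our own toy implementation on noiseless data, not Sherman's (1993) actual estimator: it counts, over pairs with $\Delta y_i = 1$ and $\Delta y_j = -1$, how often the differenced index agrees in sign, which Condition 2 says should happen at the true coefficient.

```python
def rank_objective(theta, dy, dx):
    # Count pairs (i, j) with dy_i = 1, dy_j = -1 for which the differenced
    # index (dx_i - dx_j)'(1, theta) is positive, as Condition 2 predicts.
    score = 0
    for i in range(len(dy)):
        for j in range(len(dy)):
            if dy[i] == 1 and dy[j] == -1:
                diff = (dx[i][0] - dx[j][0]) + theta * (dx[i][1] - dx[j][1])
                score += diff > 0
    return score

# noiseless toy data with true theta0 = 1: dy is simply the sign of the index
theta0 = 1.0
dx = [(1.0, -1.5), (-0.6, 1.0), (2.0, -0.5), (-0.3, -0.9), (0.1, 0.6), (0.9, -1.0)]
dy = [1 if a + theta0 * b > 0 else -1 for (a, b) in dx]

scores = {t: rank_objective(t, dy, dx) for t in (-2.0, -1.0, 0.0, 0.5, 1.0)}
assert max(scores, key=scores.get) == theta0   # objective peaks at the truth here
```

In this stylized example the objective attains its maximal value (all nine discordant pairs correctly ordered) at $\theta_0$ and strictly less elsewhere on the grid.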
Lee (1999, Assumption 1) restricted the joint distribution of $\alpha_i$, $x_i$, and $(x_{i1}\beta, x_{i2}\beta)$ to ensure that Condition 2 holds. Aside from these restrictions going against the fixed-effect approach, they do not, in general, hold uniformly in $\beta$. The Appendix contains additional discussion and an example.
## 3. EQUIVALENCE
The main result of this paper is the equivalence of Conditions 1 and 2 as requirements for $n^{-1/2}$-consistent estimation of any $\beta$.
**THEOREM 1 (Equivalence)** *Under Assumption 1 Condition 2 holds for any $\beta$ if and only if Condition 1 holds.*
PROOF: We start with two lemmas that are instrumental in proving Theorem 1.
**LEMMA 1 (Sufficiency)** Condition 1 is equivalent to the existence of a continuously differentiable, strictly decreasing function $c$, independent of $\alpha_i$, such that
$$
\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = c(\Delta x_i \beta)
$$
for all $\alpha_i \in \mathbb{R}$.
PROOF: Conditional on $\Delta y_i \neq 0$ and on $\alpha_i, x_i$, the variable $\Delta y_i$ is Bernoulli with success probability
$$
\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \frac{1}{1 + \frac{\mathrm{Pr}(\Delta y_i = -1 | x_i, \alpha_i)}{\mathrm{Pr}(\Delta y_i = 1 | x_i, \alpha_i)}}.
$$
Re-arranging this expression and enforcing Condition 1 shows that
$$
\frac{\Pr(\Delta y_i = -1|x_i, \alpha_i)}{\Pr(\Delta y_i = 1|x_i, \alpha_i)} = \frac{1 - G(\Delta x_i \beta)}{G(\Delta x_i \beta)},
$$
which is a function of $\Delta x_i \beta$ only. Monotonicity of this function follows easily, as in Magnac (2004, Proof of Theorem 2). This completes the proof of Lemma 1.
Q.E.D.
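For instance, with i.i.d. standard logistic disturbances the odds in Lemma 1 work out to $c(v) = e^{-v}$, which is smooth, strictly decreasing, and free of $\alpha_i$. A small numerical sketch of our own (the index values are arbitrary):

```python
import math

def odds_down_up(dxb, alpha, x1b=0.25):
    # Pr(dy = -1 | x, alpha) / Pr(dy = 1 | x, alpha) under i.i.d. logistic u's
    l1 = 1.0 / (1.0 + math.exp(-(x1b + alpha)))          # Pr(y_1 = 1)
    l2 = 1.0 / (1.0 + math.exp(-(x1b + dxb + alpha)))    # Pr(y_2 = 1)
    return (l1 * (1.0 - l2)) / ((1.0 - l1) * l2)

# the odds equal exp(-dx'b) for every fixed effect: c is free of alpha
for dxb in (-1.0, 0.0, 0.5, 2.0):
    for alpha in (-3.0, 0.0, 4.0):
        assert abs(odds_down_up(dxb, alpha) - math.exp(-dxb)) < 1e-9
```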
**LEMMA 2 (Median restriction)** Let
$$
\tilde{c}(x_i) = \frac{\Pr(\Delta y_i = -1|x_i)}{\Pr(\Delta y_i = 1|x_i)}.
$$
Condition 2 is equivalent to the sign restriction
$$
\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta)
$$
holding for any two observations *i* and *j*.
PROOF: Conditional on $\Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j$ (and the covariates),
$$
\frac{\Delta y_i - \Delta y_j}{2} = \begin{cases} 1 & \text{if } \Delta y_i = 1 \text{ and } \Delta y_j = -1 \\ -1 & \text{if } \Delta y_j = 1 \text{ and } \Delta y_i = -1. \end{cases}
$$
Therefore, it is Bernoulli with success probability
$$
\mathrm{Pr}(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j) = \frac{1}{1 + r(x_i, x_j)},
$$
where
$$
r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}.
$$
Note that
$$
\mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \middle| x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \mathrm{sgn} \left( \frac{1}{1+r(x_i, x_j)} - \frac{r(x_i, x_j)}{1+r(x_i, x_j)} \right).
$$
By the Bernoulli nature of the outcomes in the first step and random sampling of the observations in the second step, we have that
$$
r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j)} = \frac{\Pr(\Delta y_i = -1 | x_i) \Pr(\Delta y_j = 1 | x_j)}{\Pr(\Delta y_i = 1 | x_i) \Pr(\Delta y_j = -1 | x_j)} = \frac{\tilde{c}(x_i)}{\tilde{c}(x_j)}.
$$
Therefore, Condition 2 can be written as
$$
\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
$$
This completes the proof of Lemma 2.
Q.E.D.
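The two identities used in the proof are easy to spot-check numerically. With arbitrary (hypothetical) marginal probabilities for the two observations, independence gives the factorization $r(x_i, x_j) = \tilde{c}(x_i)/\tilde{c}(x_j)$, and the median of the resulting $\pm 1$ variable is the sign stated in the lemma:

```python
# Pr(dy = +1 | x) and Pr(dy = -1 | x) for two observations; the remaining
# mass sits on dy = 0. Values are arbitrary.
p = {"i": {1: 0.30, -1: 0.18}, "j": {1: 0.07, -1: 0.52}}

def ctilde(k):
    # conditional odds Pr(dy = -1 | x) / Pr(dy = 1 | x)
    return p[k][-1] / p[k][1]

# joint probabilities of the two discordant configurations, by independence
up_down = p["i"][1] * p["j"][-1]     # dy_i = 1, dy_j = -1
down_up = p["i"][-1] * p["j"][1]     # dy_i = -1, dy_j = 1
r = down_up / up_down
assert abs(r - ctilde("i") / ctilde("j")) < 1e-12

# median of the +/-1 variable (dy_i - dy_j)/2 given a discordant pair
p_success = up_down / (up_down + down_up)
med = 1 if p_success > 0.5 else (-1 if p_success < 0.5 else 0)
sgn = (ctilde("j") > ctilde("i")) - (ctilde("j") < ctilde("i"))
assert med == sgn   # the sign restriction of Lemma 2
```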
We first establish that Condition 1 implies Condition 2. Armed with Lemmas 1 and 2 this is a simple task. First note that, because the function $c$ is strictly decreasing by Lemma 1, Condition 1 implies that
$$
\operatorname{sgn}(c(\Delta x_j \beta) - c(\Delta x_i \beta)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
$$
Under Condition 1 we also have that
$$
c(\Delta x_i \beta) = \frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(\Delta y_i = -1 | x_i)}{\Pr(\Delta y_i = 1 | x_i)} = \tilde{c}(x_i).
$$
Therefore,
$$
\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).
$$
By Lemma 2, this is Condition 2.
To see that Condition 2 implies Condition 1, first note that
$$
\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(u_{i1} \le \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta, u_{i2} > \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)}{\Pr(u_{i1} > \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta, u_{i2} \le \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)}
$$
where we let $\tilde{\alpha}_i = \alpha_i + \frac{1}{2}(x_{i1} + x_{i2})\beta$. Therefore,
$$
\mathrm{Pr}(\Delta y_i = 1|x_i, \Delta y_i \neq 0, \alpha_i) = \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)
$$
for some function $\tilde{G}$, and
$$
\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = \int \tilde{G}(\Delta x_i \beta, \tilde{\alpha}) \, P(d\tilde{\alpha} | x_i, \Delta y_i \neq 0),
$$
where $P(\cdot \,|\, x_i, \Delta y_i \neq 0)$ denotes the conditional distribution of $\tilde{\alpha}_i$ given $x_i$ and $\Delta y_i \neq 0$. Next, by Lemma 2, Condition 2 implies that
$$ \Delta x_i \beta = \Delta x_j \beta \iff \tilde{c}(x_i) = \tilde{c}(x_j) \iff E[\tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)|x_i, \Delta y_i \neq 0] = E[\tilde{G}(\Delta x_j \beta, \tilde{\alpha}_j)|x_j, \Delta y_j \neq 0]. $$
Hence, it must hold that
$$ \int_{-\infty}^{+\infty} \tilde{G}(v, \tilde{\alpha}) \{ P(d\tilde{\alpha}|x_i, \Delta y_i \neq 0) - P(d\tilde{\alpha}|x_j, \Delta y_j \neq 0) \} = 0 $$
for all values $v \in \mathbb{R}$ and all $(x_i, x_j)$. Because the distribution of $\tilde{\alpha}_i$ given $x_i$ and $\Delta y_i \neq 0$ is unrestricted, this condition holds if and only if the function $\tilde{G}$ does not depend on $\tilde{\alpha}_i$, and hence not on $\alpha_i$. Moreover, we must have that
$$ \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = G(\Delta x_i \beta) $$
for some function $G$. This is Condition 1. This completes the proof of Theorem 1. Q.E.D.
## APPENDIX (NOT FOR PUBLICATION)
The notation in Lee (1999) decomposes $x$ into its continuously varying single component, whose coefficient is normalized to 1, and the remaining variables. We denote by $a$ the first component and by $z$ the remaining variables, so that $x = (a, z)$. We denote by $\theta$ the coefficient of $z$ in $x\beta$, so that $\beta = (1, \theta)$, and omit the subscript $i$ throughout.
Assumptions (g) and (h) of Lee (1999) can be written as
$$ (g) \quad \alpha \perp \Delta z | \Delta a + \theta \Delta z, $$
$$ (h) \quad a_1 + \theta z_1 \perp \Delta z | \Delta a + \theta \Delta z, \alpha $$
in which, e.g., $\Delta z = z_2 - z_1$.
We first prove that these conditions imply an index sufficiency requirement on the distribution function of regressors. Second, we provide an example in which these conditions restrict the parameter of interest to only two possible values, except in non-generic cases.
### Index sufficiency
Denote by $f$ the density with respect to some dominating measure and rewrite (h) as
$$ f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z, \alpha). $$
As Condition (g) can be written as
$$ f(\Delta z | \Delta a + \theta \Delta z, \alpha) = f(\Delta z | \Delta a + \theta \Delta z), $$
we have that
$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z),$$
which we can multiply by $f(\alpha | \Delta a + \theta \Delta z)$ and integrate with respect to $\alpha$ to get
$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z) f(\Delta z | \Delta a + \theta \Delta z).$$
As this expression can be rewritten as
$$f(\Delta z | \Delta a + \theta \Delta z, a_1 + z_1 \theta) = f(\Delta z | \Delta a + \theta \Delta z),$$
Conditions (g) and (h) of Lee (1999) demand that
$$f(\Delta z | a_1 + z_1\theta, a_2 + z_2\theta) = f(\Delta z | \Delta a + \theta\Delta z, a_1 + z_1\theta) = f(\Delta z | \Delta a + \theta\Delta z),$$
or in terms of the original variables, that
$$f(\Delta z | x_1\beta, x_2\beta) = f(\Delta z | \Delta x\beta).$$
This is an index sufficiency requirement on the data generating process of the regressors $x$ that is driven by the parameter of interest, $\beta$.
### Example
To illustrate, suppose that $z$ is a one-dimensional regressor and that the regressors are jointly normal with a restricted covariance matrix that allows for contemporaneous correlation only. Specifically,
$$\begin{pmatrix} a_1 \\ a_2 \\ z_1 \\ z_2 \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_{a_1} \\ \mu_{a_2} \\ \mu_{z_1} \\ \mu_{z_2} \end{pmatrix}, \begin{pmatrix} \sigma_{a_1}^2 & 0 & \sigma_{a_1 z_1} & 0 \\ 0 & \sigma_{a_2}^2 & 0 & \sigma_{a_2 z_2} \\ \sigma_{a_1 z_1} & 0 & \sigma_{z_1}^2 & 0 \\ 0 & \sigma_{a_2 z_2} & 0 & \sigma_{z_2}^2 \end{pmatrix} \right).$$
Then
$$\begin{pmatrix} \Delta z \\ x_1\beta \\ x_2\beta \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{pmatrix} \right)$$
for
$$
\begin{align*}
\mu_1 &= \mu_{z_2} - \mu_{z_1} \\
\mu_2 &= \mu_{a_1} + \mu_{z_1} \theta \\
\mu_3 &= \mu_{a_2} + \mu_{z_2} \theta
\end{align*}
$$
and
$$
\begin{align*}
\Sigma_{11} &= \operatorname{var}(\Delta z) = \operatorname{var}(z_1) + \operatorname{var}(z_2) \\
\Sigma_{12} &= \operatorname{cov}(\Delta z, x_1 \beta) = -\operatorname{cov}(z_1, a_1 + z_1 \theta) \\
&= -\operatorname{cov}(a_1, z_1) - \theta \operatorname{var}(z_1) \\
&= -\sigma_{a_1 z_1} - \theta \sigma_{z_1}^2 \\
\Sigma_{13} &= \operatorname{cov}(\Delta z, x_2 \beta) = \operatorname{cov}(z_2, a_2 + z_2 \theta) \\
&= \operatorname{cov}(a_2, z_2) + \theta \operatorname{var}(z_2) \\
&= \sigma_{a_2 z_2} + \theta \sigma_{z_2}^2 \\
\Sigma_{22} &= \operatorname{var}(x_1 \beta) = \operatorname{var}(a_1 + z_1 \theta) \\
&= \operatorname{var}(a_1) + \theta^2 \operatorname{var}(z_1) + 2\theta \operatorname{cov}(a_1, z_1) \\
&= \sigma_{a_1}^2 + 2\theta \sigma_{a_1 z_1} + \theta^2 \sigma_{z_1}^2 \\
\Sigma_{33} &= \operatorname{var}(x_2 \beta) = \operatorname{var}(a_2 + z_2 \theta) \\
&= \operatorname{var}(a_2) + \theta^2 \operatorname{var}(z_2) + 2\theta \operatorname{cov}(a_2, z_2) \\
&= \sigma_{a_2}^2 + 2\theta \sigma_{a_2 z_2} + \theta^2 \sigma_{z_2}^2 \\
\Sigma_{23} &= \operatorname{cov}(x_1 \beta, x_2 \beta) = 0.
\end{align*}
$$
From standard results on the multivariate normal distribution, the conditional distribution of $\Delta z$ given $(x_1\beta, x_2\beta)$ is normal, with constant variance and conditional mean function
$$
m(x_1\beta, x_2\beta) = \mu_1 + \frac{(\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23})(x_2\beta - \mu_3) - (\Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33})(x_1\beta - \mu_2)}{\Sigma_{22}\Sigma_{33} - \Sigma_{23}^2}.
$$
To satisfy the condition of index sufficiency we need that
$$
\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23} = \Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33}.
$$
Plugging in the expressions from above, this becomes
$$(\sigma_{a_2 z_2} + \theta \sigma_{z_2}^2)(\sigma_{a_1}^2 + 2\theta\sigma_{a_1 z_1} + \theta^2\sigma_{z_1}^2) = (\sigma_{a_1 z_1} + \theta\sigma_{z_1}^2)(\sigma_{a_2}^2 + 2\theta\sigma_{a_2 z_2} + \theta^2\sigma_{z_2}^2).$$
We can write this condition as the third-order polynomial equation (in $\theta$)
$$C + B\theta + A\theta^2 + D\theta^3 = 0$$
with coefficients
$$
\begin{align*}
C &= \sigma_{a_1}^2 \sigma_{a_2 z_2} - \sigma_{a_2}^2 \sigma_{a_1 z_1} \\
B &= \sigma_{a_1}^2 \sigma_{z_2}^2 + 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} - \sigma_{a_2}^2 \sigma_{z_1}^2 - 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} \\
&= \sigma_{a_1}^2 \sigma_{z_2}^2 - \sigma_{a_2}^2 \sigma_{z_1}^2 \\
A &= \sigma_{a_1 z_1} \sigma_{z_2}^2 - \sigma_{a_2 z_2} \sigma_{z_1}^2 \\
D &= 0.
\end{align*}
$$
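The expansion can be verified numerically: for randomly drawn (admissible) variances and covariances, the difference between the two sides of the plugged-in condition coincides with $C + B\theta + A\theta^2$ at every $\theta$, confirming in particular that the cubic term vanishes. A pure-Python spot check of our own:

```python
import random

def poly_gap(theta, sa1, sa2, sz1, sz2, c1, c2):
    # (lhs - rhs) of the plugged-in condition minus C + B*theta + A*theta**2.
    # sa*, sz* are standard deviations; c1 = sigma_{a1 z1}, c2 = sigma_{a2 z2}.
    C = sa1**2 * c2 - sa2**2 * c1
    B = sa1**2 * sz2**2 - sa2**2 * sz1**2
    A = c1 * sz2**2 - c2 * sz1**2
    lhs = (c2 + theta * sz2**2) * (sa1**2 + 2 * theta * c1 + theta**2 * sz1**2)
    rhs = (c1 + theta * sz1**2) * (sa2**2 + 2 * theta * c2 + theta**2 * sz2**2)
    return (lhs - rhs) - (C + B * theta + A * theta**2)

random.seed(0)
for _ in range(100):
    sa1, sa2, sz1, sz2 = (random.uniform(0.5, 2.0) for _ in range(4))
    c1 = random.uniform(-0.9, 0.9) * sa1 * sz1   # keeps |rho_1| < 1
    c2 = random.uniform(-0.9, 0.9) * sa2 * sz2   # keeps |rho_2| < 1
    for theta in (-2.0, -0.5, 0.0, 1.0, 3.0):
        assert abs(poly_gap(theta, sa1, sa2, sz1, sz2, c1, c2)) < 1e-9
```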
For $t = 1, 2$, let
$$\rho_t = \frac{\sigma_{a_t z_t}}{\sigma_{a_t} \sigma_{z_t}}, \qquad r_t = \frac{\sigma_{a_t}}{\sigma_{z_t}}.$$
Then
$$
\begin{align*}
\frac{C}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \rho_2 r_1 - \rho_1 r_2 \\
\frac{B}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{r_1}{r_2} - \frac{r_2}{r_1} \\
\frac{A}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}.
\end{align*}
$$
The polynomial condition therefore is
$$(\rho_2 r_1 - \rho_1 r_2) + \left( \frac{r_1}{r_2} - \frac{r_2}{r_1} \right) \theta + \left( \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1} \right) \theta^2 = 0.$$
Note that the leading polynomial coefficient is equal to zero if and only if $\rho_1 r_1 = \rho_2 r_2$. This leads to three mutually exclusive cases:
(i) The data are stationary, that is, $\rho_1 = \rho_2$ and $r_1 = r_2$. Then all polynomial coefficients are zero so that all values of $\theta$ satisfy Lee's restriction.
(ii) We have $\rho_1 r_1 = \rho_2 r_2$ but $r_1 \neq r_2$. Then the resulting linear equation admits one and only one solution in $\theta$.
(iii) The leading polynomial coefficient is non-zero, that is, $\rho_1 r_1 \neq \rho_2 r_2$. In this case the discriminant of the second-order polynomial equals
$$
\begin{align*}
\Delta &= \left(\frac{r_1}{r_2} - \frac{r_2}{r_1}\right)^2 - 4 \left(\frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}\right) (\rho_2 r_1 - \rho_1 r_2) \\
&= \left(\frac{r_1}{r_2}\right)^2 + \left(\frac{r_2}{r_1}\right)^2 - 2 - 4 \left( \rho_1 \rho_2 \left\{ \frac{r_1}{r_2} + \frac{r_2}{r_1} \right\} - (\rho_1^2 + \rho_2^2) \right).
\end{align*}
$$
Set $x = \frac{r_1}{r_2} > 0$ and write
$$
\Delta(x) = x^2 + \frac{1}{x^2} - 2 - 4\left(\rho_1\rho_2\left(x + \frac{1}{x}\right) - (\rho_1^2 + \rho_2^2)\right),
$$
which is smooth for $x > 0$. The derivative of $\Delta$ with respect to $x$ equals
$$
\begin{align*}
\Delta'(x) &= 2x - \frac{2}{x^3} - 4\rho_1\rho_2\left(1 - \frac{1}{x^2}\right) \\
&= \frac{2}{x^3}(x^4 - 1) - \frac{4\rho_1\rho_2}{x^2}(x^2 - 1) \\
&= \frac{2}{x^3}(x^2 - 1)(x^2 + 1 - 2\rho_1\rho_2 x).
\end{align*}
$$
Note that $|\rho_1 \rho_2| \le 1$ by the Cauchy-Schwarz inequality, so that $x^2 + 1 - 2\rho_1\rho_2 x \ge (x - 1)^2 \ge 0$. Hence, for $x > 0$,
$$
\operatorname{sgn}(\Delta'(x)) = \operatorname{sgn}(x - 1).
$$
Further, $\Delta(1) = 4(\rho_1 - \rho_2)^2 \geq 0$, so $\Delta$ attains its minimum over $x > 0$ at $x = 1$ and is therefore always non-negative. Hence, in this case, the polynomial condition generically has two solutions in $\theta$.
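A numerical spot check of the discriminant (our own sketch, with arbitrary correlations in $(-1, 1)$) confirms both the value at $x = 1$ and the non-negativity:

```python
def disc(x, rho1, rho2):
    # discriminant of the quadratic, written as a function of x = r1 / r2 > 0
    return x**2 + x**-2 - 2 - 4 * (rho1 * rho2 * (x + 1/x) - (rho1**2 + rho2**2))

rho1, rho2 = 0.6, -0.3                       # hypothetical admissible correlations
assert abs(disc(1.0, rho1, rho2) - 4 * (rho1 - rho2)**2) < 1e-12
for x in (0.05, 0.5, 1.0, 2.0, 20.0):        # minimum over x > 0 sits at x = 1
    assert disc(x, rho1, rho2) >= disc(1.0, rho1, rho2) - 1e-12
```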
### Conclusion
Conditions (g) and (h) of Lee (1999) imply an index-sufficiency condition on the distribution function of the regressors. In the standard example above, this condition is restrictive in generic cases: it is satisfied not by every possible value of the parameter of interest, $\theta$, but by only two.
## REFERENCES
Chamberlain, G. (2010), “Binary Response Models for Panel Data: Identification and Information,” *Econometrica*, 78, 159–168.
Horowitz, J. L. (1992), “A Smoothed Maximum Score Estimator for the Binary Response Model,” *Econometrica*, 60, 505–531.
Klein, R. W., and Spady, R. H. (1993), “An Efficient Semiparametric Estimator for Binary Choice Models,” *Econometrica*, 61, 387–421.
Lee, M.-J. (1999), “A Root-N Consistent Semiparametric Estimator for Related-Effects Binary Response Panel Data,” *Econometrica*, 67, 427–433.
Magnac, T. (2004), “Panel Binary Variables and Sufficiency: Generalizing Conditional Logit,” *Econometrica*, 72, 1859–1876.
Manski, C. F. (1987), “Semiparametric Analysis of Random Effects Linear Models from Binary Panel Data,” *Econometrica*, 55, 357–362.
Rasch, G. (1960), “Probabilistic Models for Some Intelligence and Attainment Tests,” unpublished report, The Danish Institute of Educational Research, Copenhagen.
Sherman, R. P. (1993), “The Limiting Distribution of the Maximum Rank Correlation Estimator,” *Econometrica*, 61, 123–137.
|