Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- .gitattributes +19 -0
- samples_new/pdfs/174916.pdf +3 -0
- samples_new/pdfs/213815.pdf +3 -0
- samples_new/pdfs/2515306.pdf +3 -0
- samples_new/pdfs/2590883.pdf +3 -0
- samples_new/pdfs/2779026.pdf +3 -0
- samples_new/pdfs/2909063.pdf +0 -0
- samples_new/pdfs/3226827.pdf +3 -0
- samples_new/pdfs/3594993.pdf +3 -0
- samples_new/pdfs/3884483.pdf +3 -0
- samples_new/pdfs/450057.pdf +3 -0
- samples_new/pdfs/4523932.pdf +0 -0
- samples_new/pdfs/4808858.pdf +0 -0
- samples_new/pdfs/503850.pdf +3 -0
- samples_new/pdfs/5396754.pdf +3 -0
- samples_new/pdfs/5718759.pdf +3 -0
- samples_new/pdfs/598288.pdf +3 -0
- samples_new/pdfs/6324184.pdf +3 -0
- samples_new/pdfs/6535016.pdf +3 -0
- samples_new/pdfs/7100604.pdf +3 -0
- samples_new/pdfs/7334540.pdf +0 -0
- samples_new/pdfs/7569662.pdf +3 -0
- samples_new/pdfs/7642017.pdf +3 -0
- samples_new/pdfs/88513.pdf +3 -0
- samples_new/pdfs/904681.pdf +0 -0
- samples_new/texts_merged/1117773.md +241 -0
- samples_new/texts_merged/1168240.md +345 -0
- samples_new/texts_merged/1772599.md +1063 -0
- samples_new/texts_merged/1808935.md +409 -0
- samples_new/texts_merged/1836869.md +606 -0
- samples_new/texts_merged/1885128.md +507 -0
- samples_new/texts_merged/1973835.md +0 -0
- samples_new/texts_merged/199837.md +284 -0
- samples_new/texts_merged/2092097.md +346 -0
- samples_new/texts_merged/213815.md +271 -0
- samples_new/texts_merged/230879.md +885 -0
- samples_new/texts_merged/2634535.md +447 -0
- samples_new/texts_merged/2865847.md +129 -0
- samples_new/texts_merged/2909063.md +56 -0
- samples_new/texts_merged/3147359.md +589 -0
- samples_new/texts_merged/3148538.md +141 -0
- samples_new/texts_merged/3193892.md +136 -0
- samples_new/texts_merged/3224121.md +735 -0
- samples_new/texts_merged/3327355.md +0 -0
- samples_new/texts_merged/339686.md +125 -0
- samples_new/texts_merged/3495399.md +382 -0
- samples_new/texts_merged/3594993.md +309 -0
- samples_new/texts_merged/3764397.md +278 -0
- samples_new/texts_merged/3884483.md +0 -0
- samples_new/texts_merged/393503.md +393 -0
.gitattributes CHANGED

@@ -474,3 +474,22 @@ samples/pdfs/88513.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/7100604.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/6324184.pdf filter=lfs diff=lfs merge=lfs -text
 samples/pdfs/3594993.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/598288.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/213815.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/7642017.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/174916.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/2590883.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/503850.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/5718759.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/5396754.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/2515306.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/7100604.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/3594993.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/88513.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/3226827.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/450057.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/6535016.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/3884483.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/6324184.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/2779026.pdf filter=lfs diff=lfs merge=lfs -text
+samples_new/pdfs/7569662.pdf filter=lfs diff=lfs merge=lfs -text
samples_new/pdfs/174916.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05a800cecc802eb3f82f462ddff2dd8daa0771a6ce717e3e84800e17f337561c
+size 204800

samples_new/pdfs/213815.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06e9adeff2122523bd3938a54a4a2e9396f19a0493482d64c1aa2fa65a177ccc
+size 6112738

samples_new/pdfs/2515306.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5120455c0dc104437507c5370565044b483315cb48ada647722fdbdb4057b87c
+size 764141

samples_new/pdfs/2590883.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5390b6be32aeb0875f7d4f522697d32dad2862eb9c51be71143a71e867957204
+size 167859

samples_new/pdfs/2779026.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59a44297d9405d89e3fb080a6b7496229f7af203559c16c49d881bc637c42227
+size 542522

samples_new/pdfs/2909063.pdf ADDED

Binary file (60.3 kB)

samples_new/pdfs/3226827.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:681ddeeb2a99bad4e10f704daf4da962e4edc64f6578a4dd36fe738232011ca3
+size 251483

samples_new/pdfs/3594993.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74000028fd96bfc4062871bd5147aad8dc53df7ff99d18bc0fce190094e2fb85
+size 950286

samples_new/pdfs/3884483.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d5b428e4718f6eea9e4f339288c737ce6712885c927aa18629e1556b1cd84c8
+size 8815754

samples_new/pdfs/450057.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e8f5cf638a50a4e41abbf7bb0bb69c31f8777bd2d596a4ef8856a3dfa5353aa
+size 534024

samples_new/pdfs/4523932.pdf ADDED

Binary file (78.3 kB)

samples_new/pdfs/4808858.pdf ADDED

Binary file (55.4 kB)

samples_new/pdfs/503850.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10487c8ab89ccdf9e3fd5b63249bea553fed989039648635f64e7ee31bba3a2d
+size 170955

samples_new/pdfs/5396754.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae2aaf9495b382ec4c0fa50a47325ecdfbf6aea22bcb9f9c2be4ee26753c78ab
+size 648593

samples_new/pdfs/5718759.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b88daa241e1270f14bec6eeffa97f2e1d6896bbdeeeb438a292ca7a4543f25db
+size 524618

samples_new/pdfs/598288.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1351ca814e16291a77eee385b3e6d0e757228d6db1e351d574672aa42e4592e7
+size 12844441

samples_new/pdfs/6324184.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc163029d74c722c4e3627b1293ad1285200dab469830fcc721df7c526b6b998
+size 519591

samples_new/pdfs/6535016.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2e67da59147ff5910b0724ff8a46141d5b5ceccc1425987b8167c9d9a991679
+size 3727298

samples_new/pdfs/7100604.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e52f1e44ce3c6cc834e106bdaf95939a40032c44f12a3fa6d088b1d7afcc3a28
+size 396778

samples_new/pdfs/7334540.pdf ADDED

Binary file (79.7 kB)

samples_new/pdfs/7569662.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4502ab7df7ec80f5adf2f232488fc92bbd4ae5bafec8ebe85abbc2d8e47eb94
+size 494074

samples_new/pdfs/7642017.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6a5875ee6e30cc47f9ff423b1c5d0e057deb5112994076221e9d042c8c629f7
+size 9991213

samples_new/pdfs/88513.pdf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02de63da41877b559923a4d2e6845c79c2f2d757f1ab43999c5438ea69a957e0
+size 682886

samples_new/pdfs/904681.pdf ADDED

Binary file (41.6 kB)
samples_new/texts_merged/1117773.md ADDED

@@ -0,0 +1,241 @@
---PAGE_BREAK---

Resolving electron transfer kinetics in porous electrodes via diffusion-less cyclic voltammetry

Shida Yang,<sup>ac</sup> Yang Li,<sup>b</sup> Qing Chen<sup>ab*</sup>

<sup>a</sup>Department of Chemistry, <sup>b</sup>Department of Mechanical and Aerospace Engineering, and <sup>c</sup>The Energy Institute, HKUST, Hong Kong.

*Corresponding Author E-mail: chenqing@ust.hk (Qing Chen)

---PAGE_BREAK---

**Figure S1.** Background current on Ti foil as assembled in the cell with the active electrolyte but without the carbon felt: (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. The currents are at least two orders of magnitude lower than those measured with the carbon felt in all three cases, so no background subtraction is necessary for the analysis.

---PAGE_BREAK---

**Figure S2.** Electrochemical surface area measurements of the carbon felt electrode in the electrolytes of (a) $K_3Fe(CN)_6$, (b) $FeCl_3$, and (c) $VOSO_4$. We scan CV in potential ranges with no visible Faradaic current and plot the average currents against the scan rates. The slopes are divided by a specific capacitance of 20 µF/cm² to derive the areas.
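The slope-to-capacitance-to-area arithmetic in this caption can be sketched in a few lines; the scan rates and currents below are hypothetical placeholders, not values read from Figure S2.

```python
# Hypothetical capacitive currents (A) at several scan rates (V/s); in a
# non-Faradaic window the average current scales as i = C_dl * v, so the
# slope of a least-squares line through (v, i) is the double-layer
# capacitance C_dl.
scan_rates = [0.01, 0.02, 0.05, 0.10]             # V/s
avg_currents = [2.0e-4, 4.1e-4, 9.9e-4, 2.02e-3]  # A

n = len(scan_rates)
mean_v = sum(scan_rates) / n
mean_i = sum(avg_currents) / n
C_dl = sum((v - mean_v) * (i - mean_i)
           for v, i in zip(scan_rates, avg_currents)) \
       / sum((v - mean_v) ** 2 for v in scan_rates)

# Divide by the specific capacitance of 20 uF/cm^2 used in Figure S2.
C_SPECIFIC = 20e-6  # F/cm^2
ecsa_cm2 = C_dl / C_SPECIFIC
print(f"C_dl = {C_dl:.3e} F, ECSA = {ecsa_cm2:.0f} cm^2")
```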
---PAGE_BREAK---

**Figure S3.** X-ray photoelectron spectra of different carbon felts.

**Table S1.** O/C ratio of different carbon felts and the corresponding standard rate constants $k^0$ of VO<sup>2+</sup>/VO<sub>2</sub><sup>+</sup> on these electrodes.

<table><thead><tr><th>Carbon Felt</th><th>C ratio/%</th><th>O ratio/%</th><th>O/C</th><th>k<sup>0</sup> (cm/s)</th></tr></thead><tbody><tr><td>CeTech CF020, 400 °C</td><td>92.51</td><td>7.49</td><td>0.081</td><td>1.56±0.15 × 10<sup>-6</sup></td></tr><tr><td>SGL GFA6EA, 400 °C</td><td>90.14</td><td>9.86</td><td>0.109</td><td>1.642±0.072 × 10<sup>-7</sup></td></tr><tr><td>SGL GFA6EA, 450 °C</td><td>89.34</td><td>10.66</td><td>0.119</td><td>2.095±0.518 × 10<sup>-7</sup></td></tr><tr><td>SGL GFA6EA, 500 °C</td><td>88.93</td><td>11.07</td><td>0.124</td><td>2.455±0.216 × 10<sup>-8</sup></td></tr></tbody></table>

---PAGE_BREAK---

**Figure S4.** Additional results of the RFB tests: (a) electrochemical impedance spectroscopy (EIS) and (b) IR-corrected polarization curves of VRFBs with CF baked at different temperatures.

**Table S2.** Polarization resistance of VRFBs with different CFs.

<table><thead><tr><th>SGL CF</th><th>R<sub>u</sub>/Ω cm²</th><th>Polarization resistance/Ω cm²</th><th>Corrected polarization resistance/Ω cm²</th></tr></thead><tbody><tr><td>400 °C</td><td>0.395</td><td>0.487</td><td>0.092</td></tr><tr><td>450 °C</td><td>0.421</td><td>0.540</td><td>0.119</td></tr><tr><td>500 °C</td><td>0.450</td><td>0.664</td><td>0.214</td></tr></tbody></table>

---PAGE_BREAK---

**Table S3.** Summary of standard rate constants $k^0$ of VO<sup>2+</sup>/VO<sub>2</sub><sup>+</sup> reported in the literature.

<table>
<thead>
<tr><th>Electrodes</th><th>Treatment</th><th>Method</th><th>Area</th><th>k<sup>0</sup> (cm/s)</th><th>Ref</th></tr>
</thead>
<tbody>
<tr><td>SGL Carbon GFD4.6</td><td>Baked at 400 °C for 12 hrs</td><td>Symmetrical RFB</td><td>Electrochemical</td><td>2.38×10<sup>-6</sup></td><td>[1]</td></tr>
<tr><td>Disk made from carbon felt (SIGRACELL GFA6, SGL Carbon)</td><td>Baked at 400 °C for 30 hrs</td><td>Linear sweep voltammetry (LSV)</td><td>Geometric</td><td>1.6-8.8×10<sup>-8</sup></td><td>[2]</td></tr>
<tr><td>Ultra-microelectrode made from carbon felts (GrafTech)</td><td>Electrochemical oxidation and reduction</td><td>LSV and EIS</td><td>Electrochemical</td><td>1.7-17×10<sup>-5</sup></td><td>[3]</td></tr>
<tr><td>Carbon felt (Sigratherm GFA5)</td><td>Not mentioned</td><td>Galvanic charging/discharging</td><td>Calculated</td><td>3×10<sup>-7</sup></td><td>[4]</td></tr>
<tr><td>Carbon felt (Liao Yang Carbon Fiber Sci-tech Co., Ltd., China)</td><td>None</td><td>CV and EIS</td><td>Geometric</td><td>1.84×10<sup>-3</sup></td><td>[5]</td></tr>
<tr><td>Carbon paper (29, SGL group)</td><td>Baked at 450 °C for 30 hrs</td><td>Polarization curve and EIS in an RFB</td><td>Electrochemical</td><td>0.2-1.8×10<sup>-7</sup></td><td>[6]</td></tr>
<tr><td>Carbon paper (10AA, SGL group)</td><td>None</td><td>Symmetrical RFB</td><td>Gas adsorption</td><td>2.05×10<sup>-6</sup></td><td>[7]</td></tr>
<tr><td>Carbon paper (Shanghai Hesen, Ltd., HCP030N)</td><td>Electrochemical oxidation and reduction</td><td>CV</td><td>Gas adsorption</td><td>1.04×10<sup>-3</sup></td><td>[8]</td></tr>
</tbody>
</table>

SI references:

[1] M. V. Holland-Cunz, J. Friedl, U. Stimming, *J. Electroanal. Chem.* **2018**, *819*, 306-311.

---PAGE_BREAK---

[2] Y. Li, J. Parrondo, S. Sankarasubramanian, V. Ramani, *J. Phys. Chem. C* **2019**, *123*, 6370-6378.

[3] M. A. Miller, A. Bourke, N. Quill, J. S. Wainright, R. P. Lynch, D. N. Buckley, R. F. Savinell, *J. Electrochem. Soc.* **2016**, *163*, A2095.

[4] A. A. Shah, M. J. Watt-Smith, F. C. Walsh, *Electrochim. Acta* **2008**, *53*, 8087-8100.

[5] W. Li, Z. Zhang, Y. Tang, H. Bian, T.-W. Ng, W. Zhang, C.-S. Lee, *Adv. Sci.* **2016**, *3*, 1500276.

[6] K. V. Greco, A. Forner-Cuenca, A. Mularczyk, J. Eller, F. R. Brushett, *ACS Appl. Mater. Interfaces* **2018**, *10*, 44430-44442.

[7] D. Aaron, C.-N. Sun, M. Bright, A. B. Papandrew, M. M. Mench, T. A. Zawodzinski, *ECS Electrochem. Lett.* **2013**, *2*, A29.

[8] X. W. Wu, T. Yamamura, S. Ohta, Q. X. Zhang, F. C. Lv, C. M. Liu, K. Shirasaki, I. Satoh, T. Shikama, D. Lu, S. Q. Liu, *J. Appl. Electrochem.* **2011**, *8*.
samples_new/texts_merged/1168240.md ADDED

@@ -0,0 +1,345 @@
---PAGE_BREAK---

# Approximating quadratic programming with bound constraints

Yinyu Ye*

Department of Management Sciences
The University of Iowa
Iowa City, Iowa 52242, U.S.A.

March 31, 1997

## Abstract

We consider the problem of approximating the global maximum of a quadratic program (QP) with $n$ variables subject to bound constraints. Based on the results of Goemans and Williamson [4] and Nesterov [6], we show that a $4/7$ approximate solution can be obtained in polynomial time.

**Key words.** Quadratic programming, global maximizer, approximation algorithm

*This author is supported in part by NSF grant DMI-9522507.

---PAGE_BREAK---

# 1 Introduction

Consider the quadratic programming (QP) problem

$$
\begin{array}{ll}
\text{(QP)} \quad q(Q) := & \text{Maximize} \quad q(x) := x^T Q x \\
& \text{Subject to} \quad -e \leq x \leq e,
\end{array}
$$

where $Q \in \mathbb{R}^{n \times n}$ is given and $e \in \mathbb{R}^n$ is the vector of all ones. Let $x = x(Q)$ be a maximizer of the problem. In this paper, without loss of generality, we assume that $x \neq 0$.

Normally, there is a linear term in the objective function: $q(x) = x^T Q x + c^T x$. However, the problem can be homogenized as

$$
\begin{array}{ll}
\text{Maximize} & q(x, t) := x^T Q x + t c^T x \\
\text{Subject to} & -e \leq x \leq e, \quad -1 \leq t \leq 1
\end{array}
$$

by adding a scalar variable $t$. Since the objective is linear in $t$ for fixed $x$, there is always an optimal solution $(x, t)$ of this problem in which $t = 1$ or $t = -1$. If $t = 1$, then $x$ is also optimal for the non-homogeneous problem; if $t = -1$, then $-x$ is optimal for the non-homogeneous problem, since $q(x, -1) = q(-x, 1)$. Thus, without loss of generality, we can let $q(x) = x^T Q x$ throughout this paper.
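This homogenization can be checked numerically on a small instance; the $Q$ and $c$ below are hypothetical, and the grid search is only an illustration, not the approximation algorithm itself.

```python
import itertools

# A small hypothetical instance (Q, c are illustrative, not from the paper).
n = 2
Q = [[1.0, -2.0], [-2.0, 0.5]]
c = [0.7, -1.3]

def q_nonhom(x):
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return quad + sum(c[i] * x[i] for i in range(n))

def q_hom(x, t):
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return quad + t * sum(c[i] * x[i] for i in range(n))

# Coarse symmetric grid on [-1, 1]^2; since q_hom is linear in t for fixed x,
# only t = 1 and t = -1 need to be tried.
grid = [i / 10 for i in range(-10, 11)]
best_nonhom = max(q_nonhom([a, b]) for a, b in itertools.product(grid, repeat=2))
best_hom = max(q_hom([a, b], t)
               for a, b in itertools.product(grid, repeat=2) for t in (1, -1))

# q_hom(x, -1) = q_nonhom(-x), so the two maxima agree on a symmetric grid.
print(best_nonhom, best_hom)
```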
The function $q(x)$ has a minimizer and a maximizer over the bounded feasible set $-e \leq x \leq e$. Let $\underline{q} := -q(-Q)$ and $\bar{q} := q(Q)$ denote the minimal and maximal objective values, respectively. An $\epsilon$-maximal solution or $\epsilon$-maximizer, $\epsilon \in [0, 1]$, for (QP) is a feasible point $-e \leq x \leq e$ such that

$$ \frac{\bar{q} - q(x)}{\bar{q} - \underline{q}} \leq \epsilon. $$

Note that according to this definition any feasible solution $x$ is a 1-maximizer.

Recently, there have been several significant results on approximating specific quadratic problems. Goemans and Williamson [4] proved an approximation result for the Maxcut problem with $\epsilon \leq 1 - 0.878$. Nesterov [6] generalized their result to approximating the boolean QP problem

$$
\begin{array}{ll}
\text{Maximize} & q(x) = x^T Q x \\
\text{Subject to} & |x_j| = 1, \ j = 1, \dots, n,
\end{array}
$$

with $\epsilon \leq 4/7$. Some negative results were given by Bellare and Rogaway [1].

There are also several approximation algorithms for (QP) when the feasible set is a convex polytope. Pardalos and Rosen [8] developed a partitioning and linear-programming based algorithm with an approximation bound $\epsilon = \epsilon(Q)$, where $\epsilon(Q)$, a function of the QP data, is less than 1. Vavasis [10] and Ye [11] developed polynomial-time algorithms, based on solving a ball-constrained quadratic problem, to compute a $(1 - \frac{1}{n^2})$-maximal solution. When the polytope is $\{x : -e \leq x \leq e\}$, Fu, Luo and Ye [2] further proved a $(1 - \frac{1}{n})$ polynomial-time bound.

In this note, we extend Goemans and Williamson's and Nesterov's results to approximating (QP), establishing the same $4/7$ bound for this problem. The result is based on a modification of Goemans and Williamson's algorithm and a generalization of Nesterov's proof technique.

## 2 Positive Semi-Definite Relaxation

The approximation algorithm for (QP) solves the positive semi-definite programming (SDP) relaxation

$$
\begin{array}{ll}
\text{(SDP)} \quad s(Q) := & \text{Maximize} \quad \langle Q, X \rangle \\
& \text{Subject to} \quad d(X) \leq e, \quad X \succeq 0.
\end{array}
\tag{1}
$$

Here, $X \in \mathbb{R}^{n \times n}$ is a symmetric matrix, $\langle \cdot, \cdot \rangle$ is the matrix inner product $\langle Q, X \rangle = \operatorname{trace}(QX)$, $d(X)$ is the vector of the diagonal components of $X$, and $X \succeq Z$ means that $X - Z$ is positive semi-definite.

The dual of the problem is

$$
\begin{array}{ll}
s(Q) = & \text{Minimize} \quad e^T y \\
& \text{Subject to} \quad D(y) \succeq Q, \quad y \geq 0,
\end{array}
\tag{2}
$$

where $D(y)$ is the diagonal matrix such that $d(D(y)) = y \in \mathbb{R}^n$. Denote by $X(Q)$ and $y(Q)$ an optimal solution pair of the primal (1) and dual (2).

The positive semi-definite relaxation was first proposed by Lovász and Schrijver [5]; see also the recent papers by Fujie and Kojima [3] and Poljak, Rendl and Wolkowicz [9]. The relaxation problem can be solved in polynomial time, e.g., see Nesterov and Nemirovskii [7].

We have the following relations between (QP) and (SDP).

**Proposition 1** Let $\bar{q} = q(Q)$, $\underline{q} = -q(-Q)$, $\bar{s} = s(Q)$, $\underline{s} = -s(-Q)$, and $\underline{y} = -y(-Q)$. Then,

1. $\underline{q}$ is the minimal objective value of $x^T Q x$ over the feasible set of (QP);

2. $\underline{s} = e^T \underline{y}$ and it is the minimal objective value of $\langle Q, X \rangle$ over the feasible set of (SDP);

3. $$ \underline{s} = -s(-Q) \leq \underline{q} = -q(-Q) \leq q(Q) = \bar{q} \leq s(Q) = \bar{s}. $$

---PAGE_BREAK---

**Proof.** The first and second statements are straightforward to verify. Let $X = x(Q)x(Q)^T \in \mathbb{R}^{n \times n}$. Then $X \succeq 0$, $d(X) \leq e$ and $\langle Q, X \rangle = q(x(Q)) = q(Q)$. Thus, we have $q(Q) = \langle Q, X \rangle \leq s(Q)$. Similarly, we can prove $q(-Q) \leq s(-Q)$, or $-s(-Q) \leq -q(-Q)$. ■
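The rank-one lifting $X = x x^T$ used in this proof is easy to verify concretely; the $Q$ and $x$ below are hypothetical.

```python
import numpy as np

# Hypothetical symmetric Q and a feasible point x for (QP) (n = 3).
Q = np.array([[2.0, -1.0, 0.5],
              [-1.0, 0.0, 1.0],
              [0.5, 1.0, -1.5]])
x = np.array([1.0, -0.4, 1.0])   # satisfies -e <= x <= e

X = np.outer(x, x)               # the lifting X = x x^T

# X is feasible for (SDP): rank-one PSD, with d(X)_j = x_j^2 <= 1.
assert np.all(np.diag(X) <= 1 + 1e-12)
assert np.all(np.linalg.eigvalsh(X) >= -1e-12)

# <Q, X> = trace(QX) recovers the QP objective x^T Q x exactly,
# which is why s(Q) >= q(Q).
print(np.trace(Q @ X), x @ Q @ x)
```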
In what follows, we also let $x = x(Q)$ and $X = X(Q)$. Since $X$ is positive semi-definite, there is a factorization $X = V^T V$, where $V = (v_1, \dots, v_n) \in \mathbb{R}^{n \times n}$ and $v_j$ is the $j$th column of $V$. The algorithm, similar to that of Goemans and Williamson [4], generates a random vector $u$ uniformly distributed on an $n$-dimensional unit ball and then assigns

$$ \hat{x} = D\sigma(V^T u), \tag{3} $$

where

$$ D = \operatorname{diag}(\|v_1\|, \dots, \|v_n\|) = \operatorname{diag}(\sqrt{x_{11}}, \dots, \sqrt{x_{nn}}), $$

and for any $x \in \mathbb{R}^n$, $\sigma(x)$ is the vector whose components are $\operatorname{sign}(x_j)$, $j = 1, \dots, n$; that is, $\operatorname{sign}(x_j) = 1$ if $x_j \geq 0$ and $\operatorname{sign}(x_j) = -1$ otherwise.

It is easily seen that $\hat{x}$ is a feasible point for (QP), and we will show later that the expected objective value, $E_u\, q(\hat{x})$, satisfies

$$ \frac{\bar{q} - E_u\, q(\hat{x})}{\bar{q} - \underline{q}} \leq \frac{\pi}{2} - 1 \leq \frac{4}{7}. $$
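A minimal sketch of the rounding (3), assuming a feasible $X$ is already at hand; the small matrix below stands in for an SDP solution and is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feasible SDP point X (symmetric PSD with diagonal <= 1); in
# the actual algorithm X would be an optimal solution of the relaxation (1).
X = np.array([[1.0, 0.6, -0.3],
              [0.6, 1.0, 0.2],
              [-0.3, 0.2, 0.5]])

# Factor X = V^T V; an eigendecomposition handles the PSD (possibly
# rank-deficient) case where a Cholesky factorization may fail.
w, U = np.linalg.eigh(X)
V = np.sqrt(np.clip(w, 0.0, None))[:, None] * U.T   # V^T V = U diag(w) U^T = X

D = np.sqrt(np.diag(X))          # ||v_j|| = sqrt(x_jj)
u = rng.normal(size=3)
u /= np.linalg.norm(u)           # random direction on the unit sphere

sigma = np.where(V.T @ u >= 0, 1.0, -1.0)  # sign(v_j^T u), with sign(0) = 1
x_hat = D * sigma                           # the rounding (3)

print(x_hat)                     # each |x_hat_j| = sqrt(x_jj) <= 1
```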
| 131 |
+
|
| 132 |
+
# 3 Approximation Analysis
|
| 133 |
+
|
| 134 |
+
The following two lemmas are analogues to Lemmas 1 and 2 of Nesterov [6].
|
| 135 |
+
|
| 136 |
+
**Lemma 1**
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
\begin{array}{l@{\quad}l}
|
| 140 |
+
\text{Maximize} & \sigma(V^T u)^T D Q D \sigma(V^T u) \\
|
| 141 |
+
\text{Subject to} & \|v_j\| \le 1, \quad j = 1, \dots, n, \quad \|u\| = 1, \\
|
| 142 |
+
\text{where} & D = \operatorname{diag}(\|v_1\|, \dots, \|v_n\|).
|
| 143 |
+
\end{array}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
**Proof.** Since $D\sigma(V^Tu)$ is a feasible point for (QP) for any feasible $V$ and $u$, we have
|
| 147 |
+
|
| 148 |
+
$$ q(Q) \geq \sigma(V^T u)^T D Q D \sigma(V^T u). $$
|
| 149 |
+
|
| 150 |
+
On the other hand, for any fixed $u$ with $\|u\| = 1$, we let $v_j = x_j u$, $j = 1, \dots, n$. Then $D\sigma(V^Tu) = x$.
|
| 151 |
+
Thus, for a particular feasible $V$ and $u$ we have
|
| 152 |
+
|
| 153 |
+
$$ q(Q) = q(x) \leq \sigma(V^T u)^T D Q D \sigma(V^T u). $$
|
| 154 |
+
|
| 155 |
+
These two give the desired result. ■
|
| 156 |
+
---PAGE_BREAK---
|
| 157 |
+
|
| 158 |
+
**Lemma 2**
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
\begin{array}{ll}
|
| 162 |
+
q(Q) = & \text{Maximize} \quad \mathbb{E}_u(\sigma(V^T u)^T D Q D \sigma(V^T u)) \\
|
| 163 |
+
& \text{Subject to} \quad \|v_j\| \le 1, j = 1, \dots, n, \\
|
| 164 |
+
\text{where} & \\
|
| 165 |
+
& D = \text{diag}(\|v_1\|, \dots, \|v_n\|).
|
| 166 |
+
\end{array}
|
| 167 |
+
$$
|
| 168 |
+
|
| 169 |
+
**Proof.** Again, since $D\sigma(V^T u)$ is a feasible point for (QP), we have for any feasible $V$
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
q(Q) \geq \mathbb{E}_u (\sigma(V^T u)^T D Q D \sigma(V^T u)).
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
On the other hand, for any fixed $u$ with $\|u\| = 1$, we have
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\mathbb{E}_u (\sigma(V^T u)^T D Q D \sigma(V^T u)) = \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij} \|v_i\| \|v_j\| \mathbb{E}_u (\sigma(v_i^T u) \sigma(v_j^T u)). \quad (4)
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
Let us choose $v_i = \frac{\bar{x}_i}{\|\bar{x}\|} x$, $i = 1, \dots, n$. Then
|
| 182 |
+
|
| 183 |
+
$$
|
| 184 |
+
\mathbb{E}_u(\sigma(v_i^T u)\sigma(v_j^T u)) = \begin{cases} 1 & \text{if } \sigma(x_i) = \sigma(x_j) \\ -1 & \text{otherwise.} \end{cases}
|
| 185 |
+
$$
|
| 186 |
+
|
| 187 |
+
Thus,
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
\|v_i\| \|v_j\| \mathbb{E}_u (\sigma(v_i^T u) \sigma(v_j^T u)) = x_i x_j
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
which implies that for a particular feasible V
|
| 194 |
+
|
| 195 |
+
$$
|
| 196 |
+
q(Q) = q(x) \leq \mathbb{E}_u (\sigma(V^T u)^T D Q D \sigma(V^T u)).
|
| 197 |
+
$$
|
| 198 |
+
|
| 199 |
+
These two give the desired result. ■
|
| 200 |
+
|
| 201 |
+
For any function of one variable $f(t)$ and $X \in \mathbb{R}^{n \times n}$, let $f[X] \in \mathbb{R}^{n \times n}$ be the matrix with the components $f(x_{ij})$. For example, $[X]^p$ denotes a matrix with the components $x_{ij}^p$. Nesterov [6] has also proved the next technical lemma.
|
| 202 |
+
|
| 203 |
+
**Lemma 3** Let $X \succeq 0$ and $d(X) \le e$. Then $\arcsin[X] \succeq X$. ■
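Lemma 3 can also be checked numerically. The following Python sketch is only an illustration, not part of the proof: it samples Gram matrices with unit diagonal and verifies that the entrywise arcsin dominates $X$ in the semidefinite order. The dimensions, random seed and tolerance are arbitrary choices.

```python
import numpy as np

# Illustrative check of Lemma 3: for X >= 0 with unit diagonal,
# arcsin[X] (arcsin applied entrywise) dominates X in the PSD order.
rng = np.random.default_rng(0)
for _ in range(100):
    V = rng.standard_normal((4, 6))
    V /= np.linalg.norm(V, axis=0)      # unit columns, so d(X) = e
    X = np.clip(V.T @ V, -1.0, 1.0)     # guard against rounding before arcsin
    gap = np.arcsin(X) - X              # entrywise arcsin
    assert np.linalg.eigvalsh(gap).min() > -1e-9
```

The check succeeds because $\arcsin(t) - t$ has a power series with nonnegative coefficients, so $\arcsin[X] - X$ is a sum of Schur powers of $X$, each positive semidefinite.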
|
| 204 |
+
|
| 205 |
+
Now we are ready to prove the following theorem.
|
| 206 |
+
|
| 207 |
+
**Theorem 1**
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
\begin{array}{ll}
|
| 211 |
+
q(Q) = & \text{Supremum} \quad \frac{2}{\pi} \langle Q, D \arcsin[D^{-1} X D^{-1}] D \rangle \\
|
| 212 |
+
& \text{Subject to} \quad d(X) \le e, X > 0,
|
| 213 |
+
\end{array}
|
| 214 |
+
$$
|
| 215 |
+
|
| 216 |
+
where
|
| 217 |
+
|
| 218 |
+
$$
|
| 219 |
+
D = \operatorname{diag}(\sqrt{x_{11}}, \ldots, \sqrt{x_{nn}}).
|
| 220 |
+
$$
|
| 221 |
+
---PAGE_BREAK---
|
| 222 |
+
|
| 223 |
+
**Proof.** For any $X = V^T V > 0$, $d(X) \le e$, we have
|
| 224 |
+
|
| 225 |
+
$$E_u(\sigma(v_i^T u)\sigma(v_j^T u)) = 1 - 2\text{Pr}\{\sigma(v_i^T u) \neq \sigma(v_j^T u)\} = 1 - 2\text{Pr}\{\sigma(\frac{v_i^T u}{\|v_i\|}) \neq \sigma(\frac{v_j^T u}{\|v_j\|})\}.$$
|
| 226 |
+
|
| 227 |
+
From Lemma 1.2 of Goemans and Williamson [4], we have
|
| 228 |
+
|
| 229 |
+
$$\mathrm{Pr}\{\sigma(\frac{v_i^T u}{\|v_i\|}) \neq \sigma(\frac{v_j^T u}{\|v_j\|})\} = \frac{1}{\pi} \arccos(\frac{v_i^T v_j}{\|v_i\|\|v_j\|}).$$
|
| 230 |
+
|
| 231 |
+
Using the above probability together with equality (4) and the identity $\arcsin(t) + \arccos(t) = \frac{\pi}{2}$, we obtain $\mathbb{E}_u(\sigma(v_i^T u)\sigma(v_j^T u)) = \frac{2}{\pi} \arcsin\left(\frac{v_i^T v_j}{\|v_i\|\|v_j\|}\right)$, which together with Lemma 2 gives the desired result. ■
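The probability identity from Goemans and Williamson [4] used here is easy to illustrate with a Monte Carlo experiment; the dimension, sample size and seed below are arbitrary.

```python
import numpy as np

# Monte Carlo illustration of Pr{sigma(a^T u) != sigma(b^T u)} = arccos(a^T b)/pi
# for unit vectors a, b and u uniform on the unit sphere.
rng = np.random.default_rng(1)
d = 5
a = rng.standard_normal(d); a /= np.linalg.norm(a)
b = rng.standard_normal(d); b /= np.linalg.norm(b)
# Directions of standard Gaussians are uniform on the sphere, and the
# signs sigma(.) are invariant under positive scaling, so no normalization needed.
U = rng.standard_normal((400_000, d))
empirical = np.mean(np.sign(U @ a) != np.sign(U @ b))
exact = np.arccos(np.clip(a @ b, -1.0, 1.0)) / np.pi
assert abs(empirical - exact) < 5e-3
```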
|
| 232 |
+
|
| 233 |
+
Theorem 1 leads us to
|
| 234 |
+
|
| 235 |
+
**Theorem 2** We have
|
| 236 |
+
|
| 237 |
+
1.
|
| 238 |
+
|
| 239 |
+
$$q - \underline{s} \geq \frac{2}{\pi}(s - \underline{s}).$$
|
| 240 |
+
|
| 241 |
+
2.
|
| 242 |
+
|
| 243 |
+
$$s - \underline{q} \geq \frac{2}{\pi}(s - \underline{s}).$$
|
| 244 |
+
|
| 245 |
+
3.
|
| 246 |
+
|
| 247 |
+
$$s - \underline{s} \geq q - \underline{q} \geq \frac{4 - \pi}{\pi}(s - \underline{s}).$$
|
| 248 |
+
|
| 249 |
+
**Proof.** Recall $\underline{y} = -y(-Q) \le 0$, $\underline{s} = -s(-Q) = e^T \underline{y}$, and $Q - D(\underline{y}) \succeq 0$. Thus, for any $X > 0$, $d(X) \le e$ and $D = \operatorname{diag}(\sqrt{x_{11}}, \dots, \sqrt{x_{nn}})$, we have from Theorem 1
|
| 250 |
+
|
| 251 |
+
$$
\begin{align*}
q = q(Q) &\ge \frac{2}{\pi} \langle Q, D \arcsin[D^{-1} X D^{-1}] D \rangle \\
&= \frac{2}{\pi} \langle Q - D(\underline{y}) + D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle \\
&= \frac{2}{\pi} \left( \langle Q - D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle + \langle D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle \right) \\
&\ge \frac{2}{\pi} \left( \langle Q - D(\underline{y}), D D^{-1} X D^{-1} D \rangle + \langle D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle \right) \\
&\quad (\text{since } Q - D(\underline{y}) \succeq 0 \text{ and } \arcsin[D^{-1} X D^{-1}] \succeq D^{-1} X D^{-1} \text{ by Lemma 3}) \\
&= \frac{2}{\pi} \left( \langle Q - D(\underline{y}), X \rangle + \langle D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle \right) \\
&= \frac{2}{\pi} \left( \langle Q, X \rangle - \langle D(\underline{y}), X \rangle + \langle D(\underline{y}), D \arcsin[D^{-1} X D^{-1}] D \rangle \right) \\
&= \frac{2}{\pi} \left( \langle Q, X \rangle - \underline{y}^T d(X) + \underline{y}^T d(D \arcsin[D^{-1} X D^{-1}] D) \right) \\
&= \frac{2}{\pi} \left( \langle Q, X \rangle - \underline{y}^T d(X) + \underline{y}^T \left( \frac{\pi}{2} d(X) \right) \right) \\
&\quad (\text{since the diagonal entries of } \arcsin[D^{-1} X D^{-1}] \text{ all equal } \arcsin(1) = \tfrac{\pi}{2}) \\
&= \frac{2}{\pi} \left( \langle Q, X \rangle + \left(\frac{\pi}{2} - 1\right) \underline{y}^T d(X) \right) \\
&\ge \frac{2}{\pi} \left( \langle Q, X \rangle + \left(\frac{\pi}{2} - 1\right) \underline{y}^T e \right) \quad (\text{since } 0 \le d(X) \le e \text{ and } \underline{y} \le 0) \\
&= \frac{2}{\pi} \left( \langle Q, X \rangle + \left(\frac{\pi}{2} - 1\right) \underline{s} \right).
\end{align*}
$$
|
| 274 |
+
|
| 275 |
+
Letting $X$ converge to the optimal solution $\bar{X}$ of the relaxation, we have $\langle Q, X \rangle \to s$, which gives the desired first inequality.
|
| 276 |
+
|
| 277 |
+
Replacing $Q$ with $-Q$ proves the second inequality in the theorem.
|
| 278 |
+
|
| 279 |
+
Adding the first two inequalities gives the third statement in the theorem. ■
|
| 280 |
+
|
| 281 |
+
The result indicates that the positive semi-definite relaxation value $s - \underline{s}$ is a constant-factor approximation of $q - \underline{q}$.
|
| 282 |
+
|
| 283 |
+
The following corollary can be derived from the proof of the above theorem.
|
| 284 |
+
|
| 285 |
+
**Corollary 1** Let $X = V^T V > 0$, $d(X) \le e$, $D = \operatorname{diag}(\sqrt{x_{11}}, \dots, \sqrt{x_{nn}})$, and $\hat{x} = D\sigma(V^T u)$, where $u$ with $\|u\| = 1$ is a random vector uniformly distributed on the unit sphere. Moreover, let $X \to \bar{X}$. Then,
|
| 286 |
+
|
| 287 |
+
$$
|
| 288 |
+
\lim_{X \to \bar{X}} E_u(q(\hat{x})) = \lim_{X \to \bar{X}} \frac{2}{\pi} \langle Q, D \arcsin[D^{-1}XD^{-1}]D \rangle \geq \frac{2}{\pi}s + (1 - \frac{2}{\pi})\underline{s}.
|
| 289 |
+
$$
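The rounding $\hat{x} = D\sigma(V^T u)$ behind Corollary 1 can be sketched in a few lines; here $Q$, $V$ and the random seed are arbitrary illustrative choices, and only feasibility of $\hat{x}$ for (QP) is checked.

```python
import numpy as np

# Sketch of the randomized rounding of Corollary 1: from a Gram
# factorization X = V^T V with d(X) <= e, draw u uniformly on the unit
# sphere and set x_hat = D sigma(V^T u).
rng = np.random.default_rng(2)
n = 6
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # an example symmetric Q
V = rng.standard_normal((n, n))
V /= np.maximum(np.linalg.norm(V, axis=0), 1.0)      # enforce ||v_j|| <= 1
X = V.T @ V
D = np.diag(np.sqrt(np.diag(X)))
u = rng.standard_normal(n); u /= np.linalg.norm(u)   # uniform direction
x_hat = D @ np.sign(V.T @ u)
value = x_hat @ Q @ x_hat                            # the rounded objective value
assert np.all(np.abs(x_hat) <= 1 + 1e-12)            # x_hat is feasible for (QP)
```

Feasibility holds because $|\hat{x}_i| = \sqrt{x_{ii}} = \|v_i\| \le 1$.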
|
| 290 |
+
|
| 291 |
+
Finally, we have
|
| 292 |
+
|
| 293 |
+
**Theorem 3** Let $\hat{x}$ be generated as above with $X \to \bar{X}$. Then
|
| 294 |
+
|
| 295 |
+
$$
|
| 296 |
+
\frac{q - E_u q(\hat{x})}{q - \underline{q}} \leq \frac{\pi}{2} - 1.
|
| 297 |
+
$$
|
| 298 |
+
|
| 299 |
+
**Proof.** Noting that
|
| 300 |
+
|
| 301 |
+
$$
|
| 302 |
+
s \ge q \ge \frac{2}{\pi}s + \left(1-\frac{2}{\pi}\right)\underline{s} \ge \left(1-\frac{2}{\pi}\right)s + \frac{2}{\pi}\underline{s} \ge \underline{q} \ge \underline{s}
|
| 303 |
+
$$
|
| 304 |
+
|
| 305 |
+
we have
|
| 306 |
+
|
| 307 |
+
$$
\begin{align*}
\frac{q - E_u q(\hat{x})}{q - \underline{q}} &\le \frac{q - \frac{2}{\pi}s - (1 - \frac{2}{\pi})\underline{s}}{q - \underline{q}} \\
&\le \frac{q - \frac{2}{\pi}s - (1 - \frac{2}{\pi})\underline{s}}{q - (1 - \frac{2}{\pi})s - \frac{2}{\pi}\underline{s}} \\
&\le \frac{s - \frac{2}{\pi}s - (1 - \frac{2}{\pi})\underline{s}}{s - (1 - \frac{2}{\pi})s - \frac{2}{\pi}\underline{s}} \\
&= \frac{(1 - \frac{2}{\pi})(s - \underline{s})}{\frac{2}{\pi}(s - \underline{s})} \\
&= \frac{1 - \frac{2}{\pi}}{\frac{2}{\pi}} = \frac{\pi}{2} - 1. \qquad \blacksquare
\end{align*}
$$
|
| 322 |
+
|
| 323 |
+
References
|
| 324 |
+
|
| 325 |
+
[1] M. Bellare and P. Rogaway, "The complexity of approximating a nonlinear program," *Mathematical Programming* 69 (1995) 429-442.
|
| 326 |
+
|
| 327 |
+
[2] M. Fu, Z.-Q. Luo and Y. Ye, "Approximation algorithms for quadratic programming," manuscript, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, CANADA L8S 4K1, 1996.
|
| 328 |
+
|
| 329 |
+
[3] T. Fujie and M. Kojima, "Semidefinite programming relaxation for nonconvex quadratic programs," Research Report B-298, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152, May 1995. To appear in *Journal of Global Optimization*.
|
| 330 |
+
|
| 331 |
+
[4] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for Maximum Cut and Satisfiability problems using semidefinite programming," *Journal of ACM* 42 (1995) 1115-1145.
|
| 332 |
+
|
| 333 |
+
[5] L. Lovász and A. Schrijver, "Cones of matrices and set-functions, and 0-1 optimization," *SIAM Journal on Optimization* 1 (1990) 166-190.
|
| 334 |
+
|
| 335 |
+
[6] Yu. E. Nesterov, "Quality of semidefinite relaxation for nonconvex quadratic optimization," CORE Discussion Paper, #9719, Belgium, March 1997.
|
| 336 |
+
|
| 337 |
+
[7] Yu. E. Nesterov and A. S. Nemirovskii, *Interior Point Polynomial Methods in Convex Programming: Theory and Algorithms* (SIAM Publications, SIAM, Philadelphia, 1993).
|
| 338 |
+
|
| 339 |
+
[8] P. M. Pardalos and J. B. Rosen, *Constrained Global Optimization: Algorithms and Applications* (Springer-Verlag, Lecture Notes in Computer Sciences 268, 1987).
|
| 340 |
+
|
| 341 |
+
[9] S. Poljak, F. Rendl and H. Wolkowicz, "A recipe for semidefinite relaxation for 0-1 quadratic programming," *Journal of Global Optimization* 7 (1995) 51-73.
|
| 342 |
+
|
| 343 |
+
[10] S. A. Vavasis, *Nonlinear Optimization: Complexity Issues* (Oxford Science, New York, 1991).
|
| 344 |
+
|
| 345 |
+
[11] Y. Ye, "On affine scaling algorithms for nonconvex quadratic programming," *Mathematical Programming* 56 (1992) 285-300.
|
samples_new/texts_merged/1772599.md
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
Stability Properties of Linear File-Sharing Networks
|
| 5 |
+
|
| 6 |
+
L. Leskelä, Philippe Robert, Florian Simatos
|
| 7 |
+
|
| 8 |
+
► To cite this version:
|
| 9 |
+
|
| 10 |
+
L. Leskelä, Philippe Robert, Florian Simatos. Stability Properties of Linear File-Sharing Networks.
|
| 11 |
+
2009. inria-00401104
|
| 12 |
+
|
| 13 |
+
HAL Id: inria-00401104
|
| 14 |
+
|
| 15 |
+
https://hal.inria.fr/inria-00401104
|
| 16 |
+
|
| 17 |
+
Preprint submitted on 2 Jul 2009
|
| 18 |
+
|
| 19 |
+
**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
|
| 32 |
+
---PAGE_BREAK---
|
| 33 |
+
|
| 34 |
+
# STABILITY PROPERTIES OF LINEAR FILE-SHARING NETWORKS
|
| 35 |
+
|
| 36 |
+
LASSE LESKELÄ, PHILIPPE ROBERT, AND FLORIAN SIMATOS
|
| 37 |
+
|
| 38 |
+
**ABSTRACT.** File-sharing networks are distributed systems used to disseminate files among a subset of the nodes of the Internet. A file is split into several pieces called chunks; the general principle is that once a node of the system has retrieved a chunk, it may become a server for this chunk. A stochastic model is considered for the arrival times of requests and the durations of chunk downloads. One investigates the maximal arrival rate that such a network can accommodate, i.e., the conditions under which the Markov process describing this network is ergodic. Technical estimates related to the survival of interacting branching processes are key ingredients to establish the stability of these systems. Several cases are considered: networks with one and two chunks, where a complete classification is obtained, and several cases of a network with *n* chunks.
|
| 39 |
+
|
| 40 |
+
## CONTENTS
|
| 41 |
+
|
| 42 |
+
1. Introduction 1
|
| 43 |
+
|
| 44 |
+
2. Analysis of the Single-Chunk Network 4
|
| 45 |
+
|
| 46 |
+
3. Yule Processes with Deletions 9
|
| 47 |
+
|
| 48 |
+
4. Analysis of the Multi-Chunk Network 15
|
| 49 |
+
|
| 50 |
+
Appendix A. Proof of Proposition 3.3 21
|
| 51 |
+
|
| 52 |
+
References 24
|
| 53 |
+
|
| 54 |
+
## 1. INTRODUCTION
|
| 55 |
+
|
| 56 |
+
File-sharing networks are distributed systems used to disseminate information among a subset of the nodes of the Internet (overlay network). The general simple principle is the following: once a node of the system has retrieved a file it becomes a server for this file. The advantage of this scheme is that it disseminates information in a very efficient way as long as the number of servers is growing rapidly. The growth of the number of servers is not necessarily without bounds since a node having this file may stop being a server after some time. These schemes have been used for some time now in peer-to-peer systems such as BitTorrent or Emule, for example to distribute large files over the Internet.
|
| 57 |
+
|
| 58 |
+
An improved version of this principle consists in splitting the original file into several pieces (called “chunks”) so that a given node can retrieve simultaneously several chunks of the same file from different servers. In this case, the rate to get a given file may thus increase significantly. At the same time, the global capacity of
|
| 59 |
+
|
| 60 |
+
*Date:* July 2, 2009.
|
| 61 |
+
|
| 62 |
+
*Key words and phrases.* Peer-to-Peer Algorithms; Killed Branching Processes;
|
| 63 |
+
|
| 64 |
+
Work partially supported by SCALP Project funded by EEC Network of Excellence Euro-FGI, and the Academy of Finland.
|
| 65 |
+
---PAGE_BREAK---
|
| 66 |
+
|
| 67 |
+
the file-sharing system is also increased since a node becomes a server of a chunk as soon as it has retrieved it and not only when it has the whole file. This improvement has interesting algorithmic implications since each node has to establish a matching between chunks and servers. Strategies to maximize the global efficiency of the file sharing systems have to be devised. See for instance Massoulié and Vojnović [12], Bonald et al. [4] and Massoulié and Twigg [11].
|
| 68 |
+
|
| 69 |
+
The efficiency of these systems can be considered from different points of view.
|
| 70 |
+
|
| 71 |
+
**Transient behavior:** A new file is owned by one node; given that there are potentially *N* other nodes interested in it, how long does it take until a given node retrieves it? Until a significant fraction $\alpha \in (0, 1]$ of the *N* nodes retrieves it? See Yang and de Veciana [26] and Simatos et al. [22]. See also Robert and Simatos [19].
|
| 72 |
+
|
| 73 |
+
**Stationary behavior:** A constant flow of requests enters, is the capacity of the file-sharing system sufficient to cope with this flow ?
|
| 74 |
+
|
| 75 |
+
In this paper, the stationary behavior is investigated in a stochastic context: arrival times are random as well as chunk transmission times. In this setting mathematical studies are quite scarce, see Qiu and Srikant [17], Simatos et al. [22], Susitaival et al. [24] and references therein. A simple strategy to disseminate chunks is considered: chunks are retrieved sequentially and a given node can be the server of only the last chunk it got. See Massoulié and Vojnović [12] and Parvez et al. [16] for a detailed motivation of this situation.
|
| 76 |
+
|
| 77 |
+
In this paper, the sequential scheme for disseminating a file that is divided into $n$ chunks is analyzed. New requests arrive according to a Poisson process at rate $\lambda$, and become downloaders of chunk 1. Users who have obtained chunks $1, \dots, k$ act simultaneously as uploaders of chunk $k$ and downloaders of chunk $k+1$, and the users who have all the chunks leave the network at rate $\nu$. The transmission rate of chunk $k$ is denoted by $\mu_k$, and $x_k$ is the number of users having obtained chunks $1, \dots, k$. In this way, the total transmission rate of chunk $k$ in the network is $\mu_k x_k$. The flow of users can be modeled as the linear network depicted in Figure 1.
|
| 85 |
+
|
| 86 |
+
FIGURE 1. Transition rates of the linear network outside boundaries.
|
| 87 |
+
|
| 88 |
+
The main problem analyzed in the paper is the determination of a constant $\lambda^*$ such that if $\lambda < \lambda^*$ [resp. $\lambda > \lambda^*$], then the associated Markov process is ergodic [resp. transient]. As it will be seen, the constant $\lambda^*$ may be infinite in some cases so that the file-sharing network is always stable independently of the value of $\lambda$. The main technical difficulty to prove stability/instability results for this class of stochastic networks is that, except for the input, the Markov process has unbounded jump rates, in fact proportional to one of the coordinates of the current state. Note that loss networks have also this characteristic but in this case, the stability problem is trivial since the state space is finite. See Kelly [8].
|
| 89 |
+
|
| 90 |
+
**Fluid Limits for File-Sharing Networks.** Classically, to analyze the stability properties of stochastic networks, one can use the limits of a scaling of the Markov
|
| 91 |
+
---PAGE_BREAK---
|
| 92 |
+
|
| 93 |
+
process, the so-called fluid limits. The scaling consists in speeding up time by the norm $\|x\|$ of the initial state $x$, scaling the state vector by $1/\|x\|$, and letting $\|x\|$ go to infinity. See Bramson [5], Chen and Yao [6] and Robert [18] for example. This scaling is, however, better suited to "locally additive" processes, that is, Markov processes that behave locally as random walks. Since the transition rates are unbounded, it may occur that the corresponding fluid limits have discontinuities; this complicates a lot the analysis of a possible limiting dynamical system. Roughly speaking, this is due to the fact that, because of the unbounded transition rates, events occur on the time scale $t \mapsto t \log \|x\|$ instead of $t \mapsto \|x\|t$. See the case of the $M/M/\infty$ queue in Chapter 9 of Robert [18], and Simatos and Tibi [23] for a discussion of this phenomenon in a related context.
|
| 104 |
+
|
| 105 |
+
A "fluid scaling" is nevertheless available for file-sharing networks. A natural candidate $(x_i(t))$ for this limiting picture would satisfy the following differential equations,
|
| 106 |
+
|
| 107 |
+
$$ (1) \qquad \begin{cases} \dot{x}_0(t) = \lambda - \mu_1 x_1(t), \\ \dot{x}_i(t) = \mu_i x_i(t) - \mu_{i+1} x_{i+1}(t), & 1 \le i \le n-1, \\ \dot{x}_n(t) = \mu_n x_n(t) - \nu x_n(t). \end{cases} $$
|
| 108 |
+
|
| 109 |
+
For the sake of simplicity the behavior at the boundaries $\{x : x_i = 0\}$, $i \ge 1$, is ignored in the above equations. This has been, up to now, one of the main tools to investigate mathematical models of file-sharing networks. See Qiu and Srikant [17] and Núñez-Queija and Prabhu [15] for example. In the context of loss networks, an analogous limiting picture can be rigorously justified when the input rates and buffer sizes are scaled by some $N$ and the state variable by $1/N$. This scaling is not useful here, since the problem is precisely to determine the values of $\lambda$ for which the associated Markov process is ergodic, whereas in the above scaling $\lambda$ itself is scaled. From this point of view Equations (1) are therefore quite informal. They can nevertheless give some insight into the qualitative behavior of these networks, but they cannot apparently be used to prove stability results. Their interpretation near the boundaries is in particular not clear.
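Even though Equations (1) are informal, they can be integrated numerically away from the boundaries to get a feeling for the dynamics. The sketch below uses forward Euler with purely illustrative parameter values for $n = 2$ chunks.

```python
import numpy as np

# Forward-Euler integration of the informal fluid equations (1),
# ignoring the boundaries {x_i = 0} as in the text.
lam, nu = 1.0, 2.0            # illustrative arrival and departure rates
mu = [1.5, 1.5]               # mu_1, ..., mu_n (here n = 2)
n = len(mu)
x = np.array([5.0, 1.0, 1.0]) # state (x_0, x_1, ..., x_n)
dt = 1e-3
for _ in range(2_000):        # integrate up to t = 2
    dx = np.empty_like(x)
    dx[0] = lam - mu[0] * x[1]
    for i in range(1, n):
        dx[i] = mu[i - 1] * x[i] - mu[i] * x[i + 1]
    dx[n] = mu[n - 1] * x[n] - nu * x[n]
    x = x + dt * dx
```

Note that $x_0$ may go negative in this sketch, reflecting exactly the boundary issue discussed above.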
|
| 121 |
+
|
| 122 |
+
**Interacting Branching Processes.** Since scaling techniques do not apply here, one needs to resort to different techniques to study stability: coupling the linear file-sharing network with interacting branching processes is a key idea. For $i \ge 1$, without the departures the process $(X_i(t))$ would be a branching process where individuals give birth to one child at rate $\mu_i$. This description of such a file-sharing system as a branching process is quite natural. It has been used to analyze the transient behavior of these systems. See Yang and de Veciana [26], Dang *et al.* [7] and Simatos *et al.* [22]. A departure for $(X_i(t))$ can be seen as a death of an individual of class *i* and at the same time as a birth of an individual of class *i*+1. The file-sharing network can thus be described as a system of interacting branching processes with a constant input rate $\lambda$.
|
| 123 |
+
|
| 124 |
+
To tackle the general problem of stability, several key ingredients are used in
|
| 125 |
+
this paper: Lyapunov functions, coupling arguments and precise estimations of
|
| 126 |
+
the growth of a branching process killed by another branching process. As it will
|
| 127 |
+
be seen, several results used come from the branching process formulation of the
|
| 128 |
+
stochastic model. In particular Section 3 is devoted to the derivation of results
|
| 129 |
+
concerning killed branching processes. The stability properties of networks with
|
| 130 |
+
---PAGE_BREAK---
|
| 131 |
+
|
| 132 |
+
a single-chunk file are analyzed in detail in Section 2. In Section 4, file-sharing networks with $n$ chunks are studied and the case $n = 2$ is investigated thoroughly.
|
| 133 |
+
|
| 134 |
+
**Acknowledgements.**
|
| 135 |
+
|
| 136 |
+
This paper has benefited from various interesting discussions with S. Borst, I. Norros, R. Núñez-Queija, B.J. Prabhu, and H. Reittu.
|
| 137 |
+
|
| 138 |
+
## 2. ANALYSIS OF THE SINGLE-CHUNK NETWORK
|
| 139 |
+
|
| 140 |
+
This section is devoted to the study of a class of two-dimensional Markov jump processes $(X_0(t), X_1(t))$, the corresponding Q-matrix $\Omega_r$ is given, for $x = (x_0, x_1) \in \mathbb{N}^2$, by
|
| 141 |
+
|
| 142 |
+
$$ (2) \quad \begin{cases} \Omega_r[(x_0, x_1), (x_0 + 1, x_1)] = \lambda, \\ \Omega_r[(x_0, x_1), (x_0 - 1, x_1 + 1)] = \mu r(x_0, x_1) (x_1 \lor 1) \mathbf{1}_{\{x_0>0\}}, \\ \Omega_r[(x_0, x_1), (x_0, x_1 - 1)] = \nu x_1, \end{cases} $$
|
| 143 |
+
|
| 144 |
+
where $x \mapsto r(x)$, referred to as the *rate function*, is some fixed function on $\mathbb{N}^2$ with values in $[0, 1]$, and $n \lor m$ denotes $\max(n, m)$ for $n, m \in \mathbb{N}$. This corresponds to a more general model than the linear file-sharing network of Figure 1 in the case $n=1$, where for the sake of simplicity $\mu_1$ is denoted $\mu$ in this section.
|
| 145 |
+
|
| 146 |
+
From a modeling perspective, this Markov process describes the following system. Requests for a single file arrive at rate $\lambda$; the first component $X_0(t)$ is the number of requests which did not yet get the file, whereas the second component is the number of requests having the file and acting as servers until they leave the file-sharing network. The constant $\mu$ can be viewed as the file transmission rate, and $\nu$ as the rate at which servers leave. The term $r(x_0, x_1)$ describes the interaction of downloaders and uploaders in the system. The term $x_1 \lor 1$ can be interpreted as the presence of one permanent server in the network, which is contacted only if there are no other uploader nodes in the system. A related system where there is always one permanent server for the file can be modeled by replacing the term $x_1 \lor 1$ by $x_1 + 1$. See the remark at the end of this section.
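The dynamics encoded by the Q-matrix (2) are straightforward to simulate. The Gillespie-style sketch below uses the rate function $r(x_0, x_1) = x_0/(x_0 + x_1)$ from one of the examples discussed in this section, with arbitrary illustrative parameter values.

```python
import random

# Event-by-event simulation of the single-chunk chain with Q-matrix (2).
random.seed(0)
lam, mu, nu = 1.0, 2.0, 1.0      # illustrative rates

def r(x0, x1):                   # example rate function
    return x0 / (x0 + x1) if x0 + x1 > 0 else 0.0

x0, x1, t = 0, 0, 0.0
while t < 1000.0:
    rates = [lam,
             mu * r(x0, x1) * max(x1, 1) if x0 > 0 else 0.0,
             nu * x1]
    total = sum(rates)
    t += random.expovariate(total)   # exponential holding time
    c = random.random() * total
    if c < rates[0]:
        x0 += 1                      # arrival of a new request
    elif c < rates[0] + rates[1]:
        x0 -= 1; x1 += 1             # download completes, request becomes a server
    elif x1 > 0:
        x1 -= 1                      # a server leaves the network
```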
|
| 147 |
+
|
| 148 |
+
Several related examples of this class of models have been recently investigated. The case
|
| 149 |
+
|
| 150 |
+
$$ r(x_0, x_1) = \frac{x_0}{x_0 + x_1} $$
|
| 151 |
+
|
| 152 |
+
is considered in Núñez-Queija and Prabhu [15] and Massoulié and Vojnović [12]; in this case the downloading time of the file is neglected. Susitaival et al. [24] analyzes the rate function $r(x)$
|
| 153 |
+
|
| 154 |
+
$$ r(x_0, x_1) = 1 \wedge \left( \alpha \frac{x_0}{x_1} \right) $$
|
| 155 |
+
|
| 156 |
+
with $\alpha > 0$, where $a \land b$ denotes $\min(a, b)$ for $a, b \in \mathbb{R}$. This model takes into account the fact that a request cannot be served by more than one server. See also Qiu and Srikant [17].
|
| 157 |
+
|
| 158 |
+
With a slight abuse of notation, for $0 < \delta \le 1$, the matrix $\Omega_\delta$ will refer to the case when the function $r$ is identically equal to $\delta$. Note that the boundary condition $x_1 \lor 1$ for departures from the first queue prevents the second coordinate from ending up in the absorbing state 0. Other possibilities are discussed at the end of this section. In the following $(X^r(t)) = (X_0^r(t), X_1^r(t))$ [resp. $(X^\delta(t))$] will denote a Markov process with Q-matrix $\Omega_r$ [resp. $\Omega_\delta$].
|
| 159 |
+
---PAGE_BREAK---
|
| 160 |
+
|
| 161 |
+
**Free Process.** For $\delta > 0$, $Q_\delta$ denotes the following $Q$-matrix
|
| 162 |
+
|
| 163 |
+
$$ (3) \qquad \begin{cases} Q_\delta[(y_0, y_1), (y_0 + 1, y_1)] = \lambda, \\ Q_\delta[(y_0, y_1), (y_0 - 1, y_1 + 1)] = \mu\delta(y_1 \vee 1), \\ Q_\delta[(y_0, y_1), (y_0, y_1 - 1)] = \nu y_1. \end{cases} $$
|
| 164 |
+
|
| 165 |
+
The process $(Y^\delta(t)) = (Y_0^\delta(t), Y_1^\delta(t))$, referred to as the free process, will denote a Markov process with $Q$-matrix $Q_\delta$. Note that the first coordinate $Y_0^\delta$ may become negative. The second coordinate $(Y_1^\delta(t))$ of the free process is a classical birth-and-death process. It is easily checked that if $\rho_\delta$ defined as $\delta\mu/\nu$ is such that $\rho_\delta < 1$, then $(Y_1^\delta(t))$ is an ergodic Markov process converging in distribution to $Y_1^\delta(\infty)$ and that
|
| 166 |
+
|
| 167 |
+
$$ (4) \quad \lambda^*(\delta) \stackrel{\text{def.}}{=} \nu \mathbb{E}(Y_1^\delta(\infty)) = \mu \mathbb{E}(Y_1^\delta(\infty) \vee 1) = \frac{\delta \mu}{(1 - \rho_\delta)(1 - \log(1 - \rho_\delta))}. $$
|
| 168 |
+
|
| 169 |
+
When $\rho_\delta > 1$, the process $(Y^\delta(t))$ converges almost surely to infinity. In the sequel, $\lambda^*(1)$ is simply denoted $\lambda^*$.
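The closed form in Equation (4) is easy to evaluate. The helper below simply transcribes it (valid for $\rho_\delta < 1$) and checks the limiting behavior at both ends of the range.

```python
import math

# Critical arrival rate from Equation (4), valid when rho = delta*mu/nu < 1.
def lambda_star(delta, mu, nu):
    rho = delta * mu / nu
    assert 0 < rho < 1
    return delta * mu / ((1 - rho) * (1 - math.log(1 - rho)))

# As rho -> 0 the critical rate approaches delta*mu; as rho -> 1 it blows up,
# consistent with the network being stable for any lambda when rho_delta > 1.
assert abs(lambda_star(0.001, 1.0, 1.0) - 0.001) < 1e-4
assert lambda_star(0.999, 1.0, 1.0) > 100
```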
|
| 170 |
+
|
| 171 |
+
In the following it will be assumed, Condition (C) below, that the rate function $r$ converges to 1 as the first coordinate goes to infinity; as will be seen, the special case $r \equiv 1$ then plays a special role, and so before analyzing the stability properties of $(X^r(t))$, one begins with an informal discussion when the rate function $r$ is identically equal to 1. Since the departure rate from the system is proportional to the number of requests/servers in the second queue, a large number of servers in the second queue gives a high departure rate, irrespectively of the state of the first queue. The input rate of new requests being constant, the real bottleneck with respect to stability is therefore when the first queue is large. The interaction of the two processes $(X_0^1(t))$ and $(X_1^1(t))$ is expressed through the indicator function of the set $\{X_0^1(t) > 0\}$. The second queue $(X_1^1(t))$ locally behaves like the birth-and-death process $(Y_1^1(t))$ as long as $(X_0^1(t))$ is away from 0. The two cases $\rho_1 > 1$ and $\rho_1 < 1$ are considered.
If $\rho_1 > 1$, i.e., $\mu > \nu$, the process $(X_1^1(t))$ is a transient process as long as the first coordinate is non-zero. Consequently, departures from the second queue occur faster and faster. Since, on the other hand, arrivals occur at a steady rate, departures eventually outpace arrivals. The fact that the second queue grows when $(X_0(t))$ is away from 0 stabilizes the system independently of the value of $\lambda$, and so the system should be stable for any $\lambda > 0$.
If $\rho_1 < 1$, and as long as $(X_0(t))$ is away from 0, the coordinate $(X_1^1(t))$ locally behaves like the ergodic Markov process $(Y_1^1(t))$. Hence if $(X_0^1(t))$ is non-zero for long enough, the requests in the first queue see on average $\mathbb{E}(Y_1^1(\infty) \vee 1)$ servers which work at rate $\mu$. Therefore, the stability condition for the first queue should be
$$ \lambda < \mu \mathbb{E}(Y_1^1(\infty) \vee 1) = \lambda^* $$
where $\lambda^* = \lambda^*(1)$ is defined by Equation (4). If instead $\lambda > \lambda^*$, the system should be unstable.
**Markovian Notations.** In the following, the convention is that if $(U(t))$ is a Markov process, the index $u$ of $\mathbb{P}_u((U(t)) \in \cdot)$ refers to the initial condition of this Markov process.
**Transience and Recurrence Criteria for $(X^r(t))$.**
**Proposition 2.1 (Coupling).** If $X^r(0) = Y^1(0) \in \mathbb{N}^2$, there exists a coupling of the processes $(X^r(t))$ and $(Y^1(t))$ such that the relation
$$ (5) \qquad X_0^r(t) \ge Y_0^1(t) \text{ and } X_1^r(t) \le Y_1^1(t), $$
holds for all $t \ge 0$ and for any sample path.
For any $0 \le \delta \le 1$, if
$$ \tau_{\delta} = \inf\{t \ge 0 : r(X^r(t)) \le \delta\} \text{ and } \sigma = \inf\{t \ge 0 : X_0^r(t) = 0\}, $$
and if $X^r(0) = Y^\delta(0) \in \mathbb{N}^2$ then there exists a coupling of the processes $(X^r(t))$ and $(Y^\delta(t))$ such that, for any sample path, the relation
$$ (6) \qquad X_0^r(t) \le Y_0^\delta(t) \text{ and } X_1^r(t) \ge Y_1^\delta(t) $$
holds for all $t \le \tau_\delta \wedge \sigma$.
*Proof.* Let $X^r(0) = (x_0, x_1)$ and $Y^1(0) = (y_0, y_1)$ be such that $x_0 \ge y_0$ and $x_1 \le y_1$, one has to prove that the processes $(X^r(t))$ and $(Y^1(t))$ can be constructed such that Relation (5) holds at the time of the next jump of one of them. See Leskelä [10] for the existence of couplings using analytical, nonconstructive techniques.
The arrival rates in the first queue are the same for both processes. If $x_1 < y_1$, a departure from the second queue for $(Y^1(t))$ or $(X^r(t))$ preserves the order relation (5) and if $x_1 = y_1$, this departure occurs at the same rate for both processes and thus the corresponding instant can be chosen at the same (exponential) time. For the departures from the first to the second queue, the departure rate for $(X^r(t))$ is $\mu r(x_0, x_1)(x_1 \vee 1)\mathbb{I}_{\{x_0>0\}} \le \mu(y_1 \vee 1)$ which is the departure rate for $(Y^1(t))$, hence the corresponding departure instants can be taken in the reverse order so that Relation (5) also holds at the next jump instant. The first part of the proposition is proved.
The rest of the proof is done in a similar way: the initial states $X^r(0) = (x_0, x_1)$ and $Y^\delta(0) = (y_0, y_1)$ are such that $x_0 \le y_0$ and $x_1 \ge y_1$. With the killing of the processes at time $\tau_\delta \wedge \sigma$, one can assume additionally that $x_0 \neq 0$ and that the relation $r(x_0, x_1) \ge \delta$ holds; under these assumptions one can check by inspecting the next transition that (6) holds. The proposition is proved. $\square$
**Proposition 2.2.** *Under the condition $\mu < \nu$, the relation*
$$ \liminf_{t \to +\infty} \frac{X_0^r(t)}{t} \geq \lambda - \lambda^* $$
holds almost surely. In particular, if $\mu < \nu$ and $\lambda > \lambda^*$, then the process $(X^r(t))$ is transient.
*Proof.* By Proposition 2.1, one can assume that there exists a version of $(Y^1(t))$ such that $X_0^r(0) = Y_0^1(0)$ and the relation $X_0^r(t) \ge Y_0^1(t)$ holds for any $t \ge 0$. From Definition (3) of the Q-matrix of $(Y^1(t))$, one has, for $t \ge 0$,
$$ Y_0^{1}(t) = Y_0^{1}(0) + N_{\lambda}(t) - A(t), $$
where $(N_\lambda(t))$ is a Poisson process with parameter $\lambda$ and $(A(t))$ is the number of arrivals (jumps of size 1) for the second coordinate $(Y_1^1(t))$: in particular
$$ \mathbb{E}(A(t)) = \mu \mathbb{E} \left( \int_{0}^{t} (Y_{1}^{1}(s) \vee 1)\, ds \right). $$
Since $(Y_1^1(t))$ is an ergodic Markov process under the condition $\mu < \nu$, the ergodic theorem in this setting gives that
$$ \lim_{t \to +\infty} \frac{1}{t} A(t) = \lim_{t \to +\infty} \frac{1}{t} \mathbb{E}(A(t)) = \mu \mathbb{E} (Y_1^1(\infty) \lor 1) = \lambda^*, $$
by Equation (4), hence $(Y_0^1(t)/t)$ converges almost surely to $\lambda - \lambda^*$. The proposition is proved. $\square$
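The law-of-large-numbers argument above can be illustrated by a small seeded Gillespie-type simulation of the free process $(Y^1(t))$; when $\lambda$ is well above $\lambda^*$, the empirical drift $Y_0^1(T)/T$ of the first coordinate should be close to $\lambda - \lambda^*$. A minimal sketch, with function name and parameter values of our choosing (not from the paper):

```python
import random

def simulate_free_process(lam, mu, nu, horizon, seed=1):
    """Seeded Gillespie simulation of the free process (Y^1(t)):
    Y0 -> Y0 + 1 at rate lam, (Y0, Y1) -> (Y0 - 1, Y1 + 1) at rate
    mu * (Y1 v 1), Y1 -> Y1 - 1 at rate nu * Y1; Y0 may become negative.
    Returns the empirical drift Y0(horizon) / horizon."""
    rng = random.Random(seed)
    t, y0, y1 = 0.0, 0, 0
    while True:
        r_arr = lam                  # external arrivals to the first queue
        r_tra = mu * max(y1, 1)      # transfers from the first to the second queue
        r_dep = nu * y1              # departures from the second queue
        total = r_arr + r_tra + r_dep
        t += rng.expovariate(total)
        if t > horizon:
            return y0 / horizon
        u = rng.uniform(0.0, total)
        if u < r_arr:
            y0 += 1
        elif u < r_arr + r_tra:
            y0, y1 = y0 - 1, y1 + 1
        else:
            y1 -= 1
```

With $\lambda = 5$, $\mu = 0.5$, $\nu = 1$ one has $\lambda^* \approx 0.59$, so the empirical drift over a long horizon should be close to $5 - 0.59 \approx 4.4$.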
The next proposition establishes the main ergodicity result of this section.
**Proposition 2.3.** If the rate function $r$ is such that, for any $x_1 \in \mathbb{N}$,
$$ (C) \qquad \lim_{x_0 \to +\infty} r(x_0, x_1) = 1, $$
and if $\mu \ge \nu$, or if $\mu < \nu$ and $\lambda < \lambda^*$ with
$$ \lambda^* = \frac{\mu}{(1-\rho)(1-\log(1-\rho))}, $$
and $\rho = \mu/\nu$, then $(X^r(t))$ is an ergodic Markov process.
Note that Condition (C) is satisfied by the rate functions $r$ of the models of Núñez-Queija and Prabhu [15] and of Susitaival et al. [24] discussed above.
*Proof.* If $x = (x_0, x_1) \in \mathbb{R}^2$, $|x|$ denotes the norm of $x$, $|x| = |x_0| + |x_1|$. The proof uses Foster's criterion as stated in Robert [18, Theorem 9.7]. If there exist constants $K_0, K_1, t_0, t_1$ and $\eta > 0$ such that, for $x = (x_0, x_1) \in \mathbb{N}^2$,
$$ (8) \qquad \mathbb{E}_{(x_0,x_1)}(|X^r(t_1)| - |x|) \leq -t_1, \quad \text{if } x_1 \geq K_1, $$
$$ (9) \qquad \mathbb{E}_{(x_0,x_1)}(|X^r(t_0)| - |x|) \leq -\eta t_0, \quad \text{if } x_0 \geq K_0 \text{ and } x_1 < K_1, $$
then the Markov process $(X^r(t))$ is ergodic.
Relation (8) is straightforward to establish: if $x_1 \ge K_1$, one gets, by considering only $K_1$ of the $x_1$ initial servers in the second queue and the Poisson arrivals, that
$$ \mathbb{E}_{(x_0,x_1)}(|X^r(1)| - |x|) \leq \lambda - K_1(1 - e^{-\nu}), $$
hence it is enough to take $t_1 = 1$ and $K_1 = (\lambda+1)/(1-e^{-\nu})$ to have Relation (8).
One has therefore to establish Inequality (9). Let $\tau_\delta$ and $\sigma$ be the stopping times introduced in Proposition 2.1, one first proves an intermediate result: for any $t > 0$ and any $x_1 \in \mathbb{N}$,
$$ (10) \quad \lim_{x_0 \to +\infty} \mathbb{P}_{(x_0,x_1)}(\sigma \wedge \tau_\delta \le t) = 0. $$
Fix $x_1 \in \mathbb{N}$ and $t \ge 0$: for $\varepsilon > 0$, there exists $D_1$ such that
$$ \mathbb{P}_{x_1} \left( \sup_{0 \le s \le t} Y_1^1(s) \ge D_1 \right) \le \varepsilon, $$
and, from Proposition 2.1, one deduces the relation, valid for all $x_0 \ge 0$,
$$ \mathbb{P}_{(x_0,x_1)} \left( \sup_{0 \le s \le t} X_1^r(s) \ge D_1 \right) \le \varepsilon. $$
By Condition (C), there exists $\gamma \ge 0$ (that depends on $x_1$) such that $r(x_0, x_1) \ge \delta$ when $x_0 \ge \gamma$. As long as $(X^r(t))$ stays in the subset $\{(y_0, y_1) : y_1 \le D_1\}$, the transition rates of the first component $(X_0^r(t))$ are uniformly bounded. Consequently,
there exists $K$ such that, for $x_0 \ge K$,
$$ \mathbb{P}_{(x_0,x_1)} \left[ \inf_{s \le t} X_0^r(s) \le \gamma, \sup_{s \le t} X_1^r(s) \le D_1 \right] \le \varepsilon. $$
Relation (10) follows from the last two inequalities and the inequality
$$ \mathbb{P}_{(x_0,x_1)}(\sigma \wedge \tau_\delta \le t) \le \mathbb{P}_{(x_0,x_1)}\left(\inf_{s \le t} X_0^r(s) \le \gamma\right). $$
One returns to the proof of Inequality (9). By definition of the Q-matrix of the process $(X^r(t))$,
$$ \mathbb{E}_{(x_0,x_1)}(|X^r(t)| - |x|) = \lambda t - \nu \int_0^t \mathbb{E}_{(x_0,x_1)}(X_1^r(u))\, du, \quad x \in \mathbb{N}^2,\ t \ge 0. $$
For any $x \in \mathbb{N}^2$, there exists a version of $(Y^\delta(t))$ with initial condition $Y^\delta(0) = X^r(0) = x$, and such that Relation (6) holds for $t < \tau_\delta \wedge \sigma$, in particular
$$ \begin{aligned} \mathbb{E}_x(X_1^r(t)) &\geq \mathbb{E}_x(X_1^r(t); t < \tau_\delta \wedge \sigma) \\ &\geq \mathbb{E}_x(Y_1^\delta(t); t < \tau_\delta \wedge \sigma) = \mathbb{E}_x(Y_1^\delta(t)) - \mathbb{E}_x(Y_1^\delta(t); t \geq \tau_\delta \wedge \sigma). \end{aligned} $$
The Cauchy-Schwarz inequality shows that, for any $t \ge 0$ and $x \in \mathbb{N}^2$,
$$ \begin{aligned} \int_0^t \mathbb{E}_x(Y_1^\delta(u); \tau_\delta \wedge \sigma \le u) du &\le \int_0^t \sqrt{\mathbb{E}_x\left[(Y_1^\delta(u))^2\right]} \sqrt{\mathbb{P}_x(\tau_\delta \wedge \sigma \le u)} du \\ &\le \sqrt{\mathbb{P}_x(\tau_\delta \wedge \sigma \le t)} \int_0^t \sqrt{\mathbb{E}_x\left[(Y_1^\delta(u))^2\right]} du, \end{aligned} $$
by gathering these inequalities, and by using the fact that the process $(Y_1^\delta(t))$ depends only on $x_1$ and not $x_0$, one finally gets the relation
$$ (11) \quad \frac{1}{t} \mathbb{E}_x(|X(t)| - |x|) \leq \lambda - \frac{\nu}{t} \int_0^t \mathbb{E}_{x_1}(Y_1^\delta(u)) du + c(x_1, t) \sqrt{\mathbb{P}_x(\tau_\delta \wedge \sigma \le t)} $$
with
$$ c(x_1, t) = \frac{\nu}{t} \int_{0}^{t} \sqrt{\mathbb{E}_{x_1} [ (Y_1^{\delta}(u))^2 ]} du. $$
Two cases are considered.
(1) If $\mu > \nu$ and $\delta < 1$ is such that $\delta\mu > \nu$, the process $(Y_1^\delta(t))$ is transient, so that
$$ \lim_{t \to +\infty} \frac{1}{t} \int_0^t \mathbb{E}_{x_1}(Y_1^\delta(u)) du = +\infty, $$
for each $x_1 \ge 0$.
(2) If $\mu < \nu$, one takes $\delta = 1$; if $\mu = \nu$, one takes $\delta < 1$ close enough to 1 so that $\lambda < \lambda^*(\delta)$. In both cases $\lambda < \lambda^*(\delta)$ and the process $(Y_1^\delta(t))$ converges in distribution, hence
$$ \lim_{t \to +\infty} \frac{\nu}{t} \int_0^t \mathbb{E}_{x_1}(Y_1^\delta(u))\, du = \nu \mathbb{E}(Y_1^\delta(\infty)) = \lambda^*(\delta) > \lambda $$
for each $x_1 \ge 0$.
Consequently in both cases, there exist constants $\eta > 0$, $\delta \le 1$ and $t_0 > 0$ such that for any $x_1 \le K_1$,
$$ (12) \qquad \lambda - \nu \frac{1}{t_0} \int_0^{t_0} \mathbb{E}_{x_1}(Y_1^\delta(u)) du \le -\eta, $$
together with Relation (11), one gets that if $x_1 \le K_1$ then
$$ \frac{1}{t_0} \mathbb{E}_x(|X(t_0)| - |x|) \le -\eta + c^* \sqrt{\mathbb{P}_x(\tau_\delta \wedge \sigma \le t_0)}, $$
where $c^* = \max(c(n, t_0), 0 \le n \le K_1)$. By Relation (10), there exists $K_0$ such that, for all $x_0 \ge K_0$ and $x_1 \le K_1$, the relation
$$ c^* \sqrt{\mathbb{P}_{(x_0,x_1)}(\tau_{\delta} \wedge \sigma \le t_0)} \le \frac{\eta}{2} $$
holds. This relation and the inequalities (12) and (11) give Inequality (9). The proposition is proved. $\square$
**Another Boundary Condition.** The boundary condition $x_1 \lor 1$ in the transition rates of $(X(t))$, Equation (2), prevents the second coordinate from ending up in the absorbing state 0. It amounts to assuming that a permanent server is activated when no node can offer the file. Another way to avoid this absorbing state is to assume that a permanent node is always active, which gives transition rates with $x_1+1$ instead. This choice was made, for instance, in Núñez-Queija and Prabhu [15]. All our results apply to this other boundary condition: the only difference is that, when $\nu > \mu$, the threshold $\lambda^*$ of Equation (4) is given by $\lambda^* = \mu\nu/(\nu - \mu)$.
### 3. YULE PROCESSES WITH DELETIONS
This section introduces the tools which are necessary in order to generalize the results of the previous section to the multi-chunk case $n \ge 2$. A Yule process $(Y(t))$ with rate $\mu > 0$ is a Markovian branching process with Q-matrix
$$ (13) \qquad q_Y(x, x+1) = \mu x, \quad \forall x \ge 0. $$
An individual gives birth to a child, or equivalently splits into two particles, at rate $\mu$. Let $(\sigma_n)$ be the split times of a Yule process started with one particle; it is not difficult to check that, for $n \ge 1$,
$$ \sigma_n \stackrel{\text{dist.}}{=} \sum_{\ell=1}^{n} \frac{E_{\ell}^{\mu}}{\ell} \stackrel{\text{dist.}}{=} \max(E_1^{\mu}, \dots, E_n^{\mu}), $$
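The second distributional identity can be checked at the level of expectations: both sides have mean $H_n/\mu$, where $H_n$ is the $n$th harmonic number. A small numerical check (function names are ours), integrating the survival function of the maximum:

```python
import math

def mean_max_exponentials(n, mu, upper=60.0, steps=100000):
    """E(max(E_1, ..., E_n)) for i.i.d. exponentials with parameter mu,
    computed as the integral of the survival function
    1 - (1 - exp(-mu*x))**n by a simple trapezoidal rule."""
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        x = k * h
        f = 1.0 - (1.0 - math.exp(-mu * x)) ** n
        total += f * (0.5 if k in (0, steps) else 1.0)
    return total * h

def mean_sum_scaled(n, mu):
    """E(sum_{l=1}^n E_l / l) = H_n / mu, the mean of the other
    representation of the split time sigma_n."""
    return sum(1.0 / (mu * l) for l in range(1, n + 1))
```

This only compares means, of course; the full identity in distribution is the statement above.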
where $(E_{\ell}^{\mu})$ are i.i.d. exponential random variables with parameter $\mu$. If $\lambda > \mu$ then, by using Fubini's Theorem,
$$ (14) \qquad \begin{aligned} \mathbb{E}\left(\sum_{\ell=1}^{+\infty} e^{-\lambda\sigma_\ell}\right) &= \mathbb{E}\left(\sum_{\ell=1}^{+\infty} \int_0^{+\infty} \lambda e^{-\lambda x} 1_{\{\sigma_\ell \le x\}}\, dx\right) = \int_0^{+\infty} \lambda e^{-\lambda x} \sum_{\ell=1}^{+\infty} \mathbb{P}(\sigma_\ell \le x)\, dx \\ &= \int_0^{+\infty} \lambda e^{-\lambda x} \frac{1-e^{-\mu x}}{e^{-\mu x}}\, dx = \frac{\mu}{\lambda-\mu} < +\infty. \end{aligned} $$
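The value $\mu/(\lambda-\mu)$ of the last integral in (14) can be checked numerically; the sketch below (function name and discretization are ours) evaluates $\int_0^\infty \lambda e^{-\lambda x}(e^{\mu x}-1)\,dx$ by a trapezoidal rule:

```python
import math

def laplace_sum_integral(lam, mu, upper=60.0, steps=100000):
    """Trapezoidal evaluation of int_0^inf lam*e^{-lam*x}*(e^{mu*x}-1) dx,
    the last integral of (14); the claimed value is mu/(lam-mu) when
    lam > mu."""
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        x = k * h
        f = lam * math.exp(-lam * x) * (math.exp(mu * x) - 1.0)
        total += f * (0.5 if k in (0, steps) else 1.0)
    return total * h
```

For instance, with $\lambda = 3$ and $\mu = 1$ the integral is close to $1/(3-1) = 0.5$.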
In this section one considers some specific results on variants of this stochastic model when some individuals are killed. In terms of branching processes, this amounts to pruning the tree, i.e., to cutting some edges of the tree, each together with the subtree attached to it. This procedure is fairly common for branching processes, in the Crump-Mode-Jagers model for example; see Kingman [9], and also Neveu [14] or Aldous and Pitman [1]. Two situations are considered: the first when the deletions are part of the internal dynamics, so that each individual dies out after an exponential time, and the second when killings are given by an exogenous process and occur at fixed (random or deterministic) epochs.
**Constant Death Rate and Regeneration.** Let $(Z(t))$ be the birth-and-death process whose $Q$-matrix $Q_Z$ is given by, for $\mu_Z > 0$ and $\nu > 0$,
$$ (15) \qquad q_Z(z, z+1) = \mu_Z(z \lor 1) \quad \text{and} \quad q_Z(z, z-1) = \nu z. $$
The lifetime of an individual is exponentially distributed with parameter $\nu$, and the process restarts with one individual after some time when it hits 0. This process can be described equivalently as a time-changed $M/M/1$ queue or as a sequence of independent branching processes. As will be seen, these two viewpoints are complementary.
In the rest of this part, $\mu_Z$ and $\nu$ are fixed, $(Z(t))$ is the Markov process with $Q$-matrix $Q_Z$, $(\sigma_n)$ is the sequence of times of its positive jumps, the birth instants, and $(B_\sigma(t))$ is the corresponding counting process of $(\sigma_n)$, for $t \ge 0$,
$$ B_{\sigma}(t) = \sum_{i \ge 1} 1_{\{\sigma_i \le t\}}. $$
**Proposition 3.1 (Queueing Representation).** If $Z(0) = z \in \mathbb{N}$, then
$$ (16) \qquad (Z(t), t \ge 0) \stackrel{\text{dist.}}{=} (L(C(t)), t \ge 0), $$
where $(L(t))$ is the process of the number of jobs of an $M/M/1$ queue with input rate $\mu_Z$ and service rate $\nu$ and with $L(0) = z$, and $C(t) = \inf\{s > 0 : A(s) > t\}$, where
$$ A(t) = \int_{0}^{t} \frac{1}{1 \vee L(u)}\, du. $$
*Proof.* It is not difficult to check that the process $(M(t)) \stackrel{\text{def.}}{=} (L(C(t)))$ has the Markov property. Let $Q_M$ be its $Q$-matrix. For $z \ge 0$,
$$ \mathbb{P}(L(C(h)) = z + 1 \mid L(0) = z) = \mu_Z \mathbb{E}(C(h)) + o(h) = \mu_Z (z \vee 1)h + o(h), $$
hence $q_M(z, z + 1) = \mu_Z(z \vee 1)$. Similarly $q_M(z, z - 1) = \nu z$. The proposition is proved. $\square$
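The time-change construction of Proposition 3.1 can be made concrete by simulating the jump chain of the $M/M/1$ queue and evaluating $A$ along the sampled path. The seeded sketch below (function names are ours) does this; its test only checks sure structural properties of the construction, since a distributional check would require Monte Carlo:

```python
import random

def mm1_path(mu_z, nu, z0, n_jumps, seed=3):
    """Jump times and states of an M/M/1 queue: arrivals at rate mu_z,
    services at rate nu (no service when the queue is empty)."""
    rng = random.Random(seed)
    t, l = 0.0, z0
    times, states = [0.0], [z0]
    for _ in range(n_jumps):
        if l == 0:
            t += rng.expovariate(mu_z)  # only arrivals from the empty state
            l = 1
        else:
            rate = mu_z + nu
            t += rng.expovariate(rate)
            l += 1 if rng.uniform(0, rate) < mu_z else -1
        times.append(t)
        states.append(l)
    return times, states

def time_change(times, states):
    """Values of A(t) = int_0^t du / (1 v L(u)) at the jump times; by
    Proposition 3.1, (L(C(t))) with C the inverse of A is distributed
    as (Z(t))."""
    a = [0.0]
    for i in range(1, len(times)):
        a.append(a[-1] + (times[i] - times[i - 1]) / max(states[i - 1], 1))
    return a
```

Since $A$ is strictly increasing, its inverse $C$ is well defined path by path.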
**Corollary 3.1.** For any $\gamma > (\mu_Z - \nu) \lor 0$ and $z = Z(0) \in \mathbb{N}$,
$$ (17) \qquad \mathbb{E}_z \left( \sum_{n=1}^{+\infty} e^{-\gamma \sigma_n} \right) < +\infty. $$
*Proof.* Proposition 3.1 shows that, in particular, the sequences of positive jumps of $(Z(t))$ and of $(L(C(t)))$ have the same distribution. Hence, if $N_{\mu_Z} = (t_n)$ is the arrival process of the $M/M/1$ queue, a Poisson process with parameter $\mu_Z$, then, with the notations of the above proposition, the relation
$$ (\sigma_n) \stackrel{\text{dist.}}{=} (A(t_n)) $$
holds. By using standard martingale properties of stochastic integrals with respect to Poisson processes, see Rogers and Williams [20], one gets for $t \ge 0$,
$$ (18) \qquad \begin{aligned} \mathbb{E}_z \left( \sum_{n \ge 1} e^{-\gamma A(t_n)} \right) &= \mathbb{E}_z \left( \int_0^\infty e^{-\gamma A(s)} N_{\mu_Z}(ds) \right) = \mu_Z \mathbb{E}_z \left( \int_0^\infty e^{-\gamma A(s)} ds \right) \\ &= \mu_Z \int_0^\infty e^{-\gamma u} \mathbb{E}_z (Z(u) \vee 1) du, \end{aligned} $$
where Relation (16) has been used for the last equality. Kolmogorov's equation for the process $(Z(t))$ gives that
$$ \begin{aligned} \phi(t) &\stackrel{\text{def.}}{=} \mathbb{E}_z(Z(t)) = z + \mu_Z \int_0^t \mathbb{E}_z(Z(u) \vee 1)\, du - \nu \int_0^t \mathbb{E}_z(Z(u))\, du \\ &\le z + (\mu_Z - \nu) \int_0^t \phi(u)\, du + \mu_Z t, \end{aligned} $$
therefore, by Gronwall's Lemma,
$$ \phi(t) \le \phi(0) + \mu_Z \int_0^t ue^{(\mu_Z - \nu)u} du \le z + \frac{\mu_Z}{\mu_Z - \nu} te^{(\mu_Z - \nu)t}. $$
From Equation (18), one concludes that
$$ \mathbb{E}_z \left( \sum_n e^{-\gamma \sigma_n} \right) = \mathbb{E}_z \left( \sum_n e^{-\gamma A(t_n)} \right) < +\infty. $$
The corollary is proved. $\square$
**A Branching Process.** Before hitting 0, the Markov process $(Z(t))$ whose Q-matrix is given by Relation (15) can be seen as a Bellman-Harris branching process. Its Malthusian parameter is given by $\alpha = \mu_Z - \nu$. See Athreya and Ney [3]. In this setting, it describes the evolution of a population of independent particles: at rate $\lambda \stackrel{\text{def.}}{=} \mu_Z + \nu$, each of these particles either splits into two particles, with probability $p \stackrel{\text{def.}}{=} \mu_Z / (\mu_Z + \nu)$, or dies. These processes will be referred to as $(p, \lambda)$-branching processes in the sequel.
A $(p, \lambda)$-branching process survives with positive probability only when $p > 1/2$, in which case the probability of extinction $q$ is equal to $q = (1-p)/p = \nu/\mu_Z$. The main (and only) difference from a $(p, \lambda)$-branching process is that $(Z(t))$ regenerates after hitting 0. When it regenerates, it again behaves as a $(p, \lambda)$-branching process (started with one particle), until it hits 0 again.
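The extinction probability $q = \nu/\mu_Z$ can be recovered as the smallest fixed point of the offspring generating function $f(s) = (1-p) + ps^2$ of the binary splitting, by iterating $f$ from 0. A short numerical illustration (function name is ours):

```python
def extinction_probability(mu_z, nu, iterations=500):
    """Extinction probability of the (p, lambda)-branching process with
    p = mu_z / (mu_z + nu), computed as the limit of the iterates of the
    generating function f(s) = (1-p) + p*s**2 started at 0; in the
    supercritical case mu_z > nu the claimed value is nu/mu_z."""
    p = mu_z / (mu_z + nu)
    s = 0.0
    for _ in range(iterations):
        s = (1.0 - p) + p * s * s
    return s
```

For instance, $\mu_Z = 2$, $\nu = 1$ gives $p = 2/3$ and extinction probability $1/2 = \nu/\mu_Z$.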
**Proposition 3.2 (Branching Representation).** If $Z(0) = z \in \mathbb{N}$ and $(\tilde{Z}(t))$ is a $(p, \lambda)$-branching process started with $z \in \mathbb{N}$ particles and $\tilde{T}$ its extinction time, then
$$ (Z(t), 0 \le t \le T) \stackrel{\text{dist.}}{=} (\tilde{Z}(t), 0 \le t \le \tilde{T}), $$
where $T = \inf\{t \ge 0 : Z(t) = 0\}$ is the hitting time of 0 by $(Z(t))$.
**Corollary 3.2.** Suppose that $\mu_Z > \nu$. Then, for any $z \ge 0$, there exists a finite random variable $Z(\infty)$ such that, $\mathbb{P}_z$-almost surely,
$$ \lim_{t \to +\infty} e^{-(\mu_Z - \nu)t} Z(t) = Z(\infty) \quad \text{and} \quad Z(\infty) > 0. $$
*Proof.* When $\mu_Z > \nu$, the process $(Z(t))$ couples in finite time with a supercritical $(p, \lambda)$-branching process $(\tilde{Z}(t))$ conditioned on non-extinction; this follows readily from Proposition 3.2 (or see the Appendix for details). Since for any supercritical $(p, \lambda)$-branching process, $(\exp(-(\mu_Z - \nu)t)\tilde{Z}(t))$ converges almost surely to a finite random variable $\tilde{Z}(\infty)$, positive on the event of non-extinction (see Nerman [13]), one gets the desired result. $\square$
Due to its technicality, the proof of the following result is postponed to the Appendix; this result is used in the proof of Proposition 3.5.
**Proposition 3.3.** Suppose that $\mu_Z > \nu$. If
$$ (19) \qquad \eta^*(x) = \frac{2 - x - \sqrt{x(4-3x)}}{2(1-x)}, \quad 0 < x < 1, $$
then for any $0 < \eta < \eta^*(\nu/\mu_Z)$,
$$ \sup_{z \ge 0} \left[ \mathbb{E}_z \left( \sup_{t \ge \sigma_1} \left( e^{\eta(\mu_Z - \nu)t} B_\sigma(t)^{-\eta} \right) \right) \right] < +\infty. $$
**A Yule Process Killed at Fixed Instants.** In this part, it is assumed that, provided the population is non-empty, at each epoch $\sigma_n$, $n \ge 1$, an individual is removed from the population of an ordinary Yule process $(Y(t))$ with rate $\mu_W$ starting with $Y(0) = w \in \mathbb{N}$ individuals. It is assumed that $(\sigma_n)$ is some fixed non-decreasing sequence. It will be shown that the process $(W(t))$ obtained by killing one individual of $(Y(t))$ at each of the successive instants $(\sigma_n)$ survives with positive probability when the series with general term $(\exp(-\mu_W\sigma_n))$ converges.
In the following, a related result will be considered in the case where the sequence $(\sigma_n)$ is given by the sequence of birth times of the process $(Z(t))$ introduced above. See Alsmeyer [2] and the references therein for related models.
One denotes
$$ \kappa = \inf\{n \ge 1 : W(\sigma_n) = 0\}. $$
The process $(W(t))$ can be represented in the following way:
$$ (20) \qquad W(t) = Y(t) - \sum_{i=1}^{\kappa} X_i(t) 1_{\{\sigma_i \le t\}}, $$
where, for $1 \le i \le \kappa$ and $t \ge \sigma_i$, $X_i(t)$ is the total number of children at time $t$ in the original Yule process of the $i$th individual killed at time $\sigma_i$. In terms of trees, $(W(t))$ can be seen as a subtree of $(Y(t))$: for $1 \le i \le \kappa$, $(X_i(t))$ is the subtree of $(Y(t))$ associated with the $i$th particle killed at time $\sigma_i$.
It is easily checked that $(X_i(t + \sigma_i), t \ge 0)$ is a Yule process starting with one individual and, since a killed individual cannot have one of its descendants killed, that the processes
$$ (\tilde{X}_i(t)) = (X_i(t + \sigma_i), t \ge 0), \quad 1 \le i \le \kappa, $$
are independent Yule processes.
For any process $(U(t))$, one denotes:
$$ (21) \qquad (M_U(t)) \stackrel{\text{def.}}{=} (e^{-\mu_W t} U(t)). $$
If $(\tilde{X}(t))$ is a Yule process with rate $\mu_W$, the martingale $(M_{\tilde{X}}(t))$ converges almost surely and in $L_2$ to a random variable $M_{\tilde{X}}(\infty)$ with an exponential distribution with mean $\tilde{X}(0)$, and by Doob's Inequality
$$ \mathbb{E}\left(\sup_{t \ge 0} M_{\tilde{X}}(t)^2\right) \le 2 \sup_{t \ge 0} \mathbb{E}\left(M_{\tilde{X}}(t)^2\right) < +\infty. $$
See Athreya and Ney [3]. Consequently
$$ e^{-\mu_W t} W(t) = M_Y(t) - \sum_{i=1}^{\kappa} e^{-\mu_W \sigma_i} M_{\tilde{X}_i}(t - \sigma_i) 1_{\{\sigma_i \le t\}}, $$
and for any $t \ge 0$,
$$ \sum_{i=1}^{\kappa} e^{-\mu_W \sigma_i} M_{\tilde{X}_i}(t-\sigma_i) 1_{\{\sigma_i \le t\}} \le \sum_{i=1}^{\kappa} e^{-\mu_W \sigma_i} \sup_{s \ge 0} M_{\tilde{X}_i}(s). $$
Assume now that $\sum_{i \ge 1} e^{-\mu_W \sigma_i} < +\infty$: then the last expression is integrable, and Lebesgue's Theorem implies that $(M_W(t)) = (\exp(-\mu_W t)W(t))$ converges almost surely and in $L_2$ to
$$ M_W(\infty) = M_Y(\infty) - \sum_{i=1}^{\kappa} e^{-\mu_W \sigma_i} M_{\tilde{X}_i}(\infty). $$
Clearly, if $w^*$ is large enough, then for any $w \ge w^*$ one has
$$ \mathbb{E}_w(M_W(\infty)) \ge w - \sum_{i=1}^{+\infty} e^{-\mu_W \sigma_i} > 0, $$
in particular $\mathbb{P}_w(M_W(\infty) > 0) > 0$ and $\mathbb{P}_w(W(t) \ge 1, \forall t \ge 0) > 0$. If $Y(0) = w < w^*$ and $\sigma_1 > 0$, then $\mathbb{P}_w(Y(\sigma_1) \ge w^* + 1) > 0$ and therefore, by translation at time $\sigma_1$, the same conclusion holds when the sequence $(\exp(-\mu_W \sigma_i))$ has a finite sum. The following proposition has thus been proved.
**Proposition 3.4.** Let $(W(t))$ be a process growing as a Yule process with rate $\mu_W$ and for which individuals are killed at non-decreasing instants $(\sigma_n)$ with $\sigma_1 > 0$. If
$$ \sum_{i=1}^{+\infty} e^{-\mu_W \sigma_i} < +\infty, $$
then, for any $w \ge 1$, the process $(\exp(-\mu_W t)W(t))$ converges, as $t$ gets large, $\mathbb{P}_w$-almost surely and in $L_2$ to a finite random variable $M_W(\infty)$ such that $\mathbb{P}_w(M_W(\infty) > 0) > 0$.
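Proposition 3.4 can be illustrated with the deterministic sequence $\sigma_n = n$, for which the series clearly converges. The seeded sketch below (function name, survival threshold and horizon are simulation heuristics of ours, not part of the model) simulates the population with one deletion at each integer time and observes both extinctions and survivals:

```python
import random

def survives(mu_w, w0, rng, cap=200, horizon=40.0):
    """One path of a Yule process with rate mu_w per individual, in which
    one individual is removed at each integer epoch sigma_n = n (so that
    sum exp(-mu_w * sigma_n) converges).  The path is declared surviving
    -- a heuristic for the simulation -- once the population exceeds cap."""
    t, w, n = 0.0, w0, 1  # n is the index of the next deletion epoch
    while 0 < w < cap and t < horizon:
        birth = t + rng.expovariate(mu_w * w)
        if n < birth:  # the deletion at time n occurs before the next birth
            t, w, n = float(n), w - 1, n + 1
        else:
            t, w = birth, w + 1
    return w >= cap

rng = random.Random(11)
runs = 300
alive = sum(survives(1.0, 1, rng) for _ in range(runs))
```

Starting from a single individual, extinction has probability at least $e^{-1}$ (no birth before the first deletion), while survival also has positive probability, so both outcomes appear among the runs.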
The previous proposition establishes the minimal results needed in Section 4. However, Kolmogorov's Three-Series Theorem, see Williams [25], can be used in conjunction with Fatou's Lemma to show that $(W(t))$ dies out almost surely when the series with general term $(\exp(-\mu_W \sigma_n))$ diverges.
**A Yule Process Killed at the Birth Instants of a Bellman-Harris Process.**
In this subsection, one considers a Yule process $(Y(t))$ with rate $\mu_W$, with Q-matrix defined by Relation (13), and an independent Markov process $(Z(t))$ with Q-matrix defined by Relation (15). In particular $\mu_Z - \nu$ is the Malthusian parameter of $(Z(t))$. A process $(W(t))$ is defined by killing one individual of $(Y(t))$ at each of the birth instants $(\sigma_n)$ of $(Z(t))$. As before, $(B_\sigma(t))$ denotes the counting process associated with the non-decreasing sequence $(\sigma_n)$,
$$B_{\sigma}(t) = \sum_{i \ge 1} 1_{\{\sigma_i \le t\}}.$$
**Proposition 3.5.** Assume that $\mu_Z - \nu > \mu_W$, and let $H_0$ be the extinction time of $(W(t))$, i.e.,
$$H_0 = \inf\{t \ge 0 : W(t) = 0\},$$
then the random variable $H_0$ is almost surely finite and:
(i) $Z(H_0) - Z(0) \le e^{\mu_W H_0} M_Y^*$ where
$$M_Y^* = \sup_{t \ge 0} e^{-\mu_W t} Y(t).$$
(ii) There exists a finite constant $C$ such that for any $z \ge 0$ and $w \ge 1$,
$$ (22) \qquad \mathbb{E}_{(w,z)}(H_0) \le C (\log(w) + 1). $$
Note that the subscript $(w, z)$ refers to the initial state of the Markov process $(W(t), Z(t))$.
*Proof.* Define $\alpha = \mu_Z - \nu$. Concerning the almost sure finiteness of $H_0$, note that Equation (20) entails that $W(t) \le Y(t) - B_\sigma(t)$ for all $t \ge 0$ on the event $\{H_0 = +\infty\}$. As $t$ goes to infinity, both $\exp(-\mu_W t)Y(t)$ and $\exp(-\alpha t)B_\sigma(t)$ converge almost surely to positive and finite random variables (see Nerman [13]), which implies, when $\alpha = \mu_Z - \nu > \mu_W$, that $W(t)$ converges to $-\infty$ on $\{H_0 = +\infty\}$, and so this event is necessarily of probability zero.
The first point (i) of the proposition comes from Identity (20) at $t = H_0$:
$$ (23) \qquad Z(H_0) - Z(0) \le B_\sigma(H_0) \le Y(H_0) \le e^{\mu_W H_0} M_Y^*. $$
By using the relation $\exp(x) \ge x$, Equation (22) follows from the following bound: for any $\eta < \eta^*(\nu/\mu_Z)$ (recall that $\eta^*$ is given by Equation (19)),
$$ (24) \qquad \sup_{w \ge 1, z \ge 0} \left[ w^{-\eta} \mathbb{E}_{(w,z)} \left( e^{\eta(\alpha - \mu_W)H_0} \right) \right] < +\infty. $$
So all is left to prove is this bound. Under $\mathbb{P}_{(w,z)}$, $(Y(t))$ can be represented as the sum of $w$ i.i.d. Yule processes, and so $M_Y^* \le M_{Y,1}^* + \cdots + M_{Y,w}^*$ with $(M_{Y,i}^*)$ i.i.d. distributed like $M_Y^*$ under $\mathbb{P}_{(1,z)}$; Inequality (23) then entails that
$$ e^{(\alpha - \mu_W)H_0} \le \left( \sum_{i=1}^{w} M_{Y,i}^{*} \right) \times \sup_{t \ge \sigma_1} \left( e^{\alpha t} / B_{\sigma}(t) \right). $$
By independence of $(M_{Y,i}^*)$ and $(B_\sigma(t))$, Jensen's inequality gives for any $\eta < 1$:
$$ \mathbb{E}_{(w,z)} (e^{\eta(\alpha - \mu_W)H_0}) \le w^\eta (\mathbb{E}(M_{Y,1}^*))^\eta \mathbb{E}_z \left( \sup_{t \ge \sigma_1} (e^{\eta\alpha t} B_\sigma(t)^{-\eta}) \right), $$
hence the bound (24) follows from Proposition 3.3. $\square$
One concludes this section with a Markov chain which will be used in Section 4. Define the sequence $(V_n)$ recursively by $V_0 = v$ and
$$ (25) \qquad V_{n+1} = \sum_{k=1}^{A_n(V_n)} I_k, \quad n \ge 0, $$
where $(I_k)$ are identically distributed integer valued random variables independent of $V_n$ and $A_n(V_n)$, and such that $\mathbb{E}(I_1) = p$ for some $p \in (0, 1)$. For $v > 0$, $A_n(v)$ is an independent random variable with the same distribution as $Z(H_0)$ under $\mathbb{P}_{(1,v)}$, i.e., with the initial condition $(W(0), Z(0)) = (1, v)$.
|
| 569 |
+
|
| 570 |
+
The above equation (25) can be interpreted as a branching process with immi-
|
| 571 |
+
gration, see Seneta [21], or also as an autoregressive model.
**Proposition 3.6.** *Under the condition $\mu_Z - \nu > \mu_W$, if $(V_n)$ is the Markov chain defined by Equation (25) and, for $K \ge 0$,*

$$N_K = \inf\{n \ge 0 : V_n \le K\},$$

*then there exist $\gamma > 0$ and $K \in \mathbb{N}$ such that*

$$ (26) \qquad \mathbb{E}(N_K \mid V_0 = v) \le \frac{1}{\gamma} \log(1+v), \quad \forall v \ge 0. $$

*The Markov chain $(V_n)$ is in particular positive recurrent.*

*Proof.* For $V_0 = v \in \mathbb{N}$, Jensen's inequality and Definition (25) give the relation

$$ (27) \qquad \mathbb{E}_v \log \left( \frac{1+V_1}{1+v} \right) \le \mathbb{E}_{(1,v)} \log \left[ \frac{1+pZ(H_0)}{1+v} \right]. $$

From Proposition 3.5 and by using the same notation, one gets that, under $\mathbb{P}_{(1,v)}$,

$$ Z(H_0) \leq v + e^{\mu_W H_0} M_Y^*, $$

where $(Y(t))$ is a Yule process starting with one individual. By looking at the birth instants of $(Z(t))$, it is easily checked that the random variable $H_0$ under $\mathbb{P}_{(1,v)}$ is stochastically bounded by $H_0$ under $\mathbb{P}_{(1,0)}$. The integrability of $H_0$ under $\mathbb{P}_{(1,0)}$ (proved in Proposition 3.5) and of $M_Y^*$ gives that the expression

$$ \log \left( \frac{1 + p(v + e^{\mu_W H_0} M_Y^*)}{1 + v} \right) $$

bounding the right-hand side of Relation (27) is also an integrable random variable under $\mathbb{P}_{(1,0)}$. Lebesgue's Theorem therefore gives that

$$ \limsup_{v \to +\infty} \left[ \mathbb{E}_v \log \left( \frac{1+V_1}{1+v} \right) \right] \leq \log p < 0. $$

Consequently, one concludes that $v \mapsto \log(1+v)$ is a Lyapunov function for the Markov chain $(V_n)$, i.e., if $\gamma = -(\log p)/2$, there exists $K$ such that for $v \ge K$,

$$ \mathbb{E}_v \log (1 + V_1) - \log (1 + v) \le -\gamma. $$

Foster's criterion, see Theorem 8.6 of Robert [18], implies that $(V_n)$ is indeed ergodic and that Relation (26) holds. $\square$
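The drift argument of this proof is easy to see numerically. The sketch below is NOT the paper's exact model: the variable $Z(H_0)$ is replaced by the hypothetical stand-in $A_n(v) = v + \mathrm{Poisson}(c)$ and the $I_k$ are taken Bernoulli($p$), so that $V_{n+1}$ is Binomial($V_n + \mathrm{Poisson}(c)$, $p$). The empirical mean of $N_K$ is then compared with $\log(1+v)/\gamma$ for $\gamma = -\log p$.

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's inversion method, adequate for small lam
    threshold, k, prod = math.exp(-lam), 0, random.random()
    while prod > threshold:
        prod *= random.random()
        k += 1
    return k

def binomial(n, p):
    return sum(random.random() < p for _ in range(n))

def hitting_time(v, p=0.5, c=2.0, K=10):
    # N_K = inf{n >= 0 : V_n <= K} for the stand-in chain
    n = 0
    while v > K:
        v = binomial(v + poisson(c), p)   # V_{n+1} = Binomial(A_n(V_n), p)
        n += 1
    return n

for v in (100, 10_000):
    mean_n = sum(hitting_time(v) for _ in range(200)) / 200
    print(v, round(mean_n, 2), "log bound:", round(math.log(1 + v) / -math.log(0.5), 2))
```

As expected from (26), the empirical hitting times grow logarithmically in the initial state $v$.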
4. ANALYSIS OF THE MULTI-CHUNK NETWORK

In this section it is assumed that a file of $n$ chunks is distributed by the file-sharing network within the following framework, corresponding to Figure 1. Chunks are delivered in sequential order, and, for $k \ge 1$, requests with chunks $1, \dots, k$ provide service for requests with one less chunk.

For $0 \le k < n$ and $t \ge 0$, the variable $X_k(t)$ denotes the number of requests downloading the $(k+1)$st chunk; for $k = n$, $X_n(t)$ is the number of requests having all the chunks. When taking into account the boundaries in the transition rates described in Figure 1, one gets the following $Q$-matrix for the $(n+1)$-dimensional Markov process $(X_k(t), 0 \le k \le n)$:
$$
\begin{aligned}
Q(f)(x) ={}& \lambda[f(x+e_0)-f(x)] + \sum_{k=1}^{n} \mu_k(x_k \lor 1)[f(x+e_k-e_{k-1})-f(x)]1_{\{x_{k-1}>0\}} \\
& + \nu x_n[f(x-e_n)-f(x)],
\end{aligned}
$$

where $x \in \mathbb{N}^{n+1}$, $f: \mathbb{N}^{n+1} \to \mathbb{R}_+$ is a function and, for $0 \le k \le n$, $e_k \in \mathbb{N}^{n+1}$ is the $k$th unit vector. Note that, as before, to avoid absorbing states, it is assumed that there is a server for the $k$th chunk when $x_k = 0$. The first section corresponds to the case $n = 2$ in a more general setting.
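The jump dynamics encoded in this $Q$-matrix can be explored directly by simulation. The following sketch (function name and parameter values are illustrative choices, not from the paper) runs a standard Gillespie-type simulation of the three kinds of transitions: arrivals at rate $\lambda$, transfers $k-1 \to k$ at rate $\mu_k(x_k \lor 1)1_{\{x_{k-1}>0\}}$, and departures at rate $\nu x_n$.

```python
import random

random.seed(1)

def simulate(n, lam, mu, nu, x0, horizon):
    """Gillespie simulation of the chunk network: state x = (x_0, ..., x_n),
    mu = (mu_1, ..., mu_n).  Transitions, as in the Q-matrix above:
      - external arrival x -> x + e_0 at rate lam,
      - transfer x -> x + e_k - e_{k-1} at rate mu_k * max(x_k, 1) if x_{k-1} > 0,
      - departure x -> x - e_n at rate nu * x_n.
    Returns the state at the horizon."""
    x, t = list(x0), 0.0
    while True:
        rates, moves = [lam], [("arr", 0)]
        for k in range(1, n + 1):
            if x[k - 1] > 0:
                rates.append(mu[k - 1] * max(x[k], 1))
                moves.append(("mv", k))
        if x[n] > 0:
            rates.append(nu * x[n])
            moves.append(("dep", n))
        total = sum(rates)
        t += random.expovariate(total)
        if t > horizon:
            return x
        u, acc = random.random() * total, 0.0
        for rate, (kind, k) in zip(rates, moves):
            acc += rate
            if u <= acc:
                break
        if kind == "arr":
            x[0] += 1
        elif kind == "mv":
            x[k - 1] -= 1
            x[k] += 1
        else:
            x[n] -= 1

print(simulate(2, lam=1.0, mu=(4.0, 3.0), nu=2.0, x0=(0, 0, 0), horizon=50.0))
```

The boundary convention of the model is visible in the code: the transfer rate uses $x_k \lor 1$, i.e., an always-present server for the $k$th chunk.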
It is first shown in Proposition 4.1 that the network is stable for a sufficiently small input rate $\lambda$. Proposition 4.2 studies the analog of the two-dimensional case with $\mu > \nu$, i.e., when $\mu_1 > \cdots > \mu_{n-1} > \mu_n - \nu > 0$: it is proved that the network is stable for any input rate $\lambda$. When this condition fails, it is shown that for $n = 2$ the network can only accommodate a finite input rate.

**Proposition 4.1.** *Under the condition*

$$ (28) \qquad \sum_{k=1}^{n} \frac{\lambda}{\mu_k} < 1, $$

*the Markov process $(X(t))$ is ergodic for any $\nu > 0$.*

Condition (28) is obviously not sharp, as can be seen in the case $n=1$ analyzed in Section 2. But the proposition shows that there is always a positive threshold $\lambda^*$ such that the system is stable when $\lambda < \lambda^*$.
*Proof.* For $x \in \mathbb{N}^{n+1}$ and $(\alpha_k) \in \mathbb{R}^{n+1}$, define $f(x) = \alpha_0 x_0 + \dots + \alpha_n x_n$; then

$$ Q(f)(x) = \lambda\alpha_0 - \sum_{k=1}^{n} (\alpha_{k-1} - \alpha_k)\mu_k(x_k \vee 1)1_{\{x_{k-1}>0\}} - \nu x_n \alpha_n. $$

For $\varepsilon > 0$, one can choose $(\alpha_k)$ so that $\alpha_0 = 1$ and

$$ \alpha_{k-1} - \alpha_k = \frac{\lambda}{\mu_k} + \varepsilon, \quad 1 \le k \le n, $$

hence

$$ \alpha_n = 1 - \left( n\varepsilon + \sum_{k=1}^{n} \frac{\lambda}{\mu_k} \right), $$

so that, for $\varepsilon$ small enough, the $\alpha_k$'s, $0 \le k \le n$, are decreasing and positive under the condition of the proposition; in particular the set $\{x : f(x) \le K\}$ is finite for any $K \ge 0$.

Take $K = (1+\lambda)/\nu$; then if $x \in \mathbb{N}^{n+1}$ is such that $f(x) \ge K$, either $x_k > 0$ for some $0 \le k \le n-1$, and in this case

$$ Q(f)(x) \leq \lambda - \mu_{k+1}(\alpha_k - \alpha_{k+1}) = -\varepsilon\mu_{k+1} < 0, $$

or $x_n \ge K$, so that

$$ Q(f)(x) \leq \lambda - \nu K = -1 < 0. $$

A Lyapunov function criterion for Markov processes shows that this implies that the Markov process $(X(t))$ is ergodic. See Proposition 8.14 of Robert [18] for example. $\square$
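The choice of coefficients in this proof is easy to verify numerically. The sketch below (with made-up rates satisfying condition (28)) computes the $\alpha_k$ from the recursion $\alpha_{k-1} - \alpha_k = \lambda/\mu_k + \varepsilon$ and checks that they are decreasing and positive.

```python
# Numerical check (with made-up rates satisfying condition (28)) of the linear
# Lyapunov function used in the proof: alpha_0 = 1 and
# alpha_{k-1} - alpha_k = lambda/mu_k + eps, so that
# alpha_n = 1 - (n*eps + sum_k lambda/mu_k) stays positive for eps small enough.

def lyapunov_coefficients(lam, mu, eps):
    alpha = [1.0]
    for mu_k in mu:
        alpha.append(alpha[-1] - (lam / mu_k + eps))
    return alpha

lam, mu = 1.0, (4.0, 5.0, 10.0)          # sum_k lam/mu_k = 0.55 < 1: (28) holds
rho = sum(lam / m for m in mu)
eps = (1 - rho) / (2 * len(mu))          # guarantees alpha_n = (1 - rho)/2 > 0
alpha = lyapunov_coefficients(lam, mu, eps)
print(alpha)
assert all(a > b > 0 for a, b in zip(alpha, alpha[1:]))
```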
**Decreasing Service Rates.** The analog of the "good" case $\mu > \nu$ is proved in the next proposition.

**Proposition 4.2.** *Under the condition $\mu_1 > \mu_2 > \cdots > \mu_{n-1} > \mu_n - \nu > 0$, the Markov process $(X(t)) = (X_k(t), 0 \le k \le n)$ describing the linear file-sharing network is ergodic for any $\lambda \ge 0$.*

*Proof.* The proof proceeds in two steps: first, coupling arguments with Yule processes make it possible to prove (30); then one can use the same technique as in the proof of Proposition 2.3, see Robert [18, Theorem 9.7].

*Step 1 (coupling).* Let $(W_n(t))$ be the process with $Q$-matrix defined by Relation (15) with $\mu_Z = \mu_n$ and starting at $W_n(0) = w_n \ge 1$. Since $\mu_n > \nu$, the process $(\exp(-(\mu_n-\nu)t)W_n(t))$ converges almost surely to a finite and positive random variable $M_{W_n}(\infty)$ by Corollary 3.2. Moreover, since $\mu_{n-1} > \mu_n - \nu > 0$, Corollary 3.1 entails that the birth instants $(\sigma_\ell^n)$ of this process are such that

$$ \sum_{\ell \ge 1} e^{-\mu_{n-1} \sigma_\ell^n} < +\infty, \text{ almost surely.} $$

Let $(Y_{n-1}(t))$ be an independent Yule process with parameter $\mu_{n-1}$ with initial condition $Y_{n-1}(0) = w_{n-1} \ge 1$ and let $(W_{n-1}(t))$ be the resulting process when its individuals are killed at the instants $(\sigma_\ell^n)$ of births of $(W_n(t))$: the previous equation and Proposition 3.4 show that $(W_{n-1}(t))$ can survive forever with a positive probability.
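The survival mechanism behind Proposition 3.4 can be illustrated by a small simulation. The sketch below makes simplifying assumptions not in the text: deterministic kill instants $\sigma_\ell = 0.8\,\ell$ (which grow linearly, so $\sum_\ell e^{-\mu\sigma_\ell} < +\infty$), and an escape threshold at which survival is declared, since a Yule population that has grown large essentially cannot be driven to 0 by isolated kills.

```python
import random

random.seed(2)

def survives(mu, kill_times, cap=200):
    """Yule process with birth rate mu per individual, started from one
    individual; at each instant in kill_times one individual is removed.
    Survival is declared once the population reaches `cap` or the kill
    instants are exhausted (a plain Yule process never dies out)."""
    pop, t = 1, 0.0
    kills = sorted(kill_times)
    while 0 < pop < cap:
        if not kills:
            return True
        birth = t + random.expovariate(mu * pop)
        if birth <= kills[0]:
            t, pop = birth, pop + 1
        else:
            t, pop = kills.pop(0), pop - 1
    return pop >= cap

# kill instants sigma_l = 0.8*l grow linearly, so sum_l exp(-mu*sigma_l) < +infinity
kills = [0.8 * (l + 1) for l in range(40)]
est = sum(survives(2.0, kills) for _ in range(300)) / 300
print("estimated survival probability:", est)
```

With these parameters the estimated survival probability is strictly positive, in line with the criterion of Proposition 3.4.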
Let $(Y_{n-2}(t))$ be an independent Yule process starting from $w_{n-2} \ge 1$ with parameter $\mu_{n-2}$. Define $(W_{n-2}(t))$ as the resulting process when the individuals of $(Y_{n-2}(t))$ are killed at the birth instants $(\sigma_\ell^{n-1})$ of $(W_{n-1}(t))$. Since $\mu_{n-2} > \mu_{n-1}$, the birth instants $(\tilde{\sigma}_\ell^{n-1})$ of $(Y_{n-1}(t))$ satisfy

$$ \sum_{\ell=1}^{+\infty} e^{-\mu_{n-2}\tilde{\sigma}_{\ell}^{n-1}} < +\infty $$

almost surely by Equation (14) (which still holds for a Yule process starting with more than one particle). Since the birth instants $(\sigma_\ell^{n-1})$ of $(W_{n-1}(t))$ are a subsequence of $(\tilde{\sigma}_\ell^{n-1})$, the same relation holds for $(\sigma_\ell^{n-1})$, and therefore, with a positive probability, the three processes $(e^{-(\mu_n-\nu)t}W_n(t))$, $(e^{-\mu_{n-1}t}W_{n-1}(t))$ and $(e^{-\mu_{n-2}t}W_{n-2}(t))$ converge simultaneously to positive and finite random variables $M_{W_n}(\infty)$, $M_{W_{n-1}}(\infty)$ and $M_{W_{n-2}}(\infty)$, respectively. This construction can be repeated inductively to give the existence of $n$ processes $(W_k(t), k = 1, \dots, n)$ such that $(\sigma_\ell^k)$ is the sequence of birth times of $W_k$, $W_n$ is the birth-and-death process with $Q$-matrix (15), $W_k$ for $1 \le k \le n-1$ is a Yule process with parameter $\mu_k$ killed at $(\sigma_\ell^{k+1})$, and the event $\mathcal{E} = \{M_{W_1}(\infty) > 0, \dots, M_{W_n}(\infty) > 0\}$ has a positive probability. On this event, $W_k(t) \ge 1$ for all $t \ge 0$ and $1 \le k \le n-1$, and

$$ \lim_{t \to +\infty} W_n(t) = +\infty. $$
For $0 \le k \le n-1$, one defines $(X_k^S(t)) = (X_{k,n-k}^S(t), \dots, X_{k,n}^S(t))$, the $k$th saturated system, as the $(k+1)$-dimensional Markov process with generator

$$
(29) \qquad \begin{aligned}
Q_k^S(f)(x) ={}& \mu_{n-k}(x_{n-k} \lor 1)[f(x + e_{n-k}) - f(x)] \\
& + \sum_{\ell=1}^k \mu_{n-k+\ell}(x_{n-k+\ell} \lor 1)[f(x + e_{n-k+\ell} - e_{n-k+\ell-1}) - f(x)] \mathbf{1}_{\{x_{n-k+\ell-1} > 0\}} \\
& + \nu x_n[f(x - e_n) - f(x)],
\end{aligned}
$$

where $x \in \mathbb{N}^{k+1}$ and $f : \mathbb{N}^{k+1} \to \mathbb{R}_+$ is an arbitrary function. Compared with the process $(X_\ell(t), 1 \le \ell \le n)$ with generator $Q$, this amounts to looking at the $k+1$ last queues $(X_{n-k}(t), \dots, X_n(t))$ under the assumption that queue $n-k-1$ is saturated, i.e., $X_{n-k-1}(t) \equiv +\infty$ for all $t \ge 0$.
Note that for any $0 \le k \le n-1$, the transition rates of the Markov processes $(W_{n-\ell}(t), 0 \le \ell \le k)$ and $(X_{k,n-\ell}^S(t), 0 \le \ell \le k)$ are identical as long as no coordinate hits 0; one thus concludes that, with positive probability, the relation

$$ \lim_{t \to +\infty} X_{k,n}^{S}(t) = +\infty $$

holds when $X_{k,n-\ell}^S(0) \ge 1$ for $\ell=0,\dots,k$. Consequently, since the set $(\mathbb{N}\setminus\{0\})^{k+1}$ can be reached with positive probability from any initial state in $\mathbb{N}^{k+1}$ by $(X_k^S(t))$,

$$ (30) \qquad \lim_{t \to +\infty} \mathbb{E}(X_{k,n}^S(t)) = +\infty. $$

*Step 2 (Foster's criterion).* We use Foster's criterion as stated in Theorem 9.7 of Robert [18]. First we inspect the case when $X_n(0)$ is large, then the case when $X_n(0)$ is bounded and $X_{n-1}(0)$ is large, and so on. The key idea is that when $X_{n-k-1}(0)$ is large, the process $(X_{n-k}(t), \dots, X_n(t))$ essentially behaves as the process $(X_k^S(t))$, for which Relation (30) ensures that the output rate is arbitrarily large.
Let $X(0) = x = (x_k) \in \mathbb{N}^{n+1}$; since the last queue serves each request at rate $\nu$, for $t \ge 0$,

$$ \mathbb{E}(\|X(t)\|) \le \|x\| + \lambda t - x_n (1 - e^{-\nu t}), $$

where $\|x\| = x_0 + \dots + x_n$ for $x = (x_0, \dots, x_n) \in \mathbb{N}^{n+1}$. Define $t_n = 1$ and let $K_n$ be such that $\lambda t_n - K_n(1 - \exp(-\nu)) \le -1$, so that the relation

$$ \mathbb{E}_x(\|X(t_n)\|) - \|x\| \le -1 $$

holds when $x_n \ge K_n$.

From Equation (30) with $k=0$, one gets that there exists some $t_{n-1}$ such that for any $x_n \le K_n$,

$$ \nu \int_0^{t_{n-1}} \mathbb{E}_{x_n} (X_{0,n}^S(u))\, du \geq \lambda t_{n-1} + 2. $$

The two processes $(X_0^S(t))$ and $(X(t))$ can be built on the same probability space such that if they start from the same initial state, then the two processes $(X_{0,n}^S(t))$ and $(X_n(t))$ are identical as long as $X_{n-1}(t)$ stays positive. Since moreover the hitting time $\inf\{t \ge 0 : X_{n-1}(t) = 0\}$ goes to infinity as $x_{n-1}$ goes to infinity for any $x_n \le K_n$, one gets that there exists $K_{n-1}$ such that if $x_{n-1} \ge K_{n-1}$ and $x_n < K_n$, then the relation

$$
\begin{aligned}
\mathbb{E}_x(\|X(t_{n-1})\|) - \|x\| &= \lambda t_{n-1} - \nu \int_0^{t_{n-1}} \mathbb{E}_x(X_n(u))\, du \\
&\le \lambda t_{n-1} - \left( \nu \int_0^{t_{n-1}} \mathbb{E}_{x_n}(X_{0,n}^S(u))\, du - 1 \right) \le -1
\end{aligned}
$$

holds.
By induction, one gets in a similar way that there exist constants $t_n, \dots, t_0$ and $K_n, \dots, K_0$ such that for any $0 \le \ell \le n$, if $x_n \le K_n$, $x_{n-1} \le K_{n-1}$, $\dots$, $x_{n-\ell+1} \le K_{n-\ell+1}$ and $x_{n-\ell} > K_{n-\ell}$, then

$$ \mathbb{E}_x (\|X(t_{n-\ell})\|) - \|x\| \le -1. $$

Theorem 8.13 of Robert [18] shows that $(X(t))$ is an ergodic Markov process. The proposition is proved. $\square$
**Analysis of the Two-Chunk Network.** In this subsection, one investigates the case when the monotonicity condition $\mu_1 > \cdots > \mu_{n-1} > \mu_n - \nu > 0$ fails. In general we conjecture the existence of bottlenecks, which implies that the network can only accommodate a finite input rate. For instance, when $\mu_n - \nu < 0$, it is easily seen that the network is unstable for $\lambda > \lambda^*$, where $\lambda^*$ is defined in Equation (32) below.

The first non-trivial case occurs for $n=2$, for which the monotonicity condition breaks in two situations: either when $\mu_2 - \nu > \mu_1$ or when $\mu_2 < \nu$. The latter case can in fact be dealt with by the exact same arguments as before; see Proposition 4.4.

The actual difficulty is when $\mu_2 - \nu > \mu_1$: then the stationary behavior of $(X_2(t))$ is linked to the stationary behavior of the first saturated model $(X_1^S(t))$ defined through its $Q$-matrix (29). The difficulty in this case is that one needs to compare two processes which grow exponentially fast.

**Proposition 4.3.** *Assume that $\mu_2 - \nu > \mu_1$; then the first saturated process $(X_1^S(t))$ with $Q$-matrix defined by Equation (29) is ergodic.*

**Corollary 4.1.** *If $\mu_2 - \nu > \mu_1$ and if*

$$ \lambda_2^* \stackrel{\text{def.}}{=} \nu \mathbb{E}_{\pi^S} (X_{1,2}^S(0)), $$

*where $\pi^S$ is the invariant distribution of the Markov process $(X_1^S(t))$, then the process $(X(t)) = (X_k(t), k = 0, 1, 2)$ describing the linear file-sharing network with parameters $\lambda, \mu_1, \mu_2$ and $\nu$ is ergodic for $\lambda < \lambda_2^*$ and transient for $\lambda > \lambda_2^*$.*
*Sketch of Proof.* The proof of transience when $\lambda > \lambda_2^*$ proceeds as in Section 2: when $X_0(0)$ is large, the process $(X_1(t), X_2(t))$ can be coupled for some time with the saturated system $(X_1^S(t))$. Since the output rate $\lambda_2^*$ of this system is smaller than the input rate $\lambda$, this implies that $(X_0(t))$ builds up, and it can indeed be shown that $X_0(t)/t$ converges almost surely to $\lambda - \lambda_2^*$.

The ergodicity when $\lambda < \lambda_2^*$ is slightly more involved, but it relies on the same arguments as those employed in the proof of Proposition 4.2. The details are omitted. $\square$
*Proof of Proposition 4.3.* Denote $(X_1^S(t)) = (X_{1,1}^S(t), X_{1,2}^S(t))$; then as long as the first coordinate $X_{1,1}^S$ is positive, the process $(X_1^S(t))$ has the same distribution as $(W(t), Z(t))$ introduced in Section 3: $(Z(t))$ is a Bellman-Harris process with Malthusian parameter $\mu_2 - \nu$ and $(W(t))$ is a Yule process with parameter $\mu_1$ killed at the times of births of $(Z(t))$.

By Proposition 3.5 and since $\mu_2 - \nu > \mu_1$, one has that $(X_{1,1}^S(t))$ returns to 0 infinitely often. When $(X_{1,1}^S(t))$ is at 0, it jumps to 1 after an exponential time with parameter $\mu_1$; one denotes by $(E_{\mu_1,n})$ the corresponding i.i.d. sequence of successive residence times at 0. One defines the sequence $(S_n)$ by induction: $S_0 = 0$ and

$$S_{n+1} = \inf\{t > S_n : X_{1,1}^S(t) = 0\} + E_{\mu_1, n+1}, \quad n \ge 0.$$

For $n \ge 1$, $X_{1,1}^S(S_n) = 1$ and, for $n \ge 0$, define $M_n \stackrel{\text{def.}}{=} X_{1,2}^S(S_n)$. With the notation of Proposition 3.5, $(X_{1,1}^S(t))$ hits 0 after a duration $H_{0,n}$ and at that time $(X_{1,2}^S(t))$ is at $Z(H_{0,n})$ with the initial condition $Z(0) = M_n$; while $X_{1,1}^S$ is still at 0, the dynamics of $X_{1,2}^S$ is simple, since this queue just empties. Finally, at time $S_{n+1} = S_n + H_{0,n} + E_{\mu_1,n+1}$, $(X_{1,1}^S(t))$ returns to 1 and at this instant the location of $(X_{1,2}^S(t))$ is given by

$$X_{1,2}^{S}(S_{n+1}) = M_{n+1} = \sum_{i=1}^{Z(H_{0,n})} 1_{\{E_{\nu,i}>E_{\mu_1,n+1}\}},$$

where $(E_{\nu,i})$ are i.i.d. exponential random variables with parameter $\nu$, the $i$th variable being the residence time of the $i$th request in node 2. Consequently, $(M_n, n \ge 1)$ is a Markov chain whose transitions are defined by Relation (25) with $p = \nu / (\nu + \mu_1)$; note that $(M_n, n \ge 0)$ has the same dynamics only when $X_{1,1}^S(0) = 1$.

Define for any $K > 0$ the stopping time $T_K$ by

$$T_K = \inf\{t \ge 0 : X_{1,2}^S(t) \le K,\ X_{1,1}^S(t) = 1\}.$$
The ergodicity of $(X_1^S(t))$ will follow from the finiteness of $\mathbb{E}_{(x_1,x_2)}(T_K)$ for some $K$ large enough and for an arbitrary $x = (x_1, x_2) \in \mathbb{N}^2$. The strong Markov property of $(X_1^S(t))$ applied at time $S_1$ gives

$$\mathbb{E}_{(x_1,x_2)}(T_K) \le 2\mathbb{E}_{(x_1,x_2)}(S_1) + \mathbb{E}_{(x_1,x_2)}\left[\mathbb{E}_{(1,X_{1,2}^S(S_1))}(T_K)\right],$$

and so one only needs to study $T_K$ conditioned on the event $\{X_{1,1}^S(0) = 1\}$, since $\mathbb{E}_{(x_1,x_2)}(S_1)$ is finite in view of Proposition 3.5.

On this event, with $N_K$ defined as in Proposition 3.6, the identity

$$ (31) \qquad T_K = \sum_{i=0}^{N_K} (H_{0,i} + E_{\mu_1,i}) $$

holds. For $i \ge 0$, the Markov property of $(M_n, n \ge 0)$ gives

$$ \mathbb{E}_{(x_1,x_2)}(H_{0,i} 1_{\{i \le N_K\}}) = \mathbb{E}_{(x_1,x_2)}(\mathbb{E}_{(1,M_i)}(H_0) 1_{\{i \le N_K\}}). $$

With the same argument as in the proof of Proposition 3.6, one has

$$ \mathbb{E}_{(1,M_i)}(H_0) \le \mathbb{E}_{(1,0)}(H_0) < +\infty; $$

combining this with Equations (31) and (26) of Proposition 3.6, one gets that for some $\gamma > 0$ and some $K > 0$,

$$ \mathbb{E}_{(x_1,x_2)}(T_K) \leq 2\mathbb{E}_{(x_1,x_2)}(S_1) + C \left(1 + \mathbb{E}_{(x_1,x_2)}\left[\log\left(1 + X_{1,2}^S(S_1)\right)\right]\right), $$

with the constant $C = (\mathbb{E}_{(1,0)}(H_0) + 1/\mu_1)/\gamma$. This last term is finite for any $(x_1, x_2)$ in view of Proposition 3.5, which proves the proposition. $\square$
**Proposition 4.4.** *If $\nu > \mu_2$ and*

$$ (32) \qquad \lambda^* \stackrel{\text{def.}}{=} \frac{\mu_2}{(1 - \mu_2/\nu)(1 - \log(1 - \mu_2/\nu))}, $$

*then the Markov process $(X(t)) = (X_k(t), k = 0, 1, 2)$ is transient if $\lambda > \lambda^*$ and ergodic if $\lambda < \lambda^*$.*

*Sketch of Proof.* Transience comes directly from the fact that the last coordinate is stochastically dominated by the birth-and-death process $(Y_1^1(t))$ of Section 2.

As before, the arguments employed in the proof of Proposition 4.2 to prove ergodicity can also be used; for this reason they are only sketched. One has in fact to consider the following situations.

— If there are many customers in the last queue, then the total number of customers instantaneously decreases.

— If there are many customers in the second queue, then the last queue has time to get close to stationarity; the input rate is $\lambda$ and the output rate is $\lambda^*$.

— Finally, if there are many customers in the first queue, then it is easily seen that the second queue builds up, since it grows like a Yule process killed at times $(\sigma_n)$, where the sequence $(\sigma_n)$ essentially grows linearly since the last queue is stable. Hence the second queue reaches high values and the last queue offers an output rate of $\lambda^*$.

Hence when $\lambda < \lambda^*$, the Markov process $(X(t))$ is ergodic. $\square$
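The threshold (32) is straightforward to evaluate. The following sketch computes $\lambda^*$ for a few hypothetical parameter pairs with $\nu > \mu_2$.

```python
import math

# Numerical illustration of the critical rate in Equation (32) for a few
# hypothetical parameter pairs with nu > mu_2.

def lambda_star(mu2, nu):
    r = 1 - mu2 / nu                       # r = 1 - mu_2/nu, in (0, 1)
    return mu2 / (r * (1 - math.log(r)))

for mu2, nu in [(1.0, 2.0), (1.0, 4.0), (2.0, 3.0)]:
    print(mu2, nu, round(lambda_star(mu2, nu), 4))
```

Note that $\lambda^* > \mu_2$ always holds in this regime only when $r(1 - \log r) < 1$, which the printed values make easy to inspect for each pair.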
## APPENDIX A. PROOF OF PROPOSITION 3.3

In this appendix the notation of Section 3 is used. Since the random variable $(B_\sigma(t) \mid Z(0) = 0)$ is stochastically smaller than $(B_\sigma(t) \mid Z(0) = z)$ for any $z \in \mathbb{N}$, it is enough to show that for $\eta < \eta^*(\nu/\mu_Z)$,

$$ \mathbb{E}_0 \left[ \sup_{t \ge \sigma_1} (e^{\eta \alpha t} B_\sigma(t)^{-\eta}) \right] < +\infty, $$

where $\alpha = \mu_Z - \nu > 0$.

Note that the process $(B_\sigma(t+\sigma_1), t \ge 0)$ under $\mathbb{P}_0$ has the same distribution as $(B_\sigma(t)+1, t \ge 0)$ under $\mathbb{P}_1$, and by independence of $\sigma_1$, an exponentially distributed random variable with parameter $\mu_Z$, and $(B_\sigma(t+\sigma_1), t \ge 0)$, one gets

$$ \mathbb{E}_0 \left[ \sup_{t \ge \sigma_1} (e^{\eta \alpha t} B_\sigma(t)^{-\eta}) \right] = \mathbb{E}_0 (e^{\eta \alpha \sigma_1}) \mathbb{E}_1 \left[ \sup_{t \ge 0} (e^{\eta \alpha t} (B_\sigma(t) + 1)^{-\eta}) \right]. $$

Since $\alpha < \mu_Z$ and $\eta^*(\nu/\mu_Z) < 1$, $\mathbb{E}_0(\exp(\eta\alpha\sigma_1))$ is finite, and all one needs to prove is that the second term is finite as well.

Define $\tau$ as the last time $Z(t) = 0$:

$$ \tau = \sup\{t \ge 0 : Z(t) = 0\}, $$
with the convention that $\tau = +\infty$ if $(Z(t))$ never returns to 0. Recall that, because of the assumption $\mu_Z > \nu$, with probability 1 the process $(Z(t))$ returns to 0 only a finite number of times.

Conditioned on the event $\{\tau = +\infty\}$, the process $(Z(t))$ is a $(p, \lambda)$-branching process conditioned on survival, with $\lambda = \mu_Z + \nu$ and $p = \mu_Z/\lambda$. Such a branching process conditioned on survival can be decomposed as $Z = Z_{(1)} + Y$, where $(Y(t))$ is a Yule process with parameter $\alpha$. See Athreya and Ney [3]. Consequently, for any $0 < \eta < 1$,

$$ \mathbb{E}_1 \left[ \sup_{t \ge 0} \left( e^{\eta \alpha t} (B_\sigma(t) + 1)^{-\eta} \right) \,\middle|\, \tau = +\infty \right] \le \mathbb{E}_1 \left[ \sup_{t \ge 0} \left( e^{\eta \alpha t} Y(t)^{-\eta} \right) \right]. $$

Since the $n$th split time $t_n$ of $(Y(t))$ is distributed like the maximum of $n$ i.i.d. exponential random variables, $Y(t)$ for $t \ge 0$ is geometrically distributed with parameter $1 - e^{-\alpha t}$; hence,
$$
\begin{aligned}
\sup_{t \ge 0} \left[ e^{\eta \alpha t} \mathbb{E}_1 \left( \frac{1}{Y(t)^{\eta}} \right) \right] &= \sup_{t \ge 0} \left[ e^{-(1-\eta)\alpha t} \sum_{k \ge 1} \frac{(1-e^{-\alpha t})^{k-1}}{k^{\eta}} \right] \\
&\le \sup_{0 \le u \le 1} \left[ (1-u)^{1-\eta} \sum_{k \ge 1} \frac{u^{k-1}}{k^{\eta}} \right].
\end{aligned}
$$

For $0 < u < 1$, the relation

$$
\begin{aligned}
(1-u)^{1-\eta} \sum_{k \ge 1} \frac{u^{k-1}}{k^\eta} &\le (1-u)^{1-\eta} \int_0^\infty \frac{u^x}{(1+x)^\eta}\, dx \\
&= \left(\frac{1-u}{-\log u}\right)^{1-\eta} \int_0^\infty \frac{e^{-x}}{(x-\log u)^\eta}\, dx
\end{aligned}
$$

holds, hence

$$ \sup_{t \ge 0} \left[ e^{\eta \alpha t} \mathbb{E}_1 \left( \frac{1}{Y(t)^{\eta}} \right) \right] < +\infty. $$
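This boundedness is easy to check numerically: using the geometric distribution of $Y(t)$, the sketch below evaluates $e^{\eta\alpha t}\mathbb{E}_1(Y(t)^{-\eta})$ on a grid of times (the values of $\alpha$, $\eta$, the time grid and the truncation level are arbitrary choices).

```python
import math

# Numerical check of the boundedness above: Y(t) is geometric, so
# e^{eta*alpha*t} * E_1[Y(t)^(-eta)]
#   = e^{-(1-eta)*alpha*t} * sum_{k>=1} (1 - e^{-alpha*t})^{k-1} / k^eta.
# alpha, eta, the time grid and the truncation are arbitrary test values.

def weighted_inverse_moment(alpha, eta, t, terms=200_000):
    u = 1 - math.exp(-alpha * t)
    s = sum(u ** (k - 1) / k ** eta for k in range(1, terms + 1))
    return math.exp(-(1 - eta) * alpha * t) * s

alpha, eta = 1.0, 0.5
for t in (0.1, 1.0, 5.0, 10.0):
    print(t, round(weighted_inverse_moment(alpha, eta, t), 4))
```

The printed values stay bounded as $t$ grows, consistent with the supremum above being finite.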
The process $(e^{-\alpha t}Y(t))$ being a martingale, by convexity the process $(e^{\eta\alpha t}Y(t)^{-\eta})$ is a non-negative sub-martingale. For any $\eta \in (0, 1)$, Doob's $L_p$ inequality gives the existence of a finite $q(\eta) > 0$ such that

$$ \mathbb{E}_1 \left[ \sup_{t \ge 0} (e^{\eta \alpha t} Y(t)^{-\eta}) \right] \le q(\eta) \sup_{t \ge 0} \left[ e^{\eta \alpha t} \mathbb{E}_1 \left( \frac{1}{Y(t)^{\eta}} \right) \right] < +\infty. $$

The following result has therefore been proved.

**Lemma A.1.** *For any $0 < \eta < 1$,*

$$ \mathbb{E}_1 \left[ \sup_{t \ge 0} \left( e^{\eta \alpha t} (B_{\sigma}(t) + 1)^{-\eta} \right) \,\middle|\, \tau = +\infty \right] < +\infty. $$
On the event $\{\tau < +\infty\}$, $(Z(t))$ hits 0 a geometric number of times and then couples with a $(p, \lambda)$-branching process conditioned on survival. On this event,

$$
\begin{aligned}
& \sup_{t \ge 0} (e^{\eta \alpha t} (B_{\sigma}(t) + 1)^{-\eta}) \\
&= \max \left( \sup_{0 \le t \le \tau} (e^{\eta \alpha t} (B_{\sigma}(t) + 1)^{-\eta}),\ \sup_{t \ge \tau} (e^{\eta \alpha t} (B_{\sigma}(t) + 1)^{-\eta}) \right) \\
&\le e^{\eta \alpha \tau} \left( 1 + \sup_{t \ge 0} (e^{\eta \alpha t} (B'_{\sigma}(t) + 1)^{-\eta}) \right),
\end{aligned}
$$

where $B'_\sigma(t)$ for $t \ge \tau$ is the number of births in $(\tau, t]$ of a $(p, \lambda)$-branching process conditioned on survival and independent of the variable $\tau$; consequently,

$$
\begin{aligned}
\mathbb{E}_1 \left[ \sup_{t \ge 0} \left( e^{\eta \alpha t} (B_\sigma(t) + 1)^{-\eta} \right) \,\middle|\, \tau < +\infty \right] \le{}& \mathbb{E}_1 (e^{\eta \alpha \tau} \mid \tau < +\infty) \\
& \times \left( 1 + \mathbb{E}_1 \left[ \sup_{t \ge 0} \left( e^{\eta \alpha t} (B_\sigma(t) + 1)^{-\eta} \right) \,\middle|\, \tau = +\infty \right] \right).
\end{aligned}
$$

In view of Lemma A.1, the proof of Proposition 3.3 will be complete if one can prove that
$$ \mathbb{E}_1 (e^{\eta \alpha \tau} \mid \tau < +\infty) < +\infty, $$

which actually comes from the following decomposition: under $\mathbb{P}_1(\cdot \mid \tau < +\infty)$, the random variable $\tau$ can be written as

$$ \tau = \sum_{k=1}^{1+G} (T_k + E_{\mu_Z,k}), $$

where $G$ is a geometric random variable with parameter $q = \nu/\mu_Z$, $(T_k)$ is an i.i.d. sequence with the same distribution as the extinction time of a $(p, \lambda)$-branching process starting with one particle and conditioned on extinction, and $(E_{\mu_Z,k})$ are i.i.d. exponential random variables with parameter $\mu_Z$.

Since $q$ is the probability of extinction of a $(p, \lambda)$-branching process started with one particle, $G + 1$ represents the number of times $(Z(t))$ hits 0 before going to infinity. This representation entails
$$ \mathbb{E}_1 (e^{\eta \alpha \tau} \mid \tau < +\infty) = \mathbb{E} (\gamma(\eta)^{G+1}) \quad \text{where} \quad \gamma(\eta) = \mathbb{E} (e^{\eta \alpha (T_1 + E_{\mu_Z,1})}). $$

A $(p, \lambda)$-branching process conditioned on extinction is actually a $(1 - p, \lambda)$-branching process. See again Athreya and Ney [3]. Thus $T_1$ satisfies the following recursive distributional equation:

$$ T_1 \stackrel{\text{dist.}}{=} E_{\lambda} + 1_{\{\xi=2\}}(T_1 \lor T_2), $$

where $\mathbb{P}(\xi = 2) = 1-p$, $E_\lambda$ is an exponential random variable with parameter $\lambda$ and $T_2$ is an independent copy of $T_1$. This equation yields

$$ \mathbb{P}(T_1 \ge t) \le e^{-\lambda t} + 2\lambda(1-p) \int_0^t \mathbb{P}(T_1 \ge t-u) e^{-\lambda u}\, du, $$

and Gronwall's Lemma applied to the function $t \mapsto \exp(\lambda t)\mathbb{P}(T_1 \ge t)$ gives that

$$ \mathbb{P}(T_1 \ge t) \le e^{(\lambda - 2\lambda p)t} = e^{(\nu - \mu_Z)t}, $$

hence for any $0 < \eta < 1$,

$$ \mathbb{E}(e^{\eta\alpha T_1}) \le \frac{1}{1-\eta}. $$

Since $G$ is a geometric random variable with parameter $q$, $\mathbb{E}(\gamma(\eta)^G)$ is finite if and only if $\gamma(\eta) < 1/q$. Since finally

$$ \gamma(\eta) = \frac{\mu_Z}{\mu_Z - \eta\alpha} \mathbb{E} (e^{\eta\alpha T_1}) \le \frac{\mu_Z}{(1-\eta)(\mu_Z - \eta\alpha)}, $$

one can easily check that $\gamma(\eta) < 1/q$ for $\eta < \eta^*(\nu/\mu_Z)$ as defined by Equation (19), which concludes the proof of Proposition 3.3.
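The moment bound used in this last step can be checked numerically. The sketch below integrates the tail bound $\mathbb{P}(T_1 \ge t) \le e^{-\alpha t}$ (taking $\alpha$, $\eta$ and the discretization as arbitrary test values) and recovers the bound $1/(1-\eta)$.

```python
import math

# Numerical check of the moment bound above: with the tail estimate
# P(T_1 >= t) <= e^{-alpha t} and integration by parts,
# E(e^{eta*alpha*T_1}) <= 1 + eta*alpha * int_0^inf e^{-(1-eta)*alpha*t} dt
#                       = 1/(1-eta).
# alpha, eta and the discretization are arbitrary test choices.

def moment_bound(alpha, eta, step=1e-3, tmax=100.0):
    total, t = 1.0, 0.0
    while t < tmax:
        total += eta * alpha * math.exp(-(1 - eta) * alpha * t) * step
        t += step
    return total

for eta in (0.25, 0.5, 0.75):
    print(eta, round(moment_bound(1.0, eta), 3), "vs", round(1 / (1 - eta), 3))
```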
|
| 995 |
+
|
| 996 |
+
REFERENCES

[1] David Aldous and Jim Pitman, *Tree-valued Markov chains derived from Galton-Watson processes*, Annales de l'Institut Henri Poincaré. Probabilités et Statistiques **34** (1998), no. 5, 637-686.

[2] Gerold Alsmeyer, *On the Galton-Watson Predator-Prey Process*, Annals of Applied Probability **3** (1993), no. 1, 198-211.

[3] K. B. Athreya and P. E. Ney, *Branching processes*, Springer, 1972.

[4] Thomas Bonald, Laurent Massoulié, Fabien Mathieu, Diego Perino, and Andrew Twigg, *Epidemic live streaming: optimal performance trade-offs*, Proceedings of SIGMETRICS'08 (New York, NY, USA), ACM, 2008, pp. 325-336.

[5] Maury Bramson, *Stability of queueing networks*, Lecture Notes in Mathematics, vol. 1950, Springer, Berlin, 2008. Lectures from the 36th Probability Summer School held in Saint-Flour, July 2-15, 2006.

[6] Hong Chen and David D. Yao, *Fundamentals of queueing networks: performance, asymptotics, and optimization*, Stochastic Modelling and Applied Probability, Springer-Verlag, New York, 2001.

[7] T. D. Dang, R. Pereczes, and S. Molnár, *Modeling the population of file-sharing peer-to-peer networks with branching processes*, IEEE Symposium on Computers and Communications (ISCC'07) (Aveiro, Portugal), July 2007.

[8] F. P. Kelly, *Loss networks*, Annals of Applied Probability **1** (1991), no. 3, 319-378.

[9] J. F. C. Kingman, *The first birth problem for an age-dependent branching process*, Annals of Probability **3** (1975), no. 5, 790-801.

[10] L. Leskelä, *Stochastic relations of random variables and processes*, Journal of Theoretical Probability (2009), to appear.

[11] Laurent Massoulié and Andrew Twigg, *Rate-optimal schemes for Peer-to-Peer live streaming*, Performance Evaluation **65** (2008), no. 11-12, 804-822.

[12] Laurent Massoulié and Milan Vojnović, *Coupon replication systems*, Proceedings of SIGMETRICS'05 (Banff, Alberta, Canada), no. 1, June 2005, pp. 2-13.

[13] Olle Nerman, *On the convergence of supercritical general (C-M-J) branching processes*, Z. Wahrscheinlichkeitstheorie verw. Gebiete **57** (1981), 365-395.

[14] J. Neveu, *Erasing a branching tree*, Advances in Applied Probability (1986), no. suppl., 101-108.

[15] R. Núñez-Queija and B. J. Prabhu, *Scaling laws for file dissemination in P2P networks with random contacts*, Proceedings of IWQoS, 2008.

[16] Nadim Parvez, Carey Williamson, Anirban Mahanti, and Niklas Carlsson, *Analysis of bittorrent-like protocols for on-demand stored media streaming*, Proceedings of SIGMETRICS'08 (New York, NY, USA), ACM, 2008, pp. 301-312.

[17] Dongyu Qiu and R. Srikant, *Modeling and performance analysis of bittorrent-like peer-to-peer networks*, Proceedings of SIGCOMM'04 (New York, NY, USA), ACM, 2004, pp. 367-378.

[18] Philippe Robert, *Stochastic networks and queues*, Stochastic Modelling and Applied Probability Series, vol. 52, Springer, New York, June 2003.

[19] Philippe Robert and Florian Simatos, *Occupancy schemes associated to Yule processes*, Advances in Applied Probability **41** (2009), no. 2, to appear.

[20] L. C. G. Rogers and David Williams, *Diffusions, Markov processes, and martingales. Vol. 2: Itô calculus*, John Wiley & Sons Inc., New York, 1987.

[21] E. Seneta, *On the supercritical branching process with immigration*, Mathematical Biosciences **7** (1970), 9-14.

[22] Florian Simatos, Philippe Robert, and Fabrice Guillemin, *A queueing system for modeling a file sharing principle*, Proceedings of SIGMETRICS'08 (New York, NY, USA), ACM, 2008, pp. 181-192.

[23] Florian Simatos and Danielle Tibi, *Spatial homogenization in a stochastic network with mobility*, Annals of Applied Probability (2009), to appear.

[24] Riikka Susitaival, Samuli Aalto, and Jorma Virtamo, *Analyzing the dynamics and resource usage of P2P file sharing by a spatio-temporal model*, International Workshop on P2P for High Performance Computation Sciences, 2006.

[25] David Williams, *Probability with martingales*, Cambridge University Press, 1991.

[26] Xiangying Yang and Gustavo de Veciana, *Service capacity of peer to peer networks*, Proceedings of IEEE Infocom'04, ACM, 2004, pp. 2242-2252.

(L. Leskelä) HELSINKI UNIVERSITY OF TECHNOLOGY, DEPARTMENT OF MATHEMATICS AND SYSTEMS ANALYSIS, PO BOX 1100, 02015 TKK, FINLAND

*E-mail address:* lasse.leskela@iki.fi

*URL:* http://www.iki.fi/lsl

(Ph. Robert, F. Simatos) INRIA PARIS — ROCQUENCOURT, DOMAINE DE VOLUCEAU, BP 105, 78153 LE CHESNAY, FRANCE.

*E-mail address:* Philippe.Robert@inria.fr

*E-mail address:* Florian.Simatos@inria.fr

*URL:* http://www-rocq.inria.fr/~robert
# Ostensive Automatic Schema Mapping for Taxonomy-based Peer-to-Peer Systems
Yannis Tzitzikas¹ and Carlo Meghini
Istituto di Scienza e Tecnologie dell' Informazione [ISTI]
Consiglio Nazionale delle Ricerche [CNR], Pisa, Italy
Email: {tzitzik|meghini}@iei.pi.cnr.it
**Abstract** This paper considers Peer-to-Peer systems in which peers employ taxonomies for describing the contents of their objects and for formulating semantic-based queries to the other peers of the system. As each peer can use its own taxonomy, peers are equipped with inter-taxonomy mappings in order to carry out the required translation tasks. As these systems are ad-hoc, the peers should be able to create or revise these mappings on demand and at run-time. For this reason, we introduce an ostensive data-driven method for automatic mapping and specialize it for the case of taxonomies.
## 1 Introduction
There is a growing research interest on peer-to-peer systems like Napster, Gnutella, FreeNet and many others. A peer-to-peer (P2P) system is a distributed system in which participants (the peers) rely on one another for service, rather than solely relying on dedicated and often centralized servers. Many examples of P2P systems have emerged recently, most of which are wide-area, large-scale systems that provide content sharing [4], storage services [19], or distributed "grid" computation [2, 1]. Smaller-scale P2P systems also exist, such as federated, server-less file systems [10, 7] and collaborative workgroup tools [3].
Existing peer-to-peer (P2P) systems have focused on specific application domains (e.g. music file sharing) or on providing file-system-like capabilities. These systems do not yet provide semantic-based retrieval services. In most cases, the name of the object (e.g. the title of a music file) is the only means for describing the contents of the object. Semantic-based retrieval in P2P systems is a great challenge. In general, the language that can be used for indexing the objects of the domain and for formulating semantic-based queries can be *free* (e.g. natural language) or *controlled*, i.e. object descriptions and queries may have to conform to a specific vocabulary and syntax. The first case resembles distributed Information Retrieval (IR) systems, and this approach is applicable in the case where the objects of the domain have a textual content (e.g. see [23]). In this paper we focus on the second case, where the objects of a peer are indexed according to a specific conceptual model represented in a data model (e.g. relational, object-oriented, logic-based, etc.), and content searches are formulated using a specific query language. This approach, which can be called the "database approach", has started to receive noteworthy attention from researchers, as it is believed that database and knowledge base research has much to contribute to the P2P grand challenge through its wealth of techniques for sophisticated semantics-based data models and query processing (e.g. see [14, 9, 18, 15, 32]). A P2P system might impose a single conceptual model on all participants to enforce uniform, global access, but this would be too restrictive. Alternatively, a limited number of conceptual models may be allowed, so that traditional information mediation and integration techniques will likely apply (with the restriction that there is no central authority). The case of fully heterogeneous conceptual models makes uniform global access extremely challenging, and this is the case that we are interested in.

¹ Work done during the postdoctoral studies of the author at CNR-ISTI as an ERCIM fellow.
The first and basic question that we have to investigate is which conceptual modeling approach is appropriate for the P2P paradigm. We would like a scalable conceptual modeling approach which also allows bridging the various kinds of heterogeneity in a systematic and easy manner. As there are no central servers, or mediators, each participating source must have (or be able to create) *mappings*, or articulations, between its conceptual model and the conceptual models of its neighbors in order to be able to translate the received queries to queries that can be understood (and thus answered) by the recipient sources. These mappings could be established manually (as in the case of the Semantic Web [8]), but the more appropriate approach for a P2P network, and the more challenging, is *automatic mapping*. For all these reasons, a simple, conceptually clear, and application-independent conceptual modeling approach seems to be advantageous.
In this paper we consider the case where peers employ *taxonomies*. Note that it is quite easy to create a taxonomy for a source or a mediator. Even ordinary Web users can design this kind of conceptual model. Taxonomies can be constructed either from scratch, or by extracting them from existing taxonomies (e.g. from the taxonomy of Yahoo! or ODP) using special-purpose languages and tools (e.g. see [30]). Furthermore, the design of taxonomies can be done more systematically by following a faceted approach (e.g. see [27, 26]). In addition, thanks to techniques that have emerged recently [31], taxonomies of compound terms can also be defined in a flexible and systematic manner. However, the more important advantage of taxonomies for P2P systems is that their simplicity and modeling uniformity allow integrating the contents of several sources without having to tackle complex structural differences. Indeed, as it is shown in [32], inter-taxonomy mappings offer a *uniform* method for bridging *naming, contextual and granularity* heterogeneities between the taxonomies of the sources. Given this conceptual modeling approach, a mediator does not have to tackle complex structural differences between the sources, as happens with relational mediators (e.g. see [22, 21]) and Description Logics-based mediators (e.g. see [17, 11]). Moreover, it allows the integration of *schema* and *data* in a uniform manner. Another advantage of this conceptual modeling approach is that query evaluation in taxonomy-based sources and mediators can be done efficiently (in polynomial time).
In this paper we introduce a data-driven method for automatic taxonomy articulation. We call this method *ostensive* because the meaning of each term is explained by ostension, i.e. by pointing to something (here, to a set of objects) to which the term applies. For example, the word "rose" can be defined ostensively by pointing to a rose and saying "that is a rose". Instead, the verbal methods of term definition (e.g. the synonyms or the analytic method) presuppose that the learner already knows some other terms and, thus, they are useless to someone who does not know these terms; e.g. verbal word definitions are useless to a small child who has not learnt any words at all.
Specifically, in this paper we describe an ostensive articulation method that can be used for articulating both single terms and queries, and that can be implemented efficiently by a communication protocol. However, ostensive articulation is possible in a P2P system only if the domains of the peers are not disjoint. If they are disjoint, then we cannot derive any articulation. This problem can be tackled by employing *reference collections*. For instance, each peer can have its own taxonomy, but before joining the network it must first index the objects of a small reference object set. Consequently, peers can automatically build the desired articulations by running the articulation protocol on this reference collection.
The rest of this paper is organized as follows: Section 2 introduces a general formal framework for ostensive articulation. Section 3 specializes and describes ostensive articulation for taxonomy-based sources. Section 4 discusses the application of ostensive articulation in P2P systems of taxonomy-based sources, and finally, Section 5 concludes the paper.
## 2 Ostensive Articulation
Let us first introduce the general framework. We view a source $S$ as a function $S: Q \to \mathcal{A}$ where $Q$ is the set of all queries that $S$ can answer, and $\mathcal{A}$ is the set of all answers, i.e. $\mathcal{A}=\{S(q) \mid q \in Q\}$. As we focus on retrieval queries, we assume that $\mathcal{A}$ is a subset of $\mathcal{P}(Obj)$, where $Obj$ is the set of all objects stored at the source.
The ostensive articulation technique that we shall introduce requires a "naming service", i.e. a method for computing one (or possibly more) names (i.e. queries) for each set of objects $R \subseteq Obj$. Let $Q_N$ denote the set of all names. In general, $Q_N = Q$; however we introduce $Q_N$ because we may want names to be queries of a specific form. For supporting the naming service we would like a function $n: \mathcal{P}(Obj) \to Q_N$ such that for each $R \subseteq Obj$, $S(n(R)) = R$. Having such a function, we would say that $n(R)$ is an exact name for $R$. Note that if $S$ is an onto function and $Q_N = Q$, then the naming function $n$ coincides with the inverse relation of $S$, i.e. with the relation $S^{-1}: \mathcal{P}(Obj) \to Q$. However, this is not always the case, as more often than not $S$ is not an onto function, i.e. $\mathcal{A} \subset \mathcal{P}(Obj)$. For this reason we shall introduce two naming functions, a lower naming function $n^-$ and an upper naming function $n^+$. To define these functions, we first need to define an ordering over queries. Given two queries $q$ and $q'$ in $Q$, we write $q \le q'$ if $S(q) \subseteq S(q')$, and we write $q \sim q'$ if both $q \le q'$ and $q' \le q$ hold. Note that $\sim$ is an equivalence relation over $Q$; let $Q_\sim$ denote the set of equivalence classes induced by $\sim$ over $Q$. Note that $\le$ is a partial order over $Q_\sim$.
Now we can define the functions $n^-$ and $n^+$ as follows:
$$
\begin{align*}
n^{-}(R) &= \text{lub}\{\, q \in Q_{N} \mid S(q) \subseteq R \} \\
n^{+}(R) &= \text{glb}\{\, q \in Q_{N} \mid S(q) \supseteq R \}
\end{align*}
$$
where $R$ is any subset of $Obj$. Now let $R$ be a subset of $Obj$ for which both $n^{-}(R)$ and $n^{+}(R)$ are defined (i.e. the above lub and glb exist). It is clear that in this case it holds:
$$
S(n^{-}(R)) \subseteq R \subseteq S(n^{+}(R))
$$
and that $n^-(R)$ and $n^+(R)$ are the best "approximations" of the exact name of $R$. Note that if $S(n^-(R)) = S(n^+(R))$ then both $n^-(R)$ and $n^+(R)$ are exact names of $R$.
If $Q_N$ is a query language that (a) supports disjunction ($\vee$) and conjunction ($\wedge$) and is closed with respect to these, and (b) has a top ($\top$) and a bottom ($\bot$) element such that $S(\top) = Obj$ and $S(\bot) = \emptyset$, then the functions $n^-$ and $n^+$ are defined for every subset $R$ of $Obj$. Specifically, in this case $(Q_\sim, \le)$ is a complete lattice, thus these functions are defined as:
$$
\begin{align*}
n^{-}(R) &= \bigvee \{ q \in Q_{N} \mid S(q) \subseteq R \} \\
n^{+}(R) &= \bigwedge \{ q \in Q_{N} \mid S(q) \supseteq R \}
\end{align*}
$$
As $Q_N$ is usually an infinite language, $n^-(R)$ and $n^+(R)$ are queries of infinite length. This means that in practice we also need a method for computing a query of finite length that is equivalent to $n^-(R)$, and another one that is equivalent to $n^+(R)$.
If however $Q_N$ does not satisfy the above ((a) and (b)) conditions, then $n^-(R)$ and $n^+(R)$ may not exist. For example, this happens if we want to establish relationships between single terms of two taxonomy-based sources, or between atomic concepts of two Description Logics-based sources. For such cases, we can define $n^-$ and $n^+$ as follows:
$$
\begin{align*}
n^{-}(R) &= \max\{ q \in Q_{N} \mid S(q) \subseteq R \} \\
n^{+}(R) &= \min\{ q \in Q_{N} \mid S(q) \supseteq R \}
\end{align*}
$$
where max returns the maximal element(s), and min the minimal(s). Clearly, in this case we may have several lower and upper names for a given R.
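
To make this max/min variant concrete, here is a small Python sketch; the three-name source below is hypothetical, and the answer function $S$ is modeled directly as a dictionary from names to answer sets.

```python
# Lower/upper naming over a finite name set Q_N (hypothetical toy source:
# each name is mapped directly to its answer set S(q)).
S = {
    "canaries": {1, 3},
    "birds":    {1, 3, 4},
    "animals":  {1, 3, 4, 7},
}

def lower_names(R):
    """Maximal names q with S(q) contained in R (the n^- of the text)."""
    cand = [q for q, a in S.items() if a <= R]
    return [q for q in cand if not any(S[q] < S[p] for p in cand)]

def upper_names(R):
    """Minimal names q with S(q) containing R (the n^+ of the text)."""
    cand = [q for q, a in S.items() if a >= R]
    return [q for q in cand if not any(S[p] < S[q] for p in cand)]

print(lower_names({1, 3, 4}))  # ['birds']: 'canaries' is dominated, hence not maximal
print(upper_names({1, 4}))     # ['birds']: the least answer covering {1, 4}
```

Note that, as the text observes, several maximal (or minimal) names can exist, which is why both functions return lists.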
We can now proceed and describe the ostensive articulation. Consider two sources $S_i : Q_i \to P(Obj_i)$ and $S_j : Q_j \to P(Obj_j)$. Ostensive articulation is possible only if their domains are not disjoint, i.e. if $Obj_i \cap Obj_j \neq \emptyset$. Let $C$ denote their common domain, i.e. $C = Obj_i \cap Obj_j$. The method that we shall describe yields relationships that are extensionally valid in $C$.
Suppose that $S_i$ wants to establish an articulation $a_{i,j}$ to a source $S_j$. An articulation $a_{i,j}$ can contain relationships of the form:
(i) $q_i \geq q_j$,
(ii) $q_i \leq q_j$
where $q_i \in Q_i$, $q_j \in Q_j$. These relationships have the following meaning:
(i) $q_i \geq q_j$ means that $S_i(q_i) \cap C \supseteq S_j(q_j) \cap C$
(ii) $q_i \leq q_j$ means that $S_i(q_i) \cap C \subseteq S_j(q_j) \cap C$
Before describing ostensive articulation let us make a couple of remarks. The first is that the form (i or ii) of the relationships of an articulation depends on the internal structure and functioning of the source that uses the articulation. For instance, suppose that $S_i$ acts as a mediator over $S_j$. If $S_i$ wants to compute complete (with respect to $C$) answers, then it should use only relationships of type (i) during query translation. On the other hand, if $S_i$ wants to compute sound (with respect to $C$) answers then it should use relationships of type (ii) (e.g. see [21]).
Another interesting remark is that if $S_i$ is a mediator that adopts a global-as-view modeling approach, then all $q_i$ that appear in $a_{i,j}$ are primitive concepts. On the other hand, if $S_i$ adopts a local-as-view approach then all $q_j$ that appear in $a_{i,j}$ are primitive concepts of $S_j$.
Below we describe ostensive articulation for the more general case, where $S_i$ is interested in relationships of both types, (i) and (ii), and where $q_i, q_j$ can be arbitrary queries. Let $n_j^-$ and $n_j^+$ be the naming functions of $S_j$ as defined earlier. Also let $S_i^c(q) = S_i(q) \cap C$ and $S_j^c(q) = S_j(q) \cap C$. Now suppose that $S_i$ wants to articulate a query $q_i \in Q_i$. The query $q_i$ should be articulated as follows:
$$
\begin{aligned}
q_i &\ge n_j^-(S_i^c(q_i)) && \text{if } S_i^c(q_i) \supseteq S_j^c(n_j^-(S_i^c(q_i))) \\
q_i &\le n_j^-(S_i^c(q_i)) && \text{if } S_i^c(q_i) \subseteq S_j^c(n_j^-(S_i^c(q_i))) \\
q_i &\ge n_j^+(S_i^c(q_i)) && \text{if } S_i^c(q_i) \supseteq S_j^c(n_j^+(S_i^c(q_i))) \\
q_i &\le n_j^+(S_i^c(q_i)) && \text{if } S_i^c(q_i) \subseteq S_j^c(n_j^+(S_i^c(q_i)))
\end{aligned}
$$
Observe the role of the naming functions: instead of checking all queries in $Q_j$, $S_j$ just uses its naming functions in order to compute the lower and the upper name of the set $S_i(q_i) \cap C$. Recall that the naming functions (by definition) return the most precise (semantically closest) mappings for $q_i$, so this is all that we need.
Furthermore, as we shall see below, the above relationships can be obtained without extensive communication. In fact, they can be obtained by a quite simple and efficient (in terms of exchanged messages) distributed protocol. The protocol is sketched in Figure 1; note that only two messages have to be exchanged between $S_i$ and $S_j$ for articulating the query $q_i$.

$$S_i: \begin{array}{l} (1)\ A := S_i(q_i); \\ (2)\ \text{send } A \text{ to } S_j \end{array}$$

$$S_j: \begin{array}{l} (3)\ F := A \setminus Obj_j; \\ (4)\ A := A \cap Obj_j; \\ (5)\ down := n_j^-(A);\ Bdown := S_j(down); \\ (6)\ up := n_j^+(A);\ Bup := S_j(up); \\ (7)\ \text{send } F, (down, Bdown), (up, Bup) \text{ to } S_i \end{array}$$

$$S_i: \begin{array}{l} (8)\ \text{If } (A \setminus F) \supseteq (Bdown \cap Obj_i) \text{ then set } q_i \geq down; \\ (9)\ \text{If } (A \setminus F) \subseteq (Bdown \cap Obj_i) \text{ then set } q_i \leq down; \\ (10)\ \text{If } (A \setminus F) \supseteq (Bup \cap Obj_i) \text{ then set } q_i \geq up; \\ (11)\ \text{If } (A \setminus F) \subseteq (Bup \cap Obj_i) \text{ then set } q_i \leq up \end{array}$$

Fig. 1. The ostensive articulation protocol
Another interesting point is that $S_i$ and $S_j$ do not have to a priori know (or compute) their common domain $C$, as $C$ is "discovered" during the run of the protocol (this is the reason why $S_j$ stores in $F$, and sends back to $S_i$, those objects that do not belong to $Obj_j$).
In the case where $Q_N \subset Q$, the only difference is that the message that $S_j$ sends to $S_i$ may contain more than one *up* and *down* query.
A source can run the above protocol in order to articulate one, several or all of its terms (or queries).
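
The two-message exchange above can be sketched in Python. Everything below is hypothetical toy data: $S_j$ is modeled as a map from names to answer sets, and its naming functions are brute-forced by answer-set size; a real peer would of course evaluate queries against its own taxonomy.

```python
# A sketch of the two-message articulation protocol of Fig. 1.
Sj = {"red": {1, 2}, "green": {1, 5, 6}, "red|green": {1, 2, 5, 6}}
Obj_j = {1, 2, 5, 6, 8}

def n_minus(A):  # a maximal name whose answer is contained in A
    cands = [q for q, a in Sj.items() if a <= A]
    return max(cands, key=lambda q: len(Sj[q]), default=None)

def n_plus(A):   # a minimal name whose answer contains A
    cands = [q for q, a in Sj.items() if a >= A]
    return min(cands, key=lambda q: len(Sj[q]), default=None)

def articulate(qi_extent, Obj_i):
    A = set(qi_extent)          # message 1: S_i -> S_j, the extent of q_i
    F = A - Obj_j               # objects S_j knows nothing about
    A &= Obj_j
    down, up = n_minus(A), n_plus(A)
    # message 2: S_j -> S_i : F plus the named answer sets; S_i then compares
    common = qi_extent - F
    rels = []
    for name in dict.fromkeys(n for n in (down, up) if n is not None):
        B = Sj[name] & Obj_i
        if B <= common:
            rels.append((">=", name))   # q_i >= name (valid on the common domain)
        if common <= B:
            rels.append(("<=", name))   # q_i <= name
    return rels

print(articulate({1, 5, 6}, Obj_i={1, 5, 6, 9}))  # [('>=', 'green'), ('<=', 'green')]
```

The single `articulate` call plays both roles for clarity; in a deployment the two halves would run on different peers, with exactly the two messages of Figure 1 on the wire.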
## 3 Ostensive Articulation for Taxonomy-based Sources
Here we shall specialize ostensive articulation for the case of taxonomy-based sources. Examples of this kind of source include Web Catalogs (like Yahoo!, Open Directory) and Classification Schemes used in Library and Information Science.
We view a taxonomy-based source $S$ as a quadruple $S = (T, \preceq, I, Q)$ where:
- $T$ is a finite set of names called *terms*, e.g. **Canaries**, **Birds**.
- $\preceq$ is a reflexive and transitive binary relation over $T$ called *subsumption*, e.g. **Canaries** $\preceq$ **Birds**.
- $I$ is a function $I: T \to P(Obj)$ called *interpretation* where *Obj* is a finite set of objects. For example *Obj* = {1, ..., 100} and $I(\mathbf{Canaries}) = \{1, 3, 4\}$.
- $Q$ is the set of all queries defined by the grammar $q ::= t \mid q \wedge q' \mid q \vee q' \mid \neg q \mid (q)$, where $t$ is a term in $T$.
Figure 2 shows an example of a source consisting of 8 terms and 3 objects².
**Fig. 2.** Graphical representation of a source

² We illustrate only the Hasse diagram of the subsumption relation.

We assume that every terminology $T$ also contains two special terms, the *top term*, denoted by $\top$, and the *bottom term*, denoted by $\bot$. The top term subsumes every other term $t$, i.e. $t \preceq \top$. The bottom term is strictly subsumed by every other term different from top and bottom, i.e. $\bot \preceq \bot$, $\bot \preceq \top$, and $\bot \prec t$ for every $t$ such that $t \neq \top$ and $t \neq \bot$. We also assume that $I(\bot) = \emptyset$ in every interpretation $I$.
The answer $S(q)$ of a query $q$ is defined as follows (for more see [33]):
$$
\begin{align*}
S(t) &= \bigcup \{ I(t') \mid t' \preceq t \} \\
S(q \land q') &= S(q) \cap S(q') \\
S(q \lor q') &= S(q) \cup S(q') \\
S(\neg q) &= \mathit{Obj} \setminus S(q)
\end{align*}
$$
For example, in Figure 2 we have $S(\text{DB}) = \{1,2\}$, as $S(\text{DB}) = I(\text{DB}) \cup I(\text{Databases}) \cup I(\text{RDB}) = \{1,2\}$, and $S(\text{DB} \land \text{JournalArticle}) = \{1\}$. We define the *index* of an object $o$ with respect to an interpretation $I$, denoted by $D_I(o)$, as follows: $D_I(o) = \bigwedge \{t \in T \mid o \in I(t)\}$. For example, in the source of Figure 2 we have $D_I(3) = \text{JournalArticle}$ and $D_I(1) = \text{RDB} \land \text{JournalArticle}$.
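
The answer of a term query is thus the union of the interpretations of all terms subsumed by it; a minimal Python sketch follows (the taxonomy edges and the interpretation below are hypothetical, chosen only to be consistent with the worked example above).

```python
# Evaluating S(t) as the union of I over the terms subsumed by t.
# Hypothetical data in the style of Fig. 2: RDB <= Databases <= DB.
parents = {"RDB": ["Databases"], "Databases": ["DB"],
           "DB": [], "JournalArticle": []}
I = {"DB": set(), "Databases": {2}, "RDB": {1}, "JournalArticle": {1, 3}}

def ancestors(s):
    """All terms strictly above s in the taxonomy (transitive closure)."""
    seen, stack = set(), list(parents[s])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents[p])
    return seen

def S(t):
    """Answer of the term query t: union of I(t') for all t' <= t."""
    return set().union(*(I[s] for s in parents if t in ancestors(s) or s == t))

print(S("DB"))                        # {1, 2}
print(S("DB") & S("JournalArticle"))  # {1}, i.e. S(DB AND JournalArticle)
```

Boolean queries then reduce to plain set operations on these answers, exactly as in the defining equations above.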
Let us now define the naming functions for this kind of source. We define the set of names $Q_N$ as follows: $Q_N = \{ q \in Q \mid q \text{ does not contain negation } \neg \}$. We exclude queries with negation because, as shown in [32], if such queries appear in articulations then we may get systems which do not have a unique minimal model, and this makes query evaluation more complicated and less efficient.
The lower and upper name of a set $R \subseteq Obj$ are defined as in the general framework, and clearly $(Q_N, \leq)$ is a complete lattice. What remains is to find finite-length queries that are equivalent to $n^-(R)$ and $n^+(R)$.
**Theorem 1.**
$$
\begin{align*}
n^{-}(R) &\sim \bigvee \{ D_{I}(o) \mid o \in R,\ S(D_{I}(o)) \subseteq R \} \\
n^{+}(R) &\sim \bigvee \{ D_{I}(o) \mid o \in R \}
\end{align*}
$$
The proof is given in [34]. It is clear that the above queries have finite length, hence they are the queries that we are looking for. For this purpose, hereafter we shall use $n^-(R)$ and $n^+(R)$ to denote the above queries. Note that if the set $\{o \in R \mid S(D_I(o)) \subseteq R\}$ is empty then we consider that $n^-(R) = \bot$. Some examples from the source shown in Figure 3 follow:
Fig. 3. Example of a source
$$
\begin{align*}
n^+(\{1,3\}) &= (\text{tomatoes} \land \text{red}) \lor (\text{apples} \land \text{green}) \\
n^-(\{1,3\}) &= (\text{tomatoes} \land \text{red}) \lor (\text{apples} \land \text{green}) \\
n^+(\{1,3,5\}) &= (\text{tomatoes} \land \text{red}) \lor (\text{apples} \land \text{green}) \lor (\text{apples} \land \text{red}) \\
n^-(\{1,3,5\}) &= (\text{tomatoes} \land \text{red}) \lor (\text{apples} \land \text{green})
\end{align*}
$$
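
Theorem 1 reduces naming to a single pass over the indices of the objects in $R$. A Python sketch follows; the object indices and their answer sets are hypothetical, chosen to mimic the Figure 3 examples (the answer of object 5's index also contains an object 6 outside $R$, so that disjunct is dropped from the lower name).

```python
# n^- and n^+ computed from object indices D_I(o), per Theorem 1.
D = {1: "tomatoes&red", 3: "apples&green", 5: "apples&red"}
S_of_index = {1: {1}, 3: {3}, 5: {5, 6}}  # hypothetical S(D_I(o)) per object o

def n_plus(R):
    """Upper name: disjunction of the indices of all objects in R."""
    return " | ".join(D[o] for o in sorted(R))

def n_minus(R):
    """Lower name: keep only indices whose answer stays inside R."""
    keep = [o for o in sorted(R) if S_of_index[o] <= R]
    return " | ".join(D[o] for o in keep) if keep else "BOTTOM"

print(n_plus({1, 3, 5}))   # tomatoes&red | apples&green | apples&red
print(n_minus({1, 3, 5}))  # tomatoes&red | apples&green
```

Both computations are linear in $|R|$ (given the precomputed answers of the indices), which reflects the efficiency claim made for taxonomy-based query evaluation.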
Let us now demonstrate the articulation protocol for taxonomy-based sources. Consider the sources shown in Figure 4 and suppose that $S_1$ wants to articulate its terms with queries of $S_2$. In the following examples we omit the set $F$ (from the message of line (7) of Figure 1) as it is always empty.
Fig. 4. An example of two sources S₁ and S₂
The steps for articulating the term **cabbages** follow:
$$
\begin{array}{l}
S_1 \rightarrow S_2 : \{1\} \\
S_2 \rightarrow S_1 : (\bot, \emptyset), (\mathbf{green}, \{1,5,6\}) \\
S_1 : \mathbf{cabbages} \preceq \mathbf{green}
\end{array}
$$
The steps for articulating the term **apples** follow:
$$
\begin{array}{l}
S_1 \rightarrow S_2 : \{4, 5\} \\
S_2 \rightarrow S_1 : (\bot, \emptyset), (\mathbf{red} \lor \mathbf{green}, \{1, 2, 3, 4, 5, 6\}) \\
S_1 : \mathbf{apples} \preceq \mathbf{red} \lor \mathbf{green}
\end{array}
$$
|
| 244 |
+

The steps for articulating the term **foods** follow:

---PAGE_BREAK---

$$
\begin{array}{l@{\quad}l}
S_1 \to S_2 & : \{1,2,3,4,5,6,7\} \\
S_2 \to S_1 & : (\text{red} \lor \text{green}, \{1,2,3,4,5,6\}), \\
& \phantom{:} (\text{red} \lor \text{green} \lor \text{yellow}, \{1,2,3,4,5,6,7,8\}) \\
S_1 & : \text{foods} \succeq \text{red} \lor \text{green}, \\
& \phantom{:} \text{foods} \sim \text{red} \lor \text{green} \lor \text{yellow}
\end{array}
$$


If $S_1$ runs the protocol for each term of its taxonomy, it will infer the following relationships:

cabbages $\preceq$ green
tomatoes $\preceq$ red
apples $\preceq$ red $\vee$ green
bananas $\preceq$ green $\vee$ yellow
vegetables $\preceq$ green $\vee$ red
fruits $\preceq$ red $\vee$ green $\vee$ yellow
foods $\succeq$ red $\vee$ green
foods $\sim$ red $\vee$ green $\vee$ yellow

If $S_2$ runs this protocol for each term of its taxonomy, it will infer the following relationships:

red $\succeq$ tomatoes
red $\preceq$ tomatoes $\vee$ apples
green $\succeq$ cabbages
green $\preceq$ cabbages $\vee$ apples $\vee$ bananas
yellow $\preceq$ bananas
color $\sim$ cabbages $\vee$ tomatoes $\vee$ apples $\vee$ bananas

The protocol can be used not only for articulating single terms to queries, but also for articulating queries to queries. For example, the steps for articulating the query **apples** $\lor$ **bananas** follow:

$$
\begin{array}{l}
S_1 \to S_2 : \{4, 5, 6, 7\} \\
S_2 \to S_1 : (\text{red} \lor \text{green} \lor \text{yellow}, \{1,2,3,4,5,6,7,8\}) \\
S_1 : \text{apples} \lor \text{bananas} \preceq \text{red} \lor \text{green} \lor \text{yellow}
\end{array}
$$

Now consider the case where we do not want to articulate terms with queries, but terms with *single terms* only, i.e. consider the case where $Q_N = T$. Note that now $lub\{t \in T | S(t) \subseteq R\}$ and $glb\{t \in T | S(t) \supseteq R\}$ do not always exist. For example, consider the source shown in Figure 5.(a). Note that $n^+(\{1\}) = glb\{t, t'\}$ which does not exist. For the source shown in Figure 5.(b) note that $n^-(\{1,2\}) = lub\{t,t'\}$ which does not exist. Therefore, we can define the upper and lower names of a set $R$ as follows: $n^-(R) = max(\{t \in T | S(t) \subseteq R\})$ and $n^+(R) = min(\{t \in T | S(t) \supseteq R\})$. Consider for example the source shown in Figure 5.(c). Here we have:

$$ n^{-}(\{1, 2, 3\}) = \max(\{c, d, e, b\}) = \{b\} $$

$$ n^{+}(\{1, 2, 3\}) = \min(\{b, a\}) = \{b\} $$

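The term-to-term names can be computed directly from the stored interpretation. The sketch below uses a small hypothetical source (its ordering and model-based extensions are assumptions, chosen so that the results agree with the values reported above for Figure 5.(c)) and selects the maximal and minimal candidate terms:

```python
# Term-to-term articulation: lower name n^-(R) = max{t | S(t) ⊆ R}
# and upper name n^+(R) = min{t | S(t) ⊇ R} of an object set R.
def lower_name(R, extent, leq):
    cand = [t for t, ext in extent.items() if ext <= R]
    # keep the maximal candidates: no other candidate is strictly greater
    return {t for t in cand if not any(leq(t, u) and t != u for u in cand)}

def upper_name(R, extent, leq):
    cand = [t for t, ext in extent.items() if ext >= R]
    # keep the minimal candidates: no other candidate is strictly smaller
    return {t for t in cand if not any(leq(u, t) and t != u for u in cand)}

# Hypothetical source: c, d, e ⪯ b ⪯ a, with assumed extensions below.
order = {('c','b'), ('d','b'), ('e','b'), ('c','a'), ('d','a'), ('e','a'), ('b','a')}
leq = lambda s, t: s == t or (s, t) in order
extent = {'a': {1, 2, 3, 4}, 'b': {1, 2, 3}, 'c': {1}, 'd': {2}, 'e': {3}}

print(lower_name({1, 2, 3}, extent, leq))  # {'b'}
print(upper_name({1, 2, 3}, extent, leq))  # {'b'}
```

Note that both functions return a set of terms (an antichain), which is exactly why $max$ and $min$ are used instead of the possibly non-existent $lub$ and $glb$.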
---PAGE_BREAK---

Fig. 5. An example of three sources

Certainly, the relationships obtained by the term-to-term articulation are less expressive than the relationships obtained by the term-to-queries articulation. For instance, suppose that we want to articulate the terms of the source $S_1$ in each one of the three examples that are shown in Figure 6. Table 1 shows the articulation $a_{1,2}$ that is derived by the *term-to-term* articulation and the *term-to-queries* articulation in each of these three examples.

Fig. 6. Three examples

<table><thead><tr><th>Example</th><th colspan="2">$a_{1,2}$</th></tr><tr><th></th><th>term-to-term art.</th><th>term-to-query art.</th></tr></thead><tbody><tr><td>Figure 6.(a)</td><td>$a \succeq b$<br>$a \succeq b'$</td><td>$a \sim b \lor b'$</td></tr><tr><td>Figure 6.(b)</td><td>$a \preceq b$<br>$a \preceq b'$</td><td>$a \sim b \land b'$<br>$a' \preceq b \lor b'$</td></tr><tr><td>Figure 6.(c)</td><td></td><td>$a \preceq b \lor b'$<br>$a' \preceq b \lor b'$</td></tr></tbody></table>

**Table 1.** Term-to-term vs term-to-query articulation

# 4 Ostensive Articulation in Taxonomy-based P2P Systems

We demonstrated how ostensive articulation can be applied on taxonomy-based sources for constructing inter-taxonomy articulations. Ostensive articulation is

---PAGE_BREAK---

possible in a P2P system only if the domains of the peers are not disjoint. We also assume that every object of *Obj* has the same identity (e.g. object identifier, URI) in all sources. For domains where no accepted identity/naming standards exist, mapping tables such as the ones proposed in [18] can be employed to tackle this problem. Techniques from the area of information fusion (which aim at recognizing different objects that represent the same reality) could also be employed for the same purpose. If, however, the domains of the peers are disjoint, then we cannot derive any articulation. One method to tackle this problem is to employ reference collections. For instance, each peer can have its own taxonomy, but before joining the network it must first index the objects of a small object set. Consequently, peers can automatically build the desired articulations by running the articulation protocol on this reference collection. Running the protocol on the reference collection *C* means that the sources $S_1$ and $S_2$ use $S_1(q_1) \cap C$ and $S_2(q_2) \cap C$ instead of $S_1(q_1)$ and $S_2(q_2)$, respectively. Also note that the employment of reference collections can (a) enhance the accuracy of the resulting articulation, and/or (b) enhance efficiency. For instance, if *C* corresponds to a well-known, thus well-indexed, set of objects, then it can improve the quality of the obtained articulations; in the case where $S_1$ and $S_2$ are bibliographic sources, *C* can be a set of 100 famous papers in computer science. A reference collection can also enhance the efficiency of the protocol, since a smaller number of objects go back and forth. This is very important, especially in P2P systems where the involved sources are distant.
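Restricting the protocol to a reference collection amounts to intersecting every answer with $C$ before it is exchanged. A minimal sketch, where the collection, the peer extensions, and the term names are all illustrative assumptions:

```python
# Sketch: running the articulation protocol over a shared reference collection C.
# Each source intersects its answer with C before sending it, so only objects of
# C ever travel over the network.
C = {1, 2, 3, 4, 5}  # small object set indexed by every peer on joining

def answer_restricted(answer_fn, query):
    """Evaluate a query at a source, keeping only objects of the reference collection."""
    return answer_fn(query) & C

# Hypothetical extensions of two peers (only the part relevant to the exchange):
S1 = {'apples': {4, 5, 90}, 'bananas': {6, 7}}
S2 = {'red': {1, 4, 98}, 'green': {2, 5}}

s1_apples = answer_restricted(lambda q: S1[q], 'apples')  # {4, 5}: object 90 is outside C
s2_red    = answer_restricted(lambda q: S2[q], 'red')     # {1, 4}: object 98 is outside C
print(s1_apples, s2_red)
```

Both accuracy gains (a well-indexed $C$) and efficiency gains (fewer objects exchanged) follow from this single intersection step.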

In a P2P system of taxonomy-based sources, a source now accepts, apart from object queries, content-based queries, i.e. queries (e.g. boolean expressions) expressed in terms of its taxonomy. To answer a query, a source may have to query its neighbor sources. The role of articulations during query evaluation has been described in [33] (for the mediator paradigm) and in [32] (for the P2P paradigm). Roughly, a source in a P2P network can serve any or all of the following roles: primary source, mediator, and query initiator. As a *primary* source it provides original content to the system and is the authoritative source of that data. Specifically, it consists of a taxonomy (i.e. a pair (*T*, $\le$)) plus an object base (i.e. an interpretation *I*) that describes a set of objects (*Obj*) in terms of the taxonomy. As a *mediator* it has a taxonomy but does not store or provide any content: its role is to provide a uniform query interface to other sources, i.e. it forwards the received queries after first selecting the sources to be queried and formulating the query to be sent to each one of them. These tasks are determined by the articulations of the mediator. As a *query initiator* it acts as a client in the system and poses new queries. Figure 7 sketches the architecture of a network consisting of four peers $S_1, \dots, S_4$: two primary sources ($S_3$ and $S_4$), one mediator ($S_2$), and one source that is both primary and mediator ($S_1$). Triangles denote taxonomies, cylinders object bases, and circles inter-taxonomy mappings. $S_2$ is a mediator over $S_1, S_3$ and $S_4$, while $S_1$ is a mediator over $S_2$ and $S_3$. For more about this architecture and the associated semantics and query evaluation methods, please refer to [32].

---PAGE_BREAK---

Fig. 7. A P2P network based on taxonomies and inter-taxonomy mappings

# 5 Conclusion

The contribution of this paper is a formal framework for ostensive data-driven articulation. Roughly, the approaches for linking two conceptual models or taxonomies can be broadly classified as either *model*-driven or *data*-driven.

The model-driven approach starts with a (theoretical) model of how the two taxonomies are constructed and how they are used. Subsequently, the mapping approaches have to address the compatibility issues and the structural and semantic heterogeneities that exist. This is done using software tools (usually relying on lexical resources) that assist the designer during the articulation process (e.g. see [25, 29, 5, 24]).

On the other hand, in the *data-driven* approach the mappings are *discovered* by examining how terms are used in indexing the objects. The advantage of such an approach is that it makes no assumptions on how the two taxonomies are constructed or how they are used. All it requires is the presence of two databases that contain several objects in common. However, the data-driven approach does have inherent difficulties. First, unless one has a large collection of objects that have been indexed using *both* taxonomies, spurious correlation can result in inappropriate linking. Second, if a term is not assigned to any of the common objects, one cannot establish a link for that term. Third, rarely occurring terms can result in statistically insignificant links. Finally, the validation of data-driven approaches can only be statistical in nature. In spite of these inherent difficulties, data-driven approaches can be formalized and automated. However, most of the data-driven approaches found in the literature are applicable only if the domain is a set of documents (texts) (e.g. [6, 16, 12, 20, 28]), and they cannot establish mappings between queries.

The technique described in this paper is quite general and expressive, as it can be used for articulating not only single terms but also queries. Furthermore, it can be used for articulating only the desired set of terms or queries (it is not obligatory to articulate the entire taxonomies). Another distinctive feature of this technique is that it can be implemented efficiently by a communication protocol, thus the involved sources do not have to reside on the same machine. It therefore seems appropriate for automatic articulation in P2P systems, which is probably the most challenging issue in P2P computing [9].

We also demonstrated how it can be applied to taxonomy-based sources. An interesting remark is that the proposed method can be applied not only to manually constructed taxonomies but also to taxonomies derived automatically on the basis of an inference service. For instance, it can be applied on sources

---PAGE_BREAK---

indexed using taxonomies of compound terms which are defined algebraically [31]. Furthermore it can be applied on concept lattices formed using Description Logics (DL) [13].

One issue for further research is to investigate how a source that wants to articulate a set $F \subseteq Q$ should use the described protocol in order to obtain the desired articulation with the minimal number of exchanged messages and the least network traffic. Another issue for further research is to investigate ostensive articulation for other kinds of sources.

## Acknowledgements

The first author wants to thank his wife Tonia for being an endless source of happiness and inspiration.

## References

1. "About LEGION - The Grid OS" (www.appliedmeta.com/legion/about.html), 2000.

2. "How Entropia Works" (www.entropia.com/how.asp), 2000.

3. "Groove" (www.groove.net), 2001.

4. "Napster" (www.napster.com), 2001.

5. Bernd Amann and Irini Fundulaki. "Integrating Ontologies and Thesauri to Build RDF Schemas". In *Proceedings of the Third European Conference on Digital Libraries, ECDL '99*, Paris, France, 1999.

6. S. Amba. "Automatic Linking of Thesauri". In *Proceedings of SIGIR '96*, Zurich, Switzerland, 1996. ACM Press.

7. T. E. Anderson, M. Dahlin, J. M. Neefe, D. A. Patterson, D. S. Roselli, and R. Wang. "Serverless Network File Systems". *SOSP*, 29(5), 1995.

8. Tim Berners-Lee, James Hendler, and Ora Lassila. "The Semantic Web". *Scientific American*, May 2001.

9. Philip A. Bernstein, F. Giunchiglia, A. Kementsietsidis, J. Mylopoulos, L. Serafini, and I. Zaihrayeu. "Data Management for Peer-to-Peer Computing: A Vision". In *Proceedings of WebDB'02*, Madison, Wisconsin, June 2002.

10. W. J. Bolosky, J. R. Douceur, D. Ely, and M. Theimer. "Feasibility of a Serverless Distributed File System Deployed on an Existing Set of Desktop PCs". In *Proceedings of Measurement and Modeling of Computer Systems*, June 2000.

11. Diego Calvanese, Giuseppe De Giacomo, and Maurizio Lenzerini. "A Framework for Ontology Integration". In *Proc. of the 2001 Int. Semantic Web Working Symposium (SWWS 2001)*, pages 303–316, 2001.

12. A. Doan, J. Madhavan, P. Domingos, and A. Halevy. "Learning to Map between Ontologies on the Semantic Web". In *Proceedings of the World-Wide Web Conference (WWW-2002)*, 2002.

13. F. M. Donini, M. Lenzerini, D. Nardi, and A. Schaerf. "Reasoning in Description Logics", chapter 1. CSLI Publications, 1997.

14. Steven Gribble, Alon Halevy, Zachary Ives, Maya Rodrig, and Dan Suciu. "What Can Databases Do for Peer-to-Peer?". In *Proceedings of WebDB'01*, Santa Barbara, CA, 2001.

---PAGE_BREAK---

15. Alon Halevy, Zachary Ives, Peter Mork, and Igor Tatarinov. "Piazza: Data Management Infrastructure for Semantic Web Applications". In *Proceedings of WWW'2003*, May 2003.

16. Heiko Hellweg, Jürgen Krause, Thomas Mandl, Jutta Marx, Matthias Müller, Peter Mutschke, and Robert Strötgen. "Treatment of Semantic Heterogeneity in Information Retrieval". Technical Report 23, Social Science Information Centre, May 2001. (http://www.gesis.org/en/publications/reports/iz working papers/).

17. Vipul Kashyap and Amit Sheth. "Semantic Heterogeneity in Global Information Systems: the Role of Metadata, Context and Ontologies". In *Cooperative Information Systems: Trends and Directions*. Academic Press, 1998.

18. A. Kementsietsidis, Marcelo Arenas, and Renée J. Miller. "Mapping Data in Peer-to-Peer Systems: Semantics and Algorithmic Issues". In *Int. Conf. on Management of Data, SIGMOD'2003*, San Diego, California, June 2003.

19. J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. "OceanStore: An Architecture for Global-Scale Persistent Storage". In *ASPLOS*, November 2000.

20. M. Lacher and G. Groh. "Facilitating the Exchange of Explicit Knowledge Through Ontology Mappings". In *Proceedings of the 14th Int. FLAIRS Conference*, 2001.

21. Maurizio Lenzerini. "Data Integration: A Theoretical Perspective". In *Proc. ACM PODS 2002*, pages 233–246, Madison, Wisconsin, USA, June 2002.

22. Alon Y. Levy. "Answering Queries Using Views: A Survey". *VLDB Journal*, 2001.

23. Bo Ling, Zhiguo Lu, Wee Siong Ng, Beng Chin Ooi, Kian-Lee Tan, and Aoying Zhou. "A Content-Based Resource Location Mechanism in PeerIS". In *Proceedings of the 3rd International Conference on Web Information Systems Engineering, WISE 2002*, Singapore, December 2002.

24. Bernardo Magnini, Luciano Serafini, and Manuela Speranza. "Making Explicit the Hidden Semantics of Hierarchical Classification". In *Atti dell'Ottavo Congresso Nazionale dell'Associazione Italiana per l'Intelligenza Artificiale*, LNCS, Springer Verlag, 2003.

25. P. Mitra, G. Wiederhold, and J. Jannink. "Semi-automatic Integration of Knowledge Sources". In *Proc. of the 2nd Int. Conf. on Information FUSION*, 1999.

26. Ruben Prieto-Diaz. "Implementing Faceted Classification for Software Reuse". *Communications of the ACM*, 34(5), 1991.

27. S. R. Ranganathan. "The Colon Classification". In Susan Artandi, editor, *Vol. IV of the Rutgers Series on Systems for the Intellectual Organization of Information*. New Brunswick, NJ: Graduate School of Library Science, Rutgers University, 1965.

28. I. Ryutaro, T. Hideaki, and H. Shinichi. "Rule Induction for Concept Hierarchy Alignment". In *Proceedings of the 2nd Workshop on Ontology Learning at the 17th Int. Conf. on AI (IJCAI)*, 2001.

29. Marios Sintichakis and Panos Constantopoulos. "A Method for Monolingual Thesauri Merging". In *Proceedings of the 20th International Conference on Research and Development in Information Retrieval, ACM SIGIR'97*, Philadelphia, PA, USA, July 1997.

30. Nicolas Spyratos, Yannis Tzitzikas, and Vassilis Christophides. "On Personalizing the Catalogs of Web Portals". In *15th International FLAIRS Conference, FLAIRS'02*, Pensacola, Florida, May 2002.

31. Yannis Tzitzikas, Anastasia Analyti, Nicolas Spyratos, and Panos Constantopoulos. "An Algebraic Approach for Specifying Compound Terms in Faceted Taxonomies". In *13th European-Japanese Conference on Information Modelling and Knowledge Bases*, Kitakyushu, Japan, June 2003.

---PAGE_BREAK---

32. Yannis Tzitzikas, Carlo Meghini, and Nicolas Spyratos. "Taxonomy-based Conceptual Modeling for Peer-to-Peer Networks". In *Proceedings of the 22nd Int. Conf. on Conceptual Modeling, ER'2003*, Chicago, Illinois, October 2003.

33. Yannis Tzitzikas, Nicolas Spyratos, and Panos Constantopoulos. "Mediators over Ontology-based Information Sources". In *Second International Conference on Web Information Systems Engineering, WISE 2001*, Kyoto, Japan, December 2001.

34. Yannis T. Tzitzikas. "*Collaborative Ontology-based Information Indexing and Retrieval*". PhD thesis, Department of Computer Science, University of Crete, September 2002.

samples_new/texts_merged/1836869.md
ADDED
@@ -0,0 +1,606 @@

---PAGE_BREAK---

# Exact and Efficient Inference for Collective Flow Diffusion Model via Minimum Convex Cost Flow Algorithm

Yasunori Akagi,¹ Takuya Nishimura,¹ Yusuke Tanaka,¹ Takeshi Kurashima,¹ Hiroyuki Toda¹

¹NTT Service Evolution Laboratories, NTT Corporation,
1-1 Hikari-no-oka, Yokosuka-Shi, Kanagawa, 239-0847, Japan
{yasunori.akagi.cu, takuya.nishimura.fk, yusuke.tanaka.rh, takeshi.kurashima.uf, hiroyuki.toda.xb}@hco.ntt.co.jp

## Abstract

Collective Flow Diffusion Model (CFDM) is a general framework to find the hidden movements underlying aggregated population data. The key procedure in CFDM analysis is MAP inference of hidden variables. Unfortunately, existing approaches fail to offer exact MAP inferences, only approximate versions, and take a lot of computation time when applied to large scale problems. In this paper, we propose an exact and efficient method for MAP inference in CFDM. Our key idea is formulating the MAP inference problem as a combinatorial optimization problem called Minimum Convex Cost Flow Problem (C-MCFP) with no approximation or continuous relaxation. On the basis of this formulation, we propose an efficient inference method that employs the C-MCFP algorithm as a subroutine. Our experiments on synthetic and real datasets show that the proposed method is effective both in single MAP inference and people flow estimation with EM algorithm.

## 1. Introduction

With recent advances in GPS, Wi-Fi, and various sensors, the importance of location information has grown and is being utilized in various fields. However, it is often difficult to obtain data about individual movements because of privacy concerns or the difficulty of tracking individuals over time. Instead, aggregated count data is relatively easy to obtain as it does not include individual movement information. For example, mobile spatial statistics (Terada, Nagata, and Kobayashi 2013), which is the hourly population data of fixed size square grids calculated from mobile network operational data, are available for purchase in Japan. As another example, traffic data is often obtained not in the form of tracking data of individual cars, but in the form of count data acquired by cameras or sensors installed on road networks (Yang and Zhou 1998; Morimura, Osogami, and Idé 2013).

Although there are various uses for these aggregated count data, their applicability is limited because they do not contain explicit information about people movements. In order to utilize such data, Collective Graphical Model (CGM) (Sheldon and Dietterich 2011), which enables us to conduct learning and inference with aggregated count data, was proposed. In particular, Collective Flow Diffusion Model (CFDM) (Kumar, Sheldon, and Srivastava 2013), which is a special case of CGM, has been proposed to infer people flows between areas by modeling individual movements via a Markov chain approach; it has been applied to the analysis of the hidden movements behind observed count data in a traffic network (Kumar, Sheldon, and Srivastava 2013), urban space (Iwata et al. 2017; Akagi et al. 2018; Iwata and Shimizu 2019), an amusement park (Du, Kumar, and Varakantham 2014) and exhibition halls (Tanaka et al. 2018).

An important function in CFDM analysis is MAP (maximum a posteriori) inference of the number of moving people from observed population data and parameters of the probabilistic model. This process is mainly used in two ways: (i) As a method for recovering people flow given observed population data and a human mobility model. Even if we can design a probabilistic model of human mobility using domain knowledge or estimate the model using another small set of movement (trajectory) data, we have to conduct MAP inference in order to know the number of people moving between areas. (ii) As a method for conducting E-step in the EM (Expectation Maximization) algorithm to estimate people flow and parameters of the probabilistic model simultaneously. Although E-step was implemented by the well-designed MCMC (Sheldon and Dietterich 2011) in the original CFDM proposal, its scalability was problematic. In order to address this issue, a method that uses MAP inference as an alternative to the regular expectation operation was widely used in subsequent research (Iwata et al. 2017; Akagi et al. 2018; Tanaka et al. 2018).

Although methods for realizing MAP inference for CFDM are very important, previous proposals have several crucial drawbacks. (i) They do not provide exact MAP inference because they use continuous relaxation and Stirling's approximation. (ii) Each optimum solution element is non-integer because of continuous relaxation. As a result, the optimum solutions are dense with many non-zero elements and each solution occupies a lot of memory. (iii) When we deal with large scale problems, a lot of computation time is still

Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

---PAGE_BREAK---

needed to solve the optimization problem.

In this paper, we propose a novel method for MAP inference in CFDM that overcomes the shortcomings of previous approaches. Our key idea is formulating the MAP inference problem in CFDM as a combinatorial optimization problem called (non-linear) Minimum Cost Flow Problem (MCFP). Moreover, we prove that all cost functions of the MCFP are "discrete convex functions", discrete analogues of the continuous convex function. This fact indicates that the formulated MCFP is a Minimum Convex Cost Flow Problem (C-MCFP) variant, which is an efficiently solvable subclass of MCFP. On the basis of this formulation, we propose an efficient inference method that employs the C-MCFP algorithm as a subroutine. The proposal has the following advantages:

1. It offers exact MAP inference, as no approximation is used.

2. Optimum solution elements are integers, which is consistent with the number of moving people. Moreover, the solution tends to be sparse and we can hold it with less memory by use of a sparse matrix data structure.

3. By utilizing efficient algorithms for C-MCFP, fast estimation is possible. In addition, it is easy to use in practice because it is not necessary to set hyperparameters, and the calculation time is relatively insensitive to the probabilistic models and the optimum solutions.

Our results are significant in that they bridge two distinct research topics, graph algorithms and CFDM inference. This work is the first to regard CFDM inference as a discrete optimization problem on a graph (all efficient existing methods transform the inference problem into a continuous optimization problem via approximation). Our non-trivial finding of the discrete convexity of the cost function is an important key in revealing the hidden relationship between graph algorithms and inference in collective flow diffusion.

Experiments on synthetic and real datasets show that the proposed method is effective for MAP inference in terms of both running time and solution quality such as sparsity. Of particular note, running time is accelerated 10 times or more and sparsity of optimum solution is dramatically increased in most cases. Moreover, we use the proposal to conduct people flow estimation via the EM algorithm and confirm its effectiveness.

## 2. Problem Setting

For positive integer $k$, we denote $[k] := \{1, \dots, k\}$. Suppose that the target space is discretized into $n$ distinct areas. The people who were in area $i \in [n]$ at timestep $t$ will stay in $i$ or move to another area to be observed in area $j \in \Gamma_i$ at timestep $t+1$, where $\Gamma_i \subseteq [n]$ is the set of areas that can be moved to from area $i$. This process will be repeated for each $t \in [T-1]$, where $T$ is the total number of timesteps.

The problem we address in this paper is formulated as follows. Suppose we are given the population of area $i$ at timestep $t$, $N_{t,i}$ ($i \in [n], t \in [T]$). Our goal is to estimate the number of people who leave area $i$ at time $t$ and whose next area is $j$ at time $t+1$, $M_{tij}$ ($i \in [n], j \in \Gamma_i, t \in [T-1]$). Figure 1 shows an example of this problem setting.

Figure 1: An example of the problem setting where the number of areas $n = 3$ and the number of total timesteps $T = 3$.

## 3. Background

### 3.1 Collective Flow Diffusion Model (CFDM)

Let $\theta_i = \{\theta_{ij}\}_{j \in \Gamma_i} (\sum_{j \in \Gamma_i} \theta_{ij} = 1)$ be the transition probability from area $i$ to other areas (including $i$ itself). We here assume $\theta_i$ does not depend on timestep $t$. Given population $N_{t,i}$ and transition probability $\theta_i$, the transition population $M_{ti} = \{M_{tij}\}_{j \in \Gamma_i} (t \in [T-1], i \in [n])$ is assumed to be decided by the following multinomial distribution: $M_{ti} \sim \text{Multi}(N_{t,i}, \theta_i)$. Given $\mathcal{N} = \{N_{t,i} | t \in [T], i \in [n]\}$ and $\mathcal{M} = \{M_{ti} | t \in [T-1], i \in [n]\}$, the likelihood function of $\theta = \{\theta_i | i \in [n]\}$ is given by
$$ P(\mathcal{M} | \mathcal{N}, \theta) \propto \prod_{t=1}^{T-1} \prod_{i \in [n]} \left( \frac{N_{t,i}!}{\prod_{j \in \Gamma_i} M_{tij}!} \prod_{j \in \Gamma_i} \theta_{ij}^{M_{tij}} \right). \quad (1) $$
In addition, the population in each area, $N_{t,i}$, and the transition population between areas, $M_{ti}$, satisfy the following two relationships: $N_{t,i} = \sum_{j \in \Gamma_i} M_{tij}$ and $N_{t+1,i} = \sum_{j \in \Gamma_i} M_{tji}$ ($t \in [T-1], i \in [n]$), which represent conservation of the number of people.
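The generative model and the conservation relations can be illustrated with a toy simulation. The instance below (three areas, hand-picked $\theta$, $\Gamma_i = [n]$) is purely illustrative and not from the paper's experiments; the multinomial draw is implemented by sampling each person's destination:

```python
import random
from collections import Counter

random.seed(0)

# Illustrative toy instance: n = 3 areas, Gamma_i = all areas.
n = 3
N_t = [100, 50, 25]                       # populations at timestep t
theta = [[0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3],
         [0.1, 0.1, 0.8]]                 # each row sums to 1

# Draw M_{ti} ~ Multi(N_{t,i}, theta_i) by sampling each person's destination.
M = [[0] * n for _ in range(n)]
for i in range(n):
    dests = random.choices(range(n), weights=theta[i], k=N_t[i])
    for j, c in Counter(dests).items():
        M[i][j] = c

# Conservation: outflows of area i sum to N_{t,i}; the next population
# N_{t+1,j} is the column sum, so the total population is preserved.
assert all(sum(M[i]) == N_t[i] for i in range(n))
N_t1 = [sum(M[i][j] for i in range(n)) for j in range(n)]
assert sum(N_t1) == sum(N_t)
```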
Our purpose is to estimate the true number of people moving between areas. We consider two problems: (i) estimation of $\mathcal{M}$ given $\mathcal{N}$ and $\theta$, and (ii) estimation of $\mathcal{M}$ and $\theta$ given only $\mathcal{N}$. The first problem includes, for example, the case where it is possible to design a human movement model (i.e., $\theta$) in the target space based on domain knowledge, geographical information, or other data related to people movement, such as a small amount of trajectory data. The second problem corresponds to the case in which there is no clue as to $\theta$ and it is necessary to estimate everything from $\mathcal{N}$.
In any case, an important subroutine in achieving our purpose is solving the following MAP inference problem:
$$
\begin{align}
\max_{\mathcal{M}} \quad & P(\mathcal{M} | \mathcal{N}, \theta) \nonumber \\
\text{s.t.} \quad & N_{t,i} = \sum_{j \in \Gamma_i} M_{tij} \quad (t \in [T-1], i \in [n]), \tag{2} \\
& N_{t+1,i} = \sum_{j \in \Gamma_i} M_{tji} \quad (t \in [T-1], i \in [n]), \nonumber \\
& M_{tij} \in \mathbb{Z}_{\ge 0} \quad (t \in [T-1], i \in [n], j \in \Gamma_i). \nonumber
\end{align}
$$
In the first problem, the optimum solution of (2) is the desired answer. A common approach to solving the second problem is to alternately estimate $\mathcal{M}$ and $\theta$ by the EM algorithm, considering $\mathcal{M}$ as a hidden variable and $\theta$ as the parameter of a probabilistic model. Since high computational cost is incurred in calculating the expected value of the hidden variable $\mathcal{M}$ by MCMC, a method that replaces the expected value with the solution of the MAP inference problem has already been proposed (Sheldon et al. 2013) and is widely used to conduct the E-step. This approach solves the optimization problem (2) iteratively.
## 3.2 Minimum Cost Flow Problems
The (non-linear) Minimum Cost Flow Problem (MCFP) is a combinatorial optimization problem defined as follows. Let $G = (V, E)$ be a directed graph, where each node $i \in V$ has supply value $b_i \in \mathbb{Z}$, and each edge $(i, j) \in E$ has capacity $l_{ij} \in \mathbb{Z}_{\ge 0}$ and cost function $c_{ij}: \mathbb{Z}_{\ge 0} \rightarrow \mathbb{R}$. If $b_i > 0$, node $i$ is called a source, and if $b_i < 0$, a sink. MCFP is the problem of finding a minimum cost flow on $G$ that satisfies the supply constraints at all nodes and the capacity constraints at all edges. MCFP can be described as follows:
$$
\begin{align}
\min_{x \in \mathbb{Z}^{|E|}} \quad & \sum_{(i,j) \in E} c_{ij}(x_{ij}) \notag \\
\text{s.t.} \quad & \sum_{j:(i,j) \in E} x_{ij} - \sum_{j:(j,i) \in E} x_{ji} = b_i \quad (i \in V), \tag{3} \\
& 0 \le x_{ij} \le l_{ij} \quad ((i,j) \in E). \notag
\end{align}
$$
Note that this paper considers the problems that restrict feasible $\boldsymbol{x}$ to integer values, i.e., $\boldsymbol{x} \in \mathbb{Z}^{|E|}$. Generally, MCFP (3) is NP-hard and difficult to solve efficiently. However, special cost functions make it possible to derive efficient optimization algorithms. For example, MCFP with linear cost functions, which is the most famous special case of MCFP, is polynomial-time solvable and many efficient algorithms have been developed (Kiraly and Kovacs 2012). Moreover, the Minimum Convex Cost Flow Problem (C-MCFP), in which every cost function $c_{ij}$ satisfies "discrete convexity" $c_{ij}(x + 1) + c_{ij}(x - 1) \ge 2 \cdot c_{ij}(x)$ ($x = 1, 2, \dots$), is known to be an efficiently solvable subclass of MCFP (Ahuja, Magnanti, and Orlin 1993).
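As a quick numerical check of this discrete-convexity condition, the sketch below evaluates the second difference of the cost $f_{ij}(x) = \log x! - x \log \theta_{ij}$ that appears later in Section 4.1 (the $\theta$ value is an arbitrary placeholder):

```python
import math

# Edge cost from Section 4.1: f(x) = log x! - x * log(theta_ij).
# math.lgamma(x + 1) equals log(x!) for non-negative integers x.
def f(x: int, theta: float) -> float:
    return math.lgamma(x + 1) - x * math.log(theta)

theta = 0.3  # illustrative transition probability
for x in range(1, 8):
    gap = f(x + 1, theta) + f(x - 1, theta) - 2 * f(x, theta)
    # The theta terms cancel, leaving log(x+1) - log(x) >= 0.
    assert abs(gap - math.log((x + 1) / x)) < 1e-9
    assert gap > 0
```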
## 4. Proposed Method

## 4.1 Formulation as C-MCFP
Figure 2: An example of the MCFP formulation when the number of areas $n = 3$. $o$ is the source and $d$ is the sink of the flow network. The capacity of edge $(o, u_i)$ equals $N_{t,i}$ and the capacity of edge $(v_i, d)$ equals $N_{t+1,i}$.

We show that the optimization problem (2) can be formulated as C-MCFP. After taking the logarithm of the objective function (1) and omitting terms that do not depend on $\mathcal{M}$, the objective function equals $\sum_{t \in [T-1]} \sum_{i \in [n]} \sum_{j \in \Gamma_i} (-\log M_{tij}! + M_{tij} \log \theta_{ij})$. Since we can split (2) into $T-1$ subproblems that can be solved independently for each $t$, all we have to do is solve the following minimization problem for each $t \in [T-1]$:
$$
\begin{equation}
\begin{array}{ll}
\min_{M_t} & \displaystyle \sum_{i \in [n]} \sum_{j \in \Gamma_i} (\log M_{tij}! - M_{tij} \log \theta_{ij}) \\
\text{s.t.} & N_{t,i} = \sum_{j \in \Gamma_i} M_{tij} \quad (i \in [n]), \\
& N_{t+1,i} = \sum_{j \in \Gamma_i} M_{tji} \quad (i \in [n]), \\
& M_{tij} \in \mathbb{Z}_{\ge 0} \quad (i \in [n], j \in \Gamma_i).
\end{array}
\tag{4}
\end{equation}
$$
In order to formulate problem (4) as MCFP, we construct an instance by the procedure described below (an example is shown in Figure 2):
1. Let $V = \{u_i\}_{i \in [n]} \cup \{v_i\}_{i \in [n]} \cup \{o, d\}$. $u_i$ and $v_i$ correspond to area $i$ at timestep $t$ and timestep $t+1$, respectively. $o$ is the source node and $d$ is the sink node of the flow network.
2. For $i \in [n]$, add edge $(o, u_i)$ with cost function $0$ (constant function) and capacity $N_{t,i}$.
3. For $i \in [n]$, add edge $(v_i, d)$ with cost function $0$ and capacity $N_{t+1,i}$.
4. For $i \in [n]$ and $j \in \Gamma_i$, add edge $(u_i, v_j)$ with cost function $f_{ij}(x) := \log x! - x \cdot \log \theta_{ij}$ and capacity $+\infty$.
5. Set $b_o = \sum_{i \in [n]} N_{t,i}$, $b_d = -b_o = -\sum_{i \in [n]} N_{t,i}$ and $b_{u_i} = b_{v_i} = 0$ ($i \in [n]$).
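The five construction steps above can be sketched as follows. This is a sketch under the assumption that $\log x!$ is evaluated via `lgamma`; all names (node labels, function names) are illustrative and not from the authors' code:

```python
import math

def build_instance(N_t, N_t1, theta, Gamma):
    """Sketch of construction steps 1-5 for one timestep t.

    N_t, N_t1 : populations at timesteps t and t+1
    theta     : theta[i][j] = transition probability from area i to area j
    Gamma     : Gamma[i] = list of areas reachable from area i
    Returns (nodes, supply, edges); each edge is (tail, head, capacity, cost_fn).
    """
    n = len(N_t)
    INF = float("inf")
    # Step 1: nodes u_i (area i at t), v_i (area i at t+1), source o, sink d.
    nodes = [f"u{i}" for i in range(n)] + [f"v{i}" for i in range(n)] + ["o", "d"]
    # Step 5: b_o = total population, b_d = -b_o, all other supplies 0.
    supply = {v: 0 for v in nodes}
    supply["o"] = sum(N_t)
    supply["d"] = -sum(N_t)
    edges = []
    for i in range(n):
        edges.append(("o", f"u{i}", N_t[i], lambda x: 0.0))    # step 2: zero cost
        edges.append((f"v{i}", "d", N_t1[i], lambda x: 0.0))   # step 3: zero cost
        for j in Gamma[i]:                                     # step 4: convex cost
            p = theta[i][j]
            edges.append((f"u{i}", f"v{j}", INF,
                          lambda x, p=p: math.lgamma(x + 1) - x * math.log(p)))
    return nodes, supply, edges

nodes, supply, edges = build_instance([2, 1], [1, 2],
                                      [[0.5, 0.5], [0.5, 0.5]],
                                      [[0, 1], [0, 1]])
assert sum(supply.values()) == 0  # total supply balances
```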
For the MCFP instance constructed above, the following holds.
**Proposition 1.** For $M_t^*$ defined by $M_{tij}^* = x_{u_i v_j}^*$ ($i \in [n], j \in \Gamma_i$), where $\boldsymbol{x}^*$ is the optimum solution of the MCFP instance constructed above, $M_t^*$ is an optimum solution of the optimization problem (4).
**Proof of Proposition 1.** Let $\boldsymbol{x}$ be a feasible solution of the constructed MCFP. From the non-negativity of $x_{ij}$ and the flow conservation constraints at nodes $o$ and $d$, $x_{o u_i} = N_{t,i}$ and $x_{v_i d} = N_{t+1,i}$ ($\forall i \in [n]$) must be satisfied. From these facts and the flow conservation constraints at nodes $u_i$ and $v_i$, $N_{t,i} = \sum_{j \in \Gamma_i} x_{u_i v_j}$ and $N_{t+1,i} = \sum_{j \in \Gamma_i} x_{u_j v_i}$ ($\forall i \in [n]$) hold. Since we restrict $\boldsymbol{x}$ to integer values and the total MCFP cost is $\sum_{i \in [n]} \sum_{j \in \Gamma_i} (\log x_{u_i v_j}! - x_{u_i v_j} \log \theta_{ij})$, the constructed MCFP is equivalent to the optimization problem (4), so the proposition holds. $\square$
**Proposition 2.** For the MCFP instance constructed above, all cost functions satisfy discrete convexity, i.e. $c_{ij}(x+1)+c_{ij}(x-1) \ge 2 \cdot c_{ij}(x)$ ($x=1,2,\dots$).
*Proof of Proposition 2.* It is clear that a constant function satisfies discrete convexity, so it is sufficient to check for $f_{ij}$. We have $f_{ij}(x+1) + f_{ij}(x-1) - 2 \cdot f_{ij}(x) = \log(x+1)! + \log(x-1)! - 2 \cdot \log x! = \log(x+1) - \log x \ge 0$. This confirms the discrete convexity of $f_{ij}$. $\square$
Proposition 1 says that by solving the MCFP we can get an optimum solution of problem (4). Proposition 2 shows that the constructed MCFP instance belongs to C-MCFP. Since C-MCFP is an efficiently solvable subclass of MCFP, as described in Section 3.2, we can design efficient algorithms to tackle the original MAP inference problem (4).
Note that problem (4) may not have any feasible solution if $\sum_{i \in [n]} N_{t,i} \neq \sum_{i \in [n]} N_{t+1,i}$ holds or $|\Gamma_i|$ ($i \in [n]$) is small. Such cases occur frequently when dealing with noisy real data. Even in such cases, our method with slight modification can output reasonable solutions. We describe this modification in Section 4.3.
## 4.2 Algorithm
We describe here an algorithm that can find exact optimum solutions of C-MCFP, called the Capacity Scaling algorithm (CS) (Minoux 1986). CS successively augments flow along the shortest path from source to sink in a residual graph, which is an auxiliary graph calculated from the current flow. By maintaining a scalar value, called a potential, on each node and modifying edge costs to ensure that they are non-negative, we can utilize Dijkstra's algorithm (Dijkstra 1959), which is a fast algorithm for shortest path search in graphs with non-negative edge costs. In order to reduce the number of shortest path searches, CS is designed to push a sufficiently large amount of flow in each path augmentation. The algorithm utilized in our work is the one described in Chapter 14.5 of (Ahuja, Magnanti, and Orlin 1993). Although this algorithm is based on the idea of (Minoux 1986), some changes have been made, so its computational complexity differs from that of (Minoux 1986).
Given a C-MCFP instance with graph $G = (V, E)$, Theorem 14.1 of (Ahuja, Magnanti, and Orlin 1993) claims that CS runs in $O(|E| \cdot \log U \cdot S)$, where $U := \max_{i \in V} |b_i|$ is the maximum absolute value of flow demand and $S$ is the time complexity of solving a shortest path problem in graph $G$
**Algorithm 1** Algorithm for solving MAP inference problem (2) via capacity scaling algorithm

**Require:** Population of each area and timestep $\mathcal{N}$, transition matrix $\theta$<br/>
**for all** $t \in [T-1]$ **do**<br/>
&nbsp;&nbsp;Construct C-MCFP instance based on $N_t$, $N_{t+1}$, $\theta$ by the procedure described in Section 4.1<br/>
&nbsp;&nbsp;Get optimum solution $\boldsymbol{x}^*$ of constructed C-MCFP by capacity scaling algorithm<br/>
&nbsp;&nbsp;**for all** $i \in [n]$ **do**<br/>
&nbsp;&nbsp;&nbsp;&nbsp;**for all** $j \in \Gamma_i$ **do**<br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$M_{tij}^* \leftarrow x_{u_i v_j}^*$<br/>
&nbsp;&nbsp;&nbsp;&nbsp;**end for**<br/>
&nbsp;&nbsp;**end for**<br/>
**end for**<br/>
**return** $\mathcal{M}^*$
with non-negative edge costs. With Dijkstra's algorithm using a binary heap, $S$ is bounded by $O(|E| \cdot \log |V|)$, so the total time complexity is $O(|E|^2 \cdot \log |V| \cdot \log U)$. When this algorithm is used to solve problem (4), its time complexity is $O(m^2 \cdot \log n \cdot \log F)$, where $n$ is the number of areas, $m$ is the number of edges of the adjacency graph between the areas determined by $\Gamma_i$ ($i \in [n]$), and $F := \sum_{i \in [n]} N_{t,i}$ is the total population of the targeted areas. Note that the total complexity does not depend on the maximum value of edge capacity, and the algorithm is guaranteed to run efficiently even if the graph contains an edge with infinite capacity.
CS is a suitable algorithm for solving our problem in the following sense: when dealing with real-world datasets, sometimes $F$ is extremely large (for example, in mobile spatial statistics in the Greater Tokyo Area, which consists of population distribution data by time and area, $F$ is about 10⁶–10⁷). Therefore, the algorithm used to solve the formulated C-MCFP should have sub-linear time complexity with respect to $F$. Accordingly, CS is appropriate since its time complexity is proportional to $\log F$.
The overall algorithm for solving the original MAP inference problem (2) is summarized in Algorithm 1.
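The capacity scaling algorithm itself is more involved; as a minimal illustration of why the C-MCFP structure helps, the sketch below solves a tiny instance of the constructed network by successive shortest paths with unit augmentations (CS adds the scaling on top of this idea). It relies on the standard fact that, for discretely convex edge costs, augmenting along marginal-cost shortest paths in the residual graph is optimal. Everything here (names, the 2-area instance) is illustrative, not the authors' implementation:

```python
import math

def min_convex_cost_flow(nodes, edges, source, sink, total):
    """Unit-step successive shortest paths for discretely convex edge costs.
    edges: list of (tail, head, capacity, cost_fn). Returns per-edge flows."""
    flow = [0] * len(edges)
    for _ in range(total):
        # Residual arcs carry marginal costs c(x+1)-c(x) / c(x-1)-c(x).
        adj = {v: [] for v in nodes}
        for k, (u, v, cap, c) in enumerate(edges):
            if flow[k] < cap:
                adj[u].append((v, k, +1, c(flow[k] + 1) - c(flow[k])))
            if flow[k] > 0:
                adj[v].append((u, k, -1, c(flow[k] - 1) - c(flow[k])))
        # Bellman-Ford, since reverse arcs can have negative marginal cost.
        dist = {v: math.inf for v in nodes}
        prev = {}
        dist[source] = 0.0
        for _ in range(len(nodes) - 1):
            for u in nodes:
                if dist[u] < math.inf:
                    for v, k, sgn, w in adj[u]:
                        if dist[u] + w < dist[v] - 1e-12:
                            dist[v] = dist[u] + w
                            prev[v] = (u, k, sgn)
        v = sink                     # augment one unit along the shortest path
        while v != source:
            u, k, sgn = prev[v]
            flow[k] += sgn
            v = u
    return flow

# Tiny 2-area instance: N_t = (2, 1), N_{t+1} = (1, 2), theta_ij = 0.5.
f = lambda x: math.lgamma(x + 1) - x * math.log(0.5)
nodes = ["o", "u0", "u1", "v0", "v1", "d"]
edges = [("o", "u0", 2, lambda x: 0.0), ("o", "u1", 1, lambda x: 0.0),
         ("v0", "d", 1, lambda x: 0.0), ("v1", "d", 2, lambda x: 0.0),
         ("u0", "v0", math.inf, f), ("u0", "v1", math.inf, f),
         ("u1", "v0", math.inf, f), ("u1", "v1", math.inf, f)]
flow = min_convex_cost_flow(nodes, edges, "o", "d", 3)
# Conservation constraints of problem (4) hold for the recovered M_tij.
assert flow[4] + flow[5] == 2 and flow[6] + flow[7] == 1
assert flow[4] + flow[6] == 1 and flow[5] + flow[7] == 2
```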
## 4.3 Handling Infeasible Cases
As mentioned in Section 4.1, when dealing with real-world data, there may be no feasible solution to problem (4). To address this problem and output a reasonable solution, we add a few more steps to the instance construction procedure described in Section 4.1.
First, we add edge $(o, d)$ with linear cost function $Cx$, where $C$ is a sufficiently large constant, and capacity $+\infty$. Next, we set $b_o = S$, $b_d = -S$, and $b_{u_i} = b_{v_i} = 0$ ($i \in [n]$), where $S := \max(\sum_{i \in [n]} N_{t,i}, \sum_{i \in [n]} N_{t+1,i})$. This newly formulated MCFP always has a feasible solution and still belongs to C-MCFP, so we can solve it by CS.
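The modification amounts to one extra edge and a change of supplies. A sketch under the same illustrative conventions as before ($C$, the function name, and the edge representation are placeholders):

```python
def make_feasible(nodes, supply, edges, N_t, N_t1, C=1e6):
    """Add edge (o, d) with linear cost C*x and rebalance supplies so the
    instance is always feasible (Section 4.3). C is a large constant."""
    S = max(sum(N_t), sum(N_t1))
    supply = dict(supply)
    supply["o"], supply["d"] = S, -S          # b_o = S, b_d = -S
    edges = edges + [("o", "d", float("inf"), lambda x: C * x)]
    return nodes, supply, edges

# Unbalanced toy data: total population differs between the two timesteps.
nodes, supply, edges = make_feasible(["o", "d"], {"o": 0, "d": 0}, [],
                                     N_t=[2, 1], N_t1=[1, 1])
assert supply["o"] == 3 and supply["d"] == -3  # S = max(3, 2)
```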
In this case, $M_t^*$ calculated from the optimum solution of the MCFP does not necessarily satisfy the population conservation laws $N_{t,i} = \sum_{j \in \Gamma_i} M_{tij}^*$ and $N_{t+1,i} = \sum_{j \in \Gamma_i} M_{tji}^*$ ($i \in [n]$), which are the constraints of the original problem (4). We can interpret these discrepancies as follows: $N_{t,i} - \sum_{j \in \Gamma_i} M_{tij}^*$ is the outflow from area $i$ to somewhere outside the targeted areas, and $N_{t+1,i} - \sum_{j \in \Gamma_i} M_{tji}^*$ is the inflow from somewhere outside the targeted areas to area $i$, between timesteps $t$ and $t+1$.
## 5. Experimental Results
Here, we use numerical experiments to demonstrate the practical utility of the proposed method. All experiments are conducted on a 64-bit CentOS 7.3 machine with two Xeon(R) Gold 6126 CPUs (2.60 GHz) and 512 GB of memory. The capacity scaling algorithm is implemented in C++ (g++ 4.8.5 with the -O3 option); the other code is written in Python 2.7.12 with SciPy (Jones et al. 2001).
## 5.1 Compared Methods
We compare the proposed method with the method commonly used in CFDM inference (Iwata et al. 2017; Akagi et al. 2018; Tanaka et al. 2018). In this method, we solve an optimization problem with the objective function $f(M_t) + \frac{\lambda}{2} \cdot g(M_t)$ under the constraints $M_{tij} \in \mathbb{R}_{\ge 0}$, where
$$
f(\mathbf{M}_t) = \sum_{i \in [n], j \in \Gamma_i} (M_{tij} \log M_{tij} - M_{tij}(1 + \log \theta_{ij})),
$$

$$
g(\mathbf{M}_t) = \sum_{i \in [n]} \left[ \left(N_{t,i} - \sum_{j \in \Gamma_i} M_{tij}\right)^2 + \left(N_{t+1,i} - \sum_{j \in \Gamma_i} M_{tji}\right)^2 \right],
$$
and $\lambda$ is a hyperparameter. This problem is derived by applying Stirling's approximation and continuous relaxation to the objective function of (4), and adding the people-conservation constraints to the objective function as penalty terms; $\lambda$ controls the strength of the penalty terms. This optimization problem has a convex objective function and bound constraints, so we can obtain the global optimum by the L-BFGS-B method (Byrd et al. 1995), which is implemented in scipy.optimize. Our experiments explored three settings with $\lambda$ values of $\{1, 10, 100\}$.
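As an illustration of this baseline, a minimal sketch with a toy 2-area instance and an arbitrary $\lambda$ (this is an assumption-laden reimplementation for exposition, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize

n, lam = 2, 10.0
N_t = np.array([30.0, 20.0])                 # toy populations at t
N_t1 = np.array([25.0, 25.0])                # toy populations at t+1
theta = np.array([[0.7, 0.3], [0.4, 0.6]])   # toy transition matrix

def objective(m_flat):
    # f(M) + (lambda / 2) * g(M) from Section 5.1, with M flattened.
    M = m_flat.reshape(n, n)
    f = np.sum(M * np.log(M) - M * (1.0 + np.log(theta)))
    g = np.sum((N_t - M.sum(axis=1)) ** 2) + np.sum((N_t1 - M.sum(axis=0)) ** 2)
    return f + 0.5 * lam * g

m0 = np.full(n * n, 10.0)
# Bound constraints M_tij >= 0 (small floor keeps log(M) finite).
res = minimize(objective, m0, method="L-BFGS-B",
               bounds=[(1e-9, None)] * (n * n))
M_hat = res.x.reshape(n, n)
assert res.fun <= objective(m0)  # optimizer did not increase the objective
```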
## 5.2 MAP inference: Synthetic data
First, we compare running times and characteristics of the optimum solutions of the MAP inference problem (2) obtained by each method using synthetic data. We randomly generate synthetic instances of the MAP inference problem (2). We consider an $L \times L$ grid space, where each cell corresponds to one area. $\Gamma_i$ is set to $[n]$ for all $i \in [n]$ (i.e., we consider the "fully connected" situation). We set $T = 2$ and $N_{t,i} \sim \text{Multi}(F, p_t)$ ($t = 1, 2$), where $F$ is the total population in the grid space and $p_1, p_2 \sim \text{Dirichlet}(\mathbf{1})$. $\theta$ is generated in two ways as follows.
1. $\theta_i \sim \text{Dirichlet}(1)$ for each $i \in [n]$ independently. We call this generation procedure "Dirichlet".
2. $\theta_{ij} = \exp(-\text{dist}(i, j)) / \sum_{j' \in \Gamma_i} \exp(-\text{dist}(i, j'))$, where $\text{dist}(i, j)$ is the Euclidean distance between cells $i$ and $j$. We call this procedure "Exponential decay". It reflects a typical characteristic of human movement: people are more likely to move short distances than long ones.
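The "Exponential decay" construction can be sketched as follows (the row-major grid indexing is an illustrative choice):

```python
import math

def exp_decay_theta(L):
    """theta_ij = exp(-dist(i, j)) normalized over each row, where cells are
    the L x L grid points and dist is the Euclidean distance between cells."""
    n = L * L
    coord = [(k // L, k % L) for k in range(n)]  # row-major cell coordinates
    theta = []
    for i in range(n):
        w = [math.exp(-math.dist(coord[i], coord[j])) for j in range(n)]
        s = sum(w)
        theta.append([x / s for x in w])
    return theta

theta = exp_decay_theta(3)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in theta)
# Staying in the current cell (distance 0) is the most likely transition.
assert all(max(row) == row[i] for i, row in enumerate(theta))
```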
To clarify the dependence of computation time on the number of areas, $L^2$, and the total population, $F$, we solve the MAP inference problem for $L = 10, 20, 30$ with $F$ fixed to $10^4$, and for $F = 10^4, 10^5, 10^6$ with $L$ fixed to 20. We generate 10 random instances for each evaluation.
The average running times (seconds) over 10 instances for each algorithm are summarized in Table 1. Each experiment is executed with a time limit of 1000 seconds. If running time exceeds the time limit, the running time of the trial is recorded as 1000 seconds; in such a case, the averaged value is underestimated. To clarify this, we tag average running times in the table with "> " if the time limit is exceeded in even one instance. In parentheses, the standard deviation of running times is shown if all 10 trials are completed within the time limit. The L-BFGS-B methods have longer running times than the proposed method, and their running times vary with parameter settings and instances; this unstable behavior will be problematic in practical usage. The proposed method outperforms all other methods in all settings. In particular, it can solve problems in a short computational time and works stably even when $L$ and $F$ are large.
In order to compare the characteristics of the optimum solutions output by the proposed method and L-BFGS-B ($\lambda = 1$), we solve two "Exponential decay" instances, with $L = 5$, $F = 10^2$ and with $L = 5$, $F = 10^3$, and check the solutions in detail. The results are shown in Figure 3. In this figure, the $L^2 \times L^2$ optimum solution matrix obtained by each method is presented as a heatmap. To investigate the sparsity structure of the solutions, the maximum value of the heatmap is set to 1 and the minimum value to 0. While the solution obtained by L-BFGS-B is blurred and contains a lot of small but non-zero elements (elements with light colors) because of continuous relaxation, the proposed method is able to produce sparse solutions. We calculate the sparseness of each solution as (# of near-zero ($< 10^{-4}$) elements) / (# of whole elements); the yielded values are 90% and 67% with the proposed method, and 0% and 0% with L-BFGS-B. This implies that the memory needed to hold the solution can be reduced significantly by using a sparse matrix structure. Although we can get sparse solutions by rounding the solutions of existing methods, this operation violates the constraint of population conservation and degrades solution quality.
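The sparseness measure used above is straightforward to compute; a sketch with an arbitrary example matrix:

```python
def sparseness(M, tol=1e-4):
    """Fraction of near-zero (< tol in absolute value) elements in a matrix."""
    flat = [abs(x) for row in M for x in row]
    return sum(x < tol for x in flat) / len(flat)

M = [[3.0, 0.0, 1e-7],
     [0.0, 2.0, 0.0]]
assert sparseness(M) == 4 / 6  # four of six entries are near-zero
```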
## 5.3 MAP inference: Real data
We evaluate running times and characteristics of the optimum solutions using real-world spatio-temporal population data. We use mobile spatial statistics (Terada, Nagata, and Kobayashi 2013), which is hourly population data for fixed-size square grids calculated from mobile network operational data. We use data for Tokyo and Kanagawa prefectures, the main part of the capital region of Japan, on April 1st, 2015 (a weekday) and April 5th, 2015 (a holiday). $N_t$ is the population of each area at the clock time of $t$ hours for $t \in \{0, 1, \dots, 22\}$ on each day. In order to compare the performances of the methods at different cell widths, we aggregate the population data of the cells and make datasets with cell sizes of 5km × 5km, 2km × 2km, and 1km × 1km. The resulting datasets contain 200, 1017, and 3711 cells, respectively.

Table 1: The average running time (seconds) of 10 synthetic instances when *F* is fixed to 10⁴ (above) and when *L* is fixed to 20 (below). The best running time is highlighted for each problem size. Values with "> " are underestimates due to the time limit. Standard deviation is shown in parentheses if all 10 trials are completed in the time limit.

<table><thead><tr><th rowspan="2">type of θ<br>L</th><th colspan="3">Dirichlet</th><th colspan="3">Exponential decay</th></tr><tr><th>10</th><th>20</th><th>30</th><th>10</th><th>20</th><th>30</th></tr></thead><tbody><tr><td>Proposed</td><td><b>0.05 (0.00)</b></td><td><b>0.61 (0.01)</b></td><td><b>4.54 (0.16)</b></td><td><b>0.03 (0.00)</b></td><td><b>0.46 (0.03)</b></td><td><b>6.29 (2.60)</b></td></tr><tr><td>L-BFGS-B (λ = 1)</td><td>6.51 (0.91)</td><td>132.86 (15.46)</td><td>357.32 (39.76)</td><td>13.51 (2.00)</td><td>273.25 (18.86)</td><td>>911.22 (-)</td></tr><tr><td>L-BFGS-B (λ = 10)</td><td>7.40 (1.27)</td><td>143.14 (13.25)</td><td>387.09 (56.31)</td><td>13.87 (1.69)</td><td>281.40 (19.18)</td><td>>936.14 (-)</td></tr><tr><td>L-BFGS-B (λ = 100)</td><td>9.65 (2.01)</td><td>169.83 (17.19)</td><td>440.77 (69.87)</td><td>15.79 (1.36)</td><td>297.40 (20.42)</td><td>>975.64 (-)</td></tr></tbody></table>

<table><thead><tr><th rowspan="2">type of θ<br>F</th><th colspan="3">Dirichlet</th><th colspan="3">Exponential decay</th></tr><tr><th>10<sup>4</sup></th><th>10<sup>5</sup></th><th>10<sup>6</sup></th><th>10<sup>4</sup></th><th>10<sup>5</sup></th><th>10<sup>6</sup></th></tr></thead><tbody><tr><td>Proposed</td><td><strong>0.71 (0.09)</strong></td><td><strong>4.19 (0.85)</strong></td><td><strong>14.25 (1.56)</strong></td><td><strong>0.68 (0.22)</strong></td><td><strong>2.44 (0.58)</strong></td><td><strong>4.93 (0.94)</strong></td></tr><tr><td>L-BFGS-B (λ = 1)</td><td>140.16 (15.34)</td><td>434.25 (114.80)</td><td>>804.52 (-)</td><td>323.87 (30.86)</td><td>>1000.00 (-)</td><td>>1000.00 (-)</td></tr><tr><td>L-BFGS-B (λ = 10)</td><td>149.29 (14.35)</td><td>503.72 (117.16)</td><td>>880.68 (-)</td><td>340.96 (41.54)</td><td>>1000.00 (-)</td><td>>1000.00 (-)</td></tr><tr><td>L-BFGS-B (λ = 100)</td><td>175.65 (18.26)</td><td>793.54 (146.68)</td><td>>899.83 (-)</td><td>356.24 (48.56)</td><td>>1000.00 (-)</td><td>>887.22 (-)</td></tr></tbody></table>

Table 2: The average running time (seconds) for real data. The best running time is highlighted for each cell width. Values with "> " are underestimates due to the time limit. Standard deviation is shown in parentheses if all 10 trials are completed in the time limit.

<table><thead><tr><th rowspan="2">dataset<br/>cell width</th><th colspan="3">April 1st, 2015</th><th colspan="3">April 5th, 2015</th></tr><tr><th>5km</th><th>2km</th><th>1km</th><th>5km</th><th>2km</th><th>1km</th></tr></thead><tbody><tr><th scope="row">Proposed</th><td><strong>0.84 (0.16)</strong></td><td><strong>9.16 (1.49)</strong></td><td><strong>59.40 (22.38)</strong></td><td><strong>0.41 (0.01)</strong></td><td><strong>6.52 (1.15)</strong></td><td><strong>54.00 (10.70)</strong></td></tr><tr><th scope="row">L-BFGS-B (λ = 1)</th><td>196.46 (139.61)</td><td>>1000.00 (-)</td><td>>1000.00 (-)</td><td>68.76 (25.43)</td><td>>940.84 (-)</td><td>>1000.00 (-)</td></tr><tr><th scope="row">L-BFGS-B (λ = 10)</th><td>14.96 (34.63)</td><td>>1000.00 (-)</td><td>>1000.00 (-)</td><td>10.90 (19.85)</td><td>>1000.00 (-)</td><td>>1000.00 (-)</td></tr><tr><th scope="row">L-BFGS-B (λ = 100)</th><td>2.04 (0.73)</td><td>>811.94 (-)</td><td>>1000.00 (-)</td><td>0.99 (0.89)</td><td>>697.78 (-)</td><td>>1000.00 (-)</td></tr></tbody></table>
We construct $\theta$ by the same procedure as "Exponential decay" in the synthetic data experiment and set $\Gamma_i = \{j \mid j \in [n], \text{dist}(i, j) \le 5\}$, where $\text{dist}(i, j)$ is the Euclidean distance between cell $i$ and cell $j$ in the grid space.
The results are summarized in Table 2. The time limit is set to 1000 seconds, and the average running time and standard deviation are calculated in the same way as in the experiment on synthetic data.
As shown, the proposed method is able to solve all instances in about 60 seconds or less. On the other hand, the compared methods fail to process the 2km × 2km and 1km × 1km datasets regardless of the value of $\lambda$. This shows the effectiveness of the proposed method.
## 5.4 EM algorithm: Synthetic data
As mentioned, MAP inference is used to conduct the E-step of the EM algorithm that estimates the number of moving people and the probabilistic model parameters.
Here, we compare EM algorithm performance achieved with the proposed method and with the existing method using simulation data.
We consider people movement in an $L \times L$ grid space ($L = 10, 12$). We construct the transition matrix $\theta^{\text{true}}$ by $\theta_{ij} \propto s_j \cdot \exp(-\beta \cdot \text{dist}(i, j))$, where $s_j > 0$ ($j \in [n]$) is a parameter that represents how likely people are to gather at area $j$, and $\beta$ is a parameter that controls the decay of transition probability with increasing distance between $i$ and $j$. This transition matrix is a variant of the one used in (Akagi et al. 2018).
We set $\beta^{\text{true}} = 0.5$ and $s_i^{\text{true}}$ as follows: first, we randomly select 3 areas from $[n]$ and set $s_i^{\text{true}} = 10$; for the other areas, we set $s_i^{\text{true}} = 1$. We generate the population of each area, $\mathcal{N}$, and the number of moving people between areas, $\mathcal{M}$, by simulating people movement following the procedure described in Section 3.1 until timestep $T = 10$ using the transition matrix $\theta^{\text{true}}$. We set the initial population $N_{1,i}$ to $10^4$ ($i \in [n]$).
Our task is to estimate the number of moving people, $\mathcal{M}$, from the observed population $\mathcal{N}$ by the EM algorithm. For details of the EM algorithm, please see (Akagi et al. 2018). In the algorithms, $\Gamma_i$ is set to $[n]$ for all $i \in [n]$. We evaluate algorithm performance by the Normalized Absolute Error (NAE) of $\mathcal{M}$, calculated as $\sum_{t,i,j} |M_{tij}^{\text{true}} - M_{tij}^{\text{estimated}}| / \sum_{t,i,j} M_{tij}^{\text{true}}$. The EM algorithm is iterated 200 times for each method.
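The NAE metric reduces to a few lines; the flattened toy tensors below are arbitrary, not experiment output:

```python
def nae(M_true, M_est):
    """Normalized Absolute Error: sum |true - est| / sum true."""
    num = sum(abs(t - e) for t, e in zip(M_true, M_est))
    return num / sum(M_true)

# Toy M values flattened over all (t, i, j).
assert nae([10, 5, 5], [8, 6, 6]) == 0.2  # |2| + |1| + |1| over 20
```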
Figure 4 plots NAE versus elapsed time for the EM algorithm with the proposed method and with the previous method.
It can be seen that the proposed method yields better NAE values more quickly than the previous method, especially at large $L$. For example, in the case of $L = 12$, it took the L-BFGS-B method about 9657 seconds to reach an NAE of 1.15 (the dashed line in Figure 4). The proposed method, on the other hand, took only about 24 seconds, which is roughly 400 times faster.
## 6. Related Work
Several methods have been proposed to realize efficient MAP inference in CGM, which is a general framework including CFDM (Sheldon et al. 2013; Sun, Sheldon, and Kumar 2015; Nguyen et al. 2016; Vilnis et al. 2015). Note that these existing methods provide non-exact MAP inference and output non-integer solutions.
Figure 3: Comparison of the optimum solution matrices in an $L \times L$ grid space obtained by the proposed method and L-BFGS-B ($\lambda = 1$) with $\theta$ of type "Exponential decay". The left is for $(L, F) = (5, 10^2)$ and the right for $(L, F) = (5, 10^3)$, where $F$ is the total population of the targeted areas. The sparsity pattern of the obtained $L^2 \times L^2$ solution matrix is presented as a heatmap; the $(i, j)$-element of the solution matrix represents the number of people moving from area $i$ to area $j$. In order to investigate the sparsity structure of the solutions, the maximum value of the color map is set to 1 and the minimum value to 0. The output of the L-BFGS-B method is blurred and contains a lot of small but non-zero elements; in contrast, the solution by the proposed method is noticeably sparse.

In (Akagi et al. 2018), an efficient optimization method for CFDM is proposed, but it can be used only under a specially factorized probabilistic model, which is designed to model human movements in urban spaces. In contrast, the proposal of this paper is widely applicable and poses no excessive constraints on the underlying transition model structure.
|
| 512 |
+
|
| 513 |
+
There is a lot of work on people flow estimation via CFDM. For example, (Iwata et al. 2017; Akagi et al. 2018; Iwata and Shimizu 2019) deal with the estimation of people flows in urban spaces by utilizing variational inference, a factorized probabilistic model, or neural networks. In (Kumar, Sheldon, and Srivastava 2013) and (Tanaka et al. 2018), the inflow and outflow of each area at each timestep are assumed to be available, while (Tanaka et al. 2018) considers a time delay between before and after movement. Thus, there are many variations in terms of the observation model and the probabilistic model underlying movement. The method proposed herein can be used as a subroutine in any of these approaches by appropriately constructing instances of MCFP to suit the problem.
Figure 4: NAE (Normalized Absolute Error) as a function of elapsed time for the EM algorithm with each MAP inference method.

Attempts to estimate human movement from aggregated count data have received a lot of attention. As a particularly relevant study, Xu et al. proposed an algorithm for recovering personal trajectories from aggregated count data for the purpose of evaluating the privacy risk of publishing such data (Xu et al. 2017). Sheldon et al. proposed a method to reconstruct sample paths of a Markov chain from partial observations for the purpose of analyzing bird migration patterns (Sheldon, Elmohamed, and Kozen 2008). Although those methods are similar to ours in that they solve combinatorial assignment problems to recover movement from aggregated data, there are two distinct differences: (i) they focus on recovering each individual trajectory, not the collective movement of targets, and (ii) they have no mechanism to estimate the parameters of movement models.
Many studies in another direction, predicting population or people flow in cities, have been published (Konishi et al. 2016; Zhang et al. 2019; Jiang et al. 2019). Their approach is to forecast future city dynamics in each area from past data or other features in a supervised way, using classical regression models, deep learning architectures, etc. Our purpose is to estimate people flows between areas from only population snapshots at incremental timesteps in an unsupervised way, which is a totally different task from future prediction.
**7. Conclusion**
In this paper, we proposed a novel method for MAP inference in the collective flow diffusion model. First, we showed that the MAP inference problem can be formulated as a minimum convex cost flow problem. Based on this formulation, we proposed an efficient algorithm for the MAP inference problem using the capacity scaling algorithm. Extensive evaluations on both real and synthetic datasets showed that our algorithm outperforms previous alternatives in terms of running time and solution quality.

---PAGE_BREAK---
## References
Ahuja, R. K.; Magnanti, T. L.; and Orlin, J. B. 1993. *Network Flows: Theory, Algorithms, and Applications*. Prentice-Hall, Inc.

Akagi, Y.; Nishimura, T.; Kurashima, T.; and Toda, H. 2018. A fast and accurate method for estimating people flow from spatiotemporal population data. In *IJCAI*, 3293–3300.

Byrd, R. H.; Lu, P.; Nocedal, J.; and Zhu, C. 1995. A limited memory algorithm for bound constrained optimization. *SIAM Journal on Scientific Computing* 16(5):1190–1208.

Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. *Numerische Mathematik* 1(1):269–271.

Du, J.; Kumar, A.; and Varakantham, P. 2014. On understanding diffusion dynamics of patrons at a theme park. In *AAMAS*, 1501–1502.

Iwata, T., and Shimizu, H. 2019. Neural collective graphical models for estimating spatio-temporal population flow from aggregated data. In *AAAI*, 3935–3942.

Iwata, T.; Shimizu, H.; Naya, F.; and Ueda, N. 2017. Estimating people flow from spatiotemporal population data via collective graphical mixture models. *ACM Transactions on Spatial Algorithms and Systems* 3(1):1–18.

Jiang, R.; Song, X.; Huang, D.; Song, X.; Xia, T.; Cai, Z.; Wang, Z.; Kim, K.-S.; and Shibasaki, R. 2019. DeepUrbanEvent: A system for predicting citywide crowd dynamics at big events. In *KDD*, 2114–2122. ACM.

Jones, E.; Oliphant, T.; Peterson, P.; et al. 2001–. SciPy: Open source scientific tools for Python.

Kiraly, Z., and Kovacs, P. 2012. Efficient implementations of minimum-cost flow algorithms. *Acta Univ. Sapientiae* 4(1):67–118.

Konishi, T.; Maruyama, M.; Tsubouchi, K.; and Shimosaka, M. 2016. CityProphet: City-scale irregularity prediction using transit app logs. In *UbiComp*, 752–757. ACM.

Kumar, A.; Sheldon, D.; and Srivastava, B. 2013. Collective diffusion over networks: Models and inference. In *UAI*.

Minoux, M. 1986. Solving integer minimum cost flows with separable convex cost objective polynomially. In *Netflow at Pisa*. Springer. 237–239.

Morimura, T.; Osogami, T.; and Idé, T. 2013. Solving inverse problem of Markov chain with partial observations. In *NIPS*, 1655–1663.

Nguyen, T.; Kumar, A.; Lau, H. C.; and Sheldon, D. 2016. Approximate inference using DC programming for collective graphical models. In *AISTATS*, 685–693.

Sheldon, D. R., and Dietterich, T. G. 2011. Collective graphical models. In *NIPS*, 1161–1169.

Sheldon, D.; Sun, T.; Kumar, A.; and Dietterich, T. 2013. Approximate inference in collective graphical models. In *ICML*, 1004–1012.

Sheldon, D.; Elmohamed, M.; and Kozen, D. 2008. Collective inference on Markov models for modeling bird migration. In *NIPS*, 1321–1328.

Sun, T.; Sheldon, D.; and Kumar, A. 2015. Message passing for collective graphical models. In *ICML*, 853–861.

Tanaka, Y.; Iwata, T.; Kurashima, T.; Toda, H.; and Ueda, N. 2018. Estimating latent people flow without tracking individuals. In *IJCAI*, 3556–3563.

Terada, M.; Nagata, T.; and Kobayashi, M. 2013. Population estimation technology for mobile spatial statistics. *NTT DOCOMO Technical Journal* 14(3):10–15.

Vilnis, L.; Belanger, D.; Sheldon, D.; and McCallum, A. 2015. Bethe projections for non-local inference. In *UAI*, 892–901.

Xu, F.; Tu, Z.; Li, Y.; Zhang, P.; Fu, X.; and Jin, D. 2017. Trajectory recovery from ash: User privacy is not preserved in aggregated mobility data. In *WWW*, 1241–1250.

Yang, H., and Zhou, J. 1998. Optimal traffic counting locations for origin-destination matrix estimation. *Transportation Research Part B: Methodological* 32(2):109–126.

Zhang, J.; Zheng, Y.; Sun, J.; and Qi, D. 2019. Flow prediction in spatio-temporal networks based on multitask deep learning. *IEEE Transactions on Knowledge and Data Engineering*.
samples_new/texts_merged/1885128.md ADDED
@@ -0,0 +1,507 @@
---PAGE_BREAK---

We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists.

---PAGE_BREAK---
Low Sampling Rate Time Acquisition Schemes and Channel Estimation Algorithms of Ultra-Wideband Signals

Wei Xu and Jiaxiang Zhao

Nankai University, China

# 1. Introduction
Ultra-wideband (UWB) communication is a viable technology for providing high data rates over broadband wireless channels for applications including wireless multimedia, wireless Internet access, and future-generation mobile communication systems (Karaoguz, 2001; Stoica et al., 2005). Two of the most critical challenges in the implementation of UWB systems are timing acquisition and channel estimation. The difficulty arises from UWB signals being ultrashort low-duty-cycle pulses operating at very low power density. The Rake receiver (Turin, 1980), a prevalent receiver structure for UWB systems, exploits the high diversity to effectively capture signal energy spread over multiple paths and boost the received signal-to-noise ratio (SNR). However, to perform maximal ratio combining (MRC), the Rake receiver needs the timing information of the received signal and knowledge of the channel parameters, namely gains and tap delays. Timing errors as small as fractions of a nanosecond can seriously degrade system performance (Lovelace & Townsend, 2002; Tian & Giannakis, 2005). Thus, accurate timing acquisition and channel estimation are essential for UWB systems.

Many research efforts have been devoted to the timing acquisition and channel estimation of UWB signals. However, most reported methods suffer from restrictive assumptions, such as demanding high sampling rates or a set of high-precision time-delay systems, or invoking a line search, which severely limits their usage. In this chapter, we focus on low sampling rate timing acquisition schemes and channel estimation algorithms for UWB signals. First, we develop a novel optimum data-aided (DA) timing offset estimator that uses only symbol-rate samples to achieve channel-delay-spread-scale timing acquisition. For this purpose, we exploit the statistical properties of the power delay profile of the received signals to design a set of templates that ensure effective multipath energy capture at any time. Second, we propose a novel optimum data-aided channel estimation scheme that relies only on frame-level sampling rate data to derive channel parameter estimates from the received waveform. Simulations are provided to demonstrate the effectiveness of the proposed approach.

---PAGE_BREAK---
## 2. The channel model

From the channel model described in (Foerster, 2003), the impulse response of the channel is

$$h(t) = X \sum_{n=1}^{N} \sum_{k=1}^{K(n)} \alpha_{nk} \delta(t - T_n - \tau_{nk}) \quad (1)$$

where $X$ is the log-normal shadowing effect. $N$ and $K(n)$ represent the total number of clusters and the number of rays in the $n$th cluster, respectively. $T_n$ is the time delay of the $n$th cluster relative to a reference at the receiver, and $\tau_{nk}$ is the delay of the $k$th multipath component in the $n$th cluster relative to $T_n$. From (Foerster, 2003), the multipath channel coefficient $\alpha_{nk}$ can be expressed as $\alpha_{nk} = p_{nk}\beta_{nk}$, where $p_{nk}$ assumes either +1 or -1 with equal probability, and $\beta_{nk} > 0$ has a log-normal distribution.

The power delay profile (the mean values of $\{\beta_{nk}^2\}$) decays exponentially with respect to $\{T_n\}$ and $\{\tau_{nk}\}$, i.e.,

$$\langle \beta_{nk}^2 \rangle = \langle \beta_{00}^2 \rangle \exp(-\frac{T_n}{\Gamma}) \exp(-\frac{\tau_{nk}}{\gamma}) \quad (2)$$

where $\langle \beta_{00}^2 \rangle$ is the average power gain of the first multipath in the first cluster. $\Gamma$ and $\gamma$ are power-delay time constants for the clusters and the rays, respectively.
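Equation (2) pins down only the mean-square ray gains; one way to draw coefficients $\alpha_{nk} = p_{nk}\beta_{nk}$ consistent with it is sketched below. The log-normal spread `sigma_db` and the helper names are our assumptions, and the shadowing term $X$ is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_square_gain(T_n, tau_nk, Gamma, gamma, beta00_sq=1.0):
    """Average power gain of ray (n, k) per the exponential decay profile (2)."""
    return beta00_sq * np.exp(-T_n / Gamma) * np.exp(-tau_nk / gamma)

def draw_coefficient(T_n, tau_nk, Gamma, gamma, sigma_db=3.0):
    """Draw alpha_nk = p_nk * beta_nk: a random sign times a log-normal
    magnitude whose mean-square value follows the profile above."""
    mean_sq = mean_square_gain(T_n, tau_nk, Gamma, gamma)
    sigma = sigma_db * np.log(10) / 20.0   # dB std -> natural-log std
    mu = 0.5 * np.log(mean_sq) - sigma**2  # so that E[beta^2] = mean_sq
    beta = np.exp(rng.normal(mu, sigma))
    p = rng.choice([-1.0, 1.0])
    return p * beta
```

The choice of `mu` follows from $E[e^{2X}] = e^{2\mu + 2\sigma^2}$ for $X \sim \mathcal{N}(\mu, \sigma^2)$.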
The model (1) is employed to generate the impulse responses of the propagation channels in our simulation. For simplicity, an equivalent representation of (1) is

$$h(t) = \sum_{l=0}^{L-1} \alpha_l \delta(t - \tau_l) \quad (3)$$

where $L$ represents the total number of multipaths, $\alpha_l$ includes the log-normal shadowing and multipath channel coefficients, and $\tau_l$ denotes the delay of the $l$th multipath relative to the reference at the receiver. Without loss of generality, we assume $\tau_0 < \tau_1 < \dots < \tau_{L-1}$. Moreover, the channel is only allowed to change from burst to burst and remains invariant (i.e., $\{\alpha_l, \tau_l\}_{l=0}^{L-1}$ are constant) over one transmission burst.
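The tap-delay model (3) acts on a transmitted pulse by superposing delayed, scaled copies of it. A discrete-time sketch (the sampling step `dt` and the function name are ours):

```python
import numpy as np

def received_pulse(p, alphas, taus, dt):
    """p_R(t) = sum_l alpha_l * p(t - tau_l): superpose delayed, scaled
    copies of the sampled pulse p (sample period dt seconds)."""
    n_extra = int(round(max(taus) / dt))
    out = np.zeros(len(p) + n_extra)
    for a, tau in zip(alphas, taus):
        k = int(round(tau / dt))   # delay in whole samples
        out[k:k + len(p)] += a * p
    return out
```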
## 3. Low sampling rate time acquisition schemes

One of the most acute challenges in realizing the potential of UWB systems is to develop a timing acquisition scheme that relies only on symbol-rate samples. Such a low sampling rate timing acquisition scheme can greatly lower the implementation complexity. In addition, the difficulty in UWB synchronization also arises from UWB signals being ultrashort low-duty-cycle pulses operating at very low power density. Timing errors as small as fractions of a nanosecond can seriously degrade system performance (Lovelace & Townsend, 2002; Tian & Giannakis, 2005).

A number of timing algorithms have been reported for UWB systems recently. Some of them (Tian & Giannakis, 2005; Yang & Giannakis, 2005; Carbonelli & Mengali, 2006; He & Tepedelenlioglu, 2008) involve the sliding correlation usually used in traditional narrowband systems. However, these approaches inevitably require a searching procedure and are inherently time-consuming; an overly long synchronization time affects symbol detection. Furthermore, implementation of such techniques demands very fast and expensive A/D converters and therefore results in high power consumption.

---PAGE_BREAK---

Another approach (Carbonelli & Mengali, 2005; Furusawa et al., 2008; Cheng & Guan, 2008; Sasaki et al., 2010) is to synchronize UWB signals through an energy detector. The merit of using energy detectors is that the design of the timing acquisition scheme can benefit from the statistical properties of the power delay profile of the received signals: unlike the received UWB waveforms, which are unknown to the receiver due to pulse distortions, these statistical properties are invariant. Furthermore, as shown in (Carbonelli & Mengali, 2005), an energy-collection-based receiver offers a low-complexity, low-cost and low-power-consumption solution at the cost of reduced channel spectral efficiency.
In this section, a novel optimum data-aided timing offset estimator that relies only on symbol-rate samples for frame-level timing acquisition is derived. For this purpose, we exploit the statistical properties of the power delay profile of the received signals to design a set of templates that ensure effective multipath energy capture at any time. We show that frame-level timing offset acquisition can be transformed into an equivalent amplitude estimation problem. Thus, utilizing the symbol-rate samples extracted by our templates and the ML principle, we obtain channel-dependent amplitude estimates and optimum timing offset estimates.
### 3.1 The signal model

During the acquisition stage, a training sequence is transmitted. Each UWB symbol is transmitted over a time interval of $T_s$ seconds that is subdivided into $N_f$ equal-size frame intervals of length $T_f$. A single frame contains exactly one data-modulated ultrashort pulse $p(t)$ of duration $T_p$. The transmitted waveform during acquisition has the form

$$s(t) = \sqrt{E_f} \sum_{j=0}^{NN_f-1} d_{[j]_{N_{ds}}} p(t - jT_f - a_{\lfloor \frac{j}{N_f} \rfloor}\Delta) \quad (4)$$

where $\{d_l\}_{l=0}^{N_{ds}-1}$ with $d_l \in \{\pm 1\}$ is the DS sequence. The time shift $\Delta$ is chosen to be $T_h/2$ with $T_h$ being the delay spread of the channel. The assumption that there is no inter-frame interference suggests $T_h \le T_f$. For simplicity, we assume $T_h = T_f$ and derive the acquisition algorithm; our scheme can easily be extended to the case where $T_f \ge T_h$. The training sequence $\{a_n\}_{n=0}^{N-1}$ is designed as

$$\{\underbrace{0, 0, 0, \dots, 0}_{n=0,1,\dots,N_0-1}, \underbrace{1, 0, 1, 0, \dots, 1, 0}_{n=N_0,N_0+1,\dots,N-1}\} \quad (5)$$

i.e., the first $N_0$ consecutive symbols are chosen to be 0, and the remaining symbols alternate between 1 and 0.
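The pattern (5) is straightforward to generate programmatically; a small sketch (the function name is ours):

```python
def training_sequence(N, N0):
    """Training symbols a_n per (5): N0 leading zeros, then 1, 0, 1, 0, ..."""
    return [0] * N0 + [(n - N0 + 1) % 2 for n in range(N0, N)]

print(training_sequence(8, 3))  # -> [0, 0, 0, 1, 0, 1, 0, 1]
```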
The transmitted signal propagates through an $L$-path fading channel as shown in (3). Using the first arriving time $\tau_0$, we define the relative time delay of each multipath as $\tau_{l,0} = \tau_l - \tau_0$ for $1 \le l \le L - 1$.

---PAGE_BREAK---

Fig. 1. The block diagram of the acquisition approach.

Thus the received signal is

$$r(t) = \sqrt{E_f} \sum_{j=0}^{NN_f-1} d_{[j]_{N_{ds}}} p_R(t-jT_f - a_{\lfloor \frac{j}{N_f} \rfloor} \Delta - \tau_0) + n(t) \quad (6)$$

where $n(t)$ is zero-mean additive white Gaussian noise (AWGN) with double-sided power spectral density $\sigma_n^2/2$ and $p_R(t) = \sum_{l=0}^{L-1} \alpha_l p(t - \tau_{l,0})$ represents the convolution of the channel impulse response (3) with the transmitted pulse $p(t)$.

The timing information of the received signal is contained in the delay $\tau_0$, which can be decomposed as

$$\tau_0 = n_s T_s + n_f T_f + \zeta \quad (7)$$

with $n_s = \lfloor \frac{\tau_0}{T_s} \rfloor$, $n_f = \lfloor \frac{\tau_0 - n_s T_s}{T_f} \rfloor$ and $\zeta \in [0, T_f)$.
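The decomposition (7) is just successive integer division of $\tau_0$ by the symbol and frame durations; a sketch:

```python
import math

def decompose_delay(tau0, T_s, T_f):
    """Split tau0 = n_s*T_s + n_f*T_f + zeta per (7), with zeta in [0, T_f)."""
    n_s = math.floor(tau0 / T_s)
    n_f = math.floor((tau0 - n_s * T_s) / T_f)
    zeta = tau0 - n_s * T_s - n_f * T_f
    return n_s, n_f, zeta

# e.g. T_f = 100 ns and N_f = 8 frames per symbol -> T_s = 800 ns
print(decompose_delay(2137.0, 800.0, 100.0))  # -> (2, 5, 37.0)
```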
In the next section, we present a DA timing acquisition scheme based on the following assumptions: 1) there is no interframe interference, i.e., $\tau_{L-1,0} \le T_f$; 2) the channel is quasi-static, i.e., constant over a block duration; 3) since the symbol-level timing offset $n_s$ can be estimated from the symbol-rate samples through traditional estimation approaches, we assume $n_s = 0$. In this chapter, we focus on acquiring timing with frame-level resolution, relying only on symbol-rate samples.

### 3.2 Analysis of symbol-rate sampled data $Y_0[n]$

As shown in Fig. 1, the received signal (6) first passes through a square-law detector. Then, the resultant output is separately correlated with the pre-devised templates $W_0(t)$, $W_1(t)$ and $W_2(t)$, and sampled at $nT_s$, which yields $\{Y_0[n]\}_{n=1}^{N-1}$, $\{Y_1[n]\}_{n=1}^{N-1}$ and $\{Y_2[n]\}_{n=1}^{N-1}$. Utilizing these samples, we derive an optimal timing offset estimator $\hat{n}_f$.

In view of (6), the output of the square-law detector is

$$ \begin{aligned} R(t) &= r^2(t) = (r_s(t) + n(t))^2 = r_s^2(t) + m(t) \\ &= E_f \sum_{j=0}^{NN_f-1} p_R^2(t - jT_f - a_{\lfloor \frac{j}{N_f} \rfloor} \Delta - \tau_0) + m(t) \end{aligned} \quad (8) $$

---PAGE_BREAK---

where $m(t) = 2r_s(t)n(t) + n^2(t)$ and $r_s(t)$ denotes the noiseless part of the received signal. When the template $W(t)$ is employed, the symbol-rate sampled data $Y[n]$ is

$$ Y[n] = \int_{0}^{T_s} R(t+nT_s)W(t)dt. \quad (9) $$
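In discrete time, (9) becomes a windowed inner product between the squared received samples and the template. A sketch (array-based, with sample period `dt`; names are ours):

```python
import numpy as np

def symbol_rate_sample(R, W, n, T_s, dt):
    """Approximate Y[n] = int_0^{T_s} R(t + n*T_s) W(t) dt, with R and W
    sampled every dt seconds (W holds one symbol interval of the template)."""
    k0 = int(round(n * T_s / dt))          # start of the n-th symbol window
    k1 = k0 + int(round(T_s / dt))         # end of the window
    return float(np.sum(R[k0:k1] * W[:k1 - k0]) * dt)
```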
Now we derive the decomposition of $Y_0[n]$, i.e., the symbol-rate samples when the template $W_0(t)$, defined as

$$ W_0(t) = \sum_{k=0}^{N_f-1} w(t-kT_f), \quad w(t) = \begin{cases} 1, & 0 \le t < \frac{T_f}{2} \\ -1, & \frac{T_f}{2} \le t < T_f \\ 0, & \text{others} \end{cases} \quad (10) $$

is employed. Substituting $W_0(t)$ for $W(t)$ in (9), we obtain the symbol-rate sampled data $Y_0[n]$. Recalling (5), we can derive the following proposition for $Y_0[n]$.

**Proposition 1:** 1) For $1 \le n < N_0$, $Y_0[n]$ can be expressed as

$$ Y_0[n] = N_f I_{\zeta,0} + M_0[n], \quad (11) $$

2) For $N_0 \le n \le N-1$, $Y_0[n]$ can be represented as

$$ Y_0[n] = \begin{cases} (2\Psi - N_f)I_{\zeta,a_{n-1}} + M_0[n], & \zeta \in [0, T_\eta) \\ (2\Psi - N_f + 1)I_{\zeta,a_{n-1}} + M_0[n], & \zeta \in [T_\eta, T_\eta + \frac{T_f}{2}) \\ (2\Psi - N_f + 2)I_{\zeta,a_{n-1}} + M_0[n], & \zeta \in [T_\eta + \frac{T_f}{2}, T_f) \end{cases} \quad (12) $$

where $\Psi \triangleq n_f - \frac{1}{2}\epsilon$, $\epsilon \in [-\frac{1}{2}, \frac{1}{2}]$ and $T_\eta \in [\frac{T_f}{4}, \frac{T_f}{2}]$. $M_0[n]$ is the sampled noise, and $I_{\zeta,a_n}$ is defined as

$$ I_{\zeta,a_n} \triangleq E_f \int_0^{T_f} \sum_{m=0}^2 p_R^2(t+mT_f-a_n\Delta-\zeta)w(t)dt. \quad (13) $$

We prove Proposition 1 and the fact that the sampled noise $M_0[n]$ can be approximated by a zero-mean Gaussian variable (Xu et al., 2009) in Appendix A and Appendix B, respectively. Some remarks on Proposition 1 follow:

1) The fact that $a_{n-1} \in \{0, 1\}$ implies that $I_{\zeta,a_{n-1}}$ in (12) is equal to either $I_{\zeta,0}$ or $I_{\zeta,1}$. Furthermore, $I_{\zeta,0}$ and $I_{\zeta,1}$ satisfy $I_{\zeta,1} = -I_{\zeta,0}$, whose proof is contained in *Fact 1* of Appendix A.

2) Equation (12) shows that the decomposition of $Y_0[n]$ varies as $\zeta$ falls in different subintervals, so correctly estimating $n_f$ requires determining to which region $\zeta$ belongs.

3) *Fact 2* of Appendix A, which states

$$ \left\{ \begin{array}{ll} I_{\zeta,0} > 0, & \zeta \in [0, T_{\eta}) \cup [T_{\eta} + \frac{T_f}{2}, T_f) \\ I_{\zeta,0} < 0, & \zeta \in [T_{\eta}, T_{\eta} + \frac{T_f}{2}) \end{array} \right. \quad (14) $$

suggests that the sign of $I_{\zeta,0}$ can be used to determine to which subinterval $\zeta$ belongs. However, when $I_{\zeta,0} > 0$, $\zeta$ could belong to either $[0, T_{\eta})$ or $[T_{\eta} + \frac{T_f}{2}, T_f)$. To resolve this ambiguity, we introduce the second template $W_1(t)$ in the next section.
---PAGE_BREAK---

### 3.3 Analysis of symbol-rate sampled data $Y_1[n]$

The symbol-rate sampled data $Y_1[n]$ is obtained when the template $W_1(t)$ is employed. $W_1(t)$ is a delayed version of $W_0(t)$ with delay $T_d \in [0, \frac{T_f}{2}]$. Our simulations show similar performance for different choices of $T_d$; for simplicity, we choose $T_d = \frac{T_f}{4}$ for the derivation. Thus, we have

$$ \begin{aligned} Y_1[n] &= \int_{\frac{T_f}{4}}^{T_s+\frac{T_f}{4}} R(t+nT_s)W_0\left(t-\frac{T_f}{4}\right)dt \\ &= \int_0^{T_s} R(t+nT_s+\frac{T_f}{4})W_0(t)dt. \end{aligned} \quad (15) $$

Then we can derive the following proposition for $Y_1[n]$.

**Proposition 2:** 1) For $1 \le n < N_0$, $Y_1[n]$ can be expressed as

$$ Y_1[n] = N_f J_{\zeta,0} + M_1[n]. \quad (16) $$
2) For $N_0 \le n \le N-1$, $Y_1[n]$ can be decomposed as

$$ Y_1[n] = \begin{cases} (2\Psi - N_f - 1)J_{\zeta, a_{n-1}} + M_1[n], & \zeta \in [0, T_\eta - \frac{T_f}{4}) \\ (2\Psi - N_f)J_{\zeta, a_{n-1}} + M_1[n], & \zeta \in [T_\eta - \frac{T_f}{4}, T_\eta + \frac{T_f}{4}) \\ (2\Psi - N_f + 1)J_{\zeta, a_{n-1}} + M_1[n], & \zeta \in [T_\eta + \frac{T_f}{4}, T_f) \end{cases} \quad (17) $$

where $J_{\zeta,0}$ satisfies

$$ \left\{ \begin{array}{ll} J_{\zeta,0} < 0, & \zeta \in [0, T_{\eta} - \frac{T_f}{4}) \cup [T_{\eta} + \frac{T_f}{4}, T_f) \\ J_{\zeta,0} > 0, & \zeta \in [T_{\eta} - \frac{T_f}{4}, T_{\eta} + \frac{T_f}{4}). \end{array} \right. \quad (18) $$
Equations (14) and (18) suggest that the signs of $I_{\zeta,0}$ and $J_{\zeta,0}$ can be used jointly to determine the range of $\zeta$, which is summarized as follows.

**Proposition 3:** $\zeta \in [0, T_f]$ defined in (7) satisfies:

1. If $I_{\zeta,0} > 0$ and $J_{\zeta,0} > 0$, then $\zeta \in (T_{\eta} - \frac{T_f}{4}, T_{\eta})$.

2. If $I_{\zeta,0} < 0$ and $J_{\zeta,0} > 0$, then $\zeta \in (T_{\eta}, T_{\eta} + \frac{T_f}{4})$.

3. If $I_{\zeta,0} < 0$ and $J_{\zeta,0} < 0$, then $\zeta \in (T_{\eta} + \frac{T_f}{4}, T_{\eta} + \frac{T_f}{2})$.

4. If $I_{\zeta,0} > 0$ and $J_{\zeta,0} < 0$, then $\zeta \in (0, T_{\eta} - \frac{T_f}{4}) \cup (T_{\eta} + \frac{T_f}{2}, T_f)$.

The last case of Proposition 3 shows that the signs of $I_{\zeta,0}$ and $J_{\zeta,0}$ alone cannot determine whether $\zeta \in (0, T_{\eta} - \frac{T_f}{4})$ or $\zeta \in (T_{\eta} + \frac{T_f}{2}, T_f)$. To resolve this ambiguity, a third, auxiliary template $W_2(t)$ is introduced, defined as

$$ W_2(t) = \sum_{k=0}^{N_f-1} v(t-kT_f), \quad v(t) = \begin{cases} 1, & T_f - 2T_v \le t < T_f - T_v \\ -1, & T_f - T_v \le t < T_f \\ 0, & \text{others} \end{cases} \quad (19) $$

where $T_v \in (0, T_f/10]$. Similar to the proof of (14), we can prove that either $K_{\zeta,0} > 0$ for $0 < \zeta < T_{\eta} - \frac{T_f}{4}$ or $K_{\zeta,0} < 0$ for $T_{\eta} + \frac{T_f}{4} < \zeta < T_f$ holds, which yields the information needed to determine which region $\zeta$ belongs to.
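The three templates can be tabulated on a sample grid; a sketch per (10) and (19). The circular delay used for $W_1$ is our simplification of the plain $T_f/4$ delay, and `dt` and the function names are ours:

```python
import numpy as np

def template_w0(T_f, N_f, dt):
    """W0 per (10): +1 on the first half of each frame, -1 on the second."""
    n = int(round(T_f / dt))
    w = np.where(np.arange(n) < n // 2, 1.0, -1.0)
    return np.tile(w, N_f)

def template_w1(T_f, N_f, dt):
    """W1 as W0 delayed by T_d = T_f/4 (circular shift, for illustration)."""
    return np.roll(template_w0(T_f, N_f, dt), int(round(T_f / (4 * dt))))

def template_w2(T_f, N_f, dt, T_v):
    """W2 per (19): +1 on [T_f-2T_v, T_f-T_v), -1 on [T_f-T_v, T_f), else 0."""
    n = int(round(T_f / dt))
    nv = int(round(T_v / dt))
    v = np.zeros(n)
    v[n - 2 * nv : n - nv] = 1.0   # [T_f - 2*T_v, T_f - T_v)
    v[n - nv :] = -1.0             # [T_f - T_v, T_f)
    return np.tile(v, N_f)
```

All three are zero-mean over each frame, which is what makes the extracted samples sensitive to where the multipath energy falls within the frame.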
|
| 229 |
+
---PAGE_BREAK---
|
| 230 |
+
|
| 231 |
+
### 3.4 The computation of the optimal timing offset estimator $\hat{n}_f$
|
| 232 |
+
|
| 233 |
+
To seek the estimate of $n_f$, we first compute the optimal estimates of $I_{\xi,0}$ and $J_{\xi,0}$ using (11) and (16). Then, we use the estimate $\hat{I}_{\xi,0}, \hat{J}_{\xi,0}$ and Proposition 3 to determine the region to which $\xi$ belongs. The estimate $\hat{\Psi}$ therefore can be derived using the proper decompositions of (12) and (17). Finally, recalling the definition in (12) $\Psi = n_f - \frac{\epsilon}{2}$ with $\epsilon \in [-\frac{1}{2}, \frac{1}{2}]$, we obtain $\hat{n}_f = [\hat{\Psi}]$, where $[\cdot]$ stands for the round operation.
|
| 234 |
+
|
| 235 |
+
According to the signs of $\hat{I}_{\xi,0}$ and $\hat{J}_{\xi,0}$, we summarize the ML estimate $\hat{\Psi}$ as follows:

**Proposition 4:**

* When $\hat{I}_{\zeta,0} > 0$ and $\hat{J}_{\zeta,0} > 0$, $\hat{\Psi} = \frac{1}{A} \sum_{n=N_0}^{N-1} [Z_n + N_f(I_{\zeta,0}^2 + J_{\zeta,0}^2)]$.

* When $\hat{I}_{\zeta,0} < 0$ and $\hat{J}_{\zeta,0} > 0$, $\hat{\Psi} = \frac{1}{A} \sum_{n=N_0}^{N-1} [Z_n + (N_f - 1)I_{\zeta,0}^2 + N_f J_{\zeta,0}^2]$.

* When $\hat{I}_{\zeta,0} < 0$ and $\hat{J}_{\zeta,0} < 0$, $\hat{\Psi} = \frac{1}{A} \sum_{n=N_0}^{N-1} [Z_n + (N_f - 1)(I_{\zeta,0}^2 + J_{\zeta,0}^2)]$.

* When $\hat{I}_{\zeta,0} > 0$ and $\hat{J}_{\zeta,0} < 0$, $\hat{\Psi} = \begin{cases} \frac{1}{A} \sum_{n=N_0}^{N-1} [Z_n + N_f I_{\zeta,0}^2 + (N_f + 1) J_{\zeta,0}^2] & , \hat{K}_{\zeta,0} > 0 \\ \frac{1}{A} \sum_{n=N_0}^{N-1} [Z_n + (N_f - 2) I_{\zeta,0}^2 + (N_f - 1) J_{\zeta,0}^2] & , \hat{K}_{\zeta,0} < 0 \end{cases}$

where $A \triangleq 2(N - N_0)(I_{\zeta,0}^2 + J_{\zeta,0}^2)$ and $Z_n \triangleq Y_0[n]I_{\zeta,a_{n-1}} + Y_1[n]J_{\zeta,a_{n-1}}$. The procedure for computing the optimal ML estimate $\hat{\Psi}$ is identical in all four cases of Proposition 4, so we only present the computation steps for the case $\hat{I}_{\zeta,0} > 0$ and $\hat{J}_{\zeta,0} > 0$.

1. Utilizing (11) and (16), we obtain the ML estimates

$$ \hat{I}_{\zeta,0} = \frac{1}{(N_0-1)N_f} \sum_{n=1}^{N_0-1} Y_0[n], \quad \hat{J}_{\zeta,0} = \frac{1}{(N_0-1)N_f} \sum_{n=1}^{N_0-1} Y_1[n]. \qquad (20) $$

2. From case (1) of Proposition 3, it follows that $T_\eta - \frac{T_f}{4} < \zeta < T_\eta$ when $\hat{I}_{\zeta,0} > 0$ and $\hat{J}_{\zeta,0} > 0$.

3. According to the region of $\zeta$, we can select the appropriate equations from (12) and (17) as

$$ Y_0[n] = (2\Psi - N_f)I_{\zeta,a_{n-1}} + M_0[n] \qquad (21) $$

$$ Y_1[n] = (2\Psi - N_f)J_{\zeta,a_{n-1}} + M_1[n]. \qquad (22) $$

Thus, up to additive constants and sign, the log-likelihood function $\ln p(y; \Psi, I_{\zeta,a_{n-1}}, J_{\zeta,a_{n-1}})$ is

$$ \sum_{n=N_0}^{N-1} \left\{ [Y_0[n] - (2\Psi - N_f) I_{\zeta,a_{n-1}}]^2 + [Y_1[n] - (2\Psi - N_f) J_{\zeta,a_{n-1}}]^2 \right\}. $$

Minimizing this expression with respect to $\Psi$ yields the ML estimate $\hat{\Psi} = \frac{1}{A}\sum_{n=N_0}^{N-1}[Z_n + N_f(I_{\zeta,0}^2 + J_{\zeta,0}^2)]$.
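These steps can be prototyped directly. The sketch below is our own illustration (the function name is ours, and it assumes a training sequence for which $I_{\zeta,a_{n-1}} = I_{\zeta,0}$ and $J_{\zeta,a_{n-1}} = J_{\zeta,0}$ for every $n$); it implements (20) and the first case of Proposition 4:

```python
import numpy as np

def ml_timing_estimate(Y0, Y1, N0, N_f):
    """Estimate Psi for the case I_hat > 0, J_hat > 0 (first case of Prop. 4)."""
    Y0, Y1 = np.asarray(Y0, float), np.asarray(Y1, float)
    N = len(Y0)
    # Step 1: ML estimates of I_{zeta,0} and J_{zeta,0}, cf. (20).
    I_hat = Y0[1:N0].sum() / ((N0 - 1) * N_f)
    J_hat = Y1[1:N0].sum() / ((N0 - 1) * N_f)
    # Step 3: Psi_hat = (1/A) * sum_n [Z_n + N_f (I^2 + J^2)], with
    # Z_n = Y0[n]*I_hat + Y1[n]*J_hat and A = 2 (N - N0)(I^2 + J^2).
    Z = Y0[N0:] * I_hat + Y1[N0:] * J_hat
    A = 2.0 * (N - N0) * (I_hat**2 + J_hat**2)
    Psi_hat = (Z + N_f * (I_hat**2 + J_hat**2)).sum() / A
    return Psi_hat, I_hat, J_hat
```

The timing offset estimate is then obtained as $\hat{n}_f = [\hat{\Psi}]$, i.e., by rounding the returned `Psi_hat` to the nearest integer.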

### 3.5 Simulation

In this section, computer simulations are performed. We use the second-order derivative of the Gaussian pulse to represent the UWB pulse. The propagation channels are generated

---PAGE_BREAK---

Fig. 2. MSE performance under CM2 with $d = 4$ m.

Fig. 3. BER performance under CM2 with $d = 4$ m.

by the channel model CM2 described in (Foerster, 2003). The other parameters are selected as follows: $T_p = 1$ ns, $N_f = 25$, $T_f = 100$ ns, $T_v = T_f/10$ and the transmission distance $d = 4$ m. In all the simulations, we assume that $n_f$ and $\zeta$ are uniformly distributed over $[0, N_f - 1]$ and $[0, T_f]$, respectively. To evaluate the effect of the estimate $\hat{n}_f$ on the bit-error-rate (BER) performance, we assume there is an optimal channel estimator at the receiver that provides the perfect template for tracking and coherent demodulation. The signal-to-noise ratios (SNRs)

---PAGE_BREAK---

in all figures are computed as $E_s/\sigma_n^2$, where $E_s$ is the energy spread over each symbol at the transmitter and $\sigma_n^2$ is the power spectral density of the noise.

Fig. 2 presents the normalized mean-square error (MSE, $E\{(|\hat{n}_f - n_f|/N_f)^2\}$) of the proposed algorithm in contrast to the approach using a noisy template proposed in (Tian & Giannakis, 2005). The figure shows that the proposed algorithm (blue curve) outperforms that in (Tian & Giannakis, 2005) (red curve) when the SNR is larger than 10 dB. For both algorithms, the acquisition performance improves with the length of the training sequence $N$, as illustrated by the performance gap between $N = 12$ and $N = 30$. Fig. 3 illustrates the BER performance for both algorithms. The BERs corresponding to perfect timing (green curve) and no timing (magenta curve) are also plotted for comparison.

## 4. Low sampling rate channel estimation algorithms

Channel estimation is essential for UWB systems to effectively capture the signal energy spread over multiple paths and boost the received signal-to-noise ratio (SNR). Low sampling rate channel estimation algorithms have the merit of greatly lowering the implementation complexity and cost. However, their development is extremely challenging, primarily because the propagation channels of UWB signals are frequency selective and far more complex than traditional radio transmission channels.

Classical approaches to this problem use the maximum likelihood (ML) method or approximate the solution of the ML problem. Their main drawback is that the computational complexity can be prohibitive, since the number of parameters to be estimated in a realistic UWB channel is very high (Lottici et al., 2002). Other reported approaches are minimum mean-squared error schemes, which reduce the complexity at the cost of performance (Yang & Giannakis, 2004). Furthermore, Nyquist-rate sampling of the received UWB signal is not feasible with state-of-the-art analog-to-digital converter (ADC) technology. Since UWB channels exhibit clusters (Cramer et al., 2002), a cluster-based channel estimation method is proposed in (Carbonelli & Mitra, 2007). Other methods proposed for UWB channel estimation, such as the subspace approach (Xu & Liu, 2003), the first-order cyclostationarity-based method (Wang & Yang, 2004) and compressed sensing based methods (Paredes et al., 2007; Shi et al., 2010), are too complex to be implemented in actual systems.

In this section, we develop a novel optimal data-aided channel estimation scheme that relies only on frame-level sampling rate data to derive channel parameter estimates from the received waveform. To begin with, we introduce a set of specially devised templates for the channel estimation. The received signal is separately correlated with these pre-devised templates and sampled at frame-level rate. We show that each frame-level rate sample of any given template can be decomposed into the sum of a frequency-domain channel parameter and a noise sample. The computation of the time-domain channel parameter estimates proceeds in two steps. In step one, for each fixed template, we use the samples gathered with this template and the maximum likelihood criterion to compute the ML estimates of the frequency-domain channel parameters of these samples. In step two, from the computed frequency-domain channel parameters, we compute the time-domain channel parameters via the inverse fast Fourier transform (IFFT). As demonstrated in the simulation example,

---PAGE_BREAK---

Fig. 4. The block diagram of the channel estimation scheme.

when the training time is fixed, using more templates for the channel estimation yields better BER performance.

### 4.1 The signal model

During the channel estimation process, a training sequence is transmitted. Each UWB symbol is transmitted over a time interval of $T_s$ seconds that is subdivided into $N_f$ equal-size frame intervals of length $T_f$, i.e., $T_s = N_f T_f$. A frame is divided into $N_c$ chips, each of duration $T_c$, i.e., $T_f = N_c T_c$. A single frame contains exactly one data-modulated ultrashort pulse $p(t)$ (a so-called monocycle) of duration $T_p$ satisfying $T_p \le T_c$. The pulse $p(t)$, normalized to satisfy $\int p^2(t) dt = 1$, can be a Gaussian monocycle, a Rayleigh monocycle or another standard pulse. The waveform of the training sequence can then be written as

$$s(t) = \sqrt{E_f} \sum_{n=0}^{N_s-1} \sum_{j=0}^{N_f-1} b_n p(t - nT_s - jT_f) \quad (23)$$

where $E_f$ represents the energy spread over one frame and $N_s$ is the length of the training sequence; $b_n$ denotes the data, which is equal to 1 during the training phase.

Our goal is to derive an estimate of the channel parameter sequence $\mathbf{h} = [h_0, h_1, \dots, h_{L-1}]$. Since $L$ is assumed unknown, we define an $N_c$-length sequence $\mathbf{p}$ as

$$\mathbf{p} = [h_0, h_1, \dots, h_{L-1}, h_L, h_{L+1}, \dots, h_{N_c-1}] \quad (24)$$

where $h_l = 0$ for $l \ge L$. The transmitted signal propagates through an $L$-path fading channel as shown in (3). Thus the received signal is

$$r(t) = \sqrt{E_f} \sum_{n=0}^{N_s-1} \sum_{j=0}^{N_f-1} \sum_{l=0}^{N_c-1} h_l p(t - nT_s - jT_f - lT_c) + n(t) \quad (25)$$

where $n(t)$ is zero-mean additive white Gaussian noise (AWGN) with double-sided power spectral density $\sigma_n^2/2$.
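As a toy illustration of the signal model (23)-(25), the following numpy sketch builds a chip-rate version of the training and received waveforms. All parameter values here are our own small choices (not the chapter's simulation values), and we take $T_p = T_c$ so that everything lives on the chip grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not the chapter's simulation values.
N_s, N_f, N_c, L, E_f = 2, 4, 8, 3, 1.0

# Chip-spaced channel taps, zero-padded to length N_c as in (24).
p_taps = np.zeros(N_c)
p_taps[:L] = rng.standard_normal(L)

# Training waveform (23) on the chip grid: one pulse at the first chip
# of every frame, all training symbols b_n = 1.
s = np.zeros(N_s * N_f * N_c)
s[::N_c] = np.sqrt(E_f)

# Received chip-rate signal (25): multipath convolution plus AWGN.
sigma = 0.01
r = np.convolve(s, p_taps)[: len(s)] + sigma * rng.standard_normal(len(s))
```

With one pulse per frame and frames longer than the delay spread, each frame of `r` carries a noisy copy of the tap vector, which is what the template correlations below exploit.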

---PAGE_BREAK---

### 4.2 The choice of templates

In this section, a novel channel estimation method that relies on frame-level samples is derived. As shown in Fig. 4, the received signal (25) is separately correlated with the pre-devised templates $W_0(t), W_1(t), \dots, W_S(t)$ and sampled at $nT_m$, where the sampling period $T_m$ is on the order of $T_f$. Let $Y_i[n]$ denote the $n$-th sample corresponding to the template $W_i(t)$, that is,

$$ Y_i[n] = \int_0^{T_m} r(t + nT_m)W_i(t)dt \quad (26) $$

with $i = 0, 1, \dots, S$. Utilizing these samples, we derive the ML estimate of the channel parameter sequence $\mathbf{p}$ in (24).

First, we introduce the set of $S+1$ templates used for the channel estimation. Assuming that $N_c$, the number of chips $T_c$ in each frame, is an even number, $S$ is chosen as a positive integer factor of $N_c/2$, so that $N_c = 2SM$ with $M = N_c/(2S)$ also a positive integer factor of $N_c/2$. The $i$-th template is defined as

$$ W_i(t) = \sqrt{E_f} \sum_{k=0}^{N_o-1} \omega_{N_o}^{ik} [p(t - kT_c) + p(t - T_f - kT_c)] \quad (27) $$

with $N_o = 2S = N_c/M$, $\omega_{N_o}^{ik} = e^{-j\frac{2\pi ik}{N_o}}$ and $i \in \{0, 1, \dots, S\}$. The duration of each template $W_i(t)$ is equal to the sampling period $T_m$, which can be expressed as

$$ T_m = (N_c + N_o)T_c = T_f + N_o T_c. \quad (28) $$

### 4.3 The computation of the channel parameter sequence p

In this section, we derive the channel estimation scheme that relies only on frame-level sampling rate data. To begin with, let us introduce some notation. Recalling the equation $N_o = N_c/M$ following (27), we divide the $N_c$-length sequence $\mathbf{p}$ into $M$ blocks, each of size $N_o$. Therefore, equation (24) becomes

$$ \mathbf{p} = [\mathbf{h}_0, \mathbf{h}_1, \dots, \mathbf{h}_m, \dots, \mathbf{h}_{M-1}] \quad (29) $$

where the $m$-th block $\mathbf{h}_m$ is defined as

$$ \mathbf{h}_m = [h_{mN_o}, h_{mN_o+1}, \dots, h_{mN_o+N_o-1}] \quad (30) $$

with $m \in \{0, 1, \dots, M-1\}$. Let $\mathbf{F}_i$ denote the $N_o$-length coefficient sequence of the $i$-th template $W_i(t)$ in (27), i.e.,

$$ \mathbf{F}_i = [\omega_{N_o}^0, \omega_{N_o}^i, \omega_{N_o}^{2i}, \dots, \omega_{N_o}^{(N_o-1)i}] . \quad (31) $$

The discrete Fourier transform (DFT) of the $N_o$-length sequence $\mathbf{h}_m = [h_{mN_o}, h_{mN_o+1}, \dots, h_{mN_o+N_o-1}]$ is denoted as

$$ \mathbf{H}_m = [H_m^0, H_m^1, \dots, H_m^i, \dots, H_m^{N_o-1}] \quad (32) $$

---PAGE_BREAK---

where the frequency-domain channel parameter $H_m^i$ is

$$ H_m^i = \mathbf{F}_i \mathbf{h}_m^T = \sum_{k=0}^{N_o-1} \omega_{N_o}^{ik} h_{mN_o+k} \quad (33) $$

with $m \in \{0, 1, \dots, M-1\}$ and $i \in \{0, 1, \dots, S\}$.
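Note that the parameters $H_m^i$ of (33) are exactly the $N_o$-point DFT of the block $\mathbf{h}_m$; numpy's `fft` uses the same kernel $e^{-j 2\pi ik/N_o}$, which can be verified numerically (a sanity check of our own, not part of the chapter):

```python
import numpy as np

# (33) computed directly from its definition F_i h_m^T ...
rng = np.random.default_rng(1)
N_o = 8
h_m = rng.standard_normal(N_o)          # one real-valued block of p

i, k = 3, np.arange(N_o)
H_direct = np.sum(np.exp(-2j * np.pi * i * k / N_o) * h_m)

# ... agrees with the i-th bin of numpy's N_o-point DFT.
H_fft = np.fft.fft(h_m)[i]
assert np.allclose(H_direct, H_fft)
```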

Our channel estimation algorithm proceeds in the following two steps.

**Step 1:** Utilizing the set of frame-level samples $\{Y_i[n]\}_{n=0}^{N-1}$ generated from the $i$-th template, we compute the ML estimates of the frequency-domain channel parameters $\{H_m^i\}_{m=0}^{M-1}$ for $i \in \{0, 1, \dots, S\}$. To do this, we show that the samples $\{Y_i[n]\}_{n=0}^{N-1}$ from the $i$-th template have the following decomposition.

**Proposition 1:** Every sample in the set $\{Y_i[n]\}_{n=0}^{N-1}$ can be decomposed into the sum of a frequency-domain channel parameter and a noise sample, that is,

$$ \left\{ \begin{array}{l} Y_i[qM] = 2E_f H_0^i + Z_i[qM] \\ Y_i[qM+1] = 2E_f H_1^i + Z_i[qM+1] \\ \vdots \\ Y_i[qM+m] = 2E_f H_m^i + Z_i[qM+m] \\ \vdots \\ Y_i[qM+M-1] = 2E_f H_{M-1}^i + Z_i[qM+M-1] \end{array} \right. \qquad (34) $$

where $Z_i[n]$ represents the noise sample. The parameter $q$ belongs to the set $\{0, 1, \dots, Q-1\}$ with $Q = \lfloor \frac{N}{M} \rfloor$.

Applying ML estimation to the $(m+1)$-th equation in (34) for $q = 0, 1, \dots, Q-1$, we can compute the ML estimate $\hat{H}_m^i$ of the frequency-domain channel parameter $H_m^i$ as

$$ \hat{H}_m^i = \frac{1}{2E_f Q} \sum_{q=0}^{Q-1} Y_i[qM+m] \quad (35) $$

with $m \in \{0, 1, \dots, M-1\}$ and $i \in \{0, 1, \dots, S\}$.
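Step 1 is thus a plain average over the index $q$. A minimal sketch (the function name and argument layout are our own choices):

```python
import numpy as np

def freq_domain_estimates(Y_i, M, E_f):
    """ML estimate (35): average the samples of template i over q = 0..Q-1.

    Y_i is the frame-level sample sequence {Y_i[n]} for one template; sample
    Y_i[qM + m] carries the parameter H_m^i, so averaging over q and dividing
    by 2*E_f recovers each H_m^i.
    """
    Q = len(Y_i) // M
    Y = np.asarray(Y_i[: Q * M]).reshape(Q, M)   # row q holds Y_i[qM..qM+M-1]
    return Y.mean(axis=0) / (2.0 * E_f)          # H_hat[m], m = 0..M-1
```

Averaging over $Q$ independent samples reduces the noise variance by a factor of $Q$, so longer training sequences directly improve the estimates.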

**Step 2:** Utilizing the frequency-domain channel parameter estimates $\{\hat{H}_m^i\}_{i=0}^S$ from Step 1, we derive the estimate of the time-domain channel sequence $\mathbf{h}_m$ for $m \in \{0, 1, \dots, M-1\}$. Since the time-domain channel parameter sequence $\mathbf{h}_m = [h_{mN_o} \ h_{mN_o+1} \ \dots \ h_{mN_o+N_o-1}]$ is real-valued, its DFT exhibits the conjugate symmetry

$$ H_m^{N_o-i} = (H_m^i)^* \quad (36) $$

with $i \in \{0, 1, \dots, S\}$ and $S = N_o/2$.

Utilizing equation (36), we obtain the estimate of the $N_o$-point DFT of $\mathbf{h}_m$ as

$$ \hat{\mathbf{H}}_m = [\hat{H}_m^0, \hat{H}_m^1, \dots, \hat{H}_m^S, (\hat{H}_m^{S-1})^*, \dots, (\hat{H}_m^2)^*, (\hat{H}_m^1)^*] \quad (37) $$

---PAGE_BREAK---

The estimate $\hat{\mathbf{h}}_m$ of the time-domain channel parameters can then be computed via an $N_o$-point IFFT. In view of equation (29), the estimated channel parameter sequence $\hat{\mathbf{p}}$ of (24) is given by

$$ \hat{\mathbf{p}} = [\hat{\mathbf{h}}_0, \hat{\mathbf{h}}_1, \dots, \hat{\mathbf{h}}_{M-1}]. \quad (38) $$
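Step 2 can be sketched as follows: rebuild the full length-$N_o$ spectrum from $\hat{H}_m^0, \dots, \hat{H}_m^S$ via the conjugate symmetry (36)-(37), then invert it. This helper is our own illustration; numpy's `ifft` matches the DFT convention used in (33):

```python
import numpy as np

def reconstruct_block(H_half):
    """Rebuild h_m from the estimates H_m^0..H_m^S, cf. (36)-(38).

    H_half has length S+1; the full spectrum has length N_o = 2S.
    """
    H_half = np.asarray(H_half)
    S = len(H_half) - 1
    # (37): [H^0, ..., H^S, (H^{S-1})*, ..., (H^1)*]
    H_full = np.concatenate([H_half, np.conj(H_half[S - 1:0:-1])])
    h_m = np.fft.ifft(H_full)
    return h_m.real                     # h_m is real-valued by assumption
```

Taking the real part discards the residual imaginary component caused by noise in the estimates $\hat{H}_m^i$; with exact spectra the IFFT output is already real.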

Fig. 5. MSE performance of the algorithm proposed in (Wang & Ge, 2007) and the proposed algorithm with different numbers of templates ($S = 4, 8, 16$), when the length of the training sequence is $N_s = 30$.

### 4.4 Simulation

In this section, computer simulations are performed to test the proposed algorithm. The propagation channels are generated by the channel model CM4 described in (Foerster, 2003). We choose the second-order derivative of the Gaussian pulse as the transmitted pulse, with duration $T_p = 1$ ns. The other parameters are selected as follows: $T_f = 64$ ns, $T_c = 1$ ns, $N_c = 64$ and $N_f = 24$.

Fig. 5 presents the normalized mean-square error (MSE) of our channel estimation algorithm with different numbers of templates ($S = 4, 8, 16$) when the length of the training sequence is $N_s = 30$. As a comparison, we also plot the MSE curve of the approach in (Wang & Ge, 2007), which needs chip-level sampling. Fig. 6 illustrates the bit-error-rate (BER) performance of both algorithms. The BER corresponding to perfect channel estimation (Perfect CE) is also plotted for comparison. As these figures show, the MSE and BER performances of our algorithm improve as the number of templates increases. In particular, the MSE and BER performances of our algorithm, which relies only on the frame-level sampling period $T_f = 64$ ns, are comparable to those of the approach proposed in (Wang & Ge, 2007), which requires the chip-level sampling period $T_c = 1$ ns.

---PAGE_BREAK---

Fig. 6. BER performance of Perfect CE, the algorithm proposed in (Wang & Ge, 2007) and the proposed algorithm with different numbers of templates ($S = 4, 8, 16$), when the length of the training sequence is $N_s = 30$.

## 5. Conclusion

In this chapter, we have focused on low sampling rate timing acquisition schemes and channel estimation algorithms for UWB signals. First, we developed a novel optimal data-aided (DA) timing offset estimator that utilizes only symbol-rate samples to achieve timing acquisition at the scale of the channel delay spread. For this purpose, we exploited the statistical properties of the power delay profile of the received signals to design a set of templates that ensure effective multipath energy capture at any time. Second, we proposed a novel optimal data-aided channel estimation scheme that relies only on frame-level sampling rate data to derive channel parameter estimates from the received waveform.

## 6. References

Karaoguz, J. (2001). High-rate wireless personal area networks, *IEEE Commun. Mag.*, vol. 39, pp. 96-102.

Lovelace, W. M. & Townsend, J. K. (2002). The effect of timing jitter and tracking on the performance of impulse radio, *IEEE J. Sel. Areas Commun.*, vol. 20, no. 9, pp. 1646-1651.

Tian, Z. & Giannakis, G. B. (2005). BER sensitivity to mistiming in ultrawideband impulse radios - part I: modeling, *IEEE Trans. Signal Processing*, vol. 53, no. 4, pp. 1550-1560.

Tian, Z. & Giannakis, G. B. (2005). A GLRT approach to data-aided timing acquisition in UWB radios - Part I: Algorithms, *IEEE Trans. Wireless Commun.*, vol. 4, no. 6, pp. 2956-2967.

Yang, L. & Giannakis, G. B. (2005). Timing ultra-wideband signals with dirty templates, *IEEE Trans. on Commun.*, vol. 53, pp. 1952-1963.

---PAGE_BREAK---

Carbonelli, C. & Mengali, U. (2006). Synchronization algorithms for UWB signals, *IEEE Trans. on Commun.*, vol. 54, no. 2, pp. 329-338.

He, N. & Tepedelenlioglu, C. (2008). Joint pulse and symbol level acquisition of UWB receivers, *IEEE Trans. on Wireless Commun.*, vol. 7, no. 1, pp. 6-14.

Carbonelli, C. & Mengali, U. (2005). Low complexity synchronization for UWB noncoherent receivers, in *Proc. 2005 Vehicular Technology Conf.*, vol. 2, pp. 1350-1354.

Furusawa, K.; Sasaki, M.; Hioki, J. & Itami, M. (2008). Schemes of optimization of energy detection receivers for UWB-IR communication systems under different channel model, *IEEE International Conference on Ultra-Wideband*, pp. 157-160, Leibniz Universität Hannover, Germany.

Cheng, X. & Guan, Y. (2008). Effects of synchronization errors on energy detection of UWB signals, *IEEE International Conference on Ultra-Wideband*, pp. 161-164, Leibniz Universität Hannover, Germany.

Sasaki, M.; Ohno, J.; Ohno, H.; Ohno, K. & Itami, M. (2010). A study on multi-user access in energy detection UWB-IR receiver, *2010 IEEE 11th International Symposium on Spread Spectrum Techniques and Applications (ISSSTA)*, pp. 141-146, Taichung, Taiwan.

Xu, W.; Zhao, J. & Wang, D. (2009). A frame-level timing acquisition scheme of ultra-wideband signals using multi-templates, *The 6th International Symposium on Wireless Communication Systems*, pp. 61-65, Tuscany, Italy.

Foerster, J. (2003). Channel modeling sub-committee report (final), *IEEE P802.15-02/490*.

Stoica, L.; Rabbachin, A.; Repo, H.; Tiuraniemi, T. & Oppermann, I. (2005). An ultra-wideband system architecture for tag based wireless sensor networks, *IEEE Trans. on Veh. Technol.*, vol. 54, no. 5, pp. 1632-1645.

Turin, G. L. (1980). Introduction to spread-spectrum antimultipath techniques and their application to urban digital radio, *Proc. IEEE*, vol. 68, pp. 328-353.

Lottici, V.; D'Andrea, A. N. & Mengali, U. (2002). Channel estimation for ultra-wideband communications, *IEEE J. Select. Areas Commun.*, vol. 20, no. 9, pp. 1638-1645.

Yang, L. & Giannakis, G. B. (2004). Optimal pilot waveform assisted modulation for ultra-wideband communications, *IEEE Trans. Wireless Commun.*, vol. 3, no. 4, pp. 1236-1249.

Cramer, R. J. M.; Scholtz, R. A. & Win, M. Z. (2002). Evaluation of an ultra wideband propagation channel, *IEEE Trans. Antennas Propagat.*, vol. 50, no. 5.

Carbonelli, C. & Mitra, U. (2007). Clustered ML channel estimation for ultra-wideband signals, *IEEE Trans. Wireless Commun.*, vol. 6, no. 7, pp. 2412-2416.

Paredes, J. L.; Arce, G. R. & Wang, Z. (2007). Ultra-wideband compressed sensing: channel estimation, *IEEE Journal of Selected Topics in Signal Processing*, vol. 1, no. 3, pp. 383-395.

Shi, L.; Zhou, Z.; Tang, L.; Yao, H. & Zhang, J. (2010). Ultra-wideband channel estimation based on Bayesian compressive sensing, *2010 International Symposium on Communications and Information Technologies (ISCIT)*, pp. 779-782, Tokyo, Japan.

Wang, X. & Ge, H. (2007). On the CRLB and low-complexity channel estimation for UWB communications, *IEEE 41st Annual Conference on Information Sciences and Systems*, pp. 151-153, Baltimore.

---PAGE_BREAK---

Xu, Z. & Liu, P. (2003). A subspace approach to blind estimation of ultrawideband channels, in *Proc. IEEE Thirty-Seventh Asilomar Conference on Signals, Systems & Computers*, vol. 2, pp. 1249-1253.

Wang, Z. & Yang, X. (2004). Ultra wide-band communications with blind channel estimation based on first-order statistics, in *Proc. IEEE (ICASSP-04)*, vol. 4, pp. iv-529 - iv-532, Montreal, Canada.

---PAGE_BREAK---

ULTRA WIDEBAND COMMUNICATIONS

NOVEL TRENDS - SYSTEM, ARCHITECTURE AND IMPLEMENTATION

Edited by Mohammad A. Matin

Ultra Wideband Communications: Novel Trends - System, Architecture and Implementation

Edited by Dr. Mohammad Matin

ISBN 978-953-307-461-0

Hard cover, 348 pages

Publisher: InTech

Published online 27 July 2011; published in print edition July 2011
This book addresses several challenges in ensuring the success of UWB technologies and covers research areas including low cost UWB transceivers, low noise amplifiers (LNA), ADC architectures, UWB filters, and high power UWB amplifiers. It is believed that this book serves as a comprehensive reference for graduate students in UWB technologies.

## How to reference

In order to correctly reference this scholarly work, feel free to copy and paste the following:

Wei Xu and Jiaxiang Zhao (2011). Low Sampling Rate Time Acquisition Schemes and Channel Estimation Algorithms of Ultra-Wideband Signals, Ultra Wideband Communications: Novel Trends - System, Architecture and Implementation, Dr. Mohammad Matin (Ed.), ISBN: 978-953-307-461-0, InTech, Available from: http://www.intechopen.com/books/ultra-wideband-communications-novel-trends-system-architecture-and-implementation/low-sampling-rate-time-acquisition-schemes-and-channel-estimation-algorithms-of-ultra-wideband-signa

---PAGE_BREAK---
© 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the [Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License](http://creativecommons.org/licenses/by-nc-nd/3.0/), which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
samples_new/texts_merged/1973835.md
ADDED
The diff for this file is too large to render.
samples_new/texts_merged/199837.md
ADDED
---PAGE_BREAK---

POLYNOMIAL SYSTEMS, H-BASES, AND AN APPLICATION FROM KINEMATIC TRANSFORMS

Tomas Sauer and Dominik Wagenfuehr

**Abstract.** We review some algebraic methods to solve systems of polynomial equations and illustrate these methods with a real-world problem that comes from computing kinematic transforms in robotics.

*Keywords:* Gröbner basis, H-basis, polynomial system, kinematic transform

*AMS classification:* 65H10, 13P10, 70B15

§1. Introduction

Polynomial systems of equations and the structure of their solutions play a crucial role in many fields of theoretical and applied mathematics. The importance of polynomial equations in applications is often due to the need to determine locations of points from given Euclidean distances, which obviously leads to quadratic equations.

The mathematical formulation is as follows: Suppose we are given a finite set $F \subset \mathbb{K}[x] = \mathbb{K}[x_1, \dots, x_n]$ of polynomials in the $n$ variables $x_1, \dots, x_n$ with coefficients in the field $\mathbb{K}$, where usually $\mathbb{K} = \mathbb{Q}, \mathbb{R}, \mathbb{C}$, i.e., the rational, real or complex numbers. Given the equations $F$, the goal is to find the solutions $X \subset \overline{\mathbb{K}}^n$ of the system $F(X) = 0$ in the algebraic closure $\overline{\mathbb{K}}$ of $\mathbb{K}$, that is,

$$ X = \{ x \in \overline{\mathbb{K}}^n : f(x) = 0, f \in F \}. \qquad (1) $$

Note that there are two major differences to the “standard approach” for solving nonlinear equations by means of Newton’s method: the number of equations, $\#F$, need not coincide with the number of variables, $n$, and we are not interested in a single solution, but in the set of all solutions of $F(X) = 0$.

The equations $f(X) = 0, f \in F$, trivially remain valid if each of them is multiplied by an arbitrary polynomial $q_f \in \mathbb{K}[x]$ and if any such modified equations are added. Hence,

$$ F(X) = 0 \Leftrightarrow \langle F \rangle(X) = 0, \quad \langle F \rangle = \left\{ \sum_{f \in F} q_f f : q_f \in \mathbb{K}[x] \right\}, \quad (2) $$

where $\langle F \rangle$ is the *ideal generated by* $F$; recall that an ideal $\mathcal{I}$ is a subset of $\mathbb{K}[x]$ which is closed under addition and multiplication by arbitrary polynomials, cf. [4]. A subset $G$ of an ideal $\mathcal{I}$ is called a *basis* for the ideal $\mathcal{I}$ if $G$ generates the ideal, i.e., $\mathcal{I} = \langle G \rangle$. With this terminology

---PAGE_BREAK---

at hand, we can rephrase (2) as saying that the solution $X$ depends only on the ideal $\mathcal{I}$, but not on the individual basis $F$. This simple observation is the fundamental idea behind all the algebraic methods to solve polynomial systems by interpreting the original equations as a basis of an ideal and then computing another basis for the same ideal from which the solution of the polynomial system is more easily accessible. In other words: Algebraic methods transform a given system of equations into a simpler or more useful form.

§2. Gröbner bases, H-bases and eigenvalues

Gröbner bases as well as H-bases are special ideal bases which provide representations of minimal degree, where these two types of bases differ by being related to different notions of degree. For Gröbner bases, we need the concept of a term order "<" on $\mathbb{N}_0^n$, that is, a well-ordering on $\mathbb{N}_0^n$ which is compatible with addition, cf. [4]. With respect to this order, any polynomial

$$f(x) = \sum_{\alpha \in \mathbb{N}_0^n} f_\alpha x^\alpha, \quad f_\alpha \in \mathbb{K}, \quad \#\{\alpha : f_\alpha \neq 0\} < \infty,$$

has a maximal nonzero coefficient $f_\alpha$, and this $\alpha$ is called the *(multi)degree* of the polynomial, while $f_\alpha x^\alpha$ is usually named the *leading term* of $f$. For H-bases, on the other hand, the degree is not a multiindex, but a number, namely the maximal length $|\alpha| = \alpha_1 + \cdots + \alpha_n$ of the indices of nonzero coefficients – the usual *total degree*. Nevertheless, we will write the degree of a polynomial $f$ as $\delta(f)$, regardless of whether $\delta(f) \in \mathbb{N}_0^n$ or $\delta(f) \in \mathbb{N}_0$; indeed, there is a joint framework in terms of graded rings, see [5], and [10] for the application in ideal bases and interpolation. A finite set $H \subset \mathbb{K}[x]$ is called a *Gröbner basis* or an *H-basis*, depending on whether $\delta$ is based on a term order or on the total degree, if any $f \in \langle H \rangle$ can be written as

$$f = \sum_{h \in H} f_h h, \quad f_h \in \mathbb{K}[x], \quad \delta(f) \ge \delta(f_h h), \quad h \in H. \tag{3}$$
|
| 57 |
+
|
| 58 |
+
The crucial point of Gröbner bases and H-bases is the degree constraint in (3) which helps to avoid a certain redundancy: Assume that one term in the sum on the right hand side were of higher degree than $f$, then there must be at least a second term of the same or higher degree compensating its leading term, and the representation would be redundant, all the terms of degree higher than that of $f$ unneeded. But the main practical advantage of Gröbner bases and the main reason for their development in [2] is the fact that they permit the *algorithmic computation* of a unique remainder $r$,
|
| 59 |
+
|
| 60 |
+
$$f = \sum_{h \in H} f_h h + r. \quad (4)$$
|
| 61 |
+
|
| 62 |
+
This can be extended to the grading by total degree [6, 9] and even to arbitrary gradings in
|
| 63 |
+
such a way that the remainder $r$ depends only on $\langle H \rangle$ and the parameters of the grading,
|
| 64 |
+
see [11] for details. Thus, we have a method to compute a normal form $\nu_{\langle H \rangle}$ modulo $\langle H \rangle$
|
| 65 |
+
and to efficiently perform arithmetic in the quotient ring $\mathcal{P} := \mathbb{K}[x]/\langle H \rangle$. Moreover, $\mathcal{P}$ is a
|
| 66 |
+
---PAGE_BREAK---
|
| 67 |
+
|
| 68 |
+
finite dimensional space if and only if the ideal $\mathcal{I} = \langle H \rangle$ has dimension zero which is in turn equivalent to a finite number of solutions $X$ for $H(X) = 0$.
|
| 69 |
+
|
| 70 |
+
So here is the first part of the algebraic simplification: Starting with a finite set $F$ of polynomial equations, one computes a Gröbner basis or H-basis $H$ for the ideal $\langle F \rangle$ from which it can be decided whether $F(X) = 0$ has no solution (this happens if and only if $1 \in H$), a finite number of solutions or infinitely many solutions. It is even possible, see [4], to determine the dimension of the algebraic variety formed by the solutions. But in this paper let us assume that $X$ were nonempty and finite.
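Both decisions — computing the unique remainder of (4) and detecting unsolvability via $1 \in H$ — can be tried out in any computer algebra system; the following sketch uses Python's sympy on a toy ideal chosen here for brevity (it is not one of the kinematic systems of this paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A toy zero-dimensional ideal: the circle x^2 + y^2 = 1 cut with the line x = y.
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# Normal form: reduce() returns the cofactors f_h and the unique remainder r of (4).
_, r = G.reduce(x**2)
print(r)  # x^2 is congruent to 1/2 modulo the ideal

# Unsolvability test: for an inconsistent system the reduced Groebner basis is [1].
G_bad = sp.groebner([x, x + 1], x, order='lex')
print(G_bad.exprs)  # [1], so the system has no solution
```

The remainder depends only on the ideal and the order, not on the basis one starts from, which is exactly the point of the normal form $\nu_{\langle H \rangle}$.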

The classical method [13], see also [1, 4], to find $X$ is by means of elimination ideals: a purely lexicographic Gröbner basis for a zero dimensional ideal contains some univariate polynomials whose greatest common divisor vanishes at the projections of the common zeros onto this coordinate. Solving and substituting the solutions eliminates the variable, and by continuing this process one can systematically find all the common zeros. Unfortunately, this process has a terrible complexity and can be very sensitive to perturbations of the coefficients, cf. [7], which limits its use in practical applications.
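A small illustration of this elimination process, again with a toy system chosen here for brevity (not taken from this paper), using sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Intersect the circle x^2 + y^2 = 5 with the hyperbola x*y = 2.
G = sp.groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='lex')

# With lex order x > y, the last basis element is univariate in y.
p = G.exprs[-1]          # y**4 - 5*y**2 + 4 = (y**2 - 1)*(y**2 - 4)
roots = sp.solve(p, y)   # the four values +-1, +-2

# Back-substitution into the remaining basis elements recovers x.
solutions = [(sp.solve([g.subs(y, r) for g in G.exprs[:-1]], x)[x], r)
             for r in roots]
print(solutions)  # four points, each satisfying x*y = 2
```

The repeated solve-and-substitute steps are exactly where the numerical sensitivity mentioned above enters.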

There is, however, a different approach, proposed by Möller and Stetter [8, 12], which is based on multiplication tables on the quotient space $\mathcal{P}$. To that end, observe that the multiplication of $f, g \in \mathcal{P}$ is defined as $\nu_{\mathcal{I}}(fg)$ and that for fixed $g \in \mathbb{K}[x]$ the operation

$$f \mapsto M_g(f) := \nu_{\mathcal{I}}(fg)$$

is a linear operator on $\mathcal{P}$ that can be represented with respect to a basis of $\mathcal{P}$ by a matrix $M_g$ – the so-called multiplication table. For $j = 1, \dots, n$, let now $M_j$ denote the multiplication table for the coordinate polynomial $g(x) = x_j$; then the $M_j$ generalize the classical Frobenius companion matrix: they form a commuting family of matrices, have joint eigenvectors, and the respective eigenvalues are the coordinates of the common zeros. Thus, the solutions of $F(X) = 0$ can be found by relying on well-developed methods from Numerical Linear Algebra, and the flexibility of H-bases now offers an approach that changes continuously with the parameters and thus is much less sensitive to perturbations, see again [7] for an example.
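As a minimal sketch of this eigenvalue method, here is a hand-built two-dimensional quotient for the toy ideal $\langle x^2 - 2,\ y - x\rangle$ (chosen for illustration only), whose multiplication tables can be diagonalized numerically:

```python
import numpy as np

# Quotient basis {1, x} for the ideal <x^2 - 2, y - x>:
# x * 1 = x and x * x = x^2 = 2 mod the ideal, so the table of
# multiplication by x (columns = images of the basis elements) is
Mx = np.array([[0.0, 2.0],
               [1.0, 0.0]])
# y = x modulo the ideal, so multiplication by y has the same table.
My = Mx.copy()

# The tables form a commuting family ...
assert np.allclose(Mx @ My, My @ Mx)

# ... and the eigenvalues of Mx are the x-coordinates of the common zeros.
xs = np.sort(np.linalg.eigvals(Mx).real)
print(xs)  # approximately [-1.41421356, 1.41421356], i.e. +-sqrt(2)
```

For larger systems the tables are read off from the normal forms of $x_j \cdot b$ for the monomials $b$ spanning $\mathcal{P}$; the eigenvalue computation itself is standard numerical linear algebra.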

## §3. Practical Examples

In this section we want to apply and illustrate the mathematical concepts of the preceding sections. To that end, we take a look at two slightly different kinematics. First, we consider a simple example in three dimensions to show how we obtain the equations needed as the starting ideal basis for the computation of a Gröbner basis or H-basis. Then we present a kinematic that still appears to be quite simple but leads to monstrous Gröbner bases and H-bases, and we also point out how crucial it is to incorporate "implicit" physical restrictions into the system of equations.

All our kinematics follow the same basic layout: the manipulator (in most cases used for welding or milling) is connected to three (or more) rods of variable length. In the inverse kinematic transform we know the position of the manipulator and want to compute the "machine parameters", i.e., the lengths of the rods, while in the forward kinematic transform the location of the manipulator is to be determined from the lengths of the rods. In both cases the ideal basis which we first must construct is the same, namely the implicit system of equations.

Figure 1: Simple 3D kinematic.

The only difference lies in the choice of which parameters are considered variables to be solved for.

### 3.1. A Simple 3D-Kinematic

The first example is really easy to solve and we only use it to demonstrate how to obtain the equations from which we compute the Gröbner basis or H-basis. First we take a look at the construction. In figure 1 the construction is fixed at three points A₁, A₂ and A₃, which are coplanar with the origin {0} and have the same distance *a* to {0}. Furthermore, the distance between every two of these points is constant. Now it is easy to see how to obtain the equations we need. Consider the projection S of T = (x, y, z) onto the plane generated by A₁, A₂ and A₃. With Pythagoras we have

$$l_i^2 = y^2 + \|A_i - S\|_2^2, \quad i = 1, 2, 3,$$

which directly leads to the set of equations

$$y^2 + x^2 + (a-z)^2 - l_1^2 = 0,$$

$$y^2 + \left(-\frac{\sqrt{3}}{2}a - x\right)^2 + \left(-\frac{1}{2}a - z\right)^2 - l_2^2 = 0,$$

$$y^2 + \left(\frac{\sqrt{3}}{2}a - x\right)^2 + \left(-\frac{1}{2}a - z\right)^2 - l_3^2 = 0.$$

In Maple notation, the ideal is thus generated by $F := [x^2 + y^2 + (a-z)^2 - l_1^2,\; y^2 + (-\frac{\sqrt{3}}{2}a-x)^2 + (-\frac{1}{2}a-z)^2 - l_2^2,\; y^2 + (\frac{\sqrt{3}}{2}a-x)^2 + (-\frac{1}{2}a-z)^2 - l_3^2]$.

Because we used the (squares of the) lengths $l_1, l_2$ and $l_3$ explicitly in our ideal basis, we can give the solution of the inverse kinematic transform directly as

$$l_1 = \sqrt{y^2 + x^2 + (a-z)^2},$$

$$l_2 = \sqrt{y^2 + \left(-\frac{\sqrt{3}}{2}a - x\right)^2 + \left(-\frac{1}{2}a - z\right)^2},$$

$$l_3 = \sqrt{y^2 + \left(\frac{\sqrt{3}}{2}a - x\right)^2 + \left(-\frac{1}{2}a - z\right)^2}.$$

For the forward transform we switch the roles of variables and constants, which are now declared as $x, y, z$ and $a, l_1, l_2, l_3$, respectively. Without further problems we compute an H-basis of $F$ as $H = [9a^2y^2 - 3l_1^2a^2 + l_3^4 - l_3^2l_2^2 + l_2^4 + 9a^4 - l_2^2l_1^2 - 3a^2l_2^2 - 3a^2l_3^2 + l_1^4 - l_1^2l_3^2,\; 6az - l_2^2 + 2l_1^2 - l_3^2,\; 12ax + 2\sqrt{3}l_3^2 - 2\sqrt{3}l_2^2]$ and by means of the multiplication tables of $\mathcal{P}$ and the corresponding eigenvectors we find that

$$x = \frac{\sqrt{3}(l_2^2 - l_3^2)}{6a},$$

$$y = \frac{\sqrt{-l_2^4 + 3l_1^2a^2 - l_3^4 + l_3^2l_2^2 + 3a^2l_3^2 - 9a^4 + l_2^2l_1^2 + 3a^2l_2^2 - l_1^4 + l_1^2l_3^2}}{3a},$$

$$z = \frac{-2l_1^2 + l_2^2 + l_3^2}{6a}.$$

Note that the equations for $x$ and $z$ are significantly simpler than the one for $y$.
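These closed forms are easy to check numerically: pick a point, compute the rod lengths from the inverse transform, and feed them back into the forward formulas. A quick roundtrip in plain Python ($a = 1$ and the test point are arbitrary choices; the index convention $x \propto l_2^2 - l_3^2$ and the branch $y > 0$ are the ones used here):

```python
import math

a = 1.0
x, y, z = 0.2, 1.0, 0.3   # an arbitrary test point with y > 0

# Inverse transform: rod lengths from the position.
l1 = math.sqrt(y**2 + x**2 + (a - z)**2)
l2 = math.sqrt(y**2 + (-math.sqrt(3)/2*a - x)**2 + (-a/2 - z)**2)
l3 = math.sqrt(y**2 + ( math.sqrt(3)/2*a - x)**2 + (-a/2 - z)**2)

# Forward transform: position from the rod lengths.
x_f = math.sqrt(3) * (l2**2 - l3**2) / (6*a)
z_f = (-2*l1**2 + l2**2 + l3**2) / (6*a)
y_f = math.sqrt(-l2**4 + 3*l1**2*a**2 - l3**4 + l3**2*l2**2
                + 3*a**2*l3**2 - 9*a**4 + l2**2*l1**2
                + 3*a**2*l2**2 - l1**4 + l1**2*l3**2) / (3*a)

print(abs(x_f - x), abs(y_f - y), abs(z_f - z))  # all close to zero
```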

Since $y$ appears only quadratically in the H-basis, it follows that together with $(x, y, z)$ also $(x, -y, z)$ is a solution of the system. However, this second solution is impossible in physical reality, because the rods are fixed and cannot cross each other. Unfortunately, it appears impossible to eliminate this unwanted "solution" a priori by adding more equations to the system; in fact, the only way to distinguish between the two solutions is by means of inequalities.

**Remark 1.** It is worthwhile to mention that not for all values of $l_1, l_2$ and $l_3$ does the solution belong to the real domain: in some cases the solution gains an additional imaginary part because the three rods have no common point. Though physically impossible, this is absolutely correct mathematically. Finding additional constraints that eliminate complex solutions would amount to determining the associated *real* ideal.

### 3.2. The realistic problem

Now we want to take a closer look at a slightly extended version of the previous three dimensional kinematic which is used in practical applications. In figure 2 the upper part of the construction equals the one in figure 1, while the lower part differs with the manipulator being attached centrally under a platform which is held and moved by the rods. To make things simpler, we assume that the vertices $B_1, B_2$ and $B_3$ of the platform form an equilateral triangle with distance $b$ between the points and barycenter $T = (x, y, z)$. To stabilize the construction, the platform is also linked to the origin $\{0\}$ by an additional guiding rod which is attached perpendicularly at $T$.

Figure 2: Complex 3D kinematic.

We will not discuss the ideal basis construction in full detail, but we should mention a few facts. First, it is not possible to compute the value of $T$ directly, but it is easily found as the barycenter of the triangle formed by $B_1, B_2, B_3$ once these locations are determined. The lengths $l_1, l_2$ and $l_3$ are just as easy to obtain as before from the equations

$$\|S - A_i\|_2^2 + \|S - B_i\|_2^2 = \|B_i - A_i\|_2^2, \quad i = 1, 2, 3,$$

in which $S$ is the projection of $T$, leading to

$$
\begin{align*}
x_1^2 + (z_1 - a)^2 + y_1^2 &= l_1^2, \\
\left(x_2 + \frac{\sqrt{3}a}{2}\right)^2 + \left(z_2 + \frac{a}{2}\right)^2 + y_2^2 &= l_2^2, \\
\left(x_3 - \frac{\sqrt{3}a}{2}\right)^2 + \left(z_3 + \frac{a}{2}\right)^2 + y_3^2 &= l_3^2.
\end{align*}
$$

As mentioned previously, the triangle is equilateral, giving us the additional three equations

$$
(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 = b^2, \quad 1 \le i < j \le 3.
$$

The orthogonality of the system can finally be described by the inner products $\langle T - B_i, T \rangle = 0$, $i = 1, 2, 3$, which leads to

$$
\begin{align*}
(x - x_1) x + (y - y_1) y + (z - z_1) z &= 0, \\
(x - x_2) x + (y - y_2) y + (z - z_2) z &= 0, \\
(x - x_3) x + (y - y_3) y + (z - z_3) z &= 0.
\end{align*}
$$

Finally we need the fact that the barycenter $T$ of the triangle can be written as the average of its vertices, $T = \frac{B_1+B_2+B_3}{3}$, yielding three more equations

$$
x_1 + x_2 + x_3 = 3x, \quad y_1 + y_2 + y_3 = 3y, \quad z_1 + z_2 + z_3 = 3z.
$$

Together, these twelve equations form our initial ideal basis $F := [x_1^2 + (z_1 - a)^2 + y_1^2 - l_1^2, (x_2 + \frac{\sqrt{3}a}{2})^2 + (z_2 + \frac{a}{2})^2 + y_2^2 - l_2^2, (x_3 - \frac{\sqrt{3}a}{2})^2 + (z_3 + \frac{a}{2})^2 + y_3^2 - l_3^2, (x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2 - b^2, (x_1 - x_3)^2 + (y_1 - y_3)^2 + (z_1 - z_3)^2 - b^2, (x_2 - x_3)^2 + (y_2 - y_3)^2 + (z_2 - z_3)^2 - b^2, (x - x_1)x + (y - y_1)y + (z - z_1)z, (x - x_2)x + (y - y_2)y + (z - z_2)z, (x - x_3)x + (y - y_3)y + (z - z_3)z, (x_1+x_2+x_3)-3x, (y_1+y_2+y_3)-3y, (z_1+z_2+z_3)-3z]$.

This time we begin with the more interesting forward kinematic transformation, and for the moment we are only interested in the dimension of the variety of solutions of $F(X) = 0$. To do so, we substitute some numerical values for the constants $l_1, l_2, l_3, a$ and $b$ and compute a Gröbner basis, which can be done without many problems but takes a little time (a tdeg-ordered basis has no less than 56 elements). Computing the dimension, we surprisingly realize that the ideal is one-dimensional and not zero-dimensional, as it would have to be if we wanted a finite number of solutions and to apply multiplication tables for their computation.

So the first question is why we found a one-dimensional variety. For convenience, we substitute (as before) $\{a = \sqrt{3},\ b = 3,\ l_i = 4 \mid i = 1, 2, 3\}$ (see figure 3); the desired final solution for the platform is

$$
T = (0, 4, 0)^T, \quad B_1 = (0, 4, \sqrt{3})^T, \quad B_2 = \left(-\frac{3}{2}, 4, -\frac{\sqrt{3}}{2}\right)^T, \quad B_3 = \left(\frac{3}{2}, 4, -\frac{\sqrt{3}}{2}\right)^T.
$$

If we rotate the lower triangle counterclockwise around the origin, so that $B_2$ is below $A_3$, $B_1$ below $A_2$ and $B_3$ below $A_1$ (see figure 4), we find that the point $T' = (0, \sqrt{7}, 0)^T$ resulting from

$$
B'_1 = \left(-\frac{3}{2}, \sqrt{7}, -\frac{\sqrt{3}}{2}\right)^T, \quad B'_2 = \left(\frac{3}{2}, \sqrt{7}, -\frac{\sqrt{3}}{2}\right)^T, \quad B'_3 = \left(0, \sqrt{7}, \sqrt{3}\right)^T
$$

is another solution of our polynomial system.

Consequently, we obtain, by simple rotation, a one-parameter family of solutions, and that is precisely the reason why our ideal is not zero-dimensional, so that we have to add more equations to the ideal basis in order to prevent rotations. In such situations it is a good idea to take a closer look at reality, and indeed it turns out that such torsions of the robot are impossible, since the guiding rod is connected to the upper part by a *universal joint* that can only move forwards/backwards and left/right but does not permit rotational movement.
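That the rotated configuration really satisfies all defining conditions can be checked by hand, or with a few lines of Python that verify rod lengths, side lengths, barycenter, and orthogonality for the values $a = \sqrt{3}$, $b = 3$, $l_i = 4$ used above:

```python
import math

s3 = math.sqrt(3)
# Fixed upper points A_1, A_2, A_3 (with a = sqrt(3)).
A = [(0.0, 0.0, s3), (-1.5, 0.0, -s3/2), (1.5, 0.0, -s3/2)]
# Rotated platform vertices B'_1, B'_2, B'_3.
B = [(-1.5, math.sqrt(7), -s3/2), (1.5, math.sqrt(7), -s3/2), (0.0, math.sqrt(7), s3)]
# Their barycenter T' = (B'_1 + B'_2 + B'_3) / 3.
T = tuple(sum(p[k] for p in B) / 3 for k in range(3))

rods  = [math.dist(A[i], B[i]) for i in range(3)]              # all l_i = 4
sides = [math.dist(B[i], B[j]) for i, j in ((0, 1), (0, 2), (1, 2))]  # all b = 3
ortho = [sum((T[k] - Bi[k]) * T[k] for k in range(3)) for Bi in B]    # all 0

print(T, rods, sides, ortho)
```

All twelve polynomial conditions of $F$ hold, confirming that rotation produces a genuinely different solution with a different platform height $\sqrt{7}$.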

Figure 3: Simple Substitution.

Figure 4: Simple Rotated Substitution.

Again, we will not discuss the modeling of the joint in detail, but here is the basic idea behind our approach: if we know the center $T = (x, y, z)$ of the triangle, the position of the outer points $B_1, B_2, B_3$ is fixed. So take a look at the point $S := (0, -\sqrt{x^2+y^2+z^2}, 0)$, which is just the position of $T$ if the kinematic is not moved to any side ("rest position"). We can calculate the angle $\alpha$ between $S$ and $T$, more precisely the term $c_{\alpha} = \cos\alpha$. Let the points $B'_1, B'_2, B'_3$ be the vertices of the lower triangle in this rest position. With the help of rotation matrices and the angle $\alpha$ we can then compute the solution for the points $B_1, B_2, B_3$ explicitly. Doing so adds eleven further equations to our former ideal basis, which makes us end up with $F := [x_1^2 + (z_1-a)^2 + y_1^2 - l_1^2, (x_2 + \frac{\sqrt{3}a}{2})^2 + (z_2 + \frac{a}{2})^2 + y_2^2 - l_2^2, (x_3 - \frac{\sqrt{3}a}{2})^2 + (z_3 + \frac{a}{2})^2 + y_3^2 - l_3^2, (x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2 - b^2, (x_1 - x_3)^2 + (y_1 - y_3)^2 + (z_1 - z_3)^2 - b^2, (x_2 - x_3)^2 + (y_2 - y_3)^2 + (z_2 - z_3)^2 - b^2, (x - x_1)x + (y - y_1)y + (z - z_1)z, (x - x_2)x + (y - y_2)y + (z - z_2)z, (x - x_3)x + (y - y_3)y + (z - z_3)z, (x_1 + x_2 + x_3) - 3x, (y_1 + y_2 + y_3) - 3y, (z_1 + z_2 + z_3) - 3z, \sqrt{3}dl(x-x_1) - bxz, \sqrt{3}dl(y-y_1) - byz, \sqrt{3}l(z-z_1) + bd, \sqrt{3}lby + 2\sqrt{3}dl(x-x_2) + bxz, -\sqrt{3}lbx + 2\sqrt{3}dl(y-y_2) + byz, 2\sqrt{3}l(z-z_2) - bd, -\sqrt{3}lby + 2\sqrt{3}dl(x-x_3) + bxz, \sqrt{3}lbx + 2\sqrt{3}dl(y-y_3) + byz, 2\sqrt{3}l(z-z_3) - bd, x^2 + y^2 - d^2, x^2 + y^2 + z^2 - l^2]$, where $d = \sqrt{x^2+y^2}$ and $l = \sqrt{x^2+y^2+z^2}$.

To solve the inverse kinematic problem, we choose the variables as $x_1, y_1, z_1, x_2, y_2, z_2, x_3, y_3, z_3, l, d, l_1, l_2, l_3$ and the constants as $x, y, z, a, b$. An H-basis $H$ can be computed without difficulty, but most of its elements are very long; among the short ones are, e.g., $z_1 + 2z_3 - 3z$, $z_2 - z_3$, $6z_3 l - 6zl + \sqrt{3}bd$ and $l^2 - x^2 - y^2 - z^2$, while the longest elements lead to expressions such as

$$\left( \frac{6x^6 + 6xa\sqrt{3}y^4 - 6x^2a^2z^2 - 12x^2a^2y^2 + 12x^3a\sqrt{3}y^2 - 24y^2x^2z^2 - 2b^2x^2z^2 - 4b^2x^2y^2 - 6a^2y^2z^2 - 2y^2b^2z^2 - 6x^4a^2 - 12y^4z^2 - 18y^4x^2 - 12x^4z^2 - 18x^4y^2 - 2b^2x^4 - 6a^2y^4 - 2y^4b^2 - 12zax^2y^2 + 6x^5a\sqrt{3} - 3ay^3b\sqrt{3}d - 3ayb\sqrt{3}dz^2 - 3ayb\sqrt{3}dx^2 + 6x^3a\sqrt{3}z^2 + 3laxzbd + lax^2\sqrt{3}bd + lay^2\sqrt{3}bd}{6z^2y^2 + 6y^4 + 12y^2x^2 + 6x^2z^2 + 6x^4} \right)^{1/2},$$

where $d = \sqrt{x^2 + y^2}$ and $l = \sqrt{x^2 + y^2 + z^2}$.

For the forward transform, the variables are $x_1, y_1, z_1, x_2, y_2, z_2, x_3, y_3, z_3, x, y, z, l, d$ and the constants are $l_1, l_2, l_3$. Because both computer algebra systems we used, Singular and Maple, cannot even compute a Gröbner basis for the ideal as it is given in this form, we had to relocate the points $A_1, A_2$ and $A_3$ to the next integer grid value. Furthermore, we substitute $\{a = 2, b = 4, l_i = 3 \mid i = 1, 2, 3\}$, because the symbolic solution is still too complex, thus changing the ideal to $F = [x_1^2+y_1^2+(2-z_1)^2-9, (-2-x_2)^2+y_2^2+(-1-z_2)^2-9, (2-x_3)^2+y_3^2+(-1-z_3)^2-9, (x-x_1)x+(y-y_1)y+(z-z_1)z, (x-x_2)x+(y-y_2)y+(z-z_2)z, (x-x_3)x+(y-y_3)y+(z-z_3)z, (x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2-13, (x_3-x_2)^2+(y_3-y_2)^2+(z_3-z_2)^2-16, (x_1-x_3)^2+(y_1-y_3)^2+(z_1-z_3)^2-13, x_1+x_2+x_3-3x, y_1+y_2+y_3-3y, z_1+z_2+z_3-3z, 2dl(x-x_1)-4xz, 2dl(y-y_1)-4yz, 2l(z-z_1)+4d, 8ly+4dl(x-x_2)+4xz, -8lx+4dl(y-y_2)+4yz, 4l(z-z_2)-4d, -8ly+4dl(x-x_3)+4xz, 8lx+4dl(y-y_3)+4yz, 4l(z-z_3)-4d, x^2+y^2-d^2, x^2+y^2+z^2-l^2]$.

A (tdeg-ordered) Gröbner basis contains no less than 83 elements and therefore cannot be called very small. But at least we can figure out that there are 40 solutions to the equations, and with the algorithm from [3, p. 134ff] we can compute the number of real solutions and discover that there are only four of them – thus, up to symmetry, the desired solution and probably one with crossed rods as before.

In summary one can say that at present the realistic problem is computationally inaccessible, but its terrible complexity originates from "contamination" by the 36 complex solutions which correspond to physically impossible configurations. This is one more major drawback of algebraic methods, which can find the solutions only in the algebraic closure of the original field.
## References

[1] ADAMS, W. W., AND LOUSTAUNAU, P. *An Introduction to Groebner Bases*, vol. 3 of *Graduate Studies in Mathematics*. AMS, 1994.

[2] BUCHBERGER, B. *Ein Algorithmus zum Auffinden der Basiselemente des Restklassenrings nach einem nulldimensionalen Polynomideal*. PhD thesis, Innsbruck, 1965.

[3] COHEN, A. M., CUYPERS, H., AND STERK, M., Eds. *Some Tapas of Computer Algebra*, vol. 4 of *Algorithms and Computations in Mathematics*. Springer, 1999.

[4] COX, D., LITTLE, J., AND O'SHEA, D. *Ideals, Varieties and Algorithms*, 2. ed. Undergraduate Texts in Mathematics. Springer-Verlag, 1996.

[5] EISENBUD, D. *Commutative Algebra with a View Toward Algebraic Geometry*, vol. 150 of *Graduate Texts in Mathematics*. Springer, 1994.

[6] MÖLLER, H. M., AND SAUER, T. H-bases for polynomial interpolation and system solving. *Advances Comput. Math.* **12** (2000), 335–362.

[7] MÖLLER, H. M., AND SAUER, T. H-bases II: Applications to numerical problems. In *Curve and Surface Fitting: Saint-Malo 1999* (2000), A. Cohen, C. Rabut, and L. L. Schumaker, Eds., Vanderbilt University Press, pp. 333–342.

[8] MÖLLER, H. M., AND STETTER, H. J. Multivariate polynomial equations with multiple zeros solved by matrix eigenproblems. *Numer. Math.* **70** (1995), 311–329.

[9] SAUER, T. Gröbner bases, H-bases and interpolation. *Trans. Amer. Math. Soc.* **353** (2001), 2293–2308.

[10] SAUER, T. Ideal bases for graded polynomial rings and applications to interpolation. In *Multivariate Approximation and Interpolation with Applications* (2002), M. Gasca, Ed., vol. 20 of *Monograph. Academia de Ciencias de Zaragoza*, Academia de Ciencias Zaragoza, pp. 97–110.

[11] SAUER, T. Polynomial interpolation in several variables: Lattices, differences, and ideals. In *Multivariate Approximation and Interpolation*, M. Buhmann, W. Hausmann, K. Jetter, W. Schaback, and J. Stöckler, Eds. Elsevier, 2006, pp. 189–228.

[12] STETTER, H. J. Matrix eigenproblems at the heart of polynomial system solving. *SIGSAM Bull.* **30**, 4 (1995), 22–25.

[13] TRINKS, W. Über B. Buchbergers Verfahren, Systeme algebraischer Gleichungen zu lösen. *J. Number Theory* **10** (1978), 475–488.

Tomas Sauer
Lehrstuhl für Numerische Mathematik
Universität Giessen
Heinrich-Buff-Ring 44
D-35392 Gießen, Germany

Dominik Wagenführ
Siemens AG
A&D MC RD 7
Frauenauracher Str. 80
D-91056 Erlangen, Germany

Tomas.Sauer@math.uni-giessen.de Dominik.Wagenfuehr@automation.siemens.co
# A GROUP OF AUTOMORPHISMS OF THE HOMOTOPY GROUPS

HIROSHI UEHARA

It is well known that the fundamental group $\pi_1(X)$ of an arcwise connected topological space $X$ operates on the $n$-th homotopy group $\pi_n(X)$ of $X$ as a group of automorphisms. In this paper I intend to construct geometrically a group $\mathcal{H}(X)$ of automorphisms of $\pi_n(X)$, for every integer $n \ge 1$, which includes a normal subgroup isomorphic to $\pi_1(X)$, so that the factor group of $\mathcal{H}(X)$ by $\pi_1(X)$ is completely determined by some invariant $\mathcal{L}(X)$ of the space $X$. The complete analysis of the operation of the group on $\pi_n(X)$ is given in §3, §4, and §5.

Throughout the whole paper, $X$ denotes an arcwise connected topological space which has such suitable homotopy extension properties as a polyhedron does, and all mappings are continuous transformations.

## §1. Definition of the group $\mathcal{H}(X)$.

Let $x_0$ be an arbitrary point of the space $X$, and $\Omega$ the collection $\mathcal{X}^*(x_0, x_0)$ of all the mappings that transform $X$ into $X$ and $x_0$ into $x_0$. For two maps $a, b \in \Omega$, $a$ is said to be homotopic to $b$ (in notation: $a \sim b$) if there exists a homotopy $h_t \in \Omega$ (for $0 \le t \le 1$) such that $h_0 = a$ and $h_1 = b$. A mapping $a \in \Omega$ is said to have a (two sided) homotopy inverse if there is a map $\varphi \in \Omega$ such that $a\varphi \sim 1$ and $\varphi a \sim 1$, where $1$ denotes the identity transformation of $X$ onto itself. Let $\Omega^*$ be the collection of all the mappings belonging to $\Omega$, each of which has a homotopy inverse.

Now let $X \times I$ be the topological product of $X$ and the line segment $I$ between 0 and 1, and let us consider the totality $U$ of the mappings $\theta : X \times I \rightarrow X$ which satisfy the following conditions:

$$(1.1) \qquad \begin{aligned} \text{i)} & \quad \theta |_{X \times 0} \in \Omega^*, \\ \text{ii)} & \quad \theta(x_0, 1) = x_0. \end{aligned}$$
|
| 19 |
+
|
| 20 |
+
For two maps $\theta, \theta' \in U$, $\theta$ is homotopic to $\theta'$ (notation : $\theta \sim \theta'$) if there exists a homotopy $h_t : X \times I \to X$ (for $1 \le t \le 0$) such that
|
| 21 |
+
|
| 22 |
+
Received Oct. 25, 1950.
I should like to express my sincere gratitude for the courtesies extended to me by Professor S. T. Hu. This paper is inspired by his paper, "On the Whitehead group of automorphisms of the relative homotopy groups."

$$ (1.2) \qquad \left. \begin{aligned} \text{i)} & \quad h_0 = \theta, \quad h_1 = \theta', \\ \text{ii)} & \quad h_t(x_0, 0) = h_t(x_0, 1) = x_0. \end{aligned} \right\} $$

It is easily verified that this relation is an equivalence relation, and therefore $U$ is divided into equivalence classes in this sense.

We shall denote by $[\theta]$ the class containing $\theta$. For $\theta \in U$ we construct a mapping $\sigma_\theta \in U$ as follows: a mapping $\bar{\sigma}_\theta$, defined continuously on the set $(X \times 0) \cup (x_0 \times I)$ by $\bar{\sigma}_\theta(x, 0) = x$ and $\bar{\sigma}_\theta(x_0, t) = \theta(x_0, t)$, can be extended to a mapping $\sigma_\theta \in U$, provided that $\{x_0\}$ has a homotopy extension property in $X$ relative to $X$. The extended mapping is, of course, not unique, but the homotopy class containing $\sigma_\theta$ is uniquely determined if the set $(x_0 \times I) \cup (X \times 0) \cup (X \times 1)$ has a homotopy extension property in $X \times I$ relative to $X$; another arbitrarily extended map $\sigma'_\theta$ is homotopic to $\sigma_\theta$. Now two maps $\theta_1, \theta_2 \in U$ are 'multiplied' together by the rule,

$$ (1.3) \qquad \theta_1 \times \theta_2(x, t) = \begin{cases} \rho(x, 2t), & 0 \le t \le \frac{1}{2}, \\ \sigma_{\theta_2}(\rho(x, 1), 2t-1), & \frac{1}{2} \le t \le 1, \end{cases} $$

where $\rho(x, t) = \theta_2(\theta_1(x, t), 0)$. Then we have
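That the two branches of (1.3) fit together into a single continuous map can be seen by comparing them at $t = \frac{1}{2}$, a check not spelled out in the paper, using the property $\sigma_{\theta_2}(x, 0) = x$ of the construction above:

```latex
% continuity of (1.3) at t = 1/2:
\lim_{t \uparrow 1/2} \theta_1 \times \theta_2(x, t) = \rho(x, 1),
\qquad
\lim_{t \downarrow 1/2} \theta_1 \times \theta_2(x, t)
  = \sigma_{\theta_2}(\rho(x, 1), 0) = \rho(x, 1),
```

so both branches agree on the overlap, since $\sigma_{\theta_2}(\,\cdot\,, 0)$ is the identity by construction.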
**LEMMA 1.1** $\theta_1 \times \theta_2$ is again a member of the collection $U$.
*Proof.* Let $a_1(x) = \theta_1(x, 0)$, $a_2(x) = \theta_2(x, 0)$; then both $a_1$ and $a_2$ belong to $\Omega^*$, so that $a_1$ and $a_2$ have homotopy inverses $\varphi_1, \varphi_2$ respectively. From the considerations that $\varphi_1\varphi_2$ is a homotopy inverse of $a_2 a_1$ and that $\theta_1 \times \theta_2(x, 0) = \rho(x, 0) = \theta_2(\theta_1(x, 0), 0) = \theta_2(a_1(x), 0) = a_2(a_1(x))$, we have $\theta_1 \times \theta_2 |_{X \times 0} \in \Omega^*$, and therefore the condition (1.1) i) is satisfied. Also we have $\theta_1 \times \theta_2(x_0, 1) = \sigma_{\theta_2}(\rho(x_0, 1), 1) = \sigma_{\theta_2}(x_0, 1) = \theta_2(x_0, 1) = x_0$. This proves the Lemma.
**LEMMA 1.2** The class $[\theta_1 \times \theta_2]$ depends only on the classes $[\theta_1]$ and $[\theta_2]$.
*Proof.* Let $\theta'_1 \in [\theta_1]$ and $\theta'_2 \in [\theta_2]$; then there exist two homotopies $h_s, k_s : X \times I \rightarrow X$ ($0 \le s \le 1$) such that $h_0 = \theta_1$, $h_1 = \theta'_1$, $k_0 = \theta_2$, and $k_1 = \theta'_2$. Putting $\rho_s(x, t) = k_s(h_s(x, t), 0)$, we have

$$ (1.4) \qquad \left. \begin{aligned} \text{i)} & \quad \rho_0(x, t) = \theta_2(\theta_1(x, t), 0), \quad \rho_1(x, t) = \theta'_2(\theta'_1(x, t), 0), \\ \text{ii)} & \quad \rho_s(x_0, 0) = k_s(h_s(x_0, 0), 0) = k_s(x_0, 0) = x_0, \\ \text{iii)} & \quad \rho_s(x_0, 1) = k_s(h_s(x_0, 1), 0) = k_s(x_0, 0) = x_0. \end{aligned} \right\} $$

Since $k_s(x_0, 0) = k_s(x_0, 1) = x_0$, we can construct, in virtue of the homotopy extension properties previously mentioned, $\sigma_{k_s} \in U$ ($0 \le s \le 1$), which is also continuous with respect to $s$, just as in the case of $\sigma_\theta$. Then clearly we have $\sigma_{k_s}(x, 0) = x$ and $\sigma_{k_s}(x_0, t) = k_s(x_0, t)$ by the construction of the function $\sigma_{k_s}$. The function

$$ H_s(x, t) = \begin{cases} \rho_s(x, 2t), & 0 \le t \le \frac{1}{2}, \\ \sigma_{k_s}(\rho_s(x, 1), 2t-1), & \frac{1}{2} \le t \le 1, \end{cases} $$

is obviously continuous and satisfies the conditions (1.2) of the homotopy; as to the condition ii), we have $H_s(x_0, 0) = \rho_s(x_0, 0) = x_0$ from (1.4) ii), and $H_s(x_0, 1) = \sigma_{k_s}(\rho_s(x_0, 1), 1) = \sigma_{k_s}(x_0, 1) = k_s(x_0, 1) = x_0$ from (1.4) iii).

Since (1.2) i) is evidently satisfied from (1.4) i), the lemma has been proved. Thus the multiplication in $U$ induces a multiplication in the set of the homotopy classes; $[\theta_1] \times [\theta_2] \equiv [\theta_1 \times \theta_2]$.
**THEOREM 1.** By the multiplication defined above, all the homotopy classes of $U$ constitute a group $\mathfrak{A}(X)$ with $x_0$ as the base point.
*Proof.* Let us prove that the multiplication is associative. Let $\theta_1, \theta_2, \theta_3 \in U$; then $([\theta_1] \times [\theta_2]) \times [\theta_3]$ and $[\theta_1] \times ([\theta_2] \times [\theta_3])$ are represented by the mappings $(\theta_1 \times \theta_2) \times \theta_3$ and $\theta_1 \times (\theta_2 \times \theta_3)$ respectively. By definition

$$
\begin{align*}
(\theta_1 \times \theta_2) \times \theta_3 (x, t) &= \begin{cases}
\theta_3 (\theta_2 (\theta_1 (x, 4t), 0), 0), & 0 \le t \le \frac{1}{4}, \ x \in X, \\
\theta_3 (\sigma_{\theta_2} (\theta_2 (\theta_1 (x, 1), 0), 4t-1), 0), & \frac{1}{4} \le t \le \frac{1}{2}, \ x \in X, \\
\sigma_{\theta_3} (\theta_3 (\sigma_{\theta_2} (\theta_2 (\theta_1 (x, 1), 0), 1), 0), 2t-1), & \frac{1}{2} \le t \le 1, \ x \in X,
\end{cases}
\\[1em]
\theta_1 \times (\theta_2 \times \theta_3) (x, t) &= \begin{cases}
\theta_3 (\theta_2 (\theta_1 (x, 2t), 0), 0), & 0 \le t \le \frac{1}{2}, \ x \in X, \\
\sigma_{\theta_2 \times \theta_3} (\theta_3 (\theta_2 (\theta_1 (x, 1), 0), 0), 2t-1), & \frac{1}{2} \le t \le 1, \ x \in X.
\end{cases}
\end{align*}
$$
As it is rather difficult to show directly the existence of a homotopy between $(\theta_1 \times \theta_2) \times \theta_3$ and $\theta_1 \times (\theta_2 \times \theta_3)$, we prove it by making use of the homotopy extension property referred to above. From the relations above we have $(\theta_1 \times \theta_2) \times \theta_3 (x, 0) = \theta_3 (\theta_2 (\theta_1 (x, 0), 0), 0) = \theta_1 \times (\theta_2 \times \theta_3) (x, 0)$, and from the property of $\sigma_\theta$ we have

$$
(1.6) \quad (\theta_1 \times \theta_2) \times \theta_3(x_0, t) = \begin{cases} \theta_3(\theta_2(\theta_1(x_0, 4t), 0), 0), & 0 \le t \le \frac{1}{4}, \\ \theta_3(\theta_2(x_0, 4t-1), 0), & \frac{1}{4} \le t \le \frac{1}{2}, \\ \theta_3(x_0, 2t-1), & \frac{1}{2} \le t \le 1. \end{cases}
$$

Since

$$
\sigma_{\theta_2 \times \theta_3}(\theta_3(\theta_2(\theta_1(x_0, 1), 0), 0), 2t-1) = \sigma_{\theta_2 \times \theta_3}(x_0, 2t-1) = \theta_2 \times \theta_3(x_0, 2t-1) = \begin{cases} \theta_3(\theta_2(x_0, 4t-2), 0), & \frac{1}{2} \le t \le \frac{3}{4}, \\ \sigma_{\theta_3}(x_0, 4t-3) = \theta_3(x_0, 4t-3), & \frac{3}{4} \le t \le 1, \end{cases}
$$

we have

$$
(1.7) \quad \theta_1 \times (\theta_2 \times \theta_3)(x_0, t) = \begin{cases} \theta_3(\theta_2(\theta_1(x_0, 2t), 0), 0), & 0 \le t \le \frac{1}{2}, \\ \theta_3(\theta_2(x_0, 4t-2), 0), & \frac{1}{2} \le t \le \frac{3}{4}, \\ \theta_3(x_0, 4t-3), & \frac{3}{4} \le t \le 1. \end{cases}
$$

From (1.6) and (1.7) there exists a homotopy $h(x_0, s, t)$ defined on $\{x_0\} \times I^s \times I^t$
such that

$$ \begin{aligned} h(x_0, 0, t) &= (\theta_1 \times \theta_2) \times \theta_3(x_0, t), && 1 \ge t \ge 0, \\ h(x_0, 1, t) &= \theta_1 \times (\theta_2 \times \theta_3)(x_0, t), && 1 \ge t \ge 0, \\ h(x_0, s, 0) &= h(x_0, s, 1) = x_0, && 1 \ge s \ge 0. \end{aligned} $$

Moreover, putting

$$ \begin{aligned} h(x, 0, t) &= (\theta_1 \times \theta_2) \times \theta_3 (x, t), && x \in X, \ 1 \ge t \ge 0, \\ h(x, 1, t) &= \theta_1 \times (\theta_2 \times \theta_3)(x, t), && x \in X, \ 1 \ge t \ge 0, \\ h(x, s, 0) &= \theta_3(\theta_2(\theta_1(x, 0), 0), 0), && x \in X, \ 1 \ge s \ge 0, \end{aligned} $$

$h$ is defined continuously on the set $(X \times I^s \times 0) \cup \{[(x_0 \times I^s) \cup (X \times 0) \cup (X \times 1)] \times I^t\}$. Thus, if $(x_0 \times I) \cup (X \times 0) \cup (X \times 1)$ has a homotopy extension property in $X \times I$ relative to $X$, $h$ can be extended to a mapping $X \times I^s \times I^t \to X$, which gives a homotopy between $(\theta_1 \times \theta_2) \times \theta_3$ and $\theta_1 \times (\theta_2 \times \theta_3)$.
Next we must prove the existence of the unity in $\mathfrak{A}(X)$. Let $\theta_0(x, t) = x$; then clearly $\theta_0 \in U$. For any $\theta \in U$ we have from the definition of multiplication

$$ (\theta \times \theta_0)(x, t) = \begin{cases} \rho(x, 2t), & x \in X, \quad 0 \le t \le \frac{1}{2}, \\ \sigma_{\theta_0}(\rho(x, 1), 2t-1), & x \in X, \quad \frac{1}{2} \le t \le 1, \end{cases} $$

where $\rho(x, 2t) = \theta_0(\theta(x, 2t), 0) = \theta(x, 2t)$, and $\sigma_{\theta_0}(x, t) = x$ may be assumed. Since $\sigma_{\theta_0}(\rho(x, 1), 2t-1) = \rho(x, 1) = \theta_0(\theta(x, 1), 0) = \theta(x, 1)$ for $\frac{1}{2} \le t \le 1$, we have

$$ (\theta \times \theta_0)(x, t) = \begin{cases} \theta(x, 2t), & x \in X, \quad 0 \le t \le \frac{1}{2}, \\ \theta(x, 1), & x \in X, \quad \frac{1}{2} \le t \le 1. \end{cases} $$

Let us define a homotopy $h_s(x, t)$ for $0 \le s \le 1$ as follows:

$$ h_s(x, t) = \begin{cases} \theta\left(x, \dfrac{2t}{1+s}\right), & x \in X, \quad 0 \le t \le \frac{s+1}{2}, \\ \theta(x, 1), & x \in X, \quad \frac{s+1}{2} \le t \le 1; \end{cases} $$

then $h_s$ satisfies the conditions (1.2) of the homotopy, with $h_0 = \theta \times \theta_0$ and $h_1 = \theta$. Thus $\theta_0$ represents the right-side unity of the group $\mathfrak{A}(X)$.
Lastly we proceed to show the existence of the inverse element of any element $[\theta] \in \mathfrak{A}(X)$. By the assumption on an element $\theta$ in $U$, we have $\theta |_{X \times 0} \in \Omega^*$, so that $\theta |_{X \times 0}$ has a homotopy inverse $\varphi \in \Omega^*$. Now we define a mapping $\theta^{-1} \in U$ as follows: if we put

$$ \begin{aligned} \theta^{-1}(x, 0) &= \varphi(x), && x \in X, \\ \theta^{-1}(x_0, t) &= \varphi(\theta(x_0, 1-t)), && 1 \ge t \ge 0, \end{aligned} $$

then $\theta^{-1}$ can be extended to a map $X \times I \to X$ because of the homotopy extension property of $\{x_0\}$. This extended map $\theta^{-1}$ is shown to represent the inverse of $[\theta]$. Indeed, we have

$$ \theta \times \theta^{-1}(x, t) = \begin{cases} \rho(x, 2t), & 0 \le t \le \frac{1}{2}, \ x \in X, \\ \sigma_{\theta^{-1}}(\rho(x, 1), 2t-1), & \frac{1}{2} \le t \le 1, \ x \in X, \end{cases} $$

where $\rho(x, t) = \theta^{-1}(\theta(x, t), 0) = \varphi(\theta(x, t))$, $\sigma_{\theta^{-1}}(x, 0) = x$, and $\sigma_{\theta^{-1}}(x_0, t) = \theta^{-1}(x_0, t) = \varphi(\theta(x_0, 1-t))$. As $\varphi$ is a homotopy inverse of $\theta |_{X \times 0}$, and on the other hand $\sigma_{\theta^{-1}} |_{x_0 \times I}$ represents the inverse element of $[\rho |_{x_0 \times I}]$, we have a continuous function $h$ defined on the set $(X \times I^s \times 0) \cup \{[(x_0 \times I^s) \cup (X \times 0) \cup (X \times 1)] \times I^t\}$ such that

$$ \begin{aligned} h(x, s, 0) &= k(x, s), && x \in X, \ s \in I^s, \\ h(x_0, s, t) &= l(s, t), && s \in I^s, \ t \in I^t, \\ h(x, 0, t) &= \theta \times \theta^{-1}(x, t), && x \in X, \ t \in I^t, \\ h(x, 1, t) &= x, && x \in X, \ t \in I^t, \end{aligned} $$

where $k$ is a homotopy obtained from the relation $\varphi\theta \sim 1$, and $l$ is a homotopy whose existence is assured by $\rho(x_0, 1-t) = \sigma_{\theta^{-1}}(x_0, t)$. Again, by the aid of the homotopy extension property of $(x_0 \times I) \cup (X \times 0) \cup (X \times 1)$, $h$ can be extended to a map $X \times I \times I \to X$, which gives the desired homotopy. This completes the proof.
In order to clarify the conditions preassigned to the space $X$, we put down here all the homotopy extension properties assumed in the arguments of the above Theorem:

i) $\{x_0\}$ has a homotopy extension property in $X$ relative to $X$;

(1.8) ii) $(x_0 \times I) \cup (X \times 0) \cup (X \times 1)$ has a homotopy extension property in $X \times I$ relative to $X$.

These assumptions are, of course, satisfied by a polyhedron.
## § 2. A group of automorphisms $\Sigma(X)$ and the structure of $\mathfrak{A}(X)$.
Now we define a group $\Sigma(X)$ which, as we shall see later, operates on $\pi_n(X)$ as a group of automorphisms, and study a homomorphism of $\mathfrak{A}(X)$ onto $\Sigma(X)$, the kernel of which is isomorphic to the fundamental group $\pi_1(X)$ of $X$.

Let us define a homotopy concept in $\Omega^*$ in the following sense: we shall write $a \sim b$ for $a, b \in \Omega^*$ if there exists a homotopy $h_t \in \Omega$ ($0 \le t \le 1$) such that $h_0 = a$ and $h_1 = b$. Then $\Omega^*$ is divided into homotopy classes. Let us denote by $\Sigma(X)$ the set of all the homotopy classes. For two maps $a, b \in \Omega^*$ we define $(a \times b)(x) = b(a(x))$ for any $x \in X$. Then $a \times b \in \Omega^*$, because $a \times b \in \Omega$ follows immediately from the definition and, if $\varphi$ and $\psi$ are homotopy inverses of $a$ and $b$ respectively, $\psi \times \varphi \in \Omega^*$ is a homotopy inverse of $a \times b$. Furthermore, if $a \sim a'$ and $b \sim b'$, then $a \times b \sim a' \times b'$. Thus the multiplication in $\Omega^*$ induces a multiplication in $\Sigma(X)$.
**THEOREM 2.** $\Sigma(X)$ constitutes a group.
*Proof.* It is evident from the definition of multiplication that the associative law holds. As to the existence of unity, let $E$ be the class containing the identity transformation of $X$; then $E \cdot A = A$ and $A \cdot E = A$ for any $A \in \Sigma(X)$. Lastly, for any $A = [a]$ we choose $A^{-1} = [\varphi]$, the class containing a homotopy inverse $\varphi$ of $a$. Then $AA^{-1} = E$ and $A^{-1}A = E$ are clear from the definition of homotopy inverse.
**THEOREM 3.** $\Sigma(X)$ operates on the *n*-th homotopy group $\pi_n(X, x_0)$, for every integer $n \ge 1$, as a group of automorphisms.
*Proof.* Let $f$ be a representative of an element $\alpha$ of $\pi_n(X)$ and let $a$ be a representative of $A \in \Sigma(X)$. Let us take the mapping $af : S^n \to X$ as a representative of $A\alpha$. The correspondence $A : \alpha \to A\alpha$ is a transformation of $\pi_n(X)$ into itself because, if $f'$ is another representative of $\alpha$, we have $af \sim af'$, and if $a'$ is another representative of $A$, we have also $af \sim a'f$. Then it is easily proved that this correspondence is an automorphism of $\pi_n(X)$.
*Example of $\Sigma(X)$.* Let $X$ be an $n$-sphere $S^n$; then from the concept of Brouwer's degree we have $\Sigma(S^n) = \{E = [1], A = [-1]\}$, where $E$ is the class containing the identity transformation and $A$ is the class containing a mapping of degree $-1$. Since clearly $A^2 = A \cdot A = E$, the group is a cyclic group of order 2.
Now we intend to define a homomorphism $\varphi$ of $\mathfrak{A}(X)$ onto $\Sigma(X)$. Let $\theta \in U$ be a representative of an element of $\mathfrak{A}(X)$; then $a_\theta = \theta | X \times 0$ represents an element of $\Sigma(X)$. From the homotopy concepts given in §1 and §2, it is obvious that if $\theta \sim \theta'$, we have $a_\theta \sim a_{\theta'}$. By the correspondence $\varphi : [\theta] \to [a_\theta]$ we have the following theorem.

**THEOREM 4.** $\varphi$ is a homomorphism of $\mathfrak{A}(X)$ onto $\Sigma(X)$, the kernel of which is isomorphic to the fundamental group $\pi_1(X)$.

*Proof.* For two elements $[\theta_1], [\theta_2] \in \mathfrak{A}(X)$, we have $\varphi([\theta_1]) = [a_{\theta_1}]$ and $\varphi([\theta_2]) = [a_{\theta_2}]$. By definition $\varphi([\theta_1] \times [\theta_2]) = \varphi([\theta_1 \times \theta_2])$ may be represented by the mapping $\theta_1 \times \theta_2 | X \times 0$; since $\theta_1 \times \theta_2(x, 0) = \rho(x, 0) = \theta_2(\theta_1(x, 0), 0)$, we have $\theta_1 \times \theta_2 | X \times 0 = a_{\theta_1} \times a_{\theta_2}$. Thus $\varphi([\theta_1] \times [\theta_2]) = \varphi([\theta_1]) \times \varphi([\theta_2])$ is proved. Clearly $\varphi$ is an onto-homomorphism from the definition of the group.

Lastly, in order to complete the proof it is sufficient to prove that the kernel of $\varphi$ is isomorphic to $\pi_1(X)$. If $\varphi([\theta]) = [a_\theta]$ is the unity, we may take without loss of generality a representative $\theta$ of $[\theta]$ as follows:
$$ (2.1) \qquad \left. \begin{aligned} \text{i)} & \quad \theta : X \times I \to X, \\ \text{ii)} & \quad \theta(x, 0) = x, \\ \text{iii)} & \quad \theta(x_0, 1) = x_0, \end{aligned} \right\} $$

for (1.8) is assumed. To any element $[\theta]$ belonging to the kernel of $\varphi$ let there correspond an element $[\xi_\theta]$ of the fundamental group $\pi_1(X)$ by the rule,

$$ (2.2) \qquad \xi_\theta(t) = \theta(x_0, t). $$

This correspondence $\lambda$ has a definite meaning because, if $\theta \sim \theta'$, $\xi_\theta$ and $\xi_{\theta'}$ represent the same element of $\pi_1(X)$. Let us prove that $\lambda$ is an isomorphism. Let $[\theta_1], [\theta_2]$ be two elements belonging to the kernel of $\varphi$; then $[\theta_1] \times [\theta_2]$ is represented by the map $\theta_1 \times \theta_2$,

$$ \theta_1 \times \theta_2(x, t) = \begin{cases} \theta_2(\theta_1(x, 2t), 0), & 0 \le t \le \frac{1}{2}, \ x \in X, \\ \sigma_{\theta_2}(\theta_2(\theta_1(x, 1), 0), 2t-1), & \frac{1}{2} \le t \le 1, \ x \in X. \end{cases} $$

Since from (2.1) we have $\theta_2(x, 0) = x$, it follows that $\theta_2(\theta_1(x, 2t), 0) = \theta_1(x, 2t)$ and $\sigma_{\theta_2}(\theta_2(\theta_1(x, 1), 0), 2t-1) = \sigma_{\theta_2}(\theta_1(x, 1), 2t-1)$, so that by (2.2)

$$ \xi_{\theta_1 \times \theta_2}(t) = \begin{cases} \theta_1(x_0, 2t), & 0 \le t \le \frac{1}{2}, \\ \sigma_{\theta_2}(\theta_1(x_0, 1), 2t-1), & \frac{1}{2} \le t \le 1. \end{cases} $$

Since $\theta_1(x_0, 1) = x_0$ and $\sigma_{\theta_2}(x_0, t) = \theta_2(x_0, t)$, we have $\sigma_{\theta_2}(\theta_1(x_0, 1), 2t-1) = \theta_2(x_0, 2t-1)$. Now $\xi_{\theta_1 \times \theta_2}(t)$ may be described as follows:

$$ \xi_{\theta_1 \times \theta_2}(t) = \begin{cases} \theta_1(x_0, 2t), & 0 \le t \le \frac{1}{2}, \\ \theta_2(x_0, 2t-1), & \frac{1}{2} \le t \le 1. \end{cases} $$

On the other hand, we have, by the definition of the fundamental group,

$$ \lambda([\theta_1] \times [\theta_2]) = [\xi_{\theta_1 \times \theta_2}] = [\xi_{\theta_1}] \circ [\xi_{\theta_2}] = \lambda[\theta_1] \circ \lambda[\theta_2], $$

so that the homomorphism is established.
Clearly $\lambda$ is an onto-homomorphism, because of the homotopy extension property (1.8) i). It remains only to prove that $\xi_{\theta_1} \sim \xi_{\theta_2}$ implies $\theta_1 \sim \theta_2$. It may be assumed that $\theta_1(x, 0) = x$ and $\theta_2(x, 0) = x$. Since $\xi_{\theta_1} \sim \xi_{\theta_2}$, a homotopy $h_s(t)$ ($0 \le s \le 1$) exists such that $h_0(t) = \theta_1(x_0, t)$, $h_1(t) = \theta_2(x_0, t)$ and $h_s(0) = h_s(1) = x_0$. A continuous function $h$ may be defined on the set $(X \times I^s \times 0) \cup \{[(X \times 0) \cup (X \times 1) \cup (x_0 \times I^s)] \times I^t\}$ as follows:

$$ \begin{aligned} h(x, s, 0) &= x, && x \in X, \ s \in I^s, \\ h(x, 0, t) &= \theta_1(x, t), && x \in X, \ t \in I^t, \\ h(x, 1, t) &= \theta_2(x, t), && x \in X, \ t \in I^t, \\ h(x_0, s, t) &= h_s(t), && s \in I^s, \ t \in I^t. \end{aligned} $$

If (1.8) ii) is assumed, it is proved by the aid of the extended map $h : X \times I^s \times I^t \to X$ that $\theta_1$ is homotopic to $\theta_2$. This completes the proof.
## § 3. Operation of $\mathfrak{A}(X)$ on the homotopy groups.
Let $f$ be a representative of an element $\alpha \in \pi_n(X)$ and $\theta$ a representative of an element $\vartheta \in \mathfrak{A}(X)$. Let us define $\vartheta\alpha = [h] \in \pi_n(X)$ by the rule,

$$ (3.1) \qquad h(x) = \theta(f(x), 1). $$

This definition has a definite meaning in the sense that $[h]$ depends only on $\alpha$ and $\vartheta$. Then we have,

**THEOREM 5.** $\vartheta\alpha = (A\alpha)^{\xi}$, where $A = \varphi(\vartheta) \in \Sigma(X)$ and $\xi$ is the element of $\pi_1(X)$ represented by $\theta(x_0, t)$ ($0 \le t \le 1$).

*Proof.* From the definition of the homomorphism $\varphi$, $A$ is represented by $a_\theta(x) = \theta(x, 0)$, and therefore $\theta(f(x), 0) = a_\theta f(x)$. It is an immediate consequence of the operation of $A$ that $a_\theta f$ represents the element $A\alpha$ of $\pi_n(X)$. Moreover, if $f(p) = x_0$ for a fixed point $p \in S^n$, then $\theta(f(p), t) = \theta(x_0, t)$ represents an element $\xi$ of $\pi_1(X)$, so that, according to the operation of $\pi_1$ on $\pi_n$ due to Eilenberg, $h(x) = \theta(f(x), 1)$ represents the element $(A\alpha)^{\xi} \in \pi_n$. This completes the proof.
As a direct consequence of Theorem 5 we have,

**THEOREM 6.** $\mathfrak{A}(X)$ is a group of automorphisms of $\pi_n(X)$ for every integer $n \ge 1$.

*Proof.* Since the operation of $\vartheta \in \mathfrak{A}(X)$ on $\pi_n$ is the composite of the automorphisms $A$ and $\xi$, it is also an automorphism of $\pi_n(X)$.
## § 4. Algebraic construction of $\mathfrak{A}(X)$.
Now that the operation of $\mathfrak{A}(X)$ on $\pi_n$ has been clarified by Theorem 5, we can construct the group $\mathfrak{A}(X)$ from a purely algebraic standpoint. Let $\chi(X) = \{(A, \xi) ; A \in \Sigma(X), \xi \in \pi_1(X)\}$ be the totality of all the ordered pairs consisting of an arbitrarily chosen element of $\Sigma(X)$ and an arbitrarily chosen element of $\pi_1(X)$. Defining $(A, \xi)(\alpha) = (A\alpha)^{\xi}$ for any $\alpha \in \pi_n(X)$, $(A, \xi)$ operates on $\pi_n(X)$, for every integer $n \ge 1$, as an automorphism. If we define a multiplication in the set $\chi(X)$ of automorphisms just defined by the rule,

$$ (B, \eta)(A, \xi)(\alpha) = (B, \eta)((A, \xi)(\alpha)), $$

then we have $(B, \eta)(A, \xi) \in \chi(X)$. In order to prove this, we need the following lemma.
**LEMMA 4.1** $A(\alpha^{\xi}) = (A\alpha)^{A\xi} = (A, A\xi)(\alpha)$ for any $\alpha \in \pi_n$, where $A\xi$ is to be interpreted in the sense that $A \in \Sigma(X)$ operates on the homotopy group of any dimension, especially on the fundamental group too.
*Proof.* Let $\alpha$ be represented by a mapping $f : S^n \to X$, $S^n \ni p_0 \to x_0$, and let $\xi = [e(t)]$, $0 \le t \le 1$. We have a mapping $F : (S^n \times 0) \cup (p_0 \times I) \to X$ such that $F(x, 0) = f(x)$ for any $x \in S^n$, and $F(p_0, t) = e(t)$. From the homotopy extension property of a polyhedron we have an extended map $\bar{F} : S^n \times I \to X$ of $F$. Since $\bar{F}(x, 0) = f(x)$ and $\bar{F}(p_0, t) = e(t)$, $\bar{F}(x, 1)$ represents the element $\alpha^{\xi} \in \pi_n(X)$. Let $a$ be a representative of $A$. Putting $G(x, t) = a(\bar{F}(x, t)) : S^n \times I \to X$, we have $[G(x, 0)] = A\alpha$ from $G(x, 0) = a(f(x))$, and $[G(x, 1)] = A(\alpha^{\xi})$ from $G(x, 1) = a(\bar{F}(x, 1))$. Also, from $G(p_0, t) = a(e(t))$ follows $[G(p_0, t)] = A\xi$. Thus we have $A(\alpha^{\xi}) = (A\alpha)^{A\xi}$. Making use of the lemma, we have

$$
\begin{align*}
(B, \eta)(A, \xi)(\alpha) &= (B, \eta)((A, \xi)(\alpha)) = (B, \eta)((A\alpha)^{\xi}) \\
&= (B((A\alpha)^{\xi}))^{\eta} \\
&= ((B(A\alpha))^{B\xi})^{\eta} \\
&= (B(A\alpha))^{B\xi \cdot \eta} = (A \cdot B, B\xi \cdot \eta)(\alpha).
\end{align*}
$$

Thus $(B, \eta)(A, \xi) = (A \cdot B, B\xi \cdot \eta) \in \chi(X)$.
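The pair multiplication $(B, \eta)(A, \xi) = (A \cdot B, B\xi \cdot \eta)$ is a semidirect-product rule, and it can be checked concretely. The sketch below is illustrative and not part of the paper: it takes $X = S^1$, where $\Sigma(S^1) \cong \{\pm 1\}$ (the degree of a self-equivalence) and $\pi_1(S^1) \cong \mathbb{Z}$ written additively, with $A\xi$ given by sign change, so $\chi(S^1)$ becomes the infinite dihedral group.

```python
# Model chi(S^1): pairs (A, xi) with A in {+1, -1} representing Sigma(S^1)
# and xi an integer representing pi_1(S^1) = Z (written additively).
# Multiplication rule from the paper: (B, eta)(A, xi) = (A*B, B.xi + eta),
# where B acts on pi_1 by B.xi = B * xi.

def mult(p, q):
    """Compute (B, eta)(A, xi) = (A*B, B*xi + eta)."""
    B, eta = p
    A, xi = q
    return (A * B, B * xi + eta)

def inverse(p):
    """Inverse formula from the paper: (A^{-1}, A^{-1} xi^{-1})."""
    A, xi = p
    return (A, A * (-xi))   # A^{-1} = A in {+1, -1}; xi^{-1} = -xi

E = (1, 0)  # the unity (E, e)

# Associativity, unity, and the inverse formula, checked on a small sample.
sample = [(a, x) for a in (1, -1) for x in (-2, 0, 3)]
for p in sample:
    for q in sample:
        for r in sample:
            assert mult(p, mult(q, r)) == mult(mult(p, q), r)
for p in sample:
    assert mult(E, p) == p and mult(p, E) == p
    assert mult(inverse(p), p) == E
```

The assertions mirror the proofs of Theorem 7: the rule is associative, $(E, e)$ is the unity, and $(A^{-1}, A^{-1}\xi^{-1})$ is a left inverse.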
**THEOREM 7.** By this multiplication $\chi(X)$ forms a group.
*Proof.* As to the associative law we have

$$
\begin{align*}
(C, \zeta)((B, \eta)(A, \xi)) &= (C, \zeta)(AB, B\xi \cdot \eta) \\
&= (AB \cdot C, C(B\xi \cdot \eta) \cdot \zeta) \\
&= (ABC, BC\xi \cdot C\eta \cdot \zeta),
\end{align*}
$$

$$
\begin{align*}
((C, \zeta)(B, \eta))(A, \xi) &= (BC, C\eta \cdot \zeta)(A, \xi) \\
&= (A \cdot BC, BC\xi \cdot (C\eta \cdot \zeta)) \\
&= (ABC, BC\xi \cdot C\eta \cdot \zeta).
\end{align*}
$$

Thus

$$ (C, \zeta)((B, \eta)(A, \xi)) = ((C, \zeta)(B, \eta))(A, \xi). $$

The existence of the unity is proved as follows:

$$ (E, e)(A, \xi) = (AE, E\xi \cdot e) = (A, \xi), $$

where $E$, $e$ are the unities of $\Sigma(X)$ and $\pi_1(X)$ respectively.

The existence of an inverse element is proved thus:

$$ (A^{-1}, A^{-1}\xi^{-1})(A, \xi) = (AA^{-1}, A^{-1}\xi \cdot A^{-1}\xi^{-1}) = (E, A^{-1}(\xi\xi^{-1})) = (E, e). $$

This completes the proof.
Now the following main theorem concerning the relation of the two groups $\mathfrak{A}(X)$ and $\chi(X)$ gives a complete analysis of the structure of $\mathfrak{A}(X)$, and also of the operation of $\mathfrak{A}(X)$ on $\pi_n(X)$ for every integer $n \ge 1$.
**MAIN THEOREM 8.** $\mathfrak{A}(X)$ is isomorphic to the group $\chi(X)$. Moreover, an isomorphism can be established between these groups which preserves the operation on the homotopy groups.
*Proof.* The method of proof being analogous to that of Theorems 4 and 5, we shall restrict ourselves to exhibiting the correspondence between the two groups. Let $\theta$ be a representative of $\vartheta \in \mathfrak{A}(X)$ and let $a_\theta = \theta | X \times 0$, $\xi_\theta = \theta | x_0 \times I$. Then to $\vartheta$ let there correspond $([a_\theta], [\xi_\theta]) \in \chi(X)$. It can be shown that this correspondence is an isomorphism and that the operations of $\vartheta$ and of the corresponding element $([a_\theta], [\xi_\theta])$ on $\pi_n$ are the same.
## § 5. Some remarks on the group $\mathfrak{A}(X)$.
By the aid of the main theorem it is advantageous to use $\chi(X)$ in place of $\mathfrak{A}(X)$ in calculating this invariant of the space $X$. As is easily seen, two distinct elements of $\chi(X)$ do not always operate differently on $\pi_n$, so that, as the group of operations on $\pi_n$, $\chi(X)$ may be reduced to a smaller group. This reduction gives rise to a classification of the space $X$ analogous to the simplicity of a space due to Eilenberg.

Let $\chi^*(X)$ be the totality of all elements in $\chi(X)$ whose operations on any element of $\pi_n(X)$ are trivial; i.e. $\chi^*(X) = \{(A, \xi) ; (A, \xi)(\alpha) = \alpha$ for any element $\alpha \in \pi_n(X)\}$. Then $\chi^*(X)$ is clearly a normal subgroup of $\chi(X)$. Similarly, put $\chi^{**}(X) = \{(A, e) ; (A, e)(\alpha) = \alpha$ for any $\alpha \in \pi_n(X)\}$ and $\chi^{***}(X) = \{(E, \xi) ; (E, \xi)(\alpha) = \alpha$ for any $\alpha \in \pi_n(X)\}$; then these two groups are also normal in $\Sigma(X)$ and $\pi_1(X)$ respectively, as well as in $\chi(X)$. It is well known that the space is $n$-simple in the sense of Eilenberg if $\chi^{***}(X) \cong \pi_1(X)$. It may be an interesting problem to consider the spaces satisfying conditions such as $\chi^*(X) = \chi(X)$ or $\chi^{**}(X) \cong \Sigma(X)$.
BIBLIOGRAPHY

[1] Eilenberg, S., On the relations between the fundamental group of a space and the higher homotopy groups, Fundamenta Math. 32 (1939).

[2] Hu, S. T., On the Whitehead group of automorphisms of the relative homotopy groups, Portugaliae Math. 7 (1948).
|
samples_new/texts_merged/213815.md
ADDED
|
@@ -0,0 +1,271 @@
---PAGE_BREAK---

Design and Performance of a 24 GHz Band FM-CW Radar System and Its Application

Kazuhiro Yamaguchi\*, Mitsumasa Saito\†, Kohei Miyasaka\* and Hideaki Matsue\*

\* Tokyo University of Science, Suwa

† CQ-S net Inc., Japan

Email: yamaguchi@rs.tus.ac.jp, matsue@rs.suwa.tus.ac.jp, saitoh@kpe.biglobe.ne.jp

*Abstract*—This paper describes the design and performance of a 24 GHz band FM-CW (Frequency-Modulated Continuous-Wave) radar system. The principle for measuring the distance and the small displacement of a target object is described, and a differential detection method for detecting only the target is proposed for environments in which multiple objects are located. In computer simulations, the basic performance of the FM-CW radar system is analyzed in terms of the distance resolution and the error value for various sampling times and sweep bandwidths. Furthermore, the FM-CW radar system with the proposed differential detection method can clearly detect only the target object in a multiple-object environment, and small displacements within ±3.11 mm can be measured. In experiments, the performance in measuring distance and displacement is evaluated using the designed 24 GHz FM-CW radar system. The results confirm that the 24 GHz FM-CW radar system with the proposed differential detection method is effective for measuring a target in environments where multiple objects are located.

Fig. 1. Sawtooth frequency modulation.

I. INTRODUCTION

Radar systems in the 24 GHz band are based on ARIB standard T73 [1] as sensors for detecting or measuring mobile objects for specified low-power radio stations. The 24 GHz band radar system can be applied in various fields such as security and medical imaging under both indoor and outdoor environments. Various radar systems have been proposed [2], [3], [4], [5]. A pulsed radar system measures the period between when the signal is transmitted and when it is received. Pulsed radar can detect the distance in the far field; however, a target in the near field cannot be detected correctly. A Doppler radar system measures the frequency difference between the reflected and transmitted signals. Doppler radar can detect the moving velocity of the target, but not its distance. The FM-CW (Frequency-Modulated Continuous-Wave) radar system [6], [7] is the most widely used for detecting the distance of a target object in the near field and the small displacement of the target.

In this paper, we developed a 24 GHz FM-CW radar system for measuring the distance and displacement of an object when the object is static or moves very slowly. The basic performance of the 24 GHz FM-CW radar system for measuring a target object is analyzed by computer simulation. Moreover, we propose a differential detection method for signal processing in the FM-CW radar system in order to detect only the target object in environments where multiple objects are located. Furthermore, an example application of the 24 GHz FM-CW radar system is shown experimentally.

This paper consists of the following sections. Section II describes the principle of an FM-CW radar system. Section III describes and analyzes the basic performance and the proposed differential detection method in computer simulation. Section IV shows the experimental results with the 24 GHz FM-CW radar system. Finally, Section V concludes this paper.

II. PRINCIPLE FOR FM-CW RADAR

FM-CW (Frequency-Modulated Continuous-Wave) radar is a radar transmitting a continuous carrier modulated by a periodic function such as a sawtooth wave to provide range data, as shown in Fig. 1. Fig. 2 shows the block diagram of an FM-CW radar system [8].

In the FM-CW radar system, the frequency-modulated signal generated at the VCO is transmitted from the transmitter Tx, and the signals reflected from the targets are received at the receiver Rx. The transmitted and received signals are multiplied by a mixer, and beat signals are generated as the product of the two signals. The beat signal passes through a low-pass filter, and an output signal is obtained. In this process, the frequency of the input signal is varied with time at the VCO. The modulation waveform follows a linear sawtooth pattern [9], as shown in Fig. 1. This figure illustrates the frequency-time relation in the FM-CW radar; the red line denotes the transmitted signal and the blue line the received signal. Here, $f_0$ denotes the center frequency, $f_w$ denotes the frequency bandwidth for sweep, and $t_w$ denotes the period for sweep.

We define the transmitted signal $V_T(f, x)$ at the transmitter Tx in Fig. 2 as

$$
V_{\mathrm{T}}(f,x)=A e^{j \frac{2 \pi f}{c} x}, \quad(1)
$$

---PAGE_BREAK---

Fig. 2. Block diagram of an FM-CW radar system.

where *f* denotes the frequency at a given time, *x* denotes the distance between a target and the transmitter, *A* denotes an amplitude value and *c* denotes the speed of light.

The reflected signal $V_R(f, x)$ at the receiver Rx in Fig. 2 is represented as

$$ V_R(f, x) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k - x)} , \quad (2) $$

where $\gamma_k$ and $\varphi_k$ are the reflectivity coefficients for amplitude and phase of the $k$th target, respectively, $\alpha_k$ denotes the amplitude coefficient for transmission loss from the $k$th target, and $d_k$ is the distance between the transmitter and the $k$th target.

Here, at the receiver, whose position is $x = 0$, Eq. (2) is rewritten as

$$ V_R(f, 0) = \sum_{k=1}^{K} A \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{2\pi f}{c} (2d_k)} . \quad (3) $$

The beat signal is generated by multiplying the transmitted signal in Eq. (1) and the received signal in Eq. (3) at the position $x = 0$. After the LPF, the output signal $V_{\text{out}}(f, 0)$ is given by

$$ V_{\text{out}}(f, 0) = \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} . \quad (4) $$

By signal processing, the distance and the displacement of the target are obtained from the output signal in Eq. (4). Using the Fourier transform, the distance spectrum of the output signal $P(x)$ is calculated as follows.

$$
\begin{align}
P(x) &= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} V_{\text{out}} e^{-j \frac{4\pi f}{c} x} df \nonumber \\
&= \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} \sum_{k=1}^{K} A^2 \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f d_k}{c}} e^{-j \frac{4\pi f x}{c}} df \nonumber \\
&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} \int_{f_0 - \frac{f_w}{2}}^{f_0 + \frac{f_w}{2}} e^{j \frac{4\pi f (d_k - x)}{c}} df \nonumber \\
&= A^2 \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} . \tag{5}
\end{align}
$$

The amplitude value of the distance spectrum $|P(x)|$ in Eq. (5) is given as

$$
\begin{aligned}
|P(x)| &= A^2 \left| \sum_{k=1}^{K} \alpha_k \gamma_k e^{j \varphi_k} e^{j \frac{4\pi f_0 (d_k - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right| \\
&\leq A^2 f_w \sum_{k=1}^{K} \alpha_k \gamma_k \left| \frac{\sin\left\{\frac{2\pi f_w (d_k - x)}{c}\right\}}{\frac{2\pi f_w (d_k - x)}{c}} \right|, \quad (6)
\end{aligned}
$$

with equality if and only if the phase components $\varphi_k + \frac{4\pi f_0 (d_k - x)}{c}$ are equal for all $k$.

Here, we assume that the number of targets is 1. The distance spectrum in Eq. (5) is then rewritten as

$$ P(x) = A^2 \alpha_1 \gamma_1 e^{j \varphi_1} e^{j \frac{4\pi f_0 (d_1 - x)}{c}} f_w \frac{\sin\left\{\frac{2\pi f_w (d_1 - x)}{c}\right\}}{\frac{2\pi f_w (d_1 - x)}{c}}, \quad (7) $$

and the amplitude value of the distance spectrum is given as

$$ |P(x)| = A^2 \alpha_1 \gamma_1 f_w \left| \frac{\sin\left\{\frac{2\pi f_w (d_1-x)}{c}\right\}}{\frac{2\pi f_w (d_1-x)}{c}} \right|. \quad (8) $$

This equation indicates that the distance to the target is obtained from the amplitude value of the distance spectrum.

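As an illustrative numerical sketch (not the authors' code), the distance spectrum of Eq. (5) can be evaluated directly by correlating the output signal of Eq. (4) against the kernel $e^{-j 4\pi f x / c}$ over a grid of candidate distances. The parameter values below follow Table I; the 10 m target distance and unit amplitudes are assumptions for the example.

```python
import numpy as np

# Assumed illustrative parameters (Table I): f0 = 24.15 GHz, f_w = 200 MHz,
# 1024 frequency samples, one target at d1 = 10 m, A = alpha_1 = gamma_1 = 1.
c = 3e8
f0, fw, n = 24.15e9, 200e6, 1024
d1 = 10.0

f = f0 + np.linspace(-fw / 2, fw / 2, n)      # swept frequency samples
v_out = np.exp(1j * 4 * np.pi * f * d1 / c)   # output signal of Eq. (4)

# Distance spectrum |P(x)|: discrete version of the integral in Eq. (5)
x = np.linspace(0, 30, 3000)                  # candidate distances [m]
P = np.abs(np.exp(-1j * 4 * np.pi * np.outer(x, f) / c) @ v_out)

d_est = x[np.argmax(P)]                       # peak of |P(x)|, near the true 10 m
print(round(d_est, 2))
```

The peak of $|P(x)|$ falls at the grid point nearest the true target distance, as Eq. (8) predicts.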
The phase value of the distance spectrum $\angle P(x)$ is represented as

$$ \angle P(x) = \varphi_1 + \frac{4\pi f_0 (d_1 - x)}{c} = \theta_1(x) . \quad (9) $$

Here, $\theta_1(x)$ satisfies $-\pi \leq \theta_1(x) \leq \pi$, so the displacement of the target satisfies

$$ \frac{c(-\pi - \varphi_1)}{4\pi f_0} \leq d_1 \leq \frac{c(\pi - \varphi_1)}{4\pi f_0} . \quad (10) $$

If the phase value satisfies $\varphi_1 = 0$, Eq. (10) becomes $-3.11 [\text{mm}] \leq d_1 \leq +3.11 [\text{mm}]$ with $f_0 = 24.15 [\text{GHz}]$. That is, a small displacement of the target within $\pm 3.11 [\text{mm}]$ is obtained from the phase value of the distance spectrum.

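The ±3.11 mm figure follows directly from Eq. (10): with $\varphi_1 = 0$ the unambiguous displacement range is $|d_1| \leq c/(4 f_0)$. A one-line check (using $c = 3 \times 10^8$ m/s, as the text's rounding implies):

```python
# Unambiguous displacement range of Eq. (10) with varphi_1 = 0:
# |d_1| <= c / (4 * f0).
c = 3e8                        # speed of light [m/s]
f0 = 24.15e9                   # center frequency [Hz]
d_max = c / (4 * f0)           # maximum unambiguous displacement [m]
print(round(d_max * 1e3, 2))   # -> 3.11 (mm)
```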
---PAGE_BREAK---

TABLE I. PARAMETERS IN COMPUTER SIMULATIONS

<table><thead><tr><td>Parameters</td><td>Value</td></tr></thead><tbody><tr><td>Center frequency</td><td>24.15 GHz</td></tr><tr><td>Bandwidth</td><td>50, 100, 200, 400 MHz</td></tr><tr><td>Sweep time</td><td>1024 µs</td></tr><tr><td>Sampling time of sweep</td><td>0.1, 1, 10 µs</td></tr><tr><td>Number of FFT points</td><td>4096</td></tr><tr><td>Window function</td><td>Hamming</td></tr></tbody></table>

Fig. 3. Resolution for distance spectrum according to sweep bandwidth.

On the other hand, the maximum measurable distance $d_{\max}$ is given by

$$
\begin{aligned}
\Delta f &= \frac{f_w}{t_w/t_s} [\text{Hz}] \, , \\
d_{\max} &= \frac{c}{4\Delta f} [\text{m}] \, ,
\end{aligned}
\quad (11) $$

where $t_w$ denotes the sweep time and $t_s$ denotes the sampling interval. For example, in the case with $t_w = 1024$ [µs] and $t_s = 1$ [µs], the maximum distance is $d_{\max} = 384$ [m].

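Eq. (11) and the example values can be checked in a few lines (assuming $f_w = 200$ MHz, the sweep bandwidth used in Table II):

```python
# Sketch of Eq. (11) with the example values from the text
# (t_w = 1024 us sweep time, t_s = 1 us sampling interval; f_w = 200 MHz
# is an assumption, matching Table II).
c = 3e8
fw, tw, ts = 200e6, 1024e-6, 1e-6

delta_f = fw / (tw / ts)       # frequency step per sample [Hz]
d_max = c / (4 * delta_f)      # maximum measurable distance [m]
print(int(d_max))              # -> 384
```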
III. COMPUTER SIMULATION

A. Basic Performance

First, we describe the basic performance of the FM-CW radar in the 24 GHz band. The parameters for the computer simulation are listed in Table I. The center frequency is 24.15 GHz, and the bandwidths are 50, 100, 200, and 400 MHz. Note that the 400 MHz bandwidth is used only in the computer simulation, because of the standards in the Radio Law in Japan. The sweep time is 1024 µs, the sampling times of the sweep are 0.1, 1, and 10 µs, the number of FFT points is 4096, and the Hamming window is adopted as the window function in the signal processing.

We assume that a static target is located 10 m from the transmitter and receiver, and the distance spectra are output for various parameters. Fig. 3 shows the amplitude value of the distance spectrum versus measured distance for various sweep bandwidths. The result shows that the sweep bandwidth influences the distance resolution, and a wider bandwidth improves the resolution. In the case with $t_s = 1$ µs, the distance resolutions with $f_w = 50, 100, 200, 400$ MHz are ±5, ±1.5, ±1, ±0.5 m, respectively. Fig. 4 shows the amplitude value of the distance spectrum versus measured distance for various sampling times. The result shows that

Fig. 4. Error value for distance spectrum according to sampling interval.

Fig. 5. Distance spectrum for measuring moving target.

the sampling interval influences the error in the measured distance, and a shorter sampling interval reduces the distance error. In the case with $f_w = 200$ MHz, the error in the measured distance with $t_s = 10$ µs is about 0.5 m.

Fig. 5 shows the result of measuring a slowly moving target with $f_w = 200$ MHz and $t_s = 1$ µs. The target moved from 10 m to 20 m at intervals of 0.5 m. Fig. 5(a) shows

---PAGE_BREAK---

Fig. 6. Measured displacement.

the amplitude value versus measured distance versus target distance in a 3-dimensional view, and Fig. 5(b) shows the measured distance versus target distance in a 2-dimensional view. The color in (b) corresponds to the strength of the amplitude value in (a). From these figures, it is confirmed that the distance is measured correctly according to the positions of the moving target.

Fig. 6 shows the result of measuring a target with a small displacement; the measured displacement versus target displacement is output. The object is located 10 m from the receiver and moved from -5 mm to 5 mm at intervals of 0.1 mm. The small displacement is measured from the phase value of the distance spectrum, and the measured displacement corresponds to the target displacement. Note that the measured displacement is a relative displacement and does not correspond to the absolute distance between the receiver and the target object. A small displacement within ±3.11 mm is correctly measured with the parameters of the FM-CW radar system in this paper; however, a displacement of more than ±3.11 mm is ambiguous.

## B. Proposed target detection

As mentioned in the above section, the FM-CW radar system can measure the distance and the small displacement of one target object. However, it is a special case that only the signal reflected from a single target is received at the receiver. In general, the receiver may receive reflected signals from many objects. Therefore, when several objects are present while measuring the target distance, signal processing that extracts the distance spectrum of only the target is required.

The proposed method removes the signals from the other objects by differential detection of the distance spectrum. Fig. 7 shows the distance spectrum when the target object moves from 10 m to 20 m and the other objects are located at 15 m and 20 m. The transmitted signal is reflected by the target and the other objects, so the receiver receives several reflected signals. Therefore, the distance spectra of the other objects are also generated by the FM-CW radar system in Fig. 7(a), and the distance spectrum of the target cannot be detected clearly. In particular, when the reflection coefficient of the target is lower than those of the other objects, the distance spectra of the other objects have higher amplitude values than that of the target.

Fig. 7. Distance spectrum for measuring moving target distance with / without the differential detection in an environment where multiple objects are located.

In the proposed differential detection, the distance spectrum of the other objects, $P_0$, is first generated beforehand, as in Fig. 7(a). Then $P_0$ is subtracted from the distance spectrum $P$ of the target and the other objects. The differential detection thus yields the spectrum $P - P_0$, from which the distance spectra of the other objects have been removed, so only the distance spectrum of the desired target is detected. Fig. 7(b) shows the distance spectrum obtained by the proposed differential detection method; the distance spectrum of the target is correctly measured. Comparing the measured distance spectra in Fig. 7(a) and (b), it is clearly confirmed that the proposed method can detect the target distance by differential detection. The proposed differential detection can effectively detect the distance of a moving or static target among multiple reflections from static background objects.

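A minimal sketch of the differential detection (illustrative assumptions: unit-strength reflectors, static background objects at 15 m and 20 m, and a hypothetical target at 12 m). Subtracting the pre-recorded background spectrum $P_0$ from the measured spectrum $P$ cancels the static reflections and leaves only the target peak in $|P - P_0|$:

```python
import numpy as np

# Assumed parameters matching the paper's setup: f0 = 24.15 GHz, f_w = 200 MHz.
c = 3e8
f0, fw, n = 24.15e9, 200e6, 1024
f = f0 + np.linspace(-fw / 2, fw / 2, n)
x = np.linspace(0, 30, 3000)                        # candidate distances [m]
kernel = np.exp(-1j * 4 * np.pi * np.outer(x, f) / c)

def spectrum(distances):
    """Complex distance spectrum P(x) (Eq. (5)) for unit-strength reflectors."""
    v = sum(np.exp(1j * 4 * np.pi * f * d / c) for d in distances)
    return kernel @ v

P0 = spectrum([15.0, 20.0])                         # background objects only
P = spectrum([12.0, 15.0, 20.0])                    # target plus background

diff = np.abs(P - P0)                               # differential detection
d_est = x[np.argmax(diff)]                          # peak near the 12 m target
print(round(d_est, 1))
```

Because the spectrum is linear in the reflected signals, the static background cancels exactly here; in practice the cancellation is only approximate, which is why the paper measures $P_0$ beforehand in the actual environment.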
# IV. EXPERIMENTS

In order to evaluate the effectiveness of the proposed method for detecting the target distance and displacement, we developed an FM-CW radar system and carried out experiments with it in an actual environment. Table II lists the parameters. The developed FM-CW radar system obtained a certificate of conformity with the technical regulations in

---PAGE_BREAK---

TABLE II. PARAMETERS IN EXPERIMENTS

<table><thead><tr><td>Parameters</td><td>Value</td></tr></thead><tbody><tr><td>Center frequency f<sub>0</sub></td><td>24.15 GHz</td></tr><tr><td>Sweep bandwidth f<sub>w</sub></td><td>200 MHz</td></tr><tr><td>Sweep time t<sub>w</sub></td><td>1024 μs</td></tr><tr><td>Sampling time of sweep t<sub>s</sub></td><td>1 μs</td></tr><tr><td>Transmitter power output</td><td>0.007 W</td></tr><tr><td>Antenna gain</td><td>11 dBi</td></tr><tr><td>Range of distance</td><td>0 - 100 m</td></tr><tr><td>Range of relative displacement</td><td>±3.11 mm</td></tr></tbody></table>

Fig. 8. Distance spectrum for measuring moving target distance with / without the differential detection.

Article 38-6 Paragraph 1 of the Radio Law in Japan, and the developed FM-CW radar system conforms to ARIB standard T73 in Japan [1].

## A. Distance Spectrum

Fig. 8 shows the distance spectrum of a moving target. A person walked away from the FM-CW radar and then came close, between 2 [m] and 10 [m]. In Fig. 8(a), several distance spectra of the person and the background objects are output, and the distance spectrum of the moving person is not clearly detected. In order to detect the distance spectrum of the moving person with the differential detection method, the distance spectrum without the person is measured beforehand. By generating the distance spectrum of the background objects beforehand, the distance spectrum of the moving person is correctly detected in Fig. 8(b) with the proposed differential detection. Therefore, the FM-CW radar system can measure the movement of the target person effectively.

Fig. 9 shows the result of measuring the small displacement of human breathing. The movement of the human chest is measured within the range of relative small displacement. In Fig. 9, the period of breathing is detected to be about 4 [s] and the breathing movement is within about ±2 [mm].

## B. Example for application

Finally, we show an example application of the 24 GHz FM-CW radar system. Fig. 10 shows the setup of the FM-CW radar system for detecting human breathing in an actual environment. The FM-CW radar satisfies the safety guideline; the details of the safety guideline are described in the Appendix.

Fig. 11 shows the example of detecting human breathing.

Fig. 9. Displacement for measuring the movement of human breathing.

Fig. 10. Setup of FM-CW Radar for detecting human breathing.

Fig. 11. Example of application.

---PAGE_BREAK---

The distance spectrum in this example is measured by the following flow.

1) Measure the distance spectrum without any person.

2) A person comes to the bed. The radar receives signals reflected from the person's body.

3) The person lies asleep on the bed. The radar detects the person's breathing movement.

By generating the distance spectrum of the background objects without the person, only the distance spectrum of the person is detected. When the person comes within the range of the radar, the radar system detects the signals reflected from the person, and the distance spectra of the person's body are detected. After the person lies on the bed, the radar system detects the small displacement of the person's breathing movement. By using the differential detection method, the distance and small displacement of the moving object are clearly detected.

## V. CONCLUSION

In this paper, the design and performance of an FM-CW radar system in the 24 GHz band were described. In computer simulations, the basic performance of the FM-CW radar system was analyzed in terms of the distance resolution and the error value according to the sweep bandwidth and the sampling interval, respectively. Moreover, a differential detection method for detecting only the target object was proposed for measuring the distance and the displacement of the target in environments where multiple objects are located. In experiments, the distance spectrum of the target object was clearly detected by the differential detection method in such environments. Furthermore, an example application for detecting human breathing movement was shown. As a result, the 24 GHz FM-CW radar with the proposed differential detection method effectively detects the distance and the small displacement in environments where multiple objects are located.

## ACKNOWLEDGMENT

A part of this work was supported by the “Ashita wo Ninau Kanagawa Venture Project” of Kanagawa in Japan.

The authors thank Prof. Toshio Nojima of Hokkaido University in Japan for his valuable advice on analyzing the safety properties of the developed FM-CW radar system according to the safety guideline.

## REFERENCES

[1] ARIB STD-T73 Rev. 1.1, *Sensors for Detecting or Measuring Mobile Objects for Specified Low Power Radio Station*, Association of Radio Industries and Businesses Std.

[2] S. Miyake and Y. Makino, "Application of millimeter-wave heating to materials processing (special issue: recent trends on microwave and millimeter wave application technology)," *IEICE Transactions on Electronics*, vol. 86, no. 12, pp. 2365-2370, Dec. 2003.

[3] M. Skolnik, *Introduction to Radar Systems*. McGraw Hill, 2003.

[4] S. Fujimori, T. Uebo, and T. Iritani, "Short-range high-resolution radar utilizing standing wave for measuring of distance and velocity of a moving target," *Electronics and Communications in Japan, Part I: Communications*, vol. 89, no. 5, pp. 52-60, 2006.

[5] T. Uebo, Y. Okubo, and T. Iritani, "Standing wave radar capable of measuring distances down to zero meters," *IEICE Transactions on Communications*, vol. 88, no. 6, pp. 2609-2615, Jun. 2005.

[6] T. Saito, T. Ninomiya, O. Isaji, T. Watanabe, H. Suzuki, and N. Okubo, "Automotive FM-CW radar with heterodyne receiver," *IEICE Transactions on Communications*, vol. 79, no. 12, pp. 1806-1812, Dec. 1996.

[7] W. Butler, P. Poitevin, and J. Bjomholt, "Benefits of wide area intrusion detection systems using FMCW radar," in *Security Technology, 2007 41st Annual IEEE International Carnahan Conference on*, Oct. 2007, pp. 176-182.

[8] M. Skolnik, *Radar Handbook, Third Edition*. McGraw-Hill Education, 2008.

[9] W. Sediono and A. Lestari, "2D image reconstruction of radar INDERA," in *Mechatronics (ICOM), 2011 4th International Conference On*, May 2011, pp. 1-4.

[10] IEEE Std C95.1-2005, *IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz*.

[11] Ministry of Internal Affairs and Communications. [Online]. Available: http://www.tele.soumu.go.jp/resource/j/material/dwn/guide38.pdf

# APPENDIX

In general, electromagnetic waves must satisfy the guidelines on human exposure to electromagnetic fields instituted by various organizations. IEEE C95.1 in the USA [10] and the ICNIRP guidelines in Europe are examples, and the MIC has also instituted a guideline in Japan [11].

The 24 GHz FM-CW radar developed in this paper has the following properties. The power of the transmitter is 7 [mW], the transmitting antenna gain is 11 [dBi], the effective radiated power is 88 [mW], the radiation angle of the transmitting wave is about 50 [degrees], and the distance between the transmitter and the human is 2.5 [m]. According to the radar equation, the electric field strength $E$ and the power density $P$ on the human body are calculated as

$$
\begin{aligned}
E &= \frac{\sqrt{30 \times 0.088}}{2.5} = 0.65 \text{ [V/m]} , \\
P &= \frac{E^2}{z_0} = \frac{0.65^2}{120\pi} = 1.12 \times 10^{-3} \text{ [W/m}^2\text{]} = 1.12 \times 10^{-4} \text{ [mW/cm}^2\text{]} .
\end{aligned}
$$

According to the guideline [11], these parameters must satisfy

$$
\begin{aligned}
&E \leq 61.4 \text{ [V/m]} , \\
&P \leq 1 \text{ [mW/cm}^2\text{]} .
\end{aligned}
$$

Therefore, the developed 24 GHz FM-CW radar system sufficiently satisfies the conditions in the guideline.

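The appendix numbers can be reproduced as follows (a sketch using the standard far-field relation $E = \sqrt{30\,P_{\mathrm{ERP}}}/d$ and the free-space impedance $120\pi$; all values are taken from the text):

```python
import math

# Check of the appendix safety calculation: 7 mW transmitter, 11 dBi antenna
# gain, evaluated at 2.5 m from the antenna (values from the text).
p_tx = 0.007                          # transmitter power [W]
gain_db = 11.0                        # antenna gain [dBi]
d = 2.5                               # distance to the human body [m]

erp = p_tx * 10 ** (gain_db / 10)     # effective radiated power [W], ~0.088
E = math.sqrt(30 * erp) / d           # electric field strength [V/m]
P = E ** 2 / (120 * math.pi)          # power density [W/m^2]
P_mw_cm2 = P * 0.1                    # 1 W/m^2 = 0.1 mW/cm^2

print(round(E, 2))                    # -> 0.65
assert E <= 61.4 and P_mw_cm2 <= 1.0  # guideline limits from [11]
```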
samples_new/texts_merged/230879.md
ADDED
@@ -0,0 +1,885 @@
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
# Imaging Below the Diffraction Limit: A Statistical Analysis

Morteza Shahram and Peyman Milanfar, Senior Member, IEEE

**Abstract**—The present paper is concerned with the statistical analysis of the resolution limit in a so-called “diffraction-limited” imaging system. The canonical case study is that of incoherent imaging of two closely-spaced sources of possibly unequal brightness. The objective is to study how far beyond the classical Rayleigh limit of resolution one can reach at a given signal-to-noise ratio (SNR). The analysis uses tools from statistical detection and estimation theory. Specifically, we derive explicit relationships between the minimum detectable distance between two closely-spaced point sources imaged incoherently and the given SNR. For completeness, asymptotic performance analysis for the estimation of the unknown parameters is carried out using the Cramér-Rao bound. To gain maximum intuition, the analysis is carried out in one dimension, but it extends readily to the two-dimensional case and to more practical models.

**Index Terms**—Cramér-Rao bound, diffraction, estimation, hypothesis test, imaging, Rayleigh limit, resolution, super-resolution.

## I. INTRODUCTION

In incoherent optical imaging systems the image of an ideal point source is captured as a spatially extended pattern known as the point-spread function (PSF), as shown for the one-dimensional case in Fig. 1. In two dimensions, this function is the well-known Airy diffraction pattern [1]. When two closely-located point sources are measured through this kind of optical imaging system, the measured signal is the incoherent sum of the respective shifted point-spread functions. According to the classical Rayleigh criterion, two incoherent point sources are “barely resolved” when the central peak of the diffraction pattern generated by one point source falls exactly on the first zero of the pattern generated by the second one. A more detailed and complete explanation of incoherent imaging and related topics can be found in [1] and [2].

Fig. 1. Image of a point source captured by diffraction-limited imaging.

The Rayleigh criterion for resolution in an imaging system is generally considered an accurate estimate of the limits in practice. But under certain conditions related to signal-to-noise ratio (SNR), resolution beyond the Rayleigh limit is indeed possible. This can be called the super-resolution limit [3]. Indeed, at sufficiently high sampling rates, and in the absence of noise, arbitrarily small details can be resolved.

To gain maximum intuition and perspective from the foregoing analysis, all discussion herein will be carried out in the one-dimensional case, which can later be extended to the two-dimensional case. To begin, let us assume that the original signal of interest is the sum of two impulse functions separated by a small distance $d$:¹

$$ \sqrt{\alpha}\,\delta\left(x - \frac{d}{2}\right) + \sqrt{\beta}\,\delta\left(x + \frac{d}{2}\right). \quad (1) $$

As mentioned before, the image will be the incoherent sum of two point-spread functions, resulting from an imaging aperture (or slit in the one-dimensional case, as seen in Fig. 2)

$$ s(x; \alpha, \beta, d) = \alpha h\left(x - \frac{d}{2}\right) + \beta h\left(x + \frac{d}{2}\right) \quad (2) $$

where for our specific case of incoherent imaging $h(x) = \operatorname{sinc}^2(x) = [\sin(\pi x)/(\pi x)]^2$, but other PSF's can also be considered. Finally, the measured signal consists of discretized samples corrupted with additive (readout) noise. Given samples at $x_k$ ($k = 1, \dots, N$) of the measured signal, we can write the measurement model as

$$ g(x_k) = s(x_k; \alpha, \beta, d) + w(x_k) = \alpha h\left(x_k - \frac{d}{2}\right) + \beta h\left(x_k + \frac{d}{2}\right) + w(x_k) \quad (3) $$

where $w(x_k)$ is assumed to be a zero-mean Gaussian white noise process with variance $\sigma^2$.
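As a concrete illustration, the sampled measurement model (3) is easy to simulate. This is a minimal sketch (not from the paper): the grid of 41 samples on $[-10, 10]$ and the noise level are illustrative choices.

```python
import math
import random

def sinc2(x):
    """Diffraction-limited 1-D PSF h(x) = sinc^2(x) = [sin(pi x)/(pi x)]^2."""
    if x == 0.0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

def measure(alpha, beta, d, xs, sigma, rng):
    """Samples g(x_k) = alpha*h(x_k - d/2) + beta*h(x_k + d/2) + w(x_k),
    with w a zero-mean white Gaussian noise of variance sigma^2."""
    return [alpha * sinc2(x - d / 2) + beta * sinc2(x + d / 2)
            + rng.gauss(0.0, sigma) for x in xs]

# Example grid: 41 equally spaced samples on [-10, 10] (two samples per unit).
xs = [-10 + 0.5 * k for k in range(41)]
g = measure(1.0, 1.0, 0.5, xs, sigma=0.05, rng=random.Random(0))
```

For $d = 0$ and no noise this collapses to the single-peak signal $(\alpha + \beta)h(x_k)$, which is exactly the null-hypothesis model used below.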

With the present definition, the Rayleigh limit corresponds to $d=1$, as can be seen in Figs. 1 and 2. This means that for values $d < 1$, the two point sources are (in the classical Rayleigh sense)

Manuscript received March 3, 2003; revised November 3, 2003. This work was supported in part by NSF CAREER Grant CCR-9984246. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Thierry Blu.

The authors are with the Department of Electrical Engineering, University of California, Santa Cruz, CA 95064 USA (e-mail: shahram@ee.ucsc.edu; milanfar@ee.ucsc.edu).

Digital Object Identifier 10.1109/TIP.2004.826096

¹From now on we refer to $\alpha$ and $\beta$ as intensities, and we assume that $\alpha, \beta > 0$. Also, note that this model (for now) assumes point sources symmetrically placed about the (known) origin. This model will be generalized later in the paper.

Fig. 2. Incoherent imaging of two closely located point sources.

“unresolvable.” It is important to note that the Rayleigh criterion does not consider the presence of noise.

In the last forty years or so, there have been several attempts at, and more recently surveys of, the problem of resolution from the statistical viewpoint. Of these, the most significant earliest works were done by Helstrom [4]–[6]. In particular, in [5] and [6], he derived lower bounds on the mean-squared error of unbiased estimators for the source positions, the distance between the sources, and the radiance values, using the Cramér-Rao inequality. In [5], he considered two separate situations. In the first, the problem of whether any signal was present or not was treated, whereas in the second, the question of whether one or two sources were present was treated. (This second scenario is, of course, what interests us in the present paper.) Helstrom described a geometrical-optics field model of the problem involving a general radiance distribution and point-spread function, for objects with arbitrary shape. To study the case of the circular aperture and point sources, he applied a complex and remarkable set of approximations and simplifications of the initial model. Also, he assumed that the distance between the point sources is known to the detector.

In [3] and [7], an approximate statistical theory was given to compute the number of detected photons (a notion similar to signal-to-noise ratio) required for a certain desired resolution, and the resolution achievable by image-restoration techniques was also investigated by numerical and iterative deconvolution. In these papers, resolution was defined as the separation of the two point sources that can be resolved through a deconvolution procedure. In [7], the analysis of the achievable resolution in deconvolved astronomical images was studied based on a criterion similar to Rayleigh's.

In [9] and [12], two-point resolution of imaging systems was studied using a model-fitting theory, where the probability of resolution was computed based on the structural change of the stationary points of the likelihood function. Also, in [11], the Cramér-Rao lower bound formulation was used to study the limits to attainable precision of the estimated distance between the two point sources. Assuming a Gaussian PSF, the authors determined a lower bound for the estimation error variance. Also, in [10], the reader can find a very comprehensive review of past and present approaches to the concept of resolution. In this paper, we also compute the Cramér-Rao (CR) lower bound in exact, closed form for two different cases. This analysis is in fact extendable to any point-spread function.

Finally, an interesting, more recent paper [13] views the resolution problem from the information-theory perspective. This line of thinking, again with simplifying approximations, is used to compute limits of resolution enhancement using Shannon's theorem of maximum transferable information via a noisy channel. The paper [13] considers the case of equally bright nearby point sources and derives an expression relating resolution (here defined as the inverse of the discernible distance between two equally bright point sources) logarithmically to the SNR.

The results of our paper extend, illuminate, and unify the earlier works in this field using more modern tools in statistical signal processing. Namely, we use locally optimal tests, which lead to more explicit, readily interpreted, and applicable results. In addition, we study various cases including unknown and/or unequal intensities, which have not been considered in their full complexity before.² The present results clarify, arguably for the first time, the specific effects of the relevant parameters on the definition of resolution, and its limits, as needed in practice.

In this paper we formulate the problem of two-point resolution in terms of statistical estimation/detection. Our approach is to precisely define a quantitative measure of resolution in statistical terms by addressing the following question: what is the minimum separation between two point sources (maximum attainable resolution limit) that is detectable at a given signal-to-noise ratio (SNR)? In contrast to earlier definitions of resolution, there is little ambiguity in our proposed definition, and all parameters (PSF, noise variance, sampling rate, etc.) will be explicitly present in the formulation. Our earlier work on this problem was presented in [14], which essentially covers the material in Section IV-A of this paper.

The organization of the paper is as follows. Section II will explain and formulate our definition, and the corresponding statistical framework and models, in detail. In Section III, in order to use linear detection/estimation structures, we will discuss a signal approximation approach. In Section IV, we will present our statistical analysis for different cases of increasing generality. The asymptotic performance of the maximum likelihood estimate of the unknown parameters in terms of the Cramér-Rao lower bound will be discussed in Section V. Finally, some comments and conclusions will be presented in Section VI.

## II. STATISTICAL ANALYSIS FRAMEWORK

The question of whether one or two peaks are present in the measured signal can be formulated in statistical terms. Specifically, for the proposed model the equivalent question is whether the parameter $d$ is equal to zero or not. If $d = 0$ then we only have one peak, and if $d > 1$ then there are two resolved peaks according to the Rayleigh criterion. So the problem of interest revolves around values of $d$ in the range $0 \le d < 1$. Therefore, we can define two hypotheses, which will form the basis of our statistical framework. Namely, let $H_0$ denote the null hypothesis that $d = 0$ (one peak present) and let $H_1$ denote the alternative hypothesis that $d > 0$ (two peaks present)

²Reference [9] considered the case of unequal intensities in a different framework.

$$
\begin{cases}
H_0: d = 0 & \text{One peak is present} \\
H_1: d > 0 & \text{Two peaks are present.}
\end{cases}
\tag{4}
$$

Given discrete samples of the measured signal, we can rewrite the problem as

$$
\begin{cases}
H_0: & \mathbf{g} = \mathbf{s}_0 + \mathbf{w} \\
H_1: & \mathbf{g} = \mathbf{s} + \mathbf{w}
\end{cases}
\tag{5}
$$

where

$$
\begin{align*}
\mathbf{g} &= [g(x_1), \dots, g(x_N)]^T, \\
\mathbf{w} &= [w(x_1), \dots, w(x_N)]^T, \\
\mathbf{s} &= [s(x_1; \alpha, \beta, d), \dots, s(x_N; \alpha, \beta, d)]^T, \\
\mathbf{s}_0 &= [s_0(x_1), \dots, s_0(x_N)]^T,
\end{align*}
$$

and

$$
s(x_k; \alpha, \beta, d) = \alpha h \left( x_k - \frac{d}{2} \right) + \beta h \left( x_k + \frac{d}{2} \right) \tag{6}
$$

$$
s_0(x_k) = s(x_k; \alpha, \beta, d)\big|_{d=0} = (\alpha + \beta)h(x_k). \tag{7}
$$

This is a problem of detecting a deterministic signal with unknown parameters ($\alpha$, $\beta$, and $d$, in general). From (5), since the probability density function (PDF) under $H_1$ is not known exactly, it is not possible to design optimal detectors (in the Neyman-Pearson sense) by simply forming the likelihood ratio. When unknown parameters appear in the PDF's, the general structure of composite hypothesis testing is required [16, p. 248]. There are two major approaches to composite hypothesis testing. The first is to use explicit prior knowledge as to the likely values of the parameters of interest and apply a Bayesian method to the detection problem. However, there is generally no such a priori information available. The second approach, the generalized likelihood ratio test (GLRT), first computes maximum likelihood (ML) estimates of the unknown parameters, and then uses these estimated values to form the standard Neyman-Pearson (NP) detector. Our focus will be on GLRT-type methods because of their less restrictive assumptions and easier computation and implementation; but most importantly, because uniformly most powerful (UMP) and locally most powerful (LMP) tests can be developed for the parameter range $0 \le d < 1$.

To be a bit more specific, consider the case where it is known that $\alpha = \beta = 1$, with the parameter $d$ unknown. The GLRT approach decides $H_1$ if

$$
L(\mathbf{g}) = \frac{\max_{d} p(\mathbf{g}; d, H_1)}{p(\mathbf{g}; H_0)} = \frac{p(\mathbf{g}; \hat{d}, H_1)}{p(\mathbf{g}; H_0)} > \gamma \tag{8}
$$

where $\hat{d}$ denotes the ML estimate of $d$, and $p(\mathbf{g}; d, H_1)$ and $p(\mathbf{g}; H_0)$ are the PDF's under $H_1$ and $H_0$, respectively. Assuming additive white Gaussian noise (AWGN) with variance $\sigma^2$ and $\hat{\mathbf{s}} = [s(x_1; 1, 1, \hat{d}), \dots, s(x_N; 1, 1, \hat{d})]^T$, we will have

$$
\begin{align*}
L(\mathbf{g}) &= \frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{1}{2\sigma^2} \|\mathbf{g} - \hat{\mathbf{s}}\|^2\right)}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{1}{2\sigma^2} \|\mathbf{g} - \mathbf{s}_0\|^2\right)} \\
&= \exp\left(\frac{1}{2\sigma^2}\left(-\|\hat{\mathbf{s}}\|^2 + \|\mathbf{s}_0\|^2 + 2\mathbf{g}^T(\hat{\mathbf{s}} - \mathbf{s}_0)\right)\right).
\end{align*}
$$

Therefore, $H_1$ will be chosen if

$$
- \| \hat{\mathbf{s}} \|^{2} + 2 \mathbf{g}^{T} (\hat{\mathbf{s}} - \mathbf{s}_{0}) > \gamma'. \tag{9}
$$

Equivalently,

$$
\sum_{k=1}^{N} \left\{ -\left[\alpha h\left(x_k - \frac{\hat{d}}{2}\right) + \beta h\left(x_k + \frac{\hat{d}}{2}\right)\right]^2 + 2\left[\alpha h\left(x_k - \frac{\hat{d}}{2}\right) + \beta h\left(x_k + \frac{\hat{d}}{2}\right) - (\alpha + \beta)h(x_k)\right] g(x_k) \right\} > \gamma' \tag{10}
$$

where the ML estimate of $d$ in the above involves solving the following minimization problem:

$$
\min_{d} \sum_{k=1}^{N} \left[ \alpha h \left( x_k - \frac{d}{2} \right) + \beta h \left( x_k + \frac{d}{2} \right) - g(x_k) \right]^2 \Rightarrow \hat{d}. \tag{11}
$$

It should be clear from the above that this detection/estimation problem is highly nonlinear. However, since the range of interest is the set of values $0 \le d < 1$, representing resolution beyond the Rayleigh limit, it is quite appropriate for the purposes of our analysis to approximate the model of the signal around $d = 0$, and to apply locally optimal detectors. This is the approach we take.
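The nonlinear ML problem (11) can nonetheless be solved by brute force over the range of interest $0 \le d < 1$. This is a minimal, illustrative sketch (a hypothetical grid search stands in for a proper numerical optimizer):

```python
import math

def sinc2(x):
    """PSF h(x) = sinc^2(x)."""
    return 1.0 if x == 0.0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2

def ml_distance(g, xs, alpha, beta, grid):
    """Grid-search ML estimate of d per (11): the d minimizing the squared
    misfit between the two-peak model and the measured samples g."""
    def cost(d):
        return sum((alpha * sinc2(x - d / 2) + beta * sinc2(x + d / 2) - gk) ** 2
                   for x, gk in zip(xs, g))
    return min(grid, key=cost)

# Noiseless sanity check: two equal sources at separation d = 0.8.
xs = [-10 + 0.5 * k for k in range(41)]
d_true = 0.8
g = [sinc2(x - d_true / 2) + sinc2(x + d_true / 2) for x in xs]
d_hat = ml_distance(g, xs, 1.0, 1.0, [k / 100 for k in range(100)])
```

In the noiseless case the cost is exactly zero at the true separation, so the search recovers it; with noise the estimate degrades, which is precisely what the detection analysis below quantifies.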

## III. (QUADRATIC) MODEL APPROXIMATION

Much of the complexity we encountered in the earlier formulation of the problem can be remedied by appealing to an approximation of the signal model. This approximate model is derived by expanding the signal about small parameter values around $d = 0$. As alluded to earlier, this approximation is quite adequate in the sense that all the parameter values of interest for resolution beyond the Rayleigh diffraction limit are contained in the range $[0, 1]$ anyway.

We consider the Taylor series expansion of $s(x_k; \alpha, \beta, d)$ around $d = 0$, with all other variables fixed.³ More specifically,

$$
s(x_k; \alpha, \beta, d) \approx (\alpha + \beta)h(x_k) + \frac{\beta - \alpha}{2}d\,h_1(x_k) + \frac{\alpha + \beta}{8}d^2 h_2(x_k) \quad (12)
$$

where $h_1(\cdot)$ and $h_2(\cdot)$ denote the first- and second-order derivatives of $h(\cdot)$, and where for $h(x) = \operatorname{sinc}^2(x)$

$$
h_1(x_k) = \left. \frac{\partial h(x)}{\partial x} \right|_{x=x_k} = \frac{2\sin(\pi x_k)\left(\pi x_k \cos(\pi x_k) - \sin(\pi x_k)\right)}{\pi^2 x_k^3} \quad (13)
$$

$$
h_2(x_k) = \left. \frac{\partial^2 h(x)}{\partial x^2} \right|_{x=x_k} = \frac{(2\pi^2 x_k^2 - 3) \cos(2\pi x_k) - 4\pi x_k \sin(2\pi x_k) + 3}{\pi^2 x_k^4}. \quad (14)
$$

³It is important here to note that this is an approximation about the *parameter* of interest $d$, and not the variable $x$; as such it is therefore a global approximation of the function.

In the above approximation, we elect to keep terms up to order 2 of the Taylor expansion. This gives a rather more accurate representation of the signal, and more importantly, if we only kept the first-order term, then in the case $\alpha = \beta$ the first-order term would simply vanish and *no* term in $d$ would appear in the approximation. The reader can find a more detailed discussion on the accuracy of this approximation in Appendix A. The proposed approximation simplifies the hypothesis testing problem to essentially a linear detection problem (as we will see in the next section). The approximation is helpful in that we can carry out our analysis more simply. In addition, it leads to a general form of locally optimum detectors [16, p. 217], as will be discussed later.
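The accuracy claim is easy to check numerically: for separations well inside the Rayleigh range, the quadratic model (12) tracks the exact two-peak signal closely. A minimal sketch (using finite-difference derivatives rather than the closed forms, so it stands on its own; the values of $\alpha$, $\beta$, $d$ are illustrative):

```python
import math

def h(x):
    """PSF h(x) = sinc^2(x)."""
    return 1.0 if x == 0.0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2

def d1(x, e=1e-3):
    """First derivative of h via central difference."""
    return (h(x + e) - h(x - e)) / (2 * e)

def d2(x, e=1e-3):
    """Second derivative of h via central difference."""
    return (h(x + e) - 2 * h(x) + h(x - e)) / e ** 2

alpha, beta, d = 1.3, 0.7, 0.25
xs = [-10 + 0.1 * k for k in range(201)]
exact = [alpha * h(x - d / 2) + beta * h(x + d / 2) for x in xs]
quad = [(alpha + beta) * h(x) + (beta - alpha) / 2 * d * d1(x)
        + (alpha + beta) / 8 * d ** 2 * d2(x) for x in xs]
# Worst-case error of the quadratic model (12), relative to the signal peak.
worst = max(abs(a - b) for a, b in zip(exact, quad)) / max(exact)
```

The relative error is well below one percent here; it grows with $d^3$ (the first neglected Taylor term), consistent with the discussion in Appendix A.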

Continuing with vector notation, we have

$$ \mathbf{s} \approx (\alpha + \beta)\mathbf{h} + \frac{\beta - \alpha}{2} d\mathbf{h}_1 + \frac{\alpha + \beta}{8} d^2\mathbf{h}_2 \quad (15) $$

where

$$
\begin{aligned}
\mathbf{h} &= [h(x_1), \dots, h(x_N)]^T \\
\mathbf{h}_1 &= [h_1(x_1), \dots, h_1(x_N)]^T \\
\mathbf{h}_2 &= [h_2(x_1), \dots, h_2(x_N)]^T.
\end{aligned}
$$

Writing this in the form of the hypotheses described earlier in (5),

$$
\left\{
\begin{array}{l}
H_0: \tilde{\mathbf{g}} = (\alpha + \beta)\mathbf{h} + \mathbf{w} \\
H_1: \tilde{\mathbf{g}} = (\alpha + \beta)\mathbf{h} + \frac{\beta-\alpha}{2} d\mathbf{h}_1 + \frac{\alpha+\beta}{8} d^2\mathbf{h}_2 + \mathbf{w}
\end{array}
\right.
\quad (16)
$$

where we distinguish $\tilde{\mathbf{g}}$ from $\mathbf{g}$ due to the approximated model. According to this model, we define the measured signal-to-noise ratio (per sample) as follows:

$$ \text{SNR} = \frac{1}{N\sigma^2} \left\| (\alpha + \beta)\mathbf{h} + \frac{\beta - \alpha}{2} d\mathbf{h}_1 + \frac{\alpha + \beta}{8} d^2\mathbf{h}_2 \right\|^2 . \quad (17) $$

For any symmetric PSF $h(x)$, and in the case of above-Nyquist sampling, the following relations can be verified:

$$
\begin{aligned}
\mathbf{h}^T \mathbf{h}_1 &= 0 \\
\mathbf{h}_2^T \mathbf{h}_1 &= 0 \\
\mathbf{h}^T \mathbf{h}_2 &= -\mathbf{h}_1^T \mathbf{h}_1.
\end{aligned}
$$

Therefore, we can rewrite (17) in the following form:

$$
\begin{aligned}
\text{SNR} ={}& \frac{1}{N\sigma^2} \left[ (\alpha + \beta)^2 E_0 + \left(\frac{\beta - \alpha}{2}\right)^2 d^2 E_1 \right. \\
& \qquad \left. + \left(\frac{\alpha + \beta}{8}\right)^2 d^4 E_2 - \left(\frac{\alpha + \beta}{2}\right)^2 d^2 E_1 \right] \\
={}& \frac{1}{N\sigma^2} \left[ (\alpha + \beta)^2 E_0 - \alpha\beta d^2 E_1 + \left(\frac{\alpha + \beta}{8}\right)^2 d^4 E_2 \right]
\end{aligned}
\quad (18)
$$

where we define

$$ E_0 = \mathbf{h}^T \mathbf{h} = f_s \int_{-\infty}^{+\infty} h^2(x)\, dx \quad (19) $$

$$ E_1 = \mathbf{h}_1^T \mathbf{h}_1 = f_s \int_{-\infty}^{+\infty} \left[ \frac{\partial h(x)}{\partial x} \right]^2 dx \quad (20) $$

$$ E_2 = \mathbf{h}_2^T \mathbf{h}_2 = f_s \int_{-\infty}^{+\infty} \left[ \frac{\partial^2 h(x)}{\partial x^2} \right]^2 dx \quad (21) $$

as energy terms.⁴

⁴In above-Nyquist sampling, the SNR is independent of $N$ (and $f_s$) since the energy terms are all proportional to $f_s$. See Appendix B for details and explicit computations of these energy terms for the case of $h(x) = \operatorname{sinc}^2(x)$.
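These identities and the proportionality of the energy terms to $f_s$ are easy to confirm numerically. A minimal sketch, with dense sampling over a wide window standing in for above-Nyquist sampling and finite-difference derivatives standing in for (13)-(14); the closed-form targets $E_0/f_s = 2/3$, $E_1/f_s = 4\pi^2/15$, $E_2/f_s = 32\pi^4/105$ are our own evaluation for $h=\operatorname{sinc}^2$, consistent with the ratios $64E_0/E_2 = 140/\pi^4$ and $16E_1/E_2 = 14/\pi^2$ used in Section IV-A:

```python
import math

def h(x):
    """PSF h(x) = sinc^2(x)."""
    return 1.0 if x == 0.0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2

fs = 16.0                                  # well above the Nyquist rate
xs = [k / fs for k in range(-800, 801)]    # symmetric grid covering [-50, 50]
e = 1e-4                                   # finite-difference step
h0 = [h(x) for x in xs]
h1 = [(h(x + e) - h(x - e)) / (2 * e) for x in xs]
h2 = [(h(x + e) - 2 * h(x) + h(x - e)) / e ** 2 for x in xs]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

E0, E1, E2 = dot(h0, h0), dot(h1, h1), dot(h2, h2)
```

On a symmetric grid the first two identities hold by parity (h even, h₁ odd, h₂ even); the third, a discrete integration by parts, holds to the accuracy of the sampling.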

## IV. DETECTION THEORY FOR THE APPROXIMATED MODEL

In this section, we develop detection strategies for the hypothesis testing problem of interest based upon the approximated model. It is illuminating to study the various cases of interest in order. Our earlier assumptions were equal, known intensities, symmetrically located point sources about a given center, and the energy constraint $\alpha + \beta = 2$. In the interest of clarity and ease of exposition, we start with the case when all these assumptions hold. Then we will extend the discussion in order of increasing levels of generality by relaxing an assumption at each step. Namely, we will treat the problem for the following cases:

• the case of equal, known intensities $\alpha = \beta = 1$, with symmetrically located point sources;

• the case of unknown intensities but $\alpha + \beta = 2$, with symmetrically located point sources;

• the case of unknown intensities but $\alpha + \beta = 2$, with asymmetrically located point sources;⁵

• the case of unknown intensities, with asymmetrically located point sources.

By considering (16), we notice that when $\alpha + \beta = 2$ is known to the detector (the first three cases), $(\alpha+\beta)\mathbf{h}$ is a common known term in both hypotheses, and it is independent of $d$. Therefore, we may simplify further:

$$
\left\{
\begin{array}{l}
H_0: \mathbf{y} = \mathbf{w} \\
H_1: \mathbf{y} = \frac{\beta-\alpha}{2} d\mathbf{h}_1 + \frac{\alpha+\beta}{8} d^2\mathbf{h}_2 + \mathbf{w}
\end{array}
\right.
\quad (22)
$$

where $\mathbf{y} = \tilde{\mathbf{g}} - (\alpha + \beta)\mathbf{h}$. As we began to describe earlier, when $\alpha = \beta$, the hypothesis test reduces to the case of detecting a known signal with unknown positive amplitude ($D = d^2$). For this case, there exist well-known optimal detection strategies.

## A. The Case of Equal Intensities, Symmetrically Located Point Sources

When $\alpha = \beta = 1$, (22) reduces to

$$
\left\{
\begin{array}{l}
H_0: \mathbf{y} = \mathbf{w} \\
H_1: \mathbf{y} = \frac{d^2}{4}\mathbf{h}_2 + \mathbf{w}.
\end{array}
\right.
\quad (23)
$$

It is readily shown that, given this model, the ML estimate of the parameter $d^2$ is given by

$$ \hat{d}^2 = 4 (\mathbf{h}_2^T \mathbf{h}_2)^{-1} \mathbf{h}_2^T \mathbf{y}. \quad (24) $$
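As a sanity check on (24): if $\mathbf{y}$ is generated noiselessly from the approximate model itself, the matched-filter estimate returns $d^2$ exactly. A minimal sketch, with $\mathbf{h}_2$ obtained by finite differences on the illustrative grid used earlier:

```python
import math

def h(x):
    """PSF h(x) = sinc^2(x)."""
    return 1.0 if x == 0.0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2

e = 1e-4
xs = [-10 + 0.5 * k for k in range(41)]
h2 = [(h(x + e) - 2 * h(x) + h(x - e)) / e ** 2 for x in xs]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

d = 0.6
y = [(d ** 2 / 4) * v for v in h2]        # noiseless H1 signal of (23)
d2_hat = 4 * dot(h2, y) / dot(h2, h2)     # ML estimate per (24)
```

With noise added, the estimate is unbiased with variance $16\sigma^2/E_2$, which is what drives the detection performance derived next.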

Next, the test statistic resulting from the (generalized) Neyman-Pearson likelihood ratio is given by

$$ T(\mathbf{y}) = \frac{1}{\sigma^2} (\mathbf{h}_2^T \mathbf{h}_2)^{-1} (\mathbf{h}_2^T \mathbf{y})^2 . \quad (25) $$

We note that the expression for the test statistic is essentially an energy detector with the condition that the value of $d^2$ is in fact estimated from the data itself. The detector structure, due to our knowledge of the sign of the unknown distance parameter, effectively produces a one-sided test, and hence is in fact a uniformly most powerful (UMP) detector in the sense that it produces the highest detection probability for all values of the unknown parameter, and for a given false-alarm rate [16, p. 194]. Therefore, the above test statistic can simply be replaced by

$$ T'(\mathbf{y}) = \sqrt{T(\mathbf{y})} = \frac{1}{\sigma} (\mathbf{h}_2^T \mathbf{h}_2)^{-1/2}\, \mathbf{h}_2^T \mathbf{y}. \quad (26) $$

⁵Where point sources are located at $-d_1$ and $+d_2$ instead of $-(d/2)$ and $(d/2)$.

For any given data set $\mathbf{y}$, we decide $H_1$ if the statistic exceeds a specified threshold

$$ T'(\mathbf{y}) > \gamma. \quad (27) $$

The choice of $\gamma$ is motivated by the level of tolerable false alarm (or false positives) in a given problem, but is typically kept very low.⁶ The detection rate ($P_d$) and false-alarm rate ($P_f$) for this detector are related as [16, p. 254]

$$ P_d = Q\left(Q^{-1}(P_f) - \sqrt{\eta}\right) \quad (28) $$

where

$$ \eta = \frac{d^4}{16} \frac{E_2}{\sigma^2} \quad (29) $$

and $Q$ is the right-tail probability function for a standard Gaussian random variable (zero mean and unit variance), and $Q^{-1}$ is the inverse of this function [16, p. 20]. A particularly intriguing and useful relationship is the behavior of the smallest peak separation $d$ which can be detected with very high probability (say 0.99) and very low false-alarm rate (say $10^{-6}$) at a given SNR. According to (18), (28), and (29), the relation between $d_{min}$ and the required SNR can be made explicit:

$$ \begin{align} \text{SNR} &= (Q^{-1}(P_f) - Q^{-1}(P_d))^2 \frac{64E_0 - 16d^2E_1 + d^4E_2}{Nd^4E_2} \tag{30} \\ &= \frac{1}{N}(Q^{-1}(P_f) - Q^{-1}(P_d))^2 \nonumber \\ &\quad \times \left( \frac{64E_0}{E_2} \frac{1}{d^4} - \frac{16E_1}{E_2} \frac{1}{d^2} + 1 \right). \tag{31} \end{align} $$

The above expression gives an implicit relation between the smallest detectable distance between the two (equal-intensity) sources and the particular SNR. As an example, for $h(x) = \operatorname{sinc}^2(x)$ and for the specified choice of $P_d = 0.99$ and $P_f = 10^{-6}$, if we collect $N$ equally spaced samples at $\{x_k\}$ within the interval $[-10, 10]$ at the Nyquist rate, we have

$$ \begin{aligned} \text{SNR} &= 50.12\, \frac{\frac{140}{\pi^4} - \frac{14}{\pi^2}d^2 + d^4}{Nd^4} \\ &= \frac{72.04 - 71.1d^2 + 50.12d^4}{Nd^4}. \end{aligned} $$
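The numerical coefficients above can be reproduced in a few lines. A minimal sketch using the standard-normal quantile from Python's `statistics` module; the energy-term ratios for $\operatorname{sinc}^2$ are the ones appearing in the expression above:

```python
import math
from statistics import NormalDist

def Qinv(p):
    """Inverse right-tail probability of the standard Gaussian."""
    return NormalDist().inv_cdf(1.0 - p)

# The factor (Q^{-1}(P_f) - Q^{-1}(P_d))^2 = 50.12 for P_d = 0.99, P_f = 1e-6.
c0 = (Qinv(1e-6) - Qinv(0.99)) ** 2

def snr_required(d, N):
    """Required per-sample SNR (linear scale) to detect separation d, per (31),
    for h = sinc^2 where 64*E0/E2 = 140/pi^4 and 16*E1/E2 = 14/pi^2."""
    return c0 / N * ((140 / math.pi ** 4) / d ** 4
                     - (14 / math.pi ** 2) / d ** 2 + 1)
```

For instance, halving the separation from $d=1$ to $d=0.5$ raises the required SNR by roughly 12 dB, reflecting the approximate $d^{-4}$ power law.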

A plot of this function is shown in Fig. 3. It is worth noting that in (31), the term involving $d^{-4}$ dominates for small $d$. Therefore, a reasonably informative (but approximate) way to write the SNR is

$$ \text{SNR} \approx \frac{1}{N} (Q^{-1}(P_f) - Q^{-1}(P_d))^2\, \frac{64E_0}{E_2} \frac{1}{d^4} = \frac{c}{Nd^4} \quad (32) $$

where the coefficient $c$ is a function only of the selected $P_f$ and $P_d$. It is worth noting that for any sampling rate higher than the Nyquist rate, we can rewrite $c$ in (32) as follows:

$$ c = 64(Q^{-1}(P_f) - Q^{-1}(P_d))^2\, \frac{\int_{-\infty}^{+\infty} h^2(x)\, dx}{\int_{-\infty}^{+\infty} \left[ \frac{\partial^2 h(x)}{\partial x^2} \right]^2 dx}. \quad (33) $$

⁶In [9] and [12] a similar criterion (in a different framework) has been proposed, where a sign test (i.e., a fixed threshold) is applied to decide whether one or two point sources are present. This approach gives a detector with a fixed false-alarm rate.

Fig. 3. Minimum detectable $d$ as a function of SNR (in dB) at the Nyquist rate (exact and approximate).

Fig. 4. Minimum detectable $d$ versus SNR (in dB) at the Nyquist rate, and at twice the Nyquist rate.

A plot of the approximate expression in (32) is also shown in Fig. 3, to be compared against the exact expression (31). The above relation (32) is a neat and rather intuitive power law that one can use to, for instance, understand the SNR required to achieve a particular resolution level of interest below the diffraction limit. Fig. 4 shows the curves defined by (30) for different sampling rates, namely the Nyquist rate and twice the Nyquist rate. As one would expect, the minimum detectable $d$ becomes smaller as the number of samples increases, but it does not do so at a very fast rate, because of the proportionality between SNR and the sampling rate.⁷

## B. The Case of Unknown α and β, Symmetrically Located Point Sources

In this section we discuss a more general case where neither the intensities α and β, nor the distance $d$, are known.⁸

⁷Similar analysis for the two-dimensional extension of this problem is presented in [22].

⁸But we assume that $\alpha + \beta = 2$ is known to the detector.
|
| 360 |
+
---PAGE_BREAK---
|
| 361 |
+
|
| 362 |
+
(22) leads to a detection problem defined in terms of a linear
|
| 363 |
+
model over the parameter set $\theta$ defined as follows:
|
| 364 |
+
|
| 365 |
+
$$
\begin{align}
y &= H\theta + w \tag{34} \\
H &= [\mathbf{h}_1, \mathbf{h}_2] \tag{35} \\
\theta &= \begin{bmatrix} \frac{d(\alpha - \beta)}{2} \\ \frac{d^2}{4} \end{bmatrix} \tag{36}
\end{align}
$$

where we note that the matrix $H$ has orthogonal columns. Specifically, the detection problem is now posed as

$$
\begin{cases}
H_0: & A\theta = b \\
H_1: & A\theta \neq b
\end{cases}
\quad (37)
$$

where

$$
A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad (38)
$$

The GLRT for this problem is given by ([16], p. 274):

$$
T(y) = \frac{1}{\sigma^2} \hat{\theta}^T A^T \left[ A(H^T H)^{-1} A^T \right]^{-1} A \hat{\theta} \quad (39)
$$

$$
= \frac{1}{\sigma^2} \left( \frac{(\mathbf{h}_1^T y)^2}{E_1} + \frac{(\mathbf{h}_2^T y)^2}{E_2} \right) \quad (40)
$$

where

$$
\hat{\theta} = (H^T H)^{-1} H^T y. \quad (41)
$$

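As a sketch of how the statistic (39)-(40) operates on data, the following assumes $h(x) = \text{sinc}^2(x)$ sampled on $[-10, 10]$ at $f_s = 2$ (as in the paper's running example); the derivative regressors are computed by central differences, and the noise level, spacing, and seed are illustrative choices of ours:

```python
import numpy as np

# Sketch of the GLRT (39)-(40) for the linearized two-source model.
fs = 2.0
x = np.arange(-10, 10, 1.0 / fs)
f = lambda t: np.sinc(t) ** 2            # np.sinc is sin(pi t)/(pi t)
dx = 1e-4
h = f(x)
h1 = (f(x + dx) - f(x - dx)) / (2 * dx)            # h'(x_k)
h2 = (f(x + dx) - 2 * h + f(x - dx)) / dx**2       # h''(x_k)
E1, E2 = h1 @ h1, h2 @ h2

def glrt(y, sigma2):
    """T(y) of (40): energies of y along the (orthogonal) regressors h1, h2."""
    return ((h1 @ y) ** 2 / E1 + (h2 @ y) ** 2 / E2) / sigma2

rng = np.random.default_rng(0)
sigma2, d = 1e-4, 0.3
s = f(x - d / 2) + f(x + d / 2)                     # alpha = beta = 1
noise = lambda: rng.normal(0.0, np.sqrt(sigma2), x.size)
T1 = glrt(s - 2 * h + noise(), sigma2)              # H1: two sources at +/- d/2
T0 = glrt(noise(), sigma2)                          # H0: residual is noise only
print(T1 > T0)
```

Under $H_0$ the statistic is (approximately) central chi-squared with 2 degrees of freedom; under $H_1$ it picks up the noncentrality $\lambda$ of (44)-(45).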
The performance of this detector is characterized by

$$
P_f = Q_{\chi_2^2}(\gamma) \qquad (42)
$$

$$
P_d = Q_{\chi_2^2(\lambda)}(\gamma) \qquad (43)
$$

$$
\lambda = \frac{1}{\sigma^2} \theta^T A^T [A(H^T H)^{-1} A^T]^{-1} A \theta \quad (44)
$$

$$
= \frac{1}{\sigma^2} \left( \left( \frac{\alpha - \beta}{2} \right)^2 d^2 E_1 + \frac{1}{16} d^4 E_2 \right) \quad (45)
$$

where $Q_{\chi_2^2}$ denotes the right-tail probability of a central chi-squared PDF with two degrees of freedom, and $Q_{\chi_2^2(\lambda)}$ denotes the right-tail probability of a noncentral chi-squared PDF with two degrees of freedom and noncentrality parameter $\lambda$. In order to carry out the same analysis as in Section IV-A (i.e., the $d_{min}$ versus SNR curve), we start by computing the required $\lambda$ from the above expressions, based on the fixed values of $P_d$ and $P_f$. Then, using the relation (18), we will have
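The required noncentrality $\lambda(P_f, P_d)$ can be computed directly from (42)-(43). A self-contained sketch (the tail-probability helpers are our own; for $P_d = 0.99$ and $P_f = 10^{-6}$ the solver lands close to the 56.29 used later in (47)):

```python
import math

# Sketch: solve (42)-(43) for the noncentrality lambda(P_f, P_d) of the
# 2-degree-of-freedom chi-squared test.
def chi2_sf_even(x, k2):
    """Right tail of a central chi-squared with 2*k2 degrees of freedom."""
    term, total = 1.0, 1.0
    for i in range(1, k2):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def ncx2_sf_2dof(x, lam, terms=350):
    """Right tail of a noncentral chi-squared (2 dof), via its Poisson mixture."""
    w, total = math.exp(-lam / 2.0), 0.0
    for j in range(terms):
        total += w * chi2_sf_even(x, 1 + j)
        w *= (lam / 2.0) / (j + 1)
    return total

P_f, P_d = 1e-6, 0.99
gamma = -2.0 * math.log(P_f)           # chi^2_2 tail is exp(-gamma/2), so (42) inverts
lo, hi = 1.0, 200.0                    # bisect: the tail grows with lambda
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ncx2_sf_2dof(gamma, mid) < P_d else (lo, mid)
lam = 0.5 * (lo + hi)
print(round(gamma, 2), round(lam, 1))
```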

$$
\text{SNR} = \frac{\lambda(P_f, P_d)}{N} \cdot \frac{64E_0 - 16\alpha\beta d^2 E_1 + d^4 E_2}{4(\alpha - \beta)^2 d^2 E_1 + d^4 E_2} \quad (46)
$$

where $\lambda(P_f, P_d)$ represents the required value of noncentrality parameter as a function of the desired $P_f$ and $P_d$. For instance, for the case of $h(x) = \text{sinc}^2(x)$, with $P_d = 0.99$ and $P_f = 10^{-6}$ we have

$$
\text{SNR} = \frac{56.29}{N} \cdot \frac{\frac{140}{\pi^4} - \frac{14}{\pi^2} \alpha \beta d^2 + d^4}{\frac{7}{2\pi^2} (\alpha - \beta)^2 d^2 + d^4}. \quad (47)
$$

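Equation (47) can be inverted numerically for the minimum detectable $d$ at a given SNR. A minimal sketch (the sample count $N = 40$ and the SNR levels are illustrative assumptions; the comparison mirrors the Fig. 6 observation that unequal intensities resolve at smaller separations):

```python
import math

# Sketch: numerically invert (47), assuming h(x) = sinc^2(x), P_d = 0.99,
# P_f = 1e-6 (lambda = 56.29), and an illustrative N = 40 samples.
def required_snr(d, alpha, beta, N=40):
    num = 140 / math.pi**4 - 14 / math.pi**2 * alpha * beta * d**2 + d**4
    den = 7 / (2 * math.pi**2) * (alpha - beta)**2 * d**2 + d**4
    return 56.29 / N * num / den

def d_min(snr, alpha=1.0, beta=1.0, N=40):
    lo, hi = 1e-4, 1.0              # required_snr decreases in d on this range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if required_snr(mid, alpha, beta, N) > snr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Unequal intensities need less SNR at the same d, so d_min comes out smaller.
print(d_min(1e6), d_min(1e6, alpha=1.2, beta=0.8))
```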
It is useful to compare the performance of this detector (in terms of minimum detectable *d*) against the "best" case where the parameters *d*, *α*, and *β* are actually known. In fact, the comparison in Fig. 5 demonstrates that, happily (and perhaps rather unexpectedly), the curves are very close, implying that the performance of the GLRT is very close to that of the optimal detector for which all parameters are known.

Fig. 5. $d_{\min}$ versus SNR (dB) for $\alpha = 1.2$ and $\beta = 0.8$.

Fig. 6. GLRT for $\alpha \neq \beta$ and the case $\alpha = \beta$, symmetric sources; $d_{\min}$ versus SNR (dB).

An interesting observation arises from a comparison of the minimum detectable *d* for the cases *α* = *β* and *α* ≠ *β*, shown in Fig. 6. It is seen that unequal *α* and *β* yield better detection. That is, for a fixed *d*, the SNR required to resolve two closely spaced, unequally bright point sources is *smaller* than the SNR required to resolve two *equally bright* sources. This result seems counter-intuitive, yet the reason behind it is somewhat clear in hindsight. Equal *α* and *β* produce a perfectly symmetric signal (without noise) and therefore result in redundancy in the measured signal content. With unequal *α* and *β*, an anti-symmetric part is added to the signal information and a better decision is made possible. This phenomenon is a result of the assumption of symmetry of the point sources around the origin (*x* = 0). If the center of the point sources is not known, the results can be different, as we explain in the next section.

## C. The Case of Unknown Intensities But α + β = 2 with Asymmetrically Located Point Sources

With the earlier machinery in place, in this section we study the case where the point sources are not located symmetrically around the origin ($x = 0$). We consider the following model for this case:

$$
\begin{aligned}
g(x_k) &= s(x_k; \alpha, \beta, d_1, d_2) + w(x_k) \\
&= \alpha h(x_k - d_1) + \beta h(x_k + d_2) + w(x_k)
\end{aligned}
\quad (48)
$$

where $d_1$ and $d_2$ are unknown and $d = d_1 + d_2$ is the distance between the point sources. The Taylor expansion for the signal term in (48) around $(d_1, d_2) = (0, 0)$ is given by

$$ s(x_k; \alpha, \beta, d_1, d_2) = (\alpha + \beta)h(x_k) + (-\alpha d_1 + \beta d_2)h_1(x_k) + \frac{\alpha d_1^2 + \beta d_2^2}{2}h_2(x_k). \quad (49) $$

Here we consider the general case of unknown $\alpha$ and $\beta$, with $\alpha + \beta = 2$ known to the detector. We assume, however, that the test for deciding whether one peak or two peaks are present is performed at some point located between the two point sources. Hence, the hypothesis test can be expressed as

$$
\begin{cases}
H_0: [d_1 \; d_2] = [0 \; 0] \\
H_1: [d_1 \; d_2] \neq [0 \; 0]
\end{cases}
\quad (50)
$$

or equivalently (see (51) at the bottom of the page). By removing the known common term $(\alpha + \beta)h(x_k)$, we have the following linear model:

$$ y = H\theta_a + w $$

where

$$
\begin{align*}
H &= [\mathbf{h}_1, \mathbf{h}_2] \\
\theta_a &= \begin{bmatrix} -\alpha d_1 + \beta d_2 \\ \frac{\alpha d_1^2 + \beta d_2^2}{2} \end{bmatrix} \tag{52}
\end{align*}
$$

and where the subscript “a” on $\theta_a$ denotes the asymmetric case, to be distinguished from (36). Then, the corresponding hypotheses are given by

$$
\begin{cases}
H_0: A\theta_a = b \\
H_1: A\theta_a \neq b
\end{cases}
\quad (53)
$$

where

$$ A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 0 \end{bmatrix} $$

just as in Section IV-B. The GLRT for (53) will be

$$ T(y) = \frac{1}{\sigma^2} \left( \frac{(\mathbf{h}_1^T y)^2}{E_1} + \frac{(\mathbf{h}_2^T y)^2}{E_2} \right). \quad (54) $$

From (54), the performance of this detector is characterized by

$$
\begin{align*}
P_f &= Q_{\chi_2^2}(\gamma) \\
P_d &= Q_{\chi_2^2(\lambda)}(\gamma) \\
\lambda &= \frac{1}{\sigma^2} \left( (-\alpha d_1 + \beta d_2)^2 E_1 + \left( \frac{\alpha d_1^2 + \beta d_2^2}{2} \right)^2 E_2 \right). \tag{55}
\end{align*}
$$

Now, to obtain the relation between SNR and ($d_1, d_2$), we first need to compute the SNR for the model of (48), which is given by

$$ \text{SNR} = \frac{1}{N\sigma^2} \left[ (\alpha + \beta)^2 E_0 - \alpha\beta (d_1 + d_2)^2 E_1 + \left( \frac{\alpha d_1^2 + \beta d_2^2}{2} \right)^2 E_2 \right]. \quad (56) $$

The value of $\sigma^2$ in (55) can be obtained for the desired $P_d$ and $P_f$. Substituting this value in (56) yields (57), shown at the bottom of the page. In order to present the results for this case, let us assume⁹ that $\alpha d_1 \approx \beta d_2$ (i.e., we perform the test at a point which is closer to the stronger peak). It can easily be shown that the value of $\lambda$ in (55) is then maximized for the case of $\alpha = \beta$. This shows that when $\alpha d_1 \approx \beta d_2$, the performance for the case of equal intensities is better than that for the case of unequal intensities. Fig. 7 confirms this result by showing the curves of $d_{\min}$ versus SNR for the two cases: equal intensities and unequal intensities (we assume $h(x) = \text{sinc}^2(x)$). By comparing these results with those of the previous section, we conclude that the assumption of point sources located symmetrically around the test point plays a very important role in the performance of the detector. Also, it is worth mentioning that, under the assumption $\alpha d_1 \approx \beta d_2$, we can approximate (57) for the range of small $d_1$ and $d_2$ in the following informative ways:
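The maximization claim can be checked numerically. A minimal sketch using the closed-form energies (76)-(77) of Appendix B (the values $d = 0.3$, $\sigma^2 = 10^{-4}$, and $f_s = 2$ are illustrative assumptions of ours):

```python
import math

# Sketch: check that the noncentrality lambda in (55) is largest for alpha = beta
# when the test point satisfies alpha*d1 = beta*d2 and alpha + beta = 2.
def lam(alpha, d=0.3, sigma2=1e-4, fs=2.0):
    beta = 2.0 - alpha
    d1 = beta * d / (alpha + beta)          # chosen so alpha*d1 = beta*d2
    d2 = alpha * d / (alpha + beta)
    E1 = fs * 4 * math.pi**2 / 15           # from (76)
    E2 = fs * 32 * math.pi**4 / 105         # from (77)
    q = (alpha * d1**2 + beta * d2**2) / 2
    return ((-alpha * d1 + beta * d2)**2 * E1 + q**2 * E2) / sigma2

print(lam(1.0) > lam(1.2) > lam(1.5))      # -> True: equal intensities win here
```

With $\alpha d_1 = \beta d_2$ the first-order term vanishes and $\alpha d_1^2 + \beta d_2^2 = \alpha\beta d^2/(\alpha+\beta)$, which peaks at $\alpha = \beta$.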

$$
\begin{align}
\text{SNR} &= \frac{\lambda(P_f, P_d)}{N} \frac{4(\alpha + \beta)^2}{(\alpha d_1^2 + \beta d_2^2)^2} \frac{E_0}{E_2} = \frac{\lambda(P_f, P_d)}{N} \frac{4}{d_1^2 d_2^2} \frac{E_0}{E_2} \nonumber \\
&= \frac{\lambda(P_f, P_d)}{N} \frac{4(\alpha + \beta)^4 E_0}{\alpha^2 \beta^2 d^4 E_2} \tag{58}
\end{align}
$$

⁹See Appendix C for a justification.

$$
\begin{cases}
H_0: \tilde{g}(x_k) = (\alpha + \beta)h(x_k) + w(x_k) \\
H_1: \tilde{g}(x_k) = (\alpha + \beta)h(x_k) + (-\alpha d_1 + \beta d_2)h_1(x_k) + \frac{\alpha d_1^2 + \beta d_2^2}{2}h_2(x_k) + w(x_k)
\end{cases}
\quad (51)
$$

$$ \text{SNR} = \frac{\lambda(P_f, P_d)}{N} \frac{(\alpha + \beta)^2 E_0 - \alpha\beta(d_1 + d_2)^2 E_1 + \left(\frac{\alpha d_1^2 + \beta d_2^2}{2}\right)^2 E_2}{(-\alpha d_1 + \beta d_2)^2 E_1 + \left(\frac{\alpha d_1^2 + \beta d_2^2}{2}\right)^2 E_2}. \quad (57) $$

Fig. 7. $d_{\min}$ versus SNR(dB); $d = d_1 + d_2$ and $\alpha d_1 = \beta d_2$; equal intensities and unequal intensities.

Fig. 8. $d_{\min}$ versus SNR(dB); $d = d_1 + d_2$ and $\alpha d_1 = \beta d_2$; detectors with and without the assumption of $\alpha + \beta = 2$.

## D. The Case of Unknown Intensities, Asymmetrically Located Point Sources
Here, we analyze the most general case in which we assume that the energy of point sources ($\alpha + \beta$) is unknown to the detector, as well as the individual $\alpha, \beta, d_1$, and $d_2$. Recalling (51), we can set up another linear model as follows:

$$ \tilde{\mathbf{g}} = \mathbf{H}_u \boldsymbol{\theta}_u + \mathbf{w} $$

where

$$ \begin{aligned} \mathbf{H}_u &= [\mathbf{h}, \mathbf{h}_1, \mathbf{h}_2] \\ \boldsymbol{\theta}_u &= \begin{bmatrix} \alpha + \beta \\ -\alpha d_1 + \beta d_2 \\ \frac{\alpha d_1^2 + \beta d_2^2}{2} \end{bmatrix} \end{aligned} \quad (59) $$

and the subscript "u" denotes the completely unknown parameters. The above setup leads to the following hypothesis test:

$$ \begin{cases} H_0: & \mathbf{A}_u \boldsymbol{\theta}_u = \mathbf{b} \\ H_1: & \mathbf{A}_u \boldsymbol{\theta}_u \neq \mathbf{b} \end{cases} \quad (60) $$

where

$$ \mathbf{A}_u = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. $$

The GLRT for (60) will be

$$ T'(\tilde{\mathbf{g}}) = \frac{1}{\sigma^2} \left( \frac{(\mathbf{h}_1^T \tilde{\mathbf{g}})^2}{E_1} + \frac{(E_1 \mathbf{h}^T \tilde{\mathbf{g}} + E_0 \mathbf{h}_2^T \tilde{\mathbf{g}})^2}{E_0(E_0 E_2 - E_1^2)} \right). \quad (61) $$

The performance of this detector is given by¹⁰

$$
\begin{align*}
P_f &= Q_{\chi_2^2}(\gamma) \\
P_d &= Q_{\chi_2^2(\lambda)}(\gamma) \\
\lambda &= \frac{1}{\sigma^2} \left( (-\alpha d_1 + \beta d_2)^2 E_1 + \left( \frac{\alpha d_1^2 + \beta d_2^2}{2} \right)^2 \left( E_2 - \frac{E_1^2}{E_0} \right) \right). \tag{62}
\end{align*}
$$

Consequently, the relation between ($d_1, d_2$) and SNR is given by (63) as shown at the bottom of the page. By comparing (57) and (63), it can be readily shown that because of the negative term $-(E_1^2/E_0)$, the detector without the knowledge of $\alpha + \beta$ performs more poorly than the detector which knows $\alpha + \beta = 2$. Fig. 8 displays the performance of these two different detectors in terms of the minimum detectable $d$ versus SNR for the case of $h(x) = \text{sinc}^2(x)$.
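The comparison between (57) and (63) can be sketched numerically. The values $\lambda = 56.29$, $N = 40$, and the spacings below are illustrative assumptions; the only structural difference between the two detectors is the $-(E_1^2/E_0)$ correction:

```python
import math

# Sketch: required SNR for the detector that knows alpha + beta = 2 (eq. (57))
# versus the one that does not (eq. (63)), for h(x) = sinc^2(x).
fs = 2.0
E0 = fs * 2 / 3
E1 = fs * 4 * math.pi**2 / 15
E2 = fs * 32 * math.pi**4 / 105

def snr_required(d1, d2, alpha, beta, know_sum, lam=56.29, N=40):
    q = (alpha * d1**2 + beta * d2**2) / 2
    energy = (alpha + beta)**2 * E0 - alpha * beta * (d1 + d2)**2 * E1 + q**2 * E2
    e2_eff = E2 if know_sum else E2 - E1**2 / E0     # the only difference
    den = (-alpha * d1 + beta * d2)**2 * E1 + q**2 * e2_eff
    return lam / N * energy / den

d1 = d2 = 0.15
print(snr_required(d1, d2, 1, 1, True) < snr_required(d1, d2, 1, 1, False))  # -> True
```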

## V. THE CRAMÉR-RAO LOWER BOUND ON ESTIMATION OF THE UNKNOWN PARAMETERS

In the interest of completeness, in this section we present results on the estimation of the unknown parameters of the model. In particular, we study the asymptotic performance of the ML estimates of the unknown parameters using the Cramér-Rao lower bound (CRLB). The CRLB [15, p. 27] is a covariance inequality bound which treats the parameters as unknown deterministic quantities and provides a local bound on the mean square error (MSE) of their estimates. Being able to compute a lower bound

¹⁰Note that according to the Cauchy-Schwarz inequality, $E_0 E_2 \ge E_1^2$.

$$ \text{SNR} = \frac{\lambda(P_f, P_d)}{N} \frac{(\alpha + \beta)^2 E_0 - \alpha \beta (d_1 + d_2)^2 E_1 + \left(\frac{\alpha d_1^2 + \beta d_2^2}{2}\right)^2 E_2}{(-\alpha d_1 + \beta d_2)^2 E_1 + \left(\frac{\alpha d_1^2 + \beta d_2^2}{2}\right)^2 \left(E_2 - \frac{E_1^2}{E_0}\right)} \quad (63) $$

Fig. 9. $\sqrt{\text{CRLB}(\hat{d})}$ versus $\hat{d}$ for two different cases.

on the variance of the parameter *d*, in particular, is rather helpful in verifying and confirming the earlier results of this paper. For example, we shall see how the difference between *α* and *β* affects the variance of the estimate in different cases. Here, we compute the CRLB for the following cases:

• the signal model in (3), i.e., known intensities but unknown *d*;
• the signal model in (48), i.e., unknown *α*, *β*, *d*₁, and *d*₂.

To verify the details of the calculations (carried out mostly in the frequency domain), we refer the reader to Appendix B. Recalling (3), the CRLB for the parameter *d* (assuming *α* and *β* known) is given by (64) and (65) at the bottom of the page. To compute the CRLB for the second case, when *α*, *β*, *d*₁, and *d*₂ are unknown, the Fisher Information matrix is computed.¹¹ We have

$$
\operatorname{cov}(\hat{d}_1, \hat{d}_2, \hat{\alpha}, \hat{\beta}) \geq \Psi^{-1}(d_1, d_2, \alpha, \beta) \quad (66)
$$

where $\Psi$ is the 4 × 4 symmetric Fisher Information matrix with its elements defined by the equations at the bottom of the next page. The bound on the variance of $\hat{d}_1$ and $\hat{d}_2$ can be obtained by taking the elements (1, 1) and (2, 2) of the inverse Fisher Information matrix $\Psi^{-1}$, respectively. Also, the CRLB on $d = d_1 + d_2$ is computed from

$$
\mathrm{CRLB}(\hat{d}) = [\Psi^{-1}]_{11} + [\Psi^{-1}]_{22} + 2[\Psi^{-1}]_{12}. \quad (67)
$$

¹¹We thank Prof. Jeff Fessler for sharing with us his calculations for the continuous data case.

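Equation (67) is easy to evaluate with a numerically computed Fisher matrix. A sketch for $h(x) = \text{sinc}^2(x)$ under Gaussian noise (the parameter values, step sizes, and grid are illustrative choices of ours):

```python
import numpy as np

# Sketch: numerical Fisher matrix for the model (48), and the CRLB on
# d = d1 + d2 via (67). Derivatives are taken by central differences.
fs = 2.0
x = np.arange(-10, 10, 1.0 / fs)

def s_model(d1, d2, alpha, beta):
    return alpha * np.sinc(x - d1) ** 2 + beta * np.sinc(x + d2) ** 2

def fisher(theta, sigma2, step=1e-6):
    """Psi[i,j] = (1/sigma^2) * sum_k (ds/dtheta_i)(ds/dtheta_j), Gaussian noise."""
    grads = []
    for i in range(4):
        tp, tm = list(theta), list(theta)
        tp[i] += step
        tm[i] -= step
        grads.append((s_model(*tp) - s_model(*tm)) / (2 * step))
    G = np.array(grads)
    return G @ G.T / sigma2

theta = [0.15, 0.15, 1.0, 1.0]              # (d1, d2, alpha, beta), illustrative
Psi = fisher(theta, sigma2=1e-4)
C = np.linalg.inv(Psi)
crlb_d = C[0, 0] + C[1, 1] + 2 * C[0, 1]    # equation (67) for d = d1 + d2
print(crlb_d > 0)                           # -> True: C is positive definite
```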
Fig. 10. $\sqrt{\text{CRLB}(\hat{d})}$ versus $\alpha$ for two different cases.
Fig. 9 shows the square root of the CRLB (to maintain the same units as *d*) for *d̂*, for fixed values of the intensities *α* and *β*, versus the parameter value *d*, for two different cases; namely, the known-intensity case with symmetrically located point sources, and the case of unknown *α*, *β*, *d*₁, and *d*₂. In this figure, we observe that the curves for the two cases are rather close for *d* > 0.5, and they are distinct when *α* is unknown and *d* is smaller than 0.5. In Fig. 10, the value *d* = 0.3 is fixed, and the square root of the CRLB for *d̂* is shown over a range of values of *α*. The graph demonstrates the effect of the difference between *α* and *β* on the CRLB. As seen in this figure, the CRLB for the second case (unknown *α*, *β*, *d*₁, and *d*₂) increases rapidly when moving away from (*α*, *β*) = (1, 1); but for known *α* and *β*, there is a (rather slow) decay away from the position *α* = *β* = 1. The observed phenomenon is counter-intuitive, but can be readily explained by looking at the derivatives we computed in the calculation of the CRLB. When the point sources are located symmetrically, with unequal intensities, the shape of the overall signal is dramatically different from the case where *α* = *β* = 1. This difference is accentuated further as *α* − *β* becomes larger. For the second case, by contrast, because of the uncertainty about the center and the intensities of the point sources, if *α* − *β* ≠ 0 the overall shape looks more like a single peak. The observed behavior is consistent with what we saw before, where we demonstrated that unequal *α* and *β* yield improved detection if the center is known, and vice versa.

## VI. CONCLUSION

We have set out in this paper to address the question of resolution from a sound statistical viewpoint. In particular, we

$$
\begin{align}
\operatorname{var}(\hat{d}) &\ge \frac{\sigma^2}{\sum_k \left( \frac{\partial s(x_k; d)}{\partial d} \right)^2} = \frac{\sigma^2}{\frac{1}{2\pi} \int_{-\pi}^{\pi} \left| \frac{\partial S(\omega; d)}{\partial d} \right|^2 d\omega} \tag{64} \\
&= \frac{\sigma^2}{f_s \left[ \frac{\pi^2}{15} (\alpha^2 + \beta^2) + \frac{\alpha\beta}{\pi^3 d^5} \left( (\pi^2 d^2 - 3) \sin(2\pi d) + 3\pi d \cos(2\pi d) + 3\pi d \right) \right]} \tag{65}
\end{align}
$$

have explicitly answered a very practical question: What is the minimum detectable distance between two point sources imaged incoherently at a given signal-to-noise ratio? Or, equivalently, what is the minimum SNR required to discriminate two point sources separated by a distance smaller than the Rayleigh limit? Based on different assumptions and models, we explicitly studied four different cases in our detection-theoretic approach, from the simplest to the most general case. We employed a hypothesis testing framework using locally most powerful tests, where the original highly nonlinear problem was approximated using a quadratic model in the parameter *d*. We also discussed asymptotic performance for estimation of the unknown parameters. The analysis has been carried out in one dimension to facilitate the presentation and to yield maximum intuition. We have begun the analysis in 2-D, including studies as a function of different aperture shapes and lenses, and the complete 2-D (spatial integration) sampling model. This 2-D analysis is not so different in spirit from the

$$
\Psi(1, 1) = \frac{1}{\sigma^2} \sum_k \left( \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_1} \right)^2 = \frac{\alpha^2}{2\pi\sigma^2} \int_{-\pi}^{\pi} |\omega f_s H(\omega, f_s)|^2 d\omega = \frac{f_s}{\sigma^2} \frac{4\pi^2 \alpha^2}{15}
$$

$$
\Psi(2, 2) = \frac{1}{\sigma^2} \sum_k \left( \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_2} \right)^2 = \frac{\beta^2}{2\pi\sigma^2} \int_{-\pi}^{\pi} |\omega f_s H(\omega, f_s)|^2 d\omega = \frac{f_s}{\sigma^2} \frac{4\pi^2 \beta^2}{15}
$$

$$
\Psi(3, 3) = \frac{1}{\sigma^2} \sum_k \left( \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \alpha} \right)^2 = \frac{1}{2\pi\sigma^2} \int_{-\pi}^{\pi} |H(\omega, f_s)|^2 d\omega = \frac{f_s}{\sigma^2} \frac{2}{3}
$$

$$
\Psi(4, 4) = \frac{1}{\sigma^2} \sum_k \left( \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \beta} \right)^2 = \frac{1}{2\pi\sigma^2} \int_{-\pi}^{\pi} |H(\omega, f_s)|^2 d\omega = \frac{f_s}{\sigma^2} \frac{2}{3}
$$

$$
\Psi(1, 2) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_1} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_2} = -\frac{\alpha\beta}{2\pi\sigma^2} \int_{-\pi}^{\pi} |\omega f_s H(\omega, f_s)|^2 \cos(\omega f_s (d_1 + d_2)) d\omega
$$

$$
= \frac{f_s}{\sigma^2} \frac{2\alpha\beta \left[ (\pi^2(d_1+d_2)^2 - 3)\sin(2\pi(d_1+d_2)) + 6\pi(d_1+d_2)\cos^2(\pi(d_1+d_2)) \right]}{\pi^3 (d_1+d_2)^5}
$$

$$
\Psi(1, 3) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_1} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \alpha} = -\frac{\alpha}{2\pi\sigma^2} \int_{-\pi}^{\pi} \omega f_s |H(\omega, f_s)|^2 d\omega = 0
$$

$$
\Psi(1, 4) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_1} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \beta} = -\frac{\alpha}{2\pi\sigma^2} \int_{-\pi}^{\pi} \omega f_s |H(\omega, f_s)|^2 \sin(\omega f_s (d_1 + d_2)) d\omega
$$

$$
= \frac{f_s}{\sigma^2} \frac{\alpha}{2\pi^3} \frac{3\sin(2\pi(d_1 + d_2)) - 4\pi(d_1 + d_2)\cos^2(\pi(d_1 + d_2)) - 2\pi(d_1 + d_2)}{(d_1 + d_2)^4}
$$

$$
\Psi(2, 3) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_2} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \alpha} = -\frac{\beta}{2\pi\sigma^2} \int_{-\pi}^{\pi} \omega f_s |H(\omega, f_s)|^2 \sin(\omega f_s (d_1 + d_2)) d\omega
$$

$$
= \frac{f_s}{\sigma^2} \frac{\beta}{2\pi^3} \frac{3\sin(2\pi(d_1 + d_2)) - 4\pi(d_1 + d_2)\cos^2(\pi(d_1 + d_2)) - 2\pi(d_1 + d_2)}{(d_1 + d_2)^4}
$$

$$
\Psi(2, 4) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial d_2} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \beta} = \frac{\beta}{2\pi\sigma^2} \int_{-\pi}^{\pi} \omega f_s |H(\omega, f_s)|^2 d\omega = 0
$$

$$
\Psi(3, 4) = \frac{1}{\sigma^2} \sum_k \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \alpha} \frac{\partial s(x_k; \alpha, \beta, d_1, d_2)}{\partial \beta} = \frac{1}{2\pi\sigma^2} \int_{-\pi}^{\pi} |H(\omega, f_s)|^2 \cos(\omega f_s (d_1 + d_2)) d\omega
$$

$$
= \frac{f_s}{\sigma^2} \frac{2\pi(d_1 + d_2) - \sin(2\pi(d_1 + d_2))}{2\pi^3 (d_1 + d_2)^3}
$$

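The closed-form entries above can be spot-checked against a direct discrete sum; for example, $\Psi(1,1)\sigma^2 = \alpha^2 \sum_k h'(x_k - d_1)^2 \approx f_s \cdot 4\pi^2\alpha^2/15$ for $h(x) = \text{sinc}^2(x)$. The window and step values below are illustrative choices of ours:

```python
import numpy as np

# Sketch: numerical check of the Psi(1,1) closed form for h(x) = sinc^2(x).
fs, alpha, d1 = 4.0, 1.3, 0.1
x = np.arange(-40, 40, 1.0 / fs)            # wide window; tail energy is negligible
dx = 1e-5
hp = (np.sinc(x - d1 + dx) ** 2 - np.sinc(x - d1 - dx) ** 2) / (2 * dx)
numeric = alpha**2 * np.sum(hp**2)          # alpha^2 * sum_k h'(x_k - d1)^2
closed = fs * 4 * np.pi**2 * alpha**2 / 15
print(abs(numeric - closed) / closed < 1e-3)   # -> True
```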
1-D case, but is significantly more messy; so we have elected to defer its presentation to the near future.
The major conclusion of this paper is that for a given imaging scenario (in this case, incoherent imaging through a slit), with required probabilities of detection and false alarm, the minimum resolvable separation between two sources from uniformly sampled data can be derived explicitly as a function of the SNR per sample of the imaging array, and the sampling rate. The most useful rule of thumb we glean from these results is that for the case of equal intensities (or for the case of unequal intensities with a proper choice of test point), the minimum resolvable distance is essentially proportional to the inverse of the SNR to the fractional power of 1/4. The proportionality constant was shown to be a function of the probabilities of detection and false alarm, and the point spread function. In deriving these results, we have unified and generalized much of the literature on this topic that, while sparse, has spanned the course of roughly four decades.
Many interesting questions remain to be studied. Of these, the analysis of the problem as a function of the sampling rate and sampling strategy come to mind. For instance, it is useful to study the performance in the presence of aliasing (i.e., sub-Nyquist sampling). It would also be interesting to study the effect of nonuniform sampling on performance.
It is important to note that the strategy for the analysis of resolution we have put forward here is very generally applicable to other types of imaging systems. Once the point-spread function of the imaging system is known, the signal model $s(x; d)$ is determined, and the same line of reasoning can be carried out. The optical imaging scenario we have described here should really be thought of as a canonical example of the application of the general strategy we propose for studying resolution. Extensions of these ideas can also be considered to study limits to resolution for indirect imaging such as in computed tomography.
As for other extensions and applications in optical imaging, an appealing direction is to study the limits to super-resolution from video [23]–[25]. The analysis presented here can help answer questions regarding the ability of image super-resolution methods to integrate multiple low resolution frames to produce a high resolution image from aliased data.
Finally, we wish to mention that this paper, we hope, represents one step forward in an overall methodology for studying imaging and image processing that appeals directly to concepts in information theory. This approach and point of view has been sorely lacking in the imaging community, and we hope that it will become more pervasive in the years to come.
## APPENDIX A
### ON THE ACCURACY OF THE QUADRATIC APPROXIMATION
Here, we present an analysis to demonstrate the accuracy of the Taylor expansion proposed in Section III. We consider the general model of (48) and its Taylor expansion in (49). Let us define the residual percentage error of the approximation as follows:

$$ \epsilon = \frac{\left\| \mathbf{s} - (\alpha + \beta)\mathbf{h} - (-\alpha d_1 + \beta d_2)\mathbf{h}_1 - \frac{\alpha d_1^2 + \beta d_2^2}{2}\mathbf{h}_2 \right\|^2}{\|\mathbf{s}\|^2} \quad (68) $$

Fig. 11. Residual percentage error of the quadratic model; $\alpha d_1 = \beta d_2$.
Fig. 12. Residual percentage error of the quadratic model; $\alpha = \beta = 1$.
Consider the case when $\alpha d_1 = \beta d_2$ (see Appendix C). Fig. 11 shows the upper bound ($d = d_1 + d_2 = 1$) on $\epsilon$ as a function of $\alpha$ for $h(x) = \text{sinc}^2(x)$ (note that, again, for above-Nyquist sampling $\epsilon$ is independent of the sampling rate). The maximum of $\epsilon$ is less than 20% in any case. Also, as seen in this figure, the approximation error for $d = 0.7$ is always less than 2.5%. Fig. 12 shows the curve of $\epsilon$ versus $d$, which indicates that the approximation error is quite acceptable for the range of interest near $d = 0$. To give a picture of the local error in the approximation, the error term
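The residual (68) is straightforward to evaluate numerically. A sketch for $h(x) = \text{sinc}^2(x)$ with the test point chosen so that $\alpha d_1 = \beta d_2$ and $\alpha + \beta = 2$ (grid and step choices are ours; the check mirrors the sub-2.5% figure quoted above for $d = 0.7$):

```python
import numpy as np

# Sketch: residual percentage error (68) of the quadratic model.
fs = 4.0
x = np.arange(-10, 10, 1.0 / fs)
f = lambda t: np.sinc(t) ** 2
dx = 1e-4
h = f(x)
h1 = (f(x + dx) - f(x - dx)) / (2 * dx)            # h'(x_k)
h2 = (f(x + dx) - 2 * h + f(x - dx)) / dx**2       # h''(x_k)

def eps(alpha, d):
    beta = 2.0 - alpha
    d1, d2 = beta * d / 2.0, alpha * d / 2.0        # so that alpha*d1 = beta*d2
    s = alpha * f(x - d1) + beta * f(x + d2)
    r = s - (alpha + beta) * h - (-alpha * d1 + beta * d2) * h1 \
        - (alpha * d1**2 + beta * d2**2) / 2.0 * h2
    return np.sum(r**2) / np.sum(s**2)

print(eps(1.0, 0.7) < 0.025)
```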

$$ \epsilon(x; \alpha, \beta, d_1, d_2) = s(x; \alpha, \beta, d_1, d_2) - (\alpha + \beta)h(x) - (-\alpha d_1 + \beta d_2)h_1(x) - \frac{\alpha d_1^2 + \beta d_2^2}{2}h_2(x) $$

is shown in Fig. 13 for two different values of $d$ over the range of the variable $x$ in [-10, 10].
## APPENDIX B
### FREQUENCY DOMAIN REPRESENTATION; PARSEVAL'S THEOREM FOR THE SIGNAL $s(x; d)$
Considering the sampled signal of the general model, where the point sources are located at $-d_1$ and $d_2$ we have
$$ \begin{aligned} s(n; \alpha, \beta, d_1, d_2) &= s(x; \alpha, \beta, d_1, d_2)|_{x=\frac{n}{f_s}} \\ &= \alpha h\left(\frac{n}{f_s} - d_1\right) + \beta h\left(\frac{n}{f_s} + d_2\right). \end{aligned} \quad (69) $$
Fig. 13. Difference between the actual signal and the quadratic model; $\alpha = \beta = 1$.
For the case of above-Nyquist sampling,¹² in the frequency domain we will have the following 2π-periodic representation (see (70) at the bottom of the page) where $H(\omega, f_s) = (f_s^2/2\pi)((2\pi/f_s) - |\omega|)$ is the DTFT of $h(x_k)$ when $h(x) = \text{sinc}^2(x)$ and sampling rate is $f_s$. Correspondingly, for this case, the functions $h_1(x)$ and $h_2(x)$ can be written in the frequency domain as

$$ H_1(\omega, f_s) = \begin{cases} j \frac{\omega f_s^3}{2\pi} \left( \frac{2\pi}{f_s} - |\omega| \right) & |\omega| < \frac{2\pi}{f_s} \\ 0 & \frac{2\pi}{f_s} \le |\omega| \le \pi \end{cases} \quad (71) $$

$$ H_2(\omega, f_s) = \begin{cases} -\frac{\omega^2 f_s^4}{2\pi} \left(\frac{2\pi}{f_s} - |\omega|\right) & |\omega| < \frac{2\pi}{f_s} \\ 0 & \frac{2\pi}{f_s} \le |\omega| \le \pi \end{cases} \quad (72) $$
Using Parseval's identities [19]:

$$ \sum_{n=-\infty}^{\infty} |x(n)|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(\omega)|^2 d\omega \quad (73) $$

$$ \sum_{n=-\infty}^{\infty} x(n)y^{*}(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega)Y^{*}(\omega) d\omega \quad (74) $$

we can easily compute the following terms:

$$ E_0 = \mathbf{h}^T \mathbf{h} = f_s \frac{2}{3} \quad (75) $$

$$ E_1 = \mathbf{h}_1^T \mathbf{h}_1 = f_s \frac{4\pi^2}{15} \quad (76) $$

$$ E_2 = \mathbf{h}_2^T \mathbf{h}_2 = f_s \frac{32\pi^4}{105} \quad (77) $$

and

$$ \mathbf{h}_1^T \mathbf{h} = \mathbf{h}_1^T \mathbf{h}_2 = 0. \quad (78) $$

¹²To recover exactly $s(x; d)$ would mathematically require an infinite number of measurements (or samples) $s(n; d)$ [21]. But since we have considered a fairly large range (−10 to 10) for sampling, and since the energy in the tails of the function in the range is very small, the effect of aliasing is essentially negligible.
Note that in every case the energy terms are proportional to the sampling rate. It can be shown [20] that the energy of any uniformly (super-critically) sampled version of a band-limited signal is proportional to the sampling rate.
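These identities are straightforward to confirm numerically. A sketch for $h(x) = \text{sinc}^2(x)$ sampled above the Nyquist rate (window length and difference step are our own choices):

```python
import numpy as np

# Sketch: numerically verify the closed-form energies (75)-(77) and their
# proportionality to the sampling rate fs.
def energies(fs):
    x = np.arange(-200, 200, 1.0 / fs)       # long window; tails carry little energy
    dx = 1e-4
    f = lambda t: np.sinc(t) ** 2
    h = f(x)
    h1 = (f(x + dx) - f(x - dx)) / (2 * dx)
    h2 = (f(x + dx) - 2 * h + f(x - dx)) / dx**2
    return h @ h, h1 @ h1, h2 @ h2

for fs in (2.0, 4.0):
    E0, E1, E2 = energies(fs)
    print(np.allclose([E0, E1, E2],
                      [fs * 2 / 3, fs * 4 * np.pi**2 / 15, fs * 32 * np.pi**4 / 105],
                      rtol=1e-3))
```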
## APPENDIX C

IS $\alpha d_1 \approx \beta d_2$ A REASONABLE ASSUMPTION?

Suppose that we first wish to determine a location at which we carry out our hypothesis test. A reasonable way to find a good candidate is to compute the correlation of the signal with a shifted version of $h(x)$ and find the point where the correlation is maximum (this would yield a point near the brighter of the two peaks). Consider
$$ R_{sh}(|\tau|, \alpha, \beta, d_1, d_2) = \int_{-\infty}^{+\infty} (s(x; \alpha, \beta, d_1, d_2) + w(x))h(x + \tau) dx \quad (79) $$

$$ = \int_{-\infty}^{+\infty} (\alpha h(x - d_1) + \beta h(x + d_2) + w(x))h(x + \tau) dx \quad (80) $$

$$ = \alpha R_{hh}(|\tau| - d_1) + \beta R_{hh}(|\tau| + d_2) + u(|\tau|) \quad (81) $$

where $R_{sh}$ and $R_{hh}$ are the cross-correlation and autocorrelation functions, respectively, and
$$ u(|\tau|) = \int_{-\infty}^{+\infty} w(x)h(x + \tau) dx \quad (82) $$

is a noise term (with zero mean). It should be clear from the model that $R_{sh}$ would be maximized at $\tau = 0$. Also, since $d_1$ and $d_2$ are assumed to be small, by using the Taylor expansion around $|\tau| - d_1 = 0$ and $|\tau| + d_2 = 0$, we will have
$$ R_{hh}(|\tau| - d_1) = \xi_0 + (|\tau| - d_1)\xi_1 + (|\tau| - d_1)^2\xi_2 \quad (83) $$

$$ R_{hh}(|\tau| + d_2) = \xi_0 + (|\tau| + d_2)\xi_1 + (|\tau| + d_2)^2\xi_2 \quad (84) $$

where $\xi_0$, $\xi_1$, and $\xi_2$ are some constant coefficients of the above Taylor expansion. Also, it can be shown that $\xi_1 = 0$. Therefore, we can write (81) as follows:
$$ R_{sh}(|\tau|, \alpha, \beta, d_1, d_2) = (\alpha + \beta)\xi_0 + \left(\alpha(|\tau| - d_1)^2 + \beta(|\tau| + d_2)^2\right)\xi_2 + u(|\tau|) \quad (85) $$

Taking the derivative of $R_{sh}(|\tau|, \alpha, \beta, d_1, d_2)$ with respect to $\tau$ and setting it to zero will result in

$$ (\alpha + \beta)|\tau| = \alpha d_1 - \beta d_2 \quad (86) $$

Hence, a proper selection of $\tau$ (i.e., the test point) will lead to $\alpha d_1 \approx \beta d_2$.
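The approximation (86) can be verified numerically for the noise-free version of (81): the location of the correlation maximum sits very close to $(\alpha d_1 - \beta d_2)/(\alpha + \beta)$. The values of $\alpha$, $\beta$, $d_1$, and $d_2$ below are assumed for illustration only:

```python
import numpy as np

dx = 1e-3
x = np.arange(-20, 20, dx)

def R_hh(t):
    """Autocorrelation R_hh(t) of h(x) = sinc^2(x), by numerical integration."""
    return np.sum(np.sinc(x) ** 2 * np.sinc(x + t) ** 2) * dx

alpha, beta, d1, d2 = 1.0, 0.6, 0.05, 0.02      # assumed illustrative values
taus = np.linspace(-0.2, 0.2, 401)
R = [alpha * R_hh(t - d1) + beta * R_hh(t + d2) for t in taus]
tau_star = taus[int(np.argmax(R))]              # empirical correlation maximizer

predicted = (alpha * d1 - beta * d2) / (alpha + beta)   # from (86)
```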
$$ S(\omega, d) = \begin{cases} H(\omega, f_s)(\alpha \exp(-j\omega f_s d_1) + \beta \exp(j\omega f_s d_2)) & |\omega| < \frac{2\pi}{f_s} \\ 0 & \frac{2\pi}{f_s} \le |\omega| \le 2\pi \end{cases} \quad (70) $$

---PAGE_BREAK---

ACKNOWLEDGMENT

The authors wish to acknowledge Prof. A. Shakouri of U.C., Santa Cruz, for providing the early practical inspiration from the laboratory bench that led them to consider the questions addressed in this paper. They thank Prof. J. Fessler of the University of Michigan for his helpful suggestions for the CRLB analysis. They also thank the reviewers for their constructive comments and suggestions.
REFERENCES

[1] J. W. Goodman, *Introduction to Fourier Optics*. New York: McGraw-Hill, 1996.

[2] J. D. Gaskill, *Linear Systems, Fourier Transforms, and Optics*. New York: Wiley, 1978.

[3] L. B. Lucy, "Statistical limits to super-resolution," *Astron. Astrophys.*, vol. 261, pp. 706-710, 1992.

[4] C. W. Helstrom, "The detection and resolution of optical signals," *IEEE Trans. Inf. Theory*, vol. IT-10, pp. 275-287, 1964.

[5] ——, "Detection and resolution of incoherent objects by a background-limited optical system," *J. Opt. Soc. Amer.*, vol. 59, pp. 164-175, 1969.

[6] ——, "Resolvability of objects from the standpoint of statistical parameter estimation," *J. Opt. Soc. Amer.*, vol. 60, pp. 659-666, 1970.

[7] L. B. Lucy, "Resolution limits for deconvolved images," *Astron. J.*, vol. 104, pp. 1260-1265, 1992.

[8] A. van den Bos, "Ultimate resolution: A mathematical framework," *Ultramicroscopy*, vol. 47, pp. 298-306, 1992.

[9] A. J. den Dekker, "Model-based optical resolution," *IEEE Trans. Instrum. Meas.*, vol. 46, pp. 798-802, 1997.

[10] A. J. den Dekker and A. van den Bos, "Resolution, a survey," *J. Opt. Soc. Amer.*, vol. 14, pp. 547-557, 1997.

[11] E. Bettens, D. Van Dyck, A. J. den Dekker, J. Sijbers, and A. van den Bos, "Model-based two-object resolution from observations having counting statistics," *Ultramicroscopy*, vol. 77, pp. 37-48, 1999.

[12] A. van den Bos, "Resolution in model-based measurements," *IEEE Trans. Instrum. Meas.*, vol. 51, pp. 1055-1060, 2002.

[13] E. L. Kosarev, "Shannon's superresolution limit for signal recovery," *Inverse Problems*, vol. 6, pp. 55-76, 1990.

[14] P. Milanfar and A. Shakouri, "A statistical analysis of diffraction-limited imaging," in *Proc. Int. Conf. Image Processing*, Sept. 2002, pp. 864-867.

[15] S. M. Kay, *Fundamentals of Statistical Signal Processing, Estimation Theory*. Prentice-Hall, 1998.
[16] ——, *Fundamentals of Statistical Signal Processing, Detection Theory*. Englewood Cliffs, NJ: Prentice-Hall, 1998.

[17] ——, *Modern Spectral Estimation, Theory and Application*. Englewood Cliffs, NJ: Prentice-Hall, 1988.

[18] ——, "Spectrum analysis, a modern perspective," *Proc. IEEE*, vol. 69, no. 11, pp. 1380-1418, 1981.

[19] A. V. Oppenheim and R. W. Schafer, *Discrete-Time Signal Processing*. Englewood Cliffs, NJ: Prentice-Hall, 1993.

[20] P. P. Vaidyanathan, "Generalizations of the sampling theorem: Seven decades after Nyquist," *IEEE Trans. Circuits Syst.*, vol. 48, pp. 1094-1109, Sept. 2001.

[21] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," *IEEE Trans. Signal Processing*, vol. 50, pp. 1417-1428, June 2002.

[22] M. Shahram and P. Milanfar, "A statistical analysis of achievable resolution in incoherent imaging," in *Proc. SPIE Annual Meeting*, San Diego, CA, Aug. 2003, URL: http://www.soe.ucsc.edu/~milanfar/publications.htm.

[23] M. Elad and A. Feuer, "Restoration of single super-resolution image from several blurred, noisy and down-sampled measured images," *IEEE Trans. Image Processing*, vol. 6, pp. 1646-1658, Dec. 1997.

[24] N. Nguyen, P. Milanfar, and G. H. Golub, "A computationally efficient image superresolution algorithm," *IEEE Trans. Image Processing*, vol. 10, pp. 573-583, Apr. 2001.

[25] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multi-frame superresolution," *IEEE Trans. Image Processing*, to be published.
**Morteza Shahram** received the B.S. degree from the Amir-Kabir University of Technology, Tehran, Iran, in 1996 and the M.S. degree from the Sharif University of Technology, Tehran, in 1998 both in electrical engineering. He is currently pursuing the Ph.D. degree in electrical engineering at the University of California, Santa Cruz.

He was with the Signal Company, Tehran, as a Research Engineer from 1996 to 2001. His research interests are statistical signal and image processing and information-theoretic imaging.

**Peyman Milanfar** (S'90-M'93-SM'98) received the B.S. degree in electrical engineering/mathematics from the University of California, Berkeley, in 1988, and the S.M., E.E., and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, in 1990, 1992, and 1993, respectively.

Until 1999, he was a Senior Research Engineer at SRI International, Menlo Park, CA. He is currently Associate Professor of Electrical Engineering at the University of California, Santa Cruz. He was a Consulting Assistant Professor of computer science at Stanford University from 1998 to 2000, and a visiting Associate Professor there from June to December 2002. His technical interests are in statistical signal and image processing, and inverse problems.

Dr. Milanfar won a National Science Foundation CAREER award in 2000. He was an associate editor for the IEEE SIGNAL PROCESSING LETTERS from 1998 to 2001.
samples_new/texts_merged/2634535.md
---PAGE_BREAK---

ORIGINAL ARTICLE

WILEY

# Why is free education so popular? A political economy explanation

Juan A. Correa¹ | Yijia Lu² | Francisco Parro³ | Mauricio Villena³

¹Facultad de Economía y Negocios, Universidad Andres Bello, Santiago, Chile

²School of Law, New York University, New York, New York

³School of Business, Universidad Adolfo Ibáñez, Santiago, Chile

**Correspondence**

Francisco Parro, School of Business, Universidad Adolfo Ibáñez, 7941169, Santiago, Chile.
Email: fjparrog@gmail.com
## Abstract

This paper analyzes the political support for different funding regimes of education in a one-person, one-vote democracy. We focus the analysis on four systems that have had a preponderant presence in the political debate on education: a private system, a public system that delivers the same resources to each student (universal-free education), a public system that intends to equalize results, and a public system that aims to maximize the output of the economy. We show that a system of universal free education is the Condorcet winner. The level of income inequality and the degree to which income distribution is skewed to the right are key factors behind this conclusion. We also show that the voting outcome of public versus private funding for education depends crucially on the type of public funding under consideration.

## 1 | INTRODUCTION

Universal free education has become popular in several regions of the world. Western democracies have it at different stages of the educational ladder. European countries, such as France, provide free tuition to European students, and Germany offers free tuition even to international students. Argentina, the Czech Republic, and Greece supply free education at all educational levels. Most primary and secondary students in the United States attend public schools, which provide free education funded by a mix of federal, regional, and local resources.¹ In other countries, such as Chile, South Africa, and the United Kingdom, where

¹An extensive cross-country analysis of education's tuition fee schemes can be found in Bentaouet Kattan (2006).

---PAGE_BREAK---

higher education is not free, social movements have pressured the authorities to implement a scheme of universal free education for higher education.² In this paper, we give a political economy explanation for the popularity of free education.

A system of universal free education allocates public funds equally across students. This system, however, is not completely consistent with the main implications of a strand of the literature that emphasizes, first, the importance of economic growth to improve living standards and, second, human capital investments as the engine to promote growth (Benhabib & Spiegel, 1994; Hanushek & Kimko, 2000; among others). This branch of the literature points to a system in which public resources for education should be allocated to students with higher skills (so as to maximize aggregate output), relying on alternative instruments for redistribution. Universal free education also implies that public funds are allocated regardless of the student's family income. However, studies such as Samoff (1996) and Larkin and Staton (2001) highlight the importance of equity in the allocation of public resources spent on education. This implies that disadvantaged students should be supported with more resources, which would allow equalizing human capital across students. Hence, universal free education does not point in the direction suggested by these two strands of the literature.
A third strand of the literature suggests that different public funding systems should be implemented at different stages of the educational system. Empirical studies document low returns to interventions targeting disadvantaged adolescents, but high economic returns for remedial investments targeting young disadvantaged children (Cunha & Heckman, 2007; Cunha, James, Lochner, & Masterov, 2006; Heckman, 2008; Heckman & Masterov, 2007). This evidence implies an equity-efficiency trade-off for late child investments but not for early investments (Cunha & Heckman, 2007). Thus, public resources for education should focus on low-income students at earlier stages. However, at later stages, when human capital inequalities are difficult to undo, public resources should be shifted toward high-human capital students so as to maximize output, relying on an alternative instrument for socially desirable redistribution. The popularity of free education at different stages of education is not completely aligned with the implications derived from this third strand of the literature.

Then, why is universal free education so popular in the world? This paper gives a political economy explanation for this popularity. We model a static economy populated by a continuum of heterogeneous agents or *parents*, and each of them has one child and must vote for the funding regime that will finance the education of the child. Parents are heterogeneous in terms of human capital, which equals the family income. The parents' human capital is exogenously given and distributed according to a lognormal distribution function, as in Glomm and Ravikumar (1992) and Becker (1993). We study the Condorcet winner among four funding regimes that frequently appear in the political debate: a private system, a public system that delivers the same resources to all students, a public system that intends to equalize results, and a public system that aims to maximize the output of the economy.

Our analysis shows that a public system that universally invests the same resources in each student is the Condorcet winner in a one-person, one-vote democracy. The intuition behind our

²In Chile, the Confederation of Chilean Student Federations (CONFECH), a national body made up of students at Chilean universities, led a series of student protests across the country in 2011. The student movement demanded, among other things, an increase in state support for public universities and free public education. In South Africa, the "Fees Must Fall" movement emerged in 2015 after the government announced an increase in mandatory fees at the universities. Students were placated after the proposal for the increase was dropped. The 2010 United Kingdom student protests were a series of demonstrations held in opposition to the planned increase of the cap on tuition fees by the Conservative-Liberal Democrat coalition government. The biggest demonstration occurred in November 2010, officially known by the slogan "Fund Our Future: Stop Education Cuts," where thousands of students marched through central London demanding free education.

---PAGE_BREAK---

key finding relies on the lognormal distribution of income, which is skewed to the right. A public system that equalizes outcomes will channel more resources per student to a minority of poor students. The efficiency-oriented system, in contrast, diverts more resources per student to a minority of wealthy students. The majority therefore does not favor public systems that disproportionately benefit a small group of either poor or rich agents, in comparison to the system that equalizes resources across students. In addition, the lognormal income distribution also implies that the median income is below the per capita level. A proportional tax on income that is then redistributed evenly among all students benefits those whose income falls below the mean. Then, the latter agents, who are the majority, prefer the public system that invests the same amount in each student, rather than the private system.

Therefore, our paper provides a political economy explanation for the popularity of universal free education. We show that an ex ante egalitarian public funding system for education is the Condorcet winner when it is confronted by a private system, an ex post egalitarian public system, and an output-maximizing public system. In addition, we show that the voting outcome of public versus private funding for education depends crucially on the type of public funding under consideration. Concretely, we prove that voters might choose a private system when a government proposes as a single alternative either a public system that intends to equalize results or a public system that aims to maximize the output of the economy. Thus, the voting outcome of the public versus private funding systems is not a trivial issue. We also discuss extensions to the baseline model, to show that our main result holds in democracies with a limited degree of either elitism or populism and in a type of top-up education system.
Our work builds upon earlier studies of the political economy of education funding. Creedy and Francois (1990) examine the conditions under which an uneducated majority of individuals support the financing of a proportion of the costs of education through the tax system. Glomm and Ravikumar (1992) analyze the political support for private versus public education, but in their model, voters face only one public funding design. Fernandez and Rogerson (1995) claim that the net effect of public support for higher education is a transfer of resources from poor to rich agents. They show that the underlying factor behind this result is the fact that education is only partially publicly provided. Then, the rich and the middle class may vote for relatively low subsidies to exclude poorer agents from education in the presence of credit constraints to privately finance education. Epple and Romano (1996) study the existence and properties of voting equilibria over public school expenditure in the presence of a private alternative.

More recently, De Fraja (2001) studies the voting equilibrium when voters must choose between two higher education reforms: the imposition of an ability test for admission to a university and a uniform subsidy to university attendance financed by a proportional tax on income. Along similar lines, Anderberg and Balestrino (2008) study the voting equilibrium when there are two options to finance higher education in an economy with credit constraints: a subsidy to those who participate in education and a proportional income tax. Borck and Wimbersky (2014) study the political determination of higher education finance. The authors focus their analysis on the factors that might contribute toward higher education reforms from a traditional tax-subsidy scheme to income-contingent loan schemes or graduate taxes.

These previous studies have not analyzed the political support for education funding systems when private education competes with public funding alternatives aiming at equalizing resources, equalizing results, or maximizing output. Including a complete list of public funding alternatives is important since, as we show explicitly in this paper, the Condorcet winner indeed depends on the specific design of the public funding alternative. In this sense, the analysis developed by Glomm and Ravikumar (1992), who consider a single public funding system, does

---PAGE_BREAK---

not contain straightforward implications about the Condorcet winner for the case in which the pool of alternatives for the voters includes several public funding schemes.

The rest of this paper is organized as follows. Section 2 presents the model and derives human capital formation under different education funding systems. Section 3 analyzes the political support for alternative education funding systems. Section 4 discusses extensions to our model. Finally, Section 5 concludes.
+
## 2 | THE MODEL
|
| 69 |
+
|
| 70 |
+
Consider a static economy populated by a continuum of heterogeneous agents or *parents*, each with only one child.³ Children are differentiated by the human capital they inherit from their parents. This initial human capital of the child is an input for the child's formal education. Parent *i*'s initial human capital, $h_P^i$, is exogenously given and distributed according to a lognormal distribution function $G$ with parameters $\mu$ and $\sigma^2$ over support $(0, +\infty)$.⁴ We normalize the size of the population to 1.
|
| 71 |
+
|
| 72 |
+
Children do not make any decisions. They only receive education, which is used to accumulate human capital. Each parent decides how to allocate her income $h_P^i$ between consumption $c^i$ and her child's education $y^i$. We set labor to 1; thus, an agent's labor earnings equal her human capital. Parents cannot borrow against the future earnings of their children, since there is no capital market in this economy.⁵
|
| 73 |
+
|
| 74 |
+
All individuals have identical preferences. The preferences are for own consumption and for the total human capital they pass on to their descendants, as in Banerjee and Newman (1991).⁶ Specifically, agent *i* has the following utility function,
|
| 75 |
+
|
| 76 |
+
$$U(c^i, h_c^i) = \ln c^i + \lambda \ln h_c^i, \quad (1)$$
|
| 77 |
+
|
| 78 |
+
where $c^i$ is the agent's consumption and $h_c^i$ is the total human capital passed on to the child, discounted by $\lambda \in (0,1)$. The human capital passed on is determined by the following equation:⁷
|
| 79 |
+
|
| 80 |
+
$$h_c^i = \Theta(v^i + y^i)^{\gamma} (h_P^i)^{\delta}, \quad (2)$$
|
| 81 |
+
|
| 82 |
+
which depends upon agent *i*'s human capital $h_P^i$ and the total amount $v^i + y^i$ of resources invested in the education of the child, where $v^i$ are the resources (or voucher) invested in education by the government in the child of agent *i* and $y^i$ are the resources invested in education by agent *i*, the parent. The parameter $\Theta > 0$ is an exogenous constant. The parameter
|
| 83 |
+
|
| 84 |
+
³Sleebos (2003) reports that the average fertility rate in OECD countries is about 1.6 children per woman. Docquier (2004) shows that there is no clear relation between income and fertility in developed countries. However, a more general model with endogenous fertility rates would be an interesting avenue for future research.
|
| 85 |
+
|
| 86 |
+
⁴Since Gibrat (1931), the lognormal distribution has been extensively used to describe within- or between-country income distributions. The lognormal distribution has been empirically shown to explain most of the income distribution (see Clementi & Gallegati, 2005; Neal & Rosen, 2000; among others).
|
| 87 |
+
|
| 88 |
+
⁵Several studies have highlighted capital market imperfections as an important aspect of the investment in human capital (e.g., Aghion & Bolton, 1992; Becker, 1993; Becker & Tomes, 1979; Galor, 2000; Moav, 2002; among others).
|
| 89 |
+
|
| 90 |
+
⁶A more sophisticated formulation for altruism (Kohlberg, 1976; Loury, 1981; Becker, 1986; Banerjee & Newman, 1991; Becker, 1993, among others) leads to an untractable formulation when comparing different regimes.
|
| 91 |
+
|
| 92 |
+
⁷The human capital that parents pass on their children can be interpreted either as the initial skills of preprimary students who starts formal education or as the amount of human capital with which a secondary student starts her tertiary education.
---PAGE_BREAK---

$\gamma \in (0,1)$ captures the returns to investment in education and the parameter $\delta > 0$ captures the returns to the parental human capital.

The only difference between the educational systems studied is made by the constraints imposed upon $v^i$ and $y^i$. Under a purely private system, the government makes no investment in education, so $v^i = 0$. Agent $i$, therefore, divides her income $h_p^i$ between consumption $c^i$ and private investment in the education of her child $y^i$, with $h_p^i = c^i + y^i$. Under public education, only the government invests in education, so $y^i = 0$. Since agent $i$ spends nothing on education, all of the post-tax income $(1-\tau)h_p^i$ goes into consumption: $c^i = (1-\tau)h_p^i$, where $\tau$ is the tax rate on the agent's income. The total revenue raised by the government is $\tau H_p$, where $H_p = \int h_p dG(h)$. This revenue is distributed among the students in the following three ways in the public education systems we study: (a) equally ex ante, with $v^i = v^j$, $\forall i,j$; (b) equally ex post, so that $h_c^i = h_c^j, \forall i,j$; and (c) output maximizing, so that $dh_c^i/dv^i = dh_c^j/dv^j, \forall i,j$. In all three cases, budget balance requires $\mathbb{E}[v] = \tau H_p$, where $\mathbb{E}$ denotes the expectation operator.

## 2.1 | The private education system (S1)

In this section, we study the optimal investment in education under a purely private funding system, where the government's investment in education is absent ($v^i = 0, \forall i$). Agent $i$, therefore, chooses $c^i$ and $y^i$ to maximize $U(c^i, h_c^i)$ subject to the technology of human capital formation $h_c^i = \Theta(y^i)^{\gamma}(h_p^i)^{\delta}$ and the feasibility constraint $h_p^i = c^i + y^i$. The first order condition with respect to $y^i$ yields

$$y^i = \left( \frac{\lambda\gamma}{1 + \lambda\gamma} \right) h_p^i. \quad (3)$$

Therefore, parents invest a constant fraction $\lambda\gamma/(1+\lambda\gamma)$ of their income in the education of their children. We prove later that the fraction of the income that parents privately invest in education is identical to the majority's preferred tax rate.
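The closed form (3) is easy to check numerically. The sketch below (with assumed parameter values) maximizes the parent's utility over a grid of investment levels and recovers the fraction $\lambda\gamma/(1+\lambda\gamma)$:

```python
import numpy as np

lam, gamma, delta, theta = 0.8, 0.5, 0.6, 1.0   # assumed parameter values
h_p = 2.0                                       # parent's human capital (income)

# U(c, h_c) = ln(h_p - y) + lam * ln(theta * y^gamma * h_p^delta)
y = np.linspace(1e-6, h_p - 1e-6, 200_000)
U = np.log(h_p - y) + lam * np.log(theta * y**gamma * h_p**delta)
y_star = y[np.argmax(U)]                        # maximizer found by grid search
```

The maximizer coincides with `(lam * gamma / (1 + lam * gamma)) * h_p` up to grid resolution.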
## 2.2 | The public education systems

Now suppose that education is financed publicly. No private acquisition of education is allowed, so $y^i = 0$. Thus, agents consume their after-tax income $c^i = (1-\tau)h_p^i$. Public education is financed by a proportional income tax $\tau$. The resources collected by the government are used to provide education to children. We focus on three different public funding systems. In the first, the government invests an equal amount of money in each student. In the second, the government invests resources to equalize the human capital of the students at the end of the education stage. In the third, the government seeks to maximize the total human capital of the economy.

### 2.2.1 | The ex ante egalitarian public education system (S2)

In this public system, the government invests the same amount of resources in each student. The subsidy given to each student is denoted by $v$. Under the constraint that total expenditures must be equal to the total resources collected by the proportional income tax, the equilibrium investment in each student is

$$v = \tau \mathbb{E}[h_p]. \quad (4)$$

---PAGE_BREAK---

Hence, the government gives a flat subsidy to all students. The amount of this subsidy is equal to a fraction $\tau$ of the per capita income of the economy. Moreover, since agent i's utility is $\ln(1 - \tau) + \gamma\lambda \ln \tau + (\text{terms independent of } \tau)$, the tax rate $\tau^i$ that maximizes agent i's utility is $\tau^i = \gamma\lambda/(1 + \gamma\lambda)$. Since $\tau^i$ is independent of agent i's characteristics, the same tax rate maximizes all agents' utilities. Therefore, the government chooses $\tau = \gamma\lambda/(1 + \gamma\lambda)$, which is the tax rate preferred by all parents.
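The preferred tax rate can be checked the same way: maximizing the $\tau$-dependent part of utility, $\ln(1-\tau) + \gamma\lambda\ln\tau$, over a grid (with assumed $\gamma$ and $\lambda$) recovers $\gamma\lambda/(1+\gamma\lambda)$:

```python
import numpy as np

gamma, lam = 0.5, 0.8                            # assumed values in (0, 1)
tau = np.linspace(1e-6, 1 - 1e-6, 200_000)
U = np.log(1 - tau) + gamma * lam * np.log(tau)  # tau-dependent part of utility
tau_star = tau[np.argmax(U)]                     # preferred tax rate
```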
|
| 119 |
+
|
| 120 |
+
### 2.2.2 | The ex post egalitarian public education system (S3)
|
| 121 |
+
|
| 122 |
+
In this system, the government seeks to remedy initial inequalities in human capital through investments in education that equalize ex post human capital. To do so, the government invests in agents i and j the amounts $v^i$ and $v^j$, respectively, such that $h_c^i = h_c^j$. Therefore, the relative public investment in students from different families must satisfy $v^i/v^j = (h_p^j/h_p^i)^{\delta/\gamma}$. Taking expectations with respect to j and imposing the balanced-budget constraint $\mathbb{E}[v] = \tau\mathbb{E}[h_p]$, we have that the amount invested by the government on a student from family i is
$$ v^i = \tau \mathbb{E}[h_p] \left( \frac{(h_p^i)^{-\delta/\gamma}}{\mathbb{E}[(h_p)^{-\delta/\gamma}]} \right). \quad (5) $$
Therefore, each student receives a proportion of the per capita subsidy delivered under regime S2. This proportion decreases with the initial level of the human capital of the student (or, equivalently, with the family income). Specifically, the proportion of the per capita voucher that each student receives varies according to some measure of the gap between the initial human capital of the student and the average human capital of the economy. Poorer students receive more resources to compensate for their initial lower levels of human capital so that the results of the educational process are equalized across all students.
|
| 127 |
+
|
| 128 |
+
Additionally, as in the case of an ex ante egalitarian system, the same argument applies to show that the tax rate chosen is $\tau = \gamma\lambda/(1 + \gamma\lambda)$.
|
| 129 |
+
|
| 130 |
+
### 2.2.3 | The output maximizing public education system (S4)
|
| 131 |
+
|
| 132 |
+
In the third public system, the government invests the collected resources to maximize the total human capital of the economy. Given this goal, the efficient expenditure is achieved when the marginal product of investment in each student is equalized, that is, $dh_c^i/dv^i = dh_c^j/dv^j$, $\forall i, j$. Therefore, the relative amount of resources invested in each family is $v^i/v^j = (h_p^i/h_p^j)^{\delta/(1-\gamma)}$. As we did before, taking expectations with respect to j and imposing the balanced-budget constraint on the government $\mathbb{E}[v] = \tau\mathbb{E}[h_p]$, we obtain⁸
|
| 133 |
+
|
| 134 |
+
$$ v^i = \tau \mathbb{E}[h_p] \left( \frac{(h_p^i)^{\delta/(1-\gamma)}}{\mathbb{E}[h_p^{\delta/(1-\gamma)}}} \right). \quad (6) $$
|
| 135 |
+
|
| 136 |
+
⁸Equation (6) characterizes a maximum only if the second-order condition holds: $\gamma(\gamma-1)(v^i v^j - 2(h_p^i)^2)^{\delta/2} < 0$, $\forall i$. This condition holds since we have assumed that $\gamma \in (0, 1)$.
|
| 137 |
+
---PAGE_BREAK---
|
| 138 |
+
|
| 139 |
+
In this regime, each student receives a voucher that is increasing in the level of the student's initial human capital, since the marginal product of public investment in education is higher in students with a greater initial human capital. Therefore, output maximization requires providing larger subsidies to better-endowed students. As in the previous cases, it is straightforward to show that the tax rate chosen by the majority is $\tau = \gamma\lambda/(1 + \gamma\lambda)$.
|
| 140 |
+
|
| 141 |
+
# 3 | POLITICAL SUPPORT FOR THE EDUCATION FUNDING SYSTEMS
|
| 142 |
+
|
| 143 |
+
In this section, we analyze the political support for different education funding systems in a one-person, one-vote democracy. Concretely, we study the existence and identity of the Condorcet winner among the four funding systems described in Section 2. The game is solved by backward induction. First, the taxes are determined for each system. Then, the systems are compared in pairwise elections and the Condorcet winner is elected.
|
| 144 |
+
|
| 145 |
+
## 3.1 | Utility comparison
|
| 146 |
+
|
| 147 |
+
We first derive the indirect utility $V(h_p^i)$ of an agent $i$ under the four funding systems. In the expressions below, we group the terms to facilitate a comparison of the channels through which the agent's human capital $h_p^i$ impacts the agent's utility. In addition, we discuss each of these channels and assess the ones that matter in our comparison.
|
| 148 |
+
|
| 149 |
+
$$V^{S1}(h_p^i) = \ln\left(\frac{1}{1+\lambda\gamma}\right)h_p^i + \lambda \ln \theta + \lambda\delta \ln h_p^i + \lambda\gamma \ln\left(\frac{\lambda\gamma}{1+\lambda\gamma}\right)h_p^i, \quad (7)$$
|
| 150 |
+
|
| 151 |
+
$$V^{S2}(h_p^i) = \ln(1 - \tau)h_p^i + \lambda \ln \theta + \lambda\delta \ln h_p^i + \lambda\gamma \ln \tau \mathbb{E}[h_p], \quad (8)$$
|
| 152 |
+
|
| 153 |
+
$$V^{S3}(h_p^i) = \ln(1-\tau)h_p^i + \lambda \ln \theta + \lambda\delta \ln h_p^i + \lambda\gamma \left[ -\frac{\delta}{\gamma} \ln h_p^i + \ln \tau \mathbb{E}[h_p] - \ln \mathbb{E}[(h_p)^{-\delta/\gamma}] \right], \quad (9)$$
|
| 154 |
+
|
| 155 |
+
$$V^{S4}(h_p^i) = \ln(1-\tau)h_p^i + \lambda \ln \theta + \lambda\delta \ln h_p^i + \lambda\gamma \left[ \left(\frac{\delta}{1-\gamma}\right) \ln h_p^i + \ln \tau \mathbb{E}[h_p] - \ln \mathbb{E}[(h_p)^{\delta/(1-\gamma)}] \right]. \quad (10)$$
|
| 156 |
+
|
| 157 |
+
Human capital influences the utility of an agent through three channels. First, human capital determines the income of the agent and, thus, the agent's consumption. The equilibrium of disposable income for consumption under the private system is $(1/(1+\lambda\gamma))h_p^i$, and it is $(1-\tau)h_p^i$ under each of the public systems. We have already shown that the chosen tax rate is $\lambda\gamma/(1+\lambda\gamma)$. Thus, the amount invested by each parent in the private system equals the taxes paid by them to finance a public system. It follows that the equilibrium consumption level reached by any agent is the same under each of the four education funding systems.
|
| 158 |
+
---PAGE_BREAK---
|
| 159 |
+
|
| 160 |
+
We, therefore, conclude that the impact of a funding system on the disposable income of an agent is not a decisive factor to tilt the balance in favor of one of the funding systems.
|
| 161 |
+
|
| 162 |
+
Human capital also affects the indirect utility of agents through the production technology of human capital, described by Equation (2). Agents have preferences not only on consumption but also on the human capital they pass on to their children. Thus, the human capital of a parent directly determines the child's human capital and, through this channel, influences the parent's indirect utility. The latter effect is equal to $\lambda\delta\ln h_p^i$ and is identical under the four systems. Hence, neither does this channel play a role in the choice of the education funding system.
|
| 163 |
+
|
| 164 |
+
The third channel through which human capital affects the utility of an agent is the parental income's impact on the resources for education that the child receives under each of the funding systems. In the private system, parents invest a fixed fraction of their income, as reflected by the term $\ln y^i = \ln(\lambda\gamma/(1 + \lambda\gamma))h_p^i$ in Equation (7). Thus, there is a positive relationship between parental income and the resources invested in the student of the corresponding family. The ex ante egalitarian public education system (S2) invests the same resources in each family, as captured by the term $\ln v^i = \ln \tau \mathbb{E}[h_p]$ in Equation (8). Thus, there is no relationship between one family's income and the resources that the system invests in the student from that family. The ex post egalitarian public education system (S3) seeks to equalize ex post human capital. Thus, this system invests more in students from low-income families, generating a negative relationship between parental income and the resources invested by the system in the student. This relationship is expressed by $\ln v^i = -(\delta/\gamma)\ln h_p^i + \ln \tau \mathbb{E}[h_p] - \ln \mathbb{E}[(h_p)^{-(\delta/\gamma)}]$ in Equation (9). The opposite occurs with the efficient system (S4), which invests more in students from high-income families, as expressed by the term $\ln v^i = (\delta/(1-\gamma))\ln h_p^i + \ln \tau \mathbb{E}[h_p] - \ln \mathbb{E}[(h_p)^{(\delta/(1-\gamma))}]$ in Equation (10). Therefore, different systems invest differently in the student of a given family, even though the resources that the family disburses under each of the funding systems are identical.
|
| 165 |
+
|
| 166 |
+
The previous discussion implies that parents will support the system that invests the most in their children. The private system (S1) and the efficient system (S4) invest more in students from richer families, whereas the opposite occurs under the ex post egalitarian public education system (S3). The ex ante egalitarian public education system (S2) is neutral as it invests exactly the same amount in each student.
|
| 167 |
+
|
| 168 |
+
As an intermediate step in our analysis, we express Equations (7)–(10) in a simpler and more informative form. To do so, note that the resources invested by each of the systems in a student depend on first and second moments of the income distribution, that is, the average income and how unequally it is distributed over the families. We use the properties of the lognormal distribution to derive an expression for $\mathbb{E}[h_p]$, $\mathbb{E}[(h_p)^{-(\delta/\gamma)}]$, and $\mathbb{E}[(h_p)^{(\delta/(1-\gamma))}]$. For a lognormal distribution, we know that $\mathbb{E}[(h_p)^n] = \exp(n\mu + (1/2)n^2\sigma^2)$ for any $n \in \mathbb{R}$. Therefore,
|
| 169 |
+
|
| 170 |
+
$$ \mathbb{E}[h_p] = \exp\left(\mu + \frac{1}{2}\sigma^2\right), \qquad (11) $$
|
| 171 |
+
|
| 172 |
+
$$ \mathbb{E}[(h_p)^{-\delta/\gamma}] = \exp\left(-\frac{\delta}{\gamma}\mu + \frac{1}{2}\left(\frac{\delta}{\gamma}\right)^2\sigma^2\right), \qquad (12) $$
|
| 173 |
+
---PAGE_BREAK---
|
| 174 |
+
|
| 175 |
+
$$
|
| 176 |
+
\mathbb{E}[(h_p)^{\delta/(1-\gamma)}] = \exp\left(\left(\frac{\delta}{1-\gamma}\right)\mu + \frac{1}{2}\left(\frac{\delta}{1-\gamma}\right)^2\sigma^2\right). \quad (13)
|
| 177 |
+
$$
|
| 178 |
+
|
| 179 |
+
We substitute (11)–(13) into Equations (7)–(10) and obtain the utility of an agent *i* as a function of the first and second moments of the income distribution. To do so, we use the fact that $\tau = \lambda\gamma/(1 + \lambda\gamma)$ and let $\omega^i = \ln(1 - \tau)h_p^i + \lambda \ln \theta + \lambda\delta \ln h_p^i + \lambda\gamma \ln \tau$. Observe that $\omega^i$ is the same for all the education funding systems. Thus, we can focus the analysis on the elements of the indirect utility function that are affected by the investment that the funding system makes in the students, as we have already concluded in the earlier discussion.
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
V^{S1}(h_p^i) = \omega^i + \lambda\gamma \ln h_p^i, \quad (14)
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
V^{\mathcal{S}2}(h_p^i) = \omega^i + \lambda\gamma \left( \mu + \frac{1}{2} \sigma^2 \right), \quad (15)
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
V^{\mathcal{S}3}(h_p^i) = \omega^i + \lambda\gamma \left[ -\frac{\delta}{\gamma} \ln h_p^i + \left(1 + \frac{\delta}{\gamma}\right) \mu + \frac{1}{2} \left(1 - \left(\frac{\delta}{\gamma}\right)^2\right) \sigma^2 \right], \quad (16)
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
$$
|
| 194 |
+
V^{\mathcal{S}4}(h_p^i) = \omega^i + \lambda\gamma \left[ \left( \frac{\delta}{1-\gamma} \right) \ln h_p^i + \left( 1 - \frac{\delta}{1-\gamma} \right) \mu + \frac{1}{2} \left( 1 - \left( \frac{\delta}{1-\gamma} \right)^2 \right) \sigma^2 \right]. \quad (17)
|
| 195 |
+
$$
|
| 196 |
+
|
| 197 |
+
Note that $\sigma = 0$ in a completely egalitarian economy, in which the four systems give agent $i$ the same utility if $h_p^i = \exp(\mu)$; that is, $V^\ell(\exp(\mu)) = \omega^i + \lambda\gamma\mu$, for all $j \in \{S1,S2,S3,S4\}$. This agent with income $h_p^i = \exp(\mu)$ is the one with the median income of a lognormal distribution. Positive levels of inequality, however, break this indifference between the systems and make the choice of the Condorcet winner nontrivial.
|
| 198 |
+
|
| 199 |
+
**3.2 | Pairwise elections and the Condorcet winner**
|
| 200 |
+
|
| 201 |
+
In this section, we use Equations (14)–(17) to study pairwise voting among the four regimes. Define $h^{\text{Sa,Sb}}$ as the income level at which the indirect utilities of the agent under systems Sa and Sb are the same, where $a, b \in \{1,2,3,4\}$. We compute this income threshold for the pairs $\{S2,S1\}$, $\{S2,S3\}$, and $\{S2,S4\}$. For each of these pairwise comparisons involving S2, we assess whether a majority coalition exists to elect S2. We show that in any pairwise election involving S2, this system emerges as the winner.
|
| 202 |
+
|
| 203 |
+
Using Equations (14)–(17), we obtain
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
h^{\mathcal{S}2,\mathcal{S}1} = \exp\left(\mu + \frac{1}{2}\sigma^2\right), \qquad (18)
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
h^{\mathcal{S}2,\mathcal{S}3} = \exp\left(\mu - \frac{1}{2}\frac{\delta}{\gamma}\sigma^2\right), \tag{19}
|
| 211 |
+
$$
|
| 212 |
+
---PAGE_BREAK---
|
| 213 |
+
|
| 214 |
+
TABLE 1 Condorcet winner, $\delta > (1 - \gamma)$ and $\sigma > 0$
|
| 215 |
+
|
| 216 |
+
<table><thead><tr><th>Election</th><th>I</th><th>II</th><th>III</th><th>IV</th><th>Outcome</th></tr></thead><tbody><tr><td>{S2, S1}</td><td>S2</td><td>S2</td><td>S1</td><td>S1</td><td>S2</td></tr><tr><td>{S2, S3}</td><td>S3</td><td>S2</td><td>S2</td><td>S2</td><td>S2</td></tr><tr><td>{S2, S4}</td><td>S2</td><td>S2</td><td>S2</td><td>S4</td><td>S2</td></tr></tbody></table>
|
| 217 |
+
|
| 218 |
+
$$h^{S2,S4} = \exp\left(\mu + \frac{1}{2}\left(\frac{\delta}{1-\gamma}\right)^2\right). \quad (20)$$
|
| 219 |
+
|
| 220 |
+
We examine the cases for which $\sigma > 0$. We divide our analysis into three cases, $\delta > (1 - \gamma)$, $\delta < (1 - \gamma)$, and $\delta = (1 - \gamma)$, since the ranking of the $h^{S1,S3}$ terms above change depending on the relative values of $\delta$ and $\gamma$.⁹
|
| 221 |
+
|
| 222 |
+
Suppose first $\delta > (1 - \gamma)$. It follows that $h^{S2,S3} < h^{S2,S1} < h^{S2,S4}$. Therefore, Equations (18)–(20) divide the population into four groups depending on their income $h_p^i$: Group I for income level $h_p^i \le h^{S2,S3}$; Group II for income level $h^{S2,S3} < h_p^i \le h^{S2,S1}$; Group III for income level $h^{S2,S1} < h_p^i \le h^{S2,S4}$; and Group IV for income level $h_p^i > h^{S2,S4}$. The median voter is the agent $m$ with an income level $h_p^m = \exp(\mu)$. Thus, this division of the income space implies that the median voter belongs to group II. We analyze the majority voting equilibria in the following pairwise elections: {S2, S1}, {S2, S3}, and {S2, S4}.
|
| 223 |
+
|
| 224 |
+
Consider first the {S2, S1} election. The indirect utility functions $V^{S1}(h_p^i)$ and $V^{S2}(h_p^i)$ imply that $V^{S2}(h_p^i) \ge V^{S1}(h_p^i)$ for all $h_p^i \le h^{S2,S1}$. Then, S2 provides a greater level of utility than S1 for all agents in Groups I and II. Thus, these agents with incomes below $h^{S2,S1}$ strictly prefer the ex ante egalitarian public education system (S2) to the private system (S1). Since the median voter is in Group II, it follows that Groups I and II form a majority who prefers S2 to S1. Intuitively, the public system invests a fraction $\tau = \lambda\gamma / (1 + \lambda\gamma)$ of the mean income of the economy in each student's education. By contrast, the private system puts a fraction $\lambda\gamma / (1 + \lambda\gamma)$ of the family's income into the student's education. Thus, agents with incomes below the mean income prefer S2, since the S2 public system invests more in their children than these agents' investment levels under the private system S1.
|
| 225 |
+
|
| 226 |
+
Consider next the {S2, S3} election. In this case, we have that $V^{S2}(h_p^i) \ge V^{S3}(h_p^i)$ for all $h_p^i \ge h^{S2,S3}$. Then, agents with an income level above $h^{S2,S3}$ strictly support S2 over S3. Therefore, all agents from Groups II, III, and IV form a majority to elect S2 from the {S2, S3} election. Intuitively, S3 invests more in students from low-income families and less in students from high-income families than S2. Therefore, students from richer families (with $h_p^i \ge h^{S2,S3}$) receive more resources under a public system that delivers a flat subsidy (S2) than under a public system that attempts to equalize ex post results (S3).
|
| 227 |
+
|
| 228 |
+
Lastly, consider the {S2, S4} election. We have that $V^{S2}(h_p^i) \ge V^{S4}(h_p^i)$, for all $h_p^i \le h^{S2,S4}$. Then, agents with an income level below $h^{S2,S4}$ strictly prefer S2 over S4. Therefore, agents from Groups I, II, and III form a majority that strictly prefers S2 to S4. Intuitively, in comparison to
|
| 229 |
+
|
| 230 |
+
⁹The cases $\delta > (1 - \gamma)$, $\delta < (1 - \gamma)$, and $\delta = (1 - \gamma)$ correspond to increasing, decreasing, and constant returns to scale in the production function of human capital. We show that S2 is always the Condorcet winner under each of these cases. However, the political support for system S2 in the {$S2,S4$} election becomes more pronounced under increasing returns. The latter is a direct consequence of the fact that, under system S4, resources become much more concentrated on the richest students as returns to scale increase.
|
| 231 |
+
---PAGE_BREAK---
|
| 232 |
+
|
| 233 |
+
**TABLE 2** Condorcet winner, $\delta < (1 - \gamma)$ and $\sigma > 0$
|
| 234 |
+
|
| 235 |
+
<table><thead><tr><th>Election</th><th>I</th><th>II</th><th>III</th><th>IV</th><th>Outcome</th></tr></thead><tbody><tr><td>{S2, S1}</td><td>S2</td><td>S2</td><td>S2</td><td>S1</td><td>S2</td></tr><tr><td>{S2, S3}</td><td>S3</td><td>S2</td><td>S2</td><td>S2</td><td>S2</td></tr><tr><td>{S2, S4}</td><td>S2</td><td>S2</td><td>S4</td><td>S4</td><td>S2</td></tr></tbody></table>
|
| 236 |
+
|
| 237 |
+
S2, system S4 invests more in students from high-income families at the expense of all of the agents in Groups I, II and III.
|
| 238 |
+
|
| 239 |
+
Table 1 summarizes the voting outcome in the one-on-one elections {S2, S1}, {S2, S3}, and {S2, S4} for the case $\delta > (1 - \gamma)$.
|
| 240 |
+
|
| 241 |
+
We perform now an analogous analysis for the case $\delta < (1 - \gamma)$. In this case, $h^{S2,S3} < h^{S2,S4} < h^{S2,S1}$. This again divides the agents into four groups, depending on income $h_p^i$: Group I with $h_p^i \le h^{S2,S3}$; Group II with $h^{S2,S3} < h_p^i \le h^{S2,S4}$; Group III with $h^{S2,S4} < h_p^i \le h^{S2,S1}$; and Group IV with $h_p^i > h^{S2,S1}$. By the same analysis as the one above, we can show that a majority coalition exists to support S2 in each of the three pairwise elections (see Table 2).
|
| 242 |
+
|
| 243 |
+
Lastly, consider the case in which $\delta = (1 - \gamma)$. It follows that $h^{S2,S3} < h^{S2,S4} = h^{S2,S1}$, dividing the population into three groups: Group I with $h_p^i \le h^{S2,S3}$; Group II with $h^{S2,S3} < h_p^i \le h^{S2,S4} = h^{S2,S1}$; and Group III with $h_p^i > h^{S2,S4} = h^{S2,S1}$. The median income is again in Group II. Then, this is a case in which the private system (S1) and the output maximizing system (S4) generate a subsidy schedule such that the indifference between these systems and the system S2 is observed for the same threshold agent: the one with income $h_p^i = h^{S2,S4} = h^{S2,S1}$. Proceeding analogously to what we did previously, we show in Table 3 the results for this case.
|
| 244 |
+
|
| 245 |
+
Therefore, we conclude from the results of Tables 1 through 3 that a public funding system that collects taxes to invest the same amount in each student is the Condorcet winner.
|
| 246 |
+
|
| 247 |
+
## 3.3 | Public versus private funding for education: The type of public funding matters
|
| 248 |
+
|
| 249 |
+
We have shown that the ex ante egalitarian public education system S2 is the Condorcet winner in pairwise elections pitching S2 against private system S1 and the other two public systems, S3 and S4. In this section, we explore whether the other two public education funding schemes S3 and S4 also beat private education S1 in pairwise elections. That is, we study the Condorcet winner when the private system is confronted by only one public funding alternative that is different from S2. This analysis will shed light on whether the design and number of public funding alternatives matter for political support of public education over private education.
|
| 250 |
+
|
| 251 |
+
**TABLE 3** Condorcet winner, $\delta = (1 - \gamma)$ and $\sigma > 0$
|
| 252 |
+
|
| 253 |
+
<table><thead><tr><th>Election</th><th>I</th><th>II</th><th>III</th><th>Outcome</th></tr></thead><tbody><tr><td>{S2, S1}</td><td>S2</td><td>S2</td><td>S1</td><td>S2</td></tr><tr><td>{S2, S3}</td><td>S3</td><td>S2</td><td>S2</td><td>S2</td></tr><tr><td>{S2, S4}</td><td>S2</td><td>S2</td><td>S4</td><td>S2</td></tr></tbody></table>
|
| 254 |
+
---PAGE_BREAK---
|
| 255 |
+
|
| 256 |
+
Consider first the {S1, S3} pairwise election. In this case, the threshold for the indifference between the systems is
|
| 257 |
+
|
| 258 |
+
$$h^{\mathrm{S1,S3}} = \exp\left(\mu + \frac{1}{2}\left(1 - \frac{\delta}{\gamma}\right)\sigma^2\right). \quad (21)$$
|
| 259 |
+
|
| 260 |
+
We have three cases: $\delta > \gamma$, $\delta < \gamma$, and $\delta = \gamma$. As we did before, we analyze the nontrivial case in which $\sigma > 0$. Suppose $\delta > \gamma$. Then, we have two income groups: Group I consists of agents with income levels $h_p^i \le h^{\mathrm{S1,S3}}$ and Group II consists of agents with income levels $h_p^i > h^{\mathrm{S1,S3}}$. In this case, the voter with the median income belongs to Group II since $\exp(\mu) > h^{\mathrm{S1,S3}}$. Equations (14) and (16) imply that $V^{\mathrm{S3}}(h_p^i) \ge V^{\mathrm{S1}}(h_p^i)$, for all $h_p^i \le h^{\mathrm{S1,S3}}$. Then, all agents with income levels below $h^{\mathrm{S1,S3}}$ support the public system S3. Intuitively, the private system (S1) results in greater investment for the wealthier students, whereas the public system (S3) leads to greater investment in the poorer students. Therefore, Group I votes for the public system, whereas Group II votes for the private system. Since the median-income voter belongs to Group II, it follows that the majority chooses the private system in the {S1, S3} election.
|
| 261 |
+
|
| 262 |
+
Suppose now $\delta < \gamma$. In this case, the median income belongs to Group I because $\exp(\mu) < h^{\mathrm{S1,S3}}$. As before, Group I votes for the public system S3, whereas Group II votes for the private system. However, with the median-income voter now in Group I, the majority chooses the public system S3. Lastly, when $\delta = \gamma$, the median income coincides with the threshold $h^{\mathrm{S1,S3}}$. Thus, half of the voters support the private system and the other half support the public system, resulting in a tie. Table 4 summarizes these results.
|
| 263 |
+
|
| 264 |
+
Our analysis shows that when public education is pitched against private education, political support indeed depends on the type of public education under consideration. When the private system (S1) and the ex post egalitarian public education system (S3) is proposed to the voters, the majority votes for the private system when the returns to investment in education are relatively low compared with the returns to endowed human capital, as expressed by the condition $\gamma < \delta$. A greater influence of endowed human capital on the formation of the students' human capital requires that an ex post egalitarian public education system (S3) redistribute even more resources to the poor, since $\nu^i/\nu^j = (h_F^j/h_P^i)^{\delta/\gamma}$. Thus, the public resources for education become more concentrated on a minority of low-income students, making the public system S3 less popular than the private system S1 for the majority.
|
| 265 |
+
|
| 266 |
+
We show next that a similar conclusion results when the private system (S1) and the output-maximizing public system (S4) are the only alternatives for the voters. In this case, the income threshold for the indifference between the systems is
|
| 267 |
+
|
| 268 |
+
$$h^{\mathrm{S1,S4}} = \exp\left(\mu + \frac{1}{2}\left(1 + \left(\frac{\delta}{1-\gamma}\right)\sigma^2\right)\right), \quad (22)$$
|
| 269 |
+
|
| 270 |
+
TABLE 4 Condorcet winner, {S1, S3} Election
|
| 271 |
+
|
| 272 |
+
<table><thead><tr><th>Parameters</th><th>I</th><th>II</th><th>Outcome</th></tr></thead><tbody><tr><td>δ > γ and σ > 0</td><td>S3</td><td>S1</td><td>S1</td></tr><tr><td>δ < γ and σ > 0</td><td>S3</td><td>S1</td><td>S3</td></tr><tr><td>δ = γ and σ > 0</td><td>S3</td><td>S1</td><td>S1-S3</td></tr></tbody></table>
|
| 273 |
+
---PAGE_BREAK---
|
| 274 |
+
|
| 275 |
+
**TABLE 5** Condorcet winner, {S1, S4} Election
|
| 276 |
+
|
| 277 |
+
<table><thead><tr><th>Parameters</th><th>I</th><th>II</th><th>Outcome</th></tr></thead><tbody><tr><td>δ > 1 − γ and σ > 0</td><td>S1</td><td>S4</td><td>S1</td></tr><tr><td>δ < 1 − γ and σ > 0</td><td>S4</td><td>S1</td><td>S4</td></tr><tr><td>δ = 1 − γ and σ > 0</td><td>S1–S4</td><td>S1–S4</td><td>S1–S4</td></tr></tbody></table>
|
| 278 |
+
|
| 279 |
+
for $\delta \neq (1 - \gamma)$. Then, we have Group I with income levels $h_p^i \le h^{S1,S4}$ and Group II with income levels $h_p^i > h^{S1,S4}$. We again have three cases: $\delta > 1 - \gamma$, $\delta < 1 - \gamma$, and $\delta = 1 - \gamma$. Consider first the case $\delta > 1 - \gamma$. Equations (14) and (17) imply that $V^{S1}(h_p^i) \ge V^{S4}(h_p^i)$, for all $h_p^i \le h^{S1,S4}$. Thus, agents in Group I support S1 whereas agents in Group II support system S4. The median-income voter is in Group I. Therefore the majority votes for the private system S1. The intuition behind this result again relies on the relative importance of parental human capital on the formation of the human capital of their children. A higher value of $\delta$ increases the marginal product of endowed resources for the richer students relative to poorer students. As a result, the S4 public system channels even more resources to a minority of rich students, making the private system S1 more appealing to the majority.
|
| 280 |
+
|
| 281 |
+
Suppose now $\delta < 1 - \gamma$. In this case, we have that $V^{S1}(h_p^i) \ge V^{S4}(h_p^i)$, for all $h_p^i \ge h^{S1,S4}$. Thus, agents in Group I support S4 whereas agents in Group II support S1. The median-income voter is in Group I and, therefore, the majority now votes for the public funding system S4. In this case, even though the public system S4 invests less in students from poorer families than in richer students, the amount received by the poor students under S4 is greater than the amount received by them under S1. The reason for this is that a relatively lower $\delta$ implies that the difference in the marginal product of investment across students is smaller. Hence, differences in the resources delivered across students by the system that aims at equalizing marginal product are not so pronounced as the ones that would be observed under the private system.
|
| 282 |
+
|
| 283 |
+
Lastly, suppose $\delta = 1 - \gamma$. Then, we have $V^{S1}(h_p^i) = V^{S4}(h_p^i)$, for all $h_p^i \in (0, \infty)$, the public system and the private system lead to the same outcome for each family, resulting in a tie. Table 5 summarizes the results.
|
| 284 |
+
|
| 285 |
+
The previous analysis again reinforces the important message that the voting outcome of public versus private funding for education depends crucially on the type of public funding under consideration. As we have shown, when the public funding alternative employs a design that aims to equalize ex post results or maximize output, the majority may elect private education. We also demonstrate that the introduction of an ex ante egalitarian public funding system can resolve this indeterminacy.
|
| 286 |
+
|
| 287 |
+
# 4 | EXTENSIONS OF THE MODEL
|
| 288 |
+
|
| 289 |
+
In this section, we discuss two extensions to the baseline model. First, we address the case of incomplete democracies, where a fraction of the agents do not participate in politics. Second, we introduce an example to consider the complementarity between private and public education.
|
| 290 |
+
|
| 291 |
+
## 4.1 | Incomplete democracies
|
| 292 |
+
|
| 293 |
+
Our main result hinges upon the assumption that voters fully participate in a democracy in practice. However, voting turnout is never complete. In some democracies, the rich are more
|
| 294 |
+
---PAGE_BREAK---
|
| 295 |
+
|
| 296 |
+
likely to participate in politics than the poor; in other democracies, the opposite can be true. We define democracy as incomplete when the voting turnout is less than 100%. An incomplete democracy can be biased toward either the rich or the poor. We define a democracy as “elitist” if it excludes a fraction of the poorest agents of the economy. Analogously, we define a democracy as “populist” if it excludes a fraction of the richest agents of the economy. We show in the appendix that our main result holds for democracies with a limited degree of either elitism or populism. We now discuss the intuition behind this result.
|
| 297 |
+
|
| 298 |
+
Consider first the case of an elitist democracy, which excludes a fraction of the poorest agents of the economy. Since parents will support the system that invests the most in their children, the poorest parents will support S3 over S2 because the ex post egalitarian system gives more resources to children from low-income families than the system that invests equally across students. Thus, the fact that S2 is preferred to S3 in a complete democracy immediately implies that S2 is also chosen when a number of the poorest agents do not participate in politics. In addition, the poorest parents support S2 over S1 and S2 over S4 in pairwise elections because the ex ante egalitarian system invests more in their children than the private system and the efficient public system. In the appendix, we prove that a limited degree of elitism still leaves S2 as the winner in the {$S2,S1$} and {$S2,S4$} elections.
|
| 299 |
+
|
| 300 |
+
Consider now the case of a populist democracy, which excludes a fraction of the richest agents of the economy. The richest parents support S1 over S2 and S4 over S2. Both the private system and the output maximizing system invest more heavily in their children than the ex ante egalitarian system. In complete democracies, S2 is elected in pairwise elections {$S2,S1$} and {$S2,S4$.} Therefore, S2 would also be supported by the majority when a fraction of the richest parents are excluded from voting. In addition, the richest parents prefer system S2 in the {$S2,S3$} election. The appendix shows that the ex ante egalitarian system still wins the {$S2,S3$} election when a democracy's degree of populism is limited.
|
| 301 |
+
|
| 302 |
+
## 4.2 | Private and public education as complements
|
| 303 |
+
|
| 304 |
+
The analysis so far has assumed that private and public education are perfect substitutes in the human capital formation of students; note that the production technology of human capital is $h_c^i = \theta(v^i + y^i)^{\gamma}(h_p^i)^{\delta}$. The perfect substitutability between different systems is a realistic setting to study the political outcome when voters must choose a single alternative from a pool of purely private and public funding schemes.
|
| 305 |
+
|
| 306 |
+
The case in which public and private education are complements introduces two types of changes in the baseline model developed in Section 2. First, the production technology of human capital must address the complementarity between private and public education. Second, the information flow between private and public players must be precisely stated. Several modeling options arise from these considerations.
|
| 307 |
+
|
| 308 |
+
We sketch an example that modifies the production technology to show how our analysis can readily accommodate the complementarity between private and public education. Suppose that the educational process has two stages. In the first stage, agents carry out optimal private investment leading to $h_c^i$, which is the human capital of the student belonging to family i at the end of the first stage. Equations (2) and (3) imply that $h_c^i = \theta(\lambda\gamma/(1+\lambda\gamma))^{\gamma}(h_p^i)^{\gamma+\delta}$. In the second stage, politicians present the three public funding alternatives to the voters, who then choose the winner. Suppose the level of the human capital of the student at the end of the first stage becomes her initial human capital for the second stage. Substituting $h_c^i$ into Equation (2), we obtain

$$h_c^i = \theta^{1+\delta} (\nu^i)^{\gamma} \left( \frac{\lambda\gamma}{1+\lambda\gamma} \right)^{\gamma\delta} (h_p^i)^{(\gamma+\delta)\delta}. \quad (23)$$

Note that, compared to Equation (2), this setting could exacerbate or mitigate differences in human capital across families, depending on whether $\gamma + \delta$ is larger or smaller than 1. This has implications for the amount of resources that the ex post egalitarian public education system (S3) allocates to students from low-income families and the amount that the efficient system (S4) allocates to students from wealthy families. However, the analysis performed in Section 3.2 still holds once we reparametrize $\delta$ as $\tilde{\delta} = (\gamma + \delta)\delta$ and we consider the one-on-one elections that only include the public systems for the second stage.

We have shown one possible way to address the complementarity between private and public education. Interesting avenues for future research include the study of sequential voting, with agents first choosing from a pool of different private education schemes followed by a second-round election to choose from a pool of public funding systems. This could shed light on how the design of public funding schemes can affect agents' choices of private investment in education.

# 5 | CONCLUSIONS
This paper analyzed the political support for different education funding regimes in a one-person, one-vote political system. We showed that a public system that collects taxes and delivers the same amount of resources to each family is the Condorcet winner. In economies with some degree of income inequality, a system that seeks to equalize or maximize educational outcomes concentrates resources on a minority of the population and, therefore, lacks majority support. In addition, families with an income level below the mean receive more net resources under a public system that employs flat subsidies than under a private system. Therefore, a private system also lacks majority support.

The results of this paper provide a political economy explanation for the observation that governments tend to favor free education for all students (i.e., to spend the same amount on each student). Our paper also highlights the importance of specifying the type of public education under discussion. In particular, we show that voters may favor private education over public education when the latter equalizes or maximizes ex post educational outcomes.

## ORCID
Francisco Parro http://orcid.org/0000-0002-4395-9540

## REFERENCES

Aghion, P., & Bolton, P. (1992). Distribution and growth in models of imperfect capital markets. *European Economic Review*, **36**(2–3), 603–611.

Anderberg, D., & Balestrino, A. (2008). *The political economy of post-compulsory education policy with endogenous credit constraints* (CESifo Working Paper Series 2304). Munich: CESifo Group.

Banerjee, A. V., & Newman, A. F. (1991). Risk-bearing and the theory of income distribution. *Review of Economic Studies*, **58**(2), 211–235.

---PAGE_BREAK---

Becker, G. S., & Tomes, N. (1979). An equilibrium theory of the distribution of income and intergenerational mobility. *Journal of Political Economy*, **87**(6), 1153–1189.

Becker, G. S. (1986). Human capital and the rise and fall of families. *Journal of Labor Economics*, **4**(3), 1–39.

Becker, G. S. (1993). *Human capital: A theoretical and empirical analysis with special reference to education* (3rd ed.). Chicago, IL: University of Chicago Press.

Benhabib, J., & Spiegel, M. M. (1994). The role of human capital in economic development: Evidence from aggregate cross-country data. *Journal of Monetary Economics*, **34**(2), 143–173.

Bentaouet Kattan, R. (2006). *Implementation of free basic education policy* (World Bank Education Working Papers Series No. 7).

Borck, R., & Wimbersky, M. (2014). Political economics of higher education finance. *Oxford Economic Papers*, **66**(1), 115–139.

Clementi, F., & Gallegati, M. (2005). Pareto's law of income distribution: Evidence for Germany, the United Kingdom, and the United States. In A. Chatterjee, S. Yarlagadda, & B. K. Chakrabarti (Eds.), *Econophysics of wealth distributions* (pp. 3–14). Milano: Springer, New Economic Windows.

Creedy, J., & Francois, P. (1990). Financing higher education and majority voting. *Journal of Public Economics*, **43**(2), 181–200.

Cunha, F., Heckman, J., Lochner, L. J., & Masterov, D. V. (2006). Interpreting the evidence on life cycle skill formation. In E. A. Hanushek, & F. Welch (Eds.), *Handbook of the Economics of Education* (pp. 697–812). Amsterdam: North-Holland.

Cunha, F., & Heckman, J. (2007). The technology of skill formation. *American Economic Review*, **97**(2), 31–47.

De Fraja, G. (2001). Education policies: Equity, efficiency and voting equilibrium. *Economic Journal*, **111**(471), 104–119.

Docquier, F. (2004). Income distribution, non-convexities and the fertility–income relationship. *Economica*, **71**(282), 261–273.

Epple, D., & Romano, R. E. (1996). Ends against the middle: Determining public service provision when there are private alternatives. *Journal of Public Economics*, **62**(3), 297–325.

Fernandez, R., & Rogerson, R. (1995). On the political economy of education subsidies. *Review of Economic Studies*, **62**(2), 249–262.

Galor, O. (2000). Income distribution and the process of development. *European Economic Review*, **44**(4–6), 706–712.

Gibrat, R. (1931). *Les Inégalités Économiques*. Paris: Librairie du Recueil Sirey.

Glomm, G., & Ravikumar, B. (1992). Public versus private investment in human capital: Endogenous growth and income inequality. *Journal of Political Economy*, **100**(4), 818–834.

Hanushek, E. A., & Kimko, D. D. (2000). Schooling, labor-force quality, and the growth of nations. *American Economic Review*, **90**(5), 1184–1208.

Heckman, J. J. (2008). Schools, skills and synapses. *Economic Inquiry*, **46**(3), 289–324.

Heckman, J. J., & Masterov, D. V. (2007). The productivity argument for investing in young children. *Review of Agricultural Economics*, **29**(3), 446–493.

Kohlberg, E. (1976). A model of economic growth with altruism between generations. *Journal of Economic Theory*, **13**(1), 1–13.

Larkin, J., & Staton, P. (2001). Access, inclusion, climate, empowerment (AICE): A framework for gender equity in market-driven education. *Canadian Journal of Education*, **26**(3), 361–376.

Loury, G. C. (1981). Intergenerational transfers and the distribution of earnings. *Econometrica*, **49**(4), 843–867.

Moav, O. (2002). Income distribution and macroeconomics: The persistence of inequality in a convex technology framework. *Economics Letters*, **75**(2), 187–192.

Neal, D., & Rosen, S. (2000). Theories of the distribution of earnings. In A. B. Atkinson, & F. Bourguignon (Eds.), *Handbook of Income Distribution* (Vol. 1, pp. 379–427). Amsterdam: Elsevier North-Holland.

Samoff, J. (1996). Which priorities and strategies for education? *International Journal of Educational Development*, **16**(3), 249–271.

Sleebos, J. (2003). *Low fertility rates in OECD countries: Facts and policy responses* (OECD Labour Market and Social Policy Occasional Papers No. 15).

---PAGE_BREAK---

**How to cite this article:** Correa JA, Lu Y, Parro F, Villena M. Why is free education so popular? A political economy explanation. *Journal of Public Economic Theory*. 2019;1–19. https://doi.org/10.1111/jpet.12396

## APPENDIX

In this appendix, we formally prove that our main result holds for democracies with a limited degree of either elitism or populism. Consider first the percentiles of the income distribution in which the agents with human capital $h^{S2,S1}$, $h^{S2,S3}$, and $h^{S2,S4}$ are located. These agents are indifferent between the funding systems in the corresponding pairwise elections analyzed in Section 3.2. The lognormal income distribution implies that an agent with income $h_P^i$ is located in the $\Phi((\ln h_P^i - \mu)/\sigma) \times 100\%$ percentile of the income distribution, where $\Phi$ is the cumulative distribution function of the standard normal distribution. For instance, an agent with income $h_P^i = \exp(\mu)$ is in the $\Phi(0) \times 100\% = 50$th percentile of the income distribution. Let $p^{S\alpha,S\beta} \times 100\%$ be the income percentile of an agent with income $h^{S\alpha,S\beta}$. Then, Equations (18)–(20) imply

$$p^{S2,S1} = \Phi\left(\frac{\sigma}{2}\right), \qquad (A1)$$

$$p^{S2,S3} = \Phi\left(-\frac{\delta\sigma}{2\gamma}\right), \qquad (A2)$$

$$p^{S2,S4} = \Phi\left(\frac{\delta\sigma}{2(1-\gamma)}\right). \qquad (A3)$$

We now use Equations (A1)–(A3) to examine whether the ex ante egalitarian public education system (S2) remains the Condorcet winner in democracies with some degree of elitism or populism.

In Section 3.2, we concluded that all agents with an income below $h^{S2,S1}$ prefer the ex ante egalitarian public education system (S2) over the private system (S1) in a pairwise election. Thus, Equation (A1) implies that $\Phi(\sigma/2) \times 100\% > 50\%$ of voters prefer S2. Suppose an elitist democracy excludes a fraction $x$ of the poorest agents from voting. We can compute the $x$ such that S2 is still the winner of the {S2, S1} election¹⁰:

$$\frac{\Phi(\sigma/2) - x}{1-x} > 0.5. \qquad (A4)$$

Therefore, an elitist democracy that excludes less than $\tilde{x}^1 = 2(\Phi(\sigma/2) - 0.5)$ of the poorest agents still votes for the ex ante egalitarian public education system (S2) in the pairwise election {S2, S1}.

---PAGE_BREAK---

We proceed analogously for the other two pairwise elections: {S2, S3} and {S2, S4}. As shown in Section 3.2, all agents with an income above $h^{S2,S3}$ prefer the ex ante egalitarian public education system (S2) over the ex post egalitarian public education system (S3) in a one-on-one election. Then, Equation (A2) implies that $(1 - \Phi(-\delta\sigma/(2\gamma))) \times 100\% > 50\%$ of voters prefer S2 to S3. Thus, we can use an equation analogous to (A4) to derive the fraction of the richest agents that could be excluded from voting without affecting the selection of S2 in the {S2, S3} comparison:

$$ \frac{1 - \Phi(-\delta\sigma/(2\gamma)) - z}{1 - z} > 0.5. \quad (A5) $$

Therefore, a populist democracy that excludes less than $\bar{z} = 2(0.5 - \Phi(-\delta\sigma/(2\gamma)))$ of the richest agents still elects S2 over S3.

Lastly, we know from Section 3.2 that all agents with an income level below $h^{S2,S4}$ prefer the ex ante egalitarian public education system (S2) over the output maximizing system (S4) in a one-on-one election. Therefore, Equation (A3) implies that $\Phi(\delta\sigma/(2(1-\gamma))) \times 100\% > 50\%$ of the voters vote for S2. The equation analogous to (A4) is

$$ \frac{\Phi(\delta\sigma/(2(1-\gamma))) - x}{1-x} > 0.5. \quad (A6) $$

Thus, from Equation (A6) we conclude that an elitist democracy that excludes less than $\tilde{x}^2 = 2(\Phi(\delta\sigma/(2(1-\gamma))) - 0.5)$ of the poorest agents still selects the ex ante egalitarian public education system (S2) in the one-on-one election {S2, S4}.

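For intuition, the three exclusion bounds can be evaluated numerically. The sketch below is our own illustration, not from the paper: the parameter values for $\sigma$, $\gamma$, and $\delta$ are arbitrary choices, and $\Phi$ is computed from the error function.

```python
import math

def Phi(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

# Illustrative (uncalibrated) parameter values -- an assumption for this sketch.
sigma, gamma, delta = 0.8, 0.6, 0.3

x1 = 2 * (Phi(sigma / 2) - 0.5)                          # elitism bound for {S2, S1}
z_bar = 2 * (0.5 - Phi(-delta * sigma / (2 * gamma)))    # populism bound for {S2, S3}
x2 = 2 * (Phi(delta * sigma / (2 * (1 - gamma))) - 0.5)  # elitism bound for {S2, S4}

# Each bound is a strictly positive fraction of the electorate, so S2 survives
# any sufficiently limited degree of elitism or populism.
assert 0 < x1 < 1 and 0 < z_bar < 1 and 0 < x2 < 1
```

With more income inequality (larger $\sigma$), each bound widens, so S2's status as Condorcet winner becomes more robust to excluded voters.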
We show now that the ex ante egalitarian public education system (S2) is still the Condorcet winner in democracies with a limited degree of elitism and populism. Consider first an elitist democracy that excludes less than $\min\{\tilde{x}^1, \tilde{x}^2\}$ of the poorest agents of the economy. By construction, the ex ante egalitarian public education system (S2) wins the pairwise elections {S2, S1} and {S2, S4}. Moreover, the ex post egalitarian public education system (S3) invests more resources in students from low-income families. Thus, the fact that S2 is preferred to S3 in a complete democracy immediately implies that S2 is also selected when a number of the poorest agents do not participate in politics. Formally, the political support for system S2 in the {S2, S3} election when $x$ of the poorest agents are excluded from voting is $((1 - \Phi(-\delta\sigma/(2\gamma)))/(1-x)) \times 100\%$. We have already established that in a complete democracy ($x=0$), $(1 - \Phi(-\delta\sigma/(2\gamma))) \times 100\% > 50\%$. Since $((1 - \Phi(-\delta\sigma/(2\gamma)))/(1-x)) \times 100\% > (1 - \Phi(-\delta\sigma/(2\gamma))) \times 100\% > 50\%$ for any positive value of $x$, it follows that S2 will also be selected in the {S2, S3} election within an incomplete democracy that excludes less than $\min\{\tilde{x}^1, \tilde{x}^2\}$ of the poorest agents. Hence, S2 remains the Condorcet winner even if a fraction of the poorest agents do not participate in elections.

Similarly, consider a populist democracy that excludes less than $\bar{z}$ of the richest agents. By construction, the ex ante egalitarian public education system (S2) wins the {S2, S3} election. In addition, we know that systems S1 and S4 invest more resources in students from richer families, which makes these funding systems especially popular among the richest agents. We have shown that system S2 wins the one-on-one elections {S2, S1} and {S2, S4} in the context of a complete democracy. Then, it will also win in an incomplete democracy that excludes a fraction of the richest agents. Formally, the political support for system S2 in the {S2, S1} and {S2, S4} elections when a fraction $z$ of the richest agents is excluded from voting is $(\Phi(\sigma/2)/(1-z)) \times 100\%$ and $(\Phi(\delta\sigma/(2(1-\gamma)))/(1-z)) \times 100\%$, respectively. We have already established that in complete democracies ($z = 0$), $\Phi(\sigma/2) \times 100\% > 50\%$ and $\Phi(\delta\sigma/(2(1-\gamma))) \times 100\% > 50\%$. These two conditions imply that $(\Phi(\sigma/2)/(1-z)) \times 100\% > 50\%$ and $(\Phi(\delta\sigma/(2(1-\gamma)))/(1-z)) \times 100\% > 50\%$ for any positive fraction $z$. Thus, S2 wins the pairwise elections {S2, S1} and {S2, S4} in a populist democracy that excludes less than $\bar{z}$ of the richest agents. Hence, S2 remains the Condorcet winner even if a fraction of the richest agents do not participate in elections.
samples_new/texts_merged/2865847.md
ADDED
@@ -0,0 +1,129 @@
---PAGE_BREAK---
# Join Decompositions for Efficient Synchronization of CRDTs after a Network Partition

[Work in progress report]

Vitor Enes

Carlos Baquero

Paulo Sérgio Almeida

Ali Shoker

HASLab/INESC TEC and Universidade do Minho

## Abstract

State-based CRDTs allow updates on local replicas without remote synchronization. Once these updates are propagated, possible conflicts are resolved deterministically across all replicas. $\delta$-CRDTs bring significant advantages in terms of the size of messages exchanged between replicas during normal operation. However, when a replica joins the system after a network partition, it needs to receive the updates it missed and propagate the ones performed locally. Current systems solve this by exchanging the full state bidirectionally or by storing additional metadata along the CRDT. We introduce the concept of join-decomposition for state-based CRDTs, a technique orthogonal and complementary to delta-mutation, and propose two synchronization methods that reduce the amount of information exchanged, with no need to modify current CRDT definitions.

## 1. Introduction

The concept of Conflict-free Replicated Data Type (CRDT) was introduced in (Shapiro et al. 2011), which presents two flavors of CRDTs: state-based and operation-based. A state-based CRDT can be defined as a triple $(S, \sqsubseteq, \sqcup)$ where $S$ is a join-semilattice, $\sqsubseteq$ its partial order, and $\sqcup$ is a binary join operator that derives the least upper bound for every two elements of $S$.

With $\delta$-CRDTs (Almeida et al. 2016), every time a replica performs an update, it will only send the information needed to reflect this update in other replicas, with the anti-entropy algorithm keeping at each node metadata tracking which deltas still need to be propagated to current peers. However, after a long partition, such metadata is discarded. In this situation, when a replica goes online again, the other remote replicas typically send their full state so this replica sees the updates it missed.

(Linde et al. 2016) introduces the concept of $\Delta$-CRDTs where replicas exchange metadata used to calculate a $\Delta$ that reflects the missed updates. As this metadata is typically smaller than the full state, less is demanded from the network. In this approach CRDTs need to be extended to maintain the additional metadata for $\Delta$ derivation, and if this metadata needs to be garbage collected the mechanism falls back to standard full state transmission.

In this paper we present a mechanism that does not add additional metadata to standard state-based CRDTs, but instead is able to decompose the state into smaller states that can be selected and grouped in a $\Delta$ for efficient transmission.

## 1.1 Problem Statement

Consider replica *A* with state *a* and replica *B* with state *b*, which at some point stop disseminating updates but keep updating their local state. When these replicas go online, what should replica *A* send to replica *B* so that *B* sees the updates performed on *a* since they stopped communicating? We could try to find *c* such that:

$$a = b \sqcup c$$

but if both replicas performed updates while they were offline, their states are concurrent, and there is no such *c*. (We say two states *a* and *b* are concurrent if neither is below the other in the partial order: $a \parallel b \iff a \not\sqsubseteq b \land b \not\sqsubseteq a$.) The trick is how to find *c* ($\Delta$ from now on) which reflects the updates in the join of *a* and *b* still missing in *b*:

$$a \sqcup b = b \sqcup \Delta$$

The trivial example would be $\Delta = a$, but we would like to send less information than the full state. So, how can replica *A* calculate a smaller $\Delta$ to be sent to replica *B*, reflecting the missed updates?

## 1.2 Contributions

Firstly, we introduce the concept of join-decomposition for state-based CRDTs, a technique orthogonal and complementary to delta-mutation. Then, we propose two synchronization techniques. *State Driven*: replica *B* sends its full state *b* to replica *A* and replica *A* is able to derive $\Delta$. *Digest Driven*: replica *B* sends some information about its state *b*, smaller than *b* itself, but enough to allow replica *A* to compute $\Delta$.

## 2. Join Decompositions

We now explain how the concept of join-decomposition (Birkhoff 1937) can be applied to state-based CRDTs. Given state $r \in S$, we say that $D \in \mathcal{P}(S)$ is a join-decomposition of $r$ if:

$$\bigsqcup D = r \qquad (i)$$

$$\forall s \in D \cdot \bigsqcup (D \setminus \{s\}) \sqsubset r \qquad (ii)$$

Property (i) states that the join of all elements in a join-decomposition of $r$ should be $r$. Property (ii) says that each element in a join-decomposition is not redundant: joining the remaining elements is not enough to produce $r$.
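As a toy illustration (our own, not from the paper), properties (i) and (ii) can be checked directly in the powerset lattice, where the join is set union:

```python
def is_join_decomposition(parts, r):
    """Check property (i): the union of all parts equals r, and
    property (ii): no part is redundant (dropping it loses some of r)."""
    if set().union(*parts) != r:                           # property (i)
        return False
    for p in parts:
        rest = set().union(*(q for q in parts if q is not p))
        if rest == r:                                      # p is redundant: (ii) fails
            return False
    return True

r = {1, 2, 3}
assert is_join_decomposition([{1}, {2}, {3}], r)            # irreducible decomposition
assert not is_join_decomposition([{1, 2}, {2, 3}, {3}], r)  # {3} is redundant
```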

---PAGE_BREAK---

We are interested in decompositions made up of “basic” irreducible elements. An element $s$ is join-irreducible if it cannot result from a join of two elements other than itself, i.e.:

$$t \sqcup u = s \Rightarrow t = s \lor u = s$$

We say $D$ is a join-irreducible decomposition if $D$ is a join-decomposition and:

$$\forall s \in D \cdot s \text{ is join-irreducible} \qquad (iii)$$

States in common CRDTs typically have join-irreducible decompositions, and we now present some examples of decomposition functions, which take a state and return a join-irreducible decomposition.

## 2.1 Example Decompositions

A GCounter is a simple replicated counter where its value can only increase (Almeida et al. 2016). It is represented as a map from ids to naturals, i.e., $GCounter = I \hookrightarrow N$, and each replica can only increase the value of the counter in its position of the map. The value of the counter is the sum of all increments. For example, $p = \{A \mapsto 3, B \mapsto 5\}$ means replica A has incremented the counter three times, replica B five times, hence the value is eight. For each state $s$, a join-irreducible decomposition can be obtained by function:

$$D^{GCounter}(s) = \{\{i \mapsto v\} \mid (i, v) \in s\}$$

The decomposition for the GCounter $p$ above would be $\{\{A \mapsto 3\}, \{B \mapsto 5\}\}$.
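As a concrete sketch (our own Python rendering, not the authors' code), the GCounter join and the decomposition function above can be written as:

```python
def join(a, b):
    """GCounter join: pointwise maximum of the two maps (least upper bound)."""
    return {i: max(a.get(i, 0), b.get(i, 0)) for i in a.keys() | b.keys()}

def decompose(s):
    """D^GCounter: one singleton map per (id, value) entry of the state."""
    return [{i: v} for i, v in s.items()]

p = {"A": 3, "B": 5}
parts = decompose(p)            # [{"A": 3}, {"B": 5}]

# Property (i): joining all parts reproduces the original state.
rejoined = {}
for part in parts:
    rejoined = join(rejoined, part)
assert rejoined == p
```

Each singleton map is join-irreducible: it can only arise as the join of itself with something below it.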

To allow both increments and decrements we can compose two GCounter by pairing them (Baquero et al. 2015) and we have a PNCounter $= (I \hookrightarrow N) \times (I \hookrightarrow N)$. Join-irreducible decompositions can be obtained through:

$$D^{PNCounter}((p,n)) = \{(\{i \mapsto v\}, \{\}) \mid (i,v) \in p\} \cup \{(\{\}, \{i \mapsto v\}) \mid (i,v) \in n\}$$

As a final example, an Add-Wins set has state $\mathit{AWSet} = (E \hookrightarrow \mathcal{P}(D)) \times \mathcal{P}(D)$. This CRDT is a pair where the first component is a map (from element, in $E$, to a set of supporting dots (unique event identifiers), in $\mathcal{P}(D)$) and the second component is a causal context represented as a set of dots $\mathcal{P}(D)$ (Almeida et al. 2016). When an element is added to the set, a new entry in the map is created, if needed, mapping this element to a new dot, and current dots for the element, if any, are discarded. This new dot is also added to the causal context. To remove an element, we remove its entry from the map. An example for this data type where two elements ($x$ and $y$) were added and another (initially marked with unique dot $a2$) was removed is $s = (\{x \mapsto \{a1\}, y \mapsto \{b1, c1\}\}, \{a1, a2, b1, c1\})$. (The *range* function `rng` returns all sets of supporting dots in the mapping.) The join-irreducible decomposition of state $(m, c)$ can be obtained through function:

$$D^{\mathit{AWSet}}((m,c)) = \{(\{e \mapsto \{d\}\}, \{d\}) \mid (e,s) \in m, d \in s\} \cup \{(\{\}, \{d\}) \mid d \in c \setminus \bigcup \mathrm{rng}\, m\}$$

The join-irreducible decomposition for the state $s$ above is:

$$\{(\{x \mapsto \{a1\}\}, \{a1\}), \\ (\{y \mapsto \{b1\}\}, \{b1\}), \\ (\{y \mapsto \{c1\}\}, \{c1\}), \\ (\{\}, \{a2\})\}$$

## 3. Efficient Synchronization

**State Driven** The State Driven approach can be applied to all state-based CRDTs as long as we have a corresponding join-decomposition. We define $\min^\Delta : S \times S \to S$ as a function that given two states (the local state $a$ and the remote replica state $b$) will produce a $\Delta$. Join-irreducible decompositions will in general produce smaller $\Delta$s. Let $D : S \to \mathcal{P}(S)$ be a function that produces a join-decomposition.

$$\min^{\Delta}(a, b) = \bigsqcup\{s \mid s \in D(a) \land b \sqsubset b \sqcup s\}$$

This $\min^\Delta$ function joins all $s$ in the local state join-decomposition that strictly inflate the remote state. If the local replica ships the resulting $\Delta$, to be joined to the remote replica, and joins the state received from the remote replica to its local state, both these replicas will reach convergence (if in the meantime no new update was performed).
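For the GCounter lattice, the State Driven rule can be sketched as follows (our own illustration; `join` and `decompose` are straightforward Python renderings of the definitions given earlier):

```python
def join(a, b):
    """GCounter join: pointwise maximum."""
    return {i: max(a.get(i, 0), b.get(i, 0)) for i in a.keys() | b.keys()}

def decompose(s):
    """Join-irreducible decomposition of a GCounter."""
    return [{i: v} for i, v in s.items()]

def min_delta(a, b):
    """Join of the irreducibles of `a` that strictly inflate `b` (b < b join s)."""
    delta = {}
    for s in decompose(a):
        if join(b, s) != b:          # s strictly inflates the remote state
            delta = join(delta, s)
    return delta

a = {"A": 7, "B": 5}                 # local state
b = {"A": 3, "B": 5, "C": 2}         # remote (concurrent) state
delta = min_delta(a, b)
assert delta == {"A": 7}             # only the entry missing remotely is shipped
assert join(b, delta) == join(a, b)  # remote converges: b join delta = a join b
```

Note that `delta` is strictly smaller than the trivial choice $\Delta = a$, even though `a` and `b` are concurrent.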

**Digest Driven** With the Digest Driven approach we achieve the same results of State Driven but by exchanging less information. We re-define $\min^\Delta : S \times M \to S$ as a function that given the local state $a$ and some digest $m$ related to the remote state will produce a $\Delta$.

$$\min^{\Delta}(a,m) = \bigsqcup\{s \mid s \in D(a) \land \operatorname{inf}(s,m)\}$$

This digest will be data-type specific, which means that $\min^\Delta$ will use a type-specific function $\operatorname{inf}(s,m)$ to check if $s$ inflates the remote state summarized by the received digest $m$.


A digest extraction function $\operatorname{digest}: S \to M$ and the inflation test $\operatorname{inf}: S \times M \to \mathbb{B}$ for the causal $\mathit{AWSet}$ CRDT can be defined as:

$$\begin{align*}
\operatorname{digest}^{\mathit{AWSet}}((m,c)) &= (\bigcup \mathrm{rng}\, m,\ c) \\
\operatorname{inf}^{\mathit{AWSet}}((e,\{d\}), (a,c)) &=
\begin{cases}
T & \text{if } d \notin c \lor (e = \{\} \land d \in a) \\
F & \text{otherwise}
\end{cases}
\end{align*}$$

The function $\operatorname{digest}^{\mathit{AWSet}}$ returns a pair where the first component is the set of active dots (the supporting dots of elements that were added and not yet removed) and the second component is the full causal context. The inflation check $\operatorname{inf}^{\mathit{AWSet}}$ will return $T$ for $s \in D(a)$ if the dot in $s$ has not been seen in the other replica or $s$ represents a removed element (i.e., $(\{\}, \{d\})$) that has been added and not yet removed in the other replica ($d$ is still in the active dots).

If the Digest Driven technique is performed bidirectionally and no updates occurred in the meantime, both replicas will converge (otherwise, the updates performed in the meantime can still be collected separately in a dedicated buffer for further transmission).
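The AWSet digest and inflation test can be sketched in Python as follows (our own rendering of the definitions above; dots are modeled as plain strings, and the $\Delta$ is assembled by pairwise union, which is valid here because all parts come from a single local state):

```python
def decompose(state):
    """Join-irreducible decomposition of an AWSet (map, causal context)."""
    m, c = state
    covered = set().union(*m.values()) if m else set()
    parts = [({e: {d}}, {d}) for e, ds in m.items() for d in ds]
    parts += [({}, {d}) for d in c - covered]     # dots of removed elements
    return parts

def digest(state):
    """digest^AWSet: (active dots, full causal context)."""
    m, c = state
    active = set().union(*m.values()) if m else set()
    return (active, set(c))

def inflates(part, dig):
    """inf^AWSet: the part's dot is unseen, or it removes a still-active element."""
    e, dots = part
    active, c = dig
    d = next(iter(dots))
    return d not in c or (not e and d in active)

def min_delta(a, dig):
    """Union of the parts of `a` that inflate the state summarized by `dig`."""
    dm, dc = {}, set()
    for e, dots in (p for p in decompose(a) if inflates(p, dig)):
        for k, v in e.items():
            dm[k] = dm.get(k, set()) | v
        dc |= dots
    return (dm, dc)

# State from the text: x and y added; the element tagged a2 was removed.
s = ({"x": {"a1"}, "y": {"b1", "c1"}}, {"a1", "a2", "b1", "c1"})
# Hypothetical remote digest: the remote replica has only seen (and kept) a1.
remote = ({"a1"}, {"a1"})
delta = min_delta(s, remote)
assert delta == ({"y": {"b1", "c1"}}, {"a2", "b1", "c1"})
```

The resulting $\Delta$ carries the adds of $y$ and the removal dot $a2$, but not the $x \mapsto a1$ entry the remote replica already has.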

## References

P. S. Almeida, A. Shoker, and C. Baquero. Delta State Replicated Data Types. CoRR, abs/1603.01529, 2016. URL http://arxiv.org/abs/1603.01529.

C. Baquero, P. S. Almeida, A. Cunha, and C. Ferreira. Composition of State-based CRDTs. 2015.

G. Birkhoff. Rings of sets. Duke Math. J., 3(3):443–454, 1937.

A. Linde, J. Leitão, and N. Preguiça. Δ-CRDTs: Making δ-CRDTs Delta-Based. In PaPoC 2016, 2016.

M. Shapiro, N. Preguiça, C. Baquero, and M. Zawirski. Conflict-free Replicated Data Types. Technical Report RR-7687, July 2011. URL http://hal.inria.fr/inria-00609399/en/.
samples_new/texts_merged/2909063.md
ADDED
@@ -0,0 +1,56 @@
---PAGE_BREAK---
# y⁺ Calculation, Example 6D

Example 6D: Consider a high-velocity fluid over a flat plate. It is desired to find the thickness of the viscous sublayer at $y^+=1$. The fluid is H₂O at 395 K and 1 MPa. Its free stream velocity is 700 m/s, and the boundary layer thickness is $\delta=0.1$ m.

## Solutions:

1) Use the "Yplus_LIKE_Eddy_Scales_Book_Version.m" application found in my CFD/turbulence book, "Applied Computational Fluid Dynamics and Turbulence Modeling", Springer International Publishing, 1st Ed., ISBN 978-3-030-28690-3, 2019, DOI: 10.1007/978-3-030-28691-0.

or

2) Get a free copy of "Yplus_LIKE_Eddy_Scales_Book_Version.m" at www.cfdturbulence.com, or email me at tayloreddydk1@gmail.com.

or

3) Use the free $y^+$ estimation GUI tool offered by cfd-online, which is at http://www.cfd-online.com/Tools/yplus.php

or

4) Follow the step-by-step solution shown in the next slide.
---PAGE_BREAK---

# y⁺ Calculation, Example 6D

From $P$ and $T$, $\rho = 942 \text{ kg/m}^3$ and $\mu = 2.28 \times 10^{-4} \text{ kg/m-s}$.

$$\nu = \frac{\mu}{\rho} = \frac{2.28 \times 10^{-4}}{942} = 2.43 \times 10^{-7} \text{ m}^2/\text{s}$$

$$Re_x = \frac{U_\infty \delta(x)}{\nu} = \frac{700 \times 0.1}{2.43 \times 10^{-7}} = 2.87 \times 10^{8} < 10^{9}$$

$$C_f = [2 \log_{10}(Re_x) - 0.65]^{-2.3} = [2 \log_{10}(2.87 \times 10^8) - 0.65]^{-2.3} = 1.60 \times 10^{-3}$$

$$\tau_w = C_f \frac{\rho U_\infty^2}{2} = 1.60 \times 10^{-3} \times \frac{942 \times 700^2}{2} = 3.78 \times 10^5 \text{ Pa}$$

$$u_* = \sqrt{\frac{\tau_w}{\rho}} = \sqrt{\frac{3.78 \times 10^5}{942}} = 20.0 \text{ m/s}$$

$$y(\text{at } y^+=1) = \frac{y^+ \nu}{u_*} = \frac{1 \times 2.43 \times 10^{-7}}{20} = 1.22 \times 10^{-8} \text{ m}$$
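The hand calculation above can be checked with a short script (our own sketch, not the book's Matlab code; small differences from the slide values come from rounding in the intermediate steps):

```python
import math

rho, mu = 942.0, 2.28e-4       # water at 395 K, 1 MPa (kg/m^3, kg/m-s)
U, delta = 700.0, 0.1          # free stream velocity (m/s), boundary layer (m)

nu = mu / rho                              # kinematic viscosity, m^2/s
Re = U * delta / nu                        # Reynolds number (< 1e9)
Cf = (2 * math.log10(Re) - 0.65) ** -2.3   # skin friction correlation
tau_w = Cf * rho * U**2 / 2                # wall shear stress, Pa
u_star = math.sqrt(tau_w / rho)            # friction velocity, m/s
y = 1 * nu / u_star                        # wall distance at y+ = 1, m

print(f"Re = {Re:.2e}, y(y+=1) = {y:.2e} m")
```

Running it gives Re ≈ 2.9 × 10⁸ and y ≈ 1.2 × 10⁻⁸ m, consistent with both the hand calculation and the Matlab script results on the next slide.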

---PAGE_BREAK---

# y⁺ Calculation, Example 6D Solutions

## Approach 1 and 2 (the Matlab script, Yplus_LIKE_Eddy_Scales_Book_Version.m)

$$Re_x = 2.89 \times 10^8$$

$$y(\text{at } y^+=1) = 1.23 \times 10^{-8} \text{ m}$$

## Approach 4 (previous slide)

$$Re_x = 2.87 \times 10^8$$

$$y(\text{at } y^+=1) = 1.22 \times 10^{-8} \text{ m}$$

## Approach 3 (cfd-online tool)
samples_new/texts_merged/3147359.md
ADDED
|
@@ -0,0 +1,589 @@
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
Conference Paper
|
| 5 |
+
|
| 6 |
+
# Implementing Hybrid Semantics: From Functional to Imperative
|
| 7 |
+
|
| 8 |
+
Sergey Goncharov
|
| 9 |
+
Renato Neves
|
| 10 |
+
José Proença*
|
| 11 |
+
|
| 12 |
+
*CISTER Research Centre
|
| 13 |
+
CISTER-TR-201008
|
| 14 |
+
|
| 15 |
+
2020/11/30
|
| 16 |
+
---PAGE_BREAK---
|
| 17 |
+
|
| 18 |
+
# Implementing Hybrid Semantics: From Functional to Imperative
|
| 19 |
+
|
| 20 |
+
Sergey Goncharov, Renato Neves, José Proença*
|
| 21 |
+
|
| 22 |
+
*CISTER Research Centre
|
| 23 |
+
Polytechnic Institute of Porto (ISEP P.Porto)
|
| 24 |
+
Rua Dr. António Bernardino de Almeida, 431
|
| 25 |
+
4200-072 Porto
|
| 26 |
+
Portugal
|
| 27 |
+
Tel.: +351.22.8340509, Fax: +351.22.8321159
|
| 28 |
+
E-mail: sergey.goncharov@fau.de, nevrenato@di.uminho.pt, pro@isep.ipp.pt
|
| 29 |
+
https://www.cister-labs.pt
|
| 30 |
+
|
| 31 |
+
## Abstract
|
| 32 |
+
|
| 33 |
+
Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of hybridness as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HybCore with a sound and adequate operational semantics has been proposed. The latter semantics however did not provide hints to implementing HybCore as a runnable language, suitable for hybrid system simulation (e.g. the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HybCore, whose semantics is simpler and runnable, and yet intimately related with the semantics of HybCore at the level of hybrid monads. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for Haskell and UTop for OCaml. The major asset of our implementation is that it formally follows the operational semantic rules.
|
| 34 |
+
---PAGE_BREAK---
|
| 35 |
+
|
| 36 |
+
# Implementing Hybrid Semantics: From Functional to Imperative
|
| 37 |
+
|
| 38 |
+
Sergey Goncharov¹, Renato Neves² and José Proença³
|
| 39 |
+
|
| 40 |
+
¹ Dept. of Comp. Sci., FAU Erlangen-Nürnberg, Germany
|
| 41 |
+
|
| 42 |
+
² University of Minho & INESC-TEC, Portugal
|
| 43 |
+
|
| 44 |
+
³ CISTER/ISEP, Portugal
|
| 45 |
+
|
| 46 |
+
**Abstract.** Hybrid programs combine digital control with differential equations, and naturally appear in a wide range of application domains, from biology and control theory to real-time software engineering. The entanglement of discrete and continuous behaviour inherent to such programs goes beyond the established computer science foundations, producing challenges related to e.g. infinite iteration and combination of hybrid behaviour with other effects. A systematic treatment of *hybridness* as a dedicated computational effect has emerged recently. In particular, a generic idealized functional language HYBCORE with a sound and adequate operational semantics has been proposed. The latter semantics however did not provide hints to implementing HYBCORE as a runnable language, suitable for hybrid system simulation (e.g. the semantics features rules with uncountably many premises). We introduce an imperative counterpart of HYBCORE, whose semantics is simpler and runnable, and yet intimately related with the semantics of HYBCORE at the level of *hybrid monads*. We then establish a corresponding soundness and adequacy theorem. To attest that the resulting semantics can serve as a firm basis for the implementation of typical tools of programming oriented to the hybrid domain, we present a web-based prototype implementation to evaluate and inspect hybrid programs, in the spirit of GHCI for HASKELL and UTOP for OCAML. The major asset of our implementation is that it formally follows the operational semantic rules.
|
| 47 |
+
|
| 48 |
+
## 1 Introduction
|
| 49 |
+
|
| 50 |
+
**The core idea of hybrid programming.** Hybrid programming is a rapidly emerging computational paradigm [26,29] that aims at using principles and techniques from programming theory (e.g. compositionality [12,26], Hoare calculi [29,34], theory of iteration [2,8]) to provide formal foundations for developing computational systems that interact with physical processes. Cruise controllers are a typical example of this pattern; a very simple case is given by the hybrid program below.
|
| 51 |
+
|
| 52 |
+
```c
|
| 53 |
+
while true do {
|
| 54 |
+
if v ≤ 10 then (v' = 1 for 1) else (v' = -1 for 1) (cruise controller)
|
| 55 |
+
}
|
| 56 |
+
```
|
| 57 |
+
---PAGE_BREAK---
|
| 58 |
+
|
| 59 |
+
In a nutshell, the program specifies a digital controller that periodically measures and regulates a vehicle's velocity (v): if the latter is less than or equal to 10, the controller accelerates during 1 time unit, as dictated by the program statement $v' = 1 \text{ for } 1$ (here $v' = 1$ is a differential equation representing the velocity's rate of change over time, and the value 1 on the right-hand side of `for` is the duration during which the program statement runs). Otherwise, it decelerates during the same amount of time ($v' = -1 \text{ for } 1$). Figure 1 shows the output of this hybrid program for an initial velocity of 5.
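The controller's sampled behaviour is easy to reproduce. The following toy simulation (our illustration, not the paper's semantics) exploits the fact that $v'$ is constant within each 1-time-unit segment, so every iteration changes $v$ by exactly $\pm 1$:

```python
# Toy simulation of the cruise controller: v' = ±1 held constant for 1 time
# unit per loop iteration, so each segment integrates exactly to v ± 1.
def cruise(v0, iterations):
    v, trace = v0, [v0]
    for _ in range(iterations):
        dv = 1.0 if v <= 10 else -1.0   # if v <= 10 accelerate, else decelerate
        v += dv * 1.0                    # run v' = dv for 1 time unit
        trace.append(v)
    return trace

print(cruise(5.0, 8))   # v climbs towards 10, then oscillates around it
```

Starting from 5, the velocity reaches the threshold and then bounces between 10 and 11, which is the sawtooth-like behaviour plotted in Figure 1.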
|
| 60 |
+
|
| 61 |
+
Note that in contrast to standard programming, the cruise controller involves not only classical constructs (while-loops and conditional statements) but also differential ones (which are used for describing physical processes). This cross-disciplinary combination is the core feature of hybrid programming and has a notably wide range of application domains (see [29,30]). However, it also hinders the use of classical techniques of programming, and thus calls for a principled extension of programming theory to the hybrid setting.
|
| 62 |
+
|
| 63 |
+
Fig. 1: Vehicle's velocity
|
| 64 |
+
|
| 65 |
+
As is already apparent from the (cruise controller) example, we stick to an *imperative* programming style, in particular, in order to keep in touch with the established denotational models of physical time and computation. A popular alternative for modelling real-time and hybrid systems is to use a *declarative* programming style, which is done e.g. in Real-Time Maude [27] or Modelica [10]. A well-known benefit of declarative programming is that programs are very easy to write; on the flip side, however, it is considerably more difficult to define what they exactly mean.
|
| 66 |
+
|
| 67 |
+
**Motivation and related work.** Most of the previous research on formal hybrid system modelling has been inspired by automata theory and Kleene algebra (as the corresponding algebraic counterpart). These approaches led to the well-known notion of hybrid automaton [17] and Kleene algebra based languages for hybrid systems [28,18,19]. From the purely semantic perspective, these formalizations are rather close and share such characteristic features as *nondeterminism* and what can be called *non-refined divergence*. The former is standardly justified by the focus on formal verification of safety-critical systems: in such contexts overabstraction is usually desirable and useful. However, coalescing *purely hybrid* behaviour with nondeterminism detaches semantic models from their prototypes as they exist in the wild. This brings up several issues. Most obviously, a nondeterministic semantics, especially not given in an operational form, cannot directly serve as a basis for languages and tools for hybrid system testing and simulation. Moreover, models with nondeterminism baked in do not provide a clear indication of how to combine hybrid behaviour with effects other
|
| 68 |
+
---PAGE_BREAK---
|
| 69 |
+
|
| 70 |
+
than nondeterminism (e.g. probability), or to combine it with nondeterminism in a different way (van Glabbeek's spectrum [36] gives an idea of the diversity of potentially arising options). Finally, the Kleene algebra paradigm strongly suggests a relational semantics for programs, with the underlying relations connecting a state on which the program is run with the states that the program can reach. As previously indicated by Höfner and Möller [18], this view is too coarse-grained and contrasts with the trajectory-based one, where a program is associated with a trajectory of states (recall Figure 1). The trajectory-based approach provides an appropriate abstraction for such aspects as notions of convergence, periodic orbits, and duration-based predicates [5]. This potentially enables analysis of properties such as *how fast* our (cruise controller) example reaches the target velocity or for *how long* it exceeds it.
|
| 71 |
+
|
| 72 |
+
The issue of *non-refined divergence* mentioned earlier arises from the Kleene algebra law $p;0 = 0$ in conjunction with Fischer-Ladner's encoding of while-loops `while b do { p }` as $(b;p)^*; \neg b$. This creates havoc with all divergent programs `while true do { p }`, as they become identified with divergence 0, thus making the above (cruise controller) example meaningless. This issue is extensively discussed in Höfner and Möller's work [18] on a *nondeterministic* algebra of trajectories, which tackles the problem by disabling the law $p;0 = 0$ and by introducing a special operator for infinite iteration that inherently relies on nondeterminism. This iteration operator inflates trajectories at so-called 'Zeno points' with arbitrary values, which in our case would cause e.g. the program
|
| 73 |
+
|
| 74 |
+
$$ x := 1;\ \texttt{while true do}\ \{\ \texttt{wait}\ x;\ x := x/2\ \} \quad (\text{zeno}) $$
|
| 75 |
+
|
| 76 |
+
to output at time instant 2 all possible values in the valuation space (the expression `wait t` represents a wait call of t time units). More details about Zeno points can be found in [18,14].
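The timing behaviour of (zeno) is easy to reproduce numerically: iteration durations halve each round, so the accumulated execution time converges to the Zeno point $t = 2$ without ever reaching it. A minimal sketch:

```python
# Sketch of the (zeno) program's timing: each iteration does "wait x; x := x/2",
# so the iteration durations are 1, 1/2, 1/4, ... and the total elapsed time
# accumulates towards the Zeno point t = 2 without ever reaching it.
x, t = 1.0, 0.0
durations = []
for _ in range(30):
    t += x              # wait x time units
    durations.append(x)
    x /= 2              # x := x/2
print(t)                # approaches 2, but stays strictly below it
```

After 30 iterations the elapsed time is $2 - 2^{-29}$: no finite number of loop unfoldings ever covers the time instant 2, which is exactly the situation the nondeterministic trajectory algebra patches with arbitrary values.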
|
| 77 |
+
|
| 78 |
+
In previous work [12,14], we pursued a *purely hybrid* semantics via a simple *deterministic functional* language HYBCORE, with while-loops for which we used Elgot's notion of iteration [8] as the underlying semantic structure. That resulted in a semantics of finite and infinite iteration, corresponding to a refined view of divergence. Specifically, we developed an operational semantics and also a denotational counterpart for HYBCORE. An important problem of that semantics, however, is that it involves infinitely many premises and requires calculating the total duration of programs, which precludes using such semantics directly in implementations. Both the examples (cruise controller) and (zeno) above are affected by this issue. In the present paper we propose an *imperative* language with a denotational semantics similar to HYBCORE's, but now provide a clear recipe for executing the semantics in a constructive manner.
|
| 79 |
+
|
| 80 |
+
**Overview and contributions.** Building on our previous work [14], we devise operational and denotational semantics suitable for implementation purposes, and provide a soundness and adequacy theorem relating both these styles of semantics. Results of this kind are well-established yardsticks in the programming language theory [37], and beneficial from a practical perspective. For example, small-step operational semantics naturally guides the implementation of compilers for
|
| 81 |
+
---PAGE_BREAK---
|
| 82 |
+
|
| 83 |
+
programming languages, whilst denotational semantics is more abstract, syntax-independent, and guides the study of program equivalence, of the underlying computational paradigm, and its combination with other computational effects.
|
| 84 |
+
|
| 85 |
+
As mentioned before, in our previous work [14] we introduced a simple functional hybrid language HYBCORE with operational and denotational monad-based semantics. Here, we work with a similar imperative while-language, whose semantics is given in terms of a global state space of trajectories over $\mathbb{R}^n$, which is a commonly used carrier when working with solutions of systems of differential equations. A key principle we have taken as a basis for our new semantics is the capacity to determine behaviours of a program p by being able to examine only some subterms of it. In order to illustrate this aspect, first note that our semantics does not reduce program terms p and initial states $\sigma$ (corresponding to valuation functions $\sigma: \mathcal{X} \to \mathbb{R}$ on program variables $\mathcal{X}$) to states $\sigma'$, as usual in classical programming. Instead, it reduces triples p, $\sigma$, t of programs p, initial states $\sigma$ and time instants t to a state $\sigma'$; such a reduction can be read as "given $\sigma$ as the initial state, program p produces a state $\sigma'$ at time instant t". Then, the reduction process of p, $\sigma$, t to a state only examines fragments of p or unfolds it when strictly necessary, depending on the time instant t. For example, the reduction of the (cruise controller) unfolds the underlying loop only twice for the time instant $1 + 1/2$ (the time instant $1 + 1/2$ occurs in the second iteration of the loop). This is directly reflected in our prototype implementation of an interactive evaluator of hybrid programs, LINCE. It is available online and comes with a series of examples for the reader to explore (http://arcatools.org/lince). The plot in Figure 1 was automatically obtained from LINCE, by invoking the previously described reduction process for a predetermined sequence of time instants t.
|
| 86 |
+
|
| 87 |
+
For the denotational model, we build on our previous work [12,14] where hybrid programs are interpreted via a suitable monad **H**, called the *hybrid monad* and capturing the computational effect of *hybridness*, following the seminal approach of Moggi [24,25]. Our present semantics is more lightweight and is naturally couched in terms of another monad **H**<sub>S</sub>, parametrized by a set **S**. In our case, as mentioned above, **S** is the set of trajectories over $\mathbb{R}^n$ where *n* is the number of available program variables $\mathcal{X}$. The latter monad is in fact parametrized in a formal sense [35] and comes out as an instance of a recently emerged generic construction [7]. A remarkable salient feature of that construction is that it can be instantiated in a constructive setting (without using any choice principles) – although we do not touch upon this aspect here, in our view this reinforces the fundamental nature of our semantics. Among various benefits of **H**<sub>S</sub> over **H**, the former monad enjoys a construction of an iteration operator (in the sense of Elgot [8]) as a *least fixpoint*, calculated as a limit of an $\omega$-chain of approximations, while for **H** the construction of the iteration operator is rather intricate and no similar characterization is available. A natural question that arises is: how are **H** and **H**<sub>S</sub> related? We do answer it by providing an instructive connection, which sheds light on the construction of **H**, by explicitly identifying semantic ingredients which have to be added to **H**<sub>S</sub> to obtain **H**. Additionally, this results in “backward compatibility” with our previous work.
|
| 88 |
+
---PAGE_BREAK---
|
| 89 |
+
|
| 90 |
+
**Document structure.** After short preliminaries (Section 2), in Section 3 we introduce our while-language and its operational semantics. In Sections 4 and 5, we develop the denotational model for our language and connect it formally to the existing hybrid monad [12,14]. In Section 6, we prove a soundness and adequacy result for our operational semantics w.r.t. the developed model. Section 7 describes LINCE's architecture. Finally, Section 8 concludes and briefly discusses future work. Omitted proofs and examples are found in the extended version of the current paper [15].
|
| 91 |
+
|
| 92 |
+
## 2 Preliminaries
|
| 93 |
+
|
| 94 |
+
We assume familiarity with category theory [1]. By $\mathbb{R}$, $\mathbb{R}_+$ and $\bar{\mathbb{R}}_+$ we respectively denote the sets of reals, non-negative reals, and extended non-negative reals (i.e. $\mathbb{R}_+$ extended with the infinity value $\infty$). Let $[0, \bar{\mathbb{R}}_+]$ denote the set of downsets of $\bar{\mathbb{R}}_+$ having the form $[0, d]$ ($d \in \mathbb{R}_+$) or the form $[0, d)$ ($d \in \bar{\mathbb{R}}_+$). We call the elements of the dependent sum $\sum_{I \in [0, \bar{\mathbb{R}}_+]} X^I$ trajectories (over $X$). By $[0, \mathbb{R}_+]$, $[0, \bar{\mathbb{R}}_+)$ and $[\bar{0}, \bar{\mathbb{R}}_+)$ we denote the following corresponding subsets of $[0, \bar{\mathbb{R}}_+]$: $\{[0, d] \mid d \in \mathbb{R}_+\}$, $\{[0, d) \mid d \in \bar{\mathbb{R}}_+\}$ and $\{[0, d) \mid d \in \bar{\mathbb{R}}_+, d > 0\}$. By $X \amalg Y$ we denote the disjoint union, which is the categorical coproduct in the category of sets, with the corresponding left and right injections inl: $X \to X \amalg Y$, inr: $Y \to X \amalg Y$. To reduce clutter, we often use plain union $X \cup Y$ in place of $X \amalg Y$ if X and Y are disjoint by construction.
|
| 95 |
+
|
| 96 |
+
By $a \triangleleft b \triangleright c$ we denote the case distinction construct: a if b is true and c otherwise. By ! we denote the empty function, i.e. a function with the empty domain. For the sake of succinctness, we use the notation $e^t$ for the function application $e(t)$ with a real value $t$.
|
| 97 |
+
|
| 98 |
+
## 3 An imperative hybrid while-language and its semantics
|
| 99 |
+
|
| 100 |
+
This section introduces the syntax and operational semantics of our language. We first fix a stock of n variables $\mathcal{X} = \{x_1, \dots, x_n\}$ over which we build atomic programs, according to the grammar
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
\begin{aligned}
|
| 104 |
+
At(\mathcal{X}) &\ni x := t \mid x'_1 = t_1, \dots, x'_n = t_n \quad \texttt{for } t \\
|
| 105 |
+
LTerm(\mathcal{X}) &\ni r \mid r \cdot x \mid t+s
|
| 106 |
+
\end{aligned}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where $x \in \mathcal{X}$, $r \in \mathbb{R}$, $t_i, t, s \in LTerm(\mathcal{X})$. An atomic program is thus either a classical assignment $x := t$ or a differential statement $x'_1 = t_1, \dots, x'_n = t_n$ for t. The latter reads as "run the system of differential equations $x'_1 = t_1, \dots, x'_n = t_n$ for t time units". We then define the while-language via the grammar
|
| 110 |
+
|
| 111 |
+
$$ Prog(\mathcal{X}) \ni a \mid p;\, q \mid \texttt{if}\ b\ \texttt{then}\ p\ \texttt{else}\ q \mid \texttt{while}\ b\ \texttt{do}\ \{\, p \,\} $$
|
| 112 |
+
|
| 113 |
+
where $p, q \in Prog(\mathcal{X})$, $a \in At(\mathcal{X})$ and $b$ is an element of the free Boolean algebra generated by the terms $t \leqslant s$ and $t \geqslant s$. The expression `wait t` (from the previous section) is encoded as the differential statement $x'_1 = 0, \dots, x'_n = 0$ for t.
|
| 114 |
+
---PAGE_BREAK---
|
| 115 |
+
|
| 116 |
+
*Remark 1.* The systems of differential equations that our language allows are always linear. This is not to say that we could not consider more expressive systems; in fact we could straightforwardly extend the language in this direction, for its semantics (presented below) is not impacted by specific choices of solvable systems of differential equations. Here, however, we do not focus on such choices regarding the expressivity of the continuous dynamics, and instead concentrate on a core hybrid semantics on which to study the fundamentals of hybrid programming.
|
| 117 |
+
|
| 118 |
+
In the sequel we abbreviate differential statements $x_1' = t_1, \dots, x_n' = t_n$ for $t$ as $\bar{x}' = \bar{t}$ for $t$, where $\bar{x}'$ and $\bar{t}$ abbreviate the corresponding vectors of variables $x_1' \dots x_n'$ and linear-combination terms $t_1 \dots t_n$. We call functions of type $\sigma: \mathcal{X} \to \mathbb{R}$ environments; they map variables to the respective valuations. We use the notation $\sigma\nabla[\bar{v}/\bar{x}]$ to denote the environment that maps each $x_i$ in $\bar{x}$ to $v_i$ in $\bar{v}$ and the rest of the variables in the same way as $\sigma$. Finally, we denote by $\phi_{\sigma}^{\bar{x}'=\bar{t}}: [0, \infty) \to \mathbb{R}^n$ the solution of the system of differential equations $\bar{x}' = \bar{t}$ with $\sigma$ determining the initial condition. When clear from context, we omit the superscript in $\phi_{\sigma}^{\bar{x}'=\bar{t}}$. For a linear-combination term $t$, the expression $t\sigma$ denotes the corresponding interpretation according to $\sigma$, and analogously for $b\sigma$ where $b$ is a Boolean expression.
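For intuition about $\phi_{\sigma}$, in the scalar case a linear differential statement has a closed-form solution that an interpreter can evaluate exactly. A hedged sketch (the function name `phi` and the scalar restriction are ours, not the paper's):

```python
import math

# For the scalar linear case x' = a*x + c (with a != 0), the solution with
# initial value x0 is x(t) = (x0 + c/a) * exp(a*t) - c/a. A semantics for
# "x' = a*x + c for d" can therefore evaluate states at any t in [0, d] exactly.
def phi(x0, a, c, t):
    return (x0 + c / a) * math.exp(a * t) - c / a

# e.g. x' = -x + 1 starting at x0 = 0 tends to the equilibrium 1:
print(phi(0.0, -1.0, 1.0, 5.0))   # ≈ 0.9933
```

Systems $\bar{x}' = \bar{t}$ generalise this via the matrix exponential; the point is only that solutions of linear systems are computable in closed form, which is what makes the semantics below runnable.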
|
| 119 |
+
|
| 120 |
+
We now introduce a small-step operational semantics for our language. Intuitively, the semantics establishes a set of rules for reducing a triple $\langle p, \sigma, t \rangle$ of a program, an environment, and a time instant to an environment, via a *finite* sequence of reduction steps. The rules are presented in Figure 2. The terminal configuration $\langle \text{skip}, \sigma, t \rangle$ represents a successful end of a computation, which can then be fed into another computation (via rule (**seq-skip**→)). Contrastingly, $\langle \text{stop}, \sigma, t \rangle$ is a terminating configuration that inhibits the execution of subsequent computations. The latter is reflected in rules (**diff-stop**→) and (**seq-stop**→), which entail that, depending on the chosen time instant, we do not need to evaluate the whole program, but merely a part of it – consequently, infinite while-loops need not yield infinite reduction sequences (as explained in Remark 2). Note that time $t$ is consumed when applying the rules (**diff-stop**→) and (**diff-seq**→) in correspondence to the duration of the differential statement at hand. The rules (**seq**) and (**seq-skip**→) correspond to the standard rules of operational semantics for while-languages over an imperative store [37].
|
| 121 |
+
|
| 122 |
+
*Remark 2.* Putatively infinite while-loops do not necessarily yield infinite reduction sequences. Take for example the while-loop below, whose iterations always have duration 1.
|
| 123 |
+
|
| 124 |
+
$$ x := 0;\ \texttt{while true do}\ \{\, x := x + 1;\ \texttt{wait}\ 1 \,\} \quad (1) $$
|
| 125 |
+
|
| 126 |
+
It yields a finite reduction sequence for the time instant 1/2, as shown below:
|
| 127 |
+
|
| 128 |
+
$$
\begin{aligned}
& x := 0;\ \texttt{while true do}\ \{\, x := x + 1;\ \texttt{wait}\ 1 \,\},\ \sigma,\ 1/2 \rightarrow \\
& \qquad \{\text{by the rules } (\mathbf{asg}\xrightarrow{\phantom{=}}) \text{ and } (\mathbf{seq-skip}\xrightarrow{\phantom{=}})\} \\
& \texttt{while true do}\ \{\, x := x + 1;\ \texttt{wait}\ 1 \,\},\ \sigma\nabla[0/x],\ 1/2 \rightarrow \\
& \qquad \{\text{by the rule } (\mathbf{wh-true}\xrightarrow{\phantom{=}})\}
\end{aligned}
$$
|
| 136 |
+
---PAGE_BREAK---
|
| 137 |
+
|
| 138 |
+
Fig. 2: Small-step Operational Semantics
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
\begin{align*}
|
| 142 |
+
& x := x + 1 ; \textcolor{blue}{wait} 1 ; \textcolor{blue}{while} \textcolor{blue}{true} \textcolor{blue}{do} \{ x := x + 1 ; \textcolor{blue}{wait} 1 \}, \sigma \nabla [0/x] , \frac{1}{2} \rightarrow \\
|
| 143 |
+
& \qquad \{\text{by the rules } (\mathbf{asg}\xrightarrow{\phantom{=}}) \text{ and } (\mathbf{seq-skip}\xrightarrow{\phantom{=}})\} \\
|
| 144 |
+
& \textcolor{blue}{wait} 1 ; \textcolor{blue}{while} \textcolor{blue}{true} \textcolor{blue}{do} \{ x := x + 1 ; \textcolor{blue}{wait} 1 \}, \sigma \nabla [0 + 1/x] , \frac{1}{2} \rightarrow \\
|
| 145 |
+
& \qquad \{\text{by the rules } (\mathbf{diff-stop}\xrightarrow{\phantom{=}}) \text{ and } (\mathbf{seq-stop}\xrightarrow{\phantom{=}})\} \\
|
| 146 |
+
& stop, \sigma \nabla [0 + 1/x] , 0
|
| 147 |
+
\end{align*}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+
The gist is that to evaluate program (1) at time instant $1/2$, one only needs to unfold the underlying loop until surpassing $1/2$ in terms of execution time. Note that if the wait statement is removed from the program then the reduction sequence would not terminate, intuitively because all iterations would be instantaneous and thus the total execution time of the program would never reach $1/2$.
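The unfolding strategy can be mimicked with a toy interpreter. In the sketch below (our illustration, not the paper's rules), programs are modelled as Python generators of assignment and wait steps; the loop is unfolded lazily, and evaluation stops as soon as the remaining time runs out inside a wait:

```python
# Toy version of the loop-unfolding idea: evaluate a program at time instant t,
# unfolding "while true" only until the elapsed duration surpasses t.
def run(steps, sigma, t):
    for kind, *args in steps:
        if kind == 'asg':
            var, f = args
            sigma = {**sigma, var: f(sigma)}      # instantaneous assignment
        else:                                      # ('wait', d)
            (d,) = args
            if t < d:                              # time runs out inside the wait
                return sigma                       # stop: state at instant t
            t -= d                                 # consume d time units, continue
    return sigma

def program_1():
    yield ('asg', 'x', lambda s: 0)               # x := 0
    while True:                                    # while true do { ... }
        yield ('asg', 'x', lambda s: s['x'] + 1)  # x := x + 1
        yield ('wait', 1.0)                        # wait 1

print(run(program_1(), {'x': None}, 0.5))   # → {'x': 1}
```

At $t = 1/2$ the loop body runs once, matching the derivation above; without the wait step, `run` would indeed loop forever, just as the reduction sequence would.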
|
| 151 |
+
|
| 152 |
+
The following theorem entails that our semantics is deterministic, which is instrumental for our implementation.
|
| 154 |
+
|
| 155 |
+
**Theorem 1.** For every program *p*, environment *σ*, and time instant *t* there is at most one applicable reduction rule.
|
| 156 |
+
|
| 157 |
+
Let $\to^*$ be the transitive closure of the reduction relation $\to$ that was previously presented.
|
| 158 |
+
|
| 159 |
+
**Corollary 1.** For every program term p, environments σ, σ', σ'', time instants t, t', t'', and termination flags s, s' ∈ {skip, stop}, if p, σ, t →* s, σ', t' and p, σ, t →* s', σ'', t'', then the equations s = s', σ' = σ'' and t' = t'' must hold.
|
| 160 |
+
|
| 161 |
+
*Proof.* Follows by induction on the number of reduction steps and Theorem 1. □
|
| 162 |
+
|
| 163 |
+
As alluded to above, the operational semantics treats time as a resource. This is formalised below.
|
| 164 |
+
---PAGE_BREAK---
|
| 165 |
+
|
| 166 |
+
**Proposition 1.** For all program terms $p$ and $q$, environments $\sigma$ and $\sigma'$, and time instants $t$, $t'$ and $s$, if $p, \sigma, t \to q, \sigma', t'$ then $p, \sigma, t+s \to q, \sigma', t'+s$; and if $p, \sigma, t \to \text{skip}, \sigma', t'$ then $p, \sigma, t+s \to \text{skip}, \sigma', t'+s$.
|
| 167 |
+
|
| 168 |
+
# 4 Towards Denotational Semantics: The Hybrid Monad
|
| 169 |
+
|
| 170 |
+
A mainstream subsuming paradigm in denotational semantics is due to Moggi [24,25], who proposed to identify a computational effect of interest as a monad, around which the denotational semantics is built using standard generic mechanisms, prominently provided by category theory. In this section we recall the necessary notions and results, motivated by this approach, to prepare the ground for our main constructions in the next section.
|
| 171 |
+
|
| 172 |
+
**Definition 1 (Monad).** A monad $\mathbf{T}$ (on the category of sets and functions) is given by a triple $(T, \eta, (-)^*)$, consisting of an endomap $T$ over the class of all sets, together with a set-indexed class of maps $\eta_X: X \to TX$ and a so-called Kleisli lifting sending each $f: X \to TY$ to $f^*: TX \to TY$ and obeying monad laws: $\eta^* = \text{id}, f^* \cdot \eta = f, (f^* \cdot g)^* = f^* \cdot g^*$ (it follows from this definition that $T$ extends to a functor and $\eta$ to a natural transformation).
|
| 173 |
+
|
| 174 |
+
A monad morphism $\theta: \mathbf{T} \to \mathbf{S}$ from $(T, \eta^{\mathbf{T}}, (-)^{*\mathbf{T}})$ to $(S, \eta^{\mathbf{S}}, (-)^{*\mathbf{S}})$ is a natural transformation $\theta: T \to S$ such that $\theta \cdot \eta^{\mathbf{T}} = \eta^{\mathbf{S}}$ and $\theta \cdot f^{*\mathbf{T}} = (\theta \cdot f)^{*\mathbf{S}} \cdot \theta$.
|
| 175 |
+
|
| 176 |
+
We will continue to use bold capitals (e.g. **T**) for monads over the corresponding endofunctors written as capital Romans (e.g. $T$).
|
| 177 |
+
|
| 178 |
+
In order to interpret while-loops one needs additional structure on the monad.
|
| 179 |
+
|
| 180 |
+
**Definition 2 (Elgot Monad).** A monad $\mathbf{T}$ is called Elgot if it is equipped with an iteration operator $(-)^{\dagger}$ that sends each $f: X \to T(Y \amalg X)$ to $f^{\dagger}: X \to TY$ in such a way that certain established axioms of iteration are satisfied [2,16].
|
| 181 |
+
|
| 182 |
+
Monad morphisms between Elgot monads are additionally required to preserve iteration: $\theta \cdot f^{\dagger\mathbf{T}} = (\theta \cdot f)^{\dagger\mathbf{S}}$ for $\theta: \mathbf{T} \to \mathbf{S}$, $f: X \to T(Y \amalg X)$.
|
| 183 |
+
|
| 184 |
+
For a monad $\mathbf{T}$, a map $f: X \to TY$, called a Kleisli map, is roughly to be regarded as a semantics of a program $p$, with $X$ as the semantics of the input, and $Y$ as the semantics of the output. For example, with $T$ being the maybe monad $(-) \amalg \{\perp\}$, we obtain semantics of programs as partial functions. Let us record this example in more detail for further reference.
*Example 1 (Maybe Monad M).* The maybe monad is determined by the following data: $MX = X + \{\perp\}$, the unit is the left injection $\mathrm{inl}: X \to X + \{\perp\}$, and given $f: X \to Y + \{\perp\}$, $f^*$ is the copairing $[f, \mathrm{inr}]: X + \{\perp\} \to Y + \{\perp\}$.
It follows by general considerations (enrichment of the category of Kleisli maps over complete partial orders) that $\mathbf{M}$ is an Elgot monad with the following iteration operator $(-)^{\sharp}$: given $f: X \to (Y + X) + \{\perp\}$ and $x_0 \in X$, let $x_0, x_1, \dots$ be the longest (finite or infinite) sequence over $X$ constructed inductively in such a way that $f(x_i) = \mathrm{inl}(\mathrm{inr}\, x_{i+1})$. Now, $f^{\sharp}(x_0) = \mathrm{inr} \perp$ if the sequence is infinite or $f(x_i) = \mathrm{inr} \perp$ for some $i$, and $f^{\sharp}(x_0) = \mathrm{inl}\, y$ if for the last element $x_n$ of the sequence, which must exist, $f(x_n) = \mathrm{inl}\,\mathrm{inl}\, y$.
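The unfolding just described can be sketched concretely. Below is a minimal Python model of the operator $(-)^\sharp$, with our own encoding: `None` stands for $\mathrm{inr}\perp$, `('out', y)` for $\mathrm{inl}(\mathrm{inl}\, y)$ and `('again', x)` for $\mathrm{inl}(\mathrm{inr}\, x)$; the `fuel` bound is an artefact of the sketch, standing in for genuinely infinite sequences.

```python
BOTTOM = None  # inr ⊥ : divergence

def iterate_maybe(f, x0, fuel=10_000):
    """Iteration operator (-)^# of the maybe monad, as in Example 1:
    unfold f from x0 until it emits a final value ('out', y); return
    BOTTOM if f itself returns BOTTOM, or if the sequence exceeds `fuel`
    (our stand-in for semantically infinite sequences)."""
    x = x0
    for _ in range(fuel):
        r = f(x)
        if r is BOTTOM:          # f(x_i) = inr ⊥
            return BOTTOM
        tag, v = r
        if tag == 'out':         # f(x_n) = inl (inl y)
            return v
        x = v                    # f(x_i) = inl (inr x_{i+1})
    return BOTTOM                # infinite sequence: f^#(x0) = inr ⊥

# a countdown loop: decrement until zero, then output the string 'done'
step = lambda n: ('out', 'done') if n == 0 else ('again', n - 1)
```

For instance, `iterate_maybe(step, 5)` terminates with `'done'`, while a step function that keeps incrementing never leaves the loop and yields `BOTTOM`.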
Other examples of Elgot monads can be found, e.g., in [16].
The computational effect of *hybridness* can also be captured by a monad, called the *hybrid monad* [12,14], which we recall next (in a slightly different but equivalent form). To that end, we also need to recall *Minkowski addition* for subsets of the set $\bar{\mathbb{R}}_+$ of extended non-negative reals (see Section 2): $A + B = \{a + b \mid a \in A, b \in B\}$, e.g. $[a, b] + [c, d] = [a + c, b + d]$ and $[a, b] + [c, d) = [a + c, b + d)$.
**Definition 3 (Hybrid Monad H).** The hybrid monad $\mathbf{H}$ is defined as follows.

- $HX = \sum_{d \in \mathbb{R}_+} X^{[0,d]} \uplus \sum_{I \subseteq \mathbb{R}_+\ \text{down-closed}} X^{I}$, i.e. it is a set of trajectories valued in $X$ and with the domain down-closed. For any $p = \mathrm{inj}\langle I, e \rangle \in HX$ with $\mathrm{inj} \in \{\mathrm{inl}, \mathrm{inr}\}$, let us use the notation $p_d = I$, $p_e = e$, the former being the duration of the trajectory and the latter the trajectory itself. Let also $\varepsilon = \langle \emptyset, ! \rangle$.

- $\eta(x) = \mathrm{inl}\langle [0,0], \lambda t.\, x \rangle$, i.e. $\eta(x)$ is a trajectory of duration $0$ that returns $x$.

- given $f: X \to HY$, we define $f^*: HX \to HY$ via the following clauses:

$$
\begin{align*}
f^*(\mathrm{inl}\langle I, e \rangle) &= \mathrm{inj}\langle I + J,\ \lambda t.\, (f(e^t))_e^0 \triangleleft t < d \triangleright (f(e^d))_e^{t-d} \rangle \\
&\qquad \text{if } I' = I = [0, d] \text{ for some } d,\ f(e^d) = \mathrm{inj}\langle J, e' \rangle \\
f^*(\mathrm{inl}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t.\, (f(e^t))_e^0 \rangle \qquad \text{if } I' \neq I \\
f^*(\mathrm{inr}\langle I, e \rangle) &= \mathrm{inr}\langle I', \lambda t.\, (f(e^t))_e^0 \rangle
\end{align*}
$$

where $I' = \bigcup \{[0,t] \subseteq I \mid \forall s \in [0,t].\, f(e^s) \neq \mathrm{inr}\, \varepsilon\}$ and $\mathrm{inj} \in \{\mathrm{inl}, \mathrm{inr}\}$.
The definition of the hybrid monad $\mathbf{H}$ is somewhat intricate, so let us complement it with some explanations (details and further intuitions about the hybrid monad can also be found in [12]). The set $HX$ comprises three types of trajectories, representing different kinds of hybrid computation:
- (closed) convergent: $\mathrm{inl}\langle[0,d],e\rangle \in HX$ (e.g. instant termination $\eta(x)$);

- open divergent: $\mathrm{inr}\langle[0,d),e\rangle \in HX$ (e.g. instant divergence $\mathrm{inr}\,\varepsilon$, or a trajectory $[0,\infty) \to X$, which represents a computation that runs ad infinitum);

- closed divergent: $\mathrm{inr}\langle[0,d],e\rangle \in HX$ (representing computations that start to diverge precisely after the time instant $d$).
The Kleisli lifting $f^*$ works as follows: for a given trajectory $\mathrm{inj}\langle I, e \rangle$, we first calculate the largest interval $I' \subseteq I$ on which the trajectory $\lambda t \in I'.\, f(e^t)$ does not instantly diverge (i.e. $f(e^t) \neq \mathrm{inr}\, \varepsilon$) throughout; hence $I'$ is either $[0, d']$ or $[0, d')$ for some $d'$. Now, the first clause in the definition of $f^*$ corresponds to the successful composition scenario: the argument trajectory $\langle I, e \rangle$ is convergent, and composing $f$ with $e$ as described in the definition of $I'$ does not yield divergence anywhere on $I$. In that case, we essentially concatenate $\langle I, e \rangle$ with $f(e^d)$, the latter being the trajectory computed by $f$ at the last point of $e$. The remaining two clauses correspond to various flavours of divergence, including divergence of the input ($\mathrm{inr}\langle I, e\rangle$) and divergences occurring along $f \cdot e$. Incidentally, this explains how closed divergent trajectories may arise: if $I' = [0, d']$ and $d'$ is properly smaller than $d$, then we diverge precisely *after* $d'$, which is possible e.g. if the program behind $f$ continuously checks a condition which did not fail up until $d'$.
# 5 Deconstructing the Hybrid Monad
As mentioned in the introduction, in [14] we used $\mathbf{H}$ for giving semantics to a functional language HYBCORE, whose programs are interpreted as morphisms of type $X \to HY$. Here, we are dealing with an imperative language, which from a semantic point of view amounts to fixing a type of states $S$, shared between all programs; the semantics of a program is thus restricted to morphisms of type $S \to HS$. As explained next, this allows us to make do with a simpler monad $\mathbf{H}_S$, globally parametrized by $S$. The new monad $\mathbf{H}_S$ has the property that $H_S S$ is naturally isomorphic to $HS$. Apart from being simpler than $\mathbf{H}$, the new monad enjoys further benefits: $\mathbf{H}_S$ is mathematically a better behaved structure; e.g., in contrast to $\mathbf{H}$, Elgot iteration on $\mathbf{H}_S$ is constructed as a least fixed point. Factoring the denotational semantics through $\mathbf{H}_S$ thus allows us to bridge the gap to the operational semantics given in Section 3, and facilitates the soundness and adequacy proof in the forthcoming Section 6.
In order to define $H_S$, it is convenient to take a slightly broader perspective. We will also need to make a detour through the topic of ordered monoid modules with certain completeness properties so that we can characterise iteration on $H_S$ as a least fixed point.
**Definition 4 (Monoid Module, Generalized Writer Monad [14]).** Given a (not necessarily commutative) monoid $(\mathbb{M}, +, 0)$, a monoid module is a set $\mathbb{E}$ equipped with a map $\triangleright: \mathbb{M} \times \mathbb{E} \to \mathbb{E}$ (monoid action), subject to the laws $0 \triangleright e = e$, $(m+n) \triangleright e = m \triangleright (n \triangleright e)$.
Every monoid-module pair $(\mathbb{M}, \mathbb{E})$ induces a generalized writer monad $\mathbf{T} = (T, \eta, (-)^*)$ with $TX = \mathbb{M} \times X \uplus \mathbb{E}$, $\eta_X(x) = \langle 0, x \rangle$, and

$$f^*(m, x) = \langle m + n, y \rangle \quad \text{where} \quad m \in \mathbb{M},\ x \in X,\ f(x) = \langle n, y \rangle \in \mathbb{M} \times Y$$

$$f^*(m, x) = m \triangleright e \quad \text{where} \quad m \in \mathbb{M},\ x \in X,\ f(x) = e \in \mathbb{E}$$

$$f^*(e) = e \quad \text{where} \quad e \in \mathbb{E}$$

This generalizes the writer monad ($\mathbb{E} = \emptyset$) and the exception monad ($\mathbb{M} = 1$).
*Example 2.* A simple motivating example of a monoid-module pair $(\mathbb{M}, \mathbb{E})$ is the pair $(\mathbb{R}_+, \mathbb{R}_+)$ where the monoid operation is addition with 0 as the unit and the monoid action is also addition.
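Instantiated at the pair $(\mathbb{R}_+, \mathbb{R}_+)$ of Example 2, the Kleisli lifting of Definition 4 can be sketched in a few lines of Python; the tuple encodings `('val', m, x)` for $\mathbb{M} \times X$ and `('eff', e)` for $\mathbb{E}$ are our own.

```python
# Generalized writer monad for the monoid-module pair (M, E) = (R+, R+)
# of Example 2: the action of M on E is addition; an 'eff' value absorbs
# the rest of the computation, as in the exception monad.

def unit(x):
    return ('val', 0.0, x)            # eta(x) = <0, x>

def lift(f):
    """Kleisli lifting (-)* of Definition 4."""
    def fstar(t):
        if t[0] == 'eff':             # f*(e) = e
            return t
        _, m, x = t
        r = f(x)
        if r[0] == 'eff':             # f*(m, x) = m |> e   (action = +)
            return ('eff', m + r[1])
        _, n, y = r
        return ('val', m + n, y)      # f*(m, x) = <m + n, y>
    return fstar
```

A Kleisli map that "waits 1.5 time units and increments" composes as expected, and the monad law $\eta^* = \mathrm{id}$ can be spot-checked directly with `lift(unit)`.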
More specifically, we are interested in ordered monoids and (conservatively) complete monoid modules. These are defined as follows.
**Definition 5 (Ordered Monoids, (Conservatively) Complete Monoid Modules [7]).** We call a monoid $(\mathbb{M}, +, 0)$ an ordered monoid if it is equipped with a partial order $\leq$, such that $0$ is the least element of this order and $+$ is right-monotone (but not necessarily left-monotone).
An ordered $\mathbb{M}$-module w.r.t. an ordered monoid $(\mathbb{M}, +, 0, \leq)$ is an $\mathbb{M}$-module $(\mathbb{E}, \triangleright)$ together with a partial order $\sqsubseteq$ and a least element $\perp$, such that $\triangleright$ is monotone on the right and $(- \triangleright \perp)$ is monotone, i.e.

$$
\overline{\perp \sqsubseteq x} \qquad \frac{x \sqsubseteq y}{a \triangleright x \sqsubseteq a \triangleright y} \qquad \frac{a \le b}{a \triangleright \perp \sqsubseteq b \triangleright \perp}
$$

We call the last property restricted left monotonicity.
An ordered $\mathbb{M}$-module is ($\omega$-)complete if for every $\omega$-chain $s_1 \sqsubseteq s_2 \sqsubseteq \dots$ in $\mathbb{E}$ there is a least upper bound $\bigsqcup_i s_i$ and $\triangleright$ is continuous on the right, i.e.

$$
\overline{\forall i.\, s_i \sqsubseteq \bigsqcup_i s_i} \qquad \frac{\forall i.\, s_i \sqsubseteq x}{\bigsqcup_i s_i \sqsubseteq x} \qquad \overline{a \triangleright \bigsqcup_i s_i \sqsubseteq \bigsqcup_i a \triangleright s_i}
$$

(the law $\bigsqcup_i a \triangleright s_i \sqsubseteq a \triangleright \bigsqcup_i s_i$ is derivable). Such an $\mathbb{M}$-module is conservatively complete if additionally for every $\omega$-chain $a_1 \le a_2 \le \dots$ in $\mathbb{M}$, such that the least upper bound $\bigvee_i a_i$ exists, $(\bigvee_i a_i) \triangleright \perp = \bigsqcup_i a_i \triangleright \perp$.
A homomorphism $h: \mathbb{E} \to \mathbb{F}$ of (conservatively) complete monoid $\mathbb{M}$-modules is required to be monotone and structure-preserving in the following sense: $h(\perp) = \perp$, $h(a \triangleright x) = a \triangleright h(x)$, $h(\bigsqcup_i x_i) = \bigsqcup_i h(x_i)$.
The completeness requirement for $\mathbb{M}$-modules has a standard motivation coming from domain theory, where $\sqsubseteq$ is regarded as an *information order* and completeness is needed to ensure that the relevant semantic domain can accommodate infinite behaviours. The conservativity requirement additionally ensures that the least upper bounds that exist in $\mathbb{M}$ agree with those in $\mathbb{E}$. Our main example is as follows (we will use it for building $\mathbf{H}_S$ and its iteration operator).
**Definition 6 (Monoid Module of Trajectories).** The ordered monoid of finite open trajectories $(\mathrm{Trj}_S, \frown, \langle\emptyset, !\rangle, \leqslant)$ over a given set $S$ is defined as follows: $\mathrm{Trj}_S = \sum_{d \in \mathbb{R}_+} S^{[0,d)}$; the unit is the empty trajectory $\varepsilon = \langle\emptyset, !\rangle$; the monoid operation is concatenation of trajectories $\frown$, defined as follows:

$$
\langle[0, d_1), e_1\rangle \frown \langle[0, d_2), e_2\rangle = \langle[0, d_1 + d_2),\ \lambda t.\, e_1^t \triangleleft t < d_1 \triangleright e_2^{t-d_1}\rangle.
$$

The relation $\leqslant$ is defined as follows: $\langle[0, d_1), e_1\rangle \leqslant \langle[0, d_2), e_2\rangle$ if $d_1 \leqslant d_2$ and $e_1^t = e_2^t$ for every $t \in [0, d_1)$. We can additionally consider both sets $\sum_{d \in \bar{\mathbb{R}}_+} S^{[0,d)}$ and $\sum_{I \subseteq \mathbb{R}_+\ \text{down-closed}} S^I$ as $\mathrm{Trj}_S$-modules, by defining the monoid action $\triangleright$ also as concatenation of trajectories and by equipping these sets with the order $\sqsubseteq$: $\langle I_1, e_1\rangle \sqsubseteq \langle I_2, e_2\rangle$ if $I_1 \subseteq I_2$ and $e_1^t = e_2^t$ for all $t \in I_1$.
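Concatenation and the prefix order of Definition 6 are easy to model concretely. The following Python sketch (our own encoding) represents a finite open trajectory $\langle[0,d), e\rangle$ as a pair `(d, e)`; since equality of functions is not decidable, the pointwise condition in the order is only spot-checked at sample points.

```python
# Finite open trajectories of Definition 6: (d, e) stands for <[0, d), e>,
# with e a Python function defined on [0, d).

EMPTY = (0.0, lambda t: None)   # the unit: the empty trajectory ε = <∅, !>

def concat(p, q):
    """Concatenation of trajectories (the monoid operation)."""
    (d1, e1), (d2, e2) = p, q
    return (d1 + d2, lambda t: e1(t) if t < d1 else e2(t - d1))

def leq(p, q, samples):
    """The prefix order: p is a prefix of q (checked at sample points)."""
    (d1, e1), (d2, e2) = p, q
    return d1 <= d2 and all(e1(t) == e2(t) for t in samples if t < d1)
```

The unit laws of the monoid, and the fact that a trajectory is below any of its extensions, can be checked on samples.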
Consider the following functors:
$$
H'_S X = \sum_{d \in \mathbb{R}_+} S^{[0,d)} \times X \;\uplus\; \sum_{d \in \bar{\mathbb{R}}_+} S^{[0,d)} \tag{2}
$$

$$
H_S X = \sum_{d \in \mathbb{R}_+} S^{[0,d)} \times X \;\uplus\; \sum_{I \subseteq \mathbb{R}_+\ \text{down-closed}} S^{I} \tag{3}
$$
Both of them extend to monads $\mathbf{H}'_S$ and $\mathbf{H}_S$, as they are instances of Definition 4. Moreover, it is laborious but straightforward to prove that both $H'_S X$ and $H_S X$ are conservatively complete $\mathrm{Trj}_S$-modules on $X$ [7], i.e. conservatively complete $\mathrm{Trj}_S$-modules equipped with distinguished maps $\eta: X \to H'_S X$, $\eta: X \to H_S X$. In each case $\eta$ sends $x \in X$ to $\langle \varepsilon, x \rangle$. The partial order on $H'_S X$ (which we will use for obtaining the least upper bound of a certain sequence of approximations) is given by the clauses below and relies on the previous order $\le$ on trajectories:
$$
\frac{\langle I, e \rangle \le \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e', x \rangle}
\qquad
\frac{\langle I, e \rangle \le \langle I', e' \rangle}{\langle I, e \rangle \sqsubseteq \langle I', e' \rangle}
$$
The monad given by (2) admits a sharp characterization, which is an instance of a general result [7]. In more detail:

**Proposition 2.** The pair $(H'_S X, \eta)$ is a free conservatively complete $\mathrm{Trj}_S$-module on $X$, i.e. for every conservatively complete $\mathrm{Trj}_S$-module $\mathbb{E}$ and every map $f: X \to \mathbb{E}$, there is a unique homomorphism $\hat{f}: H'_S X \to \mathbb{E}$ such that $\hat{f} \cdot \eta = f$.
Intuitively, Proposition 2 ensures that $H'_S X$ is the least conservatively complete $\mathrm{Trj}_S$-module generated by $X$. This characterization entails a construction of an iteration operator on $\mathbf{H}'_S$ as a least fixed point. This, in fact, also transfers to $\mathbf{H}_S$ (as detailed in the proof of the following theorem).
**Theorem 2.** Both $\mathbf{H}'_S$ and $\mathbf{H}_S$ are Elgot monads, for which $f^\dagger$ is computed as the least fixed point of the $\omega$-continuous endomap $g \mapsto [\eta,g]^* \cdot f$ over the function spaces $X \to H'_S Y$ and $X \to H_S Y$ respectively.
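The least-fixed-point construction of Theorem 2 can be made concrete in a deliberately simplified model: take the generalized writer monad over the pair $(\mathbb{R}_+, \mathbb{R}_+)$ of Example 2 (trajectories collapsed to their durations) and compute the Kleene chain $g_0 = \perp$, $g_{n+1} = [\eta, g_n]^* \cdot f$. The encodings below are our own; `('eff', d)` reads "diverges after time $d$", so $\perp$ is `('eff', 0.0)`.

```python
BOT = ('eff', 0.0)   # least element: immediate divergence

def case_step(g):
    """[eta, g]* applied pointwise: one unfolding of the loop body.
    Values: ('eff', d) diverges after time d; ('val', m, ('out', y))
    exits with y after time m; ('val', m, ('again', x)) continues
    from x after time m."""
    def h(r):
        if r[0] == 'eff':
            return r
        _, m, (tag, v) = r
        if tag == 'out':                    # eta branch of [eta, g]
            return ('val', m, v)
        rec = g(v)                          # g branch: recurse
        if rec[0] == 'eff':
            return ('eff', m + rec[1])
        return ('val', m + rec[1], rec[2])
    return h

def approx(f, n):
    """n-th Kleene approximant of the iteration f-dagger, from BOT."""
    g = lambda x: BOT
    for _ in range(n):
        g = (lambda gg: lambda x: case_step(gg)(f(x)))(g)
    return g

# countdown: one time unit per step, exiting with 'done' at zero
f = lambda k: ('val', 1.0, ('out', 'done') if k == 0 else ('again', k - 1))
```

Low approximants report divergence with only a partial duration, and the chain stabilizes at the true semantics once the loop has been unfolded often enough.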
In this section's remainder, we formally connect the monad $\mathbf{H}_S$ with the monad $\mathbf{H}$, the latter introduced in our previous work and used for providing a semantics to the functional language HYBCORE. In the following section we provide a semantics for the current imperative language via the monad $\mathbf{H}_S$. Specifically, in this section we will show how to build $\mathbf{H}$ from $\mathbf{H}_S$ by considering additional semantic ingredients on top of the latter.
Let us subsequently write $\eta^S$, $(-)^{\star}_S$ and $(-)^{\dagger}_S$ for the unit, the Kleisli lifting and the Elgot iteration of $\mathbf{H}_S$. Note that $S, X \mapsto H_S X$ is a parametrized monad in the sense of Uustalu [35]; in particular, $H_S X$ is functorial in $S$ and for every $f: S \to S'$, $H_f: \mathbf{H}_S \to \mathbf{H}_{S'}$ is a monad morphism.
Then we introduce the following technical natural transformations $\iota: H_S X \to X + (S + \{\perp\})$ and $\tau: H_{S + Y} X \to H_S X$. First, let us define $\iota$:
$$
\iota(I, e, x) = \begin{cases} \mathrm{inr}\,\mathrm{inl}\, e^0, & \text{if } I \neq \emptyset \\ \mathrm{inl}\, x, & \text{otherwise} \end{cases} \qquad \iota(I, e) = \begin{cases} \mathrm{inr}\,\mathrm{inl}\, e^0, & \text{if } I \neq \emptyset \\ \mathrm{inr}\,\mathrm{inr} \perp, & \text{otherwise} \end{cases}
$$

In words: $\iota$ returns the initial point for trajectories of non-zero length, and otherwise returns either an accompanying value from $X$ or $\perp$, depending on whether the given trajectory is convergent or divergent. The functor $(-) + E$ for every $E$ extends to a monad, called the *exception monad*. The following is easy to show for $\iota$.
**Lemma 1.** For every $S$, $\iota: H_S \to (-) + (S + \{\perp\})$ is a monad morphism.
Next we define $\tau : H_{S + Y} X \to H_S X$:

$$
\tau(I, e, x) = \begin{cases} \langle I, e', x \rangle, & \text{if } I = I' \\ \langle I', e' \rangle, & \text{otherwise} \end{cases} \qquad \tau(I, e) = \langle I', e' \rangle
$$

where $\langle I', e' \rangle$ is the largest trajectory such that for all $t \in I'$, $e^t = \mathrm{inl}\, e'^t$.
$$
\begin{align*}
[\mathbf{x} := \mathbf{t}](\sigma) &= \eta(\sigma[\mathbf{t}\sigma/\mathbf{x}]) \\
[\bar{\mathbf{x}}' = \bar{u} \text{ for } \mathbf{t}](\sigma) &= \langle [0, \mathbf{t}\sigma),\ \lambda t.\, \sigma[\phi_{\sigma}(t)/\bar{\mathbf{x}}],\ \sigma[\phi_{\sigma}(\mathbf{t}\sigma)/\bar{\mathbf{x}}] \rangle \\
[\mathbf{p}; \mathbf{q}](\sigma) &= [\mathbf{q}]^\star([\mathbf{p}](\sigma)) \\
[\texttt{if } \mathbf{b} \texttt{ then } \mathbf{p} \texttt{ else } \mathbf{q}](\sigma) &= [\mathbf{p}](\sigma) \triangleleft \mathbf{b}\sigma \triangleright [\mathbf{q}](\sigma) \\
[\texttt{while } \mathbf{b} \texttt{ do } \{\mathbf{p}\}](\sigma) &= (\lambda \sigma .\, (\hat{H}\, \mathrm{inr})([\mathbf{p}](\sigma)) \triangleleft \mathbf{b}\sigma \triangleright \eta(\mathrm{inl}\, \sigma))^\dagger(\sigma)
\end{align*}
$$

Fig. 3: Denotational semantics.
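To make the clauses of Fig. 3 concrete, here is a small Python interpreter over a deliberately simplified model in which trajectories are collapsed to their durations: results are `('val', d, σ')` for convergence after time `d`, or `('div', d)` for divergence. `('wait', d)` stands in for the differential statement, and the AST encodings and the `fuel` bound are our own simplifying assumptions.

```python
import math

DIV = 'div'  # divergent computation, tagged with elapsed time

def run(p, sigma, fuel=10_000):
    """Interpreter mirroring the clauses of Fig. 3, with trajectories
    collapsed to their durations (only total elapsed time is tracked)."""
    tag = p[0]
    if tag == 'assign':                       # eta(sigma[t sigma / x])
        _, x, t = p
        s = dict(sigma); s[x] = t(sigma)
        return ('val', 0.0, s)
    if tag == 'wait':                         # stand-in for x' = u for t
        return ('val', p[1], sigma)
    if tag == 'seq':                          # Kleisli composition
        r = run(p[1], sigma, fuel)
        if r[0] == DIV:
            return r
        _, d, s = r
        r2 = run(p[2], s, fuel)
        if r2[0] == DIV:
            return (DIV, d + r2[1])
        return ('val', d + r2[1], r2[2])
    if tag == 'if':                           # p <| b sigma |> q
        _, b, q1, q2 = p
        return run(q1 if b(sigma) else q2, sigma, fuel)
    if tag == 'while':                        # iteration, unfolded
        _, b, body = p
        d, s = 0.0, sigma
        for _ in range(fuel):
            if not b(s):
                return ('val', d, s)
            r = run(body, s, fuel)
            if r[0] == DIV:
                return (DIV, d + r[1])
            d, s = d + r[1], r[2]
        return (DIV, math.inf)                # loop unfolded past fuel
    raise ValueError(tag)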
**Lemma 2.** For all $S$ and $Y$, $\tau: H_{S + Y} \to H_S$ is a monad morphism.
We now arrive at the main result of this section.
**Theorem 3.** The correspondence $S \mapsto H_S S$ extends to an Elgot monad as follows:

$$
\begin{align*}
\eta(x \in S) &= \eta^S(x), \\
(f: X \to H_S S)^* &= \bigl(H_X X \xrightarrow{\ H_{\iota' \cdot f}\ } H_{S + \{\perp\}} X \xrightarrow{\ \tau\ } H_S X \xrightarrow{\ f^\star_S\ } H_S S\bigr), \\
(f: X \to H_{S + X}(S + X))^{\dagger} &= \bigl(X \xrightarrow{\ f^{\dagger}_{S + X}\ } H_{S + X} S \xrightarrow{\ H_{[\mathrm{inl},\,(\iota' \cdot f)^\sharp]}\ } H_{S + \{\perp\}} S \xrightarrow{\ \tau\ } H_S S\bigr).
\end{align*}
$$

where $\iota' = [\mathrm{inl}, \mathrm{id}] \cdot \iota : H_S S \to S + \{\perp\}$ and $(-)^\sharp : (X \to (S + X) + \{\perp\}) \to (X \to S + \{\perp\})$ is the iteration operator of the maybe monad $(-) + \{\perp\}$ (as in Example 1). Moreover, the monad thus defined is isomorphic to $\mathbf{H}$.
*Proof (sketch).* It is first verified that the monad axioms are satisfied, using abstract properties of $\iota$ and $\tau$, mainly provided by Lemmas 1 and 2. Then the isomorphism $\theta: H_S S \cong HS$ is defined as expected: $\theta([0, d), e, x) = \mathrm{inl}\langle[0, d], \hat{e}\rangle$ where $\hat{e}^t = e^t$ for $t \in [0, d)$ and $\hat{e}^d = x$; and $\theta(I, e) = \mathrm{inr}\langle I, e\rangle$. It is easy to see that $\theta$ respects the unit. The fact that $\theta$ respects Kleisli lifting amounts to a (tedious) verification by case distinction. Checking the formula for $(-)^\dagger$ amounts to transferring the definition of $(-)^\dagger$, as given in previous work [13], along $\theta$. See the full proof in [15]. □
# 6 Soundness and Adequacy

Let us start this section by providing a denotational semantics to our language using the results of the previous section. We will then provide a soundness and adequacy result that formally connects the thus established denotational semantics with the operational semantics presented in Section 3.
First, consider the monad in (3) and fix $S = \mathbb{R}^{\mathcal{X}}$. We denote the obtained instance of $H_S$ by $\hat{H}$. Intuitively, we interpret a program $p$ as a map $[\mathbf{p}] : S \to \hat{H}S$ which, given an environment (a map from variables to values), returns a trajectory over $S$. The definition of $[\mathbf{p}]$ is inductive over the structure of $p$ and is given in Figure 3.
In order to establish soundness and adequacy between the small-step operational semantics and the denotational semantics, we will use an auxiliary device. Namely, we will introduce a *big-step* operational semantics that will serve as a midpoint between the two previously introduced semantics. We will show that the small-step semantics is equivalent to the big-step one and then establish soundness and adequacy between the big-step semantics and the denotational one. The desired result then follows by transitivity. The big-step rules are presented in Figure 4 and follow the same reasoning as the small-step ones. The expression $p, \sigma, t \Downarrow r, \sigma'$ means that $p$ paired with $\sigma$ evaluates to $r, \sigma'$ at time instant $t$.
Fig. 4: Big-step Operational Semantics
Next, we need the following result to formally connect both styles of operational semantics.
**Lemma 3.** *Given a program $p$, an environment $\sigma$ and a time instant $t$:*

1. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow \mathit{skip}, \sigma''$ then $p, \sigma, t \Downarrow \mathit{skip}, \sigma''$;

2. if $p, \sigma, t \rightarrow p', \sigma', t'$ and $p', \sigma', t' \Downarrow \mathit{stop}, \sigma''$ then $p, \sigma, t \Downarrow \mathit{stop}, \sigma''$.

*Proof.* The proof follows by induction over the derivation of the small-step relation. □
**Theorem 4.** *The small-step semantics and the big-step semantics are related as follows. Given a program $p$, an environment $\sigma$ and a time instant $t$:*
1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{skip}, \sigma', 0$;

2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma' \text{ iff } p, \sigma, t \to^\star \mathit{stop}, \sigma', 0.$

*Proof.* The right-to-left direction is obtained by induction over the length of the small-step reduction sequence using Lemma 3. The left-to-right direction follows by induction over the proof of the big-step judgement using Proposition 1. $\square$
Finally, we can connect the operational and the denotational semantics in the expected way.
**Theorem 5 (Soundness and Adequacy).** *Given a program $p$, an environment $\sigma$ and a time instant $t$:*
1. $p, \sigma, t \to^* \mathit{skip}, \sigma', 0 \text{ iff } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^\mathcal{X}, \sigma');$

2. $p, \sigma, t \to^* \mathit{stop}, \sigma', 0 \text{ iff either } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'') \text{ or } [\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \text{ and in either case with } t' > t \text{ and } h(t) = \sigma'.$
Here, “soundness” corresponds to the left-to-right directions of the equivalences and “adequacy” to the right-to-left ones.
*Proof.* By Theorem 4, we equivalently replace the goal as follows:

1. $p, \sigma, t \Downarrow \mathit{skip}, \sigma' \text{ iff } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t) \to \mathbb{R}^{\mathcal{X}}, \sigma');$

2. $p, \sigma, t \Downarrow \mathit{stop}, \sigma' \text{ iff either } [\mathbf{p}](\sigma) = (\mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \sigma'') \text{ or } [\mathbf{p}](\sigma) = \mathbf{h}: [0, t') \to \mathbb{R}^{\mathcal{X}}, \text{ and in either case with } t' > t \text{ and } h(t) = \sigma'.$
Then the “soundness” direction is obtained by induction over the derivation of the rules in Fig. 4. The “adequacy” direction follows by structural induction over $p$; for while-loops, we invoke the fixpoint law $[\eta, f^\dagger]^* \cdot f = f^\dagger$ of Elgot monads. $\square$
# 7 Implementation
This section presents our prototype implementation, LINCE, which is available online, both to run on our servers and to be compiled and executed locally (http://arcatools.org/lince). Its architecture is depicted in Figure 5. The dashed rectangles correspond to its main components. The one on the left (Core engine) provides the parser for the while-language and the engine to evaluate hybrid programs using the small-step operational semantics of Section 3. The one on the right (Inspector) depicts trajectories produced by hybrid programs according to parameters specified by the user and provides an interface to evaluate hybrid programs at specific time instants (the initial environment $\sigma: \mathcal{X} \to \mathbb{R}$ is assumed to be the function constant on zero). As already mentioned, plots are generated by automatically evaluating the input program at different time instants. Incoming arrows in the figure denote an input relation and outgoing arrows denote an output relation. The two main components are further explained below.
**Core engine.** Our implementation extensively uses the computer algebra tool SAGEMATH [31]. This serves two purposes: (1) to solve systems of differential equations (present in hybrid programs); and (2) to correctly evaluate if-then-else statements. Regarding the latter, note that we do not merely use predicate functions of programming languages for evaluating Boolean conditions, essentially because such functions tend to give wrong results in the presence of real numbers (due to the finite precision problem). Instead, LINCE uses SAGEMATH and its ability to perform advanced symbolic manipulation to check whether a Boolean condition is true or not. However, note that this will not always produce an output, fundamentally because solutions of linear differential equations involve transcendental numbers and real-number arithmetic with such numbers is undecidable [20]. We leave as future work the development of more sophisticated techniques for avoiding errors in the computational evaluation of hybrid programs.

Fig. 5: Depiction of LINCE's architecture
**Inspector.** The user interacts with LINCE at two different stages: (a) when inputting a hybrid program and (b) when inspecting trajectories using LINCE's output interfaces. The latter case consists of adjusting different parameters for observing the generated plots in an optimal way.
**Event-triggered programs.** Observe that the differential statements $\mathbf{x}_1' = \mathbf{u}_1, \dots, \mathbf{x}_n' = \mathbf{u}_n$ for $\mathbf{t}$ are *time-triggered*: they terminate precisely when the time instant $\mathbf{t}$ is reached. In the area of hybrid systems it is also usual to consider *event-triggered* programs: those that terminate *as soon as* a specified condition $\psi$ becomes true [38,6,11]. So we next consider atomic programs of the type $\mathbf{x}_1' = \mathbf{u}_1, \dots, \mathbf{x}_n' = \mathbf{u}_n$ until $\psi$, where $\psi$ is an element of the free Boolean algebra generated by $\mathbf{t} \le \mathbf{s}$ and $\mathbf{t} \ge \mathbf{s}$ with $\mathbf{t}, \mathbf{s} \in \mathrm{LTerm}(\mathcal{X})$, signalling the termination of the program. In general, it is impossible to determine with *exact* precision when such programs terminate (again due to the undecidability of real-number arithmetic with transcendental numbers). A natural option is to tackle this problem by checking the condition $\psi$ periodically, which essentially reduces event-triggered programs to time-triggered ones. The cost is that the evaluation of a program might greatly diverge from the nominal behaviour, as discussed for instance in [4,6], where an analogous approach is described for the well-established simulation tools SIMULINK and MODELICA. In our case, we allow programs of the form $\mathbf{x}_1' = \mathbf{u}_1, \dots, \mathbf{x}_n' = \mathbf{u}_n$ until$_\epsilon$ $\psi$ in the tool and define them as abbreviations of `while ¬ψ do { x1' = u1, …, xn' = un for ε }`. This sort of abbreviation has the advantage of avoiding spurious evaluations of hybrid programs w.r.t. the established semantics. We could indeed easily allow such event-triggered programs natively in our language (i.e. without resorting to abbreviations) and extend the semantics accordingly. But we prefer not to do this at the moment, because we wish first to fully understand the ways of limiting spurious computational evaluations arising from event-triggered programs.

Fig. 6: Position of the bouncing ball over time (plot on the left); zoomed-in position of the bouncing ball at the first bounce (plot on the right).
*Remark 3.* SIMULINK and MODELICA are powerful tools for simulating hybrid systems, but lack a well-established, formal semantics. This is discussed for example in [3,9], where the authors aim to provide semantics to subsets of SIMULINK and MODELICA. Taking inspiration from control theory, the language of SIMULINK is circuit-like and block-based; the language of MODELICA is *acausal* and thus particularly useful for modelling electric circuits and the like, which are traditionally modelled by systems of equations.
*Example 3 (Bouncing Ball).* As an illustration of the approach described above for event-triggered programs, take a bouncing ball dropped at a positive height $p$ with no initial velocity ($v = 0$). Due to the gravitational acceleration $g$, it falls to the ground and bounces back up, losing part of its kinetic energy in the process. This can be approximated by the following hybrid program

$$ (p' = v, v' = g \ \mathbf{until}_{0.01}\ p \le 0 \wedge v \le 0);\ (v := v \times -0.5) $$
|
| 490 |
+
|
| 491 |
+
where 0.5 is the dampening factor of the ball. We now want to drop the ball from a specific height (e.g. 5 meters) and let it bounce until it stops. Abbreviating the previous program into $b$, this behaviour can be approximated by $p := 5; v := 0; while true do { b}$. Figure 6 presents the trajectory generated by the ball (calculated by LINCE). Note that since $\epsilon = 0.01$ the ball reaches below ground, as shown in Figure 6 on the right. Other examples of event- and time-triggered programs can be seen in LINCE's website.
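The overshoot visible in Figure 6 can be reproduced with a small standalone sketch (forward Euler with our own step sizes; illustrative code, not LINCE's): since the guard $p \le 0 \land v \le 0$ is only sampled every $\epsilon = 0.01$ seconds, the detected impact point lies slightly below ground.

```python
def bounce_lowest(p=5.0, v=0.0, g=-9.8, eps=0.01, h=1e-4, bounces=3):
    """Simulate (p' = v, v' = g until_eps p <= 0 and v <= 0); v := v * -0.5
    a few times; return the lowest position at which an impact is detected."""
    lowest = p
    for _ in range(bounces):
        while not (p <= 0.0 and v <= 0.0):
            for _ in range(round(eps / h)):  # one `for eps` segment
                p, v = p + h * v, v + h * g
        lowest = min(lowest, p)
        v *= -0.5  # dampening factor 0.5
    return lowest

lowest = bounce_lowest()  # slightly negative: the ball "reaches below ground"
```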

# 8 Conclusions and future work

We introduced small-step and big-step operational semantics for hybrid programs suitable for implementation purposes and provided a denotational counterpart via the notion of Elgot monad. These semantics were then linked by a soundness and adequacy theorem [37]. We regard these results as a stepping stone for developing computational tools and techniques for hybrid programming, which we attested
---PAGE_BREAK---
with the development of LINCE. With this work as a basis, we plan to explore the following research lines in the near future.

**Program equivalence.** Our denotational semantics entails a natural notion of program equivalence (denotational equality) which inherently includes classical laws of iteration and a powerful uniformity principle [33], thanks to the use of Elgot monads. We intend to further explore the equational theory of our language so that we can safely refactor/simplify hybrid programs. Note that the theory includes equational schemas such as `(x := a; x := b) = x := b` and `(wait a; wait b) = wait (a + b)`, thus encompassing not only usual laws of programming but also axiomatic principles behind the notion of time.
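As a toy illustration of how such laws could drive refactoring, the sketch below (our own hypothetical AST encoding, not part of LINCE) normalises a straight-line sequence of atomic statements with the two schemas just mentioned; the assignment law is only sound when the second right-hand side does not mention the overwritten variable, which holds here since right-hand sides are constants.

```python
def simplify(prog):
    """Normalise a program encoded as a list of ('wait', r) and
    ('assign', var, const) statements, using
    (wait a; wait b) = wait (a + b) and (x := a; x := b) = x := b."""
    out = []
    for stmt in prog:
        prev = out[-1] if out else None
        if prev and stmt[0] == 'wait' and prev[0] == 'wait':
            out[-1] = ('wait', prev[1] + stmt[1])  # merge adjacent waits
        elif prev and stmt[0] == 'assign' and prev[0] == 'assign' and prev[1] == stmt[1]:
            out[-1] = stmt  # the first assignment to the variable is dead
        else:
            out.append(stmt)
    return out
```

For instance, `simplify([('wait', 1.0), ('wait', 2.0), ('assign', 'x', 4), ('assign', 'x', 7)])` yields `[('wait', 3.0), ('assign', 'x', 7)]`.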

**New program constructs.** Our while-language is intended to be as simple as possible whilst harbouring the core, uncontroversial features of hybrid programming. This was decided so that we could use the language as both a theoretical and practical basis for advancing hybrid programming. A particular case that we wish to explore next is the introduction of new program constructs, including e.g. non-deterministic or probabilistic choice and exception-raising operations. Denotationally, the fact that we used monadic constructions readily provides a palette of techniques for this process, e.g. tensoring and distributive laws [22,23].

**Robustness.** A core aspect of hybrid programming is that programs should be *robust*: small variations in their input should *not* result in big changes in their output [32,21]. We wish to extend LINCE with features for detecting non-robust programs. A main source of non-robustness is the conditional statement `if b then p else q`: very small changes in its input may change the validity of `b` and consequently cause a switch between (possibly very different) execution branches. Currently, we are working on the systematic detection of non-robust conditional statements in hybrid programs, by taking advantage of the notion of $\delta$-perturbation [20].
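A crude version of such a check can be sketched as follows (illustrative code of our own, not the detection procedure under development): evaluate the guard on the corners of a $\delta$-box around the current state and flag the conditional when the truth value is not constant.

```python
from itertools import product

def guard_is_robust(b, state, delta):
    """Return True when the Boolean guard b takes a single truth value on
    the delta-box around `state` (sampled at the box corners and centre)."""
    names = sorted(state)
    values = {
        b({n: state[n] + s * delta for n, s in zip(names, signs)})
        for signs in product((-1.0, 0.0, 1.0), repeat=len(names))
    }
    return len(values) == 1

# the guard p <= 0 is robust far from the switching surface...
robust_far = guard_is_robust(lambda s: s['p'] <= 0, {'p': 5.0}, 0.1)
# ...but flips under a delta-perturbation just above the ground
robust_near = guard_is_robust(lambda s: s['p'] <= 0, {'p': 0.05}, 0.1)
```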
**Acknowledgements** The first author would like to acknowledge support of German Research Council (DFG) under the project A High Level Language for Monad-based Processes (GO 2161/1-2). The second author was financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia, within project POCI-01-0145-FEDER-030947. The third author was partially supported by National Funds through FCT/MCTES, within the CISTER Research Unit (UIDB/04234/2020); by COMPETE 2020 under the PT2020 Partnership Agreement, through ERDF, and by national funds through the FCT, within project POCI-01-0145-FEDER-029946; by the Norte Portugal Regional Operational Programme (NORTE 2020) under the Portugal 2020 Partnership Agreement, through ERDF and also by national funds through the FCT, within project NORTE-01-0145-FEDER-028550; and by the FCT within project ECSEL/0016/2019 and the ECSEL Joint Undertaking (JU) under grant agreement No 876852. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Czech Republic, Germany, Ireland, Italy, Portugal, Spain, Sweden, Turkey.

---PAGE_BREAK---

# References

1. J. Adámek, H. Herrlich, and G. Strecker. *Abstract and concrete categories*. John Wiley & Sons Inc., New York, 1990.

2. J. Adámek, S. Milius, and J. Velebil. Elgot theories: a new perspective on the equational properties of iteration. *Mathematical Structures in Computer Science*, 21(2):417–480, 2011.

3. O. Bouissou and A. Chapoutot. An operational semantics for Simulink's simulation engine. In *ACM SIGPLAN Notices*, vol. 47, pp. 129–138. ACM, 2012.

4. D. Broman. Hybrid simulation safety: Limbos and zero crossings. In *Principles of Modeling*, pp. 106–121. Springer, 2018.

5. Z. Chaochen, C. A. R. Hoare, and A. P. Ravn. A calculus of durations. *Information Processing Letters*, 40(5):269–276, 1991.

6. D. A. Copp and R. G. Sanfelice. A zero-crossing detection algorithm for robust simulation of hybrid systems jumping on surfaces. *Simulation Modelling Practice and Theory*, 68:1–17, 2016.

7. T. L. Diezel and S. Goncharov. Towards constructive hybrid semantics. In Z. M. Ariola, ed., *5th International Conference on Formal Structures for Computation and Deduction (FSCD 2020)*, vol. 167 of LIPIcs, pp. 24:1–24:19, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.

8. C. Elgot. Monadic computation and iterative algebraic theories. In *Studies in Logic and the Foundations of Mathematics*, vol. 80, pp. 175–230. Elsevier, 1975.

9. S. Foster, B. Thiele, A. Cavalcanti, and J. Woodcock. Towards a UTP semantics for Modelica. In *International Symposium on Unifying Theories of Programming*, pp. 44–64. Springer, 2016.

10. P. Fritzson. *Principles of Object-Oriented Modeling and Simulation with Modelica 3.3: A Cyber-Physical Approach*. John Wiley & Sons, 2014.

11. R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid dynamical systems. *IEEE Control Systems*, 29(2):28–93, 2009.

12. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. In *29th International Conference on Concurrency Theory, CONCUR 2018*. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2018.

13. S. Goncharov, J. Jakob, and R. Neves. A semantics for hybrid iteration. CoRR, abs/1807.01053, 2018.

14. S. Goncharov and R. Neves. An adequate while-language for hybrid computation. In *Proceedings of the 21st International Symposium on Principles and Practice of Programming Languages 2019*, PPDP '19, pp. 11:1–11:15, New York, NY, USA, 2019. ACM.

15. S. Goncharov, R. Neves, and J. Proença. Implementing hybrid semantics: From functional to imperative. CoRR, abs/2009.14322, 2020.

16. S. Goncharov, L. Schröder, C. Rauch, and M. Piróg. Unifying guarded and unguarded iteration. In *International Conference on Foundations of Software Science and Computation Structures*, pp. 517–533. Springer, 2017.

17. T. A. Henzinger. The theory of hybrid automata. In *LICS'96: Logic in Computer Science, 11th Annual Symposium, New Jersey, USA, July 27–30, 1996*, pp. 278–292. IEEE, 1996.

18. P. Höfner and B. Möller. An algebra of hybrid systems. *The Journal of Logic and Algebraic Programming*, 78(2):74–97, 2009.

19. J. J. Huerta y Munive and G. Struth. Verifying hybrid systems with modal Kleene algebra. In J. Desharnais, W. Guttmann, and S. Joosten, eds., *Relational and Algebraic Methods in Computer Science*, pp. 225–243, Cham, 2018. Springer International Publishing.

---PAGE_BREAK---

20. S. Kong, S. Gao, W. Chen, and E. Clarke. dReach: $\delta$-reachability analysis for hybrid systems. In *International Conference on Tools and Algorithms for the Construction and Analysis of Systems*, pp. 200–205. Springer, 2015.

21. D. Liberzon and A. S. Morse. Basic problems in stability and design of switched systems. *IEEE Control Systems*, 19(5):59–70, 1999.

22. C. Lüth and N. Ghani. Composing monads using coproducts. In M. Wand and S. L. P. Jones, eds., *ICFP'02: Functional Programming, 7th ACM SIGPLAN International Conference, Pittsburgh, USA, October 4–6, 2002*, pp. 133–144. ACM, 2002.

23. E. Manes and P. Mulry. Monad compositions I: general constructions and recursive distributive laws. *Theory and Applications of Categories*, 18(7):172–208, 2007.

24. E. Moggi. Computational lambda-calculus and monads. In *Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS '89), Pacific Grove, California, USA, June 5–8, 1989*, pp. 14–23. IEEE Computer Society, 1989.

25. E. Moggi. Notions of computation and monads. *Information and Computation*, 93(1):55–92, 1991.

26. R. Neves. *Hybrid Programs*. PhD thesis, Minho University, 2018.

27. P. C. Ölveczky and J. Meseguer. Semantics and pragmatics of Real-Time Maude. *Higher-Order and Symbolic Computation*, 20(1-2):161–196, 2007.

28. A. Platzer. Differential dynamic logic for hybrid systems. *Journal of Automated Reasoning*, 41(2):143–189, 2008.

29. A. Platzer. *Logical Analysis of Hybrid Systems: Proving Theorems for Complex Dynamics*. Springer, Heidelberg, 2010.

30. R. R. Rajkumar, I. Lee, L. Sha, and J. Stankovic. Cyber-physical systems: the next computing revolution. In *DAC'10: Design Automation Conference, 47th ACM/IEEE Conference, Anaheim, USA, June 13–18, 2010*, pp. 731–736. IEEE, 2010.

31. W. Stein et al. *Sage Mathematics Software (Version 6.4.1)*. The Sage Development Team, 2015. http://www.sagemath.org/.

32. R. Shorten, F. Wirth, O. Mason, K. Wulff, and C. King. Stability criteria for switched and hybrid systems. *SIAM Review*, 49(4):545–592, 2007.

33. A. Simpson and G. Plotkin. Complete axioms for categorical fixed-point operators. In *Logic in Computer Science, LICS 2000*, pp. 30–41, 2000.

34. K. Suenaga and I. Hasuo. Programming with infinitesimals: A while-language for hybrid system modeling. In *International Colloquium on Automata, Languages, and Programming*, pp. 392–403. Springer, 2011.

35. T. Uustalu. Generalizing substitution. *RAIRO-Theoretical Informatics and Applications*, 37(4):315–336, 2003.

36. R. van Glabbeek. The linear time-branching time spectrum (extended abstract). In *Theories of Concurrency, CONCUR 1990*, vol. 458, pp. 278–297, 1990.

37. G. Winskel. *The Formal Semantics of Programming Languages: An Introduction*. MIT Press, 1993.

38. H. Witsenhausen. A class of hybrid-state continuous-time dynamic systems. *IEEE Transactions on Automatic Control*, 11(2):161–167, 1966.
samples_new/texts_merged/3148538.md
ADDED
@@ -0,0 +1,141 @@

---PAGE_BREAK---

A CORRECTION TO "THE CONNECTIVITY STRUCTURE OF THE HYPERSPACES $C_\epsilon(X)$"

by

ERIC L. McDOWELL

Electronically published on February 19, 2009

Topology Proceedings

**Web:** http://topology.auburn.edu/tp/

**Mail:** Topology Proceedings, Department of Mathematics & Statistics, Auburn University, Alabama 36849, USA

**E-mail:** topolog@auburn.edu

**ISSN:** 0146-4124

COPYRIGHT © by Topology Proceedings. All rights reserved.
---PAGE_BREAK---

A CORRECTION TO "THE CONNECTIVITY STRUCTURE OF THE HYPERSPACES $C_{\epsilon}(X)$"

ERIC L. McDOWELL

ABSTRACT. We demonstrate that Proposition 3.1 of [Eric L. McDowell and B. E. Wilder, *The connectivity structure of the hyperspaces* $C_{\epsilon}(X)$, Topology Proc. **27** (2003), no. 1, 223–232] is false by constructing a locally connected metric continuum which admits a non-locally connected small-point hyperspace.
Let $X$ be a continuum with metric $d$. For any $\epsilon > 0$ the set $C_{d,\epsilon}(X) = \{A \in C(X) : \text{diam}_d(A) \le \epsilon\}$ is called a *small-point hyperspace* of $X$. The notation $C_{\epsilon}(X)$ is used when the metric on $X$ is understood.
Proposition 3.1 of [2] asserts that $X$ is locally connected if and only if $C_{\epsilon}(X)$ is locally connected for every $\epsilon > 0$. While it is true that the local connectivity of $C_{\epsilon}(X)$ for every $\epsilon > 0$ implies the local connectivity of $X$, we show in this note that the reverse implication is false.
Below we construct a locally connected continuum $X$ in $\mathbb{R}^3$ for which $C_{\epsilon}(X)$ fails to be locally connected for some $\epsilon > 0$. The metric considered on $X$ is the usual metric inherited from $\mathbb{R}^3$. All

2000 Mathematics Subject Classification. Primary 54F15; Secondary 54B20.

Key words and phrases. cyclic connectedness, hyperspace, locally connected continuum.

The author is grateful to Professor Sam B. Nadler, Jr. for questioning the validity of the proposition that this note addresses. The author is also grateful to the referee for suggestions which significantly enhanced this paper.

©2009 Topology Proceedings.

---PAGE_BREAK---

points $(r, \theta, z)$ are described using the standard cylindrical coordinate system, and all concepts and notation which are used without definition can be found in [3]. The example is similar to [4, Example 2].
**Example 1.** For each $n = 1, 2, \dots$, let $S_n$ denote the circle described by $\{(1, \theta, n^{-1}) : 0 \le \theta < 2\pi\}$ and let $S_0 = \{(1, \theta, 0) : 0 \le \theta < 2\pi\}$. For each $n = 1, 2, \dots$ and each $i = 1, 2, \dots, 2^n$, let $A_i^n$ denote the straight line segment given by $\{(1, 2\pi i/2^n, z) : 0 \le z \le n^{-1}\}$. Define $X$ to be the continuum given by
$$X = \left( \bigcup_{n=0}^{\infty} S_n \right) \cup \left( \bigcup_{n=1}^{\infty} \bigcup_{i=1}^{2^n} A_i^n \right).$$
It is straightforward to show that $X$ is a Peano continuum. We will now prove that $C_\epsilon(X)$ fails to be locally connected at the point $S_0$ when $\epsilon = 2$.
Let $\{U_1, \dots, U_k\}$ be an open cover of $S_0$ with the property that for every $n = 0, 1, \dots$ and every $i = 1, \dots, k$ it is true that
$$ (1) \quad S_n - U_i \text{ is connected and has arc length greater than } 3\pi/2. $$
Observe that $\mathcal{U} = \langle U_1, \cdots, U_k \rangle$ is an open subset of $C(X)$ that contains $S_0$ as well as all $S_n$ for $n$ sufficiently large. Select $N$ such that $S_N \in \mathcal{U}$. We will prove that $C_\epsilon(X)$ fails to be locally connected at $S_0$ by showing that every arc in $\mathcal{U}$ with endpoints $S_0$ and $S_N$ must contain a point of diameter greater than 2. Let $f: [0, 1] \to \mathcal{U}$ be an embedding for which $f(0) = S_0$ and $f(1) = S_N$. Let $\pi: X \to S_N$ denote the natural projection map. For any subset $S \subset X$ we say that $(1, \theta, z) \in S$ is an *antipodal point* of $S$ provided that $(1, \theta + \pi, z')$ belongs to $S$ for some $z'$. We will denote the set of antipodal points of $S$ by $\mathrm{AP}(S)$. We now show that
$$ (2) \quad (1, \theta, z) \in \mathrm{AP}(S) \text{ if and only if } (1, \theta, N^{-1}) \in \mathrm{AP}(\pi(S)). $$
To see (2), let $S \subset X$ and let $(1, \theta, z) \in \mathrm{AP}(S)$. By definition it follows that $(1, \theta + \pi, z')$ belongs to $S$ for some $z'$; thus, $\pi(1, \theta + \pi, z') = (1, \theta + \pi, N^{-1})$ belongs to $\pi(S)$. Since $(1, \theta, N^{-1}) = \pi(1, \theta, z) \in \pi(S)$, it follows that $(1, \theta, N^{-1}) \in \mathrm{AP}(\pi(S))$. The argument for the converse is similar.
If $M \in \mathcal{U}$ and $M \subset S_N$, then there exists an arc $A$ (possibly empty) such that $M$ is the closure of $S_N - A$; thus, the only elements

---PAGE_BREAK---

of $M - AP(M)$ are the points that are diametrically opposed to the interior points of $A$. Therefore, $AP(M)$ is either $S_N$ (if $A = \emptyset$) or the union of two disjoint arcs. Since $f(t)$ is a continuum for each $0 \le t \le 1$, it follows from continuity that
(3) $AP(\pi(f(t)))$ is either $S_N$ or the union of two disjoint arcs.

Continuity also shows that the intersection of $\pi^{-1}(AP(\pi(f(t))))$ and $f(t)$ is closed; moreover, it follows from (2) that this intersection is equal to $AP(f(t))$. Therefore, we have that
(4) $AP(f(t))$ is closed for every $0 \le t \le 1$.
Suppose that $(1, \theta, z) \in AP(f(t))$; then $(1, \theta + \pi, z') \in f(t)$ for some $z'$. If $z' \neq z$, then $(1, \theta, z)$ and $(1, \theta + \pi, z')$ are more than two units apart. Moreover, if $(1, \theta, z) \in AP(f(t)) - \bigcup_{n=0}^{\infty} S_n$, then it follows from the connectivity of $f(t)$ that there must exist some $z'' \neq z$ with $(1, \theta + \pi, z'') \in f(t)$. It follows that
(5) if $AP(f(t)) - \bigcup_{n=0}^{\infty} S_n \neq \emptyset$ then $\text{diam}(f(t)) > 2$.
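The distance estimate behind (5) is elementary to verify numerically: diametrically opposite points of the unit cylinder at equal heights are exactly 2 apart, and strictly farther apart once the heights differ. A quick check (illustrative code only, converting the cylindrical coordinates used above to Cartesian ones):

```python
import math

def cart(r, theta, z):
    """Cylindrical (r, theta, z) to Cartesian coordinates."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def dist(a, b):
    return math.dist(cart(*a), cart(*b))

d_equal = dist((1, 0.3, 0.5), (1, 0.3 + math.pi, 0.5))   # exactly 2
d_skew = dist((1, 0.3, 0.5), (1, 0.3 + math.pi, 0.25))   # sqrt(4 + 0.25^2) > 2
```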
We now show that there exists some $t_0 \in [0, 1]$ for which the diameter of $f(t_0)$ is greater than 2. Begin by defining

$$t' = \min\{t \in [0, 1] : AP(f(t)) \cap S_N \neq \emptyset\}.$$
Suppose that $t' = 1$. Choose $\gamma > 0$ small enough such that the $\gamma$-ball, $\mathcal{B}$, about $S_N$ has the properties that $\mathcal{B} \subset \mathcal{U}$ and $S_n \cap (\cup \mathcal{B}) = \emptyset$ for all $n \neq N$. Choose $\delta > 0$ such that if $t \in (1 - \delta, 1]$ then $H_d(f(t), S_N) < \gamma$. Let $t_0 \in (1 - \delta, 1)$. By (3) we have that $AP(f(t_0)) \neq \emptyset$. However, since $t_0 < t'$ we have by the definition of $t'$ and our choice of $\gamma$ that $AP(f(t_0)) - \bigcup_{n=0}^{\infty} S_n \neq \emptyset$. Therefore, $\text{diam}(f(t_0)) > 2$ by (5).

Now suppose that $t' < 1$. Let $q = (1, \theta, z) \in AP(f(t')) \cap S_N$ and let $q' \in f(t') \cap \pi^{-1}(1, \theta+\pi, z)$. We may assume that $q' = (1, \theta+\pi, z)$ since $d(q, q') > 2$ otherwise. Using (3), we have that $AP(\pi(f(t')))$ contains an arc $I$ containing $q$. We suppose first that $q$ is an isolated point of $AP(f(t'))$. Let $\{y_i\}_{i=1}^{\infty}$ be a sequence in $I$ converging to $q$; then use (2) to select $x_i \in \pi^{-1}(y_i) \cap AP(f(t'))$ for each $i = 1, 2, \dots$. We have by (4) that $AP(f(t'))$ is closed; hence, some subsequence of $\{x_i\}_{i=1}^{\infty}$ converges to a point $x_0$ of $AP(f(t'))$. Moreover, since $\{y_i\}_{i=1}^{\infty}$ converges to $q$, we have that $x_0 \in \pi^{-1}(q)$. Finally, since $q$

---PAGE_BREAK---

is an isolated point of $AP(f(t'))$, it follows that $x_0$ is a member of $f(t') \cap \pi^{-1}(q)$ that does not belong to $S_N$. Therefore, $d(x_0, q') > 2$, and thus, $\text{diam}(f(t')) > 2$. On the other hand, if $q$ is not an isolated point of $AP(f(t'))$, then we may assume that the arc $I$ containing $q$ belongs to $S_N \cap AP(f(t'))$. Choose $\gamma > 0$ small enough so that (i) no $\gamma$-ball about a point of $I$ meets any $S_n$ for $n \neq N$ and (ii) the midpoint $m = (1, \mu, z)$ of $I$ is not contained in the $\gamma$-balls about the endpoints of $I$. Choose $\delta > 0$ such that if $t \in (t' - \delta, t']$, then $H_d(f(t), f(t')) < \gamma$. Let $t_0 \in (t' - \delta, t')$. Since $H_d(f(t_0), f(t')) < \gamma$, we have by (i), (ii), and the construction of $X$ that $f(t_0)$ contains a point $m'$ for which $\pi(m') = m$; furthermore, we have by (i) that $m' \in S_N$. Thus, $m' = (1, \mu, z) = m \in f(t_0)$. By a similar argument we can show that $(1, \mu + \pi, z) \in f(t_0)$. Therefore, $m \in AP(f(t_0))$, contrary to our assumption that $t_0 < t'$.
**Example 2.** K. Kuratowski [1, p. 268] describes a continuum, $K$, consisting of the segment $\{(x, 0) : 0 \le x \le 1\}$, of the vertical segments $\{(m/2^{n+1}, y) : 0 \le m \le 2^{n+1},\ 0 \le y \le 1/2^n\}$, and of the level segments $\{(x, 1/2^n) : 0 \le x \le 1\}$, where $n = 1, 2, \dots$. We note that $K$ is similar in structure to the continuum in the previous example; however, $C_{\rho_1, \epsilon}(K)$ is locally connected when $\rho_1$ is the usual metric inherited from $\mathbb{R}^2$. (Informally, observe that if a subcontinuum $A$ of $K$ is contained in an open subset $U$ of $C(K)$, then $U$ also contains subsets of $A$ with diameter smaller than that of $A$. By first shrinking $A$ to a continuum with smaller diameter within $U$, one can then continuously grow continua to include a subset of a target subcontinuum within $U$ before continuously releasing $A$.)

Instead of considering the usual metric on $K$, let $h: K \to S^1 \times [0, 1]$ be an embedding which sends the leftmost vertical segment of $K$ to $\{(1, 0, z) : 0 \le z \le 1\}$ and the rightmost vertical segment of $K$ to $\{(1, 3\pi/2, z) : 0 \le z \le 1\}$, and which preserves the vertical and horizontal orientations of all subsets of $K$. Let $d$ denote the usual metric for $h(K)$ inherited from $\mathbb{R}^3$, and let $\rho_2$ denote the metric on $K$ given by $\rho_2(x, y) = d(h(x), h(y))$. Then an argument essentially identical to the one given in Example 1 can be used to show that $C_{\rho_2, \epsilon}(K)$ fails to be locally connected for $\epsilon = 2$.
Noting that the small-point hyperspaces of the arc, circle, and simple triod are all locally connected, while the examples provided
---PAGE_BREAK---
in this article admit non-locally connected small-point hyperspaces, the referee suggests the following question.
**Question 1.** *Are the small-point hyperspaces of an hereditarily locally connected continuum always locally connected?*
Recall that a continuum is said to be *cyclicly connected* provided that any two points of the continuum are contained in some simple closed curve. Theorem 3.11 of [2] states that $C_{\epsilon}(X)$ is cyclicly connected for every $\epsilon > 0$ whenever $X$ is locally connected; however, the argument that is used to justify this assertion uses Proposition 3.1 of [2]. Therefore, the following question remains open.
**Question 2.** If $X$ is a locally connected continuum with metric $\rho$, must $C_{\rho,\epsilon}(X)$ be cyclicly connected for every $\epsilon > 0$?
REFERENCES
[1] K. Kuratowski, *Topology. Vol. II.* New edition, revised and augmented. Translated from the French by A. Kirkor. New York-London: Academic Press and Warsaw: PWN, 1968.

[2] Eric L. McDowell and B. E. Wilder, *The connectivity structure of the hyperspaces $C_{\epsilon}(X)$*, Topology Proc. **27** (2003), no. 1, 223–232.
[3] Sam B. Nadler, Jr. *Continuum Theory: An Introduction*. Monographs and Textbooks in Pure and Applied Mathematics, 158. New York: Marcel Dekker, Inc., 1992.
[4] Sam B. Nadler, Jr. and Thelma West, *Size levels for arcs*, Fund. Math. **141** (1992), no. 3, 243–255.
DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE; BERRY COLLEGE; MOUNT BERRY, GEORGIA 30149-5014

*E-mail address: emcdowell@berry.edu*

samples_new/texts_merged/3193892.md
ADDED
@@ -0,0 +1,136 @@

---PAGE_BREAK---

# Anomalous VVH interactions at a linear collider
SUDHANSU S BISWAL¹,*, DEBAJYOTI CHOUDHURY², ROHINI M GODBOLE¹ and RITESH K SINGH³
¹Centre for High Energy Physics, Indian Institute of Science, Bangalore 560 012, India
²Department of Physics and Astrophysics, University of Delhi, New Delhi 110 007, India

³Laboratoire de Physique Théorique, 91405 Orsay Cedex, France
*E-mail: sudhansu@cts.iisc.ernet.in

**Abstract.** We examine, in a model-independent way, the sensitivity of a linear collider to the couplings of a light Higgs boson to a pair of gauge bosons, including the possibility of CP violation. We construct several observables that probe the various possible anomalous couplings. For an intermediate-mass Higgs, a collider operating at a center of mass energy of 500 GeV and with an integrated luminosity of 500 fb⁻¹ is shown to be able to constrain the ZZH vertex at the few per cent level, with even higher sensitivity for some of the couplings. However, the lack of a sufficient number of observables as well as contamination from the ZZH vertex limits the precision to which the anomalous part of the WWH coupling can be probed.
**Keywords.** Anomalous Higgs couplings; linear collider.
PACS Nos 13.66.Fg; 14.80.Cp; 14.70.Fm; 14.70.Hp
## 1. Introduction

The standard model (SM) of particle physics has been tested to a high degree of accuracy, but the direct experimental verification of the phenomenon of spontaneous symmetry breaking is still pending. Various extensions of the SM have more than one Higgs boson, whose CP parity and hypercharges may differ from those of the SM Higgs boson. The minimal supersymmetric standard model (MSSM) is one example of such an extended Higgs sector [1]. To establish the experimental observation of the SM Higgs boson it will therefore be necessary to establish its properties, such as hypercharge, CP parity, etc. At an $e^+e^-$ collider the dominant Higgs production processes are $e^+e^- \to f\bar{f}H$, which proceed via the VVH coupling with $V = W, Z$ and $f$ any light fermion. Demanding Lorentz invariance, the VVH couplings can be parameterized as
$$ \Gamma_{\mu\nu} = g_V \left[ a_V g_{\mu\nu} + \frac{b_V}{m_V^2} (k_{1\nu} k_{2\mu} - g_{\mu\nu} k_1 \cdot k_2) + \frac{\tilde{b}_V}{m_V^2} \epsilon_{\mu\nu\alpha\beta} k_1^\alpha k_2^\beta \right], \quad (1) $$
where $k_i$ denote the momenta of the two W's (Z's); $g_W^{SM} = e \cot \theta_W M_Z$ and $g_Z^{SM} = 2eM_Z/\sin 2\theta_W$. In general, all these anomalous couplings can be complex. For

---PAGE_BREAK---

simplicity we assume $a_V$ to be real and close to its SM value. For processes involving the $VVH$ coupling alone we can choose, without loss of generality, $g_V = g_V^{SM}$ and $a_V = 1 + \Delta a_V$. We further assume $\Delta a_W = \Delta a_Z$ and keep terms up to linear order in the anomalous couplings. The analysis will be made for the ILC with a center of mass energy of 500 GeV and a Higgs boson of mass 120 GeV. We will use the $H \to b\bar{b}$ final state and further assume a b-quark detection efficiency of 0.7. The largest contribution comes from the process $e^+e^- \to \nu_e\bar{\nu}_e H$, which contains two missing neutrinos in the final state. However, it receives contributions from both the $WWH$ and $ZZH$ vertices. Hence one needs to look at $e^+e^- \to Z^*H \to f\bar{f}H$ to constrain the $ZZH$ anomalous couplings and then make use of this information while probing the $WWH$ couplings.
## 2. Observables and kinematical cuts
We have constructed various momentum combinations $C_i$ by taking dot and scalar triple products of different linear combinations of momenta. These combinations are listed in table 1 with their transformation properties under the discrete symmetries C, P and $\tilde{T}$, where the pseudo-time reversal operator ($\tilde{T}$) reverses the momenta and spins of particles without interchanging their initial and final states. We then construct observables ($O_i$) by taking the expectation values of the signs of the various $C_i$'s, i.e. $O_i = \langle \text{sign}(C_i) \rangle$. Most of these observables have definite CP and $\tilde{T}$ properties and hence can be used directly to probe the anomalous coupling with the same CP and $\tilde{T}$ properties. In our analysis we keep terms only up to linear order in the anomalous couplings $B_i$, so all observables can be written as
$$ \mathcal{O}(\{B_i\}) = \sum O_i B_i . $$
Measurements of these observables may be used to constrain the anomalous couplings. The possible sensitivity of these observables to the different anomalous couplings $B_i$, at a given degree of statistical significance $f$, can be obtained by demanding $|\mathcal{O}(\{B_i\}) - \mathcal{O}(\{0\})| \le f \delta\mathcal{O}$. Here $\mathcal{O}(\{0\})$ is the SM value of $\mathcal{O}$ and $\delta\mathcal{O}$ is the statistical fluctuation in $\mathcal{O}$.
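At linear order, each coupling can then be bounded by inverting this condition. A trivial numerical sketch (our own illustrative function and numbers, not the fit used in the analysis), assuming the observable depends on a single coupling $B$ as $\mathcal{O}(\{B\}) = \mathcal{O}(\{0\}) + \text{slope} \cdot B$:

```python
def coupling_limit(slope, delta_obs, f=3.0):
    """Bound on an anomalous coupling B at statistical significance f,
    from |O({B}) - O({0})| = |slope * B| <= f * delta_obs."""
    return f * delta_obs / abs(slope)

# e.g. slope 2.0 and fluctuation 0.01 give a 3-sigma limit of 0.015 on |B|
limit = coupling_limit(2.0, 0.01)
```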
**Table 1.** List of momentum correlators, their discrete transformation properties and the anomalous couplings they probe. $\vec{P}_e = \vec{p}_{e^-} - \vec{p}_{e^+}$, $\vec{P}_f^+ = \vec{p}_f + \vec{p}_{\bar{f}}$, $\vec{P}_f^- = \vec{p}_f - \vec{p}_{\bar{f}}$.
<table><thead><tr><th>Correlator</th><th>C</th><th>P</th><th>CP</th><th>T̃</th><th>CPT̃</th><th>Probe of</th></tr></thead><tbody><tr><td>C<sub>0</sub> = 1</td><td>+</td><td>+</td><td>+</td><td>+</td><td>+</td><td>a<sub>V</sub>, ℜ(b<sub>V</sub>)</td></tr><tr><td>C<sub>1</sub> = P<sub>e</sub> ⋅ P<sub>f</sub><sup>+</sup></td><td>-</td><td>+</td><td>-</td><td>+</td><td>-</td><td>ℑ(b̃<sub>V</sub>)</td></tr><tr><td>C<sub>2</sub> = [P<sub>e</sub> × P<sub>f</sub><sup>+</sup>] ⋅ P<sub>f</sub><sup>-</sup></td><td>+</td><td>-</td><td>-</td><td>-</td><td>+</td><td>ℜ(b̃<sub>V</sub>)</td></tr><tr><td>C<sub>3</sub> = [[P<sub>e</sub> × P<sub>f</sub><sup>+</sup>] ⋅ P<sub>f</sub><sup>-</sup>] [P<sub>e</sub> ⋅ P<sub>f</sub><sup>+</sup>]</td><td>-</td><td>-</td><td>+</td><td>-</td><td>-</td><td>ℑ(b<sub>V</sub>)</td></tr><tr><td>C<sub>4</sub> = [[P<sub>e</sub> × P<sub>f</sub><sup>+</sup>] ⋅ P<sub>f</sub><sup>-</sup>] [P<sub>e</sub> ⋅ P<sub>f</sub><sup>-</sup>]</td><td>×</td><td>-</td><td>×</td><td>-</td><td>×</td><td>ℑ(b<sub>V</sub>), ℜ(b̃<sub>V</sub>)</td></tr></tbody></table>
Sudhansu S Biswal et al
Anomalous VVH interactions
The statistical fluctuations in a cross-section and in an asymmetry can be written as
$$
\Delta\sigma = \sqrt{\sigma_{\text{SM}}/\mathcal{L} + \epsilon^2 \sigma_{\text{SM}}^2}, \quad (2)
$$
$$
(\Delta A)^2 = \frac{1 - A_{\text{SM}}^2}{\sigma_{\text{SM}} \mathcal{L}} + \frac{\epsilon^2}{2} (1 - A_{\text{SM}}^2)^2. \quad (3)
$$
Here $\sigma_{\text{SM}}$ and $A_{\text{SM}}$ are the SM values of the cross-section and asymmetry, respectively.
We choose the integrated luminosity $\mathcal{L} = 500 \text{ fb}^{-1}$, fractional systematic error $\epsilon = 0.01$ and $f = 3$.
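A direct transcription of Eqs. (2) and (3) with these parameter choices can be sketched as follows; the example cross-section and asymmetry values are hypothetical:

```python
import math

L_int = 500.0  # integrated luminosity in fb^-1
eps = 0.01     # fractional systematic error
f = 3.0        # degree of statistical significance

def delta_sigma(sigma_sm):
    """Eq. (2): statistical + systematic fluctuation of a cross-section (fb)."""
    return math.sqrt(sigma_sm / L_int + (eps * sigma_sm) ** 2)

def delta_asym(sigma_sm, a_sm):
    """Eq. (3): fluctuation of an asymmetry, given the SM cross-section and asymmetry."""
    return math.sqrt((1.0 - a_sm**2) / (sigma_sm * L_int)
                     + 0.5 * (eps * (1.0 - a_sm**2)) ** 2)

# a deviation is observable at significance f if it exceeds f times the fluctuation
sigma_reach = f * delta_sigma(50.0)      # hypothetical sigma_SM = 50 fb
asym_reach = f * delta_asym(50.0, 0.0)   # hypothetical A_SM = 0
```

The two terms under each square root show explicitly how the statistical part shrinks with luminosity while the systematic part sets a floor on the reach.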
The various kinematical cuts we impose to suppress the dominant backgrounds to the signal are: $5^\circ \le \theta_0 \le 175^\circ$; $E_b, E_{\bar{b}}, E_{l^-}, E_{l^+} \ge 10$ GeV; missing transverse momentum $p_T^{\text{miss}} \ge 15$ GeV; $\Delta R_{q_1 q_2} \ge 0.7$; $\Delta R_{l^- l^+} \ge 0.2$; $\Delta R_{l^- b}, \Delta R_{l^- \bar{b}}, \Delta R_{l^+ b}, \Delta R_{l^+ \bar{b}} \ge 0.4$.
Here $(\Delta R)^2 \equiv (\Delta\phi)^2 + (\Delta\eta)^2$, where $\Delta\phi$ and $\Delta\eta$ denote the separation between the two objects in azimuthal angle and rapidity, respectively.
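The $\Delta R$ separation requirement can be sketched in a few lines, assuming each object is given by its azimuthal angle $\phi$ and rapidity $\eta$; the helper functions here are illustrative, with $\Delta\phi$ wrapped into $[-\pi, \pi]$:

```python
import math

def delta_R(phi1, eta1, phi2, eta2):
    """(Delta R)^2 = (Delta phi)^2 + (Delta eta)^2, with Delta phi wrapped."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

def isolated(objects, r_min):
    """True if every pair of (phi, eta) objects satisfies Delta R >= r_min."""
    return all(delta_R(*objects[i], *objects[j]) >= r_min
               for i in range(len(objects)) for j in range(i + 1, len(objects)))
```

For example, two jets at the same azimuth separated by 0.9 in rapidity pass the `r_min = 0.7` jet-isolation cut, while a separation of 0.5 fails it.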
We additionally impose cuts on the invariant mass of the $f\bar{f}$ system:
$$
R1 \equiv |m_{f\bar{f}} - M_Z| \le 5 \Gamma_Z \quad \text{select Z-pole,} \tag{4}
$$
$$
R2 \equiv |m_{f\bar{f}} - M_Z| \ge 5 \Gamma_Z \quad \text{de-select Z-pole.} \tag{5}
$$
These cuts respectively enhance or suppress the contribution from the Z resonance in the Bjorken process; $\Gamma_Z$ above is the width of the Z boson.
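The two selections of Eqs. (4) and (5) amount to a simple window on the $f\bar{f}$ invariant mass; a minimal sketch, with approximate values of $M_Z$ and $\Gamma_Z$ assumed for illustration:

```python
M_Z, GAMMA_Z = 91.19, 2.50  # GeV, approximate Z mass and width

def r1_select(m_ff):
    """Eq. (4): keep events on the Z pole."""
    return abs(m_ff - M_Z) <= 5.0 * GAMMA_Z

def r2_select(m_ff):
    """Eq. (5): keep events away from the Z pole."""
    return abs(m_ff - M_Z) >= 5.0 * GAMMA_Z
```

An event with $m_{f\bar{f}}$ near 91 GeV passes R1 and fails R2, and vice versa well away from the pole.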
## 3. ZZH couplings
To probe the anomalous ZZH couplings we consider $f\bar{f}$ final state, where $f$ is any light fermion other than neutrinos. As outlined above we can construct observables with definite CP and $\tilde{T}$ properties and thus can maximize sensitivity to the anomalous couplings for a chosen final state. One can use some of these variables to probe the anomalous couplings [1a].
*Cross-section* (observable $O_0$ corresponding to correlator $C_0$): Total rates are CP and $\tilde{T}$ even quantities, hence they can be used to constrain $\Delta a_Z$ and $\Re(b_Z)$. Total rates with the $R1$ cut and $f = \mu, u, d, c, s$ can probe $|\Re(b_Z)| \ge 0.48 \times 10^{-2}$. Similarly, the total cross-section for $f = e$ with the $R2$ cut, $\sigma(R2; e)$, can probe $\Delta a_Z$ down to $|\Delta a_Z| = 0.038$ at the $3\sigma$ level. Figure 1a shows that the sensitivity to $\Re(b_Z)$ is correlated with $\Delta a_Z$, whereas the reverse is not true.
*Forward-backward asymmetry (A₁):* We define the FB asymmetry $A_1$ with respect to the polar angle of Higgs boson. Since $A_1$ is CP odd and $\tilde{T}$ even, $A_1(R1; \mu, q)$ can be used to probe $\Im(\tilde{b}_Z)$. We find that this measurement can probe $|\Im(\tilde{b}_Z)| > 0.042$.
*Up-down asymmetry (A₂):* $A_2$ is the up-down asymmetry corresponding to $f$ being above or below the H-production plane. It is a CP odd and $\tilde{T}$ odd observable and thus a genuine probe of $\Re(\tilde{b}_Z)$. Since this asymmetry requires charge determination of the final-state fermions, we cannot consider quarks in the final state. Hence, using $A_2^{R2}(e)$ one will be able to constrain $|\Re(\tilde{b}_Z)| \le 0.064$; this is shown by the vertical lines in figure 1b.
**Figure 1.** Simultaneous $3\sigma$ limits on anomalous couplings with $\mathcal{L} = 500 \text{ fb}^{-1}$: (a) the $\Delta a_Z - \Re(b_Z)$ plane using cross-sections; (b) the $\Re(\tilde{b}_Z) - \Im(\tilde{b}_Z)$ plane using various asymmetries.
*Polar–azimuthal asymmetry (A₃):* $A_3$ is a mixed polar–azimuthal asymmetry combining the polar angle of the Higgs boson and the azimuthal angle of $f$ with respect to the Higgs production plane; it is CP even and $\tilde{T}$ odd, so it is sensitive only to $\Im(b_Z)$. This asymmetry requires charge measurement of $f$ and is hence suitable only for $f = e, \mu$. It can give a sensitivity at the $3\sigma$ level of $|\Im(b_Z)| \le 0.17$. The region inside the horizontal lines in figure 1b shows the $3\sigma$ variation in $A_3$.
*Another combined asymmetry (A₄):* We construct this combined asymmetry with respect to the polar and azimuthal angles of the final-state $f$. Although $A_4$ is $\tilde{T}$ odd, it does not have any definite CP property, so it is sensitive to both $\Im(b_Z)$ and $\Re(\tilde{b}_Z)$. $A_4$ also requires charge determination of $f$, hence we cannot consider quarks in the final state for this observable. Moreover, we consider only $f = \mu$, because for $f = e$ many anomalous couplings contribute significantly even with the $R1$ cut. The corresponding constraint is shown in figure 1b with slant lines.
In table 2 we list all the achievable limits obtained above. We emphasize that all of them, except for $\Delta a_Z$ and $\Re(b_Z)$, are independent of other anomalous couplings. Table 2 shows that the constraint on $\Re(b_Z)$ depends on $\Delta a_Z$. Also $\tilde{T}$-odd observables require charge measurement of final-state fermions and hence quarks in the final-state cannot be considered to probe $\tilde{T}$-odd couplings leading to rather poor sensitivity to them.
**Table 2.** Sensitivity achievable at 3σ level for various anomalous couplings with L = 500 fb⁻¹.
<table><thead><tr><th>Coupling</th><th>3σ bound</th><th>Observable used</th></tr></thead><tbody><tr><td>|Δa<sub>Z</sub>|</td><td>0.038</td><td>σ with R2 cut; f = e<sup>-</sup></td></tr><tr><td>|ℜ(b<sub>Z</sub>)|</td><td>0.0048 (Δa<sub>Z</sub> = 0)<br>0.013 (|Δa<sub>Z</sub>| = 0.038)</td><td>σ with R1 cut; f = μ, q</td></tr><tr><td>|ℑ(b<sub>Z</sub>)|</td><td>0.17</td><td>A<sub>3</sub> with R1 cut; f = μ<sup>-</sup>, e<sup>-</sup></td></tr><tr><td>|ℜ(b̃<sub>Z</sub>)|</td><td>0.064</td><td>A<sub>2</sub>(φ<sub>e<sup>-</sup></sub>) with R2 cut</td></tr><tr><td>|ℑ(b̃<sub>Z</sub>)|</td><td>0.042</td><td>A<sub>1</sub>(c<sub>H</sub>) with R1 cut; f = μ, q</td></tr></tbody></table>
**Table 3.** Individual 3σ limits of sensitivity.
<table><thead><tr><th>Coupling</th><th>Limit</th><th>Observable used</th></tr></thead><tbody><tr><td>|Δa|</td><td>≤ 0.018</td><td>σ<sub>R2</sub></td></tr><tr><td>|ℜ(b<sub>W</sub>)|</td><td>≤ 0.098</td><td>σ<sub>R2</sub></td></tr><tr><td>|ℑ(b<sub>W</sub>)|</td><td>≤ 0.62</td><td>σ<sub>R1</sub></td></tr><tr><td>|ℜ(b̃<sub>W</sub>)|</td><td>≤ 1.6</td><td>A<sup>1</sup><sub>FB</sub>(c<sub>H</sub>)</td></tr><tr><td>|ℑ(b̃<sub>W</sub>)|</td><td>≤ 0.39</td><td>A<sup>2</sup><sub>FB</sub>(c<sub>H</sub>)</td></tr></tbody></table>
**Table 4.** Simultaneous 3σ limits of sensitivity.
<table><thead><tr><th>Coupling</th><th>Δa = 0</th><th>Δa ≠ 0</th></tr></thead><tbody><tr><td>|Δa|</td><td>–</td><td>≤ 0.038</td></tr><tr><td>|ℜ(b<sub>W</sub>)|</td><td>≤ 0.10</td><td>≤ 0.31</td></tr><tr><td>|ℑ(b<sub>W</sub>)|</td><td>≤ 1.6</td><td>≤ 1.6</td></tr><tr><td>|ℜ(b̃<sub>W</sub>)|</td><td>≤ 3.2</td><td>≤ 3.2</td></tr><tr><td>|ℑ(b̃<sub>W</sub>)|</td><td>≤ 0.44</td><td>≤ 0.44</td></tr></tbody></table>
## 4. WWH couplings
Due to the missing neutrinos in the final state, here one can construct only two observables: the cross-section and the forward-backward asymmetry with respect to the polar angle of the Higgs boson. Any deviation of the cross-section from its SM value depends largely on Δa<sub>V</sub> and ℜ(b<sub>V</sub>) (CP even, T̃ even). Similarly, the FB asymmetry receives a large contribution from ℑ(b̃<sub>V</sub>) (CP odd, T̃ even). Hence there is no other direct observable to probe the remaining anomalous couplings. Assuming Δa<sub>Z</sub> = Δa<sub>W</sub> = Δa, we calculate the expressions for both observables with the R1 and R2 cuts. In table 3 we list the individual limits of sensitivity on the various anomalous couplings at the 3σ level. To see what the sensitivity would be if all the anomalous couplings were nonzero simultaneously, we construct a nine-dimensional region in parameter space: a point belongs to the blind region if the deviations of all the observables from their SM values, caused by these anomalous couplings, lie within the statistical fluctuations of the SM values. The points on the boundary of this region give the simultaneous limits of sensitivity of these measurements to the anomalous couplings; they are listed in table 4. These tables show that the lack of a specific observable to probe the T̃-odd couplings results in rather poor sensitivity to them. For more details, see [2].
## 5. Conclusion
We have analyzed the sensitivity of the process $e^{+}e^{-} \rightarrow f\bar{f}H$, $f$ being a light fermion, to probe the different anomalous couplings. We impose various kinematical cuts on the final-state particles to reduce backgrounds and also take into account a finite b-tagging efficiency. When these effects are removed, our analysis reproduces the results of [4]. Although observables constructed using the optimal-observable analysis [3] have maximal sensitivity to the anomalous couplings, they are somewhat opaque to the physics being probed. The observables that we construct by taking expectation values of signs of the correlators are simple to build, and most of them have definite CP and $\tilde{T}$ properties, thus probing specific anomalous couplings. Apart from $\Re(b_V)$ and $\Delta a_V$, constraints on all the other anomalous couplings can be obtained using asymmetries and hence are robust against the effects of radiative corrections.
References
[1] See, for example, M Drees, R M Godbole and P Roy, *Theory and phenomenology of sparticles* (World Scientific, Singapore, 2004)
[1a] For detailed definition, see [2]
[2] Sudhansu S Biswal, Debajyoti Choudhury, Rohini M Godbole and Ritesh K Singh, *Phys. Rev. D73*, 035001 (2006)
[3] K Hagiwara, S Ishihara, J Kamoshita and B A Kniehl, *Eur. Phys. J. C14*, 457 (2000)
[4] T Han and J Jiang, *Phys. Rev. D63*, 096007 (2001)

samples_new/texts_merged/3224121.md

Cooperation and dependencies in multipartite systems
Waldemar Kłobus,¹ Marek Miller,² Mahasweta Pandit,¹ Ray Ganardi,¹,³ Lukas Knips,⁴,⁵,⁶ Jan Dziewior,⁴,⁵,⁶ Jasmin Meinecke,⁴,⁵,⁶ Harald Weinfurter,⁴,⁵,⁶ Wiesław Laskowski,¹,³ and Tomasz Paterek¹,²,⁷
¹Institute of Theoretical Physics and Astrophysics, Faculty of Mathematics, Physics and Informatics, University of Gdańsk, 80-308 Gdańsk, Poland
²School of Physical and Mathematical Sciences, Nanyang Technological University, 637371 Singapore
³International Centre for Theory of Quantum Technologies, University of Gdańsk, 80-308 Gdańsk, Poland
⁴Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Straße 1, 85748 Garching, Germany
⁵Department für Physik, Ludwig-Maximilians-Universität, Schellingstraße 4, 80799 München, Germany
⁶Munich Center for Quantum Science and Technology (MCQST), Schellingstraße 4, 80799 München, Germany
⁷MajuLab, International Joint Research Unit UMI 3654, CNRS, Université Côte d'Azur, Sorbonne Université, National University of Singapore, Nanyang Technological University, Singapore
We propose an information-theoretic quantifier for the advantage gained from cooperation that captures the degree of dependency between subsystems of a global system. The quantifier is distinct from measures of multipartite correlations despite sharing many properties with them. It is directly computable for classical as well as quantum systems and reduces to comparing the respective conditional mutual information between any two subsystems. Secret sharing provides an exemplary cooperation task where this quantifier is beneficial. Based on the new quantifier we prove an inequality characterizing the lack of monotonicity of conditional mutual information under local operations and provide intuitive understanding for it.
I. INTRODUCTION
Identifying and quantifying dependencies in multipartite systems enables their analysis and provides a better understanding of complex phenomena. The problem has been addressed by several communities, considering both classical and quantum systems. For example, in neuroscience and genetics measures of multipartite synergy were put forward [1–6], in quantitative sociology quantifiers of coordination were introduced [7], and in physics and information processing quantities aimed at characterizing genuine multiparty correlations were studied in depth [8–13]. The former quantifiers are motivated mathematically, keeping the combinatorial aspects of complex systems in mind, e.g., the synergy is the difference in the information all subsystems have about an extra system as compared to the total information contained in any subset of the systems. Many of the latter quantifiers involve difficult optimizations and are therefore hard to compute. Here, we introduce an operationally defined, simple and computable quantifier of multipartite dependency in terms of the information gained from cooperation when some parties meet and try to deduce the variables of some of the remaining parties. We show how it differs from multipartite correlations, prove its essential properties and discuss the application to quantum secret sharing.
It turns out that, in order to compute the quantity introduced here, it is sufficient to consider the respective conditional mutual information between only two subsystems. Therefore, any operational meaning of the conditional mutual information, e.g., in terms of communication cost of quantum state redistribution [14, 15], applies to the dependence measure as well. In this context, we prove an inequality which characterizes the lack of monotonicity of quantum conditional mutual information under general local operations.
II. MULTIPARTITE DEPENDENCE
Let us begin by briefly recalling fundamental relationships, e.g., that two classical variables $X_1$ and $X_2$ are statistically independent if their probabilities satisfy $P(X_1|X_2) = P(X_1)$. Alternatively, the statistical independence can be stated in terms of entropies with the help of both the Shannon entropy $H(X) = -\sum_{i=1}^{d} P(x_i) \log_d P(x_i)$, where $d$ is the number of outcomes, and the conditional entropy $H(X|Y) = -\sum_{i,j} P(x_i, y_j) \log_d \frac{P(x_i, y_j)}{P(y_j)}$. As a measure of dependence of two variables $X_1$ and $X_2$ one introduces the corresponding entropic difference $H(X_1) - H(X_1|X_2)$, the so-called mutual information $I(X_1: X_2)$ [16]. Similarly, the quantum mutual information captures the dependence between quantum subsystems [17]. However, already in the case of three variables there are two levels of independence. The variable $X_1$ can be independent of all other variables, i.e., $P(X_1|X_2X_3) = P(X_1)$, or it can be conditionally independent of one of them, e.g., $P(X_1|X_2X_3) = P(X_1|X_2)$. The former dependence is again captured by the mutual information $I(X_1: X_2X_3)$, while the so-called conditional mutual information $I(X_1: X_3|X_2) = H(X_1|X_2) - H(X_1|X_2X_3)$ considers the latter. It is thus natural to define the *tripartite dependence* as the situation where any variable depends on all the other variables. This can be quantified as the worst case conditional mutual information
$$\mathcal{D}_3 \equiv \min[I(X_1 : X_2 | X_3), I(X_1 : X_3 | X_2), I(X_2 : X_3 | X_1)]. \quad (1)$$
Due to strong subadditivity the conditional mutual information is non-negative and hence $\mathcal{D}_3 \ge 0$ [18]. $\mathcal{D}_3$ vanishes if and only if there exists a variable such that already a subset of the remaining parties can gain the maximally accessible information about the variable in question. Note that this condition is also satisfied if a variable is not correlated with the rest of the system at all.
The value of $\mathcal{D}_3$ can be interpreted using an alternative expression for conditional mutual information, e.g., $I(X_1: X_3|X_2) = I(X_1: X_2X_3) - I(X_1: X_2)$. Reformulating now (1), one recognizes that $\mathcal{D}_3$ expresses the gain in information about the first subsystem that the second party has from cooperating with the third party. Accordingly, nonzero $\mathcal{D}_3$ ensures that any two parties always gain through cooperation when accessing the knowledge about the remaining subsystem. The minimal gain over the choice of parties is an alternative way to compute $\mathcal{D}_3$.
In the context of quantum subsystems we can rewrite the conditional mutual information as $I(X_1 : X_3|X_2) = S(X_1|X_2) + S(X_3|X_2) - S(X_1X_3|X_2)$, where, e.g., $S(X_1|X_2)$ is the conditional entropy based on the von Neumann entropy $S(\cdot)$. Since $S(X_1|X_2)$ is the entanglement cost of merging a state $X_1$ with $X_2$, see Ref. [19], we can interpret the conditional mutual information as the extra cost of merging states one by one ($X_1$ with $X_2$ and then $X_3$ with $X_2$) instead of altogether ($X_1X_3$ with $X_2$). $\mathcal{D}_3$ is the minimum extra cost of this merging.
*Secret sharing.*—An example of an intuitive application of $\mathcal{D}_3$ is (quantum) secret sharing [20–23]. In the tripartite setting, secret sharing requires collaboration of two parties in order to read out the secret of the remaining party. In the classical version of this problem the secret is a random variable, e.g., the measurement outcome of, say, the first observer. It is thus required that both the second as well as the third party alone has only little or no information about the secret, i.e., $I(X_1: X_2)$ and $I(X_1: X_3)$ are small, while both of them together can reveal the result of the first observer, i.e., $I(X_1: X_2X_3)$ is large or unity. It is clear that the value of $\mathcal{D}_3$ (close to its maximum) yields a measure for the working of secret sharing. Furthermore, due to the minimization in (1), the secret can be generated at any party. Below we derive the classical distributions with large $\mathcal{D}_3$ as well as quantum states which achieve maximal dependence. Quite surprisingly these are mixed states belonging to the class of so-called k-uniform states [24]. It turns out that these states have perfect correlations along complementary local measurements and therefore, by following the protocol in [22], the quantum solution to the secret sharing problem additionally offers security against eavesdropping. In Appendix E we show that these states enable perfect sharing of a quantum secret (an unknown quantum state) and that the value of dependence provides a lower bound on the quality of quantum secret sharing for a class of states. See Ref. [25] for an example of secret sharing with a class of pure k-uniform states.
*Correlations and dependence.*—Before we generalize to an arbitrary number of parties and present the properties of the resulting $\mathcal{D}_N$, let us give a simple example that illustrates the difference between multipartite correlations and multipartite dependence. Consider three classical binary random variables described by the joint probability distribution $P(000) = P(111) = \frac{1}{2}$. All three variables are clearly correlated as confirmed, e.g., by quantifiers introduced in Refs. [12, 13]. However, the knowledge of, say, the first party about the third party does not increase if the first observer is allowed to cooperate with the second one. By examining her data, the first observer knows the variables of both remaining parties and any cooperation with one of them does not change this. There is no information gain and hence this distribution has vanishing tripartite dependence.
On the other hand, let us consider the joint probability distribution with $P(000) = P(011) = P(101) = P(110) = \frac{1}{4}$, which can also describe a classical system. Any two variables in this distribution are completely uncorrelated, but any two parties can perfectly decode the value of the remaining variable. Hence the gain from cooperation is 1 and so is the value of $\mathcal{D}_3$. This quantifier is thus very good for identifying the suitability of a system for secret sharing, where the secret could be at any party.
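The two example distributions can be checked directly against the entropic definition of Eq. (1); a minimal sketch (entropies in log base $d = 2$, the helper names are ours):

```python
import itertools
import math

def D3(p, d=2):
    """Tripartite dependence of Eq. (1) for a joint distribution `p`,
    given as a dict mapping outcome triples to probabilities."""
    def H(keep):
        # Shannon entropy (base d) of the marginal on the subsystems in `keep`
        marg = {}
        for xs, pr in p.items():
            key = tuple(xs[i] for i in keep)
            marg[key] = marg.get(key, 0.0) + pr
        return -sum(q * math.log(q, d) for q in marg.values() if q > 0.0)

    def cmi(i, j, k):
        # I(Xi : Xj | Xk) = H(Xi Xk) + H(Xj Xk) - H(Xk) - H(X1 X2 X3)
        return H([i, k]) + H([j, k]) - H([k]) - H([0, 1, 2])

    return min(cmi(i, j, k) for i, j, k in itertools.permutations(range(3)))

ghz_dist = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}              # fully correlated, D3 = 0
xor_dist = {t: 0.25 for t in [(0, 0, 0), (0, 1, 1),
                              (1, 0, 1), (1, 1, 0)]}     # pairwise independent, D3 = 1
```

The perfectly correlated distribution yields zero while the XOR-type distribution saturates the classical bound, exactly as argued above.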
*Larger systems.*—Moving on to more complex systems, we note that there are more conditions to be considered already in order to define the four-partite dependence. In analogy to the tripartite case the first condition is to require that cooperation of any triple of parties provides more information about the remaining subsystem, e.g., $I(X_1: X_2X_3X_4) - I(X_1: X_2X_3)$ must be positive. But one should also impose that cooperation between any pair brings information gain about the two remaining variables, e.g., $I(X_1X_2: X_3X_4) - I(X_1X_2: X_3)$ must be positive. The former condition demands a positive conditional mutual information, $I(X_1: X_4|X_2X_3) > 0$, while the latter one requires $I(X_1X_2: X_4|X_3) > 0$. In order to compute $\mathcal{D}_4$ one takes the minimum of these two conditional mutual informations over all permutations of subsystems. Note, however, that, e.g., $I(X_1X_2: X_4|X_3) \ge I(X_1: X_4|X_2X_3)$ and therefore it is sufficient to minimize over the conditional mutual information between two variables only. We emphasize that this step simplifies the computation significantly. The same argument applies for arbitrary $N$ and leads to the definition of $N$-partite dependence
$$
\mathcal{D}_N \equiv \min_{\text{perm}} I(X_1 : X_2 | X_3 \dots X_N), \quad (2)
$$
where the minimum is taken over all permutations of the subsystems. In the case of a quantum system in state $\rho$ we obtain
$$
\mathcal{D}_N(\rho) = \min_{j,k} [S(\operatorname{Tr}_j \rho) + S(\operatorname{Tr}_k \rho) - S(\operatorname{Tr}_{jk} \rho) - S(\rho)], \quad (3)
$$
where $j, k = 1 \dots N$ and $j \neq k$, and $\operatorname{Tr}_j \rho$ denotes the partial trace over subsystem $j$. In general, calculating the $N$-partite dependence requires the computation and comparison of $\binom{N}{2}$ values, i.e., it scales polynomially as $N^2$, whereas for permutationally invariant systems it is straightforward.
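Eq. (3) is also straightforward to evaluate numerically; here is a sketch with NumPy, where the partial-trace helper and the GHZ-state check are our illustrative additions, not part of the paper:

```python
import numpy as np
from itertools import combinations

def entropy(rho, d=2):
    """von Neumann entropy in units of log base d."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum() / np.log(d))

def ptrace(rho, keep, dims):
    """Partial trace of `rho`, keeping the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(tuple(dims) + tuple(dims))
    remaining = list(range(n))
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        pos = remaining.index(i)
        # contract the matching row/column axes of subsystem i
        t = np.trace(t, axis1=pos, axis2=pos + len(remaining))
        remaining.remove(i)
    dk = int(np.prod([dims[i] for i in remaining]))
    return t.reshape(dk, dk)

def DN(rho, dims):
    """Eq. (3): minimal conditional mutual information over pairs j, k."""
    n, d = len(dims), dims[0]
    s_full = entropy(rho, d)
    return min(
        entropy(ptrace(rho, [x for x in range(n) if x != j], dims), d)
        + entropy(ptrace(rho, [x for x in range(n) if x != k], dims), d)
        - entropy(ptrace(rho, [x for x in range(n) if x not in (j, k)], dims), d)
        - s_full
        for j, k in combinations(range(n), 2))

# check: the 3-qubit GHZ state has D_3 = 1 (in units of log base 2)
ghz = np.zeros(8); ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)
d3_ghz = DN(np.outer(ghz, ghz), (2, 2, 2))
```

Note that the pure GHZ state gives $\mathcal{D}_3 = 1$ even though its measurement statistics in the computational basis reproduce the classical distribution with vanishing dependence discussed above.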
One may also like to study $k$-partite dependencies within an $N$-partite system. To this aim we propose to apply the definitions above to any $k$-partite subsystem and take the minimum over the resulting values.
III. PROPERTIES
The maximal $N$-partite dependence over classical distributions of $d$-valued variables is given by 1 (recall that our logarithms are base $d$) and follows from the fact that classical mutual information cannot exceed the entropy of each variable. On the other hand, quantum mutual information is bounded by 2 and this is the bound on $\mathcal{D}_N$ optimized over quantum states (see Appendix D). This bound is achieved by mixed states belonging to the class of $k$-uniform states, in particular for $k = N - 1$ [24]. In the case of $N$ qubits (for $N$ even) the optimal states have the following form
$$ \rho_{\max} = \frac{1}{2^N} \left( \sigma_0^{\otimes N} + (-1)^{N/2} \sum_{j=1}^{3} \sigma_j^{\otimes N} \right), \quad (4) $$
where $\sigma_j$ are the Pauli matrices and $\sigma_0$ denotes the $2 \times 2$
|
| 185 |
+
identity matrix. Note that $\rho_{\max}$ is permutationally in-
|
| 186 |
+
variant and gives rise to perfect correlations or anti-
|
| 187 |
+
correlations when all observers measure locally the same
|
| 188 |
+
Pauli observable. These states are known as the general-
|
| 189 |
+
ized bound entangled Smolin states [26, 27]. They are a
|
| 190 |
+
useful quantum resource for multiparty communication
|
| 191 |
+
schemes [28] and were experimentally demonstrated in
|
| 192 |
+
Refs. [29–34]. Per definition for (N − 1)-uniform states
|
| 193 |
+
all reduced density matrices are maximally mixed, with
|
| 194 |
+
vanishing mutual information, whereas the whole system
|
| 195 |
+
is correlated. In Appendix D we provide examples of
|
| 196 |
+
states which maximize $\mathcal{D}_N$ for arbitrary $d$ and show in
|
| 197 |
+
general that the only states achieving the maximal quan-
|
| 198 |
+
tum value of 2 are (N − 1)-uniform.
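
A numerical sanity check of the value $\mathcal{D}_4(\rho_{\max}) = 2$ for the four-qubit Smolin state of Eq. (4) (a numpy sketch; the helper names `kron_n`, `entropy`, and `ptrace` are ours):

```python
import itertools
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
     np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
     np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def kron_n(op, n):
    out = np.array([[1]], dtype=complex)
    for _ in range(n):
        out = np.kron(out, op)
    return out

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep, n):
    r = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + r.ndim // 2)
    dim = 2 ** len(keep)
    return r.reshape(dim, dim)

# Eq. (4) for N = 4: rho_max = (1/2^N)(I + (-1)^{N/2} sum_j sigma_j^{(x)N}).
N = 4
rho = kron_n(s[0], N)
for j in (1, 2, 3):
    rho = rho + (-1) ** (N // 2) * kron_n(s[j], N)
rho /= 2 ** N

# Every (N-1)-party reduction is maximally mixed ...
assert np.allclose(ptrace(rho, [0, 1, 2], N), np.eye(8) / 8)
# ... so Eq. (3) gives (N-1) + (N-1) - (N-2) - (N-2) = 2:
d = min(entropy(ptrace(rho, [i for i in range(N) if i != j], N))
        + entropy(ptrace(rho, [i for i in range(N) if i != k], N))
        - entropy(ptrace(rho, [i for i in range(N) if i not in (j, k)], N))
        - entropy(rho)
        for j, k in itertools.combinations(range(N), 2))
print(d)   # ~ 2.0, the quantum maximum
```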

Let us also offer an intuition for values of $\mathcal{D}_N$ above the classical bound of one. As shown in Appendix G, this can only happen for mixed quantum states. One could then consider an auxiliary system which purifies the mixed state. High values of $\mathcal{D}_N$ correspond to learning simultaneously the variables of the subsystems and the auxiliary system. Note that making this statement mathematically precise may be difficult, as the problem is equivalent to the interpretation of negative values of conditional entropy [19, 35, 36].

As we have already emphasized, multipartite dependence is different from multipartite correlations. Nevertheless, it does share a number of properties that are expected from measures of genuine multipartite correlations. Any such quantifier should satisfy a set of postulates put forward in Refs. [11, 13]. We now show that most of them also hold for $\mathcal{D}_N$ and we precisely characterize the deviation from one of the postulates. In Appendices A-C we prove the following properties of the dependence:

(i) If $\mathcal{D}_N = 0$ and one adds a party in a product state then the resulting $(N+1)$-party state has $\mathcal{D}_N = 0$.

(ii) If $\mathcal{D}_N = 0$ and one subsystem is split, with two of its parts placed in different laboratories, then the resulting $(N+1)$-party state has $\mathcal{D}_{N+1} = 0$.

(iii) $\mathcal{D}_N$ can increase under local operations. Let us denote with a bar the quantities computed after local operations. We have the following inequality:

$$ \overline{\mathcal{D}}_N \le \mathcal{D}_N + I(X_1 X_2 : X_3 \dots X_N) - I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N), \quad (5) $$

where systems $X_1$ and $X_2$ are the ones minimizing $\mathcal{D}_N$, i.e., before the operations were applied.

Properties (i) and (ii) hold for all quantifiers of multipartite correlations. It is expected that measures of multipartite correlations are also monotonic under local operations (though note that this condition is often relaxed in practice; see, e.g., quantum discord). In the present case, the monotonicity property does not hold in general for $\mathcal{D}_N$; however, property (iii) puts a bound on its maximal violation. Moreover, it has a clear interpretation: local operations that uncorrelate a given subsystem from the others may lead to information gain when the less correlated party cooperates with other parties.

Let us explain this more quantitatively for the conditional mutual information between variables $X_1$ and $X_2$. While it is well known that this quantity is monotonic under local operations on subsystems not in the condition [37], we prove in Appendix C that the following inequality is satisfied under local operations on an arbitrary subsystem (being the origin of property (iii)):

$$ I(\overline{X}_1 : \overline{X}_2 | \overline{X}_3 \dots \overline{X}_N) \le I(X_1 : X_2 | X_3 \dots X_N) + I(X_1 X_2 : X_3 \dots X_N) - I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N). \quad (6) $$

The second line is non-negative due to the data processing inequality, and it quantifies how much the local operations have uncorrelated the variables $X_3 \dots X_N$ from the variables $X_1 X_2$. This sets the upper bound on the lack of monotonicity of the conditional mutual information.
## IV. EXAMPLES

Multipartite dependence can be computed for both classical and quantum systems and is a generic quantifier of the information gain from cooperation that can be used across science. Here we discuss a few exemplary calculations and applications of $\mathcal{D}_N$ in quantum information.
*Pure states.*—First of all, for pure quantum states $|\Psi\rangle$, the dependence can be further simplified as

$$
\begin{align}
\mathcal{D}_N(|\Psi\rangle) &= \min_{i,j} [S(\operatorname{Tr}_i |\Psi\rangle\langle\Psi|) \nonumber \\
&\quad + S(\operatorname{Tr}_j |\Psi\rangle\langle\Psi|) - S(\operatorname{Tr}_{ij} |\Psi\rangle\langle\Psi|)] \nonumber \\
&= \min_{i,j} [S(\rho_i) + S(\rho_j) - S(\rho_{ij})], \tag{7}
\end{align}
$$
where $\rho_i$ is the state of the system after removing all but the $i$-th particle, i.e., $\mathcal{D}_N(|\Psi\rangle)$ is given by the smallest quantum mutual information in two-partite subsystems. Here, we made use of the fact that both subsystems of a pure state have the same entropy: $S(\operatorname{Tr}_i\rho) = S(\rho_i)$ for $\rho = |\Psi\rangle\langle\Psi|$. In Appendix G we prove the following upper bound on $\mathcal{D}_N$ for pure states
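
This purity argument is easy to check numerically; a small numpy sketch (seeded random state; `entropy` and `ptrace` are our helper names) verifying that Eqs. (3) and (7) agree on a pure state:

```python
import itertools
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep, n):
    r = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + r.ndim // 2)
    dim = 2 ** len(keep)
    return r.reshape(dim, dim)

rng = np.random.default_rng(7)
n = 3
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Complementary subsystems of a pure state carry the same entropy,
# S(Tr_i rho) = S(rho_i) ...
for i in range(n):
    assert abs(entropy(ptrace(rho, [i], n))
               - entropy(ptrace(rho, [j for j in range(n) if j != i], n))) < 1e-9

# ... hence Eq. (3) collapses to Eq. (7), the smallest two-party mutual information.
eq3 = min(entropy(ptrace(rho, [k for k in range(n) if k != i], n))
          + entropy(ptrace(rho, [k for k in range(n) if k != j], n))
          - entropy(ptrace(rho, [k for k in range(n) if k not in (i, j)], n))
          - entropy(rho)
          for i, j in itertools.combinations(range(n), 2))
eq7 = min(entropy(ptrace(rho, [i], n)) + entropy(ptrace(rho, [j], n))
          - entropy(ptrace(rho, [i, j], n))
          for i, j in itertools.combinations(range(n), 2))
print(abs(eq3 - eq7))   # ~ 0
```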
$$ \mathcal{D}_N(|\Psi\rangle) \le 1. \tag{8} $$

It is a consequence of the trade-off relation between the quantum mutual information for different two-particle subsystems of a pure global state and the definition of $\mathcal{D}_N$, where the smallest conditional mutual information is chosen. In particular, the bound is achieved by the $N$-qudit GHZ state $\frac{1}{\sqrt{d}}(|0\dots0\rangle + \dots + |d-1\dots d-1\rangle)$. Additionally, the quantum mutual information is bounded by 1 whenever the state $\rho_{ij}$ is separable [38]. A comprehensive list of dependencies within standard classes of quantum states is given in Tab. I. The analytical formula for the $N$-qubit Dicke states with $e$ excitations, $|D_N^e\rangle$, is presented in Appendix F. In short, if one fixes $e$ and takes the limit $N \to \infty$, the dependence $\mathcal{D}_N$ vanishes. For $e$ being a function of $N$, e.g., $e = N/2$, the dependence $\mathcal{D}_N$ tends to $1/2$.
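
A quick numerical check that the qudit GHZ state saturates this pure-state bound (a numpy sketch for three qutrits, with base-$d$ logarithms; helper names are ours):

```python
import itertools
import numpy as np

d, n = 3, 3   # three qutrits; logarithms below are base d

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev) / np.log(d)))

def ptrace(rho, keep, n):
    r = rho.reshape([d] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + r.ndim // 2)
    dim = d ** len(keep)
    return r.reshape(dim, dim)

# |GHZ> = (|000> + |111> + |222>)/sqrt(3)
psi = np.zeros(d**n)
for k in range(d):
    psi[k * (d**n - 1) // (d - 1)] = 1 / np.sqrt(d)   # index of |kkk>
rho = np.outer(psi, psi)

dep = min(entropy(ptrace(rho, [m for m in range(n) if m != i], n))
          + entropy(ptrace(rho, [m for m in range(n) if m != j], n))
          - entropy(ptrace(rho, [m for m in range(n) if m not in (i, j)], n))
          - entropy(rho)
          for i, j in itertools.combinations(range(n), 2))
print(dep)   # ~ 1.0: the bound (8) is saturated
```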

*Entanglement without dependence.*—An intriguing question in the theory of multipartite entanglement is whether entanglement can exist without classical multipartite correlations [10]. Examples of $N$-party entangled states with vanishing $N$-party classical correlations are known in the literature [39–43], though the corresponding notions of classical correlations do not satisfy all the postulates of Refs. [11, 13]. Here we ask whether there are genuinely multipartite entangled states with no multipartite dependence and whether multipartite dependence can exist without multipartite correlations and vice versa. It turns out that all of those combinations are possible. There exist even pure genuinely multipartite entangled states without multipartite dependence. Consider any $N$-qubit cluster state (including linear, ring, 2D, etc.) for $N \ge 4$. It was shown in Ref. [44] that all single-particle subsystems are completely mixed and there exists at least one pair of subsystems in the bipartite completely mixed state. The corresponding entropies are equal to $S(\rho_i) = 1$ and $S(\rho_{ij}) = 2$, and lead to $\mathcal{D}_N = 0$, due to Eq. (7). Therefore, the information about a particular subsystem cannot be increased when other subsystems are brought together, which explains the impossibility of the corresponding secret sharing task [45–47]. Note that there exist other subsets of observers who can successfully run secret sharing using a cluster state.

<table><thead><tr><th>N</th><th>state</th><th>D<sub>3</sub></th><th>D<sub>4</sub></th><th>D<sub>5</sub></th><th>D<sub>6</sub></th></tr></thead><tbody><tr><td>3</td><td>{P<sub>same</sub>}</td><td>0</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3</td><td>{P<sub>even</sub>}</td><td>1</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3</td><td>GHZ</td><td>1</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3</td><td>D<sub>3</sub><sup>1</sup></td><td>0.9183</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3</td><td>ρ<sub>nc</sub></td><td>0.5033</td><td>-</td><td>-</td><td>-</td></tr><tr><td>4</td><td>GHZ</td><td>0</td><td>1</td><td>-</td><td>-</td></tr><tr><td>4</td><td>D<sub>4</sub><sup>1</sup></td><td>0.3774</td><td>0.6226</td><td>-</td><td>-</td></tr><tr><td>4</td><td>D<sub>4</sub><sup>2</sup></td><td>0.5033</td><td>0.7484</td><td>-</td><td>-</td></tr><tr><td>4</td><td>L<sub>4</sub></td><td>1</td><td>0</td><td>-</td><td>-</td></tr><tr><td>4</td><td>3-uniform</td><td>2</td><td>0</td><td>-</td><td>-</td></tr><tr><td>5</td><td>GHZ</td><td>0</td><td>0</td><td>1</td><td>-</td></tr><tr><td>5</td><td>D<sub>5</sub><sup>1</sup></td><td>0.2490</td><td>0.2490</td><td>0.4729</td><td>-</td></tr><tr><td>5</td><td>D<sub>5</sub><sup>2</sup></td><td>0.3245</td><td>0.3245</td><td>0.6464</td><td>-</td></tr><tr><td>5</td><td>L<sub>5</sub></td><td>0</td><td>0</td><td>0</td><td>-</td></tr><tr><td>5</td><td>R<sub>5</sub></td><td>1</td><td>1</td><td>0</td><td>-</td></tr><tr><td>5</td><td>AME(5,2)</td><td>1</td><td>1</td><td>0</td><td>-</td></tr><tr><td>6</td><td>GHZ</td><td>0</td><td>0</td><td>0</td><td>1</td></tr><tr><td>6</td><td>D<sub>6</sub><sup>1</sup></td><td>0.1866</td><td>0.1634</td><td>0.1866</td><td>0.3818</td></tr><tr><td>6</td><td>D<sub>6</sub><sup>2</sup></td><td>0.2566</td><td>0.1961</td><td>0.2566</td><td>0.5637</td></tr><tr><td>6</td><td>D<sub>6</sub><sup>3</sup></td><td>0.2729</td><td>0.1961</td><td>0.2729</td><td>0.6291</td></tr><tr><td>6</td><td>L<sub>6</sub></td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>6</td><td>R<sub>6</sub></td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>6</td><td>AME(6,2)</td><td>0</td><td>2</td><td>0</td><td>0</td></tr><tr><td>6</td><td>5-uniform</td><td>0</td><td>0</td><td>0</td><td>2</td></tr></tbody></table>

TABLE I. Values of the dependence for several quantum states and probability distributions. {$P_{\text{same}}$} stands for $P(000) = P(111) = \frac{1}{2}$ and {$P_{\text{even}}$} for $P(000) = P(110) = P(101) = P(011) = \frac{1}{4}$. $D_N^k$ denotes the $N$-partite Dicke states with $k$ excitations $\sim |1...10...0\rangle + ... + |0...01...1\rangle$, with $k$ ones, $\rho_{nc}$ denotes the genuinely multipartite entangled state without multipartite correlations [10], the GHZ state is described in the text, $L_N$ stands for the linear cluster state of $N$ qubits, $R_N$ for the ring cluster state, and $\Psi_4$ is discussed in [48]. $k$-uniform states are states where all $k$-partite marginals are maximally mixed, whereas AME($n$,$d$), so-called absolutely maximally entangled states, refers to $\lfloor n/2 \rfloor$-uniform states of local dimension $d$ [25].

This state also illustrates nicely that full correlations can exist without multipartite dependence. Conversely, the state $\rho_{nc} = \frac{1}{2}|D_N^1\rangle\langle D_N^1| + \frac{1}{2}|D_N^{N-1}\rangle\langle D_N^{N-1}|$ has the property of being $N$-partite entangled without $N$-partite correlation functions [10], yet its $\mathcal{D}_N$ is finite. This again shows that multipartite dependence is distinct from multipartite correlations and captures other properties of genuinely multipartite entangled systems.
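
The cluster-state claim can be checked directly; a numpy sketch (our helper names) preparing the four-qubit linear cluster $L_4$ from $|+\rangle^{\otimes 4}$ with CZ gates and evaluating Eq. (7):

```python
import itertools
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep, n):
    r = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + r.ndim // 2)
    dim = 2 ** len(keep)
    return r.reshape(dim, dim)

N = 4
psi = np.ones(2**N) / 2 ** (N / 2)           # |+>^{(x)4}
for a, b in [(0, 1), (1, 2), (2, 3)]:        # CZ gates along the chain
    for idx in range(2**N):
        if (idx >> (N - 1 - a)) & 1 and (idx >> (N - 1 - b)) & 1:
            psi[idx] *= -1
rho = np.outer(psi, psi)

# All single-qubit reductions are maximally mixed (S = 1) ...
assert all(abs(entropy(ptrace(rho, [i], N)) - 1) < 1e-9 for i in range(N))
# ... and at least one pair is jointly maximally mixed, so Eq. (7) gives 0.
d4 = min(entropy(ptrace(rho, [i], N)) + entropy(ptrace(rho, [j], N))
         - entropy(ptrace(rho, [i, j], N))
         for i, j in itertools.combinations(range(N), 2))
print(d4)   # ~ 0.0 (cf. Table I)
```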

*Increasing $\mathcal{D}$ with local operations.*—We now give an analytical example where $\mathcal{D}_3$ increases under a local operation on the system in the condition. Consider the following classical state
$$ \rho = \frac{1}{2} |000\rangle\langle000| + \frac{1}{8} |101\rangle\langle101| + \frac{1}{8} |110\rangle\langle110| + \frac{1}{4} |111\rangle\langle111|. \tag{9} $$

<table><thead><tr><th>N</th><th>state</th><th>D<sub>3</sub></th><th>D<sub>4</sub></th><th>D<sub>5</sub></th><th>D<sub>6</sub></th></tr></thead><tbody><tr><td>3</td><td>D<sub>3</sub><sup>1</sup></td><td>0.87 (0.92)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3</td><td>ρ<sub>nc</sub></td><td>0.45 (0.50)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>4</td><td>GHZ</td><td>0.06 (0.00)</td><td>0.95 (1.00)</td><td>-</td><td>-</td></tr><tr><td>4</td><td>D<sub>4</sub><sup>2</sup></td><td>0.42 (0.50)</td><td>0.67 (0.75)</td><td>-</td><td>-</td></tr><tr><td>4</td><td>L<sub>4</sub></td><td>0.90 (1.00)</td><td>0.09 (0.00)</td><td>-</td><td>-</td></tr><tr><td>4</td><td>Ψ<sub>4</sub></td><td>0.33 (0.42)</td><td>0.39 (0.42)</td><td>-</td><td>-</td></tr><tr><td>5</td><td>ρ<sub>nc</sub></td><td>0.25 (0.17)</td><td>0.16 (0.65)</td><td>0.171 (0.47)</td><td>-</td></tr><tr><td>6</td><td>D<sub>6</sub><sup>3</sup></td><td>0.21 (0.27)</td><td>0.13 (0.20)</td><td>0.14 (0.27)</td><td>0.21 (0.63)</td></tr></tbody></table>

TABLE II. Illustrative values of the dependence for several experimental quantum states. In brackets we give the theoretical predictions for ideal states.

One verifies that its 3-dependence equals $\mathcal{D}_3(\rho) = I(X_2 : X_3|X_1) = 0.06$, i.e., conditioning on $X_1$ gives the smallest conditional mutual information. The application of an amplitude-damping channel with Kraus operators
$$K_0 = \begin{pmatrix} 0 & 1/\sqrt{2} \\ 0 & 0 \end{pmatrix}, \quad K_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1/\sqrt{2} \end{pmatrix}, \quad (10)$$

on subsystem $X_1$ produces the state $\bar{\rho}$, for which one computes $\mathcal{D}_3(\bar{\rho}) = I(\bar{X}_1 : X_2|X_3) = I(\bar{X}_1 : X_3|X_2) = 0.19$. Note the change in the conditioned system minimizing the dependence. The local operation on $X_1$ has increased the information $I(X_2 : X_3|\bar{X}_1)$ above the other two conditional mutual informations.
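
Since the state (9) is diagonal and the channel (10) maps diagonal states to diagonal states (it sends $X_1 = 1$ to $X_1 = 0$ with probability $1/2$), the whole example can be reproduced classically. A short sketch (our helper names) confirming that the minimal conditional mutual information grows under this local operation:

```python
import itertools
import math

# Eq. (9) as a distribution over (X1, X2, X3).
P = {(0, 0, 0): 1/2, (1, 0, 1): 1/8, (1, 1, 0): 1/8, (1, 1, 1): 1/4}

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(P, keep):
    out = {}
    for x, p in P.items():
        key = tuple(x[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def cmi(P, a, b, n):
    """I(X_a : X_b | rest) = H(a, rest) + H(b, rest) - H(all) - H(rest)."""
    rest = [i for i in range(n) if i not in (a, b)]
    return (H(marginal(P, sorted([a] + rest))) + H(marginal(P, sorted([b] + rest)))
            - H(P) - H(marginal(P, rest)))

def dependence(P, n):
    return min(cmi(P, a, b, n) for a, b in itertools.combinations(range(n), 2))

# Classical action of the amplitude-damping channel (10) on X1: 1 -> 0 w.p. 1/2.
Q = {}
for (x1, x2, x3), p in P.items():
    for y1, q in ([(0, p)] if x1 == 0 else [(0, p / 2), (1, p / 2)]):
        Q[(y1, x2, x3)] = Q.get((y1, x2, x3), 0.0) + q

d_before, d_after = dependence(P, 3), dependence(Q, 3)
print(round(d_before, 2))        # 0.06, attained by I(X2 : X3 | X1)
print(d_after > d_before)        # True: the dependence has increased
```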

*Experimental states.*—Finally, we move to multipartite dependence in quantum optics experiments. Table II gathers quantum states prepared with photonic qubits in Refs. [40, 49–53]. The dependencies were extracted from experimental density matrices obtained via state tomography, using the evaluation described in Ref. [54]. We have chosen to present states illustrating the properties discussed above.

The experimental data are in good agreement with the theoretical calculations. Deviations for the six-qubit state $D_6^3$ result from reduced fidelities due to contributions of higher-order noise in the state preparation. The same applies to the five-qubit state $\rho_{nc}$ derived from $D_6^3$. Indeed, the states denoted as $\rho_{nc}$, which have vanishing correlation functions between all $N$ observers [40], clearly show a non-vanishing value of $\mathcal{D}_N$. Hence, these states are examples of "entanglement without correlations" and "dependence without correlations". Similarly, the experimental data for the linear cluster state $L_4$ indicate "entanglement without dependence" and "correlations without dependence". In the experiment, the GHZ state $\sim |0000\rangle + |1111\rangle$ achieves the highest dependence of all considered states and is close to the theoretical dependence $\mathcal{D}_4 = 1$, which is maximal over all pure states. The small value of $\mathcal{D}_3$ for the four-partite GHZ state reflects its property of having vanishing dependence for all tripartite classically correlated subsystems.

## V. CONCLUSIONS

We have introduced a quantity, the multipartite dependence, in order to determine whether, and by what amount, cooperation between any subsystems brings additional information about the remaining subsystems. It is expected that this tool, which can be used in classical as well as quantum domains, will be of broad relevance, as it is directly calculable and has a clear interpretation. Furthermore, it offers an alternative to the characterization of multipartite properties via multipartite correlations.
## ACKNOWLEDGMENTS
We thank Krzysztof Szczygielski for valuable discussions. The work is supported by DFG (Germany) and NCN (Poland) within the joint funding initiative “Beethoven2” (2016/23/G/ST2/04273, 381445721), by the Singapore Ministry of Education Academic Research Fund Tier 2 Project No. MOE2015-T2-2-034, and by Polish National Agency for Academic Exchange NAWA Project No. PPN/PPO/2018/1/00007/U/00001. W.L. and R.G. acknowledge partial support by the Foundation for Polish Science (IRAP project, ICTQT, Contract No. 2018/MAB/5, cofinanced by EU via Smart Growth Operational Programme). JD and LK acknowledge support from the PhD programs IMPRS-QST and ExQM, respectively. JDMA is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC-2111 - 390814868.
## Appendix A: Proof of property (i)

If $\mathcal{D}_N = 0$ and one adds a party in a product state then the resulting $(N+1)$-partite state has $\mathcal{D}_N = 0$.

*Proof.* Per definition, we are minimizing the conditional mutual information over all $N$-partite subsystems of the total $(N+1)$-party state. If one takes the $N$-partite subsystem that excludes the added party, by assumption $\mathcal{D}_N = 0$. □

In other words, if the cooperation of $N-1$ parties within the $N$-partite system does not help in gaining additional knowledge about any remaining party, then cooperation with any additional independent system will not help either.
## Appendix B: Proof of property (ii)

If $\mathcal{D}_N = 0$ and one subsystem is split, with two of its parts placed in different laboratories, then the resulting $(N+1)$-party state has $\mathcal{D}_{N+1} = 0$.

*Proof.* Without loss of generality, and in order to simplify notation, let us consider an initially tripartite system where the third party is in possession of two variables labeled $X_3$ and $X_4$. The splitting operation places these variables in separate laboratories, producing a four-partite system. By assumption $\mathcal{D}_3 = 0$, but this does not specify which conditional mutual information in Eq. (1) vanishes. If it is the one where the variables $X_3$ and $X_4$ of the third party enter the condition, then this mutual information also minimizes $\mathcal{D}_4$, and hence the latter vanishes. The second possibility is that the variables of the third party enter outside the condition; e.g., the vanishing conditional mutual information could be $I(X_1 : X_3X_4|X_2)$. From the chain rule for mutual information, $0 = I(X_1 : X_3X_4|X_2) \ge I(X_1 : X_4|X_2X_3)$. Finally, from strong subadditivity follows $\mathcal{D}_4 = 0$. In the $N$-partite case one writes more variables in the conditions and follows the same steps. $\square$
## Appendix C: Proof of property (iii)

Consider a state $\rho$ that is processed by general local operations (CPTP maps) into a state $\bar{\rho}$. The following upper bound on the multipartite dependence after local operations holds:

$$ \overline{\mathcal{D}}_N \leq \mathcal{D}_N + I(X_1 X_2 : X_3 \dots X_N) - I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N), \quad (\text{C1}) $$
where systems $X_1$ and $X_2$ are the ones minimizing $\mathcal{D}_N$, i.e., before the operations were applied.

Let us begin with a lemma characterizing the lack of monotonicity of the conditional mutual information under local operations.

**Lemma 1.** *The following inequality holds:*

$$ I(\overline{X}_1 : \overline{X}_2 | \overline{X}_3 \dots \overline{X}_N) \le I(X_1 : X_2 | X_3 \dots X_N) + I(X_1 X_2 : X_3 \dots X_N) - I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N), \quad (\text{C2}) $$
where bars denote subsystems transformed by arbitrary local CPTP maps.

*Proof.* The conditional mutual information is already known to be monotonic under operations on systems not in the condition [37]:

$$ I(\overline{X}_1 : \overline{X}_2 | \overline{X}_3 \dots \overline{X}_N) \le I(X_1 : X_2 | \overline{X}_3 \dots \overline{X}_N). \quad (\text{C3}) $$

Now we continue as follows:

$$
\begin{align*}
& I(X_1 : X_2 | \overline{X}_3 \dots \overline{X}_N) + I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N) \\
&= I(X_1 : X_2 \overline{X}_3 \dots \overline{X}_N) + I(X_2 : X_1 \overline{X}_3 \dots \overline{X}_N) - I(X_1 : X_2) \\
&\le I(X_1 : X_2 X_3 \dots X_N) + I(X_2 : X_1 X_3 \dots X_N) - I(X_1 : X_2) \\
&= I(X_1 : X_2 | X_3 \dots X_N) + I(X_1 X_2 : X_3 \dots X_N),
\end{align*}
$$

where the first equality is obtained by manipulating entropies such that the mutual informations containing barred subsystems come with a positive sign; next we used the data processing inequality, and in the last step we reversed the manipulations of entropies. This completes the proof of the lemma. $\square$

To complete the proof of property (iii) we write

$$
\begin{align*}
\mathcal{D}_N &= I(X_1 : X_2 | X_3 \dots X_N) \\
&\geq I(\overline{X}_1 : \overline{X}_2 | \overline{X}_3 \dots \overline{X}_N) - I(X_1 X_2 : X_3 \dots X_N) \\
&\phantom{\geq} + I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N) \\
&\geq \overline{\mathcal{D}}_N - I(X_1 X_2 : X_3 \dots X_N) + I(X_1 X_2 : \overline{X}_3 \dots \overline{X}_N),
\end{align*}
$$

where in the first line we label the subsystems such that the conditional mutual information $I(X_1 : X_2|X_3\dots X_N)$ achieves the minimum in $\mathcal{D}_N$. The first inequality follows from Lemma 1, and the second from the fact that $I(\overline{X}_1 : \overline{X}_2|\overline{X}_3\dots\overline{X}_N)$ may not be the one minimizing $\overline{\mathcal{D}}_N$.
## Appendix D: Quantum qudit states maximizing $\mathcal{D}_N$

Let us consider a quantum state of $N$ qudits, for $N$ being a multiple of $d$ and $N \ge 3$, defined as the common eigenstate of the generators

$$ G_{1}^{(d)} = \bigotimes_{i=1}^{N} X^{(d)}, \quad G_{2}^{(d)} = \bigotimes_{i=1}^{N} Z^{(d)}, \quad (\text{D1}) $$

composed of $d$-dimensional Weyl-Heisenberg matrices

$$ X^{(d)} = \sum_{j=0}^{d-1} |j\rangle\langle j \oplus 1|, \quad \text{and} \quad Z^{(d)} = \sum_{j=0}^{d-1} \omega^j |j\rangle\langle j|, $$

with $\omega = e^{i2\pi/d}$ and $\oplus$ denoting addition modulo $d$. The explicit form of the state can be calculated in the following way:
$$ \rho_N^{(d)} = \frac{1}{d^N} \sum_{i,j=0}^{d-1} (G_1^{(d)})^i (G_2^{(d)})^j. \quad (\text{D2}) $$
The state (D2) belongs to the class of k-uniform mixed states defined in [24], with $k=N-1$.

It is known that for $N$ even the state $\rho_N^{(d)}$ has $d^{N-2}$ eigenvalues equal to $\frac{1}{d^{N-2}}$, so the entropy $S(\rho_N^{(d)})$ is equal to

$$ S(\rho_N^{(d)}) = N - 2. \quad (\text{D3}) $$

Since the state is $(N-1)$-uniform, all reduced density matrices are proportional to identity matrices, giving

$$ S(\operatorname{Tr}_i \rho_N^{(d)}) = N - 1, \quad (\text{D4}) $$

$$ S(\operatorname{Tr}_{i,j} \rho_N^{(d)}) = N - 2. \quad (\text{D5}) $$

Therefore, for $N$ even

$$
\begin{align}
\mathcal{D}_N(\rho_N^{(d)}) &= S(\operatorname{Tr}_i \rho_N^{(d)}) + S(\operatorname{Tr}_j \rho_N^{(d)}) \nonumber \\
&\quad - S(\operatorname{Tr}_{i,j} \rho_N^{(d)}) - S(\rho_N^{(d)}) = 2. \tag{D6}
\end{align}
$$

In the case of $N$ odd, however, the state $\rho_N^{(d)}$ has $d^{N-1}$ eigenvalues equal to $\frac{1}{d^{N-1}}$, and by analogous calculations we get

$$ \mathcal{D}_N(\rho_N^{(d)}) = 1, \quad (\text{D7}) $$

for $(N-1)$-uniform states.
Now we show that the $(N-1)$-uniform states are the only ones that can achieve $\mathcal{D}_N = 2$. The requirement is

$$
\begin{aligned}
\mathcal{D}_N &= I(X_1 : X_2 | X_3 \dots X_N) \\
&= I(X_1 : X_2 X_3 \dots X_N) - I(X_1 : X_3 \dots X_N) \\
&= 2,
\end{aligned}
\quad (\text{D8})
$$

where $X_i$ stands for an individual subsystem. Since in the definition of $\mathcal{D}_N$ we minimize over all permutations, the same equation holds for all permutations of subsystems. Due to subadditivity, the only way to satisfy (D8) is

$$
\begin{aligned}
I(X_1 : X_3 \dots X_N) &= 0, && (\text{D9}) \\
I(X_1 : X_2 X_3 \dots X_N) &= 2. && (\text{D10})
\end{aligned}
$$

From the first equation we conclude that

$$ \rho_{13 \dots N} = \rho_1 \otimes \rho_{3 \dots N}, \quad (\text{D11}) $$

which also holds for all permutations of indices. After tracing out all but the 1st and 3rd subsystems, we arrive at

$$ \rho_{13} = \rho_1 \otimes \rho_3, \quad (\text{D12}) $$

which means that every pair of subsystems is described by a tensor product state. It follows that any $(N-1)$-particle subsystem is described by a simple tensor product, e.g.,

$$ \rho_{13 \dots N} = \rho_1 \otimes \rho_3 \otimes \dots \otimes \rho_N. \quad (\text{D13}) $$
Using (D10) we write
$$ S(X_1) - S(X_1 | X_2 X_3 \dots X_N) = 2. \quad (\text{D14}) $$

Since for the quantum conditional entropy we have

$$ -S(X_1|X_2X_3\dots X_N) \leq S(X_1), \quad (\text{D15}) $$

the bound is achieved if

$$
\begin{aligned}
2 &= S(X_1) - S(X_1 | X_2 X_3 \dots X_N) \\
&\leq S(X_1) + S(X_1),
\end{aligned}
$$

i.e., for $S(X_1) = 1$. Hence, taking into account (D13), all $(N-1)$-particle subsystems are maximally mixed, i.e., the total state is $(N-1)$-uniform.
## Appendix E: Quantum secret sharing

After introducing the $(N-1)$-uniform states, which maximize the $N$-dependence, we now show that they naturally feature in the task of quantum secret sharing.

Suppose Alice has a quantum state $\rho$, called the secret, that she wants to split into $n$ shares such that the secret is recoverable only when a party has all $n$ shares. A quantum secret sharing scheme [23] is a map $\mathcal{E}_n: A \to X^{\otimes n}$ such that

$$ C_Q(\operatorname{Tr}_k \circ \mathcal{E}_n) = 0, \quad (\text{E1}) $$
where $\operatorname{Tr}_k$ is the partial trace over an arbitrary set of subsystems and $C_Q(\Lambda)$ is the quantum capacity of the channel $\Lambda$. The rate of a secret sharing scheme is given by the quantum capacity of the channel $\mathcal{E}_n$.

Consider that Alice prepares a quantum secret in the state $\rho = \frac{1}{2}(\sigma_0 + \sum_j s_j \sigma_j)$ of a single qubit, where $s_j$ are the components of the Bloch vector. Her encoding map has the $(N-1)$-uniform state as its Choi state [55], and one verifies that it leads to the outcome

$$
\mathcal{E}_N(\rho) = \frac{1}{2^N} \left( \sigma_0^{\otimes N} \operatorname{Tr}\rho + (-1)^{N/2} \sum_{j=1}^3 \sigma_j^{\otimes N} \operatorname{Tr}(\sigma_j^T \rho) \right). \quad (\text{E2})
$$

Since for any $\rho$ we have $(\operatorname{Tr}_k \circ \mathcal{E}_N)(\rho) \propto \mathbb{1}$, it follows that $C_Q(\operatorname{Tr}_k \circ \mathcal{E}_N) = 0$, i.e., no subset of observers can recover the quantum secret. All of them together, however, can recover it perfectly with the decoding map

$$
\mathcal{D}_N(\rho_N) = \frac{1}{2} \left( \sigma_0 + (-1)^{N/2} \sum_{j=1}^3 \operatorname{Tr}(\sigma_j^{\otimes N} \rho_N)\, \sigma_j \right)^T, \quad (\text{E3})
$$
where $\rho_N = \mathcal{E}_N(\rho)$.

We now show that any $(N+1)$-partite state $\rho_c$ with maximally mixed marginals and non-classical dependence $\mathcal{D}_{N+1}(\rho_c) > 1$ is useful for quantum secret sharing. Consider the encoding map $\mathcal{E}_c: A \to X^{\otimes N}$ with the Choi state given by $\rho_c$, i.e., $(\mathbb{1} \otimes \mathcal{E}_c)(|\Phi\rangle\langle\Phi|) = \rho_c$, where $|\Phi\rangle$ is the maximally entangled state. The rate of quantum secret sharing admits the lower bound

$$
\begin{align*}
R &= C_Q(\mathcal{E}_c) && (\text{E4a}) \\
&\geq \sup_{\phi_{AN}} -S_{A|X_1\dots X_N}((\mathbb{1} \otimes \mathcal{E}_c)(\phi_{AN})) && (\text{E4b}) \\
&\geq -S_{A|X_1\dots X_N}(\rho_c) && (\text{E4c}) \\
&= I(A:X_1|X_2\dots X_N) - S(A|X_2\dots X_N) && (\text{E4d}) \\
&\geq I(A:X_1|X_2\dots X_N) - 1 && (\text{E4e}) \\
&\geq \mathcal{D}_{N+1}(\rho_c) - 1. && (\text{E4f})
\end{align*}
$$

The steps are justified as follows. The first line follows from the definition. Inequality (E4b) is the result of computing the quantum capacity of a channel [56–61], and (E4c) follows because the maximally entangled state is a particular choice of $\phi_{AN}$ and the Choi state of $\mathcal{E}_c$ is $\rho_c$. Equations (E4d) and (E4e) follow from the properties of entropy, recalling that our logarithms are base $d$. Finally, the dependence is the worst-case conditional mutual information.

Since the marginals of $\rho_c$ are maximally mixed, the same holds for the encoded state $\rho_N = \mathcal{E}_c(\rho)$, i.e., no subset of parties can recover the quantum secret alone, yet for all of them together $R > 0$ holds whenever $\mathcal{D}_{N+1}(\rho_c) > 1$.
## Appendix F: Dependence of Dicke states
|
| 526 |
+
|
| 527 |
+
We now present an analytical formula for the dependence $\mathcal{D}_N$ of the $N$-qubit Dicke state $D_N^e$ with $e$ excitations. For that state it is given by

$$
\begin{equation}
\begin{aligned}
\mathcal{D}_N(D_N^e) = \binom{N}{e}^{-1} \bigg[ & -\frac{2(N-1)!}{(e-1)!\,(N-e)!} \log\!\left(\frac{e}{N}\right) - 2\binom{N-1}{e} \log\!\left(1-\frac{e}{N}\right) \\
& + \binom{N-2}{e-2} \log\!\left(\frac{\binom{N-2}{e-2}}{\binom{N}{e}}\right) + 2\binom{N-2}{e-1} \log\!\left(\frac{2\binom{N-2}{e-1}}{\binom{N}{e}}\right) \\
& + \binom{N-2}{e} \log\!\left(\frac{\binom{N-2}{e}}{\binom{N}{e}}\right) \bigg]. \quad (\text{F1})
\end{aligned}
\end{equation}
$$

This comes from the fact that for a general Dicke state with $e$ excitations all one-partite reduced density matrices $\{\rho_i\}$ have the two non-zero eigenvalues $e/N$ and $(N-e)/N$, while all two-partite reduced states $\{\rho_{ij}\}$ have the three non-vanishing eigenvalues $e(e-1)/N(N-1)$, $2e(N-e)/N(N-1)$, and $(N-e-1)(N-e)/N(N-1)$. For $e$ scaling with the number of parties as $e = N/k$, the dependence converges to a finite value in the limit $N \to \infty$: $\mathcal{D}_N(D_N^e)$ tends to $2(k-1)/k^2$. The maximally achievable dependence of $1/2$ is reached for $e = N/2$. For an arbitrarily chosen constant $e$ (e.g., for the W state, $e = 1$), $\mathcal{D}_N(D_N^e)$ tends to $0$ for $N \to \infty$.
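The closed form can be cross-checked numerically. A minimal sketch (plain Python; the helper names are ours) computes $\mathcal{D}_N(D_N^e) = 2S(\rho_i) - S(\rho_{ij})$ directly from the eigenvalues quoted above, compares it with a literal evaluation of the closed form (with the binomial prefactor applied to every term), and illustrates the limiting behavior.

```python
from math import comb, log2

def H(ps):
    """Shannon entropy, logarithms base 2 (qubits, d = 2)."""
    return -sum(p * log2(p) for p in ps if p > 0)

def dep_spectra(N, e):
    """D_N(D_N^e) = 2 S(rho_i) - S(rho_ij) from the reduced-state spectra."""
    s1 = H([e / N, (N - e) / N])
    s12 = H([e * (e - 1) / (N * (N - 1)),
             2 * e * (N - e) / (N * (N - 1)),
             (N - e) * (N - e - 1) / (N * (N - 1))])
    return 2 * s1 - s12

def dep_formula(N, e):
    """Literal evaluation of the closed form (guards skip vanishing terms)."""
    C = comb(N, e)
    t = -2 * comb(N - 1, e - 1) * log2(e / N)   # (N-1)!/((e-1)!(N-e)!)
    t -= 2 * comb(N - 1, e) * log2(1 - e / N)
    if e >= 2:
        t += comb(N - 2, e - 2) * log2(comb(N - 2, e - 2) / C)
    t += 2 * comb(N - 2, e - 1) * log2(2 * comb(N - 2, e - 1) / C)
    if e <= N - 2:
        t += comb(N - 2, e) * log2(comb(N - 2, e) / C)
    return t / C

# The closed form agrees with the entropy computation ...
for N in (4, 6, 8):
    for e in range(1, N):
        assert abs(dep_formula(N, e) - dep_spectra(N, e)) < 1e-9

# ... and reproduces the limits: e = N/k gives 2(k-1)/k^2, constant e gives 0.
print(dep_spectra(2000, 1000))  # k = 2: approaches 1/2
print(dep_spectra(2000, 500))   # k = 4: approaches 2*3/16 = 3/8
print(dep_spectra(2000, 1))     # W-like: approaches 0
```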
These results allow us to answer the following question: if $\mathcal{D}_N \le 1$, are there local measurements on the subsystems whose classical outcomes have conditional mutual information equal to $\mathcal{D}_N$? The answer is negative. We have optimized the conditional mutual information over local measurements for Dicke states with $N=3,4$ and $0 < e < N$, and observed that the values obtained are always smaller than $\mathcal{D}_N$.

## Appendix G: Bounds on mutual N-dependence

a. Bound on mixed states

For an arbitrary state $\rho$, the von Neumann entropy obeys the subadditivity and triangle (Araki–Lieb) inequalities

$$
S(\mathrm{Tr}_{j}\rho) \le S(\mathrm{Tr}_{ij}\rho) + S(\rho_{i}), \quad (\text{G1})
$$

$$
S(\mathrm{Tr}_i\rho) - S(\rho_i) \le S(\rho), \quad (\text{G2})
$$

where $\rho_i$ is the reduced state of the $i$-th particle. Using the above inequalities we write

$$
\begin{align*}
\mathcal{D}_N(\rho) &\le S(\mathrm{Tr}_i\rho) - S(\rho) + S(\mathrm{Tr}_j\rho) - S(\mathrm{Tr}_{ij}\rho) \\
&\le S(\rho_i) + S(\rho_i) \\
&\le 2.
\end{align*}
\tag{G3}
$$
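The bound (G3) can be probed numerically on random states. The sketch below (NumPy; `entropy` and `ptrace` are helper functions written for this example) checks (G1), (G2), and the resulting bound for random full-rank three-qubit states.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy, logarithms base 2 (qubits, d = 2)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ptrace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    removed = 0
    for k in range(n - 1, -1, -1):
        if k not in keep:
            t = np.trace(t, axis1=k, axis2=k + n - removed)
            removed += 1
    d = int(np.prod([dims[k] for k in keep]))
    return t.reshape(d, d)

rng = np.random.default_rng(0)
dims = [2, 2, 2]
for _ in range(25):
    # Random full-rank mixed state rho = G G^dag / Tr(G G^dag).
    G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real
    S = lambda keep: entropy(ptrace(rho, keep, dims))
    i, j, rest = 0, 1, 2
    # (G1): S(Tr_j rho) <= S(Tr_ij rho) + S(rho_i)
    assert S([i, rest]) <= S([rest]) + S([i]) + 1e-9
    # (G2): S(Tr_i rho) - S(rho_i) <= S(rho)
    assert S([j, rest]) - S([i]) <= entropy(rho) + 1e-9
    # (G3): the two bounds combine to 2 S(rho_i) <= 2
    lhs = S([j, rest]) - entropy(rho) + S([i, rest]) - S([rest])
    assert lhs <= 2 * S([i]) + 1e-9
    assert lhs <= 2 + 1e-9
print("G1-G3 verified on random states")
```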

b. Bounds on pure states

Now we prove that for pure states we have $\mathcal{D}_N(\rho) \le 1$. Note that due to Eq. (7) from the main text we need to find the smallest mutual information $I(\rho_i : \rho_j)$, where $\rho_i$, $\rho_j$ are subsystems of the pure state $\rho$. Consider

$$
\begin{align}
& I(\rho_i : \rho_j) + I(\rho_j : \rho_k) \tag{G4} \\
&= S(\rho_i) + S(\rho_j) - S(\rho_{ij}) + S(\rho_j) + S(\rho_k) - S(\rho_{jk}) \nonumber \\
&\le 2S(\rho_j) \nonumber \\
&\le 2, \tag{G5}
\end{align}
$$

where the first inequality comes from the strong subadditivity of entropy,

$$
S(\rho_i) + S(\rho_k) \le S(\rho_{ij}) + S(\rho_{jk}), \quad (\text{G6})
$$

and the second inequality follows because the entropy of a single subsystem satisfies $S(\rho_j) \le 1$ (again with logarithms taken base $d$).
Hence, this monogamy relation with respect to mutual information proves that there is always a bipartite subsystem with mutual information bounded by 1.
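This monogamy relation is easy to illustrate numerically. For random pure three-qubit states, the sketch below (NumPy; the helper functions are ours, not from the paper) verifies that the two mutual informations around a common subsystem sum to at most $2S(\rho_j) \le 2$, so at least one pair is bounded by 1.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy, logarithms base 2 (qubits, d = 2)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ptrace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    removed = 0
    for idx in range(n - 1, -1, -1):
        if idx not in keep:
            t = np.trace(t, axis1=idx, axis2=idx + n - removed)
            removed += 1
    d = int(np.prod([dims[k] for k in keep]))
    return t.reshape(d, d)

rng = np.random.default_rng(1)
dims = [2, 2, 2]
for _ in range(25):
    # Random pure three-qubit state.
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    S = lambda keep: entropy(ptrace(rho, keep, dims))
    I = lambda a, b: S([a]) + S([b]) - S([a, b])
    # (G4)-(G5): monogamy of mutual information around subsystem j = 1
    total = I(0, 1) + I(1, 2)
    assert total <= 2 * S([1]) + 1e-9
    assert total <= 2 + 1e-9
    # hence at least one pair has mutual information at most 1
    assert min(I(0, 1), I(1, 2), I(0, 2)) <= 1 + 1e-9
print("monogamy verified on random pure states")
```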
[1] T. Gawne and B. Richmond, Journal of Neuroscience **13**, 2758 (1993).

[2] I. Gat and N. Tishby, in Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems (1998).

[3] E. Schneidman, W. Bialek, and M. J. Berry, Journal of Neuroscience **23**, 11539 (2003).

[4] E. Schneidman, S. Still, M. J. Berry, and W. Bialek, Phys. Rev. Lett. **91**, 238701 (2003).

[5] V. Varadan, D. M. Miller, and D. Anastassiou, Bioinformatics **22**, e497 (2006).

[6] D. Anastassiou, Molecular Systems Biology **3**, 83 (2007).

[7] D. Trendafilov, D. Polani, and R. Murray-Smith, *2015 17th UKSim-AMSS International Conference on Modelling and Simulation (UKSim)*, 361 (2015).

[8] D. L. Zhou, B. Zeng, Z. Xu, and L. You, Phys. Rev. A **74**, 052110 (2006).

[9] D. L. Zhou, Phys. Rev. Lett. **101**, 180505 (2008).

[10] C. H. Bennett, A. Grudka, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. A **83**, 012312 (2011).

[11] G. L. Giorgi, B. Bellomo, F. Galve, and R. Zambrini, Phys. Rev. Lett. **107**, 190501 (2011).

[12] D. Girolami, T. Tufarelli, and C. E. Susa, Phys. Rev. Lett. **119**, 140505 (2017).

[13] I. Devetak and J. Yard, Phys. Rev. Lett. **100**, 230501 (2008).

[14] F. G. S. L. Brandao, A. W. Harrow, J. Oppenheim, and S. Strelchuk, Phys. Rev. Lett. **115**, 050501 (2015).

---PAGE_BREAK---

[16] T. M. Cover and J. A. Thomas, *Elements of Information Theory* (Wiley-Interscience, 2006).

[17] K. Modi, T. Paterek, W. Son, V. Vedral, and M. Williamson, Phys. Rev. Lett. **104**, 080501 (2010).

[18] M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, 2000).

[19] M. Horodecki, J. Oppenheim, and A. Winter, Nature **436**, 673 (2005).

[20] A. Shamir, Commun. ACM **22**, 612 (1979).

[21] G. R. Blakley, Proceedings of AFIPS'79 **48**, 313 (1979).

[22] M. Hillery, V. Bužek, and A. Berthiaume, Phys. Rev. A **59**, 1829 (1999).

[23] H. Imai, J. Müller-Quade, A. C. A. Nascimento, P. Tuyls, and A. Winter, Quantum Info. Comput. **5**, 69 (2005).

[24] W. Klobus, A. Burchardt, A. Kolodziejski, M. Pandit, T. Vértesi, K. Życzkowski, and W. Laskowski, Phys. Rev. A **100**, 032112 (2019).

[25] W. Helwig, W. Cui, J. I. Latorre, A. Riera, and H.-K. Lo, Phys. Rev. A **86**, 052335 (2012).

[26] J. A. Smolin, Phys. Rev. A **63**, 032306 (2001).

[27] R. Augusiak and P. Horodecki, Phys. Rev. A **73**, 012318 (2006).

[28] R. Augusiak and P. Horodecki, Phys. Rev. A **74**, 010305(R) (2006).

[29] E. Amselem and M. Bourennane, Nat. Phys. **5**, 748 (2009).

[30] J. Lavoie, R. Kaltenbaek, M. Piani, and K. J. Resch, Nat. Phys. **6**, 827 (2010).

[31] E. Amselem and M. Bourennane, Nat. Phys. **6**, 827 (2010).

[32] J. Lavoie, R. Kaltenbaek, M. Piani, and K. J. Resch, Phys. Rev. Lett. **105**, 130501 (2010).

[33] J. Barreiro, P. Schindler, O. Gühne, T. Monz, M. Chwalla, C. F. Roos, M. Hennrich, and R. Blatt, Nat. Phys. **6**, 943 (2010).

[34] E. Amselem, M. Sadiq, and M. Bourennane, Sci. Rep. **3**, 1966 (2013).

[35] L. del Rio, J. Aberg, R. Renner, O. Dahlsten, and V. Vedral, Nature **474**, 61 (2011).

[36] T. K. Chuan, J. Maillard, K. Modi, T. Paterek, M. Paternostro, and M. Piani, Phys. Rev. Lett. **109**, 070501 (2012).

[37] M. M. Wilde, J. Phys. A: Math. Theor. **51**, 374002 (2018).

[38] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. **81**, 865 (2009).

[39] W. Laskowski, M. Markiewicz, T. Paterek, and M. Wieśniak, Phys. Rev. A **86**, 032105 (2012).

[40] C. Schwemmer, L. Knips, M. C. Tran, A. de Rosier, W. Laskowski, T. Paterek, and H. Weinfurter, Phys. Rev. Lett. **114**, 180501 (2015).

[41] S. Designolle, O. Giraud, and J. Martin, Phys. Rev. A **96**, 032322 (2017).

[42] M. C. Tran, M. Zuppardo, A. de Rosier, L. Knips, W. Laskowski, T. Paterek, and H. Weinfurter, Phys. Rev. A **95**, 062331 (2017).

[43] W. Klobus, W. Laskowski, T. Paterek, M. Wieśniak, and H. Weinfurter, Eur. Phys. J. D **73**, 29 (2019).

[44] P. Hyllus, O. Gühne, and A. Smerzi, Phys. Rev. A **82**, 012337 (2010).

[45] D. Markham and B. C. Sanders, Phys. Rev. A **78**, 042309 (2008).

[46] A. Keet, B. Fortescue, D. Markham, and B. C. Sanders, Phys. Rev. A **82**, 062315 (2010).

[47] D. Markham and B. C. Sanders, Phys. Rev. A **83**, 019901 (2010).

[48] H. Weinfurter and M. Żukowski, Phys. Rev. A **64**, 010102 (2001).

[49] N. Kiesel, C. Schmid, G. Toth, E. Solano, and H. Weinfurter, Phys. Rev. Lett. **98**, 063604 (2007).

[50] G. Toth, W. Wieczorek, D. Gross, R. Krischek, C. Schwemmer, and H. Weinfurter, Phys. Rev. Lett. **105**, 250403 (2010).

[51] R. Krischek, W. Wieczorek, A. Ozawa, N. Kiesel, P. Michelberger, T. Udem, and H. Weinfurter, Nat. Photonics **4**, 170 (2010).

[52] R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, P. Hyllus, L. Pezze, and A. Smerzi, Phys. Rev. Lett. **107**, 080504 (2011).

[53] L. Knips, C. Schwemmer, N. Klein, M. Wieśniak, and H. Weinfurter, Phys. Rev. Lett. **117**, 210504 (2016).

[54] L. Knips, C. Schwemmer, N. Klein, J. Reuter, G. Tóth, and H. Weinfurter, ArXiv e-prints (2015), arXiv:1512.06866 [quant-ph].

[55] M.-D. Choi, Linear Alg. Appl. **10**, 285 (1975).

[56] B. Schumacher, Phys. Rev. A **54**, 2614 (1996).

[57] B. Schumacher and M. A. Nielsen, Phys. Rev. A **54**, 2629 (1996).

[58] H. Barnum, M. A. Nielsen, and B. Schumacher, Phys. Rev. A **57**, 4153 (1998).

[59] H. Barnum, E. Knill, and M. A. Nielsen, IEEE Trans. Info. Theor. **46**, 1317 (2000).

[60] S. Lloyd, Phys. Rev. A **55**, 1613 (1997).

[61] I. Devetak, IEEE Trans. Info. Theor. **51**, 44 (2005).
samples_new/texts_merged/3327355.md ADDED

The diff for this file is too large to render. See raw diff.

samples_new/texts_merged/339686.md ADDED

---PAGE_BREAK---

## 7.1 Vector Spaces

A **vector space** ($\mathbf{V}$, $\mathbb{F}$) is a set of vectors $\mathbf{V}$, a set of scalars $\mathbb{F}$, and two operators that satisfy the following properties:

* **Vector Addition**

  - **Associative:** $\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}$ for any $\vec{v}, \vec{u}, \vec{w} \in \mathbf{V}$.

  - **Commutative:** $\vec{u} + \vec{v} = \vec{v} + \vec{u}$ for any $\vec{v}, \vec{u} \in \mathbf{V}$.

  - **Additive Identity:** There exists an additive identity $\vec{0} \in \mathbf{V}$ such that $\vec{v} + \vec{0} = \vec{v}$ for any $\vec{v} \in \mathbf{V}$.

  - **Additive Inverse:** For any $\vec{v} \in \mathbf{V}$, there exists $-\vec{v} \in \mathbf{V}$ such that $\vec{v} + (-\vec{v}) = \vec{0}$. We call $-\vec{v}$ the additive inverse of $\vec{v}$.

  - **Closure under vector addition:** For any two vectors $\vec{v}, \vec{u} \in \mathbf{V}$, their sum $\vec{v} + \vec{u}$ must also be in $\mathbf{V}$.

* **Scalar Multiplication**

  - **Associative:** $\alpha(\beta\vec{v}) = (\alpha\beta)\vec{v}$ for any $\vec{v} \in \mathbf{V}$, $\alpha, \beta \in \mathbb{F}$.

  - **Multiplicative Identity:** There exists $1 \in \mathbb{F}$ where $1 \cdot \vec{v} = \vec{v}$ for any $\vec{v} \in \mathbf{V}$. We call $1$ the multiplicative identity.

  - **Distributive in vector addition:** $\alpha(\vec{u} + \vec{v}) = \alpha\vec{u} + \alpha\vec{v}$ for any $\alpha \in \mathbb{F}$ and $\vec{u}, \vec{v} \in \mathbf{V}$.

  - **Distributive in scalar addition:** $(\alpha + \beta)\vec{v} = \alpha\vec{v} + \beta\vec{v}$ for any $\alpha, \beta \in \mathbb{F}$ and $\vec{v} \in \mathbf{V}$.

  - **Closure under scalar multiplication:** For any vector $\vec{v} \in \mathbf{V}$ and scalar $\alpha \in \mathbb{F}$, the product $\alpha\vec{v}$ must also be in $\mathbf{V}$.

You have already seen vector spaces before! For example, $(\mathbb{R}^n, \mathbb{R})$ is the vector space of all $n$-dimensional vectors. With the definitions of vector addition and scalar multiplication defined in the previous notes you could show that it satisfies all the properties above. In fact, matrices also form a vector space $(\mathbb{R}^{n \times m}, \mathbb{R})$ since they fulfill all of the properties above as well – but in this class we will generally only deal with vector spaces containing vectors in $\mathbb{R}^n$ or $\mathbb{C}^n$.

**Additional Resources** For more on vector spaces, read *Strang* pages 123 - 125 and try Problem Set 3.1.

In Schaum's, read pages 112-114 and try problems 4.1, 4.2, and 4.71 to 4.76. Extra: Read and Understand Polynomial Spaces, Spaces of Arbitrary "Field."

---PAGE_BREAK---

### 7.1.1 Bases

We can use a series of vectors to define a vector space. We call this set of vectors a **basis**, which we define formally below:

**Definition 7.1 (Basis):**

Given a vector space $(V, \mathbb{F})$, a set of vectors $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$ is a **basis** of the vector space if it satisfies the following two properties:

* $\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n$ are linearly independent vectors

* For any vector $\vec{v} \in V$, there exist scalars $\alpha_1, \alpha_2, \dots, \alpha_n \in \mathbb{F}$ such that $\vec{v} = \alpha_1\vec{v}_1 + \alpha_2\vec{v}_2 + \dots + \alpha_n\vec{v}_n$.

Intuitively, a basis of a vector space is the *minimum* set of vectors needed to represent all vectors in the vector space. If a set of vectors is linearly dependent and "spans" the vector space, it is still not a basis because we can remove at least one vector from the set and the resulting set will still span the vector space.

The next natural question to ask is: Given a vector space, is the basis unique? Intuitively, it is not because multiplying one of the vectors in a given basis by a nonzero scalar will not affect the linear independence or span of the vectors. We could alternatively construct another basis by replacing one of the vectors with the sum of itself and any other vector in the set.

To illustrate this mathematically, suppose $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$ is a basis for the vector space we are considering. Then

$$ \{\alpha \vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\} \qquad (1) $$

where $\alpha \neq 0$ is also a basis because, just as we've seen in Gaussian elimination row operations, multiplying a row by a nonzero constant does not change the linear independence or dependence of the rows. We can generalize this to say that multiplying a vector by a nonzero scalar also does not change the linear independence of the set of vectors. In addition, we know that

$$ \operatorname{span}(\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}) = \operatorname{span}(\{\alpha \vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}) \qquad (2) $$

because any vector in $\operatorname{span}(\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\})$ can be created as a linear combination of the set $\{\alpha\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$ by dividing the scale factor on $\vec{v}_1$ by $\alpha$. We can use a similar argument to show that $\{\vec{v}_1 + \vec{v}_2, \vec{v}_2, \dots, \vec{v}_n\}$ is also a basis for the same vector space.
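These invariances are easy to check numerically. A minimal sketch using NumPy (with `numpy.linalg.matrix_rank` as a numerical proxy for linear independence; the vectors are the alternative basis from this note) verifies that scaling a basis vector or adding one basis vector to another preserves full rank, i.e., the modified set still spans $\mathbb{R}^3$:

```python
import numpy as np

# Columns are the basis vectors v1 = (1,1,0), v2 = (0,1,1), v3 = (1,0,1).
V = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])
scaled = V @ np.diag([5., 1., 1.])      # {5 v1, v2, v3}, Eq. (1) with alpha = 5
replaced = V.copy()
replaced[:, 0] += V[:, 1]               # {v1 + v2, v2, v3}

for M in (V, scaled, replaced):
    # rank 3 <=> three linearly independent columns <=> the set spans R^3
    assert np.linalg.matrix_rank(M) == 3
print("all three sets are bases of R^3")
```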

**Example 7.1 (Vector space ($\mathbb{R}^3, \mathbb{R}$)):** Let's try to find a basis for the vector space $(\mathbb{R}^3, \mathbb{R})$. We want to find a set of vectors that can represent any vector of the form $\begin{bmatrix} a \\ b \\ c \end{bmatrix}$ where $a,b,c \in \mathbb{R}$. One basis could be the set of standard unit vectors:

$$ \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} $$

---PAGE_BREAK---

The set of vectors is linearly independent and we can represent any vector $\begin{bmatrix} a \\ b \\ c \end{bmatrix}$ in the vector space using the three vectors:

$$ \begin{bmatrix} a \\ b \\ c \end{bmatrix} = a \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + b \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + c \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. \quad (3) $$

Alternatively, we could show that

$$ \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \right\} $$

is a basis for the vector space.

Now that we have defined bases, we can define the dimension of a vector space.

**Definition 7.2 (Dimension):** The dimension of a vector space is the number of basis vectors.

Since each basis vector can be scaled by one coefficient, the dimension of a space can be thought of as the fewest number of parameters needed to describe an element or member of that space. The dimension can also be thought of as the degrees of freedom of your space – that is, the number of parameters that can be varied when describing a member of that space.

**Example 7.2 (Dimension of ($\mathbb{R}^3, \mathbb{R}$)):** Previously, we identified a basis

$$ \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} $$

for the vector space $(\mathbb{R}^3, \mathbb{R})$. The basis consists of three vectors, so the dimension of the vector space is three.

**Note that a vector space can have many bases, but each basis must have the same number of vectors.**

We will not prove this rigorously, but let's illustrate the argument. Suppose a basis for the vector space we're considering has $n$ vectors. This means that the minimum number of vectors needed to represent all vectors in the vector space is $n$, because the basis would not be linearly independent if the vector space could be spanned by fewer vectors. Any set with fewer than $n$ vectors cannot be a basis because it does not have enough vectors to span the vector space — there would be some vectors in the vector space that cannot be expressed as a linear combination of the vectors in the set. In addition, any set with more than $n$ vectors must be linearly dependent and therefore cannot be a basis. Combining the two arguments, any other set of vectors that forms a basis for the vector space must have exactly $n$ vectors.
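The "more than $n$ vectors must be dependent" half of the argument can be demonstrated concretely, assuming NumPy: any four vectors in $\mathbb{R}^3$ form a $3 \times 4$ matrix of rank at most 3, and the SVD exposes an explicit nonzero combination of them that equals the zero vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four vectors in R^3, stacked as the columns of a 3 x 4 matrix.
A = rng.normal(size=(3, 4))
assert np.linalg.matrix_rank(A) <= 3    # rank cannot exceed the dimension

# The last right-singular vector c satisfies A c = 0, i.e., it gives an
# explicit linear dependence among the four columns.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
assert np.allclose(A @ c, 0.0, atol=1e-9)
assert np.linalg.norm(c) > 0.5          # c is a unit vector, not trivial
print("four vectors in R^3 are always linearly dependent")
```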

We introduced quite a few terms in this lecture note, and we'll see how we can connect these with our understanding of matrices in the next lecture note!

---PAGE_BREAK---

**Additional Resources** For more on bases, read *Strang* pages 167 - 171 and try Problem Set 3.4. *Extra: Read Sections on Matrix and Function Space.*

In Schaum's, read pages 124-126 and pages 127-129. Try Problems 4.24 to 4.28, 4.97 to 4.103, and 4.33 to 4.40.

## 7.2 Practice Problems

These practice problems are also available in an interactive form on the course website.

1. True or False: $\{\begin{bmatrix} -3 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 5 \\ 2 \end{bmatrix}\}$ spans $\mathbb{R}^2$.

2. True or False: $\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 5 \\ -2 \\ 1 \end{bmatrix}, \begin{bmatrix} -3 \\ 6 \\ 5 \end{bmatrix}\}$ is a basis for $\mathbb{R}^3$.

3. The following vectors span $\mathbb{R}^3$:

$$ \vec{x}_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}, \vec{x}_2 = \begin{bmatrix} 2 \\ 5 \\ 4 \end{bmatrix}, \vec{x}_3 = \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix}, \vec{x}_4 = \begin{bmatrix} 2 \\ 7 \\ 4 \end{bmatrix}, \vec{x}_5 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} $$

Which vectors of this set form a basis for $\mathbb{R}^3$?

(a) $\vec{x}_1, \vec{x}_2, \vec{x}_3, \vec{x}_4, \vec{x}_5$

(b) $\vec{x}_1, \vec{x}_3, \vec{x}_5$

(c) $\vec{x}_1, \vec{x}_2, \vec{x}_4$

(d) $\vec{x}_1, \vec{x}_3, \vec{x}_4, \vec{x}_5$
samples_new/texts_merged/3495399.md ADDED

---PAGE_BREAK---

# The Paramagnetic Ground State of Ruby—Revisited

J. Shell¹

A more accurate formula for the ruby spin Hamiltonian (than used in earlier JPL programs) is presented for calculating the ground-state paramagnetic spectrum of ruby and transition probability matrix elements between quantum states induced by radio-frequency magnetic fields. A coordinate system is chosen that simplifies the expressions for the radio-frequency magnetic field. Applications of the computer program to several past and current Deep Space Network maser designs are presented. The program is included in an appendix along with a sample output.

## I. Introduction

The low-noise maser amplifiers in the Deep Space Network (DSN) use ruby as the active material. The quantum states of the paramagnetic chromium ion in the ruby crystal are used in the amplification process. An external static magnetic field, $\vec{H}_{dc}$, is applied to the ruby to generate the quantum states. The nature of these states depends on the strength and orientation of this field relative to the ruby crystal c-axis. Transitions between these quantum states are induced by radio frequency (rf) magnetic fields. These transitions are used in two distinct ways. In the first instance, microwave energy from a pump source is used to alter the distribution of spins amongst the energy levels. This creates the population inversion necessary for the ruby to amplify an incoming signal. In the second instance, the process of stimulated emission amplifies the transitions resulting from an incoming "signal." This incoming signal may be from a distant spacecraft, for example.

A good model and understanding of the ruby's paramagnetic behavior are necessary for maser design. In particular, the low-lying energy levels, which are used in cryogenic low-noise amplifiers, are of interest. The ability to calculate the transitions between levels induced by an rf field is also necessary for good maser design. This article contains a computer program that models these effects. The program can be used to select static magnetic-field strengths and orientations and microwave magnetic-field orientations and polarization. This program can aid in the understanding of current and past DSN ruby masers.

In 1970, a Fortran program was written to calculate these same quantities using a different coordinate system and different numerical values for the parameters used to describe the ruby [1]. This program was used to generate many sets of tables for maser design. Some tables exist today, but the program is no longer readily available. In 1978, the National Bureau of Standards (NBS) published a report describing the use of ruby as a standard reference material in electron paramagnetic resonance experiments [2]. It published precise values of the spectroscopic splitting factors and the zero-field splitting for ruby.

¹ Communications Ground Systems Section.

The research described in this publication was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

---PAGE_BREAK---

The program described in this article uses these more recent values. The program also uses a different coordinate system that simplifies the task of calculating transition probabilities due to an rf field. Rather than aligning the ruby crystal c-axis in the z-direction, the applied static magnetic field is chosen along the z-direction [3]. In addition, the advent of new commercial software specifically designed to work with matrices allows for a much simpler program [4]. The program listing and a sample output are included in Appendix A.

## II. Spin Hamiltonian for Ruby

A very concise description of the low-lying states, often referred to as the ground state, is made possible through the concept of an effective spin Hamiltonian. This approach includes such effects as the Zeeman splitting of the states due to applied magnetic fields, including anisotropy of this splitting. It also describes the splitting of energy levels due to the electrostatic field of surrounding atoms. In the case of ruby, this appears as a quadrupole interaction. Excellent discussions of this concept can be found in several books [5,6].

The presence of the crystal field makes the form of the Hamiltonian dependent on the orientation of the coordinate system. For example, if the ruby crystal c-axis is chosen along the z-direction, then the spin Hamiltonian, $H_s$, is given by

$$H_s = g_1\beta H_z S_z + g_2\beta(H_x S_x + H_y S_y) + D \left[ S_z^2 - \frac{1}{3}S(S+1) \right] \quad (1)$$

Here, $g_1$ and $g_2$ are spectroscopic splitting factors, $\beta$ is the Bohr magneton, and $\vec{H}_{dc} = (H_x, H_y, H_z)$ is the applied static magnetic field. The spin vector is denoted by $\vec{S} = (S_x, S_y, S_z)$, where $S_x, S_y, S_z$ are spin matrices, given below. The variable $D$ represents one half of the zero-field splitting between the $S_z = \pm 1/2$ spin states and the $S_z = \pm 3/2$ spin states. The quantity $S(S+1)$ is the eigenvalue of the operator $S^2 = S_x^2 + S_y^2 + S_z^2$. Equation (1) is very similar to the expression used in [1]. The coordinate system appropriate to this form is shown in Fig. 1(a).
|
| 34 |
+
|
| 35 |
+
Personnel at Bell Telephone Laboratories used a Hamiltonian wherein the z-axis is along the applied static magnetic field [3]. The ruby crystal c-axis is specified by the polar angle, $\theta$, with respect to the dc magnetic field and an azimuthal angle, $\varphi$, with respect to the x-axis. Their result is

Fig. 1. The coordinate system used in (a) Eq. (1) and (b) Eq. (2).

$$
\begin{align}
H_s = {}& (g_1 \cos^2 \theta + g_2 \sin^2 \theta) \beta H_z S_z \nonumber \\
& + D \left( \cos^2 \theta - \frac{1}{2} \sin^2 \theta \right) \left[ S_z^2 - \frac{1}{3} S(S+1) \right] \nonumber \\
& + D \left( \frac{1}{2} \right) \left( \cos \theta \sin \theta \right) \left[ e^{-j\varphi} (S_z S_+ + S_+ S_z) + e^{j\varphi} (S_z S_- + S_- S_z) \right] \nonumber \\
& + D \left( \frac{1}{4} \right) \sin^2 \theta \left( e^{-2j\varphi} S_+^2 + e^{2j\varphi} S_-^2 \right) \tag{2}
\end{align}
$$

Here, $S_+ = S_x + jS_y$, $S_- = S_x - jS_y$, and $j = \sqrt{-1}$. We use the values for the spectroscopic splitting factors $g_1 = 1.9817$ and $g_2 = 1.9819$, and the zero-field splitting $D = -3.8076 \times 10^{-17}$ ergs, published by the National Bureau of Standards [2]. This is the form that will be used for the results presented in this article.

The coordinate system appropriate to the Hamiltonian of Eq. (2) is shown in Fig. 1(b). From the point of view of the crystal, the more natural choice is to take the z-axis along the c-axis direction. From the point of view of the rf magnetic fields, it makes more sense to leave the direction of the c-axis unrestricted. The result is a more complex expression for the spin Hamiltonian. However, since a digital computer performs the calculation, the additional complexity is not a concern. Equation (2) can be shown to be almost exactly equal to Eq. (1); we have neglected terms involving the difference between $g_1$ and $g_2$ because they are nearly equal. Demonstration of the equivalence is discussed in Appendix B.

The values predicted by this program differ slightly from the values published by Berwin [1] or Siegman [6]. This is due to the slightly different values of the spectroscopic splitting factor and zero-field splitting used in the two calculations. For example, with a 2600-gauss magnetic field oriented 90 degrees to the ruby c-axis, Berwin calculates the 1–2 transition frequency to be 2.6083 GHz. The current program predicts 2.5677 GHz, a difference of 40.6 MHz, or about 1.5 percent.

In addition to choosing a coordinate system, we must choose a representation for the spin operators. This means choosing a set of base states in terms of which the spin quantum states can be expressed. The usual choice for a spin system is the set of states that are simultaneous eigenstates of the total angular momentum squared and the projection of the angular momentum along some axis, usually the z-axis. In this representation, the matrices representing $S^2$ and $S_z$ are diagonal. We also adopt this convention. For a spin $S = 3/2$ system, such as the Cr$^{+3}$ ion in ruby, $S^2$ and $S_z$ are given by $(2S+1)$-by-$(2S+1)$ matrices. In particular,

$$
\begin{align}
S^2 &= \frac{15}{4} \cdot \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \notag \\
S_z &= \frac{1}{2} \cdot \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -3 \end{bmatrix} \tag{3a}
\end{align}
$$

In this representation, the matrices representing the spin operators $S_x$ and $S_y$ are given by

$$ S_x = \frac{1}{2} \cdot \begin{bmatrix} 0 & \sqrt{3} & 0 & 0 \\ \sqrt{3} & 0 & 2 & 0 \\ 0 & 2 & 0 & \sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{bmatrix} \qquad (3b) $$

$$ S_y = \frac{1}{2} \cdot \begin{bmatrix} 0 & -\sqrt{3}j & 0 & 0 \\ \sqrt{3}j & 0 & -2j & 0 \\ 0 & 2j & 0 & -\sqrt{3}j \\ 0 & 0 & \sqrt{3}j & 0 \end{bmatrix} $$
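As a quick sanity check, these matrices can be verified against the defining angular-momentum algebra: $[S_x, S_y] = jS_z$ and $S^2 = \frac{15}{4}I$ for $S = 3/2$. A short NumPy sketch (the article's own code, in Appendix A, is MATLAB; NumPy writes $j$ as `1j`):

```python
import numpy as np

r3 = np.sqrt(3.0)
# The spin-3/2 matrices of Eqs. (3a) and (3b)
Sx = 0.5 * np.array([[0, r3, 0, 0], [r3, 0, 2, 0], [0, 2, 0, r3], [0, 0, r3, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -r3*1j, 0, 0], [r3*1j, 0, -2j, 0], [0, 2j, 0, -r3*1j], [0, 0, r3*1j, 0]])
Sz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0]).astype(complex)

# [Sx, Sy] = i*Sz, and S^2 = S(S+1)*I = (15/4)*I for S = 3/2
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)
assert np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 3.75 * np.eye(4))
```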
From Eqs. (1) or (2) and (3), it can be seen that the spin Hamiltonian is a 4-by-4 matrix. The eigenvalues of the matrix are the energies of the discrete quantum states available to the spins. The difference in energies divided by Planck's constant determines the resonant transition frequencies. The eigenvector associated with an eigenvalue is a representation of the quantum state having that energy. The transition frequencies are calculated and displayed by the program. The eigenvectors are used to calculate the spin vectors discussed in the next section. The eigenvectors are not normally displayed, although it is a simple matter to do so.
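As a concrete sketch of this step, the Hamiltonian of Eq. (2) can be assembled from the matrices of Eq. (3) and diagonalized numerically. The snippet below uses Python/NumPy rather than the MATLAB of Appendix A, with the NBS constants quoted earlier; the operating point (4981 gauss, $\theta$ = 90 deg) is the X-band example of Section IV.

```python
import numpy as np

# Spin-3/2 matrices of Eq. (3)
r3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, r3, 0, 0], [r3, 0, 2, 0], [0, 2, 0, r3], [0, 0, r3, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -r3*1j, 0, 0], [r3*1j, 0, -2j, 0], [0, 2j, 0, -r3*1j], [0, 0, r3*1j, 0]])
Sz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0]).astype(complex)
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy

g1, g2 = 1.9817, 1.9819   # NBS spectroscopic splitting factors
D = -3.8076e-17           # NBS zero-field splitting, erg
beta = 9.273e-21          # Bohr magneton, erg/gauss
hP = 6.626e-27            # Planck's constant, erg*s

def spin_hamiltonian(h, theta, phi):
    """Spin Hamiltonian of Eq. (2): z-axis along the dc field of strength h gauss."""
    ct, st = np.cos(theta), np.sin(theta)
    return ((g1 * ct**2 + g2 * st**2) * beta * h * Sz
            + D * (ct**2 - 0.5 * st**2) * (Sz @ Sz - 1.25 * np.eye(4))
            + D * 0.5 * ct * st * (np.exp(-1j * phi) * (Sz @ Sp + Sp @ Sz)
                                   + np.exp(1j * phi) * (Sz @ Sm + Sm @ Sz))
            + D * 0.25 * st**2 * (np.exp(-2j * phi) * (Sp @ Sp)
                                  + np.exp(2j * phi) * (Sm @ Sm)))

# 4981 gauss applied 90 deg to the c-axis
energies = np.linalg.eigvalsh(spin_hamiltonian(4981.0, np.pi / 2, 0.0))  # ascending order
freq_ghz = lambda i, j: (energies[j] - energies[i]) / hP / 1e9
print(freq_ghz(0, 1), freq_ghz(0, 2), freq_ghz(2, 3))  # ~8.42, ~24.04, ~19.21 GHz
```

The three printed numbers reproduce the 1–2 signal and the two pump transition frequencies quoted for the X-band coupled-cavity maser below.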

## III. Transition Probability Matrix Elements and Spin Vectors

The ability of the rf magnetic field to induce transitions between the quantum states of ruby is fundamental to maser design. If the rf field is the signal from a spacecraft, this ability is related to the gain of the maser. If the rf field is from a microwave pump source, this ability is related to the amount of pump energy needed to saturate the transition. A measure of the ability of a given rf field to induce a transition is given by a matrix element.

The transition probability between quantum states $i$ and $j$ induced by an rf magnetic field is

$$ W_{i \to j} = \frac{1}{4} \gamma^2 g(f) |\langle j | \vec{H}_{rf}^* \cdot \vec{S} | i \rangle|^2 \quad (4) $$

where $\gamma = g\beta\mu_o/\hbar$ and $g(f)$ is the line-shape function. The matrix element mentioned above is given by $\langle j | \vec{H}_{rf}^* \cdot \vec{S} | i \rangle$. The quantum states, $\langle j |$ and $| i \rangle$, are represented by the eigenvectors of the spin Hamiltonian. The spin vector is shorthand for $\vec{S} = (S_x, S_y, S_z)$, where the spin matrices are given above.

As seen in Eq. (4), the operator describing the interaction between the spin and the rf magnetic field has much the same form as the operator describing a spin in a static magnetic field. It takes the form of a dot product between the conjugate of the rf magnetic field vector and the spin vector. The magnetic field vector can be pulled outside the brackets, leading to the expression

$$
\begin{aligned}
\vec{H}_{rf}^* \cdot (\langle j | \vec{S} | i \rangle) &= \vec{H}_{rf}^* \cdot \{\langle j | S_x | i \rangle \hat{x} + \langle j | S_y | i \rangle \hat{y} + \langle j | S_z | i \rangle \hat{z}\} \\
&= H_x^* S_x^{ij} + H_y^* S_y^{ij} + H_z^* S_z^{ij} = \vec{H}_{rf}^* \cdot \vec{S}^{ij}
\end{aligned}
$$

In general, $H_x^*, S_x^{ij}, H_y^*, S_y^{ij}, H_z^*, S_z^{ij}$ are complex numbers. Thus, the transition probability between two states depends on the magnitude, orientation, and polarization of the rf magnetic field. The spin vectors, $\vec{S}^{ij} = \langle j | \vec{S} | i \rangle$, as well as the quantities $T_{ij} = |\vec{H}_{rf}^* \cdot \vec{S}^{ij}|^2$, for a user-specified rf field, are calculated by the program.
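This end-to-end chain — diagonalize, form the spin vector, dot with the rf field phasor — can be sketched in a few lines of Python/NumPy (mirroring the MATLAB program of Appendix A; the operating point and rf field are those of the sample output there):

```python
import numpy as np

# Spin-3/2 matrices of Eq. (3)
r3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, r3, 0, 0], [r3, 0, 2, 0], [0, 2, 0, r3], [0, 0, r3, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -r3*1j, 0, 0], [r3*1j, 0, -2j, 0], [0, 2j, 0, -r3*1j], [0, 0, r3*1j, 0]])
Sz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0]).astype(complex)
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy

# NBS constants and the sample-output operating point (4981 G, theta = 90 deg)
g1, g2, D, beta = 1.9817, 1.9819, -3.8076e-17, 9.273e-21
h, theta, phi = 4981.0, np.pi / 2, 0.0
ct, st = np.cos(theta), np.sin(theta)
sh = ((g1 * ct**2 + g2 * st**2) * beta * h * Sz
      + D * (ct**2 - 0.5 * st**2) * (Sz @ Sz - 1.25 * np.eye(4))
      + D * 0.5 * ct * st * (np.exp(-1j * phi) * (Sz @ Sp + Sp @ Sz)
                             + np.exp(1j * phi) * (Sz @ Sm + Sm @ Sz))
      + D * 0.25 * st**2 * (np.exp(-2j * phi) * Sp @ Sp + np.exp(2j * phi) * Sm @ Sm))

evals, evecs = np.linalg.eigh(sh)       # eigenvalues ascending; column k is state k+1
v1, v2 = evecs[:, 0], evecs[:, 1]

# Spin vector S^12 = <2|S|1> (defined up to an overall phase)
S12 = np.array([v2.conj() @ M @ v1 for M in (Sx, Sy, Sz)])

# T_12 = |conj(Hrf) . S^12|^2 for a user-specified rf field
Hrf = np.array([0.854, -0.521j, 0.0])
T12 = abs(np.conj(Hrf) @ S12) ** 2
print(np.round(np.abs(S12), 4), round(T12, 4))  # |S12| ~ (1.0735, 0.6544, 0), T12 ~ 1.582
```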

## IV. Program Description and Examples Using the Program

The program is written in the high-level language MATLAB. This is commercial software specifically designed to handle matrices. MATLAB has intrinsic eigenvalue and eigenvector routines. This greatly reduces the program length. After the Hamiltonian is entered into the program, the eigenvalues and eigenvectors are calculated by executing one statement. The eigenvectors are ordered with the one corresponding to the lowest energy, $e_1$, labeled $v_1$, and the next one labeled $v_2$, and so on. The eigenvectors calculated by MATLAB are also orthogonal and normalized. For a general choice of the azimuth angle, $\varphi$, the eigenvectors are complex. If the c-axis is chosen in the x–z plane, that is, $\varphi = 0$ or 180 degrees, the eigenvectors are real.

The program input consists of the static magnetic-field strength, the angles $\theta$ and $\varphi$ specifying the c-axis orientation and the rf magnetic field in phasor form. The program calculates and displays the transition frequencies (in GHz), the associated spin vectors, and the quantity $T_{ij}$ for all the transitions. A sample output follows the program listing.

The user can check the transition frequencies for selected field strength and orientation against the NBS tables. The NBS tables include values for $T_{x'}^{\alpha\beta} = |\langle\alpha|S_{x'}|\beta\rangle|^2$ and $T_{y'}^{\alpha\beta} = |\langle\alpha|S_{y'}|\beta\rangle|^2$. These can be compared against the $T_{ij}$ calculated by the program by entering $H_{rf} = (1,0,0)$ and $H_{rf} = (0,1,0)$, respectively, as program input. Note that the levels in the NBS tables are labeled in the opposite order, with level 1 being the highest and level 4 being the lowest.

In the following subsections, the program is used to analyze or describe past and current DSN masers.

### A. Example 1: S-Band Coaxial Cavity Masers

Our first example of the use of the program will be a comparison of two early 2.36-GHz (S-band) coaxial cavity masers. The first such cavity had the ruby oriented in the coaxial line, as shown in Figs. 2(a) and 2(b).² The static magnetic field was oriented perpendicular to the coaxial line. Its strength was approximately 2500 gauss. The rf magnetic-field lines of constant magnitude are circles surrounding the center conductor in a plane perpendicular to the center conductor, as shown in Fig. 2(c). The ruby c-axis is in a plane perpendicular to the static magnetic field and oriented 30 degrees out of the plane of the rf magnetic field.

With the right-hand x–y–z coordinate system in Fig. 2(a), we set $\varphi = 60$ degrees and $\theta = 90$ degrees. The rf-field lines of constant magnitude form circles in the y–z plane, and the polarization is linear. The

Fig. 2. The first S-band coaxial cavity: (a) a perspective drawing showing the direction of the static magnetic field and the crystal c-axis, (b) a side view, and (c) a top view (a typical rf magnetic field line is also shown).

² R. C. Clauss, personal communication, Jet Propulsion Laboratory, Pasadena, California, February 2002.

interaction of the ruby with the linear rf field depends on the angle $\psi$, shown in Fig. 2(c). We can generate a table of transition probabilities as a function of $\psi$ by changing the relative magnitude of the y- and z-components of the rf magnetic field. Because of the symmetry, we need only cover 1/4 of the circumference of the circle. We choose 10-degree increments.
A word about our notation is in order. We will represent the rf magnetic field in the form $H_{rf} = H_1(a, b, c)$, where $a, b, c$ can be complex and satisfy $|a|^2 + |b|^2 + |c|^2 = 1$. In its most general form, $H_1$ would be $H_1 = he^{j\alpha}$. The actual rf field is given by multiplying $H_{rf}$ by $e^{j\omega t}$ and taking the real part. In our examples, $H_1$ will be chosen equal to one. For example, a right-hand circular polarized wave in the x-y plane would be written as $H_{rf} = (1, -j, 0)$. If the wave is viewed as propagating toward the observer, then if the fingers of the right hand curl in the direction of vector rotation, the thumb will point toward the observer. The linear rf field phasors are listed in Table 1 along with the associated value of $T_{12}$. For the 1-2 transition, the average value of $T_{12}$ per unit rf field strength is $T_{12}/H_1 = 0.623$.
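The phasor convention can be made concrete with a two-line check (a Python sketch): for $H_{rf} = (1, -j, 0)$, the real field $\mathrm{Re}(H_{rf}e^{j\omega t})$ points along $+x$ at $\omega t = 0$ and along $+y$ a quarter period later, a right-hand rotation about $+z$.

```python
import numpy as np

Hrf = np.array([1.0, -1.0j, 0.0])   # right-hand circular polarization in the x-y plane

def real_field(wt):
    """Actual rf field Re{ Hrf * exp(j*w*t) }, evaluated at phase w*t."""
    return np.real(Hrf * np.exp(1j * wt))

print(real_field(0.0))        # points along +x
print(real_field(np.pi / 2))  # a quarter period later: along +y
# x -> y rotation is right-handed about +z, matching the thumb rule in the text
```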
To accurately estimate the ruby absorption, we would have to account for the stronger field near the shorted end of the ruby cavity, as well as the variation of the field strength from the center conductor to the outer conductor. Since these effects are the same for the second maser geometry in this comparison, we will neglect them. The second maser geometry is shown in Figs. 3(a) and 3(b) [7]. Now the static magnetic field is along the center conductor of the coaxial line, and the ruby c-axis is in the plane perpendicular to it. It is also the plane of the rf magnetic field, as seen in Fig. 3(c). For this orientation, we set $\theta = 90$ degrees and $\phi = 0$ degrees. Again we vary $H_{rf}$, at 10-degree increments, around 1/4 of the circumference of the circle in the x-y plane. The transition probabilities are shown in Table 2. For the 1-2 transition, the average value of $T_{12}$ per unit rf field strength is $T_{12}/H_1 = 0.892$. Therefore, the second maser geometry should be significantly better, with a transition probability for the signal transition about 43 percent greater than that of the first geometry.

### B. Example 2: X-band Coupled-Cavity Maser

The next example concerns the behavior of ruby as it might appear in a DSN 8.42-GHz (X-band) coupled-cavity maser. This is shown schematically in Fig. 4. The ruby crystal is shown in a cavity with a signal broadbanding cavity on the left and a pump broadbanding cavity on the right. To the left of the signal broadbanding cavity is a stepped-height pump reject filter. An applied static magnetic

Table 1. First S-band coaxial cavity.

<table><thead><tr><th>H<sub>rf</sub></th><th>T<sub>12</sub></th></tr></thead><tbody><tr><td>(0, 1, 0)</td><td>1.2451</td></tr><tr><td>(0, 0.985, 0.174)</td><td>1.2081</td></tr><tr><td>(0, 0.949, 0.342)</td><td>1.1002</td></tr><tr><td>(0, 0.866, 0.500)</td><td>0.9338</td></tr><tr><td>(0, 0.766, 0.643)</td><td>0.7306</td></tr><tr><td>(0, 0.643, 0.766)</td><td>0.5148</td></tr><tr><td>(0, 0.500, 0.866)</td><td>0.3113</td></tr><tr><td>(0, 0.342, 0.940)</td><td>0.1456</td></tr><tr><td>(0, 0.174, 0.985)</td><td>0.0377</td></tr><tr><td>(0, 0, 1)</td><td>0.0</td></tr><tr><td>—</td><td>0.623<br>(average)</td></tr></tbody></table>

Fig. 3. The second S-band coaxial cavity: (a) a perspective drawing showing the direction of the static magnetic field and the crystal c-axis, (b) a side view, and (c) a top view (a typical rf magnetic field line is also shown).

Table 2. Second S-band coaxial cavity.
<table><thead><tr><th>H<sub>rf</sub></th><th>T<sub>12</sub></th></tr></thead><tbody><tr><td>(1, 0, 0)</td><td>1.5985</td></tr><tr><td>(0.985, 0.174, 0)</td><td>1.5565</td></tr><tr><td>(0.949, 0.342, 0)</td><td>1.4341</td></tr><tr><td>(0.866, 0.500, 0)</td><td>1.2451</td></tr><tr><td>(0.766, 0.643, 0)</td><td>1.0144</td></tr><tr><td>(0.643, 0.766, 0)</td><td>0.7695</td></tr><tr><td>(0.500, 0.866, 0)</td><td>0.5384</td></tr><tr><td>(0.342, 0.940, 0)</td><td>0.3505</td></tr><tr><td>(0.174, 0.985, 0)</td><td>0.2280</td></tr><tr><td>(0, 1, 0)</td><td>0.1851</td></tr><tr><td>—</td><td>0.892<br/>(average)</td></tr></tbody></table>
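The averages quoted for the two geometries follow directly from the tabulated $T_{12}$ values, as does the roughly 43 percent advantage of the second cavity (a small Python check):

```python
# T12 values from Table 1 (first cavity) and Table 2 (second cavity)
t1 = [1.2451, 1.2081, 1.1002, 0.9338, 0.7306, 0.5148, 0.3113, 0.1456, 0.0377, 0.0]
t2 = [1.5985, 1.5565, 1.4341, 1.2451, 1.0144, 0.7695, 0.5384, 0.3505, 0.2280, 0.1851]

avg1 = sum(t1) / len(t1)   # average over the quarter circle of psi values
avg2 = sum(t2) / len(t2)

print(round(avg1, 3), round(avg2, 3), round(avg2 / avg1 - 1, 2))  # 0.623 0.892 0.43
```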

Fig. 4. A perspective view of an X-band coupled-cavity maser. The cavities are drawn for illustrative purposes only; they are not to scale.

field of 4,981 gauss is oriented 90 degrees to the crystal c-axis. The signal transition is chosen between levels 1 and 2 and occurs at 8.421 GHz. The first pump transition is between levels 1 and 3 and occurs at 24.05 GHz. A second pump transition is between levels 3 and 4 and occurs at 19.21 GHz. The spin vectors for these transitions are very important to the maser design.
The spin vector for the signal transition is $\vec{S}_{12} = (-1.0735, 0.65443j, 0)$. Since we have chosen $\varphi = 0$, the c-axis is in the x-direction. Thus, if the rf fields of the signal are linearly polarized, as in the case of the coupled-cavity maser, the interaction with the ruby is stronger if the rf magnetic field is predominantly in the x-direction rather than the y-direction. The value of $T_{12}$ with $H_{rf} = (1, 0, 0)$ is 1.1524. The value of $T_{12}$ with $H_{rf} = (0, 1, 0)$ is 0.4282. Thus, the advantage is a factor of 2.69. Therefore, elongating the cavity in the x-direction will increase the coupling with the rf magnetic field. From this we can also see that rf magnetic fields in the z-direction, along the applied static magnetic field, are ineffective in inducing transitions.
The spin vector indicates that the optimum rf field polarization is elliptical. If an rf field of unit amplitude is linearly polarized in the x-direction, then $T_{12} = 1.1524$. That is the best you can do with a linearly polarized signal. However, if the rf field has the proper elliptical polarization and is of unit amplitude, then $H_{rf} = (0.854, -0.521j, 0)$ and $T_{12} = 1.582$. There also exists an rf field polarization in this plane that does not induce a response. It is $H_{rf} = (0.521, 0.854j, 0)$.
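These numbers follow directly from the spin vector: for a unit-amplitude field, $T_{12} = |\vec{H}_{rf}^* \cdot \vec{S}^{12}|^2$ is maximized when $H_{rf}$ is proportional to $\vec{S}^{12}$ itself (up to an overall phase), giving $T_{12} = |\vec{S}^{12}|^2$, and vanishes for the orthogonal polarization. A small Python check using the values quoted in the text:

```python
import numpy as np

# Spin vector for the 1-2 signal transition quoted in the text
S12 = np.array([-1.0735, 0.65443j, 0.0])

# T(Hrf) = |conj(Hrf) . S12|^2 for a unit-amplitude rf field
T = lambda Hrf: abs(np.conj(Hrf) @ S12) ** 2

print(round(T(np.array([1.0, 0.0, 0.0])), 4))  # 1.1524: best linear polarization
H_opt = S12 / np.linalg.norm(S12)              # ~ (0.854, -0.521j, 0) up to overall phase
print(round(T(H_opt), 3))                      # 1.581 = |S12|^2, optimal ellipse
H_null = np.array([0.521, 0.854j, 0.0])        # orthogonal polarization: no response
print(T(H_null))                               # effectively zero
```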

The spin vector for the first pump transition is $\vec{S}_{13} = (0, 0, 0.4140)$. Thus, a linearly polarized field in the z-direction will be required to stimulate this transition. Therefore, the pump waveguide feeding the ruby cavity must support a 24-GHz mode whose electric field is perpendicular to the applied magnetic field. Finally, the spin vector for the second pump is $\vec{S}_{34} = (-0.7229, 1.0051j, 0)$. It is similar to the signal component, except the roles of the x- and y-directions are reversed. The value of $T_{34}$ with $H_{rf} = (1, 0, 0)$ is 0.5225. The value of $T_{34}$ with $H_{rf} = (0, 1, 0)$ is 1.0102. Now the transition probability is almost twice as strong for the linear rf field polarized in the y-direction as compared to the x-direction.

### C. Example 3: Ka-Band Coupled-Cavity Maser
Our last example will concern the behavior of ruby as it is used in the current DSN 31.8- to 32.3-GHz (Ka-band) coupled-cavity maser. This is shown schematically in Fig. 5. A static magnetic field of 11,881 gauss is applied along the z-direction, and the ruby c-axis is oriented 54.735 degrees to this direction. The signal transition occurs between levels 2 and 3 at frequencies around 32 GHz. The spin vector for this transition is $\vec{S} = (-0.9777, 0.9786j, -0.0424)$. Therefore, for maximum transition probability, the rf magnetic field should be $H_{rf} = (0.707, -0.707j, 0.031)$. This is a circularly polarized

Fig. 5. A perspective view of a Ka-band coupled-cavity maser. The cavities are drawn for illustrative purposes only; they are not to scale.

signal in the x-y plane. For this reason, the orientation of the c-axis in azimuth is not important. The c-axis can lie anywhere on a cone at 54.735 degrees to the applied field without affecting the signal transition probability.

Two pump transitions typically are used for this operating point. The first pump between levels 1 and 3 occurs at 66.25 GHz. The spin vector for this transition is $\vec{S} = (-0.1455, 0.1519j, 0.0990)$. For maximum transition probability, the rf magnetic field should be $\vec{H}_{rf} = (0.6259, -0.6534j, -0.4259)$. This is nearly a circularly polarized signal in the x-y plane, with a significant, but smaller, component in the z-direction. For this reason, this transition normally is pumped with waveguide modes whose electric fields lie along the applied static magnetic field.
The second pump between levels 2 and 4 also occurs at 66.25 GHz. The spin vector for this transition is $\vec{S} = (-0.1289, 0.1183j, 0.0990)$. Therefore, for maximum transition probability, the rf magnetic field should be $\vec{H}_{rf} = (0.6399, -0.5873j, -0.4955)$. This is more elliptical than the first pump, but the difference between $T_{24}$ for an x-polarized rf field and a y-polarized rf field is never more than 17 percent as the c-axis is varied in azimuth. Again, the z-component is smaller than either the x- or y-component. The waveguide modes mentioned above are also used for pumping this transition. It is a fortunate situation that pump energy at the same frequency and in the same waveguide mode is effective in pumping both transitions. This is especially helpful at this operating point where the pump transitions are very weak. If $H_{rf} = (0.7071, 0.7071, 0)$, $T_{13}/T_{23} = 0.023$ and $T_{24}/T_{23} = 0.016$. This is the main reason for having the ruby cavity resonant at both the signal and pump frequencies in the coupled-cavity maser design.
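The quoted pump-to-signal ratios can be reproduced from the spin vectors above (a Python check; $H_{rf} = (0.7071, 0.7071, 0)$ is the linear polarization given in the text):

```python
import numpy as np

# Spin vectors quoted in the text for the Ka-band operating point
S23 = np.array([-0.9777, 0.9786j, -0.0424])   # signal transition
S13 = np.array([-0.1455, 0.1519j,  0.0990])   # first pump
S24 = np.array([-0.1289, 0.1183j,  0.0990])   # second pump

Hrf = np.array([0.7071, 0.7071, 0.0])         # 45-deg linear polarization in the x-y plane
T = lambda S: abs(np.conj(Hrf) @ S) ** 2

print(round(T(S13) / T(S23), 3), round(T(S24) / T(S23), 3))  # 0.023 0.016
```

The tiny ratios make concrete how weak the pump transitions are at this operating point relative to the signal transition.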

## V. Conclusion

A program has been written to calculate the ground-state spectrum of ruby and the transition probability due to an rf magnetic field. This information is used in the design and analysis of masers using ruby as the active material. The program is based on a Hamiltonian in which the z-axis lies along the static magnetic field and the x- and y-axes are chosen to simplify the expressions for the rf magnetic field; the direction of the c-axis is specified by a polar angle and an azimuthal angle. It is written in the MATLAB language and is included in Appendix A for reference purposes. A discussion of some DSN masers using the results of the program is presented.

## References

[1] R. Berwin, *Paramagnetic Energy Levels of the Ground State of Cr<sup>+3</sup> in Al<sub>2</sub>O<sub>3</sub> (Ruby)*, Technical Memorandum 33-440, Jet Propulsion Laboratory, Pasadena, California, January 15, 1970.
[2] T. Chang, D. Foster, and A. H. Kahn, “An Intensity Standard for Electron Paramagnetic Resonance Using Chromium-Doped Corundum (Al<sub>2</sub>O<sub>3</sub>:Cr<sup>3+</sup>),” *Journal of Research of the National Bureau of Standards*, vol. 83, no. 2, pp. 133–164, March–April 1978.

[3] E. O. Schulz-Du Bois, “Paramagnetic Spectra of Substituted Sapphires—Part I: Ruby,” *Bell System Technical Journal*, vol. 38, p. 271, January 1959.

[4] MATLAB, Version 5, The MathWorks, Inc., Natick, Massachusetts, copyright 1984–1998.
[5] A. Abragam and B. Bleaney, *Electron Paramagnetic Resonance of Transition Ions*, New York: Dover Publications, Inc., 1986.
[6] A. E. Siegman, *Microwave Solid State Masers*, New York: McGraw-Hill Book Company, 1964.
[7] R. C. Clauss, "A 2388 Mc Two-Cavity Maser for Planetary Radar," *Microwave Journal*, May 1965.
# Appendix A
## Ruby Energy Level Program and Sample Output
The MATLAB program listing follows. Statements following a “%” are comments. (Notice that MATLAB denotes $\sqrt{-1}$ by $i$.)

```matlab
% an m-file called rubylevels.m to calculate the eigenvalues
% and eigenvectors of the spin hamiltonian for ruby
% it calculates the spin vector and the transition frequencies (in GHz)
% and also the transition probabilities for a given r-f magnetic field
% Hdc is along the z-axis and the c-axis direction is unrestricted

g1=1.9817;     % use the values for g1, g2 and D
g2=1.9819;     % suggested by the National Bureau
D=-3.8076e-17; % of Standards
beta=9.273e-21;

h=4981                    % enter the magnetic field strength
thetad=90.0               % enter the polar angle
phid=0.0                  % enter the azimuthal angle
Hrf=[0.854; -0.521i; 0.0] % enter the r-f field polarization

theta=pi*(thetad/180.0);  % convert polar angle to radians
phi=pi*(phid/180.0);      % convert azimuthal angle to radians

% construct the spin hamiltonian
Sx=(0.5)*[0 1.732 0 0;1.732 0 2 0;0 2 0 1.732;0 0 1.732 0];
Sy=(0.5)*[0 -1.732i 0 0;1.732i 0 -2i 0;0 2i 0 -1.732i;0 0 1.732i 0];
Sz=(0.5)*[3 0 0 0;0 1 0 0;0 0 -1 0;0 0 0 -3];

Sp=Sx+i*Sy; Sm=Sx-i*Sy;
sh1=(g1*(cos(theta))^2+g2*(sin(theta))^2)*beta*h*Sz;
sh2=D*((cos(theta))^2-(0.5)*(sin(theta))^2)*(Sz^2-1.25*eye(4));
sh3=D*sin(theta)*cos(theta)*(0.5)*exp(-i*phi)*(Sz*Sp+Sp*Sz);
sh4=D*sin(theta)*cos(theta)*(0.5)*exp(i*phi)*(Sz*Sm+Sm*Sz);
sh5=D*(0.25)*(sin(theta))^2*(exp(-2*i*phi)*Sp^2+exp(2*i*phi)*Sm^2);
sh=sh1+sh2+sh3+sh4+sh5;

% calculate the eigenvectors and eigenvalues
[evec,eval]=eig(sh);

e1=eval(1,1); e2=eval(2,2); e3=eval(3,3); e4=eval(4,4);

% the eigenvector associated with the first eigenvalue is the first
% column of the matrix evec, the 2nd eigenvector is the 2nd column, etc

v1=evec(:,1); v2=evec(:,2); v3=evec(:,3); v4=evec(:,4);

% order the eigenvalues such that the most negative one is labeled e1
% and the most positive one is labeled e4, carry the eigenvectors
% along with the eigenvalues

if e1>e2
  et=e1; vt=v1;
  e1=e2; v1=v2;
  e2=et; v2=vt;
end

if e1>e3
  et=e1; vt=v1;
  e1=e3; v1=v3;
  e3=et; v3=vt;
end

if e1>e4
  et=e1; vt=v1;
  e1=e4; v1=v4;
  e4=et; v4=vt;
end

if e2>e3
  et=e2; vt=v2;
  e2=e3; v2=v3;
  e3=et; v3=vt;
end

if e2>e4
  et=e2; vt=v2;
  e2=e4; v2=v4;
  e4=et; v4=vt;
end

if e3>e4
  et=e3; vt=v3;
  e3=e4; v3=v4;
  e4=et; v4=vt;
end

% calculate and display the transition frequencies
f12=(e2-e1)/6.626e-18, f13=(e3-e1)/6.626e-18, f14=(e4-e1)/6.626e-18,
f23=(e3-e2)/6.626e-18, f24=(e4-e2)/6.626e-18, f34=(e4-e3)/6.626e-18,

% calculate and display the spin vectors
S12=[v2'*Sx*v1; v2'*Sy*v1; v2'*Sz*v1]
S13=[v3'*Sx*v1; v3'*Sy*v1; v3'*Sz*v1]
S14=[v4'*Sx*v1; v4'*Sy*v1; v4'*Sz*v1]
S23=[v3'*Sx*v2; v3'*Sy*v2; v3'*Sz*v2]
S24=[v4'*Sx*v2; v4'*Sy*v2; v4'*Sz*v2]
S34=[v4'*Sx*v3; v4'*Sy*v3; v4'*Sz*v3]

% display the "transition probabilities" for the rf signal
T12=(Hrf'*S12)*(Hrf'*S12)', T13=(Hrf'*S13)*(Hrf'*S13)',
T14=(Hrf'*S14)*(Hrf'*S14)', T23=(Hrf'*S23)*(Hrf'*S23)',
T24=(Hrf'*S24)*(Hrf'*S24)', T34=(Hrf'*S34)*(Hrf'*S34)'
```

The sample output follows. The user specifies the values of h, thetad, phid, and Hrf. The program determines the frequencies, spin vectors, and transition probabilities. The numbers 1,2,3,4 identify the quantum states, with 1 being the lowest energy state and 4 being the highest.
```text
h = 4981
thetad = 90
phid = 0
Hrf =
   0.8540
        0 - 0.5210i
        0

f12 = 8.4214
f13 = 24.0415
f14 = 43.2512
f23 = 15.6201
f24 = 34.8298
f34 = 19.2097

S12 =
  -1.0735
        0 + 0.6544i
        0
S13 =
        0
        0
   0.4140
S14 =
  -0.0287
        0 + 0.0899i
        0
S23 =
  -0.9078
        0 + 1.0264i
        0
S24 =
        0
        0
   0.2858
S34 =
  -0.7229
        0 + 1.0051i
        0

T12 = 1.5819
T13 = 0
T14 = 0.0051
T23 = 1.7160
T24 = 0
T34 = 1.3018
```
# Appendix B
## Derivation of the Hamiltonian Used in Equation (2)
The reader may be convinced of the equivalence of Eqs. (1) and (2) in the following way. First, Eq. (1) is expressed in spherical coordinates. This gives the result
$$H_s = g_1\beta H \cos\theta S_z + g_2\beta H (\sin\theta \cos\varphi S_x + \sin\theta \sin\varphi S_y) + D \left[S_z^2 - \frac{1}{3}S(S+1)\right] \quad (B-1)$$
Then the coordinate system is rotated three times. First the coordinate system is rotated about the z-axis by an angle $\varphi$ until the static magnetic field is in the $x'-z'$ plane. Then the coordinate system is rotated by an angle $-\theta$ about the y'-axis until the dc magnetic field is along the $z''$-direction. Finally, the coordinate system is rotated about the $z''$-axis by the angle $(\pi - \varphi)$. The rotation matrix relating the unprimed coordinates and the triple-primed coordinates is the product of the three rotation matrices:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \cos \varphi & -\sin \varphi & 0 \\ \sin \varphi & \cos \varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos \theta & 0 & \sin \theta \\ 0 & 1 & 0 \\ -\sin \theta & 0 & \cos \theta \end{bmatrix} \begin{bmatrix} -\cos \varphi & -\sin \varphi & 0 \\ \sin \varphi & -\cos \varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x''' \\ y''' \\ z''' \end{bmatrix}$$
Now we use the rather remarkable fact that the spin matrices transform just like the components of a vector. Thus, the relationship between the unprimed spin operators and the triple-primed spin operators is the same as the above relationship between the coordinates. Thus, we can write
$$\begin{bmatrix} S_x \\ S_y \\ S_z \end{bmatrix} = \begin{bmatrix} -\cos\theta\cos^2\varphi - \sin^2\varphi & -\sin\varphi\cos\varphi\cos\theta + \sin\varphi\cos\varphi & \sin\theta\cos\varphi \\ -\cos\theta\sin\varphi\cos\varphi + \sin\varphi\cos\varphi & -\cos\theta\sin^2\varphi - \cos^2\varphi & \sin\theta\sin\varphi \\ \sin\theta\cos\varphi & \sin\theta\sin\varphi & \cos\theta \end{bmatrix} \begin{bmatrix} S_{x'''} \\ S_{y'''} \\ S_{z'''} \end{bmatrix}$$
Expressing the spin operators $S_x, S_y, S_z$ in Eq. (B-1) in terms of $S_{x'''}$, $S_{y'''}$, $S_{z'''}$ leads to Eq. (2), where the triple primes have been dropped. Equation (2) neglects Zeeman terms involving differences between $g_1$ and $g_2$.
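The equivalence can also be checked numerically: with a common value $g \approx g_1 \approx g_2$, Eq. (1) evaluated in the crystal frame and Eq. (2) evaluated in the field frame must have identical eigenvalues for any orientation, since they describe the same operator in rotated bases. A NumPy sketch, with the angles chosen arbitrarily:

```python
import numpy as np

# Spin-3/2 matrices of Eq. (3)
r3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, r3, 0, 0], [r3, 0, 2, 0], [0, 2, 0, r3], [0, 0, r3, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -r3*1j, 0, 0], [r3*1j, 0, -2j, 0], [0, 2j, 0, -r3*1j], [0, 0, r3*1j, 0]])
Sz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0]).astype(complex)
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy

g, D, beta, hP = 1.9818, -3.8076e-17, 9.273e-21, 6.626e-27
h, theta, phi = 3000.0, np.radians(30.0), np.radians(40.0)
ct, st = np.cos(theta), np.sin(theta)

# Eq. (1): crystal frame (z along the c-axis), dc field at polar angle theta
H1 = g * beta * h * (ct * Sz + st * Sx) + D * (Sz @ Sz - 1.25 * np.eye(4))

# Eq. (2): field frame (z along the dc field), c-axis at (theta, phi)
H2 = (g * beta * h * Sz
      + D * (ct**2 - 0.5 * st**2) * (Sz @ Sz - 1.25 * np.eye(4))
      + D * 0.5 * ct * st * (np.exp(-1j * phi) * (Sz @ Sp + Sp @ Sz)
                             + np.exp(1j * phi) * (Sz @ Sm + Sm @ Sz))
      + D * 0.25 * st**2 * (np.exp(-2j * phi) * Sp @ Sp + np.exp(2j * phi) * Sm @ Sm))

spec1 = np.linalg.eigvalsh(H1) / hP / 1e9   # spectra in GHz, ascending
spec2 = np.linalg.eigvalsh(H2) / hP / 1e9
assert np.allclose(spec1, spec2, atol=1e-6)
print(np.round(spec1, 4))
```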
|
samples_new/texts_merged/3594993.md
ADDED
|
@@ -0,0 +1,309 @@
---PAGE_BREAK---

# On coloring box graphs

Emilie Hogan<sup>a</sup>, Joseph O'Rourke<sup>b</sup>, Cindy Traub<sup>c</sup>, Ellen Veomett<sup>d,*</sup>

<sup>a</sup> Pacific Northwest National Laboratory, United States
<sup>b</sup> Smith College, United States
<sup>c</sup> Southern Illinois University Edwardsville, United States
<sup>d</sup> Saint Mary's College of California, United States

## ARTICLE INFO

**Article history:**
Received 5 November 2013
Received in revised form 6 September 2014
Accepted 13 September 2014
Available online 23 October 2014

**Keywords:**
Graph coloring
Box graph
Chromatic number

## ABSTRACT

We consider the chromatic number of a family of graphs we call box graphs, which arise from a box complex in *n*-space. It is straightforward to show that any box graph in the plane has an admissible coloring with three colors, and that any box graph in *n*-space has an admissible coloring with *n* + 1 colors. We show that for box graphs in *n*-space, if the lengths of the boxes in the corresponding box complex take on no more than two values from the set {1, 2, 3}, then the box graph is 3-colorable, and for some graphs three colors are required. We also show that box graphs in 3-space which do not have cycles of length four (which we call "string complexes") are 3-colorable.

© 2014 Elsevier B.V. All rights reserved.

## 1. Introduction and results
There are many geometrically-defined graphs whose chromatic numbers have been studied. Perhaps the most famous such example is the Four Color Theorem, which states that any planar graph is 4-colorable [1]. Another famous example is the chromatic number of the plane. More specifically, a graph $G = (V, E)$ is defined where $V = \mathbb{R}^2$ and $(x, y) \in E$ precisely when $\|x - y\|_2 = 1$ (where $\| \cdot \|_2$ is the usual Euclidean norm in the plane). Through simple geometric constructions, one can show that $4 \le \chi(G) \le 7$ for this graph, although the precise value is still not known; see [8], for example.

In this article, we consider graphs that arise from box complexes. We first define what a box complex is:

**Definition 1.** An *n*-dimensional box is a set $B \subset \mathbb{R}^n$ that can be defined as

$$B = \{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$

where $a_i < b_i$ for $i = 1, 2, \dots, n$.

An *n*-dimensional *box complex* is a set of finitely many *n*-dimensional boxes $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ such that whenever the intersection $B_i \cap B_j$ of two boxes is nonempty, it is a face (of any dimension) of both $B_i$ and $B_j$ (see Fig. 1).

Now we can define a box graph:

**Definition 2.** An *n*-dimensional *box graph* is a graph defined on an *n*-dimensional box complex. The box graph $G(\mathcal{B}) = (V, E)$ defined on the box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ is the undirected graph whose vertex set is the boxes:

$$V = \{B_1, B_2, \dots, B_m\}$$

* Corresponding author.
E-mail address: erv2@stmarys-ca.edu (E. Veomett).

---PAGE_BREAK---

Fig. 1. Examples in $\mathbb{R}^2$.

Fig. 2. Defining a 2-dimensional box graph.

and whose edge set $E$ contains $(B_i, B_j)$ precisely when $B_i \cap B_j$ is an $(n-1)$-dimensional face of both $B_i$ and $B_j$. In other words, the box graph is the dual graph of the box complex, and the colorings we are considering are in some sense "solid colorings."

When it eases understanding, we may use the terms box complex and box graph interchangeably, and likewise boxes and vertices.

The following proposition shows that, as far as the corresponding box graphs are concerned, we may as well restrict ourselves to box complexes where each of the vertices of the boxes has integer coordinates (and thus all boxes have integer side lengths).

**Proposition 1.** Let $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ be a box complex and let $G(\mathcal{B}) = (V, E)$ be its corresponding box graph. There exists a box complex $\{C_1, C_2, \dots, C_m\}$ in which the vertices of each $C_i$ ($i = 1, 2, \dots, m$) have all integer coordinates, such that the box graph corresponding to the complex $\{C_1, C_2, \dots, C_m\}$ is the same graph $G$.

We will prove Proposition 1 in Section 2.
We ask the following natural question:

**Question 1.** What is the minimum number of colors $k$ that are required so that every $n$-dimensional box graph has an admissible $k$-coloring?

From Fig. 2(c), we can see that three colors may be necessary to color a 2-dimensional box graph. In fact, as we will prove in Section 2, three colors are also sufficient in the plane; this is the case $n = 2$ of the following proposition:

**Proposition 2.** Any box graph in $n$-space has an admissible coloring with $n + 1$ colors.

Our goal is to answer Question 1 in dimension 3, which is still open. In the case where the "boxes" are zonotopes (as opposed to right-angled bricks), sometimes 4 colors are needed [4], and in the case where the "boxes" are touching spheres, the chromatic number is between 5 and 13 [2]. Analogously, for simplicial complexes in $\mathbb{R}^n$, $n+1$ colors suffice [6]. We suspect that any 3-dimensional box graph is 3-colorable, and we can show that this is true for a few families of 3-dimensional box graphs. The following are the main results of this paper:

**Theorem 1.** Let $G$ be an $n$-dimensional box graph such that the lengths of all of the boxes in the corresponding box complex take on no more than two values from the set $\{1, 2, 3\}$. That is, all the side lengths of the boxes are 1 or 2, or all the side lengths are 1 or 3, or all the side lengths are 2 or 3. Then $G$ is 3-colorable.

**Theorem 2.** Let $G$ be a 3-dimensional box graph that has no cycles on four vertices. Then $G$ is 3-colorable.

The rest of this paper is organized as follows: in Section 2 we state and prove some straightforward results on box graphs. We prove Theorem 1 in Section 3 and Theorem 2 in Section 4.

## 2. Straightforward results on box graphs

As promised, we start with proofs of Propositions 1 and 2.

---PAGE_BREAK---

**Proof of Proposition 1.** Suppose $\{B_1, B_2, \dots, B_m\}$ is a box complex in $\mathbb{R}^n$, so that each vertex of each box has $n$ coordinates. Let $x_0, x_1, \dots, x_k$ be the list of all of the distinct first coordinates of all of the vertices of the boxes in the box complex. Order them so that

$$x_0 < x_1 < \cdots < x_k.$$

Now make a new box complex $\{B_1^1, B_2^1, \dots, B_m^1\}$ whose vertices are all the same except in the first coordinate. Specifically, if the first coordinate of a vertex in $B_j$ is $x_i$, then the first coordinate of the corresponding vertex in $B_j^1$ is the integer $i$. Thus, the vertex $(x_i, y_2, y_3, \dots, y_n)$ of $B_j$ becomes the vertex $(i, y_2, y_3, \dots, y_n)$ of $B_j^1$.

Note that each $B_i^1$ is still a box, and this change does not alter the intersection pattern of the boxes. That is, if $B_j \cap B_\ell$ is $d$-dimensional, then so is $B_j^1 \cap B_\ell^1$. (And if $B_j \cap B_\ell$ was empty, then so is $B_j^1 \cap B_\ell^1$.)

We continue this process for the 2nd, 3rd, ..., $n$th coordinates. Finally, we obtain a box complex $\{B_1^n, B_2^n, \dots, B_m^n\}$ with the same intersection pattern as $\{B_1, B_2, \dots, B_m\}$ but with all integer coordinates for the vertices. Thus, the box graph for the complex $\{B_1^n, B_2^n, \dots, B_m^n\}$ is the same as the box graph for the complex $\{B_1, B_2, \dots, B_m\}$. $\square$
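The proof above is the classic coordinate-compression step: along each axis, replace every distinct coordinate value by its rank in sorted order. A minimal sketch (the corner-pair box representation and function names are illustrative, not from the paper):

```python
# Coordinate compression along one axis, mirroring the proof of Proposition 1.
# A box is represented by a pair of corners ((a_1,...,a_n), (b_1,...,b_n)).

def compress_axis(boxes, axis):
    """Replace the axis-th coordinates by their ranks in sorted order."""
    coords = sorted({c[axis] for lo, hi in boxes for c in (lo, hi)})
    rank = {x: i for i, x in enumerate(coords)}
    out = []
    for lo, hi in boxes:
        lo = lo[:axis] + (rank[lo[axis]],) + lo[axis + 1:]
        hi = hi[:axis] + (rank[hi[axis]],) + hi[axis + 1:]
        out.append((lo, hi))
    return out

def integerize(boxes, n):
    # Apply the compression coordinate by coordinate, as in the proof.
    for axis in range(n):
        boxes = compress_axis(boxes, axis)
    return boxes

# Two abutting boxes in the plane keep their adjacency after compression:
boxes = [((0.0, 0.0), (1.5, 1.0)), ((1.5, 0.0), (2.25, 1.0))]
assert integerize(boxes, 2) == [((0, 0), (1, 1)), ((1, 0), (2, 1))]
```

Since only the relative order of coordinates is kept, the intersection pattern, and hence the box graph, is unchanged.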
In order to prove Proposition 2 we first give the definition of *k*-degenerate graphs, and prove the well-known result that *k*-degenerate graphs are $(k+1)$-colorable [5].

**Definition 3.** A graph $G$ is *k*-degenerate if each of its induced subgraphs has a vertex of degree at most $k$.

**Lemma 1.** Every *k*-degenerate graph is $(k+1)$-colorable.

**Proof.** Let $G = (V, E)$ be a $k$-degenerate graph. We will proceed by induction on $|V|$, the size of the vertex set. If $|V| = 1$ then certainly $G$ is $(k+1)$-colorable. Now, suppose that $|V| = m \ge 2$, and assume as the induction hypothesis that any $k$-degenerate graph on $m-1$ vertices is $(k+1)$-colorable.

Then, since $G$ is $k$-degenerate, we know there exists a vertex $v \in V$ with $\deg(v) \le k$. Consider the graph $G-v$, formed by removing vertex $v$ and all of its incident edges, which has $m-1$ vertices. This graph must be $k$-degenerate, since each of its induced subgraphs is also an induced subgraph of $G$. Therefore, by the induction hypothesis we can color $G-v$ using $k+1$ colors. Now, when $v$ and its edges are added back into $G$ there must be at least one color available for $v$, since $v$ has at most $k$ neighbors and there are $k+1$ colors in total. Therefore, by induction, any $k$-degenerate graph is $(k+1)$-colorable. $\square$

We now prove Proposition 2 by showing that any box graph is $n$-degenerate.

**Proof of Proposition 2.** Let $G = (V, E)$ be a box graph, so that each $v \in V$ is a box in the corresponding box complex. We will label each box in $V$ by its "right, forward, top" vertex. More precisely, each box can be defined as

$$\{x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$$

where $a_i < b_i$ for $i = 1, 2, \dots, n$. We then label this box with $(b_1, b_2, \dots, b_n)$.

Now find a "right, forward, top" box in the graph. That is, find a vertex $u \in V$ with corresponding label $(u_1, u_2, \dots, u_n)$ such that for any other $v \in V$ with label $(v_1, v_2, \dots, v_n)$ and $(u, v) \in E$, we have

$$u_1 \ge v_1, \quad u_2 \ge v_2, \quad \dots, \quad u_n \ge v_n.$$

(Such a box is guaranteed to exist because $G$ is finite.) Note that, by our choice of $u$, $u$ has at most $n$ neighbors.

Since we began with an arbitrary box graph, and every induced subgraph of $G$ is itself a box graph (corresponding to a sub-collection of the boxes), every induced subgraph of $G$ has a vertex of degree at most $n$. Therefore, any box graph corresponding to a box complex in $\mathbb{R}^n$ is $n$-degenerate, and by Lemma 1 is $(n+1)$-colorable. $\square$
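The induction in Lemma 1 is effectively the standard greedy algorithm on a degeneracy ordering: repeatedly delete a minimum-degree vertex, then color the vertices in reverse deletion order, so each vertex sees at most $k$ already-colored neighbors. A small sketch (the adjacency-dict graph format is an illustrative choice, not from the paper):

```python
def degeneracy_coloring(adj):
    """Greedily color a graph given as {vertex: set(neighbors)}.

    Returns {vertex: color}; uses at most k+1 colors on a k-degenerate graph.
    """
    remaining = {v: set(ns) for v, ns in adj.items()}
    order = []
    # Build a degeneracy ordering by peeling off minimum-degree vertices.
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    color = {}
    # Color in reverse deletion order: v has at most k colored neighbors.
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle is 2-degenerate, so three colors suffice (and are needed):
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
col = degeneracy_coloring(triangle)
assert all(col[u] != col[v] for u in triangle for v in triangle[u])
```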
We note that the above argument is the $n$-dimensional analogue of the "elbow" argument in [7].

We state the following result as a reminder to the reader:

**Proposition 3.** Let $G = (V, E)$ be a graph. Then the following are equivalent:

1. The graph $G$ contains no odd cycle.
2. The graph $G$ is bipartite.
3. The graph $G$ is 2-colorable.

**Proof.** Proposition 3 is a well-known introductory graph theory result. See Section I.2 of [3], for example. $\square$

The following proposition shows that if a box graph cannot be colored with just 2 colors, it must have some boxes with side lengths that are different from each other.

**Proposition 4.** Suppose a box complex only contains boxes that are cubes; that is, boxes with all side lengths equal. Then the corresponding box graph is 2-colorable.

**Proof.** Suppose a box complex contains only cubes, and let $G = (V, E)$ be the corresponding box graph. Without loss of generality, we may assume that $G$ is connected. Since adjacent cubes share a full $(n-1)$-dimensional face, adjacent cubes have equal side lengths, and so by connectivity all of the cubes in the complex are the same size; let the side length of the cubes be $k$. By the proof of Proposition 1, we can assume that $k \in \mathbb{N}$ and the coordinates of all the vertices of the boxes in the box complex are integer multiples of $k$.

---PAGE_BREAK---

Just as we did in the proof of Proposition 2, label each $v \in V$ with the "right, forward, top" vertex. Let $(v_1, v_2, \ldots, v_n)$ be the label for vertex $v$. Color vertex $v$ with color

$$ \frac{1}{k} (v_1 + v_2 + \cdots + v_n) \pmod{2}. $$

Note that exactly two colors are used. If two vertices are adjacent, $(u, v) \in E$, then their corresponding labels $(u_1, u_2, \ldots, u_n)$ and $(v_1, v_2, \ldots, v_n)$ must agree in every coordinate except one, in which they differ by $k$. That is, there exists $i \in \{1, 2, \ldots, n\}$ such that

$$ \begin{aligned} u_j &= v_j & \text{if } j \in \{1, 2, \ldots, n\} \text{ and } j \neq i, \\ u_i &= v_i \pm k. \end{aligned} $$

Thus, if two vertices are adjacent then their colors must be different, so this is a valid 2-coloring of $G$. $\square$
In [4] it was proved that any box complex in $\mathbb{R}^3$ that is homeomorphic to a ball is 2-colorable.

## 3. Proof of Theorem 1

We shall prove Theorem 1 in parts via a few lemmas. Here is the first of our lemmas:

**Lemma 2.** Suppose that each side length of each box in a box complex is a positive integer which is congruent to either 1 or 2 mod 3. Then the corresponding box graph is 3-colorable.

**Proof.** Consider an $n$-dimensional box complex $\{B_1, B_2, \ldots, B_m\}$, and label each box again by its "right, forward, top" vertex coordinates, $(b_1, b_2, \ldots, b_n)$. Now, color each box by $(b_1 + b_2 + \cdots + b_n)$ mod 3. We claim that this is a valid coloring.

If two boxes $B_i$, $B_j$ are adjacent, then their right, forward, top vertices differ in exactly one coordinate. Let $(b_{i,1}, b_{i,2}, \ldots, b_{i,n})$ be the label for $B_i$ and $(b_{j,1}, b_{j,2}, \ldots, b_{j,n})$ the label for $B_j$. Then, without loss of generality, $b_{i,1} \neq b_{j,1}$ and $b_{i,k} = b_{j,k}$ for $k = 2, 3, \ldots, n$. These two boxes have the same color if and only if $b_{i,1} - b_{j,1} \equiv 0 \pmod{3}$. However, $|b_{i,1} - b_{j,1}|$ is a side length of one of these boxes, and by assumption no side length is a multiple of 3. Therefore neighboring boxes cannot have the same color, so this 3-coloring is admissible. $\square$
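The coloring from Lemma 2 is easy to state in code: color each box by the coordinate sum of its "right, forward, top" corner, mod 3. A minimal sketch (the example boxes are illustrative, not from the paper):

```python
# Lemma 2's coloring: adjacent boxes differ in exactly one label coordinate,
# by a side length that is 1 or 2 mod 3, so their colors always differ.

def lemma2_color(label):
    """Color a box given its 'right, forward, top' corner label."""
    return sum(label) % 3

# Two plane boxes sharing the face x = 2 (x-side lengths 2 and 1):
b1 = (2, 1)   # label of the box [0,2] x [0,1]
b2 = (3, 1)   # label of the box [2,3] x [0,1]
assert lemma2_color(b1) != lemma2_color(b2)
```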
The following corollary follows directly from Lemma 2:

**Corollary 1.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 2. Then the corresponding box graph is 3-colorable.

The next in our series of lemmas:

**Lemma 3.** Suppose that each side length of each box in a box complex is an odd integer. Then the corresponding box graph is 2-colorable.

**Proof.** We will prove this by showing that there can be no odd cycles in the graph (see Proposition 3).

Assume we have a box complex $\mathcal{B} = \{B_1, \ldots, B_k\}$. Consider any cycle within the corresponding box graph. Label the vertices of this cycle by the "right, forward, top" corner of the corresponding box, and label each of the edges of the cycle with the distance between those corners, mod 2. In other words, if the neighboring vertices are labeled (1, 1, ..., 1) and (4, 1, ..., 1), then we label the edge with 3 mod 2 = 1. Moreover, we choose a direction of travel around the cycle and sign the label positive if we move along that edge in the positive coordinate direction, and negative if we move along the edge in the negative direction. Thus, for example, if we move from vertex (1, 1, ..., 1) to (4, 1, ..., 1), the edge is labeled with 1, since moving from 1 to 4 is in the positive direction in the first coordinate, whereas if we move from vertex (4, 1, ..., 1) to (1, 1, ..., 1), the edge is labeled with $-1$.

We now claim that the sum of the signed labels along the cycle must be 0 mod 2. This is because, in each coordinate, the signed distances traveled around the cycle return to the starting corner and therefore sum to 0; reducing each distance mod 2 preserves this, so the labels also sum to 0 mod 2.

Finally, we note that, by assumption, all of the lengths are odd. Thus, all edge labels must be either 1 or $-1$. Since we have a list of edges labeled 1 or $-1$ whose sum is 0 mod 2, there must be an even number of edges in the cycle. $\square$

The following corollary follows directly from Lemma 3:

**Corollary 2.** Suppose a box complex in $\mathbb{R}^n$ has boxes with side lengths only equal to 1 or 3. Then the corresponding box graph is 3-colorable.

The proof for Theorem 1 when blocks have dimensions 2 or 3, given in the remainder of this section, relies on placing a partial order on the box graph corresponding to a given box complex. The elements of the partially ordered set (poset) are the vertices of the box graph, i.e., the individual boxes that comprise the box complex. As before, we label box $\{x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n : a_i \le x_i \le b_i\}$ by its "right, forward, top" vertex coordinates, $(b_1, b_2, \ldots, b_n)$. The order relation for this poset is induced by the following cover relation: box $B_i$ with label $(b_1, b_2, \ldots, b_n)$ covers box $B_j$ with label

---PAGE_BREAK---

Fig. 3. All edges above the ones drawn do not change in length after $T$ is applied.

$(c_1, c_2, \dots, c_n)$ if and only if the two boxes are adjacent and $\sum_{k=1}^{n} b_k \ge \sum_{k=1}^{n} c_k$. Since these adjacent boxes must share an $(n-1)$-dimensional face, their labels differ in exactly one coordinate, by a difference equal to the side length of box $B_i$ orthogonal to the shared face $B_i \cap B_j$.

We note further that the sum $r(B_i) = \sum_{k=1}^{n} b_k$ of the entries of the label of a given box is a rank function for this poset. We will use the rank function and the poset structure to describe valid colorings of the box graph. This technique will consider an initial drawing of the poset (and subsequent re-drawings) with all nodes at integer heights. We then refer to the *length* of an edge in the poset as the positive vertical distance between its endpoints.
Here is the last of the lemmas that we will need for Theorem 1:

**Lemma 4.** Suppose a box complex has boxes with side lengths only equal to 2 or 3. Then the corresponding box graph is 3-colorable.

**Proof.** Consider now the case in which all dimensions of the boxes in a box complex $\mathcal{B} = \{B_1, B_2, \dots, B_m\}$ are 2 or 3. We produce the associated poset $\mathcal{P}$ described above, and make an initial drawing of $\mathcal{P}$ with nodes at heights corresponding to their ranks. Note that this implies that if two boxes $B_i$ and $B_j$ which are adjacent in the box graph are drawn with heights $h_i$ and $h_j$ respectively, then $r(B_i) - r(B_j) = h_i - h_j$, and $h_i - h_j$ is either 2 or 3 if $h_i > h_j$. In other words, all lengths of the edges in the poset are either 2 or 3. Without loss of generality, we can make this drawing so that all rank-minimal vertices have height $h = 0$. We now describe how to redraw the poset $\mathcal{P}$ in such a way that all adjacencies and cover relations are preserved, but all edges have lengths equivalent to 1 or 2 mod 3.

We now consider the lengths of edges in the poset, working our way in order of increasing height $h$ of the terminal endpoints. Since the first nodes occur on the line $h=0$ and all edges have length 2 or 3, no edges terminate on $h=1$, and edges that terminate on $h=2$ have length 2, which is among the desired values. Edges terminating on $h=3$ or above may have length 2 or length 3. Let $h_i$ denote the height of vertex $B_i$ in the initial drawing of the poset. We perform the transformation $T$ below on the drawing of the poset:

$$ T(h_i) = \begin{cases} h_i & \text{if } h_i \le 2, \\ h_i + 2 & \text{if } h_i \ge 3. \end{cases} $$

Note that $T$ has no effect on the length of edges terminating at or below $h=2$, and no effect on the length of edges commencing at or above $h=3$. For edges that include the interval $[2, 3]$, two units are added to their length. In the new drawing of the poset, no edges terminate on the lines $h=3$ or $h=4$. Edges terminating on $h=5$ were either originally of length 3 commencing from $h=0$ or of length 2 commencing at $h=1$. The former now have length 5, while the latter now have length 4. In either case, edges terminating on $h=5$ have lengths equivalent to 1 or 2 mod 3. A similar argument shows that edges in the revised drawing that terminate on $h=6$ or $h=7$ are of length 2, 4, or 5. (See Fig. 3.)

Any edges terminating on $h$-values of 8 or higher were not affected by the first stretch, and thus may still have length 3. Continue the stretching/redrawing procedure as before, extending the interval $[7, 8]$ by two units and redrawing the poset. This procedure only changes the lengths of edges which include the interval $[7, 8]$, so in particular it does not change the lengths of any previously adjusted edges. Since our complex is finite, only finitely many re-drawings are needed to draw the poset with all edge lengths equivalent to 1 or 2 mod 3. At that point, the nodes can be colored by height using the argument from Lemma 2. $\square$
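Each redrawing step can be phrased as a single height map: every node strictly above the cut line moves up by two units, so only edges spanning the cut get longer. A minimal sketch (the vertex names and dict representation are illustrative, not from the paper):

```python
# One stretching step from the proof of Lemma 4.

def stretch(heights, cut):
    """Shift every node strictly above `cut` up by 2 units."""
    return {v: h + 2 if h > cut else h for v, h in heights.items()}

# Initial drawing: a chain a -> b -> c with edge lengths 3 and 2.
heights = {"a": 0, "b": 3, "c": 5}
heights = stretch(heights, 2)   # the transformation T (cut between h=2 and h=3)
assert heights == {"a": 0, "b": 5, "c": 7}
# Edge a-b now has length 5 and b-c still has length 2:
# both lengths are congruent to 1 or 2 mod 3, as the proof requires.
```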
We can now finally prove Theorem 1:

**Proof of Theorem 1.** This is a direct consequence of Corollaries 1 and 2 and Lemma 4. $\square$

---PAGE_BREAK---

**Fig. 4.** This 2 × 2 pattern (a 4-cycle in the dual) is forbidden as part of a string complex.

**Fig. 5.** An example of a string complex.

## 4. Proof of Theorem 2

First, a couple of definitions:

**Definition 4.** A *string complex* is any box complex in $\mathbb{R}^3$ that does not contain the 2 × 2 pattern of boxes shown in Fig. 4. The dual of the forbidden pattern is a 4-cycle, which is the shortest cycle possible in a box graph. In other words, a string complex is a 3-dimensional box complex whose corresponding box graph has no 4-cycle (see Fig. 5).

We use the term "string complex" because, without the 2 × 2 pattern in Fig. 4, the box complex is forced to have lots of "holes" and be "stringy."

**Definition 5.** A 3-dimensional box complex $\{B_1, B_2, B_3, \dots, B_m\}$ is *reducible* to the 3-dimensional box complex $\{A_1, A_2, \dots, A_\ell\}$ ($\ell \le m$) if one can sequentially remove boxes of degree $\le 2$ from the complex $\{B_1, B_2, \dots, B_m\}$ in order to obtain the complex $\{A_1, A_2, \dots, A_\ell\}$. More specifically, there exists an ordering $B_1, B_2, \dots, B_m$ such that

$$B_i = A_i \quad \text{for } i = 1, 2, \dots, \ell$$

and for $j = 0, 1, 2, \dots, m - \ell - 1$, the box $B_{m-j}$ has degree $\le 2$ in the box complex

$$\{B_1, B_2, \dots, B_{m-j}\}.$$

A box complex is *irreducible* if every vertex is of degree $\ge 3$.

Note that a complex may be reducible to a smaller complex which is itself irreducible.

The following lemma is analogous to the tools we used in the proof of Proposition 2:

**Lemma 5.** If a 3-dimensional box complex is reducible to the empty complex, then its corresponding box graph is 3-colorable.

**Proof.** We proceed by induction on $m$, the number of boxes in the box complex. Certainly if $m=1$, the box graph is 3-colorable. Suppose that $m \ge 2$, and that for any 3-dimensional box complex on $m-1$ boxes which is reducible to the empty complex, the corresponding box graph is 3-colorable. Suppose that the box complex $\{B_1, B_2, \dots, B_m\}$ is reducible to the empty complex. That is, for $i = 1, 2, \dots, m$, the box $B_i$ has degree $\le 2$ in the complex

$$\{B_1, B_2, \dots, B_i\}.$$

Note that the box complex $\{B_1, B_2, \dots, B_{m-1}\}$ is also reducible to the empty complex and has $m-1$ boxes in it. Thus, by our inductive assumption, the corresponding graph is 3-colorable. Now, because $B_m$ has degree $\le 2$ in the box complex $\{B_1, B_2, \dots, B_m\}$, we can color $B_m$ with a color different from the colors of its at most two neighbors. This proves the lemma. $\square$
---PAGE_BREAK---

**Fig. 6.** $b_0$ is the topmost, leftmost box in the top layer $\mathcal{T}$.

By Lemma 5, Theorem 2 is a direct corollary of the following theorem and its subsequent corollary:

**Theorem 3.** Every string complex is reducible.

**Proof.** Assume the contrary: let $\mathcal{S} = \{S_1, S_2, \dots, S_m\}$ be an irreducible string complex. We will show that irreducibility implies the complex must contain a 2 × 2 pattern of boxes, which contradicts the assumption that the complex is a string complex.

Let $T_1, T_2, \dots, T_\ell$ be the top layer of boxes in $\mathcal{S}$; say the top faces lie in a plane parallel to the $xy$-plane, extreme in the $+z$ direction. We first claim that every box in $T_1, T_2, \dots, T_\ell$ must have degree $\ge 2$ within the complex $\mathcal{T} = \{T_1, T_2, \dots, T_\ell\}$. Suppose otherwise; that is, suppose there is a box $T_i$ with degree $\le 1$ within the box complex $\mathcal{T}$. Then $T_i$ can have degree at most 2 in the complex $\mathcal{S}$, by joining to a box beneath it. But every box in $\mathcal{S}$ must have degree $\ge 3$, because the complex $\mathcal{S}$ is irreducible. Thus, each $T_i$, $i = 1, 2, \dots, \ell$, indeed has degree $\ge 2$ in the complex $\mathcal{T}$.

Now we look at an extreme corner box of $T_1, T_2, \dots, T_\ell$. Specifically, let $b_0$ be backmost (extreme in the $+y$ direction), and among the backmost boxes of $\mathcal{T}$, leftmost (extreme in the $-x$ direction). So $b_0$ is a type of "upper left corner". Because it is extreme in two directions, two of its sides in $\mathcal{T}$ are exposed, so it has degree at most 2 in $\mathcal{T}$; combined with the claim above, $b_0$ has degree exactly 2 in $\mathcal{T}$. Because we assumed $\mathcal{S}$ is irreducible, $b_0$ (and indeed every box of $\mathcal{S}$) must have degree $\ge 3$. So $b_0$ must be adjacent to a box $b'_0$ beneath it (beneath in the $z$-direction). See Fig. 6.

Let $b_1$ and $b_2$ be the boxes adjacent to $b_0$ in $\mathcal{T}$, with $b_1$ adjacent to $b_0$ in the $x$-direction as in the figure. Again, by our previous arguments, $b_1$ must have degree $\ge 2$ in $\mathcal{T}$. It is already adjacent to $b_0$ to its left, and it cannot be adjacent to a box behind it, because it is backmost. So it must be adjacent to one or both of the boxes labeled $b_3$ and $b_4$ in the figure.

However, $b_1$ cannot be adjacent to $b_3$, for then $\{b_0, b_1, b_2, b_3\}$ forms a 2 × 2 pattern, contradicting the assumption that $\mathcal{S}$ is a string complex. Therefore $b_1$ must be adjacent to $b_4$ in Fig. 6. Now $b_1$ has degree exactly 2 in $\mathcal{T}$. Because it must have degree $\ge 3$ for $\mathcal{S}$ to be irreducible, it must be adjacent to a box $b'_1$ underneath. But now $\{b_0, b_1, b'_0, b'_1\}$ forms a 2 × 2 pattern, again contradicting the assumption that $\mathcal{S}$ is a string complex.

We have now exhausted all possibilities, each leading to a contradiction. So the assumption that $\mathcal{S}$ is irreducible is false, and $\mathcal{S}$ must be reducible. $\square$

**Corollary 3.** Every string complex can be reduced to the empty complex.

**Proof.** Let $\mathcal{S}$ be a string complex. It cannot be irreducible by Theorem 3, and so it must have a box $b$ of degree $\le 2$. Let $\mathcal{S}_1 = \mathcal{S} \setminus b$ be the complex with $b$ removed. We claim that $\mathcal{S}_1$ is again a string complex: the forbidden 2 × 2 pattern cannot be created by the removal of a box. Therefore, applying Theorem 3 again, $\mathcal{S}_1$ is reducible. Continuing in this manner, we can reduce $\mathcal{S}$ to the empty complex. $\square$
|
| 283 |
+
|
| 284 |
+
**5. Conclusion**
|
| 285 |
+
|
| 286 |
+
That box complexes in $\mathbb{R}^2$ sometimes need 3 colors is a straightforward observation, but whether any box complex in $\mathbb{R}^3$ might need 4 colors is an open question. Although it is natural to expect that the chromatic number might be $n+1$ for boxes in $\mathbb{R}^n$ as it is for simplices, we in fact have no example that requires more than 3 colors for any $n \ge 3$.
|
| 287 |
+
|
| 288 |
+
**Acknowledgments**
|
| 289 |
+
|
| 290 |
+
We thank the participants of the 2012 AMS Mathematics Research Institute for stimulating discussions, and we thank the referees for their insightful comments. The proof of Theorem 2 was developed in collaboration with Smith students Lily Du, Jessica Lord, Micaela Mendlow, Emily Merrill, Viktoria Pardey, Rawia Salih, and Stephanie Wang. The first, third and last authors were supported by an AMS Mathematics Research Communities grant.
|
| 291 |
+
---PAGE_BREAK---
|
| 292 |
+
|
| 293 |
+
References
|
| 294 |
+
|
| 295 |
+
[1] K. Appel, W. Haken, Every planar map is four colorable, Bull. Amer. Math. Soc. 82 (5) (1976) 711-712.
|
| 296 |
+
|
| 297 |
+
[2] Bhaskar Bagchi, Basudeb Datta, Higher-dimensional analogues of the map coloring problem, Amer. Math. Monthly 120 (8) (2013) 733–737.
[3] Béla Bollobás, Modern Graph Theory, in: Graduate Texts in Mathematics, vol. 184, Springer-Verlag, New York, 1998.
[4] Suzanne Gallagher, Joseph O'Rourke, Coloring objects built from bricks, in: Proc. 15th Canad. Conf. Comput. Geom., 2003, pp. 56–59.
[5] Alexandr V. Kostochka, On almost (k - 1)-degenerate (k + 1)-chromatic graphs and hypergraphs, Discrete Math. 313 (4) (2013) 366–374.
[6] Joseph O'Rourke, A note on solid coloring of pure simplicial complexes, December 2010, arXiv:1012.4017 [cs.DM].
[7] Tom Sibley, Stan Wagon, Rhombic Penrose tilings can be 3-colored, Amer. Math. Monthly 107 (3) (2000) 251–253.
[8] Alexander Soifer, Chromatic number of the plane & its relatives. I. The problem & its history, Geombinatorics 12 (3) (2003) 131–148.
samples_new/texts_merged/3764397.md
ADDED
@@ -0,0 +1,278 @@
---PAGE_BREAK---
Early Collision and Fragmentation Detection of Space Objects without Orbit Determination

Lyndy E. Axon*

This paper demonstrates that, using the hypothesized constraints of the admissible regions, it is possible to determine whether a combination of new uncorrelated debris objects has a common origin that also intersects with a known catalog object orbit, thus indicating that a collision or fragmentation has occurred. Admissible region methods are used to bound the feasible orbit solutions of multiple observations using constraints on energy and radius of periapsis, propagating them to a common epoch in the past, and using sequential quadratic programming optimization to find a set of solution states that minimizes the Euclidean distance between the observations at that time. If this set of solutions intersects with a catalog object orbit, then that object is the probable source of the debris objects. The proposed method is demonstrated on an example of a low-Earth-orbit observation.
## I. Introduction

A problem of constant concern for the future of space operations, especially as massive thousand-satellite constellations are in the design phase, is the tracking, orbit determination, and cataloging of all space objects in orbit around Earth. The U.S. Air Force Space Command utilizes the Space Surveillance Network (SSN) to make approximately 80,000 daily observations in order to track an estimated population of over 300,000 objects with a diameter over 1 cm, 17,000 known catalog objects greater than 10 cm in diameter, and 1300 active satellites.¹,²,³ In over 50 years of space missions, over 5000 satellites have gone into orbit, of which fewer than 1300 are still operational today.⁴ Many of the remaining satellites have deorbited successfully or were put into designated storage orbits prior to end-of-life; however, a large number of them remain dormant, orbiting the Earth.⁵ In addition to defunct satellites, debris from collisions, fragmentations, and launches litters the operational orbit environments from LEO to GEO. Not all of the SSN's daily observations, or uncorrelated tracks (UCTs), can be used to create actionable information.

Extracting actionable information from an initial UCT is not a simple task, for with a single UCT it is not possible to uniquely identify the state of the object, or to judge how useful it would be to immediately prioritize additional observations.³ On a daily basis, thousands of observations of space objects from the SSN take place over short time periods and do not possess enough geometric diversity in the observation data to initiate a well-posed classical initial orbit determination (IOD) problem, such as angles-only IOD. Traditional orbit determination methods rely on the curvature of the measurements in order to produce a state estimate; however, measurements obtained from a short observation or a very short sequence of observations appear nearly linear, and traditional methods fail as the observation time decreases.⁶ Optical sensors measure state information as either a series of angle measurements over time or from streaks formed during a single observation; these angular measurements form a tracklet, but the range and range-rate of the space object (SO) are not observable. Therefore, the SO state is underdetermined, and for any given tracklet a continuum of range and range-rate solutions is possible, which defines the admissible region for the observation.⁷

In an operational environment, when UCTs cannot be correlated with known objects in the Space Object Catalog (SOC), operators must have a method to quickly determine if a potential threat exists. Extreme examples of potential threats include a decreased capability due to a breakup of an asset, or a debris field created by a collision. These debris objects must have had an origin, and it is currently computationally difficult and time consuming to solve this problem with real-time accuracy; as a result, collisions and fragmentations of smaller space objects have occurred. To accurately correlate new UCTs with a known
*Graduate Researcher, Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, 270 First Dr. Atlanta, GA 30313.
---PAGE_BREAK---
catalog object's orbit as an origin, observations spanning multiple orbits are required for LEO cases, and hours of continuous tracking are required for GEO cases. In this situation, it is more efficient to take a collection of UCTs and propagate them back over a designated period of time to determine if any of the possible states shared the same position at the same epoch, which would indicate that the observed UCTs were disparate debris from a known catalog object. Using admissible regions to initiate this approach allows the tasks of initial orbit determination and tracking to be foregone, which allows for faster actionable information. This would allow operators to track incoming UCTs and assign them as fragments or debris from a past event with a tracked catalog object, and allow for tasking of Space Surveillance Network assets to observe the catalog object as well as characterize the current state and future risks that these debris objects may pose.
Admissible region ($\mathcal{R}$) methods constrain undetermined states using a priori constraint hypotheses, and have been proposed to support data association and track-initiation tasks. Many have extended the applicability of AR methods to space situational awareness (SSA) since Milani et al. first proposed applying these methods to the too-short arc (TSA) problem in asteroid detection.⁸ The AR approach has been applied by Tommei et al. to SO detection and discrimination using radar and optical measurements.⁹ Optimization methods to identify a best-fitting orbit solution are proposed by Siminski et al.¹⁰ Existing admissible region methods can be used by discretizing the admissible region and considering the solutions at discrete points, which allows for a particle filter approach.¹ Additionally, an optimization scheme can be used to identify the best-fitting orbits within an admissible region, eliminating the need to discretize the whole region.¹⁰ Fujimoto and Scheeres show that observations can be associated by applying Bayes' rule to an admissible region generated from two epochs, where a nonzero result indicates that the observations are correlated.¹¹ In addition, a solution technique for correlating multiple optical observations by computing the overlap between their admissible regions, as well as using highly constrained probability distributions in Poincaré orbit element space, has been proposed by Fujimoto and Scheeres.¹² Worthy et al. have developed an observation association method which uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions.¹³ A limitation of these methods using the intersection of the $\mathcal{R}$ volumes is that a feasible orbit can only be constructed if the observations are of the same object; otherwise these iterative solution methods will fail.

The methodology proposed in this paper seeks, given multiple new debris objects that cannot be associated with any known catalog object, to determine whether a collision or fragmentation event has occurred, and from what origin, in near real-time as new UCTs become available.
This paper proposes a methodology for applying AR methods to bound the feasible orbit solutions of multiple observations using constraints on energy and radius of periapsis, propagating them to a common epoch in the past, and using sequential quadratic programming optimization to find a set of solution states that minimizes the Euclidean distance between the observations at that time. This numerical zero-finding approach demonstrates that, given two uncorrelated observations and their corresponding admissible regions, a line of feasible solutions exists that minimizes the distance between the objects. In summary, this paper demonstrates that using the hypothesized constraints of the admissible regions it is possible to determine if a combination of new uncorrelated debris objects has a common origin that also intersects with a known catalog object orbit, thus indicating a collision or fragmentation has occurred.
## II. Approach and Methodology
The goal of this methodology is to detect collisions and fragmentations by observing disparate debris without requiring the computational and time burden of using orbit determination. This approach can be used for a variety of orbit types and observation lengths. Given two uncorrelated observations at two different times, $t_1$ and $t_2$, the proposed method will determine if a common origin exists for these objects at a selected epoch $t_0$. Figure 1 shows the orbital path in $\mathbb{R}^6$ orbit element space of a known catalog object as a function of time, until at some $t_0$ a break-up event occurs that results in a discrete number of debris objects. The observations at $t_1$ and $t_2$ are each of different debris from what is hypothesized to be a common origin.
---PAGE_BREAK---
Figure 1. Catalog Object Break-up at a given Epoch as a function of time
Given independent observations of multiple debris objects, a continuum of range and range-rate combinations defines the admissible region. These range and range-rate solutions make up the undetermined portion of a potential full state; each full state (determined, or observable, information combined with the unobservable) corresponds to a given position and velocity solution. These solutions can be propagated back to an arbitrary estimated epoch $t_0$, at which a solution manifold can be constructed by using sequential quadratic programming and selected constraint criteria to minimize the Euclidean distance between the positions of the two observed objects at $t_0$. The solution manifold represents a line of possible common origins through $\mathbb{R}^6$; if it intersects with the catalog object orbit, then the observed objects may have spawned from a break-up event involving that known object. Figure 2 is a three-dimensional illustration of the previous figure at a particular time. Notice in this figure that the solution manifold crosses the orbit of the catalog object at the hypothesized epoch $t_0$.
Figure 2. Catalog Object Break-Up and Observation of Debris Objects from Ground Station
Optical measurements generate angles and angle rates of tracked objects using a streak or a sequence of angle measurements of right ascension, α, and declination, δ. The parameters associated with optical measurements include the observer position and velocity, **o** and $\dot{\textbf{o}}$, respectively, as well as the times at which the observations are made. Using this information, the position, **r**, and velocity, **v**, of the object are given by
$$ \mathbf{r} = \mathbf{o} + \rho \hat{\mathbf{l}} \qquad (1) $$
---PAGE_BREAK---
$$ \mathbf{v} = \dot{\mathbf{o}} + \dot{\rho}\hat{\mathbf{l}} + \rho\dot{\alpha}\hat{\mathbf{l}}_{\alpha} + \rho\dot{\delta}\hat{\mathbf{l}}_{\delta} \qquad (2) $$

where $\rho$ is the range to the target, $\dot{\rho}$ is the range-rate, and $\hat{\mathbf{l}}$, $\hat{\mathbf{l}}_{\alpha}$, and $\hat{\mathbf{l}}_{\delta}$ are given by
$$ \hat{\mathbf{l}} = \begin{bmatrix} \cos\alpha\cos\delta \\ \sin\alpha\cos\delta \\ \sin\delta \end{bmatrix} \qquad (3) $$

$$ \hat{\mathbf{l}}_{\alpha} = \begin{bmatrix} -\sin\alpha\cos\delta \\ \cos\alpha\cos\delta \\ 0 \end{bmatrix} \qquad (4) $$

$$ \hat{\mathbf{l}}_{\delta} = \begin{bmatrix} -\cos\alpha\sin\delta \\ -\sin\alpha\sin\delta \\ \cos\delta \end{bmatrix} \qquad (5) $$
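The geometry of Eqs. 1–5 can be sketched in a few lines. This is an illustrative Python translation (the paper's own implementation is MATLAB-based); the function names and sample values below are the author's own, not from the paper.

```python
import math

def los_vectors(alpha, delta):
    """Line-of-sight unit vector l and its angle partials l_alpha, l_delta (Eqs. 3-5)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cd, sd = math.cos(delta), math.sin(delta)
    l  = [ca * cd, sa * cd, sd]
    la = [-sa * cd, ca * cd, 0.0]   # partial of l with respect to alpha
    ld = [-ca * sd, -sa * sd, cd]   # partial of l with respect to delta
    return l, la, ld

def state_from_observation(o, o_dot, alpha, delta, alpha_dot, delta_dot, rho, rho_dot):
    """Cartesian position and velocity from a hypothesized full state (Eqs. 1-2)."""
    l, la, ld = los_vectors(alpha, delta)
    r = [o[i] + rho * l[i] for i in range(3)]
    v = [o_dot[i] + rho_dot * l[i]
         + rho * alpha_dot * la[i] + rho * delta_dot * ld[i] for i in range(3)]
    return r, v
```

Note that $\hat{\mathbf{l}}$ is a unit vector and both partials are orthogonal to it, which the sketch preserves.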
For this system, the states **x**, the observations **k**, and parameters **p** are defined as
$$ \mathbf{x}^T = [\alpha \ \dot{\alpha} \ \delta \ \dot{\delta} \ \rho \ \dot{\rho}] \qquad (6) $$
$$ \mathbf{k}^T = [\alpha_1 \dots \alpha_q \ \delta_1 \dots \delta_q] \qquad (7) $$
$$ \mathbf{p}^T = [\mathbf{o}^T \dot{\mathbf{o}}^T] \qquad (8) $$
where $\dot{\alpha}$ and $\dot{\delta}$ are the angle rates which are generated using Lagrange Interpolation shown in Equation 9, and $q$ is the number of observations. In order to limit the inherent error associated with using Lagrange interpolation from point values, streak observations are used in this methodology. The rate estimations from the center of each streak are used for further calculations as this provides a better estimate of the rate than the beginning of the streak.
$$ \dot{\alpha}(t) = \sum_{i=1}^{q} \alpha(t_i)\, \frac{\sum_{k \ne i} \prod_{j \ne i,k} (t - t_j)}{\prod_{j \ne i} (t_i - t_j)} \qquad (9) $$
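The angle-rate estimate is the time derivative of the Lagrange interpolating polynomial through the measured angles. A minimal Python sketch (the function name and test values are illustrative assumptions, not the paper's data):

```python
def lagrange_rate(ts, ys, t):
    """Derivative at time t of the Lagrange polynomial through points (ts[i], ys[i])."""
    q = len(ts)
    rate = 0.0
    for i in range(q):
        # denominator of the i-th Lagrange basis polynomial
        denom = 1.0
        for j in range(q):
            if j != i:
                denom *= ts[i] - ts[j]
        # derivative of the i-th basis numerator: sum over k != i of prod over j != i, k
        num = 0.0
        for k in range(q):
            if k == i:
                continue
            prod = 1.0
            for j in range(q):
                if j != i and j != k:
                    prod *= t - ts[j]
            num += prod
        rate += ys[i] * num / denom
    return rate
```

For a quadratic track such as α(t) = t² sampled at t = 0, 1, 2, this returns the exact rate 2 at t = 1, illustrating why mid-streak rate estimates are preferred.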
For an observation with two measurements, the combined measurement and parameter vector, $\mathbf{y}^T \in \mathbb{R}^{12}$ is given by

$$ \mathbf{y}^T = [\alpha_1 \ \alpha_2 \ \delta_1 \ \delta_2 \ t_1 \ t_2 \ \mathbf{o}^T \ \dot{\mathbf{o}}^T] \qquad (10) $$
Given $\mathbf{y}$ and solving for the angle rates using Equation 9, four of the six states in $\mathbf{x}$ can be observed or determined; these four states, known henceforth as $\mathbf{x}_d$ are shown in Equation 11. The remaining two undetermined states, known as $\mathbf{x}_u$, are given by Equation 12.
$$ \mathbf{x}_d = \begin{bmatrix} \alpha \\ \dot{\alpha} \\ \delta \\ \dot{\delta} \end{bmatrix}_{4\times1} \qquad (11) $$

$$ \mathbf{x}_u = \begin{bmatrix} \rho \\ \dot{\rho} \end{bmatrix}_{2\times1} \qquad (12) $$
To limit the realm of possible solutions for $\mathbf{x}_u$, constraint hypotheses are imposed on the admissible regions. These constraints can be based on a priori information about the observation (e.g. whether the object is LEO or GEO), and reasonable constraints for objects in orbit around Earth can also be imposed. For the
---PAGE_BREAK---
purpose of this paper, the primary assumption is that of 2-body motion, which allows the use of a constraint based on the specific orbital energy equation. This constraint, $\kappa$, requires that the space object be in orbit around the Earth, and therefore excludes hyperbolic orbit solutions. To constrain the solutions for $\mathbf{x}_u$, the admissible region set $\mathcal{R}$ can be defined as $\{\mathbf{x}_u \in \mathbb{R}^2 \mid \epsilon(\mathbf{r}, \mathbf{v}) \le 0\}$, whose boundary is given by Equation 13.⁶ The solutions to this polynomial define the two-dimensional boundary of the admissible region.
$$ \kappa(\mathbf{x}_u, \mathbf{y}) = 2\epsilon(\mathbf{r}, \mathbf{v}) = \dot{\rho}^2 + w_1\dot{\rho} + T(\rho) - \frac{2\mu}{\sqrt{S(\rho)}} = 0 \quad (13) $$
Farnocchia et al. and Tommei et al. define $T(\rho)$, $S(\rho)$, and the coefficients $w_0$ through $w_5$ in Equations 14 and 15.¹⁴,⁹

$$ T(\rho) = w_2\rho^2 + w_3\rho + w_4, \quad S(\rho) = \rho^2 + w_5\rho + w_0 \quad (14) $$
$$
\begin{align}
w_0 &= \|\mathbf{o}\|^2, & w_1 &= 2\langle \dot{\mathbf{o}} \cdot \hat{\mathbf{l}} \rangle \\
w_2 &= \dot{\alpha}^2 \cos^2 \delta + \dot{\delta}^2, & w_3 &= 2\dot{\alpha} \langle \dot{\mathbf{o}} \cdot \hat{\mathbf{l}}_\alpha \rangle + 2\dot{\delta} \langle \dot{\mathbf{o}} \cdot \hat{\mathbf{l}}_\delta \rangle \\
w_4 &= \|\dot{\mathbf{o}}\|^2, & w_5 &= 2\langle \mathbf{o} \cdot \hat{\mathbf{l}} \rangle
\end{align}
\quad (15)
$$
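For a fixed range $\rho$, Eq. 13 is a quadratic in $\dot{\rho}$ whose real roots trace the admissible region boundary. A minimal Python sketch (the paper's implementation is MATLAB-based); the gravitational parameter value and the toy coefficients in the usage below are assumptions for illustration:

```python
import math

MU = 398600.4418  # km^3/s^2, assumed Earth gravitational parameter

def rho_dot_boundary(rho, w):
    """Roots in rho_dot of Eq. 13 for a given range rho; w = (w0, ..., w5).

    Returns None when the energy constraint admits no real solution at this rho.
    """
    w0, w1, w2, w3, w4, w5 = w
    T = w2 * rho**2 + w3 * rho + w4          # Eq. 14
    S = rho**2 + w5 * rho + w0               # Eq. 14
    F = T - 2.0 * MU / math.sqrt(S)          # constant term of the quadratic
    disc = w1**2 - 4.0 * F                   # discriminant of rho_dot^2 + w1*rho_dot + F = 0
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    return (-w1 - sq) / 2.0, (-w1 + sq) / 2.0
```

Sweeping this function over a continuous set of $\rho$ values yields the closed boundary curve of $\mathcal{R}$ in the $(\rho, \dot{\rho})$ plane.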
To further constrain the realm of possible state solutions, a radius of periapsis constraint is used to exclude parabolic and potentially re-entering space objects that will impact the Earth in less than one revolution. For the purpose of this paper, the minimum radius of periapsis $r_{min}$ is set at 6378 km plus $h_{atm}$, where $h_{atm}$ is 200 km. A form of this constraint, $r_p = a(1-e) \ge r_{min}$, was proposed by Maruskin et al.¹ The periapsis constraint $r_{min} - r_p(\rho, \dot{\rho}) \le 0$ was analytically developed by Farnocchia et al. to be¹⁴
$$ (r_{min}^2 - \|D\|^2)\dot{\rho}^2 - P(\rho)\dot{\rho} - U(\rho) + r_{min}^2 T(\rho) - \frac{2r_{min}^2\mu}{\sqrt{S(\rho)}} \le 0 \quad (16) $$
with

$$ P(\rho) = 2\mathbf{D} \cdot \mathbf{E}\rho^2 + 2\mathbf{D} \cdot \mathbf{F}\rho + 2\mathbf{D} \cdot \mathbf{G} - r_{min}^2 w_1 \quad (17) $$
$$ U(\rho) = \|\mathbf{E}\|^2 \rho^4 + 2\mathbf{E} \cdot \mathbf{F}\rho^3 + (2\mathbf{E} \cdot \mathbf{G} + \|\mathbf{F}\|^2)\rho^2 + 2\mathbf{F} \cdot \mathbf{G}\rho + \|\mathbf{G}\|^2 - 2r_{min}\mu \quad (18) $$

given the following
$$
\begin{align}
\mathbf{D} &= \mathbf{o} \times \hat{\mathbf{l}}, & \mathbf{E} &= \hat{\mathbf{l}} \times (\dot{\alpha}\hat{\mathbf{l}}_{\alpha} + \dot{\delta}\hat{\mathbf{l}}_{\delta}) \\
\mathbf{F} &= \mathbf{o} \times (\dot{\alpha}\hat{\mathbf{l}}_{\alpha} + \dot{\delta}\hat{\mathbf{l}}_{\delta}) + \hat{\mathbf{l}} \times \dot{\mathbf{o}}, & \mathbf{G} &= \mathbf{o} \times \dot{\mathbf{o}}
\end{align}
\quad (19)
$$
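Rather than evaluating the polynomial form of Eq. 16 directly, the same hypothesis can be checked in orbit element space via $r_p = a(1-e) \ge r_{min}$, the form attributed to Maruskin et al. above. This is a hedged Python sketch under 2-body assumptions; the μ value, function names, and default altitude are assumptions, not the paper's code:

```python
import math

MU = 398600.4418  # km^3/s^2, assumed Earth gravitational parameter

def periapsis_radius(r, v):
    """Radius of periapsis r_p = a(1 - e) from a Cartesian state (2-body, elliptical)."""
    rn = math.sqrt(sum(x * x for x in r))
    vn2 = sum(x * x for x in v)
    energy = vn2 / 2.0 - MU / rn                 # specific orbital energy
    a = -MU / (2.0 * energy)                     # semi-major axis
    h = [r[1] * v[2] - r[2] * v[1],              # specific angular momentum r x v
         r[2] * v[0] - r[0] * v[2],
         r[0] * v[1] - r[1] * v[0]]
    h2 = sum(x * x for x in h)
    e = math.sqrt(max(0.0, 1.0 + 2.0 * energy * h2 / MU**2))
    return a * (1.0 - e)

def satisfies_periapsis_constraint(r, v, r_min=6378.0 + 200.0):
    """Periapsis hypothesis: reject states that would re-enter within one revolution."""
    return periapsis_radius(r, v) >= r_min
```

A circular 7000 km state passes the check, while the same position with a strongly suborbital velocity is rejected.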
Additional constraints may be relevant depending on available a priori information about the space object. For example, eccentricity would be an appropriate constraint to apply to GEO observations.¹ For the purpose of this paper, only energy and radius of periapsis constraints will be imposed. Imposing these constraints on an observation **y** results in a two-dimensional space of solutions for $\mathbf{x}_u$ that could possibly complete the state **x** of the observed space object.
Given two observations of an object, such as shown in Equation 10, admissible regions can be determined for each observation, $\mathcal{R}_1$ and $\mathcal{R}_2$. Each of these has a set of possible undetermined states $\mathbf{x}_u$ that satisfy the aforementioned constraints. These can be combined into a single variable $\mathbf{z}$:
$$ \mathbf{z} = \begin{bmatrix} \mathbf{x}_{u,1} \\ \mathbf{x}_{u,2} \end{bmatrix} = \begin{bmatrix} \rho_1 \\ \dot{\rho}_1 \\ \rho_2 \\ \dot{\rho}_2 \end{bmatrix} \quad (20) $$
It is possible to conduct a random uniform sampling of both $\mathcal{R}_1$ and $\mathcal{R}_2$ to collect a set of $\mathbf{z}$ solutions that satisfy the constraints. Each $\mathbf{x}_{u,1}$ and $\mathbf{x}_{u,2}$, combined with $\mathbf{x}_{d,1}$ and $\mathbf{x}_{d,2}$, respectively, creates a possible full state solution $\mathbf{x}_1$ and $\mathbf{x}_2$ for the observed space object. Each of these states can be converted into Cartesian position $\mathbf{r}$ and velocity $\mathbf{v}$ by using Equations 1 and 2. Propagating these states back to some common time $t$ in the past, the resulting position vectors are defined as
---PAGE_BREAK---
$$
\begin{align}
\mathbf{r}_1(t) &= [\mathbb{I} \ 0] \, \phi(t, \mathbf{x}_{u,1}, \mathbf{x}_{d,1}, t_1) \\
\mathbf{r}_2(t) &= [\mathbb{I} \ 0] \, \phi(t, \mathbf{x}_{u,2}, \mathbf{x}_{d,2}, t_2)
\end{align}
\quad (21)
$$
From this, the goal is to determine if there is a set of solutions for **z** that minimize the Euclidean distance between the position vectors corresponding to each observation time. The cost function J(**z**) and gradient are as follows
$$
J(\mathbf{z}) = \frac{1}{2} (\mathbf{r}_1 - \mathbf{r}_2)^T (\mathbf{r}_1 - \mathbf{r}_2) \quad (22)
$$
$$
\frac{\partial J}{\partial \mathbf{z}} = \left[ \frac{\partial J}{\partial \mathbf{x}_{u,1}}, \frac{\partial J}{\partial \mathbf{x}_{u,2}} \right] = \left[ (\mathbf{r}_1 - \mathbf{r}_2)^T [\mathbb{I} \ 0] \frac{\partial \phi}{\partial \mathbf{x}_1} \frac{\partial \mathbf{x}_1}{\partial \mathbf{x}_{u,1}}, \ -(\mathbf{r}_1 - \mathbf{r}_2)^T [\mathbb{I} \ 0] \frac{\partial \phi}{\partial \mathbf{x}_2} \frac{\partial \mathbf{x}_2}{\partial \mathbf{x}_{u,2}} \right] \quad (23)
$$
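The structure of the gradient in Eq. 23 (the residual times the Jacobian of each propagated position with respect to its own $\mathbf{x}_u$, with opposite signs on the two blocks) can be sanity-checked against finite differences. In this sketch the propagation $\phi$ is replaced by an illustrative linear map, so $[\mathbb{I}\ 0]\,\partial\phi/\partial\mathbf{x}\,\partial\mathbf{x}/\partial\mathbf{x}_u$ becomes a constant 3×2 matrix; this is a toy stand-in for the dynamics, not the paper's propagator:

```python
import numpy as np

# Toy stand-in for the propagated position: r_i = A_i @ x_u_i + b_i, so the
# position Jacobian with respect to x_u_i is the constant matrix A_i.
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
b1, b2 = rng.normal(size=3), rng.normal(size=3)

def J(z):
    """Cost of Eq. 22: half squared distance between the two propagated positions."""
    d = (A1 @ z[:2] + b1) - (A2 @ z[2:] + b2)
    return 0.5 * d @ d

def grad_J(z):
    """Analytic gradient with the block structure of Eq. 23."""
    d = (A1 @ z[:2] + b1) - (A2 @ z[2:] + b2)
    return np.concatenate([d @ A1, -(d @ A2)])

def fd_grad(f, z, h=1e-6):
    """Central finite-difference gradient for verification."""
    g = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp[i] += h
        zm[i] -= h
        g[i] = (f(zp) - f(zm)) / (2.0 * h)
    return g

z0 = rng.normal(size=4)
```

With a real propagator the constant matrices are replaced by state-transition-matrix products, but the block structure of the gradient is unchanged.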
**Algorithm 1:** Algorithm to Determine Solution Manifold

**Result:** Minimize Eq. 22

1 initialize the givens, observables, and parameter settings;

2 compute the ground station vectors and observer unit vectors with Eqs. 3, 4, & 5;

3 compute the $\mathcal{R}$ boundaries for each observation by solving the quadratic equation for $\dot{\rho}$ given a continuous set of $\rho$ values using Eq. 13;

4 uniformly sample from the $\mathcal{R}$ interiors by selecting random $\rho$ & $\dot{\rho}$ values between the min and max values that satisfy the energy (Eq. 13) and radius of periapsis (Eq. 16) constraints;

5 construct $\mathbf{z}$ (Eq. 20) by stacking the sampled values from $\mathcal{R}_1$ & $\mathcal{R}_2$;

6 **for** *i* = 1:length(**z**) **do**

7 establish the current **z** "guess" value (*z* = *z*(:, *i*));

8 **while** $J(\tilde{\mathbf{z}}) \geq$ tolerance **do**

9 use fmincon to estimate the gradient (Eq. 23) and step **z** in that direction subject to the nonlinear constraints in Eqs. 13 & 16;

10 update the **z** value to reflect the step towards the minimum;

11 evaluate the constraints (Eqs. 13 and 16) at the current **z** value to ensure the solution still falls within $\mathcal{R}$;

12 **if** the current **z** is not within $\mathcal{R}$ (does not meet the constraints)

13 **then**

14 get a new "guess" for **z** from fmincon by continuing;

15 **else**

16 convert **z** to Cartesian using Eqs. 1 & 2 to get $\tilde{\mathbf{z}}$;

17 propagate $\tilde{\mathbf{z}}$ to $t_0$ and calculate the distance using Eq. 22;

18 **end**

19 **end**

20 save the **z** solution value that minimizes $J(\mathbf{z})$ (Eq. 22);

21 **end**
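Steps 3–4 of Algorithm 1 (bounding the admissible region and uniformly sampling its interior) can be sketched with simple rejection sampling against the energy constraint of Eq. 13. The μ value and the coefficients used below are illustrative assumptions, not the paper's test case:

```python
import math
import random

MU = 398600.4418  # km^3/s^2, assumed Earth gravitational parameter

def kappa(rho, rho_dot, w):
    """Twice the specific energy, Eq. 13; negative inside the admissible region."""
    w0, w1, w2, w3, w4, w5 = w
    T = w2 * rho**2 + w3 * rho + w4
    S = rho**2 + w5 * rho + w0
    return rho_dot**2 + w1 * rho_dot + T - 2.0 * MU / math.sqrt(S)

def sample_region(w, n, rho_box=(200.0, 3000.0), rho_dot_box=(-10.0, 10.0), seed=1):
    """Uniform rejection sampling of (rho, rho_dot) pairs satisfying kappa <= 0."""
    rnd = random.Random(seed)
    samples = []
    while len(samples) < n:
        rho = rnd.uniform(*rho_box)
        rho_dot = rnd.uniform(*rho_dot_box)
        if kappa(rho, rho_dot, w) <= 0.0:   # energy constraint hypothesis
            samples.append((rho, rho_dot))
    return samples
```

In the full method the periapsis constraint (Eq. 16) would be checked in the same accept/reject test, and the accepted pairs from the two regions would be stacked into the columns of **z** per Eq. 20.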
## III. Results
The goal of this methodology is to detect collisions and fragmentations by observing disparate debris. To demonstrate the initial effectiveness of this approach, two independent optical observations of the same object were used. The observations were one-second exposures taken 5 minutes (300 seconds) apart, beginning with an observation taken March 1, 2014 at 02:01:36 UTC. The measurement values for the tested LEO case are given in Table 1. The observations were made using an equatorially mounted telescope from Deerlick Astronomy Village; the observer parameters are given in Table 2. Errors in the observation measurements were assumed to be zero-mean with an approximately 0.5 arcsecond standard deviation of the noise on the angle observations (right ascension and declination). The standard deviation is approximated at this value due to the type of mount the observations were made from, as well as the exposure time.
---PAGE_BREAK---
**Table 1. LEO Optical Observation Measurements**
<table><thead><tr><td>Time</td><td>α (rad)</td><td>δ (rad)</td><td>Exposure (sec)</td></tr></thead><tbody><tr><td>02:01:36</td><td>1.4007</td><td>0.5556</td><td>1</td></tr><tr><td>02:06:36</td><td>1.3504</td><td>-0.6931</td><td>1</td></tr></tbody></table>
**Table 2. Observer Parameters for Deerlick Astronomy Village, GA**
<table><thead><tr><th>Latitude</th><th>Longitude</th><th>Altitude (m)</th></tr></thead><tbody><tr><td>33.561° N</td><td>82.764° W</td><td>176.8</td></tr></tbody></table>
From these observations, admissible regions were constructed using a radius of periapsis constraint of 6578 km (radius of Earth plus 200 km), an energy constraint of less than zero (Earth orbiting), and an eccentricity constraint of less than 0.7. A set of *n* particle pairs **x**<sub>*u*</sub> that meet these constraints was then created by randomly uniformly sampling the interiors of each admissible region. The set from the observation at t<sub>1</sub>, **x**<sub>*u*,1</sub>, is then combined with **x**<sub>*u*,2</sub> from the observation at t<sub>2</sub>, resulting in **z** being a 4 × *n* matrix (Equation 20). Figure 3 shows the admissible regions corresponding to each observation as well as the sampled points from each interior.
**Figure 3. Admissible Region boundaries for Observation 1 & 2**
Using this test case, two different epoch times were selected: 100 seconds and 1 hour (3600 seconds) in the past. In each of these scenarios, each column in **z** is stepped towards a minimum solution for the cost function *J*(**z**) at time t<sub>0</sub> by using the MATLAB function fmincon from the optimization toolbox to solve for the minimum of Equation 22 given the nonlinear constraints and functions. fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
### A. Epoch Time = -100 Seconds
In this scenario, 5000 particles were sampled from the admissible regions, resulting in a 4 × 5000 matrix for **z**. Each of the 5000 columns of **z** was propagated backwards 100 seconds using a two-body propagator in ode45. The solution values for **z**, corresponding to each observation, that minimize the Euclidean distance between the observed objects are shown in Figures 4 and 5. At an epoch that is only 100 seconds before the first observation, the solution manifold appears to have very limited curvature.
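The backward two-body propagation (done here with ode45) can be sketched with a fixed-step RK4 integrator in Python; this is a hedged stand-in for the adaptive ode45 call, with an assumed μ value and a step count chosen for illustration:

```python
import math

MU = 398600.4418  # km^3/s^2, assumed Earth gravitational parameter

def two_body_deriv(state):
    """Two-body dynamics: state = [x, y, z, vx, vy, vz] in km, km/s."""
    x, y, z, vx, vy, vz = state
    r3 = (x * x + y * y + z * z) ** 1.5
    return [vx, vy, vz, -MU * x / r3, -MU * y / r3, -MU * z / r3]

def propagate(state, dt_total, steps):
    """Fixed-step RK4 propagation; a negative dt_total propagates backwards in time."""
    h = dt_total / steps
    s = list(state)
    for _ in range(steps):
        k1 = two_body_deriv(s)
        k2 = two_body_deriv([s[i] + 0.5 * h * k1[i] for i in range(6)])
        k3 = two_body_deriv([s[i] + 0.5 * h * k2[i] for i in range(6)])
        k4 = two_body_deriv([s[i] + h * k3[i] for i in range(6)])
        s = [s[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0 for i in range(6)]
    return s
```

Propagating a circular state 100 seconds backward and then forward again recovers the original state to high precision, a quick consistency check on the integrator.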
---PAGE_BREAK---
Figure 4. Admissible Region for Observation 1 with minimized solutions and truth for $t_0 = -100$sec
Figure 5. Admissible Region for Observation 2 with minimized solutions and truth for $t_0 = -100$sec
The solution manifold line in the first observation's admissible region is much longer than the corresponding line in the second observation's admissible region. This result was to be expected, as the first observation was taken at a minimum range to the ground station, which means that a set of possible solution orbits with larger variation is possible. Conversely, the second observation was taken at a much lower elevation, thus increasing the slant range to the object from the ground station. This provides a smaller amount of variation in the solution states. Figures 6 and 7 show the solution manifold in the position and velocity space. In these figures, the first observation is indicated with a blue arrow, the second with an orange arrow, and the ground station with a green arrow. The solution manifold is a short line made up of red (observation 1) and blue (observation 2) position solutions that clearly intersects with the shown known object truth orbit at the given epoch. This indicates that the observed debris objects have the same origin and it is possible that they spawned from an event involving the shown known object orbit.
---PAGE_BREAK---
Figure 6. 3D Plot Earth Hemisphere with Solution Manifold and True Catalog Orbit for $t_0 = -100sec$
Figure 7. Solution Manifold and True Catalog Orbit Intersection for $t_0 = -100sec$
### B. Epoch Time = -1 Hour
In this scenario, 1100 particles were sampled from the admissible regions, resulting in a 4 × 1100 matrix for **z**. Each of the 1100 columns of **z** was propagated backwards one hour (3600 seconds) using a two-body propagator in ode45. The solution values for **z**, corresponding to each observation, that minimize the Euclidean distance between the observed objects are shown in Figures 8 and 9. At an epoch that is one hour prior to the first observation, the solution manifold appears to have an increased amount of curvature when compared with the corresponding results from the previous scenario. This is especially true of the solution manifold in Observation 2's admissible region.
---PAGE_BREAK---
Figure 8. Admissible Region for Observation 1 with minimized solutions and truth for $t_0 = -3600$ sec
Figure 9. Admissible Region for Observation 2 with minimized solutions and truth for $t_0 = -3600$ sec
Just as in the first scenario, the solution manifold line in the first observation's admissible region is much longer than the corresponding line in the second observation's admissible region. However, for an epoch one hour prior to the first observation, the solution manifolds display much more curvature than in the 100 second scenario; this is especially evident for observation 2 in Figure 9. The 3D plot in Figure 10 shows the solution manifold in position and velocity space, with the first observation indicated by a blue arrow, the second by an orange arrow, and the ground station by a green arrow. The solution manifold is again a short line of red (observation 1) and blue (observation 2) position solutions that clearly intersects the known object's truth orbit at the given epoch, indicating that the observed debris objects have the same origin and may have spawned from an event involving that orbit. The solution manifold displays more interesting characteristics and curvature in this scenario, as it extends well beyond the known object orbit.
---PAGE_BREAK---
Figure 10. 3D Plot Earth Hemisphere with Solution Manifold and True Catalog Orbit for $t_0 = -3600$ sec
## C. Error and Challenges
Errors in the observation measurements were assumed to be zero-mean, with an approximately 0.5 arcsecond standard deviation of the noise on the angle observations (right ascension and declination). The standard deviation is approximated at this value because of the type of mount the observations were made from, as well as the exposure time. Error is also inherent in any numerical propagation method, such as ode45; the relative and absolute tolerances were set to $1 \times 10^{-12}$ to limit error throughout this process. An additional source of error is the Lagrangian interpolation used to estimate the angle rates of each observation. As mentioned above, to minimize this error source, streaks were used: the observation information was taken from the center of the streak, and the rates were estimated using the beginning and end of the streak. For future work, the rates will be fed in as part of the 4-state $x_d$ rather than estimated from the right ascension and declination of the observations. This approach is computationally slow because it implements fmincon, which estimates the gradient numerically, instead of a gradient-based approach such as steepest descent. A limitation of the method described here is that the epoch time, $t_0$, is arbitrary; it may be based on a priori information (e.g. the last known observation), but selecting a good estimate for $t_0$ requires an iterative "guessing" process, which increases computational cost.
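The rate-estimation step can be made concrete: fit the Lagrange polynomial through the streak's start, center, and end samples and differentiate it at the midpoint. This is a generic sketch of that idea, not the paper's code; the sample times and right-ascension values are hypothetical:

```python
def lagrange_rate(t, f, tc):
    """Derivative at tc of the Lagrange polynomial through points (t, f).

    Used here to estimate an angle rate (right ascension or declination)
    at the streak midpoint from the streak start, center, and end."""
    n = len(t)
    rate = 0.0
    for i in range(n):
        # Derivative of the i-th Lagrange basis polynomial at tc
        dLi = 0.0
        for j in range(n):
            if j == i:
                continue
            term = 1.0 / (t[i] - t[j])
            for k in range(n):
                if k == i or k == j:
                    continue
                term *= (tc - t[k]) / (t[i] - t[k])
            dLi += term
        rate += f[i] * dLi
    return rate

# Quadratic test signal: a 3-point Lagrange fit differentiates it exactly
t = [0.0, 1.0, 2.0]
ra = [0.1 + 0.01 * ti + 0.001 * ti**2 for ti in t]
rate_mid = lagrange_rate(t, ra, 1.0)  # true rate at t=1 is 0.012
```

Because a three-point Lagrange polynomial reproduces any quadratic exactly, the estimated midpoint rate matches the analytic derivative for this test signal.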
## IV. Conclusions
The results in this paper, though subject to limitations, illustrate that it is possible to detect fragmentations and collisions much sooner than with current capabilities that rely on orbit determination. The current state of the art requires multiple observations over at least two orbits for a LEO object, and continual observation over hours for a GEO object. The approach outlined in this paper requires only independent observations of two debris objects to answer the same hypothesis, with the cost largely in computation. The problem reduces to a 4-dimensional optimization over sampled particles, which can be solved using a gradient-based method. Using the hypothesized constraint of the admissible regions, it was demonstrated that it is possible to determine whether a combination of new uncorrelated debris objects has a common origin that also intersects a known catalog object orbit, thus indicating that a break-up of that known object has occurred.
---PAGE_BREAK---
## V. Future Work
This paper reflects an initial endeavour into understanding the limitations and applications of this methodology. Additional test cases, including a GEO break-up and a LEO collision, remain to be done, as this paper only demonstrates that, given two observations originating from the same event, a zero-finding formulation is possible. Other phenomenologies should also be considered, such as radar observations. LEO observations are not typically made using optical or electro-optical hardware; conversely, GEO observations are almost exclusively made with these methods. Radar observations have a different admissible region structure, as they provide a different set of observed, or determined, states: in that scenario, $x_d$ is a 2 × 1 vector, whereas for optical observations it is a 4 × 1 vector. Therefore, to use radar information in the methodology described in this paper, additional observations would need to be included to close the solution.
## References
¹J. M. Maruskin, D. J. Scheeres, and K. T. Alfriend, "Correlation of optical observations of objects in Earth orbit," *Journal of Guidance, Control, and Dynamics*, Vol. 32, No. 1, 2009, pp. 194-209.

²A. Rossi, "The Earth orbiting space debris," *Serbian Astronomical Journal*, Vol. 170, 2005, pp. 1-12.

³M. J. Holzinger, K. K. Luu, C. Sabol, and K. Hill, "Uncorrelated-Track Classification, Characterization, and Prioritization Using Admissible Regions and Bayesian Inference," *Journal of Guidance, Control, and Dynamics*, 2016, pp. 2469-2484.

⁴K. Wormnes, R. Le Letty, L. Summerer, R. Schonenborg, O. Dubois-Matra, E. Luraschi, A. Cropp, H. Krag, and J. Delaval, "ESA technologies for space debris remediation," *6th IAASS Conference: Safety is Not an Option*, Montréal, 2013.

⁵P. de Selding, "Orbital Debris a Growing Problem with No End in Sight," *Space News*, Vol. 31, 2006.

⁶J. L. Worthy, *Initialization of sequential estimation for unobservable dynamical systems using partial information in the presence of systemic uncertainty*, PhD thesis, Georgia Institute of Technology, 2017.

⁷J. L. Worthy III and M. J. Holzinger, "Incorporating uncertainty in admissible regions for uncorrelated detections," *Journal of Guidance, Control, and Dynamics*, Vol. 38, No. 9, 2015, pp. 1673-1689.

⁸A. Milani, G. F. Gronchi, M. d. Vitturi, and Z. Knežević, "Orbit determination with very short arcs. I: Admissible regions," *Celestial Mechanics and Dynamical Astronomy*, Vol. 90, No. 1-2, 2004, pp. 57-85.

⁹G. Tommei, A. Milani, and A. Rossi, "Orbit determination of space debris: admissible regions," *Celestial Mechanics and Dynamical Astronomy*, Vol. 97, No. 4, 2007, pp. 289-304.

¹⁰J. A. Siminski, O. Montenbruck, H. Fiedler, and T. Schildknecht, "Short-arc tracklet association for geostationary objects," *Advances in Space Research*, Vol. 53, No. 8, 2014, pp. 1184-1194.

¹¹K. Fujimoto and D. J. Scheeres, "Applications of the admissible region to space-based observations," *Advances in Space Research*, Vol. 52, No. 4, 2013, pp. 696-704.

¹²K. Fujimoto and D. J. Scheeres, "Correlation of optical observations of earth-orbiting objects and initial orbit determination," *Journal of Guidance, Control, and Dynamics*, Vol. 35, No. 1, 2012, pp. 208-221.

¹³J. L. Worthy, M. J. Holzinger, and D. J. Scheeres, "An optimization approach for observation association with systemic uncertainty applied to electro-optical systems," *Advances in Space Research*, 2018.

¹⁴D. Farnocchia, G. Tommei, A. Milani, and A. Rossi, "Innovative methods of correlation and orbit determination for space debris," *Celestial Mechanics and Dynamical Astronomy*, Vol. 107, No. 1-2, 2010, pp. 169-185.
samples_new/texts_merged/3884483.md
ADDED

The diff for this file is too large to render.
samples_new/texts_merged/393503.md
ADDED
---PAGE_BREAK---
# On the Choice of Multiple Flat Outputs for Fault Detection and Isolation of a Flat System
Rim RAMMAL*, Tudor-Bogdan AIRIMITOAIE*,
Franck CAZAURANG*, Jean LÉVINE**,
Pierre MELCHIOR*

* Univ. Bordeaux, Bordeaux INP, CNRS, IMS, 33405 Talence, France
ictional redundancy in which multiple sensors and actuators are used to measure and control a particular variable (Chen et al., 2015). The drawbacks of this method are the extra equipment, maintenance cost and additional space required to accommodate the equipment. This approach was improved later on by the introduction of the *model-based analytical redundancy method*, based on the notion of *generating residual signals*. These residues are defined as the difference between the measured variables and the estimated ones. In the case of no fault, and in the ideal case of noise free observations, the values of the residues are equal to zero. In the non-zero case, the estimation method must be specified, see e.g. the observer-based approach (Tousi and Khorasani, 2011), the parity-space approach (Diversi et al., 2002) or the Kalman-based approach (Izadian and Khayyer, 2010). However, in these approaches, a sensor may be wrongly declared faulty because of the lack of efficiency of the estimation algorithm, hence the importance of the notion of *detectability*.
Recently, the flatness property has been introduced into the repertoire of FDI techniques (Suryawan et al., 2010; Martínez-Torres et al., 2014). Here, residues are calculated using the differential flatness property. Roughly speaking, let us recall that a system is said to be flat if all the
state and input variables can be expressed as functions of a particular variable, called a flat output, and a finite number of its successive derivatives. The method presented in Suryawan et al. (2010) is dedicated to linear flat systems and uses the properties of B-spline parameterisation to estimate the time derivatives of the flat output, which may not be defined because of the presence of noise. This derivative estimation can take time and cause a delay in the reconfiguration process. In order to overcome these issues, a high-gain observer has been proposed in Martínez-Torres et al. (2014) to evaluate the time derivative of the noisy signals. The observer may be complemented by a low-pass filter to improve its performance. Note that the latter method can be applied to both linear and nonlinear flat systems.
In the present flatness-based FDI approach, an effort is made to dissociate the theoretical *isolability* property, based on residue computation, from the estimation process. For this purpose, we compute the residues between the measurements and their expression exactly obtained from the measured flat outputs and their derivatives estimated online. The treatment of these residues slightly differs from that of previous approaches (Kościelny et al., 2016): every sensor and actuator admits a *fault alarm signature*, i.e. the set of residues affected by a fault on this sensor/actuator, and a fault on a sensor/actuator is isolable if its corresponding fault alarm signature is distinct. In practice, the treatment of these residues is adapted, in the presence of noise, by introducing a threshold and an estimation process as in the previous approaches (Martínez-Torres et al., 2013). Moreover, we show that it is possible to increase the isolability of faults by considering several flat outputs, on the condition that they are independent,
**Abstract:** This paper presents a rigorous definition of the isolability of a fault in a flat system whose flat outputs are measured by sensors that are subject to faults. In particular, if only one sensor or actuator is faulty at a time, we show that the isolation of faults can be achieved if a pair of flat outputs satisfies some independence condition. A detailed characterization of this condition is presented. Finally, the pertinence of the isolability concept is demonstrated on the example of a three tank system.

**Keywords:** nonlinear flat system, flat output, fault detection and isolation, three tank system.

## 1. INTRODUCTION

---PAGE_BREAK---
thus completing in a rigorous way some heuristic results of Martínez-Torres et al. (2013). These results are applied to a three tank FDI problem where we compute two independent flat outputs that allow the isolation of all possible simple faults (only one faulty sensor or actuator at a time).
The main contributions of this paper are the above mentioned rigorous definition of isolability of faults and the characterization of the flat outputs to be used in the fault isolation.

This paper is organized as follows: section 2 introduces the basic concepts of FDI for nonlinear differentially flat systems and their definitions. Section 3 discusses the conditions for independence between flat outputs. Section 4 deals with the application of this FDI approach to the three tank system. Finally, section 5 concludes the paper.
## 2. FLATNESS-BASED FDI
### 2.1 Differentially Flat System

Consider the following nonlinear system

$$ \begin{cases} \dot{x} = f(x, u) \\ y = h(x, u) \end{cases} \quad (1) $$
where $x$, the vector of states, evolves in a $n$-dimensional manifold $X$, $u \in \mathbb{R}^m$ is the vector of inputs, $y \in \mathbb{R}^p$ is the measured output, $m \le n$, $\text{rank}(\frac{\partial f}{\partial u}) = m$ and $m \le p$. Let $(x, \bar{u}) \triangleq (x, u, \dot{u}, \ddot{u}, \ldots)$ be a prolongation of the coordinates $(x, u)$ to the manifold of jets of infinite order $\mathcal{X} \triangleq X \times \mathbb{R}_\infty^m$ (Fliess et al., 1999), (Levine, 2009, Chapter 5).
In the sequel, we systematically denote by $\bar{\xi} \triangleq (\xi, \dot{\xi}, \ddot{\xi}, \ldots)$ the sequence of infinite order jets of a vector $\xi$ and $\bar{\xi}^{(\alpha)} \triangleq (\xi, \dot{\xi}, \ddot{\xi}, \ldots, \xi^{(\alpha)})$ the truncation at the finite order $\alpha \in \mathbb{N}$ of the previous sequence.
The system (1) is flat at a point $(x_0, \bar{u}_0) \in \mathcal{X}$ if and only if there exist a vector $z = (z_1, \ldots, z_m) \in \mathbb{R}^m$, two integers $\rho$ and $\nu$ and mappings $\psi$ defined on a neighbourhood $\mathcal{V}$ of $(x_0, \bar{u}_0)$ in $\mathcal{X}$ and $\varphi = (\varphi_0, \varphi_1, \ldots)$ defined on a neighbourhood $\mathcal{W} \subset \mathcal{V}$ of $\bar{z} \triangleq (z, \dot{z}, \ddot{z}, \ldots) \triangleq \psi(x_0, \bar{u}_0)$ in $\mathbb{R}_\infty^m$ such that:
(1) $z = \psi(x, \bar{u}^{(\nu)}) \in \mathcal{W}$
(2) $z_1, \ldots, z_m$ and their successive derivatives are linearly independent in $\mathcal{W}$
(3) The state $x$ and the input $u$ are functions of $z$ and its successive derivatives:
$$ (x, u) = (\varphi_0(\bar{z}^{(\rho)}), \varphi_1(\bar{z}^{(\rho+1)})) \in \operatorname{pr}_{X \times \mathbb{R}^m}(\mathcal{V}) \quad (2) $$

where $\operatorname{pr}_{X \times \mathbb{R}^m}(\mathcal{V})$ is the canonical projection from $\mathcal{V}$ to $X \times \mathbb{R}^m$

(4) The differential equation $\dot{\varphi}_0(\bar{z}) = f(\varphi_0(\bar{z}), \varphi_1(\bar{z}))$ is identically satisfied in $\mathcal{W}$.

The vector $z$ is called flat output of the system. The mappings $\psi$ and $\varphi$ are called Lie-Bäcklund isomorphisms and are inverse of one another.

**Remark 1.** The property of flatness is not defined globally. The Lie-Bäcklund isomorphisms $\psi$ and $\varphi$ are non unique and only locally defined. Thus, there might exist points in $\mathcal{X}$ where no such isomorphisms exist or, otherwise stated, where the system is not flat. It has been proven in Kaminski et al. (2018) that the set of intrinsic singularities contains the set of equilibrium points of the system that are not first order controllable.
### 2.2 Fault Detection and Isolation
For the flat system (1), we suppose that the vector $y^s = (y_1^s, \ldots, y_p^s)^T$ is measured by sensors $S_1, \ldots, S_p$ respectively. We also suppose that the flat output $z$ is part of these measurements according, without loss of generality, to
$$ z^s = (y_1^s, \ldots, y_m^s)^T. \quad (3) $$
Moreover, the value of the input vector $u = (u_1, \ldots, u_m)^T$, corresponding to the actuators $A_1, \ldots, A_m$, is assumed to be available at every time. We now propose a new definition of the notion of residue that generalizes the one introduced by Martínez-Torres et al. (2014).
According to (2), the state and input read:
$$ x^z = \varphi_0(\overline{z^s}^{(\rho)}), \quad u^z = \varphi_1(\overline{z^s}^{(\rho+1)}) \quad (4) $$
where the superscript $z$ indicates that they are evaluated as functions of the measurements $z^s$ and, according to (1),
$$ y_k^z \triangleq h_k(\varphi_0(\overline{z^s}^{(\rho)}), \varphi_1(\overline{z^s}^{(\rho+1)})) \quad (5) $$
is the virtual value of $y_k$ computed via the measured flat output $z^s$.

Note that the first $m$ components of $y^z$ are equal to the corresponding components of $z^s$:
$$ y^z = (z^s, \tilde{h}(\varphi_0(\overline{z^s}), \varphi_1(\overline{z^s})))^T \quad (6) $$
with $\tilde{h} = (h_{m+1}(\varphi_0(\overline{z^s}), \varphi_1(\overline{z^s})), \dots, h_p(\varphi_0(\overline{z^s}), \varphi_1(\overline{z^s})))^T$.

**Definition 1.** The $k$th-sensor residue $R_{S_k}$ and $l$th-input residue $R_{A_l}$, for $k=1,\dots,p$ and $l=1,\dots,m$, are given by:
$$ R_{S_k} = y_k^s - y_k^z, \quad R_{A_l} = u_l - u_l^z. \quad (7) $$
In total, we have $p+m$ residues for a single flat output $z^s$ and we denote the full residue vector by:

$$ r = (R_{S_1}, \dots, R_{S_m}, R_{S_{m+1}}, \dots, R_{S_p}, R_{A_1}, \dots, R_{A_m})^T \\ = (r_1, \dots, r_m, r_{m+1}, \dots, r_p, r_{p+1}, \dots, r_{p+m})^T \quad (8) $$

and according to (6)

$$ r = (0, \dots, 0, R_{S_{m+1}}, \dots, R_{S_p}, R_{A_1}, \dots, R_{A_m})^T \\ = (0, \dots, 0, r_{m+1}, \dots, r_p, r_{p+1}, \dots, r_{p+m})^T. \quad (9) $$

Measured and calculated variables are illustrated in Fig. 1.
A residue that is identically zero cannot be affected by faults on any of the sensors or actuators. We therefore eliminate it and truncate the residue vector to keep only the last $p$ components. This truncated vector is denoted by $r_\tau$:
$$ r_\tau = (R_{S_{m+1}}, \dots, R_{S_p}, R_{A_1}, \dots, R_{A_m})^T \\ = (r_{\tau_1}, r_{\tau_2}, \dots, r_{\tau_p})^T. \quad (10) $$
**Hypothesis:** From now on, we assume that there is only one fault at a time affecting the sensors or actuators.
In practice, due to the presence of noises on sensors and actuators, the successive derivatives of $z^s$ may not be
---PAGE_BREAK---
Fig. 1. Flatness-based residual generation
defined. We assume that they are computed via a high-gain observer, possibly completed by a low-pass filter as in Martínez-Torres et al. (2014) to improve its robustness. Moreover, a threshold is associated to each residue. In the non-faulty case, the residues in (10) will not exceed their thresholds. If, otherwise, at least one of the residues exceeds its threshold then a fault alert is launched. If several residues in (10) trigger an alert at the same time, a fault alarm signature, defined below, is required to isolate the fault.
For this purpose, we introduce the so-called *signature matrix*:
*Definition 2.* (Signature matrix). Let $r_{\tau}$ be the vector of residues defined in (10) and $\zeta = (y_1^s, \dots, y_p^s, u_1, \dots, u_m)^T \in \mathbb{R}^{p+m}$ the vector of available measurements. The *signature matrix* $\mathbf{S}$ associated to $z^s$ is the matrix given by:
$$
\mathbf{S} = \begin{pmatrix}
\sigma_{1,1} & \sigma_{1,2} & \cdots & \sigma_{1,p+m} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{p,1} & \sigma_{p,2} & \cdots & \sigma_{p,p+m}
\end{pmatrix} \quad (11)
$$
with
$$
\sigma_{i,j} \triangleq \begin{cases} 0 & \text{if } \frac{\partial r_{\tau_i}}{\partial \zeta_j^{(\varrho)}} = 0 \quad \forall \varrho \in \{0, 1, \dots\} \\ 1 & \text{if } \exists \varrho \in \{0, 1, \dots\} \text{ s.t. } \frac{\partial r_{\tau_i}}{\partial \zeta_j^{(\varrho)}} \neq 0 \end{cases} \tag{12}
$$
*Remark 1.* Each column $\Sigma_j$ of the signature matrix $\mathbf{S}$ indicates whether a residue $r_{\tau_i}$ is or is not functionally affected by a fault on the measurement $\zeta_j$. So in (12), $\sigma_{i,j} = 0$ means that the residue $r_{\tau_i}$ is not affected by a fault on the measurement $\zeta_j$ and $\sigma_{i,j} = 1$ means that the residue may be affected.
*Definition 3.* A column $\Sigma_j$ of the signature matrix $\mathbf{S}$ is called the *fault alarm signature*, or simply *signature*, associated with the sensor/actuator $\zeta_j$.
From the signature matrix **S** we propose the following definitions of detectability and isolability in the flatness context:
*Definition 4.* (Detectability). A fault on a sensor/actuator $\zeta_j$ is detectable if, and only if, there exists at least one $i \in \{1, \dots, p\}$ such that $\sigma_{i,j} = 1$.
*Definition 5.* (Isolability). A fault on a sensor $S_k$, $k = 1, \dots, p$, is said *isolable* if, and only if, its corresponding fault alarm signature $\Sigma_k$ in the signature matrix $\mathbf{S}$ is distinct from the others, i.e.
$$
\Sigma_k \neq \Sigma_j, \quad \forall j = 1, \dots, p+m, \quad j \neq k. \tag{13}
$$
An isolable fault on the actuator $A_l$, for $l = 1, \dots, m$, is defined analogously:
$$
\Sigma_{p+l} \neq \Sigma_j, \quad \forall j = 1, \dots, p+m, \quad j \neq p+l. \tag{14}
$$
We define $\mu$ as the number of distinct signatures of the signature matrix $\mathbf{S}$ associated to $z^s$. Then, $\mu$ is the number of isolable faults associated to $z^s$.
A more general, but much more complicated, definition of isolability in the structured residual context of polynomial systems has been introduced in Staroswiecki and Comtet-Varga (2001), based on elimination techniques.
Definition 5 means that if the signature matrix $\mathbf{S}$ has two identical signatures, i.e. $\Sigma_i = \Sigma_j$ for two different sensors/actuators $\zeta_i \neq \zeta_j$, then we cannot make a decision on the faulty device: the fault is detected but cannot be isolated. Thus, the number of isolable faults is equal to the number of distinct signatures in the matrix $\mathbf{S}$.
### 2.3 The Example of the Three Tank System
We consider a three tank system made up of three cylindrical tanks of cross-sectional area S, connected to each other by means of cylindrical pipes of section S<sub>n</sub>, and two pumps P<sub>1</sub> and P<sub>2</sub> that supply tanks T<sub>1</sub> and T<sub>2</sub>. These three tanks are also connected to a central reservoir through pipes (see Fig. 2).
The model is given by:
$$
\dot{x}_1 = -Q_{10}(x_1) - Q_{13}(x_1, x_3) + u_1 \quad (15)
$$

$$
\dot{x}_2 = -Q_{20}(x_2) + Q_{32}(x_2, x_3) + u_2 \quad (16)
$$

$$
\dot{x}_3 = Q_{13}(x_1, x_3) - Q_{32}(x_2, x_3) - Q_{30}(x_3) \quad (17)
$$
where the state variables $x_i$, $i = 1, 2, 3$ represent the water level of each tank, $Q_{i0}$, $i = 1, 2, 3$ the outflow between each tank and the central reservoir, $Q_{13}$ is the outflow between tanks $T_1$ and $T_3$ and $Q_{32}$ the outflow between tanks $T_3$ and $T_2$, $u_1$ and $u_2$ are the incoming flows by unit of surface of each pump.
We assume the following inequalities to avoid singularities¹:
$$
x_1 > x_3 > x_2.
$$
We consider that the valves connecting tanks $T_1$ and $T_3$ with the central reservoir are closed, i.e. $Q_{10} \equiv 0$ and $Q_{30} \equiv 0$. The expressions of $Q_{13}$, $Q_{32}$ and $Q_{20}$ are given by:
$$
Q_{13}(x_1, x_3) = a_{z1} \sqrt{2g(x_1 - x_3)} \quad (18)
$$

$$
Q_{20}(x_2) = a_{z2} \sqrt{2g x_2} \quad (19)
$$

$$
Q_{32}(x_2, x_3) = a_{z3} \sqrt{2g(x_3 - x_2)} \quad (20)
$$
¹ According to the *Remark 1*, the point $\bar{x} \in \mathcal{X}$ s.t. $x_1 = x_2 = x_3$ is an equilibrium point which is not first order controllable, then it is a point of intrinsic flatness singularity.
---PAGE_BREAK---
Fig. 2. *Three Tank System*, Source: (Noura et al., 2009)
where $a_{zr}$, $r = 1, 2, 3$, is the flow coefficient and $g$ the gravitational acceleration. Each tank $T_i$ is equipped with a sensor $\mathbf{S}_i$ to measure its level $x_i$. Hence, the measured output is:
$$y^s = (y_1^s, y_2^s, y_3^s)^T = (x_1^s, x_2^s, x_3^s)^T \quad (21)$$
The system (15)-(16)-(17) is flat with $z = (x_1, x_3)^T = (z_1, z_2)^T$ as flat output. The measured flat output is then given by $z^s = (y_1^s, y_3^s)^T = (z_1^s, z_2^s)^T$. In order to construct the vector of residues, using (4) and (5), we set:
$$\begin{aligned}
y_1^z &= z_1^s \\
y_2^z &= z_2^s - \frac{1}{2g} \left( \frac{a_{z1} \sqrt{2g(z_1^s - z_2^s)} - \dot{z}_2^s}{a_{z3}} \right)^2 \\
y_3^z &= z_2^s \\
u_1^z &= \dot{z}_1^s + a_{z1} \sqrt{2g(z_1^s - z_2^s)} \\
u_2^z &= \dot{y}_2^z - a_{z3} \sqrt{2g(z_2^s - y_2^z)} + a_{z2} \sqrt{2g y_2^z}.
\end{aligned}$$
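This parameterization can be checked numerically. The following Python sketch evaluates $y_2^z$, $u_1^z$ and $u_2^z$ at a single fault-free operating point, feeding the model's own derivatives for $\dot{z}^s$ and $\dot{y}_2^z$; the parameter values and operating point are hypothetical, chosen only to satisfy $x_1 > x_3 > x_2$:

```python
import math

# Hypothetical parameter values, for illustration only
g = 9.81                        # gravitational acceleration, m/s^2
az1, az2, az3 = 0.5, 0.6, 0.4   # flow coefficients a_z1, a_z2, a_z3

def dynamics(x1, x2, x3, u1, u2):
    """Right-hand side of (15)-(17) with Q10 = Q30 = 0."""
    Q13 = az1 * math.sqrt(2 * g * (x1 - x3))
    Q32 = az3 * math.sqrt(2 * g * (x3 - x2))
    Q20 = az2 * math.sqrt(2 * g * x2)
    return -Q13 + u1, -Q20 + Q32 + u2, Q13 - Q32

def flat_parameterization(z1, z2, dz1, dz2, dy2z):
    """Recover (y2^z, u1^z, u2^z) from the flat output z = (x1, x3)."""
    y2z = z2 - ((az1 * math.sqrt(2 * g * (z1 - z2)) - dz2) / az3) ** 2 / (2 * g)
    u1z = dz1 + az1 * math.sqrt(2 * g * (z1 - z2))
    u2z = dy2z - az3 * math.sqrt(2 * g * (z2 - y2z)) + az2 * math.sqrt(2 * g * y2z)
    return y2z, u1z, u2z

# One fault-free operating point with x1 > x3 > x2
x1, x2, x3, u1, u2 = 0.50, 0.10, 0.30, 1e-4, 2e-4
dx1, dx2, dx3 = dynamics(x1, x2, x3, u1, u2)
# In the fault-free case z = (x1, x3), so (dz1, dz2) = (dx1, dx3) and dy2^z = dx2
y2z, u1z, u2z = flat_parameterization(x1, x3, dx1, dx3, dx2)
```

At this point the residues $R_{S_2} = x_2 - y_2^z$, $R_{A_1} = u_1 - u_1^z$ and $R_{A_2} = u_2 - u_2^z$ all vanish, as expected in the no-fault case.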
According to (7), the vector of residues, associated to $z^s$, is then given by:
$$r = \begin{pmatrix} R_{S_1} \\ R_{S_2} \\ R_{S_3} \\ R_{A_1} \\ R_{A_2} \end{pmatrix} = \begin{pmatrix} y_1^s \\ y_2^s \\ y_3^s \\ u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} y_1^z \\ y_2^z \\ y_3^z \\ u_1^z \\ u_2^z \end{pmatrix}. \quad (22)$$
However, residues $R_{S_1}$ and $R_{S_3}$ are identically zero:
$$\begin{aligned}
R_{S_1} &= y_1^s - y_1^z = z_1^s - z_1^s = 0 \\
R_{S_3} &= y_3^s - y_3^z = z_2^s - z_2^s = 0
\end{aligned} \quad (23)$$
hence, according to (10), the vector $r$ is truncated to:
$$r_\tau = (R_{S_2}, R_{A_1}, R_{A_2})^T = (r_{\tau_1}, r_{\tau_2}, r_{\tau_3})^T. \quad (24)$$
Therefore, the signature matrix $\mathbf{S}$, associated to $z^s$, is constructed as follows:
- All the residues in (24) depend on the measurement of $z^s = (y_1^s, y_3^s)^T$ then the first and the third columns of the signature matrix contain only ones:
$$\sigma_{i,1} = \sigma_{i,3} = 1, \forall i = 1, 2, 3$$
- Only residue $r_{\tau_1}$ depends on $y_2^s$ and its successive derivatives, then the second column will be such that:
$$\sigma_{1,2} = 1 \text{ and } \sigma_{i,2} = 0, i = 2, 3$$
- Since $r_{\tau_2}$ depends only on $u_1$ and $r_{\tau_3}$ depends only on $u_2$, then column 4 and column 5 of $\mathbf{S}$ are such that:
|
| 262 |
+
|
| 263 |
+
$$\sigma_{2,4} = 1 \text{ and } \sigma_{i,4} = 0 \forall i = 1, \dots, 3, i \neq 2$$
|
| 264 |
+
|
| 265 |
+
and
|
| 266 |
+
|
| 267 |
+
$\sigma_{3,5} = 1$ and $\sigma_{i,5} = 0 \forall i = 1, \dots, 3, i \neq 3$
|
| 268 |
+
|
| 269 |
+
respectively.
|
| 270 |
+
|
| 271 |
+
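The column-filling rules above can be mechanized. The sketch below is hypothetical bookkeeping (the residue names and dependency sets are ours, read off the expressions of $r_\tau$): each row of the signature matrix is built by marking which measured signals the residue depends on.

```python
import numpy as np

# Hypothetical dependency sets for the truncated residues: for each
# residue of r_tau, the measured signals (S1, S2, S3, A1, A2) it uses.
deps = {
    "r_tau1 (R_S2)": {"S1", "S2", "S3"},
    "r_tau2 (R_A1)": {"S1", "S3", "A1"},
    "r_tau3 (R_A2)": {"S1", "S3", "A2"},
}
signals = ["S1", "S2", "S3", "A1", "A2"]

# Row i, column j is 1 iff residue i depends on signal j.
S = np.array([[1 if s in d else 0 for s in signals] for d in deps.values()])
print(S)
```

The resulting rows reproduce the three rows of the signature matrix derived above.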
Hence, the signature matrix associated to $r_\tau$ is given by:

$$\mathbf{S} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \end{pmatrix}. \quad (25)$$

According to definition 4, all faults on the three tank system's sensors and actuators are detectable. Since fault alarm signatures $\Sigma_2$, $\Sigma_4$ and $\Sigma_5$ are distinct, then, according to definition 5, faults on sensor $\mathbf{S}_2$ and actuators $\mathbf{A}_1$ and $\mathbf{A}_2$ are isolable. This reflects the fact that if, at some point during system operation, a fault alarm is raised with the signature $\Sigma_2$, we conclude that sensor $\mathbf{S}_2$ is faulty. However, if we obtain a signature like $\Sigma_1$, the fault could be on sensor $\mathbf{S}_1$ or on $\mathbf{S}_3$, since signatures $\Sigma_1$ and $\Sigma_3$ are identical; a fault on $\mathbf{S}_1$ or $\mathbf{S}_3$ therefore cannot be isolated. To conclude, this example shows that the isolability property is strongly conditioned by the dependence of the flat output on the measured variables. This motivates the study of the choice of flat outputs in the next section.

**Remark 2.** In Nagy et al. (2009), it has been shown that system (15)-(16)-(17) is observable through $x_1$ only and that $x_2$ and $x_3$ can be estimated from $x_1$ given the measurements of $u_1$ and $u_2$, leading to different isolability results. The reader may refer to that article for more details. Note that, here, the measurements of $u_1$ and $u_2$ are not necessary to guarantee the $x_2$-isolability.

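The isolability check of definition 5 then amounts to testing which columns $\Sigma_k$ of $\mathbf{S}$ are unique. A minimal sketch, assuming a fault is isolable when its signature column differs from every other column (consistent with the analysis above):

```python
import numpy as np

S = np.array([[1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]])  # signature matrix (25)

faults = ["S1", "S2", "S3", "A1", "A2"]
cols = [tuple(c) for c in S.T]  # column k is the signature Sigma_k

# A fault is isolable when its signature column is unique.
isolable = [f for f, c in zip(faults, cols) if cols.count(c) == 1]
print(isolable)  # ['S2', 'A1', 'A2']
```

The identical columns 1 and 3 are exactly the non-isolable $\mathbf{S}_1$/$\mathbf{S}_3$ pair discussed above.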
# 3. FLAT OUTPUT SELECTION

In order to gain more isolability of faults on the system's sensors and actuators, the authors in Martínez-Torres et al. (2014) propose to increase the number of residues by using several flat outputs. These flat outputs must be *independent* in the sense that using them together isolates more faults. In this section, we propose a characterization of the relation between different flat outputs using a so-called *augmented signature matrix*. This characterization leads to a decision concerning the choice of flat outputs that are useful for isolability.

According to definition 5, the number $\mu$ of faults isolated by a flat output $z$ is equal to the number of distinct signatures $\Sigma_k$ of its signature matrix. Hence, in order to isolate more faults, we need to increase the number of distinct signatures. This is possible when different projections of the system's output $y$ that are flat outputs are available. For this purpose, we introduce definitions 6 and 7.

In the following, we denote the $i^{th}$ element of a set of $q$ flat output vectors by $Z_i = (z_{i1}, \dots, z_{im})^T$.

*Definition 6.* (Augmented signature matrix). Let $Z_1, \dots, Z_q$ be $q$ different flat output vectors of the flat system (1), such that each $Z_i = \text{pr}_{\mathbb{R}^m}(y)$ is a projection of the output $y$. The *augmented signature matrix* $\tilde{\mathbf{S}}$ associated to $Z_1, \dots, Z_q$ is defined by:

$$\tilde{\mathbf{S}} = \begin{pmatrix} \mathbf{S}_1 \\ \mathbf{S}_2 \\ \vdots \\ \mathbf{S}_q \end{pmatrix} \quad (26)$$

where $\mathbf{S}_i$ is the signature matrix associated to the flat output vector $Z_i$.

The choice of flat output vectors is not arbitrary: they must be independent in the sense of the following definition:

*Definition 7.* (Independence). Let $\tilde{\mathbf{S}}$ be the augmented signature matrix associated to $Z_1$ and $Z_2$:

$$ \tilde{\mathbf{S}} = \begin{pmatrix} \mathbf{S}_1 \\ \mathbf{S}_2 \end{pmatrix}, $$

let $\mu_i$, $i = 1, 2$, be the number of distinct signatures of the matrix $\mathbf{S}_i$, and let $\tilde{\mu}$ be the number of distinct signatures of the augmented matrix $\tilde{\mathbf{S}}$. We say that $Z_1$ and $Z_2$ are *independent* if, and only if,

$$ \tilde{\mu} > \mu_1 \quad \text{and} \quad \tilde{\mu} > \mu_2. \tag{27} $$

Definition 7 means that two flat outputs are independent if using them together increases the number of distinct signatures, and hence the number of isolable faults. If condition (27) is not satisfied, the combination of $Z_1$ and $Z_2$ does not help isolability, and another combination must be found by computing more flat outputs. Finally, the condition of full isolability is given by the following proposition:

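Definition 7 can be checked programmatically. A minimal sketch, assuming signatures are the columns of the signature matrices and "distinct" means unique among all columns:

```python
import numpy as np

def num_distinct(S):
    """mu: number of signatures (columns of S) distinct from all others."""
    cols = [tuple(c) for c in np.asarray(S).T]
    return sum(cols.count(c) == 1 for c in cols)

def are_independent(S1, S2):
    """Definition 7: Z1 and Z2 are independent iff stacking their
    signature matrices strictly increases the distinct-signature count."""
    mu_tilde = num_distinct(np.vstack([S1, S2]))
    return mu_tilde > num_distinct(S1) and mu_tilde > num_distinct(S2)
```

Stacking a signature matrix with itself duplicates every column pattern, so `are_independent(S1, S1)` is always `False`, as expected.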
*Proposition 2.* Let $Z_1, \dots, Z_q$ be $q$ different flat output vectors of the system (1). Full isolability of faults on sensors and actuators is achieved if the augmented matrix

$$ \tilde{\mathbf{S}} = \begin{pmatrix} \mathbf{S}_1 \\ \mathbf{S}_2 \\ \vdots \\ \mathbf{S}_q \end{pmatrix} $$

has $p+m$ distinct signatures, i.e. $\tilde{\mu} = p+m$.

# 4. APPLICATION TO THE THREE TANK SYSTEM

Back to the three tank system presented in section 2.3, we denote by $Z_1$ the flat output vector $Z_1 = (z_{11}, z_{12})^T = (x_1, x_3)^T$. The corresponding vector of residues is given by (24). We recall the signature matrix associated to $Z_1$ and denote it by $\mathbf{S}_1$:

$$ \mathbf{S}_1 = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \end{pmatrix} \tag{28} $$

We also recall that, according to definition 5, faults on sensors $\mathbf{S}_1$ and $\mathbf{S}_3$ cannot be isolated. The number of distinct signatures of $\mathbf{S}_1$ is $\mu_1 = 3$.

In order to increase the number of isolable faults, we consider another flat output vector of the three tank system, $Z_2 = (z_{21}, z_{22})^T = (x_2, x_3)^T$. It is measured by sensors $\mathbf{S}_2$ and $\mathbf{S}_3$, i.e. $Z_2^s = (z_{21}^s, z_{22}^s)^T = (y_2^s, y_3^s)^T$. To construct the vector of residues associated to $Z_2^s$ and its signature matrix, we set, using (4) and (5):

$$
\begin{align*}
y_1^{Z_2} &= z_{22}^s + \frac{1}{2g} \left( \frac{a_{z3} \sqrt{2g(z_{22}^s - z_{21}^s)} + \dot{z}_{22}^s}{a_{z1}} \right)^2 \\
y_2^{Z_2} &= z_{21}^s \\
y_3^{Z_2} &= z_{22}^s \\
u_1^{Z_2} &= \dot{y}_1^{Z_2} + a_{z1} \sqrt{2g(y_1^{Z_2} - z_{22}^s)} \\
u_2^{Z_2} &= \dot{y}_{2}^{Z_2} - a_{z3} \sqrt{2g(z_{22}^s - y_{2}^{Z_2})} + a_{z2} \sqrt{2g y_{2}^{Z_2}}.
\end{align*}
$$

Therefore, as shown for the flat output $Z_1$, residues $R_{S_2}^{Z_2}$ and $R_{S_3}^{Z_2}$ are identically zero and the truncated vector of residues (10) reads:

$$ r_{\tau}^{Z_2} = \begin{pmatrix} R_{S_1}^{Z_2} \\ R_{A_1}^{Z_2} \\ R_{A_2}^{Z_2} \end{pmatrix} = \begin{pmatrix} y_1^s \\ u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} y_1^{Z_2} \\ u_1^{Z_2} \\ u_2^{Z_2} \end{pmatrix}. \tag{29} $$

Hence, the signature matrix associated to $Z_2$ is given by:

$$ \mathbf{S}_2 = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \end{pmatrix}. \tag{30} $$

Signatures $\Sigma_1$, $\Sigma_4$ and $\Sigma_5$ in the matrix $\mathbf{S}_2$ are distinct, so, according to definition 5, faults on sensor $\mathbf{S}_1$ and actuators $\mathbf{A}_1$ and $\mathbf{A}_2$ are isolable by the flat output $Z_2$, and the number of distinct signatures of $\mathbf{S}_2$ is $\mu_2 = 3$. However, since signatures $\Sigma_2$ and $\Sigma_3$ are identical, faults on sensors $\mathbf{S}_2$ and $\mathbf{S}_3$ cannot be isolated.

It remains to verify whether the two flat outputs $Z_1$ and $Z_2$ are independent. The augmented signature matrix associated to $Z_1$ and $Z_2$ is given by:

$$ \tilde{\mathbf{S}} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \end{pmatrix}. \tag{31} $$

The number of distinct fault alarm signatures of $\tilde{\mathbf{S}}$ is $\tilde{\mu} = 5$, and we have

$$ \tilde{\mu} > \mu_1 \quad \text{and} \quad \tilde{\mu} > \mu_2. $$

Then, according to definition 7, the flat output vectors $Z_1$ and $Z_2$ are independent. Moreover, since $\tilde{\mu} = p+m$, the flat output vectors $Z_1$ and $Z_2$ ensure full isolability of faults on the three tank system.

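As a cross-check of these counts, a small numerical sketch, assuming column $k$ of each matrix is the signature $\Sigma_k$ and a signature counts as distinct when no other column equals it:

```python
import numpy as np

def num_distinct(S):
    """Number of fault signatures (columns) distinct from all others."""
    cols = [tuple(c) for c in np.asarray(S).T]
    return sum(cols.count(c) == 1 for c in cols)

S1 = np.array([[1, 1, 1, 0, 0],
               [1, 0, 1, 1, 0],
               [1, 0, 1, 0, 1]])   # eq. (28)
S2 = np.array([[1, 1, 1, 0, 0],
               [0, 1, 1, 1, 0],
               [0, 1, 1, 0, 1]])   # eq. (30)
S_aug = np.vstack([S1, S2])        # eq. (31)

print(num_distinct(S1), num_distinct(S2), num_distinct(S_aug))  # 3 3 5
```

With $p = 3$ sensors and $m = 2$ actuators, $\tilde{\mu} = 5 = p + m$, matching the full-isolability condition of proposition 2.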
Simulation results that confirm the effectiveness of this approach can be found in Martínez-Torres et al. (2013).

# 5. CONCLUSION

This paper introduces a novel and rigorous definition of the isolability of faults affecting a system's sensors and actuators, using the flatness-based FDI approach. The described isolability condition provides an efficient way to select flat outputs that are useful for fault isolation. Our results are tested and validated on the three tank system. Future work should focus on the development of a method that computes independent flat outputs directly.

# REFERENCES

Chen, J., Li, H., Sheng, D., and Li, W. (2015). A hybrid data-driven modeling method on sensor condition monitoring and fault diagnosis for power plants. *International Journal of Electrical Power & Energy Systems*, 71, 274–284.

Diversi, R., Simani, S., and Soverini, U. (2002). Robust residual generation for dynamic processes using decoupling technique. In *Proceedings of the International Conference on Control Applications*, volume 2, 1270–1275. IEEE.

Fliess, M., Lévine, J., Martin, P., and Rouchon, P. (1999). A Lie-Bäcklund approach to equivalence and flatness of nonlinear systems. *IEEE Transactions on Automatic Control*, 44(5), 922–937.

Izadian, A. and Khayyer, P. (2010). Application of Kalman filters in model-based fault diagnosis of a DC-DC boost converter. In *IECON 2010 - 36th Annual Conference of the IEEE Industrial Electronics Society*, 369–372. IEEE.

Kaminski, Y.J., Lévine, J., and Ollivier, F. (2018). Intrinsic and apparent singularities in differentially flat systems, and application to global motion planning. *Systems & Control Letters*, 113, 117–124.

Kościelny, J.M., Syfert, M., Rostek, K., and Sztyber, A. (2016). Fault isolability with different forms of the faults-symptoms relation. *International Journal of Applied Mathematics and Computer Science*, 26(4), 815–826.

Lévine, J. (2009). *Analysis and Control of Nonlinear Systems: A Flatness-Based Approach*. Springer Science & Business Media.

Martínez-Torres, C., Lavigne, L., Cazaurang, F., Alcorta-García, E., and Díaz-Romero, D.A. (2013). Fault detection and isolation on a three tank system using differential flatness. In *2013 European Control Conference (ECC)*, 2433–2438. IEEE.

Martínez-Torres, C., Lavigne, L., Cazaurang, F., Alcorta-García, E., and Díaz-Romero, D.A. (2014). Flatness-based fault tolerant control. *Dyna*, 81(188), 131–138.

Nagy, A.M., Marx, B., Mourot, G., Schutz, G., and Ragot, J. (2009). State estimation of the three-tank system using a multiple model. In *Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference*, 7795–7800. IEEE.

Noura, H., Theilliol, D., Ponsart, J.C., and Chamseddine, A. (2009). *Fault-Tolerant Control Systems: Design and Practical Applications*. Springer Science & Business Media.

Staroswiecki, M. and Comtet-Varga, G. (2001). Analytical redundancy relations for fault detection and isolation in algebraic dynamic systems. *Automatica*, 37(5), 687–699.

Suryawan, F., De Doná, J., and Seron, M. (2010). Fault detection, isolation, and recovery using spline tools and differential flatness with application to a magnetic levitation system. In *2010 Conference on Control and Fault-Tolerant Systems (SysTol)*, 293–298. IEEE.

Thirumarimurugan, M., Bagyalakshmi, N., and Paarkavi, P. (2016). Comparison of fault detection and isolation methods: A review. In *2016 10th International Conference on Intelligent Systems and Control (ISCO)*, 1–6. IEEE.

Tousi, M. and Khorasani, K. (2011). Robust observer-based fault diagnosis for an unmanned aerial vehicle. In *2011 IEEE International Systems Conference*, 428–434. IEEE.

Zhou, Y., Xu, G., and Zhang, Q. (2014). Overview of fault detection and identification for non-linear dynamic systems. In *2014 IEEE International Conference on Information and Automation (ICIA)*, 1040–1045. IEEE.
|